\section{Introduction}
\input{Sections_gaming/1_Introduction/introduction}
\input{Sections_gaming/2_Background/background}
\input{Sections_gaming/3_Database/database}
\input{Sections_gaming/4_Study/study}
\input{Sections_gaming/5_data_process/data_process}
\input{Sections_gaming/6_performance/performance}
\section{Conclusion}
\label{conclusion}
UGC gaming videos are receiving significant consumer interest.
We have presented a new gaming video quality assessment database, called LIVE-YT-Gaming, containing 600 videos of unique user-generated gaming content, labeled by 18,600 subjective ratings.
We designed and conducted a new online study to collect subjective data from a smaller group of reliable subjects.
As compared against laboratory and crowdsourced studies, our online study has several advantages.
We also tested several popular general-purpose and gaming-specific video quality assessment models on the new database, and compared their performance with that of a new VQA model, called GAME-VQP, designed for gaming videos.
We show that while some existing VQA models performed well, GAME-VQP significantly outperformed the others.
We believe this new subjective data resource will help other researchers advance their work on the VQA problem for gaming videos.
\section*{Acknowledgment}
The human study was approved by the Institutional Review Board of UT-Austin.
This research was supported by a gift from YouTube, and by National Science Foundation grant number 2019844 for the AI Institute for Foundations of Machine Learning (IFML).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{introduction_game}
Over the past few years, Internet traffic has continued to grow dramatically.
According to \cite{ciscociscovisualnetworkingindex}, video streaming now occupies the majority of Internet bandwidth and is the main driver of this growth.
Streaming service providers like Netflix, Hulu, and Amazon Prime Video generate and stream a substantial portion of this visual traffic in the form of Professionally-Generated-Content (PGC) videos.
At the same time, online video sharing platforms like YouTube, Vimeo, and Twitch collect and make available vast numbers of User-Generated-Content (UGC) videos, generally uploaded by less skilled videographers.
Driven by the rapid development of the digital game industry and by consumer desire to see skilled gamers in action, a large number of online videos of gameplay content have been uploaded and made public on the Internet.
In 2020, YouTube Gaming reached a milestone of 100 billion hours of watch time and 40 million active gaming channels \cite{youtubegamingbiggestyear}.
The rapid expansion of the gaming market has also given birth to many streaming-related services, such as game live broadcasts, online games, and cloud games.
At the same time, increasingly powerful computing, communication, and display hardware and software technologies have continuously uplifted the quality of these services.
In order to provide users with better quality streaming gaming video services and to improve users’ viewing experiences, perceptual Video Quality Assessment (VQA) research has become particularly important.
Both general-purpose algorithms able to handle diverse distortion scenarios, as well as VQA models specifically designed for gaming videos need to be studied and developed.
VQA research efforts can generally be divided into two categories: conducting subjective quality assessment experiments, and developing objective quality assessment models and algorithms.
If it were feasible, subjective quality assessment would best meet the needs of streaming providers and users, but its cost is prohibitive, and it does not allow for real-time quality-based decisions or processing steps, such as perceptually optimized compression.
Therefore, automatic perception-based objective video quality prediction algorithms are of great interest.
Objective VQA models can be classified into three categories, based on whether there is a reference video available.
The first, Full-Reference (FR) models, require the availability of an entire source video as a reference against which to measure visual differences, i.e. video fidelity.
Reduced-Reference (RR) models only require partial reference information, often only a small amount.
No-Reference (NR), or ``blind'' models predict the quality of a test video without using any reference information.
It is generally believed that if a PGC video is available as a reference when predicting the quality of a test video, that is, an original, ``pristine'' video of high quality shot by experts with professional equipment, then FR or RR VQA algorithms are able to achieve superior prediction results as compared to NR models.
However, there is often a lack of accessible reference videos, e.g., for UGC videos captured by naive users having limited capture skills or technology.
These can only be evaluated using NR VQA models.
Since these kinds of videos are quite pervasive on social media and sharing platforms, the NR VQA problem is of high practical importance.
Since they also typically encompass much wider ranges of quality and of distortion types which are often simultaneously present and commingled, the NR VQA problem is much more challenging.
Unlike real-world videos taken by videographers with optical cameras, gaming videos are generally synthetically generated.
Because of underlying statistical differences between the structures of synthetic and naturally captured videos \cite{barman2018comparative, barman2018objective}, popular NR VQA models may not achieve adequate quality prediction performance on them.
Research on the assessment of gaming video quality mostly commenced very recently.
The authors of \cite{barman2018comparative} studied and analyzed differences between gaming and non-gaming content videos, in terms of compression and user Quality of Experience (QoE).
Some VQA models designed for gaming videos have been proposed, e.g., \cite{zadtootaghaj2020demi,utke2020ndnetgaming}.
However, this work has been limited to the study of the characteristics of PGC gaming videos and how they are affected by compression.
None of these have discussed or analyzed the quality assessment of UGC gaming videos recorded by amateur users.
We briefly elaborate on the characteristics of PGC and UGC gaming videos in Section \ref{pgc_ugc_gaming_video}.
Advancements of research on video quality have always relied on freely available databases of distorted videos and subjective judgments of them.
Over the past two decades, many VQA databases have been created and shared with researchers, which has significantly driven the development of VQA models.
Databases created in the early stages of VQA research usually contained only a small number of PGC reference videos and a small number of synthetically distorted versions of them.
In recent years, as UGC VQA research has gained more attention, more advanced and specialized databases have been created, containing hundreds or thousands of unique UGC videos with numerous, complex authentic distortions.
Recently, databases dedicated to the development of gaming video quality assessment research have also been proposed.
However, these databases only contain a limited number of PGC gaming videos as references, along with compressed versions of them.
Therefore, towards advancing progress on understanding streamed UGC gaming video quality, we have created a new UGC gaming video resource, which we call the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database.
This new dataset contains subjective rating scores on a large number of unique gaming videos from a study that was conducted online.
We summarize the contributions we make as follows:
\begin{itemize}
\item We constructed a novel subjective UGC gaming video database, which we call the LIVE-YouTube Gaming video quality database (LIVE-YT-Gaming). The new database contains 600 UGC gaming videos of unique content, from 59 different games. This is the largest video quality database addressing UGC gaming, and it includes the most exemplar games. Unlike the few existing gaming video quality databases, where PGC videos were professionally recorded, the LIVE-YT-Gaming videos were collected from online uploads by casual users (UGC videos) and are generally afflicted by highly diverse, mixed distortions.
\item Using this new resource, we conducted a novel online human study whereby we collected a large number of subjective quality labels on gaming videos. To demonstrate the utility of the new database, we compared and contrasted the performance of a wide range of state-of-the-art (SOTA) VQA models, including one of our own design that attains top performance.
\end{itemize}
The rest of the paper is organized as follows: we discuss related work in Section \ref{related_work}.
We present the new distorted gaming video database in Section \ref{database}.
We provide details on the subjective study in Section \ref{study}, and an analysis of the collected subjective data in Section \ref{data_process}.
We compare the performances of several current VQA models on the new database in Section \ref{performance}, and introduce a new model, called GAME-VQP.
Finally, we summarize the paper and discuss possible future work in Section \ref{conclusion}.
\section{Related Work}
\label{related_work}
\subsection{Subjective Video Quality Database}
\label{related_work_database}
Existing VQA databases differ by the types of video contents and distortions that are included, as well as by the volume and methods of procuring subjective data on them.
Two different methods are commonly used to collect subjective ratings on videos in the databases: laboratory studies and online crowdsourced studies.
\subsubsection{Laboratory VQA Studies}
Most of the early VQA databases collected subjective data in the laboratory.
A small set of unique reference videos (usually 10 to 20), along with distorted versions of them, was presented to a relatively small ($<$100) group of volunteer subjects.
We will refer to these synthetically-distorted datasets as \textit{legacy} VQA databases.
Some representative legacy databases are LIVE VQA \cite{seshadrinathan2010study}, LIVE Mobile \cite{moorthy2012video}, CDVL \cite{pinson2013consumer}, MCL-V \cite{lin2015mcl}, MCL-JCV \cite{wang2016mcl}, and VideoSet \cite{wang2017videoset}.
Laboratory studies have advantages: the human data that is obtained is generally more reliable, and the scientist has significant control over the experimental environment.
However, it is difficult to recruit a large number of subjects, hence much less data can be collected.
Because of this, legacy databases are data-poor, although they remain valuable test beds because of the high reliability of their data.
\subsubsection{Crowdsourced VQA Studies}
In recent years, crowdsourcing as a tool for conducting subjective studies has been used to create a variety of large-scale VQA databases containing UGC videos.
Crowdsourcing is usually conducted on online platforms like Amazon Mechanical Turk (MTurk) and CrowdFlower, whereby it is possible to collect a large amount of subjective data from participants around the world.
Crowdsourced databases usually contain hundreds to thousands of UGC videos selected and sampled from online video sources, on which large amounts of subjective data can be collected by crowdsourcing.
Some representative crowdsourced video quality databases are CVD2014 \cite{nuutinen2016cvd2014}, LIVE-In-Capture \cite{ghadiyaram2017capture}, KoNViD-1k \cite{hosu2017konstanz}, YFCC100M \cite{thomee2016yfcc100m}, LIVE-VQC \cite{sinno2018large}, and YouTube-UGC \cite{wang2019youtube}.
Crowdsourcing makes it possible to collect a large volume of subjective data from thousands of workers.
However, collected data is less reliable than quality scores collected in the laboratory, and disingenuous subjects must be addressed.
Further, the researcher must ensure that the subjects have adequate displays and network bandwidths to properly participate.
Since a very large number of paid workers are required, online studies can be very expensive.
\subsubsection{Gaming Video Quality Assessment Databases}
\label{gaming_database_compare}
As far as we are aware, there are four video quality databases that have been designed for gaming video quality research purposes: GamingVideoSET \cite{barman2018gamingvideoset}, KUGVD \cite{barman2019no}, CGVDS \cite{zadtootaghaj2020quality}, and TGV \cite{wen2021subjective}.
The GamingVideoSET database contains 24 gaming contents recorded from 12 different games, along with 576 compressed versions of these videos using H.264 compression.
However, only 90 of these have associated subjective data.
Three different resolutions are included: 480p, 720p and 1080p, all with frame rates of 30 frames/second (fps).
The KUGVD database is similar to GamingVideoSET, but only has 6 gaming contents and 144 compressed versions of them, 90 of which have subjective data available.
Both GamingVideoSET and KUGVD databases are therefore limited by having small amounts of subjective data, which hinders model development.
All of the reference videos were recorded under professional conditions without visible distortion, unlike UGC videos.
Moreover, the distorted videos were all compressed using the same H.264 standard, making quality prediction less difficult.
The CGVDS database was specifically developed for analyzing and modeling the quality of gaming videos compressed using hardware accelerated implementations of H.264/MPEG-AVC.
It has 360 distorted videos with subjective quality ratings, compressed from 15 PGC reference videos, at framerates of 20, 30, and 60 fps.
Finally, the TGV database is a mobile gaming video database containing 1293 gaming video sequences compressed from 150 source videos.
Unlike the aforementioned databases, which only contain computer or console games, the videos in the TGV database were all recorded from 17 different mobile games.
All four of the above-described gaming video databases contain only PGC videos that have been impaired by a single distortion type (compression).
Prior to our effort here, there has been no database designed for conducting VQA research on real UGC gaming videos created by casual users.
A comparison of the four existing gaming video quality databases is given in Table \ref{database_comparison}.
There is one more recently published database, called GamingHDRVideoSET \cite{barman2021user}, which includes HDR gaming videos; however, it does not include any subjective data, so we excluded it from our evaluations.
\begin{table*}
\resizebox{\textwidth}{!}{%
\begin{threeparttable}
\caption{Evaluation of Four Existing Gaming Video Quality Databases: GamingVideoSET, KUGVD, CGVDS, and TGV}
\label{database_comparison}
\begin{tabular}{cccccccccccccccc}
\toprule
Database & Year & Content No & Video No & Game No & Subjective Data & Public & Resolution & FPS & Duration & Format & Distortion Type & Subject No & Rating No & Data & Study Type \\ \midrule
GamingVideoSET & 2018 & 24 & 600 & 12 & 90 & Yes & 480p, 720p, 1080p & 30 & 30 sec & mp4, yuv & H.264 compression & 25 & 25 & MOS & In-lab study \\
KUGVD & 2019 & 6 & 150 & 6 & 90 & Yes & 480p, 720p, 1080p & 30 & 30 sec & mp4, yuv & H.264 compression & 17 & 17 & MOS & In-lab study \\
CGVDS & 2020 & 15 & 255 & 15 & 360 + anchor stimuli & Yes & 480p, 720p, 1080p & 20, 30, 60 & 30 sec & mp4, yuv & H.264 compression & over 100 & Unavailable & MOS & In-lab study \\
TGV & 2021 & 150 & 1293 & 17 & Unavailable & No & 480p, 720p, 1080p & 30 & 5 sec & Unavailable & H264, H265, Tencent codec & 19 & Unavailable & Unavailable & In-lab study \\ \bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item Content No: Total number of unique contents. \qquad Video No: Total number of videos. \qquad Game No: Total number of source games. \qquad Subjective Data: Total number of videos with subjective ratings available. \qquad \\ FPS: Frames per second. \qquad Subject No: Total number of participating subjects. \qquad Rating No: Average number of ratings per video.
\end{tablenotes}
\end{threeparttable}
}
\end{table*}
\subsection{Objective Quality Assessment Model}
The following is a brief introduction to the development of modern NR VQA models, including both general-purpose models and gaming video quality models.
\subsubsection{General NR VQA Models}
While the earliest NR VQA models were developed to address specific types of distortion, the focus of VQA research shifted toward the development of more general-purpose NR algorithms, which extract various ``quality-aware'' hand-crafted features that are fed to simple regressors, such as Support Vector Regressors (SVRs), to learn mappings to human subjective quality labels.
The most successful features derive from models of natural scene statistics (NSS) \cite{ruderman1994statistics} and natural video statistics (NVS) \cite{soundararajan2012video}, under which high-quality optical images, when subjected to perceptually relevant bandpass processes, obey certain statistical regularities that are predictably altered by distortions.
Noteworthy examples include NIQE \cite{mittal2013making}, BRISQUE \cite{mittal2012no}, V-BLIINDS \cite{saad2014blind}, HIGRADE \cite{kundu2017no}, GM-LOG \cite{xue2014blind}, DESIQUE \cite{zhang2013no}, and FRIQUEE \cite{ghadiyaram2017perceptual}.
More recent models that employ efficiently optimized NSS/NVS features, and/or combined with deep features, include VIDEVAL \cite{tu2021ugc}, which leverages a hand-optimized selection of statistical features, and RAPIQUE \cite{tu2021rapique}, which combines a large number of easily-computed statistics with semantic deep features in an extremely efficient manner, yielding both excellent quality prediction accuracy and rapid computation.
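As a concrete illustration of the bandpass-normalization idea underlying these NSS features, the following is a minimal sketch of BRISQUE-style mean-subtracted, contrast-normalized (MSCN) coefficient computation. The Gaussian window width and stabilizing constant here are illustrative choices, not the exact parameters of any cited model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(frame, sigma=7/6, c=1.0):
    """Mean-Subtracted Contrast-Normalized (MSCN) coefficients.

    Under NSS models, MSCN coefficients of pristine frames follow a
    roughly unit-normal distribution; distortions predictably alter
    its shape, which is what the learned features capture.
    """
    frame = frame.astype(np.float64)
    mu = gaussian_filter(frame, sigma)  # local mean field
    # local standard deviation field (abs guards against tiny
    # negative values from floating-point error)
    sigma_map = np.sqrt(np.abs(gaussian_filter(frame ** 2, sigma) - mu ** 2))
    return (frame - mu) / (sigma_map + c)
```

In BRISQUE-like models, a generalized Gaussian distribution is then fit to the histogram of these coefficients (and of products of neighboring coefficients), and the fitted shape parameters serve as the quality-aware features fed to the regressor.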
Unlike NSS-based models, data-driven methods like CORNIA \cite{ye2012unsupervised} begin by constructing a codebook via unsupervised learning techniques, instead of using a fixed set of features; frame-level predictions are then combined by temporal hysteresis pooling.
A recently published NR VQA model called TLVQM \cite{korhonen2019two} makes use of a two-level feature extraction mechanism to efficiently compute a set of distortion-relevant features that measure motion, specific distortion artifacts, and aesthetics.
Given the emergence of large-scale video quality databases in recent years, the availability of sufficient quantities of training samples has made it possible to train high-performance deep learning models.
VSFA \cite{li2019quality} makes use of a pre-trained Convolutional Neural Network (CNN) as a deep feature extractor, then integrates the frame-wise features using a gated recurrent unit and a temporal pooling layer.
It is one of several SOTA deep learning VQA models that attain superior performance on publicly available UGC video quality databases.
PaQ-2-PiQ \cite{ying2019patches} is a recent frame-based local-to-global deep learning based VQA architecture, while PVQ \cite{ying2020patch} extends the idea of using local quality predictions to improve global space-time VQA performance.
Other deep models include V-MEON \cite{liu2018end}, NIMA \cite{talebi2018nima}, PQR \cite{zeng2018blind}, and DLIQA \cite{hou2015blind}, which deliver high performance on legacy and UGC video quality databases.
\subsubsection{Gaming VQA Models}
\label{gaming_VQA_intro}
Several NR VQA models have been proposed for gaming.
In \cite{zadtootaghaj2018nr}, the authors proposed a blind model called NR-GVQM, which trains an SVR model to evaluate the quality of gaming content by extracting nine frame-level features, and by using VMAF scores as proxy ground truth labels.
The Nofu model proposed in \cite{goring2019nofu} is a learning-based VQA model, which applies center cropping to rapidly compute 12 frame-based features, followed by model training and temporal pooling.
The authors of \cite{barman2019no} proposed two models.
Their NR-GVSQI model also uses VMAF scores as training targets, while their NR-GVSQE model uses MOS as the training target.
They both make use of basic distortion features that measure blockiness, blurriness, contrast, exposure etc., as well as quality scores generated by the NSS-based IQA models BIQI, BRISQUE, and NIQE.
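The proxy-label regression workflow shared by these models can be sketched as follows. The feature matrix, label values, and hyperparameters below are placeholders for illustration only, not those of the published NR-GVQM or NR-GVSQI models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical data: rows are videos, columns are per-video features
# (e.g., blockiness, blur, contrast, NIQE score) pooled over frames.
rng = np.random.default_rng(0)
features = rng.random((200, 9))          # placeholder feature matrix
vmaf_scores = rng.random(200) * 100.0    # proxy ground-truth labels

# Regress the features onto VMAF scores instead of human MOS,
# sidestepping the need for a large subjective study.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features, vmaf_scores)
predicted = model.predict(features[:5])  # per-video quality estimates
```

The appeal of this design is that VMAF scores can be generated automatically at scale; the cost is that the learned model inherits any biases of the proxy metric, which is why NR-GVSQE instead trains directly on MOS.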
The ITU-T standard G.1072 \cite{gaming2020methodology} describes a gaming video QoE model based on three quality dimensions: spatial video quality, temporal video quality, and input quality (interaction quality).
Two recently released models based on deep learning, called NDNetGaming \cite{utke2020ndnetgaming} and DEMI \cite{zadtootaghaj2020demi}, were both developed on a DenseNet framework.
NDNetGaming is based on a CNN structure trained on VMAF scores as proxy ground truth, then fine-tuned using MOS.
The quality prediction phase is then accomplished via temporal pooling.
DEMI uses a CNN architecture similar to NDNetGaming, but focuses on only two types of distortions, blurriness and blockiness.
However, each of these systems were designed and developed on databases containing only a small number of PGC reference videos impaired by a single type of manually applied compression.
None have been systematically trained and tested on UGC gaming videos.
One reason for this is that there is no UGC gaming video quality database currently available.
\subsection{Video Game and Gaming Video}
\label{introduction_videogame}
Video games generally refer to interactive games that run on electronic media platforms.
Popular mainstream games may take the form of computer games, console games, mobile games, handheld games, VR games, cloud games, and so on.
Many games are available on multiple platforms.
Games also fall into various genres, including role-playing games, adventure games, action games, first-person shooters, real-time strategy games, fighting games, board games, massive multiplayer online role-playing games, and others.
At present, most gaming video quality research has focused on streamed gaming, such as interactive cloud games and online games, as well as passive live broadcast and recorded gameplay \cite{barman2018evaluation}.
\subsection{Recording of PGC and UGC Gaming Video}
\label{pgc_ugc_gaming_video}
Next we describe ways by which gaming videos are created, leading to a discussion of PGC and UGC gaming videos.
Unlike natural videos captured by optical cameras, gaming \textit{videos}, as opposed to live gameplay, are usually obtained by recording the screen of the devices on which the games are being played.
A variety of factors affect the output quality of recorded gaming videos.
The most basic are the graphics configuration settings of the operating system, and the settings built into each game.
The display quality of games is usually affected by settings like spatial resolution, frame rate, motion blur, control of screen-space ambient occlusions, vertical synchronization, variation of texture/material, grass quality, anisotropic filtering, depth of field, reflection quality, water surface quality, shadow quality/style, type of tessellation, and use of triple buffering and anti-aliasing.
Professional players enjoy the ultimate gaming experience by optimizing these settings. However, these settings present formidably complicated choices for most players, so games also provide more intuitive and convenient options.
Players can simply set the game quality to high, medium, or low, letting the game program automatically set the appropriate parameters.
Of course, whether or not the video quality determined by the game settings can be correctly displayed depends heavily on the hardware configuration.
A sufficiently powerful GPU can provide complete real-time calculation support to ensure that the game display quality is stable at a high level, but ordinary GPUs may not meet all the quality requirements of all users.
For example, high motion scenes may present with noticeable frame drops or lagging.
Some games require very large data calculations, so if the GPU cannot process them quickly enough, delays in the display may occur.
For those games which require connection to the Internet, poor network conditions may also cause the game screen to delay or even freeze.
The quality of recorded gaming videos is also affected by the recording software used.
Professional software can accurately reproduce the original gameplay, but substantial system resources are required, increasing the burden on the gameplay device.
The software may provide options for faster and easier use, such as automatic resolution selection, frame rate reduction, or further compression during recording, which may degrade the output quality.
When we refer to PGC gaming videos, we refer to videos captured using professional-grade screen recording software, where the video settings during gameplay are adjusted to a high level, and the content is processed and displayed using professional hardware equipment.
By contrast, UGC gaming videos refer to videos recorded by ordinary, casual non-professional users on ordinary computers or consoles, using diverse game graphics settings, and various types of recording software.
Thus, unlike PGC videos, recorded UGC gaming videos may present with a very wide range of perceptual qualities.
\section{LIVE-YouTube Gaming Video Quality Database}
\label{database}
We present a detailed description of the new LIVE-YT-Gaming database in this section.
The new database contains 600 UGC gaming videos harvested from online sources.
It is the largest public-domain real UGC gaming video quality database having associated subjective scores.
Our main objective has been to create a resource that will support and boost gaming video quality research.
By providing this useful and accessible tool, we hope more researchers will be able to engage in research on the perceptual quality of gaming videos.
Fig. \ref{gamevideo_screenshot} shows a few example videos from the new database.
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/3_Database/gamevideo_screenshot.jpg}
\caption{Example frames of videos from the new LIVE-YT-Gaming database.}
\label{gamevideo_screenshot}
\end{figure}
Many recent (non-gaming) UGC video quality databases \cite{hosu2017konstanz, ying2019patches, ying2020patch, li2020ugc} were created by harvesting a large number of source videos from one or more large free public video repositories, such as the Internet Archive \cite{internetarchive} or YFCC-100M \cite{thomee2016yfcc100m}.
These are typically ``winnowed'' to a set of videos that are representative of a category of interest, such as social media videos.
This is accomplished by a statistical matching process based on low-level video features, such as blur, colorfulness \cite{hasler2003measuring}, contrast, Spatial Information (SI) \cite{winkler2012analysis} and Temporal Information (TI) \cite{itu910subjective}.
Statistical sampling based on matching is not suitable for harvesting gaming videos, however, because of a dearth of large gaming video databases available for free download.
While there are numerous and diverse gaming videos that have been uploaded by users onto video websites like YouTube and Twitch, these generally cannot be downloaded because of copyright issues.
Furthermore, gaming videos are strongly characterized by the type and content of the original games, which impacts the statistical structure of the video signals, such as their bandpass properties \cite{barman2018comparative}.
Overall, data collection is more difficult and complex than it is for real-world UGC videos.
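For reference, the colorfulness feature used in such statistical matching follows the Hasler-S\"{u}sstrunk opponent-color statistic \cite{hasler2003measuring}; a minimal sketch:

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness of an RGB image (H x W x 3).

    One of the low-level features used to winnow harvested videos
    toward a representative set for a category of interest.
    """
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root
```

For a video, this statistic is typically computed per frame and then averaged, so that the resulting scalar can be compared across the candidate pool during sampling.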
\subsection{Video Collection}
We found the Internet Archive (IA), a free digital library, to be a good source of gaming videos.
Taking into account the popularity of games on YouTube, as well as the wide variety of types of games (as described in Section \ref{introduction_videogame}), we selected 59 games to be included in our database.
Unlike the completely random downloading of videos used by ourselves and others to create many PGC and UGC databases, we found that the best approach was to search for videos by their game titles. Then, to ensure diversity of sources and to reduce bias, videos of each game title were downloaded randomly, subject to the resolution and frame rate constraints described below.
Four video resolutions were allowed: 360p (360x640), 480p (480x854), 720p (720x1280), and 1080p (1080x1920), and two frame rates were allowed: 30 fps and 60 fps.
In addition, we used the Windows 10 Xbox game bar \cite{xboxgamebar} to capture the gameplay of some games, at frame rates of 30 fps and 60 fps, and resolutions of 720p and 1080p.
The video resolutions were selected based on the YouTube video display resolution and aspect ratio standard \cite{youtuberesolution}.
In the end, we downloaded dozens to hundreds of source videos of each game to use as a data corpus for the next step, video selection.
\subsection{Video Selection}
After obtaining the source video resources, we randomly extracted a few 10-second clips from each video, thereby obtaining about 3,000 gaming video clips.
Considering the desired scale of the online human study to be conducted, and the number of subjects available (to be explained), we further randomly selected 600 videos from amongst those clips.
We then observed the remaining videos, and replaced videos of some popular games that were over-represented, with videos of less popular games to ensure diversity, and we removed videos containing any inappropriate content.
For a variety of reasons, including avoiding video stalls and limiting the subjects' session durations (Section \ref{stall_resolution_issue}), we cropped the videos to durations in the range of 8-9 sec.
A summary of the distributions of the video resolutions present in the LIVE-YT-Gaming database is tabulated in Table \ref{vid_resolution}.
\begin{table}
\caption{Distribution of Video Resolutions in LIVE-YT-Gaming Database}
\label{vid_resolution}
\centering
\begin{tabular}{ccccc}
\toprule
Resolution & 1080p & 720p & 480p & 360p \\ \midrule
30 fps & 137 & 187 & 36 & 55 \\
60 fps & 129 & 51 & 0 & 5 \\ \bottomrule
\end{tabular}
\end{table}
We also computed SI and TI on all of the remaining videos.
These quantities roughly measure the spatial and temporal richness and variety of the video contents.
SI and TI are defined as follows:
\begin{equation}
SI = \textrm{max}_{\mathit{time}}\left \{ std_{space}\left [ \textit{Sobel}( F_{n}(i, j)) \right ] \right \},
\label{SI}
\end{equation}
\begin{equation}
TI = \textrm{max}_{\mathit{time}}\left \{ std_{space}\left [ M_{n}(i, j) \right ] \right \},
\label{TI}
\end{equation}
\noindent
where $F_{n}$ denotes the luminance component of a video frame at instant $n$, $(i, j)$ denotes spatial coordinates, and $M_{n} = F_{n} - F_{n+1}$ is the frame difference operation.
$Sobel(F_{n})$ denotes Sobel filtering \cite{itu910subjective}.
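For concreteness, the two indices can be computed from the luma frames as in the following minimal numpy sketch, where the Sobel operator is implemented directly via edge-padded slicing (the helper names are illustrative, not from the original implementation):

```python
import numpy as np

def _sobel_mag(f):
    """Sobel gradient magnitude of a 2-D luma frame (edge-padded)."""
    p = np.pad(f, 1, mode="edge")
    # Horizontal gradient: right column minus left column of the 3x3 kernel
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Vertical gradient: bottom row minus top row of the 3x3 kernel
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def si_ti(frames):
    """SI and TI per the definitions above: spatial std, then max over time.

    frames: sequence of 2-D float arrays (luma planes).
    """
    si = max(np.std(_sobel_mag(f)) for f in frames)
    ti = max(np.std(b - a) for a, b in zip(frames, frames[1:]))
    return si, ti
```
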
Fig. \ref{SI_TI_gaming} shows the distributions of SI and TI for the video contents we selected, indicating that in these aspects, the selected videos contain richer contents than many other UGC video and gaming video databases (\cite{tu2021ugc}, \cite{barman2018objective}, \cite{barman2019no}).
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/3_Database/SI_TI_gaming_database_journal-eps-converted-to.pdf}
\caption{Scatter plot of SI against TI on the 600 gaming videos in the LIVE-YouTube Gaming Video Quality Database. }
\label{SI_TI_gaming}
\end{figure}
\section{Subjective Quality Assessment}
\label{study}
Next we describe the design and implementation of our online study.
We stored all of the videos on the Amazon S3 cloud service, which provided secure storage and sufficient bandwidth to ensure satisfactory video loading speeds on the client devices of the study participants.
We recruited 61 volunteer naive subjects who participated in and completed the entire study.
All were students at The University of Texas at Austin (UT-Austin) with no background knowledge of VQA research.
Because of Covid-19 regulations, we designed the study as an online study with a smaller but highly reliable pool of subjects.
Before the study, we randomly divided the subjects into six groups, each containing about 10 subjects.
We also randomly divided the 600 videos into six groups, each containing 100 videos.
Each subject watched three groups of videos, or 300 videos.
In order to avoid any possible bias caused by the same group of videos being watched by a fixed group of subjects, we adopted a round-robin presentation ordering to cross-assign video groups to different subject groups.
In this way each video would be watched by about 30 subjects from the three different subject groups.
Fig. \ref{round_robin} illustrates the structure of this round-robin approach.
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/round_robin.pdf}
\caption{Illustration of the round-robin approach used to allocate video groups and subject groups. Grids having the same color indicate video groups watched by subjects in the same group. S1, S2 and S3 are session indices. }
\label{round_robin}
\end{figure}
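The cross-assignment can be sketched as follows (an illustrative simplification; in the actual study each group contained about 10 subjects or 100 videos, and the function name is hypothetical):

```python
def round_robin(n_groups=6, n_sessions=3):
    """Map each subject group to the video groups it watches, one per session.

    With 6 groups and 3 sessions, subject group s watches video groups
    s, s+1, s+2 (mod 6), so every video group is seen by exactly 3
    subject groups.
    """
    return {s: [(s + k) % n_groups for k in range(n_sessions)]
            for s in range(n_groups)}
```

With roughly 10 subjects per group, each video group is watched by three subject groups, so each video receives about 30 ratings, as described above.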
\subsection{Study Protocol}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/study_flowchart.pdf}
\caption{Flow chart of the online study. }
\label{study_procedure}
\end{figure}
Fig. \ref{study_procedure} shows the flow chart of the steps of the online study.
The volunteer subjects first registered by signing a consent form describing the nature of the human study, after which they received an instruction sheet explaining the purpose, procedures, and display device configuration required for the study.
The subjects were required to use a desktop or laptop computer to complete the experiment, and needed to complete a computer configuration check before the study, to meet the study requirements.
Each subject received a web link to the training session.
Based on the data records captured from the training session, we analyzed whether the subjects' hardware configurations met our requirements.
Following that, the subjects received links to the three test sessions, issued at two-day intervals.
After the subjects finished the entire study, they were asked to complete a short questionnaire regarding their opinions of the study.
\subsection{Training Session}
The training session was conducted as follows.
After opening the link, the subjects first read four webpages of instructions.
The first webpage introduced the purpose and basic flow of the study.
There were five sample videos at the bottom of the page, labeled as bad, poor, fair, good, and excellent, respectively, exemplifying the possible quality levels they may encounter in the study.
The second webpage provided an explanation of how to use the rating bar.
The third webpage introduced the study schedule and how to submit data.
The last webpage explained other particulars, such as suggested viewing distance, desired resolution settings, the use of corrective lenses if normally worn, and so on.
The subjects were required to display each instructional webpage for at least 30 seconds before proceeding to the next page.
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/slider_demo_v3.png}
\caption{Screenshot of the rating bar used in the online human study. }
\label{rating_bar}
\end{figure}
The subjects then entered the experiential training phase and started watching videos.
After each gaming video finished playing, a rating bar appeared.
It was emphasized that they were to provide ratings of video quality, rather than of content or other aspects.
On the rating page, a continuous Likert scale \cite{likert1932technique} was displayed, as shown in Fig. \ref{rating_bar}.
The quality range was labeled from low to high with five guide markers: BAD, POOR, FAIR, GOOD, and EXCELLENT.
The initial position of the rating cursor was randomized.
The subject was asked to provide an overall opinion score of video quality by dragging the marker anywhere along the continuous rating bar.
After the subject clicked the ``Submit" button, the marker's final position was considered to be the rating response.
Then the next video was presented, and the process repeated until the end of the session.
The rating scores received from all the subjects were linearly mapped to a numerical quality score in [0, 100].
All of the videos presented in each session were displayed in a random order, and each appeared only once.
The order of presentation was different across subjects.
\subsection{Test Session}
Each subject participated in a total of three test sessions, each lasting about 30 minutes.
The subjects were allowed to take a break in the middle of the session, if they desired.
The sessions were provided to each subject on alternating days to avoid fatigue and memory bias.
The steps of the test sessions were similar to those of the training session, except that no minimum viewing time was imposed on the instruction pages, so subjects could quickly browse or skip the instructions.
\subsection{Post Questionnaire}
We received feedback from 53 of the participants regarding aspects of the study.
\subsubsection{Video Duration}
We asked the subjects for their opinions on the video playback duration.
A summary of the results is given in Table \ref{question_duration}.
Among the subjects who participated in the questionnaire, 79\% believed that the durations of the observed videos (8-9 seconds) were long enough for them to accurately rate the video quality.
The results in the Table indicate that the video durations were generally deemed to be satisfactory.
\begin{table}
\centering
\caption{The Opinion of Study Participants About Video Duration}
\label{question_duration}
\begin{tabular}{cccc}
\toprule
& Long enough & Not long enough & Could be shorter \\ \midrule
No. & 42 (79.2\%) & 8 (15.1\%) & 3 (5.7\%) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Dizziness}
Another issue is that some participants may feel dizzy when watching some gaming videos, especially those that contain fast motion.
From the survey results in Table \ref{question_dizziness}, about two-thirds of the subjects did not feel any discomfort during the test, while the remaining one-third suffered varying degrees of dizziness.
This is an important issue that should be considered in other subjective studies of gaming, since it may affect the reliability of the final data.
\begin{table}
\centering
\caption{Opinions of Study Participants Regarding Video-Induced Dizziness}
\label{question_dizziness}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccc}
\toprule
& None & \textless{}30\% & 30\%$\sim$50\% & 50\%$\sim$75\% & \textgreater{}75\% \\ \midrule
No. & 34 (64.2\%) & 16 (30.2\%) & 2 (3.8\%) & 1 (1.9\%) & 0 \\ \bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Demographics}
All of the participants were between 18 and 22 years old.
Fig. \ref{watch_video_survey} plots the statistics from the answers to two questions: the total time typically spent watching gaming videos each week, and the devices they used to watch gaming videos.
Approximately 30\% of the participants did not watch gaming videos, while 50\% of them watched at least 2 hours of gaming videos a week.
Most of the subjects watched gaming videos on computers including laptops and desktops, while 30\% of them watched gaming videos on mobile phones.
This suggests that it is of interest to conduct additional research on the quality of gaming videos on mobile devices.
\begin{figure}
\centering
\subfigure[]{
\label{watch_video_time}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/4_Study/survey_watchtime-eps-converted-to.pdf}}
\subfigure[]{
\label{watch_video_device}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/4_Study/survey_device-eps-converted-to.pdf}}
\caption{Demographic details of the participants (a) Typical number of total hours watching gaming videos each week. (b) Device used to watch gaming videos (multiple choice question). }
\label{watch_video_survey}
\end{figure}
\subsection{Data Recording}
In addition to the subject’s subjective quality scores, we also recorded information on the subject’s computing equipment, display, network conditions, and real-time playback logs.
These data helped us guarantee the reliability of the ratings collected.
Examination of the collected data, along with feedback from the subjects, revealed no issues worth acting upon.
We also recorded the random initial values of the rating cursor for each displayed video and compared them against the final scores.
This was done to ensure that the subjects were responsive and moved the cursor.
We recorded the operating system used by each subject, of the three allowed types: Windows, Linux and macOS, and the model and version of the browser.
\subsection{Challenges}
\label{stall_resolution_issue}
During our study, we encountered two issues which had to be addressed to ensure the reliability of the collected data.
The first was video stalls caused by poor Internet conditions.
If a subject’s Internet was unstable, a video could be delayed at startup or paused during playback.
This must be avoided, since the subjects might account for any delays, pauses or rebuffering events when giving their quality ratings, leading to inaccurate results.
The second problem was artifacts arising from automatic rescaling by the client device.
For example, if a subject did not set the browser to full screen mode, their device might spatially scale the videos to fit the current window size, introducing rescaling artifacts that could affect the subjects' video quality scores.
We took the following steps to deal with these issues.
\subsubsection{Video Stalls}
To avoid stalls, we applied several protocols.
First, each video was required to download entirely before playback.
As mentioned, the videos were of 8-9 sec duration.
While most of the videos had file sizes of less than 20 megabytes (MB), we applied very light, visually lossless compression to any that exceeded this limit, to reduce their size.
In this way, we were able to reduce the burden of the Internet download process.
Likewise, as each video was playing, download of the next two videos would commence in the background.
By preloading the videos to be played next, the possibility of video playback problems arising from network instabilities was reduced.
We also recorded a few relevant parameters of each video playback to determine whether each subject's Internet connection was stable and whether the videos played correctly.
We recorded the playing time of each video on the subject's device, then compared it with the actual duration of the video, to detect and measure any playback delays.
We calculated the total (summed) delay time of all videos played in each session, and found that the accumulated delay over all sessions of all subjects was less than 0.5 sec, indicating that all of the videos played smoothly.
We attributed this to the fact that all subjects participated in the same local geographic area (Austin, Texas), avoiding problems encountered in large, international online studies.
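The stall-detection bookkeeping described above reduces to comparing the measured playback time of each video against its nominal duration (the log field names here are hypothetical):

```python
def total_delay(playback_logs):
    """Sum the positive playback delays (in seconds) over a session.

    playback_logs: iterable of records, each with the measured playback
    time ("played_s") and the nominal video duration ("duration_s").
    """
    return sum(max(0.0, rec["played_s"] - rec["duration_s"])
               for rec in playback_logs)
```
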
\subsubsection{Rescaling Effects}
On most devices, videos are automatically rescaled to fit the screen size, which can introduce rescaling artifacts that may significantly alter the perceived video quality, resulting in inaccurate quality ratings.
To avoid this difficulty, we recorded the resolution setting of each subject's device as they opened the study webpage.
We asked all subjects to set their system resolution to 1080x1920, and to display the webpage in full screen mode throughout the study, so the videos would be played at their original resolutions.
Before each session, we recorded the display resolution of the subject's system and the actual resolution of their browser in full screen mode.
We also checked the zoom settings of the system and browser, and required the subjects to set them to the default (no zoom).
The subjects had to pass these simple criteria before beginning each session.
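The pre-session gating logic can be expressed as a simple predicate over the recorded settings (a sketch; the field layout and function name are hypothetical, and the required resolution follows the protocol above):

```python
REQUIRED_RESOLUTION = (1920, 1080)  # width x height, per the study protocol

def passes_display_check(screen_res, browser_res, system_zoom, browser_zoom):
    """True only if the display settings meet the study requirements:
    native 1080p resolution, full-screen browser, and default (no)
    zoom at both the system and browser levels."""
    return (screen_res == REQUIRED_RESOLUTION
            and browser_res == REQUIRED_RESOLUTION
            and system_zoom == 1.0
            and browser_zoom == 1.0)
```
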
\subsection{Comparison of Our Study with Laboratory and Crowdsourced Studies}
There are interesting similarities and differences between our online study and conventional laboratory and crowdsourced studies.
\subsubsection{Volume of Collected Data}
The amount of data obtained in the study depends on the number of available videos and of participating subjects.
Laboratory studies usually accommodate only dozens to hundreds of both subjects and original video contents.
By contrast, crowdsourced studies often recruit thousands of workers who may collectively rate thousands of videos.
While our online study was not as large as typical crowdsourced studies, it was larger than most laboratory VQA studies with regard to both the number of videos and the number of subjects.
\subsubsection{Study Equipment}
The experimental equipment used in laboratory studies is provided by the researchers, and is often of professional/scientific grade.
Because of this, laboratory studies can address more extreme scenarios, such as very high resolutions or frame rates, and high dynamic display ranges.
Crowdsourced studies rely on the subjects' own equipment, which generally implies only moderate playback capabilities, limiting the research objectives that can be pursued.
In large crowdsourced studies, many of the participants may be quite resource-light.
These are also constrained by the tools available in the crowdsourcing platform being used.
For example, MTurk does not allow full-screen video playback.
Therefore, 1080p videos are often automatically downscaled to adapt to the platform page size, introducing rescaling distortions.
By comparison, the subject pool in our experiment was composed of motivated university students, who are required to possess adequate personal computing resources and who generally have access to gigabit Internet in the Austin area.
\subsubsection{Reliability}
The reliability of the study mainly depends on control over the experimental environment and on the veracity and focus of the subjects.
Laboratory studies are conducted in a professional scientific setting, and the recruited subjects generally derive from a known, reliable pool.
They also personally interact with and are accompanied by the researchers.
The online study that we conducted was similar: although it was carried out remotely, a similarly reliable subject pool was recruited, and the subjects remained in remote communication with the research team, both to receive the study instructions and to obtain assistance in configuring their environments.
This was felt to be an optimal approach to address Covid restrictions at the time.
A crowdsourced study is limited by the test platform, and by the often questionable reliability of the remote, unknown subjects, with whom communication is inconvenient.
Moreover, substantial subject validation and rejection protocols must be devised to address the large number of inadequately equipped, distracted, or even frankly dishonest subjects \cite{ghadiyaram2016massive, ying2019patches, ying2020patch}.
\section{Subjective Data Processing}
\label{data_process}
We next describe the processing of the collected subjective opinion scores.
The raw opinion scores were first converted into z-scores.
Let $s_{ijk}$ denote the score provided by the $i$-th subject on the $j$-th video in session $k = \{1, 2, 3\}$.
Since each video was only rated by half of the subjects, let $\delta(i,j)$ be the indicator function
\begin{equation}
\delta(i,j) =
\begin{cases}
1 & \text{if subject $i$ rated video $j$} \\
0 & \text{otherwise}.
\end{cases}
\label{indicate_fun}
\end{equation}
The z-scores were then computed as follows:
\begin{equation}
\begin{aligned}
&z_{ijk} = \frac{s_{ijk} - \bar{s}_{ik}}{\sigma_{ik}},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
&\bar{s}_{ik} = \frac{1}{N_{ik}}\sum_{j=1} ^{N_{ik}} s_{ijk}\\
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
&\sigma_{ik} = \sqrt{\frac{1}{N_{ik}-1} \sum_{j=1} ^{N_{ik}} (s_{ijk} - \bar{s}_{ik})^2}, \\
\end{aligned}
\end{equation}
\noindent
where $N_{ik}$ is the number of videos seen by subject $i$ in session $k$.
The z-scores from all subjects over all sessions were computed to form the matrix $\{z_{ij}\}$, where $z_{ij}$ is the z-score assigned by the $i$-th subject to the $j$-th video, $j \in \{ 1,2 \ldots 600 \}$.
The entries of $\{z_{ij}\}$ are empty when $\delta(i,j)=0$.
Subject rejection was conducted to remove outliers, following the recommended procedure described in ITU-R BT 500.13 \cite{series2012methodology}, resulting in 5 of the 61 subjects being rejected.
The z-scores $z_{ij}$ of the remaining 56 subjects were then linearly rescaled to $[0, 100]$.
Finally, the MOS of each video was calculated by averaging the rescaled z-scores:
\begin{equation}
MOS_j = \frac{1}{N_j} \sum_{i=1} ^N z_{ij}'\delta(i,j),
\end{equation}
where $z_{ij}'$ are the rescaled z-scores, $N_j = \sum_{i=1} ^N \delta(i,j)$ is the number of subjects who rated video $j$, and $N = 56$ is the number of retained subjects.
The MOS values all fell in the range $[4.52, 95.95]$.
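The processing pipeline above can be sketched in a few lines (for brevity, z-scoring is shown in a single pass per subject, whereas the study computed z-scores per session, and subject rejection is omitted):

```python
import numpy as np

def compute_mos(scores):
    """Per-video MOS from a (subjects x videos) rating matrix.

    NaN entries mark videos a subject did not rate, i.e. delta(i,j) = 0.
    """
    # Per-subject z-scores (NaNs are ignored by the nan-aggregations)
    z = (scores - np.nanmean(scores, axis=1, keepdims=True)) \
        / np.nanstd(scores, axis=1, ddof=1, keepdims=True)
    # Linearly rescale all z-scores to [0, 100]
    z = 100.0 * (z - np.nanmin(z)) / (np.nanmax(z) - np.nanmin(z))
    # Average over the subjects who rated each video
    return np.nanmean(z, axis=0)
```
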
\subsection{Subject-Consistency Test}
To assess the reliability of the collected subjective ratings, we performed inter- and intra-subject consistency analyses in the following two ways.
\paragraph{\textit{Inter-Subject Consistency}}
We randomly divided the subjective ratings obtained on each video into two disjoint, equal-sized groups, calculated the per-group MOS of each video, and computed the Spearman rank order correlation coefficient (SROCC) between the two groups.
We conducted 100 such random splits, obtaining a median SROCC of \textbf{0.9400}, indicating a high degree of internal consistency.
\paragraph{\textit{Intra-Subject Consistency}}
The intra-subject reliability test provides a way to measure the degree of consistency of individual subjects \cite{hossfeld2013best}.
We thus measured the SROCC between the individual opinion scores and the MOS.
A median SROCC of \textbf{0.7804} was obtained over all of the subjects.
Both the inter- and intra-subject consistency experiments illustrate the high degree of reliability and consistency of the collected subjective ratings.
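The inter-subject consistency check can be sketched as follows (a simplified version that splits the subjects rather than the per-video ratings, and assumes a fully observed rating matrix; the rank correlation uses a tie-free numpy helper):

```python
import numpy as np

def _srocc(a, b):
    """Spearman correlation for tie-free samples, via rank Pearson."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def split_half_srocc(scores, n_splits=100, seed=0):
    """Median SROCC between the per-video MOS of two random halves of
    the subjects; scores is a (subjects x videos) rating matrix."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_splits):
        idx = rng.permutation(scores.shape[0])
        half = len(idx) // 2
        mos_a = scores[idx[:half]].mean(axis=0)
        mos_b = scores[idx[half:]].mean(axis=0)
        vals.append(_srocc(mos_a, mos_b))
    return float(np.median(vals))
```
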
\subsection{Analysis}
Table \ref{LIVE-gaming_info} lists the particulars of the LIVE-YT-Gaming database.
The overall MOS histogram of the database is plotted in Fig. \ref{MOS_hist}, showing a right-skewed distribution.
Fig. \ref{game_mos_boxplot} shows the boxplots of the MOS of each game, demonstrating the diverse content and quality distribution of the new database.
\begin{table*}
\caption{Details of the LIVE-YT-Gaming Database}
\label{LIVE-gaming_info}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccccccccccc}
\toprule
Database & Year & Content No. & Video No. & Game No. & Subjective Data & Public & Resolution & FPS & Duration & Format & Distortion Type & Subject No. & Ratings per Video & Data & Study Type \\ \midrule
LIVE-YT-Gaming & 2021 & 600 & 600 & 59 & 600 & Yes & 360p, 480p, 720p, 1080p & 30, 60 & 8-9 sec & mp4 & UGC distortions & 61 & 30 & MOS & Online study \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/5_data_process/mos_database-eps-converted-to.pdf}
\caption{MOS distribution across the entire LIVE-YouTube Gaming Video Quality Database. }
\label{MOS_hist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/5_data_process/game_mos_boxplot-eps-converted-to.pdf}
\caption{Boxplot of the MOS distribution of videos for each game in the LIVE-YouTube Gaming Video Quality Database. The x-axis indexes the different games. }
\label{game_mos_boxplot}
\end{figure}
\section{Performance and Analysis}
\label{performance}
To demonstrate the usefulness of the new LIVE-YT-Gaming database, we evaluated the quality prediction performance of a variety of leading public-domain NR VQA algorithms on it.
We selected four popular feature-based NR VQA models (BRISQUE, TLVQM, VIDEVAL, and RAPIQUE), all of which rely on training a Support Vector Regression (SVR) \cite{scholkopf2000new}; a `completely blind,' training-free model, NIQE; and a deep learning based model, VSFA.
We used the LIBSVM package \cite{chang2011libsvm}, with a Radial Basis Function (RBF) kernel, to implement the SVR for all algorithms that we retrained on the new database.
We implemented VSFA using the code released by the authors.
We also tested the performance of two pre-trained networks, VGG-16 \cite{simonyan2014very} and Resnet-50 \cite{he2016deep}, by applying each network to the videos and using the outputs of the fully connected layer as features to train an SVR model.
Since our main goal is to evaluate the quality prediction performance of different algorithms for gaming videos, we also include one gaming video quality model, NDNetGaming, the code of which is publicly available.
We evaluated the performance between predicted quality scores and MOS using three criteria: Spearman’s rank order correlation coefficient (SROCC), Pearson’s (linear) correlation coefficient (LCC) and the Root Mean Squared Error (RMSE).
SROCC measures the monotonic (rank) correlation between two sets of samples, and requires no fitting.
Before computing the LCC and RMSE measures, the predicted quality scores were passed through a logistic non-linearity as described in \cite{VQEG2000}.
Larger values of both SROCC and LCC imply better performance, while larger values of RMSE indicate worse performance.
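The mapping step can be sketched as a four-parameter logistic fit (a commonly used parameterization of the VQEG-style mapping; the exact form used in the paper may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, b1, b2, b3, b4):
    """Four-parameter logistic mapping predictions onto the MOS scale."""
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

def map_predictions(pred, mos):
    """Fit the logistic to (pred, mos) and return the mapped predictions,
    which are then used to compute LCC and RMSE against the MOS."""
    p0 = [np.max(mos), np.min(mos), np.mean(pred), np.std(pred) + 1e-6]
    popt, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
    return logistic4(pred, *popt)
```
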
We randomly divided the database into non-overlapping 80\% training and 20\% test sets.
We repeated the above process over 100 random splits, and report the median performances over all iterations.
At each iteration, the number of samples in the training set and test set were 480 and 120, respectively.
For all models, we made use of the publicly available source code from the authors, and used their default settings.
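The split-and-evaluate protocol can be sketched as follows (a dependency-free illustration in which a least-squares linear regressor stands in for the SVR used in the paper, and the rank correlation uses a tie-free numpy helper):

```python
import numpy as np

def _srocc(a, b):
    """Spearman correlation for tie-free samples, via rank Pearson."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def median_test_srocc(features, mos, n_iters=100, seed=0):
    """Median SROCC over random non-overlapping 80%/20% train/test splits."""
    rng = np.random.default_rng(seed)
    n = len(mos)
    n_test = n // 5
    X = np.column_stack([features, np.ones(n)])  # append a bias term
    vals = []
    for _ in range(n_iters):
        idx = rng.permutation(n)
        te, tr = idx[:n_test], idx[n_test:]
        # Fit the stand-in regressor on the training split only
        w, *_ = np.linalg.lstsq(X[tr], mos[tr], rcond=None)
        vals.append(_srocc(X[te] @ w, mos[te]))
    return float(np.median(vals))
```
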
\subsection{A New Blind Gaming VQA Model}
We recently created a blind VQA model designed for the quality prediction of UGC gaming videos, which overcomes the limitations of existing VQA models when evaluating both synthetic and authentic distortions.
We will refer to this model as the Game Video Quality Predictor, or GAME-VQP.
Our proposed model utilizes a novel fusion of NSS features and deep learning features to produce reliable quality prediction performance.
In the design of GAME-VQP, a bag of spatial and spatio-temporal NSS features is extracted over several color spaces, motivated by the assumption that NSS features from different color spaces capture distinctive aspects of perceived quality.
A widely used pre-trained CNN model, Resnet-50, was used to extract deep learning features.
The extracted NSS features and CNN features were each used to train an independent SVR model, then the final prediction score was obtained as the average prediction score of the two models.
We also include results that compare the performance of GAME-VQP with other leading VQA models.
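The fusion stage of GAME-VQP reduces to averaging the outputs of the two regressors (only the fusion step is shown here; the NSS-feature and CNN-feature SVRs are trained separately as described above):

```python
import numpy as np

def fuse_predictions(nss_scores, cnn_scores):
    """GAME-VQP late fusion: the final quality prediction is the average
    of the NSS-based and CNN-based SVR prediction scores."""
    return 0.5 * (np.asarray(nss_scores, dtype=float)
                  + np.asarray(cnn_scores, dtype=float))
```
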
\subsection{Evaluation Results}
\begin{table*}
\caption{Performance Comparison of Various No-Reference VQA Models on The LIVE-YouTube Gaming Video Quality Database Using Non-Overlapping 80\% Training And 20\% Test Sets. The Numbers Denote Median Values Over $100$ Iterations of Randomly Chosen Non-Overlapping 80\% Training And 20\% Test Sets (Subjective MOS vs Predicted MOS). The Boldfaces Indicate A Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates The Prior VQA Model Designed for Gaming Videos. }
\label{performance_model}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
SROCC & 0.2801 & 0.6037 & 0.7484 & 0.8071 & 0.8028 & 0.7762 & 0.5768 & 0.7290 & 0.4640 & \textbf{0.8451} \\
LCC & 0.3037 & 0.6383 & 0.7564 & 0.8118 & 0.8248 & 0.8014 & 0.6429 & 0.7677 & 0.4682 & \textbf{0.8649} \\
RMSE & 16.208 & 13.268 & 11.134 & 10.093 & 9.661 & 10.396 & 13.240 & 11.083 & 15.108 & \textbf{8.878} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table}
\caption{Results of One-Sided Wilcoxon Rank Sum Test Performed Between SROCC Values of The VQA Algorithms Compared In Table \ref{performance_model}. A Value Of "1" Indicates That The Row Algorithm Was Statistically Superior to The Column Algorithm; " $-$ 1" Indicates That the Row Was Worse Than the Column; A Value Of "0" Indicates That the Two Algorithms Were Statistically Indistinguishable. The Boldfaces Indicate The Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates A Prior VQA Model Designed for Gaming Videos. }
\label{performance_statistc_srocc}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
NIQE & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & \textbf{-1} \\
BRISQUE & 1 & 0 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & \textbf{-1} \\
TLVQM & 1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 & 1 & \textbf{-1} \\
VIDEVAL & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
RAPIQUE & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VSFA} & 1 & 1 & 1 & -1 & -1 & 0 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VGG-16} & 1 & -1 & -1 & -1 & -1 & -1 & 0 & -1 & 1 & \textbf{-1} \\
\textit{Resnet-50} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 0 & 1 & \textbf{-1} \\
{\ul NDNetGaming} & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & \textbf{-1} \\
\textbf{GAME-VQP} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{0} \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Results of One-Sided Wilcoxon Rank Sum Test Performed Between LCC Values of The VQA Algorithms Compared In Table \ref{performance_model}. A Value Of "1" Indicates That The Row Algorithm Was Statistically Superior to The Column Algorithm; " $-$ 1" Indicates That the Row Was Worse Than the Column; A Value Of "0" Indicates That the Two Algorithms Were Statistically Indistinguishable. The Boldfaces Indicate The Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates A Prior VQA Model Designed for Gaming Videos. }
\label{performance_statistc_lcc}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
NIQE & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & \textbf{-1} \\
BRISQUE & 1 & 0 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & \textbf{-1} \\
TLVQM & 1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 & 1 & \textbf{-1} \\
VIDEVAL & 1 & 1 & 1 & 0 & -1 & 1 & 1 & 1 & 1 & \textbf{-1} \\
RAPIQUE & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VSFA} & 1 & 1 & 1 & -1 & -1 & 0 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VGG-16} & 1 & -1 & -1 & -1 & -1 & -1 & 0 & -1 & 1 & \textbf{-1} \\
\textit{Resnet-50} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 0 & 1 & \textbf{-1} \\
{\ul NDNetGaming} & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & \textbf{-1} \\
\textbf{GAME-VQP} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{0} \\ \bottomrule
\end{tabular}
}
\end{table}
The performances of all models are shown in Table \ref{performance_model}.
To determine whether there exists significant differences between the performances of the compared models, we conducted a statistical significance test.
We used the distributions of the obtained SROCC and LCC values computed over the 100 random train-test iterations.
The non-parametric Wilcoxon Rank Sum Test \cite{wilcoxon1945individual}, which compares the rank of two lists of samples, was used to conduct hypothesis testing.
The null hypothesis was that the median of the row model was equal to the median of the column model at the 95\% confidence level.
The alternative hypothesis was that the two medians differed.
A value of `1' in the table means that the row algorithm was statistically superior to the column algorithm, while a value of `-1' indicates the opposite.
A value of `0' indicates that the row and column algorithms were statistically indistinguishable (or equivalent).
The statistical significance results are tabulated in Tables \ref{performance_statistc_srocc} and \ref{performance_statistc_lcc}.
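The entries of these tables can be generated as follows (a sketch using scipy's two-sided rank-sum test, with the sign of the statistic giving the direction of superiority; the paper's test is one-sided):

```python
import numpy as np
from scipy.stats import ranksums

def significance_matrix(samples, alpha=0.05):
    """Pairwise Wilcoxon rank-sum comparisons of per-iteration SROCC
    (or LCC) samples; entries are +1, -1, or 0 as in the tables.

    samples: dict mapping model name -> list of per-iteration values.
    """
    names = list(samples)
    m = np.zeros((len(names), len(names)), dtype=int)
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i == j:
                continue  # a model is indistinguishable from itself
            stat, p = ranksums(samples[a], samples[b])
            if p < alpha:
                m[i, j] = 1 if stat > 0 else -1
    return m
```
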
As may be observed, our GAME-VQP performed significantly better than the other algorithms.
VIDEVAL and RAPIQUE performed well, but still fell short despite their excellent performance on UGC videos.
The completely blind NR VQA model (NIQE) did not perform well on the new database.
This is understandable because the pristine model used by NIQE was created using natural pristine images, while gaming videos are synthetically generated and have very different statistical distributions.
One could imagine constructing a NIQE-like model from a corpus of pristine gaming videos.
Although BRISQUE extracts features similar to NIQE, training it on the UGC gaming videos produced much better results.
The relative results of NIQE and BRISQUE suggest that, although the statistical structures of gaming videos differ from those of natural videos, they nevertheless possess regularities that can be learned using NSS features.
TLVQM captures motion characteristics from the videos and delivered better performance than BRISQUE, indicating the importance of accounting for motion in gaming videos.
The performance of the deep learning VSFA model was close to that of RAPIQUE and VIDEVAL, and better than the pre-trained Resnet-50 and VGG-16 models.
These results show that deep models are able to capture the characteristics of synthetic videos, suggesting the potential of deep models for gaming VQA.
VIDEVAL and RAPIQUE both performed well, showing the effectiveness of combining NSS features with CNN features.
However, since GAME-VQP exceeded the performance of both models, it appears that not all NSS and CNN features contribute equally to UGC gaming video quality prediction.
By selecting a smaller set of highly predictive features, our gaming VQA model produced results that were better than those of all of the other models.
\begin{figure}
\centering
\subfigure[]{
\label{performance_boxplot:SROCC}
\includegraphics[width=0.9\columnwidth]{Sections_gaming/6_performance/boxplot_srocc_journal-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_boxplot:LCC}
\includegraphics[width=0.9\columnwidth]{Sections_gaming/6_performance/boxplot_lcc_journal-eps-converted-to.pdf}}
\caption{Box plots of the SROCC and LCC distributions of the algorithms compared in Table \ref{performance_model} over 100 randomized trials on the LIVE-YouTube Gaming Video Quality Database. The central red mark represents the median, while the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, while the outliers are individually plotted using the `.' symbol. }
\label{performance_boxplot}
\end{figure}
Fig. \ref{performance_boxplot} shows box plots of the SROCC and LCC correlations obtained over 100 iterations for each of the algorithms compared in Table \ref{performance_model}.
A lower standard deviation together with a higher median SROCC or LCC value indicates more accurate and more robust performance.
Our proposed algorithm clearly exceeded all of the other algorithms, in terms of both stability and prediction accuracy.
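The repeated-split protocol underlying these distributions can be sketched as follows. The features, labels, and SVR settings below are illustrative stand-ins rather than the exact experimental configuration.

```python
# Sketch of the repeated-split protocol behind the box plots: 100 random
# 80/20 train/test divisions, an SVR fit on each training portion, and the
# SROCC between its test-set predictions and MOS recorded per trial.
# Features and MOS are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))                       # one feature row per video
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=600)  # stand-in MOS

sroccs = []
for trial in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=trial)
    reg = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X_tr, y_tr)
    sroccs.append(spearmanr(reg.predict(X_te), y_te)[0])

print(f"median SROCC over trials: {np.median(sroccs):.3f}")
```

The resulting list of 100 SROCC values per model is exactly what each box in the plot summarizes.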
\subsection{Scatter Plot}
\begin{figure}
\centering
\subfigure[]{
\label{performance_scatter:NIQE}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_NIQE-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:TLVQM}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_TLVQM-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:RAPIQUE}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_RAPIQUE-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:Resnet50}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_resnet50-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:proposed}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_proposed-eps-converted-to.pdf}}
\caption{Scatter plots of predicted quality scores versus MOS trained with an SVR using 5-fold cross validation on all videos in the LIVE-YouTube Gaming Video Quality Database. (a) NIQE, (b) TLVQM, (c) RAPIQUE, (d) Resnet-50, (e) GAME-VQP. }
\label{performance_scatter}
\end{figure}
Scatter plots of VQA model predictions are a good way to visualize model correlations.
To calculate scatter plots over the entire LIVE-YT-Gaming database, we applied 5-fold cross validation and aggregated the predicted scores obtained from each fold.
Scatter plots of MOS versus the quality predictions produced by NIQE, TLVQM, RAPIQUE, Resnet-50 and our proposed model are given in Fig. \ref{performance_scatter}.
As may be observed in Fig. \ref{performance_scatter:NIQE}, the predicted NIQE scores correlated poorly with the MOS.
The other three models, as shown in Figs. \ref{performance_scatter:TLVQM}, \ref{performance_scatter:RAPIQUE} and \ref{performance_scatter:Resnet50}, followed regular trends against the MOS.
As may be clearly seen in Fig. \ref{performance_scatter:proposed}, the distribution of the proposed model predictions are more compact than those of the other models/modules, in agreement with the higher correlations against MOS that it attains.
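The fold-aggregation procedure behind these scatter plots can be sketched with scikit-learn. The feature matrix and MOS vector below are synthetic placeholders; the actual GAME-VQP features are not reproduced here.

```python
# Sketch of the fold aggregation used for the scatter plots: with 5-fold
# cross validation, each video is predicted exactly once, by the fold in
# which it was held out, giving one out-of-sample score per video.
# Features and MOS are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 20))
mos = 2.0 * features[:, 0] + rng.normal(scale=0.5, size=600)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
folds = KFold(n_splits=5, shuffle=True, random_state=0)
predicted = cross_val_predict(model, features, mos, cv=folds)

# `predicted` versus `mos` is what each scatter plot visualizes.
print(f"aggregated SROCC: {spearmanr(predicted, mos)[0]:.3f}")
```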
\section{Introduction}
\label{introduction_game}
Over the past few years, Internet traffic has continued to grow dramatically.
According to \cite{ciscociscovisualnetworkingindex}, video streaming now occupies the majority of Internet bandwidth and is the main driver of this growth.
Streaming service providers like Netflix, Hulu, and Amazon Prime Video generate and stream a substantial portion of this visual traffic in the form of Professionally-Generated-Content (PGC) videos.
At the same time, online video sharing platforms like YouTube, Vimeo, and Twitch collect and make available vast numbers of User-Generated-Content (UGC) videos, generally uploaded by less skilled videographers.
Driven by the rapid development of the digital game industry and by consumer desire to see skilled gamers in action, a large number of online videos of gameplay content have been uploaded and made public on the Internet.
In 2020, YouTube Gaming reached a milestone of 100 billion hours of watch time and 40 million active gaming channels \cite{youtubegamingbiggestyear}.
The rapid expansion of the gaming market has also given birth to many streaming-related services, such as game live broadcasts, online games, and cloud games.
At the same time, increasingly powerful computing, communication, and display hardware and software technologies have continuously uplifted the quality of these services.
In order to provide users with better quality streaming gaming video services and to improve users’ viewing experiences, perceptual Video Quality Assessment (VQA) research has become particularly important.
Both general-purpose algorithms able to handle diverse distortion scenarios, as well as VQA models specifically designed for gaming videos need to be studied and developed.
VQA research efforts can generally be divided into two categories: conducting subjective quality assessment experiments, and developing objective quality assessment models and algorithms.
If it were feasible, subjective quality assessment would best meet the needs of streaming providers and users, but its cost is prohibitive, and it does not allow for real-time quality-based decisions or processing steps, such as perceptually optimized compression.
Therefore, automatic perception-based objective video quality prediction algorithms are of great interest.
Objective VQA models can be classified into three categories, based on whether there is a reference video available.
The first, Full-Reference (FR) models, require the availability of an entire source video as a reference against which to measure visual differences, i.e. video fidelity.
Reduced-Reference (RR) models only require partial reference information, often only a small amount.
No-Reference (NR), or ``blind'' models predict the quality of a test video without using any reference information.
It is generally believed that, when predicting the quality of a test video, if a PGC video is available as a reference, that is, an original, ``pristine'' video of high quality shot by experts with professional equipment, then FR or RR VQA algorithms are able to achieve superior prediction results as compared to NR models.
However, there is often a lack of accessible reference videos, e.g., for UGC videos captured by naive users having limited capture skills or technology.
These can only be evaluated using NR VQA models.
Since these kinds of videos are quite pervasive on social media and sharing platforms, the NR VQA problem is of high practical importance.
Since they also typically encompass much wider ranges of quality and of distortion types which are often simultaneously present and commingled, the NR VQA problem is much more challenging.
Unlike real-world videos taken by videographers with optical cameras, gaming videos are generally synthetically generated.
Because of underlying statistical differences in the structure of synthetic and natural videos \cite{barman2018comparative, barman2018objective}, popular NR VQA models may not achieve adequate quality prediction performance on them.
Research on the assessment of gaming video quality mostly commenced very recently.
The authors of \cite{barman2018comparative} studied and analyzed differences between gaming and non-gaming content videos, in terms of compression and user Quality of Experience (QoE).
Some VQA models designed for gaming videos have been proposed, e.g., \cite{zadtootaghaj2020demi,utke2020ndnetgaming}.
However, this work has been limited to the study of the characteristics of PGC gaming videos and how they are affected by compression.
None of these have discussed or analyzed the quality assessment of UGC gaming videos recorded by amateur users.
We briefly elaborate on the characteristics of PGC and UGC gaming videos in Section \ref{pgc_ugc_gaming_video}.
Advancements of research on video quality have always relied on freely available databases of distorted videos and subjective judgments of them.
Over the past two decades, many VQA databases have been created and shared with researchers, which has significantly driven the development of VQA models.
Databases created in the early stages of VQA research usually contained only a small number of PGC reference videos and a small number of synthetically distorted versions of them.
In recent years, as UGC VQA research has gained more attention, more advanced and specialized databases have been created, containing hundreds or thousands of unique UGC videos with numerous, complex authentic distortions.
Recently, databases dedicated to the development of gaming video quality assessment research have also been proposed.
However, these databases only contain a limited number of PGC gaming videos as references, along with compressed versions of them.
Therefore, towards advancing progress on understanding streamed UGC gaming video quality, we have created a new UGC gaming video resource, which we call the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database.
This new dataset contains subjective rating scores on a large number of unique gaming videos from a study that was conducted online.
We summarize the contributions we make as follows:
\begin{itemize}
\item We constructed a novel subjective UGC gaming video database, which we call the LIVE-YouTube Gaming video quality database (LIVE-YT-Gaming). The new database contains 600 UGC gaming videos of unique content, from 59 different games. This is the largest video quality database addressing UGC gaming, and it includes the most exemplar games. Unlike the few existing gaming video quality databases, where PGC videos were professionally recorded, the LIVE-YT-Gaming videos were collected from online uploads by casual users (UGC videos) and are generally afflicted by highly diverse, mixed distortions.
\item Using this new resource, we conducted a novel online human study whereby we collected a large number of subjective quality labels on the gaming videos. To demonstrate the utility of the new database, we compared and contrasted the performance of a wide range of state-of-the-art (SOTA) VQA models, including one of our own design that attains top performance.
\end{itemize}
The rest of the paper is organized as follows: we discuss related work in Section \ref{related_work}.
We present the new distorted gaming video database in Section \ref{database}.
We provide details on the subjective study in Section \ref{study}, and an analysis of the collected subjective data in Section \ref{data_process}.
We compare the performances of several current VQA models on the new database in Section \ref{performance}, and introduce a new model, called GAME-VQP.
Finally, we summarize the paper and discuss possible future work in Section \ref{conclusion}.
\section{Related Work}
\label{related_work}
\subsection{Subjective Video Quality Database}
\label{related_work_database}
Existing VQA databases differ by the types of video contents and distortions that are included, as well as by the volume and methods of procuring subjective data on them.
Two different methods are commonly used to collect subjective ratings on videos in the databases: laboratory studies and online crowdsourced studies.
\subsubsection{Laboratory VQA Studies}
Most of the early VQA databases collected subjective data in the laboratory.
A small set of unique reference videos (usually 10 to 20), along with distorted versions of them were presented to a relatively small ($<$100) set of volunteer subjects.
We will refer to these synthetically-distorted datasets as \textit{legacy} VQA databases.
Some representative legacy databases are LIVE VQA \cite{seshadrinathan2010study}, LIVE Mobile \cite{moorthy2012video}, CDVL \cite{pinson2013consumer}, MCL-V \cite{lin2015mcl}, MCL-JCV \cite{wang2016mcl}, and VideoSet \cite{wang2017videoset}.
Laboratory studies have advantages: the human data that is obtained is generally more reliable, and the scientist has significant control over the experimental environment.
However, it is difficult to recruit a large number of subjects, hence much less data can be collected.
Because of this, legacy databases are data-poor, although they remain valuable test beds because of the high quality of their data.
\subsubsection{Crowdsourced VQA Studies}
In recent years, crowdsourcing as a tool for conducting subjective studies has been used to create a variety of large-scale VQA databases containing UGC videos.
Crowdsourcing is usually conducted on online platforms like Amazon Mechanical Turk (MTurk) and CrowdFlower, whereby it is possible to collect a large amount of subjective data from participants around the world.
Crowdsourced databases usually contain hundreds to thousands of UGC videos selected and sampled from online video sources, on which large amounts of subjective data can be collected by crowdsourcing.
Some representative crowdsourced video quality databases are CVD2014 \cite{nuutinen2016cvd2014}, LIVE-In-Capture \cite{ghadiyaram2017capture}, KoNViD-1k \cite{hosu2017konstanz}, YFCC100M \cite{thomee2016yfcc100m}, LIVE-VQC \cite{sinno2018large}, and YouTube-UGC \cite{wang2019youtube}.
Crowdsourcing makes it possible to collect a large volume of subjective data from thousands of workers.
However, the collected data are less reliable than quality scores collected in the laboratory, and disingenuous subjects must be identified and addressed.
Further, the researcher must ensure that the subjects have adequate displays and network bandwidths to properly participate.
Since a very large number of paid workers are required, online studies can be very expensive.
\subsubsection{Gaming Video Quality Assessment Databases}
\label{gaming_database_compare}
As far as we are aware, there are four video quality databases that have been designed for gaming video quality research purposes: GamingVideoSET \cite{barman2018gamingvideoset}, KUGVD \cite{barman2019no}, CGVDS \cite{zadtootaghaj2020quality}, and TGV \cite{wen2021subjective}.
The GamingVideoSET database contains 24 gaming contents recorded from 12 different games, along with 576 compressed versions of these videos using H.264 compression.
However, only 90 of these have associated subjective data.
Three different resolutions are included: 480p, 720p and 1080p, all with frame rates of 30 frames/second (fps).
The KUGVD database is similar to GamingVideoSET, but only has 6 gaming contents and 144 compressed versions of them, 90 of which have subjective data available.
Both GamingVideoSET and KUGVD databases are therefore limited by having small amounts of subjective data, which hinders model development.
All of the reference videos were recorded under professional conditions without visible distortion, unlike UGC videos.
Moreover, the distorted videos were all compressed using the same H.264 standard, making quality prediction less difficult.
The CGVDS database was specifically developed for analyzing and modeling the quality of gaming videos compressed using hardware accelerated implementations of H.264/MPEG-AVC.
It has 360 distorted videos with subjective quality ratings, compressed from 15 PGC reference videos, at framerates of 20, 30, and 60 fps.
Finally, the TGV database is a mobile gaming video database containing 1293 gaming video sequences compressed from 150 source videos.
Unlike the aforementioned databases, which only contain computer or console games, the videos in the TGV database were all recorded from 17 different mobile games.
All four of the above-described gaming video databases contain only PGC videos that have been impaired by a single distortion type (compression).
Prior to our effort here, there has been no database designed for conducting VQA research on real UGC gaming videos created by casual users.
A comparison of the four existing gaming video quality databases is given in Table \ref{database_comparison}.
There is one more recently published database, called GamingHDRVideoSET \cite{barman2021user}, which includes HDR gaming videos; however, it does not include any subjective data, so we excluded it from our evaluations.
\begin{table*}
\resizebox{\textwidth}{!}{%
\begin{threeparttable}
\caption{Comparison of Four Existing Gaming Video Quality Databases: GamingVideoSET, KUGVD, CGVDS, and TGV}
\label{database_comparison}
\begin{tabular}{cccccccccccccccc}
\toprule
Database & Year & Content No & Video No & Game No & Subjective Data & Public & Resolution & FPS & Duration & Format & Distortion Type & Subject No & Rating No & Data & Study Type \\ \midrule
GamingVideoSET & 2018 & 24 & 600 & 12 & 90 & Yes & 480p, 720p, 1080p & 30 & 30 sec & mp4, yuv & H.264 compression & 25 & 25 & MOS & In-lab study \\
KUGVD & 2019 & 6 & 150 & 6 & 90 & Yes & 480p, 720p, 1080p & 30 & 30 sec & mp4, yuv & H.264 compression & 17 & 17 & MOS & In-lab study \\
CGVDS & 2020 & 15 & 255 & 15 & 360 + anchor stimuli & Yes & 480p, 720p, 1080p & 20, 30, 60 & 30 sec & mp4, yuv & H.264 compression & over 100 & Unavailable & MOS & In-lab study \\
TGV & 2021 & 150 & 1293 & 17 & Unavailable & No & 480p, 720p, 1080p & 30 & 5 sec & Unavailable & H.264, H.265, Tencent codec & 19 & Unavailable & Unavailable & In-lab study \\ \bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item Content No: Total number of unique contents. \qquad Video No: Total number of videos. \qquad Game No: Total number of source games. \qquad Subjective Data: Total number of videos with subjective ratings available. \qquad \\ FPS: Frames per second. \qquad Subject No: Total number of participating subjects. \qquad Rating No: Average number of ratings per video.
\end{tablenotes}
\end{threeparttable}
}
\end{table*}
\subsection{Objective Quality Assessment Model}
The following is a brief introduction to the development of modern NR VQA models, including both general-purpose models and gaming video quality models.
\subsubsection{General NR VQA Models}
While the earliest NR VQA models were developed to address specific types of distortion, the focus of VQA research shifted toward the development of more general-purpose NR algorithms, which extract various ``quality-aware'' hand-crafted features that are fed to simple regressors, such as SVRs, to learn mappings to human subjective quality labels.
The most successful features derive from models of natural scene statistics (NSS) \cite{ruderman1994statistics} and natural video statistics (NVS) \cite{soundararajan2012video}, under which high quality optical images, when subjected to perceptually-relevant bandpass processes, obey certain statistical regularities that are predictably altered by distortions.
Noteworthy examples include NIQE \cite{mittal2013making}, BRISQUE \cite{mittal2012no}, V-BLIINDS \cite{saad2014blind}, HIGRADE \cite{kundu2017no}, GM-LOG \cite{xue2014blind}, DESIQUE \cite{zhang2013no}, and FRIQUEE \cite{ghadiyaram2017perceptual}.
More recent models that employ efficiently optimized NSS/NVS features, and/or combined with deep features, include VIDEVAL \cite{tu2021ugc}, which leverages a hand-optimized selection of statistical features, and RAPIQUE \cite{tu2021rapique}, which combines a large number of easily-computed statistics with semantic deep features in an extremely efficient manner, yielding both excellent quality prediction accuracy and rapid computation.
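As a concrete illustration of the bandpass regularities these models exploit, the following sketch computes BRISQUE/NIQE-style mean-subtracted, contrast-normalized (MSCN) coefficients. The Gaussian window width and stabilizing constant are typical illustrative choices, not parameters taken from any specific model above.

```python
# Minimal sketch of the mean-subtracted, contrast-normalized (MSCN)
# transform at the heart of BRISQUE/NIQE-style NSS features. For pristine
# natural images the MSCN coefficients are close to Gaussian; distortions
# predictably alter the shape of their distribution.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(luma, sigma=7.0 / 6.0, eps=1.0):
    luma = luma.astype(np.float64)
    mu = gaussian_filter(luma, sigma)                    # local mean
    var = gaussian_filter(luma * luma, sigma) - mu ** 2  # local variance
    std = np.sqrt(np.maximum(var, 0.0))
    return (luma - mu) / (std + eps)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
coeffs = mscn(frame)
# Simple quality-aware summary statistics that can be fed to a regressor:
print(round(float(coeffs.var()), 3), round(float(np.abs(coeffs).mean()), 3))
```

Summary statistics of such coefficient maps, pooled over frames, are the kind of feature vector that the NSS-based models above regress against MOS.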
Unlike NSS-based models, data-driven methods like CORNIA \cite{ye2012unsupervised} begin by constructing a codebook via unsupervised learning techniques, instead of using a fixed set of features, followed by temporal hysteresis pooling.
A recently published NR VQA model called TLVQM \cite{korhonen2019two} makes use of a two-level feature extraction mechanism to achieve efficient computation of a set of distortion-relevant features that measure motion, specific distortion artifacts, and aesthetics.
Given the emergence of large-scale video quality databases in recent years, the availability of sufficient quantities of training samples has made it possible to train high-performance deep learning models.
VSFA \cite{li2019quality} makes use of a pre-trained Convolutional Neural Network (CNN) as a deep feature extractor, then integrates the frame-wise features using a gated recurrent unit and a temporal pooling layer.
It is one of several SOTA deep learning VQA models that attain superior performance on publicly available UGC video quality databases.
PaQ-2-PiQ \cite{ying2019patches} is a recent frame-based local-to-global deep learning based VQA architecture, while PVQ \cite{ying2020patch} extends the idea of using local quality predictions to improve global space-time VQA performance.
Other deep models include V-MEON \cite{liu2018end}, NIMA \cite{talebi2018nima}, PQR \cite{zeng2018blind}, and DLIQA \cite{hou2015blind}, which deliver high performance on legacy and UGC video quality databases.
\subsubsection{Gaming VQA Models}
\label{gaming_VQA_intro}
Several NR VQA models have been proposed for gaming.
In \cite{zadtootaghaj2018nr}, the authors proposed a blind model called NR-GVQM, which trains an SVR model to evaluate the quality of gaming content by extracting nine frame-level features, and by using VMAF scores as proxy ground truth labels.
The Nofu model proposed in \cite{goring2019nofu} is a learning-based VQA model, which applies center cropping to rapidly compute 12 frame-based features, followed by model training and temporal pooling.
The authors of \cite{barman2019no} proposed two models.
Their NR-GVSQI model also uses VMAF scores as training targets, while their NR-GVSQE model uses MOS as the training target.
They both make use of basic distortion features that measure blockiness, blurriness, contrast, exposure, etc., as well as quality scores generated by the NSS-based IQA models BIQI, BRISQUE, and NIQE.
The ITU-T standard G.1072 \cite{gaming2020methodology} describes a gaming video QoE model based on three quality dimensions: spatial video quality, temporal video quality, and input quality (interaction quality).
Two recently released models based on deep learning, called NDNetGaming \cite{utke2020ndnetgaming} and DEMI \cite{zadtootaghaj2020demi}, were both developed on a DenseNet framework.
NDNetGaming is based on a CNN structure trained on VMAF scores as proxy ground truth, then fine-tuned using MOS.
Quality prediction is then accomplished via temporal pooling.
DEMI uses a CNN architecture similar to NDNetGaming, but focuses on only two types of distortions, blurriness and blockiness.
However, each of these systems was designed and developed on databases containing only a small number of PGC reference videos impaired by a single type of manually applied compression.
None have been systematically trained and tested on UGC gaming videos.
One reason for this is that there is no UGC gaming video quality database currently available.
\subsection{Video Game and Gaming Video}
\label{introduction_videogame}
Video games generally refer to interactive games that run on electronic media platforms.
Popular mainstream games may take the form of computer games, console games, mobile games, handheld games, VR games, cloud games, and so on.
Many games are available on multiple platforms.
Games also fall into various genres, including role-playing games, adventure games, action games, first-person shooters, real-time strategy games, fighting games, board games, massive multiplayer online role-playing games, and others.
At present, most gaming video quality research has focused on streamed gaming, such as interactive cloud games and online games, as well as passive live broadcast and recorded gameplay \cite{barman2018evaluation}.
\subsection{Recording of PGC and UGC Gaming Video}
\label{pgc_ugc_gaming_video}
Next we describe ways by which gaming videos are created, leading to a discussion of PGC and UGC gaming videos.
Unlike natural videos captured by optical cameras, gaming \textit{videos}, as opposed to live gameplay, are usually obtained by recording the screen of the devices on which the games are being played.
A variety of factors affect the output quality of recorded gaming videos.
The most basic are the graphics configuration settings of the operating system, and the settings built into each game.
The display quality of games is usually affected by settings like spatial resolution, frame rate, motion blur, control of screen-space ambient occlusions, vertical synchronization, variation of texture/material, grass quality, anisotropic filtering, depth of field, reflection quality, water surface quality, shadow quality/style, type of tessellation, and use of triple buffering and anti-aliasing.
Professional players enjoy the ultimate gaming experience by optimizing these settings. However, these settings present formidably complicated choices for most players, so games also provide more intuitive and convenient options.
Players can simply set the game quality to high, medium, or low, letting the game program automatically set the appropriate parameters.
Of course, whether or not the video quality determined by the game settings can be correctly displayed depends heavily on the hardware configuration.
A sufficiently powerful GPU can provide complete real-time calculation support to ensure that the game display quality is stable at a high level, but ordinary GPUs may not meet all the quality requirements of all users.
For example, high motion scenes may present with noticeable frame drops or lagging.
Some games require very large data calculations, so if the GPU cannot process them quickly enough, delays in the display may occur.
For those games which require connection to the Internet, poor network conditions may also cause the game screen to delay or even freeze.
The quality of recorded gaming videos is also affected by the recording software used.
Professional software can accurately reproduce the original gameplay, but substantial system resources are required, increasing the burden on the gameplay device.
The software may provide options for faster and easier use, such as automatic resolution selection, frame rate reduction, or further compression during recording, which may degrade the output quality.
When we refer to PGC gaming videos, we refer to videos captured using professional-grade screen recording software, where the video settings during the gameplay are adjusted to a high level, processed and displayed using professional hardware equipment.
By contrast, UGC gaming videos refer to videos recorded by ordinary, casual non-professional users on ordinary computers or consoles, using diverse game graphics settings, and various types of recording software.
Thus, unlike PGC videos, recorded UGC gaming videos may present a very wide range of perceptual qualities.
\section{LIVE-YouTube Gaming Video Quality Database}
\label{database}
We present a detailed description of the new LIVE-YT-Gaming database in this section.
The new database contains 600 UGC gaming videos harvested from online sources.
It is the largest public-domain real UGC gaming video quality database having associated subjective scores.
Our main objective has been to create a resource that will support and boost gaming video quality research.
By providing this useful and accessible tool, we hope more researchers will be able to engage in research on the perceptual quality of gaming videos.
Fig. \ref{gamevideo_screenshot} shows a few example videos from the new database.
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/3_Database/gamevideo_screenshot.jpg}
\caption{Example frames of videos from the new LIVE-YT-Gaming database.}
\label{gamevideo_screenshot}
\end{figure}
Many recent (non-gaming) UGC video quality databases \cite{hosu2017konstanz, ying2019patches, ying2020patch, li2020ugc} were created by harvesting a large number of source videos from one or more large free public video repositories, such as the Internet Archive \cite{internetarchive} or YFCC-100M \cite{thomee2016yfcc100m}.
These are typically ``winnowed'' to a set of videos that are representative of a category of interest, such as social media videos.
This is accomplished by a statistical matching process based on low-level video features, such as blur, colorfulness \cite{hasler2003measuring}, contrast, Spatial Information (SI) \cite{winkler2012analysis} and Temporal Information (TI) \cite{itu910subjective}.
Statistical sampling based on matching is not suitable for harvesting gaming videos, however, because of a dearth of large gaming video databases available for free download.
While there are numerous and diverse gaming videos that have been uploaded by users onto video websites like YouTube and Twitch, these generally cannot be downloaded because of copyright issues.
Furthermore, gaming videos are strongly characterized by the type and content of the original games, which impacts the statistical structure of the video signals, such as their bandpass properties \cite{barman2018comparative}.
Overall, data collection is more difficult and complex than it is for real-world UGC videos.
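As one example of the low-level matching features mentioned above, the Hasler-Susstrunk colorfulness measure can be computed per frame as follows. This is a per-frame sketch; pooling over the frames of a video is omitted.

```python
# Per-frame sketch of the Hasler-Susstrunk colorfulness measure, one of the
# low-level features commonly used when statistically matching video sets.
import numpy as np

def colorfulness(rgb):
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g                         # red-green opponent channel
    yb = 0.5 * (r + g) - b             # yellow-blue opponent channel
    std_term = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_term = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_term + 0.3 * mean_term

rng = np.random.default_rng(0)
colorful = rng.integers(0, 256, size=(32, 32, 3))
gray = np.full((32, 32, 3), 128)       # achromatic frame scores zero
print(colorfulness(colorful) > colorfulness(gray))  # True
```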
\subsection{Video Collection}
We found the Internet Archive (IA), a free digital library, to be a good source of gaming videos.
Taking into account the popularity of games on YouTube, as well as the wide variety of types of games (as described in Section \ref{introduction_videogame}), we selected 59 games to be included in our database.
Unlike the completely random downloading of videos used by ourselves and others to create many PGC and UGC databases, we found that the best approach was to search for videos by their game titles; then, to ensure diversity of sources and to reduce bias, videos of each game title were downloaded randomly, subject to the resolution and frame rate constraints described next.
Four video resolutions were allowed: 360p (360x640), 480p (480x854), 720p (720x1280), and 1080p (1080x1920), and two frame rates were allowed: 30 fps and 60 fps.
In addition, we used the Windows 10 Xbox game bar \cite{xboxgamebar} to capture the gameplay of some games, at frame rates of 30 fps and 60 fps, and resolutions of 720p and 1080p.
The video resolutions were selected based on the YouTube video display resolution and aspect ratio standard \cite{youtuberesolution}.
In the end, we downloaded dozens to hundreds of source videos of each game to use as a data corpus for the next step, video selection.
\subsection{Video Selection}
After obtaining the source video resources, we randomly extracted a few 10-second clips from each video, thereby obtaining about 3,000 gaming video clips.
Considering the desired scale of the online human study to be conducted, and the number of subjects available (to be explained), we further randomly selected 600 videos from amongst those clips.
We then reviewed the remaining videos, replaced videos of some over-represented popular games with videos of less popular games to ensure diversity, and removed videos containing any inappropriate content.
For a variety of reasons, including avoiding video stalls and limiting the subjects' session durations (Section \ref{stall_resolution_issue}), we cropped the videos to durations in the range of 8-9 sec.
A summary of the distributions of the video resolutions present in the LIVE-YT-Gaming database is tabulated in Table \ref{vid_resolution}.
\begin{table}
\caption{Distribution of Video Resolutions in LIVE-YT-Gaming Database}
\label{vid_resolution}
\centering
\begin{tabular}{ccccc}
\toprule
Resolution & 1080p & 720p & 480p & 360p \\ \midrule
30 fps & 137 & 187 & 36 & 55 \\
60 fps & 129 & 51 & 0 & 5 \\ \bottomrule
\end{tabular}
\end{table}
We also computed SI and TI on all of the remaining videos.
These quantities roughly measure the spatial and temporal richness and variety of the video contents.
SI and TI are defined as follows:
\begin{equation}
SI = \textrm{max}_{\mathit{time}}\left \{ std_{space}\left [ \textit{Sobel}( F_{n}(i, j)) \right ] \right \},
\label{SI}
\end{equation}
\begin{equation}
TI = \textrm{max}_{\mathit{time}}\left \{ std_{space}\left [ M_{n}(i, j) \right ] \right \},
\label{TI}
\end{equation}
\noindent
where $F_{n}$ denotes the luminance component of a video frame at instant $n$, $(i, j)$ denotes spatial coordinates, and $M_{n} = F_{n} - F_{n+1}$ is the frame difference operation.
$Sobel(F_{n})$ denotes Sobel filtering \cite{itu910subjective}.
Fig. \ref{SI_TI_gaming} shows the distributions of SI and TI for the video contents we selected, indicating that in these aspects, the selected videos contain richer contents than many other UGC video and gaming video databases (\cite{tu2021ugc}, \cite{barman2018objective}, \cite{barman2019no}).
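The SI and TI definitions above can be sketched in Python as follows. This is a minimal illustrative sketch, assuming the input is a sequence of 2-D luminance (luma) frames; it is not the exact implementation used to produce the database statistics.

```python
import numpy as np
from scipy.ndimage import sobel

def si_ti(frames):
    """Compute SI and TI (per ITU-T P.910) from luminance frames.

    frames: iterable of 2-D float arrays (luma channel per frame).
    Returns (SI, TI); TI is NaN for single-frame inputs.
    """
    si_per_frame = []
    ti_per_frame = []
    prev = None
    for f in frames:
        f = np.asarray(f, dtype=np.float64)
        # Spatial information: spatial std-dev of the Sobel gradient magnitude.
        gx, gy = sobel(f, axis=1), sobel(f, axis=0)
        si_per_frame.append(np.hypot(gx, gy).std())
        # Temporal information: spatial std-dev of the frame difference.
        if prev is not None:
            ti_per_frame.append((f - prev).std())
        prev = f
    # Both SI and TI take the maximum over time.
    si = max(si_per_frame)
    ti = max(ti_per_frame) if ti_per_frame else float("nan")
    return si, ti
```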
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/3_Database/SI_TI_gaming_database_journal-eps-converted-to.pdf}
\caption{Scatter plot of SI against TI on the 600 gaming videos in the LIVE-YouTube Gaming Video Quality Database. }
\label{SI_TI_gaming}
\end{figure}
\section{Subjective Quality Assessment}
\label{study}
Next we describe the design and implementation of our online study.
We stored all of the videos on the Amazon S3 cloud server, providing a safe cloud storage service at high Internet speeds, with sufficient bandwidth to ensure satisfactory video loading speed at the client devices of the study participants.
We recruited 61 volunteer naive subjects who participated in and completed the entire study.
All were students at The University of Texas at Austin (UT-Austin) with no background knowledge of VQA research.
Because of Covid-19 regulations, we designed the study as an online study with fewer, but highly reliable, subjects.
Before the study, we randomly divided the subjects into six groups, each containing about 10 subjects.
We also randomly divided the 600 videos into six groups, each containing 100 videos.
Each subject watched three groups of videos, or 300 videos.
In order to avoid any possible bias caused by the same group of videos always being watched by a fixed group of subjects, we adopted a round-robin presentation ordering to cross-assign video groups to different subject groups.
In this way each video would be watched by about 30 subjects from the three different subject groups.
Fig. \ref{round_robin} illustrates the structure of this round-robin approach.
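One simple cyclic schedule consistent with the round-robin structure described above can be sketched as follows. This is illustrative only; the actual group-to-group assignment used in the study may have differed.

```python
def round_robin(n_groups=6, n_sessions=3):
    """Cyclic round-robin assignment of video groups to subject groups.

    schedule[g][s] is the index of the video group that subject
    group g watches in session s. Each subject group sees
    n_sessions distinct video groups, and each video group is
    seen by exactly n_sessions distinct subject groups.
    """
    return [[(g + s) % n_groups for s in range(n_sessions)]
            for g in range(n_groups)]
```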
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/round_robin.pdf}
\caption{Illustration of the round-robin approach used to allocate video groups and subject groups. Grids having the same color indicate video groups watched by subjects in the same group. S1, S2 and S3 are session indices. }
\label{round_robin}
\end{figure}
\subsection{Study Protocol}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/study_flowchart.pdf}
\caption{Flow chart of the online study. }
\label{study_procedure}
\end{figure}
Fig. \ref{study_procedure} shows the flow chart of the steps of the online study.
The volunteer subjects first registered by signing a consent form describing the nature of the human study, after which they received an instruction sheet explaining the purpose, procedures, and display device configuration required for the study.
The subjects were required to use a desktop or laptop computer to complete the experiment, and needed to complete a computer configuration check before the study, to meet the study requirements.
Each subject received a web link to the training session.
Based on the data records captured from the training session, we analyzed whether the subjects' hardware configurations met our requirements.
Following that, the subjects received links to the three test sessions, one every two days.
After the subjects finished the entire study, they were asked to complete a short questionnaire regarding their opinions of the study.
\subsection{Training Session}
The training session was conducted as follows.
After opening the link, the subjects first read four webpages of instructions.
The first webpage introduced the purpose and basic flow of the study.
There were five sample videos at the bottom of the page, labeled as bad, poor, fair, good, and excellent, respectively, exemplifying the possible quality levels they may encounter in the study.
The second webpage provided an explanation of how to use the rating bar.
The third webpage introduced the study schedule and how to submit data.
The last webpage explained other particulars, such as suggested viewing distance, desired resolution settings, the use of corrective lenses if normally worn, and so on.
The subjects were required to display each instructional webpage for at least 30 seconds before proceeding to the next page.
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/4_Study/slider_demo_v3.png}
\caption{Screenshot of the rating bar used in the online human study. }
\label{rating_bar}
\end{figure}
The subjects then entered the experiential training phase and started watching videos.
After a subject finished watching a gaming video, a rating bar appeared.
It was emphasized that they were to provide ratings of video quality, rather than of content or other aspects.
On the rating page, a continuous Likert scale \cite{likert1932technique} was displayed, as shown in Fig. \ref{rating_bar}.
The quality range was labeled from low to high with five guide markers: BAD, POOR, FAIR, GOOD, and EXCELLENT.
The initial position of the rating cursor was randomized.
The subject was asked to provide an overall opinion score of video quality by dragging the marker anywhere along the continuous rating bar.
After the subject clicked the ``Submit" button, the marker's final position was considered to be the rating response.
Then the next video was presented, and the process repeated until the end of the session.
The rating scores received from all the subjects were linearly mapped to a numerical quality score in [0, 100].
All of the videos presented in each session were displayed in a random order, and each appeared only once.
The order of presentation was different across subjects.
\subsection{Test Session}
Each subject participated in a total of three test sessions, each about 30 minutes.
The subjects were allowed to take a break in the middle of the session, if they desired.
The sessions were provided to each subject on alternating days to avoid fatigue and memory bias.
The steps of the test session were similar to the training session, except that there was no time limit on viewing of the instruction page, so subjects could quickly browse and skip the instructions.
\subsection{Post Questionnaire}
We received feedback from 53 of the participants regarding aspects of the study.
\subsubsection{Video Duration}
We asked the subjects for their opinions on the video playback duration.
A summary of the results is given in Table \ref{question_duration}.
Among the subjects who participated in the questionnaire, 79\% believed that the durations of the observed videos (8-9 seconds) were long enough for them to accurately rate the video quality.
The results in the Table indicate that the video durations were generally deemed to be satisfactory.
\begin{table}
\centering
\caption{The Opinion of Study Participants About Video Duration}
\label{question_duration}
\begin{tabular}{cccc}
\toprule
& Long enough & Not long enough & Could be shorter \\ \midrule
No. & 42 (79.2\%) & 8 (15.1\%) & 3 (5.7\%) \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Dizziness}
Another issue is that some participants may feel dizzy when watching some gaming videos, especially those that contain fast motion.
From the survey results in Table \ref{question_dizziness}, about two-thirds of the subjects did not feel any discomfort during the test, while the remaining one-third suffered varying degrees of dizziness.
This is an important issue that should be considered in other subjective studies of gaming, since it may affect the reliability of the final data.
\begin{table}
\centering
\caption{Opinions of Study Participants Regarding Video-Induced Dizziness}
\label{question_dizziness}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccc}
\toprule
& None & \textless{}30\% & 30\%$\sim$50\% & 50\%$\sim$75\% & \textgreater{}75\% \\ \midrule
No. & 34 (64.2\%) & 16 (30.2\%) & 2 (3.8\%) & 1 (1.9\%) & 0 \\ \bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Demographics}
All of the participants were between 18 and 22 years old.
Fig. \ref{watch_video_survey} plots the statistics from the answers to two questions: the total time typically spent watching gaming videos each week, and the devices they used to watch gaming videos.
Approximately 30\% of the participants did not watch gaming videos, while 50\% of them watched at least 2 hours of gaming videos a week.
Most of the subjects watched gaming videos on computers including laptops and desktops, while 30\% of them watched gaming videos on mobile phones.
This suggests that it is of interest to conduct additional research on the quality of gaming videos on mobile devices.
\begin{figure}
\centering
\subfigure[]{
\label{watch_video_time}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/4_Study/survey_watchtime-eps-converted-to.pdf}}
\subfigure[]{
\label{watch_video_device}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/4_Study/survey_device-eps-converted-to.pdf}}
\caption{Demographic details of the participants (a) Typical number of total hours watching gaming videos each week. (b) Device used to watch gaming videos (multiple choice question). }
\label{watch_video_survey}
\end{figure}
\subsection{Data Recording}
In addition to the subject’s subjective quality scores, we also recorded information on the subject’s computing equipment, display, network conditions, and real-time playback logs.
These data helped us guarantee the reliability of the ratings collected.
Examination of the collected data, along with feedback from the subjects, revealed no issues worth acting upon.
We also recorded the random initial values of the rating cursor for each displayed video and compared them against the final scores.
This was done to ensure that the subjects were responsive and moved the cursor.
We recorded the operating system used by each subject, of the three allowed types: Windows, Linux and macOS, and the model and version of the browser.
\subsection{Challenges}
\label{stall_resolution_issue}
During our study, we encountered two issues which had to be addressed to ensure the reliability of the collected data.
The first was video stalls caused by poor Internet conditions.
If a subject’s Internet was unstable, a video could be delayed at startup or paused during playback.
This must be avoided, since the subjects might account for any delays, pauses or rebuffering events when giving their quality ratings, leading to inaccurate results.
The second problem was artifacts arising from automatic rescaling by the client device.
For example, if a subject were to not set the browser to full screen mode, their device might spatially scale the videos to fit the current window size, introducing rescaling artifacts that could affect the subjects' video quality scores.
We took the following steps to deal with these issues.
\subsubsection{Video Stalls}
To avoid stalls, we applied several protocols.
First, each video was required to download entirely before playback.
As mentioned, the videos were of 8-9 sec duration.
While most of the videos had file sizes below 20 megabytes (MB), if any exceeded this, we applied very light, visually lossless compression to reduce their size.
In this way, we were able to reduce the burden of the Internet download process.
Likewise, as each video was playing, download of the next two videos would commence in the background.
By preloading the videos to be played next, the possibility of video playback problems arising from network instabilities was reduced.
We also recorded a few relevant parameters of each video playback to determine whether each subject's Internet connection was stable and whether the videos played correctly.
We recorded the playing time of each video on the subject's device, then compared it with the actual duration of the video, to detect and measure any playback delays.
We calculated the total (summed) delay times of all videos played in each session, and found that the accumulated delay time over all sessions of all subjects was less than 0.5 sec, indicating that all of the videos played smoothly.
We attributed this to the fact that all subjects participated in the same local geographic area (Austin, Texas), avoiding problems encountered in large, international online studies.
\subsubsection{Rescaling Effects}
On most devices, videos are automatically rescaled to fit the screen size, which can introduce rescaling artifacts that may significantly alter the perceived video quality, resulting in inaccurate quality ratings.
To avoid this difficulty, we recorded the resolution setting of each subjects' device as they opened the study webpage.
We asked all subjects to set their system resolution to 1080x1920, and to display the webpage in full screen mode throughout the study, so the videos would be played at their original resolutions.
Before each session, we recorded the display resolution of the subject's system and the actual resolution of their browser in full screen mode.
We also checked the zoom settings of the system and browser and required the subjects to set it to default (no zoom).
The subjects had to pass these simple criteria before beginning each session.
\subsection{Comparison of Our Study with Laboratory and Crowdsourced Studies}
There are interesting similarities and differences between our online study and conventional laboratory and crowdsourced studies.
\subsubsection{Volume of Collected Data}
The amount of data obtained in the study depends on the number of available videos and of participating subjects.
Laboratory studies usually accommodate only dozens to hundreds of both subjects and original video contents.
By contrast, crowdsourced studies often recruit thousands of workers who may collectively rate thousands of videos.
While our online study was not as large as crowdsourced studies, it was larger than most laboratory VQA studies in regards to both the number of videos and the number of subjects.
\subsubsection{Study Equipment}
The experimental equipment used in laboratory studies is provided by the researchers, and is often of professional/scientific grade.
Because of this, laboratory studies can address more extreme scenarios, such as very high resolutions or frame rates, and high dynamic display ranges.
Crowdsourced studies rely on the subjects' own equipment, which generally implies only moderate playback capabilities, which limits the research objectives.
In large crowdsourced studies, many of the participants may be quite resource-light.
These are also constrained by the tools available in the crowdsourcing platform being used.
For example, MTurk does not allow full-screen video playback.
Therefore, 1080p videos are often automatically downscaled to adapt to the platform page size, introducing rescaling distortions.
By comparison, the subject pool in our experiment was composed of motivated university students who already are required to possess adequate personal computing resources and who generally have access to gigabit Internet in the Austin area.
\subsubsection{Reliability}
The reliability of the study mainly depends on control over the experimental environment and on the veracity and focus of the subjects.
Laboratory studies are conducted in a professional scientific setting, and the recruited subjects generally derive from a known, reliable pool.
They also personally interact with and are accompanied by the researchers.
The online study that we conducted was similar: although it was carried out remotely, a similarly reliable subject pool was recruited, and the subjects remained in remote communication with the research team, both to receive the study instructions and to obtain assistance in configuring their environments.
This was felt to be an optimal approach to address Covid restrictions at the time.
A crowdsourced study is limited by the test platform, and by the often questionable reliability of the remote, unknown subjects, with whom communication is inconvenient.
Moreover, substantial subject validation and rejection protocols must be devised to address the large number of inadequately equipped, distracted, or even frankly dishonest subjects \cite{ghadiyaram2016massive, ying2019patches, ying2020patch}.
\section{Subjective Data Processing}
\label{data_process}
We next describe how the collected opinion scores were processed into Mean Opinion Scores (MOS).
The raw opinion scores were first converted into z-scores.
Let $s_{ijk}$ denote the score provided by the $i$-th subject on the $j$-th video in session $k = \{1, 2, 3\}$.
Since each video was only rated by half of the subjects, let $\delta(i,j)$ be the indicator function
\begin{equation}
\delta(i,j) =
\begin{cases}
1 & \text{if subject $i$ rated video $j$} \\
0 & \text{otherwise}.
\end{cases}
\label{indicate_fun}
\end{equation}
The z-scores were then computed as follows:
\begin{equation}
\begin{aligned}
&z_{ijk} = \frac{s_{ijk} - \bar{s}_{ik}}{\sigma_{ik}},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
&\bar{s}_{ik} = \frac{1}{N_{ik}}\sum_{j=1} ^{N_{ik}} s_{ijk}\\
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
&\sigma_{ik} = \sqrt{\frac{1}{N_{ik}-1} \sum_{j=1} ^{N_{ik}} (s_{ijk} - \bar{s}_{ik})^2}, \\
\end{aligned}
\end{equation}
\noindent
where $N_{ik}$ is the number of videos seen by subject $i$ in session $k$.
The z-scores from all subjects over all sessions were computed to form the matrix $\{z_{ij}\}$, where $z_{ij}$ is the z-score assigned by the $i$-th subject to the $j$-th video, $j \in \{ 1,2 \ldots 600 \}$.
The entries of $\{z_{ij}\}$ are empty when $\delta(i,j)=0$.
Subject rejection was conducted to remove outliers, following the recommended procedure described in ITU-R BT 500.13 \cite{series2012methodology}, resulting in 5 of the 61 subjects being rejected.
The z-scores $z_{ij}$ of the remaining 56 subjects were then linearly rescaled to $[0, 100]$.
Finally, the MOS of each video was calculated by averaging the rescaled z-scores:
\begin{equation}
MOS_j = \frac{1}{N_j} \sum_{i=1} ^N z_{ij}'\delta(i,j),
\end{equation}
where $z_{ij}'$ are the rescaled z-scores, $N_j = \sum_{i=1} ^N \delta(i,j)$ is the number of subjects who rated video $j$, and $N = 56$ is the number of retained subjects.
The MOS values all fell in the range $[4.52, 95.95]$.
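The z-scoring and MOS computation described above can be sketched as follows. This is an illustrative Python sketch: the per-session score layout is an assumption of the sketch, and the ITU-R BT.500 subject rejection step is omitted.

```python
import numpy as np

def compute_mos(session_scores):
    """Convert raw opinion scores into MOS.

    session_scores: list of (n_subjects, n_videos) arrays, one per
    session, with np.nan wherever a subject did not rate a video
    in that session.
    """
    z_sessions = []
    for s in session_scores:
        s = np.asarray(s, dtype=np.float64)
        # Per-subject mean and (ddof=1) std over the videos rated
        # in this session, matching the formulas above.
        mu = np.nanmean(s, axis=1, keepdims=True)
        sd = np.nanstd(s, axis=1, ddof=1, keepdims=True)
        z_sessions.append((s - mu) / sd)
    # Each (subject, video) pair is rated in at most one session,
    # so nanmean over sessions recovers the z-score matrix {z_ij}.
    zij = np.nanmean(np.stack(z_sessions), axis=0)
    # Linearly rescale the z-scores to [0, 100].
    zmin, zmax = np.nanmin(zij), np.nanmax(zij)
    z100 = 100.0 * (zij - zmin) / (zmax - zmin)
    # MOS of each video: average of rescaled z-scores over subjects.
    return np.nanmean(z100, axis=0)
```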
\subsection{Subject-Consistency Test}
To assess the reliability of the collected subjective ratings, we performed inter and intra subject consistency analysis in the following two ways.
\paragraph{\textit{Inter-Subject Consistency}}
We randomly divided the subjective ratings obtained on each video into two disjoint, equal-sized groups, calculated two MOS for each video, one from each group, and computed the SROCC between the two resulting sets of MOS.
We conducted 100 such random splits, obtaining a median SROCC of \textbf{0.9400}, indicating a high degree of internal consistency.
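The split-half consistency check can be sketched as follows. This is illustrative only; the exact random splits used in the study are not reproduced.

```python
import numpy as np
from scipy.stats import spearmanr

def split_half_srocc(ratings, n_splits=100, seed=0):
    """Median SROCC between the MOS of two random halves of the
    ratings of each video.

    ratings: (n_subjects, n_videos) array, np.nan for missing ratings.
    """
    rng = np.random.default_rng(seed)
    sroccs = []
    for _ in range(n_splits):
        mos_a, mos_b = [], []
        for col in ratings.T:                  # one video at a time
            r = rng.permutation(col[~np.isnan(col)])
            half = len(r) // 2
            mos_a.append(r[:half].mean())      # MOS from group 1
            mos_b.append(r[half:].mean())      # MOS from group 2
        sroccs.append(spearmanr(mos_a, mos_b).correlation)
    return float(np.median(sroccs))
```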
\paragraph{\textit{Intra-Subject Consistency}}
The intra-subject reliability test provides a way to measure the degree of consistency of individual subjects \cite{hossfeld2013best}.
We thus measured the SROCC between the individual opinion scores and the MOS.
A median SROCC of \textbf{0.7804} was obtained over all of the subjects.
Both the inter and intra consistency experiments illustrate the high degree of reliability and consistency of the collected subjective ratings.
\subsection{Analysis}
Table \ref{LIVE-gaming_info} lists the particulars of the LIVE-YT-Gaming database.
The overall MOS histogram of the database is plotted in Fig. \ref{MOS_hist}, showing a right-skewed distribution.
Fig. \ref{game_mos_boxplot} shows the boxplots of the MOS of each game, demonstrating the diverse content and quality distribution of the new database.
\begin{table*}
\caption{Details of the LIVE-YT-Gaming Database}
\label{LIVE-gaming_info}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccccccccccc}
\toprule
Database & Year & Content No & Video No & Game No & Subjective Data & Public & Resolution & FPS & Duration & Format & Distortion Type & Subject No. & Ratings per Video & Data & Study Type \\ \midrule
LIVE-YT-Gaming & 2021 & 600 & 600 & 59 & 600 & Yes & 360p, 480p, 720p, 1080p & 30, 60 & 8-9 sec & mp4 & UGC distortions & 61 & 30 & MOS & Online study \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/5_data_process/mos_database-eps-converted-to.pdf}
\caption{MOS distribution across the entire LIVE-YouTube Gaming Video Quality Database. }
\label{MOS_hist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 1\columnwidth]{Sections_gaming/5_data_process/game_mos_boxplot-eps-converted-to.pdf}
\caption{Boxplot of the MOS distribution of videos for each game in the LIVE-YouTube Gaming Video Quality Database. The x-axis indexes the different games. }
\label{game_mos_boxplot}
\end{figure}
\section{Performance and Analysis}
\label{performance}
To demonstrate the usefulness of the new LIVE-YT-Gaming database, we evaluated the quality prediction performance of a variety of leading public-domain NR VQA algorithms on it.
We selected four popular feature-based NR VQA models: BRISQUE, TLVQM, VIDEVAL, and RAPIQUE, all based on training a Support Vector Regression (SVR) \cite{scholkopf2000new}, a `completely blind,' training-free model, NIQE, and a deep learning based model, called VSFA.
We used the LIBSVM package \cite{chang2011libsvm}, with a Radial Basis Function (RBF) kernel, to implement the SVR for all algorithms that we retrained on the new database.
We implemented VSFA using the code released by the authors.
We also tested the performance of two pre-trained networks, VGG-16 \cite{simonyan2014very} and Resnet-50 \cite{he2016deep}, by applying each network to the videos and using the output of the fully connected layer as features to train an SVR model.
Since our main goal is to evaluate the quality prediction performance of different algorithms for gaming videos, we also include one gaming video quality model, NDNetGaming, the code of which is publicly available.
We evaluated the performance between predicted quality scores and MOS using three criteria: Spearman’s rank order correlation coefficient (SROCC), Pearson’s (linear) correlation coefficient (LCC) and the Root Mean Squared Error (RMSE).
SROCC measures the rank correlation between two sets of scores, and does not require any refitting.
Before computing the LCC and RMSE measures, the predicted quality scores were passed through a logistic non-linearity as described in \cite{VQEG2000}.
Larger values of both SROCC and LCC imply better performance, while larger values of RMSE indicate worse performance.
We randomly divided the database into non-overlapping 80\% training and 20\% test sets.
We repeated the above process over 100 random splits, and report the median performances over all iterations.
At each iteration, the number of samples in the training set and test set were 480 and 120, respectively.
For all models, we made use of the publicly available source code from the authors, and used their default settings.
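The evaluation protocol can be sketched as follows: SROCC is computed directly, while LCC and RMSE are computed after passing the predictions through a logistic nonlinearity. This is a minimal sketch using a standard four-parameter logistic; the exact VQEG logistic form may differ in parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def evaluate(pred, mos):
    """Return (SROCC, LCC, RMSE) between predicted scores and MOS."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    srocc = spearmanr(pred, mos)[0]            # rank correlation, no refit

    # Four-parameter logistic mapping predictions to the MOS scale.
    def logistic(x, b1, b2, b3, b4):
        return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

    p0 = [mos.max(), mos.min(), pred.mean(), pred.std() or 1.0]
    try:
        popt, _ = curve_fit(logistic, pred, mos, p0=p0, maxfev=10000)
        fitted = logistic(pred, *popt)
    except RuntimeError:                       # fit failed: use raw scores
        fitted = pred
    lcc = pearsonr(fitted, mos)[0]
    rmse = float(np.sqrt(np.mean((fitted - mos) ** 2)))
    return srocc, lcc, rmse
```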
\subsection{A New Blind Gaming VQA Model}
We recently created a blind VQA model designed for the quality prediction of UGC gaming videos, which overcomes the limitations of existing VQA models when evaluating both synthetic and authentic distortions.
We will refer to this model as the Game Video Quality Predictor, or GAME-VQP.
Our proposed model utilizes a novel fusion of NSS features and deep learning features to produce reliable quality prediction performance.
In the design of GAME-VQP, a bag of spatial and spatio-temporal NSS features are extracted over several color spaces, supported by the assumption that NSS features from different spaces capture distinctive aspects of perceived quality.
A widely used pre-trained CNN model, Resnet-50, was used to extract deep learning features.
The extracted NSS features and CNN features were each used to train an independent SVR model, then the final prediction score was obtained as the average prediction score of the two models.
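The late-fusion scheme just described, with one RBF-kernel SVR per feature family and averaged predictions, can be sketched as follows. This is an illustrative sketch only: the NSS and CNN feature extractors, SVR hyperparameter selection, and any score calibration are outside its scope.

```python
import numpy as np
from sklearn.svm import SVR

class LateFusionVQP:
    """Sketch of the GAME-VQP fusion scheme: two independent
    RBF-kernel SVRs (one on NSS features, one on CNN features),
    whose predictions are averaged to produce the final score."""

    def __init__(self):
        self.nss_svr = SVR(kernel="rbf")
        self.cnn_svr = SVR(kernel="rbf")

    def fit(self, nss_feats, cnn_feats, mos):
        # Each regressor is trained independently against the MOS.
        self.nss_svr.fit(nss_feats, mos)
        self.cnn_svr.fit(cnn_feats, mos)
        return self

    def predict(self, nss_feats, cnn_feats):
        # Final prediction: average of the two models' scores.
        return 0.5 * (self.nss_svr.predict(nss_feats)
                      + self.cnn_svr.predict(cnn_feats))
```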
We also include results that compare the performance of GAME-VQP with other leading VQA models.
\subsection{Evaluation Results}
\begin{table*}
\caption{Performance Comparison of Various No-Reference VQA Models on The LIVE-YouTube Gaming Video Quality Database Using Non-Overlapping 80\% Training And 20\% Test Sets. The Numbers Denote Median Values Over $100$ Iterations of Randomly Chosen Non-Overlapping 80\% Training And 20\% Test Sets (Subjective MOS vs Predicted MOS). The Boldfaces Indicate A Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates The Prior VQA Model Designed for Gaming Videos. }
\label{performance_model}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
SROCC & 0.2801 & 0.6037 & 0.7484 & 0.8071 & 0.8028 & 0.7762 & 0.5768 & 0.7290 & 0.4640 & \textbf{0.8451} \\
LCC & 0.3037 & 0.6383 & 0.7564 & 0.8118 & 0.8248 & 0.8014 & 0.6429 & 0.7677 & 0.4682 & \textbf{0.8649} \\
RMSE & 16.208 & 13.268 & 11.134 & 10.093 & 9.661 & 10.396 & 13.240 & 11.083 & 15.108 & \textbf{8.878} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table}
\caption{Results of One-Sided Wilcoxon Rank Sum Test Performed Between SROCC Values of The VQA Algorithms Compared In Table \ref{performance_model}. A Value Of "1" Indicates That The Row Algorithm Was Statistically Superior to The Column Algorithm; " $-$ 1" Indicates That the Row Was Worse Than the Column; A Value Of "0" Indicates That the Two Algorithms Were Statistically Indistinguishable. The Boldfaces Indicate The Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates A Prior VQA Model Designed for Gaming Videos. }
\label{performance_statistc_srocc}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
NIQE & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & \textbf{-1} \\
BRISQUE & 1 & 0 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & \textbf{-1} \\
TLVQM & 1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 & 1 & \textbf{-1} \\
VIDEVAL & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
RAPIQUE & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VSFA} & 1 & 1 & 1 & -1 & -1 & 0 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VGG-16} & 1 & -1 & -1 & -1 & -1 & -1 & 0 & -1 & 1 & \textbf{-1} \\
\textit{Resnet-50} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 0 & 1 & \textbf{-1} \\
{\ul NDNetGaming} & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & \textbf{-1} \\
\textbf{GAME-VQP} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{0} \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Results of One-Sided Wilcoxon Rank Sum Test Performed Between LCC Values of The VQA Algorithms Compared In Table \ref{performance_model}. A Value Of "1" Indicates That The Row Algorithm Was Statistically Superior to The Column Algorithm; " $-$ 1" Indicates That the Row Was Worse Than the Column; A Value Of "0" Indicates That the Two Algorithms Were Statistically Indistinguishable. The Boldfaces Indicate The Top Performing Model. The Italics Indicate Deep Learning VQA Models. The Underline Indicates A Prior VQA Model Designed for Gaming Videos. }
\label{performance_statistc_lcc}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccccccccccc}
\toprule
& NIQE & BRISQUE & TLVQM & VIDEVAL & RAPIQUE & \textit{VSFA} & \textit{VGG-16} & \textit{Resnet-50} & {\ul NDNetGaming} & \textbf{GAME-VQP} \\ \midrule
NIQE & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & \textbf{-1} \\
BRISQUE & 1 & 0 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & \textbf{-1} \\
TLVQM & 1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 & 1 & \textbf{-1} \\
VIDEVAL & 1 & 1 & 1 & 0 & -1 & 1 & 1 & 1 & 1 & \textbf{-1} \\
RAPIQUE & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VSFA} & 1 & 1 & 1 & -1 & -1 & 0 & 1 & 1 & 1 & \textbf{-1} \\
\textit{VGG-16} & 1 & -1 & -1 & -1 & -1 & -1 & 0 & -1 & 1 & \textbf{-1} \\
\textit{Resnet-50} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 0 & 1 & \textbf{-1} \\
{\ul NDNetGaming} & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & \textbf{-1} \\
\textbf{GAME-VQP} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{1} & \textbf{0} \\ \bottomrule
\end{tabular}
}
\end{table}
The performances of all models are shown in Table \ref{performance_model}.
To determine whether there exists significant differences between the performances of the compared models, we conducted a statistical significance test.
We used the distributions of the obtained SROCC and LCC values computed over the 100 random train-test iterations.
The non-parametric Wilcoxon Rank Sum Test \cite{wilcoxon1945individual}, which compares the rank of two lists of samples, was used to conduct hypothesis testing.
The null hypothesis was that the median of the row model was equal to the median of the column model; the alternate hypothesis was that the two medians differed.
The tests were conducted at the 95\% confidence level.
A value of `1' in the table means that the row algorithm was statistically superior to the column algorithm, while a value of `-1' indicates the opposite.
A value of `0' indicates that the row and column algorithms were statistically indistinguishable (or equivalent).
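The construction of the significance tables can be sketched as follows. This is illustrative; it assumes one list of SROCC (or LCC) values per model, one value per random train-test split.

```python
import numpy as np
from scipy.stats import ranksums

def significance_matrix(score_lists, alpha=0.05):
    """Pairwise one-sided Wilcoxon rank-sum comparisons.

    score_lists: one array of per-split SROCC (or LCC) values per model.
    Returns an (n, n) matrix with +1 if the row model is statistically
    superior to the column model, -1 if inferior, and 0 otherwise.
    """
    n = len(score_lists)
    out = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # One-sided tests in each direction at level alpha.
            if ranksums(score_lists[i], score_lists[j],
                        alternative="greater").pvalue < alpha:
                out[i, j] = 1
            elif ranksums(score_lists[i], score_lists[j],
                          alternative="less").pvalue < alpha:
                out[i, j] = -1
    return out
```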
The statistical significance results are tabulated in Tables \ref{performance_statistc_srocc} and \ref{performance_statistc_lcc}.
As may be observed, our GAME-VQP performed significantly better than the other algorithms.
VIDEVAL and RAPIQUE performed well, but still fell short despite their excellent performance on UGC videos.
The completely blind NR VQA model (NIQE) did not perform well in the new database.
This is understandable because the pristine model used by NIQE was created using natural pristine images, while gaming videos are synthetically generated and have very different statistical distributions.
One could imagine a version of NIQE built using pristine gaming videos.
Although BRISQUE extracts features similar to NIQE, training it on the UGC gaming videos produced much better results.
The relative results of NIQE and BRISQUE suggest that although the statistical structures of gaming videos differ from those of natural videos, they nevertheless possess regularities that may be learned using NSS features.
TLVQM captures motion characteristics from the videos and delivered better performance than BRISQUE, indicating the importance of accounting for motion in gaming videos.
The performance of the deep learning VSFA model was close to that of RAPIQUE and VIDEVAL, and better than the pre-trained Resnet-50 and VGG-16 models.
These results show that deep models are able to capture the characteristics of synthetic videos, suggesting the potential of deep models for gaming VQA.
VIDEVAL and RAPIQUE both performed well, showing the effectiveness of combining NSS features with CNN features.
This also suggests that not all NSS and CNN features contribute to UGC gaming video quality prediction, since GAME-VQP, which uses a selected subset of such features, outperformed those models.
By selecting fewer features that deliver high performance, our gaming VQA model produced results that were better than all of the other models.
\begin{figure}
\centering
\subfigure[]{
\label{performance_boxplot:SROCC}
\includegraphics[width=0.9\columnwidth]{Sections_gaming/6_performance/boxplot_srocc_journal-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_boxplot:LCC}
\includegraphics[width=0.9\columnwidth]{Sections_gaming/6_performance/boxplot_lcc_journal-eps-converted-to.pdf}}
\caption{Box plots of the SROCC and LCC distributions of the algorithms compared in Table \ref{performance_model} over 100 randomized trials on the LIVE-YouTube Gaming Video Quality Database. The central red mark represents the median, while the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, while the outliers are individually plotted using the `.' symbol. }
\label{performance_boxplot}
\end{figure}
Fig. \ref{performance_boxplot} shows box plots of the SROCC and LCC correlations obtained over 100 iterations for each of the algorithms compared in Table \ref{performance_model}.
A lower standard deviation together with a higher median SROCC or LCC indicates better and more robust performance.
Our proposed algorithm clearly exceeded all of the other algorithms, in terms of both stability and median performance.
\subsection{Scatter Plot}
\begin{figure}
\centering
\subfigure[]{
\label{performance_scatter:NIQE}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_NIQE-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:TLVQM}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_TLVQM-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:RAPIQUE}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_RAPIQUE-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:Resnet50}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_resnet50-eps-converted-to.pdf}}
\subfigure[]{
\label{performance_scatter:proposed}
\includegraphics[width=0.48\columnwidth]{Sections_gaming/6_performance/scatter_proposed-eps-converted-to.pdf}}
\caption{Scatter plots of predicted quality scores versus MOS trained with an SVR using 5-fold cross validation on all videos in the LIVE-YouTube Gaming Video Quality Database. (a) NIQE, (b) TLVQM, (c) RAPIQUE, (d) Resnet-50, (e) GAME-VQP. }
\label{performance_scatter}
\end{figure}
Scatter plots of VQA model predictions are a good way to visualize model correlations.
To calculate scatter plots over the entire LIVE-YT-Gaming database, we applied 5-fold cross validation and aggregated the predicted scores obtained from each fold.
Scatter plots of MOS versus the quality predictions produced by NIQE, TLVQM, RAPIQUE, Resnet-50 and our proposed model are given in Fig. \ref{performance_scatter}.
As may be observed in Fig. \ref{performance_scatter:NIQE}, the predicted NIQE scores correlated poorly with the MOS.
The other three models, as shown in Figs. \ref{performance_scatter:TLVQM}, \ref{performance_scatter:RAPIQUE} and \ref{performance_scatter:Resnet50}, followed regular trends against the MOS.
As may be clearly seen in Fig. \ref{performance_scatter:proposed}, the distribution of the proposed model predictions are more compact than those of the other models/modules, in agreement with the higher correlations against MOS that it attains.
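The fold-aggregation step described above can be sketched as follows. This is an illustrative reimplementation, not the released evaluation code: the function and variable names are ours, and a single-feature linear least-squares fit stands in for the multi-feature SVR used in the paper, purely to keep the sketch self-contained.

```python
import random

def crossval_predictions(features, mos, n_splits=5, seed=0):
    """Out-of-fold aggregation: shuffle the video indices, split them into
    k folds, train on k-1 folds, predict the held-out fold, and collect the
    predictions so that every video is scored by a model that never saw it.
    A one-feature linear fit stands in for the SVR used in the paper."""
    idx = list(range(len(mos)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::n_splits] for k in range(n_splits)]
    preds = [0.0] * len(mos)
    for k in range(n_splits):
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        mx = sum(features[i] for i in train) / len(train)
        my = sum(mos[i] for i in train) / len(train)
        var = sum((features[i] - mx) ** 2 for i in train)
        slope = sum((features[i] - mx) * (mos[i] - my) for i in train) / var
        for i in folds[k]:
            preds[i] = my + slope * (features[i] - mx)
    return preds
```

Correlations and scatter plots computed on such out-of-fold predictions avoid rewarding memorization of the training folds.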
\label{sec:intro}
Since the classic paper by \cite{partridge_peebles67}, intense observational efforts have focused
on the search for Ly-$\alpha$ emitters at high redshifts.
Although most of the early attempts before the mid 1990s ended in negative results, recent observational advances have enabled us to identify star-forming galaxies at ever increasing redshifts.
Currently, several observational projects focus on finding high-z star-forming galaxies, including programs such as LALA \cite[e.g.,][]{rhoads_etal03},
CADIS \cite[e.g.,][]{maier02}, and the Subaru Deep Field Project \cite[e.g.,][]{taniguchi_etal05},
spectroscopic surveys that use lensing magnification from clusters \citep[e.g.,][]{santos_etal03}, and surveys that combine
Subaru \citep[e.g.,][]{hu_etal04} or HST/ACS/NICMOS imaging \citep[e.g.,][for the GOODS survey]{stanway_etal04, dickinson_etal04} with Keck spectroscopy.
Surveys currently reach up to $z\simeq 7-8$ \cite[e.g., see][for recent results from the NICMOS observations of the HUDF]{bouwens_etal04} and will likely
reach higher redshifts in the coming years (e.g., via JWST).
The hydrogen Ly-$\alpha$ line is a very promising way to probe the high-redshift universe. Besides yielding redshifts, the shape, equivalent width and offset of the Ly-$\alpha$ line
from other emission/absorption lines potentially convey valuable information about the geometry, kinematics, and underlying stellar population of the host galaxy.
Furthermore, after escaping the environment of the host galaxy, Ly-$\alpha$ photons are scattered in the surrounding IGM. The presence or absence of observed
Ly-$\alpha$ emission can be used to place constraints on the state of the IGM, useful in constraining for example the epoch and topology
of reionization. Because of the
numerous factors that contribute to the final Ly-$\alpha$ emission, the interpretation of such features can be very complex.
To use all the currently available and future observations in the most effective way possible we need to improve
our theoretical understanding of Ly-$\alpha$ emission from high-redshift objects.
To this end we develop a general Ly-$\alpha$ radiative transfer (RT) scheme for cosmological simulations.
As an example, we apply the RT scheme to
gasdynamics+N-body Adaptive Refinement Tree \citep[ART;][]{kravtsov03}
simulations of galaxy formation.
There are quite a few studies of Ly-$\alpha$ emission from high-z objects \cite[e.g.,][]{gould_weinberg96,haiman_spaans99,loeb_rybicki99,ahn_etal01,ahn_etal02b,zheng_miralda-escude02,
santos04,dijkstra_etal05a,dijkstra_etal05b}. The problem these studies address is highly complex and has many unknowns. Inevitably, most of these studies
had to make simplifying assumptions at some point.
Usually, a high degree of symmetry is assumed for the emitting source and for its density, temperature, and velocity fields.
The same is the case with respect to the processes that are responsible for the production of Ly-$\alpha$ photons.
On the other hand, cosmological simulations hopefully capture most of the basic elements, thus lifting practical constraints that existed in these previous studies.
There is a small number of related studies using
cosmological simulations \citep{fardal_etal01,furlanetto_etal03,barton_etal04,furlanetto_etal05,cantalupo_etal05,ledelliou_etal05a,ledelliou_etal05b}.
Some of these simulations lack crucial processes such as radiative cooling of the gas and consistent RT,
the various sources of Ly-$\alpha$ photons,
and/or sufficient resolution to resolve the clumpiness of the gas.
Furthermore, most of these studies
do not perform Ly-$\alpha$ RT, but rather they assume that the observer sees whatever is being emitted initially,
simply modified by $e^{-\tau}$ with $\tau$ the optical depth for Ly-$\alpha$ scattering due to neutral hydrogen between the emission
point and the observer.
Namely, in most cases Ly-$\alpha$ spectra from simulations are treated as {\it absorption} spectra
when, in reality, they are {\it scattering} spectra \cite[see, e.g.,][]{gnedin_prada04}. For gas well outside the source of emission this is an
appropriate approximation since scattering off the direction of viewing removes the photons that could be observed and thus appears as effective
absorption. This is no longer true for the source of emission itself, since photons that were originally emitted in directions different from
the direction of observation may scatter into this direction.
It is important to make clear the difficulty of implementing a Ly-$\alpha$ RT scheme for cosmological simulations.
The classical problem of resonance RT,
relevant to a wide range of applications
from planetary atmospheres to accretion disks, has been quite extensively studied in the
literature \cite[e.g.,][]{zanstra49,hummer62,auer68,avery_house68,adams72,
harrington73,neufeld90,ahn_etal01,ahn_etal02b}.
However, analytical solutions derived in the past are applicable only to certain specific conditions.
On the other hand, the slow convergence of the numerical techniques used limited numerical studies to
optical thicknesses that are relatively low compared to those encountered in high-redshift galaxies (and in cosmological simulations
of high-redshift galaxies, as we will show).
Thus, unlike
previous studies, most of which focused on the classical problem of resonant RT in a
semi-infinite slab, in cosmological simulations one
has to solve simultaneously thousands or even millions of these problems.\footnote{For example, in the case of Adaptive Mesh Refinement (AMR) codes,
each time a photon enters a simulation cell one has the equivalent of a new slab RT problem.} Furthermore, having in mind
existing and future cosmological simulations that can achieve sufficiently high resolution to resolve the gas clumpiness and that treat cooling appropriately,
we anticipate column densities that are orders of magnitude higher than those
found in lower resolution simulations without cooling. In this case we need an RT algorithm much faster than
the more standard direct Monte Carlo approach (which, however, is our starting point) of previous studies.
Thus, we must develop RT acceleration methods that, together with the highly parallel nature of the RT problem (which lets us
exploit many parallel machines), can make the Ly-$\alpha$ RT problem tractable.
The paper is organized as follows. In \S~\ref{sec:rt} we discuss the RT scheme.
More specifically, in \S~\ref{montecarlo} we present the basic Monte Carlo algorithm,
in \S~\ref{sec:test} we present tests of the basic algorithm, in \S~\ref{sec:accel} we discuss
the acceleration methods we use to speed up the RT, and in \S~\ref{images_spectra} we present the method by which images and spectra are constructed.
In \S~\ref{app_sims} we discuss in detail an application of the Ly-$\alpha$ RT code to ART simulations.
More specifically, in \S~\ref{sec:sims} we briefly give some information
about the ART simulations. In \S~\ref{intrinsic_emission} we discuss the
intrinsic Ly-$\alpha$ emission of the specific simulated Ly-$\alpha$ emitter we focus on.
In \S~\ref{beforert} we present results on the emitter before RT.
In \S~\ref{afterrt} we discuss results after performing the RT, and with/without the Gunn-Peterson (GP) absorption,
as well as with/without the red damping wing of the GP absorption.
In \S~\ref{sec:conclusions} we discuss and summarize our results and
conclusions.
\section{The Ly-$\alpha$ Radiative Transfer}
\label{sec:rt}
\subsection{The basic Monte Carlo code}
\label{montecarlo}
The following discussion assumes in various places a cell structure for the simulation outputs, as is inherently the case in AMR codes.
However, the Ly-$\alpha$ RT code we discuss is applicable to outputs from all kinds of cosmological simulation codes, since
one can always create an effective mesh by interpolating the values of the various physical parameters. The
size of the mesh cell can be motivated by resolution related
scales (e.g., the softening scale, or
larger if convergence tests with respect to the Ly-$\alpha$ RT justify a larger scale).
Thus, in what follows we refer to simulation cells whether or not the direct output of the cosmological simulation code has a cell structure.
The initial emission characteristics (simulation cell, frequency, etc.) of each
photon depend on the specific physical conditions, thus we defer this discussion to \S \ref{app_sims} where an
application to a Ly-$\alpha$ emitter produced in ART cosmological simulations is presented.
After determining the initial characteristics for each photon, we follow a series of scatterings up to a certain scale
where the detailed RT stops. This scale is to be determined via a convergence study.
In this subsection we describe the basic steps of the algorithm.
\subsubsection{Propagating the photon}
For every scattering we generate the optical depth, which determines the spatial displacement of the photon,
by sampling the probability
distribution function $e^{-\tau}$
\begin{equation}
\tau=-\ln(R) \, ,
\label{taur}
\end{equation}
with $R$ a uniformly distributed random number.
This optical depth is equal to
\begin{equation}
\tau=\int\limits_{0}^l \int\limits_{-\infty}^{\infty} d\tilde{l}du_{p} \sigma_{L}(\nu(1-u_{p}/c)) \sqrt{\frac{m_{p}}{2 \pi k_{B} T}}
n_{HI}\exp{\left({-\frac{m_{p} u_{p}^2}{2k_{B}T}}\right)} \, ,
\label{tau}
\end{equation}
with $n_{HI}$ the number density of neutral hydrogen. The function $\sigma_{L}$ is the scattering cross section
of Ly-$\alpha$ photons as a function of frequency, defined in the rest frame of the hydrogen atom as
\begin{equation}
\sigma_{L}(\nu) = f_{12} \frac{\pi e^2}{m_{e} c} \frac{\Delta \nu_{L}/2 \pi}{(\nu-\nu_{0})^2+ (\Delta \nu_{L}/2)^2} \, ,
\label{sigma}
\end{equation}
where $f_{12}=0.4162$ is the Ly-$\alpha$ oscillator strength, $\nu_{0}=2.466 \times 10^{15}$ Hz is the line center frequency,
$\Delta \nu_{L}=9.936 \times 10^7$ Hz is the natural width of the line, and other symbols have their usual meaning.
In equation (\ref{tau}) the fact that the photons are encountering atoms with a Maxwellian distribution of thermal
velocities has been taken into account.
Integrating over the distribution of velocities, the resulting cross section in the observer's frame is
\begin{equation}
\label{voigt}
\sigma(x)= f_{12} \frac{\sqrt{\pi} e^2}{m_{e} c \Delta \nu_{D}} H(\alpha,x)
\end{equation}
where
\begin{equation}
H(\alpha,x)=\frac{\alpha}{\pi} \int_{-\infty}^{\infty} \frac{e^{-y^2}}{(x-y)^2 +\alpha^2} dy
\end{equation}
is the Voigt function, $x=(\nu-\nu_{0})/\Delta \nu_{D}$ is the relative frequency of the incident photon
in the observer's frame with $\Delta \nu_{D}= \sqrt{2 k_{B} T/(m_{p} c^{2})} \nu_{0}$ the Doppler
width, and $\alpha=\Delta \nu_{L}/2 \Delta \nu_{D}$ with $\Delta \nu_{L}$ the natural line width.
Assuming that $\sigma$ is independent of $\tilde{l}$, the optical depth is given by
\begin{equation}
\tau=n_{HI} \sigma(x) l \, .
\label{tau2}
\end{equation}
When applied to cosmological simulations, equation~(\ref{tau2}) is substituted by a sum
of terms similar to the r.h.s. This sum is over the different cells (=different physical conditions such as neutral hydrogen density, temperature, etc.)
that the photon crosses until it reaches $\tau$ and gets scattered.
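Schematically, the per-cell accumulation can be written as follows. This is a simplified one-ray sketch using a hypothetical list of `(n_HI, sigma, dl)` tuples; the actual code walks the 3-D cell structure and recomputes $\sigma$ from each cell's temperature, density, and bulk velocity.

```python
import math
import random

def distance_to_scatter(cells, rng=random.random):
    """Sample tau = -ln(R) and march through the ray's cell crossings,
    accumulating tau_i = n_HI,i * sigma_i * dl_i per cell, until the sampled
    depth is reached inside some cell.  `cells` is a hypothetical list of
    (n_HI, sigma, dl) tuples along the ray; returns the distance to the next
    scattering, or None if the photon leaves the listed cells first."""
    tau_target = -math.log(rng())
    tau = 0.0
    dist = 0.0
    for n_hi, sigma, dl in cells:
        dtau = n_hi * sigma * dl
        if tau + dtau >= tau_target:
            # the scattering point lies inside this cell
            return dist + (tau_target - tau) / (n_hi * sigma)
        tau += dtau
        dist += dl
    return None
```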
For the Voigt function we
use the following analytic fit, which is accurate to better than $1 \%$ for temperatures
$T> 2$K (N. Gnedin, personal communication)
\begin{eqnarray}
\nonumber
V(\alpha,\nu) & \equiv & \frac{1}{\sqrt{\pi} \Delta \nu_{D}} H(\alpha,x)= \frac{1}{\Delta \nu_{D}} \phi(x) \\
& = & \frac{1}{\Delta \nu_{D}} \left[ q + \frac{e^{-\tilde{x}}}{1.77245385} \right]
\end{eqnarray}
where $\tilde{x}=x^2$, and $q=0$ if $z=(\tilde{x}-0.855)/(\tilde{x}+3.42) \le 0$ and
\begin{eqnarray}
\nonumber
q & = & z\left(1+\frac{21}{\tilde{x}}\right)\frac{\alpha}{\pi(\tilde{x}+1)} \\
& \times & \left\{0.1117 +z\left[4.421+z(-9.207+5.674z)\right]\right\}
\end{eqnarray}
if $z>0$. The definition in terms of the function $\phi(x)$ is also given since the latter has been used in many previous studies, and
we also use it in what follows.
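Transcribed directly, the fit reads as follows (a sketch of the dimensionless profile $\phi(x)=H(\alpha,x)/\sqrt{\pi}$; divide by $\Delta\nu_{D}$ to obtain $V$):

```python
import math

def phi(alpha, x):
    """Analytic fit to the normalized profile phi(x) = H(alpha, x)/sqrt(pi);
    the constant 1.77245385 in the text is sqrt(pi)."""
    xt = x * x
    z = (xt - 0.855) / (xt + 3.42)
    q = 0.0
    if z > 0.0:  # the guard also avoids the 21/xt division at line centre
        q = (z * (1.0 + 21.0 / xt) * alpha / (math.pi * (xt + 1.0))
             * (0.1117 + z * (4.421 + z * (-9.207 + 5.674 * z))))
    return q + math.exp(-xt) / 1.77245385
```

At line centre this reduces to $1/\sqrt{\pi}$, and far in the wings it approaches the damping wing $\alpha/(\pi x^{2})$.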
If in addition to the thermal motion of the atoms there is bulk motion, such as peculiar or Hubble flow velocities,
in equation (\ref{voigt}) we use
$x_{f}=x-(v_{fz}/c)\nu_{0}/ \Delta \nu_{D}$, where $v_{fz}$ is the component of the fluid bulk velocity along the
direction of the incident photon.
In equation (\ref{tau}) the cross section $\sigma$ becomes $\tilde{l}$-dependent when Hubble expansion is taken into account.
In this case
the equation is an integral and does not reduce to the simple algebraic equation~(\ref{tau2}).
To propagate the photon one must solve for
the step which is the upper limit of the integral.
In the simple examples discussed in \S \ref{sec:test},
things are relatively simple even when the Hubble expansion is included, since
in these cases there is homogeneity and isothermality and no sum over cells is required. In those cases, Hubble expansion is included as follows:
(1) we make a first guess for $l$ using the Hubble velocity at the current point; (2) we advance the photon by a certain fraction of $l$;
and (3) we refine the step as necessary until $\tau$ is reached within a specified tolerance.
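A one-dimensional sketch of this step-refinement is given below. The helper is hypothetical and uses an idealized scalar opacity $d\tau/d\tilde{l}$; the actual code traverses the 3-D grid and evaluates the opacity from the local frequency shift.

```python
def solve_path_length(tau_target, dtau_dl, step, tol=1.0e-6):
    """Find the path length l at which the integral of dtau_dl from 0 to l
    equals tau_target: march forward in trapezoid sub-steps of size `step`,
    then bisect the final sub-step down to `tol`.  Assumes the target depth
    is reached within the medium (hypothetical helper)."""
    tau, l = 0.0, 0.0
    while True:
        dtau = 0.5 * (dtau_dl(l) + dtau_dl(l + step)) * step
        if tau + dtau >= tau_target:
            lo, hi = l, l + step  # the solution lies in this sub-step
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if tau + 0.5 * (dtau_dl(l) + dtau_dl(mid)) * (mid - l) < tau_target:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        tau += dtau
        l += step
```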
Note that the simple tests of the code presented in \S \ref{sec:test} do not include peculiar motions.
In the actual simulations the peculiar velocities rather than the Hubble
expansion are dominant on the relevant scales (e.g., for the emitter we focus on, the
mean radial component of the peculiar motion dominates over the Hubble expansion up to about
$80$ physical kpc). In the detailed RT which we perform within such distances, we approximate the subdominant
Hubble expansion velocity within a certain cell by the
expansion velocity that corresponds to the center of that cell. This is calculated to have a negligible effect on the
results.
The $n=2$ state of atomic hydrogen consists of the $2S_{1/2}, 2P_{1/2}$ and $2P_{3/2}$ substates, whereas the
$n=1$ state consists of $1S_{1/2}$.
According to the electric dipole selection
rules, the allowed transitions are $2P_{1/2}$ to $1S_{1/2}$ and $2P_{3/2}$ to $1S_{1/2}$, whereas $2S_{1/2}$ corresponds
to destruction of the initially absorbed Ly-$\alpha$ photon, since this state de-excites through
the emission of two continuum photons.
The multiplicity of each of these states is $2J+1$. Thus, when a Ly-$\alpha$ photon is absorbed, the relative probabilities that the atom
is found in the $2P_{1/2}$ and $2P_{3/2}$ states are
$1:2$.
Collisions can potentially
cause the $2P\rightarrow 2S$ transition in which case the photon gets destroyed.
A similar destruction effect can be caused by the existence of dust. Both these destruction mechanisms are
briefly discussed in the context of the ART simulations in \S \ref{sec:collisions} and \S \ref{sec:dust}, respectively.
Considering the $2P_{1/2}$ and $2P_{3/2}$ cases separately, one would have to modify both the Voigt function
and the velocity distribution of the scattering atom discussed in the next section \citep[see, e.g., ][]{ahn_etal01}.
However, the level splitting between the two $2P$ states is small, just 10 GHz. This corresponds to a velocity width of $\sim 1$ km/s, much
smaller than the width due to thermal velocities in media with roughly $T>100$ K. In addition, even for lower temperatures, this level splitting
is still small for high optical depths. In our case, the thermal, peculiar, and Hubble velocities are all more important than the splitting, and
combined with the
fact that we have high optical depths, we do not make the distinction between the two sublevels.
As discussed below, however, the different fine structure levels are taken into account when choosing scattering phase functions, important for
polarization calculations that we will present in a future paper.
\subsubsection{The scattering}
\label{scattering}
After determining the point in space where the photon will be scattered next, we choose the thermal velocity components
of the scattering atom. In the two directions perpendicular to the direction of the incident photon the components are drawn
from a (1-D) Gaussian distribution with dispersion equal to $\sqrt{\frac{k_{B} T}{m_{p}}}$.
The component $u_{p}$ of the thermal velocity of the atom along the direction
of the incident photon is drawn from the distribution
\begin{equation}
f(v_{p})=\frac{a}{\pi} \frac{e^{-v_{p}^{2}}}{(x-v_{p})^2+a^2} H^{-1}(a,x)\, ,
\label{distr}
\end{equation}
with $v_{p}=u_{p} (m_{p}/ 2kT)^{1/2}$.
To draw numbers that follow this distribution we use the method of \citet{zheng_miralda-escude02}.
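This is not the \citet{zheng_miralda-escude02} scheme itself, but a bare-bones rejection sketch of the same distribution illustrates the idea: propose from the Lorentzian factor by inverse-CDF sampling, then accept against the Gaussian factor. This naive version becomes very inefficient for large $|x|$, which is precisely what the more elaborate scheme remedies.

```python
import math
import random

def draw_parallel_velocity(a, x, rng=random.random):
    """Rejection sketch for f(v) ∝ exp(-v^2) / ((x - v)^2 + a^2):
    propose v from the Lorentzian factor (tangent inverse-CDF draw),
    accept with probability exp(-v^2)."""
    while True:
        v = x + a * math.tan(math.pi * (rng() - 0.5))
        if rng() < math.exp(-v * v):
            return v
```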
After each scattering we need to assign a new frequency (in the observer's frame) and direction to the photon.
To this end we perform a Lorentz transformation of the frequency and direction of the incident photon from the observer to
the atom rest frame, using the velocity of the atom chosen as described previously.
Although the code ignores the level splitting with respect to the scattering cross section and the velocity distribution, it takes into
account the different phase distributions for core versus wing scatterings, as well as for $2P_{1/2}$ versus
$2P_{3/2}$ scatterings.
For resonant scattering, it is the angular momenta of the three states involved and the multipole order of the emitted
radiation that determine the scattering phase function.
\citet{hamilton40} found that the transition from $2P_{1/2}$ gives totally unpolarized photons
and is characterized by an
isotropic angular distribution function, whereas that from the $2P_{3/2}$ state corresponds to a maximum degree of polarization of 3/7 for a
$90^{\circ}$ scattering \citep[also see][]{chandrasekhar}.
More specifically, the scattering phase function for dipole transition can be written as \citep{hamilton40}
\begin{equation}
W(\theta) \propto 1+\frac{R}{Q} \cos^{2}\theta
\end{equation}
with $R/Q$ the degree of polarization for a $90^{\circ}$ scattering and equal to
\begin{equation}
R/Q=\frac{(J+1)(2J+3)}{26J^{2}-15J-1}
\end{equation}
for the $2P_{3/2} \rightarrow 1S_{1/2}$ transition since $\Delta J=-1,\Delta j=1, J=3/2$ according to Hamilton's conventions, and
\begin{equation}
R/Q=\frac{(2J-1)(2J+3)}{12J^{2}+12J+1}
\end{equation}
for the $2P_{1/2} \rightarrow 1S_{1/2}$ transition with $\Delta J=\Delta j=0, J=1/2$. In
both equations, J is the total angular momentum at the excited
($n=2$) state. Thus, $W(\theta)$ is constant (isotropic) for $2P_{1/2}$ as the excited state, whereas it equals
\begin{equation}
W(\theta) \propto 1+ 3/7 \cos^{2}\theta
\end{equation}
with maximum polarization degree of $3/7$ at a $90^{\circ}$ scattering.
On the other hand, Stenflo (1980) showed that at high frequency shifts (i.e., at the line wings) quantum mechanical interference between the two
lines acts in such a way as to give a scattering behavior identical to that of a classical oscillator, namely
pure Rayleigh scattering. The direction then follows a dipole angular distribution
with Rayleigh polarization of $100\%$
at a $90^{\circ}$ scattering, namely
\begin{equation}
W(\theta) \propto 1+\cos^{2}\theta \, .
\end{equation}
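All three phase functions have the form $W(\mu)\propto 1+b\mu^{2}$ with $\mu=\cos\theta$, so a single rejection routine covers them. The following is our own sketch, with $b=0$ for $2P_{1/2}$ (isotropic), $b=3/7$ for core $2P_{3/2}$ scattering, and $b=1$ for wing (Rayleigh) scattering:

```python
import random

def sample_mu(b, rng=random.random):
    """Draw mu = cos(theta) from W(mu) ∝ 1 + b*mu^2 on [-1, 1] by rejection,
    using a flat envelope scaled by the maximum value 1 + b."""
    while True:
        mu = 2.0 * rng() - 1.0
        if rng() * (1.0 + b) <= 1.0 + b * mu * mu:
            return mu
```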
Lastly,
the frequency of the photon before and after scattering in the rest frame of the atom differs only by the recoil effect. Hence,
\begin{equation}
\tilde{\nu}=\frac{\nu}{ 1 + \frac{h \nu}{m_{p} c^{2}} (1- \cos\theta)}
\end{equation}
where $\nu, \tilde{\nu}$ are the frequency of the incident and scattered photon in the atom rest frame, respectively, the latter modified
due to the recoil effect. This effect is negligible for the environments produced in the simulations.
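The smallness of the effect is easy to quantify: for a $90^{\circ}$ scattering the fractional frequency shift is $h\nu_{0}/m_{p}c^{2}$. A quick back-of-the-envelope check (our own, in cgs units) gives:

```python
# Recoil magnitude for Ly-alpha scattering off a proton, in cgs units
h = 6.62607015e-27    # Planck constant [erg s]
c = 2.99792458e10     # speed of light [cm/s]
m_p = 1.67262192e-24  # proton mass [g]
nu0 = 2.466e15        # Ly-alpha line-centre frequency [Hz]

recoil = h * nu0 / (m_p * c * c)   # fractional shift for a 90-degree scattering
v_recoil_kms = recoil * c / 1.0e5  # the same shift in velocity units [km/s]
```

This is $\sim 10^{-8}$, i.e. $\sim 3\times 10^{-3}$ km s$^{-1}$ in velocity units, negligible next to the thermal widths of even very cold gas.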
After determining the new direction and frequency of the scattered photon in the atom's rest frame we transform back to the observer's frame, and repeat the
whole scattering procedure.
\subsection{Testing the basic scheme}
\label{sec:test}
Here we present some of the tests of the RT code we performed against analytical solutions, as well as other numerical
results that exist in the literature.
In addition to showing the good performance of the code, these tests are presented here as relevant to either Ly-$\alpha$ emitters and/or the way we accelerate the
code when applied in cosmological simulations (see \S \ref{sec:accel}).
\begin{figure*}[thb]
\centerline{{\epsfxsize=3.5truein \epsffile{comp_nr.eps}}\hspace{0.5cm}{\epsfxsize=3.5truein \epsffile{comp_wr.eps}}}
\caption[Comparison with Neufeld]{{\it Left panel:} Emergent spectra
from the Monte Carlo RT ({\it solid histograms}) and as predicted analytically by \cite{neufeld90}
({\it dotted lines}) for 3 different center--of--line optical depths. The agreement between Monte Carlo and analytical result
becomes better with increasing optical depth, as expected since the analytical solution is valid for very optically thick media.
{\it Right panel:} The same as in left panel but in this case the Monte Carlo results are derived with recoil being included,
whereas the analytic solution does not include recoil. The dashed line in the case of $\tau_{0}=10^6$ is obtained by modifying the
spectrum obtained from the Monte Carlo RT without recoil by the factor correcting for recoil (see text for details).
\label{fig:neufeld}}
\end{figure*}
\subsubsection{\citet{neufeld90} test}
\label{sec:neufeld}
\cite{neufeld90} derived an analytic solution in the limit of large optical depth for a source radiating
resonance line photons in a thick, plane-parallel, isothermal semi-infinite slab of uniform density.
The analytic emergent spectrum as a function of frequency shift for a midplane source is
\begin{equation}
J(\pm \tau_{0}, x) = \frac{\sqrt{6}}{24} \frac{x^2}{\alpha \tau_{0}}\frac{1}{\cosh[(\pi^4/54)^{0.5}
(|x^3-x_{i}^3|/\alpha \tau_{0})] }
\label{neuf}
\end{equation}
with $x=(\nu-\nu_{0})/\Delta \nu_{D}$, $\Delta \nu_{D}=\nu_{0} \sqrt{2k_{B}T/(m_{p} c^{2})}$ the thermal Doppler width,
$x_{i}$ the injection frequency shift (zero for injection at line center), and $\alpha$ the ratio of the natural to two times the thermal
Doppler width.
The quantity $\tau_{0}$ is the optical depth from midplane to one
boundary of the slab at the line center.\footnote{Neufeld's definition, used in equation (\ref{neuf}),
is such that the optical depth at frequency shift $x$ is given as $\tau_{x}=\tau_{0} \phi(x)$.
Note that throughout this section, with the exception of equation (\ref{neuf}),
our definition of $\tau_{0}$ is such that the optical depth at frequency shift $x$ is given
as $\tau_{x}=\tau_{0} H(\alpha,x)$.
This definition was chosen following recent studies \citep[e.g.][]{ahn_etal02b,zheng_miralda-escude02} so that comparisons
with these studies be easier. Since
$\phi(x)=H(\alpha,x)/\sqrt{\pi} $, our $\tau_{0}$ is smaller than
Neufeld's by a factor of $\sqrt{\pi}$. Note though that in the following sections we return to the \citet{neufeld90} definition of $\tau_{0}$.}
This analytical solution is valid in the very optically thick limit, with the latter being defined according to \cite{neufeld90}
as $\tau_{0} \ge 10^3/(\sqrt{\pi} \alpha)$. This corresponds to
$\tau_{0}\ge 3.8 \times 10^4$ approximately for a temperature T=10 K assumed in the tests we present here.
In deriving equation (\ref{neuf}) the scattering was assumed to be isotropic.
In addition, coherence in the rest frame of the atom was assumed, which makes the solution valid only in the low density limit;
approximations based on the assumption that wing scatterings dominate were also made (hence the solution is valid
at high optical depths).
Furthermore, note that the classical slab problem is independent of the real size of the slab (all quantities depend on $l/l_{0}$ with $l_{0}$ the actual size of the finite dimension).
Lastly, for this solution it is assumed that the source has unit strength and is isotropic, namely
it emits 1 photon per unit time or $1/ 4 \pi$ photons per unit time and steradian.
For center-of-line injection frequencies
the emerging spectrum has maxima at $x \simeq \pm 0.88119 (\alpha \sqrt{\pi} \tau_{0})^{1/3}$, and an average
number of scatterings $N\simeq 0.909316 \sqrt{\pi} \tau_{0}$ \citep[][with $\tau_{0}$
in these expressions defined using our conventions rather than Neufeld's]{harrington73}.
This scaling of the mean number of scatterings with optical depth in the case of resonant-line RT
in extremely optically thick media was first explained by \citet{adams72}, who understood that photons escape the medium after a series of excursions
to the wings. Before this study it was believed that
the number of scatterings scales with $\tau_{0}^{2}$, as would be predicted by plain spatial random walk arguments \citep{osterbrock62}.
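The quoted peak location is easy to verify numerically from equation (\ref{neuf}) itself. The short check below works in terms of $\alpha\tau_{0}$ in Neufeld's convention, for which (by the conversion in the footnote) the same constant $0.88119$ applies directly:

```python
import math

def neufeld_J(a_tau0, x, x_i=0.0):
    """The analytic emergent spectrum above for a midplane source;
    a_tau0 is alpha*tau_0 in Neufeld's line-centre convention."""
    c = math.sqrt(math.pi ** 4 / 54.0)
    return (math.sqrt(6.0) / 24.0) * (x * x / a_tau0) \
        / math.cosh(c * abs(x ** 3 - x_i ** 3) / a_tau0)

# Locate the blue peak on a grid and compare with 0.88119*(alpha*tau_0)^(1/3)
a_tau0 = 1.0e5
unit = a_tau0 ** (1.0 / 3.0)
grid = [0.001 * i * unit for i in range(1, 3001)]
x_peak = max(grid, key=lambda x: neufeld_J(a_tau0, x))
```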
We briefly review the interpretation given by \citet{adams72} with respect to
the linear scaling of the mean number of scatterings with optical depth, since we refer to it
extensively in the following sections.
The mean number of scatterings is the inverse of the escape probability per scattering.
The escape probability per scattering is the integral of the probability per scattering that a photon is scattered
beyond a certain frequency shift $x_{*}$. \citet{adams72} identified this frequency as the frequency where the photon, while
performing an excursion to the wings, and before returning back to the core, travels an rms distance comparable to the size of
the medium. Note that this is in fact an essential difference in the understanding of resonant-line RT in extremely thick
media compared to the spatial random walk approach. The latter approach assumes that during an excursion to the wings the photon travels an rms distance
much smaller than the size of the medium. Thus, the first step is to determine $x_{*}$.
Using the redistribution function (i.e., the function that gives the probability that a photon with certain frequency shift $x$ before scattering
will have a frequency shift $x^{'}$ after scattering)
one can calculate both the rms frequency shift and the mean frequency shift of a photon which is scattered repeatedly.
For a photon initially in the wings with a frequency shift $x$ \citet{osterbrock62} found that the rms shift is 1 and the mean frequency shift is
$-1/|x|$. For $x\gg1$, the mean shift is much smaller than the rms and the photon is undergoing a random walk in frequency with mean number of scatterings
$\sim x^{2}$. In real space, the rms distance traveled is equal to the square
root of the mean number of scatterings times the mean free path. In the wings, the Voigt profile varies
relatively slowly and the mean free path is $\sim 1/\phi(x) \sim x^{2}/ \alpha$ line center optical depths (we
only focus on the scalings here, hence constants of order unity
are dropped).
Thus, the distance traveled is $x/\phi(x) \sim x^{3}/\alpha$. Setting this rms distance equal
to $\tau_{0}$ we get $x_{*} \sim (\alpha \tau_{0})^{1/3}$, which is in fact the scaling of the frequency shift where the
emergent spectrum takes its maximum value. Thus, going
back to the mean number of scatterings, the escape probability {\it per scattering} will be $\sim \int_{x_{*}}^{\infty} A(x) dx$
with $A(x)$ a function to be determined. According to the previous discussion
$x_{*}$ is the minimum frequency shift for which the photon during an excursion to the wings can travel an
rms distance at least equal to the size of the medium. If, for simplicity, one assumes complete redistribution the probability that a photon is found after
scattering with a shift between $x$ and $x+dx$ is $\phi(x) dx$. However, this is not the probability {\it per scattering}, since the photon will scatter
$\sim x^{2}$ before returning to the core. Thus, $A(x)$ is $\phi(x)/x^{2}$, and $N_{sc} \sim \left[\int_{x_{*}}^{\infty} \phi(x)/x^{2} dx\right]^{-1}$,
with $\phi(x) \sim \alpha/x^{2}$ in the wings. Using the above expression for $x_{*}$ one obtains $N_{sc} \sim \tau_{0}$, with
the constant of proportionality being of the order of
unity.
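For completeness, carrying out this integral makes the linear scaling explicit (dropping order-unity factors, as above):
\begin{equation}
N_{sc} \sim \left[ \int_{x_{*}}^{\infty} \frac{\alpha}{x^{4}} \, dx \right]^{-1}
= \frac{3 x_{*}^{3}}{\alpha} \sim \frac{\alpha \tau_{0}}{\alpha} = \tau_{0} \, .
\end{equation}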
The emerging spectra without and with recoil included are shown in the left and right panel of Figure \ref{fig:neufeld}, respectively.
A convergence test indicates that these results are robust once of order $10^{3}$ or more photons are used.
Referring to the left panel of the figure, the agreement between the results obtained with the code and
the analytic solution gets better at higher optical depths.
As has been already mentioned, the analytic solution is derived after a series of approximations done
on the assumption of optically thick media. For example, when deriving the analytic solution
the Voigt function is set equal to $\alpha /\pi x^2$.
Setting the Voigt function equal to this approximation in the code makes the agreement even better.
The way the spectrum behaves for different $\alpha \tau_{0}$ is expected qualitatively: the higher the optical depth or the lower
the temperature (the higher the $\alpha$), the more difficult it is for the photons to exit the medium and
the photon frequencies must move further away from resonance to escape. Hence,
the peaks of the emerging spectrum occur at higher frequency shifts, and the separation between the two
peaks becomes larger. The width of the peaks gets larger with larger $\alpha \tau_{0}$ in agreement with the dependence of the
optical depth on frequency (i.e., when in the less optically thick regime core photons are relevant and the
optical depth goes as $e^{-x^{2}}$, whereas in the more optically thick regime wing photons are more relevant, and there the
optical depth scales as $1/x^{2}$).
In the right panel of Figure \ref{fig:neufeld} we present numerical results when recoil is included,
along with the analytical solution (as a guide) that does not include recoil.
As expected, including recoil shifts more photons to smaller (more red) frequencies.
The magnitude of the effect can be understood as reflecting the thermalization of photons around frequency
$\nu_{0}$ \citep{wouthuysen52,field59}. This process modifies the photon abundance by $\exp(-x/x_{T})$, with $x_{T}=k_{B}T/h \Delta \nu_{D}$.
Indeed, in the right panel of Figure \ref{fig:neufeld} the dashed line for $\tau_{0}=10^6$ is obtained by modifying by
$\exp(-x/x_{T})$ the emerging spectrum obtained
from the simulation when no recoil is included.
These results are in agreement with the results and interpretations by \citet{zheng_miralda-escude02}.
\subsubsection{\citet{loeb_rybicki99} test}
\citet{loeb_rybicki99} address the RT problem in a spherically symmetric, uniform, radially expanding
neutral hydrogen cloud surrounding a central point source of
Ly-$\alpha$ photons. No thermal motions are included ($T=0$ K). They find that the mean intensity $\tilde{J} (\tilde{r}, \tilde{\nu})$ as a function
of distance from the source $\tilde{r}$ and frequency shift $\tilde{\nu}$ in the diffusion (high optical depth) limit is given by
\begin{equation}
\tilde{J}=\frac{1}{4 \pi} \left( \frac{9}{4 \pi \tilde{\nu}^3}\right)^{3/2} \exp\left(-\frac{9 \tilde{r}^2}{4 \tilde{\nu}^3} \right)
\end{equation}
with $\tilde{\nu}=\nu/\nu_{\star}$, $\nu=\nu_{0}-\nu_{photon}$, $\nu_{0}$ the Ly-$\alpha$ resonance frequency, and $\nu_{\star}$ the frequency where the optical depth
becomes unity. The scaled radius, $\tilde{r}$ is equal to
$r/r_{\star}$, with $r_{\star}$ the
physical distance where the frequency shift due to the Hubble-like expansion of the hydrogen cloud
equals the frequency shift that corresponds to unit optical depth ($=\nu_{\star}$).
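As an illustrative check of this formula (a sketch, not part of the RT code), one can verify numerically that at fixed $\tilde{r}$ the mean intensity peaks at $\tilde{\nu}=(3\tilde{r}^{2}/2)^{1/3}$, which follows from setting $d \ln \tilde{J}/d\tilde{\nu}=0$:

```python
import math

# Sketch: evaluate the Loeb & Rybicki (1999) diffusion-limit mean intensity
# quoted above and verify that, at fixed scaled radius r~, it peaks at
# nu~ = (3 r~^2 / 2)^(1/3).

def J_tilde(r, nu):
    """Dimensionless mean intensity in the diffusion limit."""
    return (1.0 / (4.0 * math.pi)) * (9.0 / (4.0 * math.pi * nu**3)) ** 1.5 \
        * math.exp(-9.0 * r**2 / (4.0 * nu**3))

r = 0.1
nus = [0.01 + 1.0e-4 * i for i in range(20000)]   # dense grid in nu~
nu_peak = max(nus, key=lambda nu: J_tilde(r, nu))
print(nu_peak, (1.5 * r**2) ** (1.0 / 3.0))       # the two agree to grid precision
```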
\begin{figure}[htb]
\begin{center}
\includegraphics[width=9cm]{loeb_rybicki.eps}
\caption[Comparison with \citet{loeb_rybicki99}]{Mean intensity as a function of radius for certain frequency shifts. Solid lines show the results from the
Monte Carlo code and dotted lines show the analytic solution of \citet{loeb_rybicki99}, appropriate in the diffusion limit.
The specific frequency shifts plotted were chosen based on the fact that the diffusion limit is the right limit for $\tilde{\nu} \ll 1$
(for details and definitions see text).}
\label{loeb_rybicki}
\end{center}
\end{figure}
A comparison of the results from the code with the analytic solution is shown in Figure \ref{loeb_rybicki}.
The analytic solution becomes progressively more accurate the higher the optical depth (or the smaller the
frequency shift in the way this problem is parameterized, so that we are still in the core of the line).
In addition, it deviates more and more from the (exact) simulation result at larger $\tilde{r}$, since the larger the $\tilde{r}$ the
more optically thin the medium and thus the further away we are from the assumption of an optically thick medium made by the analytic
solution. Thus, the disagreement at high $\tilde{r}$ is real and not an artifact caused, e.g., by a small number of photons
that would be inadequate to sample the low intensities at large $\tilde{r}$.
\subsubsection{Simple models of Ly-$\alpha$ emitters: Spherical clouds of uniform density and temperature}
\label{simple_models}
Here we develop some simple models of Ly-$\alpha$ emitters. Even though there are no
analytic solutions for these cases, one could compare our results with the published results of \citet{zheng_miralda-escude02}.
More specifically, in this section, following these authors, we model spherical neutral hydrogen clouds. We consider two different cases as far as the emission is concerned. In the first case it is assumed that we have a spherical cloud
with a Ly-$\alpha$ emitting point source at its center. In the second case we assume uniform emissivity, namely a photon is equally likely
to be emitted from any point within the cloud.
For each one
of these two cases we make runs assuming the cloud is static, contracting and expanding. In the latter two cases the
contraction/expansion is assumed to be Hubble-like, namely the velocity of the neutral hydrogen atoms scales linearly with the radius measured
from the center of the cloud. This velocity is set equal to 200 km/s at the edge of the system (and is negative/positive in the case of
contraction/expansion). For each case we perform two runs, one with column density equal to $2 \times 10^{18} \rm{cm}^{-2}$, typical for Lyman limit
systems, and one with column density equal to $2 \times 10^{20} \rm{cm}^{-2}$, typical of Damped Ly-$\alpha$ systems (or line center optical depths
equal to $8.3 \times 10^{4}$ and $8.3 \times 10^{6}$, respectively).
In all cases the temperature is set equal to $2 \times 10^{4}$K. The initial photon frequency
is assumed to be at the line center in the rest frame of the atom.
In all results shown,
the effect of recoil is included. Lastly, 1000 photons were used in all runs.
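As a consistency check (a sketch with assumed atomic data, not taken from the text), the quoted column densities indeed map onto the quoted line-center optical depths if one adopts the standard Ly-$\alpha$ line-center cross section $\sigma_{0} \approx 5.9\times10^{-14}\,(T/10^{4}\,{\rm K})^{-1/2}\,{\rm cm}^{2}$:

```python
# Sketch (assumed atomic data): check that the quoted column densities map
# onto the quoted line-center optical depths, using the standard Ly-alpha
# line-center cross section sigma_0 ≈ 5.9e-14 (T / 1e4 K)^(-1/2) cm^2.

def line_center_tau(N_HI, T):
    sigma0 = 5.9e-14 * (T / 1.0e4) ** -0.5    # cm^2, assumed scaling
    return N_HI * sigma0

T = 2.0e4  # K, the temperature used in all the runs above
print(line_center_tau(2.0e18, T))   # ~ 8.3e4 (Lyman limit system case)
print(line_center_tau(2.0e20, T))   # ~ 8.3e6 (damped Ly-alpha case)
```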
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{uniform_thin.eps}
\includegraphics[width=7.5cm]{point_thin.eps}
\caption[Thin spherical cloud: uniform and central point source emissivity]{{\it Top panel}: frequency distribution of emergent Ly-$\alpha$ photons in the case of a static ({\it solid histograms }), a contracting
({\it dotted histograms}), and an expanding ({\it dashed histograms}), isothermal, spherically symmetric neutral hydrogen cloud
with column density $\rm{N}_{\rm{HI}}=2 \times 10^{18} \rm{cm}^{-2}$ and uniform emissivity. {\it Bottom panel}: same
as the top panel, but the Ly-$\alpha$ photons in this case originate from a central point source.}
\label{thin}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7.5cm]{uniform_thick.eps}
\includegraphics[width=7.5cm]{point_thick.eps}
\caption[Thick spherical cloud: uniform and central point emissivity]{{\it Top panel}: frequency distribution of emergent Ly-$\alpha$ photons in the case of a static ({\it solid histograms }), a contracting
({\it dotted histograms}), and an expanding ({\it dashed histograms}), isothermal, spherically symmetric neutral hydrogen cloud
with column density $\rm{N}_{\rm{HI}}=2 \times 10^{20} \rm{cm}^{-2}$ and uniform emissivity. {\it Bottom panel}: same
as the top panel, but the Ly-$\alpha$ photons in this case originate from a central point source.}
\label{thick}
\end{center}
\end{figure}
The results for the optically thin case are shown in Figure \ref{thin} and those for the optically thick configuration
are shown in Figure \ref{thick}. In both cases the agreement with the results obtained by \citet{zheng_miralda-escude02} is very good.
These spectra can be understood qualitatively using the way
the Neufeld solution behaves depending on the optical thickness.
In the case of an expanding cloud, the photons will escape on average with a redshift because they are doing work on the expansion of the
cloud as they are scattered. Photons with negative frequency shifts (redshifted) can escape, but those with positive frequency shift (blueshifted) will be
scattered at some point and they have to undergo a series of many positive shift scatterings to escape. Hence, the
blue part of the spectrum is suppressed. The situation is reversed in the case of a contracting cloud. It is important to keep in mind that the degree
of suppression of one of the two peaks due to bulk motions depends on factors such as the optical depth and the temperature.
In the case of uniform emissivity and expansion/contraction all spectra become broader because of the different velocities of the emission sites of the photons. In addition,
when the cloud is expanding (contracting), the blue (red) part of the spectrum is not suppressed as much as in the central point source case because, at least,
photons initially emitted close to the edge of the system have some chance of escaping even if they are blue (red).
In the optically thicker cloud, as soon as the photon reaches
a sufficiently large $x$ it is not likely that it will be scattered by an atom with the right velocity to bring the photon into the line center.
Rather, the photon will get another random shift in frequency and will follow an excursion in frequency while at the same time it
diffuses spatially. This along with the fact that the optical depth has a power law rather than Gaussian dependence on $x$
broadens the peaks compared to those of the optically thin case, exactly as discussed for the behavior of the Neufeld solution.
In addition, the emission peaks move further away from the
center compared to those from the optically thin case, since the photons have to be further away from the center of the line in order to escape
when the medium is optically thick.
\subsection{Accelerating the RT}
\label{sec:accel}
The previous tests demonstrated that our basic Monte Carlo scheme works well for the simple test cases.
When using it in its simplest form in high resolution cosmological simulations, such as the ART simulations (see \S \ref{app_sims}), it requires unrealistically
long running times to produce results with sufficient numbers of photons.
This is because in the case of resonant RT in extremely optically thick media,
a significant amount of time is spent on the relatively insignificant core scatterings.
If we define the core through the frequency range where the Doppler profile dominates over the
Lorentzian wings, then roughly speaking the core is given by $\alpha/ \pi x_{c}^{2}=e^{-x_{c}^{2}}/\sqrt{\pi}$, where
$x_{c}=(\nu_{c}-\nu_{0})/\Delta \nu_{D}$ and $\alpha$ is as defined previously.
For a temperature of $10^{5}$ K, the core is roughly $x_{c} = 3.5$. Also, assuming complete
redistribution,\footnote{ In
other words, assuming that the frequency distribution after scattering is independent of the frequency before scattering and
is given by the line profile (i.e., the source function is independent of frequency). The
assumption of complete redistribution
was found to be quite accurate for core photons \citep{unno52,jefferies_white60}. This is intuitively expected since, when in the core,
the photon frequency shift is small or comparable to the thermal velocities of the atoms. Thus, the latter can have
a significant impact on the frequency of the photon and in effect they redistribute it after each scattering according to the
line profile.} the probability per scattering for a core photon
to exit the core is $I/(I+\rm{erf}(x_{c}))$ with $I=2 \alpha/(\pi x_{c})$ and $\rm{erf}(x_{c})=\frac{2}{\sqrt{\pi}} \int_{0}^{x_{c}}e^{-t^{2}}dt$.
That is, roughly, the photon will have to scatter $(I+\rm{erf}(x_{c}))/I=1+\rm{erf}(x_{c})/I \simeq 1+I^{-1} \simeq I^{-1}$ times before exiting the core.
Using
the above core definition, one finds that this is equal to $\sqrt{\pi} e^{x_{c}^{2}}/(2 x_{c})$ and keeping only the dominant dependence on $x_{c}$, this is roughly $e^{x_{c}^{2}}$ or
$\sim 10^{5}$ scatterings.
These scatterings are insignificant in the sense that they happen in such copious amounts, without being accompanied by significant
spatial diffusion, since the latter
occurs mostly through the wings.
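These estimates are easy to reproduce numerically. The sketch below (illustrative only; the Voigt-parameter scaling $\alpha \approx 4.7\times10^{-4}\,(T/10^{4}\,{\rm K})^{-1/2}$ is an assumed standard value, not given in the text) solves the core-boundary condition by bisection and evaluates the expected number of core scatterings:

```python
import math

# Sketch: solve the core-boundary condition
#   alpha / (pi x_c^2) = exp(-x_c^2) / sqrt(pi)
# by bisection, and evaluate the dominant factor exp(x_c^2) in the expected
# number of core scatterings. alpha ≈ 4.7e-4 (T/1e4 K)^(-1/2) is assumed.

def core_boundary(alpha, lo=1.0, hi=10.0):
    f = lambda x: math.exp(-x * x) - alpha / (math.sqrt(math.pi) * x * x)
    for _ in range(100):              # f changes sign on [lo, hi]
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha = 4.7e-4 / math.sqrt(10.0)      # T = 1e5 K
x_c = core_boundary(alpha)
print(x_c)                            # ~ 3.5, as quoted in the text
print(math.exp(x_c**2))               # ~ 1e5 core scatterings (dominant factor)
```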
One way to advance photons in very high optical depths is to use the technique of the
{\it prejudiced first scattering} \citep{cashwell_everett59}.
With this technique one biases the $\tau$ values toward larger values than the ones that would be drawn
from equation (\ref{taur}). More specifically,
$\tau$ is chosen to be uniformly distributed in $[0,\tau_{esc}]$, with $\tau_{esc}$ the optical depth for escape. Then one weights the photons by
$\tau_{esc}e^{-\tau}$ to correct for the fact that $\tau$ (i) is limited to be less than or equal to $\tau_{esc}$, and
(ii) is assumed to be uniformly distributed in the $<\tau_{esc}$ range. Using
this technique however does not improve run time requirements to the extent we need and clearly more drastic acceleration methods are needed.
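A minimal sketch of the prejudiced-first-scattering estimator (illustrative, not the actual implementation): drawing $\tau$ uniformly in $[0,\tau_{esc}]$ and weighting each photon by $\tau_{esc}e^{-\tau}$ reproduces expectations over the usual exponential path-length distribution:

```python
import random, math

# Sketch: the "prejudiced first scattering" estimator. Draw tau uniformly in
# [0, tau_esc] and weight by tau_esc * exp(-tau); the weighted average of any
# f(tau) then reproduces the expectation over the (truncated) exponential
# path-length distribution, and the mean weight gives the normalization
# 1 - exp(-tau_esc).

random.seed(1)
tau_esc = 5.0
n = 200_000
w_sum = f_sum = 0.0
for _ in range(n):
    tau = random.uniform(0.0, tau_esc)     # biased draw
    w = tau_esc * math.exp(-tau)           # compensating weight
    w_sum += w
    f_sum += w * tau                       # estimate <tau> as an example
print(w_sum / n)      # ≈ 1 - exp(-tau_esc)
print(f_sum / w_sum)  # ≈ mean tau of the truncated exponential, ≈ 0.966
```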
Exiting the core does not in general guarantee that the photons escape.
In fact, the photons
may return back to the core many times before escaping. This is not surprising since, as we will discuss, the maximum core frequencies that can be used
are much smaller than $x_{*}$ discussed previously. Especially for extremely
optically thick media ($\alpha \tau_{0} >10^{3}$), this in- and out-of-the-core procedure is still
very expensive to follow.
Hence, we accelerate our RT scheme by implementing two different methods, depending on the center-of-line optical thickness of the cell a photon
finds itself in ($\tau_{0}$), as well as on the thickness of the cell for the specific frequency shift of the incident
photon ($=\tau_{0} \phi(x_{i})$ with $x_{i}$ the frequency shift of
the incident photon). In fact we parameterize the optical thickness of a cell not only via $\tau_{0}$, the line-center optical depth from the center of the
cell to one of its edges, but rather via the product of $\alpha$ and $\tau_{0}$, motivated by the Neufeld solution. This parameterization
turns out to be
very good for media less optically thick than those the Neufeld solution applies to.
We discuss these two acceleration methods, as well as some additional acceleration techniques in the following subsections.
\subsubsection{Extremely optically thick cells: Controlled Monte Carlo motivated by the Neufeld solution}
\label{controlled}
This acceleration scheme is based on controlled Monte Carlo simulations of resonance RT
in cells (cubes) with several physical conditions,
representative of the extremely optically thick cells in the simulations.
The idea is to obtain trends and best--fit functional forms for the
spectra emerging from thick cells. These spectra can then be used when running the code so that
instead of following the scattering of the photons in detail, we can draw the frequency of the photon emerging
from a thick cell using the pre-calculated spectrum appropriate for the physical conditions in this cell.
In principle, controlled Monte Carlo simulations can be used for any range of optical thicknesses.
We use it only in the extremely optically thick cells where
$(\alpha \tau_{0})_{eff}>2\times 10^{3}$ and $(\tau_{0} \phi(x_{i}))_{eff}\gg1$ with $x_{i}$ the frequency shift of the incident
photon.\footnote{We use the subscript {\it eff} because, as discussed later in this section with an implementation of
the RT code for AMR simulations in mind, to decide
whether this method is applicable we create a mesh on top of the simulation mesh. In this new mesh, the photon is always at the center of a cell.
It is then the 'effective' physical conditions in this new cell that determine whether the acceleration method at hand is applicable.
In the case of simulations without a cell structure, the subscript {\it eff} becomes redundant, since there is no initial mesh to begin with.}
We do that because we are motivating this method by
the Neufeld solution which is applicable only at the diffusion limit.
The inherent cell structure of the AMR simulation outputs or the cell structure that can be generated for other cosmological codes,
along with the resolution imposed isothermality and uniformity of each cell,
are conducive to some kind of modification of the Neufeld solution.
In some sense, with the advent of cosmological simulations, the
contemporary analogue of the
extensively studied classical slab problem is the completely unexplored
problem of resonance RT in a cube. This motivated a detailed study of the resonance RT
problem in cubes, to which the reader is referred for more
details and results \citep{tasitsiomi05}. Here we only
summarize briefly some key results relevant to the current study.
As discussed in \S \ref{sec:neufeld}, the Neufeld solution was obtained under some assumptions.
To fit the controlled Monte Carlo spectra with a Neufeld type spectrum we have to investigate how sensitively
the solution depends on these assumptions, as well as whether these assumptions are valid in cosmological simulations.
This is done in the following paragraphs.
\subsubsubsection{Choosing the exiting frequency}
\label{exit_frequency}
The exiting frequency of a photon entering an extremely optically
thick cell is drawn by an emerging frequency distribution similar to the Neufeld solution (equation \ref{neuf}).
However, the Neufeld solution is derived for a semi-infinite
slab, whereas the simulation cells are finite cubes. Furthermore, the solution assumes isotropic scattering, no recoil, which
anyway is negligible in the simulations, and does not include velocities such as those associated with peculiar motions or the Hubble
expansion. Lastly, it assumes that the source of the radiation lies within the slab,\footnote{ More
specifically,
equation \ref{neuf} assumes that the source is a plane source in the middle of the slab. Due to symmetry arguments,
a plane source located at the middle of a slab is equivalent with respect to the spectrum of the emergent radiation
to a central point source. \citet{neufeld90} provides a more general expression for different
source positions.} and is valid for optically thick frequencies ($\tau_{0} \phi(x_{i}) \gg1$).
Starting with the point about bulk velocities, we use the Neufeld solution -- applicable for an observer moving with the
bulk flow of the fluid -- by taking into account the way the specific intensity
transforms between two inertial observers moving at a certain speed with respect to each other (i.e., $I_{\nu}/\nu^{2}$ is invariant, where $I_{\nu}$ is
the {\it number} of photons rather than the energy intensity. In the latter case, the quantity that would be invariant would be $I_{\nu}/\nu^{3}$).
The second point we address has to do with the slab versus cube difference between the analytical solution and the simulations.
As discussed in \S \ref{sec:neufeld}, Neufeld's solution depends on one parameter, $\alpha \tau_{0}$.
Qualitatively, one expects that the spectrum emerging from a cube rather than a slab be well described by the same
solution but for an effective $\alpha \tau_{0}$ smaller than the actual $\alpha \tau_{0}$ of the cell. The
reason is that when, for example, observing the emergent flux from the z-direction of a cube, we lose all photons that in the case of the slab would wander and scatter
many times along the infinite dimensions before finally finding their way out through the z-plane. In the case of the cube these photons would not be counted
simply because they have exited the cube through planes other than the z-plane. This would be equivalent to solving the problem that Neufeld solved but this
time including losses of photons (or, more appropriately, by generalizing the 2-dimensional diffusion equation derived by Neufeld into
a four dimensional one -- instead of $\tau, \nu$ now the intensity will be a function of $\tau_{x}, \tau_{y},
\tau_{z}$ and $\nu$).
Numerical experimentation with RT in cubes and slabs of the same physical conditions verified
that the above guess is correct. In fact, the cube spectrum is well described by the Neufeld solution for a slab if $2/3$ of
the $\alpha \tau_{0}$ of the cube is used as the input parameter to the slab analytic solution.
This is shown in Figure \ref{slab_cube} \citep[also see][]{tasitsiomi05}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{slab_cube.eps}
\caption[Slab versus cube emergent spectra]{Comparison of the emergent spectra from a semi-infinite slab ({\it dotted histograms}) and
a cube ({\it dashed histograms})
of the same physical conditions ($\alpha \tau_{0}=10^{5}$). Also shown is the analytical
solution derived by \citet{neufeld90} ({\it solid line}) for the emergent spectrum from a semi-infinite slab.
Note that the analytical solution is for $\alpha \tau_{0}=2\times 10^{5}/3$, which is the 'effective' $\alpha \tau_{0}$
one has to use in the analytic solution obtained for a semi-infinite slab, for the solution to give the spectrum from a finite cube
of the same physical conditions as the slab.
\label{slab_cube}}
\end{center}
\end{figure}
Furthermore, the Neufeld solution assumes that the source of radiation lies within the slab (or cube in our case). In fact,
the version of the solution we have been discussing so far (equation \ref{neuf}) assumes that the source is at the center
of the slab.
However, in the case of mesh--based codes, as photons cross from one cell to the other,
in general the source is not at the center of the cell. For codes without an inherent cell structure the obvious solution to this is to create a cell
and have the photon at each instant at the center of the cell. As is discussed in what follows, this turns out to be the most efficient solution in the
case of mesh--based codes as well.
Neufeld provides a more
general expression for various source positions within the slab, as well as for the transmission and reflection coefficients assuming an external source.
Using either option for mesh--based codes, trying to take advantage of the already existing mesh structure, creates complications: in the case of a non-central but internal source, the equivalence
of a point or infinite plane source -- necessary for all the above discussion to be valid -- breaks down
if the source is not located at the center of the slab. And using the reflection/transmission probabilities makes the algorithm more complicated.
But most importantly, there is an intrinsic limitation in the simulations due to finite resolution: it is not clear
how meaningful it is to be discussing differences
in position less than the cell size (i.e., if one can really tell the edge from the center of the cube). Instead, at every point the photon is found we
create a new mesh on top of the simulation mesh. The photon is always found at the center of a cell whose physical parameters are calculated using the
cloud-in-cell weighting scheme. Each time, the size of this cell is set to the size of the simulation cell the photon is in.
Note that it is the physical parameters of this effective cell that determine the way the code proceeds
(i.e., if the effective cell $\alpha \tau_{0}$ is larger than $2 \times 10^{3}$ and $\tau_{0} \phi(x_{i}) \gg1$
then the controlled Monte Carlo results are used. If one of these two conditions (or both) is not satisfied in
the effective cell then the code returns to the original cell. Depending on the
original cell physical conditions and the photon frequency either the exact Monte Carlo or
the method described in \S \ref{skipping} is used).
In the Neufeld solution the condition $\tau_{0} \phi(x_{i})\gg1$ allows one to truncate a series appearing in the solution
process by keeping terms up to first order in $1/ (\tau_{0} \phi(x_{i}))$. Thus, the solution is valid only for optically thick injection frequencies.
We find that the higher order corrections are quite small. However, for a given tolerance, one must decide how thick is thick enough for the Neufeld spectrum to be applicable.
We take that the spectrum from a slab is satisfactorily predicted by the analytical solution
for frequency shifts for which $\tau_{0} \phi(x_{i}) \ge 10$.
As has been shown in \S \ref{sec:neufeld} the recoil effect can easily be accounted for by multiplying the Neufeld solution by the appropriate factor.
In any event, the recoil effect for our conditions is negligible and hence is dropped in the simulation calculations.
To see this, note that the recoil effect corresponds to a frequency shift that would be caused by a velocity $\simeq h \nu/m_{p} c
=$ 3 m/s. This velocity is negligible compared to the thermal velocities
expected in cosmological simulations, and given the peculiar and Hubble flow velocities, the small non-coherence in the atom's rest frame
introduced by recoil will be totally unobservable.
Hence, the Neufeld approximation is good in that respect as well.
\subsubsubsection{Choosing the exiting direction and point}
Referring to $\mu$, the cosine of the angle with which the photon is exiting a cell, measured with respect to the normal to the exiting surface,
we draw its value from the following cumulative probability distribution function (cpdf) \citep{tasitsiomi05}
\begin{equation}
P(<\mu)=\frac{\mu^{2}}{7}(3+4\mu) \, .
\label{dir}
\end{equation}
This cpdf is found to be an excellent description of the directionality of the emergent spectrum and
clearly deviates from isotropy.
In fact, it verifies the findings of other studies that in optically thick media photons tend to exit in directions perpendicular to
the exiting surface \citep[see, e.g.,][]{chandrasekhar,phillips_meszaros86,ahn_etal02b}. In the case of RT in accretion disks this has been identified
as an expected limb darkening \citep[or 'beaming';][]{phillips_meszaros86} of the disk (i.e., the disk is very bright when observed face on and less bright when observed edge on).
In cases of very optically thick media, the directionality of the emerging radiation approaches that of Thomson-scattered
radiation emerging from a Thomson-thick electron medium. This Thomson limit, obtained initially by \citet{chandrasekhar}, was confirmed later
numerically by \citet{phillips_meszaros86}.
It has been implied by some authors \citep{ahn_etal02b} that
the fact that in optically thick media RT occurs mostly via wing photons with
the latter being described by a dipole phase function (see \S \ref{scattering}), and the fact that Thomson scattering is also described by a Rayleigh
(dipole) scattering phase function, explains why the resulting $\mu$ probability distributions are similar.
However, we find the same cpdf
when the scattering is taken to follow either an isotropic or a dipole distribution.
For such optical thicknesses the details of the exact phase function do not matter, at least not
with respect to the exiting angle cpdf. All the phase functions involved in Ly-$\alpha$ scattering are only mildly anisotropic
and they simply enhance slightly the coherence of the scattering in the observer's frame compared to the isotropic
scattering case. So the fact that the exiting angle cpdf in extremely optically thick slabs (cubes)
does not depend crucially on the assumptions on the phase functions does not
come as a surprise.
The underlying physics is simply that in extremely thick media
most of the photons escape along the normal to the slab where the opacity is smaller.
The azimuthal angle $\phi$ with which the photon exits a cell is distributed fairly uniformly in $[0,2\pi]$ \citep[for more details see][]{tasitsiomi05}.
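A minimal sketch of how equation (\ref{dir}) can be sampled (illustrative, not the actual implementation): invert the monotonic cpdf by bisection and check the sample mean against the analytic value $\langle\mu\rangle=5/7$, well above the value $1/2$ that directions uniform in $\mu$ would give, reflecting the beaming along the normal:

```python
import random

# Sketch: draw exit-direction cosines mu from the cpdf
# P(<mu) = mu^2 (3 + 4 mu) / 7 by bisecting P(<mu) = u, then check the
# sample mean against the analytic value <mu> = 5/7 ≈ 0.714.

def draw_mu(u):
    lo, hi = 0.0, 1.0
    for _ in range(40):                     # invert the monotonic cpdf
        mid = 0.5 * (lo + hi)
        if mid * mid * (3.0 + 4.0 * mid) / 7.0 < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(2)
samples = [draw_mu(random.random()) for _ in range(50_000)]
print(sum(samples) / len(samples))          # ≈ 5/7 ≈ 0.714
```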
Referring to the distribution of exit points, one can argue that trying to specify the exact coordinates of the exit point of a photon
from a simulation
cell is, in some sense, superfluous since there is always the resolution limitation.
Thus, we assume that the
exiting points are distributed uniformly.
The deviations of the exiting points from uniformity are relatively small \citep{tasitsiomi05}.
Similarly, resolution limitations make us focus on total distribution functions of photon properties -- where total here means distributions averaged over an entire cube side -- without
regard to a possible dependence of these distribution functions on the photon exit point.
Lastly, we have checked whether the emergent photon parameters can be drawn independently. We found no significant correlations among them
(e.g., we checked for correlations between emergent frequency shift and (preferred) range of exiting directions). Thus, drawing
them independently
is correct.
\subsubsection{Moderately optically thick cells: Skipping the core scatterings}
\label{skipping}
This acceleration scheme is used if the cell the photon is in has
$1 \leq \alpha \tau_{0} \leq 2 \times 10^{3}$. It is also used in the case of cosmological simulation
codes with a pre-existing mesh when the cell the photon is in
has $\alpha \tau_{0}> 2 \times 10^{3}$, but the effective cell (see \S \ref{controlled}) has
$1 \leq \alpha \tau_{0} \leq 2 \times 10^{3}$, and thus the previous acceleration scheme (discussed in \S \ref{controlled})
is not applicable.
The scheme is based on the idea that if a photon is within a certain {\em core} (to be determined), we
can skip all the core scatterings and go directly to the scattering with
a rapidly moving atom that can bring the photon out of the core \citep[for some first implementations of this idea see][]{avery_house68,ahn_etal02b}. As
soon as this happens, the
initial detailed transfer resumes until either the photon escapes or re-enters the core.
The scheme's validity relies upon the correct choice of the core value, such that the photon does not diffuse significantly
in space while inside the core, whereas significant diffusion occurs once it exits the core.
To achieve the scattering that brings the photon outside the core
we choose thermal velocities (in units of $\sqrt{2kT/m}$) from the distribution \citep{avery_house68,ahn_etal02b}
\begin{equation}
p(v)=\frac{1}{\sqrt{\pi}} e^{-v^{2}}
\label{eq:core}
\end{equation}
and in the range $[v_{min},v_{max}]$. The lower limit $v_{min}$ is the minimum velocity necessary
for the photon to just make it
to the core $x_{c}$. The upper limit is formally infinite, but for any practical realization it can be set to a large enough number (e.g.,
$\sqrt{x_{c}^{2}+10}$).
For a scattering to bring the photon to just $x_{c}$ from the center,
independent of the directions of incident and outgoing photon, and under the assumptions of
coherence in the rest frame of the atom, isotropic scattering phase function, and
zero radiation damping, it can be shown that $v_{min}=\max(|x|,|x_{c}|)$ \citep{hummer62}, with
$x$ the initial frequency shift (as usual in units of the thermal Doppler width). In our case
it is always $v_{min}=|x_{c}|$ since the
photon is inside the core. We checked and verified that
the assumptions under which $v_{min}$ is derived
are good for cosmological simulations.
This is not surprising since, e.g., the assumption of
an isotropic phase function is not very crucial. As discussed already,
none of the relevant phase
functions is strongly anisotropic. Those that are anisotropic simply tend to favor slightly smaller frequency shifts (since they favor
post-scattering directions close to pre-scattering directions) and hence slightly increase the
coherence in the observer's frame from scattering to scattering. In the limit of many scatterings (and while still in the
optically thick regime) this is not a significant effect \citep[for the tiny differences in the frequency redistribution function with isotropic versus
dipole phase function see Figure I of][]{hummer62}.
Similarly, the assumption of coherence in the rest frame of the atom is expected
to be a fairly good one for the
media in the simulations from the point of view of the recoil effect, as we discuss in \S \ref{exit_frequency}, and from the point of view of
collisions as we discuss in \S \ref{sec:collisions}.
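A minimal sketch of how the truncated distribution of equation (\ref{eq:core}) can be sampled (illustrative, not the actual implementation): inverse-transform sampling of the one-sided truncated Gaussian via bisection on the error function:

```python
import math, random

# Sketch: draw the velocity of the core-skipping scatterer from
# p(v) ∝ exp(-v^2) restricted to [v_min, v_max], by inverse-transform
# sampling of the truncated Gaussian via bisection on erf.

def draw_velocity(v_min, v_max, u):
    e_lo, e_hi = math.erf(v_min), math.erf(v_max)
    target = e_lo + u * (e_hi - e_lo)       # cdf value to invert
    lo, hi = v_min, v_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(3)
x_c = 3.5                                   # core boundary, so v_min = |x_c|
v_max = math.sqrt(x_c**2 + 10.0)
vs = [draw_velocity(x_c, v_max, random.random()) for _ in range(10_000)]
print(min(vs), max(vs))                     # all draws lie in [v_min, v_max]
```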
To motivate the core values we can use (i.e., the maximum frequency shifts
for which we can ignore the repeated scatterings without biasing the results) we must take into
account the different physics
of resonant RT in the two different regimes, $1\le \alpha \tau_{0}\le 2\times 10^{3}$ and
$\alpha \tau_{0} > 2 \times 10^{3}$. In the first regime photons escape on a {\em single longest flight} \citep{adams72} in accordance with the understanding of resonant RT in moderately thick media
developed by \citet{osterbrock62}. In this thickness regime the important frequency is the frequency where the optical depth becomes unity. Photons within this frequency shift barely diffuse in space, whereas as soon as they exit this frequency shift they escape while taking their longest spatial step ({\em flight}).
In the second, extremely optically thick regime ($\alpha \tau_{0}>2 \times 10^{3}$) as \citet{adams72} suggested, photons escape during a {\em single longest excursion} rather than flight. In this case the important frequency is the frequency with the following property: if a photon is given this frequency and is left to slowly return to the center of the line
(by performing a double random walk, in space and frequency), the overall rms distance that it will
travel in real space while returning to the line center equals the size of the medium (i.e., the important frequency shift in this case is the shift $x_{*}$ discussed in \S \ref{sec:neufeld}). This physics motivates our cores, i.e., for moderately thick media the core must be safely optically thick, whereas for extremely optically thick media the core must be safely smaller than $x_{*}$. Then using numerical experimentation we find the exact maximum possible core values that can be used
in each case.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=7.5cm]{core1.eps}
\includegraphics[width=7.5cm]{core.eps}
\caption[]{{\it Top panel:} Comparison of the exact Monte Carlo results ('ex.', {\it solid line}) and the results obtained using the
core
acceleration method ('ap.', {\it dotted line}) for the minimum cell value $\alpha \tau_{0}=1$ for which this acceleration method is used in the Ly-$\alpha$ RT code.
Also shown is a larger core frequency, $x_{c}=0.2$, which shows the way the emergent spectrum is biased if one uses a higher core frequency.
{\it Bottom panel:} Same as in top panel but for more optically thick cells, $\alpha \tau_{0}=10^{2}$. In this case a core frequency $x_{c}=0.8$ can be used.
In addition, we show exact results for a different pair of temperature and optical depth ({\it dashed line}), that however correspond to $\alpha \tau_{0}=10^{2}$.
Clearly, $\alpha \tau_{0}$ is a good way to parameterize the problem at these moderate optical thicknesses.
\label{core}}
\end{center}
\end{figure}
A comparison of the exact Monte Carlo and the core acceleration scheme applied to moderately thick media is shown
in Figure \ref{core}. Note that these spectra are {\it one} cell runs, and are not the final results of the RT around the Ly-$\alpha$ emitter (which
are discussed in a later section).
In the top panel, we present the exact emergent spectrum from a cube with
$\alpha \tau_{0}=1$, as well as the spectrum obtained if a core $x_{c}=0.02$ is used. Although this is
a rather small core, it improves the speed of the algorithm
by orders of magnitude.\footnote{The exact improvement factor depends on optical thickness, and is higher for thinner cells. Furthermore, the improvement
factor is different for the same $\alpha \tau_{0}$ but different temperatures and optical depths. More specifically, it is
higher for lower optical depths and
temperatures.} Also
shown is what the bias would be if one
used a higher core frequency ($x_{c}=0.2$): photons would be artificially shifted at higher (absolute) frequency shifts.
To find the maximum core that can be used
without this biasing, we made runs with successively higher cores. We use as cores: 0.02 for $1\leq \alpha \tau_{0}<10$, 0.1 for
$10\leq \alpha \tau_{0}<10^{2}$ and 0.8 for $10^{2}\leq \alpha \tau_{0}<2\times 10^{3}$.
One can easily verify that for a wide temperature range these cores are safely within the optically thick regime.
The comparison between the exact Monte Carlo and the accelerated
scheme for optically thicker cells (but still at the moderately thick regime) is shown at the bottom panel of Figure \ref{core}.
We have seen via the Neufeld solution
that characterizing a slab -- or a cube in our case -- by $\alpha \tau_{0}$
works very well for very optically thick media ($\alpha \tau_{0} \geq 10^{3}$). In the bottom panel of Figure \ref{core}
we present two different sets of temperature and $\tau_{0}$, which nevertheless correspond to the same $\alpha \tau_{0}$ (and smaller than
that for which the Neufeld solution is applicable). Clearly, $\alpha \tau_{0}$
parameterizes these emergent spectra well enough, too. This fact justifies our classification of simulation cells with respect to their
$\alpha \tau_{0}$ value. Note that the fact that the emergent spectrum for these physical conditions seems to depend on $\alpha \tau_{0}$
is not trivial, and was checked only for ranges of temperature and optical depth that are
anticipated to be relevant to cosmological simulation environments. A simple way to see why this may not be a general statement comes from the physics of RT in moderately thick media.
As discussed above, in such media photons escape roughly when they reach the frequency at which the optical depth is unity.
If, for example, the frequency shift $x$ where the optical depth becomes unity is within the Doppler core (as anticipated) then this frequency shift is
defined through $\tau_{0} e^{-x^{2}}=1$ and clearly depends only on $\tau_{0}$ and not on temperature. This is in contrast to extremely optically thick media where the
frequency shift relevant for escape through the single longest excursion is $x_{*}\sim (\alpha \tau_{0})^{1/3}$ (see \S \ref{sec:neufeld}), namely it depends on $\alpha \tau_{0}$.
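The two-regime scaling just described can be made concrete in a few lines. The sketch below is schematic: it ignores the $O(1)$ prefactors (the $\sim$ in $x_{*}\sim (\alpha \tau_{0})^{1/3}$) and assumes the Doppler-core condition $\tau_{0} e^{-x^{2}}=1$ stated above.

```python
import math

def escape_frequency(tau0, a):
    """Characteristic frequency shift (thermal Doppler units) for escape.

    Moderately thick (a*tau0 <= 2e3): frequency of unit optical depth in the
    Doppler core, tau0 * exp(-x^2) = 1, which depends on tau0 only.
    Extremely thick: single-longest-excursion shift x_* ~ (a*tau0)^(1/3),
    which depends on a*tau0.  Prefactors of order unity are dropped.
    """
    if a * tau0 <= 2.0e3:
        return math.sqrt(math.log(tau0))
    return (a * tau0) ** (1.0 / 3.0)
```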
In the case of extremely thick media we find roughly the following maximum possible cores:
3 for $2 \times 10^{3} \leq \alpha \tau_{0}<10^{4}$, 5 for $10^{4}\leq \alpha \tau_{0}<10^{5}$, 7 for $10^{5} \leq \alpha \tau_{0}<10^{6}$,
17 for $10^{6} \leq \alpha \tau_{0}<10^{7}$, 30 for $10^{7} \leq \alpha \tau_{0}< 10^{8}$, and 80 for $\alpha \tau_{0} \geq 10^{8}$.
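For reference, the maximum safe core values quoted above for both regimes can be collected into a single lookup. The function below simply transcribes the thresholds from the text; it is a convenience sketch, not code from the RT implementation.

```python
def core_frequency(a_tau0):
    """Maximum safe core x_c for a cell, from the a*tau0 thresholds quoted
    in the text (both the moderately and the extremely thick regime)."""
    table = [  # (lower bound on a*tau0, x_c), descending
        (1.0e8, 80.0), (1.0e7, 30.0), (1.0e6, 17.0), (1.0e5, 7.0),
        (1.0e4, 5.0), (2.0e3, 3.0), (1.0e2, 0.8), (10.0, 0.1), (1.0, 0.02),
    ]
    for bound, xc in table:
        if a_tau0 >= bound:
            return xc
    return 0.0  # a*tau0 < 1: no core skipping, exact Monte Carlo
```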
As an example, in Figure \ref{deep} we show the Neufeld prediction for the emergent spectrum from a slab with
$\alpha \tau_{0}=10^{7}$ and the results of our acceleration scheme using a core $x_{c}=30$. This is quite a large core frequency, yet the
acceleration scheme still gives a very accurate emergent spectrum.
The core values we find scale with $x_{*}$ roughly as $x_{c} \simeq 0.15 x_{*}$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=9cm]{core_large.eps}
\caption[]{Comparison of the analytic solution obtained by \citet[]['ex.', {\it solid line}]{neufeld90} and the results obtained using the
approximate core
acceleration method ('ap.', {\it dotted line}) for $\alpha \tau_{0}=10^{7}$ and $x_{c}=30$. The figure illustrates that at extremely high optical depths
quite large core values can be used.
\label{deep}}
\end{center}
\end{figure}
The $\alpha \tau_{0}$-dependent core frequencies that we
motivate here, based on the different
physics in the different $\alpha \tau_{0}$ regimes, constitute a new approach. Previous studies \citep[e.g.,][]{hansen_oh05} define the core frequency
as
the frequency where the wings start dominating over the Doppler core.
Clearly, to achieve
the best efficiency of the
acceleration scheme, which is highly desirable in our applications due to the very complex environments, we have to use
a depth-dependent core definition.
Other authors who considered variation of the core frequency with temperature and optical depth \citep{ahn_etal02b}
find somewhat different values from ours, at least for the low $\alpha \tau_{0}$ range that they worked with: they find that
a core frequency of about $\sqrt{\pi}$ can be used for $\alpha \tau_{0}>10^{3}$, with slightly higher values permitted for even larger
$\tau_{0}$. However, we find that this value is a bit large for $\alpha \tau_{0} \simeq 10^{3}$, and that significantly higher core values can be used for
higher $\tau_{0}$. The reasons for our disagreement with \cite{ahn_etal02b} are not clear.
The discussion with respect to the validity of this acceleration scheme has been limited so far to the emergent spectrum of radiation.
One should also verify that the photons indeed do not move significantly in space during the multiple skipped core scatterings,
and that all other quantities, such as the exit point and exit angle distributions, remain unchanged in addition to the emergent spectrum.
We have tested the latter and found this to be the case. Furthermore, note that the angle information is relevant mostly when the photon is in the optically thin regime,
where anyway we use the exact transfer scheme. With respect to the exit points, or distances that the photons move while in the core, since these are not
larger than one cell size, limitations due to the
finite simulation resolution render these concerns moot. To get an idea, following an argument similar to that presented in \S \ref{sec:neufeld} leading to $x_{*}\simeq (\alpha \tau_{0})^{1/3}$, and using
the scaling $x_{c} \simeq 0.15 x_{*}$ one finds that by ignoring the scatterings within the core for extremely optically thick cells roughly one ignores a spatial diffusion
of the photons of order $10^{-3}$ of the size of a simulation cell.
In summary, each time a photon enters a simulation cell, there are the following three possibilities:
\begin{enumerate}
\item If the cell has $\alpha \tau_{0}<1$, the exact Monte Carlo RT is used.
\item If the cell has $1 \leq \alpha \tau_{0} \leq 2 \times 10^{3}$ and the photon frequency shift is $|x| \leq x_{c}$, then we
skip the core scatterings. If the photon frequency is outside the core we use again the
exact Monte Carlo RT.
\item If the cell has $\alpha \tau_{0} > 2\times 10^{3}$, we proceed as follows. If there is no pre-existing mesh structure in the
cosmological simulation and the frequency of the photon is such that $\tau_{0} \phi(x) \gg 1$,
then the controlled Monte Carlo results are used.
If there is a pre-existing mesh structure,
then the physical conditions of the effective cell are calculated.
If for the effective cell (i) $\alpha \tau_{0} > 2 \times 10^{3}$ and (ii) the frequency of the photon is such that $\tau_{0} \phi(x) \gg 1$, then
we use the controlled Monte Carlo motivated by the Neufeld solution.
If either (i) or (ii) does not hold, then the first acceleration scheme is tried for the {\it original} rather than the {\it effective} cell;
if it is not applicable either, then the exact Monte Carlo scheme is used.
\end{enumerate}
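The three-way decision above can be rendered as a small dispatch function. This is a schematic sketch, not the actual code: cells are stand-in dictionaries, the line profile $\phi$ is passed in as a callable, and the numerical threshold standing in for the condition $\tau_{0} \phi(x) \gg 1$ is our own assumption.

```python
import math

WING_THICK = 1.0e3   # stand-in numerical cutoff for "tau0 * phi(x) >> 1"

def choose_rt_scheme(cell, x, x_core, effective_cell=None):
    """Pick the transfer scheme for a photon of frequency shift x entering a
    cell.  Cells are dicts with keys 'a_tau0', 'tau0' and a line-profile
    callable 'phi'; effective_cell is None when no pre-existing mesh
    structure exists.  Schematic rendering of the three enumerated rules."""
    if cell["a_tau0"] < 1.0:                     # rule 1: thin cell
        return "exact"
    if cell["a_tau0"] <= 2.0e3:                  # rule 2: moderately thick
        return "skip_core" if abs(x) <= x_core else "exact"
    # rule 3: extremely thick cell
    target = cell if effective_cell is None else effective_cell
    if target["a_tau0"] > 2.0e3 and target["tau0"] * target["phi"](x) > WING_THICK:
        return "controlled_mc"                   # Neufeld-motivated scheme
    # fall back: core skipping for the ORIGINAL cell, else exact transfer
    return "skip_core" if abs(x) <= x_core else "exact"
```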
\subsubsection{Calculating images and spectra}
To construct images of the Ly-$\alpha$ emitters for various directions of observation the code
calculates the contribution to the image along a certain direction at {\it each} scattering \cite[see, e.g.,][]{yusef-zadehetal84,zheng_miralda-escude02}. This contribution
is $e^{-\tau_{esc}} P(\phi,\mu)$ where
$\tau_{esc}$ is the optical depth for escape from the current scattering position
along the direction of observation to the observer, $\mu$ is the cosine of the angle between the direction of the
incident photon and the direction of observation, $\phi$ is the azimuthal angle, and $P(\phi, \mu)$ is the normalized
probability distribution for the photon direction (in fact
$P$ is independent of $\phi$ in our case).
This way of calculating images and spectra
has the advantage of giving fairly good statistics for relatively small numbers of photons. Thus, by
lowering the number of photons needed for the results to converge, it can potentially speed up the calculations. It also converges rapidly for
the fainter parts of the source, hence it is very useful for sources with high emissivity contrast. One disadvantage is that, due to limited computing resources,
the calculations are restricted
to only a small number of pre-chosen directions of observation. In addition, for complicated geometries such as those produced in simulations
one must verify that calculating $\tau_{esc}$ at every scattering is indeed cheaper than simply running more photons.
We find that indeed this is the case for the ART environments where the RT code is applied in this study.
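The accumulation of these per-scattering contributions can be sketched as follows. We plug in an isotropic, $\phi$-independent $P=1/4\pi$ purely as a stand-in (the actual $P$ depends on the transition involved), and the function names are ours.

```python
import math

def peel_weight(tau_esc, mu):
    """Contribution of one scattering to the image along the viewing
    direction: exp(-tau_esc) * P(phi, mu).  An isotropic, normalized
    P = 1/(4*pi) is used here as a stand-in for the true phase function."""
    return math.exp(-tau_esc) / (4.0 * math.pi)

def photon_image_weight(history):
    """Sum the peeling weights over one photon's scattering history;
    `history` is a list of (tau_esc, mu) pairs, one per scattering."""
    return sum(peel_weight(t, m) for t, m in history)
```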
\subsubsection{Parallelization}
To achieve high performance we run the code in parallel. Our Monte Carlo scheme is particularly easy to parallelize, since
each ray is independent of others. The parallelization is done using the
Message Passing Interface (MPI) library of routines. As every photon ray is independent,
communication requirements among the different processes are minimal, and in essence MPI distributes copies of the code which are
run autonomously on the different nodes used, with each processor assigned photons from different emission regions.
To give an idea of the performance of the code (using the above acceleration schemes), transferring $10^{7}$ photons\footnote{This number of photons is well above the minimum necessary for the
results to converge, as will be discussed in a later section.} out to 10 physical kpc from the center of the
ART Ly-$\alpha$ emitter we apply the code to takes about
4 hours on 8 Intel Xeon 3.2 GHz processors on the Tungsten NCSA cluster.
\subsection{Final images and spectra of simulated Ly-$\alpha$ emitters}
\label{images_spectra}
The detailed Ly-$\alpha$ RT is carried out up to a certain distance from the center of the source and then the Ly-$\alpha$ GP absorption is added.
This distance where the detailed RT stops is determined through a convergence test.
The existence of such a scale is guaranteed given that the further away a photon moves from the center of the object, the more improbable it becomes for
it to scatter back in the direction of observation.
Furthermore, the size of this convergence radius can also be motivated observationally, from the extent of Ly-$\alpha$ halos that have been observed.
The surface brightness of each pixel of the constructed image is
\begin{equation}
SB_{p}=\frac{\Sigma_{i,j} F_{i,j} e^{-\tau_{esc,i,j}} P(\phi,\mu)}{\Omega_{pix}} \times e^{-\tau_{GP}} \, ,
\label{sb}
\end{equation}
where the sum is over the fluxes of all photons ($i$), and all their scatterings ($j$) with scattering positions that project onto the
pixel; $\Omega_{pix}$ is the angle subtended by the pixel to the observer, and the factor $e^{-\tau_{GP}}$ accounts for the
diminishing of the brightness due to the hydrogen intervening between the radius where the detailed RT stops and the observer. To find the flux $F_{i,j}$ carried by each photon at each interaction, we first calculate the total luminosity, $L_{tot}$, of the
emitter through the sum of the luminosities of the individual source cells.
For $N$ photons (or, more accurately, wavepackets) used in the Monte Carlo, each photon carries a flux $F_{i,j}$ (independent of photon and scattering numbers $i$ and $j$, respectively,
in our case) equal to
\begin{equation}
F_{i,j}=\frac{L_{tot}}{N} \frac{1}{d_{L}^{2}}
\end{equation}
where $d_{L}$ is the luminosity distance calculated for the adopted cosmology. Note that there is no $1/ 4 \pi$ factor. This factor comes from $P(\phi, \mu)$ -- in equation (\ref{sb}) -- which is
normalized to unity.
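Equations (\ref{sb}) and the per-photon flux above translate directly into code; the following minimal sketch (names ours) assumes the peeling weights $e^{-\tau_{esc,i,j}} P(\phi,\mu)$ for the pixel have already been accumulated.

```python
import math

def photon_flux(L_tot, N, d_L):
    """Flux carried by each of N equal wavepackets: F = L_tot / (N * d_L^2).
    No 1/(4*pi) here; that factor is carried by the normalized P(phi, mu)."""
    return L_tot / N / d_L ** 2

def pixel_surface_brightness(peel_weights, F, omega_pix, tau_gp):
    """Equation (sb): summed exp(-tau_esc)*P weights of all scatterings that
    project onto the pixel, times the per-photon flux, divided by the pixel
    solid angle, attenuated by the Gunn-Peterson optical depth."""
    return sum(peel_weights) * F / omega_pix * math.exp(-tau_gp)
```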
The GP absorption optical depth is calculated as described in \citet{hui_etal97}. It is calculated for each pixel separately, and the number of different lines
of sight that have to be used per pixel is determined by checking convergence of the final result. For high enough image spatial resolution (similar to the one used
in this study) one line of sight per pixel is enough, since the simulations themselves have finite spatial resolution. The
characteristics of the line emerging after the detailed RT (i.e., its width) and before
adding the GP absorption determine how far away in distance one must go when calculating $\tau_{GP}$, since one needs to go up to the point where the shortest
line wavelength
is redshifted at least to the Ly-$\alpha$ resonance because of Hubble expansion. Often, this physical distance is larger than
the physical size of the cosmological simulation box. In this case,
we take advantage of the periodic boundary conditions and use replicas of the same box making sure we do not
go through the same structures. This turns out to be easily done as long as one does not have to use the box
too many times (more than $\sim$ 5).
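As a rough illustration of why only a few box replicas are needed, one can estimate the line-of-sight length required for the bluest line photon to redshift past the resonance. All inputs below are illustrative assumptions (the 1000 km/s blue extent, the small-distance Hubble-law approximation), not numbers taken from the text.

```python
import math

def n_box_replicas(dv_kms, z, box_cmpc_h, h=0.7, om=0.3, ol=0.7):
    """Rough count of periodic box copies needed for the tau_GP integration:
    the sightline must extend until the bluest line photon redshifts past
    the Ly-alpha resonance.  dv_kms is an assumed blue extent of the line;
    distances use the local Hubble-law approximation."""
    H_z = 100.0 * h * math.sqrt(om * (1.0 + z) ** 3 + ol)  # km/s per proper Mpc
    d_proper = dv_kms / H_z                                # proper Mpc
    d_comoving_h = d_proper * (1.0 + z) * h                # comoving h^-1 Mpc
    return math.ceil(d_comoving_h / box_cmpc_h)
```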
Furthermore, we consider two distinct scenarios, one where the effect of the red damping wing is taken into account and one where the
red damping wing is suppressed as would be the case if for example the Ly-$\alpha$ emitter was in the vicinity of a bright quasar.
Lastly, spectra are obtained by collapsing the 3-D image array (2 spatial dimensions+wavelength) along the spatial dimensions.
\section{Application to Cosmological Simulations}
\label{app_sims}
\subsection{The simulations}
\label{sec:sims}
Here we present some basic information regarding the cosmological simulations we use in what follows in order
to apply the Ly-$\alpha$ RT code in a cosmological setting.
The RT is carried out using outputs of the ART
code for the concordance flat
{$\Lambda$CDM} model: $\Omega_0=1-\Omega_{\Lambda}=0.3$, $h=0.7$, where
$\Omega_0$ and $\Omega_{\Lambda}$ are the present-day matter and
vacuum densities, and $h$ is the dimensionless Hubble constant defined
as $H_0\equiv 100h{\ }{\rm km\ s^{-1}\,Mpc^{-1}}$. For the power spectrum normalization
the value $\sigma_{8}=0.9$ is used.
This model is
consistent with recent observational constraints
\citep[e.g.,][]{spergel_etal03}.
The initial conditions of these simulations
are the same as those in \citet{kravtsov03} and \citet{kravtsov_gnedin05}, leading to the formation of a Milky Way sized galaxy at $z=0$.
However, these simulations are different in that, in addition to
dark matter, gas dynamics, star formation and feedback, cooling, etc., they also include non-equilibrium ionization and
thermal balance of H, He, H$_{2}$ and
primordial chemistry, full RT of ionizing radiation and optically thin line RT of
Lyman-Werner radiation. The continuum RT is
modeled according to the Optically Thin Variable Eddington Tensor approximation described in \cite{gnedin_abel01},
whereas cooling uses the abundances of species from the reaction network, as well as corrections for cooling enhancement due to metals.
The code reaches
high force resolution by refining all high-density regions with an
automated refinement algorithm. The criterion for refinement is the
mass of dark matter particles and gas per cell.
Overall there are 9 refinement levels. The physical size of a cell of refinement level
$l$ is $26.161 \times 2^{9-l}$ pc at $z \simeq 8$ (the redshift we focus on in this study).
The dark matter particle mass at the highest resolution region is $9.18 \times 10^{5} h^{-1} \rm{M}_{\odot}$, and the box size for
which results are presented in this paper is $6 h^{-1} \rm{Mpc}$.
For each simulation cell we have available information
such as the
temperature, the peculiar velocity, the neutral hydrogen density, the ionized hydrogen density, the metallicity, etc.
With this information and using the mesh of the ART code itself we follow how Ly-$\alpha$ photons are initially emitted and subsequently
scattered. As an example of an application of the Ly-$\alpha$ RT code developed for the ART code we focus on the most massive emitter
at $z \simeq 8$. This emitter is found within a highly ionized, butterfly--shaped bubble. Outside this bubble the Universe is highly neutral, whereas some
dense neutral cores associated with the forming galaxy exist within the bubble. Results for more emitters, different redshifts, multiple directions of observation, larger simulation boxes, etc.,
will be presented in future papers.
\subsection{Intrinsic Ly-$\alpha$ emission}
\label{intrinsic_emission}
There are a number of different mechanisms that can produce Ly-$\alpha$ emission from high-redshift objects.
Here we classify them into recombination and collisional emission mechanisms.
By recombination emission mechanisms we refer to Ly-$\alpha$ photons that are the final result of the cascading of
recombination photons produced in {\it ionized} gas.
The gas may be ionized by the UV radiation of hot, young, massive stars, from an AGN hosted by the galaxy, or
by the intergalactic UV background. By collisional emission mechanisms we refer to photons
that are produced by the radiative decay of excited bound ({\it neutral}) hydrogen states, with
collisions being the mechanism by which these excited states are being populated. This mechanism takes place when
gas within a dark matter halo is cooling and collapsing to form a galaxy and radiates some of the gravitational collapse
energy by collisionally excited Ly-$\alpha$ emission, when gas is shock heated by galactic winds or by jets in radio galaxies, and
in supernova remnant cooling shells. We underscore the
fact that the states are bound states, because in principle collisions can also cause ionization in which case we would have production
of Ly-$\alpha$ photons under a recombination mechanism, according to our definition conventions.
With the exception of AGN and jets, which are not included in ART simulations, as
well as the fluorescence emission due to the intergalactic UV background, which would be relevant at lower redshifts than
those we focus on in this study, we will try to briefly assess the importance of these separate Ly-$\alpha$ emission sources. This is interesting in particular
because, in addition to the different dependence on the physical parameters (i.e., different temperature dependence and dependence on
ionized versus neutral hydrogen), these mechanisms may also have a different spatial distribution. For example,
shock heated gas from gravitational collapse may be a spatially more extended Ly-$\alpha$ source than the gas photoionized
by UV radiation of young stars at the relatively
compact star forming regions. The dominant source of Ly-$\alpha$ emission may be what distinguishes most Ly-$\alpha$ emitters from the more extended sources
referred to in literature as Ly-$\alpha$ blobs \citep{steidel_etal00,haiman_etal00,fardal_etal01,bower_etal04}.
Before discussing the different Ly-$\alpha$ emission mechanisms, we should first mention that,
due to practical limitations (i.e., we can only use a relatively limited number of photons),
we use as source cells only the cells that contribute significantly to the total luminosity of the object.
Hence, we set a threshold on the cell luminosity and use as source cells only the cells whose luminosity exceeds this
threshold. Then by performing a convergence test, namely by doing runs assuming different luminosity thresholds up to the point where including
lower luminosity source cells does not change the results (within some pre-specified tolerance), we determine the minimum luminosity a simulation
cell must emit to be one of the cells where photons will originate from.
It is meaningful to consider a similar convergence check with respect to the Ly-$\alpha$ RT results, and this
will be discussed in a later section.
The convergence test reveals that the luminosity of the object is dominated by a few very luminous cells.
To get an idea, the luminosities of cells within the virial extent roughly range from $10^{41}$ to several times
$10^{54}$ photons/s.
The total luminosity of the object is the sum of the luminosities of the cells considered. Even though most of the volume, say, within the virial radius
is in low to moderate luminosity cells, the sum of the luminosities of these cells is insignificant compared to that of the less numerous
high-luminosity cells. For the object at hand the convergence test suggests that one can use as source cells only cells with
luminosities above $\simeq 5 \times 10^{50}$ photons s$^{-1}$. This value determines the relative importance of the different Ly-$\alpha$ emission mechanisms discussed
in what follows. With the aforementioned luminosity threshold, the total luminosity of the emitter at hand is roughly equal to
$4.8 \times 10^{43}$ ergs/s.
We sample the emission region (i.e., the cells with luminosity above the luminosity threshold discussed)
by emitting equal weight wave packets, but in numbers
that reflect the relative luminosities of the cells.
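This luminosity-weighted assignment of equal-weight wavepackets to source cells amounts to a weighted categorical draw after applying the threshold. The sketch below is ours (names and data layout are hypothetical), not the code's actual bookkeeping.

```python
import random

def draw_emission_cells(cell_lums, n_photons, L_min, rng):
    """Assign each of n_photons equal-weight wavepackets to a source cell,
    with probability proportional to cell luminosity, after dropping cells
    below the convergence threshold L_min.  cell_lums maps cell id -> L."""
    bright = [(cid, L) for cid, L in sorted(cell_lums.items()) if L >= L_min]
    ids = [cid for cid, _ in bright]
    weights = [L for _, L in bright]
    return rng.choices(ids, weights=weights, k=n_photons)
```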
Note that this discussion on the various mechanisms, emission rates, etc., is likely affected to some degree by the limited simulation resolution, a factor
that will be studied in detail in the future.
Furthermore, the approach adopted in this section is an 'order-of-magnitude' one. We defer a more thorough
and statistical analysis of the Ly-$\alpha$ emission sources in high
redshift galaxies to a future study, where all factors will be taken into account. For example, the discussion about the importance of the various
emission mechanisms must be extended to the after RT results and after including dust. This
is because it could, for example, be the case that recombination Ly-$\alpha$ photons, despite being more numerous as discussed
below,
may be more likely to be absorbed than collisional Ly-$\alpha$ photons, if one assumes that there is more dust in star forming regions -- where recombination
photons are generated -- than in regions where collisional Ly-$\alpha$ photons originate from.
\subsubsection{Ly-$\alpha$ photons from recombinations}
The recombination rate of a cell is
\begin{equation}
r=n_{e}n_{p} \alpha_{B}V
\label{rec_rate}
\end{equation}
with $n_{e}, n_{p}$ the number densities of electrons and protons, respectively, and
$V$ the volume.
In principle species other than hydrogen may contribute to $n_{e}$. Thus, $n_{e}$ in general is not equal to $n_{p}$.
In what follows, we take into account electrons contributed by the ionization of He.
Other BBN predicted species such as Li, Be and B
(with, anyway, tiny abundances), and elements produced through stellar processing such as C, N and O
are not taken into account.
Recombination photons are
converted with certain efficiency into Ly-$\alpha$ photons.
In particular, for a broad range of temperatures centered on $T=10^{4}$ K, roughly
$38 \%$ of recombinations go directly to the ground state.
A fraction $\sim 1/3$ ($32 \%$) of the recombinations that do not
go to the ground state go to $2S$ rather than $2P$ and then go to the ground state via two continuum photon decay \cite[cf. Table 9.1 of][]{spitzer78}.
Hence, only a fraction
$\sim 40 \%$ of the recombinations yield a Ly-$\alpha$ photon. The temperatures of simulation cells within the virial extent
of the emitter are in the $10^{2.4}-10^{6.3}$ K range, with most cells in the $10^{4}-10^{6}$ K range. Due to the weak temperature dependence of the
various recombination coefficients the above
conversion efficiencies are roughly applicable throughout this temperature range. Furthermore, if the gas is optically thick, then
photons that originate from recombinations to the ground state will be immediately absorbed by another
neutral hydrogen atom and eventually they, as well, will produce Ly-$\alpha$ photons.
Assuming for now that this is the case (as will be discussed later in this section), as well as that the medium is thick in Lyman-series photons, so that
all higher Lyman-series photons are
re-captured and eventually yield Ly-$\alpha$ photons, we adopt case B recombination. For
the recombination coefficient we use the fit obtained by \citet{hui_gnedin97}, accurate to $0.7\%$ for temperatures from 1 to
$10^{9}$K
\begin{equation}
\alpha_{B}=2.753 \times 10^{-14} \rm{cm}^{3} \rm{s}^{-1} \frac{\lambda^{1.5}}{\left[1+\left(\frac{\lambda}{2.74}\right)^{0.407}
\right]^{2.242}}
\end{equation}
with $\lambda=2 T_{i}/T$, and $T_{i}=157807$ K the hydrogen ionization threshold temperature.
In agreement with the above argument, the effective recombination coefficient at level 2P is approximately
$2/3$ of the case B recombination
coefficient and that is what we use to convert recombination rates into Ly-$\alpha$ photon emission rates.
Thus we assume that the conversion efficiency from recombination to Ly-$\alpha$ photons is exactly the same for all simulation cells.
This is a good assumption since the conversion efficiency has a very weak temperature dependence.
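Combining equation (\ref{rec_rate}), the \citet{hui_gnedin97} fit, and the effective $2/3$ factor, the per-cell Ly-$\alpha$ emission rate can be sketched as follows (function names ours; densities in cm$^{-3}$, volume in cm$^{3}$).

```python
import math

T_ION = 157807.0   # hydrogen ionization threshold temperature [K]

def alpha_B(T):
    """Case-B recombination coefficient [cm^3 s^-1]: Hui & Gnedin (1997) fit,
    quoted in the text as accurate to 0.7% for 1 K < T < 1e9 K."""
    lam = 2.0 * T_ION / T
    return 2.753e-14 * lam ** 1.5 / (1.0 + (lam / 2.74) ** 0.407) ** 2.242

def lya_photon_rate(n_e, n_p, T, volume):
    """Ly-alpha photons/s from a cell: recombination rate n_e*n_p*alpha_B*V,
    with the effective 2P coefficient taken as 2/3 of alpha_B (see text)."""
    return (2.0 / 3.0) * alpha_B(T) * n_e * n_p * volume
```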
The exact conversion efficiency for each source cell also depends on the rate at which collisions redistribute atoms between
the 2S and 2P state.
Collisions with both electrons and protons are relevant.
To get an idea for the cross sections involved, for
a temperature of $10^{4}$K and thermal protons $\sigma_{2S\rightarrow 2P} \simeq 3 \times 10^{-10} \rm{cm}^{2}$ \citep{osterbrock89}.
For thermal protons and electrons the thermally averaged collisional cross sections for
the processes
\begin{equation}
H(2P)+p\rightarrow H(2S)+p
\end{equation}
and
\begin{equation}
H(2P)+e\rightarrow H(2S)+e
\end{equation}
are $q_{p}=4.74\times 10^{-4} \rm{cm}^{3}/\rm{s}$ and $q_{e}=5.70\times 10^{-5} \rm{cm}^{3}/\rm{s}$, respectively,
for a temperature of $10^{4}$K \cite[cf. table 4.10 of][]{osterbrock89}.
The $2P$ to $2S$ transition is relatively important when the proton number densities are small ($<10^{4} \rm{cm}^{-3}$), and
in this case there is some probability that the Ly-$\alpha$ photon gets destroyed through a two quantum decay. For higher densities the opposite conversion
($2S$ to $2P$) becomes important, canceling out the destruction effect \citep{osterbrock89}.
At the lower density regime, which is applicable in the simulations since there
$n_{p}<10^{4}$ cm$^{-3}$ everywhere (within the virial extent the proton number density range is $10^{-4}-10^{2.5}$ cm$^{-3}$,
with most cells in the range $10^{-3}-1$ cm$^{-3}$), we can check how important this process really is by
comparing the radiative decay time and the typical time between collisions,
\begin{eqnarray}
\nonumber
p=\frac{q_{p}(T)n_{p}+q_{e}(T)n_{e}}{A_{21}} \\
\simeq 8.5 \times 10^{-13} n_{p} T_{4}^{-0.17}
\label{collisions_prob}
\end{eqnarray}
where the number densities of protons and electrons were assumed to be roughly the same and in $\rm{cm}^{-3}$, $A_{21}=6.25\times 10^{8} \rm{s}^{-1}$
is the spontaneous radiative decay for the Ly-$\alpha$ transition, and temperature is measured in $10^{4}$K units. The temperature
dependence of the collision rates is taken from
\cite{neufeld90}.
For the temperature and proton/electron density ranges relevant to the source cell conditions, the probability for a collisional
$2P$ to $2S$ transition is negligible, at least for the initial emissivity.
We discuss
the effect of collisions during the scattering of the photons in \S \ref{sec:collisions}.
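Equation (\ref{collisions_prob}) is straightforward to evaluate for the cell conditions quoted above. The sketch below applies the $T_{4}^{-0.17}$ scaling to the combined rate, as in the approximate form of the equation; names are ours.

```python
Q_P = 4.74e-4    # <sigma v> for H(2P)+p -> H(2S)+p at 1e4 K [cm^3/s]
Q_E = 5.70e-5    # <sigma v> for H(2P)+e -> H(2S)+e at 1e4 K [cm^3/s]
A_21 = 6.25e8    # Ly-alpha spontaneous radiative decay rate [1/s]

def p_collisional_2p_2s(n_p, T4=1.0, n_e=None):
    """Probability of a collisional 2P->2S transition per scattering,
    eq. (collisions_prob); densities in cm^-3, T4 = T / 1e4 K."""
    if n_e is None:
        n_e = n_p                    # n_e ~ n_p, as assumed in the text
    return (Q_P * n_p + Q_E * n_e) / A_21 * T4 ** (-0.17)
```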
\begin{figure*}[thb]
\centerline{{\epsfxsize=3.5truein\epsffile{lyman_series.eps}}\hspace{0.5cm}{\epsfxsize=3.5truein\epsffile{tau_rec.small.eps}}}
\caption{{\it Left panel:} Cumulative probability distribution of center-of-line optical depths of ART
simulation cells within the virial extent. The three different lines correspond
to the cell optical depth distribution in Ly-$\alpha$ ({\it solid}), Ly-$\delta$ ({\it dotted}) and Ly-limit ({\it dashed}) photons.
{\it Right panel:} Ly-$\alpha$ center-of-line optical depth of the simulation cells within the virial extent of the emitter plotted against the cell
recombination rate. Since only cells with the highest recombination rates ($\ge 10^{51}$ s$^{-1}$ or, equivalently, luminosities roughly
$\ge 5 \times 10^{50}$ photons s$^{-1}$) need to be used as source cells, and almost all of
these cells
have $\tau \ge 10^{3}$, roughly speaking our 'on-the-spot' approximation is satisfactory (see text for details).
\label{lyman_series}}
\end{figure*}
One assumption that we make is that the cascading of the Lyman series photons, as well as the re-emission and re-absorption of photons
from recombination to the ground state, is done 'on-the-spot', namely, locally. In our case 'locally' means within the same simulation cell.
This assumption is essential if one wants the Ly-$\alpha$ emissivity of a cell to depend on its own recombination rate only. If not, one
faces the complicated situation
where the Ly-$\alpha$ emissivity of one cell depends on the recombination rates and photon cascade processes that are happening in other cells as well.
The validity of our assumption depends on the optical depth of Lyman series and ionizing photons when traversing
a typical cell in the simulation (and
should also be affected somewhat by resolution).
In the left panel of Figure \ref{lyman_series} we show the optical depth probability distribution function for Ly-$\alpha$, Ly-$\delta$ and Ly-limit radiation.
The distribution function has as independent variable the optical depth of simulation cells within 10 physical kpc ($\simeq$ virial extent)
from the
center of the emitter.
These distributions are very similar, differing only by the values of $\tau$ because of different oscillator strengths and
characteristic frequencies.
Clearly, in all cases more than half of the potential source cells are not optically thick, and this is expected to worsen for ionizing radiation beyond the
Lyman limit. However, as shown in the right panel of Figure \ref{lyman_series} the optical depth of a cell correlates
with its recombination rate. In this figure the optical depth plotted
is that for Ly-$\alpha$ photons, but it is easy to see how this scales approximately with optical thickness for other Lyman-series photons. Since
only cells with recombination
rates higher than $10^{51}$ s$^{-1}$ (or equivalently with luminosities higher than roughly $5 \times 10^{50}$ photons s$^{-1}$)
are used as source cells, our 'on-the-spot'
assumption seems pretty satisfactory, if not always
accurate. It becomes less and less accurate the higher we go in the Lyman series, and of course beyond the Lyman limit, but for the time being
we content ourselves with this approximation, given the complexities introduced when this assumption is not adopted. We will investigate this point further
in the future.
Lastly, to get an idea of the physical conditions in the highest recombination rate (luminosity) source cells, we note that they
fall into two classes with respect to temperature and
neutral hydrogen fraction: one class contains cold gas elements ($T \sim 10^{3}$K),
with a neutral hydrogen fraction $>0.9$ (and high gas number density). The second class of very luminous cells consists of warmer
gas elements ($T\sim 10^{4}$ K and a bit higher).
In the context of Ly-$\alpha$ cooling radiation, discussed in the next section,
the first class of cells is unable to cool via atomic hydrogen cooling since these cells are cold, whereas the
second class of most luminous cells could cool via atomic hydrogen cooling temperature-wise, but this does not happen
because these cells are highly ionized.
\begin{figure}[hbt]
\centerline{{\epsfxsize=3.5truein\epsffile{cooling_vs_recombination_small.eps}}}
\caption{Maximum cooling Ly-$\alpha$ luminosity, $L_{cool}$, plotted against recombination Ly-$\alpha$ luminosity, $L_{rec}$, for
all ART simulation cells within the virial extent of a Ly-$\alpha$ emitter at
$z\simeq 8$. The cooling luminosity is the maximum possible Ly-$\alpha$ luminosity from cooling because it is derived assuming that all cooling radiation is
emitted in Ly-$\alpha$ photons. The solid line shows the case where the two luminosities are equal. Since, as discussed in the text, only cells with luminosities
roughly above $5 \times 10^{50}$ photons s$^{-1}$ contribute significantly to the
luminosity of the emitter, this figure shows that recombination is dominant
over cooling Ly-$\alpha$ radiation.
\label{cooling_vs_recombination}}
\end{figure}
\subsubsection{Ly-$\alpha$ photons from collisional excitations}
A collisional emission mechanism whose importance for the simulated objects
can be assessed relatively easily is that of atomic hydrogen cooling. Using the expression by \citet{hui_gnedin97}
for the hydrogen cooling rate (used in the ART
simulations analyzed here),
and assuming for the moment that this energy is all emitted in the form of Ly-$\alpha$ photons, we obtain for the luminosity (number of Ly-$\alpha$ photons/s) emitted by a cell
\begin{equation}
L_{cool}=4.6 \times 10^{-8} \frac{e^{-1.18355/T_{5}}}{1+T_{5}^{0.5}} n_{e} n_{HI} V \,
\end{equation}
with $T_{5}$ the temperature in units of $10^{5}$ K.
This is compared with the recombination luminosity $L_{rec}$ ($\simeq 0.68 r$) in Figure \ref{cooling_vs_recombination}.
Taking into account the results of the convergence test performed to specify the minimum cell luminosity that needs to be included
($\sim 5 \times 10^{50}$ s$^{-1}$), we see in Figure \ref{cooling_vs_recombination} that
the relevant cells are those where recombination processes dominate. Namely,
similar to previous studies \citep[e.g.,][]{fardal_etal01} we find that
the cooling radiation Ly-$\alpha$ contribution is subdominant compared to the recombination contribution, hence in what follows we focus only on the latter.
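The comparison just described can be sketched numerically. The following Python fragment encodes the two luminosities as given in the text; the cell parameters in the example (density, neutral fraction, $\sim$0.1 kpc cell size) are hypothetical illustrative values, not numbers taken from the simulation:

```python
import math

def L_cool(n_e, n_HI, V_cm3, T5):
    """Maximum Ly-alpha cooling luminosity (photons/s) of a cell, assuming
    all atomic hydrogen cooling emerges as Ly-alpha (expression in the text;
    T5 is the temperature in units of 1e5 K)."""
    return 4.6e-8 * math.exp(-1.18355 / T5) / (1.0 + math.sqrt(T5)) * n_e * n_HI * V_cm3

def L_rec(recombination_rate):
    """Recombination Ly-alpha luminosity: ~0.68 Ly-alpha photons per
    recombination (L_rec ~ 0.68 r, as in the text)."""
    return 0.68 * recombination_rate

# Illustrative warm cell: T = 1e4 K, n_e = 1 cm^-3, n_HI = 0.1 cm^-3,
# cell size ~0.1 kpc (hypothetical values, not taken from the simulation)
V = (0.1 * 3.086e21) ** 3  # cm^3
print(f"L_cool ~ {L_cool(1.0, 0.1, V, 0.1):.1e} photons/s")  # ~7e47
```

For such a cell the maximum cooling luminosity falls well below the $\sim 5 \times 10^{50}$ photons s$^{-1}$ source-cell threshold, in line with the figure.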
\subsubsection{Supernovae Remnants (SNR)}
A Ly-$\alpha$ source that yields Ly-$\alpha$ photons both from recombinations and collisional excitations is supernova remnants (SNR).
\citet{shull_silk79} have computed the time-averaged Ly-$\alpha$ luminosity of a population of Type II SNR using a radiative-shock code.
They find that the Ly-$\alpha$ luminosity of a galaxy due to SNR
is $L_{SNR}=3\times10^{43} n_{H}^{-0.5} E_{0}^{0.75} \dot{N}_{SN}$ ergs/s,
with $n_{H}$ the ambient density in cm$^{-3}$, $E_{0}$ the typical supernova energy in units of $10^{51}$ ergs, and $\dot{N}_{SN}$ the number of
supernovae per year. Strictly speaking, this quantity also depends on the assumed IMF, and on the lower and upper stellar masses of the
mass range over which the IMF is to be integrated. This expression includes both contributions, from recombination and collisional emission
mechanisms: from UV and X-ray ionization (coming from the hot SNR interior) of the surrounding medium and from
cooling shells, respectively. A thorough investigation of the relative importance of SNR Ly-$\alpha$ emission with respect to
that from young stars photoionization has been carried out by \citet{charlot_fall91,charlot_fall93}. The general conclusion reached is that for
a broad range of physical conditions and assumptions, the SNR contribution is at best a factor of 2.5 less than that from stellar ionizing
radiation.
These results render the effort to include the SNR contributions (which are, in any case, not resolved in the ART simulations) superfluous.
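The quoted \citet{shull_silk79} fit is easy to evaluate; a short sketch, with illustrative input values of our own choosing:

```python
def L_SNR_erg_s(n_H, E0_51, Ndot_SN):
    """Time-averaged SNR Ly-alpha luminosity (Shull & Silk 1979 fit quoted
    in the text): n_H in cm^-3, E0 in units of 1e51 ergs, Ndot_SN in SN/yr."""
    return 3.0e43 * n_H ** -0.5 * E0_51 ** 0.75 * Ndot_SN

# Illustrative numbers: n_H = 1 cm^-3, E0 = 1e51 ergs, 0.1 SN per year
print(f"L_SNR ~ {L_SNR_erg_s(1.0, 1.0, 0.1):.1e} ergs/s")  # 3.0e42
```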
\subsection{The Ly-$\alpha$ emitter before RT}
\label{beforert}
To get an idea of the size of the emitting region, the prevailing physical conditions, and for comparison with results obtained later after including
RT, in this subsection we briefly present the emission spectrum and image of the emitter as they would appear to an observer at $z=0$ if the Ly-$\alpha$ photons
escaped without any scattering.
An image and a spectrum
of the emitter along a certain direction of observation are shown in the left and right panels, respectively,
of Figure \ref{fig:iniemission}.
The image is a surface brightness map (in units of ergs s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$) of a roughly
$1.4 \times 1.4$ arcsecs$^{2}$ field which corresponds to approximately one third of the virial extent of the dark matter halo the emitter lives in
(with the virial extent $\simeq20$ physical kpc in diameter). There are two distinct emission regions, each corresponding to one of the two progenitors that merged and formed this object.
The color scale for the surface brightness is logarithmic. Clearly, the emission region is very small (the largest of the two structures is
at most $\simeq 2-2.5$ physical kpc in diameter, if one includes the faintest pixels), compared for example to the virial extent of the
dark halo. The resolution of this image is $0.01$ arcsecs ($\simeq$ 0.05 physical kpc), at least 10 times higher than the best resolution currently available.
As discussed before, for these results only cells with recombination rates higher or equal to $10^{51}$ s$^{-1}$ are used.
The initial frequency is chosen according to a Voigt profile that is sampled for each cell out to $10$ Doppler widths and shifted around the bulk
(peculiar + Hubble) velocity component along the direction of observation.
The number of photons used ($3 \times 10^{5}$) has been determined after a convergence study.
Note that when we study the convergence with respect to the number of photons we take into account that this must be done
in parallel with how far away in the wings
we go when sampling the emission Voigt profile of each cell, since the higher the number of
photons used the better one can sample frequencies further away from resonance. The convergence procedure gave the aforementioned number of photons and
initial emission frequency range (i.e., 10 thermal Doppler widths).
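The frequency-sampling step just described can be sketched with a simple rejection sampler. The truncation at 10 Doppler widths follows the text; the core-plus-wing approximation to the Voigt function (with a regularized wing term) is our own simplification for illustration, not the profile used in the RT code:

```python
import math, random

def sample_voigt_x(a, xmax=10.0, rng=random):
    """Draw a dimensionless frequency x (in Doppler widths) from an
    approximate Voigt profile, truncated at |x| <= xmax, by rejection
    sampling. Uses a core+wing approximation to the Voigt function,
    H(a,x) ~ exp(-x^2) + a/(sqrt(pi)(1+x^2)); adequate as a sketch only."""
    def profile(x):
        return math.exp(-x * x) + a / (math.sqrt(math.pi) * (1.0 + x * x))
    pmax = profile(0.0)  # profile peaks at line center
    while True:
        x = rng.uniform(-xmax, xmax)
        if rng.uniform(0.0, pmax) < profile(x):
            return x

# The drawn x is then shifted by the cell's bulk (peculiar + Hubble)
# velocity component along the direction of observation, as in the text.
```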
In the right panel of Figure \ref{fig:iniemission},
the frequency resolution is $\lambda/\Delta \lambda \sim 50000$.
The
line shape has converged, namely the peaks shown correspond to real velocity
substructure. For example, the most pronounced peak at $\lambda = 10952 \AA$ corresponds to the component of the peculiar velocity
along the direction of observation of the most luminous pixel of the image shown at the left panel (with coordinates on the image
(0.24,-0.42) arcsecs, roughly). The
dominant contribution to this
pixel comes from the highest recombination cell of the emitter with a recombination rate equal to $\simeq 1.3 \times 10^{55}$ s$^{-1}$ and
a peculiar velocity component along the direction of observation equal to $0.27 \times 10^{-3}$ times the speed of light.
As mentioned, for each emission cell the Voigt profile was used and sampled up to 10 thermal Doppler widths.
The total width of the line however is dominated by the bulk velocity structure of the emitter. The full width of the line
at the minimum flux level shown in the figure ($10^{-22}$ ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$) is roughly $15 \AA$ (with the width
if bulk velocities are set to zero being less than half this). This width corresponds to
projected velocities along the direction of observation roughly in the $[-200,200]$ km/s range (this is just approximate,
note however that the line is not symmetric around the
rest frame resonance). This velocity range is what is expected given the peculiar velocities of the emitting cells.
Also shown is the spectrum of the smallest of the two substructures ({\it dotted line}) of the image shown in the left panel. One can
easily infer what the spectrum of the large Ly-$\alpha$ substructure looks like.
The results discussed in this section may be specific to the emitter at hand, but the considerations themselves are pretty general.
The same kind of procedure must be repeated for each individual emitter identified in the simulations.
\begin{figure*}[bht]
\centerline{{\epsfxsize=3.5truein\epsffile{pgplot.ps}}\hspace{0.5cm}{\epsfxsize=3.5truein\epsffile{initial_emission.eps}}}
\caption{{\it Left panel:} Image of Ly-$\alpha$ direct emission (i.e., assuming the Ly-$\alpha$ photons escape directly to the observer after they are produced). The
approximately $1.4 \times 1.4$ arcsecs$^{2}$ ($\simeq 6.5 \times 6.5$ physical kpc) field corresponds to roughly one third of the virial extent of the dark matter halo where the emitter lives.
The surface brightness (SB) is bolometric and in units
of $\rm{ergs} \ \rm{s}^{-1} \rm{cm}^{-2} \rm{arcsecs}^{-2}$. The SB color scale is logarithmic. The object had undergone a recent merger, that is why there
are two distinct luminous blobs that dominate the emission. {\it Right panel:} Initial Ly-$\alpha$ injection spectrum. Shown are the total spectrum
({\it solid line}), namely the spectrum for the image shown in the left panel, and the spectrum of the smallest of the two blobs in the image ({\it dotted line}).
Note that the wavelength is in $10^{4} \AA$ (i.e., $\mu m$). The dashed line shows the Ly-$\alpha$ resonance for $z \simeq 8$. See text for discussion of the structure of the line.
\label{fig:iniemission}}
\end{figure*}
\subsection{The Ly-$\alpha$ emitter after RT}
\label{afterrt}
It is interesting to first treat the emitter as a finite configuration. In this case, as soon as the photons exit this configuration (whose size is
taken to be roughly equal to the virial extent of the object, namely 10 physical kpc)
they travel towards the observer. In other words at first we ignore the effect of the GP absorption. This
context is pretty similar to that of \S \ref{sec:test} and \S \ref{simple_models}.
We focus on the emergent spectrum shown with the solid line in the left panel of Figure \ref{rt_only}.
\begin{figure*}[th]
\centerline{{\epsfxsize=3.5truein\epsffile{rt_only.eps}}\hspace{0.5cm}{\epsfxsize=3.5truein\epsffile{pgplotasinitial.ps}}}
\caption{{\it Left panel:} Emerging Ly-$\alpha$ emission spectrum before adding the Ly-$\alpha$ Gunn-Peterson absorption (GPA)
({\it no GPA, solid line}), with the GPA but without the red damping wing ({\it GPA, no DW, short-dashed line}),
and with the GPA and the damping wing ({\it GPA+DW, dotted line}).
Note that the wavelength is in $10^{4} \AA$ (i.e., $\mu m$). The long-dashed line shows the Ly-$\alpha$ resonance for $z \simeq 8$.
{\it Right panel:} Image of the Ly-$\alpha$ emitter after RT, GPA and DW. The
$1.4 \times 1.4$ arcsecs$^{2}$ ($\simeq 6.5 \times 6.5$ physical kpc) field corresponds to roughly one third of the virial extent of the dark matter halo where the emitter lives.
The surface brightness (SB) is bolometric and in units
of $\rm{ergs} \ \rm{s}^{-1} \rm{cm}^{-2} \rm{arcsecs}^{-2}$. The SB color scale is logarithmic.
\label{rt_only}}
\end{figure*}
The spectrum converges if $3 \times 10^{5}$ photons are used, namely the same number of photons needed for the initial emission results (discussed in
\S \ref{beforert}) to converge. Of course, the higher
the number of photons the better one samples low intensity wavelengths. We find that the number of
photons used affects wavelength ranges with flux less than about $10^{-22}$
ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$. The spectrum shown in Figure \ref{rt_only} is produced using $10^{7}$ photons.
The spectral resolution used in the figure is $\lambda/\Delta \lambda \simeq 5000$, whereas the spectrum is
identical if ten times
better resolution is used.
We have performed a large set of convergence tests, among which the most interesting involve different (smaller) cores for the acceleration scheme discussed in
\S \ref{skipping}, and/or a larger minimum $\tau_{0} \phi(x_{i})$ for which the acceleration scheme discussed in \S \ref{controlled} is used.
Our results are pretty robust, as should be expected following the discussion in \S \ref{controlled} and \S \ref{skipping}
with respect to the one-cell convergence results.
Even though it was derived for a slab, it is interesting to check whether some predictions
of the Neufeld solution, such as the frequency where the spectrum has a maximum ($\simeq 0.9 (\alpha \tau_{0})^{1/3}$), are roughly
in agreement with the spectrum of the simulated emitter.
Of course, the Ly-$\alpha$ emitter environment is neither isothermal nor homogeneous, and it is not obvious how to define an 'effective' temperature and
optical depth for these purposes.
Thus, focusing on order of magnitude checks, setting the expression for the frequency where the peak emission occurs in a slab
equal to the frequency where
the spectrum of the emitter peaks (say in red wavelengths) one finds that the 'effective' optical depth and 'effective' temperature of the equivalent
slab (i.e., the slab that would give a spectrum with peak at the frequencies where the emitter spectrum peaks) roughly satisfy the relation
$\tau_{0} T \simeq 1.4\times 10^{10}$ with $T$ measured in eV.
The effective optical depth will be at least
equal to the most optically thick cell the photon found itself in. Since the emission originates from the most optically thick cells (see
Figure \ref{lyman_series}),
$\tau_{0}$ will be at least $10^{3}$. If we assume for example a temperature $T=10^{5}$K, the above relation yields $\tau_{0} \simeq 10^{9}$ which is roughly the optical depth
from the center of the object to its virial radius along the direction of observation.
Thus, the maximum of the spectrum is roughly where it is expected to be if one assumes the scaling from the Neufeld solution ($\simeq 2550$ km/s).
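As a quick numerical cross-check of this scaling, a few lines of Python reproduce the quoted peak offset to within $\sim$10\%. The fitting forms $a \simeq 4.7\times 10^{-4}\,(T/10^{4}\,{\rm K})^{-1/2}$ and $v_{th} \simeq 12.85\,(T/10^{4}\,{\rm K})^{1/2}$ km/s are standard values we insert here; they are not taken from the text:

```python
def neufeld_peak_velocity(T_K, tau0):
    """Velocity offset (km/s) of the Neufeld slab spectrum peak,
    x_peak ~ 0.9 (a tau0)^(1/3), converted via the thermal velocity.
    Assumes a ~ 4.7e-4 (T/1e4 K)^(-1/2), v_th ~ 12.85 (T/1e4 K)^(1/2) km/s."""
    a = 4.7e-4 * (T_K / 1.0e4) ** -0.5
    v_th = 12.85 * (T_K / 1.0e4) ** 0.5
    x_peak = 0.9 * (a * tau0) ** (1.0 / 3.0)
    return x_peak * v_th

# Effective slab matching the emitter (see text): T = 1e5 K,
# tau0 ~ 1.4e10 / T[eV] ~ 1.6e9
print(f"peak offset ~ {neufeld_peak_velocity(1.0e5, 1.6e9):.0f} km/s")
# ~2.3e3 km/s, the same order as the ~2550 km/s quoted above
```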
The emerging spectrum looks pretty similar to the spectrum that would emerge from a static
configuration, namely it has two quite similar peaks, one to the red and one to the blue of the Ly-$\alpha$ resonance.
Note however that the peaks are not really symmetric, since the flux decreases more rapidly near the resonance.
The width of the blue peak at a flux level of $10^{-22}$ ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$ is roughly 180 $\AA$ or $\simeq 5000$ km/s.
We obtain quite a similar spectrum if we set the bulk velocity field to zero in the code, that is kinematics do not seem to play a crucial
role in this case.
In the case of the specific Ly-$\alpha$ emitter and for the specific direction of observation,
analyzing the bulk velocity field (i.e., the peculiar velocity field
since the Hubble expansion is negligible at the distances we are working) we find that there is some net infalling motion, but with
significant transverse velocity components as well. Hence, the obtained static--like spectrum does not come as a surprise.
Furthermore, the peak asymmetry due to the existence of bulk fields depends on the relative magnitudes of the bulk and thermal velocities (e.g.,
if the bulk velocity is close to the thermal one we do not expect a significant asymmetry, since a single scattering can give, e.g., a red photon
moving in a contracting medium a large enough shift to erase the effect of the contraction), which vary from cell to cell, and it also
depends on the optical thickness. Since thermal velocities are typically small compared to bulk velocities in simulations,
the optical thickness is the more crucial factor. For such extremely optically thick media, where in the context of the Neufeld solution
the spectrum is expected to peak at a typical frequency of $\simeq 2550$ km/s, bulk velocities of at most a few hundred km/s
will not appreciably favor blue over red photons (even if the bulk motion were purely inward), since both red and blue photons see a very optically thick
medium.
It would be interesting to have a sense of the number of scatterings each photon undergoes before exiting. With
the acceleration methods that we have to use, though, it is difficult to
keep track of this quantity.
A simple order-of-magnitude estimate of the number of scatterings in such, or at least similar, configurations can be obtained from one of the
examples discussed in \S \ref{simple_models}. For the most optically thick case ($\tau_{0} \simeq 8.3 \times 10^{6}$) and a point source emitting photons that
propagate in a stationary medium the number of scatterings in one run of $\simeq 2000$ photons varies from $2.5 \times 10^{3}$ up to $4\times 10^{7}$, with an average of
$8.3\times 10^{6}$, and a median of $6.6 \times 10^{6}$. Two thirds of the photons are in the $[4.6 \times 10^{6},2.1 \times 10^{7}]$ scatterings range.
More generally, we find that similar to the Neufeld problem, the average number of scatterings in this spherical configuration
scales linearly with optical depth at such thick media (see discussion in \S \ref{sec:neufeld}),
with the proportionality
constant of order unity. From this linear scaling of the average number of scatterings with optical depth,
one can obtain a rough idea of the average number of scatterings of photons in the simulation environments (for the cell optical
depth range in the simulations see, e.g., the left panel of Figure \ref{lyman_series}).
These numbers also make clear why it is absolutely not feasible to perform Ly-$\alpha$ RT in the much thicker and more complicated
simulation environments without some acceleration schemes.
Photons at very optically thick regions have to shift off resonance significantly to escape, and hence
are the ones responsible for the significant line width of the spectrum (along with the $1/x^{2}$ behavior of the wing optical depth, as
discussed previously). It is meaningful to ask whether one should really care about these photons, or instead
ignore them because they may be trapped indefinitely (for any practical purpose) in the dense cells and thus do not participate in the radiation propagation.
To answer this question we estimate the photon diffusion time and compare it to the sound crossing and dynamical time scales (other time scales, such as
the Hubble time, which is $\sim 1$ Gyr at $z=8$, are clearly large enough to be irrelevant).
As with the number of scatterings, to find the exact diffusion times one should follow the detailed RT, which, given our acceleration methods, is not
done. Instead we use some useful scalings.
Since the average number of scatterings in very optically thick media is roughly equal to $\tau_{0}$, then the diffusion time
is roughly $t_{d} \simeq N_{sc} l_{mfp}/c$ with $l_{mfp}$ the mean free path between scatterings defined through
$\langle\tau\rangle=\int_{0}^{\infty} \tau e^{-\tau} d\tau=1$.
In other words, since $\tau_{0}=n \sigma(x=0) L$ and $\tau=n \sigma(\tilde{x}) l$, the mean free path in units of the total (half) width of the slab is $[\sigma(x=0)/\sigma(\tilde{x})]\,(1/\tau_{0})$,
with $\sigma(x=0)$ the cross section at the line center and $\sigma(\tilde{x})$ the cross section calculated at an effective $\tilde{x}$ so that the above definition for the
mean free path is valid.
Substituting in the expression for $t_{d}$ we obtain $t_{d} \sim \sigma(x=0)/\sigma(\tilde{x}) L/c$.
For a slab with $\tau_{0}=10^{6}$ we obtain a mean number of scatterings equal to $9.5 \times 10^{5}$ and
a median equal to $7.2 \times 10^{5}$, whereas $67\%$ of the photons have between $5.1\times 10^{5}$
and $2.3\times 10^{6}$ scatterings. For the mean free path we find a mean equal to
$2.4 \times 10^{-5}$, a median equal to $1.9 \times 10^{-6}$ and $67 \%$ of the scatterings correspond to mean free paths between $1.4 \times 10^{-6}$ and
$7.7 \times 10^{-6}$, all in units of the (half) width of the slab $L$.
For the total distance traveled by the photons before escaping, we find an average distance of 40.2, a median of
32.3 -- implying a $\sigma(x=0)/\sigma(\tilde{x})$ ratio of order 10 -- whereas $67\%$ of photons exit after traveling a distance between 16.7 and 96.3, with these numbers as before in units of the width of the
slab. Based on spatial random walk arguments one would have $N_{sc} \sim \tau_{0}^{2}$, hence the distance before escape would be $\sim \tau_{0}$
or $10^{6}$ for the specific example we use here. However, as discussed in \S \ref{sec:neufeld}
$N_{sc}$ scales linearly with $\tau_{0}$ and this makes a big difference.
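Putting numbers into the $t_{d} \sim [\sigma(x=0)/\sigma(\tilde{x})]\,L/c$ scaling gives a minimal sketch; the cross-section ratio of $\sim$10 comes from the $\tau_{0}=10^{6}$ slab experiment above, while the 1 kpc length scale is an illustrative choice rather than a simulation value:

```python
KPC_CM = 3.086e21   # cm per kpc
C_CM_S = 2.998e10   # speed of light, cm/s
YR_S = 3.156e7      # seconds per year

def diffusion_time_yr(L_kpc, sigma_ratio=10.0):
    """Photon diffusion time t_d ~ [sigma(x=0)/sigma(x~)] L/c (see text),
    with the cross-section ratio ~10 inferred from the tau0 = 1e6 slab run."""
    return sigma_ratio * L_kpc * KPC_CM / C_CM_S / YR_S

print(f"t_d ~ {diffusion_time_yr(1.0):.1e} yr for L = 1 kpc")  # ~3e4 yr
```

A diffusion time of a few $\times 10^{4}$ yr per kpc is far shorter than typical dynamical times, consistent with the comparison made below.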
We find that the sound crossing time is significantly higher than the dynamical time for most simulation cells, hence the latter is the relevant time against which
the diffusion time must be compared.
We find that the dynamical time scale is at least three orders of magnitude larger than $L/c$,
which in turn is within a factor of order $10^{2}$ -- for the various physical conditions in the simulation cells -- of $t_{d}$. Note that this comparison also justifies the use
of 'static' simulation outputs where the RT is performed, even though we plan on investigating the possibility of incorporating the
RT scheme into the dynamical evolution in the simulations.
Furthermore, the effect of the simulation resolution on these conclusions will be investigated in a future study.
The scattering process diffuses the initially emitted photons over a larger area and hence lowers the {\it number} surface brightness
(i.e., number per s cm$^{2}$ arcsec$^{2}$ rather than energy per s cm$^{2}$ arcsec$^{2}$). In general the surface brightness itself can go either up or down, depending
for example on the velocity structure of the medium. To quantify this effect on a photon-by-photon basis
we choose to calculate the distance on the plane of the image between the initial
emission point and the point (pixel) where the photon makes its maximum contribution to the image (see \S \ref{images_spectra}
on how spectra and images are obtained).
We find that these distances vary from roughly $10^{-3}$ to $10$ physical kpc, with a median of 0.27 kpc and a mean of 0.31 kpc.
Given that the largest of the two emission regions has a diameter of $\sim 2-2.5$ physical kpc (see, Figure \ref{fig:iniemission}),
this means that the 'size' of the luminous part of the object increases on average by more than 10\% due to scattering.
If, instead, we focus on the region where a certain fraction of photons originates from we obtain quite similar results. For example,
ignoring the effects of RT, $90\%$ of the emitted photons that would reach the observer would originate within a radius of roughly
2.5 physical kpc. The same percentage of photons after taking into account RT would come from a radius of roughly 2.9 physical kpc.\footnote{Note that the
pixels in the right panel of Figure \ref{rt_only} that give the impression of a diffusion of the photons larger than our $\sim 10\%$ estimate
correspond to pixels with a practically zero number of photons.}
So far we have been ignoring the GP absorption. When adding this absorption we consider two distinct cases.
In the first case we include the red damping wing of the GP absorption, and in the second case we set it equal to zero. The latter best-case
scenario is what would happen if, for example, the emitter were inside the HII region of a very bright quasar.
The spectrum obtained in the first case is shown with the dotted line in the left
panel of Figure \ref{rt_only}, whereas the spectrum in the second case is shown with the short dashed line.
An image of the emitter as would appear on earth with the GP absorption {\it and} the damping wing
included is shown in the right panel of Figure \ref{rt_only}.
Not surprisingly, when the damping wing is not taken into
account the spectrum is identical to that before the GP absorption, except that all flux blueward of the Ly-$\alpha$ resonance is missing.
When including the damping wing the maximum flux is suppressed by roughly a factor of 61.7 with respect to the maximum flux without it.
This line is still quite wide, with a width of approximately 1370 km/s at a flux level of $10^{-21}$ ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$, and
a FWHM roughly $620$ km/s.
Lastly, these results have converged with respect to both the number of photons and the radius where the detailed RT stops (and beyond which the GP absorption
is added). More specifically, we find that the number of photons required for the initial (no RT) emission to converge ($3 \times 10^{5}$)
is enough for the with RT and GP absorption results. And, the results also converge if a 10 physical kpc radius is used for the detailed RT and
beyond that the GP absorption is added.
Convergence has been checked also with respect to the minimum cell initial luminosity considered. We find that
the results converge if the minimum luminosity discussed in \S \ref{intrinsic_emission} in the context of initial emission convergence is used.
\subsection{Some additional physics considerations}
\label{phys_cons}
Here we discuss the importance of collisions while the photons are propagating, as well as
the possible role of dust (currently not taken into account).
\subsubsection{Collisions}
\label{sec:collisions}
While the photons are undergoing scattering,
collisions should be considered in the following three contexts: (i) collisional redistribution within the $n=2$ state; if, for example,
a collision takes the atom from the $2P_{3/2}$ to the $2S_{1/2}$ state, then the Ly-$\alpha$ photon is destroyed through a two-photon decay of the
$2S_{1/2}$ state, whereas if the collision takes it to the $2P_{1/2}$ state the scattering phase function will be different; hence in either case it is relevant
to see how probable the collisional redistribution is; (ii) collisional de-excitation of the $n=2$ state, in which case the
photon is lost; (iii) collisional broadening of the line, which could cause non-coherence in the rest frame of the atom.
The RT code can take all these processes into account, but here we develop some intuition as to their importance.
In fact, since, as will be shown, these processes are in practice negligible, the corresponding calculations in the RT code were switched off when
producing the results presented in this study.
Referring to cases (i) and (ii), the largest collisional cross sections are for momentum changing transitions \cite[$\Delta L=\pm1$; e.g.,][]{osterbrock89}.
As discussed already, both collisions with electrons and protons are relevant, but protons are more significant in case (i), whereas electrons are more
significant in case (ii). We have already calculated the probability per scattering that the $2P \rightarrow 2S$
transition of case (i) happens (see equation (\ref{collisions_prob})).
The maximum value of this probability for the conditions of the simulation cells is roughly
$10^{-10}$ (assuming $T_{4}=1, n_{p}=10^{2}$ cm$^{-3}$, with the latter being of the order of the maximum proton
number density of cells in simulations. The temperature
dependence is so weak that it does not really matter what temperature one assumes, for order of magnitude estimates).
So, unless a photon undergoes $10^{10}$ scatterings, collisions of the type (i) should not matter. The cells that are relevant for this
are optically thick cells where the photons scatter repeatedly. Since as we saw $N_{sc} \simeq \tau_{0}$
and none of the simulation cells has $\tau_{0}$ larger than a few times $10^{9}$, collisions should not have a significant impact.
Note that for most
cells the number of scatterings for which collisions may start to matter is orders of magnitude higher than $10^{10}$ (i.e., what
is described above is the worst case scenario as far as the effect of collisions is concerned since it assumes the {\it maximum}
proton number density, present
in very few cells).
If these collisions do not matter then collisions of type (ii), which have smaller cross sections, should not matter either.
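The worst-case argument in this paragraph can be checked in a couple of lines, inverting the per-scattering probability of equation (\ref{collisions_prob}) at the maximum proton density (the $n_{p}=10^{2}$ cm$^{-3}$ value is the worst case quoted in the text):

```python
def scatterings_for_collision(n_p, T4=1.0):
    """Mean number of scatterings before a collisional 2P -> 2S transition
    occurs, i.e. the inverse of the per-scattering probability
    ~8.5e-13 n_p T4^-0.17 quoted in the text."""
    return 1.0 / (8.5e-13 * n_p * T4 ** -0.17)

# Worst case: n_p ~ 1e2 cm^-3 (maximum cell proton density)
N = scatterings_for_collision(1.0e2)
print(f"~{N:.1e} scatterings needed")
# ~1.2e10, above the maximum N_sc ~ tau0 ~ a few x 1e9 in the cells
```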
In case (iii), if the atom suffers collisions with other particles while it is emitting, the phase of the emitted radiation can be altered
suddenly. If the phase changes completely randomly at the collision times, then information about the emitting radiation is lost and coherence is
destroyed. In this case, in the rest frame of the atom, the line profile is Lorentzian but the total width is the natural width plus the frequency of
collisions the atom experiences on average.
Since the importance of this effect as well is determined by a comparison of the radiative decay time and the
time between collisions (i.e., equation \ref{collisions_prob}), from the above discussion it becomes clear that it is also negligible.
\subsubsection{Dust}
\label{sec:dust}
Dust absorbs Ly-$\alpha$ photons. Thus, one would expect that dust, in the presence of scattering that traps photons, could have
a significant effect, and that this may be true even if it is present in small amounts, as is expected to be the case for the $z\simeq 8$
emitter we discuss (with a metallicity roughly equal to 0.1 of the solar metallicity). Indeed, \cite{charlot_fall91} found that only a tiny fraction of Ly-$\alpha$ photons escape from
a static, neutral ISM even if there is a tiny amount of dust present. To include the effect of dust absorption in simulations
we will have to implement a recipe to estimate the amount of dust.
Even though one can come up with an observationally motivated recipe (albeit with
unknown applicability at redshifts as high as 8), we postpone such a treatment
for a future study, since the main focus of the current study is the Ly-$\alpha$ RT scheme (which nevertheless includes the probability per scattering
that the photon will be absorbed, but this probability is currently set to 0).
However, the Ly-$\alpha$ emitter results we present in this study should not be taken as unrealistic, since it is not obvious
how these results will change if we include the effects of dust.
More specifically, many star-forming galaxies are observed to have
significant Ly-$\alpha$ luminosities \cite[e.g.,][]{kunth_etal98,pettini_etal00}, and this is usually attributed to the presence of galactic winds in these
systems that allow the Ly-$\alpha$ photons to escape after much fewer scatterings than in the static medium case.
These data seem to support the idea that it is the kinematics of the gas rather than the dust content that is the dominant Ly-$\alpha$ escape regulator.
Furthermore, \cite{neufeld91} found that under suitable conditions the effects of dust absorption may actually increase rather than diminish
the observed Ly-$\alpha$ line strength relative to radiation that suffers little or no scattering. This would happen for example in a multiphase medium consisting
of dusty clumps of neutral hydrogen embedded within a relatively ``transparent'' medium. If most of the dust lies in cold neutral clouds then Ly-$\alpha$ photons, not
being able to penetrate those clumps, will not be affected as much by the presence of dust \cite[see also][]{hansen_oh05}. Although
there is no direct observational
evidence to support this structure for the ISM (i.e., that dust lies preferentially in cold, neutral hydrogen clumps, even though the clumpiness in the
distribution of neutral hydrogen itself seems to be established observationally \cite[see][and references therein]{hansen_oh05}),
such a morphology of the dust and atomic hydrogen distribution
could help account for the lack of strong correlation between dust content -- inferred from metallicity or submillimeter emission --
and Ly-$\alpha$ equivalent width. For example, some dust-rich galaxies have substantially higher Ly-$\alpha$ escape fraction than less dusty emitters
\citep{kunth_etal98,kunth_etal03}. In addition, \cite{giavalisco_etal96} found that there is no correlation between the Ly-$\alpha$ equivalent widths and the slope
of the UV continuum, which is a measure of the continuum extinction and hence of dust content.
Another reason why it is not obvious how the results presented here will change if we take dust into account,
is that in the current version of the ART code molecular hydrogen forms only through the catalytic action of electrons.
When molecular hydrogen formation on grains is included in the code,
some of what is currently taken to be neutral atomic hydrogen will transform into
molecular hydrogen, hence this effect will decrease the optical thickness of what currently are the thickest cells.
\section{Summary }
\label{sec:conclusions}
We develop a Ly-$\alpha$ RT code applicable to gasdynamics cosmological simulations.
High resolution, along with appropriately treated cooling, can lead
to very optically thick environments.
Solving the Ly-$\alpha$ RT even for one very thick simulation cell takes a long time. Solving it for the whole simulation box, or a significant
fraction of it, takes an unrealistically long time.
Thus, we develop accelerating schemes to speed up the RT.
We treat the moderately thick cells by skipping the numerous core scatterings which are not associated with any significant spatial
diffusion, and go directly to the scattering that takes the photon outside of the core. We use depth dependent core definitions, and find
that quite large core values can be used.
For the very optically thick cells we motivate our treatment from the classical problem of resonant radiation transfer in a semi-infinite slab.
We find that with some modifications, since the simulations have cubic cells rather than slabs,
we can use the analytical solution derived by \citet{neufeld90} for the problem of the semi-infinite slab.
With these accelerating methods, along with the parallelization of the code we made the problem of Ly-$\alpha$ RT in the complex environments of
cosmological simulations tractable and solvable.
Even though our approach assumes a cell structure for the simulation outputs, as is inherently the case in AMR codes,
the Ly-$\alpha$ RT code we discuss is applicable to outputs from all kinds of cosmological simulation codes.
This is true since
one can always create an effective mesh by interpolating the values of the various physical parameters.
We perform a series of tests of the RT code, and then we apply it to ART cosmological simulations.
We focus on the brightest emitter in those simulations at $z\simeq 8$.
A first interesting result for this emitter pertains to its intrinsic emission region and mechanisms.
The emission region consists of two smaller regions, each corresponding to one of the two main progenitors that merged to form the
emitter at $z\simeq 8$. Both regions are quite small, with the larger of the two having a diameter of $2-2.5$ physical kpc.
Furthermore, recombination-produced Ly-$\alpha$ photons are the dominant intrinsic Ly-$\alpha$ emission component, with collisional excitation and SNR-produced
Ly-$\alpha$ photons being subdominant.
The intrinsic luminosity of the emitter is $4.8 \times 10^{43}$ ergs/s, whereas the injection spectrum (i.e., initial emission spectrum)
shows significant velocity structure.
After performing the Ly-$\alpha$ RT, but before adding the GP absorption, the emitter spectrum obtained
resembles that of a very optically thick static configuration, despite the slight trend for inward radial
motions. More specifically, we obtain the usual double horn spectrum. This happens because (i) even though there is
some net inward radial motion, there are still significant tangential peculiar velocity components, and (ii) the optical
depth is so high that velocities of order some hundreds km/s will not favor blue versus red photons (i.e., in order to escape,
both kinds of photons have
to shift off resonance by much more than the shift due to peculiar velocities, thus neither kind of photon is particularly favored
by the existence of bulk motions).
In other words, the velocity information is in fact lost because of the extremely high optical depth.
The width of the two horns is noticeably high ($\sim 5000$ km/s), but in agreement with what is expected
for the high simulation column densities. The size of the emitter increases, since the scatterings disperse the photons on a larger area.
We find that on the plane of the emitter image, a photon on average escapes at a distance
of about $10 \%$ of the initial (before RT) emitter size from the point it was originally emitted.
We include the GP absorption in two different ways: without and with the red damping wing. In the first case the spectrum is identical
to that when the GP is not included, with the difference that now we get only the red peak (rather than both the red and blue peaks).
This case would correspond to the situation where the Ly-$\alpha$ emitter lies within the HII region of a very bright quasar.
In the second case, where the damping wing is taken into account, the red peak is also affected.
Its maximum flux is suppressed compared to when no damping wing is used by roughly a factor of 61.7. The resulting line after including the wing
is still quite broad with a velocity width of about $1350$ km/s at a flux level of $10^{-21}$ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$, and a FWHM of
about $620$ km/s.
The line is quite displaced redward from the Ly-$\alpha$ resonance, and reaches a maximum monochromatic flux of $10^{-20.2}$ ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$.
Attempting a detailed comparison with existing observations, or discussing detection prospects for an object such as
the simulated emitter is beyond the scope of this study. We have studied only one emitter, and this for only one direction of observation since
our main goal was to use it as an application for the Ly-$\alpha$ RT code. Thus, we do not have a large enough and representative simulation sample yet.
Furthermore, currently the highest redshift where a Ly-$\alpha$ line has been observed is $\sim 6.6$ \citep{kodaira_etal03}\footnote{The detection of a
$z=10$ Ly-$\alpha$ emitting galaxy was recently reported by \citet{pello_etal04} following a color selected survey for $z>7$ galaxies located behind a well
studied gravitational lens cluster, but the exact nature of this source remains contentious \citep[e.g.,][]{weatherley_etal04}} and
it is not known how different the properties of higher redshift emitters are from those of lower redshift ones.
The most recent report at $z=9$ is that of \citet{willis_courbin05}, which finds no detections; the limited sky area coverage is possibly
a significant factor contributing to this non-detection.
Instead, we content ourselves here with a simple order of magnitude comparison.
The intrinsic Ly-$\alpha$ luminosity of our emitter is consistent with luminosities reported in literature. For example, the
highest Ly-$\alpha$ luminosity of the $z=5.7$ sample of \citet{hu_etal04} is roughly $6 \times 10^{43}$ ergs/s.
Higher luminosities than those have been inferred for Ly-$\alpha$ blobs, rather than emitters. For example, the most
luminous blob in the sample of \citet{matsuda_etal04} has a Ly-$\alpha$ luminosity of $1.1 \times 10^{44}$ ergs/s.
Most observed Ly-$\alpha$ emitters are unresolved, and the simulated emitter is expected to be unresolved as well.
Reported sizes for the observed objects are in the $\sim$ few kpc range \cite[e.g.,][]{hu_etal02}.
Ly-$\alpha$ blobs on the other hand are considerably more extended, with sizes $\sim 100$ kpc \citep{matsuda_etal04}.
The widths of the (lower z) observed lines are typically a few hundred km/s, whereas the FWHM of the simulated line is roughly $620$ km/s.
As discussed already, the velocity width of the ART emitter could be affected by the very high H column densities which will
drop as soon as molecular hydrogen formation on dust grains is taken into account.
In terms of the detectability, if one adopts the present day
limit of ground based detections of $\sim 10^{-18}$ ergs s$^{-1}$ cm$^{-2}$ $\AA^{-1}$, clearly our simulated emitter
would be orders of magnitude fainter. If the emitter is embedded within the HII region of a bright quasar, in which case the red damping wing
will be suppressed, the brightness is marginally below the sensitivity of current ground based instruments.
Note, however, that the prospects of detection will be much better for JWST which is expected to be able to detect $\sim 400$ times fainter objects than currently
studied with ground based infrared telescopes.
\acknowledgements
I am grateful to N. Y. Gnedin and A.V. Kravtsov
for many useful discussions and guidance, for comments on the manuscript, and for allowing me to use their simulations.
I would like to thank P. Jonsson, D. Neufeld, J. Rhoads, Y. Taniguchi, and Z. Zheng for fast and
comprehensive responses to my questions. This work benefited greatly from my interaction
with J. Carlstrom, A. Konigl, and A.V. Olinto, and was supported by the National Science Foundation (NSF) under grants
ASTR 02-06216 and ASTR 02-39759, by NASA through grants NAG5-13274 and NAG5-12326, and by the Kavli Institute for Cosmological Physics
at the University of Chicago.
The author also acknowledges support through an award from the Onassis Foundation.
The simulations discussed were performed on Linux Clusters and IBM690 arrays at the National
Center for Supercomputer Applications and the San Diego Supercomputer Center under the National Partnership for
Advanced Computational Infrastructure grant $\#$MCA03S023.
This work was presented as part of a dissertation to the Department of Astronomy and Astrophysics, The University of Chicago, in
partial fulfillment of the requirements for the Ph.D. degree.
\def\bigskip{\bigskip}
\def\medskip{\medskip}
\newcommand\al{\alpha}
\newcommand\Ga{\Gamma}
\newcommand\de{\delta}
\newcommand\ep{\epsilon} \newcommand\vep{\varepsilon}
\newcommand\ka{{\kappa}}
\newcommand\la{\lambda} \newcommand\La{\Lambda}
\newcommand\om{\omega} \newcommand\Om{\Omega} \newcommand\Omi{{\rm O}}
\newcommand\si{\sigma}
\newcommand\te{\theta}
\newcommand\Te{\Theta}
\newcommand\vp{\varphi}
\newcommand\ze{\zeta}
\newcommand\BR{\mathbb{R}}
\newcommand\diff{\mathrm{d}}
\newtheorem*{theorem*}{Theorem}
\begin{document}
\title{A General Framework for Updating Belief Distributions}
\author{P.G. Bissiri, C.C. Holmes \& S.G. Walker
\footnote{
Pier Giovanni Bissiri is Research Associate, Universita degli Studi di Milano-Bicocca,
Italy, (email: [email protected]);
Chris Holmes is Professor of Statistics, Department of Statistics, University of Oxford,
Oxford, U. K. (email: [email protected]);
Stephen G. Walker is Professor of Statistics, School of Mathematics, Statistics \& Actuarial Science, University of Kent,
Canterbury, U. K. (email: [email protected]). Holmes is supported by the Oxford-Man Institute, Oxford, and the Medical Research Council, UK.
}}
\date{}
\maketitle
\vspace{0.1in}
\abstract{We propose a general framework for Bayesian inference that does not require the specification of a complete probability model, or likelihood, for the data. As data sets become larger and systems under investigation
more complex it is increasingly challenging for Bayesian analysts to attempt to model the true data generating mechanism. Moreover, when the object of interest
is a low dimensional statistic, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. If Bayesian analysis is to keep pace with modern applications it will need to forsake the notion that it is either possible or desirable to model the complete data distribution. Our proposed framework uses loss-functions to connect information in the data to statistics of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is given, yet provides coherent subjective inference in much more general settings. We demonstrate our approach in important application areas for which Bayesian inference is problematic including variable selection in survival analysis models and inference on a set of quantiles of a sampling distribution. Connections to other inference frameworks are highlighted. }
\vspace{0.1in} \noindent Keywords: Bayesian updating; Decision theory; Generalized estimating equations; Information; Loss function; Maximum entropy; Self--information loss function.
\vspace{0.3in}
\noindent {\bf 1. Introduction.} Data sets are increasing in size and modelling environments are becoming more complex. This presents opportunities for Bayesian statistics but also major challenges, perhaps the greatest of which is the requirement to define the true sampling distribution, or likelihood, for the data generator $f_0(x)$, regardless of the study objective. So even if the task is inference for a low-dimensional statistic of the population, the analyst is required to model the complete data distribution and, moreover, assume that the model is ``true''. We propose a coherent procedure for general Bayesian inference that does not require complete knowledge of $f_0(x)$ and which connects information in the data to the value of an unknown object or parameter of interest via the use of loss functions. By ``coherent'' we mean that all relevant information is contained in the posterior probability distribution. For ease of exposition we shall use the terminology ``parameter of interest'' and ``statistic of interest'' interchangeably. We show how the approach leads to conventional Bayesian updating when the true likelihood is known but allows for rational updating of beliefs in much more general settings.
The central tenet of our paper is this: as applications get more complex Bayesian analysts will increasingly be forced to forsake the notion that they can precisely model all aspects of the data. Settling for a misspecified model undermines the traditional Bayesian approach leading to interpretability problems along with the reliability of the posterior distribution. If the analyst acknowledges this then they should seek an alternative coherent way to proceed. We aim to contribute to this task.
\vspace{0.2in}
\noindent
{\bf 1.1 The idea.} Let $\theta$ denote a parameter or statistic of interest, for example the mean or median of a population $F_0(x)$, and let $x$ denote a set of observables from $F_0(x)$, with $F_0$ unknown. We are interested in a formal, optimal, way to update prior beliefs $\pi(\theta)$ to posterior beliefs $\pi(\theta | x)$ given $x$.
Bayesian inference proceeds through knowledge of a complete, true, model for $f_0(x)$. This is often parameterised via a sampling distribution $f(x; \beta)$ and a prior $\pi(\beta)$, such that,
$$f_0(x) = \int_{\beta} f(x;\beta) \pi(d \beta) \,,$$
and following de Finetti we know that all exchangeable distributions can be modelled in such form, see for example Bernardo and Smith (1994).
Then inference for the statistic of interest, $\theta$, can occur via,
$$
\pi(\theta | x) = \int_{\beta} g[f(\cdot;\beta)] \pi(d \beta | x) \,
$$
where $g[\cdot]$ defines the statistic; for example, if $\theta$ is the mean then $g[f(\cdot;\beta)] = \int x f(x; \beta)\,{\rm d} x$; or if $\theta$ denotes the median then $g[f(\cdot;\beta)] = F_{\beta}^{-1}(0.5)$.
Following
the Savage axioms (Savage, 1954) the Bayesian update can be shown to be the rational way to proceed. However, $f_0(x)$ may be unknown, $x$ may contain a vast number of data points and $\beta$ might be high-dimensional. Taken together, this makes the Bayesian approach somewhat cumbersome.
We are interested in the rational updating of beliefs, $\pi(\theta) \rightarrow \pi(\theta | x)$, under more realistic and manageable assumptions. To do so we relax the assumption that $f_0(x)$ is known and make use of loss functions to connect information in data to parameters of interest. Informally for now, we write such loss functions as $l(\theta,x)$, and we will discuss specific types later in the paper. We shall consider the reporting of subjective beliefs, $\pi(\theta | x)$, as an action made under uncertainty and use decision theory to guide the optimal action. See for example Hirshleifer and Riley (1992).
To outline the theory, let $\nu$ denote a probability measure on the state space of $\theta$. We shall construct a loss function to select an optimal posterior measure $\widehat{\nu}(\theta)$ given a prior $\pi(\theta)$ and data $x$. To achieve this we construct a loss-function $L(\nu ; \pi, x)$ on the space of probability measures on $\theta$, and then present
$$
\widehat{\nu} = \arg \min_{\nu} L(\nu; \pi, x),
$$
as the optimal ``honest'' representation of beliefs about the unknown value of $\theta$ given the prior information represented via the belief distribution $\pi$ and the data $x$. As the data $x$ is widely assumed to be a piece of information independent of that which gave rise to the prior, it is appropriate to consider an additive, or cumulative, loss function of the form
\begin{equation}\label{eq:log-form}
L(\nu; \pi, x)=h_1(\nu,x)+h_2(\nu,\pi),
\end{equation}
where $h_1$ and $h_2$ are themselves loss functions representing fidelity-to-data and fidelity-to-prior, respectively. See, for example, Berger (1993) for more about ideas on uses of loss functions within decision theory.
Under this approach the analyst needs to specify $h_1$ and $h_2$ in such a way that they proceed in an optimal, rational, and coherent manner.
We can deal immediately with the loss function $h_2(\nu,\pi)$. Somewhat remarkably, as proved later, for coherent inference $h_2$ must be the Kullback--Leibler divergence, Kullback and Leibler (1951), and given by
$$h_2(\nu,\pi)=d_{KL}(\nu,\pi)=\int_\Theta \nu({\rm d}\theta)\,\log\{\nu({\rm d}\theta)/\pi({\rm d}\theta)\}.$$
Regarding $h_1$, since $\widehat{\nu}(\theta)$ is a probability measure representing beliefs about $\theta$, the only choice is to take the loss-to-data $h_1(\nu,x)$ as the {\sl expected} loss (see von Neumann and Morgenstern, 1944) of $l(\theta,x)$; that is
$$h_1(\nu,x)=\int_\Theta l(\theta, x)\, \nu({\rm d}\theta),$$
with the particular types of the loss-function $l(\theta,x)$ to be discussed later. In general, the form of $l(\theta, x)$ will be problem specific, as discussed in Section 3.
Substituting in $h_1$ and $h_2$, the cumulative loss function is then given by
\begin{equation}\label{f: cum_loss}
L(\nu;\pi,x)=\int_\Theta l(\theta,x)\,\nu({\rm d}\theta)+d_{KL}(\nu,\pi).\end{equation}
Surprisingly, though quite easy to show, the minimizer of $L(\nu; \pi, x)$ is given by
\begin{eqnarray} \label{eq:bayes}
\widehat{\nu}(\theta) & = & \arg \min_{\nu} L(\nu; \pi, x) \nonumber \\
& = & \frac{\exp\{-l(\theta,x)\}\pi(\theta)}{\int_\Theta \exp\{-l(\theta,x)\}\pi({\rm d}\theta) }.
\end{eqnarray}
This has the form of a Bayesian update but where the complete log-likelihood, $\log f(x;\beta)$, is replaced by a loss function $l(\theta,x)$ targeting the parameter of interest. As is usual in decision problems involving the use of loss functions, it is incumbent on the decision maker to ensure solutions exist. So $l(\theta,x)$ needs to be constructed so that
$0<\int_\Theta \exp\{-l(\theta,x)\}\pi({\rm d}\theta)<+\infty$.
Whereas the Bayesian approach requires the construction of a probability model for all possible outcomes conditional on all unknown states of nature, the approach here requires the construction of loss functions given the outcomes for only the parameter of interest. This allows the decision maker to concentrate on modeling only those quantities that are important to the task to hand.
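As a concrete (non-authoritative) illustration of the update above, the following sketch computes the generalized posterior on a grid, targeting the median via the absolute-value loss $l(\theta,x)=|x-\theta|$ without assuming any probability model for the data. The function name, the flat prior, and the unit weight on the loss are our own choices; calibration of the loss is discussed in Section 3.

```python
import numpy as np

def generalized_posterior(theta_grid, prior, cum_loss, w=1.0):
    """Grid version of the update: posterior proportional to exp(-w * cumulative loss) * prior."""
    log_post = np.log(prior) - w * cum_loss
    log_post -= log_post.max()              # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(0)
x = rng.standard_cauchy(501)                # heavy-tailed data; no likelihood is specified
theta = np.linspace(-5.0, 5.0, 2001)
prior = np.ones_like(theta) / theta.size    # flat prior over the grid
cum_loss = np.abs(x[:, None] - theta[None, :]).sum(axis=0)   # sum_i |x_i - theta|
post = generalized_posterior(theta, prior, cum_loss)
```

With the flat prior, the mode of the resulting belief distribution agrees with the sample median (the minimizer of the cumulative absolute loss), while the spread of $\widehat{\nu}$ quantifies the remaining uncertainty about it.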
\vspace{0.2in}
\noindent
{\bf 1.2 Connections with other work.}
There is a vast amount of literature on procedures for robustly estimating a parameter of interest by minimizing the cumulative loss
$$L(\theta;x)=\sum_{i=1}^n l(\theta,x_i).$$
See, for example, Huber (2009), where we note that the primary aim is not modeling the data but rather estimating a statistic. This is an advantage when a probability model for the data is too hard to formulate.
We are presenting a Bayesian extension of this idea. Since we are interested in a belief distribution for $\theta$ given data, and have further information provided by $\pi$, we claim the appropriate Bayesian version is given by (\ref{f: cum_loss}).
Some of the ideas presented in the paper have been considered in a less general setting by Zhang (2006a, 2006b) and
Jiang and Tanner (2008). In Zhang (2006a) an estimation procedure, named Information Risk Minimization, also known as a Gibbs posterior, which has the same form as \eqref{eq:bayes}, is described in Section IV of his paper. This is our procedure when data is regarded as stochastic. Zhang then concentrates on the properties of the Gibbs posterior.
Further theoretical work is done in Zhang (2006b). In Jiang and Tanner (2008) a Gibbs posterior is studied in comparison with a true Bayesian posterior where the model is assumed to be misspecified. The claim is that posterior performance of a Bayesian model can be unreliable when misspecified, whereas a Gibbs posterior which targets points of interest can have better performance. The comparison involves variable selection for high-dimensional classification problems involving a logit model.
We build on the work of Zhang (2006a, 2006b) and
Jiang and Tanner (2008) in a number of important directions. The first is that we develop an approach for inference and statistical applications rather than studying the theoretical properties of the posterior under misspecification. We provide a principled approach to scale the relative information in the data to information in the prior; that is left as an arbitrary free parameter in Zhang (2006a, 2006b) and
Jiang and Tanner (2008). We show that in order to remain coherent, the modeller {\it must} adopt the Kullback-Leibler divergence as the loss between prior $\pi$ and $\nu$. Finally, we demonstrate how to incorporate non-stochastic information into the cumulative loss function, which provides a definition of a conditional probability in the presence of non-stochastic information.
Another similar construct to $L(\nu;\pi, x)$ is provided by Zellner (1988), who presents what is essentially a loss function for the posterior distribution using ideas of information processing from prior to posterior. The motivation is different and relies on notions of information present in log probabilities and log likelihoods, which may not be compatible as noted by J.M. Bernardo in the discussion of Zellner's paper.
Furthermore, our derivation of the loss function allows a broader interpretation of the elements, which does not require the existence of a probability distribution for the observation.
Concerns that the specification of a complete model for the data generating distribution is unachievable date back to de Finetti (1937) and the notion of ``prevision''. In his work de Finetti considers conditional expectation as the fundamental primitive,
or statistic, of interest on which prior beliefs are expressed and updated. Recently other researchers have further developed this approach under the field of Bayesian linear statistics, see Goldstein and Wooff (2007).
There has been increasing awareness of the restrictive assumptions that formal Bayesian analysis entails. Royall and Tsou (2003) describe procedures for adjusting likelihood functions when the model is misspecified. More recently, Doucet and Shepherd (2012), and Muller (2012) consider formal approaches to pseudo-Bayesian methods using sandwich estimators to update subjective beliefs, motivated by robustness to model misspecification, see also Hoff and Wakefield (2013). Ribatet et al (2009) consider pseudo-Bayesian approaches with composite likelihoods.
Several authors have considered issues with Bayesian updating with proxy models, $f(x; \theta)$, for example, Key et al. (1999), when $(x_i)$ is known not to arise from $f(x;\theta)$ for any value of $\theta$. That is, there is no $\theta$ conditional on which $x$ is from $f(x;\theta)$. This is referred to as the M--open case in Bernardo and Smith (1994). One suggested solution is to use methods based on approximations and Key et al. (1999) describe one such idea using a cross--validation approach. While this may be pragmatic, it does have some shortcomings. Most serious is that there is little back--up theory and this has repercussions in that the update suffers from a lack of coherence.
Another approach is to ignore the problem. That is, assume the observations are coming from $f(x;\theta)$ even though it is known they are not. According to Goldstein (1981), ``there is no obvious meaning for Bayesian analysis in this case''. The danger of making horribly wrong inference can be guarded against to some extent by model selection; that is, postulating a number of models for $f_0(x)$, say $f_j(x;\theta_j)$, with corresponding priors $\pi_j(\theta_j)$, and model probabilities $(p_j)$, for $j=1,\ldots,M$. But as Key et al. (1999) point out, how does one construct $\pi_j(\theta_j)$ and $p_j$ when one knows none of the postulated models is correct? So the Bayesian update breaks down in that nothing has any interpretation.
A recent popular idea is to use Bayesian nonparametrics. See Ghosh and Ramamoorthi (2003), and Hjort et al. (2010) for reviews. The idea here is to make the class of modeling densities $f(x)$ so large, by constructing a prior directly on a space of density functions, written as $\pi({\rm d} f)$, with such a large support that it is reasonable to assume $f_0(x)$ lies in the support. A well known model is the infinite mixture model, whereby $\pi({\rm d} f)$ generates random density functions of the type
$$f(x)=\int_{z\in Z} K(x|z)\,{\rm d} P(z),$$
where $K$ is a density for each $z$, often the normal density, in which case $z$ denotes the mean and variance, and $P$ is a random distribution function, usually of the type
$$P=\sum_{l=1}^\infty w_l\,\delta_{z_l},$$
and the prior is assigned to $(w_l,z_l)_{l=1}^\infty$. Here the $(w_l)$ are weights and sum to unity. The Dirichlet process, Ferguson (1973), is widely used in such contexts; see Lo (1984) and Escobar (1988) for the origins of the model and sampling based algorithms for estimating the model. While this methodology has developed rapidly in recent years, including the development of sampling algorithms, for complex data structures there are still issues about just how large the supports are, how complicated inference can be, and how to construct priors which capture reasonable beliefs about dependencies in the data. Moreover, this still requires the specification of complete beliefs on $f_0(x)$ even when the objective is inference for a summary statistic of the data distribution.
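Although not spelled out in the text, for the Dirichlet process the weights $(w_l)$ of such a random $P$ can be drawn by the standard stick-breaking construction of Sethuraman: $w_l = v_l\prod_{j<l}(1-v_j)$ with $v_l$ independent Beta$(1,\alpha)$ variables. A truncated sketch (the truncation level, concentration $\alpha$, and standard-normal base measure below are illustrative choices, not taken from the paper):

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking draw: w_l = v_l * prod_{j<l}(1 - v_j), v_l ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # stick left before step l
    return v * remaining

rng = np.random.default_rng(1)
w = stick_breaking_weights(alpha=2.0, n_atoms=200, rng=rng)   # weights w_l, summing to ~1
z = rng.normal(size=200)                                      # atom locations z_l from a N(0,1) base measure
# P = sum_l w_l * delta_{z_l} is then a (truncated) draw from the Dirichlet process,
# and f(x) = sum_l w_l * K(x | z_l) is the corresponding random mixture density.
```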
Finally, we note that it is informative to view the selection of $\widehat{\nu}$, i.e.
\begin{eqnarray} \label{eq:log-bayes}
\widehat{\nu}(\theta) & = & \arg \min_{\nu} \left\{ h_1(\nu; x) + h_2(\nu, \pi) \right\}
\end{eqnarray}
as trading off fidelity to the data and fidelity to the prior. This highlights connections with penalised likelihood and regularized regression, see for example Hastie et al (2009). But whereas in penalised likelihood the objective is to select a single parameter estimate $\widehat{\theta}$, the general Bayesian approach \eqref{eq:log-bayes} selects a probability distribution $\widehat{\nu}(\theta)$.
The layout of the remainder of the paper is as follows. In Section 2 we discuss how \eqref{eq:bayes} arises as the unique minimiser of expected loss. In Section 3 we discuss forms for the loss-to-data functions and calibration.
Section 4 then considers general forms of data, such as partial information and non-stochastic information.
Section 5 provides some numerical illustrations including inference based on the Cox proportional hazards model and inference about the median of a distribution function.
Section 6 concludes with a discussion on a number of points.
\vspace{0.2in}
\noindent
{\bf 2. Information in the prior.}
Here we discuss the choice of the Kullback--Leibler divergence as being appropriate for quantifying the loss-to-prior $h_2(\nu,\pi)$ in \eqref{eq:log-form}. With $n$ independent pieces of information $x=(x_1,\ldots,x_n)$ we take the cumulative loss as
\begin{equation}\label{f: Loss1}
L(\nu;\pi, x)=\sum_{i=1}^n h_1(\nu,x_i)+h_2(\nu,\pi),\end{equation}
where $h_1$ will be taken in the integral form, i.e. the average or
expected loss:
\begin{equation*}
h_1(\nu,x_i) = \int_{\Theta} l(\theta,x_i)\; \nu(\diff \theta).
\end{equation*}
Now, adhering to the ``likelihood principle'' (see Bernardo and Smith 1994), for any $0<m<n$, all the information contained in $(x_1,\ldots,x_m)$ is to be found in $\widehat{\nu}_m$,
where $\widehat{\nu}_m$ minimizes
$$L(\nu;\pi, x_1,\ldots,x_m)=\sum_{i=1}^m h_1(\nu,x_i)+h_2(\nu,\pi),$$
and hence it follows that
$$L(\nu; \pi, x)=\sum_{i=m+1}^n h_1(\nu,x_i)+h_2(\nu,\widehat{\nu}_m),$$
where $\widehat{\nu}_m$ now serves as the prior for future information $(x_{m+1},\ldots,x_n)$.
For coherence, the solution from $L$ for all cases of $m$ must be the same. To derive the form of $h_2$ we start with the family of $g$--divergences, that is
\begin{equation}\label{f: g-div}
h_2(\nu,\pi)=d_g(\nu,\pi)=\int g({\rm d}\pi/{\rm d}\nu)\,{\rm d}\nu\end{equation}
where $g$ is a convex function from $(0,\infty)$ to the real line and $g(1)=0$. See Ali and Silvey (1966).
For this coherence to hold, it is necessary
that the discrepancy $h_2$ be the Kullback--Leibler divergence.
To be more precise, the following theorem can be stated:
\begin{theorem*}
Let the loss $L(\nu;\pi, [x_1, x_2])$ be defined by
\eqref{f: Loss1} and \eqref{f: g-div}.
Moreover, let $\widehat{\nu}_{(\pi, [x_1, x_2])}$ be the probability measure that minimizes the loss $L(\nu;\pi,[x_1,x_2])$ among the probability measures on $\Theta$ that are absolutely continuous with respect to $\pi$.
Similarly, let $\widehat{\nu}_{(\pi, x_1)}$ and $\widehat{\nu}_{(\widehat{\nu}_{(\pi, x_1)}, x_2)}$ be the probability measures minimizing the loss $L(\nu;\pi,x_1)$ and $L(\nu;\widehat{\nu}_{(\pi, x_1)}, x_2)$, respectively.
Assume that
\begin{equation}\label{f: coherence}
\widehat{\nu}_{(\widehat{\nu}_{(\pi, x_1)},x_2)} = \widehat{\nu}_{(\pi, [x_1, x_2])} \end{equation}
for every probability measure $\pi$ on $\Theta$ and for every choice of the loss functions $h_1(\nu,x_1)$ and $h_1(\nu,x_2)$
such that
$\widehat{\nu}_{(\pi, [x_1, x_2])}$, $\widehat{\nu}_{(\pi, x_1)}$, $\widehat{\nu}_{(\widehat{\nu}_{(\pi, x_1)},x_2)}$,
are all properly defined.
Then $h_2$ is the Kullback--Leibler divergence.
\end{theorem*}
By virtue of this theorem, which is proved in Appendix A,
coherence requires us to take
$$h_2(\nu,\pi)=d_{KL}(\nu,\pi)=\int \log({\rm d}\nu/{\rm d}\pi)\,{\rm d}\nu,$$ the Kullback--Leibler divergence. So, in the case $m=0$, we have
$$L(\nu;\pi,x)=\sum_{i=1}^n h_1(\nu,x_i)+d_{KL}(\nu,\pi),$$
where $\pi$ is the initial choice of probability measure representing beliefs about $\theta$ in the absence of $x$.
The solution to this minimization problem is easy to find and is given by
$$\nu({\rm d}\theta)=\frac{\exp\left\{-\sum_{i=1}^n l(\theta,x_i)\right\}\,\pi({\rm d}\theta)}{\int_\Theta \exp\left\{-\sum_{i=1}^n l(\theta,x_i)\right\}\,\pi({\rm d}\theta)},$$
and this is the solution since one can see that
$$\begin{array}{ll}
\int_\Theta l(\theta,x)\,\nu({\rm d}\theta) & +\int_\Theta \nu({\rm d}\theta)\,\log\{\nu(\theta)/\pi(\theta)\} \\ \\
&=\int_\Theta \nu({\rm d}\theta)\,\log\{\nu(\theta)/[\exp(-l(\theta,x))\,\pi(\theta)]\}.\end{array}$$
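Both the explicit form of the update and the coherence property that motivates the Kullback--Leibler choice can be checked numerically on a discretized parameter space. The following sketch is purely illustrative (the Gaussian prior, the quadratic loss and the grid are arbitrary choices, not part of the framework itself); it verifies that updating with all the data at once agrees with updating sequentially:

```python
import numpy as np

def gibbs_update(prior, losses):
    """Update a discrete prior over a theta-grid: nu propto exp(-sum_i l(theta, x_i)) * prior.
    losses has shape (n_obs, n_grid), entry [i, k] = l(theta_k, x_i)."""
    log_post = np.log(prior) - losses.sum(axis=0)
    log_post -= log_post.max()          # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(0)
theta = np.linspace(-3, 3, 201)                   # grid of candidate theta values
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum()                              # discretised N(0, 1) prior
x = rng.normal(1.0, 1.0, size=20)                 # illustrative data
loss = 0.5 * (theta[None, :] - x[:, None])**2     # -log f(x; theta) up to a constant

batch = gibbs_update(prior, loss)                 # all 20 observations at once
stage1 = gibbs_update(prior, loss[:10])           # first 10 observations ...
seq = gibbs_update(stage1, loss[10:])             # ... then the remaining 10
print(np.allclose(batch, seq))                    # coherence: True
```

The sequential result matches the batch result exactly (up to floating point), which is the coherence property \eqref{f: coherence} in action.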
\vspace{0.2in}
\noindent
{\bf 3. Information in the data.} In this section we consider the form of $h_1$ in \eqref{eq:log-form} that connects information in the data to the value of the unknown $\theta$. We shall consider three broad situations: first, when the analyst really believes they know the complete family of distributions from which $(x_i)$ arose, the so-called M--closed scenario; second, when $f_0(x)$ is unknown but a complete likelihood $f(x; \theta)$ is used as a proxy model, the so-called M--open perspective; and finally, when the statistic $\theta$ does not index a complete sampling distribution or proxy model for $x$.
\vspace{0.2in}
\noindent
{\bf 3.1 M--closed and self-information loss.}
When the analyst knows the family from which $(x_i)$ arose, the Bayesian approach to learning is fully justified, well known and widely used as a statistical approach to inference; the book of Bernardo and Smith (1994) is comprehensive. Here we recall the essence of it: a parameter of a density function $f(x;\theta)$, $\theta\in\Theta$, is unknown and beliefs about it are encapsulated via a prior distribution $\pi(\theta)$. Once (conditionally) independent samples $(x_1,\ldots,x_n)$ are observed from the density function $f(x;\theta)$, the prior is updated to the posterior distribution via Bayes' Theorem, given by
$$\pi({\rm d}\theta|x_1,\ldots,x_n)=\frac{l_n(\theta)\,\pi({\rm d}\theta) }{\int_\Theta l_n(\theta)\,\pi({\rm d}\theta)},$$
where $l_n(\theta)=\prod_{i=1}^n f(x_i;\theta)$ is the likelihood function. The posterior then represents revised beliefs taking into account both the prior distribution and the observations. Mathematically, it is an application of Bayes' Theorem via the standard definition of conditional probability.
So the Bayesian update works and is applicable in the case when the $(x_i)$ come from the density $f(x;\theta)$ for some $\theta\in\Theta$. In Bernardo and Smith (1994) this is referred to as the M--closed view. To see how Bayes arises in our framework, we would need to construct a loss function for $l(\theta,x)$ with the knowledge that $x$ came from $f(x;\theta)$. It is well known that the ``honest" loss function in this case is the self--information, or logarithmic loss function, given by
$$l(\theta,x)=-\log f(x;\theta).$$
See Bernardo (1979) and Merhav and Feder (1998) for more on the self-information loss function. This amounts to the use of proper scoring rules to ensure that the analyst remains honest in
expressing subjective beliefs, under which we recover the Bayesian updating rule. However, there are different ideas behind our derivation of this rule, with different assumptions being made. Most crucially, we need the $(x_i)$ to provide independent pieces of information to maintain the credibility of the cumulative loss function.
\vspace{0.2in}
\noindent
{\bf 3.2 M--open and the use of proxy models.}
As has been mentioned by many authors, for example, Key et al. (1999), issues with the Bayesian rule arise when $f(x;\theta)$ is known not to be the family of densities from which the $(x_i)$ come. Equivalently, there is no $\theta$ conditional on which $x$ is from $f(x;\theta)$. This is referred to as the M--open case in Bernardo and Smith (1994). In many situations, the correct sampling density, $f_0(x)$, is unknown or unavailable or too complex to work with. There are a number of ways to attempt to resolve this issue from a Bayesian perspective.
One idea is to use methods based on approximations, and Key et al. (1999) describe one such idea using a cross--validation approach. While this may be a suitable idea which can work in practice, it does have some shortcomings. Most serious is that there is little back--up theory, and this has repercussions in that the update suffers from a lack of coherence.
Another approach is to ignore the problem. That is, assume the observations are coming from $f(x;\theta)$ even though it is known they are not. According to Goldstein (1981), ``there is no obvious meaning for Bayesian analysis in this case''. The risk of making badly wrong inference can be mitigated to some extent by model selection; that is, by postulating a number of models for $f_0(x)$, say $f_j(x;\theta_j)$, with corresponding priors $\pi_j(\theta_j)$ and model probabilities $(p_j)$, for $j=1,\ldots,M$. But as Key et al. (1999) point out, how does one construct $\pi_j(\theta_j)$ and $p_j$ when one knows none of the postulated models is correct? So the Bayesian update breaks down in that nothing has any interpretation.
We show in Appendix B that it is possible to learn about the value $\theta_0$ which takes the family $f(\cdot;\theta)$ closest, in the Kullback--Leibler sense, to $f_0(\cdot)$, since an infinite collection of $(x_i)$ yields $f_0(\cdot)$ via the empirical distribution function, and so $\theta_0$ will be found with sufficient samples. Then we would wish the sequence of $\nu({\rm d}\theta)$ to accumulate about $\theta_0$. So what is the appropriate loss $l(\theta,x)$ when we are trying to learn about the value of $\theta_0$? The loss function $l(\theta,x)=-\log f(x;\theta)$ is still the right choice, since the standardized cumulative loss based on a sequence of observations satisfies
$$-n^{-1}\sum_{i=1}^n \log f(x_i;\theta)\rightarrow \int -\log f(x;\theta)\,{\rm d} F_0(x)\quad\mbox{a.s.}$$
which is minimized by $\theta_0$.
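A small simulation illustrates this limit. Here a Student-$t$ distribution stands in for the unknown $f_0$ and a normal location family is the proxy model, in which case $\theta_0$ is the mean of $f_0$; all the simulation settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(-2, 2, 401)

# True data source f0: a shifted Student-t, NOT in the proxy family
x = 0.5 + rng.standard_t(df=5, size=5000)

# Proxy model: N(theta, 1).  Average self-information loss over the sample
# (up to an additive constant that does not depend on theta).
avg_loss = np.mean(0.5 * (x[:, None] - theta[None, :])**2, axis=0)
theta_hat = theta[np.argmin(avg_loss)]

# For a normal location family, the KL-minimising theta_0 is E_{f0}[X] = 0.5
print(theta_hat)   # close to 0.5
```

The minimizer of the empirical average loss settles near $\theta_0 = 0.5$ as the sample grows, even though the proxy model is wrong.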
When an approximate model $f(x;\theta)$ has been supposed, it is often prudent to consider a number of models, say $f_j(x;\theta_j)$ for $j=1,\ldots,M$, as we have mentioned previously. We can deal with this in a simple way. So let $\theta=(\theta_1,\ldots,\theta_M)$ and let $\pi(\theta)$ be the prior distribution for $\theta$ on $\Theta=\cup_{j=1}^M\Theta_j$. This would be constructed by considering beliefs about which $\theta_j$ from $f_j(\cdot;\theta_j)$ takes this family closest to $f_0(\cdot)$. The model $f(x;\theta)$ would then be given by
$$f(x;\theta)=\sum_{j=1}^M p_j\,f_j(x;\theta_j)$$
and the $(p_j)$ would now be the probabilities describing beliefs about which model provides the closest density to $f_0(\cdot)$. Hence, unlike the Bayesian approach to model selection in the M--open case, all the quantities to be specified have a clear interpretation. We recover the Bayesian update when we take, for each $i\in\{1,\ldots,n\}$,
$$l(\theta,x_i)=-\log f(x_i;\theta).$$
So while the Bayesian approach has issues to deal with whether the M--open or M--closed view holds, for us the distinction is irrelevant. If one adopts $\theta_0$ as the parameter value taking the family closest to $f_0(\cdot)$, then one does not need to worry about being in M--open or M--closed, since if $f(\cdot;\theta)$ is the true family then $\theta_0$ obviously reverts to the true parameter value. This point is crucial: the Bayesian may simply not know whether the M--open or M--closed view holds (though strictly speaking this uncertainty places one in M--open), and then one needs a framework in which the same approach is adopted and justified regardless of which view is held. We have provided such a framework.
\vspace{0.2in}
\noindent
{\bf 3.3 M--free.}
Often the analyst might not wish to express a full probability model for the data, either because it is too cumbersome or too problematic. A motivating example is inference for the median of a population from iid data.
However, the analyst knows the object or statistic $\theta$ that they wish to express
beliefs about. It is incumbent on them to choose a specification for $l(\theta, x)$ that provides greatest information on the unknown value. The literature on this is in the area of {\it Robust Statistics} and loss functions can be found in the literature pertaining to $M$-estimation and estimating equations.
See, for example, H\"uber (2009). We refer to this setting as M--free to highlight the model free aspect of inference.
An important class of loss functions is provided by the $M$ estimators for a location parameter, H\"uber (1964). So rather than using the loss function $-\log f(x_i;\theta)$, a $\rho(x_i;\theta)$ is used in an attempt to obtain robust estimation, rather than the traditional maximum likelihood estimator, which can be suspect if the model is incorrect. This idea has been generalized to the class of estimating equations, whereby the estimate of $\theta$ is obtained by minimizing
$$\sum_{i=1}^n \rho(x_i;\theta).$$
Our approach, which mirrors this classical robust procedure, would use the loss function
$$L(\nu;x_1,\ldots,x_n,\pi)=\int_\Theta\sum_{i=1}^n \rho(x_i;\theta)\,\nu({\rm d}\theta)+d_{KL}(\nu,\pi)$$
with solution provided by
$$\nu({\rm d}\theta)\propto \exp\left\{-\sum_{i=1}^n \rho(x_i;\theta)\right\}\,\pi({\rm d}\theta).$$
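As an illustration of the resulting update (a sketch with made-up data, using Huber's $\rho$ with the conventional tuning constant $c=1.345$; the grid and vague prior are also illustrative), the robust posterior is far less sensitive to gross outliers than the squared-loss one:

```python
import numpy as np

def huber(r, c=1.345):
    """Huber's rho: quadratic near zero, linear in the tails."""
    return np.where(np.abs(r) <= c, 0.5 * r**2, c * (np.abs(r) - 0.5 * c))

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 50), [15.0, 20.0]])  # two gross outliers
theta = np.linspace(-2, 6, 801)
log_prior = -0.5 * (theta / 10.0)**2      # vague N(0, 10^2) prior, up to a constant

means = {}
for name, rho in [("squared", lambda r: 0.5 * r**2), ("huber", huber)]:
    # nu(d theta) propto exp{-sum_i rho(x_i; theta)} pi(d theta), on the grid
    log_post = log_prior - rho(x[:, None] - theta[None, :]).sum(axis=0)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    means[name] = float(theta @ post)

print(means)  # the Huber posterior mean stays near 0; squared loss is dragged upward
```

The squared-loss posterior is pulled noticeably toward the outliers, while the Huber-based posterior remains concentrated near the bulk of the data.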
For example, one possible application would be the Generalized Estimating Equations; see Liang and Zeger (1986). For grouped observations $x_i=(x_{i1},\ldots,x_{in_i})$,
$$\rho(x_i,\theta)=\hbox{$1\over2$}(x_i-\mu_i(\beta))'V_i(\phi,\alpha)^{-1}(x_i-\mu_i(\beta)),$$
where $\theta=(\beta,\phi,\alpha)$ and, for some link function $g$, $g(\mu_{ij}(\beta))=x_{ij}'\beta$. Here $V_i=\phi A_i^{1/2}R_i(\alpha)A_i^{1/2}$ for some correlation matrix $R_i(\alpha)$ and diagonal matrix $A_i$ whose $j$th diagonal entry is
$a(\mu_{ij}(\beta))$, with $a$ a specified variance function and $\phi$ a scale parameter.
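To make the GEE loss concrete, here is a minimal sketch for one group, under the simplifying assumptions of an identity link, unit variance function and exchangeable working correlation; the data and the values of $\beta$, $\phi$ and $\alpha$ are arbitrary illustrations:

```python
import numpy as np

def gee_loss(y_i, X_i, beta, phi, alpha):
    """Quadratic GEE loss for one group: identity link, a(mu) = 1,
    exchangeable working correlation R_i(alpha)."""
    mu = X_i @ beta                          # g(mu_ij) = x_ij' beta with g = identity
    n_i = len(y_i)
    R = np.full((n_i, n_i), alpha)
    np.fill_diagonal(R, 1.0)                 # R_i(alpha), exchangeable
    A_half = np.eye(n_i)                     # A_i^{1/2} = I since a(mu) = 1
    V = phi * A_half @ R @ A_half            # V_i = phi A^{1/2} R A^{1/2}
    r = y_i - mu
    return 0.5 * r @ np.linalg.solve(V, r)

# Illustrative group of three observations with intercept + slope
y = np.array([1.0, 2.0, 1.5])
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
print(gee_loss(y, X, beta=np.array([1.0, 0.5]), phi=1.0, alpha=0.3))
```

Summing `gee_loss` over groups and exponentiating its negative against a prior gives the corresponding update $\nu({\rm d}\theta)$, exactly as in the display above.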
There is by now an abundance of literature on $M$-estimation, estimating equations and generalized estimating equations. Our point is that all such equations can be viewed as loss functions connecting independent units with parameters of interest. Hence, all fit within our framework and we would extend the loss function to include the prior $\pi$ and we obtain an explicit expression for $\nu({\rm d}\theta)$. In cases when the parameter estimation is done via iterative methods, which is typically the case, Markov chain Monte Carlo methods would substitute for our sampling strategies from $\nu({\rm d}\theta)$.
In essence, this is the practical innovation of the framework we are proposing. We are claiming that any loss function of the type
$$\sum_{i=1}^n \rho(x_i,\theta)$$
can be extended to a Bayesian-type updating mechanism. The $\theta_0$ of interest is implicitly assumed to be the limit of the sequence of minimizers of the cumulative losses. This is the minimizer of $\int \rho(x;\theta)\,{\rm d} F_0(x)$, and hence the prior beliefs are being expressed about this unknown value. The loss function $l(\theta,x)=\rho(x;\theta)$ then ensures that the updates are indeed ``moving towards'' $\theta_0$. To complete the picture, the decision maker should be content to make a decision given the minimizer of $\int \rho(x;\theta)\,{\rm d} F_0(x)$.
\vspace{0.2in}
\noindent
{\bf 3.4 M--free calibration of relative losses.} In the M--closed and M--open settings the use of the self-information loss $l(\theta, x)= -\log f(x;\theta)$ results in a fully specified form for \eqref{eq:bayes}. However, in the M--free setting there is an issue about the scale of the loss function $h_1$, which is a consequence of the apparent arbitrariness in the weight of $h_1(\nu,x)$ relative to $h_2(\nu,\pi)$, in that we are free to multiply either by an arbitrary factor. Equivalently, we are interested in the loss function $w\,l(\theta,x)$ for some $w>0$. The question is how to select $w$, noting that $w$ controls the relative weight of loss-to-data to loss-to-prior. Such an issue does not arise in the classical literature on estimation using such loss functions, since there is no combining of different styles of loss function.
However, the calibration of different types of loss function is not a unique problem. It arises in many applied contexts, possibly the best known being in health economics, where losses pertaining to costs need to be balanced against losses pertaining to health benefits.
There are a number of ideas for the choice of $w$ and we discuss them here.
\vspace{0.2in}
\noindent
{\bf 3.4.1 Annealing. } In the literature on Gibbs posteriors, the weighting parameter is labelled a ``temperature'' and selected subjectively.
There are clear connections here with the use of ``power priors'' (Ibrahim \& Chen, 2000) where
$$\nu({\rm d}\theta)\propto \prod_{i=1}^n f(x_i;\theta)^w\,\pi({\rm d}\theta).$$
Such an idea has also been discussed in Walker and Hjort (2001). It is evident what $w$ achieves: if $0<w<1$ then the loss-to-prior is given more prominence than in the Bayesian update and the data will be less influential. In the extreme case $w=0$ we retain the prior throughout. On the other hand, when $w>1$ the loss $-\log f(x;\theta)$ is given more prominence than in the Bayesian update, and in the extreme case when $w$ is very large, $\nu$ accumulates about the maximum likelihood estimator for the model; that is, $\nu({\rm d}\theta)\approx \delta_{\widehat{\theta}}({\rm d}\theta)$, where $\widehat{\theta}$ maximizes $\prod_{i=1}^n f(x_i;\theta)$.
Alternative ideas for setting $w$ include a data dependent assignment based on cross-validation and a random assignment once the parameter has appeared in the Gibbs posterior. That is, one considers
$$\widehat{\nu}(\theta|x)=\int \widehat{\nu}(\theta|w,x)\,\pi_w({\rm d} w)$$
for some probability measure $\pi_w({\rm d} w)$.
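The effect of $w$ can be seen in a small sketch on a grid (the normal data, prior and grid here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.0, 30)
theta = np.linspace(-4, 6, 1001)
log_prior = -0.5 * theta**2                  # N(0, 1) prior, up to a constant

def power_posterior(w):
    """nu(d theta) propto prod_i f(x_i; theta)^w pi(d theta), discretised."""
    log_lik = -0.5 * ((x[:, None] - theta[None, :])**2).sum(axis=0)
    log_post = w * log_lik + log_prior
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

for w in [0.0, 0.1, 1.0, 10.0]:
    p = power_posterior(w)
    mean = float(theta @ p)
    sd = float(np.sqrt(theta**2 @ p - mean**2))
    print(f"w={w:5}: mean={mean:.3f} sd={sd:.3f}")
# w = 0 returns the prior; large w concentrates on the MLE (the sample mean)
```

As $w$ grows, the posterior mean moves from the prior mean toward the sample mean and the spread shrinks, matching the description above.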
\vspace{0.2in}
\noindent
{\bf 3.4.2 Unit information loss. }
Here we discuss another subjective assignment, but one that is more targeted and direct. The subjective choice is based on a prior evaluation of the expected value of $l(\theta,x)$.
To aid in the calibration of the loss functions and the selection of $w$ we can consider the following. Write the loss function with an additional term $\log \pi(\widehat{\theta})$, which is a constant, and where $\widehat{\theta}$ maximizes $\pi(\theta)$, so that the cumulative loss becomes
$$L(\nu;x,\pi)=\int\left[w \,l(\theta,x)+\log\{\pi(\widehat{\theta})/\pi(\theta)\} \right]\nu({\rm d}\theta)+\int\nu({\rm d}\theta)\,\log \nu(\theta).$$
In order to calibrate the information in the data relative to the prior we now assume that both loss functions, $l(\theta,x)$ and $\log \{\pi(\widehat{\theta})/\pi(\theta)\}$ are non-negative, and we standardise $l(\theta, x)$ such that $\min_{\theta} l(\theta,x)=0$ for any $x$. If this is not the case then we replace $l(\theta,x)$ by $l(\theta,x)-l(\theta_x,x)$ where now $\theta_x$ minimizes $l(\theta,x)$.
Hence, we can regard
$$L(\theta;x,\pi)=w\,l(\theta,x)+\log\{\pi(\widehat{\theta})/\pi(\theta)\} $$
as a loss function for $\theta$ with information provided by $x$ and $\pi$.
So, assuming that $l(\theta,x)\geq 0$, we want to calibrate the two loss functions given by
$$w\,l(\theta,x)\quad\mbox{and}\quad \log \{\pi(\widehat{\theta})/\pi(\theta)\}.$$
These are two loss functions for $\theta$ and, to adhere to the notion that at the outset, before we have data, there is a single piece of information, we can calibrate the two losses by matching their expected values. That is, whether someone takes a $\theta$ and is penalized by the loss $\log\{\pi(\widehat{\theta})/\pi(\theta)\}$, or
takes a $(\theta,x)$ and is penalized by the loss $w\,l(\theta,x)$, at the outset the expected losses should match. The decision maker is confronted by two choices of loss with one piece of information, and thus the losses can be calibrated by ensuring their expected losses coincide. The connection between expected information and expected loss can be found in Bernardo (1979).
Thus $w$ can be set by ensuring
$$w{\rm E} \left(l(\theta,x)\right) ={\rm E}\left(\log \{\pi(\widehat{\theta})/\pi(\theta)\}\right).$$
Here ${\rm E}$ is with respect to a joint belief in $x$ and $\theta$; say $m(x,\theta)$, the marginal for $\theta$ of which is $\pi(\theta)$.
So
$$w=\frac{\int\log \{\pi(\widehat{\theta})/\pi(\theta)\}\,\pi({\rm d}\theta) }
{\int l(\theta,x)\,m({\rm d}\theta,{\rm d} x) }.$$
One empirical choice is then given by
$$w=\frac{\int\log \{\pi(\widehat{\theta})/\pi(\theta)\}\,\pi({\rm d}\theta) }
{\int\int l(\theta,x)\,\pi({\rm d}\theta)\,{\rm d} F_n(x)}.$$
Let us consider an example, where $l(\theta,x)=(\theta-x)^2$, $\pi(\theta)=\mbox{N}(\theta|0,\tau^2)$, and
$m(x|\theta)$ is any density with mean $\theta$ and variance $\sigma^2$.
Then we can evaluate
$$\int \log\{ \pi(\widehat{\theta})/\pi(\theta)\}\,\pi({\rm d}\theta)=1/2$$
and
$$\int\int (\theta-x)^2\,m({\rm d} x,{\rm d}\theta)=\sigma^2.$$
So
$$w=\frac{1}{2\sigma^2}.$$
Hence, this calibration idea yields the ``correct'' value of $1/(2\sigma^2)$ in this case. This construction requires the user to specify a joint density $m({\rm d} x, {\rm d} \theta)$, which
in some circumstances may prove difficult. One further suggestion is to replace the prior evaluation of the expected
datum-loss with the observed unit information loss given $x$,
\begin{equation}\label{eq:euil}
\int\int l(\theta,x) \, m(dx , d\theta) \approx \frac{1}{n-p} \sum_i l(\hat{\theta}_x, x_i)
\end{equation}
where $\hat{\theta}_x = \arg \min_{\theta} \left[ \sum_i l(\theta, x_i) \right]$ is the data-loss estimate of $\theta$ and $p$ is the dimension of $\theta$. For instance, in the above example this leads to,
$$
w = \frac{1}{2 \hat{\sigma}^2}
$$
where $\hat{\sigma}^2 = \frac{1}{n-1} \sum_i (x_i - \bar{x})^2$.
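A numerical check of this empirical calibration, following the quadratic-loss example above (the data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, 100)                  # illustrative data, true sigma = 2

# l(theta, x) = (theta - x)^2; theta-hat minimises the cumulative loss (sample mean)
theta_hat = x.mean()
p = 1                                          # dimension of theta

# Numerator: prior-expected loss-to-prior for pi = N(0, tau^2), always 1/2
numerator = 0.5
# Denominator: observed unit-information loss of eq. (euil)
denominator = np.sum((x - theta_hat)**2) / (len(x) - p)

w = numerator / denominator
print(w, 1.0 / (2 * np.var(x, ddof=1)))        # the two agree exactly
```

Since the denominator is exactly the sample variance (with the $n-1$ divisor), the resulting $w$ coincides with $1/(2\hat{\sigma}^2)$ as claimed.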
\vspace{0.2in}
\noindent
{\bf 3.4.3 Hierarchical loss.} Another way to proceed is to extend the loss function to include $w$ as an unknown parameter. Standard ideas here would suggest we take
$$L(\theta, w;x,\pi)=w\,l(\theta,x) + \xi l(w) -\log\pi(\theta,w)$$
for some $\xi>0$. We would appear to be making no progress since we now have a $\xi$ to assign. However, this is akin to the hierarchical Bayesian model where uncertainty is propagated via hyper-prior distributions to robustify the ultimate prior choice at some level. Hence, the allocation of a $\xi$ would not be as crucial as the assignment of a $w$.
For example, as $w$ is a scale parameter on loss-to-data, taking $l(w)=-\log w$, the solution is given by
$$\widehat{\nu}(\theta,w|x,\pi)\propto w^\xi\,\exp\{-w\,l(\theta,x)\}\,\pi(\theta,w)$$
and given that $w^\xi$ can be absorbed into the prior $\pi$ it is perfectly reasonable to assess $\xi$ subjectively. That is, it seems unreasonable to accept that $\pi$ can be chosen subjectively but that $\xi$ can not.
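A sketch of the resulting joint update on a two-dimensional grid; the priors on $\theta$ and $w$, the value $\xi=1$ and the data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(1.0, 1.0, 25)
theta = np.linspace(-3, 4, 301)
w = np.linspace(0.01, 3.0, 300)

loss = ((x[:, None] - theta[None, :])**2).sum(axis=0)   # cumulative l(theta, x)
xi = 1.0
log_prior_theta = -0.5 * theta**2 / 100.0               # vague N(0, 100) on theta
log_prior_w = -w                                        # Exp(1) prior on w

# log nu-hat(theta, w) = xi*log(w) - w*loss + log pi(theta, w) + const
log_post = (xi * np.log(w)[:, None] - w[:, None] * loss[None, :]
            + log_prior_w[:, None] + log_prior_theta[None, :])
post = np.exp(log_post - log_post.max())
post /= post.sum()

theta_marginal = post.sum(axis=0)       # marginalise out the unknown weight w
print(float(theta @ theta_marginal))    # near the sample mean
```

Marginalizing over $w$ propagates the uncertainty about the weight into the beliefs about $\theta$, which is the point of the hierarchical extension.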
\vspace{0.2in}
\noindent
{\bf 3.4.4 Operational characteristics. } The idea here is to set $w$ so that the posterior quantiles match, at some level of error, frequentist confidence intervals based on estimating $\theta$ by
minimizing the loss
$$\sum_{i=1}^n l(\theta,x_i).$$
So, if $C_\alpha (w, x_1, . . . , x_n )$ is the $100(1-\alpha)$\% level confidence interval for $\theta$, then we would select the $w$ such that the posterior distribution of $\theta$, with parameter $w$, is such that
$$\mbox{P}(\theta \in C_\alpha(w,x_1,\ldots,x_n)|x_1,\ldots,x_n) = 1-\alpha.$$
See, for example, the review article by Datta and Sweeting (2005) for references to probability matching priors and posteriors, and Ribatet et al (2009) for ideas in pseudo-Bayesian approaches with composite likelihoods.
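As a hedged sketch of this matching idea, consider the squared loss $l(\theta,x)=(\theta-x)^2$ with an (improper) flat prior, an assumption made here only for tractability; the $w$-posterior is then $\mbox{N}(\bar{x}, 1/(2nw))$, and one can verify that $w=1/(2\sigma^2)$ gives exact $95\%$ posterior coverage of the frequentist interval:

```python
import math
import numpy as np

rng = np.random.default_rng(5)
sigma, n = 2.0, 40                         # illustrative known sigma and sample size
x = rng.normal(1.0, sigma, n)
z = 1.959963984540054                      # 97.5% standard-normal quantile

# Frequentist 95% interval for the mean (sigma known, for illustration)
lo = x.mean() - z * sigma / math.sqrt(n)
hi = x.mean() + z * sigma / math.sqrt(n)

def posterior_coverage(w):
    """P(theta in [lo, hi]) under the w-posterior, which for a flat prior and
    squared loss is N(x.mean(), 1 / (2 n w))."""
    sd = math.sqrt(1.0 / (2 * n * w))
    Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return Phi((hi - x.mean()) / sd) - Phi((lo - x.mean()) / sd)

print(posterior_coverage(1.0 / (2 * sigma**2)))   # 0.95: the calibrated choice
print(posterior_coverage(1.0))                    # over-concentrated: coverage near 1
```

Only $w=1/(2\sigma^2)$ makes the posterior standard deviation equal the frequentist standard error $\sigma/\sqrt{n}$, so coverage matching recovers the same calibration as the unit information argument of Section 3.4.2.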
\vspace{0.2in}
\noindent {\bf 4. General forms of information.} In this section we discuss more general forms of information to condition on, rather than a complete stochastic data sample $x$ from an unknown $F_0(x)$. In particular we provide a definition of conditional probability when non--stochastic information is available, and discuss updating using partial information in a data set.
\vspace{0.2in}
\noindent
{\bf 4.1 Conditional probability distributions and non-stochastic data.} The theory of conditional probability distributions
is a well-established mathematical theory that
provides a procedure to update probabilities taking into account new information.
Such a procedure is available only if the information which is used to update the probability concerns stochastic events; that is, events to which a probability is assigned.
In other words, such information needs to already be included in the probability model. In this section, we shall show how the approach can be used to define conditional probability distributions based on non--stochastic information.
Information about $\theta$ may arrive in the form of non--stochastic data, such as when an expert declares ``$\theta$ is close to 0''. This type of information has been discussed by a number of authors and is known to be problematic for the Bayesian, especially when such information arrives after or during the arrival of stochastic observations $(x_i)$. We cite the paper by Diaconis and Zabell (1982) and in particular refer the reader to the example in Section 1.1 of their paper.
Denote by $I$ a piece of information for which no probability model, for each $\theta$, is possible; in other words, it is not and cannot be determined to be stochastic in any way. However, a loss function $l(\theta,I)$ can still be assigned, and our theory does not preclude a loss function based on such a piece of information.
The answer $\widehat{\nu}(\theta)$ based on $I$ and $\pi$ can then be considered as a means of defining a conditional probability distribution in the presence of non-stochastic information. This section develops this argument.
Before proceeding, we introduce the notation for this section, which differs from that used above in order to put the discussion in a broader context than simply Bayesian-style statistical updating.
Let $Y$ be a random variable on a probability space $(\Omega, \mathscr{F},\mathbb{P})$, which will be the outcome of interest,
and valued into a measurable space $(\mathbb{Y},\,\mathscr{Y})$ with probability distribution $P_Y$.
Hence, $P_Y$ represents initial belief about the outcome concerning $Y$.
Now, assume that the outcome of another random variable, say $X$, is known.
So, let $X$ be a random variable
from $(\Omega, \mathscr{F},\mathbb{P})$ into $(\mathbb{X},\,\mathscr{X})$ with probability distribution $P_X$ and the additional information $I$ about $Y$
will be assumed to be an outcome of $X$.
Then it is possible to update the unconditional distribution of $Y$ to the probability distribution of $Y$ given $X$.
In probability theory,
a conditional distribution of $Y$ given $X$ is a map $p$
from $\mathscr{Y}\times \mathbb{X}$ into $\BR$ such that:
\begin{itemize}
\item for each $x$ in $\mathbb{X}$, $p(\cdot,x)$ is a probability measure on $\mathscr{Y}$,
\item for each $B$ in $\mathscr{Y}$, $p(B,X(\omega))$ is a version of the conditional probability
$\mathbb{P}(Y\in B\mid X(\omega))$, i.e.
for each $A$ in $\mathscr{X}$ and each $B$ in $\mathscr{Y}$,
\begin{equation}\label{f: def.}
\mathbb{P}\{X\in A,\, Y\in B\} = \int_A p(B,x)\, \diff P_X(x),\end{equation}
where $P_X$ denotes the probability distribution of $X$.
\end{itemize}
The conditional distribution is known to be essentially unique, i.e. unique only up to almost sure equality.
This is a consequence of $X$ being stochastic. In fact, as Feller (1971, p. 160)
points out, if, for instance, the distribution of $X$ is concentrated on a subset $\mathbb{X}_0$ of $\mathbb{X}$, no natural definition of $p(B,x)$ is possible for $x$ outside $\mathbb{X}_0$. Nevertheless, in individual cases, there usually exists a natural
choice dictated by regularity requirements.
Moreover, it is well known that
conditional distributions do not always exist unless some conditions are satisfied by the spaces
$(\mathbb{X}, \mathscr{X})$ and $(\mathbb{Y}, \mathscr{Y})$. For more information about conditional probability
distributions, see, for instance,
Feller (1971) or Billingsley (1995).
Here, we will consider the case in which
there are two $\sigma$-finite measures $\mu_1$ and $\mu_2$ on $\mathscr{F}$ such that
the probability distribution of $(X,\,Y)$ is absolutely continuous with respect to $\mu_1\times \mu_2$.
Denote its density by $f$.
This is a general framework which includes most applications and enables us to find easily
an expression for the conditional distributions.
Generally, $\mathbb{X}$ and
$\mathbb{Y}$ are subsets of $\BR^k$, for some $k$, and $\mu_1$ and $\mu_2$ are the corresponding Lebesgue measures.
If $f$ is the density of the probability distribution of $(X,\,Y)$ with respect to $\mu_1\times\mu_2$, then
one can take
\begin{equation}\label{f: prob_cond}
p(B,x)=\frac{\int_B f(x,y)\ \mu_2(\diff y)}{\int_\mathbb{Y} f(x,y)\ \mu_2(\diff y)},
\end{equation}
for every $B$ in $\mathbb{Y}$
and every $x$ in $\mathbb{X}$ such that
\begin{equation}\label{f: marginalx}
0\,<\,f_X(x):=\int_\mathbb{Y} f(x,y)\ \mu_2(\diff y)\,<\,\infty.\end{equation}
Note that $p(\cdot,x)$
is absolutely continuous w.r.t. $\mu_2$ and its density is
\begin{equation}\label{f: cond_density}
f_{Y|X}(y|x):= f(x,y)/f_X(x),
\end{equation}
for every $x$ in $\mathbb{X}$ satisfying \eqref{f: marginalx}.
The density \eqref{f: cond_density}, which is called the conditional density of $Y$ given $X$,
is what is used in most applications to find an expression for the conditional distribution.
Therefore,
\eqref{f: prob_cond} deserves to be considered as the ``practical definition" of
conditional probability distribution.
Indeed, it is the natural version of the conditional distribution of $Y$ given $X$ whenever a joint density $f$ exists for $X$ and $Y$.
Clearly, this approach relies on the joint distribution of $X$ and $Y$ and therefore is not available when $X$ is replaced by some non--stochastic information $I$.
Moreover, even if $I$ coincides with an outcome of the random variable $X$,
to define the conditional distribution of $Y$ given $X$, it is required to know
all the possible alternatives of $I$, that is, all the outcomes of $X$.
It is also required to assess the joint distribution of $X$ and $Y$ or the conditional distribution of $X$ given $Y$. This is quite easy if, for instance, $I$ is known to be an outcome of some well-defined random experiment.
In many situations, one has seen the outcome of $X$ and, in order to establish an update of the distribution of $Y$, one needs to retrospectively ponder and imagine a joint probability model.
This difficulty arises in different puzzles such as, for instance,
Freund's puzzle of the two aces, introduced by Freund (1965).
For other puzzles about conditional probabilities, see, for instance, Gardner (1959).
These puzzles have been widely used to discuss the concept of conditional probability.
Hutchison (1999, 2008)
emphasizes that the updating process needs to take into account
the circumstances under which the truth of $I$ was conveyed.
Also, Bar--Hillel and Falk (1982)
claim that to know how the knowledge was obtained is
``a crucial ingredient to select the appropriate model".
These scholars present different views about the concept of conditionalization, but
all agree on the fact that there would not be a problem if it was known how the information $I$ became available, and therefore one could build a model including $I$.
The concept of conditional probability distributions
is certainly appropriate as a procedure to update probabilities on the basis of any new information
that was already included in the probability model.
But it can be difficult to construct a model that considers all possible relevant information that in the future could become available.
Therefore, the problem arises when one obtains some new and possibly unexpected information
and wants to use it to update a probability distribution.
Indeed, it does not seem appropriate to assess the probability of something which has already been observed.
We shall now show that if instead $I$ is the outcome of a random variable $X$ and there is a joint density $f$ for
$(X, Y)$, then one recovers, as a particular case, the conditional distribution of $Y$ given $X$.
If there is a joint density $f$ for $(X, Y)$,
then the conditional distribution \eqref{f: prob_cond} of $Y$ given $X$
arises as the solution
of a decision theoretic problem related to a loss function of the form \eqref{f: cum_loss}.
For every $x$ in $\mathbb{X}$
satisfying \eqref{f: marginalx},
such a loss function is:
\begin{equation}\label{f: loss1}
-\int_{S} \log (f(x,y)/f_Y(y))\,\nu(\diff y)\,+\,d_{KL}(\nu, P_Y),
\end{equation}
where
\begin{equation*}
f_Y(y):=\int_{\mathbb{X}} f(x,y) \ \mu_1(\diff x),
\end{equation*}
$S$ is the set of all $y$ in $\mathbb{Y}$ such that $0<f_Y(y)<\infty$,
$P_Y$ is the probability distribution of $Y$,
$\nu$ is a probability measure on $\mathbb{Y}$ absolutely continuous
w.r.t. $P_Y$.
The loss \eqref{f: loss1} is of the form \eqref{f: cum_loss} with
\begin{equation}\label{f: self-information-loss}\begin{split}
l(\theta,I)=h(y, x):&=-\ind_S(y)\log (f(x,y)/f_Y(y))\\&=-\ind_S(y)\log f_{X| Y}(x|y),
\end{split}\end{equation}
where $\ind_S(y)$ is equal to $1$ or $0$ depending on whether $y$ belongs to $S$ or not.
For every $x$ in $\mathbb{X}$ satisfying \eqref{f: marginalx},
the conditional distribution $p(\cdot,x)$ given by \eqref{f: prob_cond}
minimizes the loss \eqref{f: loss1}.
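This minimization can be verified directly in a discrete setting, where $\mu_1$ and $\mu_2$ are counting measures; the joint density below is randomly generated purely for illustration:

```python
import numpy as np

# A discrete joint density f(x, y) on a 4 x 5 grid (mu_1, mu_2 = counting measures)
rng = np.random.default_rng(7)
f = rng.random((4, 5))
f /= f.sum()
f_X = f.sum(axis=1)                      # marginal of X
f_Y = f.sum(axis=0)                      # marginal of Y, i.e. P_Y here

x = 2                                    # the observed outcome of X

# Self-information loss h(y, x) = -log(f(x, y) / f_Y(y))
h = -np.log(f[x] / f_Y)

# Minimiser of  sum_y h(y, x) nu(y) + d_KL(nu, P_Y):  nu(y) propto exp(-h(y, x)) P_Y(y)
nu = np.exp(-h) * f_Y
nu /= nu.sum()

print(np.allclose(nu, f[x] / f_X[x]))    # equals the conditional density: True
```

Since $\exp(-h(y,x))\,P_Y(y) = f(x,y)$, normalizing indeed recovers the conditional density $f_{Y|X}(y|x)$.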
If the random variable $X$ is replaced by some non--stochastic information $I$,
then the self--information loss \eqref{f: self-information-loss} cannot be defined, but one can still resort to a loss function of the form \eqref{f: cum_loss}, by choosing a different loss $l(\theta,I)$.
So, the approach introduced in Section 2
provides a general definition of conditional distributions based on non-stochastic information.
\vspace{0.2in}
\noindent{\bf 4.2 Partial information.} We now consider a partial information problem. Here the parameter of interest is $\theta$, yet the information $I$ collected is more informative; it is possible to identify $I_\Theta\subset I$ which provides all the information about $\theta$. One is therefore interested in constructing the loss function $l(\theta,I_\Theta)$. A particular example to be looked at in detail is the proportional hazards model. If the model is that the hazard function is $g_0(t)\,\exp(g(\theta,z_i))$ for individual $i$, where $z_i$ is the covariate value for individual $i$, $g_0$ is the baseline hazard function and $g(\theta,z)$ is the regression function, then the information about $\theta$ is provided by individual $i$ failing from the set of possible failures $S_i=\{j:t_j\geq t_i\}$, where $t_i$ is the time of failure of individual $i$, and the failure times are assumed to be distinct. The assumption allowing us to use the partial information is that the failure times provide information about $\theta$ only through the sets $\{S_i\}$. Hence, there are $k\leq n$ pieces of information, where $k$ is the number of individuals whose failure time is known, and it is usual to denote this by setting $\delta_i=1$.
Using the partial self--information or logarithmic loss function, we have
$$\begin{array}{ll}
l(\theta,I_\Theta) & =-\sum_{\delta_i=1}\log \mbox{P}(i|S_i,z) \\ \\
&=-\sum_{\delta_i=1}\left\{g(\theta,z_i)-\log \left(\sum_{j\in S_i} \exp(g(\theta,z_j)) \right)\right\}\end{array}$$
and so the solution to the decision problem is given by
$$\nu({\rm d}\theta)\propto \exp\{-l(\theta,I_\Theta)\}\,\pi({\rm d}\theta).$$
This is a new approach, not previously taken up by Bayesians owing to the lack of a formal motivation. In Appendix C we consider other stylized models.
\vspace{0.2in} \noindent {\bf 5. Illustrations.} In this section we discuss the application of our approach to two important inferential problems. The first is an analysis of variation in survival times of colon cancer patients incorporating genetic information as potential predictors. The second is for joint inference on a set of quantiles. In both cases we claim that the choice of loss function is well founded (and unique) and that there is no traditional Bayesian interpretation of the updates we are implementing. Yet the updates we employ do allow us to learn about the specified parameters of interest. All of the models used to generate results are available as open source code in R.
\vspace{0.2in}
\noindent
{\bf 5.1 Colon cancer genetic survival analysis.}
Colon cancer is a major worldwide disease with increasing prevalence particularly within western societies. Exploring the genetic contribution to variation in survival times following incidence of the cancer may shed light on the disease etiology and underlying disease heterogeneity. To this aim collaborators at the Wellcome Trust Centre for Human Genetics, University of Oxford, obtained survival times on 918 cancer patients with germline genotype data at hundreds of thousands of markers genome-wide. For demonstration purposes we only consider one chromosome's worth of data containing 15,608 genotype measurements. The data table $X$ then has $n=918$ rows and $p=15,608$ columns, where $(X)_{ij} \in \{0,1,2\}$ denotes the genotype of the $i$'th individual at the $j$'th marker. Alongside this we have the corresponding $(n \times 2)$ response table of survival times $Y$ with a column of event-times, $y_{i1} \in {\Re}^+$ and a column of indicator variables $y_{i2} \in \{0,1\}$, denoting whether the event is observed or right-censored at $y_{i1}$.
To explore association between genetic variation and time-to-event we employ a loss function derived under proportional hazards, treating the baseline hazard as a nuisance parameter.
This is based on the Cox proportional hazard (PH) model, one of the most widely used methods in survival analysis since its introduction in Cox (1972). In this log-linear model the hazard rate at time $t$ for an individual with covariate ${\bf x}= \{x_1, \ldots, x_p\}$ is defined as,
$$
h(t | {\bf{x}}) = h_0(t) \exp \left( \sum_{j=1}^p x_j \beta_j \right )
$$
where $h_0(t)$ is a baseline hazard function. In the seminal work of Cox (1972), $h_0(t)$ is treated as a nuisance parameter (or process) that does not enter into the partial-likelihood for estimating the parameters of interest $\bbeta$.
Using our construction we can consider only the order of events as partial-information relevant to the regression coefficients, $\bbeta$, via the cumulative loss function,
$$
l(\bbeta , {\bf x}) = -\sum_{i=1}^n \log \left ( \frac{\exp(\sum_j x_{ij} \beta_j)}{\sum_{l\in R_i} \exp(\sum_j x_{lj} \beta_j)} \right ),
$$
where $R_i$ denotes the risk set at time $t_i$, namely those individuals who have not yet failed or been censored, and in this way obtain a conditional distribution $\pi(\bbeta | \bx)$.
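For concreteness, the cumulative loss above (the negative log partial likelihood) can be computed directly from the risk sets. The following Python sketch uses hypothetical inputs, ignores tied failure times as in the text, and is not the authors' code:

```python
import math

def cox_partial_loss(beta, x, times, events):
    """Cumulative loss l(beta, x): the negative log partial likelihood.

    x[i][j]  : covariate j for individual i
    times[i] : event or censoring time for individual i
    events[i]: 1 if the failure was observed, 0 if right-censored
    """
    # linear predictors eta_i = sum_j x_{ij} beta_j
    eta = [sum(b * xij for b, xij in zip(beta, xi)) for xi in x]
    loss = 0.0
    for i in range(len(times)):
        if events[i] != 1:
            continue  # censored individuals enter only through risk sets
        # risk set R_i: everyone who has not failed or been censored before t_i
        denom = sum(math.exp(eta[l]) for l in range(len(times))
                    if times[l] >= times[i])
        loss -= eta[i] - math.log(denom)
    return loss
```

With all coefficients zero, each observed failure contributes the log of its risk-set size, which gives a quick sanity check on an implementation.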
\vspace{0.2in}
\noindent
{\bf 5.1.1 Single marker association.}
As is standard practice, e.g. Balding (2006), we initially investigate the evidence of genetic association by testing each of the 15,608 markers in turn using a univariate model with loss,
$$
l(\beta_j , {\bf x_{j}}) = -\sum_{i=1}^n \log \left ( \frac{\exp( x_{ij} \beta_j)}{\sum_{l\in R_i} \exp( x_{lj} \beta_j)} \right ),
$$
for each of the $j = 1, \ldots, 15,608$ genetic markers. An advantage of our approach is the incorporation of prior information into the analysis. In most modern genome-wide genetic association studies we expect {\em{a priori}} that the coefficient values of predictive markers will be small, as otherwise we would have detected association of the marker using historic linkage based methods with lower resolution but higher power. Hence, we have additional information on the coefficient values. For unknown markers truly associated with survival we assume,
$$
\beta_j \sim N(0, v_j)
$$
and set $v_j = 0.5$ for our study, reflecting beliefs that associated coefficients will be modest. For each marker we now include an indicator variable, $\delta_j \in \{0,1\}$, that specifies whether there is any association at the corresponding marker or not. This defines a hierarchical prior with,
$$
\pi(\beta_j | \delta_j ) = \left\{
\begin{array}{lll}
0 & ~ & {\textrm{if }} \delta_j = 0 \\
\mbox{N}(0, v_j) & ~ & {\textrm{otherwise} },
\end{array}
\right.
$$
and our prior $\pi(\delta_j)$ reflects beliefs about whether the corresponding $\beta_j$ will be zero or not. For now we shall simply assume $\pi(\delta_j = 1) = 0.5$, although we note it is straightforward to incorporate genetic prior information here.
In this way we can use our framework to calculate a posterior measure $\pi(\delta_j, \beta_j | \bx, \by)$ for each marker. Interest lies in the evidence for a non-zero effect, i.e., in the marginal,
$$
\pi(\delta_j | \bx, \by) = \int_{\beta_j} \pi(\beta_j, \delta_j | \bx, \by) {\rm d} \beta_j .
$$
In particular we can define the general Bayes Factor of association at the $j$th marker as,
$$
BF_j = \frac{ \int_{\beta_j} \exp \left[ -l(\beta_j | \bx_j) \right] \pi(\beta_j | \delta_j=1) {\rm d} \beta_j}{ \exp \left[ -l(\beta_j = 0 | \bx_j) \right] }
$$
The one-dimensional integral in the numerator is simple to evaluate using quadrature or Monte Carlo methods. However, with a large sample size and over 15,000 integrals to calculate it is convenient to adopt a Laplace approximation to the integral, namely,
$$
\int_{\beta_j} \exp \left[ -l(\beta_j | \bx_j) \right] \pi(\beta_j | \delta_j=1) {\rm d} \beta_j \approx (2\pi)^{1/2}\, | \hat{\Sigma}_j | ^{-1/2} \exp[- l( \tilde{\beta}_j | \bx_j)] \pi(\tilde{\beta}_j | \delta_j=1)
$$
where $\tilde{\beta}_j$ is the MAP estimator, the mode of the posterior $\pi(\beta_j | \delta_j, \bx, \by)$, and $\hat{\Sigma}_j$ is an estimate of the Hessian at the mode. Both the MAP estimate and the Hessian can be calculated efficiently under our loss and normal prior for $\beta_j$. We calculated the general Bayes Factors for each marker and in Fig (\ref{BF}) we plot the log Bayes Factors over the chromosome. While there is considerable variation we observe strong evidence of association around marker 10,000. To test whether the Laplace approximation is accurate we selected 500 markers at random and ran a Monte Carlo importance sampler with proposal $N(\tilde{\beta}_j, \tilde{\Sigma}_j^{-1})$ and 500 samples. Fig (\ref{LP}) indicates that the Laplace approximation appears accurate. This is not so surprising given we have 918 observations and a single parameter.
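The Laplace calculation can be sketched in a few lines. In the sketch below a quadratic loss, with a known closed-form integral, stands in for the survival loss so that the approximation can be checked exactly; all names are illustrative:

```python
import math

def neg_log_norm(b, v):
    """Negative log density of N(0, v) at b."""
    return 0.5 * math.log(2 * math.pi * v) + b * b / (2 * v)

def find_mode(g, lo=-10.0, hi=10.0, iters=200):
    """Crude golden-section search for the minimiser of a unimodal g."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if g(c) < g(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def laplace_integral(g, mode, step=1e-4):
    """Laplace approximation to int exp(-g(b)) db, using a
    central-difference estimate of the Hessian at the mode."""
    g0 = g(mode)
    h = (g(mode + step) - 2.0 * g0 + g(mode - step)) / step ** 2
    return math.sqrt(2.0 * math.pi / h) * math.exp(-g0)

def log_bayes_factor(loss, v):
    """log BF: log of int exp(-loss(b)) N(b; 0, v) db, plus loss(0)
    from the point-null denominator exp(-loss(0))."""
    g = lambda b: loss(b) + neg_log_norm(b, v)
    mode = find_mode(g)
    return math.log(laplace_integral(g, mode)) + loss(0.0)
```

For a quadratic loss the approximation is exact, which makes it a convenient unit test before applying the same routine to the partial-likelihood loss.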
It is interesting to compare the evidence of association provided by the Bayes Factor Fig (\ref{BF}) in comparison to that obtained using a conventional Cox PH partial-likelihood based test. In Fig (\ref{BFvP}) we plot the log Bayes Factors versus $- \log_{10}$ p-values obtained from a likelihood ratio test. We can see general agreement especially at the markers with strongest association as one would expect for a large sample size. Interestingly there appears to be greater dispersion at markers of weaker association. In Fig (\ref{pvalcol}) we highlight the region of weaker association and colour the points by the standard error of the maximum likelihood estimate. We can see a tendency for markers with less information, greater standard error, to get attenuated towards a logBF of 0 under the general Bayesian approach. This is further highlighted in Fig (\ref{SEvBF}) where we plot the standard error against log Bayes Factors. Markers with high standard error relate to genotypes of rarer alleles and the attenuation reflects a greater degree of uncertainty for association at these markers that contain less information.
Returning to the ``hit region'' showing strongest association around marker 10,000, in Fig (\ref{BF_hit}) we see the portion of the graph from Fig (\ref{BF}) containing 800 markers around the marker of strongest association. Due to high collinearity between markers it is not clear whether the signal of association arises from a single effect correlated with others, or from multiple independent association signals. In order to investigate this we developed multiple marker methods.
R code to calculate Bayes Factors for single marker association using Laplace and Monte Carlo Importance Sampling is available.
\vspace{0.2in}
\noindent
{\bf 5.1.2 Multiple marker variable selection.}
With the aim of determining if there are multiple markers underlying the signal of association in Fig (\ref{BF_hit}), we consider a model using potentially all 800 markers in the region and phrase the problem as a variable selection task under the partial-likelihood loss \eqref{eqn:part_lik}, in which the user suspects that some of the $p=800$ recorded covariates may not be relevant to variation in survival times.
In the non-Bayesian paradigm, variable selection can proceed by defining a cost function, such as AIC or BIC, that adjusts fit to the data by the number of covariates in the model. Inference proceeds using an optimization algorithm, such as forward or stepwise selection, to find a model that minimises the cost. More recently, penalized-likelihood methods have proved popular (Tibshirani, 1997; Fan and Li, 2002) where the partial-likelihood is maximised subject to some constraint on the norm of the regression coefficients defined by some appropriate sparsity inducing metric.
Despite the enormous impact of Cox PH models and the importance of variable selection, the Bayesian literature in this area is very limited. This is because of the lack of a theoretical foundation to treat $h_0(t)$ as a nuisance parameter, leading to either ad hoc methods or the full specification of a joint probability model. For instance, Faraggi and Simon (1998) and Volinsky et al. (1997) adopt pseudo-Bayesian approaches. The paper of Volinsky et al. (1997) take the BIC as an approximation to the marginal likelihood and they use a branch and bound algorithm to find a set of models with differing sets of covariates with high BIC scores. The difficulty here is that, while the methods are important and well motivated, they are ultimately ad hoc. Moreover, prior information on $\pi({\bm{\beta}})$ does not enter into the calculation of the BIC, meaning that an important aspect of the Bayesian approach is lost.
In contrast, Ibrahim et al. (1999) consider variable selection within a full joint model using a prior specification of a gamma process for the baseline hazard. This provides a formal Bayesian solution but inference is then conditional on, and sensitive to, the specification of the prior on $h_0(t)$, something the partial-likelihood model explicitly avoids.
Here we use the partial-information relevant to the regression coefficients $\bbeta$ via the cumulative loss function,
\begin{equation}
l(\bbeta | {\bf x}) = -\sum_{i=1}^n \log \left ( \frac{\exp(\sum_j x_{ij} \beta_j)}{\sum_{l\in R_i} \exp(\sum_j x_{lj} \beta_j)} \right ),
\label{eqn:part_lik}
\end{equation}
where $R_i$ denotes the risk set at time $t_i$, namely those individuals who have not yet failed or been censored. As in Section 5.1.1 we assume proper priors on the regression coefficients,
$$
\pi(\beta_j) = \left\{
\begin{array}{lll}
0 & ~ & {\textrm{if }} \delta_j = 0 \\
\mbox{N}(0, v_j) & ~ & {\textrm{otherwise} },
\end{array}
\right.
$$
where $\delta_j \in \{0,1\}$ is an indicator variable on covariate relevance with,
$$
\pi(\delta_j) = \mbox{Bn}(a_j)
$$
where $\mbox{Bn}(a_j)$ denotes the Bernoulli distribution with success probability $a_j$; we now treat $\bdelta = \{\delta_1, \ldots, \delta_{800}\}$ as a vector in a joint model. In this way the posterior $\pi(\bdelta | \bx)$ quantifies beliefs about which variables are important to the regression. We use Markov chain Monte Carlo (MCMC) to draw samples approximately from $\pi(\bbeta, \bdelta | \bx)$, from which the marginal distribution on $\bdelta$ can be examined. In particular we make use of an efficient joint updating proposal, $q(\bdelta', \bbeta' | \bdelta)$, within the MCMC, as
$$
q(\bdelta' , \bbeta' | \bdelta) = q(\bdelta' | \bdelta) q(\bbeta' | \bdelta')
$$
where $q(\bdelta' | \bdelta)$ proposes a local move to add, remove, or swap one variable per MCMC iteration in or out of the current model indexed by $\bdelta$, and $q(\bbeta' | \bdelta')$ is a joint independence Metropolis update proposal,
$$
q(\bbeta' | \bdelta') = \mbox{N}(\tilde{\bbeta}_{\delta'}, \tilde{{\bm{V}}}_{\delta'})
$$
where $\{\tilde{\bbeta}_{\delta'}, \tilde{{\bm{V}}}_{\delta'}\}$ are the MAP estimate and the approximate covariance (inverse Information Matrix) obtained from the combination of the log partial loss and the normal prior. The joint proposal is then accepted with probability,
$$
\alpha = \min \left\{ 1, \frac{ \exp[-l(\bbeta' | {\bf x})] \pi(\bbeta' | \bdelta') \pi(\bdelta') q(\bbeta, \bdelta | \bdelta') } { \exp[-l(\bbeta | {\bf x})] \pi(\bbeta | \bdelta) \pi(\bdelta) q(\bbeta', \bdelta' | \bdelta) } \right\}
$$
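The local move $q(\bdelta' | \bdelta)$ can be sketched as follows; this is an illustrative helper with hypothetical names, not the implementation used for the study:

```python
import random

def propose_delta(delta, p):
    """Local move on the inclusion vector delta: add, remove, or swap
    a single variable, as in q(delta' | delta)."""
    delta = list(delta)
    included = [j for j in range(p) if delta[j] == 1]
    excluded = [j for j in range(p) if delta[j] == 0]
    # only moves that are currently possible
    moves = [m for m, ok in (("add", excluded),
                             ("remove", included),
                             ("swap", included and excluded)) if ok]
    move = random.choice(moves)
    if move == "add":
        delta[random.choice(excluded)] = 1
    elif move == "remove":
        delta[random.choice(included)] = 0
    else:  # swap: one variable out, another in
        delta[random.choice(included)] = 0
        delta[random.choice(excluded)] = 1
    return delta
```

Each proposal changes the model size by at most one, keeping the chain local; the companion coefficient proposal $q(\bbeta' | \bdelta')$ is then an independence draw at the new configuration.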
We ran our MCMC algorithm for 100,000 iterations with prior parameter settings, $\{v_j = 0.5, a_j=1/800\}$, for all $j = 1, \ldots, p$, equivalent to a prior assumption of a single associated marker. In Fig (\ref{Prob_post}) we show the marginal inclusion probability, after discarding 10,000 samples as a burn-in. The algorithm showed an overall acceptance rate of 8\% for proposed moves. The model suggests overwhelming evidence for a single marker in the region of index 10,200, but also weaker evidence of independent signals in a couple of other regions.
R code to perform the reversible jump MCMC multiple variable sampling for the Cox PH partial-likelihood with normal priors is available on request.
\vspace{0.2in}
\noindent
{\bf 5.2 Joint inference for quantiles and the Bayesian Boxplot.} We discuss this illustration for three reasons. The first is that there is a unique loss function for learning about a set of quantiles, countering the notion that loss functions are arbitrary; the second is that there is no traditional Bayesian update for a set of quantiles which can coincide with our approach. Finally, we show how the boxplot, one of the most widely used exploratory graphical tools, can be enhanced by taking into account the uncertainty in the plot due to a finite sample size.
Let us start with the median alone.
The unique loss function for learning about the median of a distribution function is given by $l(\theta,x)=w|\theta-x|$ for some $w>0$. Hence, the posterior distribution is given by
$$\pi(\theta|x_1,\ldots,x_n)\propto \exp\left\{-w\sum_{i=1}^n|x_i-\theta|\right\}\,\pi(\theta).$$
One might be tempted to argue that this is merely a Bayesian update using the Laplace distribution and hence falls within the Bayesian paradigm. This is correct but it would put the Bayesian in an awkward quandary if she knew, for example, the observations were coming from a normal distribution.
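As a small numerical illustration (hypothetical data, a grid prior, and a given weight $w$; not from the paper), the update above can be evaluated directly:

```python
import math

def median_posterior(data, w, grid, log_prior):
    """Posterior on a grid: pi(theta | x) proportional to
    exp(-w * sum_i |x_i - theta|) * pi(theta)."""
    logs = [log_prior(t) - w * sum(abs(x - t) for x in data) for t in grid]
    m = max(logs)                        # stabilise before exponentiating
    weights = [math.exp(l - m) for l in logs]
    total = sum(weights)
    return [v / total for v in weights]
```

With a flat prior the posterior mode sits at the grid point closest to the sample median, as the loss intends.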
In fact we are, as we have stated previously, not assigning a probability model for $x$. To make this distinction more explicit let us consider the situation where we want to learn about the three quartiles $(\theta_1,\theta_2,\theta_3)$ jointly, where $\theta_1$ is the lower quartile, $\theta_2$ the median, and $\theta_3$ the upper quartile. The prior will be denoted by $\pi(\theta_1,\theta_2,\theta_3)$ which would obviously include the constraint $\theta_1<\theta_2<\theta_3$.
The loss function $l(\theta,x)$ in this case, treating the learning of the quartiles with equal importance, is given by
$$\begin{array}{ll}
l(\theta,x) & = w\left\{0.75(\theta_1-x)_++0.25(x-\theta_1)_+ + \right.\\ \\
& \left.+0.5|\theta_2-x|+0.25(\theta_3-x)_++0.75(x-\theta_3)_+\right\}
\end{array}$$
for some $w>0$.
Then the posterior distribution is given by
$$\pi(\theta|x_1,\ldots,x_n)\propto \pi(\theta)\exp\left\{-\sum_{i=1}^n l(\theta,x_i)\right\}.$$
This cannot be obtained from any Bayesian model that has currently been proposed. It is therefore certainly not classifiable as a Bayesian update.
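For illustration, the joint loss can be coded as a sum of check (``pinball'') losses, whose population minimisers are the lower quartile, median and upper quartile respectively. The following Python sketch uses hypothetical data:

```python
def quartile_loss(theta, x, w=1.0):
    """Joint check loss for (theta1, theta2, theta3). The check loss at
    quantile level tau is tau*(x-theta)_+ + (1-tau)*(theta-x)_+, whose
    population minimiser is the tau-quantile."""
    t1, t2, t3 = theta
    pos = lambda z: max(z, 0.0)
    return w * (0.25 * pos(x - t1) + 0.75 * pos(t1 - x)   # tau = 0.25
                + 0.5 * abs(t2 - x)                       # tau = 0.50
                + 0.75 * pos(x - t3) + 0.25 * pos(t3 - x))  # tau = 0.75
```

Summing this loss over a sample and comparing candidate $(\theta_1,\theta_2,\theta_3)$ values confirms that the empirical quartiles minimise the cumulative loss.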
We can illustrate the utility of this by considering a boxplot. In Fig (\ref{BP}) we show a boxplot of data taken from the example used in the MATLAB help file for the function {\tt{boxplot.m}}, in the statistics toolbox. The plot illustrates the distribution of miles per gallon (MPG) from records of a selection of cars taken in the 1970s, broken down by manufacturing country. The data set is available as {\tt{carbig.mat}} in MATLAB; we have omitted the `England' group, which contains only one observation.
The boxplot is one of the most important and widely used graphical tools for summarising the distribution of data and highlighting potential differences in the distributions across groups, but traditionally no uncertainty is displayed for the summary statistics used in the boxplot. In fact, for this data set there are only 13 observations for ``French'' cars while there are 249 observations for the ``USA'', yet the conventional boxplot fails to inform on this.
We placed a prior on the median and the upper and lower quartiles defined by the blue boxes in Fig (\ref{BP}) and accounted for the uncertainty by inferring the posterior distribution on these unknowns. Let $\theta_1$ denote the lower quartile, $\theta_2$ the median and $\theta_3$ the upper quartile. We adopted a fairly vague normal prior,
$$
\theta_1 \sim N(10, 100); ~~~ \theta_2 \sim N(20,100); ~~~ \theta_3 \sim N(30,100),
$$
with the constraint $\theta_1<\theta_2<\theta_3$. We adopt the ``observed unit information loss'' approach to setting $w$; see Section 3.4.2,
$$
\hat{w} = \frac{\int\log \{\pi(\widehat{\theta})/\pi(\theta)\}\,\pi({\rm d}\theta)}{\frac{1}{n-p} \sum_i l(\hat{\theta}_x, x_i)}
$$
where we estimate $\int\log \{\pi(\widehat{\theta})/\pi(\theta)\}\,\pi({\rm d}\theta)$ via Monte Carlo and use a Nelder-Mead optimiser for $\hat{\theta}_x$.
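A Monte Carlo sketch of this rule follows, with stand-in choices (a normal prior with mode zero, absolute-error loss, and the sample median as $\hat{\theta}_x$) rather than those of the study:

```python
import math
import random

def unit_information_w(data, loss, log_prior, sample_prior,
                       theta_hat_prior, theta_hat_x, p, n_mc=20000):
    """Observed unit-information weight: the prior term, estimated by
    Monte Carlo, divided by the average observed loss at the minimiser."""
    num = sum(log_prior(theta_hat_prior) - log_prior(sample_prior())
              for _ in range(n_mc)) / n_mc
    den = sum(loss(theta_hat_x, x) for x in data) / (len(data) - p)
    return num / den

# illustrative one-parameter example: N(0, 1) prior, absolute-error loss
random.seed(0)
log_norm = lambda t: -0.5 * math.log(2 * math.pi) - t * t / 2
w_hat = unit_information_w(
    data=[1.0, 2.0, 3.0],
    loss=lambda t, x: abs(t - x),
    log_prior=log_norm,
    sample_prior=lambda: random.gauss(0.0, 1.0),
    theta_hat_prior=0.0,   # prior mode
    theta_hat_x=2.0,       # sample median, the loss minimiser here
    p=1,
)
```

For the $N(0,1)$ prior the numerator is $\mbox{E}[\theta^2/2]=0.5$ and the denominator is $1$, so $\hat{w}\approx 0.5$ in this toy case.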
We then implemented a Metropolis-Hastings MCMC algorithm to sample from the posterior $\pi(\theta_1, \theta_2, \theta_3 | x)$, for each of the 6 groups of cars shown in Fig (\ref{BP}), using 100,000 samples with a 50,000 sample burn-in.
In Fig (\ref{BBP}) we show our ``Bayesian boxplot'' which includes the original boxes (empirical estimates) overlaid with $95\%$ credible intervals for $(\theta_1, \theta_2, \theta_3)$. Credible intervals are shown as extended dotted lines from the empirical estimates with a small diamond denoting the edge of the interval. In comparison with Fig (\ref{BP}) we see that Fig (\ref{BBP}) contains much more information. For example, we see that while in Fig (\ref{BP}) the median MPG of Italian and Swedish cars look different, in fact the 95\% credible intervals overlap in Fig (\ref{BBP}). In addition we see that there is considerable overlap in the distribution of medians between Sweden and the USA; and in general, comparison of medians or distributions in the conventional boxplot are obscured and confounded by sample size.
The MCMC samples approximately from $\pi(\theta_1, \theta_2, \theta_3 | x)$ for France and USA are shown in Figs (\ref{Fr}), (\ref{USA}). The data set for France contains 13 observations and hence there is much greater uncertainty in the posterior marginals. Moreover, looking at the joint densities of $(\theta_1, \theta_2)$ and $(\theta_2, \theta_3)$ we can see the constraints imposed by the prior. In contrast, due to the higher sample size the posterior samples for the USA are tighter and hence exhibit less dependence. An interesting extension would be to include hierarchical priors on the quartiles whereby one could borrow strength across groups.
\vspace{0.2in}
\noindent
{\bf 6. Discussion.} We have provided a basis for general learning and the updating of information using belief probability distributions. Loss functions constructed on spaces of probability measures allow for coherent updating. Specifically, information is connected to the parameter of interest via a loss function and this is the fundamental concept, replacing the restrictive connection based on probability models. We can recover precisely the traditional updating rules such as the Bayes rule when we select the self--information loss function, when it is appropriate to do so.
The assumptions we make are minimal: that information can be connected to unknown parameters via loss functions, and that individuals then act rationally by minimizing their expected loss. If information is assumed to come from some probability model, then we can accommodate this within our framework by appealing to the self--information loss function, equivalent to the negative log-likelihood, and so we can argue that loss functions are sufficient for the learning mechanisms currently in use.
The scope of our findings provides extensive generalizations of the Bayes updating rule. For the Bayesian, the difficulty of constructing a probability model, with all the implications of assigning probability one to events, can be compared with the ease of introducing a loss function, which carries no such further implications. A probability model needs to assert a sample space with alternatives and assign probabilities to all outcomes. On the other hand, a loss function can be constructed after the information has been received and determined solely for the known information, without the need to consider which alternative information could have been received. Yet, surprisingly, both approaches can coincide, which suggests the Bayesian supporting theory is more than is really needed.
More generally, we can use loss functions currently employed in a classical context for robust estimation; for example, generalized estimating equations. We can also deal appropriately with partial information, where only a part of the observed information is useful or relevant for learning about a particular parameter of interest.
We have developed a rigorous approach to updating beliefs where we are required only to think about which is the best parameter from a chosen model needed to make a decision rather than have to think about a non--existent true model parameter which coincides with the true data generating mechanism.
\vspace{0.2in}
\noindent
{\bf 6.1 Optimal Decisions.}
Let us now recap the story from a slightly different perspective, when observations are independent and identically distributed from $F_0(x)$ and an action $a\in A$ is to be taken. The decision maker is happy to act if the minimizer, $\theta_0$, of $\int l(\theta,x)\,{\rm d} F_0(x)$ is known, for some loss function $l(\theta,x)$. The action is based on the utility function $u(a,\theta)$ and hence would be the one maximizing $u(a,\theta_0)$.
With $\theta_0$ not being known, as $F_0$ is not known,
a prior distribution $\pi(\theta)$ is constructed expressing beliefs about the location of $\theta_0$. Then, with data $(x_i)_{i=1}^n$, the probability measure $\nu(\theta)$ with which to choose an action $a$ through the maximization of expected utility, i.e. $U(a)=\int_\Theta u(a,\theta)\,\nu({\rm d}\theta)$, is the one that minimizes the loss function
$$L(\nu)=\sum_{i=1}^n \int_\Theta l(\theta,x_i)\,\nu({\rm d}\theta)+d_{KL}(\nu,\pi).$$
In this way it is seen that the sequence of $\nu(\theta)$ should accumulate about $\theta_0$.
To us, now, there seems to be no reason whatsoever why $l(\theta,x)$ should be exclusively based on a probability distribution. For example, if we want the median then $l(\theta,x)=|\theta-x|$; if we want the mean then $l(\theta,x)=(\theta-x)^2$; whereas if we want the $\theta$ taking us closest in Kullback--Leibler divergence to $f_0$, then $l(\theta,x)=-\log f(x;\theta)$.
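These targets are easy to check empirically: minimising each cumulative empirical loss recovers the corresponding functional of the sample. A small illustrative sketch with hypothetical data:

```python
def argmin_on_grid(loss, data, grid):
    """Return the grid point minimising the cumulative empirical loss
    sum_i l(theta, x_i)."""
    return min(grid, key=lambda t: sum(loss(t, x) for x in data))

data = [1.0, 2.0, 2.0, 3.0, 10.0]
grid = [i * 0.1 for i in range(121)]
theta_med = argmin_on_grid(lambda t, x: abs(t - x), data, grid)     # sample median
theta_mean = argmin_on_grid(lambda t, x: (t - x) ** 2, data, grid)  # sample mean
```

The absolute-error loss lands on the sample median (2.0), which is robust to the outlying value 10, while the squared-error loss lands on the sample mean (3.6), which is not.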
\vspace{0.2in}
\noindent
{\bf 6.2 Conclusion.}
We acknowledge we have presented a general framework which at first sight might appear to sanction ``anything goes''. This is wrong. We have replaced a subjective probability model with an objective loss function, since the parameter of interest is typically defined by the statistical problem. In this case, the loss function connecting the information to the parameter is unique. See, for example, Section 5.2, in the case of the parameter of interest being the median. On the other hand, there is no unique probability distribution with which to first model the data and then use this to estimate the median.
When the interest is in a parameter indexing a family of densities and the parameter to target is the one which makes this family closest to the true model, then the unique loss function in this case is the self--information loss, which yields the Bayesian update.
We believe it is more fundamental to identify parameters of interest through loss functions and the corresponding information available. The alternative route through a probability model is, we argue, highly restrictive, leads to narrow types of Bayesian updating and, moreover, is more arbitrary. The necessary supporting theory for us is minimal: the construction and minimization of loss functions. For the use of probability models, by contrast, the supporting theory is more intricate and restrictive.
\vspace{0.2in} \noindent {\bf References.}
\begin{description}
\item Ali, S.M. and Silvey, S.D. (1966). A general class of coefficients of divergence of one distribution from another. {\sl Journal of the Royal Statistical Society, Series B} {\bf 28}, 131--142.
\item Balding, D. J. (2006). A tutorial on statistical methods for population association studies. {\sl Nature Reviews Genetics}, {\bf 7}(10), 781--791.
\item Bar-Hillel, M. and Falk, R. (1982). Some teasers concerning conditional probabilities. {\sl Cognition} {\bf 11}, 109--122.
\item Barron, A., Schervish, M.J. and Wasserman, L. (1999). The consistency of posterior distributions in nonparametric problems. {\sl Annals of Statistics} {\bf 27}, 536--561.
\item Berger, J.O. (1993). {\sl Statistical Decision Theory and Bayesian Analysis}. Springer Series in Statistics.
\item Berk, R.H. (1966). Limiting behaviour of posterior distributions when the model is incorrect. {\sl Annals of Mathematical Statistics} {\bf 37}, 51--58.
\item Bernardo, J.M. (1979). Expected information as expected utility. {\sl Annals of Statistics} {\bf 7}, 686--690.
\item Bernardo, J.M. and Smith, A.F.M. (1994). {\sl Bayesian Theory}. Wiley.
\item Billingsley, P. (1995). {\sl Probability and Measure}. Third edition. Wiley, New York.
\item Bunke, O. and Milhaud, X. (1998). Asymptotic behaviour of Bayes estimates under possibly incorrect models. {\sl Annals of Statistics} {\bf 26}, 617--644.
\item Cox, D. R. (1972). Regression models and life tables (with discussion). {\sl Journal of the Royal Statistical Society, Series B} {\bf 34}, 187--220.
\item Datta, G. S. and Sweeting, T. J. (2005). Probability matching priors. {\sl Handbook of Statistics} {\bf 25}, 91--114.
\item De Blasi, P. and Walker, S.G. (2012). Bayesian asymptotics with misspecified models. {\sl Statistica Sinica} {\bf 23}, 169--187.
\item Dempster, A.P. (1968). A generalization of Bayesian inference. {\sl Journal of the Royal Statistical Society, Series B} {\bf 30}, 205--247.
\item Diaconis, P. and Zabell, S.L. (1982). Updating subjective probability. {\sl Journal of the American Statistical Association} {\bf 77}, 822--830.
\item Doucet, A., and Shephard, N. (2012). Robust inference on parameters via particle filters and sandwich covariance matrices. {\sl University of Oxford, Department of Economics}. No. 606.
\item de Finetti, B. (1937). La pr\'evision: ses lois logiques, ses sources subjectives. {\sl Annales de l'Institute Henri Poincar\'e} {\bf 7}, 1--68.
\item Escobar, M.D. (1988). {\sl Estimating the means of several normal populations by nonparametric estimation of the distribution of the means}.
Unpublished PhD dissertation, Department of Statistics, Yale University.
\item Fan, J. and Li, R. (2002). Variable selection for Cox's proportional hazards model and frailty model. {\sl Annals of Statistics} {\bf 30}, 74--99.
\item Faraggi, D. and Simon, R. (1998). Bayesian variable selection method for censored survival data. {\sl Biometrics} {\bf 54}, 1475--1485.
\item Feller, W. (1971). {\sl An Introduction to Probability Theory and its Applications, Volume II}. Second edition. Wiley, New York.
\item Ferguson, T.S. (1973). A Bayesian analysis of some nonparametric problems. {\sl Annals of Statistics} {\bf 1}, 209--230.
\item Freund, J.E. (1965). Puzzle or paradox? {\sl The American Statistician} {\bf 19}, 29--44.
\item Gardner, M. (1959). {\sl The Scientific American Book of Mathematical Puzzles and Diversions}. Simon and Schuster, New York.
\item Ghosh, J.K. and Ramamoorthi, R.V. (2003). {\sl Bayesian Nonparametrics} Berlin: Springer--Verlag.
\item Goldstein, M. (1981). Revising previsions: A geometric interpretation. {\sl Journal of the Royal Statistical Society, Series B} {\bf 43}, 105--130.
\item Goldstein, M., and Wooff, D. (2007). {\sl Bayes Linear Statistics, Theory \& Methods.} {\bf 716}. Wiley.
\item Hastie, T., Tibshirani, R. and Friedman, J. (2009). {\sl Elements of Statistical Learning}. Springer.
\item Hirshleifer, J. and Riley, J.G. (1992). {\sl The Analytics of Uncertainty and Information}. Cambridge University Press.
\item Hjort, N.L., Holmes, C.C., M\"uller, P. and Walker, S.G. (2010). {\sl Bayesian Nonparametrics}. Cambridge University Press.
\item Hoff, P. and Wakefield, J.C. (2013). Bayesian sandwich posteriors for pseudo-true parameters. To appear in {\sl Journal of Statistical Planning and Inference}
\item H\"uber, P. (1964). Robust estimation of a location parameter. {\sl Annals of Mathematical Statistics} {\bf 35}, 73-101.
\item H\"uber, P. (2009). Robust Statistics (2nd ed.). Hoboken, NJ: John Wiley \& Sons Inc.
\item Hutchison, K. (1999). What are conditional probabilities conditional upon? {\sl British Journal for the Philosophy of Science} {\bf 50}, 665--695.
\item Hutchison, K. (2008). Resolving some puzzles of conditional probability. {\sl Adv. Sci. Lett.} {\bf 1}, 212--221.
\item Ibrahim, J.G. and Chen, M.H. (2000). Power prior distributions for regression models. {\sl Statistical Science} {\bf 15}, 46--60.
\item Ibrahim, J.G., Chen, M.H. and MacEachern, S.N. (1999). Bayesian variable selection for proportional hazards models. {\sl The Canadian Journal of Statistics} {\bf 27}, 701--717.
\item Jiang, W. and Tanner, M.A. (2008). Gibbs posterior for variable selection in high-dimensional classification and data mining. {\sl Annals of Statistics} {\bf 36}, 2207--2231.
\item Key, J.T., Pericchi, L.R. and Smith, A.F.M. (1999) Bayesian model choice: What and why? (with discussion). In {\sl Bayesian Statistics 6}, Bernardo, J.M., Berger, J.O., Dawid,
A.P. and Smith, A.F.M. (Eds). Oxford University Press, 343--370.
\item Kleijn, B.J.K. and van der Vaart, A.W. (2006). Misspecification in infinite dimensional Bayesian statistics. {\sl Annals of Statistics} {\bf 34}, 837--877.
\item Kullback, S. and Leibler, R.A. (1951). On information and sufficiency. {\sl Annals of Mathematical Statistics} {\bf 22}, 79--86.
\item Liang, K.Y. and Zeger, S.L. (1986). Longitudinal data analysis using generalized linear models. {\sl Biometrika} {\bf 73}, 13--22.
\item Lo, A.Y. (1984). On a class of Bayesian nonparametric estimates I. Density estimates. {\sl Annals of Statistics} {\bf 12}, 351--357.
\item Merhav, N. and Feder, M. (1998). Universal prediction. {\sl IEEE Transactions on Information Theory} {\bf 44}, 2124--2147.
\item M\"uller, U.K. (2012). Risk of Bayesian inference in misspecified models, and the sandwich covariance matrix. Working paper, Department of Economics, Princeton University.
\item Ribatet, M., Cooley, D. and Davison, A. C. (2009). Bayesian inference from composite likelihoods, with an application to spatial extremes. {\sl arXiv preprint} :0911.5357.
\item Royall, R., and Tsou, T. S. (2003). Interpreting statistical evidence by using imperfect models: robust adjusted likelihood functions. {\sl Journal of the Royal Statistical Society: Series B} {\bf 65}, 391--404.
\item Savage, L.J. (1954). {\sl The Foundations of Statistics}. New York. Wiley.
\item Shafer, G. (1976). {\sl A Mathematical Theory of Evidence}. Princeton University Press.
\item Tibshirani, R. (1997). The lasso method for variable selection in the Cox model. {\sl Statistics in Medicine} {\bf 16}, 385--395.
\item Volinsky, C.T., Madigan, D., Raftery, A.E. and Kronmal, R.A. (1997). Bayesian model averaging in proportional hazard models: assessing the risk of a stroke. {\sl Journal of the Royal Statistical Society, Series C} {\bf 46}, 433--448.
\item von Neumann, J. and Morgenstern, O. (1944). {\sl Theory of Games and Economic Behaviour}. Princeton University Press.
\item Walker, S.G. and Hjort, N.L. (2001). On Bayesian consistency. {\sl Journal of the Royal Statistical Society, Series B} {\bf 63}, 811--821.
\item Walker, S.G. (2004). New approaches to Bayesian consistency. {\sl Annals of Statistics} {\bf 32}, 2028--2043.
\item White, H. (1982). Maximum likelihood estimation of misspecified models. {\sl Econometrica} {\bf 50}, 1--25.
\item Zellner, A. (1988). Optimal information processing and Bayes's theorem. {\sl The American Statistician} {\bf 42}, 278--284.
\item Zhang, T. (2006a). From $\epsilon$-entropy to KL-entropy: analysis of minimum information complexity density estimation. {\sl Annals of Statistics} {\bf 34}, 2180--2210.
\item Zhang, T. (2006b). Information theoretical upper and lower bounds for statistical estimation. {\sl IEEE Transactions on Information Theory} {\bf 52}, 1307--1321.
\end{description}
\newpage
\vspace{0.2in} \noindent {\bf Appendix A: Proof of Theorem in Section 2.1.} This result is proven
by Bissiri and Walker (2012) in their Theorem 2. Here, a shorter proof is given by assuming the differentiability of $g$.
Assume that $\Theta$ contains at least two distinct points, say $\theta_1$ and $\theta_2$. Otherwise, $\pi$ is degenerate and the claim holds trivially.
To prove this theorem, it is sufficient to consider the case $n=2$ and a very specific choice for $\pi$, taking
$\pi=p_0\delta_{\theta_1}+ (1-p_0)\delta_{\theta_2},$
where $0< p_0< 1$.
Any probability measure $\nu$ absolutely continuous with respect to $\pi$ has to be equal to
$p\delta_{\theta_1}+(1-p)\delta_{\theta_2}$, for some $0\leq p\leq 1$.
Therefore, in this specific situation,
the loss $L(\nu;I,\pi)$ becomes:
\begin{equation*}
\begin{split}
l(p,p_0,h_I)&:=p\,h_I(\theta_1)\, +\,(1-p)\,h_I(\theta_2)\\
&\phantom{=}+\, p_0\,g\left(\frac{p}{p_0}\right)\,+\,(1-p_0)\,g\left(\frac{1-p}{1-p_0}\right),
\end{split}
\end{equation*}
where $h_I(\theta_i)=h(\theta_i,I_1)+h(\theta_i,I_2)$ for $I=(I_1,I_2)$ and $h_I(\theta_i)=h(\theta_i,I_j)$ for $I=I_j$, $i,j=1,2$.
Denote by $p_1$ the probability $\pi_{I_1}(\{\theta_1\})$, i.e. the minimum point of $l(p,p_0,h_{I_1})$
as a function of $p$, and by $p_2$ the probability $\pi_{(I_1,I_2)}(\{\theta_1\})$.
By hypotheses, $p_2$ is the unique minimum point of both loss functions
$l(p,p_1,h_{I_2})$ and $l(p,p_0,h_{(I_1,I_2)})$.
Again by hypothesis, we shall consider only those functions
$h_{I_1}$ and $h_{I_2}$
such that each one of the functions
$l(p,p_0,h_{I_1})$, $l(p,p_1,h_{I_2})$, and $l(p,p_0,h_{(I_1,I_2)})$,
as a function of $p$, has a unique minimum point, which is $p_1$ for the first one and $p_2$
for the second and third one.
The values $p_1$ and $p_2$ have to be strictly bigger than zero and strictly smaller than one: this was proved by
Bissiri and Walker (2012) in their Lemma 2.
Hence, $p_1$ has to be a stationary point of
$l(p,p_0,h_{I_1})$
and $p_2$ of both the functions
$l(p,p_1,h_{I_2})$ and $l(p,p_0,h_{(I_1,I_2)})$.
Therefore,
\begin{align}\label{f: loss.1}
g'\left(\frac{p_1}{p_0}\right)\,-\,g'\left(\frac{1-p_1}{1-p_0}\right)\,&=\,
h_{I_1}(\theta_2)\, -\, h_{I_1}(\theta_1),\\
\label{f: loss.2}
\, g'\left(\frac{p_2}{p_0}\right)\,-\,g'\left(\frac{1-p_2}{1-p_0}\right)\,&=\,
h_{(I_1,I_2)}(\theta_2)\, -\, h_{(I_1,I_2)}(\theta_1),\\
\label{f: loss.3}
\, g'\left(\frac{p_2}{p_1}\right)\,-\,g'\left(\frac{1-p_2}{1-p_1}\right)\,&=\,
h_{I_2}(\theta_2)\, -\, h_{I_2}(\theta_1).
\end{align}
Recall that
$h_{(I_1,I_2)}=h_{I_2}+h_{I_1}$.
Therefore,
summing up term by term \eqref{f: loss.1} and \eqref{f: loss.3},
and considering \eqref{f: loss.2}, one obtains:
\begin{equation}\label{f: equation}\begin{split}
g'&\left(\frac{p_2}{p_0}\right)\,-\,g'\left(\frac{1-p_2}{1-p_0}\right)\\
&\phantom{XXX}=\,g'\left(\frac{p_1}{p_0}\right)\,-\,g'\left(\frac{1-p_1}{1-p_0}\right)\, +\,
g'\left(\frac{p_2}{p_1}\right)\,-\,g'\left(\frac{1-p_2}{1-p_1}\right).
\end{split}\end{equation}
Recall that by hypothesis \eqref{f: loss.1}--\eqref{f: loss.3} need to hold for every two functions
$h_{I_1}$ and $h_{I_2}$
arbitrarily chosen with the only requirement that $p_1$ and $p_2$ uniquely exist.
Hence, \eqref{f: equation} needs to hold for every $(p_0,p_1,p_2)$ in $(0,1)^3$.
By substituting $t=p_0$, $x=p_1/p_0$ and $y=p_2/p_1$, \eqref{f: equation} becomes
\begin{equation}\label{f: equation.}\begin{split}
g'&\left(xy\right)\,-\,g'\left(\frac{1-txy}{1-t}\right)\\
&\phantom{XXX}=\,g'(x)\,-\,g'\left(\frac{1-tx}{1-t}\right)\, +\,
g'\left(y\right)\,-\,g'\left(\frac{1-txy}{1-tx}\right),
\end{split}\end{equation}
which holds for every $0<t<1$, and every $x,y>0$ such that $x<1/t$ and $y<1/(xt)$.
Since $g$ is convex and differentiable, its derivative $g'$ is continuous. Therefore,
letting $t$ go to zero, \eqref{f: equation.} implies that
\begin{equation}\label{f: equation+}
g'\left(xy\right)
=\,g'(x)\, +\,
g'\left(y\right)\,-\,g'(1)
\end{equation}
holds true for every $x,y>0$. Define the function
$\varphi(\cdot)=g'(\cdot) -g'(1)$.
This function is continuous, since $g'$ is,
and by \eqref{f: equation+},
$\varphi(xy)=\varphi(x)\, +\,\varphi(y)$ holds for every $x,y>0$. Hence $u\mapsto\varphi(e^u)$ is additive and continuous, so $\varphi(\cdot)$ is $k\ln(\cdot)$ for some constant $k$, and therefore
\begin{equation}\label{f: log}
g'(x)\
=\ k\,\ln (x)\ +\ g'(1),\end{equation}
where
\(k\,=\,(g'(2)\, -\, g'(1))/\ln(2)\).
Since $g$ is convex, $g'$ is non-decreasing and therefore
$k \geq 0$. If $k=0$, then $g'$ is constant, which is impossible: for any $h_I$,
$p_1$ satisfying \eqref{f: loss.1} either would not exist or would not be unique. Therefore, $k$ must be positive.
Since $g(1)=0$ by assumption, \eqref{f: log} implies that
$g(x)\:=\:k\, x\ln(x)\, +\, (g'(1)-k) (x-1)$.
Hence,
\[ h_2(\nu_1,\nu_2)=
k\int \ln \bigg(\frac{\diff \nu_1}{\diff \nu_2}\bigg)\ \diff \nu_1 \]
holds true for some $k>0$ and
for every couple of measures $(\nu_1, \nu_2)$ on $\Theta$ such that $\nu_1$ is absolutely continuous with respect to $\nu_2$.
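As a quick numerical sanity check on this conclusion (a standalone sketch, not part of the proof; the particular values chosen for $k$ and $g'(1)$ are arbitrary):

```python
import math

k, gp1 = 0.7, 1.3  # arbitrary choices for k > 0 and g'(1)

def g(x):
    # the form derived above: g(x) = k x ln x + (g'(1) - k)(x - 1)
    return k * x * math.log(x) + (gp1 - k) * (x - 1)

def gp(x):
    # its derivative, equation (f: log): g'(x) = k ln x + g'(1)
    return k * math.log(x) + gp1

assert abs(g(1.0)) < 1e-12                       # g(1) = 0, as assumed
for x in (0.5, 1.0, 2.0, 5.0):                   # check g' by finite differences
    h = 1e-6
    assert abs((g(x + h) - g(x - h)) / (2 * h) - gp(x)) < 1e-6
for x, y in ((0.3, 2.0), (1.5, 4.0)):            # the functional equation (f: equation+)
    assert abs(gp(x * y) - (gp(x) + gp(y) - gp1)) < 1e-12
```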
\vspace{0.2in} \noindent {\bf Appendix B: Asymptotics under M--open.} Here we discuss the asymptotic properties of the general Bayesian learning model. The difference from typical asymptotic studies is that we need to understand what happens when the proxy model chosen is ``wrong'', in a sense to be made precise. We will do this for the parametric model $f(x;\theta)$, $\theta\in\Theta$, and the idea is that we want the posterior distribution to accumulate about $\theta_0$, the parameter which minimizes the Kullback--Leibler divergence between the family and the true density function $f_0(x)$; i.e. $\theta_0$ minimizes
$$D(f_0(\cdot),f(\cdot;\theta))=\int_X f_0(x)\,\log \{f_0(x)/f(x;\theta)\}\,{\rm d} x,$$
and we will let
$$\delta=\int f_0(x)\,\log\{f_0(x)/f(x;\theta_0)\}\,{\rm d} x.$$
Early work in this direction has been done by Berk (1966) and more recently by Bunke and Milhaud (1998), Kleijn and van der Vaart (2006) and De Blasi and Walker (2012).
For our idea, two assumptions in order to achieve this almost sure accumulation are:
\begin{description}
\item 1. The likelihood ratio satisfies $$n^{-1}\sum_{i=1}^n \log \{f(x_i;\widehat{\theta})/f(x_i;\theta_0)\}\rightarrow 0\,\,\mbox{a.s.}$$
where $\widehat{\theta}$ is the maximum likelihood estimator; that is, $\widehat{\theta}$ maximizes $\prod_{i=1}^n f(x_i;\theta)$. We of course assume that this exists in the first place.
\item 2. The best parameter $\theta_0$ is in the support of the prior, so $$\pi(\theta:0<D(f_0(x),f(x;\theta))<\delta+\eta)>0$$ for all $\eta>0$.
\end{description}
The first effectively requires that the maximum likelihood estimator converges to the best parameter $\theta_0$.
This topic is dealt with by White (1982), which gives conditions under which $\widehat{\theta}\rightarrow \theta_0$ a.s., and the additional assumptions under which condition 1. is satisfied.
Condition 2. is clearly a support condition, so that the prior actually does put mass in a suitable neighborhood of $\theta_0$.
It is sufficient to consider the following problem. Take out a neighborhood $N$ about $\theta_0$ so that now the parameter closest to $f_0$ has a Kullback--Leibler distance $\delta^*>\delta$, and label the parameter as $\theta_0^*$.
We will now show that
$$I_{n1}/I_{n2}\rightarrow +\infty\,\,\,\mbox{a.s.}$$
where
$$I_{n1}=\int_N \left\{\prod_{i=1}^n f(x_i;\theta)/f_0(x_i)\right\}\,\pi_N({\rm d}\theta) $$
and
$$I_{n2}=\int_{\Theta-N} \left\{\prod_{i=1}^n f(x_i;\theta)/f_0(x_i)\right\}\,\pi_{\Theta-N}({\rm d}\theta)$$
where, for example, $\pi_N$ is $\pi$ restricted to the set $N$.
Now, using assumption 2., and following ideas in Barron, Schervish and Wasserman (1999), it can be shown that
$$I_{n1}\geq e^{-nc}\,\,\,\mbox{a.s.}$$
for all large $n$, for any $c>\delta$.
Also, based on assumption 1.,
$$I_{n2}\leq \prod_{i=1}^n \widehat{f^*}(x_i)/f_0(x_i)$$
where $\widehat{f^*}=f(\cdot;\widehat{\theta}^*)$ and $\widehat{\theta}^*$ is the maximum likelihood estimator restricted to $\Theta-N$. A similar technique is used in Walker and Hjort (2001).
Using the result of White (1982), we have that
$$\limsup_n n^{-1}\log I_{n2}\leq -\delta^*\,\,\,\mbox{a.s.}$$
Putting this together we see that we have
$$\liminf_n n^{-1} \log I_{n1}/I_{n2}\geq -c+\delta^*>0\,\,\,\mbox{a.s.}$$
and hence the desired result, since we can choose $\delta<c<\delta^*$.
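The accumulation at $\theta_0$ can be illustrated with a small simulation (a sketch under assumed choices, not taken from the text: exponential data, a misspecified $N(\theta,1)$ working model and a conjugate $N(0,10^2)$ prior, for which $\theta_0$ is the mean of $f_0$, namely 1):

```python
import math, random

random.seed(0)
n = 20000
s = sum(random.expovariate(1.0) for _ in range(n))  # f_0: Exponential(1)

tau2 = 100.0                       # prior variance
post_prec = n + 1.0 / tau2         # posterior precision (model sigma^2 = 1)
post_mean = s / post_prec          # prior mean is 0
post_sd = 1.0 / math.sqrt(post_prec)

assert abs(post_mean - 1.0) < 0.05  # the posterior accumulates at theta_0 = 1
assert post_sd < 0.01               # ...and concentrates as n grows
```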
More recently, De Blasi and Walker (2012) have extended the consistency result of Walker (2004) to the misspecified model case. In Walker (2004) the support condition along with
$$\sum_j \pi(A_{j,\epsilon})^{1/2}<+\infty$$
for all $\epsilon>0$, where the $(A_{j,\epsilon})$ form a Hellinger partition of the space of densities with balls of size $\epsilon$, is sufficient for consistency. For the misspecified case, for accumulation at the density in the support closest to $f_0$, the condition becomes
$$\sum_j\pi(A_{j,\epsilon(\alpha)})^\alpha<+\infty$$
for all $\alpha>0$, where, e.g., $\epsilon(\alpha)=(\alpha^2/2)^{2\alpha}$.
This is an important result. If the target is $\theta_0$ then we need to be sure we can find it given an arbitrarily large amount of information.
\vspace{0.2in} \noindent {\bf Appendix C: Stylized inference problems.} The form of the problem is as follows. We have independent stochastic pieces of information $I_i$. We identify a $\theta$ of interest to aid us in the decision process from which we will construct a utility $u(a,\theta)$. Equally, if $\theta_0$ were known we would be happy to select the action $a$ maximizing $u(a,\theta_0)$. The information $(I_i)$ provides further knowledge about $\theta_0$ through an appropriate loss function $l(\theta,I_i)$.
Let us return to the case when we observe $(x_i)$ independent and identically distributed from some density $f_0(x)$ and $f(x;\theta)$ is the chosen family to model this. In our framework we do not need to concern ourselves whether this family contains $f_0(x)$ or not, provided we use $l(\theta,x)=-\log f(x;\theta)$ under both scenarios.
We then obtain the standard Bayesian updating rule, but now the prior $\pi({\rm d}\theta)$ and posterior $\nu({\rm d}\theta)$ represent our best beliefs about which $\theta$ gets us closest to $f_0(x)$. All other aspects of inference can be done with this interpretation of $\theta$ and $\nu$. So, for an action $a\in A$, we would maximize $U(a)$ defined previously in terms of $\nu$.
There is also a difference to be highlighted with prediction. It is clear that as far as prediction is concerned,
$$p(x)=\int_\Theta f(x;\theta)\,\nu({\rm d}\theta)$$
is the estimate of the density closest to $f_0(x)$ and, unlike the Bayesian interpretation, one knows that the next $x$ is certainly not coming from this density. We can formally obtain $p(x)$ as the estimate for the density closest to $f_0(x)$ by using the utility function $$u(p,\theta)=-\int_X (p(x)-f(x;\theta))^2\,{\rm d} x.$$
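Indeed, writing $\bar{f}(x)=\int_\Theta f(x;\theta)\,\nu({\rm d}\theta)$ and completing the square under $\nu$,
$$\int_\Theta u(p,\theta)\,\nu({\rm d}\theta)\;=\;-\int_X \{p(x)-\bar{f}(x)\}^2\,{\rm d} x\;-\;\int_X {\rm Var}_\nu\{f(x;\theta)\}\,{\rm d} x;$$
the second term does not involve $p$, so the maximizing choice is $p=\bar{f}$.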
Let us move on to more complicated data structures.
\vspace{0.2in} \noindent {\bf C1 Regression model.} We will first consider a standard regression model; so we consider the case when $x_i$ come from the density $f_0(x|z)$ and we use the model $f(\cdot|z_i,\theta)$, where the $(z_i)$ are covariate information and the $(x_i)$ are independent observations. So $I_i=(x_i,z_i)$. We recover the Bayesian approach when we take $l(\theta,(x_i,z_i))=-\log f(x_i|z_i,\theta)$ and as before the usual interpretation of $\theta$ is the parameter which takes us closest to $f_0(x_i|z_i)$. If the $(z_i)$ are independent and identically distributed with probability measure $\mu$ then we can define $\theta_0$ to minimize
$$\int_Z d_{KL} (f(\cdot|z,\theta),f_0(\cdot|z))\mu({\rm d} z).$$
An infinite collection of the $(x_i,z_i)$ will give us $f_0(x|z)$ and so $\theta_0$ is defined asymptotically. An equivalent idea would be to define $\theta_0$ as
the $\theta$ minimizing
$$\lim_{n\rightarrow\infty} n^{-1}\sum_{i=1}^n d_{KL}(f(\cdot|z_i,\theta),f_0(\cdot|z_i)).$$
In both of these cases, the loss function is suitable for learning about this $\theta_0$; in the sense that the asymptotic minimizer of
$$n^{-1}\sum_{i=1}^n l(\theta,(x_i,z_i))=-n^{-1}\log \prod_{i=1}^n f(x_i|z_i,\theta)$$
is, under mild regularity conditions, precisely $\theta_0$.
Alternatively, we could, if the $(z_i)$ are non--stochastic, define $\theta_0$ as minimizing
$$\sup_{z\in Z}d_{KL}(f(\cdot|z,\theta),f_0(\cdot|z)).$$
In this case it would be necessary to construct a loss function $l(\theta, (x,z))$ which asymptotically yielded this $\theta_0$.
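As a numerical illustration of such a target (an assumed toy setup, not from the text): with $z\sim U(0,1)$, $x=z^2+\epsilon$ and the misspecified working model $x|z\sim N(\theta z,1)$, the log-loss target is $\theta_0={\rm E}[z^3]/{\rm E}[z^2]=3/4$, and the empirical log-loss minimizer recovers it:

```python
import random

random.seed(1)
n = 200_000
num = den = 0.0
for _ in range(n):
    z = random.random()                 # z ~ Uniform(0, 1)
    x = z * z + random.gauss(0.0, 0.3)  # truth: E[x|z] = z^2, not linear in z
    num += z * x
    den += z * z
theta_hat = num / den                   # minimizes the empirical log-loss
assert abs(theta_hat - 0.75) < 0.02     # theta_0 = E[z^3]/E[z^2] = 3/4
```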
\vspace{0.2in} \noindent {\bf C2 Hierarchical model.} A random effects hierarchical model will be similar to the above described regression model; yet here we would have the $f(x|z,\theta)$ in a particular form given by
$$f(x_i|z_i,\theta)=\int_B f(x_i|z_i,\beta_i,\theta)\,f(\beta_i|z_i,\theta)\,{\rm d}\beta_i.$$
We retain $I_i=(x_i,z_i)$.
One determines here that there is a $\theta$ to be learnt about which involves an unobserved set of $(\beta_i)$.
We can define $\theta_0$ as in the regression case; that is the $\theta$ which minimizes
$$\lim_{n\rightarrow\infty} n^{-1}\sum_{i=1}^n d_{KL}(f(\cdot|z_i,\theta),f_0(\cdot|z_i)).$$
In this case it would be quite challenging to find an alternative to the
$l(\theta,(x_i,z_i))=-\log f(x_i|z_i,\theta)$ loss function. For inference one can solve for $\nu({\rm d}\theta)$ and then set up the joint measure
$$\nu({\rm d}\theta,{\rm d}\beta_1,\ldots,{\rm d}\beta_n)\propto\left\{\prod_{i=1}^n f(x_i|z_i,\beta_i,\theta)\,f(\beta_i|z_i,\theta)\right\}\pi({\rm d}\theta)\,{\rm d}\beta_1\cdots{\rm d}\beta_n$$
to allow inference via Markov chain Monte Carlo methods, for example.
\vspace{0.2in} \noindent {\bf C3 Time series model.} Now let us consider a time series setting whereby it is deemed that $x_i$ depends on $(x_{i-1},\ldots,x_{i-p})$; that is a $p$--autoregressive model. In this case
$I_i=(x_i,x_{i-1},\ldots,x_{i-p})$ and if we model the observations through $f(x_i|x_{i-1},\ldots,x_{i-p},\theta)$ then the Bayesian update arises by taking
$l(\theta,I_i)=-\log f(x_i|x_{i-1},\ldots,x_{i-p},\theta)$. In this case the target $\theta_0$ will be the parameter minimizing
$$\lim_{n\rightarrow\infty} n^{-1}\sum_{i=1}^n d_{KL}(f(\cdot|x_{i-1},\ldots,x_{i-p},\theta),f_0(\cdot|x_{i-1},\ldots,x_{i-p})).$$
Of course this assumes that the order is known to be $p$ and typically this will be unknown. Writing the true model as $f_0(x_i|x_{i-1},\ldots,x_1)$ we can construct a general model
as
$$f(x_i|x_{i-1},\ldots,x_1,\theta)=\sum_{p=1}^\infty w_p\,f_p(x_i|x_{i-1},\ldots,x_{i-p},\theta_p)$$
where $w_p$ is the probability that the correct order is $p$ and $\theta=(\theta_1,\theta_2,\ldots)$ so $\pi(\theta)=\prod_{p=1}^\infty\pi_p(\theta_p)$ and $\pi_p(\theta_p)$ represents the beliefs about which $\theta_p$ takes $f_p(\cdot|x_{i-1},\ldots,x_{i-p},\theta_p)$ closest to $f_0(\cdot|x_{i-1},\ldots,x_{i-p})$, conditional on the truth of $p$ being the correct order.
The Bayesian update now arises by taking $l(\theta,(x_i,x_{i-1},\ldots,x_1))=-\log f(x_i|x_{i-1},\ldots,x_1,\theta)$.
\vspace{0.2in} \noindent {\bf C4 Grouped data model.} Here we consider the case when we have repeated observations on independent units; so $I_i=(x_{i1},\ldots,x_{in_i},z_i)$, where the $z_i$ are unit specific covariates. If it is assumed that the $(x_{ij})_j$ are conditionally independent given an unobserved parameter $\beta_i$ then we would have a model of the type
$$f(x_i|z_i,\theta)=\int \prod_{j=1}^{n_i}f(x_{ij}|z_i,\beta_i,\theta)\,f(\beta_i|z_i,\theta)\,{\rm d}\beta_i$$
and we recover the Bayesian update when we take $$l(\theta,(x_i,z_i))=-\log f(x_i|z_i,\theta)$$ and the interpretation for the prior $\pi(\theta)$ is again to do with beliefs about where the $\theta$ taking this model closest to the true model is to be located.
\vspace{0.2in} \noindent This section shows that it is possible to undertake Bayesian inference with models in the M--open view by taking the logarithmic loss functions associated with these models. The interpretation of $\theta$ is different however. We construct prior distributions and learn about the best parameter $\theta_0$ which takes us closest to the true model.
It is assumed that, in the limit, the data reveal the true model completely and therefore give access to $\theta_0$.
\newpage
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BF_v_index}}
\caption{Log Bayes Factor (Laplace) vs marker index along chromosome}\label{BF}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{MC_v_Laplace_500}}
\caption{Log Bayes Factor using 500 Monte Carlo samples vs Laplace approximation: at 500 random markers }\label{LP}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BF_v_pval_bw}}
\caption{Log Bayes Factor vs $-\log_{10}$ p-value of association}\label{BFvP}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BF_v_pval_low_col}}
\caption{Log Bayes Factor vs $-\log_{10}$ p-value of association coloured by standard error in MLE }\label{pvalcol}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{SE_v_BF}}
\caption{Standard Error in MLE vs log Bayes Factor }\label{SEvBF}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BF_v_index_hit_region}}
\caption{Log Bayes Factor vs marker index in the ``hit region'' }\label{BF_hit}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{Post_ssvs_region}}
\caption{Posterior marginal inclusion probability from multiple marker model }\label{Prob_post}
\end{figure}
\newpage
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BP_carbig}}
\caption{Boxplot of cars MPG data; taken from the MATLAB boxplot.m help file illustration. }\label{BP}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{BayesBP_carbig_uil}}
\caption{General Bayesian Boxplot of cars MPG data using Unit Information Loss }\label{BBP}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{France_carbig_uil}}
\caption{Posterior samples for quartiles of French cars MPG data using Unit Information Loss}\label{Fr}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.8]{USA_carbig_uil}}
\caption{Posterior samples for quartiles of USA cars MPG data using Unit Information Loss }\label{USA}
\end{figure}
\end{document}
One interpretation, or setting, in which we can discuss our ideas is that of a decision maker, as discussed in Section 2. The decision maker works with a parameter $\theta$ which, if known, would allow them to proceed happily by taking the action $a$ which maximizes $u(a,\theta)$. So there is a true $\theta_0$, by which we mean that there is information in the world which, if gathered in totality, would yield $\theta_0$, solving the problem. The decision maker would then take the best action $a_0$ which maximizes $u(a,\theta_0)$. For example, suppose the decision maker has adopted a probabilistic model $f(x;\theta)$ as a proxy for $f_0(x)$ in order to aid a decision. Here $\theta_0$ could be the maximum likelihood estimate for $\theta$ in the limit of infinite data, such that should this value ever be revealed to the decision maker then the problem is solved. More generally, suppose the observations are independent and identically distributed from some density $f_0(x)$ and the model of the decision maker is $f(x;\theta)$; then, in the M--open case, it is not clear which of the values that $\theta$ can take is the one being targeted to be learnt about. We will assume for now, and will delve into this in more detail later, that a well motivated choice of parameter value to learn about is the $\theta_0$ which takes the model $f(x;\theta)$ closest to the true $f_0(x)$ with respect to the Kullback--Leibler divergence. Hence, $\pi(\theta)$ expresses beliefs about the location of $\theta_0$, the parameter which yields optimal decision making. Formally this would involve interest in the $\theta$ value which minimizes
$$-\int \log f(x;\theta)\,{\rm d} F_0(x).$$
To illustrate, consider the following toy example whereby $f(x;\theta)$ is assumed to be normal with mean $\theta$ and known variance $\sigma^2$; this model is thought to be a good approximation to $f_0(x)$ but is also known to be incorrect. The Kullback--Leibler divergence between $f_0(x)$ and this normal distribution is given by
$$K+\hbox{$1\over2$}\sigma^{-2}\int (x-\theta)^2 f_0(x)\,{\rm d} x,$$
for some constant $K$ not depending on $\theta$. So $\theta_0$ is the mean of $f_0(x)$ and hence assigning a prior distribution $\pi(\theta)$ is equivalent to assigning a belief probability about the mean of $x$. So, in this M--open case, the issue of assigning a prior distribution is not problematic at all. And of course note that the limit of the sample means will be $\theta_0$. It is important to note that the Bayesian has no formal way of proceeding here as $x$ is not from $f(x;\theta)$.
\section{Introduction}
Many recent advances in deep neural networks have led to significant improvement in the quality of abstractive summarization \cite{radford2019language, gehrmann-etal-2019-generating, lewis2019bart}.
Despite this progress,
there are still many limitations facing neural text summarization \cite{kryscinski-etal-2019-neural},
the most serious of which is their tendency to generate summaries that are not factually consistent with the input document;
a factually consistent summary only contains statements that can be derived from the source document. Recent studies show that about 30\% of the summaries generated by neural network sequence-to-sequence models suffer from fact fabrication \cite{AAAI1816121}. Unfortunately, the widely used ROUGE score is inadequate to quantify factual consistency \cite{kryscinski-etal-2019-neural}.
Factual inconsistency can occur at either the entity or the relation level.
At the entity level, a model generated summary may contain named-entities that never appeared in the source document. We call this the entity \emph{hallucination} problem. For example, consider the following model generated summary:
\begin{quotation}
\emph{People in Italy and the Netherlands are more likely to consume fewer cups of coffee than those in the \underline{UK}, a study suggests.}
\end{quotation}
``UK'' never appeared in the input source document (taken from the test set of the XSUM dataset \cite{narayan-etal-2018-dont}). In fact, the source document mentioned a study involving people in Italy and the Netherlands; ``UK'' was a result of model hallucination.
Another type of inconsistency occurs when the entities indeed exist in the source document but the relations between them are not in the source document. This type of inconsistency is much harder to identify.
Open Information Extraction (OpenIE) and dependency parsing tools have been used \cite{AAAI1816121} to identify the underlying relations in a summary, but
are not yet accurate enough for practical use.
Ultimately, these researchers
relied on manually classifying generated summaries into \emph{faithful}, \emph{fake}, or \emph{unclear}.
In this paper, we propose a set of simple metrics to quantify factual consistency at the entity-level.
We analyze the factual quality of summaries produced by the state-of-the-art BART model \cite{lewis2019bart} on three news datasets. We then propose several techniques including data filtering, multi-task learning and joint sequence generation to improve performance on these metrics. We leave the relation level consistency to future work.
\section{Related work}
Large transformer-based neural architectures combined with pre-training have set new records across many natural language processing tasks \cite{NIPS2017_7181, devlin-etal-2019-bert, radford2019language}. In particular, the BART model \cite{lewis2019bart} has shown superior performance in many text generation tasks including abstractive summarization. In contrast to encoder-only pre-training such as in BERT \cite{devlin-etal-2019-bert} or decoder-only pre-training such as in GPT-2 \cite{radford2019language},
BART is an encoder-decoder transformer-based neural translation model jointly pre-trained to reconstruct corrupted input sequences of text.
Several authors have pointed out the problem of factual inconsistency in abstractive summarization models \cite{kryscinski-etal-2019-neural, kryciski2019evaluating, AAAI1816121, welleck-etal-2019-dialogue}.
The authors in \cite{kryciski2019evaluating} proposed to train a neural network model to classify if a summary is factually consistent with a given source document, similar to a natural language inference task. In the dialogue generation setting, the authors in \cite{DBLP:journals/corr/abs-1911-03860} proposed using unlikelihood training to suppress logically inconsistent responses. Our work is complementary to such existing approaches as we focus on simple entity-level metrics to quantify and improve factual consistency. Our goal of improving entity-level metrics of summaries is also related to controllable abstractive summarization \cite{fan-etal-2018-controllable}, where a list of named-entities that a user wants to see in the summary can be passed as input to influence the generated summary. In contrast, our goal is to \emph{predict} which entities are summary-worthy while generating the summary that contains them. In this view we are trying to solve a more challenging problem.
\section{Entity-level factual consistency metrics} \label{sec:entity}
We propose three new metrics that rely on off-the-shelf tools to perform Named-Entity Recognition (NER). \footnote{We use Spacy \cite{spacy2}.}
We use $\mathcal{N}(t)$ and $\mathcal{N}(h)$ to denote the number of named-entities in the target (gold summary) and hypothesis (generated summary), respectively. We use $\mathcal{N}(h\cap s)$ to denote the number of entities found in the generated summary that can find a match in the source document. If a named-entity in the summary consists of multiple words, we consider it a match as long as any n-gram of the named-entity can be found in the source document. This is meant to capture the situation where the named-entity can be shortened; for example, ``Obama'' is a match for ``Barack Obama'' and ``Harvard'' is a match for ``Harvard University''. When the match is at the unigram level, we make sure that it is not a stop word such as ``the''. We also make the match case-insensitive to accommodate casing variances.
\paragraph{Precision-source:}
We propose precision-source ($\mathbf{prec}_s$) to quantify the degree of hallucination with respect to the source: $\mathbf{prec}_s = \mathcal{N}(h\cap s) / \mathcal{N}(h).$
It is simply the percentage of named-entities in the summary that can be found in the source. Low $\mathbf{prec}_s$ means hallucination is severe.
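A minimal sketch of this matching rule and of $\mathbf{prec}_s$ (entities are assumed to be extracted upstream by an NER tagger; the stop list and the treatment of an entity-free summary are illustrative choices, and a real implementation would match on token boundaries rather than raw substrings):

```python
STOP = {"the", "a", "an", "of", "in", "on", "and"}

def matches_source(entity, source):
    src = source.lower()
    words = entity.lower().split()
    for n in range(len(words), 0, -1):      # any n-gram of the entity counts
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i:i + n])
            if n == 1 and gram in STOP:     # skip stop-word unigram matches
                continue
            if gram in src:
                return True
    return False

def prec_source(hyp_entities, source):
    if not hyp_entities:
        return 1.0   # convention here: no entities, no hallucination
    return sum(matches_source(e, source) for e in hyp_entities) / len(hyp_entities)

src = "Barack Obama studied at Harvard University."
assert matches_source("Obama", src)        # shortened form still matches
assert not matches_source("UK", src)       # a hallucinated entity
assert prec_source(["Obama", "UK"], src) == 0.5
```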
We first evaluate the $\mathbf{prec}_s$ score on the ground truth summaries of the 3 datasets: Newsroom \cite{newsroom}, CNN/DailyMail \cite{nallapati-etal-2016-abstractive} and XSUM \cite{narayan-etal-2018-dont}.
\begin{table}[h]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{Newsroom} & \multicolumn{3}{c|}{CNNDM} & \multicolumn{3}{c|}{XSUM} \\ \hline
& \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} \\ \hline
avg. $\mathcal{N}(t)$ & 2.08 & 2.10 & 2.09 & 4.36 & 5.09 & 4.87 & 2.08 & 2.06 & 2.08 \\ \hline
avg. $\mathcal{N}(t \cap s)$ & 1.88 & 1.90 & 1.90 & 4.21 & 4.92 & 4.70 & 1.64 & 1.64 & 1.64 \\ \hline
$\mathbf{prec}_s$ (\%) & 90.6 & 90.6 & 90.5 & 96.5 & 96.7 & 96.6 & 79.0 & 79.5 & 79.3 \\ \hline
\end{tabular}
}
\vspace{-1mm}
\caption{Average number of named-entities and the $\mathbf{prec}_s$ scores (\%) in the ground truth summary.}
\label{table:groundTruthSummaryStats}
\end{table}
Table \ref{table:groundTruthSummaryStats} shows that among the three datasets, the ground truth summaries in XSUM have the lowest $\mathbf{prec}_s$ score.
This is because the ground truth summaries in the XSUM dataset often use the first sentence of the article as the summary;
the source document is constructed to be the rest of the article and may not repeat the named-entities that appeared in the summary. We hypothesize that the hallucination problem is largely caused by the training data itself. Thus, we propose to perform entity-based data filtering to construct a ``clean'' version of these datasets as described next.
\paragraph{Entity-based data filtering:}
For each dataset, we apply Spacy NER on the gold summary to identify all the named-entities. \footnote{We ignore certain types of entities such as date, time, numerals because they tend to have large variations in representation and are difficult to determine a match in the source document. The appendix contains more details.}
If any of the entities cannot find a match in the source document, we discard the sentence that contains the entity from the ground truth summary. If the ground truth summary consists of only one sentence and it needs to be discarded, we remove the document-summary pair from the dataset.
This way, we ensure that our filtered dataset does not contain hallucination of entities ($\mathbf{prec}_s =1$) in the ground truth summary. The dataset size before and after the filtering is shown in Table \ref{table:datasetStats}. About a third of examples are filtered out for XSUM. Again, this is because of the way XSUM dataset is constructed as mentioned in the previous paragraph. As we shall see in Table \ref{table:dataFilterResult}, entity-based data filtering reduces hallucination of the trained model and the effect is especially significant in the XSUM dataset.
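The filtering step itself can be sketched as follows (per-sentence entity extraction and the entity-source matcher are assumed to be supplied; the one-line matcher below is a simplification of the n-gram rule described earlier):

```python
def filter_example(summary_sents, ents, source, match):
    # keep only sentences whose entities all match the source
    kept = [s for s in summary_sents
            if all(match(e, source) for e in ents[s])]
    return kept or None          # None: drop the document-summary pair

src = "Barack Obama studied at Harvard University."
sents = ["Obama went to Harvard.", "He later moved to the UK."]
ents = {sents[0]: ["Obama", "Harvard"], sents[1]: ["UK"]}
match = lambda e, s: e.lower() in s.lower()   # simplified matcher

assert filter_example(sents, ents, src, match) == [sents[0]]
assert filter_example([sents[1]], {sents[1]: ["UK"]}, src, match) is None
```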
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{Newsroom} & \multicolumn{3}{c|}{CNNDM} & \multicolumn{3}{c|}{XSUM} \\ \cline{2-10}
& \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{val} & \multicolumn{1}{c|}{test} \\ \hline
\thead{original} & \thead{922,500 (1.58)} & \thead{100,968 (1.60)} & \thead{100,933 (1.59)} & \thead{287,112 (3.90)} & \thead{13,368 (4.13)} & \thead{11,490 (3.92)} & \thead{203,540 (1.0)} & \thead{11,301 (1.0)} & \thead{11,299 (1.0)} \\ \hline
\thead{after filtering} & \thead{855,975 (1.62)} & \thead{93,678 (1.64)} & \thead{93,486 (1.64)} & \thead{286,791 (3.77)} & \thead{13,350 (3.99)} & \thead{11,483 (3.77)} & \thead{135,155 (1.0)} & \thead{7,639 (1.0)} & \thead{7,574 (1.0)} \\ \hline
\end{tabular}
}
\vspace{-1mm}
\caption{Number of examples in three datasets together with the average number of sentences in the ground truth summary (in parentheses) before and after entity-based filtering.}
\label{table:datasetStats}
\end{table*}
\paragraph{Precision-target and recall-target:} Although the precision-source ($\mathbf{prec}_s$) metric quantifies the degree of entity hallucination with respect to the source document, it does not capture the entity-level accuracy of the generated summary with respect to the ground truth summary. To get a complete picture of the entity-level accuracy of the generated summary, we propose the precision-target ($\mathbf{prec}_t$) score: $\mathbf{prec}_t = \mathcal{N}(h\cap t) / \mathcal{N}(h),$
where $\mathcal{N}(h\cap t)$ is the number of named-entities in the generated summary that can find a match in the ground truth summary; and the recall-target ($\mathbf{recall}_t$) score: $\mathbf{recall}_t = \mathcal{N}(h\cap t) / \mathcal{N}(t),$
where $\mathcal{N}(t)$ is the number of named-entities in the ground truth summary. We compute the F1 score as $F1_t=2 \cdot \mathbf{prec}_t \cdot \mathbf{recall}_t / (\mathbf{prec}_t + \mathbf{recall}_t)$.
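The target-side scores, together with $\mathbf{prec}_s$, can be computed per example as below. Exact string matching between entities and the conventions for entity-free edge cases are simplifying assumptions, and the macro/micro averaging over the corpus reported in the tables is omitted.

```python
def entity_scores(hyp_ents, tgt_ents, src_ents):
    """Entity-level scores for one example, following the definitions above.

    N(h & t) counts hypothesis entities matching a target entity; matching
    here is exact string equality, a simplification of the paper's rules."""
    h, t = set(hyp_ents), set(tgt_ents)
    s = set(src_ents)
    prec_s = len(h & s) / len(h) if h else 1.0  # no entities -> no hallucination
    prec_t = len(h & t) / len(h) if h else 0.0
    recall_t = len(h & t) / len(t) if t else 0.0
    f1_t = (2 * prec_t * recall_t / (prec_t + recall_t)
            if prec_t + recall_t else 0.0)
    return prec_s, prec_t, recall_t, f1_t


scores = entity_scores(hyp_ents=["Paris", "Bob"],
                       tgt_ents=["Paris", "Alice"],
                       src_ents=["Paris", "Alice", "Bob"])
print(scores)  # -> (1.0, 0.5, 0.5, 0.5)
```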
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
& \thead{training \\ data} & \thead{Rouge1} & \thead{Rouge2} & \thead{RougeL} & \thead{macro \\ $\mathbf{prec}_s$} & \thead{micro \\ $\mathbf{prec}_s$} & \thead{macro \\ $\mathbf{prec}_t$} & \thead{micro \\ $\mathbf{prec}_t$} & \thead{macro \\ $\mathbf{recall}_t$} & \thead{micro \\ $\mathbf{recall}_t$} & \thead{macro \\ $F1_t$} & \thead{micro \\ $F1_t$} \\ \hline
\multirow{4}{*}{Newsroom} & original & 47.7{\scriptsize $\pm$0.2} & 35.0{\scriptsize $\pm$0.3} & 44.1{\scriptsize $\pm$0.2} & 97.2{\scriptsize $\pm$0.1} & 97.0{\scriptsize $\pm$0.1} & 65.4{\scriptsize $\pm$0.3} & 62.9{\scriptsize $\pm$0.4} & 70.8{\scriptsize $\pm$0.3} & 68.5 {\scriptsize $\pm$0.2} & 68.0{\scriptsize $\pm$0.2} & 65.6{\scriptsize $\pm$0.3} \\
& + filtering & 47.7{\scriptsize $\pm$0.1} & 35.1{\scriptsize $\pm$0.1} &44.1 {\scriptsize $\pm$0.1} & 98.1{\scriptsize $\pm$0.1} & 98.0{\scriptsize $\pm$0.0} & 66.5{\scriptsize $\pm$0.1} & 63.8{\scriptsize $\pm$0.1} & 70.2 {\scriptsize $\pm$0.2} & 67.7{\scriptsize $\pm$0.3} & 68.3{\scriptsize $\pm$0.1} & 65.7{\scriptsize $\pm$0.1} \\
& + classification & 47.7{\scriptsize $\pm$0.2} & 35.1{\scriptsize $\pm$0.1} & 44.2{\scriptsize $\pm$0.2} & 98.1{\scriptsize $\pm$0.1} & 98.0{\scriptsize $\pm$0.0} & 67.2{\scriptsize $\pm$0.4} & 64.2{\scriptsize $\pm$0.4} & 70.3{\scriptsize $\pm$0.2} & 67.8{\scriptsize $\pm$0.4} & 68.7{\scriptsize $\pm$0.3} & 65.9{\scriptsize $\pm$0.4} \\
& ~~ JAENS & 46.6 {\scriptsize $\pm$0.5} & 34.3{\scriptsize $\pm$0.3} & 43.2{\scriptsize $\pm$0.3} & {\bf 98.3}{\scriptsize $\pm$0.1} & {\bf 98.3}{\scriptsize $\pm$0.1} & {\bf 69.5}{\scriptsize $\pm$1.6} & {\bf 67.3}{\scriptsize $\pm$1.2} & 68.9{\scriptsize $\pm$1.5} & 66.8{\scriptsize $\pm$1.6} & {\bf 69.2}{\scriptsize $\pm$0.1} & {\bf 67.0}{\scriptsize $\pm$0.2} \\ \hline
\multirow{4}{*}{CNNDM} & original & 43.7{\scriptsize $\pm$0.1}& {\bf 21.1}{\scriptsize $\pm$0.1} & 40.6{\scriptsize $\pm$0.1} & 99.5{\scriptsize $\pm$0.1} & 99.4{\scriptsize $\pm$0.1} & 66.0{\scriptsize $\pm$0.4} & 66.5{\scriptsize $\pm$0.4} & 74.7{\scriptsize $\pm$0.7} & 75.4{\scriptsize $\pm$0.6} & 70.0{\scriptsize $\pm$0.2} & 70.7{\scriptsize $\pm$0.3} \\
& + filtering & 43.4{\scriptsize $\pm$0.2} & 20.8{\scriptsize $\pm$0.1} & 40.3{\scriptsize $\pm$0.2} & {\bf 99.9}{\scriptsize $\pm$0.0} &{\bf 99.9}{\scriptsize $\pm$0.0} & 66.2 {\scriptsize $\pm$0.4} & 66.6{\scriptsize $\pm$0.3} & 74.1{\scriptsize $\pm$0.6} & 74.9{\scriptsize $\pm$0.6} & 69.9{\scriptsize $\pm$0.2} & 70.5{\scriptsize $\pm$0.2} \\
& + classification &43.5{\scriptsize $\pm$0.2} & 20.8{\scriptsize $\pm$0.2} & 40.4{\scriptsize $\pm$0.2} & {\bf 99.9}{\scriptsize $\pm$0.0} & {\bf 99.9}{\scriptsize $\pm$0.0} & {\bf 67.0}{\scriptsize $\pm$0.6} & {\bf 67.5}{\scriptsize $\pm$0.5} & 74.7{\scriptsize $\pm$0.2} & 75.5{\scriptsize $\pm$0.1} & 70.6{\scriptsize $\pm$0.3} & 71.3{\scriptsize $\pm$0.3} \\
& ~~ JAENS &42.4 {\scriptsize $\pm$0.6} & 20.2{\scriptsize $\pm$0.2} & 39.5{\scriptsize $\pm$0.5} & {\bf 99.9}{\scriptsize $\pm$0.0} & {\bf 99.9}{\scriptsize $\pm$0.0} & {\bf 67.9}{\scriptsize $\pm$0.7} & {\bf 68.4}{\scriptsize $\pm$0.6} & {\bf 75.1}{\scriptsize $\pm$0.7} & {\bf 76.4}{\scriptsize $\pm$0.7} & {\bf 71.3}{\scriptsize $\pm$0.2} & {\bf 72.2}{\scriptsize $\pm$0.2} \\ \hline
\multirow{4}{*}{XSUM} & original & {\bf 45.6}{\scriptsize $\pm$0.1} & {\bf 22.5}{\scriptsize $\pm$0.1} & {\bf 37.2}{\scriptsize $\pm$0.1} & 93.9{\scriptsize $\pm$0.1} & 93.6{\scriptsize $\pm$0.2} & 74.1{\scriptsize $\pm$0.2} & 73.3{\scriptsize $\pm$0.2} & 80.1{\scriptsize $\pm$0.1} & 80.3{\scriptsize $\pm$0.3} & 77.0{\scriptsize $\pm$0.1} & 76.6{\scriptsize $\pm$0.2} \\
& + filtering & 45.4{\scriptsize $\pm$0.1} & 22.2{\scriptsize $\pm$0.1} & 36.9{\scriptsize $\pm$0.1} & 98.2{\scriptsize $\pm$0.0} & 98.2{\scriptsize $\pm$0.1} & 77.9{\scriptsize $\pm$0.2} & 77.3{\scriptsize $\pm$0.2} & 79.4{\scriptsize $\pm$0.2} & 79.6{\scriptsize $\pm$0.2} & 78.6{\scriptsize $\pm$0.1} & 78.4{\scriptsize $\pm$0.2} \\
& + classification & 45.3{\scriptsize $\pm$0.1} & 22.1{\scriptsize $\pm$0.0} & 36.9{\scriptsize $\pm$0.1} & 98.3{\scriptsize $\pm$0.1} & 98.2{\scriptsize $\pm$0.1} & 78.6{\scriptsize $\pm$0.3} & {\bf 78.0}{\scriptsize $\pm$0.3} & 79.5{\scriptsize $\pm$0.3} & 79.8{\scriptsize $\pm$0.4} & {\bf 79.1}{\scriptsize $\pm$0.1} & {\bf 78.9}{\scriptsize $\pm$0.1} \\
& ~~ JAENS &43.4{\scriptsize $\pm$0.7} & 21.0{\scriptsize $\pm$0.3} & 35.5 {\scriptsize $\pm$0.4}& {\bf 99.0}{\scriptsize $\pm$0.1} & {\bf 99.0}{\scriptsize $\pm$0.1} & 77.6{\scriptsize $\pm$0.9} & 77.1{\scriptsize $\pm$0.6} & 79.5{\scriptsize $\pm$0.6} & 80.0{\scriptsize $\pm$0.5} & 78.5{\scriptsize $\pm$0.2} & 78.5{\scriptsize $\pm$0.1} \\ \hline
\end{tabular}%
}
\vspace{-1mm}
\caption{Comparison of models trained using original data, with entity-based data filtering, with an additional classification task and with JAENS. Scores are all in percentages, averaged over 5 runs and shown with standard deviations. We bold the numbers that are significantly better in the sense that the means are separated by at least the standard deviations. We report both the micro and macro averages of our proposed entity-level scores. In all datasets, data filtering leads to higher $\mathbf{prec}_s$ scores, indicating that entity hallucination can be alleviated by this simple technique. In addition, data filtering generally improves other entity level metrics: $\mathbf{prec}_t$, $\mathbf{recall}_t$ and $F1_t$. Adding the classification task (multi-task) or JAENS to data filtering further improves the performance on $\mathbf{prec}_t$ and $\mathbf{recall}_t$ and therefore the overall entity-level $F1_t$.}
\label{table:dataFilterResult}
\vspace{-1mm}
\end{table*}
\section{Multi-task learning}
In addition to entity-based data filtering, we also explore another method to further improve the summarization quality. In particular, we incorporate an additional task of classifying
summary-worthy named-entities in the source document.
A summary-worthy named-entity in the source document is one that appears in the ground truth summary and thus, is a salient entity, worthy of inclusion in the generated summary. Intuitively, if we can identify these summary-worthy named-entities using the encoder representation, we may potentially increase the entity-level precision and recall metrics as well as the overall quality of the summary. We achieve this by adding a classification head to the encoder of BART.
To prepare for the classification label, we first identify the named-entities in the ground truth summary and find the matching tokens in the source document. We then assign the (B)eginning-(I)nside-(O)utside labels to each token of the source document to denote if the token is beginning, inside or outside of a summary-worthy named-entity, respectively.
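A minimal sketch of the label construction. Matching each entity's token sequence at its first occurrence in the source is a simplification of the paper's matching rules, and the `B`/`I`/`O` strings stand in for the integer encoding $\{0,1,2\}$ used below.

```python
def bio_tags(src_tokens, entities):
    """Tag each source token B/I/O w.r.t. summary-worthy entities.

    `entities` are the token sequences of named-entities found in the
    ground-truth summary; the first untagged match of each sequence in
    the source document is labeled."""
    tags = ["O"] * len(src_tokens)
    for ent in entities:
        n = len(ent)
        for i in range(len(src_tokens) - n + 1):
            if src_tokens[i:i + n] == ent and tags[i] == "O":
                tags[i] = "B"
                tags[i + 1:i + n] = ["I"] * (n - 1)
                break
    return tags


toks = "The New York office of Acme opened".split()
print(bio_tags(toks, [["New", "York"], ["Acme"]]))
# -> ['O', 'B', 'I', 'O', 'O', 'B', 'O']
```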
During training, we simply add the classification loss for each token at the encoder to the original sequence-to-sequence loss.
More precisely, let $\{\left(x^i, y^i\right)\}_{i=1}^N$ be a dataset of $N$ examples where $x^i=x^i_1, \dots, x^i_{ts(i)}$ are the tokens of the $i$th source document and $y^i=y^i_1, \dots, y^i_{tt(i)}$ are the tokens of the target (ground truth summary). The standard sequence-to-sequence training minimizes the maximum likelihood estimation (MLE) loss, i.e., the negative log-likelihood:
\begin{equation*}
\mathcal{L}^i_{\text{MLE}} (\theta, x^i, y^i) = - \sum_{t=1}^{tt(i)} \log p_{\theta}(y^i_t | x^i, y^i_{<t}).
\end{equation*}
With summary-worthy entity classification, each example has an additional sequence of BIO labels $z^i=z^i_1, \dots, z^i_{ts(i)}$, where $z^i_t \in \{0,1,2\}$. By adding an additional fully connected layer on top of the BART encoder, we obtain the classification loss
\begin{equation*}
\mathcal{L}^i_{\text{BIO}} (\theta(\text{enc}), x^i, z^i) = - \sum_{t=1}^{ts(i)} \log p_{\theta(\text{enc})} (z^i_t |x^i).
\end{equation*}
Finally, we can minimize the joint loss $ \mathcal{L}^i_{\text{Multitask}} = \mathcal{L}^i_{\text{MLE}} + \alpha \mathcal{L}^i_{\text{BIO}},$
where $\alpha$ is a hyperparameter. We choose $\alpha$ between 0.1 and 0.5 via the validation sets.
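Assuming the per-token probabilities from the decoder and from the encoder classification head are given, the joint objective combines the two losses for one example as follows (a numerical sketch, not the Fairseq implementation; the default $\alpha$ is picked inside the 0.1--0.5 range mentioned above):

```python
import math


def joint_loss(tok_logprobs, bio_probs, bio_labels, alpha=0.3):
    """Per-example multi-task loss L_MLE + alpha * L_BIO.

    tok_logprobs: log p(y_t | x, y_<t) for each target token (decoder);
    bio_probs:    per-source-token distributions over the 3 BIO classes
                  (encoder classification head); both are assumed given."""
    l_mle = -sum(tok_logprobs)
    l_bio = -sum(math.log(p[z]) for p, z in zip(bio_probs, bio_labels))
    return l_mle + alpha * l_bio


loss = joint_loss(tok_logprobs=[math.log(0.5), math.log(0.25)],
                  bio_probs=[[0.8, 0.1, 0.1], [0.1, 0.7, 0.2]],
                  bio_labels=[0, 1],
                  alpha=0.5)
print(round(loss, 3))  # -> 2.369
```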
\section{Joint entity and summary generation}
We also explore another generative approach to promote entity-level precision and recall metrics. In particular, instead of just generating the summary, we train the BART model to generate the sequence of summary-worthy named-entities, followed by a special token, and then the summary. We call this approach JAENS (Join sAlient ENtity and Summary generation). Similar to the multi-task learning approach discussed earlier, JAENS encourages the model to jointly learn to identify the summary-worthy named-entities while learning to generate summaries. Since the decoder generates the salient named-entities first, the summaries that JAENS generates can further attend to these salient named-entities through decoder self-attention.
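A sketch of how a JAENS decoder target could be assembled. The separator token name `<ent-sep>` is an illustrative assumption (the paper only specifies ``a special token''); at inference, the text after the separator would be taken as the summary.

```python
def jaens_target(salient_entities, summary, sep="<ent-sep>"):
    """Build the JAENS decoder target: the summary-worthy entities,
    a separator token, then the summary text."""
    return " ".join(salient_entities + [sep, summary])


tgt = jaens_target(["Melbourne City", "Wales"],
                   "She won the Grand Final with Melbourne City.")
print(tgt)
# -> Melbourne City Wales <ent-sep> She won the Grand Final with Melbourne City.
```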
\section{Experiment results}
We use the pre-trained BART-large model in the Fairseq library \cite{ott2019fairseq} to fine-tune on the 3 summarization datasets.\footnote{Our code is available at \rurl{https://github.com/amazon-research/fact-check-summarization}}
The appendix contains additional details of experimental setup.
In Table \ref{table:dataFilterResult}, we
show the effect of the entity-based data filtering. For each dataset, we train two separate models, using the training data before and after entity-based data filtering as shown in Table \ref{table:datasetStats}. We evaluate both models on the ``clean'' test set after entity-based data filtering. We choose this filtered version of the original test set because we only want to measure entity-level consistency against the correct set of entities; using the unfiltered dataset means we could count a hallucinated entity as correct. We observe improvements of $\mathbf{prec}_s$ across all three datasets trained using the filtered subset of data. For example, in XSUM the $\mathbf{prec}_s$ is increased from 93.6\% to 98.2\%, indicating a significant reduction in entity hallucination. In addition, the entity-based data filtering generally improves other entity-level metrics as well. Even with less training data, the entity-based data filtering is able to maintain the ROUGE scores quite well. For XSUM, about 34\% of the training data is filtered out (cf.\ Table \ref{table:datasetStats}), which explains the more noticeable impact on the ROUGE scores.
The results in Table \ref{table:dataFilterResult} suggest that entity-level data filtering is a simple yet effective approach to achieve higher entity-level factual consistency as well as general summarization quality. In Table \ref{tab:dataFilterResultQualitative} we provide qualitative examples where the model trained on the original data produces hallucination and the entity-level data filtering removes such hallucination.
Table \ref{table:dataFilterResult} shows that adding the classification task (multi-task) further increases the $\mathbf{prec}_t$ and $\mathbf{recall}_t$ metrics and therefore the overall entity-level $F1_t$ on top of the improvements from data filtering. Similar gains can be observed with JAENS, which outperforms the multi-task approach on the CNNDM and Newsroom datasets. The result confirms our intuition that the summaries in JAENS can benefit from attending to the generated salient entities in terms of the entity level metrics. However, the additional complexity during decoding may have hurt the ROUGE scores.
For the interested readers, we also evaluated the PEGASUS \cite{pmlr-v119-zhang20ae} models for the ROUGE and entity level metrics on these three datasets in the appendix.
\paragraph{Accuracy of entity level metrics:} As our entity level metrics are based on automatic NER tools and heuristic matching rules, errors in both steps can lead to inaccuracy in the metrics.
By manually checking 10 random ground truth summaries together with the source documents in the validation split of the XSUM dataset, we found that all of the named entities are correctly identified by the NER tool and the matches are correct. Therefore, we believe that our current NER tool and matching rules already produce high accuracy in practice.
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{p{0.25\linewidth}|p{0.25\linewidth}|p{0.25\linewidth}|p{0.25\linewidth}}
\thead{Before data filtering} & \thead{After data filtering} & \thead{With classification} & \thead{Ground truth summary} \\ \hline
People in Italy and the Netherlands are more likely to consume fewer cups of coffee than those in the \underline{UK}, a study suggests.
& The desire to drink coffee may be encoded in our DNA, according to scientists.
& People with a particular gene are more likely to consume fewer cups of coffee, a study has suggested.
& Researchers have identified a gene that appears to curb coffee consumption. \\ \hline
A cathedral in \underline{Surrey} is set to be restored after more than £5m was raised to pay for repairs and improvements.
& A £7m project to save a Grade II-listed cathedral from demolition is set to go ahead.
& A cathedral which has been threatened with demolition is set to be saved by a £5m fundraising campaign.
& A 1960s-built cathedral that was "at serious risk of closure" has raised more than 90\% of its £7m target for urgent repairs and development. \\ \hline
More than 800,000 chemists in the Indian \underline{capital, Delhi}, have gone on strike in protest against online drug sales.
& More than 800,000 chemists in India will go on strike on Wednesday to protest against illegal online drug sales.
& More than 800,000 chemists in India are set to go on strike on Wednesday in a row over the sale of drugs online.
& At least 800,000 pharmacies in India are on a one-day strike, demanding an end to online drug sales which they say is affecting their business. \\ \hline
Police officers in \underline{Pembrokeshire} are to be issued with body-worn cameras. & Police officers in Powys are to be issued with body-worn cameras in a bid to improve transparency in the force.
& Police officers in Powys are to be issued with body cameras in a bid to improve transparency in the force.
& A police force has begun the rollout of body cameras for 800 officers and community support officers. \\ \hline
Wales midfielder \underline{Becky Lawrence} has been speaking to \underline{BBC Sport} about her time as a player-manager with Melbourne City. & It's been a great few weeks for me as a player-manager and now I'm heading home to Wales ahead of the Cyprus Cup.
& It's been a very busy few weeks for me as I'm heading home to Wales ahead of the Cyprus Cup.
& I have certainly had worse 24 hours in my life than winning the Grand Final with Melbourne City and then being named in the Wales squad for the Cyprus Cup.
\end{tabular}%
}
\vspace{1mm}
\caption{Generated and ground truth summary examples from the test set of XSUM. The first three columns are generated from the model trained without entity-based data filtering, with entity-based data filtering and with the additional classification task, respectively. The right column contains the ground truth summaries. The hallucinated named-entities are underlined. The proposed data filtering overcomes hallucination in these examples.}
\label{tab:dataFilterResultQualitative}
\end{table*}
\section{Conclusion}
In this paper we study the entity-level factual consistency of the state-of-the-art summarization model. We propose precision-source score $\mathbf{prec}_s$ to quantify the degree of entity hallucination. We also propose additional metrics $\mathbf{prec}_t$ and $\mathbf{recall}_t$ to measure entity level accuracy of the generated summary with respect to the ground truth summary. We found that the ground truth summaries of the XSUM dataset contain a high level of entity hallucination. We propose a simple entity-level data filtering technique to remove such hallucination in the training data. Experiments show that such data filtering leads to significant improvement in $\mathbf{prec}_s$. ($\mathbf{prec}_s$ increases from below 94\% to above 98\% in XSUM for example.)
We further proposed a multi-task learning approach and a joint sequence generation approach (JAENS) to improve the entity-level metrics.
Overall, combining our proposed approaches significantly reduces entity hallucination and leads to higher entity level metrics with minimal degradation of the ROUGE scores.
\bibliographystyle{acl_natbib}
\section{Introduction}
For many applications, for example artistic rendering and
sculpting, a few subdivision steps provide a pleasing rounding
of the original polyhedral shape.
The simplicity of subdivision with small, local stencils (refinement rules)
is appealing and, in particular Catmull-Clark subdivision
\cite{Catmull-1978-CC} is a staple of geometric modeling
environments for creating computer graphics assets.
However, Catmull-Clark subdivision has also been demonstrated to lead to
shape deficiencies, such as pinching of highlight lines,
that can be traced back to the simple stencil-based rules
\cite{Karciauskas:2004:SCS,Karciauskas:ISR:2016}.
The algorithm of \cite{Akle:2017:IM} proposes an approach to obtaining
`$C^2$ continuous Bi-Cubic B\'ezier patches that are guaranteed to be stitched
with $G^1$ continuity regardless of the underlying mesh topology'.
This approach consists of applying not Catmull-Clark but
\emph{Doo-Sabin subdivision} to an initial polyhedral input mesh.
The approach then derives quadrilateral facets and B\'ezier control
points from the refined mesh and constructs $n$ bi-cubic patches for each
$n$-sided facet.
Beyond demonstrating pleasant rounding,
\cite{Akle:2017:IM} emphasizes that the result is a `smooth surface with
$G^1$ continuity'
\footnote{$G^1$ is typeset as $G_1$ in several places in \cite{Akle:2017:IM}.
}.
If true, this result would be remarkable.
It would contradict the restrictions on bi-cubic $G^1$
spline complexes that \cite[Section 3]{Peters:2009:CSS}
derived and that prompted the special constructions
in \cite{Peters:2015:PSS,Sarov:RPG:2016}.
If \cite{Akle:2017:IM} were correct then these special constructions
published earlier in the same conference series would be superfluous!
Below we show that, while the surfaces generated by the approach of
\cite{Akle:2017:IM} often appear to be smooth, in general they are not.
\noindent{\bf Overview.}
\secref{sec:thm} summarizes the algorithm in \cite{Akle:2017:IM}
and the lower bound result of \cite{Peters:2009:CSS} as it pertains to
bi-cubic $G^1$ constructions.
\secref{sec:counter} provides an explicit, minimal counterexample
to the claim that the approach in \cite{Akle:2017:IM} generates
$G^1$ surfaces.
\secref{sec:alternative} discusses options for constructing
both formally smooth and near-smooth bi-3 constructions.
\section{$G^1$ continuity,
the construction of \cite{Akle:2017:IM} and a theorem}
\label{sec:thm}
The construction of \cite{Akle:2017:IM}
applies two steps of Doo-Sabin subdivision
to an initial polyhedral input mesh ${\cal{M}}$ and then places
the corners of bicubic patches at the Doo-Sabin limit points of
the facets obtained in the initial subdivision
(Fig 5 of \cite{Akle:2017:IM}).
That is, every vertex and every face of ${\cal{M}}$ has a corner
of a bi-3 patch associated with it.
This layout looks more general, and therefore more challenging
than the one in \cite{conf/gmp/HahmannBC08} which assumed
that the input mesh has quadrilateral faces
and used $2\times 2$ bi-cubics to cover them.
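For reference, one Doo-Sabin refinement step produces, for each $n$-gon face, $n$ new points as affine combinations of the face's vertices. The sketch below uses the standard Doo-Sabin weights $w_0=(n+5)/(4n)$ and $w_i=(3+2\cos(2\pi i/n))/(4n)$ for $i\neq 0$; it is background material on the subdivision step, not part of the construction of \cite{Akle:2017:IM} (a full refinement would also build edge and vertex faces).

```python
from math import cos, pi


def doo_sabin_face_points(face):
    """New points for one n-gon face under one Doo-Sabin step.

    face: list of n vertices in R^3, in cyclic order. The new point for
    vertex j is the weighted average of all face vertices with the
    standard Doo-Sabin weights (which sum to 1)."""
    n = len(face)
    w = [(n + 5) / (4 * n)] + [(3 + 2 * cos(2 * pi * i / n)) / (4 * n)
                               for i in range(1, n)]
    return [[sum(w[i] * face[(j + i) % n][c] for i in range(n))
             for c in range(3)] for j in range(n)]


square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
print(doo_sabin_face_points(square))
# For the unit square this shrinks the face toward its centroid.
```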
Denote by $\mathbf{v}$ and $\mathbf{w}$ limit points associated with
adjacent facets of ${\cal{M}}$ (see \figref{fig:tet}).
Since ${\cal{M}}$ is unrestricted,
$\mathbf{v}$ and $\mathbf{w}$ and their tangent planes can be freely adjusted.
This independence is typically desirable for flexibility of modeling.
The construction in \cite{Akle:2017:IM}
is therefore $G^1$ \emph{vertex-localized} in the sense that the
Taylor expansion at $\mathbf{v}$ is not tightly linked to that at $\mathbf{w}$.
Since it does not matter in the construction
whether $\mathbf{v}$ or $\mathbf{w}$ is listed first,
the construction along the common boundary
is also \emph{unbiased}, which is likewise typically desirable.
The \emph{unbiased $G^1$ constraints} between two patches
$\mathbf{p}, \mathbf{q}: (u,v) \to {\mathbb{R}}^3$ along $\mathbf{p}(u,0) = \mathbf{q}(u,0)$ are
\begin{equation}
\partial_2\mathbf{p}(u,0) +\partial_2\mathbf{q}(u,0)
=
\alpha(u) \partial_1 \mathbf{p}(u,0). \label{eq:g1}
\end{equation}
When, at the split point $\mathbf{m}$ of
the edge $\mathbf{v},\mathbf{w}$, the four bicubic patches join $C^1$
(see e.g.\ Fig 8 of \cite{Akle:2017:IM})
the following theorem applies.
\begin{theorem} [\cite{Peters:2009:CSS}: two double edge knots needed]
In general, using splines of degree bi-3 for
a vertex-localized\ unbiased $G^1$ construction
without forced linear boundary segments,
the splines must have at least two internal double knots.
\label{thm:twodblknt}
\end{theorem}
In other words, \thmref{thm:twodblknt} states that to satisfy $G^1$ constraints
along $\mathbf{v}$ and $\mathbf{w}$
(and not have straight line segments embedded in the surface),
three rather than the constructed two polynomial
boundary segments are needed to connect $\mathbf{v}$ and $\mathbf{w}$.
One might hope that the initialization via Doo-Sabin, or
adding or leaving out some adjustment, side-steps the
assumptions of \thmref{thm:twodblknt}.
The next section therefore
looks more closely at the construction of \cite{Akle:2017:IM}.
In the following, the bicubic tensor-product polynomial surface patches $\mathbf{p}$, $\mathbf{q}$
of bi-degree $3$ are expressed in Bernstein-B\'ezier (BB) form,
e.g.\
\[
\mathbf{p}(u,v):=\sum_{i=0}^3\sum_{j=0}^3\mathbf{p}_{ij}B^3_i(u)B^3_j(v),
\quad (u,v) \in \square:=[0..1]^2,
\]
where $B^3_k(t) := \binom{3}{k}(1-t)^{3-k}t^k$ is
the $k$th Bernstein-B\'ezier (BB) polynomial of degree $3$
and $\mathbf{p}_{ij} \in {\mathbb{R}}^3$ are the BB coefficients \cite{Farin02,Prautzsch02}.
\section{A Counterexample: an input mesh where \cite{Akle:2017:IM}
does not yield a $G^1$ output}
\label{sec:counter}
\def0.6\linewidth{0.6\linewidth}
\begin{figure}[h]
\centering
\begin{overpic}[scale=.5,tics=10]{snapshot05color.png}
\put(42,38){$\mathbf{m}$}
\put(65,40){$\mathbf{p}$}
\put(45,25){$\mathbf{q}$}
\put(65,24){$\mathbf{v}$}
\put(90,14){$B$}
\put(5,5){$C$}
\put(45,74){$A$}
\end{overpic}
\caption{
Counterexample: the input is a regular tetrahedron, only one of whose
faces is shown with a wood texture.
The grey quad-mesh is the result of applying three steps of
Doo-Sabin subdivision. The subnet of 12 Bernstein-B\'ezier
control points of interest is sketched on the
refined mesh: from the 4-valent point $\mathbf{m}$ to the 3-valent point
$\mathbf{v}$, these are the BB-coefficients of \eqref{eq:g1coef} that influence the
$G^1$ continuity between the two bi-3 patches $\mathbf{p}$ and $\mathbf{q}$.
}
\label{fig:tet}
\end{figure}
Since the algorithm of \cite{Akle:2017:IM} initially applies multiple
steps of Doo-Sabin subdivision, the challenge of finding a simple
explicit counterexample seems formidable.
Yet, the simplest example, ${\cal{M}}$ a regular tetrahedron with vertices
\begin{equation}
A :=
\left[ \begin{smallmatrix}
-1 \\ -1\\ -1
\end{smallmatrix} \right],
\quad
B:=
\left[ \begin{smallmatrix}
-1 \\ 1\\ 1
\end{smallmatrix} \right],
\quad
C:=
\left[ \begin{smallmatrix}
1 \\ -1\\ 1
\end{smallmatrix} \right],
\quad
D:=
\left[ \begin{smallmatrix}
1 \\ 1\\ -1
\end{smallmatrix} \right],
\end{equation}
suffices to show that the construction of \cite{Akle:2017:IM}
as stated cannot in general generate $G^1$ surfaces.
Let $\mathbf{m}$ be the point where the curves connecting
the limit points associated with $C$ and $D$ meet
the curves connecting $\mathbf{v}$, the center of the face $B,C,D$, to
the center of $A,C,D$ (see \figref{fig:tet}).
We consider $G^1$ continuity along the edge from $\mathbf{v}$ to $\mathbf{m}$.
To compute with integers throughout, we scale ${\cal{M}}$ by $2^2\cdot 3^2\cdot 5\cdot 7$.
Following the algorithm of \cite{Akle:2017:IM} up to the claim
`Our calculation of the control points guarantees $G_1$ continuity',
the mesh points and BB-coefficients can then be computed as integers.
Three rows of BB-coefficients determine the $G^1$ continuity
constraints \eqref{eq:g1} between the resulting two adjacent bi-3 patches
$\mathbf{p}$ and $\mathbf{q}$.
We focus on the BB-coefficients $\mathbf{p}_{ij}$ for $i=0,1,2,3$ and $j=0,1$
using $\sim$ to indicate proportionality after scaling the
coefficients to the right of $\sim$ to the smallest integer values:
\input{comp.tex}
Taking the dot-product of \eqref{eq:g1} with
$\partial_1 \mathbf{p}(u,0) \times \partial_2 \mathbf{q}(u,0)$ implies
that \\
$|\partial_2 \mathbf{p}(u,0), \partial_1 \mathbf{p}(u,0), \partial_2 \mathbf{q}(u,0) |=0$.
However, in the counterexample the determinant is non-zero and hence
the two patches do not join with $G^1$ continuity.
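This determinant test can be carried out numerically: along the shared boundary, $G^1$ continuity forces $\det(\partial_2\mathbf{p},\partial_1\mathbf{p},\partial_2\mathbf{q})=0$ for all $u$. The sketch below implements the test for two bi-3 patches given their common boundary row $\mathbf{p}_{i0}=\mathbf{q}_{i0}$ and the two adjacent rows of BB-coefficients; the sample coefficients are hypothetical illustrations, not the values of comp.tex.

```python
from math import comb


def bern(n, k, t):
    """Bernstein polynomial B^n_k(t)."""
    return comb(n, k) * (1 - t) ** (n - k) * t ** k


def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))


def g1_defect(p_row0, p_row1, q_row1, u):
    """det(d_v p, d_u p, d_v q) along the shared boundary p(u,0)=q(u,0).

    p_row0: boundary coefficients p_{i0} (= q_{i0}); p_row1, q_row1: the
    adjacent rows p_{i1}, q_{i1}. A nonzero value at some u rules out G^1."""
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    dvp = [3 * sum(sub(p_row1[i], p_row0[i])[c] * bern(3, i, u)
                   for i in range(4)) for c in range(3)]
    dvq = [3 * sum(sub(q_row1[i], p_row0[i])[c] * bern(3, i, u)
                   for i in range(4)) for c in range(3)]
    dup = [3 * sum(sub(p_row0[i + 1], p_row0[i])[c] * bern(2, i, u)
                   for i in range(3)) for c in range(3)]
    return det3(dvp, dup, dvq)


# Hypothetical integer coefficients (not those of comp.tex):
row0 = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
rowp = [[0, 1, 0], [1, 1, 0], [2, 1, 1], [3, 1, 0]]
rowq = [[0, -1, 0], [1, -1, 0], [2, -1, 0], [3, -1, 0]]
print(g1_defect(row0, rowp, rowq, 0.5))  # nonzero -> not G^1 here
```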
\section{Alternative bi-3 constructions}
\label{sec:alternative}
Attempts to generalize bi-cubic splines to irregular layouts have a long
history and include
\cite{bezier77a},
\cite{beeker86a},
\cite{Catmull-1978-CC},
\cite{Sabin:1968:CCS},
\cite{Sarraga:1987:IGU},
\cite{Gregory:1994:FPH},
\cite{Peters:1991:SIM},
\cite{Peters:1994:SAT},
\cite{Wijk:1986:BPA}
to list just a few.
The $3\times3$ bi-3 patches per quad construction of \cite{Fan:2008:SBS}
achieves the lower bound determined by \thmref{thm:twodblknt} and
has been used to implement the multi-sided caps included in
bi-cubic T-spline constructions.
\cite{Sarov:RPG:2016} focuses on restricted input meshes
to ensure that $G^1$ bi-3 surfaces can be built with fewer pieces.
Use of a guide shape (of higher polynomial degree) appears to be necessary
to construct bi-3 surfaces with a good distribution of highlight lines,
as required for car styling and many other outer surfaces.
For example, the guided approach improves the shape
of bi-3 singularly parameterized surfaces \cite{Karciauskas:CCB:2016}.
The paper ``Can bi-cubic surfaces be class {A}?''
\cite{Karciauskas:2015:CBSconf} emphasizes the distinction
between exact $G^1$ continuity and acceptable shape
in terms of curvature distribution and highlight lines.
This distinction, accompanied by mathematical estimates of the
jump in normals, could also be useful in the context of
\cite{Akle:2017:IM}. Since proving surface `fairness' is typically not possible,
it is recommended to test new surface construction algorithms
on the obstacle course \cite{obstaclecourse} of local input meshes.
\section{Conclusion}
The approach of \cite{Akle:2017:IM} rounds shapes
but cannot guarantee $G^1$ continuity.
A number of alternative finite bi-3 surface constructions exist
in the literature. Depending on the valence, they
require more or fewer pieces than \cite{Akle:2017:IM}.
There are many constructions
using few patches but of higher degree than bi-3.
\bibliographystyle{alpha}
\section{Introduction}
Spin-based qubits, owing to their long coherence times and individual coherent manipulation, are promising candidates for building blocks
of quantum information processors.\cite{wuReview,buluta104401,loss120,hanson1217,zwanenburg961} A conventional
spin qubit can be simply realized via Zeeman splitting of two Kramers-degenerate states by a static magnetic field and controlled by an ac magnetic
field.\cite{hanson1217,awschalom1174,koppens766,engel4648} However, its application is
limited due to the difficulty in generating and localizing an ac magnetic field at
the nanoscale. Owing to the interplay between spin and orbital degrees of freedom,
the spin-orbit qubit allows the possibility for manipulating spins via an easily-accessible ac electric field, i.e., by means of the
electric-dipole spin resonance (EDSR).\cite{rashba126405,rashba137}
Intuitively, the interplay between spin and orbit can arise from the spin-orbit coupling (SOC),
e.g., the Rashba or Dresselhaus type, which
couples the electron spin ${\bgreek \sigma}$ to the momentum ${\bf p}$. The SOC-mediated EDSR has been
widely studied in the literature.\cite{Nowack1430,perge166801,li086805,sadreev115302,golovach165319,hu035314,khomitsky125312,Arrondo155328}
Instead of invoking SOC, an alternative way to achieve the interplay between spin and
orbit is coupling the electron spin ${\bgreek \sigma}$ to the
coordinate ${\bf r}$. This spin-coordinate coupling can be accomplished by,
e.g., an inhomogeneous Zeeman-like interaction \cite{tokura047202,kato1201,ladriere776,obata085317,hu035314,cottet160502} or a fluctuating
hyperfine interaction.\cite{laird246601,shafiei107601}
Apart from coherent manipulation, scaling up the spin-orbit-qubit
architecture also involves quantum information storing and transferring. Embedding the spin-orbit qubit into
a cavity resonator to achieve spin-photon coupling seems particularly attractive, as
the mobile photons in the cavity can store and transfer quantum information with little loss
of coherence.\cite{xiang623} Indeed, in view of their energy scales, the semiconductor-based spin-orbit qubit is compatible with
the superconducting microwave resonator. Moreover, integrating the spin-orbit qubit into the superconducting cavity promotes hybrid quantum
communications, e.g., in combination with superconducting
qubits or charge qubits.\cite{xiang623,you42,lambert17005} Several proposals for coupling
spin-orbit qubits to superconducting cavities have been reported.\cite{burkard041307,jin190506,cottet160502,hu035314} However, the
spin-orbit qubit invoking SOC requires an external static magnetic field,\cite{Nowack1430,li086805,perge166801,Arrondo155328} which
is not naturally compatible with superconducting cavities of high quality
factors.\cite{cottet160502} Therefore, a spin-orbit qubit
without an external magnetic field is preferred for constructing a hybrid
system. It has been proposed that, by using an inhomogeneous Zeeman-like
interaction induced by ferromagnetic contacts or micromagnets,\cite{tokura047202,obata085317,ladriere776,cottet160502} one can
realize spin-orbit qubits in the absence of a magnetic field and effectively
couple them to superconducting cavities.\cite{cottet160502,hu035314}
In this work, we propose a spin-orbit qubit mediated by the spin-coordinate
coupling and study its coupling to a superconducting coplanar waveguide
resonator. In contrast to previous studies,\cite{tokura047202,kato1201,laird246601,shafiei107601,ladriere776,obata085317,hu035314,cottet160502} our
proposal relies on the inhomogeneous exchange field arising from the
multiferroic insulators with a cycloidal spiral magnetic
order.\cite{kimura387,lebeugle024116,rovillain975,khomskii20,thomas423201,yamasaki147204}
These multiferroic insulators provide a unique opportunity for the design of
functional devices owing to the cycloidal spiral magnetic order as well as the magnetoelectric coupling.\cite{zhang014433,zhai022107,thomas423201} In
our setup for the spin-orbit qubit, as illustrated in Fig.~\ref{fig1}(a), a
gated nanowire with a quantum dot is placed on top of a multiferroic
insulator. The spiral exchange field arising from the magnetic moments in the
multiferroic insulator causes an inhomogeneous Zeeman-like interaction on the
quantum-dot spin. Therefore, a spin-orbit qubit is produced in the nanowire quantum
dot. The absence of an external magnetic field facilitates the
integration of the spin-orbit qubit into the superconducting coplanar
waveguide, as illustrated in Fig.~\ref{fig1}(b). In
this hybrid circuit, both the level spacing of the spin-orbit qubit and the
spin-photon coupling depend on the ratio between the dot size
and the wavelength of the spiral magnetic order in the substrate. When the Rashba SOC is introduced into
the nanowire, the level spacing and spin-photon coupling can be
adjusted by tuning the Rashba SOC via a gate voltage on the
nanowire. With the modulation of the Rashba SOC, we can obtain an effective
spin-photon coupling with an efficient on/off switching. This is promising for manipulating, storing and transferring quantum
information in the data bus provided by the circuit cavity.
This paper is organized as follows. First, we establish the
spin-orbit qubit on the surface of a multiferroic insulator. After that, we integrate the spin-orbit
qubit into a superconducting coplanar waveguide and study its spin-photon
coupling. We further study the modulation of the hybrid system by the
Rashba SOC. Finally, we discuss the
experimental realizability of the proposed device.
\section{Spin-orbit qubit on a multiferroic insulator}
Our study starts from the device schematically shown in Fig.~\ref{fig1}(a). In this
setup, a nanowire lies on the surface of a multiferroic insulator,
e.g., TbMnO$_3$ or BiFeO$_3$,\cite{lebeugle024116,rovillain975,khomskii20,yamasaki147204,thomas423201}
and is aligned parallel to the propagation direction of the spiral magnetic moments in
the substrate. The nanowire is gated by
two electrodes producing a quantum dot, which is assumed to be subject to a 1D
parabolic potential.
\begin{figure}[thb]
{\includegraphics[width=8cm]{fig1.pdf}}
\caption{(Color online) (a)~Schematic of the proposed spin-orbit qubit: a
gated nanowire on the surface of a multiferroic insulator. The nanowire is aligned
along the propagation direction of the spiral magnetic order in the
multiferroic insulator, indicated by the series of rotating arrows. Two
gate electrodes (indicated by the two dark blue ring-shaped contacts) supply a parabolic confining
potential and form a quantum dot in between. The gate electrode on the top of the quantum dot with voltage
$V_g$ controls the Rashba SOC. Two gate electrodes on the top
and bottom of the multiferroic insulator supply a voltage $V_c$ which controls the spiral helicity of
the magnetic order. (b)~Schematic of the integration of the spin-orbit
qubit into a superconducting coplanar waveguide resonator. The nanowire
is placed parallel to the electric field between the center conductor and
the ground plane and located at the maximum of the electric field. Note
that in the multiferroic-insulator substrate, only the magnetic moments near the nanowire are schematically
shown by the rotating arrows.}
\label{fig1}
\end{figure}
We consider a single electron in the quantum dot. In the coordinate system
with the $x$-axis along the nanowire and the $z$-axis
perpendicular to the top surface of the multiferroic insulator, the electron is described by the Hamiltonian,
\begin{align}
H=\frac{p^2}{2m_e}+\frac{1}{2}m_e\omega^2x^2+{\bf J}(x)\cdot{\bgreek \sigma}.
\label{hamil}
\end{align}
Here $m_e$ is the effective electron mass and $p=-i\hbar\partial_x$ is the momentum
operator. The second term in the Hamiltonian is the parabolic potential. The
last term depicts the interaction between the electron spin ${\bgreek \sigma}$ and the exchange
field from the cycloidal spiral magnetic moments,
\begin{align}
{\bf J}(x)=J(\sin(\chi qx+\phi),0,\cos(\chi q x+\phi)).
\end{align}
In writing this term we have assumed the dot size $x_0=\sqrt{\hbar/(m_e\omega)}$ to be much larger than the distance
($\sim$~0.1~nm) between the nearby magnetic atoms in the substrate. Here $q=2\pi/\lambda$ is
the wavevector of the spiral order corresponding to a wavelength $\lambda$ and $\phi$ ($0\le \phi<2\pi$) is the phase of the
exchange field at $x=0$. The spiral helicity $\chi$ ($=\pm 1$) of the magnetic order is reversible
by a gate voltage on the multiferroic insulator [as illustrated by $V_c$ in Fig.~\ref{fig1}(a)] due to the
magnetoelectric coupling.\cite{yamasaki147204} The strength of the exchange coupling $J$ (we assume $J>0$) between the electron spin and the magnetic
moments, depending on their distance and the specific hosts, is weak and assumed to be of the order
of 1-10~$\mu$eV.\cite{cottet160502,tokura047202}
Due to the spiral geometry of the magnetic order, the macroscopic magnetism of the multiferroic
insulator is zero, while the exchange coupling still breaks the time-reversal
symmetry locally and causes an inhomogeneous Zeeman-like interaction on the
quantum-dot spin. In the presence of this inhomogeneous Zeeman-like
interaction, a spin-orbit qubit is realizable in the quantum dot. One can also understand the availability of a spin-orbit qubit
in our setup in the spiral frame with the spin ${z}$-axis along the local
magnetic moment. Using a unitary transformation ${\tilde H}=U^\dagger(x)HU(x)$, where
$U(x)=\exp{[-i(\chi q x+\phi)\sigma_y/2]}$,\cite{zhang014433} one arrives at
\begin{align}
{\tilde H}=\frac{p^2}{2m_e}+\frac{1}{2}m_e\omega^2x^2-\alpha_0 p\sigma_y+J\sigma_z+\frac{\hbar^2q^2}{8m_e},\label{rl}
\end{align}
where $\alpha_0=\chi\hbar q/(2m_e)$. This Hamiltonian evidently indicates that
in the spiral frame, the exchange field supplies not only the homogeneous Zeeman-like
interaction, $J\sigma_z$, but also an effective Rashba-like SOC, $-\alpha_0 p\sigma_y$.\cite{zhang014433} This Hamiltonian is equivalent to the one studied in
Ref.~\onlinecite{li086805}, where a spin-orbit qubit was realized by virtue of an external magnetic
field and the genuine Rashba/Dresselhaus SOC.
Now we demonstrate the realization of a spin-orbit qubit by studying the
low-energy bound states in the quantum dot. For the reasonable case with
$J/(\hbar\omega)\lesssim 0.1$, the exchange
coupling can be treated as a perturbation. We rewrite the Hamiltonian (\ref{hamil}) as
$H=H_0+H_1$, where $H_1={\bf J}(x)\cdot{\bgreek\sigma}$. The eigenstates of $H_0$, describing a
harmonic oscillator, can be written as $|n\pm\rangle=|n\rangle|\pm\rangle$ with the eigenenergies
$\varepsilon_n=\left(n+\frac{1}{2}\right)\hbar\omega$ ($n=0,1,2...$). Here $|n\rangle$ is
the orbital eigenstate of the harmonic oscillator and $|+\rangle$
($|-\rangle$) is the spin-up (-down) eigenstate of $\sigma_z$. We focus on
the $n=0$ Hilbert subspace which is two-fold degenerate. First-order degenerate perturbation theory gives the lowest
two bound states of $H$ with energies $\varepsilon_{0\pm}=\varepsilon_0\pm
\hbar\Delta/2$, where
\begin{align}
\Delta=\Delta_0\exp{(-\eta^2)},
\end{align}
with $\Delta_0=2J/\hbar$ and $\eta=\chi\pi x_0/\lambda$. The corresponding wavefunctions are
\begin{align}\nonumber
|\widetilde{0\pm}\rangle=&e^{-i\phi\sigma_y/2}\Big\{|0\pm\rangle-\frac{J
e^{-\eta^2}}{\hbar\omega}\sum_{m=1}^{+\infty}\Big[\frac{\pm(i\sqrt{2}\eta)^{2m}}{2m\sqrt{(2m)!}}|2m\pm\rangle
\\&-\frac{i(i\sqrt{2}\eta)^{2m-1}}{(2m-1)\sqrt{(2m-1)!}}|2m-1\mp\rangle\Big]\Big\}.
\end{align}
The two lowest bound states $|\widetilde{0\pm}\rangle$, spaced by $\hbar\Delta$ and about $\hbar\omega$ away from the
nearest higher-energy state, can be used to encode the spin-orbit
qubit. As a result, with the aid of the spiral exchange field supplied by a
multiferroic insulator, we realize a spin-orbit qubit in the absence of an
external magnetic field as well as the Rashba/Dresselhaus SOC.
\section{Spin-photon coupling in a superconducting cavity}
The spin-orbit qubit can respond to an ac electric field via electric-dipole spin resonance (EDSR).\cite{rashba126405,rashba137} Due to the small level spacing, the spin-orbit
qubit is controllable by low-temperature microwave technology. This can be accomplished by virtue of a superconducting
resonator, which works at temperatures $\sim$~mK with resonance frequencies
$\sim$~GHz.\cite{xiang623} Indeed, integrating spin-orbit qubits into
superconducting resonators has recently attracted much
interest,\cite{burkard041307,cottet160502,jin190506,hu035314}
to explore novel hybrid quantum circuits.\cite{xiang623} Moreover, the spin-orbit qubit proposed here,
which is external-magnetic-field-free, is naturally compatible with
superconducting resonators of high quality factors.
As schematically shown in Fig.~\ref{fig1}(b), we embed the spin-orbit qubit into a superconducting coplanar
waveguide,\cite{xiang623,hu035314} with the nanowire aligned parallel to the
electric field between the center conductor and the ground plane. The resonant
photon energy ($\sim$~GHz) is too low to excite magnons in the
multiferroic-insulator substrate,\cite{rovillain975} and we assume that the spiral
magnetic order remains steady during the operation
of the spin-orbit qubit. The spin-orbit qubit, photons, as well as their coupling, can be described by the
Hamiltonian,\cite{xiang623}
\begin{align}
H_{\rm eff}=\frac{\hbar\Delta}{2}s_z+\hbar\omega_r\left(a^\dagger a+\frac{1}{2}\right)+\hbar g(a^\dagger s_-+a s_+).
\label{edsr}
\end{align}
Here $a$ ($a^\dagger$) is the annihilation (creation) operator for photons with
frequency $\omega_r$ in the cavity, and $s_{x,y,z}$ are the Pauli matrices in
the $|\widetilde{0\pm}\rangle$ subspace with $s_\pm=(s_x\pm i s_y)/2$. The
spin-photon coupling strength
\begin{align}
g=\langle\widetilde{0+}|x|\widetilde{0-}\rangle Ee/\hbar,
\end{align}
where $E$ is the cavity electric field on the spin-orbit qubit. Up to first order in
$J/(\hbar\omega)$,
\begin{align}
g=g_0(x_0/\lambda)^3\eta \exp{(-\eta^2)},
\end{align}
where $g_0=-eEm_eJ\lambda^3/\hbar^3$.
Note that in this device, both the level spacing $\Delta$ and
the spin-photon coupling $g$ are independent of the phase $\phi$ and proportional to the exchange coupling strength $J$. Also, both $\Delta$ and
$g$ strongly depend on the ratio between the dot size $x_0$ and
the wavelength $\lambda$ of the spiral magnetic order. In Fig.~\ref{fig2}, we plot the dependence of $\Delta/\Delta_0$
and $|g/g_0|$ on the parameter $x_0/\lambda$. One finds that when $x_0/\lambda$
is close to 1, both $\Delta$ and $|g|$ approach zero, hindering the device operation. This is because when $x_0/\lambda$ is large,
the exchange field, oscillating with a high frequency on the scale of the dot
size, has quite small matrix elements between the $|n\pm\rangle$ and $|n^\prime\pm\rangle$ states. This
leads to a vanishing Zeeman-like splitting and spin-orbit mixing of the harmonic
oscillator states. However, in the $x_0/\lambda=0$ limit, $\Delta$ reaches its maximum while
$|g|$ again approaches zero. In fact, in this regime, with the
approximately homogeneous exchange field experienced by the quantum-dot
electron, the spin-orbit interplay becomes quite weak and
a nearly pure spin qubit with the largest Zeeman-like splitting is obtained.
\begin{figure}[hbt]
{\includegraphics[width=9cm]{fig2.pdf}}
\caption{(Color online) Dimensionless level spacing $\Delta/\Delta_0$ of the
spin-orbit qubit and dimensionless spin-photon coupling $|g/g_0|$ versus the dimensionless dot size $x_0/\lambda$.}
\label{fig2}
\end{figure}
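The trade-off between the level spacing and the spin-photon coupling can be checked numerically. The short sketch below (an illustration of the analytic results above, not part of the original calculation) evaluates $\Delta/\Delta_0=\exp(-\eta^2)$ and $|g/g_0|=(x_0/\lambda)^3\,\eta\exp(-\eta^2)$ with $\eta=\pi x_0/\lambda$ (taking $\chi=+1$) and locates the dot size that maximizes the coupling:

```python
import numpy as np

# Dimensionless level spacing and spin-photon coupling from the text:
# Delta/Delta0 = exp(-eta^2),  |g/g0| = (x0/lam)^3 * eta * exp(-eta^2),
# with eta = pi * x0/lam (spiral helicity chi = +1).
r = np.linspace(1e-4, 1.5, 100000)          # r = x0 / lambda
eta = np.pi * r
level_spacing = np.exp(-eta**2)             # Delta / Delta0
coupling = r**3 * eta * np.exp(-eta**2)     # |g / g0|

r_opt = r[np.argmax(coupling)]
print(f"|g/g0| is maximized at x0/lambda = {r_opt:.3f}")
print(f"maximum |g/g0| = {coupling.max():.4f}")
```

Consistent with Fig.~\ref{fig2}, both quantities vanish for $x_0/\lambda\gtrsim 1$, $\Delta/\Delta_0\to 1$ as $x_0/\lambda\to 0$, and the coupling peaks near $x_0/\lambda\approx 0.45$ (analytically at $\sqrt{2}/\pi$).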
\section{Modulation by the Rashba SOC}
Although the spin-orbit qubit proposed here is available without employing the
Rashba/Dresselhaus SOC, in reality the SOC may be present and
even important.
Nonetheless, the Rashba SOC is controllable, e.g., by a
gate voltage applied to the nanowire [as illustrated by $V_g$ from the gate
electrode on top of the nanowire in Fig.~\ref{fig1}(a)]. Below we introduce
the Rashba SOC into the nanowire, supplying an effective channel to modulate the spin-orbit qubit as well as its coupling to photons.
With the Rashba SOC included, the Hamiltonian given by Eq.~(\ref{hamil}) becomes
\begin{align}
H_\alpha=\frac{p^2}{2m_e}+\frac{1}{2}m_e\omega^2x^2+\alpha p\sigma_y+{\bf
J}(x)\cdot{\bgreek \sigma},
\label{hamil1}
\end{align}
where $\alpha$ is the Rashba SOC strength. We now apply the unitary transformation
${\tilde H}_\alpha=U_\alpha^\dagger(x)H_\alpha U_\alpha(x)$ with
$U_\alpha(x)=\exp{(-im_e\alpha x\sigma_y/\hbar)}$, and obtain \cite{zhang014433,li086805}
\begin{align}
{\tilde H}_\alpha=\frac{p^2}{2m_e}+\frac{1}{2}m_e\omega^2x^2+{\bf J}_\alpha(x)\cdot{\bgreek \sigma}+\frac{m_e\alpha^2}{2},
\end{align}
where ${\bf J}_\alpha(x)=J(\sin(\chi q_\alpha x+\phi),0,\cos(\chi q_\alpha
x+\phi))$ with $q_\alpha=(1-\alpha/\alpha_0)q$. The Hamiltonian ${\tilde
H}_\alpha$ has exactly the same form as in Eq.~(\ref{hamil}). Therefore, one can obtain the
low-energy eigenstates of ${\tilde H}_\alpha$ immediately based on the results
given previously. By noting that the electric dipole moment commutes with
the unitary operator $U_\alpha(x)$, one straightforwardly obtains the level spacing of the spin-orbit
qubit and the spin-photon coupling in the presence of the Rashba SOC,
\begin{align}
&{\Delta}_\alpha=\Delta_0\exp{(-\eta_\alpha^2)},\\& g_\alpha=g_0(x_0/\lambda)^3{\eta_\alpha}\exp{(-\eta_\alpha^2)},
\end{align}
with ${\eta}_\alpha=(1-\alpha/\alpha_0)\eta$.
The above results can be understood by noting that the Rashba SOC superimposes
on the effective Rashba-like SOC arising from the spiral geometry, or, in other words,
equivalently modulates the wavelength of the spiral magnetic order. This feature allows one to control
both the level spacing of the spin-orbit qubit and the spin-photon coupling by
adjusting the Rashba SOC via the gate voltage. To show this modulation of the
hybrid system by the Rashba SOC, in Figs.~\ref{fig3}(a, b) we plot the
dimensionless level spacing, $\Delta_\alpha/\Delta_0$, and
dimensionless spin-photon coupling, $|g_\alpha/g_0|$, versus the parameters $x_0/\lambda$ and
$\alpha/\alpha_0$. These calculations indicate that when $x_0\sim\lambda$ and
$\alpha\sim(1\pm 0.2)\alpha_0$, the spin-orbit qubit can be effectively coupled
to photons, as indicated by the area near the ``on'' points in Fig.~\ref{fig3}(b). Moreover, by tuning $\alpha$ to
$\alpha_0$, the spin-photon coupling is completely switched off due to the
decoupling of the spin from the orbit, as indicated by the area near the ``off'' point in
Fig.~\ref{fig3}(b). During this switching process, the level spacing of the
spin-orbit qubit changes by about 30\%. These features are promising
for manipulating, storing and transferring information in the hybrid quantum
systems.\cite{xiang623}
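The on/off behavior quoted above can be verified directly from $\Delta_\alpha$ and $g_\alpha$. The sketch below is an independent numerical check (using $x_0=\lambda$ and $\chi=+1$): the coupling vanishes identically at $\alpha=\alpha_0$, while the level spacing changes by roughly 30\% between an ``on'' point $\alpha=0.8\,\alpha_0$ and the ``off'' point.

```python
import numpy as np

def qubit_params(alpha_ratio, r=1.0):
    """Dimensionless Delta_alpha/Delta0 and |g_alpha/g0| for
    alpha_ratio = alpha/alpha0 and r = x0/lambda (chi = +1)."""
    eta_a = (1.0 - alpha_ratio) * np.pi * r
    return np.exp(-eta_a**2), abs(r**3 * eta_a * np.exp(-eta_a**2))

d_on, g_on = qubit_params(0.8)    # an "on" point, alpha = 0.8 alpha0
d_off, g_off = qubit_params(1.0)  # the "off" point, alpha = alpha0

print(f"on : Delta/Delta0 = {d_on:.3f}, |g/g0| = {g_on:.3f}")
print(f"off: Delta/Delta0 = {d_off:.3f}, |g/g0| = {g_off:.3f}")
print(f"level-spacing change: {abs(d_off - d_on)/d_off:.1%}")
```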
\begin{figure}[hbt]
{\includegraphics[width=8cm]{fig3a.pdf}}\\{\includegraphics[width=8cm]{fig3b.pdf}}
\caption{(Color online) (a)~Dimensionless level spacing $\Delta_\alpha/\Delta_0$ of the
spin-orbit qubit and (b)~dimensionless spin-photon coupling $|g_\alpha/g_0|$
(in log-scale) versus $x_0/\lambda$ and $\alpha/\alpha_0$.}
\label{fig3}
\end{figure}
Note that for a particular Rashba SOC, its modulation depends on the spiral helicity in
the substrate, as $\alpha_0$ depends on $\chi$. This feature supplies another
control channel of the device via the gate voltage $V_c$ on the substrate, and
also in turn provides the possibility to determine the exchange coupling
strength $J$ as well as the Rashba SOC strength $\alpha$. By measuring the level spacings of the spin-orbit qubit corresponding to
opposite spiral helicities, which satisfy $\ln[\Delta_\alpha(\chi=1)/\Delta_\alpha(\chi=-1)]=2q\alpha/\omega$, one can
obtain the Rashba SOC strength $\alpha$ with the knowledge of the confining
potential of the quantum dot. Here $\alpha$ is assumed to be only marginally affected
by the reversal of the spiral helicity. Further, the exchange coupling
strength $J$ is available based on the known ${\Delta}_\alpha$ and $\alpha$.
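The relation $\ln[\Delta_\alpha(\chi=1)/\Delta_\alpha(\chi=-1)]=2q\alpha/\omega$ follows from $\eta_\alpha=(1-\alpha/\alpha_0)\eta$ with $\alpha_0=\chi\hbar q/(2m_e)$ and $x_0^2=\hbar/(m_e\omega)$, and can be checked numerically. The sketch below uses illustrative parameters (a Ge-like effective mass, a 62~nm spiral wavelength, and an assumed Rashba strength, chosen only for the check) and recovers $\alpha$ from the two level spacings:

```python
import numpy as np

hbar = 1.054571817e-34          # J s
m_e = 0.08 * 9.1093837e-31      # illustrative effective mass (kg)
lam = 62e-9                     # illustrative spiral wavelength (m)
q = 2 * np.pi / lam
x0 = lam                        # dot size set equal to lam for the check
omega = hbar / (m_e * x0**2)    # from x0 = sqrt(hbar/(m_e omega))
alpha = 1.0e4                   # assumed Rashba SOC strength (m/s)

def delta_over_delta0(chi):
    """Delta_alpha / Delta0 for spiral helicity chi = +/-1."""
    alpha0 = chi * hbar * q / (2 * m_e)
    eta_a = (1 - alpha / alpha0) * chi * np.pi * x0 / lam
    return np.exp(-eta_a**2)

ratio = delta_over_delta0(+1) / delta_over_delta0(-1)
alpha_recovered = omega * np.log(ratio) / (2 * q)
print(f"alpha recovered from level spacings: {alpha_recovered:.4e} m/s")
```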
\section{Experimental realizability}
Let us now discuss the experimental realizability of the proposed spin-orbit
qubit and its coupling to the superconducting coplanar waveguide. We
consider a $\langle 110\rangle$-oriented Ge nanowire \cite{wu3165,greytak4176,niquet084301} on the surface
of the multiferroic insulator BiFeO$_3$.\cite{khomskii20,thomas423201,lebeugle024116} In the $\langle
110\rangle$-oriented Ge nanowire, the electron effective mass $m_e=0.08m_0$,
where $m_0$ is the free electron mass.\cite{niquet084301} For BiFeO$_3$, the
wavelength of the spiral magnetic order $\lambda=62$~nm,\cite{lebeugle024116,thomas423201} while the
magnon frequency is of the order of 100~GHz.\cite{rovillain975} The exchange coupling strength is set as $J=5~\mu$eV,
smaller than the estimated interface exchange coupling (16~$\mu$eV) induced by
the ferromagnetic-insulator contacts in Ref.~\onlinecite{cottet160502}. The electric field
in the superconducting coplanar waveguide has the typical maximal strength\cite{hu035314}
$E=0.2$~V/m. With these parameters, we have $\Delta_0=(2\pi)2.4$~GHz,
$|g_0|=(\pi)0.1$~MHz, and $|\alpha_0|=7.4\times 10^4$~m/s. Moreover, even when
$x_0\sim\lambda$, the orbital splitting in the quantum dot is $\hbar\omega\sim$~0.25~meV,
still much larger than the exchange coupling strength $J$. In addition to the
availability of an effective spin-photon coupling with an efficient on/off switching, the proposed device has another
advantage. That is, in isotopically-purified $^{72}$Ge samples, the
hyperfine interaction can be markedly suppressed and hence the coherence time
of the spin-orbit qubit in the zero-temperature limit can be quite long.\cite{hu035314} This feature benefits the application of the proposed
device.
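The quoted scales can be reproduced from $\Delta_0=2J/\hbar$, $\alpha_0=\hbar q/(2m_e)$ and $|g_0|=eEm_eJ\lambda^3/\hbar^3$. The sketch below evaluates them for the parameters of this section; $\Delta_0$, $\alpha_0$ and $\hbar\omega$ match the quoted values, while $|g_0|$ comes out within roughly 20\% of the quoted $(\pi)0.1$~MHz, the difference presumably reflecting rounding of the inputs.

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34
e = 1.602176634e-19
m0 = 9.1093837e-31

# Parameters from this section
m_e = 0.08 * m0          # <110> Ge nanowire effective mass
lam = 62e-9              # BiFeO3 spiral wavelength
J = 5e-6 * e             # exchange coupling, 5 micro-eV, in joules
E = 0.2                  # cavity electric field (V/m)
q = 2 * np.pi / lam

Delta0 = 2 * J / hbar                      # rad/s
alpha0 = hbar * q / (2 * m_e)              # m/s
g0 = e * E * m_e * J * lam**3 / hbar**3    # rad/s (magnitude of g0)

print(f"Delta0/(2 pi) = {Delta0/(2*np.pi)/1e9:.2f} GHz")
print(f"alpha0        = {alpha0:.2e} m/s")
print(f"|g0|/(2 pi)   = {g0/(2*np.pi)/1e3:.0f} kHz")

# Orbital splitting for a dot as large as the spiral wavelength, x0 = lam:
omega = hbar / (m_e * lam**2)
print(f"hbar*omega    = {hbar*omega/e*1e3:.2f} meV")
```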
\section{Conclusion}
In conclusion, we have proposed a spin-orbit qubit based on a nanowire quantum
dot on the surface of a multiferroic insulator, and designed a hybrid quantum
circuit by integrating this spin-orbit qubit into a superconducting coplanar
waveguide.
The spiral exchange field from the magnetic moments in the multiferroic
insulator causes an inhomogeneous Zeeman-like interaction on the electron spin in the
quantum dot. This effect assists the realization of a spin-orbit qubit in the
quantum dot. In this approach, no external magnetic field is employed, benefitting the on-chip fabrication of the spin-orbit qubit in a superconducting
coplanar waveguide. Our study reveals that both the
level spacing of the spin-orbit qubit and the spin-photon coupling are
proportional to the exchange coupling strength and depend on the ratio of the dot size to the wavelength of the
spiral magnetic order. We further consider the effect of the Rashba SOC, which
is controllable by a gate voltage on the nanowire. It is found that by invoking the Rashba SOC,
one is able to obtain an effective spin-photon coupling with an efficient on/off
switching, making the device promising for applications. The proposed spin-orbit
qubit may be experimentally realizable by placing a $\langle 110\rangle$-oriented
Ge nanowire on the surface of the multiferroic insulator BiFeO$_3$.
\begin{acknowledgments}
The authors gratefully acknowledge E.~Ya.~Sherman and X.~Hu for valuable
discussions and comments. F.N. is partially supported by the ARO, RIKEN iTHES
Project, MURI Center for Dynamic Magneto-Optics, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for
Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS
via its FIRST program.
\end{acknowledgments}
\section{Introduction}
Of the 151 nearby stars known to harbor one or more planets, 19 are
well-characterized multiple-planet systems, and an additional 24 show
radial
velocity (RV) residuals
indicative of additional companions \citep{Butler06}. For instance,
\citet{Vogt05} reported additional companions around five stars,
including two revealed by incomplete orbits apparent in the
RV residuals (HD 50499 and HD 217107), and one as a
short-period, low amplitude variation in the residuals of the fit to a
long-period outer companion. \citet{Rivera05} detected a 7.5 Earth-mass
companion to GJ 876 in a 2-day period through analysis of the RV
residuals to a 2-planet dynamical fit of the more massive, outer exoplanets.
\citet{Gozdziewski06} similarly analyzed the RV residuals of 4 stars
to search for Neptune-mass companions.
Very little is known about the frequency or nature of exoplanets
with orbital distances greater than 5 AU \citep{Marcy05}.
Precise radial velocities have only reached the precision required
to detect such objects within the last 10 years \citep{Butler96b},
which is less than the orbital period of such objects ($P>12$ y for
exoplanets orbiting solar mass stars). Thus, the RV curves for such
planets are all necessarily incomplete, and we must obtain many more
years of data before our knowledge of their orbits improves
significantly.
The ability to put constraints on planets with incomplete
orbits, however weak, allows us to peek beyond the 5 AU completeness
limit inherent in the ten-year-old planet searches. Characterizing
incomplete orbits also increases our sample of known
multiple exoplanetary systems, which improves our understanding of the
frequency of orbital resonances, the growth of multiple planets, and
the mechanics of orbital migration.
In this work, we present our analysis of the RV data of \citet{Butler06} in an effort
to determine which of those systems have additional, low-amplitude
companions.
Many systems known to host one exoplanet show more distant, long-period companions with highly
significant but incomplete orbits. In these systems, it can be
extremely difficult to constrain the properties of the outer
companion: in the case of a simple trend with no curvature, very
little can be said about the nature of these companions beyond their
existence, but even this informs studies of exoplanet multiplicity and
the frequency of exoplanets in binary systems.
In \S~\ref{hip14810} we discuss a new multiple planet system
from the N2K project, HIP 14810. In
\S~\ref{trends} we describe how we have employed
a false alarm probability statistic to test the significance of trends in the RV data of
stars already known to host exoplanets. We find that six stars known
to host exoplanets have previously undetected trends, and thus
additional companions.
When the RV residuals to a single Keplerian show significant
curvature, one may be able to place additional constraints on the
maximum $\ensuremath{m \sin{i}}$ of the additional companion. In \S\S~\ref{mapsection}--\ref{twonew} we present our analysis
of this problem in the cases of HD 24040 b and HD 154345 b, two
substellar companions new to this work with very incomplete orbits. By mapping
$\chi^2$ space for Keplerian fits, we show that HD 154345 b is
almost certainly planetary ($\ensuremath{m \sin{i}} <10 \ensuremath{\mbox{M}_{\mbox{Jup}}}$), and that HD 24040 b may
be planetary ($5\ensuremath{\mbox{M}_{\mbox{Jup}}} < \ensuremath{m \sin{i}} < 30\ensuremath{\mbox{M}_{\mbox{Jup}}}$).
In \S~\ref{mining} we describe how we extended this method to the RV residuals of known
planet-bearing stars which show trends. We find that for 2 stars we can place sufficiently strong upper limits on \ensuremath{m \sin{i}}\
to suggest that the additional companions are planetary in nature.
\section{HIP 14810}
\label{hip14810}
HIP 14810 is a metal-rich ($\ensuremath{[\mbox{Fe}/\mbox{H}]} = 0.23$) G5 V, V=8.5 star which we have observed at Keck
Observatory as part of the N2K program \citep{Fischer05} since Nov
2005. Table~\ref{RVhip14810} contains the RV data for this star. Its
stellar characteristics are listed in Table~\ref{starchar},
determined using the same LTE spectral analysis used for stars in the
SPOCS catalog \citep{Valenti05}. We quickly detected a
short-period, high-amplitude companion ($P=6.67$ d, $\ensuremath{m \sin{i}}=3.9
\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$) and a strong, $\sim 200$ m/s trend. Further observations
revealed evidence for substantial curvature in the residuals to a
planet plus trend fit. Fig.~\ref{hip14810fig} shows the RV curve for this
star decomposed into Keplerian curves for the b and c components, and
Table~\ref{orbitupdates} contains the best-fit double Keplerian elements.
\begin{deluxetable}{lrc}
\tablecaption{RV Data for HIP 14810\label{RVhip14810}}
\tablecolumns{3}
\tablewidth{0pc}
\tablehead{
{Time} &{Radial Velocity} & {Unc.}\\
{(JD-2440000)} &{(m/s)}&{(m/s)}}
\startdata
13693.760579 & -130.8 &1.3 \\
13694.831481 & -473.6 &1.2 \\
13695.909225 & -226.9 &1.2 \\
13723.786250 & 162.6 &1.0 \\
13724.688484 & 324.9 &1.2 \\
13746.814595 & 2.4 &1.3 \\
13747.852940 & -435.82 &0.94 \\
13748.734190 & -433.3 &1.2 \\
13749.739236 & -71.4 &1.2 \\
13751.898252 & 358.3 &1.1 \\
13752.807431 & 241.05 &0.80 \\
13752.912477 & 211.6 &1.7 \\
13753.691574 & -79.8 &1.1 \\
13753.810359 & -137.6 &1.2 \\
13753.901042 & -180.2 &1.2 \\
13775.836157 & -240.01 &0.97 \\
13776.812859 & 123.4 &1.4 \\
13777.723102 & 346.7 &1.3 \\
13778.720799 & 416.0 &1.3 \\
13779.744410 & 238.4 &1.3 \\
13841.722049 & -515.7 &1.4 \\
13961.130301 & -280.9 &1.0 \\
13962.133333 & -413.4 &1.1 \\
13969.097315 & -348.3 &1.2 \\
13981.969815 & -476.5 &1.2 \\
13982.947431 & -200.9 &1.2 \\
13983.981470 & 151.5 &1.0 \\
13984.096979 & 187.3 &1.2 \\
13984.985775 & 345.7 &1.3 \\
13985.102106 & 357.6 &1.3
\enddata
\end{deluxetable}
A 2-planet Keplerian fit yields an outer planet with $\ensuremath{m \sin{i}} = 0.95\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$,
$P=114$ d, and eccentricity of 0.27. We present the orbital solutions for
this two-planet system in Table~\ref{orbitupdates}.
\begin{figure}
\plotone{f1.eps}
\caption{RV curve for HIP 14810 with data from Keck, showing the inner planet with $P =
6.67$ d and $\ensuremath{m \sin{i}}=3.9 \ensuremath{\mbox{M}_{\mbox{Jup}}}$, and the outer planet with $P=95.3$ d and
$\ensuremath{m \sin{i}} =0.76$
\ensuremath{\mbox{M}_{\mbox{Jup}}}.\label{hip14810fig}}
\end{figure}
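For reference, the single-companion Keplerian RV curves summed in such two-planet fits can be evaluated with a few lines of code. The sketch below implements the standard formula (Kepler's equation solved by Newton iteration); it is a generic illustration, not the fitting code used to derive the orbital solutions in this paper, and the parameter names are ours.

```python
import numpy as np

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e sin(E) by Newton iteration."""
    E = M + e * np.sin(M)  # starting guess
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def rv_keplerian(t, P, K, e, omega, Tp, gamma=0.0):
    """Stellar radial velocity induced by one companion.
    P: period, K: semi-amplitude, e: eccentricity,
    omega: argument of periastron (rad), Tp: epoch of periastron,
    gamma: systemic velocity offset."""
    M = 2 * np.pi * (t - Tp) / P                 # mean anomaly
    E = kepler_E(np.mod(M, 2 * np.pi), e)        # eccentric anomaly
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# A two-companion model is the sum of two such terms (plus any trend).
```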
\section{Detecting Long-Period Companions}
\label{trends}
Very long period substellar companions appear in radial
velocity data first as linear trends (constant accelerations), then
as trends with curvature, and finally, as the duration of the
observations becomes a substantial fraction of the orbital period, as
recognizable portions of a Keplerian velocity curve. It is important,
then, to have a statistically robust test for trends in
velocity residuals. In this section, we discuss calculating false alarm
probabilities (FAPs) for such trends.
\subsection{Using FAP to Detect Trends\label{FAPtrend}}
\citet[\S~5.2][]{Marcy05} present a detailed discussion of using false
alarm probabilities (FAPs) for determining the
significance of a periodic signal in an RV time series. Here, our task
is similar. We wish to test the hypothesis that a star has an
additional companion with a long period, manifest only as a linear
trend in the RV series. We compare this hypothesis to the null
hypothesis that the data are adequately described only by the best-fit
Keplerians and noise.
We first fit the data set with a Keplerian model and compare the
$\chi^2_{\nu}$ statistic to that of a model employing a Keplerian plus a
linear trend. If this statistic improves, that is, if
$\Delta\chi^2_{\nu} = \chi^2_{\nu,\mbox{trend}}-\chi^2_{\nu, \mbox{no
trend}}$ is negative, then the inclusion of the trend may be
justified. To test the significance of the reduction in $\chi^2_{\nu}$,
we employ an FAP test.
We first employ a bootstrap method to characterize the noise in
our measurements. We subtract the best-fit Keplerian RV curve
from the data and assume the null hypothesis --- namely that the
residuals to this fit are properly characterized as noise and thus
approximate the underlying probability distribution function of the noise in the measurements.
We then draw from this set of residuals (with replacement) a mock set of residuals with the same
temporal spacing as the original set.
By adding these mock residuals
to the best-fit Keplerian RV curve we produce a mock data set with the
same temporal sampling as the original data set, but with the velocity
residuals ``scrambled'' (``re-drawn'' might be a better term since
we have drawn residuals {\it with} replacement.) It is important in
this procedure that internal errors remain associated with the scrambled residuals.
This ensures that points with error bars so
large that they contribute little to the $\chi^2_{\nu}$ sum, but
nonetheless lie far from the best-fit curve, do not gain significance
when ``scrambled'', inappropriately increasing $\chi^2_{\nu}$.
We then compare $\Delta\chi^2_{\nu}$ for our mock data set to that of
our genuine data. By repeating this procedure 400 times, we
produce 400 mock sets of residuals and 400 values for
$\Delta\chi^2_{\nu}$. If the linear trend is simply an artifact of the
noise, then re-drawing the residuals should not systematically improve
or worsen $\Delta\chi^2_{\nu}$. Conversely, if a linear trend is
significant, then the null hypothesis, that the residuals to a
Keplerian are uncorrelated noise, is invalid, and re-drawing them
should worsen the quality of the Keplerian(s) plus trend fit, since
scrambling will remove evidence of the trend.
Thus, the fraction of these sets with $\Delta\chi^2_{\nu}$ less than that
of the proper, unscrambled residuals provides a measurement of the
false alarm probability that the residuals to a Keplerian-only fit are correlated.
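The procedure above can be sketched compactly. The toy example below uses synthetic data, with a fixed-period sinusoid standing in for the Keplerian and weighted linear least squares standing in for the full Keplerian fit; it illustrates only the mechanics of the statistic, not the fitting machinery used in this work. The steps are: fit with and without a trend, re-draw residuals with replacement while keeping each residual paired with its internal error, and count how often the scrambled sets match the real improvement in $\chi^2_\nu$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: a fixed-period "Keplerian" signal plus a weak
# linear trend, sampled at irregular epochs with heteroscedastic errors.
P = 130.0
t = np.sort(rng.uniform(0.0, 1000.0, 40))
sig = rng.uniform(2.0, 5.0, t.size)          # internal errors (m/s)
v = 50*np.sin(2*np.pi*t/P) + 0.02*t + rng.normal(0.0, sig)

def fit(t, v, sig, with_trend):
    """Weighted linear fit; returns (chi^2_nu, best-fit model)."""
    cols = [np.sin(2*np.pi*t/P), np.cos(2*np.pi*t/P), np.ones_like(t)]
    if with_trend:
        cols.append(t)
    X = np.column_stack(cols)
    w = 1.0 / sig
    beta, *_ = np.linalg.lstsq(X * w[:, None], v * w, rcond=None)
    model = X @ beta
    nu = t.size - X.shape[1]
    return np.sum(((v - model) / sig)**2) / nu, model

def delta_chi2nu(t, v, sig):
    return fit(t, v, sig, True)[0] - fit(t, v, sig, False)[0]

d_real = delta_chi2nu(t, v, sig)

# Bootstrap FAP: re-draw the no-trend residuals with replacement,
# keeping each residual attached to its internal error.
_, model = fit(t, v, sig, with_trend=False)
resid = v - model
N = 400
n_better = 0
for _ in range(N):
    idx = rng.integers(0, t.size, t.size)
    if delta_chi2nu(t, model + resid[idx], sig[idx]) <= d_real:
        n_better += 1
print(f"Delta chi^2_nu (real data) = {d_real:.2f}")
print(f"FAP = {n_better}/{N} = {n_better/N:.3f}")
```

Because the synthetic trend here is strong, the scrambled sets essentially never reproduce the real improvement and the FAP comes out near zero.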
\subsection{Velocity Trends and Additional Companions in Known
Exoplanet Systems}
\label{foundtrends}
The Catalog of Nearby Exoplanets \citep{Butler06} contains 172
substellar companions with $\ensuremath{m \sin{i}} < 24 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ orbiting 148 stars
within 200 pc. Since then, at least 3 more systems have been
announced, including a triple-Neptune \citep{Lovis06}, and two
single-planet detections \citep{Johnson06,Hatzes06}. Of these 151 systems, 24 show significant trends in
addition to the Keplerian curves of the known exoplanets. We have
reanalyzed the radial velocities of \citet{Butler06}
to determine the significance of these trends and to find evidence for
additional trends using the FAP test described in \S~\ref{FAPtrend}.
Note that we have obtained additional data for some of these systems since
\citet{Butler06} went to press.
We confirm here 21 of the 24 trends in \citet{Butler06} to have FAPs below
1\% (2 others are in systems on which we have no data to test, and the
third is HD 11964, discussed in \S~\ref{lowamps}). We also confirm
the trend in the 14 Her system, first announced in \citet{Naef04} and
analyzed more thoroughly in \citet{Gozdziewski06b}, and
\S~\ref{incompletes}. We confirm the finding of \citet{Endl06} that
the trend reported in \citet{Marcy05} for HD 45350 b is not significant (FAP=0.6 and $\chi^2_{\nu}$ {\it increases} with the introduction of a trend).
We announce here the detection of statistically significant linear
trends (FAP $<1 \%$) around 4 stars already known to harbor a single
exoplanet: HD 83443, GJ 436 (= HIP
57087), HD 102117, and HD 195019. GJ 436
will be discussed more thoroughly in an upcoming work (Maness et
al., 2006, submitted). In one additional case, HD 168443, we detect a
radial velocity trend with FAP $< 1\%$ in a system already known to
have two exoplanets, indicating that a third, long-period companion
may exist. We present the updated orbital solutions in Table~\ref{orbitupdates}.
HD 49674 has an FAP for an additional trend of $\sim 2\%$, which is of
borderline significance when we account for the
size of our sample: we should expect that around 2 of our 100 systems
will prove to have FAPs $\sim 2\%$ purely by chance, and not because of an
additional companion. We include the fit for HD 49674 with a trend in
Table~\ref{orbitupdates}, but note here the weakness of the detection.
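The expected number of such chance coincidences follows directly from the binomial distribution; as a quick check (simple arithmetic, not part of our pipeline):

```python
# Among n independent systems, each with false-alarm probability p,
# the number of spurious detections is Binomial(n, p).
n, p = 100, 0.02
expected = n * p                       # mean number of false alarms
p_at_least_one = 1.0 - (1.0 - p) ** n  # chance of one or more
```

so about two spurious $\sim 2\%$-FAP systems are expected in a sample of 100, and the chance of at least one is about 87\%.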
\begin{deluxetable}{llrrcccllrcllrc}
\tabletypesize{\tiny}
\tablewidth{0pc}
\tablecolumns{15}
\tablecaption{Properties of Three Stars Hosting New Substellar Companions\label{starchar}}
\tablehead{ {HD}&{Hip
\#}&{RA}&{Dec.}&{\bv}&{$V$}&{Distance}&{\ensuremath{T_{\mbox{\scriptsize eff}}}}&{$\log{g}$}&{[Fe/H]}&{$v\sin{i}$}&{Mass}&{S}&{\ensuremath{\Delta\mv}}&{jitter}\\
&
& {(J2000)} & {(J2000)} & && {(pc)}
&{(K)} & {(cm $\mbox{s}^{-2}$)} & & {$(\mbox{m} \persec)$}
& {(\ensuremath{\mbox{M}_{\mbox{$\odot$}}})} & & &{(m \ensuremath{\mbox{s}^{-1}})}}
\startdata
24040 & 17960 & 03 50 22.968 & +17 28 34.92 & 0.65 & 7.50 & 46.5(2.2) & 5853(44) & 4.361(70) & 0.206(30) & 2.39(50) & 1.18 & 0.15 & 0.65 & 5.7\\
154345 & 83389 & 17 02 36.404 & +47 04 54.77 & 0.73 & 6.76 & 18.06(18) & 5468(44) & 4.537(70) & -0.105(30) & 1.21(50) & 0.88 & 0.18 & -0.21 & 5.7\\
\nodata & 14810 & 03 11 14.230 & +21 05 50.49 & 0.78 & 8.52 & 52.9(4.1) & 5485(44) & 4.300(70) & 0.231(30) & 0.50(50) & 0.99 & 0.16 & 0.64 & 3.5\\
\enddata
\tablecomments{For succinctness, we express uncertainties using parenthetical notation, where the least
significant digit of the uncertainty, in parentheses, and that of the quantity
are understood to have the same place value. Thus, ``$0.100(20)$'' indicates
``$0.100 \pm 0.020$'', ``$1.0(2.0)$'' indicates ``$1.0 \pm 2.0$'', and
``$1(20)$'' indicates ``$1 \pm 20$''. \\ Data from columns 3--5, and 12 are from Hipparcos \citep{PerrymanESA},
columns 6--10 are from the SPOCS catalog \citep{SPOCS}, column 11 is
from \citet{Wright04}, and column 13 was derived from using the
formula in \citet{Wright05}. Columns 6--10 for HIP 14810 were derived
with the same methods used for the SPOCS catalog.}
\end{deluxetable}
\begin{deluxetable}{l@{ }rccccccccccr}
\tabletypesize{\tiny}
\tablewidth{0pc}
\tablecolumns{13}
\tablecaption{Updated Orbital Fits for 9 Exoplanets\label{orbitupdates}}
\tablehead{
\multicolumn{2}{c}{Planet} &{Per} &{K}&{e}
&{$\omega$} &{$T_{\mbox{p}}$} &{trend}
&{\ensuremath{m \sin{i}}} &{a} &{r.m.s.}
&{$\sqrt{\chi^2_\nu}$}&{N$_{\mbox{obs}}$}\\
& &{(d)}&{(\mbox{m} \persec)}&
& {($\deg$)}&{(JD-2440000)}&{(m/s/yr)}
&{($\mbox{\scriptsize M}_{\mbox{\tiny Jup}}$)} &{(AU)}&{(\mbox{m} \persec)}&
& }
\startdata
HIP 14810 & b & 6.6742(20) & 428.3(3.0) & 0.1470(60) & 158.6(2.0) &
13694.588(40) & & 3.91(55) & 0.0692(40) & 5.1 & 1.4 & 30 \\
HIP 14810 & c & 95.2914(20) & 37.4(3.0) & 0.4088(60) & 354.2(2.0) &
13679.575(40) & & 0.76(12) & 0.407(23) & 3.3 & 0.77 & 21\\
HD 49674 & b & 4.9437(23) & 13.7(2.1) & 0.29(15) & 283 & 11882.90(86)
& 2.6(1.1) & 0.115(16) & 0.0580(33) & 4.4 & 0.62 & 39\\
HD 83443 & b & 2.985625(60) & 56.4(1.4) & 0.008(25) & 24 &
11211.04(82) & 2.40(79) & 0.400(34) & 0.0406(23) & 8.2 & 0.93 & 51\\
GJ 436 & b & 2.643859(74) & 18.35(80) & 0.145(52) & 353(24) &
11551.72(12) & 1.42(35) & 0.0682(63) & 0.0278(16) & 4.2 & 0.93 & 60\\
HD 102117 & b & 20.8079(55) & 11.91(77) & 0.106(70) & 283 &
10942.9(3.0) & -0.91(26) & 0.172(18) & 0.1532(88) & 3.3 & 0.83 & 45 \\
HD 168443 & b & 58.11289(86) & 475.9(1.6) & 0.5286(32) & 172.87(94) &
10047.387(34) & & 8.02(65) & 0.300(17) & 4.1 & 0.97 & 109 \\
HD 168443 & c & 1749.5(2.4) & 298.0(1.2) & 0.2125(15) & 65.07(21) &
10273.0(4.6) & & 18.1(1.5) & 2.91(17) & 4.1 & 0.97 & 109\\
HD 195019 & b & 18.20163(40) & 272.3(1.4) & 0.0140(44) & 222(20) &
11015.0(1.2) & 1.31(51) & 3.70(30) & 0.1388(80) & 16 & 1.5 & 154\\
\enddata
\tablecomments{For succinctness, we express uncertainties using parenthetical notation, where the least
significant digit of the uncertainty, in parentheses, and that of the quantity
are understood to have the same place value. Thus, ``$0.100(20)$'' indicates
``$0.100 \pm 0.020$'', ``$1.0(2.0)$'' indicates ``$1.0 \pm 2.0$'', and
``$1(20)$'' indicates ``$1 \pm 20$''.}
\end{deluxetable}
\section{Constraining Long Period Companions}
\label{mapsection}
\subsection{The Problem of Incomplete Orbits}
It is difficult to properly characterize the orbit of an
exoplanet when the data do not span at least one complete revolution.
After one witnesses a complete orbit of the planet in a single-planet
system, subsequent orbits
should have exactly the same shape (absent strong planet-planet
interactions), and so one can interpret deviations as
the effects of an additional companion. Before witnessing one
complete orbit, one can easily misinterpret the signature of an additional
companion as it is absorbed into the orbital solution for the
primary companion. Even when only one planet is present, small portions of
single Keplerian curves can easily mimic portions of other Keplerians
with very different orbital elements.
\subsection{Constraining \ensuremath{m \sin{i}}\ and $P$ \label{maps}}
When an RV curve shows significant
curvature, it may be possible to constrain the minimum mass (\ensuremath{m \sin{i}})
and orbital period of the companion. \citet{Brown04} discussed the
problem extensively, and \citet{Wittenmyer06} studied the
significance of non-detections in the McDonald Observatory planet
search without assuming circular orbits by injecting artificial RV signals into
program data to determine the strength of a just-recoverable signal.
\citet{Wittenmyer06} reasonably assigned a broad range of eccentricities, $0 < e <
0.6$, with an upper limit they justified by the fact that over 90\% of
all known exoplanets have $e < 0.6$ \citep{Butler06}. This upper
limit greatly reduces the number of pathological solutions
to a given RV set. Below, we explore the nature of limits on mass
and period implied by a given data set and how constraining $e$ can
improve those limits.
Since \ensuremath{m \sin{i}}, not $K$, is the astrophysically interesting quantity in
exoplanet detection, it is useful to transform into $P$, $e$, \ensuremath{m \sin{i}}\
coordinates when considering constraints. The minimum mass of a
companion can be calculated from the {\it mass function}, $f(m)$, and
the stellar mass, according to the relation:
\begin{equation}
\label{massfn}
f(m) = \frac{m^3 \sin^3{i}}{(m+M_*)^2} =
\frac{P K^3(1-e^2)^{\frac{3}{2}}}{2 \pi G}
\end{equation}
where, in the minimum mass case (where $\sin{i}=1$), we set $m$ equal to
\ensuremath{m \sin{i}}. This relation allows us to fit for the minimum mass (which
we refer to as \ensuremath{m \sin{i}}\ for brevity), eliminating the orbital
parameter $K$.
Using Eq.~\ref{massfn}, we can find the best-fit Keplerian RV curve
across $P-\ensuremath{m \sin{i}}$ space, allowing $e$, $\omega$, and $\gamma$
(the RV zero point) to vary at many fixed values of $P$ and \ensuremath{m \sin{i}}\ to map
$\chi^2$.
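To make Eq.~\ref{massfn} concrete: inverting it at fixed $\sin{i} = 1$ gives the semi-amplitude $K$ implied by a trial ($P$, $e$, \ensuremath{m \sin{i}}), which is what allows the fit to step through the $P$--\ensuremath{m \sin{i}}\ grid directly. A minimal numerical sketch (SI constants; illustrative, not our fitting code):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
YEAR = 3.156e7       # s

def semi_amplitude(p_yr, msini_mjup, e, mstar_msun):
    """K (m/s) implied by the mass function, taking sin(i) = 1."""
    p = p_yr * YEAR
    m = msini_mjup * M_JUP
    mtot = mstar_msun * M_SUN + m
    return ((2.0 * math.pi * G / p) ** (1.0 / 3.0)
            * m / (mtot ** (2.0 / 3.0) * math.sqrt(1.0 - e * e)))

# Sanity check: Jupiter induces K of roughly 12.5 m/s on the Sun
k_jup = semi_amplitude(11.86, 1.0, 0.048, 1.0)
```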
\section{Two New Substellar Companions with Incomplete Orbits}
\label{twonew}
Here we consider HD 24040, a metal-rich ($\ensuremath{[\mbox{Fe}/\mbox{H}]} = 0.21$) G0 V star at 46 pc
(stellar characteristics summarized in Table~\ref{starchar}). This
star shows RV variations consistent with a planetary companion with $P \sim 15$
y and $\ensuremath{m \sin{i}} \sim 7\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ (Figure~\ref{24040}), although longer
orbital periods and minimum masses as high as $\ensuremath{m \sin{i}} \sim 30\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ cannot
be ruled out. The RV data for HD
24040 appear in Table~\ref{RVHD24040}.
Here and in \S \ref{incompletes}, we use $\chi^2$ as a merit function and infer parameters for
acceptable fits from increases of this function by 1, 4
and 9, which correspond to 1-, 2-, and 3-sigma confidence levels for
systems with Gaussian noise. Because stellar jitter provides a source
of pseudo-random noise which may vary on a stellar rotation timescale, the
noise in RV residuals may be non-Gaussian. Thus, to the degree that
the RV residuals are non-Gaussian, the translation of
these confidence limits into precise probabilities is not
straightforward.
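These $\Delta\chi^2$ thresholds translate into the familiar Gaussian levels through standard single-parameter statistics; as a quick reference sketch (standard statistics, not specific to our analysis):

```python
import math

def enclosed_prob(delta_chi2):
    """Two-sided Gaussian probability enclosed at n sigma, where
    Delta-chi^2 = n^2 for a single interesting parameter."""
    n = math.sqrt(delta_chi2)
    return math.erf(n / math.sqrt(2.0))

levels = {d: enclosed_prob(d) for d in (1, 4, 9)}
```

which evaluates to roughly 68.3\%, 95.4\%, and 99.7\% for $\Delta\chi^2 = 1$, 4, and 9, the usual 1-, 2-, and 3-sigma levels, subject to the Gaussian-noise caveat above.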
\begin{deluxetable}{lrc}
\tablewidth{0pc}
\tablecolumns{3}
\tablecaption{RV Data for HD 24040\label{RVHD24040}}
\tablehead{{Time} &{Radial Velocity} & {Unc.}\\
{(JD-2440000)} &{(m/s)}&{(m/s)}}
\startdata
10838.773206 & -37.1 &1.4\\
11043.119653 & -25.8 &1.5\\
11072.039039 & -24.3 &1.4\\
11073.002315 & -26.8 &1.2\\
11170.876921 & -9.9 &1.4 \\
11411.092975 & 3.6 &1.6 \\
11550.824005 & 16.4 &1.3 \\
11551.863449 & 16.5 &1.2 \\
11793.136725 & 42.2 &1.2 \\
11899.945741 & 50.7 &1.1 \\
12516.065405 & 74.4 &1.3 \\
12575.951921 & 87.6 &1.5 \\
12854.115278 & 81.4 &1.2 \\
12856.115671 & 82.1 &1.1 \\
13071.741674 & 59.4 &1.2 \\
13072.799190 & 55.6 &1.3 \\
13196.130000 & 54.2 &1.3 \\
13207.120116 & 57.3 &1.3 \\
13208.125625 & 54.0 &1.2 \\
13241.091852 & 57.5 &1.2 \\
13302.947025 & 49.9 &1.3 \\
13339.945972 & 46.1 &1.2 \\
13368.865139 & 48.4 &1.2 \\
13426.809792 & 36.5 &1.1 \\
13696.904468 & 18.0 &1.1 \\
13982.061586 & 6.8 &1.2 \\
\enddata
\end{deluxetable}
\begin{figure}
\plotone{f2.eps}
\caption[Radial velocity curve for HD 24040]{RV curve for HD 24040
with data from Keck.
The best-fit Keplerian is poorly
constrained due to incomplete coverage of the orbit. The fit shown
here is for $P =16.5$ y and $\ensuremath{m \sin{i}} = 6.9 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$, one of a family
of adequate solutions.\label{24040}}
\end{figure}
In this case, we have enough RV information to put an upper limit on
\ensuremath{m \sin{i}}. Figure~\ref{24040_contour} shows $\chi^2$ for best-fit
orbits in the $P-\ensuremath{m \sin{i}}$ plane. Fits with $P$ as low as 10 y and
\ensuremath{m \sin{i}}\ as low as 5 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ are allowed. Interestingly, the data
(following the middle, $\chi^2 = \chi^2_{\mbox{min}} +4$ contour)
exclude orbits with $\ensuremath{m \sin{i}} > 30 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$, providing a
``maximum minimum-mass''. Since, without an
assumption for the eccentricity (which we will make below), we cannot exclude
orbits with \ensuremath{m \sin{i}}\ as high as 30 \ensuremath{\mbox{M}_{\mbox{Jup}}}, there is a chance that this
companion to HD 24040 is a brown dwarf, or even stellar.
\begin{figure}
\plotone{f3.eps}
\caption[Contours of $\chi^2$ and $e$ of best-fit orbits
to the radial velocity data of HD 24040]{Contours of $\chi^2$ and $e$ in $P-\ensuremath{m \sin{i}}$ space of best-fit orbits
to the RV data of HD 24040 (Fig.~\ref{24040}), with $\chi^2$ in grayscale. The solid contours mark
the levels where $\chi^2$ increases by 1, 4 and 9 from the
minimum. The dashed contours mark levels
of the eccentricity of 0.2, 0.6, and 0.9.
Planets with $e > 0.6$ are rare, implying that this object is
unlikely to have a period longer than 100 y. The orbit is largely unconstrained, but \ensuremath{m \sin{i}}\
has a maximum value around 20 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ for orbits with $e < 0.6$. The position of
the cross at 16.5 y and 6.9 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ represents the solution plotted in
Fig.~\ref{24040}, one in a family of adequate orbital solutions.\label{24040_contour}}
\end{figure}
A similar case is HD 154345, a G8 V star at 18 pc (stellar characteristics
summarized in Table~\ref{starchar}). This star shows RV variations
remarkably similar to those of HD 24040 (Figure~\ref{154345}), but
with an amplitude about 6 times smaller. In this case, the maximum
\ensuremath{m \sin{i}}\ is only around 10 \ensuremath{\mbox{M}_{\mbox{Jup}}}, giving us confidence that this
object is likely a true exoplanet, and masses as low as 1 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ are
allowed. The RV data for HD 154345 are in Table~\ref{RVHD154345}.
We summarize the orbital constraints for these objects in
Table~\ref{masslimits}.
\begin{deluxetable}{lrc}
\tablewidth{0pc}
\tablecaption{RV Data for HD 154345\label{RVHD154345}}
\tablecolumns{3}
\tablehead{
\colhead{Time} &\colhead{Radial Velocity} & \colhead{Unc.}\\
\colhead{(JD-2440000)} &\colhead{(m/s)}&\colhead{(m/s)}}
\startdata
10547.110035 & 6.8 &1.4 \\
10603.955845 & 8.6 &1.4 \\
10956.015625 & 15.1 &1.5 \\
10982.963634 & 13.1 &1.4 \\
11013.868657 & 16.3 &1.5 \\
11311.065486 & 16.6 &1.6 \\
11368.789491 & 18.7 &1.5 \\
11441.713877 & 23.0 &1.4 \\
11705.917836 & 30.3 &1.5 \\
12003.078183 & 30.4 &2.3 \\
12098.916539 & 37.1 &1.5 \\
12128.797813 & 34.7 &1.7 \\
12333.173299 & 38.1 &1.6 \\
12487.860197 & 35.5 &1.6 \\
12776.985463 & 29.9 &1.6 \\
12806.951852 & 19.5 &1.6 \\
12833.801030 & 27.3 &1.4 \\
12848.772037 & 25.3 &1.5 \\
12897.776562 & 26.5 &1.5 \\
13072.046921 & 18.9 &1.6 \\
13074.077766 & 21.2 &1.4 \\
13077.128090 & 20.5 &1.4 \\
13153.943171 & 15.2 &1.6 \\
13179.992454 & 20.6 &1.5 \\
13195.819190 & 17.5 &1.4 \\
13428.162502 & 10.09 &0.78\\
13547.914433 & 11.44 &0.80\\
13604.829999 & 6.08 &0.78 \\
13777.155347 & 7.5 &1.5 \\
13807.077257 & 2.4 &1.4 \\
13931.955714 & 1.33 &0.72 \\
13932.913019 & 1.98 &0.70 \\
\enddata
\end{deluxetable}
\begin{deluxetable}{cccc}
\tablecolumns{4}
\tablewidth{0pc}
\tablecaption{Mass constraints for some substellar companions with
incomplete orbits\label{masslimits}}
\tablehead{{Object} &{Per} &{\ensuremath{m \sin{i}}} &{a} \\
&{(y)}&{(\ensuremath{\mbox{M}_{\mbox{Jup}}})}&{(AU)}}
\startdata
HD 24040 b & 10 --- 100 & 5 --- 20 & 5 --- 23\\
HD 68988 c & 11 --- 60 & 11 --- 20 & 5 --- 7 \\
HD 154345 b & 7 --- 100 & 0.8 --- 10 & 4 --- 25\\
HD 187123 c & 10 --- 40 & 2 --- 5 & 5 --- 12\\
\enddata
\tablecomments{These constraints correspond to the extrema of the region bounded by
the $\chi^2_{\mbox{min}} +4$ contour in $P-\ensuremath{m \sin{i}}$ space for
orbits with $e < 0.6$.}
\end{deluxetable}
\begin{figure}
\plotone{f4.eps}
\caption[Radial velocity curve for HD 154345]{RV curve for HD 154345
with data from Keck. The best-fit Keplerian is poorly
constrained due to incomplete coverage of the orbit. The fit shown
here is for $P =35.8$ y and $\ensuremath{m \sin{i}} = 2.2\, \ensuremath{\mbox{M}_{\mbox{Jup}}}$ (marked in Fig.~\ref{154345_contour}), one in a family
of adequate orbital solutions. \label{154345}}
\end{figure}
\begin{figure}
\plotone{f5.eps}
\caption[Contours of $\chi^2$ and $e$ of best-fit orbits
to the radial velocity data of HD 154345]{Contours of $\chi^2$ and $e$ in $P-\ensuremath{m \sin{i}}$ space of best-fit orbits
to the RV data of HD 154345 (Fig.~\ref{154345}), with $\chi^2$ in
grayscale. The solid contours mark
the levels where $\chi^2$ increases by 1, 4 and 9 from the
minimum. The dashed contours mark levels
of the eccentricity of 0.2, 0.6, and 0.9.
Planets with $e > 0.6$ are rare, implying that this exoplanet is
unlikely to have a period longer than 100 y. The orbit is largely unconstrained, but \ensuremath{m \sin{i}}\
has a maximum value around 10 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ for orbits with $e < 0.6$. The
white cross at 36 y and 2.2 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ represents the solution shown in Fig.~\ref{154345}.\label{154345_contour}}
\end{figure}
We can put more stringent constraints on these orbits by noting that
ninety percent of all known exoplanets have $e < 0.6$
\citep{Butler06}. For both HD 24040 b and HD 154345 b, the
high-period solutions all have high eccentricities (the dashed
contours in Figs.~\ref{24040_contour} and \ref{154345_contour}). If,
following \citet{Wittenmyer06}, we therefore assume that
$e < 0.6$ for these objects, and that the true values of \ensuremath{m \sin{i}}\ and
$P$ lie within
the limits of the middle ($\chi^2 = \chi^2_{\mbox{min}}+4$) contour,
then we can constrain
$5 \,\ensuremath{\mbox{M}_{\mbox{Jup}}} < \ensuremath{m \sin{i}} < 20 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ and $10 < P < 100$ y for HD 24040 b, and $0.8
\,\ensuremath{\mbox{M}_{\mbox{Jup}}} < \ensuremath{m \sin{i}} < 10 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ and $7 < P < 100$ y for HD 154345 b.
\section{Mining Velocity Residuals for Additional Exoplanets}
\label{mining}
\subsection{Velocity Residuals Suggesting Additional Companions}
For exoplanetary systems in which an additional, low-amplitude
signal is not well-characterized by just a linear trend --- for instance,
where there is significant curvature (e.g. HD 13445 or HD 68988) or even multiple
orbits (e.g. GJ 876) --- a full, multi-planet fit is needed to properly
characterize the system. In this case, we can apply an FAP analysis
similar to the one in \S~\ref{FAPtrend} testing the $(N+1)$-planet hypothesis
versus the null hypothesis of $N$ planets plus a trend plus
noise, where $N$ is the number of previously confirmed planets.
This is a much more computationally intensive procedure than that of \S~\ref{FAPtrend}, since we are
introducing 5 new, non-linear, highly covariant parameters ($P, e, \omega, T_{\mbox{p}}$,
and $K$), so we have performed only 50--100 trials. In most cases the low-amplitude signal we seek is much weaker
than that of the known planet(s). This means we have good initial guesses
for the orbital parameters of the established exoplanets, and that
those parameters are usually rather insensitive to those of the
additional companion, easing the difficulty of the simultaneous
11-parameter fit (16-parameter for existing double systems).
As in \S~\ref{FAPtrend}, we calculate the improvement in the
goodness-of-fit parameter,
$\Delta\chi^2_{\nu} =
\chi^2_{\nu,N+1\,\mbox{planets}}-\chi^2_{\nu,N\, \mbox{planets+trend}}$ with the introduction of an additional exoplanetary companion compared
to a fit including only an additional trend.
We compare this reduction to that of mock data sets bootstrapped as the sum of
the best-fit solution with a trend, plus noise, drawn, with replacement,
from the residuals of the actual data to this fit. We then construct
an FAP as the fraction of mock sets that saw a greater reduction in
$\chi^2_{\nu}$ with the introduction of an additional planet than the
genuine data set.
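Schematically, the bootstrap just described looks like the following sketch, in which a trend-plus-sinusoid fit stands in for the full $(N+1)$-Keplerian fit (all function names and the simplified models are illustrative only):

```python
import numpy as np

def fit_trend(t, y):
    """Null hypothesis: linear trend.  Returns (model, chi^2)."""
    model = np.polyval(np.polyfit(t, y, 1), t)
    return model, np.sum((y - model) ** 2)

def fit_trend_plus_sine(t, y, periods=np.linspace(2.0, 20.0, 40)):
    """Alternative: trend plus a circular-orbit stand-in (sinusoid),
    fit linearly at each trial period."""
    best_model, best_chi2 = None, np.inf
    for p in periods:
        A = np.column_stack([t, np.ones_like(t),
                             np.cos(2 * np.pi * t / p),
                             np.sin(2 * np.pi * t / p)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        model = A @ coef
        chi2 = np.sum((y - model) ** 2)
        if chi2 < best_chi2:
            best_model, best_chi2 = model, chi2
    return best_model, best_chi2

def bootstrap_fap(t, rv, n_trials=100, seed=1):
    """FAP for the alternative vs. the null hypothesis: mock sets are the
    null model plus noise resampled, with replacement, from the null-fit
    residuals."""
    rng = np.random.default_rng(seed)
    null_model, chi2_null = fit_trend(t, rv)
    _, chi2_alt = fit_trend_plus_sine(t, rv)
    observed_drop = chi2_null - chi2_alt
    resid = rv - null_model
    hits = 0
    for _ in range(n_trials):
        mock = null_model + rng.choice(resid, size=resid.size)
        _, c_null = fit_trend(t, mock)
        _, c_alt = fit_trend_plus_sine(t, mock)
        if c_null - c_alt >= observed_drop:
            hits += 1
    return hits / n_trials
```

A mock set built from resampled residuals has any coherent signal scrambled in time, so only noise-level improvements survive the refit; a genuine additional companion then yields a drop in $\chi^2$ that few mocks can match.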
A low FAP for the presence of a second planet is not tantamount to the
detection of an additional exoplanet. It is only a sign that the null
hypothesis is unlikely, i.e. that the distribution of
residuals is not representative of the actual noise in the system or
that the presumed orbital solution from which the residuals were drawn
is in error. This would be the case if, for instance, the
residuals are correlated due to non-Keplerian RV variations (such as
systematic errors or astrophysical jitter).
The fits discussed here are purely Keplerian and not dynamical. In
particular, fits which produce unstable or unphysical orbits are
allowed. More sophisticated, Newtonian fits \citep[e.g.][]{Rivera05}
would better constrain the orbits of multiple planet systems.
\subsection{Long-Period Companions with Incomplete Orbits}
\label{incompletes}
We have identified 8 other systems in which the FAP for an additional
Keplerian vs. a simple trend is below 2\%: HD 142, HD
13445, HD 68988, 23 Lib (= HD 134987), 14 Her,
$\tau$ Boo (= HD 120136), HD 183263, and HD 187123. In addition, we
have identified a ninth system, HD 114783, which has a compelling
second Keplerian despite a slightly larger FAP (6\%). We summarize the
orbital constraints for these objects in Table~\ref{masslimits}.
--- HD 142: Most of the RV data for HD 142 show a simple linear trend
superimposed on the known $K=34$ m/s, 350-d orbit
\citep{Tinney02b}. HD 142 is known to have a
stellar companion ($V=10$)
\citep{Proveda94}, which could explain the trend. The
first two data points, taken in 1998-9, are
significantly low, producing a low FAP for curvature ($< 1\%$).
HD 142 has \bv\ $= 0.52$, indicating it is a late F or early G star,
suggesting it may have moderate jitter ($\sim 5$ m \ensuremath{\mbox{s}^{-1}};
\citealt{Wright05}), so we view the low FAP for curvature,
apparently based on only two low points, with suspicion. If the
curvature is real, it is consistent with an exoplanet with period
longer than the span of the observations ($P > 10$ y) with a
minimum mass of at least 4 \ensuremath{\mbox{M}_{\mbox{Jup}}}.
\begin{figure}
\plotone{f6.eps}
\caption[Radial velocity curve for HD 142]{RV curve for HD 142, a multiple-companion system, with data from AAT. The previously known inner
planet has $P=350$ d, and the outer companion is poorly constrained,
but consistent with the known stellar companion.
The data are inconsistent with a linear trend, mostly because of the
first two data points.
\label{142}}
\end{figure}
--- HD 13445 (= GL 86) has a known planet with $P=15.76$ d
    \citep{Queloz00,Butler06}. Superimposed on that Keplerian velocity
    curve is a velocity trend of roughly -94 m/s/yr during the past 9
    years, apparently consistent with the brown dwarf companion
    previously reported by \citet{Els01,Chauvin06,Queloz00}. There is a hint of
    curvature in these residuals to the inner planet, but not enough
    to put meaningful constraints on this outer object beyond the
    fact that its period is longer than the span of the
    observations ($\sim 10$ y) and $\ensuremath{m \sin{i}} > 22$ \ensuremath{\mbox{M}_{\mbox{Jup}}}.
--- HD 68988 shows definite signs of curvature in the residuals to the
1.8\ \ensuremath{\mbox{M}_{\mbox{Jup}}}\ inner planet (as Fig.~\ref{68988} shows).
Fig.~\ref{68988_contour} shows the outer companion has $\ensuremath{m \sin{i}} <
30 \ensuremath{\mbox{M}_{\mbox{Jup}}}$, and the assumption of $e < 0.6$, using the middle
contour, further restricts $\ensuremath{m \sin{i}} < 20 \ensuremath{\mbox{M}_{\mbox{Jup}}}$, and $P < 60$ y.
\begin{figure}
\plotone{f7.eps}
\caption[Radial velocity curve for HD 68988]{RV curve for HD 68988,
a multiple-companion system, with data from Keck. The previously known inner
planet has $P=6.28$ d, and the outer companion is poorly
constrained, but likely has $\ensuremath{m \sin{i}} < 20 \ensuremath{\mbox{M}_{\mbox{Jup}}}$ and $P < 60$ y.
\label{68988}}
\end{figure}
\begin{figure}
\plotone{f8.eps}
\caption[Contours of $\chi^2$ and $e_c$ for
the best double-Keplerian fits to the radial velocity data of HD 68988]{Contours of $\chi^2$ and $e_c$ in $P_c-(\ensuremath{m \sin{i}})_c$ space for
the best double-Keplerian fits to the RV data of HD 68988
(Fig.~\ref{68988}), with $\chi^2$ in grayscale. The solid contours mark
the levels where $\chi^2$ increases by 1, 4 and 9 from the
minimum. The dashed contours mark levels
of the eccentricity of 0.2, 0.6, and 0.9.
Assuming $e < 0.6$, we can constrain $6 \,\ensuremath{\mbox{M}_{\mbox{Jup}}} < \ensuremath{m \sin{i}} < 20
\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ and $11 < P < 60$ y.
\label{68988_contour}}
\end{figure}
--- HD 114783 shows curvature in its residuals, and may
have experienced both an RV minimum (in 2000) and maximum (in
2006) as the RV curve in Fig.~\ref{114783} shows. The data are
only moderately inconsistent with a linear trend, however (FAP =
6\%), indicating that the outer companion's orbit is still
underconstrained.
\begin{figure}
\plotone{f9.eps}
\caption[Radial velocity curve for HD 114783]{RV curve for HD
114783, a multiple-companion system, with data from Keck. The previously known inner
planet has $P=495$ d and $\ensuremath{m \sin{i}} = 1.1 \ensuremath{\mbox{M}_{\mbox{Jup}}}$. The residuals are
only moderately inconsistent with a linear trend (FAP $= 6\%$),
indicating that the outer companion is poorly constrained.
\label{114783}}
\end{figure}
--- 23 Lib (= HD 134987) shows signs of curvature in the residuals to
    the known inner planet. The signal appears as a change in the
level of otherwise flat residuals between 2000 and 2002 of 15
m/s (see Fig.~\ref{134987}). This suggests an outer planet on a
rather eccentric orbit
which reached periastron in 2001. The small magnitude of this
change in RV suggests a low-mass object, but the
incomplete nature of this orbit makes us less than certain that it
is due to an exoplanet.
\begin{figure}
\plotone{f10.eps}
\caption[Radial velocity curve for 23 Lib]{RV curve for 23 Lib (= HD
134987), a multiple-companion system, with data from Keck and AAT. The previously known inner
planet has $P=258$ d and $\ensuremath{m \sin{i}} = 1.62\,\ensuremath{\mbox{M}_{\mbox{Jup}}}$. The outer companion
is poorly constrained. The orbital parameters of the inner planet
are not significantly changed by the two-Keplerian fit.
\label{134987}}
\end{figure}
--- 14 Her (= HD 145675): This star has a known trend \citep{Naef04} and has been
analyzed by \citet{Gozdziewski06b}
as a possible resonant multiple system. The previously-known
planet has
$\ensuremath{m \sin{i}} = 4.9$ \ensuremath{\mbox{M}_{\mbox{Jup}}}\ and $P=4.8$ y, but the character of the second companion
is uncertain. Combining our data with the published ELODIE data from the
Geneva Planet Search \citep{Naef04} (Fig.~\ref{14Her} ) provides a
good picture of the system. The character of the orbit of the
outer planet is unconstrained, and several equally acceptable but
    qualitatively distinct solutions exist. One is a long-period,
    nearly circular orbit like the one shown in Fig.~\ref{14Her}, with a mass
    near 2 \ensuremath{\mbox{M}_{\mbox{Jup}}}. Other solutions include a 3:1 resonance with the
inner planet. The next few years of observation should break this
degeneracy. The degeneracy may also be broken by high contrast,
high resolution imaging, and we suggest that such attempts be made on this interesting system.
\begin{figure}
\plotone{f11.eps}
\caption[Radial velocity curve for 14 Her]{RV curve for 14 Her (= HD
145675), a system with multiple
companions. Crosses represent data from the ELODIE
instrument operated by the Geneva Planet Search
\citep[taken from][]{Naef04}, and large filled circles represent
data taken at Keck Observatory by the California and Carnegie Planet
Search \citep{Butler06}. Error bars represent quoted errors on
individual velocities; for some points the error bars are smaller
than the plotted points. The combined data set shows a long-period companion
with $P > 12$ y and $\ensuremath{m \sin{i}} > 5$ \ensuremath{\mbox{M}_{\mbox{Jup}}}. The previously known inner planet
has $P=4.8$ y and $\ensuremath{m \sin{i}} = 4.9$ \ensuremath{\mbox{M}_{\mbox{Jup}}}.
\label{14Her}}
\end{figure}
--- HD 183263 shows definite signs of curvature in the residuals to
the known inner planet (as Fig.~\ref{183263}
shows), but too little to constrain the mass of the distant
    companion. Fig.~\ref{183263_contour} shows that there is little
    meaningful constraint on the orbit beyond $P > 7$ y and $\ensuremath{m \sin{i}} >
    4 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$. Even the assumption of $e < 0.6$ allows for $\ensuremath{m \sin{i}} >
    13 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$, so the planetary nature of the companion is very uncertain.
\begin{figure}
\plotone{f12.eps}
\caption[Radial velocity curve for HD 183263]{RV curve for HD
183263, a multiple-companion system, with data from Keck. The previously known inner
planet has $P=635$ d and $\ensuremath{m \sin{i}} = 3.8\, \ensuremath{\mbox{M}_{\mbox{Jup}}}$, and the outer companion is poorly constrained.
\label{183263}}
\end{figure}
\begin{figure}
\plotone{f13.eps}
\caption[Contours of $\chi^2$ and $e_c$ for
the best double-Keplerian fits to the radial velocity data of HD
183263]{Contours of $\chi^2$ and $e_c$ in $P_c-(\ensuremath{m \sin{i}})_c$ space for
the best double-Keplerian fits to the RV data of HD 183263
(Fig.~\ref{183263}), with $\chi^2$ in grayscale. The solid contours mark
the levels where $\chi^2$ increases by 1, 4 and 9 from the
minimum. The dashed contours mark levels of the eccentricity of
0.2, 0.6, and 0.9. $P$ and
\ensuremath{m \sin{i}}\ for this companion are poorly constrained.
\label{183263_contour}}
\end{figure}
--- $\tau$ Boo (= HD 120136) has residuals to the fit for the known
    inner planet which show evidence of a
    long-period companion, as discussed elsewhere \citep{Butler06}.
Analysis of the distant companion is complicated by the lower
quality of the data during the apparent periastron in 1990. The
current best fit suggests a period greater than 15 years, but
    is otherwise unconstrained. \citet{Proveda94} report that $\tau$
    Boo has a faint ($V=10.3$) companion (sep. 5.4\arcsec) which may be the source of
    the RV residuals.
--- HD 187123 is known to host a 0.5 \ensuremath{\mbox{M}_{\mbox{Jup}}}\ ``Hot Jupiter'' in a 3 day
orbit \citep{Butler98}. Observations over the subsequent 8 years
    have revealed a trend of $-7.3$ m/s/yr in the residuals to a one-planet
    fit \citep{Butler06}. In 2001, the trend began to show signs of
    curvature, and in 2006 it became clear that the residuals had
    passed through an RV minimum (see Fig.~\ref{187123}).
Fig.~\ref{187123_contour} shows the $\chi^2$ and $e$ contours in
$P-\ensuremath{m \sin{i}}$ space. In this case, the $e=0.6$ contour and middle
$\chi^2$ contour provide the following constraints: $2 \,\ensuremath{\mbox{M}_{\mbox{Jup}}} <
\ensuremath{m \sin{i}} < 5 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$ and $10 < P < 40$ y.
\begin{figure}
\plotone{f14.eps}
\caption[Radial velocity curve for HD 187123]{RV curve for HD
187123, with data from Keck, showing the 0.5 \ensuremath{\mbox{M}_{\mbox{Jup}}} ``Hot Jupiter'' and the outer
companion of uncertain period and mass.\label{187123}}
\end{figure}
\begin{figure}
\plotone{f15.eps}
\caption[Contours of $\chi^2$ and $e_c$ for
the best two-planet fits to the radial velocity data of HD 187123]{Contours of $\chi^2$ and $e_c$ in $P_c-(\ensuremath{m \sin{i}})_c$ space for
the best two-planet fits to the RV data of HD 187123
(Fig.~\ref{187123}), with $\chi^2$ in grayscale. The solid contours mark
the levels where $\chi^2$ increases by 1, 4 and 9 from the
minimum. The dashed contours mark levels
of the eccentricity of 0.2, 0.6, and 0.9.
Planets with $e > 0.6$ are rare, implying that this object is
unlikely to have a period longer than 40 y or \ensuremath{m \sin{i}}\ greater than
5 \ensuremath{\mbox{M}_{\mbox{Jup}}}. \label{187123_contour}}
\end{figure}
\subsection{Short-Period Companions to Stars with Known Planets}
\label{lowamps}
We now consider known single-planet systems with low FAPs for second planets
whose best-fit solutions have periods shorter than the span of
observations. We have identified eight such systems, and we discuss
them below.
Five stars appear to exhibit coherent residuals (FAP $<2\%$)
to a one-planet fit, but in all cases the best two-Keplerian fits are not
compelling (as noted in \S~\ref{foundtrends}, in a sample of 100 known
planet-bearing stars, we expect around 2 to exhibit residuals coherent
at this level purely by chance). These possible companions do not appear in
Table~\ref{orbitupdates} because the tentative nature of these signals does
not warrant publication of a full orbital solution with errors.
--- HD 11964 was announced in \citet{Butler06} as having a planet with
a 5.5 y orbital period and a linear
    trend. We find the FAP for the linear trend to be 6\%, suggesting
    that while the inner planet is real, the trend is not. We find the
    FAP for a second planet to be $< 2\%$, and a best-fit solution
    finds an inner planet with $P=37.9$ d. This star
sits 2 magnitudes above the
main sequence, and the residuals to the known planet are
consistent with the typical jitter for subgiants of 5.7 m/s
\citep{Wright05}, so this signal could represent some sort of
correlated noise. This very low amplitude
signal ($K= 5.6$ m/s) will thus require much more
data for confirmation.
--- HD 177830 is already known
to have a Jupiter-mass object in a nearly circular, 1.12 y orbit.
    This remarkable system has an FAP $< 1\%$ for a second planet vs.\ a trend.
Two good two-planet solutions exist for this system: the first has
    $P=111$ d and $\ensuremath{m \sin{i}} = 0.19$ \ensuremath{\mbox{M}_{\mbox{Jup}}}, the second has $P=46.8$ d and
    $\ensuremath{m \sin{i}} = 0.16$ \ensuremath{\mbox{M}_{\mbox{Jup}}}. This star sits more than 3.5 magnitudes above the
main sequence, and the residuals to the known planet are
consistent with the typical jitter for subgiants of 5.7 m/s
\citep{Wright05}, so this signal could represent some sort of
correlated noise.
--- 70 Vir (= HD 117176) is a subgiant with a massive, 116.6 d planet on an
eccentric orbit ($e = 0.39$). The FAP for a second planet is 2\%,
but the best-fit second planet is not persuasive: $P=$ 9.58 d, and
$K= 7$ m/s. The typical internal errors for this target
are 5.4 m/s, making a bona fide detection of a 7 m/s planet very
difficult. We suspect that this signal is an artifact of stellar
jitter, possibly due to the advanced evolution of the star.
--- HD 164922 has a known planet with a 3.1 y orbital period. For
this star, the FAP for a second planet is $<1\%$. The best
fit for this second planet has $P = 75.8$ d
and $\ensuremath{m \sin{i}} = 0.06 \ensuremath{\mbox{M}_{\mbox{Jup}}}$. The amplitude of this signal is
extremely low --- only $K= 3$ m/s --- making this an intriguing but
marginal detection.
--- HD 210277 is already known to host a planet with a 1.2 y orbit.
The FAP for a second planet is 2\%, and the best-fit second
Keplerian has $K=3$ m/s and $P=3.14$ d. The
best-fit orbit has $e=0.5$, which is
unlikely given that nearly all known Hot Jupiters have $e < 0.1$
(although the presence of the 1.2 y, $e=0.5$ outer planet could be
responsible, in principle, for pumping an inner planet's
eccentricity.) The extremely low amplitude of this
planet makes the exoplanetary nature of this signal very
uncertain.
Three additional stars with low FAPs are of a very early
spectral type (F7--8): HD 89744 \citep{Korzennik00}, HD 108147 \citep{Pepe02}, and HD
208487 \citep{Tinney05}. Their low activity
yields a low jitter in the estimation of \citet{Wright05}, but this is
likely underestimated due to poor statistics: the California and
Carnegie Planet Search has very few stars of this spectral type from
which to estimate the jitter. For HD 89744 and HD 108147 we suspect
that the low
FAP of $< 2\%$ is an artifact of coherent noise, since in our judgment neither
case shows compelling evidence of a second Keplerian of any period.
HD 208487 has a very low FAP ($< 1\%$) despite the modest r.m.s.\ of the
residuals to a one planet fit of 8 m/s. We suspect that stellar
jitter is the likely source of these variations. This star was
discussed by \citet{Gregory05}, who applied a
Bayesian analysis to the published RV data, concluding that a
second planet was likely, having $P = 998^{+57}_{-62}$ d and
$\ensuremath{m \sin{i}} \sim 0.5 \ensuremath{\mbox{M}_{\mbox{Jup}}}$. \citet{Gozdziewski06} also studied the
published data, and suggested a planet with $P=14.5$ d. We note
here two plausible solutions apparent in our data.
The first, with $P \sim 1000$ d and $\ensuremath{m \sin{i}} \sim 0.5$ \ensuremath{\mbox{M}_{\mbox{Jup}}}, is
consistent with the solution of \citet{Gregory05}. We also find,
however, an additional solution of equal quality with $P=28.6$ d
(double the period of \citet{Gozdziewski06})
and $\ensuremath{m \sin{i}} = 0.14 \,\ensuremath{\mbox{M}_{\mbox{Jup}}}$. This second solution has a period
uncomfortably close to that of the lunar cycle (we often see
this period in the window function of our observations
due to our tendency to observe during bright time). For both
solutions $K=10$ m/s. We reiterate that we feel that the early
spectral type of this star alone can account for the observed RV residuals.
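As an illustrative aside (a heuristic check of our own, not part of the formal analysis above), the two candidate periods are approximately related through the standard one-dimensional alias relation $1/P_{\rm alias}=1/P+1/P_{\rm w}$, with the synodic month as the sampling period $P_{\rm w}$:

```python
# Heuristic alias check (illustrative, not from the formal analysis):
# a 28.6 d signal observed with a 29.53 d (synodic month) sampling
# periodicity has a sum-frequency alias near 14.5 d.
P_candidate = 28.6   # d, the second solution quoted above
P_window = 29.53     # d, synodic month driving bright-time sampling
P_alias = 1.0 / (1.0 / P_candidate + 1.0 / P_window)
print(round(P_alias, 2))  # near the 14.5 d period of Gozdziewski (2006)
```

This supports the suspicion that the two solutions may be mutual aliases of a single (possibly spurious) signal.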
\citet{Gozdziewski06} analyzed our published RV data and found a low
FAP for the existence of a second
planet in orbit around HD 188015. Their FAP, however, is measured against
a null hypothesis of a single Keplerian plus noise, thus ignoring the
linear trend. We find $\Delta\chi^2_{\nu}$, the improvement of
the goodness-of-fit parameter with the introduction of a second
Keplerian versus a trend, to be very small --- in fact 60\% of
our mock data sets showed greater improvement. We therefore find no motivation
to hypothesize the existence of an additional, short-period
planet; the single planet and trend announced in \citet{Marcy05} are
sufficient to explain the data.
\citet{Gozdziewski06} also found a low FAP for a second planet in
orbit about HD 114729, with a period of 13.8 d. Using the data set
from \citet{Butler06}, which contains 3 recent RV measurements taken
since the publication of \citet{Butler03} (their
source of RV data), we find no such signal, and a large FAP for a
second planet. We suspect our results may differ
because the additional data provide for a slightly better fit to the
known exoplanet, changing the character of the residuals and
destroying the coherence of the spurious 13.8 d signal.
\section{HD 150706}
HD 150706 b, a purported 1.0\,\ensuremath{\mbox{M}_{\mbox{Jup}}}\ eccentric planet at 0.8 AU, was
announced by the Geneva Extrasolar Planet Search Team
(2002, Washington conference ``Scientific Frontiers in Research
in Extrasolar Planets''; Udry, Mayor \& Queloz, \citeyear{Udry03b}) and
appears in \citet{Butler06}; however, there is no refereed discovery
paper giving details.
We have made eight precision velocity measurements at Keck observatory
from 2002 through 2006. These velocities show an RMS scatter of 12.1
m/s, inconsistent with the reported 33 m/s semi-amplitude of HD 150706 b.
The RMS to a linear fit is 8 m/s, which is adequately explained
by the expected jitter for a young (700$\pm$300 Myr) and active star
like HD~150706. We therefore doubt the existence of a 1.0\,\ensuremath{\mbox{M}_{\mbox{Jup}}}
eccentric planet orbiting HD 150706 at 0.8 AU.
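The inconsistency can be quantified with a back-of-the-envelope check; the sketch below assumes a circular orbit sampled at random phases, for which the expected scatter is $K/\sqrt{2}$ (eccentricity modifies this only modestly):

```python
import math

# Expected RMS of a circular Keplerian signal of semi-amplitude K,
# sampled at random orbital phases, is K/sqrt(2).
K_reported = 33.0                           # m/s, claimed semi-amplitude
rms_expected = K_reported / math.sqrt(2.0)  # about 23 m/s
rms_observed = 12.1                         # m/s, Keck 2002--2006 scatter
print(round(rms_expected, 1), ">>", rms_observed)
```

The observed scatter is roughly half the value the announced orbit would produce.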
\section{Discussion}
As noted in \S~\ref{foundtrends}, prior to this work 24 of the 150 nearby
stars known to host exoplanets (including 14 Her and excluding HD
150706) show significant trends in
their residuals and 19 host well-characterized multiple planet
systems. One of these trends is likely spurious (HD 11964), and at
least 3 others may be due to stellar or brown dwarf companions
(HD 142, HD 13445, and $\tau$ Boo). We have announced here the detection of
an additional 5 trends for known planet-bearing stars, 2 new single systems,
and one new multiple system (HIP 14810, which appears as a single-planet
system in \citet{Butler06}). We have also confirmed that the
previously announced trends for HD 68988 and HD 187123 are likely due
to planetary-mass objects. This brings the total number of
stars with RV trends possibly due to planets to 22, the number of
known multiple-planet systems to 22, and the number of nearby
planet-bearing stars to 152. This means that
nearly 30\% of known exoplanet systems show significant evidence of multiplicity.
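The quoted fraction follows directly from these tallies; a trivial arithmetic check:

```python
trend_stars = 22     # stars with RV trends possibly due to planets
multi_systems = 22   # known multiple-planet systems
total_hosts = 152    # nearby planet-bearing stars
fraction = (trend_stars + multi_systems) / total_hosts
print(f"{100 * fraction:.1f}%")  # just under 30%
```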
Considering that the mass distribution of planets increases steeply
toward lower masses \citep{Marcy_Japan_05}, our incompleteness must be
considerable between 1.0 and 0.1 Jupiter-masses. Thus, the actual
occurrence of multiple planets among stars having one known planet
must be considerably greater than 30\%.
From an anthropocentric perspective, this frequency of multiplicity
suggests that in some respects, the Solar System is not such an
aberration. Our Sun has 4 giant planets, and it appears that such
multiplicity is not uncommon, although circular orbits are.
From a planet-hunting perspective this result is
quite welcome as well, since it means that the immediate future of RV
planet searches looks bright. As our temporal baseline
expands, we will become sensitive to longer-period planets. Our search
is just becoming sensitive to true Jupiter analogs with 12 year orbits
and 12 m/s amplitudes. A true Saturn analog would require 15 more
years of observation. As our precision improves we will become
sensitive to lower-mass planets, which may be the richest domain for planets
yet.
\acknowledgements
The authors would like to thank Kathryn Peek for obtaining the crucial
16 Apr 2006 RV measurement of HIP 14810, and Simon O'Toole
and Alan Penny for their assistance. The authors also thank the
anonymous referee for a thorough and constructive report.
This research is based on observations obtained
at the W. M. Keck Observatory, which is operated jointly by the
University of California and the California Institute of Technology.
The Keck Observatory was made possible by the generous financial
support of the W. M. Keck Foundation. The authors wish to recognize
and acknowledge the
very significant cultural role and reverence that the summit of Mauna
Kea has always had within the indigenous Hawaiian community. We are
most fortunate to have the opportunity to conduct observations from
this mountain.
This research has made use of the SIMBAD database, operated at CDS,
Strasbourg, France, and of NASA's Astrophysics Data System
Bibliographic Services, and is made possible by the generous support
of Sun Microsystems, NASA, and the NSF.
\section{Introduction}
A recent renaissance of interest in pairing correlations in atomic
nuclei is linked to the search for
a reliable microscopic theory for describing the structure of medium
mass nuclei around the $N=Z$ line
where protons and neutrons occupy the same major shell and their
mutual interactions are expected to
strongly influence the structure and decay modes of such nuclei. The revival
of interest in pairing
correlations is also prompted by radioactive beam experiments, which
are advancing
the exploration of `exotic' nuclei, such as neutron-deficient or
$N\approx Z$ nuclei far off the valley of
stability. Likewise, such a microscopic framework is important for
astrophysical applications, for
example a description of the $rp$-process in nucleosynthesis, which
runs close to the proton-rich side
of the valley of stability through reaction sequences of proton
captures and competing $\beta $ decays
\cite{Langanke98,Schatz98}.
In this paper we show that a simple but powerful group theoretical
approach, with \Spn{4} the
underpinning symmetry, can provide a microscopic description and
interpretation of the properties of
pairing-governed $0^+$ states in the energy spectra of the even-$A$
nuclei with mass numbers $32\le
A\le 100$ where protons and neutrons are filling the same major
shell. In this regard, it is important
to recall that \SO{5} \cite{Hecht,Ginocchio} (with a Lie algebra that
is isomorphic to \spn{4}) has been shown to play a significant role in
the structure of $fp$-shell $N=Z$ nuclei \cite{EngelLV96}. Indeed, a
model based on this symmetry group can be used to track the
results of an isospin-invariant pairing plus quadrupole shell-model theory
\cite{KanekoHZPRC59}.
A theory that invokes group symmetries is driven by an expectation
that the wave functions of the
quantum mechanical system under consideration can be characterized by
their invariance properties
under the corresponding symmetry transformations. But even if the
symmetries are not exact, if
one can find near invariant operators, the associated symmetries can
be used to help reduce the
dimensionality of a model space to a tractable size. Within the
framework of the \Spn{4} symplectic
group, an approximate symmetry drastically reduces the model space,
which allows the model to be
applied in a broad region of the chart of the nuclides \cite{SGD03}.
This symmetry is adequate only
for a certain class of phenomena, which in our investigation is
related to significant isovector
(isospin $T=1$) pairing correlations in even-$A$ nuclei. While the
model is not valid for all the
states in the energy spectra of the $319$ nuclei we consider (nor can
any model achieve this), it
does yield a realistic reproduction (with only 6 parameters) of the
pairing-governed isobaric analog $0^+$ state spectra in medium mass
nuclei where the valence protons and neutrons occupy the same major
shell. The validity and
reliability of the model with respect to the interactions it includes
are confirmed additionally via
a finite energy difference method that we employed to reproduce
detailed nuclear structure, including
$N=Z$ anomalies, isovector pairing gaps and staggering effects \cite{SGD03stg}.
The present investigation shows the advantage of the algebraic
\spn{4} approach over other theoretical
studies. Namely, the \Spn{4} model, which is based on Helmers'
quasi-spin scheme \cite{Helmers}, is
both simpler and provides a better understanding of the fundamental
nature of the nuclear interaction
compared, for example, to the more general but also more elaborate
$U(4\Omega)$ model
\cite{TalmiSimpleModels} based on the conventional seniority scheme
of Racah and Flowers
\cite{Racah,Flowers}. It also yields a better and more detailed
microscopic description of isovector
pairing correlations and proton-neutron interactions than mean-field
theories and results extracted
from semi-empirical mass formulae. In addition, it can be used in
higher-lying shells where other
approaches cannot be applied.
\section{Pairing models: from \SU{}{2} to \Spn{4}}
It has been recognized for a long time that ground states of
even-even nuclei reflect strongly on
the nature of the nuclear interaction, especially its propensity to
form correlated, angular momentum
$J=0$ pairs \cite{BohrMottelsonPines, BohrMottelson}. It is also well
known that the low-lying energy
spectrum of isotopes of a doubly magic core (such as $^{40}$Ca or
$^{56}$Ni) can be well reproduced in
terms of neutron pair [$nn$] addition to the spectrum of states of
the core \footnote{Without
compromising the theory, one can consider closed shells as part of an inert
core that is spherical and does not
affect directly the single-particle motion of the valence nucleons in
the last unfilled shell.}. In
complete analogy with this, if one adds to the ground state of the
core $J=0,\ T=1$ pairs of nucleons
(two protons $pp$, two neutrons $nn$, or a proton and a neutron
$pn$), one would construct fully paired
states that in general reflect the close interplay of like-particle
and $pn$ pairs. Provided that the
isospin is (almost) a good quantum number, which is typically the
case for low-lying states in light
and medium mass even-$A$ nuclei with valence protons and neutrons
simultaneously filling the same major
shell, fully paired $0^+$ states built in this way describe
isobaric analog $0^+$ states ({\it
IAS}) in nuclei across the entire shell.
In the mass range $32 < A < 100$
where the influence of shape deformation on these $0^+$ {\it IAS}
states is relatively weak, this notion
of fully paired (seniority zero) states is a valid, albeit approximate
picture. While in even-even
nuclei within this region the ground states are such fully paired
$0^+$ states, this is not always
the case for odd-odd nuclei. The strong proton-neutron interaction
usually drives the state with the
least symmetry energy ($\sim T^2$, where $T$ is nuclear isospin)
lowest. However, it is not the
purpose of this article to investigate such states. Rather, we
consider the isobaric analog $0^+$
state, which is typically higher in energy and is strongly influenced
by isovector pairing
correlations. Even though the states are represented as $J=0$ pairs,
the interaction that governs them
is not exclusively pairing in the $J=0$ channel, but must also
include the important $pn$
$J_{\text{odd}}\geq 1$ isoscalar ($T=0$) interaction. The
significant interplay between these
isovector and isoscalar interactions is evident in the low-lying
structure of $N=Z$ odd-odd nuclei and
has been the focus of a large number of experimental
\cite{ZeldesLirin76,Rudolph,Vincent98,GNarro} and
theoretical studies \cite{CivitareseReboiroVogel, LangankeDKR,
Dean97, SatulaDGMN, PovesMartinezPinedo,
MartinezPinedoLV, KanekoHZPRC59, SatulaWyss, MacchiavelliPLB,
Macchiavelli00, Vogel00}.
The zero seniority $0^+$ states can be constructed as ($T=1$)-paired fermions
\begin{equation}
\left|n_{+1},n_{0},n_{-1}\right) =\left( \hat{A}^{\dagger
}_{+1}\right) ^{n_{+1}}\left(
\hat{A}^{\dagger }_{0}\right) ^{n_{0}}\left( \hat{A}^{\dagger
}_{-1}\right) ^{n_{-1}}\left|
0\right\rangle ,
\label{GencsF}
\end{equation}
where $n_{+1,0,-1}$ are the numbers of $J=0$ pairs of each kind, $pp$,
$pn$, $nn$, respectively, and $\left| 0\right\rangle$ denotes the vacuum state.
The transition operator, which changes the number of particles in a
pairwise fashion,
$\hat{A}^{\dagger }_{0,+1,-1}$, creates a proton-neutron ($pn$) pair,
a proton-proton
($pp$) pair or a neutron-neutron $(nn)$ pair of total angular momentum
$J^{\pi}=0^+$ and isospin $T=1$. Each operator $\hat{A}^{\dagger
}_\mu ,\ \mu =0,+1,-1$, together
with its conjugate pair-annihilation operator, $\hat{A}_\mu $, and a
pair number operator generate an \SU{\mu }{2} subgroup of \Spn{4}, which
in the case of $\mu =\pm 1$ is Kerman's standard like-particle pairing
\SU{}{2} group \cite{Kerman}.
From a microscopic perspective, the pair-creation (pair-annihilation)
operators,
$\hat{A}^{(\dagger )}$, are realized in terms of creation $c
_{jm\sigma }^\dagger$ and
annihilation $c _{jm\sigma }$ single-fermion operators with the standard
anticommutation relations
$\{c _{jm\sigma },c _{j^{\prime }m^{\prime }\sigma ^{\prime }}^{\dagger
}\}=\delta _{j,j^{\prime }}\delta _{m,m^{\prime }}\delta _{\sigma ,\sigma
^{\prime }},$ where these operators create (annihilate) a particle of type
$\sigma =\pm 1/2$ (proton/neutron) in a state of total angular momentum $j$
(half integer) with projection
$m$ in a finite space $2\Omega =\sum_j (2j+1)$. There are ten independent
scalar products (zero total angular momentum) of the fermion operators:
\begin{eqnarray}
\hat{A}^{\dagger }_{\mu =\sigma+\sigma^{\prime}}&=&
\frac{1}{\sqrt{2\Omega (1+\delta_{\sigma\sigma ^{\prime}})}}
\sum_{jm} (-1)^{j-m} c_{jm\sigma}^\dagger c_{j,-m,\sigma
^{\prime}}^\dagger,\nonumber \\
\hat{A}_\mu &=& (\hat{A}^{\dagger }_\mu)^\dagger,\nonumber \\
\hat{T}_\pm &=& \frac{1}{\sqrt{2\Omega}} \sum_{jm} c^\dagger_{jm,\pm
1/2} c_{jm,\mp 1/2}, \nonumber \\
\hat{N}_{2\sigma }&=& \sum_{jm} c^\dagger_{jm\sigma } c_{jm\sigma },
\label{gen}
\end{eqnarray}
which form a fermion realization of the symplectic \spn{4} Lie algebra. Such an
algebraic structure is exactly the one needed to describe isovector
(like-particle plus $pn$) pairing correlations and isospin symmetry
in nuclear isobaric analog $0^+$ states. In
(\ref{gen}),
$\hat{N}_{\pm 1}$ are the valence proton (neutron) number operators. The
generators $\hat{T}_{0}$ and $\hat{T}_{\pm}$ are associated with the
components of the isospin
of the valence particles and close on an \su{T}{2} subalgebra of \spn{4}. In
terms of the generators of the \Spn{4} group (\ref{gen}), the
operator that counts the total number of
valence particles $n$ is expressed as
$\hat{N}=\hat{N}_{+1}+\hat{N}_{-1}$ and the third isospin
projection operator is
$\hat{T}_{0}=(\hat{N}_{+1}-\hat{N}_{-1})/2$.
While the \Spn{4} symplectic group embeds in itself
the well-known symmetry of like-particle pairing, $\Spn{4} \supset
SU^{+}(2) \otimes SU^{-}(2)$, it brings into the theory the significant
interaction between protons and neutrons through the reduction chains
$Sp(4)\supset U^{\mu }(2)\supset U^{\mu }(1)\otimes SU^{\mu }(2)$ with
$\mu =0$ (proton-neutron pairing symmetry) and $\mu =T$ (isospin
symmetry). These group reductions allow the \Spn{4}-invariant
degenerate energy states to
split, which is the case of physical interest. Such a dynamical
symmetry that our model
possesses provides for a natural classification scheme of nuclei as
belonging to a
single-$j$ level or a major shell (multi-$j$), which are mapped to
the algebraic multiplets. This classification also extends to the
corresponding ground and excited states of the nuclei.
The general model Hamiltonian with \Spn{4} dynamical symmetry consists of
one- and two-body terms and can be expressed through the
\Spn{4} group generators,
\begin{eqnarray}
H =&-G\sum _{i=-1}^{1}\hat{A}^{\dagger }_{i}
\hat{A}_{i}-F \hat{A}^{\dagger }_{0}\hat{A}_{0}-\frac{E}{2\Omega} (\hat{T}
^2-\frac{3\hat{N}}{4 })
\nonumber \\
&-D(\hat{T}
_{0}^2-\frac{\hat{N}}{4})-C\frac{\hat{N}(\hat{N}-1)}{2}-\epsilon
\hat{N},
\label{clH}
\end{eqnarray}
where $\hat{T}^2=\Omega \{ \hat{T}_+,\hat{T}_-\}+\hat{T}_0^2$ is the
isospin operator, $G,F,E,D$ and
$C$ are interaction strength parameters and
$\epsilon >0$ is the Fermi level energy.
This Hamiltonian conserves the number of
particles ($n$) and the third projection ($T_0$) of the isospin,
while it includes
scattering of a $pp$ pair and an $nn$ pair into two $pn$ pairs and
vice versa.
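These statements can be verified numerically. The sketch below (an illustration, not part of the original derivation) realizes the generators (\ref{gen}) for a single $j=1/2$ level ($2\Omega=2$, $\Omega=1$) via a Jordan--Wigner construction, checks two representative commutation relations, and confirms that Hamiltonian (\ref{clH}) commutes with $\hat{N}$ and $\hat{T}_0$; the strength values used are illustrative only.

```python
import numpy as np
from functools import reduce

def cdag(k, n_modes):
    # Jordan-Wigner creation operator for fermionic mode k
    Z = np.diag([1.0, -1.0])
    a_dag = np.array([[0.0, 0.0], [1.0, 0.0]])
    return reduce(np.kron, [Z] * k + [a_dag] + [np.eye(2)] * (n_modes - k - 1))

def comm(X, Y):
    return X @ Y - Y @ X

# Single j = 1/2 level: 2*Omega = 2j + 1 = 2, so Omega = 1.
# Mode order: 0=(m=+1/2, p), 1=(m=-1/2, p), 2=(m=+1/2, n), 3=(m=-1/2, n).
Omega = 1
c = [cdag(k, 4) for k in range(4)]

A_pp = c[0] @ c[1]                                  # A^dag_{+1}
A_nn = c[2] @ c[3]                                  # A^dag_{-1}
A_pn = (c[0] @ c[3] - c[1] @ c[2]) / np.sqrt(2.0)   # A^dag_{0}

Tp = (c[0] @ c[2].T + c[1] @ c[3].T) / np.sqrt(2.0 * Omega)
Tm = Tp.T
N_p = c[0] @ c[0].T + c[1] @ c[1].T
N_n = c[2] @ c[2].T + c[3] @ c[3].T
N, T0 = N_p + N_n, (N_p - N_n) / 2
T2 = Omega * (Tp @ Tm + Tm @ Tp) + T0 @ T0          # isospin Casimir

# su^T(2) commutator and the pair-raising property of A^dag_0
assert np.allclose(comm(Tp, Tm), T0 / Omega)
assert np.allclose(comm(N, A_pn), 2 * A_pn)

# Hamiltonian with purely illustrative strengths (MeV)
G, F, E, D, C, eps = 0.45, 0.07, -1.1, 0.15, 0.47, 9.36
I = np.eye(16)
H = (-G * (A_pp @ A_pp.T + A_nn @ A_nn.T + A_pn @ A_pn.T)
     - F * A_pn @ A_pn.T
     - E / (2 * Omega) * (T2 - 0.75 * N)
     - D * (T0 @ T0 - N / 4)
     - C * N @ (N - I) / 2
     - eps * N)

# H conserves particle number and isospin projection, as stated
assert np.allclose(comm(H, N), 0)
assert np.allclose(comm(H, T0), 0)
print("sp(4) commutator and conservation checks passed")
```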
As we have shown in \cite{SGD03}, the algebraic model
Hamiltonian (\ref{clH}) arises naturally within a microscopic picture.
Using relations (\ref{gen}), it can be rewritten in standard second quantized
form, which in turn defines the physical nature of the interaction
and its strength. For example, in this way, one can identify the
parameters $G/\Omega$ and $(G+F)/\Omega$ in (\ref{clH}) with the strength
of the $J=0$ $T=1$ pairing interaction between two protons (neutrons)
and a proton and a neutron, respectively. The $C$, $D$, and
$E$ parameters are related to the expectation value of an average
$J$-independent interaction, which includes for example high-$J$
like-particle interaction with a strength specified by the parameters
$C+\frac{D}{2}+\frac{E}{4\Omega }$~\cite{SGD03}.
Furthermore, the $E$ term in (\ref{clH}), together with the $C$
term, is related to the
microscopic nature of the odd $J$ isoscalar ($T =0$) interaction
between a proton and a
neutron, $-\frac{E}{2\Omega} (\hat{T}
^2-\frac{3\hat{N}}{4 })-C\frac{\hat{N}(\hat{N}-1)}{2}=-\frac{E}{2\Omega}(
\hat{T}^2-\frac{\hat{N}}{2}-
\frac{\hat{N}^2}{4})-(C+\frac{E}{4\Omega})\frac{\hat{N}(\hat{N}-1)}{2}$,
where the first part
comprises the $J$-independent $pn$ isoscalar force. It is diagonal in
the isospin basis and
can be compared to \cite{HasegawaKanekoPRC59,KanekoH99}. In addition, the
quadratic in $\hat{N}$ term can be understood as an
average two-body interaction between the
valence particles (note that for $n$ equivalent particles there are
$\binom{n} {2}=\frac{n(n-1)}{2}$ particle couplings).
From another perspective, the $E$-term can be related to the symmetry
energy \cite
{Hecht,TalmiSimpleModels} as its expectation value in states with
definite isospin is of the
form $T(T+1)$, which enters as a symmetry term in many nuclear mass
relationships
\cite{JaneckeB74,DufloZuker}.
We refer to the $E$-term as {\it a symmetry term}, although it is
common to address the
symmetry energy in a slightly different way: the
$T (T +1)$ term together with the isospin dependence of the isovector
pairing term
yield both symmetry ($\sim T ^2\sim (Z-N)^2$) and
Wigner ($\sim T $) energies \cite{Wigner37}. The first one was
originally included in the
Bethe-Weizs\"acker semi-empirical mass formula
\cite{Weizsacker35,Bethe36} and implies that
the nuclear symmetry energy has the tendency toward stability for
$N=Z$. The Wigner energy is
associated with proton-neutron exchange interactions and is
responsible for a sharp energy
cusp at $N=Z$ leading to an additional binding of self-conjugate
nuclei \cite{BohrMottelson}.
In short, the symmetry energy together with the terms that are linear
in $\hat{N} $ (\ref{clH}) can
be directly related to a typical mass formula
\cite{Weizsacker35,Bethe36}, while in addition our model
improves the description of isovector pairing correlations and high
$J$ identical-particle and $pn$
interactions and uses an advanced Coulomb repulsion correction
\cite{RetamosaCaurier}.
In this way, Hamiltonian (\ref{clH}) includes an isovector ($T=1$)
$nn$, $pn$, and $pp$ pairing interaction ($G\geq 0$ for attraction)
and a diagonal
(in an isospin basis) isoscalar ($T=0$) proton-neutron force, which
is related to
the so-called symmetry term ($E$). Hence, the model Hamiltonian
(\ref{clH}) includes the
dominant interactions that govern the $0^+$ states under
consideration and provides for an
exact solution of the present problem.
In addition, the $D$-term in (\ref{clH}) introduces isospin symmetry
breaking and the $F$-term accounts for a plausible, but weak,
isospin mixing. While both terms
are significant in the investigation of certain types of phenomena
\cite{SGD04IM},
the study of their role is
outside the scope of this paper. These parameters yield quantitative
results that are
better than the ones with $F=0$ and $D=0$: for example, in the case of the
\ensuremath{1f_{7/2}} level the variance between the model and experimental
energies of the lowest
isobaric analog $0^+$ states increases by $85\%$ when the $D$ and $F$ interactions are
turned off. At the same
time, the latter provide only fine adjustments compared to the main
driving forces
incorporated in (\ref{clH}). In this sense, a simpler isospin
invariant $SO(5)$ model is suitable for a
qualitative description of the isobaric analog $0^+$ state energy spectra of nuclei.
\section{Description of isobaric analog $0^+$ states}
\subsection{Interaction strength parameters}
The interaction strength parameters are estimated in a fit of the minimum
eigenvalue $(-E_0)$ of the $H$ energy operator (\ref{clH}) to the
Coulomb corrected
\cite{RetamosaCaurier} experimental energies
\cite{AudiWapstra,Firestone} of the lowest
isobaric analog $0^+$ states of even-$A$ nuclei (ground states for
even-even nuclei and some
[$N\approx Z$] odd-odd nuclei).
We use the Coulomb correction $V_{Coul}(A,Z)$ derived in
\cite{RetamosaCaurier} so that the Coulomb corrected energies are
adjusted to be
$E_{0,exp}(A,Z)=E^C_{0,exp}(A,Z)+V_{Coul}(A,Z)-E_{0,exp}(A_{c},Z_{c})$, where
$E^C_{0,exp}(A,Z)$ is the total measured (positive) energy including
the Coulomb energy and
$E_{0,exp}(A_{c},Z_{c})=E^C_{0,exp}(A_{c},Z_{c})+V_{Coul}(A_{c},Z_{c})$
is the corrected energy of a nuclear core.
Analogously, the theoretically predicted energies that include the Coulomb
repulsion can be obtained as
$E_{0}^C(A,Z)=E_{0}(A,Z)-V_{Coul}(A,Z)+E_{0,exp}(A_{c},Z_{c})$.
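The bookkeeping can be summarized in a short sketch (the Coulomb term $V_{Coul}(A,Z)$ of \cite{RetamosaCaurier} is treated as an opaque input, and the numbers below are made up purely for illustration):

```python
def corrected_exp(E_exp_C, V_coul, E_core):
    # Coulomb-corrected experimental energy E_{0,exp}(A, Z)
    return E_exp_C + V_coul - E_core

def predicted_with_coulomb(E0_model, V_coul, E_core):
    # Model prediction dressed back with the Coulomb contribution
    return E0_model - V_coul + E_core

# The two maps are inverses of each other: correcting a measured energy
# and then re-dressing it recovers the original value exactly.
E_meas, V, E_core = 416.00, 7.25, 271.78  # MeV, illustrative numbers only
roundtrip = predicted_with_coulomb(corrected_exp(E_meas, V, E_core), V, E_core)
print(roundtrip)
```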
\begin{table}[th]
\caption{Parameters and statistics for three regions ({\bf I},
{\bf II}, and {\bf III})
specified by the valence model space. $G$, $F$, $C$, $D$, $E$, $\epsilon $,
and $\chi $ are in MeV, $SOS$ is in MeV$^{2}$.}
\center{
\begin{tabular}{cccc}
\hline
\hline
\text{Parameters} & {\bf I} & {\bf II} & {\bf III} \\
\text{and } & $(\ensuremath{1d_{3/2}} )$ & $(\ensuremath{1f_{7/2}} )$ & $(1f_{5/2}2p_{1/2}$ \\
\text{statistics} & & & $2p_{3/2}1g_{9/2})$ \\
\hline
$G/{\Omega }$ & 0.702 & 0.453 & 0.296 \\
$F/{\Omega }$ & 0.007 & 0.072 & 0.056 \\
$C$ & 0.815 & 0.473 & 0.190 \\
$D$ & 0.127 & 0.149 & -0.307 \\
$E/{(2\Omega )}$ &-1.409 & -1.120 & -0.489 \\
$\epsilon $ & 9.012 & 9.359 & 9.567 \\
\hline
$SOS$ & 1.720 & 16.095 &300.284 \\
$\chi $ & 0.496 & 0.732 & 1.787 \\
\hline \hline
\end{tabular}
}
\label{tab:fitStat}
\end{table}
The fitting procedure was performed separately for three groups of nuclei
with valence nucleons occupying ({\bf I}) the
$1d_{3/2}$ level with a
$^{32}$S core, ({\bf II}) the
$1f_{7/2}$ level with a $^{40}$Ca core, and ({\bf III}) the
$1f_{5/2}2p_{1/2}2p_{3/2}1g_{9/2}$ shell with a $^{56}$Ni
core \cite{SGD03}. The results reveal that the model interaction
accounts quite well for the
available experimental energies for a total of $149$ nuclei (refer to
the small value of the
$\chi $-statistics in Table \ref{tab:fitStat}, where $\chi ^2$ is the
sum of squares, $SOS$,
divided by the difference between the number of data cases and the
number of fit parameters)
\cite{SGD03}. The values of the parameters
in $H$ (\ref{clH}) are kept fixed hereafter (Table \ref{tab:fitStat}).
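As a small consistency check (our inference; the per-region data counts are not tabulated), the $SOS$ and $\chi$ entries determine the number of fitted nuclei per region via $\chi^2 = SOS/(n - p)$ with $p=6$ parameters, and the implied counts indeed sum to the 149 nuclei quoted above:

```python
# Infer the per-region number of data points n from chi^2 = SOS / (n - p),
# using the SOS and chi entries of the parameter table above.
SOS = {"I": 1.720, "II": 16.095, "III": 300.284}   # MeV^2
chi = {"I": 0.496, "II": 0.732, "III": 1.787}      # MeV
p = 6                                              # fit parameters
n = {region: round(SOS[region] / chi[region] ** 2) + p for region in SOS}
print(n, "total:", sum(n.values()))                # totals 149 nuclei
```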
In the case of the $1f_{5/2}2p_{1/2}2p_{3/2}1g_{9/2}$ shell ({\bf III}), the parameters
of the effective interaction in the \Spn{4} model with degenerate
multi-$j$ levels are likely
to be influenced by the non-degeneracy of the single-particle orbits.
Nevertheless, as the dynamical
symmetry properties of the two-body interaction in nuclei from this
region are not lost, the model
remains a good multi-$j$ approximation \cite{SGD03}, which is
confirmed with the use of
various discrete derivatives of the energy function \cite{SGD03stg}.
\begin{figure}[h]
\centerline{\epsfxsize=3.4in\epsfbox{pairLimitEnSpectra.eps}}
\caption{(Color online) Low-lying energy spectra of near closed-shell
nuclei in the \ensuremath{1f_{7/2}} level as
described by the like-particle pairing limit of the \Spn{4} model and
compared to the experiment.}
\label{pairLimitEnSpectra}
\end{figure}
While the optimum fit is statistically determined solely by $\chi $,
the physical aspect of the
nuclear problem requires, in addition, that the parameter
estimates be physically valid.
Indeed, the values of the like-particle pairing strength $G$,
obtained by our \Spn{4} model, yield
consistent results \cite{SGD03stg} with the experimental pairing gaps
derived from the odd-even mass
differences \cite{NilssonP61,RingS80}. In this way, the $G$ values
are expected to
reproduce the low-lying vibrational spectra of near closed-shell
nuclei in the \SU{\pm
}{2} limit of the model (like-particle pairing) (Figure
\ref{pairLimitEnSpectra}). When the results
from the three nuclear regions ({\bf I}, {\bf II}, and {\bf III}) are
considered, the pairing
strength parameter is found to follow the well-known $1/A$ trend \cite
{KermanLawsonMacfarlane,KisslingerSorensen,Lane,BohrMottelson,DudekMS},
\begin{eqnarray}
\frac{G}{\Omega }&=\frac{23.9 \pm 1.1}{A},\quad &R^2=0.96,
\end{eqnarray}
where $R^2$ is a coefficient of correlation and represents the
proportion of variation in the strength parameter accounted for by
the analytical
curve (Figure \ref{ParamFnA}).
Similarly, the values of the other strength parameters lie on a curve
that decreases with
nuclear mass $A$ (Figure \ref{ParamFnA})
\begin{equation}
\begin{array}{cclc}
\frac{E}{2\Omega }&=&\frac{-50.2 \pm 3.3}{A}, &R^2=0.93,\\
C&=&\left( \frac{32.30 \pm 0.02}{A} \right)^{1.887\pm 0.004}, &R^2=0.99.
\end{array}
\end{equation}
As expected for a symmetry energy term, the $1/A$ dependence
holds for the parameter
$E$. The dependence of $C$ on the mass number $A$ suggests that the
quadratic correction
$C\frac{n(n-1)}{2}$ to the mean field may change slowly from one
nucleus to another, which is consistent with the saturation of the
nuclear force. Although the data set used in the fitting procedure was
rather small, the trend toward a smooth
functional dependence of the interaction strength parameters on the
mass number $A$ reveals their
{\it global } character, namely the interactions in the model
Hamiltonian (\ref{clH}) are related to
an overall behavior common to all nuclei.
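The fitted global curves can be evaluated at a representative mass and compared with the directly fitted values of Table \ref{tab:fitStat}; the sketch below uses $A \approx 48$ as a midshell value for region {\bf II} (our choice, for illustration):

```python
# Evaluate the global A-dependence of the strengths at A = 48 (a
# representative 1f_{7/2} midshell mass, chosen for illustration) and
# compare with the region II entries of the parameter table.
A = 48
G_over_Omega = 23.9 / A        # ~0.50, vs fitted 0.453
E_over_2Omega = -50.2 / A      # ~-1.05, vs fitted -1.120
C = (32.30 / A) ** 1.887       # ~0.47, vs fitted 0.473
print(round(G_over_Omega, 3), round(E_over_2Omega, 3), round(C, 3))
```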
\begin{figure}[h]
\centerline{\epsfxsize=3.4in\epsfbox{ParamFnA.eps}}
\caption{(Color online) Dependence of the interaction strength
parameters $G/\Omega $, $E/2\Omega $,
and $C$ (in MeV) on the mass number
$A$ with values from the three regions, ({\bf I}), ({\bf II}) and
({\bf III}).}
\label{ParamFnA}
\end{figure}
Furthermore, the Wigner energy \cite{Wigner37}, $-2WT$, is
implicitly included in the
\Spn{4} theoretical energy, which in turn makes the estimation of its
strength possible.
The Wigner energy appears as the term that is linear in $T$ in
the $pn$ isoscalar force
(proportional to the symmetry term) and in the isovector pairing through the
second-order Casimir invariant of \spn{4}. In a good-isospin regime,
the symmetry energy contribution
is $-\frac{E}{2\Omega }T(T+1)$ [due to the $\hat{T}^2$-term in
(\ref{clH})], and the $W$ interaction
strength parameter can be expressed through the model parameters
(Table \ref{tab:fitStat}) as
$W=\frac{E-G}{4\Omega }$. In the framework of the
\Spn{4} model, the estimated values for $W$ from the three regions
({\bf I}, {\bf II} and {\bf
III}) are found to lie on a curve
\begin{equation}
W=\frac{-31 \pm 2}{A},\ R^2=0.96,
\end{equation}
with a very good correlation coefficient $R^2$ and a remarkably close
value to most other estimates:
$W=-30/A$
\cite{MollerNK97}, $W=-37/A$ \cite{DufloZuker}, $W=-37.4/A$
\cite{KanekoH99}, $W=-42/A $
\cite{MyersS96} and $W=-47/A $ \cite{Vogel00}.
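The $W$ values themselves follow directly from the tabulated ratios, since $W=\frac{E}{4\Omega}-\frac{G}{4\Omega}$; a short sketch (the representative midshell masses are our choice, for illustration):

```python
# W = (E - G)/(4*Omega), rewritten via the tabulated ratios E/(2*Omega)
# and G/Omega; representative masses per region are illustrative choices.
table = {   # region: (G/Omega, E/(2*Omega), representative A)
    "I":   (0.702, -1.409, 36),
    "II":  (0.453, -1.120, 48),
    "III": (0.296, -0.489, 78),
}
for region, (g_ratio, e_ratio, A) in table.items():
    W = e_ratio / 2 - g_ratio / 4
    print(region, round(W, 3), "vs -31/A =", round(-31 / A, 3))
```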
In short, the outcome of the optimization procedures shows that the
effective interaction
with \Spn{4} dynamical symmetry provides a reasonable description of
the lowest isobaric analog $0^+$ states, retaining the
physical meaning and validity of its microscopic nature.
\subsection{Energy spectra of the isobaric analog $0^+$ states}
In all three nuclear regions, there is good agreement with
experiment (small $\chi $-statistics), as can be seen
in Figure \ref{CaE0expTh} for the isobars $A=40-56$ in the
\ensuremath{1f_{7/2}} level $({\bf
II})$. The theory predicts the lowest isobaric analog $0^+$ state~energy of nuclei
with a deviation ($\chi /\Delta E_{0,exp}\times 100 [\% ]$) of
$0.7\%$ for $({\bf
I})$ and $0.5\%$ for $({\bf II})$ and $({\bf III})$ in the corresponding energy
range considered, $\Delta E_{0,exp}$.
\begin{figure}[t]
\centerline{\epsfxsize=3.5in\epsfbox{CaE0expTh.eps}}
\caption{(Color online) Isobaric analog $0^{+}$ state energy,
$E^C_{0}$, in MeV (including the
Coulomb energy) versus the isospin projection $T_0$ for the isobars
with $A=40$ to $A=56$ in the
\ensuremath{1f_{7/2}} level, $\Omega _{\frac{7}{2}}=4$. The experimental binding energies
$E^C_{\text{BE,exp}}$ (symbol ``$\times
$'') are distinguished from the experimental energies of the isobaric analog
$0^{+}$ excited states $E^C_{0,\exp }$ (symbol ``$\circ $''). Each
line connects
theoretically predicted energies of an isobaric sequence. }
\label{CaE0expTh}
\end{figure}
The fitting procedure not only estimates the magnitude of the interaction
strength and determines how well the model Hamiltonian ``explains'' the
experimental data, it can also be used to predict nuclear energies
that have not
been measured. This includes energies of nuclei with odd numbers of
protons and neutrons, as
well as nuclei away from the valley of stability with $N\approx Z$ or
proton-rich nuclei that are of great
interest in modern astrophysical studies. From the fit for the
\ensuremath{1f_{7/2}} case, the binding energy of the proton-rich $^{48}$Ni nucleus
is estimated
to be $348.19$ MeV, which is $0.07\%$ greater than the sophisticated
semi-empirical
estimate of \cite{MollerNK97}. Likewise, for the odd-odd nuclei that
do not have measured
energy spectra the theory can predict the energy of their lowest
isobaric analog $0^{+}$ state: $358.62$ MeV ($^{44}$V), $359.34$ MeV
($^{46}$Mn),
$357.49$ MeV ($^{48}$Co), $394.20$ MeV ($^{50}$Co) (Figure
\ref{CaE0expTh}). The
\Spn{4} model predicts the relevant $0^+$ state energies for an additional 165
even-$A$ nuclei in the medium mass region ({\bf III}) plotted in Figure
\ref{NiE0expTh}. The binding energies for 25 of them are also calculated in
\cite{MollerNK97}. For these even-even nuclei, we predict binding energies that
on average are $0.05\%$ less than the semi-empirical approximation
\cite{MollerNK97}.
\begin{widetext}
\begin{figure}[h]
\centerline{\epsfxsize=4.4in\epsfbox{NiE0expTh.eps}}
\caption{(Color online) Theoretical energies $E^C_0$ (including the
Coulomb energy contribution) of the
lowest isobaric analog $0^+$ states ~ for isobars (connected by lines in different
colors) with mass number
$A=56,58,\dots ,100$ in the \ensuremath{1f_{5/2}2p_{1/2}} major shell ($^{56}$Ni core), compared to
experimental values (black `$\times $') and semi-empirical estimates in
\cite{MollerNK97} (blue `$+$'). }
\label{NiE0expTh}
\end{figure}
\end{widetext}
Without varying the values of the interaction strength parameters (Table
\ref{tab:fitStat}), the energy of the higher-lying pairing-governed isobaric analog $0^+$ states~ in
nuclei under consideration can be
theoretically calculated. These states are eigenvectors of the model
Hamiltonian (\ref{clH}) and
differ among themselves in their pairing modes due to the close
interplay between like-particle and
$pn$ pairs. The theoretical energy spectra of these isobaric analog $0^+$ states~ agree
remarkably well with the available
experimental values
\footnote{The energy spectra of nuclei in the ({\bf III}) region with
nuclear masses $56<A<100$ are
not yet completely measured, especially the higher-lying $0^+$
states. This makes a
comparison of theory with experiment impossible.}
(Figure~\ref{enSpectraCa}). This agreement, which is observed not
only in single cases but throughout
the shells, represents a valuable result. This is because the
higher-lying $0^+$ states under consideration
constitute an experimental set independent of the data that enters
the statistics to determine the model parameters in (\ref{clH}). Such
a result is,
first, an independent test of the physical validity of the strength parameters,
and, second, an indication that the interactions interpreted by the
\Spn{4} model
Hamiltonian are the main driving force that defines the properties of these
states. In this way, the \Spn{4} dynamical symmetry of the zero-seniority
$IAS$ $0^+$ states of even-$A$ nuclei reveals
a simple and fundamental aspect of the nuclear interaction related to
isovector $J=0$ pairing
correlations and higher-$J$ proton-neutron interactions. Moreover, the simple
\Spn{4} model can be used to provide a reasonable prediction of the
(ground and/or excited) pairing-governed isobaric analog $0^+$ states~
in proton-rich nuclei with energy spectra not yet experimentally
fully explored.
\begin{figure}[th]
\centerline{\epsfxsize=3.0in\epsfbox{enSpectraCa.eps}}
\caption{(Color online) Theoretical and experimental (black lines)
energy spectra of the higher-lying pairing-governed isobaric analog $0^+$ states~ for isotopes in the
\ensuremath{1f_{7/2}} shell ($^{40}Ca$ core). Insert: First excited isobaric analog $0^+$ state~
energy in $^{36}$Ar in the \ensuremath{1d_{3/2}} shell ($^{32}S$ core) in
comparison to its experimental
value.}
\label{enSpectraCa}
\end{figure}
Such a conclusion is furthermore based on our complementary
investigation \cite{SGD03stg} on the fine
structure phenomena among the isobaric analog $0^+$ states. A study of this kind is quite
necessary because it is
well-known that a good reproduction of the experimental nuclear
energies does not automatically guarantee agreement
of the fine structure of nuclei in comparison with the
experiment. We have examined such detailed features by discrete
approximations of derivatives of the
energy function (\ref{clH}) filtering out the strong mean-field influence
\cite{SGD03stg}. In short, this investigation revealed a remarkable
reproduction of the two-proton
and two-neutron separation energies, the irregularities found around
the $N=Z$ region, the
like-particle and $pn$ isovector pairing gaps, the significant role
of the symmetry energy and
isovector pairing correlations in determining the fine nuclear
properties, and a prominent
staggering behavior observed between groups of even-even and odd-odd nuclides
\cite{SGD03stg}. This
study additionally confirmed the validity and reliability of the
group theoretical \Spn{4} model and
the interactions it includes.
\section{Conclusions}
In this paper we presented a simple \Spn{4} model that achieved a
reasonable prediction of the pairing-governed
isobaric analog $0^+$ state ~energy spectra of a total of 319 even-even and odd-odd nuclei
with only six
parameters. The model
Hamiltonian is a two-body effective interaction, including
proton-neutron and like-particle pairing
plus symmetry terms (the latter is related to a proton-neutron
isoscalar force). We compared
the theoretical results with experimental values and examined in detail their
outcome. While the model describes only the pairing-governed isobaric analog $0^+$ states
~of even-even
medium mass nuclei with protons and
neutrons occupying the same shell, it reveals a fundamental feature
of the nuclear interaction, which
governs these states. Namely, the latter clearly possess a simple
\Spn{4} dynamical symmetry.
Such a symplectic \Spn{4} scheme also allows for an extensive
systematic study of various
experimental patterns of the even-$A$ nuclei.
\section*{Acknowledgments}
\vskip .5cm This work was supported by the US National Science
Foundation, Grant
Number 0140300. The authors thank Dr. Vesselin G. Gueorguiev for his
computational {\small MATHEMATICA}\ programs for non-commutative algebras.
\section{Introduction}
For any punctured Riemann surface one can define an invariant, its wrapped Fukaya category (Abouzaid et al. \cite{abouzaid2013homological}) or
topological Fukaya category (Haiden-Katzarkov-Kontsevich in \cite{haiden2014flat} or Dyckerhoff-Kapranov \cite{dyckerhoff2013triangulated}).
As its second name suggests, this invariant only depends on the topology of the surface and intuitively it describes the intersection theory of curves on the surface.
The topological Fukaya category carries a rich structure: it is an $A_\infty$-category,
but unlike many other $A_\infty$-categories its description is fairly straightforward and
one can describe its objects and morphisms in a natural way. This makes it an ideal toy model
to study phenomena related to the concept of Mirror symmetry.
Mirror symmetry for punctured surfaces establishes an equivalence between the topological
Fukaya category (the A-side) and certain categories of matrix factorizations (the B-side).
These categories of matrix factorizations can be defined over commutative spaces like in Abouzaid et al. \cite{abouzaid2013homological} and Pascaleff and Sibilla \cite{pascaleff2016topological} or
over noncommutative space like in \cite{bocklandt2016noncommutative}.
In this paper we will study mirror symmetry for punctured surfaces from the point of view of constructing moduli spaces of objects.
Inspired by work of Bridgeland and Smith \cite{bridgeland2015quadratic} and Haiden et al. \cite{haiden2014flat}, we will explain how to construct moduli spaces of objects in the topological Fukaya category using Strebel differentials and
relate this to the classical GIT-construction of moduli spaces of representations of noncommutative algebras.
The stepping stone between the two sides of our story is the theory of dimer models.
They give a combinatorial description of both the Fukaya category and the category of matrix factorizations. This description can then be used to construct moduli spaces and relate these moduli spaces to tropical geometry.
The structure of the paper is as follows. We start with a review of quivers and $A_\infty$-structures in section \ref{sectionquivers}. Then we introduce dimer models in section \ref{sectiondimers} and explain how they can be used to describe mirror
symmetry for punctured surfaces in section \ref{sectionmirror}. In section \ref{sectionbands} we show how these dimer models can be used to describe objects in the
Fukaya category and the category of matrix factorizations. Sections \ref{sectionspider} and \ref{sectiontropical} review some concepts that will be
important in the story: ribbon graphs and spider graphs, and tropical and toric geometry.
We will use these concepts to describe moduli spaces in both the A and B-side in section \ref{sectionmoduli} and relate these moduli spaces to the gluing construction of Pascaleff and Sibilla \cite{pascaleff2016topological} in section \ref{sectionglue}.
We illustrate the theory with an example in section \ref{sectionexample}.
\section{Quivers and $A_\infty$-structures}\label{sectionquivers}
\subsection{Quivers}
A quiver consists of a set of vertices $Q_0$ and a set of arrows $Q_1$ together with two maps $h,t:Q_1 \to Q_0$ that assign
to each arrow its head and tail. A nontrivial path is a sequence of arrows $a_0\dots a_k$ such that $t(a_{i})=h(a_{i+1})$ (so arrows point to the left: $\stackrel{a_0}{\leftarrow}\dots \stackrel{a_k}{\leftarrow}$). A path is cyclic if $h(a_0)=t(a_k)$ and a cyclic path up to cyclic shifts is called a cycle.
A trivial path is a vertex. The path algebra $\mathbb{C} Q$ is the complex vector space spanned by all paths. The product of two paths is their concatenation if possible and
zero otherwise. The vertices are orthogonal idempotents for the product and they generate a commutative subalgebra $\mathbbm{k} \cong \mathbb{C}^{\oplus Q_0}$.
A path algebra with relations is the quotient of a path algebra by a two-sided ideal that sits in the ideal spanned by all paths of length at least $2$. Such an
algebra can be considered as an algebra over $\mathbbm{k}$.
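The concatenation product in the path algebra can be made concrete with a small sketch (illustrative, with hypothetical names; following the convention above, arrows point to the left, so a product $pq$ is defined exactly when the tail of $p$ equals the head of $q$):

```python
# A minimal sketch of paths in a quiver and their concatenation product.
# Each arrow is a tuple (name, head, tail); a path a_0 a_1 ... a_k
# satisfies t(a_i) = h(a_{i+1}), i.e. arrows point to the left.
arrows = {
    "a": ("a", 2, 1),   # a : 1 -> 2
    "b": ("b", 3, 2),   # b : 2 -> 3
}

def head(path):
    return path[0][1]   # head of the leftmost arrow

def tail(path):
    return path[-1][2]  # tail of the rightmost arrow

def product(p, q):
    """Concatenate p.q when t(p) = h(q); otherwise the product is zero."""
    if tail(p) == head(q):
        return p + q
    return None  # stands in for 0 in the path algebra

ba = product([arrows["b"]], [arrows["a"]])   # b.a : 1 -> 3
ab = product([arrows["a"]], [arrows["b"]])   # not composable, so zero
print(ba, ab)
```

The vertices would be represented as trivial (empty) paths acting as orthogonal idempotents; we omit them here for brevity.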
\subsection{$A_\infty$-algebras}
An $A_\infty$-algebra is a $\mathbb{Z}$-graded $\mathbbm{k}$-bimodule $A$ with a
set of products $\mu_i : A^{\otimes_\mathbbm{k} i} \to A$ of degree $2-i$ subject to certain generalized associativity laws. For the explicit expressions of these laws and more background about $A_\infty$-algebras we refer to \cite{keller1999introduction,keller2006infinity,kontsevich606241notes}.
If $\mu_i=0$ for $i\ne 2$ then $A$ is an ordinary graded algebra for the product $\mu_2$ and if $\mu_i=0$ for $i\ne 1,2$, $A$ is a dg-algebra for the product $\mu_2$ and
the differential $\mu_1$. Note that the definition allows for a map $\mu_0: \mathbbm{k} \to A$. If that map is nonzero we say that the $A_\infty$-algebra is curved and the element $\mu_0(1) \in A$ is its curvature.
A curved $A_\infty$-algebra with $\mu_i=0$ for ${i\ne 0,2}$ is called a Landau-Ginzburg model: it consists of an ordinary graded algebra
and a degree $2$ element $\ell=\mu_0(1)$, which is central.
If $\mu_0=0$ then the $A_\infty$-algebra is called flat. In this case $\mu_1^2=0$ so $(A, d=\mu_1)$ is a complex and we will denote its homology by $HA$.
If $A$ is flat and $\mu_1=0$ then we say that $A$ is minimal.
Let $A$ be a path algebra with relations and assume it has a $\mathbb{Z}$-grading such that the arrows are homogeneous. If $\mu$ is an $A_\infty$-structure on $A$,
we say that $\mu$ is compatible with the ordinary algebra structure on $A$ if $\mu_1=0$, $\mu_2$ is the ordinary product, and $\mu_{k}(x_1,\dots,x_k)$ is zero
if $k\ge 3$ and one of the entries is a vertex.
There is a notion of an $A_\infty$-morphism between two $A_\infty$-algebras $A$ and $B$. This is a set of maps ${\cal F}_i: A^{\otimes_\mathbbm{k} i}\to B$ with additional constraints \cite{keller1999introduction,keller2006infinity,kontsevich606241notes}.
If $A$ and $B$ are both flat then ${\cal F}_1$ will induce a morphism of complexes and we say that ${\cal F}$ is a quasi-isomorphism if ${\cal F}_1:A\to B$ is a quasi-isomorphism.
\begin{theorem}[Minimal model theorem \cite{kadeishvili1980homology}]
If $A$ is a flat $A_\infty$-algebra then there is an $A_\infty$-structure on $HA$ and an $A_\infty$-quasi-isomorphism ${\cal F}:A \to HA$.
\end{theorem}
The $A_\infty$-algebra $(HA, \mu)$ is called the minimal model of $A$.
\subsection{Twisted completion}
Analogously to $A_\infty$-algebras we can also define $A_\infty$-categories; in this way an $A_\infty$-algebra $A$ that is a path algebra with relations can be viewed as an $A_\infty$-category with one object for each vertex.
From an $A_\infty$-algebra $A=\mathbb{C} Q/{\cal I}$ we can define the $A_\infty$-category of twisted objects: $\mathtt{Tw}\, A$. A \emph{twisted object} \cite{keller1999introduction} is a pair $(M,\delta)$,
where $M\in \mathbb{N}[\mathbb{Z}^{Q_0}]$ is a formal sum of vertices shifted by elements in $\mathbb{Z}$.
We will write such a sum as $v_1[i_1] \oplus \dots \oplus v_k[i_k]$ where the $v_j$ are vertices and the $i_j$ shifts.
The map $\delta$ is a $k\times k$-matrix with entries $\delta_{st}\in v_{i_s}Av_{i_t}$
of degree $i_s-i_t+1$ and subject to the Maurer-Cartan equation: i.e.
\[
\sum_{n=0}^{\infty}(-1)^{\frac {n(n-1)}2}\mu_n(\delta,\dots,\delta)=0,
\]
where we extended ${\mu}_n$ to matrices in the standard way. Note that for this to make sense this infinite sum has to be finite, which can be achieved if $\delta$ is upper triangular
or if all products $\mu_i$ are zero for $i\gg 0$. Therefore if we are not in the latter case we restrict to objects for which $\delta$ is upper triangular.
The homomorphism space between two such objects $(M,\delta)$ and $(M',\delta')$ is given by
\[
\mathtt{Hom}((M,\delta),(M',\delta')) := \bigoplus_{r,s} v'_sA v_r [i_s-i_r]
\]
which we equip with an ${\mathtt A}_{\infty}$-structure as follows:
\[
\mu(f_1,\dots,f_n) := \sum_{t=0}^{\infty}\sum_{i_0+\dots+i_n=t} \pm {\mu}(\underbrace{\delta,\dots,\delta}_{i_0},f_1,\underbrace{\delta,\dots,\delta}_{i_1},\dots,f_n,\underbrace{\delta,\dots,\delta}_{i_n}).
\]
The $\pm$-sign is calculated by multiplying with a factor $(-1)^{n+t-k}$ for each $\delta$ in the expression on position $k$.
The ${\mathtt A}_\infty$-category of twisted objects and their homomorphism spaces is denoted by $\mathtt{Tw}\, A$. Note that the Maurer-Cartan equation implies
that $\mu_0$ is zero, so $\mathtt{Tw}\, A$ is flat.
\begin{remark}
If $A$ is an ordinary $\mathbb{C}$-algebra concentrated in degree $0$
we can view $\mathtt{Tw}\, A$ as the dg-category of complexes of finitely generated free $A$-modules.
\end{remark}
\subsection{The derived category}
Because $\mathtt{Tw}\, A$ is flat, we can construct a category ${\mathtt {D}} A$, with the same objects but
its hom-spaces are the degree zero part of the $\mu_1$-homology of the hom-spaces in $\mathtt{Tw}\, A$ and the ordinary product is the induced product on the homology.
We will call this category the derived category of $A$.
Unlike $\mathtt{Tw}\, A$, which is an $A_\infty$-category, the derived category is an ordinary category and
it is even triangulated \cite{keller2006infinity}.
In the light of the minimal model theorem the derived category is the minimal model of $\mathtt{Tw}\, A$ and therefore it is equipped with an additional $A_\infty$-structure $({\mathtt {D}} A,\mu)$.
Quasi-isomorphic $A_\infty$-algebras will have quasi-equivalent twisted completions and
therefore also equivalent derived categories.
\begin{remark}
In case that $A$ is an ordinary algebra concentrated in degree zero and every
finitely generated module has a resolution of finitely generated free $A$-modules,
${\mathtt {D}} A$ is equivalent to the bounded derived category of $A$-modules $\mathtt{D}^b\ensuremath{\mathtt{Mod}} A^{op}$.
\end{remark}
\begin{remark}
If $A$ is not $\mathbb{Z}$-graded but $\mathbb{Z}/2\mathbb{Z}$-graded or $G$-graded for some other abelian group with a map $\mathbb{Z}\to G$,
we can adjust our definitions to $G$-graded $A_\infty$-algebras and twisted objects with $G$-shifts.
The corresponding categories will be denoted by $\mathtt{Tw}\,_{G}A$ and ${\mathtt {D}}_{G}A$.
\end{remark}
\section{Dimers}\label{sectiondimers}
\subsection{Definition}
A dimer quiver $\mathrm{Q}$ is a quiver that is embedded in a compact orientable surface, such that the complement of $\mathrm{Q}$ is a union of polygons bounded by oriented
cycles of the quiver. These cycles form a set $\mathrm{Q}_2$ that is split into two subsets $\mathrm{Q}_2^+\cup \mathrm{Q}_2^-$ according to their orientation on the surface (anticlockwise vs. clockwise).
Every arrow is contained in precisely one cycle in $\mathrm{Q}_2^+$ and one in $\mathrm{Q}_2^-$.
Examples of dimers can be found in section \ref{lotsofmirrors}.
We denote the compact surface in which $\mathrm{Q}$ is embedded by $\surf \mathrm{Q}$. Its Euler characteristic equals
$\chi(\mathrm{Q}) = \#\mathrm{Q}_0 -\#\mathrm{Q}_1 +\#\mathrm{Q}_2$. If we remove the vertices from the surface we get a punctured surface,
which we denote by $\psurf{\mathrm{Q}}:= \surf \mathrm{Q} \setminus \mathrm{Q}_0$. If we cut out open disks around each vertex we get a surface with boundary which we denote by $\rib{\mathrm{Q}}$.
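The Euler characteristic formula can be checked on a concrete dimer. In the sketch below the counts (4 vertices, 8 arrows $a,b,c,d,w,x,y,z$, and 4 faces) are our own reading of the fundamental domain of the first example dimer in section \ref{lotsofmirrors}, so treat them as an assumption:

```python
# Check chi(Q) = #Q0 - #Q1 + #Q2 and recover the genus of |Q|
# for a dimer read off a fundamental domain: 4 vertices, 8 arrows, 4 faces.
def euler_characteristic(v, a, f):
    return v - a + f

def genus(chi):
    """Genus of the closed orientable surface with Euler characteristic chi."""
    assert chi % 2 == 0
    return (2 - chi) // 2

chi = euler_characteristic(4, 8, 4)
print(chi, genus(chi))  # chi = 0, genus 1: this dimer lives on a torus
```

The result is consistent with the statement in section \ref{lotsofmirrors} that the first example quiver is embedded in a torus.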
\subsection{Perfect matchings and zigzag paths}
A perfect matching of $\mathrm{Q}$ is a set of arrows ${\cal P} \subset \mathrm{Q}_1$ such that every cycle in $\mathrm{Q}_2$ contains exactly one arrow of ${\cal P}$.
Not every dimer admits a perfect matching: a necessary but not sufficient condition is that $\#\mathrm{Q}_2^+=\#\mathrm{Q}_2^-$.
Given a perfect matching ${\cal P}$ we can define a degree function $\deg_{\cal P}$ on $\mathbb{C} \mathrm{Q}$ such that an arrow has degree one if it sits in ${\cal P}$ and zero otherwise.
In this way all cycles in $\mathrm{Q}_2$ have degree $1$.
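The perfect-matching condition is easy to check by machine. The following sketch uses the standard one-vertex dimer on the torus with arrows $x,y,z$ and face cycles $xyz$ and $xzy$ (a well-known example, not one of the dimers discussed in this text):

```python
# Perfect matchings of the one-vertex hexagonal dimer on the torus:
# arrows x, y, z; one positive face xyz and one negative face xzy.
faces = [("x", "y", "z"), ("x", "z", "y")]

def is_perfect_matching(P, faces):
    """P is perfect iff every face cycle contains exactly one arrow of P."""
    return all(sum(a in P for a in f) == 1 for f in faces)

def deg(path, P):
    """deg_P of a path: the number of its arrows lying in the matching P."""
    return sum(a in P for a in path)

print([is_perfect_matching({m}, faces) for m in "xyz"])  # each singleton works
print(is_perfect_matching({"x", "y"}, faces))            # fails: a face has degree 2
print(deg(("x", "y", "z"), {"x"}))                       # every face cycle has degree 1
```

The three singleton matchings $\{x\},\{y\},\{z\}$ illustrate the degree function: with any of them, every cycle in $\mathrm{Q}_2$ has degree $1$.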
The zig ray of an arrow $a \in \mathrm{Q}_1$ is the infinite path $\dots a_2a_1a_0$ such that $a_{i+1}a_{i}$ is a subpath of a positive cycle if $i$ is even and of a
negative cycle if $i$ is odd. The zag ray is defined similarly: $a_{i+1}a_{i}$ is a subpath of a positive cycle if $i$ is odd and of a
negative cycle if $i$ is even. If $\mathrm{Q}$ is finite the zig and zag rays become cyclic and we call them zigzag cycles.
Every arrow $a$ is contained in two zigzag cycles, one coming from its zig ray and one from its zag ray.
\subsection{Dimer Duality}
Given a dimer $\mathrm{Q}$ we can construct a new dimer $\twist{\mathrm{Q}}$.
Geometrically we can construct $\twist{\mathrm{Q}}$ from $\mathrm{Q}$ by cutting open $|\mathrm{Q}|$ along the arrows. Then we flip over the polygons that come from negative cycles to their mirror images
and reverse the orientations of the sides (so that they are still oriented clockwise). Finally we glue all polygons together along sides that come from the same arrows.
In this way we get a new quiver embedded in a new surface, possibly with a different topology from that of the first dimer.
The new dimer is also called the \emph{untwisted dimer}\cite{feng2008dimer}, the \emph{mirror dimer}\cite{bocklandt2016noncommutative} or the \emph{specular dual}\cite{hanany2012brane}.
\begin{center}
\begin{tikzpicture}
\begin{scope}
\draw (.5,2) node{$\mathrm{Q}$};
\draw [-latex,shorten >=5pt] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,0.3);
\draw [-latex,shorten >=5pt] (1,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1.3);
\draw [-latex,shorten >=5pt] (1,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0.3);
\draw [-latex,shorten >=5pt] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (0,1.3);
\draw [-latex,shorten >=5pt] (0,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,1.3);
\draw (0,0.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,1.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (1,1.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (1,0.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\end{scope}
\begin{scope}[xshift=2cm]
\draw (-.5,2) node{cut};
\draw [-latex] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,0);
\draw [-latex] (1,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1);
\draw [-latex] (1,1) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0);
\draw [-latex] (1,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0.3);
\draw [-latex] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (0,1.3);
\draw [-latex] (0,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,1.3);
\end{scope}
\begin{scope}[xshift=4cm]
\draw (-.5,2) node{flip};
\draw [-latex] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,0);
\draw [-latex] (1,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1);
\draw [-latex] (1,1) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0);
\draw [-latex] (1,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0.3);
\draw [-latex] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (0,1.3);
\draw [-latex] (0,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1.3);
\end{scope}
\begin{scope}[xshift=6cm]
\draw (-.5,2) node{glue};
\draw (.5,2) node{$\twist{\mathrm{Q}}$};
\draw [-latex,shorten >=5pt] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,0.3);
\draw [-latex,shorten >=5pt] (1,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1.3);
\draw [-latex,shorten >=5pt] (1,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0.3);
\draw [-latex,shorten >=5pt] (0,0.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (0,1.3);
\draw [-latex,shorten >=5pt] (0,1.3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1.3);
\draw (0,0.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,1.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny3}};
\draw (1,1.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny2}};
\draw (1,0.3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny3}};
\end{scope}
\end{tikzpicture}
\end{center}
From the geometric construction it is easy to see that this operation forms an involution on the set of dimers. An interesting observation
is that this involution induces a bijection between the set of perfect matchings of $\mathrm{Q}$ and that of $\twist{\mathrm{Q}}$.
Moreover the vertices of $\twist{\mathrm{Q}}$ are in one-to-one correspondence with the zigzag cycles in $\mathrm{Q}$ and vice versa.
\subsection{Examples}\label{lotsofmirrors}
We illustrate this with some extra examples from \cite{bocklandt2016noncommutative}:
\begin{center}
\resizebox{10cm}{!}{
\begin{tabular}{ccccc}
$\mathrm{Q}$&$\vcenter{
\xymatrix@C=.75cm@R=.75cm{
\vtx{1}\ar[r]_{a}&\vtx{2}\ar[d]|z&\vtx{1}\ar[l]^b\\
\vtx{3}\ar[u]_c\ar[d]^d&\vtx{4}\ar[l]|w\ar[r]|y&\vtx{3}\ar[u]^c\ar[d]_d\\
\vtx{1}\ar[r]^{a}&\vtx{2}\ar[u]|x&\vtx{1}\ar[l]_b
}}$
&
$\vcenter{
\xymatrix@C=.4cm@R=.75cm{
\vtx{1}\ar[rrr]_a\ar[dr]|{u_1}&&&\vtx{1}\ar[ld]|z\\
&\vtx{3}\ar[r]|y\ar[ld]|x&\vtx{2}\ar[ull]|{u_2}\ar[dr]|{v_2}&\\
\vtx{1}\ar[rrr]^a\ar[uu]_b&&&\vtx{1}\ar[ull]|{v_1}\ar[uu]^b
}}$
&
$\vcenter{
\xymatrix@C=.75cm@R=.75cm{
\vtx{1}\ar[r]_{a}\ar[d]^{b}&\vtx{1}\ar[r]_b&\vtx{1}\ar[d]_c\\
\vtx{1}\ar[d]^a&&\vtx{1}\ar[d]_d\\
\vtx{1}\ar[r]^{d}&\vtx{1}\ar[r]^c&\vtx{1}\ar[uull]|x
}}$
&
$\vcenter{
\xymatrix@C=.4cm@R=.75cm{
\vtx{1}\ar[dd]\ar[rrr]&&&\vtx{2}\ar[dll]\ar@{.>}[dl]\\
&\vtx{5}\ar[ul]\ar[drr]&\vtx{6}\ar@{.>}[ull]\ar@{.>}[dr]&\\
\vtx{4}\ar[ur]\ar@{.>}[urr]&&&\vtx{3}\ar[uu]\ar[lll]
}}$
\vspace{.5cm}
\\
$\twist{\mathrm{Q}}$&
$\vcenter{
\xymatrix@C=.75cm@R=.75cm{
\vtx{1}\ar[r]_{a}&\vtx{2}\ar[d]|c&\vtx{1}\ar[l]^y\\
\vtx{3}\ar[u]_z\ar[d]^d&\vtx{4}\ar[l]|w\ar[r]|b&\vtx{3}\ar[u]^z\ar[d]_d\\
\vtx{1}\ar[r]^{a}&\vtx{2}\ar[u]|x&\vtx{1}\ar[l]_y
}}$
&
$\vcenter{
\xymatrix@C=.4cm@R=.75cm{
\vtx{1}\ar[rrr]_z\ar[dr]|{u_1}&&&\vtx{1}\ar[ld]|a\\
&\vtx{3}\ar[r]|y\ar[ld]|b&\vtx{2}\ar[ull]|{u_2}\ar[dr]|{v_1}&\\
\vtx{1}\ar[rrr]^z\ar[uu]_x&&&\vtx{1}\ar[ull]|{v_2}\ar[uu]^x
}}$
&
$\vcenter{
\xymatrix@C=.75cm@R=.75cm{
\vtx{1}\ar[r]_{a}&\vtx{2}\ar[r]_b&\vtx{1}\ar[ld]|c\\
&\vtx{3}\ar[ld]|d&\\
\vtx{1}\ar[uu]_x\ar[r]^{a}&\vtx{2}\ar[r]^b&\vtx{1}\ar[uu]^x
}}$
&
$\vcenter{
\xymatrix@C=.75cm@R=.75cm{
\vtx{1}\ar[r]&\vtx{2}\ar[r]\ar[ld]&\vtx{1}\ar[ld]\\
\vtx{3}\ar[r]\ar[u]&\vtx{4}\ar[r]\ar[u]\ar[ld]&\vtx{3}\ar[u]\ar[ld]\\
\vtx{1}\ar[r]\ar[u]&\vtx{2}\ar[r]\ar[u]&\vtx{1}\ar[u]
}}$
\end{tabular}}
\end{center}
On the top row the first two quivers are embedded in a torus, the third in a surface with genus $2$ and the fourth in a sphere.
The mirrors on the bottom row are all embedded in a torus.
Note that the first two are isomorphic to their mirrors, but in a nontrivial way.
\subsection{Consistency}
A dimer is called \emph{zigzag consistent} if in the universal cover the zig and the zag ray of an arrow $a$ do not have arrows in common apart from $a$. In the example above the first and third from the $\mathrm{Q}$-row are consistent.
The second is not consistent because the zig and zag ray from $x$ both contain $z$.
In the $\twist{\mathrm{Q}}$-row the first and fourth are consistent.
\begin{remark}
Many different versions of consistency can be found in the literature \cite{broomhead2012dimer, gulotta2008properly, ishii2010note, bocklandt2016dimer,mozgovoy2009crepant,davison2011consistency}.
For dimers on the torus these are equivalent to the notion of zigzag consistency. For more information about these equivalences we refer to \cite{bocklandt2016dimer,ishii2010note}.
\end{remark}
Zigzag consistent dimers have nice properties, especially when they are embedded in a torus.
Suppose that $\surf{\mathrm{Q}}$ is a torus and fix two cyclic paths $p_X,p_Y$ in $\mathrm{Q}$ that span the homology of the torus. We can assign to each
perfect matching a lattice point $$(\deg_{\cal P} p_X,\deg_{\cal P} p_Y) \in \mathbb{Z}^2.$$
These lattice points span a convex polygon which is called the \emph{matching polygon} $\mathrm{MP}(\mathrm{Q})$.
Note that it depends on the choice of $p_X,p_Y$ but different choices will result in polygons that are equal up to an affine integral transformation.
\begin{theorem}\label{matchingconsistent}
If $\mathrm{Q}$ is a zigzag consistent dimer on a torus then
\begin{enumerate}
\item On each lattice point of the matching polygon there is at least one perfect matching.
\item On each corner of the matching polygon there is exactly one perfect matching.
\item There is a one-to-one correspondence between the zigzag cycles and the outward pointing normal vectors of the matching polygon.
\end{enumerate}
\end{theorem}
This theorem can also be translated into properties of the mirror dimer.
\begin{theorem}\label{genusmirror}
If $\mathrm{Q}$ is a zigzag consistent dimer on a torus then
\begin{enumerate}
\item The genus of $\surf{\twist{\mathrm{Q}}}$ is equal to the number of internal lattice points of the matching polygon.
\item The number of punctures of $\psurf{\twist{\mathrm{Q}}}$ is equal to the number of boundary lattice points of the matching polygon.
\end{enumerate}
\end{theorem}
\begin{remark}
These two theorems are known by experts and appear in many different disguises in the literature, e.g.\ \cite{ishii2009dimer, mozgovoy2010noncommutative}. For a proof we refer to \cite{bocklandt2016dimer}.
\end{remark}
\subsection{Example}
The suspended pinchpoint \cite[section 4.1]{franco2006brane} is an example of a dimer model on the torus. Below we show the dimer $\mathrm{Q}$ and its mirror $\twist{\mathrm{Q}}$ (see also \cite{bocklandt2016dimer}).
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.75]
\draw (1.5,3.5) node{$\mathrm{Q}$};
\draw [-latex,shorten >=5pt] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (3,0);
\draw [-latex,shorten >=5pt] (3,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny u}} (3,1);
\draw [-latex,shorten >=5pt] (3,1) -- (0,0);
\draw [-latex,shorten >=5pt] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny u}} (0,1);
\draw [-latex,shorten >=5pt] (0,1) -- (3,2);
\draw [-latex,shorten >=5pt] (3,2) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny w}} (3,3);
\draw [-latex,shorten >=5pt] (3,2) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny v}} (3,1);
\draw [-latex,shorten >=5pt] (0,2) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny v}} (0,1);
\draw [-latex,shorten >=5pt] (0,3) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (3,3);
\draw [-latex,shorten >=5pt] (3,3) -- (0,2);
\draw [-latex,shorten >=5pt] (0,2) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny w}} (0,3);
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny2}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny3}};
\draw (0,3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (3,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (3,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny2}};
\draw (3,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny3}};
\draw (3,3) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\end{scope}
\begin{scope}[xshift=6cm]
\begin{scope}[scale=.75]
\draw (1.5,3.5) node{$\twist{\mathrm{Q}}$};
\draw [-latex,shorten >=5pt] (0,.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (3,-.5);
\draw [-latex,shorten >=5pt] (3,-.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny u}} (3,.5);
\draw [-latex,shorten >=5pt] (3,.5) -- (0,.5);
\draw [-latex,shorten >=5pt] (0,.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny v}} (0,1.5);
\draw [-latex,shorten >=5pt] (0,2.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny v}} (0,1.5);
\draw [-latex,shorten >=5pt] (3,1.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny u}} (3,.5);
\draw [-latex,shorten >=5pt] (3,1.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny w}} (3,2.5);
\draw [-latex,shorten >=5pt] (0,1.5) -- (3,1.5);
\draw [-latex,shorten >=5pt] (3,2.5) -- (0,2.5);
\draw [-latex,shorten >=5pt] (0,2.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (3,3.5);
\draw [-latex,shorten >=5pt] (3,3.5) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny w}} (3,2.5);
\draw (0,.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny d}};
\draw (0,2.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny d}};
\draw (0,1.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny e}};
\draw (3,.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny c}};
\draw (3,2.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny b}};
\draw (3,1.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny a}};
\draw (3,-.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny a}};
\draw (3,3.5) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny a}};
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
There are five zigzag paths in $\mathrm{Q}$ with homology classes normal to the five line segments on the boundary.
There are six perfect matchings, one for each corner and two on the lattice point that is not a corner.
\begin{center}
\begin{tikzpicture}
\draw[thick](0,0) -- (0,2) -- (1,0) -- (1,-1) -- (0,0);
\draw (0,0) node{$\bullet$};
\draw (0,2) node{$\bullet$};
\draw (0,1) node{$\bullet$};
\draw (1,0) node{$\bullet$};
\draw (1,-1) node{$\bullet$};
\draw[-latex] (0,.5)--(-.5,.5);
\draw[-latex] (0,1.5)--(-.5,1.5);
\draw[-latex] (.5,1)--(1.5,1.5);
\draw[-latex] (1,-.5)--(1.5,-.5);
\draw[-latex] (.5,-.5)--(0,-1);
\begin{scope}[xshift=-1.5cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick,-latex] (3,0) -- (3,1)--(0,0);
\end{scope}
\end{scope}
\begin{scope}[xshift=-1.5cm,yshift=1cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick,-latex] (3,2) -- (3,3)--(0,2);
\end{scope}
\end{scope}
\begin{scope}[xshift=1.75cm,yshift=-1cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick,-latex] (0,1) -- (3,2)--(3,1);
\end{scope}
\end{scope}
\begin{scope}[xshift=-1.75cm,yshift=-1.5cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [dotted] (3,0) -- (6,0) -- (6,3) -- (3,3);
\draw [dotted] (3,0) -- (6,1) -- (6,2) -- (3,1)--(3,2)--(6,3);
\draw [thick,-latex] (6,3) -- (3,2)--(3,1)--(0,0)--(3,0);
\end{scope}
\end{scope}
\begin{scope}[xshift=1.75cm,yshift=1cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [dotted] (3,0) -- (6,0) -- (6,3) -- (3,3);
\draw [dotted] (3,0) -- (6,1) -- (6,2) -- (3,1)--(3,2)--(6,3);
\draw [thick,-latex] (0,0) -- (3,0)--(3,1)--(6,2)--(6,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=6cm]
\draw[thick](0,0) -- (0,2) -- (1,0) -- (1,-1) -- (0,0);
\draw (0,0) node{$\bullet$};
\draw (0,2) node{$\bullet$};
\draw (0,1) node{$\bullet$};
\draw (1,0) node{$\bullet$};
\draw (1,-1) node{$\bullet$};
\begin{scope}[xshift=-1cm,yshift=1.5 cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,0) -- (0,1);
\draw [thick] (3,0) -- (3,1);
\draw [thick] (0,2) -- (0,3);
\draw [thick] (3,2) -- (3,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=-2cm,yshift=.5 cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,0) -- (3,1);
\draw [thick] (0,2) -- (0,3);
\draw [thick] (3,2) -- (3,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=-1cm,yshift=.5 cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,0) -- (0,1);
\draw [thick] (3,0) -- (3,1);
\draw [thick] (0,2) -- (3,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=-1cm,yshift=-.5cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,0) -- (3,1);
\draw [thick] (0,2) -- (3,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=1.5cm,yshift=-1.5cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,1) -- (0,2);
\draw [thick] (3,1) -- (3,2);
\draw [thick] (0,0) -- (3,0);
\draw [thick] (0,3) -- (3,3);
\end{scope}
\end{scope}
\begin{scope}[xshift=1.5cm,yshift=-.5cm]
\begin{scope}[scale=.25]
\draw [dotted] (0,0) -- (3,0) -- (3,3) -- (0,3);
\draw [dotted] (0,0) -- (3,1) -- (3,2) -- (0,1)--(0,2)--(3,3);
\draw [thick] (0,1) -- (3,2);
\draw [thick] (0,0) -- (3,0);
\draw [thick] (0,3) -- (3,3);
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
As there are no internal lattice points and five lattice points on the boundary, the mirror dimer describes a sphere with five punctures.
This can also be seen by gluing the dimer on the right together.
\section{Mirror Symmetry for dimers}\label{sectionmirror}
\subsection{The A-model}
For a dimer $\mathrm{Q}$ we define a second quiver ${\koppa}$: its vertices are the midpoints of the arrows of $\mathrm{Q}$ and its arrows are angle arcs inside
the polygons that connect these midpoints in a clockwise fashion.
From this new quiver we can define an $A_\infty$-algebra $\mathtt{Gtl}(\mathrm{Q})$.
As an ordinary algebra $A =\mathtt{Gtl}(\mathrm{Q})$ is the path algebra $\mathbb{C} {\koppa}$ modulo the relations of the form $\alpha\beta$ where $\alpha$ and $\beta$ are two consecutive
angle arcs in a cycle $c \in \mathrm{Q}_2$.
\begin{example}\label{torus1}
Below is an example for a torus with one marked point.
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=1.5]
\draw (.5,1.6) node{$\mathrm{Q}$};
\draw [-latex] (0,0.3) to node [fill=white,sloped,inner sep=1pt] {{\tiny 1}} (1,0.3);
\draw [-latex] (1,0.3) to node [fill=white,sloped,inner sep=1pt] {{\tiny 2}} (1,1.3);
\draw [-latex] (1,1.3) to node [fill=white,sloped,inner sep=1pt] {{\tiny 3}} (0,0.3);
\draw [-latex] (0,0.3) to node [fill=white,sloped,inner sep=1pt] {{\tiny 2}} (0,1.3);
\draw [-latex] (0,1.3) to node [fill=white,sloped,inner sep=1pt] {{\tiny 1}} (1,1.3);
\draw (0,0.3) node {{$\bullet$}};
\draw (0,1.3) node {{$\bullet$}};
\draw (1,1.3) node {{$\bullet$}};
\draw (1,0.3) node {{$\bullet$}};
\draw (.7,0.6) node {{$c_1$}};
\draw (.3,1) node {{$c_2$}};
\end{scope}
\begin{scope}[xshift=3cm]
\begin{scope}[scale=1.5]
\draw [-latex,shorten >=5pt] (.5,0.8) -- (0,0.8);
\draw [-latex,shorten >=5pt] (.5,0.8) -- (1,0.8);
\draw [-latex,shorten >=5pt] (.5,.3)--(.5,0.8);
\draw [-latex,shorten >=5pt] (.5,1.3)--(.5,0.8);
\draw [-latex,shorten >=5pt] (0,0.8) -- (.5,1.3);
\draw [-latex,shorten >=5pt] (1,0.8) -- (.5,0.3);
\draw [dotted] (0,0.3) -- (1,0.3);
\draw [dotted] (1,0.3) -- (1,1.3);
\draw [dotted] (1,1.3) -- (0,0.3);
\draw [dotted] (0,0.3) -- (0,1.3);
\draw [dotted] (0,1.3) -- (1,1.3);
\draw (1,0.8) node [circle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2}};
\draw (0,0.8) node [circle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2}};
\draw (.5,0.8) node [circle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 3}};
\draw (0.5,0.3) node [circle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 1}};
\draw (0.5,1.3) node [circle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 1}};
\end{scope}
\end{scope}
\begin{scope}[xshift=6.5cm]
\draw (.5,2.4) node{${\koppa}$};
\draw (.5,1.2) node{$\xymatrix@R=.75cm@C=.66cm{&\vtx{1}\ar@2[rd]^{\alpha_1,\alpha_2}&\\\vtx{2}\ar@2[ur]^{\beta_1,\beta_2}&&\vtx{3}\ar@2[ll]^{\gamma_1,\gamma_2}}$};
\end{scope}
\end{tikzpicture}
\[
\mathtt{Gtl}(\mathrm{Q}) \cong \frac{\mathbb{C}{\koppa}}{\<\alpha_i\beta_i, \beta_i\gamma_i, \gamma_i\alpha_i| i=1,2\>}
\]
\end{center}
\end{example}
The algebra $\mathtt{Gtl}(\mathrm{Q})$ comes with a natural $A_\infty$-structure \cite{bocklandt2016noncommutative}.
The higher products ${\mu}:A^{\otimes k}\to A$ are defined inductively: let $\rho_1,\dots,\rho_k$ be any sequence of paths
and $\beta_1\dots \beta_l$ a cycle of angle arrows that goes around a cycle of $\mathrm{Q}_2$ with $h(\beta_1)=t(\rho_i)$.
We set
\[
\text{[IR] : }{\mu}(\rho_1,\dots, \rho_i\beta_{1},\beta_{2},\dots, \beta_{l-1},\beta_{l}\rho_{i+1},\dots, \rho_k) := \pm {\mu}(\rho_1,\dots,\rho_k).
\]
For the sign convention we refer to \cite{bocklandt2016noncommutative}. Pictorially this gives rise to the following diagram:
\[
{\mu}\left(
\vcenter{\xymatrix@=.3cm{&\ar[dr]&\\
\ar[ur]&&\ar[lldd]_(.8){\rho_{i}\beta_1}\\
&&\\
\dots&&\dots\ar[lluu]_(.2){\beta_l\rho_{i+1}}
}}
\right)=\pm
{\mu}\left(
\vcenter{\xymatrix@=.3cm{
&\ar[ld]_{\rho_{i}}&\\
\dots&&\dots\ar[lu]_{\rho_{i+1}}
}}
\right).
\]
We set ${\mu}(\sigma_1,\dots, \sigma_k)=0$ if $k>2$ and we cannot perform any reduction of the form above. For
$k=2$ we use the ordinary product. $A$ has a natural $\mathbb{Z}_2$-grading by assigning degree $1$ to each angle arrow in ${\koppa}$.
\begin{lemma}\label{muhomotopy}
Let $\rho_1,\dots,\rho_k$ be a composable sequence of nonzero angle paths in ${\koppa}$ with $k>2$ such that
$\rho_i\rho_{i+1}=0$ and $\rho_1\dots\rho_k$ is a contractible cycle on $\psurf{\mathrm{Q}}$.
For each angle $\beta$ with $\rho_k\beta\ne 0$ in $\mathtt{Gtl}(\mathrm{Q})$ we have
\[
\mu_k(\rho_1,\dots,\rho_k\beta) =(-1)^{|\beta|} \beta
\]
and for each angle $\beta$ with $\beta\rho_1\ne 0$ in $\mathtt{Gtl}(\mathrm{Q})$ we have
\[
\mu_k(\beta\rho_1,\dots,\rho_k) =(-1)^{|\beta|} \beta.
\]
All other $\mu_k$ with $k>2$ are zero.
\end{lemma}
\begin{proof}
This follows easily from Lemma 10.9 in \cite{bocklandt2016noncommutative}. This expression can also be found in Haiden et al. \cite{haiden2014flat} where it is part of the
definition of the $A_\infty$-structure.
\end{proof}
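For instance, in Example \ref{torus1} (with the $\mathbb{Z}_2$-grading, so every angle arrow has degree $1$), the angles $\alpha_1,\beta_1,\gamma_1$ form a composable sequence with $\alpha_1\beta_1=\beta_1\gamma_1=0$, and the cycle $\alpha_1\beta_1\gamma_1$ bounds the face $c_1$, so it is contractible. Because $\gamma_1\alpha_2\ne 0$ in $\mathtt{Gtl}(\mathrm{Q})$, the lemma gives
\[
\mu_3(\alpha_1,\beta_1,\gamma_1\alpha_2) = (-1)^{|\alpha_2|}\alpha_2 = -\alpha_2.
\]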
\begin{corollary}\label{subzero}
If $\mu(\rho_1,\dots,\rho_k)$ is nonzero then $\mu(\rho_i,\dots,\rho_j)$ is zero for all
proper subsequences $\rho_i,\dots,\rho_j$.
\end{corollary}
\begin{proof}
This follows from the induction hypothesis: if the product
$\mu(\rho_i,\dots,\rho_j)$ were nonzero,
we could reduce $\rho_i,\dots,\rho_j$ to a single angle.
After reducing the original product, that single angle composes with $\rho_{i-1}$ or $\rho_{j+1}$ to something nonzero,
which contradicts the fact that the original product is nonzero.
\end{proof}
If $a$ is an arrow in $\mathrm{Q}$, we can see $a$ as an oriented 1-dimensional submanifold in the punctured surface $\psurf \mathrm{Q}$ and if we choose a symplectic structure
on $\psurf \mathrm{Q}$, we can see $a$ as an embedded Lagrangian submanifold. Such a submanifold can be seen as an object in the wrapped Fukaya category of this surface.
For a precise definition of this category we refer to \cite{abouzaid2013homological}, but its main objects are immersions $\gamma: I \to \psurf \mathrm{Q}$ where $I=(0,1)$ or $\mathbb{R}/\mathbb{Z}$. These objects are
called open or closed Lagrangians depending on $I$. For the open Lagrangians, we also demand that the limit points are punctures of the surface.
\begin{proposition}\cite{bocklandt2016noncommutative}
The category ${\mathtt {D}}_{\mathbb{Z}_2} \mathtt{Gtl}(\mathrm{Q})$ is equivalent to the derived version of the wrapped Fukaya category of the punctured surface $\psurf \mathrm{Q}$ in the sense of Abouzaid et al. \cite{abouzaid2013homological}.
Under this isomorphism $a[0] \in {\mathtt {D}}_{\mathbb{Z}_2} \mathtt{Gtl}(\mathrm{Q})$ corresponds to the embedded Lagrangian submanifold represented by $a$.
\end{proposition}
A \emph{graded surface} $\vec S=(S,V)$ consists of an oriented surface $S$ equipped with a vector field $V$. For a graded surface we can define the
notion of a graded Lagrangian submanifold ${\cal L}=(\gamma,\theta)$.
This is an immersed curve $\gamma: I \to S$ together with a map $\theta: I \to \mathbb{R}$ which specifies the angle between $V_{\gamma(t)}$ and $\frac{d\gamma}{dt}(t)$.
Note that while every open Lagrangian can be given a grading, this is not true for every closed Lagrangian.
To shift a graded Lagrangian we reverse its orientation and add $\pi$ to the map $\theta$: ${\cal L}[1] = (\gamma(1-t), \theta+\pi)$.
For a graded surface with boundary we can define its topological Fukaya category. This category is $\mathbb{Z}$-graded instead of $\mathbb{Z}_2$-graded. For its general construction we refer to \cite{haiden2014flat}.
Although the definition above uses a metric to determine the angles, the actual Fukaya category does not depend on the choice of metric,
because the actual angles are not important, only by how many multiples of $\pi$ they differ.
Given a perfect matching on $\mathrm{Q}$ we can put a vector field on $\psurf \mathrm{Q}$ in the following way: we fill each polygon with integral curves
that start at the head of the arrow in the perfect matching and run to its tail, as illustrated in the picture.
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=1.5]
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=5, in=265] (1,1) ;
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=85, in=185] (1,1) ;
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=15, in=255] (1,1) ;
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=30, in=240] (1,1) ;
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=60, in=210] (1,1) ;
\draw [draw=blue,-latex,shorten >=5pt] (0,0) to[out=75, in=195] (1,1) ;
\draw (.5,1.5) node{$\mathrm{Q},~{\cal P}=\{z\}$};
\draw [-latex,shorten >=5pt] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,0);
\draw [-latex,shorten >=5pt] (1,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (1,1);
\draw [-latex,shorten >=5pt] (1,1) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny z}} (0,0);
\draw [-latex,shorten >=5pt] (0,0) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny y}} (0,1);
\draw [-latex,shorten >=5pt] (0,1) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny x}} (1,1);
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{}};
\draw (0,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{}};
\draw (1,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{}};
\draw (1,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{}};
\end{scope}
\end{tikzpicture}
\end{center}
In this way we get a graded surface $\gsurf{{\cal P}}{\mathrm{Q}}$. Each arrow not in the perfect matching corresponds to a curve parallel to the vector field, so we can give it a grading by putting $\theta=0$. Each arrow in the perfect matching runs opposite to
the vector field so we can grade it with $\theta=\pi$.
We can also use the perfect matching to put a $\mathbb{Z}$-grading on $\mathtt{Gtl} \mathrm{Q}$: we give an angle arrow degree $-1$ if it arrives at an arrow of the perfect matching and degree $+1$ otherwise. In this way the degree of a cycle of angle arrows that goes around a face in $\mathrm{Q}_2$ is $k-2$, where $k$ is the
length of the cycle. In the induction step the order of the multiplication also goes down by $k-2$, so the $A_\infty$-structure is compatible with this grading.
We denote this graded algebra by $\mathtt{Gtl}_{\cal P}(\mathrm{Q})$.
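For instance, in Example \ref{torus1} the diagonal arrow labeled $3$ sits in both faces $c_1$ and $c_2$, so it forms a perfect matching ${\cal P}$ by itself. The angles $\alpha_1,\alpha_2$ arrive at this arrow and get degree $-1$, while the $\beta_i$ and $\gamma_i$ get degree $+1$, so each face cycle $\alpha_i\beta_i\gamma_i$ has degree
\[
-1+1+1 = 1 = k-2 \text{ with } k=3,
\]
as stated above.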
\begin{proposition}
The category ${\mathtt {D}}_{\mathbb{Z}} \mathtt{Gtl}_{\cal P}(\mathrm{Q})$ is equivalent to the derived version of the topological Fukaya category of the graded surface $\gsurf{{\cal P}}{\mathrm{Q}}$ in the sense of
Haiden et al. \cite{haiden2014flat}. Under this isomorphism $a[0] \in {\mathtt {D}}_{\mathbb{Z}} \mathtt{Gtl}_{\cal P}(\mathrm{Q})$ corresponds to the embedded Lagrangian submanifold represented by $a$
graded by $\theta=0$ if $a\not \in {\cal P}$ and $\theta=\pi$ if $a\in {\cal P}$.
\end{proposition}
\subsection{The B-model}
Given a dimer quiver $\mathrm{Q}$ we can construct its \emph{Jacobi algebra} ${\mathtt{J}}(\mathrm{Q})$. This is the path algebra of the quiver $\mathrm{Q}$ with relations
\[
{\mathtt{J}}(\mathrm{Q}) := \mathbb{C} \mathrm{Q}/\<r_a^+-r_a^- | a \in \mathrm{Q}_1\>
\]
where $r_a^\pm a$ is the unique cycle in $\mathrm{Q}_2^\pm$ containing $a$.
These relations state that going back around the cycle to the left of an arrow is the same as going back around the cycle to the right.
This algebra has a central element, which is the sum of cycles in $\mathrm{Q}_2$, one starting in each vertex.
\[
\ell = \sum_{v \in \mathrm{Q}_0} c_v \text{ with $h(c_v)=v$ and $c_v \in \mathrm{Q}_2$}
\]
Note that the relations in ${\mathtt{J}}(\mathrm{Q})$ ensure that this element is central and it is independent of the choices of the $c_v$.
\begin{example}\label{torus1jac}
If we apply this definition to $\mathrm{Q}$ in example \ref{torus1}, we get ${\mathtt{J}}(\mathrm{Q})=\mathbb{C}[X,Y,Z]$ and $\ell=XYZ$.
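Indeed, write $X,Y,Z$ for the arrows labeled $1,2,3$. Every arrow lies in both triangles $c_1$ and $c_2$, so the relation for $X$ reads
\[
r_X^+ = r_X^-, \quad\text{i.e.}\quad YZ = ZY,
\]
and similarly for $Y$ and $Z$. Hence all arrows commute, ${\mathtt{J}}(\mathrm{Q})$ is the polynomial ring $\mathbb{C}[X,Y,Z]$, and the cycle at the unique vertex is $\ell=XYZ$.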
\end{example}
Under certain conditions, this algebra has very nice properties:
\begin{theorem}\label{consprop}
If $\mathrm{Q}$ is zigzag consistent on a torus then
\begin{enumerate}
\item ${\mathtt{J}}(\mathrm{Q})$ is finitely generated over its center, which is a 3d Gorenstein ring.
\item ${\mathtt{J}}(\mathrm{Q})$ is a 3-Calabi-Yau algebra, which means that it has global dimension 3 and $\mathtt{Hom}^\bullet_{{\mathtt{J}}^e}({\mathtt{J}},{\mathtt{J}}^e)\cong {\mathtt{J}}[3]$.
\item ${\mathtt{J}}(\mathrm{Q})$ embeds in ${\widehat{\mathtt{J}}} ={\mathtt{J}}(\mathrm{Q})\otimes_{\mathbb{C}[\ell]}\mathbb{C}[\ell,\ell^{-1}]\cong \mathsf{Mat}_n(\mathbb{C}[X^{\pm 1},Y^{\pm 1},Z^{\pm 1}])$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{historyconsistent}
This result is a summary of many results in the literature. Proofs of these statements for different equivalent notions of consistency can be found in
\cite{mozgovoy2010noncommutative,davison2011consistency,broomhead2012dimer,ishii2010note,bocklandt2012consistency}.
\end{remark}
Given an arrow $a \in \mathrm{Q}_1$ we set $a^{-1}=r_a^+\ell^{-1}$. This new element satisfies $aa^{-1}=h(a)$ and $a^{-1}a=t(a)$ and we call
it the inverse of the arrow. We also define the inverse of a path by $(a_1\dots a_k)^{-1}=a_k^{-1}\dots a_{1}^{-1}$.
The algebra ${\widehat{\mathtt{J}}}$ is generated by all arrows of $\mathrm{Q}$ and their inverses.
Paths consisting of arrows and inverses of arrows are called \emph{weak paths} and ${\widehat{\mathtt{J}}}$ is called the \emph{weak Jacobi algebra}.
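In Example \ref{torus1jac} this gives $X^{-1}=r_X^+\ell^{-1}=YZ(XYZ)^{-1}$, and inverting the central element $\ell=XYZ$ inverts all three variables, so
\[
{\widehat{\mathtt{J}}} = \mathbb{C}[X,Y,Z]\otimes_{\mathbb{C}[\ell]}\mathbb{C}[\ell,\ell^{-1}] \cong \mathbb{C}[X^{\pm 1},Y^{\pm 1},Z^{\pm 1}],
\]
matching Theorem \ref{consprop}(3) with $n=1$.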
If ${\mathtt{J}}$ is graded by a perfect matching ${\cal P}$ we can extend
this grading to ${\widehat{\mathtt{J}}}$ by putting $\deg_{\cal P} p^{-1}= -\deg_{\cal P} p$.
We will need a few lemmas about the difference between weak and real paths and the existence
of real paths.
\begin{lemma}[Broomhead]\label{zeroonallpm}\cite{broomhead2012dimer}
Let $\mathrm{Q}$ be a consistent dimer on a torus.
A weak path $p$ in ${\widehat{\mathtt{J}}}$ sits in ${\mathtt{J}}$ if and only if $\deg_{{\cal P}} p\ge 0$ for all perfect matchings.
\end{lemma}
\begin{lemma}[Broomhead]\label{existspath}\cite{broomhead2012dimer}
Let $\mathrm{Q}$ be a consistent dimer on a torus.
\begin{enumerate}
\item For each pair of vertices $v,w$ and each homotopy class, there is a path $v\stackrel{p}{\leftarrow}w$ that represents the homotopy class and a perfect matching such that $\deg_{{\cal P}} p= 0$.
\item For each homology class there is a central element $z$ and a perfect matching ${\cal P}$ with $\deg_{{\cal P}} z= 0$ such that $zv$ is a path with that homology class.
\item
Two paths represent the same element in ${\mathtt{J}}$ or ${\widehat{\mathtt{J}}}$ if they have the same homotopy class and
the same degree for at least one perfect matching.
\end{enumerate}
\end{lemma}
The last lemma implies that the algebra ${\widehat{\mathtt{J}}}$ can be identified with $\mathsf{Mat}_n(\mathbb{C}[X^{\pm 1},Y^{\pm 1},Z^{\pm 1}])$. To do this
fix a vertex $v$ and a perfect matching ${\cal P}$. Choose paths $p_{vw}:v \leftarrow w$ for all other vertices $w$ (and assume $p_{vv}=v$).
The path $p$ is mapped to $X^iY^jZ^kE_{h(p)t(p)}$ where $E_{h(p)t(p)}$ is an elementary matrix,
$(i,j)$ is the homology class of $p_{vh(p)}pp_{vt(p)}^{-1}$
and $k = \deg_{\cal P} p_{vh(p)}+\deg_{\cal P} p- \deg_{\cal P} p_{vt(p)}$.
We can use $\ell$ to turn ${\mathtt{J}}(\mathrm{Q})$ into a $\mathbb{Z}_2$-graded Landau-Ginzburg model ${\mathtt{J}}(\mathrm{Q},\ell)$ by adding a zeroth multiplication with $\mu_0(1)=\ell$.
We assume that ${\mathtt{J}}(\mathrm{Q})$ is $\mathbb{Z}_2$-graded and concentrated in degree $0$.
The objects in $\mathtt{Tw}\, {\mathtt{J}}(\mathrm{Q},\ell)$ are also known as matrix factorizations.
A matrix factorization of $\ell$ consists of a pair $(P,d)$ where $P$ is a ($\mathbb{Z}_2$ or $\mathbb{Z}$)-graded finitely generated projective ${\mathtt{J}}(\mathrm{Q})$-module and
$d$ is a degree $1$ map such that $d^2=\ell$. Every finitely generated projective ${\mathtt{J}}(\mathrm{Q})$-module can be seen as a direct sum of shifts of projective modules
coming from the vertices: $P= v_1{\mathtt{J}}[i_1]\oplus\dots \oplus v_k{\mathtt{J}}[i_k]$ and $d$ corresponds to a matrix with entries of degree $1$. The Maurer-Cartan equation
in this case becomes $\ell - d^2=0$, so $(v_1[i_1]\oplus \dots \oplus v_k[i_k],d)$ is a twisted object.
We will often write the matrix factorization in a diagram
\[
\xymatrix{P_0 \ar@<.5ex>[r]^{d_{10}}&P_1 \ar@<.5ex>[l]^{d_{01}}}
\]
where $P_0$ and $P_1$ are the even and odd parts of the graded projective module.
Every arrow $a \in \mathrm{Q}_1$ gives rise to a matrix factorization
\[
M_a := \left( h(a)[1] \oplus t(a), \sm{0&a\\ r_a^+&0}\right) =
\xymatrix{t(a){\mathtt{J}} \ar@<.5ex>[r]^{a}&h(a){\mathtt{J}} \ar@<.5ex>[l]^{r_a^+}}
\]
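For instance, in Example \ref{torus1jac} the arrow $X$ has $r_X^+=YZ$, so it gives the matrix factorization
\[
M_X = \xymatrix{\mathbb{C}[X,Y,Z] \ar@<.5ex>[r]^{X}&\mathbb{C}[X,Y,Z] \ar@<.5ex>[l]^{YZ}}
\]
and indeed $d^2 = XYZ\cdot\id = \ell$.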
If $a$ and $b$ are consecutive arrows in a positive cycle $abu \in \mathrm{Q}_2^+$ there is a morphism of matrix factorizations $\widehat{ab}$
that corresponds to a commutative diagram
\[
\xymatrix{
&t(a){\mathtt{J}} \ar@<.5ex>[r]^{a}\ar[d]^{-\id}&h(a){\mathtt{J}} \ar@<.5ex>[r]^{bu}\ar[d]^{u=\ell(ab)^{-1}}&\\
\ar@<.5ex>[r]^{-b}&h(b){\mathtt{J}} \ar@<.5ex>[r]^{-ua}&t(b){\mathtt{J}}
}
\]
(The minus signs in the bottom row come from the shift $M_b[1]$).
These morphisms generate the endomorphism algebra of $\oplus_{a \in \mathrm{Q}_1}M_a$ in $\mathtt{Tw}\,_{\mathbb{Z}_2} {\mathtt{J}}(\mathrm{Q},\ell)$ and we denote
this algebra by ${\mathtt {mf}}(\mathrm{Q})$.
\begin{theorem}\cite{bocklandt2016noncommutative}
If $\mathrm{Q}$ is zigzag consistent then ${\mathtt {mf}}(\mathrm{Q})$ and $\mathtt{Gtl}(\twist{\mathrm{Q}})$ are $A_\infty$-isomorphic as $A_\infty$-algebras, so
\[
{\mathtt {D}}_{\mathbb{Z}_2} {\mathtt {mf}}(\mathrm{Q}) \cong {\mathtt {D}}_{\mathbb{Z}_2} \mathtt{Gtl}(\twist{\mathrm{Q}}).
\]
\end{theorem}
If a perfect matching ${\cal P}$ is specified we can make ${\mathtt{J}}(\mathrm{Q})$ $\mathbb{Z}$-graded by giving all arrows in ${\cal P}$ degree $2$ and all
other arrows degree $0$. In this way $\ell$ has degree $2$. Again we have a matrix factorization $M_a$ for every arrow in $\mathrm{Q}_1$ and we let
${\mathtt {mf}}_{\cal P}(\mathrm{Q})$ be the graded endomorphism algebra of $\oplus_{a \in \mathrm{Q}_1}M_a$ in the category $\mathtt{Tw}\,_{\mathbb{Z}} {\mathtt{J}}(\mathrm{Q},\ell)$.
\begin{theorem}\cite{bocklandt2016noncommutative}
If $\mathrm{Q}$ is zigzag consistent and ${\cal P}$ a perfect matching then ${\mathtt {mf}}_{\cal P}(\mathrm{Q})$ and $\mathtt{Gtl}_{\cal P}(\twist{\mathrm{Q}})$ are quasi-isomorphic as $A_\infty$-algebras, so
\[
{\mathtt {D}}_{\mathbb{Z}} {\mathtt {mf}}_{\cal P}(\mathrm{Q}) \cong {\mathtt {D}}_{\mathbb{Z}} \mathtt{Gtl}_{\cal P}(\twist{\mathrm{Q}}).
\]
\end{theorem}
\section{Strings and Bands}\label{sectionbands}
In \cite{haiden2014flat} Haiden et al. described all indecomposable objects in the topological Fukaya category of a graded surface.
They showed that these objects come in two types: strings and bands. The strings correspond to immersed open curves in the surface that run between two punctures, while
the bands correspond to closed curves. The bands also come equipped with a local system, given by a Jordan matrix that measures the transport around the curve.
In this section we will give a combinatorial description of these objects using the dimer formalism and use this description
to construct the corresponding matrix factorizations for the $B$-model. We will focus on the band objects; a similar construction can be done for the string objects.
Note that in the $\mathbb{Z}_2$-graded case more objects are possible and a classification is not known, although there are results in the completed case \cite{burban2010maximal}.
In what follows we assume that $\mathrm{Q}$ is a consistent dimer on a torus and $\twist{\mathrm{Q}}$ is its mirror dimer, which is embedded in a surface $S=\psurf{\twist{\mathrm{Q}}}$.
The dimer $\mathrm{Q}$ will be used for the B-model and $\twist{\mathrm{Q}}$ for the A-model.
\subsection{Definition}
A \emph{garland band} is a sequence $g =(g_0, g_1, \dots, g_{2k-1})$ where $k\ge 1$, $g_{2i} \in \twist{\mathrm{Q}}_1=\mathrm{Q}_1$ and $g_{2i+1} \in \twist{\mathrm{Q}}_2=\mathrm{Q}_2$.
We also demand that
\begin{itemize}
\item[B1] the arrows are contained in the cycles around them: $g_{2i} \in g_{2i\pm 1}$,
\item[B2] consecutive arrows and cycles are different: $g_{i} \ne g_{i+2}$,
\end{itemize}
The indices in the definition are considered in $\mathbb{Z}/{2k\mathbb{Z}}$ and bands are considered up to cyclic shifts by an even offset.
A band is called \emph{primitive} if there is no nontrivial cyclic permutation that maps $g$ to itself, so $g$ is not the power of a smaller band.
Note that this definition does not depend on whether you are working in $\mathrm{Q}$ or its mirror, so there is a one-to-one correspondence between
garland bands in $\mathrm{Q}$ and garland bands in its mirror $\twist{\mathrm{Q}}$.
\subsection{Bands in the A-model}
Given a garland band, we can draw a closed curve on the surface $\psurf{\twist{\mathrm{Q}}}$ by connecting the centers of the arrows $g_{2i+2}$ and $g_{2i}$ by a line
through the face $g_{2i+1}$. Vice versa if we have a closed curve on $\psurf \twist{\mathrm{Q}}$ we can isotope it such that it never enters and leaves a face
via the same arrow. After this isotopy we can write down a list of all the arrows and faces this curve meets and this gives us a garland band.
This band depends only on the isotopy class of the curve in $\psurf \twist{\mathrm{Q}}$ because the universal cover of $\psurf \twist{\mathrm{Q}}$ is a tree of faces
glued together along the arrows.
Given a garland $g$ there is a unique path $p_0\dots p_r \in \mathbb{C} \twist{\mathrm{Q}}$ starting with $p_0=g_0$ and running along all the face cycles.
It is the concatenation of subpaths $p_{i_j}\dots p_{i_{j+1}-1}$ of $g_{2j+1}$ such that $p_{i_j}=g_{2j}$ and $t(p_{i_{j+1}-1}) = h(g_{2j+2})$.
We will call this path the snake path of $g$. It has the property that any two consecutive arrows of the path are contained in either a positive or a negative cycle of $\twist{\mathrm{Q}}$.
The arrows of $p_i$ for which $p_{i-1}p_{i}$ and $p_{i}p_{i+1}$ are subpaths of different cycles correspond to the arrows in the garland.
\[
\xymatrix@=.4cm{
\ar@{.>}@/^/[rr]&&\vtx{}\ar[dd]|{g_0}&&\vtx{}\ar@/_/[ll]\ar@{.>}@/^/[rr]&&\vtx{}\ar[dd]|{g_4}&&\vtx{}\ar@/_/[ll]\ar@{.>}@/^/[rr]&&\dots\\
g:&&&g_1&&g_3&&g_5&&&\\
p:&&\vtx{}\ar@/^/[ll]\ar@{.>}@/_/[rr]&&\vtx{}\ar[uu]|{g_2}&&\vtx{}\ar@/^/[ll]\ar@{.>}@/_/[rr]&&\vtx{}\ar[uu]|{g_6}&&\dots \ar@/^/[ll]
}
\]
From the snake path $p=p_0\dots p_r$ and a sequence of invertible square matrices $(\alpha_i)_{i=0,\dots,r}$ we can construct a twisted complex
\[
\left(P =\bigoplus_i p_i[u_i], \delta= \sum_i (\alpha_i\otimes\widehat{p_{i-1}p_{i}})^{\pm 1}\right)
\]
where $\widehat{p_{i-1}p_{i}}$ is the angle between the two arrows inside the face of which $p_{i-1}p_{i}$ is a subpath. The exponent of $\widehat{p_{i-1}p_{i}}$ is $\pm 1$ depending on whether this face is in $\twist{\mathrm{Q}}_2^{\pm}$. It is there to make sure that $\widehat{p_{i-1}p_{i}}^{\pm 1} \in \mathtt{Gtl}(\twist{\mathrm{Q}})$, because
in the negative cycles the angle goes in the opposite direction.
The shifts $u_i$ are uniquely determined by $u_0=0$ and the demand that $\deg \delta=1$. If we work with $\mathbb{Z}_2$-gradings all the $u_i$ are zero,
but if we specified a perfect matching the degrees can only be satisfied when $\sum_i \pm \deg \widehat{p_{i-1}p_{i}}=0$.
This is precisely when the curve of the garland is graded for the graded surface $\gsurf{{\cal P}}{\twist{\mathrm{Q}}}$.
The pair $(P,\delta)$ is indeed a twisted complex: in the Maurer-Cartan equation all higher products are zero because
to apply the reduction step all angles in one face would have to be present, which is impossible because the curve of a garland enters and leaves each face via different arrows.
Furthermore $\delta^2=0$ because if $\widehat{p_{i-1}p_{i}}^{\pm 1}$ and $\widehat{p_{i}p_{i+1}}^{\pm 1}$ concatenate they must sit in the same cycle and then their product is zero
in $\mathtt{Gtl}(\twist{\mathrm{Q}})$.
By applying base changes to the matrices we see that the twisted object only depends on the conjugacy class of the product $\alpha = \prod_i \alpha_i^{\pm 1}$.
We denote the twisted object by $B(g,\alpha)$.
\begin{lemma}
If $\gamma$ is a graded curve in $\gsurf{{\cal P}}{\twist{\mathrm{Q}}}$ and the corresponding garland is $g$, then $B(g,\alpha)$ corresponds
to the object in the topological Fukaya category associated to (an appropriate shift of) $\gamma$ and $\alpha$ by Haiden et al.
\end{lemma}
\begin{proof}
This follows from the construction in \cite{haiden2014flat}.
\end{proof}
\subsection{Bands in the B-model}
Now we want to construct the corresponding matrix factorizations of ${\mathtt{J}}(\mathrm{Q},\ell)$. The idea is to make a cone over all the matrix factorizations
coming from arrows in the snake path and then simplifying this matrix factorization using a shortening lemma.
\begin{lemma}[Shortening lemma]
Let $(P,d)$ be a matrix factorization and $P_a, P_b$ two graded summands of $P$ such that $d_{ba}:=\id_b d \id_a= \phi$ is an isomorphism.
The pair
\[
(P^{red}, d^{red}) := \left(\frac{P}{P_a\oplus P_b},~ d^{red}_{ij} = d_{ij} - d_{ia}\phi^{-1}d_{bj}\right)
\]
is a matrix factorization quasi-isomorphic to $(P,d)$.
\end{lemma}
\begin{proof}
We need to check that for $i,j\not \in \{a,b\}$
\se{
(d^{red})^2_{ij}
&=\sum_{k\ne a,b} (d_{ik} - d_{ia}\phi^{-1}d_{bk})(d_{kj} - d_{ka}\phi^{-1}d_{bj})\\
&=\sum_{k\ne a,b} (d_{ik}d_{kj} - d_{ia}\phi^{-1}d_{bk}d_{kj} - d_{ik}d_{ka}\phi^{-1}d_{bj} + d_{ia}\phi^{-1}d_{bk}d_{ka}\phi^{-1}d_{bj})\\
&=(\sum_{k\ne a,b} d_{ik}d_{kj}) - d_{ia}\phi^{-1}(d^2_{bj}-d_{ba}d_{aj}-d_{bb}d_{bj}) \\
&\phantom{=}- (d^2_{ia}-d_{ia}d_{aa}-d_{ib}d_{ba})\phi^{-1}d_{bj}+ d_{ia}\phi^{-1}(d^2_{ba}-d_{ba}d_{aa}-d_{bb}d_{ba})\phi^{-1}d_{bj}\\
&= (\sum_{k\ne a,b} d_{ik}d_{kj}) + d_{ia}d_{aj} + d_{ib}d_{bj} + 0 = d^2_{ij}.
}
In the calculation we used the fact that $d^2_{uv}=0$ if $u\ne v$ and $d_{uv}=0$ if $u=v$.
The quasi-isomorphism $\psi: P^{red}\to P$ is given by
\[
\psi_{ij} = \begin{cases}
\delta_{ij}\id_i &\text{if }i \ne a,b\\
-\phi^{-1}d_{bj} &\text{if }i = a\\
0 &\text{if }i = b
\end{cases}
\text{ and }
\psi^{-1}_{ij} = \begin{cases}
\delta_{ij}\id_i &\text{if }j \ne a,b\\
0 &\text{if }j = a\\
-d_{ia}\phi^{-1} &\text{if }j = b
\end{cases}
\]
A straightforward calculation shows that these maps are indeed quasi-inverses.
\end{proof}
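A first easy consequence is the degenerate case where $P$ has no other summands: if $P=P_a\oplus P_b$ and $d_{ba}=\phi$ is invertible, then $P^{red}=0$, so the two-term factorization
\[
\left(P_a\oplus P_b,~\sm{0&\ell\phi^{-1}\\ \phi&0}\right)
\]
is quasi-isomorphic to the zero object.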
The shortening lemma has some easy consequences.
\begin{lemma}\label{subcycle}
If $a_1\dots a_k$ is a positive cycle in $\mathrm{Q}_2^+$ then
\[
M_{a_1\dots a_l} := \left( h(a_1){\mathtt{J}}[1] \oplus t(a_l){\mathtt{J}}, \sm{0&a_1\dots a_l\\a_{l+1}\dots a_k&0}\right) =
\xymatrix{t(a_l){\mathtt{J}} \ar@<.5ex>[r]^{a_1\dots a_l}&h(a_1){\mathtt{J}} \ar@<.5ex>[l]^{a_{l+1}\dots a_k}}
\]
is equivalent to the iterated cone over the morphisms $M_{a_1}\stackrel{\widehat{a_1a_2}}{\to}M_{a_2}\stackrel{\widehat{a_2a_3}}{\to}\dots \to M_{a_l}$.
\end{lemma}
\begin{proof}
For $l=2$ we just apply the shortening lemma:
\[
\vcenter{
\xymatrix@C=2cm{\vtx{}\ar@/_/[r]|{\ell a^{-1}}&\vtx{}\ar@/_/[l]|{a}\\
\vtx{}\ar@/_/[r]|{b}\ar@{<.}[u]|{\ell (ab)^{-1}}&\vtx{}\ar@/_/[l]|{\ell b^{-1}}\ar@{<.}[u]|{-1}\\
}}
\hspace{.5cm}\to\hspace{.5cm}
\vcenter{
\xymatrix@C=2cm{\vtx{}\ar@/_/[r]|{\ell a^{-1}b^{-1}}&\vtx{}\ar@/_/[l]|{ab}}}.
\]
Under the quasi-isomorphism $\mathrm{Cone}(\widehat{ab})\cong M_{ab}$ the morphism
$\widehat{bc}: M_c \to \mathrm{Cone}(\widehat{ab})$ becomes
\[
\vcenter{
\xymatrix@C=2cm{\vtx{}\ar@/_/[r]|{\ell (ab)^{-1}}&\vtx{}\ar@/_/[l]|{ab}\\
\vtx{}\ar@/_/[r]|{c}\ar@{<.}[u]|{\ell (abc)^{-1}}&\vtx{}\ar@/_/[l]|{\ell c^{-1}}\ar@{<.}[u]|{-1}
}}
\]
and hence the proof can be finished by induction.
\end{proof}
Applying a similar reasoning to more complicated cones leads to the following construction:
consider a band $g =(g_0,g_1,\dots,g_{2k})$ in $\mathrm{Q}$ and assume that $g_1$ is a cycle in $\mathrm{Q}_2^+$.
This gives a picture that looks like this:
\[
\xymatrix@=.4cm{
\dots\ar@/^/[rr]&&\vtx{0}\ar[dd]|{g_0}&&\vtx{3}\ar@/_/[ll]\ar@/^/[rr]&&\vtx{4}\ar[dd]|{g_4}&&\vtx{7}\ar@/_/[ll]\ar@/^/[rr]&&\dots\\
&&&g_1&&g_3&&g_5&&&\\
\dots&&\vtx{-\!\!2}\ar@/^/[ll]\ar@/_/[rr]&&\vtx{1}\ar[uu]|{g_2}&&\vtx{2}\ar@/^/[ll]\ar@/_/[rr]&&\vtx{5}\ar[uu]|{g_6}&&\dots\ar@/^/[ll]
}
\]
Let $P$ be the direct sum of all the numbered vertices, where the even/odd-numbered vertices sit in even/odd degree.
The map $d:P\to P$ consists of two parts $d=d_a+d_b$, each of which can be written as a signed sum of paths.
The first part $d_a$ is the sum of the subpaths of the
$g_i$ that connect the vertices $i$ and $i-1$ (in both directions and only for odd $i$).
The second part $d_b$ is the sum of all the upper arcs minus the sum of all the lower arcs in the picture.
It is easy to check that both $(P,d)$ and $(P,d_a)$ are matrix factorizations.
\[
\xymatrix@=.4cm{
\dots\ar@/^/[ddrr]|{d_a}\ar@/^/[rr]|{d_b}&&\vtx{0}\ar@/_/[ddrr]|{d_a}&&\vtx{3}\ar@/^/[ddrr]|{d_a}\ar@/_/[ll]|{d_b}\ar@/^/[rr]|{d_b}&&\vtx{4}\ar@/_/[ddrr]|{d_a}&&\vtx{7}\ar@/_/[ll]|{d_b}\ar@/^/[rr]|{d_b}\ar@/^/[ddrr]|{d_a}&&\dots\\
&&&&&&&&&&\\
\dots&&\vtx{-\!\!\!2}\ar@/^/[uull]|{d_a}\ar@/^/[ll]|{-d_b}\ar@/_/[rr]|{-d_b}&&\vtx{1}\ar@/_/[uull]|{d_a}&&\vtx{2}\ar@/^/[uull]|{d_a}\ar@/^/[ll]|{-d_b}\ar@/_/[rr]|{-d_b}&&\vtx{5}\ar@/_/[uull]|{d_a}&&\dots\ar@/^/[ll]|{-d_b}\ar@/^/[uull]|{d_a}
}
\]
More precisely, $(P,d_a)$ can be seen as a direct sum of $k$ matrix factorizations, one for
each cycle in the band. This matrix factorization splits as a direct sum of two subfactorizations: the one coming from the positive cycles and the one coming from the negative cycles.
The map $d_b$ can be interpreted as a morphism between those two and
$M(g,1):=(P,d_a+d_b)$ is the cone of this map.
\begin{proposition}
Under the equivalence ${\mathtt {D}}_{\mathbb{Z}_2} {\mathtt {mf}}(\mathrm{Q})\cong {\mathtt {D}}_{\mathbb{Z}_2} \mathtt{Gtl}(\twist{\mathrm{Q}})$
the matrix factorization $M(g,1)$ corresponds to the object $B(g,1)$.
\end{proposition}
\begin{proof}
By construction the equivalence commutes with taking cones. Therefore,
to find an object equivalent to $B(g,1)$,
we need to perform the cone construction on the matrix factorizations of the arrows in
the snake path $p=p_1\dots p_k$ in the mirror dimer according to the morphisms $\widehat{p_ip_{i+1}}$.
The arrows of the snake path $p$ in the mirror dimer do not form a path in $\mathrm{Q}$, but rather a configuration like this:
\[
\xymatrix@=.2cm{
\dots\ar@/^/[rr]&&\vtx{}\ar[dd]|{g_0}&&\vtx{}\ar@/_/[ll]\ar@/^/[rr]&&\vtx{}\ar[dd]|{g_4}&&\vtx{}\ar@/_/[ll]\ar@/^/[rr]&&\dots\\
&&&g_1&&g_3&&g_5&&&\\
\dots&&\vtx{}&&\vtx{}\ar[uu]|{g_2}&&\vtx{}&&\vtx{}\ar[uu]|{g_6}&&\dots
}
\]
At the arrows $p_j=g_i$ there are either two morphisms leaving or two arriving. At all other arrows
there is one morphism arriving and one leaving. To perform the cone operation we proceed in two steps.
First we take all the cones over the morphisms $\widehat{p_ip_{i+1}}$ for which $p_i$ is not one of the $g_j$
or for which $\widehat{p_ip_{i+1}}$ is contained in a positive cycle.
Repeated application of lemma \ref{subcycle} shows that the result of this first step is $(P,d_a)$.
The remaining morphisms run between consecutive summands of $(P,d_a)$ and are of the form
\[
\xymatrix@=.4cm{
\vtx{}\ar@/^/[ddrr]|{vu}\ar@{.>}@/^/[rr]|{u}&&\vtx{}\ar@/_/[ddrr]|{wv}&&\\
&&&&\\
&&\vtx{}\ar@/^/[uull]^{\ell (vu)^{-1}}\ar@{.>}@/_/[rr]|{-w}&&\vtx{}\ar@/_/[uull]_{\ell (wv)^{-1}}
}.
\]
These are precisely the components of $d_b$.
\end{proof}
\begin{remark}
If we have an invertible $n\times n$-matrix $\alpha$ we can also construct a matrix factorization for $B(g,\alpha)$:
\[
M(g,\alpha) := (P^{\oplus n}, d_a^{\oplus n} + d_b^\alpha)
\]
where $d_b^\alpha$ is equal to $d_b^{\oplus n}$ except for the two paths that go from vertex $-1$ and $-2$ to vertex $0$ and $1$. Those paths are
tensored by $\alpha$ to ensure that the parallel transport around the cycle equals $\alpha$.
\end{remark}
\section{Intermezzo I: Spider graphs and Ribbon graphs}\label{sectionspider}
Spider graphs and ribbon graphs are two combinatorial objects that occur as limit situations of surfaces.
A \emph{graph with legs} $\Lambda$ is a generalization of a graph where we allow edges that are attached to a node on one side only, while
the other side goes off to infinity. These edges are called the legs or external edges, while the other edges are called the internal edges.
We split the set of edges as $\Lambda_1 = \Lambda_1^{int}\cup \Lambda_1^{ext}$.
A graph with legs is called weighted
if there is a map $w: \Lambda_1^{int} \to \mathbb{R}_{>0}$. These weights can be thought of as the lengths of the edges. The legs have infinite length, so they don't need
weights. By gluing together line segments of the appropriate length, we can construct a metric space $|\Lambda|$ that is a realization of the graph.
\subsection{Ribbon graphs}
A graph with legs $\Gamma$ becomes a \emph{ribbon graph} \cite{mulase1998ribbon} if we assign to each node a cyclic order on the edges incident to it.
From a ribbon graph we can construct a surface with boundary by substituting each node of valency $n$ by an $n$-gon and
each edge by a strip. We glue the ends of the strips to the polygon of the node in the specified cyclic order. The result is an oriented surface with boundary $\rib \Gamma$
and there is a retraction $\pi: \rib \Gamma \to |\Gamma|$, defined up to homotopy.
If $\Gamma$ is a connected ribbon graph without legs we say that it is of type $\Sigma_{g,n}$ if $\rib \Gamma$ is a surface
with genus $g$ and $n$ boundary components.
Vice versa, if $\Gamma$ is any graph embedded in an orientable surface, we can give it the structure of a ribbon graph by
using the anticlockwise cyclic order on the surface in each node. The space $\rib \Gamma$ will be homeomorphic to a neighborhood of the graph in the surface.
\erbij{
If $e$ is an internal edge of the ribbon graph $\Gamma$ that is not a loop, we can define its contraction $C_e\Gamma$ as the ribbon graph where $e$ is removed and
the two nodes of $e$ are identified. The cyclic order in the new node is obtained by gluing
together the linear orders we get in both nodes after deleting $e$ (e.g.\ $x<e<y<$ and $u<e<v<$ gives $y<x<v<u<$). }
Given a dimer quiver we can construct a ribbon graph in the following way. As a graph it is dual to the quiver, so its nodes correspond to the cycles in $\mathrm{Q}_2$
and its edges correspond to the arrows. We draw this graph on $\psurf{\mathrm{Q}}$ by putting a node in each face and connecting two nodes by an edge perpendicular to
the corresponding arrow. Note that this graph is bipartite because $\mathrm{Q}_2 =\mathrm{Q}_2^+\cup \mathrm{Q}_2^-$. We can also go in the opposite
direction and construct a dimer quiver from a bipartite ribbon graph without legs%
\footnote{This is how dimer models are often introduced in the literature.}.
\subsection{Spider graphs}
A graph with legs becomes a \emph{spider graph} if we assign to each node $v$ a genus $g_v$, which is a nonnegative integer.
The total genus of a spider graph is the sum of the genera of the nodes plus the genus of the graph itself
\[
g(\Lambda) = \sum_{v \in \Lambda_0} g_v + \# \Lambda_1^{int} - \# \Lambda_0 + 1.
\]
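For example, a spider graph consisting of two nodes with genera $1$ and $0$ that are joined by two internal edges has total genus
\[
g(\Lambda) = (1+0) + 2 - 2 + 1 = 2,
\]
independently of the number of legs.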
From a spider graph we can construct a surface by substituting each node of genus $g$ and valency $n$ by a surface
with genus $g$ and $n$ boundary components, substituting each edge by a cylinder and gluing these surfaces and cylinders together
along boundary components. The resulting surface has the same genus as the spider graph
and has a hole for each leg of the spider graph. It is called the tubular surface and it comes with a projection map $\pi: \tube{\Lambda}\to \Lambda$,
which is defined up to homotopy. We say that $\Lambda$ is of type $\Sigma_{g,n}$ if $\tube{\Lambda}$ is a surface with genus $g$ and $n$ holes.
We can also go in the opposite direction. Let $\dot S$ be a surface of genus $g$ and $n$ holes. A slicing $\Sigma$ is a set of homotopically different nonintersecting curves on the surface which
includes the $n$ simple curves that go around the holes. The spider graph $\Lambda_\Sigma$ has an edge for each curve and a node for each connected component
of the complement of the curves, except the $n$ components that wrap around the holes. The genus of the node is the genus of the corresponding connected component.
It is clear from this construction that $\tube{\Lambda_\Sigma}$ is homeomorphic to $\dot S$.
\erbij{If $e$ is an internal edge of the spider graph $\Lambda$, we can define its contraction $C_e\Lambda$ as the spider graph where $e$ is removed and
the two nodes of $e$ are identified. The genus of this new node is the sum of the genera of the two original nodes if they are different, or
one more than the genus of the original node if $e$ was a loop.
}
\section{Intermezzo II: Tropical and toric geometry}\label{sectiontropical}
In this intermezzo we review some well-known facts from tropical geometry and toric geometry and tie this to the notion of spider graphs.
For more information on the former we refer to \cite{mikhalkin2004amoebas}, while for the latter we refer to \cite{fulton1993introduction}.
\subsection{Tropical curves}
A tropical polynomial in two variables is an expression of the form $f = \min_{i\in I}\{a_iX+b_iY+c_i\}$, where $a_i,b_i,c_i \in \mathbb{Z}$.
The tropical curve $\mathtt{trop}(f)$ is the set of all points $(X,Y) \in \mathbb{R}^2$ at which the function $f:\mathbb{R}^2 \to \mathbb{R}$ is not smooth, or
equivalently the points where the minimum is reached by at least two of the linear functions. In general this set is a union of line segments
and half lines.
To each tropical polynomial we can assign its Newton polygon $\mathrm{NP}(f)\subset \mathbb{R}^2$. This is the convex hull of the points $(a_i,b_i)$.
For each point $(x,y)$ in the tropical curve we define its support as the convex hull of all $(a_i,b_i)$ for which $f(x,y)= a_ix+b_iy+c_i$.
If $(x,y)$ lies on an edge then $\mathrm{supp}(x,y)$ will be a line segment and if $(x,y)$ is a node then $\mathrm{supp}(x,y)$ will be a subpolygon of $\mathrm{NP}(f)$.
The set of all supports will form a subdivision of the Newton polygon ${\cal F}(f)=\{\mathrm{supp}(x,y) | (x,y) \in \mathtt{trop}(f)\}$.
From a tropical polynomial, we construct a spider graph $\Lambda(f)$ by
letting the line segments of $\mathtt{trop}(f)$ be the internal edges and the half lines be the legs.
We turn an edge into a $k$-fold edge if there are $k-1$ lattice points in the interior of the support in ${\cal F}(f)$.
The nodes of the spider graph are the endpoints of the line segments.
We set the genus of a node equal to the number of interior points in the support of the node.
We can also assign weights to all the internal edges, corresponding to the affine lengths of
the line segments of the tropical curve. I.e. if $e$ is an edge between the points
$(x_1,y_1)$ and $(x_2,y_2)$ then its affine length is equal to $W_e :=\min_{n,m\in \mathbb{Z}} |(x_1-x_2)m+(y_1-y_2)n|$, where the minimum is taken over the nonzero values only.
In other words the vector that connects the two points is $W_e$ times an elementary affine
line segment. Note that this makes sense because the vector always has a rational direction,
as the coefficients $a_i,b_i$ of the tropical polynomial are integers.
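For example, for an edge $e$ between the points $(0,0)$ and $(3,6)$ the connecting vector is $(3,6)=3\cdot (1,2)$; the smallest nonzero value of $|3m+6n|$ with $m,n \in \mathbb{Z}$ is $3$ (take $m=1$, $n=0$), and indeed the edge is $W_e=3$ times the elementary segment in the direction $(1,2)$.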
A tropical curve is called smooth if the subdivision ${\cal F}(f)$ is as fine as possible: it consists of elementary triangles. For the spider graph
this means that all nodes of the curve are trivalent and have genus zero. Just as in complex geometry a generic tropical curve is smooth.
\subsection{Amoebae}
Given a tropical polynomial we can consider a one parameter family of ordinary polynomials $f_t := \sum_i \alpha_i t^{c_i}X^{a_i}Y^{b_i}$
where the $\alpha_i \in \mathbb{C}$ are some constants. For generic $\alpha_i$ and $t$ this polynomial defines a smooth Riemann surface $$S_t=\{(x,y) \in (\mathbb{C}^*)^2 | f_t(x,y)=0\}.$$
We can look at the image of the map $\ensuremath{\mathsf{Am}}_t : S_t \to \mathbb{R}^2: (x,y) \mapsto \frac 1{\log |t|}(\log |x|, \log |y|)$.
This image is called the amoeba of $f_t$ because it looks like a blob with holes and tentacles. If we let $t$ go to infinity the interior of the amoeba becomes thinner and thinner until we end up with a graph. The main theorem in tropical geometry states that $\lim_{t \to \infty }\ensuremath{\mathsf{Am}}_t(S_t) = \mathtt{trop}(f)$ \cite{mikhalkin2004amoebas}.
If we fix a point $p$ on the tropical curve and a small ${\epsilon}>0$, we can look at the preimage $\ensuremath{\mathsf{Am}}_t^{-1}(B(p,{\epsilon}))$ for $t\gg 0$. If $p$ lies in the interior
of a $k$-fold edge of the tropical curve, this preimage will be a disjoint union of $k$ cylinders.
If $p$ is a node with valency $v$ and genus $g$ the preimage will be a surface of genus $g$ with $v$ punctures. Therefore $S_t$ can be seen
as the tubular surface of $\Lambda(f)$.
\begin{lemma}\label{genustropical}
Let $f$ be a tropical polynomial and let $\Lambda(f)$ be its spider graph.
\begin{enumerate}
\item The genus of $\Lambda(f)$ equals the number of internal lattice points of the Newton polygon $\mathrm{NP}(f)$.
\item The number of legs of $\Lambda(f)$ equals the number of boundary lattice points of the Newton polygon $\mathrm{NP}(f)$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is well known that if $S\subset \mathbb{C}^*\times \mathbb{C}^*$ is a smooth curve defined by a polynomial with Newton polygon $\mathrm{NP}$, then the genus
of $S$ is the number of internal lattice points and the number of punctures of $S$ is the number of boundary lattice points.
\end{proof}
We end this section with an example. Below we give the Newton polygon, the tropical curve and
the spider graph for the tropical polynomial
\begin{center}
$f = \min \{2X, X+Y, -X+Y+1, -X+1, -Y, 2X-Y, Y\}$\\
$f_t = X^2+XY+tX^{-1}Y +tX^{-1}+Y^{-1}+X^2Y^{-1}+Y$\\
~\\
\resizebox{!}{2cm}{\begin{tikzpicture}
\draw (1,1) -- (2,0);
\draw (0,1) -- (1,1);
\draw (-1,0) -- (-1,1);
\draw (0,-1) -- (-1,0);
\draw (2,-1) -- (0,-1);
\draw (2,0) -- (2,-1);
\draw (-1,1) -- (0,1);
\draw (0,-1) -- (0,1);
\draw (2,0) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (1,1) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (-1,1) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (-1,0) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (0,-1) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (2,-1) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (0,1) node[circle,draw,fill=black,minimum size=5pt,inner sep=1pt] {};
\draw (0,0) node[circle,draw,fill=white,minimum size=5pt,inner sep=1pt] {};
\draw (1,0) node[circle,draw,fill=white,minimum size=5pt,inner sep=1pt] {};
\draw (1,-1) node[circle,draw,fill=white,minimum size=5pt,inner sep=1pt] {};
\end{tikzpicture}}
\hspace{.5cm}\resizebox{!}{2cm}{\begin{tikzpicture}
\draw (0,0) -- (1,1);
\draw (0,0) -- (0,1);
\draw (-2,0) -- (-3,0);
\draw (-2,0) -- (-3,-1);
\draw (0,0) --(0,-2);
\draw (0,0) -- (1,0);
\draw (-2,0) -- (-2,1);
\draw (0,0) --(-2,0);
\end{tikzpicture}}
\hspace{.5cm}\resizebox{!}{2cm}{\begin{tikzpicture}
\draw (0,0) -- (1,1);
\draw (0,0) -- (0,1);
\draw (-2,0) -- (-3,0);
\draw (-2,0) -- (-3,-1);
\draw (0,0) .. controls (1/30,0) and (1/10,0) .. (1/10,-2);
\draw (0,0) .. controls (-1/30,0) and (-1/10,0) .. (-1/10,-2);
\draw (0,0) -- (1,0);
\draw (-2,0) -- (-2,1);
\draw (0,0) .. controls (-1,1/10) .. (-2,0);
\draw (0,0) .. controls (-1,-1/10) .. (-2,0);
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (-2,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny 0}};
\end{tikzpicture}}\\
\end{center}
\subsection{Toric varieties}
From a tropical polynomial $f = \min_{i \in I}\{a_iX+b_iY+c_i\}$ we can also construct a $3$-dimensional toric variety ${\blb V}_f$. The fan of this variety can be constructed by taking
the cone over the Newton polygon $\mathrm{NP}(f)$ and looking at the subdivision of this cone induced
by the subdivision ${\cal F}(f)$.
Instead of using the fan we can also construct ${\blb V}_f$ as follows.
First we define the 3d polytopes $\mathrm{PT}_{f,r} := \{ (x,y,z) \in \mathbb{R}^3 \mid \forall i \in I:\ a_ix+b_iy+z \ge -r c_i\}$. These polytopes are noncompact because we can make $z$ arbitrarily large.
If we don't specify $r$ we assume it is equal to $1$: $\mathrm{PT}_f := \mathrm{PT}_{f,1}$.
We can partition $\mathrm{PT}_{f,r}$ into subsets of points for which the same
inequalities $a_ix+b_iy +z\ge -r c_i$ are equalities. These subsets are called the faces.
For each face $\sigma$ we define the support $I_{\sigma}$ as the set of indices which give equalities.
\[
I_{\sigma} := \{ i \in I | \forall (x,y,z) \in \sigma: a_ix+b_iy +z= -r c_i\}.
\]
For all $r>0$ the polytope
$\mathrm{PT}_{f,r}$ has the same number of faces because $\mathrm{PT}_{f,r}=r \mathrm{PT}_{f,1}$.
On the other hand $\mathrm{PT}_{f,0}=\lim_{r \to 0}\mathrm{PT}_{f,r}$, so $\mathrm{PT}_{f,0}$ is a cone and its faces correspond to the noncompact faces of $\mathrm{PT}_{f,1}$, because
all the others are shrunk to a point.
Because $\mathrm{PT}_f$ is a $3$-dimensional polytope there are $2$-, $1$- and $0$-dimensional faces. The $1$-skeleton of the polytope is the union of all $1$- and $0$-dimensional faces.
The polytope projects onto the $X,Y$-plane and the image of the $1$-skeleton of $\mathrm{PT}_{f}$ is precisely the tropical curve $\mathtt{trop}(f)$.
Now consider the graded ring
\[
R_f := \bigoplus_{r=0}^\infty \mathbb{C}\{X^uY^vZ^w| (u,v,w) \in \mathrm{PT}_{f,r} \cap \mathbb{Z}^3\}.
\]
The toric variety of $f$ is defined as
\[
{\blb V}_f := \ensuremath{\mathsf{Proj}} R_f.
\]
It is a projective variety over the ring $(R_f)_0 = \mathbb{C}[X^uY^vZ^w \mid \forall i \in I:\ a_iu+b_iv+w\ge 0 ]$.
The $\mathbb{Z}^3$ grading on $R_f$ coming from the exponents of $X,Y,Z$ gives rise to a $\mathbb{C}^{*3}$-action on ${\blb V}_f$ and this turns ${\blb V}_f$ into a toric variety.
The geometry of the polytope $\mathrm{PT}_f$ and the variety ${\blb V}_f$ are closely related.
Every $k$-dimensional face $\sigma$ will correspond to a torus orbit ${\cal O}_\sigma \subset {\blb V}_f$ which
consists of the points for which the functions $X^uY^vZ^w$ are nonzero if and only if $(u,v,w) \in \sigma$.
Moreover, if we restrict the action to the real torus $U_1^3 \subset \mathbb{C}^{*3}$, the quotient ${\blb V}_f/U_1^3$ can be identified with the polytope
$\mathrm{PT}_{f}$ such that each face of the polytope is the image of the corresponding orbit. The identification ${\blb V}_f/U_1^3\to \mathrm{PT}_{f}$ is however
not unique: it depends on a moment map and hence on the choice of a symplectic structure on ${\blb V}_f$.
If we look at the tropical curve $\mathtt{trop}(f)$, each node $p$ will correspond to a torus fixed point in ${\blb V}_f$. All torus orbits that contain this
fixed point in their closure form an open subset ${\blb U}_p \subset {\blb V}_f$. This subset is also a toric variety corresponding to the tropical polynomial containing
only the terms in the support of $p$: $f_p = \min_{i \in I_p}\{a_iX+b_iY+c_i\}$. The tropical curve corresponding to $f_p$ is obtained by looking only at a neighborhood of $p$ and extending all edges to infinity. Finally, ${\blb V}_f$ is a smooth variety if and only if $\mathtt{trop}(f)$ is a smooth tropical curve.
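To make the local picture concrete, here are two standard toric computations. If the support of $p$ is an elementary triangle, say $f_p = \min\{0,X,Y\}$, then ${\blb U}_p \cong \mathbb{C}^3$. If the support is a unit square, say $f_p = \min\{0,X,Y,X+Y\}$, then
\[
{\blb U}_p \cong \{(x_1,x_2,x_3,x_4) \in \mathbb{C}^4 \mid x_1x_2 = x_3x_4\},
\]
the conifold, which is the simplest singular chart that can occur.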
\begin{remark}
We can also define tropical curves for which the coefficients $c_i$ are in ${\blb Q}$ instead of $\mathbb{Z}$.
The curve $S_t$ will still be uniquely defined for $t \in \mathbb{R}_{>0}$ and we can define the limit
of the amoeba map for these $t$ and observe that it is equal to the tropical curve.
On the other hand, in the construction of the toric variety it is easy to see that
if $f = \min_{i \in I}\{a_iX+b_iY+c_i\}$ and $f' = \min_{i \in I}\{a_iX+b_iY+kc_i\}$
where $k\in \mathbb{N}$, then $\ensuremath{\mathsf{Proj}} R_f$ and $\ensuremath{\mathsf{Proj}} R_{f'}$ are related by a Veronese map and
hence are isomorphic varieties, so it also makes sense to define ${\blb V}_f$ for tropical polynomials
with rational $c$-coefficients.
\end{remark}
\section{Moduli spaces}\label{sectionmoduli}
One main direction of research in mirror symmetry is the idea that two mirror manifolds $M,M^\vee$ should be connected by an SYZ-fibration.
Loosely speaking this means that there is a diagram
\[
\xymatrix{M\ar[dr]^\pi&&M^\vee\ar[dl]_{\pi'}\\&B&}
\]
such that $B$ is an $n$-dimensional space and the generic fibers of the projection maps $\pi,\pi'$ are $n$-dimensional Lagrangian tori.
The space $B$ is called the base and in general it is not smooth but in many cases it has the structure of a tropical variety.
\erbij{
On the symplectic side this base can be seen as a moduli space that classifies nonintersecting Lagrangian submanifolds, while on the complex side
it could be seen as a quotient space that classifies orbits of points under some $U_1^n$-action.
}
In our setting the base should be a tropical curve and on the A-side the fibers are circles. On the B-side
the mirror is a noncommutative Landau-Ginzburg model $({\mathtt{J}},\ell)$ and the points of $M^\vee$ should be interpreted as representations of the algebra ${\mathtt{J}}$. These representations give rise to matrix factorizations, so we get a moduli space of matrix factorizations.
In this section we will work out both sides in detail.
\subsection{The A-model: quadratic differentials}
Let $\bar S$ be a compact smooth Riemann surface of genus $g$ and $m_1,\dots, m_n \in \bar S$ an ordered set of $n$ distinct marked points.
Removing the marked points we get a punctured Riemann surface $\dot S = \bar S \setminus \{m_1,\dots, m_n\}$. We will assume that $\dot S$ has at least 3 punctures, to ensure
that $\dot S$ is a Riemann surface with negative curvature.
A quadratic differential \cite{strebel1984quadratic} on $\dot S$ is a holomorphic section of the square of the canonical bundle: $\Phi \in \Gamma(\dot S, K_{\bar S}^{\otimes 2})$. In local coordinates we can write $\Phi= f(z)dz^2$, where $f(z)$ is only allowed to have poles at the marked points.
If $\Phi$ is nonzero at a point $p \in \dot S$ we can find a holomorphic coordinate
such that $\Phi= dw^2$ around $p$, with $w(p)=0$.
At all other points $q \in \bar S$ we can write $\Phi=aw^k dw^2$, where $k$ is the order of the zero/pole of $f$ at $q$ and $a$ is a constant.
If $k\ne -2$ then we can choose $a=1$, but if $k=-2$ we cannot get rid of the coefficient $a$ by a base change. The coefficient $a$ is called the residue.
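The residue is indeed intrinsic: under a coordinate change $w=\varphi(u)$ with $\varphi(0)=0$ and $\varphi'(0)\ne 0$ we get
\[
a\frac{dw^2}{w^2} = a\left(\frac{\varphi'(u)}{\varphi(u)}\right)^2du^2 = a\frac{du^2}{u^2}(1+O(u)),
\]
so the coefficient of the order $-2$ part is unchanged.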
A tangent vector $v \in T_p \dot S$ is called horizontal if $\Phi(v)\ge 0$. Note that in general
$\Phi(v)$ is a complex number, so in a generic point $p$ this condition singles out a one-dimensional real subspace
in $T_p \dot S$. This means that $\Phi$ defines a foliation on $\dot S$, which we call the horizontal foliation.
Generically, a leaf of the foliation can either be a closed curve or a curve that connects two punctures. There are also special leaves: the zeros of $\Phi$ and
the leaves that have a zero of $\Phi$ in their limit.
In the neighborhood of a regular point $p$ the horizontal leaves are all parallel: they look like the horizontal lines in the $w$-complex plane.
If $p$ is a zero of order $k>0$ the picture of the trajectories has a symmetry of order $k+2$ because a base change $w = \zeta w'$ with $\zeta^{k+2}=1$ will leave
the differential invariant. The local picture of horizontal leaves looks like this:
\begin{center}
\begin{tabular}{cccc}
\begin{tikzpicture}[scale=.5]
\draw (0,0) coordinate (a_1) node {$\bullet$} -- (0,0);
\draw (0,-1.5) -- (0,1.5);
\foreach \i in {1,...,3}
{
\draw (.35*\i,-1.5) -- (.35*\i,1.5);
\draw (-.35*\i,-1.5)-- (-.35*\i,1.5);
}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.35]
\draw (0,-.5) coordinate (a_1) node {$\bullet$} -- (0,1.5);
\draw (0,-.5) -- (-1.73,-1.5);
\draw (0,-.5) -- (1.73,-1.5);
\foreach \i in {1,...,3}
{
\draw (-1.73-.2*\i,-1.5+.3*\i) to[out=30,in=270] (-.3*\i,1.5);
\draw (1.73+.2*\i,-1.5+.3*\i) to[out=150,in=270] (.3*\i,1.5);
\draw (-1.73+.2*\i,-1.5-.3*\i) to[out=30,in=150] (1.73-.2*\i,-1.5-.3*\i);
}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\draw (0,0) coordinate (a_1) node {$\bullet$} -- (0,0);
\foreach \i in {1,...,12}
{
\draw (0,0) -- (30*\i:1.5);
}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\draw (0,0) coordinate (a_1) node {$\bullet$} -- (0,0);
\foreach \i in {1,...,5}
{
\draw (.3*\i,0) arc (0:360:.3*\i);
}
\end{tikzpicture}
\\
$k=0$&$k=1$&$k=-2, a>0$&$k=-2, a<0$
\end{tabular}
\end{center}
In a zero of order $k$ there are $k+2$ special leaves and all other leaves bend away from the zero.
If we look at poles, the most interesting case is $\Phi = a \frac{dw^2}{w^2}$.
If $a$ is a positive real number the leaves will be rays emanating from the pole. If $a$ is negative the leaves will be circles
around the poles and if $a$ is complex they will be spirals around the poles. Poles of other orders will always have leaves that emanate from the pole.
The special leaves form a graph $\mathrm{SG}(\Phi)$ on the surface $S$, which has internal edges corresponding to trajectories between
two zeros and external edges corresponding to trajectories between a zero and a pole.
The complement of the graph consists of connected components that contain parallel trajectories. In each component all trajectories are isotopic and we have
two possibilities: the leaves are open curves or the leaves are circles. We call these components strips and cylinders.
Each strip can be assigned a width. To do this we first define the length of an arbitrary curve $\gamma$:
\[
\ell(\gamma) =\int_0^1 \sqrt{|\Phi(\frac {d\gamma}{dt})|} dt ~~~(*)
\]
The width of the strip is the minimal length of a curve that crosses it. The cylinders can be given a width and a circumference, which is the length of a leaf.
Note that cylinders can have an infinite width, if they wrap around one of the punctures. This is not possible for the strips.
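As an illustration of $(*)$: around a double pole with $\Phi = a\frac{dw^2}{w^2}$ and $a<0$, the leaves are the circles $\gamma(t) = re^{2\pi i t}$ and
\[
\ell(\gamma) = \int_0^1 \sqrt{\left|a(2\pi i)^2\right|}\,dt = 2\pi\sqrt{|a|},
\]
independent of $r$, so all leaves in the cylinder around the puncture have the same circumference, while the width of this cylinder, $\int_0^{r}\sqrt{|a|}\frac{ds}{s}$, is indeed infinite.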
The leaf space of the foliation $\mathtt{LH}(\Phi)$ can be seen as a weighted graph with legs. Its nodes correspond to the connected components of $\mathrm{SG}(\Phi)$ and
the strips and cylinders are the edges. The weights are the widths of the strips and cylinders.
We will now look at two special cases: when there are only strips and when there are only cylinders.
The former is called the ribbon case, while the latter is called the spider case.
\begin{itemize}
\item
In the ribbon case the leaves in each strip correspond to an isotopy class of curves that connect two punctures.
If we draw one of the leaves in each strip, we get a graph on the surface $\bar S$ that splits the surface into polygons.
The leaf space can be identified with the dual graph of this split and hence it has the structure of a ribbon graph. We can assign weights
to this ribbon graph by assigning the width of each strip to its edge. Note that there are no external edges.
\item
In the spider case each cylinder corresponds to an isotopy class of closed curves. The leaf space has the structure of a spider graph, where
the number of legs equals the number of punctures. To each node we can also assign a genus, namely the genus of a small neighborhood of
the corresponding connected component of $\mathrm{SG}(\Phi) \subset S$. Finally we can also give the edges weights, corresponding to the widths of the cylinders.
\end{itemize}
If we are in the spider case the differential is called a Jenkins-Strebel differential and if the corresponding spider graph has just one node then
the differential is called a Strebel differential. In the latter case there is just one cylinder for every puncture.
Note that a Jenkins-Strebel differential can only have poles of order $2$ with a negative residue.
A classical result by Strebel \cite{strebel1984quadratic} tells us that if $S$ is a Riemann surface with punctures $p_1,\dots,p_n$ and we specify
residues $a_1,\dots,a_n<0$ then there is a unique Strebel differential on $S$ with these residues.
This result can be generalized as follows.
\begin{theorem}[Liu]\cite{liu2008jenkins}
Let $\Lambda$ be a weighted spider graph with $n$ legs and fix an $n$-tuple $(a_i) \in \mathbb{R}_{>0}^n$.
For each complex structure on $\tube \Lambda$ there is a unique Jenkins-Strebel differential $\Phi$ such that
$\mathtt{LH}(\Phi)$ and $\Lambda$ are isomorphic as weighted spider graphs and the
projection maps $\pi: \tube \Lambda \to \mathtt{LH}(\Phi)$ and $\pi: \tube \Lambda \to \Lambda$ are homotopic.
\end{theorem}
This theorem implies that we can use Jenkins-Strebel differentials with a fixed weighted spider graph to parametrize the space of
complex structures on $S$, i.e. Teichm\"uller space.
\begin{theorem}\label{doublegraph}
Let $\Gamma$ be a weighted ribbon graph of type $\Sigma_{g,n}$, let $\Lambda$ be a weighted spider graph of the same type, and
choose an identification $S:=\tube \Lambda = \rib \Gamma$.
There exists a unique Jenkins-Strebel differential $\Phi$ on $S$ such that
\begin{enumerate}
\item $\Lambda \cong \mathtt{LH}(\Phi)$ and $S \to \mathtt{LH}(\Phi)$ is homotopic to $\tube \Lambda \to \Lambda$,
\item $\Gamma \cong \mathtt{LH}(-\Phi)$ and $S \to \mathtt{LH}(-\Phi)$ is homotopic to $\rib \Gamma \to \Gamma$.
\end{enumerate}
\end{theorem}
\begin{proof}
Draw the dual graph of the ribbon graph on $S$. For each edge $a$ in the ribbon graph, let $a^\perp$ be the corresponding edge of the dual graph. Choose an orientation for $a^\perp$
and look at its image under the projection $\pi: \tube \Lambda \to \Lambda$. This gives a sequence of edges of $\Lambda$ starting with a leg and ending with a leg. For each edge $e$ in the sequence
we cut out a strip $[0,w_e]\times [0,w_a]i$ equipped with the differential $dz^2$. Now we glue all these strips together as follows.
Each node of the ribbon graph corresponds to a polygon in the dual graph bounded by dual edges.
Under $\pi: \tube \Lambda \to \Lambda$ this polygon maps onto a tree that is immersed
in $\Lambda$. If the preimage of an edge $e$ of the tree is bounded by two dual edges $a^\perp,b^\perp$,
we glue $[0,w_{e}]\times [0,w_a]i$ and $[0,w_{e}]\times [0,w_b]i$ together along the common horizontal edge.
Every dual edge $a^\perp$ whose image under $\pi$ runs through a node of the tree
is mapped to two edges $e_1,e_2$ incident with that node, and
we glue $[0,w_{e_1}]\times [0,w_a]i$ and $[0,w_{e_2}]\times [0,w_a]i$ together along the common vertical edge.
In each node of valency $k$, $2k$ strips come together, forming an angle of $2k\times \frac{\pi}{2}=k\pi$.
We identify each such node with a zero of the differential, locally of the form $z^{k-2}(dz)^2$. In this way we get a quadratic differential
on each polygon for which the dual edges are vertical leaves. If we glue all these polygons together
we get the required $\Phi$.
From the construction it is clear that any Jenkins-Strebel differential with the properties above must have the same strip decomposition, so it is unique.
\end{proof}
\begin{remark}
General quadratic differentials appear in the theory of stability conditions as initial data to construct Bridgeland stability conditions for Fukaya categories.
This has been studied by Bridgeland and Smith for 3-dimensional Fukaya categories in \cite{bridgeland2015quadratic} and by Haiden et al. for topological
Fukaya categories of surfaces \cite{haiden2014flat}. On the other hand, we are interested in the link between Strebel quadratic differentials and King (i.e. GIT) stability conditions \cite{king1994moduli}, as they
are more suited to the construction of moduli spaces.
\end{remark}
\subsection{The B-model: representations of the Jacobi algebra}
\subsubsection{Moduli spaces of representations}
In this section we fix a consistent dimer on a torus $\mathrm{Q}$ and following \cite{broomhead2012dimer,ishii2009dimer,bocklandt2016dimer}
we will look at the space of representations of its Jacobi algebra ${\mathtt{J}}={\mathtt{J}}(\mathrm{Q})$
for the fixed dimension vector $\alpha:\mathrm{Q}_0\to \mathbb{N}$ that maps every vertex to one. These are the $\mathbbm{k}$-algebra morphisms
$\rho: {\mathtt{J}} \to \mathsf{Mat}_n(\mathbb{C})$ where $\mathbbm{k}=\mathbb{C}^{\# \mathrm{Q}_0}$ is identified with the subalgebra of ${\mathtt{J}}$ generated by the vertices and with
the subalgebra of diagonal matrices in $\mathsf{Mat}_n(\mathbb{C})$.
The space of these representations will be denoted by $\ensuremath{\mathsf{rep}}({\mathtt{J}},\alpha)$ and is in fact an affine scheme. We can present its ring of coordinates as
\[
\mathbb{C}[\ensuremath{\mathsf{rep}}({\mathtt{J}},\alpha)]=\mathbb{C}[x_a| a\in \mathrm{Q}_1]/\<\prod_{b\in r_a^+ } x_b - \prod_{b\in r_a^- } x_b| a \in \mathrm{Q}_1\>,
\]
because a representation maps each arrow $a$ to a scalar $x_a$ times the elementary matrix $E_{h(a)t(a)}$.
A representation is called a torus representation if these scalars are nonzero for every arrow. The torus representations
form an open set of $\ensuremath{\mathsf{rep}}({\mathtt{J}},\alpha)$ and its closure is a component $\ensuremath{\mathsf{trep}}({\mathtt{J}},\alpha)\subset\ensuremath{\mathsf{rep}}({\mathtt{J}},\alpha)$ called the scheme of all toric representations.
One can prove \cite{broomhead2012dimer} that a representation $\rho$ is toric if the set of zero arrows $\{a \,|\, \rho(a)=0 \}$ is a union of perfect matchings.
On ${\ensuremath{\mathsf{trep}}}({\mathtt{J}},\alpha)$ there is an action of $\ensuremath{\mathsf{GL}}_\alpha := \mathbbm{k}^*=\mathbb{C}^{*\# \mathrm{Q}_0}$ by conjugation. The orbits of this action classify the isomorphism classes of representations
of ${\mathtt{J}}$, so if we want to construct a moduli space of toric representations we need to construct a GIT-quotient \cite{king1994moduli}.
We start with a map $\theta:\mathrm{Q}_0 \to \mathbb{Z}$, which can also be seen as a character
$\ensuremath{\mathsf{GL}}_\alpha \to \mathbb{C}^*: g \mapsto g^\theta:= \prod_{v\in \mathrm{Q}_0}g_v^{\theta_v}$. From
this datum we construct a graded ring of semi-invariants
\[
{\mathrm{Semi}}_\theta({\mathtt{J}},\alpha) := \bigoplus_{n=0}^\infty \{\phi \in \mathbb{C}[{\ensuremath{\mathsf{trep}}}({\mathtt{J}},\alpha)]: \phi(x^g) = g^{n\theta}\phi(x) \}.
\]
The proj of this ring ${\mathcal{M}}_\theta({\mathtt{J}},\alpha) := \ensuremath{\mathsf{Proj}}\, {\mathrm{Semi}}_\theta({\mathtt{J}},\alpha)$
is called the \emph{moduli space of $\theta$-semistable toric representations}.
\erbij{Because ${\mathcal{M}}_\theta({\mathtt{J}},\alpha) \cong {\mathcal{M}}_{n\theta}({\mathtt{J}},\alpha)$,
it also makes sense to define the moduli space if $\theta: \mathrm{Q}_0 \to {\blb Q}$:
we set ${\mathcal{M}}_\theta := {\mathcal{M}}_{n\theta}$ where $n$ is the least common multiple of the denominators in $\theta$. }
A representation $\rho$ is called \emph{$\theta$-semistable} if there is a nontrivial $f \in \mathrm{Semi}_\theta({\mathtt{J}},\alpha)$ with $f(\rho)\ne 0$.
One can prove that from a representation-theoretic point of view this means that $\theta\cdot \alpha=\sum_{v\in \mathrm{Q}_0}\alpha_v\theta_v=0$ and
that $\rho$ has no proper subrepresentations with dimension vector $\beta$
such that $\beta\cdot \theta<0$.
An $\alpha$-dimensional semistable representation $\rho$ is called \emph{$\theta$-stable}
if it has no proper subrepresentations with dimension vector $\beta$ such that $\beta\cdot \theta\le 0$, and it is called \emph{$\theta$-polystable}
if it is the direct sum of stable representations.
A character is called generic if $\beta\cdot \theta\ne 0$ for all dimension vectors $\beta<\alpha$. For generic $\theta$ the three notions of stability
coincide. For $\theta=0$ stable is the same as simple, polystable is the same as semisimple and every representation is semistable.
It is well known \cite{king1994moduli} that $\mathcal{M}_\theta({\mathtt{J}},\alpha)$ is the categorical quotient of the space of all semistable representations by the
$\ensuremath{\mathsf{GL}}_\alpha$-action. The closed orbits are the orbits of polystable representations and the points of $\mathcal{M}_\theta({\mathtt{J}},\alpha)$
correspond to the isomorphism classes of $\theta$-polystable representations in $\ensuremath{\mathsf{trep}}({\mathtt{J}},\alpha)$. Note that $\theta\cdot \alpha$ has to be zero, since otherwise the moduli space $\mathcal{M}_\theta({\mathtt{J}},\alpha)$ is empty.
The structure of this toric variety for generic $\theta$ has been studied in \cite{ishii2009dimer,mozgovoy2009crepant}, but we are interested in a slightly different approach that can be found in \cite{bocklandt2016dimer}. Instead of starting with a character we start with a weight function on the arrows $W: \mathrm{Q}_1 \to \mathbb{Z}$ and associate to it a
character
\[
\theta_W: \mathrm{Q}_0 \to \mathbb{Z} : v \mapsto \sum_{h(a)=v}W_a- \sum_{t(a)=v}W_a.
\]
Because every $W_a$ appears twice with opposite sign we have that $\theta_W\cdot \alpha=0$.
Furthermore if $\mathrm{Q}$ is connected, any $\theta$ for which $\theta\cdot \alpha=0$ is equal to a $\theta_W$ for an appropriate $W$.
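The passage from $W$ to $\theta_W$ is elementary bookkeeping. The following sketch is hypothetical illustration code, not part of the construction above; the encoding of a quiver as a list of (tail, head) pairs and the function name are ours. It computes $\theta_W$ for a toy quiver and verifies that $\theta_W\cdot\alpha=0$ for $\alpha=(1,\dots,1)$.

```python
# Hypothetical sketch: computing the character theta_W from a weight function W.
# A quiver is encoded as a list of arrows (tail, head); W assigns an integer to each arrow.

def theta_from_weights(arrows, W):
    """theta_W(v) = sum of W_a over arrows with head v minus sum over arrows with tail v."""
    theta = {}
    for (t, h), w in zip(arrows, W):
        theta[h] = theta.get(h, 0) + w
        theta[t] = theta.get(t, 0) - w
    return theta

# Toy quiver: a 2-cycle on the vertices 0 and 1 with weights 3 and 5.
theta = theta_from_weights([(0, 1), (1, 0)], [3, 5])
# Every W_a contributes once with each sign, so theta_W . alpha = 0 for alpha = (1,...,1).
assert theta == {0: 2, 1: -2} and sum(theta.values()) == 0
```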
Fix a vertex $v \in \mathrm{Q}_0$ and $3$ cyclic paths $x,y,z:v\leftarrow v$ in $\mathrm{Q}$ such that $x,y$ span the
homology of the torus and $z$ is a cycle in $\mathrm{Q}_2$. These $3$ paths correspond to $\ensuremath{\mathsf{GL}}_\alpha$-invariant functions and one can show that all
rational $\ensuremath{\mathsf{GL}}_\alpha$-invariant functions are generated by these. Similarly all rational $\theta_W$-semi-invariants are the product of
$u= \prod x_a^{W_a}$ and an invariant. By \cite{broomhead2012dimer} one can check whether an expression $ux^iy^jz^k$ is polynomial by checking whether
$\deg_{\cal P} ux^iy^jz^k\ge 0$ for all perfect matchings ${\cal P}$.
If we define the tropical polynomial
\[
f_W := \min_{{\cal P} \in \mathrm{PM}(\mathrm{Q})} (\deg_{{\cal P}} x) X + (\deg_{{\cal P}} y) Y + \sum_{a \in {\cal P}}W_a,
\]
where ${\cal P}$ runs over all perfect matchings of $\mathrm{Q}$, then $ux^iy^jz^k$ is polynomial if and only if $(i,j,k) \in \mathrm{PT}_f$.
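Concretely, $f_W$ is a minimum of finitely many affine functions, one for each perfect matching, and membership in $\mathrm{PT}_f$ amounts to finitely many linear inequalities (using that $\deg_{\cal P}z=1$ for every perfect matching). The sketch below is hypothetical illustration code, not from the paper; the encoding of a matching by the triple $(\deg_{\cal P}x,\deg_{\cal P}y,\sum_{a\in{\cal P}}W_a)$ is our convention.

```python
# Hypothetical sketch: evaluating the tropical polynomial f_W and testing whether
# u x^i y^j z^k is polynomial, i.e. has nonnegative degree for every perfect matching.

def f_W(matchings, X, Y):
    """min over matchings P of (deg_P x)*X + (deg_P y)*Y + sum_{a in P} W_a."""
    return min(dx * X + dy * Y + c for (dx, dy, c) in matchings)

def is_polynomial(matchings, i, j, k):
    """deg_P(u x^i y^j z^k) = i*deg_P x + j*deg_P y + k + sum_{a in P} W_a >= 0 for all P."""
    return all(i * dx + j * dy + k + c >= 0 for (dx, dy, c) in matchings)

# Toy data: three matchings with degrees (0,0), (1,0), (0,1) and weight sums 0, 2, 3.
M = [(0, 0, 0), (1, 0, 2), (0, 1, 3)]
assert f_W(M, -5, -5) == -3            # the (1,0)-matching attains the minimum: -5 + 2
assert is_polynomial(M, 0, 0, 0)       # all weight sums are nonnegative
assert not is_polynomial(M, -3, 0, 0)  # the (1,0)-matching gives degree -3 + 2 = -1
```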
\begin{theorem}\cite{bocklandt2016dimer}
If $\mathrm{Q}$ is consistent then the normalization of the moduli space $\mathcal{M}_{\theta_W}$ can be identified with ${\blb V}_{f_W}$.
\end{theorem}
\begin{remark}
Again everything makes sense for rational weight functions $W$, because the conditions
of being (semi,poly)-stable are the same for $\theta_W$ and $\lambda\theta_W$ whenever $\lambda$
is a positive scalar.
\end{remark}
To describe the torus orbits in $\mathcal{M}_{\theta_W}$,
we have to look a bit more closely at $f_W$. The Newton polygon of $f_W$ is the same as the matching polygon of $\mathrm{Q}$.
Perfect matchings that sit on the same lattice point of the matching polygon generate
terms $(\deg_{{\cal P}} x) X + (\deg_{{\cal P}} y) Y + \sum_{a \in {\cal P}}W_a$ that differ only by a constant, and hence
only the one with the lowest constant $\sum_{a \in {\cal P}}W_a$ contributes to the function $f_W$.
\begin{definition}
A perfect matching ${\cal P}$ is \emph{semistable} if $(\deg_{{\cal P}} x) X + (\deg_{{\cal P}} y) Y + \sum_{a \in {\cal P}}W_a$ defines a face of $\mathrm{PT}_f$. It is called
stable if no other perfect matching defines the same face.
\end{definition}
\begin{lemma}
A perfect matching ${\cal P}$ is (semi)stable if and only if the representation
\[
\rho_{\cal P}(a)=\begin{cases}
0 & a\in {\cal P}\\
1 & a\not\in {\cal P}
\end{cases}
\]
is $\theta_W$-(semi)stable.
\end{lemma}
\begin{proof}
The representation $\rho_{\cal P}$ is semistable if and only if there is a $\theta_W$-semi-invariant $s=ux^iy^jz^k$ that is nonzero on $\rho_{\cal P}$. This semi-invariant must sit
on the face defined by ${\cal P}$ because if $\deg_{\cal P} s>0$ then $s(\rho_{\cal P})=0$.
If $\rho_{\cal P}$ is semistable but not stable, it cannot itself be polystable because the complement of the perfect matching is connected. Therefore
there must be a different polystable representation $\rho$ in the closure of the $\ensuremath{\mathsf{GL}}_\alpha$-orbit of $\rho_{\cal P}$. The set of zero arrows of this representation
is the union of at least two perfect matchings ${\cal P}$ and ${\cal P}'$. Because the semi-invariants of $\rho_{\cal P}$ and $\rho$ take the same values, ${\cal P}'$ must define
the same face of $\mathrm{PT}_f$.
\end{proof}
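In computational terms the representation $\rho_{\cal P}$ is immediate to write down. A minimal sketch (hypothetical code; the encoding of arrows by labels is ours):

```python
# Hypothetical sketch: the representation rho_P attached to a perfect matching P
# sends every arrow in P to 0 and every other arrow to 1.

def rho_from_matching(arrows, P):
    return {a: (0 if a in P else 1) for a in arrows}

rho = rho_from_matching(['a', 'b', 'c', 'd'], {'a', 'c'})
assert rho == {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```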
\begin{definition}
We say that $W$ is
\begin{itemize}
\item \emph{generic} if all semistable toric representations are stable.
\item \emph{nondegenerate} if all semistable perfect matchings are stable.
\end{itemize}
\end{definition}
If $W$ is nondegenerate every $2$-dimensional torus orbit corresponds to precisely one perfect matching and all representations
in the orbit are stable. This means that the nonstable locus of $\mathcal{M}_{\theta_W}$ has dimension at most $1$.
The generic condition is stronger than the nondegenerate condition because in the former case the nonstable locus is empty.
An example of a nondegenerate $W$ that is not generic is $W=0$, because one can show that the simple locus of $\mathcal{M}_\alpha^0$ has dimension at most $1$ \cite{bocklandt2012consistency}.
\subsubsection{Matrix factorizations from representations}
Given an algebra $B$ one can construct its category of singularities.
This is the Verdier quotient
of its derived category by the subcategory of perfect complexes,
which are the bounded complexes of projective modules:
$$\mathtt{Sing}\, B := \frac{D^b\ensuremath{\mathtt{Mod}} B}{\mathtt{Perf} B}$$
If $B$ is the quotient of an algebra $A$ with finite global dimension by a central element $\ell$
then Buchweitz \cite{buchweitz1986maximal} showed that taking the cokernel
$\mathrm{cok}\, d_{01}$ of a matrix factorization $(P,d)\in {\mathtt {MF}}_{\mathbb{Z}/2\mathbb{Z}}(A,\ell)$ and
viewing it as a $B$-module, induces an equivalence between the category of matrix factorizations and the category of singularities of $B$.
If we go back to the representation theory of the Jacobi algebra, we see that every representation $\rho: {\mathtt{J}} \to \mathsf{Mat}_n(\mathbb{C})$ that maps
the central element $\ell$ to zero, can be considered as an element in $\mathtt{Sing}\, {\mathtt{J}}/\<\ell\> \cong {\mathtt {MF}}({\mathtt{J}},\ell)$.
Suppose $\rho \in \ensuremath{\mathsf{trep}}({\mathtt{J}},\alpha)$ and $\rho(\ell)=0$. Because $\ell$ is a sum of cycles in $\mathrm{Q}_2$ there must be at least one arrow $a$ in each cycle of $\mathrm{Q}_2$ for which $\rho(a)=0$. This means that there is a perfect matching ${\cal P}$ such
that $a \in {\cal P} \implies \rho(a)=0$.
This perfect matching defines a grading on ${\mathtt{J}}$ and $\rho$ can be seen as a graded representation of ${\mathtt{J}}$ concentrated in degree $0$ because it factors
through ${\mathtt{J}}/{\mathtt{J}}_{>0}$. The matrix factorization corresponding to $\rho$ will also be graded and hence, by the result of Haiden, Katzarkov and Kontsevich \cite{haiden2014flat},
it will correspond to a direct sum of string and band objects. We will now determine this decomposition explicitly.
The idea is quite simple: we draw curves in the neighborhood of each arrow in the following way and connect the dotted curves.
\vspace{.2cm}
\begin{center}
\begin{tikzpicture}[scale=.75]
\draw [-latex,shorten >=5pt] (0,0)--(0,2) ;
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw[-latex,dotted] (-.5,0) arc (180:0:.5);
\draw[latex-,dotted] (-.5,2) arc (180:360:.5);
\draw (0,-.5) node{$\mathrm{Q}: \rho(a)=0$};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.75]
\draw[-latex,shorten >=5pt] (0,0)--(0,2);
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw[dotted] (-.5,0) arc (330:390:2);
\draw[dotted] (.5,0) arc (210:150:2);
\draw (0,-.5) node{$\mathrm{Q}:\rho(a)\ne 0$};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.75]
\draw[-latex] (0,0)--(0,2);
\draw[-latex,dotted] (-.5,0)--(.5,2);
\draw[latex-,dotted] (-.5,2)--(.5,0);
\draw (0,-.5) node{$\twist{\mathrm{Q}}: \rho(a)=0$};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.75]
\draw[-latex] (0,0)--(0,2);
\draw[dotted] (-.5,0) arc (330:390:2);
\draw[dotted] (.5,0) arc (210:150:2);
\draw (0,-.5) node{$\twist{\mathrm{Q}}:\rho(a)\ne 0$};
\end{tikzpicture}
\end{center}
The resulting curves are the bands.
Note that some of these curves can be contractible in $\psurf\twist{\mathrm{Q}}$. In that case
the curve does not determine a band, so we omit it.
The decoration of each band is the product of all $\rho(a)^{\pm 1}$ along the arrows the curve passes,
multiplied by a sign $(-1)^l$, where $2l$ is the number of faces through which the band passes.
The exponent of $\rho(a)$ is $-1$ if the band goes in the same direction as the arrow and $+1$ if it goes in the opposite direction.
We denote this collection of decorated bands by $B_\rho$.
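The decoration rule is a finite product and can be made explicit as follows. The sketch below is hypothetical illustration code; the encoding of a band as a list of pairs $(\rho(a),\pm 1)$, recording the value and the exponent, is ours.

```python
# Hypothetical sketch: the decoration of a band is (-1)^l times the product of
# rho(a)^{+1} or rho(a)^{-1} along the curve, where 2l is the number of faces crossed.

def band_decoration(steps, num_faces):
    """steps: list of (value, exponent) with exponent +1 or -1; num_faces = 2*l."""
    sign = (-1) ** (num_faces // 2)
    dec = sign
    for value, exponent in steps:
        dec *= value ** exponent
    return dec

# Toy band crossing 4 faces (l = 2, sign +1) along two nonzero arrows.
assert band_decoration([(2.0, 1), (4.0, -1)], 4) == 0.5  # (+1) * 2 * (1/4)
```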
\begin{theorem}
If $\mathrm{Q}$ is consistent and $\rho\in \ensuremath{\mathsf{rep}}({\mathtt{J}}(\mathrm{Q}),\alpha)$ is a representation that is
zero on the perfect matching ${\cal P}$ then the object $B_\rho \in \mathtt{Tw}\,_{\mathbb{Z}_2} \mathtt{Gtl}(\twist{\mathrm{Q}})$ is
isomorphic to $\rho \in \mathtt{Sing}\, {\mathtt{J}}/\<\ell\>$ under the standard identification
\[
\mathtt{Tw}\,_{\mathbb{Z}_2} \mathtt{Gtl}(\twist{\mathrm{Q}}) \cong \mathtt{Tw}\,_{\mathbb{Z}_2} {\mathtt {mf}}(\mathrm{Q}) \subset {\mathtt {MF}}({\mathtt{J}},\ell) \cong
\mathtt{Sing}\, {\mathtt{J}}/\<\ell\>.
\]
\end{theorem}
\begin{proof}
First of all, $\rho$ does not contain any string objects because
$\mathtt{Hom}_{\mathtt{Sing}\, {\mathtt{J}}/\<\ell\>}(\rho,\rho)$ is finite-dimensional, while the endomorphism
ring of a string object is always infinite-dimensional.
To determine the band objects we will look at the hom spaces between $\rho$ and
certain test objects. If we determine the dimensions of these spaces we can single
out the unique combination of bands that has the same hom spaces.
The test objects we consider are the objects $M_a$ corresponding to the arrows.
Note that for a given band object $g=(g_0,\dots,g_{2k-1})$ and an $n\times n$ Jordan matrix $\lambda$, the dimension of $\mathtt{Hom}^\bullet(B(g,\lambda), M_a)$ is equal to $n$ times the number of arrows $g_{i}$ that are equal to $a$. Each matching arrow contributes a morphism
of even degree if $i=0 \mod 4$ and of odd degree if $i=2 \mod 4$. This can easily be
seen in the $A$-model because $B(g,\lambda)$ represents a curve that runs through
the centers of these arrows.
In $\mathtt{Sing}\, {\mathtt{J}}\!/\<\ell\>$ the object $M_a$ corresponds to the
periodic complex
\[
\bar M_a : P_{h(a)}\stackrel{a}{\leftarrow} P_{t(a)}\stackrel{\ell a^{-1}}{\leftarrow} P_{h(a)}\stackrel{a}{\leftarrow} P_{t(a)}\stackrel{\ell a^{-1}}{\leftarrow} \dots
\]
where $P_v$ stands for the projective module $v{\mathtt{J}}$.
Using the fact that $\mathtt{Hom}(P_{v},S)=\mathbb{C}^{\alpha_v}=\mathbb{C}$ we see that $\mathtt{Hom}_{\mathtt{Sing}\,}^\bullet (\bar M_a, \rho)$ is the homology of the complex
\[
\mathbb{C}\stackrel{\rho(a)}{\to} \mathbb{C} \stackrel{\rho(\ell a^{-1})}{\to} \mathbb{C}\stackrel{\rho(a)}{\to} \mathbb{C}\stackrel{\rho(\ell a^{-1})}{\to} \dots
\]
This complex only has nontrivial homology if $\rho(a)=\rho(\ell a^{-1})=0$ and in that case
$$\mathsf{Ext}^i_{{\mathtt{J}}\!/\<\ell\>}(P_{h(a)}/\<a\>,S)=\mathbb{C}.$$
This implies that the object $\rho$ only contains band objects that run through
arrows that are zero for $\rho$. Furthermore all decorations must be $1$-dimensional.
To determine the precise bands, note that $B(g,\lambda)$ is a band object that occurs in the decomposition of $\rho$ or $\rho[1]$ if and only if the dimension of the space
\[
\mathtt{Hom}^0(B(g,\kappa),\rho)
\]
jumps by one when $\kappa$ is equal to $\lambda$, compared to when $\kappa \ne \lambda$.
If we go to the matrix factorization $B(g,\kappa)$ we can calculate
$\mathtt{Hom}^i(B(g,\kappa),\rho)$ by calculating the homology of the complex
\[
\xymatrix@R=.4cm@C=.6cm{
\dots\ar@{<-}@/^/[ddrr]|{0}\ar@{<-}@/^/[rr]|{\rho(d_b)}&&\vtx{0}\ar@{<-}@/^/[ddrr]|{0}&&\vtx{3}\ar@{<-}@/^/[ddrr]|{0}\ar@{<-}@/_/[ll]|{\rho(d_b)}\ar@{<-}@/^/[rr]|{\rho(d_b)}&&\vtx{4}\ar@{<-}@/^/[ddrr]|{0}&&\vtx{7}\ar@{<-}@/_/[ll]|{\rho(d_b)}\ar@{<-}@/^/[rr]|{\rho(d_b)}\ar@{<-}@/^/[ddrr]|{0}&&\dots\\
&&&&&&&&&&\\
\dots&&\vtx{-\!\!\!2}\ar@{<-}@/^/[uull]|{0}\ar@{<-}@/^/[ll]|{-\rho(d_b)}\ar@{<-}@/_/[rr]|{-\rho(d_b)}&&\vtx{1}\ar@{<-}@/^/[uull]|{0}&&\vtx{2}\ar@{<-}@/^/[uull]|{0}\ar@{<-}@/^/[ll]|{-\rho(d_b)}\ar@{<-}@/_/[rr]|{-\rho(d_b)}&&\vtx{5}\ar@{<-}@/^/[uull]|{0}&&\dots\ar@{<-}@/^/[ll]|{-\rho(d_b)}\ar@{<-}@/^/[uull]|{0}
}
\]
where we changed every vertex to the vector space $\mathbb{C}$ because the dimension vector is
$(1,\dots,1)$, reversed all arrows and substituted them by their values under $\rho$.
If $g=(g_1,\dots,g_{2k-1})$ is a band for which all the arrows $g_i$ evaluate to zero,
the homology of the complex reduces to that of $d_b$, which splits into an upper and a lower part. If one of the arrows in the upper row
is zero, the $d_b$-complex becomes contractible, so no jump can take place.
If all arrows in the upper row are nonzero, we can form an element in the homology
by choosing a number $\beta_0$ in vertex $0$ and $-\rho(p_2)^{-1}\rho(p_1)\beta_0$ in vertex $4$
(where $p_1$ and $p_2$ are the paths that connect vertices $1$ and $4$ with $3$). Proceeding like this
all the way around the upper row,
it is clear that we get an element in the homology if and only if $(-1)^l\prod (\rho(p_i))^{\pm 1}\lambda=1$,
where $2l$ is the number of faces of the band.
Therefore the dimension of the hom-space jumps if the decoration is equal to
$(-1)^l\prod \rho(p_i)^{\mp 1}$.
The same can be said about the lower row, but in that case the homology element
that corresponds to the identity sits in degree one instead of degree zero. From this we can conclude that
the bands that occur are precisely those for which the upper row consists
only of nonzero arrows.
We can draw these bands in $\mathrm{Q}$ by drawing two half circle paths
through each zero arrow
and two parallel lines along each nonzero arrow. Because the mirror dimer $\twist{\mathrm{Q}}$ is twisted around each arrow, the two half circles become a cross.
\end{proof}
\begin{remark}
One can extend this result to representations with dimension vector smaller than $\alpha$. To do this
we need to remove the half circles that run around vertices of dimension zero in the quiver $\mathrm{Q}$.
\vspace{.2cm}
\begin{center}
\begin{tikzpicture}[scale=.75]
\draw [-latex,shorten >=5pt] (0,0)--(0,2) ;
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny0}};
\draw[-latex,dotted] (-.5,0) arc (180:0:.5);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.75]
\draw [-latex,shorten >=5pt] (0,0)--(0,2) ;
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny0}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw[latex-,dotted] (-.5,2) arc (180:360:.5);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.75]
\draw [-latex,shorten >=5pt] (0,0)--(0,2) ;
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny0}};
\draw (0,2) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny0}};
\end{tikzpicture}
\end{center}
\end{remark}
\subsubsection{The mirror dimer and the spider graph}
To construct a moduli space of matrix factorizations of $({\mathtt{J}},\ell)$, we can look at a moduli space of the form $\mathcal{M}_\theta({\mathtt{J}},\alpha)$ and restrict to those representations that give a nonzero matrix factorization.
To find the matrix factorizations, we have to look at $\mathcal{M}_\theta({\mathtt{J}}\!/\<\ell\>,\alpha)$ inside $\mathcal{M}_\theta({\mathtt{J}},\alpha)$.
Because $\ell$ is zero as soon as one arrow in each cycle of $\mathrm{Q}_2$ is zero, these are the representations that are zero on at least one perfect matching.
However, if $\rho$ is zero on only one perfect matching,
the construction in the previous paragraph only results in contractible curves that each run in two neighboring faces. This tells us that the matrix factorization is zero.
\[
\twist{\mathrm{Q}}: \vcenter{{\xymatrix@=.5cm{
\vtx{}\ar[rr]&&\vtx{}\ar[dd]&&\vtx{}\ar[ll]\\
&&\ar@(rd,ru)@{.>}\ar@(ld,lu)@{.>}&&\\
\vtx{}\ar[uu]&&\vtx{}\ar[rr]\ar[ll]&&\vtx{}\ar[uu]
}}}
\]
We will now restrict to the case where $\theta=\theta_W$ for a \emph{weight function $W:\mathrm{Q}_1\to \mathbb{Z}$ that is nondegenerate}.
In this case each two-dimensional orbit of $\mathcal{M}_\theta({\mathtt{J}},\alpha)$ is the zero locus of precisely one perfect matching, the one-dimensional orbits are the zero loci
of two perfect matchings, and the zero-dimensional orbits of more than two perfect matchings.
Therefore the representations that give nonzero matrix factorizations correspond to the
union of the zero- and one-dimensional orbits. We denote this union by
\[
\mathcal{M}^{sing}_{\theta}({\mathtt{J}},\alpha) \subset \mathcal{M}_\theta({\mathtt{J}},\alpha).
\]
\begin{lemma}\label{kbands}
Suppose $\mathrm{Q}$ is consistent.
Let $\rho=\rho_{{\cal P}_1\cup{\cal P}_2}$ be a representation that is zero
for two perfect matchings ${\cal P}_1,{\cal P}_2$ located on lattice points $n_{{\cal P}_1},n_{{\cal P}_2}$
and let
$k$ be the elementary length of the line segment between $n_{{\cal P}_1}$ and $n_{{\cal P}_2}$ (i.e. the line segment contains $k+1$ lattice points).
\begin{enumerate}
\item The representation $\rho$ splits as a direct sum of $k$ indecomposable representations
$\rho_1\oplus \dots \oplus \rho_k$.
\item Each matrix factorization $M_{\rho_i}$ corresponds to the direct sum of a band and its opposite.
\item All the bands of the $M_{\rho_i}$ are (anti)parallel in $\mathrm{Q}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Because at most two arrows are zero in every face, the bands look like
$$(a_1,f_1,a_2,f_2,a_3,\dots, a_n,f_n),$$
where $a_i \in {\cal P}_1$ if $i$ is odd and $a_i \in {\cal P}_2$ if $i$ is even. Also, if a band
is present then its opposite is present as well.
Moreover, two bands must either be parallel or antiparallel, because an intersection
would result in at least three zero arrows in a face.
The vector $n_{{\cal P}_1}-n_{{\cal P}_2}=(\deg_{{\cal P}_1}x -\deg_{{\cal P}_2}x,\deg_{{\cal P}_1}y -\deg_{{\cal P}_2}y,0)$ measures the intersection
numbers of the parallel bands with $x$ and $y$, so the homology class of the parallel bands is
$$(-\deg_{{\cal P}_1}y +\deg_{{\cal P}_2}y,\deg_{{\cal P}_1}x -\deg_{{\cal P}_2}x).$$
If $\gcd(-\deg_{{\cal P}_1}y +\deg_{{\cal P}_2}y,\deg_{{\cal P}_1}x -\deg_{{\cal P}_2}x)=k$,
this means that the homology class is that of $k$ simple curves, so there must be $k$ parallel bands and
$k$ antiparallel bands, and $\rho$ decomposes as a sum of $2k$ indecomposable objects that are pairwise each other's shift.
On the torus $\surf{\mathrm{Q}}$ the bands split the surface into $k$ cylinders and all arrows
crossing the bands are $0$, so $\rho$ is the direct sum of $k$ subrepresentations, one
supported on each cylinder. Each of these representations is indecomposable as a
representation: if it were not, $M_{\rho_i}$ would split into more than $2$ indecomposable
matrix factorizations.
\erbij{
To show that the decorations of the bands are the same we look at a one-parameter
family $\rho_t$ with $\lim_{t\to 0}\rho_t=\rho$.
If $g$ and $h$ are parallel bands, let $p,q$ be the weak paths that bound
the cycles on the left.
Let $r$ be any weak path that connects $h(p)$ with $h(q)$. The paths $rp$ and $qr$ have the same homotopy class and the same degree
for ${\cal P}_1$ so by the consistency of the dimer they represent the same element ${\mathtt{J}}$.
Therefore
$$\frac{\rho(p)}{\rho(q)} = \lim_{t\to 0}\frac{\rho_t(p)}{\rho_t(q)}=\lim_{t\to 0}\frac{\rho_t(rp)}{\rho_t(qr)}=1$$
so the decorations are the same.}
\end{proof}
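The count of summands in lemma \ref{kbands} reduces to a gcd computation on the lattice segment between the two matchings. A minimal sketch (hypothetical code, with our input encoding of a matching by the pair $(\deg_{\cal P}x,\deg_{\cal P}y)$):

```python
# Hypothetical sketch: homology class of the parallel bands and the number k of
# indecomposable summands, from the x- and y-degrees of the two perfect matchings.
from math import gcd

def band_class_and_count(deg1, deg2):
    """deg = (deg_P x, deg_P y); returns the homology class and k = gcd."""
    dx = deg1[0] - deg2[0]
    dy = deg1[1] - deg2[1]
    cls = (-dy, dx)            # class of the parallel bands
    k = gcd(abs(dx), abs(dy))  # elementary length of the lattice segment
    return cls, k

assert band_class_and_count((2, 0), (0, 2)) == ((2, 2), 2)
```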
Each $k$-fold edge of $\mathtt{trop}(f_W)$ corresponds to a one-parameter orbit of polystable representations
that decompose into $k$ summands. This implies that there is a correspondence between
the edges of the spider graph $\Lambda(f_W)$ and the number of bands that occur in the matrix factorizations of the one-dimensional
torus orbits. Each of these bands corresponds to an unoriented curve in $\psurf{\twist{\mathrm{Q}}}$.
To make a moduli space that parametrizes these summands we can take the fibered product
\[
\mathcal{M}^{mf}_{\theta_W} := \mathcal{M}^{sing}_{\theta_W}({\mathtt{J}},\alpha) \times_{\mathtt{trop}_{f_W}} \Lambda(f_W)
\]
This moduli space contains, over each $k$-fold edge, a one-parameter family for each of the $k$ indecomposable summands, and it comes with
a natural projection $\mathcal{M}^{mf}_{\theta_W} \to \Lambda(f_W)$. Note that in the generic
case every polystable representation is stable and hence indecomposable,
so $\mathcal{M}^{mf}_{\theta_W} = \mathcal{M}^{sing}_{\theta_W}({\mathtt{J}},\alpha)$.
From theorem \ref{genusmirror} and lemma \ref{genustropical} we know that $\psurf{\twist{\mathrm{Q}}}$
and $\tub{\Lambda(f_W)}$ are surfaces with the same genus and the same number of punctures.
Therefore it is a natural question to ask whether we can draw the dimer quiver $\twist{\mathrm{Q}}$ on
$\tub{\Lambda(f_W)}$ such that the bands of the matrix factorizations
correspond to simple curves that wrap around the edges of $\Lambda(f_W)$.
This is indeed possible. We will first show this for generic $W$ and
then extend the result to nondegenerate $W$.
First we need to determine what the projection of each arrow onto
the spider graph must look like. Because an arrow connects two punctures, its projection
will be a sequence of edges, starting with a leg and ending with a leg.
Which edges are needed can be determined from the subdivision of the Newton polygon
$\mathrm{NP}(f_W)$.
For any subpath $p$ of a cycle in $\mathrm{Q}_2$
we can mark all lattice points whose perfect matching contains an arrow of $p$
and look at the subcomplex $C_p$ consisting of all edges and triangles
that contain only marked
lattice points.
\vspace{.3cm}
\begin{center}
\resizebox{!}{2cm}{\begin{tikzpicture}
\draw (.5,-2) node {{$C_a$}};
\fill [black!25] (-1,0)--(0,1)--(1,1)--(0,0)--(-1,0);
\draw [black!25] (0,-1) -- (-1,0);
\draw [black] (0,0) -- (-1,0);
\draw [black] (-1,0) -- (0,1);
\draw [black] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,1) -- (1,-1);
\draw [black] (0,1) -- (0,0);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (0,0) -- (0,-1);
\draw [black] (1,1) -- (0,0);
\draw [black!25] (2,0) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (1,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\end{tikzpicture}}
\hspace{1cm}
\resizebox{!}{2cm}{\begin{tikzpicture}
\draw (.5,-2) node {{$C_{r_a^+}$}};
\fill [black!25] (1,0)--(2,0)--(1,-1)--(1,0);
\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw [black!25] (2,0) -- (0,0);
\draw [black!25] (0,1) -- (0,-1);
\draw [black!25] (1,1) -- (1,0);
\draw [black] (1,0) -- (1,-1);
\draw [black] (1,0) -- (2,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (1,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\end{tikzpicture}}
\end{center}
\erbij{\resizebox{!}{1.2cm}{\begin{tikzpicture}
\draw [red, thick](-4/3,2/3) -- (-7/3,-1/3);
\draw [black!25](-4/3,8/3) -- (-4/3,2/3);
\draw [black!25](-4/3,8/3) -- (-7/3,11/3);
\draw [black!25](-4/3,8/3) -- (-4/3,11/3);
\draw [red, thick](-2/3,2) -- (1/3,3);
\draw [black!25](-2/3,4/3) -- (1/3,1/3);
\draw [red, thick](-4/3,2/3) -- (-2/3,4/3);
\draw [black!25](-4/3,2/3) -- (-4/3,-1/3);
\draw [black!25](-4/3,8/3) -- (-2/3,2);
\draw [red, thick](-2/3,2) -- (-2/3,4/3);
\end{tikzpicture}}}
\begin{lemma}\label{cont}
Let $p$ be a subpath of a cycle in $\mathrm{Q}_2$. If $\mathrm{Q}$ is consistent
and $\theta=\theta_W$ is generic then the subcomplex $C_p$ is contractible.
\end{lemma}
\begin{proof}
If $\theta$ is generic then Ishii and Ueda proved \cite{ishii2009dimer}
that $\mathcal{M}_\theta({\mathtt{J}},\alpha)$ is derived equivalent to ${\mathtt{J}}$.
They did this by constructing a tilting bundle on $\mathcal{M}_\theta({\mathtt{J}},\alpha)$.
Pick a base vertex $w$ and for each vertex $v$ let $p$
be a path from $w$ to $v$. Then define
\[
{\cal L}_{w\to v} := {\cal L}( \sum_{{\cal P}} \deg_{\cal P}(p)D_{{\cal P}})
\]
where the sum is taken over all stable perfect matchings and $D_{\cal P}$ is the toric divisor corresponding to the lattice point of ${\cal P}$.
This line bundle does not depend on the choice of path between $w$ and $v$, only on the choice of the vertices.
For each vertex $w$ the sum
\[
\oplus_{v \in \mathrm{Q}_0} {\cal L}_{w \to v}
\]
is a tilting bundle, so we have that $$\mathsf{Ext}^{>0}({\cal L}_{w\to w},{\cal L}_{w\to v})=\mathrm{H}^{>0}({\cal L}_{w\to v})=0$$
for all $v,w \in \mathrm{Q}_0$.
Clearly the arrow $a$ is a path from $t(a)$ to $h(a)$ so
\[
{\cal L}_{t(a)\to h(a)} = {\cal L}(\sum_{{\cal P}} \deg_{\cal P}(a)D_{{\cal P}})
\]
must have vanishing higher cohomology.
From toric geometry \cite{fulton1993introduction} we know that the higher
cohomology of ${\cal L}_{t(a)\to h(a)}$ comes from the homology of
the subfan marked by the divisors. If $C_p$ is not contractible this homology will be nonzero
and $\oplus_{v \in \mathrm{Q}_0} {\cal L}_{t(a) \to v}$ cannot be a tilting bundle.
Similarly $C_{r_a^+}$ is also contractible because the higher cohomology of
${\cal L}_{h(a)\to t(a)}$ vanishes.
\end{proof}
\newcommand{\mathsf{line}}{\mathsf{line}}
\newcommand{\mathsf{tree}}{\mathsf{tree}}
For every arrow the subcomplexes $C_a$ and $C_{r_a^+}$ together contain all
lattice points and are both contractible, therefore we can draw a single curve through
the Newton polygon that separates both complexes. This curve $\mathsf{line}_a$ can be seen as a sequence
of edges in the dual graph, which is the spider graph.
\vspace{.3cm}
\begin{center}
\resizebox{!}{1.3cm}{\begin{tikzpicture}
\fill [black!25] (-1,0)--(0,1)--(1,1)--(0,0)--(-1,0);
\draw [black!25] (0,-1) -- (-1,0);
\draw [black] (0,0) -- (-1,0);
\draw [black] (-1,0) -- (0,1);
\draw [black] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,1) -- (1,-1);
\draw [black] (0,1) -- (0,0);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (0,0) -- (0,-1);
\draw [black] (1,1) -- (0,0);
\draw [black!25] (2,0) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (1,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=10pt,inner sep=1pt]{};
\draw[red] (-1,-1) -- (-.33,-.33)--(.33,-.66)--(.66,-.33)--(.66,.33)--(1.33,.33)--(2,1);
\end{tikzpicture}}
\end{center}
Consider a face $a_1\dots a_k$ in $\mathrm{Q}_2$ and look at the edges in
the spider graph. Every lattice point in the Newton polygon corresponds to one stable perfect matching and
hence is contained in precisely one of the $C_{a_i}$. Each edge is either dual to an edge in one of the
$C_{a_i}$ or it is dual to an edge that connects two lattice points
that are in two different $C_{a_i}$. The latter edges form a tree $\mathsf{tree}(c)$ in the spider
graph. Indeed if $\mathsf{tree}(c)$ contains a cycle then there is at least one lattice point inside the cycle.
This lattice point must be contained in one of the $C_a$ and hence the full $C_a$ is contained
inside this cycle, but then $C_{r_a^+}$ cannot be contractible.
Because $\mathsf{tree}(c)$ is embedded in the plane as a subgraph of the tropical curve, we
can also consider a small closed neighborhood $U_c$ of $\mathsf{tree}(c)$ in the plane. Topologically, this
neighborhood is homeomorphic to a $k$-gon with the corners removed and
there is a map $\mathtt{retr}_c: U_c \to \mathtt{trop}(f_W)$ that maps this polygon onto the
tree $\mathsf{tree}(c)$.
In a positive cycle the $C_{a_i}$ will sit next to each other in an anticlockwise way, while
for a negative cycle they will follow in a clockwise way. This is because the legs of
$\mathsf{line}_a$ correspond to the zig ray and the zag ray in $\mathrm{Q}$. If $ab$ sits in a positive cycle
the zig ray of $a$ is equal to the zag ray of $b$, while it is the other way round if $ab$ sits
in a negative cycle.
Therefore the ends of the lines match up like the edges of the polygons do in $\twist{\mathrm{Q}}$,
and we can glue the trees together to obtain a surface that is homeomorphic to
$\psurf{\twist{\mathrm{Q}}}$. Gluing all the maps $\mathtt{retr}_c$ together we get a
map $\mathtt{retr} :\psurf{\twist{\mathrm{Q}}} \to \mathtt{trop}(f_W)$. If $\theta$ is generic we can identify
$\mathtt{trop}(f_W)$ with $\Lambda(f_W)$.
This means we have identified $\psurf{\twist{\mathrm{Q}}}$ and $\tub{\Lambda(f_W)}$ so we can translate
theorem \ref{doublegraph} to this setting.
\begin{theorem}
Let $\mathrm{Q}$ be a consistent quiver and consider a generic weight function
$W:\mathrm{Q}_1 \to \mathbb{Z}$ and a map $B:\mathrm{Q}_1 \to \mathbb{R}_{>0}$.
There exists a unique Strebel differential $\Phi$ on $\psurf{\twist{\mathrm{Q}}}$ such that
\begin{enumerate}
\item
the horizontal strips correspond to the arrows $a \in \twist{\mathrm{Q}}_1$ and have width $B_a$,
\item
the vertical cylinders correspond to the bands that occur as summands in decomposition
of the matrix factorizations of the $\theta_W$-stable one-dimensional
orbits in $\mathcal{M}_{\theta_W}({\mathtt{J}},\alpha)$. They have a width equal to the affine length of the corresponding edge in $\mathtt{trop}(f_W)$.
\end{enumerate}
In other words, we have a diagram
\[
\xymatrix{\psurf{\twist{\mathrm{Q}}}\ar[dr]^\pi&&\mathcal{M}^{mf}_{\theta_W}\ar[dl]_{\pi'}\\&\Lambda(f_W)=\mathtt{LH}(\Phi)&}
\]
such that $\pi^{-1}\pi'(\rho)$ is a curve that represents the bands in $M_\rho$.
\end{theorem}
\begin{proof}
We have an identification $\psurf{\twist{\mathrm{Q}}}=\tub{\Lambda(f_W)}$ so we can apply
theorem \ref{doublegraph} to $\Lambda=\Lambda(f_W)$ and $\Gamma=\twist{\mathrm{Q}}^\perp$.
To identify the bands with the cylinders, let $x$ be a point on
an edge $e$ of $\mathtt{trop}(f_W)$. In each polygon the inverse image $\pi^{-1}(x)$ consists
of a line connecting two arrows $a,b$
whose lines $\mathsf{line}_a,\mathsf{line}_b$ contain $e$. If $e$ is dual to the edge
that connects the perfect matchings ${\cal P}_1$ and ${\cal P}_2$ then $a$ and $b$
will each be contained in one of these perfect matchings. Therefore the
total inverse image will be a curve that runs along the band representing
$\rho_{{\cal P}_1\cup{\cal P}_2}$.
\end{proof}
\begin{remark}
One can generalize this statement to the case of nondegenerate weights as well.
The idea is to approximate the nondegenerate weight $W$ by generic ones $W'$ (using
rational weight functions).
If we choose $W'$ close enough to $W$, the only thing that can happen is that
some $\theta_{W'}$-stable perfect matchings are $\theta_{W}$-unstable, but not the other way round.
This means that in the limit the
cycles corresponding to the $\theta_{W}$-unstable perfect matchings
will shrink and disappear in the tropical curve.
There is a continuous map $\mathtt{trop}(f_{W'})\to \mathtt{trop}(f_{W})$.
The image of the curve $\mathsf{line}_a\subset \mathtt{trop}(f_{W'})$ will still be a line in
$\mathtt{trop}(f_{W})$
that separates the lattice points of stable matchings containing $a$ from those not containing $a$.
The image of the tree $\mathsf{tree}(c)$ will also be a tree in $\mathtt{trop}(f_{W})$. This means
we can still apply the same procedure to map $\psurf{\twist{\mathrm{Q}}}$ to $\mathtt{trop}(f_W)$.
The only difference now is that the inverse image of a point on a $k$-fold edge
will now consist of $k$ circles. Therefore we can factorize the map
\[
\xymatrix{\psurf{\twist{\mathrm{Q}}}\ar[rr]\ar[rd]&&\mathtt{trop}(f_W)\\&\Lambda(f_W)\ar[ru]&}
\]
and we have found an identification of $\psurf{\twist{\mathrm{Q}}}$ with $\tub{\Lambda(f_W)}$.
\end{remark}
\section{Gluing and Localizing}\label{sectionglue}
The Fukaya category of a punctured surface can be constructed by gluing smaller Fukaya categories together.
There are several ways to make this precise: see for instance \cite{dyckerhoff2013triangulated} and \cite{pascaleff2016topological}.
In this section we will use a method that combines naturally with the theory of stability
conditions and discuss how this can be interpreted in the $A$- and the $B$-model.
Our starting point is again a consistent dimer on a torus $\mathrm{Q}$ and its mirror dimer $\twist{\mathrm{Q}}$.
On $\mathrm{Q}$ we consider a nondegenerate stability condition $\theta=\theta_W$ and
on $\psurf{\twist{\mathrm{Q}}}$ we consider a quadratic Jenkins-Strebel differential $\Phi$ for which the horizontal
leaf space is $\mathtt{LH}(\Phi)=\Lambda(f_W)$ and the vertical leaves are the arrows of the quiver.
\subsection{The A-model: restriction functors}
Each point $p$ of the spider graph $\mathtt{LH}(\Phi)$ corresponds
to a new punctured surface with a Strebel differential on it.
This surface can be obtained by taking the inverse image of a small open
neighborhood $U_p \subset \mathtt{LH}(\Phi)$.
If $p$ is an internal point of an edge this surface is a cylinder; if $p$ is a
node of the spider graph it is a surface whose genus is equal to
the genus of the node and whose number of holes is
equal to the valence of the node.
The restriction of the quadratic differential gives a quadratic differential on this
surface. We can extend the surface and the differential by gluing infinite cylinders to the holes. These cylinders are of the form $(\frac{\mathbb{R}+\mathbb{R}_{>0}i}{\ell\mathbb{Z}},-dz^2)$
where the circumference $\ell$ is equal to the circumference of the corresponding tube.
In this way we get a new punctured surface with a Strebel differential $(\dot S_p,\Phi_p)$.
\begin{center}
\begin{tikzpicture}
\begin{scope}
\draw [thick](-.5,-1) -- (0,0) -- (1.5,0) -- (2,-1);
\draw [thick](-.5,1) -- (0,0) -- (1.5,0) -- (2,1);
\draw (0,0) node{$\bullet$};
\draw (1.5,0) node{$\bullet$};
\draw (2.25,0) node{$U_p$};
\draw[dotted] (2,0) arc (0:360:.5);
\end{scope}
\begin{scope}[xshift=5cm]
\draw[dotted] (1,0) arc(0:360:1);
\draw (-1.5,.2)--(-.8,.2) ..controls (-.1,.2) and (-.1,.2) .. (.6,.9) -- (.81,1.1);
\draw (-1.5,-.2)--(-.8,-.2) ..controls (-.1,-.2) and (-.1,-.2) .. (.6,-.9) -- (.81,-1.1);
\draw (1.2,1) ..controls (.2,0) and (.2,0) .. (1.2,-1);
\draw (-1.4,0) node{$\dots$};
\draw (1,1.05) node[rotate=45]{$\dots$};
\draw (1,-1.05) node[rotate=-45]{$\dots$};
\draw (-1,0) ellipse (0.1 and 0.2);
\draw[rotate=133] (-1,0) ellipse (0.1 and 0.19);
\draw[rotate=-133] (-1,0) ellipse (0.1 and 0.19);
\draw (0,0) node{$\dot S_p$};
\end{scope}
\end{tikzpicture}
\end{center}
If $p$ is a point of an internal edge we have that
$(\dot S_p,\Phi_p)\cong(\mathbb{C}^*,-a \frac{dz^2}{z^2})$ for some $a \in \mathbb{R}_{>0}$. For all points in the edge $e$ this pair is the same, so we denote it by $(\dot S_e,\Phi_e)$.
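As a quick sanity check (this computation is ours and not in the original): writing $z=e^{u}$ with $u\in\mathbb{C}/2\pi i\mathbb{Z}$ we have $dz/z=du$, so
\[
-a\frac{dz^2}{z^2}=-a\,du^2,
\]
a constant quadratic differential on a flat infinite cylinder. A direction $v$ is horizontal when $-a\,du^2(v,v)>0$, which forces $du(v)$ to be purely imaginary; the horizontal trajectories are therefore the circles $|z|=\mathrm{const}$, and the horizontal leaf space of $(\dot S_e,\Phi_e)$ is a single line, matching the edge $e$ of the spider graph.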
If $p$ is a node, the vertical leaves of $(\dot S_p,\Phi_p)$ correspond to arrows for which the image under
$\pi:\psurf{\twist{\mathrm{Q}}} \to \Lambda(f_W)$ passes through $p$. In fact we can see $\dot S_p$ as the punctured surface of a new dimer
$\twist{\mathrm{Q}}_p$. The arrows of the dimer $\twist{\mathrm{Q}}_p$ are in one-to-one correspondence with the arrows passing through $p$ under $\pi$.
If $a_1\dots a_k$ is a face in $\twist{\mathrm{Q}}_2^\pm$, we construct an analogous face for $\twist{\mathrm{Q}}_p$ by removing the arrows that do not pass through $p$.
If none of the arrows remain we omit the face.
Each arrow $a$ in $\twist{\mathrm{Q}}$ corresponds to an idempotent $e_a$ in $\mathtt{Gtl}(\twist{\mathrm{Q}})$ and each nonzero path $\beta$ in $\mathtt{Gtl}(\twist{\mathrm{Q}})$ winds around one of the punctures of $\psurf{\twist{\mathrm{Q}}}$. There is a natural algebra morphism ${\cal F}:\mathtt{Gtl}(\twist{\mathrm{Q}}) \to \mathtt{Gtl}(\twist{\mathrm{Q}}_p)$ that maps $e_a$ to the corresponding idempotent in $\mathtt{Gtl}(\twist{\mathrm{Q}}_p)$ if $a$ intersects $S_p$ and to zero otherwise. The image of a path $\beta$ is nonzero if it winds around a tube that is also present in $S_p$ and zero otherwise.
This algebra morphism can be extended to an $A_\infty$-morphism. Let $k>1$ and suppose that $\rho_1,\dots,\rho_k$ are nonzero angle paths in $\mathtt{Gtl}(\twist{\mathrm{Q}})$ and
\begin{itemize}
\item[F1] $h(\rho_1)\in \twist{\mathrm{Q}}_{p1}$ but $h(\rho_i)\not\in \twist{\mathrm{Q}}_{p1}$ if $i>1$,
\item[F2] $t(\rho_k)\in \twist{\mathrm{Q}}_{p1}$ but $t(\rho_i)\not\in \twist{\mathrm{Q}}_{p1}$ if $i<k$,
\item[F3] $\rho_i\rho_{i+1}=0$ for $1<i<k-1$.
\end{itemize}
We define
\[
{\cal F}(\rho_1,\dots,\rho_k) = (-1)^{|\rho|} \rho
\]
if the path $\rho_1\dots\rho_k$ in $\psurf{\twist{\mathrm{Q}}}$ is homotopic to the angle path $\rho$ in $\psurf{{\twist{\mathrm{Q}}}_p}\cong U_p \subset \psurf{\twist{\mathrm{Q}}}$.
All other ${\cal F}$-products of paths are zero.
\begin{lemma}
${\cal F}$ is an $A_\infty$-morphism.
\end{lemma}
\begin{proof}
First note that by conditions F1 and F2, given a sequence of angle paths $\rho_1,\dots,\rho_k$ in $\mathtt{Gtl} \twist{\mathrm{Q}}$
there is at most one expression of the form
\[
\mu({\cal F}(\rho_1,\dots,\rho_{i_1}),\dots, {\cal F}(\rho_{i_{l+1}},\dots,\rho_{k}))
\]
that is nonzero. This is the expression obtained by cutting the $\rho$-sequence
into pieces such that the cuts occur at all the heads $h(\rho_i)$ that are in $\twist{\mathrm{Q}}_{p1}$.
This means that the left hand side of the $A_\infty$-morphism identity \cite{keller1999introduction}
\se{
&\sum_{l,i_1,\dots, i_l}\pm \mu({\cal F}(\rho_1,\dots,\rho_{i_1}),\dots, {\cal F}(\rho_{i_{l+1}},\dots,\rho_{k}))\\
=&
\sum_{r<s}\pm {\cal F}(\rho_1,\dots,\rho_{r},\mu(\rho_{r+1},\dots,\rho_s),
\rho_{s+1},\dots,\rho_{k})
}
has at most one term. If that term is nonzero then either the term comes from a $\mu_2$
or from a higher order $\mu$.
In the first case this term is
$\mu_2({\cal F}(\dots,\rho_{i_1}),{\cal F}(\rho_{i_1+1}),\dots))$ and
we have one nonzero term on the right hand side:
\[
{\cal F}(\dots,\mu_2(\rho_{i_1},\rho_{i_1+1}),\dots).
\]
In the second case, set $\tilde \rho=\mu({\cal F}(\rho_1,\dots,\rho_{i_1}),\dots, {\cal F}(\rho_{i_{l+1}},\dots,\rho_{k}))$. By lemma \ref{muhomotopy} the angle path
$\tilde \rho$ in $\twist{\mathrm{Q}}_p$ is either trivial or a subpath of
${\cal F}(\rho_1,\dots,\rho_{i_1})$ or ${\cal F}(\rho_{i_{l+1}},\dots,\rho_{k})$.
If $\tilde \rho$ is trivial then $h(\rho_1)=t(\rho_k)=\mu(\rho_1,\dots,\rho_k)$ and
the RHS has one nonzero term ${\cal F}(\mu(\rho_1,\dots,\rho_k))$.
If $\tilde \rho$ is a subpath of ${\cal F}(\rho_1,\dots,\rho_{i_1})$ then $t(\tilde\rho)$ is an arrow $a$ that enters the contractible region bounded by the arrows $h(\rho_1),\dots,t(\rho_k)$. The arrow will end in a puncture and will cut one of the angles
$\rho_i$ in two: $\rho_i=\alpha\beta$ with $h(\beta)= a$.
\begin{center}
\resizebox{!}{2cm}{
\begin{tikzpicture}
\foreach \s in {1,2,3,4,5,6,7,8,9}
{
\draw (230-\s*25:1.5)node {$\rho_\s$};
}
\draw (0:1.5)--(180:2);
\draw (0,0) node[above] {$a$};
\draw (5:1.2) arc (5:55:1.2);
\draw (65:1.2) arc (65:135:1.2);
\draw (140:1.2) arc (140:205:1.2);
\draw (30:.9) node {${\cal F}$};
\draw (92:.9) node {${\cal F}$};
\draw (170:.9) node {${\cal F}$};
\draw (175:1.9) node {$\beta$};
\draw (185:1.9) node {$\alpha$};
\draw (-5:1.2) -- (210:1.2);
\draw (285:.8) node{$\mu$};
\draw (2.15,0) node{$=$};
\begin{scope}[xshift=4.5cm]
\foreach \s in {1,2,3,4,5,6,7,8,9}
{
\draw (230-\s*25:1.5)node {$\rho_\s$};
}
\draw (0:1.5)--(180:2);
\draw (0,0) node[above] {$a$};
\draw (5:1.2) arc (5:175:1.2);
\draw (92:.9) node {$\mu$};
\draw (175:1.9) node {$\beta$};
\draw (185:1.9) node {$\alpha$};
\draw (-5:1.2) to[out=185,in=120] (210:1.2);
\draw (285:.5) node{${\cal F}$};
\end{scope}
\end{tikzpicture}}
\end{center}
Therefore $\mu(\rho_i,\dots,\rho_k)=\alpha$ and ${\cal F}(\rho_1,\dots,\rho_{i-1},\mu(\rho_i,\dots,\rho_k))$ is nonzero. It is the only nonzero term on the RHS because of lemma \ref{muhomotopy}.
The case when $\tilde\rho$ is a subpath of ${\cal F}(\rho_{i_{l+1}},\dots,\rho_{k})$ is similar. So we can conclude that if the LHS is nonzero, the RHS has one nonzero term, which is equal to the LHS because it is an angle path with the same homotopy.
\vspace{.3cm}
Now let's have a closer look at the right hand side. We distinguish the following cases.
\begin{itemize}
\item[C1] If one of the $\rho_i$ is an idempotent then the right hand side has two
terms with a $\mu_2$ that give a possibly nonzero contribution. These two terms have opposite sign and hence they cancel out. All other terms have the idempotent in a higher $\mu$ or ${\cal F}$-term and hence by $F3$ and the definition of $\mu$ they are zero. Similarly the left hand side is zero.
\item[C2]
If $\rho_i\rho_{i+1}$ is nonzero and neither $\rho_i$ nor $\rho_{i+1}$ is an idempotent then
the only terms on the right hand side that are nonzero must contain a $\mu_2(\rho_i,\rho_{i+1})$ or a higher $\mu$ that either contains $\rho_i$ or $\rho_{i+1}$ but not both (again by lemma \ref{muhomotopy}).
We distinguish two subcases.
\begin{itemize}
\item[C2.1] If $h(\rho_{i+1}) \in \twist{\mathrm{Q}}_{p1}$ then only the $\mu_2$-term on the right hand side can be nonzero because of F1 and F2. If this term is indeed nonzero, F1 and F2 also
imply that $h(\rho_j) \not \in \twist{\mathrm{Q}}_{p1}$ for $j\ne {i+1}$.
On the left hand side this means that the possibly nonzero term is of the
form $\mu_2({\cal F}(\rho_1,\dots,\rho_i),{\cal F}(\rho_{i+1},\dots,\rho_k))$ and by lemma
\ref{muhomotopy} it is equal to the RHS.
\item[C2.2] If $a=h(\rho_{i+1}) \not \in \twist{\mathrm{Q}}_{p1}$ then $h(\rho_{i+1})$ must correspond to
an arrow that can be drawn completely inside the polygon spanned by the arrows
$h(\rho_{j})$. This arrow must connect two corners of the polygon: one corresponding
to the puncture around which $\rho_{i+1}$ winds and another one around which $\rho_j$
winds. If $j>{i+1}$ then the term ${\cal F}(\dots,\mu(\rho_{i+1},\dots,\rho_{j})\dots)$
cancels the term with $\mu_2$ and if $j<i$ then the term
${\cal F}(\dots,\mu(\rho_{j},\dots,\rho_{i+1})\dots)$ does this. The other terms on the RHS are zero. The left hand side is zero because the split has only one term (and $\mu_1=0$).
\begin{center}
\resizebox{!}{2cm}{
\begin{tikzpicture}
\foreach \s in {1,2,3,4,5,6,7,8,9}
{
\draw (230-\s*25:1.5)node {$\rho_\s$};
}
\draw (87:1.2) arc (87:112:1.2);
\draw (97:.9) node {$\mu_2$};
\draw (140:.7) node {$a$};
\draw[dotted] (92:1.8) to[out=-88,in=-12] (168:1.8);
\draw (-5:1.2) -- (210:1.2);
\draw (285:.8) node{${\cal F}$};
\draw (2.15,0) node{$-$};
\begin{scope}[xshift=4.5cm]
\foreach \s in {1,2,3,4,5,6,7,8,9}
{
\draw (230-\s*25:1.5)node {$\rho_\s$};
}
\draw (97:1.2) arc (97:160:1.2);
\draw (140:.8) node {$\mu$};
\draw (-5:1.2) -- (210:1.2);
\draw (285:.8) node{${\cal F}$};
\end{scope}
\end{tikzpicture}}
\end{center}
\end{itemize}
\item[C3] If there is a higher $\mu$-term $\mu(\rho_i,\dots,\rho_j)$ that is nonzero on the RHS then
\begin{itemize}
\item[C3.1]
If $i=1$ and $j=k$ then no other term on the RHS can be nonzero and the term on the LHS
corresponds to the same contractible disk.
\item[C3.2]
If $i>1,j<k$ then $\beta = \mu(\rho_i,\dots,\rho_j)$ must be a nontrivial angle path. Therefore it must be a subpath of either $\rho_i$ or $\rho_j$.
If it is a subpath of $\rho_i=\beta\gamma$ then $\rho_j\rho_{j+1}\ne 0$, so we are in C2.
Similarly if $\rho_j=\alpha\beta$ we can show that $\rho_{i-1}\rho_{i}\ne 0$.
\item[C3.3]
If $i>1$ and $j=k$ we can apply the same reasoning as in C3.2 but not when $\rho_j=\alpha\beta$. In that case all consecutive products $\rho_i\rho_{i+1}$ are zero. This implies that
we can group them into subsequences for which ${\cal F}(\rho_u,\dots,\rho_v)\ne 0$ and
because of lemma \ref{muhomotopy} applied in $\twist{\mathrm{Q}}_p$, there is a nonzero term on the LHS.
\item[C3.4]
If $i=1$ and $j<k$ we can apply the same reasoning as C3.3.
\end{itemize}
\end{itemize}
This shows that if the RHS has a nonzero term then either there is another RHS term that cancels it or the LHS has a nonzero term.
\end{proof}
By going to the twisted completion, the $A_\infty$-morphism gives rise to an $A_\infty$-functor
\[
\mathrm{wfuk}(\dot S) \to \mathrm{wfuk}(\dot S_p).
\]
What does this functor do geometrically? If we look at curves that can be drawn inside
$\pi^{-1}(U_{p})$, it is clear that the corresponding band object will be mapped to a band object with the same curve.
From this point of view it is clear that this functor is a special case of the restriction functors constructed by Dyckerhoff in \cite{dyckerhoff2013triangulated} and Pascaleff and Sibilla in \cite{pascaleff2016topological}.
All the categories we get out of these restriction maps can be glued together to obtain the original
Fukaya category. For each edge $e$ between nodes $n_1$, $n_2$ we have a diagram
\[
\mathrm{wfuk} S_{n_1} \to \mathrm{wfuk} S_{e} \leftarrow \mathrm{wfuk} S_{n_2},
\]
which all together make a big diagram of $A_\infty$-categories. We can consider this diagram as a diagram inside
the category $\mathsf{dgcat}_{\mathbb{Z}_2}$, which is the category of $\mathbb{Z}_2$-graded dg-categories localized at the quasi-equivalences.
In this category we can take the colimit, which is known as the homotopy colimit.
\begin{theorem}[Dyckerhoff \cite{dyckerhoff2015a1}, Pascaleff--Sibilla \cite{pascaleff2016topological}]
There is an equivalence between the homotopy colimit of
\[
\cup_{e \in \Lambda_1(f_W)} \mathrm{wfuk} S_{n_1} \to \mathrm{wfuk} S_{e} \leftarrow \mathrm{wfuk} S_{n_2},
\]
and the Fukaya category $\mathrm{wfuk} S$.
\end{theorem}
\subsection{The B-model: matrix factorizations and sheaves}
Given a consistent dimer $\mathrm{Q}$ and a nondegenerate stability condition
$\theta =\theta_W$ we can look at the representation space $\mathcal{M}_{\theta}$.
Each point in this space corresponds to a semistable representation $\rho$ and we can look at
the universal localization
\[
{\mathtt{J}}_\rho := {\mathtt{J}}\< a^{-1}| \rho(a)\ne 0\>.
\]
Because $\mathrm{Q}$ is consistent, by theorem \ref{consprop} this algebra can be seen as a subalgebra of
$${\widehat{\mathtt{J}}} := \mathsf{Mat}_n(\mathbb{C}[X^{\pm 1},Y^{\pm 1},Z^{\pm 1}]).$$
The representation classes $\sigma \in \mathcal{M}_{\theta}$ such that $\rho(a)\ne 0 \implies \sigma(a)\ne 0$ form
an open subset $U_\rho \subset \mathcal{M}_{\theta}$ and following \cite{adriaenssens2003local} this space can be seen as $\mathcal{M}_0({\mathtt{J}}_\rho)$.
This means that $\mathcal{M}_{\theta}$ can be covered by a sheaf of algebras, such that each affine open part is a representation space of semistable representations of a new stability condition.
In general these algebras can be quite complicated but in the case of a nondegenerate stability condition they have nice properties.
\begin{theorem}
If $\theta_W$ is a nondegenerate stability condition then all
${\mathtt{J}}_{\rho}$ are 3-Calabi-Yau algebras with center $\mathbb{C}[U_{\rho}]$, which implies that $\mathcal{M}_{\theta}$ can be covered
with a sheaf of Calabi-Yau algebras.
\begin{itemize}
\item If $\rho$ sits in a $2$-dimensional torus orbit then ${\mathtt{J}}_{\rho}$ is Morita equivalent to $$\mathbb{C}[\mathbb{C}^*\times \mathbb{C}^*\times \mathbb{C}].$$
\item If $\rho$ sits in a $1$-dimensional torus orbit corresponding to an edge with length $k$ then
${\mathtt{J}}_{\rho}$ is Morita equivalent to $\mathbb{C}[\mathbb{C}^*]\otimes \mathbb{C}[X,Y]\star \mathbb{Z}_k$, where $gXg^{-1} =\zeta X$ and
$gYg^{-1} =\zeta^{-1} Y$ for a primitive $k$-th root of unity $\zeta$.
\item If $\rho$ sits in a $0$-dimensional torus orbit then ${\mathtt{J}}_{\rho}$ is Morita equivalent to the Jacobi algebra of a consistent dimer $\mathrm{Q}_\rho$
with a matching polygon equal to the subpolygon associated to $U_\rho$.
\end{itemize}
\end{theorem}
\begin{proof}
We say that two vertices in $\mathrm{Q}$ are equivalent if there is a path $p$ between them
with $\rho(p)\ne 0$. In ${\mathtt{J}}_\rho$ these paths have inverses and therefore if we choose one vertex $v_i$ for every equivalence class and take the sum $w=\sum_i v_i$
we see that ${\mathtt{J}}_\rho$ is Morita equivalent to $w {\mathtt{J}}_\rho w$.
\begin{itemize}
\item
If $\rho$ is in a $2$-dimensional torus orbit then the nondegeneracy of $\theta$ implies that there is precisely one perfect matching ${\cal P}$ on which $\rho$ is zero.
Because $\mathrm{Q} \setminus {\cal P}$ is connected all vertices are in the same equivalence class.
This implies that ${\mathtt{J}}_\rho$ is Morita equivalent to $v{\mathtt{J}}_\rho v\cong \mathbb{C}[U_{\rho}]$ which is Calabi-Yau-3 because $U_\rho=\mathbb{C}^*\times \mathbb{C}^*\times \mathbb{C}$.
\item
If $\rho$ is in a 1-dimensional torus orbit then the nondegeneracy of $\theta$ implies that there are precisely two perfect matchings ${\cal P}_1,{\cal P}_2$ on which $\rho$ is zero.
We can draw the curves according to $\rho_{{\cal P}_1\cup {\cal P}_2}$ on the surface $\surf{\mathrm{Q}}$. These curves split the torus into strips and all vertices in the same strip are
equivalent, because there is a weak path that goes around the strip. This weak path gives
rise to a central element $f$.
There are $3$ important types of generators in $w {\mathtt{J}}_\rho w$.
Arrows $x_i$ that connect two strips from the left to the right, arrows $y_i$ that connect two strips from the right to the left
and the central elements $f,f^{-1}$.
Two arrows connecting the same strips in the same direction are equivalent because viewed from fixed vertices they correspond
to weak paths with the same homology class.
\[
\hspace{2cm}
\vcenter{
\xymatrix@=.4cm{
f\to\ar@{.>}@/^/[rr]&&\vtx{}\ar[dd]|{x_i}&&\vtx{}\ar@{.>}@/_/[ll]\ar@{.>}@/^/[rr]&&\vtx{}\ar[dd]|{x_i}&&\vtx{}\ar@{.>}@/_/[ll]\ar@{.>}@/^/[rr]&&\dots\\
&&&&&&&&&&\\
f\to&&\vtx{}\ar@{.>}@/^/[ll]\ar@{.>}@/_/[rr]&&\vtx{}\ar[uu]|{y_i}&&\vtx{}\ar@{.>}@/^/[ll]\ar@{.>}@/_/[rr]&&\vtx{}\ar[uu]|{y_i}&&\dots \ar@{.>}@/^/[ll]
}
}
\hspace{1cm}
\vcenter{
\xymatrix@=.2cm{
\vdots\ar@/_/[d]\\
\vtx{}\ar@/_/[ddd]|{x_i}\ar@/_/[u]\ar@(lu,ld)_f\\
\\~\\
\vtx{}\ar@/_/[d]\ar@/_/[uuu]|{y_i}\ar@(lu,ld)_f\\
\vdots\ar@/_/[u]
}}
\]
The quiver is the double of the extended Dynkin quiver $\tilde A_n$ with relations $x_iy_{i}=y_{i+1}x_{i+1}$,
and a loop in each vertex corresponding to $f$.
The algebra $w {\mathtt{J}}_\rho w$ is the preprojective algebra tensored with $\mathbb{C}[f,f^{-1}]$.
Because the first factor is CY2 \cite{bocklandt2008graded} and the
second CY1 the whole is CY3.
\item
If $\rho$ sits in a $0$-dimensional torus orbit then there are at least $3$ perfect matchings which are zero for $\rho$. These perfect matchings ${\cal P}_1,\dots,{\cal P}_k$ lie on different points in the matching polygon, so there is no nontrivial weak path that has zero degree for all these perfect matchings. Therefore the vertices in an equivalence class and the invertible arrows connecting them form a contractible subset on the torus.
If we contract all the invertible arrows we end up with a quiver on a surface.
By lemma \ref{existspath}, the algebra $w {\mathtt{J}}_\rho w$ is in fact the path algebra of this new quiver where two paths are identified if they have the same homology and ${\cal P}_i$-degree.
We do not need all arrows of this quiver to generate the whole algebra. If $a$ is contracted
to a contractible loop then this path will be equivalent to any cycle in $\mathrm{Q}_2$ starting in the same vertex. Therefore we can remove these loops and we denote the remaining quiver by $\mathrm{Q}_\rho$.
The mirror of $\mathrm{Q}_\rho$ is the dimer $\twist{\mathrm{Q}}_p$ where $p$
is the point in the spider graph corresponding to $\rho$. Indeed $a$ is an arrow
that passes through $p$ under the projection $\dot S(\twist{\mathrm{Q}}) \to \mathtt{trop}(f_W)$ if and only if
$p$ is on the boundary between the perfect matchings that contain $a$ and those that do not.
If $a$ does not pass through $p$ then either $a \not \in {\cal P}_1\cup\dots\cup{\cal P}_k$ or
$a \in {\cal P}_1\cap\dots\cap{\cal P}_k$. In the former case we contracted $a$ to obtain $\mathrm{Q}_\rho$ and in the latter cases we removed it because it was a contractible loop.
A dimer $\mathrm{Q}$ is \emph{well-ordered} if the cyclic order in which the zigzag cycles meet any positive cycle is the same as the cyclic order of their homology classes.
This property was introduced by Gulotta in \cite{gulotta2008properly} and Ishii and Ueda proved that a dimer is well-ordered if and only if it is zigzag-consistent \cite{ishii2010note}.
We will now show that $\mathrm{Q}_\rho$ is well-ordered and hence consistent.
The image of a positive or negative cycle in $\twist{\mathrm{Q}}_2$ under $\pi_{\twist{\mathrm{Q}}}$ is a tree because these cycles are contractible. Two arrows of $\twist{\mathrm{Q}}_p$
that follow each other in the cyclic order of the cycle enter and leave the node $p$ via the same edge in the spider graph. This implies that zigzag paths in $\mathrm{Q}_\rho$
consist of all arrows entering/leaving the node via the same edge.
These are the arrows of the garland corresponding to that edge.
Hence there is a one-to-one correspondence between bands coming from edges connected to $p$ and zigzag paths in $\mathrm{Q}_\rho$.
If we look at a face in $\mathrm{Q}_{red2}^+$ we see that the zigzag paths that are incident
with that face, meet that face in the arrows that enter and leave the neighborhood of $p$
by the edge corresponding to that zigzag path. Moreover the homology class of the zigzag path in $\mathrm{Q}_{red2}^+$ is the same as the direction of this edge. Therefore
the cyclic order of the zigzag paths around the cycle is the same as the cyclic
order of the homology classes of the zigzag paths.
Now we show that the matching polygon of $\mathrm{Q}_\rho$ is the subpolygon associated to $U_\rho$. The perfect matchings ${\cal P}_1,\dots,{\cal P}_k$ of $\mathrm{Q}$ give positive gradings on ${\mathtt{J}}_\rho$ and hence also on $w {\mathtt{J}}_\rho w$. Therefore they induce perfect matchings on $\mathrm{Q}_\rho$.
Vice versa if ${\cal P}$ is a perfect matching on $\mathrm{Q}_\rho$, we get a positive
grading on $w {\mathtt{J}}_\rho w$, which we can extend to a positive grading ${\mathtt{J}}_\rho$ by giving
the paths we contracted degree zero. This grading gives a perfect matching supported by $\rho$.
Finally because $\mathrm{Q}_\rho$ is consistent the Jacobi algebra is the path algebra of $\mathrm{Q}_\rho$ where two paths are identified if they have the same homotopy class
and the same degree for at least one perfect matching. As every perfect matching on $\mathrm{Q}_\rho$ gives a perfect matching on $\mathrm{Q}$ we see that $w {\mathtt{J}}_\rho w$ is
isomorphic to ${\mathtt{J}}(\mathrm{Q}_\rho)$.
\end{itemize}
\end{proof}
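To make the $1$-dimensional orbit case concrete, consider the smallest instance $k=2$ (this example is ours and is not taken from the sources above). The relations become $gXg^{-1}=-X$ and $gYg^{-1}=-Y$, and the center of the skew group algebra is the invariant ring
\[
Z\big(\mathbb{C}[X,Y]\star\mathbb{Z}_2\big)=\mathbb{C}[X,Y]^{\mathbb{Z}_2}=\mathbb{C}[X^2,XY,Y^2]\cong \mathbb{C}[u,v,t]/(ut-v^2),
\]
the coordinate ring of the $A_1$-singularity. Tensoring with the factor $\mathbb{C}[\mathbb{C}^*]=\mathbb{C}[f,f^{-1}]$ gives a transverse $A_1$-singularity along a $\mathbb{C}^*$, which matches the expected local structure of the toric variety along an edge of length $2$.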
\begin{remark}
The operation $\mathrm{Q}\to \mathrm{Q}_\rho$ can be performed for every collection of arrows,
for which the underlying subgraph is a union of trees. It is in general not true
that the resulting reduced quiver is consistent if the original dimer is. Certain special
subsets of arrows for which this is the case have been studied by Ishii and Ueda in
\cite{ishii2009dimer}. The result above gives a different way to generate such subsets of arrows.
\end{remark}
The embedding ${\mathtt{J}} \subset {\mathtt{J}}_\rho$ gives a functor $-\otimes {\mathtt{J}}_\rho:{\mathtt {MF}}({\mathtt{J}},\ell) \to {\mathtt {MF}}({\mathtt{J}}_\rho,\ell)$ that maps
$(P,d)$ to $(P \otimes_{{\mathtt{J}}} {\mathtt{J}}_\rho,d)$. Because ${\mathtt{J}}_\rho$ is Morita-equivalent to ${\mathtt{J}}(\mathrm{Q}_\rho)$, the category
${\mathtt {MF}}({\mathtt{J}}_\rho,\ell)$ will be equivalent to ${\mathtt {MF}}({\mathtt{J}}(\mathrm{Q}_\rho),\ell)$. The shortening lemma implies that
$M_a$ is mapped to zero if $\rho(a)\ne 0$. It is also easy to check that if $\rho(a)=0$ then $M_a$ is mapped to $M_a'$,
where $a'$ is the arrow corresponding to $a$ in $\mathrm{Q}_\rho$.
Furthermore if $M_a\otimes {\mathtt{J}}_\rho=M_{a'}$, $M_b\otimes {\mathtt{J}}_\rho=M_{b'}$ and there is a nonzero morphism $M_a\to M_b$, the tensored version
will also give a nonzero morphism in ${\mathtt {MF}}({\mathtt{J}}_\rho,\ell)$ and ${\mathtt {MF}}({\mathtt{J}}(\mathrm{Q}_\rho),\ell)$. This implies that the ${\cal F}_0$ and ${\cal F}_1$ part of the functor we constructed on
the A-side is the same as the tensor functor restricted to ${\mathtt D}_{\mathbb{Z}_2}{\mathtt {mf}}(\mathrm{Q})$. Hence, they give isomorphic $A_\infty$-functors and
just like in the $A$-model we have a diagram of functors for which the homotopy colimit is equivalent to ${\mathtt D}_{\mathbb{Z}_2}{\mathtt {mf}}(\mathrm{Q})$.
\section{An example}\label{sectionexample}
Consider the following dimer $\mathrm{Q}$ with weight function $W:\mathrm{Q}_1\to\mathbb{Z}$ such that $W_{a}=1$ if $a\in \{1,4,6,16\}$ and $W_a=0$ otherwise. The corresponding stability condition $\theta_W$ has $7$ stable perfect matchings.
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.4]
\begin{scope}
\draw[loosely dotted] (102pt,20pt) rectangle (402pt,320pt);
\draw [-latex,shorten >=5pt] (320pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 1}} (293pt,245pt);
\draw [-latex,shorten >=5pt] (184pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 11}} (184pt,170pt);
\draw [-latex,shorten >=5pt] (320pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 6}} (211pt,95pt);
\draw [-latex,shorten >=5pt] (320pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2}} (320pt,170pt);
\draw [-latex,shorten >=5pt] (293pt,245pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 3}} (320pt,170pt);
\draw [-latex,shorten >=5pt] (293pt,245pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 4}} (184pt,320pt);
\draw [-latex,shorten >=5pt] (320pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 5}} (402pt,170pt);
\draw [-latex,shorten >=5pt] (102pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7}} (102pt,20pt);
\draw [-latex,shorten >=5pt] (402pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7}} (402pt,20pt);
\draw [-latex,shorten >=5pt] (102pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 8}} (102pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 8}} (402pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 9}} (320pt,20pt);
\draw [-latex,shorten >=5pt] (402pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 9}} (320pt,320pt);
\draw [-latex,shorten >=5pt] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 10}} (184pt,20pt);
\draw [-latex,shorten >=5pt] (102pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 10}} (184pt,320pt);
\draw [-latex,shorten >=5pt] (184pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 12}} (211pt,95pt);
\draw [-latex,shorten >=5pt] (184pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 13}} (293pt,245pt);
\draw [-latex,shorten >=5pt] (184pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 14}} (102pt,170pt);
\draw [-latex,shorten >=5pt] (211pt,95pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 15}} (320pt,20pt);
\draw [-latex,shorten >=5pt] (211pt,95pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 16}} (184pt,170pt);
\node at (320pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $1$}};
\node at (320pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $1$}};
\node at (293pt,245pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $2$}};
\node at (320pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $3$}};
\node at (102pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $4$}};
\node at (402pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $4$}};
\node at (102pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (102pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (184pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $6$}};
\node at (184pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $6$}};
\node at (184pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $7$}};
\node at (211pt,95pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $8$}};
\end{scope}\end{scope}
\draw (250pt,64pt) node {
\begin{minipage}{4cm}
Stable perfect matchings:
\begin{enumerate}
\item $\{ 9, 6, 4, 14 \}$
\item $\{ 8, 2, 4, 16 \}$
\item $\{ 8, 2, 13, 12 \}$
\item $\{ 5, 15, 13, 10 \}$
\item $\{ 3, 15, 11, 7 \}$
\item $\{ 1, 6, 11, 7 \}$
\item $\{ 9, 15, 13, 14 \}$
\end{enumerate}
\end{minipage}
};
\end{tikzpicture}
\end{center}
The matching polygon is a hexagon, which is subdivided into three quadrangles. The numbers at the lattice points of the hexagon correspond to the stable perfect matchings.
The dual spider graph has three nodes, two with genus zero and one with genus one, because the corresponding quadrangle has an internal lattice point.
\begin{center}
\begin{tikzpicture}
\draw (0,-1) -- (-1,0);
\draw (0,0) -- (-1,0);
\draw (-1,0) -- (0,1);
\draw (0,1) -- (1,1);
\draw (1,1) -- (2,0);
\draw (2,0) -- (1,-1);
\draw (0,0) -- (1,-1);
\draw (1,-1) -- (0,-1);
\draw (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\draw (0,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny2}};
\draw (1,1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny3}};
\draw (2,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny4}};
\draw (1,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny5}};
\draw (0,-1) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny6}};
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny7}};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=.57]
\draw (-1,-1) -- (-2,-2);
\draw (-1,1) -- (-1,-1);
\draw (-1,1) -- (-2,2);
\draw (-1,1) -- (-1,2);
\draw (0,0) -- (1,1);
\draw (0,0) -- (1,-1);
\draw (-1,-1) -- (0,0);
\draw (-1,-1) -- (-1,-2);
\draw (-1,1) -- (0,0);
\draw (0,0) node[circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {{\tiny1}};
\end{tikzpicture}
\end{center}
Every arrow gives rise to a connected subset of the subdivision of the matching polygon
which is supported on the lattice points of the stable perfect matchings
that contain this arrow. The boundary of this subset corresponds
to a line in the spider graph.
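As a sanity check, not part of the argument, these subsets can be computed directly from the list of stable perfect matchings above. The following Python sketch does so, with matchings and arrows numbered as in the figures:

```python
# Stable perfect matchings, numbered as in the enumeration above;
# each is recorded as the set of arrows it contains.
matchings = {
    1: {9, 6, 4, 14},
    2: {8, 2, 4, 16},
    3: {8, 2, 13, 12},
    4: {5, 15, 13, 10},
    5: {3, 15, 11, 7},
    6: {1, 6, 11, 7},
    7: {9, 15, 13, 14},
}

# For every arrow, the set of stable perfect matchings containing it;
# these are the black lattice points in the table of subsets.
support = {a: {m for m, pm in matchings.items() if a in pm}
           for a in range(1, 17)}

print(support[2])   # arrow 2 lies in matchings 2 and 3: {2, 3}
```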
\vspace{.3cm}
\newcommand{\ZARb}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [red, very thick](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARc}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARd}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [red, very thick](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARe}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [red, very thick](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [red, very thick](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARf}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARg}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [red, very thick](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [red, very thick](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARh}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARi}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARj}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARba}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbb}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbc}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [red, very thick](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbd}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [red, very thick](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [red, very thick](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [red, very thick](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbe}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [red, very thick](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [red, very thick](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbf}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black] (2,0) -- (1,-1);
\draw [black] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [red, very thick](-1,1) -- (-1,-1);
\draw [black!25](-1,1) -- (-2,2);
\draw [black!25](-1,1) -- (-1,2);
\draw [red, very thick](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [red, very thick](-1,-1) -- (-1,-2);
\draw [red, very thick](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\newcommand{\ZARbg}{\begin{tikzpicture}\begin{scope}[scale=.33]\draw [black!25] (0,-1) -- (-1,0);
\draw [black!25] (0,0) -- (-1,0);
\draw [black!25] (-1,0) -- (0,1);
\draw [black!25] (0,1) -- (1,1);
\draw [black!25] (1,1) -- (2,0);
\draw [black!25] (2,0) -- (1,-1);
\draw [black!25] (0,0) -- (1,-1);
\draw [black!25] (1,-1) -- (0,-1);
\draw [black!25] (1,1) -- (0,0);
\draw (-1,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,1) node[circle,draw,fill=black,minimum size=3pt,inner sep=1pt]{};
\draw (1,1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (2,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (1,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,-1) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\draw (0,0) node[circle,draw,fill=white,minimum size=3pt,inner sep=1pt]{};
\end{scope}\end{tikzpicture}{\begin{tikzpicture}\begin{scope}[scale=.25]
\draw [black!25](-1,-1) -- (-2,-2);
\draw [black!25](-1,1) -- (-1,-1);
\draw [red, very thick](-1,1) -- (-2,2);
\draw [red, very thick](-1,1) -- (-1,2);
\draw [black!25](0,0) -- (1,1);
\draw [black!25](0,0) -- (1,-1);
\draw [black!25](-1,-1) -- (0,0);
\draw [black!25](-1,-1) -- (-1,-2);
\draw [black!25](-1,1) -- (0,0);
\end{scope}\end{tikzpicture}}}
\vspace{.2cm}
\begin{tabular}{|cc|cc|cc|ccc|}\hline1&\ZARb&2&\ZARc&3&\ZARd&4&\ZARe&\\ \hline
5&\ZARf&6&\ZARg&7&\ZARh&8&\ZARi&\\ \hline
9&\ZARj&10&\ZARba&11&\ZARbb&12&\ZARbc&\\ \hline
13&\ZARbd&14&\ZARbe&15&\ZARbf&16&\ZARbg&\\ \hline
\end{tabular}
\vspace{.3cm}
To find the dimers corresponding to the three nodes of the spider graph,
we contract, for each node,
the arrows whose lines do not run through that node.
For the upper left node these are
$\{1, 3, 5, 7, 10, 11\}$, for the lower left node $\{2, 5, 8, 10, 12, 16\}$ and for the right node
$\{1,4,6,16\}$. This gives the following reduced dimers. The $2$-cycles that can be
removed are drawn in light grey.
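The rule can also be phrased combinatorially: an arrow's line misses a node exactly when none of the stable perfect matchings containing that arrow sits at a corner of the corresponding quadrangle. The Python sketch below illustrates this reformulation; the corner sets are read off from the subdivision of the hexagon (with lattice point $7$ the internal point) and are an assumption of this illustration.

```python
# Stable perfect matchings as sets of arrows, numbered as above.
matchings = {
    1: {9, 6, 4, 14}, 2: {8, 2, 4, 16}, 3: {8, 2, 13, 12},
    4: {5, 15, 13, 10}, 5: {3, 15, 11, 7}, 6: {1, 6, 11, 7},
    7: {9, 15, 13, 14},
}

# Corners of the three quadrangles of the subdivision, read off
# from the figure (an assumption of this illustration).
quadrangles = {
    "upper left": {1, 2, 3, 7},
    "lower left": {1, 5, 6, 7},
    "right":      {3, 4, 5, 7},
}

def contracted(corners):
    """Arrows contained in no stable matching at a corner of the quadrangle."""
    return {a for a in range(1, 17)
            if not any(a in matchings[m] for m in corners)}

for node, corners in quadrangles.items():
    print(node, sorted(contracted(corners)))
```

Running this reproduces the three contracted sets listed above.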
\begin{center}
\begin{tikzpicture}
\begin{scope}[scale=.25]
\begin{scope}
\draw[loosely dotted] (102pt,20pt) rectangle (402pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 6}} (211pt,95pt);
\draw [latex-latex,shorten >=5pt,black!12] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2,8,9,14}} (402pt,320pt);
\draw [latex-latex,shorten >=5pt,black!12] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2,8,9,14}} (102pt,320pt);
\draw [latex-latex,shorten >=5pt,black!12] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 4,13}} (102pt,320pt);
\draw [-latex,shorten >=5pt] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 12}} (211pt,95pt);
\begin{scope}\clip (102pt,20pt) rectangle (402pt,320pt);
\draw [-latex,shorten >=5pt] (211pt,95pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 15}} (402pt,-280pt);
\draw [-latex,shorten >=5pt] (211pt,395pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 15}} (402pt,20pt);
\end{scope}
\draw [-latex,shorten >=5pt] (211pt,95pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 16}} (102pt,320pt);
\node at (102pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (102pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (211pt,95pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $8$}};
\end{scope}\end{scope}
\end{tikzpicture}
\hspace{.5cm}
\begin{tikzpicture}
\begin{scope}[scale=.25]
\begin{scope}
\draw[loosely dotted] (102pt,20pt) rectangle (402pt,320pt);
\begin{scope}\clip (102pt,20pt) rectangle (402pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,620pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 1}} (293pt,245pt);
\draw [-latex,shorten >=5pt] (402pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 1}} (293pt,-55pt);
\end{scope}
\draw [latex-latex,shorten >=5pt,black!12] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7,9,11,14}} (402pt,320pt);
\draw [latex-latex,shorten >=5pt,black!12] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7,9,11,14}} (102pt,320pt);
\draw [latex-latex,shorten >=5pt,black!12] (402pt,320pt) .. controls (282pt,140pt) and (282pt,140pt) .. node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 6,15}} (102pt,20pt);
\draw [-latex,shorten >=5pt] (293pt,245pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 3}} (402pt,320pt);
\draw [-latex,shorten >=5pt] (293pt,245pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 4}} (102pt,320pt);
\draw [-latex,shorten >=5pt] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 13}} (293pt,245pt);
\node at (293pt,245pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $2$}};
\node at (102pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (102pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\end{scope}\end{scope}
\end{tikzpicture}
\hspace{.5cm}
\begin{tikzpicture}
\begin{scope}[scale=.25]
\begin{scope}
\draw[loosely dotted] (102pt,20pt) rectangle (402pt,320pt);
\draw [-latex,shorten >=5pt] (320pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 3}} (320pt,170pt);
\draw [<->,shorten >=5pt,black!12] (320pt,320pt) .. controls (250pt,254pt) and (250pt,254pt) .. node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 11,13}} (320pt,170pt);
\draw [-latex,shorten >=5pt] (320pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 2}} (320pt,170pt);
\draw [<->,shorten >=5pt,black!12] (320pt,20pt) .. controls (250pt,95pt) and (250pt,95pt) .. node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 12,15}} (320pt,170pt);
\draw [-latex,shorten >=5pt] (320pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 5}} (402pt,170pt);
\draw [-latex,shorten >=5pt] (102pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7}} (102pt,20pt);
\draw [-latex,shorten >=5pt] (402pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 7}} (402pt,20pt);
\draw [-latex,shorten >=5pt] (102pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 8}} (102pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 8}} (402pt,320pt);
\draw [-latex,shorten >=5pt] (402pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 9}} (320pt,20pt);
\draw [-latex,shorten >=5pt] (402pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 9}} (320pt,320pt);
\draw [-latex,shorten >=5pt] (102pt,20pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 10}} (320pt,20pt);
\draw [-latex,shorten >=5pt] (102pt,320pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 10}} (320pt,320pt);
\draw [-latex,shorten >=5pt] (320pt,170pt) to node [rectangle,draw,fill=white,sloped,inner sep=1pt] {{\tiny 14}} (102pt,170pt);
\node at (320pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $1$}};
\node at (320pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $1$}};
\node at (320pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $3$}};
\node at (102pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $4$}};
\node at (402pt,170pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $4$}};
\node at (102pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,20pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (402pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\node at (102pt,320pt) [circle,draw,fill=white,minimum size=10pt,inner sep=1pt] {\mbox{\tiny $5$}};
\end{scope}\end{scope}
\end{tikzpicture}
\end{center}
If an arrow $a$ is part of a $2$-cycle $ab$, the path that runs in the same direction but on the other side of the arrow $b$ has the same homotopy class and ${\cal P}_i$-degree
as $a$ and must hence be equal to $a$. Therefore we can remove $2$-cycles from the dimer quiver without changing the Jacobi algebra. We have drawn these $2$-cycles in grey.
After removing the $2$-cycles, the first two dimer quivers can be recognized as the conifold quiver (up to a Dehn twist of the torus), while the third is the dimer for local
${\blb P}_2$, which is in agreement with the shapes of the polygons in the subdivision.
\section{Introduction}
Ultrasound elastography (USE) is a technique for detecting
alterations in mechanical properties of tissue using ultrasound imaging, which is a widely available modality and offers the additional advantage of being non-invasive and low cost.
As such, USE may help in early diagnosis and improve the prognosis of treatments. In recent years, USE has been utilized in several
clinical applications including ablation guidance
and monitoring \cite{sigrist2017ultrasound}, differentiating
benign thyroid nodules from malignant ones \cite{hong2009real,trimboli2012ultrasound,samir2015shear} and breast lesion characterization \cite{hall2001vivo,doyley2001freehand,uniyal2015ultrasound}. Surgical treatment of liver cancer \cite{rivaz2014ultrasound,yang2014monitoring,liverlas}, assessment
of fibrosis in chronic liver diseases \cite{tang2015ultrasound, tsochatzis2011elastography},
detecting prostate cancer \cite{lorenz1999new, correas2013update}, differentiating
malignant from benign lymph nodes \cite{saftoiu2006endoscopic} and brain tumor
surgery \cite{selbekk2005strain, selbekk2010tissue} are other relevant clinical applications of USE.
Pathological alterations are correlated with the mechanical properties of tissue, and for a general material, 81 constants are required to describe the fourth-order stiffness tensor \cite{theoryofelasticity,ophir1999elastography}. In a valid stiffness tensor, many of these parameters depend on each other. Moreover, for most clinical applications, the tissue can be assumed linear elastic and isotropic, which reduces the number of independent parameters to 2 \cite{ophir1999elastography}. These two constants are known as the Lam\'e constants, or, equivalently, Young's modulus and Poisson's ratio.
Different methods have been proposed for estimating the Young's modulus and Poisson's ratio, which can be broadly grouped into dynamic and quasi-static elastography. Dynamic methods, such as shear wave imaging (SWI)~\cite{bercoff2004supersonic} and acoustic radiation force imaging (ARFI) \cite{nightingale2003shear,dumont2015robust}, use Acoustic Radiation Force (ARF) to stimulate displacement in tissue. Quasi-static elastography, proposed in \cite{ophir1991elastography}, uses external excitation by slowly pressing the probe against the tissue utilizing a robotic arm \cite{schneider2012remote,adebar2011robotic} or a hand-held probe (i.e., free-hand palpation) \cite{xia2014dynamic,hall2003vivo}. Even though the induced compression is uni-axial, tissue deforms in all directions due to its near-incompressibility, whereby its volume does not change when compressed. The first step of quasi-static elastography is time delay estimation (TDE), wherein the tissue deformation is estimated and differentiated to provide the strains. In the next step, the inverse problem is solved to estimate the Young's modulus based on the
strains \cite{skovoroda1995tissue,pan2014regularization}.
\begin{figure*}
\centering
\subfloat[SA]{{\includegraphics[height=5cm]{SA}}}
\quad
\subfloat[Line by line]{{\includegraphics[height=5cm]{line_per_line}}}
\quad
\subfloat[VSSA]{{\includegraphics[height=5cm]{transmit}}}
\caption{Schematics of different imaging modes. The red dashed lines show the beam pattern, while the gray areas show the regions over which the received data is focused. (a) shows SA imaging, in which a single element transmits a wave into the tissue. (b) shows line by line imaging, in which the transmitted beam narrows at the focal point, marked by a green circle. (c) shows VSSA, in which the transmission is similar to line by line imaging and the focal point is assumed to be a virtual source of transmission.}
\label{imaging}
\end{figure*}
Different methods have been proposed to perform TDE, which can be broadly categorized as window-based, regularized optimization-based, and deep-learning approaches \cite{tehrani2020displacement,tehrani2020semi}.
For estimating the displacement of a sample, window-based approaches consider a window around each sample and estimate the displacement of each window by finding the window closest in pixel values in the next frame. Several similarity metrics have been proposed to compare windows of the pre-compression and post-compression data, such as normalized cross correlation (NCC) of windows~\cite{varghese2000direct,mirzaei20203d,zahiri2006motion}, phase correlation, wherein the zero crossing of the phase determines the displacement \cite{yuan2015analytical}, and the sum of absolute differences of windows~\cite{chaturvedi1998testing}.
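To make the window-based approach concrete, the following sketch (a simplified illustration rather than any of the cited implementations; all function and variable names are ours) estimates the integer displacement of a single window by exhaustively maximizing NCC over a small search range:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_window(pre, post, top, left, h, w, search=5):
    """Integer (axial, lateral) displacement of the (h, w) window at
    (top, left) in `pre`, found by exhaustive NCC search in `post`."""
    ref = pre[top:top + h, left:left + w]
    best, best_d = -np.inf, (0, 0)
    for da in range(-search, search + 1):
        for dl in range(-search, search + 1):
            t, l = top + da, left + dl
            if t < 0 or l < 0 or t + h > post.shape[0] or l + w > post.shape[1]:
                continue
            score = ncc(ref, post[t:t + h, l:l + w])
            if score > best:
                best, best_d = score, (da, dl)
    return best_d
```

Note that OVERWIND replaces this exhaustive per-window search by dynamic programming for the integer step, which is far more efficient for whole-image estimation.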
Another class of TDE methods is regularized optimization-based techniques, which impose regularization between neighboring samples \cite{rivaz2011real,mirzaei2020accurate}. We have recently proposed a method called OVERWIND, which combines window-based and regularized optimization-based approaches to take advantage of both.
Despite the capability of OVERWIND in estimating both axial and lateral displacements, the latter is of lower quality than the former for three main reasons: low sampling rate, lack of a carrier signal and low resolution in the lateral direction \cite{luo2009effects,he2017performance}. One of the most widely used techniques for increasing the data size in the lateral direction is interpolation \cite{konofagou1998new,liu2017systematic}. Experimental results show that spline interpolation performs best among the different interpolation techniques \cite{luo2009effects}.
In these methods, the beam width should be large enough to provide sufficient overlap between adjacent lines \cite{konofagou1998new}. It is shown that the density of A-lines in the original data should be at least 2 A-lines per beam width for acceptable interpolation \cite{konofagou1998new}. Accordingly, not only does interpolation not change the resolution, it also cannot be applied to high-resolution data to increase the number of samples.
Besides low resolution, another disadvantage of interpolation, especially for large interpolation factors, is reduced robustness of the TDE, since interpolation can itself be a source of error \cite{ebbini2006phase}.
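As an illustration of the lateral upsampling discussed above, the following sketch inserts synthetic A-lines with SciPy's `CubicSpline`; the function name and the interpolation factor are our choices, not fixed by the cited works:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_lateral(rf, factor):
    """Insert (factor - 1) synthetic A-lines between each pair of measured
    A-lines by fitting a cubic spline along every depth row of `rf`."""
    m, n = rf.shape
    x = np.arange(n)                                  # measured A-line positions
    x_new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    out = np.empty((m, x_new.size))
    for i in range(m):
        out[i] = CubicSpline(x, rf[i])(x_new)
    return out
```

As stated above, this only increases the number of lateral samples; it does not add resolution.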
Synthetic Aperture (SA) imaging has been used for lateral strain estimation in \cite{korukonda2011estimating}.
SA has a narrow and fixed beam width throughout the field of view, in contrast to line by line imaging, which has a narrow beam width only in the focal zone. Moreover, SA is capable of increasing the sampling rate in the lateral direction without interpolation, which also improves the resolution.
It is shown that the accuracy of TDE increases as the beam width decreases \cite{korukonda2011estimating}. Accordingly, SA is better suited than line by line imaging for lateral elastography, with the disadvantage of lower transmission power and penetration depth, which could hinder its clinical use \cite{korukonda2013noninvasive,nayak2017principal}.
In this paper, we propose to use Virtual Source Synthetic Aperture (VSSA) imaging, which applies SA-based beamforming to focused transmissions. On the one hand, this enables us to benefit from the advantages of SA, such as high resolution and the capability to increase the lateral sampling frequency. On the other hand, we retain the high penetration depth of line by line imaging. The beamformed data is then fed to our recently published TDE method, OVERWIND \cite{overwind}, which has been shown to outperform window-based and other regularized optimization-based techniques. We refer to the results of OVERWIND on VSSA data with a high lateral sampling frequency as High Frequency OVERWIND (HF OVERWIND), and compare them with OVERWIND on spline-interpolated data (Inter. OVERWIND) and with OVERWIND on VSSA data with a low lateral sampling frequency, in which the number of A-lines equals the number of piezo-electric elements (LF OVERWIND).
\section{METHODS}
Most elastography techniques, like OVERWIND, require two sets of data
collected as the tissue undergoes some deformation. Let $I_{1}$ and $I_{2}$, of size $(m,n)$, be the beamformed RF data, where $m$ and $n$ are the depth and width of the imaged tissue. The goal of TDE is estimating the displacement field between these two data sets.
In this section, we first briefly review our
recently developed ultrasound elastography method, OVERWIND~\cite{overwind}, and then present the beamforming technique to increase the number of lines and resolution in the lateral direction to help OVERWIND in accurately estimating displacements.
\subsection{OVERWIND: tOtal Variation Regularization and WINDow-based time delay estimation}
The displacement estimation in OVERWIND comprises two steps for increasing the capabilities of the technique in estimating large deformations. In the first step, an integer estimation of the displacement is calculated using Dynamic Programming (DP), which is a recursive optimization based method for image registration. In this method, we consider a range of displacements for each sample and optimize the cost function that incorporates similarity of RF samples and displacement continuity to estimate integer displacement of RF samples \cite{rivaz2008ultrasound}. In the second step, the sub-sample displacements are calculated by minimizing the following cost function:
\begin{equation*}
\begin{array}{l}
C(\varDelta a_{1,1},\ldots,\varDelta l_{m,n})= \varSigma_{j=1}^{n}\varSigma_{i=1}^{m}
\bigg[\frac{1}{L}\varSigma_{k,r}\Big(I_1(i+k,j+r)\\ -I_2(.)-\varDelta a_{i,j} I'_{2a}(.)
-\varDelta l_{i,j} I'_{2l}(.)\Big)^2 \\
+\alpha_1 \delta_1(a_{i,j}+\varDelta a_{i,j}-a_{i-1,j}-\varDelta a_{i-1,j}-\varepsilon_a) \\
+ \alpha_2 \delta_2 (a_{i,j}+\varDelta a_{i,j}-a_{i,j-1}-\varDelta a_{i,j-1})\\
+ \beta_1 \delta_3 (l_{i,j}+\varDelta l_{i,j}-l_{i-1,j}-\varDelta l_{i-1,j}) \\
+ \beta_2 \delta_4 (l_{i,j}+\varDelta l_{i,j}-l_{i,j-1}-\varDelta l_{i,j-1}-\varepsilon_l)\bigg],
\end{array}
\label{costoverwind}
\end{equation*}
where $i$ and $j$ are indices of RF samples in the region of interest and the symbols $i+k$ and $j+r$ represent indices of RF samples inside the window that is considered around each sample. $\{a_{i,j}$, $\varDelta a_{i,j}\}$ and $\{l_{i,j}$,$\varDelta l_{i,j}\}$ represent the integer and sub-sample displacements of $(i,j)$ in the axial and lateral directions, respectively. $I_2(.)$ represents $I_2(i+k+a_{i,j},j+r+l_{i,j})$, and $I'_{2a}(.)$ and $I'_{2l}(.)$ are the derivatives of $I_2$ in the axial and lateral directions, respectively. $\delta_x(s)=2\lambda_x \sqrt{\lambda_x^2+s^2}$ is a smooth approximation of the L1 norm used for regularization, which allows sharp transitions, where $\lambda_x$ is a scaling parameter.
Finally, $\alpha_1, \alpha_2, \beta_1$ and $\beta_2$ are regularization parameters to be tuned. These four parameters can be related to each other, as explained in the Discussion Section.
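For reference, the smooth L1 surrogate $\delta_x(s)$ can be evaluated directly; in this one-line sketch, `lam` plays the role of $\lambda_x$:

```python
import numpy as np

def delta(s, lam):
    """Smooth L1 surrogate used in the regularization terms:
    delta(s) = 2 * lam * sqrt(lam**2 + s**2). It behaves like
    2 * lam * |s| for |s| >> lam but is differentiable at s = 0,
    which is what permits sharp transitions in the displacement field
    while keeping the cost function smooth."""
    return 2.0 * lam * np.sqrt(lam**2 + s**2)
```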
OVERWIND estimates displacements in both the axial and lateral directions, but the estimates in the former are more accurate since, among other reasons, ultrasound data usually has fewer samples in the lateral direction than in the axial direction. One of the most common techniques to cope with this issue is interpolating the data in the lateral direction. Not only does interpolation not change the resolution, but its performance can also deteriorate for high-resolution data sets, since the A-lines do not have high overlap with each other in such data \cite{luo2009fundamental,luo2009key}. Another disadvantage of interpolation is the low robustness of complex interpolation techniques.
In this paper, we propose to use the VSSA imaging mode for ultrasound elastography, which can increase the sampling frequency in the lateral direction to match that of the axial direction, and provides high lateral resolution while allowing high penetration. In the next subsections, we describe SA and line by line imaging, and then show how VSSA relaxes the two main limitations of lateral displacement estimation in ultrasound elastography.
\subsection{SA: Synthetic Aperture}
In SA, a single element transmits a wave through the tissue, as shown in Fig. \ref{imaging}(a), and all elements record the reflections. Each element generates an image of the tissue (the gray area of Fig. \ref{imaging}(a)) by focusing the received beam at any point according to the expression
\begin{equation}
t_p({ij})=\dfrac{\sqrt{(x_p-x_i)^2+(z_p)^2}+\sqrt{(x_p-x_j)^2+(z_p)^2}}{c}
\end{equation}
where $c$ is the speed of sound in soft tissue and $i$ and $j$ denote the transmitting and receiving elements. $x_i$, $x_j$ and $x_p$ are the horizontal positions of transmitter $i$, receiver $j$ and the point $p$ at which the beam is focused, and $z_p$ is the depth of point $p$, assuming the probe is at zero depth. During each transmission, all receivers focus the received data at all points of the image region, and the summation of these data over all receivers generates a low-resolution image. The next element of the array then transmits, and the operation described above is repeated to generate another low-resolution image. Repeating the experiment for all piezo-electric elements as transmitters and adding up all low-resolution images yields the final image according to the following expression
\begin{equation}
y_p=\sum\limits_{i=1}^e \sum\limits_{j=1}^e s_{ij}\big(t_p({ij})\big),
\end{equation}
where $e$ is the total number of elements in the transducer array and $s_{ij}(t)$ is the RF signal received by element $j$ after transmission by element $i$, evaluated at the focusing delay $t_p(ij)$.
The main disadvantage of this technique for ultrasound elastography is its limited penetration depth, since the signal emitted from a single piezo-electric element does not have enough power to reach deep regions.
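The two expressions above can be sketched as a minimal delay-and-sum beamformer (nearest-sample delays, no apodization or fractional-delay interpolation; the channel-data layout `rf[i, j, t]` and all names are assumptions of this sketch, not the acquisition format of any particular scanner):

```python
import numpy as np

def sa_delay(xp, zp, xi, xj, c=1540.0):
    """Round-trip time of flight from transmit element i to point p
    and back to receive element j, i.e. t_p(ij)."""
    return (np.hypot(xp - xi, zp) + np.hypot(xp - xj, zp)) / c

def sa_das_pixel(rf, elem_x, xp, zp, fs, c=1540.0):
    """Delay-and-sum value of one pixel from full SA channel data,
    where rf[i, j, t] is sample t received by element j after firing
    element i, sampled at rate fs."""
    e = len(elem_x)
    val = 0.0
    for i in range(e):
        for j in range(e):
            idx = int(round(sa_delay(xp, zp, elem_x[i], elem_x[j], c) * fs))
            if idx < rf.shape[2]:
                val += rf[i, j, idx]
    return val
```

A practical implementation would vectorize the double loop and interpolate the delayed samples, but the structure of the sum is the same.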
\begin{figure*}
\centering
\subfloat[Ground truth]{{\includegraphics[width=4cm]{sim_dis_lat_ground}}}
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{sim_dis_lat_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{sim_dis_lat_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{sim_dis_lat_my}}}
\subfloat{{\includegraphics[width=0.75cm]{sim_dis_lat_col}}}
\subfloat[Ground truth]{{\includegraphics[width=4cm]{sim_st_lat_ground}}}
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{sim_st_lat_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{sim_st_lat_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{sim_st_lat_my}}}
\subfloat{{\includegraphics[width=0.8cm]{sim_st_lat_col}}}
\caption{Estimated lateral displacements with LF OVERWIND, Inter. OVERWIND and HF OVERWIND for the simulation data are shown in (b)-(d), respectively. The second row shows the corresponding strains. The red and blue rectangles in (f) are considered as the target and background areas for CNR calculation. The white horizontal and green vertical lines are used for plotting the edge spread functions of Fig.~\ref{esf_lateral}.}
\label{sim_virtual_lateral}
\end{figure*}
\subsection{Line by Line Imaging}
In this imaging technique, a group of elements (i.e., the transmit aperture) transmits the beam to increase the penetration of the signal, allowing deeper areas to be imaged. The transmitted beam is focused at a single point during each transmission. The data received at the different channels is processed to generate one line of the US image. Fig. \ref{imaging}(b) shows the transmission pattern with dashed red lines, while the area imaged in each transmission is highlighted in gray. In this technique, the lateral resolution at the focal depth
is high and close to that of SA imaging. Away from this depth, the resolution decreases. Moreover, the number of lines is limited to the number of elements (without interpolation).
\subsection{VSSA: Virtual Source Synthetic Aperture}
This imaging technique benefits from both SA and line by line imaging: it can increase the sampling frequency and resolution in the lateral direction over the whole imaged area while penetrating to deep regions, both of which are essential for ultrasound elastography. Similar to line by line imaging, a group of elements transmits a beam focused at a single point \cite{frazier1998synthetic,passmann1996100}. Since the beam at the focal point is very narrow, the focal point can be regarded as a virtual source that transmits the beam similar to SA, as shown in Fig. \ref{imaging}(c). Each receiver then focuses the received data at any point inside the aperture according to the following expression \cite{bottenus2013synthetic}
\begin{equation}
t_p({fj})=\dfrac{z_f\pm \sqrt{(x_p-x_f)^2+(z_f-z_p)^2}+\sqrt{(x_j-x_p)^2+z_p^2}}{c}
\label{virtual_focus}
\end{equation}
where $c$ is the speed of sound in soft tissue, $(x_f, z_f)$ is the position of the focal point $f$, $(x_p, z_p)$ is the position of the point $p$ at which the beam is focused, and $x_j$ is the horizontal position of receiver $j$. The $\pm$ term in Eq. (\ref{virtual_focus}) divides the imaging area into the regions above and below the virtual source.
Similar to SA, the summation of the focused data over all receivers generates a low-resolution image; by switching the transmission, a new focal point becomes the virtual source and a new low-resolution image is generated at each step. As in SA, adding up these images yields a high-resolution image as
\begin{equation}
y_p=\sum\limits_{f=1}^e \sum\limits_{j=1}^e s_{fj}\big(t_p({fj})\big),
\end{equation}
where $s_{fj}(t)$ is the RF signal received by element $j$ for the transmission whose virtual source is $f$, evaluated at the focusing delay $t_p(fj)$.
Similar to line by line imaging, the number of A-lines in VSSA is usually equal to the number of elements, and interpolation is the most commonly used technique for increasing the lateral sampling frequency for elastography purposes. However, in VSSA the received data can be focused at any point inside the aperture (highlighted in gray in Fig. \ref{imaging}(c)). To increase the sampling frequency, we consider a grid whose nodes have the same spatial distance $p_d=\frac{c}{2f_s}$ in the axial and lateral directions. Each receiver element generates an image on that grid, as shown in Fig. \ref{imaging}(c). Therefore, we can increase the number of samples and the resolution in the lateral direction without interpolation.
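The virtual-source delay of Eq. (\ref{virtual_focus}) and the fine reconstruction grid with spacing $p_d=c/(2f_s)$ can be sketched as follows (the function names and the sign convention chosen for the $\pm$ term at $z_p=z_f$ are our own):

```python
import numpy as np

def vssa_delay(xp, zp, xf, zf, xj, c=1540.0):
    """Virtual-source time of flight t_p(fj): down to the focus, from the
    focus to point p (+ below the focus, - above it), then back to the
    receiver at lateral position xj."""
    sign = 1.0 if zp >= zf else -1.0
    return (zf + sign * np.hypot(xp - xf, zp - zf) + np.hypot(xj - xp, zp)) / c

def fine_grid(x0, x1, z0, z1, fs, c=1540.0):
    """Reconstruction grid with the same spacing p_d = c / (2 fs) in the
    axial and lateral directions, so the lateral sampling matches the
    axial one without interpolation."""
    pd = c / (2.0 * fs)
    return np.arange(x0, x1, pd), np.arange(z0, z1, pd)
```

Beamforming then proceeds as in the SA case, with the delayed channel samples summed at every node of this grid.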
\subsection{Data Acquisition and comparison metrics}
In this section, the data utilized in the different experiments of the paper are described, and the results of HF OVERWIND are compared with Inter. OVERWIND and with LF OVERWIND, wherein the number of A-lines in the lateral direction is equal to the number of piezo-electric elements. In the Results Section, the CNR metric is used to provide a quantitative assessment of the proposed method \cite{varghese1998analysis}
\begin{equation}
\textrm{CNR}=20\ \textrm{log}_{10}{\Big(\dfrac{2(\bar{s}_b-\bar{s}_t)^2}{\sigma_b^2+\sigma_t^2}\Big)},
\label{eq1}
\end{equation}
where $\bar{s}_t$ and $\bar{s}_b$ are the spatial strain averages of the target and
background, $\sigma_t^2$ and $\sigma_b^2$ are the spatial strain variances of the
target and background, respectively \cite{ophir1999elastography}.
For the simulation results where we know the ground truth, we use Root Mean Square Error (RMSE), Mean of estimation Error (ME) and Variance of estimation Error (VE) as other metrics according to
\begin{equation}
\begin{array}{c}
\textrm{RMSE\%}=100\times\dfrac{\sqrt{m\times n\times \sum\limits_{i=1}^m\sum\limits_{j=1}^n\big(S_e(i,j)-S_g(i,j)\big)^2}}{\sum\limits_{i=1}^m\sum\limits_{j=1}^nS_g(i,j)},\\
\textrm{ME}=\dfrac{\sum\limits_{i=1}^m\sum\limits_{j=1}^n\big(S_e(i,j)-S_g(i,j)\big)}{m\times n},\\
\textrm{VE}=\dfrac{\sum\limits_{i=1}^m\sum\limits_{j=1}^n\big(S_e(i,j)-S_g(i,j)\big)^2}{m\times n}-\textrm{ME}^2,
\end{array}
\label{eq2}
\end{equation}
where $m$ and $n$ are the dimensions of the estimated strains, and $S_e$ and $S_g$ are the estimated and ground truth strains, respectively.
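The four metrics above can be computed directly; the sketch below assumes NumPy arrays for the strain images and boolean masks for the target and background regions (names are ours):

```python
import numpy as np

def cnr_db(s, target_mask, background_mask):
    """CNR (in dB) of a strain image between target and background regions,
    per the CNR definition above."""
    t, b = s[target_mask], s[background_mask]
    return 20 * np.log10(2 * (b.mean() - t.mean())**2 / (b.var() + t.var()))

def error_metrics(s_est, s_gt):
    """RMSE%, ME and VE of an estimated strain against the ground truth,
    per the definitions above."""
    m, n = s_gt.shape
    err = s_est - s_gt
    rmse = 100 * np.sqrt(m * n * (err**2).sum()) / s_gt.sum()
    me = err.mean()
    ve = (err**2).mean() - me**2
    return rmse, me, ve
```

Note that the RMSE\% expression is equivalent to the root-mean-square error divided by the mean ground-truth strain, expressed as a percentage.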
\begin{figure}
\centering
\subfloat[horizontal]{{\includegraphics[width=4.2cm]{esf_lat_hor}}}
\subfloat[vertical]{{\includegraphics[width=4.2cm]{esf_lat_ver}}}
\caption{Edge spread functions of the lateral strain along the horizontal (a) and vertical (b) lines.}
\label{esf_lateral}
\end{figure}
\begin{figure*}
\centering
\subfloat[Ground truth]{{\includegraphics[width=4cm]{sim_dis_ax_ground}}}
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{sim_dis_ax_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{sim_dis_ax_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{sim_dis_ax_my}}}
\subfloat{{\includegraphics[width=0.68cm]{sim_dis_ax_col}}}
\subfloat[Ground truth]{{\includegraphics[width=4cm]{sim_st_ax_ground}}}
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{sim_st_ax_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{sim_st_ax_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{sim_st_ax_my}}}
\subfloat[]{{\includegraphics[width=0.86cm]{sim_st_ax_col}}}
\caption{Estimated axial displacements with LF OVERWIND, Inter. OVERWIND and HF OVERWIND for the simulation data are shown in (b)-(d), respectively. The second row shows the corresponding strains. The red and blue rectangles in (f) are considered as the target and background areas for CNR calculation. The white horizontal and green vertical lines are used for plotting the edge spread functions of Fig.~\ref{esf_axial}.}
\label{sim_virtual_axial}
\end{figure*}
For estimating the axial strain, the displacement field should be differentiated. To reduce the impact of noise during differentiation, it is common to use Least Squares Estimation (LSQ) for strain estimation. For estimating the strain at each sample, the neighboring samples in a window of size $\rho$ are considered and a line is fitted to their displacements. The slope of the line is taken as the strain at the middle sample. Considering more data points for the least squares fit makes the strain smoother at the cost of losing resolution. Throughout the paper, the size of the LSQ window is $5\%$ of the total data size.
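A minimal sketch of the LSQ strain estimation described above (window size in samples, assumed odd; boundary samples are left at zero, which is our simplification):

```python
import numpy as np

def lsq_strain(disp, win):
    """Axial strain by least-squares line fitting: for each sample, fit a
    line to the axial displacements of `win` neighbors on the same A-line
    and take its slope as the strain of the middle sample."""
    half = win // 2
    m, n = disp.shape
    strain = np.zeros_like(disp)
    z = np.arange(win) - half              # centered axial coordinates
    denom = (z * z).sum()
    for i in range(half, m - half):
        seg = disp[i - half:i + half + 1, :]
        # closed-form LSQ slope through (z, seg[:, j]) for every column j
        strain[i, :] = (z[:, None] * (seg - seg.mean(axis=0))).sum(axis=0) / denom
    return strain
```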
\begin{figure}
\centering
\subfloat[horizontal]{{\includegraphics[width=4cm]{esf_ax_hor}}}
\subfloat[vertical]{{\includegraphics[width=4cm]{esf_ax_ver}}}
\caption{Edge spread functions of the axial strain along the horizontal (a) and vertical (b) lines.}
\label{esf_axial}
\end{figure}
\subsection*{Simulation Data}
A simulated phantom is generated by utilizing the Field II ultrasound simulation software~\cite{fielii,139123} by randomly distributing
slightly more than $10$ scatterers per resolution cell
to satisfy the Rayleigh scattering regime.
The simulated phantom consists of a homogeneous region with a Young's modulus of $4\ kPa$ and one cylindrical inclusion with a Young's modulus of $40\ kPa$. For compressing the phantom and computing its ground truth displacement, Finite Element Method (FEM)-based deformations are computed using the ABAQUS software package (Johnston, RI, USA) with triangular mesh sizes of $0.05$ $\textrm{mm}^2$.
The probe consists of 128 elements with a pitch of $0.15$ mm. The center frequency is $7$ MHz, while the sampling rate is $100$ MHz. The lateral sampling frequency in HF OVERWIND is 19 times higher than in LF OVERWIND. Therefore, for Inter. OVERWIND, the data is interpolated by a factor of 19 using a cubic spline method, so that Inter. OVERWIND and HF OVERWIND have the same number of samples.
\begin{figure*}
\centering
\begin{tabular}{ccccc}
\subfloat[B-MODE]{{\includegraphics[width=4cm]{bmode}}}
&\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{ph_dis_lat_low}}}
&\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{ph_dis_lat_inter}}}
&\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{ph_dis_lat_my}}}
&\subfloat{{\includegraphics[width=0.64cm]{ph_dis_lat_col}}}\\
&\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{ph_st_lat_low}}}&
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{ph_st_lat_inter}}}&
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{ph_st_lat_my}}}&
\subfloat{{\includegraphics[width=0.74cm]{ph_st_lat_col}}}\\
\end{tabular}
\caption{Results on a tissue mimicking phantom. The B-mode image is shown in (a). Estimated lateral displacements with LF OVERWIND, Inter. OVERWIND and HF OVERWIND are shown in (b)-(d), respectively. The second row shows the corresponding strains. The red and blue rectangles in (a) are considered as the target and background areas for CNR calculation.}
\label{phantom_virtual_lateral}
\end{figure*}
\subsection*{Phantom Data}
The phantom data is acquired from a tissue mimicking breast phantom (059 tissue mimicking breast phantom, CIRS tissue simulation \& phantom technology, Norfolk, VA, USA) using an E-Cube R12 ultrasound machine (Alpinion, Bothell, WA, USA) with a L3-12H probe at a center frequency of $8$ MHz and a sampling frequency of $40$ MHz. The lateral sampling frequency in HF OVERWIND is 8 times higher than in LF OVERWIND; therefore, for Inter. OVERWIND, the data is interpolated by a factor of 8 using the cubic spline method.
\section{Results}
\subsection*{Simulation Results}
Fig. \ref{sim_virtual_lateral} shows the lateral displacement and strain for LF OVERWIND, Inter. OVERWIND and HF OVERWIND. It is clear that elastography on data sampled at a higher rate significantly improves the estimates. As anticipated, interpolation decreases the variance of the estimates significantly at the expense of over-smoothing compared to the low-sampled data. The RMSE, ME, VE and CNR values reported in Table \ref{sim_rmse_lateral} also corroborate the improvements in the lateral estimates. To provide a better comparison, we illustrate the Edge Spread Function (ESF) of the estimated strains across the vertical and horizontal lines shown in Fig.~\ref{sim_virtual_lateral} (a). As illustrated in Fig.~\ref{esf_lateral}, the ESF of HF OVERWIND is substantially closer to the ground truth than those of Inter. OVERWIND and LF OVERWIND.
\begin{table}
\begin{center}
\caption{Quantitative comparison of lateral strain estimation on simulated phantom.}
\label{sim_rmse_lateral}
\begin{tabular}[c]{cccc}
\hline
&LF&Inter.& HF\\
&OVERWIND&OVERWIND& OVERWIND\\
\hline
ME &$-1.1\times 10^{-3}$ & $1.6\times 10^{-3}$ &$4.33\times 10^{-4}$ \\
VE & $9.77\times 10^{-6}$&$2.18\times 10^{-6}$ & $6.21\times 10^{-7}$\\
RMSE& $85.04\%$& $56.73\%$ & $23.09\%$ \\
CNR & 0.37 & -1.49 & 21.70\\
\hline
\end{tabular}\\
\end{center}
\end{table}
Fig. \ref{sim_virtual_axial} also shows the axial displacement and axial strain for the laterally low-sampled, interpolated and laterally high-sampled data. As expected, correct lateral estimation leads to slightly improved axial estimates. Table \ref{sim_rmse_axial} and Fig. \ref{esf_axial} show a marginal improvement in the axial strain.
\begin{table}
\begin{center}
\caption{Quantitative comparison of axial strain estimation on simulated phantom.}
\label{sim_rmse_axial}
\begin{tabular}[c]{cccc}
\hline
&LF&Inter.& HF\\
&OVERWIND&OVERWIND& OVERWIND\\
\hline
ME &$-3.13\times 10^{-5}$ & $-3.93\times 10^{-5}$ & $-4.38\times 10^{-5}$\\
VE &$5.21\times 10^{-7}$ & $4.97\times 10^{-7}$ & $4.96\times 10^{-7}$ \\
RMSE& $7.99\%$ & $7.82\%$ & $7.81\%$ \\
CNR & $50.76$ & $52.56$ & $54.09$\\
\hline
\end{tabular}\\
\end{center}
\end{table}
\subsection*{Phantom Results}
The estimated lateral displacement and strain for the experimental phantom are shown in Fig. \ref{phantom_virtual_lateral}. Similar to the simulation study, the lateral estimate obtained with spline interpolation is over-smoothed, and HF OVERWIND outperforms the other methods. Fig. \ref{phantom_virtual_axial} shows the axial displacements and strains, illustrating the better performance of HF OVERWIND over both LF OVERWIND and Inter. OVERWIND. The CNR values reported in Table \ref{CNR_PHANTOM} also show improvement in both the lateral and axial estimates with HF OVERWIND.
\begin{table}
\begin{center}
\caption{CNR comparison of the different methods on the phantom experiment for the axial and lateral estimations.}
\label{CNR_PHANTOM}
\begin{tabular}[c]{lcc}
\hline
& \multicolumn{2}{c}{CNR} \\
\cline{2-3}
& Axial & Lateral \\
\hline
LF OVERWIND & 10.80 & -29.89 \\
Inter. OVERWIND & 10.28 & -1.55\\
HF OVERWIND &11.01 & 6.35\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{ph_dis_ax_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{ph_dis_ax_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{ph_dis_ax_my}}}
\subfloat{{\includegraphics[width=0.64cm]{ph_dis_ax_col}}}
\subfloat[LF OVERWIND]{{\includegraphics[width=4cm]{ph_st_ax_low}}}
\subfloat[Inter. OVERWIND]{{\includegraphics[width=4cm]{ph_st_ax_inter}}}
\subfloat[HF OVERWIND]{{\includegraphics[width=4cm]{ph_st_ax_my}}}
\subfloat{{\includegraphics[width=0.75cm]{ph_st_ax_col}}}
\caption{Results on a tissue mimicking phantom. Estimated axial displacements with LF OVERWIND, Inter. OVERWIND and HF OVERWIND are shown in (a)-(c), respectively. The second row shows the corresponding strains.}
\label{phantom_virtual_axial}
\end{figure*}
\section{Discussion}
In this paper, we proposed to use VSSA as an advanced beamforming technique for time delay estimation with OVERWIND. In this imaging technique, multiple piezo-electric elements participate in the transmission, which improves the beam strength in deep regions. The focused region of the beam can be considered a virtual element that transmits a beam into the tissue. As such, the imaging procedure becomes similar to synthetic aperture operation, and the received data can be beamformed as in synthetic aperture imaging. In this imaging mode, the sampling frequency in the lateral direction can be increased to match the axial sampling frequency, which increases the resolution and addresses two of the major limitations in estimating lateral displacements. Meanwhile, VSSA has a fixed and narrow beam width throughout the imaging field, which results in accurate and high-resolution displacement estimation.
Virtual source synthetic aperture treats the focal point as a beam source that transmits the signal, and conducts beamforming according to Eq. (\ref{virtual_focus}). The $\pm$ term in Eq. (\ref{virtual_focus}) divides the imaging area into top and bottom regions above and below the virtual source, resulting in a discontinuity at the focal depth as shown in Fig. \ref{virtual}. This discontinuity is a source of error for USE. Therefore, the focal point should be established in an area outside the region of interest,
and the corresponding data should be cropped before elastography to avoid this discontinuity.
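To make the sign convention concrete, the following is a minimal sketch of a virtual-source delay law of the kind described above; the function name, the single virtual source at $(x_f,z_f)$ and the scalar geometry are our own illustrative assumptions, not the exact form of Eq. (\ref{virtual_focus}).

```python
import math

def vssa_delay(x, z, xf, zf, c=1540.0):
    # Illustrative virtual-source time of flight through the virtual source
    # at (xf, zf): the +/- sign flips between the regions above and below
    # the focal depth, which is exactly what produces the discontinuity at
    # z = zf discussed in the text.
    sign = 1.0 if z >= zf else -1.0
    return (zf + sign * math.hypot(x - xf, z - zf)) / c
```

Evaluating the delay just above and just below `z = zf` exhibits the jump, which is why the focal point should be placed outside the region of interest.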
\begin{figure}[h!]
\centering
\subfloat[]{{\includegraphics[width=4cm]{virtual}}}
\caption{Discontinuity of VSSA imaging at the focal depth}
\label{virtual}
\end{figure}
In addition to the advantages of HF OVERWIND with respect to higher accuracy and increased resolution in its estimation, tuning the parameters in HF OVERWIND is much easier than in LF OVERWIND. The OVERWIND cost function has four regularization parameters, namely $\alpha_1, \alpha_2, \beta_1$ and $\beta_2$. The parameters $\alpha_1$ and $\beta_1$ regularize displacements of two neighboring samples in the same A-line in the axial and lateral directions, while $\alpha_2$ and $\beta_2$ regularize displacements of two neighboring samples at the same depth in neighboring A-lines. As a rule of thumb, assuming the Poisson's ratio of biological tissues is close to $0.5$, the lateral displacement between two samples is half of the axial displacement; therefore $\beta_1$ and $\beta_2$ can be adjusted as $\beta_1=0.5\,\alpha_1$ and $\beta_2=0.5\,\alpha_2$, reducing the number of parameters to two. In HF OVERWIND, the sample size in the axial and lateral directions is equal. Therefore $\alpha_2$ can be set equal to $\alpha_1$, reducing the number of parameters to one.
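The parameter reduction described above can be written in a few lines; this is our own sketch (the function name is not from the OVERWIND code), encoding the rule of thumb directly.

```python
def hf_overwind_weights(alpha1, nu=0.5):
    # Rule of thumb from the Discussion: with Poisson's ratio nu ~ 0.5,
    # the lateral regularization weights are half the axial ones
    # (beta_i = nu * alpha_i), and with equal axial/lateral sample size
    # (HF OVERWIND) we may also take alpha2 = alpha1, leaving a single
    # free parameter alpha1.
    alpha2 = alpha1
    beta1 = nu * alpha1
    beta2 = nu * alpha2
    return alpha1, alpha2, beta1, beta2
```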
VSSA, yielding a high sampling frequency and high resolution in the lateral direction, can be utilized with all elastography techniques and significantly improves their results. For window-based techniques such as NCC, it is important to note that they are slow and computationally expensive even for data with a low sampling frequency in the lateral direction. In window-based techniques, the running time grows linearly with the image size, which is a disadvantage of these techniques for VSSA data with a high sampling frequency in the lateral direction. Data with a high sampling frequency can also be utilized with other regularized optimization-based elastography techniques. However, over-smoothing is a challenging issue for regularized optimization-based techniques, due to the low resolution and the smaller deformation in the lateral direction, and it is shown in \cite{overwind} that OVERWIND has a better capability in estimating sharp transitions.
\section{Conclusion}
Accurate estimation of the tissue mechanical parameters requires accurate strain estimation in all directions.
Although elastography techniques estimate displacements in both axial and lateral directions, estimation in the axial direction is more accurate than in the lateral direction due to the high sampling frequency, the improved axial resolution and a carrier signal propagating in the axial direction.
In this paper we proposed to use the VSSA imaging mode to benefit from the advantages of both SA and line-by-line imaging, achieving high resolution with a high number of A-lines and good penetration depth. The beamformed data are fed to our recently developed TDE method, OVERWIND. The results exhibit significant improvement compared to interpolating the data in the lateral direction, one of the commonly used techniques for estimating lateral strain.
\section*{Acknowledgment}
We acknowledge the support of the Natural Sciences and Engineering
Research Council of Canada (NSERC) RGPIN-2020-04612 and RGPIN-2017-06629.
We thank Alpinion for technical support.
\FloatBarrier
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec_1}
We are concerned with blowup solutions to the semilinear heat equation.
\begin{equation}\label{eq1.1}
\begin{cases}
u_t = \Delta u+|u|^{p-1}u-|u|^{q-1}u
&
\text{in } \R^n\times(0,T),\\
u|_{t=0}=u_0(x)
&
\text{on } \R^n.
\end{cases}
\end{equation}
The exponents $p$ and $q$ are given by
\[
p=\tfrac{n+2}{n-2},
\qquad
q\in(0,1).
\]
From the sign of the last term $|u|^{q-1}u$,
\eqref{eq1.1} admits a unique local classical solution
for any bounded continuous initial data $u_0$.
If there exists $T>0$ such that $\limsup_{t\to T}\|u(t)\|_\infty=\infty$,
we say that the solution $u(x,t)$ blows up in finite time $T$.
Blowup solutions are often classified into two cases
in terms of the blowup rate.
As in the case of $u_t=\Delta u+|u|^{p-1}u$,
we define
\begin{align*}
\limsup_{t\to T}(T-t)^\frac{1}{p-1}\|u(t)\|_\infty
&<\infty
\qquad (\text{type I}),
\\
\limsup_{t\to T}(T-t)^\frac{1}{p-1}\|u(t)\|_\infty
&=\infty
\qquad (\text{type II}).
\end{align*}
The factor $(T-t)^{-\frac{1}{p-1}}$ in this definition
comes from
the blowup rate of ODE solutions of
$u_t=|u|^{p-1}u-|u|^{q-1}u$.
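For the reader's convenience, here is a quick numerical sanity check (our own sketch, not part of the paper's arguments): for the pure reaction ODE $u_t=u^p$, separation of variables gives $u(t)=((p-1)(T-t))^{-\frac{1}{p-1}}$, which realizes exactly the rate in the type I definition; near blowup the absorption term $u^q$ with $q<p$ is of lower order.

```python
def ode_blowup_profile(p, T, t):
    # Exact solution of u' = u^p (u > 0): u(t) = ((p-1)(T-t))^{-1/(p-1)},
    # blowing up at t = T with the type I rate (T-t)^{-1/(p-1)}.
    return ((p - 1) * (T - t)) ** (-1.0 / (p - 1))

def ode_residual(p, T, t, h=1e-6):
    # Central-difference check that u' - u^p vanishes along the profile.
    du = (ode_blowup_profile(p, T, t + h)
          - ode_blowup_profile(p, T, t - h)) / (2 * h)
    return du - ode_blowup_profile(p, T, t) ** p
```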
On the other hand,
by the presence of the absorption term $-|u|^{q-1}u$ with $q\in(0,1)$,
this equation has a finite time extinction property.
In fact,
if $|u_0(x)|<A<1$,
the solution $u(x,t)$ becomes identically zero at $t=T$,
where $T$ is a certain positive time.
By the uniqueness of solutions to \eqref{eq1.1},
the solution $u(x,t)$ must be zero for $t>T$.
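The extinction mechanism can already be seen at the ODE level (a sketch of ours, ignoring diffusion): $u_t=-u^q$ with $q\in(0,1)$ integrates to $u(t)=(u_0^{1-q}-(1-q)t)_+^{\frac{1}{1-q}}$, which reaches zero at the finite time $T=u_0^{1-q}/(1-q)$.

```python
def extinction_time(u0, q):
    # u' = -u^q separates as d(u^{1-q})/dt = -(1-q); u reaches zero at
    # T = u0^{1-q} / (1-q), finite precisely because q < 1.
    return u0 ** (1 - q) / (1 - q)

def ode_solution(u0, q, t):
    # Closed-form solution; identically zero after the extinction time.
    s = u0 ** (1 - q) - (1 - q) * t
    return s ** (1.0 / (1 - q)) if s > 0 else 0.0
```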
In this paper,
we try to understand the role of the extinction properties
in blowup phenomena.
Our motivation for this paper
comes from the work
by Le Coz, Martel and Rapha\"el \cite{Coz} (see also \cite{Matsui}).
They consider
\begin{equation}\label{eq1.2}
i\pa_tu
+
\Delta u
+
|u|^{p-1}u
+
\epsilon
|u|^{q-1}
u
=0
\end{equation}
with $p=1+\frac{4}{n}$, $\epsilon=\pm1$ and $q\in(1,p)$.
For the case $\epsilon=1$,
they show the existence of minimal blowup solutions of \eqref{eq1.2}
satisfying
\[
\|\nabla u(t)\|_2=c_q(T-t)^{-\sigma_q}
\]
with
$\sigma_q=\frac{4}{4+n(q-1)}$.
In \cite{Coz},
they mention that
this blowup speed $\sigma_q$ is discontinuous at $q=p$,
since the blowup speed for $q=p$ is given by
$\|\nabla u(t)\|_2=c(T-t)^{-1}$,
which corresponds to a standard minimal blowup solution of
$i\pa_tu+\Delta u+2|u|^{p-1}u=0$
(on the other hand, $\sigma_q$ is continuous at $q=1$).
The problem \eqref{eq1.1} is the heat equation version of \eqref{eq1.2}.
We are also interested in
the blowup dynamics of \eqref{eq1.1} and their dependence on $q$.
Since the range of $q$ in our case is quite different from their case,
blowup solutions of \eqref{eq1.1} are expected to exhibit
a new type of asymptotic formula.
The study of type II blowup solutions to the nonlinear heat equation
was initiated by Herrero and Vel\'azquez
\cite{HerreroV2,HerreroV3} (see also \cite{Mizoguchi}).
In the pioneering work \cite{HerreroV2,HerreroV3},
they construct positive radial type II blowup solutions to
the Fujita equation
\begin{equation}\label{eq1.3}
u_t=\Delta u+|u|^{p-1}u
\end{equation}
for the case $p>p_{\rm JL}$.
Here we do not give the definition of the exponent
$p_{\rm JL}$,
which is finite only for $n\geq11$ and satisfies $p_{\rm JL}>\frac{n+2}{n-2}$.
After their work,
Seki obtains
a new type of type II blowup solutions to \eqref{eq1.3}
for a certain range of $p\geq p_{\rm JL}$ \cite{Seki2, Seki3}.
For the critical case $p=\frac{n+2}{n-2}$,
Filippas, Herrero and Vel\'azquez \cite{Filippas}
find a quite different type of
type II blowup solutions to \eqref{eq1.3}.
They formally obtain type II blowup solutions
by using the matched asymptotic expansion technique.
The first rigorous proof for the existence of
type II blowup solutions to the critical problem
is given by Schweyer \cite{Schweyer}.
He constructs a type II blowup solution for $n=4$.
Recently
del Pino, Musso and Wei
\cite{del_Pino, del_Pino2, del_Pino3}
construct type II blowup solutions for the critical case.
They
develop a new method so-called the inner-outer gluing method
and obtain type II blowup solutions for the case $n=3,4,5$.
The author \cite{Harada,Harada_2}
also constructs type II blowup solutions
for the critical case with $n=5,6$
by applying their gluing method.
We now go back to \eqref{eq1.1}.
Since a blowup solution $u(x,t)$ behaves like
$u(x,t)\to\infty$ near the singular point,
the behavior of the solution near the singular point
is dominated by $u_t=\Delta u+|u|^{p-1}u$.
We recall that
a blowup solution of \eqref{eq1.3}
constructed in \cite{del_Pino, del_Pino2, del_Pino3}
behaves like
\[
u(x,t)
=
\lambda(t)^{-\frac{2}{p-1}}
{\sf Q}(y)
\qquad
\text{with }
x=\lambda(t) y
\]
near the singular point.
Here
${\sf Q}(y)$ is the standard Talenti function
and
$\lambda(t)$ is a scaling function satisfying $\lim_{t\to T}\lambda(t)=0$.
Under this assumption,
the solution is expected to satisfy
$\lim_{t\to T}u(x,t)=0$
for $|y|\to\infty$, $|x|\ll1$.
For such a region,
the solution is governed by
\begin{equation}\label{eq1.4}
u_t=\Delta u-|u|^{q-1}u.
\end{equation}
The asymptotic formula for solutions to \eqref{eq1.4}
satisfying $\lim_{t\to T}u(x,t)=0$ is well understood.
There are two possibilities.
\begin{enumerate}[(I)]
\item
One is given by
$u(x,t)=(1-q)^\frac{1}{1-q}(T-t)^\frac{1}{1-q}$,
\item
the other is more delicate and complicated;
it is a kind of type II behavior (see \cite{Guo,Seki}).
\end{enumerate}
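Possibility (I) can be checked in one line (our own sketch; the profile is spatially flat, so $\Delta u=0$ and $u_t=-u^q$ must hold exactly).

```python
def flat_profile(t, q=0.5, T=1.0):
    # The spatially homogeneous extinction profile of possibility (I):
    # u(t) = (1-q)^{1/(1-q)} (T-t)^{1/(1-q)}.
    return (1 - q) ** (1.0 / (1 - q)) * (T - t) ** (1.0 / (1 - q))

def flat_residual(t, q=0.5, T=1.0, h=1e-6):
    # u_t + u^q should vanish identically (Delta u = 0 for a flat profile).
    ut = (flat_profile(t + h, q, T) - flat_profile(t - h, q, T)) / (2 * h)
    return ut + flat_profile(t, q, T) ** q
```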
From this observation,
we look for solutions of the form
\begin{align}\label{eq1.5}
u(x,t)=
\begin{cases}
\lambda(t)^{-\frac{2}{p-1}}
{\sf Q}(y)
&
\text{near the singular point},
\\
\text{(I) or (II)}
&
\text{for }
|y|\to\infty,\
|x|\ll1.
\end{cases}
\end{align}
Therefore
our problem is reduced to the following question:
``can we construct blowup solutions of \eqref{eq1.1}
by connecting
a specific blowup solution of \eqref{eq1.3}
and
a specific solution of \eqref{eq1.4} satisfying $\lim_{t\to T}u(x,t)=0$?''
This question is a concrete motivation of this paper,
which seems to provide a new perspective on blowup problems.
This paper gives an affirmative answer to this question.
In fact,
such a solution will be constructed
by the addition of several correction terms.
\section{Main result}
\label{sec_2}
To state our main theorem,
we briefly prepare several notations.
Let ${\sf Q}(y)$ be the Talenti solution
defined by
\[
{\sf Q}(y)
=
\left( 1+\tfrac{1}{n(n-2)}|y|^2 \right)^{-\frac{n-2}{2}}.
\]
This gives a positive solution of
$\Delta_y{\sf Q}+{\sf Q}^p=0$.
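One can verify numerically that this explicit profile solves the radial form of $\Delta_y{\sf Q}+{\sf Q}^p=0$, namely ${\sf Q}''+\frac{n-1}{r}{\sf Q}'+{\sf Q}^p=0$ with $p=\frac{n+2}{n-2}$; the following finite-difference sketch is only an illustration, not part of the proofs.

```python
def talenti_Q(r, n=5):
    # Q(y) = (1 + |y|^2/(n(n-2)))^{-(n-2)/2}.
    return (1.0 + r * r / (n * (n - 2))) ** (-(n - 2) / 2.0)

def talenti_residual(r, n=5, h=1e-4):
    # Residual of Q'' + (n-1)/r Q' + Q^p with p = (n+2)/(n-2);
    # central differences, so the residual should be O(h^2).
    p = (n + 2) / (n - 2)
    d1 = (talenti_Q(r + h, n) - talenti_Q(r - h, n)) / (2 * h)
    d2 = (talenti_Q(r + h, n) - 2 * talenti_Q(r, n)
          + talenti_Q(r - h, n)) / (h * h)
    return d2 + (n - 1) / r * d1 + talenti_Q(r, n) ** p
```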
Furthermore
let
${\sf U}(\xi)$
be a positive radially symmetric solution of
$\Delta_\xi{\sf U}-{\sf U}^q=0$,
and
let
${\sf U}_\infty(x)$
be a nonnegative radially symmetric solution of
$\Delta_x{\sf U}-{\sf U}^q=0$
with ${\sf U}_\infty|_{x=0}=0$.
Let us consider a linearized problem around ${\sf U}_\infty(x)$.
\begin{equation}\label{eq2.1}
\pa_tw
=
\Delta_x
w
-
q{\sf U}_\infty^{q-1}
w.
\end{equation}
Fix $J\in\N$
and
let $\Theta_J(x,t)$ be the explicit solution of \eqref{eq2.1}
defined in Section \ref{sec_4.2}.
Furthermore
we introduce
\begin{itemize}
\item
$\eta(t)=(T-t)^{J(\frac{2}{1-q}-\gamma)^{-1}}$,
\item
$y=\lambda(t)^{-1}x$,
\item
$\xi=\eta(t)^{-1}x$,
\item
$z=(T-t)^{-\frac{1}{2}}x$.
\end{itemize}
The cut-off functions $\chi_i$ ($i=1,2,3,4$) are defined in Section \ref{sec_5.1}.
\begin{thm}\label{Thm1}
Let $n=5$ and $J\in\N$.
There exist $T>0$ and a radially symmetric solution
$u(x,t)\in C(\R^5\times[0,T))\cap C^{2,1}(\R^5\times(0,T))$
of \eqref{eq1.1} satisfying the following properties\,{\rm$:$}
\begin{enumerate}[\rm(i)]
\item the solution $u(x,t)$ is written as
\begin{align*}
u(x,t)
&=
\dis
\lambda^{-\frac{n-2}{2}}
{\sf Q}(y)
\chi_2
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1(y)
\chi_1
\\
& \quad
-
(
\eta^\frac{2}{1-q}{\sf U}(\xi)
\chi_2
+
{\sf U}_\infty(x)
(1-\chi_2)
\chi_4
)
(1-\chi_1)
\\
& \quad
-
(
\theta(x,t)
+
\Theta_J(x,t)
)
(1-\chi_2)
\chi_3
+
u_1(x,t),
\end{align*}
where $T_1(y)$ is a bounded smooth function defined in Section
{\rm\ref{sec_4.1}},
$\theta(x,t)$ is a correction term defined in Section {\rm\ref{sec_4.5}}
and
$u_1(x,t)$ is a remainder term,
\item
$\lambda(t)=(1+o(1))(T-t)^{\frac{2}{6-n}\Gamma_J}$
with
$\Gamma_J=
(\frac{2J}{1-q}+\frac{2}{1-q}-\gamma)
(\frac{2}{1-q}-\gamma)^{-1}$,
\item
$\sigma(t)=(1+o(1))\lambda\dot\lambda$,
\item
there exist
${\sf d}_1,{\sf c}_1>0$
and
$\kappa>2$
such that
\begin{align*}
|u_1(x,t)|
&<
\begin{cases}
(T-t)^{\frac{1}{2}{\sf d}_1}
\lambda(t)^{-\frac{n-2}{2}}
\sigma
& {\rm for}\ |y|<1,
\\[1mm] \dis
(T-t)^{{\sf d}_1}
\eta(t)^\frac{2}{1-q}
{\sf U}(\xi)
& {\rm for}\ |y|>1,\ |\xi|<1,
\\
(T-t)^{{\sf d}_1}
(1+|z|^2)^{\frac{3}{2}{\sf d}_1}
\Theta_J(x,t)
& {\rm for}\ |\xi|>1,\ |z|<(T-t)^{-\frac{1}{\kappa}},
\\
\tfrac{1}{8}
{\sf U}_\infty(x)
&
{\rm for}\ |z|>(T-t)^{-\frac{1}{\kappa}},\ |x|<1,
\\
{\sf c}_1
&
{\rm for}\ |x|>1.
\end{cases}
\end{align*}
\end{enumerate}
\end{thm}
\begin{rem}
\label{Rem2.1}
The blowup rate of this solution is given by
\[
\|u(t)\|_\infty
=
\|\lambda(t)^{-\frac{n-2}{2}}{\sf Q}(y)\|_\infty
=
\lambda(t)^{-\frac{n-2}{2}}
=
(T-t)^{-\frac{n-2}{6-n}\Gamma_J}
\]
with
$\Gamma_J=
(\frac{2J}{1-q}+\frac{2}{1-q}-\gamma)
(\frac{2}{1-q}-\gamma)^{-1}$.
Since $0<\frac{2}{1-q}-\gamma<2$ {\rm(}see {\rm\eqref{eq4.9}}{\rm)},
the blowup rate
$\sigma_q=\frac{n-2}{6-n}\Gamma_J$
diverges to infinity as $q\to 1$
for any $J\in\N$.
This discontinuity phenomenon at $q=1$
is the same as that of the problem \eqref{eq1.2}
discussed in {\rm\cite{Coz}}.
\end{rem}
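The divergence of the blowup rate claimed in Remark \ref{Rem2.1} can be checked numerically; the sketch below is ours and only uses \eqref{eq4.8}, the quadratic $\gamma(\gamma+n-2)=qL_1^{q-1}$ defining $\gamma$, and the formula for $\Gamma_J$.

```python
import math

def blowup_exponent(q, J=1, n=5):
    # sigma_q = (n-2)/(6-n) * Gamma_J with
    # Gamma_J = (2J/(1-q) + 2/(1-q) - gamma) / (2/(1-q) - gamma),
    # where gamma is the positive root of
    # gamma(gamma + n - 2) = q * beta0 * (beta0 + n - 2), beta0 = 2/(1-q).
    beta0 = 2.0 / (1.0 - q)
    rhs = q * beta0 * (beta0 + n - 2)
    gamma = (-(n - 2) + math.sqrt((n - 2) ** 2 + 4.0 * rhs)) / 2.0
    Gamma_J = (2.0 * J / (1.0 - q) + beta0 - gamma) / (beta0 - gamma)
    return (n - 2) / (6 - n) * Gamma_J
```

Since $\Gamma_J>1$, already for $J=1$ the exponent exceeds $3$ (the rate of the solutions in Remark \ref{Rem2.3} (iii)) and grows without bound as $q\to1$.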
\begin{rem}\label{Rem2.2}
Since the solution in Theorem {\rm\ref{Thm1}}
satisfies $\lim_{t\to T}u(x,t)=0$ in the region $|z|\sim1$,
the solution is approximately dominated by $u_t=\Delta u-|u|^{q-1}u$
in this region.
However,
it can be seen that the effect of $|u|^{p-1}u$ is not so small
in this region.
Therefore
the correction term $\theta(x,t)$ is needed to compensate for this effect.
\end{rem}
\begin{rem}\label{Rem2.3}
We can expect that
three types of type II blowup solutions exist in \eqref{eq1.1}.
\begin{enumerate}[\rm(i)]
\item
Blowup solutions described in Theorem {\rm\ref{Thm1}}
give the first one,
which corresponds to {\rm (II)} in \eqref{eq1.5}.
\item
The second one is obtained from {\rm (I)} in \eqref{eq1.5}.
A formal computation will be given in Section {\rm\ref{sec_4}}.
Since its asymptotic formula is simpler,
we omit a rigorous proof.
\item
The third one is given by
blowup solutions of \eqref{eq1.3} obtained in {\rm\cite{Filippas,del_Pino}}.
We recall from {\rm\cite{Filippas,del_Pino,Harada}} that
\eqref{eq1.3} with $n=5$
admits infinitely many blowup solutions $\{u_k(x,t)\}_{k\in\N}$ satisfying
\begin{align*}
\|u_k(t)\|_\infty
=
(T-t)^{-3k}
\qquad
(k\in\N).
\end{align*}
We can verify that
the additional term $-|u|^{q-1}u$ in \eqref{eq1.1}
does not have much effect on $u_k(x,t)$
only for the case $k=1$
{\rm(}see the asymptotic formula for $u_k(x,t)$ in {\rm\cite{Filippas}}
and the proof in {\rm\cite{del_Pino,Harada}}{\rm)}.
Hence
we can construct a blowup solution of \eqref{eq1.1}
that asymptotically behaves like $u_k(x,t)$ with $k=1$.
\end{enumerate}
\end{rem}
\begin{rem}\label{Rem2.4}
From the asymptotic formula for blowup solutions to \eqref{eq1.3}
{\rm(}see {\rm\cite{Filippas}}{\rm)},
we believe that
a type II blowup exists only for the case $n=5$.
\end{rem}
\begin{rem}\label{Rem2.5}
For the case $1\leq q<p$,
\eqref{eq1.1} is similar to \eqref{eq1.2}
discussed in {\rm\cite{Coz}}.
When $q$ is close to $1$,
\eqref{eq1.1} admits
blowup solutions
with the same asymptotic form as
$u_k(x,t)$ {\rm(}$k\in\N${\rm)}
described in Remark {\rm\ref{Rem2.3}} {\rm(iii)}
{\rm(}see proof in {\rm\cite{del_Pino,Harada}}{\rm)}.
However,
it is not clear what happens when $q\to p$.
\end{rem}
We explain the strategy of the proof.
Our proof is a combination of
the matched asymptotic expansion technique
(see {\rm\cite{Filippas,Seki}})
and
the inner-outer gluing method developed in
{\rm\cite{Cortazar,del_Pino,del_Pino2,del_Pino3}}.
We divide the whole space $x\in\R^n$ into four parts.
\begin{enumerate}[\rm(i)]
\item inner region $|y|\sim1$\quad ($x=\lambda(t)y$),
\item semiinner region $|\xi|\sim1$ \quad ($x=\eta(t)\xi$),
\item selfsimilar region $|z|\sim1$ \quad ($x=z\sqrt{T-t}$),
\item outer region $|x|\sim1$,
\end{enumerate}
where $0<\lambda(t)<\eta(t)<\sqrt{T-t}$.
We first construct approximate solutions in each region separately.
In the region (i),
\eqref{eq1.1} is approximately written as
$u_t=\Delta u+|u|^{p-1}u$.
Hence
we can use
$u(x,t)=\lambda(t)^{-\frac{n-2}{2}}{\sf Q}(y)
+\lambda(t)^{-\frac{n-2}{2}}\sigma(t)T_1(y)$
as an approximate solution in this region,
which is the same one as in \cite{Filippas,del_Pino,Harada}
(Section \ref{sec_4.1}).
In the regions (ii)-(iii),
we first look for solutions of $u_t=\Delta u-|u|^{q-1}u$.
The function
$u(x,t)=\eta(t)^\frac{2}{1-q}{\sf U}(\xi)$ gives an approximate solution in (ii)
(Section \ref{sec_4.2}).
This solution is borrowed from \cite{Seki}.
In the region (iii),
we need a correction term $\theta(x,t)$.
We will see that
$u(x,t)={\sf U}_\infty(x)+\theta(x,t)+\Theta_J(x,t)$
gives an appropriate approximate solution in this region
(Section \ref{sec_4.2}, Section \ref{sec_4.5}).
The region (iv) is not important in our analysis.
We next investigate
two matching conditions
between (i) and (ii),
and between (ii) and (iii)
to connect each solution obtained in the first step
(Section \ref{sec_4.3} - Section \ref{sec_4.4}).
This procedure determines $\lambda(t)$ and $\eta(t)$.
We finally construct blowup solutions
near the approximate solutions obtained in the first and the second step
by a fixed point argument (Section \ref{sec_5}).
To investigate the behavior of solutions more precisely,
we introduce a system of three parabolic equations
derived from \eqref{eq1.1}.
These three equations correspond to (i)-(iii), respectively.
The first equation is treated in the same way as in \cite{Cortazar,del_Pino}
(Section \ref{sec_6}),
and the last two equations are analyzed based on \cite{Seki}
(Section \ref{sec_7}, Section \ref{sec_8} respectively).
Although our argument is not new,
there are two remarks on the analysis of these three parabolic equations.
\begin{enumerate}
\item
Due to Lemma \ref{Lem3.3},
our argument in the first equation
slightly simplifies that of \cite{Cortazar,del_Pino,Harada}
(Section \ref{sec_6.2}).
\item
Unlike the setting in \cite{Seki},
we introduce the second equation corresponding to (ii).
This provides a simpler alternative proof of \cite{Seki}.
In fact,
our argument does not require
the explicit form of the heat kernel of
$w_t=\Delta w-q{\sf U}_\infty^{q-1}w$
(Section \ref{sec_8}).
\end{enumerate}
\section{Preliminaries}
\label{sec_3}
\subsection{Linearized problem around the ground state ${\sf Q}(y)$
and its eigenvalue problem}
\label{sec_3.1}
To describe the asymptotic behavior of solutions around
the ground state ${\sf Q}(y)={\sf Q}_\lambda(y)|_{\lambda=1}$,
we study the linearized problem.
\[
\epsilon_t=H_y\epsilon,
\]
where the operator $H_y$ is defined by
\[
H_y=\Delta_y+V(y),
\qquad
V(y)=f'({\sf Q}(y))=p{\sf Q}(y)^{p-1}.
\]
Following the idea in \cite{Cortazar},
we consider this problem on $B_R$ instead of $\R^n$.
Solutions of $\epsilon_t=H_y\epsilon$ are completely described by
the following eigenvalue problems.
\begin{equation}\label{eq3.1}
\begin{cases}
-H_y\psi=\mu\psi & \text{in } B_R,
\\
\psi=0 & \text{on } \pa B_R,
\\
\psi \text{ is radially symmetric}.
\end{cases}
\end{equation}
We denote the $i$th eigenvalue of \eqref{eq3.1} by $\mu_i^{(R)}$
and the associated eigenfunction by $\psi_i^{(R)}$.
We normalize $\psi_i^{(R)}(r)$ as $\psi_i^{(R)}(0)=1$.
We recall two lemmas obtained in \cite{Cortazar} and \cite{Harada}.
\begin{lem}[Lemma 3.1-Lemma 3.2 \cite{Harada}]
\label{Lem3.1}
There exists $c>0$ such that
if $R>1$,
then
\begin{align*}
0
<
\psi_1^{(R)}(r)
&<
c
\left( 1+r \right)^{-\frac{n-1}{2}}
e^{-\sqrt{|\mu_1|}\,r}
\quad\text{\rm for } r\in(0,R),
\\
|\psi_2^{(R)}(r)|
&<
c
\left( 1+r \right)^{-(n-2)}
\qquad\text{\rm for } r\in(0,R).
\end{align*}
The constant $\mu_1<0$ is
the first eigenvalue of \eqref{eq3.1}
on $\R^n$.
\end{lem}
\begin{lem}[Lemma 7.2 \cite{Cortazar}, Lemma 3.3 \cite{Harada}]
\label{Lem3.2}
Let $n\geq5$.
Then
\begin{itemize}
\item
$\dis\lim_{R\to\infty}\mu_1^{(R)}=\mu_1<0$ and
\item
there exists $c>0$ such that $\mu_2^{(R)}>cR^{-(n-2)}$ for $R>1$.
\end{itemize}
\end{lem}
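The sign of $\mu_1$ in Lemma \ref{Lem3.2} has a simple variational explanation: since $H_y{\sf Q}=(p-1){\sf Q}^p$, taking ${\sf Q}$ itself as a trial function gives the Rayleigh quotient $-(p-1)\int{\sf Q}^{p+1}/\int{\sf Q}^2<0$ for $-H_y$, an upper bound for $\mu_1$. The quadrature sketch below is our own illustration, not part of the proofs (both radial integrals converge for $n=5$).

```python
def mu1_upper_bound(n=5, R=60.0, m=60000):
    # Rayleigh quotient of -H = -(Delta + p Q^{p-1}) at the trial function
    # Q: since H Q = (p-1) Q^p, the quotient equals
    # -(p-1) * int Q^{p+1} r^{n-1} dr / int Q^2 r^{n-1} dr  (midpoint rule),
    # which is negative, consistent with mu_1 < 0.
    p = (n + 2) / (n - 2)
    h = R / m
    num = den = 0.0
    for i in range(m):
        r = (i + 0.5) * h
        Q = (1.0 + r * r / (n * (n - 2))) ** (-(n - 2) / 2.0)
        w = r ** (n - 1) * h
        num += Q ** (p + 1) * w
        den += Q ** 2 * w
    return -(p - 1) * num / den
```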
We here provide information on the third eigenvalue of \eqref{eq3.1}.
This lemma simplifies arguments in Section \ref{sec_6}.
\begin{lem}
\label{Lem3.3}
Let $n\geq5$.
There exists $c>0$ such that
$\mu_3^{(R)}>cR^{-\frac{n}{2}}$ for $R>1$.
\end{lem}
\begin{proof}
We note that $Z_1(r)={\Lambda_y}{\sf Q}(r)$ gives a radial solution of
$H_yZ=0$ on $\R^n$.
Let $Z_2(r)=\Gamma(r)$ be another independent radial solution of $H_yZ=0$ on $\R^n$.
From a direct computation,
we see that
\begin{align}
\label{eq3.2}
\begin{cases}
Z_2(r)
=
a_1r^{-(n-2)}
+
o(r^{-(n-2)})
\qquad
\text{as } r\to0,
\\
Z_2(r)
=
a_2
+
o(1)
\qquad
\text{as } r\to\infty
\end{cases}
\end{align}
for some $a_1,a_2\not=0$.
It is known that
any radial solution of
the inhomogeneous problem $H_yu=f$
can be represented
by $Z_1,Z_2,f$
(see proof of Lemma 7.2 in \cite{Cortazar}).
From this formula,
$\psi_3^{(R)}(r)$ is written as
\begin{align}
\label{eq3.3}
\psi_3^{(R)}(r)
&=
k\mu_3^{(R)}Z_2(r)\int_0^r\psi_3^{(R)}Z_1r_1^{n-1}dr_1
+
k\mu_3^{(R)}Z_1(r)\int_r^R\psi_3^{(R)}Z_2r_1^{n-1}dr_1
\nonumber
\\
&\qquad
-
k\mu_3^{(R)}
\frac{Z_2(R)}{Z_1(R)}
Z_1(r)
\int_0^R\psi_3^{(R)}Z_1r^{n-1}dr.
\end{align}
The constant $k$ depends only on $n$.
To estimate $\mu_3^{(R)}$,
we compute $L^2$ norm of both sides on \eqref{eq3.3}.
Throughout this proof,
we denote by $c$ a general positive constant
which depends only on $n$.
For simplicity,
we write
\[
\|\psi\|_{L_{\text{rad}}^2}^2
=
\int_0^R
|\psi(r)|^2
r^{n-1}
dr.
\]
We first compute the first term of \eqref{eq3.3}.
We note that
$Z_1(r)\in L_{\text{rad}}^2(\R^n)$ if $n\geq5$
and
$\|Z_2\|_{L_\text{rad}^2(1<r<R)}\lesssim R^\frac{n}{2}$
(see \eqref{eq3.2}).
Hence
we get
\begin{align*}
\|
Z_2(r)
\int_0^r
&
\psi_3^{(R)}Z_1r_1^{n-1}dr_1
\|_{L_{\text{rad}}^2}
\leq
\|
\cdots
\|_{L_{\text{rad}}^2(r<1)}
+
\|
\cdots
\|_{L_{\text{rad}}^2(1<r<R)}
\\
&\leq
\|\psi_3^{(R)}Z_1\|_{L_{\text{rad}}^\infty(r<1)}
\|Z_2r^n\|_{L_{\text{rad}}^2(r<1)}
+
\|Z_2\|_{L_{\text{rad}}^2(1<r<R)}
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}
\|Z_1\|_{L_{\text{rad}}^2}
\\
&\leq
c
(
\|\psi_3^{(R)}\|_{L_{\text{rad}}^\infty}
+
R^\frac{n}{2}
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}
).
\end{align*}
From the order property of eigenvalues,
$0<\mu_3^{(R_2)}<\mu_3^{(R_1)}$ for $R_1<R_2$,
and
local parabolic estimates,
it holds that
$\|\psi_3^{(R)}\|_{L_{\text{rad}}^\infty}
<c\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}$.
Therefore
it follows that
\begin{align}\label{eq3.4}
\|
Z_2(r)
\int_0^r
\psi_3^{(R)}Z_1r_1^{n-1}dr_1
\|_{L_{\text{rad}}^2}
\leq
c
R^\frac{n}{2}
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}.
\end{align}
The second term of \eqref{eq3.3} can be computed in the same manner.
\begin{align}\label{eq3.5}
\|
Z_1(r)
\int_r^R
\psi_3^{(R)}Z_2r_1^{n-1}dr_1
\|_{L_{\text{rad}}^2}
&\leq
\|Z_1\|_{L_{\text{rad}}^2}
(
\int_0^1
|\psi_3^{(R)}Z_2|r_1^{n-1}dr_1
+
\int_1^R
|\psi_3^{(R)}Z_2|r_1^{n-1}dr_1
)
\nonumber
\\
&\leq
c
R^2
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}.
\end{align}
Since $(\psi_2^{(R)},\psi_3^{(R)})_{L_y^2(B_R)}=0$,
the last term of \eqref{eq3.3} is rewritten as
\begin{align}\label{eq3.6}
\int_0^R
\psi_3^{(R)}
Z_1r^{n-1}
dr
=
(\psi_3^{(R)},Z_1)_{L_{\text{rad}}^2}
=
(\psi_3^{(R)},Z_1-\alpha\psi_2^{(R)})_{L_{\text{rad}}^2}
\end{align}
for any $\alpha\in\R$.
We here choose
$\alpha=
(k\mu_2^{(R)}\frac{Z_2(R)}{Z_1(R)}(\psi_2^{(R)},Z_1)_{L_{\text{rad}}^2})^{-1}$.
We express $\psi_2^{(R)}(r)$
in the same form as \eqref{eq3.3}.
\begin{align*}
\nonumber
Z_1(r)
&
-
\alpha
\psi_2^{(R)}(r)
\\
&=
Z_1(r)
-
\alpha
k\mu_2^{(R)}Z_2(r)\int_0^r\psi_2^{(R)}Z_1r^{n-1}dr
-
\alpha
k\mu_2^{(R)}Z_1(r)\int_r^R\psi_2^{(R)}Z_2 r^{n-1}dr
\nonumber
\\
\nonumber
&\quad
+
\alpha
k\mu_2^{(R)}
\tfrac{Z_2(R)}{Z_1(R)}
Z_1(r)
(\psi_2^{(R)},Z_1)_{L_{\text{rad}}^2}
\\
&=
-
\alpha
k\mu_2^{(R)}Z_2(r)\int_0^r\psi_2^{(R)}Z_1r^{n-1}dr
-
\alpha
k\mu_2^{(R)}Z_1(r)\int_r^R\psi_2^{(R)}Z_2 r^{n-1}dr.
\end{align*}
We can estimate
two integrals on the right-hand side
in the same way as \eqref{eq3.4} - \eqref{eq3.5}.
\begin{align*}
\|
Z_2(r)
\int_0^r
\psi_2^{(R)}Z_1r_1^{n-1}dr_1
\|_{L_{\text{rad}}^2}
+
\|
Z_1(r)
\int_r^R
\psi_2^{(R)}Z_2r_1^{n-1}dr_1
\|_{L_{\text{rad}}^2}
\leq
c
(R^\frac{n}{2}+R^2)
\|\psi_2^{(R)}\|_{L_{\text{rad}}^2}.
\end{align*}
Therefore
we derive a better estimate of \eqref{eq3.6}.
\begin{align}\label{eq3.7}
(\psi_3^{(R)},Z_1-\alpha\psi_2^{(R)})_{L_{\text{rad}}^2}
\leq
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}
\|Z_1-\alpha\psi_2^{(R)}\|_{L_{\text{rad}}^2}
\leq
c
\alpha
\mu_2^{(R)}
R^\frac{n}{2}
\|\psi_3^{(R)}\|_{L_{\text{rad}}^2}.
\end{align}
We note from
Lemma \ref{Lem3.1} and \eqref{eq3.2}
that
\[
\alpha
=
(
k\mu_2^{(R)}
\tfrac{Z_2(R)}{Z_1(R)}(\psi_2^{(R)},Z_1)_{L_{\text{rad}}^2}
)^{-1}
\lesssim
(\mu_2^{(R)})^{-1}
\tfrac{Z_1(R)}{Z_2(R)}
\lesssim
R^{n-2}
\cdot
R^{-(n-2)}
=1.
\]
Therefore,
taking the $L^2$ norm of both sides of \eqref{eq3.3}
and
combining \eqref{eq3.4}-\eqref{eq3.5} with \eqref{eq3.7},
we conclude
\begin{align*}
\|\psi_3^{(R)}\|_{L_\text{rad}^2}
\leq
c
\mu_3^{(R)}
R^\frac{n}{2}
\|
\psi_3^{(R)}
\|_{L_{\text{rad}}^2}.
\end{align*}
Dividing both sides by $\|\psi_3^{(R)}\|_{L_\text{rad}^2}$ yields $\mu_3^{(R)}\geq c^{-1}R^{-\frac{n}{2}}$, which completes the proof.
\end{proof}
\subsection{Local $L^\infty$ bound and gradient estimates for parabolic equations}
\label{sec_3.2}
In this subsection,
we consider
\begin{equation}\label{eq_sec_3.2}
u_t=\Delta_xu+{\bf b}(x,t)\cdot\nabla_xu+V(x,t)u+f(x,t)
\qquad\text{in } Q,
\end{equation}
where $Q=B_2\times(0,1)$, $B_r=\{x\in\R^n;\ |x|<r\}$.
The coefficients are assumed to be
\begin{equation}
\tag{A1}
{\bf b}(x,t)\in(L^\infty(Q))^n,
\quad
V(x,t)\in L^\infty(Q)
\qquad\text{with }
\|{\bf b}\|_{L^\infty(Q)}+\|V\|_{L^\infty(Q)}<M.
\end{equation}
For $p,q\in[1,\infty]$,
we define
\[
\|f\|_{L^{p,q}(Q)}
=
\begin{cases}
\dis
\left( \int_0^1\|f(t)\|_{L^p(B_2)}^qdt \right)^\frac{1}{q}
& \text{if } q\in[1,\infty),
\\
\dis
\sup_{t\in(0,1)}\|f(t)\|_{L^p(B_2)}
& \text{if } q=\infty.
\end{cases}
\]
\begin{lem}[Exercise 6.5 p. 154 \cite{Lieberman}, Theorem 8.1 p. 192 \cite{Ladyzenskaja}]
\label{Lem3.4}
Let $p,q\in(1,\infty]$ satisfy $\frac{n}{2p}+\frac{1}{q}<1$ and assume {\rm(A1)}.
There exists $c>0$ depending on $p$, $q$, $n$ and $M$ such that
\begin{enumerate}[{\rm (i)}]
\item
if $u(x,t)$ is a weak solution of \eqref{eq_sec_3.2},
then
\[
\|u\|_{L^\infty(B_1\times(\frac{1}{2},1))}
<
c\left( \|u\|_{L^2(Q)}+\|f\|_{L^{p,q}(Q)} \right),
\]
\item
if $u(x,t)\in C(B_2\times[0,1))$ is a weak solution of \eqref{eq_sec_3.2} with $u(x,t)|_{t=0}=0$,
then
\[
\|u\|_{L^\infty(B_1\times(0,1))}
<
c\left( \|u\|_{L^2(Q)}+\|f\|_{L^{p,q}(Q)} \right).
\]
\end{enumerate}
\end{lem}
\begin{lem}[Theorem 4.8 p. 56 \cite{Lieberman}, Theorem 11.1 p. 211 \cite{Ladyzenskaja}]
\label{Lem3.5}
Let $p,q\in(1,\infty]$ satisfy $\frac{n}{p}+\frac{2}{q}<1$ and assume {\rm(A1)}.
There exists $c>0$ depending on $p$, $q$, $n$ and $M$ such that
\begin{enumerate}[{\rm (i)}]
\item
if $u(x,t)\in C^{2,1}(Q)$ is a solution of \eqref{eq_sec_3.2},
then
\[
\|\nabla u\|_{L^\infty(B_1\times(\frac{1}{2},1))}
<
c\left( \|u\|_{L^\infty(Q)}+\|f\|_{L^{p,q}(Q)} \right),
\]
\item
if $u(x,t)\in C^{2,1}(Q)\cap C^{2,1}(B_2\times[0,1))$ is a solution of \eqref{eq_sec_3.2}
with $u(x,t)|_{t=0}=0$,
then
\[
\|\nabla u\|_{L^\infty(B_1\times(0,1))}
<
c\left( \|u\|_{L^\infty(Q)}+\|f\|_{L^{p,q}(Q)} \right),
\]
\item
if $u(x,t)\in C^{2,1}(\bar Q)$ is a solution of \eqref{eq_sec_3.2} with
$u(x,t)|_{t=0}=0$ and $u(x,t)|_{\pa B_2}=0$,
then
\[
\|\nabla u\|_{L^\infty(Q)}
<
c\left(
\|u\|_{L^\infty(Q)}+\|f\|_{L^{p,q}(Q)}
\right).
\]
\end{enumerate}
\end{lem}
\section{Formal derivation of blowup speed}
\label{sec_4}
In this section,
we derive the asymptotic behavior of solutions described in Theorem \ref{Thm1}
by using a matched asymptotic expansion technique.
Since our blowup solution has four characteristic lengths,
we denote them by
\begin{itemize}
\item inner region \quad $|x|\sim\lambda(t)$,
\item semiinner region \quad $|x|\sim\eta(t)$,
\item selfsimilar region \quad $|x|\sim\sqrt{T-t}$,
\item outer region \quad $|x|\sim1$,
\end{itemize}
where $\lambda(t)$ and $\eta(t)$ are unknown functions satisfying
\[
\lambda(t)\ll\eta(t)\ll\sqrt{T-t}.
\]
\subsection{Inner region $|x|\sim\lambda(t)$}
\label{sec_4.1}
We first investigate the asymptotic behavior of solutions in the inner region.
We assume that
the solution $u(x,t)$ behaves like
\[
u(x,t)
=
\lambda(t)^{-\frac{n-2}{2}}
{\sf Q}(y)
+
o(\lambda^{-\frac{n-2}{2}})
\qquad
\text{in the inner region},
\]
where $x=\lambda(t)y$
and
$\lambda(t)$ is an unknown function satisfying $\lambda(t)\to0$ as $t\to T$.
Under this assumption,
this solution blows up at $t=T$.
Therefore
we can expect that
the solution approximately solves $u_t=\Delta_xu+|u|^{p-1}u$ in this region.
Following the argument in Section 4 of \cite{Harada_2}
(the original idea is in \cite{Filippas}),
we put
\[
u(x,t)
=
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
+
\lambda^{-\frac{n-2}{2}}\sigma(t)T_1(y)
+
\lambda^{-\frac{n-2}{2}}\epsilon_1(y,t),
\]
where $0<\sigma(t)\ll1$.
The function $\epsilon_1(y,t)$ solves
\[
\lambda^2\pa_t\epsilon_1
+
\lambda^2\dot\sigma T_1
=
H_y\epsilon_1
+
\lambda\dot\lambda
(\Lambda_y{\sf Q})
+
\sigma
H_yT_1
+
\lambda\dot\lambda
\sigma
\Lambda_yT_1
+
\lambda\dot\lambda
\Lambda_y\epsilon_1
+
N,
\]
where
\begin{itemize}
\item
$H_y=\Delta_y+f'({\sf Q}(y))$,
\item
$\Lambda_y=\frac{n-2}{2}+y\cdot\nabla_y$,
\item
$N=f(u)-f(\lambda^{-\frac{n-2}{2}}{\sf Q})
-f'(\lambda^{-\frac{n-2}{2}}{\sf Q})(u-\lambda^{-\frac{n-2}{2}}{\sf Q})$.
\end{itemize}
We choose
$\sigma=\lambda\dot\lambda$
and
$T_1$ as a solution of $H_yT_1+\Lambda_y{\sf Q}=0$
to cancel the lower order terms.
Then
$\epsilon_1(y,t)$ solves
\[
\lambda^2\pa_t\epsilon_1
+
\lambda^2\dot\sigma T_1
=
H_y\epsilon_1
+
\lambda\dot\lambda
\sigma
\Lambda_yT_1
+
\lambda\dot\lambda
\Lambda_y\epsilon_1
+
N.
\]
From this equation,
we can expect $|\epsilon_1(y,t)|\ll|\sigma|$.
This implies
\[
u(x,t)
=
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1(y)
(1+o)
\qquad
\text{as }
t\to T.
\]
We recall that
there exists ${\sf A}_1>0$ such that
(see p. 12 - p. 13 in \cite{Harada_2})
\begin{align}
\label{eq4.1}
T_1(y)
&=
{\sf A}_1
+
O(|y|^{-(n-4)})
+
O(|y|^{-2})
\qquad
\text{as }
|y|\to\infty
\qquad
(n\geq5),
\\
\label{eq4.2}
|\nabla_yT_1(y)|
&=
O(|y|^{-(n-3)})
+
O(|y|^{-3})
\qquad
\text{as }
|y|\to\infty
\qquad
(n\geq5).
\end{align}
Therefore
the asymptotic behavior of the solution is given by
\begin{equation}\label{eq4.3}
u(x,t)
=
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
+
\lambda^{-\frac{n-2}{2}}\sigma
{\sf A}_1
(1+o)
\qquad
\text{as }
|y|\to\infty.
\end{equation}
\subsection{Semiinner region $|x|\sim\eta(t)$ and Selfsimilar region $|x|\sim\sqrt{T-t}$}
\label{sec_4.2}
Let $\eta(t)$ be an unknown function which represents the characteristic length
of the semiinner region.
To describe the asymptotic behavior of solutions,
we introduce two new variables
\[
\xi=\eta(t)^{-1}x,
\qquad
z=(T-t)^{-\frac{1}{2}}x.
\]
We now assume
\begin{equation}\label{eq4.4}
\lim_{t\to T}
u(x,t)
=
0
\qquad
\text{in } k^{-1}\eta(t)<|x|<k\sqrt{T-t}
\end{equation}
for arbitrary $k>1$.
Under this assumption,
the equation \eqref{eq1.1} is approximated by
\begin{equation}\label{eq4.5}
u_t=\Delta_xu-|u|^{q-1}u.
\end{equation}
It is known that
there are two types of solutions of \eqref{eq4.5} satisfying \eqref{eq4.4}.
\begin{enumerate}[(I)]
\item One is given by $u(x,t)=\pm(1-q)^\frac{1}{1-q}(T-t)^\frac{1}{1-q}$.
\item The other solution behaves like
\begin{align}
\label{eq4.6}
u(x,t)
&=
\pm\eta(t)^\frac{2}{1-q}{\sf U}(\xi)+o(\eta(t)^\frac{2}{1-q})
\qquad \text{for } |x|\sim\eta(t),
\\
\label{eq4.7}
u(x,t)
&=
\pm{\sf U}_\infty(x)+\Theta_J(x,t)+o(\Theta_J(x,t))
\qquad \text{for } |x|\sim\sqrt{T-t}
\qquad
(J\in\N).
\end{align}
\end{enumerate}
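Before turning to (II), we note that the spatially constant profile in (I)
can be checked directly:
it solves \eqref{eq4.5} since the Laplacian vanishes and
\[
\pa_t
\big(
\pm(1-q)^\frac{1}{1-q}(T-t)^\frac{1}{1-q}
\big)
=
\mp
(1-q)^\frac{q}{1-q}
(T-t)^\frac{q}{1-q}
=
-|u|^{q-1}u,
\]
where we used $\frac{1}{1-q}-1=\frac{q}{1-q}$.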
We explain the case (II) following \cite{Seki}
and
give the definitions of
${\sf U}(\xi)$, ${\sf U}_\infty(x)$ and $\Theta_J(x,t)$.
Let ${\sf U}(\xi)$ be the unique solution of
\[
\begin{cases}
\Delta_\xi{\sf U}-f_2({\sf U})=0
\qquad
\text{in }
\R^n,
\\
{\sf U}(\xi)|_{\xi=0}=1,
\\
{\sf U}(\xi) \text{ is radially symmetric}.
\end{cases}
\]
As in \cite{Seki},
we assume
that
the solution
in the semiinner region
moves along the scale of the steady state ${\sf U}(\xi)$ like
\begin{equation*}
u(x,t)
=
\pm
\eta(t)^\frac{2}{1-q}
{\sf U}(\xi)
(1+o)
\qquad
\text{in }
|x|\sim\eta(t).
\end{equation*}
This implies \eqref{eq4.6}.
To investigate the asymptotic behavior of solutions in the selfsimilar region,
we introduce a singular solution ${\sf U}_\infty(x)=L_1|x|^\frac{2}{1-q}$.
The constant $L_1$ is given by
\begin{align}\label{eq4.8}
L_1^{q-1}
=
\beta_0
(
\beta_0+n-2
),
\quad
\beta_0
=
\tfrac{2}{1-q}.
\end{align}
This gives a unique nonnegative radial solution of
$\Delta_x{\sf U}-{\sf U}^q=0$ with ${\sf U}|_{x=0}=0$.
Furthermore
let $\gamma\in(0,\frac{2}{1-q})$ be a unique real number satisfying
\[
(\Delta_x-qL_1^{q-1}|x|^{-2})|x|^\gamma=0.
\]
It is known that
(see Remark 1.2 (i) in \cite{Seki} p. 4)
\begin{equation}\label{eq4.9}
\tfrac{2}{1-q}-2<\gamma<\tfrac{2}{1-q}.
\end{equation}
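In fact, \eqref{eq4.9} can be verified directly.
The equation for $|x|^\gamma$ reduces to
$\gamma(\gamma+n-2)=qL_1^{q-1}=q\beta_0(\beta_0+n-2)$,
and since $q\beta_0=\beta_0-2$,
the polynomial
$g(s)=s(s+n-2)-q\beta_0(\beta_0+n-2)$
satisfies
\[
g(\beta_0)
=
(1-q)\beta_0(\beta_0+n-2)
>0,
\qquad
g(\beta_0-2)
=
-2(\beta_0-2)
<0,
\]
due to $\beta_0=\frac{2}{1-q}>2$.
Hence the positive root $\gamma$ lies in $(\beta_0-2,\beta_0)$,
which is \eqref{eq4.9}.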
To investigate the behavior of the solution
on the border
between the semiinner region and the selfsimilar region,
we need the following property of ${\sf U}(\xi)$.
\begin{equation}\label{eq4.10}
{\sf U}(\xi)
=
{\sf U}_\infty(\xi)
+
{\sf B}_1
|\xi|^\gamma
+
O(|\xi|^{\gamma-{\sf k}_1})
\qquad
\text{as }
|\xi|\to\infty,
\end{equation}
where ${\sf B}_1>0$ and ${\sf k}_1>0$ (see (2.3) in \cite{Seki}).
From \eqref{eq4.10},
we expect that
the solution $u(x,t)$ behaves like
\begin{align*}
u(x,t)
&=
\pm
\eta(t)^\frac{2}{1-q}
{\sf U}(\xi)
(1+o)
=
\pm
\eta(t)^\frac{2}{1-q}
{\sf U}_\infty(\xi)
(1+o)
\\
&=
\pm
{\sf U}_\infty(x)
(1+o)
\qquad
\text{as }
|\xi|\to\infty.
\end{align*}
From this relation,
we assume that
the solution in the selfsimilar region behaves like
\[
u(x,t)
=
\pm
{\sf U}_\infty(x)
(1+o)
\qquad
\text{for }
|x|\sim\sqrt{T-t}.
\]
To obtain more precise asymptotic behavior of solutions,
we consider a linearized problem around $\pm{\sf U}_\infty(x)$.
\[
w_t
=
\Delta_x
w
-
f_2'(\pm{\sf U}_\infty)
w
=
\Delta_x
w
-
q|{\sf U}_\infty|^{q-1}
w
=
\Delta_x
w
-
qL_1^{q-1}
|x|^{-2}
w.
\]
Here the nonlinear term $f(u)$ is neglected,
since $u(x,t)\to0$ for $|x|\sim\sqrt{T-t}$.
To describe a local behavior of $w(x,t)$,
we perform a change of variable:
$z=(T-t)^{-\frac{1}{2}}x$,
$T-t=e^{-\tau}$.
The linearized equation can be written in the new variable as
\begin{equation*}
w_\tau
=
\Delta_zw
-
\tfrac{z}{2}\cdot\nabla_zw
-
qL_1^{q-1}
|z|^{-2}
w
\qquad
\text{for }
z\in\R^n,\
\tau\in(-\log T,\infty).
\end{equation*}
We define a weighted $L^2$ space.
\begin{align*}
L_\rho^2(\R^n)
:=
\{f\in L_\text{loc}^2(\R^n);\ \|f\|_\rho<\infty\},
\qquad
\|f\|_\rho^2=\int_{\R^n}f(z)^2\rho(z)dz
\quad
\text{with }
\rho(z)=e^{-\frac{|z|^2}{4}}.
\end{align*}
The inner product is defined by
\[
(f_1,f_2)_\rho=\int_{\R^n}f_1(z)f_2(z)\rho(z)dz.
\]
A corresponding eigenvalue problem is given by
\begin{equation*}
-
(
\Delta_z-\tfrac{z}{2}\cdot\nabla_z-qL_1^{q-1}|z|^{-2}
)
e_j=\mu_je_j
\qquad\text{in } L_{\rho,\text{rad}}^2(\R^n).
\end{equation*}
It is known that
the eigenvalue $\mu_j$ and the eigenfunction $e_j(z)$
are explicitly given by
\begin{itemize}
\item $\mu_j=\frac{\gamma}{2}+j$ \quad ($j=0,1,2,\cdots$),
\item $e_j(z)\in L_{\rho,\text{rad}}^2(\R^n)$ \quad ($j=0,1,2,\cdots$),
\item $e_0(z)={\sf D}_0|z|^\gamma$,
\item $e_j(z)={\sf D}_j|z|^\gamma+O(|z|^{\gamma+2})$ \quad as $|z|\to0$
\quad ($j=1,2,\cdots$),
\item $e_j(z)={\sf E}_j|z|^{2j+\gamma}+O(|z|^{2j-2+\gamma})$ \quad as $|z|\to\infty$
\quad ($j=1,2,\cdots$),
\end{itemize}
and
$L_{\rho,\text{rad}}^2(\R^n)$ is spanned by
$\{e_j(z)\}_{j=0}^\infty$
(see Lemma 2.2 p. 8 in \cite{Seki}, Corollary 2.3 p. 10 in \cite{Seki}).
The eigenfunctions are normalized as $\|e_j\|_\rho=1$ ($j=0,1,2,\cdots$).
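For instance, the first eigenpair can be checked by hand:
since $\gamma(\gamma+n-2)=qL_1^{q-1}$ by the definition of $\gamma$,
\[
-
(
\Delta_z-\tfrac{z}{2}\cdot\nabla_z-qL_1^{q-1}|z|^{-2}
)
|z|^\gamma
=
-
(
\gamma(\gamma+n-2)-qL_1^{q-1}
)
|z|^{\gamma-2}
+
\tfrac{\gamma}{2}
|z|^\gamma
=
\tfrac{\gamma}{2}
|z|^\gamma,
\]
so $e_0(z)={\sf D}_0|z|^\gamma$ is an eigenfunction with eigenvalue $\mu_0=\frac{\gamma}{2}$.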
From this fact,
we choose
\[
w(x,t)
=
\Theta_J(z,\tau)
=
K
e^{-\mu_J\tau}e_J(z)
=
K
(T-t)^{\frac{\gamma}{2}+J}
e_J(z)
\qquad (J=0,1,2,\cdots),
\]
where $K$ is an arbitrary constant.
This implies
\begin{align*}
u(x,t)
&=
\pm
{\sf U}_\infty(x)
+
\Theta_J(x,t)
\nonumber
\\
&=
\pm
{\sf U}_\infty(x)
+
K(T-t)^{\frac{\gamma}{2}+J}
e_J(z)
\qquad
\text{for }
|x|\sim\sqrt{T-t}.
\end{align*}
This gives the formula \eqref{eq4.7}.
The asymptotic behavior of this solution is given by
\begin{align}\label{eq4.11}
\nonumber
u(x,t)
&=
\pm
{\sf U}_\infty(x)
+
K{\sf D}_J
(T-t)^{\frac{\gamma}{2}+J}
|z|^\gamma
(1+o(1))
\\
&=
\pm
{\sf U}_\infty(x)
+
K{\sf D}_J
(T-t)^J
\eta^\gamma
|\xi|^\gamma
(1+o(1))
\qquad
\text{as }
|z|\to0.
\end{align}
\subsection{Matching condition for the case (I)}
\label{sec_4.3}
We first consider the case (I) in Section \ref{sec_4.2}.
Since ${\sf Q}(y)\to0$ as $|y|\to\infty$,
\eqref{eq4.3} implies
\begin{align*}
u(x,t)
&=
\lambda^{-\frac{n-2}{2}}
{\sf Q}(y)
+
\lambda(t)^{-\frac{n-2}{2}}
\sigma
{\sf A}_1
(1+o)
=
\lambda^{-\frac{n-2}{2}}
\sigma
{\sf A}_1
(1+o)
\qquad
\text{as } |y|\to\infty.
\end{align*}
This relation
and
the asymptotic formula (I)
give the following matching condition.
\[
\lambda^{-\frac{n-2}{2}}
\sigma
{\sf A}_1
=
\lambda^{-\frac{n-2}{2}}
\lambda
\dot\lambda
{\sf A}_1
=
\pm
(1-q)^\frac{1}{1-q}
(T-t)^\frac{1}{1-q}
\qquad
\text{as } |y|\to\infty.
\]
Since ${\sf A}_1>0$ (see \eqref{eq4.1}) and $\dot\lambda<0$,
we choose a minus sign.
Therefore
we obtain
\[
\lambda(t)
=
(\tfrac{6-n}{2(2-q){\sf A}_1})^{\frac{2}{6-n}}
(1-q)^{\frac{2-q}{1-q}\frac{2}{6-n}}
(T-t)^{\frac{2-q}{1-q}\frac{2}{6-n}}.
\]
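Indeed, since
$\lambda^{-\frac{n-2}{2}}\lambda\dot\lambda
=\frac{2}{6-n}\frac{d}{dt}\lambda^{\frac{6-n}{2}}$,
the matching condition is an ODE for $\lambda$,
\[
\tfrac{2{\sf A}_1}{6-n}
\tfrac{d}{dt}
\lambda^{\frac{6-n}{2}}
=
-
(1-q)^\frac{1}{1-q}
(T-t)^\frac{1}{1-q},
\]
and integration over $(t,T)$ with $\lambda(T)=0$ gives
$\lambda^{\frac{6-n}{2}}
=\frac{6-n}{2(2-q){\sf A}_1}
(1-q)^\frac{2-q}{1-q}
(T-t)^\frac{2-q}{1-q}$,
which yields the formula above.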
\subsection{Matching condition for the case (II)}
\label{sec_4.4}
We next consider the case (II).
We determine $\lambda(t)$ and $\eta(t)$ by the matching procedure.
From \eqref{eq4.3} and \eqref{eq4.6}-\eqref{eq4.7},
we obtain matching conditions ($n=5$).
\begin{align*}
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
+
\lambda^{-\frac{n-2}{2}}\sigma
{\sf A}_1
&=
\pm
\eta^\frac{2}{1-q}{\sf U}(\xi)
\qquad
\text{for } |y|\to\infty,\ |\xi|\to0,
\\
\pm
\eta^\frac{2}{1-q}{\sf U}(\xi)
&=
\pm
{\sf U}_\infty(x)
+
\Theta_J(x,t)
\qquad
\text{for } |\xi|\to\infty,\ |z|\to0.
\end{align*}
Due to \eqref{eq4.10} and \eqref{eq4.11},
these conditions can be written as
\begin{align*}
\lambda^{-\frac{n-2}{2}}
\sigma
{\sf A}_1
&=
\pm
\eta^\frac{2}{1-q}
\qquad
\text{for } |y|\to\infty,\ |\xi|\to0,
\\
\pm
\eta^\frac{2}{1-q}
(
{\sf U}_\infty(\xi)
+
{\sf B}_1|\xi|^\gamma
)
&=
\pm
(
{\sf U}_\infty(x)
+
\eta^\frac{2}{1-q}
{\sf B}_1|\xi|^\gamma
)
\\
&=
\pm
{\sf U}_\infty(x)
+
K{\sf D}_J
(T-t)^J
\eta^\gamma
|\xi|^\gamma
\qquad
\text{for } |\xi|\to\infty,\ |z|\to0.
\end{align*}
We now take a minus sign and obtain
\begin{align*}
\lambda^{-\frac{n-2}{2}}
\sigma
{\sf A}_1
&=
-
\eta^\frac{2}{1-q}
\qquad
\text{for } |y|\to\infty,\ |\xi|\to0,
\\
-
\eta^\frac{2}{1-q}
{\sf B}_1
&=
K{\sf D}_J
(T-t)^J
\eta^\gamma
\qquad
\text{for } |\xi|\to\infty,\ |z|\to0.
\end{align*}
From these relations,
we can derive
\begin{itemize}
\item
$K=-{\sf D}_J^{-1}{\sf B}_1$,
\item
$\eta(t)=(T-t)^{\gamma_J}$
\quad
with
$\gamma_J=J(\frac{2}{1-q}-\gamma)^{-1}$,
\item
$\lambda(t)
=
(\frac{6-n}{2{\sf A}_1\Gamma_J})^\frac{2}{6-n}
(T-t)^{\frac{2}{6-n}\Gamma_J}$
\quad
with
$\Gamma_J
=(\frac{2J}{1-q}+\frac{2}{1-q}-\gamma)
(\frac{2}{1-q}-\gamma)^{-1}$.
\end{itemize}
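These expressions follow by comparing powers of $T-t$.
The second relation with $K=-{\sf D}_J^{-1}{\sf B}_1$ gives
$\eta^{\frac{2}{1-q}-\gamma}=(T-t)^J$,
that is, $\eta(t)=(T-t)^{\gamma_J}$.
Inserting this into the first relation and using $\sigma=\lambda\dot\lambda$,
we get
\[
\tfrac{2{\sf A}_1}{6-n}
\tfrac{d}{dt}
\lambda^{\frac{6-n}{2}}
=
\lambda^{-\frac{n-2}{2}}
\lambda\dot\lambda
{\sf A}_1
=
-
\eta^\frac{2}{1-q}
=
-
(T-t)^\frac{2\gamma_J}{1-q}.
\]
Since $\frac{2\gamma_J}{1-q}+1=\Gamma_J$,
integration over $(t,T)$ with $\lambda(T)=0$ gives
$\lambda^{\frac{6-n}{2}}
=\frac{6-n}{2{\sf A}_1\Gamma_J}(T-t)^{\Gamma_J}$,
which is the third expression.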
Since $\eta(t)\to0$ as $t\to T$,
we must have $J\in\N$.
The blowup rate of the solution is given by
\begin{align*}
\|u(t)\|_\infty
=
\lambda(t)^{-\frac{n-2}{2}}
=
(T-t)^{-\frac{n-2}{6-n}\Gamma_J}
\qquad
(J\in\N).
\end{align*}
\subsection{Corrections in the selfsimilar region $|x|\sim\sqrt{T-t}$}
\label{sec_4.5}
Unfortunately,
the solutions obtained in Section \ref{sec_4.4} do not give
appropriate approximate solutions of \eqref{eq1.1},
since the contribution of $f(u)$ is not negligible.
We now reconstruct approximate solutions along Section \ref{sec_4.4}.
As in Section \ref{sec_4.4},
we assume
\[
u(x,t)
=
-{\sf U}_\infty(x)
+
o({\sf U}_\infty(x))
\qquad
\text{for }
|x|\sim\sqrt{T-t}.
\]
To take into account the effect of $f(u)$,
we construct a solution of the form
\begin{align}
\label{eq4.12}
u
=
-{\sf U}_\infty(x)-\theta
=
-{\sf U}_\infty(x)-(\theta_0+\theta_1+\cdots+\theta_L)
\qquad
(L\gg1)
\end{align}
satisfying
the following conditions.
\begin{enumerate}[(i) ]
\item
$\theta_0(x)=a_0{\sf U}_\infty^{p+1-q}
={a}_0L_1^{p+1-q}|x|^{\frac{2p}{1-q}+2}$
\qquad (${a}_0\not=0$),
\item $\dis\theta_i(x)={a}_i|x|^{\frac{2(p-q)i}{1-q}}\theta_0(x)
+|x|^{\frac{2(p-q)i}{1-q}}\theta_0(x)\sum_{l=1}^Mb_l|x|^\frac{2(p-q)l}{1-q}$
\qquad (${a}_i\not=0$)
\qquad ($i=1,2,\cdots,L$),
\item
$|\Delta({\sf U}_\infty+\theta)+f({\sf U}_\infty+\theta)
-f_2({\sf U}_\infty+\theta)|
\ll\Theta_J(x,t)
=(T-t)^{\frac{\gamma}{2}+J}e_J(z)$
\qquad
in $|x|\sim\sqrt{T-t}$.
\end{enumerate}
Once
such a function
$\theta(x)=\theta_0(x)+\theta_1(x)+\cdots+\theta_L(x)$
is constructed,
we can verify that
\begin{align*}
u(x,t)
=
-
{\sf U}_\infty(x)
-
\theta(x)
-
\Theta_J(x,t)
\end{align*}
gives a new approximate solution in the selfsimilar region
instead of \eqref{eq4.7}.
From conditions (i)-(ii),
the matching condition derived in Section \ref{sec_4.4} does not change.
Therefore
we obtain the same $\lambda(t)$ and $\eta(t)$ as in Section \ref{sec_4.4}.
To construct functions
$\theta(x)=\theta_0(x)+\theta_1(x)+\cdots+\theta_L(x)$,
we substitute \eqref{eq4.12} into \eqref{eq1.1} and
apply the Taylor expansion.
\begin{align*}
0
&=
\Delta({\sf U}_\infty+\theta)
+
f({\sf U}_\infty+\theta)
-
f_2({\sf U}_\infty+\theta)
\\
&=
\Delta{\sf U}_\infty
+
\Delta \theta
+
f({\sf U}_\infty)
+
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta^i
-
f_2({\sf U}_\infty)
-
f_2'({\sf U}_\infty)
\theta
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta^i
\\
&=
\Delta(\theta_0+\cdots+\theta_L)
+
f({\sf U}_\infty)
+
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
(\theta_0+\cdots+\theta_L)^i
\\
& \quad
-
f_2'({\sf U}_\infty)
(\theta_0+\cdots+\theta_L)
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
(\theta_0+\cdots+\theta_L)^i.
\end{align*}
We determine $\theta_0(x)$, $\theta_1(x)$ and $\theta_2(x)$
as follows.
\begin{align}
&
\label{eq4.13}
(\Delta -f_2'({\sf U}_\infty))
\theta_0
+
f({\sf U}_\infty)
=
0,
\\
&
\label{eq4.14}
(\Delta -f_2'({\sf U}_\infty))
\theta_1
+
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
=
0,
\\
&
\label{eq4.15}
(\Delta -f_2'({\sf U}_\infty))
\theta_2
+
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
=
0.
\end{align}
In the same manner,
we define $\theta_k(x)$ ($k=3,\cdots,L$) by
\begin{align*}
(\Delta -f_2'({\sf U}_\infty))
\theta_k
&+
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\cdots+\theta_{k-1})^i-(\theta_0+\cdots+\theta_{k-2})^i)
\\
&
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\cdots+\theta_{k-1})^i-(\theta_0+\cdots+\theta_{k-2})^i)
=0.
\end{align*}
From \eqref{eq4.13} - \eqref{eq4.15},
$\theta(x)=\theta_0(x)+\theta_1(x)+\cdots+\theta_L(x)$ satisfies
\begin{align*}
(\Delta-f_2'({\sf U}_\infty))\theta
&=
-f({\sf U}_\infty)
-
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
(\theta_0+\cdots+\theta_{L-1})^i
+
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
(\theta_0+\cdots+\theta_{L-1})^i.
\end{align*}
Since $\Delta{\sf U}_\infty=f_2({\sf U}_\infty)$,
this relation implies
\begin{align*}
\Delta({\sf U}_\infty+\theta)
&=
-
f({\sf U}_\infty)
-
\sum_{i=1}^L
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
(\theta-\theta_L)^i
+
f_2({\sf U}_\infty)
+
f_2'({\sf U}_\infty)
\theta
+
\sum_{i=2}^L
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
(\theta-\theta_L)^i
\\
&=
-
\sum_{i=0}^L
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
(\theta-\theta_L)^i
+
\sum_{i=0}^L
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
(\theta-\theta_L)^i.
\end{align*}
Therefore
we get
\begin{align}\label{eq4.16}
\nonumber
|
\Delta
({\sf U}_\infty+\theta)
&
+
f
({\sf U}_\infty+\theta)
-
f_2({\sf U}_\infty+\theta)
|
\\
\nonumber
&<
|
f({\sf U}_\infty+\theta)
-
\sum_{i=0}^L
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta^i
|
+
|
f_2({\sf U}_\infty+\theta)
-
\sum_{i=0}^L
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta^i
|
\\
& \quad
+
\sum_{i=0}^L
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
|(\theta-\theta_L)^i-\theta^i|
+
\sum_{i=0}^L
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
|(\theta-\theta_L)^i-\theta^i|.
\end{align}
We note that
the conditions (i) - (ii) imply
$\theta(x)=O(|x|^\frac{2(p-q)}{1-q}{\sf U}_\infty(x))$
and
$\theta_L(x)=O(|x|^{\frac{2(p-q)}{1-q}(L+1)}{\sf U}_\infty(x))$.
Therefore
if the conditions (i)-(ii) are verified,
the condition (iii) follows from \eqref{eq4.16}.
We now construct $\theta(x)=\theta_0(x)+\cdots+\theta_L(x)$.
Equation \eqref{eq4.13} can be written as
\begin{equation}\label{eq4.17}
0
=
(
\Delta
-
f_2'({\sf U}_\infty)
)
\theta_0
+
f({\sf U}_\infty)
=
(
\Delta
-
qL_1^{q-1}
|x|^{-2}
)
\theta_0
+
L_1^p|x|^{\frac{2p}{1-q}}.
\end{equation}
We easily check that
\begin{equation}\label{eq4.18}
\theta_0(x)
=
{a}_0
{\sf U}_\infty^{p+1-q}
=
{a}_0
L_1^{p+1-q}|x|^{\frac{2p}{1-q}+2}
\end{equation}
gives a solution of \eqref{eq4.17}.
We now claim that
\begin{equation}\label{eq4.19}
-(1-q)^{-1}<{a}_0<0.
\end{equation}
We write
$\theta_0(x)={a}_0L_1^{p+1-q}|x|^\beta$
with
$\beta=\frac{2p}{1-q}+2$.
From \eqref{eq4.17},
it holds that
\begin{align*}
{a}_0
L_1^{p+1-q}
\{
\beta(\beta+n-2)
-
qL_1^{q-1}
\}
&=
-L_1^p,
\\
{a}_0
L_1^{1-q}
\{
\beta(\beta+n-2)
-
qL_1^{q-1}
\}
&=
-1.
\end{align*}
This implies
$-
{a}_0^{-1}
=
L_1^{1-q}
\beta(\beta+n-2)
-
q
$.
Since $L_1^{q-1}=\beta_0(\beta_0+n-2)$
with $\beta_0=\frac{2}{1-q}$
(see \eqref{eq4.8})
and $\beta>\beta_0$,
we get
\begin{align*}
-{a}_0^{-1}
=
L_1^{1-q}
\beta(\beta+n-2)
-
q
=
\tfrac{
\beta(\beta+n-2)
}{\beta_0(\beta_0+n-2)}
-q
>
1-q.
\end{align*}
This concludes \eqref{eq4.19}.
We next construct $\theta_1(x)$ in the same way.
We note from \eqref{eq4.18} that
\begin{equation}\label{eq4.20}
{\sf U}_\infty^q
\theta_0
=
{a}_0
{\sf U}_\infty^{p+1}
\quad
\Leftrightarrow
\quad
{\sf U}_\infty^{-1}
\theta_0
=
{a}_0
{\sf U}_\infty^{p-q}.
\end{equation}
By using this relation,
we compute the second and the third terms of \eqref{eq4.14}.
\begin{align*}
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
&=
f'({\sf U}_\infty)
\theta_0
+
\sum_{i=2}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
\\
&=
p
{\sf U}_\infty^{p-1}
\theta_0
+
\sum_{i=2}^N
c_i
{\sf U}_\infty^{p-i}
\theta_0^i
\\
&=
p
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=2}^N
c_i
({\sf U}_\infty^{-1}\theta_0)^{i-1}
\\
&=
p
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=2}^N
c_i
|x|^\frac{2(p-q)(i-1)}{1-q},
\\
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
&=
\tfrac{1}{2}
f_2''({\sf U}_\infty)
\theta_0^2
+
\sum_{i=3}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
\\
&=
\tfrac{q(q-1)}{2}
{\sf U}_\infty^{q-2}
\theta_0^2
+
\sum_{i=3}^N
d_i
{\sf U}_\infty^{q-i}
\theta_0^i
\\
&=
\tfrac{q(q-1)}{2}
{a}_0
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{q-2}
\theta_0^2
\sum_{i=3}^N
d_i
(
{\sf U}_\infty^{-1}
\theta_0
)^{i-2}
\\
&=
\tfrac{q(q-1)}{2}
{a}_0
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=3}^N
d_i
|x|^\frac{2(p-q)(i-2)}{1-q}
\\
&=
\tfrac{q(q-1)}{2}
{a}_0
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=2}^{N-1}
d_i
|x|^\frac{2(p-q)(i-1)}{1-q}.
\end{align*}
From \eqref{eq4.19},
we obtain
\begin{align*}
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
\theta_0^i
&=
p
{\sf U}_\infty^{p-1}
\theta_0
-
\tfrac{q(q-1)}{2}
{a}_0
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=2}^{N}
C_i
|x|^{\frac{2(p-q)(i-1)}{1-q}}
\\
&=
(
\underbrace{
p+\tfrac{q(1-q)}{2}
{a}_0
}_{\not=0}
)
{\sf U}_\infty^{p-1}
\theta_0
+
{\sf U}_\infty^{p-1}
\theta_0
\sum_{i=1}^{N-1}
C_i
|x|^\frac{2(p-q)i}{1-q}.
\end{align*}
Therefore
we obtain a solution $\theta_1(x)$ of \eqref{eq4.14} satisfying
\begin{align}
\label{eq4.21}
\theta_1(x)
&=
{\sf c}_1|x|^2{\sf U}_\infty^{p-1}\theta_0
+
{\sf h}_1(x)
=
{a}_1
|x|^\frac{2(p-q)}{1-q}
\theta_0
+
{\sf h}_1(x)
\qquad
({a}_1\not=0),
\\
\nonumber
{\sf h}_1(x)
&=
|x|^\frac{2(p-q)}{1-q}
\theta_0
\sum_{i=1}^{N-1}
b_i|x|^\frac{2(p-q)i}{1-q}.
\end{align}
A solution $\theta_2(x)$ of \eqref{eq4.15} is constructed in the same manner.
We write
\begin{align*}
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
&=
f'({\sf U}_\infty)
\theta_1
+
\sum_{i=2}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i),
\\
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
&=
\tfrac{f_2''({\sf U}_\infty)}{2}
(2\theta_0\theta_1+\theta_1^2)
+
\sum_{i=3}^N
d_i
{\sf U}_\infty^{q-i}
((\theta_0+\theta_1)^i-\theta_0^i).
\end{align*}
From \eqref{eq4.20} and \eqref{eq4.21},
we see that
\begin{align*}
f'({\sf U}_\infty)
\theta_1
&=
p{\sf U}_\infty^{p-1}
\theta_1
\\
&=
p{\sf U}_\infty^{p-1}
{a}_1
|x|^\frac{2(p-q)}{1-q}
\theta_0
+
p{\sf U}_\infty^{p-1}
{\sf h}_1,
\\
\sum_{i=2}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
&=
\sum_{i=2}^N
\sum_{l=1}^i
c_{i,l}
f^{(i)}({\sf U}_\infty)
\theta_0^{i-l}\theta_1^l
\\
&=
\sum_{i=2}^N
\sum_{l=1}^i
c_{i,l}
{\sf U}_\infty^{p-i}
\theta_0^{i-l}\theta_1^l
\\
&=
({\sf U}_\infty^{p-1}\theta_1)
({\sf U}_\infty^{-1}\theta_0)
\sum_{i=2}^N
\sum_{l=1}^i
c_{i,l}
({\sf U}_\infty^{-1}\theta_0)^{i-2}
(\theta_0^{-1}\theta_1)^{l-1},
\\
\tfrac{f_2''({\sf U}_\infty)}{2}
(2\theta_0\theta_1+\theta_1^2)
&=
q(q-1)
{\sf U}_\infty^{q-2}
\theta_0
\theta_1
+
c{\sf U}_\infty^{q-2}
\theta_1^2
\\
&=
q(q-1)
{a}_0
{\sf U}_\infty^{p-1}
\theta_1
+
c
{\sf U}_\infty^{p-1}
\theta_0^{-1}
\theta_1^2
\\
&=
q(q-1)
{a}_0
{\sf U}_\infty^{p-1}
{a}_1
|x|^\frac{2(p-q)}{1-q}
\theta_0
+
q(q-1)
{a}_0
{\sf U}_\infty^{p-1}
{\sf h}_1
+
c
{\sf U}_\infty^{p-1}
\theta_1
(\theta_0^{-1}\theta_1),
\\
\sum_{i=3}^N
d_i
{\sf U}_\infty^{q-i}
((\theta_0+\theta_1)^i-\theta_0^i)
&=
\sum_{i=3}^N
\sum_{l=1}^i
d_{i,l}
{\sf U}_\infty^{q-i}
\theta_0^{i-l}\theta_1^l
\\
&=
{\sf U}_\infty^{q-2}
\theta_0
\theta_1
\sum_{i=3}^N
\sum_{l=1}^i
d_{i,l}
{\sf U}_\infty^{2-i}
\theta_0^{i-l-1}\theta_1^{l-1}
\\
&=
{\sf U}_\infty^{p-1}
\theta_1
\sum_{i=3}^N
\sum_{l=1}^i
d_{i,l}
({\sf U}_\infty^{-1}\theta_0)^{i-2}
(\theta_0^{-1}\theta_1)^{l-1}
\\
&=
({\sf U}_\infty^{p-1}\theta_1)
({\sf U}_\infty^{-1}\theta_0)
\sum_{i=3}^N
\sum_{l=1}^i
d_{i,l}
({\sf U}_\infty^{-1}\theta_0)^{i-3}
(\theta_0^{-1}\theta_1)^{l-1}.
\end{align*}
Furthermore
we note from
\eqref{eq4.18} and \eqref{eq4.21}
that
$\theta_0^{-1}\theta_1
=
|x|^\frac{2(p-q)}{1-q}
(a_1
+
\sum_{i=1}^{N-1}
b_i|x|^\frac{2(p-q)i}{1-q})$.
This relation together with \eqref{eq4.20} implies
\begin{align*}
\sum_{i=2}^N
\sum_{l=1}^i
c_{i,l}
({\sf U}_\infty^{-1}\theta_0)^{i-2}
(\theta_0^{-1}\theta_1)^{l-1}
&=
\sum_{i=0}^{N'}
c_i
|x|^\frac{2(p-q)i}{1-q},
\\
\sum_{i=3}^N
\sum_{l=1}^i
d_{i,l}
({\sf U}_\infty^{-1}\theta_0)^{i-3}
(\theta_0^{-1}\theta_1)^{l-1}
&=
\sum_{i=0}^{N'}
d_i
|x|^\frac{2(p-q)i}{1-q}.
\end{align*}
Combining all computations above,
we obtain
\begin{align*}
\sum_{i=1}^N
\tfrac{f^{(i)}({\sf U}_\infty)}{i!}
&
((\theta_0+\theta_1)^i-\theta_0^i)
-
\sum_{i=2}^N
\tfrac{f_2^{(i)}({\sf U}_\infty)}{i!}
((\theta_0+\theta_1)^i-\theta_0^i)
\\
&=
{a}_1
(
p-q(q-1){a}_0
)
{\sf U}_\infty^{p-1}
|x|^\frac{2(p-q)}{1-q}
\theta_0
+
{\sf U}_\infty^{p-1}
|x|^\frac{2(p-q)}{1-q}
\theta_0
\sum_{i=1}^{N'}
c_i|x|^\frac{2(p-q)i}{1-q}.
\end{align*}
Therefore
since $p-q(q-1){a}_0>0$ (see \eqref{eq4.19}),
we conclude
\begin{align*}
\theta_2(x)
&=
{\sf c}_2
|x|^2
{\sf U}_\infty^{p-1}
|x|^\frac{2(p-q)}{1-q}
\theta_0
+
{\sf h}_2(x)
=
{a}_2
|x|^\frac{4(p-q)}{1-q}
\theta_0
+
{\sf h}_2(x)
\qquad
({a}_2\not=0),
\\
{\sf h}_2(x)
&=
|x|^\frac{4(p-q)}{1-q}
\theta_0
\sum_{i=1}^{N'}
b_i
|x|^\frac{2(p-q)i}{1-q}.
\end{align*}
In the same manner,
we can construct solutions $\theta_k(x)$ ($k=3,4,\cdots,L$) satisfying
\begin{align*}
\theta_k(x)
&=
{a}_k
|x|^\frac{2k(p-q)}{1-q}
\theta_0
+
{\sf h}_k(x)
\qquad
({a}_k\not=0),
\\
{\sf h}_k(x)
&=
|x|^\frac{2k(p-q)}{1-q}
\theta_0
\sum_{i=1}^{N'}
b_i
|x|^\frac{2(p-q)i}{1-q}.
\end{align*}
\section{Formulation}
\label{sec_5}
In this section,
we set up our problem.
We recall that
two types of blowup solutions are predicted
in Section \ref{sec_4.2} - Section \ref{sec_4.4}.
However,
we here discuss the case (II) only,
since the case (I) is easier.
Our strategy in this paper is basically the same as in \cite{Harada_2}
(the original idea comes from \cite{Cortazar,del_Pino}).
Let ${\sf M}(t)$ be the unique solution of
\[
\begin{cases}
\frac{d}{dt}{\sf M}=f({\sf M})-f_2({\sf M})
\qquad
\text{for }
t>0,
\\
{\sf M}(t)|_{t=0}
=
{\sf M}_0,
\qquad
\text{where }
{\sf M}_0={\sf U}_\infty(x)|_{|x|=1}.
\end{cases}
\]
\subsection{Setting}
\label{sec_5.1}
From the observation in Section \ref{sec_4},
we look for solutions of the form
\begin{align}
\label{eq5.1}
u(x,t)
&=
\dis
\lambda^{-\frac{n-2}{2}}
{\sf Q}(y)
\chi_2
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1(y)
\chi_1
-
{\sf U}_{\sf c}(x,t)
(1-\chi_1)
\\
& \quad
-
(
\theta(x)
+
\Theta_J(x,t)
)
(1-\chi_2)
\chi_3
+
u_1(x,t),
\nonumber
\end{align}
where
\begin{align*}
{\sf U}_{\sf c}(x,t)
&=
\eta^\frac{2}{1-q}{\sf U}(\xi)
\chi_2
+
{\sf U}_\infty(x)
(1-\chi_2)
\chi_4
+
{\sf M}(t)(1-\chi_4),
\\
\theta(x)
&=
\theta_0(x)
+
\cdots
+
\theta_L(x)
\qquad
(\text{see Section \ref{sec_4.5}}),
\\
\Theta_J(x,t)
&=
{\sf D}_J^{-1}
{\sf B}_1
(T-t)^{\frac{\gamma}{2}+J}
e_J(z)
\qquad
(\text{see Section \ref{sec_4.2} and Section \ref{sec_4.4}}),
\\
u_1(x,t)
&=
\lambda(t)^{-\frac{n-2}{2}}
\epsilon(y,t)
\chi_{\sf in}
+
\eta^\frac{2}{1-q}
v(\xi,t)
\chi_{\sf mid}
+
w(x,t).
\end{align*}
The function $\lambda(t)$ is unknown at this moment;
it will be determined in Section \ref{sec_6.1}.
Other parameters $\eta(t)$ and $\sigma(t)$ are defined by
\begin{itemize}
\item
$\eta(t)=(T-t)^{\gamma_J}$
\quad
($J\in\N$)
\quad
with
$\gamma_J=J(\frac{2}{1-q}-\gamma)^{-1}$,
\item
$\sigma(t)=
-{\sf A}_1^{-1}
\eta(t)^\frac{2}{1-q}
\lambda(t)^{\frac{n-2}{2}}$.
\end{itemize}
New variables $y$, $\xi$ and $z$ are defined by
\begin{itemize}
\item $y=\lambda(t)^{-1}x$,
\item $\xi=\eta(t)^{-1}x$,
\item $z=(T-t)^{-\frac{1}{2}}x$.
\end{itemize}
Let $\chi(r)$ be a smooth cut off function satisfying
$\chi(r)
=
\begin{cases}
1 & \text{for } 0\leq r<1 \\
0 & \text{for } r>2
\end{cases}$
and put
\begin{itemize}
\item
$\dis\chi_{\sf in}=\chi(|y|/{\sf R}_{\sf in})$,
where ${\sf R}_{\sf in}>0$ is a large constant,
\item
$\chi_{\sf mid}=\chi(|\xi|/{\sf R}_{\sf mid})$,
where ${\sf R}_{\sf mid}>0$ is a large constant,
\item
$\dis\chi_0=\chi(|z|/{\sf r}_0)$,
where ${\sf r}_0>0$ is a small constant,
\item
$\dis\chi_1=\chi(|y|/{\sf l}_1(t))$,
where ${\sf l}_1(t)=|\sigma(t)|^{-\frac{1}{n-2}}$,
\item
$\dis\chi_2=\chi(|\xi|/{\sf l}_2(t))$,
where ${\sf l}_2(t)=(T-t)^{-{\sf b}}$
and
${\sf b}>0$ is a small constant,
\item
$\chi_3=\chi(|x|/{\sf r}_3)$,
where ${\sf r}_3>0$ is a small constant,
\item
$\chi_4=\chi(|x|)$.
\end{itemize}
The constants
${\sf R}_{\sf in},{\sf R}_{\sf mid},{\sf r}_0,{\sf r}_3,{\sf b}$
will be chosen
in Section \ref{sec_6} - Section \ref{sec_8};
they depend only on $q,n,J$.
We remark that
we cut off $\theta(x)$, $\Theta_J(x,t)$ and ${\sf U}_\infty(x)$
by using $\chi_3,\chi_4$
in \eqref{eq5.1},
since they are not bounded as $|x|\to\infty$.
\subsection{Derivation of equations for $\epsilon(y,t)$, $v(\xi,t)$, $w(x,t)$}
\label{sec_5.2}
We here derive
equations for $\epsilon(y,t)$, $v(\xi,t)$ and $w(x,t)$.
To do that,
we substitute \eqref{eq5.1} into \eqref{eq1.1}
and compute each term.
We first give computations for $u_t$.
\begin{align*}
\pa_t
(
\lambda^{-\frac{n-2}{2}}
{\sf Q}
\chi_2
)
&=
-\lambda^{-\frac{n+2}{2}}
\lambda
\dot\lambda
(\Lambda_y{\sf Q})
\chi_2
+
\lambda^{-\frac{n-2}{2}}
{\sf Q}
\dot\chi_2
\\
&=
-\lambda^{-\frac{n+2}{2}}
\lambda
\dot\lambda
(\Lambda_y{\sf Q})
\chi_1
-
\underbrace{
\lambda^{-\frac{n+2}{2}}
\lambda
\dot\lambda
(\Lambda_y{\sf Q})
(\chi_2-\chi_1)
}_{=g_0}
+
\underbrace{
\lambda^{-\frac{n-2}{2}}
{\sf Q}
\dot\chi_2
}_{=g_1},
\\
\pa_t
(
\lambda^{-\frac{n-2}{2}}\sigma
T_1
\chi_1
)
&=
\underbrace{
(
\pa_t(
\lambda^{-\frac{n-2}{2}}\sigma
T_1
)
)
\chi_1
}_{=g_2}
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1
\dot\chi_1,
\\
\pa_t
({\sf U}_{\sf c}(x,t)(1-\chi_1))
&=
(
\pa_t{\sf U}_{\sf c}
)
(1-\chi_1)
-
{\sf U}_{\sf c}
\dot\chi_1
\\
&=
\{
\underbrace{
(
\pa_t
(
\eta^\frac{2}{1-q}
{\sf U}
)
)
\chi_2
}_{=g_3}
+
\eta^\frac{2}{1-q}
{\sf U}
\dot\chi_2
-
{\sf U}_\infty
\dot\chi_2
\chi_4
\}
(1-\chi_1)
\\
& \quad
+
\dot {\sf M}(t)
(1-\chi_4)
(1-\chi_1)
-
{\sf U}_{\sf c}
\dot\chi_1
\\
&=
g_3
+
\eta^\frac{2}{1-q}
{\sf U}
\dot\chi_2
-
{\sf U}_\infty
\dot\chi_2
+
\dot {\sf M}(t)
(1-\chi_4)
-
{\sf U}_{\sf c}
\dot\chi_1,
\\
\pa_t(\theta(1-\chi_2)\chi_3)
&=
-
\underbrace{
\theta
\dot\chi_2
\chi_3
}_{=g_4},
\\
\pa_t(\Theta_J(1-\chi_2)\chi_3)
&=
\dot\Theta_J
(1-\chi_2)
\chi_3
-
\Theta_J
\dot\chi_2
\chi_3,
\\
\pa_tu_1
&=
\lambda^{-\frac{n-2}{2}}
\epsilon_t\chi_{\sf in}
-
\underbrace{
\lambda^{-\frac{n+2}{2}}
\lambda\dot\lambda
(\Lambda_y\epsilon)\chi_{\sf in}
+
\lambda^{-\frac{n-2}{2}}
\epsilon
\dot\chi_{\sf in}
}_{=h_1[\epsilon]}
+
\eta^\frac{2}{1-q}
v_t
\chi_{\sf mid}
\\
& \qquad
+
\eta^{\frac{2}{1-q}-1}
\dot\eta
(\Lambda_\xi v)
\chi_{\sf mid}
+
\underbrace{
\eta^\frac{2}{1-q}
v
\dot\chi_{\sf mid}
}_{=k_1[v]}
+
w_t,
\end{align*}
where $\Lambda_\xi=\frac{2}{1-q}-\xi\cdot\nabla_\xi$.
Therefore
we get
\begin{align*}
u_t
&=
-
\lambda^{-\frac{n+2}{2}}
\lambda\dot\lambda
(\Lambda_y{\sf Q})
\chi_1
+
\underbrace{
(
\lambda^{-\frac{n-2}{2}}\sigma T_1
+
{\sf U}_{\sf c}
)
\dot\chi_1
}_{=g_5}
+
\underbrace{
(
-
\eta^\frac{2}{1-q}{\sf U}
+
{\sf U}_\infty
\chi_4
+
\Theta_J
\chi_3
)
\dot\chi_2
}_{=g_6}
\\
&\quad
-
\dot{\sf M}
(1-\chi_4)
-
\dot\Theta_J
(1-\chi_2)\chi_3
+
\lambda^{-\frac{n-2}{2}}
\epsilon_t\chi_{\sf in}
+
\eta^\frac{2}{1-q}
v_t
\chi_{\sf mid}
\\
& \quad
+
\eta^{\frac{2}{1-q}-1}
\dot\eta
(\Lambda_\xi v)
\chi_{\sf mid}
+
w_t
+
g_0+g_1+g_2+g_3+g_4
+
h_1+k_1
\\
&=
\lambda^{-\frac{n-2}{2}}
\epsilon_t\chi_{\sf in}
+
\eta^\frac{2}{1-q}
v_t
\chi_{\sf mid}
+
\eta^{\frac{2}{1-q}-1}
\dot\eta
(\Lambda_\xi v)
\chi_{\sf mid}
+
w_t
\\
& \quad
-
\lambda^{-\frac{n+2}{2}}
\lambda\dot\lambda
(\Lambda_y{\sf Q})
\chi_1
-
\dot\Theta_J
(1-\chi_2)
\chi_3
-
\dot{\sf M}
(1-\chi_4)
+
g_{0\sim6}
+
h_1
+
k_1.
\end{align*}
Next
we compute $\Delta_xu$.
The first two terms in \eqref{eq5.1} are estimated as
\begin{align*}
\Delta_x
&(
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
\chi_2
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1(y)\chi_1
)
\\
&=
\lambda^{-\frac{n+2}{2}}
(\Delta_y{\sf Q})
\chi_2
+
\underbrace{
\eta^{-1}
\lambda^{-\frac{n}{2}}
(\nabla_y{\sf Q}\cdot\nabla_\xi\chi_2)
+
\eta^{-2}
\lambda^{-\frac{n-2}{2}}
{\sf Q}
(\Delta_\xi\chi_2)
}_{=g_0'}
\\
& \quad
+
\lambda^{-\frac{n+2}{2}}
\sigma
(\Delta_yT_1)
\chi_1
+
\underbrace{
2\lambda^{-\frac{n+2}{2}}
\sigma
\nabla_yT_1
\cdot\nabla_y\chi_1
}_{=g_1'}
+
\lambda^{-\frac{n+2}{2}}
\sigma
T_1(\Delta_y\chi_1).
\end{align*}
The remaining terms are computed as
\begin{align*}
\Delta_x
{\sf U}_{\sf c}
&=
\Delta_x
\{
\eta^\frac{2}{1-q}{\sf U}
\chi_2
+
{\sf U}_\infty
(1-\chi_2)
\chi_4
+
{\sf M}(t)(1-\chi_4)
\}
\\
&=
\eta^\frac{2q}{1-q}
(\Delta_\xi{\sf U})
\chi_2
+
(\Delta_x{\sf U}_\infty)
(1-\chi_2)
\chi_4
\\
& \quad
+
2\eta^{-1}
\nabla_x(
\eta^\frac{2}{1-q}{\sf U}
-
{\sf U}_\infty
\chi_4
)
\cdot\nabla_\xi\chi_2
\\
& \quad
+
\eta^{-2}
(
\eta^\frac{2}{1-q}
{\sf U}
-
{\sf U}_\infty
\chi_4
)
(\Delta_\xi\chi_2)
+
\underbrace{
2
(\nabla_x{\sf U}_\infty\cdot\nabla_x\chi_4)
(1-\chi_2)
}_{=g_{{\sf out},1}'}
\\
& \quad
-
\underbrace{
2{\sf U}_\infty
(\nabla_x\chi_2\cdot\nabla_x\chi_4)
}_{=0}
+
\underbrace{
{\sf U}_\infty
(1-\chi_2)
(\Delta_x\chi_4)
}_{=g_{{\sf out},2}'}
-
\underbrace{
{\sf M}(t)
\Delta_x\chi_4
}_{=g_{{\sf out},3}'},
\end{align*}
\begin{align*}
\Delta_x
\{
{\sf U}_{\sf c}
(1-\chi_1)
\}
&=
(\Delta_x{\sf U}_{\sf c})
(1-\chi_1)
-
\underbrace{
2\lambda^{-1}
\nabla_x{\sf U}_{\sf c}\cdot\nabla_y\chi_1
}_{=g_2'}
-
\lambda^{-2}
{\sf U}_{\sf c}
(\Delta_y\chi_1)
\\
&=
\eta^\frac{2q}{1-q}
(\Delta_\xi{\sf U})
(1-\chi_1)
\chi_2
+
\underbrace{
(\Delta_x{\sf U}_\infty)
(1-\chi_1)
(1-\chi_2)
\chi_4
}_{=(\Delta_x{\sf U}_\infty)(1-\chi_2)\chi_4}
\\
& \quad
+
2\eta^{-1}
\nabla_x(
\eta^\frac{2}{1-q}{\sf U}
-
{\sf U}_\infty
\chi_4
)
\cdot\nabla_\xi\chi_2
+
\eta^{-2}
(
\eta^\frac{2}{1-q}
{\sf U}
-
{\sf U}_\infty
\chi_4
)
(\Delta_\xi\chi_2)
\\
& \quad
-
\lambda^{-2}
{\sf U}_{\sf c}
(\Delta_y\chi_1)
+
g_2'
+
g_{{\sf out},1}'+g_{{\sf out},2}'+g_{{\sf out},3}',
\\
\Delta_x
\{
\theta
(1-\chi_2)
\chi_3
\}
&=
(\Delta_x\theta)
(1-\chi_2)
\chi_3
-
\underbrace{
2
(\nabla_x\theta\cdot\nabla_x\chi_2)
\chi_3
}_{=g_3'}
-
\underbrace{
\theta(\Delta_x\chi_2)
\chi_3
}_{=g_4'}
\\
& \quad
+
\underbrace{
2
(\nabla_x\theta\cdot\nabla_x\chi_3)
(1-\chi_2)
}_{=g_{{\sf out},4}'}
-
\underbrace{
2\theta
(\nabla_x\chi_2\cdot\nabla_x\chi_3)
}_{=0}
+
\underbrace{
\theta
(1-\chi_2)
(\Delta_x\chi_3)
}_{=g_{{\sf out},5}'},
\\
\Delta_x
\{
\Theta_J
(1-\chi_2)
\chi_3
\}
&=
(\Delta_x\Theta_J)
(1-\chi_2)
\chi_3
-
\underbrace{
2\eta^{-1}
(\nabla_x\Theta_J\cdot\nabla_\xi\chi_2)
\chi_3
}_{=
2\eta^{-1}
\nabla_x(\Theta_J\chi_3)\cdot\nabla_\xi\chi_2
}
-
\eta^{-2}
\Theta_J(\Delta_\xi\chi_2)
\chi_3
\\
& \quad
+
\underbrace{
2
(\nabla_x\Theta_J\cdot\nabla_x\chi_3)
(1-\chi_2)
}_{=g_{{\sf out},6}'}
-
\underbrace{
2\Theta_J
(\nabla_x\chi_2\cdot\nabla_x\chi_3)
}_{=0}
+
\underbrace{
\Theta_J
(1-\chi_2)
(\Delta_x\chi_3)
}_{=g_{{\sf out},7}'}.
\end{align*}
Combining these estimates,
we get
\begin{align*}
\Delta_x
&
(u-u_1)
=
\Delta_x
(
\lambda^{-\frac{n-2}{2}}{\sf Q}(y)
\chi_2
+
\lambda^{-\frac{n-2}{2}}\sigma
T_1(y)\chi_1
-
{\sf U}_{\sf c}
(1-\chi_1)
-
(\theta+\Theta_J)
(1-\chi_2)
\chi_3
)
\\
&=
\lambda^{-\frac{n+2}{2}}
(\Delta_y{\sf Q})
\chi_2
+
\lambda^{-\frac{n+2}{2}}
\sigma
(\Delta_yT_1)
\chi_1
+
\underbrace{
\lambda^{-2}
(
\lambda^{-\frac{n-2}{2}}
\sigma T_1
+
{\sf U}_{\sf c}
)
(\Delta_y\chi_1)
}_{=g_5'}
\\
& \quad
-\eta^\frac{2q}{1-q}
(\Delta_\xi{\sf U})
(1-\chi_1)
\chi_2
-
(\Delta_x{\sf U}_\infty)
(1-\chi_2)
\chi_4
-
(\Delta_x\theta)
(1-\chi_2)
\chi_3
-
(\Delta_x\Theta_J)
(1-\chi_2)
\chi_3
\\
& \quad
-
\underbrace{
2\eta^{-1}
\nabla_x(
\eta^\frac{2}{1-q}{\sf U}
-
{\sf U}_\infty
\chi_4
-
\Theta_J
\chi_3
)
\cdot\nabla_\xi\chi_2
}_{=g_6'}
-
\underbrace{
\eta^{-2}
(
\eta^\frac{2}{1-q}
{\sf U}
-
{\sf U}_\infty
\chi_4
-
\Theta_J
\chi_3
)
(\Delta_\xi\chi_2)
}_{=g_7'}
\\
& \quad
+
g_0'+\cdots+g_4'
+
g_{{\sf out},1}'+\cdots+g_{{\sf out},7}'
\\
&=
-
\underbrace{
f(\lambda^{-\frac{n-2}{2}}{\sf Q})
\chi_2
}_{={\sf t}_1}
-
\lambda^{-\frac{n+2}{2}}
\sigma
(VT_1)
\chi_1
+
\lambda^{-\frac{n+2}{2}}
\sigma
(H_yT_1)
\chi_1
-
\underbrace{
f_2(\eta^\frac{2}{1-q}{\sf U})
(1-\chi_1)
\chi_2
}_{={\sf t}_2}
\\
& \quad
-
\underbrace{
\{
f_2
({\sf U}_\infty)
+
q{\sf U}_\infty^{q-1}
(\theta+\Theta_J)
\chi_3
+
(\Delta_x\theta-q{\sf U}_\infty^{q-1}\theta)
\chi_3
\}
(1-\chi_2)
\chi_4
}_{={\sf t}_3}
\\
& \quad
-
(\Delta_x\Theta_J-q{\sf U}_\infty^{q-1}\Theta_J)
(1-\chi_2)
\chi_3
+
g_0'+\cdots+g_7'
+
g_{{\sf out},1}'+\cdots+g_{{\sf out},7}'.
\end{align*}
We write ${\sf t}_i$ ($i=1,2,3$) as
\begin{align*}
{\sf t}_1
&=
f(\lambda^{-\frac{n-2}{2}}{\sf Q})
\chi_2
\\
&=
-
\underbrace{
\{
f(u)
-
f(\lambda^{-\frac{n-2}{2}}{\sf Q})
-
\lambda^{-2}
V
(
u
-
\lambda^{-\frac{n-2}{2}}
{\sf Q}
)
\}
\chi_2
}_{={\sf N}_1[\epsilon,v,w]}
+
f(u)
\chi_{2}
-
\lambda^{-2}
V
(
u
-
\lambda^{-\frac{n-2}{2}}
{\sf Q}
)
\chi_{2}
\\
&=
f(u)
\chi_{2}
-
\lambda^{-2}
V
(
\lambda^{-\frac{n-2}{2}}
\sigma
T_1
\chi_1
+
u_1
)
\chi_{2}
-
\underbrace{
\lambda^{-2}
V
(
{\sf U}_{\sf c}
(1-\chi_1)
+
(\theta+\Theta_J)
(1-\chi_2)
)
\chi_{2}
}_{=g_8'}
+
N_1,
\\
{\sf t}_2
&=
f_2
(\eta^\frac{2}{1-q}{\sf U})
(1-\chi_1)
\chi_2
\\
&=
\underbrace{
\{
f_2(u)
+
f_2(\eta^\frac{2}{1-q}{\sf U})
-
q
\eta^{-2}
{\sf U}^{q-1}
(u+\eta^\frac{2}{1-q}{\sf U})
\}
(1-\chi_{1})
\chi_{2}
}_{=N_2[\epsilon,v,w]}
\\
& \quad
-
f_2(u)
(1-\chi_{1})
\chi_{2}
+
\underbrace{
q
\eta^{-2}
{\sf U}^{q-1}
(u+\eta^\frac{2}{1-q}{\sf U}-u_1)
(1-\chi_{1})
\chi_{2}
}_{=g_9'}
+
q
\eta^{-2}
{\sf U}^{q-1}
u_1
(1-\chi_{1})
\chi_{2},
\\
{\sf t}_3
&=
\{
f_2
({\sf U}_\infty)
+
q{\sf U}_\infty^{q-1}
(\theta+\Theta_J)
\chi_3
+
(\Delta_x\theta-q{\sf U}_\infty^{q-1}\theta)
\chi_3
\}
(1-\chi_2)
\chi_4
\\
&=
-
\underbrace{
\{
f(u)
-
f_2(u)
-
f_2({\sf U}_\infty)
+
q{\sf U}_\infty^{q-1}
(u+{\sf U}_\infty)
-
(\Delta_x\theta-q{\sf U}_\infty^{q-1}\theta)
\chi_3
\}
(1-\chi_{2})
\chi_4
}_{=N_3[\epsilon,v,w]}
\\
& \quad
+
\{
f(u)
-
f_2(u)
\}
(1-\chi_{2})
\chi_4
+
\underbrace{
q
{\sf U}_\infty^{q-1}
(u+{\sf U}_\infty+(\theta+\Theta_J)\chi_3-u_1)
(1-\chi_{2})
\chi_4
}_{=g_{10}'}
\\
& \quad
+
q{\sf U}_\infty^{q-1}
u_1
(1-\chi_{2})
\chi_4.
\end{align*}
This implies
\begin{align*}
{\sf t}_1
+
{\sf t}_2
+
{\sf t}_3
&=
f(u)
\chi_{2}
-
\lambda^{-2}
V
(
\lambda^{-\frac{n-2}{2}}
\sigma
T_1
\chi_1
+
u_1
)
\chi_{2}
\\
& \quad
-
f_2(u)
(1-\chi_{1})
\chi_{2}
+
q
\eta^{-2}
{\sf U}^{q-1}
u_1
(1-\chi_{1})
\chi_{2}
\\
& \quad
+
\{
f(u)
-
f_2(u)
\}
(1-\chi_{2})
\chi_4
+
q{\sf U}_\infty^{q-1}
u_1
(1-\chi_{2})
\chi_4
\\
& \quad
+
g_8'+g_9'+g_{10}'
+
N_1+N_2+N_3
\\
&=
f(u)
\chi_4
-
f_2(u)
\chi_4
+
\underbrace{
f_2(u)
\chi_1
}_{=N_4[\epsilon,v,w]}
-
\lambda^{-2}
V
\lambda^{-\frac{n-2}{2}}
\sigma
T_1
\chi_1
\\
& \quad
-
\lambda^{-2}
V
u_1
\chi_2
+
q
\eta^{-2}
{\sf U}^{q-1}
u_1
(1-\chi_{1})
\chi_{2}
+
q{\sf U}_\infty^{q-1}
u_1
(1-\chi_{2})
\chi_4
\\
& \quad
+
g_8'+g_9'+g_{10}'
+
N_1
+
N_2
+
N_3.
\end{align*}
Therefore
it follows that
\begin{align*}
\Delta_x
(u-u_1)
&=
-{\sf t}_1
-
\lambda^{-\frac{n+2}{2}}
\sigma
(VT_1)
\chi_1
+
\lambda^{-\frac{n+2}{2}}
\sigma
(H_yT_1)
\chi_1
-
{\sf t}_2
-
{\sf t}_3
\\
& \quad
-
(\Delta_x\Theta_J-q{\sf U}_\infty^{q-1}\Theta_J)
(1-\chi_2)
\chi_3
+
g_{0\sim7}'
+
g_{{\sf out},1\sim7}'
\\
&=
\lambda^{-\frac{n+2}{2}}
\sigma
(H_yT_1)
\chi_1
-
(\Delta_x\Theta_J-q{\sf U}_\infty^{q-1}\Theta_J)
(1-\chi_2)
\chi_3
\\
& \quad
+
\lambda^{-2}
V
u_1
\chi_2
-
q
\eta^{-2}
{\sf U}^{q-1}
u_1
(1-\chi_{1})
\chi_{2}
-
q{\sf U}_\infty^{q-1}
u_1
(1-\chi_{2})
\chi_4
\\
& \quad
-
f(u)
\chi_4
+
f_2(u)
\chi_4
+
g_{0\sim10}'
+
g_{{\sf out},1\sim7}'
+
N_{1\sim4}.
\end{align*}
Furthermore
we verify that
\begin{align*}
\Delta_xu_1
&=
\lambda^{-\frac{n+2}{2}}(\Delta_y\epsilon)
\chi_{\sf in}
+
\underbrace{
2\lambda^{-\frac{n+2}{2}}
\nabla_y\epsilon\cdot\nabla_y\chi_{\sf in}
+
\lambda^{-\frac{n+2}{2}}
\epsilon\Delta_y\chi_{\sf in}}_{=h_1'[\epsilon]}
\\
& \quad
+
\eta^\frac{2q}{1-q}
(\Delta_\xi v)
\chi_{\sf mid}
+
\underbrace{
2\eta^\frac{2q}{1-q}
(\nabla_\xi v\cdot\nabla_\xi\chi_{\sf mid})
+
\eta^\frac{2q}{1-q}
v(\Delta_\xi\chi_{\sf mid})
}_{=k_1'[v]}
+
\Delta_xw
\\
&=
\lambda^{-\frac{n+2}{2}}(H_y\epsilon)
\chi_{\sf in}
+
\eta^\frac{2q}{1-q}
(\Delta_\xi v-q{\sf U}^{q-1}v)
\chi_{\sf mid}
+
(\Delta_xw-q{\sf U}_\infty^{q-1}w)
\\
&
\quad
-
\lambda^{-\frac{n+2}{2}}V\epsilon
+
\eta^\frac{2q}{1-q}
q{\sf U}^{q-1}
v
\chi_{\sf mid}
+
q{\sf U}_\infty^{q-1}
w
+
h_1'[\epsilon]
+
k_1'[v].
\end{align*}
As a consequence,
we obtain
\begin{align*}
\Delta_xu
&=
\Delta_x(u-u_1)
+
\Delta_xu_1
\\
&=
\lambda^{-\frac{n+2}{2}}
\sigma
(H_yT_1)
\chi_1
-
(\Delta_x\Theta_J-q{\sf U}_\infty^{q-1}\Theta_J)
(1-\chi_2)
\chi_3
\\
& \quad
+
\lambda^{-\frac{n+2}{2}}(H_y\epsilon)\chi_{\sf in}
+
\eta^\frac{2q}{1-q}
(\Delta_\xi v-q{\sf U}^{q-1}(1-\chi_1)v)
\chi_{\sf mid}
+
(\Delta_xw-q{\sf U}_\infty^{q-1}w)
\\
& \quad
+
\lambda^{-2}
V(\eta^\frac{2}{1-q}v\chi_{{\sf mid}}+w)\chi_2
-
q
(
\eta^{-2}
{\sf U}^{q-1}
(1-\chi_1)
-
{\sf U}_\infty^{q-1}
)
w
\chi_2
\\
& \quad
+
\underbrace{
q{\sf U}_\infty^{q-1}
w
(1-\chi_4)
}_{=N_5[w]}
-
f(u)
\chi_4
+
f_2(u)
\chi_4
+
g_{0\sim10}'
+
g_{{\sf out},1\sim7}'
+
N_{1\sim4}.
\end{align*}
Since $H_yT_1=-\Lambda_y{\sf Q}$,
we conclude
\begin{align*}
\lambda^{-\frac{n-2}{2}}
&
\epsilon_t
\chi_{\sf in}
+
\eta^\frac{2}{1-q}
v_t
\chi_{\sf mid}
+
w_t
=
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1
+
\lambda^{-\frac{n+2}{2}}
(H_y\epsilon)
\chi_{\sf in}
\\
&
+
\eta^\frac{2q}{1-q}
(\Delta_\xi v-q{\sf U}^{q-1}(1-\chi_1)v)
\chi_{\sf mid}
-
\eta^{\frac{2}{1-q}-1}
\dot\eta
(\Lambda_\xi v)
\chi_{\sf mid}
+
(\Delta_xw-q{\sf U}_\infty^{q-1}w)
\\
&
+
\underbrace{
\lambda^{-2}
V(\eta^\frac{2}{1-q}v\chi_{{\sf mid}}+w)
\chi_2
}_{=F_1[v,w]}
-
\underbrace{
q
(
\eta^{-2}
{\sf U}^{q-1}
(1-\chi_1)
-
{\sf U}_\infty^{q-1}
)
w
\chi_2
}_{=F_2[w]}
\\
&
+
\underbrace{
\{
\dot{\sf M}
+
f(u)
-
f_2(u)
\}
(1-\chi_4)
}_{=N_6[w]}
-
g_{0\sim6}
+
g_{0\sim10}'
+
g_{{\sf out},1\sim7}'
+
N_{1\sim5}
-
h_1
+
h_1'
-
k_1
+
k_1'.
\end{align*}
For simplicity,
we write
\begin{itemize}
\item
$g=
g_{0\sim6}
+
g_{0\sim10}'
+
g_{{\sf out},1\sim7}'$,
\item
$N=N_{1\sim6}$,
\item
$h[\epsilon]=-h_1[\epsilon]+h_1'[\epsilon]$,
\item
$k[v]=-k_1[v]+k_1'[v]$.
\end{itemize}
We introduce another cut off function:
\[
\chi_{{\sf sq}}
=
\chi(|\xi|/\sqrt{{\sf R}_{\sf mid}}).
\]
We now decompose this equation into three equations.
\begin{align}\label{eq5.2}
\begin{cases}
\dis
\lambda^2\epsilon_t
=
H_y\epsilon
+
(
\lambda\dot\lambda
-
\sigma
)
\Lambda_y{\sf Q}
+
\lambda^\frac{n+2}{2}
F_1[v,w]
&
\text{in } |y|<{\sf R}_{\sf in},\
t\in(0,T),
\\[2mm]
\eta^2v_t
=
\Delta_\xi v
-
q{\sf U}^{q-1}
(1-\chi_1)
v
-
\eta
\dot\eta
(\Lambda_\xi v)
\chi_{\sf mid}
\\
\qquad
\quad
+
\eta^{-\frac{2q}{1-q}}
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1(1-\chi_{\sf in})
\\
\qquad
\quad
+
\eta^{-\frac{2q}{1-q}}
(
F_1[v,w]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[w]
\chi_{{\sf sq}}
+
h[\epsilon]
)
\\
\qquad
\quad
+
\eta^{-\frac{2q}{1-q}}
(
g
+
N[\epsilon,v,w]
)
{\bf 1}_{|\xi|<1}
&
\text{in } |\xi|<{\sf R}_{\sf mid},\
t\in(0,T),
\\[2mm]
w_t
=
\Delta_xw
-
q
{\sf U}_\infty^{q-1}
w
+
(
F_1[v,w]
-
F_2[w]
)
(1-\chi_{{\sf sq}})
\\
\qquad
\quad
+
k[v]
+
(
g
+
N[\epsilon,v,w]
)
{\bf 1}_{|\xi|>1}
&
\text{in } x\in\R^n,\
t\in(0,T).
\end{cases}
\end{align}
For convenience,
we here write the explicit expressions of $F_i$ ($i=1,2$):
\begin{itemize}
\item
$F_1[v,w]=\lambda^{-2}
V(\eta^\frac{2}{1-q}v\chi_{{\sf mid}}+w)
\chi_2$,
\item
$F_2[w]=
q
(
\eta^{-2}
{\sf U}^{q-1}
(1-\chi_1)
-
{\sf U}_\infty^{q-1}
)
w
\chi_2$.
\end{itemize}
Once a solution
$(\lambda(t),\epsilon(y,t),v(\xi,t),w(x,t))$ of \eqref{eq5.2}
is obtained,
the function $u(x,t)$ described in \eqref{eq5.1} gives a solution of \eqref{eq1.1}.
This decomposition is crucial in this procedure,
which is in the same spirit as \cite{Cortazar,del_Pino}.
We appropriately choose the initial data and the boundary condition
in each equation of \eqref{eq5.2}
so that solutions of \eqref{eq5.2} decay enough as $t\to T$.
\subsection{Fixed point argument}
\label{sec_5.3}
Let ${\sf d}_1\in(0,1)$ be a small constant,
and define
\[
{\sf l}_{\sf out}
=
L_2
(T-t)^{-\frac{1}{2}+{\sf b}_{\sf out}}
\qquad
\text{with }
{\sf b}_{\sf out}
=
\tfrac{{\sf d}_1}{2(\gamma+2J-\frac{2}{1-q}+3{\sf d}_1)}.
\]
The constant $L_2$ is chosen so that
\[
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^{\gamma+2J+3{\sf d}_1}
=
{\sf U}_\infty(x)
\qquad
\text{on }
|z|={\sf l}_{\sf out}.
\]
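For the reader's convenience,
we indicate how ${\sf b}_{\sf out}$ arises.
Here we assume,
as the formula for ${\sf b}_{\sf out}$ suggests,
that ${\sf U}_\infty(x)$ behaves like $c|x|^\frac{2}{1-q}$ near $x=0$.
Since $z=(T-t)^{-\frac{1}{2}}x$,
we see that
\[
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^{\gamma+2J+3{\sf d}_1}
=
(T-t)^{-\frac{{\sf d}_1}{2}}
|x|^{\gamma+2J+3{\sf d}_1}.
\]
Equating this with $c|x|^\frac{2}{1-q}$,
we get
$|x|^{\gamma+2J-\frac{2}{1-q}+3{\sf d}_1}=c(T-t)^\frac{{\sf d}_1}{2}$,
hence
\[
|z|
=
(T-t)^{-\frac{1}{2}}
|x|
=
L_2
(T-t)^{-\frac{1}{2}+{\sf b}_{\sf out}}
\qquad
\text{with }
{\sf b}_{\sf out}
=
\tfrac{{\sf d}_1}{2(\gamma+2J-\frac{2}{1-q}+3{\sf d}_1)}.
\]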
Furthermore
we define
\begin{align*}
{\cal W}(x,t)
&=
\begin{cases}
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^\gamma
& \text{for } |z|<1,
\\
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^{\gamma+2J+3{\sf d}_1}
& \text{for } 1<|z|<{\sf l}_{\sf out}(t),
\\
{\sf U}_\infty(x)
& \text{for }
{\sf l}_{\sf out}(t)\sqrt{T-t}<|x|<1,
\\
{\sf M}_0|x|^{-1}
& \text{for } |x|>1,
\end{cases}
\end{align*}
where ${\sf M}_0={\sf U}_\infty(x)|_{|x|=1}$,
and
\begin{align*}
{\cal V}(\xi,t)
&=
(T-t)^{{\sf d}_1}
(1+|\xi|^2)^\frac{\gamma}{2},
\qquad
\text{where }
\xi=\eta(t)^{-1}x.
\end{align*}
From the definition of ${\sf l}_{\sf out}$,
the function ${\cal W}(x,t)$ is continuous on $\R^n\times[0,T]$.
We now introduce two inequalities
to define the functional spaces used in a fixed point argument.
\begin{align}
\label{eq5.3}
|w_1(x,t)|
&\leq
{\sf R}_1^{-1}
{\cal W}(x,t)
\qquad
\text{for }
(x,t)\in\R^n\times[0,T-\delta],
\\
\label{eq5.4}
|v_1(\xi,t)|
&\leq
{\cal V}(\xi,t)
\qquad
\text{for }
(\xi,t)\in\overline{B}_{{\sf R}_{\sf mid}}\times[0,T-\delta],
\end{align}
where ${\sf R}_1>0$ is a large constant.
\begin{enumerate}[(i)]
\item
Let $X_1^{(\delta)}$ be the set of all continuous functions
on $\R^n\times[0,T-\delta]$
satisfying \eqref{eq5.3},
\item
let $X_2^{(\delta)}$ be the set of all continuous functions
on $\overline{B}_{{\sf R}_{\sf mid}}\times[0,T-\delta]$
satisfying
\eqref{eq5.4},
and
\item
set
$X^{(\delta)}=X_1^{(\delta)}\times X_2^{(\delta)}$.
\item
As special cases,
we write
$X_1=X_1^{(\delta)}|_{\delta=0}$,
$X_2=X_2^{(\delta)}|_{\delta=0}$,
$X=X^{(\delta)}|_{\delta=0}$.
\end{enumerate}
Since a function $w_1(x,t)\in X_1^{(\delta)}$ is not defined in
$t\in(T-\delta,T)$,
we extend it to a continuous function on $\R^n\times[0,T]$:
\[
w_1^{\sf ext}(x,t)=
\begin{cases}
w_1(x,t)
& \text{if } (x,t)\in\R^n\times[0,T-\delta],
\\
w_1(x,T-\delta)
& \text{if } (x,t)\in\R^n\times[T-\delta,T].
\end{cases}
\]
Similarly
we define
\[
v_1^{\sf ext}(\xi,t)
=
\begin{cases}
v_1(\xi,t)
& \text{if }
(\xi,t)\in
\overline{B}_{{\sf R}_{\sf mid}}\times[0,T-\delta],
\\
v_1(\xi,T-\delta)
& \text{if }
(\xi,t)\in
\overline{B}_{{\sf R}_{\sf mid}}\times[T-\delta,T],
\\
v_1({\sf R}_{\sf mid}\frac{\xi}{|\xi|},t)
& \text{if }
(\xi,t)\in(\R^n\setminus\overline{B}_{{\sf R}_{\sf mid}})\times[0,T-\delta],
\\
v_1({\sf R}_{\sf mid}\frac{\xi}{|\xi|},T-\delta)
& \text{if }
(\xi,t)\in(\R^n\setminus\overline{B}_{{\sf R}_{\sf mid}})\times[T-\delta,T].
\end{cases}
\]
For our purpose,
these extended functions must be
$v_1^{\sf ext}(\xi,t)=w_1^{\sf ext}(x,t)=0$ for $t=T$.
Hence we additionally define
\begin{align*}
\bar{w}_1(x,t)
&=
\begin{cases}
w_1^{\sf ext}(x,t) & \text{if } x\in\R^n,\ t\in[0,T-\delta],
\\
{\cal W}(x,t) & \text{if }x\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
w_1^{\sf ext}(x,t)>{\cal W}(x,t),
\\
w_1^{\sf ext}(x,t) & \text{if } x\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
|w_1^{\sf ext}(x,t)|\leq{\cal W}(x,t),
\\
-{\cal W}(x,t) & \text{if } x\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
w_1^{\sf ext}(x,t)<-{\cal W}(x,t),
\end{cases}
\\
\bar{v}_1(\xi,t)
&=
\begin{cases}
v_1^{\sf ext}(\xi,t) & \text{if } \xi\in\R^n,\ t\in[0,T-\delta],
\\
{\cal V}(\xi,t) & \text{if }\xi\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
v_1^{\sf ext}(\xi,t)>{\cal V}(\xi,t),
\\
v_1^{\sf ext}(\xi,t) & \text{if } \xi\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
|v_1^{\sf ext}(\xi,t)|\leq{\cal V}(\xi,t),
\\
-{\cal V}(\xi,t) & \text{if } \xi\in\R^n,\ t\in[T-\delta,T]
\text{\ \ and\ \ }
v_1^{\sf ext}(\xi,t)<-{\cal V}(\xi,t)
\end{cases}
\end{align*}
and put
\[
\tilde{w}_1(x,t)
=
\bar{w}_1(x,t)
\chi_\delta(t),
\qquad
\tilde{v}_1(\xi,t)
=
\bar{v}_1(\xi,t)
\chi_\delta(t).
\]
The cut off function $\chi_\delta(t)$ is a smooth function satisfying
$\chi_{\delta}(t)=1$ for $t<T-\delta$
and
$\chi_{\delta}(t)=0$ for $t>T-\frac{1}{2}\delta$.
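The precise choice of $\chi_\delta$ is immaterial;
for instance,
writing $\chi$ for a radial cut off function with
$\chi=1$ on $[0,1]$ and $\chi=0$ on $[2,\infty)$,
one admissible choice is
\[
\chi_\delta(t)
=
1
-
\chi\left(\tfrac{2(T-t)}{\delta}\right),
\]
since $t<T-\delta$ gives $\tfrac{2(T-t)}{\delta}>2$
and
$t>T-\tfrac{1}{2}\delta$ gives $\tfrac{2(T-t)}{\delta}<1$.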
From this definition,
we see that
$\tilde{w}_1(x,t)\in C(\R^n\times[0,T])$ if $w_1\in X_1^{(\delta)}$
and
\begin{align}
\nonumber
\tilde{w}_1(x,t)
&=
w_1(x,t)
\qquad
\text{for } (x,t)\in\R^n\times[0,T-\delta],
\\
\nonumber
|\tilde{w}_1(x,t)|
&\leq
{\sf R}_1^{-1}
\begin{cases}
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^\gamma
& \text{for } |z|<1,\ t\in[0,T],
\\
(T-t)^{{\sf d}_1}
(T-t)^{\frac{\gamma}{2}+J}
|z|^{\gamma+2J+3{\sf d}_1}
& \text{for } 1<|z|<{\sf l}_{\sf out}(t),\ t\in[0,T],
\\
{\sf U}_\infty(x)
& \text{for }
\sqrt{T-t}\cdot{\sf l}_{\sf out}(t)<|x|<1,\ t\in[0,T],
\\
{\sf M}_0|x|^{-1}
& \text{for } |x|>1,\ t\in[0,T],
\end{cases}
\\
\label{eq5.5}
\tilde w_1(x,t)
&=
0
\qquad \text{for } x\in\R^n,\ t\in[T-\tfrac{1}{2}\delta,T].
\end{align}
Similarly
$\tilde{v}_1(\xi,t)\in C(\R^n\times[0,T])$ if $v_1\in X_2^{(\delta)}$
and
\begin{align}
\nonumber
\tilde{v}_1(\xi,t)
&=
v_1(\xi,t)
\qquad
\text{for } (\xi,t)\in\overline{B}_{{\sf R}_{\sf mid}}\times[0,T-\delta],
\\
\nonumber
|\tilde{v}_1(\xi,t)|
&\leq
(T-t)^{{\sf d}_1}(1+|\xi|^2)^\frac{\gamma}{2}
\qquad
\text{for } (\xi,t)\in\R^n\times[0,T],
\\
\label{eq5.6}
\tilde{v}_1(\xi,t)
&=
0 \qquad
\text{for } (\xi,t)\in\R^n\times[T-\tfrac{1}{2}\delta,T].
\end{align}
We introduce the following metrics on $X^{(\delta)}$ and $X$:
\begin{align*}
d_{X^{(\delta)}}((w_1,v_1),(w_2,v_2))
&=
\sup_{(x,t)\in\R^n\times[0,T-\delta]}
|w_1-w_2|
+
\sup_{(\xi,t)\in\overline{B}_{{\sf R}_{\sf mid}}\times[0,T-\delta]}
|v_1-v_2|,
\\
d_{X}((\tilde w_1,\tilde v_1),(\tilde w_2,\tilde v_2))
&=
\sup_{(x,t)\in\R^n\times[0,T]}
|\tilde w_1-\tilde w_2|
+
\sup_{(\xi,t)\in\overline{B}_{{\sf R}_{\sf mid}}\times[0,T]}
|\tilde v_1-\tilde v_2|.
\end{align*}
From this construction,
we easily see that
the mapping
${\mathcal T}_1(w_1,v_1)=(\tilde w_1,\tilde v_1)$ is continuous
from
$(X^{(\delta)},d_{X^{(\delta)}})$
to
$(X,d_{X})$.
To formulate our problem as a fixed point problem,
we now define the mapping ${\mathcal T}_2:X\to X\subset X^{(\delta )}$.
For given $(w_1,v_1)\in X^{(\delta)}$,
we solve the first equation of \eqref{eq5.2}.
\[
\lambda^2\epsilon_t
=
H_y\epsilon
+
(
\lambda\dot\lambda
-
\sigma
)
\Lambda_y{\sf Q}
+
\lambda^\frac{n+2}{2}
F_1[\tilde v_1,\tilde w_1]
\qquad
\text{in } |y|<{\sf R}_{\sf in},\
t\in(0,T),
\]
where
$(\tilde w_1,\tilde v_1)={\mathcal T}_1(w_1,v_1)$.
In this step,
we determine $\lambda(t)$ and
obtain a solution $\epsilon(y,t)$ satisfying $\epsilon(y,t)\to0$ as $t\to T$.
Next
we construct $v(\xi,t)$ as a solution of the second equation of \eqref{eq5.2}.
\begin{align*}
\eta^2v_t
&=
\Delta_\xi v
-
q{\sf U}^{q-1}
(1-\chi_1)
v
-
\eta
\dot\eta
(\Lambda_\xi v)
+
\eta^{-\frac{2q}{1-q}}
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1(1-\chi_{\sf in})
\\
& \quad
+
\eta^{-\frac{2q}{1-q}}
(
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[\tilde w_1]
\chi_{{\sf sq}}
+
h[\epsilon]
)
\\
& \quad
+
\eta^{-\frac{2q}{1-q}}
(
g
+
N[\epsilon,\tilde v_1,\tilde w_1]
)
{\bf 1}_{|\xi|<1}
\qquad
\text{in } |\xi|<{\sf R}_{\sf mid},\
t\in(0,T).
\end{align*}
Here
$\lambda(t)$ and $\epsilon(y,t)$ are functions obtained in the first step,
and
$(\tilde w_1,\tilde v_1)={\mathcal T}_1(w_1,v_1)$ are given functions.
We will see that $v\in X_2^{(\delta)}$.
We finally consider
\begin{align*}
w_t
&=
\Delta_xw
-
q
{\sf U}_\infty^{q-1}
w
+
(
F_1[\tilde v_1,\tilde w_1]
-
F_2[\tilde w_1]
)
(1-\chi_{{\sf sq}})
\\
& \quad
+
k[v]
+
(
g
+
N[\epsilon,\tilde v_1,\tilde w_1]
)
{\bf 1}_{|\xi|>1}
\qquad
\text{in } x\in\R^n,\
t\in(0,T).
\end{align*}
As in the second step,
$\lambda(t)$ and $\epsilon(y,t)$ are functions obtained in the first step,
$v(\xi,t)$ is the function constructed in the second step
and $(\tilde w_1,\tilde v_1)={\mathcal T}_1(w_1,v_1)$.
We can construct a solution $w(x,t)$ of this equation
such that $w\in X_1^{(\delta)}$.
By using this $v(\xi,t)$ and $w(x,t)$,
we define
\[
{\cal T}_2
(\tilde w_1,\tilde v_1)
=
(w,v).
\]
This mapping ${\mathcal T}_2:X\to X\subset X^{(\delta)}$ is continuous.
Hence
the mapping
${\mathcal T}={\mathcal T}_2\circ{\mathcal T}_1:X^{(\delta)}\to X^{(\delta)}$
is also continuous.
Furthermore
we can show that
the mapping ${\mathcal T}:X^{(\delta)}\to X^{(\delta)}$ is compact
for any $\delta>0$.
Therefore
the Schauder fixed point theorem
shows that
${\mathcal T}:X^{(\delta)}\to X^{(\delta)}$ has a fixed point
$(w_1,v_1)\in X^{(\delta)}$.
This gives a solution $u^{(\delta)}(x,t)$ of \eqref{eq1.1}
with the desired properties.
However this $u^{(\delta)}(x,t)$ is defined only on $\R^n\times[0,T-\delta]$.
Finally
we take $\delta\to0$ and
obtain a solution $u(x,t)=\lim_{\delta\to0}u^{(\delta)}(x,t)$,
which proves Theorem \ref{Thm1}.
For the rest of the paper,
we investigate the mapping ${\cal T}_2$ and prove (i) - (iii).
\begin{enumerate}[(i)]
\item ${\cal T}_2:X\to X^{(\delta)}$ is well defined,
\item ${\cal T}_2:(X,d_X)\to(X^{(\delta)},d_{X^{(\delta)}})$ is continuous and
\item ${\cal T}_2:(X,d_X)\to(X^{(\delta)},d_{X^{(\delta)}})$ is compact.
\end{enumerate}
Sections \ref{sec_6} - \ref{sec_8} are devoted to the proof of (i).
We do not give precise proofs of (ii) - (iii),
since those are standard.
In fact,
since our construction $(\tilde w_1,\tilde v_1)\mapsto(w,v)$
is unique at each step, (ii) follows.
Furthermore
since
the H\"older continuity
is bounded by the $L^\infty$ bound
from standard parabolic estimates (see Theorem 6.29 in \cite{Lieberman} p. 131),
(iii) holds.
\subsection{Notations}
We fix
\begin{itemize}
\item ${\sf R}_{\sf in}={\sf R}_{\sf mid}=-\log T$,
\item ${\sf R}_1=\log {\sf R}_{\sf in}$.
\end{itemize}
Throughout this paper,
the symbol $C$ denotes a generic positive constant
depending only on $q,n,J$
(independent of
$\delta$, ${\sf b}$, ${\sf d}_1$,
${\sf R}_{\sf in}$, ${\sf R}_{\sf mid}$, ${\sf R}_1$, ${\sf r}_0$, ${\sf r}_3$).
For real numbers $X$ and $Y$,
we write $X\lesssim Y$
if there exists a generic positive constant $C$
such that $|X|\leq C|Y|$.
\section{In the inner region}
\label{sec_6}
In this section,
we construct a pair of solutions $(\lambda(t),\epsilon(y,t))$
in the same procedure as in the proof of Proposition 7.1 \cite{Cortazar}
(see also Lemma 4.1 in \cite{del_Pino}).
Here we follow a simplified version of
their method obtained in \cite{Harada,Harada_2}
(see Section 6 in \cite{Harada_2}).
By virtue of Lemma \ref{Lem3.3},
we can skip one procedure in the proof of Proposition 7.1 in \cite{Cortazar},
which provides a more direct approach.
Let $\mu_i^{({\sf R}_{\sf in})}$ be the $i$th eigenvalue of
\[
\begin{cases}
-H_ye=\mu e & \text{in } |y|<4{\sf R}_{\sf in},
\\
e=0 & \text{on } |y|=4{\sf R}_{\sf in}
\end{cases}
\]
and
$\psi_i^{({\sf R}_{\sf in})}$ be the associated eigenfunction ($i\in\N$).
Here we take $\|\psi_i^{({\sf R}_{\sf in})}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}=1$.
For simplicity,
we write
\begin{itemize}
\item
$\mu_i=\mu_i^{({\sf R}_{\sf in})}$,
\item
$\psi_i=\psi_i^{({\sf R}_{\sf in})}$.
\end{itemize}
\subsection{Choice of $\lambda(t)$}
\label{sec_6.1}
We fix $(v_1,w_1)\in X^{(\delta)}$,
and write ${\mathcal T}_1(v_1,w_1)=(\tilde v_1,\tilde w_1)\in X$
(see Section \ref{sec_5.3}).
In this subsection,
we determine $\lambda(t)$
such that
$\lambda(t)\to0$ as $t\to T$.
As is explained in Section \ref{sec_5.3},
we consider
\begin{align}\label{eq6.1}
&
\begin{cases}
\dis
\lambda^2\epsilon_t
=
H_y\epsilon
+
(
\lambda\dot\lambda
-
\sigma
)
\Lambda_y{\sf Q}
+
\lambda^\frac{n-2}{2}
V
(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1)
&
\text{in } |y|<4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon=0
&
\text{on } |y|=4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon=\epsilon_0
&
\text{on } |y|<4{\sf R}_{\sf in},\ t=0.
\end{cases}
\end{align}
Initial data $\epsilon_0$ will be chosen in Section \ref{sec_6.2}
so that $\epsilon(y,t)\to0$ as $t\to T$.
We define $\lambda(t)$ as a solution of
\begin{equation}\label{eq6.2}
\begin{cases}
(\lambda\dot\lambda-\sigma)
(
\Lambda_y{\sf Q},
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
+
\lambda^\frac{n-2}{2}
(
V
(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
=
0
\qquad
\text{for } t\in(0,T),
\\
\lambda(t)=0
\qquad
\text{for } t=T.
\end{cases}
\end{equation}
We note that
the inner product
$(V
(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}$ in \eqref{eq6.2}
is nonlinear as a function of $\lambda$,
since it is expressed by
\[
(
V(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
=
\int_{B_{4{\sf R}_{\sf in}}}
V(y)
\{
\eta^\frac{2}{1-q}\tilde v_1(\lambda\eta^{-1}y,t)+\tilde w_1(\lambda y,t)
\}
\psi_2(y)
dy.
\]
We put
${\sf a}=\lambda^\frac{6-n}{2}$.
Then \eqref{eq6.2} is written as
\begin{equation}\label{eq6.3}
\begin{cases}
\frac{2}{6-n}
(\dot{\sf a}+\frac{6-n}{2{\sf A}_1}\eta^\frac{2}{1-q})
(
\Lambda_y{\sf Q},
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
+
(
V
\eta^\frac{2}{1-q}\tilde v_1(\eta^{-1}{\sf a}^{\frac{2}{6-n}}y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\\[2mm]
\hspace{45mm}
+
(
V
\tilde w_1({\sf a}^{\frac{2}{6-n}}y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
=
0
\qquad
\text{for } t\in(0,T),
\\
{\sf a}=0
\qquad
\text{for } t=T.
\end{cases}
\end{equation}
To construct a solution of \eqref{eq6.3},
we introduce
\begin{align*}
{\cal A}
&=
\{{\sf a}\in C([0,T]),\
|{\sf a}(t)-{\sf a}_0(t)|\leq(T-t)^{\frac{{\sf d}_1}{2}}{\sf a}_0(t)\},
\\
& \quad
\text{where }
{\sf a}_0(t)
=
\frac{6-n}{2{\sf A}_1}
\int_t^T
\eta(t_1)^\frac{2}{1-q}dt_1
=
\kappa_1
(T-t)
\eta^\frac{2}{1-q}.
\end{align*}
We now show that
for any $(v_1,w_1)\in X^{(\delta)}$,
there exists a unique solution ${\sf a}(t)\in{\cal A}$ of \eqref{eq6.3}.
The existence is proved by a fixed point argument.
In fact,
for any ${\sf a}_1\in {\cal A}$,
we define ${\sf a}(t)$ as the unique solution of
\eqref{eq6.3} with
$\tilde v_1(\eta^{-1}{\sf a}^\frac{2}{6-n}y,t)$,
$\tilde w_1({\sf a}^\frac{2}{6-n}y,t)$
replaced by
$\tilde v_1(\eta^{-1}{\sf a}_1^\frac{2}{6-n}y,t)$,
$\tilde w_1({\sf a}_1^\frac{2}{6-n}y,t)$,
respectively.
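Since \eqref{eq6.3} then reduces to a linear ODE for ${\sf a}(t)$
with the terminal condition ${\sf a}(T)=0$,
this ${\sf a}(t)$ admits the explicit representation
(abbreviating the sum of the two inner products involving
$\tilde v_1$, $\tilde w_1$ by $P_{{\sf a}_1}(t)$)
\[
{\sf a}(t)
=
{\sf a}_0(t)
+
\frac{6-n}
{2
(
\Lambda_y{\sf Q},
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}}
\int_t^T
P_{{\sf a}_1}(t_1)
dt_1.
\]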
From Lemma \ref{Lem3.1},
we easily see that
\begin{align*}
|
(
V
\eta^\frac{2}{1-q}
\tilde v_1
(\eta^{-1}{\sf a}_1^{\frac{2}{6-n}}y,t),
&
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
+
(
V
\tilde w_1({\sf a}_1^{\frac{2}{6-n}}y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
|
(V,\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\qquad
\text{for }
(\tilde v_1,\tilde w_1)\in X,\
{\sf a}_1\in{\cal A}.
\end{align*}
Hence
the solution ${\sf a}(t)$ satisfies
\[
|{\sf a}(t)-{\sf a}_0(t)|
\lesssim
(T-t)^{{\sf d}_1+1}
\eta^\frac{2}{1-q}
\lesssim
(T-t)^{{\sf d}_1}
{\sf a}_0.
\]
The Schauder fixed point theorem shows that
the mapping ${\sf a}_1\mapsto{\sf a}$ has a fixed point in ${\cal A}$,
which gives a solution of \eqref{eq6.3}.
We next discuss the uniqueness of solutions to \eqref{eq6.3}.
Let ${\sf a}_1(t),{\sf a}_2(t)\in{\cal A}$ be solutions of \eqref{eq6.3}.
For simplicity,
we write $\lambda_i(t)={\sf a}_i(t)^\frac{6-n}{2}$
($i=1,2$).
A direct computation shows that
\begin{align*}
(
V\tilde v_1(\eta^{-1}\lambda_iy,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
&=
\int_{|y|<4{\sf R}_{\sf in}}
V(y)
\tilde v_1(\eta^{-1}\lambda_iy,t)
\psi_2(y)
dy
\\
&=
\eta^n\lambda_i^{-n}
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_i^{-1}}
V(\eta\lambda_i^{-1}\xi)
\tilde v_1(\xi,t)
\psi_2(\eta\lambda_i^{-1}\xi)
d\xi.
\end{align*}
By change of variables,
we get
\begin{align*}
(
V\tilde v_1
&
(\eta^{-1}\lambda_1y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
-
(
V\tilde v_1(\eta^{-1}\lambda_2y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\\
&=
\eta^n\lambda_1^{-n}
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_1^{-1}}
V(\eta\lambda_1^{-1}\xi)
\tilde v_1(\xi,t)
\psi_2(\eta\lambda_1^{-1}\xi)
d\xi
\\
& \quad
-
\eta^n\lambda_2^{-n}
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_2^{-1}}
V(\eta\lambda_2^{-1}\xi)
\tilde v_1(\xi,t)
\psi_2(\eta\lambda_2^{-1}\xi)
d\xi
\\
&=
\eta^n
(\lambda_1^{-n}-\lambda_2^{-n})
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_1^{-1}}
V(\eta\lambda_1^{-1}\xi)
\tilde v_1(\xi,t)
\psi_2(\eta\lambda_1^{-1}\xi)
d\xi
\\
& \quad
+
\eta^n\lambda_2^{-n}
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_1^{-1}}
(
V(\eta\lambda_1^{-1}\xi)
\psi_2(\eta\lambda_1^{-1}\xi)
-
V(\eta\lambda_2^{-1}\xi)
\psi_2(\eta\lambda_2^{-1}\xi)
)
\tilde v_1(\xi,t)
d\xi
\\
& \quad
+
\eta^n\lambda_2^{-n}
\left(
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_1^{-1}}
-
\int_{|\xi|<4{\sf R}_{\sf in}\eta\lambda_2^{-1}}
\right)
V(\eta\lambda_2^{-1}\xi)
\psi_2(\eta\lambda_2^{-1}\xi)
\tilde v_1(\xi,t)
d\xi.
\end{align*}
Since $\lambda_1^{-1}\lambda_2\to1$ as $t\to T$,
it follows from Lemma \ref{Lem3.1} that
\begin{align*}
|
(
V\tilde v_1
&
(\eta^{-1}\lambda_1y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
-
(
V\tilde v_1(\eta^{-1}\lambda_2y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n
|\lambda_1^{-n}-\lambda_2^{-n}|
\int_{|y|<4{\sf R}_{\sf in}}
|
V(y)
\psi_2(y)
|
dy
\\
& \quad
+
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n\lambda_2^{-n}
\int_{|y|<4{\sf R}_{\sf in}}
|
V(y)
\psi_2(y)
-
V(y_1)
\psi_2(y_1)
|
dy
\qquad
(y_1=\lambda_1\lambda_2^{-1}y)
\\
& \quad
+
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n
\lambda_2^{-n}
\left|
\left(
\int_{|y|<4{\sf R}_{\sf in}}
-
\int_{|y|<4{\sf R}_{\sf in}\lambda_1\lambda_2^{-1}}
\right)
V(\lambda_1\lambda_2^{-1}y)
\psi_2(\lambda_1\lambda_2^{-1}y)
dy
\right|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n
|\lambda_1^{-n}-\lambda_2^{-n}|
+
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n\lambda_2^{-n}
|1-\lambda_1\lambda_2^{-1}|
\\
& \quad
+
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_1^n
\lambda_2^{-n}
{\sf R}_{\sf in}^{-2}
|1-\lambda_1\lambda_2^{-1}|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_2^{-1}
|\lambda_1-\lambda_2|
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_2^{-1}
{\sf a}_1^\frac{4-n}{2}
|{\sf a}_1-{\sf a}_2|.
\end{align*}
The same computation shows
\begin{align*}
|
(
V\tilde w_1
(\lambda_1y,t)
,
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
-
(
V
&
\tilde w_1
(\lambda_2y,t),
\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
|
\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_2^{-1}
{\sf a}_1^\frac{4-n}{2}
|{\sf a}_1-{\sf a}_2|.
\end{align*}
We recall that
$\tilde v_1=\tilde w_1=0$ for $t\in[T-\frac{\delta}{2},T]$
(see \eqref{eq5.5}, \eqref{eq5.6}).
Hence
${\sf a}_1(t)={\sf a}_2(t)={\sf a}_0(t)$ for $t\in[T-\frac{\delta}{2},T]$.
Therefore
since ${\sf a}_1,{\sf a}_2\in{\cal A}$ are solutions of \eqref{eq6.3},
there exists $B>0$ such that
\begin{align*}
|
\tfrac{d}{dt}
({\sf a}_1-{\sf a}_2)
|
&\lesssim
\begin{cases}
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_2^{-1}
{\sf a}_1^\frac{4-n}{2}
|{\sf a}_1-{\sf a}_2|
& \text{for }
t\in[0,T-\frac{\delta}{2}]
\\
0
& \text{for }
t\in[T-\frac{\delta}{2},T]
\end{cases}
\\
&\lesssim
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
\lambda_2^{-1}
{\sf a}_1^\frac{4-n}{2}
|_{t=T-\frac{\delta}{2}}
\cdot
|{\sf a}_1-{\sf a}_2|
\qquad
\text{for }
t\in[0,T]
\\
&\lesssim
\delta^{-B}
|{\sf a}_1-{\sf a}_2|
\qquad
\text{for }
t\in[0,T].
\end{align*}
Since ${\sf a}_1(t)={\sf a}_2(t)={\sf a}_0(t)$ for $t\in[T-\frac{\delta}{2},T]$,
this proves the uniqueness of solutions to \eqref{eq6.3} in ${\cal A}$.
\subsection{Construction of $\epsilon(y,t)$}
\label{sec_6.2}
We next construct a solution $\epsilon(y,t)$ of \eqref{eq6.1}.
Let $\lambda(t)$ be a solution of \eqref{eq6.2} constructed
in Section \ref{sec_6.1} from $(v_1,w_1)\in X^{(\delta)}$.
As is stated in the beginning of Section \ref{sec_6},
we directly solve \eqref{eq6.1}
(the approach in \cite{Cortazar} needs to introduce additional equations).
We choose ${\sf m}_1>1$ such that
(recall $n\geq5$)
\begin{align*}
-H_y|y|^{-2}
&>
-
\tfrac{1}{2}
\Delta_y|y|^{-2}
=
(n-4)
|y|^{-4}
\qquad
\text{for } |y|>{\sf m}_1,
\\
-H_y
|y|^{-(n-4)}
&>
-
\tfrac{1}{2}
\Delta_y|y|^{-(n-4)}
=
(n-4)
|y|^{-(n-2)}
\qquad
\text{for }
|y|>{\sf m}_1.
\end{align*}
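Both inequalities are elementary consequences of
$\Delta_y|y|^{-\alpha}=\alpha(\alpha+2-n)|y|^{-\alpha-2}$,
which gives
\[
-\Delta_y|y|^{-2}
=
2(n-4)|y|^{-4},
\qquad
-\Delta_y|y|^{-(n-4)}
=
2(n-4)|y|^{-(n-2)}.
\]
Recalling $H_y=\Delta_y+V$ with $V(y)=O(|y|^{-4})$,
we obtain for example
\[
-H_y|y|^{-2}
=
2(n-4)|y|^{-4}
-
V(y)|y|^{-2}
>
(n-4)|y|^{-4}
\qquad
\text{for } |y|>{\sf m}_1,
\]
once ${\sf m}_1$ is so large that
$|V(y)|<(n-4)|y|^{-2}$ for $|y|>{\sf m}_1$.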
The constant ${\sf m}_1$ depends only on $n$.
We here introduce three equations.
\begin{align}\label{eq6.4}
\nonumber
&
\begin{cases}
\dis
\lambda^2
\pa_t\epsilon_1
=
H_y\epsilon_1
+
(
\lambda\dot\lambda
-
\sigma
)
\Lambda_y{\sf Q}
&
\text{in } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_1=0
&
\text{on } |y|={\sf m}_1,\ |y|=4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_1=0
&
\text{in } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t=0,
\end{cases}
\\
\nonumber
&
\begin{cases}
\dis
\lambda^2
\pa_t\epsilon_2
=
H_y\epsilon_2
+
\lambda^\frac{n-2}{2}
V
(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1)
&
\text{in } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_2=0
&
\text{on } |y|={\sf m}_1,\ |y|=4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_2=0
&
\text{in } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t=0,
\end{cases}
\\
&
\begin{cases}
\dis
\lambda^2
\pa_t\epsilon_3
=
H_y\epsilon_3
+
\underbrace{
(
\lambda\dot\lambda
-
\sigma
)
(\Lambda_y{\sf Q})
\chi_{{\sf m}_1}
}_{=\mathcal{R}_1}
\\
\qquad
\qquad
+
\underbrace{
\lambda^\frac{n-2}{2}
V
(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1)
\chi_{{\sf m}_1}
+
l[\epsilon_1,\epsilon_2]
}_{=\mathcal{R}_2}
&
\text{in } |y|<4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_3=0
&
\text{on } |y|=4{\sf R}_{\sf in},\ t\in(0,T),
\\
\epsilon_3=\epsilon_0
&
\text{in } |y|<4{\sf R}_{\sf in},\ t=0,
\end{cases}
\end{align}
where
$l[\epsilon_1,\epsilon_2]=
-
2\nabla_y
(\epsilon_1+\epsilon_2)
\cdot
\nabla_y\chi_{{\sf m}_1}
-
(\epsilon_1+\epsilon_2)
(\Delta_y\chi_{{\sf m}_1})$.
We can verify that
\[
\epsilon
=
\epsilon_1
(1-\chi_{{\sf m}_1})
+
\epsilon_2
(1-\chi_{{\sf m}_1})
+
\epsilon_3
\qquad
\text{with }
\chi_{{\sf m}_1}
=
\begin{cases}
1 & |y|<{\sf m}_1 \\
0 & |y|>2{\sf m}_1
\end{cases}
\]
gives a solution of \eqref{eq6.1}.
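Indeed,
the correction term $l[\epsilon_1,\epsilon_2]$ collects
the commutators of $\Delta_y$ with the cut off:
for $i=1,2$,
\[
\Delta_y
\big(
\epsilon_i
(1-\chi_{{\sf m}_1})
\big)
=
(\Delta_y\epsilon_i)
(1-\chi_{{\sf m}_1})
-
2\nabla_y\epsilon_i\cdot\nabla_y\chi_{{\sf m}_1}
-
\epsilon_i
(\Delta_y\chi_{{\sf m}_1}),
\]
and summing these relations over $i=1,2$ yields
$l[\epsilon_1,\epsilon_2]$ in the equation for $\epsilon_3$.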
We now construct $\epsilon_1(y,t)$.
We recall that
$(\lambda\dot\lambda-\sigma)\Lambda_y{\sf Q}
\lesssim(T-t)^{{\sf d}_1}\sigma|y|^{-(n-2)}$
from \eqref{eq6.2}.
Therefore
from the choice of ${\sf m}_1$,
a comparison argument shows
\begin{align}\label{eq6.5}
|\epsilon_1(y,t)|
&\lesssim
{\sf m}_1^{n-4}
(T-t)^{{\sf d}_1}
\sigma
|y|^{-(n-4)}
\qquad
\text{for } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T),
\\
\label{eq6.6}
|\nabla_y\epsilon_1(y,t)|
&\lesssim
{\sf m}_1^{n-3}
(T-t)^{{\sf d}_1}
\sigma
|y|^{-(n-3)}
\qquad
\text{for } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T).
\end{align}
Estimate \eqref{eq6.6} is obtained from
a comparison argument applied to the equation for $\pa_r\epsilon_1(y,t)$.
From the definition of $X^{(\delta)}$ (see Section \ref{sec_5.3}),
we easily see that
\[
\lambda^\frac{n-2}{2}
V(\eta^\frac{2}{1-q}\tilde v_1+\tilde w_1)
\lesssim
(T-t)^{{\sf d}_1}
\sigma
|y|^{-4}
\]
for
${\sf m}_1<|y|<4{\sf R}_{\sf in}$,
$t\in(0,T)$.
Hence
by a comparison argument,
we can show that
\begin{align}\label{eq6.7}
|\epsilon_2(y,t)|
\lesssim
{\sf m}_1^2
(T-t)^{{\sf d}_1}
\sigma
|y|^{-2}
\qquad
\text{for } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T).
\end{align}
This together with Lemma \ref{Lem3.5} implies
\begin{align}\label{eq6.8}
|\nabla_y\epsilon_2(y,t)|
\lesssim
{\sf m}_1^2
(T-t)^{{\sf d}_1}\sigma
|y|^{-2}
\qquad
\text{for } {\sf m}_1<|y|<4{\sf R}_{\sf in},\ t\in(0,T).
\end{align}
To construct $\epsilon_3(y,t)$,
we first investigate the unstable mode of \eqref{eq6.4}.
By the change of variables
$s=\int_0^t\frac{dt_1}{\lambda(t_1)^2}$,
it holds that
\begin{align*}
\pa_s
(\epsilon_3,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
&=
-
\mu_1
(\epsilon_3,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
+
(\mathcal{R}_1+\mathcal{R}_2,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}.
\end{align*}
Integrating this equation,
we get
\begin{align*}
e^{\mu_1s}
(\epsilon_3,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
-
(\epsilon_0,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
&=
\int_0^s
e^{\mu_1s_1}
(\mathcal{R}_1+\mathcal{R}_2,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
ds_1.
\end{align*}
From this observation,
we now take the initial data $\epsilon_0$ such that
\begin{align*}
\epsilon_0
=
\alpha_1
\psi_1
\qquad
\text{with }
\alpha_1
=
-
\int_0^\infty
e^{\mu_1s_1}
(\mathcal{R}_1+\mathcal{R}_2,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
ds_1.
\end{align*}
This integral is finite since $\mu_1$ is negative
(see Lemma \ref{Lem3.2}).
Then
it follows from Lemma \ref{Lem3.1}
and \eqref{eq6.5} - \eqref{eq6.8}
that
\begin{align*}
\alpha_1
&\lesssim
\int_0^\infty
e^{\mu_1s_1}
\{
(\lambda\dot\lambda-\sigma)
+
\lambda^\frac{n-2}{2}
\eta^\frac{2}{1-q}
(T-t)^{{\sf d}_1}
+
(T-t)^{{\sf d}_1}
\sigma
\}
ds_1
\\
&\lesssim
(T-t)^{{\sf d}_1}
\sigma
|_{t=0}
\int_0^\infty
e^{\mu_1s_1}
ds_1
\\
&\lesssim
(T-t)^{{\sf d}_1}
\sigma
|_{t=0}.
\end{align*}
In the last line,
we use the fact that $\mu_1$ converges to some $\bar\mu_1<0$ as ${\sf R}_{\sf in}\to\infty$
(see Lemma \ref{Lem3.2}).
The same computation shows
\begin{align*}
(\epsilon_3,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
&=
-
e^{-\mu_1s}
\int_s^\infty
e^{\mu_1s_1}
(\mathcal{R}_1+\mathcal{R}_2,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
ds_1
\\
\nonumber
&
\lesssim
e^{-\mu_1s}
\int_s^\infty
e^{\mu_1s_1}
\{
(\lambda\dot\lambda-\sigma)
+
(T-t)^{{\sf d}_1}
\lambda^{\frac{n-2}{2}}
\eta^\frac{2}{1-q}
+
(T-t)^{{\sf d}_1}
\sigma
\}
ds_1
\\
&
\lesssim
(T-t)^{{\sf d}_1}
\sigma.
\end{align*}
We next derive estimates for the stable mode $\epsilon_3^\bot$
defined by
\[
\epsilon_{3}^\bot
=
\epsilon_3
-
(\epsilon_3,\psi_1)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\psi_1
-
(\epsilon_3,\psi_2)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\psi_2.
\]
The choice of $\lambda(t)$ immediately implies
$(\epsilon(t),\psi_2)_{L_y^2(B_{4{\sf R}_{\sf in}})}=0$
for $t\in(0,T)$.
Hence it follows from \eqref{eq6.5} and \eqref{eq6.7} that
\begin{align}\label{eq6.9}
\nonumber
(\epsilon_3,\psi_2)_{L_y^2(B_{4{\sf R}_{\sf in}})}
&=
(
-(\epsilon_1+\epsilon_2)(1-\chi_{{\sf m}_1}),\psi_2
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\\
\nonumber
&\lesssim
{\sf m}_1^2
(T-t)^{{\sf d}_1}
\sigma
\left(
1+
\int_{1<|y|<4{\sf R}_{\sf in}}
(
|y|^{-(n-4)}+|y|^{-2}
)
|y|^{-(n-2)}
dy
\right)
\\
&\lesssim
{\sf m}_1^2
{\sf R}_{\sf in}^{6-n}
(T-t)^{{\sf d}_1}
\sigma
\qquad
\text{for }
t\in(0,T).
\end{align}
We multiply \eqref{eq6.4} by $\epsilon_3^\bot$ and integrate by parts.
Then we get from \eqref{eq6.5} - \eqref{eq6.8} that
\begin{align*}
\tfrac{\lambda^2}{2}
\tfrac{d}{dt}
\|\epsilon_{3}^{\bot}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2
&=
(\epsilon_{3}^{\bot},H_y\epsilon_{3})_{L_y^2(B_{4{\sf R}_{\sf in}})}
+
(
\mathcal{R}_1+\mathcal{R}_2,
\epsilon_{3}^{\bot}
)_{L_y^2(B_{4{\sf R}_{\sf in}})}
\\
&<
-
\tfrac{\mu_3}{2}
\|\epsilon_{3}^{\bot}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2
+
\tfrac{2}{\mu_3}
\|\mathcal{R}_1+\mathcal{R}_2\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2
\\
&<
-
\tfrac{\mu_3}{2}
\|\epsilon_{3}^{\bot}(s)\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2
+
\tfrac{C}{\mu_3}
(T-t)^{2{\sf d}_1}
\sigma^2.
\end{align*}
We note that $\epsilon_0^\bot=0$.
Hence
by change of variables $s=\int_0^t\frac{dt}{\lambda^2}$,
we get
\begin{align*}
\|\epsilon_{3}^{\bot}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2
\lesssim
\tfrac{1}{\mu_3}
e^{-\mu_3s}
\int_0^s
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
ds_1.
\end{align*}
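In more detail,
set $X(s)=\|\epsilon_{3}^{\bot}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2$;
the differential inequality above then reads
$X'(s)<-\mu_3X(s)+\tfrac{2C}{\mu_3}(T-t)^{2{\sf d}_1}\sigma^2$.
A sketch of the integration (with constants not optimized):
\begin{align*}
\tfrac{d}{ds}
\left(
e^{\mu_3s}
X(s)
\right)
<
\tfrac{2C}{\mu_3}
e^{\mu_3s}
(T-t)^{2{\sf d}_1}
\sigma^2,
\end{align*}
and integrating over $(0,s)$ with
$X(0)=\|\epsilon_0^\bot\|_{L_y^2(B_{4{\sf R}_{\sf in}})}^2=0$
gives the stated bound.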
Integrating by parts in $s_1$,
we observe that
\begin{align*}
\int_0^s
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
ds_1
&=
\left[
\tfrac{1}{\mu_3}
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
\right]_{s_1=0}^{s_1=s}
-
\tfrac{1}{\mu_3}
\int_0^s
e^{\mu_3s_1}
\frac{dt}{ds}
\frac{d}{dt}
\{
(T-t)^{2{\sf d}_1}
\sigma^2
\}
ds_1
\\
&<
\left[
\tfrac{1}{\mu_3}
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
\right]_{s_1=0}^{s_1=s}
+
C
\int_0^s
\tfrac{\lambda^2}{\mu_3(T-t)}
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
ds_1.
\end{align*}
Since $\tfrac{\lambda^2}{\mu_3(T-t)}\ll1$,
it holds that
\begin{align*}
\int_0^s
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
ds_1
<
C
\left[
\tfrac{1}{\mu_3}
e^{\mu_3s_1}
(T-t)^{2{\sf d}_1}
\sigma^2
\right]_{s_1=0}^{s_1=s}
\lesssim
\tfrac{1}{\mu_3}
e^{\mu_3s}
(T-t)^{2{\sf d}_1}
\sigma^2.
\end{align*}
Therefore
from Lemma \ref{Lem3.3},
we obtain
\begin{align}\label{eq6.10}
\|\epsilon_{3}^{\bot}\|_{L_y^2(B_{4{\sf R}_{\sf in}})}
\lesssim
\tfrac{1}{\mu_3}
(T-t)^{{\sf d}_1}
\sigma
\lesssim
{\sf R}_{\sf in}^\frac{n}{2}
(T-t)^{{\sf d}_1}
\sigma
\qquad
\text{for }
t\in(0,T).
\end{align}
We now go back to \eqref{eq6.4}.
We note from \eqref{eq6.9} - \eqref{eq6.10} and Lemma \ref{Lem3.5} that
\begin{align*}
|\epsilon_3(y,t)|
+
|\nabla_y\epsilon_3(y,t)|
&\lesssim
({\sf R}_{\sf in}^{6-n}+{\sf R}_{\sf in}^\frac{n}{2})(T-t)^{{\sf d}_1}\sigma
\qquad
\text{for }
|y|<4{\sf R}_{\sf in},\ t\in(0,T).
\end{align*}
Furthermore
we recall that
(see Lemma \ref{Lem3.1})
\[
\epsilon_0
=
\alpha_1
\psi_1
\lesssim
(T-t)^{{\sf d}_1}
\sigma
|_{t=0}
|y|^{-\frac{n-1}{2}}
e^{-\sqrt{|\mu_1|}\cdot|y|}
\qquad
\text{for }
|y|>1.
\]
Since
$\lambda^2\pa_t\epsilon_3=H_y\epsilon_3$ for $|y|>2{\sf m}_1$,
a comparison argument shows
\begin{equation}\label{eq6.11}
|\epsilon_3(y,t)|
\lesssim
{\sf m}_1^{n-\frac{9}{4}}
{\sf R}_{\sf in}^\frac{n}{2}
(T-t)^{{\sf d}_1}
\sigma
|y|^{-(n-\frac{9}{4})}
\qquad
\text{for }
2{\sf m}_1<|y|<4{\sf R}_{\sf in},\
t\in(0,T).
\end{equation}
Furthermore
we note that
$\epsilon_{3,r}=\pa_r\epsilon_3$ satisfies
$\pa_s\epsilon_{3,r}
=H_y\epsilon_{3,r}-(n-1)|y|^{-2}\epsilon_{3,r}+(\pa_rV)\epsilon_3$
for $|y|>2{\sf m}_1$.
Due to \eqref{eq6.11},
a comparison argument shows
\begin{equation*}
|\nabla_y\epsilon_3(y,t)|
\lesssim
{\sf m}_1^{n-\frac{5}{4}}
{\sf R}_{\sf in}^\frac{n}{2}
(T-t)^{{\sf d}_1}
\sigma
|y|^{-(n-\frac{5}{4})}
\qquad
\text{for }
2{\sf m}_1<|y|<4{\sf R}_{\sf in},\
t\in(0,T).
\end{equation*}
Therefore
since ${\sf m}_1$ depends only on $n$,
we conclude
\begin{align}
\label{eq6.12}
|\epsilon(y,t)|
&\lesssim
{\sf R}_{\sf in}^\frac{n}{2}
(T-t)^{{\sf d}_1}
\sigma
(1+|y|^2)^{-\frac{1}{2}(n-\frac{9}{4})}
\qquad
\text{for }
(y,t)\in\overline{B}_{4{\sf R}_{\sf in}}\times[0,T],
\\
\label{eq6.13}
|\nabla_y\epsilon(y,t)|
&\lesssim
{\sf R}_{\sf in}^\frac{n}{2}
(T-t)^{{\sf d}_1}
\sigma
(1+|y|^2)^{-\frac{1}{2}(n-\frac{5}{4})}
\qquad
\text{for }
(y,t)\in\overline{B}_{4{\sf R}_{\sf in}}\times[0,T].
\end{align}
\section{In the semiinner region}
\label{sec_7}
For simplicity,
we define
\begin{align*}
L_{\xi}v
=
\Delta_\xi v
-
q{\sf U}^{q-1}
(1-\chi_1)
v
-
\eta\dot\eta
(\Lambda_\xi v),
\qquad
\text{where }
\Lambda_\xi=\tfrac{2}{1-q}-\xi\cdot\nabla_\xi.
\end{align*}
In order to construct a solution $v(\xi,t)$ of
the second equation in \eqref{eq5.2},
we here consider
\begin{align}\label{eq7.1}
\begin{cases}
\eta^2v_t
=
L_\xi v
+
\eta^{-\frac{2q}{1-q}}
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1(1-\chi_{\sf in})
\\
\qquad
\quad
+
\eta^{-\frac{2q}{1-q}}
(
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[\tilde w_1]
\chi_{{\sf sq}}
+
h[\epsilon]
)
\\
\qquad
\quad
+
\eta^{-\frac{2q}{1-q}}
(
g
+
N[\epsilon,\tilde v_1,\tilde w_1]
)
{\bf 1}_{|\xi|<1}
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v=0
&
\text{on } |\xi|=4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v=0
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t=0.
\end{cases}
\end{align}
We remark that
the construction of solutions to \eqref{eq7.1}
satisfying $v(\xi,t)\to0$ as $t\to T$
is simpler than that of $\epsilon(y,t)$,
thanks to the stability of the trivial solution of
$v_t=\Delta_\xi v-q{\sf U}^{q-1}v$.
The goal of this section is to construct a solution of \eqref{eq7.1}
satisfying $v\in X_2\subset X_2^{(\delta)}$.
To do that,
we need to compute all terms on the right-hand side of \eqref{eq7.1}.
We collect them in Lemma \ref{Lem7.1} - Lemma \ref{Lem7.3}.
Their proofs are postponed to the Appendix.
Throughout this section,
\begin{itemize}
\item
$(\lambda(t),\epsilon(y,t))$ are solutions obtained
in Section {\rm\ref{sec_6}} from $(w_1,v_1)\in X^{(\delta)}$,
\item
$(\tilde w_1,\tilde v_1)\in X$ are extensions of $(w_1,v_1)\in X^{(\delta)}$,
namely $(\tilde w_1,\tilde v_1)=\mathcal{T}_1(w_1,v_1)$.
\end{itemize}
\begin{lem}\label{Lem7.1}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
{\rm(}independent of
$\delta$,
${\sf d}_1$,
${\sf b}$,
${\sf R}_{\sf in}$,
${\sf R}_{\sf mid}$,
${\sf R}_1$,
${\sf r}_0$,
${\sf r}_3${\rm)}
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then it holds that
\begin{align*}
\sum_{i=0}^6
|g_i|
{\bf 1}_{|\xi|<1}
+
\sum_{i=0}^{10}
|g_i'|
{\bf 1}_{|\xi|<1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|\xi|<1}.
\end{align*}
\end{lem}
\begin{lem}\label{Lem7.2}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then
\begin{align*}
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|<{\sf l}_1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|y|<{\sf l}_1},
\\
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|>{\sf l}_1}
{\bf 1}_{|\xi|<1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^\frac{2q}{1-q}
{\bf 1}_{|y|>{\sf l}_1}
{\bf 1}_{|\xi|<1},
\\
N_2[\epsilon,v,w]
{\bf 1}_{|\xi|<(T-t)^{{\sf q}_1}}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
|y|^{-2(1+{\sf c}_2)}
{\bf 1}_{|y|>{\sf l}_1}
{\bf 1}_{|\xi|<(T-t)^{{\sf q}_1}},
\\
N_2[\epsilon,v,w]
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1}
&\leq
{\sf c}_0
(T-t)^{2{\sf d}_1}
\eta^\frac{2q}{1-q}
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1},
\\
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|<1}
&=
0,
\\
N_4[\epsilon,\tilde v_1,\tilde w_1]
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|y|<2{\sf l}_2},
\\
N_5[\tilde w_1]
{\bf 1}_{|\xi|<1}
&=
0,
\\
N_6[\tilde w_1]
{\bf 1}_{|\xi|<1}
&=
0.
\end{align*}
\end{lem}
\begin{lem}\label{Lem7.3}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then
\begin{align*}
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{\sf in})
\chi_{\sf sq}
{\bf 1}_{|\xi|<1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
|y|^{-2(1+{\sf c}_2)}
{\bf 1}_{|y|>{\sf R}_{\sf in}}
{\bf 1}_{|\xi|<1},
\\
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{\sf in})
\chi_{\sf sq}
{\bf 1}_{|\xi|>1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{1<|\xi|<2\sqrt{{\sf R}_{\sf mid}}},
\\
F_2[\tilde w_1]
\chi_{\sf sq}
{\bf 1}_{|\xi|<1}
&\leq
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2}
{\bf 1}_{|\xi|<1},
\\
F_2[\tilde w_1]
\chi_{\sf sq}
{\bf 1}_{|\xi|>1}
&\leq
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{1<|\xi|<2\sqrt{{\sf R}_{\sf mid}}}.
\end{align*}
\end{lem}
To clarify the role of each term on the right-hand side of \eqref{eq7.1},
we divide them into four parts and introduce
\begin{align}
\label{eq7.2}
&
\begin{cases}
\eta^2
\pa_t
v_1
=
L_\xi
v_1
+
\eta^{-\frac{2q}{1-q}}
G_1
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_1
=
0
&
\text{on } |\xi|=4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_1
=
0
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t=0,
\end{cases}
\\
&
\label{eq7.3}
\begin{cases}
\eta^2
\pa_t
v_2
=
L_\xi
v_2
+
\eta^{-\frac{2q}{1-q}}
G_2
&
\text{in } |\xi|<2,\ t\in(0,T),
\\
v_2=0
&
\text{on } |\xi|=2,\ t\in(0,T),
\\
v_2
=
0
&
\text{in } |\xi|<2,\ t=0,
\end{cases}
\\
&
\label{eq7.4}
\begin{cases}
\eta^2
\pa_t
v_3
=
L_\xi
v_3
+
\eta^{-\frac{2q}{1-q}}
G_3
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_3=0
&
\text{on } |\xi|=4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_3
=
0
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t=0,
\end{cases}
\\
&
\label{eq7.5}
\begin{cases}
\eta^2
\pa_t
v_4
=
L_\xi
v_4
+
\eta^{-\frac{2q}{1-q}}
G_4
&
\text{in } \frac{1}{2}{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T),
\\
v_4=0
&
\text{on } |\xi|=\frac{1}{2}{\sf m}_2,\ |\xi|=4{\sf R}_{\sf mid},\
t\in(0,T),
\\
v_4
=
0
&
\text{in } \frac{1}{2}{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t=0.
\end{cases}
\end{align}
The constant ${\sf m}_2>1$ will be chosen later
such that $L_\xi\approx\Delta_\xi-q{\sf U}_\infty(\xi)^{q-1}$
for $|\xi|>{\sf m}_2$,
and
$G_i$ ($i=1,2,3,4$) are
\begin{align*}
G_1
&=
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1(1-\chi_{\sf in})
+
F_1[\tilde v_1,\tilde w_1](1-\chi_{\sf in})
\chi_{\sf sq}
{\bf 1}_{|\xi|<1}
+
h[\epsilon]
\\
& \quad
+
g
{\bf 1}_{|\xi|<1}
+
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|<{\sf l}_1}
+
N_2[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|<(T-t)^{{\sf q}_1}}
+
N_4[\epsilon,\tilde v_1,\tilde w_1],
\\
G_2
&=
-
F_2[\tilde w_1]
\chi_{\sf sq}
{\bf 1}_{|\xi|<1},
\\
G_3
&=
(
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[\tilde w_1]
\chi_{{\sf sq}}
)
{\bf 1}_{1<|\xi|<{\sf m}_2}
\\
& \quad
+
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|>{\sf l}_1}
{\bf 1}_{|\xi|<1}
+
N_2[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1},
\\
G_4
&=
(
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[\tilde w_1]
\chi_{{\sf sq}}
)
{\bf 1}_{|\xi|>{\sf m}_2}.
\end{align*}
By using the solutions $v_1$, $v_2$, $v_3$ and $v_4$,
we define $v_5$ as
\[
v
=
v_1+v_2\chi_A+v_3+v_4\chi_B+v_5,
\qquad
\chi_A=
\begin{cases}
1 & |\xi|<1 \\
0 & |\xi|>2,
\end{cases}
\qquad
\chi_B=
\begin{cases}
0 & |\xi|<\frac{1}{2}{\sf m}_2 \\
1 & |\xi|>{\sf m}_2.
\end{cases}
\]
From this definition,
$v_5$ satisfies
\begin{align}\label{eq7.6}
\begin{cases}
\eta^2
\pa_t
v_5
=
L_\xi
v_5
+
G_5
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_5=0
&
\text{on } |\xi|=4{\sf R}_{\sf mid},\ t\in(0,T),
\\
v_5=0
&
\text{in } |\xi|<4{\sf R}_{\sf mid},\ t=0,
\end{cases}
\end{align}
where
$G_5=
-
2\nabla_\xi v_2\cdot\nabla_\xi\chi_A
-
v_2\Delta\chi_A
-
2\nabla_\xi v_4\cdot\nabla_\xi\chi_B
-
v_4\Delta\chi_B$.
Before constructing solutions $v_i$ ($i=1,2,3,4,5$),
we prepare the following lemma.
We now assume
\begin{itemize}
\item $T^{\frac{1}{8}{\sf d}_1}<{\sf R}_1^{-1}$.
\end{itemize}
Since ${\sf d}_1$ will be determined by constants depending only on $q,n,J$,
this assumption holds if $T$ is sufficiently small.
\begin{lem}\label{Lem7.4}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then
\begin{align*}
G_1
&<
{\sf c}_0
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\lambda^{-2}
\eta^\frac{2}{1-q}
(1+|y|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|\xi|<1},
\\
G_3
&<
{\sf c}_0
{\sf m}_2^\gamma
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
(1+|\xi|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|\xi|<{\sf m}_2},
\\
G_4
&<
{\sf c}_0
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{|\xi|>{\sf m}_2}.
\end{align*}
\end{lem}
\begin{proof}
From \eqref{eq6.12} - \eqref{eq6.13},
we see that
\begin{align*}
h_1[\epsilon]
&=
-\lambda^{-\frac{n+2}{2}}
\lambda\dot\lambda
(\Lambda_y\epsilon)\chi_{\sf in}
+
\lambda^{-\frac{n+2}{2}}
\lambda^2
\epsilon
\pa_t\chi_{\sf in}
\\
&\lesssim
{\sf R}_{\sf in}^{\frac{n}{2}}
\sigma
\cdot
(T-t)^{{\sf d}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
{\bf 1}_{|y|<2{\sf R}_{\sf in}}
\\
&\lesssim
{\sf R}_{\sf in}^{\frac{n}{2}+4}
\sigma
\cdot
(T-t)^{{\sf d}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-2}
{\bf 1}_{|y|<2{\sf R}_{\sf in}},
\\
h_1'[\epsilon]
&=
2\lambda^{-\frac{n+2}{2}}
\nabla_y\epsilon\cdot\nabla_y\chi_{\sf in}
+
\lambda^{-\frac{n+2}{2}}
\epsilon\Delta_y\chi_{\sf in}
\\
&\lesssim
{\sf R}_{\sf in}^{\frac{n}{2}}
(T-t)^{{\sf d}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
|y|^{-2-(n-\frac{9}{4})}
{\bf 1}_{{\sf R}_{\sf in}<|y|<2{\sf R}_{\sf in}}.
\end{align*}
Since $T=e^{-{\sf R}_{\sf in}}$,
this implies
\begin{align*}
h[\epsilon]
&=
-h_1[\epsilon]+h_1'[\epsilon]
\\
&\lesssim
{\sf R}_{\sf in}^{-\frac{1}{8}(2n-9)}
(T-t)^{{\sf d}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-1-\frac{1}{16}(2n-9)}
{\bf 1}_{|y|<2{\sf R}_{\sf in}}.
\end{align*}
Due to this estimate
and
Lemma \ref{Lem7.1} - Lemma \ref{Lem7.3},
it holds that
\begin{align*}
G_1
&=
\lambda^{-\frac{n+2}{2}}
(\lambda\dot\lambda-\sigma)
(\Lambda_y{\sf Q})
\chi_1(1-\chi_{\sf in})
+
F_1[\tilde v_1,\tilde w_1](1-\chi_{\sf in})
\chi_{\sf sq}
{\bf 1}_{|\xi|<1}
+
h[\epsilon]
\\
& \quad
+
g
{\bf 1}_{|\xi|<1}
+
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|<{\sf l}_1}
+
N_2[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|<(T-t)^{{\sf q}_1}}
+
N_4[\epsilon,\tilde v_1,\tilde w_1]
\\
&<
{\sf c}_0
{\sf R}_{\sf in}^{-{\sf c}_1}
(T-t)^{{\sf d}_1}
\lambda^{-\frac{n+2}{2}}
\sigma
(1+|y|^2)^{-1-{\sf c}_2}
{\bf 1}_{|y|<2{\sf R}_{\sf in}}
\end{align*}
for some ${\sf c}_0,{\sf c}_1,{\sf c}_2>0$ depending only on $q,n,J$.
This proves the estimate for $G_1$.
Furthermore
we use Lemma \ref{Lem7.2} - Lemma \ref{Lem7.3} again
to get
\begin{align*}
G_3
&=
(
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{{\sf in}})
\chi_{{\sf sq}}
-
F_2[\tilde w_1]
\chi_{{\sf sq}}
)
{\bf 1}_{1<|\xi|<{\sf m}_2}
\\
& \quad
+
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|y|>{\sf l}_1}
{\bf 1}_{|\xi|<1}
+
N_2[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1}
\\
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
(1+|\xi|^2)^{\frac{\gamma}{2}-1-{\sf c}_2}
{\bf 1}_{|\xi|<{\sf m}_2}
+
(T-t)^{2{\sf d}_1}
\eta^\frac{2q}{1-q}
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1}
\\
&\lesssim
{\sf m}_2^\gamma
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
(1+|\xi|^2)^{-(1+{\sf c}_2)}
{\bf 1}_{|\xi|<{\sf m}_2}
+
T^{{\sf d}_1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
{\bf 1}_{(T-t)^{{\sf q}_1}<|\xi|<1}.
\end{align*}
Since $T^{{\sf d}_1}<{\sf R}_1^{-1}$,
we obtain the estimate for $G_3$.
The inequality for $G_4$ is obvious
from Lemma \ref{Lem7.2} - Lemma \ref{Lem7.3}.
\end{proof}
Most of the solutions $v_i$ ($i=1,2,3,4,5$) can be constructed directly
from this lemma.
We put
\[
\bar v_1
=
\bar{\sf c}_0
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
(1+|y|^2)^{-{\sf c}_2}.
\]
We now check that
$\bar v_1$ gives a comparison function for \eqref{eq7.2}:
\begin{align*}
(
\eta^2
\pa_t
-
L_\xi
)
\bar v_1
&=
(
\eta^2\pa_t
-
\Delta_\xi
+
q{\sf U}^{q-1}(1-\chi_1)
+
\eta\dot\eta\Lambda_\xi
)
\bar v_1
\\
&>
(
\eta^2\pa_t
-
\Delta_\xi
+
\eta\dot\eta\Lambda_\xi
)
\bar v_1
\\
&>
(
-
C
\eta^2
(T-t)^{-1}
+
2{\sf c}_2(n-2-2{\sf c}_2)
(\lambda^{-1}\eta)^2
(1+|y|^2)^{-1}
-
C
\eta\dot\eta
)
\bar v_1
\\
&>
\eta^2
(T-t)^{-1}
\{
2
{\sf c}_2(n-2-2{\sf c}_2)
(T-t)
\lambda^{-2}
(1+|y|^2)^{-1}
-
C
\}
\bar v_1.
\end{align*}
From this estimate,
we easily see that
\begin{align*}
(
\eta^2\pa_t
-
L_\xi
)
\bar v_1
&>
\tfrac{1}{2}
{\sf c}_2(n-2-2{\sf c}_2)
\eta^2
\lambda^{-2}
\bar v_1
\qquad
\text{for } |y|<1,
\\
(
\eta^2\pa_t
-
L_\xi
)
\bar v_1
&>
\eta^2
(T-t)^{-1}
\{
2
{\sf c}_2(n-2-2{\sf c}_2)
|z|^{-2}
-
C
\}
\bar v_1
\\
&>
{\sf c}_2(n-2-2{\sf c}_2)
\eta^2
\lambda^{-2}
|y|^{-2}
\bar v_1
\qquad
\text{for }
|y|>1,\ |\xi|<4{\sf R}_{\sf mid}.
\end{align*}
This together with Lemma \ref{Lem7.4} implies
$(\eta^2\pa_t-L_\xi)\bar v_1>\eta^{-\frac{2q}{1-q}}|G_1|$
for $|\xi|<4{\sf R}_{\sf mid}$.
Hence
a comparison argument shows
\begin{align*}
|v_1(\xi,t)|
<
\bar v_1
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
(1+|y|^2)^{-{\sf c}_2}
\qquad
\text{for }
|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align*}
This implies
$|v_1(\xi,t)|
\lesssim
{\sf R}_1^{-1}(T-t)^{{\sf d}_1}$
for $|\xi|<4{\sf R}_{\sf mid}$, $t\in(0,T)$.
Since $G_1=0$ for $|\xi|>2$,
by a comparison argument,
we get
\[
|\nabla_\xi v_1(\xi,t)|
\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
|\xi|^{-(n-2)}
\qquad
\text{for }
2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\]
We next construct $v_2$.
We multiply \eqref{eq7.3} by $v_2$ and integrate by parts.
Then we get
\begin{align}\label{eq7.7}
\nonumber
\tfrac{\eta^2}{2}
\tfrac{d}{dt}
\|v_2\|_{L_\xi^2(B_2)}^2
&<
-
\|\nabla_\xi v_2\|_{L_\xi^2(B_2)}^2
-
\eta\dot\eta
(\Lambda_\xi v_2,v_2)_{L_\xi^2(B_2)}
+
\eta^{-\frac{2q}{1-q}}
(G_2,v_2)_{L_\xi^2(B_2)}
\\
&<
-\tfrac{{\sf j}_1}{2}\|v_2(t)\|_{L_\xi^2(B_2)}^2
+
\eta^{-\frac{4q}{1-q}}
\|G_2\|_{L_\xi^2(B_2)}^2,
\end{align}
where ${\sf j}_1>0$ is the first eigenvalue of
$-\Delta_\xi e=je$ in $B_2$ with zero Dirichlet boundary condition.
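The second inequality in \eqref{eq7.7} follows by a standard absorption argument;
a minimal sketch, with constants not optimized,
assuming $|\eta\dot\eta|$ is sufficiently small (which holds for $T$ small).
Since $v_2$ vanishes on $\pa B_2$,
the divergence theorem gives
$(-\xi\cdot\nabla_\xi v_2,v_2)_{L_\xi^2(B_2)}
=\tfrac{n}{2}\|v_2\|_{L_\xi^2(B_2)}^2$,
hence
\begin{align*}
-
\eta\dot\eta
(\Lambda_\xi v_2,v_2)_{L_\xi^2(B_2)}
&=
-
\eta\dot\eta
\left(
\tfrac{2}{1-q}+\tfrac{n}{2}
\right)
\|v_2\|_{L_\xi^2(B_2)}^2
\leq
\tfrac{{\sf j}_1}{4}
\|v_2\|_{L_\xi^2(B_2)}^2,
\\
\eta^{-\frac{2q}{1-q}}
(G_2,v_2)_{L_\xi^2(B_2)}
&\leq
\tfrac{{\sf j}_1}{4}
\|v_2\|_{L_\xi^2(B_2)}^2
+
\tfrac{1}{{\sf j}_1}
\eta^{-\frac{4q}{1-q}}
\|G_2\|_{L_\xi^2(B_2)}^2
\end{align*}
by Young's inequality.
Combining these with the Poincar\'e inequality
$\|\nabla_\xi v_2\|_{L_\xi^2(B_2)}^2\geq{\sf j}_1\|v_2\|_{L_\xi^2(B_2)}^2$
yields \eqref{eq7.7} up to a harmless constant in the last term.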
We remark that
$G_2=-F_2[\tilde w_1]\approx q{\sf U}_\infty^{q-1}w$
is a singular inhomogeneous term at the origin
in \eqref{eq7.3}.
Since
$G_2\lesssim
{\sf R}_1^{-1}(T-t)^{{\sf d}_1}
\eta^{\frac{2q}{1-q}}|\xi|^{-2+\gamma}{\bf 1}_{|\xi|<1}$
(see Lemma \ref{Lem7.3})
and
$|\xi|^{-2}\in L_\xi^2(B_1)$ if $n\geq5$,
\eqref{eq7.7} implies
$\|v_2(t)\|_{L_\xi^2(B_2)}
\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}$
for $t\in(0,T)$.
Furthermore
since $|\xi|^{-2+\gamma}\in L_\xi^p(B_1)$ for some $p>\frac{n}{2}$,
Lemma \ref{Lem3.4} shows that
$v_2(\xi,t)$ is bounded near the origin and satisfies
\begin{align}\label{eq7.8}
|v_2(\xi,t)|
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\qquad
\text{for }
|\xi|<{2},\ t\in(0,T).
\end{align}
Estimates for $v_3$ can be derived in the same way as for $v_1$:
\begin{align*}
|v_3(\xi,t)|
&\lesssim
{\sf m}_2^{\gamma}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
(1+|\xi|^2)^{-{\sf c}_2}
\qquad
\text{for }
|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T),
\\
|\nabla_\xi v_3(\xi,t)|
&\lesssim
{\sf m}_2^{\gamma}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf m}_2^{-{\sf c}_2}
{\sf m}_2^{n-2}
|\xi|^{-(n-2)}
\qquad
\text{for }
{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align*}
From the definition of $\gamma$ (see Section \ref{sec_4.2}),
for any ${\sf k}\in(0,n-2)$,
there exists ${\sf c}_A({\sf k})>0$ such that
$(\Delta_\xi-q{\sf U}_\infty(\xi)^{q-1})|\xi|^{\gamma-{\sf k}}
=-{\sf c}_A({\sf k})|\xi|^{\gamma-{\sf k}-2}$.
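The constant ${\sf c}_A({\sf k})$ can be computed explicitly;
a minimal sketch, assuming (as the statement suggests)
that $q{\sf U}_\infty(\xi)^{q-1}={\sf A}|\xi|^{-2}$ for some ${\sf A}>0$
and that $\gamma$ solves the indicial equation $\gamma(\gamma+n-2)={\sf A}$:
\begin{align*}
(\Delta_\xi-q{\sf U}_\infty(\xi)^{q-1})
|\xi|^{\gamma-{\sf k}}
&=
\{
(\gamma-{\sf k})(\gamma-{\sf k}+n-2)
-
{\sf A}
\}
|\xi|^{\gamma-{\sf k}-2}
\\
&=
-
{\sf k}
(2\gamma+n-2-{\sf k})
|\xi|^{\gamma-{\sf k}-2},
\end{align*}
so that ${\sf c}_A({\sf k})={\sf k}(2\gamma+n-2-{\sf k})$,
which is positive for ${\sf k}\in(0,n-2)$ provided $\gamma\geq0$.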
Hence,
from \eqref{eq4.10},
we can choose ${\sf m}_2>1$ such that
$(\Delta_\xi-q{\sf U}^{q-1})|\xi|^{\gamma-{\sf k}}
<-\tfrac{1}{2}{\sf c}_A({\sf k})|\xi|^{\gamma-{\sf k}-2}$
for $|\xi|>\tfrac{1}{2}{\sf m}_2$.
From this relation,
we observe that
\begin{align*}
(\eta^2\pa_t-L_\xi)
\{
(T-t)^{{\sf d}_1}
|\xi|^{\gamma-2{\sf c}_2}
\}
&>
\{
-
{\sf d}_1
\eta^2
(T-t)^{-1}
+
\tfrac{1}{2}
{\sf c}_A({\sf k})
\}
(T-t)^{{\sf d}_1}|\xi|^{\gamma-2-2{\sf c}_2}
\\
&>
\tfrac{1}{4}
{\sf c}_A({\sf k})
(T-t)^{{\sf d}_1}|\xi|^{\gamma-2-2{\sf c}_2}
\qquad
\text{for }
\tfrac{1}{2}{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align*}
Hence
by using
$\bar v_4=\bar c_0{\sf R}_1^{-1}(T-t)^{{\sf d}_1}|\xi|^{\gamma-2{\sf c}_2}$
as a comparison function,
we get from Lemma \ref{Lem7.4} that
\begin{align}\label{eq7.9}
|v_4(\xi,t)|
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
|\xi|^{\gamma-2{\sf c}_2}
\qquad
\text{for }
\tfrac{1}{2}{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align}
From this estimate,
it holds that
$|\nabla_\xi v_4(\xi,t)|
\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf R}_{\sf mid}^{\frac{1}{2}(\gamma-2{\sf c}_2)}$
for
$|\xi|=2\sqrt{{\sf R}_{\sf mid}}$,
$t\in(0,T)$.
We note that
$G_4=0$
for $|\xi|>2\sqrt{{\sf R}_{\sf mid}}$.
Hence
by a comparison argument in $2\sqrt{{\sf R}_{\sf mid}}<|\xi|<4{\sf R}_{\sf mid}$,
we get
\begin{align*}
|\nabla_\xi v_4(\xi,t)|
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf R}_{\sf mid}^{\frac{1}{2}(\gamma-2{\sf c}_2)}
{\sf R}_{\sf mid}^{\frac{1}{2}(n-2)}
|\xi|^{-(n-2)}
\\
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf R}_{\sf mid}^{-{\sf c}_2}
{\sf R}_{\sf mid}^{\frac{1}{2}(n-2)}
|\xi|^{\gamma-(n-2)}
\end{align*}
for
$2\sqrt{{\sf R}_{\sf mid}}<|\xi|<4{\sf R}_{\sf mid}$,
$t\in(0,T)$.
We finally construct $v_5$.
We easily see from \eqref{eq7.8} - \eqref{eq7.9} that
\begin{align*}
G_5
&=
2\nabla_\xi v_2\cdot\nabla_\xi\chi_A
+
v_2\Delta\chi_A
+
2\nabla_\xi v_4\cdot\nabla_\xi\chi_B
+
v_4\Delta\chi_B
\\
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\bf 1}_{1<|\xi|<2}
+
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf m}_2^{\gamma-2{\sf c}_2-2}
{\bf 1}_{\frac{1}{2}{\sf m}_2<|\xi|<{\sf m}_2}
\\
&\lesssim
{\sf m}_2^{\gamma}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\bf 1}_{1<|\xi|<{\sf m}_2}.
\end{align*}
Hence
applying a comparison argument to \eqref{eq7.6},
we obtain
\begin{align*}
|v_5(\xi,t)|
&\lesssim
{\sf m}_2^{\gamma}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
(1+|\xi|^2)^{-\frac{1}{2}(n-3)}
\qquad
\text{for }
|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T),
\\
|\nabla_\xi v_5(\xi,t)|
&\lesssim
{\sf m}_2^{\gamma}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
|\xi|^{-(n-3)}
\qquad
\text{for }
2{\sf m}_2<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align*}
Therefore
since ${\sf m}_2$ depends only on $n$,
we conclude
\begin{align}
\label{eq7.10}
|v(\xi,t)|
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
(1+|\xi|^2)^{\frac{\gamma}{2}-{\sf c}_2}
\qquad
\text{for }
|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T),
\\
\label{eq7.11}
|\nabla_\xi v(\xi,t)|
&\lesssim
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
{\sf R}_{\sf mid}^{-{\sf c}_2}
{\sf R}_{\sf mid}^{\frac{1}{2}(n-2)}
|\xi|^{\gamma-(n-2)}
\qquad
\text{for }
2\sqrt{{\sf R}_{\sf mid}}<|\xi|<4{\sf R}_{\sf mid},\
t\in(0,T).
\end{align}
Estimate \eqref{eq7.10} implies $v\in X_2\subset X_2^{(\delta)}$,
which is the desired result in this section.
Gradient estimate \eqref{eq7.11} is used in Section \ref{sec_8}.
\section{In the selfsimilar region}
\label{sec_8}
We study the asymptotic behavior of solutions in the selfsimilar region:
$|x|\sim\sqrt{T-t}$.
We recall that
our solution $u(x,t)$ behaves like
\[
u(x,t)
=
{\sf U}_\infty(x)
-
\theta(x)
-
\Theta(x,t)
+
w(x,t)
\qquad
\text{in }
|x|\sim\sqrt{T-t}.
\]
We here look for solutions of
(see \eqref{eq5.2})
\begin{align}\label{eq8.1}
\begin{cases}
w_t
=
\Delta_xw
-
q
{\sf U}_\infty^{q-1}
w
+
(
F_1[\tilde v_1,\tilde w_1]
-
F_2[\tilde w_1]
)
(1-\chi_{{\sf sq}})
\\
\qquad
\quad
+
k[v]
+
(
g
+
N[\epsilon,\tilde v_1,\tilde w_1]
)
{\bf 1}_{|\xi|>1}
&
\text{in } x\in\R^n,\
t\in(0,T),
\\
w=w_0
&
\text{in } x\in\R^n,\
t=0.
\end{cases}
\end{align}
The initial data $w_0(x)$ is chosen later such that $w(x,t)$
decays sufficiently fast as $t\to T$.
We remark that
our approach in this section is simpler than
that of Section 5 in \cite{Seki}.
In fact,
we do not need the explicit expression of the heat kernel of
$w_t=\Delta_xw-q{\sf U}_\infty^{q-1}w$.
The goal of this section is to construct a solution
$w(x,t)\in X_1\subset X_1^{(\delta)}$.
Combining this with \eqref{eq7.10},
we obtain $(w,v)\in X\subset X^{(\delta)}$.
This proves Theorem \ref{Thm1}.
As in Section \ref{sec_7},
we first provide estimates of each term on the right-hand side of \eqref{eq8.1}.
We postpone the proofs of Lemma \ref{Lem8.1} - Lemma \ref{Lem8.3}
to the Appendix.
Throughout this section,
\begin{itemize}
\item
$(\lambda(t),\epsilon(y,t))$ are solutions obtained
in Section {\rm\ref{sec_6}} from $(w_1,v_1)\in X^{(\delta)}$,
\item
$v(\xi,t)$ is a solution of \eqref{eq7.1} obtained in Section {\rm\ref{sec_7}},
\item
$(\tilde w_1,\tilde v_1)\in X$ are extensions of $(w_1,v_1)\in X^{(\delta)}$,
namely $(\tilde w_1,\tilde v_1)=\mathcal{T}_1(w_1,v_1)$.
\end{itemize}
\begin{lem}\label{Lem8.1}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
{\rm(}independent of
$\delta$,
${\sf d}_1$,
${\sf b}$,
${\sf R}_{\sf in}$,
${\sf R}_{\sf mid}$,
${\sf R}_1$,
${\sf r}_0$,
${\sf r}_3${\rm)}
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then it holds that
\begin{align*}
\sum_{i=0}^6
|g_i|
{\bf 1}_{|\xi|>1}
+
\sum_{i=0}^{10}
|g_i'|
{\bf 1}_{|\xi|>1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^{\frac{2q}{1-q}}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{1<|\xi|<2{\sf l}_2},
\\
g_{{\sf out},1}'
+
g_{{\sf out},2}'
+
g_{{\sf out},3}'
&\lesssim
{\bf 1}_{1<|x|<2},
\\
g_{{\sf out},4}'
+
g_{{\sf out},5}'
+
g_{{\sf out},6}'
+
g_{{\sf out},7}'
&\lesssim
|x|^\gamma
{\bf 1}_{{\sf r}_3<|x|<2{\sf r}_3}.
\end{align*}
\end{lem}
\begin{lem}\label{Lem8.2}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then
\begin{align*}
N_1[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|>1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{1<|\xi|<2{\sf l}_2},
\\
N_2[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|>1}
&\leq
{\sf c}_0
(T-t)^{2{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{1<|\xi|<2{\sf l}_2},
\\
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|z|<1}
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{|\xi|>{\sf l}_2}
{\bf 1}_{|z|<1},
\\
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{1<|z|<(T-t)^{-\frac{1}{3}}}
&\leq
{\sf c}_0
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J}
{\bf 1}_{1<|z|<(T-t)^{-\frac{1}{3}}},
\\
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{(T-t)^{-\frac{1}{3}}<|z|<{\sf l}_{\sf out}}
&\leq
{\sf c}_0
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf d}_1+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{(T-t)^{-\frac{1}{3}}<|z|<{\sf l}_{\sf out}},
\\
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|z|>{\sf l}_{\sf out}}
&\leq
{\sf c}_0
{\sf U}_\infty^q
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<2},
\\
N_4[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|\xi|>1}
&=
0,
\\
N_5[\tilde w_1]
&\leq
{\sf c}_0
{\sf R}^{-1}
|x|^{-3}
{\bf 1}_{|x|>1},
\\
N_6[\tilde w_1]
&\leq
{\sf c}_0
{\bf 1}_{1<|x|<4}
+
{\sf c}_0
{\sf R}^{-1}
|x|^{-1}
{\bf 1}_{|x|>1}.
\end{align*}
\end{lem}
\begin{lem}\label{Lem8.3}
There exist positive constants
${\sf b}_1,{\sf c}_0,{\sf c}_1,{\sf c}_2$
depending only on $q,n,J$
such that
if
${\sf b}\in(0,{\sf b}_1)$,
then
\begin{align*}
F_1[\tilde v_1,\tilde w_1]
(1-\chi_{\sf sq})
&\leq
{\sf c}_0
(T-t)^{{\sf c}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{\sqrt{{\sf R}_{\sf mid}}<|\xi|<2{\sf l}_2},
\\
F_2[\tilde w_1]
(1-\chi_{\sf sq})
&\leq
{\sf c}_0
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{\sqrt{{\sf R}_{\sf mid}}<|\xi|<2{\sf l}_2}.
\end{align*}
\end{lem}
Let ${\sf c}_1^{\min}$ be the minimum value of
${\sf c}_1$ obtained in
Lemma \ref{Lem7.1} - Lemma \ref{Lem7.3}
and
Lemma \ref{Lem8.1} - Lemma \ref{Lem8.3}.
We take
\begin{itemize}
\item
${\sf d}_1=\frac{1}{2}{\sf c}_1^{\min}$.
\end{itemize}
As mentioned before,
this ${\sf d}_1$ depends only on $q,n,J$.
From \eqref{eq7.10} - \eqref{eq7.11},
we see that
\begin{align*}
k_1[v]
&=
\eta^\frac{2}{1-q}
v
\pa_t\chi_{\sf mid}
\lesssim
(T-t)^{-1}
\eta^\frac{2}{1-q}
v
{\bf 1}_{{\sf R}_{\sf mid}<|\xi|<2{\sf R}_{\sf mid}}
\\
&=
\eta^2
(T-t)^{-1}
|\xi|^2
\cdot
\eta^\frac{2q}{1-q}
|\xi|^{-2}
v
{\bf 1}_{{\sf R}_{\sf mid}<|\xi|<2{\sf R}_{\sf mid}}
\\
&\lesssim
|z|^2
\cdot
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{{\sf R}_{\sf mid}<|\xi|<2{\sf R}_{\sf mid}},
\\
k_1'[v]
&=
2\eta^\frac{2q}{1-q}
(\nabla_\xi v\cdot\nabla_\xi\chi_{\sf mid})
+
\eta^\frac{2q}{1-q}
v(\Delta_\xi\chi_{\sf mid})
\\
&\lesssim
{\sf R}_{\sf mid}^{-{\sf c}_2}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{{\sf R}_{\sf mid}<|\xi|<2{\sf R}_{\sf mid}}.
\end{align*}
Hence
it follows that
\begin{align}\label{eq8.2}
k[v]
=
-k_1[v]+k_1'[v]
\lesssim
{\sf R}_{\sf mid}^{-{\sf c}_2}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{{\sf R}_{\sf mid}<|\xi|<2{\sf R}_{\sf mid}}.
\end{align}
As in Section \ref{sec_7},
we divide \eqref{eq8.1} into two equations.
The first equation is given by
\begin{align}\label{eq8.3}
\begin{cases}
\pa_tw_1
=
\Delta_xw_1
-
q
{\sf U}_\infty^{q-1}
w_1
+
(
F_1[\tilde v_1,\tilde w_1]
-
F_2[\tilde w_1]
)
(1-\chi_{{\sf sq}})
\\
\hspace{12mm}
+
k[v]
+
g
{\bf 1}_{|\xi|>1}
+
(
N_1[\epsilon,\tilde v_1,\tilde w_1]
+
N_2[\epsilon,\tilde v_1,\tilde w_1]
)
{\bf 1}_{|\xi|>1}
\\
\hspace{12mm}
+
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|z|<1}
{\bf 1}_{|\xi|>1}
&
\text{in } |z|<2{\sf r}_0,\
t\in(0,T),
\\
w_1=0
& \text{on } |z|=2{\sf r}_0,\
t\in(0,T),
\\
w_1=0
&
\text{in } |z|<2{\sf r}_0,\
t=0.
\end{cases}
\end{align}
From Lemma \ref{Lem8.1} - Lemma \ref{Lem8.3} and \eqref{eq8.2},
the right-hand side of \eqref{eq8.3} can be computed as
\begin{align*}
(
F_1[\tilde v_1,\tilde w_1]
-
F_2[\tilde w_1]
)
(1-\chi_{{\sf sq}})
&+
k[v]
+
(
g
+
\sum_{i=1}^2
N_i
[\epsilon,\tilde v_1,\tilde w_1]
+
N_3
[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|z|<1}
)
{\bf 1}_{|\xi|>1}
\\
&\lesssim
{\sf R}_{\sf mid}^{-{\sf c}_1}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2q}{1-q}
|\xi|^{\gamma-2-2{\sf c}_2}
{\bf 1}_{|\xi|>1}
{\bf 1}_{|z|<1}
\end{align*}
for some ${\sf c}_1,{\sf c}_2>0$.
We introduce a comparison function:
\[
\bar w_1
=
{\sf R}_{\sf mid}^{-{\sf c}_1}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
|\xi|^\gamma
(1+|\xi|^2)^{-{\sf c}_2}.
\]
From the definition of $\gamma$,
for any ${\sf k}\in(0,\frac{n-2}{2})$,
there exists ${\sf c}_B({\sf k})>0$ such that
\[
(\Delta_\xi-q{\sf U}_\infty(\xi)^{q-1})
|\xi|^{\gamma}
(1+|\xi|^2)^{-{\sf k}}
<
-
{\sf c}_B({\sf k})
|\xi|^{\gamma}
(1+|\xi|^2)^{-(1+{\sf k})}.
\]
Hence
it follows that
\begin{align*}
(\pa_t-\Delta_x+q{\sf U}_\infty^{q-1})
\bar w_1
&=
(\pa_t-\eta^{-2}\Delta_\xi+\eta^{-2}q{\sf U}_\infty(\xi)^{q-1})
\bar w_1
\\
&>
(-C(T-t)^{-1}
+
{\sf c}_B({\sf k})
\eta^{-2}(1+|\xi|^2)^{-1})
\bar w_1
\\
&=
(T-t)^{-1}
(-C
+
{\sf c}_B({\sf k})
(T-t)
\eta^{-2}
(1+|\xi|^2)^{-1})
\bar w_1.
\end{align*}
This immediately implies
\begin{align*}
(\pa_t-\Delta_x+q{\sf U}_\infty^{q-1})
\bar w_1
&>
\tfrac{1}{4}
{\sf c}_B({\sf k})
\eta^{-2}
\bar w_1
\qquad
\text{for } |\xi|<1,
\\
(\pa_t-\Delta_x+q{\sf U}_\infty^{q-1})
\bar w_1
&>
(T-t)^{-1}
(-C+{\sf c}_B({\sf k})|z|^{-2})
\bar w_1
\qquad
\text{for }
|\xi|>1.
\end{align*}
From the second inequality,
there exists ${\sf r}_0'>0$ depending only on $q,n,J$
such that
\begin{align*}
(\pa_t-\Delta_x+q{\sf U}_\infty^{q-1})
\bar w_1
&>
\tfrac{1}{2}
{\sf c}_B({\sf k})
\eta^{-2}
|\xi|^{-2}
\bar w_1
\qquad
\text{for }
|\xi|>1,\
|z|<{\sf r}_0'.
\end{align*}
We fix ${\sf r}_0\in(0,\frac{1}{2}{\sf r}_0')$.
A comparison argument shows
\begin{align}\label{eq8.4}
|w_1|
<
\bar w_1
&\lesssim
{\sf R}_{\sf mid}^{-{\sf c}_1}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
|\xi|^{\gamma}
(1+|\xi|^2)^{-{\sf c}_2}
\end{align}
for
$|z|<2{\sf r}_0,\ t\in(0,T)$.
Since all terms on the right-hand side of \eqref{eq8.3}
vanish for $|\xi|>4{\sf l}_2$,
applying a comparison argument in $|\xi|>4{\sf l}_2$, $|z|<2{\sf r}_0$
yields
\begin{align}\label{eq8.5}
|w_1|
+
|\nabla_\xi w_1|
&\lesssim
{\sf R}_{\sf mid}^{-{\sf c}_1}
{\sf R}_1^{-1}
(T-t)^{{\sf d}_1}
\eta^\frac{2}{1-q}
{\sf l}_2^{-{\sf c}_2+(n-3)}
|\xi|^{\gamma-(n-3)}
\end{align}
for
$|\xi|>4{\sf l}_2$,
$|z|<2{\sf r}_0$,
$t\in(0,T)$.
We next consider the second equation.
\begin{align}\label{eq8.6}
\begin{cases}
\dis
\pa_tw_2
=
\Delta_xw_2
-
q
{\sf U}_\infty^{q-1}
w_2
+
N_3[\epsilon,\tilde v_1,\tilde w_1]
{\bf 1}_{|z|>1}
\\
\dis
\qquad
\quad
+
(
N_4[\epsilon,\tilde v_1,\tilde w_1]
+
N_5[\tilde w_1]
+
N_6[\tilde w_1]
)
{\bf 1}_{|\xi|>1}
\\
\dis
\qquad
\quad
-
(T-t)^{-1}
w_1
(\Delta_z\chi_0)
-
2
(T-t)^{-\frac{1}{2}}
\eta^{-1}
\nabla_\xi w_1
\cdot
\nabla_z\chi_0
&
\text{in } x\in\R^n,\
t\in(0,T),
\\
w_2=w_0
&
\text{in } x\in\R^n,\
t=0.
\end{cases}
\end{align}
Once $w_2(x,t)$ is constructed,
$w(x,t)=w_1\chi_0+w_2(x,t)$ gives a solution of \eqref{eq8.1},
where $\chi_0=\chi(|z|/{\sf r}_0)$.
We introduce selfsimilar variables.
\[
x=z\sqrt{T-t},
\qquad
T-t=e^{-\tau}.
\]
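For convenience we record the routine chain-rule identities behind this change of variables.
Since $\tau=-\log(T-t)$ and $z=e^{\tau/2}x$, one has
\[
\pa_t\big|_{x}
=
e^\tau
\left(
\pa_\tau
+
\tfrac{z}{2}\cdot\nabla_z
\right),
\qquad
\Delta_x
=
e^\tau
\Delta_z,
\qquad
{\sf U}_\infty(x)^{q-1}
=
e^\tau\,
{\sf U}_\infty(z)^{q-1},
\]
the last identity being the homogeneity
${\sf U}_\infty(x)^{q-1}=L_1^{q-1}|x|^{-2}=e^\tau L_1^{q-1}|z|^{-2}$.
Multiplying \eqref{eq8.6} by $e^{-\tau}=(T-t)$ then gives the self-similar form below.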
Equation \eqref{eq8.6} is written in the selfsimilar variables.
\begin{align}\label{eq8.7}
\begin{cases}
\dis
\pa_\tau
w_2
=
\Delta_z
w_2
-
\tfrac{z}{2}
\cdot
\nabla_z
w_2
-
q
{\sf U}_\infty(z)^{q-1}
w_2
+
(T-t)G
&
\text{in } z\in\R^n,\
\tau\in(-\log T,\infty),
\\
w_2=w_0
&
\text{in } z\in\R^n,\
\tau=-\log T,
\end{cases}
\end{align}
where $G$ represents the right-hand side of \eqref{eq8.6}.
From Lemmas \ref{Lem8.1}--\ref{Lem8.2} and \eqref{eq8.5},
there exists a positive constant ${\sf c}_1$ depending only on
$q,n,J$ such that
\begin{align}
\nonumber
G
&\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma}
(1+|z|^2)^J
{\bf 1}_{|z|<(T-t)^{-\frac{1}{3}}}
\\
\nonumber
& \quad
+
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf d}_1+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{(T-t)^{-\frac{1}{3}}<|z|<{\sf l}_{\sf out}}
\\
\label{eq8.8}
& \quad
+
|x|^\frac{2q}{1-q}
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<4}
+
{\sf R}_1^{-1}
|x|^{-1}
{\bf 1}_{|x|>1}.
\end{align}
Let $\rho(z)=e^{-\frac{|z|^2}{4}}$ and
$L_\rho^2(\R^n)$ be the weighted $L^2$ space defined in Section \ref{sec_4.2}.
From this estimate,
we easily see that
\begin{align}\label{eq8.9}
\|G\|_\rho
\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}.
\end{align}
Furthermore
let $\mu_j=\frac{\gamma}{2}+j$ be the $j$th eigenvalue of
$-(\Delta_z-\frac{z}{2}\cdot\nabla_z-q{\sf U}_\infty(z)^{q-1})e=\mu e$,
and $e_j(z)$ be the corresponding eigenfunction (see Section \ref{sec_4.2}).
We now take
\[
w_0
=
\sum_{j=0}^J
\alpha_j
e_j(z)
\chi_J
\qquad
\text{with }
\chi_J
=
\chi(T^\frac{1}{3}|z|).
\]
We expand $e_j(z)\chi_J$ as
\[
e_j(z)
\chi_J
=
\sum_{i=0}^J
(e_j\chi_J,e_i)_\rho
e_i
+
e_{j,J}^\bot.
\]
We write ${\sf A}_{ji}=(e_j\chi_J,e_i)_\rho$,
then
\begin{align}\label{eq8.10}
w_0
=
\sum_{j=0}^J
\sum_{i=0}^J
\alpha_j
{\sf A}_{ji}
e_i(z)
+
\sum_{j=0}^J
\alpha_j
e_{j,J}^\bot.
\end{align}
We also expand $w_2$ as
\begin{align*}
w_2
=
\sum_{j=0}^Ja_j(\tau)e_j(z)
+
w_2^\bot
\qquad
\text{with }
a_j(\tau)=(w_2(\cdot,\tau),e_j)_\rho.
\end{align*}
From \eqref{eq8.7},
we verify that
\begin{align*}
\pa_\tau
a_j
=
-\mu_j
a_j
+
(T-t)
(G,e_j)_\rho
=
-
(
\tfrac{\gamma}{2}+j
)
a_j
+
(T-t)
(G,e_j)_\rho.
\end{align*}
Integrating both sides,
we get
\[
e^{(\frac{\gamma}{2}+j)\tau}a_j(\tau)
-
\underbrace{
e^{(\frac{\gamma}{2}+j)\tau}a_j|_{\tau=-\log T}
}_{=T^{-(\frac{\gamma}{2}+j)}\sum_{i=0}^J\alpha_i{\sf A}_{ij}}
=
\int_{-\log T}^\tau
e^{(\frac{\gamma}{2}+j)\tau_1}
(T-t)
(G,e_j)_\rho
d\tau_1.
\]
We now choose $\alpha_j$ such that
\[
\sum_{i=0}^J
\alpha_i
{\sf A}_{ij}
=
-
T^{\frac{\gamma}{2}+j}
\int_{-\log T}^\infty
e^{(\frac{\gamma}{2}+j)\tau_1}
(T-t)
(G,e_j)_\rho
d\tau_1
\qquad
(j=0,1,2,\cdots,J).
\]
Since the matrix ${\sf A}_{ij}$ is close to the identity matrix,
$\alpha_j$ can be uniquely determined.
Then
$a_j(\tau)$ is represented as
\[
e^{(\frac{\gamma}{2}+j)\tau}
a_j(\tau)
=
-
\int_\tau^\infty
e^{(\frac{\gamma}{2}+j)\tau_1}
(T-t)
(G,e_j)_\rho
d\tau_1.
\]
Hence
it follows from \eqref{eq8.9} that
\begin{align}\label{eq8.11}
a_j(\tau)
\lesssim
e^{-(\frac{\gamma}{2}+j)\tau}
e^{(j-J-{\sf c}_1)\tau}
=
e^{-(\frac{\gamma}{2}+J+{\sf c}_1)\tau}
=
(T-t)^{\frac{\gamma}{2}+J+{\sf c}_1}.
\end{align}
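Indeed, by the Cauchy--Schwarz inequality and \eqref{eq8.9} one has
$(T-t)|(G,e_j)_\rho|\lesssim e^{-(\frac{\gamma}{2}+J+{\sf c}_1)\tau_1}$,
so that for $j\le J$
\[
|a_j(\tau)|
\lesssim
e^{-(\frac{\gamma}{2}+j)\tau}
\int_\tau^\infty
e^{(j-J-{\sf c}_1)\tau_1}
d\tau_1
=
\frac{1}{J+{\sf c}_1-j}\,
e^{-(\frac{\gamma}{2}+J+{\sf c}_1)\tau},
\]
which is the computation behind \eqref{eq8.11}.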
Furthermore
\eqref{eq8.9} shows
\begin{align}\label{eq8.12}
\nonumber
\alpha_j
&=
-
\sum_{i=0}^J
{\sf A}_{ij}^{-1}
T^{\frac{\gamma}{2}+j}
\int_{-\log T}^\infty
e^{(\frac{\gamma}{2}+j)\tau_1}
(T-t)
(G,e_i)_\rho
d\tau_1
\\
&\lesssim
T^{\frac{\gamma}{2}+j}
\int_0^T
(T-t)^{-1+J-j+{\sf c}_1}
dt
\lesssim
T^{\frac{\gamma}{2}+J+{\sf c}_1}.
\end{align}
We next provide estimates for $w_2^\bot$.
Since $w_2$ solves \eqref{eq8.7},
from \eqref{eq8.9},
we get
\begin{align*}
\tfrac{1}{2}
\pa_\tau
\|w_2^\bot\|_\rho^2
&=
-\|\nabla_zw_2^\bot\|_\rho^2
-q
({\sf U}_\infty(z)^{q-1}w_2^\bot,w_2^\bot)_\rho
+
(T-t)
(G,w_2^\bot)_\rho
\\
&<
-
\mu_{J+1}
\|w_2^\bot\|_\rho^2
+
C
(T-t)^{\frac{\gamma}{2}+J+{\sf c}_1}
\|w_2^\bot\|_\rho.
\end{align*}
We recall that
$\mu_{J+1}=\frac{\gamma}{2}+J+1>\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1$
(see Section \ref{sec_4.2}).
Hence
it follows that
\begin{align}\label{eq8.13}
e^{2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)\tau}
\|w_2^\bot\|_\rho^2
-
\underbrace{
e^{2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)\tau}
\|w_2^\bot\|_\rho^2
|_{\tau=-\log T}
}_{=T^{-2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)}
\|w_0^\bot\|_\rho^2}
\lesssim
\int_0^t
(T-s)^{-1+{\sf c}_1}
ds
\lesssim
T^{{\sf c}_1}.
\end{align}
We go back to \eqref{eq8.10} to estimate
$T^{-2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)}
\|w_0^\bot\|_\rho^2$.
We note that
\begin{align*}
e_{j,J}^\bot
&=
e_j\chi_J
-
\sum_{i=0}^J
(e_j\chi_J,e_i)_\rho e_i
\\
&=
e_j\chi_J
-
(e_j\chi_J,e_j)_\rho e_j
-
\sum_{i\not=j}
(e_j(\chi_J-1),e_i)_\rho e_i
\\
&=
e_j(\chi_J-1)
+
(e_j(1-\chi_J),e_j)_\rho e_j
-
\sum_{i\not=j}
(e_j(\chi_J-1),e_i)_\rho e_i.
\end{align*}
Since $1-\chi_J=0$ for $|z|<T^{-\frac{1}{3}}$
and
$e_j={\sf E}_j|z|^{\gamma+2j}$ as $|z|\to\infty$
(see Section \ref{sec_4.2}),
it holds that
\[
\|e_j(1-\chi_J)\|_\rho^2
=
\|e_j(1-\chi_J)\|_{L_\rho^2(|z|>T^{-\frac{1}{3}})}^2
\lesssim
e^{-{\sf p}}
\qquad
\text{with }
{\sf p}=
\tfrac{1}{8}T^{-\frac{2}{3}}.
\]
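The exponential smallness comes from splitting the Gaussian weight: for $|z|>T^{-\frac{1}{3}}$ we may write
\[
\rho(z)
=
e^{-\frac{|z|^2}{4}}
\le
e^{-\frac{1}{8}T^{-\frac{2}{3}}}
e^{-\frac{|z|^2}{8}},
\]
and the remaining factor $e^{-\frac{|z|^2}{8}}$ absorbs the polynomial growth of $e_j(z)$.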
This together with \eqref{eq8.12} shows
\[
\|w_0^\bot\|_\rho
=
\Big\|\sum_{j=0}^J
\alpha_j
e_{j,J}^\bot
\Big\|_\rho
\lesssim
\sum_{j=0}^J
|\alpha_j|
\|e_{j,J}^\bot\|_\rho
\lesssim
T^{\frac{\gamma}{2}+J+{\sf c}_1}
e^{-{\sf p}}.
\]
Therefore
\eqref{eq8.13} can be written as
\begin{align*}
\|w_2^\bot\|_\rho^2
&\lesssim
e^{-2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)\tau}
(
T^{-2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)}
\|w_0^\bot\|_\rho^2
+
T^{{\sf c}_1}
)
\\
&\lesssim
T^{{\sf c}_1}
e^{-2(\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1)\tau}.
\end{align*}
Combining \eqref{eq8.11},
we conclude
\begin{align}\label{eq8.14}
\|w_2\|_\rho
&\lesssim
T^{\frac{1}{2}{\sf c}_1}
(T-t)^{\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1}.
\end{align}
By using this estimate,
we provide pointwise estimates for $w_2$ below.
The proof is divided into four parts.
\begin{enumerate}[(i)]
\item $|z|<1$
\item $1<|z|<{\sf l}_{\sf out}$
\item ${\sf l}_{\sf out}\sqrt{T-t}<|x|<1$
\item $|x|>1$
\end{enumerate}
We first consider the case (i).
To verify $w_2\in X_1$,
it is sufficient to obtain the spatial weight $|z|^\gamma$ as $|z|\to0$.
Let $e_{J+1}(z)$ be the eigenfunction of
$-(\Delta_z-\frac{z}{2}\cdot\nabla_z-q{\sf U}_\infty(z)^{q-1})e=\mu e$
defined in Section \ref{sec_4.2}.
We can choose ${\sf r}\in(0,1)$ such that $e_{J+1}(z)>0$ for $|z|<{\sf r}$.
From \eqref{eq8.8}, \eqref{eq8.12} and \eqref{eq8.14},
we easily check that
$\bar w_2=
T^{\frac{1}{2}{\sf c}_1}
(T-t)^{\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1}
e_{J+1}(z)$
gives a super solution of \eqref{eq8.7} in $|z|<{\sf r}$.
Hence
we have
\begin{align}\label{eq8.15}
\nonumber
|w_2|
&\lesssim
\bar w_2
=
T^{\frac{1}{2}{\sf c}_1}
(T-t)^{\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1}
e_{J+1}(z)
\\
&\lesssim
T^{\frac{1}{2}{\sf c}_1}
(T-t)^{\frac{\gamma}{2}+J+\frac{1}{2}{\sf c}_1}
|z|^\gamma
\qquad
\text{for }
|z|<{\sf r},\
t\in(0,T).
\end{align}
In the last inequality,
we use the property of $e_j(z)$ (see Section \ref{sec_4.2}).
Since we can assume $2{\sf d}_1<{\sf c}_1$,
\eqref{eq8.15} proves the case (i).
We next consider the case (ii).
From Section \ref{sec_5.3},
we recall that
${\sf l}_{\sf out}=L_2(T-t)^{-\frac{1}{2}+{\sf b}_{\sf out}}$
with
${\sf b}_{\sf out}
=
\tfrac{{\sf d}_1}{2(\gamma+2J-\frac{2}{1-q}+3{\sf d}_1)}$
and
\begin{align}\label{eq8.16}
(T-t)^{\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
>
{\sf U}_\infty(x)
\qquad
\Leftrightarrow
\qquad
|z|>{\sf l}_{\sf out}.
\end{align}
We put
${\sf l}_{\sf x}=(T-t)^{{\sf b}_{\sf x}}$
with ${\sf b}_{\sf x}=\frac{{\sf d}_1}{2(\gamma+2J+3{\sf d}_1)}$.
A direct computation shows
\begin{align}\label{eq8.17}
(T-t)^{\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
>
1
\qquad
\Leftrightarrow
\qquad
|x|>{\sf l}_{\sf x}.
\end{align}
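To verify \eqref{eq8.17}, substitute $|z|=|x|(T-t)^{-\frac{1}{2}}$; the left-hand side becomes
\[
(T-t)^{\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
=
(T-t)^{-\frac{1}{2}{\sf d}_1}
|x|^{\gamma+2J+3{\sf d}_1},
\]
which exceeds $1$ exactly when
$|x|>(T-t)^{\frac{{\sf d}_1}{2(\gamma+2J+3{\sf d}_1)}}=(T-t)^{{\sf b}_{\sf x}}={\sf l}_{\sf x}$
(here we use $\gamma+2J+3{\sf d}_1>0$).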
From the definition of ${\sf l}_{\sf out}$ and ${\sf l}_x$,
we note that
${\sf l}_{\sf out}\sqrt{T-t}<{\sf l}_{\sf x}$.
We here write \eqref{eq8.8} in the following form.
\begin{align*}
G
&
{\bf 1}_{|z|>1}
{\bf 1}_{|x|<{\sf l}_{\sf x}}
\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J}
{\bf 1}_{1<|z|<(T-t)^{-\frac{1}{3}}}
\\
&\quad
+
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf d}_1+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{(T-t)^{-\frac{1}{3}}<|z|<{\sf l}_{\sf out}}
+
|x|^{\frac{2q}{1-q}}
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<{\sf l}_{\sf x}}
\\
&\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{1<|z|<{\sf l}_{\sf out}}
+
|x|^{-2}
{\sf U}_\infty
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<{\sf l}_x}.
\end{align*}
We observe from \eqref{eq8.16} that
\begin{align*}
|x|^{-2}
{\sf U}_\infty
&\lesssim
|x|^{-2}
(T-t)^{\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
\\
&\lesssim
(T-t)^{-2{\sf b}_{\sf out}+\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
\qquad
\text{for }
{\sf l}_{\sf out}\sqrt{T-t}<|x|<{\sf l}_x.
\end{align*}
Hence
it follows that
\begin{align*}
G
{\bf 1}_{|z|>1}
{\bf 1}_{|x|<{\sf l}_{\sf x}}
&\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{|z|>1}
{\bf 1}_{|x|<{\sf l}_{\sf x}}.
\end{align*}
Furthermore
from \eqref{eq8.8} and \eqref{eq8.17},
we immediately see that
\begin{align*}
G
{\bf 1}_{|x|>{\sf l}_{\sf x}}
&\lesssim
(
|x|^\frac{2q}{1-q}
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<4}
+
{\sf R}_1^{-1}
|x|^{-1}
{\bf 1}_{|x|>1}
)
{\bf 1}_{|x|>{\sf l}_{\sf x}}
\\
&\lesssim
{\bf 1}_{|x|>{\sf l}_{\sf x}}
\\
&\lesssim
(T-t)^{\frac{\gamma}{2}+J+{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{|x|>{\sf l}_{\sf x}}.
\end{align*}
Therefore
we obtain
\[
G{\bf 1}_{|z|>1}
\lesssim
(T-t)^{-1+\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J+3{\sf d}_1}
{\bf 1}_{|z|>1}.
\]
We put
$\bar{w}_2
=
(T-t)^{\frac{\gamma}{2}+J+\frac{5}{4}{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}$.
A direct computation shows
\begin{align*}
(
\pa_\tau-\Delta_z+\tfrac{z}{2}\cdot\nabla_z+q{\sf U}_\infty(z)^{q-1}
)
\bar{w}_2
&=
(
-(
\tfrac{\gamma}{2}+J+\tfrac{5}{4}{\sf d}_1
)
+
\tfrac{1}{2}
(
\gamma+2J+3{\sf d}_1
)
-
C
|z|^{-2}
)
\bar{w}_2
\\
&=
(
\tfrac{1}{4}{\sf d}_1
-
C
|z|^{-2}
)
\bar{w}_2.
\end{align*}
Here
we choose ${\sf m}_3>0$ such that
$\tfrac{1}{4}{\sf d}_1-C|z|^{-2}>\tfrac{1}{8}{\sf d}_1$
for $|z|>{\sf m}_3$.
Hence
it follows that
\begin{align*}
(
\pa_\tau-\Delta_z+\tfrac{z}{2}\cdot\nabla_z+q{\sf U}_\infty(z)^{q-1}
)
\bar{w}_2
>
\tfrac{1}{8}
{\sf d}_1
\bar{w}_2
\qquad
\text{for }
|z|>{\sf m}_3.
\end{align*}
Furthermore
from \eqref{eq8.12},
it holds that
\[
w_0
=
\sum_{j=0}^J
\alpha_j
e_j
\chi_J
\lesssim
T^{\frac{\gamma}{2}+J+{\sf c}_1}
|z|^{\gamma+2J}
{\bf 1}_{|z|<2T^{-\frac{1}{3}}}
\qquad
\text{for }
|z|>1.
\]
This implies
$|w_0|<\bar{w}_2|_{t=0}$ for $|z|>1$.
Therefore
a comparison argument shows
\begin{align}\label{eq8.18}
|w_2|
<
\bar{w}_2
=
(T-t)^{\frac{\gamma}{2}+J+\frac{5}{4}{\sf d}_1}
|z|^{\gamma+2J+3{\sf d}_1}
\qquad
\text{for }
|z|>{\sf m}_3,\
t\in(0,T).
\end{align}
Since $T^{\frac{1}{8}{\sf d}_1}<{\sf R}_1^{-1}$,
the case (ii) is proved.
We investigate the case (iii).
From \eqref{eq8.8},
we easily see that
\begin{align}\label{eq8.19}
\nonumber
G
{\bf 1}_{|z|>{\sf l}_{\sf out}}
&\lesssim
|x|^\frac{2q}{1-q}
{\bf 1}_{{\sf l}_{\sf out}\sqrt{T-t}<|x|<4}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
+
{\sf R}_1^{-1}
|x|^{-1}
{\bf 1}_{|x|>1}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
\nonumber
&\lesssim
|x|^{\frac{2q}{1-q}}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
\nonumber
&\lesssim
({\sf l}_{\sf out}\sqrt{T-t})^{-2}
|x|^{\frac{2}{1-q}}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
&\lesssim
(T-t)^{-2{\sf b}_{\sf out}}
|x|^{\frac{2}{1-q}}
{\bf 1}_{|z|>{\sf l}_{\sf out}}.
\end{align}
From \eqref{eq8.18} and \eqref{eq8.16},
we note that
$|w_2|
<
(T-t)^{\frac{1}{4}{\sf d}_1}
{\sf U}_\infty(x)$
on
$|z|={\sf l}_{\sf out}$.
Furthermore
since $\frac{1}{3}<\frac{1}{2}-{\sf b}_{\sf out}$,
it holds that $w_2|_{t=0}=0$
for $|z|>{\sf l}_{\sf out}|_{t=0}$.
We choose a comparison function $\bar{w}_2$ as
\[
\bar{w}_2
=
2T^{\frac{1}{4}{\sf d}_1}
e^{-2(T-t)^\frac{1}{2}}
{\sf U}_\infty(x).
\]
We here go back to \eqref{eq8.6}.
We compute
\begin{align*}
\left(
\pa_t-\Delta_x+q{\sf U}_\infty(x)^{q-1}
\right)
\bar{w}_2
&=
(
(T-t)^{-\frac{1}{2}}
-
(1-q)
{\sf U}_\infty^{q-1}
)
\bar{w}_2
\\
&=
(
(T-t)^{-\frac{1}{2}}
-
(1-q)
L_1^{q-1}
|x|^{-2}
)
\bar{w}_2
\\
&\geq
(
(T-t)^{-\frac{1}{2}}
-
C(T-t)^{-2{\sf b}_{\sf out}}
)
\bar{w}_2
\\
&\geq
\tfrac{1}{2}
(T-t)^{-\frac{1}{2}}
\bar{w}_2
\qquad
\text{for }
|z|>{\sf l}_{\sf out}.
\end{align*}
Since
${\sf b}_{\sf out}<C{\sf d}_1$ (see the definition of ${\sf b}_{\sf out}$),
we can assume ${\sf b}_{\sf out}<\frac{1}{8}$.
Hence
\eqref{eq8.19} can be written as
\begin{align*}
G
{\bf 1}_{|z|>{\sf l}_{\sf out}}
&<
C
(T-t)^{-2{\sf b}_{\sf out}}
|x|^{\frac{2}{1-q}}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
&<
(T-t)^{-\frac{1}{4}}
|x|^{\frac{2}{1-q}}
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
&<
C
(T-t)^{-\frac{1}{4}}
T^{-\frac{1}{4}{\sf d}_1}
\bar{w}_2
{\bf 1}_{|z|>{\sf l}_{\sf out}}
\\
&=
C
(T-t)^{\frac{1}{4}}
T^{-\frac{1}{4}{\sf d}_1}
\cdot
\tfrac{1}{2}
(T-t)^{-\frac{1}{2}}
\bar{w}_2
{\bf 1}_{|z|>{\sf l}_{\sf out}}.
\end{align*}
Therefore
from a comparison argument,
we obtain
\begin{align}\label{eq8.20}
|w_2|
<
\bar{w}_2
=
2T^{\frac{1}{4}{\sf d}_1}
e^{-2(T-t)^\frac{1}{2}}
{\sf U}_\infty(x)
\qquad
\text{for } |z|>{\sf l}_{\sf out},\
t\in(0,T).
\end{align}
This gives the desired estimate for the case (iii).
We finally discuss the case (iv).
It holds from \eqref{eq8.8} that
\begin{align*}
G
{\bf 1}_{|x|>1}
&\lesssim
|x|^\frac{2q}{1-q}
{\bf 1}_{1<|x|<4}
+
{\sf R}_1^{-1}
|x|^{-1}
{\bf 1}_{|x|>1}
\lesssim
|x|^{-1}
{\bf 1}_{|x|>1}.
\end{align*}
From \eqref{eq8.20} and the definition of $w_0(x)$,
we have
\begin{align*}
|w_2|
&\lesssim
T^{\frac{1}{4}{\sf d}_1}
\qquad
\text{on } |x|=1,
\\
w_2|_{t=0}
&=
0
\qquad
\text{for } |x|>1.
\end{align*}
Taking into account this relation,
we define
$
\bar w_2
=
2
T^{\frac{1}{4}{\sf d}_1}
e^{-2(T-t)^\frac{1}{2}}
|x|^{-1}$
as a comparison function.
Then
we get
\begin{align*}
(
\pa_t-\Delta_x+q{\sf U}_\infty(x)^{q-1}
)
\bar{w}_2
>
\pa_t
\bar{w}_2
=
(T-t)^{-\frac{1}{2}}
\bar{w}_2.
\end{align*}
This implies
$|G|
{\bf 1}_{|x|>1}
<
(
\pa_t-\Delta_x+q{\sf U}_\infty(x)^{q-1}
)
\bar{w}_2$
for $|x|>1$.
A comparison argument shows
\begin{align}\label{eq8.21}
|w_2|
<
\bar{w}_2(x,t)
\lesssim
T^{\frac{1}{4}{\sf d}_1}
|x|^{-1}
\qquad
\text{for }
|x|>1.
\end{align}
Combining
\eqref{eq8.4}, \eqref{eq8.15}, \eqref{eq8.18}, \eqref{eq8.20}
and \eqref{eq8.21},
we conclude
$w(x,t)=w_1\chi_0+w_2\in X_1\subset X_1^{(\delta)}$.
This completes the proof.
\section{Introduction}
In this paper we extend the construction given in \cite{2} to the level of decorated representations of algebras with potentials realized via completed tensor algebras. Instead of working with a quiver, we consider the algebra of formal power series $\mathcal{F}_{S}(M)$, where $S=\displaystyle \prod_{i=1}^{n} D_{i}$ is a finite direct product of division rings, each containing the base field $F$ in its center, and $M$ is a finite-dimensional $S$-bimodule. In Section \ref{sec3} we define a decorated representation of an algebra with potential $(\mathcal{F}_{S}(M),P)$, and in Section \ref{sec4} we show how to associate a decorated representation to the premutated algebra with potential $(\mathcal{F}_{S}(\mu_{k}M),\mu_{k}P)$ and prove that it is indeed a decorated representation. In contrast to \cite{8}, we do not assume that the basis is semi-multiplicative, but rather impose some conditions on the dual basis associated to the division algebra $Se_{k}$. In Section \ref{sec5} we define mutation of a decorated representation, and we show it is a well-defined transformation on the set of right-equivalence classes of decorated representations of algebras with potentials. A crucial result of this section is that mutation of decorated representations is an involution. Finally, in Section \ref{sec6} we construct a functor to prove that there exists a nearly Morita equivalence between the Jacobian algebras which are related via mutation. This construction generalizes the one given in \cite{4}.
\section{Preliminaries}
\begin{definition} Let $F$ be a field and let $D_{1},\hdots,D_{n}$ be division rings, containing $F$ in its center, and each of them is finite-dimensional over $F$. Let $S=\displaystyle \prod_{i=1}^{n} D_{i}$ and let $M$ be an $S$-bimodule of finite dimension over $F$. Define the \emph{algebra of formal power series} over $M$ as the set: \\
\begin{center}
$\mathcal{F}_{S}(M):=\left\{\displaystyle \sum_{i=0}^{\infty} a(i): a(i) \in M^{\otimes i}\right\}$
\end{center}
where $M^{\otimes 0}=S$.
\end{definition}
Note that $\mathcal{F}_{S}(M)$ is an associative unital $F$-algebra where the product is the one obtained by extending the product of the tensor algebra $T_{S}(M)=\displaystyle \bigoplus_{i=0}^{\infty} M^{\otimes i}$ to $\mathcal{F}_{S}(M)$.
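Concretely, the multiplication extends the product of $T_{S}(M)$ degreewise: for $a=\sum_{i=0}^{\infty} a(i)$ and $b=\sum_{j=0}^{\infty} b(j)$ in $\mathcal{F}_{S}(M)$, the degree-$m$ component of $ab$ is the finite sum
\[
(ab)(m)
=
\sum_{i+j=m}
a(i)\otimes b(j)
\in
M^{\otimes m},
\]
so each homogeneous component of a product involves only finitely many terms.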
Let $\{e_{1},\ldots,e_{n}\}$ be a complete set of primitive orthogonal idempotents of $S$.
\begin{definition} An element $m \in M$ is \emph{legible} if $m=e_{i}me_{j}$ for some idempotents $e_{i},e_{j}$ of $S$.
\end{definition}
\begin{definition} Let $Z=\displaystyle \sum_{i=1}^{n} Fe_{i} \subseteq S$. We say that $M$ is $Z$-\emph{freely generated} by a $Z$-subbimodule $M_{0}$ of $M$ if the map $\mu_{M}: S \otimes_{Z} M_{0} \otimes_{Z} S \rightarrow M$ given by $\mu(s_{1} \otimes m \otimes s_{2})=s_{1}ms_{2}$ is an isomorphism of $S$-bimodules. In this case we say that $M$ is an $S$-bimodule which is $Z$-\emph{free} or $Z$-freely generated.
\end{definition}
\begin{definition} Let $\mathcal{C}$ be a non-empty subset of $M$. We say that $\mathcal{C}$ is a \emph{right $S$-local basis} of $M$ if every element of $\mathcal{C}$ is legible and if for each pair of idempotents $e_{i},e_{j}$ of $S$ we have that $\mathcal{C} \cap e_{i}Me_{j}$ is a $Se_{j}=D_{j}$-basis of $e_{i}Me_{j}$.
\end{definition}
\begin{definition} Let $\mathcal{D}$ be a non-empty subset of $M$. We say that $\mathcal{D}$ is a \emph{left $S$-local basis} of $M$ if every element of $\mathcal{D}$ is legible and if for each pair of idempotents $e_{i},e_{j}$ of $S$ we have that $\mathcal{D} \cap e_{i}Me_{j}$ is a $Se_{i}=D_{i}$-basis of $e_{i}Me_{j}$.
\end{definition}
A right $S$-local basis $\mathcal{C}$ induces a dual basis $\{u,u^{\ast}\}_{u \in \mathcal{C}}$ of $M$ where $u^{\ast}: M_{S} \rightarrow S_{S}$ is the morphism of right $S$-modules defined by $u^{\ast}(v)=0$ if $v \in \mathcal{C} \setminus \{u\}$ and $u^{\ast}(u)=e_{j}$ if $u=e_{i}ue_{j}$. Similarly, a left $S$-local basis $\mathcal{D}$ of $M$ induces a dual basis $\{v,^{\ast}v\}_{v \in \mathcal{D}}$ where $^{\ast} v: \ _{S}{M} \rightarrow _{S}{S}$ is the morphism of left $S$-modules defined by $^{\ast}v(u)=0$ if $u \in \mathcal{D} \setminus \{v\}$ and $^{\ast}v(v)=e_{i}$ if $v=e_{i}ve_{j}$. \\
Let $L$ be a $Z$-local basis of $S$ and let $T$ be a $Z$-local basis of $M_{0}$. \\
Throughout this paper we will use the following notation: $T_{k}= T \cap Me_{k}$ and $_{k}T= T \cap e_{k}M$. \\
We will also assume that for every integer $i$ in $[1,n]$ and for each $F$-basis $L(i)$ of $D_{i}$, the following equalities hold for each $f,w,z \in L(i)$:
\begin{equation} \label{2.1}
f^{\ast}(w^{-1}z)=w^{\ast}(zf^{-1})
\end{equation}
\begin{equation} \label{2.2}
f^{\ast}(zw)=(w^{-1})^{\ast}(f^{-1}z)
\end{equation}
\begin{equation}\label{2.3}
z^{\ast}(wf)=(w^{-1})^{\ast}(fz^{-1})
\end{equation}
\vspace{0.1in}
Note that \ref{2.1} readily implies (1) of \cite[p.29]{2} by taking $w=e_{i}$, $z=s$ and $f=t$. \\
Observe that in \ref{2.1} and \ref{2.2} one can replace $z \in L(i)$ by $z \in D_{i}$, because both expressions are linear in $z$. Similarly, one can replace in \ref{2.3} $f \in L(i)$ by $f \in D_{i}$.
\begin{remark}
If $L(i)$ is a semi-multiplicative basis of $D_{i}$ then $L(i)$ satisfies \ref{2.1}, \ref{2.2} and \ref{2.3}.
\end{remark}
\begin{proof}
Indeed, suppose that $f^{\ast}(w^{-1}z) \neq 0$; then $w^{-1}z=cf$ for some uniquely determined $c \in F^{\ast}$, and thus $f^{\ast}(w^{-1}z)=c$. On the other hand, $w^{\ast}(zf^{-1})=w^{\ast}(z(z^{-1}wc))=w^{\ast}(wc)=c$ and the equality follows. A similar argument shows that \ref{2.2} and \ref{2.3} also hold.
\end{proof}
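As a concrete illustration (our example; it is not part of the original construction), take $F=\mathbb{R}$, $D_{i}=\mathbb{C}$ and the semi-multiplicative basis $L(i)=\{1,\mathrm{i}\}$, where $\mathrm{i}^{\ast}$ denotes the $\mathbb{R}$-linear functional dual to $\mathrm{i}$. For instance, with $f=w=\mathrm{i}$ and $z=1$,
\[
f^{\ast}(w^{-1}z)
=
\mathrm{i}^{\ast}(-\mathrm{i})
=
-1
=
\mathrm{i}^{\ast}(-\mathrm{i})
=
w^{\ast}(zf^{-1}),
\]
in agreement with \ref{2.1}.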
\begin{remark}
Suppose that $L_{1}$ is an $F$-basis for the field extension $F_{1}/F$ and $L_{2}$ is an $F_{1}$-basis for the field extension $F_{2}/F_{1}$. If $L_{1}$ and $L_{2}$ satisfy \ref{2.1}, \ref{2.2} and \ref{2.3} then the $F$-basis $\{xy: x \in L_{1},y \in L_{2}\}$ of $F_{2}/F$ also satisfies \ref{2.1}, \ref{2.2} and \ref{2.3}.
\end{remark}
\begin{proof} Suppose that both $L_{1}$ and $L_{2}$ satisfy \ref{2.1}. Let $f=f_{1}f_{2}$, $w=w_{1}w_{2}$, $z=z_{1}z_{2}$ where $f_{1},w_{1},z_{1} \in L_{1}$ and $f_{2},w_{2},z_{2} \in L_{2}$. Then:
\begin{align*}
f^{\ast}(w^{-1}z)&=f_{1}^{\ast}f_{2}^{\ast}(w_{1}^{-1}w_{2}^{-1}z_{2}z_{1}) \\
&=f_{1}^{\ast}(w_{1}^{-1}z_{1}f_{2}^{\ast}(w_{2}^{-1}z_{2}))\\
&=f_{1}^{\ast}(w_{1}^{-1}z_{1}w_{2}^{\ast}(z_{2}f_{2}^{-1}))\\
&=w_{1}^{\ast}(z_{1}f_{1}^{-1}w_{2}^{\ast}(z_{2}f_{2}^{-1}))\\
&=(w_{1}w_{2})^{\ast}(z_{1}z_{2}(f_{1}f_{2})^{-1}) \\
&=w^{\ast}(zf^{-1})
\end{align*}
as claimed. The equalities \ref{2.2} and \ref{2.3} are established in an analogous way.
\end{proof}
\begin{remark} \label{rem3} If the basis $L(i)$ satisfies \ref{2.1}, then for each $f,z \in L(i)$ we have: \\
\begin{center}
$\displaystyle \sum_{w \in L(i)} f^{\ast}(w^{-1}z)w=zf^{-1}$
\end{center}
\end{remark}
\begin{proof}
\begin{center}
$\displaystyle \sum_{w \in L(i)} f^{\ast}(w^{-1}z)w=\displaystyle \sum_{w \in L(i)} w^{\ast}(zf^{-1})w=zf^{-1}.$
\end{center}
\end{proof}
\begin{remark} If the basis $L(i)$ satisfies \ref{2.2}, then for each $r,v \in L(i)$ we have: \\
\begin{center}
\begin{equation} \label{2.4}
\displaystyle \sum_{t \in L(i)} r^{\ast}(vt)t^{-1}=r^{-1}v
\end{equation}
\end{center}
\end{remark}
\begin{proof}
\begin{center}
$\displaystyle \sum_{t \in L(i)} r^{\ast}(vt)t^{-1}=\displaystyle \sum_{t \in L(i)} (t^{-1})^{\ast}(r^{-1}v)t^{-1}=r^{-1}v.$
\end{center}
\end{proof}
\begin{definition} Given an $S$-bimodule $N$ we define the \emph{cyclic part} of $N$ as $N_{cyc}:=\displaystyle \sum_{j=1}^{n} e_{j}Ne_{j}$.
\end{definition}
\begin{definition} A \emph{potential} $P$ is an element of $\mathcal{F}_{S}(M)_{cyc}$. \\
\end{definition}
For each legible element $a$ of $e_{i}Me_{j}$, we let $\sigma(a)=i$ and $\tau(a)=j$. Recall that each $L(i)=L \cap e_{i}S$ is an $F$-basis of $D_{i}$. We will assume that each basis $L(i)$ satisfies that $\operatorname{char}(F) \nmid \operatorname{card}L(i).$
\begin{definition} Let $P$ be a potential in $\mathcal{F}_{S}(M)$, then $R(P)$ is the closure of the two-sided ideal of $\mathcal{F}_{S}(M)$ generated by all the elements $X_{a^{\ast}}(P):=\displaystyle \sum_{s \in L(\sigma(a))} \delta_{(sa)^{\ast}}(P)s$ where $a \in T$.
\end{definition}
In \cite[p.19]{2} it is shown that the Jacobian ideal (in the sense of \cite{6}) properly contains the ideal $R(P)$.
Let $k$ be an integer in $[1,n]$. Using the $S$-bimodule $M$, we define a new $S$-bimodule $\mu_{k}M=\widetilde{M}$ as:
\vspace{0.1in}
\begin{center}
$\widetilde{M}:=\bar{e_{k}}M\bar{e_{k}} \oplus Me_{k}M \oplus (e_{k}M)^{\ast} \oplus ^{\ast}(Me_{k})$
\end{center}
\vspace{0.1in}
where $\bar{e_{k}}=1-e_{k}$, $(e_{k}M)^{\ast}=\operatorname{Hom}_{S}((e_{k}M)_{S},S_{S})$ and $^{\ast}(Me_{k})=\operatorname{Hom}_{S}(_{S}(Me_{k}),_{S}S)$. One can show (see \cite[Lemma 8.7]{2}) that $\mu_{k}M$ is $Z$-freely generated by the following $Z$-subbimodule: \\
\begin{center}
$\bar{e_{k}}M_{0}\bar{e_{k}} \oplus M_{0}e_{k}Se_{k}M_{0} \oplus e_{k}(_{0}N) \oplus N_{0}e_{k}$
\end{center}
\vspace{0.1in}
where $N_{0}=\{h \in M^{\ast} \mid h(M_{0}) \subseteq Z,\ h(tM_{0})=0 \text{ for all } t \in L^{'} \}$, $_{0}N=\{h \in\ ^{\ast}M \mid h(M_{0}) \subseteq Z,\ h(M_{0}t)=0 \text{ for all } t \in L^{'} \}$ and $L'=L \setminus \{e_{1},\hdots,e_{n}\}$.
\begin{definition} An algebra with potential is a pair $(\mathcal{F}_{S}(M),P)$ where $P$ is a potential in $\mathcal{F}_{S}(M)$ and $M_{cyc}=0$.
\end{definition}
Throughout this paper we will assume that $M$ is $Z$-freely generated by $M_{0}$.
\section{Decorated representations} \label{sec3}
\begin{definition} Let $(\mathcal{F}_{S}(M),P)$ be an algebra with potential. A decorated representation of $(\mathcal{F}_{S}(M),P)$ is a pair $\mathcal{N}=(N,V)$ where $N$ is a finite dimensional left $\mathcal{F}_{S}(M)$-module annihilated by $R(P)$ and $V$ is a finite dimensional left $S$-module.
\end{definition}
Equivalently, $N$ is a finite dimensional left module over the quotient algebra $\mathcal{F}_{S}(M)/R(P)$. For $u \in \mathcal{F}_{S}(M)$ we let $u_{N}=u: N \rightarrow N$ denote the multiplication operator $u(n)=un$.
Let $(\mathcal{F}_{S}(M),P)$ and $(\mathcal{F}_{S}(M'),P')$ be algebras with potential. Let $\mathcal{N}=(N,V)$ and $\mathcal{N'}=(N',V')$ be decorated representations of $(\mathcal{F}_{S}(M),P)$ and $(\mathcal{F}_{S}(M'),P')$, respectively. A \emph{right-equivalence} between $\mathcal{N}$ and $\mathcal{N'}$ is a triple $(\varphi,\psi,\eta)$ where:
\begin{itemize}
\item $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M')$ is a right-equivalence between $(\mathcal{F}_{S}(M),P)$ and $(\mathcal{F}_{S}(M'),P')$.
\item $\psi: N \rightarrow N'$ is an isomorphism of $F$-vector spaces such that $\psi \circ u_{N}=\varphi(u)_{N'} \circ \psi$ for each $u \in \mathcal{F}_{S}(M)$.
\item $\eta: V \rightarrow V'$ is an isomorphism of left $S$-modules.
\end{itemize}
\begin{remark} Suppose that $M^{\otimes n}=0$ for $n \gg 0$ and that $\mathcal{F}_{S}(M)_{cyc}=\{0\}$, then a decorated representation can be identified with a left module over the tensor algebra $T_{S}(M)$. In the case that the underlying semisimple algebra $S$ happens to be a finite direct product of copies of the base field $F$, then $T_{S}(M)$ can be identified with a path algebra, so in this case a decorated representation is a representation of a quiver in the classical sense.
\end{remark}
Let $M_{1}$, $M_{2}$ be $Z$-freely generated $S$-bimodules and let $T_{1}$ and $T_{2}$ be $Z$-free generating sets of $M_{1}$ and $M_{2}$, respectively. Let $P_{1}$ and $P_{2}$ be potentials in $\mathcal{F}_{S}(M_{1})$ and $\mathcal{F}_{S}(M_{2})$ respectively, and consider the potential $P_{1}+P_{2} \in \mathcal{F}_{S}(M_{1} \oplus M_{2})$. Let $\mathcal{N}=(N,V)$ be a decorated representation of the algebra with potential $(\mathcal{F}_{S}(M_{1} \oplus M_{2}),P_{1}+P_{2})$. We have an injective morphism of topological algebras $\mathcal{F}_{S}(M_{1}) \hookrightarrow \mathcal{F}_{S}(M_{1} \oplus M_{2})$ and thus, by restriction of scalars, $N$ is a left $\mathcal{F}_{S}(M_{1})$-module. We will denote this module by $N|_{\mathcal{F}_{S}(M_{1})}$. Let us show that $R(P_{1})$ annihilates $N|_{\mathcal{F}_{S}(M_{1})}$. Let $a \in T_{1}$, then $X_{a^{\ast}}(P_{2})=\displaystyle \sum_{s \in L(\sigma(a))} \delta_{(sa)^{\ast}}(P_{2})s=0$. Thus $X_{a^{\ast}}(P_{1}+P_{2})=X_{a^{\ast}}(P_{1}) \in R(P_{1}+P_{2})$. It follows that $\mathcal{N}|_{\mathcal{F}_{S}(M_{1})}:=(N|_{\mathcal{F}_{S}(M_{1})},V)$ is a decorated representation of the algebra with potential $(\mathcal{F}_{S}(M_{1}),P_{1})$.
\begin{prop} Let $M_{1}$ and $M_{2}$ be $Z$-freely generated $S$-bimodules and let $P,P'$ be reduced potentials in $\mathcal{F}_{S}(M_{1})$ and $W$ be a trivial potential in $\mathcal{F}_{S}(M_{2})$. Let $\mathcal{N}$ and $\mathcal{N'}$ be decorated representations of $\mathcal{F}_{S}(M_{1} \oplus M_{2})$ with respect the potentials $P+W$ and $P'+W$. If $\mathcal{N}$ is right-equivalent to $\mathcal{N'}$ then $\mathcal{N}|_{\mathcal{F}_{S}(M_{1})}$ is right-equivalent to $\mathcal{N'}|_{\mathcal{F}_{S}(M_{1})}$.
\end{prop}
\begin{proof} Let $(\phi,\psi,\eta): \mathcal{N} \rightarrow \mathcal{N'}$ be a right-equivalence of decorated representations. Then
\begin{enumerate}[(a)]
\item $\phi$ is an algebra automorphism of $\mathcal{F}_{S}(M_{1} \oplus M_{2})$ with $\phi_{|S}=id_{S}$ such that $\phi(P+W)$ is cyclically equivalent to $P'+W$.
\item $\psi: N \rightarrow N'$ is an isomorphism of $F$-vector spaces such that for $n \in N$ and $u \in \mathcal{F}_{S}(M_{1} \oplus M_{2})$ we have $\psi(un)=\phi(u)\psi(n)$.
\item $\eta: V \rightarrow V'$ is an isomorphism of left $S$-modules.
\end{enumerate}
Let $L$ be the closure of the two-sided ideal of $\mathcal{F}_{S}(M_{1} \oplus M_{2})$ generated by $M_{2}$, then as in \cite[Proposition 6.6]{2}:
\begin{enumerate}
\item $\mathcal{F}_{S}(M_{1} \oplus M_{2})=\mathcal{F}_{S}(M_{1}) \oplus L$
\item $R(P+W)=R(P) \oplus L$
\item $R(P'+W)=R(P') \oplus L$
\end{enumerate}
where $R(P)$ (respectively $R(P')$) is the closure of the two-sided ideal in $\mathcal{F}(M_{1})$ generated by all the elements of the form $X_{a^{\ast}}(P)$ (respectively $X_{a^{\ast}}(P'))$ where $a \in T_{1}$, a $Z$-free generating set of $M_{1}$.
Let $p: \mathcal{F}_{S}(M_{1} \oplus M_{2}) \twoheadrightarrow \mathcal{F}_{S}(M_{1})$ be the canonical projection induced by the decomposition $(1)$. As in \cite[Proposition 6.6]{2} there exists an algebra isomorphism
\begin{center}
$\rho=p \circ \phi_{|\mathcal{F}_{S}(M_{1})}: \mathcal{F}_{S}(M_{1}) \rightarrow \mathcal{F}_{S}(M_{1})$
\end{center}
such that $P'-\rho(P)$ is cyclically equivalent to an element of $R(\rho(P))^{2}$. By \cite[Proposition 6.5]{2} there exists an algebra automorphism $\lambda$ of $\mathcal{F}_{S}(M_{1})$ such that $\lambda \rho(P)$ is cyclically equivalent to $P'$ and $\lambda(u)-u \in R(\rho(P))$ for all $u \in \mathcal{F}_{S}(M_{1})$. By definition of decorated representation, $R(P+W)N=0$ and $R(P'+W)N'=0$. Since $L$ is contained in both $R(P+W)$ and $R(P'+W)$ then $LN=0$ and $LN'=0$. Then for $u \in \mathcal{F}_{S}(M_{1} \oplus M_{2})$ and $n \in N$
\begin{center}
$\psi(un)=\phi(u)\psi(n)$
\end{center}
Note that $\phi(u)=p\phi(u)+u'$ where $u' \in L$, then $\phi(u)\psi(n)=p\phi(u)\psi(n)=\rho(u)\psi(n)$, thus
\begin{center}
$\psi(un)=\rho(u)\psi(n)$
\end{center}
Since $P'-\rho(P)$ is cyclically equivalent to an element of $R(\rho(P))^{2}$, then there exists $z \in [\mathcal{F}_{S}(M_{1}),\mathcal{F}_{S}(M_{1})]$ such that $P'+z-\rho(P) \in R(\rho(P))^{2}$. Therefore by \cite[Proposition 6.4]{2} we obtain
\begin{center}
$R(P')=R(P'+z)=R(\rho(P))$
\end{center}
Now consider the automorphism $\lambda \rho$ of $\mathcal{F}_{S}(M_{1})$, this map has the property that $\lambda \rho(P)$ is cyclically equivalent to $P'$; also for $n \in N$ and $u \in \mathcal{F}_{S}(M_{1})$ we have
\begin{center}
$\psi(un)=\rho(u)\psi(n)$
\end{center}
and $\lambda \rho(u)=\rho(u)+w$ where $w \in R(\rho(P))=R(P')$. Therefore
\begin{center}
$\psi(un)=\lambda \rho(u)\psi(n)$
\end{center}
This proves that $(\lambda \rho, \psi, \eta)$ is a right-equivalence between $\mathcal{N}|_{\mathcal{F}_{S}(M_{1})}$ and $\mathcal{N'}|_{\mathcal{F}_{S}(M_{1})}$, as claimed.
\end{proof}
In what follows, we will use the following notation: for an $S$-bimodule $B$, define: \\
\begin{center}
$B_{\hat{k},\hat{k}}=\bar{e_{k}}B\bar{e_{k}}$
\end{center}
\vspace{0.1in}
Now consider the algebra isomorphism $\rho: \mathcal{F}_{S}(M)_{\hat{k},\hat{k}} \rightarrow \mathcal{F}_{S}((\mu_{k}M)_{\hat{k},\hat{k}})$ defined in \cite[Lemma 9.2]{2}. Let $P$ be a reduced potential in $\mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$. Suppose first that:
\begin{center}
(A) $P=\displaystyle \sum_{u=1}^{N} f_{u}\gamma_{u}$
\end{center}
where $f_{u} \in F$ and $\gamma_{u}=x_{1} \hdots x_{n(u)}$ with $x_{i} \in \hat{T}$ as in \cite[Definition 26]{2}. Let $b \in T_{k}$ be fixed and let $N_{b}$ be the set of all $u \in [1,N]$ such that $a(x_{i})=b$ for some $x_{i}$. For each $u \in N_{b}$, let $\mathcal{C}(u)$ be the set of all cyclic permutations $c$ of $\{1,\hdots,n(u)\}$ such that $x_{c(1)}=s_{c}b$. Then for each $c \in \mathcal{C}(u)$ define $\gamma_{u}^{c}=x_{c(1)}x_{c(2)} \hdots x_{c(n(u))}$. Thus $\gamma_{u}^{c}=s_{c}br_{c}a_{c}z_{c}$ where $z_{c}=x_{c(3)} \hdots x_{c(n(u))}$. Therefore \\
\begin{center}
$X_{b^{\ast}}(P)=\displaystyle \sum_{u \in N_{b}} \displaystyle \sum_{c \in \mathcal{C}(u)} f_{u}r_{c}a_{c}z_{c}s_{c}$
\end{center}
On the other hand:
\begin{align*}
X_{[bra]^{\ast}}(\rho(P))&=\displaystyle \sum_{u \in N_{b}} \displaystyle \sum_{c \in \mathcal{C}(u),r_{c}=r,a_{c}=a}f_{u}\rho(z_{c})s_{c} \\
&=\rho \left(\displaystyle \sum_{u \in N_{b}} \displaystyle \sum_{c \in \mathcal{C}(u),r_{c}=r,a_{c}=a} f_{u}z_{c}s_{c} \right)
\end{align*}
Define $Y_{[bra]}(P):=\displaystyle \sum_{u \in N_{b}} \displaystyle \sum_{c \in \mathcal{C}(u),r_{c}=r,a_{c}=a}f_{u}z_{c}s_{c}$. Then \\
\begin{center}
$X_{[bra]^{\ast}}(\rho(P))=\rho(Y_{[bra]}(P))$
\end{center}
\medskip
Note that if $P$ is a potential in $\mathcal{F}_{S}(M)^{\geq n+3}$ then $Y_{[bra]}(P) \in \mathcal{F}_{S}(M)^{\geq n}$, thus if $(P_{n})_{n \geq 1}$ is a Cauchy sequence in $\mathcal{F}_{S}(M)$ then $(Y_{[bra]}(P_{n}))_{n \geq 1}$ is Cauchy as well.
Now let $P$ be an arbitrary potential in $\mathcal{F}_{S}(M)$. We have \\
\begin{center}
$P=\displaystyle \lim_{n \to \infty} P_{n}$
\end{center}
where each $P_{n}$ is of the form given by (A). Define: \\
\begin{center}
$w=\displaystyle \lim_{n \to \infty} Y_{[bra]}(P_{n})$
\end{center}
Then
\begin{align*}
X_{[bra]^{\ast}}(\rho(P))&=\displaystyle \lim_{n \to \infty} X_{[bra]^{\ast}}(\rho(P_{n})) \\
&=\displaystyle \lim_{n \to \infty} \rho(Y_{[bra]}(P_{n})) \\
&=\rho(\displaystyle \lim_{n \to \infty} Y_{[bra]}(P_{n})) \\
&=\rho(w)
\end{align*}
Thus we let $Y_{[bra]}(P):=w$. Then $X_{[bra]^{\ast}}(\rho(P))=\rho(Y_{[bra]}(P))$. In \cite[p.60]{2} the following equalities are established for each potential $P$ in $T_{S}(M)$:
\begin{equation} \label{3.1}
\rho(b'X_{b^{\ast}}(P))=\displaystyle \sum_{r \in L(k),a \in _{k}T} [b'ra]X_{[bra]^{\ast}}(\rho(P))
\end{equation}
\begin{equation} \label{3.2}
\rho(X_{a^{\ast}}(P)a')=\displaystyle \sum_{b \in T_{k}, r \in L(k)} X_{[bra]^{\ast}}(\rho(P))[bra']
\end{equation}
By continuity, the above formulas remain valid for every potential $P \in \mathcal{F}_{S}(M)$. Using \ref{3.1} yields:
\begin{align*}
\rho(b'X_{b^{\ast}}(P))&=\displaystyle \sum_{r \in L(k),a \in _{k}T} \rho(b'ra) \rho(Y_{[bra]}(P)) \\
&=\rho \left( \displaystyle \sum_{r \in L(k), a \in _{k}T} b'raY_{[bra]}(P)\right)
\end{align*}
Since $\rho$ is injective, we obtain:
\begin{equation} \label{3.3}
b'X_{b^{\ast}}(P)=\displaystyle \sum_{r \in L(k), a \in _{k}T} b'raY_{[bra]}(P)
\end{equation}
Similarly:
\begin{equation} \label{3.4}
X_{a^{\ast}}(P)a'=\displaystyle \sum_{b \in T_{k},r \in L(k)} Y_{[bra]}(P)bra'
\end{equation}
For each $\psi \in M^{\ast}$ and for each positive integer $n$ we have an $F$-linear map $\psi_{\ast}: M^{\otimes n} \rightarrow M^{\otimes (n-1)}$ given by $\psi_{\ast}(m_{1} \otimes m_{2} \otimes \hdots \otimes m_{n})=\psi(m_{1})m_{2} \otimes \hdots \otimes m_{n}$. This map induces an $F$-linear map $\psi_{\ast}: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$. Similarly, if $\eta \in ^{\ast}M$ then we obtain an $F$-linear map $_{\ast}\eta: M^{\otimes n} \rightarrow M^{\otimes(n-1)}$ given by $_{\ast}\eta(m_{1} \otimes \hdots \otimes m_{n-1} \otimes m_{n})=m_{1} \otimes \hdots \otimes m_{n-1}\eta(m_{n})$.
Now suppose that $b' \in T_{k}$; then $b'e_{k}=b'$ and thus $e_{k}(b')^{\ast}=(b')^{\ast}$. Since $X^{P}$ is a morphism of $S$-bimodules \cite[Proposition 7.6]{2}, we have $e_{k}X_{b^{\ast}}(P)=X_{b^{\ast}}(P)$. Applying the map $(b')^{\ast}: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$ with $b'=b$ to both sides of \ref{3.3} yields \\
\begin{equation} \label{3.5}
X_{b^{\ast}}(P)=e_{k}X_{b^{\ast}}(P)=\displaystyle \sum_{r \in L(k),a \in _{k}T} e_{k}raY_{[bra]}(P)
\end{equation}
Therefore
\begin{equation} \label{3.6}
X_{b^{\ast}}(P)=\displaystyle \sum_{r \in L(k), a \in _{k}T} raY_{[bra]}(P)
\end{equation}
Let $H$ denote the set of all non-zero elements of the form $as$ where $a \in _{k}T$ and $s \in L$. Note that $H$ is a left $S$-local basis of $_{S}M$; for $x \in H$ we denote by $^{\ast}x \in ^{\ast}M$ the map given by $^{\ast}x(y)=0$ if $y \in H \setminus \{x\}$ and $^{\ast}x(x)=e_{i}$ where $e_{i}x=x$. Applying the map $^{\ast}(a')$ to \ref{3.4} we obtain: \\
\begin{equation} \label{3.7}
X_{a^{\ast}}(P)=\displaystyle \sum_{b \in T_{k}, r \in L(k)} Y_{[bra]}(P)br
\end{equation}
Let $\mathcal{N}=(N,V)$ be a decorated representation of the algebra with potential $(\mathcal{F}_{S}(M),P)$ and suppose that $k$ satisfies $(Me_{k} \otimes_{S} M)_{cyc}=\{0\}$. Define:
\begin{align*}
N_{in}=\displaystyle \bigoplus_{a \in _{k}T} D_{k} \otimes_{F} N_{\tau(a)} \\
N_{out}=\displaystyle \bigoplus_{b \in T_{k}} D_{k} \otimes_{F} N_{\sigma(b)}
\end{align*}
For each $a$ in $_{k}T$ and $r \in L(k)$ consider the projection map $\pi_{a}': N_{in} \rightarrow D_{k} \otimes_{F} N_{\tau(a)}$ and the map $\pi_{ra}': D_{k} \otimes_{F} N_{\tau(a)} \rightarrow N_{\tau(a)}$ given by $\pi_{ra}'(d \otimes n)=r^{\ast}(d)n$. Let $\xi_{a}': D_{k} \otimes_{F} N_{\tau(a)} \rightarrow N_{in}$ denote the inclusion map and define $\xi_{ra}': N_{\tau(a)} \rightarrow D_{k} \otimes_{F} N_{\tau(a)}$ as the map given by $\xi_{ra}'(n)=r \otimes n$. \\
Then for $r,r_{1} \in L(k)$ we have the following equalities
\begin{equation} \label{3.8}
\pi_{r_{1}a}' \xi_{ra}' = \delta_{r,r_{1}}id_{N_{\tau(a)}}
\end{equation}
\begin{equation} \label{3.9}
\pi_{ra}' \xi_{ra}'= id_{N_{\tau(a)}}
\end{equation}
For $a \in _{k}T$ and $r \in L(k)$ we define the following $F$-linear maps:
\begin{align*}
\pi_{ra}&=\pi_{ra}' \pi_{a}': N_{in} \rightarrow N_{\tau(a)} \\
\xi_{ra}&=\xi_{a}' \xi_{ra}': N_{\tau(a)} \rightarrow N_{in}
\end{align*}
then we have the following equalities \\
\begin{equation} \label{3.10}
\pi_{r_{1}a_{1}}\xi_{ra}=\delta_{r_{1}a_{1},ra}id_{N_{\tau(a)}}
\end{equation}
\begin{equation} \label{3.11}
\displaystyle \sum_{r \in L(k), a \in _{k}T} \xi_{ra}\pi_{ra}=id_{N_{in}}
\end{equation}
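The equalities \ref{3.10} and \ref{3.11} are immediate from the definitions; we sketch the verification for the convenience of the reader. For \ref{3.10},
\begin{center}
$\pi_{r_{1}a_{1}}\xi_{ra}=\pi_{r_{1}a_{1}}'\pi_{a_{1}}'\xi_{a}'\xi_{ra}'=\delta_{a_{1},a}\,\pi_{r_{1}a}'\xi_{ra}'=\delta_{a_{1},a}\delta_{r_{1},r}\,id_{N_{\tau(a)}}=\delta_{r_{1}a_{1},ra}\,id_{N_{\tau(a)}}$
\end{center}
where the second equality uses $\pi_{a_{1}}'\xi_{a}'=\delta_{a_{1},a}\,id_{D_{k} \otimes_{F} N_{\tau(a)}}$ and the third uses \ref{3.8}. For \ref{3.11}, if $d \otimes n \in D_{k} \otimes_{F} N_{\tau(a)}$ then
\begin{center}
$\displaystyle \sum_{r \in L(k)} \xi_{ra}'\pi_{ra}'(d \otimes n)=\displaystyle \sum_{r \in L(k)} r \otimes r^{\ast}(d)n=d \otimes n$
\end{center}
since $d=\displaystyle \sum_{r \in L(k)} r\, r^{\ast}(d)$; summing over $a \in _{k}T$ yields $id_{N_{in}}$.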
Similarly, for each $r \in L(k)$ and $b \in T_{k}$ we have the canonical projection $\pi_{b}': N_{out} \rightarrow D_{k} \otimes_{F} N_{\sigma(b)}$ and $\pi_{br}': D_{k} \otimes_{F} N_{\sigma(b)} \rightarrow N_{\sigma(b)}$ denotes the map given by $\pi_{br}'(d \otimes n)=(r^{-1})^{\ast}(d)n$. \\
We define $\xi_{br}': N_{\sigma(b)} \rightarrow D_{k} \otimes_{F} N_{\sigma(b)}$ as the map given by $\xi_{br}'(n)=r^{-1} \otimes n$ for every $n \in N_{\sigma(b)}$ and $\xi_{b}': D_{k} \otimes_{F} N_{\sigma(b)} \rightarrow N_{out}$ is the inclusion map. \\
Then for $r,r_{1} \in L(k)$ and $b \in T_{k}$ we have the following equalities:
\begin{equation} \label{3.12}
\pi_{br_{1}}' \xi_{br}' = \delta_{r_{1},r} id_{N_{\sigma(b)}}
\end{equation}
\begin{equation} \label{3.13}
\pi_{br}' \xi_{br} ' = id_{N_{\sigma(b)}}
\end{equation}
Define the following $F$-linear maps:
\begin{align*}
\xi_{br}=\xi_{b}' \xi_{br}': N_{\sigma(b)} \rightarrow N_{out} \\
\pi_{br}=\pi_{br}' \pi_{b}': N_{out} \rightarrow N_{\sigma(b)}
\end{align*}
Then for $r,r_{1} \in L(k)$ and $b,b_{1} \in T_{k}$ we have
\begin{equation} \label{3.14}
\pi_{b_{1}r_{1}} \xi_{br}= \delta_{b_{1}r_{1},br}id_{N_{\sigma(b)}}
\end{equation}
and
\begin{equation} \label{3.15}
\displaystyle \sum_{b \in T_{k}, r \in L(k)} \xi_{br} \pi_{br}=id_{N_{out}}
\end{equation}
We define a map of left $D_{k}$-modules $\alpha: N_{in} \rightarrow N_{k}$ as the map such that for all $a \in _{k}T, r \in L(k)$ we have:
\begin{center}
$\alpha \xi_{ra}(n)=ran$
\end{center}
for each $n \in N_{\tau(a)}$. \\
Similarly, we define $\beta: N_{k} \rightarrow N_{out}$ as the $F$-linear map such that for all $b \in T_{k}, r \in L(k)$:
\begin{center}
$\pi_{br} \beta(n)=brn$
\end{center}
for every $n \in N_{k}$. \\
Finally, the map $\gamma: N_{out} \rightarrow N_{in}$ is the morphism of left $D_{k}$-modules such that the map $\gamma_{ra,bs}=\pi_{ra}\gamma \xi_{bs}: N_{\sigma(b)} \rightarrow N_{\tau(a)}$, where $r,s \in L(k)$, $a \in _{k}T$, $b \in T_{k}$, is given by:
\begin{center}
$\gamma_{ra,bs}(n)=\displaystyle \sum_{w \in L(k)} r^{\ast}(s^{-1}w)Y_{[bwa]}(P)n$
\end{center}
for every $n \in N_{\sigma(b)}$.
\begin{prop} The map $\beta$ is a morphism of left $D_{k}$-modules.
\end{prop}
\begin{proof} By linearity, it suffices to show that if $c \in L(k)$ and $n \in N_{k}$ then $\beta(cn)=c\beta(n)$. Using \cite[Proposition 7.5]{2} we obtain:
\begin{align*}
\beta(cn)&=c\displaystyle \sum_{b \in T_{k}, r \in L(k)} c^{-1}\xi_{br}\pi_{br}\beta(cn) = c\displaystyle \sum_{b \in T_{k}, r \in L(k)} c^{-1}(r^{-1} \otimes brcn) \\
&=c\displaystyle \sum_{b \in T_{k}, r,r_{1},r_{2} \in L(k)} (r_{1}^{-1})^{\ast}(c^{-1}r^{-1})r_{1}^{-1} \otimes br_{2}^{\ast}(rc)r_{2}n \\
&=c\displaystyle \sum_{b \in T_{k}, r_{1},r_{2} \in L(k)} \left( \displaystyle \sum_{r \in L(k)} r_{2}^{\ast}(rc)(r_{1}^{-1})^{\ast}(c^{-1}r^{-1})\right)(r_{1}^{-1} \otimes br_{2}n) \\
&=c\left(\displaystyle \sum_{b \in T_{k}, r \in L(k)}r^{-1}\otimes brn\right) \\
&=c\beta(n)
\end{align*}
as claimed.
\end{proof}
\begin{lemma} \label{lem1} We have $\alpha \gamma=0$ and $\gamma \beta=0$.
\end{lemma}
\begin{proof} We first show that $\alpha \gamma=0$. It suffices to show that for all $r \in L(k)$, $b \in T_{k}$, $\alpha \gamma \xi_{br}=0$. Let $n \in N_{\sigma(b)}$, then by \ref{3.6} and \ref{3.11}:
\begin{align*}
\alpha \gamma \xi_{br}(n)&= \alpha id_{N_{in}} \gamma \xi_{br}(n) = \displaystyle \sum_{s \in L(k), a \in _{k}T} \alpha \xi_{sa} \pi_{sa} \gamma \xi_{br}(n)\\
&=\displaystyle \sum_{s \in L(k), a \in _{k}T} \alpha \xi_{sa} \gamma_{sa,br}(n)\\
&=\displaystyle \sum_{s,w \in L(k), a \in _{k}T} \alpha \xi_{sa} s^{\ast}(r^{-1}w)Y_{[bwa]}(P)(n)\\
&=\displaystyle \sum_{s,w \in L(k), a \in _{k}T} sas^{\ast}(r^{-1}w)Y_{[bwa]}(P)(n)\\
&=\displaystyle \sum_{w \in L(k), a \in _{k}T} r^{-1}waY_{[bwa]}(P)n \\
&=r^{-1}X_{b^{\ast}}(P)n\\
&=0
\end{align*}
We now show that $\gamma\beta=0$. It suffices to show that for all $r \in L(k)$, $a \in _{k}T$ we have $\pi_{ra} \gamma \beta=0$. Let $n \in N_{k}$, then by \ref{3.7} and \ref{3.15}:
\begin{align*}
\pi_{ra}\gamma \beta(n)&=\pi_{ra} \gamma id_{N_{out}} \beta(n) = \displaystyle \sum_{b \in T_{k},s \in L(k)} \pi_{ra} \gamma \xi_{bs} \pi_{bs} \beta(n) \\
&=\displaystyle \sum_{b \in T_{k}, s \in L(k)} \gamma_{ra,bs} \pi_{bs} \beta(n) \\
&=\displaystyle \sum_{b \in T_{k},s,w \in L(k)} s^{\ast}(wr^{-1})Y_{[bwa]}(P)(bsn) \\
&=\displaystyle \sum_{b \in T_{k}, w \in L(k)} Y_{[bwa]}(P)bwr^{-1}n \\
&=X_{a^{\ast}}(P)r^{-1}n \\
&=0
\end{align*}
\end{proof}
\begin{lemma} \label{lem2} For each $m \in N_{in}$, $a \in _{k}T$ and $r \in L(k)$ we have $\pi_{e_{k}a}(r^{-1}m)=\pi_{ra}(m)$.
\end{lemma}
\begin{proof} First, for any $a_{1} \in _{k}T$ and $n \in N_{\tau(a_{1})}$ we have
\begin{center}
$r^{-1}\xi_{sa_{1}}'(n)=r^{-1}(s \otimes n)=r^{-1}s \otimes n = \displaystyle \sum_{u \in L(k)} u \otimes u^{\ast}(r^{-1}s)n = \displaystyle \sum_{u \in L(k)} \xi_{ua_{1}}'(u^{\ast}(r^{-1}s)n)$
\end{center}
Then
\begin{center}
$r^{-1}\xi_{sa_{1}}(n)=r^{-1}\xi_{a_{1}}' \xi_{sa_{1}}'(n) = \displaystyle \sum_{u \in L(k)} \xi_{a_{1}}' \xi_{ua_{1}}'(u^{\ast}(r^{-1}s)n)=\displaystyle \sum_{u \in L(k)} \xi_{ua_{1}}(u^{\ast}(r^{-1}s)n)$
\end{center}
Now let $m \in N_{in}$, then using \ref{3.10} and \ref{3.11} we obtain \\
\begin{align*}
\pi_{e_{k}a}(r^{-1}m)&=\displaystyle \sum_{s \in L(k), a_{1} \in _{k}T} \pi_{e_{k}a}r^{-1}\xi_{sa_{1}}\pi_{sa_{1}}(m)\\
&=\displaystyle \sum_{s,u \in L(k), a_{1} \in _{k}T} \pi_{e_{k}a} \xi_{ua_{1}}\left(u^{\ast}(r^{-1}s)\pi_{sa_{1}}(m)\right)\\
&=\displaystyle \sum_{s \in L(k)} e_{k}^{\ast}(r^{-1}s)\pi_{sa}(m)\\
&=\pi_{ra}(m)
\end{align*}
and the lemma follows.
\end{proof}
\section{Premutation of a decorated representation} \label{sec4}
Consider now the algebra with potential $(\mathcal{F}_{S}(\widetilde{M}),\widetilde{P})$. Recall from \cite[Definition 37]{2} that:
\begin{align*}
\widetilde{M}&:=\bar{e_{k}}M\bar{e_{k}} \oplus Me_{k}M \oplus (e_{k}M)^{\ast} \oplus ^{\ast}(Me_{k})\\
\widetilde{P}&:=[P]+\displaystyle \sum_{sa \in _{k}\hat{T},bt \in \tilde{T}_{k}}[btsa]((sa)^{\ast})(^{\ast}(bt))
\end{align*}
To a decorated representation $\mathcal{N}=(N,V)$ of the algebra with potential $(\mathcal{F}_{S}(M),P)$ we will associate a decorated representation $\widetilde{\mu_{k}}(\mathcal{N})=(\overline{N},\overline{V})$ of $(\mathcal{F}_{S}(\widetilde{M}),\widetilde{P})$ as follows. First set: \\
\begin{center}
$\overline{N}_{i}=N_{i}$, $\overline{V}_{i}=V_{i}$ if $i \neq k$
\end{center}
Define $\overline{N}_{k}$ and $\overline{V}_{k}$ as follows:
\begin{align*}
\overline{N}_{k}&=\frac{ker(\gamma)}{im(\beta)} \oplus im(\gamma) \oplus \frac{ker(\alpha)}{im(\gamma)} \oplus V_{k} \\
\overline{V}_{k}&=\frac{ker(\beta)}{ker(\beta) \cap im(\alpha)}
\end{align*}
Let
\begin{equation} \label{4.1}
\begin{split}
&J_{1}: \frac{ker(\gamma)}{im(\beta)} \rightarrow \overline{N}_{k} \\
&J_{2}: im(\gamma) \rightarrow \overline{N}_{k} \\
&J_{3}: \frac{ker(\alpha)}{im(\gamma)} \rightarrow \overline{N}_{k} \\
&J_{4}: V_{k} \rightarrow \overline{N}_{k}
\end{split}
\end{equation}
be the corresponding inclusions and let
\begin{equation} \label{4.2}
\begin{split}
&\Pi_{1}: \overline{N}_{k} \rightarrow \frac{ker(\gamma)}{im(\beta)}\\
&\Pi_{2}: \overline{N}_{k} \rightarrow im(\gamma)\\
&\Pi_{3}: \overline{N}_{k} \rightarrow \frac{ker(\alpha)}{im(\gamma)}\\
&\Pi_{4}: \overline{N}_{k} \rightarrow V_{k}
\end{split}
\end{equation}
denote the canonical projections. \\
\textbf{Remark}. Suppose that $M$ is $Z$-freely generated by $M_{0}$ and let $X$ be a finite dimensional left $S$-module. To endow $X$ with the structure of a left $T_{S}(M)$-module it suffices to give a map of left $S$-modules $M \otimes_{S} X \rightarrow X$. Let $i \neq j$ be integers in $[1,n]$. Then
\begin{align*}
\operatorname{Hom}_{D_{i}}(e_{i}Me_{j} \otimes_{S} X,X)& \cong \operatorname{Hom}_{D_{i}}((D_{i} \otimes_{F} e_{i}M_{0}e_{j} \otimes_{F} D_{j}) \otimes_{D_{j}} e_{j}X,e_{i}X) \\
&\cong \operatorname{Hom}_{D_{i}}(D_{i} \otimes_{F} e_{i}M_{0}e_{j} \otimes_{F} (D_{j} \otimes_{D_{j}} e_{j}X), e_{i}X) \\
&\cong \operatorname{Hom}_{D_{i}}(D_{i} \otimes_{F} e_{i}M_{0}e_{j} \otimes_{F} e_{j}X,e_{i}X) \\
&\cong \operatorname{Hom}_{F}(e_{i}M_{0}e_{j} \otimes_{F} e_{j}X, e_{i}X)
\end{align*}
Hence $\operatorname{Hom}_{S}(_{S}(M \otimes_{S} X),_{S}X) \cong \displaystyle \bigoplus_{i,j} \operatorname{Hom}_{F}(e_{i}M_{0}e_{j} \otimes_{F} e_{j}X, e_{i}X)$ as $F$-vector spaces. Therefore, a map of left $S$-modules $M \otimes_{S} X \rightarrow X$ is determined by a collection of $F$-linear maps $\theta_{i,j}: e_{i}M_{0}e_{j} \otimes_{F} e_{j}X \rightarrow e_{i}X$. It follows that each element $c \in e_{\sigma(c)}M_{0}e_{\tau(c)}$ gives rise to a multiplication operator $c_{X}: X_{\tau(c)} \rightarrow X_{\sigma(c)}$ given by $c_{X}(x):=\theta_{\sigma(c),\tau(c)}(c \otimes x)$. \\
Recall from \cite[Lemma 8.7]{2} that $\widetilde{M}$ is $Z$-freely generated by the following $Z$-subbimodule: \\
\begin{center}
$(\widetilde{M})_{0}:=\bar{e_{k}}M_{0}\bar{e_{k}} \oplus M_{0}e_{k}Se_{k}M_{0} \oplus e_{k}(_{0}N) \oplus N_{0}e_{k}$
\end{center}
\vspace{0.2in}
To endow $\overline{N}$ with the structure of a left $T_{S}(\widetilde{M})$-module, we proceed by cases, giving the action of each summand of $(\widetilde{M})_{0}$ on $\overline{N}$. \\
$\bullet$ Suppose first that $i,j \neq k$. Then
\begin{center}
$e_{i}(\widetilde{M})_{0}e_{j}=e_{i}M_{0}e_{j} \oplus e_{i}M_{0}e_{k}Se_{k}M_{0}e_{j}$
\end{center}
\vspace{0.1in}
Assume that $c \in e_{i}M_{0}e_{j}$. By assumption $i,j \neq k$ and thus both $\sigma(c)$ and $\tau(c)$ are not equal to $k$. Then $\overline{N}_{\tau(c)}=N_{\tau(c)}$ and $\overline{N}_{\sigma(c)}=N_{\sigma(c)}$. Therefore we set $c_{\overline{N}}=c_{N}$.
Assume now that $c$ is an element of the $Z$-local basis of $e_{i}M_{0}e_{k}Se_{k}M_{0}e_{j}$; then $c=\rho(bra)$ for some $b \in T \cap e_{i}Me_{k}$, $r \in L(k)$ and $a \in T \cap e_{k}Me_{j}$. In this case we set $\rho(bra)_{\overline{N}}:=(bra)_{N}$. \\
Recall that $\{^{\ast}b: b \in T\}$ is a $Z$-local basis of $_{0}N$ and $\{a^{\ast}: a \in T\}$ is a $Z$-local basis of $N_{0}$. Then $\{^{\ast}b: b \in T_{k}\}$ is a $Z$-free generating set of $e_{k}(^{\ast}M)$ and $\{a^{\ast}: a \in _{k}T\}$ is a $Z$-free generating set of $M^{\ast}e_{k}$. Suppose that $b=e_{\sigma(b)}be_{k}$, then $\tau(^{\ast} b)=\sigma(b)$. Therefore
\begin{align*}
\overline{N}_{in}&=\displaystyle \bigoplus_{b \in T_{k}} D_{k} \otimes_{F} N_{\tau(^{\ast}b)} \\
&=\displaystyle \bigoplus_{b \in T_{k}} D_{k} \otimes_{F} N_{\sigma(b)} \\
&=N_{out}
\end{align*}
whence $\overline{N}_{in}=N_{out}$. A similar argument shows that $\overline{N}_{out}=N_{in}$. \\
We have the inclusion maps
\begin{align*}
& j: ker(\gamma) \rightarrow N_{out} \\
& i: im(\gamma) \rightarrow N_{in} \\
& j': ker(\alpha) \rightarrow N_{in}
\end{align*}
and the canonical projections
\begin{align*}
& \pi_{1}: ker(\gamma) \rightarrow \frac{ker(\gamma)}{im(\beta)} \\
& \pi_{2}: ker(\alpha) \rightarrow \frac{ker(\alpha)}{im(\gamma)}
\end{align*}
\vspace{0.1in}
As in \cite{5} we introduce the following splitting data: \\
\begin{enumerate}[(a)]
\item Choose a $D_{k}$-linear map $p: N_{out} \rightarrow ker(\gamma)$ such that $pj=id_{ker(\gamma)}$.
\item Choose a $D_{k}$-linear map $\sigma_{2}: ker(\alpha)/im(\gamma) \rightarrow ker(\alpha)$ such that $\pi_{2} \sigma_{2}=id_{ker(\alpha)/im(\gamma)}$.
\end{enumerate}
$\bullet$ Suppose now that $i \neq k$ and that $j=k$. Then $e_{i}(\widetilde{M})_{0}e_{k}=e_{i}(N_{0})e_{k}$. Let $a \in _{k}T$, then $\tau(a^{\ast})=k$ and $\sigma(a^{\ast})=\tau(a)$. We define an $F$-linear map: \\
\begin{center}
$\overline{N}(a^{\ast}): \overline{N}_{k} \rightarrow N_{\tau(a)}$
\end{center}
as follows
\begin{equation} \label{4.3}
\begin{split}
\overline{N}(a^{\ast})J_{1}&=0 \\
\overline{N}(a^{\ast})J_{2}&=c_{k}^{-1}\pi_{e_{k}a}i \\
\overline{N}(a^{\ast})J_{3}&=c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2} \\
\overline{N}(a^{\ast})J_{4}&=0
\end{split}
\end{equation}
where $c_{k}=[D_{k}:F]$. \\
$\bullet$ In what follows, we let $\gamma=i\gamma'$ where $\gamma': N_{out} \twoheadrightarrow im(\gamma)$. Suppose now that $i=k$ and that $j \neq k$. Then \\
\begin{center}
$e_{i}(\widetilde{M})_{0}e_{j}=e_{k}(_{0}N)e_{j}$
\end{center}
\vspace{0.1in}
Since $j \neq k$ then $\overline{N}_{j}=N_{j}=e_{j}N$. For every $b \in T_{k}$, we define an $F$-linear map: \\
\begin{center}
$\overline{N}(^{\ast}b): N_{\sigma(b)} \rightarrow \overline{N}_{k}$
\end{center}
as follows
\begin{equation} \label{4.4}
\begin{split}
\Pi_{1} \overline{N}(^{\ast}b) & = -\pi_{1}p\xi_{be_{k}} \\
\Pi_{2} \overline{N}(^{\ast}b) &= - \gamma' \xi_{be_{k}} \\
\Pi_{3} \overline{N}(^{\ast}b) &= 0 \\
\Pi_{4} \overline{N}(^{\ast}b) &=0
\end{split}
\end{equation}
The previous construction makes $\overline{N}$ a left $T_{S}(\widetilde{M})$-module. To see that $\overline{N}$ is in fact a module over the completed algebra $\mathcal{F}_{S}(\widetilde{M})$ it suffices to note that the $\mathcal{F}_{S}(M)$-module $N$ is nilpotent \cite[p. 39]{5} and thus $\overline{N}$ is annihilated by $\langle \widetilde{M} \rangle^{n}$ for large enough $n$.
\begin{lemma} \label{lem3} Let $\rho: \mathcal{F}_{S}(M)_{\hat{k},\hat{k}} \rightarrow \mathcal{F}_{S}((\mu_{k}M)_{\hat{k},\hat{k}})$ be the algebra isomorphism introduced above and let $u \in \mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$. Then $\rho(u)_{\overline{N}}=u_{N}$.
\end{lemma}
\begin{proof}
First note that $\mathcal{F}_{S}(M)=S \oplus M \oplus \mathcal{F}_{S}(M)^{\geq 2}$. Let $u \in \mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$, then $u=s+m+x$ where $s \in \bar{e_{k}}S$, $m \in M_{\hat{k},\hat{k}}$ and $x \in \left(\mathcal{F}_{S}(M)^{\geq 2}\right)_{\hat{k},\hat{k}}$. Then
\begin{center}
$\rho(u)=s+m+\rho(x)$
\end{center}
By continuity and linearity of $\rho$, it suffices to treat the case when $x$ is of the form $s(x_{1})x_{1}s(x_{2}) \hdots s(x_{l})x_{l}$, where $s(x_{i}) \in L(\sigma(x_{i}))$ and $x_{i} \in T$. We use induction on $l$. Suppose first that $x=s(x_{1})x_{1}s(x_{2})x_{2}$, where we may assume that $x_{1}s(x_{2})x_{2} \in M_{0}e_{k}Se_{k}M_{0}$; then \\
\begin{center}
$\rho(x)=s(x_{1})\rho(x_{1}s(x_{2})x_{2})$
\end{center}
Therefore
\begin{center}
$\rho(x)n = s(x_{1}) \rho(x_{1}s(x_{2})x_{2})n$
\end{center}
\smallskip
Since $[b_{q}ra_{s}]_{\overline{N}}=(b_{q}ra_{s})_{N}$, we have $\rho(b_{q}ra_{s})n=b_{q}ra_{s}n$. It follows that
\begin{align*}
\rho(x)n &= s(x_{1}) \rho(x_{1}s(x_{2})x_{2})n \\
&=s(x_{1}) x_{1}s(x_{2})x_{2}n \\
&=xn
\end{align*}
Suppose now that the claim holds whenever $x$ has length less than $l$. We have:
\begin{center}
$x=s(x_{1})x_{1}s(x_{2})x_{2}\hdots s(x_{l-2})x_{l-2} s(x_{l-1})x_{l-1}s(x_{l})x_{l}$
\end{center}
Using the fact that $\rho$ is an algebra morphism together with the base case $l=2$ we obtain:
\begin{align*}
\rho(x)n &=\rho(s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2}) \rho(s(x_{l-1})x_{l-1}s(x_{l})x_{l})n \\
&=\rho(s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2}) s(x_{l-1})\rho(x_{l-1}s(x_{l})x_{l})n \\
&=\rho(s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2})s(x_{l-1})x_{l-1}s(x_{l})x_{l}n \\
&=\rho(s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2})n'
\end{align*}
where $n':= s(x_{l-1})x_{l-1}s(x_{l})x_{l}n$. Since $s(x_{1})x_{1}\hdots s(x_{l-2})x_{l-2}$ has length less than $l$, the induction hypothesis gives:
\begin{center}
$\rho(s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2})n'=s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2}n'$
\end{center}
Therefore
\begin{align*}
\rho(x)n&=s(x_{1})x_{1} \hdots s(x_{l-2})x_{l-2}n' \\
&=s(x_{1})x_{1}\hdots s(x_{l-2})x_{l-2}s(x_{l-1})x_{l-1}s(x_{l})x_{l}n \\
&=xn
\end{align*}
Thus $\rho(x)n=xn$, completing the proof.
\end{proof}
\begin{prop} The pair $\widetilde{\mu_{k}}(\mathcal{N})=(\overline{N},\overline{V})$ is a decorated representation of $(\mathcal{F}_{S}(\widetilde{M}),\widetilde{P})$.
\end{prop}
\begin{proof} We have to verify that $\overline{N}$ is annihilated by $R(\widetilde{P})$. It suffices to check that $(X_{c^{\ast}}(\widetilde{P}))_{\overline{N}}=0$ for each element $c$ of the $Z$-local basis of $(\widetilde{M})_{0}$. We proceed by cases. \\
$\bullet$ Suppose first that $c \in T \cap \bar{e_{k}}M_{0}\bar{e_{k}}$ and let $n \in N$. Then by Lemma \ref{lem3}
\begin{align*}
\bigl(X_{c^{\ast}}(\widetilde{P})\bigr)_{\overline{N}}(n)&=X_{c^{\ast}}(\widetilde{P})n \\
&=X_{c^{\ast}}(\rho(P))n \\
&=\rho(X_{c^{\ast}}(P))n \\
&=X_{c^{\ast}}(P)n
\end{align*}
Since $\mathcal{N}=(N,V)$ is a decorated representation of $(\mathcal{F}_{S}(M),P)$ then $X_{c^{\ast}}(P)n=0$. \\
$\bullet$ Suppose now that $c=\rho(bra)$ where $b \in T_{k}$, $r \in L(k)$ and $a \in _{k}T$. By \cite[p.58]{2} we have the following equality: \\
\begin{center}
$X_{[bra]^{\ast}}(\widetilde{P})=X_{[bra]^{\ast}}(\rho(P))+c_{k}a^{\ast}r^{-1}(^{\ast}b)$
\end{center}
where $c_{k}=[D_{k}:F]$. \\
We now compute the image of the operator $(c_{k}a^{\ast}r^{-1}(^{\ast}b))_{\overline{N}}$. Let $n \in N_{\sigma(b)}$, then remembering \ref{4.3}, \ref{4.4} and Lemma \ref{lem2} we obtain
\begin{align*}
c_{k}\overline{N}(a^{\ast})r^{-1}\overline{N}(^{\ast}b)(n)&=c_{k}\overline{N}(a^{\ast})r^{-1}\left(-\pi_{1}p\xi_{be_{k}}(n),-\gamma'\xi_{be_{k}}(n),0,0\right)\\
&=c_{k}\overline{N}(a^{\ast})\left(-r^{-1}\pi_{1}p\xi_{be_{k}}(n),-r^{-1}\gamma' \xi_{be_{k}}(n),0,0\right) \\
&=c_{k}c_{k}^{-1} \pi_{e_{k}a}i \left (-r^{-1}\gamma' \xi_{be_{k}}(n)\right) \\
&=-\pi_{e_{k}a}\left(r^{-1}\gamma' \xi_{be_{k}}(n)\right) \\
&=-\pi_{ra}\left(\gamma' \xi_{be_{k}}(n)\right) \\
&=-\gamma_{ra,be_{k}}(n) \\
&=-\displaystyle \sum_{w \in L(k)} r^{\ast}(e_{k}^{-1}w)Y_{[bwa]}(P)n \\
&=-\displaystyle \sum_{w \in L(k)} r^{\ast}(w)Y_{[bwa]}(P)n \\
&=-Y_{[bra]}(P)n
\end{align*}
and by Lemma \ref{lem3}
\begin{align*}
\bigl(X_{[bra]^{\ast}}(\rho(P))\bigr)_{\overline{N}}(n)&=X_{[bra]^{\ast}}(\rho(P))n \\
&=\rho(Y_{[bra]}(P))n \\
&=Y_{[bra]}(P)n
\end{align*}
Combining the above: $(X_{[bra]^{\ast}}(\widetilde{P}))_{\overline{N}}=\bigl(X_{[bra]^{\ast}}(\rho(P))\bigr)_{\overline{N}}+(c_{k}a^{\ast}r^{-1}(^{\ast}b))_{\overline{N}}=0$, as desired. It remains to show that $(X_{c^{\ast}}(\widetilde{P}))_{\overline{N}}=0$ for the remaining elements $c$ of the $Z$-local basis of $(\widetilde{M})_{0}$.
We now show that $(X_{a^{\ast}}(\widetilde{P}))_{\overline{N}}=0$ for each $a \in _{k}T$. Using \cite[p.58]{2} we have
\begin{align*}
(X_{a^{\ast}}(\widetilde{P}))_{\overline{N}}&=\biggl(c_{k}\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}(^{\ast}b)\rho(bra)\biggr)_{\overline{N}} \\
&=c_{k} \displaystyle \sum_{b \in T_{k},r \in L(k)} (r^{-1}(^{\ast}b)\rho(bra))_{\overline{N}}
\end{align*}
Let $n \in N_{\tau(a)}$. Then using \ref{4.4} and \ref{3.15}, we obtain the following equalities
\begin{align*}
(X_{a^{\ast}}(\widetilde{P}))_{\overline{N}}(n)&=c_{k}\displaystyle \sum_{b \in T_{k},r \in L(k)} r^{-1}(^{\ast}b)(bran) \\
&=c_{k} \displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1} \left( -\pi_{1}p\xi_{be_{k}}(bran), -\gamma' \xi_{be_{k}}(bran),0,0\right) \\
&=-c_{k} \left ( \displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1} \pi_{1}p \xi_{be_{k}}(bran), \displaystyle \sum_{b \in T_{k},r \in L(k)} r^{-1} \gamma' \xi_{be_{k}}(bran),0,0\right) \\
&=-c_{k}\left ( \displaystyle \sum_{b \in T_{k}, r \in L(k)} \pi_{1}p r^{-1} \xi_{be_{k}}(bran), \displaystyle \sum_{b \in T_{k}, r \in L(k)} \gamma' r^{-1} \xi_{be_{k}}(bran),0,0\right) \\
&=-c_{k} \left( \displaystyle \sum_{b \in T_{k}, r \in L(k)} \pi_{1} p \xi_{br} \pi_{br} \beta(an), \displaystyle \sum_{b \in T_{k}, r \in L(k)} \gamma' \xi_{br} \pi_{br} \beta(an),0,0\right) \\
&=-c_{k} \left( \pi_{1}\beta(an),\gamma' \beta(an),0,0\right) \\
&=\left(0,0,0,0\right)
\end{align*}
by Lemma \ref{lem1}. This proves that $(X_{a^{\ast}}(\widetilde{P}))_{\overline{N}}=0$ for each $a \in _{k}T$. Finally, let us show that $(X_{^{\ast}b}(\widetilde{P}))_{\overline{N}}=0$ for each $b \in T_{k}$. Let us recall the following formula from \cite[p.58]{2}:
\begin{center}
$X_{^{\ast}b}(\widetilde{P})=c_{k}\displaystyle \sum_{a \in _{k}T,r \in L(k)} \rho(bra)a^{\ast}r^{-1}$
\end{center}
Now let $n \in \overline{N}_{k}$. Then using \ref{4.3} and Lemma \ref{lem2} we obtain \\
\begin{align*}
(X_{^{\ast}b}(\widetilde{P}))_{\overline{N}}(n)&=c_{k}\displaystyle \sum_{a \in _{k}T, r \in L(k)} \rho(bra) \overline{N}(a^{\ast})(r^{-1}n)\\
&=c_{k}\displaystyle \sum_{a \in _{k}T, r \in L(k)} \rho(bra) \overline{N}(a^{\ast})\left( \displaystyle \sum_{l=1}^{4} J_{l} \Pi_{l}(r^{-1}n)\right) \\
&=c_{k}\displaystyle \sum_{a \in _{k}T, r \in L(k)} \left( \rho(bra)c_{k}^{-1}\pi_{e_{k}a}i \Pi_{2}(r^{-1}n)+\rho(bra)c_{k}^{-1}\pi_{e_{k}a}j' \sigma_{2} \Pi_{3}(r^{-1}n) \right) \\
&=\displaystyle \sum_{a \in _{k}T, r \in L(k)} \rho(bra)\pi_{e_{k}a}i \Pi_{2}(r^{-1}n) + \displaystyle \sum_{a \in _{k}T, r \in L(k)} \rho(bra) \pi_{e_{k}a}j' \sigma_{2} \Pi_{3}(r^{-1}n) \\
&=b \displaystyle \sum_{a \in _{k}T, r \in L(k)} ra \pi_{e_{k}a}\left(r^{-1} \Pi_{2}(n)\right) + b \displaystyle \sum_{a \in _{k}T, r \in L(k)} ra \pi_{e_{k}a} \left(r^{-1} \sigma_{2} \Pi_{3}(n)\right)\\
&=b \displaystyle \sum_{a \in _{k}T, r \in L(k)} ra \pi_{ra}(\Pi_{2}n)+ b \displaystyle \sum_{a \in _{k}T, r \in L(k)} ra \pi_{ra}\left(\sigma_{2}\Pi_{3}(n)\right)\\
&=b \displaystyle \sum_{a \in _{k}T, r \in L(k)} \alpha \xi_{ra} \pi_{ra}(\Pi_{2}n) + b \displaystyle \sum_{a \in _{k}T, r \in L(k)} \alpha \xi_{ra} \pi_{ra} \left(\sigma_{2}\Pi_{3}(n)\right) \\
&=b \alpha(\Pi_{2}n)+b \alpha \left(\sigma_{2} \Pi_{3}(n)\right) \\
&=0
\end{align*}
by Lemma \ref{lem1}. This completes the proof that $\overline{N}$ is annihilated by $R(\widetilde{P})$.
\end{proof}
\begin{definition}
We will refer to $\widetilde{\mu_{k}}(\mathcal{N})=(\overline{N},\overline{V})$ as the premutated decorated representation.
\end{definition}
As in \cite[Proposition 10.9]{6} we now show that the isoclass of the premutated decorated representation does not depend on the choice of the splitting data.
\begin{prop} The isoclass of the decorated representation $\widetilde{\mu_{k}}(\mathcal{N})=(\overline{N},\overline{V})$ does not depend on the choice of the splitting data.
\end{prop}
\begin{proof}
Suppose that we fix $p: N_{out} \rightarrow ker(\gamma)$ such that $pj =id_{ker(\gamma)}$ where $j: ker(\gamma) \rightarrow N_{out}$ is the inclusion map. Let $p': N_{out} \rightarrow ker(\gamma)$ be another map satisfying $p'j=id_{ker(\gamma)}$.
Then the restriction of the map $p'-p$ to the subspace $ker(\gamma)$ is the zero map. By the first isomorphism theorem, $N_{out}/ker(\gamma) \cong im(\gamma)$. \\
Consider the following sequence of maps:
\vspace{0.2in}
\begin{center}
$ker(\gamma) \stackrel{j}{\longrightarrow} N_{out} \stackrel{\gamma'}{\longrightarrow} N_{out}/ker(\gamma) \cong im(\gamma) $
\end{center}
\vspace{0.2in}
By the universal property of the cokernel of $j$, there exists a unique linear map $\xi: im(\gamma) \rightarrow ker(\gamma)$ making the following diagram commute
\begin{equation*}
\xymatrix{%
ker(\gamma)\ar[r]^{j} &N_{out} \ar[r]^{\gamma'} \ar[d]^{p'-p} &im(\gamma) \ar@{-->}[ld]^\xi \\
&ker(\gamma)}
\end{equation*}
It follows that $p'=p+\xi \gamma'$. \\
Now suppose that we fix a map $\sigma_{2}: ker(\alpha)/im(\gamma) \rightarrow ker(\alpha)$ such that $\pi_{2}\sigma_{2}=id_{ker(\alpha)/im(\gamma)}$. Let $\sigma'_{2}: ker(\alpha)/im(\gamma) \rightarrow ker(\alpha)$ be another map satisfying $\pi_{2}\sigma'_{2}=id_{ker(\alpha)/im(\gamma)}$. By the universal property of the kernel of $\pi_{2}$, there exists a unique linear map $\eta: ker(\alpha)/im(\gamma) \rightarrow im(\gamma)$ making the following diagram commute
\begin{center}
\[\xymatrixcolsep{5pc}\xymatrix{%
ker(\alpha)/im(\gamma) \ar@{-->}[d]^\eta \ar[rd]^{\sigma'_{2}-\sigma_{2}} & \\
im(\gamma) \ar[r] & ker(\alpha) \ar[r]^{\pi_{2}} & ker(\alpha)/im(\gamma)}
\]
\end{center}
\vspace{0.2in}
Thus $\sigma'_{2}=\sigma_{2} + \eta$, where we regard $\eta$ as a map into $ker(\alpha)$ via the inclusion $im(\gamma) \subseteq ker(\alpha)$. \\
Let $\overline{N'}(a^{\ast})$ be the map in \ref{4.3} with $\sigma_{2}$ replaced by $\sigma_{2}'$. Similarly, let $\overline{N'}(^{\ast}b)$ be the map in $\ref{4.4}$ with $p$ replaced by $p'$.\\
As in \cite[Proposition 10.9]{6} we now construct a linear automorphism $\psi: \overline{N}_{k} \rightarrow \overline{N'}_{k}$ such that $\overline{N}(a^{\ast})=\overline{N'}(a^{\ast})\psi$ and $\psi \overline{N}(^{\ast}b)=\overline{N'}(^{\ast}b)$. Since $\overline{N}_{k}=\frac{ker(\gamma)}{im(\beta)} \oplus im(\gamma) \oplus \frac{ker(\alpha)}{im(\gamma)} \oplus V_{k}$, we may realize $\psi$ as a $4 \times 4$ matrix. Define $\psi$ as
\begin{center}
$\psi=
\begin{pmatrix}
I & \pi_{1}\xi & 0 &0 \\
0 & I & -\eta & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I
\end{pmatrix}$
\end{center}
where $I$ is the identity transformation. Note that $\psi$ is invertible. We have
\begin{align*}
\overline{N'}(a^{\ast})\psi&=\begin{pmatrix} 0 & c_{k}^{-1}\pi_{e_{k}a}i & c_{k}^{-1}\pi_{e_{k}a}j' \sigma_{2}' & 0 \end{pmatrix} \begin{pmatrix}
I & \pi_{1} \xi & 0 &0 \\
0 & I & -\eta & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I
\end{pmatrix} \\
&=\begin{pmatrix} 0 & c_{k}^{-1}\pi_{e_{k}a}i & -c_{k}^{-1}\pi_{e_{k}a}i\eta+c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2}' & 0 \end{pmatrix} \\
&=\begin{pmatrix} 0 & c_{k}^{-1}\pi_{e_{k}a}i & c_{k}^{-1}\pi_{e_{k}a}(-i\eta+j'\sigma_{2}') & 0 \end{pmatrix} \\
&=\begin{pmatrix} 0 & c_{k}^{-1}\pi_{e_{k}a}i & c_{k}^{-1}\pi_{e_{k}a}(-i\eta+j'\sigma_{2}+j'\eta) & 0 \end{pmatrix} \\
&=\begin{pmatrix} 0 & c_{k}^{-1}\pi_{e_{k}a}i & c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2} & 0 \end{pmatrix} \\
&=\overline{N}(a^{\ast})
\end{align*}
On the other hand:
\begin{align*}
\psi \overline{N}(^{\ast}b)&=\begin{pmatrix} -\pi_{1}(p+\xi \gamma')\xi_{be_{k}} \\ -\gamma' \xi_{be_{k}} \\ 0 \\ 0 \end{pmatrix} \\
&=\begin{pmatrix} -\pi_{1}p'\xi_{be_{k}} \\ -\gamma' \xi_{be_{k}} \\ 0 \\ 0 \end{pmatrix} \\
&=\overline{N'}(^{\ast}b)
\end{align*}
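Note also that, since $\psi-I$ is nilpotent (indeed $(\psi-I)^{3}=0$), the inverse of $\psi$ can be written down explicitly:
\begin{center}
$\psi^{-1}=
\begin{pmatrix}
I & -\pi_{1}\xi & -\pi_{1}\xi\eta & 0 \\
0 & I & \eta & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I
\end{pmatrix}$
\end{center}
as one checks by multiplying against the matrix defining $\psi$.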
Now let $\varphi: \overline{N} \rightarrow \overline{N'}$ be the map defined as $\varphi_{j}=id$ if $j \neq k$ and $\varphi_{k}=\psi$. Suppose first that $a \in _{k}T$, $d_{1} \in D_{\tau(a)}$, $d_{2} \in D_{k}$ and $w \in \overline{N}_{k}$. Then
\begin{align*}
\varphi(d_{1}a^{\ast}d_{2}w)&=d_{1}a^{\ast}d_{2}w \\
&=d_{1}\overline{N}(a^{\ast})(d_{2}w) \\
&=d_{1}\overline{N'}(a^{\ast})\psi(d_{2}w) \\
&=d_{1}a^{\ast}d_{2}\varphi(w)
\end{align*}
Now suppose that $b \in T_{k}$, $d_{1} \in D_{k}$, $d_{2} \in D_{\sigma(b)}$ and $n \in N_{\sigma(b)}$. Then
\begin{align*}
\varphi(d_{1}(^{\ast}b)d_{2}n)&=\psi(d_{1}(^{\ast}b)d_{2}n) \\
&=\psi(d_{1}\overline{N}(^{\ast}b)(d_{2}n)) \\
&=d_{1}\psi(\overline{N}(^{\ast}b)(d_{2}n)) \\
&=d_{1}\overline{N'}(^{\ast}b)(d_{2}n) \\
&=d_{1}(^{\ast}b)d_{2}\varphi(n)
\end{align*}
Therefore for each $u \in \mathcal{F}_{S}(\mu_{k}M)$ we obtain a commutative diagram:
\begin{center}
\begin{equation*}
\xymatrix{%
\overline{N} \ar[r]^{u_{\overline{N}}} \ar[d]^{\varphi} & \overline{N} \ar[d]^{\varphi} \\
\overline{N'} \ar[r]^{u_{\overline{N'}}} & \overline{N'}}
\end{equation*}
\end{center}
This proves that the decorated representations $\widetilde{\mu_{k}}(\mathcal{N})=(\overline{N},\overline{V})$ and $(\overline{N'},\overline{V})$ are right-equivalent, as desired.
\end{proof}
Let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M),P)$ and let $\mathcal{N}'=(N',V')$ be a decorated representation of $(\mathcal{F}_{S}(M'),P')$. Suppose that these representations are right-equivalent. Then there exists an algebra isomorphism $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M')$ such that $\varphi(P)$ is cyclically equivalent to $P'$. By \cite[Theorem 5.3]{2} we have $R(P')=R(\varphi(P))=\varphi(R(P))$. Using the representation $\mathcal{N}=(N,V)$ we construct a decorated representation $\widehat{\mathcal{N}}=(\widehat{N},V)$ of $(\mathcal{F}_{S}(M'),\varphi(P))$ as follows: let $\widehat{N}=N$ as $F$-vector spaces and, given $u \in \mathcal{F}_{S}(M')$ and $n \in N$, define $u \ast n:=\varphi^{-1}(u)n$. Clearly $R(P')\widehat{N}=0$. We will denote $\widehat{N}$ by $\widehat{N}={}^{\varphi^{-1}}N$.
\begin{prop} \label{prop5} Let $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$ be a unitriangular automorphism and let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M),P)$ where $P$ is a potential in $\mathcal{F}_{S}(M)$ such that $e_{k}Pe_{k}=0$. Then:
\begin{enumerate}[(a)]
\item There exists a unitriangular automorphism $\hat{\varphi}: \mathcal{F}_{S}(\mu_{k}M) \rightarrow \mathcal{F}_{S}(\mu_{k}M)$ such that $\hat{\varphi}(\mu_{k}P)$ is cyclically equivalent to $\mu_{k}(\varphi(P))$.
\item There exists an isomorphism of decorated representations $\psi: \widetilde{\mu_{k}}(\mathcal{N}) \rightarrow \widetilde{\mu_{k}}(\widehat{\mathcal{N}})$.
\end{enumerate}
\end{prop}
\begin{proof} The fact that (a) holds is an immediate consequence of \cite[Theorem 8.12]{2}. Let us show (b). Let $\hat{\alpha},\hat{\beta}$ and $\hat{\gamma}$ be the maps associated to the representation $\widehat{N}$. Recall that $_{k}\hat{T}=\{sa: a \in _{k}T, s \in L(k)\}$ is a local basis for $(e_{k}M)_{S}$. We have that $\varphi(sa)=\displaystyle \sum_{r \in L(k), a_{1} \in _{k}T} ra_{1}C_{ra_{1},sa}$ for some $C_{ra_{1},sa} \in e_{\tau(a_{1})}\mathcal{F}_{S}(M)e_{\tau(a)}$. \\
Define $C: N_{in} \rightarrow N_{in}$ as the $F$-linear map such that for all $r,s \in L(k)$, $a,a_{1} \in _{k}T$, the map:
\begin{center}
$\pi_{ra_{1}}C\xi_{sa}: N_{\tau(a)} \rightarrow N_{\tau(a_{1})}$
\end{center}
is given by
\begin{center}
$\pi_{ra_{1}}C\xi_{sa}(n)=\varphi^{-1}(C_{ra_{1},sa})n$
\end{center}
for every $n \in N_{\tau(a)}$. Let us show that $\hat{\alpha}C=\alpha$. It suffices to show that for all $a \in _{k}T, r \in L(k)$ we have $\hat{\alpha}C\xi_{ra}=\alpha \xi_{ra}$. \\
In what follows, for $h \in \mathcal{F}_{S}(M)$ and $n \in N$, $h \ast n=\varphi^{-1}(h)n$ denotes the product in $\widehat{N}$. \\
We have
\begin{align*}
\hat{\alpha}C\xi_{ra}(n)&=\displaystyle \sum_{s \in L(k), a_{1} \in _{k}T} \hat{\alpha}\xi_{sa_{1}}\pi_{sa_{1}}C\xi_{ra}(n) \\
&=\displaystyle \sum_{s \in L(k), a_{1} \in _{k}T} \hat{\alpha} \xi_{sa_{1}}\left(\varphi^{-1}(C_{sa_{1},ra})n\right) \\
&=\displaystyle \sum_{s \in L(k), a_{1} \in _{k}T} sa_{1} \ast \varphi^{-1}(C_{sa_{1},ra})n \\
&=\varphi^{-1} \left( \displaystyle \sum_{s \in L(k), a_{1} \in _{k}T} sa_{1}C_{sa_{1},ra}\right)n \\
&=\varphi^{-1}(\varphi(ra))n \\
&=ran \\
&=\alpha \xi_{ra}(n)
\end{align*}
and therefore $\hat{\alpha}C=\alpha$. This yields the following equalities:
\begin{equation} \label{4.5}
\begin{split}
\operatorname{ker}(\alpha)&=C^{-1}(\operatorname{ker}(\hat{\alpha})) \\
\operatorname{im}(\alpha)&=\operatorname{im}(\hat{\alpha})
\end{split}
\end{equation}
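Both equalities in \ref{4.5} follow from $\hat{\alpha}C=\alpha$ together with the bijectivity of $C$: for $x \in N_{in}$ we have $\alpha(x)=0$ if and only if $Cx \in \operatorname{ker}(\hat{\alpha})$, that is, $\operatorname{ker}(\alpha)=C^{-1}(\operatorname{ker}(\hat{\alpha}))$; and since $C$ is surjective,
\begin{center}
$\operatorname{im}(\alpha)=\hat{\alpha}(C(N_{in}))=\hat{\alpha}(N_{in})=\operatorname{im}(\hat{\alpha})$
\end{center}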
Similarly, for each $b \in T_{k}$ and $s \in L(k)$:
\begin{center}
$\varphi(bs)=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} D_{bs,b_{1}r}b_{1}r$
\end{center}
for some $D_{bs,b_{1}r} \in e_{\sigma(b)}\mathcal{F}_{S}(M)e_{\sigma(b_{1})}$. \\
Thus there exists an $F$-linear map $D: N_{out} \rightarrow N_{out}$ such that for all $r,s \in L(k)$, $b,b_{1} \in T_{k}$ we have:
\begin{center}
$\pi_{bs}D \xi_{b_{1}r}(n)=\varphi^{-1}(D_{bs,b_{1}r})n$
\end{center}
for every $n \in N_{\sigma(b_{1})}$. We now show that $D\hat{\beta}=\beta$. It suffices to show that for all $b \in T_{k}$, $s \in L(k)$ we have $\pi_{bs}D\hat{\beta}=\pi_{bs}\beta$. Let $n \in N_{k}$, then
\begin{align*}
\pi_{bs}D\hat{\beta}(n)&=\displaystyle \sum_{r \in L(k),b_{1} \in T_{k}} \pi_{bs}D\xi_{b_{1}r}\pi_{b_{1}r} \hat{\beta}(n) \\
&=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} \varphi^{-1}(D_{bs,b_{1}r})((b_{1}r) \ast n) \\
&=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} \varphi^{-1}(D_{bs,b_{1}r})\varphi^{-1}(b_{1}r)n \\
&=\varphi^{-1} \left(\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} D_{bs,b_{1}r} b_{1}r\right)n \\
&=\varphi^{-1}(\varphi(bs))n \\
&=bsn \\
&=\pi_{bs}\beta(n)
\end{align*}
Therefore $D\hat{\beta}=\beta$, as claimed. Then we obtain the following equalities
\begin{equation} \label{4.6}
\begin{split}
\operatorname{im}(\beta)&=D(\operatorname{im}(\hat{\beta})) \\
\operatorname{ker}(\beta)&=\operatorname{ker}(\hat{\beta})
\end{split}
\end{equation}
\begin{lemma} \label{lem4} We have that $\hat{\gamma}=C \gamma D$.
\end{lemma}
\begin{proof}
Using \cite[Lemma 9.2]{2} we obtain an algebra isomorphism: \\
\begin{center}
$\rho: \mathcal{F}_{S}(M)_{\hat{k},\hat{k}} \rightarrow \mathcal{F}_{S}((\mu_{k}M)_{\hat{k},\hat{k}})$
\end{center}
\vspace{0.1in}
We may view $\rho$ as a monomorphism of algebras: \\
\begin{center}
$\rho: \bar{e_{k}}\mathcal{F}_{S}(M)\bar{e_{k}} \rightarrow \bar{e_{k}} \mathcal{F}_{S}(\mu_{k}M) \bar{e_{k}}$
\end{center}
\vspace{0.1in}
By \cite[Proposition 8.11]{2} we have algebra isomorphisms:
\begin{align*}
&\phi: \mathcal{F}_{S}(\widehat{M}) \rightarrow \mathcal{F}_{S}(\widehat{M}) \\
&\hat{\varphi}: \mathcal{F}_{S}(\mu_{k}M) \rightarrow \mathcal{F}_{S}(\mu_{k}M)
\end{align*}
where $\widehat{M}:=M \oplus (e_{k}M)^{\ast} \oplus ^{\ast} (Me_{k})$ and
\begin{align*}
&i_{M}: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(\widehat{M}) \\
&i_{\mu_{k}M}: \mathcal{F}_{S}(\mu_{k}M) \rightarrow \mathcal{F}_{S}(\widehat{M})
\end{align*}
are the inclusion maps. We also have commutative diagrams
\begin{center}
\begin{equation*}
\xymatrix{%
\mathcal{F}_{S}(\mu_{k}M) \ar[r]^{\hat{\varphi}} \ar[d]^{i_{\mu_{k}M}} & \mathcal{F}_{S}(\mu_{k}M) \ar[d]^{i_{\mu_{k}M}} \\
\mathcal{F}_{S}(\widehat{M}) \ar[r]^{\phi} & \mathcal{F}_{S}(\widehat{M})}
\end{equation*}
\end{center}
\begin{center}
\begin{equation*}
\xymatrix{%
\mathcal{F}_{S}(M) \ar[r]^{\varphi} \ar[d]^{i_{M}} & \mathcal{F}_{S}(M) \ar[d]^{i_{M}} \\
\mathcal{F}_{S}(\widehat{M}) \ar[r]^{\phi} & \mathcal{F}_{S}(\widehat{M})}
\end{equation*}
\end{center}
Let us see that the previous diagrams induce a commutative diagram:
\begin{center}
\begin{equation*}
\xymatrix{%
\mathcal{F}_{S}(M)_{\hat{k},\hat{k}} \ar[r]^{\rho} \ar[d]^{\varphi} & \mathcal{F}_{S}((\mu_{k}M)_{\hat{k},\hat{k}}) \ar[d]^{\hat{\varphi}} \\
\mathcal{F}_{S}(M)_{\hat{k},\hat{k}} \ar[r]^{\rho} & \mathcal{F}_{S}((\mu_{k}M)_{\hat{k},\hat{k}})}
\end{equation*}
\end{center}
Indeed, on one hand $i_{\mu_{k}M}\rho \varphi = i_{M}\varphi$ and on the other hand:
\begin{center}
$i_{\mu_{k}M}\hat{\varphi}\rho=\phi i_{\mu_{k}M} \rho = \phi i_{M}=i_{M} \varphi$
\end{center}
Since $i_{\mu_{k}M}$ is injective, it follows that $\rho \varphi=\hat{\varphi}\rho$. \\
Let $\hat{\Delta}: T_{S}(\mu_{k}M) \rightarrow T_{S}(\mu_{k}M) \otimes_{Z} T_{S}(\mu_{k}M)$ be the derivation associated to $T_{S}(\mu_{k}M)$. Define maps:
\begin{align*}
&\rho^{k}: \bar{e_{k}}T_{S}(M) \rightarrow T_{S}(\mu_{k}M) \\
&{}^{k}\rho: T_{S}(M)\bar{e_{k}} \rightarrow T_{S}(\mu_{k}M)
\end{align*}
as follows: $\rho^{k}(z)=\rho(z\bar{e_{k}})$ and $^{k}\rho(z)=\rho(\bar{e_{k}}z)$.
\begin{lemma} \label{lem5} For $z \in T_{S}(M)_{\hat{k},\hat{k}}$ we have that $\hat{\Delta}\rho(z)=(\rho^{k} \otimes (^{k}\rho))\Delta(z)$.
\end{lemma}
\begin{proof} The $T_{S}(\mu_{k}M)$-bimodule $T_{S}(\mu_{k}M) \otimes_{Z} T_{S}(\mu_{k}M)$ is a $T_{S}(M)_{\hat{k},\hat{k}}$-bimodule via the map $\rho$. We have that $\hat{\Delta}$ is a $T_{S}(M)_{\hat{k},\hat{k}}$-derivation, $\rho^{k} \otimes (^{k} \rho)$ is a map of $T_{S}(M)_{\hat{k},\hat{k}}$-bimodules and $\Delta$ is a derivation of $T_{S}(M)$. Therefore $\hat{\Delta}\rho$ and $(\rho^{k} \otimes (^{k}\rho))\Delta$ are derivations of $T_{S}(\mu_{k}M)$. Since $T_{S}(M)_{\hat{k},\hat{k}}$ is generated, as an $F$-algebra, by $\bar{e_{k}}S$, $\bar{e_{k}}M_{0}\bar{e_{k}}$ and $M_{0}D_{k}M_{0}$, it suffices to establish the equality for $z \in \bar{e_{k}}S \cup \bar{e_{k}}M_{0}\bar{e_{k}} \cup M_{0}D_{k}M_{0}$. \\
If $z \in \bar{e_{k}}S$, then
\begin{center}
$\hat{\Delta}\rho(z)=1 \otimes \rho(z) - \rho(z) \otimes 1 = \bar{e_{k}} \otimes \rho(z) - \rho(z) \otimes \bar{e_{k}}=(\rho^{k} \otimes (^{k}\rho))\Delta(z)$
\end{center}
For $z \in \bar{e_{k}}M_{0}\bar{e_{k}}$ we have
\begin{center}
$\hat{\Delta}\rho(z)=1 \otimes \rho(z)=\bar{e_{k}} \otimes \rho(z)=(\rho^{k} \otimes (^{k}\rho))\Delta(z)$
\end{center}
If $z=m_{1}rm_{2}$ where $m_{1} \in \bar{e_{k}}M_{0}e_{k}$, $r \in D_{k}$ and $m_{2} \in e_{k}M_{0}\bar{e_{k}}$, then
\begin{center}
$\hat{\Delta}\rho(m_{1}rm_{2})=1 \otimes \rho(m_{1}rm_{2})=\bar{e_{k}} \otimes \rho(m_{1}rm_{2})$
\end{center}
and
\begin{align*}
(\rho^{k} \otimes (^{k}\rho))\Delta(m_{1}rm_{2})&=(\rho^{k} \otimes (^{k}\rho))(\Delta(m_{1})rm_{2}+m_{1}\Delta(rm_{2})) \\
&=(\rho^{k} \otimes (^{k}\rho))(1 \otimes m_{1}rm_{2}+m_{1} \otimes rm_{2}) \\
&=\bar{e_{k}} \otimes \rho(m_{1}rm_{2}) + \rho(m_{1}\bar{e_{k}}) \otimes (^{k}\rho)(rm_{2}) \\
&=\bar{e_{k}} \otimes \rho(m_{1}rm_{2})
\end{align*}
where the last equality holds because $m_{1} \in \bar{e_{k}}M_{0}e_{k}$ gives $m_{1}\bar{e_{k}}=0$. This completes the proof of Lemma \ref{lem5}.
\end{proof}
\begin{lemma} \label{lem6} For $\alpha \in T_{S}(M)_{\hat{k},\hat{k}}$, $z \in \mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$ we have
\begin{center}
$((\rho^{k} \otimes (^{k}\rho))\Delta(\alpha)) \Diamond \rho(z)=\rho^{k}(\Delta(\alpha) \Diamond z)$
\end{center}
\end{lemma}
\begin{proof} One can verify that if the equality holds for $\alpha$ and every $z$, and for $\beta$ and every $z$, then it also holds for $\alpha \beta$ and every $z$. Therefore it suffices to establish the equality for $\alpha \in \bar{e_{k}}S \cup \bar{e_{k}}M_{0}\bar{e_{k}} \cup M_{0}D_{k}M_{0}$. \\
(i) Suppose first that $\alpha \in \bar{e_{k}}S$, then
\begin{align*}
(\rho^{k} \otimes (^{k}\rho))\Delta(\alpha) \Diamond \rho(z)&=(\rho^{k} \otimes (^{k}\rho))(1 \otimes \alpha - \alpha \otimes 1) \Diamond \rho(z) \\
&=(\bar{e_{k}} \otimes \rho(\alpha)-\rho(\alpha) \otimes \bar{e_{k}}) \Diamond \rho(z) \\
&=cyc(\rho(\alpha)\rho(z)-\rho(z)\rho(\alpha)) \\
&=\rho(cyc(\alpha z - z\alpha)) \\
&=\rho^{k}(\Delta(\alpha) \Diamond z)
\end{align*}
(ii) If $\alpha \in \bar{e_{k}}M_{0}\bar{e_{k}}$, then
\begin{align*}
(\rho^{k} \otimes (^{k}\rho))\Delta(\alpha) \Diamond \rho(z)&=(\rho^{k} \otimes (^{k}\rho))(1 \otimes \alpha) \Diamond \rho(z) \\
&=(\bar{e_{k}} \otimes \rho(\alpha)) \Diamond \rho(z) \\
&=cyc(\rho(\alpha)\rho(z)) \\
&=\rho(cyc(\alpha z)) \\
&=\rho^{k}(\Delta(\alpha) \Diamond z)
\end{align*}
(iii) Finally, if $\alpha=m_{1}rm_{2}$ where $m_{1} \in \bar{e_{k}}M_{0}e_{k}$, $r \in D_{k}$ and $m_{2} \in e_{k}M_{0}\bar{e_{k}}$, then
\begin{align*}
(\rho^{k} \otimes (^{k}\rho))\Delta(m_{1}rm_{2}) \Diamond \rho(z) &= (\rho^{k} \otimes (^{k}\rho))(1 \otimes m_{1}rm_{2}+m_{1} \otimes rm_{2}) \Diamond \rho(z) \\
&=(\bar{e_{k}} \otimes \rho(m_{1}rm_{2})) \Diamond \rho(z) \\
&=cyc(\rho(m_{1}rm_{2}z)) \\
&=\rho^{k}(\Delta(m_{1}rm_{2}) \Diamond z)
\end{align*}
\end{proof}
Lemma \ref{lem5} and Lemma \ref{lem6} immediately imply the following
\begin{lemma} \label{lem7} Let $\alpha \in T_{S}(M)_{\hat{k},\hat{k}}$, $z \in \mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$, then:
\begin{center}
$\hat{\Delta}\rho(\alpha) \Diamond \rho(z)=\rho^{k}(\Delta(\alpha) \Diamond z)$
\end{center}
\end{lemma}
\vspace{0.1in}
Let $r,s \in L(k)$, $a \in _{k}T$ and $b \in T_{k}$, and let $n \in \widehat{N}_{\sigma(b)}$. Then
\begin{center}
$\hat{\gamma}_{sa,br}(n)=\displaystyle \sum_{w \in L(k)} s^{\ast}(r^{-1}w)\varphi^{-1} \left(Y_{[bwa]}(\varphi(P)) \right)n$.
\end{center}
We have
\begin{align*}
\varphi^{-1} \left( Y_{[bwa]}(\varphi(P)) \right)n&= \varphi^{-1} \rho^{-1} \left( X_{[bwa]^{\ast}}(\rho \varphi(P)) \right)n \\
&=\left(\rho^{-1} \hat{\varphi}^{-1} X_{[bwa]^{\ast}}(\hat{\varphi}(\rho(P))) \right)n
\end{align*}
Also $X_{[bwa]^{\ast}}(\hat{\varphi}(\rho(P)))=\displaystyle \lim_{u \to \infty} Z_{u}$ where:
\begin{center}
$Z_{u}=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{c \in \tilde{T}_{\hat{k},\hat{k}}} (s\rho(bwa))^{\ast} \left ( \hat{\Delta}(\hat{\varphi}(c)^{\leq u+1}) \Diamond \hat{\varphi}(X_{c^{\ast}}(\rho(P))) \right)s$
\end{center}
Let $v$ be an arbitrary positive integer and let $\alpha,\beta \in \mathcal{F}_{S}(M)$. We will write $\alpha \equiv \beta \ (v)$ if $\alpha-\beta \in \mathcal{F}_{S}(M)^{>v}$. \\
Clearly for any $\alpha \in \mathcal{F}_{S}(M)$ and for every positive integer $v$ we have $\alpha \equiv \alpha^{\leq v} \ (v)$. \\
Note that if $h \in T_{S}(M)^{>v}$ and $z \in \mathcal{F}_{S}(M)$, then $\Delta(h) \Diamond z \in \mathcal{F}_{S}(M)^{>v}$. Therefore if $\alpha,\beta \in T_{S}(M)$ and $\alpha \equiv \beta \ (v)$, then $\Delta(\alpha) \Diamond z \equiv \Delta(\beta) \Diamond z \ (v)$ for every $z \in \mathcal{F}_{S}(M)$. \\
Let $h \in \mathcal{F}_{S}(M)_{\hat{k},\hat{k}}$. Let us see that:
\begin{center}
$\rho(h^{\leq 2v+3}) \equiv \rho(h)^{\leq v+1} \ (v+2)$
\end{center}
Indeed, since $h-h^{\leq 2v+3} \in \mathcal{F}_{S}(M)^{\geq 2v+4}$ then $\rho(h)-\rho(h^{\leq 2v+3}) \in \mathcal{F}_{S}(\mu_{k}M)^{\geq v+2}$, whence $\rho(h) \equiv \rho(h^{\leq 2v+3}) \ (v+2)$ and therefore $\rho(h) \equiv \rho(h)^{\leq v+1} \ (v+2)$. \\
Let $v \gg 0$ be such that $\mathcal{F}_{S}(M)^{\geq v}N=0$. \\
For $i \geq 1$ we have that $Z_{v+i}-Z_{v} \in \mathcal{F}_{S}(\mu_{k}M)^{\geq v+1}$ and thus
\begin{center}
$\varphi^{-1}\rho^{-1}(Z_{v+i}) \equiv \varphi^{-1} \rho^{-1}(Z_{v}) \ (v+1)$
\end{center}
therefore
\begin{center}
$\displaystyle \lim_{u \to \infty} \varphi^{-1} \rho^{-1}(Z_{u})n=\varphi^{-1}\rho^{-1}(Z_{v})n$
\end{center}
Define:
\begin{center}
$W(c)=\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left( \hat{\Delta}(\hat{\varphi}(c)^{\leq v+1}) \Diamond \hat{\varphi}(X_{c^{\ast}}(\rho(P)))\right)s\right)n$.
\end{center}
We require the following:
\begin{lemma} \label{lem8} If $c \in \bar{e_{k}}M_{0}\bar{e_{k}} \cap T$ then $W(c)=0$.
\end{lemma}
\begin{proof} Note that $X_{c^{\ast}}(\rho(P))=\rho(X_{c^{\ast}}(P))$, hence\\
\begin{center}
$\hat{\varphi}(c)^{\leq v+1} =\hat{\varphi}(\rho(c))^{\leq v+1}=\rho(\varphi(c))^{\leq v+1} \equiv \rho(\varphi(c)^{\leq 2v+3}) \ (v+2)$
\end{center}
Consequently
\begin{align*}
W(c)&=\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left( \hat{\Delta}(\rho({\varphi}(c)^{\leq 2v+3})) \Diamond \hat{\varphi}\rho(X_{c^{\ast}}(P))\right)s\right)n \\
&=\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left( \hat{\Delta}(\rho({\varphi}(c)^{\leq 2v+3})) \Diamond \rho{\varphi}(X_{c^{\ast}}(P))\right)s\right)n
\end{align*}
Using Lemma \ref{lem7} we obtain
\begin{center}
$W(c)=\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho^{k} \left( \Delta(\varphi(c)^{\leq 2v+3}) \Diamond \varphi(X_{c^{\ast}}(P)) \right)\right)s\right)n$
\end{center}
Letting $z=\varphi(X_{c^{\ast}}(P))$ yields that $W(c)$ is a sum of elements of the form:
\begin{center}
$\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho^{k} \left(\Delta(m_{1}\hdots m_{l}r) \Diamond z\right)\right)s\right)n$
\end{center}
where $m_{1},\hdots,m_{l} \in SM_{0}$ and $r \in \bar{e_{k}}S$.\\
Then
\begin{center}
$\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho^{k} \left(\Delta(m_{1}\hdots m_{l}r) \Diamond z\right)\right)s$
\end{center}
is equal to
\begin{center}
$\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho^{k} \left(\Delta(m_{1}\hdots m_{l})r \Diamond z\right)\right)s+g$
\end{center}
where
\begin{align*}
g&=\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho^{k} \left(m_{1}\hdots m_{l}\Delta(r) \Diamond z\right)\right)s \\
&=\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left(r\rho(cyc(\bar{e_{k}}zm_{1}\hdots m_{l}))-\rho(cyc(\bar{e_{k}}zm_{1}\hdots m_{l}))r\right)s
\end{align*}
By \cite[Lemma 5.2]{2} we obtain that $g=0$. Therefore:
\begin{center}
$W(c)=\varphi^{-1}\rho^{-1} \left( \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left( \rho^{k} \left(\Delta(m_{1}\hdots m_{l}) \Diamond rz\right)\right)s\right)n$
\end{center}
then $W(c)$ is a sum of elements of the form:
\begin{center}
$\varphi^{-1}\rho^{-1}\left( (s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{i}\hdots m_{l}rzm_{1}\hdots m_{i-1}))s\right)n$
\end{center}
If $i=l$ then $z=\displaystyle \sum_{j} z_{j}z_{j'}$ where $z_{j} \in \bar{e_{k}}M$ and in this case: \\
\begin{center}
$(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{l}r\bar{e_{k}}zm_{1}\hdots m_{l-1}))=(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{l}r)\rho(\bar{e_{k}}zm_{1}\hdots m_{l-1}\bar{e_{k}}))=0$
\end{center}
Hence $W(c)$ is a sum of elements of the form:
\begin{center}
$\varphi^{-1}\rho^{-1}\left((s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{i}m_{i+1})\rho(\alpha z \beta))s\right)n$
\end{center}
where $m_{i}m_{i+1} \in Me_{k}M$ and thus $W(c)$ is a sum of elements of the form $\varphi^{-1}(\alpha)\varphi^{-1}(z)\varphi^{-1}(\beta)n$. Since $\varphi^{-1}(z)=X_{c^{\ast}}(P)$, it follows that $W(c) \in R(P)N=0$, completing the proof of Lemma \ref{lem8}. \\
\end{proof}
From the above we obtain the following formula:
\begin{center}
$(\ast): \varphi^{-1}(Y_{[bwa]}(\varphi(P)))n=\varphi^{-1}\rho^{-1}(Z_{v}')n$
\end{center}
where:
\begin{align*}
Z_{v}'&=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1} \in T_{k}, r \in L(k), a_{1} \in _{k}T} (s\rho(bwa))^{\ast} \left( \hat{\Delta}(\hat{\varphi}(\rho(b_{1}ra_{1}))^{\leq v+1}) \Diamond \hat{\varphi}(X_{\rho(b_{1}ra_{1})^{\ast}}(\rho(P)))\right)s \\
&=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1}ra_{1}} (s\rho(bwa))^{\ast} \left( \hat{\Delta}(\hat{\varphi}(\rho(b_{1}ra_{1}))^{\leq v+1}) \Diamond \hat{\varphi}\rho \left(Y_{[b_{1}ra_{1}]}(P)\right)\right)s \\
&=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1}ra_{1}} (s\rho(bwa))^{\ast} \left( \hat{\Delta}((\rho\varphi(b_{1}ra_{1}))^{\leq v+1}) \Diamond \rho\varphi \left(Y_{[b_{1}ra_{1}]}(P)\right)\right)s
\end{align*}
Thus, in $(\ast)$ we may replace $Z_{v}'$ by the term:
\begin{center}
$\displaystyle \sum_{b_{1}ra_{1}} \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left( \hat{\Delta}(\rho({\varphi}(b_{1}ra_{1})^{\leq 2v+3})) \Diamond \rho{\varphi}(Y_{[b_{1}ra_{1}]}(P))\right)s$
\end{center}
which by Lemma \ref{lem7} is equal to:
\begin{center}
$\displaystyle \sum_{b_{1}ra_{1}} \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left( \rho^{k} \left(\Delta(\varphi(b_{1}ra_{1})^{\leq 2v+3}) \Diamond \varphi(Y_{[b_{1}ra_{1}]}(P))\right)\right)s$
\end{center}
the latter term can be replaced by:
\begin{center}
$\displaystyle \sum_{b_{1}ra_{1}} \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left( \rho^{k} \left(\Delta(\varphi(b_{1}r)^{\leq 2v+3}\varphi(a_{1})^{\leq 2v+3}) \Diamond \varphi(Y_{[b_{1}ra_{1}]}(P))\right)\right)s$
\end{center}
which in turn can be replaced by $S=S_{1}+S_{2}$, where:
\begin{align*}
S_{1}&=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1}ra_{1}} (s\rho(bwa))^{\ast} \left( \rho^{k} \left(\Delta(\varphi(b_{1})^{\leq 2v+3}) \Diamond \varphi(ra_{1}Y_{[b_{1}ra_{1}]}(P))\right)\right)s \\
S_{2}&= \displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1}ra_{1}} (s\rho(bwa))^{\ast} \left( \rho^{k} \left(\Delta(\varphi(a_{1})^{\leq 2v+3}) \Diamond \varphi(Y_{[b_{1}ra_{1}]}(P)b_{1}r)\right)\right)s
\end{align*}
Using \ref{3.7} gives that:
\begin{center}
$S_{2}= \displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{a_{1}} (s\rho(bwa))^{\ast} \left( \rho^{k} \left(\Delta(\varphi(a_{1})^{\leq 2v+3}) \Diamond \varphi(X_{a_{1}^{\ast}}(P))\right)\right)s$
\end{center}
whence:
\begin{center}
$\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n=\varphi^{-1}\rho^{-1}(S_{1})n+\varphi^{-1}\rho^{-1}(S_{2})n$
\end{center}
(i) Let us see that $\varphi^{-1}\rho^{-1}(S_{2}) \in R(P)$. \\
Let $z=\varphi(X_{a_{1}^{\ast}}(P))$. Then $\varphi(a_{1})^{\leq 2v+3}$ is a sum of elements of the form $m_{1} \hdots m_{l}t$ where $m_{j} \in SM_{0}$, $t \in \bar{e_{k}}S$. Therefore $S_{2}$ is a sum of elements of the form:
\begin{center}
$(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{i}\hdots m_{l}tzm_{1} \hdots m_{i-1})\bar{e_{k}})s$
\end{center}
If $i=l$, then $(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{l}tzm_{1}\hdots m_{l-1})\bar{e_{k}})s$ is equal to:
\begin{center}
$(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{l}\bar{e_{k}})\rho(\bar{e_{k}}tzm_{1}\hdots m_{l-1})\bar{e_{k}})s=0$
\end{center}
If $i<l$ then:
\begin{center}
$(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{i}\hdots m_{l}tzm_{1}\hdots m_{i-1})\bar{e_{k}})s=(s\rho(bwa))^{\ast}(\rho(\bar{e_{k}}m_{i}\hdots m_{l})\rho(tzm_{1}\hdots m_{i-1}\bar{e_{k}}))s$
\end{center}
Therefore $S_{2}$ is a sum of elements of the form $\rho(\alpha z \beta)$, so $\varphi^{-1} \rho^{-1}(S_{2})$ is a sum of elements of the form $\varphi^{-1}(\alpha)\varphi^{-1}(z)\varphi^{-1}(\beta)$. Since $\varphi^{-1}(z)=X_{a_{1}^{\ast}}(P)$, it follows that $\varphi^{-1}\rho^{-1}(S_{2}) \in R(P)$, which establishes (i). \\
From the above it follows that $\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n=\varphi^{-1}\rho^{-1}(S_{1})n$. \\
(ii) Let us show that $\varphi^{-1}\rho^{-1}(S_{1})=\nu_{1}+\nu_{2}$, where $\nu_{1} \in R(P)$ and $\nu_{2}$ is a sum of elements of the form: \\
\begin{center}
$\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{ra_{1}} (s\rho(bwa))^{\ast} \rho(\bar{e_{k}}m_{i} \hdots m_{l}z_{ra_{1}}m_{1} \hdots m_{i-1}\bar{e_{k}})s$
\end{center}
where $z_{ra_{1}}=\varphi(ra_{1}Y_{[b_{1}ra_{1}]}(P))$.
Note that $\varphi(b_{1})^{\leq 2v+3}$ is a sum of elements of the form $m_{1}m_{2} \hdots m_{l}$ where $m_{1},\hdots,m_{l-1} \in SM_{0}$ and $m_{l} \in \bar{e_{k}}Me_{k}$. Then $\varphi^{-1}\rho^{-1}(S_{1})$ lies in the $F$-vector space generated by the spaces $\varphi^{-1}\rho^{-1}(T_{i})$, where $T_{i}$ is the $F$-vector space generated by elements of the form:
\begin{center}
$u_{i}=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{ra_{1}} (s\rho(bwa))^{\ast} \rho(\bar{e_{k}}m_{i} \hdots m_{l}z_{ra_{1}}m_{1} \hdots m_{i-1}\bar{e_{k}})s$
\end{center}
Let us show that if $i<l$ then $\varphi^{-1}\rho^{-1}(T_{i}) \subseteq R(P)$. We have that
\begin{center}
$u_{i}=\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast} \left(\rho(\bar{e_{k}}m_{i} \hdots m_{l-1})\right)\rho(m_{l}w_{b_{1}}m_{1} \hdots m_{i-1}\bar{e_{k}})s$
\end{center}
where $w_{b_{1}}=\varphi(X_{b_{1}^{\ast}}(P))$. \\
It follows that $u_{i}$ is a sum of elements of the form $\rho(\alpha w_{b_{1}}\beta)$ and thus $\varphi^{-1}\rho^{-1}(u_{i})$ is a sum of elements of the form $\varphi^{-1}(\alpha)X_{b_{1}^{\ast}}(P)\varphi^{-1}(\beta)$, as was to be shown. This completes the proof of (ii). \\
We have that
\begin{center}
$\varphi(b_{1})=\displaystyle \sum_{b'r'}D_{b_{1},b'r'}b'r'$
\end{center}
then
\begin{center}
$\varphi(b_{1})^{\leq 2v+3}=\displaystyle \sum_{b'r'} (D_{b_{1},b'r'})^{\leq 2v+2}b'r'$
\end{center}
Also
\begin{align*}
\varphi(ra_{1})&=\displaystyle \sum_{r''a'} r''a'C_{r''a',ra_{1}} \\
z_{r_{a_{1}}}&=\displaystyle \sum_{r''a'} r''a'C_{r''a',ra_{1}}\varphi(Y_{[b_{1}ra_{1}]}(P))
\end{align*}
On the other hand, $(D_{b_{1},b'r'})^{\leq 2v+2}$ is a sum of elements of the form $m_{1}m_{2}\hdots m_{l-1}s''$ where each $m_{i} \in SM_{0}$ and $s'' \in L(\sigma(b'))$. Therefore $\varphi(b_{1})^{\leq 2v+3}$ is a sum of elements of the form $m_{1}m_{2}\hdots m_{l-1}s''b'r'$. \\
In what follows, $z(b_{1}ra_{1})=\varphi(Y_{[b_{1}ra_{1}]}(P))$. \\
We obtain that $\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n$ is a sum of elements of the form:
\begin{center}
$(\ast): \displaystyle \sum_{b',s',s'',r',r''} \varphi^{-1}\rho^{-1}\left((s'\rho(bwa))^{\ast}(\rho(\bar{e_{k}}H))\right)$
\end{center}
where:
\begin{align*}
H&=s''b'r'r''a'C_{r''a',ra_{1}}z(b_{1}ra_{1})m_{1}\hdots m_{l-1}s'n \\
&=\displaystyle \sum_{w_{1} \in L(k)} s''b'w_{1}^{\ast}(r'r'')w_{1}a'C_{r''a',ra_{1}}z(b_{1}ra_{1})m_{1}\hdots m_{l-1}s'n
\end{align*}
The non-zero terms of $(\ast)$ are those with $s'=s''$, $b=b'$, $a'=a$, $w_{1}=w$. Thus:
\begin{center}
$\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n=\displaystyle \sum_{h,r',r'',a_{1},b_{1}}w^{\ast}(r'r'')\varphi^{-1}(C_{r''a,ha_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},br'})n$
\end{center}
Therefore:
\begin{align*}
\hat{\gamma}_{sa,br}(n)&=\displaystyle \sum_{w \in L(k)} s^{\ast}(r^{-1}w)\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n\\
&=\displaystyle \sum_{w,h,r',r'',a_{1},b_{1}}s^{\ast}(r^{-1}w)w^{\ast}(r'r'')\varphi^{-1}(C_{r''a,ha_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},br'})n\\
&=\displaystyle \sum_{w,h,r',r'',a_{1},b_{1}}s^{\ast}(r^{-1}ww^{\ast}(r'r''))\varphi^{-1}(C_{r''a,ha_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},br'})n\\
&=\displaystyle \sum_{h,r',r'',a_{1},b_{1}}s^{\ast}\left(r^{-1}\displaystyle \sum_{w \in L(k)}w^{\ast}(r'r'')w\right)\varphi^{-1}(C_{r''a,ha_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},br'})n \\
&=\displaystyle \sum_{h,r',r'',a_{1},b_{1}}s^{\ast}(r^{-1}r'r'')\varphi^{-1}(C_{r''a,ha_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},br'})n
\end{align*}
By \cite[Proposition 8.1 (iii)]{2} and \cite[Proposition 8.2 (iii)]{2} the following equalities hold:
\begin{align*}
\displaystyle \varphi^{-1}(C_{r''a,ha_{1}})&=\displaystyle \sum_{u \in L(\sigma(a))} (r'')^{\ast}(hu)\varphi^{-1}(C_{ua,a_{1}})
\\
\varphi^{-1}(D_{b_{1},br'})&=\displaystyle \sum_{v \in L(\tau(b_{1}))}(r')^{\ast}(v)\varphi^{-1}(D_{b_{1},bv})
\end{align*}
whence
\begin{align*}
\hat{\gamma}_{sa,br}(n)&=\displaystyle \sum_{h,r',r'',a_{1},b_{1},u,v}s^{\ast}(r^{-1}r'r'')(r'')^{\ast}(hu)\varphi^{-1}(C_{ua,a_{1}})Y_{[b_{1}ha_{1}]}(P)(r')^{\ast}(v)\varphi^{-1}(D_{b_{1},bv})n\\
&=\displaystyle \sum_{h,r',a_{1},b_{1},u,v} s^{\ast}(r^{-1}r'hu)(r')^{\ast}(v)\varphi^{-1}(C_{ua,a_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},bv})n\\
&=\displaystyle \sum_{h,a_{1},b_{1},u,v} s^{\ast}(r^{-1}vhu)\varphi^{-1}(C_{ua,a_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},bv})n
\end{align*}
On the other hand, using again \cite[Proposition 8.1 (iii)]{2} and \cite[Proposition 8.2 (iii)]{2} one gets that the $(sa,br)$-entry of $(C\gamma D)(n)$ is given by
\begin{center}
$\displaystyle \sum_{s',t',h,a_{1},b_{1},u,v} (s')^{\ast}((t')^{-1}h)s^{\ast}(s'u)r^{\ast}(vt')\varphi^{-1}(C_{ua,a_{1}})Y_{[b_{1}ha_{1}]}(P)\varphi^{-1}(D_{b_{1},bv})n$
\end{center}
Using \ref{2.4} one obtains the following:
\begin{align*}
\displaystyle \sum_{s',t',h,u,v} (s')^{\ast}((t')^{-1}h)s^{\ast}(s'u)r^{\ast}(vt')&=\displaystyle \sum_{s',t',h,u,v} s^{\ast} \left( (s')^{\ast}((t')^{-1}h)s'u\right)r^{\ast}(vt') \\
&=\displaystyle \sum_{t',h,u,v}s^{\ast}\left((t')^{-1}hu\right)r^{\ast}(vt') \\
&=\displaystyle \sum_{h,u,v}s^{\ast} \left(\displaystyle \sum_{t'} r^{\ast}(vt')(t')^{-1}hu\right)\\
&=\displaystyle \sum_{h,u,v} s^{\ast}(r^{-1}vhu)
\end{align*}
which implies that $\hat{\gamma}=C\gamma D$ and the proof of Lemma \ref{lem4} is now complete.
\end{proof}
Note that Lemma \ref{lem4} implies the following equalities:
\begin{equation} \label{4.7}
\begin{split}
\operatorname{ker}(\gamma)&=D(\operatorname{ker}(\hat{\gamma})) \\
\operatorname{im}(\gamma)&=C^{-1}(\operatorname{im}(\hat{\gamma}))
\end{split}
\end{equation}
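As with \ref{4.5} and \ref{4.6}, the equalities in \ref{4.7} follow from Lemma \ref{lem4} and the bijectivity of $C$ and $D$: for $y \in N_{out}$ we have $\hat{\gamma}(y)=C\gamma(Dy)$, so $\hat{\gamma}(y)=0$ if and only if $Dy \in \operatorname{ker}(\gamma)$, giving $\operatorname{ker}(\gamma)=D(\operatorname{ker}(\hat{\gamma}))$; moreover
\begin{center}
$\operatorname{im}(\hat{\gamma})=C\gamma D(N_{out})=C(\operatorname{im}(\gamma))$
\end{center}
whence $\operatorname{im}(\gamma)=C^{-1}(\operatorname{im}(\hat{\gamma}))$.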
We now complete the proof of Proposition \ref{prop5}. Let us establish a right-equivalence $\left( \hat{\varphi}, \psi, \eta\right)$ between the representations $\widetilde{\mu_{k}}(\mathcal{N})$ and $\widetilde{\mu_{k}}\left(\widehat{\mathcal{N}}\right)$. First, we define $\hat{\varphi}: \mathcal{F}_{S}(\mu_{k}M) \rightarrow \mathcal{F}_{S}(\mu_{k}M)$ as the right-equivalence between the algebras $(\mathcal{F}_{S}(\mu_{k}M),\mu_{k}P)$ and $\left(\mathcal{F}_{S}(\mu_{k}M), \mu_{k}\varphi(P)\right)$ given by \cite[Theorem 8.12]{2}. Let $\widetilde{\mu_{k}}(\widehat{\mathcal{N}})=(\overline{\widehat{N}},\overline{\widehat{V}})$. If $i \neq k$, then $\overline{\widehat{N}}_{i}=\widehat{N}_{i}=N_{i}$ and: \\
\begin{center}
$\overline{\widehat{N}}_{k}=\frac{\operatorname{ker}(\hat{\gamma})}{\operatorname{im}(\hat{\beta})} \oplus \operatorname{im}(\hat{\gamma}) \oplus \frac{\operatorname{ker}(\hat{\alpha})}{\operatorname{im}(\hat{\gamma})} \oplus V_{k}$
\end{center}
\vspace{0.1in}
For each $i \in \{1,2,3,4\}$, let $\overline{J}_{i}$ and $\overline{\Pi}_{i}$ be the corresponding inclusions and projections associated to $\overline{\widehat{N}}_{k}$, analogous to those given in \ref{4.1} and \ref{4.2}. Then we have inclusion maps:
\begin{align*}
&\overline{j}: \operatorname{ker}(\hat{\gamma}) \rightarrow N_{out} \\
&\overline{i}: \operatorname{im}(\hat{\gamma}) \rightarrow N_{in}
\end{align*}
and projections:
\begin{align*}
&\overline{\pi}_{1}: \operatorname{ker}(\hat{\gamma}) \rightarrow \frac{\operatorname{ker}(\hat{\gamma})}{\operatorname{im}(\hat{\beta})} \\
&\overline{\pi}_{2}: \operatorname{ker}(\hat{\alpha})\rightarrow \frac{\operatorname{ker}(\hat{\alpha})}{\operatorname{im}(\hat{\gamma})}
\end{align*}
By Lemma \ref{lem4} we have $\hat{\gamma}=C\gamma D$ and thus $\hat{\gamma}D^{-1}=C\gamma$. It follows that $D^{-1}$ induces an isomorphism
\begin{center}
$D_{1}^{-1}: \operatorname{ker}(\gamma) \rightarrow \operatorname{ker}(\hat{\gamma})$
\end{center}
such that $\overline{j}D_{1}^{-1}=D^{-1}j$. Also, $D^{-1}$ maps $\operatorname{im}(\beta)$ to $\operatorname{im}(\hat{\beta})$. Therefore, $D^{-1}$ also induces an isomorphism
\begin{center}
$\underline{D}^{-1}: \frac{\operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \rightarrow \frac{\operatorname{ker}(\hat{\gamma})}{\operatorname{im}(\hat{\beta})}$
\end{center}
\vspace{0.1in}
such that $\underline{D}^{-1}\pi_{1}=\overline{\pi}_{1}D_{1}^{-1}$. The isomorphism $C$ induces an isomorphism
\begin{center}
$C_{1}: \operatorname{im}(\gamma) \rightarrow \operatorname{im}(\hat{\gamma})$
\end{center}
such that $\overline{i}C_{1}=Ci$. The equality $\hat{\alpha}C=\alpha$ implies that $C$ also induces an isomorphism $C_{2}: \operatorname{ker}(\alpha) \rightarrow \operatorname{ker}(\hat{\alpha})$; thus there exists an isomorphism
\begin{center}
$\underline{C}: \frac{\operatorname{ker}(\alpha)}{\operatorname{im}(\gamma)} \rightarrow \frac{ \operatorname{ker}(\hat{\alpha})}{\operatorname{im}(\hat{\gamma})}$
\end{center}
such that $\underline{C}\pi_{2}=\overline{\pi}_{2}C_{2}$. \\
To construct $\widetilde{\mu_{k}}(\widehat{\mathcal{N}})$ we choose splitting data as follows:
\begin{align*}
\overline{p}&=D_{1}^{-1}pD: N_{out} \rightarrow \operatorname{ker}(\hat{\gamma}) \\
\overline{\sigma}_{2}&=C_{2}\sigma_{2}\underline{C}^{-1}: \frac{\operatorname{ker}(\hat{\alpha})}{\operatorname{im}(\hat{\gamma})} \rightarrow \operatorname{ker}(\hat{\alpha})
\end{align*}
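Let us verify that this is indeed splitting data. Assuming, as for the original choice $(p,\sigma_{2})$, that $pj=id_{\operatorname{ker}(\gamma)}$ and $\pi_{2}\sigma_{2}=id_{\operatorname{ker}(\alpha)/\operatorname{im}(\gamma)}$, the identity $\overline{j}D_{1}^{-1}=D^{-1}j$ (equivalently, $D\overline{j}=jD_{1}$) yields:
\begin{center}
$\overline{p}\,\overline{j}=D_{1}^{-1}pD\overline{j}=D_{1}^{-1}pjD_{1}=id_{\operatorname{ker}(\hat{\gamma})}$
\end{center}
and similarly $\overline{\pi}_{2}\overline{\sigma}_{2}=\overline{\pi}_{2}C_{2}\sigma_{2}\underline{C}^{-1}=\underline{C}\pi_{2}\sigma_{2}\underline{C}^{-1}=id_{\operatorname{ker}(\hat{\alpha})/\operatorname{im}(\hat{\gamma})}$, using $\underline{C}\pi_{2}=\overline{\pi}_{2}C_{2}$.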
Note that $\overline{p}\overline{j}=id_{\operatorname{ker}(\hat{\gamma})}$, $\overline{\pi}_{2}\overline{\sigma}_{2}=id_{\operatorname{ker}(\hat{\alpha})/\operatorname{im}(\hat{\gamma})}$. Define: \\
\begin{center}
$\psi: \overline{N} \rightarrow \overline{\widehat{N}}$
\end{center}
as follows. If $i \neq k$ then $\psi_{i}: \overline{N}_{i}=N_{i} \rightarrow \overline{\widehat{N}}_{i}=N_{i}$ is the identity map and
\begin{center}
$\psi_{k}: \overline{N}_{k} \rightarrow \overline{\widehat{N}}_{k}$
\end{center}
\vspace{0.1in}
is the map such that for every $i \neq j$, $\overline{\Pi}_{i} \psi_{k}J_{j}=0$ and
\begin{align*}
\overline{\Pi}_{1}\psi_{k}J_{1}&=\underline{D}^{-1} \\
\overline{\Pi}_{2}\psi_{k}J_{2}&=C_{1} \\
\overline{\Pi}_{3}\psi_{k}J_{3}&=\underline{C} \\
\overline{\Pi}_{4}\psi_{k}J_{4}&=id_{V_{k}}
\end{align*}
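With respect to the decompositions of $\overline{N}_{k}$ and $\overline{\widehat{N}}_{k}$ above, $\psi_{k}$ is thus given in diagonal matrix form (every off-diagonal block vanishes) by:
\begin{center}
$\psi_{k}=\begin{pmatrix} \underline{D}^{-1} & 0 & 0 & 0 \\ 0 & C_{1} & 0 & 0 \\ 0 & 0 & \underline{C} & 0 \\ 0 & 0 & 0 & id_{V_{k}} \end{pmatrix}$
\end{center}
In particular, $\psi_{k}$ is invertible, since each diagonal entry is an isomorphism.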
Let us show that for every $z \in \widetilde{T}$ and $n \in N_{\tau(z)}$:
\begin{center}
$\psi_{\sigma(z)}(zn)=\widehat{\varphi}(z)\psi_{\tau(z)}(n)$
\end{center}
\vspace{0.1in}
Suppose first that $z=a^{\ast}$ where $a \in _{k}T$. In this case $\tau(z)=\sigma(a)=k$ and $\sigma(z)=\tau(a) \neq k$. By \cite[Proposition 8.11]{2}:
\begin{center}
$\widehat{\varphi}(a^{\ast})=\displaystyle \sum_{t \in L(k), a_{1} \in _{k}T} (C^{-1})_{a,ta_{1}} a_{1}^{\ast}t^{-1}$
\end{center}
whence
\begin{center}
$\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))=\displaystyle \sum_{t \in L(k), a_{1} \in _{k}T} (C^{-1})_{a,ta_{1}} \ast \overline{\widehat{N}}(a_{1}^{\ast})t^{-1}$
\end{center}
where $\ast$ denotes the action of $\mathcal{F}_{S}(M)$ in $\widehat{N}$. In this case we have to verify the following equality:
\begin{equation} \label{4.8}
\overline{N}(a^{\ast})=\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}
\end{equation}
On one hand, $\overline{N}(a^{\ast})J_{1}=0$. On the other hand:
\begin{align*}
\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{1}&= \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{\widehat{N}}(a_{1}^{\ast})t^{-1}\psi_{k}J_{1} \\
&=\displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{ \widehat{N}}(a_{1}^{\ast})t^{-1} \overline{J}_{1} \overline{\Pi}_{1} \psi_{k} J_{1} \\
&=\displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{ \widehat{N}}(a_{1}^{\ast})\overline{J}_{1} t^{-1} \overline{\Pi}_{1} \psi_{k} J_{1} =0
\end{align*}
and thus $\overline{N}(a^{\ast})J_{1}=\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{1}$. Now let us consider $\overline{N}(a^{\ast})J_{2}$. By \ref{4.3} we have $\overline{N}(a^{\ast})J_{2}=c_{k}^{-1}\pi_{e_{k}a}i$. On the other hand:
\begin{align*}
\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{2}&= \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{\widehat{N}}(a_{1}^{\ast})t^{-1}\overline{J}_{2} \overline{\Pi}_{2} \psi_{k} J_{2} \\
&= \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{ \widehat{N}}(a_{1}^{\ast}) \overline{J}_{2} t^{-1} \overline{\Pi}_{2} \psi_{k}J_{2} \\
&= \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast c_{k}^{-1} \pi_{e_{k}a_{1}} \overline{i} t^{-1} \overline{\Pi}_{2} \psi_{k} J_{2} \\
&=\displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast c_{k}^{-1} \pi_{e_{k}a_{1}} \overline{i} t^{-1} C_{1} \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \pi_{e_{k}a_{1}} t^{-1} Ci \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \pi_{ta_{1}} Ci \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1},a_{2} \in _{k}T, r,t \in L(k)} \varphi^{-1}((C^{-1})_{a,ta_{1}})\pi_{ta_{1}}C\xi_{ra_{2}}\pi_{ra_{2}}i \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1},a_{2} \in _{k}T, r,t \in L(k)} \varphi^{-1}((C^{-1})_{a,ta_{1}}) \varphi^{-1}(C_{ta_{1},ra_{2}})\pi_{ra_{2}}i \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1}, a_{2} \in _{k}T, r,t \in L(k)} \varphi^{-1} \left( (C^{-1})_{a,ta_{1}}C_{ta_{1},ra_{2}}\right) \pi_{ra_{2}}i \\
&=c_{k}^{-1} \pi_{e_{k}a}i
\end{align*}
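The last equality above holds because the matrices formed by the structure constants of $C$ and $C^{-1}$ are mutually inverse; that is, for fixed $a \in {}_{k}T$ and $ra_{2} \in {}_{k}\hat{T}$:
\begin{center}
$\displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}}C_{ta_{1},ra_{2}}=\delta_{e_{k}a,ra_{2}}$
\end{center}
where $\delta$ denotes the corresponding entry of the identity matrix, so the only surviving term of the sum is the one with $ra_{2}=e_{k}a$.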
Therefore $\overline{N}(a^{\ast})J_{2}=\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{2}$. For $J_{3}$ we have $\overline{N}(a^{\ast})J_{3}=c_{k}^{-1}\pi_{e_{k}a}j' \sigma_{2}$ and
\begin{align*}
\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{3}&= \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{\widehat{N}}(a_{1}^{\ast}) \overline{J}_{3}t^{-1} \overline{\Pi}_{3} \psi_{k} J_{3} \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} \varphi^{-1} \left( (C^{-1})_{a,ta_{1}}\right) \pi_{ta_{1}} \overline{j} \overline{\sigma}_{2}\underline{C} \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} \varphi^{-1} \left( (C^{-1})_{a,ta_{1}}\right) \pi_{ta_{1}} Cj' \sigma_{2} \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1}, a_{2} \in _{k}T, t,r \in L(k)} \varphi^{-1} \left( (C^{-1})_{a,ta_{1}}\right)\pi_{ta_{1}}C \xi_{ra_{2}}\pi_{ra_{2}}j ' \sigma_{2} \\
&=c_{k}^{-1} \displaystyle \sum_{a_{1},a_{2} \in _{k}T, t,r \in L(k)} \varphi^{-1} \left((C^{-1})_{a,ta_{1}}\right) \varphi^{-1}(C_{ta_{1},ra_{2}})\pi_{ra_{2}}j' \sigma_{2} \\
&=c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2}
\end{align*}
Thus $\overline{N}(a^{\ast})J_{3}=\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{3}$. \\
Finally, $\overline{N}(a^{\ast})J_{4}=0$ and:
\begin{center}
$\overline{\widehat{N}}(\widehat{\varphi}(a^{\ast}))\psi_{k}J_{4}=\displaystyle \sum_{a_{1} \in _{k}T, t \in L(k)} (C^{-1})_{a,ta_{1}} \ast \overline{\widehat{N}}(a_{1}^{\ast})\overline{J}_{4}t^{-1} \overline{\Pi}_{4}\psi_{k}J_{4}=0$.
\end{center}
and \ref{4.8} holds. \\
Suppose now that $z=^{\ast}b$ where $b \in T_{k}$. In this case $\tau(z)=\sigma(b) \neq k$ and $\sigma(z)=\tau(b)=k$. By \cite[Proposition 8.11]{2}:
\begin{center}
$\widehat{\varphi}(^{\ast}b)=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1}(^{\ast}b_{1})(D^{-1})_{b_{1}r,b}$
\end{center}
We have to show that
\begin{equation} \label{4.9}
\psi_{k}\overline{N}(^{\ast}b)=\overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))
\end{equation}
On one hand, $\overline{\Pi}_{1} \psi_{k} \overline{N}(^{\ast}b)=\overline{\Pi}_{1} \psi_{k}J_{1}\Pi_{1} \overline{N}(^{\ast}b)=- \underline{D}^{-1} \pi_{1}p \xi_{be_{k}}$. On the other hand:
\begin{align*}
\overline{\Pi}_{1} \overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))&= \displaystyle \sum_{b_{1} \in T_{k}, r \in L(k)} r^{-1} \overline{\Pi}_{1} \overline{\widehat{N}}(^{\ast}b_{1})\varphi^{-1}((D^{-1})_{b_{1}r,b}) \\
&=-\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1} \overline{\pi}_{1} \overline{p} \xi_{b_{1}e_{k}} \varphi^{-1}((D^{-1})_{b_{1}r,b}) \\
&=-\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} \overline{\pi}_{1} \overline{p} \xi_{b_{1}r} \pi_{b_{1}r}D^{-1} \xi_{be_{k}} \\
&=-\overline{\pi}_{1} \overline{p} D^{-1} \xi_{be_{k}} \\
&=-\overline{\pi}_{1} D_{1}^{-1}p D D^{-1} \xi_{be_{k}} \\
&=-\overline{\pi}_{1}D_{1}^{-1}p \xi_{be_{k}} \\
&= -\underline{D}^{-1} \pi_{1}p \xi_{be_{k}}
\end{align*}
Consequently
\begin{center}
$\overline{\Pi}_{1} \psi_{k} \overline{N}(^{\ast}b)=\overline{\Pi}_{1} \overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))$
\end{center}
\vspace{0.1in}
Now consider the map
\begin{center}
$\widehat{\gamma'}: N_{out} \rightarrow \operatorname{im}(\hat{\gamma})$
\end{center}
defined by the equality $\hat{\gamma}=\overline{i}\widehat{\gamma'}$. Let us consider $\overline{\Pi}_{2}$. We have: \\
\begin{center}
$\overline{\Pi}_{2}\psi_{k}\overline{N}(^{\ast}b)=\overline{\Pi}_{2} \psi_{k} J_{2} \Pi_{2} \overline{N}(^{\ast}b)=-C_{1}\gamma' \xi_{be_{k}}$
\end{center}
and
\begin{align*}
\overline{\Pi}_{2} \overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))&=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1} \overline{\Pi}_{2} \overline{\widehat{N}}(^{\ast}b_{1})\varphi^{-1}((D^{-1})_{b_{1}r,b}) \\
&=-\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1} \widehat{\gamma'} \xi_{b_{1}e_{k}} \varphi^{-1}((D^{-1})_{b_{1}r,b}) \\
&=-\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} \widehat{\gamma'} r^{-1} \xi_{b_{1}e_{k}}\varphi^{-1}((D^{-1})_{b_{1}r,b}) \\
&=-\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} \widehat{\gamma'} \xi_{b_{1}r} \pi_{b_{1}r}D^{-1} \xi_{be_{k}} \\
&=-\widehat{\gamma'}D^{-1}\xi_{be_{k}}
\end{align*}
Also:
\begin{center}
$\overline{i}C_{1}\gamma' \xi_{be_{k}}=Ci\gamma' \xi_{be_{k}}=C\gamma \xi_{be_{k}} = \hat{\gamma}D^{-1}\xi_{be_{k}}=\overline{i}\widehat{\gamma'}D^{-1}\xi_{be_{k}}$
\end{center}
Since $\overline{i}$ is injective, $C_{1}\gamma' \xi_{be_{k}}=\widehat{\gamma'}D^{-1}\xi_{be_{k}}$. Therefore
\begin{center}
$\overline{\Pi}_{2}\psi_{k}\overline{N}(^{\ast}b)=\overline{\Pi}_{2} \overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))$
\end{center}
Finally:
\begin{align*}
\overline{\Pi}_{3} \psi_{k} \overline{N}(^{\ast}b)&=\overline{\Pi}_{3} \psi_{k}J_{3}\Pi_{3}\overline{N}(^{\ast}b)=0 \\
\overline{\Pi}_{3} \overline{\widehat{N}}(\widehat{\varphi}(^{\ast}b))&=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1}\overline{\Pi}_{3} \overline{ \widehat{N}}(^{\ast}b_{1})\varphi^{-1}((D^{-1})_{b_{1}r,b})=0 \\
\overline{\Pi}_{4}\psi_{k} \overline{N}(^{\ast}b)&=\overline{\Pi}_{4} \psi_{k}J_{4}\Pi_{4}\overline{N}(^{\ast}b)=0 \\
\overline{\Pi}_{4} \overline{ \widehat{N}}(\widehat{\varphi}(^{\ast}b))&=\displaystyle \sum_{r \in L(k), b_{1} \in T_{k}} r^{-1} \overline{\Pi}_{4} \overline {\widehat{N}}(^{\ast}b_{1})\varphi^{-1}((D^{-1})_{b_{1}r,b})=0
\end{align*}
and the proof of Proposition \ref{prop5} is now complete.
\end{proof}
\begin{theorem} \label{teo1} Let $\varphi: \mathcal{F}_{S}(M_{1}) \rightarrow \mathcal{F}_{S}(M)$ be an algebra isomorphism with $\varphi_{|S}=id_{S}$ and let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M_{1}),P)$ where $P$ is a potential such that $e_{k}Pe_{k}=0$. Then there exists an algebra isomorphism:
\begin{center}
$\hat{\varphi}: \mathcal{F}_{S}(\mu_{k}M_{1}) \rightarrow \mathcal{F}_{S}(\mu_{k}M)$
\end{center}
satisfying the following conditions:
\begin{enumerate}[(a)]
\item $\hat{\varphi}(\mu_{k}P)$ is cyclically equivalent to $\mu_{k}(\varphi(P))$.
\item There exists an isomorphism of decorated representations $\psi: \widetilde{\mu_{k}}(\mathcal{N}) \rightarrow \widetilde{\mu_{k}}(\widehat{\mathcal{N}})$, where $\widehat{\mathcal{N}}=(\widehat{N},V)$ is the decorated representation of $(\mathcal{F}_{S}(M),\varphi(P))$ with $\widehat{N}=N$ as $F$-vector spaces and action $u \ast n=\varphi^{-1}(u)n$.
\end{enumerate}
\end{theorem}
\begin{proof} Part (a) follows by \cite[Theorem 8.14]{2}. Note that $\varphi$ is equal to the composition of an algebra isomorphism $\mathcal{F}_{S}(M_{1}) \rightarrow \mathcal{F}_{S}(M)$, induced by an isomorphism of $S$-bimodules $M_{1} \rightarrow M$, with a unitriangular automorphism of $\mathcal{F}_{S}(M)$. In view of Proposition \ref{prop5}, it suffices to prove the statement when $\varphi$ is induced by an isomorphism of $S$-bimodules $\phi: M_{1} \rightarrow M$. Let $T_{1}$ be a $Z$-free generating set of $M_{1}$ and let $T$ be a $Z$-free generating set of $M$. Associated to the representation $\mathcal{N}$ we have the following left $D_{k}$-spaces:
\begin{align*}
N_{in}^{1}&=\displaystyle \bigoplus_{a \in _{k}T_{1}} D_{k} \otimes_{F} N_{\tau(a)} \\
N_{out}^{1}&= \displaystyle \bigoplus_{b \in (T_{1})_{k}} D_{k} \otimes_{F} N_{\sigma(b)}
\end{align*}
and associated to the representation $\widehat{\mathcal{N}}$ we also have the following left $D_{k}$-spaces:
\begin{align*}
N_{in} &= \displaystyle \bigoplus_{a \in _{k}T} D_{k} \otimes_{F} N_{\tau(a)} \\
N_{out}&=\displaystyle \bigoplus_{b \in T_{k}} D_{k} \otimes_{F} N_{\sigma(b)}
\end{align*}
Let $\gamma_{1}: N_{out}^{1} \rightarrow N_{in}^{1}$, $\alpha_{1}: N_{in}^{1} \rightarrow N_{k}$, $\beta_{1}: N_{k} \rightarrow N_{out}^{1}$ be the maps associated to $\mathcal{N}$ and $\gamma: N_{out} \rightarrow N_{in}$, $\alpha: N_{in} \rightarrow N_{k}$ and $\beta: N_{k} \rightarrow N_{out}$ be the maps associated to $\widehat{\mathcal{N}}$. \\
For $a_{1} \in _{k}T_{1}, r \in L(k)$ we have:
\begin{center}
$\phi(ra_{1})=\displaystyle \sum_{r' \in L(k), a \in _{k}T} r'aC_{r'a,ra_{1}}$
\end{center}
For $b_{1} \in (T_{1})_{k}, t \in L(k)$ we have:
\begin{center}
$\phi(b_{1}t)=\displaystyle \sum_{b \in T_{k}, t' \in L(k)} D_{b_{1}t,bt'}bt'$
\end{center}
where $C_{r'a,ra_{1}}, D_{b_{1}t,bt'} \in S$. Define $C: N_{in}^{1} \rightarrow N_{in}$ as the linear map such that for all $r,r' \in L(k)$, $a \in _{k}T$, $a_{1} \in _{k}T_{1}$, the map:
\begin{center}
$\pi_{r'a}C\xi_{ra_{1}}: N_{\tau(a_{1})} \rightarrow N_{\tau(a)}$
\end{center}
is given by $\pi_{r'a}C\xi_{ra_{1}}(n)=C_{r'a,ra_{1}}n$. Similarly, we define $D: N_{out} \rightarrow N_{out}^{1}$ as the linear map such that for all $t,t' \in L(k)$, $b \in T_{k}$, $b_{1} \in (T_{1})_{k}$, the map:
\begin{center}
$\pi_{b_{1}t}D\xi_{bt'}: N_{\sigma(b)} \rightarrow N_{\sigma(b_{1})}$
\end{center}
is given by $\pi_{b_{1}t}D\xi_{bt'}(n)=D_{b_{1}t,bt'}n$. \\
Let us show that $\alpha C= \alpha_{1}$, $D\beta=\beta_{1}$. If $n \in N_{\tau(a_{1})}$, then \\
\begin{align*}
\alpha C \xi_{ra_{1}}(n)&=\displaystyle \sum_{r'a' \in _{k}\hat{T}} \alpha \xi_{r'a'}\pi_{r'a'}C\xi_{ra_{1}}(n) \\
&=\displaystyle \sum_{r'a' \in _{k}\hat{T}} \phi^{-1}\left(r'a'C_{r'a',ra_{1}}\right)n = \phi^{-1}\phi(ra_{1})n \\
&=ra_{1}n=\alpha_{1}\xi_{ra_{1}}(n)
\end{align*}
whence $\alpha C=\alpha_{1}$. If $n \in N_{\sigma(b)}$, $b_{1} \in (T_{1})_{k}$ and $t \in L(k)$, then \\
\begin{align*}
\pi_{b_{1}t}D\beta(n)&=\displaystyle \sum_{bt' \in \hat{T}_{k}} \pi_{b_{1}t}D\xi_{bt'}\pi_{bt'}\beta(n) \\
&=\displaystyle \sum_{bt' \in \hat{T}_{k}} D_{b_{1}t,bt'}(\phi^{-1}(bt')n) \\
&=\phi^{-1}\left( \displaystyle \sum_{bt' \in \hat{T}_{k}} D_{b_{1}t,bt'}bt'n\right) \\
&=\phi^{-1}\phi(b_{1}t)n \\
&=b_{1}tn=\pi_{b_{1}t}\beta_{1}(n)
\end{align*}
hence $D\beta=\beta_{1}$. \\
We now show that $\hat{\gamma}=C\gamma_{1} D$. Let $br \in \hat{T}_{k}$, $sa \in _{k}\hat{T}$, $n \in N_{\sigma(b)}$. We have:
\begin{center}
$\hat{\gamma}_{sa,br}(n)=\displaystyle \sum_{w \in L(k)} s^{\ast}(r^{-1}w)\varphi^{-1}(Y_{[bwa]}(\varphi(P)))n$
\end{center}
Also:
\begin{center}
$\varphi^{-1}\left(Y_{[bwa]}(\varphi(P))\right)=\varphi^{-1} \rho^{-1} \left(X_{[bwa]}(\rho\varphi(P))\right)=\rho^{-1}\hat{\varphi}^{-1}\left(X_{[bwa]}(\hat{\varphi}\rho(P))\right)$
\end{center}
Let
\begin{center}
$T_{1}^{\mu}=\{\rho(c_{1}): c_{1} \in T_{1} \cap \bar{e_{k}}M_{1}\bar{e_{k}}\} \cup \{a_{1}^{\ast}: a_{1} \in _{k}T_{1}\} \cup \{^{\ast}b_{1}: b_{1} \in (T_{1})_{k} \} \cup \{\rho(b_{1}ra_{1}): b_{1} \in (T_{1})_{k}, a_{1} \in _{k}T_{1}, r \in L(k)\}$
\end{center}
denote the $Z$-free generating set of $\mu_{k}M_{1}$ associated to $T_{1}$. Likewise, let $T^{\mu}$ denote the $Z$-free generating set of $\mu_{k}M$ associated to $T$. Then:
\begin{center}
$X_{[bwa]}(\hat{\varphi}(\rho(P)))=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{c \in (\hat{T}_{1})^{\mu}_{\hat{k},\hat{k}}} (s\rho(bwa))^{\ast}(\hat{\Delta}(\hat{\varphi}(c))\Diamond z(c))s$
\end{center}
where $z(c)=\hat{\varphi}(X_{c}(\rho(P)))$. \\
The elements of the set $(T_{1})^{\mu}_{\hat{k},\hat{k}}$ are either elements of the form $\rho(c_{1})$ where $c_{1} \in T_{1} \cap \bar{e_{k}}M_{1}\bar{e_{k}}$ or elements of the form $\rho(b_{1}ra_{1})$ where $b_{1} \in (T_{1})_{k},a_{1} \in _{k}T_{1}, r \in L(k)$. We now compute $W(c_{1})$ when $c_{1} \in T_{1} \cap \bar{e_{k}}M_{1}\bar{e_{k}}$.
\begin{center}
$W(c_{1})=\displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}( \hat{\Delta}(\hat{\varphi}\rho(c_{1})) \Diamond z(\rho(c_{1})))s$
\end{center}
Then $\hat{\varphi}\rho(c_{1})=\rho(\varphi(c_{1}))=\rho(\phi(c_{1}))=\displaystyle \sum_{i} r_{i}\rho(c_{i})t_{i}$ where $r_{i},t_{i} \in S$, $c_{i} \in T \cap \bar{e_{k}}M\bar{e_{k}}$.
Using \cite[Lemma 5.2]{2} one gets:
\begin{align*}
W(c_{1})&=\displaystyle \sum_{i} \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left(\hat{\Delta}(\rho(c_{i})) \Diamond t_{i}z(\rho(c_{1}))r_{i}\right)s \\
&=\displaystyle \sum_{i} \displaystyle \sum_{s \in L(\sigma(b))} (s\rho(bwa))^{\ast}\left(\rho(c_{i})t_{i}z(\rho(c_{1}))r_{i}\right)s=0.
\end{align*}
Therefore
\begin{center}
$X_{[bwa]}(\hat{\varphi}(\rho(P)))=\displaystyle \sum_{s \in L(\sigma(b))} \displaystyle \sum_{b_{1}ha_{1}} (s\rho(bwa))^{\ast}(\hat{\Delta}(\hat{\varphi}\rho(b_{1}ha_{1})) \Diamond z(\rho(b_{1}ha_{1})))s$
\end{center}
and \\
\begin{center}
$\hat{\varphi}(\rho(b_{1}ha_{1}))=\rho\varphi(b_{1}ha_{1})=\rho(\phi(b_{1})\phi(ha_{1}))=\displaystyle \sum_{b',r',r'',a'',w'} (w')^{\ast}(r'r'')D_{b_{1},b'r'}\rho(b'w'a'')C_{r''a'',ha_{1}}$
\end{center}
Once again, \cite[Lemma 5.2]{2} yields: \\
\begin{center}
$X_{[bwa]}(\hat{\varphi}(\rho(P)))=\displaystyle \sum_{s \in L(\sigma(b)),b_{1}ha_{1},b',r',r'',a'',w'} (s\rho(bwa))^{\ast}\left((w')^{\ast}(r'r'')\rho(b'w'a'')C_{r''a'',ha_{1}}z(\rho(b_{1}ha_{1}))D_{b_{1},b'r'}\right)s$
\end{center}
Therefore:
\begin{center}
$X_{[bwa]}(\hat{\varphi}(\rho(P)))=\displaystyle \sum_{b_{1}ha_{1},r',r''} w^{\ast}(r'r'')C_{r''a,ha_{1}}z(\rho(b_{1}ha_{1}))D_{b_{1},br'}$
\end{center}
Note that
\begin{center}
$z(\rho(b_{1}ha_{1}))=\hat{\varphi}(X_{[b_{1}ha_{1}]}(\rho(P)))=\hat{\varphi}\rho(Y_{[b_{1}ha_{1}]}(P))=\rho\varphi(Y_{[b_{1}ha_{1}]}(P))$
\end{center}
This yields the equality:
\begin{center}
$X_{[bwa]}(\hat{\varphi}(\rho(P)))=\rho \varphi \left( \displaystyle \sum_{b_{1}ha_{1},r',r''} w^{\ast}(r'r'')C_{r''a,ha_{1}}Y_{[b_{1}ha_{1}]}(P)D_{b_{1},br'}\right)$
\end{center}
Consequently:
\begin{align*}
\hat{\gamma}_{sa,br}(n)&=\displaystyle \sum_{w \in L(k)} s^{\ast}(r^{-1}w) \displaystyle \sum_{b_{1}ha_{1},r',r''} w^{\ast}(r'r'')C_{r''a,ha_{1}}Y_{[b_{1}ha_{1}]}(P)D_{b_{1},br'} n\\
&=\displaystyle \sum_{b_{1}ha_{1},r',r''} s^{\ast}(r^{-1}r'r'')C_{r''a,ha_{1}}Y_{[b_{1}ha_{1}]}(P)D_{b_{1},br'}n
\end{align*}
A similar argument to the proof of Lemma \ref{lem4} yields that $\hat{\gamma}_{sa,br}(n)$ is equal to $(C\gamma_{1}D)_{sa,br}(n)$. The desired right-equivalence is analogous to the one constructed in Proposition \ref{prop5}.
\end{proof}
\begin{prop} \label{prop6} The right-equivalence class of the representation $\widetilde{\mu_{k}}(\mathcal{N})$ is determined by the right-equivalence class of the representation $\mathcal{N}$.
\end{prop}
\begin{proof}
Let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M),P)$ and let $\mathcal{N}'=(N',V')$ be a decorated representation of $(\mathcal{F}_{S}(M'),P')$. Suppose that such representations are right-equivalent, then there exists an algebra isomorphism $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M')$, with $\varphi_{|S}=id_{S}$, such that $\varphi(P)$ is cyclically equivalent to $P'$. Recall that associated to the representation $\mathcal{N}=(N,V)$ we have a decorated representation $\widehat{\mathcal{N}}=(\widehat{N},V)$ of $(\mathcal{F}_{S}(M'),\varphi(P))$ where $\widehat{N}=N$ as $F$-vector spaces, and for $u \in \mathcal{F}_{S}(M')$ and $n \in N$ we let $u \ast n:=\varphi^{-1}(u)n$. Let $\psi: \widehat{N} \rightarrow N'$ be the isomorphism of $F$-vector spaces induced by the right-equivalence between $\mathcal{N}$ and $\mathcal{N'}$. Note that $\psi$ is an isomorphism of left $\mathcal{F}_{S}(M')$-modules because if $u \in \mathcal{F}_{S}(M')$ and $n \in \widehat{N}$ then $\psi(un)=\psi(\varphi^{-1}(u)n)=\varphi(\varphi^{-1}(u))\psi(n)=u\psi(n)$. Now, let us show that:
\begin{enumerate}[(a)]
\item $\widetilde{\mu_{k}}(\mathcal{N})$ is right-equivalent to $\widetilde{\mu_{k}}(\mathcal{\widehat{N}})$
\item $\widetilde{\mu_{k}}(\mathcal{\widehat{N}})$ is right-equivalent to $\widetilde{\mu_{k}}(\mathcal{N'})$
\end{enumerate}
\vspace{0.2in}
Note that $(a)$ follows immediately from Theorem \ref{teo1}. We now prove $(b)$. Consider the cyclically equivalent potentials $\varphi(P)$ and $P'$ in $\mathcal{F}_{S}(M')$. By \cite[Proposition 8.15]{2}, $\mu_{k}(\varphi(P))$ is cyclically equivalent to $\mu_{k}P'$; in particular, $\mu_{k}(\varphi(P))$ is right-equivalent to $\mu_{k}P'$ via the identity map $id_{\mathcal{F}_{S}(\mu_{k}M')}$. \\
Note that $\psi$ induces isomorphisms of $F$-vector spaces:
\begin{align*}
& \hat{\psi}_{i}: N_{in} \rightarrow N_{in}' \\
& \hat{\psi}_{o}: N_{out} \rightarrow N_{out}'
\end{align*}
Let $\rho=\psi^{-1}: N' \rightarrow \widehat{N}$, then $\rho$ also induces isomorphisms of $F$-vector spaces:
\begin{align*}
& \rho_{i}: N_{in}' \rightarrow N_{in} \\
& \rho_{o}: N_{out}' \rightarrow N_{out}
\end{align*}
Let $\psi_{k}: N_{k} \rightarrow N_{k}'$ and $\rho_{k}: N_{k}' \rightarrow N_{k}$ be the maps induced by $\psi$ and $\rho$. Then we have the following equalities:
\begin{equation} \label{4.10}
\begin{split}
\psi_{k}\alpha&=\alpha' \hat{\psi}_{i}\\
\beta' \psi_{k}&=\hat{\psi}_{o} \beta\\
\hat{\psi}_{i} \gamma &= \gamma' \hat{\psi}_{o}\\
\rho_{k}\alpha'&=\alpha\rho_{i}\\
\beta\rho_{k} &= \rho_{o} \beta'\\
\rho_{i} \gamma' &= \gamma \rho_{o}
\end{split}
\end{equation}
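The identities in \ref{4.10} follow from the fact that $\psi$ and $\rho$ are morphisms of left $\mathcal{F}_{S}(M')$-modules and that the induced maps act componentwise. For instance, for the first identity, if $ra \in {}_{k}\hat{T}$ and $n \in N_{\tau(a)}$, then:
\begin{center}
$\psi_{k}\alpha\xi_{ra}(n)=\psi_{k}(ra \ast n)=ra\,\psi(n)=\alpha' \xi_{ra}\psi(n)=\alpha' \hat{\psi}_{i}\xi_{ra}(n)$
\end{center}
whence $\psi_{k}\alpha=\alpha' \hat{\psi}_{i}$; the remaining identities are verified in the same way.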
Observe that $\hat{\psi}_{i}$ induces a map $\operatorname{ker}(\alpha) \rightarrow \operatorname{ker}(\alpha')$ and $\rho_{i}$ induces a map $\operatorname{ker}(\alpha') \rightarrow \operatorname{ker}(\alpha)$ such that $\rho_{i}=(\hat{\psi}_{i})^{-1}$. Hence $\hat{\psi}_{i}$ induces an isomorphism between $\operatorname{ker}(\alpha)$ and $\operatorname{ker}(\alpha')$. Similarly, $\hat{\psi}_{i}$ induces an isomorphism $\operatorname{im}(\gamma) \rightarrow \operatorname{im}(\gamma')$, $\hat{\psi}_{o}$ induces an isomorphism $\operatorname{ker}(\gamma) \rightarrow \operatorname{ker}(\gamma')$ and $\hat{\psi}_{o}$ induces an isomorphism $\operatorname{im}(\beta) \rightarrow \operatorname{im}(\beta')$.
To construct $\overline{N'}$ we choose the splitting data $(p',\sigma')$ in terms of the splitting data $(p,\sigma)$ of $\overline{N}$ as follows:
\begin{equation} \label{4.11}
\begin{split}
p'&=\psi_{1}p(\hat{\psi}_{o})^{-1} \\
\sigma'&=\psi_{2}\sigma(\overline{\psi}_{2})^{-1}
\end{split}
\end{equation}
where $\psi_{1}$ is the restriction of $\hat{\psi}_{o}$ to $\operatorname{ker}(\gamma)$, $\psi_{2}$ is the restriction of the isomorphism $\hat{\psi}_{i}: N_{in} \rightarrow N_{in}'$ to $\operatorname{ker}(\alpha)$ and $\overline{\psi_{2}}: \frac{\operatorname{ker}(\alpha)}{\operatorname{im}(\gamma)} \rightarrow \frac{\operatorname{ker}(\alpha')}{\operatorname{im}(\gamma')}$ is the isomorphism induced by $\hat{\psi}_{i}$.
Define $\widetilde{\psi}: \overline{\widehat{N}} \rightarrow \overline{N'}$ as the map $\psi: \widehat{N}_{i} \rightarrow N'_{i}$ for all $i \neq k$ and if $i=k$ then $\widetilde{\psi}$ is the map given in diagonal matrix form by the maps previously defined. Let $d_{1} \in D_{k}$, $b \in T_{k}$ and $d_{2} \in D_{\sigma(b)}$. Let us show the following equality holds for every $n \in N_{\sigma(b)}$: \\
\begin{center}
$\widetilde{\psi}(d_{1}(^{\ast}b)d_{2} \cdot n) = d_{1}(^{\ast}b)d_{2} \cdot \widetilde{\psi}(n)$
\end{center}
\vspace{0.2in}
First consider the term on the left-hand side. We have \\
\begin{align*}
\widetilde{\psi}(d_{1}(^{\ast}b)d_{2} \cdot n)&=\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\\
&=\widetilde{\psi}
\begin{pmatrix}
-d_{1} \pi_{1} p \xi_{be_{k}}(d_{2}n) \\
-d_{1} \gamma' \xi_{be_{k}}(d_{2}n) \\
0 \\
0
\end{pmatrix}
\end{align*}
On the other hand,
\begin{align*}
d_{1}(^{\ast}b)d_{2} \cdot \widetilde{\psi}(n)&=d_{1}(^{\ast}b)d_{2} \cdot \psi(n) \\
&=\begin{pmatrix}
-d_{1}\pi_{1}' p' \xi_{be_{k}}(d_{2}\psi(n)) \\
-d_{1} \gamma' \xi_{be_{k}}(d_{2}\psi(n)) \\
0 \\
0
\end{pmatrix}
\end{align*}
\vspace{0.1in}
In what follows, we will use the notation of \ref{4.2} for the projection maps associated to $\overline{N'}_{k}$. \\
We now show that $\Pi_{1}'\left(\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\right)=\Pi_{1}' \left(d_{1}(^{\ast}b)d_{2} \cdot \psi(n)\right)$. \\
On one hand
\begin{center}
$\Pi_{1}'\left(\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\right)=\pi_{1}' \left(-d_{1}\psi_{1}p\xi_{be_{k}}(d_{2}n)\right)$
\end{center}
and on the other hand
\begin{align*}
\Pi_{1}' \left(d_{1}(^{\ast}b)d_{2} \cdot \psi(n)\right)&=-d_{1}\pi_{1}' p' \xi_{be_{k}}(d_{2}\psi(n)) \\
&=-d_{1}\pi_{1}' p' \hat{\psi}_{o} \xi_{be_{k}}(d_{2}n)
\end{align*}
By \ref{4.11} we have $p' \hat{\psi}_{o}=\psi_{1}p$ and thus \\
\begin{center}
$-d_{1}\pi_{1}' p' \hat{\psi}_{o} \xi_{be_{k}}(d_{2}n)= -d_{1} \pi_{1}' \psi_{1}p \xi_{be_{k}}(d_{2}n)=\pi_{1}'\left(-d_{1}\psi_{1}p\xi_{be_{k}}(d_{2}n)\right)$
\end{center}
Therefore $\Pi_{1}'\left(\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\right)=\Pi_{1}' \left(d_{1}(^{\ast}b)d_{2} \cdot \psi(n)\right)$, as was to be shown. \\
We now verify that $\Pi_{2}'\left(\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\right)=\Pi_{2}' \left(d_{1}(^{\ast}b)d_{2} \cdot \psi(n)\right)$. We have
\begin{align*}
\Pi_{2}'\left(\widetilde{\psi}(d_{1}\overline{N}(^{\ast}b)(d_{2}n))\right)&=-d_{1}\hat{\psi}_{i}\gamma' \xi_{be_{k}}(d_{2}n) \\
&=-d_{1}\gamma' \hat{\psi}_{o} \xi_{be_{k}}(d_{2}n) \\
&=-d_{1} \gamma' \xi_{be_{k}} (d_{2}\widetilde{\psi}(n))\\
&=\Pi_{2}' \left(d_{1}(^{\ast}b)d_{2} \cdot \psi(n)\right)
\end{align*}
Let $a \in _{k}T$, $d_{1} \in D_{\tau(a)}$, $d_{2} \in D_{k}$ and $w \in \overline{N}_{k}$. We now prove the following equality: \\
\begin{center}
$\widetilde{\psi}(d_{1}a^{\ast}d_{2} \cdot w)=d_{1}a^{\ast}d_{2} \widetilde{\psi}(w)$
\end{center}
Using \ref{4.3} we obtain
\begin{align*}
\widetilde{\psi}(d_{1}a^{\ast} d_{2} w)&=\widetilde{\psi}\left(d_{1}c_{k}^{-1}\pi_{e_{k}a}i \Pi_{2}(d_{2}w)+d_{1}c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2}\Pi_{3}(d_{2}w)\right) \\
&=d_{1}c_{k}^{-1}\psi\left(\pi_{e_{k}a}i\Pi_{2}(d_{2}w)\right)+d_{1}c_{k}^{-1}\psi \left(\pi_{e_{k}a}j'\sigma_{2}\Pi_{3}(d_{2}w)\right)
\end{align*}
On the other hand
\begin{align*}
d_{1}a^{\ast}d_{2}\widetilde{\psi}(w)&=d_{1}\overline{N}'(a^{\ast})(d_{2}\widetilde{\psi}(w)) \\
&=d_{1}c_{k}^{-1}\pi_{e_{k}a}'i \hat{\psi}_{i} \Pi_{2}(d_{2}w)+d_{1}c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2}'\overline{\psi}_{2}(\Pi_{3}(d_{2}w))
\end{align*}
and thus (b) follows. This completes the proof of Proposition \ref{prop6}. \\
\end{proof}
\section{Mutation of decorated representations} \label{sec5}
Let $(\mathcal{F}_{S}(M),P)$ be an algebra with potential such that $P^{(2)}$ is splittable. By \cite[Theorem 7.15]{2} there exists a decomposition of $S$-bimodules $M=M_{1} \oplus \Xi_{2}(P)$ and a unitriangular automorphism $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$ such that $\varphi(P)$ is cyclically equivalent to $Q^{\geq 3}+Q^{(2)}$, where $Q^{\geq 3}$ is a reduced potential in $\mathcal{F}_{S}(M_{1})$ and $Q^{(2)}$ is a trivial potential in $\mathcal{F}_{S}(\Xi_{2}(P))$. \\
Given a decorated representation $\mathcal{N}$ of $(\mathcal{F}_{S}(M),P)$, the twisted representation $\widehat{\mathcal{N}}$ is then a decorated representation of $(\mathcal{F}_{S}(M),Q^{\geq 3}+Q^{(2)})$. Therefore, $\widehat{\mathcal{N}}|_{\mathcal{F}_{S}(M_{1})}$ is a decorated representation of $(\mathcal{F}_{S}(M_{1}),Q^{\geq 3})$.
\begin{definition} We will refer to the decorated representation $\widehat{\mathcal{N}}|_{\mathcal{F}_{S}(M_{1})}$ as the reduced part of $\mathcal{N}$, and we will denote it by $\mathcal{N}_{red}$.
\end{definition}
Let $k$ be a fixed integer in $[1,n]$ and let $M=M_{1} \oplus M_{2}$ be a $Z$-freely generated $S$-bimodule such that for every $i$, $e_{i}Me_{k} \neq 0$ implies $e_{k}Me_{i}=0$ and likewise $e_{k}Me_{i} \neq 0$ implies $e_{i}Me_{k}=0$.
Let $T_{1}$ be a $Z$-free generating set of $M_{1}$ and let $T_{2}$ be a $Z$-free generating set of $M_{2}$. Let $P_{1}$ be a potential in $\mathcal{F}_{S}(M_{1})$ and let $W=\displaystyle \sum_{i=1}^{l} c_{i}d_{i}$ be a trivial potential in $\mathcal{F}_{S}(M_{2})$, where $\{c_{1},\hdots,c_{l},d_{1},\hdots, d_{l}\}=T_{2}$.
Suppose that $e_{k}P_{1}e_{k}=0$ and consider the potential $P=P_{1}+W$ in $\mathcal{F}_{S}(M_{1} \oplus M_{2})$. Since $e_{k}Pe_{k}=0$, then we can consider
\begin{center}
$\mu_{k}(P)=\mu_{k}(P_{1})+W$
\end{center}
Note that
\begin{center}
$\mu_{k}M=\mu_{k}M_{1} \oplus M_{2}$
\end{center}
Let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M),P)$. Recall that $\widetilde{\mu}_{k}(\mathcal{N})_{|\mathcal{F}_{S}(\mu_{k}M_{1})}$ is a decorated representation of $(\mathcal{F}_{S}(\mu_{k}M_{1}),\mu_{k}P_{1})$. Recall also that $\mathcal{N}_{| \mathcal{F}_{S}(M_{1})}$ is a decorated representation of $(\mathcal{F}_{S}(M_{1}),P_{1})$.
\begin{lemma} \label{lem9} The following equality holds: $\widetilde{\mu}_{k}(\mathcal{N}_{| \mathcal{F}_{S}(M_{1})})=(\widetilde{\mu}_{k}(\mathcal{N}))_{| \mathcal{F}_{S}(\mu_{k}M_{1})}$.
\end{lemma}
\begin{proof} Let $T_{1}$ be a $Z$-free generating set of $(M_{1})_{0}$ and let $T_{2}$ be a $Z$-free generating set of $(M_{2})_{0}$. Then
\begin{center}
$W=\displaystyle \sum_{s=1}^{l} c_{s}d_{s}$
\end{center}
where $T_{2}=\{c_{1}, \hdots, c_{l}, d_{1}, \hdots, d_{l}\}$. Note that $e_{k}T_{2}=T_{2}e_{k}=0$. We have \\
\begin{center}
$\mathcal{N}_{| \mathcal{F}_{S}(M_{1})}=(N_{| \mathcal{F}_{S}(M_{1})},V)$
\end{center}
Define $N'=N_{| \mathcal{F}_{S}(M_{1})}$. For $i \neq k$, $\overline{N'}_{i}=N'_{i}=N_{i}=\overline{N}_{i}$. On the other hand \\
\begin{center}
$N'_{out}=\displaystyle \bigoplus_{b \in (T_{1})_{k}} D_{k} \otimes_{F} N_{\sigma(b)}= \displaystyle \bigoplus_{b \in T_{k}} D_{k} \otimes_{F} N_{\sigma(b)}=N_{out}$
\end{center}
Similarly, $N'_{in}=N_{in}$. Now let $\alpha': N'_{in} \rightarrow N'_{k}$, $\beta': N'_{k} \rightarrow N'_{out}$ and $\gamma': N'_{out} \rightarrow N'_{in}$ be the maps associated to the representation $\overline{N'}$. Clearly $\alpha=\alpha'$, $\beta=\beta'$. Also:
\begin{center}
$Y_{[bra]}(P_{1}+W)=Y_{[bra]}(P_{1})$
\end{center}
and thus $\gamma=\gamma'$. Then $\overline{N'}_{k}=\overline{N}_{k}$ and it follows that $\overline{N'}=\overline{N}$ as left $S$-modules. Since the action of $\mathcal{F}_{S}(\mu_{k}M_{1})$ in $\overline{N'}$ and $\overline{N}$ is the same, the claim follows.
\end{proof}
\begin{prop} \label{prop7} Let $(\mathcal{F}_{S}(M),P)$ be an algebra with potential where $P^{(2)}$ is splittable. Suppose that $e_{k}Pe_{k}=0$ and that $(M \otimes_{S} e_{k}M)_{cyc}=0$. For every decorated representation $\mathcal{N}$ of $(\mathcal{F}_{S}(M),P)$, the decorated representation $\widetilde{\mu_{k}}(\mathcal{N})_{red}$ is right-equivalent to $\widetilde{\mu_{k}}(\mathcal{N}_{red})_{red}$.
\end{prop}
\begin{proof}
By \cite[Theorem 7.15]{2} there exists a unitriangular automorphism:
\begin{center}
$\varphi_{1}: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$
\end{center}
\vspace{0.1in}
and a decomposition of $S$-bimodules $M=M_{1} \oplus \Xi_{2}(P)$ such that $\varphi_{1}(P)$ is cyclically equivalent to $Q+W_{1}$ where $Q$ is a reduced potential in $\mathcal{F}_{S}(M_{1})$ and $W_{1}$ is a trivial potential in $\mathcal{F}_{S}(\Xi_{2}(P))$. Then $\mu_{k}(\varphi_{1}(P))$ is cyclically equivalent to $\mu_{k}(Q)+W_{1}$. By \cite[Theorem 8.12]{2} there exists a unitriangular automorphism:
\begin{center}
$\varphi_{2}: \mathcal{F}_{S}(\mu_{k}M_{1}) \rightarrow \mathcal{F}_{S}(\mu_{k}M_{1})$
\end{center}
and a decomposition of $S$-bimodules:
\begin{center}
$\mu_{k}M_{1}=M_{2} \oplus \Xi_{2}(\mu_{k}(Q))$
\end{center}
such that $\varphi_{2}(\mu_{k}Q)=Q_{1}+W_{2}$ where $Q_{1}$ is a reduced potential in $\mathcal{F}_{S}(M_{2})$ and $W_{2}$ is a trivial potential in $\mathcal{F}_{S}(\Xi_{2}(\mu_{k}(Q)))$. Let $\hat{\varphi}_{2}$ be the unitriangular automorphism of $\mathcal{F}_{S}(\mu_{k}M)$ extending $\varphi_{2}$ and such that $\hat{\varphi}_{2}$ is equal to the identity in $\mathcal{F}_{S}(\Xi_{2}(P))$.
Now consider the algebra automorphism $\hat{\varphi}_{1}$ of $\mathcal{F}_{S}(\mu_{k}M)$ given by Theorem \ref{teo1}. Then \\
\begin{enumerate}[(a)]
\item $\widehat{\varphi}_{1}(\mu_{k}P)$ is cyclically equivalent to $\mu_{k}(\varphi_{1}(P))$.
\item There exists an isomorphism of decorated representations $\widetilde{\mu_{k}}(\mathcal{N}) \rightarrow \widetilde{\mu_{k}}(^{\varphi_{1}}{\mathcal{N}})$
\end{enumerate}
where $^{\varphi_{1}}{\mathcal{N}}$ denotes the representation $\widehat{\mathcal{N}}$ (to emphasize the dependence of the action on $\varphi_{1}$). We have that $\widehat{\varphi}_{2}\widehat{\varphi}_{1}(\mu_{k}P)$ is cyclically equivalent to $\widehat{\varphi}_{2}\mu_{k}(\varphi_{1}(P))$, and the latter is cyclically equivalent to $\widehat{\varphi}_{2}(\mu_{k}(Q)+W_{1})=\varphi_{2}(\mu_{k}(Q))+W_{1}$; this last potential is, in turn, cyclically equivalent to $Q_{1}+W_{1}+W_{2}$ and
\begin{center}
$\mu_{k}M=M_{2} \oplus \Xi_{2}(\mu_{k}Q) \oplus \Xi_{2}(P)$
\end{center}
with $Q_{1}$ a reduced potential in $\mathcal{F}_{S}(M_{2})$ and $W_{1}+W_{2}$ a trivial potential in $\mathcal{F}_{S}(\Xi_{2}(\mu_{k}Q) \oplus \Xi_{2}(P))$. Then $(\widetilde{\mu_{k}}(\mathcal{N}))_{red}$ is isomorphic to $^{\hat{\varphi}_{2}}(\widetilde{\mu_{k}}(^{\varphi_{1}}\mathcal{N}))_{| \mathcal{F}_{S}(M_{2})}$. Note that, by Lemma \ref{lem9}, $^{\widehat{\varphi}_{2}} \widetilde{\mu_{k}}(^{\varphi_{1}}\mathcal{N})_{|\mathcal{F}_{S}(\mu_{k}M_{1})}$ is isomorphic to $^{\widehat{\varphi}_{2}} \widetilde{\mu_{k}}(^{\varphi_{1}}\mathcal{N}_{|\mathcal{F}_{S}(M_{1})})$. The claim follows.
\end{proof}
\begin{definition} \label{def13} Let $\mathcal{N}=(N,V)$ be a decorated representation of the algebra with potential $(\mathcal{F}_{S}(M),P)$, where $P$ is reduced. We define the mutation of the decorated representation $\mathcal{N}$ at $k$ as: \\
\begin{center}
$\mu_{k}(\mathcal{N}):=\widetilde{\mu_{k}}(\mathcal{N})_{red}$
\end{center}
\end{definition}
\begin{cor} The correspondence $\mathcal{N} \mapsto \mu_{k}(\mathcal{N})$ is a well-defined transformation on the set of right-equivalence classes of decorated representations of reduced algebras with potentials.
\end{cor}
\begin{theorem} \label{theo2} The mutation $\mu_{k}$ of decorated representations is an involution; that is, for every decorated representation $\mathcal{N}$ of a reduced algebra with potential $(\mathcal{F}_{S}(M),P)$, the decorated representation $\mu_{k}^{2}(\mathcal{N})$ is right-equivalent to $\mathcal{N}$.
\end{theorem}
\begin{proof} Let $\mathcal{N}=(N,V)$ be a decorated representation of $(\mathcal{F}_{S}(M),P)$. First note that
\begin{align*}
\mu_{k}^{2}(\mathcal{N})&=\mu_{k}\left(\mu_{k}(\mathcal{N})\right) \\
&=\widetilde{\mu_{k}}(\mu_{k}\left(\mathcal{N}\right))_{red} \\
&=\widetilde{\mu_{k}}\left( \widetilde{\mu_{k}}(\mathcal{N})_{red}\right)_{red}
\end{align*}
In view of Proposition \ref{prop7}, $\widetilde{\mu_{k}}\left( \widetilde{\mu_{k}}(\mathcal{N})_{red}\right)_{red}$ is right-equivalent to $\widetilde{\mu_{k}} \left( \widetilde{\mu_{k}} (\mathcal{N})\right)_{red}=\widetilde{\mu_{k}}^{2}(\mathcal{N})_{red}$. Thus, it suffices to show that $\widetilde{\mu_{k}}^{2}\left(\mathcal{N}\right)_{red}$ is right-equivalent to $\mathcal{N}$.
We write the representation $\widetilde{\mu_{k}}^{2}\left(\mathcal{N}\right)$ as $\left(\overline{\overline{N}}, \overline{\overline{V}}\right)$ and this representation is associated to the algebra with potential $\left(\mathcal{F}_{S}(\mu_{k}^{2}M),\mu_{k}^{2}P\right)$. By \cite[Proposition 8.8]{2} and \cite[Theorem 8.17]{2} we have:
\begin{align*}
\mu_{k}^{2}M&=M \oplus Me_{k}M \oplus M^{\ast}e_{k}(^{\ast}M) \\
\mu_{k}^{2}P&=\rho(P)+\displaystyle \sum_{bt,sa} \left( [btsa][(sa)^{\ast}(^{\ast}bt)]+[(sa)^{\ast}(^{\ast}(bt))](bt)(sa)\right)
\end{align*}
By \cite[Theorem 8.17]{2} there exists an algebra automorphism $\psi: \mathcal{F}_{S}(\mu_{k}^{2}M) \rightarrow \mathcal{F}_{S}(\mu_{k}^{2}M)$ such that $\psi(\mu_{k}^{2}P)$ is cyclically equivalent to $P+W$, where $W$ is a trivial potential in $\mathcal{F}_{S}\left(Me_{k}M \oplus M^{\ast}e_{k}(^{\ast}M)\right)$. This automorphism $\psi$ restricts to an automorphism $\psi_{0}: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$ such that $\psi_{0}(b)=-b$ for every $b \in T_{k}$ and $\psi_{0}(x)=x$ for every $x \in T \setminus T_{k}$. In view of Definition \ref{def13}, the representation $\widetilde{\mu_{k}}^{2}(\mathcal{N})_{red}$ can be realized as $(\overline{\overline{N}},\overline{\overline{V}})$, the latter being associated to the algebra with potential $(\mathcal{F}_{S}(M),P)$, where the action of $\mathcal{F}_{S}(M)$ on $\overline{\overline{N}}$ is given by $u \cdot n = \psi_{0}^{-1}(u)n$. \\
Let us show that $\overline{\alpha}: N_{out}=\overline{N}_{in} \longrightarrow \overline{N}_{k}$ is the $F$-linear map such that for each $b \in T_{k}$ and $r \in L(k)$, $\overline{\alpha} \xi_{br}=r^{-1}\overline{N}(^{\ast}b)$. We have
\begin{align*}
\overline{\alpha}\xi_{br}(n)&=\displaystyle \sum_{w \in L(k)} \overline{\alpha}\xi_{w(^{\ast}b)}\pi_{w(^{\ast}b)}\xi_{br}(n) \\
&=\displaystyle \sum_{w \in L(k)} \overline{\alpha}\xi_{w(^{\ast}b)}\pi'_{w(^{\ast}b)}(r^{-1} \otimes n) \\
&=\displaystyle \sum_{w \in L(k)} \overline{\alpha}\xi_{w(^{\ast}b)}w^{\ast}(r^{-1})n \\
&=\displaystyle \sum_{w \in L(k)} \overline{N}(w(^{\ast}b))(w^{\ast}(r^{-1})n) \\
&=\overline{N} \left( \displaystyle \sum_{w \in L(k)} w^{\ast}(r^{-1})w(^{\ast}b)\right)(n) \\
&=\overline{N}(r^{-1}(^{\ast}b))(n) \\
&=r^{-1}\overline{N}(^{\ast}b)(n)
\end{align*}
We now show that $\operatorname{ker}(\overline{\alpha})=\operatorname{im}(\beta)$. \\
Let $x \in N_{out}$. Using \ref{3.15} we have $x=\displaystyle \sum_{b \in T_{k},r \in L(k)} \xi_{br} \pi_{br}(x)$. Therefore
\begin{align*}
\overline{\alpha}(x)&= \displaystyle \sum_{b \in T_{k}, r \in L(k)} \overline{\alpha} \xi_{br} \pi_{br}(x) \\
&=\displaystyle \sum_{b \in T_{k}, r \in L(k)}r^{-1} \overline{N}(^{\ast} b)\left(\pi_{br}(x)\right)
\end{align*}
\vspace{0.1in}
thus $\overline{\alpha}(x)=0$ if and only if $\Pi_{i} \left(\displaystyle \sum_{b \in T_{k}, r \in L(k)}r^{-1} \overline{N}(^{\ast} b)\left(\pi_{br}(x)\right)\right)=0$ for every $i \in \{1,2,3,4\}$. Recalling \ref{4.4} yields
\begin{equation} \label{5.1}
\Pi_{1}\left(\displaystyle \sum_{b \in T_{k}, r \in L(k)}r^{-1} \overline{N}(^{\ast} b)\left(\pi_{br}(x)\right)\right)= -\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\pi_{1} p \xi_{be_{k}}\pi_{br}(x)
\end{equation}
\begin{equation} \label{5.2}
\Pi_{2}\left(\displaystyle \sum_{b \in T_{k}, r \in L(k)}r^{-1} \overline{N}(^{\ast} b)\left(\pi_{br}(x)\right)\right)= -\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\gamma' \xi_{be_{k}}\pi_{br}(x)
\end{equation}
\vspace{0.1in}
so if $\overline{\alpha}(x)=0$ then \ref{5.2} implies that \\
\begin{equation} \label{5.3}
\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1} \xi_{be_{k}} \pi_{br}(x) \in \operatorname{ker}(\gamma)
\end{equation}
On the other hand, if $\overline{\alpha}(x)=0$ then by \ref{5.1}, $\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}p \xi_{be_{k}} \pi_{br}(x) \in \operatorname{im}(\beta)$. Note that
\begin{center}
$\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}p \xi_{be_{k}} \pi_{br}(x) =p \left( \displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\xi_{be_{k}}\pi_{br}(x)\right)$
\end{center}
\vspace{0.2in}
Using \ref{5.3} and the fact that $p$ is a left inverse of the inclusion map $j: \operatorname{ker}(\gamma) \rightarrow N_{out}$ yields $p \left( \displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\xi_{be_{k}}\pi_{br}(x)\right)=\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\xi_{be_{k}}\pi_{br}(x)$. Finally, note that
\begin{center}
$\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}\xi_{be_{k}}\pi_{br}(x)=\displaystyle \sum_{b \in T_{k}, r \in L(k)} \xi_{br} \pi_{br}(x)$
\end{center}
\vspace{0.1in}
and by \ref{3.15} $\displaystyle \sum_{b \in T_{k}, r \in L(k)} \xi_{br} \pi_{br}(x)=x$. Therefore, $\overline{\alpha}(x)=0$ if and only if $\displaystyle \sum_{b \in T_{k}, r \in L(k)} r^{-1}p \xi_{be_{k}} \pi_{br}(x)=x \in \operatorname{im}(\beta)$. This proves that:
\begin{equation} \label{5.4}
\operatorname{ker}(\overline{\alpha})=\operatorname{im}(\beta)
\end{equation}
\vspace{0.2in}
As a consequence of \ref{5.1} and \ref{5.2}, $\Pi_{1} \overline{\alpha}=-\pi_{1}p$ and $\Pi_{2} \overline{\alpha} = - \gamma'$. Therefore
\begin{equation} \label{5.5}
\operatorname{im}(\overline{\alpha})=\frac{\operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \oplus \operatorname{im}(\gamma) \oplus \{0\} \oplus \{0\}
\end{equation}
\vspace{0.2in}
We now show that $\overline{\beta}: \overline{N}_{k} \longrightarrow \overline{N}_{out}=N_{in}$ is the $F$-linear map such that for each $r \in L(k)$ and $a \in _{k}T$, $\pi_{ra} \overline{\beta}=\overline{N}(a^{\ast})r^{-1}$. We have:
\begin{align*}
\pi_{ra}\overline{\beta}(n)&=\displaystyle \sum_{w \in L(k)} \pi_{ra} \xi_{a^{\ast}w} \pi_{a^{\ast}w} \overline{\beta}(n) \\
&=\displaystyle \sum_{w \in L(k)} \pi_{ra}\xi_{a^{\ast}w}\overline{N}(a^{\ast}w)(n) \\
&=\displaystyle \sum_{w \in L(k)} \pi'_{ra}\left(w^{-1} \otimes \overline{N}(a^{\ast}w)(n)\right) \\
&=\displaystyle \overline{N} \left(a^{\ast} \displaystyle \sum_{w \in L(k)} r^{\ast}(w^{-1})w\right)(n)
\end{align*}
By Remark \ref{rem3}, $\displaystyle \sum_{w \in L(k)} r^{\ast}(w^{-1})w=r^{-1}$. It follows that
\begin{center}
$\pi_{ra} \overline{\beta}(n)=\overline{N}(a^{\ast}r^{-1})(n)=\overline{N}(a^{\ast})(r^{-1}n)$
\end{center}
We now prove that
\begin{equation} \label{5.6}
\operatorname{ker}(\overline{\beta})=\frac{\operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \oplus \{0\} \oplus \{0\} \oplus V_{k}
\end{equation}
Let $w \in \overline{N}_{k}$; then $w=\displaystyle \sum_{l=1}^{4} J_{l} \Pi_{l}(w)$. Suppose that $\overline{\beta}(w)=0$; then $\pi_{ra}\overline{\beta}(w)=0$ for every $r \in L(k)$ and $a \in _{k}T$. Using Lemma \ref{lem2} and \ref{4.3} we obtain the following equalities:
\begin{align*}
0=\pi_{ra} \overline{\beta}(w)&=\displaystyle \sum_{l=1}^{4} \pi_{ra} \overline{\beta} \left(J_{l} \Pi_{l}(w) \right) \\
&=\displaystyle \sum_{l=1}^{4} \overline{N}(a^{\ast})\left(J_{l}r^{-1}\Pi_{l}(w)\right) \\
&=\overline{N}(a^{\ast})J_{2}\left(r^{-1}\Pi_{2}(w)\right)+\overline{N}(a^{\ast})J_{3}\left(r^{-1}\Pi_{3}(w)\right) \\
&=c_{k}^{-1}\pi_{e_{k}a}i\left(r^{-1}\Pi_{2}(w)\right)+c_{k}^{-1}\pi_{e_{k}a}j'\sigma_{2}\left(r^{-1}\Pi_{3}(w)\right) \\
&=c_{k}^{-1}\pi_{ra}\left(i\Pi_{2}(w)\right)+c_{k}^{-1}\pi_{ra}\left(j'\sigma_{2}\Pi_{3}(w)\right) \\
&=c_{k}^{-1}\pi_{ra}\left(i\Pi_{2}(w)+j'\sigma_{2}\Pi_{3}(w)\right)
\end{align*}
Since this holds for all projections $\pi_{ra}$, we obtain \\
\begin{equation} \label{5.7}
0=i \Pi_{2}(w) + j' \sigma_{2}\Pi_{3}(w)=\Pi_{2}(w)+\sigma_{2}\Pi_{3}(w)
\end{equation}
By \ref{4.2}, $\Pi_{2}(w) \in \operatorname{im}(\gamma)$. Applying $\pi_{2}$ to \ref{5.7}, where $\pi_{2}: \operatorname{ker}(\alpha) \rightarrow \frac{ \operatorname{ker}(\alpha)}{\operatorname{im}(\gamma)}$ is the projection map, yields $\pi_{2} \sigma_{2} \Pi_{3}(w)=0$. Since $\pi_{2}\sigma_{2}=\operatorname{id}_{\operatorname{ker}(\alpha)/\operatorname{im}(\gamma)}$, it follows that $\Pi_{3}(w)=0$. Substituting $\Pi_{3}(w)=0$ into \ref{5.7} gives $\Pi_{2}(w)=0$. Consequently:
\begin{center}
$\operatorname{ker}(\overline{\beta})=\frac{\operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \oplus \{0\} \oplus \{0\} \oplus V_{k}$
\end{center}
and the proof of \ref{5.6} is now complete. We also have
\begin{equation} \label{5.8}
\operatorname{im}(\overline{\beta})=\operatorname{ker}(\alpha)
\end{equation}
We now prove the following formula
\begin{equation} \label{5.9}
\overline{\gamma}=c_{k}\beta \alpha
\end{equation}
First, we compute $Y_{[a^{\ast}w(^{\ast}b)]}(\mu_{k}P)$, where $a \in _{k}T$, $b \in T_{k}$, $w \in L(k)$, and $\Delta_{k}$ denotes the following potential
\begin{center}
$\Delta_{k}=\displaystyle \sum_{sa_{1} \in _{k}\hat{T}, b_{1}t \in \tilde{T}_{k}} [b_{1}tsa_{1}]((sa_{1})^{\ast})(^{\ast}(b_{1}t))$
\end{center}
Note that $\Delta_{k}$ is cyclically equivalent to the following potential
\begin{center}
$\Delta_{k}'=\displaystyle \sum_{sa_{1},b_{1}t} a_{1}^{\ast}s^{-1}t^{-1}(^{\ast}b_{1})[b_{1}tsa_{1}]$.
\end{center}
Therefore
\begin{align*}
\Delta_{k}' &= \displaystyle \sum_{sa_{1},b_{1}t} \displaystyle \sum_{v,v_{1} \in L(k)} (v_{1}^{-1})^{\ast}(s^{-1}t^{-1})v^{\ast}(ts)a_{1}^{\ast}v_{1}^{-1}(^{\ast}b_{1})[b_{1}va_{1}] \\
&=\displaystyle \sum_{sa_{1},b_{1}} \displaystyle \sum_{v,v_{1},t \in L(k)} (v_{1}^{-1})^{\ast}(s^{-1}t^{-1})v^{\ast}(ts)a_{1}^{\ast}v_{1}^{-1}(^{\ast}b_{1})[b_{1}va_{1}]
\end{align*}
By \cite[Proposition 7.5]{2} we have:
\begin{center}
$\displaystyle \sum_{t \in L(k)} v^{\ast}(ts)(v_{1}^{-1})^{\ast}(s^{-1}t^{-1})=\delta_{v,v_{1}}$
\end{center}
thus
\begin{align*}
\Delta_{k}' &= \displaystyle \sum_{sa_{1},b_{1}} \displaystyle \sum_{v \in L(k)} a_{1}^{\ast} v^{-1} (^{\ast}b_{1}) [b_{1}va_{1}] \\
&= \displaystyle \sum_{sa_{1},b_{1}} \displaystyle \sum_{v,r \in L(k)} r^{\ast}(v^{-1}) a_{1}^{\ast} r (^{\ast} b_{1}) [b_{1}va_{1}]
\end{align*}
Therefore
\begin{center}
$\rho \left(\Delta_{k}'\right)= \displaystyle \sum_{sa_{1},b_{1}} \displaystyle \sum_{v,r \in L(k)} r^{\ast}(v^{-1})[a_{1}^{\ast}r(^{\ast}b_{1})][b_{1}va_{1}]$
\end{center}
\vspace{0.1in}
Let $a \in _{k}T$, $w \in L(k)$ and $b \in T_{k}$ be fixed. Then: \\
\begin{center}
$X_{[a^{\ast}w(^{\ast}b)]}\left(\rho(\Delta_{k}')\right)=\displaystyle \sum_{s,v \in L(k)} w^{\ast}(v^{-1})[bva]$
\end{center}
Note that
\begin{align*}
Y_{[a^{\ast}w(^{\ast}b)]}(\mu_{k}P)&=\rho^{-1}\left(X_{[a^{\ast}w(^{\ast}b)]}(\rho(\mu_{k}P))\right) \\
&=\rho^{-1}\left(X_{[a^{\ast}w(^{\ast}b)]}(\rho(\Delta_{k}'))\right)
\end{align*}
Therefore
\begin{center}
$Y_{[a^{\ast}w(^{\ast}b)]}\left(\mu_{k}P\right)=\displaystyle \sum_{s,v \in L(k)} w^{\ast}(v^{-1})[bva]=c_{k}\displaystyle \sum_{v \in L(k)} w^{\ast}(v^{-1})[bva]$
\end{center}
Consequently
\begin{align*}
\pi_{bt}\overline{\gamma} \xi_{sa}(n) &= \displaystyle \sum_{w \in L(k)} (t^{-1})^{\ast}(sw)Y_{[a^{\ast}w(^{\ast}b)]}\left(\mu_{k}P\right)n \\
&=c_{k} \displaystyle \sum_{v,w \in L(k)} (t^{-1})^{\ast}(sw) w^{\ast}(v^{-1})[bva]n \\
&=c_{k} \displaystyle \sum_{v \in L(k)}(t^{-1})^{\ast}\left(s\displaystyle \sum_{w \in L(k)} w^{\ast}(v^{-1})w\right)[bva]n \\
&=c_{k}\displaystyle \sum_{v \in L(k)} (t^{-1})^{\ast}(sv^{-1})[bva]n
\end{align*}
By \ref{2.3}, $(t^{-1})^{\ast}(sv^{-1})=v^{\ast}(ts)$. Then
\begin{align*}
\pi_{bt}\overline{\gamma} \xi_{sa}(n) &= c_{k} \displaystyle \sum_{v \in L(k)} v^{\ast} (ts) [bva]n \\
&= c_{k} \left[b \displaystyle \sum_{v \in L(k)} v^{\ast}(ts)v a \right]n \\
&=c_{k} [btsa]n \\
&=c_{k} btsan \\
&=c_{k}\pi_{bt} \beta \alpha \xi_{sa}(n)
\end{align*}
As a direct consequence of \ref{5.4}, \ref{5.5}, \ref{5.6}, \ref{5.8} and \ref{5.9} we conclude that
\begin{equation} \label{5.10}
\begin{aligned}
& \operatorname{ker}(\overline{\alpha})=\operatorname{im}(\beta), \operatorname{im}(\overline{\alpha})=\frac{\operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \oplus \operatorname{im}(\gamma) \oplus \{0\} \oplus \{0\}, \\
& \operatorname{ker}(\overline{\beta})=\frac{ \operatorname{ker}(\gamma)}{\operatorname{im}(\beta)} \oplus \{0\} \oplus \{0\} \oplus V_{k}, \operatorname{im}(\overline{\beta})=\operatorname{ker}(\alpha), \\
& \operatorname{ker}(\overline{\gamma})=\operatorname{ker}(\beta \alpha), \operatorname{im}(\overline{\gamma})=\operatorname{im}(\beta \alpha).
\end{aligned}
\end{equation}
Therefore
\begin{center}
$\overline{ \overline{V}}_{k} = \frac{ \operatorname{ker}(\overline{\beta})}{\operatorname{ker}(\overline{\beta}) \cap \operatorname{im}(\overline{\alpha})}=V_{k}$
\end{center}
and hence $\overline{\overline{V}}=V$. \vspace{0.2in}
On the other hand
\begin{center}
$\overline{\overline{N}}_{k}= \frac{ \operatorname{ker}(\overline{\gamma})}{\operatorname{im}(\overline{\beta})} \oplus \operatorname{im}(\overline{\gamma}) \oplus \frac{ \operatorname{ker}(\overline{\alpha})}{\operatorname{im}(\overline{\gamma})} \oplus \overline{V}_{k}$
\end{center}
and by \ref{5.10}
\begin{center}
$\overline{\overline{N}}_{k} = \frac{ \operatorname{ker}(\beta \alpha)}{\operatorname{ker}(\alpha)} \oplus \operatorname{im}(\beta \alpha) \oplus \frac{\operatorname{im}(\beta)}{\operatorname{im}(\beta \alpha)} \oplus \frac{ \operatorname{ker}(\beta)}{\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)}$
\end{center}
\vspace{0.1in}
We now make the following observations:
\begin{enumerate}[(a)]
\item the map $\alpha$ induces an isomorphism $\operatorname{ker}(\beta \alpha)/ \operatorname{ker}(\alpha) \stackrel{\theta_{1}}{\longrightarrow}\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)$;
\item the map $\beta$ induces an isomorphism $\operatorname{im}(\alpha)/( \operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)) \stackrel{\theta_{2}}{\longrightarrow} \operatorname{im}(\beta \alpha)$;
\item the map $\beta$ induces an isomorphism $N_{k}/(\operatorname{ker}(\beta) + \operatorname{im}(\alpha)) \stackrel{\theta_{3}}{\longrightarrow} \operatorname{im}(\beta)/ \operatorname{im}(\beta \alpha)$;
\item there exists an isomorphism $\operatorname{ker}(\beta)/( \operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)) \stackrel{\theta_{4}}{\longrightarrow} (\operatorname{ker}(\beta) + \operatorname{im}(\alpha))/ \operatorname{im}(\alpha)$.
\end{enumerate}
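For instance, isomorphism (a) can be obtained as follows: if $n \in \operatorname{ker}(\beta \alpha)$ then $\alpha(n) \in \operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)$, and conversely every element of $\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)$ has the form $\alpha(n)$ with $n \in \operatorname{ker}(\beta \alpha)$; since the restriction of $\alpha$ to $\operatorname{ker}(\beta \alpha)$ has kernel $\operatorname{ker}(\alpha)$, the map
\begin{center}
$\theta_{1}\left(n + \operatorname{ker}(\alpha)\right)=\alpha(n), \qquad n \in \operatorname{ker}(\beta \alpha)$
\end{center}
is a well-defined isomorphism. The isomorphisms $\theta_{2}$ and $\theta_{3}$ are induced by $\beta$ in the same manner.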
Using these isomorphisms, we can identify $\overline{\overline{N}}_{k}$ with the space \\
\begin{center}
$\overline{\overline{N}}_{k} \stackrel{f}{\longrightarrow} (\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)) \oplus \frac{\operatorname{im}(\alpha)}{\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)} \oplus \frac{N_{k}}{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)} \oplus \frac{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)}{\operatorname{im}(\alpha)}$
\end{center}
via the map
\[f=
\begin{bmatrix}
\theta_{1} & 0 & 0 & 0 \\
0 & \theta_{2}^{-1} & 0 & 0 \\
0 & 0 & \theta_{3}^{-1} & 0 \\
0 & 0 & 0 & \theta_{4}
\end{bmatrix}
\]
We have canonical projections
\begin{align*}
\overline{\pi}_{1}&: \operatorname{ker}(\beta \alpha) \rightarrow \frac{ \operatorname{ker}(\beta \alpha)}{ \operatorname{ker}(\alpha)} \\
\overline{\pi}_{2}&: \operatorname{im}(\beta) \rightarrow \frac{ \operatorname{im}(\beta)}{ \operatorname{im}(\beta \alpha)}\\
\overline{\pi}_{3}&: \operatorname{im}(\alpha) \rightarrow \frac{ \operatorname{im}(\alpha)}{\operatorname{ker}(\beta) \cap \operatorname{im}
(\alpha)} \\
\overline{\pi}_{4}&: N_{k} \rightarrow \frac{N_{k}}{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)} \\
\overline{\pi}_{5}&: \operatorname{ker}(\beta)+\operatorname{im}(\alpha) \rightarrow \frac{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)}{\operatorname{im}(\alpha)}
\end{align*}
and inclusion maps
\begin{align*}
&\overline{i}_{1}: \operatorname{im}(\overline{\gamma}) \rightarrow N_{out} \\
&\overline{i}_{2}: \operatorname{ker}(\beta \alpha) \rightarrow N_{in} \\
&\overline{i}_{3}: \operatorname{im}(\beta \alpha) \rightarrow N_{k}\\
&\overline{j}: \operatorname{ker}(\overline{\alpha}) \rightarrow N_{out} \\
\end{align*}
We now choose splitting data as follows: \\
\begin{enumerate}[(a)]
\item Let $\overline{p}: N_{in} \rightarrow \operatorname{ker}(\beta \alpha)$ be a map of left $D_{k}$-modules such that $\overline{p}\overline{i}_{2} = id_{ \operatorname{ker}(\beta \alpha)}$. \\
\item Let $\overline{\sigma}: \operatorname{im}(\beta)/\operatorname{im}(\beta \alpha) \rightarrow \operatorname{im}(\beta)$ be a map of left $D_{k}$-modules such that $\overline{\pi}_{2} \overline{\sigma}=id_{\operatorname{im}(\beta)/\operatorname{im}(\beta \alpha)}$.
\end{enumerate}
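Such maps $\overline{p}$ and $\overline{\sigma}$ exist since every short exact sequence of left $D_{k}$-modules splits; in particular, these choices yield decompositions of left $D_{k}$-modules
\begin{center}
$N_{in}=\operatorname{ker}(\beta \alpha) \oplus \operatorname{ker}(\overline{p}), \qquad \operatorname{im}(\beta)=\operatorname{im}(\beta \alpha) \oplus \operatorname{im}(\overline{\sigma})$
\end{center}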
\vspace{0.1in}
For each $a \in _{k}T$, there exists an $F$-linear map \\
\begin{center}
$\overline{\overline{N}}(a): N_{\tau(a)} \rightarrow (\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)) \oplus \frac{\operatorname{im}(\alpha)}{\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)} \oplus \frac{N_{k}}{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)} \oplus \frac{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)}{\operatorname{im}(\alpha)}$
\end{center}
such that
\begin{align*}
&\overline{\Pi}_{1} \overline{\overline{N}}(a)=-\theta_{1}\overline{\pi}_{1} \overline{p} \xi_{e_{k}a} \\
&\overline{\Pi}_{2} \overline{\overline{N}}(a)=-\theta_{2}^{-1} \overline{\gamma}' \xi_{e_{k}a}\\
&\overline{\Pi}_{3} \overline{\overline{N}}(a) = \overline{\Pi}_{4} \overline{\overline{N}}(a) = 0
\end{align*}
Let $n \in N_{\tau(a)}$. Then
\begin{align*}
-\theta_{1} \overline{\pi}_{1} \overline{p} \xi_{e_{k}a}(n) &= -\theta_{1}\left(\overline{p}\xi_{e_{k}a}(n)+\operatorname{ker}(\alpha)\right)=-\alpha \overline{p} \xi_{e_{k}a}(n) \\
-\theta_{2}^{-1}(\overline{\gamma}'(\xi_{e_{k}a}(n)))&=-\theta_{2}^{-1}(c_{k}\beta \alpha \xi_{e_{k}a}(n))=-c_{k}\overline{\pi}_{3} \alpha \xi_{e_{k}a}(n)
\end{align*}
\vspace{0.2in}
Thus the map $\overline{\overline{N}}(a)$ takes the following form:
\begin{equation} \label{5.11}
\begin{split}
\overline{\Pi}_{1} \overline{\overline{N}}(a)&=-\alpha \overline{p} \xi_{e_{k}a} \\
\overline{\Pi}_{2} \overline{\overline{N}}(a)&= -c_{k}\overline{\pi}_{3} \alpha \xi_{e_{k}a} \\
\overline{\Pi}_{3} \overline{\overline{N}}(a)&=\overline{\Pi}_{4} \overline{\overline{N}}(a)=0
\end{split}
\end{equation}
\vspace{0.2in}
Similarly, for each $b \in T_{k}$, there exists an $F$-linear map
\begin{center}
$\overline{\overline{N}}(b): (\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)) \oplus \frac{\operatorname{im}(\alpha)}{\operatorname{ker}(\beta) \cap \operatorname{im}(\alpha)} \oplus \frac{N_{k}}{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)} \oplus \frac{\operatorname{ker}(\beta) + \operatorname{im}(\alpha)}{\operatorname{im}(\alpha)} \longrightarrow N_{\sigma(b)}$
\end{center}
\vspace{0.2in}
given by
\begin{align*}
\overline{\overline{N}}(b)\overline{J}_{1}&=\overline{\overline{N}}(b)\overline{J}_{4}=0 \\
\overline{\overline{N}}(b)\overline{J}_{2}&=c_{k}^{-1} \pi_{be_{k}}\overline{i}_{3} \theta_{2} \\
\overline{\overline{N}}(b)\overline{J}_{3}&=c_{k}^{-1} \pi_{be_{k}} \overline{j} \overline{\sigma} \theta_{3}
\end{align*}
To complete the proof of Theorem \ref{theo2}, it remains to construct an isomorphism of $F$-vector spaces $\psi: \overline{\overline{N}}_{k} \rightarrow N_{k}$ such that
\begin{align*}
\psi \overline{\overline{N}}(a)&=\alpha \xi_{e_{k}a} \\
\pi_{be_{k}}\beta \psi &= \overline{\overline{N}}(b)
\end{align*}
for every $a \in _{k}T$ and $b \in T_{k}$. Choose some sections
\begin{align*}
\sigma_{1}&: \operatorname{im} \alpha/( \operatorname{ker}\beta \cap \operatorname{im} \alpha) \rightarrow \operatorname{im}\alpha \\
\sigma_{2}&: (\operatorname{ker} \beta + \operatorname{im} \alpha)/ \operatorname{im}(\alpha) \rightarrow \operatorname{ker} \beta + \operatorname{im} \alpha \\
\sigma_{3}&: N_{k}/(\operatorname{ker} \beta + \operatorname{im} \alpha) \rightarrow N_{k}
\end{align*}
so that they satisfy
\begin{center}
$\operatorname{im}(\sigma_{1})=\alpha(\operatorname{ker} \overline{p})$, $\operatorname{im}(\sigma_{2}) \subseteq \operatorname{ker}(\beta)$, $\operatorname{im}(\beta \sigma_{3})=\operatorname{im}(\overline{\sigma})$
\end{center}
\vspace{0.1in}
\begin{comment}
For example, to see how to obtain $\operatorname{im}(\sigma_{1})=\alpha(\operatorname{ker} \overline{p})$ we may argue as follows. Since $\overline{p}: N_{in} \rightarrow \operatorname{ker}(\beta \alpha)$ is a retraction, then $N_{in}=\operatorname{ker}(\beta \alpha) \oplus \operatorname{ker}(\overline{p})$. Let $w \in N_{in}$; then $w=x_{1}+x_{2}$, uniquely, with $x_{1} \in \operatorname{ker}(\beta \alpha)$ and $x_{2} \in \operatorname{ker}(\overline{p})$. Note then that $\alpha(x_{1}) \in \operatorname{ker}\beta \cap \operatorname{im} \alpha$. Hence:
\begin{align*}
\sigma_{1}(\alpha(w)+\operatorname{ker}\beta \cap \operatorname{im} \alpha)&=\sigma_{1}(\alpha(x_{1})+\alpha(x_{2}) + \operatorname{ker}\beta \cap \operatorname{im} \alpha) \\
&=\sigma_{1}(\alpha(x_{2})+\operatorname{ker}\beta \cap \operatorname{im} \alpha)
\end{align*}
so it suffices to define $\sigma_{1}$ by $\sigma_{1}(\alpha(w)+\operatorname{ker}\beta \cap \operatorname{im} \alpha)=\alpha(x_{2})$. \\
\end{comment}
Define $\psi: \overline{\overline{N}}_{k} \rightarrow N_{k}$ as
\[\psi=
\begin{bmatrix}
-i_{1} & -c_{k}^{-1}i_{2}\sigma_{1} & -c_{k}^{-1}\sigma_{3} & -i_{3}\sigma_{2}
\end{bmatrix}
\]
where
\begin{align*}
&i_{1}: \operatorname{ker}(\beta) \cap \operatorname{im}(\alpha) \rightarrow N_{k} \\
&i_{2}: \operatorname{im}(\alpha) \rightarrow N_{k} \\
&i_{3}: \operatorname{ker}(\beta)+\operatorname{im}(\alpha) \rightarrow N_{k}
\end{align*}
are the inclusion maps. We now show that
\begin{equation} \label{5.12}
\psi \overline{\overline{N}}(a)=\alpha \xi_{e_{k}a}
\end{equation}
Since $\overline{p}: N_{in} \rightarrow \operatorname{ker}(\beta \alpha)$ is a retraction, $N_{in}=\operatorname{ker}(\overline{p}) \oplus \operatorname{ker}(\beta \alpha).$ On the other hand, since $\operatorname{ker}(\alpha) \subseteq \operatorname{ker}(\beta \alpha)$, there exists a submodule $L \subseteq \operatorname{ker}(\beta \alpha)$ such that $\operatorname{ker}(\alpha) \oplus L = \operatorname{ker}(\beta \alpha)$. Therefore $N_{in}=\operatorname{ker}(\overline{p}) \oplus \operatorname{ker}(\alpha) \oplus L$.
Let $n \in N_{\tau(a)}$; then $\xi_{e_{k}a}(n) \in N_{in}$. Thus, $\xi_{e_{k}a}(n)=n_{1}+n_{2}+n_{3}$ for some uniquely determined $n_{1} \in \operatorname{ker}(\overline{p})$, $n_{2} \in \operatorname{ker}(\alpha)$ and $n_{3} \in L$. Therefore:
\begin{center}
$\alpha \xi_{e_{k}a}(n)=\alpha(n_{1}+n_{2}+n_{3})=\alpha(n_{1}+n_{3})$
\end{center}
On the other hand:
\begin{align*}
\left(\psi \overline{\overline{N}}(a)\right)(n)&= \psi \left( - \alpha \overline{p} \xi_{e_{k}a}(n), -c_{k}\overline{\pi}_{3} \alpha \xi_{e_{k}a}(n),0,0\right) \\
&=\alpha \overline{p}(n_{1}+n_{2}+n_{3})+c_{k}^{-1}c_{k}\sigma_{1} \overline{\pi}_{3} \alpha(n_{1}+n_{3}) \\
&=\alpha \overline{p}(n_{2}+n_{3})+\alpha(n_{1}) \\
&=\alpha(n_{2}+n_{3})+\alpha(n_{1}) \\
&=\alpha(n_{1}+n_{3}) \\
&=\alpha \xi_{e_{k}a}(n)
\end{align*}
and the proof of \ref{5.12} is now complete. \\
We now verify that \\
\begin{equation} \label{5.13}
\pi_{be_{k}}\beta \psi = \overline{\overline{N}}(b)
\end{equation}
\vspace{0.1in}
Let $n_{1} \in \operatorname{ker}(\beta) \cap \operatorname{im}(\alpha), n_{2} \in \operatorname{im}(\alpha), n_{3} \in N_{k}, n_{4} \in \operatorname{ker}(\beta)+\operatorname{im}(\alpha)$. Then
\begin{align*}
\overline{\overline{N}}(b)\left(n_{1},\overline{\pi}_{3}(n_{2}), \overline{\pi}_{4}(n_{3}), \overline{\pi}_{5}(n_{4})\right)&=c_{k}^{-1}\pi_{be_{k}}\overline{i}_{3}\theta_{2}\overline{\pi}_{3}(n_{2})+c_{k}^{-1}\pi_{be_{k}}\overline{j}\overline{\sigma}\theta_{3}\overline{\pi}_{4}(n_{3}) \\
&=c_{k}^{-1}b \cdot n_{2} + c_{k}^{-1}b \cdot n_{3} \\
&=c_{k}^{-1}\psi_{0}^{-1}(b)n_{2}+c_{k}^{-1}\psi_{0}^{-1}(b)n_{3} \\
&=-c_{k}^{-1}bn_{2}-c_{k}^{-1}bn_{3}
\end{align*}
On the other hand \\
\begin{align*}
\pi_{be_{k}} \beta \psi\left(n_{1},\overline{\pi}_{3}(n_{2}), \overline{\pi}_{4}(n_{3}), \overline{\pi}_{5}(n_{4})\right) &= \pi_{be_{k}} \beta \left(-n_{1}-c_{k}^{-1}i_{2}\sigma_{1} \overline{\pi}_{3}(n_{2})-c_{k}^{-1}\sigma_{3} \overline{\pi}_{4}(n_{3})-i_{3}\sigma_{2}\overline{\pi}_{5}(n_{4})\right)
\end{align*}
\vspace{0.1in}
Since $n_{1} \in \operatorname{ker}(\beta)$ and $\operatorname{im}(\sigma_{2}) \subseteq \operatorname{ker}(\beta)$ we see that the above expression is equal to \\
\begin{align*}
\pi_{be_{k}} \beta(-c_{k}^{-1}i_{2}\sigma_{1}\overline{\pi}_{3}(n_{2})-c_{k}^{-1}\sigma_{3}\overline{\pi}_{4}(n_{3})) &=\pi_{be_{k}}\beta(-c_{k}^{-1}n_{2}-c_{k}^{-1}n_{3}) \\
&=-c_{k}^{-1}bn_{2}-c_{k}^{-1}bn_{3}
\end{align*}
and \ref{5.13} follows. Finally, we extend $\psi$ to an isomorphism of $F$-vector spaces $\psi': \overline{\overline{N}} \rightarrow N$ by letting $\psi'$ act as the identity map on $\displaystyle \bigoplus_{i \neq k} \overline{\overline{N}}_{i}$.
\end{proof}
\section{Nearly Morita equivalence} \label{sec6}
Throughout this section, $\mathcal{F}_{S}(M)/R(P)$-$\operatorname{mod}_{k}$ denotes the category of finite dimensional left $\mathcal{F}_{S}(M)/R(P)$-modules modulo the ideal of morphisms which factor through direct sums of the left $\mathcal{F}_{S}(M)$-simple module at $k$. \\
In this section we prove that the categories $\mathcal{F}_{S}(M)/R(P)$-$\operatorname{mod}_{k}$ and $\mathcal{F}_{S}(\overline{\mu}_{k}M)/R(\overline{\mu}_{k}P)$-$\operatorname{mod}_{k}$ are equivalent, where $(\mathcal{F}_{S}(\overline{\mu}_{k}M),\overline{\mu}_{k}P)$ denotes the mutation at $k$ in the sense of \cite[p.56]{2}. \\
For each finite dimensional left $\mathcal{F}_{S}(M)/R(P)$-module, we choose splitting data $(p_{N},(\sigma_{2})_{N})$. Let $u: N \rightarrow N^{1}$ be a morphism of left $\mathcal{F}_{S}(M)/R(P)$-modules. Then $u$ induces $D_{k}$-linear maps:
\begin{align*}
u_{out}&:N_{out} \rightarrow N_{out}^{1} \\
u_{in}&: N_{in} \rightarrow N_{in}^{1}
\end{align*}
such that for each $a,a_{1} \in _{k}T, r,r_{1} \in L(k)$
\begin{center}
$\pi_{r_{1}a_{1}}^{1}u_{in}\xi_{ra}=\delta_{r_{1}a_{1},ra}u$
\end{center}
and for each $b,b_{1} \in T_{k}, r,r_{1} \in L(k)$
\begin{center}
$\pi_{b_{1}r_{1}}^{1}u_{out}\xi_{br}=\delta_{b_{1}r_{1},br}u$
\end{center}
We also have $D_{k}$-linear maps
\begin{align*}
& \alpha: N_{in} \rightarrow N_{k}; \alpha_{1}: N_{in}^{1} \rightarrow N_{k}^{1} \\
& \beta: N_{k} \rightarrow N_{out}; \beta_{1}: N_{k}^{1} \rightarrow N_{out}^{1} \\
& \gamma: N_{out} \rightarrow N_{in}; \gamma_{1}: N_{out}^{1} \rightarrow N_{in}^{1}
\end{align*}
Then we have the following equalities
\begin{align*}
& u_{k}\alpha=\alpha_{1}u_{in}; u_{out}\beta=\beta_{1}u_{k} \\
& u_{in}\gamma=\gamma_{1}u_{out}
\end{align*}
The map $u_{out}$ induces $D_{k}$-linear maps
\begin{align*}
&u_{1}: \operatorname{ker}(\gamma) \rightarrow \operatorname{ker}(\gamma_{1}) \\
&u_{4}: \operatorname{im}(\beta) \rightarrow \operatorname{im}(\beta_{1})
\end{align*}
The map $u_{in}$ induces $D_{k}$-linear maps
\begin{align*}
&u_{2}: \operatorname{im}(\gamma) \rightarrow \operatorname{im}(\gamma_{1}) \\
&u_{3}: \operatorname{ker}(\alpha) \rightarrow \operatorname{ker}(\alpha_{1})
\end{align*}
The map $u_{1}$ induces a $D_{k}$-linear map
\begin{center}
$\underline{u}_{1}: \operatorname{ker}(\gamma)/\operatorname{im}(\beta) \rightarrow \operatorname{ker}(\gamma_{1})/\operatorname{im}(\beta_{1})$
\end{center}
\vspace{0.1in}
likewise, the map $u_{3}$ induces a $D_{k}$-linear map
\begin{center}
$\underline{u}_{3}: \operatorname{ker}(\alpha)/\operatorname{im}(\gamma) \rightarrow \operatorname{ker}(\alpha_{1})/\operatorname{im}(\gamma_{1})$
\end{center}
so we have a $D_{k}$-linear map
\begin{center}
$h=\underline{u}_{1} \oplus u_{2} \oplus \underline{u}_{3}: \overline{N}_{k} \rightarrow \overline{N^{1}}_{k}$
\end{center}
Then we define a linear map of left $S$-modules:
\begin{center}
$\overline{u}: \overline{N} \rightarrow \overline{N^{1}}$
\end{center}
as $\overline{u}_{i}=u_{i}$ for every $i \neq k$ and $\overline{u}_{k}=h$.
\begin{definition} Following \cite{7} we say that $u \in \operatorname{Hom}_{S}(_{S}L_{1},_{S}L_{2})$ is confined to $k$ if $u_{i}: e_{i}L_{1} \rightarrow e_{i}L_{2}$ is the zero map for all $i \neq k$. Note that a map of left $\mathcal{F}_{S}(M)$-modules $u: L_{1} \rightarrow L_{2}$ factors through a direct sum of the left $\mathcal{F}_{S}(M)$-simple module at $k$ if and only if $u$ is confined to $k$.
\end{definition}
\begin{lemma} Let $u: N \rightarrow N^{1}$ be a map of finite dimensional left $\mathcal{F}_{S}(M)/R(P)$-modules. Then there exists a map of left $S$-modules $\overline{\rho(u)}: \overline{N} \rightarrow \overline{N^{1}}$, confined to $k$, and such that $\overline{u}+\overline{\rho(u)}$ is a map of left $\mathcal{F}_{S}(\mu_{k}M)$-modules.
\end{lemma}
\begin{proof} Let $(p=p_{N},\sigma_{2}=(\sigma_{2})_{N})$ and $(p_{1},\sigma_{2,1})$ be the splitting data for $N$ and $N^{1}$, respectively. Then we have the following commutative diagram with exact rows:
\begin{center}
\begin{tabular}{c}
\xymatrix{
0 \ar[r] & \operatorname{im}(\gamma) \ar[r] \ar[d]^{u_{2}}& \operatorname{ker}(\alpha) \ar[r]^(.35){\pi_{2}} \ar[d]^{u_{3}} & \operatorname{ker}(\alpha)/ \operatorname{im}(\gamma) \ar[r] \ar[d]^{\underline{u}_{3}} & 0 \\
0 \ar[r] & \operatorname{im}(\gamma_{1}) \ar[r] & \operatorname{ker}(\alpha_{1}) \ar[r]^(.35){\pi^{1}_{2}} & \operatorname{ker}(\alpha_{1})/ \operatorname{im}(\gamma_{1}) \ar[r] & 0
}
\end{tabular}
\end{center}
thus $\pi_{2}^{1}(u_{3}\sigma_{2}-\sigma_{2,1}\underline{u}_{3})=\underline{u}_{3}\pi_{2}\sigma_{2}-\underline{u}_{3}=\underline{u}_{3}-\underline{u}_{3}=0$. Hence there exists a linear map of left $S$-modules:
\begin{center}
$\rho_{1}=u_{3}\sigma_{2}-\sigma_{2,1}\underline{u}_{3}: \operatorname{ker}(\alpha)/\operatorname{im}(\gamma) \rightarrow \operatorname{im}(\gamma_{1})$.
\end{center}
Similarly, we have a commutative diagram with exact rows:
\begin{center}
\begin{tabular}{c}
\xymatrix{
0 \ar[r] & \operatorname{ker}(\gamma) \ar[r]^{j} \ar[d]^{u_{1}}& N_{out} \ar[r]^{\gamma} \ar[d]^{u_{out}} & \operatorname{im}(\gamma) \ar[r] \ar[d]^{u_{2}} & 0 \\
0 \ar[r] & \operatorname{ker}(\gamma_{1}) \ar[r]^{j_{1}} & N^{1}_{out} \ar[r]^{\gamma_{1}} & \operatorname{im}(\gamma_{1}) \ar[r] & 0
}
\end{tabular}
\end{center}
hence $(u_{1}p-p_{1}u_{out})j=u_{1}pj-p_{1}u_{out}j=u_{1}-p_{1}j_{1}u_{1}=u_{1}-u_{1}=0$. \\
Therefore, there exists a linear map of left $S$-modules:
\begin{center}
$\rho_{2}: \operatorname{im}(\gamma) \rightarrow \operatorname{ker}(\gamma_{1})$
\end{center}
such that $\rho_{2}\gamma=p_{1}u_{out}-u_{1}p$. \\
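For completeness, $\rho_{2}$ admits the following explicit description (a routine verification). Since $(p_{1}u_{out}-u_{1}p)j=0$, the map $p_{1}u_{out}-u_{1}p$ vanishes on $\operatorname{ker}(\gamma)$ and therefore descends to the quotient $N_{out}/\operatorname{ker}(\gamma) \cong \operatorname{im}(\gamma)$; that is, one may set
\begin{center}
$\rho_{2}(\gamma(n))=(p_{1}u_{out}-u_{1}p)(n)$ for $n \in N_{out}$,
\end{center}
which is well-defined because $\gamma(n)=\gamma(n')$ implies $n-n' \in \operatorname{ker}(\gamma)$, whence $(p_{1}u_{out}-u_{1}p)(n-n')=0$. Note that both $p_{1}u_{out}(n)$ and $u_{1}p(n)$ lie in $\operatorname{ker}(\gamma_{1})$, so $\rho_{2}$ indeed takes values in $\operatorname{ker}(\gamma_{1})$.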
Define $\overline{\rho(u)}: \overline{N} \rightarrow \overline{N^{1}}$ as follows: $\overline{\rho(u)}_{i}=0$ for all $i \neq k$ and $\overline{\rho(u)}_{k}: \overline{N}_{k} \rightarrow \overline{N^{1}}_{k}$ is the following linear map of left $S$-modules:
\[
\begin{bmatrix}
0 & \pi_{1}^{1}\rho_{2} & 0 \\
0 & 0 & \rho_{1} \\
0 & 0 & 0
\end{bmatrix}
\]
Let us show that the map $\overline{u}+\overline{\rho(u)}$ is in fact a map of left $\mathcal{F}_{S}(\mu_{k}M)$-modules. \\
Let $n \in N_{\sigma(b)}$. On one hand \\
\begin{align*}
\left(\overline{u}+\overline{\rho(u)}\right)\left(\overline{N}(^{\ast}b)(n)\right)&=\left(-\underline{u}_{1}\pi_{1}p\xi_{be_{k}}(n)-\pi_{1}^{1}\rho_{2}\gamma \xi_{be_{k}}(n),-u_{2}\gamma \xi_{be_{k}}(n),0\right) \\
&=(-\pi_{1}^{1}u_{1}p\xi_{be_{k}}(n)-\pi_{1}^{1}\rho_{2}\gamma \xi_{be_{k}}(n), -u_{2}\gamma \xi_{be_{k}}(n),0)
\end{align*}
On the other hand \\
\begin{align*}
\overline{N^{1}}(^{\ast}b)(u_{\sigma(b)}(n))&=(-\pi_{1}^{1}p_{1}\xi_{be_{k}}(u_{\sigma(b)}(n)),-\gamma_{1}\xi_{be_{k}}(u_{\sigma(b)}(n)),0) \\
&=(-\pi_{1}^{1}p_{1}u_{out}(\xi_{be_{k}}(n)),-\gamma_{1}u_{out}(\xi_{be_{k}}(n)),0) \\
&=(-\pi_{1}^{1}u_{1}p\xi_{be_{k}}(n)-\pi_{1}^{1}\rho_{2}\gamma\xi_{be_{k}}(n),-u_{2}\gamma\xi_{be_{k}}(n),0)
\end{align*}
Therefore
\begin{center}
$\left(\overline{u}+\overline{\rho(u)}\right)\overline{N}(^{\ast}b)=\overline{N^{1}}(^{\ast}b)u_{\sigma(b)}$
\end{center}
Now for each $a \in T_{k}$, $x \in \operatorname{ker}(\gamma)/\operatorname{im}(\beta)$, $y \in \operatorname{im}(\gamma)$ and $z \in \operatorname{ker}(\alpha)/\operatorname{im}(\gamma)$ we have \\
\begin{align*}
u_{\tau(a)}\overline{N}(a^{\ast})(x,y,z)&=c_{k}^{-1}u_{\tau(a)}(\pi_{e_{k}a}i(y)+\pi_{e_{k}a}j'\sigma_{2}(z)) \\
&=c_{k}^{-1}(\pi^{1}_{e_{k}a}i_{1}u_{2}(y)+\pi^{1}_{e_{k}a}j_{1}'u_{3}\sigma_{2}(z))
\end{align*}
On the other hand
\begin{align*}
\overline{N^{1}}(a^{\ast})\left(\overline{u}+\overline{\rho(u)}\right)(x,y,z)&=\overline{N^{1}}(a^{\ast})\left((\overline{u}_{1}(x),u_{2}(y),\overline{u}_{3}(z))+(\pi_{1}^{1}\rho_{2}(y),\rho_{1}(z),0)\right) \\
&=c_{k}^{-1}\left(\pi^{1}_{e_{k}a}i_{1}u_{2}(y)+\pi^{1}_{e_{k}a}j_{1}'\sigma_{2,1}\underline{u}_{3}(z)+\pi^{1}_{e_{k}a}i_{1}\rho_{1}(z)\right) \\
&=c_{k}^{-1}\left(\pi^{1}_{e_{k}a}i_{1}u_{2}(y)+\pi^{1}_{e_{k}a}j_{1}'(\sigma_{2,1}\underline{u}_{3}(z)+\rho_{1}(z))\right) \\
&=c_{k}^{-1}\left(\pi^{1}_{e_{k}a}i_{1}u_{2}(y)+\pi^{1}_{e_{k}a}j_{1}'u_{3}\sigma_{2}(z)\right)
\end{align*}
thus $u_{\tau(a)}\overline{N}(a^{\ast})=\overline{N^{1}}(a^{\ast})\left(\overline{u}+\overline{\rho(u)}\right)$, as was to be shown.
\end{proof}
\begin{prop} \label{prop8} There exists a faithful functor $\widetilde{\mu}_{k}: \mathcal{F}_{S}(M)/R(P)-\operatorname{mod}_{k} \rightarrow \mathcal{F}_{S}(\mu_{k}M)/R(\mu_{k}P)-\operatorname{mod}_{k}$
\end{prop}
\begin{proof} First we define a functor $G: \mathcal{F}_{S}(M)/R(P)-\operatorname{mod} \rightarrow \mathcal{F}_{S}(\mu_{k}M)/R(\mu_{k}P)-\operatorname{mod}_{k}$ as $G(N)=\overline{N}$ and given a map of left $\mathcal{F}_{S}(M)/R(P)$-modules $u: N \rightarrow N^{1}$, we define:
\begin{center}
$G(u)=\underline{\overline{u}+\overline{\rho(u)}}: \overline{N} \rightarrow \overline{N^{1}}$
\end{center}
On the other hand, if $v: N^{1} \rightarrow N^{2}$ is a map of left $\mathcal{F}_{S}(M)/R(P)$-modules then $G(vu)=\underline{\overline{vu}+\overline{\rho(vu)}}$. Since $\rho$ is confined to $k$ and $\overline{vu}=\overline{v} \ \overline{u}$ one sees that $G(vu)=G(v)G(u)$ so that $G$ preserves composition. Since $\overline{\rho(id_{N})}=0$ then $G(id_{N})=id_{\overline{N}}$ so that $G$ is indeed a covariant (additive) functor. \\
Finally, note that $G(u)=0$ if and only if $\overline{u}+\overline{\rho(u)}$ is confined to $k$, which happens if and only if $\overline{u}$ is confined to $k$, and the latter happens if and only if $u$ is confined to $k$, as was to be shown.
\end{proof}
Let $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M_{1})$ be an algebra isomorphism such that $\varphi_{|S}=id_{S}$. Let $P$ be a potential in $\mathcal{F}_{S}(M)$. \\
Throughout the rest of this section, $J(M,P)$ will denote the quotient algebra $\mathcal{F}_{S}(M)/R(P)$. \\
The isomorphism $\varphi$ induces a functor \\
\begin{center}
$H_{\varphi}: J(M,P)-\operatorname{mod} \rightarrow J(M_{1},\varphi(P))-\operatorname{mod}$
\end{center}
\vspace{0.1in}
given as follows. On objects, $H_{\varphi}(N)=^{\varphi}N$; that is, $H_{\varphi}(N)=N$ as $S$-left modules, and given $n \in N$ and $z \in \mathcal{F}_{S}(M_{1})$, $z \cdot n=\varphi^{-1}(z)n$. Clearly, $\operatorname{Hom}_{J(M,P)}(N,N^{1})=\operatorname{Hom}_{J(M_{1},\varphi(P))}(^{\varphi}N,^{\varphi}N^{1})$. Therefore, we let $H_{\varphi}(u)=u$. This gives an equivalence of categories
\begin{equation} \label{6.1}
H_{\varphi}: J(M,P)-\operatorname{mod} \rightarrow J(M_{1},\varphi(P))-\operatorname{mod}
\end{equation}
Now suppose that $M=M_{1} \oplus M_{2}$, and $Q=Q_{1}+W$ where $Q_{1}$ is a reduced potential in $\mathcal{F}_{S}(M_{1})$ and $W$ is a trivial potential in $\mathcal{F}_{S}(M_{2})$. Then the restriction functor \\
\begin{equation} \label{6.2}
\operatorname{res}: J(M,Q)-\operatorname{mod} \rightarrow J(M_{1},Q_{1})-\operatorname{mod}
\end{equation}
also yields an equivalence of categories. \\
On the other hand, by \cite[Theorem 8.17]{2} there exists a right-equivalence \\
\begin{center}
$\psi: \mathcal{F}_{S}(\mu_{k}^2M) \rightarrow \mathcal{F}_{S}(M \oplus M')$
\end{center}
such that $\psi(\mu_{k}^2P)$ is cyclically equivalent to $P+W$ where $W$ is a trivial potential in $\mathcal{F}_{S}(M')$.
Thus, using \ref{6.1} and \ref{6.2}, we obtain equivalences of categories:
\begin{align}
H_{\psi}: J(\mu_{k}^2M,\mu_{k}^2P)-\operatorname{mod} \rightarrow J(M \oplus M',P+W)-\operatorname{mod} \label{6.3} \\
\operatorname{res}: J(M \oplus M',P+W)-\operatorname{mod} \rightarrow J(M,P)-\operatorname{mod} \label{6.4}
\end{align}
\vspace{0.1in}
Composing the above functors yields an equivalence of categories \\
\begin{equation} \label{6.5}
G(\psi)=\operatorname{res}H_{\psi}: J(\mu_{k}^2M,\mu_{k}^2P)-\operatorname{mod} \rightarrow J(M,P)-\operatorname{mod}
\end{equation}
In what follows, given $N \in J(M,P)-\operatorname{mod}$, we will denote by $_{S}N$ the $S$-left module underlying $N$. In particular, $_{S}G(\psi)(N)=_{S}N$.
\begin{lemma} \label{lem11} Let $\mathcal{A},\mathcal{B}$ be $F$-categories and let $C: \mathcal{A} \rightarrow \mathcal{B}$, $D: \mathcal{B} \rightarrow \mathcal{A}$ be $F$-functors such that $D$ is faithful and there exists a natural isomorphism $id_{\mathcal{A}} \cong DC$. Then $C$ is fully faithful. Moreover, if $D$ is full, then $id_{\mathcal{B}} \cong CD$.
\end{lemma}
\begin{proof} For each $X \in \operatorname{Ob}(\mathcal{A})$ there exists an isomorphism
\begin{center}
$\phi_{X}: X \rightarrow DC(X)$
\end{center}
Now let $u \in \operatorname{Hom}_{\mathcal{A}}(X,X_{1})$. By naturality, we have a commutative diagram
\begin{center}
\begin{equation*}
\xymatrix{%
X \ar[r]^{\phi_{X}} \ar[d]^{u} & DC(X) \ar[d]^{DC(u)} \\
X_{1} \ar[r]^{\phi_{X_{1}}} & DC(X_{1})}
\end{equation*}
\end{center}
Thus $\phi_{X_{1}}u=DC(u)\phi_{X}$. Therefore if $C(u)=0$, then $u=0$. This shows that $C$ is faithful.
Now let $h \in \operatorname{Hom}_{\mathcal{B}}(C(X),C(X_{1}))$. Let
\begin{center}
$\lambda=\phi_{X_{1}}^{-1}D(h)\phi_{X}: X \rightarrow X_{1}$
\end{center}
Since the following diagram commutes
\begin{center}
\begin{equation*}
\xymatrix{%
X \ar[r]^{\phi_{X}} \ar[d]^{\lambda} & DC(X) \ar[d]^{DC(\lambda)} \\
X_{1} \ar[r]^{\phi_{X_{1}}} & DC(X_{1})}
\end{equation*}
\end{center}
then $\lambda=\phi_{X_{1}}^{-1}DC(\lambda)\phi_{X}$. Therefore $D(h)=DC(\lambda)$ and thus $h=C(\lambda)$. It follows that $C$ is full.
Now suppose that $D$ is full, then for each $Y \in \operatorname{Ob}(\mathcal{B})$ there exists an isomorphism $\phi_{D(Y)}: D(Y) \rightarrow DCD(Y)$. Since $D$ is full, $\phi_{D(Y)}=D(\psi_{Y})$ for some $\psi_{Y} \in \operatorname{Hom}_{\mathcal{B}}(Y,CD(Y))$. Let us show that $\psi_{Y}$ is natural. Let $f \in \operatorname{Hom}_{\mathcal{B}}(Y,Y_{1})$. We have to prove that the following diagram commutes
\begin{center}
\begin{equation} \label{6.6}
\xymatrix{%
Y \ar[r]^{\psi_{Y}} \ar[d]^{f} & CD(Y) \ar[d]^{CD(f)} \\
Y_{1} \ar[r]^{\psi_{Y_{1}}} & CD(Y_{1})}
\end{equation}
\end{center}
Consider the following commutative diagram
\begin{center}
\begin{equation*}
\xymatrix{%
D(Y) \ar[r]^{\phi_{D(Y)}} \ar[d]^{D(f)} & DCD(Y) \ar[d]^{DCD(f)} \\
D(Y_{1}) \ar[r]^{\phi_{D(Y_{1})}} & DCD(Y_{1})}
\end{equation*}
\end{center}
thus $DCD(f)\phi_{D(Y)}=\phi_{D(Y_{1})}D(f)$. This implies that $DCD(f)D(\psi_{Y})=D(\psi_{Y_{1}})D(f)$. Because $D$ is faithful, it follows that $CD(f)\psi_{Y}=\psi_{Y_{1}}f$, so diagram \ref{6.6} commutes, as was to be shown.
\end{proof}
By \ref{6.5} there exists an equivalence of categories \\
\begin{center}
$G(\psi)=\operatorname{res}H_{\psi}: J(\mu_{k}^2M,\mu_{k}^2P)-\operatorname{mod} \rightarrow J(M,P)-\operatorname{mod}$
\end{center}
and this functor descends to a functor $G(\psi)_{k}$ in the quotient category $J(M,P)-\operatorname{mod}_{k}$ which is the category $J(M,P)-\operatorname{mod}$, modulo the ideal of morphisms which factor through direct sums of the simple module at $k$. Thus, we have a functor \\
\begin{center}
$G(\psi)_{k}: J(\mu_{k}^{2}M,\mu_{k}^{2}P)-\operatorname{mod}_{k} \rightarrow J(M,P)-\operatorname{mod}_{k}$
\end{center}
\begin{prop} \label{prop9} There exists a natural isomorphism of functors $id_{J(M,P)-\operatorname{mod}_{k}} \cong G(\psi)_{k}\tilde{\mu}_{k}^{2}$.
\end{prop}
\begin{proof} Let $u \in \operatorname{Hom}_{J(M,P)-\operatorname{mod}_{k}}(N,N^{1})$. Recalling the proof of Proposition \ref{prop8}, we have
\begin{align*}
\tilde{\mu}_{k}(u)&=\underline{\overline{u}+\overline{\rho(u)}}=u_{1}: \overline{N} \rightarrow \overline{N^{1}} \\
\tilde{\mu}_{k}^{2}(u)=\tilde{\mu}_{k}(u_{1})&=\underline{\overline{u_{1}}+\overline{\rho(u_{1})}}=u_{2}: \overline{\overline{N}} \rightarrow \overline{\overline{N^{1}}}
\end{align*}
Using the notation introduced in the proof of Theorem \ref{theo2}, we have isomorphisms of $J(M,P)$-left modules
\begin{align*}
&\psi': G(\psi)_{k}\tilde{\mu}_{k}^{2}N \rightarrow N \\
&\psi_{1}': G(\psi)_{k}\tilde{\mu}_{k}^{2} N^{1} \rightarrow N^{1}
\end{align*}
It remains to show that the following diagram commutes in $J(M,P)-\operatorname{mod}_{k}$.
\begin{center}
\begin{equation} \label{6.7}
\xymatrix{%
G(\psi)_{k}\tilde{\mu}_{k}^{2}N \ar[r]^(.65){\psi'} \ar[d]^{u_{2}} & N \ar[d]^{u} \\
G(\psi)_{k}\tilde{\mu}_{k}^{2}N^{1} \ar[r]^(.65){\psi_{1}'} & N^{1}}
\end{equation}
\end{center}
but this is true since $u\psi'-\psi_{1}'u_{2}$ is confined to $k$. This completes the proof.
\end{proof}
\begin{prop} \label{prop10} There exists a natural isomorphism of functors \\
\begin{center}
$\tilde{\mu}_{k}G(\psi)_{k}\tilde{\mu}_{k} \cong id_{J(\mu_{k}M,\mu_{k}P)-\operatorname{mod}_{k}}$.
\end{center}
\end{prop}
\begin{proof} By Proposition \ref{prop8}, the functor
\begin{center}
$\tilde{\mu}_{k}: J(\mu_{k}M,\mu_{k}P)-\operatorname{mod}_{k} \rightarrow J(\mu_{k}^{2}M,\mu_{k}^{2}P)-\operatorname{mod}_{k}$
\end{center}
is faithful, hence $G(\psi)_{k}\tilde{\mu}_{k}$ is faithful as well. By Lemma \ref{lem11} and Proposition \ref{prop8}, the functor $\tilde{\mu}_{k}$ is fully faithful. Therefore, $G(\psi)_{k}\tilde{\mu}_{k}$ is full. The result now follows by applying Lemma \ref{lem11}.
\end{proof}
\begin{theorem} Let $P$ be a potential in $\mathcal{F}_{S}(M)$. If $\mu_{k}P$ is splittable, then there exists an equivalence of categories:
\begin{center}
$\mu_{k}: J(M,P)-\operatorname{mod}_{k} \rightarrow J(\overline{\mu}_{k}M,\overline{\mu}_{k}P)-\operatorname{mod}_{k}$
\end{center}
\end{theorem}
\begin{proof}
Since $\mu_{k}P$ is splittable, then by \cite[Theorem 7.15]{2} there exists an algebra isomorphism $\varphi: \mathcal{F}_{S}(\mu_{k}M) \rightarrow \mathcal{F}_{S}(\overline{\mu}_{k}M \oplus M')$, with $\varphi_{|S}=id_{S}$, and such that $\varphi(\mu_{k}P)$ is cyclically equivalent to $\overline{\mu}_{k}P + W$ where $W$ is a trivial potential in $\mathcal{F}_{S}(M')$. By \ref{6.1} and \ref{6.2} there exists an equivalence of categories
\begin{center}
$G(\varphi): J(\mu_{k}M,\mu_{k}P)-\operatorname{mod} \rightarrow J(\overline{\mu}_{k}M,\overline{\mu}_{k}P)-\operatorname{mod}$
\end{center}
which induces an equivalence of categories
\begin{center}
$G(\varphi)_{k}: J(\mu_{k}M,\mu_{k}P)-\operatorname{mod}_{k} \rightarrow J(\overline{\mu}_{k}M,\overline{\mu}_{k}P)-\operatorname{mod}_{k}$
\end{center}
By Propositions \ref{prop9} and \ref{prop10}, the categories $J(M,P)-\operatorname{mod}_{k}$ and $J(\mu_{k}M,\mu_{k}P)-\operatorname{mod}_{k}$ are equivalent; hence, we get an equivalence of categories
\begin{center}
$\mu_{k}: J(M,P)-\operatorname{mod}_{k} \rightarrow J(\overline{\mu}_{k}M,\overline{\mu}_{k}P)-\operatorname{mod}_{k}$
\end{center}
as desired.
\end{proof}
The RVS (Radial Velocity Spectrometer) on board Gaia is a slitless spectrograph whose spectral domain is 847--874 nm and whose resolving power is $R \sim 11500$.
The expected accuracy is 1 km s$^{-1}$ for F0 to K0 stars brighter than V$=13$, and for K1 to K4 stars brighter than V$=14$.
The main scientific objectives of RVS are the chemistry and dynamics of the Milky Way, the
detection and characterisation of multiple systems and variable stars \citep[for more details, see][]{wil05}.
Those objectives will be achieved from a spectroscopic survey of:
\begin{itemize}
\item Radial velocities ($\sim 150 \times 10^6$ objects, V $\le 17$)
\item Rotational velocities ($\sim 5 \times 10^6$ objects, V $\le 13$)
\item Atmospheric parameters ($\sim 5 \times 10^6$ objects, V $\le 13$)
\item Abundances ($\sim 2 \times 10^6$ objects, V $\le 12$)
\end{itemize}
Each star will be observed $\sim 40$ times on average by RVS over the 5 years of the mission.
\section{Calibration of the Gaia RVS}
Because the RVS has no calibration module on board, the zero point of its radial velocities has to be determined
from reference sources. Ground-based observations of a large sample of well-known,
stable reference stars as well as of asteroids are thus critical for
the calibration of the RVS. A sample of 1420 candidate standard stars has been established \citep{cri09,cri10} and has to be validated
by high spectral resolution observations.
Two measurements per candidate star are being made before Gaia is launched (or one, depending on already available archived data).
Another measurement will occur during the mission. The measurements will allow us to check the temporal stability of radial velocities,
and to reject any targets with significant RV variation.
\section{Status of observations of stars}
The ongoing observations are performed with three high spectral
resolution spectrographs:
\begin{itemize}
\item SOPHIE on the 1.93-m telescope at Observatoire de Haute-Provence,
\item NARVAL on the T\'elescope Bernard Lyot at Observatoire Pic-du-Midi,
\item CORALIE on the Euler swiss telescope at La Silla.
\end{itemize}
As of June 2011 we have observed 995 distinct candidates with SOPHIE, CORALIE and NARVAL. The detailed observations per instrument are:
\begin{itemize}
\item 691 stars (1165 velocities) with SOPHIE
\item 669 stars (945 velocities) with CORALIE
\item 93 stars (98 velocities) with NARVAL
\end{itemize}
Figure~\ref{chemin:fig1} (left-hand panel) shows the spatial distribution, in the equatorial frame, of the sample and the number of measurements per object obtained so far with the three instruments.
\begin{figure}[b!]
\includegraphics[width=0.5\columnwidth]{chemin_fig1}\includegraphics[width=0.5\columnwidth]{chemin_fig2}
\caption{\textbf{Left panel:} Current status of the number of observations available per candidate standard star observed
with the SOPHIE, CORALIE and NARVAL instruments (995 distinct stars).
Black dots indicate the locations of the 425 remaining sources from the sample of 1420 candidates.
\textbf{Right panel:} Same as in left panel, but including archival data of the ELODIE and HARPS instruments as well.
The maps are represented in the equatorial frame. The Ecliptic plane is shown as a dashed line and the Galactic plane as a dotted line. }
\label{chemin:fig1}
\end{figure}
In addition to those new observations, we use radial velocity measurements available from the spectroscopic archives of two other high-resolution instruments: ELODIE,
which is a former OHP spectrograph, and HARPS which is currently observing at the ESO La Silla 3.6-m telescope.
The archived data allow us to recover 1057 radial velocities for 292 stars (ELODIE) and 1289 velocities for 113 stars (HARPS).
Figure~\ref{chemin:fig1} (right-hand panel) summarizes the status of the total number of measurements for the sample of 1420 candidate stars performed
with all five instruments.
\section{How stable are the radial velocities of our candidates?}
We have derived the variation of radial velocity of each star for which we have at least two velocity measurements separated
by an elapsed time of at least 100 days. These stars represent a subsample of 1044 among 1420 targets.
The variation is defined as the difference between the maximum and minimum velocities, as reported in the frame of the SOPHIE spectrograph.
Its distribution is displayed in Figure~\ref{chemin:fig2}.
A candidate is considered as a reference star for the RVS calibration when its radial velocity does not vary by more than an adopted threshold of 300 m s$^{-1}$.
Such a threshold has been defined to satisfy the condition that the variation of the RV of a candidate must be well smaller than
the expected RVS accuracy (1 km s$^{-1}$ at best, for the brightest stars). As a result, we find that $\sim 7\%$
of the 1044 stars exhibit a variation larger than 300 m s$^{-1}$, as derived from the measurements available to date.
Those variable stars will have to be rejected from the list of standard stars.
Note that about 75\% of the 1044 stars have very stable RV, at a level of variation smaller than 100 m s$^{-1}$.
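The stability test above is a simple peak-to-peak criterion. As an illustration (this sketch is not part of the survey pipeline; the function name and data are hypothetical), it can be expressed as:

```python
def is_stable(epochs_days, rv_ms, threshold_ms=300.0, min_span_days=100.0):
    """Peak-to-peak stability test for a candidate RV standard star.

    epochs_days : observation times in days
    rv_ms       : radial velocities in m/s, in a common instrument frame
    Returns True/False, or None if the series does not yet span 100 days.
    """
    if len(rv_ms) < 2 or (max(epochs_days) - min(epochs_days)) < min_span_days:
        return None  # not enough data to decide
    return (max(rv_ms) - min(rv_ms)) <= threshold_ms

# hypothetical star with three epochs: 60 m/s peak-to-peak -> stable
print(is_stable([0.0, 120.0, 400.0], [10250.0, 10310.0, 10290.0]))  # prints True
```

A star is rejected as a standard as soon as any pair of its measurements, in a common instrument frame, differs by more than the threshold.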
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{chemin_fig3}
\caption{Distribution of variations of radial velocities of candidate standard stars having at least two RV measurements separated by 100 days or more.
A dashed line shows the adopted 300 m s$^{-1}$ stability threshold.}
\label{chemin:fig2}
\end{center}
\end{figure}
\section{Spectral observations of asteroids}
Observations of asteroids are very important for the radial velocity calibration. Indeed they will be used to determine the
zero-points of the RVs measured with SOPHIE, CORALIE and NARVAL (as well as the Gaia-RVS zero-point).
Those goals will be achieved by comparing the spectroscopic RVs of asteroids from
ground-based measurements with theoretical kinematical RVs from celestial mechanics.
The theoretical RVs are provided by IMCCE and are known with an accuracy better than 1 m s$^{-1}$.
About 280 measurements of 90 asteroids have been performed so far.
As an illustration, Figure~\ref{chemin:fig3} (left-hand panel) displays the residual velocity (observed minus computed RVs) of asteroids observed by the
SOPHIE instrument as a function of the observed RVs. The average residual of asteroids observed with SOPHIE is 30 m s$^{-1}$ and the scatter is 38 m s$^{-1}$.
In Figure~\ref{chemin:fig3} (right-hand panel) we also show the variation of the residual RVs with time. It nicely shows how stable the RVs are
as a function of time. The residual RVs are relatively constant within the quoted errors.
The error bars represent the dispersion of all measurements performed during each observing run. Their amplitude is mainly related to observing conditions that differ from one session to another (in particular the moonlight contamination). Though significant (between 10 and 50 m s$^{-1}$),
these error bars remain well below our target stability criterion of 300 m s$^{-1}$, which will enable us to determine the RV zero point of each instrument correctly.
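The zero-point determination itself is conceptually simple: the instrumental offset is the mean of the residuals (observed minus ephemeris-computed velocities). A minimal sketch, with hypothetical residuals chosen to mimic the SOPHIE statistics quoted above:

```python
import statistics

# hypothetical residuals (observed minus ephemeris-computed RV) for one
# instrument, in m/s; values chosen to mimic the SOPHIE numbers above
residuals = [25.0, 68.0, -12.0, 41.0, 30.0, 55.0, -5.0, 38.0]

zero_point = statistics.mean(residuals)   # instrumental RV zero point
scatter = statistics.stdev(residuals)     # run-to-run dispersion

print(round(zero_point, 1))  # prints 30.0
```

In practice the averaging would also account for the varying observing conditions of each run before the offset is adopted.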
\begin{figure}[t!]
\centering
\includegraphics[width=8cm]{chemin_fig4}\includegraphics[width=8cm]{chemin_fig5}
\caption{Radial velocities of asteroids. \textbf{Left panel:} Residual velocities (observed minus computed) of asteroids as a function of their observed velocities
(SOPHIE observations only). Red symbols are points deviant by more than 3$\sigma$. A horizontal dashed line represents the mean residual velocity
of 30 m s$^{-1}$. \textbf{Right panel:} Comparison of residual velocities of asteroids for SOPHIE, CORALIE and NARVAL as a function of time for the various
observing runs.}
\label{chemin:fig3}
\end{figure}
Note also that we have verified that the intrinsic properties of asteroids (e.g. their size, shape, rotation velocity, albedo, etc.)
have a negligible systematic impact on the determination of the RV zero points of the spectrographs.
From now on, observations of asteroids will be performed with reduced moonlight contamination.
\begin{acknowledgements}
We are very grateful to the AS-Gaia, the PNPS and PNCG for the financial support of the observing campaigns and the help in this project.
\end{acknowledgements}
When molecules are adsorbed on metal surfaces, their electronic states are strongly perturbed by hybridization, charge transfer and screening \cite{Lu2004, Thygesen2009, Braun2009,Tautz2007}. These effects lead to a broadening and shifting of the molecular resonances \cite{Torrente2008}. Often the molecular functionality is also lost due to these interactions \cite{Tegeder2012}. However, addressing individual molecules in devices or by single-molecule spectroscopy as offered in a scanning tunneling microscope, requires a metal electrode. To (partially) preserve the molecular properties the molecule--electrode coupling has to be properly designed. An elegant way is to clamp the molecule between electrodes via weak single-atom bonds at opposing sites of the molecule while the molecule is freely hanging between the electrodes \cite{Reed1997, Reichert2002, Venkatarman2006, Lafferentz2009}. While these configurations give access to important transport properties \cite{Nitzan2003,Tao2006, Aradhya2013}, they do not allow for imaging molecular properties with intramolecular resolution \cite{Reecht2016}. The latter requires the molecules to be flat lying on a surface. To decouple such flat lying molecules from a metal, thin insulating layers have been engineered, ranging from ionic salts \cite{Repp2005a,Liljeroth2007}, over oxides \cite{Qiu2003, Heinrich2004, Rau2014}, nitrides \cite{Hirjibehedin2006}, and molecular layers \cite{Franke2008, Matino2011} to 2D materials, such as graphene \cite{Garnica2013,Riss2014}, and hexagonal boron nitride \cite{Schulz2013}.
The most recent development of decoupling layers made use of the in-situ fabrication of single-layers of transition-metal-dichalcogenides on metal surfaces. A monolayer of MoS$_2$\ on Au(111) provided very narrow molecular resonances, close to the thermal resolution limit at 4.6\,K \cite{Krane2018}. The exquisite decoupling efficiency has been ascribed to a combination of its rather large thickness of three atomic layers, its electronic band gap, and its non-ionic nature. All together, these properties prohibited fast electronic relaxations into the metal and coupling to phonons, which otherwise led to lifetime broadening \cite{Repp2005, Fatayer2018}.
The electronic properties of MoS$_2$\ on a metal surface are not the same as of a free-standing monolayer. Both theory and experiment have found considerable hybridization of electronic states at the interface \cite{Bruix2016}. As a consequence, the band gap is narrowed. Instead of the predicted band gap of 2.8\,eV of the free-standing layer \cite{Cheiwchanchamnangij2012, Qiu2015}, the band gap of the hybrid structure amounts to only $\sim 1.7$\,eV \cite{Bruix2016}. Interestingly, the states at the $K$ point are much less affected than the states at $\Gamma$. Hence, the system remains promising for optoelectronic devices with selective access to the spin-orbit-split bands at $K$ and $K'$ by circularly polarized light \cite{Bana2018}.
The potential as decoupling layer for molecules, may become even more appealing by the fact that monolayers of transition-metal-dichalcogenides can be grown in-situ on different metal surfaces, where the precise hybridization and band alignment depends on the nature of the substrate \cite{Dendzik2017}. One may thus envision tuning the band gap alignment for decoupling either the molecules' lowest unoccupied (LUMO) or highest occupied molecular orbitals (HOMO).
While MoS$_2$\ on Au(111) has already been established as an outstanding decoupling layer \cite{Krane2018}, we will now explore this potential for MoS$_2$\ on a Ag(111) surface. In agreement with the band modifications of WS$_2$ on Au(111) and Ag(111), we find that the band gap remains almost the same, but shifted to lower energies \cite{Dendzik2017}. As a test molecule we chose tetra-cyano-quino-dimethane (TCNQ). Due to its electron-accepting character, this choice will allow us to detect a negative ion resonance within the band gap of MoS$_2$. We will show that the LUMO is indeed decoupled from the metallic substrate as we can detect a narrow linewidth followed by a satellite structure. We can reproduce this fine structure by simulating the vibronic states of the gas-phase molecule.
\section{Results and Discussion}
We have grown monolayer islands of MoS$_2$\ on an atomically clean Ag(111) surface, which had previously been exposed to sputtering-annealing cycles under ultrahigh vacuum. The growth procedure was adapted from that of MoS$_2$\ on Au(111) \cite{Gronborg2015,Krane2016}, with Mo deposition on the surface in an H$_2$S atmosphere of $5\cdot 10^{-5}$ mbar, while the sample is annealed to 800\,K.
Tetra-cyano-quino-dimethane (TCNQ) molecules were deposited on the as-prepared sample held at 230\,K. The sample was then cooled down and transferred to the scanning tunneling microscope (STM). All measurements were performed at 4.6\,K. Differential conductance (\ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace) maps and spectra were taken with a lock-in amplifier at modulation frequencies of 812-921\,Hz, with the amplitudes given in the figure captions.
\begin{figure*}[h!]
\includegraphics[width=16cm]{Fig1_new.pdf}
\caption{a) STM topography of MoS$_2$ on Ag(111) recorded at V = 1.2\,V, I = 20\,pA. Inset: Line profile of a monolayer MoS$_2$\ island along the green line. b) Close-up view on the moir\'e structure. c) Atomically resolved terminating S layer (V = 5\,mV, I = 1\,nA).
d) Constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra on the MoS$_2$/Ag(111) recorded on top and on hollow region of the moir\'e structure as shown on the inserted STM topography (feedback opened at V = 2.5\,V, I = 0.5\,nA, V$_\mathrm{mod}=10$\,mV). The inset shows the gap region of MoS$_2$/Ag(111) in logarithmic scale. We identify the VBM and CBM as the change in slope of the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace signal. Dashed lines indicate the conduction band minimum (CBM) at $\sim 0.05$\,V and valence band maximum (VBM) at $\sim -1.55$\,V. The strong features in the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra are associated to the onset of specific bands, which are labeled by $Q$, $\Gamma_1$ and $\Gamma_2$ according to their location in the Brillouin zone. The assignment follows Ref. \cite{Krane2018a}.}
\label{fig1}
\end{figure*}
\subsection{Characterization of single-layer MoS$_2$\ on Ag(111)}
\Figref{fig1}a presents an STM image of the Ag(111) surface after the growth of MoS$_2$\ as described above. We observe islands with diameters of tens to hundreds of nanometers and an apparent height of $2.3 \pm 0.2$\,\AA\ (inset of \Figref{fig1}a). The apparent height is much smaller than the layer distance in bulk MoS$_2$\ \cite{Wakabayashi1975} due to electronic-structure effects, but in agreement with a single layer of MoS$_2$\ on a metal surface \cite{Gronborg2015}. The islands exhibit a characteristic hexagonal pattern reflecting a moir\'e structure which results from the lattice mismatch between the Ag(111) surface and the MoS$_2$\ (\Figref{fig1}b). Areas with large apparent height correspond to domains where the S atoms sit on top of Ag atoms, whereas the lower areas represent two different hollow sites (fcc or hcp stacking) of the S atoms on the Ag lattice.
The most abundant moir\'e periodicity amounts to $\sim 3.3\pm 0.1$\,nm. This value is similar to the one observed for MoS$_2$\ on Au(111) \cite{Gronborg2015, Sorensen2014, Bruix2016,Bana2018}.
Given the very comparable lattice constants of Au (4.08\, \AA) and Ag (4.09\, \AA), a locking into a similar superstructure at the metal--MoS$_2$\ interface is not surprising. However, occasionally, we also observe moir\'e patterns with $3.6\pm 0.1$\,nm and $3.0\pm 0.1$\,nm lattice constants and different angles between the MoS$_2$\ and Ag(111) lattice. This indicates shallow energetic minima of the lattice orientations.
Atomically resolved STM images (\Figref{fig1}c) reveal the expected S--S distance of 3.15\,\AA\ in the top layer \cite{Wakabayashi1975, Bronsema1986,Schumacher1993,Helveg2000}.
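The observed periodicities are consistent with a simple rigid-lattice estimate. As an illustrative back-of-the-envelope check (not part of the original analysis; it assumes perfectly aligned, unstrained hexagonal lattices), the first-order moir\'e period of two aligned lattices with constants $a_1$ and $a_2$ is $a_1 a_2/|a_1-a_2|$:

```python
import math

a_Ag = 4.09 / math.sqrt(2)  # nearest-neighbour spacing on Ag(111), ~2.89 A
a_MoS2 = 3.15               # in-plane S--S lattice constant of MoS2, in A

# first-order moire period of two aligned hexagonal lattices, in nm
moire_nm = (a_MoS2 * a_Ag / abs(a_MoS2 - a_Ag)) / 10.0
print(round(moire_nm, 2))   # prints 3.53
```

The resulting $\sim 3.5$\,nm estimate lies within the range of observed periodicities (3.0--3.6\,nm); small rotations or strain shift the period, consistent with the several moir\'e lattices observed here.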
\begin{figure*}
\includegraphics[width=16cm]{Fig2_new.pdf}
\caption{Constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra recorded (a) on top and (b) on hollow site of the moir\'e structure of MoS$_2$\ on Ag(111) (red curves) and on Au(111) (blue curves). Feedback opened at V = 2.5\,V, I = 0.5\,nA, V$_\mathrm{mod}=10$\,mV (all spectra, except for hollow site on Au(111): V$_\mathrm{mod}=5$\,mV). }
\label{fig2}
\end{figure*}
For an efficient decoupling of a molecule from the substrate, the interlayer must provide an electronic band gap. As the moir\'e pattern bears a topographic and an electronic modulation \cite{Krane2018a}, we investigate the differential conductance (\ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace) spectra on different locations (\Figref{fig1}d). We first examine the spectrum on the top site of the moir\'e structure. We observe a gap in the density of states, which is flanked by onsets of conductance at $\sim -1.55$\,V and $\sim +0.05\,$V (marked by the dashed lines labeled VBM and CBM, which have been determined from a logarithmically scaled plot). Additionally, there are pronounced peaks at $\sim 0.77$\,V and $\sim 1.28$\,V. First, we note that the observed band gap is significantly smaller than the 2.8\,eV band gap of free-standing single-layer MoS$_2$\ \cite{Cheiwchanchamnangij2012, Qiu2015}. This indicates a strong hybridization between the electronic states of the MoS$_2$\ layer and the Ag substrate. Second, we note that the spectral features are similar to those observed for single-layer MoS$_2$\ on Au(111) \cite{Krane2016, Bruix2016, Krane2018a}. For direct comparison, we plot the spectra on the top sites of the MoS$_2$\ moir\'e on Au(111) and Ag(111) in \Figref{fig2}a. At negative bias voltage, the onsets of conductance are essentially the same, while the features at positive bias voltage appear $\sim 140$\,mV closer to the Fermi level on Ag(111) than on Au(111).
Before discussing the differences between the layers on Au(111) and Ag(111), we investigate the effect of the different stacking at the interface on the electronic properties. The spectrum on a hollow site on Ag(111) shows a shift of the features at negative bias voltage by $\sim 130$\,mV towards the Fermi level ($E_\mathrm{F}$), whereas the peaks at positive bias undergo a much smaller shift ($\sim 50$\,mV) away from $E_\mathrm{F}$ (\Figref{fig1}d). On Au(111), there are also variations between hollow and top sites, with the strongest shift at negative bias voltage (\Figref{fig2}).
To understand the differences between the substrates and local sites, we first discuss the origin of the spectroscopic features. Based on the similarity of the spectral shapes on Au(111) and Ag(111), we tentatively assign the strong peaks at $\sim 0.8$\,V (labeled as $\Gamma_1$) and $\sim 1.3$\,V (labeled as $\Gamma_2$) (values averaged over the different moir\'e sites) to bands at the $\bar{\Gamma}$ point \cite{Krane2018a}. More precisely, the peak at $\Gamma_2$ has been assigned to bands at $\bar{\Gamma}$, which are also present in free-standing MoS$_2$, but are broadened due to hybridization with the substrate. The peak at $\Gamma_1$ has been observed in tunneling spectra of MoS$_2$\ on Au(111), but has not been found in calculations. It has been interpreted as a hybrid metal-MoS$_2$\ or interface state \cite{Krane2018a}. The conduction band minimum, which is expected to lie at the $\bar{K}$ point for quasi-free-standing as well as metal-supported single-layer MoS$_2$\ \cite{Mak2010, Splendiani2010,Miwa2014,Bruix2016}, is hardly visible in the tunneling spectra due to the rapid decay of the tunneling probability with $k_{\parallel}$ \cite{Zhang2015, Krane2018a}. The same applies to the valence band maximum, such that the strongest feature in the tunneling spectra at $-2$\,V arises from bands close to the $\bar{Q}$ point \cite{Krane2018a}.
Comparison of the spectra on the moir\'e hollow sites suggests a rigid shift of the MoS$_2$\ conduction bands between the Ag and Au substrates. In a simple picture, this agrees with the lower work function of Ag compared to Au.
A down-shift of the conduction band structure by $\sim 280$\,meV has been observed by photoemission for WS$_2$ on Au(111) and Ag(111) \cite{Dendzik2017}. Angle-resolved measurements further showed that the shift also included band distortions, such that bands at $\bar{Q}$ crossed $E_\mathrm{F}$ (instead of bands at $\bar{K}$). The band distortion was explained by hybridization of the WS$_2$ bands with the Ag substrate \cite{Dendzik2017}. As our \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace signal is not $k_\parallel$-sensitive, we would not be able to detect band distortions in the MoS$_2$--Ag system. However, the clear shift of the states at $\bar{\Gamma}$ can be easily understood by hybridization of S-derived states of mainly out-of-plane character with Ag states, in analogy to Ref.~\cite{Bruix2016}.
In the occupied states, the bands on the hollow site follow the same trend of a down-shift, suggesting that the states near $\bar Q$ are equally affected by hybridization with Ag states \cite{Dendzik2017}. In contrast, the tunneling spectra on the top sites seem to coincide for the Au and Ag substrates. We also note that the tunneling conductance close to the $\bar Q$ point is the most sensitive to the precise location on the moir\'e pattern. Hence, we suggest that these states are most strongly affected by screening effects, which may vary between the different substrates \cite{Roesner2016} and partially compensate for hybridization effects.
\subsection{Electronic properties of TCNQ molecules on MoS$_2$\ on Ag(111)}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3.pdf}
\caption{ a) Stick-and-ball model of TCNQ. Gray, blue and white spheres represent C, N and H atoms, respectively.
b) STM topography of a TCNQ molecular island on MoS$_2$/Ag(111) recorded at V = 1\,V, I = 10\,pA.
c) STM topography of a TCNQ island on MoS$_2$/Ag(111) recorded at V = 0.8\,V, I = 200\,pA, with superimposed molecular models suggesting intermolecular dipole-dipole interactions (dashed lines). White arrows represent the unit cell of the self-organized TCNQ domain with lattice vectors $a_1$ = (0.9 $\pm$ 0.1)\,nm and $a_2$ = (1.0 $\pm$ 0.1)\,nm and an angle between them of (96 $\pm$ 2)$^{\circ}$. }
\label{fig3}
\end{figure}
Deposition of TCNQ molecules (structure shown in \Figref{fig3}a) on the sample held at 230\,K leads to large densely packed molecular islands on the MoS$_2$\ areas (\Figref{fig3}b). The large size and high degree of order of these islands reflect a low diffusion barrier on the MoS$_2$\ substrate.
The moir\'e pattern of MoS$_2$\ remains intact and visible through the molecular monolayer. High-resolution STM images recorded at 0.8\,V (\Figref{fig3}c) allow us to resolve the individual molecules and their arrangement. Each TCNQ molecule appears as two back-to-back U-shapes separated by a nodal plane. As will be discussed later, and based on previous work on TCNQ \cite{Torrente2008, Garnica2013}, this appearance can be associated with the spatial distribution of the lowest unoccupied molecular orbital (LUMO). The molecular arrangement can be described by the lattice vectors $a_1=0.9\pm 0.1$\,nm and $a_2=1.0\pm 0.1$\,nm and an angle of (96$\pm$2)$^{\circ}$ between them (see model in \Figref{fig3}c). This structure is stabilized by dipole-dipole interactions between the cyano endgroups and the quinone center of neighboring molecules. This assembly is very similar to typical self-assembled TCNQ islands on weakly interacting substrates \cite{Torrente2008, Barja2010, Garnica2013, Park2014, Pham2019}. When measured at lower bias voltage (e.g., at V = 0.2\,V in \Figref{fig4}a), the molecules appear as featureless ellipses, reflecting only the topographic extent of the molecules.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig4.pdf}
\caption{a) STM topography of a self-assembled TCNQ island on MoS$_2$/Ag(111), recorded at V = 0.2\,V, I = 20\,pA.
b) \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra acquired on TCNQ molecules within the island in (a), with the precise location marked by colored dots. The gray spectrum was recorded on bare MoS$_2$\ layer for reference. Feedback opened at V = 2\,V, I = 100\,pA, with $V_\mathrm{mod} =20$\,mV.}
\label{fig4}
\end{figure}
The strong bias-voltage dependence of the appearance of the TCNQ molecules on the MoS$_2$\ layer promises energetically well-separated molecular states. To investigate these properties in more detail, we recorded \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra on top of the molecules (\Figref{fig4}b). These show two main resonances at $\sim 0.47$\,V and $\sim 0.64$\,V. Another peak at $\sim 1.3$\,V matches the $\Gamma$ resonance of the bare MoS$_2$\ layer. At negative bias voltage, we observe an onset of conductance at $\sim -1.8$\,V. The \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra thus show that the STM image in \Figref{fig4}a was recorded within the energy gap of the molecule, which explains the featureless shape.
In order to determine the origin of each of the resonances, we recorded constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps at their corresponding energies (\Figref{fig5}).
For the first resonance at positive bias voltage (470\,mV, \Figref{fig5}a), we observe the same double U-shape, separated by a nodal plane, which we used in \Figref{fig3} for the identification of the molecular arrangement. The \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace map at 640\,mV exhibits the same shape, suggesting the same orbital as its origin. At 1.3\,V, the molecules do not show any characteristic feature (\Figref{fig5}c). Finally, \Figref{fig5}d presents a conductance map at $-2$\,V, associated with the onset of conductance observed at negative bias voltage for spectra on the molecule. Here, the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace signal is rather blurred, but we note that it is more localized at the center of the molecule as compared to the elliptical shape in \Figref{fig5}c.
For the identification of molecular orbitals, it is often sufficient to compare the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps with the shape of the gas-phase molecular orbitals. Using this method, the U-shaped features have previously been associated with the LUMO of TCNQ \cite{Torrente2008, Garnica2013, Pham2019}. Here, we corroborate this assignment by simulating constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps of a free, flat-lying molecule. We first calculated the gas-phase electronic structure using density-functional-theory calculations with the B3PW91 functional and the 6-31g(d,p) basis set as implemented in the GAUSSIAN09 package \cite{Gaussian}. The isodensity contour plots of the highest occupied molecular orbital (HOMO) and some of the lowest unoccupied orbitals are shown in \Figref{fig5}e, right panel. The HOMO/LUMO can be unambiguously distinguished by the absence/presence of a nodal plane at the center of the quinone backbone. For direct comparison with the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps, we calculate the tunneling matrix element between an s-wave tip and the spatially resolved molecular wavefunction across the molecule \cite{Bardeen1961}. The maps of the square of the tunneling matrix element are depicted in \Figref{fig5}e next to the corresponding molecular orbitals. Because the LUMO+1 and LUMO+2 are quasi-degenerate, we used the sum of their wave functions for the calculation of the tunneling matrix elements. As expected, the nodal planes of the molecular orbitals dominate the simulated \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps and can be taken as a robust signature for molecular orbital identification. Additionally, the simulated maps reveal that the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace intensity is not directly proportional to the isosurface density.
For instance, there is hardly any intensity within the U shapes of the TCNQ LUMO, and the HOMO is mainly localized at the very center of the quinone moiety. We note that the simulated maps were obtained at a tip-molecule distance (center of the s-wave tip to center of the molecule) of 7.5\,\AA. This value was chosen because it represents reasonable tunneling conditions in experiment. However, variation of the tip height by $\pm 2$\,\AA\ does not have any influence on the observation of the main features within the map (i.e., nodal planes or intensity maxima) \cite{Reecht2020}.
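The dominance of nodal planes can be illustrated with a toy model. For an s-wave tip, the tunneling matrix element reduces to the value of the molecular wavefunction at the tip apex (Tersoff-Hamann limit), so a constant-height map is essentially $|\psi|^2$ evaluated at the tip height. The Python sketch below uses a made-up "orbital" of two Gaussian lobes of opposite sign, a crude stand-in for a state with one nodal plane, not a DFT orbital:

```python
import numpy as np

# Toy Tersoff-Hamann-style map: for an s-wave tip, the tunneling matrix
# element reduces to the molecular wavefunction at the tip position, so
# |M|^2 ~ |psi(r_tip)|^2.  The "orbital" below is an assumed pair of
# Gaussian lobes of opposite sign -- a placeholder, not a DFT orbital.
def lobe(X, Y, x0, y0, sigma=1.5):
    return np.exp(-((X - x0)**2 + (Y - y0)**2) / (2.0 * sigma**2))

x = np.linspace(-6.0, 6.0, 241)
X, Y = np.meshgrid(x, x)
psi = lobe(X, Y, -2.0, 0.0) - lobe(X, Y, +2.0, 0.0)  # nodal plane at x = 0

didv = psi**2                  # constant-height map ~ |psi|^2
print(didv[:, 120].max())      # the column at x = 0 (nodal plane) stays dark
```

However schematic, the map reproduces the qualitative point of the text: the nodal plane appears as a dark line regardless of the detailed lobe shape, which is why it serves as a robust fingerprint for orbital identification.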
\begin{figure*}
\includegraphics[width=16cm]{Fig5_new.pdf}
\caption{a-d) Constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps of a TCNQ island on MoS$_2$\ recorded at the resonance energies derived in \Figref{fig4}b. Feedback opened at (a-c) V = 2\,V, I = 100\,pA and (d) V = -2\,V, I = 30\,pA on the center of a molecule, with $V_\mathrm{mod} =20$\,mV. Zooms with increased contrast on one molecule are shown as insets for each map.
e) Energy-level diagram of TCNQ determined from gas-phase DFT calculations (left). The isosurfaces of the frontier molecular orbitals are shown on the right. These have been used to calculate the tunneling matrix element $M_\mathrm{ts}$ with an s-wave tip at a tip--molecule distance of 7.5\,\AA\ and a work function of 5\,eV. The maps of the spatial distribution of $\left|M_\mathrm{ts}\right|^2$ are shown in the middle panel.
}
\label{fig5}
\end{figure*}
Comparison with the experimental constant-height \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace maps now allows for an unambiguous identification of the origin of the molecular resonances. As suggested previously, the resonance at 0.47\,V can be derived from the LUMO, with the double U-shape being in very good agreement with the calculations of the tunneling matrix element. The very same signatures in the conductance map at 0.64\,V suggest that this resonance stems from the LUMO as well. The DFT calculations show that the LUMO is non-degenerate. Hence, we can exclude a substrate-induced lifting of a degeneracy. The energy difference of only 170\,meV between the two resonances lies within the typical vibrational energies of organic molecules and may, thus, be indicative of a vibronic peak. We will elaborate on this point further below.
The \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace map at 1.3\,V essentially shows the same elliptical shapes of the molecules as the STM image recorded in the electronic gap (\Figref{fig4}a). Our DFT calculations suggest that the next higher unoccupied orbitals lie 3\,eV above the LUMO and show a pattern of nodal planes that are absent in the experiment.
Additionally, given the similar energy to the MoS$_2$\ bands, this resonance is probably not associated with the molecular layer, but with direct tunneling into the MoS$_2$\ states.
The assignment of the orbital origin at negative bias voltage bears some intricacies, because the experimental map lacks characteristic nodal planes. The reduced spatial resolution is most probably caused by the overlap with the density of states of the substrate, as we are approaching the onset of the valence band of MoS$_2$. One may suggest that the stronger localization of the \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace intensity toward the quinone center is in agreement with the large tunneling matrix element of the HOMO at the molecule's center. This assignment may be reinforced by the coincidence of the observed molecular energy gap of TCNQ with the DFT-derived gap. However, DFT is known to underestimate HOMO--LUMO gaps. Though this effect may be compensated by the screening properties of the substrate, we refrain from a definite assignment. In any case, our data clearly shows that the HOMO is at or within the valence band of MoS$_2$.
By comparison with simulations, we thus arrive at a clear identification of the energy level alignment. Most notably, we find that the LUMO-derived resonance lies close to, but above, the Fermi level of the substrate, whereas the HOMO is far below. This leaves the molecule in a neutral state with a negligible amount of charge transfer, despite the electron-accepting character of TCNQ. Nonetheless, its electron affinity of $\sim$3.4\,eV \cite{Milian2004,Zhu2015} is consistent with the LUMO alignment just above $E_\mathrm{F}$ when considering the work function of MoS$_2$/Ag(111) of 4.7\,eV \cite{Zhong2016}. We found small shifts of the LUMO onsets, by at most 50\,mV, between the spectra of TCNQ molecules lying on the top or hollow sites of the moir\'e structure of MoS$_2$. These shifts correspond to the moir\'e-induced shifts of the unoccupied states of the MoS$_2$\ layer and thus only reflect the different screening properties of the substrate. In turn, we do not observe any modification of the electronic structure of MoS$_2$. This indicates weak interactions of the molecules all along the MoS$_2$\ layer.
Importantly, the 470-meV resonance has a rather narrow width of $\sim 100$\,meV. This is significantly smaller than typically observed on metal surfaces, where strong hybridization effects lead to widths of the order of $\sim 500$\,meV \cite{Torrente2008,Park2014}. The narrow width thus reflects that MoS$_2$\ acts as a decoupling layer from the metal substrate. However, this resonance is broader than the HOMO resonances observed for other organic molecules on MoS$_2$\ on Au(111) \cite{Krane2018, Reecht2019,Reecht2020}. In contrast to those cases, where the HOMO lay well inside the electronic gap of MoS$_2$, the LUMO of TCNQ is located right at the onset of the conduction band. This provides relaxation pathways for electrons tunneling into the LUMO, though still significantly fewer than on the bare metal.
\subsection{Vibronic excitations of TCNQ on MoS$_2$\ on Ag(111)}
\begin{figure*}[h]
\includegraphics[width=16cm]{Fig6.pdf}
\caption{a) STM topography of a TCNQ island recorded at V = 1\,V, I = 10\,pA. b) Simulated (top panel) and experimental (bottom panel) \ensuremath{\mathrm{d}I/\mathrm{d}V}\xspace spectra at the position indicated by the blue dot in a), with feedback opened at V = 2\,V, I = 100\,pA and $V_\mathrm{mod} =10$\,mV. The simulated spectrum is obtained from DFT calculations for all vibrational modes of the TCNQ$^{-}$ molecule with a Huang-Rhys factor higher than 0.01 (dots, associated with the right axis). A Lorentzian broadening of 60\,meV is applied to each of these modes.
c) Schematic representation of electron transport through a TCNQ molecule adsorbed on MoS$_2$/Ag(111): singly charged TCNQ$^{-}$ is formed upon injecting an electron into a vibronic state of an unoccupied molecular electronic level.
d-f) Visualization of the vibrational modes contributing to the satellite peak. The orange arrows represent the displacement of the atoms involved in these vibrations.
}
\label{fig6}
\end{figure*}
Having shown that the resonances at 470\,mV and 640\,mV both originate from the LUMO of TCNQ, we now turn to their more detailed analysis. A close-up view of the spectral range with these peaks is shown in the bottom panel of \Figref{fig6}b, with the LUMO-derived peak at 470\,mV shifted to zero energy and its peak height normalized. The satellite structure is reminiscent of vibronic sidebands, which occur due to the simultaneous excitation of a vibrational mode upon charging \cite{Qiu2004, Pradhan2005,Nazin2005,Frederiksen2008,Matino2011,Schulz2013,Wickenburg2016}. The side peaks should thus obey the same symmetry as the parent orbital state \cite{Huan2011,Schwarz2015,Mehler2018}. In the simplest case, these excitations can be described within the Franck-Condon model (see sketch in \Figref{fig6}c). When probing the LUMO in tunneling spectroscopy, the molecule is transiently negatively charged. Within the Born-Oppenheimer approximation, this process is described by a vertical transition in the energy level diagram from the ground state $M^{0}$ to the excited state $M^{*}$. Upon charging, the molecule undergoes a geometric distortion, captured by the shift of the potential energy curve of the excited state. Vertical transitions allow for probing many vibronic states, with the intensities given by a Poisson distribution $I_{kn} = e^{-S_k} \frac{S_k^n}{n!}$, with $S_k$ being the Huang-Rhys factor of the vibrational mode $k$ and $n$ its harmonics. The Huang-Rhys factor is determined by the relaxation energy $\epsilon_k$ of a vibrational mode upon charging the molecule as $S_k = \frac{\epsilon_k}{\hslash \omega_k}$. From DFT calculations of the TCNQ molecule, we determine all vibrational eigenmodes in the negatively charged state and also derive the Huang-Rhys factors $S_k$ \cite{Krane2018}. The latter are plotted in the upper panel of \Figref{fig6}b (dots, right axis).
Applying to each of the vibronic states a Lorentzian peak with a full width at half maximum of 60\,meV and intensity proportional to the Poisson distribution, as described above, leads to a simulated Franck-Condon spectrum in the upper panel of \Figref{fig6}b.
This spectrum closely resembles the experimental one and, therefore, nicely reflects the nature of the satellite structure. We note that the bias voltage axis (bottom panel) is scaled by $10\%$ compared to the energy axis (top panel) to account for the voltage drop across the MoS$_2$\ layer \cite{Krane2019}. We now realize that the peak at $\sim 640$\,meV consists of three vibrational modes (at 151\,meV, 175\,meV and 206\,meV) exhibiting a large Huang-Rhys factor. These modes correspond to in-plane breathing modes of TCNQ (see schemes in \Figref{fig6}d-f), which are particularly sensitive to charging. Additionally, a mode at 40\,meV has a large Huang-Rhys factor. The excitation of this mode is not energetically well separated from the elastic onset of the LUMO in experiment. However, this mode contributes to an asymmetric lineshape, which can be recognized by comparing the low-energy flank to the high-energy fall-off of the first resonance. The low-energy side can be fitted by a Voigt profile and suggests a lifetime broadening of $55\pm 15$\,meV. This broadening is, however, too large to resolve the 40-meV mode as a separate peak.
We further note that the experimental spectrum was taken on a cyano group, where no nodal planes exist in the LUMO, as their presence may lead to vibration-assisted tunneling in addition to the bare Franck-Condon excitation \cite{Reecht2020}.
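The Franck-Condon construction described above is straightforward to sketch numerically. In the Python snippet below, the mode energies are those quoted in the text, but the Huang-Rhys factors are arbitrary placeholder values (not the DFT-derived ones), so the result only illustrates the shape of a Poisson-weighted, Lorentzian-broadened progression:

```python
import math
import numpy as np

# Poisson-weighted vibronic progression in the spirit of Fig. 6b.  Mode
# energies (meV) follow the text; the Huang-Rhys factors S_k are assumed
# placeholders, NOT the DFT-derived values used in the paper.
modes = {40: 0.30, 151: 0.20, 175: 0.25, 206: 0.15}  # meV : S_k (assumed)
fwhm = 60.0                                # Lorentzian FWHM (meV)

E = np.linspace(-100.0, 800.0, 2001)       # energy above the elastic line

def lorentzian(E, E0, fwhm):
    g = fwhm / 2.0
    return g**2 / ((E - E0)**2 + g**2)     # peak-normalized Lorentzian

# I_kn = exp(-S_k) S_k^n / n! for each mode k, summed over harmonics n
# (combination bands between different modes are neglected here).
spec = np.zeros_like(E)
for hw, S in modes.items():
    for n_h in range(6):
        I_kn = math.exp(-S) * S**n_h / math.factorial(n_h)
        spec += I_kn * lorentzian(E, n_h * hw, fwhm)

print(E[np.argmax(spec)])                  # strongest peak: the elastic line
```

Even with made-up $S_k$ values, the sketch reproduces the generic features of the measured spectrum: a dominant elastic line with a broad, unresolved satellite built from the overlapping breathing modes.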
\section{Conclusions}
We have shown that a single layer of MoS$_2$\ may act as a decoupling layer for molecules from the underlying metal surface, if the molecular resonances lie within the semiconducting band gap of MoS$_2$. MoS$_2$\ on Au(111) and Ag(111) exhibits very similar gap structures, which are, however, shifted in energy according to the different work functions of the metals. Though this is not the only reason for the band modifications \cite{Dendzik2017}, we suggest that such considerations may help when searching for appropriate decoupling layers for specific molecules.
We have put the decoupling properties of MoS$_2$/Ag(111) to the test with TCNQ molecules. These exhibit their LUMO resonance just at the conduction band onset of MoS$_2$, whereas the HOMO lies within the valence band.
Hence, the HOMO is not decoupled from the substrate, and also the LUMO suffers considerable lifetime broadening as compared to resonances that are well separated from the onsets of the MoS$_2$\ bands. The lifetime broadening of $55\pm 15$\,meV can be translated into a lifetime of $\sim $6\,fs of the excited state. This is almost one order of magnitude longer than on the bare metal surface, where the hot electron vanishes into the bulk on ultrafast timescales, but an order of magnitude shorter than for molecular resonances well separated from the band onsets \cite{Krane2018, Reecht2019, Reecht2020}. Yet, the increase in the lifetime of the excited state allowed us to resolve vibronic states of the transiently negatively charged TCNQ molecule, albeit only up to $\sim 200$\,meV above the LUMO resonance, where contributions of the MoS$_2$\ bands at $\Gamma$ become strong. Our simulations reproduce the experimental satellite structure of the LUMO very well, although the experimental width prevented us from resolving the individual modes.
\begin{acknowledgements}
We acknowledge discussions with S. Trishin and J. R. Simon. A. Yousofnejad acknowledges a scholarship from the Claussen-Simon Stiftung.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) - project number 182087777 - SFB 951 (A14), and project number 328545488 - TRR 227 (B05).
\end{acknowledgements}
\bibliographystyle{apsrev4-1}
\section{INTRODUCTION}
The microlensing signal of a planet is a short-duration perturbation on the smooth standard light curve of the primary-induced lensing event occurring on a background source star.
The planetary perturbation is induced by the central and planetary caustics, which are typically separated from each other.
The central caustic is always formed close to a host star and thus the perturbation induced by the central caustic occurs near the peak of the lensing light curve.
In events induced by the central caustic, there exists an $s \leftrightarrow 1/s$ degeneracy, where $s$ represents the projected star-planet separation normalized to the Einstein radius of the lens system (Griest \& Safizadeh 1998; Dominik 1999).
The degeneracy arises due to the similarity of the size and shape of the central caustics for $s$ and $1/s$.
Since the difference in the size and shape of the two central caustics increases as the planet/star mass ratio increases, the degeneracy can be broken for events involving giant planets \citep{chung05}.
High-magnification events, for which the source star passes close to the host star, are very sensitive to the detection of a planet via the central caustic \citep{griest98}.
Thus, current microlensing follow-up observations ($\mu$FUN: Dong et al. 2006; PLANET: Albrow et al. 2001; RoboNet: Burgdorf et al. 2007), which intensively monitor events alerted by microlensing survey observations (OGLE: Udalski 2003; MOA: Bond et al. 2002), are biased toward high-magnification events.
For this reason, four of the 10 extrasolar planets found by microlensing (Udalski et al. 2005; Gaudi et al. 2008; Bennett et al. 2008; Dong et al. 2009) were detected via the central caustic.
On the other hand, the planetary caustic is formed away from the host star and thus the perturbation induced by the planetary caustic can occur at any part of the lensing light curve.
The planetary caustic is much bigger than the central caustic, and thus the probability of detecting a planet by the planetary caustic is much higher than by the central caustic.
In spite of the advantage of high detection efficiency, only two of 10 microlensing extrasolar planets (Beaulieu et al. 2006; Sumi et al. 2010) were detected by the planetary caustic.
This is because it is hard to carry out strategic observations due to the unpredictable nature of the planetary caustic perturbation.
However, if future ground- and space-based surveys with high-cadence monitoring, such as KMTNet (Korean Microlensing Telescope Network; B. Park 2010, private communication) and \emph{MPF} (\emph{Microlensing Planet Finder}; D. Bennett 2004), are conducted, the majority of planetary caustic events, including those produced by free-floating planets and wide-separation planets, would be detected.
Hence, understanding the planetary caustic perturbations becomes important.
In addition, for the planetary caustic events, there is a degeneracy in the determination of the planet/star mass ratio (Gould \& Loeb 1992; Gaudi \& Gould 1997), which is derived from the duration of the planetary caustic perturbation relative to the total duration of the event.
To find out how much the degeneracy affects the determination of the mass ratio, an estimate of the degeneracy is needed.
In this paper, we investigate in detail the pattern of the planetary caustic perturbations and estimate the degeneracy in the determination of the planet/star mass ratio in planetary caustic events.
This paper is organized as follows. In Section 2, we briefly describe the planetary lensing.
In Section 3, we investigate perturbation patterns of the planetary caustics for various planetary systems.
We also estimate the degeneracy in the mass ratio in the planetary caustic events.
We summarize the results in Section 4.
\section{PLANETARY LENSING}
Planetary lensing is a special case of binary lensing with a very low mass ratio, and for most of the event duration the lensing behavior is well described by single lensing by the host star.
In this case, the lens equation \citep{witt90} is expressed as
\begin{equation}
\zeta = z - {1\over{1+q}} \left({ 1\over{\bar{z}}} + {q\over{\bar{z} - \bar{z}_p}}\right),
\end{equation}
where $\zeta = \xi + i\eta$ and $z = x + iy$ represent the complex notations of the source and image positions, respectively, $\bar{z}$ denotes the complex conjugate of $z$, $z_p$ is the position of the planet, and $q$ is the planet/star mass ratio.
Here the star is located at the origin, and all lengths are normalized to the Einstein ring radius corresponding to the total mass of the lens system, $\theta_{\rm E}$.
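As a numerical illustration (not the analysis code of any survey), equation (1) can be explored by inverse ray shooting: rays on a uniform image-plane grid are mapped to the source plane, and the magnification is the local ray density relative to an undeflected grid. All parameter values in the sketch below are arbitrary examples:

```python
import numpy as np

# Inverse-ray-shooting sketch of Eq. (1).  All parameter values are
# illustrative examples, not fits to any observed event.
q, s = 1e-3, 1.2                  # planet/star mass ratio and separation
z_p = s + 0j                      # planet on the real axis, star at origin

def lens_map(z):
    """Map image-plane positions z to source positions zeta via Eq. (1)."""
    zb = np.conj(z)
    return z - (1.0 / (1.0 + q)) * (1.0 / zb + q / (zb - np.conj(z_p)))

# Shoot a uniform grid of rays from the image plane ...
n, L = 2001, 2.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x)
zeta = lens_map(X + 1j * Y)

# ... and bin where they land in the source plane.  The magnification is
# the binned ray count divided by that of an undeflected grid.
bins, R = 100, 1.0
H, _, _ = np.histogram2d(zeta.real.ravel(), zeta.imag.ravel(),
                         bins=bins, range=[[-R, R], [-R, R]])
rays_per_cell = (2.0 * R / bins)**2 / (x[1] - x[0])**2
A = H / rays_per_cell             # magnification map A(xi, eta)

print(A[bins // 2, bins // 2])    # grows toward the point-source divergence
```

For the tiny mass ratio used here the map is nearly that of a single lens, which provides a quick sanity check; the planetary caustic appears as a localized distortion near $\xi = s - 1/s$.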
In planetary lensing, the shape and number of the planetary caustics depend on the star-planet separation, unlike the central caustic, which is always single and arrow-shaped.
For close planets ($s < 1$), there are two triangular-shaped caustics with three cusps.
The two caustics are symmetrically displaced perpendicular to the star-planet axis and located at the opposite side to the planet.
The horizontal and vertical widths of the caustic, defined as the separations between the on- and off-axis cusps \citep{han06}, are expressed respectively as
\begin{equation}
\Delta \xi_{\rm c} \sim {{3\sqrt{3} \over 4} \sqrt{q}s^{3}},\\
\Delta \eta_{\rm c} \sim \sqrt{q}s^{3}.\
\end{equation}
The distance between the two caustics is represented by
\begin{equation}
y = {4\sqrt{q}(1-s^{2})^{1/2} \over s} .
\end{equation}
Events for which the source passes between the two caustics produce negative perturbations on the lensing light curves (Gaudi \& Gould 1997; Han \& Chang 2003).
For wide planets ($s > 1$), there is a single asteroid-shaped caustic with four cusps, where the horizontal width along the star-planet axis is generally longer than the vertical width.
The horizontal and vertical widths of the caustic \citep{han06} are expressed respectively as
\begin{equation}
\Delta \xi_{\rm w} \sim {4\sqrt{q}\over s^{2}}\left (1 + {1 \over 2s^{2}}\right ),\\
\Delta \eta_{\rm w} \sim {4\sqrt{q} \over s^{2}}\left (1 - {1 \over 2s^{2}}\right ) .
\end{equation}
The caustic for $s \gg 1$ becomes a symmetric diamond-shaped caustic, while for $s \rightarrow 1$ it becomes more asymmetric.
The caustic is located at the same side as the planet.
Events induced by the wide planetary caustic produce positive perturbations on the lensing light curves (Gaudi \& Gould 1997; Han \& Chang 2003).
The close and wide planetary caustics are always located at a position $s - 1/s$ from the host star along the star-planet axis.
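For concreteness, the scalings of equations (2)-(4) can be evaluated directly; the short sketch below uses the illustrative values $q = 10^{-3}$ with $s = 1/1.2$ and $s = 1.2$ (the configurations adopted in Figure 1), all lengths in units of $\theta_{\rm E}$:

```python
import math

# Numerical evaluation of the caustic-size scalings in Eqs. (2)-(4) for
# an illustrative mass ratio q = 1e-3 (lengths in units of theta_E).
q = 1e-3

def close_caustic(s):
    """Widths of one triangular close caustic and the gap between the pair."""
    dxi = (3.0 * math.sqrt(3.0) / 4.0) * math.sqrt(q) * s**3   # Eq. (2)
    deta = math.sqrt(q) * s**3                                 # Eq. (2)
    gap = 4.0 * math.sqrt(q) * math.sqrt(1.0 - s**2) / s       # Eq. (3)
    return dxi, deta, gap

def wide_caustic(s):
    """Horizontal and vertical widths of the single wide caustic, Eq. (4)."""
    pref = 4.0 * math.sqrt(q) / s**2
    return pref * (1.0 + 1.0 / (2.0 * s**2)), pref * (1.0 - 1.0 / (2.0 * s**2))

print(close_caustic(1.0 / 1.2))   # close configuration of Figure 1
print(wide_caustic(1.2))          # wide configuration of Figure 1
```

The numbers confirm the qualitative statements of the text: for the same $|{\log s}|$, the wide caustic is several times larger than each close caustic, and its horizontal width exceeds its vertical width.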
The lensing behavior of the planetary caustic can be described by Chang-Refsdal lensing, which represents single lensing superposed on a uniform background shear (Chang \& Refsdal 1979; Gould \& Loeb 1992; Gaudi \& Gould 1997). In this case, the single lensing is produced by the planet itself and the shear is induced by the host star.
The lens equation of the Chang-Refsdal lensing is represented by
\begin{equation}
\zeta = z - {1\over{\bar{z}}} + \gamma \bar{z} \ ;\ \ \ \gamma = {1\over
s^2},
\end{equation}
where $\gamma$ is the shear.
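Within this approximation the caustic can be traced directly: the critical curve of equation (5) is the locus $|1/\bar{z}^{2} + \gamma| = 1$, and mapping it through equation (5) yields the caustic. The sketch below does this for an illustrative shear; note that in equation (5) lengths are normalized to the Einstein radius of the planet itself:

```python
import numpy as np

# Trace the Chang-Refsdal caustic of Eq. (5).  The critical curve obeys
# |1/zbar^2 + gamma| = 1; mapping it through Eq. (5) gives the caustic.
# The shear value is an illustrative example (s = 1.2).
gamma = 1.0 / 1.2**2

phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
zbar_sq = 1.0 / (np.exp(1j * phi) - gamma)   # from 1/zbar^2 + gamma = e^{i phi}
zbar = np.concatenate([np.sqrt(zbar_sq), -np.sqrt(zbar_sq)])  # both roots
z = np.conj(zbar)
zeta = z - 1.0 / zbar + gamma * zbar         # caustic points via Eq. (5)

width_x = zeta.real.max() - zeta.real.min()  # along the star-planet axis
width_y = zeta.imag.max() - zeta.imag.min()
print(width_x, width_y)                      # horizontal exceeds vertical
```

For $\gamma < 1$ this produces the single four-cusp caustic of the wide topology, elongated along the shear direction, consistent with the asymmetry described above; the approximate widths of equation (4) are recovered only in the small-$\gamma$ limit.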
\begin{figure}
\includegraphics[width=84mm]{fig1.eps}
\caption{
Magnification excess maps of close and wide planetary systems with various mass ratios. The separations of close and wide systems are $s = 1/1.2$ and $s = 1.2$, respectively. The coordinates ($\xi$, $\eta$) represent the axes that are parallel with and normal to the star-planet axis and are centered at the
planetary caustic center. The color changes into darker scales when the excess is $|\epsilon| = 2\%,\ 4\%,\ 8\%,\ 16\%,\ 32\%,\ 64\%,\ \rm {and},\ 96\%$, respectively. The straight lines with arrows represent the source trajectories for which light
curves of the resulting events are presented in Figure 3.
}
\end{figure}
\section{PLANETARY CAUSTIC PERTURBATION}
To investigate the perturbation pattern of the planetary caustic, we produce magnification excess maps of various planetary systems.
The magnification excess is defined by
\begin{equation}
\epsilon = {A-A_{0}\over A_{0}},
\end{equation}
where $A$ and $A_0$ are the planetary and single lensing magnifications, respectively.
Figures 1 and 2 show the magnification excess maps of planetary systems with various mass ratios and separations.
In each map, the regions with blue and brown-tone colors represent the areas where the excess is negative and positive, respectively.
The color changes into darker scales when the excess is $|\epsilon| = 2\%,\ 4\%,\ 8\%,\ 16\%,\ 32\%,\ 64\%,\ \rm {and},\ 96\%$, respectively.
Since the excess represents the perturbation induced by the planetary caustic, a larger excess indicates a stronger perturbation.
In Figure 1, the top and second rows show close and wide planetary systems with $s = 1/1.2$ and $s = 1.2$, where each column corresponds to a different mass ratio.
In Figure 2, the top and second rows show close and wide planetary systems with different separations, where each panel has a common mass ratio of $q = 10^{-3}$.
Considering typical microlensing events toward the Galactic bulge, we assume that $M_{\rm L} = 0.3\ M_{\odot}$, $D_{\rm L} = 6\ {\rm kpc}$, $D_{\rm S} = 8\ {\rm kpc}$, and the source is a main-sequence star with a radius of $R_{\star} = 1.0\ R_{\odot}$.
Then, the corresponding angular Einstein radius is $\theta_{\rm E} = 0.32\ \rm{mas}$, and the source radius normalized to the Einstein radius is $\rho_{\star} = 1.8\times10^{-3}$.
\begin{figure}
\includegraphics[width=84mm]{fig2.eps}
\caption{
Magnification excess maps of close and wide planetary systems with various
separations.
All the systems have a common mass ratio of $q = 10^{-3}$.
}
\end{figure}
From the analysis of the maps, we find the following properties of the planetary caustic perturbations.
\begin{enumerate}
\item
Planetary systems with the same separation basically produce perturbations of constant strength regardless of $q$, but the duration of each perturbation scales with $\sqrt{q}$.
\item
Close planetary systems with the same separation produce essentially the same negative perturbations between two triangular-shaped caustics regardless of $q$, but the duration of the perturbations scales with $\sqrt{q}$.
\item
For planetary systems with the same mass ratio, the positive perturbations become stronger as the caustic shrinks with the increase (for $s > 1$) or the decrease (for $s < 1$) of $s$.
On the other hand, the negative perturbations become weaker as the caustic shrinks.
\end{enumerate}
The fundamental reason for the three properties above is that the lens equation of the binary lens near the location of the planet is approximated by that of a Chang-Refsdal lens, which is a point mass superposed on a uniform background shear \citep{gould92, gaudi97}.
In this case, the point mass is the planet and the shear is induced by the host star.
Thus, if the separation is the same, then the shear, $\gamma \sim 1/s^2$, which causes the perturbation, is also the same.
The three properties imply that the planetary caustic perturbation depends only on the star-planet separation.
This fact has been discussed in \citet{gould92}, \citet{gaudi97}, and \citet{dominik99}.
However, they did not summarize the properties of the planetary caustic perturbation in the detail presented here.
In addition, the third property related to the positive perturbation has never been reported.
For planetary systems for which the source is similar to or bigger than the planetary caustic, the perturbations around the caustic are significantly washed out by the increase of finite-source effects and thus become weak.
However, since the magnification patterns of the planetary caustics at the same separation are basically constant, as long as the ratio of the source size to the caustic size is the same, the first and second properties above are always maintained even in planetary systems with significant finite-source effects \citep{gould97}.
The third property is also maintained, because the positive perturbations become stronger as the caustic shrinks.
Therefore, all three properties hold even in planetary systems with significant finite-source effects, as long as the ratio of the source size to the caustic size is the same.
\begin{figure}
\includegraphics[width=84mm]{fig3.eps}
\caption{
Light curves for the source trajectories presented in Figure 1. In the upper panel, solid and dashed curves represent the light curves of the planetary and single lensing events, respectively. The lower panel shows the residuals from the single lensing magnification; the horizontal line indicates a magnification excess of $\epsilon = 0$.
}
\end{figure}
\begin{table}
\caption{Degeneracy Range for Close Planetary Lensing Systems.\label{tbl-one}}
\begin{tabular}{lccc}
\hline\hline
& & \multicolumn{2}{c}{Degeneracy Range ($\Delta{q}/q_{0}$)}\\ && \multicolumn{2}{c}{(\%)} \\ \cline{3-4}
(a) & Planet/Star Mass Ratio ($q_0$) & $\rho_\star$ = 0.0018 & $\rho_\star$ = 0.0054\\
\hline
& $10^{-5}$ & 14.7 & 40.0 \\
& $5\times10^{-5}$ & 10.1 & 17.8 \\
& $10^{-4}$ & 7.3 & 12.7 \\
& $5\times10^{-4}$ & 5.4 & 8.9 \\
& $10^{-3}$ & 4.0 & 6.2 \\
& $5\times10^{-3}$ & 1.4 & 2.3 \\
\hline
(b) & Planet-Star Separation ($s$) & &\\
\hline
& $1/1.3$ & 16.7 & 18.2 \\
& $1/1.5$ & 69.6 & 69.6 \\
& $1/1.7$ & 199.0 & 185.0 \\
& $1/1.9$ & 877.0 & 881.0 \\
\hline
(c) & Source Trajectory Angle ($\alpha$) & &\\
\hline
&$20^\circ$ & 20.8 & 22.2 \\
&$40^\circ$ & 4.0 & 6.4 \\
&$60^\circ$ & 0.5 & 0.8 \\
&$80^\circ$ & 0.4 & 1.4 \\
\hline
\end{tabular}
\medskip
Notes. Degeneracy range with $|\delta| \le 5\%$.
(a) The planet-star separation is $s=1/1.2$. (b) The planet/star mass ratio is $q_{0} = 10^{-3}$. The source trajectory angle from the planet-star axis, $\alpha$, for (a) and (b) is 40$^{\circ}$. (c) The mass ratio and separation are $q_{0} = 10^{-3}$ and $s = 1/1.2$. Here the adopted source radii are $R_\star = 1$ and $3\ R_\odot$ for the main-sequence and turnoff stars, respectively. The normalized source radii are determined by an Einstein radius of $\theta_{\rm E} = 0.32\ \rm{mas}$ that corresponds to the value of the typical Galactic bulge event with $M_{\rm L} = 0.3\ M_{\odot}$, $D_{\rm L} = 6\ {\rm kpc}$, and
$D_{\rm S} = 8\ {\rm kpc}$.
\end{table}
Figure 3 shows the light curves resulting from the source trajectories presented in Figure 1.
Figure 3 describes well the first and second properties of the planetary caustic perturbations.
Close planetary systems with the same separation produce negative perturbations of the same depth in the light curves, while wide planetary systems produce positive perturbations whose U-shaped troughs reach the same level.
The widths of the negative and positive perturbations increase with $q$, because the distance between the two close planetary caustics and the size of the wide planetary caustic both increase with $q$.
Since the width of the perturbation is used to determine the mass ratio \citep{gould92, gaudi97}, the first and second properties above could cause a degeneracy in the determination of the mass ratio.
To quantify how much this degeneracy affects the determination of $q$, we estimate its range.
Tables 1 and 2 show the degeneracy ranges of close and wide planetary lensing systems, respectively.
In each table, the degeneracy range is presented for various mass ratios (a), separations (b), and source trajectory angles (c) with two kinds of source radii.
The degeneracy range represents the range with $|\delta| \le 5\%$, where $\delta$ is defined by
\begin{equation}
\delta = {A_{\rm p} - A_{\rm p,0}\over{A_{\rm p,0}}},
\end{equation}
where $A_{\rm p}$ and $A_{\rm p,0}$ are the magnifications of the planetary lensing events with trial and fiducial mass ratios, respectively.
In the tables, $q_0$ is the fiducial mass ratio and $\Delta q$ is the difference of maximum and minimum trial mass ratios with $|\delta| \le 5\%$.
As shown in the two tables, the degeneracy range decreases as $q$ increases and as $|\log s|$ and the finite-source effect decrease.
This implies that $q$ can be determined more precisely as $q$ increases and $|\log s|$ decreases.
For close planetary systems, the degeneracy range dramatically increases as the separation decreases.
This is because the gap between the two close caustics becomes wider and the perturbation becomes weaker as the separation decreases, so that the source passes through only weak perturbation regions with $|\epsilon| \le 5\%$.
However, as shown in case (c) of Table 1, the degeneracy range decreases considerably as the trajectory angle increases, i.e., as the source approaches the caustic.
On the other hand, for wide planetary systems, the degeneracy range is estimated for caustic-crossing events.
The degeneracy range increases as the trajectory angle increases.
This is because the region close to the planet-star axis produces strong perturbations relative to other regions (see the bottom panel in Figure 1).
Therefore, the degeneracy range of events for which the source passes close to the caustic is usually very narrow, and thus it would not significantly affect the determination of $q$.
The degeneracy can also be easily resolved by the full coverage of the perturbation.
However, if the perturbation is covered by only a few data points, it is difficult to determine the exact mass ratio.
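The degeneracy range can be illustrated with a toy light-curve model in which, following the first property above, the perturbation duration scales with $\sqrt{q}$. The `toy_magnification` function below is an illustrative stand-in for a full binary-lens computation, not the model used to produce Tables 1 and 2:

```python
import math

def toy_magnification(t, q):
    """Toy light curve: unit baseline plus a bump whose duration
    scales with sqrt(q), mimicking the first property above."""
    width = math.sqrt(q)
    return 1.0 + 0.5 * math.exp(-((t / width) ** 2))

def degeneracy_range(q0, times, tol=0.05, n_trial=2000):
    """Fraction (Delta q)/q0 of trial mass ratios whose light curve stays
    within |delta| <= tol of the fiducial one at all sampled times,
    following the definition of delta above."""
    accepted = []
    for k in range(n_trial):
        q = q0 * (0.5 + k / n_trial)          # scan q in [0.5 q0, 1.5 q0)
        delta_max = max(
            abs(toy_magnification(t, q) - toy_magnification(t, q0))
            / toy_magnification(t, q0)
            for t in times
        )
        if delta_max <= tol:
            accepted.append(q)
    return (max(accepted) - min(accepted)) / q0 if accepted else 0.0

times = [0.05 * i for i in range(-40, 41)]
rng_5pct = degeneracy_range(1e-3, times, tol=0.05)
```

A stricter tolerance necessarily yields a narrower (or equal) degeneracy range, since the acceptance condition only becomes harder to satisfy.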
\begin{table}
\caption{Degeneracy Range for Wide Planetary Lensing Systems.\label{tbl-two}}
\begin{tabular}{lccc}
\hline\hline
& & \multicolumn{2}{c}{Degeneracy Range ($\Delta{q}/q_{0}$)}\\ && \multicolumn{2}{c}{(\%)} \\ \cline{3-4}
(a) & Planet/Star Mass Ratio ($q_0$) & $\rho_\star$ = 0.0018 & $\rho_\star$ = 0.0054\\
\hline
& $10^{-5}$ & 47.0 & 82.0 \\
& $5\times10^{-5}$ & 13.8 & 55.4\\
& $10^{-4}$ & 7.3 & 31.0 \\
& $5\times10^{-4}$ & 1.9 & 8.8 \\
& $10^{-3}$ & 0.9 & 5.8 \\
& $5\times10^{-3}$ & 0.3 & 2.0 \\
\hline
(b) & Planet-Star Separation ($s$) & &\\
\hline
& $1.3$ & 1.0 & 4.9 \\
& $1.5$ & 0.9 & 3.9 \\
& $1.7$ & 2.3 & 4.3 \\
& $1.9$ & 3.4 & 5.9 \\
\hline
(c) & Source Trajectory Angle ($\alpha$) & &\\
\hline
&$20^\circ$ & 1.0 & 5.1 \\
&$40^\circ$ & 0.9 & 6.7 \\
&$60^\circ$ & 1.5 & 8.5 \\
&$80^\circ$ & 1.9 & 10.5 \\
\hline
\end{tabular}
\medskip
Note. Except for $s = 1.2$ in (a) and (c), all other parameters are the same as in Table 1.
\end{table}
\section{CONCLUSION}
We have investigated the pattern of planetary caustic perturbations.
From this, we found three properties of the planetary caustic perturbations.
First, planetary systems with the same star-planet separation basically produce perturbations of constant strength regardless of $q$, but the duration of each perturbation is proportional to $\sqrt{q}$.
Second, close planetary systems with the same separation produce essentially the same negative perturbations between two triangular-shaped caustics regardless of $q$, but the duration of the perturbation scales with $\sqrt{q}$.
Third, the positive perturbations for planetary systems with the same mass ratio become stronger as the caustic shrinks with the increase ($s > 1$) or the decrease ($s < 1$) of $s$, while the negative perturbations become weaker.
We have estimated the degeneracy in the determination of $q$ that occurs in planetary caustic events.
From this, we found that $q$ can be more precisely determined as $q$ increases and $|\log s|$ decreases.
We also found that the degeneracy range of events for which the source passes close to the caustic is usually very narrow, and thus it would not significantly affect the determination of $q$.
\section*{Acknowledgments}
We would like to thank A. Gould for making helpful comments and suggestions.
\section{Introduction}
\IEEEPARstart{R}{obots}, in particular mobile robots, are expected to perform complex and highly dynamic maneuvers in unknown environments~\cite{mobilerobotssurvey,Thrun2007, scaramuzza, gehring, hutter}. This can be achieved through traditional trajectory-based optimal control approaches, such as trajectory optimization~\cite{trajectorPlanningForRobots}. Trajectory optimization methods are well established and give feasible trajectories which exhibit complex and dynamic behaviors \cite{skaterbots, Bern2019TrajectoryOF, Zimmermann_2019}.
The goal of this work is to leverage these techniques to accomplish highly dynamic tasks in complex settings with robotic systems.
This imposes a challenge because the most successful trajectory optimization methods require an accurate enough differentiable model of the robot's dynamics.
For highly dynamic and complex settings, it is difficult, if not impossible, to obtain such parametric models.
As an example, for race cars with high speeds and cornering accelerations, a model of the tire force and moment characteristics is required. Obtaining this model is very challenging, especially because such a model depends on multiple external factors like the temperature~\cite{tiremodel}.
System identification~\cite{ASTROMSystemID, ljungSystemID, modelRobotLearning} is a valuable technique used to learn the model of a system from data. Using data collected on the system, universal approximators such as multilayer feedforward neural networks~\cite{universalapproximation} can be trained to capture its dynamics. Such a dynamics model can then be used for trajectory optimization. In practice, this imposes several challenges: \emph{(i)} recording data on the system is time-consuming and can cause wear and tear to the system, and \emph{(ii)} trajectory optimization schemes generally also require derivatives of the learned model with respect to the control vectors. These can be obtained for simple neural network architectures, but such architectures may fail to capture the true dynamics appropriately. On the other hand, for higher-capacity architectures, overfitting the data might lead to noisy derivatives.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figures/teaser/rc_car.jpg}
\caption{We aim to learn a physical model of a dynamic \gls{rc} car and use the model in gradient-based trajectory optimization. Thanks to a high torque motor, the rear-wheel-driven \gls{rc} car is able to perform drifting turns.}
\label{fig:drift_car}
\end{figure}
In this work, we investigate how high-capacity models like neural networks can be combined with traditional trajectory optimization schemes to perform highly \textit{complex} and \textit{dynamic} maneuvers with robots. We conduct these investigations on the example of
a \gls{rc} car for which we aim to perform \emph{dynamic} drifting behaviors (see \cref{fig:drift_car}). The drifting maneuvers of a car are complex and difficult to model~\cite{extremecarspaper, rccarmodel}. Furthermore, the models typically require tuning that needs to be repeated for different grounds and tires.
Therefore, we instead collect data directly on the car and use it to train a neural network model that captures its dynamics. We propose an intuitive state-space representation of the system which allows us to learn a sufficiently accurate model from only a small corpus of data, consisting of \emph{15 minutes} of driving by a human operator. For the model selection, we leverage a simulation of the car to pick a model which \emph{(i)} captures the dynamics of the car well enough and \emph{(ii)} gives smooth derivatives that can be successfully used for trajectory optimization.
Our results show that by considering \textit{continuous} activation functions even a simple neural network model can be used for traditional gradient-based trajectory optimization methods. Furthermore, our results on hardware demonstrate that trajectory optimization with our learned model is able to perform complex tasks such as drifting on the real robot, which according to \cite{rccarmodel} usually requires expert drivers.
This highlights the benefits of the synergy between traditional trajectory-based control methods and machine learning.
\section{Discussion and Future Work}
\label{sec:discussion}
Based on our experiments, we conclude that even simple network architectures can be used to reliably model the dynamics of a fast-driving RC car.
Furthermore, the network and its derivatives can be successfully leveraged for gradient-based trajectory optimization.
We showed that our pipeline generates feasible optimal control trajectories that can be directly applied to the physical car.
Even though model inaccuracies can accumulate over the trajectory horizon, we note that the predicted behavior can nonetheless be reliably mimicked by the physical car.
Furthermore, we addressed these inaccuracies with feedback control, enabling the car to complete a race track with dynamic maneuvers.
During our investigations to find a suitable activation function (see section \ref{activation_function}), we hypothesized that the discontinuity of \gls{relu}'s gradients could lead to noisy control sequences when applied for trajectory optimization.
This was the main reason why we explored \gls{gelu} as a possible alternative.
Indeed, we validated this hypothesis throughout our experiments.
As depicted in \cref{fig:smoothness}, the control sequence resulting from the network with \gls{gelu} activations is considerably smoother than the one obtained using \gls{relu} activations.
We trace this back to the gradient used for trajectory optimization, which is much noisier for \gls{relu} as well.
Nonetheless, based on these observations we are currently unable to deduce which network architecture captures gradients of the real dynamics more accurately.
Even though our proposed network results in smooth input sequences and achieves desirable performance for trajectory optimization, it might still fail to precisely represent gradients of the real system.
Because the real gradients are not available during training on the real hardware, this is difficult to deduce.
We speculate that receiving accurate (rather than just smooth) gradients might improve the convergence behavior of the trajectory optimization technique.
A full-on investigation on this topic, including the analysis of structured learning techniques for capturing robot dynamics~\cite{lagrangianNNs}, is an exciting avenue for future work.
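The qualitative difference between the two activations is easy to verify: the derivative of ReLU jumps from 0 to 1 at the origin, whereas the derivative of GELU$(x) = x\,\Phi(x)$, namely $\Phi(x) + x\,\varphi(x)$ with $\Phi$ and $\varphi$ the standard normal CDF and PDF, is continuous everywhere. A minimal sketch:

```python
import math

def relu_grad(x):
    """Derivative of ReLU: a step function, discontinuous at x = 0."""
    return 1.0 if x > 0.0 else 0.0

def gelu_grad(x):
    """Derivative of GELU(x) = x * Phi(x): Phi(x) + x * phi(x),
    with Phi/phi the standard normal CDF/PDF; smooth everywhere."""
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return cdf + x * pdf

# Gradient jump across the origin: ReLU jumps by 1, GELU barely moves.
eps = 1e-4
relu_jump = relu_grad(eps) - relu_grad(-eps)
gelu_jump = gelu_grad(eps) - gelu_grad(-eps)
```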
\begin{figure} [h]
\centering
\includegraphics[width=1\linewidth]{figures/smoothness/v2/smoothness.pdf}
\caption{Comparison of gradients $\dv{\l}{\u}$ and control sequences $\u$ for neural network architectures with \gls{relu} and \gls{gelu} activations. The network with \gls{relu} activations results in noisier (i.e. more fluctuating) gradients and input sequences than the network with \gls{gelu} activations. }
\label{fig:smoothness}
\end{figure}
We further note that traditional \gls{mbrl} methods update the model with new rollouts collected after a policy evaluation step.
In this work, we focused on learning the dynamics with minimal effort on data collection and training.
Another possible extension to our work is to leverage continual learning to explore how additional data from the policy evaluation can improve our model's performance.
We note that trajectory optimization techniques are known to exploit the inaccuracies of a model~\cite{PETS, mbpoTLDM}. This behavior was observed in our offline trajectory optimization experiments (\cref{offlineTO}). Approaches such as~\cite{PETS, hucrl} leverage the epistemic uncertainty of a family of models~\cite{model_uncertainties} to hinder such exploitation.
We consider integrating such approaches into our framework in the future.
Finally, we are excited to investigate how our approach generalizes to other robotic systems, particularly for cases where no analytic model is available (such as for example~\cite{gofetch}).
\section{Related Work}
Nonlinear system identification is also used in \gls{mbrl}~\cite{mbrlreview}. Compared to model-free reinforcement learning methods, \gls{mbrl} approaches are more sample efficient~\cite{rlsurvey}. This is extremely important when working directly with real-world robots since it prevents hardware deterioration.
\Gls{mbrl} has been successfully applied to a variety of robot control tasks in simulation and hardware \cite{PILCO, Deisenroth2015, Kamthe2018, Kabzan2019LearningBasedMP, gofetch, nagabandi2018, PETS, nagabandi2020, hucrl}. In particular, \cite{PILCO, Deisenroth2015, Kamthe2018, Kabzan2019LearningBasedMP, gofetch} use gradient-based optimization schemes to obtain the control policy. Nonetheless, these methods either use parameterized models with hand-picked features~\cite{gofetch} or \glspl{gp}~\cite{Rasmussen06gaussianprocesses} to learn the system dynamics. Hand-picking features is a time-consuming, often unintuitive, and sometimes also an impossible process. \Glspl{gp} on the other hand are powerful machine learning models that generalize well in the low-data regime. But they find it challenging to learn very complex and discontinuous dynamics~\cite{Calandra2016}, which are more and more common in robotics \cite{oneshotlearning}. Therefore, as an alternative, neural networks have been considered \cite{nagabandi2018, PETS, nagabandi2020,hucrl}. They are capable of capturing complex behaviors and also scale well for high-dimensional settings. However, apart from a few notable exceptions~\cite{henaff2017model,predictionandcontrol ,bernSoftRobotControl2020,mbpoTLDM,dreamer}, these methods
mostly use population-based search heuristics, such as the cross-entropy method~\cite{CEM}, rather than gradient-based optimization schemes, to obtain a control policy.
Population-based methods have two main advantages over gradient-based schemes: \emph{(i)} they search for the global optimum and \emph{(ii)} they do not suffer from exploding or vanishing gradients over the rollouts~\cite{gradientsexploding}. Nevertheless, these methods are costly and do not scale well with the dimension of the horizon and action space, which hinders their application on real-world robots, where fast computation times are desired for feedback control (e.g., \gls{mpc}~\cite{mpc}). Therefore, a gradient-based optimization scheme is often preferred for such applications. This requires additional considerations for the modeling. In particular, models that not only capture the dynamics well but also provide useful derivatives with respect to the inputs are desired.
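For reference, the cross-entropy method mentioned above amounts to iteratively refitting a sampling distribution to an elite subset of candidates. A minimal, generic sketch (parameter names and the toy problem are our own illustrative choices):

```python
import math
import random

def cem_minimize(cost, dim, iters=30, pop=200, elite_frac=0.1, seed=0):
    """Cross-entropy method: sample candidates from a Gaussian, keep the
    elite fraction, and refit the Gaussian to the elites each iteration."""
    rnd = random.Random(seed)
    mu = [0.0] * dim
    sigma = [1.0] * dim
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        samples = [[rnd.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(pop)]
        samples.sort(key=cost)
        elite = samples[:n_elite]
        mu = [sum(e[d] for e in elite) / n_elite for d in range(dim)]
        sigma = [max(1e-3, math.sqrt(sum((e[d] - mu[d]) ** 2 for e in elite)
                                     / n_elite)) for d in range(dim)]
    return mu

# Toy problem: minimize ||x - (1, -2)||^2; the optimum is (1, -2).
best = cem_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, dim=2)
```

Note the cost of each iteration: `pop` full evaluations of the objective, which for trajectory optimization means `pop` complete rollouts — the scaling issue discussed above.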
An interesting approach is proposed in~\cite{GPSNN}, where local linear time-varying models are learned using linear regression with Gaussian Mixture Models and combined with linear-Gaussian controllers. Here, the optimal controller parameters are obtained using the iterative linear-Gaussian regulator (iLQG)~\cite{ilQG} solver.
However, the proposed models only represent the system locally. In our work, we demonstrate that even a ``simple" neural network model with continuous activation functions is able to capture complex and highly nonlinear dynamics. Furthermore, our model also provides excellent performance in combination with gradient-based optimization on the \gls{rc} car. This motivates further investigation on other dynamical systems for future work.
\section{Method}
\label{sec:method}
Our overarching goal is to find an optimal control sequence for our dynamical system.
We formulate this as a \emph{time-discretized} trajectory optimization problem:
Let $\x := (\x_1, ..., \x_n)$ and $\u := (\u_0, ..., \u_{n-1})$ be the stacked state and control vectors for a total of $n$ trajectory steps.
Given a known initial state $\x_0$, we write the trajectory optimization problem as
\begin{subequations}
\begin{alignat}{3}
&\!\!\min_{\x, \hspace{0.05cm} \u} & \quad & \l(\x, \u) \label{eq:trajectoryOptimizationProblem_Cost} \\
& \text{s.t.} & & \x_{i+1} = \f(\x_i, \u_i), \hspace{0.25cm} \forall i = 0, ..., n-1
\label{eq:trajectoryOptimizationProblem_Dynamics}
\end{alignat}
\label{eq:trajectoryOptimizationProblem}
\end{subequations}
with total cost $\l$ and state transition function $\f$.
The latter models the dynamics of the physical system we want to control.
As discussed before, we aim to model our dynamical system using a neural network, which we will discuss in the following.
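To make the structure of this problem concrete, the following is a minimal single-shooting sketch on a toy 1-D double-integrator, with finite-difference gradients standing in for the analytic derivatives of a learned model. The toy dynamics, cost weights, and step sizes are illustrative choices, not those used for the car:

```python
def rollout(x0, v0, controls, h=0.1):
    """Integrate toy double-integrator dynamics x'' = u (semi-implicit Euler)."""
    x, v = x0, v0
    for u in controls:
        v += h * u
        x += h * v
    return x

def cost(controls, x_target=1.0, w_reg=1e-3):
    """Terminal tracking cost plus a small control regularizer."""
    x_final = rollout(0.0, 0.0, controls)
    return (x_final - x_target) ** 2 + w_reg * sum(u * u for u in controls)

def optimize(controls, iters=200, lr=0.5, eps=1e-6):
    """Plain gradient descent with finite-difference gradients,
    standing in for the analytic derivatives of a learned model."""
    for _ in range(iters):
        base = cost(controls)
        grad = []
        for i in range(len(controls)):
            perturbed = list(controls)
            perturbed[i] += eps
            grad.append((cost(perturbed) - base) / eps)
        controls = [u - lr * g for u, g in zip(controls, grad)]
    return controls

u_opt = optimize([0.0] * 10)
final_x = rollout(0.0, 0.0, u_opt)
```

Here the dynamics constraint is enforced by construction (the rollout), so only the controls are decision variables; with a learned model, the finite-difference loop is replaced by backpropagation through the network.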
\subsection{Learned Car Model} \label{learned_car_model}
We investigate our methodology based on a specific dynamical system, namely a radio-controlled (RC) toy car (see Fig. \ref{fig:drift_car}).
The intuitive nature of this system enables us to run investigations in both simulation and the physical world.
We note that although this system is comparably simple, it can exhibit highly dynamic behaviors (e.g. when performing drifting maneuvers), and we aim to capture these behaviors in our learned model.
We assume that the toy car exclusively drives on a planar ground.
The state $\x_i$ consists of three degrees of freedom, namely the car's planar position as well as its orientation.
The control input $\u_i$ is two-dimensional, and consists of the acceleration applied on the back wheels (i.e. the "throttle") and the steering angle of the front wheels.
We note that since we aim to learn our model from data, it is impractical to express the dynamics in terms of world coordinates, as they are not a pose-invariant parameterization.
Instead, we express the dynamics in terms of local velocities in the coordinate frame of the car.
This reduces the state space the neural network has to learn.
We denote these local velocities at trajectory step $i$ with $\v_i$, and consequently choose a state transition function (as given by Eq. (\ref{eq:trajectoryOptimizationProblem_Dynamics})) as
\begin{equation}
\hat{\x}_{i+1} = \x_i + h \f_{\theta}(\v_i, \u_i)
\label{eq:learnedStateTransition}
\end{equation}
with learned dynamics model $\f_{\theta}$, parametrized by $\theta$, and time step $h$ between two consecutive trajectory steps.
The resulting optimization problem using the learned model is then:
\begin{subequations}
\begin{alignat}{3}
&\!\!\min_{\hat{\x}, \hspace{0.05cm} \u} & \quad & \l(\hat{\x}, \u) \label{eq:TO_cost_with_net} \\
& \text{s.t.} & & \hat{\x}_{i+1} = \hat{\x}_{i} + h \f_{\theta}(\hat{\v}_i, \u_i), \hspace{0.25cm} \forall i = 0, ..., n-1 \label{eq:TO_net_Dynamics}
\end{alignat}
\label{eq:TO_problem_net}
\end{subequations}
where $\hat{\x} := (\hat{\x}_1, ..., \hat{\x}_n)$ is the concatenation of predicted states using the learned model.
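For intuition, the rollout implied by Eq. (\ref{eq:learnedStateTransition}) can be sketched as follows. Here `f_theta` is a hypothetical stand-in for the learned network, and we assume (as described for this state parameterization) that the predicted local velocities are rotated by the current heading before being integrated into the global state:

```python
import numpy as np

def rollout(x0, controls, f_theta, h):
    """Roll out the learned dynamics x_{i+1} = x_i + h * R(theta_i) f_theta(v_i, u_i).

    x0: initial state (px, py, theta); controls: (n, 2) array of (throttle, steer).
    f_theta predicts local velocities (v_x, v_y, omega) in the car frame; the
    planar components are rotated into the world frame before integration.
    """
    x = np.asarray(x0, dtype=float)
    v_local = np.zeros(3)           # assume the car starts at rest
    states = [x.copy()]
    for u in controls:
        v_local = f_theta(v_local, u)
        c, s = np.cos(x[2]), np.sin(x[2])
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        x = x + h * R @ v_local
        states.append(x.copy())
    return np.array(states)

# toy stand-in for the learned network: drive forward, turn with the steer input
f_demo = lambda v, u: np.array([u[0], 0.0, u[1]])
traj = rollout([0.0, 0.0, 0.0], np.tile([1.0, 0.0], (10, 1)), f_demo, h=0.05)
```

This is only a sketch of the integration scheme; the actual network and its inputs are described in the following paragraphs.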
As we want our system to perform dynamic maneuvers that involve loss of traction and drifting (see Fig. \ref{fig:drift_car}), we use an RC car with a high torque motor.
As it is more convenient to record data and analyze the performance of a learned model in simulation, we employ a simple parameterized model that roughly matches the behavior of the real car.
This allows us faster iterations when investigating different network architectures.
Furthermore, it acts as a baseline when comparing the performance of different model choices in the context of trajectory optimization (section \ref{sec:model_selection}).
We emphasize that we only use this model to analyze the efficacy of individual network design choices, and not to generate data to learn the behavior of the physical car.
We provide a brief and intuitive description of the employed simulation model.
Starting from simple driving dynamics (see e.g. \cite{ackerman_dynamics}), we incorporate drifting behaviors as follows:
First, we calculate the current local velocity for each wheel.
Then, we compute the acceleration needed at each wheel to satisfy the no-slip condition.
If the resulting acceleration per wheel exceeds a predefined static friction coefficient, it results in a loss of traction, which we incorporate by scaling down the accelerations to a lower dynamic friction coefficient.
Finally, we apply the resulting changes to the car's state.
We note that this simulation model was not designed to be physically accurate, but we find that it matches the physical counterpart well enough to make informed decisions for the network choices used to learn the behavior of the real car.
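The per-wheel traction logic of the simulation model can be sketched as follows. The function and its parameters are illustrative (friction coefficients and gravity are folded into single acceleration limits), not the exact simulator code:

```python
import numpy as np

def wheel_traction(accel_no_slip, mu_static, mu_dynamic):
    """Clamp one wheel's no-slip acceleration with a simple friction model.

    If the acceleration required to enforce the no-slip condition exceeds the
    static limit, the wheel loses traction and only transmits the (lower)
    dynamic limit. accel_no_slip is a 2D vector in the ground plane.
    """
    a = np.asarray(accel_no_slip, dtype=float)
    mag = np.linalg.norm(a)
    if mag <= mu_static:
        return a, False                      # sticking: full no-slip acceleration
    return a * (mu_dynamic / mag), True      # sliding: scaled down, traction lost

a, slipping = wheel_traction([3.0, 4.0], mu_static=4.0, mu_dynamic=2.0)
```

Applying this clamp independently at each wheel and integrating the resulting accelerations yields the drift-capable behavior described above.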
To train the model of the physical car, we collect data directly in the real world.
For this, we use a commercial, room-scale motion capture system to localize the car at a framerate of 240 Hz in an area of $5 \times 10$ meters.
We recorded 15 minutes of training data plus 1 minute of test data.
We recorded 40 additional validation trajectories of 3s each that cover specific aspects of the RC car's dynamics, such as standard driving with traction as well as intentional loss of traction (i.e. drifting).
These trajectories are used to guide the selection of trained neural networks for the physical car, and we will further discuss this in~\cref{sec:model_selection}.
We note that typical \gls{mbrl} methods usually collect additional data after evaluating a control policy to further update and improve the model.
In this work, we refrain from doing so, as we are interested in investigating how accurate a suitable dynamics model can be learned with minimal effort in terms of data collection and training.
We encounter significant noise in our recorded data, especially concerning the direction of the RC car. Therefore, during preprocessing, we employ a Savitzky‐Golay smoothing filter \cite{pressSavitzkyGolaySmoothing1990} to reduce the noise.
Then, we calculate local velocities based on the captured global pose using finite difference.
Together with the recorded control input $\vec{u}_i$, this constitutes the data we train on.
For this, we use feed-forward neural networks and experiment with different network sizes as well as activation and loss functions, which we will discuss in detail in section \ref{sec:model_selection}.
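The preprocessing step can be sketched as follows. The Savitzky--Golay smoothing itself is omitted for brevity; the sketch shows the remaining steps of computing finite-difference velocities in the world frame and rotating them by the negative heading into the pose-invariant car frame:

```python
import numpy as np

def local_velocities(poses, dt):
    """Turn recorded global poses into training targets.

    poses: (N, 3) array of (x, y, theta) samples at rate 1/dt (240 Hz here).
    We take finite-difference velocities in the world frame, then rotate the
    planar components by -theta into the car frame so the resulting targets
    do not depend on where the car is or which way it faces.
    """
    vel_world = np.gradient(poses, dt, axis=0)           # (vx, vy, omega), world frame
    theta = poses[:, 2]
    c, s = np.cos(theta), np.sin(theta)
    vx = c * vel_world[:, 0] + s * vel_world[:, 1]       # rotation by -theta
    vy = -s * vel_world[:, 0] + c * vel_world[:, 1]
    return np.stack([vx, vy, vel_world[:, 2]], axis=1)

# straight drive along world x at 2 m/s, heading fixed at zero
t = np.arange(240) / 240.0
poses = np.stack([2.0 * t, np.zeros(240), np.zeros(240)], axis=1)
v_local = local_velocities(poses, dt=1.0 / 240.0)
```

In the actual pipeline the poses are denoised with the Savitzky--Golay filter before differentiation; with raw motion-capture data the finite differences alone would be far too noisy.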
\subsection{Control Costs}
The total control cost $\l(\hat{\x}, \u)$ as introduced in Eq. (\ref{eq:TO_cost_with_net}) is the sum of several individual costs.
First of all, we define target states in global coordinates for predefined trajectory steps $i$, denoted by $\bar{\x}_i$.
We match them with the corresponding trajectory states by employing a simple quadratic penalty
\begin{equation}
\l_{target}(\hat{\x}) = \sum_{i \in I} \norm{\hat{\x}_i - \bar{\x}_i}^2,
\label{eq:cost_state_targets}
\end{equation}
where $I$ is the set of all predefined target states.
Furthermore, we support the generation of manageable and smooth trajectories by penalizing both high magnitudes and high changes in the control inputs throughout the entire time horizon.
The corresponding cost can be written as
\begin{equation}
\l_{regularizer}(\u) = \sum_{i = 0}^{n-1} \norm{\u_{i}}^2 + \sum_{i = 1}^{n-1} \norm{\u_{i} - \u_{i-1}}^2.
\end{equation}
We incorporate bounds on the control inputs $\u_l \leq \u_i \leq \u_u \hspace{0.1cm} \forall i$, based on the physical limits of the RC car, in the form of soft constraints
\begin{equation}
\l_{limits}(\u) = \sum_{i = 0}^{n - 1} \left( B(-\u_i + \u_l) + B(\u_i - \u_u) \right),
\end{equation}
where $\u_l$ and $\u_u$ denote the lower and upper bounds, respectively, and $B$ is a one-sided quadratic barrier function.
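The three cost terms can be combined as in the following sketch. The weights and the barrier implementation are illustrative; the paper does not specify relative weightings:

```python
import numpy as np

def total_cost(x_hat, u, targets, u_lo, u_hi, w_reg=1.0, w_lim=1.0):
    """Sum of the target, regularizer, and soft-limit costs.

    x_hat: (n, 3) predicted states; u: (n, 2) controls;
    targets: dict {step index i: target state}.
    B is the one-sided quadratic barrier B(z) = sum(max(z, 0)^2).
    """
    B = lambda z: np.sum(np.maximum(z, 0.0) ** 2)
    l_target = sum(np.sum((x_hat[i] - t) ** 2) for i, t in targets.items())
    l_reg = np.sum(u ** 2) + np.sum((u[1:] - u[:-1]) ** 2)
    l_lim = sum(B(u_lo - ui) + B(ui - u_hi) for ui in u)
    return l_target + w_reg * l_reg + w_lim * l_lim

# zero controls within bounds, one target state one unit away
x_hat = np.zeros((3, 3))
u = np.zeros((2, 2))
c = total_cost(x_hat, u, {2: np.array([1.0, 0.0, 0.0])}, u_lo=-1.0, u_hi=1.0)
```

With controls at zero and within bounds, only the quadratic target penalty contributes.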
\subsection{Trajectory Optimization}
We solve the trajectory optimization problem stated in Eq. (\ref{eq:TO_problem_net}) using a batch gradient descent approach.
We seek the optimal control inputs that minimize the total cost $\l(\hat{\x}, \u) := \l(\hat{\x}(\u), \u)$, where $\hat{\x} := \hat{\x}(\u)$ reflects the dynamics modeled by the state transition function $\f_{\theta}$ for the entire trajectory.
By following the chain rule, we can compute the gradient as
\begin{equation}
\dv{\l}{\u} = \pdv{\l}{\hat{\x}} \dv{\hat{\x}}{\u} + \pdv{\l}{\u}.
\end{equation}
To calculate the Jacobian $\dv{\hat{\x}}{\u}$, we follow the same procedure as described in \cite{zimmermann_optimal_2019}.
First, we define the dynamics residual for one trajectory step $i$ as
\begin{equation}
\g_i := \hat{\x}_{i+1} - \hat{\x}_{i} - h \f_{\theta}(\hat{\v}_i, \u_i) = 0,
\end{equation}
and stack them into one vector for the entire trajectory $\g := (\g_0, ..., \g_{n-1})$.
Then, following the implicit function theorem, the Jacobian can be computed analytically by
\begin{equation}
\dv{\hat{\x}}{\u} = - \pdv{\g}{\hat{\x}}^{-1} \pdv{\g}{\u}.
\end{equation}
We note that this can be computed efficiently by leveraging the block structure of the individual matrices (as discussed in \cite{zimmermann_optimal_2019}).
Furthermore, since we are learning the state transition function in terms of local velocities (see Eq. (\ref{eq:learnedStateTransition})), we first compute the derivatives of $\hat{\v}$, and then transform them back into the global frame to receive the derivatives of $\hat{\x}$.
We pair this batch gradient descent formulation with a line search procedure, where we scale the gradient $\dv{\l}{\u}$ with 512 different constants in the range of $[10^{-6}, 10^{0}]$.
We simultaneously evaluate the resulting 512 new control vector candidates and pick the best solution based on the lowest total cost $\l(\hat{\x}, \u)$.
We repeat this simple procedure for each iteration of the optimization until convergence.
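On a toy quadratic problem, the line-search procedure can be sketched as follows. Here `cost_fn` stands in for the total cost $\l(\hat{\x}, \u)$, and in the actual implementation all candidates are evaluated simultaneously rather than in a loop:

```python
import numpy as np

def line_search_step(u, grad, cost_fn, n_scales=512):
    """One iteration of the batch line search: scale the gradient by n_scales
    constants in [1e-6, 1e0], evaluate every resulting control candidate,
    and keep the cheapest, falling back to the current u if nothing improves."""
    best_u, best_c = u, cost_fn(u)
    for a in np.logspace(-6.0, 0.0, n_scales):
        cand = u - a * grad
        c = cost_fn(cand)
        if c < best_c:
            best_u, best_c = cand, c
    return best_u, best_c

# toy problem: cost (u - 3)^2 with gradient 2 (u - 3)
cost = lambda v: float(np.sum((v - 3.0) ** 2))
u = np.array([0.0])
for _ in range(50):
    u, c = line_search_step(u, 2.0 * (u - 3.0), cost)
```

Repeating the step until convergence drives the toy cost toward its minimizer at $u = 3$.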
An overview of the entire pipeline, including data collection, training, and trajectory optimization is given in~\cref{fig:pipeline}.
\section{Results}
\label{sec:results}
We use the selected model architecture in combination with data recorded on the physical system to train a neural network that captures the dynamics of the real car.
We evaluate the efficacy of our presented pipeline on a number of experiments, where we focus on generating \emph{dynamic} driving behaviors through trajectory optimization.
We present these experiments in the accompanying video and discuss them in the following.
\subsection{Offline Trajectory Optimization} \label{offlineTO}
\defwhite{white}
\defwhite{white}
\def3.7cm{3.7cm}
\begin{figure}
\subfloat[Parallel parking]{
\label{fig:parallel_parking}
\begin{minipage}{0.9\columnwidth}
\begin{tabular}{cc}
\begin{tikzpicture}
\node[rectangle,draw=white, minimum width = 0.25\columnwidth,
minimum height = 3.7cm] (r) at (0,0) {};
\node[inner sep=0] at (r.center) { \includegraphics[width=0.25\columnwidth, height=3.7cm, keepaspectratio, cfbox=white]{figures/scenarios/90_cut_v3.PNG} };
\end{tikzpicture}
& \resizebox{!}{0.5\columnwidth}{%
\input{figures/scenarios/Parallel_Parking}}
\end{tabular}
\end{minipage}
}
\vspace{1mm}
\subfloat[Dynamic reverse]{
\label{fig:dynamic_reverse}
\begin{minipage}{0.9\columnwidth}
\begin{tabular}{cc}
\begin{tikzpicture}
\node[rectangle,draw=white, minimum width = 0.25\columnwidth,
minimum height = 3.7cm] (r) at (0,0) {};
\node[inner sep=0] at (r.center) { \includegraphics[width=0.25\columnwidth, height=3.7cm, keepaspectratio, cfbox=white]{figures/scenarios/180_cut_v3.PNG} };
\end{tikzpicture}
& \resizebox{!}{0.5\columnwidth}{%
\input{figures/scenarios/Dynamic_Reverse}}
\end{tabular}
\end{minipage}
}
\vspace{1mm}
\subfloat[Drifting turn]{
\label{fig:drifting_turn}
\begin{minipage}{0.9\columnwidth}
\begin{tabular}{cc}
\begin{tikzpicture}
\node[rectangle,draw=white, minimum width = 0.25\columnwidth,
minimum height = 3.7cm] (r) at (0,0) {};
\node[inner sep=0] at (r.center) { \includegraphics[width=0.25\columnwidth, height=3.7cm, keepaspectratio, cfbox=white]{figures/scenarios/turn_cut_v3.PNG} };
\end{tikzpicture}
& \resizebox{!}{0.5\columnwidth}{%
\input{figures/scenarios/Drifting_Turn}}
\end{tabular}
\end{minipage}
}
\caption{Nominal trajectories (in orange) are based on optimization with our learned model for the three scenarios. The target states are marked in violet. }
\label{fig:offline_scenarios}
\end{figure}
We perform offline trajectory optimization for three different scenarios. We optimize, based on our learned model, a nominal trajectory and play it back on the physical system in an open-loop manner.
All three scenarios involve dynamic drifting maneuvers, and we refer to the individual experiments as: \emph{Parallel Parking} (see \cref{fig:physical_parallel_parking}), \emph{Dynamic Reverse} (see \cref{fig:dynamic_reverse}), and \emph{Drifting Turn} (see \cref{fig:physical_can_drift}).
The scenarios were created by setting both an intermediate as well as an end-state goal using Eq. (\ref{eq:cost_state_targets}).
For each scenario, we repeat the same experiment 20 times for slightly different starting positions.
We execute the resulting optimal control trajectory on the physical car while recording its pose in the same manner as for data collection (see \cref{fig:scenarios_recorded}).
We notice that while the executed trajectory matches the nominal trajectory in general, we accumulate errors over the length of the trajectory. Those errors can arise due to both aleatoric and epistemic uncertainties~\cite{model_uncertainties}. We encounter systematic errors in acceleration that lead, over the time horizon, to an offset in the mean end position (see \cref{tab:offlineTrajComparison}).
\begin{table}[h]
\centering
\caption{Error between the desired end-position and achieved end-position for open-loop trajectory optimization on different trajectories.}
\label{tab:offlineTrajComparison}
\begin{adjustbox}{max width=\linewidth}
\begin{threeparttable}
\begin{tabular}{lclll}
& \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Parallel \\ Parking\end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Dynamic \\ Reverse\end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Drifting \\ Turn\end{tabular}} \\\hline
Trajectory Length [steps] & 40 & 40 & 60 \\
\begin{tabular}[c]{@{}l@{}}Mean L2 Position Error {[}m{]}\end{tabular} & 0.38 & 0.21 & 0.37 \\
\begin{tabular}[c]{@{}l@{}}STDev L2 Position Error {[}m{]}\end{tabular} & 0.10 & 0.074 & 0.21 \\
\begin{tabular}[c]{@{}l@{}}Mean Absolute Heading Error {[}rad{]}\end{tabular} & 0.13 & 0.50 & 0.10 \\
\begin{tabular}[c]{@{}l@{}}STDev Absolute Heading Error {[}rad{]}\end{tabular} & 0.08 & 0.10 & 0.10
\end{tabular}
\end{threeparttable}\end{adjustbox}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{figures/physical_parallel_parking/v2/physical_parallel_parking.png}
\caption{\emph{Parallel Parking} experiment: The presented trajectory optimization method leverages the learned dynamics model of the physical car to dynamically park between two soda cans. The figure shows a sequence of several consecutive frames (left to right).}
\label{fig:physical_parallel_parking}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figures/physical_can_drift/v1/physical_can_drift.png}
\caption{\emph{Drifting Turn} experiment: The physical RC car takes a dynamic turn around a soda can by performing a controlled drifting maneuver (sequence top to bottom).}
\label{fig:physical_can_drift}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{figures/scenarios_executed/scenarios_executed.pdf}
\caption{Recorded examples (in green) of executed nominal trajectories (in orange) for the three scenarios. Target states are marked in violet. We encounter similar dynamic behavior on the RC car as our nominal trajectories, including drifts. Due to the open-loop control scheme, we accumulate an error over the length of the trajectories.}
\label{fig:scenarios_recorded}
\end{figure}
\subsection{Online Trajectory Optimization}
We also performed experiments in an online trajectory optimization setting by driving on a predesigned race track (see \cref{fig:track_recorded}).
Specifically, we applied trajectory optimization in a receding horizon fashion, i.e. we re-optimized the control sequence at each control cycle and applied the first control input to the physical system.
We apply this feedback control scheme by leveraging the same motion capture system for data recording, where we include the measured state of the physical car as initial condition into the optimization.
The overall objective (introduced as the cost $\l(\hat{\x}, \u)$) is to advance along the race track with a predefined high velocity over the entire time horizon of the trajectory. Additionally, we add a cost penalizing track excursions.
We choose a total of $n = 20$ trajectory steps and a framerate of $20$ Hz.
We note that our current implementation for the analytic computation of derivatives from the network is not optimized for run time, and we could therefore not run the optimization at a sufficiently high framerate.
Instead, we leverage parallelization to compute finite differences that estimate the derivatives $\dv{\hat{\v}}{\u}$ for this particular experiment.
Additional performance metrics of the RC car on the racetrack are given in \cref{fig:track_recorded}.
As shown in the video, the car is able to race through the track with high velocity while performing dynamic maneuvers.
This demonstrates that error accumulations encountered in an offline setting can be successfully compensated by feedback control.
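One control cycle of this receding-horizon scheme can be sketched as follows. The `optimize` argument stands in for the trajectory optimizer described above, and the shift-and-pad warm start is a common choice that we assume here rather than a detail stated in the text:

```python
import numpy as np

def receding_horizon_step(x_measured, u_prev, optimize):
    """One 20 Hz control cycle of a receding-horizon controller (sketch).

    Warm-start from the previous solution shifted by one step (padding the
    end with zeros), re-optimize the full horizon from the newly measured
    state, and send only the first control input to the physical car.
    """
    warm = np.vstack([u_prev[1:], np.zeros((1, u_prev.shape[1]))])
    u_opt = optimize(x_measured, warm)     # returns an (n, 2) control sequence
    return u_opt[0], u_opt

# identity "optimizer" for illustration: returns the warm start unchanged
u_first, u_plan = receding_horizon_step(
    np.zeros(3), np.arange(6.0).reshape(3, 2), lambda x, warm: warm)
```

At each cycle the measured pose enters the optimization as the initial condition, which is what compensates the error accumulation seen in the open-loop experiments.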
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{figures/race_track/v2/race_track.pdf}
\caption{The physical \gls{rc} car (right) follows the control commands generated in a receding horizon fashion to dynamically drive on the race track. A recording of one entire lap is shown in the image on the left.}
\label{fig:track_recorded}
\end{figure*}
\section{Model Selection}
\label{sec:model_selection}
\looseness=-1
In this section, we describe the model selection process.
We used our simulated car model to evaluate different neural network architectures on synthetic data.
While network training solely relies on a loss over one time step, our goal is to learn a model that can match the dynamics of the car over an entire trajectory.
To evaluate this performance, we use 40 prerecorded validation trajectories with known end state poses.
We roll out the corresponding control trajectories with each learned model candidate and compare the resulting end-state poses by computing the final-state error.
We denote the mean of all 40 final pose errors
as \emph{Trajectory Validation Error} (in short TVE), i.e.
\begin{equation}
\mathrm{TVE} = \frac{1}{40} \sum_{j = 1}^{40} \norm{\hat{\x}^j_n - \x^j_n}_1,
\label{eq:TVE}
\end{equation}
where $\x^j_n$ denotes the recorded final state of validation trajectory $j$, and $\hat{\x}^j_n$ the corresponding final state predicted by the learned model.
We use this as a metric to evaluate the efficacy of the different neural network architectures.
We note that other evaluation metrics could also be applied, but we find that this one works well for our use case, as it essentially captures the accumulated state error throughout the trajectory.
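The metric itself is a one-liner; the following sketch computes it from the stacked final states of the predicted and recorded validation trajectories:

```python
import numpy as np

def trajectory_validation_error(pred_finals, true_finals):
    """TVE: mean L1 distance between the final states predicted by the
    learned model and the recorded final states of the validation set."""
    pred = np.asarray(pred_finals, dtype=float)
    true = np.asarray(true_finals, dtype=float)
    return float(np.mean(np.sum(np.abs(pred - true), axis=1)))

# two toy trajectories with L1 final-state errors of 1 and 2
tve = trajectory_validation_error([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
                                  [[0.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
```

Because each final state aggregates the model error over the whole rollout, a low TVE indicates good long-horizon prediction, not just good one-step accuracy.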
\subsection{Loss Function}
During neural network training, choosing a suitable loss function is of utmost importance. Since we train on the local velocities of the car, we encounter a high dynamic range. We expect that the most suitable models for trajectory inference are those that minimize the relative error on the training data. Our intuition is that the loss function should be more sensitive to absolute errors for small velocities. We address this by introducing the following relative loss for training:
\begin{equation}
\label{eq:relative_loss}
L_{rel}(\theta) = \frac{\norm{ \v_{i+1} - \f_{\theta}(\v_i, \u_i) }_1}{\norm{\v_{i+1} }_1+\epsilon},
\end{equation}
where $\epsilon > 0$ prevents numerical problems for small velocities.
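A batched version of Eq. (\ref{eq:relative_loss}) can be sketched as follows; the $\epsilon$ value here is illustrative, not the paper's setting:

```python
import numpy as np

def relative_loss(v_true, v_pred, eps=1e-3):
    """Relative L1 loss, averaged over a batch of velocity samples.

    Dividing the per-sample L1 error by the target magnitude keeps the loss
    sensitive to absolute errors at small velocities; eps avoids division
    by zero when the target velocity is (near) zero.
    """
    err = np.sum(np.abs(np.asarray(v_true) - np.asarray(v_pred)), axis=-1)
    mag = np.sum(np.abs(np.asarray(v_true)), axis=-1) + eps
    return float(np.mean(err / mag))

# one sample: target velocity magnitude 2, absolute error 1 -> relative error 0.5
loss = relative_loss([[2.0, 0.0, 0.0]], [[1.0, 0.0, 0.0]], eps=0.0)
```

The same absolute error contributes more to the loss for slow samples than for fast ones, which is exactly the weighting argued for above.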
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/loss_traj_comparisons/v2/loss_traj_comparisons.pdf}
\caption{ Left: Comparison of the TVE distribution for different model architectures trained with the $L1$, $L2$, and $L_{rel}$ (Eq. (\ref{eq:relative_loss})) training losses.
Right: Comparison of different model architectures with \gls{relu} or \gls{gelu} activations in terms of the $L_{rel}$ loss and TVE (Eq. (\ref{eq:TVE})).}
\label{fig:validation_traj_vs_loss}
\end{figure}
We trained 15 feed-forward, fully connected neural networks with 8 hidden layers and 64 neurons per layer for each of three loss functions: the L1 loss, the L2 loss, and our relative loss. For each network, we inferred the validation trajectories and calculated the mean end-state distance (see \cref{fig:validation_traj_vs_loss}, left). We found that networks trained with either the L1 loss or the relative loss perform notably better than those trained with the L2 loss. Due to its lower variance, we use our relative loss for all further training.
\subsection{Activation Functions and Network Size} \label{activation_function}
Activation functions are used to capture nonlinearities with neural networks. Typical choices for activation functions are sigmoid, $\tanh$, and \gls{relu}. Nevertheless, sigmoid and $\tanh$ often suffer from the well-known exploding/vanishing gradients problem when trained with backpropagation~\cite{Hochreiter91}. For this reason, \gls{relu} is mostly preferred in the machine learning community. However, the gradient of \gls{relu} is not continuous, which potentially imposes a challenge for gradient-based trajectory optimization. In particular, a neural network with \gls{relu} activations may result in very noisy (i.e. fluctuating) control sequences when employed in trajectory optimization.
As we want to apply the optimized control inputs directly to our physical system, this can have a negative impact on the overall performance, and could potentially even damage the hardware.
Therefore, in this work, we also consider \gls{gelu}~\cite{GELU} as a differentiable alternative to \gls{relu}.
We hypothesize that because \gls{gelu} has continuous gradients, it could result in smoother control trajectories, which would make it better suited for our application.
We further discuss this matter in~\cref{sec:discussion}.
For selecting the network architecture (activation functions and size), we trained a total of over 200 neural networks of different sizes with both \gls{relu} and \gls{gelu} as activation functions (see \cref{fig:validation_traj_vs_loss} on the right). Furthermore, we performed a grid search over different network sizes for both activation functions (\cref{fig:grid_search}).
Finally, we selected an architecture with 8 hidden layers of 64 neurons each and the \gls{gelu} activation function, based on two criteria: \emph{(i)} network size and evaluation time, and \emph{(ii)} performance on TVE and test loss.
\begin{figure} [h]
\centering
\includegraphics[width=1.0\linewidth]{figures/grid_relu_gelu/v2/grid_relu_gelu.pdf}
\caption{Results from grid search for different network architectures with \gls{relu} or \gls{gelu} activations based on TVE (Eq. (\ref{eq:TVE})).}
\label{fig:grid_search}
\end{figure}
\section{Introduction}
It has long been recognized that the structure of social networks plays an
important role in the dynamics of disease propagation. Networks showing
the ``small-world'' effect~\cite{Kocken89,Watts99}, where the number of
``degrees of separation'' between any two members of a given population is
small by comparison with the size of the population itself, show much
faster disease propagation than, for instance, simple diffusion models on
regular lattices.
Milgram~\cite{Milgram67} was one of the first to point out the existence of
small-world effects in real populations. He performed experiments which
suggested that there are only about six intermediate acquaintances
separating any two people on the planet, which in turn suggests that a
highly infectious disease could spread to all six billion people on the
planet in only about six incubation periods of the disease.
Early models of this phenomenon were based on random
graphs~\cite{SS88,KM96}. However, random graphs lack some of the crucial
properties of real social networks. In particular, social networks show
``clustering,'' in which the probability of two people knowing one another
is greatly increased if they have a common acquaintance~\cite{WS98}. In a
random graph, by contrast, the probability of there being a connection
between any two people is uniform, regardless of which two you choose.
Watts and Strogatz~\cite{WS98} have recently suggested a new ``small-world
model'' which has this clustering property, and has only a small average
number of degrees of separation between any two individuals. In this paper
we use a variant of the Watts--Strogatz model~\cite{NW99} to investigate
disease propagation. In this variant, the population lives on a
low-dimensional lattice (usually a one-dimensional one) where each site is
connected to a small number of neighboring sites. A low density of
``shortcuts'' is then added between randomly chosen pairs of sites,
producing much shorter typical separations, while preserving the clustering
property of the regular lattice.
Newman and Watts~\cite{NW99} gave a simple differential equation model for
disease propagation on an infinite small-world graph in which communication
of the disease takes place with 100\% efficiency---all acquaintances of an
infected person become infected at the next time-step. They solved this
model for the one-dimensional small-world graph, and the solution was later
generalized to higher dimensions~\cite{Moukarzel99} and to finite-sized
lattices~\cite{NMW99}. Infection with 100\% efficiency is not a
particularly realistic model however, even for spectacularly virulent
diseases like Ebola fever, so Newman and Watts also suggested using a site
percolation model for disease spreading in which some fraction $p$ of the
population are considered susceptible to the disease, and an initial
outbreak can spread only as far as the limits of the connected cluster of
susceptible individuals in which it first strikes. An epidemic can occur
if the system is at or above its percolation threshold where the size of
the largest cluster becomes comparable with the size of the entire
population. Newman and Watts gave an approximate solution for the fraction
$p_c$ of susceptible individuals at this epidemic point, as a function of
the density of shortcuts on the lattice. In this paper we derive an exact
solution, and also look at the case in which transmission between
individuals takes place with less than 100\% efficiency, which can be
modeled as a bond percolation process.
\section{Site percolation}
Two simple parameters of interest in epidemiology are {\em susceptibility},
the probability that an individual exposed to a disease will contract it,
and {\em transmissibility}, the probability that contact between an
infected individual and a healthy but susceptible one will result in the
latter contracting the disease. In this paper, we assume that a disease
begins with a single infected individual. Individuals are represented by
the sites of a small-world model and the disease spreads along the bonds,
which represent contacts between individuals. We denote the sites as being
occupied or not depending on whether an individual is susceptible to the
disease, and the bonds as being occupied or not depending on whether a
contact will transmit the disease to a susceptible individual. If the
distribution of occupied sites or bonds is random, then the problem of when
an epidemic takes place becomes equivalent to a standard percolation
problem on the small-world graph: what fraction $p_c$ of sites or bonds
must be occupied before a ``giant component'' of connected sites forms
whose size scales extensively with the total number $L$ of
sites~\cite{note1}?
We will start with the site percolation case, in which every contact of a
healthy but susceptible person with an infected person results in
transmission, but less than 100\% of the individuals are susceptible. The
fraction $p$ of occupied sites is precisely the susceptibility defined
above.
\begin{figure}[t]
\begin{center}
\psfig{figure=site.eps,width=2in}
\end{center}
\caption{A small-world graph with $L=24$, $k=1$, and four shortcuts. The
colored sites represent susceptible individuals. The susceptibility is
$p=\frac34$ in this example.}
\label{site}
\end{figure}
Consider a one-dimensional small-world graph as in Fig.~\ref{site}.
$L$~sites are arranged on a one-dimensional lattice with periodic boundary
conditions and bonds connect all pairs of sites which are separated by a
distance of $k$ or less. (For simplicity we have chosen $k=1$ in the
figure.) Shortcuts are now added between randomly chosen pairs of sites.
It is standard to define the parameter $\phi$ to be the average number of
shortcuts per bond on the underlying lattice. The probability that two
randomly chosen sites have a shortcut between them is then
\begin{equation}
\psi = 1 - \biggl[1-{2\over L^2}\biggr]^{k \phi L} \simeq {2k \phi \over L}
\end{equation}
for large $L$.
A connected cluster on the small-world graph consists of a number of {\em
local clusters}---occupied sites which are connected together by the
near-neighbor bonds on the underlying one-dimensional lattice---which are
themselves connected together by shortcuts. For $k=1$, the average number
of local clusters of length $i$ is
\begin{equation}
N_i = (1-p)^2 p^i L.
\end{equation}
For general $k$ we have
\begin{eqnarray}
N_i &=& (1-p)^{2k} p (1-(1-p)^k)^{i-1} L\nonumber\\
&=& (1-q)^2 p q^{i-1} L,
\end{eqnarray}
where $q = 1-(1-p)^k$.
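These local-cluster counts are easy to verify by direct simulation. The Python sketch below (ours, for illustration; it uses open boundaries, so edge effects are ${\rm O}(1)$) occupies sites at random and tallies maximal groups of occupied sites that are at most $k$ apart:

```python
import random

def cluster_size_counts(L, k, p, seed=1):
    """Tally local-cluster sizes on a chain of L sites: occupied sites at
    most k apart belong to the same local cluster."""
    rng = random.Random(seed)
    occupied = [i for i in range(L) if rng.random() < p]
    counts, run, prev = {}, 0, None
    for site in occupied:
        if prev is not None and site - prev <= k:
            run += 1
        else:
            if run:
                counts[run] = counts.get(run, 0) + 1
            run = 1
        prev = site
    if run:
        counts[run] = counts.get(run, 0) + 1
    return counts

L, k, p = 1_000_000, 2, 0.3
q = 1.0 - (1.0 - p) ** k
counts = cluster_size_counts(L, k, p)
for i in (1, 2, 3):
    expected = (1.0 - q) ** 2 * p * q ** (i - 1) * L   # N_i from the text
    assert abs(counts[i] - expected) / expected < 0.02  # within ~2%
```

The counts agree with $N_i = (1-q)^2 p q^{i-1} L$ to within statistical error for this system size.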
Now we build a connected cluster out of these local clusters as follows.
We start with one particular local cluster, and first add to it all other
local clusters which can be reached by traveling along a single shortcut.
Then we add all local clusters which can be reached from those new ones by
traveling along a single shortcut, and so forth until the connected cluster
is complete. Let us define a vector ${\bf v}$ at each step in this process,
whose components $v_i$ are equal to the probability that a local cluster of
size $i$ has just been added to the overall connected cluster. We wish to
calculate the value ${\bf v}'$ of this vector in terms of its value ${\bf v}$ at
the previous step. At or below the percolation threshold the $v_i$ are
small and so the $v_i'$ will depend linearly on the $v_i$ according to a
transition matrix ${\bf M}$ thus:
\begin{equation}
v_i' = \sum_j M_{ij} v_j,
\label{itereqn}
\end{equation}
where
\begin{equation}
M_{ij} = N_i (1 - (1-\psi)^{ij}).
\end{equation}
Here $N_i$ is the number of local clusters of size $i$ as before, and
$1-(1-\psi)^{ij}$ is the probability of a shortcut from a local cluster of
size $i$ to one of size $j$, since there are $ij$ possible pairs of sites
by which these can be connected. Note that ${\bf M}$ is not a stochastic matrix,
i.e.,~it is not normalized so that its rows sum to unity.
Now consider the largest eigenvalue $\lambda$ of ${\bf M}$. If $\lambda<1$,
iterating Eq.~\eref{itereqn} makes ${\bf v}$ tend to zero, so that the rate at
which new local clusters are added falls off exponentially, and the
connected clusters are finite with an exponential size distribution.
Conversely, if $\lambda>1$, ${\bf v}$ grows until the size of the cluster
becomes limited by the size of the whole system. The percolation threshold
occurs at the point $\lambda=1$.
For finite $L$, finding the largest eigenvalue of ${\bf M}$ is difficult.
However, if $\phi$ is held constant, $\psi$ tends to zero as $L\to\infty$,
so for large $L$ we can approximate ${\bf M}$ as
\begin{equation}
M_{ij} = ij \psi N_i.
\end{equation}
This matrix is the outer product of two vectors, with the result that
Eq.~\eref{itereqn} can be simplified to
\begin{equation}
\lambda v_i = i \psi N_i \sum_j j v_j,
\end{equation}
where we have set $v'_i = \lambda v_i$. Thus the eigenvectors of ${\bf M}$
have the form $v_i = C\lambda^{-1} i\psi N_i$ where $C = \sum_j j v_j$ is a
constant. Eliminating $C$ then gives
\begin{equation}
\lambda = \psi \sum_j j^2 N_j.
\label{lambdaeqn}
\end{equation}
For $k=1$, this gives
\begin{equation}
\lambda = \psi L p \, \frac{1+p}{1-p} = 2 \phi p \, \frac{1+p}{1-p}.
\label{sitelambdak1}
\end{equation}
Setting $\lambda=1$ yields the value of $\phi$ at the percolation threshold
$p_c$:
\begin{equation}
\phi = \frac{1-p_c}{2 p_c \, (1+p_c)},
\end{equation}
and solving for $p_c$ gives
\begin{equation}
p_c = \frac{\sqrt{4 \phi^2 + 12 \phi + 1} - 2 \phi - 1}{4 \phi}.
\label{sitethreshk1}
\end{equation}
For general $k$, we have
\begin{equation}
\lambda = \psi L p \, \frac{1+q}{1-q}
= 2k \phi p \, \frac{2-(1-p)^k}{(1-p)^k},
\end{equation}
or, at the threshold
\begin{equation}
\phi = \frac{(1-p_c)^k}{2 k p_c \, (2-(1-p_c)^k)}.
\label{sitethreshgenk}
\end{equation}
The threshold density $p_c$ is then a root of a polynomial of order $k+1$.
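Since Eq.~\eref{sitethreshgenk} gives $\phi$ as a monotonically decreasing function of $p_c$ on $(0,1)$, the root can be found by simple bisection. A minimal Python sketch (ours, for illustration):

```python
def phi_of_pc_site(pc, k):
    """Threshold condition: phi as a function of p_c for general k."""
    s = (1.0 - pc) ** k
    return s / (2.0 * k * pc * (2.0 - s))

def pc_site(phi, k, tol=1e-12):
    """Invert phi_of_pc_site by bisection on (0, 1)."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi_of_pc_site(mid, k) > phi:
            lo = mid  # phi(mid) too large, so the root lies at larger p_c
        else:
            hi = mid
    return 0.5 * (lo + hi)

# k = 5, phi = 0.01 reproduces p_c = 0.401, the value used in the
# simulations described later in the paper.
print(round(pc_site(0.01, 5), 3))  # 0.401
```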
\section{Bond percolation}
An alternative model of disease transmission is one in which all
individuals are susceptible, but transmission takes place with less than
100\% efficiency. This is equivalent to bond percolation on a small-world
graph---an epidemic sets in when a sufficient fraction $p_c$ of the {\em
bonds\/} on the graph are occupied to cause the formation of a giant
component whose size scales extensively with the size of the graph. In
this model the fraction $p$ of occupied bonds is the transmissibility of
the disease.
For $k=1$, the percolation threshold $p_c$ for bond percolation is the same
as for site percolation for the following reason. On the one hand, a local
cluster of $i$ sites now consists of $i-1$ occupied bonds with two
unoccupied ones at either end, so that the number of local clusters of $i$
sites is
\begin{equation}
N_i = (1-p)^2 \, p^{i-1} L,
\end{equation}
which has one less factor of $p$ than in the site percolation case. On the
other hand, the probability of a shortcut between two random sites now has
an extra factor of $p$ in it---it is equal to $\psi p$ instead of just
$\psi$. The two factors of $p$ cancel and we end up with the same
expression for the eigenvalue of ${\bf M}$ as before, Eq.~\eref{sitelambdak1},
and the same threshold density, Eq.~\eref{sitethreshk1}.
For $k>1$, calculating $N_i$ is considerably more difficult. We will solve
the case $k=2$. Let $Q_i$ denote the probability that a given site $n$ and
the site $n-1$ to its left are part of the same local cluster of size $i$,
when only bonds to the left of site $n$ are taken into account. Similarly,
let $Q_{ij}$ be the probability that $n$ and $n-1$ are part of two separate
local clusters of size $i$ and $j$ respectively, again when only bonds to
the left of $n$ are considered. Then, by considering site $n+1$ which has
possible connections to both $n$ and $n-1$, we can show that
\begin{equation}
Q_{i+1,j} = \left\lbrace \begin{array}{ll}
(1-p)^2 \bigl[ Q_j + \sum_{j'} Q_{jj'} \bigr] \quad &
\mbox{for $i=0$}\\
p(1-p) \,Q_{ji} & \mbox{for $i\ge1$,}
\end{array}\right.
\end{equation}
and
\begin{eqnarray}
Q_{i+1} &=& p(2-p) \,Q_i + p(1-p) \sum_j Q_{ij} +
p^2 \!\! \sum_{j+j'=i} \!\! Q_{jj'}.\nonumber\\
\end{eqnarray}
If we define generating functions $H(z) = \sum_i Q_i z^i$ and $H(z,w) =
\sum_{i,j} Q_{ij} z^i w^j$, this gives us
\begin{eqnarray}
\label{heqn1}
H(z,w) &=& z(1-p)^2 \bigl[ H(w) + H(w,1) \bigr]\nonumber\\
& & \quad +\, zp(1-p) \,H(w,z),\\
\label{heqn2}
H(z) &=& zp(2-p) H(z) + zp(1-p) H(z,1)\nonumber\\
& & \quad +\, zp^2 H(z,z).
\end{eqnarray}
Since any pair of adjacent sites must belong to some cluster or clusters
(possibly of size one), the probabilities $Q_i$ and $Q_{ij}$ must sum to
unity according to $\sum_i Q_i + \sum_{i,j} Q_{ij} = 1$, or equivalently
$H(1) + H(1,1) = 1$. Finally, the density of clusters of size $i$ is equal
to the probability that a randomly chosen site is the rightmost site of
such a cluster, in which case neither of the two bonds to its right are
occupied. Taken together, these results imply that the generating function
for the cluster density, $G(z) = L^{-1} \sum_i N_i z^i$, must satisfy
\begin{equation}
G(z) = (1-p)^2 \bigl[ H(z) + H(z,1) \bigr].
\end{equation}
Solving Eqs.~\eref{heqn1} and~\eref{heqn2} for $H(z)$ and $H(z,1)$ then
gives
\begin{eqnarray}
G(z) &=& \bigl[z (1 - p)^4 \bigl(1 - 2 p z + p^3 (1 - z) z + p^2 z^2
\bigr)\bigr]\big/\nonumber\\
& & \quad \bigl[1 - 4 p z + p^5 (2 - 3z) z^2 - p^6 (1 - z) z^2\nonumber\\
& & \qquad +\, p^4 z^2 (1 + 3z) + p^2 z (4 + 3z)\nonumber\\
& & \qquad -\, p^3 z \bigl(1 + 5z + z^2\bigr)\bigr],
\end{eqnarray}
the first few terms of which give
\begin{eqnarray}
N_1/L &=& (1-p)^4,\\
N_2/L &=& 2 p \, (1-p)^6,\\
N_3/L &=& p^2 \, (1-p)^6 \, (6 - 8 p + 3 p^2).
\end{eqnarray}
Again replacing $\psi$ with $\psi p$, Eq.~\eref{lambdaeqn} becomes
\begin{equation}
\lambda = \psi p \sum_i i^2 N_i = 2k \phi p \biggl[ \biggl( z \frac{d}{dz}
\biggr)^{\!2} G(z) \,\biggr]_{z=1},
\label{bondlambda}
\end{equation}
which, setting $k=2$, implies that the percolation threshold $p_c$ is given
by
\begin{equation}
\phi = \frac{(1 - p_c)^3 \, (1 - p_c + p_c^2)}
{4 p_c \,(1 + 3 p_c^2 - 3 p_c^3 - 2 p_c^4 + 5 p_c^5 - 2 p_c^6)}.
\label{bondthreshk2}
\end{equation}
In theory it is possible to extend the same method to larger values of $k$,
but the calculation rapidly becomes tedious and so we will, for the moment
at least, move on to other questions.
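As in the site case, Eq.~\eref{bondthreshk2} gives $\phi$ as a decreasing function of $p_c$ and can be inverted numerically. A short Python sketch (ours, for illustration):

```python
def phi_of_pc_bond_k2(pc):
    """phi as a function of p_c for k = 2 bond percolation."""
    num = (1.0 - pc) ** 3 * (1.0 - pc + pc * pc)
    den = 4.0 * pc * (1.0 + 3.0 * pc**2 - 3.0 * pc**3
                      - 2.0 * pc**4 + 5.0 * pc**5 - 2.0 * pc**6)
    return num / den

def pc_bond_k2(phi, tol=1e-12):
    """Invert phi_of_pc_bond_k2 by bisection on (0, 1)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi_of_pc_bond_k2(mid) > phi else (lo, mid)
    return 0.5 * (lo + hi)

pc = pc_bond_k2(0.1)
assert abs(phi_of_pc_bond_k2(pc) - 0.1) < 1e-9  # self-consistent inversion
```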
\section{Site and bond percolation}
The most general disease propagation model of this type is one in which
both the susceptibility and the transmissibility take arbitrary values,
i.e.,~the case in which sites and bonds are occupied with probabilities
$p_{\rm site}$ and $p_{\rm bond}$ respectively. For $k=1$, a local cluster
of size $i$ then consists of $i$ susceptible individuals with $i-1$
occupied bonds between them, so that
\begin{equation}
N_i = (1 - p_{\rm site} \,p_{\rm bond})^2
\,p_{\rm site}^i \,p_{\rm bond}^{i-1} L.
\end{equation}
Replacing $\psi$ with $\psi p_{\rm bond}$ in Eq.~\eref{lambdaeqn} gives
\begin{equation}
\lambda = \psi \,p_{\rm bond} \sum_j j^2 N_j = 2 \phi p \,\frac{1+p}{1-p},
\end{equation}
where $p = p_{\rm site} \,p_{\rm bond}$. In other words, the position of
the percolation transition is given by precisely the same expression as
before except that $p$ is now the product of the site and bond
probabilities. The critical value of this product is then given by
Eq.~\eref{sitethreshk1}. The case of $k>1$ we leave as an open problem for
the interested (and courageous) reader.
\section{Numerical calculations}
We have performed extensive computer simulations of percolation and disease
spreading on small-world networks, both as a check on our analytic results,
and to investigate further the behavior of the models. First, we have
measured the position of the percolation threshold for both site and bond
percolation for comparison with our analytic results. A naive algorithm
for doing this fills in either the sites or the bonds of the lattice one by
one in some random order and calculates at each step the size of either the
average or the largest cluster of connected sites. The position of the
percolation threshold can then be estimated from the point at which the
derivative of this size with respect to the number of occupied sites or
bonds takes its maximum value. Since there are ${\rm O}(L)$ sites or bonds on
the network in total and finding the clusters takes time ${\rm O}(L)$, such an
algorithm runs in time ${\rm O}(L^2)$. A more efficient way to perform the
calculation is, rather than recreating all clusters at each step in the
algorithm, to calculate the new clusters from the ones at the previous
step. By using a tree-like data structure to store the
clusters~\cite{BN99}, one can in this way reduce the time needed to find
the value of $p_c$ to ${\rm O}(L\log L)$. In Fig.~\ref{perc} we show numerical
results for $p_c$ from calculations of the largest cluster size using this
method for systems of one million sites with various values of $k$, for
both bond and site percolation. As the figure shows, the results agree
well with our analytic expressions for the same quantities over several
orders of magnitude in $\phi$.
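The essential idea of the cluster-finding step can be sketched with a standard union-find structure. The Python code below (a simplified illustration, not the code used for the figures) measures the largest-cluster fraction for site percolation on a one-dimensional small-world graph:

```python
import random
from collections import Counter

def largest_cluster_fraction(L, k, phi, p, seed=0):
    """Fraction of sites in the largest cluster of occupied sites on a 1D
    small-world graph with range-k bonds and k*phi*L random shortcuts."""
    rng = random.Random(seed)
    occ = [rng.random() < p for _ in range(L)]
    parent = list(range(L))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        if occ[a] and occ[b]:  # only susceptible sites join clusters
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

    for i in range(L):                    # bonds of the underlying lattice
        for j in range(1, k + 1):
            union(i, (i + j) % L)
    for _ in range(int(k * phi * L)):     # random shortcuts
        union(rng.randrange(L), rng.randrange(L))

    sizes = Counter(find(i) for i in range(L) if occ[i])
    return max(sizes.values()) / L if sizes else 0.0

# k = 5, phi = 0.01: p = 0.60 is well above p_c = 0.401, p = 0.30 well below.
frac_hi = largest_cluster_fraction(20_000, 5, 0.01, 0.60)
frac_lo = largest_cluster_fraction(20_000, 5, 0.01, 0.30)
assert frac_hi > 0.3 and frac_lo < 0.1
```

Above the threshold the largest cluster contains a finite fraction of all sites; below it the fraction vanishes as $L$ grows.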
\begin{figure}[t]
\begin{center}
\psfig{figure=perc.eps,width=\columnwidth}
\end{center}
\caption{The points are numerical results for the percolation threshold as
a function of shortcut density~$\phi$ for systems of size $L=10^6$. Left
panel: site percolation with $k=1$ (circles), $2$~(squares), and
$5$~(triangles). Right panel: bond percolation with $k=1$~(circles) and
$2$~(squares). Each point is averaged over 1000 runs and the resulting
statistical errors are smaller than the points. The solid lines are the
analytic expressions for the same quantities, Eqs.~\eref{sitethreshk1},
\eref{sitethreshgenk}, and~\eref{bondthreshk2}. The slight systematic
tendency of the numerical results to overestimate the value of $p_c$ is a
finite-size effect which decreases both with increasing system size and
with increasing $\phi$\protect\cite{note2}.}
\label{perc}
\end{figure}
\begin{figure}[t]
\begin{center}
\psfig{figure=invasion.eps,width=\columnwidth}
\end{center}
\caption{The number of new cases of a disease appearing as a function of
time in simulations of the site-percolation model with $L=10^6$, $k=5$,
and $\phi=0.01$. The top four curves are for $p=0.60$, $0.55$, $0.50$,
and~$0.45$, all of which are above the predicted percolation threshold of
$p_c=0.401$ and show evidence of the occurrence of substantial epidemics.
A fifth curve, for $p=0.40$, is plotted but is virtually invisible next
to the horizontal axis because even fractionally below the percolation
threshold no epidemic behavior takes place. Each curve is averaged over
1000 repetitions of the simulation. Inset: the total percentage of the
population infected as a function of time in the same simulations.}
\label{invasion}
\end{figure}
Direct confirmation that the percolation point in these models does indeed
correspond to the threshold at which epidemics first appear can also be
obtained by numerical simulation. In Fig.~\ref{invasion} we show results
for the number of new cases of a disease appearing as a function of time in
simulations of the site-percolation model. (Very similar results are found
in simulations of the bond-percolation model.) In these simulations we
took $k=5$ and a value of $\phi=0.01$ for the shortcut density, which
implies, following Eq.~\eref{sitethreshgenk}, that epidemics should appear
if the susceptibility within the population exceeds $p_c=0.401$. The
curves in the figure are (from the bottom upwards) $p=0.40$, $0.45$,
$0.50$, $0.55$, and~$0.60$, and, as we can see, the number of new cases of
the disease for $p=0.40$ shows only a small peak of activity (barely
visible along the lower axis of the graph) before the disease fizzles out.
Once we get above the percolation threshold (the upper four curves) a large
number of cases appear, as expected, indicating the onset of epidemic
behavior. In the simulations depicted, epidemic disease outbreaks
typically affected between 50\% and 100\% of the susceptible individuals,
compared with about 5\% in the sub-critical case. There is also a
significant tendency for epidemics to spread more quickly (and in the case
of self-limiting diseases presumably also to die out sooner) in populations
which have a higher susceptibility to the disease. This arises because in
more susceptible populations there are more paths for the infection to take
from an infected individual to an uninfected one. The amount of time an
epidemic takes to spread throughout the population is given by the average
radius of (i.e.,~path length within) connected clusters of susceptible
individuals, a quantity which has been studied in Ref.~\onlinecite{NW99}.
In the inset of Fig.~\ref{invasion} we show the overall (i.e.,~integrated)
percentage of the population affected by the disease as a function of time
in the same simulations. As the figure shows, this quantity takes a
sigmoidal form similar to that seen also in random-graph
models~\cite{SS88,KM96}, simple small-world disease models~\cite{NW99}, and
indeed in real-world data.
\section{Conclusions}
We have derived exact analytic expressions for the percolation threshold on
one-dimensional small-world graphs under both site and bond percolation.
These results provide simple models for the onset of epidemic behavior in
diseases for which either the susceptibility or the transmissibility is
less than 100\%. We have also looked briefly at the case of simultaneous
site and bond percolation, in which both susceptibility and
transmissibility can take arbitrary values. We have performed extensive
numerical simulations of disease outbreaks in these models, confirming both
the position of the percolation threshold and the fact that epidemics take
place above this threshold only.
Finally, we point out that the method used here can in principle give an
exact result for the site or bond percolation threshold on a small-world
graph with any underlying lattice for which we can calculate the density of
local clusters as a function of their size. If, for instance, one could
enumerate lattice animals on lattices of two or more dimensions, then the
exact percolation threshold for the corresponding small-world model would
follow immediately.
\section*{Acknowledgments}
We thank Duncan Watts for helpful conversations. This work was supported
in part by the Santa Fe Institute and DARPA under grant number ONR
N00014-95-1-0975.
\section{Introduction}
\label{sec:intro}
Object pose estimation in 6 degrees of freedom (DoF) plays a key role in a variety of downstream applications (e.g., autonomous driving, robotic navigation, manipulation, and augmented reality),
and has been extensively studied in computer vision and robotics communities~\cite{Lowe1999ICCV,Rad2017ICCV,Peng2019CVPR,Park2019ICCV,Labbe2020ECCV,Fu2021IROS,Wang2021CVPR}.
Some methods rely on RGB input~\cite{Park2019ICCV,Zakharov2019ICCV,Li2018ECCV,WangXZML0S2019CVPR,Sundermeyer2018ECCV}, while others utilize additional depth input to improve the performance~\cite{Moreno2013CVPR,Xiang2018RSS,Li2018ECCV,WangXZML0S2019CVPR}.
Some deal with a single view~\cite{Xiang2018RSS,Rad2017ICCV,Park2019ICCV},
while others utilize multiple views to enhance the results
~\cite{Collet2010ICRA,Collet2011IJRR,Moreno2013CVPR,Li2018ECCV,Labbe2020ECCV,Fu2021IROS}.
In particular,
multi-view methods can be further categorized into offline structure from motion (SfM) -- where all the frames are given at once~\cite{Collet2010ICRA,Labbe2020ECCV} -- and the online SLAM style, where frames are provided sequentially and real-time performance is expected~\cite{Moreno2013CVPR,Fu2021IROS}.
This paper focuses on image-based 6DoF pose estimation for multiple objects in the context of an online monocular SLAM system.
\begin{figure}[t]
\includegraphics[width=\columnwidth,trim={0 1.5cm 0 0}]{figures/overview_s53_v23_to_v78}
\caption{Our proposed method leverages the detected keypoints of {\em asymmetric} objects and the 3D scene created from the SLAM system to consistently track the keypoints of {\em symmetric} objects. Given the current camera pose estimated from {\em asymmetric} objects' keypoints, the projections of the existing 3D keypoints into the current image act as informative {\em prior input} to guide the network in predicting keypoints with consistent symmetry over time.}
\label{fig:brief_overview}
\end{figure}
A typical multi-view 6DoF pose estimation method can be decomposed into the single-view estimation stage and the multi-view enhancement stage.
While pose estimates from multiple views can be fused for better performance~\cite{Collet2010ICRA,Labbe2020ECCV,Fu2021IROS}, handling extreme inconsistency -- e.g., those caused by rotational symmetry of objects -- is still challenging. It is also unreliable to manually tune the thresholds for outlier rejection and assign residual weights for nonlinear optimization.
To tackle these challenges,
in this paper, we propose a symmetry and uncertainty-aware 6DoF object pose estimation method which fuses semantic keypoint measurements from all views within a SLAM framework.
The main contributions of this work are:
\begin{itemize}
\item We design a keypoint-based object SLAM system that jointly estimates the
globally-consistent object and camera poses in real time -- even in the presence of
incorrect detections and symmetric objects.
\item We propose a method able to consistently predict and track 2D semantic keypoints for symmetric objects over time,
which leverages the projection of existing 3D keypoints into the current image as an informative prior input to the keypoint network.
\item We develop a method to train the keypoint network to estimate
the uncertainty of its predictions such that the uncertainty measure quantifies the true error of the keypoints,
and significantly improves object pose estimation in the object SLAM system.
\end{itemize}
The rest of this paper is organized as follows:
After briefly reviewing the related literature in Sec. \ref{sec:relwork},
we describe our method in detail in Sec. \ref{sec:method} -- including
the keypoint detector and how it is used in the entire system.
A thorough evaluation of our framework is presented in Sec. \ref{sec:exp}
before concluding in Sec. \ref{sec:conclusion}.
\section{Related Work}
\label{sec:relwork}
\paragraph{Single-view object pose estimation.}
A large number of single-view object pose estimation methods have been presented in recent years. One major trend is to utilize deep networks to predict the relative pose of an object with respect to the camera in a regress-and-refine fashion \cite{Xiang2018RSS, Li2018ECCV, Labbe2020ECCV, Zakharov2019ICCV}. Although effective, the iterative refinement process usually comes at a high computational cost. Another trend is to either estimate the 2D projected locations of sparse 3D semantic points from the CAD model \cite{Rad2017ICCV, Pavlakos2017ICRA, Peng2019CVPR}, or to regress the 3D coordinates from the dense 2D pixels within object masks \cite{Park2019ICCV,Wang2021CVPR}, and then solve a perspective-$n$-point (PnP) problem to estimate object poses. This type of approach is more efficient but not always as reliable under occlusion. In order to achieve superior robustness to occlusion and real-time efficiency simultaneously, we develop a multi-view method which integrates sparse semantic keypoint detection into an object-level SLAM system. Instead of adapting traditional descriptors \cite{Collet2010ICRA,Collet2011IJRR}, we opt to develop a CNN-based keypoint detector in order to leverage more global context to reason about the keypoint locations and distinguish their semantics. We show that an object SLAM system can effectively utilize the sparse set of semantic keypoints to optimize the poses in a bundle adjustment (BA) optimization with outlier rejection at the keypoint level.
\paragraph{Object-level SLAM.}
Object-level SLAM typically builds upon single-view object pose estimators,
which improves the estimated poses' robustness to occlusions, missing detections, and the global consistency via multi-view optimization.
SLAM++ \cite{Moreno2013CVPR} was notably the first work along this line, but their system only worked on depth images.
There are also some works which model objects as a sparse set of 3D keypoints and use 2D keypoint detectors to estimate the correspondences which
are fused over time \cite{Parkhiya2018ICRA,Shan2020IROS}; however, none of them considered symmetric objects.
PoseRBPF \cite{Deng2019RSS} on the other hand proposed a method to track
objects over time with an autoencoder and particle filter to reason about the symmetry, however their system is only able to track one object at a time -- limiting the application.
CosyPose \cite{Labbe2020ECCV} presented a method to disambiguate pose estimates of symmetric objects from multiple views through object-level RANSAC, but their method is an offline SfM approach and not directly comparable to ours.
Fu et al.~\cite{Fu2021IROS} proposed a multi-hypothesis SLAM approach to estimate the pose of symmetric objects, which is optimized with a max-mixture model.
In contrast, our approach only tracks one hypothesis, and is shown to have superior performance.
\paragraph{Keypoint uncertainty estimation.}
A typical global optimization that uses the predicted object keypoints as a measurement
(i.e., PnP or multi-view graph optimization) requires a proper weighting of the residuals.
Without any measure of certainty to accompany the keypoint measurements, this weight is typically set to identity or some manually-tuned value.
Some works have retrieved a weight directly from the output of the keypoint network
\cite{Pavlakos2017ICRA, Peng2019CVPR} to be used in PnP as a scalar measure of certainty \cite{Pavlakos2017ICRA} or Gaussian covariance matrix \cite{Peng2019CVPR},
while \cite{Shan2020IROS} adapted the Bayesian method of \cite{Kendall2016ArXiv}
to estimate a covariance matrix for the keypoints by sampling over a randomized batch.
Although these methods have been shown to work in practice, none have shown that the uncertainty they are predicting
actually bounds the true error of the prediction compared to the ground truth.
Besides residual weighting, the uncertainty is especially useful for outlier rejection: assuming the uncertainty is a Gaussian covariance matrix, the $\chi^2$ distribution determines an outlier threshold more systematically than manual tuning.
Inspired by a plethora of recent works (unrelated to keypoint prediction) on self-uncertainty prediction of networks \cite{Bloesch2018CVPR,Klodt2018ECCV,Liu2020RAL,Yang2020CVPR,Ke2021ArXiv,Zuo2021ICRA,Matsuki2021RAL}, we design a maximum likelihood estimator (MLE) loss, which trains the network to predict keypoint locations accurately and to jointly predict the uncertainty to be tightly bound around the actual error of the prediction.
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{figures/symmetry_aware_object_slam}
\caption{An overview of the proposed symmetry and uncertainty aware object SLAM pipeline.}
\label{fig:system}
\end{figure*}
\section{The Proposed Method}
\label{sec:method}
Our multi-view 6DoF object pose estimation method is unified in an object SLAM framework, which jointly estimates object and camera poses -- while accounting for the symmetry of detected objects and utilizing the uncertainty estimations from the network to robustify the system. A depiction of the full pipeline can be seen in Fig. \ref{fig:system}. The pipeline involves two passes to deal with asymmetric and symmetric objects separately. In the first pass, the asymmetric objects are tracked from the 3D scene to estimate the camera pose. In the second pass, the estimated 3D keypoints for symmetric objects are projected into the current camera view to be used as the prior knowledge to help predict keypoints for these objects that are consistent with the 3D scene.
The object SLAM system is primarily comprised of two modules, the front-end tracking using the keypoint network, and back-end global optimization to refine the object and camera pose estimates.
As a result, the proposed system can operate on sequential inputs and estimate the current state in real time for the use of an operator or robot requiring object and camera poses in a feedback loop.
\subsection{Keypoint Network} \label{sec:kp_estim}
We develop a keypoint network that not only predicts the 2D keypoint coordinates but also their uncertainty. In addition, to make it able to provide consistent keypoint tracks for symmetric objects, the network optionally takes prior keypoint heatmap inputs that are expected to be somewhat noisy.
The architecture of our keypoint network can be seen in Fig. \ref{fig:network}.
The backbone architecture of our keypoint network is the stacked hourglass network \cite{Newell2016ECCV}, which has been shown to be a good choice for object pose estimation
\cite{Parkhiya2018ICRA, Pavlakos2017ICRA, Shan2020IROS}.
Similar to the original \cite{Newell2016ECCV} we choose a multi-channel keypoint parameterization due to its simplicity.
With this formulation, each channel is responsible for predicting a single keypoint,
and we can combine all of the keypoints for the dataset into one output tensor
-- allowing for a single network to be used for all of the objects.
Given the image and prior input cropped to a bounding box and resized to a static input resolution, the network predicts
an $N \times H/d \times W/d$ tensor $p$, where $H \times W$ is the input resolution, $d$ is the downsampling ratio (4 in our experiments), and $N$ is the total number of keypoints for the dataset.
From $p$, a set of $N$ 2D keypoints $\{\bm{u}_1,\bm{u}_2,\hdots,\bm{u}_N\}$ and $2 \times 2$ covariance matrices $\{\bm{\Sigma}_1,\bm{\Sigma}_2,\hdots,\bm{\Sigma}_N\}$ are predicted.
Every channel of $p$, $p_i$, is enforced to be a 2D probability mass by utilizing a spatial softmax.
The predicted keypoint is taken as the expected value of 2D coordinates over this probability mass
$\bm{u}_i = \sum_{u,v} p_i(u,v) [u ~ v]^\top$.
Unlike the non-differentiable argmax operation, this allows us to use the keypoint coordinate directly in the loss function -- which is important for our uncertainty estimation.
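As a concrete illustration, the soft-argmax step above can be sketched in a few lines of plain Python (the function name and toy grid layout are our own, not the paper's implementation):

```python
import math

def soft_argmax(logits):
    """Spatial softmax over one H x W keypoint channel, then the expected
    (u, v) coordinate -- a differentiable stand-in for the argmax."""
    H, W = len(logits), len(logits[0])
    m = max(max(row) for row in logits)                   # numerical stability
    exps = [[math.exp(x - m) for x in row] for row in logits]
    Z = sum(sum(row) for row in exps)
    p = [[e / Z for e in row] for row in exps]            # 2D probability mass
    u = sum(p[v][uu] * uu for v in range(H) for uu in range(W))
    v = sum(p[vv][uu] * vv for vv in range(H) for uu in range(W))
    return (u, v), p
```

In a real implementation this runs as a batched tensor operation, so gradients flow through the keypoint coordinates into the loss.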
\paragraph{Keypoints with uncertainty.}
Since the keypoint $\bm{u}_i$ is the expected value of the distribution of 2D coordinates with probability mass given by the values of $p_i$, it is straightforward to estimate an uncertainty measure by the covariance of this distribution with the second moment about the mean
\begin{equation}
\bm{\Sigma}_i = \sum_{u,v} p_i(u,v)
\left( [u ~ v]^\top - {\bm{u}}_i \right)
\left( [u ~ v]^\top - {\bm{u}}_i \right)^\top.
\label{eq:cov}
\end{equation}
However, without any particular criteria for the covariance, there is nothing to enforce that the uncertainty actually captures the true error of the prediction.
To this end, we propose to use a Gaussian maximum-likelihood estimator (MLE) loss to jointly optimize the keypoint coordinates as well as the covariance:
\begin{align}
L^{(i)}_\mathrm{MLE}
&= \left( \bm{u}^*_i - {\bm{u}}_i \right)^\top
\bm{\Sigma}^{-1}_i
\left( \bm{u}^*_i - {\bm{u}}_i \right)
+ \frac{1}{2} \log{|\bm{\Sigma}_i|},
\end{align}
where $\bm{u}^*_i$ is the ground truth keypoint coordinate.
From a high-level perspective, the first term enforces that the covariance bounds the true error of the prediction,
while the second prevents it from becoming too large.
This way, the network can predict its own uncertainty in the form of a Gaussian covariance matrix, which is trained to tightly bound the true error of the estimated keypoint.
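A minimal numerical sketch (our own illustration, not the authors' code) of the covariance in Eq. (\ref{eq:cov}) and the MLE loss for a single $2 \times 2$ case:

```python
import math

def covariance(p, u):
    """Second moment of the 2D probability mass p about the mean keypoint u."""
    H, W = len(p), len(p[0])
    sxx = syy = sxy = 0.0
    for v in range(H):
        for x in range(W):
            dx, dy = x - u[0], v - u[1]
            sxx += p[v][x] * dx * dx
            syy += p[v][x] * dy * dy
            sxy += p[v][x] * dx * dy
    return [[sxx, sxy], [sxy, syy]]

def mle_loss(u_pred, cov, u_gt):
    """Gaussian MLE loss: (u* - u)^T Sigma^{-1} (u* - u) + 0.5 * log|Sigma|."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    e = [u_gt[0] - u_pred[0], u_gt[1] - u_pred[1]]
    quad = (e[0] * (inv[0][0] * e[0] + inv[0][1] * e[1])
            + e[1] * (inv[1][0] * e[0] + inv[1][1] * e[1]))
    return quad + 0.5 * math.log(det)
```

With an identity covariance the loss reduces to the squared pixel error, which illustrates how the first term bounds the true error while the log-determinant term penalizes overly large covariances.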
While our network predicts a total of $N$ keypoints, only a subset of these, $\mathcal{K}(\ell) \subset \{1,2,\hdots,N\}$, are valid for a particular object $\ell$.
Furthermore, considering a single image, only a subset of keypoints $\mathcal{B} \subseteq \mathcal{K}(\ell)$ lie within the bounding box for object $\ell$ (note that occluded keypoints are still predicted).
However, during deployment, while $\mathcal{K}(\ell)$ is known from the object class and keypoint labeling, it may be impossible to know which keypoints lie within the detected bounding box.
For this reason, we add another head onto the network to predict a sigmoid vector $\bm{m} \in [0,1]^N$,
which is trained to estimate the ground-truth binary mask $\bm{m}^* \in \{0,1\}^N$, where $m^*_i = 1$ if $i \in \mathcal{B}$ and 0 otherwise (see Fig. \ref{fig:network} for the architecture).
Thus, for a single object, in a single image, the full loss becomes
\begin{align}
L_\mathrm{tot} &= \mathrm{BCE}(\bm{m}, \bm{m^*}) +
\frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} L^{(i)}_\mathrm{MLE},
\end{align}
where $\mathrm{BCE}(.)$ is the binary cross entropy loss function.
For the rest of the paper, to simplify notation, we will denote $k \in \{1,2,\hdots,K\}$
as the indices for keypoints which pass the ground-truth mask $\bm{m}^*$ for training (i.e., the next section) or the estimated mask $\bm{m}$ (as well as the known $\mathcal{K}(\ell)$) for deployment in the SLAM system (Sec. \ref{sec:obj_slam}).
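For illustration, the combined loss can be sketched as follows (hypothetical Python; the MLE terms are averaged over in-box keypoints as in the equation above, while the mean reduction for the BCE term is our assumption):

```python
import math

def bce(m_pred, m_true):
    """Binary cross entropy over the keypoint-visibility mask vector."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(m_pred, m_true)) / len(m_pred)

def total_loss(m_pred, m_true, mle_terms):
    """BCE on the mask plus the mean MLE loss over in-box keypoints.
    mle_terms[i] is the per-keypoint MLE loss; only i with m_true[i] == 1
    (i.e., keypoints inside the bounding box) contribute."""
    in_box = [L for L, t in zip(mle_terms, m_true) if t == 1]
    return bce(m_pred, m_true) + sum(in_box) / len(in_box)
```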
\begin{figure}[t]
\centering
\includegraphics[width=0.99\columnwidth,trim={0 0 4cm 0}]{figures/network}
\caption{
The overall architecture of our keypoint network.
The network input is augmented to include $N$ additional channels for the prior keypoint inputs. When no prior is available, these channels are filled with zeros.
The network outputs an $N$-channel feature map corresponding to the raw logits, from which a spatial softmax head predicts keypoints $\bm{u}_i$ and uncertainties $\bm{\Sigma}_i$, while an average pool head predicts the keypoint mask $\bm{m}$.
}
\label{fig:network}
\end{figure}
\paragraph{Keypoints for symmetric objects.}
Since we want to efficiently track the keypoints over time during deployment, it is convenient to obtain keypoint predictions that have a symmetry hypothesis that is consistent with the 3D scene.
Inspired by \cite{Moolanferoze2019ArXiv}, we opt to include $N$ extra channels as input to the keypoint network which contain a prior detection of the object's keypoints.
As shown in Fig. \ref{fig:system}, during deployment in the SLAM system, the prior keypoint detections come from projecting the 3D keypoints from the global object frame
into the current image once the corresponding camera pose is found (i.e., the second pass).
With this paradigm, there are two main issues to address: how to create training examples
of the prior detections (since the SLAM system is not run during training), and how to detect
the initial keypoints on symmetric objects when there is not yet an object pose estimate available.
Here we describe the training scheme used to address these issues.
To create the training prior, we simulate a noisy prior detection that the SLAM system would create by projecting the 3D keypoints from the object frame into the image plane with a perturbed ground truth object pose $\delta \mathbf{T} {}^C_O\mathbf{T}^*$ (see the supplementary Sec. \ref{sec:supp:notation} about the notation).
To further ensure that the network can learn to follow the prior detections for the symmetry
hypothesis, we utilize the set of symmetry transforms
$\mathcal{S} = \{{}^O_{S_1}\mathbf{T}, {}^O_{S_2}\mathbf{T}, \hdots, {}^O_{S_M}\mathbf{T}\}$ that we expect to be available for each object (discretized for objects with continuous axes of symmetry).
Each ${}^O_{S_m}\mathbf{T} \in \mathcal{S}$, when applied to the object CAD model, makes the rendering look (nearly) exactly the same, and in practice, these transforms can be manually chosen fairly easily.
Thus, when constructing a training example with a prior detection, we pick a random symmetry transform and apply it to the ground-truth object pose before doing the projection.
In order for the network to learn to predict initial keypoints on symmetric objects (when no prior is available),
we only provide this simulated prior randomly for roughly half of the examples.
Without the prior detection, however, the network is left up to its own devices to reason about the absolute orientation for the object -- which is theoretically impossible for symmetric objects without special care.
As opposed to the mirroring technique and additional symmetry classifier proposed by \cite{Rad2017ICCV}, we instead teach the network to deal with this issue with a simple criterion: choose the keypoints that correspond
to the symmetrically-valid pose that is closest to a canonical view where the front of the object faces the camera, and the top of the object faces the top of the image.
We refer the reader to the supplementary material (Sec. \ref{sec:supp:np_choice}) for more details on this procedure.
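To make the training-prior construction concrete, here is a simplified sketch (our own illustration: the pose-perturbation sampling and heatmap rendering are omitted, and all names are hypothetical):

```python
import random

def matmul4(A, B):
    """4x4 homogeneous transform composition."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def project(K, T_co, p_obj):
    """Pinhole projection of a 3D object-frame point into the image."""
    hom = [p_obj[0], p_obj[1], p_obj[2], 1.0]
    x, y, z = (sum(T_co[i][j] * hom[j] for j in range(4)) for i in range(3))
    return (K[0][0] * x / z + K[0][2], K[1][1] * y / z + K[1][2])

def simulate_prior(K, T_co_gt, symmetries, keypoints_obj):
    """Project 3D keypoints under a random symmetry transform of the ground-
    truth object pose.  (In practice a pose perturbation delta_T is also
    applied to T_co_gt before projecting, as described in the text.)"""
    T_os = random.choice(symmetries)      # one of the manual symmetry maps
    T = matmul4(T_co_gt, T_os)
    return [project(K, T, p) for p in keypoints_obj]
```

The projected coordinates would then be rendered into the $N$ prior heatmap channels fed to the network.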
\subsection{Object SLAM System} \label{sec:obj_slam}
Our symmetry and uncertainty-aware object SLAM system is comprised of two modules: the
front-end tracking, and the back-end global optimization.
The front end is responsible for processing the incoming frames -- running the keypoint network, estimating the current camera pose, and initializing new objects -- while the back end is responsible for refining the camera and object poses for the whole scene.
We refer the reader again to Fig. \ref{fig:system} for a visual representation of our system.
\paragraph{Front-end tracking.}
The first step of our front end is to split the bounding boxes detected in the current image into two information streams -- the first for asymmetric objects and first-time detections of symmetric ones, and the second for symmetric objects that already have 3D estimates.
Again, we expect the symmetry information (i.e., symmetric or \textit{not}) to be included with each object class.
The first information stream sends the images, cropped at the bounding boxes, to the keypoint network without any prior to detect keypoints and uncertainty.
These keypoints are then used to estimate the pose of each asymmetric object ${}^C_O\mathbf{T}_\mathrm{pnp}$ in the current camera frame by using PnP with RANSAC.
These PnP poses are then used to coarsely estimate the current camera pose
and then initialize objects which do not yet have 3D estimates.
See the supplementary material Sec. \ref{sec:supp:front_end} for more details on how this is done as well as more detailed behavior of the front end.
With a rough estimate of the current camera, we move onto the second information stream of the front end.
We use the coarse estimate of the camera pose to create the prior detections for the keypoints of symmetric objects by projecting the 3D keypoints for these objects into the current image, and constructing the prior keypoint heatmaps for network input.
After running the keypoint network on these symmetric objects, we store the keypoint measurements from both information streams for later use in the global optimization.
\paragraph{Back-end global optimization.}
The global optimization step runs periodically to refine the whole scene (object and camera poses) based on the measurements from each image.
Rather than reduce the problem to a pose graph (i.e., using relative pose measurements from PnP), we keep the original noise model of using the keypoint detections as measurements, which allows us to weight each residual with the covariance prediction from the network.
The global optimization problem is formulated by creating residuals that constrain
the pose ${}^{C_j}_G\mathbf{T}$ of image $j$ and the pose ${}^{G}_{O_\ell}\mathbf{T}$ of object $\ell$ with the $k$th keypoint
\begin{align}
\bm{r}_{j,\ell,k} = \bm{u}_{j,\ell,k}
- \Pi_{j,\ell}\left( {}^{C_j}_G\mathbf{T} ~ {}^{G}_{O_\ell}\mathbf{T} ~ {}^{O_\ell}\bar{\bm{p}}_k \right),
\label{eq:res}
\end{align}
where $\Pi_{j,\ell}$ is the perspective projection function for the bounding box of object $\ell$ in image $j$.
Thus the full problem becomes to minimize the cost over the entire scene
\begin{align}
C
&= \sum_{j,\ell,k} s_{j,\ell,k} ~ \rho_H \left(
\bm{r}_{j,\ell,k}^\top ~ \bm{\Sigma}_{j,\ell,k}^{-1} ~ \bm{r}_{j,\ell,k}
\right) \label{eq:global_cost}
\end{align}
where $\bm{\Sigma}_{j,\ell,k}$ is the $2 \times 2$ covariance matrix predicted by the network for the keypoint $\bm{u}_{j,\ell,k}$, $s_{j,\ell,k} \in \{0,1\}$ is a constant indicator
that is 1 if the measurement was deemed an inlier before the optimization started and 0 otherwise, and $\rho_H$ is the Huber norm which reduces the effect of outliers during the optimization steps.
Both $\rho_H$ and $s_{j,\ell,k}$ use the same outlier threshold $\tau$, which is derived from the 2-dimensional $\chi^2$ distribution and is always set to the 95\% confidence threshold $\tau = {5.991}$.
Thus we do not need to manually tune the outlier threshold as long as the covariance matrix $\bm{\Sigma}_{j,\ell,k}$ can properly capture the true error of keypoint $\bm{u}_{j,\ell,k}$.
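For concreteness, the gating and robust weighting of Eq. (\ref{eq:global_cost}) can be sketched as follows (illustrative Python; the specific Huber form on the squared Mahalanobis distance is the common g2o-style kernel, which we assume here):

```python
import math

CHI2_2DOF_95 = 5.991   # 95% quantile of the chi-square distribution, 2 DoF

def mahalanobis_sq(r, cov):
    """r^T Sigma^{-1} r for a 2D residual r and 2x2 covariance Sigma."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    ix = (d * r[0] - b * r[1]) / det
    iy = (-c * r[0] + a * r[1]) / det
    return r[0] * ix + r[1] * iy

def robust_cost(r, cov, tau=CHI2_2DOF_95):
    """Inlier indicator s (fixed before optimization) and the Huber-weighted
    squared distance rho_H used inside the global cost."""
    s2 = mahalanobis_sq(r, cov)
    s = 1 if s2 <= tau else 0
    rho = s2 if s2 <= tau else 2.0 * math.sqrt(tau * s2) - tau
    return s, rho
```

Because the residual is whitened by the network-predicted covariance, the same $\chi^2$-derived threshold applies to every keypoint without per-dataset tuning.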
\section{Experiments}
\label{sec:exp}
Our experiments are conducted on two of the most challenging object pose estimation datasets: the YCB-Video dataset \cite{Xiang2018RSS}
and the T-LESS dataset \cite{Hodan2017WACV}.
Both datasets provide ground truth poses for symmetric and asymmetric objects in cluttered environments over multiple keyframe sequences.
YCB-Video contains 21 household objects, including 4 objects with discrete symmetries and one object (the bowl) with a continuous axis of symmetry.
The T-LESS dataset contains 30 industry-relevant objects with very little texture, and most are symmetric.
Note that the symmetry information of each object is provided by \cite{Hodan2018ECCV}.
\subsection{Implementation Details} \label{sec:impl}
\paragraph{Choice of keypoints.}
While our design is agnostic to the choice of keypoint, to reduce the number of channels that the network needs to predict,
we created a set of rules to annotate keypoints manually in such a way that each keypoint can be applied to multiple object instances, and the same rules can be applied to both the YCB-Video and T-LESS datasets.
We manually label the 3D CAD models for both datasets, and project the keypoints from 3D to 2D to create the ground-truth keypoints described in Sec. \ref{sec:kp_estim}.
We refer the reader to the supplementary material Sec. \ref{sec:supp:labeling} for more details on how we annotated the keypoints.
\paragraph{Training procedure.}
We implemented the keypoint network in PyTorch \cite{Paszke2019NeurIPS}.
For all training, we used the Adam optimizer \cite{Kingma2015ICLR} with a learning rate of $10^{-3}$.
For the YCB-Video dataset, we utilized real training data provided along with the official 80k synthetic images.
Due to the high redundancy in the real training data, we used only every 5th image.
We trained on this dataset for 60 epochs using a batch size of 24
with randomized backgrounds for the synthetic dataset as well as randomized bounding boxes, color, and image warping.
For the T-LESS dataset, there are only real training images of single objects on a dark background, so for the synthetic data
we opted to use the physics-based \texttt{pbr} rendered data provided by \cite{Hodan2020ECCVW}.
For both the \texttt{real} and \texttt{pbr} splits we augment the examples with randomized backgrounds, bounding boxes, color, and warping, as well as randomly pasted objects for the real data only -- since it only contains images of isolated objects.
We trained the T-LESS model for 89 epochs with a batch size of 8, which was smaller
than that for YCB-Video due to the higher image resolution of the \texttt{pbr} data.
\paragraph{SLAM system.}
Our SLAM system is implemented in Python.
The GPU is only used for network inference while all other operations are performed on the CPU.
All optimizations are implemented using Python wrappers for the g2o library \cite{Kuemmerle2011ICRA}\footnote{\url{https://github.com/uoip/g2opy}}, besides PnP,
which is done using the Lambda Twist solver \cite{Persson2018ECCV} with RANSAC\footnote{\url{https://github.com/midjji/pnp}}.
Our front-end tracking works on every incoming frame, while the back-end runs every 10th frame.
Note that the testing sequences for both datasets are already provided as keyframes, so no keyframing procedure is needed.
While for actual deployment it is ideal to run the back-end graph optimization on a separate worker thread, this would make reproducing the exact results impossible due to randomness in the operating system's allocation of resources between the two threads.
In order to make the results reproducible, we simply execute both the front-end and back-end on the main thread for evaluation.
Our front-end tracking can typically run at 11Hz on our desktop with a GTX 1080Ti graphics card, and the back-end can run at an average speed of 2Hz.
\subsection{YCB-Video Dataset} \label{sec:ycbv}
\begin{table}
\centering
\caption{Results on the YCB-Video dataset. The Data column indicates which synthetic data was used in addition to the real data, and U.M. (unified model) is checked if a single model was trained for all objects instead of one model per object. Bold is best, underlined is second best.
}
\begin{tabular}{l|cccc}
\toprule
Method & Data & U.M. & ADD-S & ADD(-S) \\
\hline
PoseCNN \cite{Xiang2018RSS} & \texttt{syn} & \checkmark & 75.3 & 61.3 \\
DeepIM \cite{Li2018ECCV} & \texttt{syn} & \checkmark & 88.1 & 81.9 \\
PoseRBPF \cite{Deng2019RSS} & \texttt{syn} & & 76.3 & 64.4 \\
MHPE \cite{Fu2021IROS} & \texttt{syn} & \checkmark & 82.9 & 69.7 \\
CosyPose \cite{Labbe2020ECCV} & \texttt{pbr} & \checkmark & 89.8 & \underline{84.5} \\
GDR-Net \cite{Wang2021CVPR} & \texttt{pbr} & \checkmark & 89.1 & 80.2 \\
GDR-Net \cite{Wang2021CVPR} & \texttt{pbr} & & {\bf 91.6} & 84.4 \\
\hline
Ours & \texttt{syn} & \checkmark & \underline{90.3} & {\bf 84.7} \\
~ no prior det & \texttt{syn} & \checkmark & 88.7 & 83.3 \\
~ manual cov & \texttt{syn} & \checkmark & 59.1 & 46.1 \\
~ no MLE loss & \texttt{syn} & \checkmark & 47.0 & 35.2 \\
~ single view & \texttt{syn} & \checkmark & 65.7 & 56.9 \\
\bottomrule
\end{tabular}
\label{table:ycbv_main}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/ycbv_qual/ours/scene_57_000029}
\includegraphics[width=0.9\columnwidth]{figures/ycbv_qual/ours/scene_57_000234}
\includegraphics[width=0.9\columnwidth]{figures/ycbv_qual/no_mle/scene_57_000234}
\caption{Qualitative results on YCB-Video. From left to right, the columns show the detected object boxes with prior input to the keypoint network, the predicted keypoints with uncertainty ellipses, and the 3D model projection on the image based on the predicted 6DoF object poses and camera pose.
{\bf Top:} the uncertainty ellipses tend to be smaller for visible keypoints on textured surfaces or corner points, while appearing larger for occluded keypoints and keypoints on smooth surfaces (like the clamp).
{\bf Center:} our system is able to consistently track the keypoints throughout the scene despite the presence of symmetric objects.
{\bf Bottom:} the network trained with a fixed-variance loss predicts uncertainty ellipses that are visibly too small -- leading to unreliable outlier rejection and object poses.
}
\label{fig:ycbv_qual}
\end{figure}
For the YCB-Video dataset, we compare to the single-view methods \cite{Xiang2018RSS,Li2018ECCV,Labbe2020ECCV,Wang2021CVPR} and SLAM methods \cite{Deng2019RSS,Fu2021IROS}.
Note that we do not include the multi-view results of CosyPose \cite{Labbe2020ECCV} since it is an offline SfM method that is not comparable to real-time SLAM methods.
Following \cite{Xiang2018RSS,Li2018ECCV,Labbe2020ECCV,Wang2021CVPR,Deng2019RSS,Fu2021IROS}, we report the area under curve (AUC) of the ADD-S and ADD(-S) by varying the accuracy threshold from 0 to 10cm, which is calculated for each object separately and then averaged.
To fairly compare the methods, we used the same bounding boxes as PoseCNN.
In practice, the bounding boxes can come from any real-time bounding box detector.
The benchmark results as well as several ablation studies are reported in Table \ref{table:ycbv_main} with our method labeled as ``Ours''.
Methods in Table \ref{table:ycbv_main} are marked as using standard synthetic data (\texttt{syn}) with randomly-placed objects or physics-based (\texttt{pbr}) training data in addition to the real data.
Note that while the \texttt{pbr} data is generally considered
superior to the randomly-placed objects \cite{Hodan2020ECCVW},
it is not a part of the official YCB-Video dataset training splits.
Regardless, our method beats all of the state-of-the-art single-view and SLAM methods in terms of the AUC of ADD(-S) metric -- even those utilizing the \texttt{pbr} data -- while using only one network for all objects.
The AUC of ADD(-S) is the most important metric here, since it
takes into account the actual object symmetries rather than just shape matching like the ADD-S does.
This shows that our system can provide highly accurate globally-consistent poses for symmetric objects, while still maintaining high accuracy on the texture-asymmetric objects.
Qualitative results can be seen in Fig. \ref{fig:ycbv_qual}.
More detailed results of each object category can be found in the supplementary material Sec. \ref{sec:supp:ext}.
\paragraph{Effect of prior detection.}
The first ablation study is to run our same system without the prior detection.
The results drop slightly, but this is expected on this dataset where only 5 out of 21 objects are considered symmetric, and only the bowl displays a continuous rotational symmetry.
In the next section, we will see that the prior detection actually makes a much bigger difference on the T-LESS dataset, where most of the objects are symmetric, and the camera rotates many times completely around the scene -- whereas the camera motion in YCB-Video is much simpler.
\paragraph{Manual covariance weight.}
For the next ablation in Table \ref{table:ycbv_main}, ``manual cov'', we manually tune a weight to replace the covariance in the SLAM system's residuals and outlier rejection mechanism.
Here, we found that the weight corresponding to $2\times$ the average predicted standard deviation of the network (which was about 2.5 pixels) achieved the best scores.
As observed, the results dropped significantly compared to using a network predicted covariance.
\paragraph{Effect of MLE loss.}
For the ablation labeled ``no MLE loss'', we trained a network with the same procedure, but replaced the MLE loss with a fixed-variance loss with variance regulation
similar to that used by the popular human pose estimation method of \cite{Nibali2018ArXiv}.
As observed, when placed in the SLAM system, the results are significantly lower than that with our network trained with the MLE loss.
The qualitative results of this experiment are also in Fig. \ref{fig:ycbv_qual}.
Beyond the accuracy of the SLAM system with this change, we have also tested the accuracy of the predicted covariance itself.
To do so, we ran both of the networks (with and without MLE loss) on a separate set of simulated YCB-Video objects (the \texttt{pbr} data which was not used in training), which has perfect ground truth for the keypoints.
Here, we ran the networks with the ground truth bounding boxes and no prior detection.
To evaluate the accuracy of the predicted covariance, we plotted the keypoint error
against the predicted standard deviation of the network.
Ideally, the errors will always lie within the cone $|e_r| < 3 \sigma$, where $e_r$ is the scalar $x$ or $y$ component of the error residual of the keypoint prediction.
The results of this experiment can be viewed in Fig. \ref{fig:cov_consist}.
As observed, the network trained with the MLE loss keeps far more of the errors within the 3$\sigma$ cone.
In fact, 91.0\% of the data points on the left in Fig. \ref{fig:cov_consist} pass a 99\% confidence $\chi^2$ test while only 7.1\% pass from the points on the right. This shows that the predicted uncertainty describes the actual error distribution well (besides some expected outliers due to heavy occlusion and symmetry), and including the MLE loss is crucial to achieve this.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\columnwidth]{figures/sigma_plot_mle}
\includegraphics[width=0.48\columnwidth]{figures/sigma_plot_fixedvar}
\caption{
The plot of error of the predicted keypoints against the standard deviation predicted by the network over a separate set of rendered YCB-Video objects. The 3$\sigma$ bounds are shown as the cone drawn as the red dotted lines. \textbf{Left:} The result of our network trained with the MLE loss. \textbf{Right:} The result of the same network trained with a typical fixed-variance loss instead, which has far fewer points within the 3$\sigma$ cone.
}
\label{fig:cov_consist}
\end{figure}
\paragraph{Comparing to single view.}
For the final ablation in Table \ref{table:ycbv_main}, we ran just our single view network and compared the accuracy.
Specifically, for each view we just ran PnP and refined it using the same procedure as Eq. \ref{eq:global_cost}, but with only one fixed camera pose per optimization.
Clearly the full SLAM system is more accurate.
It is interesting to note that the results for single view are actually more accurate than the SLAM results using the manual covariance or the fixed-variance network.
This is most likely due to the fact that incorrect covariance in our SLAM system can cause the outlier rejection mechanism to be unreliable, and outliers can then pull the object pose in an incorrect direction and hurt the accuracy for {\em all views} despite the fact that most of the keypoints are correct.
\paragraph{Accuracy of camera poses.}
The effect of initializing the camera poses with the poses provided by the dataset was minor in this experiment.
Using the given camera poses the system achieved a 90.5 AUC of ADD-S score, while the
system with the estimated camera poses scored the 90.3 shown in Table \ref{table:ycbv_main}.
This shows that the estimated camera poses are very accurate on this dataset.
\subsection{T-LESS Dataset} \label{sec:tless}
\begin{table}
\centering
\caption{Benchmark Results on the T-LESS dataset.}
\begin{tabular}{l|ccc}
\toprule
Method & Data & U.M. & $e_\mathrm{vsd} < 0.3$ \\
\hline
Implicit \cite{Sundermeyer2018ECCV} & \texttt{syn} & & 26.8 \\
Pix2Pose \cite{Park2019ICCV} & \texttt{syn} & & 29.5 \\
PoseRBPF \cite{Deng2019RSS} & \texttt{syn} & & 41.7 \\
CosyPose \cite{Labbe2020ECCV} & \texttt{pbr} & \checkmark & {\bf 63.8} \\
\hline
Ours & \texttt{pbr} & \checkmark & \underline{63.7} \\
~ \texttt{real} only & N/A & \checkmark & 45.9 \\
~ no prior det & \texttt{pbr} & \checkmark & 16.2 \\
~ manual cov & \texttt{pbr} & \checkmark & 13.8 \\
~ single view & \texttt{pbr} & \checkmark & 48.1 \\
\bottomrule
\end{tabular}
\label{table:tless_main}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/tless_qual/ours/scene_2_000100}
\includegraphics[width=0.9\columnwidth]{figures/tless_qual/ours/scene_2_000157}
\includegraphics[width=0.9\columnwidth]{figures/tless_qual/no_prior/scene_2_000157}
\caption{Qualitative results on T-LESS. {\bf Top:} Under misalignment between the prior detection and the objects (left column), the network still predicts keypoints accurately (center column), using the prior only as a general guide for the symmetry.
{\bf Center:} the system displays robustness to missing and bad bounding boxes here.
{\bf Bottom:} the same system, but without the prior detection, fails to track keypoints whose 3D locations are currently on the back side of the symmetric objects, which causes the estimated object poses to fly away.
Note that the predicted covariance was used in all of these images, but left out of the visualization for clarity.
Best viewed in color.
}
\label{fig:tless_qual}
\end{figure}
For the T-LESS dataset, we compare to two single-view baselines \cite{Sundermeyer2018ECCV, Park2019ICCV} as well as, again, PoseRBPF \cite{Deng2019RSS} and CosyPose \cite{Labbe2020ECCV}.
To fairly compare to the other methods, we use the same RetinaNet bounding boxes as \cite{Park2019ICCV},
taking the top scoring bounding box for each object.
We use the standard visual surface discrepancy (vsd) recall metric,
$e_\mathrm{vsd} < 0.3$ \cite{Hodan2018ECCV}, that the other methods reported.
Since the T-LESS dataset has multiple scenes that have only symmetric objects, and
our system requires asymmetric objects to estimate a camera pose, we initialize our camera poses with the poses provided by the dataset.
While this is a potential drawback of our system, typical deployment scenarios will contain asymmetric objects or allow for retrieving external odometry from another source, such as an additional IMU sensor or traditional feature-based SLAM.
The benchmark results and ablation studies are reported in Table \ref{table:tless_main}, where our system is shown to achieve a 63.7 recall score -- second best to the 63.8 of CosyPose.
However, it is interesting to note that CosyPose is an iterative refinement method that utilizes initial object poses rendered at 1m from the camera, which is close to the distance of all the objects, while our method makes no such assumption.
Qualitative results can be also seen in Fig. \ref{fig:tless_qual}.
\paragraph{Effect of training data.}
To test the sensitivity to the training data,
we train the network on only the small \texttt{real} training split,
which contains 1,231 images of each object on a dark background.
From Table \ref{table:tless_main} we observe that, even with this small amount of data,
we still beat all of the state-of-the-art methods besides CosyPose -- all of which used large amounts of
synthetic data on top of the real data.
This shows the ability of our method to work with a limited amount of data which does not even cover all
orientations of the objects.
\paragraph{Effect of prior detection.}
On the T-LESS dataset, where most of the objects are symmetric in some way,
the 63.7 recall score drops to 16.2 in Table \ref{table:tless_main} when the prior detection is removed.
This shows that the prior detection is crucial for tackling these challenging T-LESS objects when the camera is orbiting around their axes of symmetry multiple times.
Without the prior detection, the SLAM system's outlier rejection simply rejects most of the keypoint measurements on the symmetric objects, as they do not correspond to the same 3D location.
Fig. \ref{fig:tless_qual} also includes some qualitative results of this experiment.
\paragraph{Manual covariance weight.}
Here again we set the covariance in the SLAM system's residuals to a manually-tuned weight.
The result in this case drops to a 13.8 recall, which further substantiates
the usefulness of our covariance estimate in the SLAM system.
Furthermore, we found that the optimal weight for this dataset was much larger than
that for YCB-Video, which is not surprising, but shows that removing the need
to manually tune weights by using the predicted covariance is a useful property of our system.
\paragraph{Comparing to single view.}
In this case, the single view result in Table \ref{table:tless_main} outperformed that from the SLAM system when it either used a manual covariance weight or no prior detections.
Since the single-view results use no prior detection, this shows that the keypoints considered independently for each view are reasonable, while the prior detection is crucial for tracking them across time.
\section{Conclusions and Future Work} \label{sec:conclusion}
In this work, we have designed a keypoint-based object-level SLAM system that provides globally consistent 6DoF pose estimates for objects with or without symmetry.
Our method can track semantic keypoints on symmetric objects consistently with the aid of the proposed prior detection, and the uncertainty that our network predicts has been shown to capture the true error of the predicted keypoints as well as greatly improve the object pose accuracy.
In the future, we would like to adapt our system to larger environments and generalize to class-level keypoint prediction with unseen instances.
\paragraph{Acknowledgement.}
We would like to thank the reviewers for their constructive feedback.
This work was partially supported by the University of Delaware
College of Engineering, the NSF (IIS-1924897), the ARL
(W911NF-19-2-0226, W911NF-20-2-0098), Bosch Research North America, and the Technical University of Munich.
{\small
\bibliographystyle{packages/ieee_fullname}
Throughout this paper, $M=\hy^n/\pi$ denotes a closed hyperbolic manifold with fundamental group $\pi$, and $N$ denotes an \emph{exotic smooth structure} (on $M$), i.e.\ a smooth manifold that is homeomorphic but not diffeomorphic to $M$. Define the \emph{symmetry constant} of $N$ as the supremum
\[s(N)=\sup_\rho \fr{|\Isom(N,\rho)|}{|\Isom(M)|},\]
over all Riemannian metrics $\rho$ on $N$. In this paper we study the possible values of this invariant. There is an ``easy" bound
\begin{equation}\label{eqn:easy-bound}\fr{1}{|\Isom(M)|}\le s(N)\le 1\end{equation}
that follows from Mostow rigidity and a theorem of Borel (explained below). Our main results follow:
\begin{mainthm}[maximal symmetry constant]\label{thm:symmetric}
Fix $n$ such that the group $\Theta_n$ of exotic spheres is nontrivial. For every $d>0$, there exists a closed hyperbolic manifold $M^n$ and an exotic smooth structure $N$ such that $|\Isom(M)|\ge d$ and $s(N)=1$.
\end{mainthm}
\begin{mainthm}[arbitrarily small symmetry constant]\label{thm:asymmetric}
Fix $n$ such that $\Theta_{n-1}\neq0$. For every $d>1$, there exists a closed hyperbolic manifold $M^n$ and an exotic smooth structure $N$ such that $s(N)\le\fr{1}{d}$.
\end{mainthm}
The hypothesis $\Theta_n\neq0$ is frequently true, e.g.\ $\Theta_{4k+3}\neq0$ for every $k\ge1$ and $\Theta_{4k+1}$ is nontrivial for any positive $k\notin\{1,3,7,15,31\}$. See \cite[\S7]{kervaire-milnor}, \cite[Appx.\ B]{milnor-stasheff}, and \cite[Thm.\ 1.3]{hill-hopkins-ravenel}.
The problem of computing $s(N)$ is related to two different problems in the study of transformation groups:
\begin{itemize}
\item {\it Degree of symmetry.} The degree of symmetry $\de(W)$ of a manifold $W$ is defined as the largest dimension of a compact Lie group with a smooth, effective action on $W$ \cite{hsiang-hsiang-degree-symmetry}.
When $W=\Si$ is an exotic sphere, computing $\de(\Si)$ is equivalent to computing the supremum \[s(\Si):=\sup_\rho \fr{\dim\Isom(\Si,\rho)}{\dim\Isom(S^n)},\] over all Riemannian metrics $\rho$. Again there is a bound $\fr{1}{\dim\SO(n+1)}\le s(\Si)\le 1$, but the upper bound is not optimal. For example, Hsiang--Hsiang \cite{hsiang,hsiang-hsiang} prove that if $\Si\neq S^n$ has dimension $n\ge40$, then $s(\Si)<\fr{n^2+8}{4(n^2+n)}<1/4$.
When $W$ is an aspherical manifold and $\pi_1(W)$ is centerless, then $\de(W)=0$, i.e.\ $W$ does not admit a nontrivial action of a connected Lie group \cite{borel-isometry-aspherical}. In this case it's fitting to define $\de(W)$ as the largest order of a finite group that acts effectively on $W$. With this definition, for $W=N$ an exotic smooth structure on a hyperbolic manifold, $\de(N)$ is closely related to $s(N)$; see equation (\ref{eqn:nielsen-constant}) below.
\item {\it Propagating group actions} \cite{adem-davis}.
One says that an $F$-action on $Y$ \emph{propagates across} a map $f:X\rightarrow Y$ if there is an $F$-action on $X$ and an equivariant map $X\rightarrow Y$ that is homotopic to $f$. In particular, for an exotic smooth structure $N$ on a hyperbolic manifold $M$, and for a subgroup $F<\Isom(M)$, one can ask whether or not the action of $F$ propagates across some homeomorphism $N\rightarrow M$. This problem, and its relation to harmonic maps, is discussed in Farrell--Jones \cite{FJ-nielsen}. Theorems \ref{thm:symmetric} and \ref{thm:asymmetric} can be viewed as positive and negative results about propagating group actions, and give partial answers to the question of \cite[pg.\ 487]{FJ-nielsen}.
\end{itemize}
{\it Remark.} One could consider refinements of the symmetry constant such as $s_{<0}(N)=\sup_\rho\fr{|\Isom(N,\rho)|}{|\Isom(M)|}$, where the supremum is over all metrics with sectional curvature $K<0$. In general, $s_{<0}(N)\le s(N)$, but computing $s_{<0}(N)$ is more difficult (e.g.\ it does not reduce to a Nielsen realization problem; see below). We improve upon Theorem \ref{thm:symmetric} by giving examples for which $s_{<0}(N)=s(N)=1$.
\begin{mainthm}[maximal symmetry, achieved by negatively-curved metric]\label{thm:isom-split}
Fix $n$, and assume that either $n$ is even or $|\Theta_n|$ is not a power of $2$. Given $d>0$, there exists a closed hyperbolic manifold $M^n$ and an exotic smooth structure $N$ such that $|\Isom(M)|\ge d$ and $N$ admits a Riemannian metric $\rho$ with negative sectional curvature so that $\Isom(N,\rho)\simeq\Isom(M)$.
\end{mainthm}
If $n=4k+3$, then $|\Theta_n|$ is divisible by $2^{2k+1}-1$; see
\cite[Appx.\ B]{milnor-stasheff}.
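As a concrete check of this divisibility (a worked instance only, not part of the argument; the identification $\Theta_7\simeq\Z/28$ is the Kervaire--Milnor computation):

```latex
% Worked instance for k = 1, i.e. n = 4k+3 = 7: here 2^{2k+1}-1 = 2^3-1 = 7,
% and \Theta_7 \simeq \Z/28, so 7 indeed divides |\Theta_7| = 28. In
% particular |\Theta_7| is not a power of 2, so n = 7 satisfies the
% hypothesis of Theorem \ref{thm:isom-split}.
\[
  k=1:\qquad 2^{2k+1}-1 = 7 \quad\text{and}\quad |\Theta_7| = 28 = 7\cdot 4 .
\]
```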
\subsection{Techniques}
The problem of determining $s(N)$ is related to a \emph{Nielsen realization problem}, which will be our main point of view. By Borel \cite{borel-isometry-aspherical} any compact Lie group that acts effectively on $N$ is finite; furthermore, any finite subgroup of $\Diff(N)$ acts faithfully on $\pi=\pi_1(N)$. Consequently, for every $\rho$, the isometry group $\Isom(N,\rho)$ is a subgroup of $\Out(\pi)=\Aut(\pi)/\pi$. Furthermore, if $\dim M\ge3$, then $\Out(\pi)\simeq\Isom(M)$ by Mostow rigidity. This explains the upper bound in (\ref{eqn:easy-bound}). A subgroup $F< \Out(\pi)$ is said to be \emph{realized by diffeomorphisms} when the following lifting problem (commonly called the Nielsen realization problem --- see e.g.\ \cite{block-weinberger} and \cite{mann-tshishiku}) can be solved:
\[\begin{xy}
(0,0)*+{\Diff(N)}="A";
(0,-15)*+{\Out(\pi)}="B";
(-25,-15)*+{F}="C";
{\ar"A";"B"}?*!/_4mm/{\Psi_N};
{\ar@{-->} "C";"A"}?*!/_3mm/{};
{\ar@{^{(}->} "C";"B"}?*!/_2mm/{};
\end{xy}\]
If $F<\Out(\pi)$ and $F\simeq\Isom(N,\rho)$ for some $\rho$, then the group $F$ is a fortiori realized by diffeomorphisms. Conversely, if $F<\Out(\pi)$ is realized by diffeomorphisms, then by averaging a metric, we find $\rho$ with $F<\Isom(N,\rho)$. Therefore,
\begin{equation}\label{eqn:nielsen-constant}s(N)=\max_F\fr{|F|}{|\Out(\pi)|},\end{equation} where the maximum is over the subgroups $F<\Out(\pi)$ that are realized by diffeomorphisms. Note that $s(N)\le\fr{|\im\Psi_N|}{|\Out(\pi)|}$.
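For instance, (\ref{eqn:nielsen-constant}) immediately recovers the easy bound (\ref{eqn:easy-bound}): the trivial subgroup is always realized, and no realized subgroup can exceed $\Out(\pi)$ itself.

```latex
% Recovering the easy bound from the Nielsen realization description of
% s(N): the trivial subgroup F = 1 is realized by any metric, and every
% realized F satisfies |F| <= |\Out(\pi)| = |\Isom(M)| (Mostow rigidity).
\[
  \fr{1}{|\Isom(M)|} = \fr{1}{|\Out(\pi)|}
  \;\le\; \max_F\fr{|F|}{|\Out(\pi)|} = s(N)
  \;\le\; \fr{|\Out(\pi)|}{|\Out(\pi)|} = 1 .
\]
```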
Farrell--Jones \cite{FJ-nielsen} studied the Nielsen realization problem for $N=M\#\Si$, where $M^n$ is a closed, oriented hyperbolic manifold and $\Si\in\Theta_n$ is a nontrivial exotic sphere. The main result of \cite{FJ-nielsen} states that if $M$ is stably parallelizable, $2\Si\neq0$ in $\Theta_n$, and $M$ admits an orientation-reversing isometry, then $\im\Psi_N<\Out(\pi)$ has index at least 2. In particular, $s(N)\le 1/2$ for these examples.
\vspace{.1in}
{\bf Symmetric exotic smooth structures.} Here we discuss the main components in the proof of Theorems \ref{thm:symmetric} and \ref{thm:isom-split}. We find our examples with $s(N)=1$ among the manifolds $N=M\#\Si$ studied by Farrell--Jones. Using (\ref{eqn:nielsen-constant}), observe that $s(N)=1$ if and only if $\Out(\pi)$ is realized by diffeomorphisms of $N$. In particular, we must find examples where $\Psi_N$ is surjective. The following results refine \cite[Thm.\ 1]{FJ-nielsen}.
\begin{thm}\label{thm:image}
Let $M^n$ be a closed, oriented hyperbolic manifold, let $\Si\in\Theta_n$ be a nontrivial exotic sphere, and let $N=M\#\Si$. Denote by $\Out^+(\pi)<\Out(\pi)$ the subgroup that acts trivially on $H_n(N)\simeq\Z$.
\begin{enum}
\item[(a)] The image $\im\Psi_N$ contains $\Out^+(\pi)$.
\item[(b)] Fix $\alpha\in\Out(\pi)\setminus\Out^+(\pi)$. If $2\Si=0$ in $\Theta_n$, then $\alpha\in\im\Psi_N$. The converse is true if $M$ is stably parallelizable.
\end{enum}
\end{thm}
Every closed hyperbolic manifold has a finite cover that is stably parallelizable \cite[pg.\ 553]{sullivan-stably-parallelizable}. As a consequence of Theorem \ref{thm:image}, if $2\Si=0$, then $\Psi_N$ is surjective, and if $2\Si\neq0$, then $\im\Psi_N=\Out^+(\pi)$. In any case, if $M$ does not admit an orientation-reversing isometry, then $\Psi_N$ is surjective. Farrell--Jones \cite{FJ-exotic-hyperbolic} show (implicitly) that reversing orientation is an obstruction to belonging to $\im\Psi_N$ when $2\Si\neq0$. According to Theorem \ref{thm:image}, this is the only obstruction.
Having identified $\im\Psi_N<\Out(\pi)$, we would like to know if this subgroup is realized by diffeomorphisms.
\begin{thm}\label{thm:diff-split}
Fix $N=M\#\Si$ as in Theorem \ref{thm:image}. Set $d=|\Isom^+(M)|$ and let $m\in\N$ be the size of the largest cyclic subgroup of $\Theta_n$ that contains $\Si$. Assume that $\gcd(d,m)$ divides $\fr{m}{|\Si|}$. Then $\Out^+(\pi)$ is realized by diffeomorphisms.
\end{thm}
The assumption $\gcd(d,m)\mid \fr{m}{|\Si|}$ guarantees that $\Si\in\Theta_n$ has a $d$-th root. This condition is satisfied, for example, whenever $|\Isom^+(M)|$ and $|\Si|$ are relatively prime.
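To make the root criterion concrete, here is a worked instance (illustration only) in $\Theta_7\simeq\Z/28$, using the Kervaire--Milnor computation:

```latex
% Worked instance of the d-th root criterion in \Theta_7 \simeq \Z/28.
% Take \Si a generator, so |\Si| = 28 and m = 28, hence m/|\Si| = 1.
%  d = 3: gcd(3,28) = 1 divides 1; indeed \Si = 3\Si' with \Si' = 19\Si,
%         since 3*19 = 57 is congruent to 1 (mod 28).
%  d = 2: gcd(2,28) = 2 does not divide 1; indeed a generator of \Z/28
%         has no square root, since doubles are exactly the even residues.
\[
  3\cdot(19\,\Si) = 57\,\Si = \Si \ \text{ in } \Z/28,
  \qquad
  2\cdot(\Z/28) = \{0,2,4,\dots,26\}\not\ni \Si .
\]
```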
If $\Out^+(\pi)$ is realized by diffeomorphisms of $N$, then $s(N)\ge 1/2$. By Theorems \ref{thm:image} and \ref{thm:diff-split}, if $M$ is stably parallelizable and $2\Si\neq0$, then $s(M\#\Si)$ is equal to $1/2$ or $1$, according to whether or not $M$ admits an orientation-reversing isometry. This completely solves the Nielsen realization problem in these cases.
Theorem \ref{thm:symmetric} reduces to Theorem \ref{thm:diff-split}. Fixing $\Si\neq S^n$, it's possible to find $M$ so that $|\Isom^+(M)|$ and $|\Si|$ are relatively prime, and $|\Isom^+(M)|$ can be made arbitrarily large. This is a consequence of a result of Belolipetsky--Lubotzky \cite{belolipetsky-lubotzky}: for any finite group $F$, there exists a closed hyperbolic $M^n$ with $\Isom(M)=F$. For their examples $\Isom(M)=\Isom^+(M)$. In particular, one can find examples where $\Psi_N:\Diff(N)\rightarrow\Out(\pi)$ is a split surjection with $|\Out(\pi)|$ arbitrarily large.
To prove Theorem \ref{thm:isom-split}, one would like to promote the action of $\Out^+(\pi)$ on $N=M\#\Si$ produced in Theorem \ref{thm:diff-split} to an action by isometries with respect to some negatively curved metric on $N$. Using a warped-metric construction of Farrell--Jones \cite{FJ-exotic-hyperbolic}, it suffices to find an $M$ that is stably parallelizable, has large injectivity radius, and such that $\Isom^+(M)$ acts freely on $M$. Arranging all of these conditions simultaneously becomes delicate, especially arranging that $M$ is stably parallelizable (which is desired because it guarantees that $M\#\Si$ is not diffeomorphic to $M$). Because of this difficulty we take a less direct approach when $\dim M$ is odd --- see Theorem \ref{thm:BL+}.
{\bf Asymmetric exotic smooth structures.} We explain the main ideas for proving Theorem \ref{thm:asymmetric}. For this, we consider exotic smooth structures $N=M_{c,\phi}$ obtained by removing a tubular neighborhood $S^1\times D^{n-1}\hookrightarrow M$ of a geodesic $c\subset M$ and gluing in $S^1\times D^{n-1}$ by a diffeomorphism $\mathbbm{1}\times\phi$ of $S^1\times S^{n-2}$, where $\phi\in\Diff(S^{n-2})$ is not isotopic to the identity. Farrell--Jones \cite{FJ-nonuniform} prove that $M_{c,\phi}$ is often an exotic smooth structure on $M$.
The strategy for proving Theorem \ref{thm:asymmetric} is to find $N=M_{c,\phi}$ and $F\simeq\Z/d\Z$ in $\Out(\pi)$ so that $\im\Psi_N\cap F=1$.
This condition implies that the index of $\im\Psi_N<\Out(\pi)$ is at least $|F|$, so $s(N)\le \fr{1}{|F|}$. To show $F\cap\im\Psi_N=1$, we study how the smooth structure on $M_{c,\phi}$ changes if we choose a different geodesic $c$. This is complementary to \cite[Thm.\ 1.1]{FJ-nonuniform}, which studies how the smooth structure changes when the geodesic is fixed and the isotopy class $[\phi]\in\pi_0\Diff(S^{n-2})\simeq\Theta_{n-1}$ is changed. In Theorem \ref{thm:non-concordant} we give a criterion that guarantees that $M_{c_1,\phi}$ and $M_{c_2,\phi}$ are not \emph{concordant}, i.e.\ there is no smooth structure on $M\times[0,1]$ that restricts to $M_{c_1,\phi}\sqcup M_{c_2,\phi}$ on the boundary. This is one of the main technical ingredients in the proof of Theorem \ref{thm:asymmetric}.
The proof of Theorem \ref{thm:asymmetric} works equally well when $M$ is nonuniform, but we won't discuss this further.
Theorem \ref{thm:asymmetric} proves that $s(N)$ may be arbitrarily close to 0, as $N$ varies over exotic smooth structures on all hyperbolic $n$-manifolds (when $\Theta_{n-1}\neq0$), but if we fix the homeomorphism type, we know that $s(N)\ge\fr{1}{|\Isom(M)|}$. It would be interesting to know if there are examples where this lower bound is achieved. Of course if $\Isom(M)=1$, then $s(N)=1=\fr{1}{|\Isom(M)|}$, so to make this interesting one should ask for examples such that $\Isom(M)$ is large.
\begin{qu}
Does there exist $n$ so that for every $d>0$, there exists a hyperbolic manifold $M^n$ and an exotic smooth structure $N$ such that $|\Isom(M)|\ge d$ and $s(N)=\fr{1}{|\Isom(M)|}$?
\end{qu}
Note that $s(N)=\fr{1}{|\Isom(M)|}$ if and only if $\Psi_N:\Diff(N)\rightarrow\Out(\pi)$ is trivial. Equivalently, $\Isom(N,\rho)=1$ for every Riemannian metric $\rho$.
{\it Section outline.} In \S\ref{sec:proofs1} we prove Theorems \ref{thm:image} and \ref{thm:diff-split} and discuss some related questions of interest. In \S\ref{sec:isom-split} we discuss the work of Belolipetsky--Lubotzky and use it to prove Theorem \ref{thm:isom-split}. Finally, in \S\ref{sec:asymmetric} we prove Theorem \ref{thm:asymmetric}; specifically, we study when two smooth structures $M_{c_1,\phi}$ and $M_{c_2,\phi}$ are concordant, which we use as an obstruction to Nielsen realization.
{\it Acknowledgements.} The authors would like to thank I.\ Belegradek and S.\ Cappell for helpful and interesting conversations. M.B.\ has been supported by the Special Priority Program SPP 2026 ``Geometry at Infinity'' funded by the Deutsche Forschungsgemeinschaft (DFG).
\section{Symmetry constant for $N=M\#\Si$}\label{sec:proofs1}
In this section we prove Theorems \ref{thm:image} and \ref{thm:diff-split}.
\subsection{The image of $\Psi_N:\Diff(N)\rightarrow\Out(\pi)$}\label{sec:image}
\begin{proof}[Proof of Theorem \ref{thm:image}]
Let $N=M\#\Si$ as in the theorem. It will be convenient to fix $p\in M$ and a small metric ball $B=B_r(p)$ where the connected sum is performed.
First we prove (a). For this we fix $\alpha\in\Out^+(\pi)\simeq\Isom^+(M)$ and define $f\in\Diff(N)$ so that $\Psi_N(f)=\alpha$. View $\alpha$ as an isometry of $M$, and choose an isotopy $\alpha_t\in\Diff(M)$ so that $\alpha_0=\alpha$, $\alpha_1(B)=B$, and $\alpha_1\rest{}{B}\in O(n)$ is an isometry of the ball; for example, if the radius $r$ is sufficiently small, then we can isotope $\alpha(B)$ to $B$ in $M$ through isometric embeddings, and then extend the isotopy of $B$ to an ambient isotopy.
Since $\alpha$ is orientation-preserving, $\alpha_1\rest{}{B}$ belongs to the identity component $SO(n)\subset O(n)$, and it is easy to see then that $\alpha_1$ induces a diffeomorphism $f:N\rightarrow N$; for example, isotope $\alpha_1\rest{}{B}$ further so that $\alpha_1\rest{}{B_{r/2}(p)}$ is the identity and perform the connected sum along $B_{r/2}(p)$ instead of $B_r(p)$. This proves part (a).
To prove (b), assume that $\alpha\in\Out(\pi)\setminus\Out^+(\pi)$. Viewing $\alpha$ as an orientation-reversing isometry of $M$, the argument above defines an orientation-reversing diffeomorphism $h:M\#\Si\rightarrow M\#\ov \Si$ that induces $\alpha$ (recall that for $A\#B$, if the identification of the attaching disk is changed by an orientation-reversing involution, then the result is $A\#\ov B$, where $\ov B$ is $B$ with the opposite orientation). If $2\Si=0$ in $\Theta_n$, then $\Si=\ov\Si$ (because $\ov\Si=-\Si$ in $\Theta_n$), so $h\in\Diff(N)$ and $\Psi_N(h)=\alpha$. This proves the first statement of (b). The converse is already contained in \cite[Thm.\ 1]{FJ-nielsen}. In short, if $\Psi_N(f)=\alpha$ for some $f\in\Diff(N)$, then $h\circ f$ is an orientation-preserving diffeomorphism $M\#\Si\rightarrow M\#\ov\Si$. When $M$ is stably parallelizable, this implies that $2\Si=0$ by \cite[\S2]{FJ-exotic-hyperbolic}.
\end{proof}
\subsection{Sections of $\Psi_N:\Diff(N)\rightarrow\im\Psi_N$}
\begin{proof}[Proof of Theorem \ref{thm:diff-split}]
Since $M$ is hyperbolic, $\Out(\pi)$ is realized by isometries of $M$ (by Mostow rigidity). Set $F=\Isom^+(M)$. Since $F$ is finite, there exists $p\in M$ whose stabilizer in $F$ is trivial. Choose a ball $B$ around $p$ whose $F$-translates are disjoint. By assumption, $\gcd(|F|,m)$ divides $\fr{m}{|\Si|}$, which implies that there exists $\Si'\in\Theta_n$ so that $\Si=|F|\cdot\Si'$. Then $N=M\#\Si$ is diffeomorphic to $M\#\Si'\#\cdots\#\Si'$, where $\Si'$ appears $|F|$ times. If we form the connected sum along the union of balls $F.B$, then we can extend the action of $F$ on $M\setminus F.B$ to a smooth $F$-action on $N=M\#\Si'\#\cdots\#\Si'$ by rigidly permuting the exotic spheres.
\end{proof}
{\it Remark.} One might think that the above argument could be used to define an action of $\Out(\pi)$ on $N$ under a similar constraint on $|\Out(\pi)|$ and $|\Si|$. This would contradict the fact that $\Psi_N$ is frequently not surjective when $M$ admits an orientation-reversing isometry. In the argument above, when $M$ admits an orientation-reversing isometry, one obtains an action of $\Out(\pi)$ on $M\# k\Si'\# k\ov{\Si'}$, where $k=|\Out(\pi)|/2$. But $M\# k\Si'\# k\ov{\Si'}$ is diffeomorphic to $M$, not $N$.
It would be interesting to know if $\Out^+(\pi)$ ever acts on $N=M\#\Si$ when $N$ has no ``obvious" symmetry:
\begin{qu}\label{q:no-obvious}
Is Theorem \ref{thm:diff-split} ever true without the assumption $\gcd(d,m)\mid \fr{m}{|\Si|}$? For example, fix $\alpha\in\Isom^+(M)$ of order $d$, and assume that $\alpha$ acts freely. Choose $\Si\in\Theta_n$ that does not admit a $d$-th root. Prove or disprove that the subgroup $\pair{\alpha}\simeq\Z/d\Z$ in $\Out^+(\pi)$ is realized by diffeomorphisms of $N=M\#\Si$.
\end{qu}
In this direction, it would be interesting to know how the choice of $\Si$ affects the answer to Question \ref{q:no-obvious}. For instance, in the study of the symmetry constant of $\Si\in\Theta_n$, there is a marked difference between (1) the standard sphere $\Si=S^n$, (2) the nontrivial exotic spheres that bound a parallelizable manifold $\Si\in bP_{n+1}\setminus\{S^n\}$, and (3) the remaining exotic spheres $\Si\in\Theta_n\setminus bP_{n+1}$. See \cite{hsiang-hsiang-degree-symmetry}. Does this distinction play a role in Question \ref{q:no-obvious}?
Note that the subtlety in Question \ref{q:no-obvious} disappears in the topological category: if $W$ is an aspherical manifold with $\pi_1(W)\simeq\pi$, then $\Homeo(W)\rightarrow\Out(\pi)$ is a split surjection because $W$ and $M$ are homeomorphic by the solution of Farrell--Jones to the Borel conjecture in this case; see \cite[Cor.\ 3 in \S5]{farrell-trieste}.
We mention another problem related to Question \ref{q:no-obvious}. For this, let $W^n$ be an exotic smooth structure on the torus $T^n$. There is a surjective homomorphism $\Diff^+(W)\rightarrow\Out^+(\pi_1(W))\simeq \Sl_n(\Z)$, and whether or not this homomorphism splits is unknown. One approach to this question is to focus on maximal abelian subgroups of $\Sl_n(\Z)$ and try to use the dynamics of Anosov diffeomorphisms; see \cite[Question 1.4]{fks-anosov} and also \cite{brhw}. Alternatively, an obstruction to realizing finite subgroups $F<\Sl_n(\Z)$ as in Question \ref{q:no-obvious} could provide an approach to the splitting problem for certain $W=T^n\#\Si$.
\section{Realization by isometries}\label{sec:isom-split}
In this section, we prove Theorem \ref{thm:isom-split}. The starting point of our argument is the following result from \cite[Thm.\ 1.1 and \S6.3]{belolipetsky-lubotzky}.
\begin{thm}[Belolipetsky--Lubotzky]\label{thm:BL}
For every $n\ge2$ and every finite group $F$, there exist infinitely many compact $n$-dimensional hyperbolic manifolds $M$ with $\Isom(M)=\Isom^+(M)\simeq F$.
\end{thm}
The main result we prove here is as follows.
\begin{thm}\label{thm:BL+}
Fix a finite group $F$ and fix $R>0$. Among the hyperbolic manifolds $M^n$ with $\Isom(M)=\Isom^+(M)\simeq F$, there exists $M$ such that
\begin{enum}
\item[(a)] the group $F$ acts freely on $M$,
\item[(b)] there is a cover $\widehat M\rightarrow M$ of degree $\ell\in\{1,2,4\}$ so that $\widehat M$ is stably parallelizable, and
\item[(c)] $\InjRad(M)>R$.
\end{enum}
Furthermore, for (b), if $n$ is even, then we can take $\ell=1$.
\end{thm}
Next we deduce Theorem \ref{thm:isom-split} from Theorem \ref{thm:BL+}.
\begin{proof}[Proof of Theorem \ref{thm:isom-split}]
Fix $d>0$. If $n$ is even, take any nontrivial $\Si\in\Theta_n$ and let $F$ be a group with $|F|\ge d$ and $\gcd(|F|,|\Si|)=1$. If $|\Theta_n|\neq 2^i$, take $\Si\in\Theta_n$ nontrivial of odd order and let $F$ be a $2$-primary group with $|F|\ge d$. In either case, there exists $\Si'\in\Theta_n$ with $\Si=|F|\cdot\Si'$.
By Belolipetsky--Lubotzky and Theorem \ref{thm:diff-split}, for every $M$ with $\Isom(M)\simeq\Isom^+(M)\simeq F$, the group $F$ acts by diffeomorphisms of $N=M\#\Si\simeq M\#\Si'\#\cdots\#\Si'$. We need to show we can choose $M$ and a negatively-curved metric $\rho$ on $N$ so that $F=\Isom(N,\rho)$ in $\Diff(N)$.
According to \cite[Prop.\ 1.3]{FJ-exotic-hyperbolic}, there is a constant $\tau_n>0$ so that if $M^n$ has injectivity radius $\InjRad(M)>\tau_n$, then $N=M\#\Si$ admits a negatively curved metric. This metric agrees with the hyperbolic metric on $M$ away from the disk where the connected sum is performed, and on that disk, the metric is radially symmetric. Choose $M$ satisfying Theorem \ref{thm:BL+} with $R=|F|\cdot\tau_n$ and such that $F$ acts freely on $M$, so the quotient $\ov{M}=M/F$ is a hyperbolic manifold. Furthermore,
\begin{equation}\label{eqn:injrad-cover}\InjRad(\ov M)\ge \InjRad(M)/|F|>\tau_n.\end{equation}
We prove this below. Now fix $r$ with $\tau_n<r<\InjRad(\ov M)$. From (\ref{eqn:injrad-cover}) it follows that for any ball $B=B_r(p)$ in $M$, the $F$-translates of $B$ are disjoint. Fix such a ball $B$. As in the proof of Theorem \ref{thm:diff-split}, write $\Si=|F|\cdot\Si'$ and consider $M_0=M\setminus F.B$. The manifold $N$ is obtained by gluing $\bb D^n$ to each boundary component of $M_0$ by a fixed diffeomorphism $f\in \Diff(S^{n-1})$.
Using the technique in \cite{FJ-exotic-hyperbolic}, we give $N$ a Riemannian metric $\rho$ that agrees with the hyperbolic metric on $M_0$ and is a warped-product metric on each $\bb D^n$. Since $r>\tau_n$, \cite[\S3]{FJ-exotic-hyperbolic} guarantees that the resulting metric has negative curvature. The group $F$ acts on $N$ as in Theorem \ref{thm:diff-split}, and by construction it acts by isometries for the metric $\rho$.
Now we explain the inequality (\ref{eqn:injrad-cover}). To see the first inequality, note that $2\InjRad(M)=\sys(M)$, where $\sys(M)$ is the \emph{systole}, i.e.\ the length of the shortest closed geodesic. Under a $d$-fold isometric cover $M\rightarrow\ov M$, if $\ov\ga$ is a closed geodesic of $\ov M$ and $\ga\subset M$ is a connected component of its preimage, then $\text{length}(\ga)\le d\cdot\text{length}(\ov\ga)$. It follows that $\sys(M)\le d\cdot\sys(\ov M)$.
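Combining the two facts just stated, with $d=|F|$ the covering degree and $\InjRad(M)>R=|F|\cdot\tau_n$, gives (\ref{eqn:injrad-cover}) explicitly:

```latex
% Chain of inequalities, with d = |F| the degree of the cover M -> M/F:
% 2*InjRad = sys on both manifolds, and sys(M) <= d*sys(M/F).
\[
  \InjRad(\ov M) = \tfrac12\sys(\ov M)
  \;\ge\; \tfrac{1}{2d}\,\sys(M)
  = \tfrac{1}{d}\,\InjRad(M)
  \;>\; \tfrac{1}{d}\,R = \tau_n .
\]
```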
It remains to show that $N$ is not diffeomorphic to $M$. When $n$ is even, by Theorem \ref{thm:BL+} we can assume that $M$ is stably parallelizable, and so $M$ is not diffeomorphic to $M\#\Si$ by Farrell--Jones \cite{FJ-exotic-hyperbolic}. In the general case, $M$ has a stably parallelizable cover of degree 2 or 4. Suppose for a contradiction that $M\#\Si$ is diffeomorphic to $M$. Lifting to the cover $\widehat M\rightarrow M$, we find that $\widehat M\#\ell \Si$ is diffeomorphic to $\widehat M$. Note that $\ell\Si\neq0$ in $\Theta_n$ since $\Si$ has odd order and $\ell\in\{2,4\}$. Since $\widehat M$ is stably parallelizable, by \cite[Prop.\ 1.2]{FJ-exotic-hyperbolic}, we conclude that $\widehat M\#\ell\Si$ is not diffeomorphic to $\widehat M$. This is a contradiction, so $N$ is not diffeomorphic to $M$ as desired. This completes the proof.
\end{proof}
Next we prove Theorem \ref{thm:BL+}. Fix a finite group $F$. In what follows $M=\hy^n/\pi$ will always denote one of the Belolipetsky--Lubotzky manifolds with $\Isom(M)=\Isom^+(M)\simeq F$. We have to explain why $M$ can be chosen to satisfy (a), (b), and (c). We will see that \cite[Thm.\ 2.1]{belolipetsky-lubotzky} already shows that (a) can be arranged, and that (b) can be arranged by modifying the proof of \cite[Prop.\ 2.2]{belolipetsky-lubotzky}. Part (c) requires a different, separate argument. All of these arguments involve passing to certain congruence covers, so once we explain why (a), (b), and (c) can be arranged \emph{individually}, it will be evident that they can be arranged \emph{simultaneously}.
\vspace{.1in}
{\bf Recollection of Belolipetsky--Lubotzky \cite{belolipetsky-lubotzky}.}
Here we summarize the main results of \cite{belolipetsky-lubotzky}, especially the aspects needed for our proof. Let $\Gamma$ be a finitely generated group. Assume that $\Delta\lhd\Gamma$ is finite-index, normal, and that $\De$ surjects onto a finite-rank free group:
\[1\rightarrow K\rightarrow\De\rightarrow F_r\rightarrow 1\] for some $r\ge2$. The conjugation action of $N_\Gamma(K)$ on $\De$ preserves $K$, so $N_\Gamma(K)$ acts on $F_r$ by automorphisms. Let $D<N_\Gamma(K)$ be the subgroup that acts on $F_r$ by inner automorphisms. With this setup, the main algebraic construction of \cite[Thm.\ 2.1]{belolipetsky-lubotzky} asserts that for any finite group $F$, there exists a finite-index subgroup $\pi<D$ with $N_\Gamma(\pi)/\pi\simeq F$ (in their notation, they use $M$ instead of $K$ and $B$ instead of $\pi$).
In the application to hyperbolic manifolds, define $\Gamma$ as the commensurator $\Comm(\Lam)$ of a Gromov--Piatetski-Shapiro \cite{gps} non-arithmetic lattice $\Lam<\SO(n,1)$. By work of Mostow and Margulis, $\Comm(\Lam)$ is a maximal discrete subgroup of $\Isom(\hy^n)$, so for any $\pi<\Gamma$,
\[N_\Gamma(\pi)/\pi\simeq N_{\Isom(\hy^n)}(\pi)/\pi\simeq\Isom(\hy^n/\pi).\]
Hence to find $M=\hy^n/\pi$ with $\Isom(M)\simeq F$, it suffices to find $\pi<\Gamma$ with $N_\Gamma(\pi)/\pi\simeq F$.
To define $\De$, denote $G=O(n,1)$ and let $\ca O_S$ be the ring of definition of $\Gamma$, so $\Gamma<G(\ca O_S)$. Let $\mathfrak p\subset\ca O_S$ be a prime ideal and denote by $p\in\N$ the prime with $(p)=\mathfrak p\cap\Z$. We only deal with prime ideals $\mathfrak p$ where $\ca O_S/\mathfrak p\simeq\bb F_p$. Equivalently, $p$ splits completely in $\ca O_S$; there are infinitely many such $\mathfrak p$ by Chebotarev's theorem. Reduction mod $\mathfrak p$ defines a map $\alpha_{\mathfrak p}:\Gamma\rightarrow G(\ca O_S/\mathfrak p)\simeq O_{n+1}(p)$ to an orthogonal group over $\bb F_p$. Define $\Gamma(\mathfrak p)=\ker(\ov\alpha_{\mathfrak p})$, where $\ov\alpha_{\mathfrak p}:\Gamma\rightarrow O_{n+1}(p)\rightarrow\PO_{n+1}(p)$. The group $\De$ is defined as $\Lam\cap\Gamma(\mathfrak p)$.
To ensure $\De\lhd\Gamma$, we want $\Lambda\lhd\Gamma$. In order to arrange this, after we've defined $\Gamma$, we replace $\Lambda$ with a finite-index subgroup (still denoted $\Lambda$) so that $\Lambda\lhd\Gamma$ (note that this replacement does not change $\Comm(\Lam)$). The group $\De$ surjects onto a free group: by the cut-and-paste nature of the construction of \cite{gps}, $\Lam$ is either an amalgamated product or an HNN extension. For definiteness assume $\Lam=\Lam_1*_{\Lam_3}\Lam_2$. Denoting $\Om_{n+1}(p)=[O_{n+1}(p),O_{n+1}(p)]$, by strong approximation, for all but finitely many $\mathfrak p$, the image of $\ov\alpha_{\mathfrak p}:\Lam\rightarrow \PO_{n+1}(p)$ contains $Q_p:=\POm_{n+1}(p)$, and the same is true for the restrictions to $\Lam_1,\Lam_2$. Without loss of generality, we may assume $\im(\ov\alpha_{\mathfrak p})=Q_p$ (replace $\Lambda$ by the intersection of all index-2 subgroups of $\Lambda$). Denoting $T_p=\ov\alpha_{\mathfrak p}(\Lam_3)$, the map $\ov\alpha_{\mathfrak p}$ factors through surjective maps $\Lam\xra{s} Q_p*_{T_p}Q_p\xra{t} Q_p$. Then $\De=\ker(t\circ s)$ surjects onto $\ker t$, which is a free group of rank $r\ge2$ \cite[Prop.\ 3.4]{belolipetsky-lubotzky}.
\vspace{.1in}
{\bf Proof of Theorem \ref{thm:BL+}.} Fix a finite group $F$. We use the setup of the preceding paragraphs. In particular, $\pi<D$ will always denote a subgroup with $N_\Gamma(\pi)/\pi\simeq F$, and our aim is to show that $\pi$ can be chosen in such a way that $M=\hy^n/\pi$ has properties (a), (b), and (c).
{\bf Part (a).} By \cite[pg.\ 465]{belolipetsky-lubotzky} the group $N_\Gamma(\pi)$ is contained in $D=\ker\big[N_\Gamma(K)\rightarrow\Out(F_r)\big]$, and \cite[\S5]{belolipetsky-lubotzky} shows that $D$ is contained in $\Gamma(\mathfrak p)$, which is torsion-free for $p$ large. It follows that $\Isom(M)\simeq N_\Gamma(\pi)/\pi$ acts freely on $M$: if $x\in M$ is fixed by $g\neq 1\in\Isom(M)$, then $g$ lifts to $\til g\in N_\Gamma(\pi)$ that acts on $\hy^n$ with a fixed point, but this contradicts the fact that $N_\Gamma(\pi)$ is torsion-free.
{\bf Part (b).} As mentioned in part (a), we can arrange that $\pi<\Gamma(\mathfrak p)$. Our main task for part (b) will be to show that we can also arrange that $\pi<\Gamma(\mathfrak p)\cap\Gamma(\mathfrak q)$, where $\mathfrak p,\mathfrak q\subset\ca O_S$ are prime ideals with $\ca O_S/\mathfrak p\simeq\bb F_p$ and $\ca O_S/\mathfrak q\simeq\bb F_q$ for distinct primes $p,q$. Before we do this, we explain why this is enough to conclude that $M=\hy^n/\pi$ has the desired stably parallelizable cover.
Suppose that $M=\hy^n/\pi$ with $\pi<\Gamma(\mathfrak p)\cap\Gamma(\mathfrak q)$. We will show that there is a cover $\widehat M\rightarrow M$ of degree 1, 2, or 4 so that $\widehat M$ has a tangential map $\widehat M\rightarrow S^n$, and hence $\widehat M$ is stably parallelizable. The group $\pi$ is a subgroup of the identity component $\SO_0(n,1)<\SO(n,1)$. The inclusions $\pi\hookrightarrow\SO_0(n,1)\hookrightarrow\SO_{n+1}(\mathbb C)$ define flat bundles over $M$.
By Deligne--Sullivan \cite{deligne-sullivan}, there is a particular cover $\widehat M\rightarrow M$ so that the map $\widehat M\rightarrow M\rightarrow B\SO_{n+1}(\mathbb C)$ is homotopically trivial. This cover is the one corresponding to the subgroup $\widehat\pi=\pi\cap\ker(\alpha_{\mathfrak p})\cap\ker(\alpha_{\mathfrak q})$ of $\pi$. Note that the index $[\pi:\widehat\pi]$ is 1, 2, or 4 because $\ker(\alpha_{\mathfrak p})$ has index 2 in $\ker(\ov\alpha_{\mathfrak p})$.
Furthermore, if $n$ is even, then $\SO_{n+1}(p)< O_{n+1}(p)$ has trivial center, so $\SO_{n+1}(p)\simeq\PSO_{n+1}(p)$, which implies that $\widehat\pi=\pi$.
Since there is a fibration
\[\SO_{n+1}(\mathbb C)/\SO_0(n,1)\rightarrow B\SO_0(n,1)\rightarrow B\SO_{n+1}(\mathbb C)\]
and $\widehat M\rightarrow B\SO_0(n,1)\rightarrow B\SO_{n+1}(\mathbb C)$ is trivial, the map $\widehat M\rightarrow B\SO_0(n,1)$ lifts to $\SO_{n+1}(\mathbb C)/\SO_0(n,1)$, which is homotopy equivalent to $\SO(n+1)/\SO(n)\simeq S^n$. This map $\widehat M\rightarrow S^n$ is a tangential map by Okun \cite[\S5]{okun}. This completes the construction of the stably parallelizable cover.
Now we show we can find $M$ with isometry group $F$ and fundamental group $\pi<\Gamma(\mathfrak p)\cap\Gamma(\mathfrak q)$. As above, fix $\mathfrak p\subset\ca O_S$ such that $\ov\alpha_{\mathfrak p}:\Lambda\rightarrow Q_p$ is surjective and also $\ov\alpha_{\mathfrak p}(\Lam_1)=\ov\alpha_{\mathfrak p}(\Lam_2)=Q_p$.
{\it Observation.} Fix a prime ideal $\mathfrak q\subset\ca O_S$ and denote by $q\in\N$ the prime with $(q)=\mathfrak q\cap\Z$. If the image of $\ov\alpha_{\mathfrak q}:\Lam(\mathfrak p)\rightarrow\PO_{n+1}(q)$ contains $Q_q$, then the image of $\ov\alpha_{\mathfrak p,\mathfrak q}: \Lambda\to \PO_{n+1}(p)\times\PO_{n+1}(q)$ defined by
\begin{equation*}
\ov\alpha_{\mathfrak p,\mathfrak q}(g)=(\ov\alpha_{\mathfrak p}(g),\ov\alpha_{\mathfrak q}(g))
\end{equation*}
contains $Q_p\ti Q_q$. Indeed, if $(x,y)\in Q := Q_p\times Q_q$, then $\ov\alpha_{\mathfrak p}(g)=x$ for some $g\in\Lambda$, and $\ov\alpha_{\mathfrak q}(h)=\ov\alpha_{\mathfrak q}(g)^{-1}y$ for some $h\in\Lambda(\mathfrak p)$. Thus $\ov\alpha_{\mathfrak p,\mathfrak q}(gh)=(x,y)$.
We use the observation together with the strong approximation theorem to conclude that for all but finitely many of the infinitely many primes $q$ that split completely, the image of each of $\Lam$, $\Lam_1$, and $\Lam_2$ in $\PO_{n+1}(p)\ti\PO_{n+1}(q)$ contains $Q_p\ti Q_q$. As before, we may assume (by replacing $\Lam$ with a finite-index subgroup) that $\ov\alpha_{\mathfrak p,\mathfrak q}(\Lam)=Q_p\ti Q_q$.
Set $T=\ov\alpha_{\mathfrak p,\mathfrak q}(\Lambda_3)$. The subgroup $T<Q$ has the property that there is no nontrivial $N\lhd Q$ with $N\leq T$ (compare \cite[\S3.2]{belolipetsky-lubotzky}). This holds essentially for the same reasons it holds for $T_p<Q_p$ (see \cite[\S5]{belolipetsky-lubotzky}). In our case, we only need to notice that $T\leq \PO_{n}(p)\times \PO_{n}(q)$, while the only nontrivial proper normal subgroups of $Q$ are $Q_p\times 1$ and $1\times Q_q$ (the latter fact holds because $Q_p$ and $Q_q$ are simple if $p,q$ are sufficiently large and $Q_p\not\simeq Q_q$).
Setting $\De=\ker(\ov\alpha_{\mathfrak p,\mathfrak q})=\Lam\cap\Gamma(\mathfrak p)\cap\Gamma(\mathfrak q)$, we may repeat the argument of \cite[\S5]{belolipetsky-lubotzky} to conclude that $\pi<D$ is contained in $\Gamma(\mathfrak p)\cap\Gamma(\mathfrak q)$. This finishes part (b).
{\bf Part (c).} We explain why we can arrange for $M$ to have isometry group $F$ and arbitrarily large injectivity radius. This will follow (using Proposition \ref{prop:injrad} below) from the fact that $\pi$ is a subgroup of the matrix group $\Sl_m(\ca O_S)$ with coefficients in the ring $\ca O_S$ of $S$-integers in a number field $L$. Before proving Proposition \ref{prop:injrad} we recall a few facts about $\ca O_S$. Here $\ca O$ is the ring of integers in $L$, $S$ is a finite set of places (i.e.\ equivalence classes of absolute values on $L$) that includes all of the Archimedean places, and $\ca O_S=\{x\in L: t(x)\le 1\text{ for all places }t\notin S\}$.
For our proof of Proposition \ref{prop:injrad}, we recall the description of the set of all places of $L$. This is the content of Ostrowski's theorem \cite[Ch.\ II]{janusz}. The Archimedean places all come from embeddings of $L$ into $\R$ or $\mathbb C$. The non-Archimedean places come from prime ideals $\mathfrak q\subset\ca O$ as follows. Given $\mathfrak q$, for $a\in\ca O$ define $\nu_{\mathfrak q}(a)\in\Z_{\ge0}$ as the multiplicity of $\mathfrak q$ appearing in the prime factorization of the ideal $(a)\subset\ca O$; this is extended to $x=\fr{a}{b}\in L$ by $\nu_{\mathfrak q}(x)=\nu_{\mathfrak q}(a)-\nu_{\mathfrak q}(b)$. Denoting the norm $N(\mathfrak q)=|\ca O/\mathfrak q|$, the function $t_{\mathfrak q}(x)=N(\mathfrak q)^{-\nu_{\mathfrak q}(x)}$ defines a place of $L$. The set of all places (normalized in the way we have described) satisfies the \emph{product formula} $\prod t(x)=1$ for any $x\in L^\ti$ \cite[Ch.\ II, \S6]{janusz}.
For future reference, observe that if $a\in\ca O$ and $\mathfrak q\nmid a$, then $t_{\mathfrak q}(a)=1$, so only finitely many terms in the product $\prod t(x)$ differ from 1. Note also that if $(a)=\mathfrak q_1^{n_1}\cdots\mathfrak q_f^{n_f}$ is the prime factorization, then $N(a)=N(\mathfrak q_1)^{n_1}\cdots N(\mathfrak q_f)^{n_f}$, so by the product formula, $N(a)$ is also equal to the product $\prod_{t\mid\infty} t(a)$ over Archimedean places of $L$.
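To illustrate the normalization in the simplest case, take $L=\Q$ and $x=p$ a rational prime. The only places with $t(p)\neq1$ are the Archimedean place and the $p$-adic place, and
\[t_\infty(p)\cdot t_{(p)}(p)=p\cdot p^{-1}=1,\]
in agreement with the product formula.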
\begin{prop}[Injectivity radius growth in congruence covers]\label{prop:injrad}
Let $V$ be a closed aspherical Riemannian manifold with fundamental group $\pi$. Suppose there exists an injection
$\pi\hookrightarrow\Sl_m(\ca O_S)$, where $\ca O_S$ is the ring of $S$-integers in a number field $L$. For an ideal $\mathfrak k\subset\ca O$, denote
\[\Sl_m(\mathfrak k)=\ker\big[\Sl_m(\ca O_S)\rightarrow\Sl_m(\ca O_S/\mathfrak k\ca O_S)\big]\]
and let $V_{\mathfrak k}$ be the cover of $V$ with fundamental group $\pi(\mathfrak k):=\pi\cap\Sl_m(\mathfrak k)$. Then there are constants $C,D$ (depending only on $V$, $m$, and $L$, but not $\mathfrak k$) so that $\InjRad(V_{\mathfrak k})\ge C\log k+D$, where $(k)=\mathfrak k\cap\Z$.
\end{prop}
This statement is similar to the ``Elementary Lemma" of \cite[\S3.C.6]{gromov-systole}. The proof below is based on, and has some overlap with, the argument in \cite[\S4]{guth-lubotzky}.
\begin{proof}[Proof of Proposition \ref{prop:injrad}]
Let $\wtil V$ be the universal cover of $V$.
Fix the ideal $\mathfrak k$, and set $R=\InjRad(V_{\mathfrak k})$. By definition of $\InjRad$, there exist $y,z\in \wtil V$ and $\eta\in\pi(\mathfrak k)$ so that $y,\eta y$ are both contained in the ball $B_{2R}(z)$. Then $d(y,\eta y)\le 4R$; equivalently,
\[R\ge\fr{1}{4} d(y,\eta y).\] To prove the proposition, we will give a lower bound on $d(y,\eta y)$.
Since $V$ is compact, $\pi$ is finitely generated. Consider the generating set associated to the Dirichlet fundamental domain $\ca D$ centered at $y$ for the action of $\pi$ on $\wtil V$ (generators are those $g\in\pi$ for which $g(\ca D)\cap \ca D\neq\vn$). For the word length $w:\pi\rightarrow\Z_{\ge0}$ associated to this generating set, there is a bound $w(\eta)\le c_1\cdot\big[d(y,\eta y)+1\big]$, obtained as follows. Take a geodesic $\ga$ connecting $y,\eta y$, and cover it by $\lfloor d(y,\eta y)\rfloor+1$ balls of radius 1. There is $c_1>0$ so that each ball intersects at most $c_1$ translates of $\ca D$, so $\ga$ intersects at most $c_1\cdot\big[d(y,\eta y)+1\big]$ translates of $\ca D$. This proves the aforementioned bound, which is equivalent to
\[d(y,\eta y)\ge (1/c_1)\cdot w(\eta)-1.\]
To finish the proof, we prove
\begin{equation}\label{eqn:bound-word-length}
w(\eta)\ge c_2\log k+c_3\end{equation} for some constants $c_2,c_3$. Now we use the assumptions that $\pi<\Sl_m(\ca O_S)$ and $\eta\in\Sl_m(\mathfrak k)$. For $X=(x_{ij})\in \Sl_m(L)$ and $s\in S$, define
\[|X|_s=\max_{i,j} s(x_{ij})\>\>\>\text{ and }\>\>\> |X|_S=\sum_{s\in S}|X|_s.\] By the formula for matrix multiplication, $|XY|_S\le m\>|X|_S|Y|_S$. Write $\eta=X_1\cdots X_{w(\eta)}$ with $X_i\in\Sl_m(\ca O_S)$ belonging to our chosen generating set of $\pi$. Then $|\eta|_S\le m^{w(\eta)-1}\cdot M^{w(\eta)}$, where $M$ is the maximum value of $|\cdot|_S$ on generators of $\pi$. On the other hand, we will show that $|\eta|_S\ge \ell\cdot k^{1/\ell}-\ell$, where $(k)=\mathfrak k\cap\Z$ and $\ell=|S|$. Then altogether we have
\[\ell\cdot k^{1/\ell}-\ell\le |\eta|_S\le m^{w(\eta)-1}\cdot M^{w(\eta)},\] which gives a bound as in (\ref{eqn:bound-word-length}) after taking log. Note that $\log(k^{1/\ell}-1)=\log(k^{1/\ell})+\log(1-k^{-1/\ell})$ and $\log(1-k^{-1/\ell})$ is bounded below by the constant $\log(1-2^{-1/\ell})$.
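Spelling this out (one possible choice of constants, assuming $mM>1$ and $k\ge2$), taking logarithms in the last display gives
\[\log\ell+\fr{1}{\ell}\log k+\log(1-k^{-1/\ell})\le w(\eta)\log(mM)-\log m,\]
so (\ref{eqn:bound-word-length}) holds with $c_2=\big(\ell\log(mM)\big)^{-1}$ and a constant $c_3$ depending only on $\ell$, $m$, and $M$.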
Now we prove $|\eta|_S\ge\ell\cdot k^{1/\ell}-\ell$. Since $\eta\neq\id$, some entry $\eta_{ij}$ has the form $1+x$ or $x$, where $x\in\mathfrak k\ca O_S$ is nonzero. Write $x=\fr{a}{b}\cdot x_1$, where $x_1\in\mathfrak k$ and the only primes dividing $a,b$ are primes in $S$. By the product formula
\[\prod_{s\in S}s(a/b)=1\>\>\>\text{ and }\>\>\>\prod_{s\in S}s(x_1)=N(x_1).\] Furthermore, $N(x_1)\ge N(\mathfrak k)\ge k$ because $(x_1)\subset\mathfrak k$ and $\Z/k\Z\subset\ca O/\mathfrak k$. Therefore, $\prod_{s\in S}s(x)\ge k$.
Next we show that $\prod_{s\in S}s(x)\ge k$ implies that $|x|_S:=\sum_{s\in S} s(x)\ge \ell k^{1/\ell}$. This follows from some calculus: we want to minimize the function $\phi(x_1,\ld,x_\ell)=x_1+\cdots}\newcommand{\ld}{\ldots+x_\ell$ under the constraint $x_1\cdots}\newcommand{\ld}{\ldots x_\ell\ge k$. Since $\phi$ has no critical points, the minimum is achieved on the set $x_1\cdots}\newcommand{\ld}{\ldots x_\ell=k$. Using Lagrange multipliers, one finds that $\phi$ has a unique minimum at $x=(k^{1/\ell},\ld,k^{1/\ell})$ and the minimum value is $\phi(x)=\ell\cdot k^{1/\ell}$.
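Alternatively, the same lower bound is an instance of the inequality of arithmetic and geometric means:
\[\fr{x_1+\cdots+x_\ell}{\ell}\ge(x_1\cdots x_\ell)^{1/\ell}\ge k^{1/\ell},\]
which gives $x_1+\cdots+x_\ell\ge\ell\cdot k^{1/\ell}$ directly.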
Since $\eta_{ij}$ is either $x$ or $1+x$, in either case $|\eta_{ij}|_S\ge\sum_{s\in S}[s(x)-1]\ge \ell\cdot k^{1/\ell}-\ell$. Combining everything we conclude that
\[|\eta|_S\ge|\eta_{ij}|_S\ge\ell\cdot k^{1/\ell}-\ell.\]
This completes the proof.
\end{proof}
\section{Symmetry constant for $N=M_{c,\phi}$}\label{sec:asymmetric}
In this section we prove Theorem \ref{thm:asymmetric}. As mentioned in the introduction, the goal is to find smooth structures $N$ and large subgroups $F<\Out(\pi)$ so that $\im\Psi_N\cap F=1$. To this end, we consider the exotic smooth structures $N=M_{c,\phi}$ studied in \cite{FJ-nonuniform}. Here $M$ is hyperbolic, $c$ is a simple closed geodesic, and $\phi\in\Diff(S^{n-2})$. Choosing a framing $\iota:S^1\times D^{n-1}\rightarrow M$ of $c$, the manifold $M_{c,\phi}$ is defined as the quotient of
\[\big(S^1\ti D^{n-1}\big) \sqcup \big(M\setminus \iota(S^1\ti\text{int}(D^{n-1}))\big)\]
by the identification $(x,v)\leftrightarrow \iota(x,\phi(v))$ for $(x,v)\in S^1\ti S^{n-2}$.
We prove Theorem \ref{thm:asymmetric} in 3 steps.
\subsection{Non-concordant smooth structures (Step 1)}
Our mechanism for constructing $\alpha\in\Out(\pi)$ such that $\alpha\notin\im\Psi_N$ is Theorem \ref{thm:non-concordant} below. Before we state it, we recall some facts about smooth structures that will be used here and in the next subsection.
{\bf Smoothings of topological manifolds.} By a smooth manifold $N$ we mean a topological manifold with a smooth atlas of charts $\R^n\supset U_\alpha \rightarrow N$ (which we call a \emph{smooth structure}). If $N$ (resp.\ $M$) is a smooth (resp.\ topological) manifold and $h:N\rightarrow M$ is a homeomorphism, then we obtain a smooth structure on $M$ by pushforward. The map $h$ is called a \emph{marking}. Two markings $h_0:N_0\rightarrow M$ and $h_1:N_1\rightarrow M$ determine the same smooth structure on $M$ if there is a diffeomorphism $g:N_0\rightarrow N_1$ so that $h_1g=h_0$.
Two smooth structures $N_0,N_1$ on $M$ are \emph{concordant} if there exists a smooth structure on $M\times[0,1]$ whose restriction to $M\times\{i\}$ is $N_i$ for $i=0,1$. The main fact about concordances that we use is that classifying concordance classes reduces to homotopy theory: there is a bijection between the set of concordance classes of smooth structures on $M$ and the set of based homotopy classes of maps $[M,\Top/O]$.
As remarked in \cite[\S1]{FJ-nonuniform}, the concordance class of the smooth structure $M_{c,\phi}$ is independent of the choice of framing and is also independent of the choice of representative of the isotopy class $[\phi]\in\pi_0\Diff(S^{n-2})$.
\begin{thm}[non-concordant smooth structures]\label{thm:non-concordant}
Let $M$ be a smooth closed manifold. Assume $M$ is stably parallelizable. Let $c_1,\ld,c_\ell$ be disjoint closed curves in $M$. Assume that there exists a homomorphism $\De:\pi_1(M)\rightarrow\Z^\ell$ such that $\De(c_1),\ld,\De(c_\ell)$ generate $\Z^\ell$. For any nontrivial isotopy class $[\phi]\in\pi_0\Diff(S^{n-2})$, no two of the smooth structures $M_{c_1,\phi},\ldots,M_{c_\ell,\phi}$ are concordant.
\end{thm}
\begin{proof}
Given a codimension-0 embedding $\la:X\rightarrow Y$ of open manifolds, we denote by $\la'$ the induced map of 1-point compactifications, obtained by collapsing $Y\setminus X$ to a point. Also $X_+$ denotes the space $X$ with a disjoint basepoint.
Let $\iota_1,\ld,\iota_\ell:S^1\times D^{n-1}\hookrightarrow M$ be framings of $c_1,\ld,c_\ell$. Use $\iota_1,\ldots,\iota_\ell$ to define an embedding $\iota:\coprod_\ell S^1\ti D^{n-1}\hookrightarrow M$. The induced collapse map has the form $\iota':M\rightarrow\bigvee_\ell \Si^{n-1}(S^1_+)$.
Consider the composition
\[\hat\iota:M_+\rightarrow M\xra{\iota'}\bigvee_\ell\Si^{n-1}(S^1_+)\rightarrow \bigvee_\ell S^{n-1},\] where the last map is induced from the obvious maps $\Si^{n-1}(S^1_+)\simeq S^n\vee S^{n-1}\rightarrow S^{n-1}$. It suffices to show that the induced map
\[\hat\iota^*: \big[\bigvee_\ell S^{n-1},\>\Top/O\big]\rightarrow\big[M_+,\>\Top/O\big]\]
is injective. This is because, under the bijection between concordance classes of smooth structures on $M$ and $[M,\Top/O]$, the concordance class of $M_{c_j,\phi}$ corresponds to the map
\[M\xra{\hat\iota} \bigvee_\ell S^{n-1}\xra{\pi_j} S^{n-1}\xra{\hat\phi} \Top/O,\]
where $\pi_j$ collapses every sphere other than the $j$-th sphere to the basepoint, and $\hat\phi$ corresponds to $[\phi]\in\pi_0\Diff(S^{n-2})$ under the bijections $[S^{n-1},\Top/O]\simeq \Theta_{n-1}\simeq\pi_0\Diff(S^{n-2})$.
To show that $\hat\iota^*$ is injective, we use that $\Top/O$ is an infinite loop space. In particular, there exists a space $Y$ such that $\Om^{n+\ell}Y\simeq\Top/O$, and for any space $A$, there are natural bijections $[A,\Top/O]\simeq[A,\Om^{n+\ell}Y]\simeq[\Si^{n+\ell}A,Y]$. This allows us to view $\hat\iota^*$ as a map
\[\big[\bigvee_\ell S^{2n+\ell-1},\,Y\big]\rightarrow\big[\Si^{n+\ell}(M_+),\,Y\big]. \]
This map can also be obtained by considering the embedding $\iota\times 1:\left(\coprod_\ell S^1\ti D^{n-1}\right)\times D^{n+\ell}\hookrightarrow M\times D^{n+\ell}$ and the composition
$\widehat{\iota\ti 1}:\Si^{n+\ell}(M_+)\xra{(\iota\times1)'} \bigvee_\ell\Si^{2n+\ell-1}(S^1_+)\rightarrow\bigvee_\ell S^{2n+\ell-1}$, similar to before.
The homomorphism $\De$ is induced by a map $\de:M\rightarrow T^\ell$ to the torus, and we can assume $\de$ is smooth. Take a Whitney embedding $\ep:M\rightarrow D^{2n}$, and consider the induced embedding $\de\ti\ep:M\rightarrow T^\ell\ti D^{2n}$. Since $M$ is stably parallelizable, $M\subset T^\ell\ti D^{2n}$ has trivial normal bundle $\nu_M\simeq\ep^{n+\ell}$. (To see this, observe that $TM\oplus\nu_M\simeq\ep^{2n+\ell}$. Since $M$ is stably parallelizable, $TM\oplus\ep\simeq\ep^{n+1}$, which implies that $\ep^{n+1}\oplus\nu_M\simeq\ep^{2n+\ell+1}$. Since $\text{rank}(\nu_M)>\dim M$, this implies that $\nu_M$ is the trivial bundle by \cite[Lem.\ 3.5]{kervaire-milnor}.) Then there is an embedding $\ka:M\ti D^{n+\ell}\rightarrow T^\ell\ti D^{2n}$.
Consider now the composition
\[\label{eqn:torus-collapse} p:\Si^{2n}(T^\ell_+)\xra{\ka'} \Si^{n+\ell}(M_+)\xra{\widehat{\iota\times1}} \bigvee_\ell S^{2n+\ell-1}.\]
To prove the theorem, we show that the induced map
\[p^*:\big[\bigvee_\ell S^{2n+\ell-1},Y\big]\rightarrow\big[\Si^{2n}(T^\ell_+),Y\big]\]
is injective. First observe the homotopy equivalence $\Si^{2n}(T^\ell_+)\sim \bigvee_{i=0}^\ell{\ell\choose i}S^{2n+i}$.
This follows from the general homotopy equivalences $\Si(A_+)\sim \Si A\vee S^1$ and $\Si(A\times B)\sim \Si A\vee\Si B\vee \Si(A\wedge B)$. Since $\De(c_1),\ldots,\De(c_\ell)$ generate $\pi_1(T^\ell)$, the inclusion of the $\ell$ copies of $S^{2n+\ell-1}$ in $\bigvee_{i=0}^\ell{\ell\choose i}S^{2n+i}$ is a right inverse to $p$, up to homotopy. This implies that $p^*$ is injective.
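To illustrate the splitting in the simplest case $\ell=1$, the two equivalences give
\[\Si^{2n}(S^1_+)\sim \Si^{2n}S^1\vee S^{2n}\simeq S^{2n+1}\vee S^{2n},\]
the $\ell=1$ instance of the wedge decomposition above.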
\end{proof}
\subsection{Outer automorphisms not realized by diffeomorphisms (Step 2)}
Next we apply Theorem \ref{thm:non-concordant} to give a criterion that guarantees that $\alpha\in\Out(\pi)$ is not in the image of $\Psi_N:\Diff(N)\rightarrow\Out(\pi)$.
\begin{thm}[obstruction to Nielsen realization]\label{thm:non-realize}
Let $M$ be a hyperbolic manifold and fix a simple closed geodesic $c$ in $M$. Let $N=M_{c,\phi}$ be an exotic smooth structure. Assume that $\alpha\in\Isom(M)\simeq\Out(\pi)$ is such that $M_{c,\phi}$ and $M_{\alpha(c),\phi}$ are not concordant. Then $\alpha\notin\im\Psi_N$.
\end{thm}
\begin{proof}
Suppose for a contradiction that there is a diffeomorphism $f:N\rightarrow N$ such that $\Psi_N(f)=\alpha$.
Set $N_0=N$ and $N_1=M_{\alpha(c),\phi}$, and observe that $\alpha:M\rightarrow M$ induces a diffeomorphism $g_1:N_0\rightarrow N_1$. Define $g_2=g_1\circ f^{-1}$. Letting $h_i:N_i\rightarrow M$ denote the obvious homeomorphisms, the composition
\[M\xra{h_0^{-1}} N_0\xra{g_2} N_1\xra{h_1}M\]
induces the identity on $\pi$ and is therefore homotopic to the identity. From this homotopy, we obtain a homotopy equivalence $H_0:M\ti[0,1]\rightarrow M\ti[0,1]$, which restricts to a homeomorphism on the boundary. By \cite[Cor.\ 10.6]{FJ-mostow}, $H_0$ is homotopic rel boundary to a homeomorphism $H$. Then the composition
\[N_0\times[0,1]\xra{h_0\ti\text{id}}M\ti[0,1]\xra{H}M\ti[0,1]\]
defines a smooth structure on $M\times[0,1]$ whose restriction to $M\ti\{i\}$ is $N_i$ for $i=0,1$, i.e.\ $N_0$ and $N_1$ are concordant. This contradicts our assumption, so $\alpha\notin\im\Psi_N$.
\end{proof}
\subsection{Examples (Step 3)} To complete the proof of Theorem \ref{thm:asymmetric}, we explain how to obtain examples of stably parallelizable $M$ that satisfy the assumptions of Theorems \ref{thm:non-concordant} and \ref{thm:non-realize}. This is the content of the following proposition.
\begin{prop}\label{prop:examples}
Fix $n\ge2$. For any $d\ge2$, there exists a stably parallelizable hyperbolic manifold $M^n$, a geodesic $c$, a subgroup $F<\Isom(M)$ isomorphic to $\Z/d\Z=\pair{\alpha}$, and $\rho\in H^1(M)\simeq\Hom(H_1(M),\Z)$ such that
\begin{equation}\label{eqn:homomorphism}\rho(\alpha^jc)=\begin{cases}1&j=0\\0&1\le j\le d-1.\end{cases}\end{equation} Consequently, the homomorphism $\De:H_1(M)\rightarrow\Z^d$ whose $i$-th coordinate is $\rho\circ\alpha^{-i}$ has the property that $\De(c),\ld,\De(\alpha^{d-1}c)$ generate $\Z^d$.
\end{prop}
In \cite{lubotzky-betti}, Lubotzky gave examples of hyperbolic $M$ (both arithmetic and non-arithmetic) with a surjection $\pi_1(M)\onto F_r$ to a free group of rank $r\ge2$. By passing to a cover, we can assume that $M$ is stably parallelizable \cite[pg.\ 553]{sullivan-stably-parallelizable}. Proposition \ref{prop:examples} is proved by passing to a further cover, using the general procedure of the following lemma.
\begin{lem}\label{lem:covers-betti}
Let $X$ be a CW-complex, and let $F_r$ denote a free group of rank $r\ge2$. Assume there is a surjection $\pi_1(X)\onto F_r$. Then for any $d\ge2$, there exists a regular cover $Y\rightarrow X$ with deck group $\Z/d\Z=\pair{\alpha}$ and $c\in\pi_1(Y)$ and $\rho\in H^1(Y)$ satisfying (\ref{eqn:homomorphism}).
\end{lem}
\begin{proof}
Take $F_r$ with generators $a_1,\ld,a_r$. Consider $F_r\onto\Z/d\Z$ defined by $a_1\mapsto 1$ and $a_i\mapsto 0$ for $2\le i\le r$. Then $\ker[F_r\onto\Z/d\Z]\simeq F_k$ with $k=1+d(r-1)$. It's easy to compute $H_1(F_k)$ as an $F=\Z/d\Z$-module:
\[H_1(F_k)\simeq \Z\{b_1\}\oplus\Z F\{b_2,\ld,b_k\}.\]
(For example, realize $1\rightarrow F_k\rightarrow F_r\rightarrow\Z/d\Z\rightarrow 1$ as a $(\Z/d\Z)$-covering of graphs.) Then also $H^1(F_k)\simeq\Z\{\be_1\}\oplus\Z F\{\be_2,\ld,\be_k\}$, where $\be_i$ is dual to $b_i$.
Let $Y\rightarrow X$ be the cover such that $\pi_1(Y)=\ker\big[\pi_1(X)\onto F_r\onto\Z/d\Z\big]$. Then $\pi_1(Y)\onto F_k$, and $H_1(Y)\rightarrow H_1(F_k)$ is $(\Z/d\Z)$-equivariant. Choose $c\in\pi_1(Y)$ so that $c\mapsto b_2$ under $\pi_1(Y)\onto F_k$, and define $\rho:\pi_1(Y)\onto F_k\xra{\be_2}\Z$. It's easy to verify that $\rho$ satisfies (\ref{eqn:homomorphism}).
This proves the lemma.
\end{proof}
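The rank count $k=1+d(r-1)$ used in the proof above is the Nielsen--Schreier index formula, which can be read off from the Euler characteristic of the $d$-fold covering graph of a wedge of $r$ circles ($d$ vertices, $dr$ edges). A quick sanity check (illustrative sketch only):

```python
def subgroup_rank(r, d):
    """Rank of an index-d subgroup of the free group F_r, computed as the
    first Betti number E - V + 1 of the d-fold covering graph of a wedge
    of r circles (d vertices, d*r edges)."""
    vertices, edges = d, d * r
    return edges - vertices + 1

for r in range(2, 6):
    for d in range(2, 6):
        assert subgroup_rank(r, d) == 1 + d * (r - 1)
```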
\bibliographystyle{alpha}
\section{Introduction}
The semileptonic $B\to D\ell\nu_\ell$ $(\ell=e,\mu, \tau)$ decays have attracted a lot of attention
in extracting the exclusive Cabibbo-Kobayashi-Maskawa (CKM) matrix element $|V_{cb}|$.
Especially, the substantial difference for the ratio
${\cal R}(D)={\rm Br}(B\to D\tau\nu_\tau)/{\rm Br}(B\to D\ell'\nu_{\ell'})$ $(\ell'=e,\mu)$ between the experimental data
and the standard model (SM) predictions generated great excitement in
testing the SM and searching for new physics beyond the SM. The
experimental data,
${\cal R}^{\rm exp}(D)=0.440(58)(42)$ measured from BaBar~\cite{BaB12,BaB13} and
${\cal R}^{\rm exp}(D)=0.375(64)(26)$ from Belle~\cite{Belle15},
have shown an excess over the SM prediction
${\cal R}^{\rm SM}(D)=0.299(3)$~\cite{Amhis}. Many theoretical efforts have been made in resolving the issue of the ${\cal R}(D)$ anomaly and
searching for new physics beyond the SM~\cite{Lat1,Lat2, HQEFT1,Bigi,LCSR1}.
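To put these numbers in perspective, one can naively combine the quoted statistical and systematic errors in quadrature (ignoring correlations) and estimate the pull of each measurement from the SM value; the sketch below, using only the numbers quoted above, gives roughly $2.0\sigma$ for the BaBar result and $1.1\sigma$ for the Belle result (illustrative arithmetic only):

```python
from math import sqrt

def tension(exp_val, stat, syst, sm_val, sm_err):
    """Pull (in sigma) of a measurement from the SM prediction, with the
    statistical, systematic, and SM errors combined in quadrature."""
    return abs(exp_val - sm_val) / sqrt(stat**2 + syst**2 + sm_err**2)

t_babar = tension(0.440, 0.058, 0.042, 0.299, 0.003)   # ~ 2.0 sigma
t_belle = tension(0.375, 0.064, 0.026, 0.299, 0.003)   # ~ 1.1 sigma
assert 1.9 < t_babar < 2.0
assert 1.0 < t_belle < 1.2
```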
We note that the $B\to D\ell\nu_\ell$ decays involve two transition form factors (TFFs),
i.e. the vector form factor $f_+(q^2)$ and the scalar form factor $f_0(q^2)$.
The analysis of both TFFs $f_{+,0}(q^2)$ for $B\to D$ transitions can be found in various theoretical
approaches such as the lattice QCD (LQCD)~\cite{Lat1,Lat2},
the light-cone sum rule (LCSR)~\cite{LCSR0,LCSR1,LCSR2,LCSR3}, and the light-front quark model (LFQM)~\cite{Cheng04}.
While ${\rm Br}(B\to D\ell\nu_{\ell})$ for the light lepton decay modes $(\ell=e,\mu)$ needs only $f_{+}(q^2)$,
${\rm Br}(B\to D\tau\nu_\tau)$ for the heavy $\tau$ decay mode receives contributions from both $f_{+}(q^2)$
and $f_0(q^2)$. The ratio ${\cal R}(D)$ is in particular quite sensitive to the scalar form factor.
This leads us to speculate that the scalar contribution is the main source of the ${\cal R}(D)$ anomaly
and thus the new physics effect beyond the SM.
However, since the predictions of $f_0(q^2)$ as well as $f_+(q^2)$
are quite different for different theoretical approaches within the SM,
it is very important to obtain reliable and self-consistent results for the
TFFs before drawing any sound conclusion from the ${\cal R}(D)$ anomaly.
The purpose of this paper is to present the self-consistent descriptions of the $B\to D\ell\nu_\ell$
TFFs in the standard LFQM based on the LF quantization~\cite{SPP}.
There have been many previous LFQM analyses for the semileptonic decays between two pseudoscalar
mesons~\cite{SLF1,SLF2,CLF1,CLF2,CJ09}.
In fact, there are two main kinds of LFQM, i.e. the standard LFQM~\cite{SLF1,SLF2}
and the covariant LFQM~\cite{CLF1,CLF2,CJ09}.
In the standard LFQM, the constituent quark and antiquark
in a bound state are required to be on-mass shells and the spin-orbit wave function is
obtained by the interaction-independent Melosh transformation~\cite{Melosh} from the ordinary equal-time static spin-orbit
wave function assigned by the quantum number $J^{PC}$.
The main characteristic of the standard LFQM is to use the sum of the LF energies of the constituent quark and antiquark for
the meson mass in the spin-orbit wave function, and any physical observable can be obtained directly in three-dimensional
LF momentum space using the more phenomenologically accessible LF wave function such as
Gaussian radial wave function $\phi(x, {\bf k}_\perp)$.
However, as the standard LFQM itself is not amenable to the analysis of the zero-mode contribution, the covariant LFQM
using the manifestly covariant Bethe-Salpeter (BS) model with the multipole type $q{\bar q}$ vertex
was introduced~\cite{CLF1}, in which the constituents are off-mass shell.
While the covariant BS model used in~\cite{CLF1,CLF2,CJ09} allows one to analyze all the treacherous points such as the zero
modes and the off-mass shell instantaneous contributions in a systematic way, it is less realistic than the standard LFQM.
Thus, in an effort to apply such treacherous points found in the covariant BS model to the standard LFQM,
the effective replacement~\cite{CLF1,CLF2,CJ09} of
the LF vertex function $\chi(x,{\bf k}_\perp)$ obtained in the BS model with the more
realistic Gaussian wave function $\phi(x,{\bf k}_\perp)$ in the standard LFQM has been made.
However, through the analysis of the vector meson decay constant together with the twist-2 and twist-3 distribution amplitudes (DAs) of the
vector meson~\cite{CJ14}, we found that the correspondence relation between $\chi$ and $\phi$
proposed in~\cite{CLF1,CLF2,CJ09} encounters a self-consistency problem, e.g. the vector meson decay constants
obtained in the standard LFQM were found to differ
for different sets of the LF current components and polarization states of the vector meson~\cite{CJ14}.
We also resolved this self-consistency problem in the same work~\cite{CJ14} by imposing
the on-mass shell condition of the constituent quark and antiquark, i.e. replacement of the physical meson mass $M$ with the invariant mass $M_0$
in the integrand of formulas for the physical quantities,
in addition to the original correspondence relation between $\chi$ and $\phi$.
The remarkable finding from our new self-consistent correspondence relations (i.e. $\chi\to\phi$ and $M\to M_0$) between
the two models [see, e.g. Eq. (49) in~\cite{CJ14}]
was that both the zero-mode and instantaneous contributions that appeared in the covariant BS model become absent in the standard LFQM with the LF
on-mass shell constituent quark and antiquark degrees of freedom.
We then extended our self-consistent correspondence relations to analyze the decay amplitude related with twist-2 and twist-3 DAs
of pseudoscalar mesons~\cite{CJ15,CJ17} and observed the same conclusion drawn from~\cite{CJ14}.
In the previous analysis~\cite{CLF1,CLF2,CJ09} of the semileptonic decays between two pseudoscalar mesons using the covariant BS model,
the LF covariant calculations were made in the Drell-Yan-West ($q^+=q^0+q^3=0$) frame (i.e. $q^2=-{\bf q}^2_{\perp} <0$), which is advantageous
in that only the valence contributions are needed unless the zero-mode contributions exist.
The form factor $f_+(q^2)$ was obtained only from the plus component ($J^+$) of the weak current $J^\mu$ without encountering
the zero-mode contribution. One needs, however, two different components of the
current to obtain the form factor $f_0(q^2)$ [or $f_-(q^2)]$,
and $J^+$ and ${\bf J}_{\perp}=(J_x, J_y)$ were
used to obtain it in~\cite{CLF1,CLF2,CJ09}~\footnote{While the methods of Jaus~\cite{CLF1} and ours~\cite{CJ09} for obtaining the form factors
are slightly different, the final results for $f_-(q^2)$ agree with each other, i.e.
$f_-(q^2)$ (see Eq. (4.3) in~\cite{CLF1} and Eq. (42) in~\cite{CJ09}) was obtained using both $J^+$ and ${\bf J}_\perp$.}.
However, $f_-(q^2)$ obtained from ($J^+, {\bf J}_\perp$) in the covariant BS model receives
not only the instantaneous contribution but also the zero mode due to the ${\bf J}_\perp$ component.
Employing the effective method presented in~\cite{CLF1,CLF2,CJ09} to express the zero-mode contribution as a convolution of the zero-mode operator
with the initial and final state LF vertex functions, the form factor $f_-(q^2)$ can also be expressed
as the convolution form between the initial- and final-state LF vertex functions $\chi(x, {\bf k}_\perp)$ in the
valence sector. To obtain $f_+(q^2)$ and $f_-(q^2)$ in the more realistic standard LFQM, the authors in~\cite{CLF1,CLF2,CJ09}
used only the correspondence relation between $\chi$ and $\phi$, without imposing the on-mass shell condition (i.e. $M\to M_0$).
In the recent work in~\cite{QC18}, the authors
investigated the self-consistency of the form factor $f_-(q^2)$ obtained from $(J^+, {\bf J}_\perp)$ by applying
both the old correspondence $(\chi\to\phi)$ and our new correspondence ($\chi \to\phi$ and $M\to M_0$)
between the BS model and the standard LFQM. From their numerical calculations, the authors found
that the zero-mode contribution to $f_-(q^2)$ in the standard LFQM
is sizable when only the $(\chi\to\phi)$ relation is used but
vanishes when the ($\chi \to\phi$ and $M\to M_0$) relations are used.
This result strongly supports the assertion that our new correspondence relations are universally applicable even to the
weak transition form factors for a self-consistent description of the standard LFQM.
In order to assert that the form factor $f_-(q^2)$ is truly self-consistent, however,
it is essential to show that $f_-(q^2)$ obtained in the $q^+=0$ frame is independent of the components of the current,
i.e. $f_-(q^2)$ obtained from $(J^+, J^-)$ is the same as the one obtained from $(J^+, {\bf J}_\perp)$.
In this work, we shall show that our new correspondence relations ($\chi \to\phi$ and $M\to M_0$)
guarantee the self-consistent description for the weak decay constant of a pseudoscalar meson and
the semileptonic decays between two pseudoscalar mesons in the standard LFQM.
To show this, we shall prove
that (1) the decay constant $f_{\cal P}$ of a pseudoscalar meson (${\cal P}$) is independent of the components of the current, and (2)
$f_-(q^2)$ obtained from $(J^+, J^-)$ is exactly the same as the one obtained
from $(J^+, {\bf J}_\perp)$ in the $q^+=0$ frame.
Those findings again entail that the zero-mode contribution
as well as the instantaneous one that appeared in the covariant BS model become absent
in the standard LFQM.
Although we do not consider it in this analysis,
the $q^+\neq 0$ frame may be used to compute the timelike process such as this semileptonic decay
but then it is unavoidable to encounter the particle-number-nonconserving Fock state (or nonvalence) contribution~\cite{CJ01}.
The main source of difficulty in the LFQM phenomenology is the lack of information on
the non-wave-function vertex~\cite{BCJ1,BCJ2} in the nonvalence diagram arising from the quark-antiquark pair creation/annihilation.
This should be contrasted with the usual LF valence wave function. In principle, there is a systematic program
as was discussed in~\cite{BH98} to include the particle-number-nonconserving amplitude to take into
account the nonvalence contributions. However, the program requires to find all the higher Fock-state wave
functions while there has been relatively little progress in computing the basic wave functions of hadrons from
first principles. In the very recent analysis~\cite{Tang20} of the semileptonic $B_c\to\eta_c(J/\psi)$ decays in the framework of basis LF quantization,
the frame dependence of the TFFs between $q^+=0$ and $q^+\neq 0$ frames
is discussed. The main reason for the frame dependence comes from neglecting the nonvalence contribution
in the $q^+\neq 0$ frame and it is not even possible to show that the form factors are independent of the components of the current
in the $q^+\neq 0$ frame unless the nonvalence contribution is correctly taken into account.
However, our main findings in the $q^+=0$ frame may be incorporated in the same $q^+=0$ frame calculations of Ref.~\cite{Tang20}.
The paper is organized as follows: In Sec.~\ref{sec:II}, we briefly review the decay constant $f_{\cal P}$
of a pseudoscalar meson in an exactly solvable model based on the covariant BS model of ($3+1$) dimensional fermion field theory.
We then present our LF calculation of $f_{\cal P}$ in the BS model using both plus
and minus components of the current and discuss the treacherous points
such as the zero-mode contribution and the instantaneous one when the minus component of the current is used.
Linking the covariant BS model to the standard LFQM with our universal
mapping between the two models~\cite{CJ14,CJ15,CJ17}, we obtain $f_{\cal P}$ from both plus and minus components of the current
in the standard LFQM. Our main finding is that while $f_{\cal P}$ obtained from
the minus component of the current in the covariant BS model receives both the zero mode and the instantaneous contributions,
$f_{\cal P}$ obtained from the minus component of the current in the standard LFQM is free from such treacherous contributions
and gives a result identical to the one obtained from the plus component of the current.
In Sec.~\ref{sec:III}, we obtain the transition form factors $f_{\pm}(q^2)$ in the standard LFQM using the same procedure discussed in
Sec.~\ref{sec:II}. Especially, we explicitly show that
$f_-(q^2)$ obtained from $(J^+, J^-)$ is exactly the same as the one obtained
from $(J^+, {\bf J}_\perp)$ in the $q^+=0$ frame. This finding again supports the universality of our correspondence relations between
the covariant BS model and the standard LFQM.
In Sec.~\ref{sec:IV}, we show our numerical results for the semileptonic $B\to D\ell\nu_\ell$ $(\ell=e,\mu, \tau)$ decays.
In the Appendix, the explicit forms of the standard LFQM results for $f_{\pm}(q^2)$ are presented.
\section{Decay Constant }
\label{sec:II}
\subsection{$f_{\cal P}$ in the covariant BS model}
In the solvable model, based on the covariant BS model of
($3+1$)-dimensional fermion field theory,
the decay constant $f_{\cal P}$ of a pseudoscalar meson (${\cal P}$)
with the four-momentum $P$ and mass $M$ as a $q{\bar q}$ bound state
is defined by the matrix element of the axial vector current
\begin{equation}\label{eq1}
\langle 0|{\bar q}\gamma^\mu\gamma_5 q|{\cal P}(P)\rangle
= if_{\cal P} P^\mu.
\end{equation}
The matrix element $A^\mu \equiv \langle 0|{\bar q}\gamma^\mu\gamma_5 q|{\cal P}(P)\rangle$ is given in the one-loop
approximation as a momentum integral
\begin{equation}\label{eq2}
A^\mu = N_c
\int\frac{d^4k}{(2\pi)^4} \frac{ H_{\cal P} S^\mu} {(p^2 -m^2_1 +i\varepsilon) (k^2 - m^2_{q}+i\varepsilon)},
\end{equation}
where $N_c$ is the number of colors and
$p =P -k$ and $k$ are the internal momenta carried by the quark and antiquark propagators
of mass $m_1$ and $m_q$, respectively. The $q\bar{q}$ bound-state vertex function $H_{\cal P}$ of a pseudoscalar meson
is taken as multipole ansatz, i.e. $H_{\cal P}(p^2,k^2)=g/(p^2-\Lambda^2+i\epsilon)$ where $g$ and $\Lambda$ are
constant parameters in this manifestly covariant model.
The trace term is given by
\begin{equation}\label{eq3}
S^\mu = {\rm Tr}\left[\gamma^\mu\gamma_5\left(\slash \!\!\!\!\! p +m_1 \right) \gamma_5
\left(-\slash \!\!\!\! k + m_q \right) \right].
\end{equation}
Performing the LF calculation, we take the reference frame where $P=(P^+, P^-, {\bf P}_\perp)=(P^+, M^2/P^+,{\bf 0}_\perp)$
and use the metric convention $a\cdot b =\frac{1}{2} (a^+ b^- + a^- b^+) - {\bf a}_\perp\cdot {\bf b}_\perp$.
We then obtain the identity $\not\!\!q = \not\!\!q_{\rm on} +\frac{1}{2} \gamma^+\Delta^-_q$,
where $\Delta^-_q = q^- - q^-_{\rm on}$ and
the subscript (on) denotes the on-mass shell quark momentum,
i.e., $p^2_{\rm on}=m^2_1$ and $k^2_{\rm on}=m^2_q$.
Using this identity, one can separate the trace term into the on-shell propagating part
$S^{\mu}_{\rm on}$ and the off-mass shell instantaneous one $S^{\mu}_{\rm inst}$
as $S^\mu = S^\mu_{\rm on} + S^\mu_{\rm inst}$.
By the integration over $k^-$ in Eq.~(\ref{eq2}) and closing the contour in the lower half of the complex $k^-$ plane, one
picks up the residue at $k^-=k^-_{\rm on}$ in the region of $0<k^+<P^+$ (or $0<x<1$) where
$x=\frac{p^+}{P^+}$ and $1-x = \frac{k^+}{P^+}$ are the LF longitudinal momentum fractions of the quark and antiquark.
We denote the valence contribution to $A^\mu$, obtained by taking $k^-=k^-_{\rm on}$ in the region
$0<x <1$, as $[A^\mu]^{\rm LFBS}_{\rm val}$.
Then the Cauchy integration formula for the
$k^-$ integration in the valence region of Eq.~(\ref{eq2}) yields
\begin{equation}\label{eq4}
[A^\mu]^{\rm LFBS}_{\rm val}
= \frac{i N_c}{16 \pi^3}\int^1_0\frac{dx}{(1-x)}\int d^2{\bf k}_\perp
\chi(x,{\bf k}_\perp) S^\mu_{\rm val},
\end{equation}
where $\chi(x,{\bf k}_\perp) =\frac{g}{x^2(M^2 -M^2_0)(M^2 -M^2_\Lambda)}$ is the LF quark-meson vertex function
and $M^2_{0(\Lambda)}= \frac{{\bf k}^2_\perp + m^2_1(\Lambda^2)}{x} + \frac{{\bf k}^2_\perp + m^2_q}{ 1-x}$.
The trace term in the valence contribution is given by
$S^\mu_{\rm val} = S^\mu_{\rm on} + S^\mu_{\rm inst}$, where
$S^\mu_{\rm on} = 4 (m_1 k^\mu_{\rm on} + m_q p^\mu_{\rm on})$ and
$S^\mu_{\rm inst} = 2 (m_1 \Delta^-_k + m_q \Delta^-_{p}) g^{\mu +}$.
We note from $S^\mu_{\rm inst}$ that the off-shell instantaneous contributions are nonzero for the minus component
of the current while they are absent for the plus or perpendicular components of the current.
In our previous work~\cite{CJ14}, we checked the LF covariance of $f_{\cal P}$ obtained from Eq.~(\ref{eq4})
using two different components (i.e. $\mu=+$ and $-$) of the current.
We found that while $f^{(+)}_{\cal P}$ obtained from $\mu=+$ is free from the zero mode,
$f^{(-)}_{\cal P}$ obtained from $\mu=-$ receives the zero mode.
We also identified the zero-mode operator corresponding to the zero-mode contribution to $f^{(-)}_{\cal P}$ (see Eq. (B9) in~\cite{CJ14}).
Since the LF calculations of $f_{\cal P}$ obtained from Eq.~(\ref{eq4})
were explicitly shown in~\cite{CJ14,CJ15}, we recapitulate the essential features of obtaining the full LF result of $f^{(-)}_{\cal P}$.
Then, we focus on the self-consistent standard LFQM analysis of $f_{\cal P}$ using
our new correspondence relations (i.e. $\chi\to\phi$ and $M\to M_0$).
For $\mu=+$, the full result of $f_{\cal P}$ can be obtained only from the valence contribution with the
on-mass shell quark propagating part, i.e.
$S^+_{\rm full} =S^+_{\rm val}= S^+_{\rm on}$.
The full solution of the decay constant obtained from $\mu=+$ is given by~\cite{CJ14,CJ15}
\begin{equation}\label{eq5}
[f^{(+)}_{\cal P}]^{\rm LFBS}_{\rm full}
= \frac{N_c}{4\pi^3}\int^1_0\frac{dx}{(1-x)}\int d^2{\bf k}_\perp
\chi(x,{\bf k}_\perp) \frac{S^+_{\rm on}}{4P^+},
\end{equation}
where $S^+_{\rm full}=S^+_{\rm on}=4P^+{\cal A}_1$ and ${\cal A}_1= (1-x)\; m_1 + x\;m_q$.
For $\mu=-$, the valence contribution
to the trace term comes not only from the on-shell propagating part but also from the off-shell instantaneous one, i.e.
$S^-_{\rm val} = S^-_{\rm on} + S^-_{\rm inst}$. However, the valence contribution itself is not equal to the manifestly
covariant result (or equivalently $[f^{(+)}_{\cal P}]^{\rm LFBS}_{\rm full}$) since the minus component of the current receives the zero-mode
contribution as shown in~\cite{CJ14}.
In~\cite{CJ14}, we also found the zero-mode operator $S^{-}_{\rm Z.M.}$ corresponding to the zero-mode contribution at the trace level, i.e.
$S^-_{\rm Z.M.} = \frac{4}{P^+} (m_q - m_1) (-Z_2)$
with $Z_2 = x (M^2 - M^2_0)+ m^2_1 - m^2_q + (1-2x) M^2$.
Adding $S^-_{\rm Z.M.}$ to $S^-_{\rm val}$, we found that
$S^-_{\rm full}= S^-_{\rm val} + S^-_{\rm Z.M.}= 4 P^- {\cal A}_1$.
That is, in this manifestly covariant BS model, the full solution $[f^{(-)}_{\cal P}]_{\rm full}^{\rm LFBS}$ obtained from $\mu=-$ is completely equal to
$[f^{(+)}_{\cal P}]_{\rm full}^{\rm LFBS}$ only if the zero-mode contribution is included in addition to the valence contribution.
We should note that while $[f^{(+)}_{\cal P}]_{\rm full}^{\rm LFBS}=[f^{(+)}_{\cal P}]_{\rm on}^{\rm LFBS}$,
$[f^{(-)}_{\cal P}]_{\rm full}^{\rm LFBS}=[f^{(-)}_{\cal P}]_{\rm on}^{\rm LFBS}+[f^{(-)}_{\cal P}]_{\rm inst}^{\rm LFBS}+[f^{(-)}_{\cal P}]_{\rm Z.M.}^{\rm LFBS}$.
For the sake of comparison with $[f^{(+)}_{\cal P}]_{\rm on}^{\rm LFBS}$ and also for later use in the standard LFQM analysis,
we display the result of $[f^{(-)}_{\cal P}]_{\rm on}^{\rm LFBS}$ obtained from Eq.~(\ref{eq4}) with only the on-mass-shell propagating part, $S^\mu_{\rm val}=S^{-}_{\rm on}$,
as follows
\begin{equation}\label{eq6}
[f^{(-)}_{\cal P}]_{\rm on}^{\rm LFBS}
= \frac{N_c}{4\pi^3}\int^1_0\frac{dx}{(1-x)}\int d^2{\bf k}_\perp
\chi(x,{\bf k}_\perp)
\frac{P^+ S^-_{\rm on}}{4M^2},
\end{equation}
where $S^-_{\rm on}=4(m_1 k^-_{\rm on} + m_q p^-_{\rm on})$ with
$k^{-}_{\rm on}=\frac{{\bf k}_\perp^2 + m_q^2}{(1-x)P^+}$ and
$p^{-}_{\rm on}=\frac{{\bf k}_\perp^2 + m_1^2}{xP^+}$.
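As a cross-check of the trace-level bookkeeping, the statement $S^-_{\rm full}=S^-_{\rm val}+S^-_{\rm Z.M.}=4P^-{\cal A}_1$ can be verified symbolically: at the spectator pole $\Delta^-_k=0$ and $\Delta^-_p=(M^2-M^2_0)/P^+$, so $S^-_{\rm inst}=4m_q(M^2-M^2_0)/P^+$. A sketch assuming the sympy package is available, with $P^+$ set to 1:

```python
import sympy as sp

x, kT, m1, mq, M = sp.symbols('x kT m1 mq M', positive=True)

u, v = kT**2 + m1**2, kT**2 + mq**2        # transverse masses of quark/antiquark
M0sq = u / x + v / (1 - x)                 # invariant mass squared (P^+ = 1)
A1 = (1 - x) * m1 + x * mq

S_on = 4 * (m1 * v / (1 - x) + mq * u / x) # 4 (m1 k^-_on + mq p^-_on), P^+ = 1
S_inst = 4 * mq * (M**2 - M0sq)            # 2 mq Delta^-_p g^{-+}, with g^{-+} = 2
Z2 = x * (M**2 - M0sq) + m1**2 - mq**2 + (1 - 2 * x) * M**2
S_zm = 4 * (mq - m1) * (-Z2)               # zero-mode operator contribution

# valence + zero-mode contributions reproduce 4 P^- A1 = 4 M^2 A1 (P^+ = 1)
assert sp.simplify(S_on + S_inst + S_zm - 4 * M**2 * A1) == 0
```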
\subsection{$f_{\cal P}$ in the standard LFQM}
\label{sec:IIIb}
In the standard LFQM~\cite{SLF1,SLF2,Cheng97,KT,CJ98,CJ99,CJ99PLB,Choi07,CJ07},
the
wave function of a ground state pseudoscalar meson
as a $q\bar{q}$ bound state is given by
\begin{equation}\label{eq7}
\Psi_{\lambda{\bar\lambda}}(x,{\bf k}_{\perp})
={\phi(x,{\bf k}_{\perp})\cal R}_{\lambda{\bar\lambda}}(x,{\bf k}_{\perp}),
\end{equation}
where ${\cal R}_{\lambda{\bar\lambda}}$ is the spin-orbit wave function
that is obtained by the interaction-independent Melosh transformation from the ordinary
spin-orbit wave function assigned by the quantum number $J^{PC}$.
The covariant form of ${\cal R}_{\lambda{\bar\lambda}}$ with the definite spin $(S, S_z)=(0,0)$
constructed out of the LF helicity $\lambda({\bar\lambda})$ of a quark (antiquark)
is given by
\begin{equation}\label{eq8}
{\cal R}_{\lambda{\bar\lambda}}
=\frac{\bar{u}_{\lambda}(p_q)\gamma_5 v_{{\bar\lambda}}( p_{\bar q})}
{\sqrt{2}[M^{2}_{0}-(m_1 -m_{q})^{2}]^{1/2}},
\end{equation}
which satisfies the unitarity condition,
$\sum_{\lambda{\bar\lambda}}{\cal R}_{\lambda{\bar\lambda}}^{\dagger}{\cal R}_{\lambda{\bar\lambda}}=1$.
Its explicit matrix form is given by
\begin{equation}\label{eq9}
{\cal R}_{\lambda{\bar\lambda}}
=\frac{1}{\sqrt{2}\sqrt{{\bf k}^2_\perp+{\cal A}^2_1}}
\begin{pmatrix}
-k^L & {\cal A}_1\\ -{\cal A}_1 & -k^R
\end{pmatrix},
\end{equation}
where $k^{R} = k_x + i k_y$ and $k^{L} = k_x - i k_y$.
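The unitarity condition $\sum_{\lambda{\bar\lambda}}{\cal R}_{\lambda{\bar\lambda}}^{\dagger}{\cal R}_{\lambda{\bar\lambda}}=1$ can be read off from this matrix, since $|k^L|^2+|k^R|^2+2{\cal A}^2_1=2({\bf k}^2_\perp+{\cal A}^2_1)$ cancels the prefactor. A quick numerical sketch (toy values only):

```python
import numpy as np

def melosh_matrix(kx, ky, A1):
    """Spin-orbit matrix R_{lambda lambdabar} for a pseudoscalar (S, Sz)=(0,0) state."""
    kR, kL = kx + 1j * ky, kx - 1j * ky
    norm = np.sqrt(2.0 * (kx**2 + ky**2 + A1**2))
    return np.array([[-kL, A1], [-A1, -kR]]) / norm

R = melosh_matrix(kx=0.3, ky=-0.7, A1=0.45)
# sum over helicities of |R_{lambda lambdabar}|^2 = Tr(R^dagger R) = 1
assert np.isclose(np.sum(np.abs(R)**2), 1.0)
```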
For the radial wave function $\phi$ in~Eq.~(\ref{eq7}), we use the Gaussian wave function
\begin{equation}\label{eq10}
\phi(x,{\bf k}_{\perp})=
\frac{4\pi^{3/4}}{\beta^{3/2}} \sqrt{\frac{\partial
k_z}{\partial x}} {\rm exp}(-{\vec k}^2/2\beta^2),
\end{equation}
where $\vec{k}^2={\bf k}^2_\perp + k^2_z$ and $\beta$ is the variational parameter
fixed by the analysis of meson mass spectra~\cite{CJ09,CJ99,CJ99PLB,Choi07}.
The longitudinal component $k_z$ is defined by $k_z=(x-\frac{1}{2})M_0 +
\frac{(m^2_{q}-m^2_1)}{2M_0}$, and the Jacobian of the variable transformation
$\{x,{\bf k}_\perp\}\to {\vec k}=({\bf k}_\perp, k_z)$ is given by
$\frac{\partial k_z}{\partial x}
= \frac{M_0}{4 x (1-x)}[ 1-
(\frac{m^2_1 - m^2_q}{M^2_0})^2]$.
The normalization of our Gaussian radial wave function is then given by
\begin{equation}\label{eq11}
\int^1_0 dx \int \frac{d^2{\bf k}_\perp}{16\pi^3}
|\phi(x,{\bf k}_{\perp})|^2=1.
\end{equation}
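Since the Jacobian factor turns $dx\,d^2{\bf k}_\perp\,|\phi|^2$ into a rotationally invariant Gaussian in $\vec k$, this normalization can be checked numerically for any parameter set. A sketch using Gauss--Legendre quadrature with toy constituent masses (illustrative values only, not the fitted model parameters):

```python
import numpy as np

def norm_integral(m1, mq, beta, n=200, kmax=8.0):
    """(1/16 pi^3) * Int dx d^2k_perp |phi(x, k_perp)|^2 for the Gaussian
    radial wave function; should return 1 for any m1, mq, beta."""
    tx, wx = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (tx + 1.0)            # Gauss-Legendre nodes mapped to (0, 1)
    wx = 0.5 * wx
    tk, wk = np.polynomial.legendre.leggauss(n)
    k = 0.5 * kmax * (tk + 1.0)     # transverse momentum on (0, kmax)
    wk = 0.5 * kmax * wk

    X, K = np.meshgrid(x, k, indexing='ij')
    W = np.outer(wx, wk)
    M0 = np.sqrt((K**2 + m1**2) / X + (K**2 + mq**2) / (1.0 - X))
    kz = (X - 0.5) * M0 + (mq**2 - m1**2) / (2.0 * M0)
    jac = M0 / (4.0 * X * (1.0 - X)) * (1.0 - ((m1**2 - mq**2) / M0**2)**2)
    phi_sq = (16.0 * np.pi**1.5 / beta**3) * jac * np.exp(-(K**2 + kz**2) / beta**2)
    # d^2k_perp = 2 pi k_perp dk_perp for the azimuthally symmetric integrand
    return float(np.sum(W * 2.0 * np.pi * K * phi_sq)) / (16.0 * np.pi**3)

assert abs(norm_integral(m1=1.6, mq=0.25, beta=0.5) - 1.0) < 1e-6
```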
Using the plus component of the current, the standard LFQM calculation of Eq.~(\ref{eq1})
is obtained by
\begin{equation}\label{eq12}
[f^{(+)}_{\cal P}]^{\rm SLF}_{\rm on}= \frac{\sqrt{2N_c}}{{8\pi^3}}\int^1_0 dx \int d^2{\bf k}_\perp
\frac{\phi(x,{\bf k}_\perp)}{\sqrt{{\bf k}^2_\perp + {\cal A}^2_1}}
\frac{S^+_{\rm on}}{4P^+}.
\end{equation}
We should note that the main differences between the covariant BS model and the standard LFQM are attributed to the different
spin structures of the $q{\bar q}$ system (i.e. off-shellness vs on-shellness) and the different meson-quark
vertex functions ($\chi$ vs $\phi$). In other words, while the results of the covariant BS model allow the nonzero binding
energy $E_{\rm B.E.}=M^2 -M^2_0$, the SLF result is obtained from the condition of on-mass shell quark and antiquark (i.e. $M\to M_0$).
To find the exact correspondence between the covariant BS model and the standard LFQM, we first compare the physical quantities which are
immune to the treacherous points such as the zero modes or the instantaneous contributions in the BS model.
In the case of pseudoscalar meson decay constant, since $f^{(+)}_{\cal P}$ obtained from the plus component of the current satisfies this prerequisite condition,
one can find the following correspondence relation,
$\sqrt{2N_c} \frac{ \chi(x,{\bf k}_\perp) } {1-x}
\to \frac{ \phi(x,{\bf k}_\perp) } {\sqrt{ {\cal A}^2_1 + {\bf k}^2_\perp }}$, by comparing
$[f^{(+)}_{\cal P}]^{\rm LFBS}_{\rm full}=[f^{(+)}_{\cal P}]^{\rm LFBS}_{\rm on}$ in Eq.~(\ref{eq5})
and $[f^{(+)}_{\cal P}]^{\rm SLF}_{\rm on}$ in Eq.~(\ref{eq12}).
In most previous LFQM analyses, this correspondence ($\chi$ vs $\phi$) has also been used for the mapping of other physical observables
contaminated by the treacherous points.
In our previous analysis~\cite{CJ14,CJ15,CJ17}, we found that the correspondence relation including only LF vertex functions brings about
a self-consistency problem, i.e. the same physical quantity obtained from different components of the current and/or the polarization vectors
yields different results in the standard LFQM.
Our new correspondence relations between the two models to iron out the self-consistency problem are given by~\cite{CJ14,CJ15,CJ17}:
\begin{equation}\label{eq13}
\sqrt{2N_c} \frac{ \chi(x,{\bf k}_\perp) } {1-x}
\to \frac{ \phi(x,{\bf k}_\perp) } {\sqrt{ {\cal A}^2_1 + {\bf k}^2_\perp }}, \;\; M\to M_0,
\end{equation}
that is, the physical mass $M$ included in the integrand of the BS
amplitude should be replaced with the invariant mass $M_0$ since the results in the standard LFQM
are obtained from the requirement of all constituents being on their respective mass shell.
We should note that the correspondence in Eq.~(\ref{eq13}) between the covariant model and the LFQM has been
verified through our previous analyses of pseudoscalar~\cite{CJ15} and pseudotensor~\cite{CJ17} twist-3 DAs of a pseudoscalar meson
and the chirality-even twist-2 and twist-3 DAs of a vector meson~\cite{CJ14}.
The virtue of Eq.~(\ref{eq13}) to restore the self-consistency of the standard LFQM is that
one can apply Eq.~(\ref{eq13}) only to the on-mass shell contribution in the BS model to get the full result
in the standard LFQM. In other words, the treacherous points (i.e., the zero mode and the instantaneous contribution)
appearing in the covariant BS model are absorbed into the LF on-mass-shell constituent quark and antiquark contributions, and the full result
in the standard LFQM is obtained from the on-shell contribution alone, regardless of the components of the currents being used.
This remarkable feature can also be seen in the present analysis of the decay constant of the pseudoscalar meson obtained from the ``$-$" component
of the currents.
That is, applying Eq.~(\ref{eq13}) to $[f^{(-)}_{\cal P}]_{\rm on}^{\rm LFBS}$ given by Eq.~(\ref{eq6}), we obtain
the SLF result for the minus component of the current as follows
\begin{equation}\label{eq14}
[f^{(-)}_{\cal P}]^{\rm SLF}_{\rm on} = \frac{\sqrt{2N_c}}{{8\pi^3}}\int^1_0 dx \int d^2{\bf k}_\perp
\frac{\phi(x,{\bf k}_\perp)}{\sqrt{{\bf k}^2_\perp + {\cal A}^2_1}}
\frac{P^+ S^{-}_{\rm on}}{4M_0^2}.
\end{equation}
We confirm numerically that $[f^{(-)}_{\cal P}]^{\rm SLF}_{\rm on}=[f^{(+)}_{\cal P}]^{\rm SLF}_{\rm on}$, which
contrasts with the covariant BS model calculation, in which $[f^{(-)}_{\cal P}]^{\rm LFBS}_{\rm on}\neq [f^{(+)}_{\cal P}]^{\rm LFBS}_{\rm on}$.
We also should note that our confirmation for $[f^{(-)}_{\cal P}]^{\rm SLF}_{\rm on}=[f^{(+)}_{\cal P}]^{\rm SLF}_{\rm on}$ is
independent of the form of the radial wave function, e.g. the power-law type wave function such as $\phi\propto \sqrt{{\partial k_z}/\partial x}(1+ {\vec k}^2/\beta^2)^{-2}$
also shows $[f^{(-)}_{\cal P}]^{\rm SLF}_{\rm on}=[f^{(+)}_{\cal P}]^{\rm SLF}_{\rm on}$.
\section{ Semileptonic Decays between two pseudoscalar mesons}
\label{sec:III}
The transition form factors for the ${\cal P}(P_1)\to {\cal P}(P_2)\ell\nu_\ell$ semileptonic decays between two pseudoscalar mesons
are given by
\begin{equation}\label{eq15}
\langle P_2|V^\mu|P_{1}\rangle = f_{+}(q^2)(P_1 + P_2)^{\mu} +
f_-(q^2)q^\mu,
\end{equation}
where $q^\mu=(P_1-P_2)^\mu$ is the
four-momentum transfer to the lepton pair ($\ell\nu_\ell$) and
$m^2_\ell\leq q^2\leq (M_1-M_2)^2$.
The two form factors $f_{\pm}(q^2)$ also satisfy
\begin{equation}\label{eq16}
f_0(q^2) = f_+(q^2) +
\frac{q^2}{M^2_1-M^2_2}f_-(q^2).
\end{equation}
The matrix element
${\cal M}^\mu\equiv\langle P_2|V^\mu|P_1\rangle$ in the BS model is given by
\begin{equation}\label{eq21}
{\cal M}^\mu = iN_c\int\frac{d^4k}{(2\pi)^4} \frac{H_{p_1}{\cal T}^\mu H_{p_2}} {N_{p_1} N_{k} N_{p_2}},
\end{equation}
where $N_{k} = k^2 - m^2_q + i\epsilon$ and $N_{p_j} = p^2_{j} - m^2_j + i\epsilon$ with $p_j=P_j -k\;(j=1,2)$.
To be consistent with the analysis of the decay constant, we take
the $q\bar{q}$ bound-state vertex functions $H_{p_j}(p^2_j,k^2)=g_j/(p^2_j -\Lambda^2_j+i\epsilon)$
of the initial ($j=1$) and final ($j=2$) state pseudoscalar mesons.
The trace term is given by
\begin{equation}\label{eq22}
{\cal T}^\mu = {\rm Tr}[\gamma_5\left(\slash \!\!\!\!\! p_1 +m_1 \right) \gamma^\mu \left(\slash \!\!\!\!\! p_2 +m_2 \right)\gamma_5
\left(-\slash \!\!\!\! k + m_q \right)].
\end{equation}
Performing the LF calculation of Eq.~(\ref{eq21}) in the valence region ($0<k^+<P^+_2$) of the $q^+=0$ frame, where the pole $k^-=k^-_{\rm on}=({\bf
k}^2_\perp + m^2_q -i\epsilon)/k^+$ (i.e., the spectator quark) is located in the lower half of
the complex $k^-$ plane, the Cauchy integration formula for the
$k^-$ integral in Eq.~(\ref{eq21}) gives
\begin{equation}\label{eq23}
[{\cal M}^\mu]^{\rm LFBS}_{\rm val} =
N_c\int^1_0 \frac{dx}{(1-x)}\int \frac{d^2{\bf k}_\perp}{16\pi^3}
\chi_1(x,{\bf k}_\perp) \chi_2 (x, {\bf k'}_\perp)
{\cal T}^\mu_{\rm val} ,
\end{equation}
where
\begin{equation}\label{eq24}
\chi_{1(2)} = \frac{g_{1(2)}}{x^2 (M^2_{1(2)} - M^{(\prime)2}_0)(M^2_{1(2)} -M^2_{\Lambda_{1(2)}})},
\end{equation}
with
$M^2_{0(\Lambda_1)}= \frac{{\bf k}^2_\perp + m^2_1(\Lambda^2_1)}{x} + \frac{{\bf k}^2_\perp + m^2_q}{ 1-x}$
and $M'^2_{0(\Lambda_2)}=M^2_{0(\Lambda_1)}(m_1(\Lambda_1)\to m_2(\Lambda_2), {\bf k}_\perp\to {\bf k'}_\perp={\bf k}_\perp + (1-x) {\bf q}_\perp)$.
The explicit LF calculation of Eq.~(\ref{eq23}) in parallel with the manifestly covariant calculation of Eq.~(\ref{eq21}) can be found in~\cite{CJ09}.
As shown in Ref.~\cite{CJ09}, while $f_{+}(q^2)$ was obtained from $J^+$ and immune to the zero mode,
the form factor $f_-(q^2)$ was obtained from $(J^+, {\bf J}_\perp)$ and received both the instantaneous and the zero-mode
contributions. Of course, one cannot avoid such treacherous points in the BS model
even if $f_-(q^2)$ is obtained from the two components $(J^+, J^-)$ of the current.
In this work, we shall show that $f_-(q^2)$ in the standard LFQM is independent of the components of the current, i.e., regardless of using
$(J^+, {\bf J}_\perp)$ or $(J^+, J^-)$, as long as we apply Eq.~(\ref{eq13}) in the BS model to get the standard LFQM results.
So, from now on, we discuss only the on-mass-shell contribution in the valence region of the $q^+=0$ frame.
Of the trace terms ${\cal T}^\mu_{\rm val}= {\cal T}^\mu_{\rm on} + {\cal T}^\mu_{\rm inst}$, the on-shell
contribution is given by
\begin{eqnarray}\label{eq25}
{\cal T}^\mu_{\rm on} &=&
4 \biggl[
p^\mu_{1\rm on} (p_{2\rm on}\cdot k_{\rm on}) - k^\mu_{\rm on} (p_{1\rm on}\cdot p_{2\rm on})
+ p^\mu_{2\rm on} (p_{1\rm on}\cdot k_{\rm on})
\nonumber\\
&&\hspace{0.5cm}
+ m_2m_{\bar q} p^\mu_{1\rm on}
+ m_1m_{\bar q}p^\mu_{2\rm on}
+ m_1m_2 k^\mu_{\rm on} \biggr],
\end{eqnarray}
where
\begin{eqnarray}\label{eq26}
p_{1\rm on}&=& \left[ x P^+_1, \frac{m^2_1 + {\bf k}_\perp^2}{xP^+_1}, -{\bf k}_\perp \right],
\nonumber\\
p_{2\rm on} &=& \left[ x P^+_1, \frac{m^2_2 + ({\bf k}_\perp+{\bf q}_\perp)^2}{xP^+_1}, -{\bf k}_\perp-{\bf q}_\perp \right],
\nonumber\\
k_{\rm on} &=& \left[ (1-x) P^+_1, \frac{m^2_q + {\bf k}_\perp^2}{ (1-x)P^+_1}, {\bf k}_\perp \right].
\end{eqnarray}
The explicit form of the instantaneous contribution ${\cal T}^\mu_{\rm inst}$ can be found in~\cite{CJ09}.
On the one hand, the transition form factors $f_{\pm}(q^2)$ obtained from
$(J^+, {\bf J}_\perp)$ are given by
\begin{eqnarray}\label{eq27}
f_+(q^2) &=& \frac{{\cal M}^+}{2P^+_1},
\nonumber\\
f^{(\perp)}_-(q^2) &=& \frac{{\cal M}^+}{2P^+_1}+ \frac{ {\cal M}^\perp \cdot {\bf q}_\perp}{ {\bf q}^2_\perp}.
\end{eqnarray}
On the other hand, the form factor $f_-(q^2)$ obtained from $(J^+, J^-)$ is given by
\begin{equation}\label{eq28}
f^{(-)}_-(q^2) = - \frac{{\cal M}^+}{2P^+_1} \biggl( \frac{\Delta M^2_{+} + {\bf q}^2_\perp}{\Delta M^2_{-} - {\bf q}^2_\perp} \biggr)
+ \frac{ P^+_1 {\cal M}^-}{ \Delta M^2_{-} - {\bf q}^2_\perp},
\end{equation}
where $\Delta M^2_{\pm} = M^2_1 \pm M^2_2$. For convenience, the form factors $f_-(q^2)$ obtained from $(J^+, {\bf J}_\perp)$ and $(J^+, J^-)$
are denoted by $f^{(\perp)}_-(q^2)$ and $ f^{(-)}_-(q^2)$, respectively.
In the manifestly covariant BS model given by Eq.~(\ref{eq21}), we note that
while $[f^{(+)}_{+}]_{\rm full}^{\rm LFBS}=[f^{(+)}_{+}]_{\rm on}^{\rm LFBS}$,
$[f^{(\perp)}_{-}]_{\rm full}^{\rm LFBS}=[f^{(\perp)}_{-}]_{\rm on}^{\rm LFBS}+[f^{(\perp)}_{-}]_{\rm inst}^{\rm LFBS}+[f^{(\perp)}_{-}]_{\rm Z.M.}^{\rm LFBS}$.
The full result $f^{(-)}_-(q^2)$ has the same structure as $f^{(\perp)}_-(q^2)$,
i.e.
$[f^{(-)}_{-}]_{\rm full}^{\rm LFBS}=[f^{(-)}_{-}]_{\rm on}^{\rm LFBS}+[f^{(-)}_{-}]_{\rm inst}^{\rm LFBS}+[f^{(-)}_{-}]_{\rm Z.M.}^{\rm LFBS}$
although the explicit forms of the instantaneous and zero-mode contributions are different from those for $f^{(\perp)}_-(q^2)$.
For the calculation of the transition form factors $f_{\pm}(q^2)$, our new correspondence relations between the covariant BS model and the
standard LFQM are given by
\begin{eqnarray}\label{eq29}
&&\sqrt{2N_c} \frac{ \chi_1(x,{\bf k}_\perp) } {1-x}
\to \frac{ \phi_1(x,{\bf k}_\perp) } {\sqrt{ {\cal A}^2_1 + {\bf k}^2_\perp }}, \;\; M_1\to M_0,
\nonumber\\
&& \sqrt{2N_c} \frac{ \chi_2(x,{\bf k'}_\perp) } {1-x}
\to \frac{ \phi_2(x,{\bf k'}_\perp) } {\sqrt{ {\cal A}^2_2 + {\bf k'}^2_\perp }}, \;\; M_2\to M'_0.
\end{eqnarray}
In order to obtain the self-consistent description of our standard LFQM,
we first compute $[f_{+}]_{\rm full}^{\rm LFBS}=[f_{+}]_{\rm on}^{\rm LFBS}$, $[f^{(\perp)}_{-}]_{\rm on}^{\rm LFBS}$, and $[f^{(-)}_{-}]_{\rm on}^{\rm LFBS}$
from the BS model and apply Eq.~(\ref{eq29}) to get the corresponding standard LFQM results,
i.e. $[f_{+}]_{\rm on}^{\rm SLF}$, $[f^{(\perp)}_{-}]_{\rm on}^{\rm SLF}$ and $[f^{(-)}_{-}]_{\rm on}^{\rm SLF}$, respectively.
The final standard LFQM results for $f_{\pm}(q^2)$ are given by
%
\begin{eqnarray}\label{eq30}
[ f_{+}(q^2)]^{\rm SLF}_{\rm on} &=& \int^{1}_{0}dx \int \frac{d^{2}{\bf k}_{\perp}}{16\pi^3}
\frac{\phi_{1}(x,{\bf k}_{\perp})}{\sqrt{ {\cal A}_{1}^{2} + {\bf k}^{2}_{\perp}}}
\frac{\phi_{2}(x,{\bf k}'_{\perp})}{\sqrt{ {\cal A}_{2}^{2}+ {\bf k}^{\prime 2}_{\perp}}}
\nonumber\\
&&\times
\frac{(1-x)}{2} \left[\frac{ {\cal T}^{+}_{\rm on}} {2P^+_1}\right],
\end{eqnarray}
\begin{eqnarray}\label{eq31}
[ f^{(\perp)}_{-}(q^2)]^{\rm SLF}_{\rm on} &=& \int^{1}_{0}dx \int \frac{d^{2}{\bf k}_{\perp}}{16\pi^3}
\frac{\phi_{1}(x,{\bf k}_{\perp})}{\sqrt{ {\cal A}_{1}^{2} + {\bf k}^{2}_{\perp}}}
\frac{\phi_{2}(x,{\bf k}'_{\perp})}{\sqrt{ {\cal A}_{2}^{2}+ {\bf k}^{\prime 2}_{\perp}}}
\nonumber\\
&&\times
\frac{(1-x)}{2} \left[\frac{ {\cal T}^{+}_{\rm on}} {2P^+_1} + \frac{{\bf{\cal T}}_{\perp\rm on}\cdot{\bf q}_{\perp}}{{\bf q}^2_\perp}\right],
\end{eqnarray}
and
\begin{eqnarray}\label{eq32}
[ f^{(-)}_{-}(q^2)]^{\rm SLF}_{\rm on} &=& \int^{1}_{0}dx \int \frac{d^{2}{\bf k}_{\perp}}{16\pi^3}
\frac{\phi_{1}(x,{\bf k}_{\perp})}{\sqrt{ {\cal A}_{1}^{2} + {\bf k}^{2}_{\perp}}}
\frac{\phi_{2}(x,{\bf k}'_{\perp})}{\sqrt{ {\cal A}_{2}^{2}+ {\bf k}^{\prime 2}_{\perp}}}
\nonumber\\
&&\times
\frac{(1-x)[P^+_1{\cal T}^-_{\rm on}-\frac{ {\cal T}^{+}_{\rm on}} {2P^+_1} ({\Delta M}^2_{0+}+{\bf q}^2_\perp)]}
{2({\Delta M}^2_{0-}-{\bf q}^2_\perp)},
\nonumber\\
\end{eqnarray}
respectively, where ${\Delta M}^2_{0\pm} = M^2_0 \pm M^{\prime 2}_0$
are obtained from the on-mass-shell condition (i.e., $M^{(\prime)}\to M^{(\prime)}_0$)
and ${\cal A}_i = (1-x) m_i + x m_{q}\; (i=1,2)$.
We numerically confirm that $[ f^{(\perp)}_{-}(q^2)]^{\rm SLF}_{\rm on}=[ f^{(-)}_{-}(q^2)]^{\rm SLF}_{\rm on}$,
which supports the self-consistency of our standard LFQM.
The explicit forms of the on-shell trace terms and the form factors
in Eqs.~(\ref{eq30})-(\ref{eq32}) are given in the Appendix.
We note that the form factors obtained in the spacelike region using
the $q^+=0$ frame are analytically continued to the timelike region by changing ${\bf q}^2_\perp$
to $-q^2$ in the form factors.
Including the nonzero lepton mass ($m_\ell$), the differential decay rate for the
exclusive ${\cal P}(P_1)\to {\cal P}(P_2)\ell\nu_\ell$ process is given by~\cite{KS, Yao}
\begin{equation}\label{eq17}
\frac{d\Gamma}{dq^{2}}=\frac{8 {\cal N} |{\vec p}^*|}{3}
\left[
\biggl(1+\frac{m^2_\ell}{2q^2}\biggr) |H_+|^{2}
+
\frac{3 m^2_\ell}{2 q^2}|H_0|^2
\right],
\end{equation}
where
\begin{equation}\label{eq18}
|{\vec p}^*| = \frac{1}{2M_{1}}\sqrt{
(M_{1}^{2}+M_{2}^{2}-q^{2})^{2}-4M_{1}^{2}M_{2}^{2}}
\end{equation}
is the modulus of the three-momentum of the daughter meson in the parent meson rest frame, and the helicity amplitudes
$H_+$ and $H_0$, corresponding to the longitudinal parts of the spin-1 and spin-0 hadronic contributions, respectively,
can be expressed in terms of $f_+$ and $f_0$ as follows
\begin{equation}\label{eq19}
H_+ = \frac{2 M_1 |{\vec p}^*| }{\sqrt{q^2}} f_{+}(q^2),\;\;
H_0 = \frac{M^2_1- M^2_2 }{\sqrt{q^2}} f_{0}(q^2).
\end{equation}
The normalization factor in Eq.~(\ref{eq17}) is
\begin{equation}\label{eq20}
{\cal N} = \frac{G^{2}_{F}}{256\pi^{3}} \eta^2_{\rm EW} |V_{Q_{1}\bar{Q}_{2}}|^{2}\frac{q^2}{M^2_1}
\biggl(1-\frac{m^2_\ell}{q^2}\biggr)^2,
\end{equation}
where $G_{F}=1.166\times 10^{-5}$ GeV$^{-2}$ is the Fermi constant, $V_{Q_{1}\bar{Q}_{2}}$ is the relevant CKM
mixing matrix element and the factor $\eta_{\rm EW} =1.0066$ accounts for the leading order electroweak corrections~\cite{Sirlin}.
The kinematics of the ${\cal P}(P_1)\to {\cal P}(P_2)\ell\nu_\ell$ decay can also be expressed in terms of
the recoil variable $w$ defined by
\begin{equation}\label{rw}
w = v_{1}\cdot v_{2} = \frac{ M^2_1 + M^2_2 - q^2}{2 M_1 M_2},
\end{equation}
where $v_{1(2)}=\frac{P_{1(2)}}{M_{1(2)}}$ is the four velocity of the initial (final) meson and $q^2=(P_1-P_2)^2=(P_\ell + P_\nu)^2$.
While the minimum value of $w=1$ (or $q^2=q^2_{\rm max}$)
corresponds to zero-recoil of the final meson in the initial meson rest frame,
the maximum value of $w$ (or $q^2 =0$) corresponds to the maximum recoil of the final
meson recoiling with the maximum three momentum $|{\vec P}_2|=\frac{(M^2_1 - M^2_2)}{2 M_1}$.
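As a quick numerical illustration of the kinematics above (a sketch added here, not part of the original analysis; the PDG-like $B^0$ and $D^-$ masses are assumptions inserted for illustration), one can check that Eq.~(\ref{eq18}) and Eq.~(\ref{rw}) behave as stated: $w=1$ at $q^2=q^2_{\rm max}$, $w\simeq 1.59$ at $q^2=0$, and $|{\vec p}^*|$ reduces to $(M^2_1-M^2_2)/2M_1$ at maximum recoil.

```python
import math

# Assumed PDG-like masses (GeV) for the B^0 -> D^- transition (illustration only).
M1, M2 = 5.27965, 1.86965

def p_star(q2):
    """Daughter-meson momentum |p*| in the parent rest frame, Eq. (18)."""
    arg = (M1**2 + M2**2 - q2)**2 - 4.0 * M1**2 * M2**2
    return math.sqrt(max(arg, 0.0)) / (2.0 * M1)  # guard against float round-off at q2_max

def recoil_w(q2):
    """Recoil variable w = v1 . v2, Eq. (rw)."""
    return (M1**2 + M2**2 - q2) / (2.0 * M1 * M2)

q2_max = (M1 - M2)**2                     # zero-recoil point, where w = 1
w_max = recoil_w(0.0)                     # maximum recoil, q^2 = 0
p2_max = (M1**2 - M2**2) / (2.0 * M1)     # maximum three-momentum quoted in the text
```

With these masses, \texttt{w\_max} evaluates to about $1.589$, consistent with the $w\simeq 1.6$ quoted later for the maximum recoil point.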
\section{Numerical Results}
\label{sec:IV}
In our numerical calculations for the semileptonic
$B\to D\ell\nu_\ell$ ($\ell=e,\mu,\tau$) decays, we use two sets of
model parameters ($m,\beta$) for the linear and harmonic oscillator (HO)
confining potentials given in Table~\ref{t1} obtained from the calculation of the ground state
meson mass spectra~\cite{Choi07,CJ09}. For the physical $(B, D)$ meson masses, we
use the central values quoted by the Particle Data Group (PDG)~\cite{PDG}.
Our predictions for the decay constants of $(D, B)$ mesons obtained from the model parameters
in Table~\ref{t1}
are $f_{D}=197\; (180)$ MeV and $f_{B}=171\;(161)$ MeV
for the linear (HO) parameters, respectively, while the current available
experimental data are given by
$f^{\rm exp}_{D}=205.8(4.5)(0.4)(2.7)$ MeV~\cite{PDG}
and
$f^{\rm exp}_{B}=229^{+39+34}_{-31-37}$ MeV~\cite{Ikado}.
\begin{table}
\centering
\caption{The constituent quark mass $m_q$ (in GeV) and the Gaussian parameters
$\beta_{q{\bar q}}$ (in GeV) for the linear and HO confining potential
obtained by the variational principle~\cite{CJ09,Choi07}.
$q=u$ and $d$.}
\label{t1}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{7pt}
\begin{tabular}{cccccc} \hline\hline
Model ~&~ $m_q$~ &~ $m_c$ ~& $m_b$ ~&~ $\beta_{qc}$ ~&~ $\beta_{qb}$ \\
\hline
Linear ~&~ 0.22~&~ 1.8 ~&~ 5.2 ~&~ 0.4679 ~&~ 0.5266 \\
HO ~&~ 0.25 ~&~ 1.8 ~&~ 5.2 ~&~ 0.4216 ~&~ 0.4960 \\
\hline\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[height=7cm, width=7cm]{Fig1.eps}
\caption{\label{fig1} The $q^2$ dependent form factors ($f_+, f_0, f_-$) of the $B\to D\ell\nu_\ell$ decay
for both the spacelike and the kinematic timelike regions, $-2\leq q^2\leq (M_B-M_D)^2$ GeV$^2$.}
\end{figure}
In Fig.~\ref{fig1}, we show the $q^2$ dependences of $f_+(q^2)$ (solid line), $f_0(q^2)$ (dashed line),
and $f_-(q^2)$ for $B\to D\ell\nu_\ell$ decay obtained from Eqs.~(\ref{eq30})-(\ref{eq32}) with the linear potential parameters.
As one can see, our result for $f_-(q^2)$ (dot-dashed line) obtained from $(J^+, {\bf J}_\perp)$ (see Eq.~(\ref{eq31})) shows complete
agreement with $f_-(q^2)$ (circle) obtained from $(J^+, J^-)$ (see Eq.~(\ref{eq32})), substantiating the self-consistency of our LFQM.
We should also note that the form factors are displayed not only for
the whole timelike kinematic region [$m^2_\ell \leq q^2\leq (M_B - M_D)^2$] (in units of GeV$^2$)
but also for the spacelike region ($-2\leq q^2\leq 0$) (in units of GeV$^2$) to demonstrate the validity of our analytic continuation from
the spacelike region to the timelike region by changing ${\bf q}^2_\perp$ to $-{\bf q}^2_\perp (= q^2 >0)$ in the form factors.
\begin{table}
\caption{Form Factors of the $B\to D\ell\nu_\ell$ decay at $q^2=0$ and $q^2=q^2_{\rm max}$
obtained from the linear (HO) potential parameters.}
\label{t2}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{6pt}
\centering
\begin{tabular}{ccccc} \hline\hline
$f_+(0)$ & $f_+(q^2_{\rm max})$ & $f_0(q^2_{\rm max})$ & $f_-(0)$ & $f_-(q^2_{\rm max})$ \\
\hline
0.7157 & 1.1235 & 0.8739 & -0.3298 & -0.5231\\
(0.6969) & (1.1209) & (0.8755) & (-0.3190) & (-0.5142) \\
\hline\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The fitted parameters $b_{+(0)}$ and $c_{+(0)}$ for the parametric form factors in Eq.~(\ref{eq34})
obtained from the linear (HO) potential parameters.}
\label{t3}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{ccccc} \hline\hline
$f_{+,0}(0)$ & $b_+$ & $c_+$ & $b_0$ & $c_0$ \\
\hline
0.7157 & 0.955259 & 0.203408 & 0.428416 & -0.014496\\
(0.6969) & (1.00776) & (0.245602) & (0.484403) & (-0.007704) \\
\hline\hline
\end{tabular}
\end{table}
Our results for the form factors $(f_\pm, f_0)$ obtained from the linear (HO) potential parameters
at the maximum recoil ($q^2=0$) and minimum recoil ($q^2=q^2_{\rm max}$) points are
summarized in Table~\ref{t2}. Our direct LFQM results for the form factors $f_i (q^2)$ $(i=\pm, 0)$
obtained from Eqs.~(\ref{eq30})-(\ref{eq32}) are well described by the following parametrization~\cite{LCSR1}
\begin{equation}\label{eq34}
f_{i}(q^2) = \frac{f_{i}(0)}{1 - b_{i} (q^2/M^2_B) + c_{i} (q^2/M^2_B)^2 },
\end{equation}
where the parameters $(b_i, c_i)$ can be obtained from our LFQM results in Eqs.~(\ref{eq30})-(\ref{eq32})
via $b_i=\frac{M^2_B}{f_i(0)} f'_i(0)$ and $c_i = b^2_i - \frac{f''_i(0) M^4_B}{2 f_i(0)}$.
The fitted parameters $(b_i, c_i)$ for $(f_+, f_0)$ are also summarized in Table~\ref{t3} and
those for $f_-$ are obtained as
$b_-=0.970071\;(1.00817)$ and $c_-=0.200821\;(0.2384)$ for the linear (HO) parameters, respectively.
We should note that our direct
LFQM results and the ones obtained from Eq.~(\ref{eq34}) are in excellent agreement with each other within 0.1$\%$ error.
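As a cross-check (a sketch added here, not the authors' code; the PDG-like masses are assumptions), one can verify numerically that the parametrization of Eq.~(\ref{eq34}) with the fitted linear-potential parameters of Table~\ref{t3} reproduces the endpoint values of Table~\ref{t2}, and that the $q^2=q^2_{\rm max}$ entries of Table~\ref{t2} satisfy the constraint of Eq.~(\ref{eq16}).

```python
# Assumed PDG-like masses (GeV); all Table entries used are linear-potential values.
M_B, M_D = 5.27965, 1.86965
q2_max = (M_B - M_D)**2

def f_param(q2, f0, b, c):
    """Eq. (34): f_i(q^2) = f_i(0) / (1 - b_i q^2/M_B^2 + c_i (q^2/M_B^2)^2)."""
    s = q2 / M_B**2
    return f0 / (1.0 - b * s + c * s**2)

# Endpoint values of Table t2 reproduced from the fitted (b, c) of Table t3.
fp_max = f_param(q2_max, 0.7157, 0.955259, 0.203408)   # quoted: f_+(q^2_max) = 1.1235
f0_max = f_param(q2_max, 0.7157, 0.428416, -0.014496)  # quoted: f_0(q^2_max) = 0.8739

# Eq. (16) at q^2 = q^2_max, using f_-(q^2_max) = -0.5231 from Table t2.
f0_from_relation = 1.1235 + q2_max / (M_B**2 - M_D**2) * (-0.5231)
```

At $q^2=0$ the parametrization returns $f_i(0)$ by construction, so $f_0(0)=f_+(0)=0.7157$ and Eq.~(\ref{eq16}) is trivially satisfied there; at $q^2_{\rm max}$ all three numbers above agree with the quoted table values to well within the stated $0.1\%$ fit accuracy.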
\begin{figure}
\centering
\includegraphics[height=7cm, width=7cm]{Fig2.eps}
\caption{\label{fig2} The recoil variable $w$ dependent form factors $(f_+, f_0)$ of $B\to D\ell\nu_\ell$
obtained from the linear and HO potential parameters, and the result of the combined fit to
experimental~\cite{Belle16} and lattice QCD (HPQCD)~\cite{Lat1} data.}
\end{figure}
In Fig.~\ref{fig2}, we show the recoil variable $w$ dependent form factors $f_+(w)$ (solid lines) and $f_0(w)$ (dashed lines)
obtained from both the linear (black lines) and HO (blue lines) potential parameters and compare them with the
data from the Belle experiment~\cite{Belle16} and lattice QCD (HPQCD)~\cite{Lat1}.
Our results are overall in good agreement with those from~\cite{Belle16,Lat1}.
Of special interest, while our results for $f_+(w)$ and $f_0(w)$ obtained from the linear potential parameters (black lines)
are somewhat different
from those obtained from the HO potential parameters (blue lines) at the maximum recoil
point (i.e. $w\simeq 1.6$), both sets of potential parameters give almost the same results at the zero-recoil point (i.e. $w=1$).
This is related to the heavy-quark symmetry (HQS): in the infinite quark mass limit, the heavy-to-heavy transition form factors
between two pseudoscalar mesons, such as those for the $B\to D\ell\nu_\ell$ decay, reduce to a single universal Isgur-Wise (IW)
function~\cite{IW1,IW2}, ${\cal G}(w)=\frac{2\sqrt{M_B M_D}}{M_B + M_D}f_+(w)$, which should in principle
satisfy the normalization ${\cal G}(1)=1$ in the exact HQS limit.
Our LFQM results of ${\cal G}(1)=0.988\; (0.984)$ obtained from the linear (HO) parameters
are in good agreement with the exact HQS limit to within $2\%$. Our results should also
be compared with other theoretical
predictions such as ${\cal G}(1) =1.035 (40)$~\cite{Lat1}, ${\cal G}(1) =1.0541 (83)$~\cite{Lat2}, and ${\cal G}(1)=1.033 (95)$~\cite{Lat3}
from the lattice QCD and ${\cal G}(1)=0.981^{+0.045}_{-0.048}$ from the QCD sum rules~\cite{Yi18}.
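The quoted values of ${\cal G}(1)$ follow directly from the Table~\ref{t2} entries for $f_+(q^2_{\rm max})=f_+(w=1)$; the short sketch below (an illustration added here, with assumed PDG-like masses) performs this check.

```python
import math

# Assumed PDG-like B and D masses (GeV), used only for this cross-check.
M_B, M_D = 5.27965, 1.86965

def iw_G(f_plus):
    """Isgur-Wise combination G(w) = 2 sqrt(M_B M_D) / (M_B + M_D) * f_+(w)."""
    return 2.0 * math.sqrt(M_B * M_D) / (M_B + M_D) * f_plus

# f_+(q^2_max) = f_+(w=1) from Table t2: 1.1235 (linear), 1.1209 (HO).
G1_lin = iw_G(1.1235)
G1_ho = iw_G(1.1209)
```

Both values land within about $0.1\%$ of the quoted ${\cal G}(1)=0.988$ (linear) and $0.984$ (HO).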
\begin{figure}
\centering
\includegraphics[height=7cm, width=7cm]{Fig3.eps}
\caption{\label{fig3} Differential decay width of $B\to D\ell\nu_\ell$ ($\ell=e,\mu,\tau$) compared with the
experimental data~\cite{Belle16} measured from the light leptonic decay mode.}
\end{figure}
\begin{table*}
\caption{Our LFQM predictions on the branching ratios (in $\%$) for $B\to D\ell\nu_\ell$ ($\ell=e,\mu,\tau$) decays compared with
the results from other theoretical predictions~\cite{LCSR1,HQET} and PDG~\cite{PDG}. $\ell'=e,\mu$.}
\label{t4}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{12pt}
\centering
\begin{tabular}{cccccc} \hline\hline
Channel & Linear &HO & LCSR~\cite{LCSR1} & HQET~\cite{HQET} & PDG~\cite{PDG} \\
\hline
$B^0\to D^-\ell'\nu_{\ell'}$ & $2.34\pm 0.18$ & $2.25\pm 0.17$ & $2.086^{+0.230}_{-0.232}$ & $-$ & $2.19\pm 0.12$\\
$B^0\to D^-\tau\nu_\tau$ & $0.66\pm 0.05$ & $0.64\pm 0.05$ & $0.666^{+0.058}_{-0.057}$ & $0.64\pm 0.05$ & $1.03\pm 0.22$\\
$B^+\to {\bar D^0}\ell'\nu_{\ell'}$ & $2.53\pm 0.19$ & $2.44\pm 0.19$ & $2.260^{+0.249}_{-0.251}$ & $-$ & $2.27\pm 0.11$\\
$B^+\to {\bar D^0}\tau\nu_\tau$ & $0.72\pm 0.05$ & $0.70\pm 0.05$ & $0.724^{+0.063}_{-0.062}$ & $0.66\pm 0.05$ & $0.77\pm 0.25$\\
\hline\hline
\end{tabular}
\end{table*}
In Fig.~\ref{fig3}, we show our results for the differential width of $B\to D\ell\nu_\ell$ ($\ell=e,\mu,\tau$) decay
obtained from both linear (black lines) and HO (blue lines) parameters. The solid lines represent our results for the light
($e, \mu$) decay modes compared with the experimental data from Belle~\cite{Belle16}.
The dashed lines represent our results for the semitauonic $B\to D\tau\nu_\tau$ decay.
We summarize our LFQM predictions on the branching ratios for $B\to D\ell\nu_\ell$ decays obtained from both linear and HO potential parameters
in Table~\ref{t4} and compare ours with
the results from PDG~\cite{PDG} and other theoretical predictions such as LCSR~\cite{LCSR1}
and heavy quark effective theory (HQET)~\cite{HQET}.
For the numerical calculations of the branching ratios,
we use the CKM matrix element $|V_{cb}|=(40.5\pm 1.5)\times 10^{-3}$,
the PDG values~\cite{PDG} of the lepton ($e,\mu, \tau$) and hadron $(B, D)$ masses
together with the lifetimes of $(B^0, B^{\pm})$.
As one can see from Table~\ref{t4}, our results obtained from the linear parameters are slightly larger than those obtained
from the HO parameters. Our predictions for three decay modes such as $B^0\to D^-\ell'\nu_{\ell'}$,
$B^+\to {\bar D^0}\ell'\nu_{\ell'}$, and $B^+\to {\bar D^0}\tau\nu_\tau$ also agree with other theoretical results~\cite{LCSR1,HQET}
as well as PDG values~\cite{PDG} within the errors.
For the semitauonic $B^0\to D^-\tau\nu_\tau$ decay, while the three theoretical predictions agree with each other, they
are smaller than the PDG value.
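As an illustrative cross-check of the light-lepton branching ratio (a sketch, not the authors' code: the meson masses, electron mass, $B^0$ lifetime, and $\hbar$ are assumed PDG-like values, while $G_F$, $\eta_{\rm EW}$, and $|V_{cb}|$ are the values quoted in the text), one can integrate Eqs.~(\ref{eq17})--(\ref{eq20}) numerically with the fitted linear-potential form factors of Eq.~(\ref{eq34}).

```python
import math

GF = 1.166e-5               # Fermi constant (GeV^-2), as quoted in the text
eta_EW = 1.0066             # electroweak correction factor, as quoted
Vcb = 0.0405                # central |V_cb| used in the text
M1, M2 = 5.27965, 1.86965   # assumed PDG-like B^0, D^- masses (GeV)
m_l = 0.000511              # electron mass (GeV), assumed
tau_B0 = 1.519e-12          # assumed B^0 lifetime (s)
hbar = 6.582e-25            # GeV * s

def f_param(q2, f0, b, c):
    """Parametrization of Eq. (34), with the parent mass M1 playing the role of M_B."""
    s = q2 / M1**2
    return f0 / (1.0 - b * s + c * s**2)

def dGamma_dq2(q2):
    """Differential rate of Eqs. (17)-(20) with the linear-potential form factors."""
    arg = (M1**2 + M2**2 - q2)**2 - 4.0 * M1**2 * M2**2
    p = math.sqrt(max(arg, 0.0)) / (2.0 * M1)          # |p*| of Eq. (18)
    fp = f_param(q2, 0.7157, 0.955259, 0.203408)       # f_+ (Table t3, linear)
    f0 = f_param(q2, 0.7157, 0.428416, -0.014496)      # f_0 (Table t3, linear)
    Hp = 2.0 * M1 * p * fp / math.sqrt(q2)             # helicity amplitudes, Eq. (19)
    H0 = (M1**2 - M2**2) * f0 / math.sqrt(q2)
    N = (GF**2 / (256.0 * math.pi**3)) * eta_EW**2 * Vcb**2 \
        * (q2 / M1**2) * (1.0 - m_l**2 / q2)**2        # Eq. (20)
    return (8.0 * N * p / 3.0) * ((1.0 + m_l**2 / (2.0 * q2)) * Hp**2
                                  + 1.5 * (m_l**2 / q2) * H0**2)   # Eq. (17)

# Midpoint rule over the physical range m_l^2 <= q^2 <= (M1 - M2)^2.
q2_lo, q2_hi, n = m_l**2, (M1 - M2)**2, 20000
h = (q2_hi - q2_lo) / n
Gamma = h * sum(dGamma_dq2(q2_lo + (i + 0.5) * h) for i in range(n))  # width in GeV
Br_percent = 100.0 * Gamma * tau_B0 / hbar
```

Under these assumptions the integration yields roughly $2.3$--$2.4\%$, consistent with the $2.34\pm0.18\%$ quoted in Table~\ref{t4} for $B^0\to D^-\ell'\nu_{\ell'}$ with the linear parameters.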
From the results given in Table~\ref{t4}, our predictions for the ratio
${\cal R}(D)=\frac{{\rm Br}(B\to D\tau\nu_\tau)}{{\rm Br}(B\to D\ell'\nu_{\ell'})} (\ell'=e,\mu)$ are as follows
\begin{equation}
{\cal R}(D) = 0.284^{+0.046}_{-0.039} \left[ 0.286^{+0.046}_{-0.040} \right],
\end{equation}
for the linear [HO] potential parameters. Our predictions for the ratio ${\cal R}(D)$ are consistent with other theoretical predictions
such as $0.300(8)$~\cite{Lat1} and $0.299(11)$~\cite{Lat2} from the LQCD and $0.320^{+0.018}_{-0.021}$~\cite{LCSR1}
within the errors. While our results are considerably smaller than the experimental
values, ${\cal R}^{\rm exp}(D)=0.440(58)(42)$ from BaBar~\cite{BaB12,BaB13} and
${\cal R}^{\rm exp}(D)=0.375(64)(26)$ from Belle~\cite{Belle15}, we also take note of
a new preliminary result, ${\cal R}^{\rm exp}(D)=0.307(37)(16)$~\cite{Belle19}, reported by the
Belle collaboration, which is consistent with the SM at the $1.2\sigma$ level.
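The central value ${\cal R}(D)=0.284$ can also be recovered, to within rounding, from the Table~\ref{t4} entries themselves (the quoted value was presumably computed from unrounded decay widths, so only approximate agreement is expected from this sketch):

```python
# R(D) recomputed from the rounded central values of Table t4 (linear potential).
RD_neutral = 0.66 / 2.34   # B^0 -> D^- channel
RD_charged = 0.72 / 2.53   # B^+ -> D^0bar channel
```

Both ratios agree with the quoted $0.284$ to better than $1\%$.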
\section{Summary and Discussion}
\label{sec:V}
In this work, we discussed the self-consistent description of the decay constant $f_{\cal P}$ of a
pseudoscalar (${\cal P}$) meson and of the weak
form factors $f_+$ and $f_-$ (or $f_0$) for the exclusive
semileptonic $B\to D\ell\nu_\ell$ $(\ell=e, \mu,\tau)$ decays in the standard
LFQM. It has been a common perception in the LF formulation that while the plus component ($J^+$) of the LF current $J^\mu$ in
the matrix element can be regarded as the ``good" current, the perpendicular (${\bf J}_\perp$) and
the minus ($J^-$) components are known as the ``bad" currents, since $({\bf J}_\perp, J^-)$
are easily contaminated by treacherous points
such as the LF zero mode and the off-mass-shell instantaneous contributions.
To scrutinize such treacherous points when the use of ${\bf J}_\perp$ or $J^-$ is unavoidable,
we employed the exactly solvable, manifestly covariant BS model with a multipole-type $q{\bar q}$ bound-state
vertex function. Carrying out the LF calculations for $f_{\cal P}$ and $f_{\pm}(q^2)$
in the BS model, we found that $f_{\cal P}$ and $f_{-}(q^2)$ obtained from the so-called ``bad" components of the current
receive the zero-mode contributions as well as the instantaneous ones.
We then linked the covariant BS model to the standard LFQM
following the same universal correspondence Eq.~(\ref{eq13})
between the two models that we found in our previous analysis of the twist-2 and twist-3 DAs of pseudoscalar and
vector mesons~\cite{CJ14,CJ15,CJ17} and replaced the LF vertex function in the BS model with the more
phenomenologically accessible Gaussian wave function provided by the LFQM analysis of meson mass spectra~\cite{CJ99,CJ99PLB}.
As in the previous analysis~\cite{CJ14,CJ15,CJ17}, it is striking to observe that the zero mode and the instantaneous contribution
present in the BS model become absent in the LFQM. In other words,
our LFQM results for the decay constant $f_{\cal P}$ and the transition form factors $f_{\pm}(q^2)$ are shown to be independent of the components of the
current without involving any of those treacherous contributions.
We then applied our current-component-independent form factors $f_{\pm}(q^2)$ to the
self-consistent analysis of the $B\to D\ell\nu_\ell$ ($\ell=e,\mu,\tau$) decays
using our LFQM constrained by the variational principle for the QCD-motivated effective Hamiltonian with the linear (or HO) plus
Coulomb interaction~\cite{CJ09,CJ99,CJ99PLB,Choi07}.
The form factors $f_{\pm}(q^2)$ are obtained in the $q^+=0$ frame ($q^2=-{\bf q}^2_\perp <0$) and then analytically continued to the
timelike region by changing ${\bf q}^2_\perp$ to $-q^2$ in the form factors.
We obtained ${\rm Br}(B\to D\ell\nu_\ell)$ for both the neutral and charged $B$ mesons and compared them with the experimental data as well as
with other theoretical model predictions. Our results for ${\rm Br}(B\to D\ell\nu_\ell)$ show reasonable agreement with the data except
for the semitauonic $B^0\to D^-\tau\nu_\tau$ decay.
Our results for the ratio ${\cal R}(D)$ are consistent with other theoretical predictions as well as with the new preliminary result from
the Belle collaboration~\cite{Belle19}, although the previous data from BaBar~\cite{BaB12,BaB13} and Belle~\cite{Belle15}
are considerably larger than our predictions.
\section*{Acknowledgments}
This work was supported by the National Research Foundation of Korea (NRF)
under Grant No. NRF-2020R1F1A1067990.
\section{Introduction}
The replicability of research findings is essential for the credibility of
science. However, the scientific world is experiencing a crisis
\citep{Begley2015} as the replicability rate of many fields appears to be
alarmingly low. As a result, large scale replication projects, where original
studies are selected and replicated as closely as possible to the original
procedures, have been conducted in psychology \citep{open2015},
social sciences \citep{camerer2018} and economics \citep{camerer2016} among
others. Replication success is
usually assessed using significance and $p$-values, compatibility of effect
estimates, subjective assessments of replication teams and meta-analysis of
effect estimates \citep[\abk{\latin{e.\,g}} in][]{open2015}. The statistical evaluation of
replication studies is still generating much discussion and new standards
are proposed \citep[\abk{\latin{e.\,g}} in][]{Patil2016, Ly2018, held2020}.
Yet before a replication study is analyzed, it needs to be designed.
While the conditions of the replication study are ideally identical to the
original study, the replication sample size stands out as an exception and
requires further consideration. Using the same sample size as in the original
study may lead to a severely underpowered replication study,
even if the effect $\hat\theta_o$ estimated in the original study is the true,
unknown effect size $\theta$ \citep{good1992}. Standard power calculations
using the effect estimate from the
original study as the basis for the replication study are commonly used.
A major criticism of this method is that the uncertainty
accompanying the original finding is ignored, and so the resulting replication
study is likely to be underpowered \citep{anderson2017}. In this paper,
we propose alternatives based on predictive power and adapted from Bayesian
approaches to incorporate prior knowledge to sample size calculation in clinical
trials \citep{Spieg2004}.
In an era where an increasing number of replication projects are being
undertaken, optimal allocation of resources appears to be of particular
importance. Adaptive designs are well suited for this purpose and their
relevance no longer needs to be justified, particularly in clinical trials where
continuing a study which should be stopped can be a matter of life or death.
Stopping
for futility refers to the termination of a trial when the data at interim
indicate that it is unlikely to achieve statistical significance at the end of
the trial \citep{snapinn2006}. In contrast, stopping for efficacy arises when
the data at interim are so convincing that there is no need to continue
collecting more data.
One approach for assessing efficacy and futility is called stochastic
curtailment \citep{halperin1982}, where the conditional power of
the study, given the data so far, is calculated for a range of alternative
hypotheses.
Instead of conditional power, predictive power can also be used to judge
if a trial should be continued \citep{herson1979}. This concept has been
discussed in depth in \citet{dallow2011} and \citet{rufibach2016},
with an emphasis on the choice of the prior in the latter.
\citet{lakens2014} points out that sequential replication studies could be an
alternative to fixed sample size calculations. This approach has been adopted by
\citet{camerer2018}
in the \emph{Social Science Replication Project} (\emph{SSRP}), a
large-scale project aiming at evaluating the replicability of social sciences
experiments published between 2010 and 2015 in \emph{Nature} and \emph{Science}.
A two-stage procedure was used and 21 original studies were replicated. However, the sequential approach did not include a
power calculation at interim,
only allowed for premature stopping for efficacy, and did not mention any
adjustment of the threshold
for significance.
We try to fill this gap by proposing different methods to calculate the interim
power, namely the power of a replication study taking into account the data from
an interim analysis. We argue that \emph{predictive} interim power is a useful
tool to guide the decision to stop replication studies where the intended effect
is not present. Our framework only enables power calculation
at a single interim analysis.
This paper is structured as follows: power calculations for non-sequential
(Section~\ref{sec:nonseq}) and sequential (Section~\ref{sec:seq}) replication
studies are presented, with a focus on comparing conditional and
predictive methods.
Relevant properties of these methods are then illustrated using data from the \emph{SSRP}
in Section~\ref{sec:application}.
We close with some discussion in Section~\ref{sec:discussion}.
\section{Non-sequential replication studies}\label{sec:nonseq}
Suppose a study has been conducted in order to
estimate an unknown effect size $\theta$. We consider the one-sample case
throughout this paper but the results can also be generalized to the case
of two samples.
The study produced a positive effect estimate $\hat
\theta_o$. In order to confirm this finding,
a replication study is planned. Let us assume that the future data of the
replication study are normally distributed as follows,
\begin{align*}
Y_{1}, \ldots, Y_{n_r} \simiid \Nor\left(\theta, \sigma^2\right) \, ,
\end{align*}
where $\sigma$ is the known standard
deviation of one observation, assumed to be the same for original and replication
study.
In the \emph{SSRP}, as well as in most replication projects,
power calculations for the replication studies are based on the original effect
estimate $\hat\theta_o$.
In order to incorporate the uncertainty of $\hat\theta_o$
we use the following
prior
\begin{equation}
\theta \sim \Nor\left(\hat \theta_o, \sigma^2_o = \sigma^2/n_o\right) \, , \label{eq:prior}
\end{equation}
centered around $\hat\theta_o$
and with variance inversely proportional to the
original sample size $n_o$ \citep{Spieg2004}.
Prior~\eqref{eq:prior} may be too optimistic in practice,
where original effect estimates tend to be exaggerated \citep{camerer2018}.
This issue and possible solutions are discussed in the
next section.
In what follows, the different formulas resulting
from the use of the prior \eqref{eq:prior} are described.
This section is inspired by Section 6.5 in
\citet{Spieg2004} where Bayesian contributions to selecting the sample size
of a clinical trial are studied. We adapt this methodology to the replication
framework and express the power calculation formulas in terms of unitless
quantities (namely relative sample sizes and test statistics).
\subsection{Methods}\label{sec:methods}
We differentiate between design and analysis prior,
both having an
impact on the power calculation \citep{OHagan2001}, and present the different
combinations of priors in
Table \ref{tbl:designanalysis}.
\newcolumntype{C}{>{\centering\arraybackslash} m{4.5cm} }
\newcolumntype{D}{>{\centering\arraybackslash} m{4.5cm} }
\begin{table}[!h]
\centering
\caption{\small Methods of power calculations resulting from the different combinations of design and analysis priors.}
\begin{tabular}{m{0.7cm} m{2.3cm}|D|C}
\multicolumn{2}{c}{} & \multicolumn{2}{c}{\textbf{Design}}\\
\multicolumn{2}{c}{} & \multicolumn{2}{c}{}\\
& & Point prior $\theta = \hat\theta_o$ & Normal prior $\theta \sim \Nor\left(\hat \theta_o, \sigma^2_o\right)$ \\
\cline{2-4}
\multirow{2}{*}{\rot{\textbf{Analysis}}} & Flat prior & Conditional & Predictive\\
\cline{2-4}
& Normal prior $\theta \sim \Nor\left(\hat \theta_o, \sigma^2_o\right)$ & Conditional Bayesian & Fully Bayesian
\end{tabular}
\label{tbl:designanalysis}
\end{table}
A point prior at $\theta = \hat\theta_o$ in the design
corresponds to the concept of conditional power
\citep{SpiegFreed1986}.
In contrast, the normal design prior \eqref{eq:prior}
is related to the concept of predictive power,
which averages the conditional power over the possible values of the true effect
according to its design prior distribution.
Alternative names in the literature are assurance
\citep{ohagan2005}, probability of study success \citep{wang2013} and Bayesian
predictive power \citep{spiegelhalter1986}. Conditional and predictive power
are usually accompanied by a flat analysis prior, but can also be calculated
assuming that original and replication data are pooled (using the
normal analysis prior \eqref{eq:prior}),
resulting in the conditional Bayesian power and the fully Bayesian power,
respectively.
In practice, publication bias and the winner's curse often lead to overestimated
original effect estimates \citep{ioannidis2008, button2013, anderson2017}.
Hence, prior \eqref{eq:prior} might be over-optimistic and
lead to underpowered replication studies.
A simple way to correct for this over-optimism
is to multiply
the \emph{design} prior mean
$\hat\theta_o$ in \eqref{eq:prior}
by a factor $d$ between $0$ and $1$. The corresponding shrinkage factor
$s = 1-d$ can be chosen based on previous replication studies in the same field.
This is the approach considered in the \emph{SSRP} and we expand on this
in Section~\ref{sec:application}.
More advanced methods
using empirical Bayes based power estimation \citep{Jiang2016} and data-driven shrinkage
\citep{Pawel2019} are not
considered here.
\subsubsection{Conditional power}
Conditional power is the probability that a replication study will lead to a
statistically significant conclusion at the two-sided level $\alpha$, given that
the alternative hypothesis is true \citep[Section 2.5]{Spieg2004}.
In the context of a replication study, the alternative hypothesis is
represented by the effect estimate $\hat\theta_o$ from the original study.
Let $z_{\alpha/2}$ and $\Phi[\cdot]$ respectively denote the $\alpha/2$-quantile and the cumulative distribution function of the standard normal distribution. The conditional power
of a replication study with sample size $n_r$ is
\begin{align}
\text{CP} = \Phi \left[\frac{\hat\theta_o \sqrt{n_r}}{\sigma} + z_{\alpha/2}
\right] \label{eq:standardpo} \, ,
\end{align}
see Appendix~\ref{sec:standardpow.app} for a derivation.
The required replication sample size $n_r$ can be obtained by rearranging \eqref{eq:standardpo}.
A key feature of our framework is
that all power/sample size formulas are expressed without
absolute effect measures.
Simple mathematical rearrangements
produce an expression which only depends on the original test statistic
$t_o = \hat\theta_o/\sigma_o = \hat\theta_o\sqrt{n_o}/\sigma$ and the
variance ratio $c = \sigma^2_o/\sigma^2_r$
which
simplifies to the relative sample size $c = n_r/n_o$
and represents how much the sample size in the replication study is increased
as compared to the one in the original study. Formula \eqref{eq:standardpo}
then becomes
\begin{align}
\text{CP} & = \Phi\left[\sqrt{c}\, t_o + z_{\alpha/2} \right] \, . \label{eq:spow.d}
\end{align}
This formula highlights an intuitive property of the conditional power:
the larger the evidence in the original study (quantified by $t_o$) or the
larger the increase in sample size compared to the original study
(represented by $c$), the larger the conditional power of the replication study.
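Formula \eqref{eq:spow.d}, and its rearrangement for the required relative sample size, can be evaluated with a few lines of code. The following Python sketch is purely illustrative (standard library only, with the normal quantile obtained by bisection) and is not part of the analysis code of the paper.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection (accurate enough for illustration)
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conditional_power(t_o, c, alpha=0.05):
    # CP = Phi[sqrt(c) * t_o + z_{alpha/2}]
    return phi(sqrt(c) * t_o + phi_inv(alpha / 2.0))

def required_c(t_o, power=0.9, alpha=0.05):
    # relative sample size c = n_r/n_o obtained by rearranging the CP formula
    return ((phi_inv(power) - phi_inv(alpha / 2.0)) / t_o) ** 2
```

For a just-significant original study ($t_o = z_{1-\alpha/2}$) replicated with the same sample size ($c = 1$), \texttt{conditional\_power} returns exactly 50\%.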
\subsubsection{Predictive power}
In order to incorporate the uncertainty of $\hat\theta_o$,
the concept of predictive power can be used instead \citep{SpiegFreed1986}.
It is given by
\begin{align}
\text{PP} = \Phi\left[\sqrt{\frac{n_o}{n_o+n_r}}\left(\frac{\hat\theta_o
\sqrt{n_r}}{\sigma}+z_{\alpha/2}\right)\right] \, \label{eq:predpow_or},
\end{align}
see Appendix \ref{sec:predictivepow.app} for a derivation.
The predictive power \eqref{eq:predpow_or} tends to the conditional power
\eqref{eq:spow.d} as the original sample size $n_o$ increases.
Using the unitless
quantities $t_o$ and $c$, the predictive power can be rewritten as
\begin{align}
\text{PP} & = \Phi\left[\sqrt{\frac{c}{c+1}}\, t_o + \sqrt{\frac{1}{c+1}}\,
z_{\alpha/2}\right] \, . \label{eq:hpow.d}
\end{align}
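Note that the argument of $\Phi$ in \eqref{eq:hpow.d} equals the argument of \eqref{eq:spow.d} divided by $\sqrt{c+1}$, so the predictive power always lies strictly between 50\% and the conditional power. The following Python sketch (standard library only, normal quantile via bisection; illustrative, with hypothetical inputs) makes this concrete.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conditional_power(t_o, c, alpha=0.05):
    # CP = Phi[sqrt(c) * t_o + z_{alpha/2}]
    return phi(sqrt(c) * t_o + phi_inv(alpha / 2.0))

def predictive_power(t_o, c, alpha=0.05):
    # PP = Phi[sqrt(c/(c+1)) * t_o + sqrt(1/(c+1)) * z_{alpha/2}]
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c / (c + 1.0)) * t_o + sqrt(1.0 / (c + 1.0)) * z)
```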
\subsubsection{Fully Bayesian and conditional Bayesian power}
So far, two power calculation methods using a flat analysis prior
have been considered.
This approach corresponds to the two-trials rule in drug development,
which requires ``at least two adequate and well-controlled studies, each
convincing on its own, to establish effectiveness'' \citep[p. 3]{FDA1998}.
In practice, this translates to two studies with a significant $p$-value
and an effect in the intended direction.
An alternative approach for the analysis is to pool original and
replication data.
This is similar to a meta-analysis of original
and replication effect estimates, as done in the \emph{SSRP} for
example.
However, in order to ensure the same evidence level as when original
and replication studies
are analyzed independently,
the corresponding two-sided significance level $\tilde\alpha = \alpha^2/2$ should be used
\citep{Fisher1999, Gibson2020}.
The fully Bayesian power is calculated using the prior \eqref{eq:prior} in
both the design and the analysis. Using the same prior beliefs in both stages
is considered the most natural approach by some authors \citep[\abk{\latin{e.\,g}} in][]{OHagan2001}. The corresponding formula is
\begin{align}
\text{FBP} = \Phi \left[\sqrt{\frac{c+1}{c}} \, t_o + \sqrt{\frac{1}{c}}\,z_{\tilde\alpha/2}\right] \, . \label{eq:bpow.d}
\end{align}
Note that the fully Bayesian power is also a predictive power as it incorporates the uncertainty of the original effect estimate $\hat\theta_o$.
The last possible combination of design and analysis priors leads
to
the conditional Bayesian power:
\begin{equation}
\text{CBP} = \Phi\left[\frac{c+1}{\sqrt{c}}\,t_o +\sqrt{\frac{c+1}{c}}\, z_{\tilde\alpha/2} \right] \, . \label{eq:cbpow.d}
\end{equation}
Derivations of \eqref{eq:bpow.d} and \eqref{eq:cbpow.d} can be found in Appendix \ref{sec:bpow.app} and \ref{sec:cbpow.app}.
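For completeness, the two Bayesian variants can be sketched in the same style (Python, standard library only; illustrative, not the analysis code of the paper). Note the stricter level $\tilde\alpha = \alpha^2/2$ entering through $z_{\tilde\alpha/2}$.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fully_bayesian_power(t_o, c, alpha=0.05):
    # FBP = Phi[sqrt((c+1)/c) * t_o + sqrt(1/c) * z_{alpha_tilde/2}],
    # with alpha_tilde = alpha^2 / 2
    z_t = phi_inv(alpha ** 2 / 4.0)
    return phi(sqrt((c + 1.0) / c) * t_o + sqrt(1.0 / c) * z_t)

def conditional_bayesian_power(t_o, c, alpha=0.05):
    # CBP = Phi[(c+1)/sqrt(c) * t_o + sqrt((c+1)/c) * z_{alpha_tilde/2}]
    z_t = phi_inv(alpha ** 2 / 4.0)
    return phi((c + 1.0) / sqrt(c) * t_o + sqrt((c + 1.0) / c) * z_t)
```

Since the argument of $\Phi$ in \eqref{eq:cbpow.d} equals the argument of \eqref{eq:bpow.d} multiplied by $\sqrt{c+1}$, FBP is always closer to 50\% than CBP, mirroring the relation between PP and CP.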
\subsection{Properties}\label{sec:prop}
For fixed relative sample size $c$ and two-sided level $\alpha$, all four
formulas \eqref{eq:spow.d}, \eqref{eq:hpow.d}, \eqref{eq:bpow.d} and
\eqref{eq:cbpow.d} react to an increase in original test statistic $t_o$ with
a monotone increase in power.
However, the original result cannot be changed and it is more
realistic to study the power when varying the relative sample
size $c$ for fixed original test statistic $t_o$ instead.
Consider two original
studies with $p$-values $0.046$ and $0.005$.
These $p$-values
correspond to the original studies by
\citet{Duncan2012}
and \citet{Shah2012} in the \emph{SSRP} dataset and are used in the following
for illustrative purposes.
\begin{figure}[!h]
\centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/ch01_fig4_power_as_a_function_of_c-1}
\end{knitrout}
\caption{CP, PP, FBP and CBP as a function of the relative sample size $c$ for
two original studies with $p_o = 0.046$ (left) and
$p_o = 0.005$ (right) at the two-sided $\alpha = 5$\% level, so
$\tilde\alpha = 0.00125$.
The vertical grey line corresponds to the intersection of
CP and PP curves as calculated in \eqref{eq:intersec1}, and the vertical
black line to the intersection of FBP and CBP as in \eqref{eq:intersec2}.
The horizontal black line
indicates the asymptote $1-p_o/2$ of PP and FBP.}
\label{fig:pow2}
\end{figure}
\subsubsection{Conditional vs.\ predictive power}\label{sec:condvspred}
The power obtained with predictive methods is always closer to 50\% than the
power obtained with conditional methods \citep{Spieg2004, grouin2007, dallow2011}.
In practice, power is typically larger than 50\% and this implies that
CP \eqref{eq:spow.d} is larger than PP \eqref{eq:hpow.d}; and CBP
\eqref{eq:cbpow.d} is larger than FBP \eqref{eq:bpow.d}.
Furthermore, it can be shown that CP and PP are both
equal to 50\% if the relative sample size is
\begin{equation}
c = z_{\alpha/2}^2/t_o^2 \label{eq:intersec1} \, ,
\end{equation}
the squared $\alpha/2$-quantile of the normal distribution divided by the
squared test statistic from the original study
(see Appendix~\ref{sec:condvspred.app} for details).
Equation~\eqref{eq:intersec1} implies that the larger the evidence in the
original study (quantified by $t_o$), the smaller the relative sample size $c$
where CP and PP curves intersect.
This can be observed
in Figure~\ref{fig:pow2}, where the relative sample size at the
intersection of the CP and PP curves is closer to zero
in the replication of a convincing original study ($p_o =
0.005$, $c = 0.48$)
than in the replication of a borderline original study ($p_o =
0.046$, $c = 0.96$).
Likewise, FBP and CBP cross at a power of 50\% at the corresponding
relative sample size
\begin{equation}
c = z_{\tilde\alpha/2}^2/t_o^2 - 1 \label{eq:intersec2} \, .
\end{equation}
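The intersection points shown in Figure~\ref{fig:pow2} can be reproduced numerically. The following Python sketch (standard library only, normal quantile via bisection) recomputes \eqref{eq:intersec1} for the two example $p$-values and checks that CP and PP both equal 50\% there; small deviations from the values quoted above are due to rounding of the $p$-values.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conditional_power(t_o, c, alpha=0.05):
    return phi(sqrt(c) * t_o + phi_inv(alpha / 2.0))

def predictive_power(t_o, c, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c / (c + 1.0)) * t_o + sqrt(1.0 / (c + 1.0)) * z)

def intersection_c(p_o, alpha=0.05):
    # c = z_{alpha/2}^2 / t_o^2, where t_o corresponds to the two-sided p_o
    t_o = phi_inv(1.0 - p_o / 2.0)
    return phi_inv(alpha / 2.0) ** 2 / t_o ** 2
```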
\subsubsection{Predictive power cannot always reach 100\%}
Unlike CP \eqref{eq:spow.d} which always reaches 100\% for
a sufficiently
large replication sample size, PP \eqref{eq:hpow.d}
has an asymptote at
$1-p_o/2$. This means that the more convincing the original study, the closer
to 100\% the PP of an infinitely large replication study is.
In a sense, the original result penalizes the predictive power.
However, this penalty is not very stringent, as replication of an original
study with a two-sided $p$-value of 0.05 would still be able to reach a
PP of 97.5\% for a sufficiently large replication sample size.
This property also applies to the FBP and can be observed in
Figure~\ref{fig:pow2} where the horizontal black line indicates
the asymptote $1-p_o/2$.
\subsubsection{Pooling original and replication studies}
\label{sec:pooling}
For a borderline significant original study (e.g. $p_o = 0.046$
in Figure~\ref{fig:pow2}), FBP \eqref{eq:bpow.d} and CBP \eqref{eq:cbpow.d} are respectively always smaller
than PP \eqref{eq:hpow.d} and CP \eqref{eq:spow.d}.
In contrast, when the original study is more convincing (e.g.
$p_o = 0.005$ in Figure~\ref{fig:pow2}), FBP is larger
than PP (respectively CBP larger than CP) for some values of $c$. However, if
$p_o < \tilde\alpha$, the level required at the end of the replication study
(typically $\tilde\alpha = 0.00125$),
FBP and CBP converge to 100\%
for $c \to 0$, decrease down to
\begin{equation}
\Phi\left[\sqrt{t_o^2- z_{\tilde\alpha/2}^2} \,\right] \label{eq:minimumpower}
\end{equation}
for increasing $c$ and then increase to $1 - p_o/2$ (FBP) or 100\% (CBP).
See Appendix~\ref{sec:minBa.app} for a derivation.
A highly convincing original study will thus always have FBP and CBP very
close to 100\% independently of the sample size. This implies that
a replication may not be required at all,
a clear disadvantage of pooling original and replication studies instead of
considering them independently.
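The non-monotone behaviour of the FBP can be verified by a grid search over $c$. The Python sketch below (standard library only; a hypothetical $t_o = 4$ is used so that $p_o < \tilde\alpha$) compares the grid minimum of the FBP with \eqref{eq:minimumpower}.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fully_bayesian_power(t_o, c, alpha=0.05):
    z_t = phi_inv(alpha ** 2 / 4.0)
    return phi(sqrt((c + 1.0) / c) * t_o + sqrt(1.0 / c) * z_t)

t_o = 4.0                                     # hypothetical, implies p_o < alpha_tilde
z_t = phi_inv(0.000625)                       # z_{alpha_tilde/2} for alpha = 0.05
grid = [0.01 * i for i in range(5, 2001)]     # c from 0.05 to 20
min_fbp = min(fully_bayesian_power(t_o, c) for c in grid)
closed_form = phi(sqrt(t_o ** 2 - z_t ** 2))  # minimum power from (eq:minimumpower)
```

The grid minimum agrees with the closed form, and the power at both ends of the grid is larger, illustrating the decrease-then-increase pattern.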
\section{Sequential replication studies}\label{sec:seq}
In Section~\ref{sec:nonseq}, power calculations are performed before any data
have been collected in the replication study. This framework is extended in this
section and allows power (re)calculation at an interim analysis, after some data
have been collected in the replication study already.
The interim power is defined as the probability of statistical significance at the end
of the replication study given the data collected so far.
The incorporation of prior knowledge into interim power
has been studied in
\citet[Section 6.6]{Spieg2004} and we adapt this approach to the case where
prior information refers to a single original study. Moreover, the power
calculation formulas are expressed in terms of unitless quantities (relative
sample sizes and test statistics) in the following.
It is well known from the field of clinical trials that
the maximum sample size (if the trial has not been stopped at interim) increases with the number of planned interim
analyses \citep[Section 8.2.1]{mat2006}. In order to maintain a given power,
even one interim analysis requires a larger maximum sample size than for a
fixed-sample trial, and the calculation of the replication
sample size should take this into account.
\subsection{Methods}\label{sec:methods2}
In addition to the point prior $\theta = \hat\theta_o$
and the normal prior \eqref{eq:prior}, the new
framework enables the specification of a flat design prior.
Table~\ref{tbl:designanalysis.int} shows the different types of
interim power calculations that are investigated in this section.
\newcolumntype{E}{>{\centering\arraybackslash} m{3cm} }
\newcolumntype{F}{>{\centering\arraybackslash} m{2.6cm} }
\begin{table}[!h]
\centering
\caption{\small Methods of interim power calculations resulting from different
design priors using a flat analysis prior.}
\begin{tabular}{E|D|F}
\multicolumn{3}{c}{\textbf{Design}}\\
Point prior $\theta = \hat\theta_o$ & Normal prior $\theta \sim
\Nor\left(\hat \theta_o, \sigma^2_o\right)$ & Flat prior \\
\cline{1-3}
Conditional & Informed predictive & Predictive
\end{tabular}
\label{tbl:designanalysis.int}
\end{table}
Calculating the interim power
to detect
the effect estimate from the
original study ignores the uncertainty of the original result.
This corresponds to the conditional power in Table~\ref{tbl:designanalysis.int}.
Uncertainty of the original result can be taken into account when
recalculating the power at an interim analysis, turning the conditional power
into a predictive power. This requires the
selection of a prior distribution for the true effect, which is updated by the
data collected so far in the replication study. The prior distributions
discussed here are the normal prior \eqref{eq:prior}
(leading to the informed predictive power) and a flat prior
(leading to the predictive power). The conditional power is then averaged
with respect to the posterior distribution of the true effect size, given the
data already observed in the replication study.
A pooled analysis of original and replication data can also be considered
in this framework
but is omitted here.
Let $\hat\theta_i$ be the effect estimate at interim and
$\sigma^2_i = \sigma^2/n_i$ the corresponding variance, with $n_i$ the sample
size at interim. The sample size that is still to be collected in the
replication study is denoted by $n_j$ and the total replication sample
size is thus $n_r = n_i + n_j$. The interim power formulas can be shown
to only depend on the original and interim test statistics $t_o$ and
$t_i = \hat\theta_i/\sigma_i$, the relative sample size $c = n_r/n_o$
and the variance ratio $f = \sigma^2_r/\sigma^2_i = n_i/n_r$, the fraction of
the replication study already completed.
\subsubsection{Conditional power at interim}
The conditional power at interim is the interim power to detect the effect
$\theta = \hat\theta_o$. It can be expressed as
\begin{align}
\text{CPi} & = \Phi\left[\sqrt{c(1-f)}\, t_o + \sqrt{\frac{f}{1-f}}\, t_i +
\sqrt{\frac{1}{1-f}}\, z_{\alpha/2} \right] \, , \label{eq:CPis}
\end{align}
see Appendix~\ref{sec:SPi.app} for a derivation. In the particular case where
no data has been collected yet in the replication study ($f = 0$), the
CPi \eqref{eq:CPis} reduces to the CP
\eqref{eq:spow.d}.
Interim power can also be calculated to detect $\theta = \hat\theta_i$,
this is however not recommended \citep{bauer2006, Kunzmann2020}.
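A Python sketch of \eqref{eq:CPis} (standard library only; illustrative, not the analysis code of the paper). Setting $f = 0$ recovers the CP \eqref{eq:spow.d} from Section~\ref{sec:nonseq}.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conditional_power(t_o, c, alpha=0.05):
    return phi(sqrt(c) * t_o + phi_inv(alpha / 2.0))

def conditional_power_interim(t_o, t_i, c, f, alpha=0.05):
    # CPi = Phi[sqrt(c(1-f)) t_o + sqrt(f/(1-f)) t_i + sqrt(1/(1-f)) z_{alpha/2}]
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c * (1.0 - f)) * t_o + sqrt(f / (1.0 - f)) * t_i
               + sqrt(1.0 / (1.0 - f)) * z)
```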
\subsubsection{Informed predictive power at interim}
The informed predictive power at interim is the predictive interim power using
the design prior \eqref{eq:prior}. It can be formulated as
\begin{align}
\text{IPPi} & = \Phi \left[\sqrt{\frac{c(1-f)}{(cf+1)(1+c)}}\, t_o +
\sqrt{\frac{f(1+c)}{(1-f)(cf+1)}}\, t_i + \sqrt{\frac{cf+1}{(1+c)(1-f)}}\,
z_{\alpha/2} \right] \, , \label{eq:HPPs}
\end{align}
see Appendix~\ref{sec:IPPi.app} for a derivation.
In the case of $f = 0$ (no data collected in the replication study so far),
the IPPi \eqref{eq:HPPs} reduces to the PP \eqref{eq:hpow.d}.
By considering the original result but also its uncertainty, the predictive
power at interim is a compromise between considering only the original effect
estimate (CPi) and ignoring the original study completely (PPi).
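A Python sketch of \eqref{eq:HPPs} (standard library only; illustrative). Setting $f = 0$ recovers the PP \eqref{eq:hpow.d} from Section~\ref{sec:nonseq}.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def predictive_power(t_o, c, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c / (c + 1.0)) * t_o + sqrt(1.0 / (c + 1.0)) * z)

def informed_predictive_power_interim(t_o, t_i, c, f, alpha=0.05):
    # IPPi, formula (eq:HPPs)
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c * (1.0 - f) / ((c * f + 1.0) * (1.0 + c))) * t_o
               + sqrt(f * (1.0 + c) / ((1.0 - f) * (c * f + 1.0))) * t_i
               + sqrt((c * f + 1.0) / ((1.0 + c) * (1.0 - f))) * z)
```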
\subsubsection{Predictive power at interim}
The predictive power at interim is the predictive interim power using a flat design prior. In other words, the results from the original study are ignored. It is expressed as
\begin{align}
\text{PPi} & = \Phi \left[\sqrt{\frac{1}{1-f}}\, t_i + \sqrt{\frac{f}{1-f}}\, z_{\alpha/2} \right] \, , \label{eq:CPPs}
\end{align}
see Appendix~\ref{sec:PPi.app} for a derivation.
Note that PPi \eqref{eq:CPPs} corresponds to FBP
\eqref{eq:bpow.d} provided that the original study in FBP
formula is considered as the interim study (see Appendix \ref{sec:PPi.app}).
This illustrates the dependence of original and replication studies when a
normal prior is used in the analysis.
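A Python sketch of \eqref{eq:CPPs} (standard library only; illustrative). Letting $f \to 0$ for fixed $t_i$ shows numerically that the PPi approaches $\Phi[t_i] = 1 - p_i/2$.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def predictive_power_interim(t_i, f, alpha=0.05):
    # PPi = Phi[sqrt(1/(1-f)) t_i + sqrt(f/(1-f)) z_{alpha/2}]
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(1.0 / (1.0 - f)) * t_i + sqrt(f / (1.0 - f)) * z)
```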
\subsection{Properties}\label{sec:application2}
Theoretical and specific properties of the conditional, informed predictive
and predictive
power at interim are discussed.
\subsubsection{Conditional vs.\ predictive power}\label{sec:condvspredint}
The power at interim, as compared to study start,
involves two additional parameters, namely the test statistic $t_i$ from
the interim analysis and the fraction $f$ of the replication study already
conducted.
It is therefore not straightforward to compare the different methods
in terms of which one results in a larger power. Comparison is
facilitated if certain assumptions are made. Consider any combination of a
significant original result, a non-significant interim result and a replication
sample size at least twice as large as the original sample size. This translates
to $t_o > z_{1 - \alpha/2}$,
$t_i < z_{1 - \alpha/2}$ and $c \geq 2$
in formulas
\eqref{eq:CPis}, \eqref{eq:HPPs} and \eqref{eq:CPPs}.
Under these assumptions
and with $f > 0.25$,
the CPi is always larger than
the IPPi, which is always larger than the PPi,
see Appendix \ref{sec:largerthan.app}.
However, one has to be careful as these conditions are sufficient,
but not necessary for obtaining this order.
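These sufficient conditions can be illustrated numerically. The Python sketch below (standard library only) evaluates the three interim powers for hypothetical values $t_o = 2.8$ (significant original study), $t_i = 1$ (non-significant interim result), $c = 2$ and $f = 0.3$.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cpi(t_o, t_i, c, f, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c * (1.0 - f)) * t_o + sqrt(f / (1.0 - f)) * t_i
               + sqrt(1.0 / (1.0 - f)) * z)

def ippi(t_o, t_i, c, f, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(c * (1.0 - f) / ((c * f + 1.0) * (1.0 + c))) * t_o
               + sqrt(f * (1.0 + c) / ((1.0 - f) * (c * f + 1.0))) * t_i
               + sqrt((c * f + 1.0) / ((1.0 + c) * (1.0 - f))) * z)

def ppi(t_i, f, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(1.0 / (1.0 - f)) * t_i + sqrt(f / (1.0 - f)) * z)

# significant original study, non-significant interim, c >= 2, f > 0.25
t_o, t_i, c, f = 2.8, 1.0, 2.0, 0.3
```

For this configuration the ordering CPi > IPPi > PPi indeed holds.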
\subsubsection{Weights given to original and interim results}\label{sec:weights}
Equations \eqref{eq:CPis}, \eqref{eq:HPPs} and \eqref{eq:CPPs} can be expressed as $\Phi[x]$ where $x$ is a weighted average of $t_o$,
$t_i$ and $z_{\alpha/2}$ with weights $w_o$, $w_i$ and $w_\alpha$, say.
The weights $w_o$ and $w_i$ depend
on the relative sample size $c$ and the fraction $f$ of the
replication study already completed.
In the CPi formula \eqref{eq:CPis}, an increase in $c$ leads to a monotone increase in $w_o$ and
does not affect $w_i$.
In other words, the weight given to the original result in the CPi becomes
larger if the relative sample size $c$ increases.
Furthermore, the larger the fraction $f$ of the replication study already
completed, the less weight is given to the original
result and conversely, the more weight to the interim result.
In the IPPi formula \eqref{eq:HPPs}, an increase in $f$ leads to a decrease in
$w_o$ and an increase in $w_i$. Only if the interim analysis takes
place early will the original result have a greater weight than
the interim result in the
calculation of the IPPi,
see Appendix \ref{sec:weights.app}.
In the PPi formula \eqref{eq:CPPs}, no weight is given to the original result
and the weight $w_i$ given to interim results increases when $f$ increases.
\subsubsection{A power of 100\% cannot always be reached with the predictive methods}
\begin{figure}[!h]
\centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/ch01_figplot_interim_power-new-1}
\end{knitrout}
\caption{CPi, IPPi and PPi as a function of the sample size $n_j$ still to be collected
in the replication study (or equivalently, as a function of the fraction
of the replication study still to be completed ($1-f$) and the relative sample
size $c$) for an original study with $p_o = 0.005$
and with two hypothetical interim $p$-values $p_i = 0.04$ (left)
and $p_i = 0.5$ (right). The two-sided level $\alpha$ is
$0.05$. Horizontal dashed lines represent the
asymptotes of IPPi and PPi and the horizontal dotted line represents the
minimum PPi.}
\label{fig:6plots}
\end{figure}
Considering that an interim analysis has been conducted, $n_i$ and $t_i$ are
fixed, and the only parameter that can vary is the sample size $n_j$
still to be collected in the replication study. Increasing this sample size
results in an increase of the relative sample size $c$ and a decrease of the
fraction $f$ of the replication study already completed.
If $n_j$ is large enough, the CPi \eqref{eq:CPis} reaches 100\%. In contrast,
the asymptotes of IPPi \eqref{eq:HPPs} and PPi \eqref{eq:CPPs} are
penalized by the original and/or interim results. The larger the evidence
in the original study and at interim (represented by $t_o$ and $t_i$,
respectively), the larger the asymptote of the IPPi
(formula given in Appendix~\ref{sec:asymptotes.app}). The asymptote of
the PPi, on the other hand, is
$1-p_i/2$. This last property is explained in
\citet[Section 4]{dallow2011} and the asymptotes can be visualized in
Figure~\ref{fig:6plots} for an original study with
$p_o = 0.005$ and two hypothetical interim results:
$p_i = 0.04$ and $p_i = 0.5$. On the left panel,
the asymptotes of CPi, IPPi and PPi are all close to 100\% as original and
interim $p$-values
are fairly small. A large increase in interim $p$-value hardly has
an effect on the asymptote of the IPPi (from $99.98$\%
to $99.5$\%, right panel) but results in a
dramatic decrease of the asymptote of the PPi and remarkably,
the maximum PPi achievable for a study with an interim $p$-value of 0.5 is only
$75$\%.
\subsubsection{Non-monotonicity property of power}\label{sec:non-mono}
If the two-sided interim $p$-value is not significant ($p_i > \alpha$),
the interim power with all three methods behaves in an expected way:
it increases with increasing
sample size $n_j$. However, this property breaks down when $p_i < \alpha$.
In this situation, the power assuming that no additional subjects are added
($f = 1$) is 100\%, declines with increasing $n_j$ (decreasing $f$) and then
increases. For example, the minimum predictive power at interim can be shown
to be $\Phi\left[\sqrt{t_i^2 - z_{\alpha/2}^2}\,\right]$ which means that the
PPi of any replication study with a significant interim result will never be
smaller than 50\% (details in Appendix~\ref{sec:minimumPPi.app}).
This property can be observed in Figure~\ref{fig:6plots} (left panel)
where the PPi cannot be smaller than $73\%$.
\citet{dallow2011} explain this characteristic as follows: ``Intuitively,
if the interim results are very good, any additional subject can be seen as a
potential threat, able to damage the current results rather than a resource
providing more power to our analysis.''
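The non-monotonicity and the lower bound of the PPi can be checked by a grid search over the fraction $f$; the Python sketch below (standard library only, hypothetical $t_i = 2.5$) compares the grid minimum with the closed-form bound.

```python
from math import erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def predictive_power_interim(t_i, f, alpha=0.05):
    z = phi_inv(alpha / 2.0)
    return phi(sqrt(1.0 / (1.0 - f)) * t_i + sqrt(f / (1.0 - f)) * z)

t_i = 2.5                                     # significant interim result
z = phi_inv(0.025)
grid = [0.001 * i for i in range(1, 1000)]    # f from 0.001 to 0.999
min_ppi = min(predictive_power_interim(t_i, f) for f in grid)
lower_bound = phi(sqrt(t_i ** 2 - z ** 2))    # minimum PPi, cf. the Appendix
```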
\section{Application}\label{sec:application}
Twenty-one significant original findings were replicated in the
\emph{SSRP} and a two-stage procedure was adopted.
In stage 1, the replication studies had 90\% power to detect 75\% of
the original effect estimate. Data collection was stopped if a two-sided
$p$-value $<0.05$ and an effect in the same direction as the original effect
were found. If not, data collection was continued in stage 2 to have 90\% power
to detect 50\% of the original effect estimate for the first and second data
collections pooled. The shrinkage factor $s$ was chosen to be 0.5 as a previous
replication project in the psychological field \citep{open2015} found replication effect estimates
on average half the size of the original effect estimates.
Stages 1 and 2
can be considered as two steps of a sequential analysis, with an interim
analysis in between. The analysis after stage 1 will be called
the \emph{interim} analysis while the \emph{final} analysis will refer to the
analysis based on the pooled data from stages 1 and 2.
The complete \emph{SSRP} dataset with extended information is available at
\url{https://osf.io/pfdyw/}. The effects are given as correlation coefficients,
making them easily interpretable and comparable. Moreover, the application of
Fisher's $z$ transformation $z(r) = \tanh^{-1}(r)$ to the correlation
coefficients justifies an asymptotic normal distribution and the standard error
of the transformed coefficients becomes a function of the effective sample
size $n-3$ only, $\se(z) = 1/\sqrt{n-3}$. In this dataset, original effects
are always positive. A ready-to-use dataset \texttt{SSRP} can be found in the
package \texttt{ReplicationSuccess}, available at
\url{https://r-forge.r-project.org/projects/replication/}.
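The transformation and the resulting sample size calculation are straightforward to compute. The following Python sketch (standard library only) illustrates the stage-1 calculation for a hypothetical original study with correlation $r_o = 0.3$ and $n_o = 103$; these numbers are illustrative and not taken from the \emph{SSRP} dataset.

```python
from math import atanh, erf, sqrt

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_o, n_o = 0.3, 103            # hypothetical original study
z_r = atanh(r_o)               # Fisher z-transformed correlation
se = 1.0 / sqrt(n_o - 3.0)     # standard error uses the effective sample size
t_o = z_r / se                 # original test statistic

# stage 1 of the SSRP: 90% power to detect 75% of the original effect,
# i.e. shrinkage factor s = 0.25, d = 1 - s = 0.75
d, power, alpha = 0.75, 0.9, 0.05
c = ((phi_inv(power) - phi_inv(alpha / 2.0)) / (d * t_o)) ** 2
n_r = 3.0 + c * (n_o - 3.0)    # back-transform the relative effective sample size
```

For this example, roughly 198 participants would be required in stage 1.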
\subsection{Descriptive results}
The results are displayed in Figure~\ref{fig:SSRP.summary}.
Twelve studies were significant at interim with an effect in the correct
direction but by mistake only eleven were stopped. Out of the ten studies
that were continued, only two showed a significant result in the correct
direction at the final analysis. The study that was wrongly continued
turned out to be non-significant at the final analysis.
The effect of publication bias is clearly seen:
original effect estimates are larger than the corresponding replication
effect estimates for 19 out of the 21 studies and are on average twice as large.
\begin{figure}[!h]
\centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/ch01_figdescriptive_figure-1}
\end{knitrout}
\caption{Original effect estimate vs.\ replication effect estimate
(on the correlation scale). Replications which were not pursued in stage 2
are included with the results from stage 1. Shape and color of the point
indicate whether the study was stopped due to a significant result in the
correct direction at interim (green triangle), was significant
in the correct direction at the final
analysis (red circle) or was not significant at the final analysis
(black square). The diagonal line indicates replication
effect estimates equal to original effect estimates.
}
\label{fig:SSRP.summary}
\end{figure}
\subsection{Power calculations}
The methods described in Sections~\ref{sec:nonseq} and~\ref{sec:seq} are used
to calculate the power of the 21 replication studies before
the onset of the study and at the interim analysis.
Because our calculations are based on Fisher's $z$-transformed correlation
coefficients, the effective sample sizes are used. The relative sample size
is then $c = (n_r -3)/(n_o - 3)$ and the fraction $f$ of the
replication study already completed $f = (n_i -3)/(n_r-3)$. A two-sided $\alpha = 5$\%
level is used as in the original paper, so $\tilde\alpha = 0.00125$ in
the calculation of FBP and CBP.
\subsubsection{At the replication study start}
We computed the CP, PP, FBP and CBP of the 21 replication studies.
The replication sample size that we considered in the calculations is
the one used by the authors of the \emph{SSRP} in stage 1, ignoring stage 2.
To be consistent with the procedures of the \emph{SSRP},
a shrinkage factor $s$ of 0.25 was used in the calculations. Results
can be found in Figure~\ref{fig:boxpower2}, where some
properties discussed in Section~\ref{sec:prop} are
illustrated. CP is larger than PP for all studies, and similarly
CBP is larger than FBP as expected (see Section~\ref{sec:condvspred}).
Furthermore, it can be observed that FBP is larger than PP for some
studies, while the opposite holds for others.
This depends on the $p$-value $p_o$ from the original study and the
relative sample size $c$ as explained in Section~\ref{sec:pooling}.
The same applies to CP and CBP but cannot be directly observed in
Figure~\ref{fig:boxpower2}.
\begin{figure}[!h]
\centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/ch01_figboxplot-1}
\end{knitrout}
\caption{CP, PP, FBP and CBP of the 21 studies of the \emph{SSRP} at level
$\alpha = 5\%$ (so $\tilde \alpha = 0.00125$ for FBP and CBP)
using a shrinkage
factor $s$ of 0.25 in the calculations.
Each circle represents a study and the lines link the same studies.}
\label{fig:boxpower2}
\end{figure}
\subsubsection{At the interim analysis}
Replication studies which did not reach significance after the first data
collection were continued.
We have selected these studies and calculated their interim power
with the different methods (see Table~\ref{tbl:summary_table_interim}).
These studies have substantially larger sample sizes in the replication than
in the original study (large $c$). Moreover,
the interim analysis took place in the second quarter of the
replication study
($0.3 \leq f \leq 0.47$)
and by selection, they all have a non-significant interim
$p$-value (except the study from \citet{Ackerman2010} which was
continued by mistake). Excluding this study, they all fulfill
the sufficient conditions mentioned in Section~\ref{sec:condvspredint}
and follow the order CPi $>$ IPPi $>$ PPi. This ordering also holds
for the particular study with a significant interim result,
as the corresponding relative sample size $c$ is large
($c = 11.62$).
The CPi is remarkably large for all studies, even for the five studies
where the interim effect estimate goes in the opposite direction
to the original estimate, because the large relative sample size $c$
gives substantial weight to the \emph{significant} original result
(see Section~\ref{sec:weights}).
In contrast,
more weight is given to the
interim as compared to the original result in the IPPi formula,
making the corresponding IPPi values more sensible.
If a futility boundary between 10\% and 30\% had been used
(as in \citet{demets2006}) four out of the eight studies which
failed to replicate at the final analysis
would have been stopped at interim based on the IPPi values.
Surprisingly, the replication study of \citet{Ramirez2011} presents a
relatively large IPPi ($61.4$\%)
although the interim result goes in the opposite direction
to the original result. This is due to the very small original $p$-value.
The PPi of the same study
is considerably smaller ($4.2$\%)
since the original result does not influence the power with this method.
Furthermore, six out of eight studies which failed to replicate at
the final analysis would have been stopped at interim if futility stopping
had been decided based on a PPi of less than 30\%.
Significant interim results
lead to large PPi values (see Section~\ref{sec:non-mono}),
as can be observed for the study that was incorrectly
continued.
\begin{table}[ht]
\centering
\caption{\small{CPi, IPPi and PPi of the ten studies that were continued including the original, interim and replication two-sided $p$-values and effect estimates, the relative sample size $c$ and the fraction $f$ of the replication study already completed.}}
\label{tbl:summary_table_interim}
\scalebox{0.90}{
\begin{tabular}{lccccccccccc}
\hline
\multicolumn{1}{c}{} & \multicolumn{2}{c}{Original} & \multicolumn{3}{c}{Interim} & \multicolumn{3}{c}{Interim power} & \multicolumn{3}{c}{Replication} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9} \cmidrule(lr){10-12}
Study & $p_o$ & $r_o$ & $f$ & $p_i$ & $r_i$ & CPi & IPPi & PPi & $c$ & $p_r$ & $r_r$ \\
\hline
Duncan & 0.005 & 0.67 & 0.37 & 0.29 & 0.18 & 100.0 & 74.6 & 43.4 & 7.42 & 0.00001 & 0.44 \\
Pyc & 0.023 & 0.38 & 0.43 & 0.09 & 0.15 & 100.0 & 85.3 & 71.0 & 9.18 & 0.009 & 0.15 \\
Ackerman & 0.048 & 0.27 & 0.43 & 0.02 & 0.14 & 100.0 & 95.0 & 90.3 & 11.69 & 0.125 & 0.06 \\
Rand & 0.009 & 0.14 & 0.47 & 0.37 & 0.03 & 99.8 & 51.9 & 27.0 & 6.27 & 0.234 & 0.03 \\
Ramirez & 0.000008 & 0.79 & 0.30 & 0.72 & -0.08 & 100.0 & 61.4 & 4.2 & 4.47 & 0.390 & -0.10 \\
Gervais & 0.029 & 0.29 & 0.42 & 0.41 & -0.05 & 97.5 & 1.9 & 0.3 & 9.78 & 0.415 & -0.04 \\
Lee & 0.013 & 0.39 & 0.42 & 0.45 & -0.07 & 97.7 & 3.1 & 0.4 & 7.65 & 0.435 & -0.05 \\
Sparrow & 0.002 & 0.37 & 0.44 & 0.27 & 0.11 & 99.7 & 74.1 & 40.1 & 3.50 & 0.451 & 0.05 \\
Kidd & 0.012 & 0.27 & 0.40 & 0.27 & -0.07 & 98.9 & 1.6 & 0.1 & 8.57 & 0.467 & -0.03 \\
Shah & 0.046 & 0.27 & 0.45 & 0.15 & -0.09 & 87.0 & 0.1 & 0.0 & 11.62 & 0.710 & -0.02 \\
\hline
\end{tabular}
}
\end{table}
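The interim quantities can be sketched with the textbook B-value decomposition of group-sequential designs \citep[Chapter 10]{jennison1999}: with information fraction $f$, interim statistic $z_i$ and an assumed drift $\delta$ (the expected final $z$-statistic), the conditional power is $\Phi\{(z_i\sqrt{f} + \delta(1-f) - z_{1-\alpha})/\sqrt{1-f}\}$. Taking $\delta$ from the original study mimics CPi; the exact definitions used in the paper (priors, shrinkage) differ, so this Python sketch is only an illustration:

```python
import math
from statistics import NormalDist

N = NormalDist()

def interim_conditional_power(z_i, f, delta, alpha=0.025):
    """Textbook conditional power at information fraction f, given the
    interim z-statistic z_i and an assumed drift delta (the expected
    final z-statistic), via the B-value B(f) = z_i * sqrt(f)."""
    z_a = N.inv_cdf(1 - alpha)
    return N.cdf((z_i * math.sqrt(f) + delta * (1 - f) - z_a)
                 / math.sqrt(1 - f))
```

Even a null interim result ($z_i = 0$) gives high conditional power when the assumed drift from a significant original study is large, mirroring the behaviour of CPi in Table~\ref{tbl:summary_table_interim}.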
\section{Discussion}\label{sec:discussion}
Conditional power calculations appear to be the norm in most replication
projects.
In this paper, we have drawn attention to notable shortcomings
of this approach and outlined the rationale and properties of
predictive power. We encourage researchers to abandon conditional methods
in favor of predictive methods, which make better use of the original
study and its uncertainty.
Furthermore, as many replications are being conducted and only a fraction
confirms the original result, we argue for the necessity of sequentially
analyzing the results. With this in mind, we encourage the initiative from
\citet{camerer2018} to terminate some replication studies prematurely based
on an interim analysis. However, their approach only enables efficacy stopping.
We propose to use interim power to judge if a replication study should be
stopped for futility. Interim analyses can help to save time and resources
but also raise new questions with regard to the choice of prior distributions.
We have shown using studies from the \emph{SSRP} that different design priors
lead to very different power values and by extension to different decisions.
Conditioning the power calculations at interim on the original results is even
more unreasonable than at the study start and leads to very large power values
given a significant original result, even if interim results suggest evidence
in the opposite direction.
We recommend the use of IPPi and PPi to make futility decisions.
A 30\% futility boundary is sometimes employed
in clinical trials and has proved to be
reasonable in the \emph{SSRP}.
Efficacy stopping based on interim power is known to inflate the type-I
error rate \citep[Chapter 10]{jennison1999}. We only consider futility
stopping as this issue does not apply here \citep{Lachin2005}.
Some limitations should be noted.
First, the paper discusses power calculations before the onset of the study
and at an interim analysis separately. However, the planned interim analysis
has an impact on power at study start and sample size adjustments are
necessary \citep[Section 2.1.2]{Wassmer2016}. This is nevertheless rarely
done in current replication projects such as \emph{SSRP}.
Second, while the ICH E9 `Statistical Principles for Clinical Trials'
\citep{ICH1999} recommends blinded interim results, our data at interim are
assumed to be unblinded. This is not a problem for the one-sample case
but becomes an issue when we want to compare two groups.
Such a situation would require an Independent Data Monitoring Committee to
prevent the replication study from being biased \citep{kieser2003}.
Third, normally distributed observations are assumed throughout.
Further research will focus on extending
this framework to multiple interim analyses in a replication study and to
sequentially conducted replication studies.
It will also be of interest to apply
the concept of interim power discussed in Section~\ref{sec:seq}
to the reverse-Bayes
assessment of replication success \citep{held2020}.
\section*{Software}
Software for these power calculations can be found in the R-package \\
\texttt{ReplicationSuccess}, available at
\url{https://r-forge.r-project.org/projects/replication/}.
An example of the usage of this package is given in Appendix~\ref{sec:package}.
\section*{Acknowledgments}
We thank Samuel Pawel, Malgorzata Roos and Lawrence L. Kupper for helpful comments and suggestions
on this manuscript. We also would like to thank the referees whose comments
helped to improve and clarify the manuscript.
\section{Introduction}
Adverse drug events (ADEs) are ``injuries resulting from medical intervention related to a drug'' \citep{NebekerBS04}, and are distinct from medication errors (inappropriate prescription, dispensing, usage etc.) as they are caused by drugs at normal dosages. According to the National Center for Health Statistics \citep{CDC14}, 48.9\% of Americans took at least one prescription drug in the last 30 days, 23.1\% took at least three, and 11.9\% took at least five. These numbers rise sharply to 90.6\%, 66.8\% and 40.7\% respectively, among older adults (65 years or older). This means that the potential for ADEs is very high in a variety of health care settings including inpatient, outpatient and long-term care settings. For example, in inpatient settings, ADEs can account for as many as one-third of hospital-related complications, affect up to 2 million hospital stays annually, and prolong hospital stays by 2--5 days \citep{HHS10}.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{images/example_images.png}
\caption{Some example molecular images of different drugs extracted from the PubChem database.}
\label{fig:examples}
\end{figure*}
The economic impact of these issues is as widespread as the various healthcare settings and can be staggering. Estimates suggest that ADEs contributed to \$3.6 billion in excess healthcare costs in the US alone \citep{IMC06}. Unsurprisingly, older adults are at the highest risk of being affected by an ADE, and are seven times more likely than younger persons to require hospital admission \citep{BudnitzEtAl06}. In the US, as a large number of older adults are Medicare beneficiaries, this economic impact is borne by an already overburdened Medicare system and ultimately passed on to taxpayers and society at large. Beyond older adults, there are several other patient populations that are also vulnerable to ADEs including children, those with lower socio-economic means, those with limited access to healthcare services, and certain minorities.
Recent research has identified, somewhat surprisingly, that many of these ADEs can be attributed to {\bf very common medications} \citep{BudnitzEtAl11} and {\bf many of them are preventable} \citep{GurwitzEtAl03} or {\bf ameliorable} \citep{ForsterEtAl05}. This issue motivates our long-term goal of developing accessible and robust means of identifying ADEs in a disease/drug-agnostic manner and across a variety of healthcare settings. Here, we focus on the problem of {\bf drug-drug interactions} (DDIs), which are a type of ADE. An ADE is characterized as a DDI when multiple medications are co-administered and cause an adverse effect on the patient. DDIs, often caused by inadequate understanding of various drug-drug contraindications, are a major cause of hospital admissions, rehospitalizations, emergency room visits, and even death \citep{becker2007hospitalisations}.
Predicting and discovering drug-drug interactions (DDIs) is an important problem and has been studied extensively from both medical and machine learning points of view. Identifying DDIs is an important task during drug design and testing, and regulatory agencies such as the U.S. Food and Drug Administration require large controlled clinical trials before approval. Beyond their expense and time-consuming nature, it is impossible to {\bf discover} all possible interactions during such clinical trials. This necessitates computational methods for DDI prediction. A substantial amount of work in DDI focuses on text-mining \citep{liu2013azdrugminer,chee2011predicting} to extract DDIs from large text corpora; however, this type of information extraction does not discover new interactions, and only serves to extract {\em in vivo} or {\em in vitro} discoveries from publications.
Our goal is to {\bf discover} DDIs in large drug databases by exploiting various properties of the drugs and identifying patterns in drug interaction behaviors. Almost all machine learning approaches have focused on text data or textual representations of the structural data of drugs \citep{gurulingappa2012extraction,asada2018enhancing,purkayastha2019drug}. Recent approaches consider phenotypic, therapeutic, structural, genomic and reactive properties of drugs \citep{cheng2014machine} or their combinations \citep{dhami2018drug} to characterize drug interactivity. We take a fresh and completely new perspective on DDI prediction through the lens of molecular images (a few examples are shown in Figure~\ref{fig:examples}) via deep learning. Our work is novel in the following significant ways:
\begin{itemize}
\item we formulate DDI discovery as a {\bf link prediction problem};
\item we aim to perform DDI discovery directly on {\bf molecular structure images} of the drugs, rather than on lossy, string-based representations such as SMILES strings and molecular fingerprints; and
\item we utilize a deep learning technique, specifically {\bf Siamese networks} \citep{chopra2005learning} in a contrastive manner to build a {\bf DDI discovery engine} that can be integrated into a drug database seamlessly.
\end{itemize}
\section{Related Work}
\subsection{Drug-Drug Interactions}
The social and economic impacts of drug-drug interactions have been well studied and understood. The effect of DDIs on medication management and social care is studied in \citep{arnold2018impact}, with the economic impact shown in \citep{shad2001economic}. The impact of DDIs on elderly patients in 6 European countries was documented in \citep{bjorkman2002drug}; in a similar vein, the study by Becker et al. \citep{becker2007hospitalisations} identifies that the elderly have a risk factor $\approx$ 9 times higher than the general population, with the clinical significance of DDIs studied in \citep{roberts1996quantifying}. Identification of DDIs can be done either by clinical trials or by {\it in vitro} and {\it in vivo} experiments, but these approaches are highly labor-intensive, costly and time-consuming. Thus, a system that can mitigate these factors is highly desirable.
Drug-drug interactions have been studied extensively from both medical and machine learning points of view. From a medical standpoint, \citep{lau2003atorvastatin}, \citep{hirano2006drug} and \citep{wang2000human} showed the effect of important individual drugs and enzymes such as substrates on various drug-drug interactions. The problem of DDI discovery/prediction is a pairwise classification task, and thus kernel-based methods \citep{Shawe-TaylorCristianini04} are a natural fit, since kernels are naturally suited to representing pairwise similarities. Most similarity-based methods for DDI discovery/prediction have used biomedical research literature as the underlying data source and construct NLP-based kernels from these medical documents \citep{segura2011using,chowdhury2013fbk}. Some work has also been done on learning kernels from different types of data, such as molecular and structural properties of the drugs, and then using these multiple kernels to predict DDIs \citep{cheng2014machine,dhami2018drug}.
\subsection{Siamese Neural Networks}
Siamese networks have been applied to one-shot image recognition~\citep{koch2015siamese}, signature verification~\citep{bromley1994signature},
object tracking~\citep{bertinetto2016fully} and human re-identification~\citep{varior2016gated,chung2017two}. In the health care domain, they have been used for medical question retrieval \citep{wang2019medical} and Alzheimer's disease diagnosis \citep{aderghal2017classification}, as well as for drug-drug interaction tasks in the form of a Siamese graph convolutional network \citep{chen2019drug,jeon2019resimnet}.
\section{Siamese Convolutional Network for Drug-Drug Interactions}
A discriminative approach for learning a similarity metric using a Siamese architecture was introduced in \citep{chopra2005learning}; it maps the input (a pair of images in our case) into a target space such that the distance between the mappings is minimized in the target space for similar pairs of examples and maximized for dissimilar pairs.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth,height=9cm]{images/siamese_arch.png}
\caption{An overview of our model for predicting drug-drug interactions}
\label{fig:overview}
\end{figure*}
We adapt the Siamese architecture for the task of link prediction, where the link indicates whether two drugs interact. Since the Siamese architecture produces a measure of similarity between the given pair of inputs, this measure can be thresholded to obtain a classification. We use contrastive loss \citep{hadsell2006dimensionality}, based on a distance metric (Euclidean distance in our case), to learn a parameterized function $F$ mapping the input space to the target space; minimizing this loss pushes semantically similar examples together.
An important property of the loss function is that it is calculated on a pair of examples. The loss function is formulated as follows: let $X_1$ and $X_2$ be a pair of drug images and $Y$ the label assigned to the pair, with $Y=0$ if the drugs do not interact and $Y=1$ if they interact. Also, let $D$ be the Euclidean distance between the vectors obtained after the image pair is processed by the underlying Siamese network, and let $P$ be the parameters of the function $F$. The contrastive loss function can then be given as
\begin{equation}
\mathcal{L}(P,X_1,X_2,Y)=\frac{(1-Y)}{2}{D_P}^2+\frac{Y}{2}\{\max(0, m-{D_P})\}^2
\label{eq}
\end{equation}
where $D_P={\|F_P(X_1) - F_P(X_2)\|_2}$ is the Euclidean distance between the outputs obtained after the input pair is processed by the sub-networks. Also, $m \geq 0$ is a margin: dissimilar pairs separated by more than this margin do not contribute to the loss.
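A minimal, dependency-free Python sketch of the loss in equation \ref{eq}, with the label convention above ($Y=1$ pairs are pushed at least a margin apart, $Y=0$ pairs are pulled together) and $D_P$ taken as the (unsquared) Euclidean distance as in Hadsell et al.:

```python
import math

def contrastive_loss(z1, z2, y, margin=1.0):
    """Mean contrastive loss over a batch of embedding pairs.
    z1, z2: lists of embedding vectors; y: list of labels
    (0 = pull the pair together, 1 = push the pair apart)."""
    total = 0.0
    for a, b, label in zip(z1, z2, y):
        d = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
        total += (1 - label) * 0.5 * d ** 2 \
               + label * 0.5 * max(0.0, margin - d) ** 2
    return total / len(y)
```

Note that a label-1 pair already separated by more than the margin contributes zero loss, as stated above.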
Figure \ref{fig:overview} shows our complete architecture. It consists of two identical sub-networks, i.e. networks having the same configuration with the same parameters and weights. Each sub-network takes a gray-scale image of size 500 $\times$ 500 $\times$ 1 as input (we initially have color images that we convert to gray-scale before feeding to the sub-networks) and consists of 4 convolutional layers with 64, 128, 128 and 256 filters respectively. The kernel size for each convolutional layer is (9 $\times$ 9) and the activation function is \textit{relu}, the non-linear activation given as $f(x)=\max(0,x)$. Each convolutional layer is followed by a max-pooling layer with pool size (3 $\times$ 3) and a batch normalization layer. After the convolutional layers, the sub-network has 3 fully connected layers with 256, 128 and 20 neurons respectively. Thus, after an image pair is processed by the Siamese sub-networks, two vectors of dimension 20 $\times$ 1 are obtained. Contrastive loss is then applied to the obtained pair of vectors, and the resulting distance between the input pair can be thresholded to obtain a prediction.
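As a sanity check on the sub-network description, the spatial size of the feature maps can be traced through the four conv + pool blocks. The paper does not state padding or stride, so this sketch assumes 'valid' convolutions with stride 1 and non-overlapping pooling:

```python
def conv_out(n, k):
    """Spatial size after a 'valid' convolution with stride 1."""
    return n - k + 1

def pool_out(n, p):
    """Spatial size after non-overlapping max pooling."""
    return n // p

def subnetwork_shape(n=500, kernel=9, pool=3, blocks=4):
    """Trace the spatial size of the 500x500 input through the four
    conv (9x9) + max-pool (3x3) blocks described in the text."""
    sizes = [n]
    for _ in range(blocks):
        n = pool_out(conv_out(n, kernel), pool)
        sizes.append(n)
    return sizes
```

Under these assumptions a $500 \times 500$ input shrinks to $2 \times 2$ with 256 channels (1024 values) before the fully connected layers.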
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{images/isomer_interaction.png}
\end{center}
\caption{An example of how two isomers interact differently with a single drug.}
\label{fig:isomer}
\end{figure}
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=\textwidth]{images/stn.png}
\end{center}
\caption{Using spatial transformer network as a pre-processing step to mitigate rotational variance. Note that this process is done for both the input images.}
\label{fig:stn}
\end{figure*}
We use the precision-recall curve (PR-curve) to identify the best threshold of 0.65. Note that the convolutions in the convolutional sub-network provide translational invariance, but rotational invariance is also important in our problem domain. This is because isomers (chiral forms) of drugs are expected to react differently when interacting with a certain drug \citep{nguyen2006chiral,chhabra2013review}. For example, Fenfluramine and Dexfenfluramine are isomers of each other, yet Fenfluramine interacts with Acebutolol while Dexfenfluramine does not (Figure \ref{fig:isomer}). Another example is that the L-isomer of methorphan, Levomethorphan, is an opioid analgesic, while the D-isomer, Dextromethorphan, is a dissociative cough suppressant\footnote{https://en.wikipedia.org/wiki/Enantiopure\_drug}. To overcome this problem and introduce rotational invariance into our framework, we make use of spatial transformer networks \citep{jaderberg2015spatial}, which we discuss next.
\subsection{Spatial Transformer Networks}
A Spatial Transformer Network (STN) is a visual attention mechanism that can handle scaling and rotation of the images input to the underlying convolutional network, leading to better performance by reducing the effect of rotational variance, which is a hard problem for convolutional neural networks \citep{cohen2014transformation}. It consists of three basic building blocks (a localisation network, a grid generator and a sampler) and can be used as a pre-processing step before feeding the input image pair into our underlying Siamese architecture, as shown in Figure \ref{fig:stn}. The whole network is differentiable, which means it can be plugged directly into an existing model. The localisation network regresses the transformation parameters $\theta$, which control the rotation, translation, and zooming in and out of the input images.
The localisation network takes the input image, say $X_1$ in our case, and generates $\theta = {f}_{loc}(X_1)$, which can then be used to calculate the target image $\hat{X_1}$. There is no specific requirement for the localisation network except that it should be able to produce a regression value for $\theta$. Our localization network is a convolutional neural network consisting of 2 pooling layers, 2 convolutional layers and 2 dense layers. The transformation parameters $\theta$ define the mapping between the source image coordinates $\left({x}^{X_1}_{i},{y}^{X_1}_{i}\right)$ and the target image coordinates $\left({x}^{\hat{X_1}}_{i},{y}^{\hat{X_1}}_{i},1\right)$, as shown in equation \ref{stn_theta}. Note that the transformation function is not specified explicitly but rather is learned automatically by the network.
\begin{equation}
\begin{pmatrix}
{x}^{X_1}_{i}\\
{y}^{X_1}_{i}
\end{pmatrix}=
\begin{bmatrix}
{\theta}_{11} & {\theta}_{12} & {\theta}_{13}\\
{\theta}_{21} & {\theta}_{22} & {\theta}_{23}
\end{bmatrix}
\begin{pmatrix}
{x}^{\hat{X_1}}_{i}\\
{y}^{\hat{X_1}}_{i}\\
1
\end{pmatrix}
\label{stn_theta}
\end{equation}
Hence, the localization and transformation shown in Figure \ref{fig:stn} are done in a single step. For the sampling kernel, we used standard bilinear interpolation as described in \citep{jaderberg2015spatial}, since gradients can be defined with respect to the source image coordinates for bilinear interpolation.
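A sketch of the grid generator and sampler: each target-grid point is mapped through the affine parameters $\theta$ of equation \ref{stn_theta}, and the image is read at the (generally fractional) source coordinate with bilinear interpolation. Treating out-of-grid samples as background is an assumption here, not something the paper specifies:

```python
import math

def source_coords(theta, x_t, y_t):
    """Map a target-grid point (x_t, y_t) to its source-image
    coordinate with the 2x3 affine parameters theta."""
    x_s = theta[0][0] * x_t + theta[0][1] * y_t + theta[0][2]
    y_s = theta[1][0] * x_t + theta[1][1] * y_t + theta[1][2]
    return x_s, y_s

def bilinear_sample(img, x, y):
    """Bilinear interpolation of a grayscale image (list of rows)
    at a real-valued source coordinate (x = column, y = row)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    def px(r, c):  # out-of-grid samples treated as background (0.0)
        if 0 <= r < len(img) and 0 <= c < len(img[0]):
            return img[r][c]
        return 0.0
    return ((1 - dx) * (1 - dy) * px(y0, x0)
            + dx * (1 - dy) * px(y0, x0 + 1)
            + (1 - dx) * dy * px(y0 + 1, x0)
            + dx * dy * px(y0 + 1, x0 + 1))
```

With $\theta = [[0,-1,0],[1,0,0]]$, for instance, the grid implements a \ang{90} rotation about the origin.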
\section{Experiments}
We aim to answer the following questions:
\begin{enumerate}
\item[] \textbf{Q1:} Are Siamese networks effective for the link prediction task of DDI?
\item[] \textbf{Q2:} What is the effect of the number of epochs on the predictive performance of the Siamese architecture?
\item[] \textbf{Q3:} Does our architecture handle the problem of rotational variance?
\item[] \textbf{Q4:} Are molecular structure images informative enough to predict DDIs, and can they be used instead of lossy string representations?
\item[] \textbf{Q5:} How does our method compare with state-of-the-art statistical relational models?
\item[] \textbf{Q6:} How does the choice of distance function for the contrastive loss affect the prediction performance?
\item[] \textbf{Q7:} How does the choice of optimization function affect the prediction performance?
\end{enumerate}
\subsection{Data set}
Our data set consists of images of 373 drugs of size 500 $\times$ 500 $\times$ 3, downloaded from the PubChem database\footnote{https://pubchem.ncbi.nlm.nih.gov/} and converted to grayscale to yield images of size 500 $\times$ 500 $\times$ 1. From these images we create a total of 67,360 drug interaction pairs, excluding reciprocal pairs (since drug-drug interaction is reciprocal in nature, i.e. if drug $d_1$ interacts with drug $d_2$ then $d_2$ interacts with $d_1$ and vice versa, we remove such duplicate pairs from our data). From the 67,360 drug pairs we obtain a data set of 19,936 drug pairs that interact with each other ($Y$ = 1) and 47,424 drug pairs that do not interact with each other ($Y$ = 0). The images are normalized by the maximum pixel value (i.e. 255) before being passed to the network. The data set and the code are available at \url{https://rb.gy/koax5u}.
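The pair construction can be sketched with unordered combinations. Note that $\binom{373}{2} = 69{,}378$, slightly more than the 67,360 pairs used in the paper, so presumably not every candidate pair has a recorded label:

```python
from itertools import combinations

def interaction_pairs(drugs):
    """Candidate DDI examples: interaction is symmetric, so (d1, d2)
    and (d2, d1) are the same pair and only one orientation is kept."""
    return list(combinations(drugs, 2))
```

Using `combinations` (rather than `product`) enforces the reciprocal-pair exclusion described above.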
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{images/Venlafaxine.png}
\caption{An example of abstract features learned by a convolutional layer for Venlafaxine.}
\label{fig:venla_conv}
\end{figure}
\subsection{Baselines}
We consider 5 baselines using different data modalities to compare the results from our Siamese architecture, namely,
\begin{enumerate}
\item \textit{\textbf{Image data:}}\\ \textbf{1. Structural Similarity Index (SSIM):} is used for measuring perceptual similarity between images \citep{wang2004image} and, given two images $X_1$ and $X_2$, is calculated as
\begin{equation}
SSIM(X_1,X_2)=\frac{(2 \mu_{X_1} \mu_{X_2} + C_1) \times (2 \sigma_{X_1 X_2} + C_2)}{(\mu_{X_1}^2 + \mu_{X_2}^2 +C_1) \times (\sigma_{X_1}^2 + \sigma_{X_2}^2 +C_2)}
\end{equation}
where $\mu_{X_1}$ and $\mu_{X_2}$ are the means of the images $X_1$ and $X_2$ respectively, $\sigma_{X_1}^2$ and $\sigma_{X_2}^2$ are their variances, and $\sigma_{X_1 X_2}$ is the covariance of the two input images. The constants $C_1$ and $C_2$ are added to the SSIM to avoid instability and are each the product of a small constant ($\ll$ 1) with the dynamic range of the pixel values in the given images. The SSIM measure can also be written as the product of three types of comparisons between the input images, namely luminance, contrast and structure. To obtain predictions, the SSIM must be thresholded; in the experiments, the threshold is set as the mean SSIM value over all pairs.\\
\textbf{2. Autoencoders:} are neural networks that consist of 2 main components: an encoder and a decoder \citep{kramer1991nonlinear}. The encoder extracts features from the input images and the decoder restores the original images from the extracted features. In general, the performance of an autoencoder is evaluated by pixel-wise comparison between input and output images. To compare the similarity between two images, the similarity between their extracted features can be compared instead. This approach should be able to find images which contain objects with similar color and shape.
For the encoder, we have three convolutional layers with filter sizes 16, 32 and 64 followed by a max pooling layer which is in turn followed by two convolutional layers with filter sizes 128, 64 and another max pooling layer. The final three convolutional layers consists of filters of sizes 32, 16 and 8. For the decoder, we have two convolutional layers with filter sizes 16 and 32 followed by a single up-sampling layer which is in turn followed by two convolutional layers with filter sizes 64 and 128 again followed by a single up-sampling layer. The final four convolutional layers consists of filters of sizes 64, 32, 16 and 1. The size of all kernels is 3$\times$3. The size of max pooling is 2$\times$2 and up sampling size is also 2$\times$2. The activation of all convolutional layers is relu, except the last layer of both encoder and decoder is a sigmoid, for the ease of comparison.
First, the autoencoder model is trained using the training images, following the normal training process of an autoencoder. The number of epochs is 10 and the loss function is binary cross-entropy. Features are then extracted by applying the encoder to the testing images. To find images with similar extracted features, two criteria were used, namely binary cross-entropy and cosine proximity. The threshold to decide whether two images are similar was set as the mean of the values calculated over all pairs of testing images.
\item \textit{\textbf{String data:}}\\ 1. \textbf{CASTER} \citep{huang2020caster} uses the drug molecular structure in the text format of the \textbf{S}implified \textbf{M}olecular \textbf{I}nput \textbf{L}ine \textbf{E}ntry \textbf{S}ystem (\textbf{SMILES}) \citep{weininger1988smiles} string representation to predict drug-drug interactions, and outperforms several deep learning methods such as DeepDDI \citep{ryu2018deep} and molVAE \citep{gomez2018automatic}. CASTER identifies the frequent substrings present in the SMILES strings seen during the training phase using a sequential pattern mining algorithm; these are then converted to an embedded representation using an encoder module to obtain a set of latent feature vectors. These features are then converted into linear coefficients which are passed through a decoder and a predictor to obtain the DDI predictions. We obtain the SMILES strings of all the drugs in our data set from PubChem and DrugBank\footnote{https://www.drugbank.ca/} and use the source code\footnote{https://github.com/kexinhuang12345/CASTER} provided by the authors along with the provided default hyper-parameter settings.
\item \textit{\textbf{Relational Data:}}\\ \textbf{1. RDN-Boost} \cite{natarajan2012gradient} extends the functional gradient boosting framework \cite{friedman2001greedy} to the relational setting by boosting relational dependency networks (RDNs) \cite{neville2007relational}, with the aim of overcoming the assumption of a propositional representation of the data made in standard functional gradient boosting. The objective function used is the log-likelihood, and the probability of an example is represented as a sigmoid over the learned relational regression trees (RRTs) \cite{blockeel1998top}, which use the relational features as input. The basic idea is to take an initial model (an RRT) and use the obtained predictions to compute gradients or residues. A new regression function, i.e. a new RRT, is then learned to fit the residues and the model is updated. At the end, a combination (the sum) of all the obtained regression functions gives the final model.\\
\textbf{2. MLN-Boost} \cite{khot2011learning} boosts undirected Markov logic networks (MLNs) \cite{richardson2006markov} instead of the directed relational dependency networks boosted by RDN-Boost. In MLN-Boost, the structure and parameters of the MLN are learned simultaneously by converting the problem of learning MLNs into a series of relational functional approximation problems, similar to the RDN-Boost setting, the only difference being that MLN-Boost counts the number of groundings of each learned clause whereas RDN-Boost uses existential semantics.
\end{enumerate}
We convert the data obtained from DrugBank to the relational format with number of relations = 14 and the total number of facts = 5366. For both RDN-Boost and MLN-Boost we set the number of relational regression trees to be learned as 10.
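The SSIM baseline above can be sketched as a global (single-window) computation over each image pair; practical SSIM implementations average over local windows, and the constants $C_1$, $C_2$ below are illustrative:

```python
def ssim(img1, img2, c1=1e-4, c2=9e-4):
    """Global SSIM between two equally sized grayscale images given as
    flat lists of pixel values in [0, 1].  c1 and c2 are stabilizing
    constants (illustrative values, not those used in the paper)."""
    n = len(img1)
    mu1, mu2 = sum(img1) / n, sum(img2) / n
    var1 = sum((p - mu1) ** 2 for p in img1) / n
    var2 = sum((p - mu2) ** 2 for p in img2) / n
    cov = sum((p - mu1) * (q - mu2) for p, q in zip(img1, img2)) / n
    return (((2 * mu1 * mu2 + c1) * (2 * cov + c2))
            / ((mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2)))
```

An image compared with itself yields an SSIM of 1, the maximum; structurally different images score lower.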
\subsection{Results}
We optimize our Siamese network using Adam \citep{kingma2014adam} with a learning rate of $5 \times 10^{-5}$ (we also train the network with several other optimization algorithms, as described later). The best learning rate was obtained using line search. We set the value of the margin $m$ in the contrastive loss equal to 1. As mentioned before, we keep the threshold value at 0.65, obtained using the AUC-PR curve, to obtain predictions after the Siamese convolutional network produces a distance for each pair of drug images. We divide our data set into 44,457 training examples (66\% of the data) and 22,903 testing examples. Example features learned by the second convolutional layer of our network for the drug Venlafaxine are shown in figure \ref{fig:venla_conv}. When pre-processing the data using an STN, we rotate the data set images by \ang{90} and pass them through the STN before passing them through our Siamese network. Another important thing to note is that in our problem formulation, recall is the most important metric. The reason is simple: we do not want to miss any interaction, i.e. a false negative has much more serious consequences (fatalities in patients) than a false positive (monetary losses such as new clinical trials) \citep{dhami2018drug}, \textbf{although a gain in recall should not come at the cost of a loss in precision}, since that can be achieved simply by classifying every test example as positive.
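The decision rule and the precision/recall trade-off discussed above can be sketched as follows. The direction of the threshold is our reading of the label convention in equation \ref{eq} ($Y = 1$ pairs are pushed apart, so a large embedding distance is read as 'interacting'); the paper only states the 0.65 value:

```python
def predict_interaction(distance, threshold=0.65):
    """Distance-based decision rule: embedding distances at or above
    the threshold are read as 'interacting' (label 1)."""
    return 1 if distance >= threshold else 0

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels; a false negative is a
    missed interaction, the costly error in this application."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Lowering the threshold drives recall towards 1 at the expense of precision, which is why both are reported together.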
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{images/image_results.png}
\caption{Results for DDI prediction using images. Although the recall of auto-encoders (AE) is higher than the Siamese network, low precision shows a high rate of false positives.}
\label{fig:image_base}
\end{figure}
Figure \ref{fig:image_base} shows the results of using our Siamese network architecture with and without rotational invariance (STN) compared with the baselines. The Siamese networks with and without STN (results reported for 50 epochs in both cases) outperform the baselines, thereby answering \textbf{Q1} affirmatively: Siamese networks are clearly effective and significantly better for the DDI link prediction task. Note that although the recall of the auto-encoders is higher than that of the Siamese network, the very low precision indicates a high rate of false positives, and thus their performance cannot be judged as better than that of the proposed model.
Figure \ref{fig:epochs} shows the variation of the performance of the Siamese network (figure \ref{fig:siam_epoch}) and of the Siamese network with STN (figure \ref{fig:stn_epoch}) with respect to the number of training epochs. The results for the Siamese network without STN do not change significantly with increasing epochs across metrics, whereas for the Siamese network with STN the results show a steady increase with increasing epochs across the majority of metrics. The recall decreases with increasing number of epochs in both cases, i.e.\ with and without STN, but the decrease is more stark for the network without STN, whereas the drop is not significant with STN. This answers \textbf{Q2}.
\begin{figure}
\centering
\subcaptionbox{Performance of Siamese network across epochs\label{fig:siam_epoch}}
{\includegraphics[width=\columnwidth]{images/siamese_epochs.png}}
\subcaptionbox{Performance of Siamese network with STN across epochs\label{fig:stn_epoch}}
{\includegraphics[width=\columnwidth]{images/siamese_STN_epochs.png}}
\caption{Variation of DDI prediction of the proposed networks wrt epochs.}\label{fig:epochs}
\end{figure}
We refer back to figures \ref{fig:image_base} and \ref{fig:epochs} to answer \textbf{Q3}. The performance of the Siamese network without STN is certainly better than with STN, especially at smaller numbers of epochs, although the difference in performance begins to shrink as the number of epochs increases. This is expected since the STN, being a separate convolutional network in itself, requires more epochs to train. Due to this steady increase in the performance of the Siamese network with STN we can answer \textbf{Q3} affirmatively: our architecture can effectively handle the problem of rotational variance.
Figure \ref{fig:siamvscas} shows the results of our method compared to a recent state-of-the-art method, CASTER. Our method outperforms CASTER across the majority of metrics, thereby demonstrating the effectiveness of our approach in identifying drug-drug interactions. We show that using molecular structure images directly in a deep learning framework can result in better or on-par performance compared to using \emph{lossy} string based representations. This answers \textbf{Q4}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{images/siam_caster.png}
\caption{Comparison of our method (using images) with CASTER (using SMILES strings).}
\label{fig:siamvscas}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{images/rel_base.png}
\caption{Comparison of our method (using images) with RDN-Boost and MLN-Boost (using relational data).}
\label{fig:rel_base}
\end{figure}
Figure \ref{fig:rel_base} shows the results of comparing our method (Siamese network \textbf{without} STN trained for 50 epochs) to the state-of-the-art statistical relational learning baselines. Our method outperforms both boosted methods across the majority of the metrics. Note that, similar to the results obtained when comparing with image based methods (figure \ref{fig:image_base}), although the recall of MLN-Boost is higher than that of the Siamese network, the accompanying low precision score indicates a higher rate of false positives. This shows that using the molecular structure images directly can result in better link prediction performance than using the data for the same drugs in a relational setting. This answers \textbf{Q5}.
An ideal predictor can use all the heterogeneous data types of the drugs considered, i.e. images, string based representations and relational representations. We propose an initial sketch of such a model and leave it as future work. A graph convolutional network (GCN) \cite{kipf2016semi} is a type of graph neural network that extends neural network models to operate directly on graph data sets. A GCN makes use of a node feature matrix and the graph adjacency matrix to propagate functional values, akin to a neural network, to accomplish link prediction and node classification tasks. We propose a \emph{heterogeneous GCN} where the heterogeneous data types available to us are used to construct the feature and adjacency matrices fed to the GCN. For example, relational data can be used to learn lifted rules which can then be grounded, and the counts of the satisfied groundings can form a richer and more informed feature matrix than simple node features. A combination of the distances between the images and the string representations can form a more informed adjacency matrix, and we can then solve the drug-drug interaction problem as a link prediction problem.
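As an illustration of the propagation rule that the proposed heterogeneous GCN would inherit from Ref.~\cite{kipf2016semi}, here is a minimal NumPy sketch of a single graph-convolution layer. The symmetric normalization with self-loops follows the standard GCN formulation; the interpretation of $A$ and $H$ in terms of our heterogeneous drug data is an assumption of the sketch.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W).

    A: (n, n) adjacency matrix (e.g. built from image/string distances
       between drugs), H: (n, f) node features (e.g. counts of satisfied
       rule groundings), W: (f, f') learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers and scoring pairs of node embeddings would turn DDI prediction into link prediction on the drug graph.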
All the results reported above use the Euclidean distance as the metric in the contrastive loss while training the Siamese network ($D_P$ in equation \ref{eq}). To answer \textbf{Q6} we evaluate 3 additional distance metrics inside the contrastive loss. These metrics are:
\begin{enumerate}
\item \textbf{Manhattan distance:} The $\ell_1$ distance between vectors, i.e. the sum of the absolute differences of their components, defined as $D_P={|F_P(X_1) - F_P(X_2)|_1}$.
\item \textbf{Hellinger distance:} A close relative of the Euclidean distance, used to measure the distance between 2 probability distributions. The Hellinger distance is given as \\$D_P=\sqrt{2 \sum (\sqrt{\frac{X_1}{\bar{X_1}}}-\sqrt{\frac{X_2}{\bar{X_2}}})^2}$.
\item \textbf{Jaccard distance:} Computed between binary segmentations of the input images as one minus the Jaccard similarity, $D_P=1-{\frac{|X_1 \cap X_2|}{|X_1 \cup X_2|}}$.
\end{enumerate}
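For reference, the four metrics can be sketched in a few lines of NumPy. The Hellinger variant follows the normalization written above, reading $\bar{X}$ as the normalizing sum (our assumption), and the Jaccard distance is computed as one minus the Jaccard similarity of the binarized inputs.

```python
import numpy as np

def manhattan(x1, x2):
    # l1 norm of the component-wise difference
    return np.sum(np.abs(x1 - x2))

def euclidean(x1, x2):
    # l2 norm of the component-wise difference
    return np.sqrt(np.sum((x1 - x2) ** 2))

def hellinger(x1, x2):
    # normalise each vector to a distribution, as in the text
    p, q = x1 / np.sum(x1), x2 / np.sum(x2)
    return np.sqrt(2.0 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def jaccard_distance(x1, x2):
    # binary inputs: one minus |intersection| / |union|
    inter = np.sum(np.logical_and(x1, x2))
    union = np.sum(np.logical_or(x1, x2))
    return 1.0 - inter / union
```

In training, these functions would be applied to the embedding vectors $F_P(X_1)$ and $F_P(X_2)$ (or, for Jaccard, to binarized images) inside the contrastive loss.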
\begin{table*}[!ht]
\centering
\caption{Effect of choice of distance metric on the prediction performance.}
\label{tab:results_dist}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Distance Metric & Number of Epochs & Accuracy & Recall & Precision & F1\\
\hline
\multirow{3}{*}{Manhattan distance} & 20 & 0.817 & 0.849 & 0.645 & 0.733\\
& 30 & 0.828 & 0.866 & 0.665 & \textbf{0.752}\\
& 50 & 0.806 & 0.828 & 0.634 & 0.718\\
\hline
\multirow{3}{*}{Hellinger distance} & 20 & 0.300 & \textbf{1.0} & 0.297 & 0.461\\
& 30 & 0.297 & \textbf{1.0} & 0.297 & 0.458\\
& 50 & 0.295 & \textbf{1.0} & 0.295 & 0.456\\
\hline
\multirow{3}{*}{Jaccard distance} & 20 & 0.703 & 0.01 & 0.427 & 0.02\\
& 30 & 0.703 & 0.0 & 0.7 & 0.0\\
& 50 & 0.703 & 0.0 & \textbf{1.0} & 0.0\\
\hline
\multirow{3}{*}{Euclidean distance} & 20 & 0.822 & 0.835 & 0.657 & 0.735\\
& 30 & 0.832 & 0.849 & 0.669 & 0.748\\
& 50 & \textbf{0.839} & 0.78 & 0.705 & 0.741\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Effect of choice of optimization function on the prediction performance.}
\label{tab:results_opt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Optimization function & Number of Epochs & Accuracy & Recall & Precision & F1 Score\\
\hline
\multirow{3}{*}{RMSprop \citep{hinton2012lecture}} & 20 & 0.822 & 0.672 & 0.715 & 0.693\\
& 30 & 0.770 & \textbf{0.877} & 0.576 & 0.696\\
& 50 & 0.816 & 0.640 & 0.707 & 0.672\\
\hline
\multirow{3}{*}{Adadelta \citep{zeiler2012adadelta}} & 20 & 0.721 & 0.138 & 0.667 & 0.229\\
& 30 & 0.827 & 0.813 & 0.672 & 0.735\\
& 50 & \textbf{0.851} & 0.831 & 0.707 & \textbf{0.764}\\
\hline
\multirow{3}{*}{Nadam \citep{dozat2016incorporating}} & 20 & 0.812 & 0.852 & 0.639 & 0.730\\
& 30 & 0.828 & 0.833 & 0.668 & 0.742\\
& 50 & 0.848 & 0.790 & \textbf{0.721} & 0.754\\
\hline
\multirow{3}{*}{Adam \citep{kingma2014adam}} & 20 & 0.822 & 0.835 & 0.657 & 0.735\\
& 30 & 0.832 & 0.849 & 0.669 & 0.748\\
& 50 & 0.839 & 0.780 & 0.705 & 0.741\\
\hline
\end{tabular}
\end{table*}
Table \ref{tab:results_dist} shows the effect of using different distance metrics within the contrastive loss on the performance of the Siamese architecture (without STN). The results show that the Euclidean and Manhattan distances perform similarly as metrics in the contrastive loss and outperform the Hellinger and Jaccard distances by large margins. Although the recall values using the Hellinger distance and the precision values using the Jaccard distance (at 50 epochs) are perfect, i.e. equal to 1, the respective precision and recall values for both distances are very low, showing that using these distances in the contrastive loss leads to poor performance. This answers \textbf{Q6}.
Table \ref{tab:results_opt} shows the effect of using different optimization functions (RMSprop, Adadelta and Nadam) to optimize the Siamese network with increasing number of epochs. The last row of table \ref{tab:results_dist} shows the results using Adam as the optimization function with increasing number of epochs; we repeat that row in table \ref{tab:results_opt} for clarity.
The results vary widely with respect to the optimization function used, with performance increasing with the number of epochs in the case of the Adadelta \citep{zeiler2012adadelta} optimization function. For the other 3 optimization functions, interestingly, we note a drop in recall when going from 30 to 50 epochs. This shows that the choice of optimization function plays a significant role in the prediction performance, thereby answering \textbf{Q7}.
\section{Conclusion}
In this work we focus on using the molecular images of the drugs in a pairwise fashion and feeding them to a rotation-invariant Siamese architecture to predict whether two drugs interact with each other. Our evaluations on the drug images obtained from the PubChem database establish the superiority of our proposed approach, which is distinct from current approaches that generally use SMILES and \textbf{SM}iles \textbf{AR}bitrary \textbf{T}arget \textbf{S}pecification (\textbf{SMARTS}) strings \citep{sayle19971st}.
A natural next step is to combine our previous work \citep{dhami2018drug}, which used different similarity measures obtained from a directed graph of known chemical reactions between drugs and enzymes, transporters and inhibitors, as well as the structure of the drugs in the form of SMILES and SMARTS strings, with the current work, which uses images of the drug structure. Refining the Siamese architecture and feeding more drug images to the network are also interesting directions for future work.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Massive neutrinos are our first and only undoubted evidence of physics
beyond the Standard Model (SM). The evidence arises from a variety of
neutrino experiments which have detected the effect of their mass via
the flavour oscillation of neutrinos with energies ranging between
tens of keV and tens of GeV \cite{GonzalezGarcia:2007ib}. The obvious
question that this observation raises is that of the dynamics of the
New Physics (NP) responsible for the neutrino mass.
It is well known that in the framework of effective operators for NP
there is just one dimension-five operator which can be
built~\cite{Weinberg:1979sa},
${\cal O}=\alpha_5 /\Lambda_{LN}\,
L_L^{} L_L^{} HH$, where $L_L^{}$ and $H$ are the leptonic and Higgs
$SU(2)_L$ doublets. This operator breaks total lepton number and
after electroweak symmetry breaking it generates Majorana masses for
the neutrinos $m_\nu \sim \alpha_5 v^2/\Lambda_{LN}$, where $v$ is the
SM Higgs vacuum expectation value (vev). This explains the lightness
of the neutrino mass due to the large scale of total lepton number
violation $\Lambda_{LN}$. In the simplest UV completions, this
dimension-5 operator can be generated by the tree-level exchange of
three types of new states: lepton singlets in the Type--I see-saw
scenarios~\cite{Minkowski:1977sc, Yanagida:1979as, GellMann:1980vs,
Mohapatra:1979ia}, a scalar triplet for the Type--II
mechanisms~\cite{Konetschny:1977bn, Cheng:1980qt, Lazarides:1980nt,
Schechter:1980gr, Mohapatra:1980yp}, and lepton triplets for
Type--III models~\cite{Foot:1988aq}. In any of these mechanisms the
smallness of the neutrino mass can be naturally explained with Yukawa
couplings $\lambda\sim{\cal O}(1)$ if the masses of the new states are
$M\sim \Lambda_{LN}\sim 10^{14-15}$ GeV. Notwithstanding, consistent
models of lower scale see--saw have existed in the literature for some time;
see {\em e.g.} \cite{Mohapatra:1986bd,Kersten:2007vk,Chen:2011de}.
Given the energy range of the neutrino experiments it is clear that if
the NP scale is beyond $\sim$ GeV it is not possible to clarify its
origin within the oscillation neutrino experiments themselves. On the
other hand, the high energy frontier is currently being explored by the
CERN Large Hadron Collider (LHC) which has been running for eight
years with a reach to NP at the TeV scale. So far no clear evidence of
NP has been observed at LHC which bears implications for models
constructed to explain the neutrino masses containing new states at
the TeV scale.
Generically the expected event rates at LHC in neutrino mass models
depend not only on the mass and weak charge of the new states
involved, but also on the flavour structure of their couplings which
determines their decay modes. A priori, the decay channels are
arbitrary in most Type--I and Type--III see--saw
models, making it difficult to derive unambiguous constraints on the NP
scale in these scenarios~\cite{delAguila:2008cj,
Aguilar-Saavedra:2013twa}. For example searches for triplet leptons
of the Type--III see--saw models have been performed both by CMS
\cite{CMS:2012ra,CMS:2017wua, CMS:2016hmk,CMS:2015mza} and ATLAS
\cite{Aad:2015cxa,ATLAS:2013hma} collaborations; however, most of
these searches have been carried out within the context of simplified
models such as Ref.~\cite{Biggio:2011ja} and the derived bounds can be
evaded depending on which is the dominant decay mode of the triplet
leptons.
One exception is the class of see--saw models which extend the principle of
minimal flavour violation to the leptonic sector. Minimal flavour
violation was first introduced for quarks~\cite{Chivukula:1987py,
Buras:2000dm, DAmbrosio:2002vsn} as a way to explain the absence of
NP effects in flavour changing processes in meson decays. The basic
assumption is that the only source of flavour mixing in the full
theory is the same as in the SM, {\it i.e. } the quark Yukawa couplings.
This idea was later extended to leptons~\cite{Cirigliano:2005ck,
Davidson:2006bd} and in particular to TeV scale see--saw
models~\cite{Cirigliano:2005ck, Davidson:2006bd, Gavela:2009cd,Alonso:2011jd}.
From the point of view of LHC phenomenology minimal lepton flavour
violation (MLFV) see--saw models are attractive since, a) the new
states can be light enough to be produced at LHC, and b) their
signatures are fully determined by the neutrino parameters. As
discussed in Ref.~\cite{Gavela:2009cd} scalar (Type-II) see--saw with
light doublet-triplet mixing is a light scale MLFV model by
construction (for early study of their observability see for example
~\cite{Perez:2008ha,Garayoa:2007fw}).
In Ref.~\cite{Gavela:2009cd} simple MLFV models for fermionic
see--saw were also presented. In Type--I see--saw the new states are
singlets under the SM gauge group, and therefore, they can only be
produced via their mixing with the SM neutrinos. This leads to small
production rates which makes the model only marginally testable at
LHC. Type-III see--saw fermions, on the contrary, are $SU(2)_L$
triplets with a weak-interaction pair-production cross section, and
consequently have the potential to allow for tests of the
MLFV hypothesis.
In Ref.~\cite{Eboli:2011ia} we studied the potential of LHC to unravel
the existence of triplet fermionic states that appear in MLFV Type-III
see--saw models of neutrino mass. Here, we obtain the bounds on the NP
scale of this scenario that originates from the
ATLAS~\cite{Aad:2015cxa} searches in the final state topology
containing two charged leptons (electrons and/or muons), two jets
compatible with a hadronically decaying $W$ and missing transverse
momentum. This ATLAS analysis is best suited for testing the MLFV
scenario because it classifies the final states in the different
flavour and charge combinations, allowing to fully exploit the
predictions of the MLFV Type-III see--saw model. We find that using
this ATLAS data it is possible to unambiguously rule out the MLFV
Type--III see--saw scenario with triplet masses lighter than 300 GeV
at 95\% CL irrespective of the neutrino mass ordering and other
unknowns in the light neutrino sector, in particular of the so--far
undetermined Majorana CP violating phase. The same analysis allows to
rule out triplet masses up to 480 GeV at 95\% CL for normal
order (NO) scenarios and specific values of that phase.
This paper is organized as follows. We first summarize in
Sec.~\ref{sec:model} the basics of the MLFV Type-III see--saw model and
we quantify the allowed range of the relevant couplings controlling
the triplet decay modes as derived from the present analysis of
neutrino oscillation data from Ref.~\cite{Esteban:2016qun}. Section
~\ref{sec:simulation} describes our simulation of signal events by the
reaction $pp\rightarrow l l' j j \nu\nu$ with $ l^{(\prime)} = e$ or
$\mu$ in the context of the MLFV Type-III see--saw model. The
quantification of the bounds is presented in Sec.~\ref{sec:results}.
In doing so we have put special emphasis and we have quantified how
the information on flavour and charge information of the produced
leptons is important for maximal sensitivity to MLFV.
\section{MLFV Type--III see--saw model}
\label{sec:model}
In Ref.~\cite{Eboli:2011ia} we introduced the simplest MLFV Type-III
see--saw model which was adapted from the Type-I one presented in
Ref.~\cite{Gavela:2009cd}. For completeness we summarize here its main
features.
The particle content of the model is that of the SM extended with two
fermion triplets
$\vec{\Sigma} = \left( \Sigma_1,\Sigma_2,\Sigma_3\right) $ and
$\vec{\Sigma}^\prime = \left( \Sigma'_1,\Sigma'_2,\Sigma'_3\right)$,
each one formed by three right-handed Weyl spinors of zero
hypercharge. Hence, the Lagrangian is
\begin{eqnarray}
{\mathcal L}={\mathcal L}_{SM}+{\mathcal L}_K+{\mathcal
L}_Y+{\mathcal L}_\Lambda
\label{eq:lag}
\end{eqnarray}
with
\begin{eqnarray}
{\mathcal
L}_K=&&i\left(\overline{\vec{\Sigma}}\Sla{D}_\mu\vec{\Sigma}+
\overline{\vec{\Sigma}^\prime}\Sla{D}_
\mu\vec{\Sigma}^\prime\right)
\;,\\
{\mathcal
L}_Y=&&-Y_i^\dag\overline{L^w_{L
i}}\left(\vec{\Sigma}\cdot\vec{\tau}\right)\tilde{\phi}- \epsilon
Y_i^{\prime\dag}\overline{L^w_{L
i}}\left(\vec{\Sigma}^\prime\cdot\vec{\tau}\right)\tilde{\phi}+h.c.
\;, \\
{\mathcal
L}_\Lambda=&&
-\frac{\Lambda}{2}\left(\overline{\vec{\Sigma}^c}\vec{\Sigma}^\prime
+\overline{\vec{\Sigma}^{\prime c}}\vec{\Sigma}\right)
-\frac{\mu}{2}\overline{\vec{\Sigma}^{\prime c}}\vec{\Sigma}^\prime
-\frac{\mu'}{2}\overline{\vec{\Sigma}^c}\vec{\Sigma}
+h.c. \;.
\end{eqnarray}
Here $\vec{\tau}$ are the Pauli matrices, the gauge covariant
derivative is defined as
$D_\mu=\partial_\mu+ig\vec{T}\cdot\vec{W}_\mu$, where $\vec{T}$ are
the three-dimensional representation of the $SU(2)_L$ generators,
$\phi$ stands for the SM Higgs doublet field, and
$L^w_{i}=(\nu^w_i,\ell^w_i)^T$ are the three weak eigenstate lepton
doublets of the SM. The parameters $\epsilon$, $\mu$ and $\mu'$ are
flavour-blind and {\sl small}, {\em i.e.}, the scales $\mu$ and $\mu'$ are
much smaller than $\Lambda$ and $v$ while $\epsilon\ll 1$.
The Lagrangian in Eq.~\eqref{eq:lag} breaks total lepton number due to
the simultaneous presence of the Yukawa terms $Y_i$ and
$\epsilon Y'_i$ as well as to the existence of the $\mu$ and $\mu'$
terms. Thus in the limit $\mu,\mu',\epsilon\rightarrow 0$ it is
possible to define a conserved total lepton number by assigning
$L(L^w)=L(\Sigma)=-L(\Sigma^\prime)=1$. Without any loss of generality
one can work in a basis in which $\Lambda$ is real while both $Y$ and
$Y'$ are complex. In general the parameters $\mu$ and $\mu'$ would be
complex, but for the sake of simplicity we take them to be real in
what follows, though it is straightforward to generalize the
expressions to include the relevant phases~\cite{Abada:2007ux}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{yuk.pdf}
\includegraphics[width=0.7\textwidth]{yuk_cor.pdf}
\caption{ Allowed ranges of the Yukawa couplings $|\tilde Y_e|^2\equiv
|Y_1|^2/y^2$ and $|\tilde Y_\mu|^2\equiv |Y_2|^2/y^2$ obtained from
the global analysis of neutrino data in Ref.~\cite{Esteban:2016qun}.
The upper four panels show the values of the couplings as a function of the unknown
Majorana phase $\alpha$. The correlation between the two couplings
is shown in the two lower panels. The left (right) panels correspond
to normal (inverted) ordering. The dotted line corresponds to the
best fit values. The ranges in the filled green, red and yellow areas are shown at
1$\sigma$, $2\sigma$, and 3$\sigma$ with 1 dof ($\Delta\chi^2=1,4,9$
respectively). }
\label{fig:yuk}
\end{figure}
After electroweak symmetry breaking, in the unitary gauge, the
leptonic mass matrices are
\begin{eqnarray}
{\mathcal L}_m
&=&-\frac{1}{2}\left(\overline{\vec{\nu^w_L}^c}\
\overline{\tilde N_R}\ \overline{\tilde N_R^{\prime}}\right)
M_0\left(\begin{array}{c}
\vec{\nu^w_L}\\\tilde N_R^c\\\tilde N_R^{\prime c}
\end{array}\right)
-\left(\overline{\vec{\ell^w_L}}\ \overline{E_L}\
\overline{E_L^{\prime}}\right)
M_\pm
\left(\begin{array}{c}\vec{\ell^w_R}\\E_R\\E_R^{\prime}\end{array}\right)
+h.c.
\label{eq:lmass}
\end{eqnarray}
with
\begin{eqnarray}
M_0=\left(\begin{array}{ccc}
0&\frac{v}{\sqrt{2}}Y^T&\epsilon\frac{v}{\sqrt{2}}Y^{\prime T} \\
\frac{v}{\sqrt{2}}Y&\mu'&\Lambda \\
\epsilon\frac{v}{\sqrt{2}}Y^{\prime}&\Lambda&\mu
\end{array}\right)
&&
\;\;\;\;
\hbox{ and }
\;\;\;\;
M_\pm=
\left(\begin{array}{ccc}
\frac{v}{\sqrt{2}}Y^\ell&vY^\dagger&\epsilon vY^{\prime \dagger} \\
0&\mu'&\Lambda \\
0&\Lambda&\mu
\end{array}\right) \; ,
\end{eqnarray}
where $Y^\ell$ are the charged lepton Yukawa couplings of the SM.
$\vec\nu^w$ and $\vec\ell^w$ are column vectors containing
respectively the three neutrinos and charged leptons of the SM in the
weak basis. The charge eigenstates Dirac fermions $E$ and $E'$ and the
neutral Majorana fermions $\tilde N$ and $\tilde N^\prime$ are defined
in terms of the triplet states,
$\Sigma^{(\prime)}_\pm=\frac{1}{\sqrt{2}}\left(\Sigma^{(\prime)}_1\mp
i\Sigma^{(\prime)}_2\right)$, and
$\Sigma^{(\prime)}_0=\Sigma^{(\prime)}_3$, as
\begin{eqnarray}
E^{(\prime)}
=\Sigma^{(\prime)}_-+{\Sigma_+^{(\prime)}}^c &\;\;\;\;
\tilde N^{(\prime)}=\Sigma^{(\prime)}_0+{\Sigma^{(\prime)}_0}^c
\; .
\end{eqnarray}
The mass basis is composed of:
\begin{itemize}
\item Three light Majorana neutrinos $\nu_i$ (with the
lightest one being massless) and three light charged
leptons $\ell_i$ with masses
\begin{eqnarray}
m^{diag}_\nu&=&
{V^\nu}^T \left[-\frac{v^2}{2\Lambda}
\epsilon\left[\left(Y'-\frac{1}{\epsilon}\frac{\mu}{2\Lambda} Y\right)^T Y
+ Y^T \left(Y'-\frac{1}{\epsilon}\frac{\mu}{2\Lambda} Y\right)\right]\right]V^\nu \; , \\
m^{diag}_\ell&=&
\frac{v}{\sqrt{2}} {V^\ell}^\dagger_R Y^{\ell\dagger}
\left[1-\frac{v^2}{2\Lambda^2}
Y^\dagger Y\right] V^\ell_L \; ,
\end{eqnarray}
where $V^\nu$ and $V^\ell_{L,R}$ are $3\times 3$ unitary matrices
and, in general, one can choose the flavour basis such that
$ V^\ell_L= V^\ell_R=I$.
\item Two charged heavy leptons, $E_1^-$ and $E_2^+$,
both with masses $M\simeq\Lambda (1 \mp \frac{\mu+\mu'}{2\Lambda})$.
\item Two heavy Majorana neutral leptons, also with masses
$M\simeq\Lambda (1 \mp \frac{\mu+\mu'}{2\Lambda})$, with which we build
a quasi-Dirac heavy state $N$.
\end{itemize}
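The structure of this spectrum can be checked numerically in a one-flavour toy version of $M_0$. The sketch below, with $\mu=\mu'=0$ and illustrative parameter values of our choosing, verifies that diagonalizing $M_0$ yields one light state with $m\simeq \epsilon\, y\, y'\, v^2/\Lambda$ and two heavy states at $\simeq\Lambda$.

```python
import numpy as np

# One-flavour toy check of the see-saw mass formula
# m_nu ~ eps * y * y' * v^2 / Lambda (here with mu = mu' = 0).
v, Lam, y, yp, eps = 1.0, 1000.0, 0.5, 0.4, 1e-3

a = v * y / np.sqrt(2)          # Dirac mass from Y
b = eps * v * yp / np.sqrt(2)   # suppressed Dirac mass from eps*Y'
M0 = np.array([[0.0, a,   b],
               [a,   0.0, Lam],
               [b,   Lam, 0.0]])

masses = np.sort(np.abs(np.linalg.eigvalsh(M0)))
m_light = masses[0]
m_pred = eps * y * yp * v**2 / Lam   # see-saw estimate of the light mass
```

The light eigenvalue reproduces the see-saw estimate up to corrections of order $(y v/\Lambda)^2$, while the two heavy eigenvalues are degenerate at $\Lambda$ in the $\mu,\mu'\rightarrow 0$ limit.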
The relation between the weak and mass eigenstate to first order in
the small parameters $\mu$, $\mu'$ and $\epsilon$ is:
\begin{eqnarray}
\nu_L^w&=&V^\nu \nu_L
+ \frac{v}{\sqrt{2}\Lambda}Y^\dagger N_L
+ \frac{v}{\sqrt{2}\Lambda}\left(\epsilon Y^{\prime \dagger}-
\left(\frac{3\mu+\mu^\prime}{4\Lambda}\right)Y^\dagger\right)N_R^c
\;,
\\
\ell^w_L&=&\ell_L
+\frac{v}{\Lambda}Y^\dagger E_{1L}^-
+\frac{v}{\Lambda}\left(\epsilon Y^{\prime \dagger}-\left(\frac{3\mu+\mu^\prime}{4\Lambda}\right)
Y^\dagger\right)E_{2R}^{+c} \; , \\
\ell^w_R&=&\ell_R \; , \\
\tilde N_L&=&N_R^c-\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)N_L
-\frac{v}{\sqrt{2}\Lambda}\left(\epsilon Y^{\prime}-\frac{\mu}{\Lambda}Y\right)V^\nu \nu_L\; , \\
\tilde N'_L&=&N_L+\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)N_R^c
-\frac{v}{\sqrt{2}\Lambda}Y V^\nu \nu_L \;,
\\
E_L&=&E_{2R}^{+c}
-\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)E_{1L}^-
-\frac{v}{\Lambda}\left(\epsilon Y^{\prime}-\frac{\mu}{\Lambda}Y\right)\ell_L \; , \\
E_R&=&E_{1R}^-
-\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)E_{2L}^{+c}\; , \\
E'_L&=&E_{1L}^-
+\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)E_{2R}^{+c}
-\frac{v}{\Lambda}Y \ell_L \; , \\
E'_R&=&E_{2L}^{+c}
+\left(\frac{\mu-\mu^\prime}{4\Lambda}\right)E_{1R}^- \; .
\end{eqnarray}
From the above relations it follows that the neutral weak interactions
of the light states take the same form as in the SM, while the
charged current interactions involve a $3\times 3$ unitary matrix
$U_{LEP}=V^\nu$ which, after phase redefinitions of the light charged
leptons, can be chosen as
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
U_{LEP}&=&
\begin{pmatrix}
1 & 0 & 0 \\
0 & c_{23} & {s_{23}} \\
0 & -s_{23} & {c_{23}}
\end{pmatrix}
\!\!\!
\begin{pmatrix}
c_{13} & 0 & s_{13} e^{-i\delta_\text{CP}} \\
0 & 1 & 0 \\
-s_{13} e^{i\delta_\text{CP}} & 0 & c_{13}
\end{pmatrix}
\!\!\!
\begin{pmatrix}
c_{12} & s_{12} & 0 \\
-s_{12} & c_{12} & 0 \\
0 & 0 & 1
\end{pmatrix}
\!\!\!
\begin{pmatrix}
e^{-i \alpha}
& 0 & 0 \\
0 & e^{i \alpha} & 0 \\
0 & 0 & 1
\end{pmatrix},
\end{eqnarray}
where $c_{ij} \equiv \cos\theta_{ij}$ and
$s_{ij} \equiv \sin\theta_{ij}$. The angles $\theta_{ij}$ can be
taken without loss of generality to lie in the first quadrant,
$\theta_{ij} \in [0,\pi/2]$ and the phases
$\delta_\text{CP},\; \alpha\in [0,2\pi]$. The leptonic mixing matrix
contains only one Majorana phase because there are only two heavy
triplets and consequently only two light neutrinos are massive while
the lightest one remains massless. Also notice that unitarity
violation in the charged current and flavour mixing in the neutral
current of the light leptons are generated at higher
order~\cite{Schechter:1979bn, Antusch:2006vwa, Abada:2007ux}.
This model is MLFV because one can fully reconstruct the neutrino
Yukawa coupling $Y$ and the combination
$\widehat {Y'}=Y'-\frac{1}{\epsilon}\frac{\mu}{2\Lambda} Y$ from the
neutrino mass matrix~\cite{Gavela:2009cd} (up to two real
normalization factors $y$ and $\hat y'$) as follows:
\begin{equation}
\begin{array}{c|c}
{\rm NO}\; (m_1=0<m_2<m_3) & {\rm IO}\; (m_3=0 < m_1< m_2) \\[+0.3cm]
\hline
r=\frac{\Delta m^2_{21}}{\Delta m^2_{32}} &
r=-\frac{\Delta m^2_{21}}{\Delta m^2_{31}} \\[+0.3cm]
\rho=\frac{\sqrt{1+r}-\sqrt{r}}{\sqrt{1+r}+\sqrt{r}} &
\rho=\frac{\sqrt{1+r}-1}{\sqrt{1+r}+1} \\[+0.3cm]
m_{2,3}=\frac{\epsilon y {\widehat {y'}} v^2}{\Lambda}(1\mp\rho)
&m_{1,2}=\frac{\epsilon y {\widehat {y'}} v^2}{\Lambda}(1\mp\rho)\\[+0.3cm]
Y_{a}=\frac{y}{\sqrt{2}}\left(\sqrt{1+\rho}\ U_{a3}^*+\sqrt{1-\rho}\
U_{a2}^*\right) &
Y_{a}=\frac{y}{\sqrt{2}}\left(\sqrt{1+\rho}\ U_{a2}^*+\sqrt{1-\rho}\
U_{a1}^*\right) \\[+0.3cm]
\widehat {Y^\prime}_{a}= \frac
{\widehat{y^\prime}}{\sqrt{2}}\left(\sqrt{1+\rho}\ U_{a3}^*-
\sqrt{1-\rho}\ U_{a2}^*\right) &
\widehat {Y^\prime}_{a}=
\frac{\widehat {y^\prime}}{\sqrt{2}}\left(\sqrt{1+\rho}\ U_{a2}^*-
\sqrt{1-\rho}\ U_{a1}^*\right)\\
\end{array}
\label{eq:reconsyuk}
\end{equation}
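The reconstruction in Eq.~\eqref{eq:reconsyuk} is straightforward to evaluate numerically. The sketch below builds $U_{LEP}$ and returns $|Y_a|^2/y^2$ for NO; the oscillation-parameter values in the usage example are illustrative round numbers, not the fit of Ref.~\cite{Esteban:2016qun}. By unitarity of $U_{LEP}$ the flavour-summed couplings obey $\sum_a |Y_a|^2/y^2 = 1$.

```python
import numpy as np

def u_lep(t12, t13, t23, delta, alpha):
    """U_LEP = R23 . R13(delta) . R12 . diag(e^{-i a}, e^{i a}, 1),
    with angles in radians, as in the parametrization above."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    P = np.diag([np.exp(-1j * alpha), np.exp(1j * alpha), 1.0])
    return R23 @ R13 @ R12 @ P

def yukawa_no(U, dm21_sq, dm32_sq):
    """|Y_a|^2 / y^2 for normal ordering, from the reconstruction table."""
    r = dm21_sq / dm32_sq
    rho = (np.sqrt(1 + r) - np.sqrt(r)) / (np.sqrt(1 + r) + np.sqrt(r))
    Y = (np.sqrt(1 + rho) * np.conj(U[:, 2])
         + np.sqrt(1 - rho) * np.conj(U[:, 1])) / np.sqrt(2)
    return np.abs(Y) ** 2
```

Scanning the Majorana phase $\alpha$ in this sketch reproduces the qualitative $\alpha$ dependence of $|\tilde Y_e|^2$ and $|\tilde Y_\mu|^2$ shown in Fig.~\ref{fig:yuk}.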
We plot in Fig.~\ref{fig:yuk} the ranges of the Yukawa couplings
$|\tilde Y_e|^2\equiv |Y_1|^2/y^2$ and
$|\tilde Y_\mu|^2\equiv |Y_2|^2/y^2$ obtained by projecting the
allowed ranges of oscillation parameters from the global analysis of
neutrino data \cite{Esteban:2016qun} using Eqs.~\eqref{eq:reconsyuk}.
In the first and second rows we plot the ranges of the Yukawa
couplings as a function of the unknown Majorana phase $\alpha$ while
the lower row shows the correlation between the electron and muon
Yukawa couplings. This figure illustrates the quite different allowed
ranges and correlation of the electron and muon Yukawa couplings in
the two orderings. As a curiosity we notice that in NO the dependence
of the range of $|Y_e|$ on $\alpha$ is driven by the present hint of
$\delta_{\rm CP}\sim 270^\circ$ in the oscillation data analysis
\cite{Esteban:2016qun} because $\alpha$ enters via
${\rm Re}(U_{e3} U_{e2}^*)\propto \cos(\alpha+\delta_{\rm CP})$.
Regarding the interactions of the heavy states, as discussed in
Ref.~\cite{Eboli:2011ia}, lepton number violating couplings appear due
to ${\cal O} (\epsilon, \mu/\Lambda, \mu^\prime/\Lambda)$ mixings and
mass splittings among the heavy states, which are, by hypothesis, very
suppressed in MLFV models. This renders lepton number violating
processes involving the heavy fermions unobservable at the LHC, in
contrast with non-MLFV Type--III see--saw scenarios, for
which $\Delta L=2$ final states constitute a smoking
gun~\cite{Franceschini:2008pz,Bajc:2007zf}. Consequently, in what
follows we concentrate on the lepton number conserving interaction Lagrangian,
with $\Lambda=M$ being the common mass of the heavy states:
\begin{eqnarray}
{\mathcal L}_W=&&-g\left(\overline{E^-_1}\gamma^\mu N W_\mu^- -
\overline{N}\gamma^\mu E^+_2W_\mu^-\right)+h.c. \nonumber \\
&&- g \left(\frac{1}{\sqrt{2}} K_{a}
\overline{\ell_{aL}}\gamma^\mu N_LW_\mu^- +
\tilde K_{a}\overline{\nu_{aL}} \gamma^\mu E^+_{2L} W_\mu^- \right) +h.c.
\label{eq:lw}
\\
{\mathcal L}_Z=&&g c_W
\left(
\overline{E^-_1}\gamma^\mu E^-_1Z_\mu-
\overline{E^+_2}\gamma^\mu E^+_2Z_\mu\right) \nonumber \\
&&+\frac{g}{2 \sqrt{2} c_W}\left(\frac{1}
{\sqrt{2}}\tilde K_{a}\overline{\nu_{aL}}\gamma^\mu N_{L} Z_\mu
+ K_{a}\overline{\ell_{aL}}\gamma^\mu E^-_{1L}Z_\mu \right)+h.c.
\label{eq:lz}\\
{\mathcal L}_\gamma=&& e
\left(
\overline{E^-_1}\gamma^\mu E^-_1A_\mu-
\overline{E^+_2}\gamma^\mu E^+_2A_\mu\right) \\
{\mathcal L}_{h^0}=&&\frac{g M}{\sqrt{2}M_W}\left(
\frac{1}{\sqrt{2}}\tilde K_{a}
\overline{\nu_{aL}} N_{R} +K_{a}\overline{\ell_{aL}}E^-_{1R}\right)+h.c. \; .
\label{eq:lh}
\end{eqnarray}
where $c_W$ stands for the cosine of the weak mixing angle and the
lepton number conserving couplings of the heavy triplet fermions are
\begin{equation}
\begin{array}{ll}
K_a
=-\frac{v}{\sqrt{2} M} {Y_a}^{*} \;\;\; ,&\;\;\;
\tilde K_{a}={U^*_{LEP}}_{ca} K_{c} \; ,
\end{array}
\label{eq:kdef}
\end{equation}
which verify
\begin{eqnarray}
\sum_{a=1}^3|K_a|^2=\sum_{a=1}^3|\tilde K_a|^2=\frac{y^2 v^2}{2 M^2}
\; .
\label{eq:sumk}
\end{eqnarray}
Notice that the flavour structures of the couplings $K$ and $\tilde K$ are
fully determined by the low energy neutrino parameters. Their
strengths are, however, arbitrary as they are controlled by the
normalization factor $y v/M$, while it is the combination
$\epsilon y \widehat {y'}/M$ that is fixed by the neutrino masses.
From these interactions we can obtain the decay widths of the heavy
states~\cite{delAguila:2008hw}:
\begin{eqnarray}
&&\Gamma\left(N\rightarrow\ell_a^-W^+\right)=\frac{g^2}{64\pi}
|K_a|^2\frac{M^3}{M_W^2}
\left(1-\frac{M_W^2}{M^2}\right)\left(1+\frac{M_W^2}
{M^2}-2\frac{M_W^4}{M^4}\right) \; ,
\nonumber\\
&&\Gamma\left(N\rightarrow\nu_a Z\right)=\frac{g^2}{128\pi
c_W^2}|\tilde K_a|^2\frac{M^3}{M_Z^2}
\left(1-\frac{M_Z^2}{M^2}\right)\left(1+\frac{M_Z^2}{M^2}
-2\frac{M_Z^4}{M^4}\right) \; ,
\nonumber\\
&&\Gamma\left(N\rightarrow\nu_a
h^0\right)=\frac{g^2}{128\pi}|\tilde K_a|^2\frac{M^3}{M_W^2}
\left(1-\frac{M_{h^0}^2}{M^2}\right)^2 \; ,
\label{widths}\\
&&\Gamma\left(E_2^+\rightarrow\nu_a
W^+\right)=\frac{g^2}{32\pi}|\tilde K_a|^2\frac{M^3}{M_W^2}
\left(1-\frac{M_W^2}{M^2}\right)\left(1+\frac{M_W^2}{M^2}
-2\frac{M_W^4}{M^4}\right) \; ,
\nonumber\\
&&\Gamma\left(E_1^-\rightarrow\ell_a^-Z\right)=\frac{g^2}{64\pi
c_W^2}|K_a|^2\frac{M^3}{M_Z^2}
\left(1-\frac{M_Z^2}{M^2}\right)\left(1+\frac{M_Z^2}{M^2}
-2\frac{M_Z^4}{M^4}\right) \; ,
\nonumber\\
&&\Gamma\left(E_1^-\rightarrow\ell_a^-h^0\right)=\frac{g^2}{64\pi}
| K_a|^2\frac{M^3}{M_W^2}
\left(1-\frac{M_{h^0}^2}{M^2}\right)^2 \; .
\nonumber
\end{eqnarray}
Therefore, using Eq.~\eqref{eq:sumk} the total decay widths for the
three triplet fermions $F=N,E^-_1,E^+_2$ are
\begin{eqnarray}
&&\Gamma_{F}^{\mbox{\tiny TOT}}
=\frac{g^2 M^3}{64 \pi M_W^2} \frac{y^2 v^2}{M^2}
(1+{\cal F}_F(M))
\label{totalwidths}
\end{eqnarray}
where ${\cal F}_{F}(M)\rightarrow 0$ for $M\gg m_{h^0},M_Z,M_W$.
Consequently, the arbitrary $y$ factor cancels out in the branching
ratios. Furthermore, the branching ratio into a final state with a
charged lepton (neutrino) of flavour $\alpha$ produced at the vertex of
the heavy state decay is proportional to $|\tilde Y_\alpha|^2$
($|(U_{LEP} Y)_\alpha|^2$) times a kinematic factor depending solely
on $M$.
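To make this normalization independence explicit, the following sketch evaluates the $N$ partial widths of Eqs.~(\ref{widths}) numerically. The gauge and mass inputs and the coupling values are illustrative choices of ours (not taken from the text), with $\sum_a |K_a|^2=\sum_a|\tilde K_a|^2$ as required by Eq.~\eqref{eq:sumk}; rescaling all couplings by a common factor, i.e.\ changing $y$, leaves every branching ratio unchanged:

```python
import math

# Illustrative numerical inputs (our assumptions, not from the text)
g, MW, MZ, Mh = 0.652, 80.4, 91.2, 125.0   # masses in GeV
cW2 = MW**2 / MZ**2                        # tree-level cos^2(theta_W)

def width_N_lW(M, K2):
    """Gamma(N -> l_a^- W^+) of Eqs. (widths) for |K_a|^2 = K2."""
    r = MW**2 / M**2
    return g**2 / (64*math.pi) * K2 * M**3 / MW**2 * (1-r) * (1+r-2*r**2)

def width_N_nuZ(M, Kt2):
    r = MZ**2 / M**2
    return g**2 / (128*math.pi*cW2) * Kt2 * M**3 / MZ**2 * (1-r) * (1+r-2*r**2)

def width_N_nuh(M, Kt2):
    return g**2 / (128*math.pi) * Kt2 * M**3 / MW**2 * (1 - Mh**2/M**2)**2

def branching_ratios(M, K2s, Kt2s):
    widths = ([width_N_lW(M, k) for k in K2s] +
              [width_N_nuZ(M, k) for k in Kt2s] +
              [width_N_nuh(M, k) for k in Kt2s])
    total = sum(widths)
    return [w/total for w in widths]

M = 300.0
K2s  = [1e-13, 5e-13, 4e-13]   # hypothetical |K_a|^2,        a = e, mu, tau
Kt2s = [2e-13, 6e-13, 2e-13]   # hypothetical |K~_a|^2 (same total, Eq. (sumk))

br = branching_ratios(M, K2s, Kt2s)
br_rescaled = branching_ratios(M, [10*k for k in K2s], [10*k for k in Kt2s])
# the arbitrary overall normalization y drops out of every branching ratio
assert all(abs(p - q) < 1e-12 for p, q in zip(br, br_rescaled))
```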
Unlike in a general Type--III see--saw model, for which the branching
ratio of $N$ or $E^\pm_i$ into a light lepton of a given flavour can
be negligibly small, in the MLFV Type--III see--saw model the
branching ratios into the different light lepton flavours are fixed by
the neutrino physics and are non-vanishing, as can be seen from
Fig.~\ref{fig:yuk} and Eqs.~(\ref{widths}). This makes the flavour
composition of the final states in any decay chain of the heavy
leptons fully determined for a given neutrino mass ordering.
Consequently, in the narrow width approximation, the only free
parameters in the model are the mass of the states and the Majorana
phase $\alpha$, making the model more unambiguously testable.
Conversely, as discussed in Ref.~\cite{Eboli:2011ia}, in this MLFV
Type-III model the values of the neutrino masses imply a lower bound
on the total decay width of the triplet fermions as a consequence of
the hierarchy between the $L$-conserving and $L$-violating constants,
$y$ and $\epsilon \widehat{y'}$. Their decay length is therefore too
short to produce a detectable displaced decay vertex
signature~\cite{Aad:2008zzm,Chatrchyan:2008zzk}, in contrast with
other see--saw models~\cite{Franceschini:2008pz, Bajc:2007zf,
Perez:2008ha, Li:2009mw}.
In order to simulate the expected signals in this MLFV Type--III
see--saw model we have implemented the Lagrangian in
Eqs.~\eqref{eq:lw}--\eqref{eq:lh} using the package
FeynRules~\cite{Christensen:2008py,Alloul:2013bka}.
We have made the corresponding model files available
online~\cite{frwiki}.
\section{Case study: $pp\rightarrow l l' j j \nu\nu$}
\label{sec:simulation}
In order to study the sensitivity of LHC Run I to MLFV Type--III
see--saw signatures we will use the event topologies studied by ATLAS
in Ref.~\cite{Aad:2015cxa} which contain two charged leptons (either
electrons or muons), two jets from a hadronically decaying $W$ boson
and large missing transverse momentum. ATLAS used these topologies to
search for heavy fermions in the context of the simplified Type--III
see--saw model as implemented in Ref.~\cite{Biggio:2011ja}. They
presented their results as numbers of events for the six different
flavour and charge lepton pair combinations: same sign (SS) and
opposite sign (OS) $ee$, $\mu\mu$ and $e\mu$. Using those they obtain
bounds on the triplet mass which depend on the decay branching ratio
into the different flavours. In particular for triplets decaying
mostly into $\tau$'s no bound can be derived.
On the contrary, as stressed in the previous section, in the MLFV
scenario the flavour and lepton number of the final states produced in
the heavy fermion decay chain is very much constrained. Thus having
the final states classified in the different flavour and charge
combinations makes the result in Ref.~\cite{Aad:2015cxa} best suited
for testing the MLFV scenario as we quantify next.
\subsection{Contributing subprocesses}
Let us start by listing the possible subprocesses contributing to the
different flavour and charge combinations in the MLFV Type--III
see--saw model. They all proceed by the production of a pair of
heavy triplet states and their subsequent decay.
The pair production of the fermion triplets takes place via gauge
interactions, and, therefore, it depends exclusively upon the mass of the new
states. On the other hand, the branching ratios of these fermions into
the final states described in the previous section depend upon the
Yukawa couplings $\tilde{Y}_a$ which vary in the different subprocesses:
\begin{itemize}
\item
$p \; p \; \rightarrow \; E_{1} ^- \; \tilde{N} \, , \quad
( \; E_{1}^- \; \rightarrow \; l_a ^- \; Z \, , \, Z
\;\rightarrow \; \nu \; \nu \; ) \quad
( \; \tilde{N} \; \rightarrow \; l_b ^+ \; W^- \, , \, W^- \;
\rightarrow \; j \; j \;)$: \\[+0.3cm]
Using interactions in \eqref{eq:lw}--\eqref{eq:lz} it is easy to
show that the production cross section for this process is
proportional to $\vert K_a \vert ^2 \, \vert K_b \vert ^2
$. Therefore, as discussed in the previous section, in the narrow
width approximation the production cross section for this process
and its charge conjugated one can be factorized as
\begin{equation}
\sigma_1(M)_{ab}\;
\vert \tilde Y_a \vert ^2 \, \vert \tilde Y_b \vert ^2 .
\label{eq:s1}
\end{equation}
\item $p \; p \; \rightarrow \; E_{2} ^+ \; \tilde{N} \,
, \; ( \; E_{2}^+ \; \rightarrow \; \nu_c \; W^+ \, , \,
W^+ \;\rightarrow \; j \; j \; ) \quad
( \; \tilde{N} \; \rightarrow \; l_a ^+ \; W^- \, ,
\, W^-\; \rightarrow \;l_b^- \; \bar\nu_b \;)$: \\[+0.3cm]
whose production cross section is proportional to
$\vert K_a \vert^2 \vert \tilde{K}_c \vert^2$. So in the narrow
width approximation and after summing over the $\nu_c$ flavour using
Eq.~\eqref{eq:sumk} the cross section for this process and its
charge conjugated one can be written as
\begin{equation}
\sigma_{2a}(M)_{ab}\;\vert \tilde Y_a \vert^2\; .
\label{eq:s2a}
\end{equation}
\item
$p \; p \; \rightarrow \; E_{2} ^+ \; \tilde{N} \, ,
\; ( \; E_{2}^+ \; \rightarrow \; \nu_c \; W^+ \, ,
\, W^+ \; \rightarrow \; l_b^+ \; \nu_b \; ) \quad
( \; \tilde{N} \; \rightarrow \; l_a ^+ \; W^- \, ,
\, W^- \; \rightarrow \; j \;\; j \; )$: \\[+0.3cm]
and the respective charge conjugate process possesses a cross section
proportional to $\vert K_a \vert^2 \vert \tilde{K}_c \vert^2$. As
before, in the narrow width approximation and after summing over the
$\nu_c$ flavour we parametrize the production cross section as
\begin{equation}
\sigma_{2b}(M)_{ab}\;\vert \tilde Y_a \vert^2\; .
\label{eq:s2b}
\end{equation}
\item
$p \; p \; \rightarrow \; E_{2} ^+ \; \tilde{N} \, , \quad ( \;
E_{2}^+ \; \rightarrow \; \nu_c \; W^+ \,\, , \,\, W^+ \;
\rightarrow \; j \; j \; ) \quad ( \; \tilde{N} \;\rightarrow \;
\nu_d \; Z \, ,
\, Z \; \rightarrow \; l_a^- \; l_a^+ \; ) $: \\[+0.3cm]
and its charge conjugate process with cross section proportional to
$\vert \tilde{K}_c \vert^2 \vert \tilde{K}_d \vert^2$ that after
summing over the neutrino flavours $c$ and $d$ reads
\begin{equation}
\sigma_{2c}(M)_{aa} \;.
\label{eq:s2c}
\end{equation}
\item
$p \; p \; \rightarrow \; E_{1}^+ \; E_{1}^- \, , \quad
( \; E_{1}^+ \; \rightarrow \; l_a^+ \; Z \, , \, Z
\; \rightarrow \; j \; j \;) \quad
(\; E_1^- \; \rightarrow \; l_b^- \; Z \, , \, Z
\; \rightarrow \; \nu_c \; \nu_c \;) $\, , \\
$p \; p \; \rightarrow \; E_{1}^+ \; E_{1}^- \, , \quad
( \; E_{1}^+ \; \rightarrow \; l_a^+ \; Z \, , \,
Z \; \rightarrow \; \nu_c \; \nu_c \; ) \quad
(\; E_1^- \; \rightarrow \; l_b^- \; Z \, , \, Z
\; \rightarrow \; j \;j \; ) $ : \\[+0.3cm]
with cross section
\begin{equation}
\sigma_{3}(M)_{ab}\; \vert \tilde Y_a \vert^2
\vert \tilde Y_b \vert^2 \; .
\label{eq:s3}
\end{equation}
\end{itemize}
Altogether the cross section of each OS flavour channel is given by:
\begin{eqnarray}
\sigma^{OS}_{ee}&\equiv&
\left[ \sigma_1(M)_{ee} \,
+\, \sigma_3(M)_{ee}\right] \vert\tilde Y_e \vert^4 \,
+\, \sigma_{2a}(M)_{ee} \vert \tilde Y_e \vert^2
+\, \sigma_{2c}(M)_{ee} \; , \nonumber \\
\sigma^{OS}_{e\mu}&\equiv &
\left[ \sigma_1 (M)_{e\mu} \,
+\,\sigma_1(M)_{\mu e} \,
+\, \sigma_3(M)_{e \mu} \,
+\, \sigma_3(M)_{\mu e} \right]
\vert \tilde Y_e \vert^2 \vert \tilde Y_{\mu} \vert^2 \,\nonumber \\
&& +\, \sigma_{2a}(M)_{e \mu} \vert \tilde Y_e \vert^2
\,+\,\sigma_{2a}(M)_{\mu e}
\vert\tilde Y_{\mu} \vert^2 \; , \label{eq:OS}
\\
\sigma^{OS}_{\mu \, \mu}&\equiv&
\left[\sigma_1(M)_{\mu\mu} \,+\,
\sigma_3(M)_{\mu\mu}\right] \vert \tilde Y_{\mu} \vert^4 \,
+\, \sigma_{2a}(M)_{\mu\mu} \vert\tilde Y_{\mu} \vert^2
\,+\, \sigma_{2c}(M)_{\mu\mu} \;,\nonumber
\end{eqnarray}
while for the SS lepton final states
\begin{eqnarray}
\sigma^{SS}_{ee}&\equiv&
\sigma_{2b}(M)_{ee} \vert \tilde Y_e \vert^2 \; , \nonumber \\
\sigma^{SS}_{e \mu}&\equiv& \sigma_{2b}(M)_{e \mu}
\vert \tilde Y_e \vert^2
+\, \sigma_{2b}(M)_{\mu e} \vert\tilde Y_{\mu} \vert^2 \; , \label{eq:SS}
\\
\sigma^{SS}_{\mu\mu}&\equiv & \sigma_{2b}(M)_{\mu\mu}
\vert\tilde Y_{\mu}\vert^2 \; . \nonumber
\end{eqnarray}
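The flavour bookkeeping of Eqs.~\eqref{eq:OS} and \eqref{eq:SS} can be transcribed directly. In the sketch below the $\sigma$ factors of Eqs.~\eqref{eq:s1}--\eqref{eq:s3} are placeholder numbers (the real ones come from the simulation described next), and all names are our own:

```python
def channel_rates(s1, s2a, s2b, s2c, s3, Y2):
    """OS/SS rates of Eqs. (OS)-(SS); s*: {(a,b): sigma factor},
    Y2: {'e': |Y~_e|^2, 'mu': |Y~_mu|^2}."""
    Ye2, Ym2 = Y2['e'], Y2['mu']
    OS = {
        'ee':   (s1[('e','e')] + s3[('e','e')]) * Ye2**2
                + s2a[('e','e')] * Ye2 + s2c[('e','e')],
        'emu':  (s1[('e','mu')] + s1[('mu','e')]
                 + s3[('e','mu')] + s3[('mu','e')]) * Ye2 * Ym2
                + s2a[('e','mu')] * Ye2 + s2a[('mu','e')] * Ym2,
        'mumu': (s1[('mu','mu')] + s3[('mu','mu')]) * Ym2**2
                + s2a[('mu','mu')] * Ym2 + s2c[('mu','mu')],
    }
    SS = {
        'ee':   s2b[('e','e')] * Ye2,
        'emu':  s2b[('e','mu')] * Ye2 + s2b[('mu','e')] * Ym2,
        'mumu': s2b[('mu','mu')] * Ym2,
    }
    return OS, SS

# toy sigma factors (all set to 1) just to exercise the combinations
pairs = [('e','e'), ('e','mu'), ('mu','e'), ('mu','mu')]
ones = {p: 1.0 for p in pairs}
OS, SS = channel_rates(ones, ones, ones, ones, ones, {'e': 0.2, 'mu': 0.8})
assert abs(OS['ee'] - (2*0.2**2 + 0.2 + 1.0)) < 1e-12
assert abs(SS['emu'] - (0.2 + 0.8)) < 1e-12
```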
\subsection{Simulation of the expected event rates}
In our analysis, we first simulated the above parton level signal processes
with MadGraph 5~\cite{Alwall:2014hca}. We then used
PYTHIA 6.4 ~\cite{Sjostrand:2006za} to generate the parton shower and
hadronization. Finally we performed a fast detector simulation with DELPHES
3~\cite{deFavereau:2013fsa}, with jets reconstructed using the
anti-$k_T$ algorithm with radius $R=0.4$, as implemented in the package
FASTJET~\cite{Cacciari:2011ma}.
In order to reproduce the ATLAS event selection corresponding to the
searches in Ref.~\cite{Aad:2015cxa} we required that the events
contain exactly two leptons (muons and/or electrons), a minimum of two
jets and no b-tagged jet. In the case of OS (SS) leptons the leading
lepton must have transverse momentum ($p_T$) in excess of 100 (70) GeV
with the next-to-leading lepton $p_T$ larger than 25 (40) GeV. In
addition we imposed the invariant mass of the two leptons to be larger
than 130 and 90 GeV for OS and SS events respectively. We also
demanded the $p_T$ of two leading jets to be larger than 60 (40) and
30 (25) GeV for the OS (SS) final state. Moreover, to characterize the
presence of a hadronically decaying $W$ in the event the invariant
mass of the leading jet pair was required to be between 60 and 100
GeV, and for OS events we require the two leading jets to satisfy
$\Delta R_{jj} < 2$. Finally, we only selected events presenting
missing transverse energy in excess of 110 (100) GeV for OS (SS)
events. We implement the above selection in
MadAnalysis5~\cite{Conte:2012fm}.
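As a sketch, the selection just described can be coded as a single predicate; the event-field names and dictionary layout below are our own, and the kinematic quantities are assumed precomputed in GeV:

```python
def passes_selection(ev, same_sign):
    """Sketch of the OS/SS selection cuts described above (GeV units)."""
    if ev['n_leptons'] != 2 or ev['n_jets'] < 2 or ev['n_btag'] > 0:
        return False
    lep1, lep2 = ev['lep_pt']            # leading, next-to-leading lepton
    jet1, jet2 = ev['jet_pt'][:2]        # two leading jets
    if same_sign:
        ok = (lep1 > 70 and lep2 > 40 and ev['m_ll'] > 90 and
              jet1 > 40 and jet2 > 25 and ev['met'] > 100)
    else:
        ok = (lep1 > 100 and lep2 > 25 and ev['m_ll'] > 130 and
              jet1 > 60 and jet2 > 30 and ev['met'] > 110 and
              ev['dr_jj'] < 2.0)
    return ok and 60 < ev['m_jj'] < 100  # hadronic-W mass window

os_event = dict(n_leptons=2, n_jets=2, n_btag=0, lep_pt=(120, 30),
                jet_pt=[80, 40], m_ll=150, met=120, dr_jj=1.5, m_jj=80)
assert passes_selection(os_event, same_sign=False)
assert not passes_selection({**os_event, 'm_jj': 150}, same_sign=False)
```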
In order to tune our calculations we first simulated the signal for the
Type-III see--saw model of Ref.~\cite{Biggio:2011ja} which is the one
used in the ATLAS analysis. By comparing our number of expected events with
the ones obtained by ATLAS in Fig. 2 of~\cite{Aad:2015cxa} for the
OS and SS final states presenting $ee$, $e\mu$ and $\mu\mu$ pairs, we
extract overall multiplicative correction factors for each of these final states
such that our fast simulation agrees with the one of ATLAS.
To validate our procedure we
verified that our tuned Monte Carlo reproduces the missing transverse
momentum distribution presented in Fig. 1 of~\cite{Aad:2015cxa}.
We then apply these correction factors in the evaluation of the expected number
of events of the MLFV type-III see-saw model.
Concerning backgrounds, the dominant contribution comes from the
production of dibosons ($WW$,$WZ$,$ZZ$), $Z$ plus jets, top pairs and
single top in association with a $W$. In addition there are also
background events that stem from the misidentification of leptons. In our
analysis we directly use the background rates estimated by
ATLAS~\cite{Aad:2015cxa}.
Figure~\ref{fig:sigmas} depicts the resulting cross section factors
$\sigma_1$, $\sigma_{2a}$, $\sigma_{2b}$, and $\sigma_3$ introduced in
Eqs.~(\ref{eq:s1})--(\ref{eq:s3}), for the events passing the selection cuts and
after applying the tuning correction factors. The results are shown as a
function of the new fermion mass, for the different lepton flavour
combinations, at a center-of-mass energy of 8 TeV~\footnote{$\sigma_{2c}$
is vanishingly small since the event selection vetoes the presence of on-shell
$Z$'s.}. From this figure
we can see that the four possible leptonic final states ($ee$,
$\mu\mu$, $e\mu$, and $\mu e$) have similar cross section factors for
masses larger than 300 GeV; however, closer to the threshold the
detection and acceptance efficiencies induce differences among the leptonic
final states. In the case of $\sigma_{2a}$ both leptons originate from
the decay of a single triplet fermion, therefore reducing the
acceptance at small masses due to the OS lepton pair invariant mass
cut.
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{sigmas.pdf}
\caption{Cross section factors for the different contributions
to the event topologies as a function of the triplet mass $M$
as defined in Eqs.~\eqref{eq:s1}--\eqref{eq:s3} after inclusion of our
simulation of the selections of Ref.~\cite{Aad:2015cxa} (see text for
details).}
\label{fig:sigmas}
\end{figure}
Weighting these cross section factors with the corresponding combinations of
Yukawa couplings, as in Eqs.~\eqref{eq:OS} and~\eqref{eq:SS}, and multiplying
by the luminosity, we predict the signal event rates in the different flavour
and charge combinations of the final lepton pair.
For example, in Figs.~\ref{fig:neventsno}
and~\ref{fig:neventsio} we present the predicted event rates as a function
of the unknown Majorana phase $\alpha$ for a triplet mass of 300 GeV,
an integrated luminosity of 20.3
fb$^{-1}$, and normal ordering and inverted ordering respectively.
As seen in Fig.~\ref{fig:neventsno} for NO, the expected number
of $ee$ events is very small for both OS and SS
channels. Nevertheless, a considerable number of events containing
muons is expected except for $\alpha/\pi$ around 1. This is so because for NO
$|\tilde{Y}_\mu|$ is much larger than $| \tilde{Y}_e|$ as can be
seen from Fig.~\ref{fig:yuk}. For IO we see in Fig.~\ref{fig:neventsio},
as could be anticipated from Fig.~\ref{fig:yuk}, that
the expectations in all flavour channels vary appreciably with the Majorana
phase $\alpha$. However, the strong correlation between $|\tilde{Y}_e|$ and
$| \tilde{Y}_\mu|$ in IO -- depicted in the right bottom panel of
Fig.~\ref{fig:yuk} -- guarantees a sizable number of events for almost
the entire $\alpha$ range with the signal being dominated by the
channels $e\mu$ and $\mu\mu$ for OS leptons and the channel $e\mu$ for
SS leptons.
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{nevents_NO.pdf}
\caption{Expected number of signal events at $\sqrt{s}=8$ TeV
and an integrated luminosity of 20.3 fb$^{-1}$,
in each of the six flavour-charge
combinations for a triplet mass of $M=300$ GeV
and for neutrino parameters corresponding to the NO.
The conventions are the same as in Fig.~\ref{fig:yuk}.}
\label{fig:neventsno}
\end{figure}
\begin{figure} [h]
\centering
\includegraphics[width=0.8\textwidth]{nevents_IO.pdf}
\caption{Same as Fig.~\ref{fig:neventsno} for IO.}
\label{fig:neventsio}
\end{figure}
\section{Analysis: Results and Discussion}
\label{sec:results}
In order to quantify the bounds on the MLFV Type--III see--saw
scenario we build the likelihood function using the six data points
associated with the events with $ee$, $e\mu$ and $\mu\mu$ leptons of
either SS or OS. As discussed in the previous section, we make direct use
of the corresponding background
estimates by ATLAS, which we read from Fig. 2 and Table I of
Ref.~\cite{Aad:2015cxa} and summarize here for completeness:
\vskip 0.5cm
\begin{tabular}{c|cccccc|cc}
& OS $ee$ & SS $ee$ & OS $\mu\mu$ & SS $\mu\mu$ & OS $e\mu$ & SS $e\mu$
& total OS & total SS \\
\hline
$N^{dat}$ & 9 & 3 & 3 &1 & 13 & 0 & 25 &4 \\
$ N^{bck}$ & $8.5$ & $1$ & $9.5$ & $0.5$
& $13$
& $1.65$ & $31.0\pm 7.7$& $3.15\pm 0.8$
\end{tabular}
\vskip 0.5cm
According to Ref.~\cite{Aad:2015cxa} the reported background errors in
the table and figure include both the statistical and systematic
uncertainties. Comparing these uncertainties, as read off for each
of the six individual channels, with the total reported in the table, we
conclude that the $\sim$ 25\% background uncertainty is strongly
correlated among the different channels, since the total background
uncertainty is very close to the arithmetic sum of the
individual ones, while if they were totally uncorrelated one would
expect it to be their sum in quadrature.
So we build the likelihood function as
\begin{equation}
-2{\cal L}_{6d}=\chi^2_{6d} =\min_{\xi}
\left\{ 2\sum_{i=1,6}\left[ (1+\xi)N^{bck}_i
+N^{mod}_i-N^{dat}_i
+N^{dat}_i \log\frac{N^{dat}_i}{(1+\xi)N^{bck}_i+N^{mod}_i}\right]
+\frac{\xi^2}{0.25^2} \right \}
\label{eq:l6}
\end{equation}
where we account for the background uncertainty by introducing a
single pull $\xi$ with an uncertainty of 25\%\footnote{We have
verified that including several pulls for the different sources of
background, each with a smaller uncertainty, does not have any
significant impact on the results.}. $N^{mod}_i$ is the predicted
contribution to the number of events in channel $i$ from the triplet
production and decays which depends on the triplet mass and the
neutrino parameters as discussed in the previous section. In building
the likelihood \eqref{eq:l6} we have used Poisson statistics to
account for the small number of events in each channel.
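A minimal numerical sketch of this construction (the grid minimization over the pull and all code names are our choices) uses the data and background numbers of the table above and, in the SM limit $N^{mod}_i=0$, reproduces the value $\chi^2_{6d,SM}\simeq 11.8$ discussed below:

```python
import math

# (N_dat, N_bck) for OS ee, SS ee, OS mumu, SS mumu, OS emu, SS emu
channels = [(9, 8.5), (3, 1.0), (3, 9.5), (1, 0.5), (13, 13.0), (0, 1.65)]

def chi2_6d(N_mod=(0,)*6, sigma=0.25):
    """Poisson chi^2 of Eq. (l6), minimised over the background pull xi
    on a simple grid (a sketch; a proper minimiser would do as well)."""
    best = float('inf')
    for i in range(-400, 401):
        xi = i/1000.0
        val = xi**2/sigma**2
        for (n, b), m in zip(channels, N_mod):
            mu = (1 + xi)*b + m
            val += 2*(mu - n + (n*math.log(n/mu) if n > 0 else 0.0))
        best = min(best, val)
    return best

chi2_sm = chi2_6d()           # SM limit: N_mod = 0
assert abs(chi2_sm - 11.8) < 0.2
```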
\begin{figure} [h]
\centering
\includegraphics[width=0.6\textwidth]{chi2min_marg.pdf}
\caption{Triplet mass dependence of the $\chi^2$ functions of the analysis of
$pp\rightarrow l l' j j \nu\nu$ events with $l,l'$ being either
$e$ or $\mu$ of either charge observed in the LHC Run-I in
ATLAS~\cite{Aad:2015cxa} when analyzed in the context of the MLFV Type--III
see--saw model. The full lines correspond to the likelihood
function constructed including the full information
given on flavour and charge of the final states (see Eq.~\eqref{eq:l6})
while the dashed lines are obtained from the analysis of the total
data summing over charge and flavour. The triplet couplings have been
marginalized within the ranges allowed at 95\% CL by the neutrino oscillation
data analysis in Ref.~\cite{Esteban:2016qun} for NO (blue lines)
and IO (purple lines) and for any value of the Majorana phase $\alpha$.}
\label{fig:chi2min}
\end{figure}
We plot as full lines in Fig.~\ref{fig:chi2min} the dependence of
$\chi^2_{6d}$ on the triplet mass after marginalization over the
neutrino parameters (including the unknown Majorana phase $\alpha$)
over the 95\% CL allowed values from the neutrino oscillation analysis
for either NO or IO. The first thing to notice is that in the SM
($N^{mod}_i$=0, which can be inferred from the large $M$ limit in the
figure) we find $\chi^2_{6d,SM}=11.8$, which is a bit high for 6 data
points. This is mostly driven by the OS $\mu\mu$ channel for which 3
events are observed when about 10 are expected in the SM. From this
figure we also read that requiring $\chi^2_{6d}-\chi^2_{6d,SM}<4$ we
can infer an absolute bound on the triplet mass of 300 GeV (375 GeV)
for NO (IO) light neutrino masses.
In order to stress the importance of the flavour and charge
information on the possibility of imposing this bound, we have
constructed the corresponding likelihood function summing the
information from all the channels. As in this case the total number of
observed events is large enough, we assume gaussianity. So we define:
\begin{equation}
\chi^2_{tot}=\frac{(N^{tot,bck}+N^{tot,mod}-N^{tot,dat})^2}
{N^{tot,dat}+(0.25 N^{tot,bck})^2} \; .
\label{eq:l1}
\end{equation}
The dependence of $\chi^2_{tot}$ on the triplet mass after
marginalization over the neutrino parameters (including the unknown
Majorana phase $\alpha$) over the 95\% CL allowed values from the
neutrino oscillation analysis ($\Delta\chi^2_{osc}\leq 4$) for either
NO or IO is shown as dashed lines in Fig.~\ref{fig:chi2min}. As the
deficit in OS $\mu\mu$ is compensated by the slight excesses in other
channels, we find that in this case the SM gives a perfect description
of the total observed event rates ($\chi^2_{tot,SM}$=0.25). The figure
clearly illustrates the relevance of the flavour and charge information,
as in this case the condition $\chi^2_{tot}-\chi^2_{tot,SM}<4$
results in no bound on the triplet mass for NO neutrino masses, while
it rules out only $M<260$ GeV for IO.
The dependence of the bounds on the unknown phase $\alpha$ is
displayed in Fig.~\ref{fig:bounds}. The full red regions are excluded
values of triplet masses in the MLFV scenario at 95\% CL
($\chi^2_{6d}-\chi^2_{6d,SM}>4$) when marginalizing over the
oscillation parameters within the 95\% CL allowed values by the
oscillation analysis in NO (left) and IO (right) for each value of
$\alpha$. The $\alpha$-marginalized bounds discussed above correspond
to the lightest allowed masses in these panels, which for NO ($M>300$
GeV) corresponds to $\alpha=\pi$, while for $\alpha=0$, $2\pi$ the bound
strengthens to $M>480$ GeV. For IO the dependence of the bound on
$\alpha$ is weaker. The marginalized bound $M>380$ GeV
corresponds to $\alpha\sim 3\pi/4,3\pi/2$, but the bound stays close to
that value for almost all values of $\alpha$. The strongest bound,
$M>430$ GeV, also corresponds to $\alpha=0$, $2\pi$.
\begin{figure} [h]
\centering
\includegraphics[width=0.8\textwidth]{bounds.pdf}
\caption{95\% excluded triplet mass in the MLFV Type--III see--saw scenario
as a function of the unknown phase $\alpha$ from the analysis of
$pp\rightarrow l l' j j \nu\nu$ events with $l,l'$ being either
$e$ or $\mu$ of either charge observed in the LHC Run-I in
ATLAS~\cite{Aad:2015cxa}. The full regions correspond to the likelihood
function constructed including the full information
given on flavour and charge of the final states (see Eq.~\eqref{eq:l6})
while the hatched ones are obtained from the analysis of the total
data summing over charge and flavour (see Eq.~\eqref{eq:l1}).
The triplet couplings have been
marginalized within the ranges allowed at 95\% CL by the neutrino oscillation
data analysis in Ref.~\cite{Esteban:2016qun} for NO (left)
and IO (right).}
\label{fig:bounds}
\end{figure}
The hatched regions are the corresponding constraints obtained using
only the information on the total number of events
($\chi^2_{tot}-\chi^2_{tot,SM}>4$) summed over flavour and charge of
the final leptons. This figure illustrates again how using the
flavour and charge information allows one to impose stronger bounds on
this scenario, in particular making it possible to rule out triplet masses
irrespective of $\alpha$ for both orderings, while for NO no bound can
be imposed for $70^\circ \lesssim \alpha \lesssim 250^\circ$ if
only the total number of events is considered.
In summary, we have shown how the analysis of the events containing
two charged leptons (either electrons or muons), two jets from a
hadronically decaying $W$ boson in Run I with the ATLAS detector
~\cite{Aad:2015cxa} can be used to impose constraints on the MLFV Type
III see--saw scenario. Because of MLFV, the expected event rates in
the different flavour and charge combinations of the two leptons are
constrained by the existing neutrino data, so the bounds cannot be
evaded. For this reason it is possible to use these data to rule out
this scenario for triplet masses lighter than 300 GeV at 95\% CL,
irrespective of the neutrino mass ordering and of the value of the
unknown Majorana phase parameter. The same analysis allows one to rule out
triplet masses up to 480 GeV at 95\% CL for NO and
$\alpha=0$, $2\pi$. We have stressed and quantified how the information
on the flavour and charge of the produced leptons is important
for maximal sensitivity to MLFV.
We finish by commenting that extended sensitivity to MLFV with heavier
triplets should be attainable with the data already accumulated in
Run II, in the same or other event topologies, for example through the
analysis of the multilepton final states in CMS in
Ref.~\cite{CMS:2017wua}, which, so far, has been performed only in the
context of the simplified Type--III see--saw model. Nevertheless, as
previously stressed, to do so it is important to make use of the
flavour and charge of the final state leptons, which have not been made
public yet. To this aim, we have made available the model files
for the MLFV Type-III see-saw model~\cite{frwiki}.
\acknowledgments M.C.G-G wants to thank her NUFIT collaborators,
I. Esteban, M. Maltoni, I. Martinez and T. Schwetz, for their generous
contribution of the results from the oscillation data analysis used in
this article. She also wants to thank the USP group for their
hospitality during the final stages of this work. This work is
supported in part by Conselho Nacional de Desenvolvimento
Cient\'{\i}fico e Tecnol\'ogico (CNPq) and by Funda\c{c}\~ao de Amparo
\`a Pesquisa do Estado de S\~ao Paulo (FAPESP) grants 2012/10095-7 and
2017/06109-5, by USA-NSF grant PHY-1620628, by EU Networks FP10 ITN
ELUSIVES (H2020-MSCA-ITN-2015-674896) and INVISIBLES-PLUS
(H2020-MSCA-RISE-2015-690575), by MINECO grant FPA2016-76005-C2-1-P
and by Maria de Maetzu program grant MDM-2014-0367 of ICCUB.
\bibliographystyle{JHEP}
\section{Introduction}
Equilibrium statistical physics has been extremely successful, while
the accurate computation of physical averages in non-equilibrium systems
remains a great challenge both theoretically and practically, due to
the intrinsic difficulty of procuring the correct statistical weight
of the various states based on the equations of motion that govern the
system evolution~\cite{cvt89b,hao90b}. From a dynamical systems point of
view, the statistical weight is proportional to the natural measure
of states in the phase space, which often is non-smooth or even
singular in a chaotic system~\cite{sinai76,frisch} and thus defies
an accurate representation. Fortunately, periodic orbit theory (POT)
avoids a direct description of the possibly fractal measure by
expressing phase space averages in terms of averages on periodic
orbits or cycles and thus is a powerful way for reliable and
accurate characterization~\cite{DasBuch,gutbook,so96per,mic97un} of
a nonlinear chaotic system. The associated cycle expansion of
spectral determinants or dynamical zeta functions orders cycles in a
hierarchical way such that dynamical averages are dominated by a few
short cycles and the longer ones give decreasing corrections. For a
uniformly hyperbolic system with finite symbolic dynamics, the
corrections decrease exponentially or even
super-exponentially~\cite{cexp,hhrugh92,hat95rat}. However, for
non-hyperbolic systems, the convergence could be extremely poor,
which severely limits the application of cycle
expansions~\cite{DasBuch}.
Real physical systems are non-hyperbolic in different cases. For
example, in an intermittent system, typical trajectories alternate
between regular and chaotic motions in an irregular way and thus
cause non-hyperbolicity~\cite{carl97int,art03int}. A milder type of
non-hyperbolicity is created by the strong contraction at specific
points such as homoclinic tangencies in the H\'enon map or critical
points in 1-d maps~\cite{cexp,rob90rec}. As a consequence, the
natural measure becomes non-analytic and thus the nice shadowing
property among cycles that is necessary for fast convergence of
cycle expansions fades out. Poles appear near the origin of the
complex plane in the dynamical zeta function and the spectral
determinant, which spoils the polynomial approximation used
in the normal cycle expansion. To compute averages with fair
accuracy, many cycles are needed, which usually requires an
unaffordable amount of resources. Thus, how to accelerate the
convergence of cycle expansions in the presence of non-hyperbolicity
is a key problem in practice.
Several accelerating schemes have been proposed based on the
analyticity of the spectral functions. One idea is to identify and
remove the poles that are near the origin and thus expand the radius
of convergence. In \cite{cexp,rob90rec,aur90con}, the dominant terms
in the tail of the expansion are estimated and summed up to
approximately determine the leading pole. More accurate estimation
is obtained by using Pad{\' e} approximation, which is valid not
only for computing the leading pole but also for seeking other
ones~\cite{eck93res,main99sem}. An interesting consequence of
analyticity of the underlying dynamics is the existence of infinite
sum rules which signal strong correlations among periodic orbits.
These exact relations show signs of information redundancy embedded
in the whole set of periodic orbits and can be utilized to
accelerate the convergence of cycle expansions~\cite{sun99per}. In
certain cases, analyticity can be used to derive the spectral
function with no resort to periodic orbits and thus implies a
potential alternative route to the spectrum computation. But so far,
it has succeeded only for several very specific maps and is hard to
generalize to other examples~\cite{cv98bey}.
In this paper, we employ a geometric picture of the cycle expansion
to treat the convergence problem for 1-d maps~\cite{skeleton,inv}.
Maps with critical points have a natural measure with singularities,
due to the strong contraction around critical points. This
contraction also deteriorates the dynamical shadowing between cycles
of different lengths and thus leads to a slow convergence of cycle
expansions. So, the singularity in the natural measure is an
effective indicator of unbalance of cycle weights and signals a
small radius of convergence. One idea for expediting convergence is
thus to identify and then clear out singularities in the natural
measure. In the current paper, we achieve this by properly designing
coordinate transformations such that the resulted conjugate map has
a natural measure with no singularity. The computation of dynamical
averages in the original map can be efficiently done with
counterparts in the conjugate map since the convergence is much
improved in the new map.
In the following, after a brief review of periodic orbit theory in
Section~\ref{sec:pot}, we discuss in Section~\ref{sec:Icce} the
convergence of the dynamical zeta function and the spectral
determinant for maps with critical points. A comparison between the
two spectral functions is made and the importance of hyperbolicity
for efficient calculation is emphasized. With a description of the
geometric significance of the truncation in cycle expansions, our
accelerating scheme is presented and tested on several examples. In
Section~\ref{sec:sum}, we summarize the paper and discuss the
existing problems and possible directions for further
investigation.
\section{Periodic orbit theory \label{sec:pot}}
More often than not, dynamical averages are conveniently computed
via time averaging,
\begin{equation}
\overline{a(x_0)}=\frac{1}{N} \sum_{i=0}^{N-1} a(x_i)\,,\label{eq:ta}
\end{equation}
where $x_0 \to x_1 \to \cdots x_i \to \cdots x_N$ is an itinerary
generated by the map $f(x)$ and $N$ is a large number. Time
averaging is easy to carry out but hard to do with high accuracy. In the
presence of non-hyperbolicity, its convergence is very slow and the
result becomes unreliable. To better understand the dynamics and enable
more efficient calculation, the phase space average $\langle a
\rangle$ is introduced, which has the nice property $\langle a \rangle
= \overline{a(x_0)}$ in an ergodic system. This enables a geometric picture
of the averaging process, which will be explained
in more detail below.
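As a concrete illustration of Eq.~\eqref{eq:ta}, the sketch below (our example, not one of the maps studied later) time-averages $a(x)=x$ along an orbit of the fully chaotic logistic map $f(x)=4x(1-x)$, for which the ergodic average is exactly $1/2$:

```python
def time_average(a, f, x0, N, transient=1000):
    """Estimate the time average of observable a along an orbit of f."""
    x = x0
    for _ in range(transient):      # discard the initial transient
        x = f(x)
    total = 0.0
    for _ in range(N):
        total += a(x)
        x = f(x)
    return total / N

f = lambda x: 4.0*x*(1.0 - x)       # fully chaotic logistic map
avg = time_average(lambda x: x, f, 0.1234, 200_000)
assert abs(avg - 0.5) < 0.02        # exact ergodic average is 1/2
```

The slow, roughly $1/\sqrt{N}$, convergence of such estimates is precisely the limitation discussed above.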
For an ergodic map $f(x)$, as $n \to \infty$ any smooth initial
measure will, under the map action, approach an invariant measure,
called the natural measure. Formally,
\begin{equation}
\rho(x) = \lim_{n\to \infty} \int_{\mathcal{M}} dy \delta(x-f^n(y))
\rho_0(y) \,,
\end{equation}
where $\rho(x)$ is the natural measure and $\rho_0(y)$ is an initial
smooth measure. With the natural measure, the average $\langle a
\rangle $ can be obtained by
\begin{equation}
\langle a \rangle = \int_{\mathcal{M}} a(x) \rho(x) dx \,,
\end{equation}
which is the standard way of computing averages in statistical
physics. In a chaotic system, however, the measure $\rho(x)$ is
often singular and supported on a fractal set, and hence hard to
obtain; this motivates the introduction of periodic orbit theory.
For a map $f: \mathcal{M} \to \mathcal{M}$ and an observable $a(x)$,
we define the evolution operator $\mathcal{L}^n$ by $(\mathcal{L}^n
g)(y) = \int_{\mathcal{M}} \mathcal{L}^n(y,x) g(x) dx$ for any
function $g$. The kernel is
\begin{equation}
\mathcal{L}^n(y,x)=\delta(y-f^n(x))e^{\beta A^n}\,,
\end{equation}
where $\beta$ is an auxiliary variable and $A^n =
\sum_{k=0}^{n-1}a(f^k(x))$. The average $\langle a \rangle $ or
other dynamical properties can be conveniently obtained by virtue of
the spectrum of $\mathcal{L}$. Suppose the leading eigenvalue of
$\mathcal{L}$ is $e^{s_0}$, then we have~\cite{DasBuch}
\begin{equation}
\langle a \rangle = \left.\frac{\partial s_0}{\partial \beta}\right|_{\beta=0}\,. \label{eq:aver}
\end{equation}
Specifically, when $\beta=0$, we have
$\mathcal{L}^n(y,x)=\delta(y-f^n(x))$, which is the kernel of the
so-called Perron-Frobenius operator. The escape rate of a dynamical
system, which we denote by $\gamma$, can be obtained by computing
the leading eigenvalue of this operator
\begin{equation}
\gamma = - s_0\,.
\end{equation}
The eigenvalues of the evolution operator $\mathcal{L}$ can be
detected with the help of the spectral determinant, which is related
to the trace of $ \mathcal{L}$ and thus to the periodic orbits by
the identity
\begin{equation}
\ln \det(1-z\mathcal{L}) = \mathrm{Tr} \ln(1-z\mathcal{L})\,.
\end{equation}
For one-dimensional maps, detailed manipulation shows that~\cite{DasBuch}
\begin{equation}
\det(1-z\mathcal{L}) = \exp(-\sum_p \sum_{r=1}^{\infty}\frac{1}{r}\frac{z^{n_pr}e^{r\beta A_p}}{|1-\Lambda_p^r|})\,,
\end{equation}
where $p$ denotes prime cycles, which are not repeats of shorter
ones, $\Lambda_p$ is the stability eigenvalue of cycle $p$, and
$n_p$ is its length. In most practical computations, we are only
interested in the leading eigenvalue. Obviously, the smallest
positive zero of the above-defined spectral determinant is
$e^{-s_0}$, the inverse of the leading eigenvalue of $\mathcal{L}$.
In view of Eq.~(\ref{eq:aver}), we are able to compute dynamical
averages with the spectral determinant.
The leading eigenvalue can alternatively be obtained from a simpler spectral function---the dynamical zeta function
\begin{equation}
\frac{1}{\zeta} = \prod_p(1-t_p)\,,\qquad t_p= \frac{1}{|\Lambda_p|}z^{n_p}e^{\beta A_p}\,.
\end{equation}
It can be shown that $\frac{1}{\zeta}$ is a zeroth-order
approximation of the spectral determinant which preserves the
smallest positive zero. Most often, however, the two functions have
different analytic properties.
Practically, to evaluate the zeros, we expand the spectral
determinant or the dynamical zeta function as a power series in $z$
and obtain a polynomial approximation through truncation. Such a
power series expansion is called a cycle expansion. For example, for
a one-dimensional map with complete binary symbolic dynamics, the
dynamical zeta function can be expanded as
\begin{equation}
\begin{array}{lll}
\frac{1}{\zeta} & = & 1-t_0-t_1-[(t_{01}-t_0t_1)]\\
& & -[(t_{001} - t_{01}t_0)+(t_{011}-t_{01}t_1)] -\cdots\,.
\end{array}
\label{eq:ce}
\end{equation}
If we keep only the terms explicitly shown in Eq.~(\ref{eq:ce}), its
cycle expansion is truncated at cycle length $3$ and results in a
polynomial in $z$ of degree $3$. The linear term is the fundamental
contribution which gives the dominant part of the expansion. Higher
order terms are curvature corrections, which consist of
contributions from prime cycles such as $t_{01}$ and from
pseudo-cycles, i.e., combinations of prime cycles such as $t_0t_1$.
The cancellation between cycles and pseudo-cycles signals shadowing
and smoothness of the underlying dynamics, and results in an
exponential decrease of the curvature corrections when uniform
hyperbolicity is assumed. In the presence of non-hyperbolicity,
however, there is no such cancellation, and as a result the cycle
expansion converges very slowly. In the current paper, we restore
this cancellation by a dynamical conjugacy under certain
circumstances.
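The shadowing cancellation can be checked numerically on a uniformly hyperbolic example. The sketch below (our own illustration) computes, for the repeller $f(x)=6x(1-x)$ used in the next section, the fundamental terms $t_0+t_1$ and the curvature correction $t_{01}-t_0t_1$ at $z=1$, $\beta=0$; the quadratic satisfied by the 2-cycle points is the standard one for the logistic family. The curvature term comes out some fifty times smaller than the fundamental part.

```python
import numpy as np

# Cycle weights of f(x) = A x(1-x), A = 6: fixed points x = 0 and
# x = 1 - 1/A, and the 2-cycle, whose points solve
# A^2 x^2 - A(A+1) x + (A+1) = 0 (standard for the logistic family).
A = 6.0
df = lambda x: A * (1 - 2 * x)            # f'(x)

L0, L1 = df(0.0), df(1.0 - 1.0 / A)       # stabilities of the fixed points
r = np.roots([A**2, -A * (A + 1), A + 1])
L01 = df(r[0]) * df(r[1])                 # stability of the 2-cycle (= -20)

t0, t1, t01 = 1 / abs(L0), 1 / abs(L1), 1 / abs(L01)
print(t0 + t1)               # fundamental terms: 5/12 ~ 0.4167
print(abs(t01 - t0 * t1))    # curvature correction: 1/120 ~ 0.0083
```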
\section{Improving the convergence of cycle expansions \label{sec:Icce}}
\subsection{Notes on numerical computation}
To calculate dynamical averages, we need to build the truncated
version of the dynamical zeta function and the spectral
determinant. The detailed explanation for an efficient computation
can be found in~\cite{DasBuch}, which is omitted here for brevity.
With a truncation length $N$, we drop all cycles longer than $N$.
To study the convergence of cycle expansions, as an example, we
will evaluate the escape rate and other dynamical averages with the
truncated dynamical zeta function and spectral determinant. Also, we
will check how the computational error of physical averages depends
on the truncation length. However, except in very few cases, we
cannot obtain the exact average values. Therefore, when estimating
errors for truncation lengths no larger than $N_{\max}$, we use the
averages obtained with truncation length $N_{\max}+1$ as the
``exact'' values. Figures showing this dependence use a logarithmic
scale on the ordinate.
The natural measure is computed by map iterations. We choose a
random initial point and iterate it many times, usually $10^7$ if
not specified otherwise. Then, by counting the times that the point
enters a small interval, we get a probability distribution, which is
a numerical approximation of the natural measure for an ergodic
system. As the maps discussed in this paper are all ergodic, this
method, though not very accurate in some cases, gives a simple rough
picture of the natural measure. In addition, physical averages are
easily computed from the same iterations via Eq.~(\ref{eq:ta}).
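For the logistic map $f(x)=4x(1-x)$ the natural measure is known in closed form, $\rho(x)=1/\big(\pi\sqrt{x(1-x)}\big)$, so the histogram procedure just described can be tested directly. The sketch below (our own illustration; the bin count and iteration number are arbitrary choices) compares the empirical histogram with the analytic density:

```python
import numpy as np

# Histogram of a long orbit of f(x) = 4x(1-x), compared with the
# known invariant density rho(x) = 1/(pi*sqrt(x(1-x))).
rng = np.random.default_rng(0)
x = rng.random()
for _ in range(1000):                     # discard the transient
    x = 4 * x * (1 - x)

n_iter = 10**6
orbit = np.empty(n_iter)
for i in range(n_iter):
    x = 4 * x * (1 - x)
    orbit[i] = x

hist, edges = np.histogram(orbit, bins=50, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = 1 / (np.pi * np.sqrt(centers * (1 - centers)))
# Away from the singular endpoints the histogram tracks rho closely
print(np.max(np.abs(hist[1:-1] - rho[1:-1]) / rho[1:-1]))
```

The largest relative deviation in the interior bins is at the percent level; only the two endpoint bins, which straddle the singularities, deviate noticeably.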
\subsection{Comparison of the dynamical zeta function and the spectral determinant}
The escape rate and other dynamical averages may be evaluated with
the dynamical zeta function or the spectral determinant. However,
the convergence rates of the two methods are quite different, owing
to the different radii of convergence of their cycle expansions. To
show this, we calculate the escape rate of the map
$ f(x) = 6x(1-x),\, \mathcal{M}=[0,1],\,f: \mathcal{M} \to
\mathcal{M}$ with the dynamical zeta function and the spectral
determinant. The results are listed in TABLE~\ref{ta:2}.
\begingroup
\squeezetable
\begin{table}[htp]
\begin{tabular}{|l|l|l|}
\hline
N & $ \gamma(\frac{1}{\zeta})$ & $ \gamma(\det(1-z\mathcal{L}))$ \\
\hline
1 &0.87& 0.9\\
\hline
2 & 0.83 & 0.83 \\
\hline
3 & 0.83151 & 0.831492\\
\hline
4 & 0.831492 & 0.831492987\\
\hline
5 & 0.831493012 & 0.831492987487621\\
\hline
6 & 0.831492987 & 0.831492987487621617307\\
\hline
7 & 0.8314929875 & 0.8314929874876216173072762950\\
\hline
8 & 0.831492987487 & 0.83149298748762161730727629503691\\
\hline
9 & 0.8314929874876 & \\
\hline
10 & 0.83149298748762 & \\
\hline
\end{tabular}
\caption{Escape rate obtained by the dynamical zeta function $\frac{1}{\zeta}$ and the spectral determinant $\det(1-z\mathcal{L})$ for the map $f(x)=6x(1-x)$ on the interval $[0,1]$.}
\label{ta:2}
\end{table}
\endgroup
According to the table, both methods converge fast, thanks to the
nearly perfect cancellation between prime and pseudo-cycles. Thus
very accurate results can be obtained with only a few short prime
cycles. Moreover, the results computed with the spectral determinant
converge much more quickly than those from the dynamical zeta
function, implying a difference in their analyticity.
\begin{figure}[htp]
\subfigure[the error of the escape rate computed with the dynamical zeta function]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-6xl1-xr.eps}}
\subfigure[the error of the escape rate computed with the spectral determinant]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{sd-er-6xl1-xr.eps}}
\caption{The error of the escape rate for the map $f(x)=6x(1-x)$ computed with (a) the dynamical zeta function and (b) the spectral determinant.}
\label{fig:1}
\end{figure}
FIG.~\ref{fig:1} shows the dependence of the error in the computed
escape rate on the truncation length $N$ with the dynamical zeta
function and the spectral determinant. It is clear that the
logarithm of the error decreases linearly for the dynamical zeta
function and super-linearly for the spectral determinant, which
suggests an exponential and a super-exponential decrease in the
error itself, respectively. The reason for this difference is that
the spectral determinant is analytic over the whole complex plane,
while the dynamical zeta function is only analytic in a region with
a finite radius. Thus, the coefficient of the $N$th-order term
decreases super-exponentially with $N$ in the spectral determinant,
and exponentially in the dynamical zeta function. From a purely
algebraic point of view, the rate at which the coefficient of the
$N$th-order term decreases determines the convergence rate.
\subsection{The influence of hyperbolicity }
Having compared the convergence rate of the dynamical zeta function
and the spectral determinant for the map $ f(x) = Ax(1-x)$ with
$A=6$, we check how the value of $A$ influences the convergence
rate.
First, we set $A$ equal to $5$ and repeat the above computation for
the escape rate; the results are shown in FIG.~\ref{fig:2}. We see
that, though a little slower than in the $A=6$ case, both methods
converge fast---the dynamical zeta function converges nearly
exponentially and the spectral determinant again exhibits a
beautiful super-exponential convergence. It appears as if the change
of $A$ had little effect on the convergence.
However, if we set $A$ equal to $4$, a dramatic change happens to
the convergence. As shown in FIG.~\ref{fig:3}, the dynamical zeta
function with cycles up to length $15$ gives an error of
$10^{-5}$, while in the $A=5$ case, the error is $ 10^{-15}$.
Moreover, the results obtained by the spectral determinant lose the
super-exponential convergence and exhibit only an exponential one.
This phenomenon implies that, in this special case, the spectral
determinant may no longer be an entire function.
\begin{figure}[htp]
\subfigure[the error of the escape rate computed with the dynamical zeta function]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-5xl1-xr.eps}}
\subfigure[the error of the escape rate computed with the spectral determinant]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{sd-er-5xl1-xr.eps}}
\caption{The error of the escape rate for the map $f(x)=5x(1-x)$ computed with (a) the dynamical zeta function and (b) the spectral determinant.}
\label{fig:2}
\end{figure}
\begin{figure}[htp]
\subfigure[the error of the escape rate computed with the dynamical zeta function ]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-4xl1-xr.eps}} \quad
\subfigure[the error of the escape rate computed with the spectral determinant]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{sd-er-4xl1-xr.eps}}
\caption{The error of the escape rate for the map $f(x)=4x(1-x)$ computed with (a) the dynamical zeta function and (b) the spectral determinant.}
\label{fig:3}
\end{figure}
\begin{figure}[htp]
\subfigure[the error of the escape rate computed with the dynamical zeta function]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-4.001xl1-xr.eps}} \quad
\subfigure[the error of the escape rate computed with the spectral determinant]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{sd-er-4.001xl1-xr.eps}}
\caption{The error of the escape rate for the map $f(x)=4.001x(1-x)$ computed with (a) the dynamical zeta function and (b) the spectral determinant.}
\label{fig:4}
\end{figure}
If we increase $A$ from $4$ to $4.001$, the convergence of the
expansion improves dramatically, as depicted in FIG.~\ref{fig:4}:
the super-exponential convergence of the spectral determinant is
restored. An apparent property that sets the $A=4$ map apart is that
the image of the critical point falls within the interval $[0,1]$,
which is known to be the cause of the slow
convergence~\cite{DasBuch}. Here, we study this phenomenon in
greater detail and later design a technique to counter its effect.
To show how the critical point influences the convergence of cycle
expansions, we use the dynamical zeta function to calculate the
escape rate for maps with a higher-order critical point. One general
form of such maps is
$f(x)=1-|2x-1|^k,\,x\in[0,1]$~\cite{det09sto}.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cmap.eps}
\caption{The graph of $f(x)=1-(2x-1)^k,\,k=2,4,6$.}
\label{fig:5}
\end{figure}
In FIG.~\ref{fig:5}, the profile of the map is portrayed for
different $k$. As the order $k$ increases, the top of the map
becomes flatter and flatter. The error of the escape rate obtained
from the dynamical zeta function is displayed in FIG.~\ref{fig:6}
and FIG.~\ref{fig:3}(a), where we can see that the convergence is
poorer for maps with a higher-order critical point. To understand
why the dynamical zeta function and the spectral determinant fail
for maps with critical points, we need a clear picture of the nature
of the approximation made when truncating the dynamical zeta
function.
\begin{figure}[htp]
\subfigure[the error of the escape rate for the map $f(x)=1-(2x-1)^4$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-1-l2x-1rp4.eps}} \quad
\subfigure[the error of the escape rate for the map $f(x)=1-(2x-1)^6$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{zf-er-1-l2x-1rp6.eps}}
\caption{The error of the escape rate for maps with a higher-order critical point computed with the dynamical zeta function.}
\label{fig:6}
\end{figure}
\subsection{The significance of the truncated dynamical zeta function}
As the number of prime cycles is infinite, a truncation of the
spectral functions is needed for efficient computation. For example,
for the unimodal maps discussed above, if we truncate at the
shortest cycles, the dynamical zeta function is $ \frac{1}{\zeta} =
1 - t_0- t_1 $. This truncated dynamical zeta function leaves out
all the curvature corrections and is a rough approximation of the
original map. One may then ask: is there a linear map having
$\frac{1}{\zeta}= 1 - t_0 -t_1 $ as its first-order truncated
dynamical zeta function? A simple example is a piecewise linear map
consisting of two branches, constructed as follows: first, find the
fixed points of the map along with the slopes at these points; then
draw line segments that pass through the fixed points and are
tangent to the graph of the map.
If we want to construct a piecewise linear map whose dynamical zeta
function is identical to that of the original map up to order $N$,
we need the positions and slopes of all the periodic points of
period not larger than $N$. FIG.~\ref{fig:7} displays such a
piecewise linear map for the truncation up to length two, while the
original map is $f(x)=4x(1-x)$.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{plm.eps}
\caption{A piecewise linear map approximation of the map $f(x)=4x(1-x)$.}
\label{fig:7}
\end{figure}
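As a back-of-envelope check (our own illustration), the first-order truncation for $f(x)=4x(1-x)$ can be evaluated directly: the fixed points are $x=0$ with slope $4$ and $x=3/4$ with slope $-2$, so $1/\zeta = 1 - z/4 - z/2$, and the resulting escape-rate estimate $\gamma=\ln(4/3)$ is far from the exact value $0$, showing how crude the two-branch piecewise linear model is for this non-hyperbolic map.

```python
import math

# First-order truncation 1/zeta = 1 - t0 - t1 for f(x) = 4x(1-x).
# Fixed points: x = 0 with slope 4 and x = 3/4 with slope -2.
L0, L1 = 4.0, -2.0
z0 = 1.0 / (1.0 / abs(L0) + 1.0 / abs(L1))  # smallest zero of 1 - t0 - t1
gamma = math.log(z0)                        # escape-rate estimate
print(gamma)        # ~0.2877, while the exact escape rate is 0
```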
For the piecewise linear map, the curvature corrections of order
larger than $N$ are nearly zero due to its linearity; therefore, we
regard it as a prototypical geometric model of the $N$th-order
truncated dynamical zeta function. The dynamical averages computed
with the $N$th-order truncated dynamical zeta function are very
close to those given by the piecewise linear map. Hence, how well
the piecewise linear map approximates the original map determines
the computational accuracy of the truncation.
We know that the average obtained from the dynamical zeta function
is the phase space average $\langle a \rangle = \int_\mathcal{M} dx
\rho(x) a(x)$, where $\rho(x)$ is the natural measure. For example,
for the tent map, the natural measure is uniform everywhere.
However, for the map with critical points, the natural measure has
singularities as portrayed in FIG.~\ref{fig:8}.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-4xl1-xr.eps}
\caption{The natural measure of the map $f(x)=4x(1-x)$.}
\label{fig:8}
\end{figure}
It is precisely these singularities that lead to the slow
convergence of the cycle expansion, because the piecewise linear map
obtained from the truncated dynamical zeta function cannot capture a
natural measure with singularities. Though the natural measure of
the piecewise linear map gets closer and closer to that of the
original map with increasing truncation length, it fails at the
singularities. As a result, the average obtained from the piecewise
linear map, or equivalently, from the truncated dynamical zeta
function of a non-hyperbolic system, does not converge as fast as in
the uniformly hyperbolic case.
\subsection{Accelerating convergence in the presence of a critical point}
The singularities in the natural measure cause the slow convergence of the dynamical zeta function. A natural cure is to remove the singularities by a coordinate transformation, which results in a new map conjugate to the original one but whose natural measure is free of singularities. Consequently, we are able to accelerate the convergence of the cycle expansion with the help of the conjugate map.
\subsubsection{Clearing out the singularities \label{sec:cos}}
For maps with a critical point, such as $f(x)= 1-|2x-1|^k,\,k=2,4,6,\cdots$, the probability distribution has an algebraic form in the neighborhood of the singularity, more explicitly
\begin{equation}
\rho(x) \sim \frac{1}{x^{\frac{k-1}{k}}}\,\,\textrm{near $x=0$}\,,
\end{equation}
where $ k$ is the order of the critical point. For example, the natural measure of the map $f(x)=1-(2x-1)^2$ has two singular points: $0$ and $1$, with the probability distribution $\rho(x) \sim \frac{1}{\sqrt{x}}$ near $ x=0$ and $ \rho(x) \sim \frac{1}{\sqrt{1-x}}$ near $ x=1$.
To remove a singularity, we need a coordinate transformation that stretches the coordinate axis around it. A homeomorphism $ h:\mathcal{M} \to \mathcal{M}$ of the form $h(x)\propto{|x-x_0|}^{1/k}$ in the neighborhood of the singularity $ x=x_0$ achieves this goal. For the map $f(x)=1-(2x-1)^2=4x(1-x),\,[0,1] \to [0,1]$, an appropriate transformation is $h(x)=\frac{2}{\pi} \arcsin{\sqrt{x}}$, which has the desired form in the neighborhood of $0$ and $1$.
With the coordinate transformation, the original map is changed to its conjugate. Denoting the original map by $f(x)$ and the conjugate by $g(x')$, we have the relationship that $f=h^{-1} \circ g \circ h$, or equivalently, $g=h \circ f \circ h^{-1}$. The map $f(x)=1-(2x-1)^2=4x(1-x)$ and its conjugate are displayed in FIG.~\ref{fig:9}.
\begin{figure}[htp]
\centering
\subfigure[ $f(x)= 4x(1-x)$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-4xl1-xr.eps}}\quad
\subfigure[the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-4xl1-xr.eps}}
\caption{(a) The map $f(x)=4x(1-x)$ and (b) its conjugate map.}
\label{fig:9}
\end{figure}
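For this particular map the conjugacy can be verified numerically: with $h(x)=\frac{2}{\pi}\arcsin\sqrt{x}$, the conjugate of the logistic map is exactly the tent map $g(x')=1-|2x'-1|$, a well-known fact. A minimal sketch (our own illustration; the grid is an arbitrary choice):

```python
import numpy as np

# Check that h(x) = (2/pi) arcsin(sqrt(x)) conjugates f(x) = 4x(1-x)
# to the tent map g(x') = 1 - |2x' - 1|.
h     = lambda x: (2 / np.pi) * np.arcsin(np.sqrt(x))
h_inv = lambda y: np.sin(np.pi * y / 2) ** 2
f     = lambda x: 4 * x * (1 - x)

xp   = np.linspace(0.001, 0.999, 999)
g    = h(f(h_inv(xp)))               # conjugate map on a grid
tent = 1 - np.abs(2 * xp - 1)
print(np.max(np.abs(g - tent)))      # agreement to machine precision
```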
The conjugate map no longer has a critical point, and the natural measures produced by the two maps are displayed in FIG.~\ref{fig:10}, which shows that the natural measure of the conjugate map no longer has any singularity.
\begin{figure}[htp]
\centering
\subfigure[the natural measure of map $f(x)= 4x(1-x) $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-4xl1-xr.eps}}\quad
\subfigure[the natural measure of the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-4xl1-xr.eps}}
\caption{The natural measure (a) of the map $f(x)=4x(1-x) $ and (b) of its conjugate map.}
\label{fig:10}
\end{figure}
\subsubsection{The conjugate dynamical zeta function}
In an ergodic system, the dynamical average $\langle a \rangle$ can be obtained through time averaging, as well as through an average over the natural measure, as discussed before.
An ergodic map $f:X\to X$ and its conjugate $ g=h \circ f \circ h^{-1}: X' \to X'$ are related by the conjugacy $h:X \to X'$.
If we have the iteration $f(x_i)=x_{i+1}$, then under the conjugation $ h(x_i) = x_i'$ it becomes $g(x_i')=x_{i+1}'$; that is, a trajectory $\{x_i\}$ of the map $f(x)$ transforms into a trajectory $\{x_i'\}$ of the map $g(x')$. In particular, every cycle of the map $f(x)$ corresponds to a cycle of the map $g(x')$. If the conjugacy $h$ is piecewise smooth, a typical trajectory has identical weight in both coordinates, which suggests that the dynamical average can be computed just as well from iterations of the map $g(x')$,
\begin{equation}
\langle a \rangle _f= \frac{1}{N} \sum_{i=1}^{N} a(x_i) = \frac{1}{N} \sum_{i=1}^N a(h^{-1}(x_i')) = \langle a \circ h^{-1} \rangle_g\,. \label{eq:cta}
\end{equation}
Eq.~(\ref{eq:cta}) shows that computing the average $ \langle a \rangle $ under the map $f(x)$ is equivalent to computing the average $\langle a\circ h^{-1} \rangle$ under the map $g(x')$.
Based on the discussion above, it is natural to introduce the concept of a conjugate dynamical zeta function, as follows.
Suppose the maps $f(x)$ and $g(x')$ are conjugate, with $f=h^{-1}\circ g \circ h$. The observable $a$ of the map $f(x)$ corresponds to the observable $a\circ h^{-1}$ of the map $g(x')$. The dynamical zeta function $\frac{1}{\zeta}$ for $f(x)$ and $a$, and $\frac{1}{\zeta'} $ for $g(x')$ and $a \circ h^{-1}$, are then said to be conjugate, and we call $h$ the conjugacy between $ \frac{1}{\zeta}$ and $\frac{1}{\zeta'}$.
It is clear that the dynamical averages obtained from $\frac{1}{\zeta}$ and $\frac{1}{\zeta'} $ are the same, since $\langle a \rangle_f = \langle a \circ h^{-1} \rangle_g $. Typical forms of $\frac{1}{\zeta}$ and $\frac{1}{\zeta'}$ are
\begin{equation}
\begin{array}{lll}
\frac{1}{\zeta}& = &\prod_p (1-t_p),\,t_p=z^{n_p}\frac{e^{\beta A_p}}{|\Lambda_p|}\\
\frac{1}{\zeta'} & = & \prod_p (1-t_p'),\,t_p'=z^{n_p}\frac{e^{\beta A_p'}}{|\Lambda_p'|} \,,
\end{array}
\end{equation}
where $A_p'= \sum_{i=1}^{n_p}a(h^{-1}(x_i'))=\sum_{i=1}^{n_p}a(x_i) = A_p$. So, $A_p$ is equal to $A_p'$, whether or not $ h$ is a diffeomorphism. If $h$ is a diffeomorphism, the cycle stability eigenvalue $\Lambda_p$ is also equal to $\Lambda_p'$, and we obtain $\frac{1}{\zeta} = \frac{1}{\zeta'}$. However, when $h$ is not a diffeomorphism, the cycle stability eigenvalues may change, and so may the dynamical zeta function, \emph{i.e.}, $\frac{1}{\zeta} \neq \frac{1}{\zeta'}$. This happens when we use a non-diffeomorphic coordinate transformation to clear out the singularities in the natural measure.
\subsubsection{The associated changes in the dynamical zeta function}
We mentioned in Section~\ref{sec:cos} that to clear out the singularities, we need a coordinate transformation $h \propto {|x-x_0|}^{1/k}$ near the singular point $x_0$. Its derivative has the form $\frac{dh}{dx} \propto \frac{1}{{|x-x_0|}^{1-\frac{1}{k}}}$, so $dh/dx$ is singular at $x=x_0$ and the conjugacy $h$ is not diffeomorphic at the singular point. If a periodic point happens to be singular, the stability eigenvalue of that periodic orbit will be changed.
For example, for the map $f(x)=1-{|2x-1|}^k $, the fixed point $\overline{0}$ is a singular point of the natural measure. Accordingly, the stability $\Lambda_0$ of $\overline{0}$ is changed to $\Lambda_0^{1/k}$, as shown below.
In the neighborhood of the fixed point $\overline{0}$, the asymptotic forms of $ f $ and $ h $ are $ f \sim \Lambda_0 x,\,x > 0$ and $h \sim ax^{\frac{1}{k}}$, where $ a>0 $ is a coefficient. We have
\begin{equation}
\begin{array}{lll}
g(x')& = & h \circ f \circ h^{-1}(x') \\
& \sim & h \circ f(x'^{k}/a^k) \sim h(\Lambda_0x'^k/a^k) \sim \Lambda_0^{1/k} x'\,.
\end{array}
\end{equation}
Hence the stability of the fixed point $\overline{0}$ under the conjugate map $g$ is $ \Lambda_0^{1/k}$.
So, in this situation, $\frac{1}{\zeta'} \neq \frac{1}{\zeta}$. Nevertheless, for the map $f(x)=1-{|2x-1|}^k $, if the conjugacy is diffeomorphic except at $x=0,1$, the only change in $\frac{1}{\zeta'}$ compared to $\frac{1}{\zeta}$ is $\Lambda_0'=\Lambda_0^{1/k}$. As the conjugate dynamical zeta function $\frac{1}{\zeta'}$ is computed for the map $g(x')$, whose natural measure has no singularity, the convergence of $\frac{1}{\zeta'}$ is much better than that of $\frac{1}{\zeta}$. Note that the exact functional form of the conjugacy is not essential as long as it has the right asymptotic form near the singular points.
\subsection{Several examples}
We have shown that dynamical averages of a map can be computed from its conjugate map. For a map with critical points, we should find an appropriate coordinate transformation that clears out the singularities in the natural measure. The conjugate map behaves much better in the sense that the singularities caused by the critical points are eliminated and the convergence of the conjugate dynamical zeta function is accelerated. Moreover, we do not need to know the exact functional form of the coordinate transformation: all we do is change the stability eigenvalues of the specific cycles supported on the singularities of the natural measure, thereby transforming the dynamical zeta function to its conjugate.
In the following, we apply our method to several examples. All of these maps have critical points and therefore produce natural measures with singularities.
\subsubsection{The logistic map}
The logistic map $f(x)=4x(1-x)\,,x\in[0,1]$ has a critical point of order two. Its natural measure is singular at two points: $x=0$ and $x=1$. Under the transformation $h(x)=\frac{2}{\pi}\arcsin{\sqrt{x}}$, the singularities in the natural measure are cleared out, as shown in FIG.~\ref{fig:10}. Also, the conjugate map $g=h \circ f \circ h^{-1}$ no longer has a critical point, as depicted in FIG.~\ref{fig:9}.
In the map $g(x')$, the stability eigenvalue of the fixed point $\overline{0}$ is $\Lambda_0^{1/2}=2 $, where $\Lambda_0=4$ is the stability eigenvalue of the corresponding point in the logistic map; the stability eigenvalues of all other orbits remain the same.
The logistic map has an interesting property: the stability eigenvalue of any prime cycle of length $n$ except $\overline{0}$ has absolute value $ 2^{n}$, while the eigenvalue of $\overline{0}$ is $\Lambda_0=4$. After the coordinate transformation, the absolute value of the eigenvalue of every prime cycle of length $n$ is $2^n$, recovering the tent-map case.
Thanks to this interesting observation, the conjugate dynamical zeta function under the conjugation $h$ is just
\begin{equation}
\frac{1}{\zeta'}=1-z,
\end{equation}
which is identical to the dynamical zeta function of the tent map. So, the escape rate of the logistic map is exactly $0$.
Similar to the conjugate dynamical zeta function, we can write down
the conjugate spectral determinant, only with the change $\Lambda_0'
= \Lambda_0^{1/2} = 2$. With the conjugate spectral determinant, the
error of the escape rate decreases super-exponentially, as shown in
FIG.~\ref{fig:11}. However, if we apply the spectral determinant
directly to the logistic map, the error of the escape rate decreases only exponentially with the truncation length $N$, as shown in
FIG.~\ref{fig:3}(b). So, by an appropriate coordinate
transformation, we bring the super-exponential convergence back,
which indicates that the influence of the critical point is removed
in the conjugate system.
Certainly, the logistic map here is very special since it is exactly
conjugate to a tent map~\cite{the95lya}. To show the general
applicability of the technique, in the following, we apply the
method to several other maps for which no smooth conjugacy to the
piecewise linear map is known.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{imsd-er-4xl1-xr.eps}
\caption{ The error of the escape rate for the logistic map computed with the conjugate spectral determinant.}
\label{fig:11}
\end{figure}
\subsubsection{The map $f(x)=\sin(\pi x) $}
The map $f(x)=\sin(\pi x)\,,x\in[0,1]$ has a critical point of order
two. Similar to the logistic map, its natural measure has two singular points: $x=0$ and $x=1$.
The asymptotic form of its natural measure near the singularity is
$\rho \sim \frac{1}{\sqrt{x}}$ near $x=0$ and $ \rho \sim
\frac{1}{\sqrt{1-x}}$ near $x=1$. Under the coordinate transformation
$h(x)=\frac{2}{\pi}\arcsin{\sqrt{x}}$, the singularities of the natural
measure are removed, and we obtain the conjugate map $g=h\circ
f\circ h^{-1}$. FIG.~\ref{fig:12} portrays the maps $f(x)$ and
$g(x')$; the critical point of $f(x)$ disappears in the map
$g(x')$. FIG.~\ref{fig:13} displays the natural measures produced
by the maps $f(x)$ and $g(x')$, respectively; the singularities of
$f(x)$ at $x=0,1$ vanish for $g(x')$.
\begin{figure}[htp]
\subfigure[the map $f(x)=\sin(\pi x)$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-sinlpixr.eps}}
\subfigure[the conjugate map $g(x')$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-sinlpixr.eps}}
\caption{The graph of (a) $f(x)=\sin(\pi x)$ and (b) its conjugate map $g(x')$.}
\label{fig:12}
\end{figure}
\begin{figure}[htp]
\subfigure[the natural measure of $f(x)=\sin(\pi x)$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-sinlpixr.eps}}
\subfigure[the natural measure of the conjugate map $g(x')$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-sinlpixr.eps}}
\caption{The natural measure of (a) $f(x)=\sin(\pi x)$ and (b) its conjugate map $g(x')$.}
\label{fig:13}
\end{figure}
For the conjugate map $g(x')$, the stability eigenvalue of the fixed point $\overline{0}$ is changed from $\Lambda_0=\pi$ to $\Lambda_0'=\pi^{1/2}$, while the eigenvalue of every other orbit is unchanged. We use the dynamical zeta function and its conjugate to calculate the escape rate and the averages $\langle x \rangle , \,\langle x^2 \rangle, \,\langle x^3 \rangle$ of the map $f(x)=\sin(\pi x)$. The results are shown in TABLE~\ref{ta:3}, with a cutoff at cycle length $10$. Also included are the results obtained by direct time averaging, with $10^7$ iterations. From TABLE~\ref{ta:3} we can see that the results obtained from the conjugate dynamical zeta function are by far the most accurate.
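The eigenvalue change $\Lambda_0 \to \Lambda_0^{1/2}$ for the fixed point $\overline{0}$ can be checked numerically (our own consistency check): a finite-difference estimate of the slope of the conjugate map at the origin returns $\sqrt{\pi}$.

```python
import numpy as np

# Conjugate of f(x) = sin(pi*x) under h(x) = (2/pi)*arcsin(sqrt(x)).
h     = lambda x: (2 / np.pi) * np.arcsin(np.sqrt(x))
h_inv = lambda y: np.sin(np.pi * y / 2) ** 2
g     = lambda y: h(np.sin(np.pi * h_inv(y)))

eps = 1e-8
print(g(eps) / eps)       # ~1.77245 = sqrt(pi) = Lambda_0^{1/2}
```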
\begingroup
\squeezetable
\begin{table}[htp]
\begin{tabular}{|c|c|c|c|}
\hline
& the dynamical & the conjugate & time\\
& zeta function & dynamical zeta function & averaging\\
\hline
escape rate & $9\times10^{-4}$ & $-3 \times 10^{-11}$ & \\
\hline
$\langle x \rangle$ & $0.47$ & $ 0.467962949$ & $0.468$\\
\hline
$ \langle x^2 \rangle$ & $ 0.34$ & $ 0.34397492$ & $0.344$\\
\hline
$\langle x^3 \rangle $ & $ 0.28$ & $ 0.28394728$ & $ 0.284$\\
\hline
\end{tabular}
\caption{The escape rate, $\langle x \rangle\,,\langle x^2 \rangle\,,\langle x^3 \rangle$ for the map $f=\sin(\pi x) $ computed with three different methods.}
\label{ta:3}
\end{table}
\endgroup
FIG.~\ref{fig:14} displays the errors of the averages obtained from the dynamical zeta function of $f(x)=\sin(\pi x) $ and from its conjugate, for different truncation lengths. Although all the errors appear to decrease exponentially, those from the conjugate dynamical zeta function decay much faster, indicating a great improvement in convergence.
\begin{figure}[htp]
\subfigure[ the error of the escape rate]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-sinlpixr.eps}}
\subfigure[ the error of $\langle x \rangle $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{x-sinlpixr.eps}}
\subfigure[ the error of $ \langle x^2 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp2-sinlpixr.eps}}
\subfigure[ the error of $\langle x^3 \rangle $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp3-sinlpixr}}
\caption{The error of the escape rate, $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ obtained by the dynamical zeta function (circles) for $f=\sin(\pi x)$ and its conjugate dynamical zeta function (stars).}
\label{fig:14}
\end{figure}
\subsubsection{ The map $f(x)=1-(2x-1)^4 $}
The map $f(x)=1-(2x-1)^4$ has a critical point of order four, with measure singularities at $x=0$ and $x=1$. The asymptotic form of the natural measure near the singularities is $ \rho \sim \frac{1}{x^{\frac{3}{4}}}$ near $x=0$ and $ \rho \sim \frac{1}{(1-x)^{\frac{3}{4}}}$ near $x=1$, as shown in FIG.~\ref{fig:16}(a). To remove the singularities, an appropriate coordinate transformation is $h(x)=1-\frac{\arccos(1-2\sqrt{1-\sqrt{x}})}{\pi}$, which behaves asymptotically as $ h \propto x^{1/4}$ near $x=0$ and $ h \propto (1-x)^{1/4}$ near $ x=1$. Thus, we obtain the conjugate map $g=h \circ f \circ h^{-1}$. The map $f(x)$ and its conjugate $g(x')$ are depicted in FIG.~\ref{fig:15}. Although $f(x)$ has a very flat top, the peak of $g(x')$ is sharp. As a result, the natural measure of the map $g(x')$ has no singularity, as exhibited in FIG.~\ref{fig:16}(b).
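Since $h$ and its inverse are elementary, the construction can be checked numerically. The sketch below is our own illustration, not part of the original computation; it verifies the endpoint behaviour of $h$, the closed-form inverse, and the slope of the conjugate map at the origin, which should be close to $\Lambda_0^{1/4}=8^{1/4}\approx 1.682$:

```python
import math

def f(x):
    return 1.0 - (2.0 * x - 1.0) ** 4

def h(x):
    """Conjugacy h(x) = 1 - arccos(1 - 2*sqrt(1 - sqrt(x))) / pi."""
    return 1.0 - math.acos(1.0 - 2.0 * math.sqrt(1.0 - math.sqrt(x))) / math.pi

def h_inv(y):
    """Closed-form inverse, obtained by solving h(x) = y for x."""
    c = math.cos(math.pi * (1.0 - y))
    return (1.0 - ((1.0 - c) / 2.0) ** 2) ** 2

def g(y):
    """Conjugate map g = h o f o h^{-1}."""
    return h(f(h_inv(y)))

# Note: g should not be evaluated extremely close to 0, since computing
# f(x) = 1 - (2x-1)^4 for x below ~1e-16 loses all digits to cancellation;
# eps ~ 1e-3 is small enough to see the linearised slope.
```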
\begin{figure}[htp]
\subfigure[the map $f(x)=1-{(2x-1)}^4 $ ]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-1-l2x-1rp4.eps}}
\subfigure[the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-1-l2x-1rp4.eps}}
\caption{The graph of (a) the map $f(x)=1-{(2x-1)}^4 $ and (b) its conjugate map.}
\label{fig:15}
\end{figure}
\begin{figure}[htp]
\subfigure[the natural measure of $f(x)=1-{(2x-1)}^4 $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-1-l2x-1rp4}}
\subfigure[the natural measure of the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-1-l2x-1rp4}}
\caption{The natural measure of (a) $f(x)=1-{(2x-1)}^4 $ and (b) its conjugate map.}
\label{fig:16}
\end{figure}
The stability eigenvalue of $\overline{0}$ for the conjugate map is $\Lambda_0'=\Lambda_0^{1/4}= 8^{1/4}$. The convergence of the dynamical zeta function for $f(x)=1-{(2x-1)}^4$ is even poorer than for the logistic map. However, the conjugate dynamical zeta function still yields greatly accelerated convergence. The escape rate and the averages $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ obtained in the two different ways are shown in TABLE~\ref{ta:4}, with a truncation length of $10$. It is worth mentioning that direct time averaging becomes very unreliable in the current case. FIG.~\ref{fig:17} plots the errors in these averages for different truncation lengths. We can see that the conjugate dynamical zeta function converges much faster.
\begin{table}[htp]
\begin{tabular}{|c|c|c|}
\hline
& the dynamical & the conjugate \\
& zeta function & dynamical zeta function\\
\hline
escape rate & $2\times10^{-3}$ & $-2\times10^{-9}$ \\
\hline
$\langle x \rangle $ & $0.45$ & $0.4475860$ \\
\hline
$\langle x^2 \rangle $ & $0.36$ & $0.3601271$\\
\hline
$\langle x^3 \rangle $ & $0.32 $ & $0.31801265$\\
\hline
\end{tabular}
\caption{The escape rate and the averages $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ for the map $f(x)=1-{(2x-1)}^4$ computed with two different methods.}
\label{ta:4}
\end{table}
\begin{figure}[htp]
\subfigure[ the error of the escape rate]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-1-l2x-1rp4.eps}}
\subfigure[ the error of $\langle x \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{x-1-l2x-1rp4.eps}}
\subfigure[ the error of $\langle x^2 \rangle $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp2-1-l2x-1rp4.eps}}
\subfigure[ the error of $\langle x^3 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp3-1-l2x-1rp4.eps}}
\caption{The error of the escape rate, $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ obtained by the dynamical zeta function (circles) for $f(x)=1-{(2x-1)}^4$ and its conjugate dynamical zeta function (stars).}
\label{fig:17}
\end{figure}
\subsubsection{The map $f(x)=1-{(2x-1)}^6$}
The map $f(x)=1-{(2x-1)}^6$ has a critical point of order six, and therefore causes an even worse convergence of the dynamical zeta function. The coordinate transformation we use is $h(x)=1-\frac{\arccos(1-2{(1-x^{\frac{1}{3}})}^{\frac{1}{3}})}{\pi} $, and the conjugate map $g=h \circ f \circ h^{-1} $ no longer has a critical point. Thus, the singularities of the natural measure are removed. FIG.~\ref{fig:18} shows the graphs of the maps $f(x)$ and $g(x')$. The natural measures of $f(x)$ and $g(x')$ are depicted in FIG.~\ref{fig:19}, obtained with $10^8$ iterations in this case. We see that both numerical measures fluctuate, which implies that time averages based on iterations cannot reach high accuracy for this map.
\begin{figure}[htp]
\subfigure[the map $f(x)=1-{(2x-1)}^6 $]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-1-l2x-1rp6.eps}}
\subfigure[the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-1-l2x-1rp6.eps}}
\caption{The graph of (a) the map $f(x)=1-{(2x-1)}^6$ and (b) its conjugate map $g(x')$.}
\label{fig:18}
\end{figure}
\begin{figure}[htp]
\subfigure[the natural measure of the map $f(x)=1-{(2x-1)}^6$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-1-l2x-1rp6.eps}}
\subfigure[the natural measure of the conjugate map $g(x')$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-1-l2x-1rp6.eps}}
\caption{The natural measure of the map $f(x)=1-{(2x-1)}^6$ and its conjugate map $g(x')$.}
\label{fig:19}
\end{figure}
For the conjugate dynamical zeta function, the only difference from the original one is that the stability eigenvalue of $\overline{0}$ is changed to $\Lambda_0'=\Lambda_0^{\frac{1}{6}}=12^{\frac{1}{6}}$. The values of the averages obtained by the two dynamical zeta functions, with a truncation length of $10$, are listed in TABLE~\ref{ta:5}, while the dependence of the computational errors on the truncation length is portrayed in FIG.~\ref{fig:20}. We can see that the conjugate dynamical zeta function converges much faster than the original one, which provides further evidence that clearing out the singularities in the natural measure helps accelerate the convergence.
\begin{table}[htp]
\begin{tabular}{|c|c|c|}
\hline
& the dynamical & the conjugate \\
& zeta function & dynamical zeta function\\
\hline
escape rate & $5 \times 10^{-3}$ & $-2 \times 10^{-7}$ \\
\hline
$\langle x \rangle$ & $ 0.4$ & $0.40232$ \\
\hline
$\langle x^2 \rangle$ & $0.34$ & $0.332921$ \\
\hline
$\langle x^3 \rangle$ & $0.31$ & $0.30027$\\
\hline
\end{tabular}
\caption{The escape rate and the averages $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ for the map $f(x)=1-{(2x-1)}^6$ computed with two different methods.}
\label{ta:5}
\end{table}
\begin{figure}[htp]
\subfigure[the error of the escape rate]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-1-l2x-1rp6.eps}}
\subfigure[the error of $\langle x \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{x-1-l2x-1rp6.eps}}
\subfigure[the error of $\langle x^2 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp2-1-l2x-1rp6.eps}}
\subfigure[the error of $\langle x^3 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp3-1-l2x-1rp6.eps}}
\caption{The error of the escape rate, $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3 \rangle$ obtained by the dynamical zeta function (circles) for $f(x)=1-{(2x-1)}^6$ and its conjugate dynamical zeta function (stars).}
\label{fig:20}
\end{figure}
\subsubsection{A map with three measure singularities}
The maps we have studied so far all have two measure singularities, at $x=0$ and $x=1$. In this section, we turn to a map with three measure singularities, the graph of which is shown in FIG.~\ref{fig:21}(a). The exact functional form of the map is $f(x)=\sin(\frac{\pi}{a}(1-x))$, where $a=1.3156445888...$. It has a critical point of order two and a nice property: $f(0)=x_f$, where $x_f$ is the unique fixed point of the map. The natural measure of $f$ has three singularities, at $x=0,\,x_f,\,1$. The asymptotic form of the natural measure near the singularities is $\rho \sim \frac{1}{\sqrt{x}}$ near $x=0$, $\rho \sim \frac{1}{\sqrt{|x-x_f|}} $ near $x=x_f$ and $\rho \sim \frac{1}{\sqrt{1-x}}$ near $x=1$, as shown in FIG.~\ref{fig:22}(a). To clear out the singularities, we use the coordinate transformation $h(x)$ depicted in FIG.~\ref{fig:23}, which stretches the coordinate around the singularities. The conjugate map $ g(x')$ is nearly a piecewise linear map, as shown in FIG.~\ref{fig:21}(b). The natural measure of the map $g(x')$, depicted in FIG.~\ref{fig:22}(b), no longer has any singularity.
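The defining property $f(0)=x_f$ can be checked directly from the quoted value of $a$; since the digits of $a$ are truncated in the text, the check below (our own illustration) is only accurate to the corresponding precision:

```python
import math

A = 1.3156445888      # value quoted in the text (digits beyond this are truncated)

def f(x):
    return math.sin(math.pi / A * (1.0 - x))

x_f = f(0.0)                       # image of the boundary point x = 0
residual = abs(f(x_f) - x_f)       # should be tiny if x_f is the fixed point
```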
\begin{figure}[htp]
\subfigure[the map with three measure singularities]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-0111.eps}}
\subfigure[the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-0111.eps}}
\caption{The graph of (a) the map with three measure singularities and (b) its conjugate map. }
\label{fig:21}
\end{figure}
\begin{figure}[htp]
\subfigure[the natural measure of the map with three measure singularities]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-0111.eps}}
\subfigure[the natural measure of the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-0111.eps}}
\caption{The natural measure of (a) the map with three measure singularities and (b) its conjugate map.}
\label{fig:22}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{h-0111.eps}
\caption{The conjugacy $h(x)$ for the map with three measure singularities.}
\label{fig:23}
\end{figure}
For the conjugate dynamical zeta function, the stability eigenvalue of $\overline{x_f}$ should be changed to $|\Lambda_{x_f}|'=|\Lambda_{x_f}|^{\frac{1}{2}}$. Again, we use the original and the conjugate dynamical zeta function to calculate averages. The results are listed in TABLE~\ref{ta:6}, with a cutoff at cycle length $20$, and the computational errors are plotted in FIG.~\ref{fig:24}. Note that the binary symbolic dynamics is not complete in the current example: the number of cycles is much reduced compared with the full symbolic dynamics case.
By clearing out the singularities in the natural measure, the convergence is accelerated considerably. So, in this case too, the conjugate dynamical zeta function is an effective way to acquire averages with high accuracy.
\begin{table}[htp]
\begin{tabular}{|c|c|c|}
\hline
& the dynamical& the conjugate\\
& zeta function& dynamical zeta function\\
\hline
escape rate & $5\times 10^{-4}$ & $-1\times10^{-10}$\\
\hline
$\langle x \rangle$ & $0.601$ & $0.601895610$ \\
\hline
$\langle x^2 \rangle$ & $ 0.453$ & $0.453165976$\\
\hline
$\langle x^3 \rangle$ & $0.36$ & $0.364669939 $\\
\hline
\end{tabular}
\caption{The averages for the map with three measure singularities computed with two different methods.}
\label{ta:6}
\end{table}
\begin{figure}[htp]
\subfigure[the error of escape rate]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-0111.eps}}
\subfigure[the error of $\langle x \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{x-0111.eps}}
\subfigure[the error of $\langle x^2 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp2-0111.eps}}
\subfigure[the error of $\langle x^3 \rangle$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{xp3-0111.eps}}
\caption{The error of the escape rate, $\langle x \rangle,\,\langle x^2 \rangle,\,\langle x^3\rangle$ obtained by the dynamical zeta function (circles) for the map with three measure singularities and its conjugate dynamical zeta function (stars).}
\label{fig:24}
\end{figure}
\subsubsection{A map with measure singularities on a period-2 orbit}
The conjugation method may also be applied to systems with measure singularities on longer orbits. As an example, we study a map with measure singularities on a period-2 orbit. The functional form of the map is still $f(x)=\sin(\frac{\pi}{a}(1-x)) $, but with a different value $a=1.10263451544766... $. FIG.~\ref{fig:25}(a) portrays the graph of the map. For this map, $ f(0)=x_a,\,f^2(0)=x_b$, where $x_a$ and $x_b$ form a period-2 orbit. The natural measure of the map is shown in FIG.~\ref{fig:26}(a) and has four singularities: $x=0,\,x_a,\,x_b,\,1$. An appropriate conjugacy $h(x)$ is plotted in FIG.~\ref{fig:27}, which stretches the coordinate around the four singularities. The conjugate map $g(x')$ and its natural measure are plotted in FIG.~\ref{fig:25}(b) and FIG.~\ref{fig:26}(b), respectively. For the map $g(x')$, the singularities have been removed, a situation we have by now encountered several times.
\begin{figure}[htp]
\subfigure[the map with measure singularities on a period-2 orbit]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{originalmap-0101.eps}}
\subfigure[the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{conjugatemap-0101.eps}}
\caption{The graph of (a) the map with measure singularities on a period-2 orbit and (b) its conjugate map.}
\label{fig:25}
\end{figure}
\begin{figure}[htp]
\subfigure[the natural measure of map $f$]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{spd-0101.eps}}
\subfigure[the natural measure of the conjugate map]{\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{cpd-0101.eps}}
\caption{The natural measure of (a) the map with measure singularities on a period-2 orbit and (b) its conjugate map.}
\label{fig:26}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{h-0101.eps}
\caption{The conjugacy $h(x)$ for the map with measure singularities on a period-2 orbit.}
\label{fig:27}
\end{figure}
We change the eigenvalue of the period-2 orbit and obtain the conjugate dynamical zeta function, just as before. The conjugate dynamical zeta function gives interesting results in the calculation of the escape rate. FIG.~\ref{fig:28} shows the error in the escape rate computed with the original and the conjugate dynamical zeta function. In general, results obtained with the conjugate dynamical zeta function converge exponentially and uniformly, faster than with the original dynamical zeta function. However, that is not entirely true here: the conjugate dynamical zeta function does not accelerate the convergence as effectively as before. So, why is the convergence of the conjugate dynamical zeta function not so good, even though we have removed the measure singularities?
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-0101.eps}
\caption{The error of the escape rate by the original(circles) and the conjugate(stars) dynamical zeta function for the map with measure singularities on a period-2 orbit.}
\label{fig:28}
\end{figure}
If we examine FIG.~\ref{fig:25}(b), the graph of the conjugate map, we find one special point at which the slope is infinite. It is precisely this point that causes the slow convergence. We call such an infinite-slope map ``super-hyperbolic''. Super-hyperbolicity can slow down the convergence. To illustrate this point, we use the dynamical zeta function to calculate the escape rate of a super-hyperbolic map $f(x)$,
\begin{equation}
f(x)= \left\{ \begin{array}{ll} 2x & x \in [0,1/2]\,, \\ \sqrt{2-2x} & x \in [1/2,1]\,, \end{array} \right.
\end{equation}
where $f'(1)=\infty$. We compare the results obtained for this map and for the logistic map in FIG.~\ref{fig:29}; both are computed with the original dynamical zeta function. We can see that the convergence rates are similar for the two maps. So super-hyperbolicity is harmful to the convergence, just like non-hyperbolicity. To understand this, recall that the average obtained by the truncated dynamical zeta function is nearly identical to the one computed with the corresponding piecewise linear map. So, if the piecewise linear map approximates the original map very well, the average obtained will be quite accurate. For the super-hyperbolic map, however, there exists a point with infinite slope, which means that the value of the map changes extremely unevenly near that point. Many cycle points are therefore needed near that point to get a fair approximation, just as near the critical point of a non-hyperbolic map. Thus, the slow convergence of the super-hyperbolic map is to be expected.
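The divergence of the slope at $x=1$ is easy to exhibit numerically. The finite-difference sketch below (our own illustration) shows the slope of the square-root branch growing like $1/\sqrt{2-2x}$ as $x \to 1$:

```python
import math

def f(x):
    """The super-hyperbolic test map: 2x on [0, 1/2], sqrt(2 - 2x) on [1/2, 1]."""
    return 2.0 * x if x <= 0.5 else math.sqrt(2.0 - 2.0 * x)

def slope(x, eps=1e-8):
    """One-sided finite-difference estimate of |f'| just below x."""
    return abs(f(x) - f(x - eps)) / eps
```

On the square-root branch $|f'(x)| = 1/\sqrt{2-2x}$, so each factor-of-$100$ step toward the boundary multiplies the slope estimate by about ten.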
Based on the discussion above, we can see that the singularity in the natural measure is not the only factor that influences the convergence of the dynamical zeta function. For the map with measure singularities on a period-2 orbit, clearing out the measure singularities does not improve the convergence much. In fact, the direct and essential factor determining the convergence of the dynamical zeta function is the cancellation between prime cycles and pseudo-cycles. In the super-hyperbolic case, the cancellation is poor even though the measure singularities are absent. So, in the super-hyperbolic case, how to induce further cancellation remains a challenging problem.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth,height=0.3\textwidth]{er-nonsuper.eps}
\caption{The error of the escape rate for the super-hyperbolic map and the logistic map.}
\label{fig:29}
\end{figure}
\section{Conclusion \label{sec:sum}}
The central idea of this paper is that by clearing out the singularities in the natural measure we may accelerate the convergence of cycle expansions. Maps with critical points produce natural measures with singularities and show poor convergence in the expansion calculation. With an appropriate coordinate transformation, the resulting conjugate map produces no singularity in its natural measure. To calculate dynamical averages, we use the conjugate spectral function, whose cycle expansion converges much faster due to the removal of the singularities. Essentially, the method locates the leading poles of the spectral function by classifying the singularities in the natural measure and removes them through a coordinate transformation.
We test our method on several maps, {\em i.e.}, $f(x)=1-|2x-1|^k,\,k=2,4,6$ and $f(x)=\sin(\pi x)$, which have only one critical point and complete binary dynamics, and on the map with three measure singularities. For these maps, the conjugate dynamical zeta function converges much faster than the original zeta function. Also, the conjugate spectral determinant restores the super-exponential convergence. However, when we treat the map with measure singularities on a period-2 orbit, we find that the conjugate dynamical zeta function does not converge as fast as expected. Analysis shows that the super-hyperbolicity of the conjugate map leads to this slowing-down. Further study is needed to eliminate this nuisance.
In this paper, we use one-dimensional maps as examples to demonstrate our acceleration scheme. How to generalize it to higher dimensions or to flows requires further investigation. Even in the 1-d case, we can only treat maps whose symbolic dynamics is a subshift of finite type. If the genealogy sequence of the critical point is not essentially periodic, there may exist a natural boundary in the complex plane for the dynamical zeta function, on which singular points are dense. In this case, the natural measure is singular on a dense and countable set~\cite{dah97prun}. It then seems impossible to extend the radius of convergence by analytic continuation. Novel techniques have to be invented to achieve accelerated convergence in this case.
\section*{Acknowledgements}
This research is supported by the National Natural Science Foundation of China (Grant No.~10975081) and the Ph.D. Programs Foundation of the Ministry of Education of China (Grant No.~20090002120054).
\section{Introduction}
\label{sec:intro}
Forecasting the motion of pedestrians in crowds is essential for autonomous systems like self-driving cars and social robots that will potentially co-exist with humans. To successfully predict how humans navigate in crowds, a forecasting model needs to tackle three crucial challenges: \\
(1) \textbf{Modelling social interactions}: the model should learn how the trajectory of one person affects another person; \\
(2) \textbf{Physically acceptable outputs}: the model predictions should be physically acceptable, \textit{i.e.}, not undergo collisions; \\
(3) \textbf{Multimodality}: given the history, the model needs to be able to output all futures without missing any mode.
\begin{figure}
\centering
\begin{subfigure}[h]{0.47\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/pull_new1.png}
\end{subfigure}
\setlength{\belowcaptionskip}{-12pt}
\caption{Given the history (solid), a forecasting model needs to account for social rules of human motion when predicting collision-free multimodal futures (dash). SGANv2 learns social interactions using spatio-temporal interaction modelling and refines unsafe outputs via collaborative sampling strategy.}
\label{fig:pull_traj1}
\end{figure}
The objective of multimodal trajectory forecasting is to learn a generative model over future trajectories. Generative adversarial networks (GANs) \cite{Goodfellow2014GenerativeAN} are a popular choice of generative model for trajectory forecasting, as they can effectively capture all possible future modes by mapping samples from a given noise distribution to samples in the real data distribution. Gupta \textit{et al.}~\cite{Gupta2018SocialGS} proposed Social GAN (SGAN), a GAN with social mechanisms, to learn human interactions and output multimodal trajectories. Following the success of SGAN, recent works \cite{Kosaraju2019SocialBiGATMT, Sadeghian2018SoPhieAA, Zhao2019MultiAgentTF, Amirian2019SocialWL} have proposed improved GAN architectures to better model human interactions in crowds. Indeed, these designs have been successful in reducing distance-based metrics on real-world datasets \cite{Kosaraju2019SocialBiGATMT}. However, we discover that they fail to model social interactions, \textit{i.e.}, the models output colliding trajectories.
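A pairwise collision check of the kind used to expose such failures can be sketched as follows; the $0.2$\,m threshold below is a common choice in the literature, not a value fixed by this paper:

```python
from itertools import combinations

def count_collisions(trajectories, threshold=0.2):
    """Count agent pairs whose predicted positions come within `threshold`
    metres of each other at any common time step.

    `trajectories` is a list of equal-length [(x, y), ...] paths; the 0.2 m
    threshold is an assumption, not a value prescribed by the paper.
    """
    collisions = 0
    for path_a, path_b in combinations(trajectories, 2):
        if any((ax - bx) ** 2 + (ay - by) ** 2 < threshold ** 2
               for (ax, ay), (bx, by) in zip(path_a, path_b)):
            collisions += 1
    return collisions
```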
\newcolumntype{?}{!{\vrule width 1pt}}
\renewcommand\theadalign{bc}
\renewcommand\theadfont{\bfseries}
\renewcommand\theadgape{\Gape[4pt]}
\renewcommand\cellgape{\Gape[4pt]}
\begin{table*}[t!]
\centering
\resizebox{0.95\textwidth}{!}{\begin{tabular}{lccccccc}
\toprule
Method & Generative Model & \makecell{Spatio-temporal \\ Interaction Modelling \\ in Generator} & \makecell{Multimodal} & \makecell{Spatio-temporal \\ Interaction Modelling \\ in Discriminator} & \makecell{Discriminator \\ Design} & \makecell{Test-time \\ Refinement} \\
\midrule
S-LSTM \cite{Alahi2016SocialLH} & -- & \checkmark & \xmark & -- & -- & \xmark \\
DESIRE \cite{Lee2017DESIREDF} & VAE & \checkmark & \checkmark & -- & -- & \checkmark \\
Trajectron \cite{Ivanovic2018TheTP} & VAE & \checkmark & \checkmark & -- & -- & \xmark \\
SGAN \cite{Gupta2018SocialGS} & GAN & \xmark & \checkmark & \xmark & RNN & \xmark \\
S-BiGAT \cite{Kosaraju2019SocialBiGATMT} & GAN & \checkmark & \checkmark & \xmark & RNN & \xmark \\
SGANv2 [Ours] & GAN & \checkmark & \checkmark & \checkmark & Transformer & \checkmark \\
\bottomrule
\end{tabular}}
\centering
\caption{High-level comparison of the proposed architecture against selected generative model-based forecasting models.}
\label{compare}
\end{table*}
The failure to output collision-free trajectories can be attributed to the fact that current discriminator designs do not fully model human-human interactions; hence, they are incapable of differentiating real trajectory data from fake data. \textit{Only when the discriminator is capable of differentiating real data from fake data can the supervisory signal from it be meaningful in teaching the generator}. To tackle this issue, we propose two architectural changes to the SGAN design: (1) spatio-temporal interaction modelling to better discriminate between real and generated trajectories; (2) a transformer-based discriminator design to strengthen the sequence modelling capability and better guide the generator training. Equipped with these structural changes, our proposed architecture, \textit{SGANv2}, learns to better model the underlying etiquette of human motion, as evidenced by reduced collisions.
To further reduce the prediction collisions, SGANv2 leverages the trained discriminator even at test time. In particular, we perform collaborative sampling \cite{Liu2019CollaborativeGS} between the generator and discriminator at test-time to guide the unsafe trajectories sampled from the generator. Additionally, we empirically demonstrate that collaborative sampling not only helps to refine trajectories but also has the potential to prevent mode collapse, a phenomenon where the generator fails to capture all modes in the output distribution.
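As a toy illustration of this refinement step, consider nudging two predicted positions along the gradient of a hand-coded proximity score; in SGANv2 the gradient comes from the trained discriminator, not from the stand-in critic used here:

```python
import math

def refine(p1, p2, steps=50, lr=0.1, min_dist=0.5):
    """Toy stand-in for collaborative sampling on a single predicted frame.

    The 'critic' here is simply the squared separation between the two
    agents, so gradient ascent pushes them apart until they are at least
    `min_dist` apart; in collaborative sampling the gradient would instead
    come from the trained discriminator's real-score.
    """
    for _ in range(steps):
        dx, dy = p1[0] - p2[0], p1[1] - p2[1]
        if dx * dx + dy * dy >= min_dist ** 2:
            break                              # sample already looks safe
        # d/dp1 of ||p1 - p2||^2 is 2*(p1 - p2); the factor 2 is folded into lr
        p1 = (p1[0] + lr * dx, p1[1] + lr * dy)
        p2 = (p2[0] - lr * dx, p2[1] - lr * dy)
    return p1, p2

def separation(p1, p2):
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])
```

Note that the refinement only nudges the generator's sample rather than replacing it, which is why the overall prediction remains close to the original mode.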
We empirically validate the efficacy of SGANv2 in outputting socially compliant predictions on both synthetic and real-world trajectory datasets. First, we shed light on the shortcomings of the metric commonly used to measure multimodal performance, namely Top-20 ADE/FDE \cite{Gupta2018SocialGS}. Specifically, we demonstrate that a simple predictor that outputs uniformly spaced predictions performs on par with state-of-the-art methods when evaluated using only Top-20 ADE/FDE. To counter this limitation, we propose an alternate evaluation scheme that better measures the socially compliant multimodal performance of a model. We demonstrate that SGANv2 outperforms competitive baselines on both synthetic and real-world trajectory datasets under the new evaluation scheme. Finally, we demonstrate the ability of collaborative sampling to prevent mode collapse on the recently released Forking Paths \cite{liang2020garden} dataset. Our main contributions are:
\begin{enumerate}[topsep=0pt]
\itemsep0em
\item We propose SGANv2, an improved SGAN architecture that incorporates spatio-temporal interaction modelling in both the generator and the discriminator. Moreover, our transformer-based discriminator better guides the learning process of the generator.
\item We demonstrate the efficacy of collaborative sampling between the generator and discriminator at test-time to reduce prediction collisions and prevent mode collapse in trajectory forecasting.
\end{enumerate}
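For reference, the Top-$K$ ADE/FDE metric discussed above scores only the best of the $K$ sampled futures against the ground truth; a plain-Python sketch:

```python
import math

def ade(pred, gt):
    """Average displacement error of one predicted path against the ground truth."""
    return sum(math.dist(p, q) for p, q in zip(pred, gt)) / len(gt)

def fde(pred, gt):
    """Final displacement error: endpoint distance only."""
    return math.dist(pred[-1], gt[-1])

def top_k_ade_fde(samples, gt):
    """Best-of-K variant: only the closest of the K sampled futures is scored."""
    return min(ade(s, gt) for s in samples), min(fde(s, gt) for s in samples)
```

Because only the best sample counts, a predictor that simply fans out uniformly spaced guesses can score well, which is exactly the weakness discussed above.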
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{\textwidth}
\includegraphics[width=0.98\textwidth]{IEEEtran/figures/system_fig.pdf}
\end{subfigure}
\caption{Our proposed SGANv2 model: our model consists of three main parts: the spatial interaction embedding module (SIM), the generator (G), and the discriminator (D). At each time-step, for each pedestrian, the SIM outputs the motion embedding and the spatial interaction embedding. G encodes the input embedding sequence using the encoder LSTM to obtain the latent representation. The latent representation, along with the sampled noise vector $z$, is used to generate multimodal predictions using the decoder LSTM. D takes the stacked embedding sequence of real (or fake) trajectories and encodes it using the transformer architecture to obtain the real (or fake) score.}
\label{fig:sys_fig}
\end{figure*}
\section{Related Work}
Human trajectory forecasting in crowds has been an active area of research \cite{SocialForce, Alahi2016SocialLH, Li2020SocialWaGDATIT, Huang2019STGATMS, Mohamed2020SocialSTGCNNAS, Zhu2019StarNetPT, Giuliari2020TransformerNF, Yu2020SpatioTemporalGT, Su2022TrajectoryFB, Zhang2019SRLSTMSR, Kothari2021InterpretableSA, KothariAdversarialLF, Liu2021SocialNC, Daniel2021PECNetAD, saadatnejad_sattack, Liu2022CausalMotionRepresentations, KothariCoRL2022, Xu2022GroupNetMH, Xu2022AdaptiveTP} for various applications like autonomous systems \cite{WaymoSafety, UberSafety, Chen2019CrowdRobotIC, Rasouli2020AutonomousVT} and advanced surveillance \cite{Mehran2009AbnormalCB}. In this section, we review model designs that learn social interactions and output socially compliant multimodal outputs. Table~\ref{compare} provides a high-level overview of how SGANv2 architecture differs from selected generative model-based designs.
\textbf{Spatio-temporal interaction modelling.} The seminal work of Social LSTM \cite{Alahi2016SocialLH} proposed to learn spatial interactions in a data-driven manner with a novel social pooling layer. Following the success of Social LSTM, various designs of data-driven interaction modules have been proposed \cite{Pfeiffer2017ADM, Shi2019PedestrianTP, Bisagno2018GroupLG, Gupta2018SocialGS, Zhang2019SRLSTMSR, Zhu2019StarNetPT, Ivanovic2018TheTP, Liang2019PeekingIT, Tordeux2019PredictionOP, Ma2016AnAI, Hasan2018MXLSTMMT, Mohamed2020SocialSTGCNNAS, Li2020EvolveGraphHM, Yu2020SpatioTemporalGT, Xu2022GroupNetMH} to effectively model interactions in crowds. For a detailed taxonomy of interaction-module designs, one can refer to Kothari \textit{et al.}~\cite{Kothari2020HumanTF}. In this work, we highlight the importance of modelling both the spatial and the temporal nature of social interactions.
Architectures that model the dynamics of entities in spatio-temporal tasks have been well studied. Structural-RNN \cite{Jain2016StructuralRNNDL}, a specialized RNN design, was proposed to model the dynamics of spatio-temporal tasks like human-object interaction and driver maneuver anticipation. Specific to motion forecasting, several works consider the temporal evolution of spatial human interactions using recurrent mechanisms \cite{Vemula2017SocialAM, Huang2019STGATMS, Li2020EvolveGraphHM}, graph convolutional networks \cite{Mohamed2020SocialSTGCNNAS, Sun2020RecursiveSB}, as well as transformers \cite{Yu2020SpatioTemporalGT}. However, many recent works advocate performing spatial interaction modelling only at the end of the observation period \cite{Gupta2018SocialGS, Kosaraju2019SocialBiGATMT}, as this strategy does not impact the distance-based metrics and saves computational time. In this work, we study the importance of spatio-temporal interaction modelling from the perspective of reducing collisions in model outputs.
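To make spatial interaction encoding concrete, here is a minimal occupancy-grid sketch in the spirit of social pooling; the grid size and cell width are illustrative choices, not values taken from any cited model:

```python
def occupancy_grid(ego, neighbours, cell=0.5, grid=4):
    """Count neighbours in each cell of a (grid x grid) window centred on the
    ego agent.  In a social-pooling layer this grid (or pooled hidden states
    indexed by it) would be concatenated with the ego's motion embedding."""
    counts = [[0] * grid for _ in range(grid)]
    half = grid * cell / 2.0
    for nx, ny in neighbours:
        gx = int((nx - ego[0] + half) // cell)
        gy = int((ny - ego[1] + half) // cell)
        if 0 <= gx < grid and 0 <= gy < grid:   # ignore far-away agents
            counts[gy][gx] += 1
    return counts
```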
\textbf{Multimodal forecasting.} Neural networks trained using $L_2$ loss are condemned to output the average of all possible outcomes. To tackle this, one line of work proposes $L_2$ loss variants \cite{GuzmnRivera2012MultipleCL, Rupprecht2016LearningIA, Makansi2019OvercomingLO, Huang2019STGATMS} capable of handling multiple hypotheses. However, these variants fail to penalize low quality predictions, \textit{e.g.}, samples that are far away from the ground truth and undergo collisions. Thus, training using these variants can result in high diversity but low quality predictions.
Another line of work utilizes generative models \cite{Lee2017DESIREDF, Ivanovic2018TheTP, Gupta2018SocialGS, Amirian2019SocialWL, Kosaraju2019SocialBiGATMT, Huang2021STIGANMP}, with Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) being the most popular, to model the future trajectory distribution. VAE models in trajectory forecasting \cite{Lee2017DESIREDF, Ivanovic2018TheTP} employ a loss objective based on variants of the Euclidean distance. Such a formulation leads to low-quality samples, especially when the predictions are uncertain \cite{Dosovitskiy2016GeneratingIW}. In contrast, the discriminator of the GAN framework acts as a learned loss function that naturally penalizes low-quality samples under the adversarial training objective, \textit{i.e.}, a penalty is incurred on the generator if a sample does not look real \cite{Goodfellow2014GenerativeAN}. Thus, we choose GANs as our generative model, as they can effectively produce diverse and high-quality modes by transforming samples from a noise distribution to samples in the real data distribution.
\textbf{GANs in trajectory forecasting.} SGAN \cite{Gupta2018SocialGS} used an LSTM encoder-decoder with social mechanisms within the GAN framework \cite{goodfellow_generative_2014} to perform multimodal forecasting. Following the success of SGAN, various GAN-based architectures have been proposed to better model multimodality in crowds \cite{Li2019WhichWA, Kosaraju2019SocialBiGATMT, Amirian2019SocialWL} as well as on roads \cite{Roy2019VehicleTP, Jin2022AGS}. Li \cite{Li2019WhichWA} proposed to infer the latent decisions of the agents to model multimodality. Kosaraju \textit{et al.}~\cite{Kosaraju2019SocialBiGATMT} proposed to introduce two discriminators: a local discriminator for the local pedestrian trajectories, similar to \cite{Amirian2019SocialWL, Gupta2018SocialGS}, and a global discriminator that accounted for the spatial interactions. All these works exhibit two common design choices: (1) they do not perform spatio-temporal interaction modelling within the discriminator, (2) they utilize a recurrent LSTM-based discriminator.
It is crucial to equip the discriminator with the ability to model spatio-temporal interactions. Therefore, SGANv2 performs \textit{spatio-temporal interaction modelling} within the discriminator, along with the generator. Transformers \cite{Vaswani2017AttentionIA} have been shown to outperform RNNs in almost all sequence modelling tasks, including trajectory forecasting \cite{Giuliari2020TransformerNF, Yu2020ImprovedOI}. Therefore, we design our discriminator using a transformer and demonstrate that it better guides the generator training. Note that Giuliari \textit{et al.}~\cite{Giuliari2020TransformerNF} do not take social interactions into account, leading to a high number of collisions in their outputs.
The spatio-temporal transformer design of STAR \cite{Yu2020SpatioTemporalGT} is most closely related to the design of our discriminator. However, as discussed above, their $L_2$ loss training objective can fail to effectively model multimodality. Further, in contrast to previous transformer and GAN-based works, SGANv2 performs test-time refinement that leads to further collision reduction, discussed next.
\textbf{Test-time Refinement.} This refers to the task of refining model predictions at test time. Lee \textit{et al.}~\cite{Lee2017DESIREDF} propose an inverse optimal control based module to refine the predicted trajectories. Sun \textit{et al.}~\cite{Sun2020ReciprocalLN} refine trajectories using a reciprocal network that reconstructs the input trajectories given the predictions. However, they rely on the strong assumption that both forward and backward trajectories follow identical rules of human motion. We propose to refine trajectories by performing collaborative sampling between the trained generator and discriminator \cite{Liu2019CollaborativeGS}. This technique provides theoretical guarantees with respect to moving the generator distribution closer to the real distribution.
\textbf{Mode Collapse.} This is the phenomenon where the generator distribution fails to capture all modes of the target distribution. SGAN collapses to a single mode of behavior. Social Ways \cite{Amirian2019DataDrivenCS} utilizes InfoGAN to overcome this issue, albeit on a toy dataset. We empirically show that the collaborative sampling technique in SGANv2 overcomes mode collapse on the more diverse Forking Paths dataset \cite{liang2020garden}.
\section{Method}
Modelling human trajectories using generative adversarial networks (GANs) has the potential to learn the underlying etiquette of human motion and output realistic multimodal predictions. Indeed, recent GAN-based trajectory forecasting models have been successful in reducing distance-based metrics; however, they suffer from a high number of prediction collisions. In this section, we present SGANv2, an improvement over the SGAN architecture to output safety-compliant predictions. On a high level, we propose three structural changes: (1) \textit{Spatio-temporal interaction modelling} within the discriminator and generator to better understand social interactions, (2) a \textit{Transformer-based discriminator} to better guide the generator, (3) a \textit{Collaborative sampling mechanism} between the generator and discriminator to refine the colliding trajectories at test time. Our proposed changes are generic and can be employed on top of any existing GAN-based architecture.
\subsection{Problem Definition}
Given a scene, we receive as input the trajectories of all people within the scene, denoted by $\mathbf{X} = \{X_1, X_2, \ldots, X_n\}$, where $n$ is the number of people in the scene. The trajectory of a person $i$ is defined as $X_i = (x_i^t,y_i^t)$ for time $t=1,2,\ldots,T_{obs}$, and the future ground-truth trajectory is defined as $Y_i = (x_i^t,y_i^t)$ for time $t=T_{obs}+1,\ldots,T_{pred}$. The objective is to accurately and simultaneously forecast the future trajectories of all people, $\mathbf{\hat{Y}}=\{\hat{Y}_1,\hat{Y}_2,\ldots,\hat{Y}_n\}$, where $\hat{Y}_i$ denotes the predicted trajectory of person $i$. The velocity of pedestrian $i$ at time-step $t$ is denoted by ${v}^{t}_{i}$.
\subsection{Generative Adversarial Networks}
GANs consist of two neural networks, namely the generator $G$ and the discriminator $D$, which are trained together in tandem. The objective of $D$ is to correctly identify whether a sample belongs to the real data distribution or is generated by the generator. The objective of $G$ is to produce realistic samples which can fool the discriminator. $G$ takes as input a noise vector $z$ sampled from a given noise distribution $p_z$ and transforms it into a real-looking sample $G(z)$. $D$ outputs a probability score indicating whether a sample comes from the generator distribution $p_g$ or the real data distribution $p_r$. Training GANs is essentially a minimax game between the generator and the discriminator:
\begin{equation} \label{eq:minimax}
\min_G \max_D \mathbb{E}_{x \sim p_r} [ \log(D(x)) ] + \mathbb{E}_{z \sim p_z} [\log(1 - D(G(z)))].
\end{equation}
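As an illustration, the two alternating objectives implied by Eq.~\ref{eq:minimax} can be sketched in NumPy; the function names are ours, and the arrays \texttt{d\_real} and \texttt{d\_fake} stand for the discriminator scores $D(x)$ and $D(G(z))$:

```python
import numpy as np

def d_loss(d_real, d_fake):
    # D ascends the minimax objective: maximize log D(x) + log(1 - D(G(z))),
    # implemented here as minimizing the negated sum.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    # G descends the same objective: minimize log(1 - D(G(z))).
    return np.log(1.0 - d_fake).mean()
```

In practice, the two losses are minimized by alternating gradient steps on $D$ and $G$.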
\subsection{Interaction Modelling Designs}
Modelling social interactions is the key to outputting safe and accurate future trajectories. In this work, we argue that current works do not sufficiently model interactions between agents within both the generator and the discriminator, leading to a large number of prediction collisions. Here, we differentiate between the notion of performing \textit{spatial interaction modelling} and performing \textit{spatio-temporal interaction modelling}. On one hand, an architectural design is said to perform \textit{spatial interaction modelling} if it models the interaction between pedestrians at a \textbf{single time-step only}. For instance, SGAN performs spatial interaction modelling within the generator as it encodes the neighbourhood information only once, at the end of the observation. On the other hand, an architectural design is said to perform \textit{spatio-temporal interaction modelling} if it performs spatial interaction modelling at \textbf{every} time-step (from $t=1$ to $t=T_{pred}$) and the temporal evolution of the interactions is captured using a sequence encoding mechanism, \textit{e.g.}, an LSTM or a transformer. We empirically demonstrate that spatio-temporal interaction modelling within both the generator and the discriminator is essential to output safer trajectories.
\subsection{SGANv2}
We now describe our proposed model design in detail (see Fig.~\ref{fig:sys_fig}). Our architecture consists of three key components: the Spatial Interaction Embedding Module (SIM), the Generator (G), and the Discriminator (D). SIM is responsible for spatial interaction modelling while G and D perform temporal modelling. Thus, \textit{G and D in conjunction with SIM perform spatio-temporal interaction modelling (STIM)}. In particular, SIM performs motion embedding and spatial interaction embedding for each pedestrian at each time-step. G encodes the embedded sequence through time and outputs multimodal predictions using an LSTM encoder-decoder framework. D, modelled using a transformer \cite{Vaswani2017AttentionIA}, takes as input the entire sequence comprising the observed trajectory $\mathbf{X}$ and the future prediction $\mathbf{\hat{Y}}$ (or ground-truth $\mathbf{Y}$), and classifies it as real/fake.
\textbf{Spatial Interaction Embedding Module.} One important characteristic that differentiates human motion forecasting from other sequence prediction tasks is the presence of social interactions: the trajectory of a person is affected by other people in their vicinity. SIM performs the task of encoding human motion and human-human interactions in the spatial domain at a particular time-step.
We embed the velocity ${v}^{t}_{i}$ of pedestrian $i$ at time $t$ using a single-layer MLP to get the motion embedding vector ${e}^{t}_{i}$, given as:
\begin{equation}
e^t_{i} = \phi(v^t_i; W_{emb}),
\end{equation}
where $\phi$ is the embedding function with weights $W_{emb}$.
The design of SIM is flexible and it can utilize any spatial interaction module proposed in literature \cite{Kothari2020HumanTF, Kosaraju2019SocialBiGATMT}. It embeds the spatial configuration of the scene and outputs the interaction embedding ${p}^t_{i}$ for pedestrian $i$ at time-step $t$. We then concatenate the motion embedding with the spatial interaction embedding, \textit{i.e.}, ${s}^t_{i} = [{e}^t_{i}; {p}^{t}_{i}]$, and provide the concatenated embedding ${s}^t_{i}$ to the G (or the D). The input embedding is constructed using the ground-truth observations from $[1, T_{obs}]$, and generator predictions from $[T_{obs} + 1, T_{pred}]$.
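A minimal NumPy sketch of one SIM step is given below; the mean-pooled MLP over relative neighbour positions is an illustrative placeholder for any spatial interaction module from the literature, and the weights and the dimension \texttt{D\_EMB} are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
D_EMB = 16  # embedding size (illustrative choice)

W_emb = rng.normal(size=(2, D_EMB)) * 0.1  # velocity -> motion embedding e
W_int = rng.normal(size=(2, D_EMB)) * 0.1  # toy spatial module -> embedding p

def sim_step(vel, rel_pos):
    """One SIM step for a single pedestrian at time t.

    vel:     (2,)   velocity of pedestrian i
    rel_pos: (n, 2) neighbour positions relative to pedestrian i
    Returns the concatenated embedding s^t_i of size 2 * D_EMB.
    """
    e = np.maximum(vel @ W_emb, 0.0)                   # motion embedding
    p = np.maximum(rel_pos @ W_int, 0.0).mean(axis=0)  # toy interaction pooling
    return np.concatenate([e, p])                      # s^t_i = [e; p]

s = sim_step(np.array([0.5, -0.2]), rng.normal(size=(3, 2)))
```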
\textbf{Generator.} Within the generator, the encoder LSTM encodes the input embedding sequence provided by the SIM. The encoder LSTM helps to model the temporal evolution of spatial interactions in the form of the following recurrence:
\begin{equation} \label{eq:LSTM_main}
h^t_{i} = LSTM_{enc}(h^{t-1}_i, s^t_{i}; W_{\mathrm{encoder}}),
\end{equation}
where ${h}^t_{i}$ denotes the hidden state of pedestrian $i$ at time $t$, $W_{\mathrm{encoder}}$ are the weights of encoder LSTM that are learned.
The output of the LSTM encoder for each pedestrian at the end of the observation period represents his/her \textit{observed scene representation}. Similar to SGAN, we utilize this representation to condition our GAN for prediction. In other words, SGANv2 takes as input the noise $z$ and the observed scene representation to produce future trajectories that are conditioned on the past observations. The decoder hidden-state of each pedestrian is initialized with the final hidden-state of the encoder LSTM. The input noise $z$ is concatenated with the inputs of the decoder LSTM, resulting in the following recurrence for the decoder LSTM:
\begin{equation} \label{eq:LSTM_main_dec}
h^t_{i} = LSTM_{dec}(h^{t-1}_i, [s^t_{i}; z_{i}]; W_{\mathrm{decoder}}),
\end{equation}
where $W_{\mathrm{decoder}}$ are the weights of decoder LSTM.
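The encoder/decoder recurrences of Eq.~\ref{eq:LSTM_main} and Eq.~\ref{eq:LSTM_main_dec} can be sketched with a hand-rolled LSTM cell; the dimensions and the random inputs standing in for the SIM embeddings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D_S, D_H, D_Z = 32, 64, 8  # SIM embedding, hidden and noise sizes (illustrative)

def make_lstm(d_in, d_h):
    """Random weights for one LSTM cell (input, forget, cell, output gates)."""
    return rng.normal(size=(d_in + d_h, 4 * d_h)) * 0.1, np.zeros(4 * d_h)

def lstm_step(x, h, c, params):
    W, b = params
    z = np.concatenate([x, h]) @ W + b
    i, f, g, o = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)   # cell-state update
    return np.tanh(c) * sig(o), c          # new hidden state h^t

enc = make_lstm(D_S, D_H)                  # Eq. (3)
dec = make_lstm(D_S + D_Z, D_H)            # Eq. (4): noise concatenated to input

h = c = np.zeros(D_H)
for t in range(8):                         # observation steps
    h, c = lstm_step(rng.normal(size=D_S), h, c, enc)

z = rng.normal(size=D_Z)                   # noise z, fixed per sampled future
for t in range(12):                        # prediction steps
    s_t = rng.normal(size=D_S)             # SIM embedding at time t
    h, c = lstm_step(np.concatenate([s_t, z]), h, c, dec)
```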
The decoder hidden-state at time-step $t$ of pedestrian $i$ is then used to predict the velocity at time-step $t + 1$. Similar to Alahi \textit{et al.}~\cite{Alahi2016SocialLH}, we model the next velocity as a bivariate Gaussian distribution parametrized by the mean $\mu^{t+1} = (\mu_x,\mu_y)^{t+1}$, standard deviation $\sigma^{t+1} = (\sigma_x,\sigma_y)^{t+1}$ and correlation coefficient $\rho^{t+1}$:
\begin{equation}
[\mu^{t+1}, \sigma^{t+1}, \rho^{t+1}] = \phi_{dec}(h_i^{t}; W_{\mathrm{norm}}),
\end{equation}
where $\phi_{dec}$ is an MLP and $W_{\mathrm{norm}}$ is learned.
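Given the predicted parameters, the next velocity is drawn from the corresponding bivariate Gaussian; a sketch of the sampling step, assembling the covariance matrix from $\sigma$ and $\rho$:

```python
import numpy as np

def sample_velocity(mu, sigma, rho, rng):
    """Sample (vx, vy) from a bivariate Gaussian with mean mu,
    standard deviations sigma = (sx, sy) and correlation rho."""
    cov = np.array([
        [sigma[0] ** 2,              rho * sigma[0] * sigma[1]],
        [rho * sigma[0] * sigma[1],  sigma[1] ** 2],
    ])
    return rng.multivariate_normal(mu, cov)

rng = np.random.default_rng(0)
v = sample_velocity(np.array([0.1, 0.0]), np.array([0.2, 0.2]), 0.5, rng)
```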
\textbf{Discriminator.}
The social interactions between humans evolve with time. Therefore, we design our discriminator to perform spatio-temporal interaction modelling. Moreover, transformers \cite{Vaswani2017AttentionIA} have become the de facto model for temporal sequence modelling, replacing recurrent architectures \cite{Giuliari2020TransformerNF, Yu2020SpatioTemporalGT}. Accordingly, we design the discriminator as a transformer that performs temporal sequence modelling of the output provided by SIM.
The discriminator takes as input $\textit{Traj}_{\textit{real}} = [\mathbf{X}, \mathbf{Y}]$ or $\textit{Traj}_{\textit{fake}} = [\mathbf{X}, \mathbf{\hat{Y}}]$ and classifies them as real/fake. The discriminator has its own SIM, which provides the spatial interaction embedding $s^{t}_{i}$ for each pedestrian $i$ at each time-step $t$ in the input sequence. Instead of passing $s^{t}_{i}$ through an LSTM (similar to the generator), we stack these embedded vectors together to form an embedded sequence $S_{i}$ for each pedestrian $i$ (similar to an embedded sequence obtained after embedding word tokens in the field of natural language \cite{Vaswani2017AttentionIA}):
\begin{equation}
S_{i} = [s^{1}_{i}; s^{2}_{i}; \ldots; s^{T_{pred}}_{i}].
\end{equation}
This sequence $S_{i}$ is given as input to the encoder of the transformer proposed in \cite{Vaswani2017AttentionIA}. The ability of transformers to capture the temporal correlations within the spatial interaction embedding lies mainly in its self-attention module. Within the attention module, each element of the sequence $S_{i}$ is decomposed into query (Q), key (K) and value (V). The matrix of outputs is computed using the following equation \cite{Vaswani2017AttentionIA}:
\begin{equation}
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,
\end{equation}
where $d_k$ is the dimension of the SIM embedding $s^{t}_{i}$. The output of the attention layer is normalized and passed through a feedforward layer to obtain the latent representation of the input sequence, denoted by $R_{i}$:
\begin{equation}
R_{i} = \max(0, A_{i} W_{1} + b_{1}) W_{2} + b_{2},
\end{equation}
where the weights $W_{1}, W_{2}, b_{1}, b_{2}$ are learned and $A_{i}$ denotes the normalized representation of the output of the attention module. We utilize the last element of $R_{i}$ as the representation of the input sequence. This embedding is scored using an MLP $\phi_{d}$ to determine whether the sequence is real or fake.
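For concreteness, the scaled dot-product attention used above can be written in a few lines of NumPy; here we apply self-attention with $Q = K = V = S_i$, and the sequence length and embedding dimension are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # rows sum to one
    return w @ V

rng = np.random.default_rng(0)
T, d = 20, 8                    # T_pred-length sequence, embedding dim
S = rng.normal(size=(T, d))     # embedded sequence S_i from SIM
A = attention(S, S, S)          # self-attention over the sequence
```

Each output row is a convex combination of the value vectors, so the attended sequence stays within the range of the inputs.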
\subsection{Training}
As mentioned earlier, SGANv2 is a conditional GAN model. It takes as input noise vector $z$, sampled from $\mathcal{N}(0, 1)$, and outputs future trajectories $\mathbf{\hat{Y}}$ conditioned on the past observations $\mathbf{X}$. We found the least-square training objective \cite{Mao2017LeastSG} to be effective in training SGANv2:
\begin{align} \label{eq:ls_minimax}
&\scalemath{0.8}{\min_{G} \mathcal{L}(G) = \frac{1}{2} \mathbb{E}_{z \sim p_z} [(D(X, G(X, z)) - 1)^{2}]}, \\
&\scalemath{0.8}{
\min_{D} \mathcal{L}(D) = \frac{1}{2} \mathbb{E}_{x \sim p_r} [ (D(X, Y) - 1)^{2} ] + \frac{1}{2} \mathbb{E}_{z \sim p_z} [(D(X, G(X, z)))^{2}]}.
\end{align}
Additionally, we utilize the variety loss \cite{Gupta2018SocialGS} to further encourage the network to produce diverse samples. For each scene, we generate $k$ output predictions by randomly sampling $z$ and penalize only the prediction closest to the ground truth in terms of $L_2$ distance:
\begin{equation}
\mathcal{L}_{variety} = \min_{k} \| Y - G(X, z)^{(k)} \|_{2}^{2}.
\end{equation}
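A sketch of the three training objectives above; \texttt{d\_real} and \texttt{d\_fake} denote discriminator scores, and \texttt{Y\_hats} denotes the $k$ sampled predictions (the function names are ours):

```python
import numpy as np

def lsgan_g_loss(d_fake):
    # Least-squares generator loss: push D(X, G(X, z)) towards 1.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss: real scores towards 1, fake towards 0.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def variety_loss(Y, Y_hats):
    # Penalize only the sampled prediction closest to the ground truth.
    return min(np.sum((Y - Y_k) ** 2) for Y_k in Y_hats)
```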
Following the strategy in \cite{Kothari2020HumanTF}, the generator predicts only the trajectory of the pedestrian of interest in each scene and uses the ground-truth future of neighbours during training. During test time, we predict the trajectories of \textit{all} the pedestrians simultaneously in the scene. All the learnable weights are shared between all pedestrians in the scene.
\begin{figure}
\centering
\begin{subfigure}[h]{0.40\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/collab1.pdf}
\end{subfigure}
\setlength{\belowcaptionskip}{-12pt}
\caption{Illustration of trajectory refinement using collaborative sampling. The trained discriminator provides feedback to improve the generated samples during test-time.}
\label{fig:collab}
\end{figure}
\subsection{Collaborative Sampling in GANs}\label{Collab-section}
The common practice in GANs is to sample from the generator and discard the discriminator at test time. However, our trained discriminator has knowledge regarding the social etiquette of human motion. We can utilize this knowledge to refine the \textit{bad} predictions proposed by the generator. We define a prediction as \textit{bad} if the pedestrian of interest undergoes a collision in the model prediction. We propose to refine such trajectories by performing collaborative sampling \cite{Liu2019CollaborativeGS} between the generator and discriminator, as demonstrated in Fig.~\ref{fig:collab}.
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{lccc|ccc|ccc|ccc|ccc}
\toprule
\multicolumn{1}{l}{\textbf{Model}} & \multicolumn{3}{c}{\textbf{ETH}} & \multicolumn{3}{c}{\textbf{HOTEL}} & \multicolumn{3}{c}{\textbf{UNIV}} & \multicolumn{3}{c}{\textbf{ZARA1}} &\multicolumn{3}{c}{\textbf{ZARA2}}\\ \rowcolor{gray!15}
\; & Top-3 & Top-20 & Col
& Top-3 & Top-20 & Col
& Top-3 & Top-20 & Col
& Top-3 & Top-20 & Col
& Top-3 & Top-20 & Col\\ \midrule
Transformer$^\dagger$ \cite{Giuliari2020TransformerNF}
& 1.0/1.9 & 0.6/0.9 & 5.8
& 0.5/0.9 & 0.3/0.5 & 8.2
& 2.3/4.2 & 0.8/1.3 & 10.9
& 0.5/1.0 & 0.3/0.4 & 7.1
& 0.4/0.8 & \textbf{0.2}/\textbf{0.3} & 11.3 \\ \rowcolor{gray!15}
STGAT$^\dagger$ \cite{Huang2019STGATMS}
& \textbf{0.9}/\textbf{1.8} & 0.7/1.2 & 1.7
& 0.7/1.4 & 0.5/1.0 & 4.2
& 0.6/1.2 & 0.3/0.7 & 13.9
& 0.4/0.9 & \textbf{0.2}/\textbf{0.4} & 3.9
& 0.4/0.7 & 0.2/0.4 & 6.9 \\
Social-STGCNN$^\dagger$ \cite{Mohamed2020SocialSTGCNNAS}
& 1.0/1.8 & 0.7/1.2 & 6.7
& 0.4/0.8 & 0.3/0.6 & 10.4
& 0.7/1.3 & 0.5/0.8 & 25.0
& 0.5/0.9 & 0.3/0.5 & 12.1
& 0.4/0.8 & 0.3/0.5 & 19.4 \\ \rowcolor{gray!15}
Uniform Predictor (UP)
& 1.1/2.2 & \textbf{0.6}/\textbf{0.9} & 3.3
& 0.5/0.9 & \textbf{0.2}/\textbf{0.4} & 5.1
& 0.6/1.3 & \textbf{0.3}/\textbf{0.6} & 15.7
& 0.5/1.0 & 0.3/0.6 & 4.7
& 0.4/0.8 & 0.2/0.4 & 7.5 \\
\midrule
SGANv2 [Ours]
& 1.0/1.9 & 0.7/1.2 & \textbf{1.0}
& \textbf{0.4}/\textbf{0.7} & 0.3/0.5 & \textbf{1.2}
& \textbf{0.6}/\textbf{1.3} & 0.5/0.8 & \textbf{8.3}
& \textbf{0.4}/\textbf{0.8} & 0.3/0.6 & \textbf{1.3}
& \textbf{0.3}/\textbf{0.7} & 0.3/0.5 & \textbf{2.2} \\
\bottomrule
\end{tabular}}
\setlength{\belowcaptionskip}{-10pt}
\caption{Quantitative evaluation of various methods on ETH-UCY. Errors reported are Top-K ADE/FDE (in m) and collision (in \%). Observing only the Top-20 metric (as done by previous work) can lead to incorrect conclusions: a high-entropy uniform predictor is highly competitive with state-of-the-art multimodal forecasting methods under the Top-20 metric. See Table \ref{tab:Real} for a more informative evaluation.}
\label{tab:top20}
\end{table*}
To summarize collaborative sampling for the case of trajectory forecasting, our goal is to refine the generator prediction using gradients from the discriminator \textit{without} updating the parameters of the generator. We leverage the gradient information provided by the discriminator to continuously refine the generator predictions of the pedestrian of interest $i$ through the following iterative update:
\begin{equation} \label{eq:backward}
\hat{Y}_i^{m+1} = \hat{Y}_i^{m} - \lambda \nabla \mathcal{L}_{G} (\hat{Y}_i^m),
\end{equation}
where $m$ is the iteration number, $\lambda$ is the step size, and $\mathcal{L}_{G}$ is the generator loss of Eq.~\ref{eq:ls_minimax}. Liu \textit{et al.}~\cite{Liu2019CollaborativeGS} demonstrate that, under mild assumptions, the above iterative process shifts the learned generator distribution towards the real distribution. The trajectories are updated until either the discriminator score exceeds a defined threshold or the maximum number of iterations is reached.
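The update of Eq.~\ref{eq:backward} can be illustrated with a toy differentiable critic standing in for the trained transformer discriminator; the linear-sigmoid critic, its analytic gradient, and all dimensions are ours, purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
w = rng.normal(size=24)           # toy linear critic in place of the trained D

def d_score(Y):
    return sigmoid(w @ Y)         # scalar "realism" score D(Y)

def d_grad(Y):
    # Gradient of L_G = 0.5 * (D(Y) - 1)^2 with respect to the prediction Y.
    s = d_score(Y)
    return (s - 1.0) * s * (1.0 - s) * w

def refine(Y_hat, steps=5, lr=0.01, thresh=0.9):
    """Refine the prediction itself (generator weights stay frozen)."""
    Y = Y_hat.copy()
    for _ in range(steps):
        if d_score(Y) > thresh:   # discriminator already satisfied
            break
        Y -= lr * d_grad(Y)       # gradient step on the prediction
    return Y

Y0 = rng.normal(size=24)          # flattened 12-step (x, y) prediction
Y1 = refine(Y0)
```

Each step moves the prediction in the direction that raises the critic score, mirroring how the trained discriminator nudges colliding trajectories towards realistic ones.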
\section{Experiments}
\begin{figure}
\centering
\begin{subfigure}[h]{0.19\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/SP.png}
\end{subfigure}
\setlength{\belowcaptionskip}{-12pt}
\caption{20 uniformly spread predictions (solid) of a handcrafted predictor conditioned on the last observed velocity (dotted).}
\label{fig:sp}
\end{figure}
In this section, we highlight the ability of SGANv2 to output socially-compliant multimodal futures. We evaluate the performance of our architecture against several state-of-the-art methods on the ETH/UCY datasets \cite{Pellegrini2009YoullNW, Lerner2007CrowdsBE} and on the interaction-centric TrajNet++ benchmark \cite{Kothari2020HumanTF}. Additionally, we highlight the potential of collaborative sampling to prevent mode collapse on the Forking Paths \cite{liang2020garden} dataset. We evaluate two variants of our model against various baselines:
\begin{itemize}
\item \textbf{SGANv2 w/o CS}: Our GAN architecture comprising of a transformer-based discriminator that performs spatio-temporal interaction modelling.
\item \textbf{SGANv2}: Our complete GAN architecture in combination with collaborative sampling at test-time.
\end{itemize}
\subsection{Evaluation Metrics}
\begin{enumerate}[itemsep=0.25cm]
\item \textbf{Top-K Average Displacement Error (ADE)}: Average $L_{2}$ distance between the ground truth and the closest prediction (out of $k$ samples) over all predicted time steps.
\item \textbf{Top-K Final Displacement Error (FDE)}: The distance between the final destination of the closest prediction (out of $k$ samples) and the ground-truth final destination at the end of the prediction period $T_{pred}$.
\item \textbf{Prediction collision (Col)} \cite{Kothari2020HumanTF}: The percentage of collision between the primary pedestrian and the neighbors in the \textit{forecasted future} scene.
\end{enumerate}
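The three metrics can be sketched as follows; the collision radius here is an illustrative value (the benchmark fixes its own threshold), and the best sample is selected independently for ADE and FDE:

```python
import numpy as np

def top_k_ade_fde(Y, Y_hats):
    """Top-K ADE/FDE: errors of the sampled prediction closest to the ground
    truth. Y: (T, 2) ground truth; Y_hats: list of k (T, 2) predictions."""
    ades = [np.linalg.norm(Y - Yk, axis=-1).mean() for Yk in Y_hats]
    fdes = [np.linalg.norm(Y[-1] - Yk[-1]) for Yk in Y_hats]
    return min(ades), min(fdes)

def collision_flag(primary, neighbours, radius=0.1):
    """Prediction collision: True if the primary pedestrian comes within
    `radius` metres of any neighbour at any forecasted time-step.
    primary: (T, 2); neighbours: (n, T, 2)."""
    d = np.linalg.norm(neighbours - primary[None], axis=-1)  # (n, T)
    return bool((d < radius).any())
```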
\subsection{Limitations of current multimodal evaluation scheme}
Current multimodal forecasting works utilize metrics that measure model performance at the \textit{individual level}, such as the Top-$k$ ADE/FDE \cite{Gupta2018SocialGS, Huang2019STGATMS}. This metric evaluates the quality of the predicted distribution per pedestrian and does not measure the interaction between different pedestrians. Further, the value of $k$ is typically very high ($k=20$ being most common). Almost all recent works \cite{Gupta2018SocialGS, Kosaraju2019SocialBiGATMT, Daniel2021PECNetAD, Huang2019STGATMS, Giuliari2020TransformerNF} in human trajectory forecasting utilize the Top-20 ADE/FDE metric \cite{Gupta2018SocialGS} to quantify multimodal performance. We argue that measuring multimodal performance based solely on this metric can be misleading.
The Top-20 ADE/FDE metric can be easily cheated by predicting a high-entropy distribution that covers all the space but is not precise \cite{Eghbalzadeh2017LikelihoodEF}. We empirically validate this claim by comparing state-of-the-art baselines against a simple hand-crafted uniform predictor (\texttt{UP}). \texttt{UP} takes as input the last observed velocity of each pedestrian and outputs 20 uniformly spread trajectories (see Fig.~\ref{fig:sp}). \texttt{UP} outputs 20 predictions using the combination of 5 different relative direction profiles $[0, 25, 50, -25, -50]$ (in degrees relative to the current direction of motion) and 4 different relative speed profiles $[1, 0.75, 1.25, 0.25]$ (factors of the current speed).
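A sketch of \texttt{UP} following the profiles above; each rollout assumes constant velocity along the rotated and scaled last observed velocity:

```python
import numpy as np

def uniform_predictor(last_pos, last_vel, t_pred=12):
    """20 constant-velocity rollouts: 5 direction x 4 speed profiles."""
    angles = np.deg2rad([0, 25, 50, -25, -50])   # relative to current heading
    speeds = [1.0, 0.75, 1.25, 0.25]             # factors of current speed
    preds = []
    for a in angles:
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        for s in speeds:
            v = s * (rot @ last_vel)
            # position at step k is last_pos + k * v
            preds.append(last_pos + np.outer(np.arange(1, t_pred + 1), v))
    return np.stack(preds)                       # (20, t_pred, 2)

P = uniform_predictor(np.zeros(2), np.array([1.0, 0.0]))
```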
Table~\ref{tab:top20} compares the performance of recent state-of-the-art methods \cite{Giuliari2020TransformerNF, Huang2019STGATMS, Mohamed2020SocialSTGCNNAS} and \texttt{UP} on the ETH-UCY datasets. When observing the \textit{Top-20 metric only}, \texttt{UP} appears to perform better than (or on par with) the state-of-the-art baselines. However, its high rate of prediction collisions makes it apparent that \texttt{UP} is not a good multimodal predictor. This corroborates our conjecture that a high-entropy distribution can easily cheat the Top-20 metric, leading to incorrect conclusions.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.98\textwidth]{IEEEtran/figures/Collab_Sample_Whole_Viz.pdf}
\caption{Illustration of collaborative sampling at test-time to reduce model collisions in both TrajNet++ synthetic and real-world datasets. Given a generator prediction of the pedestrian of interest (blue) that undergoes collision with the neighbours (red), our discriminator, equipped with spatio-temporal interaction modelling, provides feedback based on its learned understanding of human-human interactions. Consequently, the resulting refined prediction (green) does not undergo collision and, in some cases, is closer to the ground-truth (black).}
\label{fig:collab_sample_viz}
\end{figure*}
\subsection{Multimodal Evaluation Scheme}
To counter the above issues with the current multimodal evaluation strategy, we propose to set $k$ to a lower value in our experiments, as a lower $k$ is a better proxy for likelihood estimation for implicit generative models \cite{Eghbalzadeh2017LikelihoodEF}. Specific to our problem, we will demonstrate that when $k$ is low ($k = 3$), the uniform predictor performs poorly compared to interaction-based baselines \cite{Huang2019STGATMS, Mohamed2020SocialSTGCNNAS} due to its lack of social interaction modelling. Further, to measure the interaction-modelling capability, we focus on the percentage of collisions between the primary pedestrian and the neighbors in the \textit{forecasted future} scene.
\subsection{Synthetic Experiments}
We first demonstrate the efficacy of our proposed architectural changes in SGANv2 compared to other generative model designs in the TrajNet++ synthetic setup. We observe that SGANv2 greatly improves upon the Top-3 ADE/FDE metric with a lower collision metric compared to training a model using only variety loss (see Table~\ref{tab:traj_synth}).
\begin{table}[htb!]
\centering
\resizebox{0.34\textwidth}{!}{\begin{tabular}{lcc}
\toprule
\textbf{Method} & \textbf{Top-3} & \textbf{Col} \\ \midrule
CV* \cite{Schller2020WhatTC} & 0.4/1.0 & 21.1 \\ \rowcolor{gray!15}
LSTM* \cite{Hochreiter1997LongSM} & 0.3/0.6 & 19.0 \\
S-LSTM* \cite{Kothari2020HumanTF} & 0.2/0.5 & 2.2 \\ \rowcolor{gray!15}
D-LSTM* \cite{Kothari2020HumanTF} & 0.2/0.5 & 2.2 \\
CVAE \cite{Lee2017DESIREDF} & 0.2/0.5 & 4.6 \\ \rowcolor{gray!15}
WTA \cite{Rupprecht2016LearningIA} & 0.2/0.4 & 2.4\\
SGAN \cite{Gupta2018SocialGS} & 0.2/0.4 & 2.8\\ \midrule \rowcolor{gray!15}
SGANv2 w/o CS [Ours] & 0.2/0.4 & 1.9 \\
SGANv2 [Ours] & \textbf{0.2}/\textbf{0.4} & \textbf{0.6}\\
\bottomrule
\end{tabular}}
\caption{Quantitative evaluation on TrajNet++ synthetic dataset. Errors reported are Top-3 ADE/FDE (in m) and collision (in \%). SGANv2 with collaborative sampling greatly reduces the model prediction collisions without compromising on the distance-based metrics. *Unimodal methods}
\label{tab:traj_synth}
\end{table}
\begin{table*}[t]
\centering
\resizebox{0.90\textwidth}{!}{\begin{tabular}{lcccccccccc}
\toprule
\multicolumn{1}{l}{\textbf{Model}} & \multicolumn{2}{c}{\textbf{ETH}} & \multicolumn{2}{c}{\textbf{HOTEL}} & \multicolumn{2}{c}{\textbf{UNIV}} & \multicolumn{2}{c}{\textbf{ZARA1}} &\multicolumn{2}{c}{\textbf{ZARA2}}\\ \rowcolor{gray!15}
\; & Top-3 & Col
& Top-3 & Col
& Top-3 & Col
& Top-3 & Col
& Top-3 & Col\\ \midrule
CV* \cite{Schller2020WhatTC}
& 1.1/2.3 & 5.3
& 0.4/0.8 & 7.2
& 0.6/1.4 & 20.3
& 0.4/1.0 & 6.0
& 0.3/0.7 & 9.6 \\ \rowcolor{gray!15}
LSTM* \cite{Hochreiter1997LongSM}
& 1.0/2.1 & 5.8
& 0.5/0.9 & 6.7
& 0.6/1.3 & 20.2
& 0.5/1.0 & 5.2
& 0.4/0.8 & 9.5 \\
Uniform Predictor
& 1.1/2.2 & 3.3
& 0.5/0.9 & 5.1
& 0.6/1.3 & 15.7
& 0.5/1.0 & 4.7
& 0.4/0.8 & 7.5 \\ \rowcolor{gray!15}
Transformer$^\dagger$ \cite{Giuliari2020TransformerNF}
& 1.0/1.9 & 5.8
& 0.5/0.9 & 8.2
& 2.3/4.2 & 10.9
& 0.5/1.0 & 7.1
& 0.4/0.8 & 11.3 \\
S-LSTM* \cite{Alahi2016SocialLH}
& 1.1/2.1 & 2.2
& 0.5/0.9 & 2.5
& 0.7/1.5 & 11.8
& 0.4/0.9 & 2.7
& 0.4/0.8 & 3.7 \\ \rowcolor{gray!15}
CVAE \cite{Lee2017DESIREDF}
& 1.1/2.2 & 2.8
& 0.4/0.8 & 1.5
& 0.7/1.5 & 12.6
& 0.4/0.9 & 2.6
& 0.4/0.8 & 3.5 \\
WTA \cite{Rupprecht2016LearningIA}
& 1.0/1.9 & 2.5
& 0.4/0.7 & 2.3
& 0.6/1.3 & 12.7
& 0.4/0.8 & 2.2
& 0.3/0.7 & 4.1 \\ \rowcolor{gray!15}
SGAN \cite{Gupta2018SocialGS}
& 1.0/2.0 & 2.2
& 0.4/0.7 & 1.7
& 0.6/1.3 & 11.8
& 0.4/0.8 & 2.3
& 0.3/0.7 & 3.2 \\
STGAT$^\dagger$ \cite{Huang2019STGATMS}
& \textbf{0.9}/\textbf{1.8} & 1.7
& 0.7/1.4 & 4.2
& 0.6/1.2 & 13.9
& 0.4/0.9 & 3.9
& 0.4/0.7 & 6.9 \\ \rowcolor{gray!15}
Social-STGCNN$^\dagger$ \cite{Mohamed2020SocialSTGCNNAS}
& 1.0/1.8 & 6.7
& 0.4/0.8 & 10.4
& 0.7/1.3 & 25.0
& 0.5/0.9 & 12.1
& 0.4/0.8 & 19.4 \\
S-BiGAT \cite{Kosaraju2019SocialBiGATMT}
& 1.0/1.9 & 3.3
& 0.4/0.7 & 1.7
& 0.6/1.3 & 11.5
& 0.4/0.8 & 2.2
& 0.3/0.7 & 3.3 \\ \rowcolor{gray!15}
\midrule
SGANv2 w/o CS [Ours]
& 1.0/1.9 & 1.7
& 0.4/0.7 & 1.4
& 0.6/1.3 & 11.5
& 0.4/0.8 & 2.1
& 0.3/0.7 & 3.6 \\
SGANv2 [Ours]
& 1.0/1.9 & \textbf{1.0}
& \textbf{0.4}/\textbf{0.7} & \textbf{1.2}
& \textbf{0.6}/\textbf{1.3} & \textbf{8.3}
& \textbf{0.4}/\textbf{0.8} & \textbf{1.3}
& \textbf{0.3}/\textbf{0.7} & \textbf{2.2} \\
\bottomrule
\end{tabular}}
\setlength{\belowcaptionskip}{-10pt}
\caption{Quantitative evaluation of our proposed method on ETH/UCY datasets. We observe the trajectories for 8 time steps (3.2 seconds) and show prediction results for the next 12 time steps (4.8 seconds). Errors reported are Top-3 ADE / FDE (in m), Col (in \%). SGANv2 improves in the collision metric without compromising on the distance-based metrics. *Unimodal}
\label{tab:Real}
\end{table*}
Next, we utilize the collaborative sampling technique to refine trajectories that undergo collision at test time. The trained discriminator provides feedback on the colliding samples, which helps to reduce collisions. For each colliding prediction, we perform 5 refinement iterations with step size 0.01. We observe that this scheme greatly reduces the collision rate, by \textbf{$\sim$ 70\%}. The first row of Fig.~\ref{fig:collab_sample_viz} illustrates the ability of collaborative sampling to refine predictions in the synthetic scenario.
\begin{table}[htb!]
\centering
\resizebox{0.34\textwidth}{!}{\begin{tabular}{lcc}
\toprule
\textbf{Method} & \textbf{Top-3} & \textbf{Col} \\ \midrule
CV* \cite{Schller2020WhatTC} & 0.6/1.3 & 10.9 \\ \rowcolor{gray!15}
LSTM* \cite{Hochreiter1997LongSM} & 0.5/1.2 & 9.3 \\
S-LSTM* \cite{Alahi2016SocialLH} & 0.5/1.0 & 4.9 \\ \rowcolor{gray!15}
D-LSTM* \cite{Kothari2020HumanTF} & 0.5/1.1 & 3.9 \\
CVAE \cite{Lee2017DESIREDF} & 0.5/1.1 & 3.9 \\ \rowcolor{gray!15}
WTA \cite{Rupprecht2016LearningIA} & 0.5/1.0 & 3.5 \\
SGAN \cite{Gupta2018SocialGS} & 0.5/1.0 & 3.5 \\ \rowcolor{gray!15}
S-NCE \cite{Liu2021SocialNC} & 0.5/1.1 & 4.0 \\
PECNet \cite{Daniel2021PECNetAD} & \textbf{0.4}/\textbf{0.9} & 10.7 \\ \hline \rowcolor{gray!15}
Uniform Predictor & 0.6/1.2 & 8.4 \\
Transformer$^\dagger$ \cite{Giuliari2020TransformerNF}. & 0.7/1.3 & 9.4 \\ \rowcolor{gray!15}
STGCNN$^\dagger$ \cite{Mohamed2020SocialSTGCNNAS} & 0.6/1.1 & 12.6 \\
STGAT$^\dagger$ \cite{Huang2019STGATMS} & 0.5/1.1 & 5.6 \\ \rowcolor{gray!15}
S-BiGAT \cite{Kosaraju2019SocialBiGATMT} & 0.5/1.0 & 3.3 \\ \midrule
SGANv2 w/o CS [Ours] & 0.5/1.0 & 3.1 \\ \rowcolor{gray!15}
SGANv2 [Ours] & 0.5/1.0 & \textbf{2.3}\\
\bottomrule
\end{tabular}}
\caption{Quantitative evaluation of our proposed method on TrajNet++ real-world dataset. Errors reported are Top-3 ADE/FDE (in m) and collision (in \%). SGANv2 in combination with collaborative sampling (CS) improves in collision metric without compromising on the distance-based metrics. *Unimodal}
\label{tab:traj_real}
\end{table}
\subsection{Real-World Experiments}
Next, we evaluate the performance of our SGANv2 architecture on the real-world datasets of ETH/UCY and the TrajNet++ benchmark. For ETH/UCY, we observe the trajectories for 8 time steps (3.2 seconds) and show prediction results for the next 12 time steps (4.8 seconds). For TrajNet++, we observe the trajectories for 9 time steps (3.6 seconds) and show prediction results for the next 12 time steps (4.8 seconds).
Table~\ref{tab:Real} provides the quantitative evaluation of various baselines and state-of-the-art forecasting methods on the ETH/UCY dataset. We observe that SGANv2 outputs safer predictions in comparison to competitive baselines without compromising on the prediction accuracy. Our Top-3 ADE/FDE are on par with (if not better than) state-of-the-art methods while our collision rate is significantly reduced thanks to spatio-temporal interaction modelling. It is further interesting to note that Trajectory Transformer \cite{Giuliari2020TransformerNF} and the simple uniform predictor (\texttt{UP}) that performed the best on Top-20 ADE/FDE in Table~\ref{tab:top20} are not among the top performing methods when evaluated on the more-strict Top-3 ADE/FDE. Next, we benchmark on the TrajNet++ with interaction-centric scenes with a standardized evaluator that provides a more objective comparison \cite{Kothari2020HumanTF}.
Table~\ref{tab:traj_real} compares SGANv2 against other competitive baselines on the TrajNet++ real-world benchmark. The first part of Table~\ref{tab:traj_real} reports simple baselines and the top-3 official submissions on AICrowd made by different works in the literature \cite{Liu2021SocialNC, Daniel2021PECNetAD, Kothari2020HumanTF}. SGANv2 performs on par with the top-ranked PECNet \cite{Daniel2021PECNetAD} on the Top-3 evaluation while having \textbf{3x} fewer collisions, demonstrating that spatio-temporal interaction modelling is key to outputting safer trajectories\footnote{PECNet performs spatial interaction modelling only once, at the end of the observation period.}. Additionally, we utilize the open-source implementations of three additional state-of-the-art methods (denoted by $\dagger$) and evaluate them on the TrajNet++ benchmark. Compared to these competing baselines, SGANv2 improves upon the Top-3 ADE/FDE metric by $\sim$10\% and the collision metric by $\sim$40\%.
We perform collaborative sampling to refine trajectories that undergo collision in the real-world datasets. For each colliding prediction, we perform 5 refinement iterations with step size 0.01. We observe that this procedure reduces the collision rate by \textbf{$\sim$30\%} on both ETH/UCY and TrajNet++. The trained discriminator, having learned human social interactions, provides feedback on the bad samples and consequently helps to reduce collisions. The second row of Fig.~\ref{fig:collab_sample_viz} illustrates a few real-world scenarios where collaborative sampling refines generator predictions that undergo collisions. In conclusion, SGANv2 beats competitive baselines in generating socially-compliant trajectories without compromising the distance-based metrics.
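A minimal sketch of this refinement loop, mirroring the 5 iterations with step size 0.01 used above. The quadratic discriminator score and its gradient are toy stand-in assumptions so that the example runs without a trained network.

```python
import numpy as np

def refine(traj, disc_grad, iters=5, step=0.01):
    """Collaborative sampling: nudge a (colliding) predicted trajectory
    along the gradient of the discriminator's realism score."""
    traj = traj.copy()
    for _ in range(iters):
        traj += step * disc_grad(traj)  # gradient ascent on the score
    return traj

# Toy stand-in discriminator: scores a trajectory by its closeness to a
# "socially safe" target path (an assumption for illustration only).
target = np.array([[0.0, 1.0], [0.0, 2.0]])
score = lambda t: -float(np.sum((t - target) ** 2))
grad = lambda t: -2.0 * (t - target)

bad = np.zeros((2, 2))      # a prediction the discriminator dislikes
better = refine(bad, grad)  # 5 iterations, step size 0.01
```

Each iteration moves the sample toward regions the discriminator scores as more socially acceptable, which is how colliding predictions get nudged apart at test time.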
\subsection{Ablation: Interaction Modelling}
In Table~\ref{interaction}, we empirically demonstrate that modelling interactions is key to reducing prediction collisions. We consider the performance of different variants of our proposed SGANv2 architecture based on the interaction modelling schemes within the generator and discriminator. It is apparent that modelling interactions within both the generator and the discriminator is necessary to output safe multimodal trajectories.
\begin{table}[!t]
\centering
\resizebox{0.45\textwidth}{!}{\begin{tabular}{cccccc}
\toprule
\multicolumn{1}{c}{$\textbf{G}_{Pool}$} & \multicolumn{1}{c}{$\textbf{D}_{Pool}$} & \multicolumn{2}{c}{\textbf{TrajNet++ Synth}} & \multicolumn{2}{c}{\textbf{TrajNet++ Real}} \\ \rowcolor{gray!15}
\; & \; & Top-3 & Col & Top-3 & Col
\\ \midrule
\xmark & \xmark & 0.3 / 0.5 & 18.3 & 0.5 / 1.1 & 9.6 \\ \rowcolor{gray!15}
\checkmark & \xmark & 0.2 / 0.4 & 4.1 & 0.5 / 1.0 & 3.9 \\
\checkmark & \checkmark & 0.2 / 0.4 & \textbf{2.9} & 0.5 / 1.0 & \textbf{3.1} \\
\bottomrule
\end{tabular}}
\caption{Interaction modules of SGANv2. Errors reported are Top-3 ADE/FDE (in m) and collision (in \%). Modelling interactions greatly reduces collisions on TrajNet++.}
\label{interaction}
\end{table}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_GT.pdf}
\caption{Ground Truth}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_Variety2.pdf}
\caption{Variety Loss}
\label{fig:var}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_GAN.pdf}
\caption{SGAN}
\label{fig:sgan}
\end{subfigure} \\
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_InfoGAN.pdf}
\caption{Social Ways}
\label{fig:infoGan}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_WGAN2.pdf}
\caption{SGANv2 w/o CS}
\label{fig:WGan}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/FP_Collab2.pdf}
\caption{SGANv2}
\label{fig:WGan_collab}
\end{subfigure}
\caption{Qualitative illustration of effectiveness of collaborative sampling on Forking Paths \cite{liang2020garden}. (b) Training models using variety loss \cite{Rupprecht2016LearningIA} leads to uniform output distribution. (c) Training using SGAN objective \cite{Gupta2018SocialGS} leads to mode collapse while (d) InfoGAN \cite{Amirian2019SocialWL} helps to mitigate the mode collapse issue. (e) SGANv2 helps to cover all the modes, and in combination with (f) collaborative sampling, we can successfully recover all modes with high accuracy.}
\label{fig:collab_fig}
\end{figure*}
\subsection{Multimodal Analysis}
In this final experiment, we demonstrate the potential of collaborative sampling to prevent mode collapse in trajectory generation. We utilize the sample scene `Zara01' from the Forking Paths dataset. We choose this scene because the multimodal futures of the `Zara01' scene are affected only by social interactions, and not by physical obstacles; it thus forms the ideal test ground to check the multimodal performance of forecasting models. In this experiment, we observe the trajectories for 8 time steps (3.2 seconds) and show prediction results for 13 (5.2 seconds) time steps.
Fig.~\ref{fig:collab_fig} qualitatively illustrates the performance of a GAN model trained using variety loss \cite{Rupprecht2016LearningIA, Gupta2018SocialGS} and other GAN objectives on the chosen scene. As there are 4 dominant modes in the scene, we chose $k=4$ for the variety loss. The model trained using variety loss (Fig.~\ref{fig:var}) ends up learning a uniform distribution, \textit{i.e.}, high diversity and low quality, as there is no penalty on the bad samples during training: variety loss only penalizes the sample closest to the ground truth. SGAN training \cite{Gupta2018SocialGS} (Fig.~\ref{fig:sgan}) results in mode collapse, \textit{i.e.}, low diversity and high quality, as standard GAN training is highly unstable. Social Ways \cite{Amirian2019SocialWL} proposed the InfoGAN objective \cite{chen_infogan:_2016} to mitigate the mode collapse issue. InfoGAN improves upon SGAN; however, it still fails to cover all the modes (Fig.~\ref{fig:infoGan}).
Empirically, we found that training SGANv2 with the gradient penalty objective (Fig.~\ref{fig:WGan}), proposed in \cite{arjovsky_wasserstein_2017}, provides a better mode coverage compared to InfoGAN, but the resulting distribution is still not accurate. As shown in Fig.~\ref{fig:WGan_collab}, our proposed collaborative sampling at test-time helps to improve the accuracy of the SGANv2 predictions, recovering modes with low coverage. The trained discriminator guides the generated samples to these modes. Thus, we see that collaborative sampling is not only effective in refining trajectories at test time, but also can help to prevent mode collapse.
\subsection{Key attributes}
We now analyze the performance of the key SGANv2 design choices in the TrajNet++ synthetic setup. In the synthetic setup, we have access to the goals of each agent, allowing us to calculate Distance-to-Goal (Dist2Goal) \cite{Ma2017ForecastingID}, defined as the L2 distance between the predicted final destination and the goal of the agent.
\textbf{Rationale behind Distance to Goal:} It is possible that the generator predicts a socially-acceptable mode that \textit{does not correspond} to the ground-truth mode (see Fig.~\ref{fig:dist2goal_main}). If we calculate the ADE/FDE with respect to the ground-truth for such a predicted mode (that differs from ground-truth), the numbers will be high, \textit{misleading} us to incorrectly conclude that the generator did not learn the underlying task of trajectory forecasting. However, if the predicted destination is close to the goal of the agent, then one can assert that a different but socially acceptable mode has been predicted. The Col metric will help to validate that no collisions take place. Thus, Dist2Goal in combination with the Col metric helps to validate that a predicted mode is socially plausible.
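The distinction can be made concrete with a short sketch; the trajectories below are fabricated for illustration, chosen so that the predicted mode differs from the ground-truth mode but still reaches the goal.

```python
import numpy as np

def fde(pred, gt):
    """Final Displacement Error: distance between the predicted and
    ground-truth positions at the final predicted time step."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def dist2goal(pred, goal):
    """Dist2Goal: distance between the predicted final position and the
    agent's goal (its final position in the entire dataset)."""
    return float(np.linalg.norm(pred[-1] - goal))

goal = np.array([0.0, 10.0])
gt = np.array([[1.0, 4.0], [2.0, 9.0]])      # ground-truth mode (right side)
pred = np.array([[-1.0, 4.0], [-2.0, 9.0]])  # predicted mode (left side)
```

Here `fde(pred, gt)` is large (4.0) even though the prediction is a plausible mode, while `dist2goal(pred, goal)` stays small ($\sqrt{5}\approx 2.24$): exactly the misleading-FDE situation described above.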
Table~\ref{tab:traj_synth_ablat_main} quantifies the performance of various GAN architectures trained \textit{without} variety loss \cite{Rupprecht2016LearningIA}. SGAN \cite{Gupta2018SocialGS} performs the worst on the Col metric, as its discriminator does not perform any interaction modelling and thereby cannot learn the concept of collision avoidance. Only if the discriminator learns the collision-avoidance property can we expect it to teach the generator to output collision-free trajectories.
The global discriminator of S-BiGAT \cite{Kosaraju2019SocialBiGATMT} performs spatial interaction modelling \textit{only once}, at the end of prediction. Thus, the global discriminator is able to reason about interactions spatially but cannot model the temporal evolution of the same. SGANv2 equipped with spatio-temporal interaction modelling results in \textbf{near-zero} prediction collision. It is apparent that spatio-temporal interaction modelling within the discriminator plays a significant role in teaching the generator the concept of collision avoidance.
We now justify the design choices of sequence modelling within the discriminator using the \textit{Dist2Goal} metric. We compare an additional design of our proposed SGANv2 architecture: SGANv2-L, an SGANv2 with an LSTM discriminator. SGANv2-L shows stopping behavior, indicated by the high Dist2Goal value on the test set. In other words, SGANv2-L outputs collision-free trajectories, but the predictions fail to move towards the goal of the primary agent. In comparison, SGANv2 is able to output collision-free trajectories with a lower Dist2Goal (almost matching the ground-truth Dist2Goal value of $8.6$\,m). In conclusion, SGANv2 outputs socially acceptable trajectories when compared to other GAN-based designs.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=\textwidth]{IEEEtran/figures/dist2goal1.png}
\end{subfigure}
\caption[Illustration of the Distance-To-Goal metric.]{Illustration of the difference between Dist2Goal and Final Displacement Error (FDE). FDE is the distance between the ground-truth position and the predicted position at the end of the prediction period. Dist2Goal is the distance between the predicted final position and the final position of the agent in the entire dataset.}
\label{fig:dist2goal_main}
\end{figure}
\begin{table}[t]
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{ lcccc }
\toprule
Model & \makecell{Spatio-temporal \\ Interaction Modelling \\ in Discriminator} & \makecell{Discriminator \\ Design} & Col & Dist2Goal\\
\midrule
Ground-truth & -- & -- & 0.0 & 8.6 \\ \rowcolor{gray!15}
SGAN \cite{Gupta2018SocialGS} & \xmark & LSTM & 24.9 & 8.9 \\
S-BiGAT \cite{Kosaraju2019SocialBiGATMT} & \xmark & LSTM & 8.4 & 8.9\\ \rowcolor{gray!15}
SGANv2-L & \checkmark & LSTM & 0.8 & 8.8\\
SGANv2 & \checkmark & Transformer & \textbf{0.2} & \textbf{8.6} \\
\bottomrule
\end{tabular}}
\caption[Quantitative evaluation of various SGANv2 design choices on TrajNet++ synthetic dataset.]{Quantitative evaluation of various GAN architectural designs on TrajNet++ synthetic dataset. Col in \% and Dist2Goal in meters. SGANv2 learns to successfully predict socially acceptable outputs evidenced by lower collisions and Dist2Goal.}
\label{tab:traj_synth_ablat_main}
\end{table}
\subsection{Computational Time}
Speed is crucial for a method to be deployed in real-world settings such as autonomous vehicles, where accurate predictions of pedestrian behavior are needed in real time. We provide the computational time at inference for our method against baseline unimodal LSTMs with and without interaction modelling. All run times have been benchmarked on a single NVIDIA 2080 Ti GPU. We report the run time per scene (averaged over all scenes in the TrajNet++ real-world benchmark).
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& LSTM & D-LSTM & SGANv2 w/o CS & SGANv2 \\
\hline
Time & 10ms & 22ms & 22ms & 77ms \\
\hline
\end{tabular}
\caption{Computational time comparison at inference per scene for various forecasting designs. The additional computational time for SGANv2 corresponds to the sample refinement process that occurs for five iterations.}
\label{tab:compute}
\end{table}
The runtimes of D-LSTM and SGANv2 without collaborative sampling are similar, as the multiple future predictions in the latter can be generated in parallel, albeit at the cost of additional memory. The relatively higher computational time of collaborative sampling corresponds to the sample refinement process based on the gradients from the discriminator. Nevertheless, the absolute computational time of collaborative sampling (77ms per scene) is suitable for real-time applications like autonomous systems.
\subsection{Implementation details}
The generator and the discriminator have their own spatial interaction embedding modules (SIM). Each pedestrian has its own encoder and decoder.
\paragraph{Synthetic experiments.}\label{synth_param} The velocity of each pedestrian is embedded into a 16-dimensional vector. The hidden-state dimension of the encoder LSTM and decoder LSTM of the generator is 64. The dimension of the interaction vector of both the generator and discriminator is fixed to 64. We utilize the Directional-Grid \cite{Kothari2020HumanTF} interaction module with a grid of size $12 \times 12$ and a resolution of $0.6$ meters. For the LSTM discriminator, the hidden-state dimension is set to 64. For the transformer-based discriminator, we stack N=4 encoder layers together. The dimension of the query, key and value vectors is fixed to 64. The dimension of the feedforward hidden layer within each encoder layer is set to 64. We train using the ADAM optimizer \cite{Kingma2015AdamAM} with a learning rate of 0.0003 for the generator and 0.001 for the discriminator for 50 epochs. The ratio of generator steps to discriminator steps for the LSTM discriminator and the transformer-based discriminator is 2:1. For the synthetic experiments, we have access to the goals of each pedestrian; the direction to the goal is embedded into a 16-dimensional vector. The batch size is fixed to 32.
\paragraph{Real-world experiments.}\label{real_param}
The velocity of each pedestrian is embedded into a 32-dimensional vector. The hidden-state dimension of the encoder LSTM and decoder LSTM of the generator is 128. The dimension of the interaction vector of both the generator and discriminator is fixed to 256. We utilize the Directional-Grid \cite{Kothari2020HumanTF} interaction module with a grid of size $12 \times 12$ and a resolution of $0.6$ meters. For the LSTM discriminator, the hidden-state dimension is set to 128. The ratio of generator steps to discriminator steps is 2:1. For the transformer-based discriminator, we stack N=2 encoder layers together (see Fig. 2 of the main text). The dimension of the query, key and value vectors is fixed to 128. The dimension of the feedforward hidden layer within each encoder layer is set to 1024. We train using the ADAM optimizer \cite{Kingma2015AdamAM} with a learning rate of 0.001 for both the generator and the discriminator for 25 epochs with a learning-rate scheduler of step size 10. The batch size is fixed to 32. The weight of the variety loss is set to 0.2.
\paragraph{Multimodal Analysis.} The velocity of each pedestrian is embedded into a 16-dimensional vector. The hidden-state dimension of the encoder LSTM and decoder LSTM of the generator is 32. We train using ADAM optimizer \cite{Kingma2015AdamAM} with a learning rate of 0.0003 for the generator and 0.001 for the discriminator.
\section{Conclusion}
We presented SGANv2, an improved SGAN architecture equipped with two crucial architectural changes in order to output safety-compliant trajectories. First, SGANv2 incorporates spatio-temporal interaction modelling that helps to capture the subtle nuances of human interactions. Second, the transformer-based discriminator better guides the generator learning process. Furthermore, the collaborative sampling strategy leverages the trained discriminator at test time to identify and refine the socially-unacceptable trajectories output by the generator. We empirically demonstrated the strength of SGANv2 in reducing model collisions without compromising the distance-based metrics. We additionally highlighted the potential of collaborative sampling to overcome mode collapse in a challenging multimodal scenario.
Our work aims at expanding the current horizon of trajectory forecasting models for real-world applications where human lives are at stake, such as social robots or autonomous vehicles. Accuracy, safety, and robustness are all mandatory. Over the past years, researchers have focused their evaluation on distance-based metrics. Yet, if we compare the methods on the safety-critical ``collision'' metric, we observe differences in performance above 50\%. Hence, we believe that one should focus more on this metric and develop methods that aim for zero collisions.
\section*{Acknowledgement}
This work was supported by the Honda R\&D Co. Ltd and EPFL. We also thank VITA members and reviewers for their valuable comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Convergence of \texttt{FedPVR}}
\label{sec:proof}
We first state the convergence theorem, then provide the proof for the convergence rate.
\begin{remark}
Proving convergence for a randomly picked iterate $\bar{\boldsymbol{x}}^R \in \{\boldsymbol{x}^r\}_{r=0}^{R}$ is equivalent to showing the convergence of a (weighted) average of the output criterion, e.g., $\frac{1}{R+1}\sum_{r=0}^{R}\mathbb{E}[f(\boldsymbol{x}^r)] - f^*$ for convex functions{\cite[Remark 17]{DBLP:journals/corr/abs-1909-05350}}. Following this, we assume there exist weights $\{w_r\}$ such that:
\begin{equation*}
\bar{\boldsymbol{x}}^R = \boldsymbol{x}^{r-1} \quad \text{with probability} \quad \frac{w_{r}}{\sum_{\tau} w_{\tau}} \quad \text{for} \quad r\in \{1, ..., R+1\}
\end{equation*}
\end{remark}
\begin{theoremp}{II}
\label{theorem:general}
Suppose the functions $\{f_i\}$ satisfy Assumptions~\ref{assum:beta_smooth} and~\ref{assum:sigma_bound}. Then in each of the following cases, there exist weights $\{w_r\}$ and a local step size $\eta_l$ such that for any $\eta_g\geq 1$, the output of \texttt{FedPVR} satisfies:
\begin{itemize}
\item \textbf{Strongly convex}: $f_i$ satisfies Assumptions~\ref{assum:mu_convex} and~\ref{assum:zeta} for $\mu > 0$, $\eta_l \leq \min\left(\frac{1}{80K\eta_g\beta}, \frac{26}{20\mu K\eta_g}\right)$, $R\geq\max\left(\frac{20}{13}, \frac{160\beta}{\mu}\right)$, then
\begin{equation}
\mathbb{E}[f(\bar{\boldsymbol{x}}^R)] - f(\boldsymbol{x}^*) \leq \tilde{\mathcal{O}}\left(\frac{\sigma^2}{\mu NKR}(1+\frac{N}{\eta_g^2}) + \frac{\zeta_{1-p}^2}{\mu R} + \mu D\exp\left(-\min\left\{\frac{13}{20}, \frac{\mu}{160\beta}\right\}R \right) \right) \,.
\end{equation}
\vspace*{-4mm}
\item \textbf{General convex}: $f_i$ satisfies Assumptions~\ref{assum:mu_convex} and~\ref{assum:zeta} for $\mu=0$, $\eta_l \leq \frac{1}{80K\eta_g\beta}$, then:
\begin{equation}
\mathbb{E}[f(\bar{\boldsymbol{x}}^R)] - f(\boldsymbol{x}^*) \leq \mathcal{O}\left(\frac{\sigma \sqrt{D}}{\sqrt{RKN}}\sqrt{1+\frac{N}{\eta_g^2}} + \frac{\zeta_{1-p}\sqrt{D}}{\sqrt{R}} + \frac{\beta D}{R} + F\right) \,.
\end{equation}
\item \textbf{Non-convex}: $f_i$ satisfies Assumption~\ref{assum:zeta_hat}, $\eta_l \leq \frac{1}{26K\eta_g\beta}$, and $R\geq 1$, then:
\begin{equation}
\mathbb{E}||\nabla f(\bar{\boldsymbol{x}}^R)||^2\leq \mathcal{O}\left(\frac{\sigma\sqrt{F}}{\sqrt{KNR}}\sqrt{\beta(\frac{N}{\eta_g^2} + 1)} + \frac{\hat{\zeta}_{1-p}\sqrt{F}}{\sqrt{R}}\sqrt{\frac{\beta}{\eta_g^2}} + \frac{\beta F}{R} \right)
\end{equation}
\end{itemize}
\vspace*{-4mm}
Here $D:= ||\boldsymbol{x}^0 - \boldsymbol{x}^*||^2$ and $F:= f(\boldsymbol{x}^0) - f(\boldsymbol{x}^*)$
\end{theoremp}
Instead of stating the convergence in terms of the number of rounds $R$, we can also state it in terms of the expected error $\epsilon$ by choosing $\eta_g = \sqrt{N}$.
\begin{corollaryp}{I}
\label{theorem_convergence_appendix}
Suppose the functions $\{f_i\}$ satisfy Assumptions~\ref{assum:beta_smooth} and~\ref{assum:sigma_bound}. Then the output of \texttt{FedPVR} has expected error smaller than $\epsilon$ for $\eta_g = \sqrt{N}$ and some values of $\eta_l$, $R$ satisfying:
\begin{itemize}
\item \textbf{Strongly convex:} $f_i$ satisfies Assumptions~\ref{assum:mu_convex} and~\ref{assum:zeta} for $\mu > 0$, $\eta_l \leq \min\left(\frac{1}{80K\eta_g\beta}, \frac{26}{20\mu K\eta_g}\right)$
\begin{equation}
R = \tilde{\mathcal{O}}\left(\frac{\sigma^2}{\mu NK\epsilon} + \frac{\zeta_{1-p}^2}{\mu\epsilon} + \frac{\beta}{\mu}\right),
\end{equation}
\vspace*{-4mm}
\item \textbf{General convex:} $f_i$ satisfies Assumptions~\ref{assum:mu_convex} and~\ref{assum:zeta} for $\mu=0$, $\eta_l \leq \frac{1}{80K\eta_g\beta}$,
\begin{equation}
R = \mathcal{O}\left(\frac{\sigma^2D}{KN\epsilon^2} + \frac{\zeta_{1-p}^2D}{\epsilon^2} + \frac{\beta D}{\epsilon} + F\right),
\end{equation}
\item \textbf{Non-convex}: $f_i$ satisfies Assumption~\ref{assum:zeta_hat}, $\eta_l \leq \frac{1}{26K\eta_g\beta}$, and $R\geq 1$, then:
\begin{equation}
R=\mathcal{O}\left(\frac{\beta\sigma^2F}{KN\epsilon^2} + \frac{\beta\hat{\zeta}_{1-p}^2F}{N\epsilon^2} + \frac{\beta F}{\epsilon} \right),
\end{equation}
\end{itemize}
\vspace*{-4mm}
Where $D := ||\boldsymbol{x}^0 - \boldsymbol{x}^*||^2$ and $F:=f(\boldsymbol{x}^0) - f^*$.
\end{corollaryp}
In the special case $\boldsymbol{p}=\boldsymbol{1}$ ($\zeta_{1-p}^2 = 0, \hat{\zeta}_{1-p}^2 = 0$), FedPVR is identical to SCAFFOLD and we recover its convergence guarantees. In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance-reduced ($\zeta_{1-p}^2$) becomes negligible if $\tilde{\mathcal{O}}\left(\frac{\zeta_{1-p}^2}{\epsilon}\right)$ is sufficiently smaller than $\tilde{\mathcal{O}}\left(\frac{\sigma^2}{NK\epsilon}\right)$. In such a case, our rate becomes $\frac{\sigma^2}{\mu NK\epsilon} + \frac{1}{\mu}$, which recovers the SCAFFOLD rate in the strongly convex case without sampling and further matches SGD (with mini-batch size $K$ on each worker), showing that FedPVR is at least as fast as SGD.
\begin{remark}[heterogeneity-diversity]
Theoretically, in the non-convex case, the heterogeneity of the block of weights that are not variance-reduced, $\hat{\zeta}_{1-p}^2:=\frac{1}{N}\sum_{i=1}^N||(\boldsymbol{1}-\boldsymbol{p})\odot \nabla f_i(\boldsymbol{x})||^2$, may slow down the convergence. In the manuscript, we observe that the diversity of the block of weights that are not variance-reduced, $\xi^r:=\frac{\sum_{i=1}^N||(\boldsymbol{1}-\boldsymbol{p})\odot(\boldsymbol{y}_{i,K}^r - \boldsymbol{x}^{r-1})||^2}{||\sum_{i=1}^N (\boldsymbol{1}-\boldsymbol{p})\odot(\boldsymbol{y}_{i,K}^r - \boldsymbol{x}^{r-1})||^2}$, tends to increase the convergence speed. These two observations do not necessarily disagree, as $\hat{\zeta}_{1-p}$ and $\xi$ are defined differently; the theory has not yet captured the phenomenon of the diversity of the non-variance-reduced weights improving convergence.
\end{remark}
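The diversity quantity in this remark can be contrasted numerically; the masked client updates below are fabricated toy values (an assumption for illustration), not measurements from our experiments.

```python
import numpy as np

def update_diversity(updates, p):
    """Diversity xi of the non-variance-reduced block: sum of squared
    norms of the masked client updates over the squared norm of their sum."""
    masked = [(1 - p) * u for u in updates]
    total = np.sum(masked, axis=0)
    return float(sum(np.linalg.norm(m) ** 2 for m in masked)
                 / np.linalg.norm(total) ** 2)

p = np.array([1.0, 0.0])  # first coordinate is variance-reduced
aligned = [np.array([5.0, 1.0]), np.array([5.0, 3.0])]   # similar updates
diverse = [np.array([5.0, 1.0]), np.array([5.0, -0.5])]  # opposing updates
```

With these toy values, `update_diversity(diverse, p)` exceeds `update_diversity(aligned, p)`: disagreement among the masked updates inflates $\xi$, whereas $\hat{\zeta}_{1-p}$ bounds per-client gradient norms and need not grow with disagreement.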
\vspace*{-0.5cm}
\subsection{Algorithm and additional definitions}
We write our algorithm using the following notation: $\{\boldsymbol{y}_i\}$ are the client models, $\boldsymbol{x}$ is the aggregated server model, and $\boldsymbol{c}_i$ and $\boldsymbol{c}$ are the client and server control variates. The server maintains a global control variate $\boldsymbol{c}$ and each client maintains its own control variate $\boldsymbol{c}_i$. $N$ is the total number of clients. All clients participate in each round.
To simplify the presentation of the convergence proof, we rewrite the algorithm as shown in Algorithm II. It is identical to Algorithm I in the main paper except that $\boldsymbol{c}_i$ and $\boldsymbol{c}$ are now vectors of the same length as the model parameters; however, we only update the block of weights where the corresponding value in $\boldsymbol{p}$ equals 1. This version of the algorithm may consume more bits during the communication between the clients and server, but its convergence rate is identical to that of Algorithm I in the main manuscript.
\begin{algorithm}[ht!]
\floatname{algorithm}{Algorithm II}
\small
\caption{Partial variance reduction (FedPVR)}
\algcomment{This algorithm is identical to Algorithm I in the manuscript in terms of convergence rate but may consume more bits during the communication between the clients and server.}
\label{append:alg}
\hspace*{\algorithmicindent} \textbf{server}: initialise the server model $\boldsymbol{x}$, control variate $\boldsymbol{c}=\boldsymbol{0}$, and global step size $\eta_g$
\hspace*{\algorithmicindent} \textbf{client}: initialise the control variate $\boldsymbol{c}_i=\boldsymbol{0}$, and local step size $\eta_l$
\hspace*{\algorithmicindent} \textbf{general}: set a binary mask $\boldsymbol{p} \in \{0, 1\}^d$
\label{Algorithm:l_scaffold_appendix}
\begin{algorithmic}[1]
\Procedure{Model updating}{}
\For {$r = 1 \to R $}
\State \textbf{Communicate} $\boldsymbol{x}$ \textbf{and} $\boldsymbol{c}$ \textbf{to all clients} $i \in [N]$
\For {\textbf{client $i=1\rightarrow N$ in parallel}}
\State $\boldsymbol{y}_i \leftarrow \boldsymbol{x}$
\For {$k = 1 \to K$}
\State compute minibatch gradient $g_i(\boldsymbol{y}_i)$
\State $\boldsymbol{y}_i \leftarrow \boldsymbol{y}_{i} - \eta_l(g_i(\boldsymbol{y}_{i}) - \boldsymbol{c_i} + \boldsymbol{c})$
\EndFor
\State $\boldsymbol{c}_{i} \leftarrow \boldsymbol{c}_{i} - \boldsymbol{c} + \frac{1}{K\eta_l}\boldsymbol{p}\odot(\boldsymbol{x} - \boldsymbol{y}_{i})$
\State \textbf{Communicate} $\boldsymbol{y}_{i}, \boldsymbol{c}_i$ \textbf{to the server}
\EndFor
\State $\boldsymbol{x} \leftarrow (1-\eta_g)\boldsymbol{x} + \frac{\eta_g}{N}\sum_{i=1}^N\boldsymbol{y}_{i}$
\State $\boldsymbol{c} \leftarrow \frac{1}{N}\sum_{i=1}^N\boldsymbol{c}_{i}$
\EndFor
\EndProcedure
\Statex
\end{algorithmic}
\vspace{-0.4cm}%
\end{algorithm}
Given the above algorithm, every client performs the following updates:
\begin{itemize}
\item Starting from the shared global parameters $\boldsymbol{y}_{i,0}^r = \boldsymbol{x}^{r-1}$, we update the local parameters:
\begin{equation}
\boldsymbol{y}_{i, k}^r = \boldsymbol{y}_{i, k-1}^r - \eta_l \boldsymbol{v}_{i,k}^r, \quad \text{where} \quad \boldsymbol{v}_{i,k}^r:=g_i(\boldsymbol{y}_{i,k-1}^r) - \boldsymbol{c}_i^{r-1} + \boldsymbol{c}^{r-1} \,.
\label{eq:update_0}
\end{equation}
\item update the control variate
\vspace*{-0.1cm}
\begin{equation}
\boldsymbol{c}_i^r = \boldsymbol{c}_i^{r-1} - \boldsymbol{c}^{r-1} + \frac{1}{K\eta_l}\boldsymbol{p}\odot(\boldsymbol{x}^{r-1} - \boldsymbol{y}_{i, K}^r) \,.
\label{eq:update_1}
\end{equation}
\item compute the new global model and global control variate:
\vspace*{-0.1cm}
\begin{equation}
\boldsymbol{x}^{r} = \boldsymbol{x}^{r-1} + \frac{\eta_g}{N}\sum_{i=1}^N(\boldsymbol{y}_{i, K}^r - \boldsymbol{x}^{r-1}) \quad \text{and} \quad \boldsymbol{c}^{r} = \frac{1}{N}\sum_{i=1}^N\boldsymbol{c}_i^r \,.
\label{eq:update_2}
\end{equation}
\end{itemize}
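The three updates above can be exercised end-to-end on a toy problem. The quadratic client objectives $f_i(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{x} - \boldsymbol{a}_i\|^2$ and the use of full (noise-free) gradients are simplifying assumptions; under them, the server iterates should approach the global optimum $\bar{\boldsymbol{a}} = \frac{1}{N}\sum_i \boldsymbol{a}_i$.

```python
import numpy as np

def fedpvr(a, p, eta_l=0.05, eta_g=1.0, K=5, R=100):
    """FedPVR on toy quadratics f_i(x) = 0.5 * ||x - a[i]||^2 with full
    gradients (no stochastic noise -- a simplifying assumption).
    p is the binary mask selecting the variance-reduced coordinates."""
    N, d = a.shape
    x = np.zeros(d)        # server model
    c = np.zeros(d)        # server control variate
    ci = np.zeros((N, d))  # client control variates
    for _ in range(R):
        ys, new_ci = [], []
        for i in range(N):
            y = x.copy()
            for _ in range(K):  # K corrected local steps, Eq. (1)
                y -= eta_l * ((y - a[i]) - ci[i] + c)
            # control-variate update on the masked block, Eq. (2)
            new_ci.append(ci[i] - c + p * (x - y) / (K * eta_l))
            ys.append(y)
        x = x + eta_g * (np.mean(ys, axis=0) - x)  # server step, Eq. (3)
        ci = np.array(new_ci)
        c = ci.mean(axis=0)
    return x

a = np.array([[1.0, 0.0], [-1.0, 2.0]])       # two clients' optima
x_star = fedpvr(a, p=np.array([1.0, 0.0]))    # variance-reduce coord. 0
```

With $\boldsymbol{p}=\boldsymbol{1}$ the sketch reduces to a SCAFFOLD-style update, and with $\boldsymbol{p}=\boldsymbol{0}$ to FedAvg.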
\begin{definition}
We define the client drift $\mathcal{E}_r$ to be the amount of the movement between a client model $\boldsymbol{y}_{i,k}^r$ and the starting server model $\boldsymbol{x}^{r-1}$
\begin{equation}
\mathcal{E}_r := \frac{1}{NK}\sum_{i=1}^N\sum_{k=1}^K\mathbb{E}||\boldsymbol{y}_{i,k}^r - \boldsymbol{x}^{r-1}||^2\,.
\end{equation}
\end{definition}
\begin{definition}
We define $C_r$ as:
\begin{equation}
C_r := \frac{1}{N}\sum_{i=1}^N\mathbb{E}||\mathbb{E}[\boldsymbol{c}_i^r]- \nabla f_i(\boldsymbol{x}^*)||^2 \,.
\end{equation}
\end{definition}
\begin{definition}
We define the effective learning rate $\tilde{\eta}$ as:
\begin{equation}
\tilde{\eta} = K\eta_l\eta_g \,.
\end{equation}
\end{definition}
\vspace*{-2mm}
\section{Technicalities}
We first summarize the assumptions needed for the proof of convergence in Section~\ref{sec:assumption}, based on the previous literature~\cite{DBLP:journals/corr/abs-2003-10422, DBLP:journals/corr/abs-1910-06378}. We then demonstrate the implications of these assumptions for our proof in Section~\ref{sec:implication}. Following that, we summarize some useful and well-known lemmas in Section~\ref{sec:lemma}.
\subsection{Assumptions}
\label{sec:assumption}
\noindent\textbf{Assumptions on the objective function}
For some of our results we assume (strong) convexity.
\begin{assumptionp}{A-1}[$\mu$-convex]
\label{assum:mu_convex}
$f_i$ is $\mu$-convex for $\mu \geq 0$ and satisfies:
\begin{equation}
\langle \nabla f_i(\boldsymbol{x}), \boldsymbol{y}-\boldsymbol{x}\rangle \leq -(f_i(\boldsymbol{x}) - f_i(\boldsymbol{y}) + \frac{\mu}{2}||\boldsymbol{x}-\boldsymbol{y}||^2), \quad \forall \boldsymbol{x}, \boldsymbol{y}\in \mathbb{R}^d, i\in [N] \,.
\end{equation}
Here, we allow $\mu=0$ (we refer to the case $\mu = 0$ as the general convex case).
\end{assumptionp}
For all our theoretical analysis, we assume $f_i$ is smooth.
\begin{assumptionp}{A-2}[$\beta$-smooth]
\label{assum:beta_smooth}
$f_i$ is $\beta$-smooth and satisfies:
\begin{equation}
||\nabla f_i(\boldsymbol{x}) - \nabla f_i(\boldsymbol{y})|| \leq \beta||\boldsymbol{x}-\boldsymbol{y}||, \quad \forall \boldsymbol{x}, \boldsymbol{y}\in \mathbb{R}^d, i\in [N] \,.
\end{equation}
\begin{equation}
f_i(\boldsymbol{y}) \leq f_i(\boldsymbol{x}) + \langle \nabla f_i(\boldsymbol{x}), \boldsymbol{y}-\boldsymbol{x}\rangle + \frac{\beta}{2}||\boldsymbol{y}-\boldsymbol{x}||^2, \quad \forall \boldsymbol{x}, \boldsymbol{y}\in \mathbb{R}^d, i\in [N] \,.
\end{equation}
\end{assumptionp}
\begin{remark}
If the functions $\{f_i\}$ are convex and $\boldsymbol{x}^*$ is a minimizer of $f$, then $\sum_{i}\nabla f_i(\boldsymbol{x}^*) = 0$ and Assumption~\ref{assum:beta_smooth} implies:
\begin{equation}
\frac{1}{N}\sum_{i=1}^N ||\nabla f_i(\boldsymbol{x}) - \nabla f_i(\boldsymbol{x}^*)||^2 \leq 2\beta(f(\boldsymbol{x}) - f^*) \quad \forall \boldsymbol{x} \in \mathbb{R}^d \,.
\end{equation}
\end{remark}
\noindent\textbf{Assumptions on the noise}
For the convergence analysis of SGD on convex functions, it is usually enough to assume a bound on the noise at the optimum only~\cite{DBLP:journals/corr/abs-2003-10422, DBLP:journals/corr/abs-1907-04232}. Similarly, to express the function heterogeneity at the optimum $\boldsymbol{x}^*$ (such a point always exists for convex functions), we make the following assumption.
\begin{assumptionp}{A-3}[$\zeta$-heterogeneity]
\label{assum:zeta}
We define a measure of variance at the optimum $\boldsymbol{x}^*$ given $N$ clients as:
\begin{equation}
\zeta^2 := \frac{1}{N}\sum_{i=1}^{N}||\nabla f_i(\boldsymbol{x}^*)||^2 \,.
\end{equation}
\end{assumptionp}
For non-convex functions, such a unique optimum $\boldsymbol{x}^*$ does not necessarily exist, so we generalise Assumption~\ref{assum:zeta} to:
\begin{assumptionp}{A-4}[$\hat{\zeta}$-heterogeneity]
\label{assum:zeta_hat}
We assume that there exists constants $\hat{\zeta}$ such that $\forall \boldsymbol{x} \in \mathbb{R}^d$:
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N}||\nabla f_i(\boldsymbol{x})||^2 \leq \hat{\zeta}^2 \,.
\end{equation}
\end{assumptionp}
Another common assumption is that the variance of the stochastic gradients is bounded:
\begin{assumptionp}{A-5}[bounded variance]
\label{assum:sigma_bound}
$g_i(\boldsymbol{x}):= \nabla f_i(\boldsymbol{x};\mathcal{D}_i)$ is unbiased stochastic gradient of $f_i$ with bounded variance:
\begin{equation}
\mathbb{E}_{\mathcal{D}_i}[||g_i(\boldsymbol{x}) - \nabla f_i(\boldsymbol{x})||^2] \leq \sigma^2, \quad \forall x\in \mathbb{R}^d, i\in [N] \,.
\end{equation}
\end{assumptionp}
\subsection{Implications of the assumptions}
\label{sec:implication}
Given a binary mask $\boldsymbol{p}$ that has the same length as $\boldsymbol{x}$, it holds that:
\begin{equation}
||\boldsymbol{p}\odot \boldsymbol{x}|| \leq ||\boldsymbol{x}||\,.
\label{eq:p_rule}
\end{equation}
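As a quick numerical illustration (a sketch with our own toy values), the inequality holds because a binary mask can only zero out coordinates:

```python
import math

def norms(p, x):
    # ||p ⊙ x|| and ||x||: masking with a binary p only zeroes
    # out coordinates, so it can never increase the Euclidean norm.
    masked = math.sqrt(sum((pi * xi) ** 2 for pi, xi in zip(p, x)))
    full = math.sqrt(sum(xi ** 2 for xi in x))
    return masked, full

masked, full = norms([1, 0, 1, 0], [3.0, -4.0, 1.0, 2.0])
assert masked <= full  # ||p ⊙ x|| <= ||x||
```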
Based on Eq.~\ref{eq:p_rule}, we have the following propositions:
\begin{proposition}[Implications of the smoothness Assumption~\ref{assum:beta_smooth}]
Given a binary mask $\boldsymbol{p}$, we define the block of weights that is variance reduced as $\beta_p$-smooth:
\begin{equation}
||\boldsymbol{p}\odot(\nabla f_i(\boldsymbol{x}) - \nabla f_i(\boldsymbol{y}))|| \leq \beta_p||\boldsymbol{x}-\boldsymbol{y}||, \quad \forall \boldsymbol{x}, \boldsymbol{y}\in \mathbb{R}^d, i\in [N] \,.
\end{equation}
If Assumption~\ref{assum:beta_smooth} holds, then it also holds that:
\begin{equation}
\beta_p \leq \beta \,.
\end{equation}
\end{proposition}
\begin{proposition}[Implication of the \textbf{convex} function heterogeneity Assumption~\ref{assum:zeta}]
Given a binary mask $\boldsymbol{p}$, we define the heterogeneity of the block of weights that are not variance reduced at the optimum $\boldsymbol{x}^*$ as:
\begin{equation}
\zeta_{1-p}^2 := \frac{1}{N}\sum_{i=1}^{N}||(\boldsymbol{1}-\boldsymbol{p})\odot\nabla f_i(\boldsymbol{x}^*)||^2 \,.
\end{equation}
If Assumption~\ref{assum:zeta} holds, then it also holds that:
\begin{equation}
\zeta_{1-p}^2 \leq \zeta^2\,.
\end{equation}
\end{proposition}
\begin{proposition}[Implication of the \textbf{non-convex} function heterogeneity Assumption~\ref{assum:zeta_hat}]
Given a binary mask $\boldsymbol{p}$, we assume the heterogeneity of the block of weights that is not variance reduced is bounded by $\hat{\zeta}_{1-p}$:
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N}||(\boldsymbol{1}-\boldsymbol{p})\odot \nabla f_i(\boldsymbol{x})||^2 \leq \hat{\zeta}_{1-p}^2 \,.
\end{equation}
If Assumption~\ref{assum:zeta_hat} holds, then it also holds that:
\begin{equation}
\hat{\zeta}_{1-p}^2 \leq \hat{\zeta}^2\,.
\end{equation}
\end{proposition}
\begin{proposition}[Implication of the bounded variance Assumption~\ref{assum:sigma_bound}]
If Assumption~\ref{assum:sigma_bound} holds, then:
\begin{subequations}
\begin{equation}
\mathbb{E}_{\mathcal{D}_i}[||\boldsymbol{p}\odot(g_i(\boldsymbol{x}) - \nabla f_i(\boldsymbol{x}))||^2] \leq \sigma_p^2, \quad \forall \boldsymbol{x} \in \mathbb{R}^d, i\in [N]
\end{equation}
\begin{equation}
\sigma_p^2\leq\sigma^2 \,.
\end{equation}
\end{subequations}
\end{proposition}
\subsection{Some technical lemmas}
\label{sec:lemma}
We summarize some of the well-known lemmas in this subsection.
\begin{lemma}[triangle inequality]
\label{lemma:triangle}
For an arbitrary set of $N$ vectors $\{\boldsymbol{v}_i\}_{i=1}^N$ with $\boldsymbol{v}_i \in \mathbb{R}^d$, the following hold:
\begin{equation}
||\sum_{i=1}^{N}\boldsymbol{v}_i||^2 \leq N\sum_{i=1}^N||\boldsymbol{v}_i||^2 \,.
\end{equation}
\begin{equation}
||\boldsymbol{v}_i + \boldsymbol{v}_j||^2 \leq (1+\alpha)||\boldsymbol{v}_i||^2 + (1+\alpha^{-1})||\boldsymbol{v}_j||^2, \quad \forall \alpha > 0 \,.
\label{eq:triangle_2}
\end{equation}
\end{lemma}
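Both inequalities are easy to verify numerically; an illustrative sketch with arbitrary toy vectors:

```python
def sq_norm(v):
    # Squared Euclidean norm ||v||^2.
    return sum(c * c for c in v)

def vec_sum(vs):
    # Coordinate-wise sum of a list of vectors.
    return [sum(col) for col in zip(*vs)]

vs = [[1.0, -2.0], [3.0, 0.5], [-1.5, 1.0]]
# ||sum_i v_i||^2 <= N * sum_i ||v_i||^2
assert sq_norm(vec_sum(vs)) <= len(vs) * sum(sq_norm(v) for v in vs)
# ||v + w||^2 <= (1 + a)||v||^2 + (1 + 1/a)||w||^2 for any a > 0
v, w = vs[0], vs[1]
for a in (0.1, 1.0, 10.0):
    lhs = sq_norm([vi + wi for vi, wi in zip(v, w)])
    assert lhs <= (1 + a) * sq_norm(v) + (1 + 1 / a) * sq_norm(w)
```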
\begin{lemma}[separating the mean and variance, {\cite[Lemma.4]{DBLP:journals/corr/abs-1910-06378}}]
\label{lemma:separate_mean_var}
Let $\{\boldsymbol{v}_1, ..., \boldsymbol{v}_\tau\}$ be $\tau$ random variables in $\mathbb{R}^d$ which are not necessarily independent. First, suppose that their means are $\mathbb{E}[\boldsymbol{v}_i]=\boldsymbol{\Xi}_i$ and their variances are bounded as $\mathbb{E}||\boldsymbol{v}_i - \boldsymbol{\Xi}_i||^2 \leq \sigma^2$; then the following holds:
\begin{equation}
\mathbb{E}||\sum_{i=1}^\tau \boldsymbol{v}_i||^2 \leq ||\sum_{i=1}^\tau \boldsymbol{\Xi}_i||^2 + \tau^2\sigma^2 \,.
\end{equation}
Now, suppose their conditional means are $\mathbb{E}[\boldsymbol{v}_i|\boldsymbol{v}_{i-1}, \boldsymbol{v}_{i-2}, \dots, \boldsymbol{v}_1] = \boldsymbol{\Xi}_i$ and the variances are bounded as $\mathbb{E}||\boldsymbol{v}_i-\boldsymbol{\Xi}_i||^2 \leq \sigma^2$; then we can show the tighter bound:
\begin{equation}
\mathbb{E}||\sum_{i=1}^\tau \boldsymbol{v}_i||^2 \leq 2||\sum_{i=1}^\tau \boldsymbol{\Xi}_i||^2 + 2\tau\sigma^2\,.
\end{equation}
\begin{proof}
For any random vector $\boldsymbol{X}$, $\mathbb{E}||\boldsymbol{X}||^2 = \mathbb{E}||\boldsymbol{X}-\mathbb{E}[\boldsymbol{X}]||^2 + ||\mathbb{E}[\boldsymbol{X}]||^2$; this implies:
\begin{equation*}
\mathbb{E}||\sum_{i=1}^\tau \boldsymbol{v}_i||^2 \leq \mathbb{E}||\sum_{i=1}^\tau (\boldsymbol{v}_i - \boldsymbol{\Xi}_i)||^2 + ||\sum_{i=1}^\tau \boldsymbol{\Xi}_i||^2 \leq \tau^2\sigma^2 + ||\sum_{i=1}^\tau \boldsymbol{\Xi}_i||^2 \,.
\end{equation*}
For the second statement, $\boldsymbol{\Xi}_i$ is not deterministic but depends on $\{\boldsymbol{v}_{i-1}, \boldsymbol{v}_{i-2}, \dots, \boldsymbol{v}_1\}$. Based on Lemma~\ref{lemma:triangle}:
\begin{subequations}
\begin{equation*}
\mathbb{E}||\sum_{i=1}^\tau \boldsymbol{v}_i||^2 \leq 2\mathbb{E}||\sum_{i=1}^\tau (\boldsymbol{v}_i - \boldsymbol{\Xi}_i)||^2 + 2\mathbb{E}||\sum_{i=1}^\tau\boldsymbol{\Xi}_i||^2
\end{equation*}
\begin{equation*}
\mathbb{E}||\sum_{i=1}^\tau (\boldsymbol{v}_i-\boldsymbol{\Xi}_i)||^2 = \sum_{i=1}^\tau \mathbb{E}||\boldsymbol{v}_i-\boldsymbol{\Xi}_i||^2 +2\sum_{i<j}\mathbb{E}\langle \boldsymbol{v}_i-\boldsymbol{\Xi}_i, \boldsymbol{v}_j - \boldsymbol{\Xi}_j \rangle \leq \tau \sigma^2 \,,
\end{equation*}
\end{subequations}
where the cross terms vanish by the tower rule, since the conditional mean of $\boldsymbol{v}_j-\boldsymbol{\Xi}_j$ given the preceding variables is zero.
\end{proof}
\end{lemma}
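The first bound can be checked exactly on a toy distribution by enumerating independent two-point noise (an illustrative sketch, not part of the proof):

```python
import itertools

def exact_expected_sq_sum(xis, sigma):
    """Enumerate v_i = xi_i + e_i exactly, with independent noise
    e_i uniform on {-sigma, +sigma} (mean 0, variance sigma^2),
    and return E[(sum_i v_i)^2] for scalar xi_i."""
    total = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=len(xis)):
        s = sum(x + sg * sigma for x, sg in zip(xis, signs))
        total += s * s
    return total / 2 ** len(xis)

xis, sigma, tau = [0.5, -1.0, 2.0], 0.3, 3
lhs = exact_expected_sq_sum(xis, sigma)
mean_sq = sum(xis) ** 2
# Independent zero-mean noise attains exactly ||sum xi||^2 + tau*sigma^2,
# which sits inside the lemma's tau^2 * sigma^2 bound.
assert abs(lhs - (mean_sq + tau * sigma ** 2)) < 1e-9
assert lhs <= mean_sq + tau ** 2 * sigma ** 2 + 1e-9
```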
\begin{lemma}[contractive mapping, {\cite[Lemma.6]{DBLP:journals/corr/abs-1910-06378}}]
\label{lemma:contrastive_mapping}
For any $\beta$-smooth and $\mu$-strongly convex function $h$, points $\boldsymbol{x}$ and $\boldsymbol{y}$ in the domain of $h$, and step-size $\eta \leq \frac{1}{\beta}$, the following is true:
\begin{equation}
||\boldsymbol{x} - \eta \nabla h(\boldsymbol{x}) - \boldsymbol{y} + \eta \nabla h(\boldsymbol{y})||^2 \leq (1-\mu\eta)||\boldsymbol{x}-\boldsymbol{y}||^2 \,.
\label{eq:contrastive_mapping}
\end{equation}
\begin{proof}
\begin{equation*}
\begin{split}
||\boldsymbol{x} - \eta\nabla h(\boldsymbol{x}) - \boldsymbol{y} + \eta\nabla h(\boldsymbol{y})||^2 &= ||\boldsymbol{x} - \boldsymbol{y}||^2 + \eta^2||\nabla h(\boldsymbol{x}) - \nabla h(\boldsymbol{y})||^2 - 2\eta \langle \nabla h(\boldsymbol{x}) - \nabla h(\boldsymbol{y}), \boldsymbol{x}-\boldsymbol{y}\rangle \\
&\leq ||\boldsymbol{x}-\boldsymbol{y}||^2 + (\eta^2\beta-2\eta)\langle \nabla h(\boldsymbol{x}) - \nabla h(\boldsymbol{y}), \boldsymbol{x}-\boldsymbol{y}\rangle\\
&\leq (1-\eta\mu)||\boldsymbol{x}-\boldsymbol{y}||^2 \,.
\end{split}
\end{equation*}
The second step uses the smoothness Assumption~\ref{assum:beta_smooth} via $||\nabla h(\boldsymbol{x}) - \nabla h(\boldsymbol{y})||^2 \leq \beta\langle \nabla h(\boldsymbol{x}) - \nabla h(\boldsymbol{y}), \boldsymbol{x}-\boldsymbol{y}\rangle$. The last step uses $\mu$-strong convexity of $h$ and our bound on the step size $\eta \leq \frac{1}{\beta}$ (which implies $\eta^2\beta - 2\eta \leq -\eta$).
\end{proof}
\end{lemma}
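The contraction is easy to check numerically on a simple quadratic satisfying the assumptions (an illustrative sketch with our own toy choices of $\mu$ and $\beta$):

```python
def contractive_check(x, y, mu=1.0, beta=4.0):
    """Verify the contractive-mapping bound for
    h(v) = 0.5*(mu*v1^2 + beta*v2^2), which is mu-strongly convex
    and beta-smooth; returns (lhs, rhs) of Eq. (contractive mapping)."""
    eta = 1.0 / beta  # largest step size allowed by the lemma
    gx = [mu * x[0], beta * x[1]]  # gradient of h at x
    gy = [mu * y[0], beta * y[1]]  # gradient of h at y
    lhs = sum((xi - eta * gxi - yi + eta * gyi) ** 2
              for xi, gxi, yi, gyi in zip(x, gx, y, gy))
    rhs = (1 - mu * eta) * sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return lhs, rhs

lhs, rhs = contractive_check([1.0, -2.0], [0.5, 3.0])
assert lhs <= rhs  # the mapping contracts by at least (1 - mu*eta)
```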
\begin{lemma}[Perturbed strong convexity, {\cite[Lemma.5]{DBLP:journals/corr/abs-1910-06378}}]
\label{lemma:perturbed_strong_convexity}
The following holds for any $\beta$-smooth and $\mu$-strongly convex function $h$ and any $\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}$ in the domain of $h$:
\begin{equation}
\langle \nabla h(\boldsymbol{x}), \boldsymbol{z}-\boldsymbol{y}\rangle \geq h(\boldsymbol{z}) - h(\boldsymbol{y}) + \frac{\mu}{4}||\boldsymbol{y}-\boldsymbol{z}||^2 - \beta||\boldsymbol{z}-\boldsymbol{x}||^2\,.
\end{equation}
\begin{proof}
Given any $\boldsymbol{x}, \boldsymbol{y}$, and $\boldsymbol{z}$, we can get the following inequalities using smoothness and strong convexity of $h$:
\begin{subequations}
\begin{equation*}
\langle \nabla h(\boldsymbol{x}), \boldsymbol{z}-\boldsymbol{x} \rangle \geq h(\boldsymbol{z}) - h(\boldsymbol{x}) - \frac{\beta}{2}||\boldsymbol{z}-\boldsymbol{x}||^2,
\end{equation*}
\begin{equation*}
\langle \nabla h(\boldsymbol{x}), \boldsymbol{x}-\boldsymbol{y} \rangle \geq h(\boldsymbol{x}) - h(\boldsymbol{y}) + \frac{\mu}{2}||\boldsymbol{x} - \boldsymbol{y}||^2 \,.
\end{equation*}
\end{subequations}
Applying the relaxed triangle inequality in Eq.~\ref{eq:triangle_2}:
\begin{equation*}
\frac{\mu}{2}||\boldsymbol{y}-\boldsymbol{x}||^2 \geq \frac{\mu}{4}||\boldsymbol{y}-\boldsymbol{z}||^2 - \frac{\mu}{2}||\boldsymbol{x}-\boldsymbol{z}||^2\,.
\end{equation*}
Combining the above three inequalities:
\begin{equation*}
\langle \nabla h(\boldsymbol{x}), \boldsymbol{z}-\boldsymbol{y}\rangle \geq h(\boldsymbol{z}) - h(\boldsymbol{y})+\frac{\mu}{4}||\boldsymbol{y}-\boldsymbol{z}||^2 - \frac{\beta+\mu}{2}||\boldsymbol{z}-\boldsymbol{x}||^2 \,.
\end{equation*}
The lemma follows since $\beta \geq \mu$.
\end{proof}
\end{lemma}
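The bound can be sanity-checked numerically on a simple quadratic that is $\mu$-strongly convex and $\beta$-smooth (an illustrative sketch):

```python
def h_val(v, mu=1.0, beta=4.0):
    # h(v) = 0.5*(mu*v1^2 + beta*v2^2): mu-strongly convex, beta-smooth.
    return 0.5 * (mu * v[0] ** 2 + beta * v[1] ** 2)

def perturbed_gap(x, y, z, mu=1.0, beta=4.0):
    """<grad h(x), z - y> minus the lemma's lower bound; non-negative
    whenever the lemma holds."""
    g = [mu * x[0], beta * x[1]]  # gradient of h at x
    inner = sum(gi * (zi - yi) for gi, zi, yi in zip(g, z, y))
    bound = (h_val(z, mu, beta) - h_val(y, mu, beta)
             + mu / 4 * sum((yi - zi) ** 2 for yi, zi in zip(y, z))
             - beta * sum((zi - xi) ** 2 for zi, xi in zip(z, x)))
    return inner - bound

assert perturbed_gap([1.0, 1.0], [0.0, 2.0], [2.0, 0.0]) >= 0.0
```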
\begin{lemma}[Tuning the stepsize, {\cite[Lemma.17]{DBLP:journals/corr/abs-2003-10422}}]
\label{lemma:tune_stepsize}
For any parameters $r_0\geq 0, b\geq 0, e\geq 0, \gamma\geq 0$, there exists a constant step size $\eta \leq \frac{1}{\gamma}$ such that:
\begin{equation}
\Psi_T := \frac{r_0}{\eta(T+1)} + b\eta + e\eta^2 \leq 2\left(\frac{br_0}{T+1}\right)^{\frac{1}{2}} + 2e^{1/3}\left(\frac{r_0}{T+1}\right)^{\frac{2}{3}} + \frac{\gamma r_0}{T+1}\,.
\end{equation}
\begin{proof}
Choosing $\eta = \min\left\{\left(\frac{r_0}{b(T+1)}\right)^{\frac{1}{2}}, \left(\frac{r_0}{e(T+1)} \right)^{\frac{1}{3}}, \frac{1}{\gamma} \right\} \leq \frac{1}{\gamma}$, we have three cases:
\begin{itemize}
\item If $\eta=\frac{1}{\gamma}$ is smaller than the other two terms, then
\begin{equation*}
\Psi_T \leq \frac{\gamma r_0}{T+1}+\frac{b}{\gamma}+\frac{e}{\gamma^2} \leq \left(\frac{br_0}{T+1} \right)^{\frac{1}{2}} + \frac{\gamma r_0}{T+1} + e^{1/3}\left(\frac{r_0}{T+1}\right)^{\frac{2}{3}},
\end{equation*}
\item $\eta = \left(\frac{r_0}{b(T+1)}\right)^{\frac{1}{2}} < \left(\frac{r_0}{e(T+1)} \right)^{\frac{1}{3}}$, then:
\begin{equation*}
\Psi_T \leq 2\left(\frac{br_0}{T+1} \right)^{\frac{1}{2}} + e\left(\frac{r_0}{b(T+1)}\right) \leq 2\left(\frac{br_0}{T+1} \right)^{\frac{1}{2}} + e^{1/3}\left(\frac{r_0}{T+1}\right)^{\frac{2}{3}},
\end{equation*}
\item The last case $\eta=\left(\frac{r_0}{e(T+1)} \right)^{\frac{1}{3}} < \left(\frac{r_0}{b(T+1)}\right)^{\frac{1}{2}}$, then
\begin{equation*}
\Psi_T \leq 2e^{1/3}\left(\frac{r_0}{T+1} \right)^{\frac{2}{3}} + b\left(\frac{r_0}{e(T+1)}\right)^{\frac{1}{3}} \leq 2e^{1/3}\left(\frac{r_0}{T+1}\right)^{\frac{2}{3}} + \left(\frac{br_0}{T+1} \right)^{\frac{1}{2}} \,.
\end{equation*}
\end{itemize}
\end{proof}
\end{lemma}
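The choice of step size in the proof can be checked numerically (an illustrative sketch evaluating $\Psi_T$ against the claimed bound):

```python
def psi(r0, b, e, gamma, T):
    """Evaluate Psi_T at the step size chosen in the proof and the
    claimed upper bound; returns (Psi_T, bound)."""
    eta = min((r0 / (b * (T + 1))) ** 0.5,
              (r0 / (e * (T + 1))) ** (1 / 3),
              1 / gamma)
    value = r0 / (eta * (T + 1)) + b * eta + e * eta ** 2
    bound = (2 * (b * r0 / (T + 1)) ** 0.5
             + 2 * e ** (1 / 3) * (r0 / (T + 1)) ** (2 / 3)
             + gamma * r0 / (T + 1))
    return value, bound

value, bound = psi(r0=1.0, b=1.0, e=1.0, gamma=10.0, T=99)
assert value <= bound
```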
\begin{lemma}[Tuning the stepsize, {\cite[Lemma.2]{DBLP:journals/corr/abs-1907-04232}}]
\label{lemma:stich_stepsize}
If there exist two non-negative sequences $\{r_t\}_{t\geq0}$, $\{s_t\}_{t\geq0}$ and constants $a, b, c>0$ that satisfy the relation:
\begin{equation}
r_{t+1} \leq (1-a\mu_t)r_t - b\mu_ts_t + c\mu_t^2,
\end{equation}
Then there exists a constant step size $\mu_t \equiv \mu \leq \frac{1}{\gamma}$ such that for weight $w_t:=(1-a\mu)^{-t-1}$ and $W_T:=\sum_{t=0}^T w_t$, it holds:
\begin{equation}
\frac{b}{W_T}\sum_{t=0}^T s_tw_t + ar_{T+1} = \tilde{\mathcal{O}}\left(\gamma r_0\exp\left[-\frac{aT}{\gamma}\right] + \frac{c}{aT} \right) \,.
\end{equation}
\begin{proof}
Choosing $\mu = \min\left\{\frac{\ln{(\max\{2, a^2r_0T^2/c\})}}{aT}, \frac{1}{\gamma} \right\}$, we have two cases:
\begin{itemize}
\item If $\frac{1}{\gamma} \geq \frac{\ln{(\max\{2, a^2r_0T^2/c\})}}{aT}$, then we choose $\mu=\frac{\ln{(\max\{2, a^2r_0T^2/c\})}}{aT}$:
\begin{equation*}
\frac{b}{W_T}\sum_{t=0}^T s_tw_t + ar_{T+1} =\tilde{\mathcal{O}}\left(\frac{c}{aT}\right) ,
\end{equation*}
\item If $\frac{1}{\gamma} < \frac{\ln{(\max\{2, a^2r_0T^2/c\})}}{aT}$, then we choose $\mu = \frac{1}{\gamma}$:
\begin{equation*}
\frac{b}{W_T}\sum_{t=0}^T s_tw_t + ar_{T+1} =\tilde{\mathcal{O}}\left(\gamma r_0\exp\left[-\frac{aT}{\gamma}\right] +\frac{c}{aT} \right) .
\end{equation*}
\end{itemize}
\end{proof}
\end{lemma}
\subsection{Additional experimental setups}
\textbf{Data distribution}
We follow the procedure described in~\cite{DBLP:conf/nips/LinKSJ20} to simulate the data heterogeneity scenario: the class proportions on each client are sampled from a Dirichlet distribution with concentration parameter $\alpha$. Fig.~\ref{fig:distribution_of_data} shows the resulting data distributions across clients for CIFAR10 and CIFAR100. When $\alpha=0.1$, some clients may only have data from a single class.
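A minimal sketch of this partitioning scheme (illustrative only; the function name and the pure-Python Dirichlet sampling via normalized Gamma draws are our own choices, not the exact implementation of~\cite{DBLP:conf/nips/LinKSJ20}):

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients so that, per class, the client
    proportions follow a Dirichlet(alpha) draw. Smaller alpha gives
    more heterogeneous clients (alpha=0.1 can leave a client with a
    single class)."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == c]
        rng.shuffle(idx)
        # Dirichlet sample via normalized Gamma draws.
        g = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        props = [x / sum(g) for x in g]
        # Turn cumulative proportions into split points of idx.
        cuts, acc = [], 0.0
        for p in props[:-1]:
            acc += p
            cuts.append(int(acc * len(idx)))
        prev = 0
        for client, cut in enumerate(cuts + [len(idx)]):
            clients[client].extend(idx[prev:cut])
            prev = cut
    return clients

labels = [i % 10 for i in range(1000)]  # 10 balanced classes
parts = dirichlet_partition(labels, num_clients=4, alpha=0.1)
assert sorted(i for part in parts for i in part) == list(range(1000))
```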
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/data_distribution_cifar10_appendix.pdf}
\caption{Different levels of data heterogeneity. The left two figures are from CIFAR10 and the right two figures are from CIFAR100.}
\label{fig:distribution_of_data}
\end{figure}
\textbf{Conformal prediction}
Let $d_i$ denote a data point and $l_i$ denote the corresponding label. Conformal prediction aims to produce a predictive set $\mathcal{C}_{\kappa}(d_i)$ such that:
\begin{equation}
P(l_i \in \mathcal{C}_{\kappa}(d_i)) \geq 1 - \kappa,
\end{equation}
where $\kappa \in (0, 1)$ specifies the desired coverage level. To find the predictive set $\mathcal{C}_\kappa$, we use a threshold $\tau$ on the predicted probabilities to indicate which classes are included in the prediction set. Specifically, we assume that the server has access to a small dataset from the same domain as the datasets on the clients (e.g., a validation dataset). As we only suggest conformal prediction as an effective tool when the task is sensitive and producing a wrong prediction is dangerous (e.g., chemical hazards detection), we think it is reasonable to make such an assumption to guarantee the performance.
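A minimal sketch of this calibration step in the split-conformal style (illustrative; the exact scoring used in our experiments may differ):

```python
import math

def calibrate_tau(cal_probs, cal_labels, kappa):
    """Split conformal calibration: score each calibration point as
    1 - p(true class) and take the ceil((n+1)(1-kappa))-th smallest
    score as the threshold tau."""
    scores = sorted(1.0 - p[l] for p, l in zip(cal_probs, cal_labels))
    n = len(scores)
    rank = math.ceil((n + 1) * (1 - kappa))
    return scores[min(rank, n) - 1]

def prediction_set(probs, tau):
    # Include every class whose score 1 - p(class) falls below tau.
    return [c for c, p in enumerate(probs) if 1.0 - p <= tau]

cal_probs = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]]
cal_labels = [0, 0, 1, 0]
tau = calibrate_tau(cal_probs, cal_labels, kappa=0.25)
assert prediction_set([0.7, 0.3], tau) == [0]
```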
\textbf{Architecture}
We use two types of neural networks in our paper: VGG-11 and ResNet8. We simply adopt the architectures that are used in~\cite{DBLP:conf/nips/LinKSJ20}. The detailed information about these two architectures can be found at \url{https://github.com/epfml/federated-learning-public-code}.
\textbf{Hyperparameters} The learning rates and learning rate schedules are shown in the following two tables. Most of the experiments use a learning rate of 0.1 with a constant schedule, meaning that each client uses a learning rate of 0.1 throughout the communication rounds. To stabilize the training procedure, we apply momentum with factor 0.9 on the block of weights that is not variance-reduced. We use 10 clients with full participation, and the number of local epochs per round is 10.
\begin{table}[ht!]
\caption{The learning rate and learning rate schedule for CIFAR10 experiments, where $c$ represents \texttt{constant} and $m$ represents \texttt{multi-step decay}}
\resizebox{0.85\textwidth}{!}
{\begin{minipage}{\textwidth}
\begin{tabular}{@{}lcccccccccccc@{}}
\toprule
& \multicolumn{6}{c}{VGG-11} & \multicolumn{6}{c}{ResNet8} \\ \midrule
& \multicolumn{2}{c}{FedAvg} & \multicolumn{2}{c}{SCAFFOLD} & \multicolumn{2}{c}{FedPVR} & \multicolumn{2}{c}{FedAvg} & \multicolumn{2}{c}{SCAFFOLD} & \multicolumn{2}{c}{FedPVR} \\
& $\alpha=0.1$ & $\alpha=0.5$ &$\alpha=0.1$ & $\alpha=0.5$ & $\alpha=0.1$ & $\alpha=0.5$ & $\alpha=0.1$ & $\alpha=0.5$ &$\alpha=0.1$ & $\alpha=0.5$ &$\alpha=0.1$ & $\alpha=0.5$ \\
LR &0.05 &0.1 &0.05 &0.1 &0.05 &0.1 & 0.1 &0.2 &0.1 &0.3 &0.1 & 0.3 \\
LR-schedule &c &c &m & c & c &c &c &c &c &c &c & c \\ \bottomrule
\end{tabular}
\end{minipage}}
\end{table}
\begin{table}[ht!]
\caption{The learning rate and learning rate schedule for CIFAR100 experiments where $c$ represents \texttt{constant} and $m$ represents \texttt{multi-step decay} and $cos$ represents \texttt{cosine-decay}}
\resizebox{0.85\textwidth}{!}
{\begin{minipage}{\textwidth}
\begin{tabular}{@{}lcccccccccccc@{}}
\toprule
& \multicolumn{6}{c}{VGG-11} & \multicolumn{6}{c}{ResNet8} \\ \midrule
& \multicolumn{2}{c}{FedAvg} & \multicolumn{2}{c}{SCAFFOLD} & \multicolumn{2}{c}{FedPVR} & \multicolumn{2}{c}{FedAvg} & \multicolumn{2}{c}{SCAFFOLD} & \multicolumn{2}{c}{FedPVR} \\
& $\alpha=0.1$ & $\alpha=1.0$ &$\alpha=0.1$ & $\alpha=1.0$ & $\alpha=0.1$ & $\alpha=1.0$ & $\alpha=0.1$ & $\alpha=1.0$ &$\alpha=0.1$ & $\alpha=1.0$ &$\alpha=0.1$ & $\alpha=1.0$ \\
LR &0.1 &0.05 &0.1 &0.1 &0.1 &0.1 & 0.1 & 0.1 &0.2 & 0.2 &0.1 &0.1 \\
LR-schedule &c & c&c &c& c &c &c &c &cos &c &c & c \\ \bottomrule
\end{tabular}
\end{minipage}}
\end{table}
\subsection{Additional experimental results}
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/influence_of_svr_on_layer_resnet.pdf}
\caption{Influence on the learning speed of applying variance reduction to layers starting from different positions in the network. \texttt{SVR:0$\rightarrow$28} applies variance reduction to the entire model, which corresponds to \texttt{SCAFFOLD}. \texttt{SVR:27$\rightarrow$28} applies variance reduction from layer index 27 to 28, which corresponds to our method. The later we start applying variance reduction, the larger the speedup we obtain; however, applying no variance reduction at all (\texttt{FedAvg}) performs worst here.}
\label{fig:resnet_svr}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/understand_layer_wise_scaffold_supplementary_file.pdf}
\caption{Drift diversity and learning curve for VGG-11 on CIFAR100 with $\alpha=0.1$. Compared to FedAvg, SCAFFOLD and our method both improve the agreement between the classifiers. Compared to SCAFFOLD, our method results in a higher gradient diversity at the early stage of the communication, which tends to boost the learning speed, as the curvature of the drift diversity seems to match the learning curve.}
\label{fig:my_label}
\end{figure}
\section{Introduction}
\label{sec:intro}
Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning, each client (\eg phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy~\cite{DBLP:journals/corr/abs-1912-04977, DBLP:journals/corr/KonecnyMRR16}. Such an algorithm is especially beneficial for tasks where the data is sensitive, \eg chemical hazards detection and disease diagnosis~\cite{Sheller2020}.
\begin{figure}[ht!]
\centering
\includegraphics[width=.48\textwidth]{latex_folder/figures/SCAFFOLD.pdf}
\caption{Our proposed FedPVR framework with the performance (communicated parameters per round client$\Longleftrightarrow$server). Smaller $\alpha$ corresponds to higher data heterogeneity. Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg.}
\label{fig:concept}
\end{figure}
Two primary challenges in federated learning are i) handling data heterogeneity across clients~\cite{DBLP:journals/corr/abs-1912-04977} and ii) limiting the cost of communication between the server and clients~\cite{Halgamuge2009AnEO}. In this setting, FedAvg~\cite{DBLP:journals/corr/KonecnyMRR16} is one of the most widely used schemes: A server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual model to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous~\cite{DBLP:journals/corr/abs-1812-06127, DBLP:journals/corr/abs-1910-06378, DBLP:journals/corr/abs-2106-05001}.
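The FedAvg protocol above can be condensed to a few lines; the following toy sketch (our own illustration, with made-up scalar objectives $f_i(x)=\frac{1}{2}(x-a_i)^2$) shows one server model alternating between local steps and averaging:

```python
def fedavg(client_targets, rounds=50, local_steps=10, lr=0.1):
    """Toy FedAvg on f_i(x) = 0.5 * (x - a_i)^2 per client i.
    Each round: broadcast the server model, run K local gradient
    steps on each client, then average the client models."""
    x = 0.0
    for _ in range(rounds):
        client_models = []
        for a in client_targets:
            y = x  # client starts from the server model
            for _ in range(local_steps):
                y -= lr * (y - a)  # local SGD step on f_i
            client_models.append(y)
        x = sum(client_models) / len(client_models)  # server average
    return x

# For these symmetric quadratics the averaged model reaches the global
# optimum (the mean of the a_i); client drift shows up for less
# well-behaved heterogeneous objectives.
assert abs(fedavg([1.0, 3.0]) - 2.0) < 1e-6
```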
The slow and sometimes unstable convergence of FedAvg can be caused by client drift~\cite{DBLP:journals/corr/abs-1910-06378} brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model~\cite{DBLP:journals/corr/abs-1812-06127, DBLP:journals/corr/abs-2103-16257} or performing variance reduction techniques for updating client models using extra control variates~\cite{DBLP:journals/corr/abs-1910-06378, DBLP:journals/corr/abs-2111-04263, DBLP:journals/corr/ShamirS013}. These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural networks, which are state-of-the-art for many centralized learning tasks~\cite{DBLP:journals/corr/HeZRS15, Simonyan15}, has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial~\cite{DBLP:journals/corr/abs-1812-04529} due to their ``intriguing properties''~\cite{DBLP:journals/corr/SzegedyZSBEGF13} such as over-parametrization and permutation symmetries.
To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11~\cite{Simonyan15}) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. We define \textit{drift diversity}, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. This indicates that FedAvg learns good feature representations even in the non-IID scenario~\cite{https://doi.org/10.48550/arxiv.2205.13692} and that the significant variation of the classifier layers across clients is a primary cause of FedAvg's subpar performance.
Based on the above observations, we propose to align the classification layers across clients using variance reduction. Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client $\boldsymbol{c}_{i}$ and server level $\boldsymbol{c}$ and use their difference as a control variate~\cite{DBLP:journals/corr/abs-1910-06378} to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound.
We perform experiments on the popular federated learning benchmark datasets CIFAR10~\cite{Krizhevsky2009LearningML} and CIFAR100~\cite{Krizhevsky2009LearningML} using two types of neural networks, VGG-11~\cite{Simonyan15} and ResNet-8~\cite{DBLP:journals/corr/HeZRS15}, and different levels of data heterogeneity across clients. We experimentally show that we require fewer communication rounds compared to the existing methods~\cite{DBLP:journals/corr/abs-1812-06127, DBLP:journals/corr/abs-1910-06378, DBLP:journals/corr/KonecnyMRR16} to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig.~\ref{fig:concept}). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. Using conformal prediction~\cite{angelopoulos2021uncertainty}, we show how performance can be improved further using adaptive prediction sets.
We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:
\begin{itemize}
\item We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers.
\item We prove the convergence rate in the convex and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures, and show that FedPVR provably converges as fast as the centralized SGD baseline in most practical relevant cases.
\item We experimentally show that FedPVR is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics{latex_folder/figures/cka_similarity_motivation_on_fed_avg.pdf}
\caption{Data distribution (number of images per client per class) with different levels of heterogeneity, client CKA similarity, and the drift diversity of each layer in VGG-11 (20 layers) with FedAvg. \textbf{Deep layers in an over-parameterised neural network have higher disagreement and variance when the clients are heterogeneous using FedAvg.}}
\label{fig:cka_similarity}
\end{figure*}
\section{Related work}
\label{sec:related_work}
\subsection{Federated learning}
Federated learning (FL) is a fast-growing field~\cite{DBLP:journals/corr/abs-1912-04977, DBLP:journals/corr/abs-2107-06917}. We mainly describe FL methods in non-IID settings where the data is distributed heterogeneously across clients. Among the existing methods, FedAvg~\cite{DBLP:journals/corr/McMahanMRA16} is the \textit{de facto} optimization technique. Despite its solid empirical performance in IID settings~\cite{DBLP:journals/corr/McMahanMRA16, DBLP:journals/corr/abs-1912-04977}, it tends to achieve a subpar accuracy-communication trade-off in non-IID scenarios.
Many works attempt to tackle FL when data is heterogeneous across clients~\cite{DBLP:journals/corr/abs-1812-06127, DBLP:journals/corr/abs-1910-06378, DBLP:conf/cvpr/GaoFLC0022, DBLP:journals/corr/abs-2111-04263, DBLP:journals/corr/abs-2108-04755, DBLP:conf/nips/VogelsHKKLSJ21}. FedProx~\cite{DBLP:journals/corr/abs-1812-06127} proposes a temperature parameter and a proximal regularization term to control the divergence between client and server models. However, the proximal term does not bring the global and local optimal points into alignment~\cite{DBLP:journals/corr/abs-2111-04263}, and tuning the temperature parameter requires some knowledge about the statistical heterogeneity of the data. Similarly, some works control the update direction by introducing client-dependent control variates~\cite{DBLP:journals/corr/abs-1910-06378, DBLP:journals/corr/ShamirS013, DBLP:journals/corr/KonecnyMRR16, DBLP:journals/corr/abs-2111-04263, mishchenko2022proxskip} that are also communicated between the server and clients. They achieve a much faster convergence rate, but their performance in non-convex setups, especially with deep neural networks such as ResNet~\cite{DBLP:journals/corr/HeZRS15} and VGG~\cite{Simonyan15}, is not well explored. Besides, they suffer from a higher communication cost due to the transmission of the extra control variates, which may be a critical issue for resource-limited IoT mobile devices~\cite{Halgamuge2009AnEO}. Among these methods, SCAFFOLD~\cite{DBLP:journals/corr/abs-1910-06378} is the most closely related to ours, and we give a more detailed comparison in Sections~\ref{sec:method} and~\ref{sec:experimental_results}.
Another line of work develops FL algorithms based on characteristics of neural networks, such as their expressive feature representations~\cite{DBLP:journals/corr/abs-2010-15327}. Collins et al.~\cite{https://doi.org/10.48550/arxiv.2205.13692} show that FedAvg is powerful in learning common data representations from clients' data. FedBabu~\cite{DBLP:journals/corr/abs-2106-06042}, TCT~\cite{https://doi.org/10.48550/arxiv.2207.06343}, and CCVR~\cite{DBLP:journals/corr/abs-2106-05001} propose improving FL performance by finetuning the classifiers with a standalone dataset or with features simulated from the client models. However, preparing a standalone dataset or features that represent the data distribution across clients is challenging, as this usually requires domain knowledge and may raise privacy concerns. Moon~\cite{DBLP:journals/corr/abs-2103-16257} boosts the similarity of the representations across different client models by using a contrastive loss~\cite{DBLP:journals/corr/abs-2002-05709}, but at the cost of keeping three full-size models in memory on each client, which may limit its applicability on resource-limited devices.
Other works focus on reducing the communication cost by compressing the transmitted gradients~\cite{https://doi.org/10.48550/arxiv.2002.11364, DBLP:journals/corr/abs-1901-09269, DBLP:journals/corr/abs-1911-08250, DBLP:journals/corr/Alistarh0TV16, StichCJ18sparseSGD}. They can reduce the communication bandwidth by adjusting the number of bits sent per iteration. These works are complementary to ours and can be easily integrated into our method to save communication costs.
\subsection{Variance reduction}
Stochastic variance reduction (SVR) methods, such as SVRG~\cite{Johnson2013AcceleratingSG}, SAGA~\cite{DBLP:journals/corr/DefazioBL14}, and their variants, use control variates to reduce the variance of traditional stochastic gradient descent (SGD). These methods remarkably achieve a linear convergence rate on strongly convex optimization problems, compared to the sub-linear rate of SGD. Many federated learning algorithms, such as SCAFFOLD~\cite{DBLP:journals/corr/abs-1910-06378} and DANE~\cite{DBLP:journals/corr/ShamirS013}, have adapted the idea of variance reduction for the whole model and achieved good convergence on convex problems. However, as~\cite{DBLP:journals/corr/abs-1812-04529} demonstrated, naively applying variance reduction techniques gives no actual variance reduction and tends to result in slower convergence in deep neural networks, especially for larger models such as ResNet-110~\cite{DBLP:journals/corr/HeZRS15} and DenseNet~\cite{DBLP:journals/corr/HuangLW16a}. This suggests that adapting SVR techniques to deep neural networks for FL requires a more careful design.
\subsection{Conformal prediction}
Conformal prediction is a general framework that computes a prediction set guaranteed to include the true class with a high user-determined probability~\cite{angelopoulos2021uncertainty, 10.5555/3495724.3496026}. It requires no retraining of the models and achieves a finite-sample coverage guarantee~\cite{angelopoulos2021uncertainty}. As FL algorithms in extreme data heterogeneous settings can hardly perform as well as centralized learning~\cite{DBLP:journals/corr/abs-2106-05001}, we can integrate conformal prediction to improve the empirical coverage by slightly increasing the predictive set size. This can be beneficial in sensitive use cases such as detecting chemical hazards, where it is better to give a prediction set that contains the correct class than to produce a single but wrong prediction.
\section{Method}
\label{sec:method}
\subsection{Problem statement}
Given $N$ clients with full participation, we formalise the problem as minimizing the average of the stochastic functions in Eq.~\ref{eq:problem}, where $\boldsymbol{x}$ denotes the model parameters and $f_i$ represents the loss function at client $i$ with dataset $\mathcal{D}_i$,
\begin{equation}
\min_{\boldsymbol{x}\in\mathbb{R}^d}\left(f(\boldsymbol{x}):=\frac{1}{N}\sum_{i=1}^N f_i(\boldsymbol{x})\right),
\label{eq:problem}
\end{equation}
where $f_i(\boldsymbol{x}) := \mathbb{E}_{\mathcal{D}_i}[f_i(\boldsymbol{x};\mathcal{D}_i)]$.
\begin{table}[tb]
\caption{Notations used in this paper}
\vspace*{-3mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{ll}
\toprule
$R, r$ & Number of communication rounds and round index \\
$K, k$ & Number of local steps, local step index \\
$N, i$ & Number of clients, client index \\
$\boldsymbol{y}_{i,k}^r$ & client model $i$ at step $k$ and round $r$ \\
$\boldsymbol{x}^{r}$ & server model at round $r$ \\
$\boldsymbol{c}_i^r$, $\boldsymbol{c}^r$ & client and server control variate \\
\bottomrule
\end{tabular}}
\end{table}
\subsection{Motivation}
When the data $\{\mathcal{D}_i\}$ are heterogeneous across clients, FedAvg suffers from \emph{client drift}~\cite{DBLP:journals/corr/abs-1910-06378}, where the average of the local optima $\bar{\boldsymbol{x}}^*=\frac{1}{N}\sum_{i\in N}\boldsymbol{x}_i^*$ is far from the global optimum $\boldsymbol{x}^*$. To understand what causes client drift, specifically which layers in a neural network are influenced most by the data heterogeneity, we perform a simple experiment using FedAvg and the CIFAR10 dataset on VGG-11. The detailed experimental setup can be found in Section~\ref{sec:experimental_setup}.
In an over-parameterized model, it is difficult to directly calculate client drift $||\bar{\boldsymbol{x}}^* - \boldsymbol{x}^*||^2$ as it is challenging to obtain the global optimum $\boldsymbol{x}^*$. We instead hypothesize that we can represent the influence of data heterogeneity on the model by measuring 1) drift diversity and 2) client model similarity. Drift diversity reflects the diversity in the amount each client model deviates from the server model after an update round.
\begin{definition}[Drift diversity]
We define the drift diversity across $N$ clients at round $r$ as:
\begin{equation}
\xi^r := \frac{\sum_{i=1}^N||\boldsymbol{m}_i^r||^2}{||\sum_{i=1}^N \boldsymbol{m}_i^r||^2}, \quad \boldsymbol{m}_i^r = \boldsymbol{y}_{i,K}^r - \boldsymbol{x}^{r-1}
\label{eq:drift_diversity}
\end{equation}
\end{definition}
Drift diversity $\xi$ is high when all the clients update their models in different directions, \eg when the inner products between client updates $\boldsymbol{m}_i$ are small. When each client performs $K$ steps of vanilla SGD updates, $\xi$ depends on the directions and amplitude of the gradients over $N$ clients and is equivalent to $\frac{\sum_{i=1}^{N}||\sum_{k}g_i(\boldsymbol{y}_{i,k})||^2}{||\sum_{i=1}^N\sum_{k}g_i(\boldsymbol{y}_{i,k})||^2}$, where $g_i(\boldsymbol{y}_{i,k})$ is the stochastic mini-batch gradient.
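Eq.~\ref{eq:drift_diversity} translates directly into code; a minimal sketch with models represented as flat lists of floats (the function and variable names are ours):

```python
def drift_diversity(client_models, server_model):
    """Drift diversity xi = sum_i ||m_i||^2 / ||sum_i m_i||^2,
    with drift m_i = y_i - x (assumes the drifts do not cancel exactly)."""
    drifts = [[yi - xi for yi, xi in zip(y, server_model)]
              for y in client_models]
    # numerator: sum of squared drift norms over clients
    num = sum(sum(v * v for v in m) for m in drifts)
    # denominator: squared norm of the summed drift
    total = [sum(m[j] for m in drifts) for j in range(len(server_model))]
    den = sum(v * v for v in total)
    return num / den
```

When all clients move in the same direction, $\xi$ attains its minimum $1/N$; disagreeing updates push it higher.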
After updating each client model, we quantify the client model similarity using the centred kernel alignment (CKA)~\cite{DBLP:journals/corr/abs-1905-00414} computed on the test dataset. CKA is a widely used permutation invariant metric for measuring the similarity between feature representations in neural networks~\cite{DBLP:journals/corr/abs-2106-05001,DBLP:journals/corr/abs-1905-00414, DBLP:journals/corr/abs-2010-15327}.
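For reference, linear-kernel CKA between two representations of the same inputs can be sketched using the feature-space form $||X^\top Y||_F^2 / (||X^\top X||_F\,||Y^\top Y||_F)$ with column-centred features; this plain-Python version is an illustration, not the exact implementation used in our experiments.

```python
import math

def _center_columns(mat):
    """Subtract each column's mean (mat: n_samples x n_features)."""
    n = len(mat)
    means = [sum(row[j] for row in mat) / n for j in range(len(mat[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in mat]

def _fro_sq_of_product(a, b):
    """||A^T B||_F^2 for matrices stored as lists of rows."""
    p, q, n = len(a[0]), len(b[0]), len(a)
    total = 0.0
    for i in range(p):
        for j in range(q):
            entry = sum(a[s][i] * b[s][j] for s in range(n))
            total += entry * entry
    return total

def linear_cka(feats_x, feats_y):
    """Linear CKA between two feature matrices over the same n samples."""
    x, y = _center_columns(feats_x), _center_columns(feats_y)
    cross = _fro_sq_of_product(x, y)
    return cross / math.sqrt(_fro_sq_of_product(x, x) * _fro_sq_of_product(y, y))
```

CKA equals 1 for identical (or isotropically scaled) representations and decreases as the representations diverge.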
Fig.~\ref{fig:cka_similarity} shows how $\xi$ and CKA evolve across different levels of data heterogeneity using FedAvg. We observe that the similarity and diversity of the early layers (\eg layer indices 4 and 12) show higher agreement between the IID ($\alpha=100.0$) and non-IID ($\alpha=0.1$) experiments, which indicates that FedAvg can still learn and extract good feature representations even when it is trained on non-IID data. The lower similarity in the deeper layers, especially the classifiers, suggests that these layers are strongly biased towards their local data distributions. Looking only at the model trained with $\alpha=0.1$, we see the highest diversity and variance across clients on the classifiers compared to the rest of the layers. Based on the above observations, we propose to align the deeper layers across clients by controlling their update directions with control variates.
\subsection{Classifier variance reduction}
Our proposed algorithm (Alg.~\ref{alg}) consists of three parts: i) client updating (Eq.~\ref{eq:client_update_svr}--\ref{eq:client_update_sgd}), ii) client control variate updating (Eq.~\ref{eq:control_variate_update}), and iii) server updating (Eq.~\ref{eq:server_update_x}--\ref{eq:server_update_c}).
We first define a vector $\boldsymbol{p} \in \mathbb{R}^d$ that contains $0$ or $1$ with $v$ non-zero elements ($v\ll d$) in Eq.~\ref{eq:p_define}. We recover SCAFFOLD with $\boldsymbol{p} = \boldsymbol{1}$ and recover FedAvg with $\boldsymbol{p} = \boldsymbol{0}$. For the set of indices $j$ where $\boldsymbol{p}_j=1$ ($S_{\text{svr}}$ from Eq.~\ref{eq:indices}), we update the corresponding weights $\boldsymbol{y}_{i, S_{\text{svr}}}$ with variance reduction such that we maintain a state for each client ($\boldsymbol{c}_i \in \mathbb{R}^v$) and for the server ($\boldsymbol{c} \in \mathbb{R}^v$) in Eq.~\ref{eq:client_update_svr}. For the rest of the indices $S_{\text{sgd}}$ from Eq.~\ref{eq:indices}, we update the corresponding weights $\boldsymbol{y}_{i, S_{\text{sgd}}}$ with SGD in Eq.~\ref{eq:client_update_sgd}. As the server variate $\boldsymbol{c}$ is an average of $\boldsymbol{c}_i$ across clients, we can safely initialise them as $\boldsymbol{0}$.
\vspace*{-5mm}
\begin{flalign}
\boldsymbol{p} &\in \{0, 1\}^d, \quad v = \sum_j \boldsymbol{p}_j \label{eq:p_define}\\
S_{\text{svr}} &:= \{j: \boldsymbol{p}_j = 1\}, \quad S_{\text{sgd}} := \{j: \boldsymbol{p}_j = 0\} \label{eq:indices}\\
\boldsymbol{y}_{i, S_{\text{svr}}} &\leftarrow \boldsymbol{y}_{i, S_{\text{svr}}} - \eta_l (g_i(\boldsymbol{y}_i)_{S_{\text{svr}}} - \boldsymbol{c}_i + \boldsymbol{c}) \label{eq:client_update_svr}\\
\boldsymbol{y}_{i, S_{\text{sgd}}} &\leftarrow \boldsymbol{y}_{i, S_{\text{sgd}}} - \eta_l g_i(\boldsymbol{y}_i)_{S_{\text{sgd}}} \label{eq:client_update_sgd}\\
\boldsymbol{c}_i &\leftarrow \boldsymbol{c}_i - \boldsymbol{c} + \frac{1}{K\eta_l}\left(\boldsymbol{x}_{S_{\text{svr}}} - \boldsymbol{y}_{i, S_{\text{svr}}}\right)
\label{eq:control_variate_update} \\
\boldsymbol{x} &\leftarrow (1-\eta_g)\boldsymbol{x} + \frac{\eta_g}{N}\sum_{i\in N}\boldsymbol{y}_{i}
\label{eq:server_update_x}\\
\boldsymbol{c} &\leftarrow \frac{1}{N}\sum_{i\in N}\boldsymbol{c}_i
\label{eq:server_update_c}
\end{flalign}
In each communication round, each client receives a copy of the server model $\boldsymbol{x}$ and the server control variate $\boldsymbol{c}$. It then performs $K$ model updating steps (see Eq.~\ref{eq:client_update_svr}--\ref{eq:client_update_sgd} for one step) using cross-entropy as the loss function. Once this is finished, we calculate the updated client control variate $\boldsymbol{c}_i$ using Eq.~\ref{eq:control_variate_update}. The server then receives the updated $\boldsymbol{c}_{i}$ and $\boldsymbol{y}_{i}$ from all the clients for aggregation (Eq.~\ref{eq:server_update_x}--\ref{eq:server_update_c}). This completes one communication round.
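A single communication round can be sketched as follows. This is an illustrative toy implementation, with full participation and $\eta_g=1$ (as in our experiments, so the server update reduces to plain averaging); the gradient oracles stand in for the cross-entropy mini-batch gradients, and all names are ours.

```python
def fedpvr_round(x, c, client_cs, grad_fns, p, lr=0.1, local_steps=2):
    """One FedPVR round (sketch). x: server model, c: server control
    variate, client_cs: per-client control variates (mutated in place),
    grad_fns: per-client gradient oracles, p: 0/1 mask (1 = variance
    reduced, e.g. the classifier block)."""
    n, d = len(grad_fns), len(x)
    ys = []
    for i, grad in enumerate(grad_fns):
        y = list(x)  # each client starts from the server model
        for _ in range(local_steps):
            g = grad(y)
            for j in range(d):
                if p[j]:  # variance-reduced block: g - c_i + c
                    y[j] -= lr * (g[j] - client_cs[i][j] + c[j])
                else:     # plain SGD block (feature extractor)
                    y[j] -= lr * g[j]
        # control-variate update on the masked coordinates only
        for j in range(d):
            if p[j]:
                client_cs[i][j] += -c[j] + (x[j] - y[j]) / (local_steps * lr)
        ys.append(y)
    new_x = [sum(y[j] for y in ys) / n for j in range(d)]
    new_c = [sum(ci[j] for ci in client_cs) / n for j in range(d)]
    return new_x, new_c
```

On toy quadratic clients $f_i(\boldsymbol{y}) = \frac{1}{2}||\boldsymbol{y}-\boldsymbol{a}_i||^2$, iterating this round drives the server model to the average optimum even though only part of the model is variance reduced.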
\begin{algorithm}[ht!]
\floatname{algorithm}{Algorithm I}
\small
\caption{Partial variance reduction (FedPVR)}
\algcomment{In terms of implementation, we can simply treat the control variate for the block of weights that is updated with SGD as $\boldsymbol{0}$ and implement lines 8 and 9 in one step.}
\label{alg}
\hspace*{\algorithmicindent} \textbf{server}: initialise the server model $\boldsymbol{x}$, the control variate $\boldsymbol{c}$, and global step size $\eta_g$
\hspace*{\algorithmicindent} \textbf{client}: initialise control variate $\boldsymbol{c}_i$ and local step size $\eta_l$
\hspace*{\algorithmicindent} \textbf{mask}: $\boldsymbol{p}\in\{0, 1\}^d$, $S_{\text{sgd}}:=\{j: \boldsymbol{p}_j = 0\}$, $S_{\text{svr}}:= \{j: \boldsymbol{p}_j = 1\}$
\label{Algorithm:l_scaffold}
\begin{algorithmic}[1]
\Procedure{Model updating}{}
\For {$r = 1 \to R $}
\State \textbf{communicate} $\boldsymbol{x}$\ \textbf{and} $\boldsymbol{c}$ \textbf{to all clients} $i \in [N]$
\For {On client $i \in [N]$ in parallel}
\State $\boldsymbol{y}_i \leftarrow \boldsymbol{x}$
\For {$k = 1 \to K$}
\State compute minibatch gradient $g_i(\boldsymbol{y}_{i})$
\State $\boldsymbol{y}_{i, S_{\text{sgd}}} \leftarrow \boldsymbol{y}_{i, S_{\text{sgd}}} - \eta_l g_i(\boldsymbol{y}_i)_{S_{\text{sgd}}}$
\State $\boldsymbol{y}_{i, S_{\text{svr}}} \leftarrow \boldsymbol{y}_{i, S_{\text{svr}}} - \eta_l (g_i(\boldsymbol{y}_i)_{S_{\text{svr}}} - \boldsymbol{c}_i+\boldsymbol{c})$
\EndFor
\State $\boldsymbol{c}_{i} \leftarrow \boldsymbol{c}_{i} - \boldsymbol{c} + \frac{1}{K\eta_l}(\boldsymbol{x}_{S_{\text{svr}}} - \boldsymbol{y}_{i, S_{\text{svr}}})$
\State \textbf{communicate} $\boldsymbol{y}_{i}, \boldsymbol{c}_{i}$
%
\EndFor
\State $\boldsymbol{x} \leftarrow (1-\eta_g)\boldsymbol{x} + \frac{\eta_g}{N}\sum_{i\in N}\boldsymbol{y}_{i}$
\State $\boldsymbol{c} \leftarrow \frac{1}{N}\sum_{i\in N}\boldsymbol{c}_i$
\EndFor
\EndProcedure
\Statex
\end{algorithmic}
\vspace{-0.4cm}%
\end{algorithm}
\textbf{Ours vs SCAFFOLD~\cite{DBLP:journals/corr/abs-1910-06378}} While our work is similar to SCAFFOLD in its use of variance reduction, there are some fundamental differences. Both methods communicate control variates between the clients and the server, but our control variate ($2v\approx 0.02d\sim0.1d$) is significantly smaller than the one in SCAFFOLD ($2d$), which roughly halves the total communicated bits per round. This saving can be critical for low-power IoT devices, where communication may dominate energy consumption~\cite{Halgamuge2009AnEO}. From the application point of view, SCAFFOLD achieved great success on convex or simple two-layer problems. However, adapting techniques that work well on convex problems to over-parameterized models is non-trivial~\cite{DBLP:journals/corr/SzegedyZSBEGF13}, and naively applying variance reduction to deep neural networks gives little or no convergence speedup~\cite{DBLP:journals/corr/abs-1812-04529, https://doi.org/10.48550/arxiv.2207.06343}. Therefore, the significant improvement achieved by our method gives essential and non-trivial insight into what matters when tackling data heterogeneity in federated learning with over-parameterized models.
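A back-of-the-envelope check of the per-round communication cost; the layer sizes below are illustrative, not the actual parameter counts of our models.

```python
def comm_ratio(layer_sizes, num_masked_layers):
    """Per-client floats exchanged per round, in multiples of the model
    size d: model up + down (2d) plus control variates up + down
    (2v for FedPVR, which masks only the last layers; 2d for SCAFFOLD)."""
    d = sum(layer_sizes)
    v = sum(layer_sizes[-num_masked_layers:])
    fedpvr = 2 * d + 2 * v
    scaffold = 4 * d
    return fedpvr / d, scaffold / d

# e.g. a toy network whose classifier holds 2% of the weights
ratios = comm_ratio([9000, 800, 200], num_masked_layers=1)
```

With a classifier at $2\%$ of the weights, FedPVR exchanges $\approx 2.04$x the model size per round versus SCAFFOLD's $4$x.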
\subsection{Convergence rate}
We state the convergence rate in this section. We assume the functions $\{f_i\}$ are $\beta$-smooth following~\cite{DBLP:journals/corr/abs-1907-04232, DBLP:journals/corr/abs-2003-10422}. We further assume $g_i(\boldsymbol{x}):=\nabla f_i(\boldsymbol{x};\mathcal{D}_i)$ is an unbiased stochastic gradient of $f_i$ with variance bounded by $\sigma^2$. We assume strong convexity ($\mu > 0$) or general convexity ($\mu=0$) for some of the results following~\cite{DBLP:journals/corr/abs-1910-06378}. Furthermore, we also make assumptions on the heterogeneity of the functions.
For convex functions, we bound the heterogeneity of the functions $\{f_i\}$ at the optimal point $\boldsymbol{x}^*$ (such a point always exists for a strongly convex function) following~\cite{DBLP:journals/corr/abs-2003-10422, DBLP:journals/corr/abs-1909-04746}. If all the functions are identical ($f_i = f_j, \forall i, \forall j$), then $\zeta^2=0$.
\begin{assumption}[$\zeta$-heterogeneity]
We define a measure of variance at the optimum $\boldsymbol{x}^*$ given $N$ clients as:
\begin{equation}
\zeta^2 := \frac{1}{N}\sum_{i=1}^N\mathbb{E}||\nabla f_i(\boldsymbol{x}^*)||^2 \,.
\end{equation}
\label{assump:zeta}
\end{assumption}
\vspace*{-6mm}
For non-convex functions, such a unique optimal point $\boldsymbol{x}^*$ does not necessarily exist, so we generalize Assumption~\ref{assump:zeta} to Assumption~\ref{assump:zeta_hat}.
\begin{assumption}[$\hat{\zeta}$-heterogeneity]
We assume there exists constant $\hat{\zeta}$ such that $\forall \boldsymbol{x}\in \mathbb{R}^d$
\begin{equation}
\frac{1}{N}\sum_{i=1}^N\mathbb{E}||\nabla f_i(\boldsymbol{x})||^2 \leq \hat{\zeta}^2 \,.
\end{equation}
\label{assump:zeta_hat}
\end{assumption}
\vspace*{-4mm}
Given the mask $\boldsymbol{p}$ as defined in Eq.~\ref{eq:p_define}, we know $||\boldsymbol{p}\odot\boldsymbol{x}|| \leq ||\boldsymbol{x}||$. Therefore, we have the following propositions.
\begin{proposition}[Implication of Assumption~\ref{assump:zeta}]
Given the mask $\boldsymbol{p}$, we define the heterogeneity of the block of weights that are not variance reduced at the optimum $\boldsymbol{x}^*$ as:
\begin{equation}
\zeta_{1-p}^2 := \frac{1}{N}\sum_{i=1}^{N}||(\boldsymbol{1}-\boldsymbol{p})\odot\nabla f_i(\boldsymbol{x^*})||^2 \,.
\end{equation}
If Assumption~\ref{assump:zeta} holds, then it also holds that:
\begin{equation}
\zeta_{1-p}^2 \leq \zeta^2\,.
\end{equation}
\label{proposition:zeta}
\end{proposition}
\vspace*{-6mm}
In Proposition~\ref{proposition:zeta}, $\zeta_{1-p}^2 = \zeta^2$ if $\boldsymbol{p} = \boldsymbol{0}$ and $\zeta_{1-p}^2 = 0$ if $\boldsymbol{p}=\boldsymbol{1}$. If $\boldsymbol{p} \neq \boldsymbol{0}$ and $\boldsymbol{p} \neq \boldsymbol{1}$, as the heterogeneity of the shallow weights is lower than that of the deeper weights~\cite{https://doi.org/10.48550/arxiv.2207.06343}, we have $\zeta_{1-p}^2 \leq \zeta^2$. Similarly, we can validate Proposition~\ref{proposition:zeta_hat}.
\begin{proposition}[Implication of Assumption~\ref{assump:zeta_hat}]
Given the mask $\boldsymbol{p}$, we assume there exists constant $\hat{\zeta}_{1-p}$ such that $\forall \boldsymbol{x} \in \mathbb{R}^d$, the heterogeneity of the block of weights that are not variance reduced:
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N}||(\boldsymbol{1}-\boldsymbol{p})\odot \nabla f_i(\boldsymbol{x})||^2 \leq \hat{\zeta}_{1-p} ,
\end{equation}
If Assumption~\ref{assump:zeta_hat} holds, then it also holds that:
\begin{equation}
\hat{\zeta}_{1-p}^2 \leq \hat{\zeta}^2\,.
\end{equation}
\label{proposition:zeta_hat}
\end{proposition}
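The effect of the mask on the heterogeneity constants is easy to check numerically. In the toy example below, the last coordinate plays the role of the highly heterogeneous classifier weights; the gradient values are illustrative only.

```python
def heterogeneity(grads, mask=None):
    """(1/N) sum_i ||mask * g_i||^2; mask=None keeps all coordinates,
    mirroring zeta^2 (full) vs zeta_{1-p}^2 (masked)."""
    n, d = len(grads), len(grads[0])
    m = mask if mask is not None else [1] * d
    return sum(sum((mj * gj) ** 2 for mj, gj in zip(m, g))
               for g in grads) / n

# Two clients agree on the first ("feature") coordinate but disagree
# strongly on the last ("classifier") coordinate.
grads = [[0.1, 2.0], [0.1, -2.0]]
full = heterogeneity(grads)            # zeta^2-like quantity
masked = heterogeneity(grads, [1, 0])  # classifier coordinate zeroed out
```

Masking out the heterogeneous block shrinks the constant dramatically, which is exactly why $\zeta_{1-p}^2 \leq \zeta^2$ tightens the rate.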
\vspace*{-10mm}
\begin{theorem}
\label{theorem_convergence}
For any $\beta$-smooth function $\{f_i\}$, the output of \texttt{FedPVR} has expected error smaller than $\epsilon$ for $\eta_g = \sqrt{N}$ and some values of $\eta_l$, $R$ satisfying:
\begin{itemize}
\item \textbf{Strongly convex:} $\eta_l \leq \min\left(\frac{1}{80K\eta_g\beta}, \frac{26}{20\mu K\eta_g}\right)$,
\begin{equation}
R = \tilde{\mathcal{O}}\left(\frac{\sigma^2}{\mu NK\epsilon} + \frac{\zeta_{1-p}^2}{\mu\epsilon} + \frac{\beta}{\mu}\right),
\end{equation}
\vspace*{-4mm}
\item \textbf{General convex:} $\eta_l \leq \frac{1}{80K\eta_g\beta}$,
\begin{equation}
R = \mathcal{O}\left(\frac{\sigma^2D}{KN\epsilon^2} + \frac{\zeta_{1-p}^2D}{\epsilon^2} + \frac{\beta D}{\epsilon} + F\right),
\end{equation}
\item \textbf{Non-convex}: $\eta_l \leq \frac{1}{26K\eta_g\beta}$, and $R\geq 1$, then:
\begin{equation}
R=\mathcal{O}\left(\frac{\beta\sigma^2F}{KN\epsilon^2} + \frac{\beta\hat{\zeta}_{1-p}^2F}{N\epsilon^2} + \frac{\beta F}{\epsilon} \right),
\end{equation}
\end{itemize}
\vspace*{-4mm}
where $D := ||\boldsymbol{x}^0 - \boldsymbol{x}^*||^2$ and $F:=f(\boldsymbol{x}^0) - f^*$.
\end{theorem}
\begin{table*}[ht!]
\caption{The required number of communication rounds (speedup compared to FedAvg) to achieve a certain level of top-1 accuracy ($66\%$ for the CIFAR10 dataset and $44\%$ for the CIFAR100 dataset). Our method requires fewer rounds to achieve the same accuracy.}
\label{tab:communication_round}
\resizebox{0.72\textwidth}{!}
{\begin{minipage}{\textwidth}
\begin{tabular}{@{}lcccccccc@{}}
\toprule
& \multicolumn{4}{c}{CIFAR10 (66$\%$)} & \multicolumn{4}{c}{CIFAR100 (44$\%$)} \\ \cmidrule(lr){2-5}\cmidrule(l){6-9}
& \multicolumn{2}{c}{$\alpha$=0.1} & \multicolumn{2}{c}{$\alpha$=0.5} & \multicolumn{2}{c}{$\alpha$=0.1} & \multicolumn{2}{c}{$\alpha$=1.0} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(l){8-9}
& VGG-11 & ResNet-8 & VGG-11 & ResNet-8 & VGG-11 & ResNet-8 & VGG-11 & ResNet-8 \\
\midrule
& No. rounds & No. rounds & No. rounds & No. rounds & No. rounds & No. rounds & No. rounds & No. rounds \\
FedAvg &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.55}{\hspace{0.1cm}\raisebox{1.5pt}{$55(1.0\text{x})$}}
& \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.9}{\hspace{0.1cm}\raisebox{1.5pt}{$90(1.0\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.15}{\hspace{0.1cm}\raisebox{1.5pt}{$15(1.0\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.15}{\hspace{0.1cm}\raisebox{1.5pt}{$15(1.0\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{1.0}{\hspace{0.1cm}\raisebox{1.5pt}{$100+(1.0\text{x})$}}
&\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{1.0}{\hspace{0.1cm}\raisebox{1.5pt}{$100+(1.0\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.8}{\hspace{0.1cm}\raisebox{1.5pt}{$80(1.0\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.56}{\hspace{0.1cm}\raisebox{1.5pt}{$56(1.0\text{x})$}} \\
FedProx &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.52}{\hspace{0.1cm}\raisebox{1.5pt}{$52(1.1\text{x})$}} &
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.75}{\hspace{0.1cm}\raisebox{1.5pt}{$75(1.2\text{x})$}}&
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.16}{\hspace{0.1cm}\raisebox{1.5pt}{$16(0.9\text{x})$}} &
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.2}{\hspace{0.1cm}\raisebox{1.5pt}{$20(0.8\text{x})$}}&
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{1.0}{\hspace{0.1cm}\raisebox{1.5pt}{$100+(1.0\text{x})$}} &
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{1.0}{\hspace{0.1cm}\raisebox{1.5pt}{$100+(1.0\text{x})$}}&
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.8}{\hspace{0.1cm}\raisebox{1.5pt}{$80(1.0\text{x})$}}&
\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.59}{\hspace{0.1cm}\raisebox{1.5pt}{$59(0.9\text{x})$}} \\
SCAFFOLD & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.39}{\hspace{0.1cm}\raisebox{1.5pt}{$39(1.4\text{x})$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.57}{\hspace{0.1cm}\raisebox{1.5pt}{$57(1.6\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.14}{\hspace{0.1cm}\raisebox{1.5pt}{$14(1.0\text{x})$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.09}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{4pt}$9(1.7\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.8}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{7.5pt}$80(>1.3\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.61}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{4.3pt}$\textcolor{red}{\mathbf{61(>1.6\text{x})}}$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.36}{\hspace{0.1cm}\raisebox{1.5pt}{$36(2.2\text{x})$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.25}{\hspace{0.1cm}\raisebox{1.5pt}{$25(2.2\text{x})$}} \\
Ours & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.27}{\hspace{0.1cm}\raisebox{1.5pt}{$\textcolor{red}{\textbf{27(2.0\text{x})}}$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.5}{\hspace{0.1cm}\raisebox{1.5pt}{$\textcolor{red}{\textbf{50(1.8\text{x})}}$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.09}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{4pt}$\textcolor{red}{\textbf{9(1.6\text{x})}}$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.05}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{4pt}$\textcolor{red}{\textbf{5(3.0\text{x})}}$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.37}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{4.3pt}$\textcolor{red}{\mathbf{37(>2.7\text{x})}}$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.66}{\hspace{0.1cm}\raisebox{1.5pt}{\hspace{7.5pt}$66(>1.5\text{x})$}} &\progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.12}{\hspace{0.1cm}\raisebox{1.5pt}{$\textcolor{red}{\textbf{12(6.7\text{x})}}$}} & \progressbar[linecolor=white, filledcolor=black, ticksheight=0.0, heighta=8pt, width=2.5em]{0.15}{\hspace{0.1cm}\raisebox{1.5pt}{$\textcolor{red}{\textbf{15(3.7\text{x}})}$}} \\ \bottomrule
\end{tabular}
\end{minipage}
}
\end{table*}
Given the above assumptions, the convergence rate is given in Theorem~\ref{theorem_convergence}. When $\boldsymbol{p} = \boldsymbol{1}$, we recover the SCAFFOLD convergence guarantee as $\zeta_{1-p}^2=0, \hat{\zeta}_{1-p}^2=0$. In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance reduced, $\zeta_{1-p}^2$, becomes negligible if $\tilde{\mathcal{O}}\left(\frac{\zeta_{1-p}^2}{\epsilon}\right)$ is sufficiently smaller than $\tilde{\mathcal{O}}\left(\frac{\sigma^2}{NK\epsilon} \right)$. In that case, our rate is $\tilde{\mathcal{O}}\left(\frac{\sigma^2}{\mu NK\epsilon} + \frac{\beta}{\mu}\right)$, which recovers the SCAFFOLD rate in the strongly convex case without client sampling and further matches that of SGD (with mini-batch size $K$ on each worker). We also recover the FedAvg rate\footnotemark{} in the simple IID case. See Appendix~\ref{sec:proof} for the full proof.
\footnotetext{FedAvg in the strongly convex case has the rate $R=\tilde{\mathcal{O}}(\frac{\sigma^2}{\mu KN\epsilon} + \frac{\sqrt{\beta}G}{\mu\sqrt{\epsilon}} + \frac{\beta}{\mu})$, where $G$ measures the gradient dissimilarity. In the simple IID case, $G=0$~\cite{DBLP:journals/corr/abs-1910-06378}.}
\section{Experimental setup}
\label{sec:experimental_setup}
We demonstrate the effectiveness of our approach on image classification tasks with CIFAR10~\cite{Krizhevsky2009LearningML} and CIFAR100~\cite{Krizhevsky2009LearningML}. We simulate the data heterogeneity scenario following~\cite{DBLP:conf/nips/LinKSJ20} by partitioning the data according to a Dirichlet distribution with concentration parameter $\alpha$. The smaller $\alpha$ is, the more imbalanced the data distribution across clients. An example of the data distribution over multiple clients using the CIFAR10 dataset can be seen in Fig.~\ref{fig:cka_similarity}. In our experiments, we use $\alpha\in\{0.1, 0.5, 1.0\}$ as these are commonly used concentration parameters~\cite{DBLP:conf/nips/LinKSJ20}. Each client has its own local data, which is kept fixed across all communication rounds. We hold out the test dataset at the server for evaluating the classification performance of the server model. Following~\cite{DBLP:conf/nips/LinKSJ20}, we perform the same data augmentation for all the experiments.
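The Dirichlet partitioning can be sketched as follows; this is a simplified plain-Python version of the scheme of~\cite{DBLP:conf/nips/LinKSJ20}, drawing Dirichlet proportions via normalised Gamma samples, and the function name is ours.

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients; for each class, draw client
    proportions from Dirichlet(alpha) (smaller alpha = more skew)."""
    rng = random.Random(seed)
    parts = [[] for _ in range(num_clients)]
    for cls in sorted(set(labels)):
        idxs = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idxs)
        # Dirichlet sample via normalised Gamma draws (alpha must be > 0)
        draws = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(draws)
        props = [g / total for g in draws]
        # cumulative cut points over this class's samples
        cuts, acc = [], 0.0
        for p in props[:-1]:
            acc += p
            cuts.append(int(acc * len(idxs)))
        start = 0
        for client, end in enumerate(cuts + [len(idxs)]):
            parts[client].extend(idxs[start:end])
            start = end
    return parts
```

Every index is assigned to exactly one client, and with small $\alpha$ most clients end up seeing only a few classes.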
We use two models, VGG-11 and ResNet-8, following~\cite{DBLP:conf/nips/LinKSJ20}. We perform variance reduction for the last three layers in VGG-11 and the last layer in ResNet-8. We assume full client participation in each communication round following~\cite{DBLP:journals/corr/ShamirS013, https://doi.org/10.48550/arxiv.1905.11261}. We use 10 clients and a batch size of 256. Each client performs 10 local epochs of model updating. We set the server learning rate $\eta_g=1$ for all the models~\cite{DBLP:journals/corr/abs-1910-06378}. We tune the client learning rate over $\{0.05, 0.1, 0.2, 0.3\}$ for each individual experiment. The learning rate schedule is experimentally chosen from constant, cosine decay~\cite{DBLP:journals/corr/LoshchilovH16a}, and multi-step decay~\cite{DBLP:conf/nips/LinKSJ20}. We compare our method with the representative federated learning algorithms FedAvg~\cite{DBLP:journals/corr/McMahanMRA16}, FedProx~\cite{DBLP:journals/corr/abs-1812-06127}, and SCAFFOLD~\cite{DBLP:journals/corr/abs-1910-06378}. For FedProx~\cite{DBLP:journals/corr/abs-1812-06127}, we tune the temperature parameter $\mu$ over $\{0.1, 0.5, 1.0\}$. All the results are averaged over three repeated experiments with different random initializations. We leave $1\%$ of the training data from each client out as validation data to tune the hyperparameters. See Appendix~\ref{sec:appendix_experiment} for additional experimental setup details.
\section{Experimental results}
\label{sec:experimental_results}
In this section, we demonstrate the performance of our proposed approach in the FL setup with data heterogeneity. We compare our method with existing state-of-the-art algorithms on various datasets and deep neural networks. For the baseline approaches, we finetune the hyperparameters and report only the best performance obtained. Our main findings are: 1) we are more communication efficient than the baseline approaches, 2) conformal prediction is an effective tool to improve FL performance in high data heterogeneity scenarios, and 3) there is a beneficial trade-off between diversity and uniformity when using deep neural networks in FL.
\subsection{Communication efficiency and accuracy}
We first report the number of rounds required to achieve a certain level of top-1 accuracy ($66\%$ for CIFAR10 and $44\%$ for CIFAR100) in Table~\ref{tab:communication_round}. An algorithm is more communication efficient if it requires fewer rounds to achieve the same accuracy and/or if it transmits fewer parameters between the clients and the server. Compared to the baseline approaches, we require far fewer rounds for almost all types of data heterogeneity and models, achieving a speedup between $1.5$x and $6.7$x over FedAvg. We also observe that ResNet-8 tends to converge slower than VGG-11, which may be due to the aggregation of Batch Normalization statistics that are discrepant across the local data distributions~\cite{DBLP:conf/nips/LinKSJ20}.
We next compare the top-1 accuracy of centralized learning and the federated learning algorithms. For the centralized learning experiment, we tune the learning rate over $\{0.01, 0.05, 0.1\}$ and report the best test accuracy based on the validation dataset. For a fair comparison, we apply no weight decay, momentum, or gradient clipping in centralized learning. We train the model for 800 epochs, which equals the total number of epochs in the federated learning algorithms (80 communication rounds $\times$ 10 local epochs). The results are shown in Table~\ref{tab:top_1_accuracy}. We also show the number of copies of the parameters that need to be transmitted between the server and clients (\eg 2x means we communicate $\boldsymbol{x}$ and $\boldsymbol{y}_{i}$).
Table~\ref{tab:top_1_accuracy} shows that our approach achieves much better top-1 accuracy than FedAvg while transmitting a similar or only slightly larger number of parameters between the server and clients per round. Our method also achieves slightly better accuracy than centralized learning when the data is less heterogeneous (\eg $\alpha=0.5$ for CIFAR10 and $\alpha=1.0$ for CIFAR100).
\begin{table*}[ht!]
\caption{The top-1 accuracy (\%) after running 80 communication rounds using different methods on CIFAR10 and CIFAR100, together with the number of communicated parameters between the client and the server. We train the centralised model for 800 epochs (= 80 rounds x 10 local epochs in FL). Higher accuracy is better, and we highlight the best accuracy per dataset and model in red colour.}
\label{tab:top_1_accuracy}
\resizebox{0.9\textwidth}{!}
{\begin{minipage}{\textwidth}
\begin{tabular}{@{}lcccccccccc@{}}
\toprule
& \multicolumn{5}{c}{VGG-11} & \multicolumn{5}{c}{ResNet-8} \\
\cmidrule(lr){2-6}\cmidrule(l){7-11}
& \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{CIFAR100} & server$\Leftrightarrow$client & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{CIFAR100} & server$\Leftrightarrow$client \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}
\cmidrule(lr){7-8}\cmidrule(lr){9-10}
& $\alpha=0.1$ & $\alpha=0.5$ & $\alpha=0.1$ & $\alpha=1.0$ & & $\alpha=0.1$ & $\alpha=0.5$ & $\alpha=0.1$ & $\alpha=1.0$ & \\
\midrule
Centralised & \multicolumn{2}{c}{\textcolor{red}{\textbf{86.0}}} & \multicolumn{2}{c}{56.3} & - & \multicolumn{2}{c}{81.7} & \multicolumn{2}{c}{\textcolor{red}{\textbf{55.7}}} & - \\
FedAvg & 69.3 & 80.9 & 34.3 & 45.0 & 2x & 64.9 & 79.1 & 38.8 & 47.0 & 2x\\
Fedprox & 72.1 & 80.4 & 35.0 & 43.2 &2x & 66.1 & 77.9 & 42.0 & 47.2 & 2x \\
SCAFFOLD & 74.1 & 83.5 & 43.4 & 50.6 &4x &66.6 & 80.3 & 43.8 & 52.3 & 4x \\
Ours & 78.2 & 84.9 & 43.5 & \textcolor{red}{\textbf{58.0}} & 2.1x & 69.3 & \textcolor{red}{\textbf{83.6}} & 43.5 & 52.3 & 2.02x \\ \bottomrule
\end{tabular}
\end{minipage}}
\end{table*}
\subsection{Conformal prediction}
When the data heterogeneity is high across clients, it is difficult for a federated learning algorithm to match the centralized learning performance~\cite{DBLP:journals/corr/abs-2106-05001}. Therefore, we demonstrate the benefit of using simple post-processing conformal prediction to improve the model performance.
We examine the relationship between the empirical coverage and the average predictive set size for the server model after 80 communication rounds for each federated learning algorithm. The empirical coverage is the percentage of data samples for which the correct class is in the predictive set, and the average predictive set size is the mean length of the predictive sets over all the test images~\cite{angelopoulos2021uncertainty}. See Appendix~\ref{sec:appendix_experiment} for more information about conformal prediction.
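These two metrics can be computed as follows (a minimal sketch; the names are ours):

```python
def coverage_and_size(pred_sets, labels):
    """Empirical coverage: fraction of samples whose true label is in
    the predictive set. Average size: mean number of classes per set."""
    hits = sum(1 for s, y in zip(pred_sets, labels) if y in s)
    avg_size = sum(len(s) for s in pred_sets) / len(pred_sets)
    return hits / len(labels), avg_size
```

Sweeping the conformal threshold traces out the coverage-versus-set-size curves reported in Fig.~\ref{fig:conformal_prediction}.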
The results for $\alpha=0.1$ for both datasets and architectures are shown in Fig.~\ref{fig:conformal_prediction}. We show that by slightly increasing the predictive set size, we can achieve accuracy similar to the centralized performance. Moreover, our approach tends to surpass the centralized performance at a similar or faster rate than the other approaches. These results demonstrate that for use cases where data heterogeneity is high and having a predictive set that contains the true class matters more than producing a single but wrong prediction (\eg chemical hazard detection), conformal prediction is an effective tool to guarantee performance.
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/conformal_prediction.pdf}
\caption{Relation between average predictive size and test accuracy when $\alpha=0.1$. By slightly increasing the predictive set size, we can achieve similar accuracy as the centralised model even if the data are heterogeneously distributed across clients. Our method is similar to or faster than other approaches to surpass the centralised accuracy. }
\label{fig:conformal_prediction}
\end{figure}
\subsection{Diversity and uniformity}
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/understand_layer_wise_scaffold.pdf}
\caption{Drift diversity and learning curve for ResNet-8 on CIFAR100 with $\alpha=1.0$. Compared to FedAvg, SCAFFOLD and our method can both improve the agreement between the classifiers. Compared to SCAFFOLD, our method results in a higher gradient diversity at the early stage of the communication, which tends to boost the learning speed, as the curvature of the drift diversity seems to match the learning curve.}
\label{fig:diversity}
\end{figure}
We have shown that our algorithm achieves a better speedup and performance against the existing approaches with only lightweight modifications to FedAvg. We next investigate what factors lead to better accuracy. Specifically, we calculate the drift diversity $\xi$ across clients after each communication round using Eq.~\ref{eq:drift_diversity} and average $\xi$ across three runs. We show the result of using ResNet-8 and CIFAR100 with $\alpha=1.0$ in Fig.~\ref{fig:diversity}.
Fig.~\ref{fig:diversity} shows the drift diversity for different layers in ResNet-8 and the test accuracy along the communication rounds. We observe that with FedAvg, the classifiers have the highest diversity compared to both the other layers and the other methods. SCAFFOLD, which applies control variates to the entire model, can effectively reduce the disagreement in the directions and scales of the averaged gradients across clients. Our proposed algorithm, which performs variance reduction only on the classifiers, reduces the diversity of the classifiers even further but increases the diversity of the feature extraction layers. This high diversity tends to boost the learning speed, as the curvature of the diversity movement (Fig.~\ref{fig:diversity} left) seems to match the learning curve (Fig.~\ref{fig:diversity} right). Based on this observation, we hypothesize that this diversity along the feature extractor, combined with the uniformity of the classifier, is the main reason for our better speedup.
To test this hypothesis, we perform an experiment where we apply variance reduction starting from different layers of a neural network. If the starting position of the variance reduction influences the learning speed, it indicates where in a neural network we need more diversity and where we need more uniformity. We here show the result of using VGG-11 on CIFAR100 with $\alpha=1.0$ as there are more layers in VGG-11. The result is shown in Fig.~\ref{fig:influence_of_svr}, where $\texttt{SVR:}16\rightarrow20$ corresponds to our approach and $\texttt{SVR:}0\rightarrow20$ corresponds to SCAFFOLD, which applies variance reduction to the entire model. Results for ResNet-8 are shown in the Appendix.
\begin{figure}[ht!]
\centering
\includegraphics{latex_folder/figures/influence_of_svr_on_layer.pdf}
\caption{Influence on learning speed of applying stochastic variance reduction (SVR) starting from different layer positions in the network. \texttt{SVR:0$\rightarrow$20} applies variance reduction to the entire model (SCAFFOLD). \texttt{SVR:16$\rightarrow$20} applies variance reduction from layer index 16 to 20 (ours). The later we start applying variance reduction, the better the speedup. However, applying no variance reduction at all (FedAvg) performs the worst.}
\label{fig:influence_of_svr}
\end{figure}
We see from Fig.~\ref{fig:influence_of_svr} that the deeper in the network we start applying variance reduction, the better the learning speedup we obtain. There is no clear performance difference between starting positions once the layer index exceeds 10. However, applying no variance reduction at all (FedAvg) achieves by far the worst performance. We believe these experimental results indicate that, to boost the learning speed of an over-parameterized model in a distributed optimization framework, we need some level of diversity in the early and middle layers for learning richer feature representations, and some degree of uniformity in the classifier for making less biased decisions.
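For concreteness, the layer-wise schemes compared above differ only in where a SCAFFOLD-style control-variate correction is switched on during the local update. A minimal sketch of one corrected local SGD step is given below; the function and variable names are hypothetical, and this is not the exact implementation used in the experiments:

```python
import numpy as np

def local_sgd_step(params, grads, c_global, c_local, lr, start_layer):
    """One local step with partial variance reduction.

    Layers with index < start_layer take a plain SGD step (diversity is
    allowed); layers >= start_layer use the SCAFFOLD-style corrected
    gradient g - c_local + c_global (uniformity is enforced).
    All arguments except lr and start_layer are lists of per-layer arrays.
    """
    new_params = []
    for k, (w, g) in enumerate(zip(params, grads)):
        if k >= start_layer:
            g = g - c_local[k] + c_global[k]   # control-variate correction
        new_params.append(w - lr * g)
    return new_params
```

Setting start_layer to 0 recovers full variance reduction (SCAFFOLD), while setting it past the last layer recovers FedAvg; the experiments above sweep this single knob.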
\section{Conclusion}
\label{sec:conclusion}
In this work, we studied stochastic gradient descent learning for deep neural network classifiers in a federated learning setting, where each client updates its local model using stochastic gradient descent on local data. A central model is periodically updated (by averaging local model parameters) and broadcast to the clients under a communication bandwidth constraint. When data is homogeneous across clients, this procedure is comparable to centralized learning in terms of efficiency; however, when data is heterogeneous, learning is impeded. Our hypothesis for the primary reason for this is that when the local models are out of alignment, updating the central model by averaging is ineffective and sometimes even destructive.
Examining the diversity across clients of their local model updates and their learned feature representations, we found that the misalignment between models is much stronger in the last few neural network layers than in the rest of the network. This finding inspired us to experiment with aligning the local models using a partial variance reduction technique applied only on the last layers, which we named FedPVR. We found that this led to a substantial improvement in convergence speed compared to the competing federated learning methods. In some cases, our method even outperformed centralized learning. We derived a bound on the convergence rate of our proposed method, which matches the rates for SGD when the gradient diversity across clients is sufficiently low. Compared with FedAvg, the communication cost of our method is only marginally worse, as it requires transmitting control variates for the last layers.
We believe our FedPVR algorithm strikes a good balance between simplicity and efficiency, requiring only a minor modification to the established FedAvg method; however, in our further research, we plan to pursue more optimal methods for aligning and guiding the local learning algorithms, \eg using adaptive procedures. Furthermore, the degree of over-parameterization in the neural network layers (\eg feature extraction vs bottlenecks) may also play an important role, which we would like to understand better.
\section*{Acknowledgements}
The authors gratefully acknowledge financial support from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 883390 (H2020-SU-SECU-2019 SERSing Project). BL gratefully acknowledges financial support from the Otto Mønsted Foundation.
\newpage
{\small
\bibliographystyle{ieee_fullname}
Although the concept of pseudospin symmetry (PSS) in nuclear single-particle spectra was introduced more than 40 years ago, the origin of PSS and its breaking mechanism in realistic nuclei have not been fully understood. In particular, whether or not its nature is perturbative remains an open problem.
In this report, we mainly focus on our recent progress on this topic by combining the similarity renormalization group (SRG) technique, supersymmetric (SUSY) quantum mechanics, and perturbation theory \cite{Liang2013,Shen2013}.
The PSS was introduced in 1969 to explain the near degeneracy between pairs of single-particle states with the quantum numbers ($n-1, l + 2, j = l + 3/2$) and ($n, l, j=l + 1/2$) \cite{Arima1969,Hecht1969}.
They are regarded as the pseudospin doublets with modified quantum numbers ($\tilde{n}=n-1,\tilde{l}=l+1,j=\tilde{l}\pm1/2$).
Since the suggestion of PSS, there have been comprehensive efforts to understand its origin.
In 1997, the PSS was shown to be a relativistic symmetry of the Dirac Hamiltonian, and the equality in magnitude but difference in sign of the scalar $\mathcal{S}(\bm{r})$ and vector $\mathcal{V}(\bm{r})$ potentials was suggested as the exact PSS limit \cite{Ginocchio1997}.
A more general condition $d(\mathcal{S}+\mathcal{V})/dr=0$ \cite{Meng1998,Sugawara-Tanabe1998} can be better satisfied in exotic nuclei with diffuse potentials \cite{Meng1999}.
On the other hand, since there exist no bound nuclei within such PSS limit, the non-perturbative or dynamical nature of PSS in realistic nuclei was suggested \cite{Alberto2001}.
A recent overview on the PSS investigation can be found in Ref.~\cite{Liang2013}, and the readers are referred to Refs.~\cite{Lu2012,Jolos2012,Alberto2013} for some recent progress.
Recently, the perturbation theory was used to investigate the symmetries of the Dirac Hamiltonian and their breaking in realistic nuclei~\cite{Liang2011,Li2011}, which provides a clear and quantitative way for investigating the perturbative nature of PSS.
On the other hand, the SUSY quantum mechanics can provide a PSS-breaking potential without singularity~\cite{Typel2008}, and naturally interpret the unique feature that all states with $\tilde{l}>0$ have their own pseudospin partners except for the intruder states~\cite{Typel2008,Leviatan2004}.
Furthermore, the SRG technique fills the gap between the perturbation calculations and the SUSY descriptions by transforming the Dirac Hamiltonian into a diagonal form which keeps every operator Hermitian~\cite{Guo2012,Li2013}.
Therefore, we deem it promising to understand the PSS and its breaking mechanism in a quantitative way by combining the SRG technique, SUSY quantum mechanics, and perturbation theory \cite{Liang2013,Shen2013}.
\section{SRG, SUSY quantum mechanics, and perturbation theory}
Within the relativistic scheme, our starting point is the Dirac Hamiltonian for nucleons, $H_D = \bm{\alpha\cdot p} + \beta (M+\mathcal{S}) + \mathcal{V}$, where $\bm{\alpha}$ and $\beta$ are the Dirac matrices, $M$ is the nucleon mass, $\mathcal{S}$ and $\mathcal{V}$ are the scalar and vector potentials, respectively.
Using the SRG technique \cite{Guo2012}, the Dirac Hamiltonian can be transformed into a diagonal form as a series in $1/M$.
By keeping the leading-order terms of the kinetic energy, the central and spin-orbit (SO) potentials, respectively, the eigenequations for nucleons in the Fermi sea read
\begin{equation}\label{Eq:Schrodinger}
\left[-\frac{1}{2M}\frac{d^2}{dr^2} + \frac{\kappa(\kappa+1)}{2Mr^2} + V(r) + \frac{\kappa}{Mr}U(r) \right] R(r) = E R(r)
\end{equation}
with $V(r) = \mathcal{V}+\mathcal{S}$ and $U(r)=-(\mathcal{V}-\mathcal{S})'/(4M)$.
Here the spherical symmetry is adopted, the symbol $'$ means the derivative with respect to $r$, and the good quantum number $\kappa$ is defined as $\kappa=\mp(j+1/2)$ for $j=l\pm1/2$.
One finds that Eq.~(\ref{Eq:Schrodinger}) is nothing but the Schr\"odinger equation including a SO term.
In the SUSY framework \cite{Cooper1995}, a couple of Hermitian conjugate first-order differential operators are defined as
\begin{equation}
B_\kappa^+ = \left[ Q_\kappa(r)-\frac{d}{dr}\right] \frac{1}{\sqrt{2M}},\qquad B_\kappa^- = \frac{1}{\sqrt{2M}}\left[ Q_\kappa(r)+\frac{d}{dr}\right],
\end{equation}
where the $Q_\kappa(r)$ are the so-called superpotentials to be determined.
In order to explicitly identify the $\kappa(\kappa+1)$ structure and the SO term shown in Eq.~(\ref{Eq:Schrodinger}), one can introduce the reduced superpotentials $q_\kappa(r) = Q_\kappa(r) - \kappa/r - U(r)$.
In such a way, the SUSY partner Hamiltonians $H_1$ and $H_2$ can be expressed as
\begin{subequations}\label{Eq:BB}
\begin{align}
H_1(\kappa) &= B^+_\kappa B^-_\kappa = \ff{2M}\left[-\frac{d^2}{dr^2}+\frac{\kappa(\kappa+1)}{r^2}+q_\kappa^2+2q_\kappa U+U^2+\frac{2\kappa}{r}q_\kappa-q'_\kappa-U'\right]+\frac{\kappa}{Mr}U,\notag \\ \label{Eq:BB1}\\
H_2(\kappa) &= B^-_\kappa B^+_\kappa = \ff{2M}\left[-\frac{d^2}{dr^2}+\frac{\kappa(\kappa-1)}{r^2}+q_\kappa^2+2q_\kappa U+U^2+\frac{2\kappa}{r}q_\kappa+q'_\kappa+U'\right]+\frac{\kappa}{Mr}U.\notag \\ \label{Eq:BB2}
\end{align}
\end{subequations}
Note that the Hamiltonian $H(\kappa)$ in Eq.~(\ref{Eq:Schrodinger}) and its SUSY partner Hamiltonian $\tilde{H}(\kappa)$ differ from $H_1$ and $H_2$ by a common constant $e(\kappa)$, the so-called energy shift \cite{Liang2013,Typel2008,Cooper1995}.
By combining Eqs.~(\ref{Eq:Schrodinger}) and (\ref{Eq:BB1}), one obtains the first-order differential equations for the reduced superpotentials $q_\kappa(r)$, and then obtains $\tilde{H}(\kappa)$ with Eq.~(\ref{Eq:BB2}).
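Explicitly, equating Eq.~(\ref{Eq:Schrodinger}) with Eq.~(\ref{Eq:BB1}) term by term via $H(\kappa)=H_1(\kappa)+e(\kappa)$ (the common $\kappa U/(Mr)$ terms cancel) yields the Riccati-type equation

```latex
\begin{equation}
q'_\kappa(r) = q_\kappa^2(r) + 2q_\kappa(r)U(r) + U^2(r)
  + \frac{2\kappa}{r}q_\kappa(r) - U'(r) - 2M\left[V(r)-e(\kappa)\right],
\end{equation}
```

which is the first-order differential equation for $q_\kappa(r)$ referred to above.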
Now it is important to see that not only does the $\kappa(\kappa+1)$ structure appear in $H$ ($H_1$), but the $\kappa(\kappa-1)$ structure also explicitly appears in the SUSY partner Hamiltonian $\tilde{H}$ ($H_2$).
The so-called pseudo-centrifugal barrier (PCB) terms $\kappa(\kappa-1)/(2Mr^2)$ are identical for the pseudospin doublets $a$ and $b$ with $\kappa_a + \kappa_b = 1$.
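This is immediate to verify: substituting $\kappa_b = 1-\kappa_a$,

```latex
\begin{equation*}
\kappa_b(\kappa_b-1) = (1-\kappa_a)(-\kappa_a) = \kappa_a(\kappa_a-1),
\end{equation*}
```

so both members of a pseudospin doublet feel exactly the same PCB in the SUSY partner Hamiltonian.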
For the perturbation analysis \cite{Liang2011}, the Hamiltonian $\tilde{H}$ is further expressed as $\tilde{H} = \tilde{H}^{\rm PSS}_0 + \tilde{W}^{\rm PSS}$, where $\tilde{H}^{\rm PSS}_0$ and $\tilde{W}^{\rm PSS}$ are the PSS-conserving and PSS-breaking terms, respectively. By requiring that $\tilde{W}^{\rm PSS}$ should be proportional to $\kappa$~\cite{Liang2013,Shen2013}, which is similar to the case of the SO term in the normal scheme, one has
\begin{equation}
\tilde{H}^{\rm PSS}_0=\ff{2M}\left[-\frac{d^2}{dr^2}+\frac{\kappa(\kappa-1)}{r^2}\right] + \tilde{V}_{\rm PSS}(r),\qquad
\tilde{W}^{\rm PSS}=\kappa \tilde{V}_{\rm PSO}(r).
\end{equation}
Finally, for a given pair of pseudospin doublets, the pseudospin-orbit (PSO) potential $\tilde{V}_{\rm PSO}(r)$ can be uniquely determined as
\begin{equation}\label{Eq:VPSO}
\tilde{V}_{\rm PSO}(r)
=\frac{1}{M}\frac{q'_{\kappa_a}(r) - q'_{\kappa_b}(r)}{\kappa_a-\kappa_b}
+ \frac{1}{Mr}U(r).
\end{equation}
\begin{figure}\centering
\includegraphics[width=0.45\textwidth]{Fig1-1.eps}\hspace{2em}
\includegraphics[width=0.45\textwidth]{Fig1-2.eps}
\caption{(Color online) Left: Pseudospin-orbit potentials $\tilde{V}_{\rm PSO}(r)$ for the $\tilde{f}$ block.
The symmetry-breaking potential obtained with the SO term (solid line) is decomposed into the contributions from the first (dash-dotted line) and second (dotted line) terms on the right-hand side of Eq.~(\ref{Eq:VPSO}).
The symmetry-breaking potential obtained without SO term is shown with short dotted line for comparison.
Right: Reduced pseudospin-orbit splittings $(E_{j_<}-E_{j_>})/(2\tilde{l}+1)$ vs the average single-particle energies $(E_{j_<}+E_{j_>})/2$.
The results obtained with and without SO term are shown with filled and open symbols, respectively.
Taken from Ref.~\cite{Shen2013}.
\label{Fig1}}
\end{figure}
\section{An example}
In this report, we take the scalar and vector potentials of the Woods-Saxon form from Ref.~\cite{Koepf1991}, and take the neutron potentials in the nucleus $^{132}$Sn as an example.
First of all, the PSS-breaking potentials $\tilde{V}_{\rm PSO}(r)$ for the $\tilde{f}$ block obtained without and with the SO term are shown in the left panel of Fig.~\ref{Fig1}.
These potentials show several special features~\cite{Liang2013,Shen2013}, which are crucial for understanding the PSS in a quantitative way:
(1) they are regular functions of $r$;
(2) their amplitudes directly determine the sizes of reduced PSO splittings $\Delta E_{\rm PSO}\equiv(E_{j_<}-E_{j_>})/(2\tilde l+1)$ according to the perturbation theory;
(3) their shape, being negative at small radii but positive at large radii with a node in the surface region, can explain the general tendency that the PSO splittings become smaller with increasing single-particle energies, and even reverse as the states approach the single-particle threshold.
In order to identify the SO effects, the $\tilde{V}_{\rm PSO}(r)$ obtained with the SO term is decomposed into the contributions from the first and second terms on the right-hand side of Eq.~(\ref{Eq:VPSO}), denoted as $\Delta q'/M\Delta\kappa$ and $U/Mr$, respectively.
These two terms can be regarded as the indirect and direct effects of the SO term, respectively, because the former one represents the SO effects on $\tilde{V}_{\rm PSO}(r)$ via the reduced superpotentials $q_\kappa(r)$, while the latter is nothing but the SO potential itself appearing in Eq.~(\ref{Eq:Schrodinger}).
Compared to the result obtained without the SO term, it is found that the indirect effect on $\tilde{V}_{\rm PSO}(r)$ is only around $0.1\sim0.2$~MeV, and it has even less influence on $\Delta E_{\rm PSO}$ owing to the cancellation between the $r<5$~fm and $r>5$~fm regions.
On the other hand, the SO potential $U(r)/Mr$ is always positive with a surface-peak shape.
It substantially raises the $\tilde{V}_{\rm PSO}(r)$, in particular for the surface region.
This systematically reduces the PSO splittings $\Delta E_{\rm PSO}$.
All of these properties are confirmed in the right panel of Fig.~\ref{Fig1}, in which $\Delta E_{\rm PSO}$ for all bound pseudospin doublets are shown as a function of the average single-particle energies $E_{\rm av}=(E_{j_<}+E_{j_>})/2$.
The results obtained with and without SO term are shown with filled and open symbols, respectively.
It is found that the sizes of $\Delta E_{\rm PSO}$ match the amplitudes of $\tilde{V}_{\rm PSO}(r)$.
The decrease of the PSO splittings with increasing single-particle energies is due to the special shape of $\tilde{V}_{\rm PSO}(r)$.
Last but not least, the SO term systematically reduces $\Delta E_{\rm PSO}$ by $0.15\sim0.3$~MeV, and this effect can now be understood in a fully quantitative way.
\section{Summary}
Work is in progress to explore the origin of PSS and its breaking mechanism by combining the SRG, SUSY quantum mechanics, and perturbation theory.
It is shown that while the spin-symmetry-conserving term appears in the single-particle Hamiltonian $H$, the PSS-conserving term appears naturally in its SUSY partner Hamiltonian $\tilde H$.
The eigenstates of Hamiltonians $H$ and $\tilde H$ are exactly one-to-one identical except for the so-called intruder states.
In such a way, the origin of PSS deeply hidden in $H$ can be traced in its SUSY partner Hamiltonian $\tilde H$.
Furthermore, the perturbative nature of PSS is demonstrated by the perturbation calculations, and the PSS-breaking term can be regarded as a small perturbation on the exact PSS limits, so that the special patterns of the PSO splittings can be interpreted in a quantitative way.
\section*{Acknowledgements}
This work was partly supported by
the RIKEN iTHES Project,
the Grant-in-Aid for JSPS Fellows under Grant No. 24-02201;
the Major State 973 Program No. 2013CB834400;
the NSFC under Grants No. 11105005, No. 11105006, and No. 11175002;
the China Postdoctoral Science Foundation under Grant No. 2012M520101;
and the Research Fund for the Doctoral Program of Higher Education under Grant No. 20110001110087.
\section*{References}
\providecommand{\newblock}{}
Attendance is a mandatory part of every class in colleges. Often, there is a minimum attendance requirement for courses taken by students. The simplest methods of taking attendance include roll-call or manually signing an attendance sheet. These methods are tedious, waste time, and do not take advantage of technology in any way: attendance must be manually entered from the attendance sheet into the database. Further, proxy attendance is easy; a student can mark attendance for another student by forging his signature or calling out his name during roll-call. A solution gaining popularity is biometric attendance. The biometric machine can be connected to the database and update attendance automatically, and the uniqueness of thumb prints solves the problem of proxy attendance. However, this method still wastes time, since students need to queue up for biometric attendance. Passing the biometric machine around in class solves this issue, but can be disturbing. Moreover, all the aforementioned methods of taking attendance share a common problem: there is no way to ensure that students sit in class through its entire duration. A student can leave class immediately after attendance, or enter class just before attendance.
AttenFace's novel snapshot technique of face recognition (described in Sec.~\ref{sec:sys-arch}) solves all these problems, including the issue of proxy attendance, which is common in colleges. Students no longer have to manually give attendance, as their presence is automatically recognized by a camera. Since the camera is recording at all times, it is easy to capture how long a student remains in class. Final attendance can be given only if the student remained in class for more than a certain threshold of time, which can be decided by the professor teaching the class. Further, the system includes an easy-to-use portal for students to check their attendance for any class and course, and for professors to override default attendance rules for a particular class or student if necessary.
\section{Related Work}
The proposed attendance system requires three major technologies to identify students: 1) object detection and localization, to identify which objects in the classroom are students and where they are, 2) face detection, to identify which object is a face, and 3) face recognition, to map detected faces to the corresponding students. There is continuous research going on in these areas. YOLO \cite{b1} is a real-time object detection algorithm that has been continuously improved since its introduction in 2016. Haar cascade classifiers \cite{b2} identify faces in a real-time video feed. FaceNet \cite{b3} is a notable face recognition technique, which uses a deep convolutional neural network with 22 layers trained via a triplet loss function to directly output a 128-dimensional embedding. VGGFace2 \cite{b4} is a dataset for training face recognition models that takes pose and age into account.
At the heart of the proposed system is the face recognition algorithm. This problem has numerous approaches \cite{b5}. Principal Component Analysis (PCA) extracts principal features to recognize faces. Template matching involves comparing and matching patterns to recognize faces, usually through a neural network. Taking into account pose variations and liveliness detection can help make the system more robust. Liveliness detection is the problem of differentiating between the face of a live person and a photograph \cite{b6}.
In recent years, a number of face recognition based attendance systems have been proposed. In \cite{b7}, face recognition along with Radio Frequency Identification (RFID) was used to detect authorized students and count the number of times a student enters and leaves the classroom. In \cite{b8}, students were recognized through iris biometrics. The system automatically took attendance by capturing an image of the eye of each student and searching for a match in the database. In \cite{b9}, the Eigenface and Fisherface face recognition algorithms were compared and used in a real-time attendance system. Eigenface was found to perform better, with an accuracy of 70-90\%. In \cite{b10}, authors used Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) to extract features of students' faces, followed by the Radial Basis Function (RBF) to recognize them. Their attendance system had an accuracy of about 82\%. In \cite{b11}, authors considered various real-time scenarios such as lighting and pose of students. A 3D approach to recognize faces for attendance was put forward in \cite{b12}.
There are a number of attendance systems available on the market which use face recognition for identification. Most of these, however, are time and attendance systems, which are used for one-time close-up identification of a single person at a time. For example, Truein \cite{b13} is a touchless face recognition system used to manage employee attendance in the workplace. iFace \cite{b14} provides face recognition capabilities through a mobile app, useful in work-from-home scenarios. There is currently no product in the market aimed at real-time face recognition for attendance capture in schools and colleges that provides a single unified portal to track attendance and modify attendance policy, which can also integrate with existing attendance management systems. The underlying technology of face recognition, however, is the same, and can be adapted to suit the proposed system.
\section{System architecture}
\label{sec:sys-arch}
The proposed system uses face recognition to automatically handle attendance of students. Taking the actual face recognition algorithm as a black box, the system decides attendance in the following manner:
\begin{itemize}
\item Recording begins at the start of the class. The start time of the class and the room number are available via the database.
\item A snapshot of the class is taken every 10 minutes. Using the snapshot, the face recognition algorithm recognizes students and marks them as present in that 10 minute block of time.
\item A student will be marked present for a class if he is present in at least 'n' snapshots. This threshold can be decided by the professor. This allows a student to leave the class in between in case of an emergency without losing attendance.
\end{itemize}
AttenFace's snapshot model provides a method to continuously track attendance throughout the class duration while avoiding the computationally expensive process of face recognition on live video. Simultaneously, it ensures that students must remain in class for a minimum amount of time to receive attendance, solving the issue of students leaving class after manual attendance. This makes the system as efficient as, but more robust than, existing face recognition solutions, which do not run continuously during the whole class but rather involve a single verification before or after class.
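The snapshot-based decision rule described above can be sketched in a few lines of Python; all identifiers here are illustrative and not part of the actual system:

```python
def is_present(recognized_blocks, total_blocks, threshold_n):
    """Attendance decision for one student in one class.

    recognized_blocks: number of 10-minute snapshots in which face
    recognition identified the student.
    threshold_n: minimum number of snapshots required, set by the
    professor (capped at the number of snapshots actually taken).
    """
    return recognized_blocks >= min(threshold_n, total_blocks)

def class_attendance(snapshots, roster, threshold_n):
    """snapshots: one set of recognized student IDs per 10-minute block."""
    counts = {sid: sum(sid in snap for snap in snapshots) for sid in roster}
    return {sid: is_present(c, len(snapshots), threshold_n)
            for sid, c in counts.items()}
```

For example, with three snapshots and a threshold of two, a student recognized in blocks 1 and 3 is still marked present, which gives the leniency mentioned above.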
\subsection{Requirements}\label{sec:requirements}
The requirements of the system are as follows:
\begin{enumerate}
\item Functional requirements
\begin{itemize}
\item A login portal connected to the institute login, to be used by students, professors and administrators.
\item A dashboard for students and professors to view attendance details for any class or course taken by them.
\item An easy way for professors to make minor changes to attendance policy for a class or the entire course directly from the dashboard without needing to go through the administration.
\item An option on the dashboard for the administration to manually override faulty attendance results.
\end{itemize}
\item Non-functional requirements
\begin{itemize}
\item The system should be able to access required student information, course information and class information from the institute database.
\item Portal to view attendance should be cross-platform with emphasis on mobile friendliness.
\item The face recognition algorithm should be able to recognize faces in real-time, without much computational overhead.
\item Multiple instances of the face recognition algorithm should be able to run in parallel, since there can be multiple classes going on at any given time.
\end{itemize}
\end{enumerate}
\subsection{Use Cases}\label{sec:use-cases}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{use-case-diagram.png}
\caption{UML use case diagram.}
\label{fig:use-case}
\end{figure}
\begin{enumerate}
\item A user (student, professor or admin) should be able to log in using their college credentials.
\item A student should be able to view his attendance for any class of any course he has taken.
\item A student should be able to view his attendance (0 or 1) for a class as soon as it ends.
\item A student should be able to see how many "blocks" in a class he attended (as explained in the previous section) and whether this is above the threshold set by the professor, hence justifying the attendance obtained for that class.
\item A student should be able to view the total number of classes he has attended in a course so far, and the number of classes he is allowed to miss before his total attendance for that course drops below the course's requirements.
\item A professor should be able to view the total attendance of a class as soon as it ends.
\item A professor should be able to change the threshold determining for how many blocks a student must be present in the class to get attendance. This can be changed for a specific class or for all classes in the course. This gives the professor the power to make attendance lenient or optional on a specific day, without having to go through the college administration.
\item A professor should be able to change the room number of the class before it starts, in case of any sudden events to ensure that attendance will still be taken by activating the camera in the new room.
\item An administrator should be able to directly change the attendance of a student in a particular class if required, overriding the automated attendance.
\item An administrator should be able to change the room number of the entire course, in case of any clashes with other classes or events.
\end{enumerate}
\subsection{System Architecture}\label{sec:architecture}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{architecture-diagram.png}
\caption{System architecture.}
\label{fig:architecture}
\end{figure}
The system can be divided into the following modules:
\begin{enumerate}
\item Front-End (mobile and web application): All users interact with the system through the frontend. The following information will be displayed for a student: a) total attendance received so far in a particular course, b) attendance received in any class of any course he has registered for, c) the total "blocks" attended in any class to justify the attendance for that class, and d) the threshold to determine whether attendance is obtained or not in a particular class. The following information will be displayed for a professor: a) total attendance for any class in any course taught by him, and b) the threshold to determine attendance (which he can edit).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{student-gui.png}
\caption{Sample interface for students.}
\label{fig:student-gui}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{prof-gui.png}
\caption{Sample interface for professors.}
\label{fig:prof-gui}
\end{figure}
\item Back-End: The back-end handles all interactions with the database including accessing student images for the face recognition algorithm, displaying all relevant data on the front-end, and standard CRUD operations. It also performs the following calculations: a) total attendance of a student in a course given his attendance in all classes of the course so far, and b) attendance of a student in a class of a course given his attendance in each block of time in that class. The back-end acts as a bridge to send data to the face recognition server, which cannot access the database directly.
\item Face Recognition Model Server: The computationally heavy operations for face recognition will be performed here. Each ongoing class will have its own thread of computation associated with it, which obtains live feed directly from the concerned camera, and communicates with the back-end server for attendance calculation. Each thread of the face recognition server receives the following data from the back-end: a) Images of all students attending the class, b) start time of the class, c) end time of the class, and d) the camera ID to activate and obtain live feed from. The interaction diagram of the face recognition server with the back-end and camera is shown in Fig. 6. The direct communication between the back-end and camera is an integral component of making the system standalone and fully automatic compared to existing solutions.
\item Database: It stores information regarding students (such as images for recognition and the courses they have registered for), courses (such as room number and the corresponding camera ID), and professors (such as courses they are teaching). Fig. 8 shows a simplified schema of the database.
\end{enumerate}
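As an illustration of calculation (a) above, the portal's ``classes you can still miss'' figure (use case 5) reduces to simple arithmetic; the formula and names below are an illustrative sketch, not the system's actual code:

```python
import math

def classes_allowed_to_miss(attended, held, total, required_fraction):
    """How many remaining classes a student can still miss while keeping
    final attendance >= required_fraction, assuming he attends every
    other remaining class of the course.
    """
    remaining = total - held
    # Final attended count if m classes are missed: attended + remaining - m.
    # Require (attended + remaining - m) / total >= required_fraction.
    m = math.floor(attended + remaining - required_fraction * total)
    return max(0, min(remaining, m))
```

For instance, a student who attended 8 of the first 10 classes in a 30-class course with a 75% requirement can still miss 5 of the remaining 20 classes.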
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{uml-class-diagram.png}
\caption{UML class diagram.}
\label{fig:class-diagram}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{uml-sequence-diagram1.png}
\caption{UML 2.0 sequence diagram showing interactions between the back-end server, database and face recognition server.}
\label{fig:seq-backend}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{uml-sequence-diagram2.png}
\caption{UML sequence diagram showing interaction between user and front-end.}
\label{fig:seq-user}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{er-diagram.png}
\caption{Simplified database design.}
\label{fig:er-diagram}
\end{figure}
\subsection{University Classroom: A Practical Example}
A walkthrough of using AttenFace for a camera-enabled classroom in a university would be as follows:
\begin{itemize}
\item The system establishes a connection with the camera 5 minutes before class starts.
\item The professor, if he wishes to do so, logs into the portal and change attendance requirements for the current class (see Fig. 4).
\item Starting from the beginning of class, and until class ends, a snapshot of the classroom is sent to the back-end and subsequently, the face recognition server, every 10 minutes. Students are identified and their presence is marked in that 10-minute block of time.
\item After class ends, students can log in to the portal and immediately view their attendance status for that class (see Fig.~3).
\end{itemize}
\subsection{Extensibility and Ease of Integration}\label{sec:extensibility}
The proposed system is modular. The face recognition server, in particular, is a standalone module, which plays no role in the actual attendance policy of the professor or institute. Given pictures of students as input, all recognition-based calculations are done within the face recognition module, and results for each student (whether present or not for that block of time) are returned to the back-end server, which handles attendance calculation using this data. Due to its modular nature, the system can be easily integrated into existing college portals. For example, integrating the proposed real-time attendance system with Moodle is straightforward. The front-end, institute login, and interaction with the college database are already handled by Moodle. The only parts of the system that need to be integrated are a custom back-end script, which interacts with the face recognition server and performs the desired calculations, and the face recognition server itself. The attendance data can then be made available to Moodle to display on the front-end.
\section{Conclusion and future work}
This paper proposes a new method to analyze and grant attendance in real time using face recognition.
Attendance in each class is determined automatically with no human effort. The system ensures that a student must stay in class for at least a certain amount of time to be marked present, while also granting a certain amount of leniency in the attendance calculation, as decided by the professor. There is always scope for improvement: face recognition techniques are not completely accurate, and the system may sometimes be unable to identify students, or may recognize them incorrectly. External factors such as classroom lighting and the position of students' faces may affect the accuracy of the face recognition algorithm. As new research leads to better-performing face recognition algorithms that are more robust and adaptable to varying situations, the proposed system benefits.
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex
minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}}
\def\@fnsymbol#1{\ensuremath{\ifcase#1\or *\or 1\or 2\or 3\or 4\or
5\or 6\or 7\or 8\or 9\or 10\or 11\else\@ctrerr\fi}}
\makeatother
\newcommand{\out}[1]{}
\newcommand\note[1]{\GenericWarning{}{AUTHOR WARNING: Unresolved annotation}%
$\blacksquare$\textsf{$\langle\!\langle${#1}$\rangle\!\rangle$}}
\title{Finding Largest Rectangles in Convex Polygons
\thanks{Supported by the Slovenian Research Agency, program P1-0297, projects J1-4106 and L7-5459; by the ESF EuroGIGA project (project GReGAS) of the European Science Foundation; and by NRF
grant~2011-0030044 (SRC-GAIA), funded by the government of Korea.}}
\author{Sergio Cabello
\thanks{Department of Mathematics, IMFM, and
Department of Mathematics, FMF, University of Ljubljana, Slovenia.
Part of the work was done while visiting KAIST, Korea.}
\and Otfried Cheong
\thanks{Department of Computer Science, KAIST, Daejeon, Korea.}
\and Christian Knauer
\thanks{Institut f\"ur Informatik, Universit\"at Bayreuth, Bayreuth, Germany}
\and Lena Schlipf
\thanks{Institute of Computer Science, Freie Universit\"at Berlin,
Germany.}
}
\begin{document}
\maketitle
\begin{abstract}
We consider the following geometric optimization problem: find a
maximum-area rectangle and a maximum-perimeter rectangle contained
in a given convex polygon with $n$ vertices. We give exact
algorithms that solve these problems in time $O(n^3)$. We also give
$(1-\eps)$-approximation algorithms that take time $O(\eps^{-3/2}+
\eps^{-1/2} \log n)$.
\medskip
\textbf{Keywords:} geometric optimization; approximation algorithm;
convex polygon; inscribed rectangle.
\end{abstract}
\section{Introduction}
Computing a largest rectangle contained in a polygon (with respect to
some appropriate measure) is a well-studied problem. Previous results
include computing largest \emph{axis-aligned} rectangles, either in
convex polygons~\cite{ahs-cliir-95} or simple polygons (possibly with
holes) \cite{DMR-1997}, and computing largest \emph{fat} rectangles in
simple polygons \cite{hkms-06}.
Here we study the problem of finding a maximum-area rectangle and a
maximum-perimeter rectangle contained in a given convex polygon with
$n$ vertices. We give exact $O(n^3)$-time algorithms and
$(1-\eps)$-approximation algorithms that take time $O(\eps^{-3/2}+
\eps^{-1/2} \log n)$. (For maximizing the perimeter we allow the
degenerate solution consisting of a single segment whose perimeter is
twice its length.) To the best of our knowledge, apart from a
straightforward $O(n^4)$-time algorithm, there is no other
exact algorithm known so far.
Our approximation algorithm to maximize the area improves the previous
results by Knauer et al.~\cite{ksst-12}, who give a deterministic
$(1-\eps)$-approximation algorithm with running time $O(\eps^{-2}\log
n)$ and a Monte Carlo $(1-\eps)$-approximation algorithm with running
time $O(\eps^{-1}\log n)$. We are not aware of previous
$(1-\eps)$-approximation algorithms to maximize the perimeter.
\section{Preliminaries}
\paragraph{Notation.}
We use $C$ for arbitrary convex bodies and $P$ for convex polygons.
Let $\U$ be the set of unit vectors in the plane.
For each $u\in \U$ and each convex body $C$, the \emph{directional width}
of $C$ in direction $u$, denoted by $\dwidth (u,C)$,
is the length of the orthogonal projection of $C$
onto any line parallel to $u$. Thus
\[
\dwidth (u,C) ~=~ \max_{p\in C} \scalar{p,u} - \min_{p\in C} \scalar{p,u},
\]
where $\scalar{\cdot,\cdot}$ denotes the scalar product.
For a convex body $C$ and a parameter $\eps\in (0,1)$,
an \emph{$\eps$-kernel} for $C$ is a convex body $C_\eps\subseteq C$ such that
\[
\forall u\in \U: ~~(1-\eps)\cdot \dwidth (u,C) \le \dwidth (u,C_\eps).
\]
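For a convex polygon, the extrema of $\scalar{p,u}$ are attained at vertices, so both the directional width and the $\eps$-kernel condition can be evaluated directly from these definitions. The following is an illustrative sketch (not part of the paper's algorithms), which tests the kernel inequality only on a finite sample of directions:

```python
import math

def dwidth(u, pts):
    """Directional width of a polygon: spread of the vertex projections <p,u>."""
    dots = [p[0] * u[0] + p[1] * u[1] for p in pts]
    return max(dots) - min(dots)

def is_eps_kernel(kernel_pts, pts, eps, samples=720):
    """Sampled check of (1-eps)*dwidth(u, C) <= dwidth(u, C_eps) over directions."""
    for k in range(samples):
        t = 2 * math.pi * k / samples
        u = (math.cos(t), math.sin(t))
        if dwidth(u, kernel_pts) < (1 - eps) * dwidth(u, pts) - 1e-12:
            return False
    return True
```

For example, a copy of the unit square scaled by $0.8$ about its center has width ratio exactly $0.8$ in every direction, so it passes as an $\eps$-kernel for $\eps = 0.25$ but fails for $\eps = 0.1$.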
The diameter of $C$ is the distance between the two furthest points
of $C$. It is easy to see that it equals
\[
\max_{u\in \U} \dwidth (u,C).
\]
Ahn et al.~\cite{abcnsv-06} show how to compute an $\eps$-kernel.
Their algorithm uses the following type of
primitive operations for $C$:
\begin{di}
\item given a direction $u\in \U$, find an extremal point of $C$
in the direction $u$;
\item given a line $\ell$, find $C\cap \ell$.
\end{di}
Let $T_C$ be the time needed to perform each of those primitive operations.
We will use $T_C$ as a parameter in some of our running times.
When $C$ is a convex $n$-gon whose boundary is given
as a sorted array of vertices or as a binary search tree,
we have $T_C=O(\log n)$~\cite{Chazelle-Dobkin,Reichling}.
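For illustration, the first primitive operation can be answered for a polygon by a linear scan of its vertices; the $O(\log n)$ bound cited above instead uses binary search on the sorted boundary, exploiting that the projections are unimodal. A sketch (not from the paper):

```python
def extremal_point(u, pts):
    """Vertex of a convex polygon extremal in direction u.  O(n) scan; the
    O(log n) version of this primitive uses binary search on the sorted
    boundary, exploiting unimodality of the projections <p, u>."""
    return max(pts, key=lambda p: p[0] * u[0] + p[1] * u[1])
```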
Ahn et al.~show the following result.
\begin{lemma}[Ahn et al.~\cite{abcnsv-06}]
\label{le:kernel}
Given a convex body $C$ and a parameter $\eps\in (0,1)$,
we can compute in $O(\eps^{-1/2} T_C)$ time
an $\eps$-kernel of $C$ with $O(\eps^{-1/2})$ vertices.
\end{lemma}
\begin{lemma}
\label{le:invariant}
Let $C_\eps$ be an $\eps$-kernel for $C$.
If $\varphi$ is an invertible affine mapping,
then $\varphi(C_\eps)$ is an $\eps$-kernel for $\varphi(C)$.
\end{lemma}
\begin{proof}
The ratio of directional widths for convex bodies
is invariant under invertible affine transformations.
This means that
\[
\forall u\in \U: ~~
1-\eps
\le \frac{\dwidth (u,C_\eps)}{\dwidth (u,C)}
=\frac{\dwidth (u,\varphi(C_\eps))}{\dwidth (u,\varphi(C))}
\]
and thus $\varphi(C_\eps)$ is an $\eps$-kernel for $\varphi(C)$.
\end{proof}
\begin{figure}
\includegraphics[width=\columnwidth,page=1]{square}
\caption{Proof of Lemma~\ref{le:square}.}
\label{fig:square}
\end{figure}
\begin{lemma}
\label{le:square}
Assume that $C$ contains the rectangle $R = [-a,a] \times [-b,b]$,
that $C$ has diameter $d$, and that $C_\eps$ is an $\eps$-kernel for
$C$. Then $C_\eps$ contains the axis-parallel rectangle $S = [-a +
d\eps, a - d\eps] \times [-b + d\eps, b - d\eps]$.
\end{lemma}
\begin{proof}
The statement is vacuous if $a < d\eps$ or $b < d\eps$, so assume that
$a, b \geq d\eps$. For the sake of contradiction, assume that $S$
is not contained in $C_\eps$. This means that one vertex of $S$ is
not contained in $C_\eps$. Because of symmetry, we can assume that
$s=(a-d\eps, b-d\eps)$ is not contained in $C_\eps$. Since $C_\eps$
is convex and $s\notin C_\eps$, there exists a closed halfplane $h$
that contains $C_\eps$ but does not contain $s$. Let $\ell$ be the
boundary of $h$.
We next argue that $R$ has some vertex at distance at least $d\eps$
from $h$ (and thus $\ell$); see Figure~\ref{fig:square} for a couple
of cases. If $\ell$ has negative slope and $h$ is its lower
halfplane, then the distance from $(a,b)$ to $\ell$ is at least
$d\eps$. If $\ell$ has negative slope and $h$ is its upper
halfplane, then the distance from $(-a,-b)$ to $\ell$ is at least
$2b-d\eps \geq d\eps$. If $\ell$ has positive slope, then $(-a,b)$
or $(a,-b)$ are at distance at least $d\eps$ from~$h$.
Since $R\subseteq C$, for the direction $u$ orthogonal to~$\ell$ we
have
\[
\dwidth(u,C) - \dwidth(u,C_\eps) >
d\eps \ge \eps \cdot \dwidth(u,C),
\]
where we have used the assumption that $\dwidth(u,C)\le d$.
This means that
\[
\left( 1 - \eps \right)\cdot \dwidth(u,C) ~>~
\dwidth(u,C_\eps),
\]
which contradicts that $C_\eps$ is an $\eps$-kernel
for $C$.
\end{proof}
\section{Exact algorithms}
\label{sec:exact}
Let $e_1,\dots, e_n$ be the edges of the convex polygon $P$.
For each edge $e_i$ of $P$, let $h_i$ be the closed halfplane defined
by the line supporting $e_i$ that contains $P$.
Since $P$ is convex, we have $P=\bigcap_i h_i$.
We parameterize the set of \emph{parallelograms} in the plane by points
in $\RR^6$, as follows.
We identify each $6$-dimensional point $(x_1,x_2,u_1,u_2,v_1,v_2)$
with the triple $(x,u,v)\in (\RR^2)^3$, where $x=(x_1,x_2)$,
$u=(u_1,u_2)$, and $v=(v_1,v_2)$. The triple $(x,u,v)\in \RR^6$
corresponds to the parallelogram $\paral(x,u,v)$ with vertices
\[
x,~ x+u,~ x+v,~ x+u+v.
\]
Thus, $x$ describes a vertex of the parallelogram $\paral(x,u,v)$,
while $u$ and $v$ are vectors describing the edges of $\paral(x,u,v)$.
This correspondence is not bijective because, for example,
\[
\paral(x,u,v) ~=~ \paral(x+u+v,-u,-v) ~=~
\paral(x,v,u) .
\]
Nevertheless, each parallelogram is $\paral(x,u,v)$ for
some $(x,u,v)\in \RR^6$: the parallelogram given by
the vertices $p_1p_2p_3p_4$
in clockwise (or counterclockwise) order
is $\paral (p_1,p_2-p_1,p_4-p_1)$.
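The correspondence between triples and parallelograms, and the non-uniqueness noted above, are easy to sanity-check numerically; an illustrative sketch:

```python
def paral(x, u, v):
    """Vertex set of the parallelogram P(x,u,v) = {x, x+u, x+v, x+u+v}.
    Returned as a set, since different triples describe the same parallelogram."""
    return {
        (x[0], x[1]),
        (x[0] + u[0], x[1] + u[1]),
        (x[0] + v[0], x[1] + v[1]),
        (x[0] + u[0] + v[0], x[1] + u[1] + v[1]),
    }
```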
We are interested in the parallelograms contained in $P$.
To this end we define
\[
\Pi(P) ~=~ \big\{ (x,u,v)\in \RR^6 \mid \paral(x,u,v) \subseteq P\big\}.
\]
Since $P$ is convex, a parallelogram is contained in $P$ if and
only if each vertex of the parallelogram is in~$P$.
Therefore
\begin{align*}
\Pi(P) ~&=~ \big\{ (x,u,v)\in \RR^6 \mid x,\, x+u,\, x+v,\, x+u+v \in P\big\} \\
&=~ \big\{ (x,u,v)\in \RR^6 \mid \forall i: x,\, x+u,\, x+v,\, x+u+v \in h_i \big\} \\
&=~ \bigcap_i \big\{ (x,u,v)\in \RR^6 \mid x,\, x+u,\, x+v,\, x+u+v \in h_i \big\}.
\end{align*}
Since $\Pi(P)$ is trivially bounded,
it follows that $\Pi(P)$ is a convex polytope in $\RR^6$
defined by $4n$ linear constraints.
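Membership in $\Pi(P)$ is thus a conjunction of $4n$ linear tests. A direct sketch (illustrative only; it assumes each halfplane $h_i$ is given as coefficients $(a,b)$ of $a\cdot p \le b$):

```python
def in_Pi(x, u, v, halfplanes, tol=1e-12):
    """Test (x,u,v) in Pi(P): every vertex of the parallelogram P(x,u,v)
    satisfies every halfplane constraint a.p <= b defining the polygon."""
    verts = [
        (x[0], x[1]),
        (x[0] + u[0], x[1] + u[1]),
        (x[0] + v[0], x[1] + v[1]),
        (x[0] + u[0] + v[0], x[1] + u[1] + v[1]),
    ]
    return all(a[0] * p[0] + a[1] * p[1] <= b + tol
               for (a, b) in halfplanes for p in verts)
```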
The Upper Bound Theorem~\cite{McMullen} implies that $\Pi(P)$ has combinatorial
complexity at most~$O(n^3)$. Chazelle's algorithm~\cite{chazelle93}
gives a triangulation of the boundary of $\Pi(P)$
in $O(n^3)$ time;
Seidel~\cite{seidel} calls this the boundary description of a polytope.
From the triangulation of the boundary
we can construct a triangulation of $\Pi(P)$: choose
an arbitrary vertex $x$ of $\Pi(P)$ and add it to
each simplex of the triangulation of the boundary of $\Pi(P)$ that does not contain $x$.
(One can also use a point in the interior of $\Pi(P)$.)
The set of \emph{rectangles} is obtained by restricting
our attention to triples $(x,u,v)$ with $\scalar{u,v}=0$,
where $\scalar{\cdot,\cdot}$ again denotes the scalar product of two vectors.
This constraint is non-linear. Because of this, it
is more convenient to treat each simplex of a triangulation of
$\Pi(P)$ separately. When $\scalar{u,v}=0$,
the area of $\paral(x,u,v)$ is $|u|\cdot |v|$.
Consider any simplex $\triangle$ of the triangulation of $\Pi(P)$.
Finding the maximum area rectangle restricted to $\triangle$
corresponds to the problem
\begin{align*}
\opt(\triangle) ~=~\max ~& |u|^2\cdot |v|^2\\
\text{s.t.} ~& (x,u,v)\in \triangle\\
& \scalar{u,v}=0
\end{align*}
This is a constant-size problem. It has $6$ variables and a constant
number of constraints; all constraints but one are linear.
The optimization function has degree four.
In any case, each such problem can be solved in constant time.
When the problem is not feasible, we set $\opt(\triangle)=0$.
Taking the best rectangle over all simplices of a triangulation of
$\Pi(P)$, we find a maximum area rectangle.
Thus, we return $\arg\max_{\triangle} \opt(\triangle)$.
We have shown the following.
\begin{theorem}
\label{th:exact}
Let $P$ be a convex polygon with $n$ vertices.
In time $O(n^3)$ we can find a maximum-area rectangle
contained in $P$.
\end{theorem}
To maximize the perimeter, we apply the same approach.
For each simplex $\triangle$ in a triangulation of $\Pi(P)$
we have to solve the following problem:
\begin{align*}
\opt(\triangle) ~=~\max ~& |u| + |v|\\
\text{s.t.} ~& (x,u,v)\in \triangle\\
& \scalar{u,v}=0
\end{align*}
Combining the solutions over all simplices of the triangulation
we obtain the following.
\begin{theorem}
\label{th:exact2}
Let $P$ be a convex polygon with $n$ vertices.
In time $O(n^3)$ we can find a maximum-perimeter rectangle
contained in $P$.
\end{theorem}
\section{Combinatorially distinct rectangles}
The reader may wonder if the algorithm of the previous section cannot be improved:
It constructs the space~$\Pi(P)$ of all \emph{parallelograms} contained in~$P$,
and then considers the intersection with the manifold $\scalar{u,v} = 0$ corresponding to
the rectangles. If the complexity of this intersection were smaller than~$\Theta(n^3)$,
then we should avoid constructing the entire parallelogram space~$\Pi(P)$ first.
In this section we show that this is not the case: the complexity of the space of rectangles
that fit inside~$P$, that is, the complexity of the intersection of~$\Pi(P)$ with the
manifold~$\scalar{u, v} = 0$, is already~$\Theta(n^3)$ in the worst case. Therefore,
asymptotically we are not losing anything by considering all parallelograms,
instead of directly concentrating on rectangles.
To this end, let us call two rectangles contained in a convex polygon~$P$
\DEF{combinatorially distinct}
if their vertices are incident to a different subset of edges of~$P$.
We are going to show the following: for every sufficiently large $n$ there is a
polygon $P$ with $n$ vertices that contains $\Theta(n^3)$ combinatorially
distinct rectangles. This shows that any algorithm iterating over all
combinatorially distinct rectangles contained in $P$ needs at least
$\Omega(n^3)$ time. Our algorithm falls in this category.
We provide an informal overview of the construction, see Figure~\ref{fig:lowerbound1}.
\begin{figure}
\centering
\includegraphics[scale=.8,page=1]{lowerbound}
\caption{Outline of the construction.}
\label{fig:lowerbound1}
\end{figure}
For simplicity, we are going to use $3n+2$ vertices.
Consider the circle $C$ of unit radius centered at the origin.
We are going to select some points on $C$, plus two additional points.
The polygon $P$ is then described as the convex hull of these points.
The points are classified into 4 groups.
We have a group $L$ of $n$ points
placed densely on the left side of $C$.
We have another group $R$ of $n$ points placed densely on the right
side of $C$.
The third group $T$, also with $n$ points, is placed on the upper part of $C$.
The points of $T$ are more spread out and will be chosen carefully.
Finally, we construct a group $B$ of two points placed below $C$. The construction
will have the property that, for any edge $e_L$ defined by $L$, any edge $e_R$
defined by $R$, and any edge $e_T$ defined by $T$, there is a rectangle
contained in $P$ with vertices on the edges $e_L$, $e_R$ and $e_T$. The points
$B$ are needed only to make sure that the bottom part of the rectangle
is contained in $P$; they do not have any particular role.
\begin{theorem}
For any sufficiently large value of $n$ there is a polygon $P$ with
$n$ vertices such that $P$ contains $\Theta(n^3)$ combinatorially distinct
rectangles.
\end{theorem}
\begin{proof}
Let $C$ be the circle of unit radius centered at the origin~$o$,
and let $C'$ be the circle of radius $1-2\eps$ centered at~$o$, for a
small value~$\eps>0$ to be chosen later on.
Let $H_\eps$ be the horizontal strip defined by $-\eps\le y\le 0$,
see Figure~\ref{fig:lowerbound2}.
Select a set $R$ of $n$ points in $C\cap H_\eps$ with positive $x$-coordinate
and select a set $L$ of $n$ points in $C\cap H_\eps$ with negative $x$-coordinate.
For every $r$ on a segment connecting consecutive points of $R$
and every $\ell$ on a segment connecting consecutive points of $L$,
let $C_{r\ell}$ be the circle with diameter $r\ell$.
We now observe that the upper semicircle of $C_{r\ell}$ with endpoints~$r$ and~$\ell$
lies between $C$ and $C'$.
This follows
from the fact that $C_{r\ell}$ has radius at least $1-\eps$ and the center
of $C_{r\ell}$ is at distance at most $\eps$ from~$o$.
\begin{figure}
\centering
\includegraphics[scale=.6,page=2]{lowerbound}
\caption{Detail of the construction of $L$ and $B$. Some circles $C_{r\ell}$
are in dashed blue.}
\label{fig:lowerbound2}
\end{figure}
Figure~\ref{fig:lowerbound3} illustrates the remainder of the construction.
We place a set $T$ of $n$ points on the upper side of $C$. We want the following
additional property: for any two consecutive points $t$ ant $t'$ of $T$,
the segment $tt'$ intersects $C'$. If we select $\eps>0$ sufficiently small,
then $C$ and $C'$ are close enough that we can select the $n$ points needed
to construct $T$. Finally, we choose a set $B$ of two points below $C$, as shown in
Figure~\ref{fig:lowerbound1}. The final polygon $P$ is the convex hull
of $L\cup R\cup T\cup B$. This finishes the description of the polygon $P$.
Consider any edge $e_R$ defined by two consecutive vertices of $R$
and choose a point $r$ on $e_R$.
Similarly, consider any edge $e_L$ defined by two consecutive vertices of $L$
and choose a point $\ell$ on $e_L$.
Let $e_T$ be an edge of $P$ defined by two consecutive points of $T$.
By construction, the circle $C_{r\ell}$ with diameter $r\ell$
intersects the segment $e_T$ at some point, which we call~$p$.
This means that the triangle $\triangle(r,p,\ell)$ has a
right angle at~$p$. Let~$q$ be the point on~$C_{r\ell}$ such that $pq$ is a diameter
of~$C_{r\ell}$. Then the quadrilateral formed by $r$, $p$, $\ell$, and~$q$
is a rectangle contained in~$P$.
Each choice of an edge $e_R$ defined by $R$, an edge $e_L$ defined by $L$,
and an edge $e_T$ defined by $T$ results in a combinatorially distinct rectangle.
Therefore there are $\Omega(n^3)$ combinatorially distinct rectangles contained in~$P$.
\end{proof}
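The Thales-circle step of the proof — that any point $p$ on the circle with diameter $r\ell$ gives a right angle at $p$, and that reflecting $p$ through the circle's center yields the fourth vertex $q$ of a rectangle — can be verified numerically. An illustrative sketch:

```python
import math

def fourth_vertex(r, l, p):
    """Point q antipodal to p on the circle with diameter r-l; by Thales'
    theorem, r, p, l, q then form a rectangle."""
    cx, cy = (r[0] + l[0]) / 2.0, (r[1] + l[1]) / 2.0  # center of the circle
    return (2 * cx - p[0], 2 * cy - p[1])              # reflect p through it

def right_angle_at(b, a, c, tol=1e-9):
    """True if the angle a-b-c is a right angle: (a-b).(c-b) = 0."""
    return abs((a[0] - b[0]) * (c[0] - b[0]) +
               (a[1] - b[1]) * (c[1] - b[1])) < tol
```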
\begin{figure}
\centering
\includegraphics[scale=.6,page=3]{lowerbound}
\caption{Detail of the construction of $T$.}
\label{fig:lowerbound3}
\end{figure}
\section{Approximation algorithm to maximize the area}
\label{sec:approximation1}
The algorithm is very simple: we compute an $(\eps/32)$-kernel
$C_{\eps/32}$ for the input convex body $C$,
compute a maximum-area rectangle contained in $C_{\eps/32}$ and return it.
We next show that this algorithm indeed returns a $(1-\eps)$-approximation.
Let $\Ropt$ be a maximum-area rectangle contained in $C$, and
let $\varphi$ be an affine transformation such that
$\varphi(\Ropt)$ is the square $[-1,1]^2$.
\begin{lemma}
\label{le:bound}
The diameter of $\varphi(C)$ is at most $16$.
\end{lemma}
\begin{proof}
We will show that $\varphi(C)$ is contained in the disk
centered at the origin $o=(0,0)$ of radius $8$, which implies the result.
Any convex body contains a rectangle with at least half of its
area~\cite{Lassak}. Therefore
$\area(\Ropt)/\area(C)\ge 1/2$.
An invertible affine transformation does not change the ratio
between areas of objects. Therefore
\[
\frac 12 ~\le~ \frac{\area(\Ropt)}{\area(C)} ~=~
\frac{\area(\varphi(\Ropt))}{\area(\varphi(C))} ~=~
\frac{4}{\area(\varphi(C))}
\]
and thus $\area(\varphi(C))\le 8$.
\begin{figure}
\centering
\includegraphics[]{bound}
\caption{Triangle considered in the proof of Lemma~\ref{le:bound}. The point $p$ is
drawn closer to the origin $o$ than assumed.}
\label{fig:bound}
\end{figure}
Assume, for the sake of reaching a contradiction, that
$\varphi(C)$ has a point $p$ at distance larger
than $8$ from the origin $o$. See Figure~\ref{fig:bound}.
Let $s$ be the line segment
of length $2$ centered at the origin $o$ and orthogonal
to the segment $op$. Since $s$ is contained in the square $[-1,1]^2$,
it is contained in $\varphi(C)$. Therefore $\varphi(C)$ contains
the convex hull of $s\cup \{ p \}$, which is a triangle of area
larger than $8$, and we get a contradiction.
It follows that $\varphi(C)$ is contained
in a disk centered at the origin $o$ of radius $8$.
\end{proof}
\begin{lemma}
\label{le:large}
Let $C_\eps$ be an $\eps$-kernel for $C$.
Then $C_\eps$ contains a rectangle with area at least
$(1-32\eps)\cdot \area(\Ropt)$.
\end{lemma}
\begin{proof}
Because of Lemma~\ref{le:invariant}, $\varphi(C_\eps)$ is an
$\eps$-kernel for~$\varphi(C)$. Since $\varphi(C)$
contains~$[-1,1]^2$ and has diameter at most~$16$ due to
Lemma~\ref{le:bound}, Lemma~\ref{le:square} with $a=b=1$ implies
that $\varphi(C_\eps)$ contains the square $S=[-t,t]^2$,
where~$t=1-16\eps$.
Since $S$ is obtained by scaling $[-1,1]^2 = \varphi(\Ropt)$ by
$1-16\eps$, its preimage $R = \varphi^{-1}(S)$ is obtained by
scaling $\Ropt$ by~$1-16\eps$ about its center. It follows that $R$
is a rectangle with area
\begin{align*}
\area(R) ~=~ (1-16\eps)^2 \cdot \area(\Ropt)
~\ge~ (1-32\eps) \cdot \area(\Ropt),
\end{align*}
and the lemma follows.
\end{proof}
\begin{theorem}
Let $C$ be a convex body in the plane.
For any given $\eps\in (0,1)$, we can find a $(1-\eps)$-approximation
to the maximum-area rectangle contained in $C$
in time $O(\eps^{-1/2} T_C +\eps^{-3/2})$.
\end{theorem}
\begin{proof}
First, we compute an $(\eps/32)$-kernel $C_{\eps/32}$ to $C$.
We then compute a maximum-area rectangle
contained in $C_{\eps/32}$ and return it. This finishes the
description of the algorithm.
Because of Lemma~\ref{le:large}, $C_{\eps/32}$ contains
a rectangle of area at least $(1-\eps)\cdot \area(\Ropt)$,
where $\Ropt$ is a maximum-area rectangle contained in $C$.
Therefore, the algorithm returns a $(1-\eps)$-approximation
to the maximum-area rectangle.
Computing $C_{\eps/32}$ takes time
$O((\eps/32)^{-1/2} T_C)=O(\eps^{-1/2} T_C)$ because
of Lemma~\ref{le:kernel}. Since $C_{\eps/32}$
has $O((\eps/32)^{-1/2})=O(\eps^{-1/2})$ vertices, finding a
largest rectangle contained in $C_{\eps/32}$
takes time $O(\eps^{-3/2})$ because of Theorem~\ref{th:exact}.
\end{proof}
\begin{corollary}
Let $C$ be a convex polygon with $n$ vertices given
as a sorted array or a balanced binary search tree.
For any given $\eps\in (0,1)$, we can find a $(1-\eps)$-approximation
to the maximum-area rectangle contained in $C$
in time $O(\eps^{-1/2} \log n +\eps^{-3/2})$.
\end{corollary}
\begin{proof}
In this case $T_C=O(\log n)$.
\end{proof}
\section{Approximation algorithm to maximize the perimeter}
\label{sec:approximation2}
The approximation algorithm is the following: we compute an
$(\eps/16)$-kernel $C_{\eps/16}$ for the input convex body $C$,
compute a maximum-perimeter rectangle $R_{\eps/16}$ contained in
$C_{\eps/16}$, and return it. We next show that this indeed computes
a $(1-\eps)$-approximation.
Since the algorithm is independent of the coordinate axes,
we can assume that the maximum-perimeter rectangle contained in~$C$
is an axis-parallel rectangle $\Ropt = [-a,a] \times [-b,b]$ with $b \leq a$.
We distinguish two cases depending
on the aspect ratio $b/a \le 1$ of $\Ropt$. When $b/a \le \eps/2$, then
the longest segment contained in $C$ is a good approximation
to~$\Ropt$. When $b/a > \eps/2$, then $\Ropt$ is fat enough that we can
use Lemma~\ref{le:square} to obtain a large-perimeter rectangle.
\begin{lemma}
\label{le:bound3}
We have
$\peri(R_{\eps/16})\ge (1-\eps)\cdot \peri(\Ropt)$.
\end{lemma}
\begin{proof}
We assume first that $b \le (\eps/2) a$. This implies $\peri(\Ropt)
\le 4(1 + \eps/2)a$. Because $C$ contains~$\Ropt$, the directional
width of $C$ in the horizontal direction is at least~$2a$. The
diameter of $C_{\eps/16}$ is therefore at least~$(1-\eps/16)2a$, and
so $C_{\eps/16}$ contains a segment of perimeter at least
$4(1-\eps/16)a$. The lemma then follows from
\[
4(1-\eps/16)a \geq (1-\eps) \cdot 4(1 + \eps/2)a.
\]
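The inequality closing this first case holds for every $\eps\in(0,1)$, since $(1-\eps/16) - (1-\eps)(1+\eps/2) = 7\eps/16 + \eps^2/2 > 0$; a quick numeric sweep confirming this (illustrative only):

```python
def margin(eps):
    """(1 - eps/16) - (1 - eps)*(1 + eps/2); positive for eps in (0, 1)."""
    return (1 - eps / 16) - (1 - eps) * (1 + eps / 2)
```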
It remains to consider the case $b > (\eps/2) a$. Let $D$ be the
disk of radius~$4a$ centered at the origin. We have $C \subset D$:
otherwise $C$ contains a point $p$ with $|op| > 4a$ and, with it, the
segment $op$, whose perimeter is strictly larger than $8a \geq 4a +
4b = \peri(\Ropt)$, contradicting the optimality of~$\Ropt$. It
follows that the diameter of $C$ is at most~$8a$.
By Lemma~\ref{le:square},
$C_{\eps/16}$ contains the axis-parallel rectangle $S = [-a + t, a-t]
\times [-b + t, b - t]$, where $t = a\eps/2$. Since
$\eps\cdot\peri(\Ropt) = (4a+4b)\eps > 4a\eps$, we have
\[
\peri(S) = \peri(\Ropt) - 8t
= \peri(\Ropt) - 4a\eps
> (1-\eps)\peri(\Ropt). \qedhere
\]
\end{proof}
\begin{theorem}
Let $C$ be a convex body in the plane. For any given $\eps\in
(0,1)$, we can find a $(1-\eps)$-approximation to the
maximum-perimeter rectangle contained in $C$ in time $O(\eps^{-1/2}
T_C +\eps^{-3/2})$.
\end{theorem}
\begin{proof}
First, we compute an $(\eps/16)$-kernel $C_{\eps/16}$ for $C$.
We then compute a maximum-perimeter rectangle
contained in $C_{\eps/16}$ and return it. This finishes the
description of the algorithm.
By Lemma~\ref{le:bound3} $C_{\eps/16}$ contains a rectangle of
perimeter at least $(1-\eps)\cdot \peri(\Ropt)$, where $\Ropt$ is a
maximum-perimeter rectangle contained in $C$. Therefore, the
algorithm returns a $(1-\eps)$-approximation to the
maximum-perimeter rectangle.
Computing $C_{\eps/16}$ takes time $O(\eps^{-1/2} T_C)$ because of
Lemma~\ref{le:kernel}. Finding the maximum-perimeter rectangle
contained in $C_{\eps/16}$ takes time $O(\eps^{-3/2})$ because of
Theorem~\ref{th:exact2}.
\end{proof}
\begin{corollary}
Let $C$ be a convex polygon with $n$ vertices given as a sorted
array or a balanced binary search tree. For any given $\eps\in
(0,1)$, we can find a $(1-\eps)$-approximation to the
maximum-perimeter rectangle contained in $C$ in time $O(\eps^{-1/2}
\log n +\eps^{-3/2})$.
\end{corollary}
\begin{proof}
In this case $T_C=O(\log n)$.
\end{proof}
\small
\bibliographystyle{abbrv}
\section{Introduction}
One of the most intriguing features in Bose-condensed systems is the coherence of the macroscopic quantum phase of the order parameter.
A Bose-Einstein condensate behaves as a quantum mechanical wave with a macroscopic quantum phase.
The macroscopic quantum phase difference of a condensate trapped in a double-well potential was measured by means of matter wave interference in both destructive~\cite{rf:interfere} and {\it non}destructive~\cite{rf:nondestructive} ways, and the coherence of the Bose-Einstein condensate was clearly verified.
The Josephson effect, which was first predicted for superconductor tunnel junctions~\cite{rf:joseph_origin}, also clearly exhibits the coherence of a Bose-Einstein condensate.
The two mode approximation is useful to describe the Josephson-like dynamics of a weakly linked Bose-Einstein condensate in a double-well potential (see, e.g., refs. 4-7 and references therein).
Using the two mode approximation, various types of the Josephson-like dynamics have been predicted, which include Josephson plasma oscillation and nonlinear self-trapping.~\cite{rf:smerzi,rf:zwerg,rf:BEC}
Recently, the Josephson plasma oscillation and the nonlinear self-trapping have been observed by Albiez {\it et al.} in a condensate of atomic gases trapped in a double-well potential.~\cite{rf:JPST}
Because of the high tunability of system parameters, such a system provides an ideal stage for observing the Josephson-like dynamics.
Since the Josephson plasma mode is a small amplitude oscillation about a static equilibrium, it can be treated as a kind of Bogoliubov excitation.~\cite{rf:parao}
When the potential barrier separating the condensate does not exist, the lowest energy excitation is the dipole mode.
In contrast, when the potential barrier is so strong that the two mode approximation can be valid, the lowest excitation energy coincides with the Josephson plasma energy.
In other words, the crossover from the dipole mode to the Josephson plasma mode occurs in the lowest energy excitation.
Salasnich {\it et al.} numerically solved the Bogoliubov equations in a double-well trap with a harmonic confinement and a Gaussian potential barrier to obtain the collective excitation energies.~\cite{rf:numel_dw}
They found that the lowest excitation energy decreases as the barrier strength increases; however, they did not discuss the relation between the reduction of the lowest excitation energy and the crossover to the Josephson plasma mode.
The excitation energy in a double-well potential can be determined by a solution of a scattering problem of Bogoliubov excitations through the potential barrier used for preparing the double-well potential.
Kagan {\it et al.} studied the scattering problem of Bogoliubov excitations, and predicted that a potential barrier is transparent for low energy excitations, which is called {\it anomalous tunneling}.~\cite{rf:antun}
It is expected that we can relate the anomalous tunneling to the crossover to the Josephson plasma mode, when the Josephson plasma mode is regarded as a Bogoliubov excitation.
In the present paper, we study collective excitations of Bose-Einstein condensates at temperature $T=0$ in a double-well potential, and discuss the crossover from the dipole mode to the Josephson plasma mode.
Adopting a box-shaped double-well potential, we analytically calculate the lowest excitation energy and the Josephson plasma energy for arbitrary values of the barrier strength.
We show that the lowest excitation energy asymptotically approaches the Josephson plasma energy as the barrier strength increases.
We also find that the anomalous tunneling determines the region of the barrier strength where the crossover occurs.
Moreover, we numerically calculate the lowest excitation energy in the case of a double-well potential consisting of a harmonic confinement and a Gaussian potential barrier.
It is shown that the mechanism of the crossover is also valid in this experimentally accessible trap.
The present paper is organized as follows.
In \S \ref{sec:cond}, solving the Gross-Pitaevskii equation with a box-shaped double-well potential, we analytically calculate the condensate wave function and the Josephson plasma energy.
In \S \ref{sec:exci}, solving the Bogoliubov equations with the box-shaped double-well potential, we analytically obtain the excitation spectra of condensates.
In \S \ref{sec:cross}, focusing our attention on the lowest energy excitation, we discuss the crossover from the dipole mode to the Josephson plasma mode.
In \S \ref{sec:numel}, we numerically calculate the lowest excitation energy for a condensate in an experimentally accessible double-well potential.
In \S \ref{sec:conc}, we summarize our results.
\section{Condensate Wave Functions and Josephson Plasma Energy in a Box-Shaped Double-Well Potential}\label{sec:cond}
We first consider a Bose-Einstein condensate in a box-shaped trap which consists of a radial harmonic confinement and end caps in the axial direction (the $x$ axis).
We assume that the frequency $\omega_{\perp}$ of the radial harmonic potential is large enough compared to the excitation energy for the axial direction.
Then, one can justify the one-dimensional treatment of the problem.
Such a configuration was realized in a recent experiment.~\cite{rf:box}
Setting a potential barrier in the center of the trap, a double-well potential is created.
Adopting rigid walls and a $\delta$-function potential barrier as the end caps and the barrier respectively, the double-well potential is written as
\begin{eqnarray}
V(x)=\left\{
\begin{array}{cc}
V_0\delta(x),\,\,|x|<a,\\
\infty,\,\,|x|\geq a,
\end{array}\right.\label{eq:pote}
\end{eqnarray}
where $a$ is the size of a well.
Since we are mainly interested in collective excitations of a double-well trapped condensate at absolute zero temperature, our formulation of the problem is based on the Bogoliubov theory.
The Bogoliubov theory consists of the time-independent Gross-Pitaevskii equation and the Bogoliubov equations.~\cite{rf:BEC}
In this section, we solve the time-independent Gross-Pitaevskii equation:
\begin{eqnarray}
\Bigl[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+V(x)+g|\Psi_0(x)|^2\Bigr]
\Psi_0(x) = \mu\Psi_0(x),
\label{eq:sGPE}
\end{eqnarray}
and obtain the condensate wave function $\Psi_0(x)$.
Here $\mu$ is the chemical potential and $m$ is the mass of an atom.
Since the radial confinement is harmonic and sufficiently tight, the effective one-dimensional coupling constant depends on the harmonic oscillator length $a_{\perp}$ of the radial confinement as $g=\frac{2\hbar^2a_s}{ma_{\perp}^2}$, where $a_s$ is the $s$-wave scattering length~\cite{rf:1dime}.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.2]{box_well}
\end{center}
\caption{\label{fig:box}
Schematic picture of a condensate in a box-shaped double-well potential.
Black solid line represents the condensate wave function.
}
\end{figure}
In \S \ref{sec:cross}, we will discuss the crossover from the dipole mode to the Josephson plasma mode by comparing the lowest excitation energy to the Josephson plasma energy.
Using solutions of the Gross-Pitaevskii equation, one can calculate the Josephson plasma energy.
In order to calculate the Josephson plasma energy, we need to obtain not only the lowest-energy symmetric solution ($\Psi_0^{\mathrm{sm}}$) but also the antisymmetric solution ($\Psi_0^{\mathrm{an}}$) of eq. \!(\ref{eq:sGPE}).
Assuming that the system size is much larger than the healing length $\xi=\frac{\hbar}{\sqrt{m\mu}}$, the condensate wave function near the center of each well is not affected by the potential barrier and the rigid wall.
Then, one can approximately obtain the symmetric and antisymmetric solutions of eq. \!(\ref{eq:sGPE}) as
\begin{eqnarray}
\Psi_0^{\mathrm{sm}}(x) &=& \sqrt{\frac{\mu_{\mathrm{sm}}}{g}}\times\Biggl\{
\begin{array}{ll}
\mathrm{tanh}\left(\frac{|x|+x_0}{\xi_{\mathrm{sm}}}\right),
\,|x|<\frac{a}{2},
\\
\mathrm{tanh}\left(\frac{-|x|+a}{\xi_{\mathrm{sm}}}\right),
\, \frac{a}{2}\leq |x|<a,
\end{array}\Biggr.\label{eq:gs}
\end{eqnarray}
\begin{eqnarray}
\Psi_0^{\mathrm{an}}(x) &=& \sqrt{\frac{\mu_{\mathrm{an}}}{g}}\times\left\{
\begin{array}{ll}
\mathrm{tanh}\left(\frac{x}{\xi_{\mathrm{an}}}\right),
\,\,\,\,|x|<\frac{a}{2},
\\
\mathrm{sgn}(x)\,\mathrm{tanh}\left(\frac{-|x|+a}{\xi_{\mathrm{an}}}\right),
\frac{a}{2}\leq |x|<a,
\end{array}\right.\label{eq:ex}
\end{eqnarray}
where $\mu_{\mathrm{sm}(\mathrm{an})}$ and $\xi_{{\rm sm}({\rm an})}$ are the chemical potential and the healing length of the symmetric (antisymmetric) state.
The constant $x_0$ reflects the value of the condensate wave function at $x=0$, which is determined by the boundary conditions
\begin{eqnarray}
\Psi_0(-0)=\Psi_0(+0), \label{eq:bound0}
\end{eqnarray}
\begin{eqnarray}
\frac{d\Psi_0}{dx}\biggl.\biggr|_{x=-0}
&=&
\frac{d\Psi_0}{dx}\biggl.\biggr|_{x=+0}-\frac{2mV_0}{\hbar^2}\Psi_0(0),
\label{eq:bound0_de}
\end{eqnarray}
as
\begin{eqnarray}
\mathrm{tanh}\frac{x_0}{\xi_{\mathrm{sm}}}=
\frac{-V_0+\sqrt{V_0^2+4(\mu_{\mathrm{sm}}\xi_{\mathrm{sm}})^2}}
{2\mu_{\mathrm{sm}}\xi_{\mathrm{sm}}}.
\end{eqnarray}
We next calculate the Josephson plasma energy $\varepsilon_{\mathrm{JP}}=\sqrt{E_{\mathrm{J}}E_{\mathrm{C}}}$, which is easily derived from the Josephson Hamiltonian:
\begin{eqnarray}
H_{\mathrm{J}}=\frac{E_{\mathrm{C}}k^2}{2}-E_{\mathrm{J}}\,\mathrm{cos}\varphi,
\end{eqnarray}
where $k$ and $\varphi$ represent the population difference and the phase difference between the two wells.~\cite{rf:BEC}
The Josephson coupling energy $E_{\mathrm{J}}$ expresses the overlap integral of the condensate wave functions in the two wells, and the capacitive energy $E_{\mathrm{C}}$ is proportional to the inverse of the compressibility.
They are defined as
\begin{eqnarray}
E_{\mathrm{J}}&=&\frac{E_{\mathrm{an}}-E_{\mathrm{sm}}}{2},
\label{eq:jose_J}\\
E_{\mathrm{C}}&=&4\frac{d\mu_{\mathrm{sm}}}{dN_0},\label{eq:jose_C}
\end{eqnarray}
where $E_{\mathrm{sm(an)}}$ is the mean field energy of the symmetric (antisymmetric) state.~\cite{rf:BEC}
Hence, we need to calculate the chemical potential and the mean field energy in order to obtain the Josephson plasma energy.
The chemical potential is related to the number $N_0$ of condensate atoms by the normalization condition:
\begin{equation}
\int_{-a}^{a}dx|\Psi_0(x)|^2=N_0.\label{eq:norm}
\end{equation}
Substituting eqs. \!(\ref{eq:gs}) and (\ref{eq:ex}) into the normalization condition, we derive the relations between the chemical potentials and $N_0$:
\begin{eqnarray}
\frac{\mu_{\mathrm{sm}}}{gn_0}\left(1-2\frac{\xi_{\mathrm{sm}}}{a}+
\frac{\xi_{\mathrm{sm}}}{a}\mathrm{tanh}\frac{x_0}{\xi_{\mathrm{sm}}}
\right)=1,\label{eq:gschem}
\end{eqnarray}
\begin{eqnarray}
\frac{\mu_{\mathrm{an}}}{gn_0}\left(1-2\frac{\xi_{\mathrm{an}}}{a}
\right)=1,
\label{eq:exchem}
\end{eqnarray}
where $n_0\equiv \frac{N_0}{2a}$ is the averaged density of the condensate.
One can obtain approximate solutions of eq. \!(\ref{eq:gschem}) in the limits of $V_0 \gg gn_0\xi_0$ and $V_0 \ll gn_0\xi_0$, where $\xi_0\equiv\frac{\hbar}{\sqrt{mgn_0}}$.
When $V_0 \gg gn_0\xi_0$, we expand eq. \!(\ref{eq:gschem}) into power series of $\frac{\xi_0}{a}$ and $\frac{gn_0\xi_0}{V_0}$, and obtain
\begin{eqnarray}
\frac{\mu_{\mathrm{sm}}}{gn_0} \simeq
1+\frac{2\xi_0}{a}+\frac{2\xi_0^2}{a^2}-\frac{gn_0\xi_0^2}{aV_0}
+\frac{\xi_0^3}{a^3}-\frac{3gn_0\xi_0^3}{a^2 V_0}.
\label{eq:cheml_dw}
\end{eqnarray}
In a similar way, when $V_0 \ll gn_0\xi_0$, we expand eq. \!(\ref{eq:gschem}) into power series of $\frac{\xi_0}{a}$ and $\frac{V_0}{gn_0\xi_0}$, and obtain
\begin{eqnarray}
\frac{\mu_{\mathrm{sm}}}{gn_0} \simeq
1+\frac{\xi_0}{a}+\frac{\xi_0^2}{2a^2}+\frac{V_0}{2agn_0}
+\frac{\xi_0^3}{8a^3}+\frac{\xi_0 V_0}{4a^2 gn_0}
-\frac{V_0^2}{8a\xi_0(gn_0)^2}.
\label{eq:chems_dw}
\end{eqnarray}
Meanwhile, the solution of eq. \!(\ref{eq:exchem}) is
\begin{eqnarray}
\frac{\mu_{\mathrm{an}}}{gn_0}
&=& 1+\frac{2\xi_0^2}{a^2}+2\frac{\xi_0}{a}\sqrt{1+\frac{\xi_0^2}{a^2}},
\nonumber\\
&\simeq&
1+\frac{2\xi_0}{a}+\frac{2\xi_0^2}{a^2}+\frac{\xi_0^3}{a^3}.
\label{eq:chemex_dw}
\end{eqnarray}
In eqs. \!(\ref{eq:cheml_dw}), (\ref{eq:chems_dw}) and (\ref{eq:chemex_dw}), the expansions are written up to third order in the small parameters.
In Fig. \ref{fig:chem}, we show the chemical potentials $\mu_{\mathrm{sm}}$ and $\mu_{\mathrm{an}}$ as functions of the potential strength $V_0$.
While $\mu_{\mathrm{an}}$ is constant, $\mu_{\mathrm{sm}}$ increases monotonically and approaches the value of $\mu_{\mathrm{an}}$ as $V_0$ increases.
This means that the symmetric state and the antisymmetric state are degenerate in the strong potential limit $V_0\to\infty$.
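This behavior can be checked directly by solving the implicit relations (\ref{eq:gschem}) and (\ref{eq:exchem}) numerically. The following sketch is an illustration only (not part of the original analysis); it works in the dimensionless units $gn_0=\xi_0=1$, and the well size $a=10\xi_0$ and the bisection bracket $\mu_{\rm sm}/gn_0\in[1,2]$ are assumptions:

```python
import math

def mu_sm(V0, a=10.0, tol=1e-12):
    """Solve eq. (gschem) for u = mu_sm/(g n0) by bisection (units g n0 = xi0 = 1)."""
    def f(u):
        s = math.sqrt(u)                                 # xi_sm = 1/s in these units
        T = (-V0 + math.sqrt(V0**2 + 4.0*u))/(2.0*s)     # tanh(x0/xi_sm)
        return u*(1.0 - 2.0/(a*s) + T/(a*s)) - 1.0
    lo, hi = 1.0, 2.0                                    # f(1) < 0 < f(2) for all V0 >= 0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def mu_an(a=10.0):
    """Exact solution of eq. (exchem): mu_an/(g n0)."""
    s = 1.0/a + math.sqrt(1.0 + 1.0/a**2)
    return s*s
```

For $a=10\xi_0$ this reproduces the behavior shown in Fig. \ref{fig:chem}: $\mu_{\rm sm}$ grows monotonically with $V_0$ and approaches the constant $\mu_{\rm an}$ from below.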
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.25]{chem}
\end{center}
\caption{\label{fig:chem}
Chemical potentials $\mu_{\mathrm{sm}}$ (solid line) and $\mu_{\mathrm{an}}$ (dashed line) as functions of the potential strength $V_0$ are shown.}
\end{figure}
The mean field energy $E$ can be calculated using the expression
\begin{eqnarray}
E=
\int_{-a}^{a}dx\Psi_0^{\ast}(x)
\Bigl[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}
+V(x)+\frac{g}{2}|\Psi_0(x)|^2\Bigr]\Psi_0(x).
\end{eqnarray}
In the limit of $V_0\gg gn_0\xi_0$, one can approximately obtain
\begin{eqnarray}
\frac{E_{\mathrm{sm}}}{N_0 gn_0}&\simeq&
1+\frac{4\xi_0}{3a}+\frac{2\xi_0^2}{a^2}
-\frac{gn_0\xi_0^2}{2aV_0}+\frac{2\xi_0^3}{a^3}
-\frac{2gn_0\xi_0^3}{a^2 V_0},\label{eq:mean_gs}\\
\frac{E_{\mathrm{an}}}{N_0 gn_0}&\simeq&
1+\frac{4\xi_0}{3a}+\frac{2\xi_0^2}{a^2}+\frac{2\xi_0^3}{a^3},
\label{eq:mean_ex}
\end{eqnarray}
where $E_{\mathrm{sm}(\mathrm{an})}$ is the mean field energy of the symmetric (antisymmetric) state.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.25]{JP_energy}
\end{center}
\caption{\label{fig:JP_energy}
Josephson plasma energy $\varepsilon_{\mathrm{JP}}$ as a function of the potential strength $V_0$ is shown.}
\end{figure}
Substituting the chemical potential of eq. \!(\ref{eq:cheml_dw}) and the mean field energies of eqs. \!(\ref{eq:mean_gs}) and (\ref{eq:mean_ex}) into eqs. \!(\ref{eq:jose_J}) and (\ref{eq:jose_C}), one obtains
\begin{eqnarray}
\frac{E_{\mathrm{J}}}{gn_0}&\simeq&
\frac{N_0gn_0\xi_0^2}{4V_0a}\left(1+\frac{4\xi_0}{a}\right),\\
\frac{E_{\mathrm{C}}}{gn_0}&\simeq&
\frac{4}{N_0}\left(1+\frac{\xi_0}{a}\right)
\end{eqnarray}
when $V_0\gg gn_0\xi_0$.
These equations yield the Josephson plasma energy
\begin{eqnarray}
\frac{\varepsilon_{\mathrm{JP}}}{gn_0}\simeq
\sqrt{\frac{gn_0\xi_0^2}{V_0a}}\left(1+\frac{5\xi_0}{2a}\right),
V_0\gg gn_0\xi_0.\label{eq:josephp}
\end{eqnarray}
The Josephson plasma energy is shown in Fig. \ref{fig:JP_energy}, as a function of the potential strength.
In \S \ref{sec:cross}, we will see that the lowest excitation energy coincides with this Josephson plasma energy for the sufficiently strong potential.
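The leading-order expression (\ref{eq:josephp}) can be verified arithmetically from $E_{\rm J}$ and $E_{\rm C}$, since $N_0$ cancels in the product $E_{\rm J}E_{\rm C}$. The sketch below (units $gn_0=\xi_0=1$; an illustration only) compares $\sqrt{E_{\rm J}E_{\rm C}}$ with eq. \!(\ref{eq:josephp}) and checks that the agreement improves with increasing $a/\xi_0$:

```python
import math

def jp_exact(V0, a):
    """epsilon_JP/(g n0) = sqrt(E_J E_C)/(g n0); N0 cancels in the product (units g n0 = xi0 = 1)."""
    EJ = (1.0/(4.0*V0*a))*(1.0 + 4.0/a)   # E_J/(N0 g n0)
    EC = 4.0*(1.0 + 1.0/a)                # N0 E_C/(g n0)
    return math.sqrt(EJ*EC)

def jp_leading(V0, a):
    """Leading-order expression for epsilon_JP/(g n0), eq. (josephp)."""
    return math.sqrt(1.0/(V0*a))*(1.0 + 2.5/a)
```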
\section{Excitation Spectrum in a Box-Shaped Double-Well Potential}\label{sec:exci}
The aim of this section is to calculate the excitation spectra of condensates in a box-shaped double-well potential.
The excitations of a condensate correspond to the small fluctuations of the condensate wave function $\Psi_0(x,t)$ around the stationary solution of eq. \!(\ref{eq:sGPE}),
\begin{eqnarray}
\delta\Psi_0(x,t)=e^{-\frac{{\rm i}\mu t}{\hbar}}
\sum_j\left(u_j(x)e^{-\frac{{\rm i}\varepsilon_j t}{\hbar}}
-v_j^*(x) e^{\frac{{\rm i}\varepsilon_j t}{\hbar}}\right),
\end{eqnarray}
where $\varepsilon_j$ is the energy of the excitation in an eigenstate labeled by $j$.
The wave functions $(u_j(x), v_j(x))^{\bf t}$ of the excitations fulfill the Bogoliubov equations~\cite{rf:BEC},
\begin{eqnarray}
\begin{array}{cc}
\left(
\begin{array}{cc}
H_0 & -g{\Psi_0(x)}^2 \\
g\Psi_0^{\ast}(x)^2 & -H_0
\end{array}
\right)
\left(
\begin{array}{cc}
u_j(x) \\ v_j(x)
\end{array}
\right)
= \varepsilon_j\left(
\begin{array}{cc}
u_j(x) \\ v_j(x)
\end{array}
\right),
\end{array}\\ \label{eq:BdGE}
H_0 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}
-\mu+V\left(x\right)+2g|\Psi_0(x)|^2.
\end{eqnarray}
Solving these equations with the condensate wave function of eq. \!(\ref{eq:gs}), we shall obtain the excitation spectrum.
Hereafter the chemical potential $\mu$ denotes the chemical potential $\mu_{\rm sm}$ of the symmetric state.
In order to relate the problem to tunneling properties of the Bogoliubov excitations, we separate the system into three regions, namely $-a < x \leq -\frac{a}{2}$, $|x| < \frac{a}{2}$ and $\frac{a}{2}\leq x<a$.
We solve the Bogoliubov equations analytically in each region, and derive the equation to determine the excitation spectrum by connecting each solution smoothly.
At first, we find solutions of the Bogoliubov equations in the central region ($|x| < \frac{a}{2}$).
There exist two independent solutions with the same excitation energy $\varepsilon$, corresponding to two types of scattering processes.~\cite{rf:KP}
One solution $\psi^l(x)\equiv (u^l(x), v^l(x))^{\bf t}$ describes the process where a Bogoliubov excitation comes from the left, and the other solution $\psi^r(x)\equiv (u^r(x), v^r(x))^{\bf t}$ describes the process where a Bogoliubov excitation comes from the right.
The solution $\psi^l(x)$ is written as
\begin{eqnarray}
{\psi}^l(x)
\!\!\!&=&\!\!\!\left\{\begin{array}{ll}
\left(\!\begin{array}{cc}
u_1 \\ v_1
\end{array}\!\right) e^{{\rm i}p_+ x}
+r\left(\!\begin{array}{cc}
u_2 \\ v_2
\end{array}\!\right) e^{-{\rm i}p_+ x}
+b\left(\!\begin{array}{cc}
u_3 \\ v_3
\end{array}\!\right) e^{p_- x},
& \!\!\!x<0, \\
t\left(\!\begin{array}{cc}
u_1 \\ v_1
\end{array}\!\right) e^{{\rm i}p_+ x}
+c\left(\!\begin{array}{cc}
u_4 \\ v_4
\end{array}\!\right) e^{-p_- x},
& \!\!\!x>0,
\end{array}\right.\label{eq:lcs}
\end{eqnarray}
where $p_{\pm}=\sqrt{\frac{2m}{\hbar^2}(\sqrt{\mu^2+\varepsilon^2}\mp\mu)}$ satisfies the Bogoliubov spectrum for a uniform system.~\cite{rf:antun,rf:KP,rf:kovri}
The coefficients $r$, $b$, $t$, and $c$ are the amplitudes of the reflected, left-localized, transmitted, and right-localized components, respectively.
All the coefficients can be determined by the boundary conditions at $x=0$.
Thus we obtain $\psi^l(x)$; $\psi^r(x)$ can be found in the same way.
Expanding $|t|$ and the phase $\delta$ of $t$ around $\varepsilon=0$,~\cite{rf:KP} one can analytically obtain
\begin{eqnarray}
|t| &\simeq& 1-\alpha\left(\frac{\varepsilon}{\mu}\right)^2,
\label{eq:unexp_pr}\\
\delta &\simeq& \beta\frac{\varepsilon}{\mu}.
\label{eq:tnexp_ph}
\end{eqnarray}
The coefficients $\alpha$ and $\beta$ are
\begin{eqnarray}
\alpha=
\frac{2(V_0 \!-\! \mu\xi)(V_0^3\!+\! \nu V_0^2\!+\! 2\nu(\mu\xi)^2
- 4(\mu\xi)^3)+ 9(\mu\xi V_0)^2}{8(\mu\xi\nu)^2},
\end{eqnarray}
\begin{eqnarray}
\beta=\frac{V_0^2+\nu V_0-3\mu\xi\nu+6(\mu\xi)^2}{2\mu\xi\nu},
\end{eqnarray}
where
\begin{eqnarray}
\nu=\sqrt{V_0^2+4(\mu\xi)^2}.
\end{eqnarray}
It is obvious from eqs. \!(\ref{eq:unexp_pr}) and (\ref{eq:tnexp_ph}) that the transmission coefficient $|t|^2$ approaches unity and the phase shift of $t$ approaches zero as the energy is reduced to zero; Kagan $\mathit{et \,\, al.}$ called this behavior {\it anomalous tunneling}~\cite{rf:antun}.
These expressions are valid for arbitrary values of the potential strength $V_0$.
Furthermore, when $\varepsilon \ll \mu$ and $V_0 \gg \mu\xi$, one can approximately obtain the reflection and transmission amplitudes,~\cite{rf:KP}
\begin{eqnarray}
r &=& \frac{\varepsilon V_0-\varepsilon\mu\xi
+i\frac{\varepsilon^2 V_0}{\mu}}
{\varepsilon V_0-\varepsilon\mu\xi+i\mu^2\xi},\label{eq:cfrf}
\\
t &=& \frac{\varepsilon\mu\xi+i\mu^2\xi}
{\varepsilon V_0-\varepsilon\mu\xi+i\mu^2\xi}. \,
\label{eq:cfct}
\end{eqnarray}
From eq. (\ref{eq:cfct}), we can see that the peak around $\varepsilon=0$ in $|t|^2$ has a Lorentzian shape with half-width $\Delta\varepsilon\sim \frac{\mu^2\xi}{V_0}$.
Such an anomalous tunneling property of the Bogoliubov excitations essentially affects the crossover from the dipole mode to the Josephson plasma mode of the lowest energy excitation, as discussed in the next section.
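This Lorentzian behavior can be made explicit numerically. The sketch below (an illustration; units $\mu=\xi=1$) evaluates $|t|^2$ from eq. \!(\ref{eq:cfct}) and locates the energy at which it falls to $1/2$, confirming $\Delta\varepsilon\sim\mu^2\xi/V_0$ for $V_0\gg\mu\xi$:

```python
import math

def t2(eps, V0, mu=1.0, xi=1.0):
    """Transmission coefficient |t|^2 from eq. (cfct), valid for eps << mu, V0 >> mu*xi."""
    num = (eps*mu*xi)**2 + (mu**2*xi)**2
    den = (eps*V0 - eps*mu*xi)**2 + (mu**2*xi)**2
    return num/den

def half_width(V0):
    """Energy (in units of mu) at which |t|^2 falls to 1/2, by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if t2(mid, V0) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```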
We can write a general solution of the Bogoliubov equations in the central region as a linear combination of $\psi^l(x)$ and $\psi^r(x)$:
\begin{eqnarray}
\psi(x)=\eta\psi^l(x)+\zeta\psi^r(x), \, |x|<\frac{a}{2}.
\label{eq:centre}
\end{eqnarray}
One can obtain an analytical solution of the Bogoliubov equations also in the left and right side regions ($\frac{a}{2}\leq |x| <a$).
It is
\begin{eqnarray}
u^{\mathrm{side}}(x)&=&
\sqrt{\frac{2\mu}{\varepsilon}}
\Biggl[\Biggr.
\left(1+\frac{(p_+\xi)^2\mu}{2\varepsilon}
\right)
\mathrm{tanh}\left(\frac{|x|-a}{\xi}\right)
\mathrm{cos}p_+(|x|-a)
\nonumber\\
&&+\frac{p_+\xi\mu}{2\varepsilon}
\Biggl(\Biggr.\frac{(p_+\xi)^2}{2}+1
-\mathrm{tanh}^2\left(\frac{|x|-a}{\xi}\right)
+\frac{\varepsilon}{\mu}\Biggl.\Biggr)
\mathrm{sin}p_+(|x|-a)
\Biggl.\Biggr],\label{eq:sideu}
\\
v^{\mathrm{side}}(x)&=&
\sqrt{\frac{2\mu}{\varepsilon}}
\Biggl[\Biggr.
\left(1-\frac{(p_+\xi)^2\mu}{2\varepsilon}
\right)
\mathrm{tanh}\left(\frac{|x|-a}{\xi}\right)
\mathrm{cos}p_+(|x|-a)\nonumber\\
&&-\frac{p_+\xi\mu}{2\varepsilon}
\Biggl(\Biggr.\frac{(p_+\xi)^2}{2}+1
-\mathrm{tanh}^2\left(\frac{|x|-a}{\xi}\right)
-\frac{\varepsilon}{\mu}\Biggl.\Biggr)
\mathrm{sin}p_+(|x|-a)
\Biggl.\Biggr].\label{eq:sidev}
\end{eqnarray}
Using the solutions of eqs. \!(\ref{eq:centre}), (\ref{eq:sideu}) and (\ref{eq:sidev}), one can construct solutions in all regions:
\begin{eqnarray}
{\psi}(x)
\!\!\!&=&\!\!\!\left\{\begin{array}{ll}
F \psi^{\mathrm{side}}(x)
& \!\!\!-a<x\leq-\frac{a}{2},\\
\vspace{-2mm}
& \\
\eta\psi^l(x)
+\zeta\psi^r(x)
& |x|<\frac{a}{2},\\\vspace{-2mm}
& \\
G \psi^{\mathrm{side}}(x)
& \!\!\!\frac{a}{2}\leq x < a,
\end{array}\right.\label{eq:spose}
\end{eqnarray}
where $\eta=\zeta$ and $F=G$ hold for collective excitations with even parity, while $\eta=-\zeta$ and $F=-G$ hold for those with odd parity.
Imposing the boundary conditions at $|x|=\frac{a}{2}$, one can smoothly connect the solutions.
As a result, one obtains the equation to determine the excitation spectrum:
\begin{equation}
(r \pm t)\mathrm{exp}\left({\rm i}(2p_+a+\gamma)\right)=1,
\label{eq:eq_bspe}
\end{equation}
where
\begin{eqnarray}
\gamma \equiv \mathrm{tan}^{-1}
\left(\frac{-p_+\xi_{\mathrm{sm}}}{1-\frac{1}{4}(p_+\xi_{\mathrm{sm}})^2}
\right).
\end{eqnarray}
The positive (negative) sign in the left-hand side of eq. \!(\ref{eq:eq_bspe}) represents the excitations with even (odd) parity.
One can see from eq. \!(\ref{eq:eq_bspe}) that tunneling properties of the Bogoliubov excitations through the potential barrier affect the excitation spectrum explicitly.
Since the reflection and transmission amplitudes satisfy the relations~\cite{rf:KP}
\begin{eqnarray}
|t|^2+|r|^2=1,\label{eq:efcl}
\end{eqnarray}
and
\begin{eqnarray}
t=|t|e^{{\rm i}\delta}, \,r=\pm {\rm i}|r|e^{{\rm i}\delta},\label{eq:prim}
\end{eqnarray}
one obtains
\begin{eqnarray}
|r \pm t|=1.\label{eq:reftra}
\end{eqnarray}
Using eq. \!(\ref{eq:reftra}), one can rewrite eq. \!(\ref{eq:eq_bspe}) in terms of the phase of the left-hand side as
\begin{equation}
2p_+ a+\gamma+\phi_{\pm}=2\pi n, \label{eq:ph_exp}
\end{equation}
where $\phi_{\pm}$ is the phase of $r \pm t$.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.25]{energy_sp}
\end{center}
\caption{\label{fig:epsart5}
Excitation spectra in the double-well traps with $V_0=0$, $V_0=10$ and $V_0=100$ are shown, where $a=10\xi_0$.
}
\end{figure}
Solving eq. \!(\ref{eq:ph_exp}) for excitation energy $\varepsilon$, we obtain the excitation spectra which consist of discrete energy levels as shown in Fig. \ref{fig:epsart5}.
The parity of the quantum number corresponds to the parity of the wave function of the collective excitation.
In the strong potential limit $V_0\rightarrow \infty$, the excitations labeled by $2l$ and $2l+1$ are degenerate, because the condensate is completely divided between the two wells.
Hence, as the potential barrier becomes stronger, the energy of the even-parity excitation labeled by $2l$ increases and the energy of the odd-parity excitation labeled by $2l+1$ decreases, so that the two energies approach each other.
The change of energies of the odd parity excitations is more pronounced than that of the even parity excitations, and such a tendency is qualitatively consistent with the numerical results in the case of the double-well traps with harmonic confinement~\cite{rf:numel_dw}.
Moreover, we see that the lower the excitation energy is, the more slowly it changes as the potential barrier grows.
This implies that the anomalous tunneling behavior of low energy excitations buffers the effect of the potential barrier.
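The phase equation (\ref{eq:ph_exp}) can be solved numerically for the lowest odd-parity mode using the approximate amplitudes (\ref{eq:cfrf}) and (\ref{eq:cfct}). The sketch below is an illustration (units $\mu=\xi=1$, so $\mu_{\rm sm}$-dependent corrections of $O(\xi_0/a)$ are not included); the bracket $\varepsilon\in(0,0.2\mu)$ is an assumption:

```python
import cmath
import math

def lhs(eps, V0, a):
    """Left-hand side of eq. (ph_exp) minus 2*pi*n, for n = 0 and odd parity (units mu = xi = 1)."""
    p = math.sqrt(2.0*(math.sqrt(1.0 + eps**2) - 1.0))   # p_+, reduces to eps for eps << 1
    gamma = math.atan2(-p, 1.0 - 0.25*p*p)
    den = eps*(V0 - 1.0) + 1j
    r = (eps*(V0 - 1.0) + 1j*eps**2*V0)/den              # eq. (cfrf)
    t = (eps + 1j)/den                                   # eq. (cfct)
    return 2.0*p*a + gamma + cmath.phase(r - t)          # phase of r - t is phi_-

def lowest_mode(V0, a=10.0):
    """Lowest odd-parity excitation energy (units of mu), by bisection on lhs."""
    lo, hi = 1e-9, 0.2
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if lhs(mid, V0, a) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For $a=10\xi$ and $V_0\gg a\mu\xi$, the root approaches the leading Josephson plasma scaling $\sqrt{\mu\xi^2/(V_0 a)}$ and decreases with increasing $V_0$, as in Fig. \ref{fig:epsart5}.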
\section{Crossover from Dipole Mode to Josephson Plasma Mode}\label{sec:cross}
In this section, we shall discuss the effect of the anomalous tunneling on the lowest energy excitation.
Solving eq. \!(\ref{eq:ph_exp}) with $n=0$ and odd parity, we obtain the lowest excitation energy as a function of the potential strength, which is represented by the solid line in Fig. \ref{fig:lowest_ex}.
In order to elucidate the effect of the anomalous tunneling on the lowest excitation energy $\varepsilon_{\mathrm{low}}$, we calculate it analytically in two limits.
On the one hand, when $\frac{gn_0\xi_0}{V_0} \gg \frac{\xi_0}{a}$, substituting the low energy expansion of the transmission amplitude of eqs. \!(\ref{eq:unexp_pr}) and (\ref{eq:tnexp_ph}), one obtains
\begin{eqnarray}
\frac{\varepsilon_{\mathrm{low}}}{gn_0}
\simeq\frac{\pi\xi_0}{2a}\Biggl[\Biggr.
1+\frac{\xi_0}{a}\biggl(\biggr.\frac{3}{2}
-\frac{\beta}{2}-\sqrt{\frac{\alpha}{2}}
-\frac{\sqrt{V_0^2+4(gn_0\xi_0)^2}-V_0}{4gn_0\xi_0}
\biggl.\biggr)\Biggl.\Biggr],\label{eq:small_cor}
\end{eqnarray}
which is plotted in Fig. \ref{fig:lowest_ex} by the dotted line.
Obviously, the influence of the potential barrier on the lowest excitation is relatively small, because the correction of $\varepsilon_{\rm low}$ due to the potential barrier is included in the small term of $O(\frac{\xi_0}{a})$ in eq. \!(\ref{eq:small_cor}).
On the other hand, when $\frac{gn_0\xi_0}{V_0} \ll \frac{\xi_0}{a}$,
one can obtain an approximate expression of the lowest excitation energy from eqs. \!(\ref{eq:cfrf}) and (\ref{eq:cfct}),
\begin{eqnarray}
\frac{\varepsilon_{\mathrm{low}}}{gn_0}\simeq
\sqrt{\frac{gn_0 \xi_0^2}{V_0 a}}
\left(1+\frac{5\xi_0}{2a}\right).
\end{eqnarray}
This energy coincides with the Josephson plasma energy of eq. \!(\ref{eq:josephp}), and this result explicitly justifies treatment of the Josephson plasma oscillation by the two mode approximation for a sufficiently strong potential barrier.
In Fig. \ref{fig:lowest_ex}, one can see that the crossover from the dipole mode to the Josephson plasma mode occurs.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=3 in, height=2 in]{lowest_ex}
\end{center}
\caption{\label{fig:lowest_ex}
Black solid line is the lowest excitation energy for $a=10\xi_0$ as a function of $V_0$.
Dotted line is the corrected dipole mode energy of eq. \!(\ref{eq:small_cor}), while the dashed line is the Josephson plasma energy shown in Fig. \ref{fig:JP_energy}.
Gray solid line is the half-width of the peak of the transmission coefficient.}
\end{figure}
It is crucial for understanding the physics of the crossover that two energy scales, $\frac{gn_0\xi_0}{a}$ and $\frac{(gn_0)^2\xi_0}{V_0}$, determine whether the lowest energy excitation is the Josephson plasma mode or the dipole mode.
The former is comparable to the dipole mode energy.
The latter is comparable to the half-width $\Delta\varepsilon$ of the peak of the transmission coefficient $|t|^2$.
In other words, the potential barrier is almost transparent for the excitations with energy less than $\frac{(gn_0)^2\xi_0}{V_0}$.
The crossover is determined by whether the anomalous tunneling is effective or not for the lowest energy excitation.
When $\frac{\xi_0}{a} \ll \frac{gn_0\xi_0}{V_0}$, the potential barrier is almost transparent for the lowest energy excitation due to the anomalous tunneling; hence, the dipole mode is hardly affected by the potential barrier.
As the potential strength $V_0$ increases, the contribution of the anomalous tunneling diminishes gradually, and the lowest energy excitation comes to be significantly affected by the potential barrier.
Consequently, when $\frac{\xi_0}{a} \gg \frac{gn_0\xi_0}{V_0}$, the lowest energy excitation becomes the Josephson plasma mode.
Figure \ref{fig:lowest_ex}, where the half-width of the anomalous tunneling (gray line) divides the lowest excitation energy into regions of the dipole mode and the Josephson plasma mode, clearly confirms such an interpretation of the crossover.
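The comparison of the two energy scales can be summarized in a small classifier. The sketch below (an illustration only; units $gn_0=\xi_0=1$, and the sharp threshold replaces what is in reality a smooth crossover) returns the dipole estimate $\pi gn_0\xi_0/(2a)$ when the anomalous-tunneling half-width exceeds it, and the Josephson plasma estimate of eq. \!(\ref{eq:josephp}) otherwise:

```python
import math

def lowest_mode_estimate(V0, a):
    """Classify the lowest excitation: 'dipole' if the anomalous-tunneling
    half-width ~ (g n0)^2 xi0 / V0 exceeds the dipole energy pi g n0 xi0/(2a),
    'josephson' otherwise (units g n0 = xi0 = 1)."""
    dipole = math.pi/(2.0*a)
    width = 1.0/V0                     # Delta(eps) ~ (g n0)^2 xi0 / V0
    if width > dipole:
        return ('dipole', dipole)
    return ('josephson', math.sqrt(1.0/(V0*a))*(1.0 + 2.5/a))
```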
\section{Realistic Double-Well Potential}\label{sec:numel}
In the preceding sections, we considered the case in the double-well potential of eq. \!(\ref{eq:pote}) with rigid walls and a $\delta$-function potential barrier.
In this section, we numerically solve the Gross-Pitaevskii equation and the Bogoliubov equations in the case of a realistic double-well potential.
Considering a double-well potential consisting of a magnetic trap and a blue-detuned laser beam, which is realized in the experiment of ref. 1, we adopt a double-well potential with a harmonic confinement and a Gaussian potential barrier:
\begin{eqnarray}
V(x)=\frac{m\omega_x^2 x^2}{2}+U_0\,{\rm exp}\left(-\frac{x^2}{\sigma^2}\right),
\label{eq:realpote}
\end{eqnarray}
where $\omega_x$ is the frequency of the harmonic potential.
The height $U_0$ and the width $\sigma$ of the potential barrier can be controlled in experiments by varying the intensity and the aperture of the laser beam, respectively.
With use of the Bogoliubov theory, one can discuss the crossover to the Josephson plasma mode in the same way as in the case of the box-shaped double-well potential.
We can numerically find the lowest symmetric and antisymmetric solutions of the Gross-Pitaevskii equation with the double-well potential of eq. \!(\ref{eq:realpote}); accordingly we can calculate the Josephson coupling energy $E_{\rm J}$ and the capacitive energy $E_{\rm C}$ using the expressions of eq. \!(\ref{eq:jose_J}) and eq. \!(\ref{eq:jose_C}).
As a result, we can obtain the Josephson plasma energy $\varepsilon_{\rm JP}$.
Furthermore, substituting the lowest symmetric solution of the Gross-Pitaevskii
equation into the Bogoliubov equations, we can numerically calculate the lowest excitation energy.
Closed circles and open circles in Fig. \ref{fig:harmo} represent the lowest excitation energy and the Josephson plasma energy, respectively.
In Fig. \ref{fig:harmo}, one can see that the lowest excitation energy decreases as the height of the potential barrier increases; it finally coincides with the Josephson plasma energy for a sufficiently high barrier.
Thus, the crossover from the dipole mode to the Josephson plasma mode is confirmed in the case of a realistic double-well potential.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.25]{ngraphh}
\end{center}
\caption{Closed circles represent the lowest excitation energy, and open circles represent the Josephson plasma energy. Triangles represent the half-width of the peak of the transmission coefficient. We consider a condensate of $^{23}{\rm Na}$ atoms. The values of the parameters are as follows: the number of condensed atoms is $N_0=3000$, the frequency of the radial confinement is $\omega_{\perp}=250\times 2\pi \,\,{\rm Hz}$, the frequency of the axial confinement is $\omega_x=10\times 2\pi \,\,{\rm Hz}$, the {\it s}-wave scattering length is $a_s=3 \,{\rm nm}$ and the width of the potential barrier is $\sigma=1.33 \,{\rm \mu m}$.}
\label{fig:harmo}
\end{figure}
In order to elucidate the role of the anomalous tunneling in the crossover, we next estimate the half-width $\Delta\varepsilon$ of the anomalous-tunneling peak for a Gaussian potential barrier.
In the case of a rectangular potential barrier, Kagan {\it et al}. have obtained an approximate analytical expression for the transmission coefficient.~\cite{rf:antun}
Using the expression, one can calculate the half-width,
\begin{eqnarray}
\frac{\Delta\varepsilon}{\mu}\simeq
\frac{2\sqrt{2}\,{\rm e}^{-\kappa_0 d}}{\kappa_0 \xi},\label{eq:half_w}
\end{eqnarray}
where
\begin{eqnarray}
\kappa_0 = \frac{1}{d}\int_{-\frac{d}{2}}^{\frac{d}{2}}dx
\sqrt{\frac{2m}{\hbar^2}(V_{\rm barrier}(x)-\mu)}.\label{eq:t_ac}
\end{eqnarray}
In the region $|x|<\frac{d}{2}$, the potential barrier $V_{\rm barrier}(x)$ is larger than the chemical potential; this means that the points $|x|=\frac{d}{2}$ are the classical turning points.
We estimate the half-width $\Delta\varepsilon$ for a Gaussian potential barrier by means of the expression of eqs. \!(\ref{eq:half_w}) and (\ref{eq:t_ac}).
Triangles in Fig. \ref{fig:harmo} represent the half-width $\Delta\varepsilon$ of the peak of the transmission coefficient.
When the dipole mode energy is much smaller than the half-width, namely $\hbar\omega_x \ll \Delta\varepsilon$, the anomalous tunneling is effective for the lowest excitation energy.
Accordingly, the lowest excitation is hardly affected by the potential barrier.
When $\hbar\omega_x \sim \Delta\varepsilon$, the lowest excitation energy remarkably changes because the excitation begins to perceive the potential barrier.
When $\hbar\omega_x \gg \Delta\varepsilon$, the anomalous tunneling is no longer effective; consequently, the lowest excitation becomes the Josephson plasma mode.
Therefore, the interpretation of the crossover discussed in the previous section is also appropriate for the double-well potential of eq. \!(\ref{eq:realpote}).
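The estimate of $\Delta\varepsilon$ from eqs. \!(\ref{eq:half_w}) and (\ref{eq:t_ac}) is straightforward to evaluate numerically. The sketch below is an illustration (units $\mu=\xi=1$; the trapezoidal quadrature and grid size are assumptions) for the Gaussian barrier $V_{\rm barrier}(x)=U_0\,{\rm exp}(-x^2/\sigma^2)$ with $U_0>\mu$:

```python
import math

def half_width_gaussian(U0, sigma, n=2000):
    """WKB half-width of the anomalous-tunneling peak, eqs. (half_w) and (t_ac),
    for V_barrier(x) = U0 exp(-x^2/sigma^2) with U0 > mu, in units mu = xi = 1."""
    d = 2.0*sigma*math.sqrt(math.log(U0))   # classical turning points at |x| = d/2
    h = d/n
    s = 0.0
    for i in range(n + 1):                  # trapezoidal rule for the kappa_0 integral
        x = -0.5*d + i*h
        f = math.sqrt(max(0.0, 2.0*(U0*math.exp(-(x/sigma)**2) - 1.0)))
        s += 0.5*f if i in (0, n) else f
    kappa0 = s*h/d
    return 2.0*math.sqrt(2.0)*math.exp(-kappa0*d)/kappa0
```

As expected, the half-width shrinks rapidly as either the height $U_0$ or the width $\sigma$ of the barrier grows, which is the behavior of the triangles in Fig. \ref{fig:harmo}.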
\section{Conclusions}\label{sec:conc}
In summary, we have studied collective excitations of a condensate in a double-well potential.
We have analytically solved the Bogoliubov equations with a box-shaped double-well potential, and analyzed the excitation spectrum.
In the lowest excitation, it has been found that the crossover from the dipole mode to the Josephson plasma mode occurs as the potential barrier separating the condensate becomes strong.
The crossover is dominated by the anomalous tunneling behavior of the Bogoliubov excitations.
We have numerically calculated the lowest excitation energy and the Josephson plasma energy for a condensate in a realistic double-well potential.
We have confirmed by numerical calculation that the mechanism of the crossover is valid also in the case of more realistic potential.
While only condensates at $T=0$ have been considered in our calculations, the Josephson plasma oscillation of condensates at finite temperature is known to exhibit dissipative behavior due to the presence of the thermal depletion~\cite{rf:zwerg}.
It will be interesting to study the excitations of the condensates at finite temperature in the double-well potential, and to discuss the relation between the damping of the collective oscillation and the anomalous tunneling of the Bogoliubov excitations.
\section*{Acknowledgment}
We would like to thank S. Tsuchiya, T. Kimura and T. Nikuni for useful discussions.
This work is partly supported by a Grant for The 21st Century COE Program (Physics of Self-organization Systems) at Waseda University from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
I.D. is supported by Grant-in-Aid for JSPS fellows.
\subsubsection{Odijk regime}
If both $D_{\rm H} \ll \ell_{\rm P}$ and $D_{\rm W} \ll \ell_{\rm P}$, then it is impossible for the polymer to turn around completely. Instead, the polymer must stay almost parallel to the channel axis, undulating slightly from side to side. In this regime the statistics of the extension and the free energy of confinement
have been studied in detail both theoretically and numerically \cite{burkhardt2010,chen2013}.
\subsubsection{Backfolded Odijk regime}
Odijk \cite{odijk2008} has predicted the existence of a regime intermediate between the Odijk regime and the extended de Gennes regime, for very slender semi-flexible polymers. For square channels, this regime was studied
by Muralidhar {\em et al.} \cite{muralidhar2014a}, who gave it the name ``backfolded Odijk regime''. In this regime the size of the channel is such that backfolds are possible but rare. Odijk defines the \textit{global persistence length} $g$ as the orientational correlation length of the corresponding ideal polymer. The backfolded Odijk regime requires that $\ell_{\rm P} \ll g\ll l_{\rm cc}$. This condition can only be satisfied for very thin polymers \cite{muralidhar2014a}.
Assuming that a polymer section that is free of backfolds
follows the statistics of the Odijk regime, one can predict scaling relations for the extension \textit{in terms of the global persistence length} $g$. However, no theory for how $g$ itself depends on the channel size exists for relevant channel sizes \cite{muralidhar2014a}. Note also that the statistics of the Odijk regime are not derived under the assumption that the chain does not backfold, but under the stricter assumption that it is almost completely parallel with the channel axis. Since such strong alignment prohibits backfolding, the agreement with Odijk statistics is only approximate in this regime.
The predictions for the backfolded Odijk regime in the square channel \cite{odijk2008,muralidhar2014a} are straightforward to generalise to rectangular channels. The results are shown in Table \ref{tab:1}. The prediction for the scaling of the extension was derived by Odijk \cite{odijk2008}.
\subsubsection{Slit regimes}
For completeness, we include in Table \ref{tab:1} the scaling regimes for infinite channel aspect ratio,
i.e. for the polymer confined in a slit. The results for these regimes are quoted from recent studies by Dai {\em et al.} \cite{dai2012},
Taloni {\em et al.} \cite{taloni2013}, and Tree {\em et al.} \cite{tree2014}.
For polymers that are too short to exhibit the asymptotic relation $R\propto L$ (see main text),
these scalings also describe the statistics in regimes Ia-c in Fig.~1 in the main text.
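As a numerical illustration of the regime-IIa entries in Table \ref{tab:1}, the sketch below evaluates the extension prediction $R/L = 0.9338\,(\ell_{\rm K} w/(D_{\rm H} D_{\rm W}))^{1/3}$ after a rough check of the regime conditions. The parameter values are illustrative assumptions only (roughly DNA-like), not taken from this supplement:

```python
def extension_IIa(lK, w, DH, DW):
    """R/L in the extended de Gennes regime (IIa); lengths in consistent units.
    The inequalities are order-of-magnitude regime conditions, enforced here
    only weakly as a sanity check."""
    assert lK < DH < lK**2/w, "need l_K << D_H << l_K^2/w"
    assert DW**2 < DH*lK**2/w, "need D_W^2 << D_H l_K^2/w"
    return 0.9338*(lK*w/(DH*DW))**(1.0/3.0)

# Illustrative (assumed) values in nm: l_K = 100, w = 5, D_H = 300, D_W = 400
ratio = extension_IIa(100.0, 5.0, 300.0, 400.0)
```

Consistent with the scaling, shrinking either channel dimension increases the relative extension $R/L$.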
\begin{turnpage}
\begin{table*}
\mbox{}\hspace*{-1cm}\begin{tabular}{llllllll}
\toprule
&& \multicolumn{3}{c}{Conditions} & \multicolumn{3}{c}{Statistics} \\
\cmidrule(lr){3-5}
\cmidrule(lr){6-8}
Regime & Name & $D_{\rm H}$ & $D_{\rm W}$ \footnote{We assume throughout that $D_{\rm W}\ge D_{\rm H}$.} & $L$ & $R/L$ & $\sigma_R^2/L$ & $F_{\rm c}/L$\\
\midrule
Ia & de Gennes & \small $\gg\ell_{\rm K}^2/w$ & --- & $\gg \Big(\frac{D_{\rm W}^4 D_{\rm H}} {\ell_{\rm K} w}\Big)^{\frac{1}{3}}$ & $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H} D_{\rm W}}\Big)^{\frac{1}{3}}$ \cite{benkova2015a} & $\approx \Big(\frac{\ell_{\rm K} w D_{\rm W}^2}{D_{\rm H}}\Big)^{\frac{1}{3}}$ \cite{werner2015}& $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H}^5}\Big)^{\frac{1}{3}}$ \cite{werner2015} \\
Ib & --- & $\ell_{\rm K}\ll D_{\rm H} \ll \frac{\ell_{\rm K}^2}{w}$ & $D_{\rm W}^2 \gg D_{\rm H} \ell_{\rm K}^2/w$ & $\gg \Big(\frac{D_{\rm W}^4 D_{\rm H}} {\ell_{\rm K} w}\Big)^{\frac{1}{3}}$ & $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H} D_{\rm W}}\Big)^{\frac{1}{3}}$ \cite{werner2015} & $\approx \Big(\frac{\ell_{\rm K} w D_{\rm W}^2}{D_{\rm H}}\Big)^{\frac{1}{3}}$ \cite{werner2015}& $=\frac{\pi^2}{6} \ell_{\rm K}D_{\rm H}^{-2}$ \cite{casassa1967}\\
Ic & --- & $w\le D_{\rm H}\ll\ell_{\rm K}$ \footnote{\mbox{Tree et al. \cite{tree2014} have shown in simulations that for polymers confined in slits, the non-crossing regime $D_{\rm H}<w$} \mbox{ is approached smoothly as $D_{\rm H}\to w$ from above, and we expect the same to be valid in channels.}} & $D_{\rm W}^2 \gg D_{\rm H} \ell_{\rm K}^2/w$ & $\gg \Big(\frac{D_{\rm W}^4 D_{\rm H}} {\ell_{\rm K} w}\Big)^{\frac{1}{3}}$ & $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H} D_{\rm W}}\Big)^{\frac{1}{3}}$ \cite{werner2015}& $\approx \Big(\frac{\ell_{\rm K} w D_{\rm W}^2}{D_{\rm H}}\Big)^{\frac{1}{3}}$ \cite{werner2015}& $= 1.1032(1) \ell_{\rm P}^{-\frac{1}{3}} D_{\rm H}^{-\frac{2}{3}}$ \cite{chen2013}\footnote{\mbox{Whereas predictions involving $\ell_{\rm P}$ are specific to the worm-like chain, those expressed in terms of $\ell_{\rm K}$ are valid for general} \mbox{ polymer models, if the effective width $w$ is replaced by $v/\ell_{\rm K}^2$, where $v$ is the excluded volume of a Kuhn length segment.}} \\
IIa & extended de Gennes & $\ell_{\rm K}\ll D_{\rm H} \ll \frac{\ell_{\rm K}^2}{w}$ & $D_{\rm W}^2 \ll D_{\rm H} \ell_{\rm K}^2/w$ & $\gg\Big(\frac{D_{\rm H}^2D_{\rm W}^2\ell_{\rm K}}{w^2}\Big)^{\frac{1}{3}}$ & $= 0.9338(84)\Big(\frac{\ell_{\rm K} w} {D_{\rm H} D_{\rm W}}\Big)^\frac{1}{3}$ \cite{werner2014} & $=0.13(1) \ell_{\rm K} $ \cite{werner2014} & $=\frac{\pi^2}{6} \ell_{\rm K}(D_{\rm H}^{-2} + D_{\rm W}^{-2})$ \cite{casassa1967} \\
IIb & --- & $w\ll D_{\rm H}\ll\ell_{\rm K}$ & $\ell_{\rm K}^2\ll D_{\rm W}^2 \ll \frac{D_{\rm H} \ell_{\rm K}^2}{w}$ & $\gg\Big(\frac{D_{\rm H}^2D_{\rm W}^2\ell_{\rm K}}{w^2}\Big)^{\frac{1}{3}}$ & $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H} D_{\rm W}}\Big)^{\frac{1}{3}}$ \cite{odijk2008} & $ = 0.20(2) \ell_{\rm K} $ \cite{werner2014}\footnote{The step variance $\sigma_0^2$ \cite{werner2014} obeys $\sigma_0^2= \ell_{\rm K}^2/2$ in this regime.} & $= 1.1032(1) \ell_{\rm P}^{-\frac{1}{3}} D_{\rm H}^{-\frac{2}{3}}$ \cite{chen2013} \\
IIIa & Odijk & $D_{\rm H} \ll \ell_{\rm P}$ & $D_{\rm W} \ll \ell_{\rm P}$ & $ \gg \ell_{\rm P}^{\frac{1}{3}}D_{\rm W}^{\frac{2}{3}}$ & $= 1\! - 0.09137(7) \frac{D_{\rm H}^{\frac{2}{3}} + D_{\rm W}^{\frac{2}{3}} }{\ell_{\rm P}^{\frac{2}{3}}}\! $ \cite{burkhardt2010} & $= 0.00478(10) \frac{D_{\rm H}^2 + D_{\rm W}^2}{\ell_{\rm P}}\! $ \cite{burkhardt2010} & $= 1.1032(1) \ell_{\rm P}^{-\frac{1}{3}} (D_{\rm H}^{-\frac{2}{3}}\! + D_{\rm W}^{-\frac{2}{3}})\! $ \cite{chen2013} \\
IIIb & backfolded Odijk & \multicolumn{2}{c}{$D_{\rm H}\le D_{\rm W}\lesssim \ell_{\rm P},\; \frac{g w}{\ell_{\rm P}^{\frac{1}{3}}D_{\rm W}^{\frac{2}{3}}D_{\rm H}}\ll 1$ \footnote{\mbox{The global persistence length $g$ is the orientational correlation length of the confined polymer, defined in Refs.~\cite{odijk2008,muralidhar2014a}. }}} & $\gg \Big(\frac{g^{\frac{3}{2}}\ell_{\rm P}D_{\rm H}^3D_{\rm W}^{2}}{w^3}\Big)^\frac{2}{9}$ & $\approx \Big(\frac{g w}{\ell_{\rm P}^{\frac{1}{3}}D_{\rm W}^{\frac{2}{3}}D_{\rm H}}\Big)^{\frac{1}{3}}$ \cite{odijk2008} & $\approx g$ \cite{muralidhar2014a} & $\approx \ell_{\rm P}^{-\frac{1}{3}} D_{\rm H}^{-\frac{2}{3}}$ \cite{muralidhar2014a} \\
\midrule
\multicolumn{2}{l}{Slit regimes} \\
\midrule
Sa & de Gennes & $\gg\ell_{\rm K}^2/w$ & --- & $\gg \Big(\frac{D_{\rm H}^5}{\ell_{\rm K} w}\Big)^{\frac{1}{3}}$ & $\approx \Big(\frac{\ell_{\rm K} w}{LD_{\rm H}}\Big)^\frac{1}{4}$ \cite{dai2012} & $\approx \Big(\frac{L\ell_{\rm K} w}{D_{\rm H}}\Big)^{\frac{1}{2}}$ \cite{taloni2013} & $\approx \Big(\frac{\ell_{\rm K} w}{D_{\rm H}^5}\Big)^{\frac{1}{3}}$ \cite{taloni2013} \\
Sb & extended de Gennes & $\ell_{\rm K}\ll D_{\rm H} \ll \frac{\ell_{\rm K}^2}{w}$ & --- & $\gg \frac{\ell_{\rm K} D_{\rm H}}{w}$ & $\approx \Big(\frac{\ell_{\rm K} w}{LD_{\rm H}}\Big)^\frac{1}{4}$ \cite{dai2012} & $\approx \Big(\frac{L\ell_{\rm K} w}{D_{\rm H}}\Big)^{\frac{1}{2}}$ \cite{taloni2013}& $=\frac{\pi^2}{6} \ell_{\rm K}D_{\rm H}^{-2}$ \cite{casassa1967} \\
Sc & Odijk-Flory & $w <D_{\rm H} \ll \ell_{\rm K}$ & --- & $\gg \frac{\ell_{\rm K} D_{\rm H}}{w}$ & $\approx \Big(\frac{\ell_{\rm K} w}{LD_{\rm H}}\Big)^\frac{1}{4}$ \cite{odijk2008} & $\approx \Big(\frac{L\ell_{\rm K} w}{D_{\rm H}}\Big)^{\frac{1}{2}}$ \cite{taloni2013} & $= 1.1032(1) \ell_{\rm P}^{-\frac{1}{3}} D_{\rm H}^{-\frac{2}{3}}$ \cite{chen2013}\\
\bottomrule
\end{tabular}
\caption{\label{tab:1} Summary of scaling regimes for a semi-flexible polymer confined to a rectangular channel.}
\vspace*{-.7cm}
\end{table*}
\end{turnpage}
\section{Introduction}
On July 4, 2012, the ATLAS and CMS collaborations at the CERN Large
Hadron Collider (LHC) announced the discovery of a new particle with
mass near $126$ GeV~\cite{Aad:2012tfa,Chatrchyan:2012ufa}. The
initial discovery and subsequent measurements indicate that this new
particle looks very much like the long-anticipated Higgs
boson~\cite{Aad:2013wqa,Aad:2013xqa,Chatrchyan:2012jja,Chatrchyan:2013lba}.
It is of the first importance to determine if this discovery is indeed
the Higgs boson of the Standard Model, a component of a more
complicated symmetry-breaking structure, or a closely-related
impostor, such as the radion of a warped extra-dimensional model.
Such a determination can only come by making improved measurements of
the particle's properties and couplings to other particles.
One important observable that can help to establish the particle's
identity is the production rate. Unfortunately, the dominant
production mechanism for the Standard Model Higgs Boson, gluon fusion,
has a large theoretical uncertainty, of order $15\%$, even though it
has been computed to next-to-next-to-leading order in $\as$. This
theoretical uncertainty receives two, roughly equal contributions: the
scale uncertainty in the partonic cross section and the uncertainty in
the values of the parton distributions. The determination of parton
distributions will improve with further experimentation, but are
unlikely to be dramatically reduced. The uncertainty in the partonic
cross-section, however, can be addressed by calculating ever-higher
orders in the expansion in $\as$.
The {NNLO}\ calculation was completed in
2002~\cite{Harlander:2002wh,Anastasiou:2002yz,Ravindran:2003um} and is
now a mature result. It is therefore time to address the extension to
next-to-next-to-next-to-leading order ({N${}^3$LO}). Indeed, the process
has already started: The purely virtual corrections, i.e.\ the three-loop
corrections to $gg\to H$, were
computed~\cite{Lee:2010cga,Baikov:2009bg,Gehrmann:2010ue,Gehrmann:2010tu}
a couple of years ago; last year, the convolutions of {NNLO}\ and
lower-order cross sections with the DGLAP splitting
functions~\cite{Hoschele:2012xc} were computed; and earlier this
year~\cite{Anastasiou:2013srw}, Anastasiou and collaborators reported
results for the first few terms in the threshold expansion of the
triple-real radiation contributions. In this paper, I will present
the contributions from one-loop single-real-emission amplitudes. Like
Ref.~\cite{Anastasiou:2013srw}, I too compute some of the terms which
appear by means of a threshold expansion. However, by extending the
techniques established in
Refs.~\cite{Harlander:2002wh,Harlander:2002vv,Harlander:2003ai}, I am
able to map the expansion onto a set of basis functions consisting of
harmonic polylogarithms. I am therefore able to report the complete
result as a Laurent series in the dimensional regularization parameter
($D=4-2{\varepsilon}$) through order ${\varepsilon}^{(1)}$. The results for the
contributions at {N${}^3$LO}\ were recently computed, using very different
methods, in Ref.~\cite{Anastasiou:2013mca}. After careful comparison,
we find that our results for the contribution to the inclusive cross
section agree completely.
The plan of this paper is as follows: In \Sec{sec:setup}, I will
describe the setup of the calculation: the structure of {N${}^3$LO}\
calculations; the effective Lagrangian and the resulting tree-level
and one-loop amplitudes; the calculation of loop master integrals and
renormalization. In \Sec{sec:math} I will review the mathematical
structure of the functions I will be working with, namely harmonic
polylogarithms, (multiple) $\zeta$-functions and functions of uniform
transcendentality. In \Sec{sec:methods}, I discuss the squaring of
the amplitudes and integration over phase space. In particular, I
discuss the methods used to reduce and perform the phase space
integrals. In \Sec{sec:results} I present results for the reduction
of phase space integrals to a set of master integrals and I present
the results for the partonic cross sections. I present the {NLO}\ and
{NNLO}\ cross sections in closed form. The expressions for the {N${}^3$LO}\
partonic cross sections are very lengthy, so I present only the first
few terms in the threshold expansion. The complete result (as a
Laurent expansion through order ${\varepsilon}^{(1)}$) in terms of harmonic
polylogarithms is given in the supplementary material attached to this
article. Finally, in \Sec{sec:conclude}, I present my
conclusions.
\section{Setup of the Calculation}
\label{sec:setup}
\subsection{The Structure of {N${}^3$LO}\ Calculations}
\label{sec:nnnlo}
A perturbative calculation at {N${}^3$LO}\ contains many pieces. It
contains virtual corrections ($gg\to H$) through three loops,
single-real-emission corrections through two loops,
double-real-emission corrections through one loop and
triple-real-emission at tree-level. Each contribution lives in its
own phase space and must be computed separately from the others. The
triple-real terms are computed from the squares of the tree-level
matrix elements. The double-real terms come from the squares of the
tree-level amplitudes and the interference of the tree-level amplitudes
with the one-loop amplitudes. The single-real emission terms contain
the square of the tree-level amplitudes, the interference of
tree-level with one-loop amplitudes, the square of the one-loop
amplitudes and the interference of tree-level with the two-loop
amplitudes. The purely virtual terms contain the squares of the
tree-level and one-loop amplitudes, the interference of the tree-level
amplitudes with one-, two- and three-loop amplitudes, and the
interference of one- and two-loop amplitudes. In this paper, I will
focus on single-real emission corrections and restrict myself to terms
involving the one-loop amplitudes. The contributions from the
interference of tree-level with two-loop amplitudes are left to future
consideration.
\subsection{The Effective Lagrangian}
\label{sec:ELag}
In the Standard Model, elementary particles obtain mass through their
couplings to the Higgs field. Massless particles, like gluons and
photons, do not couple directly to the Higgs fields. Instead, they
couple indirectly through massive particle loops. In the limit that
all quarks except the top are massless, gluons couple to the Higgs
through top loops as shown in \fig{fig::toploops}, while photons
couple through both top and $W$ boson loops. The light quarks
(treated as massless) couple to the Higgs fields through gluons and
photons feeding into massive particle loops.
\begin{figure*}[ht]
\includegraphics[height=4.cm]{TopVert.png}
\caption{\label{fig::toploops}Top loop diagrams
coupling gluons to the Higgs boson}
\end{figure*}
Since the $126$ GeV Higgs boson mass is far below the top threshold
($M_H \ll 2M_t$), one can integrate out the top quark and compute
amplitudes involving the Higgs field using \qcd\ with five active
flavors and the following effective
Lagrangian~\cite{Shifman:1979eb,Voloshin:1986tc,Vainshtein:1980ea} for
the Higgs-gluon interaction:
\begin{equation}
\begin{split}
{\cal L}_{\rm eff}
&= -\frac{H}{4v} C_1^{\rm B}(\alpha_s)\,{\cal O}_1^{\rm B}
= -\frac{H}{4v} C_1(\alpha_s)\,{\cal O}_1\,,\qquad
{\cal O}_1 = G^a_{\mu\nu} G^{a\,\mu\nu}\,,
\label{eqn::leff}
\end{split}
\end{equation}
where $v$ is the vacuum expectation value of the Higgs field $H$
($v\sim246\,$ GeV), $G^a_{\mu\nu}$ is the gluon field strength tensor
and the ${}^{\rm B}$ superscripts represent bare quantities. In the
approximation that all light flavors are massless, this effective
Lagrangian is renormalization group invariant, but the coefficient
function $C_1^{\rm B}(\alpha_s)$ and the operator ${\cal O}_1^{\rm B}$
must each be renormalized. Using this effective Lagrangian, the top
quark loops of \fig{fig::toploops} are replaced by the effective
vertices shown in \fig{fig::effvert}.
\begin{figure*}[ht]
\includegraphics[height=4.cm]{EffVert.png}
\caption[]{\label{fig::effvert}Effective vertices coupling gluons to
the Higgs boson}
\end{figure*}
The finite top mass corrections to the {NNLO}\ result using this
effective Lagrangian are found to be very small for a Higgs mass near
$126$ GeV~\cite{Pak:2009dg,Harlander:2009my}.
The coefficient function $C_1(\alpha_s)$ contains the residual
logarithmic dependence on the top quark mass and has been computed up
to $\order{\alpha_s^4}$
\cite{Chetyrkin:1998un,Chetyrkin:1997iv,Schroder:2005hy,Chetyrkin:2005ia},
though for this calculation, one needs it only up to
$\order{\alpha_s^3}$~\cite{Chetyrkin:1998un,Chetyrkin:1997iv,Kramer:1998iq}.
In the modified minimal subtraction scheme ($\msbar$), the
renormalized coefficient function is:
\begin{widetext}
\begin{equation}
\begin{split}
C_1(\alpha_s) &= -\frac{1}{3}\aspi\left\{ 1 + \frac{11}{4}\aspi +
\aspi^2\left[\frac{2777}{288} + \frac{19}{16}\logmut{}
+ N_f\left(-\frac{67}{96} +
\frac{1}{3}\logmut{}\right)\right]\right.\\
&\hskip 20pt
+ \aspi^3\left[-\frac{2761331}{41472} + \frac{897943}{9216}\zeta(3)
+ \frac{2417}{288}\logmut{}+\frac{209}{64}\logmut{2}\right.\\
&\hskip 50pt
+ N_f\left(\frac{58723}{20736} - \frac{110779}{13824}\zeta(3)
+ \frac{91}{54}\logmut{}
+ \frac{23}{32}\logmut{2}\right)\\
&\hskip 50pt\left.\left.
+ N_f^2\left(-\frac{6865}{31104} + \frac{77}{1728}\logmut{}
- \frac{1}{18}\logmut{2}\right)\right]+
\ldots\right\}\,,
\label{eqn::c1}
\end{split}
\end{equation}
\end{widetext}
where $\logmut{} = \ln(\mu^2/M_t^2)$, $\mu$ is the renormalization
scale and $M_t$ the on-shell top quark mass. $\alpha_s \equiv
\alpha_s^{(5)}(\mu^2)$ is the $\msbar$ renormalized \qcd\ coupling
constant for five active flavors, and $N_f=5$ is the number of
massless flavors.
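For orientation, the truncated series above is simple to evaluate numerically. The following sketch is my own illustration, not part of the calculation; the input value $\alpha_s = 0.118$ is an assumed, typical choice, and the default $L=0$ corresponds to $\mu = M_t$.

```python
import math

def c1_truncated(a, L=0.0, nf=5):
    """Truncated Wilson coefficient C_1(alpha_s),
    with a = alpha_s/pi and L = ln(mu^2/M_t^2)."""
    z3 = 1.2020569031595943  # zeta(3)
    c2 = 11.0 / 4.0
    c3 = 2777.0/288.0 + 19.0/16.0*L + nf*(-67.0/96.0 + L/3.0)
    c4 = (-2761331.0/41472.0 + 897943.0/9216.0*z3
          + 2417.0/288.0*L + 209.0/64.0*L**2
          + nf*(58723.0/20736.0 - 110779.0/13824.0*z3
                + 91.0/54.0*L + 23.0/32.0*L**2)
          + nf**2*(-6865.0/31104.0 + 77.0/1728.0*L - L**2/18.0))
    return -(a/3.0) * (1.0 + c2*a + c3*a**2 + c4*a**3)

a = 0.118 / math.pi          # illustrative alpha_s; assumed input
print(c1_truncated(a))       # small negative number, close to -a/3
```

With these inputs the successive terms shift $C_1$ at roughly the ten-percent, percent and sub-percent level, illustrating the good convergence of the matching coefficient.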
\subsection{$H\,g\,g\,g$ Amplitudes}
\label{sec:MEs}
\label{sec::HgggAmps}
The $H\,g\,g\,g$ amplitude can be written at any loop order in terms of
four linearly independent gauge invariant
tensors~\cite{Ellis:1988xu,Gehrmann:2011aa},
\begin{equation}
\begin{split}
\label{eq::EHSvdBa}
{\cal M}(H;1,2,3) &= \frac{g}{v}\,C_1(\alpha_s)\,f^{ijk}
{\epsilon}^i_{1\mu}{\epsilon}^j_{2\nu}{\epsilon}^k_{3\rho}\sum_{n=0}^3A_n\,
{\cal Y}_n^{\mu\nu\rho}\,,
\end{split}
\end{equation}
where $g$ is the \qcd\ coupling and $f^{ijk}$ are the structure
constants of $SU(N_c)$. I adopt the following tensor definitions:
\begin{widetext}
\begin{equation}
\begin{split}
\label{eq::Tensors}
{\cal Y}_0^{\mu\nu\rho} =\ &
\left(p_1^\nu g^{\rho\mu} - p_1^\rho g^{\mu\nu}\right)\,\frac{s_{23}}{2}
+ \left(p_2^\rho g^{\mu\nu} - p_2^\mu g^{\nu\rho}\right)\,\frac{s_{31}}{2}
+ \left(p_3^\mu g^{\nu\rho} - p_3^\nu g^{\rho\mu}\right)\,\frac{s_{12}}{2}
+ p_2^\mu p_3^\nu p_1^\rho - p_3^\mu p_1^\nu p_2^\rho\,,\\
{\cal Y}_1^{\mu\nu\rho} =\ &
p_2^\mu p_1^\nu p_1^\rho
- p_2^\mu p_1^\nu p_2^\rho\,\frac{s_{31}}{s_{23}}
- \frac{1}{2}p_1^\rho g^{\mu\nu}\,s_{12}
+ \frac{1}{2}p_2^\rho g^{\mu\nu}\frac{s_{31}\,s_{12}}{s_{23}}\,,\\
{\cal Y}_2^{\mu\nu\rho} =\ &
p_3^\mu p_3^\nu p_1^\rho
- p_3^\mu p_1^\nu p_1^\rho\,\frac{s_{23}}{s_{12}}
- \frac{1}{2}p_3^\nu g^{\mu\rho}\,s_{31}
+ \frac{1}{2}p_1^\nu g^{\mu\rho}\frac{s_{23}\,s_{31}}{s_{12}}\,,\\
{\cal Y}_3^{\mu\nu\rho} =\ &
p_2^\mu p_3^\nu p_2^\rho
- p_3^\mu p_3^\nu p_2^\rho\,\frac{s_{12}}{s_{31}}
- \frac{1}{2}p_2^\mu g^{\nu\rho}\,s_{23}
+ \frac{1}{2}p_3^\mu g^{\nu\rho}\frac{s_{12}\,s_{23}}{s_{31}}\,,\\
\end{split}
\end{equation}
\end{widetext}
The momenta are specified as if the process were $
H\,g_1\,g_2\,g_3\to\emptyset$. Momentum conservation thus demands
that $p_H+p_1+p_2+p_3= 0$.
From these tensors, I can construct projectors to map the amplitudes
onto their tensor coefficients.
\begin{equation}
\begin{split}
{\cal P}_{{\cal Y}_{0}} =&
\frac{D}{D-3}\frac{{\cal Y}_{0}}{s_{12}\,s_{23}\,s_{31}}
- \frac{D-2}{D-3}\left(
\frac{{\cal Y}_{1}}{s_{31}\,s_{12}^{2}}
+ \frac{{\cal Y}_{2}}{s_{23}\,s_{31}^{2}}
+ \frac{{\cal Y}_{3}}{s_{12}\,s_{23}^{2}}
\right)\,,\\
{\cal P}_{{\cal Y}_{1}} =&
\frac{D}{D-3}\frac{s_{23}\,{\cal Y}_{1}}{s_{31}\,s_{12}^{3}}
- \frac{D-2}{D-3}\frac{{\cal Y}_{0}}{s_{31}\,s_{12}^{2}}
+ \frac{D-4}{D-3}\left(
\frac{{\cal Y}_{2}}{s_{12}\,s_{31}^{2}}
+ \frac{{\cal Y}_{3}}{s_{23}\,s_{12}^{2}}
\right)\,,\\
{\cal P}_{{\cal Y}_{2}} =&
\frac{D}{D-3}\frac{s_{12}\,{\cal Y}_{2}}{s_{23}\,s_{31}^{3}}
- \frac{D-2}{D-3}\frac{{\cal Y}_{0}}{s_{23}\,s_{31}^{2}}
+ \frac{D-4}{D-3}\left(
\frac{{\cal Y}_{3}}{s_{31}\,s_{23}^{2}}
+ \frac{{\cal Y}_{1}}{s_{12}\,s_{31}^{2}}
\right)\,,\\
{\cal P}_{{\cal Y}_{3}} =&
\frac{D}{D-3}\frac{s_{31}\,{\cal Y}_{3}}{s_{12}\,s_{23}^{3}}
- \frac{D-2}{D-3}\frac{{\cal Y}_{0}}{s_{12}\,s_{23}^{2}}
+ \frac{D-4}{D-3}\left(
\frac{{\cal Y}_{1}}{s_{23}\,s_{12}^{2}}
+ \frac{{\cal Y}_{2}}{s_{31}\,s_{23}^{2}}
\right)\,,
\end{split}
\end{equation}
where $D=4-2{\varepsilon}$ is the dimensionality of space-time.
Each tensor coefficient has an expansion in $\alpha_s$ of the form:
\begin{equation}
A_i = A_i^{(0)} + \aspi\,A_i^{(1)} + \aspi^2\,A_i^{(2)} + \ldots\,.
\end{equation}
I have computed the amplitudes in the following manner: the Feynman
diagrams were generated using {\abbrev QGRAF}~\cite{Nogueira:1993ex}; they were
contracted with the projectors onto the gauge-invariant tensors and
the Feynman rules were implemented using a
{\abbrev FORM}~\cite{Vermaseren:2000nd} program. For the one-loop amplitudes,
the resulting expressions were reduced to loop master integrals with
the program {\abbrev REDUZE2}~\cite{vonManteuffel:2012np}. The reduced
expressions were put back into the {\abbrev FORM}\ program and the master
integrals were evaluated to produce the final expressions.
At tree level, there are only four Feynman diagrams, and I find the
tree-level tensor coefficients to be
\begin{equation}
\begin{split}
\label{eq::Ctrees}
A_0^{(0)} =& -\frac{2}{s_{12}} -\frac{2}{s_{23}} -\frac{2}{s_{31}},\qquad
A_1^{(0)} = -\frac{2}{s_{31}},\qquad
A_2^{(0)} = -\frac{2}{s_{23}},\qquad
A_3^{(0)} = -\frac{2}{s_{12}}.
\end{split}
\end{equation}
There are only two master integrals involved in the one-loop
amplitude, the one-loop bubble, and the one-loop box with a single
massive external leg (see \fig{fig:masters}).
\begin{figure}[ht]
a) \raise30pt\hbox{\includegraphics[width=4.cm]{BubMaster.png}}\hskip 90pt
b) \includegraphics[width=4.cm]{BoxMaster.png}
\caption[]{\label{fig:masters}One-loop master integrals: a) The bubble
diagram: ${\cal I}^{1}_{2}(Q^2)$, b) the box diagram with one massive leg:
${\cal I}^{1}_{4}(s_{12},s_{23};M_H^2)$.}
\end{figure}
I find the one-loop tensor coefficients to be
\begin{equation}
\begin{split}
A_0^{(1)} = 4\,i\,\pi^2\,C_A&\left(a_{0,M}(s_{12},s_{23},s_{31})
\,{\cal I}^{(1)}_2(M_H^2)
+ a_{0,s}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_2(s_{12})
+ a_{0,s}(s_{23},s_{31},s_{12})\,{\cal I}^{(1)}_2(s_{23})\right.\\
&\left.
+ a_{0,s}(s_{31},s_{12},s_{23})\,{\cal I}^{(1)}_2(s_{31})
+ \alpha_0(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_4(s_{12},s_{23};M_H^2)\right.\\
&\left.
+ \alpha_0(s_{23},s_{31},s_{12})\,{\cal I}^{(1)}_4(s_{23},s_{31};M_H^2)
+ \alpha_0(s_{31},s_{12},s_{23})\,{\cal I}^{(1)}_4(s_{31},s_{12};M_H^2)\right)\,,\\
A_1^{(1)} = 4\,i\,\pi^2\,C_A&\left(a_{1,M}(s_{12},s_{23},s_{31})
\,{\cal I}^{(1)}_2(M_H^2)
+ a_{1,s_{12}}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_2(s_{12})
+ a_{1,s_{23}}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_2(s_{23})\right.\\
&\left.
+ a_{1,s_{31}}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_2(s_{31})
+ \alpha_{1,2}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_4(s_{12},s_{23};M_H^2)\right.\\
&\left.
+ \alpha_{1,3}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_4(s_{23},s_{31};M_H^2)
+ \alpha_{1,1}(s_{12},s_{23},s_{31})\,{\cal I}^{(1)}_4(s_{31},s_{12};M_H^2)\right)\,,
\end{split}
\end{equation}
where $C_A=N_c$ is the Casimir operator for the adjoint representation
and
\begin{equation}
\begin{split}
a_{0,M}(s_{12},s_{23},s_{31}) &= \left[-(D-2)\,M_H^2\,\frac{1}{s_{12}^2}
+(D-4)\,\frac{s_{12}}{s_{23}\,s_{31}}
+\left(4\,\frac{D-4}{D-2}+12\,\frac{D-3}{D-4}\right)
\frac{1}{s_{12}}\right.\\
&\left.\hskip-50pt
- 2\,\frac{(D-6)(D-4)}{D-2}\frac{1}{s_{23}+s_{31}}
+ 4\,\frac{D-4}{D-2}\frac{s_{12}}{(s_{23}+s_{31})^2}\right]
+\left[\cdots\vp{\frac{1}{s_{12}}}
\right]_{s_{12}\to s_{23}\to s_{31}\to s_{12}}
+\left[\cdots\vp{\frac{1}{s_{12}}}
\right]_{s_{12}\to s_{31}\to s_{23}\to s_{12}}\,,\\
a_{0,s}(s_{12},s_{23},s_{31}) &= (D-2)\,M_H^2\,\left(
\frac{1}{s_{23}^2}+\frac{1}{s_{31}^2}\right)
-(D-4)\,\frac{s_{12}}{s_{23}\,s_{31}}
- \frac{(D-2)^2}{D-4}\,\left(\frac{1}{s_{23}}
+\frac{1}{s_{31}}\right)\\
& - 4\,\frac{(D-3)}{D-4}\frac{1}{s_{12}}
- 4\,\frac{(D-4)}{D-2}\frac{M_H^2}{(s_{23}+s_{31})^2}\,,\\
\alpha_{0}(s_{12},s_{23},s_{31}) &=
\frac{1}{4}\frac{(D-2)(D-4)}{D-3}\frac{s_{12}\,s_{23}\,M_H^2}{s_{31}^2}
- \frac{s_{12}\,s_{23} + s_{23}\,s_{31} + s_{31}\,s_{12}}{s_{31}}\,,
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
a_{1,M}(s_{12},s_{23},s_{31}) &= D\,\frac{s_{23}\,M_H^2}{s_{12}^3}
-\frac{(D-4)(3\,D-16)}{D-2}\frac{1}{s_{31}}
+ (D-4)\frac{(s_{23}+s_{31})^2\,M_H^2}{s_{12}\,s_{23}\,s_{31}^2}
\\
& - \frac{D\,(D-4)}{D-2}\frac{M_H^4+s_{12}^2}{s_{12}^2\,s_{31}}
- 2\,\frac{(D-4)^2}{D-2}\frac{1}{s_{12}+s_{23}}
+ 4\,\frac{D-4}{D-2}\frac{M_H^2}{(s_{12}+s_{23})^2}\\
& - 2\,\frac{(D-4)^2}{D-2}\frac{s_{23}}{s_{31}\,(s_{12}+s_{31})}
+ 4\,\frac{D-4}{D-2}\frac{s_{23}\,M_H^2}{s_{31}\,(s_{12}+s_{31})^2}
+ 12\,\frac{D-3}{D-4}\frac{1}{s_{31}}\,,\\
a_{1,s_{12}}(s_{12},s_{23},s_{31}) &=-(D-4)\frac{M_H^2}{s_{12}\,s_{23}}
\left(1 + \frac{s_{23}^2}{s_{31}^2}\right)
+ \frac{(D-4)^2}{D-2}\frac{M_H^2}{s_{12}\,s_{31}}\\
&- \frac{(D-4)(D-2-2\,N_f/C_A)}{(D-1)(D-2)}\frac{s_{23}}{s_{12}^2}
- 4\,\frac{D-3}{D-4}\frac{1}{s_{31}}\,,\\
a_{1,s_{23}}(s_{12},s_{23},s_{31}) &=
- (D-4)\frac{s_{23}\,M_H^2}{s_{12}\,s_{31}^2}
- D\,\frac{s_{23}\,M_H^2}{s_{12}^3}
+ \frac{D\,(D-4)}{D-2}\frac{s_{23}\,M_H^2}{s_{12}^2\,s_{31}}\\
& - 4\,\frac{D-4}{D-2}\frac{s_{23}\,M_H^2}{s_{31}(s_{12}+s_{31})^2}
- 4\,\frac{D-3}{D-4}\frac{1}{s_{31}}\,,\\
a_{1,s_{31}}(s_{12},s_{23},s_{31}) &=
- (D-4)\,\frac{M_H^2}{s_{12}\,s_{23}}
- D\,\frac{s_{23}\,M_H^2}{s_{12}^3}
+ \frac{D\,(D-4)}{D-2}\frac{M_H^2}{s_{12}^2}\\
&
- 4\,\frac{D-4}{D-2}\frac{M_H^2}{(s_{12}+s_{23})^2}
- 4\frac{D-3}{D-4}\frac{1}{s_{31}}\,,\\
\alpha_{1,1}(s_{12},s_{23},s_{31}) &= s_{12}
+ \frac{1}{4}\frac{(D-4)^2}{D-3}
\frac{s_{31}\,M_H^2}{s_{23}}\,,\\
\alpha_{1,2}(s_{12},s_{23},s_{31}) &=
\frac{s_{12}\,s_{23}}{s_{31}}
+ \frac{1}{4}\frac{(D-4)^2}{D-3}
\frac{s_{23}^2\,M_H^2}{s_{31}^2}\,,\\
\alpha_{1,3}(s_{12},s_{23},s_{31}) &=
s_{23} + \frac{1}{4}\frac{D\,(D-4)}{D-3}
\frac{s_{31}\,s_{23}^2\,M_H^2}{s_{12}^3}\,.
\end{split}
\end{equation}
The tensor coefficients $A_2^{(1)}$ and $A_3^{(1)}$ are given by
permutations of the invariants in $A_1^{(1)}$:
\begin{equation}
\begin{split}
A_2^{(1)} &= \left.A_1^{(1)}
\right|_{s_{12}\to s_{31}\to s_{23}\to s_{12}}\,,
\hskip 40pt
A_3^{(1)} = \left.A_1^{(1)}
\right|_{s_{12}\to s_{23}\to s_{31}\to s_{12}}\,.
\end{split}
\end{equation}
\subsection{$H\,q\,\overline{q}\,g$ Amplitudes}
The $H\,q\,\overline{q}\,g$ amplitudes can be written in terms of only
two gauge invariant tensor structures,
\begin{equation}
\begin{split}
\label{eq::Mgqq}
{\cal M}(H;g,q, \overline{q}) =\ &
i\frac{g}{v}\,C_1(\alpha_s)\left( T^g\right)^{\bar\imath}_j{\epsilon}_\mu(p_g)
\left( B_1\,{\cal X}^\mu_1 + B_2\,{\cal X}^\mu_2\right)\,,
\end{split}
\end{equation}
where $ T^g$ is a generator of the fundamental
representation of $SU(N_c)$ and the tensors are given by~\cite{Gehrmann:2011aa}
\begin{equation}
\begin{split}
{\cal X}_{1}^{\mu} &= p_{\overline{q}}^{\mu}
\overline{u}(p_q)\,\slashed{p}_g\,v(p_{\overline{q}})
- \frac{s_{\overline{q}g}}{2}
\overline{u}(p_q)\,\gamma^\mu\,v(p_{\overline{q}})\,,\\
{\cal X}_{2}^{\mu} &= p_{q}^{\mu}
\overline{u}(p_q)\,\slashed{p}_g\,v(p_{\overline{q}})
- \frac{s_{gq}}{2}
\overline{u}(p_q)\,\gamma^\mu\,v(p_{\overline{q}})\,,
\end{split}
\end{equation}
and the projectors onto these tensors are
\begin{equation}
\begin{split}
{\cal P}_{{\cal X}_{1}} =& \frac{D-2}{D-3}
\frac{{\cal X}_{1}^{\dagger}}{2\,s_{q\overline{q}}\,s_{\overline{q}g}^{2}}
- \frac{D-4}{D-3}\frac{{\cal X}_{2}^{\dagger}}
{2\,s_{q\overline{q}}\,s_{gq}\,s_{\overline{q}g}}\,,\\
{\cal P}_{{\cal X}_{2}} =& \frac{D-2}{D-3}
\frac{{\cal X}_{2}^{\dagger}}{2\,s_{q\overline{q}}\,s_{gq}^{2}}
- \frac{D-4}{D-3}\frac{{\cal X}_{1}^{\dagger}}
{2\,s_{q\overline{q}}\,s_{gq}\,s_{\overline{q}g}}\,.\\
\end{split}
\end{equation}
These tensor coefficients also have expansions in $\alpha_s$:
\begin{equation}
B_i = B_i^{(0)} + \aspi\,B_i^{(1)} + \aspi^2\,B_i^{(2)} + \ldots\,.
\end{equation}
The calculation proceeds through the same chain of {\abbrev QGRAF}, {\abbrev FORM}, and
{\abbrev REDUZE2}\ programs as before. The tree-level coefficients are:
\begin{equation}
B_1^{(0)} = B_2^{(0)} = \frac{1}{s_{q\overline{q}}}\,,
\end{equation}
and the one-loop coefficients $B_i^{(1)}$ involve the same set of
master integrals as the $A_i^{(1)}$:
\begin{equation}
\begin{split}
B_1^{(1)} = -4\,i\,\pi^2\,C_A&\left(b_{1,M}(s_{q\overline{q}},
s_{gq},s_{\overline{q}g})\,{\cal I}^{(1)}_2(M_H^2)
+ b_{1,s_{q\overline{q}}}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_2(s_{q\overline{q}})
+ b_{1,s_{gq}}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_2(s_{gq})\right.\\
&\left.
+ b_{1,s_{\overline{q}g}}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_2(s_{\overline{q}g})
+ \beta_{1,\overline{q}}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_4(s_{q\overline{q}},s_{gq};M_H^2)\right.\\
&\left.
+ \beta_{1,g}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_4(s_{gq},s_{\overline{q}g};M_H^2)
+ \beta_{1,q}(s_{q\overline{q}},s_{gq},
s_{\overline{q}g})\,{\cal I}^{(1)}_4(s_{\overline{q}g},s_{q\overline{q}};M_H^2)\right)\,,\\
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
b_{1,M}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
\frac{M_H^2}{s_{\overline{q}g}}\left(\frac{D-4}{s_{q\overline{q}}} + \frac{D-4}{s_{gq}}
- \frac{D-2}{s_{\overline{q}g}}\right) + \frac{2}{D-4}\Lx1 +
2(D-3)\frac{C_F}{C_A}\right)\frac{1}{s_{q\overline{q}}}\\
& - \frac{D^2-10\,D+20}{D-2}\frac{1}{s_{q\overline{q}}}
- \frac{(D-4)\,M_H^2}{s_{q\overline{q}}\,(s_{q\overline{q}}+s_{\overline{q}g})}
- \frac{(D-4)^2}{D-2}\frac{1}{s_{gq}+s_{\overline{q}g}}
+ 2\frac{D-4}{D-2}\frac{M_H^2}{(s_{gq}+s_{\overline{q}g})^2}\,,\\
b_{1,s_{q\overline{q}}}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
\frac{D-2}{2}\frac{M_H^2}{s_{\overline{q}g}^2}
- \frac{D-4}{2}\frac{M_H^2}{s_{gq}\,s_{\overline{q}g}}
- \frac{1}{D-4}\left(\frac{D^2-4\,D+12}{2} - \frac{C_F}{C_A}\left(
D^2-7\,D+16\right)\Rx\frac{1}{s_{q\overline{q}}}\\
& - \frac{1}{2}\frac{D-2}{D-1}\Lx1+2\frac{N_f}{C_A}\right)\frac{1}{s_{q\overline{q}}}
- 2\frac{D-4}{D-2}\frac{M_H^2}{(s_{gq}+s_{\overline{q}g})^2}\,,\\
b_{1,s_{gq}}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
- \frac{D-4}{2}\frac{M_H^2}{s_{q\overline{q}}}\left(
\frac{1}{s_{\overline{q}g}} - 2\frac{C_F}{C_A}\frac{1}{s_{q\overline{q}}+s_{\overline{q}g}}\right)
+ \frac{D-2}{2}\frac{M_H^2}{s_{\overline{q}g}^2}
+ 2\frac{D-3}{D-4}\Lx1-2\frac{C_F}{C_A}\right)\frac{1}{s_{q\overline{q}}}\,,\\
b_{1,s_{\overline{q}g}}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
- \frac{D-4}{2}\frac{1}{s_{\overline{q}g}}\left(
\Lx1-2\frac{C_F}{C_A}\right)\frac{M_H^2}{s_{q\overline{q}}}
+ \frac{s_{q\overline{q}}+s_{\overline{q}g}}{s_{gq}}
+ \frac{C_F}{C_A}\right)
+ 2\frac{D-3}{D-4}\Lx1-2\frac{C_F}{C_A}\right)\frac{1}{s_{q\overline{q}}}\,,
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\beta_{1,\overline{q}}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
- \frac{1}{2}\,s_{gq}
+ \frac{1}{8}\frac{(D-2)(D-4)}{D-3}
\frac{s_{q\overline{q}}\,s_{gq}\,M_H^2}{s_{\overline{q}g}^2}\,,\\
\beta_{1,g}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
\Lx1-2\frac{C_F}{C_A}\right)\frac{s_{gq}}{s_{q\overline{q}}}\left(\frac{1}{2}\,s_{\overline{q}g}
- \frac{1}{8}\frac{(D-4)^2}{D-3}\,M_H^2\right)\,,\\
\beta_{1,q}(s_{q\overline{q}},s_{gq},s_{\overline{q}g}) &=
- \frac{1}{2}\,s_{\overline{q}g}
- \frac{1}{8}\frac{(D-4)^2}{D-3}\frac{s_{q\overline{q}}\,M_H^2}{s_{gq}}\,.
\end{split}
\end{equation}
$C_F = (N_c^2-1)/(2\,N_c)$ is the Casimir operator of the fundamental
representation. The other tensor coefficient, $B_2^{(1)}$ is given by
\begin{equation}
B_2^{(1)} = \left.B_1^{(1)}\right|_{s_{\overline{q}g}\leftrightarrow s_{gq}}\,.
\end{equation}
These amplitudes describe all scattering configurations, $q\,\overline{q}\to
H\,g$, $g\,q\to H\,q$, etc., so long as the incoming and outgoing
momenta are correctly identified.
\subsection{Loop Master Integrals}
The loop master integrals that appear in these amplitudes are known in closed
form and the amplitudes are therefore known to all orders in the
dimensional regularization parameter ${\varepsilon}$. Working in the production
kinematics, where $s_{12} > 0$, $s_{23}, s_{31} < 0$
\begin{equation}
\begin{split}
\label{eqn:oneloopmasters}
{\cal I}^{(1)}_2(Q^2) &=
\frac{i\,{c_\Gamma}}{{\varepsilon}\,(1-2\,{\varepsilon})}\left(\frac{\mu^2}{-Q^2}\right)^{\varepsilon}\\
{\cal I}^{(1)}_4(s_{12},s_{23};M_H^2) &=
\frac{2\,i{c_\Gamma}}{s_{12}\,s_{23}}\frac{1}{{\varepsilon}^2}
\left[\left(\frac{\mu^2}{-s_{12}}\right)^{\varepsilon}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\sos{31}{23}}\right.\\
&\left.
+ \left(\frac{\mu^2}{-s_{23}}\right)^{\varepsilon}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\sos{31}{12}}
- \left(\frac{\mu^2}{-M_H^2}\right)^{\varepsilon}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{M_H^2\,s_{31}}{s_{12}\,s_{23}}}
\right]\\
{\cal I}^{(1)}_4(s_{23},s_{31};M_H^2) &=
\frac{2\,i\,{c_\Gamma}}{s_{23}\,s_{31}}\frac{1}{{\varepsilon}^2}
\left[\left(\frac{\mu^2}{-s_{12}}\right)^{-{\varepsilon}}
\left(\frac{\mu^2}{-s_{23}}\right)^{\varepsilon}\left(\frac{\mu^2}{-s_{31}}\right)^{\varepsilon}
\Gamma(1-{\varepsilon})\Gamma(1+{\varepsilon})\right.\\
&+ \left(\frac{\mu^2}{-s_{23}}\right)^{\varepsilon}\Lx1-
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\sos{31}{12}}\right)
+ \left(\frac{\mu^2}{-s_{31}}\right)^{\varepsilon}\Lx1-
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\sos{23}{12}}\right)\\
&\left.
- \left(\frac{\mu^2}{-M_H^2}\right)^{\varepsilon}\Lx1-
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{s_{23}\,s_{31}}{s_{12}\,M_H^2}}\right)
\right]\,,\\
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\label{eq::cgamma}{c_\Gamma} = \frac{\Gamma(1+{\varepsilon})\Gamma^2(1-{\varepsilon})}
{\Lx4\pi\right)^{2-{\varepsilon}}\Gamma(1-2{\varepsilon})}.
\end{split}
\end{equation}
The master integrals have been expressed in such a way that imaginary
parts come only from terms like
$\left(\frac{\mu^2}{-M_H^2}\right)^{\varepsilon}$, when the kinematic invariant
is positive. The correct analytic continuation of these terms is
given by the ``$i\epsilon$'' prescription of the Feynman propagator,
$-s_{ij} \to -(s_{ij}+i\epsilon)$. Note that the box integral takes a
different form when the incoming legs are adjacent to one another
(first form), so that one of the two-particle invariants entering the
diagram is time-like, than when they are not (second form), so that
both two-particle invariants are space-like.
The arguments of the hypergeometric functions have been arranged so
that the functions are real-valued and well-behaved throughout the
kinematic range. Logarithmic singularities in the hypergeometrics,
resulting from collinear emission, occur only at boundary points and
are integrable.
\subsection{Renormalization}
The renormalization of ultraviolet divergences is performed in the
$\msbar$ scheme. The bare \qcd\ coupling, $\asbare$, is replaced with
the renormalized coupling $\amsbar(\mu^2)$, evaluated at the
renormalization scale $\mu^2$:
\begin{equation}
\asbare = \left(\frac{\mu^2\,e^{\gamma_E}}{4\,\pi}\right)^{\varepsilon}
Z_{\amsbar}\,\amsbar(\mu^2)
\end{equation}
The structure of the renormalization constant $Z_{\amsbar}$ is
determined entirely by its lowest order ($1/{\varepsilon}$) poles, which in
turn define the \qcd\ $\beta$-function.
\begin{equation}
\begin{split}
\betabar{}(\amsbar) = \mu^2\frac{d}{d\,\mu^2}\frac{\amsbar}{\pi}
&= -{\varepsilon}\frac{\amsbar}{\pi}\Lx1 + \frac{\amsbar}{Z_{\amsbar}}
\frac{\partial Z_{\amsbar}}{\partial\amsbar}\right)^{-1}
= -{\varepsilon}\frac{\amsbar}{\pi} - \sum_{n=0}^\infty\,\betabar{n}\,\amsbarpi^{n+2}\,.
\label{eqn:cdrbetadef}
\end{split}
\end{equation}
With this normalization, the first two coefficients of the
$\beta$-function are:
\begin{equation}
\betabar{0} = \frac{11}{12}C_A - \frac{1}{6}N_f\,,\qquad\qquad
\betabar{1} = \frac{17}{24}C_A^2 - \frac{5}{24}C_A\,N_f
- \frac{1}{8}C_F\,N_f\,.
\end{equation}
The composite operator of the effective Lagrangian (\eqn{eqn::leff})
renormalizes as
\begin{equation}
{\cal O}_1^{\rm B} = Z_{{\cal O}_1}{\cal O}_1\,,
\end{equation}
where~\cite{Chetyrkin:1996ke}
\begin{equation}
Z_{{\cal O}_1} = \Lx1+\frac{\amsbar}{Z_{\amsbar}}
\frac{\partial Z_{\amsbar}}{\partial\amsbar}\right)
= \LB1 + \sum_{n=0}^\infty\,\betabar{n}\,\amsbarpi^{n+1}\right]^{-1}\,.
\end{equation}
The Wilson coefficient, $C_1$, renormalizes in the exact opposite
fashion as the operator ${\cal O}_1$,
\begin{equation}
C_1^{\rm B} = Z^{-1}_{{\cal O}_1}C_1\,.
\end{equation}
The value for $C_1$ given in \eqn{eqn::c1} is for the renormalized
Wilson coefficient.
\section{Mathematical Framework}
\label{sec:math}
Performing this calculation relies on the special properties of the
mathematical functions that appear in Feynman integrals. In
particular, I make use of the harmonic polylogarithms and the
(multiple) $\zeta$-functions. These functions are closely related, as
I shall briefly describe below.
\subsection{Harmonic Polylogarithms}
The results of the calculations presented in this paper are
conveniently expressed in terms of harmonic polylogarithms. The
mathematical properties of harmonic polylogarithms (HPL) have been
discussed extensively in the literature\
\cite{Remiddi:1999ew,Gehrmann:2000zt,Gehrmann:2001ck,Maitre:2005uu},
but I briefly review their definition and some important properties.
The standard harmonic polylogarithms are defined in terms of three
weight functions, $f_{+1}$, $f_0$, and $f_{-1}$:
\begin{equation}
f_{+1}(x) = \frac{1}{1-x}\,,\qquad f_0(x) = \frac{1}{x}\,,\qquad
f_{-1}(x) = \frac{1}{1+x}\,.
\end{equation}
The weight-one HPLs are defined by
\begin{equation}
H(0;x) = \ln\,x\,,\qquad H(\pm1;x) = \int_0^x\,dz\,f_{\pm1}(z)\,.
\end{equation}
Higher-weight HPLs are defined by iterated integrations against the
weight functions:
\begin{equation}
\label{eqn:HPLint}
H(w_n,w_{n-1},\ldots,w_0;x)
= \int_0^x\,dz\,f_{w_n}(z)\,H(w_{n-1},\ldots,w_0;z)\,.
\end{equation}
Clearly, the derivatives of HPLs involve the same weight functions,
\begin{equation}
\frac{d}{dx}H(w_n,w_{n-1},\ldots,w_0;x) =
f_{w_n}(x)\,H(w_{n-1},\ldots,w_0;x)\,.
\end{equation}
The HPLs include the classic polylogarithms, $\Li{n}(x)$ as special
cases. For example, $\Li{1}(x) = H(1;x)$, $\Li{2}(x) = H(0,1;x)$,
$\Li{3}(x) = H(0,0,1;x)$, etc.
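As an illustration (a numerical sketch in Python, not part of the calculation itself; the function names are mine), the iterated-integral definition can be checked against the dilogarithm series: evaluating $H(0,1;x)$ by quadrature, with the weight-one function $H(1;t)=-\ln(1-t)$ in closed form, reproduces $\Li{2}(x)$.

```python
import math

def H1(t):
    # weight-one HPL H(1;t) = -ln(1-t), known in closed form
    return -math.log(1.0 - t)

def H01(x, n=4000):
    # H(0,1;x) = int_0^x dt f_0(t) H(1;t), by composite Simpson quadrature
    h = x / n
    g = lambda t: 1.0 if t == 0.0 else H1(t) / t  # integrand -> 1 as t -> 0
    s = g(0.0) + g(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

def li2(x, terms=200):
    # classical dilogarithm series sum_k x^k / k^2
    return sum(x**k / k**2 for k in range(1, terms + 1))

print(H01(0.5), li2(0.5))  # the two evaluations agree: Li_2(x) = H(0,1;x)
```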
There is a commonly-used shorthand notation for the weight vector
$\vec{w}$: whenever a weight $0$ is to the left of a non-zero weight,
the zero is omitted and the non-zero weight is increased in magnitude
by $1$. So, $H(0,1;x)\to H(2;x)$, and $H(0,-1,0,0,1;x)\to
H(-2,3;x)$.
The HPLs are very versatile. For example, it is relatively simple to
transform the argument of the HPLs to, for example, relate a function
$H(\vec{w};x)$ to a combination of functions $H(\vec{v};1-x)$.
Another important property is that the HPLs form a shuffle algebra, so
that
\begin{equation}
\label{eqn:shuffle}
H(\vec{w}_1;x)\,H(\vec{w}_2;x) = \sum_{\vec{w}^\prime \in
\vec{w}_1\mbox{\bf \scyr X}\vec{w}_2} H(\vec{w}^\prime;x)\,,
\end{equation}
where $\vec{w}_1\mbox{\bf \scyr X}\vec{w}_2$ is the set of shuffles, or mergers of
the sequences $\vec{w}_1$ and $\vec{w}_2$ that preserve their relative
orderings.
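For concreteness, the shuffle set can be enumerated directly (a small sketch of my own, not tied to any standard package): interleaving the weight vectors $(0,1)$ and $(1)$ while preserving their internal orderings reproduces the known identity $H(0,1;x)\,H(1;x) = 2\,H(0,1,1;x) + H(1,0,1;x)$.

```python
from itertools import combinations

def shuffles(w1, w2):
    # all interleavings of w1 and w2 that preserve the relative order
    # of the elements within each vector
    n, out = len(w1) + len(w2), []
    for pos in combinations(range(n), len(w1)):
        p, i1, i2 = set(pos), iter(w1), iter(w2)
        out.append(tuple(next(i1) if k in p else next(i2) for k in range(n)))
    return out

print(shuffles((0, 1), (1,)))
# two copies of (0,1,1) and one (1,0,1), i.e. 2 H(0,1,1;x) + H(1,0,1;x)
```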
The harmonic polylogarithms can be extended in various ways. One is
to use different weight functions. These additional weights can even
be related to kinematic
variables~\cite{Gehrmann:2000zt,Gehrmann:2001ck}. In this work, it
will be convenient to introduce the weight function
\begin{equation}
f_{+2}(x) = \frac{1}{2-x}\,,
\end{equation}
and the associated polylogarithms. Harmonic polylogarithms derived
from this weight function do not mesh well with the shorthand notation
described above. I therefore use that notation only when working with
standard HPLs and multiple $\zeta$-functions, and avoid it when
working with extended HPLs.
\subsection{Multiple $\zeta$-Functions}
The Multiple $\zeta$-function is a generalization of the Riemann
$\zeta$-function, defined by
\begin{equation}
\zeta(w_1,\ldots,w_k) \equiv \sum_{n_1>n_2>\cdots>n_k\geq1}
\frac{1}{n_1^{w_1}\cdots n_k^{w_k}}\,.
\end{equation}
When all weights $w_m$ are positive, these are sometimes called
multiple $\zeta$ values, or MZVs. The multiple $\zeta$-functions
are, in some sense, the endpoints of the harmonic polylogarithms,
since
\begin{equation}
H(\vec{w};1) = \zeta(\vec{w})\,,
\end{equation}
where $\vec{w}$ is written in the shorthand notation defined above.
If one deconstructs the weight vector into $0$'s, $+1$'s and $-1$'s, it
is clear that the multiple $\zeta$-functions share the shuffle algebra
of the harmonic polylogarithms. This property allows one to derive
many relations involving the products and sums of the MZVs.
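The simplest such relation, Euler's $\zeta(2,1)=\zeta(3)$, can be checked with a brute-force truncated sum (a rough numerical sketch of my own; the nested sums converge only logarithmically, so this verifies the relation only to truncation accuracy).

```python
def mzv(weights, nmax=2000):
    # brute-force truncated multiple zeta sum over n1 > n2 > ... > nk >= 1;
    # logarithmic convergence makes this a rough check, not a precision tool
    def rec(ws, upper):
        if not ws:
            return 1.0
        w, rest = ws[0], ws[1:]
        return sum(rec(rest, n) / n**w for n in range(len(ws), upper))
    return rec(list(weights), nmax + 1)

# Euler's relation zeta(2,1) = zeta(3)
print(mzv((2, 1)), mzv((3,)))
```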
One important result is that at any rank $n$, the MZVs with weight
vectors containing only $2$'s and $3$'s form a basis for MZVs of that
rank~\cite{Hoffman:1467164,Blumlein:2009cf,Brown:2012XX,Zagier:2012XX}.
A consequence of this is that through rank $7$, one can replace this
basis with products of single (Riemann) $\zeta$ functions. Not until
rank $8$ are there more elements in the basis ($\zeta(2,2,2,2)$,
$\zeta(3,3,2)$, $\zeta(3,2,3)$, $\zeta(2,3,3)$) than there are
independent single $\zeta$ products ($\zeta(8)$, $\zeta(5)\,\zeta(3)$,
$\zeta^2(3)\,\zeta(2)$).
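The counting behind this statement is easy to reproduce (a sketch; the helper name is mine): enumerating the ordered compositions of the rank into parts $2$ and $3$ gives the size of the Hoffman basis at each rank, e.g. three elements at rank $7$ but four at rank $8$, against only three independent single-$\zeta$ products of weight $8$.

```python
def comps23(w):
    # ordered compositions of w into parts 2 and 3; these label the
    # basis elements zeta(w_1,...,w_k) with all w_i in {2,3}
    if w == 0:
        return [()]
    out = []
    for p in (2, 3):
        if w >= p:
            out += [(p,) + rest for rest in comps23(w - p)]
    return out

for w in range(2, 9):
    print(w, len(comps23(w)), comps23(w))
```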
\subsection{Functions of Uniform Transcendentality}
It is useful to define the concept of the degree of
transcendentality~\cite{Henn:2013pwa} ${\cal T}(f)$ of a function $f$
which, like the HPLs, is defined by iterated integration. The degree
of transcendentality is simply the number of iterated integrals needed
to define the function. Thus, the transcendentality of an HPL is
equal to the rank of its weight vector. Transcendentality is also
assigned to numerical constants that are obtained at special values of
transcendental functions. Thus $\zeta(5) = \Li{5}(1) =
H(5;1)$ is assigned ${\cal T}(\zeta(5)) = 5$. The
transcendentality of products of functions is equal to the sum of the
transcendentalities of the two functions, ${\cal T}(f_1\,f_2) = {\cal
T}(f_1) + {\cal T}(f_2)$. This is consistent with the shuffle
operation where the product of functions of rank $r_1$ and $r_2$ is
expressed as a sum of functions of rank $r_1+r_2$.
A function is said to be a function of uniform
transcendentality~\cite{Henn:2013pwa} (FUT) if it is a sum of terms
which all have the same transcendentality. A further refinement is to
define a {\it pure} function of uniform transcendentality (pFUT) as
one for which the degree of transcendentality is lowered by taking a
derivative, ${\cal T}(d\,f) = {\cal T}(f) -1$. For instance, $f(x) =
x\,H(1;x)$ is not a pFUT because $df/dx = H(1;x) + x/(1-x)$ does not
have uniform transcendentality and thus is not an FUT, while $g(x) =
H(1,1;x) + H(0,1;x)$ is a pure function of uniform transcendentality
and $dg/dx = H(1;x)\,\left( f_{1}(x) + f_{0}(x)\right)$ is an FUT.
Typically, the functions that are encountered in performing
dimensionally regularized Feynman integrals are expressed as Laurent
expansions in the parameter ${\varepsilon}$, where $D=4-2{\varepsilon}$. The concept of
transcendentality can be usefully applied to these functions by
assigning ${\cal T}({\varepsilon})=-1$. Simple examples of pure functions of
uniform transcendentality are
\begin{equation}
\begin{split}
\label{eqn:FUT1}
\Gamma(1-{\varepsilon}) &= \exp\left({\varepsilon}\,\gamma_E + \sum_{n=2}^{\infty}\frac{{\varepsilon}^{n}\,\zeta(n)}{n}\right)\\
\left(\frac{\mu^2}{M_H^2}\right)^{\varepsilon} &= \sum_{n=0}^\infty
\frac{{\varepsilon}^n}{n!}\ln^{n}\frac{\mu^2}{M_H^2}
\end{split}
\end{equation}
where the Euler-Mascheroni constant, $\gamma_E \approx 0.577216$ is assigned
${\cal T}(\gamma_E) = 1$. A more complicated example is the
hypergeometric function that appears in the one-loop box master
integrals (\eqn{eqn:oneloopmasters}),
\begin{equation}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{z} = 1 - \sum_{n=1}^\infty {\varepsilon}^n\,\Li{n}(z)\,.
\end{equation}
Note that the one-loop bubble master integral, however, is not an FUT
because of the factor of $1/(1-2\,{\varepsilon})$.
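The all-orders expansion of this hypergeometric is simple to confirm numerically (an illustrative sketch of my own, comparing the Gauss series term by term with the tower of classical polylogarithms).

```python
def hyp2f1(a, b, c, z, terms=200):
    # Gauss series for 2F1(a,b;c;z), adequate for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

def li(n, z, terms=400):
    # classical polylogarithm Li_n(z) from its defining series
    return sum(z**k / k**n for k in range(1, terms + 1))

eps, z = 0.1, 0.3
lhs = hyp2f1(1.0, -eps, 1.0 - eps, z)
rhs = 1.0 - sum(eps**n * li(n, z) for n in range(1, 30))
print(lhs, rhs)  # the two sides agree to the accuracy of the truncations
```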
\section{Methods}
\label{sec:methods}
\subsection{Squared amplitudes and Phase Space Integration}
\label{sec:SRE}
The partonic cross section is computed by squaring the production
amplitudes, averaging (summing) over initial (final) state colors and
spins, and integrating over phase space.
\begin{equation}
\sigma = \frac{1}{2\,s_{12}}\,\int d(LIPS)\,\frac{1}{\mathbb{S}}
\sum_{\rm spin/color} \,\left|{\cal M}\right|^2\,,
\end{equation}
where the factor of $1/(2\,s_{12})$ is the flux factor, $d(LIPS)$
represents Lorentz invariant phase space and the factor $\mathbb{S}$
represents the averaging over initial state spins and colors. The
matrix elements presented in the previous sections were written for
the kinematics $p_1+p_2+p_3+p_H\to\emptyset$. To compute the cross
section, $p_3$ and $p_H$ must be crossed into the final state. When
$p_3$ represents the momentum of a fermion, the squared matrix element
picks up an extra factor of $(-1)$ from Fermi-Dirac statistics. For
the production process $p_1+p_2\to p_3+p_H$, the element of Lorentz
invariant phase space is
\begin{equation}
d(LIPS) = \frac{1}{8\,\pi}\left(\frac{4\,\pi\,\mu^2}{s_{12}}\right)^{\varepsilon}\,
\frac{1}{\Gamma(1-{\varepsilon})}\left(\frac{s_{23}\,s_{31}}{s_{12}^2}\right)^{-{\varepsilon}}\frac{ds_{23}}{s_{12}}\,.
\end{equation}
Defining $s_{12} = \hat{s}$ to be the parton CM energy squared, I
introduce the dimensionless parameters $x = M_H^2/\hat{s}$,
$\bar{x}\, = 1-x$, and $y = \frac{1}{2}\Lx1-\cos\,\theta^*\right)$, $\bar{y}\, = 1-y$, where
$\theta^*$ is the scattering angle in the CM frame,
\begin{equation}
\begin{split}
s_{12} &= \hat{s}\,,\hskip 50pt M_H^2 = x\,\hat{s}\,,\\
s_{23} &= \bar{x}\,\,y\,\hat{s}\,,\hskip 37pt s_{31} = \bar{x}\,\,\bar{y}\,\,\hat{s}\,.
\end{split}
\end{equation}
In terms of these variables, the element of phase space is
\begin{equation}
d(LIPS) = \frac{1}{8\,\pi}\left(\frac{4\,\pi\,\mu^2}{\hat{s}}\right)^{\varepsilon}
\frac{1}{\Gamma(1-{\varepsilon})}\,\bar{x}\,^{1-2{\varepsilon}}\,y^{-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\,dy\,.
\end{equation}
$\bar{x}\,$ is called the threshold parameter; it measures the excess
(kinetic) energy in the scattering process beyond that needed to
produce a Higgs boson at rest. The kinematically available region in
$x$ and $y$ space is $M_H^2/s < x < 1$ and $0 < y < 1$, where
$s$ is the hadronic (not partonic) CM energy. Clearly, $0 < \bar{x}\,\ <
1-M_H^2/s$ and $0<\bar{y}\,\ <1$.
In the virtual production process, $g\,g\to H$, there is no excess
energy and $\bar{x}\,$ is constrained to be zero. This constraint is
enforced by a $\delta$-function, $\delta(\bar{x}\,)$, which arises from the
phase space element of the virtual process. In a real emission
process, like that considered here, $\bar{x}\,$ is allowed to vary
continuously between $0$ and $1-M_H^2/s$, and the terms in the cross
section are multiplied by powers (both integer and proportional to
${\varepsilon}$) of $\bar{x}\,$. The leading terms in $\bar{x}\,$, associated with soft
emission, vary like $\bar{x}\,^{-1+n\,{\varepsilon}}$, and are singular at the
endpoint $\bar{x}\,\to0$. These soft terms are evaluated by expanding in
distributions of $\bar{x}\,$,
\begin{equation}
\label{eqn:xbdistrib}
\bar{x}\,^{-1+n\,{\varepsilon}} = \frac{1}{n\,{\varepsilon}}\delta(\bar{x}\,)
+ {\cal D}^{n\,{\varepsilon}}(\bar{x}\,)
= \frac{1}{n\,{\varepsilon}}\delta(\bar{x}\,)
+ \sum_{m=0}^\infty\frac{(n\,{\varepsilon})^m}{m!}\,{\cal D}_m(\bar{x}\,)\,,
\end{equation}
where ${\cal D}_m(\bar{x}\,)$ is a ``plus'' distribution defined as
\begin{equation}
\begin{split}
{\cal D}_m(\bar{x}\,) &= \left[\frac{\ln^m(\bar{x}\,)}{\bar{x}\,}\right]_+\,,\\
\int_0^1 dx\,h(x)\,{\cal D}_m(\bar{x}\,) &=
\int_0^1 dx\,\left( h(x)-h(1)\right)\frac{\ln^m(\bar{x}\,)}{\bar{x}\,}\,,
\end{split}
\end{equation}
and ${\cal D}^{n\,{\varepsilon}}(\bar{x}\,)$ represents the whole tower of plus
distributions. In this way, one obtains $\delta$-function terms to
add to those from the virtual corrections.
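The distributional expansion in \eqn{eqn:xbdistrib} can be checked numerically against a smooth test function (a sketch of my own; the test function and quadrature routine are illustrative choices). The left side maps the integrable endpoint singularity away with $\bar{x} = u^{1/a}$; the plus-distribution integrals use $\bar{x} = e^{-t}$.

```python
import math

def simpson(g, a, b, n=4000):
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

a = 0.3                      # plays the role of n*eps in xb^(-1+n*eps)
h = lambda x: math.exp(-x)   # smooth test function h(xb)

# direct integral int_0^1 xb^(a-1) h(xb) dxb, smoothed via xb = u^(1/a)
direct = simpson(lambda u: h(u ** (1.0 / a)) / a, 0.0, 1.0)

# distribution expansion: h(0)/a plus the tower of plus-distributions,
# with D_m = int_0^1 (h(xb)-h(0)) ln^m(xb)/xb dxb, mapped to xb = exp(-t)
expansion = h(0.0) / a
for m in range(13):
    dm = simpson(lambda t: (h(math.exp(-t)) - h(0.0)) * (-t) ** m, 0.0, 50.0)
    expansion += a ** m / math.factorial(m) * dm
print(direct, expansion)  # both sides agree to quadrature/truncation accuracy
```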
\subsection{Integration by Parts}
The partonic cross sections are given by integrals of the squared
matrix elements over the phase space. This involves a great many
integrals of functions of varying complexity. It is certainly
possible to simply attack the list of integrals, one-by-one, and solve
them by whatever means possible. The magnitude of the problem can be
essentially cut in half by taking advantage of the symmetry in
exchanging $y\leftrightarrow\bar{y}\,$, but this still leaves a large number
of integrals to be performed.
An elegant solution is suggested by the success of the
integration-by-parts method that has been applied to Feynman
integrals, allowing one to express a large set of integrals in terms
of a few ``master'' integrals. Since loop integrals and phase space
integrals are intimately related through the Cutkosky relations, it is
no surprise that the same procedure can be applied to phase space
integrals. An example of a phase space integral encountered in the
interference of tree-{} and one-loop amplitudes is
\begin{equation}
I_{ex}(\bar{x}\,) = \int_0^1 dy\,y^{-{\varepsilon}}\,\bar{y}\,^{-2{\varepsilon}}\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y}\,.
\end{equation}
Because the integration runs over the full range of $y$, the integral
of a total $y$-derivative of the integrand (times appropriate powers
of $y$ and $\bar{y}\,$) vanishes, up to boundary terms that are zero for
generic values of the exponents. Carrying the derivative through the
integrand yields a sum of different integrals, and since that sum
equals zero, I derive non-trivial relations among various phase space
integrals. In the example given above, I obtain
\begin{equation}
\begin{split}
0 &= \Lx1-2\,{\varepsilon}\right)\int_0^1 dy\,
y^{-{\varepsilon}}\,\bar{y}\,^{-2{\varepsilon}}\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y}\\
&+2\,{\varepsilon}\int_0^1 dy\,
y^{-{\varepsilon}}\,\bar{y}\,^{-1-2{\varepsilon}}\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y}\\
&-{\varepsilon}\int_0^1 dy\,
y^{-{\varepsilon}}\,\bar{y}\,^{-2{\varepsilon}}\Lx1-\bar{x}\,\,y\right)^{-1}\,.
\end{split}
\end{equation}
As it turns out, two of the integrals on the right-hand side,
\begin{equation}
\int_0^1 dy\, y^{-{\varepsilon}}\,\bar{y}\,^{-2{\varepsilon}}\Lx1-\bar{x}\,\,y\right)^{-1} =
\frac{\Gamma(1-{\varepsilon})\Gamma(1-2\,{\varepsilon})}{{\varepsilon}\,\bar{x}\,\,\Gamma(1-3\,{\varepsilon})}
\Lx1 - \hypgeo{1}{-{\varepsilon}}{1-3\,{\varepsilon}}{\bar{x}\,}\right)\,,
\end{equation}
and
\begin{equation}
\int_0^1 dy\, y^{-{\varepsilon}}\,\bar{y}\,^{-1-2{\varepsilon}}\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y} =
\frac{\Gamma(1-{\varepsilon})\Gamma(1-2\,{\varepsilon})}{(-2\,{\varepsilon})\Gamma(1-3\,{\varepsilon})}
\hypgeo{1}{-{\varepsilon}}{1-3\,{\varepsilon}}{\bar{x}\,}\,,
\end{equation}
are functions of uniform transcendentality. This makes them good
candidates to be chosen as master integrals, though as it turns out, I
have chosen other FUTs as masters.
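As a quick numerical sanity check (my own sketch, independent of the derivation), the integration-by-parts identity above can be verified at a sample point. I take $\epsilon=-0.1$, for which all three integrals converge; the singular $\bar{y}^{-1-2\epsilon}$ endpoint is tamed by the substitution $\bar{y}=v^5$, which is exact for this value of $\epsilon$.

```python
def hyp2f1(a, b, c, z, terms=300):
    # Gauss series for 2F1(a,b;c;z), adequate for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

def simpson(g, lo, hi, n=4000):
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3.0

eps, xb = -0.1, 0.4
F = lambda z: hyp2f1(1.0, -eps, 1.0 - eps, z)

I1 = simpson(lambda y: y**-eps * (1 - y)**(-2 * eps) * F(xb * y), 0.0, 1.0)
# ybar^(-1-2*eps) is singular at y = 1; substitute ybar = v**5 (exact for eps = -0.1)
I2 = simpson(lambda v: 5.0 * (1 - v**5)**-eps * F(xb * (1 - v**5)), 0.0, 1.0)
I3 = simpson(lambda y: y**-eps * (1 - y)**(-2 * eps) / (1 - xb * y), 0.0, 1.0)

res = (1 - 2 * eps) * I1 + 2 * eps * I2 - eps * I3
print(res)  # vanishes up to quadrature error
```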
\subsection{Threshold Expansion}
\label{sec:thresh}
Once the full set of integrals has been reduced to a few masters, one
must actually perform those integrals. Some of the masters can be
integrated in closed form, but most of those that arise
from the squared one-loop amplitudes, cannot. The technique by which
I will solve these integrals involves expansion of the integrands in
terms of the threshold parameter
$\bar{x}\,$~\cite{Harlander:2002wh,Harlander:2002vv,Harlander:2003ai}.
The advantage of this approach is that the coefficient of each power
of $\bar{x}\,$ consists of simple, often trivial, integrals over powers and
functions of $y$ and $\bar{y}\,$ only. The disadvantage is that the result
is a truncated series in $\bar{x}\,$, not a set of functions in closed form.
This disadvantage, however, is essentially one of {\ae}sthetics.
Because the gluon luminosity spectrum is a fairly steeply falling
function, the Higgs production cross section is dominated by the
threshold region and so the first several terms in the $\bar{x}\,$ expansion
give a good approximation to the physics. This feature was
demonstrated explicitly in the first {NNLO}\ calculation of Higgs
boson production~\cite{Harlander:2002wh}.
Nevertheless, even this disadvantage can be overcome if one has a
suitable ansatz for the basis of functions in which the closed-form
integrals would take values and if one can carry out the threshold
expansion to sufficiently high order that one can map the series
expansion onto the basis functions~\cite{Harlander:2002vv,Harlander:2003ai}.
At {NNLO}, the author used the ansatz that the basis of functions
consisted of those functions which appeared in the ground-breaking
calculation of Drell-Yan production at {NNLO}~\cite{Hamberg:1990np}.
In the present calculation, one does not have such guidance for how to
choose functions beyond rank three. A logical choice would seem to be
the standard harmonic polylogarithms in $\bar{x}\,$. This, however, would
be incorrect! Among the functions found in the {NNLO}\ Drell-Yan
result are
\begin{equation}
\Li{2}(-x) = - H(0,-1;x)\,, \qquad
\Li{3}(-x) = - H(0,0,-1;x)\,.
\end{equation}
These functions can be expanded in $\bar{x}\,$. For example,
\begin{equation}
\label{eqn:inhomo}
\Li{2}(-x) = - \frac{1}{2}\Li2(2\,\bar{x}\,-\bar{x}\,^2) + \Li2(\bar{x}\,)
- \frac{\zeta(2)}{2} + \ln(2)\Li1(\bar{x}\,)
- \Li1(\bar{x}\,)\Li1\left(\frac{\bar{x}\,}{2}\right)\,.
\end{equation}
All of the functions on the right hand side of this expression can be
readily expanded in $\bar{x}\,\!$, but cannot be expressed as standard HPLs
of $\bar{x}\,$. A better ansatz is that the basis of functions consists of
standard HPLs of $x$, not $\bar{x}\,$. The problem with this ansatz,
however, is that the threshold expansion is in $\bar{x}\,$, not $x$, and the
expansion in $\bar{x}\,$ of HPLs in $x$ involves the appearance of
transcendental numbers like $\zeta(n)$ or $\ln(2)$ as in
\eqn{eqn:inhomo} above. It turns out that the best basis of functions
consists of the generalized harmonic polylogarithms in $\bar{x}\,$, where
the elements of the weight vector take values from the set
$\{0,1,2\}$, rather than the standard $\{-1,0,1\}$. These generalized
HPLs all expand homogeneously in $\bar{x}\,$, without the appearance of
transcendental numbers. Once the threshold expansion has been mapped
onto these functions, they can, in turn, be mapped back onto the
standard HPLs in $x$. Thus, the final results of this calculation
will be expressed in terms of standard HPLs in $x$.
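The expansion identity in \eqn{eqn:inhomo} is itself easy to confirm at a sample point (a numerical sketch of my own, using the defining series of the classical polylogarithms).

```python
import math

def li2(z, terms=4000):
    # dilogarithm series; adequate for |z| < 1
    return sum(z**k / k**2 for k in range(1, terms + 1))

li1 = lambda z: -math.log(1.0 - z)   # Li_1(z) = -ln(1-z)

xb = 0.3
x = 1.0 - xb
lhs = li2(-x)
rhs = (-0.5 * li2(2 * xb - xb**2) + li2(xb) - math.pi**2 / 12.0
       + math.log(2.0) * li1(xb) - li1(xb) * li1(xb / 2.0))
print(lhs, rhs)  # the two sides agree
```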
\subsection{Series Inversion}
The mapping of the threshold expansions onto basis functions is done
as follows. For a set of $n$ basis functions, $G(\vec{w}_i;\bar{x}\,)$
(note that I use $G(\vec{w};x)$ to denote generalized rather than
standard HPLs), each function is expanded in powers of $\bar{x}\,$ from
$\bar{x}\,^0$ to $\bar{x}\,^{n-1}$. This statement assumes that the right-most
element of the weight vector is not equal to $0$. Such terms would
contain factors of $\ln(\bar{x}\,\!)$, which does not expand in powers of
$\bar{x}\,$. (There is no problem with eliminating these terms from the
basis since factors of $\ln(\bar{x}\,\!)$ arise exclusively from terms like
$\bar{x}\,^{n\,{\varepsilon}}$, which appear explicitly in the phase space element and
have been factored out in the form of the loop master integrals given
in \eqn{eqn:oneloopmasters}.) With this assumption, the HPLs can be
expanded as~\cite{Maitre:2005uu}
\begin{equation}
G(\vec{w};\bar{x}\,) = \sum_{i=0}^\infty\,\bar{x}\,^{i}\ Z_i(\vec{w})\,.
\end{equation}
The coefficients $Z_i(\vec{w})$ can be determined using the definition
of the HPLs.
\begin{equation}
\label{eqn:zexp}
G(w_n,w_{n-1},\ldots,w_1;z) = \int_0^z dt\ f_{w_n}(t)\
G(w_{n-1},\ldots,w_1;t) = \sum_{i=0}^\infty Z_i(w_{n-1},\ldots,w_1)
\int_0^z dt\ f_{w_n}(t)\ t^i\,.
\end{equation}
For $w_n$ taking values from the set $\{0,1,2\}$,
\begin{equation}
\begin{split}
\label{eqn:wts}
\int_0^z\ dt\ f_{0}(t)\ t^i &= \int_0^z\ \frac{dt}{t} t^i = \frac{z^i}{i}\,,\\
\int_0^z\ dt\ f_{1}(t)\ t^i &= \int_0^z\ \frac{dt}{1-t} t^i =
\sum_{j=i+1}^\infty \frac{z^j}{j}\,,\\
\int_0^z\ dt\ f_{2}(t)\ t^i &= \int_0^z\ \frac{dt}{2-t} t^i =
\sum_{j=i+1}^\infty \frac{z^j}{2^{j-i}j}\,.
\end{split}
\end{equation}
Combining \eqns{eqn:zexp}{eqn:wts}, I obtain starting values
\begin{equation} Z_j(1) = \frac{1}{j}\,, \qquad
Z_j(2) = \frac{1}{2^j\,j}\,,
\end{equation}
and the recursion relations:
\begin{equation}
\begin{split}
Z_j(0,\vec{w}) & = \frac{1}{j}\ Z_{j}(\vec{w})\,,\\
Z_j(1,\vec{w}) &= \frac{1}{j}\ \sum_{i=1}^{j-1}\ Z_{i}(\vec{w})\,,\\
Z_j(2,\vec{w}) &= \frac{1}{j}\ \sum_{i=1}^{j-1}\ \frac{1}{2^{j-i}}
\ Z_{i}(\vec{w})\,.
\end{split}
\end{equation}
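The starting values and recursion relations translate directly into a short routine (a sketch of my own; the weight vector must end in a nonzero weight, as discussed above). As a check, resumming the coefficients of $G(1,1;\bar{x}\,)$ reproduces the closed form $\ln^2(1-\bar{x}\,\!)/2$.

```python
import math

def Z(w, jmax):
    # expansion coefficients Z_j(w) of G(w; xb) = sum_j Z_j(w) xb^j;
    # the rightmost weight must be 1 or 2 (no trailing zeros; see text)
    if len(w) == 1:
        if w[0] == 1:
            return [0.0] + [1.0 / j for j in range(1, jmax + 1)]
        if w[0] == 2:
            return [0.0] + [1.0 / (2**j * j) for j in range(1, jmax + 1)]
    inner = Z(w[1:], jmax)
    out = [0.0] * (jmax + 1)
    for j in range(1, jmax + 1):
        if w[0] == 0:
            out[j] = inner[j] / j
        elif w[0] == 1:
            out[j] = sum(inner[i] for i in range(1, j)) / j
        elif w[0] == 2:
            out[j] = sum(inner[i] / 2**(j - i) for i in range(1, j)) / j
    return out

def G(w, x, jmax=60):
    z = Z(list(w), jmax)
    return sum(z[j] * x**j for j in range(1, jmax + 1))

x = 0.4
print(G((1, 1), x), 0.5 * math.log(1 - x) ** 2)  # G(1,1;x) = ln^2(1-x)/2
```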
Once the basis functions have been expanded, one forms a matrix ${\mathbb
M}$ of coefficients, where each column corresponds to a different
function, and each row to a different order in $\bar{x}\,$. This matrix is
inverted, to form ${\mathbb M}^{-1}$. The solution to the integral
$I(\bar{x}\,\!)$ is then found to be
\begin{equation}
I(\bar{x}\,\!) = \vec{f}\cdot {\mathbb M}^{-1}\cdot \vec{\imath}\,,
\end{equation}
where $\vec{f}$ is a row-vector of the basis functions, and
$\vec{\imath}$ is a column-vector consisting of the threshold
expansion coefficients of the integral $I(\bar{x}\,\!)$.
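A toy version of this inversion (my own sketch, with a contrived three-function basis and a target whose answer is known by construction) shows the mechanics; exact rational arithmetic avoids any numerical ill-conditioning of $\mathbb{M}$.

```python
from fractions import Fraction as Fr

# toy basis {1, G(1;xb), G(2;xb)} expanded through xb^2
def coeffs(kind, order=3):
    if kind == "one":
        return [Fr(1)] + [Fr(0)] * (order - 1)
    if kind == "G1":                       # sum xb^j / j
        return [Fr(0)] + [Fr(1, j) for j in range(1, order)]
    if kind == "G2":                       # sum xb^j / (2^j j)
        return [Fr(0)] + [Fr(1, 2**j * j) for j in range(1, order)]

basis = [coeffs(k) for k in ("one", "G1", "G2")]
M = [[basis[c][r] for c in range(3)] for r in range(3)]  # rows: orders in xb

# series of a target "integral" I = 5 - 2 G(1) + 3 G(2),
# pretending only its expansion coefficients are known
target = [5 * a - 2 * b + 3 * c for a, b, c in zip(*basis)]

def solve(M, v):
    # Gauss-Jordan elimination in exact rationals
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(n):
            if r != i and A[r][i] != 0:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [A[r][n] for r in range(n)]

print(solve(M, target))  # recovers the coefficients [5, -2, 3]
```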
Threshold expansion followed by series inversion is a very powerful
and versatile tool. It can be used as a blunt instrument to invert
the threshold expansion of the entire partonic cross section. This is
how it was used in the calculations of {NNLO}\ Higgs cross
sections~\cite{Harlander:2002vv,Harlander:2003ai}. When applied to
such complicated integrands, one needs not just the basis functions
discussed above, but also those basis functions weighted by various
powers of $\bar{x}\,$. Thus, while the inversion was performed using only
functions of rank $3$ or less (of which there are $40$ in total,
counting $1$ as a rank-$0$ function, and only $13$ which appear), we
needed a basis of $78$ functions.
The full power of the technique emerges, however, when it is applied
to a more controlled set of integrals. As discussed above, I only
need to evaluate a relatively small number of master integrals. The
rest are determined from the masters by algebraic relations. If I
choose my master integrals to be pure functions of uniform
transcendentality, I significantly reduce the size of the basis needed
for inversion. This is an important consideration because the number
of operations required for matrix inversion grows like $n^3$, where
$n$ is the size of the basis. This $n^3$ growth in the number of
operations does not take into account the fact that the size of the
terms being manipulated also grows rapidly with $n$. Thus, a
reduction in the size of the basis by a factor of $2$ makes the
problem of matrix inversion at least $10$ times simpler. I find that
the most complicated integrals in this calculation require a basis of
only $48$ functions to extract the rank $5$ components. In contrast,
to proceed by brute-force and compute the coefficients through rank
$5$ of the non-FUT integrals would require a basis of up to $325$
functions.
\section{Results}
\label{sec:results}
The first task is to compute the master integrals.
\subsection{Master Integrals at {NLO}}
There is only one master integral that contributes to the integral of
the square of tree-level amplitudes over phase space.
\begin{equation}
\label{eqn:MINLO}
M_0 = \alpha\,{\varepsilon}\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\ \bar{y}\,^{\beta\,{\varepsilon}} =
\frac{\Gamma(1+\alpha\,{\varepsilon})\Gamma(1+\beta\,{\varepsilon})}
{\Gamma(1+(\alpha+\beta){\varepsilon})}\,.
\end{equation}
For this integral, integration by parts does not yield any identities
that are not equivalent to $\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha)$.
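The closed form of $M_0$ is easily checked at a numerical sample point (an illustrative sketch; the substitution $y = u^{1/(\alpha\epsilon)}$ absorbs the $\alpha\epsilon$ prefactor exactly and removes the endpoint singularity).

```python
import math

def simpson(g, a, b, n=4000):
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

alpha, beta, eps = 2.0, 3.0, 0.07
# alpha*eps * int_0^1 y^(-1+alpha*eps) ybar^(beta*eps) dy, with y = u^(1/(alpha*eps))
lhs = simpson(lambda u: (1.0 - u ** (1.0 / (alpha * eps))) ** (beta * eps), 0.0, 1.0)
rhs = (math.gamma(1 + alpha * eps) * math.gamma(1 + beta * eps)
       / math.gamma(1 + (alpha + beta) * eps))
print(lhs, rhs)  # the quadrature matches the Gamma-function formula
```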
\subsection{Master Integrals at {NNLO}}
Applying the integration-by-parts technique to the integrals that
appear in the interference of tree-{} and one-loop amplitudes, I find
that there are only five new master integrals. All five can be
evaluated in closed form, meaning that the entire contribution of
single-real-emission at {NNLO}\ can be evaluated to all orders in
${\varepsilon}$. The master integrals are:
\begin{equation}
\begin{split}
\label{eqn:MINNLO}
M_1(\alpha,\beta) = \alpha\,{\varepsilon}\,&\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\beta\,{\varepsilon}}\ (1-\bar{x}\,\,y)^{-1}=
\frac{\Gamma(1+\alpha\,{\varepsilon})\Gamma(1+\beta\,{\varepsilon})}
{\Gamma(1+(\alpha+\beta){\varepsilon})}\
\hypgeo{1}{\alpha\,{\varepsilon}}{1+(\alpha+\beta)\,{\varepsilon}}{\bar{x}\,}\\
M_2(\alpha,\beta) =\alpha\,{\varepsilon}\,&\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\beta\,{\varepsilon}}\ \hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}} =\\
& \frac{\Gamma(1+\alpha\,{\varepsilon})\Gamma(1+(\beta-1)\,{\varepsilon})}
{\Gamma(1+(\alpha+\beta-1){\varepsilon})}\
\ghypgeo{-{\varepsilon}}{-{\varepsilon}}{\alpha\,{\varepsilon}}{1-{\varepsilon}}{1+(\alpha+\beta-1)\,{\varepsilon}}{1}\\
M_3(\alpha,\beta) =\alpha\,{\varepsilon}\,&\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\beta\,{\varepsilon}}\ \hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}} =\\
&\frac{\Gamma(1+\alpha\,{\varepsilon})\Gamma(1+\beta\,{\varepsilon})}
{\Gamma(1+(\alpha+\beta){\varepsilon})}\
\ghypgeo{1}{-{\varepsilon}}{\alpha\,{\varepsilon}}{1-{\varepsilon}}{-\beta\,{\varepsilon}}{x} \\
& + \frac{\alpha\,\Gamma(1+\beta\,{\varepsilon})\Gamma(1-\beta\,{\varepsilon})}{\beta\,(\alpha+\beta)}\
x^{\beta\,{\varepsilon}}\left(\hypgeo{(\alpha+\beta)\,{\varepsilon}}{(\beta-1)\,{\varepsilon}}{1+(\beta-1)\,{\varepsilon}}{x}
- \bar{x}\,^{-(\alpha+\beta)\,{\varepsilon}}\right)\\
M_4(n,\alpha,\beta) = \alpha\,{\varepsilon}\,&\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\beta\,{\varepsilon}}\ \hypgeo{1}{n\,{\varepsilon}}{1+n\,{\varepsilon}}{\bar{x}\,\,y} =\\
&\frac{\Gamma(1+\alpha\,{\varepsilon})\Gamma(1+\beta\,{\varepsilon})}
{\Gamma(1+(\alpha+\beta){\varepsilon})}\
\ghypgeo{1}{n\,{\varepsilon}}{\alpha\,{\varepsilon}}{1+n\,{\varepsilon}}{1+(\alpha+\beta)\,{\varepsilon}}{\bar{x}\,}\\
M_5(\alpha) = \alpha\,{\varepsilon}\,&\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\alpha\,{\varepsilon}}\ \hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}} =\\
&\frac{\Gamma^2(1+\alpha\,{\varepsilon})}{\Gamma(1+2\,\alpha\,{\varepsilon})}
\ghypgeo{1}{{\varepsilon}}{\alpha\,{\varepsilon}}{\frac{1}{2}+\alpha\,{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2}{4\,x}}
\end{split}
\end{equation}
It might appear that master integral $M_3$ contains factors of
$\ln(\bar{x}\,\!)$. It turns out, however, that when the hypergeometric
functions are expanded in ${\varepsilon}$, the $\ln(\bar{x}\,\!)$ terms contained in
the hypergeometrics exactly cancel the explicit logs from the
$\bar{x}\,^{-(\alpha+\beta)\,{\varepsilon}}$ terms. Note also that the ${\varepsilon}$
expansion of $M_5(\alpha)$ involves expanding around a half-integer
parameter in the hypergeometric function. Such expansions are
discussed in Ref.~\cite{Huber:2007dx}.
\subsection{Master Integrals at {N${}^3$LO}}
There are more than twenty new master integrals that appear at {N${}^3$LO}.
A few of them, particularly those that involve the products of
hypergeometric functions of the same argument, can be computed in
closed form, although the resulting functions are still hard to expand
in ${\varepsilon}$, even for tools like
{\abbrev HypExp}~\cite{Huber:2005yg,Huber:2007dx}. As an example,
\begin{equation}
\begin{split}
\label{eqn:mi26}
\int_0^1\,dy\ &y^{-1-{\varepsilon}}\,\bar{y}\,^{-3\,{\varepsilon}}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y}\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{\bar{x}\,\,y}\\
= & \frac{\Gamma(1-{\varepsilon})\Gamma(1-3\,{\varepsilon})}{{\varepsilon}\,\Gamma(1-4\,{\varepsilon})}
\LB1-\ghypgeo{1}{-{\varepsilon}}{-{\varepsilon}}{1-{\varepsilon}}{1-4\,{\varepsilon}}{\bar{x}\,}
\vphantom{\frac{2}{\delta}}\right.\\
& - \lim_{\delta\to0}\frac{2\,{\varepsilon}^2\,\bar{x}\,\!}{\delta\,(1-2{\varepsilon})(1-4{\varepsilon})}
\left(\vphantom{\frac{\Gamma}{\Gamma}}
\ghypgeo{1}{1-2{\varepsilon}}{1-{\varepsilon}}{2-2{\varepsilon}}{2-4{\varepsilon}}{\bar{x}\,\!}\right.\\
&\left.\left.\vphantom{\frac{\Gamma}{\Gamma}}\hskip110pt
- \ghypgeo{1}{1-2{\varepsilon}}{1-{\varepsilon}+\delta{\varepsilon}}{2-2{\varepsilon}}{2-4{\varepsilon}}{\bar{x}\,\!}
\right)\right]\\
\end{split}
\end{equation}
Both for this reason and because many of the masters cannot be
evaluated in closed form, I choose to compute all of the needed
integrals directly as Laurent series in ${\varepsilon}$ by means of threshold
expansion. The exceptions are the two scale-free master integrals,
$M_{10}$ and $M_{11}$, which integrate to pure numbers.
The full list of master integrals needed for the {N${}^3$LO}\ contribution
is given below. The coefficients are chosen so that each of the
master integrals is a function of uniform transcendentality ${\cal T}
= 0$, with the leading term in the ${\varepsilon}$ expansion equal to unity.
\begin{equation}
\begin{split}
\label{eqn:MINNNLO}
M_{10} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}^2\,,\\
M_{11} &= -2\,{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{\bar{y}\,}{y}}\,,\\
M_{12} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}\\
M_{13} &= -2{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{\bar{y}\,}{y}}\\
M_{14}(n) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{n\,{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\Lx1-\bar{x}\,\,y\right)^{-1}\\
M_{15}(n) &= -2{\varepsilon}\int_0^1\,dy\ y^{n\,{\varepsilon}}\,\bar{y}\,^{-1-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\Lx1-\bar{x}\,\,\bar{y}\,\right)^{-1}\\
M_{16}(m) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-2\,{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,y}\\
M_{17}(m) &= -2{\varepsilon}\int_0^1\,dy\ y^{-1-2{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,\bar{y}\,}\\
M_{18} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}\\
M_{19} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}^2\\
M_{20} &= -2{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{\bar{y}\,}{y}}\\
\end{split}
\end{equation}
\begin{equation*}
\begin{split}
M_{21}(n) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{n\,{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\Lx1-\bar{x}\,\,y\right)^{-1}\\
M_{22}(n) &= -2{\varepsilon}\int_0^1\,dy\ y^{n\,{\varepsilon}}\,\bar{y}\,^{-1-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\Lx1-\bar{x}\,\,\bar{y}\,\right)^{-1}\\
M_{23}(m) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-2\,{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,y}\\
M_{24}(m) &= -2{\varepsilon}\int_0^1\,dy\ y^{-1-2{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,\bar{y}\,}\\
M_{25} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}}
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}\\
M_{26}(n,m) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-3{\varepsilon}}\
\hypgeo{1}{n\,{\varepsilon}}{1+n\,{\varepsilon}}{\bar{x}\,\,y}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,y}\\
M_{27}(n,m) &= -2{\varepsilon}\int_0^1\,dy\ y^{-1-2{\varepsilon}}\,\bar{y}\,^{-2{\varepsilon}}\
\hypgeo{1}{n\,{\varepsilon}}{1+n\,{\varepsilon}}{\bar{x}\,\,y}
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,\bar{y}\,}\\
M_{28}(n,m) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{n\,{\varepsilon}}\
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,y}
\Lx1-\bar{x}\,\,y\right)^{-1}\\
M_{29}(n,m) &= -2{\varepsilon}\int_0^1\,dy\ y^{n\,{\varepsilon}}\,\bar{y}\,^{-1-2{\varepsilon}}\
\hypgeo{1}{m\,{\varepsilon}}{1+m\,{\varepsilon}}{\bar{x}\,\,y}
\Lx1-\bar{x}\,\,\bar{y}\,\right)^{-1}\\
M_{30}(n) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-2\,{\varepsilon}}\
\hypgeo{1}{n\,{\varepsilon}}{1+n\,{\varepsilon}}{\bar{x}\,\,y}
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}\\
M_{31} &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{-{\varepsilon}}\
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}^2\\
M_{32}(n) &= -{\varepsilon}\int_0^1\,dy\ y^{-1-{\varepsilon}}\,\bar{y}\,^{n\,{\varepsilon}}\
\hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}
\Lx1-\bar{x}\,\,y\right)^{-1}\,.
\end{split}
\end{equation*}
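The normalization claim above (leading term of each master equal to unity) can be spot-checked numerically; the sketch below does so for the scale-free master $M_{10}$. It is an illustration only, not part of the calculation: it assumes ${\varepsilon}<0$ so that $y^{-1-{\varepsilon}}$ is integrable at $y=0$, and it uses the standard Pfaff transformation $\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-y/\bar{y}\,} = \bar{y}\,^{-{\varepsilon}}\,\hypgeo{-{\varepsilon}}{-{\varepsilon}}{1-{\varepsilon}}{y}$ to keep the series argument inside the unit disk.

```python
# Numerical spot-check (not part of the calculation proper) that the
# scale-free master M_10 has leading epsilon term equal to unity.
# Assumptions: eps < 0, and the Pfaff transformation
#   2F1(1,-eps;1-eps;-y/(1-y)) = (1-y)^{-eps} 2F1(-eps,-eps;1-eps;y).

def hyp2f1_series(a, b, c, z, nterms=400):
    """Gauss series sum_n (a)_n (b)_n / (n! (c)_n) z^n, valid for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def m10(eps, npts=4000):
    """M_10 = -eps int_0^1 dy y^{-1-eps} (1-y)^{-3 eps} G(y)^2 with
    G(y) = 2F1(-eps,-eps;1-eps;y); the substitution y = t^{1/(-eps)}
    absorbs both the prefactor and the endpoint singularity (eps < 0)."""
    a = -eps                          # a > 0
    def integrand(t):
        y = t ** (1.0 / a)
        if y >= 1.0:
            return 0.0
        g = hyp2f1_series(-eps, -eps, 1.0 - eps, y)
        return (1.0 - y) ** (-3.0 * eps) * g * g
    h = 1.0 / npts                    # composite Simpson rule on [0, 1]
    s = integrand(0.0) + integrand(1.0)
    for i in range(1, npts):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3.0

M10_value = m10(-0.05)
```

For ${\varepsilon}=-0.05$ the quadrature should return a value close to unity, consistent with $M_{10} = 1 + {\cal O}({\varepsilon}^2)$.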
In addition, one needs a variation on $M_5$,
\begin{equation}
M_{6}(\alpha,\beta) = \alpha\,{\varepsilon}\,\int_0^1\ dy\,y^{-1+\alpha\,{\varepsilon}}\
\bar{y}\,^{\beta\,{\varepsilon}}\ \hypgeo{1}{{\varepsilon}}{1+{\varepsilon}}{-\frac{\bar{x}\,^2\,y\,\bar{y}\,}{x}}\,,
\end{equation}
where $M_5(\alpha) = M_6(\alpha,\alpha)$. Note that while $M_5$ can be
expressed in closed form, $M_6$ cannot.
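Since $M_6$ has no closed form, a direct numerical evaluation provides a useful cross-check on its ${\varepsilon}$ expansion. The sketch below is illustrative only; the values $\alpha=1$, $\beta=2$, ${\varepsilon}=0.1$, $x=0.8$ are arbitrary, and the result is compared against the Euler beta function obtained by setting the hypergeometric to one.

```python
# Numerical sketch (not part of the calculation proper) of a direct
# evaluation of M_6(alpha, beta). The substitution y = t^{1/(alpha*eps)}
# absorbs the prefactor alpha*eps and the y^{-1+alpha*eps} endpoint factor
# exactly (assuming alpha*eps > 0).
import math

def hyp2f1_series(a, b, c, z, nterms=100):
    """Gauss series for 2F1(a,b;c;z), valid for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def m6(alpha, beta, eps, x, npts=4000):
    a = alpha * eps                   # assume a > 0
    xbar = 1.0 - x
    def integrand(t):
        y = t ** (1.0 / a)
        ybar = 1.0 - y
        if ybar <= 0.0:
            return 0.0
        f = hyp2f1_series(1.0, eps, 1.0 + eps, -xbar ** 2 * y * ybar / x)
        return ybar ** (beta * eps) * f
    h = 1.0 / npts                    # composite Simpson rule on [0, 1]
    s = integrand(0.0) + integrand(1.0)
    for i in range(1, npts):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3.0

M6_value = m6(1, 2, 0.1, 0.8)
# Reference: with the hypergeometric replaced by 1 the integral is the
# Euler beta function alpha*eps*B(alpha*eps, 1 + beta*eps).
a, b = 1 * 0.1, 2 * 0.1
beta_ref = math.exp(math.lgamma(1.0 + a) + math.lgamma(1.0 + b)
                    - math.lgamma(1.0 + a + b))
```

For these sample inputs the hypergeometric argument is at most $\sim10^{-2}$ in magnitude, so the result should sit very close to the beta-function reference.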
\subsection{Threshold Expansions of the Integrands}
The threshold expansion of the integrands is quite simple. In many
cases, one can simply use the series representation of the
hypergeometric function
\begin{equation}
\hypgeo{a}{b}{c}{z} = \sum_{n=0}^\infty
\frac{(a)_n\,(b)_n}{n!\,(c)_n}\,z^n\,,
\end{equation}
where $(a)_n$ is the Pochhammer symbol
\begin{equation}
(a)_n = \frac{\Gamma(a+n)}{\Gamma(a)}\,.
\end{equation}
This works well for hypergeometric functions of argument $(\bar{x}\,\,y)$ and
$(\bar{x}\,\,\bar{y}\,)$. It also works for the hypergeometrics of argument $(-
x^{-1}\,\bar{x}\,^2\,y\,\bar{y}\,)$ if one then expands the resulting factors of
$x^{-m}$,
\begin{equation}
x^{-m} = (1-\bar{x}\,)^{-m} = \hypgeo{m}{a}{a}{\bar{x}\,} = \sum_{n=0}^\infty
\frac{(m)_n}{n!}\,\bar{x}\,^n\,.
\end{equation}
In the same way, factors of $(1-\bar{x}\,\,y)^{-m}$ are expanded as
\begin{equation}
\Lx1-\bar{x}\,\,y\right)^{-m} = \hypgeo{m}{a}{a}{\bar{x}\,\,y} = \sum_{n=0}^\infty
\frac{(m)_n}{n!}\,\left(\bar{x}\,\,y\right)^n\,.
\end{equation}
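As a quick sanity check (illustrative only; the helper below is not from the text), the Gauss series with $b=c$ collapses term by term to the binomial series, so it must reproduce the closed form $\Lx1-\bar{x}\,\,y\right)^{-m}$ used above:

```python
# The Gauss series with b = c collapses to the binomial series, so it must
# reproduce (1 - xbar*y)^{-m} exactly; the sample values below are arbitrary.

def hyp2f1_series(a, b, c, z, nterms=200):
    """Gauss series sum_n (a)_n (b)_n / (n! (c)_n) z^n, valid for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

xbar, y, m = 0.2, 0.3, 1.5
closed = (1.0 - xbar * y) ** (-m)
series = hyp2f1_series(m, 0.7, 0.7, xbar * y)   # (b)_n/(c)_n cancels for b = c
diff = abs(closed - series)
```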
The only terms that do not expand trivially in this way are the
hypergeometrics with arguments $\left(-x\,y/\bar{y}\,\right)$ and
$\left(-x\,\bar{y}\,/y\right)$. For these, one simply uses the Taylor series
expansion,
\begin{equation}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}} = \sum_{n=0}^\infty
\frac{\bar{x}\,^n}{n!}\ \left[\frac{d^n}{d\bar{x}\,^n}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{(\bar{x}\,-1)\frac{y}{\bar{y}\,}}\right]_{\bar{x}\,=0}\,,
\end{equation}
where
\begin{equation}
\frac{d}{d\bar{x}\,}\hypgeo{a}{b}{c}{{(\bar{x}\,-1)\frac{y}{\bar{y}\,}}} =
\frac{y}{\bar{y}\,}\frac{a\,b}{c}
\hypgeo{a+1}{b+1}{c+1}{{(\bar{x}\,-1)\frac{y}{\bar{y}\,}}}\,.
\end{equation}
Combining these equations and repeatedly applying hypergeometric
identities for contiguous functions (see, {\it e.g.}\
Ref.~\cite{Gradshteyn:Tables}), I find the threshold expansion to be
\begin{equation}
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-x\frac{y}{\bar{y}\,}} = \sum_{n=0}^\infty
\frac{\bar{x}\,^n\,(-{\varepsilon})_n}{n!}\left(
\hypgeo{1}{-{\varepsilon}}{1-{\varepsilon}}{-\frac{y}{\bar{y}\,}}
- \bar{y}\,\,\sum_{m=0}^{n-1}\ y^m\,\frac{m!}{(1-{\varepsilon})_m}\right)\,.
\end{equation}
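The expansion can be cross-checked numerically by truncating both sides; the sketch below (illustrative only, with arbitrary sample values of ${\varepsilon}$, $\bar{x}\,$ and $y$) takes the overall coefficient of the $n$-th term as $\bar{x}\,^n\,(-{\varepsilon})_n/n!$.

```python
# Truncated numerical check of the threshold expansion of
# 2F1(1,-eps;1-eps;-x y/ybar) in powers of xbar = 1 - x (illustration only).

def hyp2f1_series(a, b, c, z, nterms=400):
    """Gauss series for 2F1(a,b;c;z), valid for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

eps, xbar, y = 0.1, 0.2, 0.3
x, ybar = 1.0 - xbar, 1.0 - y

lhs = hyp2f1_series(1.0, -eps, 1.0 - eps, -x * y / ybar)
f0 = hyp2f1_series(1.0, -eps, 1.0 - eps, -y / ybar)

rhs = 0.0
poch, xbar_pow, fact = 1.0, 1.0, 1.0       # (-eps)_n, xbar^n, n!
for n in range(30):
    if n > 0:
        poch *= (n - 1) - eps
        xbar_pow *= xbar
        fact *= n
    inner = 0.0                            # sum_{m=0}^{n-1} y^m m!/(1-eps)_m
    y_pow, m_fact, m_poch = 1.0, 1.0, 1.0
    for m in range(n):
        if m > 0:
            y_pow *= y
            m_fact *= m
            m_poch *= m - eps
        inner += y_pow * m_fact / m_poch
    rhs += xbar_pow * poch / fact * (f0 - ybar * inner)

identity_gap = abs(lhs - rhs)
```

Since $\bar{x}\,<1$, the series converges rapidly and thirty terms suffice for machine-level agreement.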
Thus, when the threshold expansion is performed on all components of
the integrands, the result is a sum of powers of $\bar{x}\,$ multiplying
integrals in $y$ and $\bar{y}\,$ only. These integrals can all be reduced to
combinations of master integrals $M_0$, $M_2$, $M_{10}$ and $M_{11}$,
given in Eqs.~(\ref{eqn:MINLO}),\,(\ref{eqn:MINNLO}) and
(\ref{eqn:MINNNLO}).
\subsection{Results for the Partonic Cross Sections}
The results of these calculations are merely parts of a physical
result, namely the inclusive Higgs production cross section to
{N${}^3$LO}. By themselves, they have no direct physical interpretation.
Thus, while I have described how one would perform $\msbar$
renormalization on these terms, I present the results of the bare
calculation, and leave renormalization until such time as all pieces
of the {N${}^3$LO}\ cross section can be assembled.
The contributions can be broken into two distinct components, the soft
and the hard contributions. The soft contributions come entirely from
the leading behavior in $\bar{x}\,$, that is, from terms that go like
$\bar{x}\,^{-1+n\,{\varepsilon}}$, which can be expanded in distributions as described
in \Sec{sec:SRE}. The hard contribution comprises all other terms.
Only the purely gluon-initiated partonic cross section $g\,g\to H\,g$
has soft contributions.
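For orientation, the distributional expansion referred to here takes the conventional form $\bar{x}\,^{-1+a} = \delta(\bar{x}\,)/a + \sum_{k\ge0}\,(a^k/k!)\left[\ln^k\bar{x}\,/\bar{x}\,\right]_+$ with $a = n\,{\varepsilon}$ (the precise conventions are those of \Sec{sec:SRE}); the sketch below, which is illustrative and not from the text, checks this form against a test function.

```python
# Check of xbar^{-1+a} = delta(xbar)/a + sum_k a^k/k! [ln^k(xbar)/xbar]_+
# against the test function f(xbar) = exp(xbar), for which both sides
# reduce to convergent sums (illustration; a = n*eps = 0.2 is arbitrary).
import math

def direct_side(a, jmax=60):
    # int_0^1 xbar^{a-1} e^{xbar} dxbar = sum_j 1/(j! (a+j))
    return sum(1.0 / (math.factorial(j) * (a + j)) for j in range(jmax))

def distribution_side(a, kmax=12, jmax=60):
    total = 1.0 / a                      # delta(xbar)/a picks out f(0) = 1
    for k in range(kmax):
        # int_0^1 [ln^k(xbar)/xbar]_+ e^{xbar} dxbar
        #   = sum_{j>=1} (-1)^k k! / (j! j^{k+1})
        plus_int = sum((-1.0) ** k * math.factorial(k)
                       / (math.factorial(j) * j ** (k + 1))
                       for j in range(1, jmax))
        total += a ** k / math.factorial(k) * plus_int
    return total

direct_val = direct_side(0.2)
dist_val = distribution_side(0.2)
```

Resumming the tower of plus-distributions in $k$ reproduces the direct integral, which is the content of the expansion in distributions.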
\subsubsection{Contributions starting at {NLO}}
The contribution to the inclusive cross section from the square of
tree-level amplitudes starts at {NLO}\ and, through the renormalization
of $\alpha_s$, the effective operator ${\cal O}_1$ and the Wilson
coefficient $C_1$, applies to all higher orders. The results of this
calculation depend only on master integral $M_0$, which expands
readily to arbitrary order in ${\varepsilon}$.
\begin{equation}
\begin{split}
\sigma_{gg\to H\,g}^{1,B} =\ & \Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}\mumh^{{\varepsilon}}\,M_{0}(-1,-1)\left[
\frac{3\,\delta(\bar{x}\,)}{{\varepsilon}^2\,(1-{\varepsilon})}
- \frac{6\,\DSum{-2}\,x^{{\varepsilon}}}{{\varepsilon}\,(1-{\varepsilon})}\right.\\
+ &\left. \frac{x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}}{{\varepsilon}}\left(
\frac{12}{1-{\varepsilon}}
- \bar{x}\,\,\frac{18 - 54\,{\varepsilon} + 42\,{\varepsilon}^2}{(1-{\varepsilon})^2\,(1-2\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{12 - 36\,{\varepsilon} + 30\,{\varepsilon}^2}{(1-{\varepsilon})^2\,(1-2\,{\varepsilon})}
- \bar{x}\,^3\,\frac{36 - 27\,{\varepsilon}}{2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})}\right)\right]\\
\sigma_{q\overline{q}\to H\,g}^{1,B} =\ & \Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}\mumh^{{\varepsilon}}\,M_{0}(-1,-1)\,
x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\,\bar{x}\,^3\,\frac{32\,(1-{\varepsilon})^2}{9\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})}\\
\sigma_{gq\to H\,q}^{1,B} =\ & -\Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}\mumh^{{\varepsilon}}\,M_{0}(-1,-1)\,
x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\,\left(\frac{2}{3\,{\varepsilon}} +
\bar{x}\,\,\frac{4}{3\,(1-2\,{\varepsilon})} + \bar{x}\,^2\frac{2-{\varepsilon}}{3\,{\varepsilon}\,(1-2\,{\varepsilon})}\right)
\end{split}
\end{equation}
where, as in \eqn{eqn:xbdistrib}, ${\cal D}^{-2\,{\varepsilon}}(\bar{x}\,)$ represents
the tower of plus-distributions in $\bar{x}\,$ weighted by $(-2\,{\varepsilon})$.
Using the expansion of $M_{0}(-1,-1)$ given in \eqn{eqn:M0exp}, one
easily recovers the previously known results for these terms.
\subsubsection{Contributions starting at {NNLO}}
The contribution from the interference of tree-level and one-loop
amplitudes starts at {NNLO}\ and, through renormalization, contributes
to all higher orders. The results of this calculation depend on six
master integrals, $M_{0-5}$, which are all known in closed form (see
\eqn{eqn:MINNLO}). In addition to the phase space integrals, there
are products of $\Gamma$-functions arising from the loop
integration that can be cast into the same form as the master integral
$M_0(\alpha,\beta)$.
{\small
\begin{equation*}
\begin{split}
\sigma_{gg\to H\,g}^{2,B} =\ &
\Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}^2\mumh^{2\,{\varepsilon}}\left\{ \vphantom{\frac{9}{{\varepsilon}^2}}\right.\\
& \frac{\Gamma^5(1-{\varepsilon})\,\Gamma^3(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}
\,M_{0}(-2,-2)\left(
- \frac{9\,\delta(\bar{x}\,)}{8\,{\varepsilon}^4\,(1-{\varepsilon})}
+ \frac{9\,\DSum{-4}\,x^{2\,{\varepsilon}}}{2\,{\varepsilon}^3\,(1-{\varepsilon})}
+ x^{2\,{\varepsilon}}\,\bar{x}\,^{-4\,{\varepsilon}}\left(
- \frac{9}{{\varepsilon}^3\,(1-{\varepsilon})}\right.\right.\\
&\qquad \left.\left.
+ \bar{x}\,\,\frac{27 - 135\,{\varepsilon} + 135\,{\varepsilon}^2}{2\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
- \bar{x}\,^2\,\frac{18 - 90\,{\varepsilon} + 99\,{\varepsilon}^2}{2\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
+ \bar{x}\,^3\,\frac{54 - 189\,{\varepsilon} + 162\,{\varepsilon}^2}{4\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\Rx\\
+& \frac{\Gamma^4(1-{\varepsilon})\,\Gamma^2(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}\left[
M_{0}(-1,-1)\left(
- \delta(\bar{x}\,)\,\frac{9 - 27\,{\varepsilon} + 18\,{\varepsilon}^2 +
9\,{\varepsilon}^3}{{\varepsilon}^4\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})}
\right.\right.\\&\qquad
+ \DSum{-2}\,x^{{\varepsilon}}\,\frac{18 - 108\,{\varepsilon} + 234\,{\varepsilon}^2 - 198\,{\varepsilon}^3 + 27\,{\varepsilon}^4 + 36\,{\varepsilon}^5}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2}
- \DSum{-2}\,x^{2\,{\varepsilon}}\,\frac{9 - 54\,{\varepsilon} + 117\,{\varepsilon}^2 - 108\,{\varepsilon}^3 + 54\,{\varepsilon}^4}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2}
\\&\qquad
+ x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
- \frac{324 - 3834\,{\varepsilon} + 18810\,{\varepsilon}^2 - 50400\,{\varepsilon}^3 + 79650\,{\varepsilon}^4 - 72387\,{\varepsilon}^5 + 31050\,{\varepsilon}^6 - 432\,{\varepsilon}^7 - 2592\,{\varepsilon}^8}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\\&\qquad
+ \bar{x}\,\,\frac{324 - 3402\,{\varepsilon} + 14247\,{\varepsilon}^2 - 30618\,{\varepsilon}^3 + 32562\,{\varepsilon}^4 - 9243\,{\varepsilon}^5 - 10080\,{\varepsilon}^6 + 5616\,{\varepsilon}^7}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})}
\\&\qquad
- \bar{x}\,^2\,\frac{648 - 7128\,{\varepsilon} + 32166\,{\varepsilon}^2 - 77544\,{\varepsilon}^3 + 102726\,{\varepsilon}^4 - 60363\,{\varepsilon}^5 - 12411\,{\varepsilon}^6 + 33840\,{\varepsilon}^7 - 11664\,{\varepsilon}^8}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
+ \bar{x}\,^3\,\frac{108 - 405\,{\varepsilon} + 459\,{\varepsilon}^2 - 54\,{\varepsilon}^3 - 81\,{\varepsilon}^4}{2\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
\right)\\&\qquad
+ x^{2\,{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{9 - 54\,{\varepsilon} + 117\,{\varepsilon}^2 - 108\,{\varepsilon}^3 + 54\,{\varepsilon}^4}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2}
- \bar{x}\,\,\frac{81 - 540\,{\varepsilon} + 1323\,{\varepsilon}^2 - 1602\,{\varepsilon}^3 + 1026\,{\varepsilon}^4 - 270\,{\varepsilon}^5}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
\right.\\&\qquad\left.
+ \bar{x}\,^2\,\frac{27 - 153\,{\varepsilon} + 279\,{\varepsilon}^2 - 243\,{\varepsilon}^3 + 81\,{\varepsilon}^4}{{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
- \bar{x}\,^3\,\frac{162 - 1323\,{\varepsilon} + 3123\,{\varepsilon}^2 - 3276\,{\varepsilon}^3 + 1593\,{\varepsilon}^4 - 297\,{\varepsilon}^5}{4\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})^2}
\right)\\&\qquad\left.
+ x^{2\,{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\,N_{f}\,\left(
- \bar{x}\,\,\frac{3}{2\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{3}{2\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
\right.\right.\\&\qquad\left.\left.
- \bar{x}\,^3\,\frac{9 - 9\,{\varepsilon} + 3\,{\varepsilon}^2}{4\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})^2}
\right)\Rx
\\&
+ M_{1}(-1,-1)\left(
- \DSum{-2}\,x^{{\varepsilon}}\,\frac{9}{2\,{\varepsilon}^3\,(1-{\varepsilon})}
\right.\\&\qquad
+ x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{81 - 756\,{\varepsilon} + 3231\,{\varepsilon}^2 - 8568\,{\varepsilon}^3 + 13536\,{\varepsilon}^4 - 10800\,{\varepsilon}^5 + 3168\,{\varepsilon}^6}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\\&\qquad
- \bar{x}\,\,\frac{243 - 2268\,{\varepsilon} + 9747\,{\varepsilon}^2 - 24750\,{\varepsilon}^3 + 31464\,{\varepsilon}^4 - 9504\,{\varepsilon}^5 - 12384\,{\varepsilon}^6 + 6912\,{\varepsilon}^7}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad
+ \bar{x}\,^2\,\frac{162 - 972\,{\varepsilon} + 2223\,{\varepsilon}^2 - 3519\,{\varepsilon}^3 + 1710\,{\varepsilon}^4 + 8820\,{\varepsilon}^5 - 14616\,{\varepsilon}^6 + 5760\,{\varepsilon}^7}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.\left.
- \bar{x}\,^3\,\frac{162 - 729\,{\varepsilon} - 360\,{\varepsilon}^2 + 5445\,{\varepsilon}^3 - 9954\,{\varepsilon}^4 + 10332\,{\varepsilon}^5 - 7416\,{\varepsilon}^6 + 2304\,{\varepsilon}^7}{4\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\Rx
\\&
+\left( M_{2}(-1,-1)\,x^{2\,{\varepsilon}} - M_{3}(-1,-1)\,x^{{\varepsilon}}\right)
\frac{9\,\DSum{-2} - \bar{x}\,^{-2\,{\varepsilon}}\left(
18 - 27\,\bar{x}\, + 18\,\bar{x}\,^2 - 9\,\bar{x}\,^3\right)}{{\varepsilon}^3\,(1-{\varepsilon})}
\\&
+ M_{5}(-1)\left(
\DSum{-2}\,x^{{\varepsilon}}\,\frac{9}{{\varepsilon}^3\,(1-{\varepsilon})}
+ x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
- \frac{18}{{\varepsilon}^3\,(1-{\varepsilon})}
+ \bar{x}\,\,\frac{27 - 135\,{\varepsilon} + 135\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
\right.\right.\\&\qquad\left.\left.\left.
- \bar{x}\,^2\,\frac{18 - 90\,{\varepsilon} + 99\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
+ \bar{x}\,^3\,\frac{54 - 189\,{\varepsilon} + 162\,{\varepsilon}^2}{2\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\Rx
\right]
\end{split}
\end{equation*}
\begin{equation}
\begin{split}
\phantom{\sigma_{gg\to H\,g}^{2,B} =\ }
+&\frac{\Gamma^3(1-{\varepsilon})\,\Gamma(1+{\varepsilon})}{\Gamma(1-2\,{\varepsilon})}\left[
M_{0}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\frac{216 + 675\,{\varepsilon} - 11403\,{\varepsilon}^2 + 31536\,{\varepsilon}^3 - 31824\,{\varepsilon}^4 + 10368\,{\varepsilon}^5}{4\,{\varepsilon}^2\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\right.\\&\qquad
- \bar{x}\,\,\frac{162 - 378\,{\varepsilon} - 4347\,{\varepsilon}^2 + 17136\,{\varepsilon}^3 - 7641\,{\varepsilon}^4 - 41220\,{\varepsilon}^5 + 57888\,{\varepsilon}^6 - 20736\,{\varepsilon}^7}{2\,{\varepsilon}^2\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
+ \bar{x}\,^2\,\frac{2592 - 29106\,{\varepsilon} + 134829\,{\varepsilon}^2 - 344790\,{\varepsilon}^3 + 559035\,{\varepsilon}^4 - 635688\,{\varepsilon}^5 + 521784\,{\varepsilon}^6 - 271728\,{\varepsilon}^7 + 62208\,{\varepsilon}^8}{4\,{\varepsilon}^2\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(1-4\,{\varepsilon})\,(2-3\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)
\\&
+ M_{0}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\,N_{f}\,\left(
- \frac{3}{4\,{\varepsilon}\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})}
+ \bar{x}\,\,\frac{3}{2\,{\varepsilon}\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right.\\&\qquad\left.
- \bar{x}\,^2\,\frac{6 - 27\,{\varepsilon} + 36\,{\varepsilon}^2}{4\,{\varepsilon}\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(2-3\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right)
\\&
+ M_{1}(-1,-2) \left(
\frac{9\,\DSum{-3}\,x^{2\,{\varepsilon}}}{{\varepsilon}^3\,(1-{\varepsilon})}
+ x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
- \frac{162 - 1566\,{\varepsilon} + 6300\,{\varepsilon}^2 - 14328\,{\varepsilon}^3 + 19260\,{\varepsilon}^4 - 13680\,{\varepsilon}^5 + 3744\,{\varepsilon}^6}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\right.\\&\qquad
+ \bar{x}\,\,\frac{486 - 4698\,{\varepsilon} + 19197\,{\varepsilon}^2 - 43245\,{\varepsilon}^3 + 50796\,{\varepsilon}^4 - 19764\,{\varepsilon}^5 - 10224\,{\varepsilon}^6 + 6912\,{\varepsilon}^7}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad
- \bar{x}\,^2\,\frac{162 - 1296\,{\varepsilon} + 4302\,{\varepsilon}^2 - 8127\,{\varepsilon}^3 + 7659\,{\varepsilon}^4 + 720\,{\varepsilon}^5 - 6516\,{\varepsilon}^6 + 2880\,{\varepsilon}^7}{{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.\left.
+ \bar{x}\,^3\,\frac{162 - 1026\,{\varepsilon} + 2007\,{\varepsilon}^2 - 1017\,{\varepsilon}^3 - 1494\,{\varepsilon}^4 + 3492\,{\varepsilon}^5 - 3384\,{\varepsilon}^6 + 1152\,{\varepsilon}^7}{2\,{\varepsilon}^3\,(1-{\varepsilon})^3\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\Rx
\\&
+ M_{4}(-1,-1,-2)\,x^{2\,{\varepsilon}}
\frac{9\,\DSum{-3} - \bar{x}\,^{-3\,{\varepsilon}}\Lx18 - 27\,\bar{x}\, + 18\,\bar{x}\,^2-9\,\bar{x}\,^3\right)}{{\varepsilon}^3\,(1-{\varepsilon})}
\\&
+ M_{4}(1,-1,-2) \left(
- \frac{18\,\DSum{-3}\,x^{2\,{\varepsilon}}}{{\varepsilon}^3\,(1-{\varepsilon})}
+ x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\frac{36}{{\varepsilon}^3\,(1-{\varepsilon})}
- \bar{x}\,\,\frac{54 - 270\,{\varepsilon} + 270\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
\right.\right.\\&\qquad\left.\left.\left.\left.
+ \bar{x}\,^2\,\frac{36 - 180\,{\varepsilon} + 198\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-4\,{\varepsilon})}
- \bar{x}\,^3\,\frac{54 - 189\,{\varepsilon} + 162\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\Rx
\right]\right\}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\sigma_{q\overline{q}\to H\,g}^{2,B} =\ & \Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}^2\mumh^{2{\varepsilon}}\left\{
\vphantom{\frac{9}{{\varepsilon}^2}}\right.\\&
\frac{\Gamma^5(1-{\varepsilon})\,\Gamma^3(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}\left[
M_{0}(-2,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-4\,{\varepsilon}}\left(
\bar{x}\,^2\,\frac{8}{27\,(1-4\,{\varepsilon})}
+ \bar{x}\,^3\,\frac{16 - 40\,{\varepsilon} + 8\,{\varepsilon}^2 + 32\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\right]
\\+&
\frac{\Gamma^4(1-{\varepsilon})\,\Gamma^2(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}\left[
M_{0}(-1,-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{768 - 176\,{\varepsilon} - 2720\,{\varepsilon}^2 + 2176\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\right.\\&\qquad
- \bar{x}\,\,\frac{384 - 2128\,{\varepsilon} + 1872\,{\varepsilon}^2 + 2608\,{\varepsilon}^3 - 1856\,{\varepsilon}^4 - 2624\,{\varepsilon}^5 + 2176\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})}
\\&\qquad
+ \bar{x}\,^2\,\frac{1152 - 10512\,{\varepsilon} + 37648\,{\varepsilon}^2 - 77264\,{\varepsilon}^3 + 100352\,{\varepsilon}^4 - 77024\,{\varepsilon}^5 + 29312\,{\varepsilon}^6 - 4096\,{\varepsilon}^7}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
- \bar{x}\,^3\,\frac{32 - 128\,{\varepsilon} + 160\,{\varepsilon}^2 - 32\,{\varepsilon}^3 - 32\,{\varepsilon}^4}{3\,{\varepsilon}^2\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
\right)
\\&
+ M_{0}(-1,-1)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
- \bar{x}\,\,\frac{16}{3\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{16}{3\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})}
\right.\\&\qquad\left.
+ \bar{x}\,^3\,\frac{48 - 200\,{\varepsilon} + 160\,{\varepsilon}^2 - 224\,{\varepsilon}^3 + 272\,{\varepsilon}^4 - 160\,{\varepsilon}^5 + 32\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})^2}
\right)
\\&
+ M_{0}(-1,-1)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\,N_{f}\,\left(
- \bar{x}\,^3\,\frac{32 - 96\,{\varepsilon} + 96\,{\varepsilon}^2 - 32\,{\varepsilon}^3}{9\,{\varepsilon}\,(1-2\,{\varepsilon})^2\,(3-2\,{\varepsilon})^2}
\right)
\\&
+ M_{1}(-1,-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
- \frac{768 - 176\,{\varepsilon} - 2720\,{\varepsilon}^2 + 2176\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\\&\qquad
+ \bar{x}\,\,\frac{1152 - 6384\,{\varepsilon} + 5808\,{\varepsilon}^2 + 7984\,{\varepsilon}^3 - 10496\,{\varepsilon}^4 - 1984\,{\varepsilon}^5 + 4352\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad
- \bar{x}\,^2\,\frac{1152 - 8976\,{\varepsilon} + 24648\,{\varepsilon}^2 - 36472\,{\varepsilon}^3 + 31552\,{\varepsilon}^4 - 15520\,{\varepsilon}^5 + 4480\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
+ \bar{x}\,^3\,\frac{816 - 6904\,{\varepsilon} + 20800\,{\varepsilon}^2 - 30280\,{\varepsilon}^3 + 20832\,{\varepsilon}^4 - 4960\,{\varepsilon}^5 + 128\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)
\\&\left.
+ M_{5}(-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\bar{x}\,^2\,\frac{16}{27\,(1-4\,{\varepsilon})}
+ \bar{x}\,^3\,\frac{32 - 80\,{\varepsilon} + 16\,{\varepsilon}^2 + 64\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\right]
\\+&
\frac{\Gamma^3(1-{\varepsilon})\,\Gamma(1+{\varepsilon})}{\Gamma(1-2\,{\varepsilon})}\left[
M_{0}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
- \frac{768 - 176\,{\varepsilon} - 2720\,{\varepsilon}^2 + 2176\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\right.\\&\qquad
+ \bar{x}\,\,\frac{1152 - 9072\,{\varepsilon} + 20944\,{\varepsilon}^2 - 8208\,{\varepsilon}^3 - 19376\,{\varepsilon}^4 + 7744\,{\varepsilon}^5 + 19008\,{\varepsilon}^6 - 13056\,{\varepsilon}^7}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
- \bar{x}\,^2\,\frac{2304 - 26784\,{\varepsilon} + 128000\,{\varepsilon}^2 - 340896\,{\varepsilon}^3 + 568912\,{\varepsilon}^4 - 614432\,{\varepsilon}^5 + 416224\,{\varepsilon}^6 - 163520\,{\varepsilon}^7 + 31488\,{\varepsilon}^8}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(1-4\,{\varepsilon})\,(2-3\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)
\\&
+ M_{1}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\frac{768 - 176\,{\varepsilon} - 2720\,{\varepsilon}^2 + 2176\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right.\\&\qquad
- \bar{x}\,\,\frac{1152 - 6384\,{\varepsilon} + 5808\,{\varepsilon}^2 + 7984\,{\varepsilon}^3 - 10496\,{\varepsilon}^4 - 1984\,{\varepsilon}^5 + 4352\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad
+ \bar{x}\,^2\,\frac{1152 - 8976\,{\varepsilon} + 24720\,{\varepsilon}^2 - 36832\,{\varepsilon}^3 + 32192\,{\varepsilon}^4 - 16000\,{\varepsilon}^5 + 4608\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\\&\qquad\left.
- \bar{x}\,^3\,\frac{768 - 6608\,{\varepsilon} + 20144\,{\varepsilon}^2 - 29744\,{\varepsilon}^3 + 20928\,{\varepsilon}^4 - 5312\,{\varepsilon}^5 + 256\,{\varepsilon}^6}{27\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)
\\&\left.\left.
- M_{4}(1,-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\bar{x}\,^2\,\frac{32}{27\,(1-4\,{\varepsilon})}
+ \bar{x}\,^3\,\frac{64 - 160\,{\varepsilon} + 32\,{\varepsilon}^2 + 128\,{\varepsilon}^3}{27\,{\varepsilon}^2\,(1-4\,{\varepsilon})\,(3-4\,{\varepsilon})}
\right)\right]\right\}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\sigma_{gq\to H\,q}^{2,B} =\ & \Cfact\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\Gamma(1-\ep)}\Rx}^2\mumh^{2\,{\varepsilon}}\left\{
\vphantom{\frac{9}{{\varepsilon}^2}}\right.\\&
\frac{\Gamma^5(1-{\varepsilon})\,\Gamma^3(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}\left[
M_{0}(-2,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-4\,{\varepsilon}}\left(
\frac{1}{2\,{\varepsilon}^3}
+ \bar{x}\,\,\frac{1}{{\varepsilon}^2\,(1-4\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{1-2\,{\varepsilon} - {\varepsilon}^2}{2\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})}
\right)\right]
\\+&
\frac{\Gamma^4(1-{\varepsilon})\,\Gamma^2(1+{\varepsilon})}{\Gamma^2(1-2\,{\varepsilon})\,\Gamma(1+2\,{\varepsilon})}\left[
M_{0}(-1,-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{8 - 85\,{\varepsilon} + 235\,{\varepsilon}^2 + 55\,{\varepsilon}^3 - 995\,{\varepsilon}^4 + 974\,{\varepsilon}^5 + 24\,{\varepsilon}^6}{9\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})}
\right.\right.\\&\left.\qquad
- \bar{x}\,\,\frac{18 - 147\,{\varepsilon} + 441\,{\varepsilon}^2 - 322\,{\varepsilon}^3 - 214\,{\varepsilon}^4 + 8\,{\varepsilon}^5}{9\,{\varepsilon}^2\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})^2\,(1-4\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{8 - 28\,{\varepsilon} + 20\,{\varepsilon}^2 - 5\,{\varepsilon}^3-4\,{\varepsilon}^4}{9\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2}
\right)
\\&
+ M_{0}(-1,-1)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{11 - 55\,{\varepsilon} + 86\,{\varepsilon}^2 - 36\,{\varepsilon}^3 - 24\,{\varepsilon}^4}{18\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2}
+ \bar{x}\,\,\frac{11 - 24\,{\varepsilon} + 26\,{\varepsilon}^2 + 14\,{\varepsilon}^3}{9\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2}
\right.\\&\left.\qquad
+ \bar{x}\,^2\,\frac{11 - 28\,{\varepsilon} + 34\,{\varepsilon}^2 - 16\,{\varepsilon}^3 - 9\,{\varepsilon}^4}{18\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})^2}
\right)
\\&
+ M_{1}(-1,-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
- \frac{9 - 82\,{\varepsilon} + 45\,{\varepsilon}^2 + 630\,{\varepsilon}^3 - 914\,{\varepsilon}^4 - 120\,{\varepsilon}^5}{18\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
+ \bar{x}\,\,\frac{25 - 159\,{\varepsilon} + 286\,{\varepsilon}^2 + 88\,{\varepsilon}^3}{9\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
\right.\\&\left.\qquad
- \bar{x}\,^2\,\frac{9 - 12\,{\varepsilon} - 101\,{\varepsilon}^2 + 132\,{\varepsilon}^3 + 56\,{\varepsilon}^4}{18\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
\right)
\\&
- \left( M_{2}(-1,-1)\,x^{2\,{\varepsilon}} - M_{3}(-1,-1)\,x^{{\varepsilon}}\right)\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{1-{\varepsilon} - {\varepsilon}^2}{9\,{\varepsilon}^3\,(1-{\varepsilon})}
+ \bar{x}\,\,\frac{2 + 2\,{\varepsilon}}{9\,{\varepsilon}^2\,(1-{\varepsilon})}
+ \bar{x}\,^2\,\frac{1-{\varepsilon} - {\varepsilon}^2}{9\,{\varepsilon}^3\,(1-{\varepsilon})}
\right)
\\&\left.
+ M_{5}(-1)\,x^{{\varepsilon}}\,\bar{x}\,^{-2\,{\varepsilon}}\left(
\frac{1}{{\varepsilon}^3}
+ \bar{x}\,\,\frac{2}{{\varepsilon}^2\,(1-4\,{\varepsilon})}
+ \bar{x}\,^2\,\frac{1-2\,{\varepsilon}-{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})}
\right)\right]
\\ +& \frac{\Gamma^3(1-{\varepsilon})\,\Gamma(1+{\varepsilon})}{\Gamma(1-2\,{\varepsilon})}\left[
M_{0}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\frac{30 - 293\,{\varepsilon} + 1610\,{\varepsilon}^2 - 4587\,{\varepsilon}^3 + 5746\,{\varepsilon}^4 - 1794\,{\varepsilon}^5 - 280\,{\varepsilon}^6}{18\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right.\right.\\&
+ \bar{x}\,\,\frac{132 - 1378\,{\varepsilon} + 5204\,{\varepsilon}^2 - 7950\,{\varepsilon}^3 + 3782\,{\varepsilon}^4 + 530\,{\varepsilon}^5 - 536\,{\varepsilon}^6}{9\,{\varepsilon}^2\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(1-4\,{\varepsilon})\,(3-2\,{\varepsilon})}
\\&\qquad\left.
+ \bar{x}\,^2\,\frac{60 - 556\,{\varepsilon} + 1367\,{\varepsilon}^2 - 1085\,{\varepsilon}^3 - 34\,{\varepsilon}^4 + 202\,{\varepsilon}^5}{18\,{\varepsilon}^3\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(2-3\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right)
\\&
+ M_{0}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\,N_{f}\,\left(
\frac{(1-{\varepsilon})}{3\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(3-2\,{\varepsilon})}
+ \bar{x}\,\,\frac{2 - 2\,{\varepsilon}}{3\,{\varepsilon}\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right.\\&\left.\qquad
+ \bar{x}\,^2\,\frac{2 - 5\,{\varepsilon} + 4\,{\varepsilon}^2 - {\varepsilon}^3}{3\,{\varepsilon}^2\,(1-2\,{\varepsilon})\,(1-3\,{\varepsilon})\,(2-3\,{\varepsilon})\,(3-2\,{\varepsilon})}
\right)
\\&
+ M_{1}(-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
\frac{9 - 77\,{\varepsilon} + 117\,{\varepsilon}^2 + 216\,{\varepsilon}^3 - 421\,{\varepsilon}^4 - 60\,{\varepsilon}^5}{9\,{\varepsilon}^3\,(1-{\varepsilon})^2\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
- \bar{x}\,\,\frac{16 - 132\,{\varepsilon} + 268\,{\varepsilon}^2 + 88\,{\varepsilon}^3}{9\,{\varepsilon}^2\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
\right.\\&\left.\qquad
+ \bar{x}\,^2\,\frac{9 - 24\,{\varepsilon} - 37\,{\varepsilon}^2 + 75\,{\varepsilon}^3 + 28\,{\varepsilon}^4}{9\,{\varepsilon}^3\,(1-{\varepsilon})\,(1-2\,{\varepsilon})\,(1-4\,{\varepsilon})}
\right)
\\&
+ M_{4}(-1,-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
- \frac{1-{\varepsilon} - {\varepsilon}^2}{9\,{\varepsilon}^3\,(1-{\varepsilon})}
- \bar{x}\,\,\frac{2 + 2\,{\varepsilon}}{9\,{\varepsilon}^2\,(1-{\varepsilon})}
- \bar{x}\,^2\,\frac{1-{\varepsilon} - {\varepsilon}^2}{9\,{\varepsilon}^3\,(1-{\varepsilon})}
\right)
\\&\left.\left.
+ M_{4}(1,-1,-2)\,x^{2\,{\varepsilon}}\,\bar{x}\,^{-3\,{\varepsilon}}\left(
- \frac{2}{{\varepsilon}^3}
- \bar{x}\,\,\frac{4}{{\varepsilon}^2\,(1-4\,{\varepsilon})}
- \bar{x}\,^2\,\frac{2 - 4\,{\varepsilon} - 2\,{\varepsilon}^2}{{\varepsilon}^3\,(1-{\varepsilon})\,(1-4\,{\varepsilon})}
\right)\right]\right\}
\end{split}
\end{equation}
}
\subsubsection{Contributions starting at {N${}^3$LO}}
The contributions from the square of the one-loop amplitudes start at
{N${}^3$LO}. The full result is too lengthy to report here, but is given,
along with assorted moments in $\bar{x}\,\!$, in the supplemental material
attached to this article. I present below only the soft contributions
(that is, the $\delta$ function and plus-distribution terms).
{\small
\begin{equation}
\begin{split}
\sigma_{gg\to H\,g}^{3,B,{\rm soft}} =\ &
\Cfact\,C_A^3\,{\Lx\frac{g^2(4\,\pi)^{\ep}}{4\,\pi^2\,\exp(\ep\,\gamma_E)}\Rx}^3\mumh^{3\,{\varepsilon}}\left\{ \vphantom{\frac{9}{{\varepsilon}^2}}
\frac{1}{{\varepsilon}^6}\,
\frac{23}{72}\,\delta(\bar{x}\,\!)
+ \frac{1}{{\varepsilon}^5}\,\left[
\frac{23}{72}\,\delta(\bar{x}\,\!)
- \frac{19}{24}{\cal D}_{0}(\bar{x}\,\!)\,
\right]
\right.\\+&
\frac{1}{{\varepsilon}^4}\,\left[
\delta(\bar{x}\,\!)\,\left(
\frac{23}{72}
- \frac{247}{144}\,\zeta(2)
\right)
- \frac{19}{24}\,{\cal D}_{0}(\bar{x}\,\!)
+ \frac{9}{4}\,{\cal D}_{1}(\bar{x}\,\!)
\right]
+ \frac{1}{{\varepsilon}^3}\,\left[
\delta(\bar{x}\,\!)\,\left(
\frac{127}{144}
- \frac{247}{144}\,\zeta(2)
- \frac{125}{36}\,\zeta(3)
\right)
\right.\\&\left.
+ {\cal D}_{0}(\bar{x}\,\!)\,\left(
- \frac{19}{24}
+ \frac{275}{48}\,\zeta(2)
\right)
+ \frac{9}{4}\,{\cal D}_{1}(\bar{x}\,\!)
- \frac{15}{4}\,{\cal D}_{2}(\bar{x}\,\!)
\right]
\\+&
\frac{1}{{\varepsilon}^2}\,\left[
\delta(\bar{x}\,\!)\,\left(
\frac{185}{72}
- \frac{247}{144}\,\zeta(2)
- \frac{125}{36}\,\zeta(3)
+ \frac{3029}{384}\,\zeta(4)
\right)
+ {\cal D}_{0}(\bar{x}\,\!)\,\left(
- \frac{49}{24}
+ \frac{275}{48}\,\zeta(2)
+ \frac{269}{24}\,\zeta(3)
\right)
\right.\\&\left.
+ {\cal D}_{1}(\bar{x}\,\!)\,\left(
\frac{9}{4}
- \frac{169}{8}\,\zeta(2)
\right)
- \frac{15}{4}\,{\cal D}_{2}(\bar{x}\,\!)
+ \frac{29}{6}\,{\cal D}_{3}(\bar{x}\,\!)
\right]
\\+&
\frac{1}{{\varepsilon}}\,\left[
\delta(\bar{x}\,\!)\,\left(
\frac{937}{144}
- \frac{1151}{288}\,\zeta(2)
- \frac{125}{36}\,\zeta(3)
+ \frac{3029}{384}\,\zeta(4)
- \frac{553}{20}\,\zeta(5)
+ \frac{2125}{72}\,\zeta(2)\,\zeta(3)
\right)
\right.\\&
+ {\cal D}_{0}(\bar{x}\,\!)\,\left(
- \frac{139}{24}
+ \frac{275}{48}\,\zeta(2)
+ \frac{269}{24}\,\zeta(3)
- \frac{3841}{128}\,\zeta(4)
\right)
+ {\cal D}_{1}(\bar{x}\,\!)\,\left(
\frac{21}{4}
- \frac{169}{8}\,\zeta(2)
- \frac{171}{4}\,\zeta(3)
\right)
\\&\left.
+ {\cal D}_{2}(\bar{x}\,\!)\,\left(
- \frac{15}{4}
+ \frac{335}{8}\,\zeta(2)
\right)
+ \frac{29}{6}\,{\cal D}_{3}(\bar{x}\,\!)
- \frac{21}{4}\,{\cal D}_{4}(\bar{x}\,\!)
\right]
\\+&
\delta(\bar{x}\,\!)\,\left(
\frac{547}{36}
- \frac{1561}{144}\,\zeta(2)
- \frac{1193}{144}\,\zeta(3)
+ \frac{3029}{384}\,\zeta(4)
- \frac{553}{20}\,\zeta(5)
+ \frac{2125}{72}\,\zeta(2)\,\zeta(3)
\right.\\&\left.
- \frac{84281}{3072}\,\zeta(6)
+ \frac{4607}{144}\,\zeta(3)^2
\right)
+ {\cal D}_{0}(\bar{x}\,\!)\,\left(
- \frac{349}{24}
+ \frac{593}{48}\,\zeta(2)
+ \frac{269}{24}\,\zeta(3)
- \frac{3841}{128}\,\zeta(4)
\right.\\&\left.
+ \frac{4869}{40}\,\zeta(5)
- \frac{5581}{48}\,\zeta(2)\,\zeta(3)
\right)
+ {\cal D}_{1}(\bar{x}\,\!)\,\left(
\frac{57}{4}
- \frac{169}{8}\,\zeta(2)
- \frac{171}{4}\,\zeta(3)
+ \frac{6777}{64}\,\zeta(4)
\right)
\\&
+ {\cal D}_{2}(\bar{x}\,\!)\,\left(
- \frac{31}{4}
+ \frac{335}{8}\,\zeta(2)
+ \frac{373}{4}\,\zeta(3)
\right)
+ {\cal D}_{3}(\bar{x}\,\!)\,\left(
\frac{29}{6}
- \frac{701}{12}\,\zeta(2)
\right)
- \frac{21}{4}\,{\cal D}_{4}(\bar{x}\,\!)
+ \frac{149}{30}\,{\cal D}_{5}(\bar{x}\,\!)
\\&\left.
+ {\cal O}({\varepsilon})\vphantom{\frac{225}{{\varepsilon}^6}}\right\}
\\ \sigma_{q\overline{q}\to H\,g}^{3,B,{\rm soft}} =\ & 0
\end{split}
\end{equation}
\section{Conclusions and Outlook}
\label{sec:conclude}
I have computed the contributions of one-loop single-real-emission
amplitudes to inclusive Higgs boson production at {N${}^3$LO}. Though a
substantial calculation, this is but a portion of the full {N${}^3$LO}\
result. I have computed this contribution to the cross section as an
extended threshold expansion, obtaining enough terms to invert the
series and determine the closed functional form through order ${\varepsilon}^1$.
I have also computed the contributions of these same amplitudes to the
{NLO}\ and {NNLO}\ inclusive cross sections in closed form, in terms of
$\Gamma$-functions and the hypergeometric functions ${}_{2}F_{1}$ and
${}_{3}F_{2}$. These functions can be readily expanded to all orders
in ${\varepsilon}$.
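As a quick numerical cross-check of this statement (an illustration added here, not part of the calculation itself): the prefactor $\Gamma^3(1-{\varepsilon})\,\Gamma(1+{\varepsilon})/\Gamma(1-2\,{\varepsilon})$ appearing above expands as $1 - 2\,\zeta_3\,{\varepsilon}^3 + {\cal O}({\varepsilon}^4)$, and a finite-difference estimate of the ${\varepsilon}^3$ Taylor coefficient reproduces the $-2\,\zeta_3$:

```python
import math

def prefactor(e):
    # Gamma^3(1-e) * Gamma(1+e) / Gamma(1-2e)
    return math.gamma(1.0 - e) ** 3 * math.gamma(1.0 + e) / math.gamma(1.0 - 2.0 * e)

# Central stencil for the third derivative:
# f'''(0) ~ [f(2h) - 2 f(h) + 2 f(-h) - f(-2h)] / (2 h^3), error O(h^2)
h = 0.01
f3 = (prefactor(2 * h) - 2 * prefactor(h)
      + 2 * prefactor(-h) - prefactor(-2 * h)) / (2 * h ** 3)
c3 = f3 / 6.0   # Taylor coefficient of eps^3

print(c3)       # ~ -2.40, vs. the analytic value -2*zeta(3) ~ -2.4041
```

The lower-order coefficients vanish, consistent with the exponentiated form $\exp(-2\,\zeta_3\,{\varepsilon}^3 + \dots)$ of this ratio of $\Gamma$-functions.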
The methods used in this calculation can be immediately applied to
other single-inclusive production processes like Drell-Yan or
pseudoscalar production. In the current calculation, I have only
considered single-real emission contributions. However, the basic
method was already used more than ten years ago to compute double-real
emission contributions at
{NNLO}~\cite{Harlander:2002wh,Harlander:2002vv,Harlander:2003ai}. The
phase space for triple-real emission is far more complicated than that
for single- or double-real emission and it may be that the methods of
Ref.~\cite{Anastasiou:2013srw}, working on the other side of the
Cutkosky relations and threshold-expanding cut loop integrals rather
than phase space integrals, are more effective for that process.
\vskip20pt
\paragraph*{Acknowledgments:}
I would like to thank Lance Dixon for a stimulating discussion. I
would also like to thank the authors of Ref.~\cite{Anastasiou:2013mca}
for their assistance in comparing results. This research was
supported by the U.S.~Department of Energy under Contract
No.~DE-AC02-98CH10886.
\section*{Acknowledgements}
We thank Edward Y. X. Ong, Eric M. Schwen and Stephen J. Thornton for valuable discussions, and Anton Paar for use of the MCR 702 rheometer through their VIP academic research program. For this work, IC, DL, IG, and MR were generously supported by NSF CBET award numbers: 2010118, 1804963, 1509308, and an NSF DMR award number: 1507607. BC was supported by NSF CBET award number 1916877 and NSF DMR award number 2026834. JPS was supported by NSF CBET-2010118 and DMR-1719490. EK was supported by the NSF Award PHY-1554887, the University of Pennsylvania Materials Research Science and Engineering Center (MRSEC) through Award DMR-1720530.
\section*{Author Contributions}
MR is the corresponding author. BC and IC contributed equally to this work. All the authors were involved in discussions that shaped the idea and experiments in this manuscript. MR ran the experiments. MR, BC, IC, and JPS analyzed the experimental data guided by theoretical calculations performed by BC. MR, BC, JPS and, IC wrote the manuscript with all authors contributing.
\section{\label{Methods}Sample Preparation and Experimental Protocol}
The samples were prepared by weighing out the solutes - cornstarch (Argo) and silica (2 $\mu$m charge stabilized spheres from Angstrom Sphere) and the solvent - glycerol (Sigma-Aldrich). The volume fraction was calculated as
\begin{equation}
\phi = \frac{\rho_g \phi_M}{\rho_g \phi_M + (1- \phi_M)\rho_s}
\end{equation}
where $\phi$ is the volume fraction, $\phi_M$ is the mass fraction of solute, $\rho_g$ is the density of glycerol, and $\rho_s$ is the density of the solute. Here we use $\rho_g = 1.26$ g/cm$^3$, $\rho_s = 1.62$ g/cm$^3$ for cornstarch, and $\rho_s = 2.2$ g/cm$^3$ for silica. Both types of suspensions were initially mixed manually with a spatula for 10 minutes. The cornstarch suspensions were used immediately after preparation, and the silica suspensions were sonicated for 60 minutes prior to use.
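The mass-to-volume-fraction conversion can be sanity-checked in a couple of lines; this sketch uses the densities quoted above:

```python
def volume_fraction(phi_M, rho_s, rho_g=1.26):
    """Convert solute mass fraction phi_M to volume fraction phi,
    given solute density rho_s and solvent (glycerol) density rho_g in g/cm^3."""
    return rho_g * phi_M / (rho_g * phi_M + (1.0 - phi_M) * rho_s)

# densities from the text: cornstarch 1.62 g/cm^3, silica 2.2 g/cm^3
print(volume_fraction(0.5, 1.62))   # cornstarch at 50% by mass -> phi = 0.4375
print(volume_fraction(0.5, 2.2))    # silica at 50% by mass (denser, so lower phi)
```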
An Anton Paar MCR702 rheometer with a 50 mm sanded top and bottom parallel plate geometry was used to measure the viscosity. The sample was loaded and the top plate lowered onto the sample to a set gap of 1 mm. We ensured that there was no normal stress measured prior to starting the experiments. The sample was presheared at a constant stress of 1 Pa for five minutes. The stress was then ramped up, and the viscosity at steady state was measured at each stress. The viscosity measurement was then repeated as the stress was ramped down. We use the ramp-down measurements for the scaling collapse in this paper, as they are less noisy.
\section{\label{sec:Phi0Calc} Estimation of $\phi_0$}
We use the low stress viscosity across all volume fractions to estimate the value of $\phi_0$. It has been shown previously that the low stress viscosity prior to shear thickening diverges at $\phi_0$ as $\eta \sim 1/(\phi_0 - \phi)^2$. Thus, we plot $1/\sqrt{\eta_{min}}$ as a function of $\phi$, and use the x intercept of the graph as the estimate of $\phi_0$ (Fig.~\ref{fig:Phi0Calc}). As seen in Fig.~\ref{fig:ScalingPhi0}\textbf{a}, this estimate of $\phi_0$ works well to collapse the data at small $x$.
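The intercept extraction is an ordinary least-squares fit. Below is a minimal sketch, run here on synthetic data generated from the divergence law $\eta_{min} = A/(\phi_0 - \phi)^2$ with an assumed $\phi_0 = 0.65$ (the actual estimate uses the measured viscosities, not these values):

```python
def phi0_from_intercept(phis, eta_mins):
    """Least-squares line through (phi, 1/sqrt(eta_min)); returns the x intercept,
    i.e. the estimated frictionless jamming point phi_0."""
    ys = [em ** -0.5 for em in eta_mins]
    n = len(phis)
    mx, my = sum(phis) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(phis, ys))
             / sum((x - mx) ** 2 for x in phis))
    intercept = my - slope * mx
    return -intercept / slope      # y = 0 at phi = phi_0

# synthetic data obeying eta_min = A / (phi_0 - phi)^2 with phi_0 = 0.65 (assumed)
A, phi0_true = 2.0, 0.65
phis = [0.40, 0.45, 0.50, 0.55]
etas = [A / (phi0_true - p) ** 2 for p in phis]
print(phi0_from_intercept(phis, etas))   # recovers 0.65
```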
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{Phi0Calc.png}
\caption{\textbf{Estimation of $\phi_0$.} Plot of $\eta_{min}^{-1/2}$ as a function of the volume fraction $\phi$, for \textbf{a} cornstarch and \textbf{b} silica, where $\eta_{min}$ is the viscosity prior to shear thickening. The red line is a linear fit to the data and the x intercept is the estimated value of $\phi_0$.}
\label{fig:Phi0Calc}
\end{center}
\end{figure}
\section{\label{x_cx} Determining the scaling exponent}
From the scaling collapse of the data shown in Fig.~2a, the scaling function $\mathcal{F}$ appears to diverge at $x_c$: $\mathcal{F} \sim 1/(x_c - x)^\delta$. To determine the scaling exponent $\delta$, we plot the scaling function $\mathcal{F} = (\eta - \beta_\eta(\phi))(\phi^*_0 - \phi)^2$ as a function of $x_c - x$ (Fig.~\ref{fig:x_c_xPhi0Str}). We find that the divergence is indeed a power law, with $\delta \sim 1.5$. This value is distinct from $\delta = 2$, predicted by the reformulated Wyart and Cates model (Eq.~5).
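Reading the exponent off a log--log plot amounts to measuring a slope; a toy check on synthetic data with a built-in exponent of $-1.5$ (illustrative values only, not the measurements):

```python
import math

def loglog_slope(xs, ys):
    """Two-point log-log slope between the first and last samples."""
    return ((math.log(ys[-1]) - math.log(ys[0]))
            / (math.log(xs[-1]) - math.log(xs[0])))

x_c = 14.5
seps = [x_c - x for x in (4.5, 9.5, 12.5, 14.0)]   # values of x_c - x
F = [3.0 * s ** -1.5 for s in seps]                # F ~ (x_c - x)^(-1.5)
print(loglog_slope(seps, F))    # -1.5, i.e. delta = 1.5
```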
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{xc_xPhi0Str.png}
\caption{\textbf{Power law scaling of cornstarch and silica viscosity data.} The scaling function $\mathcal{F} = (\eta - \beta_\eta(\phi))(\phi^*_0 - \phi)^2$ as a function of the scaling variable $x = e^{-\sigma^{*}/\sigma}C(\phi)/(\phi^*_0 - \phi)$ for the cornstarch (squares) and silica (diamond) suspensions data. We find that the data shows power law scaling with an exponent of $\sim -1.5$. The black solid line is a line with power law -1.5.}
\label{fig:x_c_xPhi0Str}
\end{center}
\end{figure}
\section{\label{Bphi} $\beta_\eta(\phi)$}
As discussed in the main text, to best fit the data at small stresses, we need to subtract a volume fraction dependent parameter, $\beta_\eta(\phi)$, from the viscosity. This parameter may indicate that a fraction of the viscosity is not governed by the crossover scaling. The values of $\beta_\eta(\phi)$ used in the scaling collapse for cornstarch and silica are shown in Fig.~\ref{fig:BetaEta}\textbf{a}. Importantly, even though $\beta_\eta(\phi)$ increases with the volume fraction, subtracting this parameter from the viscosity does not remove the divergence at small $x$. Rather, it shifts the divergence at small stresses from $\phi_0$ to $\phi^*_0$ (Fig~\ref{fig:BetaEta}\textbf{b}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{BetaEta.png}
\caption{\label{fig:BetaEta} \textbf{Low volume fraction subtraction parameter $\beta_\eta(\phi)$}. \textbf{a} $\beta_\eta(\phi)$ as a function of the volume fraction for cornstarch (square) and silica (diamond) suspensions. $\beta_\eta(\phi)$ largely increases with the volume fraction, except at the highest volume fraction, where it is set to zero. \textbf{b} $(\eta_{min} - \beta_\eta(\phi))^{-1/2}$ as a function of volume fraction, where $\eta_{min}$ is the low stress viscosity. The x intercept of this graph is the divergence of $\eta_{min} - \beta_\eta(\phi)$ at low stresses. Black lines are straight line guides to the eye. }
\end{center}
\end{figure}
\section{\label{SJLime} Shear Jamming Line Calculation}
The shear jamming line is given by the divergence of the scaling function $\mathcal{F}$ at $x = x_c$. Using
\begin{equation}
x = \frac{e^{-\sigma^*/\sigma} C(\phi)}{(\phi^*_0 - \phi)} = x_c,
\end{equation}
and a linear fit to the high volume fraction $C(\phi) = c_1 \phi + c_2$, we can solve for the shear jamming line:
\begin{equation}
\phi = \frac{x_c \phi^*_0 - c_2 e^{-\sigma^*/\sigma}}{c_1 e^{-\sigma^*/\sigma} + x_c}
\end{equation}
For both the cornstarch and silica data, $x_c = 14.5$. The values of $c_1$ and $c_2$ are suspension dependent and the exact values used to generate the phase diagram are listed in Table~\ref{table:ParametersPhi0Str}.
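Given the linear fit $C(\phi) = c_1\,\phi + c_2$, the shear jamming line is explicit; a short sketch evaluating it with the cornstarch parameters from Table~\ref{table:ParametersPhi0Str}:

```python
import math

# cornstarch parameters: C(phi) = c1*phi + c2, sigma*, phi_0*, x_c
c1, c2 = -13.11, 7.79
phi0_star, sigma_star, x_c = 0.59, 3.9, 14.5

def phi_jam(sigma):
    """Shear-jamming volume fraction at stress sigma (Pa), from x = x_c."""
    f = math.exp(-sigma_star / sigma)
    return (x_c * phi0_star - c2 * f) / (c1 * f + x_c)

print(phi_jam(1e-2))   # low-stress limit: returns phi_0* = 0.59
print(phi_jam(1e6))    # high-stress limit: ~0.550
```

In the low-stress limit $f \to 0$ the line returns $\phi^*_0$, while at large stress it approaches $(x_c\,\phi^*_0 - c_2)/(c_1 + x_c) \approx 0.55$, the minimum volume fraction for shear jamming.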
\section{\label{DSTLime} DST Line Calculation}
The DST line is calculated as
\begin{equation}
\frac{d \log \eta}{d \log \sigma} = 1
\end{equation}
or,
\begin{equation}
\frac{\sigma}{\eta}\frac{d\eta}{d\sigma} = 1
\end{equation}
To calculate the DST line, we thus need the full functional form of the scaling function. From Fig. 2\textbf{b} in the main text, we know that
\begin{equation}
(\eta - \beta_\eta(\phi))(C(\phi)f)^2 = \mathcal{H}(1/x - 1/x_c)
\end{equation}
or
\begin{equation}
\label{eq:CardyScalingManipulated}
\eta = \frac{1}{(C(\phi)f)^{2}} \mathcal{H}(1/x - 1/x_c) + \beta_\eta(\phi)
\end{equation}
where $x = fC(\phi)/(\phi^*_0 - \phi)$ and $f = e^{-\sigma^*/\sigma}$. Differentiating with respect to $\sigma$,
\begin{equation}
\frac{\sigma}{\eta}\frac{d\eta}{d\sigma} = \frac{\sigma}{\eta}\frac{1}{(C(\phi))^2}\left( \frac{-2}{f^3}\mathcal{H}\frac{df}{d\sigma} + \frac{1}{f^2}\frac{d\mathcal{H}}{d\sigma}\right) = 1
\end{equation}
\begin{equation}
\frac{\sigma}{\eta}\frac{1}{(C(\phi))^2}\left( \frac{-2}{f^3}\mathcal{H}e^{-\sigma^*/\sigma}\frac{\sigma^*}{\sigma^2} + \frac{1}{f^2}\frac{d\mathcal{H}\left(\frac{1}{x} - \frac{1}{x_c} \right)}{d\left(\frac{1}{x}- \frac{1}{x_c}\right)}\frac{d \left(\frac{1}{x} - \frac{1}{x_c}\right)}{d \sigma}\right) = 1
\end{equation}
\begin{equation}
\frac{\sigma}{\eta}\frac{1}{(C(\phi))^2}\left( \frac{-2}{f^2}\mathcal{H}\frac{\sigma^*}{\sigma^2} + \frac{1}{f^2}\frac{d\mathcal{H}\left(\frac{1}{x} - \frac{1}{x_c} \right)}{d\left(\frac{1}{x}- \frac{1}{x_c}\right)}\frac{-1}{x^2}\frac{d x}{d \sigma}\right) = 1
\end{equation}
\begin{equation}
\frac{\sigma}{\eta}\frac{1}{(C(\phi))^2}\left( \frac{-2}{f^2}\mathcal{H}\frac{\sigma^*}{\sigma^2} + \frac{1}{f^2}\frac{d\mathcal{H}\left(t\right)}{dt}\left(\frac{-1}{x^2}\right)\frac{e^{-\sigma^*/\sigma}C(\phi)}{\phi^*_0 - \phi}\left(\frac{\sigma^*}{\sigma^2}\right)\right) = 1
\end{equation}
where $t = 1/x - 1/x_c$. Further simplifying,
\begin{equation}
\frac{1}{\eta}\frac{1}{(C(\phi))^2}\frac{\sigma^*}{\sigma}\frac{1}{f^2}\left( -2\mathcal{H} + \frac{d\mathcal{H}\left(t\right)}{dt}\left(\frac{-1}{x^2}\right)x\right) = 1
\end{equation}
To simplify things, we assume that $\beta_\eta(\phi) \ll \eta$ and substituting Eq.~\ref{eq:CardyScalingManipulated} above, we get
\begin{equation}
\frac{(C(\phi)f)^2}{\mathcal{H}}\frac{1}{(C(\phi))^2}\frac{\sigma^*}{\sigma}\frac{1}{f^2}\left( -2\mathcal{H} + \frac{d\mathcal{H}\left(t\right)}{dt}\left(\frac{-1}{x}\right)\right) = 1
\end{equation}
\begin{equation}
\left( -2\frac{\sigma^*}{\sigma} - \frac{1}{t}\frac{d\log\mathcal{H}\left( t\right)}{d \log t}\left(\frac{1}{x}\right)\left(\frac{\sigma^*}{\sigma}\right)\right) = 1
\end{equation}
Simplifying:
\begin{equation}
\label{eq:DSTCond}
\left( -2\frac{\sigma^*}{\sigma} - \left(\frac{x_c}{x_c - x}\right)\left(\frac{\sigma^*}{\sigma}\right)\frac{d\log\mathcal{H}\left( t\right)}{d \log t}\right) = 1
\end{equation}
\newline
We fit the function in Fig. 2\textbf{b} on a log scale to a crossover between two straight lines.
\begin{equation}
\label{eq:HFit}
\log \mathcal{H}(y) = ay + b + \left( \frac{1 + \tanh(e(y - f))}{2}\right)\left( (c-a)y + d -b\right)
\end{equation}
where $y = \log(1/x - 1/x_c)$, and $a$, $b$, $c$, $d$, $e$, and $f$ are fitting parameters. Using this expression for $\mathcal{H}$, it is straightforward to take the derivative:
\begin{equation}
\label{eq:DH}
\frac{d\log \mathcal{H}}{dy} = a + ((c - a)y + d - b)\frac{1}{2\cosh^2(e(y - f))}e + \left(\frac{1 + \tanh(e(y - f))}{2}\right)(c - a)
\end{equation}
From Eq.~\ref{eq:DH} and Eq.~\ref{eq:DSTCond}, and the fits for $a$, $b$, $c$, $d$, $e$ and $f$, we can solve for the DST line numerically.
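Combining Eq.~\ref{eq:DSTCond} with Eq.~\ref{eq:DH}, the numerical solution is a one-dimensional root find in $\sigma$ at each volume fraction. Below is a minimal, stdlib-only sketch (not the code used for the paper) with the cornstarch parameters from Table~\ref{table:ParametersPhi0Str}; it assumes natural logarithms for the fit variable $y$, neglects $\beta_\eta(\phi)$ as in the derivation above, and renames the fit parameters $e$, $f$ to \texttt{pe}, \texttt{pf} to avoid a clash with $f = e^{-\sigma^*/\sigma}$:

```python
import math

# cornstarch parameters: C(phi) = c1*phi + c2, sigma*, phi_0*, x_c, H-fit a..f
c1, c2 = -13.11, 7.79
sigma_star, phi0_star, x_c = 3.9, 0.59, 14.5
pa, pb, pc, pd, pe, pf = -1.51, -1.4, -2.04, -3.1, 0.5, -2.1

def dlogH_dy(y):
    # derivative of the two-power-law crossover fit: slope pa for y -> -inf, pc for y -> +inf
    th = math.tanh(pe * (y - pf))
    sech2 = 1.0 / math.cosh(pe * (y - pf)) ** 2
    return pa + ((pc - pa) * y + pd - pb) * 0.5 * sech2 * pe + 0.5 * (1.0 + th) * (pc - pa)

def dst_residual(sigma, phi):
    # LHS of the DST condition minus 1; zero where dlog(eta)/dlog(sigma) = 1
    x = math.exp(-sigma_star / sigma) * (c1 * phi + c2) / (phi0_star - phi)
    y = math.log(1.0 / x - 1.0 / x_c)          # natural log assumed for the fit variable
    return (-2.0 * sigma_star / sigma
            - (x_c / (x_c - x)) * (sigma_star / sigma) * dlogH_dy(y) - 1.0)

def dst_stress(phi, lo=1.0, hi=1e6, n=2000):
    """Lowest stress (Pa) above `lo` where the DST condition is met, or None.
    The search starts at ~1 Pa; the fit is not meant to be trusted at very low stress."""
    grid = [lo * (hi / lo) ** (i / n) for i in range(n + 1)]
    for s0, s1 in zip(grid, grid[1:]):
        if dst_residual(s0, phi) * dst_residual(s1, phi) < 0.0:
            for _ in range(80):                # plain bisection on the bracket
                sm = 0.5 * (s0 + s1)
                if dst_residual(s0, phi) * dst_residual(sm, phi) <= 0.0:
                    s1 = sm
                else:
                    s0 = sm
            return 0.5 * (s0 + s1)
    return None

sigma_dst = dst_stress(0.55)   # DST onset at phi = 0.55 with these parameters
print(sigma_dst)
```

Repeating the root find over a grid of $\phi$ traces out the DST boundary in the phase diagram.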
More specifically, the exact parameters used to solve for the shear jamming and DST lines in cornstarch and silica are listed in Table~\ref{table:ParametersPhi0Str}.
\begin{table}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
\textbf{Parameters} & \textbf{Values} \\
\hline
Cornstarch $C(\phi)$ & -13.11$\phi$ + 7.79\\
\hline
Silica $C(\phi)$ & -11.14$\phi$ + 6.2\\
\hline
Cornstarch $\sigma^*$ & 3.9\\
\hline
Silica $\sigma^*$ & 48.2\\
\hline
Cornstarch $\phi^*_0$ & 0.59\\
\hline
Silica $\phi^*_0$ & 0.55\\
\hline
Fit $\mathcal{H}$ - $a$ & -1.51 $\pm$ 0.06 \\
\hline
Fit $\mathcal{H}$ - $b$ & -1.4 $\pm$ 0.4 \\
\hline
Fit $\mathcal{H}$ - $c$ & -2.04 $\pm$ 0.08\\
\hline
Fit $\mathcal{H}$ - $d$ & -3.1 $\pm$ 0.3\\
\hline
Fit $\mathcal{H}$ - $e$ & 0.5 $\pm$ 0.2\\
\hline
Fit $\mathcal{H}$ - $f$ & -2.1 $\pm$ 1\\
\hline
\end{tabular}
\end{center}
\caption{Parameters to determine the shear jamming and DST lines for the phase diagram.}
\label{table:ParametersPhi0Str}
\end{table}
\section{\label{CphiSensitivty} Sensitivity of phase diagrams}
To generate the phase diagram, we use fits to both $C(\phi)$ and the scaling function $\mathcal{H}$. We fit a line to the high volume fraction $C(\phi)$ data as shown in Fig.~\ref{fig:CPhiFit}, and $\mathcal{H}$ is fit by a crossover between two power laws as described in Eq.~\ref{eq:HFit}. The lines in the phase diagram (Fig. 3) are sensitive to these fits. More specifically, since the scaling variable is $x = fC(\phi)/(\phi^*_0 - \phi)$, the shear jamming line becomes very sensitive to the fit of $C(\phi)$ as $\phi \rightarrow \phi^*_0$. This sensitivity is illustrated by the overlay of experimental points on the phase diagram (Fig.~\ref{fig:PhaseDiagramPhi0StrDots}), where some data are projected to be on the jammed side of the phase diagram via the fit, while the actual values of $C(\phi)$ place the data in the DST region. The DST line is especially sensitive to the power law exponents $a$ and $c$. These parameters change the extent of the DST region, moving the onset volume fraction for DST.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\linewidth]{CphiWithFit.png}
\caption{\label{fig:CPhiFit} \textbf{Fits to the anisotropy $C(\phi)$}. The anisotropy factor $C(\phi)$ for cornstarch (square) and silica (diamond) suspensions and the fits (red) used to generate the phase diagram. We find that the data is fit fairly well by a straight line.}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{Figure3V6_VSmallDots.png}
\caption{\label{fig:PhaseDiagramPhi0StrDots} \textbf{Phase diagrams for cornstarch and silica suspensions with experimental data points}. Three distinct regions are seen in the phase diagrams for the cornstarch \textbf{a} and silica \textbf{b} systems - continuous shear thickening (CST) in purple, discontinuous shear thickening (DST) in blue, and a jammed region in red. The small green dots indicate the experimental stresses and volume fractions used for the scaling analysis. The shear jamming line (maroon) is determined by $x = x_c$, where $x = e^{-\sigma^*/\sigma}C(\phi)/(\phi^*_0 - \phi)$, and the DST line (blue) is determined by the condition $d\log\eta/d\log\sigma = 1$.}
\end{center}
\end{figure}
\section{\label{CphiOnlyScaling} Scaling collapse with only the $C(\phi)$ modification}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{Figure2Phi0LowPhi.png}
\caption{\textbf{Scaling collapse of the low volume fraction data with only $C(\phi)$} \textbf{a.} The scaling function $\mathcal{F} = \eta(\phi_0 - \phi)^2$ as a function of the scaling variable $x = e^{-\sigma^{*}/\sigma}C(\phi)/(\phi_0 - \phi)$ for the cornstarch (squares) and silica (diamond) suspensions data that only show CST. We find that the low volume fraction data show excellent collapse with only the inclusion of $C(\phi)$ in the scaling variable. Inset shows a zoom in of the data. \textbf{b}. The scaling function $\mathcal{H} = \eta(g(\sigma, \phi))^2$ versus $|1/x_c - 1/x|$ for all the cornstarch and silica suspensions data. This way of scaling the data clearly illustrates two distinct regimes - a regime characterized by a power law of -2 at small $x$ and a power law of $\sim -3/2$ at $x \approx x_c$. The solid black lines indicate power laws of $-2$ and $-3/2$ respectively.}
\label{fig:ScalingPhi0LowPhi}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{Figure2Phi0.png}
\caption{\textbf{Scaling collapse with only $C(\phi)$} \textbf{a.} The scaling function $\mathcal{F} = \eta(\phi_0 - \phi)^2$ as a function of the scaling variable $x = e^{-\sigma^{*}/\sigma}C(\phi)/(\phi_0 - \phi)$ for all the cornstarch (squares) and silica (diamond) suspensions data. We find that the data largely collapse onto a single curve that diverges at $x = x_c$. The higher volume fractions show small deviations at intermediate values of $x$. Inset shows a zoom in of the data. \textbf{b}. The scaling function $\mathcal{H} = \eta(g(\sigma, \phi))^2$ versus $|1/x_c - 1/x|$ for all the cornstarch and silica suspensions data. This way of scaling the data clearly illustrates two distinct regimes - a regime characterized by a power law of -2 at small $x$ and a power law of $\sim -5/3$ at $x \approx x_c$. The solid black lines indicate power laws of $-2$ and $-1.6$ respectively. \textbf{c}. The anisotropy factor $C(\phi)$ as a function of the volume fraction for both silica and cornstarch.}
\label{fig:ScalingPhi0}
\end{figure}
The essential modification to ensure scaling collapse of the data is the inclusion of the volume fraction dependent parameter $C(\phi)$ in the numerator of the scaling variable:
\begin{equation}
x = \frac{fC(\phi)}{(\phi_0 - \phi)}
\end{equation}
Thus the new scaling equation is:
\begin{equation}
\eta (\phi_0 - \phi)^2 = \frac{1}{\xi_0}\mathcal{F}\left(\frac{fC(\phi)}{(\phi_0 - \phi)}\right)
\end{equation}
Here $\xi_0$ is a suspension scaling factor with $\xi_0 = 1$ for cornstarch and $\xi_0 = 2$ for silica. With this modification alone, the low volume fraction data shows excellent collapse (Fig.~\ref{fig:ScalingPhi0LowPhi}) and the high volume fraction data collapse (Fig.~\ref{fig:ScalingPhi0}) is also significantly improved from the prediction of the recast Wyart and Cates model (Eq.4, Fig. 1\textbf{c}, \textbf{d}).
As seen in Fig.~\ref{fig:ScalingPhi0}, the scaling function $\mathcal{F}$ has a divergence at $x = x_c$. To determine the nature of the divergence, we plot $\mathcal{F} = \xi_0\eta(\phi_0 - \phi)^2$ as a function of $x_c - x$ (Fig~\ref{fig:xc_xPhi0}). We find a power law divergence with an exponent of $\delta \sim 3/2$, distinct from the value of 2 predicted by the Wyart and Cates model and the divergence near the isotropic jamming point.
\begin{figure}
\begin{center}
\includegraphics[width = \textwidth]{xc_xPhi0.png}
\caption{\textbf{Power law scaling of cornstarch and silica viscosity data.} \textbf{a} The scaling function $\mathcal{F} = \xi_0 \eta(\phi_0 - \phi)^2$ as a function of the scaling variable $x = e^{-\sigma^{*}/\sigma}C(\phi)/(\phi_0 - \phi)$ for the cornstarch (squares) and silica (diamond) suspensions data. We find that the data shows power law scaling with an exponent of $\sim -3/2$. The black solid line is a line with power law -1.5. \textbf{b} Only the low volume fraction data -- the scaling function $\mathcal{F} = \xi_0 \eta(\phi_0 - \phi)^2$ as a function of the scaling variable $x = e^{-\sigma^{*}/\sigma}C(\phi)/(\phi_0 - \phi)$ for continuously shear thickening cornstarch (squares) and silica (diamond) suspensions data. The black solid line is a line with power law -1.5.}
\label{fig:xc_xPhi0}
\end{center}
\end{figure}
To better visualize the change in exponent at large and small $x$, we can replot the data with only the $C(\phi)$ modification in a manner analogous to Eq.10:
\begin{equation}
\eta (g(\sigma, \phi))^2 = \frac{1}{\xi_0} \mathcal{H}\left( \left|1/x_c- 1/x\right|\right)
\end{equation}
We find excellent collapse over six orders of magnitude in the scaling variable with two easily distinguishable regimes each characterized by clearly different power-law exponents (Fig.~\ref{fig:ScalingPhi0}\textbf{b}, Fig.~\ref{fig:ScalingPhi0LowPhi}\textbf{b}). At small $x$, far from $x_c$, the behaviour is governed by the frictionless jamming point $\phi_0$ and $\mathcal{H} \sim |1/x_c - 1/x|^{-2}$. As $x$ approaches $x_c$, the exponent changes to $\approx -3/2$, with a crossover between the two regimes at $x/x_c \sim$ 0.1.
From this scaling collapse, we can again draw the phase diagrams for the cornstarch and silica suspensions. We fit the scaling function $\mathcal{H}$ to a crossover between two different power laws:
\begin{equation}
\log \mathcal{H}(y) = ay + b + \left( \frac{1 + \tanh(e(y - f))}{2}\right)\left( (c-a)y + d -b\right)
\end{equation}
and use the fit to determine the DST line:
\begin{equation}
\frac{d \log \eta}{d \log \sigma} = 1
\end{equation}
We can also determine the shear jamming line as $x = x_c$. The resultant phase diagrams for silica and cornstarch are shown in Fig.~\ref{fig:PhaseDiagramPhi0}. The exact parameters used to generate the phase diagram are shown in Table~\ref{table:ParametersPhi0}. We find that the phase diagrams are qualitatively similar to those shown in the main text; however, there are small differences in $\phi_\mu$, the shear jamming line, and the DST line. The key difference is that the shear jamming line and DST line converge at $\phi_0$ and not $\phi^*_0$ at small stresses.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{Figure3Phi0_v1_VerySmallDots.png}
\caption{\label{fig:PhaseDiagramPhi0} \textbf{Phase diagrams for cornstarch and silica suspensions as derived from the scaling analysis with only $C(\phi)$ correction}. Three distinct regions are seen in the phase diagrams for the cornstarch \textbf{a} and silica \textbf{b} systems - continuous shear thickening (CST) in purple, discontinuous shear thickening (DST) in blue, and a jammed region in red. The shear jamming line (maroon) is determined by $x = x_c$, where $x = e^{-\sigma^*/\sigma}C(\phi)/(\phi_0 - \phi)$, and the DST line (blue) is determined by the condition $d\log\eta/d\log\sigma = 1$. The small green dots indicate the experimental stresses and volume fractions used for the scaling analysis.}
\end{center}
\end{figure*}
\begin{table}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
\textbf{Parameters} & \textbf{Values} \\
\hline
Cornstarch $C(\phi)$ & -12.47$\phi$ + 8.23\\
\hline
Silica $C(\phi)$ & -10.5$\phi$ + 6.2\\
\hline
Cornstarch $\sigma^*$ & 3.9\\
\hline
Silica $\sigma^*$ & 48.2\\
\hline
Cornstarch $\phi_0$ & 0.65\\
\hline
Silica $\phi_0$ & 0.58\\
\hline
Fit $\mathcal{H}$ - $a$ & -1.5 $\pm$ 0.3 \\
\hline
Fit $\mathcal{H}$ - $b$ & -1 $\pm$ 1 \\
\hline
Fit $\mathcal{H}$ - $c$ & -2.00 $\pm$ 0.04\\
\hline
Fit $\mathcal{H}$ - $d$ & -1.3 $\pm$ 0.3\\
\hline
Fit $\mathcal{H}$ - $e$ & 0.3 $\pm$ 0.2\\
\hline
Fit $\mathcal{H}$ - $f$ & -3 $\pm$ 2\\
\hline
\end{tabular}
\end{center}
\caption{Parameters to determine the shear jamming and DST lines for the phase diagram with only the $C(\phi)$ modification.}
\label{table:ParametersPhi0}
\end{table}
\bibliographystyle{unsrt}
\end{document}
\section{Introduction}
The determination of stellar parameters, especially overall metallicity (denoted here $[{{\rm M}\over {\rm H}}]$
unless otherwise indicated) and detailed
abundances of individual metals, in stars of remote
Galactic and extra-galactic structures has become crucial to the study of galaxy formation and evolution, including that
of the Milky Way. Metallicity distribution functions for galaxies and globular star clusters (GCs) reveal information
about multiple populations from multiple star formation episodes and allow the investigation of the history of star formation.
Chemical tagging of stellar populations allows the investigation of the link between Galactic structures such as GCs and
nearby extra-galactic ones such as ultra-faint dwarf (UFD) and dwarf spheroidal (dSphe) satellite galaxies, and the process
through which galaxies are assembled in hierarchical structure formation (see \citet{mucciarelli13} for a recent example,
and \citet{frebel13} and \citet{belokurov13} for reviews).
However, stars in remote
structures are often only significantly and efficiently detectable with low- to moderate-resolution spectroscopy, such as that of the
Sloan Digital Sky Survey - Sloan Extension for Galactic Understanding and Exploration (SDSS-SEGUE), that precludes
the measurement of individual spectral lines, and usefully accurate $[{{\rm M}\over {\rm H}}]$ values must be
obtained from observationally expensive follow-up spectroscopy at high spectral resolution ({\it e.g.} see \citet{aoki}).
\citet{ivezic12} contains a recent authoritative review of how modest resolution spectroscopic surveys are
revolutionizing our study of Galactic populations and leading to insights into Galactic formation.
As a result, there is active interest in novel methods for extracting $[{{\rm M}\over {\rm H}}]$ and $T_{\rm eff}$
values from low- to moderate-resolution data (see \citet{schlaufman} for a recent investigation of a method
based on IR molecular bands, and \citet{miller} for a very recent investigation of a
photometric method based on SDSS $ugriz$ photometry.)
\paragraph{}
The original system of 11 \citep{gorgas93}, and then 21 (\citet{worthey94}, W94 henceforth), Lick/IDS spectral indices, $I$,
was defined to optimize the determination
of $[{{\rm Fe}\over{\rm H}}]$ and, hence, age ($t$), from integrated light (IL) spectra of
faint spatially unresolved {\it old} stellar populations ($-1 < [{{\rm M}\over{\rm H}}] < +0.5$, $0.5 < t < 20$ Gyr)
dominated by G and K stars, obtained at low spectra resolution ($R < 1000$) in the
$\lambda 4000$ to $6200$ \AA~ spectral band.
(\citet{wortheyo97} explored four new indices based on two definitions each of
H$\gamma$- and H$\delta$-centered features, but found them to be of limited usefulness - a result
consistent with our own investigation.)
Because of the emphasis on old stellar populations,
such as those of GCs and ``early-type'' galaxies, red giant branch (RGB) stars are important
contributors to the IL spectrum. The indices were discovered by empirically identifying composite
spectral features in low $R$ spectra of Galactic and GC G and K stars that showed a significant
and useful correlation with one of $[{{\rm M}\over{\rm H}}]$, $T_{\rm eff}$, or $\log g$ while
(hopefully) depending less sensitively on the other two.
The original Lick/IDS system was defined with spectra obtained with the Lick Observatory Image Dissector Scanner (IDS)
having a spectral sampling, $\Delta\lambda$, of
$\sim 8$ \AA~ in a region centered at 5200 \AA~ spanning a $\Delta\lambda$ range of 2400 \AA, corresponding to
spectral resolution $R \equiv \lambda / \Delta\lambda$ of $\sim 650$.
\paragraph{}
Given the usefulness of the Lick indices for modern moderate resolution spectroscopic surveys such as SDSS-SEGUE and LAMOST,
\citet{franchini10} developed the system further by creating a synthetic library of $I$ values for dwarf and giant stars
derived from synthetic spectra
that had been convolved to the higher SDSS $R$ value of 1800, and tested the predicted relationship between the
$I$ values and stellar parameters against that of several empirical spectral libraries, including the SDSS-DR7 spectroscopic
database itself. They also supplemented the 21 indices of W94 with a new near-UV index, namely CaHK - a prominent
spectral feature that has proved useful for identifying candidate XMP stars in productive surveys such as
the HK survey of \citet{beers}.
One of the main conclusions of \citet{franchini10} is that $T_{\rm eff}$ values derived from fitting their
synthetic indices to SDSS-SEGUE spectra of late-type giants were systematically lower than the $T_{\rm eff}$
values derived with the SEGUE Stellar Parameter Pipeline (SSPP).
\paragraph{}
Many of the Lick indices are dominated by, or have significant contributions from, lines of \ion{Fe}{1}.
Initial investigations of metal-poor stars in which the \ion{Fe}{1} extinction is treated with
Non-local Thermodynamic Equilibrium (Non-LTE, NLTE) indicate that NLTE effects become increasingly important with decreasing metallicity, and alter the inferred $T_{\rm eff}$
values by as much as 400 K and the derived abundances of Fe by 0.4 dex for metal-poor giants with parameters based on RAVE
survey spectra (\citet{serenelli13}, \citet{ruchti13}).
Recently \citet{short13} and papers in that series have found that when most of the light and Fe-group metals that contribute to the
visible band spectral line blanketing of mildly metal-poor RGB stars are treated in NLTE, the $T_{\rm eff}$ value inferred from spectrophotometric spectral energy distributions (SEDs) of
$R \approx 100$ is reduced by $\gtrsim 50$ K.
Therefore, calibration of the Lick $I([{{\rm M}\over {\rm H}}])$
relation based on NLTE modeling may be crucial to using the indices for accurate $[{{\rm M}\over {\rm H}}]$ inference.
\paragraph{}
Our goal is to extend an analysis of the detectability, and sensitivity to stellar parameters, including
$[{{\rm M}\over {\rm H}}]$, of the Lick indices to the regime of extremely metal-poor (XMP) red giants,
and to investigate the magnitude of NLTE effects on the value of modeled Lick indices.
In Section \ref{sModel} we describe the atmospheric models and the spectrum synthesis, and the procedure
for producing synthetic $I$ values; in Section \ref{sResults} we identify the Lick indices that remain most useful
at XMP metallicities, and provide useful polynomial fits and partial derivatives for index values, $I$, modeled in LTE,
in terms of $T_{\rm eff}$, $V-K$, and $[{{\rm M}\over {\rm H}}]$; in Section \ref{sConc} we present
conclusions.
\section{Modeling \label{sModel}}
\subsection{Model grid \label{sgrid}}
We have used PHOENIX V. 15 \citep{hauschildtafba99} to compute a grid of atmospheric models and corresponding synthetic
spectra in both LTE and ``massive multi-species NLTE'' \citep{shorth09} for very- and extremely-metal-poor (VMP
and XMP) red (and ``orange'') giant stars
of MK spectral class late F to late K covering the range of $T_{\rm eff}$ and $[{{\rm M}\over {\rm H}}]$ value of stars that are
spectroscopically accessible at Galactic halo distances and that serve as useful stellar ``fossils'' for Galactic archeology
\citep{cohen13}.
The grid also includes red giants of higher $[{{\rm M}\over {\rm H}}]$ value representative of the solar neighborhood
and disk population for comparison. The parameters of the LTE grid are: \\
$3750 \leq T_{\rm eff} \leq 6500$ K with $\Delta T_{\rm eff} = 250$ K, \\
$-6.0 \leq [{{\rm M}\over {\rm H}}] \leq 0.0$ with $\Delta [{{\rm M}\over {\rm H}}] = 1.0$ for $[{{\rm M}\over {\rm H}}] < -2.0$ and 0.5 for $[{{\rm M}\over {\rm H}}] \geq -2.0$, \\
$\log g = 2.0$\\
\paragraph{}
Fig. \ref{atmosGrid} shows the $T_{\rm Kin}(\tau_{\rm 12000})$ structure of a subset of our models for $T_{\rm eff}=4000$ K
and $[{{\rm M}\over {\rm H}}]$ values of -0.5, -2.0, -4.0, and -6.0, where $\tau_{\rm 12000}$ is the monochromatic
{\it continuum} optical depth at 12000 \AA~ and serves as our standard radial depth variable. The reduction in the
well-understood back-warming and surface cooling effects caused by line extinction as $[{{\rm M}\over {\rm H}}]$
decreases is readily noticeable.
For comparison, the grid of \citet{franchini10} has $3500 \leq T_{\rm eff} \leq 7000$ K with
$\Delta T_{\rm eff} = 250$ K, $-2.5 \leq [{{\rm Fe}\over {\rm H}}] \leq 0.5$ with $\Delta [{{\rm Fe}\over {\rm H}}] = 0.5$ generally,
with the addition of $[{{\rm Fe}\over {\rm H}}] = -4.0$ for their $\alpha$-enhanced models, and $0.5 \leq \log g \leq 5.0$ with
$\Delta\log g = 0.5$, where $[{{\rm Fe}\over {\rm H}}]$ denotes the scaled abundance parameter for elements other than
$\alpha$-process elements. The most important distinguishing features of our grid are the extension to $[{{\rm M}\over {\rm H}}]$
values of -6.0, and the inclusion of NLTE models for a subset of parameter values spanning the grid.
Although there are few stars of $[{{\rm M}\over {\rm H}}] \lesssim -3.5$,
even among halo stars useful for Galactic archeology, extending the grid to $[{{\rm M}\over {\rm H}}] = -6.0$ allows us to
anchor the $I([{{\rm M}\over {\rm H}}])$ fit through the useful $[{{\rm M}\over {\rm H}}]$ range.
\paragraph{}
Because of the large number of $[{{\rm M}\over {\rm H}}]$ values (nine) and doubling of most of the grid to
include NLTE counterparts, the number of models was limited by fixing the $\log g$ value at 2.0, representative of the giant population,
and giving all models a scaled-solar abundance distribution based on the abundances of \citet{grevs98} (therefore, for our grid
the overall metallicity parameter, $[{{\rm M}\over {\rm H}}]$, is identical with the $[{{\rm Fe}\over {\rm H}}]$ parameter). W94
and \citet{gorgas93} found that important $T_{\rm eff}$- and $[{{\rm Fe}\over {\rm H}}]$-sensitive $I$ values were
least sensitive to $\log g$. Nevertheless the lack of the $\log g$ dimension in our $I$ polynomial fits described
below leads to polynomial fitting coefficients that are not directly comparable to those of W94 from their
analysis of observed IDS spectra.
Adopting an $\alpha$-enhancement of 0.0 is expected to have a minor effect on the differential comparison of most
Fe-dominated $[{{\rm M}\over {\rm H}}]$-sensitive $I$ values
computed in LTE and NLTE (because these are really more $[{{\rm Fe}\over {\rm H}}]$ indicators,
and Fe is not an $\alpha$-element), and will allow direct assessment of pure NLTE effects
on $I$ values across the entire range of $[{{\rm M}\over {\rm H}}]$ value. Moreover, incorporating
$\alpha$-enhancement requires a model for how the value of the enhancement increases with
decreasing $[{{\rm M}\over {\rm H}}]$ value in the range 0.0 to -1.0. Nevertheless, a future direction
is to extend our grid in the $\alpha$-enhancement dimension. As a preliminary assessment of the effect of
$\alpha$-enhancement, in Table \ref{alphaTab} we present $I$ values computed at SDSS spectral resolution
for our subset of nine ``Lick-XMP'' indices (see below) for LTE models
of $T_{\rm eff}=4000$ K and $\log g=2.0$, for which the $I$ values are relatively strong, at select
$[{{\rm Fe}\over {\rm H}}]$ values of $-2.0$ and $-4.0$, both with the scaled-solar
abundance distribution and with a maximal relative enhancement of $+0.4$ for the eight $\alpha$-process elements of
even atomic number from O ($Z=8$) to Ti ($Z=22$).
Mg is an $\alpha$-process element, and the Mg$_{\rm 1}$ and Mg $b$ indices
are more strongly affected, increasing by a factor of $\sim 1.5$ at $[{{\rm Fe}\over {\rm H}}]=-2$
where their value is still large. Of our nine Lick-XMP indices, the results presented below for
Mg$_{\rm 1}$ and Mg $b$ should be regarded as most suspect and require follow up investigation
with full $\alpha$-enhanced NLTE and LTE model grids. The inclusion of NLTE effects in
the modeling is not likely to change the conclusion that non-$\alpha$-element spectral features are
significantly less affected by $\alpha$-enhancement than are $\alpha$-element spectral features.
As might be expected, the effect of $\alpha$ enhancement on
the Fe-dominated indices Fe4531, Fe5015, Fe5270, Fe5335, and the Na $D$ index is minor, being of
the order of $10\%$ or less. We note that for some non-$\alpha$-element indices, the effect of
$\alpha$-enhancement can be to {\it reduce} the index slightly at some $[{{\rm Fe}\over {\rm H}}]$ values, while
increasing it at others. Changes to the composition can affect the spectrum in the bracketing pseudo-continuum
windows that define any index, and will also indirectly affect the strength of features contributing to any index
through the effect on the electron number density.
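The abundance perturbation used for Table \ref{alphaTab} amounts to a uniform $+0.4$ dex shift of the eight even-$Z$ $\alpha$-elements relative to the scaled-solar distribution. A minimal sketch (the dictionary layout is illustrative, not the PHOENIX abundance input format):

```python
# Illustrative sketch of the alpha-enhancement perturbation described above.
# The {Z: log abundance} mapping is hypothetical, not the PHOENIX input format.
ALPHA_Z = (8, 10, 12, 14, 16, 18, 20, 22)  # O, Ne, Mg, Si, S, Ar, Ca, Ti

def alpha_enhance(log_abund, d_alpha=0.4):
    """Shift the eight even-Z alpha-process elements (O through Ti) by
    d_alpha dex; all other elements keep their scaled-solar values."""
    return {z: a + (d_alpha if z in ALPHA_Z else 0.0)
            for z, a in log_abund.items()}
```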
\paragraph{}
Our models have spherical geometry with radii based on an adopted mass of $M = 1 M_\odot$, a microturbulence broadening
parameter of $\xi_{\rm T} = 2.0$ km s$^{\rm -1}$, which is consistent with what has been measured and adopted for late-type
giants generally, and a mixing-length parameter for the treatment of convective energy transport, $l$, of
1.0 $H_{\rm P}$ (pressure scale heights).
\subsubsection{NLTE treatment }
We treat 6706 atomic energy-levels ($E$-levels) connected by 74550 bound-bound ($b-b$) transitions of 35 chemical
species accounting for various ionization stages of 20 chemical elements, including H, He, CNO, and the
Fe-group elements that blanket late-type visible band stellar spectra, as well as other abundant light metals.
We compute the NLTE level populations, $n_{\rm i}(\tau_{\rm 12000})$, and hence the corresponding extinction coefficient, $\kappa_\lambda(\tau_{\rm 12000})$, in
self-consistent multi-level
NLTE by solving the system of coupled rate equations of statistical equilibrium (SE) consistently with the equation of
radiative transfer (RT) in each of the relevant bound-bound ($b-b$) and bound-free ($b-f$) transitions.
\citet{shorth09} contains a description of
the species treated in NLTE, sources of atomic data, and other important details.
\paragraph{}
Because NLTE models are more computationally expensive, we only produced NLTE models and spectra at a subset of
our LTE grid, as follows:\\
$4000 \leq T_{\rm eff} \leq 6500$ K with $\Delta T_{\rm eff} = 500$ K, \\
$-6.0 \leq [{{\rm M}\over {\rm H}}] \leq 0.0$ with $\Delta [{{\rm M}\over {\rm H}}] = 1.0$, with addition of $[{{\rm M}\over {\rm H}}] = -0.5$ \\
This is sufficient to assess NLTE effects on the Lick indices throughout the grid. Fig. \ref{atmosGrid} shows
NLTE $T_{\rm Kin}(\tau_{\rm 12000})$ structures for comparison with those of LTE. NLTE radiative equilibrium is complex, and \citet{anderson89} contains
a very thorough analysis for the case of the Sun, and \citet{short12}
extends the analysis to solar metallicity and moderately metal-poor RGB stars.
\paragraph{NLTE Fe treatment }
The predicted magnitude of the well-known \ion{Fe}{1} NLTE ``over-ionization'', and the resulting predicted brightening
of the \ion{Fe}{1}-blanketed near-UV and blue spectral bands with respect to the rest of the
SED (see \citet{shorth09} and \cite{rutten86}) depends on
the details of the atomic model of \ion{Fe}{1} used in the NLTE Fe treatment. More specifically, the completeness
with which high-energy $E$-levels are included near the ionization limit, $\chi_{\rm I}$, affects the computed rate of collisional recombination
from \ion{Fe}{2}, and thus the \ion{Fe}{1}/\ion{Fe}{2} ionization equilibrium \citep{mashonkina11}.
Generally, the more $E$-levels are included for
which the atomic energy gap, $\Delta\chi$, between the $E$-level and $\chi_{\rm I}$ is less than the
average collisional energy among particles ($kT$) throughout the line-forming region of the atmosphere, the more accurate
the NLTE effect on the computed SED will be.
For \ion{Fe}{1}, $\chi_{\rm I} = 7.9024$ eV, and in our 494-level model \ion{Fe}{1} atom,
the highest lying $E$-level has $\chi = 7.538$ eV, for a minimum $\Delta\chi$ gap of 0.364 eV.
The line-forming region of the atmosphere throughout the visible
band generally lies at shallower total optical depths, $\tau_\lambda$, than the layer where the continuum value of $\tau_\lambda$ is unity,
where $T(\tau_\lambda) \le T_{\rm eff}$.
For the warmest models in our grid
($T_{\rm eff} = 6500$ K), $kT \le 0.560$ eV in the
line forming region, and we have eight $E$-levels for which $\Delta\chi < kT$, at least in the lower
line-forming region.
For the coolest models in our grid ($T_{\rm eff} =3750$ K), $kT \le 0.323$ eV in the
line forming region, and we just miss having {\it any} $E$-levels for which $\Delta\chi < kT$.
We expect our prediction of NLTE
\ion{Fe}{1} effects to be most accurate at the warm end of our grid where collisional
recombination into our highest $E$-levels is energetically accessible.
At the cool end, the collisional recombination rate is artificially suppressed by the lack
of higher-lying $E$-levels in the model atom. Because the recombination rate is under-estimated for our cooler
models, the NLTE over-ionization effect is likely over-estimated there, and our
modeled NLTE effects on $I$ values may be regarded, cautiously,
as upper limits. We plan to expand the PHOENIX NLTE \ion{Fe}{1}
atom in the near future, but this is a significant project in its own right.
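The energetic-accessibility argument above involves only the stated constants and is simple to verify directly; a sketch (arithmetic check only, not part of the PHOENIX modeling):

```python
# Check which Fe I model-atom gaps are collisionally accessible, i.e.
# Delta_chi < kT, using the values quoted in the text.
K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

CHI_ION = 7.9024          # Fe I ionization limit chi_I (eV)
CHI_TOP = 7.538           # highest E-level in the 494-level model atom (eV)

def kT_eV(T_kin):
    """Mean collisional energy kT in eV at kinetic temperature T_kin (K)."""
    return K_B_EV * T_kin

def gap_accessible(chi_level, T_kin):
    """True if the level-to-continuum gap Delta_chi is below kT."""
    return (CHI_ION - chi_level) < kT_eV(T_kin)

# Warm end of the grid: kT ~ 0.56 eV exceeds the 0.364 eV minimum gap.
print(round(kT_eV(6500.0), 3), gap_accessible(CHI_TOP, 6500.0))
# Cool end: kT ~ 0.323 eV just misses the minimum gap.
print(round(kT_eV(3750.0), 3), gap_accessible(CHI_TOP, 3750.0))
```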
\subsection{Synthetic spectra \label{sspectra}}
Our longer-term goal is to identify useful $T_{\rm eff}$ and overall $[{{\rm M}\over {\rm H}}]$ line diagnostics
for high spectral resolution from the near UV to the near IR (NIR). Therefore, we have computed
synthetic spectra for each of our models for $3000 < \lambda < 26000$ \AA~ with a spectral sampling,
$\Delta\lambda$, set so as to maintain an $R$ value of $\sim$300\,000 throughout, sufficient to
fully resolve spectral line cores. This $\lambda$ range includes the NIR $J$, $H$, and $K$ photometric
bands, in which useful line lists of stellar parameter and abundance diagnostics have recently been
published (\citet{bergemann12}, \citet{cesseti13}, \citet{le11}).
\paragraph{}
Our synthetic spectra were post-processed by broadening with a Gaussian kernel to $R$ values of
650 and, following \citet{franchini10}, 1800 to match the resolution of the original IDS, and SDSS-SEGUE spectroscopy,
respectively.
A Gaussian is only an approximation to the real instrumental spectral profiles of IDS and SDSS-SEGUE
spectroscopy, but at this stage our study is a differential one to compare the effect of NLTE on Lick
indices to that of choice of $R$ value as a function of $[{{\rm M}\over {\rm H}}]$.
We do not account for either macro-turbulent or rotational broadening. Rotation is expected to be modest in
evolved stars of large radius, and both effects are expected to be minor at these $R$ values.
Fig. \ref{specGrid} shows representative synthetic spectra for the models of Fig. \ref{atmosGrid} convolved
to IDS spectral resolution, with the Lick indices labeled, and it can be seen
that some of the strongest spectral features are still significant at
$[{{\rm M}\over {\rm H}}]$ values as low as -6.0. Fig. \ref{specDiff} shows the relative flux
difference at IDS resolution,
$\Delta F_\lambda \equiv (F_{\lambda, {\rm NLTE}} - F_{\lambda, {\rm LTE}})/F_{\lambda, {\rm LTE}}$, for
models of $T_{\rm eff}= 4000$ K, and $[{{\rm M}\over {\rm H}}]$ values of 0.0, -2.0, and -6.0, with the Lick indices labeled.
$\Delta F_\lambda$ is generally positive at the $\lambda$ values of the Lick indices because most low-$\chi$
spectral lines from neutral ionization stages of metals are {\it weaker} in NLTE than in LTE.
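The post-processing broadening step reduces to a convolution with a Gaussian kernel of FWHM $\lambda/R$. A minimal sketch, assuming uniform wavelength sampling (an illustration, not the reduction code actually used):

```python
import numpy as np

def degrade_to_R(wave, flux, R):
    """Convolve a high-resolution spectrum down to resolving power R with a
    Gaussian kernel of FWHM = lambda/R evaluated at the band centre.
    Assumes `wave` (Angstrom) is uniformly sampled; a sketch only."""
    lam0 = 0.5 * (wave[0] + wave[-1])
    fwhm = lam0 / R                                    # instrumental FWHM
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    dlam = wave[1] - wave[0]
    half = int(np.ceil(5.0 * sigma / dlam))            # +/- 5 sigma kernel
    x = np.arange(-half, half + 1) * dlam
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                             # conserve flux
    return np.convolve(flux, kernel, mode="same")
```

A real instrument profile is only approximately Gaussian, as noted above, so this is adequate for the differential comparison but not for absolute index calibration.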
\subsection{Synthetic Lick indices \label{sindex}}
We use our LTE and NLTE synthetic spectra, convolved to both IDS and SDSS spectral resolution, to compute LTE and NLTE
IDS and SDSS Lick indices, $I$, following the prescription of W94. We took the $\lambda$ values defining the latest
recommended index and associated pseudo-continuum bands,
similar to the information presented in Table 1 of W94, from
the official Lick index WWW site (http://astro.wsu.edu/worthey/html/system.html), as did \citet{franchini10}.
The IDS indices conform to the
well-studied Lick index system as originally defined,
and are directly comparable to
those of \citet{gorgas93} and W94, and serve as a check on our procedures, as well as allowing us
to assess the impact of NLTE effects at IDS resolution. The SDSS indices are comparable to those of \citet{franchini10}
and allow an assessment of NLTE effects at somewhat higher resolution typical of more modern spectroscopic
surveys such as SDSS and LAMOST. Furthermore, comparing the LTE IDS and SDSS indices allows an assessment of the
dependence of the $I$ sensitivity to $T_{\rm eff}$ and $[{{\rm M}\over {\rm H}}]$ on $R$ value.
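The index computation itself follows the standard prescription: a straight-line pseudo-continuum through the mean fluxes of the bracketing blue and red bands, integrated over the central band. A sketch (band limits are caller-supplied; the official definitions should be taken from the Lick site cited above):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy-version API differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lick_index(wave, flux, blue, centre, red, magnitude=False):
    """Pseudo-equivalent-width index in the standard Lick style: a linear
    pseudo-continuum through the mean fluxes of the blue and red bands,
    integrated over the central band. Bands are (lo, hi) tuples in Angstrom."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (lb, fb), (lr, fr) = band_mean(*blue), band_mean(*red)
    m = (wave >= centre[0]) & (wave <= centre[1])
    w, f = wave[m], flux[m]
    f_cont = fb + (fr - fb) * (w - lb) / (lr - lb)   # linear pseudo-continuum
    if magnitude:  # CN_1, CN_2, Mg_1, Mg_2, TiO_1, TiO_2 are in magnitudes
        return -2.5 * np.log10(_trapz(f / f_cont, w) / (centre[1] - centre[0]))
    return _trapz(1.0 - f / f_cont, w)               # index in Angstrom
```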
\paragraph{}
Following \citet{gorgas93} and W94, we also computed the photometric $V-K$ index for the models
as an observational surrogate for the independent parameter $T_{\rm eff}$.
$V-K$ has been found to be {\it relatively} insensitive to $\log g$ and $[{{\rm M}\over{\rm H}}]$, and a good proxy for
$T_{\rm eff}$ over the GK star range \citep{bellg89}.
For consistency with W94, we use the $V$- and $K$-band filter definitions of \citet{johnson66} and
calibrate the index with a single-point calibration of the $[{{\rm Fe}\over {\rm H}}] = 0.0$ models at
$T_{\rm eff} = 4000$ K to the $(V-K) - T_{\rm eff}$
relation given in Table 4 of \citet{ridgway80}. The \citet{ridgway80} $(V-K) - T_{\rm eff}$ relation is
for giant stars, and a $T_{\rm eff}$ value of 4000 K is near the center of their calibrated $T_{\rm eff}$ range,
and overlaps with the $T_{\rm eff}$ range of our grid.
We always use the LTE model $V-K$ color
on the grounds that it is serving as an independent variable in
this analysis, and the LTE grid is more complete.
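Schematically, the synthetic color plus its single-point calibration reduce to a ratio of filter-weighted fluxes and a zero point; a sketch (the filter curves and zero point are placeholders standing in for the \citet{johnson66} definitions and the \citet{ridgway80} anchor, and a production calculation would also handle photon counting and Vega zero points):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy-version API differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synth_vmk(wave, flux, trans_v, trans_k, zero_point=0.0):
    """V-K colour from a model SED as -2.5 log10 of the ratio of
    filter-weighted mean fluxes, plus a zero point fixed by a single-point
    calibration (here, placeholders for the Johnson V/K curves and the
    Ridgway et al. anchor at Teff = 4000 K, [Fe/H] = 0.0)."""
    f_v = _trapz(flux * trans_v, wave) / _trapz(trans_v, wave)
    f_k = _trapz(flux * trans_k, wave) / _trapz(trans_k, wave)
    return -2.5 * np.log10(f_v / f_k) + zero_point
```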
\paragraph{}
It is worth reiterating remarks made by previous investigators about the particular diagnostic utility of those indices
that are expected to be most useful in this investigation:
\paragraph{}
Fe4383 and Fe4668 (W94) and Fe5270 and Fe5335 \citep{gorgas93} are $[{{\rm Fe}\over{\rm H}}]$-sensitive with a range in
$I$ value significantly greater than measurement uncertainty, and are expected to be especially useful here if they remain detectable
down to XMP metallicities (note that Fe4668 has significant contributions from Mg, Cr, Ti, and C$_{\rm 2}$). Ca4227 really
is dominated by Ca (whereas Ca4455 is more influenced by Fe-group lines), thus providing one of the few atomic indices
{\it not} heavily affected by Fe, and is somewhat sensitive to overall $[{{\rm M}\over{\rm H}}]$ (W94) as well as
$[{{\alpha}\over{\rm H}}]$ given that Ca is an $\alpha$-process element. CN$_{\rm 2}$
is a modification of CN$_{\rm 1}$ designed to avoid contamination from the H$\delta$ line, and is strongly dependent on
overall $[{{\rm M}\over{\rm H}}]$ for giants (but less so for dwarfs) (W94).
By contrast to the preceding, H$\beta$ has been found to be most
strongly dependent on $T_{\rm eff}$ \citep{gorgas93}, and thus provides a valuable complementary diagnostic.
Indices Mg$_{\rm 1}$ (dominated by MgH)
and Mg $b$ were found by \citet{gorgas93} to be usefully sensitive to $\log g$,
and we include them in our investigation to re-assess their $T_{\rm eff}$ and $[{{\rm M}\over{\rm H}}]$
sensitivity (note that Mg$_{\rm 2}$ includes contributions from both MgH
and the Mg $b$ lines, so is less pure a signal of either). Na $D$ is known to be significantly contaminated by interstellar (ISM) extinction,
which complicates its interpretation \citep{gorgas93}.
\paragraph{}
\citet{franchini10} investigated the influence of enhanced $\alpha$-element abundances on model $I$ values. They
found that for $T_{\rm eff} > 4250$ K the most $\alpha$-sensitive indices are CN$_{\rm 1}$, CN$_{\rm 2}$,
CaHK, Ca4227, Fe4668, Mg$_{\rm 1}$, Mg$_{\rm 2}$, and Mg $b$, and that the least $\alpha$-sensitive indices are G4300,
Ca4455, Fe4531, Fe5015, Fe5782, and H$\beta$. For $T_{\rm eff} < 4250$ K the situation seems more complex, but CaHK,
Ca4227 and Mg $b$ remain among the most $\alpha$-sensitive, and G4300 and Ca4455 remain among the least $\alpha$-sensitive
indices.
\section{Results \label{sResults}}
We caution that because our models have scaled-solar abundances, in the discussion that follows the
$[{{\rm M}\over{\rm H}}]$ parameter is effectively identical to the $[{{\rm Fe}\over{\rm H}}]$ parameter in the
internal context of our modeling and analysis, whereas
in $\alpha$-enhanced metal-poor RGB stars with non-solar abundance distributions,
$[{{\rm M}\over{\rm H}}]$ differs from $[{{\rm Fe}\over{\rm H}}]$. This distinction is expected to be most important
for those Lick indices that are Mg- or Ca- dominated, and less so for those that are
Fe-dominated.
\subsection{$V-K(T_{\rm eff})$ and $V-K([{{\rm M}\over{\rm H}}])$ relations }
Fig. \ref{vmkteff} shows the LTE and NLTE model $V-K(T_{\rm eff})$ relation for our range of model $[{{\rm M}\over{\rm H}}]$
values, over-plotted with the $V-K(T_{\rm eff})$ relation for giants of \citet{ridgway80}. The model
$V-K(T_{\rm eff})$ relation flattens with decreasing $[{{\rm M}\over{\rm H}}]$ value, which is to be expected
because line blanketing extinction in the $V$ band increases more rapidly with increasing $[{{\rm M}\over{\rm H}}]$
value than does that in the $K$ band. Within the range of
overlap in $T_{\rm eff}$ (3750 to 5000 K), our model $V-K(T_{\rm eff})$ relation at $[{{\rm M}\over{\rm H}}] = 0.0$
closely tracks that of \citet{ridgway80}, but is slightly steeper. However, the \citet{ridgway80} sample
of red giants probably includes stars of $[{{\rm Fe}\over{\rm H}}] < 0.0$, so it should have a flatter $V-K(T_{\rm eff})$
relation than that of $[{{\rm M}\over{\rm H}}] = 0.0$. NLTE effects are negligible, which is to be expected
given that these are broad-band colors that average the effects of many spectral lines, and that line blanketing
opacity is already considerably reduced in the $V$ band as compared to the $B$ and $U$ bands.
\subsection{XMP indices}
Table \ref{XMPIs} displays, for a selection of $T_{\rm eff}$ values spanning our grid,
the range of $[{{\rm M}\over{\rm H}}]$ values for which each index, $I$,
is a sensitive $[{{\rm M}\over{\rm H}}]$ indicator as judged by the criterion that
$\Delta[{{\rm M}\over{\rm H}}]\times {{\partial I}\over {\partial [{{\rm M}/{\rm H}}]}} \gtrsim \sigma_{\rm Worthey}$,
where $\Delta[{{\rm M}\over{\rm H}}] \approx 1$ and $\sigma_{\rm Worthey}$ is an observational uncertainty
described below.
Lick indices that meet this criterion and remain strong enough at VMP-to-XMP metallicities
($[{{\rm M}\over{\rm H}}] < -4.0$) to be detectable are considered to be ``Lick-XMP'' indices.
Generally, {\it all} the ``metallic'' atomic and molecular indices
have ${{\partial I}\over {\partial [{{\rm M}/{\rm H}}]}}$ values
that increase with decreasing $T_{\rm eff}$ value, and we expect that Lick-XMP indices
will be more readily identifiable at the cool end of our grid.
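The Lick-XMP criterion can be evaluated mechanically on the model grid with a finite-difference derivative; a sketch (the $\sigma_{\rm Worthey}$ value is a caller-supplied placeholder, not the W94 tabulation):

```python
import numpy as np

def xmp_sensitive(mh, I, sigma_worthey, d_mh=1.0):
    """Return the grid [M/H] values at which an index I([M/H]) satisfies the
    Lick-XMP criterion d_mh * |dI/d[M/H]| >= sigma_worthey, using a simple
    finite-difference derivative on the model grid."""
    mh = np.asarray(mh, dtype=float)
    I = np.asarray(I, dtype=float)
    dI_dmh = np.abs(np.gradient(I, mh))      # |dI/d[M/H]| on the grid
    return mh[d_mh * dI_dmh >= sigma_worthey]
```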
\paragraph{}
Table \ref{XMPIs} includes indications
for indices that become ``pathological'' over a significant $[{{\rm M}\over{\rm H}}]$ range
at {\it any} $T_{\rm eff}$ values by becoming multi-valued, or negative in the case of
those indices that are in linear $W_\lambda$ units. These pathologies often reflect complications
in the bracketing pseudo-continuum wave-bands as defined by the Lick index standard rather than
with the central feature itself, and most often appear at higher $[{{\rm M}\over{\rm H}}]$ values.
\paragraph{}
At $T_{\rm eff}=3750$ K we have found five Lick indices that meet
our Lick-XMP criterion all the way down to $[{{\rm M}\over{\rm H}}] = -6.0$
(Fe5270, Fe5335, Fe5406, Mg $b$, Na $D$), and four others that meet the criterion
down to $[{{\rm M}\over{\rm H}}] = -5.0$ (Fe4383, Fe4531, Fe5015, Mg$_{\rm 1}$).
At $T_{\rm eff}=4500$ K there are none that meet the criterion down to $[{{\rm M}\over{\rm H}}] = -6.0$,
and four of the above nine that still meet the criterion down to $[{{\rm M}\over{\rm H}}] = -5.0$
(Fe5335, Fe5406, Mg $b$, Na $D$). These nine Lick-XMP indices are indicated
in Figs. \ref{specGrid} and \ref{specDiff} with identification labels that
are off-set from the rest.
By $T_{\rm eff}=5000$ K (and hotter) there are {\it no} indices that
meet our Lick-XMP criterion. (Furthermore, we caution that Mg $b$ becomes multi-valued
as a function of $[{{\rm M}\over{\rm H}}]$ for $T_{\rm eff} \gtrsim 5000$ K.)
The presence of Fe4383, Fe5270 and Fe5335 among our Lick-XMP indices is not surprising
given that \citet{gorgas93} identified them as strong $[{{\rm Fe}\over{\rm H}}]$-indicators.
Na $D$ is an
interesting member of our Lick-XMP indices in that it is not Fe-dominated, but we caution, again, that its usefulness
is compromised by significant ISM extinction.
As noted in Section \ref{sModel}, our treatment of Mg $b$ and Mg$_{\rm 1}$ is least accurate because we
neglect $\alpha$-enhancement in the current investigation, and Mg is an $\alpha$-process element. Therefore,
the following results for these two indices are most suspect and require further investigation.
\paragraph{}
Figs. \ref{Fe5270_4000} to \ref{NaD_4000} show the modeled $I([{{\rm M}\over{\rm H}}])$ relation
at $T_{\rm eff}=4000$ K, and Figs. \ref{Fe5270_0} to \ref{NaD_0} show the modeled $I(T_{\rm eff})$ relation at
$[{{\rm M}\over{\rm H}}]=0.0$ for four of our Lick-XMP indices, Fe5270, Fe4383, Mg $b$, and Na $D$. The
latter are the only two Lick-XMP indices {\it not} dominated by Fe.
Results are shown for LTE and NLTE spectra convolved to the
$R$ values of SDSS and IDS spectroscopy.
For comparison, we have over-plotted observationally derived $I(T_{\rm eff})$
and $I([{{\rm M}\over{\rm H}}])$ points for a wide range of cool giants from the catalog of \citet{wortheyo97} with catalog $\log g$ values between 1.0 and 3.0, and $T_{\rm eff}$ and $[{{\rm M}\over{\rm H}}]$ within $\pm 100$ K and $\pm 1.0$,
respectively, of the plotted models. Note that the \citet{wortheyo97} $I$ data
only includes stars of $[{{\rm M}\over{\rm H}}] \gtrsim -1.0$, and that {\it no}
calibration of our $I$ values to those of \citet{wortheyo97} has been performed.
Nevertheless, from Figs. \ref{Mgb_4000} and \ref{NaD_4000}, the agreement between our modeled
Mg $b$ and Na $D$ $I$ values and the measurements of \citet{wortheyo97} for stars
of $T_{\rm eff} = 4000 \pm 100$ K is reassuring.
Figs. \ref{Fe5270_4000} to \ref{NaD_4000} show that these indices generally satisfy our
Lick-XMP criteria for $T_{\rm eff}$ values in the cool part of our grid.
Fig. \ref{Fe4383_0} shows that the utility of Fe4383 is somewhat compromised - it is double-valued as a function
of $T_{\rm eff}$ at the cool edge of our grid. However, it is a sensitive and
useful $[{{\rm M}\over{\rm H}}]$ discriminator otherwise, and we include it because
there are few indices that qualify as Lick-XMP at all.
Figs. \ref{Fe5270_0} to \ref{NaD_0} show that, as discussed above, Lick-XMP indices
quickly become weaker and lose their ability to discriminate among $[{{\rm M}\over{\rm H}}]$
values for $T_{\rm eff} \gtrsim 5000$ K. Figs. \ref{Fe5270_4000} through \ref{NaD_0}
also include an indication of the ``observational uncertainty'' as determined by W94,
$\sigma_{\rm Worthey}$ (see below), to aid in assessing the significance of $\Delta I$ differences.
\paragraph{}
The effect of NLTE is complex in that every Lick index is really a compound feature
caused by significant spectral lines from several species. Moreover, although
the bracketing pseudo-continua used to define the indices were chosen to be relatively
insensitive to stellar parameter values, NLTE effects on line strengths in the
bracketing regions will, in principle, also play a role in the overall effect of NLTE
on the computed $I$ value. However, Fe5270 is typical
of our results for the Fe-dominated indices in that the effect of the
well-known NLTE over-ionization of \ion{Fe}{1} in late-type stars \citep{rutten86}
leads to smaller $I$ values at every $T_{\rm eff} - [{{\rm M}\over{\rm H}}]$
combination. Figs. \ref{Fe5270diff} and \ref{Mgbdiff} show the size of the
NLTE effect, $\Delta I\equiv I_{\rm NLTE} - I_{\rm LTE}$, as a function of
$T_{\rm eff}$ for Fe5270 and Mg $b$. The effect of NLTE on strong low-$\chi$ \ion{Fe}{1} lines
is to weaken them (a negative correction to {\it modeled} $I$ value) as a result of NLTE over-ionization.
In late-type stars the effect will generally be maximal where
the discrepancy, $\Delta T$, throughout the line forming region between the radiation temperature, $T_{\rm Rad}$, of
the photo-ionizing near-UV band radiation in the NLTE treatment,
and the local kinetic temperature, $T_{\rm Kin}$, that determines the ionization
balance in the LTE treatment, is largest. For \ion{Fe}{1} in giants, the discrepancy is largest
around $T_{\rm eff}\approx 5000$ K, and decreases in magnitude for both lower and higher $T_{\rm eff}$ value
(see \citet{rutten86} for a thorough analysis for the case of Fe in the Sun).
\paragraph{}
For \ion{Mg}{1} (Index Mg $b$) the NLTE correction to the modeled $I$ value is also
negative, but increases in magnitude with decreasing $T_{\rm eff}$ throughout this $T_{\rm eff}$ range,
especially for $T_{\rm eff} < 4000$ K. \citet{osorio15} very recently conducted a thorough
NLTE analysis of \ion{Mg}{1} in late-type stellar atmospheres, including an investigation
of \ion{H}{1} collisional cross-sections and electron-exchange reactions. For the $\lambda$ 5184 line,
they found that NLTE effects lead to a weakening of the modeled line
(hence the {\it positive} abundance correction in their Fig. 10), by an amount that depends
on choice of atomic data, but can be as much as 0.4 dex at $T_{\rm eff}=4500$ K, $\log g=1.0$,
and $[{{\rm M}\over{\rm H}}]=0.0$, which is qualitatively consistent with our result.
\paragraph{}
Interestingly, the {\it magnitude} of NLTE effect on the computed $I$ value is comparable to that caused by
changing spectral resolution ($R$ value of IDS $vs$ SDSS). Na $D$ is an exception because it
is so broad that it
is minimally affected by the choice of $R$.
Computing $I$ from higher $R$-value spectra ({\it i.e.} that of SDSS) can either increase
or decrease the $I$ value, depending on the index. The same remarks as made when
considering NLTE effects above also apply: namely, that the effect of $R$ value on any given
index will depend on how the bracketing pseudo-continua are affected as well as the
central feature itself.
Altogether, the vertical spread in $I$ values
at each abscissa can be taken as an approximate indication of ``spectroscopic and modeling
physics uncertainty''.
\subsection{Polynomial fits \label{sfitfunc}}
Based on the approach taken by W94 with {\it observed} IDS spectra {\bf and observationally determined stellar parameters},
we have used multi-variate linear regression to determine ten polynomial fitting
(regression) coefficients (or model parameters), $\{ C_{\rm n} \}$, for each index, $I$, for a polynomial fitting function, $P_{\rm 3}$, that accounts for all terms,
including cross-products, up to third order in $[{{\rm M}\over{\rm H}}]$ and $\log\theta$, where $\theta\equiv {\bf 5040}/T_{\rm eff}$,
\begin{eqnarray}
I \approx P_{\rm 3} \equiv C_{\rm 0} + C_{\rm 1}[{{\rm M}\over{\rm H}}] + C_{\rm 2}\log\theta
+ C_{\rm 3}[{{\rm M}\over{\rm H}}]^2 + C_{\rm 4}\log^2\theta + C_{\rm 5}[{{\rm M}\over{\rm H}}]\log\theta \nonumber\\
+ C_{\rm 6}[{{\rm M}\over{\rm H}}]^3 + C_{\rm 7}\log^3\theta
+ C_{\rm 8}[{{\rm M}\over{\rm H}}]^2\log\theta + C_{\rm 9}[{{\rm M}\over{\rm H}}]\log^2\theta
\label{poly}
\end{eqnarray}
Note that, as per convention, the units of $I$ for the molecular indices CN$_{\rm 1}$, CN$_{\rm 2}$, Mg$_{\rm 1}$,
Mg$_{\rm 2}$, TiO$_{\rm 1}$ and TiO$_{\rm 2}$ are magnitudes, and those for the remaining indices are \AA.
Moreover, following W94, in the special case of the TiO$_{\rm 1}$ and TiO$_{\rm 2}$ indices, which exhibit
a rapid increase in strength with decreasing $T_{\rm eff}$ near the lower limit of our $T_{\rm eff}$
range, we fit
Eq. \ref{poly} to $\log I$ so that $I$ is being fit by $\exp P_{\rm 3}$.
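As a concrete (non-IDL) illustration of this step, fitting Eq. \ref{poly} amounts to ordinary least squares on ten basis functions; the sketch below uses a synthetic grid and hypothetical coefficients, not the actual model index values.

```python
import numpy as np

def design_matrix(m, t):
    # Basis functions of the fitting polynomial: powers and cross-products
    # of [M/H] (m) and log(theta) (t) up to third order, ordered C_0..C_9.
    return np.column_stack([np.ones_like(m), m, t, m**2, t**2, m * t,
                            m**3, t**3, m**2 * t, m * t**2])

# Synthetic stand-in for the 90-point (ten T_eff x nine [M/H]) model grid.
teff = np.repeat(np.linspace(3750.0, 6500.0, 10), 9)
mh = np.tile(np.linspace(-6.0, 0.0, 9), 10)
logtheta = np.log10(5040.0 / teff)

# Hypothetical "true" coefficients, for demonstration only.
c_true = np.array([1.0, 0.5, -2.0, 0.05, 3.0, -0.4, 0.01, 8.0, 0.1, -0.2])
X = design_matrix(mh, logtheta)
I = X @ c_true                      # noiseless synthetic index values

# Least-squares estimate of the ten coefficients (unit data uncertainties).
c_fit, *_ = np.linalg.lstsq(X, I, rcond=None)
```

With noiseless synthetic data the coefficients are recovered to near machine precision; fitting $\log I$ instead of $I$ with the same design matrix reproduces the $\exp P_{\rm 3}$ treatment of the TiO indices.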
The fitting is performed with the intrinsic REGRESS procedure in Interactive Data Language (IDL) and is
achieved by minimizing
the $\chi^2$ figure of merit assuming that the uncertainty, $\sigma_{\rm i}(I)$, of all ``data'' points
$I_{\rm i}$, is unity. We note that IDL installations include the source code for all intrinsic procedures,
and we have been able to critically inspect the REGRESS procedure. We use REGRESS to compute nine linear
regression coefficients and a constant term in Equation \ref{poly} for nine basis functions that consist of the powers of
the independent parameters ($T_{\rm eff}$, $[{{\rm M}\over{\rm H}}]$) and their products up to third order.
This amounts to fitting a model with nine parameters to 90 data points (90 computed $I$ values for ten $T_{\rm eff}$
and nine $[{{\rm M}\over {\rm H}}]$ values), thus having 81 degrees of freedom.
This is consistent with the fitting method of W94 - we note that their Equation 4 appears to be the
standard formula for $\chi^2$ because their summed square deviations are weighted by their inverse observational
uncertainty, $1/\sigma^2$, although they have labeled their figure of merit ``rms$^2$''. We note
that because we have no proper data uncertainties ({\it i.e.} ``measurement errors'' in data modeling),
$\sigma_{\rm i}(I)$, we cannot properly propagate
errors to compute uncertainties for the fitted model parameters, $C_{\rm n}$ - rather we compute ``fitting uncertainties'',
$\sigma$, {\it post hoc} from the calculated value of $\chi^2$ and the number of degrees of freedom.
Assuming these ``fitting uncertainties'' reflect errors that are normally distributed,
they may be interpreted as $68\%$ confidence intervals for the fitted values of the corresponding $C_{\rm n}$.
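The {\it post hoc} ``fitting uncertainty'' estimate described above can be sketched generically (a standard least-squares error propagation with unit data uncertainties, not the exact REGRESS internals; the toy cubic below stands in for the ten-term polynomial):

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy cubic fit on 90 points, standing in for the ten-term polynomial fit.
x = np.linspace(-1.0, 1.0, 90)
X = np.column_stack([x**0, x, x**2, x**3])
y = X @ np.array([1.0, -0.5, 0.3, 0.1]) + rng.normal(scale=0.05, size=x.size)

c, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ c
chi2 = float(np.sum(resid**2))               # sigma_i(I) assumed equal to 1
dof = x.size - X.shape[1]                    # data points minus fitted parameters
cov = (chi2 / dof) * np.linalg.inv(X.T @ X)  # post hoc parameter covariance
sigma_c = np.sqrt(np.diag(cov))              # 1-sigma "fitting uncertainties"
```

Under the normality assumption stated above, $c_{\rm n} \pm \sigma_{\rm n}$ then brackets an approximate $68\%$ confidence interval for each coefficient.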
\paragraph{}
Tables \ref{coeffsthetaIDS} and \ref{coeffsthetaSDSS} present these $\{ C_{\rm n} \}$ values for LTE spectra of IDS and
SDSS resolution, respectively,
for the nine indices that were identified above as good Lick-XMP diagnostics,
in the same format and numerical precision as that of Table 2 of W94 (``Data for stars of
$3570 < T_{\rm eff} < 5160$ K'') for direct comparison. Table \ref{coeffsvmkIDS} presents the results for
IDS spectral resolution of
performing the same third order multiple linear regression
with the model $V-K$ color in place of $\log\theta$. We also present the values of the standard
deviations, $\sigma$, computed for each $C_{\rm n}$ parameter from $\chi^2$, and the value of the
reduced $\chi^2$ given 81 degrees of freedom, although we caution that in the absence of proper
measurement errors, $\chi^2$ is not really a goodness-of-fit figure of merit.
Figs. \ref{residTeffFe5270} through \ref{residTeffNaD} show the comparison of the polynomial fits
to the modeled $I(T_{\rm eff})$ relation for $[{{\rm M}\over {\rm H}}]=0.0$ at SDSS and IDS
resolution, and the residuals. For SDSS resolution, we also show both the fitted relation and
the residuals computed with the 1-$\sigma$ ``fitting uncertainties'' (see above) added and subtracted
from each of the $C_{\rm n}$ values to illustrate these two limiting cases. Figs. \ref{residAbndFe5270}
through \ref{residAbndNaD} present the same information for the polynomial fits
to the modeled $I([{{\rm M}\over {\rm H}}])$ relation for $T_{\rm eff}=4000$ K.
\paragraph{}
There are a number of important differences between our approach and that of W94:\\
a) Our $C_{\rm n}$ values complement those of W94 by being based on spectra of SDSS resolution
rather than those of IDS resolution, and are more relevant to both SDSS and LAMOST spectra
and to the investigation of \citet{franchini10}. \\
b) $\log g$ is not a fitting parameter, so we do not have cross-product terms that capture the
dependence of $I$ on the product of $\log g$ or any of its powers and other parameters or their powers. \\
c) Table 2 of W94 contains coefficients
for fits in the $T_{\rm eff}$ range of 3570 to 5160 K, whereas our fits apply to the range 3750 to 6500 K. The
difference at the high $T_{\rm eff}$ end is necessary for us to accommodate the $T_{\rm eff}$ range of interest for detected halo red giants.
Because the lower limit of our $T_{\rm eff}$ range is significantly higher than that of W94, results for our
TiO indices are especially suspect, and not comparable to W94.
W94 included stars of $T_{\rm eff} > 5160$ K in their fits for warm and hot stars (5040 to 13260 K, their Table 3).
This is well beyond the limits of our red giant grid and we are not able to compare to their ``hot star'' fits. \\
d) W94 carry out a careful statistical $F$-value test of the goodness-of-fit to separately determine whether the addition of
each successive
$C_{\rm n}$ term in Eq. \ref{poly} led to a statistically significant change in the fitted $I$ value for each index. As a result,
many of the $C_{\rm n}$ values in their Table 2 are blank because, presumably, including the corresponding term
in the fitted function led to an insignificant reduction in the variance. We have chosen to simply let the
regression find
$C_{\rm n}$ values for all ten terms in Eq. \ref{poly} consistently for all indices, with the expectation that
terms that are of low significance will have small fitted values of the corresponding $C_{\rm n}$. We expect that a third
order fit is of low enough order that spurious high-order solutions will not compromise the fit, and our situation
is simpler than that of W94 in that $\log g$ is not a fitting parameter.
\paragraph{Error analysis }
W94 describes the uncertainty in determining an $I$ value as a ``typical rms error per
observation'', which we denote $\sigma_{\rm Worthey}$. For reference, we have included an indicator of the magnitude
of $\sigma_{\rm Worthey}$ for
each index in Figs. \ref{Fe5270_4000} through \ref{NaD_0}.
W94 quantifies the uncertainties in the fitted $I$ values with a ``residual'' rms value
in units of the observational uncertainty, $\sigma_{\rm Worthey}$.
We are not working with observational data, and we quantify the
uncertainties in our fits, $\sigma$, with the quadrature sum of the 1-$\sigma$ uncertainty estimates computed
for each $C_{\rm n}$ value from the multiple linear regression
procedure (as described above), and have included them in
Tables \ref{coeffsthetaIDS} through \ref{coeffsvmkIDS}.
Generally, the first order coefficients, $C_{\rm 1}$ for $[{{\rm M}\over{\rm H}}]$ and $C_{\rm 2}$ for
$T_{\rm eff}$, from the fit to SDSS resolution spectra are larger than those from the fit to the IDS
resolution spectra. This is to be expected because in spectra of higher $R$ value the first order dependence
of the strength of spectral features on stellar parameters is less diluted by the re-shuffling of information
within the instrumental spectral profile.
\paragraph{}
The coefficient of the $\log ^3\theta$ term, $C_{\rm 7}$, generally had, by far, the largest
1-$\sigma$ uncertainty value of all the $C_{\rm n}$ values, and dominates our quadrature
sum $\sigma$ values. Generally, the parameter with next largest 1-$\sigma$ uncertainty
was the $\log ^2\theta$ term, $C_{\rm 4}$, but it was much smaller than that of $C_{\rm 7}$.
Table \ref{coeffsthetaIDS} shows that the magnitude of $C_{\rm 7}$ is generally larger than
any of the other coefficients, and this is consistent with what was reported in Table 2 of
W94. We also found that the value of $C_{\rm 7}$ was the most sensitive to spectral
resolution, differing by as much as a factor of four between the fits to IDS and SDSS resolution
spectra. We conclude that the terms in non-linear powers of $\log\theta$ were generally least well fit.
However, for these GK stars, $\theta\equiv 5040/T_{\rm eff}$ is of order unity, and
the squares and cubes of the independent variable $\log\theta$ are much less than unity. We have found
that the 1-$\sigma$ uncertainties of the analogous $C_{\rm 4}$ and $C_{\rm 7}$ coefficients for the $(V-K)^{2}$ and
$(V-K)^{3}$ terms from the fits with $V-K$ in lieu of $T_{\rm eff}$ are much smaller
and are consistent with those of the other $C_{\rm n}$ coefficients, and from Table \ref{coeffsvmkIDS}
it can be seen that the corresponding total $\sigma$ values are smaller.
\paragraph{}
For our nine XMP indices, we compare our fitted $C_{\rm n}$ values for the zeroth
and first order terms of the fit to IDS resolution spectra with those of W94,
given the four caveats listed above. For the fits with $\log\theta$ as an independent parameter,
comparable to Table 2 of W94, for five of six indices designated ``Fe'', our fitted $C_{\rm 0}$
value is consistently smaller than that of W94, and ranges from about 0.6 to 0.75 of the W94 value,
and for Fe5270 we are in very close agreement.
For Na $D$, our $C_{\rm 0}$ value is also smaller, but is in closer agreement with that of W94.
Mg$_{\rm 1}$ is our only index for which our $C_{\rm 0}$ is larger than W94, by about a factor of 1.5.
For Mg $b$ we
find a negative value of $C_{\rm 0}$ where W94 finds a positive value. In both cases, the magnitude of
$C_{\rm 0}$ is about unity.
For the $\log\theta$ coefficient, $C_{\rm 2}$, our values for Fe4531, Fe5335, and Fe5406 are
close to the values of W94.
The remaining $C_{\rm 2}$ values are also generally within a
factor of two, greater or less than, and of the same sign as those of W94. The two exceptions are Na $D$ for
which our $C_{\rm 2}$ value is about three times larger, and Mg$_{\rm 1}$ for which our $C_{\rm 2}$ value is positive
and that of W94 is negative, although the magnitudes are within a factor of two.
For the $[{{\rm M}\over{\rm H}}]$ coefficient, $C_{\rm 1}$, our values for Fe5335, Fe5406, Mg $b$, and Mg$_{\rm 1}$ agree closely with
those of W94. For the remaining XMP-Lick indices, including Na $D$, our $C_{\rm 1}$ values differ by as much as a factor of 2.5,
greater or less than, those of W94.
\paragraph{}
For the fits with $V-K$ as an independent parameter (in lieu of $\log\theta$), only four of our nine
Lick-XMP indices appear in the comparable Table 4 of W94 (Fe4383, Fe4531, Fe5015, and Fe5406).
For Fe5015, W94 does not present a $C_{\rm 0}$ value, but our $C_{\rm 2}$ ($I(V-K)$) agrees
closely with theirs. For Fe4531, W94 does not present a $C_{\rm 2}$ value, and our $C_{\rm 0}$ value
is about a factor of two larger. For both of these indices, our $C_{\rm 1}$ values ($I([{{\rm M}\over{\rm H}}])$,
at constant $V-K$ this time) are close to those of W94.
For Fe4383 and Fe5406
the situation is disconcerting and puzzling - we find both $C_{\rm 0}$ and $C_{\rm 2}$ values with the opposite sign and
a difference in magnitude ranging from two to four.
For both of these indices, W94 have no $C_{\rm 1}$ value for the term in $[{{\rm M}\over{\rm H}}]$,
indicating, presumably, they found insignificant {\it linear} dependence of $I$ on $[{{\rm M}\over{\rm H}}]$
at fixed $V-K$,
and, consistently, we find modest $C_{\rm 1}$ values of -0.4489 and -0.0986, respectively.
We have been unable to identify the
reason for the gross discrepancy in fitted polynomial coefficients for Fe4383 and Fe5406, beyond those
expressed in the caveats itemized above.
\subsection{Special indices }
\paragraph{CaHK }
\citet{severn05} introduced a new Lick-type index, which they designate CaHK, and
\citet{franchini10} modified the definition by changing the blue
pseudo-continuum band-pass and removing the central 5 \AA~ around the line cores to
remove potential chromospheric emission. \citet{franchini10} found that CaHK had the advantage
of being sensitive to $\alpha$-enhancement, as well as being less influenced by Fe than most of the
other atomic indices. The index seems to show promise as a Lick-XMP
$[{{\rm M}\over{\rm H}}]$ diagnostic because the line is very strong. We do not remove the central 5 \AA~
from our index computation because our models have outer atmospheres that are in
radiative equilibrium and do not have chromospheric emission. We have confirmed that
we can reproduce the qualitative double-valued behavior of $I(\theta)$ exhibited in
Fig. 7 of \citet{franchini10}. We have found that the index is a good Lick-XMP
diagnostic for our models at the cool edge of our grid, of $T_{\rm eff}$ value equal
to 3750 to 4000 K. However, unfortunately, CaHK is either double-valued
as a function of $[{{\rm M}\over{\rm H}}]$ for $T_{\rm eff}$ in the range of 4250 to 5500 K,
or becomes negligible and insensitive to $[{{\rm M}\over{\rm H}}]$ for $[{{\rm M}\over{\rm H}}] \leq -4$
in the $T_{\rm eff}$ range of 5500 to 6500 K. We can only recommend CaHK as a
Lick-XMP index for $T_{\rm eff} \leq 4000$ K. Because the behavior of CaHK is double
valued as a function of both $T_{\rm eff}$ and $[{{\rm M}\over{\rm H}}]$ throughout
much of the $T_{\rm eff} - [{{\rm M}\over{\rm H}}]$ plane, the $C_{\rm n}$ and partial
derivative values are probably not useful for interpolation, and we have not included them.
\paragraph{TiO }
We have included the TiO$_{\rm 1}$ and TiO$_{\rm 2}$ indices in our analysis because
they are part of the Lick system, but they are only significant in strength and
in $[{{\rm M}\over{\rm H}}]$-sensitivity for the coolest part of our grid, and then
only at the highest $[{{\rm M}\over{\rm H}}]$ values. TiO$_{\rm 2}$ is the stronger
of the two indices and remains sensitive to $[{{\rm M}\over{\rm H}}]$ in the -2.5 to 0.0
range to $T_{\rm eff}$ values as high as 4250 K. Neither TiO index has pathologies
such as being double-valued.
Note that we fit Eq. \ref{poly} to $\log I$ to
account for the strong $T_{\rm eff}$ and $[{{\rm M}\over{\rm H}}]$ dependence of
TiO$_{\rm 1}$ and TiO$_{\rm 2}$, following W94. Because our grid does not extend to
the low $T_{\rm eff}$ values included in the W94 fit, where TiO is strong, our $C_{\rm n}$ and
partial derivative values are not comparable to W94, and should be treated with more caution
than those of the other indices, and we do not include them here.
\subsection{Partial derivatives with respect to $T_{\rm eff}$, $V-K$ and $[{{\rm M}\over{\rm H}}]$ }
We have used our $\{ C_{\rm n} \}$ values to compute the partial derivatives
$100~ {\rm K}\times {{\partial I}\over{\partial T_{\rm eff}}}|_{[{{\rm M}/{\rm H}}]}$,
$0.5\times {{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}}$,
$0.25~ {\rm mag}\times {{\partial I}\over{\partial (V-K)}}|_{[{{\rm M}/{\rm H}}]}$,
and $0.5\times {{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{(V-K)}$
for all Lick indices modeled in LTE,
in both index units and in units of $\sigma$, as defined above, for IDS and SDSS resolution spectra.
For example, the partial derivative with respect to $\log\theta$ at constant $[{{\rm M}/{\rm H}}]$ can
be found from
\begin{eqnarray}
{{\partial I}\over{\partial\log\theta}}|_{[{\rm M}/{\rm H}]} =
C_{\rm 2} + 2 C_{\rm 4}\log\theta + C_{\rm 5} [{{\rm M}\over {\rm H}}] + \nonumber\\
3 C_{\rm 7}\log ^2\theta + C_{\rm 8} [{{\rm M}\over {\rm H}}]^2 +
2 C_{\rm 9}\log\theta [{{\rm M}\over {\rm H}}] .
\end{eqnarray}
Then, the partial derivative with respect to $T_{\rm eff}$ follows from
\begin{equation}
{{\partial I}\over{\partial T_{\rm eff}}} = - {{\partial I}\over{\partial\log\theta}} \log e / T_{\rm eff} .
\end{equation}
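In code, these derivatives follow directly from a set of fitted coefficients; the helpers below assume the $C_{\rm 0}$--$C_{\rm 9}$ ordering of Eq. \ref{poly} and apply the common-log factor $\log e$ from the chain rule above.

```python
import math

def dI_dlogtheta(C, m, t):
    # dI/d(log theta) at fixed [M/H] = m, with t = log10(5040 / T_eff),
    # using the C_0..C_9 coefficient ordering of the fitting polynomial.
    return (C[2] + 2.0 * C[4] * t + C[5] * m
            + 3.0 * C[7] * t**2 + C[8] * m**2 + 2.0 * C[9] * t * m)

def dI_dTeff(C, m, teff):
    # Chain rule: dI/dT_eff = -(dI/dlog theta) * log10(e) / T_eff.
    t = math.log10(5040.0 / teff)
    return -dI_dlogtheta(C, m, t) * math.log10(math.e) / teff

# Sanity check: with only C_2 nonzero, dI/dlog theta is the constant C_2.
C = [0.0] * 10
C[2] = 1.0
```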
Tables \ref{partialteffs} and \ref{partialvmks} present the values for our nine identified
Lick-XMP indices for SDSS resolution only, and are comparable
to Tables 7A and 7B of W94.
The values in Table \ref{partialteffs}
provide an indication of by how much
the measured value of $I$ will differ between two stars that differ by $\Delta T_{\rm eff}\approx 100$ K at each
value of $[{{\rm M}\over{\rm H}}]$, and
by how much $I$ will differ between two stars that differ by
$\Delta [{{\rm M}\over{\rm H}}]\approx 0.5$ at each value of $T_{\rm eff}$.
Alternately, these derivatives can be used to estimate the change in inferred
$T_{\rm eff}$ or $[{{\rm M}\over{\rm H}}]$ as a result of the change in computed $I$ caused
by NLTE effects ({\it i.e.} where $\Delta I \equiv I_{\rm NLTE} - I_{\rm LTE}$).
The partial derivative values
that are in units of $\sigma$ give
an indication of the significance, or detectability, of these changes in $I$ value given the formal
uncertainty $\Delta I \approx\sigma$.
\paragraph{}
As an example of the implications of NLTE effects for $[{{\rm M}\over {\rm H}}]$
determination, from Table \ref{partialteffs}, for Fe5270 the quantity
$0.5\times {{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}} = 1.2717$
at $(T_{\rm eff}, [{{\rm M}/{\rm H}}]) = (5000~ {\rm K}, 0.0)$, and from Fig.
\ref{Fe5270diff}, the computed change in $I$ caused by NLTE effects, $\Delta I$,
at 5000 K is {\bf $\sim -0.05$}.
{\bf This corresponds to an LTE model $[{{\rm M}/{\rm H}}]$ value that is smaller by}
\begin{equation}
\Delta [{{\rm M}/{\rm H}}] \approx 0.5\times (-0.05) / 1.2717 \approx -0.02
\end{equation}
{\bf Inversely, fitting a given observed $I$ value with NLTE as compared to LTE models would require a
compensating model value of $[{{\rm M}/{\rm H}}]$ that is $\sim 0.02$ larger,
consistent with the sign of the change in inferred $[{{\rm M}\over {\rm H}}]$ at fixed
$I$ value for $T_{\rm eff}=4000$ K shown in Fig. \ref{Fe5270_4000}.}
We emphasize that this is an estimate of $\Delta [{{\rm M}/{\rm H}}]$ based on the
modeled {\it LTE} value of ${{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}}$.
A more accurate estimate would follow from a NLTE value of
${{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}}$, and the importance of this
consideration depends on the magnitude of the difference between
${{\partial I}\over{\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}}$ values computed from NLTE and
LTE model grids. However, computation of
NLTE partial derivatives that are comparable to those of LTE requires a NLTE model grid that
includes {\it all} the same $(T_{\rm eff}, [{{\rm M}/{\rm H}}])$ points as the LTE grid.
Currently, our NLTE model grid covers only a subset because of the larger computational cost
of NLTE modeling, and its only purpose is to spot check the effect of NLTE on computed $I$
values at select grid point.
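The numerical estimate above can be reproduced in a couple of lines (values as quoted in the text; note that this uses the {\it LTE} partial derivative, per the caveat just discussed):

```python
# Values as quoted in the text (Fe5270 at T_eff = 5000 K, [M/H] = 0.0).
scaled_partial = 1.2717   # tabulated 0.5 * dI/d[M/H] at fixed T_eff (LTE)
delta_I = -0.05           # NLTE - LTE change in the computed index value

delta_MH = 0.5 * delta_I / scaled_partial
print(f"Delta [M/H] ~ {delta_MH:.3f}")  # prints: Delta [M/H] ~ -0.020
```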
\section{Conclusions \label{sConc}}
Using a grid of red giant synthetic spectra that extends from solar- to XMP-metallicity
($[{{\rm M}\over {\rm H}}]=-6.0$) we have identified nine of the original 21 Lick indices,
designated Lick-XMP indices,
that remain significantly detectable and significantly sensitive to $[{{\rm M}\over {\rm H}}]$
down to XMP values (at least $[{{\rm M}\over {\rm H}}]=-5.0$) for giants of
$T_{\rm eff} < 4500$ K. For warmer late-type giants,
{\it all} Lick indices become undetectable or insignificantly sensitive to $[{{\rm M}\over {\rm H}}]$
before $[{{\rm M}\over {\rm H}}]$ decreases to -4.0. The Lick-XMP indices should be the most useful ones for
characterizing very old ``fossil'' stars that formed very early in the history of the Galaxy,
and in other galaxies. We also investigated a newer Lick-type index, CaHK, introduced by
\citet{severn05} and developed by \citet{franchini10} as a potential Lick-XMP index, given
its strength. However, for CaHK, $I$ is double valued as a function of $[{{\rm M}\over {\rm H}}]$
and $T_{\rm eff}$ over much of our grid and its usefulness is restricted to the cool edge of our grid
($T_{\rm eff} < 4000$ K).
\paragraph{}
For our LTE grid of SDSS resolution spectra we present polynomial coefficients, $\{C_{\rm n}\}$,
to third order in the independent variable pairs ($\log\theta, [{{\rm M}\over {\rm H}}]$)
and ($(V-K), [{{\rm M}\over {\rm H}}]$) derived from multi-variate linear regression, approximately
comparable to those of W94 for IDS
resolution spectra. We present the partial derivatives
${{\partial I}\over {\partial T_{\rm eff}}}|_{[{{\rm M}/{\rm H}}]}$,
${{\partial I}\over {\partial [{{\rm M}/{\rm H}}]}}|_{T_{\rm eff}}$,
${{\partial I}\over {\partial (V-K)}}|_{[{{\rm M}/{\rm H}}]}$,
and ${{\partial I}\over {\partial [{{\rm M}/{\rm H}}]}}|_{(V-K)}$
computed from our $\{C_{\rm n}\}$ values.
\paragraph{}
For Fe-dominated Lick indices, the effect of NLTE is to generally weaken the value of $I$
at any given ($T_{\rm eff}, [{{\rm M}\over {\rm H}}]$) value. To put the magnitude of the
NLTE effect into context, the change, $\Delta I$,
caused by NLTE effects is generally comparable to the change that results from
computing $I$ from spectra of SDSS resolution rather than that of IDS. The partial
derivatives can be used to estimate a change in inferred $T_{\rm eff}$ at fixed $[{{\rm M}\over {\rm H}}]$,
or a change in inferred $[{{\rm M}\over {\rm H}}]$ at fixed $T_{\rm eff}$, resulting from
a change in $I$ ({\it e.g.} as caused by NLTE effects) at any ($T_{\rm eff}, [{{\rm M}\over {\rm H}}]$)
value pair throughout our grid. For example, from Figs. \ref{Fe5270_0} and \ref{Fe4383_0},
for stars of inferred $T_{\rm eff} \gtrsim 4200$ K ($\theta \lesssim 1.2$), an Fe-dominated $I$ value
computed in LTE that is too strong might be compensated for by inferring a $T_{\rm eff}$ value that
is too large, for fixed inferred $[{{\rm M}\over {\rm H}}]$.
\acknowledgments
CIS is grateful for NSERC Discovery Program grant RGPIN-2014-03979. The calculations were
performed with the facilities of the Atlantic Computational Excellence Network (ACEnet).
\section{Introduction}
In 2014, the International Diabetes Federation (IDF) \citep{b1} estimated 175 million undiagnosed cases of diabetes. This lack of diagnosis is compounded by the absence of statistics on diabetes-related deaths in many lower-and-middle-income countries (LMIC). The mortality data that is available is frequently derived from hospital records, and it possibly understates mortality from fatalities occurring outside of hospital \cite{Blackstock}. People with diabetes may develop chronic complications such as neuropathy, nephropathy and retinopathy, as well as uncontrolled hyperglycaemia, and can also develop acute, potentially fatal complications such as diabetic ketoacidosis (DKA) and hyperosmolar hyperglycaemic syndrome (HHS) \cite{Gerich, Umpierrez}.
A verbal autopsy (VA) is a technique, endorsed by the World Health Organization (WHO), to determine a likely cause of death (COD) in countries without robust death registration systems \cite{b4}. It is a record of an interview between a non-clinician field worker and a caretaker of the deceased about events around an uncertified death. A VA report consists of two kinds of standardized questions: the closed-ended questions, where the interviewee responds to a 'yes' or 'no' question, and the open-ended section, where the interviewee narrates the events around the period of death \cite{b7}.
Like much electronic data, VA reports may benefit from the application of machine learning techniques. While some studies suggest the narrative section of the VA report is unnecessary and of limited use due to its high dimensionality and sparsity, we hypothesized that this narrative text would improve the ability of verbal autopsy reports to predict a cause of death. We compared the binary features, text features and a combination of binary and text features in VA reports from Agincourt, South Africa, for their accuracy in classifying COD from uncontrolled hyperglycaemia. The machine learning techniques of logistic regression, random forest, XGBoost and neural networks were employed to automate COD classification in the three feature settings. We relied on data labeled by physician experts \cite{Blackstock} as our gold standard for determining whether a death had been caused by uncontrolled hyperglycaemia (positive; coded as 1) or not (negative; coded as 0).
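As a minimal sketch of the three feature settings (hypothetical toy arrays, not the Agincourt data or the actual Scikit-learn/Keras pipelines), the ``combined'' setting simply concatenates the binary questionnaire responses with a narrative embedding per record before classification:

```python
import numpy as np

rng = np.random.default_rng(0)
n_records = 6
binary = rng.integers(0, 2, size=(n_records, 4)).astype(float)  # yes/no items
text_emb = rng.normal(size=(n_records, 8))  # stand-in narrative embeddings
y = rng.integers(0, 2, size=n_records).astype(float)  # 1 = hyperglycaemia death

X = np.hstack([binary, text_emb])           # the "combined" feature setting

# A few steps of plain logistic regression as a stand-in classifier.
w = np.zeros(X.shape[1])
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n_records    # gradient step on the log-loss
probs = 1.0 / (1.0 + np.exp(-X @ w))
```

In the study itself, each of the binary-only, text-only and combined settings is fed to logistic regression, random forest, XGBoost and a neural network; only the column slice of the feature matrix changes.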
\section{Background}
Two-thirds of the 60 million annual deaths in lower-and-middle-income countries occur outside of health facilities, often in countries with weak death registration systems and therefore do not have a recorded cause \cite{b3}. Information about cause of death (COD) is, however, vital for public health researchers and policy makers and verbal autopsy technology was developed to address this need.
Verbal autopsy, a tool in which the caretaker of the decedent is interviewed about their health preceding their death, can be formulated as a semi-supervised learning problem where binary or continuous questionnaire responses form features of the disease and the label of the disease is the categorical response \cite{b22}. VAs have proved to be highly specific ($98\%$) for various types of CODs \cite{b8}.
Several machine learning approaches have been utilized in a number of studies, including Na\"{i}ve Bayes, neural networks, support vector machines, random forests, InterVA4, InsilicoVA, AdaBoost, XGBoost, and Tariff \cite{b11, Fottrell, Fottrell2, Danso2} and have demonstrated the ability to automate the coding of COD from binary features of VA reports. While some studies suggest the narrative section of the VA report is unnecessary and of limited use in COD determination \cite{King}, others argue that the narrative section is more convenient as it does not require a specific questionnaire format and takes less time to collect compared to the closed-ended questions \cite{b20}. Information in free-text narrative, such as the history of the disease symptoms and treatment, may often be essential to making a correct diagnosis.
\citet{Boag} evaluated common representations of clinical text in a variety of clinical tasks and demonstrated that the best method of representing narratives from clinical text for prediction remains debatable. This is because word embeddings do not capture medical knowledge but describe the meaning of a word based on its context. \citet{Grnarova} also showed that no text representation performs better than all others and that simple representations outscore advanced representations on tasks like age prediction and hospital admission, while complex models perform best on diagnosis and length-of-stay tasks.
Clinical text representation models, ClinicalBERT \cite{Huang}, MeDAL \cite{Wen}, Publicly Available Clinical BERT Embeddings \cite{Alsentzer}, BioBERT \cite{Lee}, BioELMo \cite{Jin} and SciBERT \cite{Beltagy} have been created to improve natural language processing tasks of predicting mortality and hospital readmission from clinical notes. Datasets on which these state-of-the-art narrative text representations are trained include a collection of clinical and electronic health records from hospitals, MIMIC-II (Medical Information Mart for Intensive Care) \cite{Johnson}, MIMIC-III, PubMed abstracts and PMC full text articles and Wikipedia. The training tasks are disease diagnosis, hospital readmission and mortality prediction.
Other work has focused on natural language processing in VA narratives, and numerous methods have been used to classify COD from VA narratives, including term frequency and TF-IDF (term frequency--inverse document frequency) \cite{Danso}, the Tariff method \cite{b16}, the neural network classifier \cite{b20} and random forest \cite{AbrahamRF}. \citet{Danso2} demonstrated improved performances of classification algorithms when using linguistic features of part-of-speech tags, noun phrases and word pairs for COD diagnosis from VA reports in addition to frequency-based features. \citet{b21} showed that using models with key phrases as additional features outperforms topic models, which depend on features present in the corpus, when using a multi-task learning model to learn COD categories. Addition of character information appears to improve model classification for smaller datasets (500 to 1000 records), making character-based convolutional neural networks (CNNs) a promising technique for automated VA coding \cite{b18}.
The bag of words model excels at tasks that require it to predict categories that are directly represented by single words in the notes, such as the frequency with which the words 'diabetes' and 'sugar' might appear in a VA report. This may make it a good model to predict diabetes as the cause of death for a VA case; however, this approach ignores the order of words inside a document and so fails to reveal all of the underlying conditions behind a patient's death.
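A literal bag-of-words count over a toy narrative (illustrative text, not a real VA record) shows the kind of single-word signal such a model can exploit:

```python
from collections import Counter

narrative = ("she was known to have diabetes and her sugar was high "
             "she complained of thirst and passed a lot of urine "
             "her sugar remained high until she died")
counts = Counter(narrative.split())

print(counts["sugar"], counts["diabetes"])  # prints: 2 1
```

Word order ("remained high until she died") is discarded entirely, which is the limitation noted above.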
Doc2vec has successfully been used to classify documents in various fields including health care \cite{doc2vec, Boag}. Its variants, the continuous distributed bag of words paragraph vector (PV-DBOW) and the distributed memory paragraph vector (PV-DM), work on the idea that the element values of a word are influenced by the values of other words in its vicinity, and this idea is embodied as a neural network structure, as with word2vec. On sentiment analysis and text classification tasks, the model surpasses the bag of words model and advanced models based on recursive and recurrent neural networks, which is why it was chosen for this study. While both the binary and VA text representation techniques for COD automation outlined above used raw VA reports, we used the traditional approach of physician evaluation of VA forms to incorporate clinical knowledge of diabetes and uncontrolled hyperglycaemia into our embeddings.
The outcomes of this work will aid the public health sector in areas lacking adequate death registration systems by improving automation of COD from binary and text features of VA reports for research, surveillance, and diagnosis of diabetes. Improved VA text representations will provide accurate information regarding diabetes as a cause of mortality, early detection of which could help prevent the lethal complication of uncontrolled hyperglycaemia.
\section{Methodology and Research Instruments}
\subsection{Algorithms}
\begin{description}
\item[{Natural Language Processing Algorithms}:] Doc2vec \cite{doc2vec} with both its variants, the continuous distributed bag-of-words paragraph vector (PV-DBOW) and the distributed memory paragraph vector (PV-DM).
\item[{Classification Algorithms}:] Logistic regression, artificial neural network, random forest and XGBoost.
\item[{Hardware and Software}:] All experiments were run with Python v3.6 using Sci-kit Learn \cite{Scikit} for random forest, XGBoost and logistic regression classifiers and Keras \cite{chollet2015keras} with Theano \cite{Theano} backend for neural networks. Text preprocessing was done with the Natural Language Tool Kit NLTK \cite{NLTK}.
\end{description}
\subsection{Dataset}
The MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt) \cite{Kahn}, a population health and demographic surveillance system located in rural South Africa that supports research into the origins and effects of diseases on social transitions and populations, provided the verbal autopsy (VA) dataset for this study. Initial coding of the data was done by a paediatrician with expertise in diabetes who reviewed VAs to identify features suggestive of diabetes or uncontrolled hyperglycaemia. Cases where the reviewing physician was uncertain were reviewed with colleagues with experience in adult internal medicine, diabetes, and endocrinology until agreement was reached (Table~\ref{tab:VAfeature}). Positive instances (79) were those in which uncontrolled hyperglycaemia was a likely cause of death and negative cases (8619) were those in which it was not.
\begin {table}[htb!]
\caption{Features of the Agincourt verbal autopsy dataset.} \label{tab:VAfeature}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lll}
\toprule
\textbf{Feature} & \textbf{Type} & \textbf{Description} \\
\bottomrule\toprule
Female & Binary & The deceased's gender \\
Tuber & Binary & \makecell[lt]{The deceased had been diagnosed with tuberculosis by a medical facility} \\
Diabetes & Binary & \makecell[lt]{The deceased had been diagnosed with diabetes by a medical facility} \\
Men-con & Binary & \makecell[lt]{The deceased had signs of mental confusion and memory loss} \\
Cough & Binary & The deceased had a cough \\
Ch-Cough & Binary & The deceased had a chronic cough \\
Diarr & Binary & Diarrhoea was present in the deceased \\
Exc-Urine & Binary & The deceased urinated excessively \\
Exc-Drink & Binary & The deceased drank water excessively \\
Age & Numeric & The deceased's age \\
Description & Text & \makecell[lt]{Narration of signs and symptoms of the deceased by relative}\\
Class & Binary & Cause of death classification by diabetes\\
\bottomrule
\end{tabular}
}
\end {table}
We classified as binary features the symptoms to which responses were 'yes' or 'no'. These included excessive thirst, excessive urination, mental confusion and the others listed in Table~\ref{tab:VAfeature}. The narration describing symptoms and events around the death of the deceased was classified as a text feature.
'Yes' and 'no' responses were coded as '1's and '0's, and 'Female' and 'Male' were likewise converted to '1's and '0's. To leverage clinical knowledge and incorporate domain knowledge of uncontrolled hyperglycaemia into VA embeddings, we used a physician-annotated dataset. The words 'diabetes', 'sugar' and their misspelled forms were removed from the VA text to uncover symptoms that distinguish cases with diabetes from those without. Text processing included removing punctuation, special characters, short words and stop words, as well as lower-casing the text. Each VA case includes a paragraph of 4 to 6 sentences of variable length narrating the instances surrounding death. We tokenized the final text, tagged each case with its respective class as its unique ID, and fed it as input to the doc2vec model.
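The cleaning steps above can be sketched as a small Python function. The stop-word list and the keyword spelling variants below are illustrative placeholders, not the actual lists used in the study (which relied on NLTK's stop words):

```python
import re

# Illustrative stop words and target keywords; the real study used NLTK's
# stop-word list and a fuller set of misspellings of 'diabetes'/'sugar'.
STOP_WORDS = {"the", "a", "an", "was", "and", "of", "had", "he", "she"}
KEYWORDS = {"diabetes", "diabetis", "sugar"}

def preprocess(narrative, min_len=3):
    """Lower-case, strip punctuation/special characters, then drop short
    words, stop words, and the diabetes/sugar keywords."""
    text = narrative.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # remove punctuation / special chars
    tokens = text.split()
    return [t for t in tokens if len(t) >= min_len
            and t not in STOP_WORDS and t not in KEYWORDS]
```

Each cleaned token list would then be tagged with its class as a unique ID before being fed to the doc2vec model.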
Following the word2vec \cite{MikolovA} model, given a sequence of words $w_1, w_2,..., w_T$, the doc2vec model is trained to predict the vector of a word $w_t$ given the vectors of the $2k$ words surrounding it in a context window.
The model maximizes the average log probability
\begin{equation}
\frac{1}{T} \sum_{t=k}^{T-k} \log p(w_t| w_{t-k},...,w_{t+k})
\end{equation}
\noindent and the task of prediction is
\begin{equation}
p(w_t| w_{t-k},..., w_{t+k}) = \frac{e^{{y}_{w_t}}}{\sum_{i} e^{y_i}}
\end{equation}
\noindent In addition, the model also takes a paragraph vector $p_i$, with $i$ identifying which body of text $w_1, w_2,..., w_T$ the words come from.
Each paragraph and each word inside a paragraph were mapped to a unique vector, and these vectors were then concatenated to predict the next word, forming continuous distributed vector representations for VA texts. The contexts were drawn from a sliding window of a defined length. Stochastic gradient descent and backpropagation were used to train both the word and paragraph vectors. The size of the embedding dimension and the length of the context window (the number of words around the target word) were the main parameters for training the doc2vec embeddings and were both tuned experimentally. The link to the code is available at \cite{thoko2021}.
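As a numeric sanity check, the prediction step of equations (1)--(2) can be sketched in a few lines: the paragraph vector is averaged with the context word vectors (PV-DM style) and scored against every vocabulary word with a softmax. All vectors below are made-up 3-dimensional examples, not trained embeddings:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def predict_next(paragraph_vec, context_vecs, output_vecs):
    # combine paragraph and context vectors by averaging (PV-DM style)
    combined = [(p + sum(c[i] for c in context_vecs)) / (1 + len(context_vecs))
                for i, p in enumerate(paragraph_vec)]
    # score each vocabulary word, then normalize as in equation (2)
    scores = [sum(ci * oi for ci, oi in zip(combined, out)) for out in output_vecs]
    return softmax(scores)

probs = predict_next([0.1, 0.2, 0.0],
                     [[0.3, 0.1, 0.0], [0.2, 0.0, 0.1]],
                     [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
```

In training, the error of this prediction is backpropagated into both the word and paragraph vectors.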
\begin {table}[htb!]
\centering
\caption{Random forest performance on cause of death classification with feature vectors dimensions from PV-DBOW, PV-DM and their combinations.} \label{Vectors}
\begin{tabular}{ll|rrrrr}
\toprule
\textbf{Method} & \textbf{Dims} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score} & \textbf{AUC-ROC} & \textbf{Accuracy} \\
\bottomrule\toprule
\multirow{6}{*}{\textbf{PV-DM}}
&\textbf{50} & 0.3702 & 0.0601 & 0.0831 & 0.6468 & 0.9186 \\
&\textbf{100} & 0.1226 & 0.0147 & 0.0261 & 0.5297 & 0.9302 \\
&\textbf{200} & 0.2226 & 0.0282 & 0.0494 & 0.5853 & 0.9420 \\
&\textbf{300} & 0.1619 & 0.0422 & 0.0667 & 0.5674 & 0.9663 \\
&\textbf{400} & 0.1952 & 0.0528 & 0.0802 & 0.5846 & 0.9675 \\
&\textbf{500} & 0.1333 & 0.0384 & 0.0582 & 0.5540 & 0.9678 \\\midrule
\multirow{6}{*}{\textbf{PV-DBOW}}
&\textbf{50} & 0.8464 & 0.8166 & 0.8243 & 0.9224 & 0.9971 \\
&\textbf{100} & 0.7964 & 0.6731 & 0.7219 & 0.8964 & 0.9948 \\
&\textbf{200} & 0.8048 & 0.7606 & 0.7794 & 0.9012 & 0.9962 \\
&\textbf{300} & 0.7964 & 0.8024 & 0.7980 & 0.8974 & 0.9969 \\
&\textbf{400} & 0.7774 & 0.8322 & 0.7930 & 0.8879 & 0.9969 \\
&\textbf{500} & 0.8607 & 0.8196 & 0.8320 & 0.9296 & 0.9974 \\\midrule
\multirow{6}{*}{\makecell{\textbf{PV-DM +}\\\textbf{PV-DBOW}}}
&\textbf{50} & 0.8298 & \textbf{0.8571} & \textbf{0.8324} & 0.9141 & \textbf{0.9972}\\
&\textbf{100} & 0.7548 & 0.5726 & 0.6192 & 0.8746 & 0.9926 \\
&\textbf{200} & \textbf{0.8405} & 0.7526 & 0.7879 & \textbf{0.9189} & 0.9960 \\
&\textbf{300} & 0.8298 & 0.6504 & 0.7231 & 0.9129 & 0.9946 \\
&\textbf{400} & 0.8131 & 0.7738 & 0.7781 & 0.9053 & 0.9962 \\
&\textbf{500} & 0.7798 & 0.7528 & 0.7578 & 0.8889 & 0.9963 \\
\bottomrule
\end{tabular}
\end {table}
\citet{Le} show that combining the two variants of the paragraph vector, namely the distributed memory paragraph vector (PV-DM) and the distributed bag of words paragraph vector (PV-DBOW), improves performance. We sought to test this prediction and experimented with a number of dimensions for the feature vectors on both models and their combination. From Table~\ref{Vectors}, the optimal vector dimension is 50 for each model. We next cross-validated the window size using the validation set; the optimal context window size was 9 words, with the 10th word as the one to be predicted.
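The combination of the two variants amounts to concatenating the per-document feature vectors from each model. A minimal sketch with placeholder vectors (real vectors would come from the trained PV-DM and PV-DBOW models):

```python
# Placeholder per-document vectors standing in for trained embeddings:
# combining two 50-dimensional representations yields the 100-dimensional
# "PV-DM + PV-DBOW" features of Table 2.
def combine(dm_vecs, dbow_vecs):
    assert len(dm_vecs) == len(dbow_vecs)
    return [dm + dbow for dm, dbow in zip(dm_vecs, dbow_vecs)]

dm = [[0.0] * 50 for _ in range(3)]     # dummy PV-DM vectors
dbow = [[1.0] * 50 for _ in range(3)]   # dummy PV-DBOW vectors
features = combine(dm, dbow)
```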
\subsubsection{Preprocessing}
To address the imbalanced dataset, a technique that combines the synthetic minority oversampling technique (SMOTE) \cite{SMOTE} and the Tomek Links (T-Links) \cite{Thai} undersampling was used.
SMOTE first selects a minority-class example and its $k$ nearest minority-class neighbours, then synthesizes new examples along the line segments joining the example to those neighbours: a synthetic point is placed at a random position between the minority data point and one of its neighbours. This process is repeated for each minority data point and its $k$ neighbours until the data is balanced.
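The interpolation step can be sketched in a few lines of pure Python; a full implementation (e.g. imbalanced-learn's SMOTE) would also handle the nearest-neighbour search and the sampling ratio. The toy minority points below are made up:

```python
import random

# Place a synthetic point at a random position on the line segment
# joining a minority point to one of its minority-class neighbours.
def synthesize(point, neighbour, rng):
    gap = rng.random()  # position along the joining segment, in [0, 1)
    return [p + gap * (n - p) for p, n in zip(point, neighbour)]

rng = random.Random(0)
minority = [[0.0, 0.0], [1.0, 1.0]]  # toy minority class
new_point = synthesize(minority[0], minority[1], rng)
```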
The Tomek Links (T-Links) undersampling approach identifies all pairs of data points that are nearest neighbours of each other but belong to different classes. Two points $a$ and $b$, with $a$ an instance of class $A$ and $b$ an instance of class $B$, form a Tomek link if there is no instance $c$ with $d(a, c) < d(a, b)$ or $d(b, c) < d(a, b)$, where $d(\cdot,\cdot)$ denotes the distance between two points.
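This definition translates directly into a small detection function; the toy one-dimensional dataset below is invented for illustration:

```python
# A pair of opposite-class points forms a Tomek link when no third point
# is closer to either of them than they are to each other.
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def is_tomek_link(i, j, points, labels):
    if labels[i] == labels[j]:
        return False
    d_ij = dist(points[i], points[j])
    for k in range(len(points)):
        if k in (i, j):
            continue
        if dist(points[i], points[k]) < d_ij or dist(points[j], points[k]) < d_ij:
            return False
    return True

points = [[0.0], [0.4], [3.0]]  # toy data
labels = [0, 1, 0]
# points 0 and 1 are mutual nearest neighbours of opposite classes
```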
In the preprocessing of the VA data, we combined Tomek Links undersampling with SMOTE oversampling. On classification tasks, the combination of SMOTE and undersampling approaches (ENN and Tomek Links) has been shown to be beneficial, yielding a better area under the receiver operating characteristic curve (AUC-ROC) \cite{Purnajaya}. A 5-fold cross-validation step was included in the training process to evaluate the classifiers' performance. The SMOTETomek approach was used to resample the VA data in each fold, and the classifier was trained on the training folds and validated on the remaining fold. The optimal hyperparameters of each classifier were found using a grid search technique.
The performance of the classifiers on the COD classification task with respect to the physician expert diagnosis was evaluated using accuracy, defined as the number of correct predictions out of all predictions made, and precision, defined as the number of deaths correctly identified as due to uncontrolled hyperglycaemia out of all positive predictions made. We also used recall, defined as the number of deaths correctly identified as due to uncontrolled hyperglycaemia out of all deaths due to uncontrolled hyperglycaemia.
Both recall and precision capture pertinent cases. A model's precision tells us how many of the cases it flags as deaths due to uncontrolled hyperglycaemia can be trusted, while its recall tells us what fraction of the true deaths due to uncontrolled hyperglycaemia it manages to find. To account for the balance of recall and precision, we used the F1-score, a weighted average of precision and recall. We were also interested in how well our classifiers predict negative and positive cases across different thresholds for labelling a death as due to uncontrolled hyperglycaemia; this was assessed with a receiver operating characteristic (ROC) curve. We also used the area under the ROC curve (AUC-ROC), which measures the overall performance of an algorithm.
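A minimal sketch of these metrics from confusion-matrix counts; the counts below are made up for illustration:

```python
# Standard metrics from confusion-matrix counts: tp/fp/fn/tn are
# true/false positives and negatives.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# hypothetical fold: 6 TP, 2 FP, 2 FN, 90 TN
precision, recall, f1, accuracy = metrics(tp=6, fp=2, fn=2, tn=90)
```

Note how, on such an imbalanced fold, accuracy stays high even when precision and recall are modest, which is why the latter two drive model selection here.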
To understand the relationships captured by the vector representations in the text setting, we applied the principal component analysis (PCA) algorithm \cite{Mackiewicz} to reduce the dimensions of the feature vectors (from the 100-D to the 2-D vector space) based on feature variances, and visualized the results on a plot.
We performed an error analysis in which two endocrinologists independently reviewed the false positive (188) and false negative (10) cases predicted by the best classifier in the text features setting. Where there was disagreement between the two reviewers, consensus was reached through discussion. To uncover predictors of uncontrolled hyperglycaemia, we examined the word embeddings of the positive class and of the false positive predictions to determine whether they captured the same relationships between words, by visualizing their alignments. The positive class had a vocabulary of 849 unique words. To allow for better visualizations, we looked at the top 150 words, which included words describing symptoms of uncontrolled hyperglycaemia. The false positive cases' vocabulary comprised 283 words, all of which were used.
\section{Results}
\begin {table}[htb!]
\centering
\caption{Performance comparison of logistic regression, random forest, xgboost and neural network classifiers on cause of death classification in the binary, narrative text and combined features settings.} \label{CombinedFeatures}
\begin{tabular}{ll|rrrr}
\toprule
\textbf{Features} & \textbf{Classifier} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score} & \textbf{Accuracy} \\
\bottomrule\toprule
\multirow{4}{*}{\textbf{Binary}}
&\textbf{Logistic Regression} & \textbf{0.8471} & 0.2110 & 0.3199 & 0.9636\\
&\textbf{Random Forest} & 0.8000 & \textbf{0.2772} & \textbf{0.3885} & \textbf{0.9741} \\
&\textbf{XGBoost} & 0.7529 & 0.0921 & 0.1637 & 0.9384 \\
&\textbf{Neural Network} & 0.7882 & 0.1069 & 0.1873 & 0.9449 \\
\midrule
\multirow{4}{*}{\textbf{Text}}
&\textbf{Logistic Regression} & \textbf{0.7529} & 0.2801 & 0.4067 & 0.9824 \\
&\textbf{Random Forest} & 0.7412 & 0.2197 & 0.3377 & 0.9770 \\
&\textbf{XGBoost} & 0.6000 & \textbf{0.6437} & \textbf{0.6166} & \textbf{0.9942} \\
&\textbf{Neural Network} & 0.5294 & 0.4976 & 0.5001 & 0.9914 \\
\midrule
\multirow{4}{*}{\textbf{Combined}}
&\textbf{Logistic Regression} & 0.7294 & 0.3783 & 0.4962 & 0.9882 \\
&\textbf{Random Forest} & 0.8117 & 0.3281 & 0.4601 & 0.9844 \\
&\textbf{XGBoost} & \textbf{0.8588} & \textbf{0.7170} & \textbf{0.7807} & \textbf{0.9962} \\
&\textbf{Neural Network} & 0.5647 & 0.6107 & 0.5786 & 0.9935 \\
\bottomrule
\end{tabular}
\end {table}
The results show that the accuracy of the models is higher than the precision, recall and F1-score values across all settings. For the binary features setting, Table~\ref{CombinedFeatures} shows that the logistic regression classifier achieved the best performance in terms of recall (0.8471), i.e. out of all of the cases for which uncontrolled hyperglycaemia was a likely cause of death, the model was able to classify 85\% as deaths due to uncontrolled hyperglycaemia. The random forest classifier followed next in terms of performance for the metrics of precision and F1-score. In the text features setting, logistic regression performed best in terms of recall (0.7529) while the XGBoost classifier performed best in terms of precision, F1-score and accuracy. In the setting of combined binary and text features, the XGBoost classifier performs best across all metrics, followed by the random forest classifier in terms of recall and the neural network in terms of precision, F1-score and accuracy.
\begin{figure}[htb!]
\centering
\subfloat{{\includegraphics[width=6cm, height=6cm]{figures/RocCurve.eps}}}
\subfloat{{\includegraphics[width=6cm, height=6cm]{figures/logRocCurve.eps}}}
\caption{ROC curves and AUC-ROC of Random Forest and Logistic Regression classifiers in the binary features, text features and combined binary and text features setting.}
\label{Figure.1}
\end{figure}
\begin{figure}[htb!]
\centering
\subfloat{{\includegraphics[width=6cm, height=6cm]{figures/XGB1RocCurve.eps}}}
\subfloat{{\includegraphics[width=6cm, height=6cm]{figures/NNRocCurve.eps}}}
\caption{ROC curves and AUC-ROC of XGBoost and Neural Network classifiers in the binary features, text features and combined binary and text features setting.}
\label{Figure.2}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/EmbeddingSpace.eps}
\caption{PCA plot: The distribution of keywords indicative of death due to uncontrolled hyperglycaemia.}\label{Figure.3}
\end{figure}
Of interest were high recall scores for all classifiers, but they came at the cost of precision across all the settings except the XGBoost in the text features setting. The best model was then selected by taking the balance between precision and recall (F1-Score) into consideration. In the binary features setting, the best classifier was the random forest while the XGBoost performed best in the text and combined binary and text features settings.
The receiver operating characteristic (ROC) curve plots of the random forest and logistic regression classifiers across the three settings are given by Fig.~\ref{Figure.1} and those of the XGBoost and Neural Network classifiers by Fig.~\ref{Figure.2}. In the three settings, all the classifier ROC curves climbed toward the top left, meaning the models correctly predict positive and negative cases. We aimed to discover a large number of deaths owing to uncontrolled hyperglycemia from the data, thus we were looking for a threshold with the best sensitivity (recall).
In the binary features setting, the logistic regression and the random forest classifier had the best performance at a threshold that gave a true positive rate (TPR) of around 90\%; i.e where 90\% of all deaths due to uncontrolled hyperglycaemia were being identified and a false positive rate (FPR) of around 30\%. Since more than 90\% of deaths from the VA report were not due to uncontrolled hyperglycaemia, operating at this threshold for both classifiers would be ideal. In the text setting the logistic regression and the XGBoost gave the best area under the ROC curve (AUC-ROC) with the best cut-off point at around (TPR~98\%, FPR~24\%) and (TPR~96\%, FPR~30\%) for the two classifiers. For the combined features setting the same classifiers had the best AUC-ROC scores with ideal threshold values for logistic regression at (TPR~96\%, FPR~18\%) and the XGBoost classifier at (TPR~98\%, FPR~10\%).
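The (FPR, TPR) operating points discussed above can be recomputed from predicted scores by sweeping the decision threshold. A minimal sketch with made-up scores and labels (not the study's actual predictions):

```python
# Sweep the threshold over the distinct predicted scores and record the
# false-positive and true-positive rates at each cut-off.
def roc_points(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    return points

pts = roc_points([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0])
```

A perfectly ranked toy example like this one reaches the top-left corner (FPR 0, TPR 1) before admitting any false positive; the classifiers above trade a nonzero FPR for high sensitivity.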
The PCA projections to the 2-D vector space are visualized in Fig.~\ref{Figure.3} showing that words with similar part of speech (POS) tags are next to each other (i.e spatially correlated in the embedding space) and that the paragraph2vec model learned words by identifying the neighbouring words.
\subsubsection{Error Analysis}
After the review of the false positive cases (those identified by the best classifier as deaths likely due to uncontrolled hyperglycaemia, but not by the physician-coders) and the false negative cases (those identified by the best classifier as deaths not likely due to uncontrolled hyperglycaemia, but by the physician-coders as deaths due to uncontrolled hyperglycaemia), a consensus was reached that of the 188 false positive cases, six were indeed likely to be due to uncontrolled hyperglycaemia, and of the ten false negative cases, three were confirmed as deaths not likely due to uncontrolled hyperglycaemia.
Overall, the six reclassified false positive cases represent an 8\% increase in the positive cases, and the improved scores of the combined features setting over the binary features setting demonstrate the importance of information available in the free text of VA reports, further highlighting the need for both parts of the VA report in identifying a COD. The results further illustrate that the removal of the keywords 'sugar' and 'diabetes' from the corpus aided the classifiers in learning the underlying symptoms indicative of hyperglycaemia.
\section{Discussion}
The random forest classifier achieved the best performance in the binary features setting and XGBoost performed best in the text and combined features settings. While both techniques use ensemble learning, the XGBoost algorithm uses gradient boosting, which captures patterns in high-dimensional text and combined text and binary data better than the random forest classifier, whose trees are built from random subsets of features and samples. In comparison to the binary features and text features settings, the combined features setting had higher scores across all metrics because the narrative text provides more depth and context on the events around death than the binary features alone.
Our results are in agreement with existing literature. \citet{Sakr} did a comparative study of the same techniques trained with the synthetic minority oversampling technique (SMOTETomek) to predict mortality from medical reports using binary features. The results of their study indicated that neural networks perform worse than the other techniques. This is further supported by \citet{Knuiman}, who compared artificial neural networks, Naive Bayes, logistic regression, decision trees, and Bayesian network classifiers on myocardial infarction, with logistic regression outperforming the others (AUC-ROC = 0.82).
The additional features in the combined binary and text setting increase the dimensionality of the feature space, which in turn increases the search space and limits the amount of extraction of valuable information from data \cite{Bolon, Pes}. However, \citet{Pes} has shown the effectiveness of ensemble techniques in handling high-dimensional data which was the case with XGBoost and random forest in the text features and combined features settings in our work. \citet{Clermont} demonstrated how the non-parametric techniques of neural networks, random forests and XGBoost performed better than logistic regression in predicting hospital mortality from clinical notes in ICU patients and \citet{Pirracchio} added that this was because the linear and additive relationship of the logistic regression technique cannot constrain the complex processes surrounding a cause of death and its predictors or variables.
Word embeddings from the positive class and those from the false positive predictions did capture the same relationships between symptoms of uncontrolled hyperglycaemia, as depicted by their vector alignments. This explains why the best classifier predicts these false positives and corroborates the 'distributional hypothesis' \cite{McDonald}, which states that words with comparable or related meanings occur in the same contexts; as expected, embeddings for semantically and syntactically related keywords from the VA corpus were closer together.
\subsubsection{Limitations of the study}
This study has three main limitations, the first of which is the dataset size and its class distribution. With 79 positive and 8619 negative cases, the VA dataset from Agincourt has the extremely imbalanced and small minority (EISM) problem. Oversampling and undersampling techniques for redressing this imbalance are not as effective when the absolute number of minority examples is this small \cite{David}, and for this reason we believe significant relationships in the data could have been missed. Additionally, the paragraph2vec model is benchmarked on two very large datasets, the IMDB dataset \cite{Maas} (100,000 movie reviews) and the Stanford Sentiment Treebank dataset \cite{Socher}, for which the vector dimension was 400. Tuning this parameter on our data, our optimal dimension remained below 100.
The second limitation was the way in which we extrapolated medical/disease knowledge concepts into our embeddings. While there is prior research on domain adaptation techniques for natural language processing of health and clinical text, the VA dataset, by the nature of its collection, transcription and language, cannot be described as a clinical document. Due to the lack of publicly available VA word embeddings, our analysis was also not validated against any VA data outside of our institution.
The last limitation of this study is the interpretability of VA embeddings. It was not straightforward to provide an explanation for our classifiers' judgments based on paragraph and word embeddings because, like most embeddings, paragraph2vec encodes but does not distinguish the notions of relatedness and similarity \cite{Khattak}. While some symptoms like "excessive drinking" and "excessive urination" are related, they are not similar. For analysis of cause of death by uncontrolled hyperglycaemia, we would be interested in all symptoms of uncontrolled hyperglycaemia and their combinations, not just similar words and synonyms.
\section{Conclusion}
The four machine learning techniques may be useful in the automation of cause of death (COD) classification from verbal autopsy (VA) reports. In this study, they could accurately identify COD from uncontrolled hyperglycaemia, a complication of diabetes. Ensemble learning techniques of random forest and XGBoost are highly accurate, sensitive and specific to this task in all settings of binary, text and combined binary and text features. Our results further suggest that the narrative text of the VA report may contain vital information for determining COD and we therefore encourage further studies to incorporate both binary and text features, separately or in combination. In future work, we will use the transfer learning adaptation techniques of feature extraction, variable selection and fine tuning to make use of broader language representations from the general health and English language domain to improve and refine representations of text from VA reports.\\
\noindent \textbf{Acknowledgements} We would like to thank Prof. Justine Davies and the Agincourt Health and Socio-demographic Surveillance System (HDSS) for the VA dataset. We also acknowledge the assistance of Dr. Faheem Seedat in reviewing cases for the error analysis. We thank the United Nations’ Organization of Women in Science for the Developing World (OWSD) for supporting and funding this work. AW is funded by the Fogarty International Centre of the National Institutes of Health (grant number K43TW010698). This paper describes the views of the authors and does not necessarily represent the official views of the National Institutes of Health (USA).
\bibliographystyle{unsrtnat}
\renewcommand{\bibname}{\leftline{References}}
\section{Introduction}
\label{section:introduction}
Quantum cohomology of rational homogeneous spaces $\mathrm{G}/\mathrm{P}$ is a rich subject
that received considerable attention in the last decades. A particular aspect
that has been studied is the generic semisimplicity of the quantum cohomology
and its connections to the bounded derived category ${\mathbf D^{\mathrm{b}}}(\mathrm{G}/\mathrm{P})$ of coherent
sheaves via Dubrovin's conjecture. Recall that Dubrovin's conjecture predicts
that for a smooth projective Fano variety $X$ the existence of a full exceptional
collection in ${\mathbf D^{\mathrm{b}}}(X)$ is equivalent to the generic semisimplicity of
the \textsf{big quantum cohomology} ring $\BQH(X)$.
A folklore conjecture predicts that the big quantum cohomology of a rational
homogeneous space is always generically semisimple.
\begin{conjecture}
\label{conjecture:introduction-semisimplicity-of-BQH}
Let $\mathrm{G}$ be a semisimple algebraic group and $\mathrm{P} \subset \mathrm{G}$ a parabolic subgroup.
Then the big quantum cohomology $\BQH(\mathrm{G}/\mathrm{P})$ is generically semisimple.
\end{conjecture}
We list homogeneous varieties of simple
algebraic groups $\mathrm{G}$ corresponding to maximal parabolic subgroups $\mathrm{P} \subset \mathrm{G}$,
where Conjecture \ref{conjecture:introduction-semisimplicity-of-BQH} is known to hold.
Such a group $\mathrm{G}$ is determined by its Dynkin diagram, which falls into one of the types
$\mathrm{A}, \mathrm{B}, \mathrm{C}, \mathrm{D}, \mathrm{E}, \mathrm{F}, \mathrm{G}$, and its maximal parabolic subgroups correspond
to vertices of the Dynkin diagram, for which we use the standard labelling \cite{Bo}.
\begin{description}
\item[Type $\mathrm{A}_n$] \hspace{5pt} $\mathrm{A}_n/\mathrm{P}_k = \G(k,n+1)$ for $k \in [1,n]$
\item[Type $\mathrm{B}_n$] \hspace{5pt} $\mathrm{B}_n/\mathrm{P}_1 = Q_{2n-1}$, $\mathrm{B}_n/\mathrm{P}_2 = \OG(2,2n+1)$, and $\mathrm{B}_n/\mathrm{P}_n = \OG(n,2n+1)$
\item[Type $\mathrm{C}_n$] \hspace{5pt} $\mathrm{C}_n/\mathrm{P}_1 = {\mathbb P}^{2n-1}$, $\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ and $\mathrm{C}_n/\mathrm{P}_n = \IG(n,2n)$
\item[Type $\mathrm{D}_n$] \hspace{5pt} $\mathrm{D}_n/\mathrm{P}_1 = Q_{2n-2}$ and $\mathrm{D}_n/\mathrm{P}_{n-1} = \mathrm{D}_n/\mathrm{P}_n = \OG_{+}(n,2n)$
\item[Type $\mathrm{E}_n$] \hspace{5pt} $\mathrm{E}_6/\mathrm{P}_1$ and $\mathrm{E}_7/\mathrm{P}_7$
\item[Type $\mathrm{F}_4$] \hspace{5pt} $\mathrm{F}_4/\mathrm{P}_1$ and $\mathrm{F}_4/\mathrm{P}_4$
\item[Type $\mathrm{G}_2$] \hspace{5pt} $\mathrm{G}_2/\mathrm{P}_1$ and $\mathrm{G}_2/\mathrm{P}_2$
\end{description}
In the majority of the above cases the generic semisimplicity of the big quantum
cohomology $\BQH(\mathrm{G}/\mathrm{P})$ follows from the semisimplicity of the \textsf{small quantum cohomology $\QH(\mathrm{G}/\mathrm{P})$}.
The small quantum cohomology of a Fano variety is a much simpler gadget than its
big quantum cohomology, as the former involves only finitely many Gromov--Witten
invariants, whereas the latter involves infinitely many of them.
For classical types $\mathrm{B}_n, \mathrm{C}_n, \mathrm{D}_n$ it is known (see \cite[Table~on~p.~326]{ChPe})
that very often, roughly speaking in at least half of the cases, the small quantum cohomology
$\QH(\mathrm{G}/\mathrm{P})$ is not semisimple. In the exceptional types $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8, \mathrm{F}_4$ a similar
behaviour persists (see \cite[Table~on~p.~326]{ChPe}). Therefore, in general one must
work with the big quantum cohomology $\BQH(\mathrm{G}/\mathrm{P})$ in Conjecture \ref{conjecture:introduction-semisimplicity-of-BQH}.
Up to now the only cases with non-semisimple $\QH(\mathrm{G}/\mathrm{P})$, where Conjecture
\ref{conjecture:introduction-semisimplicity-of-BQH} is proved to hold are the symplectic
isotropic Grassmannians $\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ and the $\mathrm{F}_4$-Grassmannian
$\mathrm{F}_4/\mathrm{P}_4$ (see \cite{CMMPS,GMS,Pe,MPS}). Both cases are the so-called
\textsf{coadjoint varieties} in their respective Dynkin types, and one of the main results of this paper
is the proof of Conjecture \ref{conjecture:introduction-semisimplicity-of-BQH} for
coadjoint varieties in all Dynkin types.
\begin{remark}
Note that in the classical types $\mathrm{B}_n, \mathrm{C}_n, \mathrm{D}_n$ we only listed examples
that fit into infinite series. Since by \cite{BKT} presentations for the small
quantum cohomology rings in types $\mathrm{B}_n, \mathrm{C}_n, \mathrm{D}_n$ are known, it could be
possible to do a computer check of the semisimplicity of the small quantum cohomology
for some isolated small rank examples (e.g. $\QH(\IG(3,8))$ can easily be checked to be semisimple).
On the contrary, in the exceptional types $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8, \mathrm{F}_4$, even though
we only have finitely many varieties to consider, the problem in extending the above
list by a single example is non-trivial, as already presentations even for the small
quantum cohomology are known only for (co)minuscule or (co)adjoint varieties.
\end{remark}
\subsection{Statements of results}
Recall that an \textsf{adjoint} (resp. \textsf{coadjoint}) variety of a simple
algebraic group $\mathrm{G}$ is the highest weight vector orbit in the projectivization of
the irreducible $\mathrm{G}$-representation, whose highest weight is the highest \emph{long}
(resp. \emph{short}) root of $\mathrm{G}$. Clearly, if the group $\mathrm{G}$ is simply laced, then
adjoint and coadjoint varieties coincide.
\begin{center}
\begin{tabular}{ccccc}
\hline
Type of $\mathrm{G}$ & Coadjoint variety & Adjoint variety \\
\hline
$\mathrm{A}_n$ & $\mathrm{A}_n/\mathrm{P}_{1,n} = \Fl(1,n;n+1)$ & $\Fl(1,n;n+1)$ \\
$\mathrm{B}_n$ & $\mathrm{B}_n/\mathrm{P}_1 = \Q_{2n-1}$ & $\mathrm{B}_n/\mathrm{P}_2 = \OG(2,2n+1)$ \\
$\mathrm{C}_n$ & $\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ & $\mathrm{C}_{n}/\mathrm{P}_1 = \mathbb{P}^{2n-1}$ \\
$\mathrm{D}_n$ & $\mathrm{D}_n/\mathrm{P}_2 = \OG(2,2n)$ & $\mathrm{D}_n/\mathrm{P}_2 = \OG(2,2n)$ \\
$\mathrm{E}_6$ & $\mathrm{E}_6/\mathrm{P}_2$ & $\mathrm{E}_6/\mathrm{P}_2$ \\
$\mathrm{E}_7$ & $\mathrm{E}_7/\mathrm{P}_1$ & $\mathrm{E}_7/\mathrm{P}_1$ \\
$\mathrm{E}_8$ & $\mathrm{E}_8/\mathrm{P}_8$ & $\mathrm{E}_8/\mathrm{P}_8$ \\
$\mathrm{F}_4$ & $\mathrm{F}_4/\mathrm{P}_4$ & $\mathrm{F}_4/\mathrm{P}_1$ \\
$\mathrm{G}_2$ & $\mathrm{G}_2/\mathrm{P}_2 = \Q_5$ & $\mathrm{G}_2/\mathrm{P}_1$ \\
\hline
\end{tabular}
\medskip
\centering
{ Table 1. Adjoint and coadjoint varieties. }
\end{center}
Note that the Picard rank of (co)adjoint varieties is one, except for type $\mathrm{A}_n$,
where it is two. In a given Dynkin type we denote the adjoint (resp.~coadjoint)
variety by $X^\mathrm{ad}$ (resp.~$X^\mathrm{coad}$).
\medskip
In this paper we concentrate our attention on coadjoint varieties, since for non-simply-laced
groups $\mathrm{G}$, for which there is a distinction between $X^\mathrm{ad}$ and $X^\mathrm{coad}$, it is known from
\cite{ChPe} that already the small quantum cohomology $\QH(X^\mathrm{ad})$ is semisimple.
Our first result shows that for all coadjoint varieties
the presentation of the small quantum cohomology ring has some common features.
Let us fix some notation before stating this result. Recall that for a smooth
projective Fano variety $X$ the largest integer dividing the canonical class
$\omega_X$ in $\Pic(X)$ is called the \textsf{index} of $X$; we often use the letter
$r$ for the index. Throughout the paper we will only consider varieties with vanishing
odd cohomology, so we will use the \textsf{algebraic degree}, which is half of the
usual cohomological degree.
\begin{theorem}
\label{theorem:introduction-uniform-presentation-for-QH}
Let $\mathrm{G}$ be a simple algebraic group not of type $\mathrm{A}_n$ and let $X^\mathrm{coad}$ be
the corresponding coadjoint variety. There exist an integer $k \in [0,2]$ and
integers $(a_i)_{i \in [1,k]}$ with $2 \leq a_1 < \dots < a_k$ such that $\QH(X^\mathrm{coad})$
has a presentation
\begin{equation*}
\QH(X^\mathrm{coad}) = K[h,\delta_1 , \delta_k]/(E_1,E_k,E + \lambda qh),
\end{equation*}
where $\lambda \neq 0$, $h$ is the hyperplane class, $\delta_i$ is a Schubert
class of degree $\deg(\delta_i) = a_i$ and $E,E_i \in {\mathbb Q}[h,\delta_1,\delta_k]$
are polynomials in these classes with $\deg(E) = r+1$ and $\deg(E_i) = r+1-a_i$ for
all $i \in [1,k]$.
\end{theorem}
The values of the constants appearing in Theorem \ref{theorem:introduction-uniform-presentation-for-QH}
are collected in the following table.
\begin{center}
\begin{tabular}{cccccc}
\hline
Type of $\mathrm{G}$ & $X^\mathrm{coad}$ & $r$ & $k$ & $a_1$ & $a_2$ \\
\hline
$\mathrm{B}_n$ & $\mathrm{B}_n/\mathrm{P}_1 = \Q_{2n-1}$ & $2n-1$ & $0$ & & \\
$\mathrm{C}_n$ & $\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ & $2n-1$ & $1$ & $2$ & \\
$\mathrm{D}_n$ & $\mathrm{D}_n/\mathrm{P}_2 = \OG(2,2n)$ & $2n-3$ & $2$ & $2$ & $n-2$ \\
$\mathrm{E}_6$ & $\mathrm{E}_6/\mathrm{P}_2$ & $11$ & $2$ & $3$ & $4$ \\
$\mathrm{E}_7$ & $\mathrm{E}_7/\mathrm{P}_1$ & $17$ & $2$ & $4$ & $6$ \\
$\mathrm{E}_8$ & $\mathrm{E}_8/\mathrm{P}_8$ & $29$ & $2$ & $6$ & $10$ \\
$\mathrm{F}_4$ & $\mathrm{F}_4/\mathrm{P}_4$ & $11$ & $1$ & $4$ & \\
$\mathrm{G}_2$ & $\mathrm{G}_2/\mathrm{P}_2 = \Q_5$ & $5$ & $0$ & & \\
\hline
\end{tabular}
\medskip
\centering
{ Table 2. Constants appearing in Theorem \ref{theorem:introduction-uniform-presentation-for-QH}}
\end{center}
Most cases of this theorem are already known. Indeed, for types $\mathrm{B}_n$ and $\mathrm{G}_2$
this is \cite{Beauville, ChMaPe}, for type $\mathrm{C}_n$ this is \cite{BKT, CMMPS},
and for type $\mathrm{E}_6$, $\mathrm{E}_7$, $\mathrm{E}_8$ and $\mathrm{F}_4$ these are Propositions 5.4, 5.6, 5.7 and
5.3 of \cite{ChPe}. Thus, we only need to give a proof in type $\mathrm{D}_n$, which is
done in Section \ref{section:type-D} (see Corollary \ref{corollary:presentation-small-QH-type-D}).
\medskip
The second main result of this paper is a uniform description of the non-reduced
factor of $\QH(X^\mathrm{coad})$. Before stating it we need to introduce some notation.
For a simple algebraic group $\mathrm{G}$ we denote by $\mathrm{T}(\mathrm{G})$ its Dynkin diagram
and by $\mathrm{T}_{\mathrm{short}}(\mathrm{G})$ its subdiagram of short roots. In simply laced types we view
all roots as both short and long. For convenience of the reader we collect the
resulting Dynkin types into a table:
\begin{equation*}
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\mathrm{T} & \mathrm{A}_n & \mathrm{B}_n & \mathrm{C}_n & \mathrm{D}_n & \mathrm{E}_n & \mathrm{F}_4 & \mathrm{G}_2 \\
\hline
\mathrm{T}_{\mathrm{short}} & \mathrm{A}_n & \mathrm{A}_1 & \mathrm{A}_{n-1} & \mathrm{D}_n & \mathrm{E}_n & \mathrm{A}_2 & \mathrm{A}_1 \\
\hline
\end{array}
\end{equation*}
\medskip
With this notation we can formulate the following.
\begin{theorem}
\label{theorem:introduction-fat-points-of-QH}
Let $\mathrm{G}$ be a simple algebraic group not of type $\mathrm{A}_n$ and let $X^\mathrm{coad}$ be
the corresponding coadjoint variety. Then $\Spec \QH(X^\mathrm{coad})$ has a unique non-reduced
point and the localisation of $\QH(X^\mathrm{coad})$ at this point is isomorphic to the
Jacobian ring of a simple hypersurface singularity of Dynkin type $\mathrm{T}_{\mathrm{short}}(\mathrm{G})$.
The non-reduced point is supported in the locus $h = \delta_1 = \delta_k = 0$,
where $h$, $\delta_1$ and $\delta_k$ are the classes appearing in Theorem \ref{theorem:introduction-uniform-presentation-for-QH}.
\end{theorem}
In type $\mathrm{D}_n$ this is a consequence of Proposition \ref{prop:decomp-A-B} and
Lemma \ref{lemma:description-fat-point-type-D}.
In types $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8, \mathrm{F}_4$ this is proved in Lemmas \ref{lemma:description-fat-point-E6},
\ref{lemma:description-fat-point-E7}, \ref{lemma:description-fat-point-E8} and
\ref{lemma:description-fat-point-F4} respectively.
In type $\mathrm{C}_n$ this is \cite[Proposition 4.3]{CMMPS}.
In types $\mathrm{B}_n, \mathrm{G}_2$ this is known by \cite{Beauville, ChMaPe}.
\begin{remark}
Note that the Jacobian ring of an $\mathrm{A}_1$-singularity is just the ground field
${\mathbb Q}$ and is, therefore, a reduced ring.
\end{remark}
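To make the remark concrete, recall that for the one-variable $\mathrm{A}_n$ singularity $f = x^{n+1}$ the Jacobian ideal is $(f') = (x^n)$, so the Jacobian ring is ${\mathbb Q}[x]/(x^n)$: it has dimension $n$ and is reduced exactly when $n = 1$. This can be checked mechanically; the following sketch uses the sympy library (an illustrative choice of ours; the computations in the paper itself are done in SageMath \cite{sagemath}):

```python
import sympy as sp

x = sp.symbols('x')

def jacobian_ring_data(n):
    """For the A_n singularity f = x^(n+1), return the dimension of the
    Jacobian ring Q[x]/(f') and whether that ring is reduced."""
    f = x**(n + 1)
    jac = sp.Poly(sp.diff(f, x), x)       # generator of the Jacobian ideal, (n+1)*x^n
    dim = jac.degree()                    # dim_Q Q[x]/(x^n) = n
    # Q[x]/(g) is reduced iff g is squarefree, i.e. gcd(g, g') is constant.
    g = jac.as_expr()
    reduced = sp.degree(sp.gcd(g, sp.diff(g, x)), x) == 0
    return dim, reduced

# A_1: Jacobian ring is Q[x]/(x) = Q, one-dimensional and reduced.
assert jacobian_ring_data(1) == (1, True)
# A_2: Jacobian ring is Q[x]/(x^2), two-dimensional and non-reduced.
assert jacobian_ring_data(2) == (2, False)
```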
After examining the structure of the small quantum cohomology of coadjoint varieties
we are ready to proceed with our study of their big quantum cohomology. As we already mentioned,
the generic semisimplicity of the big quantum cohomology of coadjoint varieties in types
$\mathrm{C}_n$ and $\mathrm{F}_4$ was shown in \cite{CMMPS, GMS, Pe, MPS} by various methods. Here
we adopt the strategy of \cite{CMMPS, MPS}. Namely, we examine the first order deformations
of $\QH(X^\mathrm{coad})$ inside the big quantum cohomology $\BQH(X^\mathrm{coad})$ along the directions
$\delta_1, \delta_k$ as in Theorem \ref{theorem:introduction-uniform-presentation-for-QH},
which we denote by $\BQH_{\delta_1, \delta_k}(X^\mathrm{coad})$.
Let $t_{\delta_1}$ and $t_{\delta_k}$ be the quantum variables associated to $\delta_1$
and $\delta_k$ (see Subsection \ref{Sec.: Conventions and Notation}) and consider the ideals
${\mathfrak{t}} = (t_{\delta_1},t_{\delta_k}) \subset {\mathfrak{m}} = (h,\delta_1,\delta_k,t_{\delta_1},t_{\delta_k})$
of $\BQH_{\delta_1, \delta_k}(X^\mathrm{coad})$. We establish the following.
\begin{theorem}
\label{theorem:introduction-presentation-of-BQH}
Let $\mathrm{G}$ be a simple algebraic group not of type $\mathrm{A}_n$ and let $X^\mathrm{coad}$ be
the corresponding coadjoint variety.
The presentation of Theorem \ref{theorem:introduction-uniform-presentation-for-QH}
deforms to the presentation
\begin{equation*}
\BQH_{\delta_1, \delta_k}(X^\mathrm{coad}) = K[h,\delta_1 , \delta_k][[t_{\delta_1},t_{\delta_k}]]/(QE_1,QE_k,QE),
\end{equation*}
where
\begin{equation*}
QE_i = E_i + \lambda_i q t_{\delta_i} + {\mathfrak{t}}{\mathfrak{m}} \quad \text{and} \quad QE = E + \lambda qh + {\mathfrak{t}}{\mathfrak{m}},
\end{equation*}
\begin{equation*}
\lambda,\lambda_1,\lambda_k \in {\mathbb Q}^\times,
\end{equation*}
and $E_i, E$ are defined in Theorem \ref{theorem:introduction-uniform-presentation-for-QH}.
\end{theorem}
\begin{theorem}
\label{theorem:introduction-regularity-of-BQH}
Let $\mathrm{G}$ be a simple algebraic group not of type $\mathrm{A}_n$ and let $X^\mathrm{coad}$ be
the corresponding coadjoint variety. Then the big quantum cohomology $\BQH(X^\mathrm{coad})$
is a regular ring.
\end{theorem}
\begin{proof}
By Theorem \ref{theorem:introduction-fat-points-of-QH}, the ring $\QH(X^\mathrm{coad})$
is regular away from the locus $h = \delta_1 = \delta_k = 0$. The Jacobian criterion for
$\BQH(X^\mathrm{coad})$, applied to the presentation in Theorem \ref{theorem:introduction-presentation-of-BQH}
along the locus $h = \delta_1 = \delta_k = t_{\delta_1} = t_{\delta_k} = 0$, then gives
the regularity of $\BQH(X^\mathrm{coad})$.
\end{proof}
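The mechanism behind this proof can be seen in a toy model (a miniature we add for illustration, not one of the rings above): $K[x]/(x^2)$ fails the Jacobian criterion at the origin, while its deformation $K[x][[t]]/(x^2 - t)$ passes it everywhere, because the relation has a nonzero constant derivative in the deformation direction $t$, just as the deformation terms $\lambda_i q t_{\delta_i}$ produce nonvanishing Jacobian entries above. A sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Undeformed relation: F0 = x^2. Its derivative vanishes at x = 0, so the
# Jacobian criterion fails there and K[x]/(x^2) is not regular at the origin.
F0 = x**2
assert sp.diff(F0, x).subs(x, 0) == 0

# Deformed relation: F = x^2 - t. The Jacobian matrix with respect to (x, t)
# is [2x, -1]; the entry -1 never vanishes, so the Jacobian criterion gives
# regularity of K[x][[t]]/(x^2 - t) everywhere, including over x = t = 0,
# where the special fiber K[x]/(x^2) is non-reduced.
F = x**2 - t
jac = sp.Matrix([[sp.diff(F, v) for v in (x, t)]])
assert jac.subs({x: 0, t: 0}).rank() == 1
```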
An immediate consequence of Theorem \ref{theorem:introduction-regularity-of-BQH}
is the desired generic semisimplicity of the big quantum cohomology for coadjoint varieties
(see the proof of Corollary \ref{corollary:semisimplicity-of-BQH-type-D} for details).
\begin{corollary}
\label{theorem:introduction-semisimplicity-of-BQH}
Let $\mathrm{G}$ be a simple algebraic group not of type $\mathrm{A}_n$ and let $X^\mathrm{coad}$ be
the corresponding coadjoint variety. Then the big quantum cohomology $\BQH(X^\mathrm{coad})$
is generically semisimple.
\end{corollary}
The proofs of Theorem \ref{theorem:introduction-presentation-of-BQH}
and Corollary \ref{theorem:introduction-semisimplicity-of-BQH} can be found as follows.
In type $\mathrm{D}_n$ these are Corollaries \ref{corollary:regularity-of-BQH-type-D}
and \ref{corollary:semisimplicity-of-BQH-type-D}.
In types $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$ these are Corollaries
\ref{corollary:E6-regularity-of-BQH},
\ref{corollary:E6-semisimplicity-of-BQH},
\ref{corollary:E7-regularity-of-BQH},
\ref{corollary:E7-semisimplicity-of-BQH},
\ref{corollary:E8-regularity-of-BQH},
\ref{corollary:E8-semisimplicity-of-BQH}.
In type $\mathrm{F}_4$ these are Corollaries \ref{corollary:F4-regularity-of-BQH}
and \ref{corollary:F4-semisimplicity-of-BQH}.
In type $\mathrm{C}_n$ these are \cite[Theorem 6.4]{CMMPS} and \cite[Corollary 6.5]{CMMPS}.
In types $\mathrm{B}_n, \mathrm{G}_2$ there is nothing to prove, as in these cases already the
small quantum cohomology is known to be semisimple and, therefore, both the regularity
and the generic semisimplicity of the big quantum cohomology hold automatically.
In types $\mathrm{C}_n$ and $\mathrm{F}_4$ the generic semisimplicity
of the big quantum cohomology was already known by \cite{CMMPS, GMS, MPS, Pe};
in type $\mathrm{F}_4$ our argument gives a new proof of this fact. In types $\mathrm{D}_n, \mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$
these results are new.
\medskip
In view of Theorem \ref{theorem:introduction-regularity-of-BQH} we propose the following.
\begin{conjecture}
\label{conjecture:introduction-regularity-of-BQH}
Let $\mathrm{G}$ be a semisimple algebraic group and $\mathrm{P} \subset \mathrm{G}$ a parabolic
subgroup. Then the big quantum cohomology ring $\BQH(\mathrm{G}/\mathrm{P})$ is regular.
\end{conjecture}
Note that Conjecture \ref{conjecture:introduction-regularity-of-BQH} implies
Conjecture \ref{conjecture:introduction-semisimplicity-of-BQH}.
\medskip
The proofs of Theorem \ref{theorem:introduction-uniform-presentation-for-QH} and
Theorem \ref{theorem:introduction-presentation-of-BQH} use the following ingredients:
classical products in $H^*(X^\mathrm{coad},{\mathbb Q})$ of classes of degree at most $\dim(X^\mathrm{coad})/2$,
obtained in \cite{ChPe} using \textsf{Jeu de Taquin}; comparison formulas that
replace the computation of $3$-point and $4$-point Gromov--Witten invariants of
$X^\mathrm{coad}$ by classical products in $H^*(\F,{\mathbb Q})$, where $\F$ is an auxiliary
rational projective variety of (co)minuscule type, obtained in \cite{ChPe} and
in Sections~\ref{section:geometry-of-the-space-of-lines}~and~\ref{sec-coadjoint};
the classical products in $H^*(\F,{\mathbb Q})$, which are in turn obtained using the \textsf{Jeu de Taquin} rule
from~\cite{ThYo}; and finally the Quantum Chevalley Formula proved in \cite{FuWo}.
All these computations, especially \textsf{Jeu de Taquin} and the Quantum Chevalley
Formula, have been implemented by the second author in the computer algebra system
SageMath \cite{sagemath} and are available in \cite{LRCalc,QCCalc}.
Our computations for coadjoint varieties in exceptional Dynkin types rely
on \cite{LRCalc} and all the necessary scripts are available at
\begin{center}
\texttt{https://github.com/msmirnov18/bqh-coadjoint}
\end{center}
\subsection{Coadjoint variety in type $\mathrm{A}$}
In type $\mathrm{A}$ the behaviour is slightly different because the Picard rank
is two in this case. Indeed, in type $\mathrm{A}_n$ the coadjoint variety is
the two-step flag variety $\Fl(1,n;n+1)$. The small quantum cohomology has the
following explicit description.
\begin{proposition}[{\cite[Proposition 7.2]{ChPe}}]
\label{proposition:coadjoint-type-A}
The small quantum cohomology of $\Fl(1,n;n+1)$ is the quotient of
\begin{equation*}
{\mathbb C}[h_1, h_2, q_1, q_2]
\end{equation*}
modulo the relations
\begin{equation}\label{eq:coadjoint-type-A-relations}
\sum_{k=0}^n h_1^k(-h_2)^{n-k} = q_1 + (-1)^n q_2 \quad \text{and} \quad h_1^{n+1} = q_1(h_1+h_2).
\end{equation}
If $q_1 + (-1)^n q_2 \neq 0$, then the algebra is semisimple. Otherwise the algebra
has a unique non-reduced factor of the form ${\mathbb C}[\varepsilon]/(\varepsilon^n)$.
\end{proposition}
\begin{proof}
The above presentation follows from the quantum Chevalley formula \cite{FuWo}.
The only claim not explicitly contained in \cite[Proposition 7.2]{ChPe} is the
type of the
non-reduced point for $q_1 + (-1)^n q_2 = 0$. This follows easily by
setting $q_1 + (-1)^n q_2 = 0$ in \eqref{eq:coadjoint-type-A-relations} and then
eliminating $h_2$ to get the relation
\begin{equation*}
h_1^n \sum_{k=0}^n (h_1^n - q_1)^{n-k} (-1/q_1)^{n-k} = 0.
\end{equation*}
Since under our assumptions we have $q_1 \neq 0$, and the sum evaluates to
$n+1 \neq 0$ at $h_1 = 0$, the factor $h_1^n$ contributes exactly the claimed
non-reduced factor.
\end{proof}
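As an illustrative check (not needed for the proof), for $n = 2$ one can eliminate $h_2$ with a Gr\"obner basis and factor the resulting relation in $h_1$: on the locus $q_1 + q_2 = 0$ it factors as $h_1^2$ times a polynomial not vanishing at $h_1 = 0$, matching the non-reduced factor ${\mathbb C}[\varepsilon]/(\varepsilon^2)$. A sympy sketch:

```python
import sympy as sp

h1, h2 = sp.symbols('h1 h2')
n = 2
q1, q2 = 1, -1                     # a point on the locus q1 + (-1)^n q2 = 0

# The two relations of the proposition at the chosen values of q1, q2.
r1 = sum(h1**k * (-h2)**(n - k) for k in range(n + 1)) - (q1 + (-1)**n * q2)
r2 = h1**(n + 1) - q1 * (h1 + h2)

# Eliminate h2 via a lex Groebner basis with h2 > h1.
G = sp.groebner([r1, r2], h2, h1, order='lex')
elim = [g for g in G.exprs if h2 not in g.free_symbols][0]

# elim = h1^6 - 3 h1^4 + 3 h1^2 = h1^2 * (h1^4 - 3 h1^2 + 3): the factor h1
# has multiplicity n = 2 and the cofactor does not vanish at h1 = 0, in
# agreement with the non-reduced factor C[eps]/(eps^2).
factors = dict(sp.factor_list(elim)[1])
assert factors[h1] == 2
assert all(f.subs(h1, 0) != 0 for f in factors if f != h1)
```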
\subsection{Derived category of coherent sheaves and results of \cite{KuSm21}}
Recall that according to Dubrovin's conjecture the generic semisimplicity of the
big quantum cohomology is equivalent to the existence of a full exceptional collection
in the bounded derived category ${\mathbf D^{\mathrm{b}}}(X)$ of coherent sheaves. Thus, the following
folklore conjecture fits very well with Conjecture \ref{conjecture:introduction-semisimplicity-of-BQH}.
\begin{conjecture}
\label{conjecture:introduction-derived-category-G/P}
Let $\mathrm{G}$ be a semisimple
algebraic group and $\mathrm{P} \subset \mathrm{G}$ a parabolic
subgroup. Then the bounded derived category ${\mathbf D^{\mathrm{b}}}(\mathrm{G}/\mathrm{P})$ of coherent sheaves
has a full exceptional collection.
\end{conjecture}
We refer to \cite[§1.1]{KuPo} for an overview of the state of the art on this conjecture
from about 10 years ago. Since then the main advances are \cite{Fo19,Gu18,KuSm21,BeKuSm21,Sm21}.
\smallskip
In \cite{KuSm20, KuSm21} Alexander Kuznetsov and the second named author proposed
a conjecture \cite[Conjecture 1.3]{KuSm21} relating the structure of the small quantum
cohomology $\QH(X)$ of a Fano variety $X$ with generically semisimple big quantum
cohomology to the existence of certain Lefschetz exceptional collections in ${\mathbf D^{\mathrm{b}}}(X)$.
This conjecture can be seen as a refinement of Dubrovin's conjecture.
For coadjoint varieties, using the additional knowledge of the structure of $\QH(X)$
provided by the present paper, an even more precise conjecture can be formulated
\cite[Conjecture 1.9]{KuSm21}.
\begin{conjecture}[{\cite[Conjecture 1.9]{KuSm21}}]
\label{conjecture:introduction-derived-coadjoint}
Let $X$ be the coadjoint variety of a simple algebraic group~$\mathrm{G}$ over an algebraically
closed field of characteristic zero. Then ${\mathbf D^{\mathrm{b}}}(X)$ has an~$\Aut(X)$-invariant rectangular
Lefschetz exceptional collection with residual category~$\mathcal{R}$ and
\begin{enumerate}
\item
if $\mathrm{T}(\mathrm{G}) = \mathrm{A}_n$ and $n$ is even, then $\mathcal{R} = 0$;
\item
otherwise, $\mathcal{R}$ is equivalent to the derived category of representations of a quiver of Dynkin type~$\mathrm{T}_{\mathrm{short}}(\mathrm{G})$.
\end{enumerate}
\end{conjecture}
Guided by the structure of $\QH(X)$ this conjecture has been proved in all cases
except for types $\mathrm{E}_6$, $\mathrm{E}_7$ and $\mathrm{E}_8$. Specifically, this conjecture is
proved in types $\mathrm{A}_n$ and $\mathrm{D}_n$ in \cite{KuSm21}, in type $\mathrm{C}_n$ in \cite[Appendix by A. Kuznetsov]{CMMPS}
and in type $\mathrm{F}_4$ in \cite{BeKuSm21}. Types $\mathrm{B}_n$ and $\mathrm{G}_2$ are easy since
in these cases $X^\mathrm{coad}$ is a smooth quadric and the result is well known.
In \cite{KuSm21}, the necessary structural results to state the above conjecture
are summarized in \cite[Theorem 1.6]{KuSm21} with a reference to the present paper.
Thus, we feel obliged to give a proof.
For a Fano variety $X$ we denote by $\QS_{X}$ the spectrum of the small quantum
cohomology with the quantum parameters $q$ specialized to the anticanonical class.
If there is only one parameter $q$, i.e. the rank of the Picard group is one,
then one can simply put $q = 1$.
This is a finite scheme over ${\mathbb Q}$ and the anticanonical class $-\mathrm{K}_X \in H^*(X)$
endows it with a morphism into the affine line $\kappa \colon \QS_{X} \to {\mathbb A}^1$.
We then denote $\QS^\times_X \coloneqq \kappa^{-1}({\mathbb A}^1 \setminus \{ 0 \})$
and $\QS^\circ_X \coloneqq \QS_X \setminus \QS^\times_X$.
We refrain from recalling the precise definitions and freely use further notation from
\cite{KuSm21}. Theorem 1.6 in \cite{KuSm21} states the following.
\begin{theorem}[{\cite[Theorem 1.6]{KuSm21}}]
Let $X^\mathrm{ad}$ and $X^\mathrm{coad}$ be the adjoint and coadjoint varieties of a simple
algebraic group $\mathrm{G}$, respectively.
\begin{enumerate}
\item If $\mathrm{T}(\mathrm{G}) = \mathrm{A}_{2n}$, then $\QS^\circ_{X^\mathrm{ad}} = \QS^\circ_{X^\mathrm{coad}} = \emptyset$.
\item If $\mathrm{T}(\mathrm{G}) \neq \mathrm{A}_{2n}$, then $\QS^\circ_{X^\mathrm{coad}}$ is a single non-reduced
point and the localization of $\QH_{\rm can}(X^\mathrm{coad})$ at this point is isomorphic
to the Jacobian ring of a simple hypersurface singularity of type $\mathrm{T}_{\rm short}(\mathrm{G})$.
\item If $\mathrm{T}(\mathrm{G})$ is simply laced, then we have $X^\mathrm{ad} = X^\mathrm{coad}$ and $\QS^\circ_{X^\mathrm{ad}} = \QS^\circ_{X^\mathrm{coad}}$.
\item If $\mathrm{T}(\mathrm{G})$ is not simply laced, then $\QS^\circ_{X^\mathrm{ad}} = \emptyset$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) In type $\mathrm{A}_n$ the algebra $\QH_{\mathrm{can}}(X^\mathrm{coad})$ is obtained
by setting $q_1 = q_2 = 1$ in the presentation of Proposition \ref{proposition:coadjoint-type-A},
and $\QS^\circ_{X^\mathrm{coad}}$ is supported in the locus $h_1 + h_2 = 0$.
Assuming $n$ is even and using \eqref{eq:coadjoint-type-A-relations} we see that
$\QS^\circ_{X^\mathrm{coad}}$ is empty.
(2) For $\mathrm{G}$ of type $\mathrm{A}_n$ we argue as in (1) using Proposition
\ref{proposition:coadjoint-type-A}. For $\mathrm{G}$ not of type $\mathrm{A}$ the statement
follows from Theorem \ref{theorem:introduction-fat-points-of-QH} and the fact
that $\QS^\circ_{X^\mathrm{coad}}$ is supported in the locus~$h = 0$.
(3) This part is automatic.
(4) In all cases the claim is obtained by setting $h = 0$ in the
presentation for the small quantum cohomology and checking easily that
there are no solutions to the equations. For $\mathrm{B}_n/\mathrm{P}_2$ one can either use
the presentation from \cite[Theorem 2.5]{BKT} or the analysis of the solution set
done in the proof of \cite[Proposition 6.2]{ChPe}. For $\mathrm{C}_n/ \mathrm{P}_1 = {\mathbb P}^{2n-1}$
this is well-known. For $\mathrm{G}_2/\mathrm{P}_2$ and $\mathrm{F}_4/\mathrm{P}_1$ one can use presentations
given in \cite[Proposition 5.1]{ChPe} and \cite[Proposition 5.2]{ChPe}.
\end{proof}
\subsection{Structure of the paper}
The paper is structured as follows. Section \ref{section:preliminaries} contains
preliminaries on quantum cohomology and Schubert calculus. In Section \ref{section:geometry-of-the-space-of-lines}
we recall some mostly well-known facts that allow one to compute degree one Gromov--Witten
invariants of a rational homogeneous space using the Fano variety of lines.
These results suffice to compute the necessary Gromov--Witten invariants for coadjoint
varieties in simply-laced Dynkin types. In Section \ref{sec-coadjoint} we give some
generalities on coadjoint varieties and explain how to compute the necessary Gromov--Witten
invariants for coadjoint varieties in non-simply-laced Dynkin types. In Section \ref{section:type-D}
we consider the case of the coadjoint variety in type $\mathrm{D}$. In Section \ref{section:type-E}
and Section \ref{section:type-F} we treat the coadjoint varieties in types
$\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$ and $\mathrm{F}_4$. In Section \ref{section:singularity-theory} we
relate the big quantum cohomology of coadjoint varieties to unfoldings of isolated
hypersurface singularities.
\medskip
\noindent {\bf Acknowledgements.} We are grateful to Sergey Galkin, Vasily Golyshev,
and Alexander Kuznetsov for useful discussions and interest in our work.
M.S. is indebted to Pieter Belmans and Anton Mellit for their insights on programming-related
issues that were crucial for the development of \cite{LRCalc,QCCalc}.
M.S. thanks the Hausdorff Research Institute for Mathematics and
the Max Planck Institute for Mathematics in Bonn for the great working
conditions during the preparation of this paper.
N.P. was supported by ANR Catore and ANR FanoHK. M.S. was partially supported by
the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) --- Projektnummer 448537907.
\section{Preliminaries}
\label{section:preliminaries}
In this section, we recall some definitions and basic properties of the small and
big quantum cohomology. We also fix some notations for algebraic groups and
Schubert varieties and prove some results on the behaviour of Schubert varieties
under equivariant maps between homogeneous spaces.
We will work over the field ${\mathbb C}$ of complex numbers.
\subsection{Conventions and notation for quantum cohomology}
\label{Sec.: Conventions and Notation}
Here we briefly recall the definition of the quantum cohomology ring for a smooth
projective variety $X$. To simplify the exposition and avoid introducing unnecessary
notation we impose from the beginning the following conditions on $X$: it is a Fano
variety of Picard rank~1 and $H^{odd}(X,{\mathbb Q})=0$. For a thorough introduction we refer
to \cite{Ma}. Recall that we will use algebraic degrees which are half of cohomological
degrees. We write $[{\rm pt}]$ for the cohomology class of a point.
\subsubsection{Definition}
\label{SubSec.: Def of QH}
Let us fix a graded basis $\Delta_0, \dots , \Delta_s$ in $H^*(X, {\mathbb Q})$ and dual
linear coordinates $t_0, \dots, t_s$. It is customary to choose $\Delta_0=1$.
For cohomology classes we use the Chow grading, i.e. we divide the topological
degree by two. Further, for variables $t_i$ we set $\deg(t_i)=1-\deg({\Delta}_i)$.
Let $R$ be the ring of formal power series ${\mathbb Q}[[q]]$, $k$ its field of fractions,
and $K$ an algebraic closure of $k$. We set $\deg(q)= \text{index} \, (X)$,
which is the largest integer $r$ such that $-K_X = rH$ for some ample divisor
$H$ on $X$, where $K_X$ is the canonical class of $X$.
The genus zero Gromov--Witten potential of $X$ is an element $F \in R[[t_0, \dots, t_s]]$
defined by the formula
\begin{align}\label{eq:GW-potential}
&F = \sum_{(i_0, \dots , i_s)} \langle \Delta_0^{\otimes i_0}, \dots, \Delta_s^{\otimes i_s} \rangle \frac{t_0^{i_0} \dots t_s^{i_s} }{i_0!\dots i_s!},
\end{align}
where
\begin{equation}\label{eq:GW-potential-coefficients}
\langle \Delta_0^{\otimes i_0}, \dots, \Delta_s^{\otimes i_s} \rangle = \sum_{d=0}^{\infty} \langle \Delta_0^{\otimes i_0}, \dots, \Delta_s^{\otimes i_s} \rangle_d q^d,
\end{equation}
and $\langle \Delta_0^{\otimes i_0}, \dots, \Delta_s^{\otimes i_s} \rangle_d$ are
rational numbers called Gromov--Witten invariants of $X$ of degree $d$. With respect
to the grading defined above $F$ is homogeneous of degree $3 - \dim X$.
Using \eqref{eq:GW-potential} one defines the \textit{big quantum cohomology ring} of $X$.
Namely, let us endow the $K[[t_0, \dots, t_s]]$-module
\begin{align*}
\BQH(X) = H^*(X, {\mathbb Q})\otimes_{{\mathbb Q}} K[[t_0, \dots, t_s]]
\end{align*}
with a ring structure by setting
\begin{equation}\label{eq:big-quantum-product}
\Delta_a \star \Delta_b = \sum_c \frac{\partial^3 F}{\partial t_a \partial t_b \partial t_c} \Delta^c,
\end{equation}
on the basis elements and extending to the whole $\BQH(X)$ by $K[[t_0, \dots, t_s]]$-linearity.
Here ${\Delta}^0, \dots, {\Delta}^s$ is the basis dual to ${\Delta}_0, \dots, {\Delta}_s$ with respect
to the Poincar\'e pairing. It is well known that~\eqref{eq:big-quantum-product}
makes $\BQH(X)$ into a commutative, associative, graded $K[[t_0, \dots, t_s]]$-algebra
with the identity element ${\Delta}_0$.
The algebra $\BQH(X)$ is called the \textit{big} quantum cohomology algebra of $X$
to distinguish it from a simpler object called the \textit{small} quantum cohomology
algebra, which is the quotient of $\BQH(X)$ by the ideal $(t_0, \dots, t_s)$.
We will denote the latter $\QH(X)$ and use ${\!\ \star_{0}\!\ }$ instead of $\star$ for the product
in this algebra. It is a finite dimensional $K$-algebra. Equivalently one can say that
\begin{equation*}
\QH(X)=H^*(X,{\mathbb Q}) \otimes_{{\mathbb Q}} K
\end{equation*}
as a vector space, and the $K$-algebra structure is defined by putting
\begin{equation}\label{eq:small-quantum-product}
\Delta_a {\!\ \star_{0}\!\ } \Delta_b = \sum_c \langle {\Delta}_a, {\Delta}_b, {\Delta}_c \rangle \Delta^c.
\end{equation}
\begin{remark}
We are using a somewhat non-standard notation $\BQH(X)$ for the big quantum cohomology
and $\QH(X)$ for the small quantum cohomology to stress the difference between the two.
Note that this notation is different from the one used in \cite{GMS} and is closer to
the notation of \cite{Pe}.
\end{remark}
\begin{remark}\label{SubSubSec.: Remark on difference of notation with Manin's book}
The above definitions look slightly different from the ones given in~\cite{Ma}.
The differences are of two types. The first one is that $\QH(X)$ and $\BQH(X)$
are in fact defined already over the ring $R$ and not only over $K$. We pass to $K$
from the beginning, since in this paper we are only interested in generic semisimplicity
of quantum cohomology. The second difference is that in some papers on quantum
cohomology one unifies the coordinate $q$ with the coordinate $t_1$ which is dual
to $H^2(X, {\mathbb Q})$, but the resulting structures are equivalent.
\end{remark}
\subsubsection{Deformation picture}
\label{SubSec.: Def. Picture}
The small quantum cohomology, if considered over the ring $R$
(cf. Remark~\ref{SubSubSec.: Remark on difference of notation with Manin's book}),
is a deformation of the ordinary cohomology algebra, i.e. if we put $q=0$, then
the quantum product becomes the ordinary cup-product. Similarly, the big quantum
cohomology is an even bigger deformation family of algebras. Since we work not
over $R$ but over $K$, we lose the classical limit point but still retain the
fact that $\BQH(X)$ is a deformation family of algebras with the special fiber being $\QH(X)$.
In this paper we view ${{\rm Spec}}(\BQH(X))$ as a deformation family of zero-dimensional
schemes over ${{\rm Spec}} (K[[t_0,\dots, t_s]])$. In the base of the deformation we
consider the following two points: the origin (the closed point given by the maximal
ideal $(t_0, \dots , t_s)$) and the generic point $\eta$. The fiber of this family
over the origin is the spectrum of the small quantum cohomology ${{\rm Spec}}(\QH(X))$.
The fiber over the generic point will be denoted by ${{\rm Spec}}(\BQH(X)_{\eta})$.
It is convenient to summarize this setup in the diagram
\begin{align}\label{Eq.: BQH family}
\vcenter{
\xymatrix{
{{\rm Spec}}(\QH(X)) \ar[d] \ar[r] & {{\rm Spec}}(\BQH(X)) \ar[d]^{\pi} & \ar[l] {{\rm Spec}}(\BQH(X)_{\eta}) \ar[d]^{\pi_{\eta}} \\
{{\rm Spec}} (K) \ar[r] & {{\rm Spec}} (K[[t_0, \dots, t_s]]) & \ar[l] \eta
}
}
\end{align}
where both squares are Cartesian.
By construction $\BQH(X)$ is a free module of finite rank over $K[[t_0, \dots,t_s]]$.
Therefore, it is a noetherian semilocal $K$-algebra which is flat and finite over
$K[[t_0, \dots,t_s]]$. Note that neither $K[[t_0, \dots,t_s]]$ nor $\BQH(X)$ is
finitely generated over the ground field $K$. Therefore, some extra care is required
with standard constructions of commutative algebra and algebraic geometry;
the notion of smoothness is one such example.
Often we consider not the full deformation family $\BQH(X)$ but only a subfamily
that corresponds to deformation directions along some basis elements $\Delta_{i_1}, \dots, \Delta_{i_m}$
defined by
\begin{equation*}
\BQH_{\Delta_{i_1}, \dots, \Delta_{i_m}}(X) \coloneqq
\BQH(X)/\left( t_0, \dots, \hat{t}_{i_1}, \dots, \hat{t}_{i_m}, \dots, t_s \right),
\end{equation*}
where $\hat{t}_{i_l}$ means that the variable $t_{i_l}$ is omitted.
\subsubsection{Semisimplicity}
\label{SubSec.: Semisimplicity}
Let $A$ be a finite dimensional algebra over a field $F$ of characteristic zero.
It is called \textit{semisimple} if it is a product of fields. Equivalently,
the algebra $A$ is semisimple if the scheme ${{\rm Spec}}(A)$ is reduced. Another equivalent
condition is to require the morphism ${{\rm Spec}}(A) \to {{\rm Spec}}(F)$ to be smooth.
\begin{definition}
We say that $\BQH(X)$ is \textit{generically semisimple} if $\BQH(X)_{\eta}$ is a semisimple algebra.
\end{definition}
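As a toy illustration of this definition (with $\QH({\mathbb P}^{N-1}) = K[h]/(h^N - q)$, not one of the coadjoint rings studied above): a hypersurface quotient $F[x]/(f)$ is semisimple exactly when $f$ is squarefree, which can be tested by a gcd with the derivative. A sketch using sympy:

```python
import sympy as sp

h, q = sp.symbols('h q')

def is_semisimple_hypersurface(f, var):
    """F[var]/(f) is semisimple iff f is squarefree,
    i.e. iff gcd(f, f') has degree zero in var."""
    return sp.degree(sp.gcd(f, sp.diff(f, var)), var) == 0

N = 4
# QH(P^3) = K[h]/(h^4 - q) with q invertible in K: h^4 - q is squarefree,
# so the algebra is semisimple.
assert is_semisimple_hypersurface(h**N - q, h)

# The classical limit q = 0 gives K[h]/(h^4), which is non-reduced.
assert not is_semisimple_hypersurface(h**N, h)
```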
\subsection{Notation for algebraic groups}
Let $\mathrm{G}$ be a reductive algebraic group, let $\mathrm{T}$ be a maximal torus in $\mathrm{G}$
and let $\mathrm{B}$ be a Borel subgroup containing $\mathrm{T}$. A parabolic subgroup
$\mathrm{P} \subset \mathrm{G}$ is called standard if $\mathrm{B} \subset \mathrm{P}$.
For a parabolic subgroup $\mathrm{P}$, we write $R_u(\mathrm{P})$ for the unipotent radical of $\mathrm{P}$.
If $\mathrm{P}$ contains $\mathrm{T}$ (for example if $\mathrm{P}$ is standard), then $\mathrm{P}$ contains a
unique reductive subgroup $\mathrm{L}_\mathrm{P}$ such that $\mathrm{T} \subset \mathrm{L}_\mathrm{P} \subset \mathrm{P}$ and $\mathrm{P} = \mathrm{L}_\mathrm{P} R_u(\mathrm{P})$
with $\mathrm{L}_\mathrm{P} \cap R_u(\mathrm{P}) = \{1\}$; the subgroup $\mathrm{L}_\mathrm{P}$ is called
\textsf{the Levi subgroup of} $\mathrm{P}$.
For $\mathrm{P}$ a standard parabolic subgroup, there exists a unique parabolic subgroup,
denoted $\mathrm{P}^-$ and called \textsf{the opposite parabolic subgroup}, such that
$\mathrm{P}^- \cap \mathrm{P} = \mathrm{L}_\mathrm{P}$. In the case $\mathrm{P} = \mathrm{B}$, the opposite
Borel subgroup $\mathrm{B}^-$ is defined by $\mathrm{B} \cap \mathrm{B}^- = \mathrm{T}$.
Denote by $\Phi$ the root system of $(\mathrm{G},\mathrm{T})$ and by $\Delta$ the set of simple
roots defined by $\mathrm{B}$, so that the set of roots of $\mathrm{B}$ is the set $\Phi^+$ of positive
roots. We set $\Phi^- = - \Phi^+$. If ${\rm rk}(\mathrm{G})$ is the semisimple rank of $\mathrm{G}$, we will
number the simple roots $\Delta = \{ \alpha_i \ | \ i \in [1,{\rm rk}(\mathrm{G})] \}$ as in Bourbaki
\cite[Tables]{Bo}. If $\mathrm{P}$ is a standard parabolic subgroup, then we write $\Phi_\mathrm{P}$
for the root system of $\mathrm{L}_\mathrm{P}$. Set $\Phi_\mathrm{P}^+ = \Phi_\mathrm{P} \cap \Phi^+$ and
$\Phi_\mathrm{P}^- = \Phi_\mathrm{P} \cap \Phi^-$. If $\mathrm{P}$ is a maximal standard parabolic subgroup,
we define $\alpha_\mathrm{P}$ as the unique simple root such that $- \alpha_\mathrm{P} \not \in \Phi_\mathrm{P}$.
Denote by $\mathrm{W}$ the Weyl group of the pair $(\mathrm{G},\mathrm{T})$ and by $s_\alpha \in \mathrm{W}$
the reflection associated to any root $\alpha \in \Phi$. For $\alpha_i \in \Delta$
a simple root we will also use the notation $s_i := s_{\alpha_i}$ for the corresponding
simple reflection. Simple reflections define a length function on $\mathrm{W}$ and we let $w_0$ be
the longest element in $\mathrm{W}$. Recall that $w_0^{-1} = w_0$. For $\mathrm{P}$ a standard
parabolic subgroup, we denote by $\mathrm{W}_\mathrm{P}$ the Weyl group of the pair $(\mathrm{P},\mathrm{T})$.
Note that this is also the Weyl group of the pair $(\mathrm{L}_\mathrm{P},\mathrm{T})$. Let $w_{0,\mathrm{P}}$
be the longest element of $\mathrm{W}_\mathrm{P}$. We have $w_{0,\mathrm{P}}^{-1} = w_{0,\mathrm{P}}$. Denote
by $\mathrm{W}^\mathrm{P} \subset \mathrm{W}$ the set of minimal length representatives of the quotient
$\mathrm{W}/\mathrm{W}_\mathrm{P}$ and by $w_{0}^\mathrm{P}$ the longest element of $\mathrm{W}^\mathrm{P}$. If $\mathrm{Q}$ is
another standard parabolic subgroup, we denote by $\mathrm{W}_{\mathrm{P}}^{\mathrm{Q}} \subset \mathrm{W}_\mathrm{P}$
the set of minimal length representatives of $\mathrm{W}_{\mathrm{P}}/\mathrm{W}_{\mathrm{P} \cap \mathrm{Q}}$.
Let $w_{0,\mathrm{P}}^{\mathrm{Q}} \in \mathrm{W}_{\mathrm{P}}^{\mathrm{Q}}$ be the longest element.
\subsection{Basics on Schubert varieties and classes}
\label{subsection:schubert-classes}
Let $\mathrm{P} \subset \mathrm{G}$ be a standard parabolic subgroup and set $X = \mathrm{G}/\mathrm{P}$. Then $X$ is a projective rational variety homogeneous under the action of $\mathrm{G}$.
The variety $X$ admits cellular decompositions given by the $\mathrm{B}$- and $\mathrm{B}^-$-orbits: \textsf{the Bruhat decomposition}. Its cells, \textsf{the Schubert cells}, are defined as follows: ${\mathring{X}}_w = \mathrm{B} w.\mathrm{P}$ and ${\mathring{X}}^w = \mathrm{B}^- w. \mathrm{P}$ for $w \in \mathrm{W}^\mathrm{P}$. We have
$$X = \coprod_{w \in \mathrm{W}^\mathrm{P}} {\mathring{X}}_w = \coprod_{w \in \mathrm{W}^\mathrm{P}} {\mathring{X}}^w.$$
Furthermore, we have $\dim {\mathring{X}}_w =\ell(w) = \codim {\mathring{X}}^w$. The closures of the Schubert cells are \textsf{the Schubert varieties}: $X_w = \overline{\mathrm{B} w. \mathrm{P}}$ and $X^w = \overline{\mathrm{B}^- w .\mathrm{P}}$. Their cohomology classes, \textsf{the Schubert classes}, defined by $\sigma_w = [X_w]$ and $\sigma^w = [X^w]$, form Poincar\'e dual bases of the singular cohomology $H^*(X,{\mathbb Z})$.
If $\mathrm{R} \subset \mathrm{P}$ is another standard parabolic subgroup, set $Z = \mathrm{G}/\mathrm{R}$, which is also projective, rational and homogeneous under $\mathrm{G}$. We have a projection map $\pi : Z \to X$. The following result summarises well-known facts on Schubert classes that we will need in the sequel. We include proofs or references for the convenience of the reader.
\begin{proposition}
\label{prop-comb}
Let $u \in \mathrm{W}$.
\begin{enumerate}
\item We have a unique factorisation $u = u^\mathrm{P} u_\mathrm{P}$ with $u^\mathrm{P} \in \mathrm{W}^\mathrm{P}$ and $u_\mathrm{P} \in \mathrm{W}_\mathrm{P}$.
\item We have $\ell(u) = \ell(u^\mathrm{P}) + \ell(u_\mathrm{P})$.
\item The decomposition of $w_0$ is given by $w_0 = w_0^\mathrm{P} w_{0,\mathrm{P}}$.
\item We have $(u_\mathrm{P})^\mathrm{R} = (u^\mathrm{R})_\mathrm{P}$. We denote this element by $u_\mathrm{P}^\mathrm{R}$.
\item We have $X^u = X^{u^\mathrm{P}}$ and $X_u = X_{u^\mathrm{P}}$.
\item We have $\pi(Z_u) = X_{u}$ and $\pi(Z^u) = X^{u}$.
\item We have $\pi^{-1}(X^u) = Z^u$ and $\pi^{-1}(X_u) = Z_{u w_{0,\mathrm{P}}}$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) and (2) See \cite[Proposition 2.4.4]{bb}.
(3) See display (2.16) on page 44 in \cite{bb}.
(4) We have $u = u^\mathrm{R} u_\mathrm{R}$ with $u_\mathrm{R} \in \mathrm{W}_\mathrm{R} \subset \mathrm{W}_\mathrm{P}$ and $u^\mathrm{R} = (u^\mathrm{R})^\mathrm{P} (u^\mathrm{R})_\mathrm{P}$ with $(u^\mathrm{R})^\mathrm{P} \in \mathrm{W}^\mathrm{P}$ and $(u^\mathrm{R})_\mathrm{P} \in \mathrm{W}_\mathrm{P}$. This gives $u = (u^\mathrm{R})^\mathrm{P} ((u^\mathrm{R})_\mathrm{P} u_\mathrm{R})$ and by uniqueness of the decomposition in (1) we get $u^\mathrm{P} = (u^\mathrm{R})^\mathrm{P}$ and $u_\mathrm{P} = (u^\mathrm{R})_\mathrm{P} u_\mathrm{R}$. Furthermore, by \cite[Lemma 2.4.3]{bb}, an element $v$ is in $\mathrm{W}^\mathrm{R}$ if and only if no reduced expression of $v$ ends with a simple reflection of $\mathrm{W}_\mathrm{R}$. In particular, since $u^\mathrm{R} = (u^\mathrm{R})^\mathrm{P} (u^\mathrm{R})_\mathrm{P}$ with additive lengths, no reduced expression of $(u^\mathrm{R})_\mathrm{P}$ can end with a simple reflection of $\mathrm{W}_\mathrm{R}$, thus $(u^\mathrm{R})_\mathrm{P} \in \mathrm{W}^\mathrm{R}$. This implies the equalities $(u_\mathrm{P})^\mathrm{R} = (u^\mathrm{R})_\mathrm{P}$ and $(u_\mathrm{P})_\mathrm{R} = u_\mathrm{R}$.
Note that in particular, we have $w_{0,\mathrm{P}} = w_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}}$.
(5) We have $\mathrm{B} u. \mathrm{P} = \mathrm{B} u^\mathrm{P} u_\mathrm{P}. \mathrm{P} = \mathrm{B} u^\mathrm{P}. \mathrm{P}$ since $u_\mathrm{P} \in \mathrm{W}_\mathrm{P}$ implies $u_\mathrm{P}. \mathrm{P} = \mathrm{P}$. Similarly we have $\mathrm{B}^- u. \mathrm{P} = \mathrm{B}^- u^\mathrm{P} u_\mathrm{P}. \mathrm{P} = \mathrm{B}^- u^\mathrm{P}. \mathrm{P}$. The result follows by taking closures.
(6) We have $\pi(\mathrm{B} u.\mathrm{R}) = \mathrm{B} u.\mathrm{P}$ and $\pi(\mathrm{B}^- u.\mathrm{R}) = \mathrm{B}^- u.\mathrm{P}$, and the result follows by taking closures since $\pi$ is proper.
(7) Note that the set $[u] = \{ v \in \mathrm{W} \ | \ v^\mathrm{P} = u^\mathrm{P} \}$ has a unique element of minimal length, namely $u^\mathrm{P}$, and a unique element of maximal length, namely $u^\mathrm{P} w_{0,\mathrm{P}}$. By (5), we have $\pi^{-1}(\mathrm{B}^- u. \mathrm{P}) = \cup_{v \in [u]} \mathrm{B}^- v.\mathrm{R}$ and $\pi^{-1}(\mathrm{B} u. \mathrm{P}) = \cup_{v \in [u]} \mathrm{B} v.\mathrm{R}$. The closures of these sets are therefore equal to $Z^{u^\mathrm{P}}$ and $Z_{u^\mathrm{P} w_{0,\mathrm{P}}}$ respectively. Since the map $\pi$ is open, the result follows by taking closures.
\end{proof}
\begin{remark}
\label{rem-wp}
As a consequence of the above proposition we also have the following formulas involving only elements in $\mathrm{W}^\mathrm{R}$ and $\mathrm{W}^\mathrm{P}$:
\begin{enumerate}
\item $\mathrm{W}^\mathrm{P} \subset \mathrm{W}^\mathrm{R}$: by \cite[Lemma 2.4.3]{bb}, an element $v$ is in $\mathrm{W}^\mathrm{R}$ if and only if no reduced expression of $v$ ends with a simple reflection of $\mathrm{W}_\mathrm{R}$.
\item For $u \in \mathrm{W}^\mathrm{R}$, we have $\pi(Z^u) = X^{u^\mathrm{P}}$ and $\pi(Z_u) = X_{u^\mathrm{P}}$.
\item For $u \in \mathrm{W}^\mathrm{P}$, we have $\pi^{-1}(X^u) = Z^u$ and $\pi^{-1}(X_u) = Z_{uw_{0,\mathrm{P}}^\mathrm{R}}$.
\end{enumerate}
\end{remark}
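The following elementary example, which we include for illustration, makes these factorisations explicit in the symmetric group $S_3$; all the claims can be checked by hand.

```latex
\begin{example}
Let $\mathrm{G} = \GL(3)$, so that $\mathrm{W} = S_3 = \langle s_1, s_2 \rangle$, and let
$\mathrm{P} = \mathrm{P}_1$ be the maximal standard parabolic subgroup with
$\mathrm{W}_\mathrm{P} = \langle s_2 \rangle$. Then $\mathrm{W}^\mathrm{P} = \{ 1, s_1, s_2s_1 \}$.
For $u = s_1s_2$, the factorisation of Proposition \ref{prop-comb}.(1) reads
$u^\mathrm{P} = s_1$ and $u_\mathrm{P} = s_2$, with
$\ell(u) = 2 = \ell(u^\mathrm{P}) + \ell(u_\mathrm{P})$. For the longest element
$w_0 = s_1s_2s_1$, we get $w_0^\mathrm{P} = s_2s_1$ and $w_{0,\mathrm{P}} = s_2$, and indeed
$s_2s_1 \cdot s_2 = s_1s_2s_1$ by the braid relation.
\end{example}
```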
\begin{corollary}
\label{coro-bir}
Let $u \in \mathrm{W}$.
\begin{enumerate}
\item We have $X^u = w_0.X_{w_0u}$.
\item If $u \in \mathrm{W}^\mathrm{P}$, then $u^\vee := w_0 u w_{0,\mathrm{P}} \in \mathrm{W}^\mathrm{P}$ and $X^u = w_0. X_{u^\vee}$.
\item If $u \in \mathrm{W}^\mathrm{P}$, then the map $\pi : Z_u \to X_u$ is birational.
\item If $u \in \mathrm{W}^\mathrm{P}$, then the map $\pi : Z^{u w_{0,\mathrm{P}}^\mathrm{R}} \to X^u$ is birational.
\end{enumerate}
\end{corollary}
\begin{proof}
(1) We have $\mathrm{B}^- = w_0 \mathrm{B} w_0$ thus $\mathrm{B}^- u . \mathrm{P} = w_0 \mathrm{B} w_0u . \mathrm{P} = w_0 . {\mathring{X}}_{w_0u}$. The result follows by taking closures.
(2) Let $s \in \mathrm{W}_\mathrm{P}$ be a simple reflection. Both elements $w_{0,\mathrm{P}}$ and $w_{0,\mathrm{P}}s$ lie in $\mathrm{W}_\mathrm{P}$, so, since $u \in \mathrm{W}^\mathrm{P}$, the products $uw_{0,\mathrm{P}}$ and $u(w_{0,\mathrm{P}}s)$ are length additive. We get $\ell(u(w_{0,\mathrm{P}}s)) = \ell(u) + \ell(w_{0,\mathrm{P}}s) = \ell(u) + \ell(w_{0,\mathrm{P}}) - 1$ (the last equality holds since $w_{0,\mathrm{P}}$ is the longest element in $\mathrm{W}_\mathrm{P}$), thus $\ell(uw_{0,\mathrm{P}}s) = \ell(uw_{0,\mathrm{P}}) - 1$. This implies $\ell(w_0uw_{0,\mathrm{P}}s) = \ell(w_0) - \ell(uw_{0,\mathrm{P}}s) = \ell(w_0) - \ell(uw_{0,\mathrm{P}}) + 1 = \ell(w_0uw_{0,\mathrm{P}}) + 1$, proving that $u^\vee := w_0uw_{0,\mathrm{P}} \in \mathrm{W}^\mathrm{P}$. The last equality follows from (1).
(3) Let $\mathrm{U}$ be the unipotent radical of $\mathrm{B}$ and $\mathrm{U}^-$ be the unipotent radical of $\mathrm{B}^-$. Set $\mathrm{U}_u = \mathrm{U}^- \cap u \mathrm{U} u^{-1}$. A classical result (see for example \cite[Section 13.8]{jantzen}) states that the map $\mathrm{U}_u \to \mathrm{B} u.\mathrm{P}, x \mapsto xu.\mathrm{P}$ is an isomorphism onto an open subset of $X_u$ as soon as $u \in \mathrm{W}^\mathrm{P}$. Since $\mathrm{W}^\mathrm{P} \subset \mathrm{W}^\mathrm{R}$, the map $\mathrm{U}_u \to \mathrm{B} u. \mathrm{R}$ is also an isomorphism onto an open subset of $Z_u$. Finally, since the map $\pi$ is equivariant, we get that the map $\mathrm{B} u. \mathrm{R} \to \mathrm{B} u . \mathrm{P}$ obtained by restriction of $\pi$ is an isomorphism.
(4) By (2), we have $Z^{uw_{0,\mathrm{P}}^\mathrm{R}} = w_0 Z_{w_0uw_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}}}$. But $w_{0,\mathrm{R}} = (w_{0,\mathrm{P}})_\mathrm{R}$ thus $w_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}} = (w_{0,\mathrm{P}})^\mathrm{R} (w_{0,\mathrm{P}})_\mathrm{R} = w_{0,\mathrm{P}}$. We obtain the equalities $w_0uw_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}} = w_0uw_{0,\mathrm{P}} = u^\vee \in \mathrm{W}^\mathrm{P}$. Applying (3), we get that the map $Z_{w_0uw_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}}} \to X_{w_0uw_{0,\mathrm{P}}^\mathrm{R} w_{0,\mathrm{R}}} = X_{u^\vee}$ is birational. Translating by $w_0$, the map $Z^{uw_{0,\mathrm{P}}^\mathrm{R}} \to w_0 X_{u^\vee} = X^u$ is birational.
\end{proof}
Recall the definition of the Bruhat order on $\mathrm{W}^\mathrm{P}$ induced by inclusion of Schubert varieties: for $u,v \in \mathrm{W}^\mathrm{P}$, set $u \leq v \stackrel{\textrm{def}}{\Leftrightarrow} X_u \subset X_v$.
\begin{lemma}
\label{lemm-rich}
For $u,v \in \mathrm{W}^\mathrm{P}$, we have $X_u \cap X^v \neq \emptyset \Leftrightarrow v \leq u$.
\end{lemma}
\begin{proof}
The $\mathrm{T}$-fixed points in $X$ are the elements $w . \mathrm{P}$ for $w \in \mathrm{W}^\mathrm{P}$. The intersection $X_u \cap X^v$ is $\mathrm{T}$-stable and therefore it is non-empty if and only if it contains a $\mathrm{T}$-fixed point $w . \mathrm{P}$. Now since $X_u$ is $\mathrm{B}$-stable, we have $w.\mathrm{P} \in X_u \Leftrightarrow X_w \subset X_u \Leftrightarrow w \leq u$. Using the equality $X^v = w_0.X_{w_0vw_{0,\mathrm{P}}}$ we have $w.\mathrm{P} \in X^v \Leftrightarrow w_0w. \mathrm{P} \in X_{w_0vw_{0,\mathrm{P}}}$. Now $w_0w .\mathrm{P} = w_0ww_{0,\mathrm{P}}.\mathrm{P}$ and $w_0ww_{0,\mathrm{P}} \in \mathrm{W}^\mathrm{P}$, thus $w.\mathrm{P} \in X^v \Leftrightarrow w_0ww_{0,\mathrm{P}} \leq w_0vw_{0,\mathrm{P}}$, and by \cite[Proposition 2.5.4]{bb} this is equivalent to $w \geq v$.
\end{proof}
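For illustration, the lemma can be checked by hand in the projective plane, with the standard choice of upper-triangular Borel subgroup.

```latex
\begin{example}
Let $X = {\mathbb P}^2 = \GL(3)/\mathrm{P}_1$, so that $\mathrm{W}^\mathrm{P} = \{1, s_1, s_2s_1\}$.
In the coordinates $(e_1,e_2,e_3)$, we have
$X_{s_1} = {\mathbb P}(\langle e_1, e_2 \rangle)$ and
$X^{s_1} = {\mathbb P}(\langle e_2, e_3 \rangle)$, while $X^{s_2s_1}$ is the point $[e_3]$.
The intersection $X_{s_1} \cap X^{s_1} = \{[e_2]\}$ is non-empty, in accordance with
$s_1 \leq s_1$, while $X_{s_1} \cap X^{s_2s_1} = \emptyset$, in accordance with
$s_2s_1 \not\leq s_1$.
\end{example}
```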
We now compute the intersection of an opposite Schubert variety in $Z$ with the $\mathrm{B}$-stable fiber of $\pi$. Let $V = \pi^{-1}(X_1) = Z_{w_{0,\mathrm{P}}^\mathrm{R}}$. We have $V = \mathrm{P}/\mathrm{R}$. Let $\mathrm{L}$ be the Levi subgroup of $\mathrm{P}$ containing $\mathrm{T}$ and set $\mathrm{R}_\mathrm{L} = \mathrm{R} \cap \mathrm{L}$. This is a parabolic subgroup of $\mathrm{L}$ and we have an identification $V = \mathrm{P}/\mathrm{R} = \mathrm{L}/\mathrm{R}_\mathrm{L}$. Note that the Weyl group of the pair $(\mathrm{L},\mathrm{T})$ is $\mathrm{W}_\mathrm{L} = \mathrm{W}_\mathrm{P}$ and that $\mathrm{W}_\mathrm{L}^{\mathrm{R}_\mathrm{L}} = \mathrm{W}_\mathrm{P}^\mathrm{R}$. In particular, for $u \in \mathrm{W}_\mathrm{P}^\mathrm{R}$ the Schubert varieties $V_u$ and $V^u$ with respect to the Borel subgroups $\mathrm{B}_\mathrm{L} = \mathrm{B} \cap \mathrm{L}$ and $\mathrm{B}^-_\mathrm{L} = \mathrm{B}^- \cap \mathrm{L}$ of $\mathrm{L}$ are well defined.
\begin{proposition}
\label{prop-res-fibre}
Let $u \in \mathrm{W}^\mathrm{R}$.
\begin{enumerate}
\item The intersection $V \cap Z^u$ is non-empty if and only if $u \leq w_{0,\mathrm{P}}^\mathrm{R}$.
\item We have $u \leq w_{0,\mathrm{P}}^\mathrm{R} \Leftrightarrow u \in \mathrm{W}^\mathrm{R}_\mathrm{P}$.
\item If this condition is satisfied, we have $V \cap Z^u = V^u$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We have $V = \pi^{-1}(X_1) = Z_{w_{0,\mathrm{P}}^\mathrm{R}}$ with $w_{0,\mathrm{P}}^\mathrm{R} \in \mathrm{W}^\mathrm{R}_\mathrm{P}$ (see Remark \ref{rem-wp}). By Lemma \ref{lemm-rich}, the intersection $V \cap Z^u$ is non-empty if and only if $u \leq w_{0,\mathrm{P}}^\mathrm{R}$.
(2) Applying Proposition \ref{prop-comb}.(3) to $w_{0,\mathrm{P}} \in \mathrm{W}_\mathrm{P}$, we have that $w_{0,\mathrm{P}}^\mathrm{R}$ is the maximal element in $\mathrm{W}_\mathrm{P}^\mathrm{R}$ proving the result.
(3) We compute the intersection ${\mathring{Z}}^u \cap V = \mathrm{B}^- u.\mathrm{R} \cap \mathrm{P} .\mathrm{R}$ for $u \in \mathrm{W}^\mathrm{R}_\mathrm{P}$. Let $b \in \mathrm{B}^-$ be such that $bu.\mathrm{R} \in \mathrm{P} . \mathrm{R}$. Since $\mathrm{R} \subset \mathrm{P}$, this is equivalent to $bu \in \mathrm{P}$, and since $u \in \mathrm{W}_\mathrm{P}^\mathrm{R} \subset \mathrm{W}_\mathrm{P}$, this in turn is equivalent to $b \in \mathrm{P}$. We thus have ${\mathring{Z}}^u \cap V = (\mathrm{B}^-\cap \mathrm{P}) u.\mathrm{R}$. Since $\mathrm{B} \subset \mathrm{P}$, we have $\mathrm{B}^- \cap \mathrm{P} = \mathrm{B}^- \cap \mathrm{L} = \mathrm{B}^-_\mathrm{L}$. In particular, we have ${\mathring{Z}}^u \cap V = \mathrm{B}^-_\mathrm{L} u.\mathrm{R}$ and via the identification $\mathrm{P}/\mathrm{R} = \mathrm{L}/\mathrm{R}_\mathrm{L}$ this gives ${\mathring{Z}}^u \cap V = \mathrm{B}^-_\mathrm{L} u . \mathrm{R}_\mathrm{L} = {\mathring{V}}^u$. The result follows by taking closures.
\end{proof}
\subsection{Borel's presentation}
\label{subsection:borel-presentation}
Let $\mathrm{G}$ be a reductive group and $\mathrm{P} \subset \mathrm{G}$ be a parabolic subgroup. In this subsection, we briefly recall Borel's presentation of the cohomology ring $H^*(\mathrm{G}/\mathrm{P},{\mathbb Q})$.
First assume that $\mathrm{P} = \mathrm{B}$ is a Borel subgroup of $\mathrm{G}$ and let $\Lambda$ be the group of characters of $\mathrm{B}$. For any character $\lambda \in \Lambda$, define the line bundle $L_\lambda \to \mathrm{G}/\mathrm{B}$ by $L_\lambda = (\mathrm{G} \times {\mathbb C})/\mathrm{B}$ where $\mathrm{B}$ acts via $(g,z).b = (gb,\lambda(b)^{-1}z)$ for $g \in \mathrm{G}$, $z \in {\mathbb C}$ and $b \in \mathrm{B}$. This induces a linear map $c : \Lambda \to H^2(\mathrm{G}/\mathrm{B},{\mathbb Q})$ via $c(\lambda) = c_1(L_\lambda)$. This map extends to a ${\mathbb Q}$-algebra morphism called the \textsf{characteristic map}
$$c : {\mathbb Q}[\Lambda] \to H^*(\mathrm{G}/\mathrm{B},{\mathbb Q}).$$
Note that $\Lambda$ is also the group of characters of a maximal torus $\mathrm{T} \subset \mathrm{B}$ and therefore the Weyl group $\mathrm{W}$ of the pair $(\mathrm{G},\mathrm{T})$ acts on $\Lambda$. Borel's presentation of equivariant cohomology implies that the characteristic map is surjective and that its kernel is the ideal ${\mathbb Q}[\Lambda]^{\mathrm{W}}_+$
generated by the
non-constant invariant elements (see for example \cite[Proposition 26.1]{Bor} or \cite[Corollaire 2, page 292]{De}). We thus have an isomorphism
$$H^*(\mathrm{G}/\mathrm{B},{\mathbb Q}) \simeq {\mathbb Q}[\Lambda]/{\mathbb Q}[\Lambda]^\mathrm{W}_+.$$
For $\mathrm{P} \supset \mathrm{B}$ a general parabolic subgroup, we use the projection map $p : \mathrm{G}/\mathrm{B} \to \mathrm{G}/\mathrm{P}$ which induces an inclusion $p^* : H^*(\mathrm{G}/\mathrm{P},{\mathbb Q}) \to H^*(\mathrm{G}/\mathrm{B},{\mathbb Q})$. This induces an isomorphism
$$H^*(\mathrm{G}/\mathrm{P},{\mathbb Q}) \simeq {\mathbb Q}[\Lambda]^{\mathrm{W}_{\mathrm{P}}}/{\mathbb Q}[\Lambda]^\mathrm{W}_+.$$
See for example \cite[Theorem 26.1]{Bor} or \cite[Theorem 5.5]{BGG}. We will deform these presentations to get presentations for the small and the big quantum cohomology of adjoint and coadjoint varieties.
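As an illustration, the smallest case of Borel's presentation can be written out completely.

```latex
\begin{example}
Let $\mathrm{G} = {\rm SL}(2)$ and $\mathrm{P} = \mathrm{B}$, so that $\mathrm{G}/\mathrm{B} = {\mathbb P}^1$. Here
$\Lambda = {\mathbb Z}\varpi$ with $\varpi$ the fundamental weight, and
$\mathrm{W} = \{1, s\}$ acts by $s(\varpi) = -\varpi$. The invariant ring is
${\mathbb Q}[\varpi]^\mathrm{W} = {\mathbb Q}[\varpi^2]$, so Borel's presentation gives
$$H^*({\mathbb P}^1,{\mathbb Q}) \simeq {\mathbb Q}[\varpi]/(\varpi^2),$$
with $c(\varpi)$ a generator of $H^2({\mathbb P}^1,{\mathbb Q})$.
\end{example}
```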
\section{Geometry of the space of lines}
\label{section:geometry-of-the-space-of-lines}
In this section we give general methods for computing degree one Gromov--Witten invariants
on projective rational homogeneous spaces of Picard number one.
We will later give applications to the case of coadjoint varieties and use them
to compute first order germs of the big quantum cohomology.
Let $\mathrm{P}$ be a maximal standard parabolic subgroup and let $X = \mathrm{G}/\mathrm{P}$. Let $\mathcal{O}_X(1)$ be the ample generator of ${{\rm Pic}}(X) = {\mathbb Z} \mathcal{O}_X(1)$.
\subsection{Fano variety of lines}
\label{susec:fvol}
We denote by $\F(X)$ the Fano variety of lines on $X$. The variety $\F(X)$ is described by a theorem of Landsberg and Manivel \cite[Theorem 4.3]{LaMa03}. Recall that any maximal parabolic subgroup $\mathrm{P}$ is associated to a simple root $\alpha_\mathrm{P}$.
\begin{definition}
\label{def-para-q}
Define the parabolic subgroup $\mathrm{Q} \subset \mathrm{G}$ as the standard parabolic subgroup associated to the nodes of the Dynkin diagram that are connected to the node of $\alpha_\mathrm{P}$.
\end{definition}
\begin{example}
\label{exam-def-para}
Let us illustrate the above definition for the homogeneous spaces $X = \GL(6)/\mathrm{P}_3 = \mathrm{A}_5/\mathrm{P}_3 = \G(3,6)$ and $X = \mathrm{E}_6/\mathrm{P}_2$.
\begin{enumerate}
\item We start with $X = \mathrm{A}_5/\mathrm{P}_3$ and the Dynkin diagram of type $\mathrm{A}_5$ associated to $\GL(6)$
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,2,3,4,5}, parabolic = 4]A5
\dynkinRootMark{t}4
\dynkinRootMark{t}2
\end{dynkinDiagram}
\end{equation*}
with the third node crossed out to indicate the parabolic $\mathrm{P}_3$ and the second and fourth nodes ``tensored out'' to indicate the neighboring roots. So here $\mathrm{Q} = \mathrm{P}_{2,4}$ is the standard parabolic subgroup associated to the second and fourth simple roots: $\mathrm{W}_{\mathrm{P}_{2,4}} = \langle s_{\alpha_1}, s_{\alpha_3}, s_{\alpha_5} \rangle$.
\item Now consider $X = \mathrm{E}_6/\mathrm{P}_2$. The following picture shows the Dynkin diagram of $\mathrm{E}_6$
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,...,6}, parabolic = 2]E6
\dynkinRootMark{t}4
\end{dynkinDiagram}
\end{equation*}
with the second node crossed out to indicate the parabolic $\mathrm{P}_2$ and the fourth node ``tensored out'' to indicate the neighboring root. So here $\mathrm{Q} = \mathrm{P}_4$ is the standard maximal parabolic subgroup associated to the fourth simple root.
\end{enumerate}
\end{example}
By convention, if $\mathrm{G}$ is simply laced, all roots are considered to be both long and short.
\begin{theorem}[{\cite[Theorem 4.3]{LaMa03}}]
\label{thm-lines}
Let $X = \mathrm{G}/\mathrm{P}$ with $\mathrm{P}$ maximal associated to the simple root $\alpha_\mathrm{P}$ and let $\mathrm{Q}$ be as in Definition \ref{def-para-q}.
\begin{enumerate}
\item If $\alpha_\mathrm{P}$ is long, then $\F(X) = \mathrm{G}/\mathrm{Q}$ is homogeneous under the action of $\mathrm{G}$.
\item If $\mathrm{G}$ is not simply laced and $\alpha_\mathrm{P}$ is short, then $\F(X)$ is not homogeneous and contains two $\mathrm{G}$-orbits. The closed orbit in $\F(X)$ is $\mathrm{G}/\mathrm{Q}$.
\end{enumerate}
\end{theorem}
\begin{example}
Recall Example \ref{exam-def-para}.
\begin{enumerate}
\item For $X = \mathrm{A}_5/\mathrm{P}_3 = \G(3,6)$, we thus have $\F(X) = \mathrm{A}_5/\mathrm{P}_{2,4} = \Fl(2,4;6)$.
\item For $X = \mathrm{E}_6/\mathrm{P}_2$, we thus have $\F(X) = \mathrm{E}_6/\mathrm{P}_4$.
\end{enumerate}
\end{example}
For coadjoint varieties (see Section \ref{sec-coadjoint}) that appear in the second case of Theorem \ref{thm-lines}, we will give another equivalent description of the Fano variety of lines $\F(X)$ that is better suited for our purposes.
Let $\Z(X) = \{ (x,\ell) \in X \times \F(X) \ | \ x \in \ell \}$ be the universal family of lines with projections $p : \Z(X) \to X$ and $q : \Z(X) \to \F(X)$. We have the diagram
\begin{equation}
\label{eq:universal-family-of-lines}
\begin{aligned}
\xymatrix{
\Z(X) \ar[r]^-p \ar[d]_-q & X \\
\F(X) \\}
\end{aligned}
\end{equation}
If the root $\alpha_\mathrm{P}$ is long, then $\F(X) = \mathrm{G}/\mathrm{Q}$ and setting $\mathrm{R} = \mathrm{P} \cap \mathrm{Q}$, the above diagram becomes
\begin{equation}
\label{eq:universal-family-of-lines-simply-laced}
\begin{aligned}
\xymatrix{
\Z(X) = \mathrm{G}/\mathrm{R} \ar[r]^-p \ar[d]_-q & X = \mathrm{G}/\mathrm{P} \\
\F(X) = \mathrm{G}/\mathrm{Q} \\}
\end{aligned}
\end{equation}
We end this subsection with the following useful result on Weyl groups.
\begin{lemma}
\label{lemm-weylPQR}
Let $\mathrm{P}$ be a standard maximal parabolic subgroup associated to the simple root $\alpha_\mathrm{P}$ and set $s_\mathrm{P} = s_{\alpha_\mathrm{P}}$. Let $\mathrm{Q}$ be the standard parabolic subgroup as in Definition \ref{def-para-q}. Set $\mathrm{R} = \mathrm{P} \cap \mathrm{Q}$. Let $\roots_\mathrm{P}$, $\roots_\mathrm{Q}$ and $\roots_\mathrm{R}$ be the root systems of $\mathrm{P}$, $\mathrm{Q}$ and $\mathrm{R}$. Let $\F = \mathrm{G}/\mathrm{Q}$, $\Z = \mathrm{G}/\mathrm{R}$ and $\pi : \Z \to \F$.
\begin{enumerate}
\item We have $\simpleroots_\mathrm{R} =\simpleroots_\mathrm{Q} \setminus \{ \alpha_\mathrm{P} \}.$
\item We have $w_{0,\mathrm{Q}}^\mathrm{R} = s_\mathrm{P}$.
\item For $w \in \mathrm{W}^\mathrm{P} \setminus \{ 1 \}$, we have $\ell(w s_\mathrm{P}) = \ell(w) - 1$ and $w s_\mathrm{P} \in \mathrm{W}^\mathrm{Q}$.
\item For $w \in \mathrm{W}^\mathrm{P} \setminus \{ 1 \}$, the map $\pi : \Z^w \to \F^{ws_\mathrm{P}}$ is birational.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Since $\mathrm{Q}$ is the standard parabolic subgroup associated to the simple roots connected to $\alpha_\mathrm{P}$, the set $\simpleroots_\mathrm{Q}$ consists of $\alpha_\mathrm{P}$ together with the simple roots orthogonal to $\alpha_\mathrm{P}$. Since $\mathrm{R} = \mathrm{P} \cap \mathrm{Q}$ and $\alpha_\mathrm{P} \not\in \simpleroots_\mathrm{P}$, we get $\simpleroots_\mathrm{R} = \simpleroots_\mathrm{P} \cap \simpleroots_\mathrm{Q} = \simpleroots_\mathrm{Q} \setminus \{ \alpha_\mathrm{P} \}$.
(2) The element $w_{0,\mathrm{Q}}^\mathrm{R}$ is the longest element in $\mathrm{W}_\mathrm{Q}^\mathrm{R}$. By (1), the group $\mathrm{W}_\mathrm{Q}$ is the direct product $\mathrm{W}_\mathrm{R} \times \langle s_\mathrm{P} \rangle$, so that $\mathrm{W}_\mathrm{Q}^\mathrm{R} = \{1, s_\mathrm{P}\}$ and $w_{0,\mathrm{Q}}^\mathrm{R} = s_\mathrm{P}$.
(3) Since $\mathrm{P}$ is maximal associated to $\alpha_\mathrm{P}$, any element $w \in \mathrm{W}^\mathrm{P}$
with $w \neq 1$ satisfies $\ell(ws_\mathrm{P}) = \ell(w) - 1$. To prove that $ws_\mathrm{P} \in \mathrm{W}^\mathrm{Q}$,
recall the characterisation $\mathrm{W}^\mathrm{Q} = \{ v \in \mathrm{W} \ | \ v(\roots_\mathrm{Q}^+) \subset \roots^+ \}$
and let $\beta \in \roots_\mathrm{Q}^+$. If $\beta = \alpha_\mathrm{P}$, then since $\ell((ws_\mathrm{P})s_\mathrm{P}) = \ell(ws_\mathrm{P}) + 1$,
we have $ws_\mathrm{P}(\alpha_\mathrm{P}) > 0$. Otherwise $\beta \in \roots_\mathrm{R}^+ \subset \roots_\mathrm{P}^+$ and $\beta$ is
orthogonal to $\alpha_\mathrm{P}$, so that $s_\mathrm{P}(\beta) = \beta$. We get $ws_\mathrm{P}(\beta) = w(\beta) \in \roots^+$ since $w \in \mathrm{W}^\mathrm{P}$.
(4) Apply Corollary \ref{coro-bir}.(4) to the map $\pi : \Z \to \F$ with $u = ws_\mathrm{P} \in \mathrm{W}^\mathrm{Q}$: by (2) we have $uw_{0,\mathrm{Q}}^\mathrm{R} = ws_\mathrm{P}s_\mathrm{P} = w$, so that the map $\pi : \Z^w \to \F^{ws_\mathrm{P}}$ is birational.
\end{proof}
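For illustration, let us make the lemma explicit in the first case of Example \ref{exam-def-para}; the claims below follow directly from the $\mathrm{A}_5$ Dynkin diagram.

```latex
\begin{example}
Let $X = \mathrm{A}_5/\mathrm{P}_3 = \G(3,6)$, so that $\alpha_\mathrm{P} = \alpha_3$,
$\mathrm{Q} = \mathrm{P}_{2,4}$ and $\mathrm{R} = \mathrm{P} \cap \mathrm{Q} = \mathrm{P}_{2,3,4}$. The set of
simple roots of $\mathrm{Q}$ is $\{\alpha_1, \alpha_3, \alpha_5\}$ while that of
$\mathrm{R}$ is $\{\alpha_1, \alpha_5\}$, in accordance with Lemma
\ref{lemm-weylPQR}.(1), and $\alpha_3$ is orthogonal to both $\alpha_1$ and
$\alpha_5$. Consequently $\mathrm{W}_\mathrm{Q}^\mathrm{R} = \{1, s_3\}$ and
$w_{0,\mathrm{Q}}^\mathrm{R} = s_3 = s_\mathrm{P}$, as predicted by Lemma \ref{lemm-weylPQR}.(2).
\end{example}
```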
\subsection{Computing degree $1$ Gromov--Witten invariants}
Let $X = \mathrm{G}/\mathrm{P}$ be as before a projective rational homogeneous space with $\mathrm{P}$ maximal.
\begin{lemma}\label{lemma:quantum-to-classical}
Let $w_1, \dots, w_n \in \mathrm{W}^{\mathrm{P}}$ and consider the corresponding opposite Schubert varieties $X^{w_1}, \dots, X^{w_n} \subset X$ and Schubert classes $\sigma^{w_1}, \dots, \sigma^{w_n} \in H^*(X, {\mathbb Q})$. Let us further assume that
\begin{equation*}
\sum_i \deg(\sigma^{w_i})
= n - 3 + (-K_X, \ell) + \dim X,
\end{equation*}
where $\ell$ is a line in $X$ and $\deg(\gamma)$
is the cohomological degree of a cohomology class $\gamma$.
\begin{enumerate}
\item For $g_1, \dots, g_n \in \mathrm{G}$ general, the scheme theoretic intersection
\begin{equation*}
ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n})
\end{equation*}
consists of finitely many reduced points, supported in the locus of automorphism-free stable maps with irreducible domain, and
\begin{equation*}
\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = \# \left( ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n}) \right).
\end{equation*}
\medskip
\item For general elements $g_1, \dots, g_n \in \mathrm{G}$ the translates
\begin{equation*}
g_1 \left( qp^{-1}(X^{w_1}) \right), \dots, g_n \left( qp^{-1}(X^{w_n}) \right)
\end{equation*}
intersect transversally in $\F(X)$ and we have
\begin{equation*}
\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = \# \left( g_1 \left( qp^{-1}(X^{w_1}) \right) \cap \dots \cap g_n \left( qp^{-1}(X^{w_n}) \right) \right).
\end{equation*}
\medskip
\item If the map $q : p^{-1}(X^{w_i}) \to qp^{-1}(X^{w_i})$ is not generically finite for some $i$, we have $\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = 0$. Otherwise, we have
\begin{equation*}
\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = \deg_{\F(X)}(q_*p^*(\sigma^{w_1}) \cup \dots \cup q_*p^*(\sigma^{w_n})).
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{remark}
Note that the above in particular implies that the invariant $\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1$ is equal to the number of degree one stable maps $f \colon ({\mathbb P}^1, x_1, \dots x_n) \to X$ such that $f(x_i) \in g_iX^{w_i}$. Note also that the degree one condition ensures that the morphism $f \colon {\mathbb P}^1 \to X$ induces an isomorphism of ${\mathbb P}^1$ onto a line $L \subset X$.
\end{remark}
\begin{proof}
(1) This is a special case of \cite[Lemma 14]{FuPa}.
(2) For all $i$, we have $\codim_{\F(X)}(qp^{-1}(X^{w_i})) \geq \codim_X(X^{w_i}) - 1$. This implies the inequality
$$\sum_i \codim_{\F(X)}(qp^{-1}(X^{w_i})) \geq \sum_i \deg(\sigma^{w_i})
- n = (-K_X, \ell) + \dim X -3.$$
In particular, this sum is at least $\dim \F(X) = (-K_X, \ell) + \dim X -3$. Recall that $\F(X)$ has at most two $\mathrm{G}$-orbits and that both maps $p : \Z(X) \to X$ and $q : \Z(X) \to \F(X)$ are $\mathrm{G}$-equivariant. Since $X$ is $\mathrm{G}$-homogeneous, the set $q(p^{-1}(x))$ of lines passing through $x$ meets any $\mathrm{G}$-orbit in $\F(X)$. In particular, for any $i \in [1,n]$, the set $qp^{-1}(X^{w_i})$ of lines meeting $X^{w_i}$ will meet any $\mathrm{G}$-orbit in $\F(X)$. This together with the previous inequality implies that, for general elements $g_1, \dots, g_n \in \mathrm{G}$, the translates $g_1 \left( qp^{-1}(X^{w_1}) \right), \dots, g_n \left( qp^{-1}(X^{w_n}) \right)$ only meet in the open $\mathrm{G}$-orbit of $\F(X)$ and intersect transversally.
We now want to compare the Gromov--Witten invariant $\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1$ and the number of points $\# \left( g_1 \left( qp^{-1}(X^{w_1}) \right) \cap \dots \cap g_n \left( qp^{-1}(X^{w_n}) \right) \right)$. By (1), we have $\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = \# \left( ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n}) \right)$, and this intersection is finite. Consider the map $\pi : \overline{M}_{0,n}(X, 1) \to \F(X)$ obtained by forgetting the marked points. Note that we have
$$\pi \left( ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n}) \right) = g_1 \left( qp^{-1}(X^{w_1}) \right) \cap \dots \cap g_n \left( qp^{-1}(X^{w_n}) \right)$$
so we need to prove that $\pi$ is injective on $ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n})$. A point of this intersection lying over a line $\ell \in g_1 \left( qp^{-1}(X^{w_1}) \right) \cap \dots \cap g_n \left( qp^{-1}(X^{w_n}) \right)$ is given by points $p_1,\cdots,p_n$ on the source curve with $ev_i(p_i) \in g_iX^{w_i}$, that is, with $p_i \in \ell \cap g_i X^{w_i}$. Since a Schubert variety is defined by linear equations (see \cite[Theorem 3.11.(ii)]{Ra}), this intersection is either a reduced point or the line $\ell$ itself. Since $ev_1^{-1}(g_1X^{w_1}) \cap \dots \cap ev_n^{-1}(g_nX^{w_n})$ is finite, the intersection $\ell \cap g_iX^{w_i}$ must be a reduced point for all $i$, proving the result.
(3) Assume that the map $q : p^{-1}(X^{w_i}) \to qp^{-1}(X^{w_i})$ is not generically finite for some $i$. Then this map is a $\mathbb{P}^1$-bundle and we have $\codim_{\F(X)} qp^{-1}(X^{w_i}) = \codim_X(X^{w_i}) > \codim_X(X^{w_i}) - 1$. This implies the inequality
$$\sum_i \codim_{\F(X)}(qp^{-1}(X^{w_i})) > \sum_i \deg(\sigma^{w_i})
- n = \dim \F(X).$$
In particular, for general elements $g_1, \cdots, g_n \in \mathrm{G}$, the intersection of translates of Schubert varieties $g_1 \left( qp^{-1}(X^{w_1}) \right) \cap \dots \cap g_n \left( qp^{-1}(X^{w_n}) \right)$ is empty. By (2) this proves the vanishing $\scal{\sigma^{w_1}, \dots, \sigma^{w_n}}_1 = 0$.
If the map $q : p^{-1}(X^{w_i}) \to qp^{-1}(X^{w_i})$ is generically finite, its fiber at $\ell \in \F(X)$ is given by the intersection of the line $\ell$ with $X^{w_i}$. As explained in the proof of (2), this intersection is either a reduced point or the line $\ell$. The general fiber is therefore reduced to a point and the map is birational. We get the equality $q_*p^*(\sigma^{w_i}) = [qp^{-1}(X^{w_i})]$ and the result follows by transversality (recall that $\F(X)$ has finitely many orbits and that $qp^{-1}(X^{w_i})$ meets all the orbits).
\end{proof}
Let $x \in X$ and set $\F_x = \F_x(X) = qp^{-1}(x) = \{ \ell \in \F(X) \ | \ x \in \ell\}$. Let $i : \F_x \to \F(X)$ be the inclusion map.
\begin{corollary}
\label{coro:quantum-to-classical}
Let $w_1,\cdots,w_n \in \mathrm{W}^\mathrm{P}$ be such that
\begin{equation*}
\sum_i \deg(\sigma^{w_i}) = n - 2 + (-K_X, \ell).
\end{equation*}
If the map $q : p^{-1}(X^{w_i}) \to qp^{-1}(X^{w_i})$ is not generically finite for some $i$, we have $\scal{[{\rm pt}], \sigma^{w_1}, \dots, \sigma^{w_n}}_1 = 0$. Otherwise, we have
\begin{equation*}
\scal{[{\rm pt}], \sigma^{w_1}, \dots, \sigma^{w_n}}_1 = \deg_{\F_x(X)}(i^*q_*p^*(\sigma^{w_1}) \cup \dots \cup i^*q_*p^*(\sigma^{w_n})).
\end{equation*}
\end{corollary}
\begin{proof}
The degree condition agrees with the one in Lemma \ref{lemma:quantum-to-classical} applied to the classes $[{\rm pt}], \sigma^{w_1}, \dots, \sigma^{w_n}$, since $\deg([{\rm pt}]) = \dim X$.
The result then follows from Lemma \ref{lemma:quantum-to-classical}.(3) and the projection formula for the map $i \colon \F_x \to \F(X)$, since $qp^{-1}(x) = \F_x$.
\end{proof}
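As a sanity check, the corollary can be spelled out in the simplest case; the following standard computation recovers the fact that two general points of the plane lie on a unique line.

```latex
\begin{example}
Let $X = {\mathbb P}^2$, so that $(-K_X,\ell) = 3$, $\F(X) = ({\mathbb P}^2)^\vee$ is the
dual projective plane and $\F_x \simeq {\mathbb P}^1$ is the pencil of lines through
$x$. Take $n = 1$ and $\sigma^{w_1} = [{\rm pt}]$: the degree condition
$2 = 1 - 2 + 3$ is satisfied. For a general point $y$, the locus $qp^{-1}(y)$ is
the pencil of lines through $y$, whose restriction to $\F_x$ has degree $1$. We
recover $\scal{[{\rm pt}],[{\rm pt}]}_1 = 1$: there is a unique line through two
general points of ${\mathbb P}^2$.
\end{example}
```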
\subsection{Simply laced case}
\label{subsection:simply-laced-types}
We apply the previous results to the situation where $\F(X)$ is homogeneous, and therefore of the form $\F(X) = \mathrm{G}/\mathrm{Q}$ with $\mathrm{Q}$ given in Definition \ref{def-para-q}. This is in particular the case if $\mathrm{G}$ is simply laced. For simplicity, we set $\F \coloneqq \F(X) = \mathrm{G}/\mathrm{Q}$. For $w \in \mathrm{W}$, recall the definition of $w^\mathrm{Q} \in \mathrm{W}^\mathrm{Q}$. We have the following result.
\begin{lemma}
\label{lemma:pull-push}
Let $w$ be an element of $\mathrm{W}^{\mathrm{P}}$.
\begin{enumerate}
\item Then we have $qp^{-1}(X^w) = \F^w = \F^{w^\mathrm{Q}}$.
\item Furthermore, we have
\begin{equation*}
q_*p^*[X^w] = \left\{
\begin{array}{ll}
~ 0 & \textrm{if $w = 1$,} \\
~ [\F^{w^\mathrm{Q}}] = [\F^{ws_\mathrm{P}}] & \textrm{else.} \\
\end{array}
\right.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
The first claim follows from Proposition \ref{prop-comb} applied to the maps $p : \Z = \mathrm{G}/\mathrm{R} \to X = \mathrm{G}/\mathrm{P}$ and $q : \Z = \mathrm{G}/\mathrm{R} \to \F = \mathrm{G}/\mathrm{Q}$. We get $p^{-1}(X^w) = \Z^w$ and $q(\Z^w) = \F^w = \F^{w^\mathrm{Q}}$.
For (2), consider the map $q : p^{-1}(X^w) \to qp^{-1}(X^w) = \F^w = \F^{w^\mathrm{Q}}$. Since this map is $\mathrm{B}$-equivariant (and because of the structure of Schubert varieties), we have the following alternative: either the map is birational, or its generic fiber has positive dimension. In the first case, we get $q_*p^*[X^w] = [\F^w] = [\F^{w^\mathrm{Q}}]$, while in the second case, we get $q_*p^*[X^w] = 0$.
Assume first that $w = 1$; then $X^w = X$, $p^{-1}(X) = \Z$ and $\F^w = \F$. The map $q : \Z^w = \Z \to \F^w = \F$ is a $\mathbb{P}^1$-fibration, hence $q_*p^*[X^w] = 0$.
If $w \in \mathrm{W}^\mathrm{P}$ is such that $w \neq 1$, then by Lemma \ref{lemm-weylPQR}, we have $w = (ws_\mathrm{P})s_\mathrm{P}$ with $\ell(w s_\mathrm{P}) + 1 = \ell(w)$ and $ws_\mathrm{P} \in \mathrm{W}^\mathrm{Q}$, so that the decomposition $w = w^\mathrm{Q} w_\mathrm{Q}$ is given by $w^\mathrm{Q} = ws_\mathrm{P}$ and $w_\mathrm{Q} = s_\mathrm{P}$. By Lemma \ref{lemm-weylPQR}, we have $w_{0,\mathrm{Q}}^\mathrm{R} = s_\mathrm{P}$, thus setting $u = ws_\mathrm{P} \in \mathrm{W}^\mathrm{Q}$, we have $w = uw_{0,\mathrm{Q}}^\mathrm{R}$. By Corollary \ref{coro-bir}.(4) applied to the map $q : \Z = \mathrm{G}/\mathrm{R} \to \F = \mathrm{G}/\mathrm{Q}$, the map $\Z^w \to \F^{w^\mathrm{Q}}$ is birational, proving the result.
\end{proof}
For a simply laced $\mathrm{G}$, not only is $\F$ a homogeneous variety, but the variety $\F_x$ of lines passing through a point $x \in X$ is also homogeneous with respect to a subgroup of $\mathrm{G}$. Indeed, the variety $\F$ is homogeneous of the form $\F = \mathrm{G}/\mathrm{Q}$ with $\mathrm{Q}$ as in Definition \ref{def-para-q}. Set $\mathrm{R} = \mathrm{P} \cap \mathrm{Q}$. The variety $\F_x$ for $x = 1. \mathrm{P} \in X = \mathrm{G}/\mathrm{P}$ is therefore the fiber of the map $\mathrm{G}/\mathrm{R} \to \mathrm{G}/\mathrm{P}$. We thus have $\F_x = \mathrm{P}/\mathrm{R}$. To simplify notation, we set $\mathrm{V} := \F_x$. We have:
\begin{equation*}
\mathrm{V} := \F_x = \mathrm{P}/\mathrm{R} \simeq \mathrm{L}/\mathrm{R}_\mathrm{L}
\end{equation*}
where $\mathrm{L}$ is the Levi subgroup of $\mathrm{P}$ and $\mathrm{R}_\mathrm{L} = \mathrm{L} \cap \mathrm{R}$.
\begin{remark}
\label{rem-fx}
There is an easy combinatorial rule to deduce $\mathrm{L}$ and $\mathrm{R}_\mathrm{L}$. Indeed, the Dynkin diagram of $\mathrm{L}$ is obtained from that of $\mathrm{G}$ by removing the node corresponding to $\mathrm{P}$ while the parabolic subgroup $\mathrm{R}_\mathrm{L}$ is given in the Dynkin diagram of $\mathrm{L}$ by the marked nodes corresponding to $\mathrm{Q}$.
\end{remark}
\begin{example}
Let us illustrate the above situation for the homogeneous space $X = \mathrm{E}_6/\mathrm{P}_2$. We start with the Dynkin diagram of $\mathrm{E}_6$
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,...,6}, parabolic = 2]E6
\dynkinRootMark{t}4
\end{dynkinDiagram}
\end{equation*}
with the second node crossed out to indicate the parabolic $\mathrm{P}_2$ and the fourth node ``tensored out'' to indicate the neighboring root. Removing the second vertex and using the fourth vertex for the parabolic we get
\begin{equation*}
\dynkin[edge length = 2em,labels*={1,3,4,5,6}, parabolic = 4]A5
\end{equation*}
Thus, we see that in terms of the usual numbering of the vertices in type $\mathrm{A}$ we have
\begin{equation*}
\mathrm{V} := \F_x = \mathrm{A}_5/\mathrm{P}_3 = \G(3,6).
\end{equation*}
\end{example}
\begin{example}
\label{exam-fx}
Let us now illustrate the above situation for the homogeneous space $X = \mathrm{E}_6/\mathrm{P}_1$. We start with the Dynkin diagram of $\mathrm{E}_6$
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,...,6}, parabolic = 1]E6
\dynkinRootMark{t}3
\end{dynkinDiagram}
\end{equation*}
with the first node crossed out to indicate the parabolic $\mathrm{P}_1$ and the third node ``tensored out'' to indicate the neighboring root. Removing the first vertex and using the third vertex for the parabolic we get
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={6,5,4,2,3}]D5
\dynkinRootMark{t}5
\end{dynkinDiagram}
\end{equation*}
Thus, we see that in terms of the usual numbering of the vertices in type $\mathrm{D}$ we have
\begin{equation*}
\mathrm{V} := \F_x = \mathrm{D}_5/\mathrm{P}_4 = \OG^+(5,10).
\end{equation*}
\end{example}
Since $\mathrm{V}$ is homogeneous, cohomology classes $i^*q_*p^*(\sigma^w)$ appearing in Corollary \ref{coro:quantum-to-classical} can be expressed in terms of Schubert classes on $\mathrm{V}$. In particular, by Proposition \ref{prop-res-fibre} we have the following result.
\begin{lemma}
\label{lemma:intersections-of-schubert-varieties-with-Fx}
For $v \in \mathrm{W}^{\mathrm{Q}}$ we have
\begin{equation*}
\mathrm{V} \cap \F^v =
\begin{cases}
\emptyset \quad \text{if} \quad v \not \leq w^{\mathrm{Q}}_{\mathrm{P}}, \\
\mathrm{V}^v \quad \text{if} \quad v \leq w^{\mathrm{Q}}_{\mathrm{P}}.
\end{cases}
\end{equation*}
\end{lemma}
\begin{corollary}
\label{corollary:quantum-to-classical-simply-laced}
Let $w_1,\dots,w_n \in \mathrm{W}^\mathrm{P} \setminus \{ 1 \}$ be such that
\begin{equation*}
\sum_i \deg(\sigma^{w_i}) = n - 2 + (-K_X, \ell).
\end{equation*}
We have
\begin{equation*}
\scal{[{\rm pt}],\sigma_X^{w_1},\dots,\sigma_X^{w_n}}_1^X =
\begin{cases}
\deg_{\Fpt} \left( \cup_{i=1}^n \sigma_{\Fpt}^{w_is_\mathrm{P}} \right)
\quad \text{if} \quad w_is_\mathrm{P} \leq w_{\mathrm{P}}^{\mathrm{Q}} \text{ for all } i, \\
0 \quad \text{otherwise}.
\end{cases}
\end{equation*}
\end{corollary}
\begin{proof}
This follows from Corollary \ref{coro:quantum-to-classical} and Lemma \ref{lemma:intersections-of-schubert-varieties-with-Fx}.
\end{proof}
\begin{remark}
In the non simply laced case, we will also give explicit formulas to compute these invariants, but we will restrict ourselves to the coadjoint case (see Section \ref{sec-coadjoint}).
\end{remark}
\section{Coadjoint varieties}
\label{sec-coadjoint}
\subsection{Definition}
Let $\mathrm{G}$ be a semisimple algebraic group with simple Lie algebra $\mathfrak{g}$. The group $\mathrm{G}$ acts on $\mathfrak{g}$ and this representation is the adjoint representation of $\mathrm{G}$.
\begin{definition}
The adjoint variety for the group $\mathrm{G}$ is the unique closed $\mathrm{G}$-orbit in $\mathbb{P}(\mathfrak{g})$.
\end{definition}
In the following we give a simplified definition of the Langlands dual group adapted to groups defined over algebraically closed fields. Recall that in this case, any semisimple group is determined by its Dynkin diagram and the lattice $\Lambda_{\mathrm{G}} = \mathrm{X}(\mathrm{T})$ of characters of a maximal torus $\mathrm{T}$. Furthermore, if $\mathrm{R}_{\mathrm{G},{\mathbb Z}}$ and $\mathrm{P}_{\mathrm{G},{\mathbb Z}}$ are the root and weight lattices of $\mathrm{G}$, we always have $\mathrm{R}_{\mathrm{G},{\mathbb Z}} \subset \mathrm{X}(\mathrm{T}) \subset \mathrm{P}_{\mathrm{G},{\mathbb Z}}$.
\begin{definition}
The Langlands dual group $\mathrm{G}^\vee$ of $\mathrm{G}$ is the unique group whose Dynkin diagram is obtained from the Dynkin diagram of $\mathrm{G}$ by reversing the arrows and such that $\Lambda_{\mathrm{G}^\vee} = \Hom_{\mathbb Z}(\Lambda_\mathrm{G},{\mathbb Z})$.
\end{definition}
\begin{remark}
Note that $(\mathrm{G}^\vee)^\vee = \mathrm{G}$.
\end{remark}
Since the Dynkin diagrams of $\mathrm{G}$ and $\mathrm{G}^\vee$ are the same up to the direction of the arrows, we identify their vertices.
\begin{definition}
Let $\mathrm{P} \subset \mathrm{G}$ be a standard parabolic subgroup. This subgroup is associated to a set $N_\mathrm{P}$ of nodes of the Dynkin diagram such that $\mathrm{W}_\mathrm{P} = \langle s_{\alpha_i} \ | \ i \not\in N_\mathrm{P} \rangle$. We define the standard parabolic subgroup $\mathrm{P}^\vee$ as the standard parabolic subgroup of $\mathrm{G}^\vee$ defined by the same set $N_\mathrm{P}$ of nodes of the Dynkin diagram.
\end{definition}
\begin{example}
\label{exam-dual}
Let us illustrate the above definitions for the group $\mathrm{G} = \Sp_{10}$. The Dynkin diagram of $\mathrm{G}$ is of type $\mathrm{C}_5$ and is as follows
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,2,3,4,5}, parabolic = 2]C5
\end{dynkinDiagram}
\end{equation*}
and we crossed out the second node to indicate the parabolic $\mathrm{P}_2$. The Langlands dual group $\mathrm{G}^\vee$ has Dynkin diagram of type $\mathrm{B}_5$ as follows
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,2,3,4,5}, parabolic = 2]B5
\end{dynkinDiagram}
\end{equation*}
and we crossed out the second node to indicate the parabolic $\mathrm{P}_2^\vee$.
\end{example}
\begin{definition}
The coadjoint variety for the group $\mathrm{G}$ is $X = \mathrm{G}/\mathrm{P}$ such that $X^\vee = \mathrm{G}^\vee/\mathrm{P}^\vee$ is the adjoint variety for the group $\mathrm{G}^\vee$.
\end{definition}
\begin{example}
Here are some examples of coadjoint varieties, the full list is given in Table~1.
\begin{enumerate}
\item If $\mathrm{G}$ is simply laced, then $\mathrm{G}^\vee = \mathrm{G}$ and adjoint and coadjoint varieties coincide.
\item If $\mathrm{G}$ is of type $\mathrm{C}_n$, then $\mathrm{G}^\vee$ is of type $\mathrm{B}_n$ and $\mathrm{G}^\vee/\mathrm{P}_2^\vee$ is the adjoint variety for groups of type $\mathrm{B}_n$. The coadjoint variety for $\mathrm{G}$ is therefore $X = \mathrm{G}/\mathrm{P}_2 = \IG(2,2n)$, the Grassmannian of isotropic lines for a symplectic form in dimension $2n$.
\end{enumerate}
\end{example}
\vskip 0.2 cm
\begin{center}
\begin{tabular}{ccc}
\hline
Type of $\mathrm{G}$ & Coadjoint variety & Adjoint variety \\
\hline
$\mathrm{A}_n$ & $\mathrm{A}_n/\mathrm{P}_{1,n} = \Fl(1,n;n+1)$ & $\Fl(1,n;n+1)$ \\
$\mathrm{B}_n$ & $\mathrm{B}_n/\mathrm{P}_1 = \Q_{2n-1}$ & $\mathrm{B}_n/\mathrm{P}_2 = \OG(2,2n+1)$ \\
$\mathrm{C}_n$ & $\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ & $\mathrm{C}_{n}/\mathrm{P}_1 = \mathbb{P}^{2n-1}$ \\
$\mathrm{D}_n$ & $\mathrm{D}_n/\mathrm{P}_2 = \OG(2,2n)$ & $\mathrm{D}_n/\mathrm{P}_2 = \OG(2,2n)$ \\
$\mathrm{E}_6$ & $\mathrm{E}_6/\mathrm{P}_2$ & $\mathrm{E}_6/\mathrm{P}_2$ \\
$\mathrm{E}_7$ & $\mathrm{E}_7/\mathrm{P}_1$ & $\mathrm{E}_7/\mathrm{P}_1$ \\
$\mathrm{E}_8$ & $\mathrm{E}_8/\mathrm{P}_8$ & $\mathrm{E}_8/\mathrm{P}_8$ \\
$\mathrm{F}_4$ & $\mathrm{F}_4/\mathrm{P}_4$ & $\mathrm{F}_4/\mathrm{P}_1$ \\
$\mathrm{G}_2$ & $\mathrm{G}_2/\mathrm{P}_2 = \Q_5$ & $\mathrm{G}_2/\mathrm{P}_1$ \\
\hline
\end{tabular}
\medskip
\centering
{ Table 1. Coadjoint varieties.}
\end{center}
\subsection{Indexing Schubert classes by roots and Chevalley formula}
\label{subsection:indexing-schubert-classes-by-roots}
We recall some results from \cite[Section 2]{ChPe} giving a correspondence between
short roots and Schubert varieties in $X = \mathrm{G}/\mathrm{P}$ a coadjoint variety not of type $\mathrm{A}$.
Let $\roots$ be the root system of $\mathrm{G}$ and let $\shortroots$ be the subset of short roots in $\roots$. Recall that if $\mathrm{G}$ is simply laced, all roots are considered both long and short. The \textsf{height} $|\alpha|$ of a root
$\alpha \in \roots$ is defined by
\begin{equation*}
|\alpha| = \sum_{\beta \in \simpleroots} |m_\beta| \qquad \text{for} \quad \alpha = \sum_{\beta \in \simpleroots} m_\beta \beta.
\end{equation*}
Let $\theta \in \shortroots$ be the highest short root and $r_0 = |\theta|$. Then we have $\dim X = 2r_0 - 1$. Let $\Theta$ be the highest root of $\roots$. The index of $X$ is given by $c_1(X) = r = |\Theta|$. When the group is simply laced, $r_0 = |\theta| = |\Theta| = r$ is the index of $X$ as a Fano variety, but in general we have $r \geq r_0$. For $\alpha = \sum_{\beta \in \Delta}m_\beta \beta$ a root, set ${\rm Supp}(\alpha) = \{ \beta \in \Delta \ | \ m_\beta \neq 0 \}$.
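\begin{example}
Let us illustrate these notions in type $\mathrm{C}_n$, for the coadjoint variety $X = \IG(2,2n)$. With notations as in \cite{Bo}, the highest short root is $\theta = \varepsilon_1 + \varepsilon_2 = \alpha_1 + 2\alpha_2 + \cdots + 2\alpha_{n-1} + \alpha_n$ and the highest root is $\Theta = 2\varepsilon_1 = 2\alpha_1 + \cdots + 2\alpha_{n-1} + \alpha_n$, so that
\begin{equation*}
r_0 = |\theta| = 2n-2 \quad \text{and} \quad r = |\Theta| = 2n-1.
\end{equation*}
We recover $\dim X = 2r_0 - 1 = 4n-5$ and the index $c_1(X) = 2n-1$ of $\IG(2,2n)$; in particular $r > r_0$ here.
\end{example}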
We have the following result.
\begin{lemma}[{\cite[Proposition 2.9]{ChPe}}]
\label{lemm:wp-root}
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety not of type $\mathrm{A}$.
\begin{enumerate}
\item The map $\mathrm{W}^{\mathrm{P}} \to \shortroots$ defined by $w \mapsto w(\theta)$
is bijective and can be used to reindex Schubert classes using short roots $\shortroots$ setting $X^{w(\theta)} := X^w$ and $\sigma_{w(\theta)} := \sigma^w$.
\item We have a basis $(\sigma_{\alpha})_{\alpha \in \shortroots}$ of $H^*(X, {\mathbb Q})$ such that:
\begin{enumerate}
\item We have the equivalence
\begin{equation*}
X^\alpha\subset X^\beta \Leftrightarrow
\begin{cases}
\alpha \leq \beta & \text{if } \alpha \text{ and } \beta \text{ have the same sign}, \\
{\rm Supp}(\alpha) \cup {\rm Supp}(\beta) \text{ is connected} & \text{for } \alpha <0 \text{ and } \beta > 0. \\
\end{cases}
\end{equation*}
\item The Poincaré duality takes the form $\sigma_\alpha^\vee = \sigma_{w_0(\alpha)}$.
\end{enumerate}
\end{enumerate}
\end{lemma}
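\begin{example}
As an illustration in the smallest symplectic case, let $X = \IG(2,4) = \Q_3$ be the coadjoint variety of type $\mathrm{C}_2$. The short roots are $\pm(\varepsilon_1-\varepsilon_2)$ and $\pm(\varepsilon_1+\varepsilon_2)$, so $\shortroots$ has four elements, matching the rank of $H^*(\Q_3,{\mathbb Q})$. The highest short root $\theta = \varepsilon_1+\varepsilon_2$ corresponds to $w = 1$, so that $\sigma_\theta = 1$ is the fundamental class, while Poincar\'e duality gives $\sigma_\theta^\vee = \sigma_{w_0(\theta)} = \sigma_{-\theta}$.
\end{example}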
Recall the Chevalley formula in cohomology in this context.
\begin{proposition}[{\cite[Proposition 2.15]{ChPe}}]
\label{prop:chevalley-H}
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety not of type $\mathrm{A}$.
Let $\alpha \in \shortroots$ be a short root and let $h$ be the hyperplane class.
We have
\begin{equation}
\label{eq:classical-chevalley-formula}
h \cup \sigma_\alpha = \left \{ \begin{array}{ll}
\displaystyle{\sum_{\beta \in \shortroots, \alpha - \beta \in \simpleroots}}
\sigma_\beta & \textrm{if $\alpha$ is not simple} \\
\\
\displaystyle{2 \sigma_{-\alpha} + \sum_{\beta \in \simpleroots_s \setminus \{\alpha\},
\ \langle \alpha^\vee , \beta \rangle \neq 0} \sigma_{-\beta}} & \textrm{if $\alpha$ is simple}\\
\end{array}
\right.
\end{equation}
\end{proposition}
\begin{remark} In type $A$ the Picard group ${{\rm Pic}}(X)$ is generated by two divisor classes $h_1$ and $h_2$ with $h = h_1 + h_2$. There is a more precise formula (see \cite{ChPe}) for multiplication with the classes $h_1$ and $h_2$. In all other cases $h$ is an ample generator of ${{\rm Pic}}(X)$.
\end{remark}
\begin{corollary}
\label{cor:wp-root}
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety not of type $\mathrm{A}$. The algebraic degree of~$\sigma_{\alpha}$ is given by
\begin{equation*}
\deg(\sigma_{\alpha}) =
\begin{cases}
r_0 - |\alpha| & \text{for } \alpha \in \Phi_{\rm s}^+, \\
r_0 + |\alpha| - 1 & \text{for } \alpha \in \Phi_{\rm s}^-. \\
\end{cases}
\end{equation*}
\end{corollary}
\begin{proof}
By descending induction on roots.
We start with positive roots. For $\theta$ the highest short root, we have $\sigma_\theta = 1$ and $\deg(\sigma_\theta) = 0 = r_0 - |\theta|$. Let $\alpha > 0$ be a short root such that there is a simple root $\beta$ with $\alpha + \beta \in \shortroots$. Then by induction $\deg(\sigma_{\alpha + \beta}) = r_0 - |\alpha + \beta| = r_0 - |\alpha| - 1$. Furthermore, the Chevalley formula implies that $\sigma_\alpha$ occurs non trivially in $h \cup \sigma_{\alpha + \beta}$ therefore $\deg(\sigma_\alpha) = \deg(\sigma_{\alpha+\beta}) + 1 = r_0 - |\alpha|$, proving the result for positive roots.
We now deal with negative roots starting with $\alpha$ such that $-\alpha \in \Delta_s$ is a simple short root. We already proved that $\deg(\sigma_{-\alpha}) = r_0 - 1$. Furthermore, the Chevalley formula implies that $\sigma_\alpha$ occurs non trivially in $h \cup \sigma_{-\alpha}$ therefore $\deg(\sigma_\alpha) = \deg(\sigma_{-\alpha}) + 1 = r_0 = r_0 - 1 + |\alpha|$. The result for other negative roots follows from this as in the case of positive roots.
\end{proof}
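\begin{remark}
As a consistency check, Corollary \ref{cor:wp-root} gives $\deg(\sigma_\theta) = r_0 - |\theta| = 0$ and $\deg(\sigma_{-\theta}) = r_0 + |\theta| - 1 = 2r_0 - 1 = \dim X$: the class $\sigma_\theta = 1$ is the fundamental class and $\sigma_{-\theta}$ is the point class, in accordance with the Poincar\'e duality $\sigma_\theta^\vee = \sigma_{w_0(\theta)}$ of Lemma \ref{lemm:wp-root}.
\end{remark}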
\subsection{Quantum Chevalley formula}
In this subsection we recall the quantum Chevalley formula for coadjoint varieties. The quantum Chevalley formula gives the quantum product of a codimension $1$ class with any other class. It was first proved in \cite{FuWo}; we present a simplified version for coadjoint varieties in terms of roots that was first explained in \cite{ChPe}.
\begin{theorem}\label{thm:chevalley-QH}
Assume that $\mathrm{G}$ is simply laced and let $\alpha \in \roots$. Then
$$h \star_{0} \sigma_\alpha = h \cup \sigma_\alpha + \left\{ \begin{array}{ll}
q & \textrm{if $\alpha$ is simple with $\langle \alpha^\vee , \Theta \rangle \neq 0$}, \\
q \sigma_{\Theta - \alpha} & \textrm{if $\alpha < 0$ with $\langle \alpha^\vee , \Theta \rangle = -1$}, \\
2q^2 + q \sigma_{-\alpha_\mathrm{P}} & \textrm{if $\alpha = - \Theta$}, \\
0 & \textrm{otherwise}. \\
\end{array}
\right.$$
\end{theorem}
To describe the non simply laced case, we will need the following definition.
\begin{definition}
Let $\mathrm{G}$ be a non simply laced group.
We set $\delta = \Theta - \theta$. Note that $\delta \in \shortroots$. Writing $(\alpha_i)_i$ for the simple roots with notations as in \cite{Bo}, we have
\begin{center}
\begin{tabular}{cc}
\hline
Type of $\mathrm{G}$ & $\delta$ \\
\hline
$\mathrm{B}_n$ & $\alpha_2 + \cdots + \alpha_n$ \\
$\mathrm{C}_n$ & $\alpha_1$ \\
$\mathrm{F}_4$ & $\alpha_1 + \alpha_2 + \alpha_3$ \\
$\mathrm{G}_2$ & $\alpha_1 + \alpha_2$ \\
\hline
\end{tabular}
\medskip
\centering
{ Table 2. The root $\delta = \Theta - \theta \in \shortroots$.}
\end{center}
\end{definition}
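\begin{example}
As a sanity check of the table, consider type $\mathrm{C}_n$: we have $\Theta = 2\varepsilon_1$ and $\theta = \varepsilon_1 + \varepsilon_2$, so that
\begin{equation*}
\delta = \Theta - \theta = \varepsilon_1 - \varepsilon_2 = \alpha_1,
\end{equation*}
which is indeed a short root.
\end{example}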
For $\alpha,\beta \in \roots$ two roots, we write $\alpha \geq \beta$ if $\alpha - \beta$ is a non-negative linear combination of simple roots.
\begin{theorem}
Assume that $\mathrm{G}$ is not simply laced and let $\alpha \in \shortroots$. Then
$$h \star_{0} \sigma_\alpha = h \cup \sigma_\alpha + \left\{ \begin{array}{ll}
q \sigma_{\alpha + \Theta} & \textrm{if $\alpha \leq - \delta$}, \\
0 & \textrm{otherwise}. \\
\end{array}
\right.$$
\end{theorem}
\subsection{Coadjoint varieties for non simply-laced groups}
\label{subsection:non-simply-laced-types}
In this subsection we describe coadjoint varieties for non simply-laced groups as hyperplane sections of certain rational projective homogeneous spaces $Y$ that are homogeneous under the action of a simply laced group $\hat{\mathrm{G}}$. We use this description in the next subsection to describe the Fano variety $\F(X)$ of lines in $X$ and compute Gromov--Witten invariants.
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety of a non simply laced type,
i.e. of Dynkin type $\mathrm{B}_n$, $\mathrm{C}_n$, $\mathrm{F}_4$ or $\mathrm{G}_2$. Our approach here is based on the observation that $X$ is a hyperplane section of a homogeneous variety
$Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$, with $\hat{\mathrm{G}}$ of simply laced Dynkin type. Here is an explicit list of such pairs\footnote{Note that also the flag variety $\Fl(1,n;n+1) \subset {\mathbb P}^n \times {\mathbb P}^n$ can be realized this way via folding, but we do not need this case here.}. This can be realised in a uniform way using Jordan algebras but we will not need this here. We refer to \cite[Section 4]{LaMa01} for an account on this.
\medskip
\begin{center}
\begin{tabular}{cc}
\hline
$X$ & $Y$ \\
\hline
$\mathrm{B}_n/\mathrm{P}_1 = \Q_{2n-1}$ & $\mathrm{D}_{n+1}/\mathrm{P}_1 = \Q_{2n}$ \\
$\mathrm{C}_n/\mathrm{P}_2 = \IG(2,2n)$ & $\mathrm{A}_{2n-1}/\mathrm{P}_2 = \G(2,2n)$ \\
$\mathrm{F}_4/\mathrm{P}_4$ & $\mathrm{E}_6/\mathrm{P}_1$ \\
$\mathrm{G}_2/\mathrm{P}_2 = \Q_5$ & $\mathrm{D}_4/\mathrm{P}_1 = \Q_6$ \\
\hline
\end{tabular}
\medskip
\centering
{ Table 3. Coadjoint varieties for non simply-laced groups.}
\end{center}
\medskip
Let us make this correspondence more precise. Let $Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ be one of the four homogeneous spaces in the second column of the table above. Note that $\hat{\mathrm{P}}$ is a maximal parabolic subgroup of $\hat{\mathrm{G}}$. Consider the Pl\"ucker embedding of $Y$:
\begin{equation*}
Y \subset {\mathbb P}(V^\vee),
\end{equation*}
where $V = H^0(Y, \mathcal{O}_Y(1))$ with ${{\rm Pic}}(Y) = \Z\mathcal{O}_Y(1)$. The vector space $V$ is a fundamental representation of $\hat{\mathrm{G}}$ and the group $\mathrm{G}$ can be identified with the stabilizer of a general point $v_0 \in {\mathbb P}(V)$ (see \cite{LaMa01} for example). Thus, we have a natural inclusion $\mathrm{G} \hookrightarrow \hat{\mathrm{G}}$ such that $\mathrm{P} = \hat{\mathrm{P}}\cap \mathrm{G}$. The element $v_0 \in {\mathbb P}(V)$ defines a hyperplane $H_{v_0}$ in ${\mathbb P}(V^\vee)$ and we have
\begin{equation*}
X = \mathrm{G}/\mathrm{P} = Y \cap H_{v_0} \hookrightarrow Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}.
\end{equation*}
The inclusion $\mathrm{G} \subset \hat{\mathrm{G}}$ corresponds at the level of Dynkin diagrams to the operation called \textsf{folding}. For more on foldings we refer to \cite{Hel} and \cite{Kac}. We recall that a folding is given by a finite automorphism $\sigma$ of the Dynkin diagram of $\hat{\mathrm{G}}$ such that the nodes in any orbit of $\sigma$ are not connected. There are only a few such automorphisms, all of them involutions except in type $\mathrm{D}_4$ where there is an automorphism of order $3$ called triality (see below). The automorphism $\sigma$ induces a group automorphism of $\hat{\mathrm{G}}$ (still denoted by $\sigma$) and the group $\mathrm{G}$ is (up to a finite group) the group $\hat{\mathrm{G}}^\sigma$ of $\sigma$-invariant elements. We list the possible foldings:
\begin{center}
\begin{tabular}{ccc}
\hline
Type of $\hat{\mathrm{G}}$ & Type of $\mathrm{G}$ & Order of $\sigma$ \\
\hline
$\mathrm{A}_{2n-1}$ & $\mathrm{C}_n$ & $2$ \\
$\mathrm{D}_4$ & $\mathrm{G}_2$ & $3$ \\
$\mathrm{D}_{n+1}$ & $\mathrm{B}_n$ & $2$ \\
$\mathrm{E}_6$ & $\mathrm{F}_4$ & $2$ \\
\hline
\end{tabular}
\medskip
\centering
{ Table 4. Foldings.}
\end{center}
Note that we recover the pairs $(X,Y)$ given above describing coadjoint varieties for non simply-laced groups as hyperplane sections of homogeneous spaces.
The Dynkin diagram of $\mathrm{G}$ is easily deduced from the diagram of $\hat{\mathrm{G}}$ and the automorphism. The vertices of the Dynkin diagram of $\mathrm{G}$ are given by the orbits of $\sigma$ in the Dynkin diagram of $\hat{\mathrm{G}}$. If an orbit of $\sigma$ in the Dynkin diagram of $\hat{\mathrm{G}}$ contains a unique element, the associated node in the Dynkin diagram of $\mathrm{G}$ corresponds to a long root, while each non trivial orbit of nodes gives rise to a node corresponding to a short root. The ratio of root lengths between long and short roots is given by the number of elements in the non trivial orbits (or equivalently the order of $\sigma$):
$$\frac{\textrm{length of long roots of $\mathrm{G}$}}{\textrm{length of short roots of $\mathrm{G}$}} = \textrm{order of } \sigma.$$
The nodes in the Dynkin diagram of $\mathrm{G}$ are connected if the corresponding orbits were connected in the Dynkin diagram of $\hat{\mathrm{G}}$.
By this description, there is a surjective map of sets of simple roots
\begin{equation*}
\pi \colon \simpleroots_{\hat{\mathrm{G}}} \to \simpleroots_{\mathrm{G}},
\end{equation*}
satisfying some additional assumptions as in \cite[p.18]{FoGo}. The above description can be summarised in the following formula:
\begin{equation}
\label{eq:root-coroot}
\scal{\pi(\hat{\alpha})^\vee,\pi(\hat{\beta})} = \sum_{\hat{\gamma} \in \pi^{-1}(\pi(\hat{\alpha}))} \scal{\hat{\gamma}^\vee,\hat{\beta}}.
\end{equation}
\begin{example}
For the pair $\mathrm{F}_4 \subset \mathrm{E}_6$ we have
\begin{equation*}
\label{diagram:folding-E6-F4}
\vcenter{\xymatrix@R=3ex{
\dynkin[edge length = 2em,labels*={1,...,6}]E6
\ar@{=>}[rr]^{\pi} &&
\dynkin[edge length = 2em,labels*={2,4,{3,5},{1,6}}]F4
}}
\end{equation*}
\end{example}
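\begin{example}
Similarly, for the pair $\mathrm{C}_n \subset \mathrm{A}_{2n-1}$, the automorphism $\sigma$ exchanges $\hat{\alpha}_i$ and $\hat{\alpha}_{2n-i}$ and we have
\begin{equation*}
\pi(\hat{\alpha}_i) = \pi(\hat{\alpha}_{2n-i}) = \alpha_i \quad \text{for } i \in [1,n-1] \qquad \text{and} \qquad \pi(\hat{\alpha}_n) = \alpha_n.
\end{equation*}
The fixed node $\hat{\alpha}_n$ gives the long simple root $\alpha_n$ of $\mathrm{C}_n$, while the orbits $\{\hat{\alpha}_i,\hat{\alpha}_{2n-i}\}$ give the short simple roots.
\end{example}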
One can check that the group $\hat{\mathrm{G}}$ admits a $\sigma$-stable maximal torus
$\hat{\mathrm{T}}$ such that $\mathrm{T} := (\hat{\mathrm{T}}^\sigma)^\circ$ is a maximal torus of
$\mathrm{G}$. In particular $\sigma$ acts on $\roots_{\hat{\mathrm{G}}}$ the root system of
$\hat{\mathrm{G}}$ and stabilises $\simpleroots_{\hat{\mathrm{G}}}$. We recover this way the action of
$\sigma$ on the Dynkin diagram of $\hat{\mathrm{G}}$. We have $\roots_{\mathrm{G}} = \roots_{\hat{\mathrm{G}}}/\langle \sigma \rangle$
and $\simpleroots_{\mathrm{G}} = \simpleroots_{\hat{\mathrm{G}}}/\langle \sigma \rangle$. The roots of $\mathrm{G}$ are
obtained as restriction of the roots of $\hat{\mathrm{G}}$ to $\mathrm{T}_\mathrm{G}$. The quotient map
$\roots_{\hat{\mathrm{G}}} \to \roots_\mathrm{G}$ is therefore given by $\hat{\alpha} \mapsto \hat{\alpha}\vert_{\mathrm{T}_\mathrm{G}}$.
Note that no root of $\hat{\mathrm{G}}$ has a trivial restriction so that
$C_{\hat{\mathrm{G}}}(\mathrm{T}_\mathrm{G}) = C_{\hat{\mathrm{G}}}(\mathrm{T}_{\hat{\mathrm{G}}}) = \mathrm{T}_{\hat{\mathrm{G}}}$.
More generally, we have a restriction map from weights of $\hat{\mathrm{G}}$ to weights
of $\mathrm{G}$ given by $\hat{\lambda} \mapsto \hat{\lambda}\vert_{\mathrm{T}_\mathrm{G}}$. On the
coroot level, the map goes in the other direction: for any root $\alpha \in \roots_\mathrm{G}$,
there is a map $\alpha^\vee \mapsto \check{\alpha}^\vee$ where
$\check{\alpha}^\vee : \mathbb{G}_m \to \mathrm{T}_{\hat{\mathrm{G}}}$ is the composition of
$\alpha^\vee : \mathbb{G}_m \to \mathrm{T}_\mathrm{G}$ with the inclusion $\mathrm{T}_\mathrm{G} \subset \mathrm{T}_{\hat{\mathrm{G}}}$.
\begin{lemma}
For $\alpha \in \simpleroots_\mathrm{G}$, we have $\check{\alpha}^\vee = \sum_{\hat{\beta} \in \pi^{-1}(\alpha)} \hat{\beta}^\vee.$
\end{lemma}
\begin{proof}
Let $\hat{\gamma} \in \simpleroots_{\hat{\mathrm{G}}}$ be a simple root of $\hat{\mathrm{G}}$. By definition of the pairing between weights and coroots, we have
$$\scal{\check{\alpha}^\vee,\hat{\gamma}} = \scal{\alpha^\vee,\hat{\gamma}\vert_{\mathrm{T}_\mathrm{G}}} = \scal{\alpha^\vee,\pi(\hat{\gamma})}.$$
The result follows by applying equation \eqref{eq:root-coroot}.
\end{proof}
\begin{corollary}
Let $\hat{\varpi}_{\hat{\alpha}}$ be the fundamental weight associated to the simple root $\hat{\alpha} \in \simpleroots_{\hat{\mathrm{G}}}$. Then $\hat{\varpi}_{\hat{\alpha}}\vert_{\mathrm{T}_\mathrm{G}} = \varpi_{\pi(\hat{\alpha})}$ is the fundamental weight associated to the simple root $\pi(\hat{\alpha}) \in \simpleroots_\mathrm{G}$.
\end{corollary}
\begin{lemma}
Let $\hat{\mathrm{G}}$, $\sigma$, $\mathrm{G}$, $\mathrm{T}_{\hat{\mathrm{G}}}$ and $\mathrm{T}_{\mathrm{G}}$ be as above.
\begin{enumerate}
\item $\sigma$ acts on the Weyl group $\mathrm{W}_{\hat{\mathrm{G}}}$ of the pair $(\hat{\mathrm{G}},\mathrm{T}_{\hat{\mathrm{G}}})$.
\item We have $\mathrm{W}_\mathrm{G} = (\mathrm{W}_{\hat{\mathrm{G}}})^\sigma$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) For $g \in N_{\hat{\mathrm{G}}}(\mathrm{T}_{\hat{\mathrm{G}}})$ and $t \in \mathrm{T}_{\hat{\mathrm{G}}}$, we have $\sigma(g)t \sigma(g)^{-1} = \sigma(g\sigma^{-1}(t)g^{-1}) \in \mathrm{T}_{\hat{\mathrm{G}}}$ since $\sigma$ stabilises $\mathrm{T}_{\hat{\mathrm{G}}}$.
(2) For $g \in N_{\hat{\mathrm{G}}}(\mathrm{T}_{\hat{\mathrm{G}}})$ such that $\sigma(g) = gt$ for $t \in \mathrm{T}_{\hat{\mathrm{G}}}$ and for $t_0 \in \mathrm{T}_{\mathrm{G}}$, we have $g t_0 g^{-1} \in \mathrm{T}_{\hat{\mathrm{G}}}$ and $\sigma(gt_0g^{-1}) = gt_0g^{-1}$ thus $g \in N_{\mathrm{G}}(\mathrm{T}_\mathrm{G})$ proving that we have $\mathrm{W}_{\hat{\mathrm{G}}}^\sigma \subset \mathrm{W}_\mathrm{G}$.
Conversely, let $g \in N_{\mathrm{G}}(\mathrm{T}_\mathrm{G})$, then $g \mathrm{T}_{\hat{\mathrm{G}}} g^{-1}$ is a maximal torus of $\hat{\mathrm{G}}$ containing $\mathrm{T}_\mathrm{G}$ and therefore centralising $\mathrm{T}_\mathrm{G}$. But $C_{\hat{\mathrm{G}}}(\mathrm{T}_\mathrm{G}) = \mathrm{T}_{\hat{\mathrm{G}}}$ thus $g \mathrm{T}_{\hat{\mathrm{G}}} g^{-1} \subset \mathrm{T}_{\hat{\mathrm{G}}}$ thus $g \in N_{\hat{\mathrm{G}}}(\mathrm{T}_{\hat{\mathrm{G}}}) \cap \mathrm{G}$ proving the converse inclusion.
\end{proof}
\begin{remark}
The above inclusion of $\mathrm{W}_\mathrm{G}$ in $\mathrm{W}_{\hat{\mathrm{G}}}$ as the set of $\sigma$-invariants is given as follows on simple roots:
$$s_\alpha \mapsto \prod_{\hat{\alpha} \in \pi^{-1}(\alpha)} s_{\hat{\alpha}}.$$
Note that the product is independent of the order chosen in $\pi^{-1}(\alpha)$ since all the roots in $\pi^{-1}(\alpha)$ are pairwise orthogonal.
\end{remark}
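\begin{example}
For the pair $\mathrm{C}_n \subset \mathrm{A}_{2n-1}$, the above inclusion reads $s_{\alpha_i} \mapsto s_{\hat{\alpha}_i}s_{\hat{\alpha}_{2n-i}}$ for $i \in [1,n-1]$ and $s_{\alpha_n} \mapsto s_{\hat{\alpha}_n}$. Identifying $\mathrm{W}_{\mathrm{A}_{2n-1}}$ with the symmetric group $\mathfrak{S}_{2n}$, the subgroup $\mathrm{W}_{\mathrm{C}_n} = (\mathrm{W}_{\mathrm{A}_{2n-1}})^\sigma$ is the centraliser of the involution $j \mapsto 2n+1-j$, that is, the group of signed permutations, of order $2^n n!$.
\end{example}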
\begin{definition}
Let $\hat{\mathrm{G}}$, $\hat{\mathrm{P}}$, $\sigma$, $\mathrm{G}$ and $\mathrm{P}$ be as above.
\begin{enumerate}
\item The inclusion map $\mathrm{W}_\mathrm{G} \to \mathrm{W}_{\hat{\mathrm{G}}}$ will be denoted by $\pi_*$.
\item For $w \in \mathrm{W}_{\mathrm{G}}^\mathrm{P}$, we set $w_* = \pi_*(w)^{\hat{\mathrm{P}}} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:w_*}
Let $X = \mathrm{G}/\mathrm{P}$ be the coadjoint variety for a non simply-laced group $\mathrm{G}$ and let $Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ be the $\hat{\mathrm{G}}$-homogeneous variety such that $X$ is a general hyperplane section in $Y$. Let $w \in \mathrm{W}_{{\mathrm{G}}}^{{\mathrm{P}}}$ be such that $\ell(w) \leq \dim X/2$ and let
$$w = s_{\alpha_{i_1}} \cdots s_{\alpha_{i_r}}$$
be a reduced expression for $w$. For each $k \in [1,r]$, there exists a unique simple root $\hat{\beta}_k \in \pi^{-1}(\alpha_{i_k})$ such that $w_*$ has a reduced expression of the form:
$$w_* = s_{\hat{\beta}_1} \cdots s_{\hat{\beta}_r} \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}.$$
In particular, we have $\ell(w_*) = \ell(w)$.
\end{lemma}
\begin{proof}
Recall the following result: for $v_* \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$ and $\hat{\beta} \in \simpleroots_{\hat{\mathrm{G}}}$ a simple root, we have $s_{\hat{\beta}} v_* \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$ and $\ell(s_{\hat{\beta}}v_*) = \ell(v_*) + 1$ if and only if $(v_*)^{-1}(\hat{\beta})$ is a positive root which is not a root of $\mathrm{L}_{\hat{\mathrm{P}}}$ the Levi subgroup of $\hat{\mathrm{P}}$.
We proceed by induction on $\ell(w)$, so we may assume that the result holds for $v = s_{\alpha_{i_2}} \cdots s_{\alpha_{i_r}}$ and prove it for $w$. There therefore exist simple roots $\hat{\beta}_{i_k} \in \pi^{-1}(\alpha_{i_k})$ for all $k \in [2,r]$ such that $v_* = s_{\hat{\beta}_{i_2}} \cdots s_{\hat{\beta}_{i_r}}$ is a reduced expression. We thus want to prove that there exists a unique root $\hat{\beta} \in \pi^{-1}(\alpha_{i_1})$ such that $s_{\hat{\beta}} v_* \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$. Let $\hat{\varpi}$ be the fundamental weight associated to $\hat{\mathrm{P}}$; we therefore need to prove that there exists a unique root $\hat{\beta} \in \pi^{-1}(\alpha_{i_1})$ such that $\scal{\hat{\beta}^\vee,v_*(\hat{\varpi})} > 0$.
Let $\varpi$ be the fundamental weight associated to the maximal parabolic subgroup $\mathrm{P}$. We have $\varpi = \hat{\varpi}\vert_{\mathrm{T}_\mathrm{G}}$. Furthermore, we have
$$v_*(\hat{\varpi})\vert_{\mathrm{T}_\mathrm{G}} = s_{\hat{\beta}_{i_2}} \cdots s_{\hat{\beta}_{i_r}}(\hat{\varpi})\vert_{\mathrm{T}_\mathrm{G}} = s_{\alpha_{i_2}} \cdots s_{\alpha_{i_r}}(\hat{\varpi}\vert_{\mathrm{T}_\mathrm{G}}) = v(\varpi).$$
We will use $\lambda$-minuscule elements. Recall that for a weight $\lambda$ of $\mathrm{G}$, an element $u \in \mathrm{W}_\mathrm{G}$ is called $\lambda$-minuscule if there exists a reduced expression $u = s_{\alpha_{i_1}} \cdots s_{\alpha_{i_r}}$ such that $s_{\alpha_{i_k}} \cdots s_{\alpha_{i_r}}(\lambda) = \lambda - (\alpha_{i_r} + \cdots + \alpha_{i_k})$ for all $k \in [1,r]$. If $\lambda = \varpi$ is the fundamental weight associated to the maximal parabolic subgroup $\mathrm{P}$, then it is a general fact that $\varpi$-minuscule elements are in $\mathrm{W}^\mathrm{P}_\mathrm{G}$. By \cite[Proposition 2.11]{ChPe}, we have
$$\{ \textrm{$\varpi$-minuscule elements in $\mathrm{W}_{\mathrm{G}}$} \} = \{ u \in \mathrm{W}^\mathrm{P}_\mathrm{G} \ | \ \ell(u) \leq \dim X/2 \}.$$
In particular $w$ is $\varpi$-minuscule so that we have the equalities
$$\langle \alpha_{i_1}^\vee , s_{\alpha_{i_{2}}} \cdots s_{\alpha_{i_r}}(\varpi) \rangle = \langle \alpha_{i_1}^\vee , v(\varpi) \rangle = 1.$$
We deduce the following formulas
$$\sum_{\hat{\beta} \in \pi^{-1}(\alpha_{i_1})} \scal{\hat{\beta}^\vee,v_*(\hat{\varpi})} = \scal{\check{\alpha}_{i_1}^\vee,v_*(\hat{\varpi})} = \scal{\alpha_{i_1}^\vee,v_*(\hat{\varpi})\vert_{\mathrm{T}_\mathrm{G}}} = \scal{\alpha_{i_1}^\vee,v(\varpi)} = 1.$$
The fundamental weight $\hat{\varpi}$ associated to $\hat{\mathrm{P}}$ is a minuscule weight: $\langle \hat{\alpha}^\vee , \hat{\varpi} \rangle \in \{ -1,0,1\}$ for all $\hat{\alpha} \in \roots_{\hat{\mathrm{G}}}$. This in particular implies that we have $\scal{\hat{\beta}^\vee,v_*(\hat{\varpi})} = \scal{(v_*)^{-1}(\hat{\beta})^\vee,\hat{\varpi}} \in \{-1,0,1\}$. If $\sigma$ has order $2$, then the only possibility is that there is a unique $\hat{\beta} \in \pi^{-1}(\alpha_{i_1})$ with $\scal{\hat{\beta}^\vee,v_*(\hat{\varpi})} = 1$ and $\scal{\hat{\gamma}^\vee,v_*(\hat{\varpi})} = 0$ for the other element $\hat{\gamma} \in \pi^{-1}(\alpha_{i_1})$. This finishes the proof for $\sigma$ of order $2$.
If $\sigma$ has order $3$, there is also the possibility that two elements $\hat{\beta},\hat{\gamma} \in \pi^{-1}(\alpha_{i_1})$ are such that $\scal{\hat{\beta}^\vee,v_*(\hat{\varpi})} = 1 = \scal{\hat{\gamma}^\vee,v_*(\hat{\varpi})}$ while the third element $\hat{\delta} \in \pi^{-1}(\alpha_{i_1})$ satisfies $\scal{\hat{\delta}^\vee,v_*(\hat{\varpi})} = -1$. But in this case $\hat{\mathrm{G}}$ is of type $\mathrm{D}_4$ and we may assume that $\hat{\varpi}$ is associated to the first simple root; an easy check proves that this never occurs for $v_* \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$.
\end{proof}
\begin{example}
\label{ex:f4-e6-1}
Let us consider the case $X = \mathrm{F}_4/\mathrm{P}_4$ and $Y = \mathrm{E}_6/\mathrm{P}_1$. Then we have
\begin{equation*}
\pi({\hat{\alpha}}_1) = \pi({\hat{\alpha}}_6) = \alpha_4, \quad \pi({\hat{\alpha}}_3) = \pi({\hat{\alpha}}_5) = \alpha_3,
\quad \pi({\hat{\alpha}}_2) = \alpha_1, \quad \text{and} \quad \pi({\hat{\alpha}}_4) = \alpha_2.
\end{equation*}
For $w = s_{\alpha_4}s_{\alpha_2}s_{\alpha_3}s_{\alpha_1}s_{\alpha_2}s_{\alpha_3}s_{\alpha_4} \in \mathrm{W}_{\mathrm{F}_4}^{\mathrm{P}_4}$,
we have
\begin{equation*}
\pi_*(w) = s_{{\hat{\alpha}}_1}s_{{\hat{\alpha}}_6}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_2}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_5} s_{{\hat{\alpha}}_1}s_{{\hat{\alpha}}_6} \in \mathrm{W}_{\mathrm{E}_6}
\end{equation*}
and
\begin{equation*}
w_* = s_{{\hat{\alpha}}_6}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_2}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3} s_{{\hat{\alpha}}_1} \in \mathrm{W}_{\mathrm{E}_6}^{\mathrm{P}_1}.
\end{equation*}
Note that $\ell(\pi_*(w)) > \ell(w_*) = \ell(w)$.
\end{example}
Note that if $\hat{\alpha},\hat{\beta}$ are simple roots of $\hat{\mathrm{G}}$ with $s_{\hat{\alpha}} s_{\hat{\beta}} = s_{\hat{\beta}} s_{\hat{\alpha}}$, then $s_{\pi(\hat{\alpha})} s_{\pi(\hat{\beta})} = s_{\pi(\hat{\beta})} s_{\pi(\hat{\alpha})}$. This implies the following result.
\begin{lemma}
Let $\mathrm{G} \subset \hat{\mathrm{G}}$ as above and let $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}$.
Assume that $\hat{w}$ has a unique reduced expression $\hat{w} = s_{\hat{\alpha}_{i_1}} \cdots s_{\hat{\alpha}_{i_r}}$ up to commuting relations. Then there is a well defined element $\pi^*(\hat{w}) \in \mathrm{W}_\mathrm{G}$ given by
$$ \pi^*(\hat{w}) = s_{\pi(\hat{\alpha}_{i_1})} \cdots s_{\pi(\hat{\alpha}_{i_r})}.$$
\end{lemma}
\begin{proof}
The element $\pi^*(\hat{w})$ is well defined up to commuting relations in $\hat{w}$. But simple reflections commuting in $\mathrm{W}_{\hat{\mathrm{G}}}$ also commute in $\mathrm{W}_\mathrm{G}$.
\end{proof}
Recall the following result from \cite[Proposition 2.1]{Ste}.
\begin{lemma}
Let $X = \mathrm{G}/\mathrm{P}$ be the coadjoint variety for a non simply-laced group $\mathrm{G}$ and let $Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ be the $\hat{\mathrm{G}}$-homogeneous variety such that $X$ is a general hyperplane section in $Y$.
Any element $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$ has a unique reduced expression up to commuting relations. In particular, the element $\pi^*(\hat{w}) \in \mathrm{W}_{\mathrm{G}}$ is well defined.
\end{lemma}
\begin{proof}
Recall the definition of $\lambda$-minuscule elements (see also \cite{Ste} or \cite{ChPe}). Any $\lambda$-minuscule element has a unique reduced expression up to commuting relations (see \cite[Proposition 2.1]{Ste}). Since $Y$ is minuscule, all elements in $\mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$ are $\hat{\varpi}$-minuscule for the minuscule weight $\hat{\varpi}$.
\end{proof}
\begin{definition}
For $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$, set $\hat{w}^* := \pi^*(\hat{w})^\mathrm{P} \in \mathrm{W}_{\mathrm{G}}^\mathrm{P}$.
\end{definition}
\begin{example}
The maps $\pi^*$ and $\hat{w} \mapsto \hat{w}^*$ do not in general preserve the length and
are not injective. Indeed, let us consider the case $X = \mathrm{F}_4/\mathrm{P}_4$ and
$Y = \mathrm{E}_6/\mathrm{P}_1$ as in Example \ref{ex:f4-e6-1}.
For the minimal coset representative $\hat{w} \in \mathrm{W}_{\mathrm{E}_6}^{\mathrm{P}_1}$ given by
\begin{equation*}
\hat{w} = s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_6}s_{{\hat{\alpha}}_1}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_2}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_1}
\end{equation*}
we have
\begin{equation*}
\pi^*(\hat{w}) = s_{\alpha_3}s_{\alpha_4}s_{\alpha_4}s_{\alpha_3}s_{\alpha_2}s_{\alpha_3}s_{\alpha_1}s_{\alpha_2}s_{\alpha_3}s_{\alpha_4}
= s_{\alpha_2}s_{\alpha_1}s_{\alpha_3}s_{\alpha_2}s_{\alpha_3}s_{\alpha_4} = \hat{w}^*.
\end{equation*}
Thus, we see that the length is not preserved in general.
To see the non-injectivity we take
\begin{equation*}
\hat{u} = s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_6}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_2}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_1}
\quad \text{and} \quad
\hat{v} = s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_6}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_5}s_{{\hat{\alpha}}_2}s_{{\hat{\alpha}}_4}s_{{\hat{\alpha}}_3}s_{{\hat{\alpha}}_1},
\end{equation*}
and obtain
\begin{equation*}
\pi^*(\hat{u}) = \hat{u}^* = s_{\alpha_3}s_{\alpha_4}s_{\alpha_2}s_{\alpha_3}s_{\alpha_1}s_{\alpha_2}s_{\alpha_3}s_{\alpha_4} = \hat{v}^* = \pi^*(\hat{v}).
\end{equation*}
\end{example}
\bigskip
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety for a non simply-laced group $\mathrm{G}$ and let $Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ be the $\hat{\mathrm{G}}$-homogeneous variety such that $X$ is a general hyperplane section in $Y$.
\begin{lemma}
\label{lemma:pullback-pushforward-weyl-group-elements}
Let $w \in \mathrm{W}_{{\mathrm{G}}}^{{\mathrm{P}}}$ such that $\ell(w) \leq \dim X/2$. We have $\pi^*(w_*) = (w_*)^* = w$.
\end{lemma}
\begin{proof}
Let $w = s_{\alpha_{i_1}} \cdots s_{\alpha_{i_r}}$ be a reduced expression for $w$. By Lemma \ref{lem:w_*}, there exists, for each $k \in [1,r]$, a unique simple root $\hat{\beta}_k \in \pi^{-1}(\alpha_{i_k})$ such that $w_*$ has a reduced expression of the form $w_* = s_{\hat{\beta}_1} \cdots s_{\hat{\beta}_r} \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$. We thus have $\pi^*(w_*) = s_{\pi(\hat{\beta}_1)} \cdots s_{\pi(\hat{\beta}_r)} = s_{\alpha_{i_1}} \cdots s_{\alpha_{i_r}} = w \in \mathrm{W}^\mathrm{P}_\mathrm{G}$. Therefore $(w_*)^* = \pi^*(w_*) = w$.
\end{proof}
Let $X = \mathrm{G}/\mathrm{P}$ be a coadjoint variety for a non simply-laced group $\mathrm{G}$ and let $Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ be the $\hat{\mathrm{G}}$-homogeneous variety such that $X$ is a general hyperplane section in $Y$. Let $j : X \to Y$ be the inclusion. There exists $\hat{\mathrm{B}} \subset \hat{\mathrm{G}}$ a Borel subgroup of $\hat{\mathrm{G}}$ containing $\mathrm{T}_{\hat{\mathrm{G}}}$ and contained in $\hat{\mathrm{P}}$ which is $\sigma$-stable. In that case, we have that $\mathrm{B} := (\mathrm{G} \cap \hat{\mathrm{B}})^\circ$ is a Borel subgroup of $\mathrm{G}$ containing $\mathrm{T}_\mathrm{G}$ and contained in $\mathrm{P}$. Let $\hat{\mathrm{B}}^-$ be the opposite Borel subgroup in $\hat{\mathrm{G}}$ with respect to $\mathrm{T}_{\hat{\mathrm{G}}}$. For $\hat{w} \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$, we define $Y_{\hat{w}}$ as the closure of $\hat{\mathrm{B}}.\hat{w}\hat{\mathrm{P}} \subset Y$ and $Y^{\hat{w}}$ as the closure of $\hat{\mathrm{B}}^-.\hat{w}\hat{\mathrm{P}} \subset Y$. We set $\sigma_Y^{\hat{w}} = [Y^{\hat{w}}] \in H^{2\ell(\hat{w})}(Y,{\mathbb Z})$.
\begin{proposition}
\label{coho-coadj-nsl}
Let $k \leq\dim X/2$.
\begin{enumerate}
\item The map $j_* : H_{2k}(X,{\mathbb Z}) \to H_{2k}(Y,{\mathbb Z})$ is an isomorphism.
\item The map $j^* : H^{2k}(Y,{\mathbb Z}) \to H^{2k}(X,{\mathbb Z})$ is an isomorphism.
\item For $w \in \mathrm{W}_\mathrm{G}^\mathrm{P}$ with $\ell(w) = k$, we have $j(X_w) = Y_{w_*}$.
\item For $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$ with $\ell(\hat{w}) = k$, we have $j^*(\sigma_Y^{\hat{w}}) = \sigma_X^{\hat{w}^*}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Parts (1) and (2) follow from the Lefschetz theorem.
(3) We have $j(w.\mathrm{P}) = \pi_*(w).\hat{\mathrm{P}} = w_*.\hat{\mathrm{P}}$. In particular, since $\mathrm{B} \subset\hat{\mathrm{B}}$, we have $j(\mathrm{B} w.\mathrm{P}) \subset Y_{w_*}$ and taking the closures, we get $j(X_w) \subset Y_{w_*}$. But since $w \in \mathrm{W}^\mathrm{P}_\mathrm{G}$ and $w_* \in \mathrm{W}^{\hat{\mathrm{P}}}_{\hat{\mathrm{G}}}$ and by Lemma \ref{lem:w_*}, we have $\dim X_w = \ell(w) = \ell(w_*) = \dim Y_{w_*}$ proving the result.
(4) Let $w \in \mathrm{W}^\mathrm{P}_\mathrm{G}$ be such that $w_* = \hat{w}$. This is possible by (1) and (3). We have $j_*[X_w] = [Y_{w_*}]$. Note that the map $j^*$ is the transpose of $j_*$ under Poincar\'e duality.
Furthermore, the bases $([X_u])_{u \in \mathrm{W}_\mathrm{G}^\mathrm{P}}$ and $([X^u])_{u \in \mathrm{W}_\mathrm{G}^\mathrm{P}}$ are Poincar\'e dual as well as the bases $([Y_{\hat{u}}])_{\hat{u} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}}$ and $([Y^{\hat{u}}])_{\hat{u} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}}$. The result follows from this and the fact that $w = (w_*)^* = \hat{w}^*$.
\end{proof}
\subsection{Fano variety of lines in the non-simply laced case}
We describe the Fano variety of lines for $X = \mathrm{G}/\mathrm{P}$ coadjoint and $\mathrm{G}$ non simply-laced. The embedding $j : X = \mathrm{G}/\mathrm{P} \to Y = \hat{\mathrm{G}}/\hat{\mathrm{P}}$ induces a closed embedding $\F(X) \subset \F(Y)$ for the Fano varieties of lines in $X$ and $Y$. For $x = 1.\mathrm{P} \in X = \mathrm{G}/\mathrm{P} \subset Y$, we get closed embeddings $ \F_x(X) \subset \F_x(Y)$.
The above discussion can be conveniently summarized in the commutative diagram
\begin{equation*}
\xymatrix{
\Z(X)\ar[dd]_{q_X}\ar[rr]^{p_X}\ar[rd] & & X \ar[d]^j \\
& \Z(Y) \ar[d]_{q_Y} \ar[r]^{p_Y} & Y \\
\F(X) \ar[r] & \F(Y) & \\
\F_x(X) \ar[u]^{i_X} \ar[r] & \F_x(Y) \ar[u]^{i_Y}
}
\end{equation*}
Let $Y \subset \mathbb{P}(V)$ be the Pl\"ucker embedding and set $E := (q_Y)_*(p_Y)^*\mathcal{O}_Y(1)$. Note that $E$ is the restriction to $\F(Y)$ of $\mathcal{U}^\vee$, where $\mathcal{U}$ is the tautological subbundle of rank $2$ on the Grassmannian $\G(2,V)$ of lines in $\mathbb{P}(V)$.
\begin{lemma}\label{lemma:fano-variey-of-lines-zero-locus}
Let $X \subset Y$ be defined by a general section $s \in H^0(Y,\mathcal{O}_Y(1))$.
\begin{enumerate}
\item The sheaf $E$ is a globally generated vector bundle of rank $2$ and $H^0(\F(Y),E) = H^0(Y,\mathcal{O}_Y(1))$.
\item The Fano variety $\F(X)$ is the vanishing locus of the general section $s \in H^0(\F(Y),E)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Since $E$ is the restriction of $\mathcal{U}^\vee$ to $\F(Y)$, $E$ is globally generated of rank $2$. We have $H^0(\F(Y),E) = H^0(\Z(Y),(p_Y)^*\mathcal{O}_Y(1)) = H^0(Y,(p_Y)_*(p_Y)^*\mathcal{O}_Y(1)) = H^0(Y,\mathcal{O}_Y(1))$.
(2) The locus of lines in $\mathbb{P}(V)$ contained in the zero locus of $s \in H^0(Y,\mathcal{O}_Y(1)) = V$ is the zero locus in $\G(2,V)$ of $s \in H^0(\G(2,V),\mathcal{U}^\vee)$ and the result follows by restriction to $Y$.
\end{proof}
In $\G(2,V)$, the lines passing through $x$ form a projective space isomorphic to $\mathbb{P}(V/\scal{x})$ and we have a natural inclusion $\F_x(Y) \subset \mathbb{P}(V/\scal{x})$. The restriction of $\mathcal{U}^\vee$ to $\mathbb{P}(V/\scal{x})$ decomposes as $\mathcal{O} \oplus \mathcal{O}(1)$ and, since $s$ vanishes at $x$, the section $s$ vanishes identically on the factor $\mathcal{O}$ and induces a section $s \in H^0(\mathbb{P}(V/\scal{x}),\mathcal{O}(1))$.
\begin{corollary}\label{lemma:variey-of-lines-through-a-point-zero-locus}
The Fano variety $\F_x(X)$ is the vanishing locus of the general section $s \in H^0(\F_x(Y),\mathcal{O}_{\F_x(Y)}(1))$.
\end{corollary}
We now prove formulas for computing Gromov--Witten invariants in $X$ of pull-backs of Schubert classes in $Y$. Note that using Proposition \ref{coho-coadj-nsl} this gives formulas for computing Gromov--Witten invariants of Schubert classes in $X$.
Recall that $\F(Y)$ is homogeneous of the form $\F(Y) = \hat{\mathrm{G}}/\hat{\mathrm{Q}}$ with $\hat{\mathrm{Q}}$ defined from $\hat{\mathrm{P}}$ as in Definition \ref{def-para-q}. Recall also that $\Z(Y)$ and $\F_x(Y)$ are homogeneous of the form $\Z(Y) = \hat{\mathrm{G}}/\hat{\mathrm{R}}$ with $\hat{\mathrm{R}} = \hat{\mathrm{P}} \cap \hat{\mathrm{Q}}$ and $\F_x(Y) = \hat{\mathrm{P}}/\hat{\mathrm{R}} = \hat{\mathrm{L}}/\hat{\mathrm{R}}_{\hat{\mathrm{L}}}$ where $\hat{\mathrm{L}}$ is the Levi subgroup of $\hat{\mathrm{P}}$ containing $\mathrm{T}_{\hat{\mathrm{G}}}$ and $\hat{\mathrm{R}}_{\hat{\mathrm{L}}} = \hat{\mathrm{L}} \cap \hat{\mathrm{R}}$.
Recall also from Proposition \ref{prop-res-fibre} that for $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$, we have $\hat{w} \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{R}}}$ if and only if $\hat{w} \leq w_{0,\hat{\mathrm{P}}}^{\hat{\mathrm{R}}}$. Denote by $\hat{\alpha}_{\hat{\mathrm{P}}}$ the simple root associated to $\hat{\mathrm{P}}$ and by $s_{\hat{\mathrm{P}}}$ the associated simple reflection in $\mathrm{W}_{\hat{\mathrm{G}}}$.
\begin{proposition}
\label{proposition:quantum-to-classical-non-simply-laced}
Let $\hat{w}_1, \dots, \hat{w}_n \in \mathrm{W}_{\hat{\mathrm{G}}}^{\hat{\mathrm{P}}}$.
\begin{enumerate}
\item Assume that for all $i \in [1,n]$, we have $\hat{w}_i \neq 1$.
Assume furthermore that we have
$$\sum_{i = 1}^n \ell(\hat{w}_i) = \dim X + c_1(X) + n - 3.$$
Then we have the equality
\begin{equation*}
\scal{j^*\sigma^{\hat{w}_1}_Y, \dots , j^*\sigma^{\hat{w}_n}_Y}_1^X =
\deg_{\F(Y)} \left( c_{top}(E) \cup \left( \bigcup_{i = 1}^n \sigma^{\hat{w}_is_{\hat{\mathrm{P}}}}_{\F(Y)} \right) \right).
\end{equation*}
If
$\hat{w}_i = 1$ for some $i \in [1,n]$, we have
the vanishing $\scal{j^*\sigma^{\hat{w}_1}_Y, \dots , j^*\sigma^{\hat{w}_n}_Y}_1^X = 0$.
\item Assume that for all $i \in [1,n]$, we have $\hat{w}_i \neq 1$ and $\hat{w}_i s_{\hat{\mathrm{P}}} \leq w^{\hat{\mathrm{R}}}_{0,\hat{\mathrm{P}}}$. Assume furthermore that we have
$$\sum_{i = 1}^n \ell(\hat{w}_i) = c_1(X) + n - 2.$$
Then we have the equality
\begin{equation*}
\scal{[{\rm pt}], j^*\sigma_Y^{\hat{w}_1}, \dots , j^*\sigma_Y^{\hat{w}_n}}_1^X =
\deg_{\F_x(Y)} \left( h_{\F_x(Y)} \cup \left( \bigcup_{i = 1}^n \sigma^{\hat{w}_is_{\hat{\mathrm{P}}}}_{\F_x(Y)} \right) \right).
\end{equation*}
If for some $i \in [1,n]$, we have $\hat{w}_i = 1$ or $\hat{w}_i s_{\hat{\mathrm{P}}} \not\leq w^{\hat{\mathrm{R}}}_{0,\hat{\mathrm{P}}}$, then we have the vanishing
$$\scal{[{\rm pt}], j^*\sigma_Y^{\hat{w}_1}, \dots , j^*\sigma_Y^{\hat{w}_n}}_1^X = 0.$$
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Note that if $\hat{w}_i = 1$, then $j^*\sigma_Y^{\hat{w}_i} = 1$ and by definition of Gromov--Witten invariants we get $\scal{j^*\sigma_Y^{\hat{w}_1}, \dots , j^*\sigma_Y^{\hat{w}_n}}_1^X = 0$. We may therefore assume that $\hat{w}_i \neq 1$, in which case $\sigma_Y^{\hat{w}_i} = [Y^{\hat{w}_i}]$ and, by Lemma \ref{lemm-weylPQR}, the map $q_Y : p_Y^{-1}(Y^{\hat{w}_i}) \to q_Yp_Y^{-1}(Y^{\hat{w}_i}) = \F(Y)^{\hat{w}_is_{\hat{\mathrm{P}}}}$ is birational and the codimension of $\F(Y)^{\hat{w}_is_{\hat{\mathrm{P}}}}$ in $\F(Y)$ is $\ell(\hat{w}_i) - 1$.
By Lemma \ref{lemma:quantum-to-classical}, the Gromov--Witten invariant $\scal{j^*\sigma_Y^{\hat{w}_1}, \dots , j^*\sigma_Y^{\hat{w}_n}}_1^X$ is equal to the number of points of the following finite reduced intersection in $\F(X)$:
$$g_1.q_Xp_X^{-1}(X \cap Y^{\hat{w}_1}) \cap \cdots \cap g_n.q_Xp_X^{-1}(X \cap Y^{\hat{w}_n})$$
where $g_1,\dots,g_n$ are general elements in $\mathrm{G}$. To get the formula we compute this intersection in $\F(Y)$. This intersection is equal to
$$g_1.q_Yp_Y^{-1}(Y^{\hat{w}_1}) \cap \cdots \cap g_n.q_Yp_Y^{-1}(Y^{\hat{w}_n}) \cap \F(X).$$
Since this intersection is finite and reduced and since the codimensions of these subvarieties sum up to $\dim\F(Y)$, the number of points in this intersection is equal to
$$ \deg_{\F(Y)} \left( [\F(X)] \cup \left( \bigcup_{i = 1}^n \sigma^{\hat{w}_is_{\hat{\mathrm{P}}}}_{\F(Y)} \right) \right).$$
The result follows since $\F(X)$ is the vanishing locus of a general section of $E$ which is globally generated.
(2) The proof follows the same lines as in (1). Note that we can again assume that $\hat{w}_i \neq 1$, in which case $\sigma_Y^{\hat{w}_i} = [Y^{\hat{w}_i}]$ and, by Lemma \ref{lemm-weylPQR}, the map $q_Y : p_Y^{-1}(Y^{\hat{w}_i}) \to q_Yp_Y^{-1}(Y^{\hat{w}_i}) = \F(Y)^{\hat{w}_is_{\hat{\mathrm{P}}}}$ is birational and the codimension of $\F(Y)^{\hat{w}_is_{\hat{\mathrm{P}}}}$ in $\F(Y)$ is $\ell(\hat{w}_i) - 1$.
Note that $q_Xp_X^{-1}(x) = \F_x(X)$. By Lemma \ref{lemma:quantum-to-classical}, the Gromov--Witten invariant $\scal{[{\rm pt}], j^*\sigma_Y^{\hat{w}_1}, \dots , j^*\sigma_Y^{\hat{w}_n}}_1^X$ is equal to the number of points of the following finite reduced intersection in $\F(X)$:
$$\F_x(X) \cap g_1.q_Xp_X^{-1}(X \cap Y^{\hat{w}_1}) \cap \cdots \cap g_n.q_Xp_X^{-1}(X \cap Y^{\hat{w}_n})$$
where $g_1,\dots,g_n$ are general elements in $\mathrm{G}$. To get the formula we compute this intersection in $\F(Y)$. This intersection is equal to
$$\F_x(Y) \cap g_1.q_Yp_Y^{-1}(Y^{\hat{w}_1}) \cap \cdots \cap g_n.q_Yp_Y^{-1}(Y^{\hat{w}_n}) \cap \F(X).$$
Note that by Proposition \ref{prop-res-fibre}, the intersection $\F_x(Y) \cap g_i q_Yp_Y^{-1}(Y^{\hat{w}_i})$ is empty for $\hat{w}_i s_{\hat{\mathrm{P}}} \not\leq w_{0,\hat{\mathrm{P}}}^{\hat{\mathrm{R}}}$. Since this intersection is finite and reduced and since the codimensions of these subvarieties sum up to $\dim\F(Y)$, the number of points in this intersection is equal to
$$ \deg_{\F(Y)} \left( [\F_x(Y)] \cup [\F(X)] \cup \left( \bigcup_{i = 1}^n \sigma^{\hat{w}_is_{\hat{\mathrm{P}}}}_{\F(Y)} \right) \right).$$
The result follows since $\F_x(X) = \F(X) \cap \F_x(Y)$ is the vanishing locus of a general hyperplane section in $\F_x(Y)$.
\end{proof}
\section{Type $\mathrm{D}_n$}
\label{section:type-D}
Let $\mathrm{G}$ be a semisimple group of type $\mathrm{D}_n$ and of adjoint type. We assume $n \geq 4$. Since $\mathrm{G}$ is simply laced, the adjoint and the coadjoint varieties agree. Let $X$ be this (co)adjoint variety; we have $X = \mathrm{G}/\mathrm{P}_2$ where $\mathrm{P}_2$ is the maximal parabolic subgroup of $\mathrm{G}$ associated to the simple root $\alpha_2$ with notation as in \cite{Bo}. This variety can be described as an isotropic Grassmannian of lines as follows.
Let $V$ be a $2n$-dimensional complex vector space endowed with a non-degenerate quadratic form $q$. Define the algebraic variety $\OG(2, V)$ parametrizing $2$-dimensional isotropic subspaces of $V$. Since different non-degenerate quadratic forms on $V$ give isomorphic isotropic Grassmannians, it is unambiguous to write $\OG(2,2n)$. We have $X \simeq \OG(2,2n)$ and $\mathrm{G} = \aut(X)$. We have $\dim(X) = 4n - 7 = 2r - 1$ with $r = {\rm Index}(X) = 2n - 3$.
\subsection{Borel presentation for $\OG(2,2n)$}
We will give an explicit description of Borel's presentation (see Subsection \ref{subsection:borel-presentation}). Consider the short exact sequence of vector bundles on $X=\OG(2, V)$
\begin{align}\label{Eq.: Tautological S.E.S I}
0 \to \mathcal{U} \to \mathcal{V} \to \mathcal{V}/\mathcal{U} \to 0,
\end{align}
where $\mathcal{V}$ is the trivial vector bundle with fiber $V$, where $\mathcal{U}$ is the subbundle of isotropic subspaces, and where $\mathcal{Q} := \mathcal{V}/\mathcal{U}$ is the quotient bundle. Usually one refers to $\mathcal{U}$ and $\mathcal{Q}$ as the \textit{tautological subbundle} and the \textit{tautological quotient bundle}, respectively.
One also defines a vector bundle $\mathcal{U}^\perp$ as the kernel of the composition $\mathcal{V} \overset{q}{\to} \mathcal{V}^\vee \to \mathcal{U}^\vee$, where the first morphism is the isomorphism induced by the quadratic form, and the second one is the dual of the natural inclusion $\mathcal{U} \to \mathcal{V}$. From the definition of $\mathcal{U}^\perp$ we immediately obtain an isomorphism
\begin{align}\label{Eq.: V/U^perp iso U^*}
\mathcal{V}/\mathcal{U}^\perp \simeq \mathcal{U}^\vee.
\end{align}
Further, we have inclusions of vector bundles $\mathcal{U} \subset \mathcal{U}^\perp \subset \mathcal{V}$, and can also consider the short exact sequence
\begin{align}\label{Eq.: Tautological S.E.S II}
0 \to \mathcal{U}^{\perp}/\mathcal{U} \to \mathcal{Q} = \mathcal{V}/\mathcal{U} \to \mathcal{V}/\mathcal{U}^\perp \to 0.
\end{align}
The vector bundle $\mathcal{Q} = \mathcal{V}/\mathcal{U}$ is of rank $2n-2$ and we denote its Chern classes by $\psi_k = c_k(\mathcal{Q})$. The vector bundle $\mathcal{V}/\mathcal{U}^\perp \simeq \mathcal{U}^\vee$ is of rank $2$ and we denote its Chern classes by $h = c_1(\mathcal{U}^\vee)$ and $p = c_2(\mathcal{U}^\vee)$. The vector bundle $\mathcal{U}^\perp/\mathcal{U}$ is of rank $2n-4$. Furthermore, it is self-dual (via the restriction of $q$), so only its even Chern classes are non-zero. We set $\Sigma_k = c_{2k}(\mathcal{U}^\perp/\mathcal{U})$.
From \eqref{Eq.: Tautological S.E.S II} and the multiplicativity of Chern polynomial on short exact sequences we obtain the equality
\begin{equation*}
c_t(\mathcal{Q}) = c_t(\mathcal{U}^\vee)c_t(\mathcal{U}^\perp/\mathcal{U}),
\end{equation*}
whose graded pieces imply the following
\begin{equation}\label{eq:psi-via-h-p-sigma}
\begin{aligned}
& \psi_{2j} = \Sigma_j + p \Sigma_{j-1} \quad \text{for} \quad j \in [1,n-1], \\
& \psi_{2j+1} = h\Sigma_j \quad \text{for} \quad j \in [0,n-2].
\end{aligned}
\end{equation}
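For instance, in low degrees the relations \eqref{eq:psi-via-h-p-sigma} read
\begin{equation*}
\psi_1 = h, \quad \psi_2 = \Sigma_1 + p, \quad \psi_3 = h\Sigma_1, \quad \psi_4 = \Sigma_2 + p\Sigma_1,
\end{equation*}
with the convention $\Sigma_0 = 1$.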
Let $\mathrm{T} \subset\mathrm{B} \subset \mathrm{P}$ be a maximal torus and a Borel subgroup contained in $\mathrm{P}$. The natural projection $\mathrm{G}/\mathrm{T} \to \mathrm{G}/\mathrm{B}$ induces an isomorphism $H^*(\mathrm{G}/\mathrm{B},{\mathbb Q}) \simeq H^*(\mathrm{G}/\mathrm{T},{\mathbb Q})$. Furthermore, the projection $\pi \colon \mathrm{G}/\mathrm{T} \to \mathrm{G}/\mathrm{P}$ induces the embedding of cohomology rings $\pi^* \colon H^*(\mathrm{G}/\mathrm{P},{\mathbb Q}) \to H^*(\mathrm{G}/\mathrm{T},{\mathbb Q})$.
Given a vector bundle $\mathcal{E}$ on $\mathrm{G}/\mathrm{P}$, its pull-back $\pi^*\mathcal{E}$ splits into a direct sum of line bundles $L_1, \dots, L_n$. Hence, in $H^*(\mathrm{G}/\mathrm{T},{\mathbb Q})$ we have the equality
\begin{equation*}
\pi^* c_t(\mathcal{E}) = c_t(L_1) \dots c_t(L_n).
\end{equation*}
The first Chern classes of the line bundles $L_i$ are called the \textsf{Chern roots of $\mathcal{E}$} and are given by the images of characters of $\mathrm{T}$ (or equivalently of $\mathrm{B}$) under the characteristic map (see Subsection \ref{subsection:borel-presentation}).
Let $x_1$ and $x_2$ be the Chern roots of $\mathcal{U}^\vee$, so that we have
\begin{equation*}
c_t(\mathcal{U}^\vee) = (1 + x_1 t)(1 + x_2t).
\end{equation*}
Further, due to the self-duality of $\mathcal{U}^\perp/\mathcal{U}$, it is possible to choose Chern roots $x_3, \dots, x_n$ of $\mathcal{U}^\perp/\mathcal{U}$ so that we have
\begin{equation*}
c_t(\mathcal{U}^\perp/\mathcal{U}) = (1 - x_3^2t^2) \cdots (1 - x_n^2t^2).
\end{equation*}
From the definitions of the classes $h, p, \Sigma_j$ we immediately obtain
\begin{equation}\label{eq:h-p-Sigma-via-chern-roots}
\begin{aligned}
& h = {\mathfrak{s}}_1(x_1,x_2) = x_1 + x_2, \\
& p = {\mathfrak{s}}_2(x_1,x_2) = x_1x_2, \\
& \Sigma_j = (-1)^j{\mathfrak{s}}_j(x_3^2,\cdots,x_n^2) \quad \text{for} \quad j \in [1,n-2],
\end{aligned}
\end{equation}
where ${\mathfrak{s}}_j$ is the $j$-th elementary symmetric polynomial. Note that an easy computation of the weights of the line bundles occurring in the decompositions of $\pi^*\mathcal{U}^\vee$ and $\pi^*(\mathcal{U}^\perp/\mathcal{U})$ shows that the Chern roots $x_1,x_2,x_3,\dots,x_n$ are the images under the characteristic map of the standard basis elements $\varepsilon_1,\varepsilon_2,\varepsilon_3,\dots,\varepsilon_n$ for the root system of $\mathrm{G}$ (see \cite[Planche IV, page 256]{Bo}).
Recall Borel's presentation (see Subsection \ref{subsection:borel-presentation}). Let $\Lambda = \mathrm{X}(\mathrm{T})$ be the group of characters of $\mathrm{T}$. Then we have an isomorphism $H^*(X,{\mathbb Q}) \simeq {\mathbb Q}[\Lambda]^{\mathrm{W}_{\mathrm{P}}}/{\mathbb Q}[\Lambda]^{\mathrm{W}}_+$. An explicit description of the invariants ${\mathbb Q}[\Lambda]^{\mathrm{W}_{\mathrm{P}}}$ appearing in Borel's presentation
can be found, for example, in \cite[p.~210]{Bo}. According to this description the ${\mathbb Q}$-algebra ${\mathbb Q}[\Lambda]^{\mathrm{W}_{\mathrm{P}}}$ is freely generated by the polynomials
\begin{equation*}
\begin{aligned}
& x_1 + x_2 \\
& x_1 x_2 \\
& {\mathfrak{s}}_j(x_3^2, \cdots ,x_n^2) \quad \text{for} \quad j \in [1,n-3] \\
& x_3 \dots x_n.
\end{aligned}
\end{equation*}
Similarly, the ideal ${\mathbb Q}[\Lambda]^{\mathrm{W}}_+$ is generated by the polynomials
\begin{equation*}
\begin{aligned}
& {\mathfrak{s}}_j(x_1^2, \cdots ,x_n^2) \quad \text{for} \quad j \in [1,n-1] \\
& x_1 \dots x_n.
\end{aligned}
\end{equation*}
Hence, we obtain the following explicit description of Borel's presentation.
\begin{proposition}
\label{cor-pres-xi}
We have an algebra isomorphism
\begin{equation*}
H^*(X,{\mathbb Q}) = {\mathbb Q}[h,p,\gamma,(\Sigma_j)_{j \in [1,n-3]}]/((\Xi_i)_{i \in [1,n-1]} , \xi_n),
\end{equation*}
where
\begin{equation}\label{eq:borel-presentation-II}
\begin{aligned}
& \gamma = {\mathfrak{s}}_{n-2}(x_3,\cdots,x_n) = x_3 \cdots x_n, \\
& \Xi_j = {\mathfrak{s}}_j(x_1^2,\cdots,x_n^2) \quad \text{for} \quad j \in [1,n], \\
& \xi_n = {\mathfrak{s}}_n(x_1,\cdots,x_n) = x_1 \cdots x_n,
\end{aligned}
\end{equation}
and $h,p, \Sigma_j$ are defined by \eqref{eq:h-p-Sigma-via-chern-roots}.
\end{proposition}
Our next goal is to simplify the above presentation. In particular, we are going to eliminate the variables $\Sigma_j$ with $j \in [1,n-3]$ using the equations $\Xi_i$ with $i \in [1,n-3]$. For that we introduce the following two ingredients:
\begin{enumerate}
\item For any $l \in [1,n]$ we have the following identity
\begin{equation}\label{eq:identity-Xi-Sigma}
\Xi_{l} = (-1)^{l}\Sigma_{l} + (x_1^2 + x_2^2)(-1)^{l-1}\Sigma_{l-1} + x_1^2x_2^2(-1)^{l-2} \Sigma_{l-2}
\end{equation}
of symmetric polynomials in $x_1, \dots, x_n$, where $\Xi_{l}$ and $\Sigma_{l}$ are defined in \eqref{eq:borel-presentation-II} and \eqref{eq:h-p-Sigma-via-chern-roots} respectively, and we use the conventions $\Sigma_0 = 1$, $\Sigma_{-1} = 0$ and $\Sigma_j = 0$ for $j > n-2$.
\item For any $j \geq 0$ we define polynomials $E_j(h,p) \in {\mathbb Q}[h,p]$ as
\begin{equation}\label{eq:definition-polynomials-Ej}
E_j(h,p) =
\begin{cases}
E_0 = 1, \\
E_1 = h^2 - 2p, \\
E_j = (h^2 - 2p)E_{j-1} - p^2 E_{j-2} \quad \text{for} \quad j \geq 2.
\end{cases}
\end{equation}
It is clear from the definition that, if we set $\deg(h) = 1$ and $\deg(p) = 2$, then the polynomial $E_j(h,p)$ is graded homogeneous of degree $2j$.
\end{enumerate}
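For instance, the recursion \eqref{eq:definition-polynomials-Ej} gives
\begin{equation*}
E_2(h,p) = (h^2 - 2p)^2 - p^2 = h^4 - 4h^2p + 3p^2
\quad \text{and} \quad
E_3(h,p) = h^6 - 6h^4p + 10h^2p^2 - 4p^3,
\end{equation*}
graded homogeneous of degrees $4$ and $6$ respectively.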
\begin{lemma}
\label{lemm-elim}
For $j \in [1,n-2]$ we have the following equality in $H^*(X,{\mathbb Q})$
\begin{equation}\label{eq:Sigma-via-x1-and-x2}
\Sigma_j = \sum_{k = 0}^j x_1^{2k} x_2^{2(j-k)}.
\end{equation}
In particular, we have
\begin{equation}\label{eq:Sigma-via-h-and-p}
\Sigma_j = E_j(h,p).
\end{equation}
\end{lemma}
\begin{proof}
To show \eqref{eq:Sigma-via-x1-and-x2} we use \eqref{eq:identity-Xi-Sigma} and induction on $j$.
For $j = 1$ the equality~\eqref{eq:identity-Xi-Sigma} becomes $\Xi_1 = -\Sigma_1 + x_1^2 + x_2^2$. Therefore, in $H^*(X,{\mathbb Q})$ we get $\Sigma_1 = x_1^2 + x_2^2$, which is a particular instance of \eqref{eq:Sigma-via-x1-and-x2}.
Let us now assume that \eqref{eq:Sigma-via-x1-and-x2} is known to hold up to $j = r$. Using \eqref{eq:identity-Xi-Sigma} we see that in $H^*(X,{\mathbb Q})$ we have the relation $\Sigma_{r+1} = (x_1^2 + x_2^2)\Sigma_r - x_1^2x_2^2 \Sigma_{r-1}$ and the claim now follows by induction.
To show \eqref{eq:Sigma-via-h-and-p} use \eqref{eq:identity-Xi-Sigma}, \eqref{eq:Sigma-via-x1-and-x2}, and
\eqref{eq:h-p-Sigma-via-chern-roots}.
\end{proof}
\begin{corollary}\label{corollary:borel-presentation-III-simplified}
There is an isomorphism of algebras
\begin{equation*}
H^*(X,{\mathbb Q}) \simeq {\mathbb Q}[h,p,\gamma]/(EQ_{n}, EQ_{2n-4}, EQ_{2n-2}),
\end{equation*}
where
\begin{equation}\label{eq:borel-presentation-III-simplified}
\begin{aligned}
& EQ_{n} = p\gamma , \\
& EQ_{2n-4} = \gamma^2 + (-1)^{n-1} E_{n-2}(h,p) , \\
& EQ_{2n-2} = (h^2-p) E_{n-2}(h,p) - p^2 E_{n-3}(h,p) .
\end{aligned}
\end{equation}
\end{corollary}
\begin{proof}
Using \eqref{eq:identity-Xi-Sigma} we can eliminate the variables $\Sigma_j$ for $j \in [1,n-3]$ using the relations $\Xi_i$ for $i \in [1,n-3]$. Thus, after the elimination we have
\begin{equation*}
H^*(X,{\mathbb Q}) \simeq {\mathbb Q}[h,p,\gamma]/(\xi_n, \Xi_{n-2}, \Xi_{n-1}).
\end{equation*}
Let us look at these relations one at a time:
\begin{enumerate}
\item Since $\xi_n = p \gamma$, we immediately get the first relation in \eqref{eq:borel-presentation-III-simplified}.
\item After the elimination of the variables $\Sigma_j$ for $j \in [1,n-3]$ we need to replace their occurrences in the relations $\Xi_{n-2}$ and $\Xi_{n-1}$ by the polynomials $E_j(h,p)$. Thus, we can write $\Xi_{n-2}$ in the variables $h,p,\gamma$ as
\begin{equation*}
\Xi_{n-2} = \gamma^2 + (-1)^{n-3}\left((h^2 - 2p)E_{n-3}(h,p) - p^2 E_{n-4}(h,p) \right).
\end{equation*}
Finally, using \eqref{eq:definition-polynomials-Ej} we can rewrite the above relation as
\begin{equation*}
\Xi_{n-2} = \gamma^2 + (-1)^{n-3}E_{n-2}(h,p).
\end{equation*}
\item Again, we need to rewrite $\Xi_{n-1}$ in terms of $h,p,\gamma$. By \eqref{eq:identity-Xi-Sigma} we have
\begin{equation*}
\Xi_{n-1} = (x_1^2 + x_2^2)(-1)^{n-2}\Sigma_{n-2} + x_1^2x_2^2(-1)^{n-3} \Sigma_{n-3}.
\end{equation*}
As in the previous case we can rewrite
\begin{equation*}
\Xi_{n-1} = (h^2 - 2p)\gamma^2 + p^2(-1)^{n-3}E_{n-3}(h,p).
\end{equation*}
Now using the second relation of \eqref{eq:borel-presentation-III-simplified}, which we have already shown in the previous step, we can replace $\gamma^2$ and get
\begin{equation*}
(-1)^{n-2}\left((h^2 - 2p) E_{n-2}(h,p) - p^2 E_{n-3}(h,p)\right).
\end{equation*}
Finally, since by the first two relations we have $p E_{n-2}(h,p) = (-1)^{n-2} p\gamma^2 = 0$, we can rewrite the above relation as
\begin{equation*}
(-1)^{n-2}\left((h^2 - p)E_{n-2}(h,p) - p^2E_{n-3}(h,p)\right).
\end{equation*}
\end{enumerate}
The claim now follows.
\end{proof}
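\begin{example}
For $n = 5$, i.e.\ $X = \OG(2,10)$, the presentation of Corollary \ref{corollary:borel-presentation-III-simplified} reads
\begin{equation*}
H^*(X,{\mathbb Q}) \simeq {\mathbb Q}[h,p,\gamma]/\left(p\gamma, \ \gamma^2 + E_3(h,p), \ (h^2-p)E_3(h,p) - p^2E_2(h,p)\right)
\end{equation*}
with $E_2(h,p) = h^4 - 4h^2p + 3p^2$ and $E_3(h,p) = h^6 - 6h^4p + 10h^2p^2 - 4p^3$, where $\deg(h) = 1$, $\deg(p) = 2$ and $\deg(\gamma) = n-2 = 3$.
\end{example}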
\subsection{Schubert classes for $\OG(2,2n)$}
Let $e_1, \dots, e_{2n}$ be a basis of the vector space $V$ such that
\begin{equation*}
B(e_i,e_j) = \delta_{2n+1 - i,j}
\end{equation*}
where $B$ is the non-degenerate symmetric bilinear form associated to the quadratic form $q$. For $k \in [1,2n]$, define $F_k = \scal{e_i \ | \ i \in [1,k]}$ and $F'_n = F_{n-1} + \scal{e_{n+1}}$ and set
$$\epsilon(k) = 2n - 2 - k + \left\{
\begin{array}{ll}
1 & \textrm{ if $k \leq n - 2$} \\
0 & \textrm{ if $k > n - 2$}. \\
\end{array}
\right.$$
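Explicitly, we have
\begin{equation*}
\epsilon(1) = 2n-2, \quad \epsilon(n-3) = n+2, \quad \epsilon(n-1) = n-1, \quad \epsilon(2n-3) = 1,
\end{equation*}
and $\epsilon$ is strictly decreasing on $[1,2n-3]$.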
For $k \in [1,2n-3]$ with $k \neq n-2$, we define Schubert varieties
\begin{equation*}
X_k = \{ V_2 \in X \ | \ V_2 \cap F_{\epsilon(k)} \neq 0 \}.
\end{equation*}
For $k = n - 2$ we define two Schubert varieties
\begin{equation*}
X_{n-2} = \{ V_2 \in X \ | \ V_2 \cap F_n \neq 0 \}
\end{equation*}
and
\begin{equation*}
X'_{n-2} = \{ V_2 \in X \ | \ V_2 \cap F'_{n} \neq 0 \}.
\end{equation*}
We have $\codim(X_k) = k$ and $\codim(X'_{n-2}) = n-2$. Finally, we set
\begin{equation*}
\tau_k = [X_k] \quad \text{for} \quad k \in [1,2n-3]
\end{equation*}
and
\begin{equation*}
\tau'_{n-2} = [X'_{n-2}].
\end{equation*}
These classes are called \textsf{special Schubert classes}.
We have seen in Subsections \ref{subsection:schubert-classes} and \ref{subsection:indexing-schubert-classes-by-roots} that Schubert classes can be indexed by elements of the Weyl group and, in the (co)adjoint case, also by roots. We give both descriptions explicitly for the classes $(\tau_k)_{k \in [1,2n-3]}$ and $\tau'_{n-2}$.
In terms of elements of the Weyl group,
we can express these classes as follows. Consider the elements of $\mathrm{W}^{\mathrm{P}}$ defined by
\begin{equation*}
w_j =
\begin{cases}
s_{\alpha_{j+1}} s_{\alpha_j} \dots s_{\alpha_2} \quad \text{for} \quad 1 \leq j \leq n-1, \\
s_{\alpha_{2n-2 - j}} \dots s_{\alpha_{n-3}} s_{\alpha_{n-2}} s_{\alpha_n} s_{\alpha_{n-1}} \dots s_{\alpha_2} \quad \text{for} \quad n \leq j \leq 2n-3,
\end{cases}
\end{equation*}
and
\begin{equation*}
w'_{n-2} = s_{\alpha_n} s_{\alpha_{n-2}} s_{\alpha_{n-3}} \dots s_{\alpha_2}.
\end{equation*}
With this notation we have
\begin{equation*}
\begin{aligned}
& \tau_j = [X^{w_j}] = \sigma^{w_j} \quad \text{for} \quad \quad 1 \leq j \leq 2n-3, \\
& \tau'_{n-2} = [X^{w'_{n-2}}] = \sigma^{w'_{n-2}}.
\end{aligned}
\end{equation*}
To check that this is the correct description, since $\ell(w_k) = k$ and $\ell(w'_{n-2}) = n-2$, it is enough to check, using the action of the Weyl group $\mathrm{W}$ on $V$, that we have $w_k(\scal{e_{2n},e_{2n-1}}) \in X_k$ and $w'_{n-2}(\scal{e_{2n},e_{2n-1}}) \in X'_{n-2}$.
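Indeed, the stated lengths can be read off by counting letters in the expressions above: for $1 \leq j \leq n-1$ the word for $w_j$ has $j$ letters, while for $n \leq j \leq 2n-3$ it has
\begin{equation*}
\big((n-2) - (2n-2-j) + 1\big) + 1 + (n-2) = j
\end{equation*}
letters, namely the $j-n+1$ reflections $s_{\alpha_{2n-2-j}}, \dots, s_{\alpha_{n-2}}$, then $s_{\alpha_n}$, then the $n-2$ reflections $s_{\alpha_{n-1}}, \dots, s_{\alpha_2}$; similarly the word for $w'_{n-2}$ has $n-2$ letters.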
We now describe these classes using roots. Applying Lemma \ref{lemm:wp-root}, we have
\begin{equation*}
\begin{aligned}
& \tau_k = \sigma_{\theta - \gamma_k} \quad \text{for} \quad k \in [1,2n-3], \\
& \tau'_{n-2} = \sigma_{\theta - \gamma'_{n-2}},
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
& \gamma_k =
\begin{cases}
\alpha_2 + \cdots + \alpha_{k+1} & \quad \text{for} \quad k \in [1,n-1], \\
\alpha_2 + \cdots + \alpha_{n} + \alpha_{n-2} + \cdots + \alpha_{n-2-(k-n)} & \quad \text{for} \quad k \in [n,2n-4], \\
\theta + \alpha_1 & \quad \text{for} \quad k = 2n-3, \\
\end{cases}
\\
& \gamma'_{n-2} = \alpha_2 + \cdots + \alpha_{n-2} + \alpha_n,
\end{aligned}
\end{equation*}
and the highest (short) root $\theta$ is given in this case by
\begin{equation*}
\theta = \alpha_1 + 2\sum_{i = 2}^{n-2} \alpha_i + \alpha_{n-1} + \alpha_n.
\end{equation*}
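For instance, for $n = 4$ this formula specialises to
\begin{equation*}
\theta = \alpha_1 + 2\alpha_2 + \alpha_3 + \alpha_4,
\end{equation*}
the highest root of the root system of type $\mathrm{D}_4$.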
Using the above description of Schubert classes, we reinterpret Borel's presentation given in Corollary \ref{corollary:borel-presentation-III-simplified} in terms of Schubert classes. Recall from \cite[Section 3.3]{BKT} that we have
\begin{equation}\label{eq:psi-via-tau}
\psi_j = c_j(\mathcal{Q}) =
\begin{cases}
\tau_j & \textrm{ for $j < n-2$} \\
\tau_{n-2} + \tau'_{n-2} & \textrm{ for $j = n-2$} \\
2\tau_j & \textrm{ for $j > n-2$} \\
\end{cases}
\end{equation}
Together with \eqref{Eq.: Tautological S.E.S I} and the classical Chevalley formula \eqref{eq:classical-chevalley-formula}
this implies
\begin{equation}\label{eq:h-and-p-via-tau}
h = \tau_1 \quad \text{and} \quad p = \tau_1^2 - \psi_2 = \tau_{1,1},
\end{equation}
where $\tau_{1,1} = \sigma^{s_1s_2}$ is a Schubert class of degree $2$.
\begin{remark}
Note that for $\OG(2,8)$ there are three Schubert classes in degree $2$
\begin{equation*}
\tau_2 = \sigma^{s_3s_2}, \quad \tau_{1,1} = \sigma^{s_1s_2}, \quad \tau_2' = \sigma^{s_4s_2},
\end{equation*}
whereas for $\OG(2,2n)$ with $n \geq 5$ there are only two such classes
\begin{equation*}
\tau_2 = \sigma^{s_3s_2} \quad \text{and} \quad \tau_{1,1} = \sigma^{s_1s_2}.
\end{equation*}
Nevertheless, the formula \eqref{eq:h-and-p-via-tau} holds for any $n \geq 4$ as we have
\begin{equation}\label{eq:tau-one-squared-OG}
\tau_1^2 =
\begin{cases}
\tau_2 + \tau_{1,1} + \tau_2' \quad \text{for} \quad n = 4, \\
\tau_2 + \tau_{1,1} \quad \text{for} \quad n \geq 5,
\end{cases}
\end{equation}
which can easily be deduced from the classical Chevalley formula \eqref{eq:classical-chevalley-formula}.
\end{remark}
We now want to express the classes $\gamma$ and $\Sigma_{n-2}$ in terms of Schubert classes.
\begin{lemma}\label{lemma:En-2-via-simple-roots}
In $H^*(X,{\mathbb Q})$ we have the identity
\begin{equation}\label{eq:En-2-via-simple-roots}
\Sigma_{n-2} = 2 \sum_{i = 1}^{n-2} (-1)^{i-1} \sigma_{\alpha_i} + (-1)^{n}(\sigma_{\alpha_{n-1}} + \sigma_{\alpha_{n}}).
\end{equation}
\end{lemma}
\begin{proof}
According to \eqref{eq:psi-via-h-p-sigma} and \eqref{eq:psi-via-tau} we have
\begin{equation*}
h\Sigma_{n-2} = \psi_{2n-3} = 2\tau_{2n-3}.
\end{equation*}
By the hard Lefschetz theorem the map $H^{2(2n-4)}(X,{\mathbb Q}) \to H^{2(2n-3)}(X,{\mathbb Q})$ given by the multiplication with $h$ is an isomorphism. Hence, to show the identity \eqref{eq:En-2-via-simple-roots}, it is enough to check that its right-hand side multiplied with $h$ is equal to $2\tau_{2n-3}$.
This can be easily checked using the classical Chevalley formula \eqref{eq:classical-chevalley-formula}.
\end{proof}
\begin{lemma}
In $H^*(X,{\mathbb Q})$ we have the identity
\begin{equation*}
\gamma = \pm(\tau_{n-2} - \tau'_{n-2}),
\end{equation*}
which is to be understood up to a sign coming from exchanging $F_n$ with $F'_{n}$ in the definition of the flag.
\end{lemma}
\begin{proof}
The presentation \eqref{eq:borel-presentation-III-simplified} shows that the map $H^{2n-4}(X,{\mathbb Q}) \to H^{2n}(X,{\mathbb Q})$ given by multiplication with $p$ has kernel equal to the ${\mathbb Q}$-span of $\gamma$. Indeed, up to degree $n$ the only relation is $p\gamma = 0$. Forgetting this relation we would have a polynomial algebra, which is a domain, and the multiplication would be injective. Adding this single relation in degree $n$ leads to the claim.
Now from the Pieri formula given in \cite[Theorem 3.1]{BKT} it is easy to check that $p\tau_{n-2} = p\tau'_{n-2}$ so that $\gamma = \lambda(\tau_{n-2} - \tau'_{n-2})$.
We compute $\lambda$ using the equation $\gamma^2 = (-1)^{n-2}\Sigma_{n-2}$
(this is the second relation in \eqref{eq:borel-presentation-III-simplified}).
In particular we consider the coefficient of $\sigma_{\alpha_1} = \tau_{2n-4}$.
For a cohomology class $\psi$ of degree $2n-4$ written in the basis
$(\sigma_\alpha)_{\alpha \in R}$, we write $L(\psi)$ for its coordinate on
$\sigma_{\alpha_1}$. We have $L(\gamma^2) = (-1)^{n-2}L(\Sigma_{n-2}) = 2(-1)^n$.
An easy application of the Pieri formula (or a direct geometric check) shows that we have
$$L(\tau_{n-2}^2) = L({\tau'_{n-2}}^2)
= \left\{
\begin{array}{ll}
1 & \textrm{ for $n$ even} \\
0 & \textrm{ for $n$ odd,} \\
\end{array}\right.
\textrm{ and }
L(\tau_{n-2}\tau'_{n-2})
= \left\{
\begin{array}{ll}
0 & \textrm{ for $n$ even} \\
1 & \textrm{ for $n$ odd.} \\
\end{array}\right.$$
We get $2(-1)^n = L(\gamma^2) = 2\lambda^2(-1)^n$ so that $\lambda^2 = 1$ and $\lambda = \pm1$. Exchanging $F_n$ and $F_n'$ changes the sign of $\gamma$, proving the result.
\end{proof}
\subsection{Small quantum cohomology}
In \cite{BKT} a presentation for the small quantum cohomology ring $\QH(X)$ in terms of special Schubert classes is given. Here we give another presentation based on the Borel presentation \eqref{eq:borel-presentation-III-simplified}, which is better suited for our needs.
To give a presentation for $\QH(X)$ from the presentation given in Corollary \ref{corollary:borel-presentation-III-simplified}, we only need to deform the equations $EQ_{n}$, $EQ_{2n-4}$ and $EQ_{2n-2}$ by replacing the product in cohomology by the quantum product $\star_0$. Write $EQ_k^{\star_0}$ for the polynomial in $h$, $p$ and $\gamma$ obtained from $EQ_k$ by replacing the product in cohomology by $\star_0$.
\begin{lemma}
In $\QH(X)$ we have the following equalities
\begin{equation*}
\begin{aligned}
& EQ_n^{\star_0} = 0, \\
& EQ_{2n-4}^{\star_0} = 0, \\
& EQ_{2n-2}^{\star_0} = -4qh.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
The first two formulas follow immediately from the fact that $\deg(q) = 2n-3$ is greater than the degree of the expression. Hence, the quantum product coincides with the classical product and the expression vanishes.
For the last formula, first note that in $H^*(X,{\mathbb Q})$, by Lemma \ref{lemm-elim}, \eqref{eq:psi-via-tau} and \eqref{eq:psi-via-h-p-sigma}, we have the equality
\begin{equation*}
E_{n-2}(h,p)+pE_{n-3}(h,p) = \Sigma_{n-2} + p\Sigma_{n-3} = \psi_{2n-4} = 2\tau_{2n-4}.
\end{equation*}
Since its degree is smaller than $\deg(q) = 2n-3$, this equality also holds in $\QH(X)$. Thus, we conclude
\begin{equation}\label{eq:EQ-2n-2-proof}
EQ_{2n-2}^{\star_0} = h \star_0 h \star_0 E_{n-2}(h,p) - 2 p \star_0 \tau_{2n-4}.
\end{equation}
To compute the first summand in \eqref{eq:EQ-2n-2-proof} we use Lemma \ref{lemma:En-2-via-simple-roots} and the quantum Chevalley formula given in Theorem \ref{thm:chevalley-QH} to get
\begin{equation*}
h \star_0 \Sigma_{n-2} = 2 \tau_{2n-3} - 2q,
\end{equation*}
and then once more to get
\begin{equation*}
h \star_0 h \star_0 E_{n-2}(h,p) = 2\sigma_{-(\alpha_1 + \alpha_2)} - 2qh.
\end{equation*}
To compute the second summand in \eqref{eq:EQ-2n-2-proof} we apply the quantum Pieri formula \cite[Theorem 3.4]{BKT} and obtain
\begin{equation*}
p \star_0 \tau_{2n-4} = \tau_{2n-2} + q\tau_1 = \sigma_{-(\alpha_1 + \alpha_2)} + qh.
\end{equation*}
Putting everything together, we get the desired equality $EQ_{2n-2}^{\star_0} = -4qh$.
\end{proof}
\begin{corollary}
\label{corollary:presentation-small-QH-type-D}
We have the following presentation
\begin{equation}\label{eq:presentation-small-QH-type-D}
\QH(X) = K[h,p,\gamma]/(EQ_n, EQ_{2n-4}, EQ_{2n-2} + 4qh).
\end{equation}
\end{corollary}
\vspace{10pt}
This proves Theorem \ref{theorem:introduction-uniform-presentation-for-QH} in type $\mathrm{D}_n$.
\begin{remark}
Using \eqref{eq:presentation-small-QH-type-D} it is not difficult to see that the small quantum deformation of the presentation given in Proposition \ref{cor-pres-xi} is
\begin{equation}\label{eq:presentation-small-QH-more-variables}
\QH(X) = K[h,p,\gamma,(\Sigma_j)_{j \in [1,n-3]}]/(\xi_n, (\Xi_i)_{i \in [1,n - 2]} , \Xi_{n-1} + (-1)^{n-2}4qh).
\end{equation}
\end{remark}
\bigskip
Since the small quantum cohomology $\QH(X)$ is a finite dimensional $K$-algebra,
we can view it as the algebra of functions on the finite scheme ${{\rm Spec}}(\QH(X))$.
Moreover, the presentation \eqref{eq:presentation-small-QH-type-D} defines the closed embedding
\begin{equation*}
{{\rm Spec}}(\QH(X)) \subset {{\rm Spec}}(K[h,p,\gamma]) = {\mathbb A}^3.
\end{equation*}
We call the point of ${{\rm Spec}}(\QH(X))$ corresponding to the maximal ideal $(h,p,\gamma)$ the \textsf{origin}.
\begin{lemma}
The origin is contained in $\Spec \QH(X)$ and the Zariski tangent space
at this point is of dimension $2$. In particular, the origin is a fat point of
$\Spec \QH(X)$ and the algebra $\QH(X)$ is not semisimple.
\end{lemma}
\begin{proof}
All claims follow immediately from Corollary \ref{corollary:presentation-small-QH-type-D},
as the polynomials $E_j(h,p)$ appearing in the presentation have no constant term.
Note that the relations $EQ_n^{\star_0} = 0$ and $EQ_{2n-4}^{\star_0} = 0$ give
no relation in the tangent space at the origin, while the relation
$EQ_{2n-2}^{\star_0} = -4qh$ induces the equation $h = 0$, explaining the dimension
of the tangent space.
\end{proof}
\begin{remark}
We recover the fact that $\QH(X)$ is not semisimple (see \cite{ChPe}).
\end{remark}
In the next proposition we study the structure of the finite scheme
$\Spec \QH(X)$ in more detail. Note that we have
$\dim_K \QH(X) = \dim_{{\mathbb Q}} H^*(X, {\mathbb Q}) = 2n(n-1)$.
\begin{proposition}
\label{prop:decomp-A-B}
We have a decomposition
\begin{equation*}
\QH(X) = A\times B,
\end{equation*}
where $A$ is a fat point of length $n$ supported at the origin and $B$ is a
semisimple algebra corresponding to $n(2n-3)$ reduced points in $\Spec(\QH(X))$
different from the origin.
\end{proposition}
\begin{proof}
Since $\dim_{K} \QH(X) = 2n(n-1)$,
to prove the claim it is enough to show the inequalities
\begin{equation}\label{eq:dimension-inequality-A}
\dim_K A \geq n
\end{equation}
and
\begin{equation}\label{eq:dimension-inequality-B}
\dim_K B \geq n(2n-3).
\end{equation}
To show \eqref{eq:dimension-inequality-A} it is enough to consider the intersection with the locus $h = 0$, i.e. to add $h$ to the relations in the presentation \eqref{eq:presentation-small-QH-type-D}. Since $E_j(0,p) = (-1)^{j}(j+1)p^j$, the resulting algebra becomes
\begin{equation*}
K[p,\gamma]/(p\gamma, \gamma^2 - (n-1)p^{n-2}, p^{n-1}).
\end{equation*}
As this algebra is of dimension $n$, the inequality \eqref{eq:dimension-inequality-A} follows.
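Indeed, in this quotient the relations $p\gamma = 0$ and $\gamma^2 = (n-1)p^{n-2}$ allow one to eliminate all monomials involving $\gamma$ except $\gamma$ itself, while $p^{n-1} = 0$ truncates the powers of $p$, so a $K$-basis is given by the $n$ elements
\begin{equation*}
1, \ p, \ p^2, \ \dots, \ p^{n-2}, \ \gamma.
\end{equation*}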
To show \eqref{eq:dimension-inequality-B} we proceed as follows. Consider the $2$-to-$1$ cover
of $\Spec \QH(X)$ defined by the injective homomorphism of algebras (note that we use
\eqref{eq:presentation-small-QH-type-D} and \eqref{eq:presentation-small-QH-more-variables} here):
\begin{equation}\label{eq:covering-map}
K[h,p,\gamma]/(EQ_n, EQ_{2n-4}, EQ_{2n-2} + 4qh) \to K[x_1, \dots, x_n]/I
\end{equation}
with $I = (\xi_n, \left( \Xi_i \right)_{i \in [1,n-2]}, \Xi_{n-1} + (-1)^{n-2}4q(x_1 + x_2))$ induced by
\begin{equation}\label{eq:covering-map-formulas}
h \mapsto x_1 + x_2, \quad p \mapsto x_1x_2, \quad \gamma \mapsto x_3 \dots x_n,
\end{equation}
and where $\xi_n$ and $\Xi_i$ are defined in \eqref{eq:borel-presentation-II}.
Note that the generators of $I$
can be rewritten in the form of polynomial identities
\begin{equation}\label{eq:covering-map-polynomial-identities}
\begin{aligned}
& \prod_{i = 1}^n(t^2 - x_i^2) = t^2(t^{2n-2} - 4q(x_1 + x_2)), \\
& x_1 \cdots x_n = 0,
\end{aligned}
\end{equation}
which we are going to solve to estimate the dimension of $B$. From \eqref{eq:covering-map-formulas} it follows that the set-theoretic preimage of the origin $h = p = \gamma = 0$ is given by the equations $x_1 = x_2 = 0$ and $x_3 \dots x_n = 0$. Thus, we are only interested in the solutions of \eqref{eq:covering-map-polynomial-identities} outside of this set. We consider three cases.
\begin{enumerate}
\item \emph{Case $x_1 = 0, x_2 \neq 0$.} The first identity in \eqref{eq:covering-map-polynomial-identities} becomes
\begin{equation*}
\prod_{i = 2}^n(t^2 - x_i^2) = (t^{2n-2} - 4qx_2).
\end{equation*}
Since $t = x_2$ is a solution, we get a relation on $x_2$ of the form
\begin{equation*}
x_2^{2n-3} = 4q.
\end{equation*}
Thus, we see that we have $2n-3$ choices for $x_2$. From here one also obtains $x_3, \dots, x_n$ up to permutations and signs.
These solutions in terms of $x_1, \dots, x_n$ give rise to the following solutions in terms of $h,p,\gamma$
\begin{equation*}
h = x_2, \quad p = 0, \quad \gamma = x_3 \dots x_n.
\end{equation*}
Thus, in total we have $2(2n-3) = 4n - 6$ solutions in terms of $h,p,\gamma$, where the factor $2$ comes from the sign.
\medskip
\item \emph{Case $x_2 = 0, x_1 \neq 0$.} This can be treated identically to the previous one. However, these solutions give the same answers for $h$, $p$ and $\gamma$, so we do not need to take them into account.
\medskip
\item \emph{Case $x_1 \neq 0, x_2 \neq 0, x_1 \neq x_2$.} The second identity in \eqref{eq:covering-map-polynomial-identities} implies that at least one of the $x_i$ for $i \in [3,n]$ must vanish. Therefore, we immediately obtain $\gamma = 0$.
By our assumption $x_1$ and $x_2$ are two distinct roots of the first identity in~\eqref{eq:covering-map-polynomial-identities}. Hence, by substituting them into this identity, we obtain the system of equations
\begin{equation}\label{eq:covering-map-polynomial-identities-case3-I}
\begin{aligned}
& x_1^{2n-2} = 4q(x_1+x_2) \\
& x_2^{2n-2} = 4q(x_1+x_2).
\end{aligned}
\end{equation}
Eliminating $x_2$ from \eqref{eq:covering-map-polynomial-identities-case3-I} we obtain
\begin{equation*}
(x_1^{2n-2} - 4qx_1)^{2n-2} = (4qx_1)^{2n-2},
\end{equation*}
and, since $x_1 \neq 0$, we can further rewrite it as
\begin{equation}\label{eq:covering-map-polynomial-identities-case3-II}
(x_1^{2n-3} - 4q)^{2n-2} = (4q)^{2n-2}.
\end{equation}
It is clear that \eqref{eq:covering-map-polynomial-identities-case3-II} is
a polynomial of degree $(2n-3)(2n-2)$, which has zero as a root of multiplicity $2n-3$
and all other roots are of multiplicity one, so that \eqref{eq:covering-map-polynomial-identities-case3-II} has $(2n-3)^2$ non-zero roots. Discarding the further $2n-3$ solutions of the form $x_1 = x_2 \neq 0$, the number of solutions of \eqref{eq:covering-map-polynomial-identities-case3-I} satisfying $x_1 \neq 0, x_2 \neq 0, x_1 \neq x_2$ is equal to $(2n-3)^2 - (2n-3) = (2n-3)(2n-4)$.
Finally, since exchanging $x_1$ and $x_2$ does not change $h$, $p$ and $\gamma$, the above $(2n-3)(2n-4)$ solutions for the $x_i$'s give rise to $(2n-3)(n-2)$ solutions for $h, p, \gamma$.
\end{enumerate}
Adding the contributions of the first and the third case gives the desired bound \eqref{eq:dimension-inequality-B}.
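Explicitly, the contributions add up as
\begin{equation*}
(4n - 6) + (2n-3)(n-2) = (2n-3)\bigl(2 + (n-2)\bigr) = n(2n-3),
\end{equation*}
and together with the length $n$ of the local factor at the origin this recovers $\dim_K \QH(X) = n + n(2n-3) = 2n(n-1)$.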
\end{proof}
\begin{lemma}\label{lemma:description-fat-point-type-D}
The non-reduced factor $A$ of $\QH(X)$ described in Proposition \ref{prop:decomp-A-B}
has the following explicit presentation
\begin{equation*}
A \simeq K[x,y]/(xy, x^2 + (n-1)y^{n-2}).
\end{equation*}
In particular, $A$ is isomorphic to the Jacobi ring of the simple isolated
hypersurface singularity of type $\mathrm{D}_n$, i.e. we have
\begin{equation*}
A \simeq K[x,y]/(f'_x, f'_y) \quad \text{with} \quad f = x^2y + y^{n-1}.
\end{equation*}
\end{lemma}
\begin{proof}
Since $A$ is the localisation of $\QH(X)$ at the origin, to prove the above
claims we can consider the relations \eqref{eq:presentation-small-QH-type-D}
for $\QH(X)$ in the formal power series ring $K[[h,p,\gamma]]$.
First we note that from \eqref{eq:definition-polynomials-Ej} we have congruences
\begin{equation*}
E_k \equiv h^{2k}\ ({\rm mod}\ p) \qquad \text{and} \qquad E_k \equiv (-1)^k(k+1)p^k\ ({\rm mod}\ h).
\end{equation*}
From the relation $EQ_{2n-2} + 4qh = 0$ it follows that there exists
$d(p) \in (p) \subset K[[p]]$ such that
\begin{equation}\label{eq:expression-for-h-via-p}
h = (-p)^{n-1} \left( -\frac{1}{4q} + d(p) \right),
\end{equation}
which allows us to get rid of the variable $h$ and of the relation $EQ_{2n-2} + 4qh = 0$.
Now we can substitute \eqref{eq:expression-for-h-via-p} instead of $h$ into the relation
$EQ_{2n-4} = 0$ and obtain
\begin{equation*}
\gamma^2 - (n-1)p^{n-2} \left( 1 + e(p) \right) = 0
\end{equation*}
for some power series $e(p) \in (p) \subset K[[p]]$.
Let us define
\begin{equation*}
x = \gamma \qquad \text{and} \qquad y = p\sqrt[n-2]{- 1 - e(p)}.
\end{equation*}
In this notation the relation $EQ_{2n-4} = 0$ becomes $x^2 + (n-1)y^{n-2} = 0$
and the relation $EQ_{n} = 0$ becomes $xy = 0$. The claims now follow.
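For the identification with the Jacobi ring, note that for $f = x^2y + y^{n-1}$ we have
\begin{equation*}
f'_x = 2xy \qquad \text{and} \qquad f'_y = x^2 + (n-1)y^{n-2},
\end{equation*}
so that $(f'_x, f'_y) = (xy, x^2 + (n-1)y^{n-2})$, since $2$ is invertible in $K$.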
\end{proof}
\subsection{Big quantum cohomology}
Recall the presentation \eqref{eq:presentation-small-QH-type-D} of the small
quantum cohomology of $X$
\begin{equation*}
\QH(X) = K[h,p,\gamma]/(EQ_n, EQ_{2n-4}, EQ_{2n-2} + 4qh).
\end{equation*}
In this subsection, we prove the generic semisimplicity of $\BQH(X)$ as outlined
in the introduction. We need to show that in the big quantum cohomology we have
a deformation, whose relations are of the following form with $\lambda_\gamma,\lambda_p \in {\mathbb Q}^\times$:
\begin{equation*}
\begin{aligned}
& EQ_n + \lambda_\gamma qt_{\gamma} + \text{higher degree terms} \\
& EQ_{2n-4} + \lambda_p qt_p + \text{higher degree terms} \\
& EQ_{2n-2} + 4qh + \text{higher degree terms}
\end{aligned}
\end{equation*}
We will see that the above form is prescribed by the degrees of the generators and
equations, so we will only need to explicitly compute the values of $\lambda_\gamma$
and $\lambda_p$. For this we will need to compute four-point Gromov--Witten invariants
of degree $1$, and we will use the results of Subsection \ref{subsection:simply-laced-types}.
\begin{proposition}
We have the following formulas.
\begin{equation}\label{eq:GW-pt-p-tau-tau}
\scal{[{\rm pt}], p, \tau_{n-2}, \tau_{n-2}}_1^X = \scal{[{\rm pt}], p, \tau'_{n-2}, \tau'_{n-2}}_1^X =
\begin{cases}
0 \quad \text{if $n$ is even} \\
1 \quad \text{if $n$ is odd}
\end{cases}
\end{equation}
\begin{equation}\label{eq:GW-pt-p-tau-tauprime}
\scal{[{\rm pt}], p, \tau_{n-2}, \tau'_{n-2}}_1^X =
\begin{cases}
1 \quad \text{if $n$ is even} \\
0 \quad \text{if $n$ is odd}
\end{cases}
\end{equation}
\begin{equation}\label{eq:GW-pt-p-gamma-gamma}
\scal{[{\rm pt}], p,\gamma,\gamma}_1^X = (-1)^{n-1}2.
\end{equation}
\begin{equation}\label{eq:GW-pt-gamma-gamma-gamma}
\scal{[{\rm pt}], \gamma, \gamma, \gamma}_1^X = 0
\end{equation}
\begin{equation}\label{eq:GW-pt-p-p-tau}
\scal{[{\rm pt}], p, p, \tau}_1^X = 0 \textrm{ for any class } \tau
\end{equation}
\begin{equation}\label{eq:GW-pt-h-tau-2n-5}
\scal{[{\rm pt}], p, h, \tau_{2n-5}}_1^X = 1
\textrm{ and }
\scal{[{\rm pt}], \gamma, h, \tau_{2n-5}}_1^X = 0.
\end{equation}
\end{proposition}
\begin{proof}
Recall from Section \ref{section:geometry-of-the-space-of-lines} that for $X = \OG(2,2n)$
the variety $\mathrm{F}_x$ of lines passing through a point has the following description.
The variety $X$ corresponds to the second vertex of the Dynkin diagram of type $\mathrm{D}_n$
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1,2,3,4,,n-2, n-1, n},scale=1.5]D{o*oo.oooo}
\end{dynkinDiagram}
\end{equation*}
To obtain $\mathrm{F}_x$ we remove the second vertex from the diagram, splitting it into
two connected components, and mark the two vertices that used to be adjacent to the
second vertex:
\begin{equation*}
\begin{dynkinDiagram}[edge length = 2em,labels*={1},scale=1.5]A{*}
\end{dynkinDiagram}
\hspace{60pt}
\begin{dynkinDiagram}[edge length = 2em,labels*={3,4,,n-2, n-1, n},scale=1.5]D{*o.oooo}
\end{dynkinDiagram}
\end{equation*}
In this process we keep the original indexing of the vertices. Hence, we obtain
\begin{equation*}
\mathrm{F}_x = {\mathbb P}^1 \times \mathrm{Q}^{2n-6}.
\end{equation*}
The Schubert basis for $H^*({\mathbb P}^1, {\mathbb Q})$ is given by
\begin{equation*}
1 = \sigma^{e} \quad \text{and} \quad \zeta = \sigma^{s_1},
\end{equation*}
where $\zeta$ can alternatively be described as the hyperplane class or the class
of a point. The Schubert basis for $H^*(\mathrm{Q}^{2n-6}, {\mathbb Q})$ is given by
\begin{equation*}
\begin{aligned}
& 1 = \sigma^{e}, \\
& \xi_{i} = \sigma^{s_{i+2}s_{i+1} \dots s_3} \quad \text{for} \quad i \in [1,n-2], \\
& \xi_{i} = \sigma^{s_{2n - 3 - i} \dots s_{n-2} s_{n} s_{n-1} \dots s_3} \quad \text{for} \quad i \in [n-1,2n-6], \\
& \xi_{n-3}' = \sigma^{s_n s_{n-2} s_{n-3} \dots s_3}.
\end{aligned}
\end{equation*}
Finally, the Schubert basis for $H^*({\mathbb P}^1 \times \mathrm{Q}^{2n-6}, {\mathbb Q})$ is given by
the Künneth formula.
Let us recall (for example, see \cite[Theorem 1.13]{Reid}) that in $H^*(\mathrm{Q}^{2n-6}, {\mathbb Q})$ the following identities hold
\begin{equation}\label{eq:prod-in-quad}
\xi_{n-3}^2 = (\xi'_{n-3})^2 =
\begin{cases}
1 \quad \text{if $n - 3$ is even} \\
0 \quad \text{if $n - 3$ is odd}
\end{cases}
\textrm{ and }
\xi_{n-3}\xi'_{n-3} =
\begin{cases}
0 \quad \text{if $n - 3$ is even} \\
1 \quad \text{if $n - 3$ is odd}.
\end{cases}
\end{equation}
We also restate for convenience of the reader the following formulas
\begin{equation}\label{eq:recalling-formulas-for-p-tau-tau-prime}
p = \sigma^{s_1s_2}, \quad \tau_{n-2} = \sigma^{s_{n-1} s_{n-2} s_{n-3} \dots s_2}, \quad \tau_{n-2}' = \sigma^{s_n s_{n-2} s_{n-3} \dots s_2}, \quad \gamma = \pm(\tau_{n-2} - \tau_{n-2}'),
\end{equation}
which appeared earlier in this section.
\medskip
Finally, we are now ready to compute the necessary GW invariants.
\smallskip
\noindent \textbf{Invariants \eqref{eq:GW-pt-p-tau-tau} -- \eqref{eq:GW-pt-p-tau-tauprime}:}
These follow immediately from Corollary \ref{corollary:quantum-to-classical-simply-laced}
and \eqref{eq:prod-in-quad}--\eqref{eq:recalling-formulas-for-p-tau-tau-prime}.
\smallskip
\noindent \textbf{Invariant \eqref{eq:GW-pt-p-gamma-gamma}:} This invariant follows
from \eqref{eq:GW-pt-p-tau-tau} -- \eqref{eq:GW-pt-p-tau-tauprime}.
\smallskip
\noindent \textbf{Invariant \eqref{eq:GW-pt-gamma-gamma-gamma}:} This invariant
reduces via Corollary \ref{corollary:quantum-to-classical-simply-laced} to a linear
combination of triple intersection numbers on $\mathrm{Q}^{2n-6}$. Since all the cohomology
classes involved are of degree $n-3$, all such triple intersections vanish for
degree reasons.
\smallskip
\noindent \textbf{Invariant \eqref{eq:GW-pt-p-p-tau}:} Similarly to the above,
this vanishing follows from the fact that $\zeta^2 = 0$ in $H^*({\mathbb P}^1,{\mathbb Q})$.
\smallskip
\noindent \textbf{Invariant \eqref{eq:GW-pt-h-tau-2n-5}:} This follows from the fact
that $q_*p^*\tau_{2n-5} = 1 \otimes \xi_{2n-6}$, whose product with
$q_*p^*p = \zeta \otimes 1$ is the point class, while its product with $q_*p^*\gamma = 1 \otimes (\xi_{n-3} - \xi'_{n-3})$ vanishes.
\end{proof}
\begin{proposition}
The ring $\BQH_{p, \gamma}(\OG(2,2n))$ is the quotient of
\begin{equation*}
\left(K[[t_p, t_{\gamma}]]\right)[h,p,\gamma]
\end{equation*}
by the ideal generated by
\begin{equation}\label{eq:presentation-big-QH-type-D}
\begin{aligned}
& EQ_n + (-1)^n2q t_{\gamma} + {\mathfrak{t}}^2, \\
& EQ_{2n-4} + (-1)^n 4 qt_p
+ {\mathfrak{t}} {\mathfrak{m}}, \\
& EQ_{2n-2} + 4qh + {\mathfrak{t}} {\mathfrak{m}},
\end{aligned}
\end{equation}
where ${\mathfrak{t}} = (t_p,t_\gamma)$ and ${\mathfrak{m}} = (h,p,\gamma,t_p,t_\gamma)$.
\end{proposition}
\begin{proof}
Relations in the big quantum cohomology ring are homogeneous deformations of relations in the small quantum cohomology ring. In our case we are deforming along the $t_p$ and $t_{\gamma}$ directions only. Thus, we are looking for a homogeneous deformation of \eqref{eq:presentation-small-QH-type-D} involving the variables $t_p$ and $t_{\gamma}$. Moreover, the general properties of Gromov--Witten invariants ensure that any term involving $t_p$ or $t_{\gamma}$ must necessarily have a factor of $q$ in it.
Recall that we have
\begin{equation*}
\begin{aligned}
& \deg(t_p) = 1 - |p| = -1, \\
& \deg(t_{\gamma}) = 1 - |\gamma| = 3 - n, \\
& \deg(q) = 2n-3.
\end{aligned}
\end{equation*}
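Note for instance that these degrees are compatible with the homogeneity of the deformed relations: we have
\begin{equation*}
\deg(qt_{\gamma}) = (2n-3) + (3-n) = n, \qquad
\deg(qt_p) = (2n-3) - 1 = 2n-4, \qquad
\deg(qh) = 2n-2,
\end{equation*}
matching the degrees of $EQ_n$, $EQ_{2n-4}$ and $EQ_{2n-2}$ respectively.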
From these formulas and homogeneity it immediately follows that the necessary deformation of the small quantum cohomology relations \eqref{eq:presentation-small-QH-type-D} has to have the form prescribed by \eqref{eq:presentation-big-QH-type-D}. The only point that needs to be checked is the precise form of the linear terms
$(-1)^n2q t_{\gamma}$ and $(-1)^n 4q t_p$ appearing in \eqref{eq:presentation-big-QH-type-D}. This is what we do in the rest of the proof.
We denote by $EQ_k^{\star}$ the element of $\BQH_{p, \gamma}(\OG(2,2n))$ defined by the polynomial $EQ_k$, i.e. we use the product $\star$ of $\BQH_{p, \gamma}(X)$ to multiply terms of the polynomial. For example, we have $EQ_n^\star = p \star \gamma$.
\smallskip
Let us consider the first relation in \eqref{eq:presentation-big-QH-type-D}. The linear terms of $p \star \gamma$ are given by $\scal{p, \gamma, \gamma, [{\rm pt}]}_1^Xqt_{\gamma}$ and $\scal{p, \gamma, p, [{\rm pt}]}_1^Xqt_p$, where for degree reasons the latter one is only potentially non-zero for $n = 4$.
Applying \eqref{eq:GW-pt-p-gamma-gamma} and \eqref{eq:GW-pt-p-p-tau} we get the claim.
In the rest of the proof we deal with the second relation in \eqref{eq:presentation-big-QH-type-D}. Recall from \eqref{eq:borel-presentation-III-simplified} that we have $EQ_{2n-4} = \gamma^2 + (-1)^{n-1}E_{n-2}(h,p)$. As above we need to compute $EQ_{2n-4}^\star$
and we are only interested in the linear terms $q t_{\gamma}$ and $q t_p$. We treat the summands $\gamma \star \gamma$
and $\left((-1)^{n-1}E_{n-2}(h,p)\right)^\star$ separately.
First we consider $\gamma \star \gamma$. Here the linear terms are given by $\scal{\gamma,\gamma,p, {{\rm pt}}}_1 qt_p$ and $\scal{\gamma,\gamma,\gamma,{{\rm pt}}}_1 qt_{\gamma}$. Note that for degree reasons the latter term can only be potentially non-zero for $n = 4$. By \eqref{eq:GW-pt-p-gamma-gamma} we have $\scal{\gamma,\gamma,p, {{\rm pt}}}_1 qt_p = (-1)^{n-1}2 qt_p$
and by \eqref{eq:GW-pt-gamma-gamma-gamma} we have $\scal{\gamma,\gamma,\gamma,{{\rm pt}}}_1 qt_{\gamma} = 0$. Thus, the linear term of $\gamma \star \gamma$ is
\begin{equation}\label{eq:proof-BQH-linear-term-gamma-gamma}
(-1)^{n-1}2 qt_p.
\end{equation}
Now we consider $\left(E_{n-2}(h,p)\right)^\star$. From \eqref{eq:definition-polynomials-Ej} it is clear that
$E_{n-2}(h,p) \equiv h^{2n-4} \ ({\rm mod} \ p)$. Therefore, $\left(E_{n-2}(h,p)\right)^\star = h^{\star 2n-4} + p \star \tau$, where $\tau$ is a linear combination of Schubert classes of degree $2n-6$. We consider $h^{\star 2n-4}$ and $p \star \tau$ separately.
The linear terms of $p \star \tau$ are given by $\scal{p,\tau,p,[{\rm pt}]}_1qt_p$ and $\scal{p,\tau,\gamma,[{\rm pt}]}_1qt_{\gamma}$, where the latter term can only be potentially non-zero for $n = 4$. By \eqref{eq:GW-pt-p-p-tau} we have $\scal{p,\tau,p,[{\rm pt}]}_1qt_p = 0$. To deal with $\scal{p,\tau,\gamma,[{\rm pt}]}_1qt_{\gamma}$
in the case $n = 4$ we note that by \eqref{eq:definition-polynomials-Ej} we have $E_2(h,p) = h^4 + p(3p-4h^2)$.
Thus, in the case $n = 4$ by \eqref{eq:psi-via-tau} -- \eqref{eq:tau-one-squared-OG}
we have
\begin{equation*}
\tau = 3p-4h^2 = -4(\tau_2 + \tau'_2) - p.
\end{equation*}
Hence, using \eqref{eq:GW-pt-p-tau-tau}, \eqref{eq:GW-pt-p-tau-tauprime} and \eqref{eq:GW-pt-p-p-tau}
we get
\begin{equation*}
\scal{p,\tau,\gamma,[{\rm pt}]}_1 = -4 \scal{p,\tau_2+\tau'_2,\tau_2 -\tau'_2,[{\rm pt}]}_1 - \scal{p,p,\gamma,[{\rm pt}]}_1 = 0.
\end{equation*}
Finally, we conclude that the linear term of $p \star \tau$ always vanishes.
Now let us consider $h^{\star 2n-4}$. By the Chevalley formula (see Theorem \ref{thm:chevalley-QH}), we have
\begin{equation*}
h^{2n-5} = \tau_1^{2n-5} = (\sigma_{\theta - \alpha_2})^{2n-5} = 2 \tau_{2n-5} + \sigma,
\end{equation*}
(no quantum corrections for degree reasons), where $\sigma$ is a linear combination of Schubert classes of the form $\sigma_{\alpha_i + \alpha_j}$ with $i,j \geq 2$ (also for degree reasons). For $n = 4$, we have
$\tau_{2n-5} = \sigma_{\alpha_1 + \alpha_2}$ and
$\sigma = 2(\sigma_{\alpha_2 + \alpha_3} + \sigma_{\alpha_2 + \alpha_4}) = 2(\sigma^{s_4s_1s_2} + \sigma^{s_3s_1s_2})$.
Multiplying with $h$, we have
\begin{equation*}
h^{\star 2n-4} = 2 \tau_1 \star \tau_{2n-5} + \tau_1 \star \sigma.
\end{equation*}
We treat these summands separately:
\begin{enumerate}
\item The linear terms of $\tau_1 \star \tau_{2n-5}$ are given by $\scal{\tau_1 , \tau_{2n-5},p,[{\rm pt}]}_1qt_p$ and $\scal{\tau_1 ,\tau_{2n-5},\gamma,[{\rm pt}]}_1qt_{\gamma}$, where the latter term can only be potentially non-zero for $n = 4$. According to \eqref{eq:GW-pt-h-tau-2n-5},
we have $\scal{\tau_1 , \tau_{2n-5},p,[{\rm pt}]}_1qt_p = qt_p$ and $\scal{\tau_1 ,\tau_{2n-5},\gamma,[{\rm pt}]}_1qt_{\gamma} = 0$.
\smallskip
\item The linear terms of $\tau_1 \star \sigma$ are given by $\scal{\tau_1 , \sigma,p,[{\rm pt}]}_1qt_p$ and $\scal{\tau_1 ,\sigma,\gamma,[{\rm pt}]}_1qt_{\gamma}$, where the latter term can only be potentially non-zero for $n = 4$.
To deal with $\scal{\tau_1 , \sigma,p,[{\rm pt}]}_1qt_p$ we note that $q_*p^*\sigma = \zeta \otimes \omega$ in $H^*({\mathbb P}^1 \times \Q^{2n-6})$ because $\sigma$ is a linear combination of classes of the form $\sigma_{\alpha_i + \alpha_j}$ with $i,j \geq 2$.
Hence, since $q_*p^*{p} = \zeta \otimes 1$, we get $\scal{\tau_1 , \sigma,p,[{\rm pt}]}_1 = 0$.
To deal with $\scal{\tau_1 ,\sigma,\gamma,[{\rm pt}]}_1qt_{\gamma}$ in the case $n = 4$ we note that $q_*p^*\sigma = \zeta \otimes (\xi_1 + \xi'_1)$ and $q_*p^*\gamma = 1 \otimes (\xi_1 - \xi'_1)$ and using Corollary \ref{corollary:quantum-to-classical-simply-laced}, we have
\begin{equation*}
\scal{\tau_1 ,\sigma,\gamma,[{\rm pt}]}^X_1 = 2 \scal{1, \zeta \otimes (\xi_1 +\xi'_1), 1 \otimes (\xi_1 - \xi'_1) }_0^{\mathbb{P}^1 \times \mathrm{Q}^2} = 0.
\end{equation*}
\end{enumerate}
Therefore, we see that the linear term of $h^{\star 2n-4}$ is given by $2qt_p$ so that the linear term in $(-1)^{n-1}E_{n-2}(h,p)$ is given by
\begin{equation}\label{eq:proof-BQH-linear-term-h-power-2n-4}
(-1)^{n-1}2qt_p.
\end{equation}
Combining \eqref{eq:proof-BQH-linear-term-gamma-gamma} and
\eqref{eq:proof-BQH-linear-term-h-power-2n-4}, we obtain that the linear term of $(\gamma^2 + (-1)^{n-1}E_{n-2}(h,p))^\star$ is given by $(-1)^{n-1}4 qt_p$.
\end{proof}
As explained in the introduction we obtain the following results.
\begin{corollary}\label{corollary:regularity-of-BQH-type-D}
The ring $\BQH(\OG(2,2n))$ is regular.
\end{corollary}
\begin{corollary}\label{corollary:semisimplicity-of-BQH-type-D}
$\BQH(\OG(2,2n))$ is generically semisimple.
\end{corollary}
\begin{proof}
The proof is identical to the proof of \cite[Corollary 6.5]{CMMPS}, but as it
is very short, we repeat it here for the reader's convenience.
Recall the deformation picture \eqref{Eq.: BQH family} and note that $\BQH(X)_{\eta}$
is a localisation of $\BQH(X)$. Since $\BQH(X)$ is regular by Corollary
\ref{corollary:regularity-of-BQH-type-D} and since regularity is preserved under
localisation, we conclude that $\BQH(X)_{\eta}$ is also a regular ring. This implies
that $\BQH(X)_{\eta}$ is a product of finite field extensions of $K((t_0, \dots, t_s))$,
which was our definition of semisimplicity.
\end{proof}
\section{Types $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$}
\label{section:type-E}
We use the following notation
\begin{equation*}
\alpha(a_1, \dots, a_n) \coloneqq \sum_{i = 1}^n a_i \alpha_i.
\end{equation*}
Given a cohomology class $\gamma \in H^*(X, {\mathbb Q})$ we define the coefficients $\coeff_{\sigma^{w}}(\gamma)$ via the decomposition of $\gamma$ as a linear combination of Schubert classes
\begin{equation*}
\gamma = \sum_{w \in \mathrm{W}^{\mathrm{P}}} \coeff_{\sigma^{w}}(\gamma) \, \sigma^{w}.
\end{equation*}
In terms of the dual Schubert classes we have
\begin{equation*}
\coeff_{\sigma^{w}}(\gamma) = \deg_{X} \left( \gamma \cup \left( \sigma^{w} \right)^\vee \right).
\end{equation*}
Recall the point-line incidence variety $\Z(X)$ and the maps $p : \Z(X) \to X$ and
$q : \Z(X) \to \mathrm{F}(X)$ from \eqref{eq:universal-family-of-lines-simply-laced}.
Recall also the inclusion $i : \mathrm{F}_x := p^{-1}(x) \subset \mathrm{F}(X)$.
For $\sigma \in H^*(X,{\mathbb Q})$, we set $\bar{\sigma} := i^*q_*p^*\sigma$.
For $\mathrm{G}$ simply laced and $\mathrm{P}$ maximal, we have $X = \mathrm{G}/\mathrm{P}$, $\mathrm{F}(X) = \mathrm{G}/\mathrm{Q}$
and $\Z(X) = \mathrm{G}/\mathrm{R}$ with $\mathrm{R} = \mathrm{P} \cap \mathrm{Q}$. Furthermore, as explained in
Lemma \ref{lemma:pull-push} and Lemma \ref{lemma:intersections-of-schubert-varieties-with-Fx},
we have $\overline{\sigma_X^w} = i^*q_*p^*\sigma_X^w = \sigma_{\mathrm{F}_x}^{ws_\mathrm{P}}$
if $1 < w$ and $ws_\mathrm{P} \leq w_\mathrm{P}^\mathrm{Q}$, and $\overline{\sigma_X^w} = 0$ otherwise.
\begin{remark}
\label{remark:simple-fact-proofs}
Let $\gamma \in H^*(X,{\mathbb Q})$ and write $\gamma = \sum_{w \in \mathrm{W}^{\mathrm{P}}} \coeff_{\sigma^{w}_X}(\gamma) \, \sigma^{w}_X$.
Then we have
\begin{equation*}
\bar{\gamma} = \sum_{w \in \mathrm{W}^{\mathrm{P}}} \coeff_{\sigma_X^{w}}(\gamma) \, \overline{\sigma^{w}} =
\sum_{w \in \mathrm{W}^{\mathrm{P}},\ w \neq 1,\ w\leq w_\mathrm{P}^\mathrm{Q}} \coeff_{\sigma_X^{w}}(\gamma) \, \sigma_{\mathrm{F}_x}^{ws_\mathrm{P}}.
\end{equation*}
In particular, for $w \in \mathrm{W}^\mathrm{P}$ such that $1 < w$ and $ws_\mathrm{P} \leq w_\mathrm{P}^\mathrm{Q}$, we have
\begin{equation*}
\coeff_{\sigma^{w}_{X}}(\gamma) = \coeff_{\sigma^{ws_\mathrm{P}}_{\F_x}}(\bar{\gamma}).
\end{equation*}
We will use this formula several times in what follows.
\end{remark}
In the following, we will freely use the Littlewood-Richardson rules coming from Jeu de Taquin for (co)minuscule varieties, proved in \cite{ThYo}, and for coadjoint varieties $X$ and classes of degree at most $\dim X/2$, proved in \cite{ChPe} and \cite{ChPeLR}. These rules have been implemented by the second author (see \cite{LRCalc}), and we will use this software for several computations.
\subsection{Type $\mathrm{E}_6$}
Let $X = \mathrm{E}_6/\mathrm{P}_2$ be the coadjoint variety of type $\mathrm{E}_6$. It is shown in \cite{ChPe} that the cohomology classes
\begin{equation*}
\begin{aligned}
& h = \sigma^{s_2}, \\
& s = \sigma^{s_3s_4s_2}, \\
& t = \sigma^{s_1s_3s_4s_2}
\end{aligned}
\end{equation*}
generate the cohomology ring $H^*(X, {\mathbb Q})$. The quantum parameter is of degree
\begin{equation*}
\deg(q) = 11.
\end{equation*}
According to \cite{ChPe} we have the following presentation of the small quantum cohomology of $X$.
\begin{proposition}[{\cite[Proposition 5.4]{ChPe}}]
\label{proposition:E6-presentation-small-QH}
The small quantum cohomology $\QH(X)$ is the quotient of the polynomial ring $K[h,s,t]$ modulo the ideal generated by
\begin{equation*}
h^8 - 6h^5s + 3h^4t + 9h^2s^2 - 12hst + 6t^2,
\end{equation*}
\begin{equation*}
h^9 - 4h^6s + 3h^5t + 3h^3s^2 - 6h^2st + 2s^3,
\end{equation*}
\begin{equation*}
-97h^{12} + 442h^9s - 247h^8t - 507h^6s^2 + 624h^5st - 156h^2s^2t + 48hq.
\end{equation*}
\end{proposition}
\vspace{10pt}
As already mentioned in the introduction, the above proposition implies Theorem \ref{theorem:introduction-uniform-presentation-for-QH} in type $\mathrm{E}_6$.
\begin{remark}
A curious reader can verify the validity of the relations of Proposition \ref{proposition:E6-presentation-small-QH} using \cite{LRCalc}.
\end{remark}
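Independently of \cite{LRCalc}, a weaker but fully self-contained consistency check is homogeneity: for the grading $\deg h = 1$, $\deg s = 3$, $\deg t = 4$ (the lengths of the indexing Weyl group elements) and $\deg q = 11$, each of the three relations above is homogeneous, of degrees $8$, $9$ and $12$ respectively. The following sketch (plain Python; the encoding of monomials as exponent vectors of $(h,s,t,q)$ is ours) verifies the grading only, not the coefficients.

```python
# Exponent vectors of the monomials of the three relations of QH(E6/P2),
# in the variables (h, s, t, q); coefficients are irrelevant for homogeneity.
WTS = (1, 3, 4, 11)   # deg h, deg s, deg t, deg q

relations = [
    # h^8 - 6 h^5 s + 3 h^4 t + 9 h^2 s^2 - 12 h s t + 6 t^2
    [(8, 0, 0, 0), (5, 1, 0, 0), (4, 0, 1, 0),
     (2, 2, 0, 0), (1, 1, 1, 0), (0, 0, 2, 0)],
    # h^9 - 4 h^6 s + 3 h^5 t + 3 h^3 s^2 - 6 h^2 s t + 2 s^3
    [(9, 0, 0, 0), (6, 1, 0, 0), (5, 0, 1, 0),
     (3, 2, 0, 0), (2, 1, 1, 0), (0, 3, 0, 0)],
    # -97 h^12 + 442 h^9 s - 247 h^8 t - 507 h^6 s^2 + 624 h^5 s t
    #   - 156 h^2 s^2 t + 48 h q
    [(12, 0, 0, 0), (9, 1, 0, 0), (8, 0, 1, 0), (6, 2, 0, 0),
     (5, 1, 1, 0), (2, 2, 1, 0), (1, 0, 0, 1)],
]

def degrees(mons):
    """Set of weighted degrees occurring among the monomials."""
    return {sum(e * w for e, w in zip(m, WTS)) for m in mons}

# Each relation is homogeneous, of degrees 8, 9 and 12 respectively.
print([degrees(r) for r in relations])  # [{8}, {9}, {12}]
```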
\begin{lemma}
\label{lemma:description-fat-point-E6}
The small quantum cohomology $\QH(X)$ is not semisimple. Its unique non-semisimple factor is supported at the point $h = s = t = 0$ and is isomorphic to the Jacobian algebra of the isolated hypersurface singularity of type $\mathrm{E}_6$.
\end{lemma}
\begin{proof}
From the presentation in Proposition \ref{proposition:E6-presentation-small-QH}
it is clear that the point $h = s = t = 0$ belongs to $\Spec \QH(X)$ and that the
Zariski tangent space to $\Spec \QH(X)$ at this point is of dimension $2$.
Hence, since $\Spec \QH(X)$ is of dimension $0$, it is non-reduced at the point
$h = s = t = 0$.
To determine the algebra structure we set $h = 0$ in the presentation of
Proposition \ref{proposition:E6-presentation-small-QH} and see that there is a
unique solution; the algebra structure at that point is given by $K[s,t]/(6t^2, 2s^3)$,
which is exactly the Jacobian algebra of the isolated hypersurface singularity of type $\mathrm{E}_6$.
To finish the proof it is enough to show that the vector space dimension of the
locus $h \neq 0$ is equal to $66$ and that this locus is reduced. This can be done
easily using SageMath~\cite{sagemath}.
\end{proof}
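The reduction at $h = 0$ in the proof above is easy to replicate by machine. The following sketch (plain Python, an illustration only, independent of the SageMath computation cited in the proof) encodes the three relations of Proposition \ref{proposition:E6-presentation-small-QH} as exponent--coefficient dictionaries, checks that setting $h = 0$ leaves exactly the ideal $(6t^2, 2s^3)$, and counts the monomial basis of $K[s,t]/(t^2, s^3)$:

```python
# Relations of QH(E6/P2) as {(exp_h, exp_s, exp_t, exp_q): coefficient}.
R1 = {(8, 0, 0, 0): 1, (5, 1, 0, 0): -6, (4, 0, 1, 0): 3,
      (2, 2, 0, 0): 9, (1, 1, 1, 0): -12, (0, 0, 2, 0): 6}
R2 = {(9, 0, 0, 0): 1, (6, 1, 0, 0): -4, (5, 0, 1, 0): 3,
      (3, 2, 0, 0): 3, (2, 1, 1, 0): -6, (0, 3, 0, 0): 2}
R3 = {(12, 0, 0, 0): -97, (9, 1, 0, 0): 442, (8, 0, 1, 0): -247,
      (6, 2, 0, 0): -507, (5, 1, 1, 0): 624, (2, 2, 1, 0): -156,
      (1, 0, 0, 1): 48}

def set_h_zero(rel):
    """Kill every monomial containing h (first exponent > 0)."""
    return {m: c for m, c in rel.items() if m[0] == 0}

# Only 6 t^2 and 2 s^3 survive; the third relation vanishes entirely.
assert set_h_zero(R1) == {(0, 0, 2, 0): 6}
assert set_h_zero(R2) == {(0, 3, 0, 0): 2}
assert set_h_zero(R3) == {}

# Over a field (6 t^2, 2 s^3) = (t^2, s^3), a monomial ideal, so the
# quotient K[s,t]/(t^2, s^3) has monomial basis {s^i t^j : i < 3, j < 2}.
basis = [(i, j) for i in range(3) for j in range(2)]
print(len(basis))  # 6 = Milnor number of the E6 singularity
```

The count $6$ is the Milnor number of the $\mathrm{E}_6$ singularity, as expected.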
\bigskip
\begin{lemma}
In $H^*(X, {\mathbb Q})$ we have
\begin{equation}\label{eq:E6-t^2-via-Schubert}
t^{\cup 2} = \sigma_{\alpha(010110)}
\end{equation}
\begin{equation}\label{eq:E6-s^2-via-Schubert}
s^{\cup 2} = \sigma_{\alpha(011210)} + 2 \sigma_{\alpha(011111)} + \sigma_{\alpha(111110)}
\end{equation}
\begin{equation}\label{eq:E6-s^3-via-Schubert}
s^{\cup 3} = 6 \sigma_{\alpha(010100)} + 2 \sigma_{\alpha(000011)} + 9 \sigma_{\alpha(000110)}
+ 6 \sigma_{\alpha(001100)} + \sigma_{\alpha(101000)}
\end{equation}
\end{lemma}
\begin{proof}
This is a routine calculation using the Littlewood-Richardson rule for $\mathrm{E}_6/\mathrm{P}_2$. We used \cite{LRCalc} for this.
\end{proof}
\begin{proposition}
\label{proposition:E6-4-point-GW-invariants}
We have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = 1
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \coeff_{\sigma_{\alpha(010111)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \coeff_{\sigma_{\alpha(010110)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], s, s, \gamma}_1 = \coeff_{\sigma_{\alpha(011111)}}(\gamma) + \coeff_{\sigma_{\alpha(011210)}}(\gamma)
\end{equation*}
\end{proposition}
\begin{proof}
The proof is a combination of Corollary \ref{corollary:quantum-to-classical-simply-laced} and some computations in the classical cohomology ring of $\G(3,6) = \mathrm{A}_5/\mathrm{P}_3$.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}],t,t,t}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = \deg_{\G(3,6)}(\bar{t} \cup \bar{t} \cup \bar{t}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{t} \cup \bar{t} \cup \bar{t} = [{\rm pt}].
\end{equation*}
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, t, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \deg_{\G(3,6)}(\bar{h} \cup \bar{t} \cup \bar{\gamma})
= \coeff_{\bar{t}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{t}^\vee = \sigma_{\G(3,6)}^{s_2s_3s_4s_1s_2s_3}.
\end{equation*}
Lifting $\bar{t}^\vee$ to $\mathrm{E}_6/\mathrm{P}_2$ by appending $s_2$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_6/\mathrm{P}_2}^{s_3s_4s_5s_1s_3s_4s_2} = \sigma_{\mathrm{E}_6/\mathrm{P}_2}^{\alpha(010111)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_6/\mathrm{P}_2$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, s, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \deg_{\G(3,6)}(\bar{h} \cup \bar{s} \cup \bar{\gamma})
= \coeff_{\bar{s}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{s}^\vee = \sigma_{\G(3,6)}^{s_5s_2s_3s_4s_1s_2s_3}.
\end{equation*}
Lifting $\bar{s}^\vee$ to $\mathrm{E}_6/\mathrm{P}_2$ by appending $s_2$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_6/\mathrm{P}_2}^{s_6s_3s_4s_5s_1s_3s_4s_2} = \sigma_{\mathrm{E}_6/\mathrm{P}_2}^{\alpha(010110)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_6/\mathrm{P}_2$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], s, s, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], s, s, \gamma}_1 = \deg_{\G(3,6)}(\bar{s} \cup \bar{s} \cup \bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{s} \cup \bar{s} = \sigma_{\G(3,6)}^{s_3s_4s_2s_3} + \sigma_{\G(3,6)}^{s_4s_1s_2s_3}
\end{equation*}
and
\begin{equation*}
(\bar{s} \cup \bar{s})^\vee = \sigma_{\G(3,6)}^{s_5s_4s_1s_2s_3} + \sigma_{\G(3,6)}^{s_3s_4s_1s_2s_3}.
\end{equation*}
Lifting $(\bar{s} \cup \bar{s})^\vee$ to $\mathrm{E}_6/\mathrm{P}_2$ by appending $s_2$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert classes
\begin{equation*}
\sigma_{\mathrm{E}_6/\mathrm{P}_2}^{s_6s_5s_1s_3s_4s_2} + \sigma_{\mathrm{E}_6/\mathrm{P}_2}^{s_4s_5s_1s_3s_4s_2} = \sigma_{\mathrm{E}_6/\mathrm{P}_2}^{\alpha(011210)} + \sigma_{\mathrm{E}_6/\mathrm{P}_2}^{\alpha(011111)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_6/\mathrm{P}_2$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\end{proof}
\begin{proposition}
\label{proposition:E6-presentation-big-QH}
In $\BQH_{s,t}(X)$, we have the following equalities modulo ${\mathfrak{t}}{\mathfrak{m}}$:
\begin{equation}\label{eq:presentation-big-QH-E6-relation-1}
h^8 - 6h^5s + 3h^4t + 9h^2s^2 - 12hst + 6t^2 \equiv 2 qt_{\delta_2} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\begin{equation}\label{eq:presentation-big-QH-E6-relation-2}
h^9 - 4h^6s + 3h^5t + 3h^3s^2 - 6h^2st + 2s^3 \equiv -2 qt_{\delta_1} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\begin{multline}\label{eq:presentation-big-QH-E6-relation-3}
-97h^{12} + 442h^9s - 247h^8t - 507h^6s^2 + \\ + 624h^5st - 156h^2s^2t + 48hq \equiv 0 \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{multline}
\end{proposition}
\begin{proof}
For degree reasons \eqref{eq:presentation-big-QH-E6-relation-3} holds automatically. Thus, we only need to take care of \eqref{eq:presentation-big-QH-E6-relation-1}
and \eqref{eq:presentation-big-QH-E6-relation-2}.
\medskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E6-relation-1}.} The left hand side of \eqref{eq:presentation-big-QH-E6-relation-1} is of the form $-h \sigma + 6 t^2$,
where $\sigma \in H^{14}(X,{\mathbb Q})$ is a linear combination of Schubert classes. Thus, in the classical (and also in the small quantum) cohomology of $X$ we have the equality $h \sigma = 6 t^2$. By the hard Lefschetz theorem, multiplication by $h$, considered as a map $H^{14}(X,{\mathbb Q}) \to H^{16}(X,{\mathbb Q})$, is injective. Hence, there exists a unique class $\sigma \in H^{14}(X,{\mathbb Q})$ such that $h \sigma = 6 t^2$.
Using \eqref{eq:E6-t^2-via-Schubert} and \cite{LRCalc} we compute
\begin{equation*}
\sigma = 4 \sigma_{\alpha(010111)} + 2 \sigma_{\alpha(011110)} - 4 \sigma_{\alpha(001111)} - 2 \sigma_{\alpha(111100)}
+ 2 \sigma_{\alpha(101110)}.
\end{equation*}
For degree reasons the linear term of $(-h \sigma + 6 t^2)^{\star}$ is given by
\begin{equation*}
\left( - \scal{h, \sigma, t, [{\rm pt}]}_1 + 6 \scal{t, t, t, [{\rm pt}]}_1 \right) q t_{\delta_2}.
\end{equation*}
Applying Proposition \ref{proposition:E6-4-point-GW-invariants} we conclude that the linear term is in fact equal to
$2 q t_{\delta_2}$,
and the claim is proved.
\medskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E6-relation-2}.} The left hand
side of \eqref{eq:presentation-big-QH-E6-relation-2} is of the form $-h\gamma + 2s^3$,
where $\gamma = \gamma_1 + \gamma_2$ with $\gamma_1 \in H^{16}(X,{\mathbb Q})$ and
$\gamma_2 \in {\mathbb Q} \, qt_{\delta_2}$. Since we are only interested in the linear terms
of $(-h\gamma + 2s^3)^\star$, we proceed as if we had $\gamma_2 = 0$. By the hard Lefschetz
theorem, the classical multiplication by $h$, considered as a map $H^{16}(X,{\mathbb Q}) \to H^{18}(X,{\mathbb Q})$,
is injective. Hence, there exists a unique class $\gamma \in H^{16}(X,{\mathbb Q})$ such that $h \cup \gamma = 2s^{\cup 3}$.
Using \eqref{eq:E6-s^3-via-Schubert} and \cite{LRCalc} we compute
\begin{equation*}
\gamma = 8 \sigma_{\alpha(010110)} + 4 \sigma_{\alpha(011100)} + 4 \sigma_{\alpha(000111)} +
6 \sigma_{\alpha(001110)} + 2 \sigma_{\alpha(101100)}.
\end{equation*}
Setting $\rho = s^{\cup 2}$, we can rewrite
\begin{equation*}
-h\gamma + 2s^3 = -h\gamma + 2 s\rho.
\end{equation*}
For degree reasons the linear term of $(-h\gamma + 2 s\rho)^{\star}$ is given by
\begin{equation*}
\left(- \scal{h, \gamma, s, [{\rm pt}]}_1 + 2 \scal{s, \rho, s, [{\rm pt}]}_1 \right) q t_{\delta_1}.
\end{equation*}
Applying Proposition \ref{proposition:E6-4-point-GW-invariants} and \eqref{eq:E6-s^2-via-Schubert} we conclude that the linear term is in fact equal to
$-2q t_{\delta_1}$,
and the claim is proved.
\end{proof}
\begin{corollary}\label{corollary:E6-regularity-of-BQH}
$\BQH(X)$ is a regular ring.
\end{corollary}
\begin{corollary}\label{corollary:E6-semisimplicity-of-BQH}
$\BQH(X)$ is generically semisimple.
\end{corollary}
\bigskip
\subsection{Type $\mathrm{E}_7$}
Let $X = \mathrm{E}_7/\mathrm{P}_1$ be the coadjoint variety of type $\mathrm{E}_7$ and recall that in this case for the quantum parameter we have
\begin{equation*}
\deg(q) = 17.
\end{equation*}
It is shown in \cite{ChPe} that the cohomology classes
\begin{equation*}
\begin{aligned}
& h = \sigma^{s_1}, \\
& s = \sigma^{s_2s_4s_3s_1}, \\
& t = \sigma^{s_7s_6s_5s_4s_3s_1}
\end{aligned}
\end{equation*}
generate the cohomology ring $H^*(X, {\mathbb Q})$, and the following presentation of the small quantum cohomology is given there.
\begin{proposition}[{\cite[Proposition 5.6]{ChPe}}]
\label{proposition:E7-presentation-small-QH}
The small quantum cohomology $\QH(X)$ is the quotient of the polynomial ring $K[h,s,t]$ modulo the ideal generated by
\begin{equation*}
h^{12} - 6h^8s - 4h^6t + 9h^4s^2 + 12h^2st - s^3 + 3t^2,
\end{equation*}
\begin{equation*}
h^{14} - 6h^{10}s - 2h^8t + 9h^6s^2 + 6h^4st - h^2s^3 + 3s^2t,
\end{equation*}
\begin{equation*}
232h^{18} - 1444h^{14}s - 456h^{12}t + 2508h^{10}s^2 + 1520h^8st - 988h^6s^3 + 133h^2s^4 + 36hq.
\end{equation*}
\end{proposition}
\vspace{10pt}
As already mentioned in the introduction, the above proposition implies Theorem \ref{theorem:introduction-uniform-presentation-for-QH} in type $\mathrm{E}_7$.
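As in type $\mathrm{E}_6$, a self-contained sanity check is that the relations above are homogeneous for the grading $\deg h = 1$, $\deg s = 4$, $\deg t = 6$ (the lengths of the indexing Weyl group elements) and $\deg q = 17$, of total degrees $12$, $14$ and $18$. A sketch in the same (hypothetical) exponent-vector encoding as for type $\mathrm{E}_6$:

```python
# Monomials of the three relations of QH(E7/P1) as exponent vectors
# of (h, s, t, q); homogeneity does not depend on the coefficients.
WTS = (1, 4, 6, 17)   # deg h, deg s, deg t, deg q

relations = [
    # h^12 - 6 h^8 s - 4 h^6 t + 9 h^4 s^2 + 12 h^2 s t - s^3 + 3 t^2
    [(12, 0, 0, 0), (8, 1, 0, 0), (6, 0, 1, 0), (4, 2, 0, 0),
     (2, 1, 1, 0), (0, 3, 0, 0), (0, 0, 2, 0)],
    # h^14 - 6 h^10 s - 2 h^8 t + 9 h^6 s^2 + 6 h^4 s t - h^2 s^3 + 3 s^2 t
    [(14, 0, 0, 0), (10, 1, 0, 0), (8, 0, 1, 0), (6, 2, 0, 0),
     (4, 1, 1, 0), (2, 3, 0, 0), (0, 2, 1, 0)],
    # 232 h^18 - 1444 h^14 s - 456 h^12 t + 2508 h^10 s^2 + 1520 h^8 s t
    #   - 988 h^6 s^3 + 133 h^2 s^4 + 36 h q
    [(18, 0, 0, 0), (14, 1, 0, 0), (12, 0, 1, 0), (10, 2, 0, 0),
     (8, 1, 1, 0), (6, 3, 0, 0), (2, 4, 0, 0), (1, 0, 0, 1)],
]

def degrees(mons):
    """Set of weighted degrees occurring among the monomials."""
    return {sum(e * w for e, w in zip(m, WTS)) for m in mons}

print([degrees(r) for r in relations])  # [{12}, {14}, {18}]
```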
\begin{lemma}
\label{lemma:description-fat-point-E7}
The small quantum cohomology $\QH(X)$ is not semisimple. Its unique non-semisimple factor is supported at the point $h = s = t = 0$ and is isomorphic to the Jacobian algebra of the isolated hypersurface singularity of type $\mathrm{E}_7$.
\end{lemma}
\begin{proof}
The proof is identical to the proof of Lemma \ref{lemma:description-fat-point-E6}.
\end{proof}
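Again the reduction at $h = 0$ can be checked by machine; this time the ideal $(-s^3 + 3t^2,\ 3s^2t)$ is not monomial, so one genuine rewriting step is needed. A (grevlex) Gr\"obner basis of $(s^3 - 3t^2, s^2t)$ is $\{s^3 - 3t^2,\ s^2t,\ t^3\}$, since the only nontrivial S-polynomial is $t(s^3 - 3t^2) - s \cdot s^2t = -3t^3$; this is a hand computation of ours, not taken from the references. The sketch below (plain Python, illustration only) verifies the reduction and counts the standard monomials:

```python
# Relations of QH(E7/P1) as {(exp_h, exp_s, exp_t, exp_q): coefficient}.
R1 = {(12, 0, 0, 0): 1, (8, 1, 0, 0): -6, (6, 0, 1, 0): -4,
      (4, 2, 0, 0): 9, (2, 1, 1, 0): 12, (0, 3, 0, 0): -1, (0, 0, 2, 0): 3}
R2 = {(14, 0, 0, 0): 1, (10, 1, 0, 0): -6, (8, 0, 1, 0): -2,
      (6, 2, 0, 0): 9, (4, 1, 1, 0): 6, (2, 3, 0, 0): -1, (0, 2, 1, 0): 3}
R3 = {(18, 0, 0, 0): 232, (14, 1, 0, 0): -1444, (12, 0, 1, 0): -456,
      (10, 2, 0, 0): 2508, (8, 1, 1, 0): 1520, (6, 3, 0, 0): -988,
      (2, 4, 0, 0): 133, (1, 0, 0, 1): 36}

def set_h_zero(rel):
    """Kill every monomial containing h (first exponent > 0)."""
    return {m: c for m, c in rel.items() if m[0] == 0}

# At h = 0 the ideal becomes (-s^3 + 3 t^2, 3 s^2 t).
assert set_h_zero(R1) == {(0, 3, 0, 0): -1, (0, 0, 2, 0): 3}
assert set_h_zero(R2) == {(0, 2, 1, 0): 3}
assert set_h_zero(R3) == {}

def normal_form(i, j):
    """Normal form of s^i t^j modulo the Groebner basis
    {s^3 - 3 t^2, s^2 t, t^3}, returned as {(i, j): coeff}."""
    if i >= 2 and j >= 1:          # s^2 t -> 0
        return {}
    if j >= 3:                     # t^3 -> 0
        return {}
    if i >= 3:                     # s^3 -> 3 t^2
        out = {}
        for m, c in normal_form(i - 3, j + 2).items():
            out[m] = out.get(m, 0) + 3 * c
        return out
    return {(i, j): 1}             # already reduced

# Standard monomials = monomials equal to their own normal form.
std = [(i, j) for i in range(6) for j in range(6)
       if normal_form(i, j) == {(i, j): 1}]
print(len(std))  # 7 = Milnor number of the E7 singularity
```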
\begin{lemma}
In $H^*(X, {\mathbb Q})$ we have
\begin{equation}\label{eq:E7-t^2-via-Schubert}
t^{\cup 2} = \sigma_{\alpha(0112100)}
\end{equation}
\begin{equation}\label{eq:E7-st-via-Schubert}
s \cup t = \sigma_{\alpha(1112110)}
\end{equation}
\begin{equation}\label{eq:E7-s^2-via-Schubert}
s^{\cup 2} = \sigma_{\alpha(0112221)} + \sigma_{\alpha(1112211)} + \sigma_{\alpha(1122111)} + \sigma_{\alpha(1122210)}
\end{equation}
\begin{multline}\label{eq:E7-s^3-via-Schubert}
s^{\cup 3} = 3 \sigma_{\alpha(0011111)} + 3 \sigma_{\alpha(0101111)} + 11 \sigma_{\alpha(0111110)} + \\
+ 9 \sigma_{\alpha(0112100)} + 4 \sigma_{\alpha(1011110)} + 6 \sigma_{\alpha(1111100)}
\end{multline}
\begin{multline}\label{eq:E7-ts^2-via-Schubert}
t \cup s^{\cup 2} = 2 \sigma_{\alpha(0001110)} + 5 \sigma_{\alpha(0011100)} + 3 \sigma_{\alpha(0101100)} + \\
+ 3 \sigma_{\alpha(0111000)} + 2 \sigma_{\alpha(1011000)}
\end{multline}
\end{lemma}
\begin{proof}
This is a routine calculation using the Littlewood-Richardson rule for $\mathrm{E}_7/\mathrm{P}_1$. We used \cite{LRCalc} for this.
\end{proof}
\begin{proposition}
\label{proposition:E7-4-point-GW-invariants}
We have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = 0
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \coeff_{\sigma_{\alpha(1011111)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \coeff_{\sigma_{\alpha(1111000)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], s, t, \gamma}_1 = \coeff_{\sigma_{\alpha(1122111)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], s, s, \gamma}_1 = \coeff_{\sigma_{\alpha(1112110)}}(\gamma)
\end{equation*}
\end{proposition}
\begin{proof}
The proof is a combination of Corollary \ref{corollary:quantum-to-classical-simply-laced} and some computations in the classical cohomology ring of $\OG(6,12)$. As such it is very similar to the proof of Proposition \ref{proposition:E6-4-point-GW-invariants} and so we try to be very concise here.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}],t,t,t}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = \deg_{\OG(6,12)}(\bar{t} \cup \bar{t} \cup \bar{t}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{t} \cup \bar{t} \cup \bar{t} = 0,
\end{equation*}
and we get the desired vanishing.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, t, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \deg_{\OG(6,12)}(\bar{h} \cup \bar{t} \cup \bar{\gamma})
= \coeff_{\bar{t}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{t}^\vee = \sigma_{\OG(6,12)}^{s_5s_4s_6s_3s_4s_5s_2s_3s_4s_6}.
\end{equation*}
Lifting $\bar{t}^\vee$ to $\mathrm{E}_7/\mathrm{P}_1$ by appending $s_1$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_7/\mathrm{P}_1}^{s_2s_4s_5s_6s_3s_4s_5s_2s_4s_3s_1} = \sigma_{\mathrm{E}_7/\mathrm{P}_1}^{\alpha(1011111)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_1$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, s, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \deg_{\OG(6,12)}(\bar{h} \cup \bar{s} \cup \bar{\gamma})
= \coeff_{\bar{s}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{s}^\vee = \sigma_{\OG(6,12)}^{s_3s_4s_6s_2s_3s_4s_5s_1s_2s_3s_4s_6}.
\end{equation*}
Lifting $\bar{s}^\vee$ to $\mathrm{E}_7/\mathrm{P}_1$ by appending $s_1$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_7/\mathrm{P}_1}^{s_5s_6s_7s_4s_5s_6s_3s_4s_5s_2s_4s_3s_1} = \sigma_{\mathrm{E}_7/\mathrm{P}_1}^{\alpha(1111000)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_1$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], s, t, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], s, t, \gamma}_1 = \deg_{\OG(6,12)}(\bar{s} \cup \bar{t} \cup \bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{s} \cup \bar{t} = \sigma_{\OG(6,12)}^{s_6s_4s_5s_1s_2s_3s_4s_6}
\end{equation*}
and
\begin{equation*}
(\bar{s} \cup \bar{t})^\vee = \sigma_{\OG(6,12)}^{s_3s_4s_5s_2s_3s_4s_6}.
\end{equation*}
Lifting $(\bar{s} \cup \bar{t})^\vee$ to $\mathrm{E}_7/\mathrm{P}_1$ by appending $s_1$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_7/\mathrm{P}_1}^{s_5s_6s_4s_5s_2s_4s_3s_1} = \sigma_{\mathrm{E}_7/\mathrm{P}_1}^{\alpha(1122111)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_1$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], s, s, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], s, s, \gamma}_1 = \deg_{\OG(6,12)}(\bar{s} \cup \bar{s} \cup \bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\bar{s} \cup \bar{s} = \sigma_{\OG(6,12)}^{s_4s_5s_2s_3s_4s_6}
\end{equation*}
and
\begin{equation*}
(\bar{s} \cup \bar{s})^\vee = \sigma_{\OG(6,12)}^{s_6s_3s_4s_5s_1s_2s_3s_4s_6}.
\end{equation*}
Lifting $(\bar{s} \cup \bar{s})^\vee$ to $\mathrm{E}_7/\mathrm{P}_1$ by appending $s_1$ to the Weyl group element (and keeping in mind the relabelling of the vertices of the Dynkin diagrams), we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_7/\mathrm{P}_1}^{s_7s_5s_6s_3s_4s_5s_2s_4s_3s_1} = \sigma_{\mathrm{E}_7/\mathrm{P}_1}^{\alpha(1112110)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_1$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\end{proof}
\begin{proposition}
\label{proposition:E7-presentation-big-QH}
In $\BQH_{s,t}(X)$, we have the following equalities modulo ${\mathfrak{t}}{\mathfrak{m}}$:
\begin{equation}\label{eq:presentation-big-QH-E7-relation-1}
h^{12} - 6h^8s - 4h^6t + 9h^4s^2 + 12h^2st - s^3 + 3t^2 \equiv - qt_{\delta_2} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\begin{equation}\label{eq:presentation-big-QH-E7-relation-2}
h^{14} - 6h^{10}s - 2h^8t + 9h^6s^2 + 6h^4st - h^2s^3 + 3s^2t \equiv qt_{\delta_1} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\begin{multline}\label{eq:presentation-big-QH-E7-relation-3}
232h^{18} - 1444h^{14}s - 456h^{12}t + 2508h^{10}s^2 + \\
+ 1520h^8st - 988h^6s^3 + 133h^2s^4 + 36hq \equiv 0 \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{multline}
\end{proposition}
\begin{proof}
For degree reasons \eqref{eq:presentation-big-QH-E7-relation-3} holds automatically. Thus, we only need to take care of \eqref{eq:presentation-big-QH-E7-relation-1} and \eqref{eq:presentation-big-QH-E7-relation-2}.
\medskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E7-relation-1}.}
The left hand side of \eqref{eq:presentation-big-QH-E7-relation-1} is of the form $h \sigma - s^3 + 3 t^2$,
where $\sigma \in H^{22}(X, {\mathbb Q})$ is a linear combination of Schubert classes.
Thus, in the classical (and in the small quantum) cohomology of $X$ we have the equality $h \sigma = s^3 - 3 t^2$. By the hard Lefschetz theorem, multiplication by $h$, considered as a map $H^{22}(X,{\mathbb Q}) \to H^{24}(X,{\mathbb Q})$, is injective. Hence, there exists a unique class $\sigma \in H^{22}(X,{\mathbb Q})$ such that $h \sigma = s^3 - 3 t^2$.
Using \eqref{eq:E7-t^2-via-Schubert}, \eqref{eq:E7-s^3-via-Schubert}, and \cite{LRCalc} we compute
\begin{equation*}
\sigma = 3 \sigma_{\alpha(0111111)} + 4 \sigma_{\alpha(0112110)} + 4 \sigma_{\alpha(1111110)} + 2 \sigma_{\alpha(1112100)}.
\end{equation*}
Setting $\tau = s^2$, we can rewrite the left hand side of \eqref{eq:presentation-big-QH-E7-relation-1} as $h\sigma - s\tau + 3t^2$. Thus, we now need to compute $(h\sigma - s\tau + 3t^2)^\star$. For degree reasons the linear term of $(h\sigma - s\tau + 3t^2)^\star$ is given by
\begin{equation*}
\left( \scal{h,\sigma, [{\rm pt}] ,t}_1 - \scal{s,\tau, [{\rm pt}] ,t}_1 + 3\scal{t,t, [{\rm pt}] ,t}_1 \right) q t_{\delta_2}.
\end{equation*}
Applying \eqref{eq:E7-s^2-via-Schubert} and Proposition \ref{proposition:E7-4-point-GW-invariants} we conclude that the linear term is indeed equal to $-qt_{\delta_2}$, and the claim is proved.
\bigskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E7-relation-2}.}
The left hand side of \eqref{eq:presentation-big-QH-E7-relation-2} is of the form $-h\gamma + 3s^2t$,
where $\gamma \in H^{26}(X, {\mathbb Q})$ is a linear combination of Schubert classes.
Thus, in the classical (and in the small quantum) cohomology of $X$ we have the equality $h \gamma = 3 s^2t$. By the hard Lefschetz theorem, multiplication by $h$, considered as a map $H^{26}(X,{\mathbb Q}) \to H^{28}(X,{\mathbb Q})$, is injective. Hence, there exists a unique class $\gamma \in H^{26}(X,{\mathbb Q})$ such that $h \gamma = 3 s^2t$.
Using \eqref{eq:E7-ts^2-via-Schubert} and \cite{LRCalc} we compute
\begin{equation*}
\gamma = 4 \sigma_{\alpha(0011110)} + 2 \sigma_{\alpha(0101110)} + 7 \sigma_{\alpha(0111100)} + 4 \sigma_{\alpha(1011100)}
+ 2\sigma_{\alpha(1111000)}.
\end{equation*}
Setting $\eta = st = \sigma_{\alpha(1112110)}$, we can rewrite the left hand side of \eqref{eq:presentation-big-QH-E7-relation-2} as $- h\gamma + 3s\eta$. Thus, we now need to compute $(- h\gamma + 3s\eta)^\star$. For degree reasons the linear term of $(- h\gamma + 3s\eta)^\star$ is given by
\begin{equation*}
\left( -\scal{h,\gamma, [{\rm pt}] ,s}_1 + 3 \scal{s,\eta, [{\rm pt}] ,s}_1 \right) q t_{\delta_1}.
\end{equation*}
Applying \eqref{eq:E7-st-via-Schubert} and Proposition \ref{proposition:E7-4-point-GW-invariants} we conclude that the linear term is indeed equal to $qt_{\delta_1}$, and the claim is proved.
\end{proof}
\begin{corollary}\label{corollary:E7-regularity-of-BQH}
$\BQH(X)$ is a regular ring.
\end{corollary}
\begin{corollary}\label{corollary:E7-semisimplicity-of-BQH}
$\BQH(X)$ is generically semisimple.
\end{corollary}
\bigskip
\subsection{Type $\mathrm{E}_8$}
Let $X = \mathrm{E}_8/\mathrm{P}_8$ be the coadjoint variety of type $\mathrm{E}_8$. It is shown in \cite{ChPe} that the cohomology classes
\begin{equation*}
\begin{aligned}
& h = \sigma^{s_8}, \\
& s = \sigma^{s_2s_4s_5s_6s_7s_8}, \\
& t = \sigma^{s_6s_5s_4s_3s_2s_4s_5s_6s_7s_8}
\end{aligned}
\end{equation*}
generate the classical and the small quantum cohomology rings, and presentations of both are given there. Note that here we have
\begin{equation*}
\deg(q) = 29.
\end{equation*}
\begin{proposition}[{\cite[Proposition 5.7]{ChPe}}]
\label{proposition:E8-presentation-small-QH}
The small quantum cohomology $\QH(X)$ is the quotient of the polynomial ring $K[h,s,t]$ modulo the ideal generated by
\begin{equation*}
h^{14}s + 6h^{10}t - 3h^8s^2 - 12h^4st - 10h^2s^3 + 3t^2,
\end{equation*}
\begin{equation*}
29h^{24} - 120h^{18}s + 15h^{14}t + 45h^{12}s^2 - 30h^8st + 180h^6s^3 - 30h^2s^2t + 5s^4,
\end{equation*}
\begin{multline*}
- 86357 h^{30} + 368652 h^{24}s - 44640 h^{20}t - 189720 h^{18}s^2 + \\
+ 94860 h^{14}st - 473680 h^{12}s^3 + 74400h^8s^2t - 1240 h^2s^3t + 60hq.
\end{multline*}
\end{proposition}
\vspace{10pt}
As already mentioned in the introduction, the above proposition implies Theorem \ref{theorem:introduction-uniform-presentation-for-QH} in type $\mathrm{E}_8$.
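Once more, homogeneity gives a quick self-contained check: with $\deg h = 1$, $\deg s = 6$, $\deg t = 10$ (the lengths of the indexing Weyl group elements) and $\deg q = 29$, the three relations are homogeneous of degrees $20$, $24$ and $30$. A sketch in the same (hypothetical) exponent-vector encoding as before:

```python
# Monomials of the three relations of QH(E8/P8) as exponent vectors
# of (h, s, t, q); homogeneity does not depend on the coefficients.
WTS = (1, 6, 10, 29)  # deg h, deg s, deg t, deg q

relations = [
    # h^14 s + 6 h^10 t - 3 h^8 s^2 - 12 h^4 s t - 10 h^2 s^3 + 3 t^2
    [(14, 1, 0, 0), (10, 0, 1, 0), (8, 2, 0, 0),
     (4, 1, 1, 0), (2, 3, 0, 0), (0, 0, 2, 0)],
    # 29 h^24 - 120 h^18 s + 15 h^14 t + 45 h^12 s^2 - 30 h^8 s t
    #   + 180 h^6 s^3 - 30 h^2 s^2 t + 5 s^4
    [(24, 0, 0, 0), (18, 1, 0, 0), (14, 0, 1, 0), (12, 2, 0, 0),
     (8, 1, 1, 0), (6, 3, 0, 0), (2, 2, 1, 0), (0, 4, 0, 0)],
    # -86357 h^30 + 368652 h^24 s - 44640 h^20 t - 189720 h^18 s^2
    #   + 94860 h^14 s t - 473680 h^12 s^3 + 74400 h^8 s^2 t
    #   - 1240 h^2 s^3 t + 60 h q
    [(30, 0, 0, 0), (24, 1, 0, 0), (20, 0, 1, 0), (18, 2, 0, 0),
     (14, 1, 1, 0), (12, 3, 0, 0), (8, 2, 1, 0), (2, 3, 1, 0),
     (1, 0, 0, 1)],
]

def degrees(mons):
    """Set of weighted degrees occurring among the monomials."""
    return {sum(e * w for e, w in zip(m, WTS)) for m in mons}

print([degrees(r) for r in relations])  # [{20}, {24}, {30}]
```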
\begin{lemma}
\label{lemma:description-fat-point-E8}
The small quantum cohomology $\QH(X)$ is not semisimple. Its unique non-semisimple factor is supported at the point $h = s = t = 0$ and is isomorphic to the Jacobian algebra of the isolated hypersurface singularity of type $\mathrm{E}_8$.
\end{lemma}
\begin{proof}
The proof is identical to the proof of Lemma \ref{lemma:description-fat-point-E6}.
\end{proof}
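The reduction at $h = 0$ is again mechanical: only $3t^2$ and $5s^4$ survive, and $(t^2, s^4)$ is a monomial ideal, so no Gr\"obner step is needed. A sketch (plain Python, illustration only; the dictionary encoding is ours):

```python
# Relations of QH(E8/P8) as {(exp_h, exp_s, exp_t, exp_q): coefficient}.
R1 = {(14, 1, 0, 0): 1, (10, 0, 1, 0): 6, (8, 2, 0, 0): -3,
      (4, 1, 1, 0): -12, (2, 3, 0, 0): -10, (0, 0, 2, 0): 3}
R2 = {(24, 0, 0, 0): 29, (18, 1, 0, 0): -120, (14, 0, 1, 0): 15,
      (12, 2, 0, 0): 45, (8, 1, 1, 0): -30, (6, 3, 0, 0): 180,
      (2, 2, 1, 0): -30, (0, 4, 0, 0): 5}
R3 = {(30, 0, 0, 0): -86357, (24, 1, 0, 0): 368652, (20, 0, 1, 0): -44640,
      (18, 2, 0, 0): -189720, (14, 1, 1, 0): 94860, (12, 3, 0, 0): -473680,
      (8, 2, 1, 0): 74400, (2, 3, 1, 0): -1240, (1, 0, 0, 1): 60}

def set_h_zero(rel):
    """Kill every monomial containing h (first exponent > 0)."""
    return {m: c for m, c in rel.items() if m[0] == 0}

# Only 3 t^2 and 5 s^4 survive; the third relation vanishes entirely.
assert set_h_zero(R1) == {(0, 0, 2, 0): 3}
assert set_h_zero(R2) == {(0, 4, 0, 0): 5}
assert set_h_zero(R3) == {}

# (t^2, s^4) is a monomial ideal, so K[s,t]/(t^2, s^4) has monomial
# basis {s^i t^j : i < 4, j < 2}.
basis = [(i, j) for i in range(4) for j in range(2)]
print(len(basis))  # 8 = Milnor number of the E8 singularity
```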
\begin{lemma}
In $H^*(X, {\mathbb Q})$ we have the equalities
\begin{multline}\label{eq:E8-t^2-via-Schubert}
t^{\cup 2} = 4 \sigma_{\alpha(01122111)} + 7 \sigma_{\alpha(01122210)} + 8 \sigma_{\alpha(11221110)} + \\
+ 16 \sigma_{\alpha(11222100)} + 2 \sigma_{\alpha(11121111)} + 14 \sigma_{\alpha(11122110)}
\end{multline}
\begin{multline}\label{eq:E8-s^3-via-Schubert}
s^{\cup 3} = 6 \sigma_{\alpha(01122221)} + 58 \sigma_{\alpha(11222111)} + 85 \sigma_{\alpha(11222210)} + \\
+ 111 \sigma_{\alpha(11232110)} + 34 \sigma_{\alpha(11122211)} + 25 \sigma_{\alpha(12232100)}
\end{multline}
\begin{multline}\label{eq:E8-s^4-via-Schubert}
s^{\cup 4} = 1668 \sigma_{\alpha(01011110)} + 3957 \sigma_{\alpha(01121000)} + 5600 \sigma_{\alpha(01111100)} + \\
+ 432 \sigma_{\alpha(00011111)} + 2256 \sigma_{\alpha(00111110)} + \\
+ 2888 \sigma_{\alpha(11111000)} + 2048 \sigma_{\alpha(10111100)}
\end{multline}
\end{lemma}
\begin{proof}
This is a routine calculation using the Littlewood-Richardson rule for $\mathrm{E}_8/\mathrm{P}_8$. We used \cite{LRCalc} for this.
\end{proof}
\begin{proposition}
\label{proposition:E8-4-point-GW-invariants}
We have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = 2
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \coeff_{\sigma_{\alpha(01122211)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \coeff_{\sigma_{\alpha(01011111)}}(\gamma)
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], s, s, \gamma}_1 = 2\coeff_{\sigma_{\alpha(11122211)}}(\gamma) + 2\coeff_{\sigma_{\alpha(11222111)}}(\gamma)
\end{equation*}
\end{proposition}
\begin{proof}
The proof is a combination of Corollary \ref{corollary:quantum-to-classical-simply-laced} and some computations in the classical cohomology ring of the Freudenthal variety $\mathrm{E}_7/\mathrm{P}_7$. As such it is very similar to the proofs of Propositions
\ref{proposition:E6-4-point-GW-invariants} and \ref{proposition:E7-4-point-GW-invariants}
and so we try to be very concise here.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}],t,t,t}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}],t,t,t}_1 = \deg_{\mathrm{E}_7/\mathrm{P}_7}(\bar{t} \cup \bar{t} \cup \bar{t}).
\end{equation*}
Using \cite{LRCalc} we compute
\begin{equation*}
\bar{t} \cup \bar{t} \cup \bar{t} = 2 \, [{\rm pt}].
\end{equation*}
Hence, we get
\begin{equation*}
\deg_{\mathrm{E}_7/\mathrm{P}_7}(\bar{t} \cup \bar{t} \cup \bar{t}) = 2.
\end{equation*}
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, t, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, t, \gamma}_1 = \deg_{\mathrm{E}_7/\mathrm{P}_7}(\bar{h} \cup \bar{t} \cup \bar{\gamma}) = \coeff_{\bar{t}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_7$ we get
\begin{equation*}
\bar{t}^\vee = \sigma_{\mathrm{E}_7/\mathrm{P}_7}^{s_7s_1s_3s_4s_5s_6s_2s_4s_5s_3s_4s_2s_1s_3s_4s_5s_6s_7}.
\end{equation*}
Lifting $\bar{t}^\vee$ to $\mathrm{E}_8/\mathrm{P}_8$ by appending $s_8$ to the Weyl group element we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_8/\mathrm{P}_8}^{s_7s_1s_3s_4s_5s_6s_2s_4s_5s_3s_4s_2s_1s_3s_4s_5s_6s_7s_8} = \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{\alpha(01122211)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_8/\mathrm{P}_8$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, s, \gamma}_1$.}
By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \deg_{\mathrm{E}_7/\mathrm{P}_7}(\bar{h} \cup \bar{s} \cup \bar{\gamma}) = \coeff_{\bar{s}^\vee}(\bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_7$ we get
\begin{equation*}
\bar{s}^\vee = \sigma_{\mathrm{E}_7/\mathrm{P}_7}^{s_3s_4s_5s_6s_7s_1s_3s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7}.
\end{equation*}
Lifting $\bar{s}^\vee$ to $\mathrm{E}_8/\mathrm{P}_8$ by appending $s_8$ to the Weyl group element we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{E}_8/\mathrm{P}_8}^{s_3s_4s_5s_6s_7s_1s_3s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7s_8} = \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{\alpha(01011111)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_8/\mathrm{P}_8$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], s, s, \gamma}_1$.} By Corollary \ref{corollary:quantum-to-classical-simply-laced} we have
\begin{equation*}
\scal{[{\rm pt}],s,s,\gamma}_1 = \deg_{\mathrm{E}_7/\mathrm{P}_7}(\bar{s} \cup \bar{s} \cup \bar{\gamma}).
\end{equation*}
Using \cite{LRCalc} on $\mathrm{E}_7/\mathrm{P}_7$ we get
\begin{equation*}
\left( \bar{s} \cup \bar{s} \right)^\vee =
2 \sigma_{\mathrm{E}_7/\mathrm{P}_7}^{s_6s_7s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7} + 2 \sigma_{\mathrm{E}_7/\mathrm{P}_7}^{s_7s_3s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7}.
\end{equation*}
Lifting each summand of $\left( \bar{s} \cup \bar{s} \right)^\vee$ to $\mathrm{E}_8/\mathrm{P}_8$ by appending $s_8$ to the Weyl group element we get
\begin{equation*}
\begin{aligned}
& \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{s_6s_7s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7s_8} = \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{\alpha(11222111)} \\
& \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{s_7s_3s_4s_5s_6s_2s_4s_5s_3s_4s_1s_3s_2s_4s_5s_6s_7s_8} = \sigma_{\mathrm{E}_8/\mathrm{P}_8}^{\alpha(11122211)}
\end{aligned}
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{E}_8/\mathrm{P}_8$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\end{proof}
\begin{proposition}
\label{proposition:E8-presentation-big-QH}
In $\BQH_{s,t}(X)$ we have the following equalities modulo ${\mathfrak{t}}{\mathfrak{m}}$:
\begin{multline}\label{eq:presentation-big-QH-E8-relation-1}
h^{14}s + 6h^{10}t - 3h^8s^2 - 12h^4st
- 10h^2s^3 + 3t^2 \equiv - qt_{\delta_2} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{multline}
\begin{multline}\label{eq:presentation-big-QH-E8-relation-2}
29h^{24} - 120h^{18}s + 15h^{14}t + 45h^{12}s^2 - 30h^8st + \\
+ 180h^6s^3 - 30h^2s^2t + 5s^4 \equiv - qt_{\delta_1} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{multline}
\begin{multline}\label{eq:presentation-big-QH-E8-relation-3}
- 86357 h^{30} + 368652 h^{24}s - 44640 h^{20}t - \\
- 189720 h^{18}s^2 + 94860 h^{14}st - 473680 h^{12}s^3 + \\
+ 74400h^8s^2t - 1240 h^2s^3t + 60hq \equiv 0 \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{multline}
\end{proposition}
\begin{proof}
For degree reasons \eqref{eq:presentation-big-QH-E8-relation-3} holds automatically. Thus, we only need to take care of \eqref{eq:presentation-big-QH-E8-relation-1} and
\eqref{eq:presentation-big-QH-E8-relation-2}.
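As a consistency check, note that all three congruences are homogeneous for the (complex) cohomological grading. The identifications $h \sigma = 3 t^{\cup 2}$ with $\sigma \in H^{38}(X,{\mathbb Q})$ and $h \cup \gamma = 5 s^{\cup 4}$ with $\gamma \in H^{46}(X,{\mathbb Q})$ used below give
\begin{equation*}
\deg h = 1, \qquad \deg s = 6, \qquad \deg t = 10,
\end{equation*}
and homogeneity of \eqref{eq:presentation-big-QH-E8-relation-3} forces $\deg q = 29$, the Fano index of $X$. With these degrees every monomial of \eqref{eq:presentation-big-QH-E8-relation-1}, \eqref{eq:presentation-big-QH-E8-relation-2} and \eqref{eq:presentation-big-QH-E8-relation-3} has degree $20$, $24$ and $30$, respectively; for instance, $\deg(h^{14}s) = 14 + 6 = 20 = \deg(t^2)$ and $\deg(hq) = 1 + 29 = 30 = \deg(h^{30})$.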
\medskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E8-relation-1}.} The left-hand side of \eqref{eq:presentation-big-QH-E8-relation-1} is of the form $- h \sigma + 3 t^2$,
where $\sigma \in H^{38}(X, {\mathbb Q})$ is a linear combination of Schubert classes.
Thus, in the classical (and in the small quantum) cohomology of $X$ we have the equality $h \sigma = 3 t^2$. By the hard Lefschetz theorem, multiplication by $h$, considered as a map $H^{38}(X,{\mathbb Q}) \to H^{40}(X,{\mathbb Q})$, is injective. Hence, there exists a unique class $\sigma \in H^{38}(X,{\mathbb Q})$ such that $h \sigma = 3 t^{\cup 2}$. Using \eqref{eq:E8-t^2-via-Schubert} and \cite{LRCalc} we compute
\begin{multline*}
\sigma = 7 \sigma_{\alpha(01122211)} + \sigma_{\alpha(11221111)} + 23 \sigma_{\alpha(11222110)} + \\
+ 25 \sigma_{\alpha(11232100)} + 5 \sigma_{\alpha(11122111)} + 14 \sigma_{\alpha(11122210)}.
\end{multline*}
For degree reasons the linear term of $(- h \sigma + 3 t^2)^\star$ is given by
\begin{equation*}
\left( -\scal{h,\sigma, [{\rm pt}] ,t}_1 + 3\scal{t,t, [{\rm pt}] ,t}_1 \right) q t_{\delta_2}.
\end{equation*}
Applying Proposition \ref{proposition:E8-4-point-GW-invariants} we conclude that the linear term is indeed equal to $-qt_{\delta_2}$, and the claim is proved.
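The factorization of the left-hand side as $-h\sigma + 3t^2$ is elementary polynomial algebra and can be scripted; the following sketch (not part of the \cite{LRCalc} computations, and treating $h$, $s$, $t$ as formal commuting variables without imposing the cohomology relations) verifies it.

```python
import sympy as sp

h, s, t = sp.symbols('h s t')

# left-hand side of relation (1), as a polynomial in h, s, t
lhs = h**14*s + 6*h**10*t - 3*h**8*s**2 - 12*h**4*s*t - 10*h**2*s**3 + 3*t**2

# the class sigma with lhs = -h*sigma + 3*t**2; its Schubert
# expansion is what is computed via [LRCalc] in the text
sigma = -(h**13*s + 6*h**9*t - 3*h**7*s**2 - 12*h**3*s*t - 10*h*s**3)

assert sp.expand(lhs - (-h*sigma + 3*t**2)) == 0
```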
\bigskip
\noindent \emph{Relation \eqref{eq:presentation-big-QH-E8-relation-2}.} The left-hand side of
\eqref{eq:presentation-big-QH-E8-relation-2} is of the form $-h\gamma + 5s^4$, where
$\gamma = \gamma_1 + \gamma_2$ with $\gamma_1 \in H^{46}(X,{\mathbb Q})$ and $\gamma_2 \in H^6(X,{\mathbb Q}) \cdot \, qt_{\delta_2}$.
Since we are only interested in the linear terms of $(-h\gamma + 5s^4)^\star$, we proceed as if we had
$\gamma_2 = 0$.
By the hard Lefschetz theorem, the classical multiplication by $h$, considered as a map $H^{46}(X,{\mathbb Q}) \to H^{48}(X,{\mathbb Q})$, is injective. Hence, there exists a unique class $\gamma \in H^{46}(X,{\mathbb Q})$ such that $h \cup \gamma = 5s^{\cup 4}$. Using \eqref{eq:E8-s^4-via-Schubert} and \cite{LRCalc} we compute
\begin{multline*}
\gamma = 921 \sigma_{\alpha(01011111)} + 12963 \sigma_{\alpha(01121100)} + \\
+ 7419 \sigma_{\alpha(01111110)} + 1239 \sigma_{\alpha(00111111)} + 6822 \sigma_{\alpha(11121000)} + \\
+ 7618 \sigma_{\alpha(11111100)} + 2622 \sigma_{\alpha(10111110)}.
\end{multline*}
Setting $\tau = s^3$ we can rewrite the left-hand side of \eqref{eq:presentation-big-QH-E8-relation-2} as $- h\gamma + 5s\tau$. For degree reasons the linear term of $(- h\gamma + 5s\tau)^\star$ is given by
\begin{equation*}
(-\scal{h,\gamma, [{\rm pt}] ,s}_1 + 5 \scal{s, \tau , [{\rm pt}] ,s}_1) q t_{\delta_1}.
\end{equation*}
Applying Proposition \ref{proposition:E8-4-point-GW-invariants} and \eqref{eq:E8-s^3-via-Schubert} we conclude that the linear term is indeed equal to $-qt_{\delta_1}$, and the claim is proved.
\end{proof}
\begin{corollary}\label{corollary:E8-regularity-of-BQH}
$\BQH(X)$ is a regular ring.
\end{corollary}
\begin{corollary}\label{corollary:E8-semisimplicity-of-BQH}
$\BQH(X)$ is generically semisimple.
\end{corollary}
\section{Type $\mathrm{F}_4$}
\label{section:type-F}
Let $X = \mathrm{F}_4/\mathrm{P}_4$ be the coadjoint variety of type $\mathrm{F}_4$. It is shown in \cite{ChPe} that the cohomology classes
\begin{equation*}
\begin{aligned}
& h = \sigma^{s_4}, \\
& s = \sigma^{s_1s_2s_3s_4}.
\end{aligned}
\end{equation*}
generate the classical and the small quantum cohomology rings, and their presentations are given. Note that here we have
\begin{equation*}
\deg(q) = 11.
\end{equation*}
\begin{proposition}[{\cite[Proposition 5.3]{ChPe}}]
\label{proposition:F4-presentation-small-QH}
The small quantum cohomology $\QH(X)$ is the quotient of the polynomial ring $K[h,s]$ modulo the ideal generated by
\begin{equation*}
2h^8 - 6h^4s + 3 s^2,
\end{equation*}
\begin{equation*}
- 11h^{12} + 26h^8 s + 3hq.
\end{equation*}
\end{proposition}
As already mentioned in the introduction, the above proposition implies Theorem \ref{theorem:introduction-uniform-presentation-for-QH} in type $\mathrm{F}_4$.
\begin{lemma}
\label{lemma:description-fat-point-F4}
The small quantum cohomology $\QH(X)$ is not semisimple. Its unique non-semisimple factor is supported at the point $h = s = 0$ and is isomorphic to the Jacobian algebra of the isolated hypersurface singularity of type $\mathrm{A}_2$.
\end{lemma}
\begin{proof}
The proof is identical to the proof of Lemma \ref{lemma:description-fat-point-E6}.
\end{proof}
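Let us nevertheless record the computation at the fat point, since in type $\mathrm{F}_4$ it fits in one line. At $q = 1$ the second relation of Proposition \ref{proposition:F4-presentation-small-QH} factors as $h \, (3 + 26 h^7 s - 11 h^{11}) = 0$, and the second factor is a unit in the local ring at $h = s = 0$. Hence $h$ vanishes in the localization, the first relation reduces to $3 s^2 = 0$, and the local algebra is
\begin{equation*}
{\mathbb C}\{h,s\}/(h, s^2) \simeq {\mathbb C}[x]/(x^2),
\end{equation*}
the Jacobian algebra of the $\mathrm{A}_2$-singularity $x^3$.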
Let us consider the embedding
\begin{equation*}
j \colon \mathrm{F}_4/\mathrm{P}_4 \to \mathrm{E}_6/\mathrm{P}_1
\end{equation*}
discussed in Section \ref{subsection:non-simply-laced-types}.
Since $\dim \mathrm{F}_4/\mathrm{P}_4 = 15$, by Proposition \ref{coho-coadj-nsl} we obtain that up
to degree $7$ the pullback map $j^* \colon H^*(\mathrm{E}_6/\mathrm{P}_1, {\mathbb Q}) \to H^*(\mathrm{F}_4/\mathrm{P}_4, {\mathbb Q})$
sends Schubert classes to Schubert classes and is bijective. Moreover,
Lemma~\ref{lemma:pullback-pushforward-weyl-group-elements} allows us to compute the
corresponding minimal length representatives explicitly. For example, the unique
``lift'' $\hat{s} \in H^8(\mathrm{E}_6/\mathrm{P}_1,{\mathbb Q})$ of $s$, \emph{i.e.} the unique
$\hat{s}$ such that $j^*\hat{s} = s$ is given by
\begin{equation}\label{eq:F4-lift-of-s-to-E6/P1}
\hat{s} = \sigma_{\mathrm{E}_6/\mathrm{P}_1}^{s_2s_4s_3s_1}.
\end{equation}
Recall the point-line incidence variety $\Z(\mathrm{E}_6/\mathrm{P}_1)$ and the maps $p : \Z(\mathrm{E}_6/\mathrm{P}_1) = \mathrm{E}_6/\mathrm{P}_{1,3} \to \mathrm{E}_6/\mathrm{P}_1$
and $q : \Z(\mathrm{E}_6/\mathrm{P}_1) \to \mathrm{F}(\mathrm{E}_6/\mathrm{P}_1) = \mathrm{E}_6/\mathrm{P}_3$ from \eqref{eq:universal-family-of-lines-simply-laced}.
Recall also the definition of $\mathrm{F}_x := p^{-1}(1.\mathrm{P}_1) = \mathrm{P}_1/\mathrm{P}_{1,3} \simeq \mathrm{D}_5/\mathrm{P}_4$
as explained in Remark \ref{rem-fx} and Example \ref{exam-fx}. We have an inclusion
$i : \mathrm{F}_x \subset \mathrm{F}(X)$ and for $\sigma \in H^*(\mathrm{E}_6/\mathrm{P}_1,{\mathbb Q})$, we set
$\bar{\sigma} := i^*q_*p^*\sigma$. For example, we will consider $\bar{\hat{s}} \in H^{6}(\mathrm{F}_x,{\mathbb Q})$.
We have
\begin{equation*}
\bar{\hat{s}} = \sigma_{\mathrm{D}_5/\mathrm{P}_4}^{s_5s_3s_2} \in H^6(\mathrm{D}_5/\mathrm{P}_4,{\mathbb Q}).
\end{equation*}
To compute the required $4$-point GW invariants we apply Proposition \ref{proposition:quantum-to-classical-non-simply-laced}.
\begin{proposition}
\label{proposition:F4-4-point-GW-invariants}
\begin{equation*}
\scal{[{\rm pt}], s, s, s}_1 = 1
\end{equation*}
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \coeff_{\sigma_{\alpha(0010)}}(\gamma)
\end{equation*}
\end{proposition}
\begin{proof}
By Proposition \ref{proposition:quantum-to-classical-non-simply-laced} the computation
of these GW invariants reduces to computations in the classical cohomology ring
of the Fano variety $\Fpt(\mathrm{E}_6/\mathrm{P}_1)$, which by Remark \ref{rem-fx} and Example \ref{exam-fx}
is isomorphic to $\mathrm{D}_5/\mathrm{P}_4$.
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}],s,s,s}_1$.} By Proposition
\ref{proposition:quantum-to-classical-non-simply-laced}(2) we have
\begin{equation*}
\scal{[{\rm pt}],s,s,s}_1 = \deg_{\mathrm{D}_5/\mathrm{P}_4} \left(\bar{\hat{s}} \cup \bar{\hat{s}} \cup \bar{\hat{s}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} \right).
\end{equation*}
Using \cite{LRCalc} we compute
\begin{equation*}
\bar{\hat{s}} \cup \bar{\hat{s}} \cup \bar{\hat{s}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} = \pointclass.
\end{equation*}
Hence, we get
\begin{equation*}
\deg_{\mathrm{D}_5/\mathrm{P}_4} \left(\bar{\hat{s}} \cup \bar{\hat{s}} \cup \bar{\hat{s}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} \right) = 1.
\end{equation*}
\bigskip
\noindent \emph{Invariant $\scal{[{\rm pt}], h, s, \gamma}_1$.} By the dimension axiom for
Gromov--Witten invariants, the invariant $\scal{[{\rm pt}], h, s, \gamma}_1$ vanishes unless
$\gamma \in H^{14}(\mathrm{F}_4/\mathrm{P}_4, {\mathbb Q})$. Applying Proposition \ref{coho-coadj-nsl} we see that
such a class $\gamma$ has a unique lift $\hat{\gamma} \in H^{14}(\mathrm{E}_6/\mathrm{P}_1, {\mathbb Q})$.
By Proposition \ref{proposition:quantum-to-classical-non-simply-laced}(2) we have
\begin{equation*}
\scal{[{\rm pt}], h, s, \gamma}_1 = \deg_{\mathrm{D}_5/\mathrm{P}_4} \left(\bar{\hat{s}} \cup \bar{\hat{\gamma}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} \right).
\end{equation*}
Using \cite{LRCalc} we get
\begin{equation*}
\left( \bar{\hat{s}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} \right)^\vee = \sigma_{\mathrm{D}_5/\mathrm{P}_4}^{s_3s_5s_1s_2s_3s_4}.
\end{equation*}
Lifting $\left( \bar{\hat{s}} \cup h_{\mathrm{D}_5/\mathrm{P}_4} \right)^\vee$ to $\mathrm{E}_6/\mathrm{P}_1$ by appending $s_1$ to the Weyl group element gives $\sigma_{\mathrm{E}_6/\mathrm{P}_1}^{s_4s_2s_6s_5s_4s_3s_1}$. Restricting it to $\mathrm{F}_4/\mathrm{P}_4$ using Proposition \ref{coho-coadj-nsl}(4) (and at each step keeping in mind the relabelling of vertices in the Dynkin diagrams) we get the Schubert class
\begin{equation*}
\sigma_{\mathrm{F}_4/\mathrm{P}_4}^{s_4s_2s_3s_1s_2s_3s_4} = \sigma_{\mathrm{F}_4/\mathrm{P}_4}^{\alpha(0010)},
\end{equation*}
where we used \cite{LRCalc} on $\mathrm{F}_4/\mathrm{P}_4$ to convert into the root labelling (see Section \ref{subsection:indexing-schubert-classes-by-roots}).
Now the claim follows by Remark \ref{remark:simple-fact-proofs}.
\end{proof}
\begin{proposition}
In $\BQH_{s}(X)$ we have the following equalities modulo ${\mathfrak{t}}{\mathfrak{m}}$:
\begin{equation}\label{eq:presentation-big-QH-F4-relation-1}
2h^8 - 6h^4s + 3 s^2 \equiv qt_{\delta_1} \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\begin{equation}\label{eq:presentation-big-QH-F4-relation-2}
- 11h^{12} + 26h^8 s + 3hq \equiv 0 \ ({\rm mod}\ {\mathfrak{t}}{\mathfrak{m}})
\end{equation}
\end{proposition}
\begin{proof}
For degree reasons \eqref{eq:presentation-big-QH-F4-relation-2} holds automatically
and, hence, we only need to consider \eqref{eq:presentation-big-QH-F4-relation-1}.
The left-hand side of \eqref{eq:presentation-big-QH-F4-relation-1} is of the
form $-h\sigma + 3s^2$, with $\sigma = - 2h^7 + 6h^3s$. Using \cite{LRCalc} we
easily compute
\begin{equation*}
\sigma = 2 \sigma^{s_4s_2s_3s_1s_2s_3s_4}_{\mathrm{F}_4/\mathrm{P}_4} + 2 \sigma^{s_3s_2s_3s_1s_2s_3s_4}_{\mathrm{F}_4/\mathrm{P}_4}
= 2\sigma_{\mathrm{F}_4/\mathrm{P}_4}^{\alpha(0010)} + 2 \sigma_{\mathrm{F}_4/\mathrm{P}_4}^{\alpha(0001)}.
\end{equation*}
For degree reasons the linear term of $(-h\sigma + 3s^2)^\star$ is of the form
\begin{equation*}
\left(- \scal{h,\sigma, [{\rm pt}] ,s}_1 + 3\scal{s,s, [{\rm pt}] ,s}_1 \right) q t_{\delta_1}.
\end{equation*}
Applying Proposition \ref{proposition:F4-4-point-GW-invariants} we conclude that the linear term is in fact equal to $q t_{\delta_1}$.
\end{proof}
\begin{corollary}\label{corollary:F4-regularity-of-BQH}
$\BQH(X)$ is a regular ring.
\end{corollary}
\begin{corollary}\label{corollary:F4-semisimplicity-of-BQH}
$\BQH(X)$ is generically semisimple.
\end{corollary}
\section{Relation to the unfoldings of $ADE$-singularities}
\label{section:singularity-theory}
The goal of this section is to strengthen the relation between the quantum cohomology of
coadjoint varieties and unfoldings of $ADE$-singularities given by Theorem \ref{theorem:introduction-fat-points-of-QH}.
To do that we need to pass to the world of \textsf{$F$-manifolds}. For a thorough treatment
of the background material on $F$-manifolds we refer to \cite{HeMa, Hertling}. To a
reader interested only in a very concise summary of the necessary facts we recommend~\cite[Section~7]{CMMPS}.
Let us briefly explain how we get an $F$-manifold from the big quantum cohomology of a
Fano variety $X$. Recall that the big quantum product is defined using the GW
potential \eqref{eq:GW-potential}. Since $X$ is a Fano variety, the dimension axiom
for GW invariants implies that the coefficients \eqref{eq:GW-potential-coefficients}
of the GW potential are polynomial in $q$. Hence, it makes sense to specialize
the formulas \eqref{eq:GW-potential}--\eqref{eq:small-quantum-product} to $q = 1$.
Viewing \eqref{eq:GW-potential} specialized at $q = 1$ as a formal power series
in $t_0, \dots, t_s$, we can ask whether this series has a
non-trivial convergence domain in ${\mathbb C}^{s+1} = H^*(X, {\mathbb C})$. In general, the answer to this question is not known.
Thus, we add a convergence assumption to our setup.
\begin{assumption}[\emph{Convergence assumption}]
The power series $\Phi_{q=1}$ converges in some open neighbourhood $M \subset {\mathbb C}^{s+1}$ of the origin.
\end{assumption}
Under this assumption \eqref{eq:big-quantum-product} endows $M$ with
the structure of an analytic Frobenius manifold. In particular, forgetting the metric,
we get an $F$-manifold structure on $M$. Below we work with the germ of
this $F$-manifold at the origin $t_0 = t_1 = \dots = t_s = 0$, which corresponds to the
small quantum cohomology at $q = 1$.
As in the introduction after setting $q = 1$ in $\QH(X^\mathrm{coad})$ we can consider
the finite scheme $\QS_{X^\mathrm{coad}} = \Spec (\QH(X^\mathrm{coad}))$ endowed with a morphism to
the affine line $\kappa \colon \QS_{X^\mathrm{coad}} \to {\mathbb A}^1$ given by the anticanonical
class. We define $\QS_{X^\mathrm{coad}}^\times$ as the preimage of ${\mathbb A}^1 \setminus \{0\}$
under $\kappa$ and $\QS^\circ_{X^\mathrm{coad}}$ as its complement. Since in our case the
anticanonical class is proportional to the hyperplane class $h$, the subschemes $\QS_{X^\mathrm{coad}}^\times$
and $\QS^\circ_{X^\mathrm{coad}}$ are the non-vanishing and the vanishing loci of $h$ considered
as a function on $\QS_{X^\mathrm{coad}}$.
\begin{theorem}
\label{theorem:F-manifolds}
Let $X^\mathrm{coad}$ be the coadjoint variety of a simple algebraic group $\mathrm{G}$ not of type $\mathrm{A}$ and
let $M_{\mathrm{G}}$ be the germ of the $F$-manifold of $\BQH(X^\mathrm{coad})$ as above.
\begin{enumerate}
\item
\label{item:theorem-F-manifolds-decomposition}
The $F$-manifold germ $M_{\mathrm{G}}$ decomposes into the direct product of
irreducible germs of $F$-manifolds
\begin{equation*}
M_{\mathrm{G}} = M_{\mathrm{G},0} \times \prod_{x \in \QS^\times_{X^\mathrm{coad}}} M_{\mathrm{G},x}
\end{equation*}
and $M_{\mathrm{G},0}$ corresponds to the unique fat point of $\QH(X^\mathrm{coad})$.
\medskip
\item
\label{item:theorem-F-manifolds-reduced-points}
The $F$-manifold germs $M_{\mathrm{G},x}$ for $x \in \QS^\times_{X^\mathrm{coad}}$ are
one-dimensional and isomorphic to the base space of a semiuniversal
unfolding of an isolated hypersurface singularity of type $\mathrm{A}_1$.
\medskip
\item
\label{item:theorem-F-manifolds-spectral-cover}
The spectral cover of $M_{\mathrm{G}}$ is smooth.
\medskip
\item
\label{item:theorem-F-manifolds-fat-point}
The $F$-manifold germ $M_{\mathrm{G},0}$ is isomorphic to the base space of
a semiuniversal unfolding of a simple hypersurface singularity of
Dynkin type $\mathrm{T}_{\mathrm{short}}(\mathrm{G})$.
\end{enumerate}
\end{theorem}
\begin{proof}
To prove (\ref{item:theorem-F-manifolds-decomposition}) we apply \cite[Theorem 2.11]{Hertling}
and Theorem \ref{theorem:introduction-fat-points-of-QH}.
To prove (\ref{item:theorem-F-manifolds-reduced-points}) we just note that the
base space of a semiuniversal unfolding of an isolated hypersurface singularity
of type $\mathrm{A}_1$ is the unique one-dimensional $F$-manifold germ up to isomorphism.
To prove (\ref{item:theorem-F-manifolds-spectral-cover}) it is enough to consider
the component $M_{\mathrm{G},0}$ and compute the rank of the Jacobian matrix as in
the proof of Theorem \ref{theorem:introduction-regularity-of-BQH}.
To prove (\ref{item:theorem-F-manifolds-fat-point}) we proceed as follows.
By (\ref{item:theorem-F-manifolds-spectral-cover}) and \cite[Theorem 5.6]{Hertling}
it follows that $M_{\mathrm{G},0}$ is isomorphic to the base space of a semiuniversal
unfolding of some isolated hypersurface singularity $g$. Thus, we just need to prove
that this singularity is stably right equivalent to a simple singularity $f$
of type $\mathrm{T}_{\mathrm{short}}(\mathrm{G})$. Here we denote by $g, f \in {\mathbb C} \{x_1, \dots, x_n \}$ the germs
of holomorphic functions defining these singularities. Since $f$ is
quasi-homogeneous, a Mather-Yau-type statement holds. Indeed, a theorem by Shoshitaishvili
\cite{Sho} (see \cite[Theorem 2.29]{GLS} for a more convenient reference) implies
that if the Jacobian algebras $M_{f}$ and $M_{g}$ are isomorphic as ${\mathbb C}$-algebras,
then $f$ and $g$ are right equivalent. Since by Theorem \ref{theorem:introduction-fat-points-of-QH} the Jacobian algebras of $f$ and $g$ are both isomorphic to the unique non-semisimple factor of $\QH(X^\mathrm{coad})$, the claim follows.
\end{proof}
\bibliographystyle{plain}
\section{Methods Summary}
\subsection{Device design and fabrication}
Devices were designed similarly to those in previous work\cite{mehta2017precise}. The grating designs exploit the reflection from the silicon substrate below to increase grating strength; a compromise between grating length and efficiency resulted in a designed radiation efficiency of 50\% in these devices (defined as upwards-radiated power as a fraction of input power). Emission into a single grating order is ensured by choosing a sufficiently large grating wavenumber $\beta_g = \frac{2\pi}{\Lambda}$ (with $\Lambda$ the grating period). The curvature of the grating lines is calculated to induce focusing along $\mathbf{\hat{y}}$ in Fig.~\ref{fig:2qdev}b, with designed focusing limited to ${\sim}3$ $\mu$m waists to relax sensitivity to misalignments in fabrication. $\Lambda$ is constant over the grating length in these devices to generate an approximately collimated beam along the trap axis.
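The single-order condition can be made concrete with the standard grating equation $n_{\rm clad}\sin\theta_m = n_{\rm eff} - m\lambda/\Lambda$ for the $m$-th diffraction order: for a sufficiently large grating wavenumber, only $m = 1$ yields $|\sin\theta_m| \le 1$. The sketch below illustrates the check; the effective index, cladding index and period are placeholder values for illustration only, not the device design parameters.

```python
# illustrative single-order check for a waveguide grating coupler;
# n_eff, n_clad and the period are placeholders, NOT the device design
wavelength = 0.729      # um
n_eff, n_clad = 1.60, 1.45
period = 0.45           # um; sets the grating wavenumber 2*pi/period

def sin_theta(m):
    # grating equation: n_clad * sin(theta_m) = n_eff - m * wavelength / period
    return (n_eff - m * wavelength / period) / n_clad

first, second = sin_theta(1), sin_theta(2)
print(abs(first) <= 1, abs(second) <= 1)   # prints: True False (only m = 1 radiates)
```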
\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{./WaferDevice.pdf}}
\vspace{0 cm}
\caption{\label{fig:wafer} \textbf{Design layout}. (a) Mask images for device fabrication across a 4-inch wafer. (b) Individual $2.2\times2.2$ cm$^2$ reticle, showing trap designs as well as independent optics test structures. (c) Trap design used in ion experiments presented here. In all images, SiN features are shown in red, the top trap electrode layer in gray, and the ground plane in blue. The 8 waveguides coupled to the fibre array are labeled at the left, with inputs 1 and 8 forming a loop structure used to align the fibre V-groove array. }
\end{figure}
Mode overlap simulations predict a 1 dB coupling loss between an optical mode with 5.4 $\mu$m mode-field diameter and the waveguide mode of the 25 nm-thick SiN waveguides. Waveguides formed of the thin SiN core in the fibre-coupling regions are strongly polarizing in our devices: owing to the metal and silicon a few $\mu$m from the core, only the mode with E-field polarized predominantly along the horizontal in Fig.~\ref{fig:gratings}b propagates in these regions with low loss. Polarization is maintained in the highly confined waveguides due to the significant index mismatch between the quasi-TE and quasi-TM modes in these regions, typical for integrated waveguides with non-square aspect ratios.
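The quoted 1 dB figure is consistent with the standard Gaussian mode-overlap estimate. In the sketch below, 5.4 $\mu$m is the mode-field diameter quoted above, while the waveguide-side value is a hypothetical number chosen only to illustrate a ${\sim}1$ dB loss.

```python
import math

def overlap_loss_dB(mfd1_um, mfd2_um):
    """Coupling loss between two aligned Gaussian modes of the given
    mode-field diameters (standard overlap-integral formula)."""
    w1, w2 = mfd1_um / 2, mfd2_um / 2
    eta = (2 * w1 * w2 / (w1**2 + w2**2)) ** 2
    return -10 * math.log10(eta)

# 3.3 um is a hypothetical waveguide-mode MFD, not a simulated quantity
print(round(overlap_loss_dB(5.4, 3.3), 2))   # prints: 1.01
```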
Grating designs were verified with 3D finite-difference-time-domain (FDTD) simulations (Lumerical FDTD solutions), and 3D simulations of the ion trap potentials were performed in COMSOL; designs were drawn in Cadence Virtuoso.
Various trap and optical designs were drawn on a $2.2\times2.2$ cm$^2$ reticle repeated across a 4-inch wafer (Extended Data Fig.~\ref{fig:wafer}). A design drawing of the trap die used in this work is shown in Extended Data Fig.~\ref{fig:wafer}c, together with a magnified view near one of the trap zones with 729 nm and 854/866 nm waveguides and gratings. The fabrication process used here allowed relative alignment between different layers within approximately $\pm$2 $\mu$m, and the e-beam grating lines were aligned within about 300 nm to the photolithographically-defined waveguides. To account for possible misalignments, grating lines in zones 1 and 3 were intentionally offset along $y$ by $\pm 300$ nm. All three zones were characterized in optical measurements; the 300 nm offset between grating and waveguide features results in a ${\sim}2.5$ $\mu$m beam shift along $y$ at the ion height between zones, in accordance with simulations.
Fabrication was performed by LioniX International \cite{worhoff2015triplex}. Devices were fabricated on silicon wafers with 5-10 Ohm-cm resistivity; the bottom 2.7 $\mu$m oxide layer was formed via thermal oxidation, with the waveguides formed in LPCVD silicon nitride layers. Following combined stepper photolithography and e-beam patterning of the waveguide/grating features, LPCVD SiO$_2$ cladding was deposited. The platinum layer was then deposited after planarization, and patterned via contact photolithography and ion beam etching. A thin (90 nm) Ta$_2$O$_5$ layer serves as an adhesion layer above the Pt. The upper PECVD SiO$_2$ isolation was patterned to allow vias between the two metal layers, after which the upper gold was sputtered and patterned via contact photolithography and a liftoff procedure to form the trap electrodes.
Diced wafers were delivered to ETH, where die were characterized for optical performance, or else cleaned, packaged and fibre-attached for ion trap experiments.
\subsection{Assembly and fibre attachment}
A standard eight-channel fibre V-groove array populated with Nufern S630-HP fibres spaced at 127 $\mu$m pitch was purchased from OZ Optics. Eight waveguides extend to the edge of the chip die to interface to the array, with the outer two forming a simple loop structure used to align the array by maximizing loop transmission. Standard non-polarization-maintaining fibre is used in this work, and in-line fibre polarizers control the input polarization.
Individual trap die were removed from the wafer, and a $1.5 \times 3.5$ mm$^2$ SiO$_2$ piece of 500 $\mu$m thickness was epoxied (EPO-TEK silver epoxy, model H21D) to the top surface of the trap chip at the fibre-coupling edge (Extended Data Fig.~\ref{fig:fibattach}, and visible near the fibre array in Fig.~\ref{fig:2qdev}a). This piece was previously coated in 300 nm Au via electron-beam evaporation on faces exposed to the ion, to minimize possible stray fields. Subsequently the die was coated in a protective photoresist layer and mounted in a custom-machined holder for polishing on standard fibre polishing paper. Polishing reduces roughness and associated loss at the coupling interface, and introduces convex curvature to the facet, important for the fibre attachment as described below. The silicon substrate was scribed with a diamond scribe to break through the native oxide; after dissolving the protective photoresist and attaching the die to the carrier PCB, a drop of silver epoxy was applied both to contact the silicon substrate to ground on the carrier PCB, as well as to ensure grounding of the metal on the SiO$_2$ piece near the fibre array. The trap electrodes were then wirebonded to the contacts on the carrier PCB.
\begin{figure}[t]
\centerline{\includegraphics[width=0.45\textwidth]{./FiberAttach.pdf}}
\vspace{0 cm}
\caption{\label{fig:fibattach} \textbf{Fibre attachment}. Fibre attach process schematic and measured single pass fibre-waveguide coupling losses inferred from a loop-back structure on-chip; solid line is a guide to the eye. }
\end{figure}
To attach the fibre array, the facet-polished and wirebonded sample was mounted on a temperature-controlled chuck fixed to a 3-axis tip/tilt/rotation stage and raised to 95$^\circ$C. The fibre array, held in a custom machined stainless-steel mount in a 3-axis translation stage, was then aligned and pressed against the chip in such a fashion that static friction between the two interfaces allowed coupling to be passively maintained for hours (see Extended Data Fig.~\ref{fig:fibattach}a for a schematic of this interface). Low-temperature curing, non-transparent, filled epoxy (EPO-TEK T7109-19) was then dropped at the two edges and allowed to wick into the gap; this epoxy was chosen as its flexibility should result in robustness to temperature changes. The convex curvature of the trap die at this interface ensures that the epoxy does not interact with the optical mode, allowing choice of mechanically suitable epoxies without restriction to transparent variants; minimizing exposure of the optical mode to the epoxy is also important in avoiding possible photo-effects which can be problematic even with transparent adhesives exposed to visible wavelengths. The total contact area used for the attachment is about $1\times3.5$ mm$^2$. After a few hours of curing with the sample stage at $95^\circ$C, the holder for the fibre array was loosened to allow the fibre array to move with the trap chip, the sample stage was cooled to room temperature, and no drop in transmission was observed. We have performed three attachments in this fashion to date, and found all robust to temperature drops. We note that in this work the epoxy was applied manually -- more precise automatic dispensing should allow more symmetric application, which may increase robustness to temperature changes further still.
The transmission of a single fibre-waveguide interface was measured at various wavelengths within the bandwidth of a CW Ti:sapphire laser, measuring the transmission from input fibre 1 to fibre 8 and subtracting the measured waveguide loss. The broadband nature of this coupling is shown in the measurements in Extended Data Fig.~\ref{fig:fibattach}b. Upon applying current to a heater on the carrier PCB near the chip and then cooling the assembly to 7K, we observed a small increase in loss from 1.4 dB to 2.4 dB at 729 nm, but saw no further changes after two additional temperature cycles between room temperature and 7K. These coupling losses compare favorably to previous published work on cryogenic fibre attachment \cite{mckenna2019cryogenic}. This method furthermore allows multiple channels aligned in parallel, and at shorter wavelengths with more demanding alignment tolerances.
\subsection{Waveguide/grating characterization and optical losses}
Structures consisting of waveguides of varying length were included in the reticle design to measure waveguide losses in the high-confinement mode used for on-chip routing. These were measured to be 2 dB/cm at 729 nm, and verified with measurements of quality factors of ring resonators also included in the design. Wavelength-dependent losses were measured from 729 to 920 nm using a tunable CW Ti:sapphire source; we observe a reduction in loss with increasing wavelength, consistent with scattering from sidewall roughness being the dominant contribution. We therefore expect improved lithography to reduce this propagation loss.
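The cross-check against ring-resonator quality factors uses the standard relation $Q_i = 2\pi n_g/(\lambda \alpha)$ between the power propagation loss $\alpha$ and the intrinsic quality factor. In the sketch below the group index $n_g$ is a placeholder assumption, not a measured device parameter.

```python
import math

# relate the measured 2 dB/cm propagation loss to an intrinsic ring Q;
# the group index n_g is a placeholder, not a device parameter
loss_dB_per_cm = 2.0
wavelength_m = 729e-9
n_g = 1.8                                  # hypothetical group index

# convert dB/cm to a power attenuation coefficient in 1/m
alpha_per_m = loss_dB_per_cm * 100 * math.log(10) / 10
Q_intrinsic = 2 * math.pi * n_g / (wavelength_m * alpha_per_m)
print(f"{Q_intrinsic:.2e}")
```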
Grating emission was profiled (Fig.~\ref{fig:gratings}c-e) using a microscope mounted on a vertical translation stage to image the emission at various heights above the chip\cite{mehta2017precise} onto a scientific CCD camera (Lumenera Infinity 3S-1UR), through a 0.95-NA objective (Olympus MPLANAPO50x). An RF modulation input was applied to the current drive of a 730 nm diode laser to reduce the coherence length to avoid reflection artifacts in the imaging system. The measured height of the beam focus above the top metal layer agrees with FDTD simulation to within the measurement accuracy of about 2 $\mu$m, and the measured emission angle in the $xz$-plane (Fig.~\ref{fig:gratings}d) matches design to within about 1$^\circ$. These measurements are conducted at room temperature; considering the thermo-optic coefficients of SiO$_2$ and SiN \cite{elshaari2016thermo}, we expect a change in emission angle upon cooling to 7K of roughly $0.2^\circ$, negligible in our geometry.
With 1.5 mW of 729 nm light input to the fibre outside the cryostat, we observe a 2.6 $\mu$s $\pi$-time, which is within 25\% of the 2.0 $\mu$s predicted from a first-principles calculation \cite{james1998quantum} accounting for the measured beam profile, and the total expected loss of 6.4 dB, arising from: 2.4 dB fibre-chip coupling, 1 dB of waveguide loss over the 5 mm path length, and 3 dB grating emission loss. In the $\pi$-time calculation we assume the ion sits at the maximum of the 3.7 $\mu$m beam waist along $\mathbf{\hat{y}}$, resulting in a lower-bound given the $\mu$m-scale misalignments in these devices. Similar $\pi$-times are observed in the two-ion Rabi oscillations in Fig.~\ref{fig:2ion}a; we note that in this data, the $P_{\uparrow\downarrow + \downarrow\uparrow}$ points are particularly helpful in bounding the Rabi frequency imbalance between the two ions, as a small imbalance results in these populations rising significantly above 0.5 for pulses much longer than the $\pi$-time.
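The quoted 6.4 dB total follows directly from the three contributions; the following sketch only restates the numbers given in the text and converts them to a delivered power.

```python
# loss budget for 729 nm delivery, values as quoted in the text
fibre_chip_dB = 2.4     # fibre-chip coupling after cooldown
waveguide_dB = 1.0      # 2 dB/cm over the ~5 mm on-chip path
grating_dB = 3.0        # 50% grating radiation efficiency

total_dB = fibre_chip_dB + waveguide_dB + grating_dB
power_at_ion_mW = 1.5 * 10 ** (-total_dB / 10)   # from 1.5 mW at the input fibre
print(round(total_dB, 1), round(power_at_ion_mW, 2))   # prints: 6.4 0.34
```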
In testing power handling of these waveguides, a maximum of 300 mW was coupled into the single-mode waveguides at $\lambda=729$ nm; we did not observe damage to the waveguides from these intensities. Future work will investigate power handling limits and self-phase modulation in such waveguides at visible wavelengths, relevant to large-scale architectures with power for multiple zones input to a single bus waveguide.
\subsection{Laser/cryogenic apparatus, and trap operation}
Diode lasers supply the various wavelengths used in the ion experiments. The 729 nm light used for coherent qubit control is stabilized to a high-finesse reference cavity resulting in a linewidth of order 100 Hz. Light transmitted through the cavity injection-locks a secondary diode, whose output passes through a tapered amplifier (TA) to a free-space double-pass acousto-optic modulator (AOM) setup for switching and frequency tuning, and subsequently a single-pass fibre-coupled AOM for pulse-shaping. Two tones are applied to this fibre AOM also for generating the two sidebands in the MS gate. In the current experimental path, light propagates along approximately nine meters of fibre before and inside the cryostat on which acoustic noise is not actively cancelled; vibrations along this length of fibre contribute to the coherence decay presented below (Extended Data Fig.~\ref{fig:ramsey}).
Though the photonics are functional at room temperature and the ion trap could be operated in ultra high vacuum without cryogenic cooling, we perform ion experiments in a cryogenic environment because of the ease of achieving low background pressure via cryopumping, the possibility for rapid trap installation/replacement (${\sim}2$ days), elimination of the need for baking of in-vacuum components, and potentially reduced electric-field noise near surfaces. The cryogenic apparatus used for ion trap experiments is similar to that described previously \cite{leupold2015bang}. Ions are loaded from neutral atom flux from a resistively heated oven within the 4K chamber, followed by two-step photoionization using beams at 423 nm and 389 nm \cite{lucas2004isotope}. The fibre feedthrough consists simply of bare 250 $\mu$m-diameter acrylate-coated fibres epoxied (Stycast 2850FT) into a hole drilled in a CF blank flange mounted on the outer vacuum chamber. In the experiments presented in this paper only the 729 nm beam for qubit control was delivered via integrated optics. Though waveguide structures for 854 nm/866 nm light were included on chip, these beams were routed together with the shorter wavelengths and delivered along conventional free-space beam paths for convenience given the previously existing optical setup.
DC voltage sets for axial confinement, stray-field compensation, and radial mode rotation were calculated using methods similar to those described in \cite{allcock2010implementation}, based on 3D field simulations including the effect of exposed dielectric in our design.
We used these voltage sets to compensate stray fields in 3D by minimizing first and second micromotion sidebands of ion motion using two separate 729 nm beampaths. Light emitted by the grating has a wavevector $\vec{k_{g} }= k_0(\mathbf{\hat{z}} \cos{\theta_g} + \mathbf{\hat{x}}\sin{\theta_g} )$, where $\theta_g = 36^\circ$ is the emission angle from vertical and $k_0 = 2\pi/\lambda$ is the free-space wavenumber; this beam hence allows detection of micromotion along $\hat{x}$ and $\hat{z}$. To compensate with sensitivity along $\hat{y}$, we used a second beam propagating in free-space along $\vec{k_{f} }= k_0(\mathbf{\hat{x}} \cos{\theta_f} + \mathbf{\hat{y}}\sin{\theta_f} )$ with $\theta_f = 45^\circ$. The calculated compensation voltage sets were used to minimize micromotion sidebands on both beam paths, which allowed us to achieve first micromotion sideband to carrier Rabi frequency ratios of $\Omega_\mathrm{MM}/\Omega_\mathrm{car} \approx 0.01$. Compensation fields applied are of order 300 V/m along $y$ and 1500 V/m along $z$; day-to-day drifts are of order 1\%. We note the requirement for beams along two different directions for compensation arises from the typical assumption that the optical field's gradient (or, for quadrupole transitions, curvature) is nonzero only along its propagation direction; this does not hold for transverse focusing to wavelength-scale spots, in which ions undergoing micromotion perpendicular to the beam propagation direction will experience amplitude modulation. Beams focused to wavelength-scale spots may thus allow sensitive compensation along all dimensions using a single beam.
The silicon partially exposed to the ion due to openings in the ground plane near the gratings was a potential concern, as previous work has seen that unshielded semiconductor can be problematic for stable trap operation \cite{mehta2014ion}; we note that the substrate was not highly conductive or grounded in that previous work, and the grounding implemented here may be a significant difference. We have observed charging effects that appear to be related to carrier dynamics in the silicon; in particular, we observe jumps in the motional frequency of order 10 kHz after Doppler cooling and state preparation with the 397 nm beam, which relax on millisecond timescales. These jumps were eliminated when a few mW of IR light were input into any of the waveguide channels on the chip. We attribute this to photoexcited carriers in the silicon from light scattered out of the waveguides and gratings, which diffuse through the substrate, increasing its conductivity and more effectively shorting it to ground, thereby suppressing stray fields originating from the substrate. The gate data taken in this paper were obtained with 4 mW of CW light at $\lambda = 785$ nm combined with the 729 nm pulses in the fibre coupled to input 3. It is possible that the use of heavily doped silicon would attenuate possible stray fields from the silicon; in a CMOS context, highly-doped and contacted layers at the exposed surfaces could potentially be used for the same purpose.
Additionally, we observed kHz-level drifts in motional frequencies which relaxed on longer timescales of many minutes, which we attribute to charges trapped in or on the surface of the dielectric near the grating windows -- this charging was clearly correlated, for example, to the 729 nm beams used for the MS gates being turned on. To minimize drifts in the motional frequency, we pulsed on the 729 nm beam during the 1 ms-long Doppler cooling pulse applied in each experimental shot, which together with the 785 nm light sent to the same coupler led to a more constant optical flux through the opening and reduced these drifts to the level of a few 100 Hz. We additionally recalibrate the motional frequency every 15 s during MS gate experiments to minimize the effect of these remaining drifts. Our experiments serve as a baseline for performance achievable with no shielding of the exposed dielectric; the extent to which conductive shielding of this exposed dielectric (as recently demonstrated \cite{niffenegger2020integrated}) reduces the magnitude of these effects will be an interesting question for future experiments.
\subsection{Contributions to Bell-state infidelity and routes to improvement}
Here we detail the error sources contributing to the Bell state infidelity achieved via the integrated MS gate (Table~\ref{tab:errors}), and discuss routes to improvement.
The heating rate of the two-ion stretch mode at 2.2 MHz used for the MS gates was measured to be $\dot{\bar{n}} = 60(30)$ quanta/s, via fits to sideband flopping following variable wait times. For the single-loop gates implemented here, this contributes an error $\epsilon_h = \dot{\bar{n}}\tau_g/2$,\cite{ballance2017high} resulting in our estimate of $2(1) \times 10^{-3}$. We measure a heating rate for a single-ion axial mode at 1.2 MHz of ${\sim}3000$ quanta/s, indicating $E$-field noise in our device ${\sim}100\times$ higher than in other cryogenic surface-electrode traps with similar ion-electrode distance \cite{sedlacek2018distance}.
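The heating-induced error estimate follows directly from the single-loop formula $\epsilon_h = \dot{\bar{n}}\tau_g/2$. In the sketch below the gate duration $\tau_g$ is an assumption (it is not restated in this excerpt); the value is chosen for illustration so that the measured heating rate reproduces the quoted $2\times10^{-3}$ estimate.

```python
def heating_error(nbar_dot, tau_g):
    """Single-loop MS gate error from motional heating: eps_h = nbar_dot * tau_g / 2."""
    return nbar_dot * tau_g / 2

nbar_dot = 60.0   # measured stretch-mode heating rate (quanta/s)
tau_g = 67e-6     # assumed gate duration (s); not stated in this excerpt

eps_h = heating_error(nbar_dot, tau_g)
print(f"eps_h = {eps_h:.2e}")  # ~2e-3, matching the quoted estimate
```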
During gate experiments, we observed an average drift in motional frequency of 200 Hz magnitude between each recalibration. Assuming a linear drift in frequency between these calibration points, we generate a probability density function for the motional frequency error during each experimental shot, which together with simulated infidelity resulting from an error in gate detuning $\delta$ results in an expectation value for infidelity of $1\times10^{-3}$.
\begin{figure}[t]
\centerline{\includegraphics[width=0.5\textwidth]{./20200109_180824and193624_RamseyDecayLaserNoEcho_discfit-eps-converted-to.pdf}}
\vspace{0 cm}
\caption{\label{fig:ramsey} \textbf{Ramsey coherence measurements.} We apply two $\pi/2$ pulses separated by a variable wait time, and the fringe contrast upon scanning the phase of the second pulse relative to the first is plotted to assess $T_2^*$. Data is shown using the same light guided through the in-cryostat fibres and integrated couplers (black points and fit) or through free-space (red points and fit). The fit to the data observed with the integrated coupler was used to infer laser noise parameters relevant to gate infidelity calculation; the observation of significantly faster decoherence when driving with the free-space beam (red points/fit) using the same 729 nm source indicates the integrated beam path's advantage in insensitivity to cryostat vibrations. Error bars on points represent 68\% confidence intervals on fit contrasts. }
\end{figure}
Spin coherence was assessed by means of Ramsey measurements on single qubits. Ramsey decays (Extended Data Fig.~\ref{fig:ramsey}) observed when driving the $\ket{S_\frac12,m_J = -\frac12}$ to $\ket{D_\frac52,m_J = -\frac12}$ qubit transition through the integrated gratings were fit by a model with a discrete noise component \cite{kotler2013nonlinear} corresponding to oscillations in apparent carrier frequency occurring with $175$ Hz periodicity, with a carrier frequency excursion amplitude of $2\pi\times160$ Hz, together with a slower Gaussian decay with $1/e$ time of 11 ms. We found similar decays on the $\ket{S_\frac12,m_J = -\frac12}$ to $\ket{D_\frac52,m_J = -\frac32}$ transition, which has $2\times$ higher magnetic field sensitivity, suggesting that laser frequency fluctuations (and not magnetic field drifts) are the dominant contribution. We estimate the infidelity resulting from the discrete noise component that dominates the Ramsey decay as follows. We perform numerical simulation to find the MS gate infidelity resulting from an offset carrier frequency, and average over the probability density function describing the offset during each shot for sinusoidal noise with the amplitude inferred from the Ramsey data. This results in an expectation value for infidelity of $1\times10^{-3}$, which serves as a lower bound for error as we have considered only the dominant noise component. We note that errors from drifts in both motional and carrier frequencies, together accounting for about $2\times10^{-3}$ of our Bell-State infidelity, can add coherently in sequences of multiple gates.
The use of the stretch mode is advantageous with respect to heating, but introduces sensitivity to radial mode temperatures; variance in occupancy of the ``rocking" radial modes results in variance in the stretch mode frequency through a Kerr-type interaction \cite{roos2008nonlinear, nie2009theory}. In our experiments, for technical reasons owing to beam directions required for EIT cooling, radial modes are not ground-state cooled, and after Doppler cooling we observe occupancies corresponding to $\bar{n} \sim 12-13$ for the modes at 3.5 MHz and $\bar{n} \sim 5$ for those at 5.5 MHz; these result in an estimate of the resulting infidelity of $4 \times 10^{-4}$.
The warm radial modes also contribute to shot-to-shot carrier Rabi frequency fluctuations, since the grating beam couples to all radial modes (these dominate the decay observed in Rabi oscillations in Fig.~\ref{fig:2ion}a and Extended Data Fig.~\ref{fig:crosstalk}a). From the same occupancies given above, we estimate an infidelity of $3 \times 10^{-4}$.
Qubit state preparation is based on frequency-selected optical pumping on the quadrupole transition (repeatedly driving a 729 nm $\pi$-pulse starting from $\ket{S_\frac12,m_J = +\frac12}$ and repumping with the 854 nm beam to the fast-decaying $P_\frac32$ levels); we measure preparation of qubits in the starting $\ket{S_\frac12,m_J = -\frac12}$ state with infidelities $<10^{-4}$. Qubit state detection is based on applying a resonant 397 nm pulse (driving ${S_\frac12} \leftrightarrow {P_\frac12}$ transitions) for $250$ $\mu$s after each shot and thresholding the number of photon counts detected by our photomultiplier tube (PMT), to infer whether the detection event corresponded to 0, 1, or 2 ions bright. The 1-bright-ion histogram is well separated from the dark distribution and contributes negligible error; however, with the optimal threshold, 0.09\% of 1-bright-ion events are mistaken for 2 bright, and vice versa, contributing $0.9\times10^{-4}$ to our Bell-State infidelity.
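Threshold readout of this kind can be modeled with Poisson count statistics. The sketch below is illustrative only: the mean counts \texttt{lam1} and \texttt{lam2} for the 1- and 2-ion-bright distributions are hypothetical (the measured values are not given in this excerpt). It scans candidate thresholds and reports the one minimizing the total 1$\leftrightarrow$2 misclassification probability.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def misclassification(threshold, lam1, lam2):
    # P(counts >= threshold | 1 bright) + P(counts < threshold | 2 bright)
    p_1_as_2 = 1.0 - sum(poisson_pmf(k, lam1) for k in range(threshold))
    p_2_as_1 = sum(poisson_pmf(k, lam2) for k in range(threshold))
    return p_1_as_2 + p_2_as_1

lam1, lam2 = 20.0, 40.0  # hypothetical mean counts per 250 us detection window
best_t = min(range(int(lam1), int(lam2) + 1),
             key=lambda t: misclassification(t, lam1, lam2))
print(best_t, misclassification(best_t, lam1, lam2))
```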
The gate pulse is smoothly ramped on and off over ${\sim}5$ $\mu$s, which reduces infidelity from off-resonant excitation of the direct drive term in the MS interaction Hamiltonian \cite{sorensen2000entanglement} to negligible levels for these gate times.
\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{./20191124_173242_Histograms-eps-converted-to.pdf}}
\vspace{0 cm}
\caption{\label{fig:histograms} \textbf{Readout histograms.} Histogram of PMT counts observed in detection events over all points in the parity scan of Fig.~\ref{fig:gate}, fitted to a sum of 3 Poissonian distributions. Each distribution corresponds to counts obtained during a 250 $\mu$s detection period from events with either 0, 1, or 2 ions in the bright state.}
\end{figure}
The effects summarized above together account for about 0.6\% infidelity. The gate fidelity could clearly be increased by means of certain technical improvements; error from motional heating can be suppressed by two orders of magnitude if improved trap materials or technical noise sources can allow heating rates comparable to the lowest-noise cryogenic surface traps \cite{sedlacek2018distance}. Shielding of the exposed dielectric e.g. with a transparent conductor \cite{niffenegger2020integrated} or perhaps with thin, semitransparent metal films should reduce motional frequency drifts, and optimal strategies for this are likely to be essential especially for trap chips delivering significant powers in the blue/UV as well. Laser noise could be reduced using improved implementation of acoustic noise cancellation on optical fibre in our system \cite{ma1994delivering}, together with technical improvements to our laser frequency stabilization. Ground-state cooling of radial modes can suppress Kerr cross-coupling and Rabi frequency fluctuations from radial mode occupancies, both of which contribute error roughly quadratic in $\bar{n}$, by $>100\times$. Couplers and beam/field geometries in future chips for EIT ground-state cooling of all modes \cite{lechner2016electromagnetically} will be an enabling development. Aside from these possible technical improvements, multi-loop and composite-pulse gate implementations can reduce infidelity from motional heating and frequency drifts, as well as laser frequency drifts \cite{milne2020phase}. Our current experiments show no obvious limit to fidelity arising from optical integration, and it appears such implementations may assist in approaching the ${\sim}10^{-5}$ limit imposed by spontaneous emission for typical gate times using this qubit.
\subsection{Crosstalk between trap zones}
Optical radiation offers the possibility of tight beam focuses addressing individual ions or ensembles within a larger system.
We quantified crosstalk levels expected in parallel operation of multiple zones in the present device by inputting light to adjacent fibre couplers and measuring the effect on the ion in the experimental trap zone 3 (zones and waveguide inputs labeled in Extended Data Fig.~\ref{fig:wafer}c). Extended Data Fig.~\ref{fig:crosstalk}a shows Rabi flopping observed on a single ion trapped in zone 3 with light coupled to input 3 (which directly feeds the grating addressing this zone). Light intended to address zone 2 would instead be coupled to input 5; when we send the same optical power to input 5, we observe Rabi oscillations with a $1000\times$ lower Rabi frequency on the ion in zone 3 (Extended Data Fig.~\ref{fig:crosstalk}b), indicating a relative intensity in this non-addressed zone of -60 dB.
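The $-60$ dB figure follows from the ratio of $\pi$-times, since the optical intensity scales as the square of the Rabi frequency. A quick check, using the $\pi$-times of 2.4 $\mu$s and 2.6 ms from the crosstalk measurements, gives a value consistent with the quoted $-60$ dB:

```python
import math

pi_time_addressed = 2.4e-6  # s, zone-3 grating driven directly (input 3)
pi_time_crosstalk = 2.6e-3  # s, light sent to the zone-2 input (input 5)

# Rabi frequency is inversely proportional to the pi-time,
# and intensity scales as the Rabi frequency squared.
rabi_ratio = pi_time_crosstalk / pi_time_addressed
intensity_db = 20 * math.log10(rabi_ratio)
print(f"Rabi ratio: {rabi_ratio:.0f}x, relative intensity: -{intensity_db:.1f} dB")
```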
\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{./Crosstalk.pdf}}
\vspace{0 cm}
\caption{\label{fig:crosstalk} \textbf{Crosstalk characterization.} Rabi oscillations at zone 3 (a) with light coupled to the port directly addressing this zone (input 3), and (b) with light coupled to the port intended to address zone 2 (input 5). Fits to Rabi oscillations with a Gaussian envelope decay indicate $\pi$-times of 2.4 $\mu$s (a) and 2.6 ms (b). }
\end{figure}
This level of crosstalk appears to arise from a combination of coupling between the waveguides on-chip (i.e. coupling from the waveguide of input 5 to input 3, and then emission by the grating in zone 3), and weak sidelobes of emission from the zone 2 grating to the ion in zone 3. Similar levels of crosstalk at -60 dB are observable when feeding in light to input 1, which propagates in a loop around the device and to no grating. The beam profile emitted in this case is probed by measuring ion response at different ion positions along the trap axis and is observed to be consistent with that emitted by the 729 nm coupler in zone 3; hence we attribute this crosstalk to coupling between the waveguides fed by input 1 and 3. Furthermore, when polarization is adjusted to minimize transmission to the loop output, the crosstalk drops by 20 dB, suggesting that the bulk of crosstalk into the zone 3 coupler comes from inter-waveguide coupling rather than scatter at the fibre-chip interface. We expect lower-loss waveguides with less sidewall scattering to reduce this effect. Crosstalk from the emission of the grating itself is particularly pronounced along the axial direction we probe here \cite{mehta2017precise}, and we expect optimized arrangements and designs to reduce this as well. A variety of composite pulse schemes can reduce the impact of coherent errors resulting from such crosstalk.
\subsection{Hybrid Zeeman/optical qubit encoding}
Sequences of gates conducted using the optical qubit addressed in this work would require phase stability of the driving 729 nm light over the entire algorithm, a challenging technical requirement for algorithms extending beyond many seconds. The 1.1 s spontaneous emission lifetime also limits memory time achievable with this transition.
We note that both issues may be ameliorated by hybrid approaches where, for example in $^{40}$Ca$^+$, qubits can be stored in the two $S_\frac12$ Zeeman sublevels (and single-qubit gates implemented via RF magnetic fields), with excitation from one sublevel to the $D_\frac52$ state only to implement two-qubit interactions.
For each ion, denote the three levels involved $\ket{{0}} = \ket{S_\frac12, m_J = -\frac{1}{2}}$, $\ket{{1}} = \ket{S_\frac12, m_J = +\frac{1}{2}}$, and $\ket{{2}}$ a sublevel of the $D_\frac52$ manifold; $\ket{{0}}$ and $\ket{{1}}$ represent the long-term ``memory" qubit. $\ket{{1}}$ is mapped to $\ket{{2}}$ for optical multi-qubit gates via rotations $\hat{R}_{12}(\pi, \phi) = \exp \left(-i \pi \hat{\sigma}_{12}(\phi)/2\right)$, with $\hat{\sigma}_{12}(\phi) = \hat\sigma_{x12} \cos\phi + \hat\sigma_{y12} \sin\phi$, where $\hat\sigma_{x12}$ and $\hat\sigma_{y12}$ are Pauli matrices for states $\ket 1$ and $\ket 2$. The MS interaction $\hat{U}^\mathrm{MS}_{02}(\phi)$ between $\ket 0$ and $\ket 2$ is expressed exactly as in the main text, with Pauli matrices appropriate to these two levels; defining the global $\pi$-rotation on qubits $a$ and $b$ as $\hat{\mathcal R}_{12}(\phi) = \hat{R}_{12}(\pi, \phi)_a \otimes \hat{R}_{12}(\pi, \phi)_b$, we find the total unitary $\hat{\mathcal R}_{12}(\phi) \hat{U}^\mathrm{MS}_{02}(\phi) \hat{\mathcal R}_{12}(\phi)$ is independent of the constant laser phase offset $\phi$. In fact we note this is not unique to the MS gate and would apply to any optically-implemented unitary.
Such an approach may allow systems to exploit the relatively low power requirements and high addressability of optical interactions, while benefiting from the long memory times and relaxed laser phase stability requirements of microwave qubits.
\section{Introduction}\label{sec:introduction}
Conversational agents, fueled by language understanding advancements enabled by large contextualized language models, are drawing considerable attention~\cite{anand2020conversational,zamani2022conversational}. Multi-turn conversations commence with a main topic and evolve with differing facets of the initial topic or an abrupt shift to a new focus, possibly suggested by the content of the answers returned~\cite{Mele2021AdaptiveUR,DaltonEtAl2020}.
A user drives such an interactive information-discovery process by submitting a query about a topic followed by a sequence of more specific queries, possibly aimed at clarifying some aspects of the topic. Documents relevant to the first query
are often relevant and helpful in answering subsequent
queries.
This suggests the presence of temporal locality in the lists of results retrieved by conversational systems for successive queries issued by the same user in the same conversation. In support of this claim, Figure~\ref{fig:motivation} illustrates a t-SNE~\cite{TSNE} bi-dimensional visualization of dense representations for the queries and the relevant documents of five manually rewritten conversations from the TREC 2019 CAsT dataset~\cite{DaltonEtAl2020}.
As illustrated, there is a clear spatial clustering among queries in the same conversation, as well as a clear spatial clustering of relevant documents for these queries.
We exploit this temporal locality to improve efficiency in conversational systems by caching the query results on the client side.
Rather than caching pages of results answering queries likely to be resubmitted, we cache documents about a topic, believing that their content will be likewise relevant to successive queries issued by the user involved in the conversation.
Topic caching is effective in Web search~\cite{ipm2020-MTFP} but, as yet, has never been explored in conversational search.
\ophir{Topic caching effectiveness rests on topical locality: if the variety of search domains is limited, the likelihood that past, and hence potentially cached, documents are relevant to successive searches is greater. Even in the Web environment, where search engines respond to a wide and diverse set of queries, topic caching is effective \cite{ipm2020-MTFP}; in conversational search, where a sequence of queries often focuses on related aspects of the same topic, topic caching should intuitively have even greater appeal, motivating our exploration.}
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\linewidth]{figs/motivation.pdf}
\caption{2D visualization of conversational queries ($\bullet$) and corresponding relevant documents ($\times$) for 5 \textsc{CAsT}\xspace 2019 topics.}\label{fig:motivation}
\end{figure}
To capitalize on the deep semantic relationship between conversation queries and documents, we leverage recent advances in Dense Retrieval (DR) models~\cite{ance,star,dpr,10.1145/3394486.3403305,CastDR}. In our DR setting, documents are represented by low-dimension learned embeddings stored for efficient access in a specialised metric index, such as that provided by the FAISS toolkit~\cite{JDH17}. Given a query embedded in the same multi-dimensional space, online ranking is performed by means of a top-$k$ nearest neighbor similarity search based on a metric distance. In the worst-case scenario, the computational cost of the nearest neighbor search is directly proportional to the number of documents stored in the metric index.
To
improve end-to-end responsiveness of the system, we insert a \emph{client-side} metric cache~\cite{Lucchese08, Lucchese12} in front of the DR system aimed at reusing documents retrieved for previous queries in the same conversation. We investigate different strategies for populating the cache at cold start and updating its content as the conversation topic evolves.
Our metric cache returns an approximate result set for the current query.
Using reproducible experiments based on TREC \textsc{CAsT}\xspace datasets, we demonstrate that our cache significantly reduces end-to-end conversational system processing times
without answer quality degradation.
Typically, we answer a query without accessing the document
index since the cache already stores the most similar documents. More importantly, we can
estimate the quality of the documents present in the cache for the current query, and based on this
estimate, decide if querying the document index is potentially beneficial.
Depending on the size of the cache, the hit rate measured on the \textsc{CAsT}\xspace conversations varies between 65\% and 75\%,
illustrating that caching significantly expedites conversational search by drastically reducing the number of queries submitted to the document index on the back-end.
Our contributions are as follows:
\begin{itemize}
\item Capitalizing on temporal locality, we propose a client-side document embedding cache \cache for expediting conversational search systems;
\item We devise a strategy to assess the quality of the current cache content, accessing the back-end document index only when doing so can improve response quality;
\item Using the TREC \textsc{CAsT}\xspace datasets, we demonstrate responsiveness improvement without accuracy degradation.
\end{itemize}
The remainder of the paper is structured as follows: Section~\ref{sec:architecture} introduces
our conversational search system architecture and discusses the proposed document embedding cache and the associated update strategies. Section~\ref{sec:expsetup} details our research questions, introducing the experimental settings and the experimental methodology. Results of our comprehensive evaluation conducted to answer the research questions are discussed in Section~\ref{sec:results}. Section~\ref{sec:related} contextualizes our contribution in the related work. Finally, we conclude our investigation in Section~\ref{sec:conclusions}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\linewidth]{figs/cache-architecture.pdf}
\caption{Architecture of a conversational search system with client-side caching.}\label{fig:architecture}
\end{figure}
\section{A Conversational system with client-side caching}
\label{sec:caching}
\label{sec:architecture}
A conversational search system enriched with our client-side caching is depicted in Figure \ref{fig:architecture}. We adopt a typical client-server architecture where a client supervises the conversational dialogue between a user and a search back-end running on a remote server.
We assume that the conversational back-end uses a dense retrieval model where documents and queries are both encoded with vector representations, also known as embeddings, in the same multi-dimensional latent space; the collection of document embeddings is stored, for efficient access, in a search system supporting nearest neighbor search, such as a FAISS index~\cite{JDH17}.
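As a concrete illustration of the back-end retrieval step just described, the sketch below runs an exact top-$k$ nearest-neighbor search over a toy set of document embeddings; random vectors stand in for learned representations, and the brute-force NumPy search computes exactly what an exact FAISS index (e.g., \texttt{IndexFlatL2}) provides at scale.

```python
import numpy as np

# Toy stand-ins for learned document and query embeddings.
rng = np.random.default_rng(0)
n, d, k = 10_000, 128, 10
doc_emb = rng.standard_normal((n, d)).astype(np.float32)
query_emb = rng.standard_normal(d).astype(np.float32)

# Exact top-k nearest-neighbor search under the Euclidean distance;
# this is what an exact FAISS index (IndexFlatL2) computes at scale.
dists = np.linalg.norm(doc_emb - query_emb, axis=1)
top_k = np.argsort(dists)[:k]
print(top_k)  # ids of the k documents closest to the query
```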
Each conversational client, possibly running on a mobile device, deals with a single user conversation at a time, and hosts a local cache aimed at reusing, for efficiency reasons, the documents previously retrieved from the back-end as a result of the previous utterances of the ongoing conversation. Reusing previously retrieved, namely cached, results eliminates the additional index access, reducing latency and resource load. Specifically, the twofold goal of the cache is: 1) to improve user-perceived responsiveness of the system by promptly answering user utterances with locally cached content; 2) to reduce the computational load on the back-end server by lowering the number of server requests as compared to an analogous solution not adopting client-side caching.
In detail, the client handles the user conversation by semantically enriching those utterances that lack context \cite{Mele2021AdaptiveUR} and encoding the rewritten utterance in the embedding space. Online conversational search is performed in the above settings by means of top-$k$ nearest neighbor queries based on a metric distance between the embedding of the utterance and those of the indexed documents. The conversational client then queries either the local cache or the back-end for the most relevant results answering the current utterance and presents them to the requesting user.
\raf{The first query of a conversation is always answered by querying the back-end index, and the results retrieved are used to populate the initially empty cache. For successive utterances of the same conversation, the decision of whether to answer by leveraging the content of the cache or querying the remote index is taken locally as explained later.}
We begin by introducing the notation used, continuing with a mathematical background on the metric properties of queries and documents and with a detailed specification of our client-side cache together with an update policy based on the metric properties of query and document embeddings.
\subsection{Preliminaries}
Each query or document is represented by a vector in $\mathbb R^l$, hereinafter called an \textit{embedding}.
Let $\db = \{d_1,d_2,\ldots,d_n\}$ be a collection of $n$ documents represented by the embeddings $\Phi = \{\phi_1, \phi_2, \ldots, \phi_n\}$, where $\phi_i = \mathcal{L}(d_i)$ and $\mathcal{L}: \mathcal{D} \to \mathbb{R}^l$ is a learned representation function. Similarly, let $q_a$ be a query represented by the embedding $\psi_a = \mathcal{L}(q_a)$ in the same multi-dimensional space $\mathbb{R}^l$.
Several similarity functions for comparing embeddings exist, including the inner product~\cite{ance,dpr,star, sbert} and the Euclidean distance~\cite{colbert}.
We use STAR \cite{star} to encode queries and documents. Since STAR embeddings are fine-tuned for maximal inner-product search, they cannot natively exploit the plethora of efficient algorithms developed for searching in Euclidean metric spaces.
To leverage nearest neighbor search and all the efficient tools devised for it, maximum inner product similarity search between embeddings can be adapted to use the Euclidean distance. Given a query embedding $\psi_a \in \mathbb{R}^l$ and a set of document embeddings $\Phi = \{\phi_i\}$ with $\phi_i \in \mathbb{R}^l$, we apply the following transformation from $\mathbb{R}^l$ to $\mathbb{R}^{l+1}$~\cite{10.5555/3045118.3045323,xbox}:
\begin{equation}
\label{eq:transformation}
\bar{\psi}_a = \begin{bmatrix}\psi_a^T/\|\psi_a\| & 0\end{bmatrix}^T,\quad
\bar{\phi}_i = \begin{bmatrix}\phi_i^T/M & \sqrt{1 - \|\phi_i\|^2/M^2}\end{bmatrix}^T,
\end{equation}
where $M = \max_i \|\phi_i\|$. In doing so, the maximization problem of the inner product $\langle\psi_a,\phi_i\rangle$ becomes exactly equivalent to the minimization problem of the Euclidean distance $\|\bar{\psi}_a - \bar{\phi}_i\|$. In fact,
we have:
\begin{equation*}
\min \|\bar{\psi}_a - \bar{\phi}_i\|^2 =
\min \big( \|\bar{\psi}_a\|^2 + \|\bar{\phi}_i\|^2 - 2 \langle \bar{\psi}_a, \bar{\phi}_i \rangle \big) =
\min \big(2 - 2\langle {\psi}_a/\|\psi_a\|, {\phi}_i/M \rangle \big) =
\max \langle {\psi}_a, {\phi}_i\rangle,
\end{equation*}
where the last equality holds since $\|\psi_a\|$ and $M$ are positive constants that do not depend on $i$.
Hence, hereinafter we consider the task of online ranking with a dense retriever as a nearest neighbor search task based on the Euclidean distance among the transformed embeddings $\bar{\psi}$ and $\bar{\phi}$ in $\mathbb{R}^{l+1}$.
Intuitively, assuming $l = 2$, the transformation~\eqref{eq:transformation} maps arbitrary \textit{query and document} vectors in $\mathbb{R}^2$ into unit-norm \textit{query and document} vectors in $\mathbb{R}^3$, i.e., the transformed vectors are mapped on the surface of the unit sphere in $\mathbb{R}^3$.
To simplify the notation we drop the bar symbol from the embeddings $\bar{\psi} \to \psi$ and $\bar{\phi} \to \phi$, and assume that the learned function $\mathcal{L}$ encodes queries and documents directly in $\mathbb{R}^{l+1}$ by also applying the above transformation.
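For concreteness, the following self-contained Python sketch (our own illustration on random toy embeddings, not the system's code) implements the lifting of Eq. \ref{eq:transformation} and verifies that the document maximizing the inner product coincides with the Euclidean nearest neighbor of the transformed query:

```python
import numpy as np

def lift_documents(docs):
    """Map document embeddings of shape (n, l) to unit-norm vectors in R^{l+1}."""
    norms = np.linalg.norm(docs, axis=1)
    M = norms.max()                              # M = max_i ||phi_i||
    extra = np.sqrt(1.0 - (norms / M) ** 2)      # added (l+1)-th coordinate
    return np.hstack([docs / M, extra[:, None]])

def lift_query(psi):
    """Map a query embedding of shape (l,) to R^{l+1} with a zero last coordinate."""
    return np.append(psi / np.linalg.norm(psi), 0.0)

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 8))                # toy collection, l = 8
psi = rng.normal(size=8)

mips_winner = np.argmax(docs @ psi)              # maximum inner product document
lifted_docs, lifted_psi = lift_documents(docs), lift_query(psi)
nn_winner = np.argmin(np.linalg.norm(lifted_docs - lifted_psi, axis=1))
assert mips_winner == nn_winner                  # MIPS and NN agree after lifting
```

Since $\|\bar{\psi}_a - \bar{\phi}_i\|^2 = 2 - 2\langle\bar{\psi}_a, \bar{\phi}_i\rangle$ is monotone in the inner product, the lifting preserves the full ranking, not only the top-1 document.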
\subsection{Nearest neighbor queries and metric distances}
Let $\delta$ be a {\em metric distance function}, $\delta: \mathbb{R}^{l+1} \times \mathbb{R}^{l+1} \to \mathbb{R}$, measuring the Euclidean distance between two embeddings in $\mathbb{R}^{l+1}$ of valid documents and queries;
the smaller the distance between the embeddings, the more similar the corresponding documents or queries are.
Given a query $q_a$, we are interested in retrieving $\text{NN}(q_a, k)$, i.e., the $k$ nearest neighbor documents to query $q_a$ according to the distance function $\delta(\cdot, \cdot)$.
In the metric space $\mathbb{R}^{l+1}$, $\text{NN}(q_a, k)$ identifies a hyperball $\mathcal{B}_a$ centered on $\psi_a = \mathcal{L}(q_a)$ and with radius $r_a$, computed as:
\begin{equation}\label{eq:radiusa}
r_a = \max_{d_i \in \text{NN}(q_a, k)} \delta(\psi_a, \mathcal{L}(d_i)).
\end{equation}
The radius $r_a$ is thus the distance from $q_a$ of the least similar document among the ones in $\text{NN}(q_a, k)$\footnote{Without loss of generality, we assume that the least similar document is unique, and we do not have two or more documents at distance $r_a$ from $q_a$.}.
We now introduce a new query $q_b$. Analogously, the set $\text{NN}(q_b, k)$ identifies the hyperball $\mathcal{B}_b$ with radius $r_b$ centered in $\psi_b$ and including the $k$ embeddings closest to $\psi_b$. If $\psi_a \neq \psi_b$, the two hyperballs can be completely disjoint, or may partially overlap. We introduce the quantity:
\begin{equation}\label{eq:radiusb}
\hat{r}_b = r_a - \delta(\psi_a, \psi_b),
\end{equation}
to detect the case of a partial overlap in which the query embedding $\psi_b$ falls within the hyperball
$\mathcal{B}_a$, i.e., $\delta(\psi_a, \psi_b) < r_a$, or, equivalently, $\hat{r}_b > 0$, as illustrated\footnote{The figure approximates the metric properties in a local neighborhood of $\psi_a$ on the $(l+1)$-dimensional unit sphere, i.e., in its locally-Euclidean $l$-dimensional tangent plane.} in Figure~\ref{fig:overlap}.
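The quantities of Eq. \ref{eq:radiusa} and Eq. \ref{eq:radiusb} are straightforward to compute from the embeddings. A toy numerical sketch of our own (random data, illustrative names) follows:

```python
import numpy as np

def knn_radius(psi, doc_embs, k):
    """Radius r = delta(psi, d_k): distance of the k-th nearest document to psi."""
    dists = np.linalg.norm(doc_embs - psi, axis=1)
    return np.sort(dists)[k - 1]

rng = np.random.default_rng(7)
docs = rng.normal(size=(500, 3))               # toy collection in R^3
psi_a = rng.normal(size=3)

r_a = knn_radius(psi_a, docs, k=10)            # radius of the hyperball B_a

psi_b = psi_a + 0.01 * rng.normal(size=3)      # a nearby follow-up query
r_hat_b = r_a - np.linalg.norm(psi_a - psi_b)  # Eq. (r_hat_b)

# r_hat_b > 0: psi_b falls inside B_a, and by the triangle inequality every
# document within distance r_hat_b of psi_b is closer to psi_b than any
# document lying outside B_a.
assert r_hat_b > 0
```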
\begin{figure}[htb!]
\centering
\includegraphics[width=0.4\linewidth]{figs/overlap.pdf}
\caption{Overlapping hyperballs for $\text{NN}(q_a, 10)$ and $\text{NN}(q_b, 6)$ with embeddings in $\mathbb{R}^2$. Grey squares represent the embeddings of the 10 nearest neighbor documents to $q_a$.}\label{fig:overlap}
\end{figure}
In this case, there always exists a hyperball $\hat{\mathcal{B}}_b$, centered on $\psi_b$ with radius $\hat{r}_b$ such that $\hat{\mathcal{B}}_b \subset \mathcal{B}_a$. As shown in the figure, some of the documents in $\text{NN}(q_a, k)$, retrieved for query $q_a$, may belong also to $\text{NN}(q_b, k)$. Specifically, these documents are all those within the hyperball $\hat{\mathcal{B}}_b$. Note that there can be other documents in $\mathcal{B}_a$ whose embeddings are contained in $\mathcal{B}_b$, but if such embeddings are in $\hat{\mathcal{B}}_b$, we have the \textit{guarantee} that the corresponding documents are the most similar to $q_b$ among \textcolor{black}{all the documents in \db~\cite{Lucchese08}.
Our experiments will show that the documents relevant for successive queries in a conversation overlap significantly.
To take advantage of such overlap,} we now introduce a cache for storing historical embeddings that exploits the above metric properties of dense representations of queries and documents. \raf{Given the representation of the current utterance, the proposed cache aims at reusing the embeddings already retrieved for previous utterances of the same conversation to improve the responsiveness of the system. In the simplistic example depicted in Figure \ref{fig:overlap}, our cache would answer query $q_b$ by reusing the embeddings in $\mathcal{B}_b$ already retrieved for $q_a$.}
\subsection{A metric cache for conversational search}
Since several queries in a multi-turn conversation may deal with the same broad topic, documents retrieved for the starting topic of a conversation might become useful also for answering subsequent queries within the same conversation.
The properties of nearest neighbor queries in metric spaces discussed in the previous subsection suggest a simple, but effective way to exploit temporal locality by means of a metric cache \cache deployed on the client-side of a conversational DR system.
\begin{algorithm2e}[htb!]
\DontPrintSemicolon
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\SetKwProg{myalg}{}{:}{end}
\SetKwFunction{name}{{\sc }}
\Input{a metric index \mindex, a metric cache \cache, a \raf{query} cutoff $k$, a~cache~cutoff~$k_c$, a query embedding $\psi$}
\Output{a results set \results}
{
\nl \If{{\sf Empty}$(\cache)$ {\bf or} {\sf LowQuality($\psi$, \cache)}}{
\nl \results $\leftarrow {\sf NN}(\mindex, \psi, k_c)$\;
\nl {\sf Insert}$(\cache, \results)$\;
}
\nl \results $\leftarrow$ {\sf NN}$(\cache,\psi,k)$\;
\nl \Return \results\;
}
\caption{The \textsf{CACHE}\xspace\ pseudo-code}
\label{algo:cache}
\end{algorithm2e}
Our system for CAChing Historical Embeddings (\textsf{CACHE}\xspace) is specified in Algorithm \ref{algo:cache}. The system receives a sequence of queries belonging to a user conversation and answers them returning $k$ documents retrieved from the metric cache \cache
or the metric index \mindex containing the document embeddings of the whole collection.
When the conversation is initiated with a query $q$, whose embedding is $\psi$, the cache is empty (line 1). The main index \mindex, \raf{possibly stored on a remote back-end server,} is thus queried for the top $k_c$ documents $\textsf{NN}(\mindex, \psi, k_c)$, with cache cutoff $k_c \gg k$ (line 2). These $k_c$ documents are then stored in the cache (line 3).
The rationale of using a cache cutoff $k_c$ much larger than the \raf{query cutoff} $k$ is that of filling the cache with documents that are likely to be relevant also for the successive queries of the conversation, i.e., possibly all the documents in the conversation clusters depicted in Figure~\ref{fig:motivation}. The cache cutoff $k_c$ relates in fact to the radius $r_a$ of the hyperball $\mathcal{B}_a$ illustrated in Figure~\ref{fig:overlap}: the larger $k_c$, the larger $r_a$, and the higher the chances of having documents relevant to the successive queries of the conversation within the hyperball $\mathcal{B}_a$.
When a new query of the same conversation arrives, we estimate the quality of the historical embeddings stored in the cache for answering it. This is accomplished by the function {\sf LowQuality}($\psi,\cache)$ (line 1). If the results available in the cache \cache are likely to be of low quality, we issue the query to the main index \mindex with cache cutoff $k_c$ and add the top $k_c$ results to \cache (lines 2--3).
Eventually, we query the cache for the $k$ nearest neighbor documents (line 4), and return them (line 5).
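A compact Python sketch of Algorithm \ref{algo:cache} may help fix ideas. It is a simplified illustration of our own: exhaustive search stands in for the remote FAISS index, and all names and toy data are ours, not the actual implementation:

```python
import numpy as np

class MetricCache:
    """Simplified client-side metric cache following the pseudo-code above."""

    def __init__(self, index_embs, k_c=1000, eps=0.04):
        self.index = index_embs          # stand-in for the remote FAISS index
        self.k_c, self.eps = k_c, eps
        self.cached = np.empty((0, index_embs.shape[1]))
        self.past_misses = []            # (psi_a, r_a) of queries sent to the back-end

    @staticmethod
    def _knn(embs, psi, k):
        """Exhaustive k-NN: the k embeddings closest to psi, sorted by distance."""
        dists = np.linalg.norm(embs - psi, axis=1)
        order = np.argsort(dists)[:k]
        return embs[order], dists[order]

    def _low_quality(self, psi):
        """True unless psi falls well inside the hyperball of a past miss (r_hat > eps)."""
        return not any(r_a - np.linalg.norm(psi_a - psi) > self.eps
                       for psi_a, r_a in self.past_misses)

    def answer(self, psi, k=10):
        if not self.past_misses or self._low_quality(psi):       # line 1: cache miss
            embs, dists = self._knn(self.index, psi, self.k_c)   # line 2: back-end
            self.cached = np.unique(np.vstack([self.cached, embs]), axis=0)  # line 3
            self.past_misses.append((psi, dists[-1]))            # remember psi_a, r_a
        return self._knn(self.cached, psi, k)[0]                 # lines 4-5

# Toy usage: the first query misses; a very close follow-up is served by the cache.
rng = np.random.default_rng(1)
docs = rng.normal(size=(2000, 4))
cache = MetricCache(docs, k_c=500, eps=0.04)
psi_a = rng.normal(size=4)
cache.answer(psi_a)                # miss: 500 embeddings fetched and cached
top = cache.answer(psi_a + 0.001)  # hit: answered without contacting the back-end
```

In the sketch the radius $r_a$ stored for each miss is the distance of the $k_c$-th retrieved document, so a follow-up query whose embedding falls well within that hyperball is answered entirely from the cache.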
\subsubsection*{Cache quality estimation}
The quality of the historical embeddings stored in \cache for answering a new query is estimated heuristically within the function {\sf LowQuality}($\psi,\cache)$ called in line 1 of Algorithm \ref{algo:cache}.
Given the embedding $\psi$ of the new query, we first identify the query embedding $\psi_a$ closest to $\psi$ among the ones present in \cache, i.e.,
\begin{equation}\label{eq:minq}
\psi_a = \argmin_{\psi_i \in \cache} \delta(\psi_i, \psi)
\end{equation}
Once $\psi_a$ is identified, we consider the radius $r_a$ of the hyperball $\mathcal{B}_a$, depicted in Figure \ref{fig:overlap}, and use Eq. \ref{eq:radiusb} to check if $\psi$ falls within $\mathcal{B}_a$. If this happens, it is likely that some of the documents previously retrieved for $\psi_a$ and stored in \cache are relevant also for $\psi$.
Specifically, our quality estimation heuristics considers the value $\hat{r} = r_a - \delta(\psi_a, \psi)$ introduced in Eq. \ref{eq:radiusb}. If $ \hat{r} > \epsilon$, with $\epsilon \geq 0$ being a hyperparameter of the cache, we answer $\psi$ with the $k$ nearest neighbor documents stored in the cache, i.e., the {\sf NN}$(\cache,\psi, k)$ documents; otherwise, we query the main embedding index in the conversational search back-end and update the cache accordingly.
This quality test has the advantage of efficiency; it simply requires computing
the distances between $\psi$ and the embeddings of the few queries previously used to populate the cache for the current conversation, i.e., the ones that caused a cache miss and were answered by retrieving the embeddings from the back-end (lines 2 and 3 of Algorithm \ref{algo:cache}).
In addition, by changing the single hyperparameter $\epsilon$, which measures the distance of a query from the internal border of the hyperball containing the closest cached query, we can easily tune the quality-assessment heuristic to the needs of the specific application.
In the experimental section, we propose and discuss a simple but effective technique for tuning $\epsilon$ to
balance the effectiveness of the results returned and the efficiency improvement introduced with caching.
\section{Research questions and Experimental Settings}\label{sec:expsetup}
We now present the research questions and the experimental setup aimed at evaluating the proposed \textsf{CACHE}\xspace\ system in operational scenarios.
That is, we experimentally assess both the accuracy of a conversational search system that includes \textsf{CACHE}\xspace, namely that caching does not hinder response quality, and its efficiency, namely the reduction of query response time. Our reference baseline is exactly the same conversational search system illustrated in Figure \ref{fig:architecture}, where conversational clients always forward the queries to the back-end server managing the document embedding index.
\subsection{Research Questions}
Specifically, we address the following research questions:
\begin{itemize}[align=left,leftmargin=*]
\item \textbf{RQ1}: Does \textsf{CACHE}\xspace\ provide effective answers to conversational utterances by reusing the embeddings retrieved for previous utterances of the same conversation?
\begin{itemize}
\item[A.] How effective is the quality assessment heuristic used to decide cache updates?
\item[B.] To what extent does \textsf{CACHE}\xspace\ impact on client-server interactions?
\item[C.] How much memory does \textsf{CACHE}\xspace\ require in the worst case?
\end{itemize}
\item \textbf{RQ2}: How much does \textsf{CACHE}\xspace\ expedite the conversational search process?
\begin{itemize}
\item[A.] What is the impact of the cache cutoff $k_c$ on the efficiency of the system in case of cache misses?
\item[B.] How much faster is answering a query from the cache rather than from the remote index?
\end{itemize}
\end{itemize}
\subsection{Experimental settings}\label{ssec:expsetup}
Our conversational search system uses STAR~\cite{star} to encode \textsc{CAsT}\xspace queries and documents as embeddings with 769 dimensions\footnote{STAR encoding uses 768 values but we added one dimension to each embedding by applying the transformation in Eq. \ref{eq:transformation}.}. The document embeddings are stored in a dense retrieval system leveraging the FAISS library~\cite{JDH17} to efficiently perform similarity searches between queries and documents. The nearest neighbor search is exact, and no approximation/quantization mechanisms are deployed.
\paragraph{Datasets and dense representation}
Our experiments are based on the resources provided by the 2019, 2020, and 2021 editions of the TREC Conversational Assistant Track (\textsc{CAsT}\xspace).
The \textsc{CAsT}\xspace 2019 dataset consists of 50 human-assessed conversations, while the other two datasets include 25 conversations each, with an average of 10 turns per conversation. \textsc{CAsT}\xspace 2019 and 2020 include relevance judgments at the passage level, whereas for \textsc{CAsT}\xspace 2021 the relevance judgments are provided at the document level.
The judgments, graded on a three-point scale, refer to passages of the TREC CAR (Complex Answer Retrieval),
and \textsc{MS-MARCO}\xspace (MAchine Reading COmprehension) collections for \textsc{CAsT}\xspace 2019 and 2020, and to documents of \textsc{MS-MARCO}\xspace, KILT, Wikipedia, and Washington Post 2020 for \textsc{CAsT}\xspace 2021\footnote{\url{https://www.treccast.ai/}}.
\raf{Regarding the dense representation of queries and passages/documents, our caching strategy is orthogonal to the choice of the embedding model. The state-of-the-art single-representation models proposed in the literature are DPR~\cite{dpr}, ANCE~\cite{ance}, and STAR~\cite{star}. The main difference among these models lies in how the fine-tuning of the underlying pre-trained language model, i.e., BERT, is carried out. We selected the embeddings computed by the STAR model for our experiments since it employs hard negative sampling during fine-tuning, obtaining more effective representations than ANCE and DPR.}
For \textsc{CAsT}\xspace 2019 and 2020, we generated a STAR embedding for each passage in the collections, while for \textsc{CAsT}\xspace 2021, we encoded each document, up to the maximum input length of 512 tokens, in a single STAR embedding.
Given our focus on the efficiency of conversational search, we strictly use manually rewritten queries, where missing keywords or mentions to previous subjects, e.g., pronouns, are resolved by human assessors.
\paragraph{\textsf{CACHE}\xspace\ Configurations}
To answer our research questions, we measure
the end-to-end performance of the proposed \textsf{CACHE}\xspace\ system on the three \textsc{CAsT}\xspace datasets.
We compare \textsf{CACHE}\xspace\ against the efficiency and effectiveness of a \textit{baseline} conversational search system with no caching, always answering the conversational queries by using the FAISS index hosted by the back-end (hereinafter indicated as \textit{no-caching}). The effectiveness of no-caching on the assessed conversations of the three \textsc{CAsT}\xspace datasets represents an upper bound for the effectiveness of our \textsf{CACHE}\xspace system. Analogously, we consider the no-caching baseline always retrieving documents via the back-end as a lower bound for the responsiveness of the conversational search task addressed.
We experiment with two different versions of our \textsf{CACHE}\xspace system:
\begin{itemize}
\item a \textit{static}-\textsf{CACHE}\xspace: a metric cache populated with the $k_c$ nearest documents returned by the index for the first query of each conversation and never updated for the remaining queries of the conversations;
\item a \textit{dynamic}-\textsf{CACHE}\xspace: a metric cache updated at query processing time according to Alg.~\ref{algo:cache}, where {\sf LowQuality}($\psi_b,\cache$) returns false if $\hat{r}_b \geq \epsilon$ (see Eq. \ref{eq:radiusb}) for at least one of the previously cached queries, and true otherwise.
\end{itemize}
We vary the cache cutoff $k_c$ in $\{1K, 2K, 5K, 10K\}$ and assess its impact. Additionally,
since conversations are typically brief, e.g., from 6 to 13 queries for the three \textsc{CAsT}\xspace datasets considered, for efficiency and simplicity of design we forgo implementing any eviction policy to free space should the client-side cache reach its maximum capacity.
We verify experimentally that, even without eviction, the amount of memory needed by our \textit{dynamic}-\textsf{CACHE}\xspace to store the embeddings of the documents retrieved from the FAISS index during a single conversation is modest and does not present an issue.
In addition to the document embeddings, we recall that to implement the {\sf LowQuality}$(\cdot,\cdot)$ test, our cache records also the embeddings $\psi_a$ and radius $r_a$ of all the previous queries $q_a$ of the conversation answered on the back-end.
\paragraph{Effectiveness Evaluation}
\looseness -1 The effectiveness of the \textit{no-caching} system, the static-\textsf{CACHE}\xspace, and the dynamic-\textsf{CACHE}\xspace\ are assessed by using the official metrics used to evaluate \textsc{CAsT}\xspace conversational search systems~\cite{DaltonEtAl2020}: mean average precision at \raf{query} cutoff 200 (MAP@200), mean reciprocal rank at \raf{query} cutoff 200 (MRR@200), normalized discounted cumulative gain at \raf{query} cutoff 3 (nDCG@3), and precision at \raf{query} cutoffs 1 and 3 (P@1, P@3).
Our experiments report the statistically significant differences w.r.t. the baseline system for $p<0.01$ according to the two-sample t-test.
In addition to these standard IR measures, we introduce a new metric to measure the quality of the approximate answers retrieved from the cache w.r.t. the exact results retrieved from the FAISS index. We define the \textit{coverage} of a query $q$ w.r.t. a cache \cache and a given \raf{query cutoff} value $k$ as the size of the intersection, in terms of nearest neighbor documents, between the top $k$ elements retrieved from the cache \cache and the exact top $k$ elements retrieved from the whole index \mindex, divided by $k$:
\begin{equation}\label{eq:acov}
\textsf{cov}_k(q) = \frac{|{\sf NN}(\cache, \psi, k) \cap {\sf NN}(\mindex, \psi, k)|}{k},
\end{equation}
where $\psi$ is the embedding of query $q$. We report on the quality of the approximate answers retrieved from the cache by measuring the coverage $\textsf{cov}_k$, averaged over the different queries.
\raf{The higher $\textsf{cov}_k$ at a given query cutoff $k$, the greater the quality of the approximate $k$ nearest neighbor documents retrieved from the cache. Of course, $\textsf{cov}_k(q) = 1$ for a given cutoff $k$ and query $q$ means that we retrieve exactly the same set of answers from the cache and from the main index. Moreover, these answers turn out to be ranked in the same order by the distance function adopted. Besides measuring the quality of the answers retrieved from the cache vs. the main index, we use }
the metric $\textsf{cov}_k$ also to tune the hyperparameter $\epsilon$.
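As an illustration, Eq. \ref{eq:acov} amounts to a few lines of Python (a sketch of ours on made-up document identifiers):

```python
def coverage(cache_topk, index_topk):
    """cov_k = |NN(cache, psi, k) ∩ NN(index, psi, k)| / k, over document ids."""
    k = len(index_topk)
    return len(set(cache_topk) & set(index_topk)) / k

# The cache returns ids [3, 7, 1, 9]; the exact top-4 from the index is
# [3, 1, 2, 9]: three documents overlap, so coverage is 3/4.
assert coverage([3, 7, 1, 9], [3, 1, 2, 9]) == 0.75
assert coverage([5, 6], [5, 6]) == 1.0   # identical answer sets
```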
To this end, Figure~\ref{fig:rb_hat} reports the correlation between $\hat{r}_b$ and \textsf{cov}$_{10}(q)$ for the \textsc{CAsT}\xspace 2019 train queries, using static-\textsf{CACHE}\xspace and $k_c=1K$. The queries with $\textsf{cov}_{10} \leq 0.3$, i.e., those with no more than three documents in the intersection between the static-\textsf{CACHE}\xspace contents and their actual top 10 documents, correspond to $\hat{r}_b \leq 0.04$. Hence, in our \raf{initial} experiments, we set the value of $\epsilon$ to $0.04$ \raf{to obtain good coverage figures at small query cutoffs}. \raf{In answering RQ1.A we will also discuss a different tuning of $\epsilon$ aimed at improving the effectiveness of dynamic-\textsf{CACHE}\xspace at large query cutoffs.}
\begin{figure}[bt!]
\centering
\includegraphics[width=0.6\linewidth]{figs/cov.pdf}
\caption{Correlation between $\hat{r}_b$ and \textsf{cov}$_{10}(q)$ for the \textsc{CAsT}\xspace 2019 train queries, using static-\textsf{CACHE}\xspace, $k=10$ and $k_c=1K$. The vertical black dashed line corresponds to $\hat{r}_b = 0.04$, the tuned cache update threshold value $\epsilon$ used in our experiments.}\label{fig:rb_hat}
\end{figure}
\paragraph{Efficiency Evaluation}
The efficiency of our \textsf{CACHE}\xspace\ systems is measured in terms of: i) \textit{hit rate}, i.e., the percentage of queries, over the total number of queries, answered directly by the cache without querying the dense index; ii) \textit{average query response time} for our \textsf{CACHE}\xspace configurations and the \textit{no-caching} baseline.
The hit rate is measured by not considering the first query in each conversation since each conversation starts with an empty cache, and the first queries are thus compulsory cache misses, always answered by the index.
Finally, the query response time, namely latency, is measured as the amount of time from when a query is submitted to the system to the time it takes for the response to get back. To better understand the impact of caching, for \textsf{CACHE}\xspace we measure separately the average response time for hits and misses.
The efficiency evaluation is conducted on a server equipped with an Intel Xeon E5-2630 v3 CPU clocked at 2.40GHz and 192 GiB of RAM. In our tests, we employ the FAISS\footnote{\url{https://github.com/facebookresearch/faiss}} Python API v1.6.4. The experiments measuring query response time are conducted by using the low-level C++ exhaustive nearest-neighbor search FAISS APIs.
We make this choice to avoid possible overheads introduced by the Python interpreter when using the standard FAISS high-level APIs. Moreover, as FAISS is a library designed and optimized for batch retrieval, our efficiency experiments are conducted by retrieving results for a batch of queries instead of a single one. The rationale is that, at the back-end level, we can safely assume that queries coming from different clients are batched together before being submitted to FAISS. The reported response times are averaged over three different runs.
\paragraph{Available Software}
The source code used in our experiments is made publicly available to allow the reproducibility of the results\footnote{\url{https://github.com/hpclab/caching-conversational-search}}.
\section{Experimental Results}
\label{sec:results}
We now discuss the results of the experiments conducted to answer the research questions posed in Section \ref{sec:expsetup}.
\begin{table*}[htb!]
\centering
\caption{Retrieval performance measured on \textsc{CAsT}\xspace datasets with or without document embedding caching. We highlight with the symbol $\blacktriangledown$ statistically significant differences w.r.t. \textit{no-caching} for $p<0.01$ according to the two-sample t-test. Best values for each dataset and metric are shown in bold.}\label{tb:effectiveness}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{cccccccccc}
\toprule
&& $k_c$ & {MAP@200} & {MRR@200} & {nDCG@3} & {P@1} & {P@3} & $cov_{10}$ & {Hit Rate} \\
\midrule
\multirow{9}{*}{\bf \textsc{CAsT}\xspace 2019}&no-caching & -- & \textbf{0.194} & 0.647 & 0.376 & 0.497 & 0.495 & -- &--\\
\cmidrule{3-10}
&\multirow{4}{*}{static-\textsf{CACHE}\xspace}
& 1K & 0.101$\blacktriangledown$ & 0.507$\blacktriangledown$ & 0.269$\blacktriangledown$ & 0.387$\blacktriangledown$ & 0.364$\blacktriangledown$ & 0.40 & 100\% \\
&& 2K & 0.112$\blacktriangledown$ & 0.567$\blacktriangledown$ & 0.304$\blacktriangledown$ &0.428 &0.414$\blacktriangledown$ & 0.47 &100\% \\
&& 5K & 0.129$\blacktriangledown$ & 0.588 & 0.316$\blacktriangledown$ & 0.451 & 0.426$\blacktriangledown$ & 0.56 & 100\% \\
&& 10K & 0.140$\blacktriangledown$ &0.611 & 0.338 & 0.486 & 0.459 & 0.62 & 100\% \\
\cmidrule{3-10}
&\multirow{4}{*}{dynamic-\textsf{CACHE}\xspace}
& 1K & 0.180$\blacktriangledown$ & 0.634 & 0.365 & 0.474 & 0.482 & 0.91 & 67.82\% \\
&& 2K & 0.183$\blacktriangledown$ & 0.631 & 0.366 & 0.480 & 0.487 & 0.93 & 70.69\% \\
&& 5K & 0.186$\blacktriangledown$ & 0.652 & 0.375 & 0.503 & 0.499 & 0.94 & 74.14\% \\
&& 10K & 0.190 & \textbf{0.655} & \textbf{0.380} & \textbf{0.509} & \textbf{0.505} & 0.96 & 75.29\% \\
\midrule
\multirow{9}{*}{\bf \textsc{CAsT}\xspace 2020}&
no-caching & -- & \textbf{0.212} & 0.622 & 0.338 & 0.471 & 0.473 & -- & --\\
\cmidrule{3-10}
&\multirow{4}{*}{static-\textsf{CACHE}\xspace}
& 1K & 0.112$\blacktriangledown$ & 0.421$\blacktriangledown$ & 0.215$\blacktriangledown$ & 0.312$\blacktriangledown$ & 0.306$\blacktriangledown$ & 0.35 & 100\% \\
& & 2K & 0.120$\blacktriangledown$ & 0.454$\blacktriangledown$ & 0.236$\blacktriangledown$ & 0.351$\blacktriangledown$ & 0.324$\blacktriangledown$ & 0.41 & 100\% \\
& & 5K & 0.139$\blacktriangledown$ & 0.509$\blacktriangledown$ & 0.267$\blacktriangledown$ & 0.394 & 0.370$\blacktriangledown$ & 0.48 & 100\% \\
& & 10K & 0.146$\blacktriangledown$ & 0.518$\blacktriangledown$ & 0.270$\blacktriangledown$ & 0.394$\blacktriangledown$ & 0.380$\blacktriangledown$ & 0.52 & 100\% \\
\cmidrule{3-10}
&\multirow{4}{*}{dynamic-\textsf{CACHE}\xspace}
& 1K & 0.204$\blacktriangledown$ & 0.624 & 0.339 & \textbf{0.481} & 0.478 & 0.91 & 56.02\% \\
& & 2K & 0.203$\blacktriangledown$ & \textbf{0.625} & 0.336 & \textbf{0.481} & 0.470 & 0.93 & 60.73\% \\
& & 5K & 0.208 & 0.622 & \textbf{0.341} & 0.476 & \textbf{0.479} & 0.94 & 62.83\% \\
& & 10K & 0.210 & \textbf{0.625} & 0.339 & 0.476 & 0.476 & 0.96 & 63.87\% \\
\midrule
\multirow{9}{*}{\bf \textsc{CAsT}\xspace 2021}&
no-caching & -- & \textbf{0.109} & 0.584 & \textbf{0.340} & 0.449 & \textbf{0.411} & -- & --\\
\cmidrule{3-10}
&\multirow{4}{*}{static-\textsf{CACHE}\xspace}
& 1K & 0.068$\blacktriangledown$ & 0.430$\blacktriangledown$ & 0.226$\blacktriangledown$ & 0.323$\blacktriangledown$ & 0.283$\blacktriangledown$ & 0.38 & 100\% \\
& & 2K & 0.072$\blacktriangledown$ & 0.461$\blacktriangledown$ & 0.240$\blacktriangledown$ & 0.348$\blacktriangledown$ & 0.300$\blacktriangledown$ & 0.42 & 100\% \\
& & 5K & 0.079$\blacktriangledown$ & 0.508$\blacktriangledown$ & 0.270$\blacktriangledown$ & 0.386 & 0.338$\blacktriangledown$ & 0.51 & 100\% \\
& & 10K & 0.080$\blacktriangledown$ & 0.503$\blacktriangledown$ & 0.272$\blacktriangledown$ & 0.367$\blacktriangledown$ & 0.338$\blacktriangledown$ & 0.56 & 100\% \\
\cmidrule{3-10}
&\multirow{4}{*}{dynamic-\textsf{CACHE}\xspace}
& 1K & 0.106 & 0.577 & 0.335 & 0.443 & 0.409 & 0.89 & 61.97\% \\
& & 2K & 0.107 & \textbf{0.585} & 0.338 & \textbf{0.456} & \textbf{0.411} & 0.91 & 63.38\% \\
& & 5K & 0.106 & 0.584 & 0.334 & 0.449 & 0.407 & 0.92 & 66.67\% \\
& & 10K & 0.107 & 0.584 & 0.336 & 0.449 & 0.409 & 0.94 & 67.61\% \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table*}
\subsection{RQ1: Can we provide effective cached answers?}
The results of the experiments conducted on the three \textsc{CAsT}\xspace datasets with the \textit{no-caching} baseline, static-\textsf{CACHE}\xspace, and dynamic-\textsf{CACHE}\xspace\ are reported in Table~\ref{tb:effectiveness}.
For each dataset and for both the static and dynamic versions of \textsf{CACHE}\xspace, we vary the value of the cache cutoff $k_c$ as discussed in Sec.~\ref{ssec:expsetup}, and highlight with the symbol $\blacktriangledown$ the statistically significant differences (two-sample t-test with $p<0.01$) w.r.t. the \textit{no-caching} baseline. The best results for each dataset and effectiveness metric are shown in bold.
By looking at the figures in the table, we see that static-\textsf{CACHE}\xspace returns worse results than \textit{no-caching} for all the datasets, most of the metrics, and cache cutoffs $k_c$ considered. However, in a few cases, the differences are not statistically significant. For example, we observe that static-\textsf{CACHE}\xspace on \textsc{CAsT}\xspace 2019 with $k_c=10K$ does not statistically differ from \textit{no-caching} for all metrics but MAP@200. The reuse of the embeddings retrieved for the first queries of \textsc{CAsT}\xspace 2019 conversations is thus so high that even the simple heuristic of statically caching the top $10K$ embeddings of the first query allows the system to effectively answer the following queries without further interactions with the back-end.
As expected, we see that by increasing the number $k_c$ of statically cached embeddings from $1K$ to $10K$, we improve the quality for all datasets and metrics. Interestingly, we observe that static-\textsf{CACHE}\xspace performs relatively better at small query cutoffs: in the P@1 column, in 5 cases out of 12, the results are not statistically different from those of \textit{no-caching}.
We explain this behavior by looking again at Figure \ref{fig:overlap}: when an incoming query $q_b$ is close to a previously cached one, i.e., $\hat{r}_b \geq 0$, it is likely that the relevant documents for $q_b$ present in the cache are those most similar to $q_b$ among all those in \db. The larger the query cutoff $k$, the lower the probability that the least similar documents among the ones in {\sf NN}$(q_b, k)$ reside in the cache.
When considering dynamic-\textsf{CACHE}\xspace, based on the heuristic update policy discussed earlier, effectiveness improves remarkably. Independently of the dataset and the value of $k_c$, we achieve performance figures that are not statistically different from those measured with \textit{no-caching} for all metrics but MAP@200. Indeed, the metrics measured at small query cutoffs are in some cases even slightly better than those of the baseline, even if the improvements are not statistically significant: since the embeddings relevant for a conversation are tightly clustered, retrieving them from the cache rather than from the whole index in some cases reduces noise and yields higher accuracy.
MAP@200 is the only metric for which some configurations of dynamic-\textsf{CACHE}\xspace perform worse than \textit{no-caching}. This is a consequence of tuning the threshold $\epsilon$ with a focus on small query cutoffs, i.e., the ones commonly considered important for conversational search tasks \cite{DaltonEtAl2020}.
\paragraph{RQ1.A: Effectiveness of the quality assessment heuristic}
The performance exhibited by dynamic-\textsf{CACHE}\xspace demonstrates that the quality assessment heuristic used to determine cache updates is highly effective.
To further corroborate this claim,
the $cov_{10}$ column of Table~\ref{tb:effectiveness} reports, for static-\textsf{CACHE}\xspace and dynamic-\textsf{CACHE}\xspace, the mean coverage for $k=10$, measured by averaging Eq.~\eqref{eq:acov} over all the conversational queries in the datasets. We recall that this measure counts the cardinality of the intersection between the top $10$ elements retrieved from the cache and the exact top $10$ elements retrieved from the whole index, divided by $10$. The $cov_{10}$ values for static-\textsf{CACHE}\xspace range between 0.35 and 0.62, which explains the quality degradation captured by the metrics reported in the table. With dynamic-\textsf{CACHE}\xspace, instead, we measure values between 0.89 and 0.96, showing that, consistently across datasets and cache configurations, the proposed update heuristic successfully triggers a refresh of the cache content when a new topic is introduced in the conversation.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\linewidth]{figs/cov_200.pdf}
\caption{Correlation between $\hat{r}_b$ and \textsf{cov}$_{200}(q)$ for the \textsc{CAsT}\xspace 2019 train queries, using static-\textsf{CACHE}\xspace and $k_c=1K$. The vertical black dashed line corresponds to $\hat{r}_b = 0.07$, the tuned cache update threshold value $\epsilon$ used in the experiments.
}\label{fig:rb_hat200}
\end{figure}
\raf{To gain further insights into RQ1.A, we conducted additional experiments aimed at understanding whether the hyperparameter $\epsilon$ driving the dynamic-\textsf{CACHE}\xspace updates can be fine-tuned for a specific query cutoff. Our investigation is motivated by the MAP@200 results reported in Table \ref{tb:effectiveness}, which are slightly lower than the baseline for $5$ out of $12$ dynamic-\textsf{CACHE}\xspace configurations. We ask whether it is possible to tune the value of $\epsilon$ to achieve MAP@200 results statistically equivalent to those of \textit{no-caching} without losing the efficiency advantages of our client-side cache.}
\raf{Similarly to Figure \ref{fig:rb_hat}, the plot in Figure \ref{fig:rb_hat200} shows the correlation between $\hat{r}_b$ and \textsf{cov}$_{200}(q)$ for the \textsc{CAsT}\xspace 2019 train queries with static-\textsf{CACHE}\xspace, $k=200$ and $k_c=1K$. Even at query cutoff $200$, we observe a strong correlation between $\hat{r}_b$ and the coverage metric of Eq. \ref{eq:acov}: most of the train queries with coverage $\textsf{cov}_{200} \leq 0.3$ have a value of $\hat{r}_b$ smaller than $0.07$, with a single query for which this rule of thumb does not strictly hold. Hence, we set $\epsilon = 0.07$ and re-run our experiments with dynamic-\textsf{CACHE}\xspace by varying the cache cutoff $k_c$ in $\{1K, 2K, 5K, 10K\}$.}
\raf{The results of these experiments, conducted on the \textsc{CAsT}\xspace 2019 dataset, are reported in Table \ref{tb:effectiveness200}.
As the table shows, increasing the value of $\epsilon$ from $0.04$ to $0.07$ improves the quality of the results returned by the cache at large cutoffs. Dynamic-\textsf{CACHE}\xspace now returns results that are always, even for MAP@200, statistically equivalent to the ones retrieved from the whole index by the \textit{no-caching} baseline (according to a two-sample t-test for $p<0.01$). The improved quality at cutoff $200$ comes, of course, at the cost of a decrease in efficiency. While for $\epsilon = 0.04$ (see Table \ref{tb:effectiveness}) we measured on \textsc{CAsT}\xspace 2019 hit rates ranging from $67.82$ to $75.29$, setting $\epsilon = 0.07$ strengthens the constraint on cache content quality and correspondingly increases the number of cache updates performed. Consequently, the hit rate now ranges from $46.55$ to $58.05$, which still represents a strong efficiency gain with respect to the \textit{no-caching} baseline.
}
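The dynamic update mechanism evaluated above can be condensed into a short decision skeleton. The sketch below is illustrative, not the paper's implementation: the estimator $\hat{r}_b$ (defined precisely in Algorithm \ref{algo:cache}) is abstracted as a callable, and a small random matrix stands in for the remote dense index.

```python
import numpy as np

rng = np.random.default_rng(0)
index = rng.normal(size=(1000, 8))   # toy stand-in for the remote dense index
cache_ids = []                       # document ids currently cached client-side

def nn(q, ids, k):
    """Exhaustive inner-product top-k restricted to the given document ids."""
    scores = index[ids] @ q
    return [ids[i] for i in np.argsort(-scores)[:k]]

def answer(q, r_hat_b, eps=0.07, k=3, k_c=100):
    """Decision skeleton of dynamic-CACHE: when the estimator r_hat_b
    predicts low coverage (below the tuned threshold eps), refresh the
    cache with the top-k_c results from the full index; otherwise answer
    the query locally from the cached embeddings."""
    global cache_ids
    miss = (not cache_ids) or r_hat_b(q) < eps
    if miss:  # cold cache or topic shift: fetch k_c embeddings from back-end
        cache_ids = sorted(set(cache_ids) | set(nn(q, list(range(len(index))), k_c)))
    return nn(q, cache_ids, k), ("miss" if miss else "hit")

q = rng.normal(size=8)
_, s1 = answer(q, r_hat_b=lambda _q: 1.0)  # cold cache: forced miss
_, s2 = answer(q, r_hat_b=lambda _q: 1.0)  # high predicted coverage: hit
print(s1, s2)  # -> miss hit
```

After a refresh, the cache contains a superset of the exact top-$k$ for the triggering query, so the hit path returns the exact results for it.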
\begin{table*}[htb!]
\centering
\caption{Retrieval performance on \textsc{CAsT}\xspace 2019 of the \textit{no-caching} baseline and dynamic-\textsf{CACHE}\xspace with $\epsilon = 0.07$. }\label{tb:effectiveness200}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{ccccccccc}
\toprule
& $k_c$ & {MAP@200} & {MRR@200} & {nDCG@3} & {P@1} & {P@3} & $cov_{200}$ & {Hit Rate} \\
\midrule
no-caching & -- & 0.194 & 0.647 & 0.376 & 0.497 & 0.495 & -- &--\\
\cmidrule{2-9}
\multirow{4}{*}{dynamic-\textsf{CACHE}\xspace}
& 1K & 0.193& 0.645 & 0.374 & 0.497 & 0.491 & 0.83 & 46.55\% \\
& 2K & 0.193 & 0.644 & 0.375 & 0.497 & 0.493 & 0.91 & 51.15\% \\
& 5K & 0.194 & 0.645 & 0.375 & 0.497 & 0.493 & 0.93 & 54.02\% \\
& 10K & 0.194 & 0.648 & 0.375 & 0.497 & 0.493 & 0.94 & 58.05\% \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table*}
\paragraph{RQ1.B: Impact of \textsf{CACHE}\xspace on client-server interactions}
The last column of Table \ref{tb:effectiveness} reports the cache hit rate, i.e., the percentage of conversational queries answered with the cached embeddings without interacting with the conversational search back-end. Of course, static-\textsf{CACHE}\xspace results in a trivial 100\% hit rate, since all the queries in a conversation are answered with the embeddings initially retrieved for the first query. This minimal workload on the back-end is, however, paid with a significant performance drop with respect to the \textit{no-caching} baseline.
With dynamic-\textsf{CACHE}\xspace, instead, we achieve high hit rates with the optimal answer quality discussed earlier. As expected, the greater the value of $k_c$, the larger the number of cached embeddings and the higher the hit rate. With $k_c=1K$, hit rates range between $56.02\%$ and $67.82\%$, meaning that even with the lowest cache cutoff tested, more than half of the conversation queries in the three datasets are answered directly by the cache, without forwarding the query to the back-end. For $k_c=10K$, the hit rate lies in the interval $[63.87\%, 75.29\%]$, with more than $3/4$ of the queries in the \textsc{CAsT}\xspace 2019 dataset answered directly by the cache.
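The hit rate is simply the fraction of hit outcomes over a conversation trace; a minimal sketch (the 10-utterance trace below is invented for illustration):

```python
def hit_rate(statuses):
    """Percentage of conversational queries answered by the cache alone."""
    return 100.0 * statuses.count("hit") / len(statuses)

# Hypothetical conversation with two back-end accesses:
# the initial cache fill plus one refresh triggered by a topic shift.
trace = ["miss"] + ["hit"] * 4 + ["miss"] + ["hit"] * 4
print(hit_rate(trace))  # -> 80.0
```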
If we consider the hit rate as a measure correlated with the amount of temporal locality present in the \textsc{CAsT}\xspace conversations, the 2019 dataset exhibits the highest locality: on this dataset, dynamic-\textsf{CACHE}\xspace with $k_c=1K$ achieves a hit rate higher than the ones measured for the $k_c=10K$ configurations on \textsc{CAsT}\xspace 2020 and 2021.
\paragraph{RQ1.C: Worst-case \textsf{CACHE}\xspace memory requirements}
\raf{The memory occupancy of static-\textsf{CACHE}\xspace is limited, fixed, and known in advance. The worst-case amount of memory required by dynamic-\textsf{CACHE}\xspace depends instead on the value of $k_c$ and on the number of cache updates performed during a conversation. The parameter $k_c$ establishes the number of embeddings added to the cache after every cache miss. Limiting the value of $k_c$ can be necessary to respect memory constraints on the client hosting the cache. Nevertheless, the larger $k_c$ is, the better the performance of dynamic-\textsf{CACHE}\xspace, thanks to the increased likelihood that upcoming queries in the conversation will be answered directly, without querying the back-end index. In our experiments, we varied $k_c$ in $\{1K, 2K, 5K, 10K\}$, always obtaining optimal retrieval performance thanks to the effectiveness and robustness of the cache-update heuristic.}
Regarding the number of cache updates performed, we consider as exemplary cases the most difficult conversations for our caching strategy in the three \textsc{CAsT}\xspace datasets, namely topic 77, topic 104, and topic 117 for \textsc{CAsT}\xspace 2019, 2020, and 2021, respectively. These conversations require the highest number of cache updates: 6, 7, 6 for $k_c=1K$ and 5, 6, 5 for $k_c=10K$, respectively.
Consider topic 104 of \textsc{CAsT}\xspace 2020, the toughest conversation for the memory requirements of dynamic-\textsf{CACHE}\xspace. At its maximum occupancy, after the last cache update, dynamic-\textsf{CACHE}\xspace stores at most $8 \cdot 1K = 8K$ embeddings for $k_c=1K$ and $7 \cdot 10K = 70K$ embeddings for $k_c=10K$. In fact, at a given time, dynamic-\textsf{CACHE}\xspace stores the $k_c$ embeddings retrieved for the first query in the conversation plus $k_c$ new embeddings for every cache update performed. In practice, the total number is lower due to the presence of embeddings retrieved multiple times from the index on the back-end. The actual number of cached embeddings for the case considered is $7.5K$ and $64K$ for $k_c=1K$ and $k_c=10K$, respectively.
Since each embedding is represented with $769$ floating point values, the maximum memory occupation for our largest cache is $64K \times 769 \times 4$ bytes $\approx 188$ MB. Note that dynamic-\textsf{CACHE}\xspace with $k_c=1K$, which achieves the same optimal performance as dynamic-\textsf{CACHE}\xspace with $k_c=10K$ on \textsc{CAsT}\xspace 2020 topic 104, dramatically decreases the maximum cache occupancy, to about $28$ MB.
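The worst-case footprint follows directly from the embedding count, the dimensionality, and the float width; a one-line check of the figure above (helper name is hypothetical):

```python
def cache_megabytes(n_embeddings, dim=769, bytes_per_value=4):
    """Worst-case client-side footprint of float32 document embeddings."""
    return n_embeddings * dim * bytes_per_value / 2**20

# 64K cached embeddings of 769 float32 values each:
print(round(cache_megabytes(64_000)))  # -> 188 (MB)
```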
\subsection{RQ2: How much does \textsf{CACHE}\xspace\ expedite the conversational search process?}
We now answer RQ2 by assessing the efficiency of the conversational search process in the presence of cache misses (RQ2.A) or cache hits (RQ2.B).
\paragraph{RQ2.A: What is the impact of the cache cutoff $k_c$ on the efficiency of the system in case of cache misses?}
We first conduct experiments to understand the impact of $k_c$ on the latency of nearest-neighbor queries performed on the remote back-end. To this end, we do not consider the costs of client-server communications, but only the retrieval time measured for answering a query on the remote index. Our aim is to understand whether the value of $k_c$ significantly impacts the retrieval cost. In fact, when we answer the first query in the conversation, or when dynamic-\textsf{CACHE}\xspace updates the cache after a miss (lines 1-3 of Algorithm \ref{algo:cache}), we retrieve from the remote index a large set of $k_c$ embeddings to increase the likelihood of storing in the cache documents relevant for successive queries. However, the query cutoff $k$ commonly used for answering conversational queries is very small, e.g., $1, 3, 5$, and $k \ll k_c$. Our caching approach can improve efficiency only if the cost of retrieving $k_c$ embeddings from the remote index is comparable to that of retrieving a much smaller set of $k$ elements. Otherwise, even if we remarkably reduce the number of accesses to the back-end, every retrieval of a large number of results for filling or updating the cache would jeopardize the cache's efficiency benefits.
We conduct the experiment on the \textsc{CAsT}\xspace 2020 dataset by reporting the average latency (in msec.) of performing {\sf NN}$(q, k_c)$ queries on the remote index. Due to the peculiarities of the FAISS library implementation previously discussed, the response time is measured by retrieving the top-$k_c$ results for a batch of $216$ queries, i.e., the \textsc{CAsT}\xspace 2020 test utterances, and by averaging the total response time (Table \ref{tb:efficiency1}). Experimental results show that the back-end query response time is approximately $1$ second and is almost unaffected by the value of $k_c$. This is expected, as exhaustive nearest-neighbor search requires computing the distances between the query and all indexed documents, plus the negligible cost of maintaining the top-$k_c$ closest documents in a min-heap. The result thus confirms that large $k_c$ values do not jeopardize the efficiency of the whole system when cache misses occur.
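That the back-end latency is nearly independent of $k_c$ can be seen directly in the structure of exhaustive search: the score computation touches every indexed embedding regardless of the cutoff, while top-$k_c$ selection is a cheap partial sort. A minimal NumPy sketch (the experiments use FAISS; the names and toy data here are illustrative):

```python
import numpy as np

def exhaustive_nn(index, q, k):
    """Brute-force top-k: the O(N*d) score computation is paid regardless
    of k, while selecting the k best afterwards is a cheap partial sort --
    which is why retrieving k_c = 10K costs barely more than k = 3."""
    scores = index @ q                     # dominant cost: one dot product per doc
    top = np.argpartition(-scores, k)[:k]  # unordered k best, O(N)
    return top[np.argsort(-scores[top])]   # order only the k survivors

rng = np.random.default_rng(42)
index, q = rng.normal(size=(5000, 16)), rng.normal(size=16)
small, large = exhaustive_nn(index, q, 3), exhaustive_nn(index, q, 200)
```

Both calls scan all 5000 documents once; the top-3 of the small cutoff coincides with the first 3 of the large one.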
\begin{table*}[h!]
\centering
\caption{Average response time (msec.) for querying the FAISS back-end (no-caching) or the {static-\textsf{CACHE}\xspace} and {dynamic-\textsf{CACHE}\xspace} in case of cache hit.\label{tb:efficiency1}}
\begin{tabular}{ccccc}
\toprule
& \multicolumn{4}{c}{$k_c$} \\
\cmidrule{2-5}
& {1K} & {2K} & {5K} & {10K} \\
\midrule
no-caching & 1,060 &1,058 & 1,061 & 1,073 \\
\midrule
{static-\textsf{CACHE}\xspace} & 0.14 & 0.30 & 0.78 & 1.59 \\
{dynamic-\textsf{CACHE}\xspace} & 0.36 & 0.70 & 1.73 & 3.48 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{RQ2.B: How much faster is answering a query from the local cache rather than from the remote index?}
The second experiment measures the average retrieval time for querying the client-side cache (line 4 of Algorithm \ref{algo:cache}) in case of a hit. We run the experiment for the two caches proposed, i.e., {static-\textsf{CACHE}\xspace} and {dynamic-\textsf{CACHE}\xspace}. While the former stores a fixed number of documents, the latter employs cache updates that add document embeddings to the cache during the conversation.
We report, in the last two rows of Table \ref{tb:efficiency1}, the average response time of top-3 nearest-neighbor queries resulting in cache hits for different configurations of {static-\textsf{CACHE}\xspace} and {dynamic-\textsf{CACHE}\xspace}. As before, latencies are measured on batches of $216$ queries, i.e., the \textsc{CAsT}\xspace 2020 test utterances, by averaging the total response time. The results show that, in case of a hit, querying the cache requires on average less than $4$ milliseconds, more than $250$ times faster than querying the back-end. We observe that, as expected, the hit time increases linearly with the size of the {static-\textsf{CACHE}\xspace}.
We also note that {dynamic-\textsf{CACHE}\xspace} shows slightly higher latency than {static-\textsf{CACHE}\xspace}. This is due to the cache updates performed during the conversation, which increase the number of embeddings stored in the cache.
This result shows that the use of a cache in conversational search yields a speedup of up to four orders of magnitude, i.e., from seconds to a few tenths of a millisecond, when a query is answered by the local cache instead of the remote index.
We can now finally answer RQ2, i.e., how much \textsf{CACHE}\xspace\ expedites the conversational search process, by computing the average overall speedup achieved by our caching techniques on an entire conversation. Assuming that the average conversation is composed of $10$ utterances, the \textit{no-caching} baseline that always queries the back-end leads to a total response time of about $10 \times 1.06 = 10.6$ seconds. Instead, with {static-\textsf{CACHE}\xspace} we perform only one retrieval from the remote index, for the first utterance, while the remaining queries are resolved by the cache. Assuming the use of {static-\textsf{CACHE}\xspace} with 10K embeddings, i.e., the one with the highest latency, the total response time for the whole conversation is $1.06 + (9 \cdot 0.00159) = 1.074$ seconds, with an overall speedup of about $9.87\times$ over \textit{no-caching}. Finally, the use of {dynamic-\textsf{CACHE}\xspace} implies possible cache updates that may increase the number of queries answered using the remote index. In detail, {dynamic-\textsf{CACHE}\xspace} with 10K embeddings obtains a hit rate of about $64$\% on \textsc{CAsT}\xspace 2020 (see Table \ref{tb:effectiveness}). This means that, on average, we forward $1 + (9 \cdot 0.36) = 4.24$ queries to the back-end, costing in total $4.24 \cdot 1.06 \approx 4.49$ seconds. The remaining cost comes from cache hits: on average $5.76$ hits, requiring $5.76 \cdot 0.00348 \approx 0.02$ seconds, for a total response time for the whole conversation of about $4.51$ seconds. This leads to a speedup of about $2.3\times$ with respect to the \textit{no-caching} solution.
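This back-of-the-envelope computation can be packaged into a small expected-latency model (an illustrative sketch; the per-query timings are the averages measured in our experiments):

```python
def conversation_seconds(n_utt, hit_rate, t_backend, t_cache):
    """Expected end-to-end latency of a conversation: the first utterance
    always goes to the back-end; each later one misses with prob. 1 - hit_rate."""
    misses = 1 + (n_utt - 1) * (1 - hit_rate)
    hits = (n_utt - 1) * hit_rate
    return misses * t_backend + hits * t_cache

no_cache = conversation_seconds(10, 0.0, 1.06, 0.0)      # always query back-end
dynamic = conversation_seconds(10, 0.64, 1.06, 0.00348)  # CAsT 2020, k_c = 10K
print(round(no_cache, 2), round(dynamic, 2), round(no_cache / dynamic, 1))
# -> 10.6 4.51 2.3
```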
The above figures confirm the feasibility and the computational performance advantages of our client-server solution for caching historical embeddings for conversational search.
\section{Related Work}
\label{sec:related}
Our contribution relates to two main research areas. The first, which has recently attracted significant interest, is \textit{Conversational Search}.
Specifically, our work focuses on the ability of neural retrieval models to capture the semantic relationship between conversation utterances and documents and, more centrally, on the efficiency aspects of neural search.
The second related area is \textit{Similarity Caching} that was initially investigated in the field of content-based image retrieval and contextual advertisement.
\paragraph{Neural approaches for conversational search}
\fm{Conversational search focuses on retrieving relevant documents from a collection to fulfill user information needs expressed in a dialogue, i.e., sequences of natural-language utterances expressed in oral or written form~\cite{https://doi.org/10.48550/arxiv.2201.05176,10.1145/3269206.3271776}.} Given the nature of speech, these queries often lack context and are grammatically poorly formed, complicating their processing. To address these issues, it is natural to exploit past queries and their system response, if available, in a given conversation to build up a \textit{context history}, and use this history to enrich the semantic contents of the current query. The context history is typically used to rewrite the query in a self-contained, decontextualized query, suitable for ad-hoc document retrieval~\cite{10.1145/3446426,Mele2021AdaptiveUR,Yang2019QueryAA,10.1145/3397271.3401130,10.1145/3498557}.
Lin \emph{et al.} propose two conversational query reformulation methods based on the combination of term importance estimation and neural query rewriting~\cite{10.1145/3446426}. For the latter, the authors reformulate conversational queries into natural and human-understandable queries with a pretrained sequence-to-sequence model. They also use reciprocal rank fusion to combine the two approaches, yielding state-of-the-art retrieval effectiveness in terms of NDCG@3 compared to the best submission of the Text REtrieval Conference (TREC) Conversational Assistant Track (CAsT) 2019.
Similarly, Voskarides \emph{et al.} focus on multi-turn passage retrieval by proposing QuReTeC (Query Resolution by Term Classification), a neural query resolution model based on bidirectional transformers, together with a distant supervision method to automatically generate training data using query-passage relevance labels~\cite{10.1145/3397271.3401130}. The authors incorporate QuReTeC in a multi-turn, multi-stage passage retrieval architecture and show its effectiveness on the TREC CAsT dataset.
Others approach the problem by leveraging pre-trained generative language models to directly generate the reformulated queries \cite{9413557,10.1145/3437963.3441748,10.1145/3397271.3401323}. Other studies combine approaches based on term selection strategies and query generation methods \cite{kumar-callan-2020-making,10.1145/3446426}.
\fm{Xu \emph{et al.} propose to track the context history on a different level, i.e., by exploiting user-level historical conversations~\cite{xu-etal-2020-user}. They build a structured per-user memory knowledge graph to represent users' past interactions and manage current queries. The knowledge graph is dynamically updated and complemented with a reasoning model that predicts optimal dialog policies to be used to build the personalized answers.}
Pre-trained language models, such as BERT~\cite{bert}, learn semantic representations called \emph{embeddings} from the contexts of words and, therefore, better capture the relevance of a document w.r.t.\ a query, with substantial improvements over the classical approach in the ranking and re-ranking of documents~\cite{lin2020pretrained}.
Recently, several efforts have exploited pre-trained language models to represent queries and documents in the same dense latent vector space, using the inner product to compute the relevance score of a document w.r.t.\ a query.
In conversational search, the representation of a query can be computed in two different ways. In one case, a stand-alone contextual query understanding module reformulates the user query $q$ into a rewritten query $\hat{q}$, exploiting the context history $H_q$~\cite{https://doi.org/10.48550/arxiv.2201.05176}, and then a query embedding $\mathcal{L}(\hat{q})$ is computed. In the other case, the learned representation function is trained to receive as input the query $q$ together with its context history $H_q$, and to generate a query embedding $\mathcal{L}(q, H_q)$~\cite{convdr}.
In both cases, dense retrieval methods are used to compute the query-document similarity, by deploying efficient nearest neighbor techniques over specialised indexes, such as those provided by the FAISS toolkit~\cite{JDH17}.
\paragraph{Similarity caching}
Similarity caching is a variant of classical exact caching in which the cache can return items that are similar, but not necessarily identical, to those queried. Similarity caching was first introduced by Falchi \emph{et al.}, who proposed two caching algorithms possibly returning approximate result sets for k-NN similarity queries \cite{Lucchese08}. The two caching algorithms differ in the strategies adopted for building the approximate result set and deciding its quality based on the properties of metric spaces discussed in Section \ref{sec:architecture}. The authors focused on large-scale content-based image retrieval and conducted tests on a collection of one million images, observing a significant reduction in average response time. Specifically, with a cache storing at most 5\% of the total dataset, they achieved hit rates exceeding 20\%. In subsequent works, the same authors analyzed the impact of similarity caching on retrieval from larger collections with real user queries \cite{LuccheseEDBT,Lucchese12}.
Chierichetti \emph{et al.} propose a similar caching solution that is used to efficiently identify advertisement candidates on the basis of those retrieved for similar past queries \cite{Chierichetti09}. Finally, Neglia \emph{et al.} propose an interesting theoretical study of similarity caching in the offline, adversarial, and stochastic settings \cite{SimCache22}, aimed at understanding how to compute the expected cost of a given similarity caching policy.
\smallskip
We capitalize on these seminal works by exploiting the properties of similarity caching in metric spaces for a completely different scenario, i.e., dense retrievers for conversational search. Differently from image and advertisement retrieval, our use case is characterized by the similarity among successive queries in a conversation, enabling a novel solution based on integrating a small similarity cache in the conversational client. Our client-side similarity cache answers most of the queries in a conversation without querying the main index hosted remotely.
The work most similar to ours is that of Sermpezis \emph{et al.}, who propose a similarity-based system for recommending alternative cached content to a user when their exact request cannot be satisfied by the local cache \cite{SoftCache}. The contribution is related because it proposes a client-side cache where similar content is looked up, although its focus is on how to statically fill the local caches on the basis of user profiles and content popularity.
\section{Conclusion}
\label{sec:conclusions}
We introduced a client-side, document-embedding cache for expediting conversational search systems. Although caching is extensively used in search, we take a closer look at how it can be effectively and efficiently exploited in a novel and challenging setting: a client-server conversational architecture exploiting state-of-the-art dense retrieval models and a novel metric cache hosted on the client-side.
Given the high temporal locality of the embeddings retrieved for answering the utterances in a conversation, a cache can provide a great advantage in expediting conversational systems. We first show that both the queries and the documents in a conversation lie close together in the embedding space and that, given these interaction and query properties, we can exploit the metric properties of distance computations in a dense retrieval context.
We propose two types of caching and compare their results in terms of both effectiveness and efficiency with respect to a \textit{no-caching} baseline using the same back-end search solution. The first is static-\textsf{CACHE}\xspace, which populates the cache with documents retrieved on the basis of the first query of a conversation only. The second, dynamic-\textsf{CACHE}\xspace, adds an update mechanism that comes into play when we determine, via a precise and efficient heuristic strategy, that the current contents of the cache might not provide relevant results.
The results of extensive and reproducible experiments conducted on the \textsc{CAsT}\xspace datasets show that dynamic-\textsf{CACHE}\xspace achieves hit rates of up to 75\% with answer quality statistically equivalent to that of the \textit{no-caching} baseline. In terms of efficiency, the response time varies with the size of the cache; nevertheless, queries resulting in a cache hit are up to three orders of magnitude faster than those processed on the back-end (accessed only for cache misses by dynamic-\textsf{CACHE}\xspace and for all queries by the \textit{no-caching} baseline).
We conclude that our \textsf{CACHE}\xspace solution for conversational search is viable and effective, and it opens the door to significant further investigation. \fm{Its client-side organization permits, for example, the effective integration of models of user-level contextual knowledge. Equally interesting is the investigation of user-level, personalized query rewriting strategies and neural representations.}
\begin{acks}
This work is supported by the European Union – Horizon 2020 Program under the scheme “INFRAIA-01-2018-2019 – Integrating Activities for Advanced Communities”, Grant Agreement n.871042, “SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics” (http://www.sobigdata.eu).
\end{acks}
\section*{Results}
\vspace{-0.3cm}
\textbf{Cavity with several mechanical modes} The general Hamiltonian for $N$ mechanical resonators with frequencies $\omega_j$, individually coupled to a common cavity mode, written in a frame rotating with the pump, becomes
\begin{align}
\label{eq:h2}
H = - \hbar \Delta a^\dag a + \hbar \sum_{j=1}^N \omega_j b_j^\dag b_j - \hbar \left(a^\dag + a \right) \sum_{j=1}^N G_j \left(b_j^\dag + b_j \right)
\end{align}
Here, $a$ is the annihilation operator for the cavity mode, $b_j$ are those of the mechanical resonators, and $x_{0j} = \sqrt{\hbar/2 m_j \omega_j}$ is the zero-point amplitude. If one assumes roughly similar mechanical resonators, viz.~$\omega_j \sim \omega_m$ and $G_j \sim G$, the cavity becomes nearly resonant with all of them at the red-sideband detuned pump condition $\Delta = -\omega_m$.
Coupling of the mechanical resonators via the cavity ``bus'' can be anticipated to become significant when the mechanical spectra begin to overlap, i.e., when their width increased by the radiation pressure, $\gamma_{\mathrm{eff}} = \gamma_m + \gamma_{\mathrm{opt}}$, grows comparable to the mechanical frequency spacing. Here, $\gamma_{\mathrm{opt}} = 4 G^2/\gamma_c$. The equations of motion following from Eq.~(\ref{eq:h2}) allow one to verify this assumption. In the following we focus on two resonators, $N=2$. The result of such a calculation, detailed in the Supplementary Information, is that the resonators experience an effective coupling with energy $G_{12} = G_1 G_2/\gamma_c$.
In the limit of strong coupling, $\sqrt{(G_1^2+G_2^2)/2} \gg \gamma_c/4$, the system is best described as a combination of two ``bright'' modes with linewidths $\sim \gamma_c/2$, and of a ``dark'' mode having a long lifetime $\sim \gamma_m$. In the latter, the two mechanical resonators oscillate out of phase and synchronize at a common frequency $(\omega_1+\omega_2)/2$. In any case, the mode structure can be written in terms of the normal coordinates for mode $n$: for the position quadrature, $X^{(n)} = D_c^{(n)} q_c + \sum_{j=1}^N D_j^{(n)} q_j$, and similarly for the momentum quadrature $P^{(n)}$.
In the experiment, we use doubly clamped flexural beams fabricated by ion beam milling of aluminum \cite{Sulkko:2010ih}, having an ultranarrow $\sim 10$ nm vacuum slit to the opposite end of the cavity for maximizing the coupling. The cavity is a $\lambda/2$ floating microstrip similar to Ref.~\cite{MechAmpPaper}, resonating at $\omega_c/(2 \pi) = 6.98$ GHz. The total cavity linewidth $\gamma_{c} = \gamma_E + \gamma_I \simeq (2 \pi)
\times 6.2$ MHz is the sum of the internal damping $\gamma_I/(2\pi) = 1.4$ MHz and the external damping $\gamma_E/(2\pi) = 4.8$ MHz. At the highest powers discussed, however, we obtain a decreased $\gamma_I/(2\pi) = 0.9$ MHz, typical of a dielectric loss mechanism.
There are a total of three beams as shown in Fig.~1c. Two of the beams have a large zero-point coupling of $g_1 x_{01} /2\pi = 39$ Hz, and $g_2 x_{02} /2\pi = 44$ Hz. The frequencies of beams 1 and 2 were relatively close to each other, $\omega_2 - \omega_1 \sim (2 \pi) \, 450$ kHz, such that it is straightforward to obtain an effective coupling $G_{12}$ of the same order. The third beam had an order of magnitude smaller coupling, and hence we neglect it in the full calculation, setting $N=2$ in Eq.~(\ref{eq:h2}) in what follows.
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{Fighyb_2012-04b.eps}
\caption{\textbf{Hybridization of microwave photons and two radio-frequency phonons}. \textbf{a}, Measured reflection coefficient of the probe microwave while a strong pump is applied at the red sideband, $\Delta \simeq - \omega_1$, for increasing pump amplitude, from bottom to top. \textbf{b}, \textbf{c}, Zoomed-in views of (a). An excellent agreement with theory is obtained with a reflection calculation using the complete model based on Eq.~(\ref{eq:h2}) for beams 1 and 2 (solid lines). The dashed lines show a fit with an independent-resonator model for beam 1. The narrow feature at 6.982 GHz is due to beam 3. The curves are displaced vertically by 3 dB for clarity. \textbf{d}, Normal modes of oscillation expressed as linear combinations of the photon and the two phonons for the dark mode. The parameters are as in \textbf{c}. The arrow denotes the highest experimental value. \textbf{e}, Theory plot for higher $n_P$; the dark mode is expected to become narrower and more and more phonon-like. Here, $\Delta = -(\omega_1 + \omega_2)/2$. For simplicity, we take $g^{(2)}_j = 0$ in \textbf{d} and \textbf{e}.}\label{fig2}
\end{figure*}
The measurements are conducted at a cryogenic temperature below the superconducting transition temperature of the Al film ($\sim 1$ K) in order to reduce losses. We used a dilution refrigerator setup as in Ref.~\cite{MechAmpPaper}, down to 25 mK, which allows for a low mechanical mode temperature in equilibrium. Apart from this, cryogenic temperatures or superconductivity are not necessary \cite{Weig2011}.
\vspace{0.4cm}
\textbf{Experimental data}
The modes can be probed by adding another tone, the probe, with frequency $\omega_{\mathrm{in}}$ to the experimental setup \cite{Groeblacher:2009eh,Teufel:2011ha,MechAmpPaper}. The maximum pump power we can reach is $4 \,\mu$W, inducing a coherent photon occupancy of $n_P \sim 6 \times 10^8$ in the cavity and an effective coupling $G_{12} =(2\pi)\times 150$ kHz (with $G_1 = 0.93$ MHz, $G_2 = 1.0$ MHz). In Fig.~2a, we display how the cavity absorption changes with increasing pump power. The two peaks at the bottom of the cavity dip are due to beams 1 and 2. They broaden with increasing $n_P$, finally leaving behind a narrow dip between them. The overall lineshape is clearly not a sum of two Lorentzian peaks.
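As a rough cross-check of these numbers (an illustration, not part of the paper's analysis; exact agreement is not expected since $n_P$ is only quoted to one significant figure), the multiphoton couplings should follow from $G_j = \sqrt{n_P}\, g_j x_{0j}$:

```python
import math

# Single-photon couplings g_j x_{0j}/2pi quoted in the text, in Hz
g1, g2 = 39.0, 44.0
n_P = 6e8  # pump photon occupancy quoted in the text

G1 = math.sqrt(n_P) * g1  # multiphoton coupling G_1/2pi
G2 = math.sqrt(n_P) * g2  # multiphoton coupling G_2/2pi
print(f"G1/2pi ~ {G1/1e6:.2f} MHz, G2/2pi ~ {G2/1e6:.2f} MHz")
```

Both values land within roughly ten percent of the quoted 0.93 MHz and 1.0 MHz.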
In Fig.~2c we show a zoomed-in view of the two peaks, together with theoretical predictions. One immediately sees that an attempt to simulate either peak by a single mechanical resonator coupled to the cavity ($N=1$ in the analysis), thereby neglecting the cavity-mediated coupling, fails to explain the lineshape at large $n_P \gtrsim 10^8$. The full simulation with both beams 1 and 2 included, however, produces a remarkable agreement with the experiment. In order to create these curves, we further take into account a shift of the mechanical frequencies due to the second-order interaction \cite{Rocheleau:2010jd}, given as $-\frac{1}{2} g^{(2)}_j n_P$, where $g^{(2)}_j = x_{0j} ^2 \omega_c /2C \left( \partial^2 C_j/\partial x^2_j \right) \sim (2\pi) \, 3 \times 10^{-4}$ Hz is the second-order coupling coefficient. The third beam, visible in Fig.~2c as a narrow feature just below 6.982 GHz, was later fitted using a single-resonator calculation. This is justified because its mixing is negligible owing to its weak coupling.
The narrow dip between the peaks manifests the onset of the dark mode, where the cavity participation approaches zero with growing $G_j$, as occurs also in other tripartite systems \cite{Tripartite}. As usual for weakly decaying states, the dark modes can be useful for storage of quantum information \cite{EITrmp,PainterMix2010}. The other two dips (best discerned in the theoretical plots at higher $G_j$, see Fig.~2e) are the bright modes, and they retain a fully mixed character of all three degrees of freedom. In Fig.~2d we display the predicted mode expansion coefficients as a function of coupling. The asymmetry in the roles of the two mechanical modes is sensitive to parameters. At the maximum coupling in the experiment ($G_j/\gamma_c \simeq 0.15$), the normal modes are well mixed: the dark mode can be expressed as $\left[D_c, D_1, D_2 \right] \sim \left[0.3, 0.3, 0.9 \right]$.
\begin{figure}[h]
\includegraphics[width=0.95\linewidth]{Figdark_2012-04b.eps}
\caption{\textbf{Pump detuning measurements.} \textbf{a}, Measured spectrum of the tripartite cavity electromechanical system when pumped at a fixed input power such that at resonance, $\Delta \sim \left( -\omega_{1}, -\omega_{2} \right)$, $n_P \simeq 5.6\times 10^8$. The pump frequency was slowly swept from left to right. \textbf{b}, theory prediction.}\label{figdark}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{Figcool_2011-12c.eps}
\caption{\textbf{Sideband cooling of motion down to 1.8 thermal quanta}. \textbf{a}, cavity output spectra representing the thermal motion peaks of beams 1 and 2 measured at 345 mK and 35 mK, with $n_P = 4 \times 10^5$. \textbf{b}, calibration of the detection according to the temperature dependence of the mechanical energy $ n_m \hbar \omega_m = k_B T$, given by the area of the peak (data for beam 2). \textbf{c}, output spectra of beam 2 at pump occupation $n_P$ increasing from top to bottom. The curves are shifted vertically by 0.3 y-axis units for clarity. The corresponding mechanical occupations are written on the right. \textbf{d}, progress of back-action cooling (beam 2) as a function of pump occupation. The mechanical mode temperature is given by circles, and the cavity temperature by squares. Red: high temperature, 245 mK; blue, base temperature (28 mK). The ideal theoretical behavior is plotted by the solid lines. Unless the data point for $n_c^T$, including its error bars, was of positive sign, only the absolute value of the error bar is plotted. }\label{figcool}
\end{figure*}
One can also vary the pump detuning, thereby effectively detuning the cavity and the mechanical resonators \cite{Groeblacher:2009eh,Teufel:2011ha}. An anticrossing of the cavity and mechanical modes is seen in Fig.~3a up to detuning $\Delta \sim -\omega_m$, above which the system exhibits parametric oscillations. The pertinent simulation, Fig.~3b, reproduces the main features of the measurement, except for a bending of the cavity frequency towards lower values in an upward sweep of $\Delta$. We expect such possibly hysteretic behavior of the cavity frequency to be due to second-order effects beyond the present linear model.
\vspace{0.4cm}
\textbf{Sideband cooling}
We now turn to showing how the tripartite system resides in a nearly pure quantum state during the mode-mixing experiments. The mechanical resonators can be sideband-cooled by the radiation-pressure back-action \cite{Regal:2008di,Kippenberg2009,Hertzberg:2010,Rocheleau:2010jd} under the effect of the pump which also creates the mode mixing. Although the ground state of a single mode was recently achieved this way \cite{Teufel2011b,AspelmeyerCool11}, cooling in multimode or nanowire systems is less advanced.
The incoherent output spectrum can be due to either a finite phonon number $n_m$ or a non-equilibrium population $n_c^T$ of the cavity. The former manifests itself as a Lorentzian centered at the upper sideband, as in Fig.~4a, with the linewidth $\gamma_{\mathrm{eff}} = \gamma_m + 4 G^2/\gamma_c$, whereas the latter gives a small emission over a broader bandwidth. The analytical theory \cite{Rocheleau:2010jd} and experimental details are discussed in the Supplementary Information.
At low input power, when the cavity back-action damping is small and $n_c^T \sim 0$, the phonon number is set by thermal equipartition, $n_{mj} \hbar \omega_{j} = k_B T$. This is best observed from the area of the mechanical peak in the output spectrum, as in Fig.~4a, which should be linear in the cryostat temperature. The linearity holds down to about 150 mK (Fig.~4b), below which we observe intermittent heating that is sensitive to parameters.
The resulting cooling of beam 2 is displayed in Fig.~4d. At a relatively high temperature of 245 mK, the data points follow theory well up to input powers corresponding to $n_P \sim 5 \times 10^7$. At the minimum cryostat temperature of 28 mK, the mode is not thermalized, in a manner which depends irregularly on power. We obtain a minimum phonon number $n_{m2} \sim 1.8$, where the mechanical mode spends one third of the time in the quantum ground state. The other mechanical mode, beam 1, simultaneously cooled down to 2.5 quanta, possibly compromised by its slightly smaller coupling. The cooling is quite clearly bottlenecked by pump-microwave heating of the bath to which the mechanical mode is coupled. For the optimum data points, the starting temperature for cooling has risen to 150 mK, and above this it grows roughly quadratically with $n_P$.
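For orientation, the occupations involved can be estimated from equipartition. The mechanical frequency is not quoted in this excerpt, so the 32 MHz used below is a hypothetical placeholder of a plausible order of magnitude, not a measured parameter:

```python
import math

hbar, kB = 1.0545718e-34, 1.380649e-23
omega_m = 2 * math.pi * 32e6  # hypothetical mechanical frequency (rad/s)

def n_thermal(T):
    # classical equipartition estimate n ~ kB*T/(hbar*omega), valid for n >> 1
    return kB * T / (hbar * omega_m)

print(n_thermal(0.150))  # occupation at a 150 mK effective bath
print(n_thermal(0.028))  # occupation at the 28 mK base temperature
```

Under these assumptions, reaching $n_m \sim 1.8$ from a bath of order a hundred quanta requires back-action damping tens of times larger than the intrinsic mechanical damping.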
\section*{Discussion}
\vspace{-0.3cm}
The coupling of micromechanical resonators mediated by microwave photons in an on-chip cavity is a basic demonstration of the control of a multimode mechanical system near the quantum limit. The setup provides a flexible platform for creating and studying nonclassical motional states entangled over the chip \cite{OptoEntang}, or over macroscopic distances \cite{AspelmeyerPRL07,ligo}. The setup is easily extended to embody nearly arbitrarily many mechanical resonators, allowing for the design of an electromechanical metamaterial with microwave-tunable properties. For future quantum technology applications, even at elevated temperatures \cite{Weig2011}, data may be stored \cite{Painter2011} in the weakly decaying dark states identified in the present work.
\section*{Methods}
\small
\vspace{-0.3cm}
\textbf{Device fabrication}
The lithography defining the meandering cavity and the jigs for the beams consisted of a single layer of electron-beam exposure, followed by evaporation of 150 nm of aluminum on a fused silica substrate. Since the cavity is of planar structure, it has no potentially problematic cross-overs, which tend to have weak spots in their superconducting properties and hence could heat up the cavity. We thus expect the cavity to support a high critical current, limited only by the intrinsic properties of the strip. With a 2 $\mu$m wide strip, we expect the critical current to be several mA. With the maximum circulating currents in the cavity (Fig.~2), about 0.5 mA of peak value, there was no sign of nonlinearity in the cavity response.
The mechanical resonators were defined by Focused Ion Beam (FIB) cutting, as in Ref.~\cite{Sulkko:2010ih}. We used low gallium ion currents of 1.5 pA, which gives a nominal beam width of about 7 nm. In order to maximize the sputtering yield with minimal gallium contamination, we used a single cutting pass; otherwise, the cut beams tend to collapse onto the gate. With the current recipe, gaps down to 10 nm in width over 5-10 micron distances can be fabricated with about 50 \% yield.
\vspace{0.3cm}
\textbf{Theory of cavity-coupled resonators}
The Hamiltonian for $N$ mechanical resonators each coupled to one cavity mode via the radiation pressure interaction is
\begin{equation}
\label{eq:h1}
H = \hbar \omega_c a^\dag a + \hbar \sum_{j=1}^N \omega_j b_j^\dag b_j - \hbar a^\dag a \sum_{j=1}^N g_j x_{0j} \left(b_j^\dag + b_j \right) +H_P
\end{equation}
The pump with the Hamiltonian $H_P = 2 \hbar \sqrt{\frac{P_{P} \gamma_E}{\hbar \omega_c}} \left( a^{\dagger} + a \right) \cos {\omega_P t}$ induces a large cavity field with the number of pump photons $n_P \gg 1$.
Under the strong driving, the cavity-mechanics interaction can be linearized individually for each beam, resulting in Eq.~(\ref{eq:h2}). Further details of the calculations are given in the Supplementary Information.
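As a hedged sketch of this standard step (Eq.~(\ref{eq:h2}) itself is stated in the main text and not reproduced here, and factors of $\sqrt{2}$ depend on the convention chosen for $q_j$): writing $a \to \alpha + a$ with $|\alpha|^2 = n_P$, moving to the frame rotating at $\omega_P$, and keeping only terms linear in the fluctuations gives
\begin{equation*}
H \approx -\hbar \Delta\, a^\dag a + \hbar \sum_{j=1}^N \omega_j b_j^\dag b_j - \hbar \sum_{j=1}^N G_j \left(a^\dag + a\right) q_j ,
\end{equation*}
with $G_j = \sqrt{n_P}\, g_j$, consistent with the linearized equations of motion used in the Supplementary Information.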
\vspace{0.3cm}
\textbf{Sideband cooling}
For one beam, the measured output spectrum divided by system gain, $S_{\mathrm{out}}(\omega)$, carries information on the phonon number according to \cite{Rocheleau:2010jd}
\begin{align}
\label{eq:cool}
S_{\mathrm{out}} = \frac{\gamma_E}{\gamma_c} n_c^T + \frac{\gamma_E}{\gamma_c}\gamma_{\mathrm{opt}}\frac{\gamma_{\mathrm{eff}}}{\left(\omega-\omega_m\right)^2 + \gamma_{\mathrm{eff}}^2/4} \left( n_m - 2 n_c^T \right)
\end{align}
which is a Lorentzian centered at the cavity frequency. A non-zero base level is due to possible broadband emission from the cavity, arising from a thermal state with occupation $n_c^T$. We suppose that at the modest pump (cooling) powers of greatest interest, coupling of the mechanical resonators remains insignificant, and thus we can apply Eq.~(\ref{eq:cool}) separately for each resonator. Some deviations can, however, occur at the highest powers, with $n_P \sim 3 \times 10^8$.
In order to obtain the phonon number under strong back-action, we fit a Lorentzian to each peak as in Fig.~4c, obtaining independently the base level, amplitude and linewidth for every input power. These values are then compared to Eq.~(\ref{eq:cool}). This, however, still leaves too many unknowns to determine the occupations of the mechanics and the cavity, basically because of uncertainties in the attenuations of both the input and amplifier lines, which limit the accuracy of estimating $n_P$. We obtain a calibration using the linear temperature dependence, and by fitting the dependence of $\gamma_{\mathrm{eff}}$ on input power.
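The parameter extraction can be sketched on a synthetic spectrum as follows (all numbers are made up for illustration, including the 32 MHz center frequency; the actual analysis fits the measured spectra):

```python
import numpy as np

gamma_eff, omega_m = 2e3, 32e6   # illustrative linewidth and center (Hz)
base, amp = 0.1, 1.0             # illustrative base level and peak height

w = np.linspace(omega_m - 20e3, omega_m + 20e3, 40001)
S = base + amp * (gamma_eff**2 / 4) / ((w - omega_m)**2 + gamma_eff**2 / 4)

base_est = S.min()               # the tails give the base level
amp_est = S.max() - base_est     # peak height above the base
half = base_est + amp_est / 2
above = w[S >= half]
fwhm = above[-1] - above[0]      # full width at half maximum
print(base_est, amp_est, fwhm)   # fwhm recovers gamma_eff
```

In practice one would fit the Lorentzian by least squares rather than read off the extrema, but the recovered base level, amplitude, and linewidth are the same three quantities compared to Eq.~(\ref{eq:cool}).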
As opposed to, for instance, Ref.~\cite{Rocheleau:2010jd}, the bottleneck for cooling is not heating of the cavity by the strong pump: as seen in Fig.~4d, the cavity temperature increases only slightly up to the optimum cooling powers, corresponding to about 0.15 mA of peak current in the cavity. This may be due to the simple structure of the cavity.
The above analysis assumes the presence of nearly no non-equilibrium photons in the cavity at zero or low input powers. This can be investigated by looking for possible emission over about a cavity linewidth under these conditions. A complication arises because the cavity linewidth is so large that a small change is easily overwhelmed by modest standing waves in the cabling. We avoided this issue by using the temperature dependence of the cavity frequency, due to the kinetic inductance of the long microstrip: a broad emission peak moving with the cavity frequency would be easily distinguishable. We observed no such signal down to the level $n_c^T \lesssim 0.1$, which justifies the above analysis.
\vspace{1cm}
\normalsize
\textbf{Acknowledgements} We would like to thank S. Paraoanu and Lorenz Lechner for useful discussions. This work was supported by the Academy of Finland, by the European Research Council (grants No. 240362-Heattronics and 240387-NEMSQED), and by the Vaisala Foundation.
\bibliographystyle{naturemag}
\section{Response of coupled resonators}
\label{Two_b}
We detail here the theory of the response of a driven cavity coupled
to $N$ (with emphasis on the case $N=2$) mechanical resonators. In
particular, we show how it is possible to describe the system from two
complementary points of view. On one hand, the cavity-mechanics
interaction provides a dressing of the dynamics of the resonators,
leading to the definition of effective mechanical frequencies and
dampings. On the other hand, for strong coupling, we find that on top
of the normal mode splitting found for single resonators, the system
exhibits a dark mode which gets asymptotically decoupled from the
cavity, and whose linewidth tends to the bare linewidth of the
mechanical resonances.
We define the following symbols: The mechanical resonators have frequencies $\omega_j$ and linewidths $\gamma_j$. Their annihilation operators are denoted by $ b_j$ while $q_j = \frac{1}{\sqrt{2}}\left( b_j^\dag+b_j\right) $ and $p_j= \frac{i}{\sqrt{2}}\left( b_j^\dag-b_j\right)$ are the dimensionless amplitude and momentum of the mechanical
oscillations. The mechanical zero-point amplitude is $x_{0j} = \sqrt{\hbar/2 m_j \omega_j}$. Correspondingly, the cavity has the frequency $\omega_c$ and linewidth $\gamma_c=\gamma_E+\gamma_I$, consisting of the external and internal dissipation, respectively. $a$ is the annihilation operator for the cavity. The pump microwave (applied at the frequency $\omega_{P}$ near the cavity frequency) has the detuning from the cavity $\Delta=\omega_P-\omega_c$, and the power $P_P$.
The Hamiltonian for $N$ mechanical resonators each coupled to one cavity mode via the radiation pressure interaction is
\begin{equation}
\label{eq:hS1}
H = \hbar \omega_c a^\dag a + \hbar \sum_{j=1}^N \omega_j b_j^\dag b_j - \hbar a^\dag a \sum_{j=1}^N g_j x_{0j} \left(b_j^\dag + b_j \right) +H_P
\end{equation}
The pump with the Hamiltonian $H_P = 2 \hbar \sqrt{\frac{P_{P} \gamma_E}{\hbar \omega_c}} \left( a^{\dagger} + a \right) \cos {\omega_P t}$ induces a large cavity field with the number of pump photons $n_P \gg 1$. The number of quanta in the cavity due to the pump is
\begin{equation}
\label{nc}
n_P = \frac{P_{P} \gamma_E}{\hbar \omega_c} \frac{1}{\Delta^2 + \left(\frac{\gamma_c}{2}\right)^2}
\end{equation}
where angular frequency units are used.
The equations of motion following from the Hamiltonian, Eq.~(1) in the main text, linearized about a steady-state, can be written as \cite{masselsuppl}
\begin{align}
\label{eq:da}
\dot{a}&=i \Delta a - \frac{\gamma_c}{2} a+
i\sum_{j=1}^N G_j q_j +
\sum_{i=I,E}\sqrt{\gamma_{i}} a^{i}_{\rm in} \\
\label{eq:dq1}
\dot{q}_j&= \omega_j p_j\\
\label{eq:dp1}
\dot{p}_j&= - \omega_j q_j - \gamma_{j} p_j +
G_j \left(a^\dagger+a\right) + \xi_j,
\end{align}
where $a_{\rm in}^i$
are input fields to the cavity ($I$ and $E$ denoting internal and
external input fields, the previous describing internal dissipation
and the latter the coupling to the measurement setup), and $\xi_j$ are
fields driving the mechanical resonators.
Moreover,
$G_j=\sqrt{n_P}g_j$ describes the cavity driving (by photon number
$n_P$) enhanced coupling between the cavity field and resonator $j$. Fourier transforming, assuming a probe field $a_{\rm in}=a_{\rm in}^E$, and neglecting the noise terms $\xi_j$ yields, after some algebra,
\begin{align}
&X \equiv a^\dagger+a=
\tilde{\chi} \left(\omega \right) \sum_{j=1}^N G_j q_j
-i \sqrt{\gamma_E}\left[
\chi_{-\Delta}\left(\omega\right) a_{\rm in}+
\chi_{\Delta}\left(\omega\right)
a_{\rm in}^{\dagger}
\right]\\
&\label{eq:1}q_j=\chi_j(\omega) G_j X
\end{align}
where the response functions are
\begin{equation}
\label{eq:2}
\chi_{\Delta}(\omega)=\frac{1}{\Delta-\omega-i\gamma_c/2}, \quad \chi_j(\omega)=\frac{\omega_j}{\omega_j^2-\omega^2-i \gamma_j \omega}
\end{equation}
and
\begin{equation}
\label{eq:3}
\tilde{\chi}(\omega)= \chi_{-\Delta}(\omega)-\chi_{\Delta}(\omega)=\frac{2\Delta}{(\omega+i\gamma_c/2)^2-\Delta^2}.
\end{equation}
Solving for $X$, we get
\begin{equation}
\label{eq:4}
X=\sqrt{\gamma_E} M(\omega)
\left[\chi_{-\Delta}a_{\rm in}+\chi_{\Delta}a_{\rm in}^{\dagger} \right]
\end{equation}
with
\begin{equation}
\label{eq:5}
M(\omega)=\frac{1}{1-\tilde{\chi}(\omega)\sum_{j=1}^N G_j^2 \chi_j(\omega)}.
\end{equation}
This field is coupled to the cavity output field by
\begin{equation}
a_{\rm out}^{(\dagger)}=\sqrt{\gamma_E}a^{(\dagger)}-a_{\rm in}^{(\dagger)}.
\end{equation}
When the effect of $a_{\rm in}^\dagger$ on $a$ can be disregarded (see
below), we can write $a_{\rm out}=R(\omega) a_{\rm in}$, where
$R(\omega)=-i\gamma_E M(\omega) \chi_{-\Delta}(\omega) -1$ is the
reflection probability amplitude in the cavity. Figure 2 of the main
text shows its absolute value, corresponding to the measured
observable.
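A minimal numerical sketch of this reflection response for two resonators (parameters are illustrative, in units of $\gamma_c$, not the fitted experimental values; frequencies are in the frame rotating at the pump):

```python
gc, gE = 1.0, 0.5             # total and external cavity linewidths
w_sum = 5.0                   # (omega_1 + omega_2)/2
delta = -w_sum                # red-detuned pump
w1, w2 = w_sum * 1.0075, w_sum * 0.9925
gm = 1.5e-4 * w_sum           # mechanical linewidths (taken equal)
G1 = G2 = 0.1                 # multiphoton couplings

def R(w):
    # susceptibilities and M(omega) exactly as defined in the equations above
    chi_m = lambda wj: wj / (wj**2 - w**2 - 1j * gm * w)
    chi_c = lambda d: 1.0 / (d - w - 1j * gc / 2)
    chi_t = chi_c(-delta) - chi_c(delta)
    M = 1.0 / (1.0 - chi_t * (G1**2 * chi_m(w1) + G2**2 * chi_m(w2)))
    return -1j * gE * M * chi_c(-delta) - 1.0

print(abs(R(w1)))     # close to 1: transparency window at a mechanical line
print(abs(R(w_sum)))  # close to 0: the critically coupled cavity dip survives
```

With $\gamma_E = \gamma_c/2$ (critical coupling) the cavity dip reaches essentially zero between the mechanical lines, while each strongly coupled mechanical resonance restores $|R|\approx 1$.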
The effective dynamics of the system is qualitatively different in different regimes. For typical structures $\gamma_c \gg \gamma_j$. When $G_1^2+G_2^2 \ll (\gamma_c/4)^2$, the cavity response $\tilde{\chi}(\omega)$ is approximately independent of frequency within the relevant band of mechanical frequencies, and we can view the system as two mechanical resonators effectively coupled by the cavity (Sec.~\ref{sec:lowcoupling}). On the other hand, for large $G_j$ (Sec.~\ref{sec:strongcouplign}), the normal modes of the total system split. There are
two bright modes strongly coupled to the cavity, with resonance
frequencies $\omega_{b} \approx \Delta \pm \sqrt{\sum_j G_j^2}$ and
linewidths $\sim \gamma_c/2$, and one ``dark'' mode at around
$(\omega_1+\omega_2)/2$ and with a linewidth $\sim \gamma_j$. The
latter is almost decoupled from the cavity, with the coupling decreasing
with an increasing $G_j$.
\subsection{Weak-coupling regime}
\label{sec:lowcoupling}
In the following, we concentrate on the case of red-detuned driving,
$\Delta \approx -\omega_j$ in the sideband resolved case $\gamma_c
\ll \omega_j$, and study the response at frequencies $\omega \approx
-\Delta$. This allows us to approximate
\begin{equation}
\chi_j(\omega) \approx \frac{1}{2(\omega_j-\omega) -i\gamma_j}, \quad \tilde{\chi} \approx \frac{1}{\omega+\Delta+i\gamma_c/2}.
\label{eq:apprrespfuncs}
\end{equation}
The same approximation yields $\chi_\Delta(\omega) \ll
\chi_{-\Delta}(\omega)$ and we may hence disregard the input field
$a_{\rm in}^\dagger$ at these frequencies.
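A quick numerical check of the mechanical part of this approximation (arbitrary illustrative numbers in units of $\omega_j$):

```python
wj, gj = 1.0, 3e-5   # mechanical frequency and linewidth (illustrative)

errs = []
for w in (wj * (1 - 3e-4), wj * (1 + 3e-4)):
    chi_exact = wj / (wj**2 - w**2 - 1j * gj * w)
    chi_appr = 1.0 / (2 * (wj - w) - 1j * gj)
    errs.append(abs(chi_exact - chi_appr) / abs(chi_exact))
print(errs)  # relative errors are small close to resonance
```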
Let us restrict to the case of two mechanical resonators. Then the
response function $M(\omega)$ may be written in the form
\begin{align}
M(\omega)=& \frac{\left(\omega_1-\omega-i\gamma_1/2\right)
\left(\omega_2-\omega-i\gamma_2/2\right)}
{\left(\omega_{1}-\omega-i\gamma_{1}/2\right)
\left(\omega_{2}-\omega-i\gamma_{2}/2\right)-
\tilde{\chi}(\omega)\left[ G_1^2\left(\omega_2-\omega-i
\gamma_2/2\right)+ G_2^2\left(\omega_1-\omega-i
\gamma_1/2\right)\right]/2} \nonumber\\
\equiv & \frac{\left(\omega_1-\omega-i\gamma_1/2\right)
\left(\omega_2-\omega-i\gamma_2/2\right)}
{\left(\omega_{1}^c-\omega-i\gamma_{1}^c/2\right)
\left(\omega_{2}^c-\omega-i\gamma_{2}^c/2\right)}
\label{eq:6b}
\end{align}
We return to the definition of $\omega_j^{c}$ and $\gamma_j^{c}$
below, but now we rewrite the response in a more compact form
\begin{align}
\label{eq:7}
&X= \frac{\chi_{1}^c}{\chi_1} \frac{\chi_{2}^c}{\chi_2} \chi_{-\Delta} \sqrt{\gamma_E} a_{\rm in} \\\label{eq:q1}
&q_1= G_1 \frac{\chi_{2}^c}{\chi_2}\chi_{1}^c \chi_{-\Delta} \sqrt{\gamma_E} a_{\rm in} \\\label{eq:q2}
&q_2= G_2 \frac{\chi_{1}^c}{\chi_1}\chi_{2}^c \chi_{-\Delta} \sqrt{\gamma_E} a_{\rm in},
\end{align}
where $\chi_j^c=\chi_j(\omega_j \mapsto \omega_j^c)$.
This form is readily generalized for more than two mechanical resonators.
We now calculate $\omega_j^c$ and $\gamma_j^c$. For the sake of compactness, we absorb the broadening into the frequencies, allowing them to assume complex values
\begin{align}
\label{eq:9}
\omega_j\equiv\omega_j-i\gamma_j/2.
\end{align}
The frequencies $\omega_1^c$, $\omega_2^c$ appearing in the
denominator of Eq.~\eqref{eq:6b} can be expressed as
\begin{equation}
\label{eq:12}
\omega_{1,2}^c=\frac{1}{2}\left[\omega_1^e+\omega_2^e \pm
\sqrt{\left(\omega_1^e-\omega_2^e\right)^2+\tilde{\chi}(\omega)^2 G_1^2 G_2^2}\right]
\end{equation}
where
\begin{align}
\omega_j^e=\omega_j-\tilde{\chi}(\omega) G_j^2/2 \label{eq:opticalspring}
\end{align}
are the effective frequencies (and
dampings) of the mechanical resonators induced by the presence of the
cavity, in the absence of the other mechanical resonator. The
frequencies (and dampings) $\omega_j^c$ are thus the dressed
mechanical frequencies in the presence of the cavity-mediated coupling
between the mechanical resonators. Equation \eqref{eq:12} describes an
avoided crossing of the two mechanical resonators coupled by the
energy
\begin{equation}
G_{12} \equiv \frac{1}{2}G_1 G_2 i\tilde{\chi}(\omega) \approx \frac{G_1 G_2}{\gamma_c}.
\label{eq:gmech}
\end{equation}
The latter form is valid in the limit $\omega_j^c+\Delta \ll
\gamma_c$, where we can approximate $\tilde{\chi} \approx
-2i/\gamma_c$. For strong coupling or large initial frequency
difference, the determination of $\omega_{1,2}^c$ requires the
solution of Eq.~\eqref{eq:12} with the replacement
$\tilde{\chi}(\omega) \to\tilde{\chi}(\omega_{1,2}^c)$. The solutions
of such a calculation are shown in Fig.~\ref{fig:cfreqs}, plotting the
resulting frequency shifts and changes in the damping.
When $G_{12}\ll \omega_1^e-\omega_2^e$, we can disregard the
coupling between the mechanical resonators, and the response of the
cavity is a simple product of the individual responses. In the
experiments, we reach $G_{12} =(2\pi)\times 150$ kHz (with $G_1 = 0.93$ MHz, $G_2 = 1.0$ MHz), which is
roughly one third of the frequency difference between the mechanical
resonators, and allows for the coupling effect to show up in the
shape of the response curves. This is shown in
Fig.~\ref{fig:comparison}, which compares the coupled response to the
case of individual resonators.
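Since the cavity linewidth is not quoted in this excerpt, the weak-coupling relation can be turned around to infer it from the quoted numbers; this is a consistency check, not a measurement:

```python
# Quoted values, in frequency units (i.e. divided by 2*pi)
G1, G2, G12 = 0.93e6, 1.0e6, 150e3
gamma_c = G1 * G2 / G12   # G12 ~ G1*G2/gamma_c implies gamma_c of a few MHz
print(gamma_c / 1e6)

# G12 is indeed roughly a third of the ~450 kHz mechanical frequency difference
print(G12 / 450e3)
```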
Although the dressed response is affected by the coupling, the
frequency shifts are not easily observed from the cavity response as
they tend to be overwhelmed by the growing width $\sim {\rm
Im}(\omega_j^c)$. However, the frequency shifts can be seen in the
mechanical response $q(\omega)/a_{\rm in}$, plotted in
Fig.~\ref{fig:q_out}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{weffc.eps}
\includegraphics[width=0.45\textwidth]{gammaeffc.eps}
\caption{Values of $\omega_1^c$, $\omega_2^c$ in the weak-coupling
regime. Left: real part of the frequencies and right: effective
damping. On the left, the dashed lines show the approximation
with a frequency independent coupling $G_{12}$ and
disregarding the optical spring effect
(Eq.~(\ref{eq:opticalspring})), whereas on the right, the dashed
line shows the imaginary part of the effective frequencies
$\omega_j^e$. For the damping, the coupling between the
resonators shows up only in the strong-coupling regime. Here and in Figs.~\ref{fig:q_out}-\ref{fig:exact} the
curves have been calculated with $\Delta=5 \gamma_c$,
$\omega_{1,2}=|\Delta| \pm 0.0075\Delta$ and
$\gamma_1=\gamma_2=3\times 10^{-5}|\Delta|$, close to the
experimental values, and for simplicity assuming $G_1=G_2$.}
\label{fig:cfreqs}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{response_mech_max_r1_r2.eps}
\includegraphics[width=0.45\textwidth]{response_mech_max_r1_r2_eff.eps}
\caption{(Left) Comparison of the reflection probability
$|R(\omega)|$ for a single oscillator vs. that of two
oscillators. (Right) Comparison of the reflection probabilities
when including (solid line, calculated with $\omega_j^c$) or
excluding (dashed line, with $\omega_j^e$) the coupling-induced
renormalization of the frequencies ($G=0.1 \gamma_c$).}
\label{fig:comparison}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{mech_spectr.eps}
\caption{Mechanical spectrum of $|q_1|$ in the case of two (blue
line) and one (red line) mechanical resonators, showing the frequency
shift due to the coupling with the other resonator; an analogous
situation would be observed for $|q_2|$ ($G=0.1 \gamma_c$).}
\label{fig:q_out}
\end{figure}
\subsection{Strong-coupling regime}
\label{sec:strongcouplign}
In the strong-coupling limit $\tilde \chi^2 G_1^2 G_2^2 \gtrsim
(\omega_1^e-\omega_2^e)^2$, $\omega_{1,2}^c$ can be expressed as
\begin{equation}
\label{eq:22}
\omega_{1,2}^c=\frac{1}{2}\left[\omega_1+\omega_2-
\tilde{\chi}\left( G_1^2+
G_2^2\right)/2 \pm \sqrt{\tilde{\chi}^2\left(G_1^2+G_2^2\right)^2/4}\right],
\end{equation}
from which we obtain two equations for $\omega_1^c$ and $\omega_2^c$
\begin{align}
\label{eq:23}
\omega_1^c&= \frac{\omega_1+\omega_2}{2}+\tilde{\chi}(\omega_1^c)\left(G_1^2+ G_2^2\right)/2 \\
\omega_2^c&= \frac{\omega_1+\omega_2}{2}.
\end{align}
The second equation defines an asymptotic {\it dark mode} residing at
the frequency between the two mechanical frequencies, and having a
damping $(\gamma_1+\gamma_2)/2$, i.e., typically much lower than that
of the other dressed modes. For the first equation we have to include
the full frequency dependence of $\tilde \chi(\omega)$ from
Eq.~\eqref{eq:apprrespfuncs}. Multiplying the equation by $\tilde{\chi}^{-1}$ then yields a second-order equation for $\omega_1^c$, with the roots
\begin{equation}
\omega_1^c=\frac{1}{4} \left(-2 \Delta +\omega _1+\omega
_2+i \gamma _c\pm \sqrt{\left(-i \gamma _c+2 \Delta +\omega _1+\omega _2\right){}^2+8 G_1^2+8 G_2^2}\right).
\label{eq:strongcouplingomega}
\end{equation}
For simplicity, we consider in the following the case
$\Delta=-(\omega_1+\omega_2)/2$ which simplifies the expression to
\begin{equation}
\omega_1^c=-\Delta + \frac{1}{4}\left(i\gamma_c\pm \sqrt{8 (G_1^2+G_2^2)-\gamma_c^2}\right).
\end{equation}
When $G_1^2+G_2^2 > \gamma_c^2/8$, the real part of the
frequencies tends to
\begin{equation}
\omega_1^c \rightarrow -\Delta \pm G,
\end{equation}
where $G=\sqrt{(G_1^2+G_2^2)/2}$. Moreover, the linewidth of
these modes, whenever $G > \gamma_c/4$, is given by $\gamma_c/2$. In
this case these frequencies and linewidths can be seen in the cavity
response, as the cavity response function $|R(\omega)|$ has three
dips: the outer dips, of width $\sim \gamma_c/2$ correspond to the
(bright) modes $\omega_1^c$, whereas the inner narrow dip corresponds
to the (dark) mode $\omega_2^c$ (see Fig.~2e of the main text).
Let us try to understand these modes starting from the equations of
motion, by redefining the mechanical motion in terms of relative and
center-of-mass coordinates (weighted by the couplings $G_j$), i.e.,
\begin{align}
\label{eq:17}
q_a=\left( G_2 q_1 - G_1 q_2 \right)/\left(
G_2 + G_1 \right) \\
q_s=\left( G_1 q_1 + G_2 q_2 \right)/\left(
G_2 + G_1 \right).
\end{align}
Below, we show that in the strong-coupling limit the above frequency
$\omega_2^c$ describes the ``dark'' mode $q_a$ and the two frequencies
$\omega_{1\pm}^c$ the ``bright'' modes $q_s$.
The equations of motion for these modes are
\begin{subequations}
\begin{align}
\dot{a}&=i \Delta a + i(G_1+G_2) q_s + \sqrt{\gamma_E}a_{\rm in}\\
\dot{q}_s&=\omega_\Sigma p_s + \omega_\Delta p_a \\
\dot{q}_a&=\omega_\Sigma p_a + \omega_\Delta p_s \label{eq:19b}\\
\dot{p}_s&=-\omega_\Sigma q_s + \omega_\Delta q_a - \gamma p_s +
\frac{ G_1 G_2}{ G_1+ G_2} \left( a^\dagger + a\right) \\
\dot{p}_a&=-\omega_\Sigma q_a + \omega_\Delta q_s - \gamma p_a
\end{align}
\label{eq:commonq}
\end{subequations}
with $\omega_\Sigma=\left(\omega_1+\omega_2\right)/2$ and
$\omega_\Delta=\left(\omega_1-\omega_2\right)/2$. We have assumed that
the linewidths of the two mechanical resonators are the same. If there
is no frequency mismatch, the symmetric and antisymmetric modes are
uncoupled.
Solving Eqs.~\eqref{eq:commonq}, or substituting Eqs.~(\ref{eq:q1},\ref{eq:q2})
into \eqref{eq:17} (and using the complex frequency notation)
we obtain, in the strong-coupling limit
\begin{align}
\label{eq:20}
q_s&=\frac{4 G_1 G_2 }{\left(
G_2 + G_1 \right)} \frac{\omega-\omega_\Sigma}
{\left(\omega-\omega_{1+}^c\right)\left(\omega-\omega_{1-}^c\right)\left(\omega-\omega_{2}^c\right)}
\sqrt{\gamma_E} a_{\rm in} \\
q_a&=\frac{4 G_1 G_2}{\left(
G_2 + G_1 \right)}\frac{\omega_\Delta}
{\left(\omega-\omega_{1+}^c\right)\left(\omega-\omega_{1-}^c\right)\left(\omega-\omega_{2}^c\right)} \sqrt{\gamma_E}a_{\rm in} .
\end{align}
Considering that $\omega_2^c = \omega_\Sigma$, and assuming for simplicity
$ G_1 = G_2 \equiv G$, we
have, when $\omega \simeq \omega_{1\pm}^c $,
\begin{align}
\label{eq:20b}
q_{s+}&=\frac{\sqrt{\gamma_E} }{
(\omega-\omega_{1+}^c)} a_{\rm in}\\
q_{s-}&=-\frac{\sqrt{\gamma_E} }{
(\omega-\omega_{1-}^c)} a_{\rm in}
\end{align}
and the cavity field becomes
\begin{align}
\label{eq:27}
X=\frac{\sqrt{ \gamma_E}}{2(\omega-\omega_{1\pm}^c)} a_{\rm in}.
\end{align}
Analogously, when $\omega \simeq -\Delta$, we get for $q_a$
\begin{equation}
\label{eq:20c}
q_a=\frac{2\omega_\Delta}{G \left(\omega-\omega_2^c\right)}\sqrt{\gamma_E} a_{\rm in}.
\end{equation}
In this case the cavity field around the dark mode resonance frequency becomes
\begin{equation}
\label{eq:29}
X=-\frac{\left(\omega_1+\Delta\right)\left(\omega_2+\Delta\right)}{G^2\left(\Delta-\omega_2^c\right)}\sqrt{\gamma_E}a_{\rm in}.
\end{equation}
From Eq.~\eqref{eq:20c} we see that, in the strong-coupling limit,
the mechanical mode $q_a$ is asymptotically decoupled from the input
as well as from the cavity field, allowing us to identify this mode as
a dark mode.
From a somewhat different perspective, the above picture and the
values of the effective frequencies can be recovered by diagonalizing
the equations of motion for $q_a$, $p_a$, $q_s$, $p_s$, and $a$.
\begin{figure}
\includegraphics[width=0.45\textwidth]{strongreeffc.eps}
\includegraphics[width=0.45\textwidth]{strongimeffc.eps}
\caption{Comparison between the calculated values of
$\omega_{1,2}^c$ with the values obtained by diagonalization of
the equations of motion (solid lines). (Left:) Real part of the
frequency. Dashed line is the weak-coupling expression obtained
from Eq.~\eqref{eq:12} by using Eq.~\eqref{eq:gmech} and the
dash-dotted line is obtained from the strong coupling expression
Eq.~\eqref{eq:strongcouplingomega}. (Right): Effective
linewidths of the modes. The black dashed line is from
Eq.~(\ref{eq:opticalspring}), showing the regular optically
induced damping, whereas the dash-dotted lines are
obtained from Eq.~\eqref{eq:strongcouplingomega}. These
frequencies have been calculated with the same parameters as in
the previous figures.}
\label{fig:exact}
\end{figure}
In Fig.~\ref{fig:exact}, we compare the relevant eigenfrequencies
around $-\Delta$, as obtained by the diagonalization of
Eqs.~\eqref{eq:commonq}, with the strong- and weak-coupling expansions
for $\omega_{1 \pm}^c$ and $\omega_2^c$. In the weak-coupling regime
the eigenstates correspond (in the limit $ G_{1,2}\to 0$) to the
normal modes of the uncoupled system (i.e., mechanical
resonator 1, mechanical resonator 2, cavity field). The corresponding
eigenfrequencies match the expression given in Eq. \eqref{eq:12} in
the case where $\tilde{\chi}$ is independent of $\omega$.
In the strong-coupling regime these modes correspond to the coherent
superposition of the cavity field and $q_{s\pm}$ modes or zero cavity
field and $q_a$ modes. We can gain further insight about these modes
by writing the strong-coupling version ($\omega_\Delta \ll G$) of the
equation of motion \eqref{eq:19b} in the sideband-resolved regime
\begin{align}
\label{eq:16}
&\dot{a}_a= -i \omega_\Sigma a_a -\frac{\gamma}{2} a_a \\
&\dot{a}_s= -i \omega_\Sigma a_s -\frac{\gamma}{2} a_s + \frac{i
G}{\sqrt{2}} a\\
&\dot{a} = i \Delta a + i \sqrt{2}G a_s - \frac{\gamma_c}{2} a.
\end{align}
Its eigenfrequencies are given by
\begin{align}
\label{eq:33}
&\omega_A=\omega_\Sigma \\
&\omega_{B\pm}=-\Delta \pm 1/4 \sqrt{16 G^2-\gamma_c^2}.
\end{align}
Moreover, the eigenvector associated with $\omega_A$ corresponds
(asymptotically) to a vector for which $a$ and $a_s$ are zero, while
the eigenvectors associated with $\omega_{B\pm}$ correspond to vectors
for which $a_a$ is zero, thus allowing us to identify $\omega_A$ with the
frequency of the dark mode and $\omega_{B\pm}$ with those of the bright
modes.
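As a quick numerical cross-check of \eqref{eq:33}, one can diagonalize the linear system \eqref{eq:16} directly. The sketch below uses illustrative parameter values (not those of the figures), takes $\omega_\Sigma=-\Delta$, and neglects the mechanical damping $\gamma$:

```python
import numpy as np

# Illustrative parameters for this sketch only, with G > gamma_c/4 (strong coupling).
Delta, G, gamma_c = -1.0, 0.5, 0.4
omega_Sigma = -Delta            # resonance condition; mechanical damping gamma neglected

# Equations of motion for v = (a_a, a_s, a):  dv/dt = M v
M = np.array([
    [-1j * omega_Sigma, 0.0,                 0.0],
    [0.0,               -1j * omega_Sigma,   1j * G / np.sqrt(2)],
    [0.0,               1j * np.sqrt(2) * G, 1j * Delta - gamma_c / 2],
])

lam = np.linalg.eigvals(M)
# With v ~ exp(-i*omega*t - Gamma*t/2): omega = -Im(lam), Gamma = -2*Re(lam)
omegas = np.sort(-lam.imag)

s = 0.25 * np.sqrt(16 * G**2 - gamma_c**2)
predicted = np.sort([omega_Sigma, -Delta + s, -Delta - s])  # omega_A and omega_B+-
```

The undamped eigenvalue (zero real part, since $\gamma=0$ here) is the dark mode, and its frequency equals $\omega_\Sigma$.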
The coupling $G=\gamma_c/4$ corresponds to the onset of the
strong-coupling regime where, asymptotically, the eigenmodes
correspond to the dark and bright modes. Moreover, the limit $ G \gg
\gamma_c$ yields
\begin{equation}
\label{eq:34}
\omega_{B\pm}=-\Delta \pm G,
\end{equation}
allowing us to identify the symmetric mode $q_s$ with the mechanical
component of the bright mode, as can be seen from the
strong-coupling expansion of $\omega_1^c$ (see
Eq.~\eqref{eq:12}).
\section{Introduction}
In this paper, we consider the problem of finding a barycenter of discrete random probability measures generated by a distribution. We rely on
optimal transport (OT) metrics, which provide a successful framework to compare objects that can be modeled as probability measures (images, videos, texts, etc.). Transport-based distances have gained popularity in various fields such as statistics \cite{ebert2017construction,bigot2012consistent}, unsupervised learning \cite{arjovsky2017wasserstein}, signal and image analysis \cite{thorpe2017transportation}, computer vision \cite{rubner1998metric}, text classification \cite{kusner2015word}, economics and finance \cite{rachev2011probability}, and medical imaging \cite{wang2010optimal,gramfort2015fast}.
Moreover, many statistical results are known for optimal transport distances \cite{sommerfeld2018inference,weed2019sharp,klatt2020empirical}.
The success of optimal transport led to an increasing interest in {Wasserstein barycenters} (WB's).
Wasserstein barycenters are used in Bayesian computations \cite{srivastava2015wasp}, texture mixing \cite{rabin2011wasserstein}, clustering ($k$-means for probability measures) \cite{del2019robust}, shape interpolation and color transferring \cite{Solomon2015}, statistical estimation of template models \cite{boissard2015distribution} and neuroimaging \cite{gramfort2015fast}.
For discrete random probability measures from the probability simplex $\Delta_n$ ($n$ is the size of the support) with distribution $\mathbb P$, a Wasserstein barycenter is introduced through the notion of the Fr\'{e}chet mean \cite{frechet1948elements}
\begin{equation}\label{def:Freche_population}
\min_{p\in \Delta_n}\mathbb E_{q \sim \mathbb P} W(p,q).
\end{equation}
If a solution of \eqref{def:Freche_population} exists and is unique, then it is referred to as the population barycenter for distribution $\mathbb P$. Here $W(p,q)$ is the
optimal transport metric between measures $p$ and $q$
\begin{equation}\label{def:optimaltransport}
W(p,q) = \min_{\pi \in U(p,q)} \langle C, \pi \rangle,
\end{equation}
where $C \in \mathbb R^{n\times n}_+$ is a symmetric transportation cost matrix and $U(p,q) \triangleq\{ \pi\in \mathbb R^{n\times n}_+: \pi {\mathbf 1} =p, \pi^T {\mathbf 1} = q\}$ is the transport polytope.\footnote{
When $C_{ij} =\mathtt d(x_i, x_j)^\rho$ in \eqref{def:optimaltransport} for $\rho\geq 1$, where $\mathtt d(x_i, x_j)$ is a distance on support points $x_i, x_j$, then $W(p,q)^{1/\rho}$ is known as the $\rho$-Wasserstein distance.
Nevertheless, all the results of this paper rely only on the assumption that the matrix $C \in \mathbb R_+^{n\times n}$ is symmetric and non-negative. Thus, the optimal transport problem defined in \eqref{def:optimaltransport} is more general than the Wasserstein distances.}
In \cite{cuturi2013sinkhorn}, the entropic regularization of optimal transport problem \eqref{def:optimaltransport} was proposed to improve its statistical properties \cite{klatt2020empirical,bigot2019central} and to reduce the computational complexity from $\tilde O(n^3)$ ($n$ is the size of the support of the measures) to $n^2\min\{\tilde O\left(\frac{1}{\varepsilon} \right), \tilde O\left(\sqrt{n} \right) \} $ arithmetic operations\footnote{The estimate $n^2\min\{\tilde O\left(\frac{1}{\varepsilon} \right), \tilde O\left(\sqrt{n} \right) \}$ is the best known theoretical estimate for solving OT problem \cite{blanchet2018towards,jambulapati2019direct,lee2014path,quanrud2018approximating}. The best known practical estimates are $\sqrt{n}$ times worse (see \cite{guminov2019accelerated} and references therein).}
\begin{align}\label{eq:wass_distance_regul2222}
W_\gamma (p,q) &\triangleq \min_{\pi \in U(p,q)} \left\lbrace \left\langle C,\pi\right\rangle - \gamma E(\pi)\right\rbrace.
\end{align}
Here $\gamma>0$ and $E(\pi) \triangleq -\langle \pi,\log \pi \rangle $ is the entropy. Since $E(\pi)$ is 1-strongly concave on $\Delta_{n^2}$ in the $\ell_1$-norm, the objective in \eqref{eq:wass_distance_regul2222} is $\gamma$-strongly convex with respect to $\pi$ in the $\ell_1$-norm on $\Delta_{n^2}$, and hence, problem \eqref{eq:wass_distance_regul2222} has a unique optimal solution. Moreover, $W_\gamma (p,q)$ is $\gamma$-strongly convex with respect to $p$ in the $\ell_2$-norm on $\Delta_n$ \citep[Theorem 3.4]{bigot2019data}.
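In practice, the regularized plan and $W_\gamma$ are computed by Sinkhorn's matrix-scaling iterations \cite{cuturi2013sinkhorn}. Below is a minimal sketch; the cost matrix, measures, and iteration count are illustrative choices, not prescriptions:

```python
import numpy as np

def sinkhorn_plan(p, q, C, gamma, n_iter=2000):
    """Approximate the optimal plan of the entropy-regularized OT problem."""
    K = np.exp(-C / gamma)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        u = p / (K @ v)       # enforce the row marginal    pi 1   = p
        v = q / (K.T @ u)     # enforce the column marginal pi^T 1 = q
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n, gamma = 4, 0.1
x = rng.random(n)
C = (x[:, None] - x[None, :])**2          # symmetric cost matrix on toy support points
p = np.full(n, 1.0 / n)
q = rng.random(n); q /= q.sum()

pi = sinkhorn_plan(p, q, C, gamma)
W_gamma = (C * pi).sum() + gamma * (pi * np.log(pi)).sum()   # <C,pi> - gamma*E(pi)
```

The returned plan has (approximately) the prescribed marginals and strictly positive entries, as guaranteed by the entropic regularization.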
Another particular advantage of the entropy-regularized optimal transport \eqref{eq:wass_distance_regul2222} is a closed-form representation for its dual function~\cite{agueh2011barycenters,cuturi2016smoothed} defined by the Fenchel--Legendre transform of $W_\gamma(p,q)$ as a function of $p$
\begin{align*}
W_{\gamma, q}^*(u) &= \max_{ p \in \Delta_n}\left\{ \langle u, p \rangle - W_{\gamma}(p, q) \right\} = \gamma\left(E(q) + \left\langle q, \log (K \beta) \right\rangle \right),
\end{align*}
where $\beta = \exp( {u}/{\gamma}) $, \mbox{$K = \exp( {-C}/{\gamma }) $} and functions $\log$ or $\exp$ are applied element-wise.
Hence, the gradient of the dual function $W_{\gamma, q}^*(u)$ also has a closed-form representation \cite{cuturi2016smoothed}
\begin{equation*}
\nabla W^*_{\gamma,q} (u)
= \beta \odot \left(K \cdot {q}/({K \beta}) \right) \in \Delta_n,
\end{equation*}
where symbols $\odot$ and $/$ stand for the element-wise product and element-wise division respectively.
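These closed forms are easy to verify numerically. The following sketch (with randomly generated toy data) compares the analytic gradient with central finite differences of the dual value and checks that the gradient indeed lies in $\Delta_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 5, 0.5
x = rng.random(n)
C = np.abs(x[:, None] - x[None, :])   # symmetric toy cost matrix
K = np.exp(-C / gamma)
q = rng.random(n); q /= q.sum()
u = rng.normal(size=n)

def dual(u):
    """gamma * (E(q) + <q, log(K beta)>) with beta = exp(u/gamma)."""
    beta = np.exp(u / gamma)
    return gamma * (-(q * np.log(q)).sum() + q @ np.log(K @ beta))

beta = np.exp(u / gamma)
grad = beta * (K @ (q / (K @ beta)))  # closed-form gradient (K is symmetric here)

eps = 1e-6                            # central finite differences
fd = np.array([(dual(u + eps * e) - dual(u - eps * e)) / (2 * eps)
               for e in np.eye(n)])
```

That the gradient sums to one reflects the fact that $\nabla W^*_{\gamma,q}(u)$ is the maximizer $p\in\Delta_n$ of the conjugate problem.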
\textbf{Background on the SA and the SAA and Convergence Rates.}
Let us consider a general stochastic convex minimization problem
\begin{equation}\label{eq:gener_risk_min}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where function $f$ is convex in $x$ ($x\in X,$ $X$ is a convex set), and $\mathbb E f(x,\xi)$ is the expectation of $f$ with respect to $\xi \in \Xi$.
Such problems arise in many applications of data science
\cite{shalev2014understanding,shapiro2014lectures} (e.g., risk minimization) and mathematical statistics \cite{spokoiny2012parametric} (e.g., maximum likelihood estimation). There are two competing approaches based on Monte Carlo sampling techniques to solve \eqref{eq:gener_risk_min}: the Stochastic Approximation (SA) \cite{robbins1951stochastic} and the Sample Average Approximation (SAA).
The SAA approach replaces the objective in problem \eqref{eq:gener_risk_min} with its sample average, leading to the problem
\begin{equation}\label{eq:empir_risk_min}
\min_{x\in X} \hat{F}(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i),
\end{equation}
where $\xi_1, \xi_2,...,\xi_m$ are realizations of the random variable $\xi$. The number of realizations $m$ is chosen according to the desired precision.
The total working time of both approaches to solve problem \eqref{eq:gener_risk_min} with average precision $\varepsilon$ in the non-optimality gap in terms of the objective function (i.e., to find $x^N$ such that $\mathbb E F(x^N) - \min\limits_{x\in X } F(x)\leq \varepsilon$)
depends on the specific problem. However, it is generally accepted \cite{nemirovski2009robust} that the SA approach outperforms the SAA approach.
Stochastic gradient (mirror) descent, an implementation of the SA approach \cite{juditsky2012first-order}, gives the following estimate for the number of iterations (which is equivalent to the sample size of $\xi_1, \xi_2, \xi_3,...,\xi_m$)
\begin{equation}\label{eq:SNSm}
m = O\left(\frac{M^2R^2}{\varepsilon^2}\right).
\end{equation}
Here we consider only minimal assumptions (non-smoothness) on the objective $f(x,\xi)$:
\begin{equation}\label{M}
\|\nabla f(x,\xi)\|_2^2\le M^2, \quad \forall x \in X, \xi \in \Xi.
\end{equation}
In contrast, the SAA approach requires the following sample size \cite{shapiro2005complexity}:
\[m = \widetilde{O}\left(\frac{n M^2R^2}{\varepsilon^2}\right),\]
that is, $n$ times larger ($n$ is the problem’s dimension) than the sample size in the SA approach. This estimate was obtained under the assumption that problem \eqref{eq:empir_risk_min} is solved exactly, which is one of the main drawbacks of the SAA approach. However, if the objective $f(x,\xi)$ is $\lambda$-strongly convex in $x$, the sample sizes are equal up to logarithmic terms:
\[
m = O\left(\frac{M^2}{\lambda \varepsilon}\right).
\]
Moreover, in this case, for the SAA approach, it suffices to solve problem \eqref{eq:empir_risk_min} with accuracy \cite{shalev2009stochastic}
\begin{equation}\label{eq:aux_e_quad}
\varepsilon' = O\left(\frac{\varepsilon^2\lambda}{M^2}\right).
\end{equation}
Therefore, to eliminate the linear dependence on $n$ in the SAA approach for a non-strongly convex objective, the regularization parameter $\lambda =\frac{\varepsilon}{R^2}$ should be used \cite{shalev2009stochastic}.
Let us suppose that $f(x,\xi)$ in \eqref{eq:gener_risk_min} is convex but non-strongly convex in $x$ (possibly, $\lambda$-strongly convex but with very small $\lambda \ll \frac{\varepsilon}{R^2}$). Here $R = \|x^1 - x^*\|_2$ is the Euclidean distance between the starting point $x^1$ and the solution $x^*$ of \eqref{eq:gener_risk_min} closest to $x^1$ (if the solution is not unique). Then problem \eqref{eq:gener_risk_min} can be replaced by
\begin{equation}\label{eq:gener_risk_min_reg}
\min_{x\in X } \mathbb E f(x,\xi) + \frac{\varepsilon}{2R^2}\|x - x^1\|_2^2.
\end{equation}
The empirical counterpart of \eqref{eq:gener_risk_min_reg} is
\begin{equation}\label{eq:empir_risk_min_reg}
\min_{x\in X} \frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \frac{\varepsilon}{2R^2}\|x - x^1\|_2^2,
\end{equation}
where the sample size $m$ is defined in \eqref{eq:SNSm}.
Thus, in the case of a non-strongly convex objective, regularization equates the sample sizes of the two approaches.
\subsection{Contribution and Related Work}
\hspace{0.25cm} \textbf{The SA and the SAA approaches}.
This paper is inspired by the work \cite{nemirovski2009robust}, where it is stated that the SA approach outperforms the SAA approach for a certain class of convex stochastic problems. Our aim is to show that for the Wasserstein barycenter problem this superiority can be inverted. We provide a detailed comparison by stating the complexity bounds for implementations of the SA and the SAA approaches for the Wasserstein barycenter problem. As a byproduct, we also construct a confidence interval for the barycenter defined w.r.t. entropy-regularized OT.
\textbf{Sample size.} We also estimate the number of measures needed to calculate an approximation of the Fr\'{e}chet mean of a probability distribution with a given precision.
\textbf{Consistency and rates of convergence.}
The consistency of the empirical barycenter as an estimator of the true Wasserstein barycenter (defined by the notion of the Fr\'{e}chet mean), as the number of measures tends to infinity, was studied in many papers, e.g., \cite{LeGouic2017,panaretos2019statistical,Bigot2012a,rios2018bayesian},
under some conditions on the process generating the measures.
Moreover, the authors of \cite{boissard2015distribution} provide the rate of this convergence, but under a restrictive assumption on this process (it must come from an
{admissible family of deformations}, i.e., it is a gradient of a convex function). Without any assumptions on the generating process, the rate of convergence was obtained in \cite{bigot2018upper}, however, only for measures with one-dimensional support. For some specific types of metrics and measures, the rates of convergence were also provided in \cite{chewi2020gradient,gouic2019fast,kroshnin2019statistical}. Our results are obtained under the condition that the measures are discrete; this condition can always be met by additional preprocessing (discretization of the measures).
\textbf{Penalization of barycenter problem.}
For a general convex (but not strongly convex) optimization problem, empirical minimization (the offline approach) may fail despite the guaranteed success of an online approach if no regularization is introduced \cite{shalev2009stochastic}.
The limitations of the SAA approach in the non-strongly convex case are also discussed in \cite{guigues2017non-asymptotic,shapiro2005complexity}.
Our contribution includes introducing a new regularization for the population Wasserstein barycenter problem that improves the complexity bounds obtained with the standard (squared-norm) penalty \cite{shalev2009stochastic}. This regularization relies on the Bregman divergence from \cite{ben-tal2001lectures}.
\subsection{Preliminaries}
\noindent\textbf{Notations}.
Let $\Delta_n = \{ a \in \mathbb{R}_+^n \mid \sum_{l=1}^n a_l =1 \}$ be the probability simplex.
We refer to the $j$-th component of a vector $x_i$ as $[x_i]_j$.
The notation $[n]$ means $1,2,...,n$.
For two vectors $x,y$ of the same size, $x \odot y$ and $x/y$ stand for the element-wise product and element-wise division, respectively.
Functions such as $\log$ or $\exp$, when applied to vectors, are always applied element-wise.
For some norm $\|\cdot\|$ on space $\mathcal X$, we define the dual norm $\|\cdot\|_*$ on the dual space $\mathcal X^*$ in a usual way $ \|s\|_{*} = \max\limits_{x\in \mathcal X} \{ \langle x,s \rangle : \|x\| \leq 1 \} $.
We denote by $I_n$ the identity matrix and by $0_{n\times n}$ the zero matrix.
For a positive semi-definite matrix $A$ we denote its smallest positive eigenvalue by $\lambda^{+}_{\min}(A)$.
We use the notation $ O(\cdot)$ to indicate complexity up to constant factors; when logarithmic factors are also hidden, we use $\widetilde O(\cdot)$.
\begin{definition}
A function $f(x,\xi):X \times \Xi \rightarrow \mathbb R$ is $M$-Lipschitz continuous in $x$ w.r.t. a norm $\|\cdot\|$ if it satisfies
\[{|}f(x,\xi)-f(y,\xi){|}\leq M\|x-y\|, \qquad \forall x,y \in X,~ \forall \xi \in \Xi.\]
\end{definition}
\begin{definition}
A function $f:X\times \Xi \rightarrow \mathbb R$ is $\gamma$-strongly convex in $x$ w.r.t. a norm $\|\cdot\|$ if it is continuously differentiable and it satisfies
\[f(x, \xi)-f(y, \xi)- \langle\nabla f(y, \xi), x-y\rangle\geq \frac{\gamma}{2}\|x-y\|^2, \qquad \forall x,y \in X,~ \forall \xi \in \Xi.\]
\end{definition}
\begin{definition}
The Fenchel--Legendre conjugate for a function $f:(X, \Xi) \rightarrow \mathbb R$ w.r.t. $x$ is \[f^*(u,\xi) \triangleq \sup_{x \in X}\{\langle x,u\rangle - f(x,\xi)\}, \qquad \forall \xi \in \Xi.\]
\end{definition}
\subsection{Paper organization}
The structure of the paper is the following.
In Section \ref{ch:population}
we give a background on
the SA and the SAA approaches and derive preliminary results.
Section \ref{sec:pen_bar} presents the comparison of the SA and the SAA approaches for the problem of Wasserstein barycenter defined w.r.t. regularized optimal transport distances. Finally, Section \ref{sec:unreg} gives the comparison of the SA and the SAA approaches for the problem of Wasserstein barycenter defined w.r.t. (unregularized) optimal transport distances.
\section{Strongly Convex Optimization Problem}
\label{ch:population}
We start with preliminary results stated for a general stochastic strongly convex optimization problem of form
\begin{equation}\label{eq:gener_risk_min_conv}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where $f(x,\xi)$ is $\gamma$-strongly convex with respect to $x$. Let us define $ x^* = \arg\min\limits_{x\in X} {F}(x)$.
\subsection{The SA Approach: Stochastic Gradient Descent }
The classical SA algorithm for problem \eqref{eq:gener_risk_min_conv} is stochastic gradient descent (SGD). We consider SGD with an inexact oracle given by $g_\delta(x,\xi)$ such that
\begin{equation}\label{eq:gen_delta}
\forall x \in X, \xi \in \Xi, \qquad
\|\nabla f(x,\xi) - g_\delta(x,\xi)\|_2 \leq \delta.
\end{equation}
Then the iterative formula of SGD can be written as ($k=1,2,...,N$)
\begin{equation}\label{SA:implement_simple}
x^{k+1} = \Pi_{X}\left(x^k - \eta_{k} g_\delta(x^k,\xi^k) \right).
\end{equation}
Here $x^1 \in X$ is the starting point, $\Pi_X$ is the projection onto $X$, and $\eta_k$ is a stepsize.
For $f(x,\xi)$ $\gamma$-strongly convex in $x$, the stepsize $\eta_k$ can be taken as $\frac{1}{\gamma k}$ to obtain the optimal rate $O(\frac{1}{\gamma N})$.
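For intuition, here is a toy sketch of the update \eqref{SA:implement_simple} with $\eta_k = 1/(\gamma k)$ for $f(x,\xi) = \frac{1}{2}\|x-\xi\|_2^2$, so that $\gamma=1$, $x^* = \mathbb E\,\xi$, the oracle is exact ($\delta=0$), and with $X=\mathbb R^n$ the projection is the identity; all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, gamma = 3, 20000, 1.0
mu = np.array([0.2, -0.5, 1.0])           # x* = E[xi] = mu

x = np.zeros(n)                           # starting point x^1
avg = np.zeros(n)
for k in range(1, N + 1):
    xi = mu + 0.1 * rng.normal(size=n)    # draw xi^k
    g = x - xi                            # exact gradient of (1/2)||x - xi||^2
    x = x - g / (gamma * k)               # step eta_k = 1/(gamma*k); no projection needed
    avg += x
avg /= N                                  # averaged iterate tilde x^N
```

The averaged iterate ends up close to $x^*=\mu$, consistent with the $O(1/(\gamma N))$ rate discussed above.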
A good indicator of the success of an algorithm is the \textit{regret}
\[Reg_N \triangleq \sum_{k=1}^{N} \left(f( x^k, \xi^k) - f(x^*, \xi^k)\right).\]
It measures the cumulative difference between the decisions made and the optimal decision over all rounds.
The work \cite{kakade2009generalization} gives a bound on the excess risk of the output of an online algorithm in terms of the average regret.
\begin{theorem}\citep[Theorem 2]{kakade2009generalization} \label{Th:kakade2009generalization}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$. Let $\tilde x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $
be the average of online vectors $x^1, x^2,...,x^N$.
Then with probability at least $1-4\beta\log N$
\[ F(\tilde x^N) - F(x^*) \leq \frac{Reg_N }{N} + 4\sqrt{ \frac{M^2\log(1/\beta)}{\gamma}}\frac{\sqrt{Reg_N}}{N} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}. \]
\end{theorem}
For the update rule \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$, this theorem can be specialized as follows.
\begin{theorem}\label{Th:contract_gener}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$. Let $\tilde x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $ be the average of outputs generated by iterative formula \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$. Then, with probability
at least $1-\alpha$ the following holds
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq 3\delta D + \frac{3(M^2+\delta ^2)}{N\gamma }(1+\log N) \notag\\
&+ \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(4\log N/\alpha)}{N},
\end{align*}
where $D =\max\limits_{x',x'' \in X}\|x'-x''\|_2 $ and $\delta$ is defined by \eqref{eq:gen_delta}.
\end{theorem}
\begin{proof}
The proof mainly relies on Theorem \ref{Th:kakade2009generalization} and on estimating the regret of the iterative formula \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$.
From the $\gamma$-strong convexity of $f(x,\xi)$ in $x$, it follows for $x^k, x^* \in X$ that
\begin{equation*}
f(x^*, \xi^k) \geq f(x^k, \xi^k) + \langle \nabla f(x^k, \xi^k), x^*-x^k\rangle +\frac{\gamma}{2}\|x^*-x^k\|_2^2.
\end{equation*}
Adding and subtracting the term $\langle g_\delta(x^k, \xi^k), x^*-x^k\rangle$, and using the Cauchy--Schwarz inequality together with \eqref{eq:gen_delta}, we get
\begin{align}\label{str_conv_W1}
f(x^*, \xi^k) &\geq f(x^k, \xi^k) + \langle g_\delta(x^k,\xi^k), x^*-x^k\rangle +\frac{\gamma}{2}\|x^*-x^k\|_2^2 \notag \\
&+ \langle \nabla f(x^k, \xi^k) -g_\delta(x^k,\xi^k), x^*-x^k\rangle \notag\\
&\geq f(x^k, \xi^k) + \langle g_\delta(x^k,\xi^k), x^*-x^k\rangle +
\frac{\gamma}{2}\|x^*-x^k\|_2^2 - \delta\|x^*-x^k\|_2.
\end{align}
From the update rule \eqref{SA:implement_simple} for $x^{k+1}$ and the non-expansiveness of the projection, we have
\begin{align*}
\|x^{k+1} - x^*\|_2^2 &= \|\Pi_{X}(x^k - \eta_{k} g_\delta(x^k,\xi^k)) - x^*\|_2^2 \notag\\
&\leq \|x^k - \eta_{k} g_\delta(x^k,\xi^k) - x^*\|_2^2 \notag\\
&= \|x^k - x^*\|_2^2 + \eta_{k}^2\| g_\delta(x^k,\xi^k)\|_2^2 -2\eta_{k}\langle g_\delta(x^k,\xi^k), x^k - x^*\rangle.
\end{align*}
From this it follows
\begin{equation*}
\langle g_\delta(x^k,\xi^k), x^k - x^*\rangle \leq \frac{1}{2\eta_{k}}(\|x^k-x^*\|^2_2 - \|x^{k+1} -x^*\|^2_2) + \frac{\eta_{k}}{2}\| g_\delta(x^k,\xi^k)\|_2^2.
\end{equation*}
Together with \eqref{str_conv_W1} we get
\begin{align*}
f(x^k, \xi^k) - f(x^*, \xi^k) &\leq \frac{1}{2\eta_{k}}(\|x^k-x^*\|^2_2 - \|x^{k+1} -x^*\|^2_2) \notag \\
&-\frac{\gamma}{2}\|x^*-x^k\|_2^2 + \delta\|x^*-x^k\|_2+ \frac{\eta_{k}}{2}\| g_\delta(x^k,\xi^k)\|_2^2.
\end{align*}
Summing this from $1$ to $N$ and using $\eta_k = \frac{1}{\gamma k}$ (so that $\frac{1}{\eta_k} - \frac{1}{\eta_{k-1}} = \gamma$, with the convention $\frac{1}{\eta_0} \triangleq 0$), we get
\begin{align}\label{eq:eq123}
\sum_{k=1}^{N}f( x^k, \xi^k) - f(x^*, \xi^k) &\leq
\frac{1}{2}\sum_{k=1}^{N}\left(\frac{1}{\eta_k} - \frac{1}{\eta_{k-1}} - {\gamma} \right)\|x^*-x^k\|_2^2 \notag\\ &\hspace{-1cm}+\delta\sum_{k=1}^{N}\|x^*-x^k\|_2+\frac{1}{2}\sum_{k=1}^{N}{\eta_{k}}\| g_\delta (x^k,\xi^k)\|_2^2 \notag\\
&\hspace{-1cm}= \delta\sum_{k=1}^{N}\|x^*-x^k\|_2 +\frac{1}{2}\sum_{k=1}^{N}{\eta_{k}}\| g_\delta (x^k,\xi^k)\|_2^2.
\end{align}
From the Lipschitz continuity of $f(x,\xi)$ w.r.t. $x$ it follows that $\|\nabla f(x,\xi)\|_2\leq M$ for all $x \in X, \xi \in \Xi$. Thus, using that $(a+b)^2\leq 2a^2+2b^2$ for all $a,b$, it follows that
\[
\|g_\delta(x,\xi)\|^2_2 \leq 2\|\nabla f(x,\xi)\|^2_2 + 2\delta^2 \leq 2M^2 + 2\delta^2.
\]
From this and \eqref{eq:eq123} we bound the regret as follows
\begin{align}\label{eq:reg123}
Reg_N \triangleq \sum_{k=1}^{N}f( x^k, \xi^k) - f(x^*, \xi^k) &\leq \delta\sum_{k=1}^{ N}\|x^*-x^k\|_2 + (M^2 +\delta^2)\sum_{k=1}^{ N} \frac{1}{\gamma k}\notag\\
&\leq \delta D N + \frac{M^2+\delta ^2}{\gamma }(1+\log N).
\end{align}
Here the last inequality uses $\|x^*-x^k\|_2\leq D$ and the bound $\sum_{k=1}^{N}\frac{1}{k}\leq 1+\log N$ on the harmonic sum.
Then for \eqref{eq:reg123} we can use Theorem \ref{Th:kakade2009generalization}. First, we simplify it by rearranging the terms, using that $\sqrt{ab} \leq \frac{a+b}{2}$
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq \frac{Reg_N }{N} + 4\sqrt{ \frac{M^2\log(1/\beta)}{N\gamma}}\sqrt{\frac{Reg_N}{N}} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}
\notag\\
&\leq \frac{3 Reg_N }{N} + \frac{2 M^2\log(1/\beta)}{N\gamma} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}\notag\\
&= \frac{3 Reg_N }{N} + \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(1/\beta)}{N}.
\end{align*}
Then, substituting \eqref{eq:reg123} into this inequality and setting $\alpha = 4\beta\log N$, we get with probability at least $ 1-\alpha$
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq 3\delta D + \frac{3(M^2+\delta ^2)}{N\gamma }(1+\log N) \notag\\
&+ \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(4\log N/\alpha)}{N}.
\end{align*}
\end{proof}
\subsection{Preliminaries on the SAA Approach }
The SAA approach replaces the objective in \eqref{eq:gener_risk_min_conv} with its sample average
\begin{equation}\label{eq:empir_risk_min_conv}
\min_{x\in X } \hat{F}(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i),
\end{equation}
where each $f(x,\xi_i)$ is $\gamma$-strongly convex in $x$.
Let us define the empirical minimizer of \eqref{eq:empir_risk_min_conv} $\hat x^* = \arg\min\limits_{x\in X} \hat{F}(x)$, and $\hat x_{\varepsilon'}$ such that
\begin{equation}\label{eq:fidelity}
\hat{F}(\hat x_{\varepsilon'}) - \hat{F}(\hat x^*) \leq \varepsilon'.
\end{equation}
The next theorem gives a bound on the excess risk for problem \eqref{eq:empir_risk_min_conv}
in the SAA approach.
\begin{theorem}\label{Th:contractSAA}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$ in the $\ell_2$-norm. Let $\hat x_{\varepsilon'}$ satisfy \eqref{eq:fidelity} with precision $ \varepsilon' $.
Then, with probability at least $1-\alpha$ we have
\begin{align*}
F( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}.
\end{align*}
Let $\varepsilon' = O \left(\frac{\gamma\varepsilon^2}{M^2} \right)$ and $m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right)$. Then, with probability at least $1-\alpha$ the following holds
\[F( \hat x_{\varepsilon'}) - F(x^*)\leq \varepsilon \quad \text{and} \quad \|\hat x_{\varepsilon'} - x^*\|_2 \leq \sqrt{2\varepsilon/\gamma}.\]
\end{theorem}
The proof of this theorem mainly relies on the following theorem.
\begin{theorem}\citep[Theorem 6]{shalev2009stochastic}\label{Th:shalev2009stochastic}
Let $f(x,\xi)$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$ in the $\ell_2$-norm. Then, with probability at least $1-\alpha$ the following holds
\[
F(\hat x^* ) - F(x^*) \leq \frac{4M^2}{\alpha \gamma m},
\]
where $m$ is the sample size.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Th:contractSAA}]
For any $x\in X$, the following holds
\begin{equation}\label{eq:exp_der}
F(x) - F(x^*) = F(x) - F(\hat x^*)+ F( \hat x^*)- F(x^*).
\end{equation}
From Theorem \ref{Th:shalev2009stochastic} with probability at least $1 - \alpha$ the following holds
\begin{equation*}
F( \hat x^*) - F(x^*) \leq \frac{4M^2}{\alpha \gamma m}.
\end{equation*}
Then from this and \eqref{eq:exp_der} we have with probability at least $1 - \alpha$
\begin{equation}\label{eq:eqtosub}
F(x) - F(x^*) \leq F(x) - F(\hat x^*)+ \frac{4M^2}{\alpha \gamma m}.
\end{equation}
From the Lipschitz continuity of $ f(x,\xi)$ it follows that for any $x\in X, \xi\in \Xi$ the following holds
\begin{equation*}
| f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Taking the expectation of this inequality w.r.t. $\xi$ we get
\begin{equation*}
\mathbb E | f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Then we use Jensen's inequality $g\left (\mathbb E(Y)\right) \leq \mathbb E g(Y) $, which holds for a convex function $g$ and a random variable $Y$. Since the absolute value is a convex function, we get
\begin{equation*}
| \mathbb E f(x,\xi) - \mathbb E f(\hat x^*,\xi)| =| F(x) - F(\hat x^*)| \leq \mathbb E | f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Thus, we have
\begin{equation}\label{eq:Lipsch}
| F(x) - F(\hat x^*)| \leq M\|x-\hat x^*\|_2.
\end{equation}
From the strong convexity of $ f( x, \xi)$ in $x$, it follows that the average of the $f(x,\xi_i)$'s, that is $\hat F(x)$, is also $\gamma$-strongly convex in $x$. Thus, for any $x \in X$, we get
\begin{equation}\label{eq:str}
\|x-\hat x^*\|_2 \leq \sqrt{\frac{2}{\gamma } (\hat F(x) - \hat F(\hat x^*))}.
\end{equation}
By using \eqref{eq:Lipsch} and \eqref{eq:str} and taking $x=\hat x_{\varepsilon'}$ in \eqref{eq:eqtosub}, we get the first statement of the theorem
\begin{align}\label{eq_to_prove_2}
F( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M^2}{\gamma }(\hat F( \hat x_{\varepsilon'}) - \hat F(\hat x^*))} +\frac{4M^2}{\alpha\gamma m} \leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}.
\end{align}
Then from the strong convexity we have
\begin{align}\label{eq:con_reg_bar}
\| \hat x_{\varepsilon'} - x^*\|_2
&\leq \sqrt{\frac{2}{\gamma}\left( \sqrt{\frac{2M^2}{\gamma }\varepsilon'} +\frac{4M^2}{\alpha \gamma m}\right)}.
\end{align}
Equating the right-hand side of \eqref{eq_to_prove_2} to $\varepsilon$, we get the expressions for the sample size $m$ and the auxiliary precision $\varepsilon'$. Substituting both of these expressions into \eqref{eq:con_reg_bar} finishes the proof.
\end{proof}
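To illustrate the $O(1/m)$ behavior of the SAA approach in the strongly convex case, consider the toy objective $f(x,\xi)=\frac{1}{2}(x-\xi)^2$ (so $\gamma=1$), for which the empirical problem is solved exactly by the sample mean (i.e., $\varepsilon'=0$); the parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 1.5, 0.1      # x* = E[xi] = mu and F(x) = (x - mu)^2/2 + sigma^2/2

def excess_risk(m):
    """F(hat x^*) - F(x^*) for one empirical problem with m samples."""
    xi = mu + sigma * rng.normal(size=m)
    x_hat = xi.mean()                      # exact empirical minimizer
    return 0.5 * (x_hat - mu)**2

# Averaged over repetitions, the excess risk behaves like sigma^2/(2m),
# i.e. it decays as O(1/m), in line with the theorem above.
avg_small = np.mean([excess_risk(100) for _ in range(300)])
avg_large = np.mean([excess_risk(10_000) for _ in range(300)])
```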
\section{Non-Strongly Convex Optimization Problem}
Now we consider a non-strongly convex optimization problem
\begin{equation}\label{eq:gener_risk_min_nonconv}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where $f(x,\xi)$ is Lipschitz continuous in $x$. Let us define $ x^* = \arg\min\limits_{x\in X} {F}(x)$.
\subsection{The SA Approach: Stochastic Mirror Descent}
We consider stochastic mirror descent (MD) with inexact oracle \cite{nemirovski2009robust,juditsky2012first-order,gasnikov2016gradient-free}.\footnote{By using dual averaging scheme~\cite{nesterov2009primal-dual} we can rewrite Alg.~\ref{Alg:OnlineMD} in online regime \cite{hazan2016introduction,orabona2019modern} without including $N$ in the stepsize policy. Note, that mirror descent and dual averaging scheme are very close to each other \cite{juditsky2019unifying}.} For a prox-function $d(x)$ and the corresponding Bregman divergence $B_d(x,x^1)$, the proximal mirror descent step is
\begin{equation}\label{eq:prox_mirr_step}
x^{k+1} = \arg\min_{x\in X}\left( \eta \left\langle g_\delta(x^k,\xi^k), x\right\rangle + B_d(x,x^k)\right).
\end{equation}
We consider the simplex setup:
prox-function $d(x) = \langle x,\log x \rangle$. Here and below, functions such as $\log$ or $\exp$ are always applied element-wise. The corresponding Bregman divergence is given by the Kullback--Leibler divergence
\[
{\rm KL}(x,x^1) = \langle x, \log(x/x^1)\rangle - \boldsymbol{1}^\top(x-x^1).
\]
Then the starting point is taken as $x^1 = \arg\min\limits_{x\in \Delta_n}d(x)= (1/n,...,1/n)$.
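For this setup, the proximal step \eqref{eq:prox_mirr_step} has the closed-form multiplicative update $x^{k+1} \propto x^k \odot \exp(-\eta\, g_\delta(x^k,\xi^k))$. A small sketch (with an illustrative gradient vector) checks this closed form against a brute-force grid minimization of the prox objective for $n=2$:

```python
import numpy as np

def md_step(x, g, eta):
    """Closed-form KL mirror-descent step on the simplex."""
    w = x * np.exp(-eta * g)
    return w / w.sum()

x_prev = np.array([0.3, 0.7])      # current iterate x^k
g = np.array([1.0, -0.5])          # an illustrative (sub)gradient
eta = 0.4

# Brute force over Delta_2: minimize eta*<g,x> + KL(x, x_prev).
t = np.linspace(1e-6, 1 - 1e-6, 200001)
X = np.stack([t, 1 - t], axis=1)
kl = (X * np.log(X / x_prev)).sum(axis=1)   # linear part of KL vanishes on the simplex
obj = eta * (X @ g) + kl
brute = X[np.argmin(obj)]

closed = md_step(x_prev, g, eta)
```

The multiplicative form is what makes the simplex setup attractive: the step never leaves $\Delta_n$ and costs $O(n)$ per iteration.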
\begin{theorem}\label{Th:MDgener}
Let $ R^2 \triangleq {\rm KL}(x^*,x^1) \leq \log n $ and $D =\max\limits_{x',x''\in \Delta_n}\|x'-x''\|_1 = 2$. Let $f:X\times \Xi \rightarrow \mathbb R$ be $M_\infty$-Lipschitz w.r.t. $x$ in the $\ell_1$-norm. Let $\breve x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $ be the average of outputs generated by iterative formula \eqref{eq:prox_mirr_step} with $\eta = \frac{\sqrt{2} R}{M_\infty\sqrt{N} }$. Then, with probability
at least $1-\alpha$ we have
\begin{equation*}
F(\breve x^N) - F(x^*) \leq\frac{M_\infty (3R+2D \sqrt{\log (\alpha^{-1})})}{\sqrt{2N}} +\delta D = O\left(\frac{M_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2 \delta \right).
\end{equation*}
\end{theorem}
\begin{proof}
For MD with the prox-function $d(x) = \langle x,\log x\rangle $, the following holds for any $x\in \Delta_n$ \citep[Eq. 5.13]{juditsky2012first-order}
\begin{align*}
\eta\langle g_\delta (x^k,\xi^k), x^k -x \rangle
&\leq {\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\frac{\eta^2}{2}\|g_\delta(x^k,\xi^k)\|^2_\infty \notag\\
&\leq {\rm KL}(x,x^k) -{\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2.
\end{align*}
Then, by adding and subtracting the terms $\langle \nabla F(x^k), x-x^k\rangle$ and $\langle \nabla f(x^k, \xi^k), x-x^k\rangle$ in this inequality and using the Cauchy--Schwarz inequality, we get the following
\begin{align}\label{str_conv_W2}
\eta\langle \nabla F(x^k), x^k-x\rangle
&\leq \eta\langle \nabla f(x^k,\xi^k)-g_\delta(x^k,\xi^k), x^k-x\rangle \notag \\
&+ \eta\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x\rangle + {\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2 \notag\\
&\leq \eta\delta\max_{k=1,...,N}\|x^k-x\|_1 + \eta\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x\rangle \notag \\
&+{\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2.
\end{align}
Then, using the convexity of $F$, we have
\[
F(x^k) - F(x)\leq \langle \nabla F(x^k), x^k-x\rangle.
\]
Multiplying this by $\eta$, using it in \eqref{str_conv_W2}, and summing for $k=1,...,N$ at $x=x^*$, we get
\begin{align}\label{eq:defFxx}
\eta\sum_{k=1}^N \left(F(x^k) - F(x^*)\right) &\leq
\eta\delta N \max_{k=1,...,N}\|x^k-x^*\|_1
+\eta\sum_{k=1}^N\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle \notag\\
&+ {\rm KL}(x^*,x^1) - {\rm KL}(x^*,x^{N+1}) + \eta^2M_\infty^2N \notag\\
&\leq \eta\delta N{D}
+\eta \sum_{k=1}^N\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle + R^2+ \eta^2M_\infty^2N.
\end{align}
where we used ${\rm KL}(x^*,x^1) \leq R^2 $ and $\max\limits_{k=1,...,N}\|x^k-x^*\|_1 \leq D$.
Then, using convexity of $F$ and the definition of the output $\breve x^N$ in \eqref{eq:defFxx}, we have
\begin{align}\label{eq:F123xxF}
F(\breve x^N) -F(x^*)
&\leq \delta D +\frac{1}{N}\sum_{k=1}^N \langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle + \frac{R^2}{\eta N}+ \eta M_\infty^2.
\end{align}
Next we use the Azuma--Hoeffding inequality \cite{jud08} and get for all $\beta \geq 0$
\begin{equation}\label{eq:AzumaH}
\mathbb{P}\left(\sum_{k=1}^{N+1}\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^k-x^*\rangle \leq \beta \right)\geq 1 - \exp\left( -\frac{2\beta^2}{N(2M_\infty D)^2}\right)= 1 - \alpha.
\end{equation}
Here we used that $\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^*-x^k\rangle$ is a martingale-difference sequence and
\begin{align*}
{\left|\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^*-x^k\rangle \right|} &\leq \| \nabla F(x^k) -\nabla f(x^k,\xi^k)\|_{\infty} \|x^*-x^k\|_1 \notag \\
&\leq 2M_\infty \max\limits_{k=1,...,N}\|x^k-x^*\|_1 \leq 2M_\infty D.
\end{align*}
Thus, using \eqref{eq:AzumaH} for \eqref{eq:F123xxF} we have that with probability at least $1-\alpha$
\begin{equation}\label{eq:eta}
F(\breve x^N) - F(x^*) \leq \delta D +\frac{\beta}{N}+ \frac{R^2}{\eta N}+ \eta M_\infty^2.
\end{equation}
Then, expressing $\beta$ through $\alpha$ and substituting $\eta = \frac{ R}{M_\infty} \sqrt{\frac{2}{N}}$ into \eqref{eq:eta} (this $\eta$ minimizes the r.h.s. of \eqref{eq:eta}), we get \begin{align*}
& F(\breve x^N) - F(x^*) \leq \delta D + \frac{M_\infty D\sqrt{2\log(1/\alpha)} }{\sqrt{N} } + \frac{M_\infty R}{\sqrt{2N}}+ \frac{M_\infty R\sqrt{2}}{\sqrt{N}} \notag \\
&\leq \delta D + \frac{M_\infty (3R+2D \sqrt{\log(1/\alpha) })}{\sqrt{2N}}.
\end{align*}
Using $R=\sqrt{\log n}$ and {$D = 2$} in this inequality,
we obtain
\begin{align}\label{eq:final_est}
F(\breve x^N) - F(x^*) &\leq \frac{M_\infty (3\sqrt{\log{n}} +4 \sqrt{\log(1/\alpha)})}{\sqrt{2N}} +2\delta.
\end{align}
We square the numerator, use that $2\sqrt{ab}\leq a+b$ for all $a,b\geq 0$ (with $a = 9\log n$ and $b = 16\log(1/\alpha)$), and then extract the square root. We obtain the following
\begin{align*}
\sqrt{\left(3\sqrt{\log{n}} +4 \sqrt{\log(1/\alpha)}\right)^2} &= \sqrt{ 9\log{n} + 16\log(1/\alpha) +24\sqrt{\log{n}}\sqrt{\log(1/\alpha)} } \\
&\leq \sqrt{ 18\log{n} + 32\log(1/\alpha) }.
\end{align*}
Using this for \eqref{eq:final_est}, we get the statement of the theorem
\begin{align*}\label{eq:final_est2}
F(\breve x^N) - F(x^*) &\leq \frac{M_\infty \sqrt{18\log{n} +32 \log(1/\alpha)}}{\sqrt{2N}} +2\delta = O\left(\frac{M_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2\delta \right) .
\end{align*}
\end{proof}
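The mirror descent step \eqref{eq:prox_mirr_step} with the entropic prox-function $d(x)=\langle x,\log x\rangle$ has the well-known multiplicative-weights closed form $x^{k+1}_i \propto x^k_i e^{-\eta g_i}$. The following Python sketch is illustrative only: the linear objective $f(x)=\langle c,x\rangle$, its coefficients, and the constant step size are our toy choices, not taken from the text. Started from the uniform point $x^1=(1/n,\dots,1/n)$, the averaged iterate $\breve x^N$ approaches the minimum of the linear function over $\Delta_n$.

```python
import math

def md_step(x, g, eta):
    # Entropic mirror descent step on the simplex:
    # x_i <- x_i * exp(-eta * g_i), then renormalize
    # (closed form of the KL prox step).
    w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
    s = sum(w)
    return [wi / s for wi in w]

n = 4
c = [0.9, 0.1, 0.7, 0.5]        # linear objective f(x) = <c, x> (toy choice)
x = [1.0 / n] * n               # x^1 = argmin of d(x): the uniform point
N = 200
avg = [0.0] * n                 # running average of the iterates
for k in range(N):
    x = md_step(x, c, eta=0.5)  # gradient of <c, x> is c (noise-free here)
    avg = [a + xi / N for a, xi in zip(avg, x)]
f_avg = sum(ci * ai for ci, ai in zip(c, avg))
# f_avg approaches min_i c_i = 0.1 as N grows
```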
\subsection{ Penalization in the SAA Approach}
In this section, we study the SAA approach for non-strongly convex problem \eqref{eq:gener_risk_min_nonconv}. We regularize this problem by a penalty function $r(x,x^1)$ that is 1-strongly convex w.r.t. $x$ in the $\ell_2$-norm
\begin{equation}\label{def:gener_reg_prob}
\min_{x\in X \subseteq \mathbb{R}^n} F_\lambda(x) \triangleq \mathbb E f(x,\xi) + \lambda r(x,x^1)
\end{equation}
and we prove that the sample sizes in the SA and the SAA approaches will be equal up to logarithmic terms.
The empirical counterpart of problem \eqref{def:gener_reg_prob} is
\begin{equation}\label{eq:gen_prob_empir}
\min_{x\in X }\hat{F}_\lambda(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \lambda r(x,x^1).
\end{equation}
Let us define $ \hat x_\lambda = \arg\min\limits_{x\in X} \hat{F}_{\lambda}(x)$.
The next lemma proves a statement from \cite{shalev2009stochastic} on the boundedness of the population sub-optimality in terms of the square root of the empirical sub-optimality.
\begin{lemma}\label{Lm:pop_sub_opt}
Let $f(x,\xi)$ be convex and $M$-Lipschitz continuous w.r.t $\ell_2$-norm.
Then for any $x \in X$ with probability at least $ 1-\alpha$ the following holds
\[F_\lambda(x) - F_\lambda(x^*_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda} \left(\hat F_\lambda(x) - \hat F_\lambda(\hat x_\lambda)\right)} + \frac{4M_\lambda^2}{\alpha \lambda m},\]
where $ x^*_\lambda = \arg\min\limits_{x\in X} {F}_\lambda(x)$, $M_\lambda \triangleq M +\lambda \mathcal {R}^2$ and $\mathcal{R}^2 = r(x^*,x^1)$.
\end{lemma}
\begin{proof}
Let us define $f_\lambda(x, \xi) \triangleq f (x, \xi) + \lambda r(x,x^1) $. As $f(x, \xi)$ is $M$-Lipschitz continuous, $f_\lambda(x, \xi)$ is also Lipschitz continuous with constant $M_\lambda \triangleq M +\lambda \mathcal {R}^2$.
From Jensen's inequality applied to the expectation and the convexity of the absolute value, we get that
$F_\lambda(x)$ is also $M_\lambda$-Lipschitz continuous
\begin{equation}\label{eq:MLipscht_cont}
|F_\lambda(x)- F_\lambda(\hat x_\lambda) | \leq M_\lambda\| x -\hat x_\lambda\|_2, \qquad \forall x \in X.
\end{equation}
From $\lambda$-strong convexity of $f_\lambda(x, \xi)$, we obtain that $\hat F_\lambda(x)$ is also $\lambda$-strongly convex
\[
\|x-\hat x_\lambda\|_2^2\leq \frac{2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right), \qquad \forall x \in X.
\]
From this and \eqref{eq:MLipscht_cont} it follows
\begin{equation}\label{eq:sup_opt_empr}
F_\lambda(x)- F_\lambda(\hat x_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right)}.
\end{equation}
For any $x \in X$ and $ x^*_\lambda = \arg\min\limits_{x\in X} {F}_\lambda(x)$ we consider
\begin{equation}\label{eq:Fxhatx}
F_\lambda(x)- F_\lambda( x^*_\lambda) = F_\lambda(x)- F_\lambda(\hat x_\lambda) + F_\lambda(\hat x_\lambda) - F_\lambda( x^*_\lambda).
\end{equation}
From \citep[Theorem 6]{shalev2009stochastic} we have with probability at least $ 1 -\alpha$
\[ F_\lambda(\hat x_\lambda) - F_\lambda(x^*_\lambda) \leq \frac{4M_\lambda^2}{\alpha \lambda m}.\]
Using this and \eqref{eq:sup_opt_empr} for \eqref{eq:Fxhatx} we obtain with probability at least $ 1 -\alpha$
\[F_\lambda(x) - F_\lambda(x^*_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right)} + \frac{4M_\lambda^2}{\alpha \lambda m}.\]
\end{proof}
The next theorem shows that regularization eliminates the linear dependence on $n$ in the sample size of the SAA approach for a non-strongly convex objective
(see estimate \eqref{eq:SNSm}), and estimates the auxiliary precision for the regularized SAA problem \eqref{eq:aux_e_quad}.
\begin{theorem}\label{th_reg_ERM}
Let $f(x,\xi)$ be convex and $M$-Lipschitz continuous w.r.t $x$ and
let $\hat x_{\varepsilon'}$ be such that
\[
\frac{1}{m}\sum_{i=1}^m f(\hat x_{\varepsilon'},\xi_i) + \lambda r(\hat x_{\varepsilon'}, x^1) - \min_{x\in X} \left\{\frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \lambda r(x,x^1)\right\} \leq \varepsilon'.
\]
To satisfy
\[F(\hat x_{\varepsilon'}) - F(x^*)\leq \varepsilon\]
with probability at least $1-\alpha$
, we need to take $\lambda = \varepsilon/(2\mathcal{R}^2)$,
\[m = \frac{ 32 M^2\mathcal{R}^2}{\alpha \varepsilon^2}, \]
where
$\mathcal{R}^2 = r(x^*,x^1)$. The precision $\varepsilon'$ is defined as
\[\varepsilon' = \frac{\varepsilon^3}{64M^2 \mathcal{R}^2}.\]
\end{theorem}
\begin{proof}
From Lemma \ref{Lm:pop_sub_opt}
we get for $x=\hat x_{\varepsilon'}$
\begin{align}\label{eq:suboptimality}
F_\lambda(\hat x_{\varepsilon'}) - F_\lambda(x^*_\lambda) &\leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(\hat x_{\varepsilon'})-\hat F_\lambda(\hat x_\lambda ) \right)} + \frac{4M_\lambda^2}{\alpha \lambda m} \notag \\
&=\sqrt{\frac{2M_\lambda^2}{\lambda}\varepsilon'} + \frac{4M_\lambda^2}{\alpha \lambda m},
\end{align}
where we used the definition of $\hat x_{\varepsilon'}$ from the statement of this theorem.
Then we subtract $F(x^*)$ from both sides of \eqref{eq:suboptimality} and get
\begin{align}\label{eq:suboptimalityIm}
F_\lambda( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M_\lambda^2\varepsilon'}{\lambda}} +\frac{4M_\lambda^2}{\alpha\lambda m} +F_\lambda(x^*_\lambda)-F(x^*).
\end{align}
Then we use
\begin{align*}
F_\lambda(x^*_\lambda) &\triangleq \min_{x\in X}\left\{ F(x)+\lambda r(x,x^1) \right\} && \notag\\
&\leq F(x^*) + \lambda r(x^*,x^1) && \text{the inequality holds for any $x \in X$,} \notag\\
&=F(x^*) +\lambda \mathcal{R}^2,
\end{align*}
where $\mathcal{R}^2 = r(x^*,x^1)$.
Then from this and \eqref{eq:suboptimalityIm} and the definition of $F_\lambda(\hat x_{\varepsilon'})$ in \eqref{def:gener_reg_prob} we get
\begin{align}\label{eq:minim+lamb}
F( \hat x_{\varepsilon'}) - F(x^*) &\leq \sqrt{\frac{2M_\lambda^2}{\lambda}\varepsilon'} + \frac{4M_\lambda^2}{\alpha \lambda m} - \lambda r(\hat x_{\varepsilon'}, x^1)+{\lambda}\mathcal{R}^2\notag \\
&\leq \sqrt{\frac{2M_\lambda^2\varepsilon'}{\lambda}} +\frac{4M_\lambda^2}{\alpha\lambda m} +{\lambda}\mathcal{R}^2.
\end{align}
Assuming $M \gg \lambda \mathcal{R}^2 $ and choosing $\lambda =\varepsilon/ (2\mathcal{R}^2)$ in \eqref{eq:minim+lamb}, we get the
following
\begin{equation}\label{offline23}
F( \hat x_{\varepsilon'}) - F(x^*) \leq \sqrt{\frac{4M^2\mathcal R^2\varepsilon'}{\varepsilon}} +\frac{8M^2\mathcal R^2}{\alpha m \varepsilon} +\varepsilon/2.
\end{equation}
Equating the first and the second terms in the r.h.s. of \eqref{offline23} to $\varepsilon/4$, we obtain the remaining statements of the theorem, including $ F( \hat x_{\varepsilon'}) - F(x^*) \leq \varepsilon.$
\end{proof}
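As a quick sanity check of these parameter choices, one can tabulate $\lambda$, $m$, and $\varepsilon'$ for concrete values of $\varepsilon$, $M$, $\mathcal{R}^2$, and $\alpha$. The Python helper below (the function name and the sample values are ours, purely illustrative) exhibits the scalings $m = \Theta(\varepsilon^{-2})$ and $\varepsilon' = \Theta(\varepsilon^{3})$ stated in the theorem.

```python
def saa_parameters(eps, M, R2, alpha):
    # Parameter choices from the theorem: regularization lambda,
    # sample size m, and auxiliary precision eps' for the empirical problem.
    lam = eps / (2.0 * R2)
    m = 32.0 * M**2 * R2 / (alpha * eps**2)
    eps_inner = eps**3 / (64.0 * M**2 * R2)
    return lam, m, eps_inner

lam, m, eps_inner = saa_parameters(eps=0.1, M=1.0, R2=1.0, alpha=0.05)
# halving eps quadruples m and shrinks eps' by a factor of 8
lam2, m2, eps_inner2 = saa_parameters(eps=0.05, M=1.0, R2=1.0, alpha=0.05)
```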
\section{Fr\'{e}chet Mean with respect to Entropy-Regularized Optimal Transport }\label{sec:pen_bar}
In this section, we consider the problem of finding the population barycenter of independent identically distributed random discrete measures. We define the population barycenter of distribution $\mathbb P$ with respect to
entropy-regularized transport distances
\begin{equation}\label{def:populationWBFrech}
\min_{p\in \Delta_n}
W_\gamma(p)\triangleq
\mathbb E_q W_\gamma(p,q), \qquad q \sim \mathbb P.
\end{equation}
\subsection{Properties of Entropy-Regularized Optimal Transport}
Entropic regularization of transport distances \cite{cuturi2013sinkhorn} improves their statistical properties \cite{klatt2020empirical,bigot2019central} and reduces their computational complexity. Entropic regularization has shown good results in generative models \cite{genevay2017learning},
multi-label learning \cite{frogner2015learning}, dictionary learning \cite{rolet2016fast}, image processing \cite{cuturi2016smoothed,rabin2015convex}, neural imaging \cite{gramfort2015fast}.
Let us first recall the optimal transport problem between histograms $p,q \in \Delta_n$ with cost matrix $C\in \mathbb R_{+}^{n\times n}$
\begin{equation}\label{eq:OTproblem}
W(p,q) \triangleq \min_{\pi \in U(p,q)} \langle C, \pi \rangle,
\end{equation}
where
\[U(p,q) \triangleq\{ \pi\in \mathbb R^{n\times n}_+: \pi {\mathbf 1} =p, \pi^T {\mathbf 1} = q\}.\]
\begin{remark}[Connection with the $\rho$-Wasserstein distance]
When $C_{ij} =\mathtt d(x_i, x_j)^\rho$ in \eqref{eq:OTproblem} for some $\rho\geq 1$, where $\mathtt d(x_i, x_j)$ is a distance between support points $x_i, x_j$ of space $X$, the quantity $W(p,q)^{1/\rho}$ is known as the $\rho$-Wasserstein distance on $\Delta_n$.
\end{remark}
Nevertheless, all the results of this thesis are based only on the assumption that the matrix $C \in \mathbb R_+^{n\times n}$ is symmetric and non-negative. Thus, the optimal transport problem defined in \eqref{eq:OTproblem} is more general than the Wasserstein distances.
Following \cite{cuturi2013sinkhorn}, we introduce entropy-regularized optimal transport problem
\begin{align}\label{eq:wass_distance_regul}
W_\gamma (p,q) &\triangleq \min_{\pi \in U(p,q)} \left\lbrace \left\langle C,\pi\right\rangle - \gamma E(\pi)\right\rbrace,
\end{align}
where $\gamma>0$ and $E(\pi) \triangleq -\langle \pi,\log \pi \rangle $ is the entropy. Since $E(\pi)$ is 1-strongly concave on $\Delta_n$ in the $\ell_1$-norm, the objective in \eqref{eq:wass_distance_regul} is $\gamma$-strongly convex with respect to $\pi$ in the $\ell_1$-norm on $\Delta_n$, and hence problem \eqref{eq:wass_distance_regul} has a unique optimal solution. Moreover, $W_\gamma (p,q)$ is $\gamma$-strongly convex with respect to $p$ in the $\ell_2$-norm on $\Delta_n$ \citep[Theorem 3.4]{bigot2019data}.
One particular advantage of the entropy-regularized optimal transport is a closed-form representation for its dual function~\cite{agueh2011barycenters,cuturi2016smoothed} defined by the Fenchel--Legendre transform of $W_\gamma(p,q)$ as a function of $p$
\begin{align}\label{eq:FenchLegdef}
W_{\gamma, q}^*(u) &= \max_{ p \in \Delta_n}\left\{ \langle u, p \rangle - W_{\gamma}(p, q) \right\} = \gamma\left(E(q) + \left\langle q, \log (K \beta) \right\rangle \right)\notag\\
&= \gamma\left(-\langle q,\log q\rangle + \sum_{j=1}^n [q]_j \log\left( \sum_{i=1}^n \exp\left(([u]_i - C_{ji})/\gamma\right) \right)\right)
\end{align}
where $\beta = \exp( {u}/{\gamma}) $, \mbox{$K = \exp( {-C}/{\gamma }) $} and $[q]_j$ is the $j$-th component of vector $q$. Functions such as $\log$ or $\exp$ are always applied element-wise to vectors.
Hence, the gradient of dual function $W_{\gamma, q}^*(u)$ is also represented in a closed-form \cite{cuturi2016smoothed}
\begin{equation*}
\nabla W^*_{\gamma,q} (u)
= \beta \odot \left(K \cdot {q}/({K \beta}) \right) \in \Delta_n,
\end{equation*}
where symbols $\odot$ and $/$ stand for the element-wise product and element-wise division respectively.
This can be also written as
\begin{align}\label{eq:cuturi_primal}
\forall l =1,...,n \qquad [\nabla W^*_{\gamma,q} (u)]_l = \sum_{j=1}^n [q]_j \frac{\exp\left(([u]_l-C_{lj})/\gamma\right) }{\sum_{i=1}^n\exp\left(([u]_i-C_{ji})/\gamma\right)}.
\end{align}
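Formula \eqref{eq:cuturi_primal} can be evaluated directly; a naive, unoptimized Python sketch (with a small symmetric cost matrix, as assumed throughout; the toy inputs are ours) is given below. Since each inner fraction is a softmax weight, the components sum to one over $l$, so the output indeed lies in $\Delta_n$.

```python
import math

def grad_dual(u, q, C, gamma):
    # Closed-form gradient of the dual W*_{gamma,q} at u, computed
    # component-wise as in the displayed formula; returns a simplex point.
    n = len(u)
    g = [0.0] * n
    for j in range(n):
        denom = sum(math.exp((u[i] - C[j][i]) / gamma) for i in range(n))
        for l in range(n):
            g[l] += q[j] * math.exp((u[l] - C[j][l]) / gamma) / denom
    return g

C = [[0.0, 1.0], [1.0, 0.0]]   # toy symmetric cost matrix
g = grad_dual(u=[0.2, -0.2], q=[0.5, 0.5], C=C, gamma=0.5)
# sum(g) == 1 up to rounding, since each inner term is a softmax weight
```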
The dual representation of $W_\gamma (p,q) $ is
\begin{align}\label{eq:dual_Was}
W_{\gamma}(p,q) &= \min_{\pi \in U(p,q) }\sum_{i,j=1}^n\left( C_{ij}\pi_{i,j} + \gamma \pi_{i,j}\log \pi_{i,j} \right) \notag \\
&=\max_{u, \nu \in \mathbb R^n} \left\{ \langle u,p\rangle + \langle\nu,q\rangle - \gamma\sum_{i,j=1}^n\exp\left( ([u]_i+[\nu]_j -C_{ij})/\gamma -1 \right) \right\} \\
&=\max_{u \in \mathbb R^n}\left\{ \langle u,p\rangle -
\gamma\sum_{j=1}^n [q]_j\log\left(\frac{1}{[q]_j}\sum_{i=1}^n\exp\left( ([u]_i -C_{ij})/\gamma \right) \right)
\right\}. \notag
\end{align}
Any solution $\begin{pmatrix}
u^*\\
\nu^*
\end{pmatrix}$ of \eqref{eq:dual_Was} is a subgradient
of $ W_{\gamma}(p,q)$ \citep[Proposition 4.6]{peyre2019computational} \begin{equation}\label{eq:nabla_wass_lagrang}
\nabla W_\gamma(p,q) = \begin{pmatrix}
u^*\\
\nu^*
\end{pmatrix}.
\end{equation} We consider $u^*$ and $\nu^*$ such that
$\langle u^*, {\mathbf 1}\rangle = 0$ and $\langle \nu^*, {\mathbf 1}\rangle = 0$ ($u^*$ and $\nu^*$ are determined up to an additive constant).
The next theorem \cite{bigot2019data} describes the Lipschitz continuity of $W_\gamma (p,q)$ in $p$ on probability simplex $\Delta_n$ restricted to
\[\Delta^\rho_n = \left\{p\in \Delta_n : \min_{i \in [n]}p_i \geq \rho \right\},\]
where $0<\rho<1$ is an arbitrary small constant.
\begin{theorem}\citep[Theorem 3.4, Lemma 3.5]{bigot2019data}
\label{Prop:wass_prop}
\begin{itemize}
\item For any $q \in \Delta_n$,
$W_\gamma (p,q)$ is $\gamma$-strongly convex w.r.t. $p$ in the $\ell_2$-norm
\item For any $q \in \Delta_n$, $p \in \Delta^\rho_n$ and $0<\rho<1$,
$\|\nabla_p W_\gamma (p,q)\|_2 \leq M$, where
\[M = \sqrt{\sum_{j=1}^n\left( 2\gamma\log n +\inf_{i\in [n]}\sup_{l \in [n]} |C_{jl} -C_{il}| -\gamma\log \rho \right)^2}. \]
\end{itemize}
\end{theorem}
We roughly take $M = O(\sqrt n\|C\|_\infty)$: since $C_{ij} > 0$ for all $i,j\in [n]$, we get
\begin{align*}
M &\stackrel{\text{\cite{bigot2019data}}}{=} O\left(\sqrt{\sum_{j=1}^n\left( \inf_{i\in [n]}\sup_{l \in [n]} |C_{jl} -C_{il}| \right)^2} \right) \\
&= O \left(\sqrt{\sum_{j=1}^n \sup_{l \in [n]} C_{jl}^2} \right)= O\left( \sqrt{n} \sup_{j,l \in [n]} C_{jl} \right)
= O\left( \sqrt{n}\sup_{j\in [n]}\sum_{l \in [n]}C_{jl} \right)= O \left( \sqrt{n}\|C\|_\infty\right).
\end{align*}
Thus, we suppose that $W_\gamma(p,q)$ and $W(p,q)$ are Lipschitz continuous with almost the same Lipschitz constant $M$ in the $\ell_2$-norm on $\Delta_n^\rho$.
Moreover,
by the same arguments,
for the Lipschitz continuity in the $\ell_1$-norm, $\|\nabla_p W_\gamma (p,q)\|_\infty \leq M_\infty$, we can roughly estimate $M_\infty = O(\|C\|_\infty)$ by taking the maximum instead of the square root of the sum.
In what follows, we use Lipschitz continuity of $W_\gamma(p,q)$ and $W(p,q)$ for measures from $\Delta_n$, keeping in mind that adding some noise and normalizing the measures makes them belong to $\Delta_n^\rho$. We also notice that if the measures are from the interior of $\Delta_n$, then their barycenter will also be from the interior of $\Delta_n$.
\subsection{The SA Approach: Stochastic Gradient Descent}
For problem \eqref{def:populationWBFrech}, as a particular case of problem \eqref{eq:gener_risk_min}, stochastic gradient descent method can be used. From Eq. \eqref{eq:nabla_wass_lagrang}, it follows that an approximation for the gradient of $W_\gamma(p,q)$ with respect to $p$ can be calculated by Sinkhorn algorithm \cite{altschuler2017near-linear,peyre2019computational,dvurechensky2018computational}
by computing the dual variable $u$ with $\delta$-precision
\begin{equation}\label{inexact}
\|\nabla_p W_\gamma(p,q) - \nabla_p^\delta W_\gamma(p,q) \|_2\leq \delta, \quad \forall q\in \Delta_n.
\end{equation}
Here $\nabla_p^\delta W_\gamma(p,q)$ denotes an inexact stochastic subgradient of $ W_\gamma(p,q)$ with respect to $p$.
Algorithm \ref{Alg:OnlineGD} combines stochastic gradient descent
given by iterative formula \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$, the Sinkhorn algorithm (Algorithm \ref{Alg:SinkhWas}),
and Algorithm \ref{Alg:EuProj} performing the Euclidean projection onto the simplex $\Delta_n$.
\begin{algorithm}[ht!]
\caption{Sinkhorn's algorithm \cite{peyre2019computational} for calculating $ \nabla_p^\delta W_\gamma(p^k,q^k) $}
\label{Alg:SinkhWas}
\begin{algorithmic}[1]
\Procedure{Sinkhorn}{$p,q, C, \gamma$}
\State $a^1 \gets (1/n,...,1/n)$, $ b^1 \gets (1/n,...,1/n)$
\State $K \gets \exp (-C/\gamma)$
\While{\rm not converged}
\State $a \gets {p}/(Kb)$
\State $ b \gets {q }/(K^\top a)$
\EndWhile
\State \textbf{return} $ \gamma \log(a)$\Comment{Sinkhorn scaling $ a = e^{u/\gamma}$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
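A direct Python transcription of Algorithm \ref{Alg:SinkhWas} could look as follows. This is a plain sketch: it uses a fixed iteration budget instead of a convergence test and no log-domain stabilization, so it is only suitable for moderate $\|C\|_\infty/\gamma$; the toy marginals and cost matrix are ours.

```python
import math

def sinkhorn(p, q, C, gamma, n_iter=500):
    # Sinkhorn's algorithm: alternate scaling of a and b so that the plan
    # pi_ij = a_i K_ij b_j matches the marginals p and q.
    n = len(p)
    K = [[math.exp(-C[i][j] / gamma) for j in range(n)] for i in range(n)]
    a = [1.0 / n] * n
    b = [1.0 / n] * n
    for _ in range(n_iter):
        a = [p[i] / sum(K[i][j] * b[j] for j in range(n)) for i in range(n)]
        b = [q[j] / sum(K[i][j] * a[i] for i in range(n)) for j in range(n)]
    u = [gamma * math.log(ai) for ai in a]   # dual potential, u = gamma*log(a)
    return u, a, b, K

p, q = [0.3, 0.7], [0.6, 0.4]
C = [[0.0, 1.0], [1.0, 0.0]]
u, a, b, K = sinkhorn(p, q, C, gamma=0.5)
# marginals of the plan pi_ij = a_i K_ij b_j approach p (rows) and q (columns)
row = [sum(a[i] * K[i][j] * b[j] for j in range(2)) for i in range(2)]
col = [sum(a[i] * K[i][j] * b[j] for i in range(2)) for j in range(2)]
```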
\begin{algorithm}[ht!]
\caption{Euclidean Projection $\Pi_{\Delta_n}(p) = \arg\min\limits_{v\in \Delta_n}\|p-v\|_2$ onto Simplex $\Delta_n$ \cite{duchi2008efficient}}
\label{Alg:EuProj}
\begin{algorithmic}[1]
\Procedure{Projection}{$w\in \mathbb R^n$}
\State Sort components of $w$ in decreasing manner: $r_1\geq r_2 \geq ... \geq r_n$.
\State Find $\rho = \max\left\{ j \in [n]: r_j - \frac{1}{j}\left(\sum^{j}_{i=1}r_i - 1\right) > 0 \right\}$
\State Define
$\theta = \frac{1}{\rho}(\sum^{\rho}_{i=1}r_i - 1)$
\State For all $i \in [n]$, define $p_i = \max\{w_i - \theta, 0\}$.
\State \textbf{return} $p \in \Delta_n$
\EndProcedure
\end{algorithmic}
\end{algorithm}
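Algorithm \ref{Alg:EuProj} admits a compact Python implementation (a sketch following \cite{duchi2008efficient}; sorting makes it $O(n\log n)$, and the test inputs below are ours).

```python
def project_simplex(w):
    # Euclidean projection onto the probability simplex (Duchi et al., 2008):
    # sort, find the threshold index rho, shift by theta, and clip at zero.
    n = len(w)
    r = sorted(w, reverse=True)
    cumsum, s = [], 0.0
    for rj in r:
        s += rj
        cumsum.append(s)
    rho = max(j for j in range(1, n + 1)
              if r[j - 1] - (cumsum[j - 1] - 1.0) / j > 0)
    theta = (cumsum[rho - 1] - 1.0) / rho
    return [max(wi - theta, 0.0) for wi in w]

p1 = project_simplex([2.0, 0.0])          # -> [1.0, 0.0]
p2 = project_simplex([0.2, 0.3, 0.5])     # already on the simplex: unchanged
```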
\begin{algorithm}[ht!]
\caption{Projected Online Stochastic Gradient Descent for WB (PSGDWB)}
\label{Alg:OnlineGD}
\begin{algorithmic}[1]
\Require starting point $p^1 \in \Delta_n$, realization $q^1$, $\delta$, $\gamma$.
\For{$k= 1,2,3,\dots$}
\State $\eta_{k} = \frac{1}{\gamma k}$
\State $\nabla_p^\delta W_\gamma(p^k,q^k) \gets$ \textsc{Sinkhorn}$(p^k,q^k, C, \gamma)$ or the accelerated Sinkhorn \cite{guminov2019accelerated}
\State $p^{(k+1)/2} \gets p^k - \eta_{k} \nabla_p^\delta W_\gamma(p^k,q^k)$
\State $p^{k+1} \gets$ \textsc{Projection}$(p^{(k+1)/2})$
\State Sample $q^{k+1}$
\EndFor
\Ensure $p^1,p^2, p^3...$
\end{algorithmic}
\end{algorithm}
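To illustrate the projected SGD template of Algorithm \ref{Alg:OnlineGD} without a Sinkhorn oracle, the sketch below runs the same update on a stand-in strongly convex objective $\mathbb E_q\|p-q\|_2^2$ over the simplex. This toy objective is our substitute for $W_\gamma(p,q)$: its exact gradient $2(p-q)$ plays the role of $\nabla_p^\delta W_\gamma$, and the step size $\eta_k = 1/(\gamma k)$ with $\gamma = 2$ matches its strong convexity constant. The averaged iterate approaches the minimizer, which here is the mean of the sampled measures.

```python
import random

def project_simplex(w):
    # Euclidean projection onto the simplex (Duchi et al., 2008)
    n = len(w)
    r = sorted(w, reverse=True)
    cumsum, s = [], 0.0
    for rj in r:
        s += rj
        cumsum.append(s)
    rho = max(j for j in range(1, n + 1)
              if r[j - 1] - (cumsum[j - 1] - 1.0) / j > 0)
    theta = (cumsum[rho - 1] - 1.0) / rho
    return [max(wi - theta, 0.0) for wi in w]

random.seed(0)
support = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
mean = [1.0 / 3] * 3            # minimizer of E||p - q||^2 over the simplex
gamma, N = 2.0, 5000
p = [1.0 / 3] * 3
avg = [0.0] * 3
for k in range(1, N + 1):
    q = random.choice(support)  # sample a random measure q ~ P
    grad = [2.0 * (pi - qi) for pi, qi in zip(p, q)]
    p = project_simplex([pi - gi / (gamma * k) for pi, gi in zip(p, grad)])
    avg = [a + pi / N for a, pi in zip(avg, p)]
err = max(abs(a - m) for a, m in zip(avg, mean))
```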
For Algorithm \ref{Alg:OnlineGD} and problem \eqref{def:populationWBFrech}, Theorem \ref{Th:contract_gener} can be specified as follows
\begin{theorem}\label{Th:contract}
Let $\tilde p^N \triangleq \frac{1}{N}\sum_{k=1}^{N}p^k $ be the average of $N$ online outputs of Algorithm \ref{Alg:OnlineGD} run with $\delta$. Then, with probability
at least $1-\alpha$ the following holds
\begin{equation*}
W_{\gamma}(\tilde p^N) - W_{\gamma}(p^*_{\gamma})
= O\left(\frac{M^2\log(N/\alpha)}{\gamma N} + \delta \right),
\end{equation*}
where $p^*_{\gamma} \triangleq \arg\min\limits_{p\in \Delta_n}
W_\gamma(p)$.
Let Algorithm \ref{Alg:OnlineGD} run with $\delta = O\left(\varepsilon\right)$ and $
N =\widetilde O \left( \frac{M^2}{\gamma \varepsilon} \right) = \widetilde O \left( \frac{n\|C\|_\infty^2}{\gamma \varepsilon} \right)
$. Then, with probability
at least $1-\alpha$
\begin{equation*}
W_{\gamma}(\tilde p^N) -W_{\gamma}(p^*_{\gamma}) \leq \varepsilon \quad \text{and} \quad \|\tilde p^N - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}.
\end{equation*}
The total complexity of
Algorithm \ref{Alg:OnlineGD}
is
\begin{align*}
\widetilde O\left( \frac{n^3\|C\|_\infty^2}{\gamma\varepsilon}\min\left\{ \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\kappa \varepsilon^2} \right) \right), \sqrt{\frac{n \|C\|^2_{\infty}}{ \kappa\gamma \varepsilon^2}} \right\} \right),
\end{align*}
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$.
\end{theorem}
\begin{proof}
We estimate the co-domain (image) of $W_\gamma(p,q)$
\begin{align*}
\max_{p,q \in \Delta_n} W_\gamma (p,q)
&= \max_{p,q \in \Delta_n} \min_{ \substack{\pi \in \mathbb R^{n\times n}_+, \\ \pi {\mathbf 1} =p, \\ \pi^T {\mathbf 1} = q}} ~ \sum_{i,j=1}^n (C_{ij}\pi_{ij}+\gamma\pi_{ij}\log \pi_{ij})\notag\\
&\leq \max_{\substack{\pi \in \mathbb R^{n\times n}_+, \\ \sum_{i,j=1}^n \pi_{ij}=1}} \sum_{i,j=1}^n(C_{ij}\pi_{ij}+\gamma\pi_{ij}\log \pi_{ij}) \leq \|C\|_\infty.
\end{align*}
Therefore, $W_\gamma(p,q): \Delta_n\times \Delta_n\rightarrow \left[-2\gamma\log n, \|C\|_\infty\right]$.
Then we apply Theorem \ref{Th:contract_gener} with $B =\|C\|_\infty $ and $D =\max\limits_{p',p''\in \Delta_n}\|p'-p''\|_2 = \sqrt{2}$, and directly get
\begin{equation*}
W_{\gamma}(\tilde p^N) - W_{\gamma}(p^*_{\gamma})
= O\left(\frac{M^2\log(N/\alpha)}{\gamma N} + \delta \right).
\end{equation*}
Equating each term in the r.h.s. of this bound to $\varepsilon/2$ and using $M=O(\sqrt n \|C\|_\infty)$, we get the expressions for $N$ and $\delta$. The statement
$\|\tilde p^N - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}$
follows directly from strong convexity of $W_\gamma(p,q)$ and $W_\gamma(p)$.
The proof of algorithm complexity follows from the complexity
of the Sinkhorn's algorithm.
To state the complexity of Sinkhorn's algorithm, we first define $\tilde\delta$ as the accuracy in function value of the inexact solution $u$ of the maximization problem in \eqref{eq:dual_Was}.
Using this,
we formulate the number of iterations of Sinkhorn's algorithm
\cite{franklin1989scaling,carlier2021linear,kroshnin2019complexity,stonyakin2019gradient}
\begin{align}\label{eq:sink}
\widetilde O \left( \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\tilde \delta} \right) \right)\right).
\end{align}
The number of iterations for the accelerated Sinkhorn's algorithm can be improved \cite{guminov2019accelerated}
\begin{equation}\label{eq:accel}
\widetilde{O} \left(\sqrt{\frac{n \|C\|^2_\infty}{\gamma \varepsilon'}} \right).
\end{equation}
Here $\varepsilon'$ is the accuracy in the function value, which is the expression
$ \langle u,p\rangle + \langle\nu,q\rangle - \gamma\sum_{i,j=1}^n\exp\left( {(-C_{ji}+u_i+\nu_j)}/{\gamma} -1 \right)$ under the maximum in \eqref{eq:dual_Was}.
From strong convexity of this objective on the subspace orthogonal to the eigenvector $\boldsymbol 1_n$ corresponding to the eigenvalue $0$ of its Hessian, it follows that
\begin{equation}\label{eq:str_k}
\frac{\kappa}{2}\|u - u^*\|^2_2 \leq \varepsilon', \qquad \text{so taking} \quad \varepsilon' = \frac{\kappa\delta^2}{2} \quad \text{ensures} \quad \|u - u^*\|_2 \leq \delta,
\end{equation}
where $\kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$. From \citep[Proposition A.2.]{bigot2019data}, for the eigenvalues of $\nabla^2 W^*_{\gamma,q}(u^*)$ it holds that $0=\lambda_n\left(\nabla^2 W_{\gamma, q}^*(u^*)\right) < \lambda_k\left(\nabla^2 W_{\gamma, q}^*(u^*)\right) \text{ for all } k=1,...,n-1$. Inequality \eqref{eq:str_k} relates the inexact subgradient $\nabla^\delta_p W_\gamma(p,q) := u$ in Algorithm \ref{Alg:OnlineGD} to the exact one $\nabla_p W_\gamma(p,q) \triangleq u^*$ in \eqref{eq:nabla_wass_lagrang}.
Multiplying both estimates \eqref{eq:sink} and \eqref{eq:accel} by
the complexity $O(n^2)$ of each iteration of the (accelerated) Sinkhorn's algorithm and by
the number of iterations (sampled measures) $
N =\widetilde O \left( \frac{M^2}{\gamma \varepsilon} \right)$ of Algorithm \ref{Alg:OnlineGD},
and
taking the minimum, we get the last statement of the theorem.
\end{proof}
Next, we study the practical convergence of projected stochastic gradient descent (Algorithm \ref{Alg:OnlineGD}).
Using the fact that the true Wasserstein barycenter of one-dimensional Gaussian measures
has a closed-form expression for the mean and the variance \cite{delon2020wasserstein}, we study the convergence to the true barycenter of
the generated truncated Gaussian measures. Figure \ref{fig:gausbarSGD} illustrates the convergence in the $2$-Wasserstein distance within 40 seconds.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{images/projectedSGD.png}
\caption{Convergence of projected stochastic gradient descent to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarSGD}
\end{figure}
\subsection{The SAA Approach }
The empirical counterpart of problem \eqref{def:populationWBFrech} is the (empirical) Wasserstein barycenter problem
\begin{equation}\label{EWB_unregFrech}
\min_{p\in \Delta_n} \frac{1}{m}\sum_{i=1}^m W_\gamma(p,q_i),
\end{equation}
where $q_1, q_2,...,q_m$ are some realizations of random variable with distribution $\mathbb P$.
Let us define $\hat p_\gamma^m \triangleq \arg \min\limits_{p\in \Delta_n}{\frac{1}{m}}\sum_{i=1}^m W_\gamma(p,q_i)$ and its $\varepsilon'$-approximation $\hat p_{\varepsilon'}$ such that
\begin{equation}\label{eq:fidelity_wass}
\frac{1}{m} \sum_{i=1}^m W_{\gamma}( \hat p_{\varepsilon'}, q_i) - \frac{1}{m} \sum_{i=1}^m W_{\gamma}(\hat p^m_{\gamma}, q_i) \leq \varepsilon'.
\end{equation}
For instance, $\hat p_{\varepsilon'}$ can be calculated by the IBP algorithm \cite{benamou2015iterative} or the accelerated IBP algorithm \cite{guminov2019accelerated}.
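For completeness, a minimal fixed-support IBP sketch in Python is shown below (following \cite{benamou2015iterative}: uniform weights, symmetric cost matrix so that $K^\top = K$, no log-domain stabilization; the toy measures and cost matrix are ours). By symmetry of these toy inputs, the computed barycenter is symmetric as well.

```python
import math

def ibp_barycenter(qs, C, gamma, n_iter=300):
    # Iterative Bregman Projections for the fixed-support entropic
    # barycenter with uniform weights 1/m (K is symmetric here).
    m, n = len(qs), len(qs[0])
    K = [[math.exp(-C[i][j] / gamma) for j in range(n)] for i in range(n)]
    matvec = lambda M, x: [sum(Mi[j] * x[j] for j in range(n)) for Mi in M]
    us = [[1.0] * n for _ in range(m)]
    for _ in range(n_iter):
        vs = []
        for i in range(m):
            Ku = matvec(K, us[i])
            vs.append([qs[i][j] / Ku[j] for j in range(n)])
        Kvs = [matvec(K, v) for v in vs]
        # barycenter update: geometric mean of the vectors K v_i
        logs = [sum(math.log(Kvs[i][j]) for i in range(m)) / m for j in range(n)]
        p = [math.exp(lj) for lj in logs]
        us = [[p[j] / Kvs[i][j] for j in range(n)] for i in range(m)]
    s = sum(p)
    return [pj / s for pj in p]

qs = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]     # mirror-image toy measures
C = [[0.0, 1.0, 4.0], [1.0, 0.0, 1.0], [4.0, 1.0, 0.0]]
p = ibp_barycenter(qs, C, gamma=1.0)
```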
The next theorem specifies Theorem \ref{Th:contractSAA} for the Wassertein barycenter problem \eqref{EWB_unregFrech}.
\begin{theorem}\label{Th:contract2}
Let $\hat p_{\varepsilon'}$ satisfy \eqref{eq:fidelity_wass}.
Then, with probability at least $1-\alpha$
\begin{align*}
W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)
&\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m},
\end{align*}
where $p^*_{\gamma} \triangleq \arg\min\limits_{p\in \Delta_n}
W_\gamma(p)$.
Let $\varepsilon' = O \left(\frac{\varepsilon^2\gamma}{n\|C\|_\infty^2} \right)$ and $m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right) =O\left( \frac{n\|C\|_\infty^2}{\alpha \gamma \varepsilon} \right)$. Then, with probability at least $1-\alpha$
\[W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)\leq \varepsilon \quad \text{and} \quad \|\hat p_{\varepsilon'} - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}.\]
The total complexity of the accelerated IBP computing $\hat p_{\varepsilon'}$ is
\begin{equation*}
\widetilde O\left(\frac{n^4\|C\|_\infty^4}{\alpha \gamma^2\varepsilon^2} \right).
\end{equation*}
\end{theorem}
\begin{proof}
From Theorem \ref{Th:contractSAA} we get the first statement of the theorem
\[ W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)
\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}. \]
From \cite{guminov2019accelerated}
we have that complexity of the accelerated IBP is
\[
\widetilde O\left(\frac{mn^2\sqrt n\|C\|_\infty}{\sqrt{\gamma \varepsilon'}} \right).
\]
Substituting into this bound the expressions for $m$ and $\varepsilon'$ from Theorem \ref{Th:contractSAA}
\[\varepsilon' = O \left(\frac{\varepsilon^2 \gamma}{M^2} \right), \qquad m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right),\]
we get the final statement of the theorem and finish the proof.
\end{proof}
Next, we study the practical convergence of the Iterative Bregman Projections on truncated Gaussian measures.
Figure \ref{fig:gausbarIBP} illustrates the convergence of the barycenter calculated by the IBP algorithm to the true barycenter of Gaussian measures in the $2$-Wasserstein distance within 10 seconds. For the convergence to the true barycenter w.r.t. the $2$-Wasserstein distance in the SAA approach, we refer to
\cite{boissard2015distribution}; however, considering the convergence in the $\ell_2$-norm (Theorem \ref{Th:contract2}) allows one to obtain a better convergence rate in comparison with the bounds for the $2$-Wasserstein distance.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.5\textwidth]{images/ibpSAA.png}
\caption{Convergence of the Iterative Bregman Projections to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarIBP}
\end{figure}
\subsection{Comparison of the SA and the SAA for the WB Problem}
Now we compare the complexity bounds for the SA and the SAA implementations solving problem \eqref{def:populationWBFrech}.
For brevity, we omit the high-probability details since $\alpha$ can be fixed (say, $\alpha = 0.05$) in all the bounds.
Moreover, based on \cite{shalev2009stochastic}, we assume that in fact all bounds of this thesis have logarithmic dependence on $\alpha$, which is hidden in $\widetilde{O}(\cdot)$ \cite{feldman2019high,klochkov2021stability}.
\begin{table}[ht!]
\caption{Total complexity of the SA and the SAA implementations for $ \min\limits_{p\in \Delta_n}\mathbb E_q W_\gamma(p,q)$. }
\small
\hspace{-0.5cm}
{ \begin{tabular}{ll}
\toprule
\textbf{Algorithm} & \textbf{Complexity} \\
\midrule
Projected SGD (SA) & $ \widetilde O\left( \frac{n^3\|C\|^2_\infty}{\gamma\varepsilon} \min\left\{ \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\kappa \varepsilon^2} \right) \right), \sqrt{\frac{n \|C\|^2_{\infty}}{ \kappa\gamma \varepsilon^2}} \right\} \right)$ \\
\midrule
Accelerated IBP (SAA) &
$\widetilde O\left(\frac{n^4\|C\|_{\infty}^4}{\gamma^2\varepsilon^2}\right)$\\
\bottomrule
\end{tabular}}
\label{Tab:entropic_OT2}
\end{table}
Table \ref{Tab:entropic_OT2} presents the total complexity of the numerical algorithms implementing the SA and the SAA approaches.
When $\gamma$ is not too large, the complexity in the first row of the table is achieved by the second term under the minimum, namely \[\widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\gamma \sqrt{\gamma \kappa }\varepsilon^2}\right),\]
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$. This is typically bigger than the SAA complexity when $\kappa \ll \gamma/n$.
Hereby, the SAA approach may outperform the SA approach provided that the regularization parameter $\gamma$ is not too large.
From the practical point of view, the SAA implementation converges much faster than the SA implementation.
Executing the SAA algorithm in a distributed manner only enhances this superiority: when the objective is not Lipschitz smooth, a distributed implementation of the SA approach is not possible. This is precisely the case for the Wasserstein barycenter problem: the objective is Lipschitz continuous but not Lipschitz smooth.
\section{Fr\'{e}chet Mean with respect to Optimal Transport}\label{sec:unreg}
Now we are interested in finding a Fr\'{e}chet mean with respect to optimal transport
\begin{equation}\label{def:population_unregFrech}
\min_{p\in \Delta_n}W(p) \triangleq \mathbb E_q W (p,q).
\end{equation}
\subsection{The SA Approach with Regularization: Stochastic Gradient Descent}
The next theorem shows how the solution of the strongly convex problem \eqref{def:populationWBFrech} approximates a solution of the convex problem \eqref{def:population_unregFrech} under a proper choice of the regularization parameter $\gamma$.
\begin{theorem}\label{Th:SAunreg}
Let $\tilde p^N \triangleq \frac{1}{N}\sum_{k=1}^{N}p^k $ be the average of $N$ online outputs of Algorithm \ref{Alg:OnlineGD} run with $\delta = O\left(\varepsilon\right)$ and $
N =\widetilde O \left( \frac{n\|C\|_\infty^2}{\gamma \varepsilon} \right)
$. Let $\gamma = {\varepsilon}/{(2 \mathcal{R}^2)} $ with $\mathcal{R}^2 = 2 \log n$. Then, with probability
at least $1-\alpha$ the following holds
\begin{equation*}
W(\tilde p^N) - W(p^*) \leq \varepsilon,
\end{equation*}
where $p^*$ is a solution of \eqref{def:population_unregFrech}.
The total complexity of
Algorithm \ref{Alg:OnlineGD} with the accelerated Sinkhorn
is
\[\widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\gamma \sqrt{\gamma \kappa }\varepsilon^2}\right)= \widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\varepsilon^3 \sqrt{\varepsilon \kappa }}\right),\]
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$.
\end{theorem}
\begin{proof}
The proof of this theorem follows from Theorem \ref{Th:contract} and the following bound \cite{gasnikov2015universal,kroshnin2019complexity,peyre2019computational}
\[
W(p) - W(p^*) \leq W_{\gamma}(p) - W_{\gamma}(p^*) + 2\gamma\log n \leq W_{\gamma}(p) - W_{\gamma}(p^*_{\gamma})+ 2\gamma\log n,
\]
where $p \in \Delta_n $, $p^* = \arg\min\limits_{p\in \Delta_n}W(p) $.
The choice $\gamma = \frac{\varepsilon}{4\log n}$ ensures the following
\begin{equation*}
W(p) - W(p^*) \leq W_{\gamma}(p) - W_{\gamma}(p^*_{\gamma}) + \varepsilon/2, \quad \forall p \in \Delta_n.
\end{equation*}
This means that solving problem \eqref{def:populationWBFrech} with $\varepsilon/2$ precision, we get a solution of problem \eqref{def:population_unregFrech} with $\varepsilon$ precision.
When $\gamma$ is not too large, Algorithm \ref{Alg:OnlineGD} uses the accelerated Sinkhorn's algorithm (instead of Sinkhorn's algorithm). Thus, using $\gamma = \frac{\varepsilon}{4\log n} $ and noting that $\varepsilon$ is small, we get the complexity stated in the theorem.
\end{proof}
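As a numerical sanity check of the approximation argument above, one can compare the exact and the entropy-regularized OT values directly. The following Python sketch is our own illustration (not code used in the paper): it takes the convention $W_\gamma(p,q) = \min_X \langle C, X\rangle + \gamma \langle X, \log X\rangle$ over couplings of $(p,q)$, computes $W$ with a generic LP solver and $W_\gamma$ with Sinkhorn iterations, and checks the bound $0 \le W(p,q) - W_\gamma(p,q) \le 2\gamma \log n$, which follows from $\langle X, \log X\rangle \in [-2\log n, 0]$.

```python
import numpy as np
from scipy.optimize import linprog

def exact_ot(C, p, q):
    # Exact OT value: min <C, X> s.t. X 1 = p, X^T 1 = q, X >= 0, as an LP.
    n = len(p)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row marginals of vec(X)
        A_eq[n + i, i::n] = 1.0            # column marginals of vec(X)
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun

def sinkhorn_ot(C, p, q, gamma, n_iter=2000):
    # Entropy-regularized OT value: min <C, X> + gamma <X, log X>.
    K = np.exp(-C / gamma)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    X = u[:, None] * K * v[None, :]
    return np.sum(C * X) + gamma * np.sum(X * np.log(X + 1e-300))

rng = np.random.default_rng(0)
n = 10
C = rng.random((n, n))
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()
gamma = 0.2
w = exact_ot(C, p, q)
w_gamma = sinkhorn_ot(C, p, q, gamma)
gap = w - w_gamma
assert -1e-6 <= gap <= 2 * gamma * np.log(n) + 1e-6
```

The problem size, seed, and $\gamma$ are arbitrary choices for illustration only.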
\subsection{The SA Approach: Stochastic Mirror Descent}
Now we propose an approach to solve problem \eqref{def:population_unregFrech} without additional regularization.
The approach is based on mirror descent with the proximal step \eqref{eq:prox_mirr_step}. We use the simplex setup, which provides a closed-form solution for \eqref{eq:prox_mirr_step}. Algorithm \ref{Alg:OnlineMD} presents the application of this scheme to problem \eqref{def:population_unregFrech}. The gradient of $ W(p,q)$ with respect to $p$ can be calculated exactly by any LP solver via the dual representation of OT \cite{peyre2019computational}:
\begin{align}\label{eq:refusol}
W(p,q) = \max_{ \substack{(u, \nu) \in \mathbb R^n\times \mathbb R^n,\\
u_i+\nu_j \leq C_{ij}, \forall i,j \in [n]}}\left\{ \langle u,p \rangle + \langle \nu,q \rangle \right\}.
\end{align}
Then \[
\nabla_p W(p,q) = u^*,
\]
where $u^*$ is a solution of \eqref{eq:refusol} such that $\langle u^*,{\mathbf 1}\rangle =0$.
\begin{algorithm}[ht!]
\caption{Stochastic Mirror Descent for the Wasserstein Barycenter Problem}
\label{Alg:OnlineMD}
\begin{algorithmic}[1]
\Require starting point $p^1 = (1/n,...,1/n)^T$,
number of measures $N$, $q^1,...,q^N$, accuracy of gradient calculation $\delta$
\State $\eta = \frac{\sqrt{2\log n}}{\|C\|_{\infty}\sqrt{N}}$
\For{$k= 1,\dots, N$}
\State Calculate $\nabla_{p^k} W(p^k,q^k)$ solving dual LP by any LP solver
\State
\[p^{k+1} = \frac{p^{k}\odot \exp\left(-\eta\nabla_{p^k} W(p^k,q^k)\right)}{\sum_{j=1}^n [p^{k}]_j\exp\left(-\eta\left[\nabla_{p^k} W(p^k,q^k)\right]_j\right)} \]
\EndFor
\Ensure $\breve{p}^N = \frac{1}{N}\sum_{k=1}^{N} p^{k}$
\end{algorithmic}
\end{algorithm}
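For illustration, Algorithm \ref{Alg:OnlineMD} can be sketched in a few lines of Python (our own sketch, not code from the paper). The gradient oracle solves the dual LP \eqref{eq:refusol} with a generic LP solver and shifts the dual potential so that $\langle u^*,\mathbf 1\rangle = 0$; the toy stream of identical point masses is a hypothetical example whose barycenter is that point mass.

```python
import numpy as np
from scipy.optimize import linprog

def ot_dual_grad(C, p, q):
    # Solve the dual OT LP: max <u, p> + <v, q> s.t. u_i + v_j <= C_ij,
    # and return u* shifted to <u*, 1> = 0 (a subgradient of W(., q) at p).
    n = len(p)
    A_ub = np.zeros((n * n, 2 * n))
    for i in range(n):
        for j in range(n):
            A_ub[i * n + j, i] = 1.0        # coefficient of u_i
            A_ub[i * n + j, n + j] = 1.0    # coefficient of v_j
    res = linprog(-np.concatenate([p, q]), A_ub=A_ub, b_ub=C.ravel(),
                  bounds=(None, None), method="highs")
    u = res.x[:n]
    return u - u.mean()

def stochastic_mirror_descent(C, measures):
    # Multiplicative (entropic) mirror step on the simplex; output the average iterate.
    n = C.shape[0]
    N = len(measures)
    eta = np.sqrt(2 * np.log(n)) / (np.abs(C).max() * np.sqrt(N))
    p = np.full(n, 1.0 / n)
    p_sum = np.zeros(n)
    for q in measures:
        p_sum += p                      # average is over the iterates p^k
        g = ot_dual_grad(C, p, q)
        p = p * np.exp(-eta * g)
        p /= p.sum()
    return p_sum / N

# Toy run: every measure is the same point mass, so the barycenter is that point mass.
n = 5
C = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
q = np.zeros(n); q[0] = 1.0
p_hat = stochastic_mirror_descent(C, [q] * 400)
```

With this cost and stream, the averaged iterate concentrates most of its mass on the first coordinate, as expected.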
The next theorem estimates the complexity of Algorithm \ref{Alg:OnlineMD}.
\begin{theorem}\label{Th:MD}
Let $\breve p^N$ be the output of Algorithm \ref{Alg:OnlineMD} processing $N$ measures. Then, with probability
at least $1-\alpha$ we have
\begin{equation*}
W(\breve p^N) - W({p^*}) = O\left(\frac{\|C\|_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} \right).
\end{equation*}
Let Algorithm \ref{Alg:OnlineMD} run with $
N = \widetilde O \left( \frac{M_\infty^2R^2}{\varepsilon^2} \right) =\widetilde O \left( \frac{\|C\|_\infty^2}{\varepsilon^2} \right)
$, $ R^2 \triangleq {\rm KL}(p^*,p^1) \leq \log n $.
Then, with probability
at least $1-\alpha$
\[ W(\breve p^N) - W(p^*) \leq \varepsilon.\]
The total complexity of Algorithm \ref{Alg:OnlineMD} is
\[ \widetilde O\left( \frac{ n^3 \|C\|^2_\infty}{\varepsilon^2}\right).\]
\end{theorem}
\begin{proof}
From Theorem \ref{Th:MDgener} and using $M_\infty = O\left(\|C\|_\infty\right)$, we have
\begin{align}\label{eq:final_est234}
W(\breve p^N) - W(p^*) & = O\left(\frac{\|C\|_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2\delta \right).
\end{align}
Notice that $\nabla_{p^k} W(p^k,q^k)$ can be calculated exactly by any LP solver. Thus, we take $\delta = 0$ in \eqref{eq:final_est234} and get the first statement of the theorem.
The second statement of the theorem directly follows from this and the condition $ W(\breve p^N) - W(p^*)\leq \varepsilon$.
To obtain the complexity bound, notice that the cost of calculating $\nabla_{p} W(p^k,q^k)$ is $\widetilde{O}(n^3)$ \cite{ahuja1993network,dadush2018friendly,dong2020study,gabow1991faster}. Multiplying this by $N = O \left( {\|C\|_\infty^2R^2}/{\varepsilon^2} \right) $ with $ R^2 \triangleq {\rm KL}(p^*,p^1) \leq \log n $, we get the last statement of the theorem:
\[\widetilde O(n^3N) = \widetilde O\left(n^3 \left(\frac{\|C\|_{\infty} R}{\varepsilon}\right)^2\right) = \widetilde O\left(n^3 \left(\frac{\|C\|_\infty}{\varepsilon}\right)^2\right).\]
\end{proof}
Next we compare the SA approaches with and without regularization of optimal transport in problem \eqref{def:population_unregFrech}. Entropic regularization makes regularized optimal transport strongly convex in the $\ell_2$-norm; hence, the Euclidean setup should be used. The regularization parameter $\gamma = \frac{\varepsilon}{4 \log n}$ ensures an $\varepsilon$-approximation of the unregularized solution.
In this case, we use stochastic gradient descent with Euclidean projection onto the simplex since it converges faster for strongly convex objectives.
For the non-regularized problem we can exploit the simplex prox structure: stochastic mirror descent in the simplex setup (with the Kullback--Leibler divergence as the Bregman divergence) has Lipschitz constant $M_\infty = O(\|C\|_\infty)$, which is a factor of $\sqrt{n}$ better than the Lipschitz constant in the Euclidean norm, $M = O(\sqrt{n}\|C\|_\infty)$.
We studied the convergence of stochastic mirror descent (Algorithm \ref{Alg:OnlineMD}) and stochastic gradient descent (Algorithm \ref{Alg:OnlineGD}) in the $2$-Wasserstein distance within $10^4$ iterations (processing of $10^4$ probability measures).
Figure \ref{fig:gausbarcomparison1} confirms the faster convergence of stochastic mirror descent compared with projected stochastic gradient descent, in agreement with their theoretical complexities (Theorems \ref{Th:SAunreg} and \ref{Th:MD}).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{images/SGDMD.png}
\caption{Convergence of projected stochastic gradient descent, and stochastic mirror descent to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarcomparison1}
\end{figure}
\subsection{The SAA Approach }
Similarly to the SA approach, we provide a proper choice of the regularization parameter $\gamma$ in the SAA approach so that the solution of the strongly convex problem \eqref{def:populationWBFrech} approximates a solution of the convex problem \eqref{def:population_unregFrech}.
\begin{theorem}\label{Th:SAAunreg}
Let $\hat p_{\varepsilon'}$ satisfy
\begin{equation*}
\frac{1}{m} \sum_{i=1}^m W_{\gamma}( \hat p_{\varepsilon'}, q^i) - \frac{1}{m} \sum_{i=1}^m W_{\gamma}(\hat p^*_{\gamma}, q^i) \leq \varepsilon',
\end{equation*}
where $ \hat p_\gamma^* = \arg\min\limits_{p\in \Delta_n}\frac{1}{m}\sum\limits_{i=1}^m W_\gamma(p,q^i)$, $\varepsilon' = O \left(\frac{\varepsilon^2 \gamma}{n\|C\|_\infty^2} \right)$, $m = O\left( \frac{n\|C\|_\infty^2}{\alpha \gamma \varepsilon} \right)$, and $\gamma = {\varepsilon}/{(2 \mathcal{R}^2)} $ with $\mathcal{R}^2 = 2 \log n$.
Then, with probability at least $1-\alpha$ the following holds
\[W( \hat p_{\varepsilon'}) - W(p^*)\leq \varepsilon.\]
The total complexity of the accelerated IBP computing $\hat p_{\varepsilon'}$ is
\begin{equation*}
\widetilde O\left(\frac{n^4\|C\|_\infty^4}{\alpha \varepsilon^4} \right).
\end{equation*}
\end{theorem}
\begin{proof}
The proof follows from Theorem \ref{Th:contract2} and the proof of Theorem \ref{Th:SAunreg} with $\gamma = {\varepsilon}/{(4 \log n)} $.
\end{proof}
\subsection{Penalization of the WB problem}
For the population Wasserstein barycenter problem, we construct a penalty function that is 1-strongly convex in the $\ell_1$-norm, based on a Bregman divergence.
We consider the following prox-function \cite{ben-tal2001lectures}
\[d(p) = \frac{1}{2(a-1)}\|p\|_a^2, \quad a = 1 + \frac{1}{2\log n}, \qquad p\in \Delta_n\]
which is 1-strongly convex in the $\ell_1$-norm. The Bregman divergence $ B_d(p,p^1)$ associated with $d(p)$ is
\[B_d(p,p^1) = d(p) - d(p^1) - \langle \nabla d(p^1), p - p^1 \rangle.\]
$B_d(p,p^1)$ is 1-strongly convex w.r.t. $p$ in the $\ell_1$-norm and $\tilde{O}(1)$-Lipschitz continuous in the $\ell_1$-norm on $\Delta_n$. One advantage of this penalization over the negative-entropy penalization proposed in \cite{ballu2020stochastic,bigot2019penalization} is that we obtain an upper bound on the Lipschitz constant while the strong convexity in the $\ell_1$-norm on $\Delta_n$ is preserved. Moreover, this penalization yields better wall-clock complexity than quadratic penalization \cite{bigot2019penalization}: the Lipschitz constant of $W(p,q)$ with respect to the $\ell_1$-norm is $\sqrt{n}$ times smaller than with respect to the $\ell_2$-norm, whereas $R^2 = \|p^* - p^1\|_2^2\leq \|p^* - p^1\|_1^2 \leq \sqrt{2} $ and $R^2_d = B_d(p^*,p^1) = O(\log n)$ coincide up to a logarithmic factor.
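For concreteness, $d(p)$ and $B_d(p,p^1)$ can be implemented directly. The sketch below is our own illustration; it also verifies the closed-form gradient $[\nabla d(p)]_i = \frac{1}{a-1}\|p\|_a^{2-a}[p]_i^{a-1}$ by finite differences.

```python
import numpy as np

def d(p, a):
    # Prox-function d(p) = ||p||_a^2 / (2(a-1)).
    return np.sum(p ** a) ** (2.0 / a) / (2.0 * (a - 1.0))

def grad_d(p, a):
    # [grad d(p)]_i = ||p||_a^{2-a} p_i^{a-1} / (a-1), valid for p > 0.
    norm_a = np.sum(p ** a) ** (1.0 / a)
    return norm_a ** (2.0 - a) * p ** (a - 1.0) / (a - 1.0)

def bregman(p, p1, a):
    # B_d(p, p1) = d(p) - d(p1) - <grad d(p1), p - p1>.
    return d(p, a) - d(p1, a) - grad_d(p1, a) @ (p - p1)

rng = np.random.default_rng(0)
n = 8
a = 1.0 + 1.0 / (2.0 * np.log(n))
p = rng.random(n); p /= p.sum()
p1 = np.full(n, 1.0 / n)

# Finite-difference check of the gradient formula.
eps = 1e-6
g_fd = np.array([(d(p + eps * np.eye(n)[i], a) - d(p - eps * np.eye(n)[i], a)) / (2 * eps)
                 for i in range(n)])
assert np.allclose(g_fd, grad_d(p, a), atol=1e-5)
# Convexity of d makes the divergence nonnegative, and B_d(p, p) = 0.
assert bregman(p, p1, a) >= 0 and abs(bregman(p, p, a)) < 1e-12
```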
The regularized SAA problem is as follows:
\begin{equation}\label{EWB_Bregman_Reg}
\min_{p\in \Delta_n}\left\{\frac{1}{m}\sum_{k=1}^m W(p,q^k) + \lambda B_d(p,p^1)\right\}.
\end{equation}
The next theorem is a particular case of Theorem \ref{th_reg_ERM} for the population WB problem \eqref{def:population_unregFrech} with $r(p,p^1) = B_d(p,p^1)$.
\begin{theorem}\label{Th:newreg}
Let $\hat p_{\varepsilon'}$ be such that
\begin{equation}\label{EWB_eps}
\frac{1}{m}\sum_{k=1}^m W(\hat p_{\varepsilon'},q^k) + \lambda B_d(\hat p_{\varepsilon'},p^1) -
\min_{p\in \Delta_n}\left\{\frac{1}{m}\sum_{k=1}^m W(p,q^k) + \lambda B_d(p,p^1)\right\}
\le \varepsilon'.
\end{equation}
To satisfy
\[ W( \hat p_{\varepsilon'}) - W(p^*)\leq \varepsilon\]
with probability at least $1-\alpha$,
we need to take $\lambda = \varepsilon/(2{R_d^2})$ and
\[m = \widetilde O\left(\frac{ \|C\|_\infty^2}{\alpha \varepsilon^2}\right), \]
where
$ R_d^2 = B_d(p^*,p^1) = {O(\log n)}$. The precision $\varepsilon'$ is defined as
\[\varepsilon' = \widetilde O\left(\frac{\varepsilon^3}{\|C\|_\infty^2 }\right).\]
The total complexity of Mirror Prox computing $\hat p_{\varepsilon'}$ is
\[
\widetilde O\left( \frac{ n^2\sqrt n\|C\|^5_\infty}{\varepsilon^5} \right).\]
\end{theorem}
\begin{proof}
The proof is based on a saddle-point reformulation of the WB problem, which we now describe.
First, we rewrite the OT problem as \cite{jambulapati2019direct}
\begin{equation}\label{eq:OTreform}
W(p,q) = \min_{x \in \Delta_{n^2}} \max_{y\in [-1,1]^{2n}} \{d^\top x +2\|d\|_\infty( y^\top Ax -b^\top y)\},
\end{equation}
where $b =
(p^\top, q^\top)^\top$, $d$ is the vectorization of the cost matrix $C$, $x$ is the vectorization of the transport plan $X$, and $A=\{0,1\}^{2n\times n^2}$ is the incidence matrix.
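The incidence matrix $A$ admits an explicit construction. The short check below (our own illustration, assuming row-major vectorization of $X$) verifies that $Ax$ stacks the row and column marginals of the transport plan.

```python
import numpy as np

def incidence_matrix(n):
    # A in {0,1}^{2n x n^2}: maps x = vec(X) (row-major) to (X 1, X^T 1).
    A = np.zeros((2 * n, n * n))
    for i in range(n):
        A[i, i * n:(i + 1) * n] = 1.0      # row sums
        A[n + i, i::n] = 1.0               # column sums
    return A

rng = np.random.default_rng(0)
n = 4
X = rng.random((n, n))
A = incidence_matrix(n)
marginals = A @ X.ravel()
assert np.allclose(marginals[:n], X.sum(axis=1))
assert np.allclose(marginals[n:], X.sum(axis=0))
```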
Then we reformulate the WB problem as a saddle-point problem \cite{dvinskikh2020improved}
\begin{align}\label{eq:alm_distr}
\min_{ \substack{ p \in \Delta_n, \\ \mathbf{x} \in \mathcal X \triangleq \underbrace{\Delta_{n^2}\times \ldots \times \Delta_{n^2}}_{m} }} \max_{ \mathbf{y} \in [-1,1]^{2mn}}
\frac{1}{m} \left\{\boldsymbol d^\top \mathbf{x} +2\|d\|_\infty\left(\mathbf{y}^\top\boldsymbol A \mathbf{x} -\mathbf b^\top \mathbf{y} \right)\right\},
\end{align}
where
$\mathbf{x} = (x_1^\top ,\ldots,x_m^\top )^\top $,
$\mathbf{y} = (y_1^\top,\ldots,y_m^\top)^\top $,
$\mathbf b = (p^\top, q_1^\top, ..., p^\top, q_m^\top)^\top$,
$\boldsymbol d = (d^\top, \ldots, d^\top )^\top $,
and $\boldsymbol A = {\rm diag}\{A, ..., A\} \in \{0,1\}^{2mn\times mn^2}$ is a block-diagonal matrix.
Similarly to \eqref{eq:alm_distr} we reformulate \eqref{EWB_Bregman_Reg} as a saddle-point problem
\begin{align*}
\min_{ \substack{ p \in \Delta_n, \\ \mathbf{x} \in \mathcal X}} \max_{ \mathbf{y} \in [-1,1]^{2mn}}
~f_\lambda(\mathbf{x},p,\mathbf{y})
&\triangleq
\frac{1}{m} \left\{\boldsymbol d^\top \mathbf{x} +2\|d\|_\infty\left(\mathbf{y}^\top\boldsymbol A \mathbf{x} -\mathbf b^\top \mathbf{y} \right)\right\} +\lambda B_d(p,p^1)
\end{align*}
The gradient operator for $f_\lambda(\mathbf{x},p,\mathbf{y})$ is defined by
\begin{align}\label{eq:gradMPrec}
G(\mathbf{x}, p, \mathbf{y}) =
\begin{pmatrix}
\nabla_\mathbf{x} f \\
\nabla_p f\\
-\nabla_\mathbf{y} f
\end{pmatrix} =
\frac{1}{m} \begin{pmatrix}
\boldsymbol d + 2\|d\|_\infty \boldsymbol A^\top \mathbf{y} \\
- 2\|d\|_\infty \{[y_{i}]_{1...n}\}_{i=1}^m +\lambda(\nabla d(p) - \nabla d(p^1)) \\
2\|d\|_\infty(\boldsymbol A\mathbf{x} - \mathbf{b})
\end{pmatrix},
\end{align}
where $[\nabla d(p)]_i = \frac{1}{a-1}\|p\|_a^{2-a}[p]_i^{a-1}$.
To get the complexity of MP we follow the same arguments as in \cite{dvinskikh2020improved} with the gradient operator \eqref{eq:gradMPrec}. The total complexity is
\[
\widetilde O\left( \frac{ mn^2\sqrt{n} \|C\|_\infty}{\varepsilon'} \right).
\]
Then we use Theorem \ref{th_reg_ERM}
and get the expressions for $m$ and $\varepsilon'$ with $\lambda = \varepsilon/(2{R_d}^2)$, where ${R_d}^2 = B_d(p^*,p^1)$.
The number of measures is
\[m = \frac{ 32 M_\infty^2 R_d^2}{\alpha \varepsilon^2} = \widetilde O\left(\frac{\|C\|_\infty^2}{\alpha \varepsilon^2}\right). \]
The precision $\varepsilon'$ is defined as
\[\varepsilon' = \frac{\varepsilon^3}{64M_\infty^2 {R_d}^2} = O\left(\frac{\varepsilon^3}{\|C\|_\infty^2}\right).\]
\end{proof}
\subsection{Comparison of the SA and the SAA Approaches for the WB Problem}
Now we compare the complexity bounds for the SA and the SAA implementations solving problem \eqref{def:population_unregFrech}. Table \ref{Tab:OT} presents the total complexity for the numerical algorithms.
{
\begin{table}[H]
\caption{Total complexity of the SA and the SAA implementations for $ \min\limits_{p\in \Delta_n}\mathbb E_q W(p,q)$. }
\begin{center}
\begin{tabular}{lll}\toprule
\textbf{Algorithm} & \textbf{Theorem} & \textbf{Complexity} \\
\midrule
\makecell[l]{Projected SGD (SA) \\ \text{with }$\gamma = \frac{\varepsilon}{4 \log n}$}
& \ref{Th:SAunreg} & $ \widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\varepsilon^3 \sqrt{\varepsilon \kappa }}\right)$ \\
\midrule
Stochastic MD (SA) & \ref{Th:MD} & $\widetilde O\left( \frac{n^3\|C\|^2_{\infty}}{\varepsilon^2}\right) $ \\
\midrule
\makecell[l]{Accelerated IBP (SAA) \\ \text{with }$\gamma = \frac{\varepsilon}{4 \log n}$}
& \ref{Th:SAAunreg} &
$\widetilde{O}\left(\frac{n^4\|C\|^4_{\infty}}{\varepsilon^4}\right)$
\\
\midrule
\makecell[l]{ Mirror Prox with $B_d(p^*,p^1)$\\ penalization (SAA)} & \ref{Th:newreg} & $\widetilde O\left(\frac{ n^{2}\sqrt{n}\|C\|^{5} _{\infty}}{\varepsilon^{5} }\right)$ \\
\bottomrule
\end{tabular}
\label{Tab:OT}
\end{center}
\end{table}
}
For the SA algorithms, namely Stochastic MD and Projected SGD, we conclude the following: the non-regularized approach (Stochastic MD) exploits the simplex prox structure and attains better complexity bounds; indeed, the Lipschitz constant in the $\ell_1$-norm is $M_\infty = O(\|C\|_\infty)$, whereas the Lipschitz constant in the Euclidean norm is $M = O(\sqrt{n}\|C\|_\infty)$. A practical comparison of Stochastic MD (Algorithm \ref{Alg:OnlineMD}) and Projected SGD (Algorithm \ref{Alg:OnlineGD}) can be found in Figure \ref{fig:gausbarcomparison1}.
For the SAA approaches (Accelerated IBP and Mirror Prox with specific penalization) we conclude the following: the entropy-regularized approach (Accelerated IBP) has better dependence on $\varepsilon$ than the penalized approach (Mirror Prox with specific penalization), but worse dependence on $n$. Using the Dual Extrapolation method for the WB problem from \cite{dvinskikh2020improved} instead of Mirror Prox allows one to remove the $\sqrt{n}$ factor in the penalized approach.
One of the main advantages of the SAA approach is the possibility of executing it in a decentralized manner,
in contrast to the SA approach, which cannot be executed in a decentralized manner, or even in a distributed or parallel fashion, for a non-smooth objective \cite{gorbunov2019optimal}. This is precisely the case for the Wasserstein barycenter problem, whose objective is Lipschitz continuous but not Lipschitz smooth.
\section*{Acknowledgements}
The work was
supported by the Russian Science Foundation (project 18-71-10108), \url{https://rscf.ru/project/18-71-10108/}; and by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) number 075-00337-20-03, project No. 0714-2020-0005.
\bibliographystyle{tfs}
\section{Introduction}
In order to assist authors in the process of preparing a manuscript for a journal, the Taylor \& Francis `\textsf{Interact}' layout style has been implemented as a \LaTeXe\ class file based on the \texttt{article} document class. A \textsc{Bib}\TeX\ bibliography style file and a sample bibliography are also provided in order to assist with the formatting of your references.
Commands that differ from or are provided in addition to standard \LaTeXe\ are described in this document, which is \emph{not} a substitute for a \LaTeXe\ tutorial.
The \texttt{interacttfssample.tex} file can be used as a template for a manuscript by cutting, pasting, inserting and deleting text as appropriate, using the preamble and the \LaTeX\ environments provided (e.g.\ \verb"\begin{abstract}", \verb"\begin{keywords}").
\subsection{The \textsf{Interact} class file}\label{class}
The \texttt{interact} class file preserves the standard \LaTeXe\ interface such that any document that can be produced using \texttt{article.cls} can also be produced with minimal alteration using the \texttt{interact} class file as described in this document.
If your article is accepted for publication it will be typeset as the journal requires in Minion Pro and/or Myriad Pro. Since most authors will not have these fonts installed, the page make-up is liable to alter slightly with the change of font. Also, the \texttt{interact} class file produces only single-column format, which is preferred for peer review and will be converted to two-column format by the typesetter if necessary during preparation of the proofs. Please therefore do not try to match the typeset format exactly, but use the standard \LaTeX\ fonts instead and ignore details such as slightly long lines of text or figures/tables not appearing in exact synchronization with their citations in the text: these details will be dealt with by the typesetter. Similarly, it is unnecessary to spend time addressing warnings in the log file -- if your .tex file compiles to produce a PDF document that correctly shows how you wish your paper to appear, such warnings will not prevent your source files being imported into the typesetter's program.
\subsection{Submission of manuscripts prepared using \emph{\LaTeX}}
Manuscripts for possible publication should be submitted to the Editors for review as directed in the journal's Instructions for Authors, and in accordance with any technical instructions provided in the journal's ScholarOne Manuscripts or Editorial Manager site. Your \LaTeX\ source file(s), the class file and any graphics files will be required in addition to the final PDF version when final, revised versions of accepted manuscripts are submitted.
Please ensure that any author-defined macros used in your article are gathered together in the preamble of your .tex file, i.e.\ before the \verb"\begin{document}" command. Note that if serious problems are encountered in the coding of a document (missing author-defined macros, for example), the typesetter may resort to rekeying it.
\section{Using the \texttt{interact} class file}
For convenience, simply copy the \texttt{interact.cls} file into the same directory as your manuscript files (you do not need to install it in your \TeX\ distribution). In order to use the \texttt{interact} document class, replace the command \verb"\documentclass{article}" at the beginning of your document with the command \verb"\documentclass{interact}".
The following document-class options should \emph{not} be used with the \texttt{interact} class file:
\begin{itemize}
\item \texttt{10pt}, \texttt{11pt}, \texttt{12pt} -- unavailable;
\item \texttt{oneside}, \texttt{twoside} -- not necessary, \texttt{oneside} is the default;
\item \texttt{leqno}, \texttt{titlepage} -- should not be used;
\item \texttt{twocolumn} -- should not be used (see Subsection~\ref{class});
\item \texttt{onecolumn} -- not necessary as it is the default style.
\end{itemize}
To prepare a manuscript for a journal that is printed in A4 (two column) format, use the \verb"largeformat" document-class option provided by \texttt{interact.cls}; otherwise the class file produces pages sized for B5 (single column) format by default. The \texttt{geometry} package should not be used to make any further adjustments to the page dimensions.
\section{Additional features of the \texttt{interact} class file}
\subsection{Title, authors' names and affiliations, abstracts and article types}
The title should be generated at the beginning of your article using the \verb"\maketitle" command.
In the final version the author name(s) and affiliation(s) must be followed immediately by \verb"\maketitle" as shown below in order for them to be displayed in your PDF document.
To prepare an anonymous version for double-blind peer review, you can put the \verb"\maketitle" between the \verb"\title" and the \verb"\author" in order to hide the author name(s) and affiliation(s) temporarily.
Next you should include the abstract if your article has one, enclosed within an \texttt{abstract} environment.
The \verb"\articletype" command is also provided as an \emph{optional} element which should \emph{only} be included if your article actually needs it.
For example, the titles for this document begin as follows:
\begin{verbatim}
\articletype{ARTICLE TEMPLATE}
\title{Taylor \& Francis \LaTeX\ template for authors (\textsf{Interact}
layout + reference style S)}
\author{
\name{A.~N. Author\textsuperscript{a}\thanks{CONTACT A.~N. Author.
Email: [email protected]} and John Smith\textsuperscript{b}}
\affil{\textsuperscript{a}Taylor \& Francis, 4 Park Square, Milton
Park, Abingdon, UK; \textsuperscript{b}Institut f\"{u}r Informatik,
Albert-Ludwigs-Universit\"{a}t, Freiburg, Germany} }
\maketitle
\begin{abstract}
This template is for authors who are preparing a manuscript for a
Taylor \& Francis journal using the \LaTeX\ document preparation system
and the \texttt{interact} class file, which is available via selected
journals' home pages on the Taylor \& Francis website.
\end{abstract}
\end{verbatim}
An additional abstract in another language (preceded by a translation of the article title) may be included within the \verb"abstract" environment if required.
A graphical abstract may also be included if required. Within the \verb"abstract" environment you can include the code
\begin{verbatim}
\\\resizebox{25pc}{!}{\includegraphics{abstract.eps}}
\end{verbatim}
where the graphical abstract is to appear, where \verb"abstract.eps" is the name of the file containing the graphic (note that \verb"25pc" is the recommended maximum width, expressed in pica, for the graphical abstract in your manuscript).
\subsection{Abbreviations}
A list of abbreviations may be included if required, enclosed within an \texttt{abbreviations} environment, i.e.\ \verb"\begin{abbreviations}"\ldots\verb"\end{abbreviations}", immediately following the \verb"abstract" environment.
\subsection{Keywords}
A list of keywords may be included if required, enclosed within a \texttt{keywords} environment, i.e.\ \verb"\begin{keywords}"\ldots\verb"\end{keywords}". Additional keywords in other languages (preceded by a translation of the word `keywords') may also be included within the \verb"keywords" environment if required.
\subsection{Subject classification codes}
AMS, JEL or PACS classification codes may be included if required. The \texttt{interact} class file provides an \texttt{amscode} environment, i.e.\ \verb"\begin{amscode}"\ldots\verb"\end{amscode}", a \texttt{jelcode} environment, i.e.\ \verb"\begin{jelcode}"\ldots\verb"\end{jelcode}", and a \texttt{pacscode} environment, i.e.\ \verb"\begin{pacscode}"\ldots\verb"\end{pacscode}" to assist with this.
\subsection{Additional footnotes to the title or authors' names}
The \verb"\thanks" command may be used to create additional footnotes to the title or authors' names if required. Footnote symbols for this purpose should be used in the order
$^\ast$~(coded as \verb"$^\ast$"), $\dagger$~(\verb"$\dagger$"), $\ddagger$~(\verb"$\ddagger$"), $\S$~(\verb"$\S$"), $\P$~(\verb"$\P$"), $\|$~(\verb"$\|$"),
$\dagger\dagger$~(\verb"$\dagger\dagger$"), $\ddagger\ddagger$~(\verb"$\ddagger\ddagger$"), $\S\S$~(\verb"$\S\S$"), $\P\P$~(\verb"$\P\P$").
Note that any \verb"footnote"s to the main text will automatically be assigned the superscript symbols 1, 2, 3, etc. by the class file.\footnote{If preferred, the \texttt{endnotes} package may be used to set the notes at the end of your text, before the bibliography. The symbols will be changed to match the style of the journal if necessary by the typesetter.}
\section{Some guidelines for using the standard features of \LaTeX}
\subsection{Sections}
The \textsf{Interact} layout style allows for five levels of section heading, all of which are provided in the \texttt{interact} class file using the standard \LaTeX\ commands \verb"\section", \verb"\subsection", \verb"\subsubsection", \verb"\paragraph" and \verb"\subparagraph". Numbering will be automatically generated for all these headings by default.
\subsection{Lists}
Numbered lists are produced using the \texttt{enumerate} environment, which will number each list item with arabic numerals by default. For example,
\begin{enumerate}
\item first item
\item second item
\item third item
\end{enumerate}
was produced by
\begin{verbatim}
\begin{enumerate}
\item first item
\item second item
\item third item
\end{enumerate}
\end{verbatim}
Alternative numbering styles can be achieved by inserting an optional argument in square brackets to each \verb"item", e.g.\ \verb"\item[(i)] first item"\, to create a list numbered with roman numerals at level one.
Bulleted lists are produced using the \texttt{itemize} environment. For example,
\begin{itemize}
\item First bulleted item
\item Second bulleted item
\item Third bulleted item
\end{itemize}
was produced by
\begin{verbatim}
\begin{itemize}
\item First bulleted item
\item Second bulleted item
\item Third bulleted item
\end{itemize}
\end{verbatim}
\subsection{Figures}
The \texttt{interact} class file will deal with positioning your figures in the same way as standard \LaTeX. It should not normally be necessary to use the optional \texttt{[htb]} location specifiers of the \texttt{figure} environment in your manuscript; you may, however, find the \verb"[p]" placement option or the \verb"endfloat" package useful if a journal insists on the need to separate figures from the text.
Figure captions appear below the figures themselves, therefore the \verb"\caption" command should appear after the body of the figure. For example, Figure~\ref{sample-figure} with caption and sub-captions is produced using the following commands:
\begin{verbatim}
\begin{figure}
\centering
\subfloat[An example of an individual figure sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph1.eps}}}\hspace{5pt}
\subfloat[A slightly shorter sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph2.eps}}}
\caption{Example of a two-part figure with individual sub-captions
showing that captions are flush left and justified if greater
than one line of text.} \label{sample-figure}
\end{figure}
\end{verbatim}
\begin{figure}
\centering
\subfloat[An example of an individual figure sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph1.eps}}}\hspace{5pt}
\subfloat[A slightly shorter sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph2.eps}}}
\caption{Example of a two-part figure with individual sub-captions
showing that captions are flush left and justified if greater
than one line of text.} \label{sample-figure}
\end{figure}
To ensure that figures are correctly numbered automatically, the \verb"\label" command should be included just after the \verb"\caption" command, or in its argument.
The \verb"\subfloat" command requires \verb"subfig.sty", which is called in the preamble of the \texttt{interacttfssample.tex} file (to allow your choice of an alternative package if preferred) and included in the \textsf{Interact} \LaTeX\ bundle for convenience. Please supply any additional figure macros used with your article in the preamble of your .tex file.
The source files of any figures will be required when the final, revised version of a manuscript is submitted. Authors should ensure that these are suitable (in terms of lettering size, etc.) for the reductions they envisage.
The \texttt{epstopdf} package can be used to incorporate encapsulated PostScript (.eps) illustrations when using PDF\LaTeX, etc. Please provide the original .eps source files rather than the generated PDF images of those illustrations for production purposes.
\subsection{Tables}
The \texttt{interact} class file will deal with positioning your tables in the same way as standard \LaTeX. It should not normally be necessary to use the optional \texttt{[htb]} location specifiers of the \texttt{table} environment in your manuscript; you may, however, find the \verb"[p]" placement option or the \verb"endfloat" package useful if a journal insists on the need to separate tables from the text.
The \texttt{tabular} environment can be used as shown to create tables with single horizontal rules at the head, foot and elsewhere as appropriate. The captions appear above the tables in the \textsf{Interact} style, therefore the \verb"\tbl" command should be used before the body of the table. For example, Table~\ref{sample-table} is produced using the following commands:
\begin{table}
\tbl{Example of a table showing that its caption is as wide as
the table itself and justified.}
{\begin{tabular}{lcccccc} \toprule
& \multicolumn{2}{l}{Type} \\ \cmidrule{2-7}
Class & One & Two & Three & Four & Five & Six \\ \midrule
Alpha\textsuperscript{a} & A1 & A2 & A3 & A4 & A5 & A6 \\
Beta & B2 & B2 & B3 & B4 & B5 & B6 \\
Gamma & C2 & C2 & C3 & C4 & C5 & C6 \\ \bottomrule
\end{tabular}}
\tabnote{\textsuperscript{a}This footnote shows how to include
footnotes to a table if required.}
\label{sample-table}
\end{table}
\begin{verbatim}
\begin{table}
\tbl{Example of a table showing that its caption is as wide as
the table itself and justified.}
{\begin{tabular}{lcccccc} \toprule
& \multicolumn{6}{l}{Type} \\ \cmidrule{2-7}
Class & One & Two & Three & Four & Five & Six \\ \midrule
Alpha\textsuperscript{a} & A1 & A2 & A3 & A4 & A5 & A6 \\
Beta & B1 & B2 & B3 & B4 & B5 & B6 \\
Gamma & C1 & C2 & C3 & C4 & C5 & C6 \\ \bottomrule
\end{tabular}}
\tabnote{\textsuperscript{a}This footnote shows how to include
footnotes to a table if required.}
\label{sample-table}
\end{table}
\end{verbatim}
To ensure that tables are correctly numbered automatically, the \verb"\label" command should be included just before \verb"\end{table}".
The \verb"\toprule", \verb"\midrule", \verb"\bottomrule" and \verb"\cmidrule" commands are those used by \verb"booktabs.sty", which is called by the \texttt{interact} class file and included in the \textsf{Interact} \LaTeX\ bundle for convenience. Tables produced using the standard commands of the \texttt{tabular} environment are also compatible with the \texttt{interact} class file.
\subsection{Landscape pages}
If a figure or table is too wide to fit the page it will need to be rotated, along with its caption, through 90$^{\circ}$ anticlockwise. Landscape figures and tables can be produced using the \verb"rotating" package, which is called by the \texttt{interact} class file. The following commands (for example) can be used to produce such pages.
\begin{verbatim}
\setcounter{figure}{1}
\begin{sidewaysfigure}
\centerline{\epsfbox{figname.eps}}
\caption{Example landscape figure caption.}
\label{landfig}
\end{sidewaysfigure}
\end{verbatim}
\begin{verbatim}
\setcounter{table}{1}
\begin{sidewaystable}
\tbl{Example landscape table caption.}
{\begin{tabular}{@{}llllcll}
.
.
.
\end{tabular}}\label{landtab}
\end{sidewaystable}
\end{verbatim}
Before any such float environment, use the \verb"\setcounter" command as above to fix the numbering of the caption (the value of the counter being the number given to the preceding figure or table). Subsequent captions will then be automatically renumbered accordingly. The \verb"\epsfbox" command requires \verb"epsfig.sty", which is called by the \texttt{interact} class file and is also included in the \textsf{Interact} \LaTeX\ bundle for convenience.
Please note that if the \verb"endfloat" package is used, one or both of the commands
\begin{verbatim}
\DeclareDelayedFloatFlavor{sidewaysfigure}{figure}
\DeclareDelayedFloatFlavor{sidewaystable}{table}
\end{verbatim}
will need to be included in the preamble of your .tex file, after the \verb"endfloat" package is loaded, in order to process any landscape figures and/or tables correctly.
\subsection{Theorem-like structures}
A predefined \verb"proof" environment is provided by the \texttt{amsthm} package (which is called by the \texttt{interact} class file), as follows:
\begin{proof}
More recent algorithms for solving the semidefinite programming relaxation are particularly efficient, because they explore the structure of the MAX-CUT problem.
\end{proof}
\noindent This was produced by simply typing:
\begin{verbatim}
\begin{proof}
More recent algorithms for solving the semidefinite programming
relaxation are particularly efficient, because they explore the
structure of the MAX-CUT problem.
\end{proof}
\end{verbatim}
Other theorem-like environments (theorem, definition, remark, etc.) need to be defined as required, e.g.\ using \verb"\newtheorem{theorem}{Theorem}" in the preamble of your .tex file (see the preamble of \verb"interacttfssample.tex" for more examples). You can define the numbering scheme for these structures however suits your article best. Please note that the format of the text in these environments may be changed if necessary to match the style of individual journals by the typesetter during preparation of the proofs.
\subsection{Mathematics}
\subsubsection{Displayed mathematics}
The \texttt{interact} class file will set displayed mathematical formulas centred on the page without equation numbers if you use the \texttt{displaymath} environment or the equivalent \verb"\[...\]" construction. For example, the equation
\[
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\]
was typeset using the commands
\begin{verbatim}
\[
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\]
\end{verbatim}
For those of your equations that you wish to be automatically numbered sequentially throughout the text for future reference, use the \texttt{equation} environment, e.g.
\begin{equation}
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\end{equation}
was typeset using the commands
\begin{verbatim}
\begin{equation}
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\end{equation}
\end{verbatim}
Part numbers for sets of equations may be generated using the \texttt{subequations} environment, e.g.
\begin{subequations} \label{subeqnexample}
\begin{equation}
\varepsilon \rho w_{tt}(s,t) = N[w_{s}(s,t),w_{st}(s,t)]_{s},
\label{subeqnparta}
\end{equation}
\begin{equation}
w_{tt}(1,t)+N[w_{s}(1,t),w_{st}(1,t)] = 0, \label{subeqnpartb}
\end{equation}
\end{subequations}
which was typeset using the commands
\begin{verbatim}
\begin{subequations} \label{subeqnexample}
\begin{equation}
\varepsilon \rho w_{tt}(s,t) = N[w_{s}(s,t),w_{st}(s,t)]_{s},
\label{subeqnparta}
\end{equation}
\begin{equation}
w_{tt}(1,t)+N[w_{s}(1,t),w_{st}(1,t)] = 0, \label{subeqnpartb}
\end{equation}
\end{subequations}
\end{verbatim}
This is made possible by the \texttt{amsmath} package, which is called by the class file. If you put a \verb"\label" just after the \verb"\begin{subequations}" command, references can be made to the collection of equations, i.e.\ `(\ref{subeqnexample})' in the example above. Or, as the example also shows, you can label and refer to each equation individually -- i.e.\ `(\ref{subeqnparta})' and `(\ref{subeqnpartb})'.
Displayed mathematics should be given end-of-line punctuation appropriate to the running text sentence of which it forms a part, if required.
\subsubsection{Math fonts}
\paragraph{Superscripts and subscripts}
Superscripts and subscripts will automatically come out in the correct size in a math environment (i.e.\ enclosed within \verb"\(...\)" or \verb"$...$" commands in running text, or within \verb"\[...\]" or the \texttt{equation} environment for displayed equations). Sub/superscripts that are physical variables should be italic, whereas those that are labels should be roman (e.g.\ $C_p$, $T_\mathrm{eff}$). If the subscripts or superscripts need to be other than italic, they must be coded individually.
\paragraph{Upright Greek characters and the upright partial derivative sign}
Upright lowercase Greek characters can be obtained by inserting the letter `u' in the control code for the character, e.g.\ \verb"\umu" and \verb"\upi" produce $\umu$ (used, for example, in the symbol for the unit microns -- $\umu\mathrm{m}$) and $\upi$ (the ratio of the circumference of a circle to its diameter). Similarly, the control code for the upright partial derivative $\upartial$ is \verb"\upartial". Bold lowercase as well as uppercase Greek characters can be obtained by \verb"{\bm \gamma}", for example, which gives ${\bm \gamma}$, and \verb"{\bm \Gamma}", which gives ${\bm \Gamma}$.
\section*{Acknowledgement(s)}
An unnumbered section, e.g.\ \verb"\section*{Acknowledgements}", may be used for thanks, etc.\ if required and included \emph{in the non-anonymous version} before any Notes or References.
\section*{Disclosure statement}
An unnumbered section, e.g.\ \verb"\section*{Disclosure statement}", may be used to declare any potential conflict of interest and included \emph{in the non-anonymous version} before any Notes or References, after any Acknowledgements and before any Funding information.
\section*{Funding}
An unnumbered section, e.g.\ \verb"\section*{Funding}", may be used for grant details, etc.\ if required and included \emph{in the non-anonymous version} before any Notes or References.
\section*{Notes on contributor(s)}
An unnumbered section, e.g.\ \verb"\section*{Notes on contributors}", may be included \emph{in the non-anonymous version} if required. A photograph may be added if requested.
\section*{Nomenclature/Notation}
An unnumbered section, e.g.\ \verb"\section*{Nomenclature}" (or \verb"\section*{Notation}"), may be included if required, before any Notes or References.
\section*{Notes}
An unnumbered `Notes' section may be included before the References (if using the \verb"endnotes" package, use the command \verb"\theendnotes" where the notes are to appear, instead of creating a \verb"\section*").
\section{References}
\subsection{References cited in the text}
References should be cited in the text by numbers in square brackets based on the order in which they appear in an alphabetical list of references at the end of the document (not the order of citation), so the first reference cited in the text might be [23]. For example, these may take the forms [32], [5,\,6,\,14], [21--55] (\emph{not} [21]--[55]). For further details on this reference style, see the Instructions for Authors on the Taylor \& Francis website.
Each bibliographic entry has a key, which is assigned by the author and is used to refer to that entry in the text. In this document, the key \verb"Ali95" in the citation form \verb"\cite{Ali95}" produces `\cite{Ali95}', and the keys \verb"Bow76" and \verb"BGP02" in the citation form \verb"\cite{Bow76,BGP02}" produce `\cite{Bow76,BGP02}'. The citation for a range of bibliographic entries (e.g. `\cite{GMW81,Hor96,Con96,Har97,FGK03,Fle80,Ste98,Ell98,Str97,Coo03,Kih01,Hol03,Hai01,Hag03,Hov03,Mul00,GHGsoft,Pow00}') will automatically be produced by \verb"\cite{GMW81,Hor96,Con96,Har97,FGK03,Fle80,Ste98,Ell98,Str97,Coo03,Kih01," \verb"Hol03,Hai01,Hag03,Hov03,Mul00,GHGsoft,Pow00}".
Optional notes may be included at the beginning and/or end of a citation by the use of square brackets, e.g. \verb"\cite[cf.][]{Tay03}" produces `\cite[cf.][]{Tay03}', and \verb"\cite[see][and references therein]{Wei95}" produces `\cite[see][and references therein]{Wei95}'.
\subsection{The list of references}
To produce the list of references, the bibliographic data about each reference item should be listed in \texttt{thebibliography} environment in alphabetical order. References with the same author or group of authors are further sorted chronologically, beginning with the earliest. The following list shows some sample references prepared in Taylor \& Francis' Reference Style S.
\section{Introduction}
In this paper, we consider the problem of finding a barycenter of discrete random probability measures generated by a distribution. We rely on
optimal transport (OT) metrics, which provide a successful framework for comparing objects that can be modeled as probability measures (images, videos, texts, etc.). Transport-based distances have gained popularity in various fields such as statistics \cite{ebert2017construction,bigot2012consistent}, unsupervised learning \cite{arjovsky2017wasserstein}, signal and image analysis \cite{thorpe2017transportation}, computer vision \cite{rubner1998metric}, text classification \cite{kusner2015word}, economics and finance \cite{rachev2011probability} and medical imaging \cite{wang2010optimal,gramfort2015fast}.
Moreover, many statistical results are known about optimal transport distances \cite{sommerfeld2018inference,weed2019sharp,klatt2020empirical}.
The success of optimal transport has led to increasing interest in {Wasserstein barycenters} (WBs).
Wasserstein barycenters are used in Bayesian computations \cite{srivastava2015wasp}, texture mixing \cite{rabin2011wasserstein}, clustering ($k$-means for probability measures) \cite{del2019robust}, shape interpolation and color transferring \cite{Solomon2015}, statistical estimation of template models \cite{boissard2015distribution} and neuroimaging \cite{gramfort2015fast}.
For discrete random probability measures from the probability simplex $\Delta_n$ ($n$ is the size of the support) with distribution $\mathbb P$, a Wasserstein barycenter is introduced through the notion of the Fr\'{e}chet mean \cite{frechet1948elements}
\begin{equation}\label{def:Freche_population}
\min_{p\in \Delta_n}\mathbb E_{q \sim \mathbb P} W(p,q).
\end{equation}
If a solution of \eqref{def:Freche_population} exists and is unique, then it is referred to as the population barycenter for distribution $\mathbb P$. Here $W(p,q)$ is the
optimal transport distance between measures $p$ and $q$
\begin{equation}\label{def:optimaltransport}
W(p,q) = \min_{\pi \in U(p,q)} \langle C, \pi \rangle,
\end{equation}
where $C \in \mathbb R^{n\times n}_+$ is a symmetric transportation cost matrix and $U(p,q) \triangleq\{ \pi\in \mathbb R^{n\times n}_+: \pi {\mathbf 1} =p, \pi^T {\mathbf 1} = q\}$ is the transport polytope.\footnote{
When $C_{ij} =\mathtt d(x_i, x_j)^\rho$ in \eqref{def:optimaltransport} for some $\rho\geq 1$, where $\mathtt d(x_i, x_j)$ is a distance on support points $x_i, x_j$, the quantity $W(p,q)^{1/\rho}$ is known as the $\rho$-Wasserstein distance.
Nevertheless, all the results of this paper rely only on the assumptions that the matrix $C \in \mathbb R_+^{n\times n}$ is symmetric and non-negative. Thus, the optimal transport problem defined in \eqref{def:optimaltransport} is more general than the Wasserstein distances.}
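The optimal transport problem \eqref{def:optimaltransport} is a finite-dimensional linear program over the transport polytope $U(p,q)$, so for small $n$ it can be solved directly with an off-the-shelf LP solver. The following sketch (toy support points, cost matrix and measures of our own choosing, using \texttt{scipy}) illustrates this; it is not the solver used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the OT linear program over U(p, q):
# minimize <C, pi> subject to pi 1 = p, pi^T 1 = q, pi >= 0.
n = 4
x = np.linspace(0.0, 1.0, n)               # support points x_1, ..., x_n
C = (x[:, None] - x[None, :]) ** 2         # symmetric cost C_ij = d(x_i, x_j)^2
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

# Equality constraints: the first n rows fix the row sums of pi
# (pi 1 = p), the next n rows fix the column sums (pi^T 1 = q).
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0       # i-th row of the flattened pi
    A_eq[n + i, i::n] = 1.0                # i-th column of the flattened pi
b_eq = np.concatenate([p, q])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print(f"W(p, q) = {res.fun:.6f}")
```

The constraint matrix contains one redundant row (both marginals sum to one); the HiGHS backend handles this without modification.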
In \cite{cuturi2013sinkhorn}, the entropic regularization of optimal transport problem \eqref{def:optimaltransport} was proposed to improve its statistical properties \cite{klatt2020empirical,bigot2019central} and to reduce the computational complexity from $\tilde O(n^3)$ ($n$ is the size of the support of the measures) to $n^2\min\{\tilde O\left(\frac{1}{\varepsilon} \right), \tilde O\left(\sqrt{n} \right) \} $ arithmetic operations\footnote{The estimate $n^2\min\{\tilde O\left(\frac{1}{\varepsilon} \right), \tilde O\left(\sqrt{n} \right) \}$ is the best known theoretical estimate for solving OT problem \cite{blanchet2018towards,jambulapati2019direct,lee2014path,quanrud2018approximating}. The best known practical estimates are $\sqrt{n}$ times worse (see \cite{guminov2019accelerated} and references therein).}
\begin{align}\label{eq:wass_distance_regul2222}
W_\gamma (p,q) &\triangleq \min_{\pi \in U(p,q)} \left\lbrace \left\langle C,\pi\right\rangle - \gamma E(\pi)\right\rbrace.
\end{align}
Here $\gamma>0$ and $E(\pi) \triangleq -\langle \pi,\log \pi \rangle $ is the entropy. Since $E(\pi)$ is 1-strongly concave on $\Delta_{n^2}$ in the $\ell_1$-norm, the objective in \eqref{eq:wass_distance_regul2222} is $\gamma$-strongly convex with respect to $\pi$ in the $\ell_1$-norm on $\Delta_{n^2}$, and hence, problem \eqref{eq:wass_distance_regul2222} has a unique optimal solution. Moreover, $W_\gamma (p,q)$ is $\gamma$-strongly convex with respect to $p$ in the $\ell_2$-norm on $\Delta_n$ \citep[Theorem 3.4]{bigot2019data}.
Another particular advantage of the entropy-regularized optimal transport \eqref{eq:wass_distance_regul2222} is a closed-form representation for its dual function~\cite{agueh2011barycenters,cuturi2016smoothed} defined by the Fenchel--Legendre transform of $W_\gamma(p,q)$ as a function of $p$
\begin{align*}
W_{\gamma, q}^*(u) &= \max_{ p \in \Delta_n}\left\{ \langle u, p \rangle - W_{\gamma}(p, q) \right\} = \gamma\left(E(q) + \left\langle q, \log (K \beta) \right\rangle \right),
\end{align*}
where $\beta = \exp( {u}/{\gamma}) $, \mbox{$K = \exp( {-C}/{\gamma }) $} and functions $\log$ or $\exp$ are applied element-wise.
Hence, the gradient of the dual function $W_{\gamma, q}^*(u)$ also has a closed-form representation \cite{cuturi2016smoothed}
\begin{equation*}
\nabla W^*_{\gamma,q} (u)
= \beta \odot \left(K \cdot {q}/({K \beta}) \right) \in \Delta_n,
\end{equation*}
where symbols $\odot$ and $/$ stand for the element-wise product and element-wise division respectively.
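These closed forms are easy to check numerically. The sketch below (toy data of our own choosing; the function name \texttt{sinkhorn\_plan} is ours, not the paper's) runs Sinkhorn iterations for the entropy-regularized problem \eqref{eq:wass_distance_regul2222} and evaluates the closed-form dual gradient, which indeed lies in $\Delta_n$.

```python
import numpy as np

# Sinkhorn iterations for entropy-regularized OT, plus the closed-form
# gradient of the dual function W*_{gamma, q}.  Toy data only.
def sinkhorn_plan(p, q, C, gamma, n_iter=1000):
    """Approximate the optimal plan of W_gamma(p, q)."""
    K = np.exp(-C / gamma)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        u = p / (K @ v)        # enforce row marginals    (pi 1 = p)
        v = q / (K.T @ u)      # enforce column marginals (pi^T 1 = q)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n = 5
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2        # symmetric, non-negative cost
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()
gamma = 0.5

pi = sinkhorn_plan(p, q, C, gamma)

# Closed-form dual gradient at an arbitrary u: an element of Delta_n.
u_dual = rng.standard_normal(n)
beta = np.exp(u_dual / gamma)
K = np.exp(-C / gamma)
grad = beta * (K @ (q / (K @ beta)))
```

Because the last Sinkhorn update enforces the column constraint, $\pi^T \mathbf 1 = q$ holds to machine precision while $\pi \mathbf 1 = p$ holds up to the convergence tolerance; the identity $\mathbf 1^\top \nabla W^*_{\gamma,q}(u) = 1$ uses the symmetry of $K$.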
\textbf{Background on the SA and the SAA Approaches and Convergence Rates.}
Let us consider a general stochastic convex minimization problem
\begin{equation}\label{eq:gener_risk_min}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where the function $f$ is convex in $x$ on a convex set $X$, and $\mathbb E f(x,\xi)$ is the expectation of $f$ with respect to $\xi \in \Xi$.
Such problems arise in many applications of data science
\cite{shalev2014understanding,shapiro2014lectures} (e.g., risk minimization) and mathematical statistics \cite{spokoiny2012parametric} (e.g., maximum likelihood estimation). There are two competing approaches based on Monte Carlo sampling techniques to solve \eqref{eq:gener_risk_min}: the Stochastic Approximation (SA) \cite{robbins1951stochastic} and the Sample Average Approximation (SAA).
The SAA approach replaces the objective in problem \eqref{eq:gener_risk_min} with its sample average, yielding the problem
\begin{equation}\label{eq:empir_risk_min}
\min_{x\in X} \hat{F}(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i),
\end{equation}
where $\xi_1, \xi_2,\dots,\xi_m$ are realizations of the random variable $\xi$. The number of realizations $m$ is chosen according to the desired precision.
The total working time of both approaches to solve problem \eqref{eq:gener_risk_min} with average precision $\varepsilon$ in the optimality gap in terms of the objective function (i.e., to find $x^N$ such that $\mathbb E F(x^N) - \min\limits_{x\in X } F(x)\leq \varepsilon$)
depends on the specific problem. However, it has generally been accepted \cite{nemirovski2009robust} that the SA approach is better than the SAA approach.
Stochastic gradient (mirror) descent, an implementation of the SA approach \cite{juditsky2012first-order}, gives the following estimate for the number of iterations (which is equal to the sample size of $\xi_1, \xi_2, \xi_3,\dots,\xi_m$)
\begin{equation}\label{eq:SNSm}
m = O\left(\frac{M^2R^2}{\varepsilon^2}\right).
\end{equation}
Here we consider the minimal (non-smooth) setting, assuming only bounded stochastic gradients of the objective $f(x,\xi)$:
\begin{equation}\label{M}
\|\nabla f(x,\xi)\|_2^2\le M^2, \quad \forall x \in X, \xi \in \Xi.
\end{equation}
In contrast, the SAA approach requires the following sample size \cite{shapiro2005complexity}
\[m = \widetilde{O}\left(\frac{n M^2R^2}{\varepsilon^2}\right),\]
that is, $n$ times larger ($n$ is the problem's dimension) than the sample size in the SA approach. This estimate was obtained under the assumption that problem \eqref{eq:empir_risk_min} is solved exactly. This is one of the main drawbacks of the SAA approach. However, if the objective $f(x,\xi)$ is $\lambda$-strongly convex in $x$, the sample sizes of the two approaches coincide up to logarithmic terms
\[
m = O\left(\frac{M^2}{\lambda \varepsilon}\right).
\]
Moreover, in this case, for the SAA approach, it suffices to solve problem \eqref{eq:empir_risk_min} with accuracy \cite{shalev2009stochastic}
\begin{equation}\label{eq:aux_e_quad}
\varepsilon' = O\left(\frac{\varepsilon^2\lambda}{M^2}\right).
\end{equation}
Therefore, to eliminate the linear dependence on $n$ in the SAA approach for a non-strongly convex objective, regularization $\lambda =\frac{\varepsilon}{R^2}$ should be used \cite{shalev2009stochastic}.
Let us suppose that $f(x,\xi)$ in \eqref{eq:gener_risk_min} is convex but not strongly convex in $x$ (possibly, $\lambda$-strongly convex but with very small $\lambda \ll \frac{\varepsilon}{R^2}$). Here $R = \|x^1 - x^*\|_2$ is the Euclidean distance between the starting point $x^1$ and the solution $x^*$ of \eqref{eq:gener_risk_min} that minimizes this distance (if the solution is not unique). Then problem \eqref{eq:gener_risk_min} can be replaced by
\begin{equation}\label{eq:gener_risk_min_reg}
\min_{x\in X } \mathbb E f(x,\xi) + \frac{\varepsilon}{2R^2}\|x - x^1\|_2^2.
\end{equation}
The empirical counterpart of \eqref{eq:gener_risk_min_reg} is
\begin{equation}\label{eq:empir_risk_min_reg}
\min_{x\in X} \frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \frac{\varepsilon}{2R^2}\|x - x^1\|_2^2,
\end{equation}
where the sample size $m$ is defined in \eqref{eq:SNSm}.
Thus, in the case of a non-strongly convex objective, regularization equates the sample sizes of the two approaches.
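As a toy illustration of the regularized SAA problem \eqref{eq:empir_risk_min_reg} (all the numbers below are illustrative choices of ours, not from the paper): take the non-strongly convex loss $f(x,\xi)=|x-\xi|$, for which $M=1$. Adding the penalty $\frac{\varepsilon}{2R^2}(x-x^1)^2$ makes the empirical objective strongly convex while barely moving its minimizer away from the empirical median.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Regularized empirical problem for f(x, xi) = |x - xi|:
# min_x (1/m) sum_i |x - xi_i| + eps / (2 R^2) * (x - x1)^2.
# eps, R, m and the distribution of xi are toy choices.
rng = np.random.default_rng(1)
m = 200
xi = rng.standard_normal(m)              # realizations of xi
eps, R, x1 = 1e-2, 1.0, 0.0

def reg_empirical(x):
    return np.mean(np.abs(x - xi)) + eps / (2 * R**2) * (x - x1) ** 2

res = minimize_scalar(reg_empirical, bounds=(-3, 3), method="bounded")
x_reg = res.x
# The penalty is small, so x_reg stays close to the empirical median,
# which minimizes the unregularized sample average.
print(x_reg, np.median(xi))
```

The regularizer's pull toward $x^1$ scales with $\varepsilon/R^2$, so the strongly convex surrogate changes the minimizer by only $O(\varepsilon)$ while enabling the better sample-size bound above.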
\subsection{Contribution and Related Work}
\hspace{0.25cm} \textbf{The SA and the SAA approaches}.
This paper is inspired by the work \cite{nemirovski2009robust}, where it is stated that the SA approach outperforms the SAA approach for a certain class of convex stochastic problems. Our aim is to show that for the Wasserstein barycenter problem this superiority can be inverted. We provide a detailed comparison by stating the complexity bounds for implementations of the SA and the SAA approaches for the Wasserstein barycenter problem. As a byproduct, we also construct a confidence interval for the barycenter defined w.r.t. entropy-regularized OT.
\textbf{Sample size.} We also estimate the number of measures needed to approximate the Fr\'{e}chet mean of a probability distribution with a given precision.
\textbf{Consistency and rates of convergence.}
The consistency of the empirical barycenter as an estimator of the true Wasserstein barycenter (defined by the notion of the Fr\'{e}chet mean), as the number of measures tends to infinity, was studied in many papers, e.g., \cite{LeGouic2017,panaretos2019statistical,Bigot2012a,rios2018bayesian},
under certain conditions on the process generating the measures.
Moreover, the authors of \cite{boissard2015distribution} provide the rate of this convergence, but under a restrictive assumption on the process (it must belong to an
{admissible family of deformations}, i.e., it must be the gradient of a convex function). Without any assumptions on the generating process, the rate of convergence was obtained in \cite{bigot2018upper}, however, only for measures with one-dimensional support. For some specific types of metrics and measures, the rates of convergence were also provided in \cite{chewi2020gradient,gouic2019fast,kroshnin2019statistical}. Our results are obtained under the condition that the measures are discrete. This condition can always be met through additional preprocessing (discretization of the measures).
\textbf{Penalization of barycenter problem.}
For a general convex (but not strongly convex) optimization problem, empirical minimization (the offline approach) may fail, despite the guaranteed success of an online approach, if no regularization is introduced \cite{shalev2009stochastic}.
The limitations of the SAA approach in the non-strongly convex case are also discussed in \cite{guigues2017non-asymptotic,shapiro2005complexity}.
Our contribution includes introducing a new regularization for the population Wasserstein barycenter problem that improves on the complexity bounds obtained with the standard squared-norm penalty \cite{shalev2009stochastic}. This regularization relies on the Bregman divergence from \cite{ben-tal2001lectures}.
\subsection{Preliminaries}
\noindent\textbf{Notations}.
Let $\Delta_n = \{ a \in \mathbb{R}_+^n \mid \sum_{l=1}^n a_l =1 \}$ be the probability simplex.
We refer to the $j$-th component of a vector $x_i$ as $[x_i]_j$.
The notation $[n]$ stands for $1,2,\dots,n$.
For two vectors $x,y$ of the same size, $x \odot y$ and $x/y$ denote the element-wise product and element-wise division, respectively.
When functions such as $\log$ or $\exp$ are applied to vectors, they act element-wise.
For some norm $\|\cdot\|$ on space $\mathcal X$, we define the dual norm $\|\cdot\|_*$ on the dual space $\mathcal X^*$ in a usual way $ \|s\|_{*} = \max\limits_{x\in \mathcal X} \{ \langle x,s \rangle : \|x\| \leq 1 \} $.
We denote by $I_n$ the identity matrix, and by $0_{n\times n}$ the zero matrix.
For a positive semi-definite matrix $A$ we denote its smallest positive eigenvalue by $\lambda^{+}_{\min}(A)$.
We write $O(\cdot)$ to indicate complexity up to constant factors; $\widetilde O(\cdot)$ additionally hides logarithmic factors.
\begin{definition}
A function $f(x,\xi):X \times \Xi \rightarrow \mathbb R$ is $M$-Lipschitz continuous in $x$ w.r.t. a norm $\|\cdot\|$ if it satisfies
\[{|}f(x,\xi)-f(y,\xi){|}\leq M\|x-y\|, \qquad \forall x,y \in X,~ \forall \xi \in \Xi.\]
\end{definition}
\begin{definition}
A function $f:X\times \Xi \rightarrow \mathbb R$ is $\gamma$-strongly convex in $x$ w.r.t. a norm $\|\cdot\|$ if it is continuously differentiable and it satisfies
\[f(x, \xi)-f(y, \xi)- \langle\nabla f(y, \xi), x-y\rangle\geq \frac{\gamma}{2}\|x-y\|^2, \qquad \forall x,y \in X,~ \forall \xi \in \Xi.\]
\end{definition}
\begin{definition}
The Fenchel--Legendre conjugate of a function $f:X \times \Xi \rightarrow \mathbb R$ w.r.t. $x$ is \[f^*(u,\xi) \triangleq \sup_{x \in X}\{\langle x,u\rangle - f(x,\xi)\}, \qquad \forall \xi \in \Xi.\]
\end{definition}
\subsection{Paper organization}
The structure of the paper is as follows.
In Section \ref{ch:population}
we give a background on
the SA and the SAA approaches and derive preliminary results.
Section \ref{sec:pen_bar} presents the comparison of the SA and the SAA approaches for the problem of Wasserstein barycenter defined w.r.t. regularized optimal transport distances. Finally, Section \ref{sec:unreg} gives the comparison of the SA and the SAA approaches for the problem of Wasserstein barycenter defined w.r.t. (unregularized) optimal transport distances.
\section{Strongly Convex Optimization Problem}
\label{ch:population}
We start with preliminary results stated for a general stochastic strongly convex optimization problem of the form
\begin{equation}\label{eq:gener_risk_min_conv}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where $f(x,\xi)$ is $\gamma$-strongly convex with respect to $x$. Let us define $ x^* = \arg\min\limits_{x\in X} {F}(x)$.
\subsection{The SA Approach: Stochastic Gradient Descent }
The classical SA algorithm for problem \eqref{eq:gener_risk_min_conv} is the stochastic gradient descent (SGD) method. We consider SGD with an inexact oracle $g_\delta(x,\xi)$ such that
\begin{equation}\label{eq:gen_delta}
\forall x \in X, \xi \in \Xi, \qquad
\|\nabla f(x,\xi) - g_\delta(x,\xi)\|_2 \leq \delta.
\end{equation}
Then the iterative formula of SGD can be written as (for $k=1,2,\dots,N$)
\begin{equation}\label{SA:implement_simple}
x^{k+1} = \Pi_{X}\left(x^k - \eta_{k} g_\delta(x^k,\xi^k) \right).
\end{equation}
Here $x^1 \in X$ is the starting point, $\Pi_X$ is the projection onto $X$, and $\eta_k$ is the stepsize.
For $f(x,\xi)$ $\gamma$-strongly convex in $x$, the stepsize $\eta_k$ can be taken as $\frac{1}{\gamma k}$ to obtain the optimal rate $O\left(\frac{1}{\gamma N}\right)$.
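A minimal numerical sketch of the update \eqref{SA:implement_simple} with this stepsize (exact gradients, i.e.\ $\delta=0$; the toy loss and all constants are our own choices): for $f(x,\xi)=\frac12(x-\xi)^2$ we have $\gamma=1$ and $x^*=\mathbb E\,\xi$, and the averaged iterate converges to it.

```python
import numpy as np

# Projected SGD with eta_k = 1 / (gamma * k) for the toy loss
# f(x, xi) = (x - xi)^2 / 2 (gamma = 1), xi ~ Uniform(0, 1), X = [-2, 2].
rng = np.random.default_rng(2)
gamma, N = 1.0, 5000
x = 0.0                                  # starting point x^1
avg = 0.0                                # running average of the iterates
for k in range(1, N + 1):
    xi = rng.uniform(0.0, 1.0)           # sample xi^k
    g = x - xi                           # exact gradient of f(., xi^k)
    x = np.clip(x - g / (gamma * k), -2.0, 2.0)   # projection onto X
    avg += (x - avg) / k

print(avg)                               # approaches x* = E[xi] = 0.5
```

For this quadratic loss the recursion with $\eta_k = 1/k$ reproduces the running sample mean, which makes the $O\!\left(\frac{1}{\gamma N}\right)$ behavior easy to see.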
A good indicator of the success of an algorithm is the \textit{regret}
\[Reg_N \triangleq \sum_{k=1}^{N} \left(f( x^k, \xi^k) - f(x^*, \xi^k)\right).\]
It measures the cumulative difference between the decisions made and the optimal decision over all rounds.
The work \cite{kakade2009generalization} gives a bound on the excess risk of the output of an online algorithm in terms of the average regret.
\begin{theorem}\citep[Theorem 2]{kakade2009generalization} \label{Th:kakade2009generalization}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$. Let $\tilde x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $
be the average of online vectors $x^1, x^2,...,x^N$.
Then with probability at least $1-4\beta\log N$
\[ F(\tilde x^N) - F(x^*) \leq \frac{Reg_N }{N} + 4\sqrt{ \frac{M^2\log(1/\beta)}{\gamma}}\frac{\sqrt{Reg_N}}{N} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}. \]
\end{theorem}
For the update rule \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$, this theorem can be specialized as follows.
\begin{theorem}\label{Th:contract_gener}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$. Let $\tilde x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $ be the average of outputs generated by iterative formula \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$. Then, with probability
at least $1-\alpha$ the following holds
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq 3\delta D + \frac{3(M^2+\delta ^2)}{N\gamma }(1+\log N) \notag\\
&+ \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(4\log N/\alpha)}{N},
\end{align*}
where $D =\max\limits_{x',x'' \in X}\|x'-x''\|_2 $ and $\delta$ is defined by \eqref{eq:gen_delta}.
\end{theorem}
\begin{proof}
The proof mainly relies on Theorem \ref{Th:kakade2009generalization} and estimating the regret for iterative formula \eqref{SA:implement_simple} with $\eta_k = \frac{1}{\gamma k}$.
From the $\gamma$-strong convexity of $f(x,\xi)$ in $x$, it follows for $x^k, x^* \in X$ that
\begin{equation*}
f(x^*, \xi^k) \geq f(x^k, \xi^k) + \langle \nabla f(x^k, \xi^k), x^*-x^k\rangle +\frac{\gamma}{2}\|x^*-x^k\|_2^2.
\end{equation*}
Adding and subtracting the term $\langle g_\delta(x^k, \xi^k), x^*-x^k\rangle$, and using the Cauchy--Schwarz inequality together with \eqref{eq:gen_delta}, we get
\begin{align}\label{str_conv_W1}
f(x^*, \xi^k) &\geq f(x^k, \xi^k) + \langle g_\delta(x^k,\xi^k), x^*-x^k\rangle +\frac{\gamma}{2}\|x^*-x^k\|_2^2 \notag \\
&\quad+ \langle \nabla f(x^k, \xi^k) -g_\delta(x^k,\xi^k), x^*-x^k\rangle \notag\\
&\geq f(x^k, \xi^k) + \langle g_\delta(x^k,\xi^k), x^*-x^k\rangle +
\frac{\gamma}{2}\|x^*-x^k\|_2^2 - \delta\|x^*-x^k\|_2.
\end{align}
From the update rule \eqref{SA:implement_simple} for $x^{k+1}$ and the non-expansiveness of the projection, we have
\begin{align*}
\|x^{k+1} - x^*\|_2^2 &= \|\Pi_{X}(x^k - \eta_{k} g_\delta(x^k,\xi^k)) - x^*\|_2^2 \notag\\
&\leq \|x^k - \eta_{k} g_\delta(x^k,\xi^k) - x^*\|_2^2 \notag\\
&= \|x^k - x^*\|_2^2 + \eta_{k}^2\| g_\delta(x^k,\xi^k)\|_2^2 -2\eta_{k}\langle g_\delta(x^k,\xi^k), x^k - x^*\rangle.
\end{align*}
From this it follows
\begin{equation*}
\langle g_\delta(x^k,\xi^k), x^k - x^*\rangle \leq \frac{1}{2\eta_{k}}(\|x^k-x^*\|^2_2 - \|x^{k+1} -x^*\|^2_2) + \frac{\eta_{k}}{2}\| g_\delta(x^k,\xi^k)\|_2^2.
\end{equation*}
Together with \eqref{str_conv_W1} we get
\begin{align*}
f(x^k, \xi^k) - f(x^*, \xi^k) &\leq \frac{1}{2\eta_{k}}(\|x^k-x^*\|^2_2 - \|x^{k+1} -x^*\|^2_2) \notag \\
&\quad-\frac{\gamma}{2}\|x^*-x^k\|_2^2+\delta\|x^*-x^k\|_2+ \frac{\eta_{k}}{2}\| g_\delta(x^k,\xi^k)\|_2^2.
\end{align*}
Summing this from $k=1$ to $N$ with $\eta_k = \frac{1}{\gamma k}$ (so that $\frac{1}{\eta_k} - \frac{1}{\eta_{k-1}} = \gamma$, with the convention $\frac{1}{\eta_0}=0$), the quadratic terms telescope and cancel, and we get
\begin{align}\label{eq:eq123}
\sum_{k=1}^{N}\left(f( x^k, \xi^k) - f(x^*, \xi^k)\right) &\leq
\frac{1}{2}\sum_{k=1}^{N}\left(\frac{1}{\eta_k} - \frac{1}{\eta_{k-1}} - {\gamma} \right)\|x^*-x^k\|_2^2 \notag\\ &\hspace{-1cm}+\delta\sum_{k=1}^{N}\|x^*-x^k\|_2+\frac{1}{2}\sum_{k=1}^{N}{\eta_{k}}\| g_\delta (x^k,\xi^k)\|_2^2 \notag\\
&\hspace{-1cm}= \delta\sum_{k=1}^{N}\|x^*-x^k\|_2 +\frac{1}{2}\sum_{k=1}^{N}{\eta_{k}}\| g_\delta (x^k,\xi^k)\|_2^2.
\end{align}
From the Lipschitz continuity of $f(x,\xi)$ w.r.t.\ $x$ it follows that $\|\nabla f(x,\xi)\|_2\leq M$ for all $x \in X, \xi \in \Xi$. Thus, using that $(a+b)^2\leq 2a^2+2b^2$ for all $a,b$, it follows that
\[
\|g_\delta(x,\xi)\|^2_2 \leq 2\|\nabla f(x,\xi)\|^2_2 + 2\delta^2 \leq 2M^2 + 2\delta^2.
\]
From this and \eqref{eq:eq123} we bound the regret as follows:
\begin{align}\label{eq:reg123}
Reg_N \triangleq \sum_{k=1}^{N}\left(f( x^k, \xi^k) - f(x^*, \xi^k)\right) &\leq \delta\sum_{k=1}^{ N}\|x^*-x^k\|_2 + (M^2 +\delta^2)\sum_{k=1}^{ N} \frac{1}{\gamma k}\notag\\
&\leq \delta D N + \frac{M^2+\delta ^2}{\gamma }(1+\log N).
\end{align}
Here the last bound uses $\|x^*-x^k\|_2 \leq D$ and the bound $\sum_{k=1}^{N}\frac{1}{k}\leq 1+\log N$ on the harmonic series.
For \eqref{eq:reg123} we can now use Theorem \ref{Th:kakade2009generalization}. First, we simplify its bound by rearranging the terms and using $\sqrt{ab} \leq \frac{a+b}{2}$:
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq \frac{Reg_N }{N} + 4\sqrt{ \frac{M^2\log(1/\beta)}{N\gamma}}\sqrt{\frac{Reg_N}{N}} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}
\notag\\
&\leq \frac{3 Reg_N }{N} + \frac{2 M^2\log(1/\beta)}{N\gamma} + \max\left\{ \frac{16M^2}{\gamma},6B \right\}\frac{\log(1/\beta)}{N}\notag\\
&= \frac{3 Reg_N }{N} + \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(1/\beta)}{N}.
\end{align*}
Then we substitute \eqref{eq:reg123} in this inequality, make the change of variable $\alpha = 4\beta\log N$, and get, with probability at least $ 1-\alpha$,
\begin{align*}
F(\tilde x^N) - F(x^*)
&\leq \frac{3\delta D }{2} + \frac{3(M^2+\delta ^2)}{N\gamma }(1+\log N) \notag\\
&+ \max\left\{ \frac{18M^2}{\gamma},6B + \frac{2M^2}{\gamma} \right\}\frac{\log(4\log N/\alpha)}{N}.
\end{align*}
\end{proof}
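For intuition, the projected SGD scheme analyzed above can be sketched numerically. The following Python snippet is a minimal illustration; the toy objective, noise level, and all variable names are our own choices, not part of the analysis. It runs the stepsize policy $\eta_k = \frac{1}{\gamma k}$ with online averaging on a strongly convex quadratic:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0                     # strong convexity parameter of f(., xi)
x_star = np.array([0.3, -0.2])  # minimizer of F(x) = E f(x, xi)

def stochastic_grad(x):
    # gradient of f(x, xi) = (gamma/2)||x - x_star||^2 + <xi, x> with E[xi] = 0
    return gamma * (x - x_star) + rng.normal(scale=0.1, size=x.shape)

N = 20000
x = np.zeros(2)
x_avg = np.zeros(2)
for k in range(1, N + 1):
    x = x - stochastic_grad(x) / (gamma * k)  # step eta_k = 1/(gamma k)
    x_avg += (x - x_avg) / k                  # running average tilde x^N

print(np.linalg.norm(x_avg - x_star))         # small, consistent with O(log N / N) in F-value
```

In this toy run the averaged iterate $\tilde x^N$ lands very close to $x^*$, matching the high-probability bound of the theorem.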
\subsection{Preliminaries on the SAA Approach }
The SAA approach replaces the objective in \eqref{eq:gener_risk_min_conv} with its sample average
\begin{equation}\label{eq:empir_risk_min_conv}
\min_{x\in X } \hat{F}(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i),
\end{equation}
where each $f(x,\xi_i)$ is $\gamma$-strongly convex in $x$.
Let us define the empirical minimizer of \eqref{eq:empir_risk_min_conv} $\hat x^* = \arg\min\limits_{x\in X} \hat{F}(x)$, and $\hat x_{\varepsilon'}$ such that
\begin{equation}\label{eq:fidelity}
\hat{F}(\hat x_{\varepsilon'}) - \hat{F}(\hat x^*) \leq \varepsilon'.
\end{equation}
The next theorem gives a bound on the excess risk for problem \eqref{eq:empir_risk_min_conv}
in the SAA approach.
\begin{theorem}\label{Th:contractSAA}
Let $f:X\times \Xi \rightarrow [0,B]$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$ in the $\ell_2$-norm. Let $\hat x_{\varepsilon'}$ satisfy \eqref{eq:fidelity} with precision $ \varepsilon' $.
Then, with probability at least $1-\alpha$ we have
\begin{align*}
F( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}.
\end{align*}
Let $\varepsilon' = O \left(\frac{\gamma\varepsilon^2}{M^2} \right)$ and $m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right)$. Then, with probability at least $1-\alpha$ the following holds
\[F( \hat x_{\varepsilon'}) - F(x^*)\leq \varepsilon \quad \text{and} \quad \|\hat x_{\varepsilon'} - x^*\|_2 \leq \sqrt{2\varepsilon/\gamma}.\]
\end{theorem}
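The SAA pipeline admits an equally small numerical sketch (again with our own toy objective and names; here the empirical minimizer is available in closed form, so the $\varepsilon'$-approximate solver of \eqref{eq:fidelity} is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 1.0
x_star = np.array([0.3, -0.2])

# f(x, xi) = (gamma/2)||x - x_star||^2 + <xi, x> with E[xi] = 0,
# so the population objective F(x) = E f(x, xi) is minimized at x_star.
m = 50000
xi = rng.normal(scale=0.1, size=(m, 2))

# Empirical minimizer: the gradient of the sample average is
# gamma*(x - x_star) + mean(xi) = 0.
x_hat = x_star - xi.mean(axis=0) / gamma

print(np.linalg.norm(x_hat - x_star))  # O(1/sqrt(m)) error
```

The distance to $x^*$ shrinks at the $O(1/\sqrt m)$ rate suggested by Theorem \ref{Th:shalev2009stochastic}.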
The proof of this theorem mainly relies on the following theorem.
\begin{theorem}\citep[Theorem 6]{shalev2009stochastic}\label{Th:shalev2009stochastic}
Let $f(x,\xi)$ be $\gamma$-strongly convex and $M$-Lipschitz w.r.t. $x$ in the $\ell_2$-norm. Then, with probability at least $1-\alpha$ the following holds
\[
F(\hat x^* ) - F(x^*) \leq \frac{4M^2}{\alpha \gamma m},
\]
where $m$ is the sample size.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Th:contractSAA}]
For any $x\in X$, the following holds
\begin{equation}\label{eq:exp_der}
F(x) - F(x^*) = F(x) - F(\hat x^*)+ F( \hat x^*)- F(x^*).
\end{equation}
From Theorem \ref{Th:shalev2009stochastic} with probability at least $1 - \alpha$ the following holds
\begin{equation*}
F( \hat x^*) - F(x^*) \leq \frac{4M^2}{\alpha \gamma m}.
\end{equation*}
Then from this and \eqref{eq:exp_der} we have with probability at least $1 - \alpha$
\begin{equation}\label{eq:eqtosub}
F(x) - F(x^*) \leq F(x) - F(\hat x^*)+ \frac{4M^2}{\alpha \gamma m}.
\end{equation}
From Lipschitz continuity of $ f(x,\xi)$ it follows that for any $x\in X, \xi\in \Xi$ the following holds
\begin{equation*}
| f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Taking the expectation of this inequality w.r.t. $\xi$ we get
\begin{equation*}
\mathbb E | f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Then we use Jensen's inequality $g\left (\mathbb E Y\right) \leq \mathbb E\, g(Y) $, which holds for a convex function $g$ and a random variable $Y$. Since the absolute value is a convex function, we get
\begin{equation*}
| \mathbb E f(x,\xi) - \mathbb E f(\hat x^*,\xi)| =| F(x) - F(\hat x^*)| \leq \mathbb E | f(x,\xi) - f(\hat x^*,\xi)| \leq M\|x-\hat x^*\|_2.
\end{equation*}
Thus, we have
\begin{equation}\label{eq:Lipsch}
| F(x) - F(\hat x^*)| \leq M\|x-\hat x^*\|_2.
\end{equation}
From strong convexity of $ f( x, \xi)$ in $x$, it follows that the average of the $f(x,\xi_i)$'s, that is $\hat F(x)$, is also $\gamma$-strongly convex in $x$. Thus, for any $x \in X$, we get
\begin{equation}\label{eq:str}
\|x-\hat x^*\|_2 \leq \sqrt{\frac{2}{\gamma } (\hat F(x) - \hat F(\hat x^*))}.
\end{equation}
Using \eqref{eq:Lipsch} and \eqref{eq:str} and taking $x=\hat x_{\varepsilon'}$ in \eqref{eq:eqtosub}, we get the first statement of the theorem
\begin{align}\label{eq_to_prove_2}
F( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M^2}{\gamma }(\hat F( \hat x_{\varepsilon'}) - \hat F(\hat x^*))} +\frac{4M^2}{\alpha\gamma m} \leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}.
\end{align}
Then from the strong convexity we have
\begin{align}\label{eq:con_reg_bar}
\| \hat x_{\varepsilon'} - x^*\|_2
&\leq \sqrt{\frac{2}{\gamma}\left( \sqrt{\frac{2M^2}{\gamma }\varepsilon'} +\frac{4M^2}{\alpha \gamma m}\right)}.
\end{align}
Equating the r.h.s. of \eqref{eq_to_prove_2} to $\varepsilon$, we get the expressions for the sample size $m$ and the auxiliary precision $\varepsilon'$. Substituting both of these expressions in \eqref{eq:con_reg_bar}, we finish the proof.
\end{proof}
\section{Non-Strongly Convex Optimization Problem}
Now we consider the non-strongly convex optimization problem
\begin{equation}\label{eq:gener_risk_min_nonconv}
\min_{x\in X \subseteq \mathbb{R}^n} F(x) \triangleq \mathbb E f(x,\xi),
\end{equation}
where $f(x,\xi)$ is Lipschitz continuous in $x$. Let us define $ x^* = \arg\min\limits_{x\in X} {F}(x)$.
\subsection{The SA Approach: Stochastic Mirror Descent}
We consider stochastic mirror descent (MD) with inexact oracle \cite{nemirovski2009robust,juditsky2012first-order,gasnikov2016gradient-free}.\footnote{By using the dual averaging scheme~\cite{nesterov2009primal-dual}, we can rewrite Alg.~\ref{Alg:OnlineMD} in the online regime \cite{hazan2016introduction,orabona2019modern} without including $N$ in the stepsize policy. Note that mirror descent and the dual averaging scheme are very close to each other \cite{juditsky2019unifying}.} For a prox-function $d(x)$ and the corresponding Bregman divergence $B_d(x,y)$, the proximal mirror descent step is
\begin{equation}\label{eq:prox_mirr_step}
x^{k+1} = \arg\min_{x\in X}\left( \eta \left\langle g_\delta(x^k,\xi^k), x\right\rangle + B_d(x,x^k)\right).
\end{equation}
We consider the simplex setup with the
prox-function $d(x) = \langle x,\log x \rangle$. Here and below, functions such as $\log$ or $\exp$ are always applied element-wise. The corresponding Bregman divergence is the Kullback--Leibler divergence
\[
{\rm KL}(x,x^1) = \langle x, \log(x/x^1)\rangle - \boldsymbol{1}^\top(x-x^1).
\]
Then the starting point is taken as $x^1 = \arg\min\limits_{x\in \Delta_n}d(x)= (1/n,...,1/n)$.
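In the simplex setup, step \eqref{eq:prox_mirr_step} has the well-known multiplicative-weights closed form $x^{k+1} \propto x^k \odot \exp\left(-\eta\, g_\delta(x^k,\xi^k)\right)$. A minimal Python sketch of one step (variable names are ours):

```python
import numpy as np

def mirror_step(x, g, eta):
    # argmin_{x' in simplex} { eta*<g, x'> + KL(x', x) } = x * exp(-eta*g) / Z
    w = x * np.exp(-eta * g)
    return w / w.sum()

x = np.full(4, 0.25)                    # x^1 = (1/n, ..., 1/n)
g = np.array([1.0, 0.0, 0.5, 2.0])      # a (sub)gradient estimate
x = mirror_step(x, g, eta=0.7)
print(x)  # probability vector; mass shifts toward coordinates with small g
```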
\begin{theorem}\label{Th:MDgener}
Let $ R^2 \triangleq {\rm KL}(x^*,x^1) \leq \log n $ and $D =\max\limits_{x',x''\in \Delta_n}\|x'-x''\|_1 = 2$. Let $f:X\times \Xi \rightarrow \mathbb R$ be $M_\infty$-Lipschitz w.r.t. $x$ in the $\ell_1$-norm. Let $\breve x^N \triangleq \frac{1}{N}\sum_{k=1}^{N}x^k $ be the average of the iterates generated by formula \eqref{eq:prox_mirr_step} with $\eta = \frac{\sqrt{2} R}{M_\infty\sqrt{N} }$. Then, with probability
at least $1-\alpha$ we have
\begin{equation*}
F(\breve x^N) - F(x^*) \leq\frac{M_\infty (3R+2D \sqrt{\log (\alpha^{-1})})}{\sqrt{2N}} +\delta D = O\left(\frac{M_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2 \delta \right).
\end{equation*}
\end{theorem}
\begin{proof}
For MD with prox-function $d(x) = \langle x,\log x\rangle $, the following holds for any $x\in \Delta_n$ \citep[Eq. 5.13]{juditsky2012first-order}
\begin{align*}
\eta\langle g_\delta (x^k,\xi^k), x^k -x \rangle
&\leq {\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\frac{\eta^2}{2}\|g_\delta(x^k,\xi^k)\|^2_\infty \notag\\
&\leq {\rm KL}(x,x^k) -{\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2.
\end{align*}
Then, adding and subtracting the terms $\langle \nabla F(x^k), x^k-x\rangle$ and $\langle \nabla f(x^k, \xi^k), x^k-x\rangle$ in this inequality and using
H\"older's inequality, we get the following
\begin{align}\label{str_conv_W2}
\eta\langle \nabla F(x^k), x^k-x\rangle
&\leq \eta\langle \nabla f(x^k,\xi^k)-g_\delta(x^k,\xi^k), x^k-x\rangle \notag \\
&+ \eta\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x\rangle + {\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2 \notag\\
&\leq \eta\delta\max_{k=1,...,N}\|x^k-x\|_1 + \eta\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x\rangle \notag \\
&+{\rm KL}(x,x^k) - {\rm KL}(x,x^{k+1}) +\eta^2M_\infty^2.
\end{align}
Then, by convexity of $F$, we have
\[
F(x^k) - F(x)\leq \langle \nabla F(x^k), x^k-x\rangle.
\]
Multiplying this by $\eta$, combining it with \eqref{str_conv_W2}, and summing for $k=1,...,N$ at $x=x^*$, we get
\begin{align}\label{eq:defFxx}
\eta\sum_{k=1}^N \left(F(x^k) - F(x^*)\right) &\leq
\eta\delta N \max_{k=1,...,N}\|x^k-x^*\|_1
+\eta\sum_{k=1}^N\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle \notag\\
&+ {\rm KL}(x^*,x^1) - {\rm KL}(x^*,x^{N+1}) + \eta^2M_\infty^2N \notag\\
&\leq \eta\delta N{D}
+\eta \sum_{k=1}^N\langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle + R^2+ \eta^2M_\infty^2N.
\end{align}
where we used ${\rm KL}(x^*,x^1) \leq R^2 $ and $\max\limits_{k=1,...,N}\|x^k-x^*\|_1 \leq D$.
Then, using convexity of $F$ (so that $F(\breve x^N)\leq \frac{1}{N}\sum_{k=1}^N F(x^k)$ by the definition of the output $\breve x^N$) and dividing \eqref{eq:defFxx} by $\eta N$, we obtain
\begin{align}\label{eq:F123xxF}
F(\breve x^N) -F(x^*)
&\leq \delta D +\frac{1}{N}\sum_{k=1}^N \langle\nabla F(x^k)- \nabla f(x^k,\xi^k) , x^k-x^*\rangle + \frac{R^2}{\eta N}+ \eta M_\infty^2.
\end{align}
Next we use the Azuma--Hoeffding inequality {\cite{jud08}} and get for all $\beta \geq 0$
\begin{equation}\label{eq:AzumaH}
\mathbb{P}\left(\sum_{k=1}^{N}\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^k-x^*\rangle \leq \beta \right)\geq 1 - \exp\left( -\frac{2\beta^2}{N(2M_\infty D)^2}\right)= 1 - \alpha.
\end{equation}
Here we used that $\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^*-x^k\rangle$ is a martingale-difference and
\begin{align*}
{\left|\langle \nabla F(x^k) -\nabla f(x^k,\xi^k), x^*-x^k\rangle \right|} &\leq \| \nabla F(x^k) -\nabla f(x^k,\xi^k)\|_{\infty} \|x^*-x^k\|_1 \notag \\
&\leq 2M_\infty \max\limits_{k=1,...,N}\|x^k-x^*\|_1 \leq 2M_\infty D.
\end{align*}
Thus, using \eqref{eq:AzumaH} for \eqref{eq:F123xxF} we have that with probability at least $1-\alpha$
\begin{equation}\label{eq:eta}
F(\breve x^N) - F(x^*) \leq \delta D +\frac{\beta}{N}+ \frac{R^2}{\eta N}+ \eta M_\infty^2.
\end{equation}
Then, expressing $\beta$ through $\alpha$ and substituting $\eta = \frac{ R}{M_\infty} \sqrt{\frac{2}{N}}$ into \eqref{eq:eta} (this choice of $\eta$ minimizes the r.h.s. of \eqref{eq:eta} up to a constant factor), we get \begin{align*}
& F(\breve x^N) - F(x^*) \leq \delta D + \frac{M_\infty D\sqrt{2\log(1/\alpha)} }{\sqrt{N} } + \frac{M_\infty R}{\sqrt{2N}}+ \frac{M_\infty R\sqrt{2}}{\sqrt{N}} \notag \\
&\leq \delta D + \frac{M_\infty (3R+2D \sqrt{\log(1/\alpha) })}{\sqrt{2N}}.
\end{align*}
Using $R=\sqrt{\log n}$ and {$D = 2$} in this inequality,
we obtain
\begin{align}\label{eq:final_est}
F(\breve x^N) - F(x^*) &\leq \frac{M_\infty (3\sqrt{\log{n}} +4 \sqrt{\log(1/\alpha)})}{\sqrt{2N}} +2\delta.
\end{align}
We raise this to the second power, use $2\sqrt{ab}\leq a+b$ (valid for all $a,b\geq 0$), and then extract the square root. We obtain the following
\begin{align*}
\sqrt{\left(3\sqrt{\log{n}} +4 \sqrt{\log(1/\alpha)}\right)^2} &= \sqrt{ 9\log{n} + 16\log(1/\alpha) +24\sqrt{\log{n}}\sqrt{\log(1/\alpha)} } \\
&\leq \sqrt{ 18\log{n} + 32\log(1/\alpha) }.
\end{align*}
Using this for \eqref{eq:final_est}, we get the statement of the theorem
\begin{align*}
F(\breve x^N) - F(x^*) &\leq \frac{M_\infty \sqrt{18\log{n} +32 \log(1/\alpha)}}{\sqrt{2N}} +2\delta = O\left(\frac{M_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2\delta \right) .
\end{align*}
\end{proof}
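The bound of Theorem \ref{Th:MDgener} can be checked empirically on a toy linear objective over the simplex. The following sketch is ours (objective, noise level, and names are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 8, 20000
c = rng.random(n)                       # F(x) = <c, x>, minimized at the smallest c_i
M_inf = 1.0                             # crude bound on ||grad||_inf for this toy F
R = np.sqrt(np.log(n))                  # R^2 = KL(x^*, x^1) <= log n
eta = np.sqrt(2.0) * R / (M_inf * np.sqrt(N))

x = np.full(n, 1.0 / n)                 # x^1 = (1/n, ..., 1/n)
x_avg = np.zeros(n)
for k in range(1, N + 1):
    g = c + rng.normal(scale=0.1, size=n)   # stochastic gradient with E[g] = c
    x *= np.exp(-eta * g)                    # entropic mirror descent step
    x /= x.sum()
    x_avg += (x - x_avg) / k                 # running average breve x^N

gap = c @ x_avg - c.min()
print(gap)  # suboptimality of breve x^N; theory predicts O(sqrt(log n / N))
```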
\subsection{Penalization in the SAA Approach}
In this section, we study the SAA approach for the non-strongly convex problem \eqref{eq:gener_risk_min_nonconv}. We regularize this problem by a penalty function $r(x,x^1)$ which is 1-strongly convex w.r.t. $x$ in the $\ell_2$-norm
\begin{equation}\label{def:gener_reg_prob}
\min_{x\in X \subseteq \mathbb{R}^n} F_\lambda(x) \triangleq \mathbb E f(x,\xi) + \lambda r(x,x^1)
\end{equation}
and we prove that the sample sizes in the SA and the SAA approaches will be equal up to logarithmic terms.
The empirical counterpart of problem \eqref{def:gener_reg_prob} is
\begin{equation}\label{eq:gen_prob_empir}
\min_{x\in X }\hat{F}_\lambda(x) \triangleq \frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \lambda r(x,x^1).
\end{equation}
Let us define $ \hat x_\lambda = \arg\min\limits_{x\in X} \hat{F}_{\lambda}(x)$.
The next lemma proves the statement from \cite{shalev2009stochastic} on the boundedness of the population sub-optimality in terms of the square root of the empirical sub-optimality.
\begin{lemma}\label{Lm:pop_sub_opt}
Let $f(x,\xi)$ be convex and $M$-Lipschitz continuous w.r.t. $x$ in the $\ell_2$-norm.
Then for any $x \in X$, with probability at least $ 1-\alpha$, the following holds
\[F_\lambda(x) - F_\lambda(x^*_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda} \left(\hat F_\lambda(x) - \hat F_\lambda(\hat x_\lambda)\right)} + \frac{4M_\lambda^2}{\alpha \lambda m},\]
where $ x^*_\lambda = \arg\min\limits_{x\in X} {F}_\lambda(x)$, $M_\lambda \triangleq M +\lambda \mathcal {R}^2$ and $\mathcal{R}^2 = r(x^*,x^1)$.
\end{lemma}
\begin{proof}
Let us define $f_\lambda(x, \xi) \triangleq f (x, \xi) + \lambda r(x,x^1) $. As $f(x, \xi)$ is $M$-Lipschitz continuous, $f_\lambda(x, \xi)$ is also Lipschitz continuous with $M_\lambda \triangleq M +\lambda \mathcal {R}^2$.
From
Jensen's inequality for the expectation, and since the absolute value is a convex function, we get that
$F_\lambda(x)$ is also $M_\lambda$-Lipschitz continuous
\begin{equation}\label{eq:MLipscht_cont}
|F_\lambda(x)- F_\lambda(\hat x_\lambda) | \leq M_\lambda\| x -\hat x_\lambda\|_2, \qquad \forall x \in X.
\end{equation}
From $\lambda$-strong convexity of the penalty term $\lambda r(x, x^1)$, we obtain that $\hat F_\lambda(x)$ is $\lambda$-strongly convex
\[
\|x-\hat x_\lambda\|_2^2\leq \frac{2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right), \qquad \forall x \in X.
\]
From this and \eqref{eq:MLipscht_cont} it follows
\begin{equation}\label{eq:sup_opt_empr}
F_\lambda(x)- F_\lambda(\hat x_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right)}.
\end{equation}
For any $x \in X$ and $ x^*_\lambda = \arg\min\limits_{x\in X} {F}_\lambda(x)$ we consider
\begin{equation}\label{eq:Fxhatx}
F_\lambda(x)- F_\lambda( x^*_\lambda) = F_\lambda(x)- F_\lambda(\hat x_\lambda) + F_\lambda(\hat x_\lambda) - F_\lambda( x^*_\lambda).
\end{equation}
From \citep[Theorem 6]{shalev2009stochastic} we have with probability at least $ 1 -\alpha$
\[ F_\lambda(\hat x_\lambda) - F_\lambda(x^*_\lambda) \leq \frac{4M_\lambda^2}{\alpha \lambda m}.\]
Using this and \eqref{eq:sup_opt_empr} for \eqref{eq:Fxhatx} we obtain with probability at least $ 1 -\alpha$
\[F_\lambda(x) - F_\lambda(x^*_\lambda) \leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(x)-\hat F_\lambda(\hat x_\lambda) \right)} + \frac{4M_\lambda^2}{\alpha \lambda m}.\]
\end{proof}
The next theorem shows that regularization eliminates the linear dependence on $n$ in the sample size of the SAA approach for a non-strongly convex objective
(see estimate \eqref{eq:SNSm}), and estimates the auxiliary precision for the regularized SAA problem \eqref{eq:aux_e_quad}.
\begin{theorem}\label{th_reg_ERM}
Let $f(x,\xi)$ be convex and $M$-Lipschitz continuous w.r.t $x$ and
let $\hat x_{\varepsilon'}$ be such that
\[
\frac{1}{m}\sum_{i=1}^m f(\hat x_{\varepsilon'},\xi_i) + \lambda r(\hat x_{\varepsilon'}, x^1) - \min_{x\in X} \left\{\frac{1}{m}\sum_{i=1}^m f(x,\xi_i) + \lambda r(x,x^1)\right\} \leq \varepsilon'.
\]
To satisfy
\[F(\hat x_{\varepsilon'}) - F(x^*)\leq \varepsilon\]
with probability at least $1-\alpha$
, we need to take $\lambda = \varepsilon/(2\mathcal{R}^2)$,
\[m = \frac{ 32 M^2\mathcal{R}^2}{\alpha \varepsilon^2}, \]
where
$\mathcal{R}^2 = r(x^*,x^1)$. The precision $\varepsilon'$ is defined as
\[\varepsilon' = \frac{\varepsilon^3}{64M^2 \mathcal{R}^2}.\]
\end{theorem}
\begin{proof}
From Lemma \ref{Lm:pop_sub_opt}
we get for $x=\hat x_{\varepsilon'}$
\begin{align}\label{eq:suboptimality}
F_\lambda(\hat x_{\varepsilon'}) - F_\lambda(x^*_\lambda) &\leq \sqrt{\frac{2M_\lambda^2}{\lambda}\left( \hat F_\lambda(\hat x_{\varepsilon'})-\hat F_\lambda(\hat x_\lambda ) \right)} + \frac{4M_\lambda^2}{\alpha \lambda m} \notag \\
&=\sqrt{\frac{2M_\lambda^2}{\lambda}\varepsilon'} + \frac{4M_\lambda^2}{\alpha \lambda m},
\end{align}
where we used the definition of $\hat x_{\varepsilon'}$ from the statement of this theorem.
Then we subtract $F(x^*)$ from both sides of \eqref{eq:suboptimality} and get
\begin{align}\label{eq:suboptimalityIm}
F_\lambda( \hat x_{\varepsilon'}) - F(x^*)
&\leq \sqrt{\frac{2M_\lambda^2\varepsilon'}{\lambda}} +\frac{4M_\lambda^2}{\alpha\lambda m} +F_\lambda(x^*_\lambda)-F(x^*).
\end{align}
Then we use
\begin{align*}
F_\lambda(x^*_\lambda) &\triangleq \min_{x\in X}\left\{ F(x)+\lambda r(x,x^1) \right\} && \notag\\
&\leq F(x^*) + \lambda r(x^*,x^1) && \text{The inequality holds for any $x \in X$}, \notag\\
&=F(x^*) +\lambda \mathcal{R}^2,
\end{align*}
where $\mathcal{R}^2 = r(x^*,x^1)$.
Then from this and \eqref{eq:suboptimalityIm} and the definition of $F_\lambda(\hat x_{\varepsilon'})$ in \eqref{def:gener_reg_prob} we get
\begin{align}\label{eq:minim+lamb}
F( \hat x_{\varepsilon'}) - F(x^*) &\leq \sqrt{\frac{2M_\lambda^2}{\lambda}\varepsilon'} + \frac{4M_\lambda^2}{\alpha \lambda m} - \lambda r(\hat x_{\varepsilon'}, x^1)+{\lambda}\mathcal{R}^2\notag \\
&\leq \sqrt{\frac{2M_\lambda^2\varepsilon'}{\lambda}} +\frac{4M_\lambda^2}{\alpha\lambda m} +{\lambda}\mathcal{R}^2.
\end{align}
Assuming $M \gg \lambda \mathcal{R}^2 $ and choosing $\lambda =\varepsilon/ (2\mathcal{R}^2)$ in \eqref{eq:minim+lamb}, we get the
following
\begin{equation}\label{offline23}
F( \hat x_{\varepsilon'}) - F(x^*) \leq \sqrt{\frac{4M^2\mathcal R^2\varepsilon'}{\varepsilon}} +\frac{8M^2\mathcal R^2}{\alpha m \varepsilon} +\varepsilon/2.
\end{equation}
Equating the first and the second terms in the r.h.s. of \eqref{offline23} to $\varepsilon/4$, we obtain the remaining statements of the theorem, including $ F( \hat x_{\varepsilon'}) - F(x^*) \leq \varepsilon.$
\end{proof}
\section{Fr\'{e}chet Mean with respect to Entropy-Regularized Optimal Transport }\label{sec:pen_bar}
In this section, we consider the problem of finding the population barycenter of independent identically distributed random discrete measures. We define the population barycenter of a distribution $\mathbb P$ with respect to
entropy-regularized transport distances as
\begin{equation}\label{def:populationWBFrech}
\min_{p\in \Delta_n}
W_\gamma(p)\triangleq
\mathbb E_q W_\gamma(p,q), \qquad q \sim \mathbb P.
\end{equation}
\subsection{Properties of Entropy-Regularized Optimal Transport}
Entropic regularization of transport distances \cite{cuturi2013sinkhorn} improves their statistical properties \cite{klatt2020empirical,bigot2019central} and reduces their computational complexity. Entropic regularization has shown good results in generative models \cite{genevay2017learning},
multi-label learning \cite{frogner2015learning}, dictionary learning \cite{rolet2016fast}, image processing \cite{cuturi2016smoothed,rabin2015convex}, neural imaging \cite{gramfort2015fast}.
Let us first recall the optimal transport problem between histograms $p,q \in \Delta_n$ with cost matrix $C\in \mathbb R_{+}^{n\times n}$
\begin{equation}\label{eq:OTproblem}
W(p,q) \triangleq \min_{\pi \in U(p,q)} \langle C, \pi \rangle,
\end{equation}
where
\[U(p,q) \triangleq\{ \pi\in \mathbb R^{n\times n}_+: \pi {\mathbf 1} =p, \pi^T {\mathbf 1} = q\}.\]
\begin{remark}[Connection with the $\rho$-Wasserstein distance]
When for $\rho\geq 1$, $C_{ij} =\mathtt d(x_i, x_j)^\rho$ in \eqref{eq:OTproblem}, where $\mathtt d(x_i, x_j)$ is a distance on support points $x_i, x_j$ of space $X$, then $W(p,q)^{1/\rho}$ is known as the $\rho$-Wasserstein distance on $\Delta_n$.
\end{remark}
Nevertheless, all the results of this thesis are based only on the assumptions that the matrix $C \in \mathbb R_+^{n\times n}$ is symmetric and non-negative. Thus, the optimal transport problem defined in \eqref{eq:OTproblem} is more general than the Wasserstein distances.
Following \cite{cuturi2013sinkhorn}, we introduce entropy-regularized optimal transport problem
\begin{align}\label{eq:wass_distance_regul}
W_\gamma (p,q) &\triangleq \min_{\pi \in U(p,q)} \left\lbrace \left\langle C,\pi\right\rangle - \gamma E(\pi)\right\rbrace,
\end{align}
where $\gamma>0$ and $E(\pi) \triangleq -\langle \pi,\log \pi \rangle $ is the entropy. Since $E(\pi)$ is 1-strongly concave on $\Delta_n$ in the $\ell_1$-norm, the objective in \eqref{eq:wass_distance_regul} is $\gamma$-strongly convex with respect to $\pi$ in the $\ell_1$-norm on $\Delta_n$, and hence problem \eqref{eq:wass_distance_regul} has a unique optimal solution. Moreover, $W_\gamma (p,q)$ is $\gamma$-strongly convex with respect to $p$ in the $\ell_2$-norm on $\Delta_n$ \citep[Theorem 3.4]{bigot2019data}.
One particular advantage of the entropy-regularized optimal transport is a closed-form representation for its dual function~\cite{agueh2011barycenters,cuturi2016smoothed} defined by the Fenchel--Legendre transform of $W_\gamma(p,q)$ as a function of $p$
\begin{align}\label{eq:FenchLegdef}
W_{\gamma, q}^*(u) &= \max_{ p \in \Delta_n}\left\{ \langle u, p \rangle - W_{\gamma}(p, q) \right\} = \gamma\left(E(q) + \left\langle q, \log (K \beta) \right\rangle \right)\notag\\
&= \gamma\left(-\langle q,\log q\rangle + \sum_{j=1}^n [q]_j \log\left( \sum_{i=1}^n \exp\left(([u]_i - C_{ji})/\gamma\right) \right)\right)
\end{align}
where $\beta = \exp( {u}/{\gamma}) $, \mbox{$K = \exp( {-C}/{\gamma }) $} and $[q]_j$ is $j$-th component of vector $q$. Functions such as $\log$ or $\exp$ are always applied element-wise for vectors.
Hence, the gradient of dual function $W_{\gamma, q}^*(u)$ is also represented in a closed-form \cite{cuturi2016smoothed}
\begin{equation*}
\nabla W^*_{\gamma,q} (u)
= \beta \odot \left(K \cdot {q}/({K \beta}) \right) \in \Delta_n,
\end{equation*}
where symbols $\odot$ and $/$ stand for the element-wise product and element-wise division respectively.
This can be also written as
\begin{align}\label{eq:cuturi_primal}
\forall l =1,...,n \qquad [\nabla W^*_{\gamma,q} (u)]_l = \sum_{j=1}^n [q]_j \frac{\exp\left(([u]_l-C_{lj})/\gamma\right) }{\sum_{i=1}^n\exp\left(([u]_i-C_{ji})/\gamma\right)}.
\end{align}
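The closed form \eqref{eq:cuturi_primal} is cheap to evaluate. The following numpy sketch (names are ours) also checks that, for a symmetric cost matrix, the gradient indeed lies in $\Delta_n$:

```python
import numpy as np

def grad_dual(u, q, C, gamma):
    # nabla W*_{gamma,q}(u) = beta * (K @ (q / (K @ beta))),
    # with beta = exp(u/gamma) and K = exp(-C/gamma)
    beta = np.exp(u / gamma)
    K = np.exp(-C / gamma)
    return beta * (K @ (q / (K @ beta)))

rng = np.random.default_rng(2)
n = 3
C = rng.random((n, n)); C = (C + C.T) / 2.0     # symmetric non-negative cost
q = np.array([0.2, 0.5, 0.3])
u = rng.normal(size=n)

p = grad_dual(u, q, C, gamma=0.5)
print(p.sum())  # 1.0: the gradient is a point of the simplex
```

For symmetric $K$, summing \eqref{eq:cuturi_primal} over $l$ makes the inner fractions telescope, so the coordinates sum to $\sum_j [q]_j = 1$.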
The dual representation of $W_\gamma (p,q) $ is
\begin{align}\label{eq:dual_Was}
W_{\gamma}(p,q) &= \min_{\pi \in U(p,q) }\sum_{i,j=1}^n\left( C_{ij}\pi_{ij} + \gamma \pi_{ij}\log \pi_{ij} \right) \notag \\
&=\max_{u, \nu \in \mathbb R^n} \left\{ \langle u,p\rangle + \langle\nu,q\rangle - \gamma\sum_{i,j=1}^n\exp\left( ([u]_i+[\nu]_j -C_{ij})/\gamma -1 \right) \right\} \\
&=\max_{u \in \mathbb R^n}\left\{ \langle u,p\rangle -
\gamma\sum_{j=1}^n [q]_j\log\left(\frac{1}{[q]_j}\sum_{i=1}^n\exp\left( ([u]_i -C_{ij})/\gamma \right) \right)
\right\}. \notag
\end{align}
Any solution $\begin{pmatrix}
u^*\\
\nu^*
\end{pmatrix}$ of \eqref{eq:dual_Was} is a subgradient
of $ W_{\gamma}(p,q)$ \citep[Proposition 4.6]{peyre2019computational} \begin{equation}\label{eq:nabla_wass_lagrang}
\nabla W_\gamma(p,q) = \begin{pmatrix}
u^*\\
\nu^*
\end{pmatrix}.
\end{equation} We consider $u^*$ and $\nu^*$ such that
$\langle u^*, {\mathbf 1}\rangle = 0$ and $\langle \nu^*, {\mathbf 1}\rangle = 0$ ($u^*$ and $\nu^*$ are determined up to an additive constant).
The next theorem \cite{bigot2019data} describes the Lipschitz continuity of $W_\gamma (p,q)$ in $p$ on probability simplex $\Delta_n$ restricted to
\[\Delta^\rho_n = \left\{p\in \Delta_n : \min_{i \in [n]}p_i \geq \rho \right\},\]
where $0<\rho<1$ is an arbitrary small constant.
\begin{theorem}\citep[Theorem 3.4, Lemma 3.5]{bigot2019data}
\label{Prop:wass_prop}
\begin{itemize}
\item For any $q \in \Delta_n$,
$W_\gamma (p,q)$ is $\gamma$-strongly convex w.r.t. $p$ in the $\ell_2$-norm.
\item For any $q \in \Delta_n$, $p \in \Delta^\rho_n$ and $0<\rho<1$,
$\|\nabla_p W_\gamma (p,q)\|_2 \leq M$, where
\[M = \sqrt{\sum_{j=1}^n\left( 2\gamma\log n +\inf_{i\in [n]}\sup_{l \in [n]} |C_{jl} -C_{il}| -\gamma\log \rho \right)^2}. \]
\end{itemize}
\end{theorem}
We can roughly take $M = O(\sqrt n\|C\|_\infty)$: since $C_{ij} > 0$ for all $i,j\in [n]$, we get
\begin{align*}
M &\stackrel{\text{\cite{bigot2019data}}}{=} O\left(\sqrt{\sum_{j=1}^n\left( \inf_{i\in [n]}\sup_{l \in [n]} |C_{jl} -C_{il}| \right)^2} \right) \\
&= O \left(\sqrt{\sum_{j=1}^n \sup_{l \in [n]} C_{jl}^2} \right)= O\left( \sqrt{n} \sup_{j,l \in [n]} C_{jl} \right)
= O\left( \sqrt{n}\sup_{j\in [n]}\sum_{l \in [n]}C_{jl} \right)= O \left( \sqrt{n}\|C\|_\infty\right).
\end{align*}
Thus, we suppose that $W_\gamma(p,q)$ and $W(p,q)$ are Lipschitz continuous with almost the same Lipschitz constant $M$ in the $\ell_2$-norm on $\Delta_n^\rho$.
Moreover,
by the same arguments,
for the Lipschitz continuity in the $\ell_1$-norm: $\|\nabla_p W_\gamma (p,q)\|_\infty \leq M_\infty$, we can roughly estimate $M_\infty = O(\|C\|_\infty)$ by taking maximum instead of the square root of the sum.
In what follows, we use Lipschitz continuity of $W_\gamma(p,q)$ and $W(p,q)$ for measures from $\Delta_n$, keeping in mind that adding some noise and normalizing the measures makes them belong to $\Delta_n^\rho$. We also notice that if the measures are from the interior of $\Delta_n$, then their barycenter also lies in the interior of $\Delta_n$.
\subsection{The SA Approach: Stochastic Gradient Descent}
For problem \eqref{def:populationWBFrech}, as a particular case of problem \eqref{eq:gener_risk_min}, the stochastic gradient descent method can be used. From Eq. \eqref{eq:nabla_wass_lagrang}, it follows that an approximation for the gradient of $W_\gamma(p,q)$ with respect to $p$ can be calculated by the Sinkhorn algorithm \cite{altschuler2017near-linear,peyre2019computational,dvurechensky2018computational}
through computing the dual variable $u$ with $\delta$-precision
\begin{equation}\label{inexact}
\|\nabla_p W_\gamma(p,q) - \nabla_p^\delta W_\gamma(p,q) \|_2\leq \delta, \quad \forall q\in \Delta_n.
\end{equation}
Here $\nabla_p^\delta W_\gamma(p,q)$ denotes an inexact stochastic subgradient of $ W_\gamma(p,q)$ with respect to $p$.
Algorithm \ref{Alg:OnlineGD} combines stochastic gradient descent
given by iterative formula \eqref{SA:implement_simple} for $\eta_k = \frac{1}{\gamma k}$ with Sinkhorn algorithm (Algorithm \ref{Alg:SinkhWas})
and Algorithm \ref{Alg:EuProj} making the projection onto the simplex $\Delta_n$.
\begin{algorithm}[ht!]
\caption{Sinkhorn's algorithm \cite{peyre2019computational} for calculating $ \nabla_p^\delta W_\gamma(p^k,q^k) $}
\label{Alg:SinkhWas}
\begin{algorithmic}[1]
\Procedure{Sinkhorn}{$p,q, C, \gamma$}
\State $a \gets (1/n,...,1/n)$, $ b \gets (1/n,...,1/n)$
\State $K \gets \exp (-C/\gamma)$
\While{\rm not converged}
\State $a \gets {p}/(Kb)$
\State $ b \gets {q }/(K^\top a)$
\EndWhile
\State \textbf{return} $ \gamma \log(a)$\Comment{Sinkhorn scaling $ a = e^{u/\gamma}$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
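A direct numpy transcription of Algorithm \ref{Alg:SinkhWas} (a fixed iteration budget replaces the convergence test; the budget and names are our own choices), together with a check that the recovered plan matches the marginals:

```python
import numpy as np

def sinkhorn(p, q, C, gamma, n_iter=2000):
    # Sinkhorn iterations; returns the scalings a, b and the kernel K.
    # The dual variable of the text is u = gamma * log(a).
    K = np.exp(-C / gamma)
    a = np.full(len(p), 1.0 / len(p))
    b = np.full(len(q), 1.0 / len(q))
    for _ in range(n_iter):
        a = p / (K @ b)
        b = q / (K.T @ a)
    return a, b, K

rng = np.random.default_rng(3)
n = 4
C = rng.random((n, n)); C = (C + C.T) / 2.0
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.full(n, 0.25)

a, b, K = sinkhorn(p, q, C, gamma=0.2)
pi = a[:, None] * K * b[None, :]          # transport plan diag(a) K diag(b)
print(np.abs(pi.sum(axis=1) - p).max())   # row marginal error, tiny
print(np.abs(pi.sum(axis=0) - q).max())   # column marginal error, tiny
```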
\begin{algorithm}[ht!]
\caption{Euclidean Projection $\Pi_{\Delta_n}(p) = \arg\min\limits_{v\in \Delta_n}\|p-v\|_2$ onto Simplex $\Delta_n$ \cite{duchi2008efficient}}
\label{Alg:EuProj}
\begin{algorithmic}[1]
\Procedure{Projection}{$w\in \mathbb R^n$}
\State Sort components of $w$ in decreasing manner: $r_1\geq r_2 \geq ... \geq r_n$.
\State Find $\rho = \max\left\{ j \in [n]: r_j - \frac{1}{j}\left(\sum^{j}_{i=1}r_i - 1\right) > 0 \right\}$
\State Define
$\theta = \frac{1}{\rho}(\sum^{\rho}_{i=1}r_i - 1)$
\State For all $i \in [n]$, define $p_i = \max\{w_i - \theta, 0\}$.
\State \textbf{return} $p \in \Delta_n$
\EndProcedure
\end{algorithmic}
\end{algorithm}
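Algorithm \ref{Alg:EuProj} in numpy (a direct transcription; note that $\rho$ here is the rank index of \cite{duchi2008efficient}, unrelated to the constant defining $\Delta_n^\rho$):

```python
import numpy as np

def project_simplex(w):
    # Euclidean projection of w onto the probability simplex.
    r = np.sort(w)[::-1]                 # components in decreasing order
    css = np.cumsum(r) - 1.0             # sum_{i<=j} r_i - 1
    j = np.arange(1, len(w) + 1)
    rho = j[r - css / j > 0].max()       # largest j with r_j - (sum_{i<=j} r_i - 1)/j > 0
    theta = css[rho - 1] / rho
    return np.maximum(w - theta, 0.0)

p = project_simplex(np.array([0.6, 1.1, -0.4, 0.2]))
print(p)  # [0.25 0.75 0. 0.] -- nonnegative and sums to 1
```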
\begin{algorithm}[ht!]
\caption{Projected Online Stochastic Gradient Descent for WB (PSGDWB)}
\label{Alg:OnlineGD}
\begin{algorithmic}[1]
\Require starting point $p^1 \in \Delta_n$, realization $q^1$, $\delta$, $\gamma$.
\For{$k= 1,2,3,\dots$}
\State $\eta_{k} = \frac{1}{\gamma k}$
\State $\nabla_p^\delta W_\gamma(p^k,q^k) \gets$ \textsc{Sinkhorn}$(p^k,q^k, C, \gamma)$ or the accelerated Sinkhorn \cite{guminov2019accelerated}
\State $p^{(k+1)/2} \gets p^k - \eta_{k} \nabla_p^\delta W_\gamma(p^k,q^k)$
\State $p^{k+1} \gets$ \textsc{Projection}$(p^{(k+1)/2})$
\State Sample $q^{k+1}$
\EndFor
\Ensure $p^1,p^2, p^3...$
\end{algorithmic}
\end{algorithm}
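Putting the pieces together, a compact sketch of the PSGDWB loop (our own helper names and parameters; the Sinkhorn dual variable, centered so that $\langle u, \mathbf 1\rangle = 0$, serves as the inexact gradient, and a small smoothing keeps the iterate strictly inside the simplex):

```python
import numpy as np

def sinkhorn_dual(p, q, C, gamma, n_iter=500):
    # u = gamma*log(a): inexact gradient of W_gamma(p, q) w.r.t. p
    K = np.exp(-C / gamma)
    b = np.ones_like(q)
    for _ in range(n_iter):
        a = p / (K @ b)
        b = q / (K.T @ a)
    u = gamma * np.log(a)
    return u - u.mean()                  # normalization <u, 1> = 0

def project_simplex(w):
    # Euclidean projection onto the simplex (Duchi et al., 2008)
    r = np.sort(w)[::-1]
    css = np.cumsum(r) - 1.0
    j = np.arange(1, len(w) + 1)
    rho = j[r - css / j > 0].max()
    return np.maximum(w - css[rho - 1] / rho, 0.0)

rng = np.random.default_rng(4)
n, gamma = 5, 0.5
C = rng.random((n, n)); C = (C + C.T) / 2.0

p = np.full(n, 1.0 / n)                  # p^1
p_avg = np.zeros(n)
for k in range(1, 301):
    q = rng.dirichlet(np.ones(n))        # sample measure q^k
    p_int = (p + 1e-6) / (p + 1e-6).sum()    # stay strictly inside the simplex
    u = sinkhorn_dual(p_int, q, C, gamma)    # inexact stochastic gradient
    p = project_simplex(p - u / (gamma * k)) # eta_k = 1/(gamma k), then projection
    p_avg += (p - p_avg) / k                 # running average tilde p^N

print(p_avg.sum())  # 1.0: the averaged iterate stays in the simplex
```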
For Algorithm \ref{Alg:OnlineGD} and problem \eqref{def:populationWBFrech}, Theorem \ref{Th:contract_gener} can be specified as follows
\begin{theorem}\label{Th:contract}
Let $\tilde p^N \triangleq \frac{1}{N}\sum_{k=1}^{N}p^k $ be the average of $N$ online outputs of Algorithm \ref{Alg:OnlineGD} run with $\delta$. Then, with probability
at least $1-\alpha$ the following holds
\begin{equation*}
W_{\gamma}(\tilde p^N) - W_{\gamma}(p^*_{\gamma})
= O\left(\frac{M^2\log(N/\alpha)}{\gamma N} + \delta \right),
\end{equation*}
where $p^*_{\gamma} \triangleq \arg\min\limits_{p\in \Delta_n}
W_\gamma(p)$.
Let Algorithm \ref{Alg:OnlineGD} run with $\delta = O\left(\varepsilon\right)$ and $
N =\widetilde O \left( \frac{M^2}{\gamma \varepsilon} \right) = \widetilde O \left( \frac{n\|C\|_\infty^2}{\gamma \varepsilon} \right)
$. Then, with probability
at least $1-\alpha$
\begin{equation*}
W_{\gamma}(\tilde p^N) -W_{\gamma}(p^*_{\gamma}) \leq \varepsilon \quad \text{and} \quad \|\tilde p^N - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}.
\end{equation*}
The total complexity of
Algorithm \ref{Alg:OnlineGD}
is
\begin{align*}
\widetilde O\left( \frac{n^3\|C\|_\infty^2}{\gamma\varepsilon}\min\left\{ \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\kappa \varepsilon^2} \right) \right), \sqrt{\frac{n \|C\|^2_{\infty}}{ \kappa\gamma \varepsilon^2}} \right\} \right),
\end{align*}
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$.
\end{theorem}
\begin{proof}
We estimate the co-domain (image) of $W_\gamma(p,q)$
\begin{align*}
\max_{p,q \in \Delta_n} W_\gamma (p,q)
&= \max_{p,q \in \Delta_n} \min_{ \substack{\pi \in \mathbb R^{n\times n}_+, \\ \pi {\mathbf 1} =p, \\ \pi^T {\mathbf 1} = q}} ~ \sum_{i,j=1}^n (C_{ij}\pi_{ij}+\gamma\pi_{ij}\log \pi_{ij})\notag\\
&\leq \max_{\substack{\pi \in \mathbb R^{n\times n}_+, \\ \sum_{i,j=1}^n \pi_{ij}=1}} \sum_{i,j=1}^n(C_{ij}\pi_{ij}+\gamma\pi_{ij}\log \pi_{ij}) \leq \|C\|_\infty.
\end{align*}
Therefore, $W_\gamma(p,q): \Delta_n\times \Delta_n\rightarrow \left[-2\gamma\log n, \|C\|_\infty\right]$.
Then we apply Theorem \ref{Th:contract_gener} with $B =\|C\|_\infty $ and $D =\max\limits_{p',p''\in \Delta_n}\|p'-p''\|_2 = \sqrt{2}$, and directly get
\begin{equation*}
W_{\gamma}(\tilde p^N) - W_{\gamma}(p^*_{\gamma})
= O\left(\frac{M^2\log(N/\alpha)}{\gamma N} + \delta \right).
\end{equation*}
Equating each term in the r.h.s. of this bound to $\varepsilon/2$ and using $M=O(\sqrt n \|C\|_\infty)$, we get the expressions for $N$ and $\delta$. The statement
$\|\tilde p^N - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}$
follows directly from strong convexity of $W_\gamma(p,q)$ and $W_\gamma(p)$.
The proof of the algorithm complexity follows from the complexity
of Sinkhorn's algorithm.
To state it, we first define $\varepsilon'$ as the accuracy in function value of the inexact solution $u$ of the maximization problem in \eqref{eq:dual_Was}.
Using this,
we formulate the number of iterations of Sinkhorn's algorithm
\cite{franklin1989scaling,carlier2021linear,kroshnin2019complexity,stonyakin2019gradient}
\begin{align}\label{eq:sink}
\widetilde O \left( \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\varepsilon'} \right) \right)\right).
\end{align}
For the accelerated Sinkhorn's algorithm, the number of iterations improves to \cite{guminov2019accelerated}
\begin{equation}\label{eq:accel}
\widetilde{O} \left(\sqrt{\frac{n \|C\|^2_\infty}{\gamma \varepsilon'}} \right).
\end{equation}
Here $\varepsilon'$ is the accuracy in the function value, which is the expression
$ \langle u,p\rangle + \langle\nu,q\rangle - \gamma\sum_{i,j=1}^n\exp\left( {(-C_{ji}+u_i+\nu_j)}/{\gamma} -1 \right)$ under the maximum in \eqref{eq:dual_Was}.
By strong convexity of this objective on the subspace orthogonal to the eigenvector $\boldsymbol 1_n$ (which corresponds to the zero eigenvalue), it follows that
\begin{equation}\label{eq:str_k}
\varepsilon'\geq \frac{\gamma}{2}\|u - u^*\|^2_2 = \frac{\kappa}{2}\delta,
\end{equation}
where $\kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$. From \citep[Proposition A.2.]{bigot2019data}, for the eigenvalue of $\nabla^2 W^*_{\gamma,q}(u^*)$ it holds that $0=\lambda_n\left(\nabla^2 W_{\gamma, q}^*(u^*)\right) < \lambda_k\left(\nabla^2 W_{\gamma, q}^*(u^*)\right) \text{ for all } k=1,...,n-1$. Inequality \eqref{eq:str_k} holds due to $\nabla^\delta_p W_\gamma(p,q) := u$ in Algorithm \ref{Alg:OnlineGD} and $\nabla_p W_\gamma(p,q) \triangleq u^*$ in \eqref{eq:nabla_wass_lagrang}.
Multiplying both estimates \eqref{eq:sink} and \eqref{eq:accel} by
the complexity $O(n^2)$ of each iteration of the (accelerated) Sinkhorn's algorithm and by
the number of iterations (measures) $
N =\widetilde O \left( \frac{M^2}{\gamma \varepsilon} \right)$ of Algorithm \ref{Alg:OnlineGD},
and
taking the minimum, we get the last statement of the theorem.
\end{proof}
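For concreteness, the inexact gradient oracle used in Algorithm \ref{Alg:OnlineGD} can be sketched as follows (Python with NumPy; the function name and iteration budget are ours, the latter standing in for a proper stopping rule for Sinkhorn's algorithm). It returns the dual potential $u$ normalized so that $\langle u,\mathbf 1\rangle =0$, which plays the role of $\nabla^\delta_p W_\gamma(p,q)$, up to the additive normalization of the dual:

```python
import numpy as np

def sinkhorn_potential(p, q, C, gamma, n_iter=2000):
    """Inexact gradient of W_gamma(., q) at p: run Sinkhorn scaling
    iterations for the dual of entropy-regularized OT and return the
    potential u normalized so that <u, 1> = 0."""
    K = np.exp(-C / gamma)       # Gibbs kernel
    a = np.ones_like(p)
    for _ in range(n_iter):
        b = q / (K.T @ a)        # match the second marginal
        a = p / (K @ b)          # match the first marginal
    u = gamma * np.log(a)
    return u - u.mean()          # project onto <u, 1> = 0

rng = np.random.default_rng(0)
n = 20
C = rng.random((n, n))
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()
u = sinkhorn_potential(p, q, C, gamma=0.1)
```

In Algorithm \ref{Alg:OnlineGD}, each incoming measure $q^k$ is handled by one such call followed by a projected gradient step.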
Next, we study the practical convergence of projected stochastic gradient descent (Algorithm \ref{Alg:OnlineGD}).
Using the fact that the true Wasserstein barycenter of one-dimensional Gaussian measures
has closed form expression for the mean and the variance \cite{delon2020wasserstein}, we study the convergence to the true barycenter of
the generated truncated Gaussian measures. Figure \ref{fig:gausbarSGD} illustrates the convergence in the $2$-Wasserstein distance within 40 seconds.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{images/projectedSGD.png}
\caption{Convergence of projected stochastic gradient descent to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarSGD}
\end{figure}
\subsection{The SAA Approach }
The empirical counterpart of problem \eqref{def:populationWBFrech} is the (empirical) Wasserstein barycenter problem
\begin{equation}\label{EWB_unregFrech}
\min_{p\in \Delta_n} \frac{1}{m}\sum_{i=1}^m W_\gamma(p,q_i),
\end{equation}
where $q_1, q_2,...,q_m$ are realizations of a random variable with distribution $\mathbb P$.
Let us define $\hat p_\gamma^m \triangleq \arg \min\limits_{p\in \Delta_n}{\frac{1}{m}}\sum_{i=1}^m W_\gamma(p,q_i)$ and its $\varepsilon'$-approximation $\hat p_{\varepsilon'}$ such that
\begin{equation}\label{eq:fidelity_wass}
\frac{1}{m} \sum_{i=1}^m W_{\gamma}( \hat p_{\varepsilon'}, q_i) - \frac{1}{m} \sum_{i=1}^m W_{\gamma}(\hat p^m_{\gamma}, q_i) \leq \varepsilon'.
\end{equation}
For instance, $\hat p_{\varepsilon'}$ can be calculated by the IBP algorithm \cite{benamou2015iterative} or the accelerated IBP algorithm \cite{guminov2019accelerated}.
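A minimal sketch of the (non-accelerated) IBP iteration of \cite{benamou2015iterative} for problem \eqref{EWB_unregFrech} with uniform weights may look as follows (Python/NumPy; the function name, grid, and parameters are ours, and a fixed iteration budget replaces a proper stopping criterion):

```python
import numpy as np

def ibp_barycenter(Q, C, gamma, n_iter=1000):
    """Iterative Bregman Projections for the entropic barycenter of the
    measures stored in the rows of Q (uniform weights 1/m)."""
    m, n = Q.shape
    K = np.exp(-C / gamma)                   # shared Gibbs kernel
    A = np.ones((m, n))                      # row scalings a_i
    for _ in range(n_iter):
        B = Q / (A @ K)                      # enforce second marginals q_i
        KB = B @ K.T                         # rows are K b_i
        p = np.exp(np.log(A * KB).mean(axis=0))  # geometric mean of first marginals
        A = p / KB                           # enforce the common first marginal p
    return p / p.sum()

# barycenter of two mirror-image measures on a grid with quadratic cost
n = 30
t = np.linspace(0.0, 1.0, n)
C = (t[:, None] - t[None, :]) ** 2
q1 = np.exp(-((t - 0.3) ** 2) / 0.01); q1 /= q1.sum()
p_bar = ibp_barycenter(np.stack([q1, q1[::-1]]), C, gamma=0.02)
```

By symmetry of the inputs and the cost, the computed barycenter is symmetric under reversal of the grid, which gives a simple sanity check.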
The next theorem specifies Theorem \ref{Th:contractSAA} for the Wasserstein barycenter problem \eqref{EWB_unregFrech}.
\begin{theorem}\label{Th:contract2}
Let $\hat p_{\varepsilon'}$ satisfy \eqref{eq:fidelity_wass}.
Then, with probability at least $1-\alpha$
\begin{align*}
W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)
&\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m},
\end{align*}
where $p^*_{\gamma} \triangleq \arg\min\limits_{p\in \Delta_n}
W_\gamma(p)$.
Let $\varepsilon' = O \left(\frac{\varepsilon^2\gamma}{n\|C\|_\infty^2} \right)$ and $m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right) =O\left( \frac{n\|C\|_\infty^2}{\alpha \gamma \varepsilon} \right)$. Then, with probability at least $1-\alpha$
\[W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)\leq \varepsilon \quad \text{and} \quad \|\hat p_{\varepsilon'} - p^*_{\gamma}\|_2 \leq \sqrt{2\varepsilon/\gamma}.\]
The total complexity of the accelerated IBP computing $\hat p_{\varepsilon'}$ is
\begin{equation*}
\widetilde O\left(\frac{n^4\|C\|_\infty^4}{\alpha \gamma^2\varepsilon^2} \right).
\end{equation*}
\end{theorem}
\begin{proof}
From Theorem \ref{Th:contractSAA} we get the first statement of the theorem
\[ W_{\gamma}( \hat p_{\varepsilon'}) - W_{\gamma}(p_{\gamma}^*)
\leq \sqrt{\frac{2M^2}{\gamma}\varepsilon'} +\frac{4M^2}{\alpha\gamma m}. \]
From \cite{guminov2019accelerated},
the complexity of the accelerated IBP is
\[
\widetilde O\left(\frac{mn^2\sqrt n\|C\|_\infty}{\sqrt{\gamma \varepsilon'}} \right).
\]
Substituting the expressions for $m$ and $\varepsilon'$ from Theorem \ref{Th:contractSAA},
\[\varepsilon' = O \left(\frac{\varepsilon^2 \gamma}{M^2} \right), \qquad m = O\left( \frac{M^2}{\alpha \gamma \varepsilon} \right)\]
into this bound, we obtain the final statement of the theorem.
\end{proof}
Next, we study the practical convergence of the Iterative Bregman Projections on truncated Gaussian measures.
Figure \ref{fig:gausbarIBP} illustrates the convergence of the barycenter calculated by the IBP algorithm to the true barycenter of Gaussian measures in the $2$-Wasserstein distance within 10 seconds. For the convergence to the true barycenter w.r.t. the $2$-Wasserstein distance in the SAA approach, we refer to
\cite{boissard2015distribution}; however, considering the convergence in the $\ell_2$-norm (Theorem \ref{Th:contract2}) allows us to obtain a better convergence rate than the bounds for the $2$-Wasserstein distance.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.5\textwidth]{images/ibpSAA.png}
\caption{Convergence of the Iterative Bregman Projections to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarIBP}
\end{figure}
\subsection{Comparison of the SA and the SAA for the WB Problem}
Now we compare the complexity bounds for the SA and the SAA implementations solving problem \eqref{def:populationWBFrech}.
For brevity, we omit the high-probability details since we can fix $\alpha$ (say $\alpha = 0.05$) in all bounds.
Moreover, based on \cite{shalev2009stochastic}, we assume that in fact all bounds of this paper have a logarithmic dependence on $\alpha$, which is hidden in $\widetilde{O}(\cdot)$ \cite{feldman2019high,klochkov2021stability}.
\begin{table}[ht!]
\caption{Total complexity of the SA and the SAA implementations for $ \min\limits_{p\in \Delta_n}\mathbb E_q W_\gamma(p,q)$. }
\small
\hspace{-0.5cm}
{ \begin{tabular}{ll}
\toprule
\textbf{Algorithm} & \textbf{Complexity} \\
\midrule
Projected SGD (SA) & $ \widetilde O\left( \frac{n^3\|C\|^2_\infty}{\gamma\varepsilon} \min\left\{ \exp\left( \frac{\|C\|_{\infty}}{\gamma} \right) \left( \frac{\|C\|_{\infty}}{\gamma} + \log\left(\frac{\|C\|_{\infty}}{\kappa \varepsilon^2} \right) \right), \sqrt{\frac{n \|C\|^2_{\infty}}{ \kappa\gamma \varepsilon^2}} \right\} \right)$ \\
\midrule
Accelerated IBP (SAA) &
$\widetilde O\left(\frac{n^4\|C\|_{\infty}^4}{\gamma^2\varepsilon^2}\right)$\\
\bottomrule
\end{tabular}}
\label{Tab:entropic_OT2}
\end{table}
Table \ref{Tab:entropic_OT2} presents the total complexity of the numerical algorithms implementing the SA and the SAA approaches.
When $\gamma$ is not too large, the complexity in the first row of the table is achieved by the second term under the minimum, namely \[\widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\gamma \sqrt{\gamma \kappa }\varepsilon^2}\right),\]
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$. This is typically bigger than the SAA complexity when $\kappa \ll \gamma/n$.
Hence, the SAA approach may outperform the SA approach provided that the regularization parameter $\gamma$ is not too large.
From the practical point of view, the SAA implementation converges much faster than the SA implementation.
Executing the SAA algorithm in a distributed manner only enhances this superiority: when the objective is not Lipschitz smooth, a distributed implementation of the SA approach is not possible. This is precisely the case for the Wasserstein barycenter problem, whose objective is Lipschitz continuous but not Lipschitz smooth.
\section{Fr\'{e}chet Mean with respect to Optimal Transport}\label{sec:unreg}
Now we are interested in finding a Fr\'{e}chet mean with respect to optimal transport
\begin{equation}\label{def:population_unregFrech}
\min_{p\in \Delta_n}W(p) \triangleq \mathbb E_q W (p,q).
\end{equation}
\subsection{The SA Approach with Regularization: Stochastic Gradient Descent}
The next theorem explains how the solution of strongly convex problem \eqref{def:populationWBFrech} approximates
a solution of convex problem
\eqref{def:population_unregFrech} under the proper choice of the regularization parameter $\gamma$.
\begin{theorem}\label{Th:SAunreg}
Let $\tilde p^N \triangleq \frac{1}{N}\sum_{k=1}^{N}p^k $ be the average of $N$ online outputs of Algorithm \ref{Alg:OnlineGD} run with $\delta = O\left(\varepsilon\right)$ and $
N =\widetilde O \left( \frac{n\|C\|_\infty^2}{\gamma \varepsilon} \right)
$. Let $\gamma = {\varepsilon}/{(2 \mathcal{R}^2)} $ with $\mathcal{R}^2 = 2 \log n$. Then, with probability
at least $1-\alpha$ the following holds
\begin{equation*}
W(\tilde p^N) - W(p^*) \leq \varepsilon,
\end{equation*}
where $p^*$ is a solution of \eqref{def:population_unregFrech}.
The total complexity of
Algorithm \ref{Alg:OnlineGD} with the accelerated Sinkhorn
is
\[\widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\gamma \sqrt{\gamma \kappa }\varepsilon^2}\right)= \widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\varepsilon^3 \sqrt{\varepsilon \kappa }}\right),\]
where $ \kappa \triangleq \lambda^+_{\min}\left(\nabla^2 W_{\gamma, q}^*(u^*)\right)$.
\end{theorem}
\begin{proof}
The proof of this theorem follows from Theorem \ref{Th:contract} and the following bound \cite{gasnikov2015universal,kroshnin2019complexity,peyre2019computational}
\[
W(p) - W(p^*) \leq W_{\gamma}(p) - W_{\gamma}(p^*) + 2\gamma\log n \leq W_{\gamma}(p) - W_{\gamma}(p^*_{\gamma})+ 2\gamma\log n,
\]
where $p \in \Delta_n $, $p^* = \arg\min\limits_{p\in \Delta_n}W(p) $.
The choice $\gamma = \frac{\varepsilon}{4\log n}$ ensures the following
\begin{equation*}
W(p) - W(p^*) \leq W_{\gamma}(p) - W_{\gamma}(p^*_{\gamma}) + \varepsilon/2, \quad \forall p \in \Delta_n.
\end{equation*}
This means that solving problem \eqref{def:populationWBFrech} with $\varepsilon/2$ precision, we get a solution of problem \eqref{def:population_unregFrech} with $\varepsilon$ precision.
When $\gamma$ is not too large, Algorithm \ref{Alg:OnlineGD} uses the accelerated Sinkhorn's algorithm (instead of Sinkhorn's algorithm). Thus, using $\gamma = \frac{\varepsilon}{4\log n} $ and noting that $\varepsilon$ is small, we get the complexity stated in the theorem.
\end{proof}
\subsection{The SA Approach: Stochastic Mirror Descent}
Now we propose an approach to solve problem \eqref{def:population_unregFrech} without additional regularization.
The approach is based on mirror prox given by the iterative formula \eqref{eq:prox_mirr_step}. We use simplex setup which
provides a closed form solution for \eqref{eq:prox_mirr_step}. Algorithm \ref{Alg:OnlineMD} presents the application of mirror prox to problem \eqref{def:population_unregFrech}, where
the gradient of $ W(p^k,q^k)$ can be calculated exactly by any LP solver using the dual representation of OT \cite{peyre2019computational}
\begin{align}\label{eq:refusol}
W(p,q) = \max_{ \substack{(u, \nu) \in \mathbb R^n\times \mathbb R^n,\\
u_i+\nu_j \leq C_{ij}, \forall i,j \in [n]}}\left\{ \langle u,p \rangle + \langle \nu,q \rangle \right\}.
\end{align}
Then \[
\nabla_p W(p,q) = u^*,
\]
where $u^*$ is a solution of \eqref{eq:refusol} such that $\langle u^*,{\mathbf 1}\rangle =0$.
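Assuming SciPy's HiGHS-based LP solver, this exact gradient can be sketched by solving the primal OT linear program and reading off the equality-constraint marginals, which are the dual variables $(u^*,\nu^*)$; the function name and the normalization convention are ours:

```python
import numpy as np
from scipy.optimize import linprog

def ot_value_and_grad(p, q, C):
    """Solve the OT linear program exactly and return (W(p,q), u), where
    u is the dual potential normalized so that <u, 1> = 0; u is a
    subgradient of W(., q) at p."""
    n = p.size
    A_rows = np.kron(np.eye(n), np.ones((1, n)))   # X @ 1 = p
    A_cols = np.kron(np.ones((1, n)), np.eye(n))   # X.T @ 1 = q
    res = linprog(C.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    u = res.eqlin.marginals[:n]        # sensitivities w.r.t. p, i.e. the dual u
    return res.fun, u - u.mean()       # enforce <u, 1> = 0

```

The returned $u$ satisfies the subgradient inequality $W(p',q)\geq W(p,q)+\langle u, p'-p\rangle$ for all $p'\in\Delta_n$, which gives a direct correctness check.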
\begin{algorithm}[ht!]
\caption{Stochastic Mirror Descent for the Wasserstein Barycenter Problem}
\label{Alg:OnlineMD}
\begin{algorithmic}[1]
\Require starting point $p^1 = (1/n,...,1/n)^T$,
number of measures $N$, $q^1,...,q^N$, accuracy of gradient calculation $\delta$
\State $\eta = \frac{\sqrt{2\log n}}{\|C\|_{\infty}\sqrt{N}}$
\For{$k= 1,\dots, N$}
\State Calculate $\nabla_{p^k} W(p^k,q^k)$ solving dual LP by any LP solver
\State
\[p^{k+1} = \frac{p^{k}\odot \exp\left(-\eta\nabla_{p^k} W(p^k,q^k)\right)}{\sum_{j=1}^n [p^{k}]_j\exp\left(-\eta\left[\nabla_{p^k} W(p^k,q^k)\right]_j\right)} \]
\EndFor
\Ensure $\breve{p}^N = \frac{1}{N}\sum_{k=1}^{N} p^{k}$
\end{algorithmic}
\end{algorithm}
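The multiplicative update of Algorithm \ref{Alg:OnlineMD} is essentially one line of code; the following sketch (Python/NumPy, our naming) additionally shifts the gradient by a constant, which cancels in the normalization but avoids overflow in the exponential:

```python
import numpy as np

def md_step(p, g, eta):
    """One entropic mirror-descent (multiplicative-weights) step on the
    simplex; shifting g by a constant cancels in the normalization."""
    w = p * np.exp(-eta * (g - g.min()))
    return w / w.sum()
```

With the step size $\eta = \sqrt{2\log n}/(\|C\|_\infty\sqrt N)$ of Algorithm \ref{Alg:OnlineMD}, a constant gradient leaves the iterate unchanged, while larger gradient coordinates lose mass.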
The next theorem estimates the complexity of Algorithm \ref{Alg:OnlineMD}.
\begin{theorem}\label{Th:MD}
Let $\breve p^N$ be the output of Algorithm \ref{Alg:OnlineMD} processing $N$ measures. Then, with probability
at least $1-\alpha$ we have
\begin{equation*}
W(\breve p^N) - W({p^*}) = O\left(\frac{\|C\|_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} \right).
\end{equation*}
Let Algorithm \ref{Alg:OnlineMD} run with $
N = \widetilde O \left( \frac{M_\infty^2R^2}{\varepsilon^2} \right) =\widetilde O \left( \frac{\|C\|_\infty^2}{\varepsilon^2} \right)
$, $ R^2 \triangleq {\rm KL}(p^*,p^1) \leq \log n $.
Then, with probability
at least $1-\alpha$
\[ W(\breve p^N) - W(p^*) \leq \varepsilon.\]
The {total} complexity of Algorithm \ref{Alg:OnlineMD} is
\[ \widetilde O\left( \frac{ n^3 \|C\|^2_\infty}{\varepsilon^2}\right).\]
\end{theorem}
\begin{proof}
From Theorem \ref{Th:MDgener} and using $M_\infty = O\left(\|C\|_\infty\right)$, we have
\begin{align}\label{eq:final_est234}
W(\breve p^N) - W(p^*) & = O\left(\frac{\|C\|_\infty \sqrt{\log ({n}/{\alpha})}}{\sqrt{N}} +2\delta \right).
\end{align}
Notice, that $\nabla_{p^k} W(p^k,q^k)$ can be calculated exactly by any LP solver. Thus, we take $\delta = 0$ in \eqref{eq:final_est234} and get the first statement of the theorem.
The second statement of the theorem directly follows from this and the condition $ W(\breve p^N) - W(p^*)\leq \varepsilon$.
To get the complexity bound, we
notice that the complexity of calculating $\nabla_p W(p^k,q^k)$ is $\tilde{O}(n^3)$ \cite{ahuja1993network,dadush2018friendly,dong2020study,gabow1991faster}. Multiplying this by $N = O \left( {\|C\|_\infty^2R^2}/{\varepsilon^2} \right) $ with $ R^2 \triangleq {\rm KL}(p^*,p^1) \leq \log n $, we get the last statement of the theorem:
\[\widetilde O(n^3N) = \widetilde O\left(n^3 \left(\frac{\|C\|_{\infty} R}{\varepsilon}\right)^2\right) = \widetilde O\left(n^3 \left(\frac{\|C\|_\infty}{\varepsilon}\right)^2\right).\]
\end{proof}
Next we compare the
SA approaches with and without regularization of optimal transport in problem \eqref{def:population_unregFrech}. Entropic regularization makes the regularized optimal transport strongly convex in the $\ell_2$-norm; hence, the Euclidean setup should be used. The regularization parameter $\gamma = \frac{\varepsilon}{4 \log n}$ ensures an $\varepsilon$-approximation of the unregularized solution.
In this case, we use stochastic gradient descent with Euclidean projection onto the simplex, since it converges faster for strongly convex objectives.
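The Euclidean projection onto the simplex used here admits the classical $O(n\log n)$ sort-and-threshold implementation; a sketch in Python/NumPy (our naming):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (sort-and-threshold algorithm, O(n log n))."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)
```

Points already on the simplex are fixed points of this map, and any vector is mapped to a nonnegative vector summing to one.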
For the non-regularized problem we can exploit the simplex prox structure: we apply stochastic mirror descent with the simplex setup (the Kullback--Leibler divergence as the Bregman divergence), whose Lipschitz constant $M_\infty = O(\|C\|_\infty)$ is a factor of $\sqrt{n}$ better than the Lipschitz constant in the Euclidean norm, $M = O(\sqrt{n}\|C\|_\infty)$.
We studied the convergence of stochastic mirror descent (Algorithm \ref{Alg:OnlineMD}) and stochastic gradient descent (Algorithm \ref{Alg:OnlineGD}) in the $2$-Wasserstein distance within $10^4$ iterations (processing of $10^4$ probability measures).
Figure \ref{fig:gausbarcomparison1} confirms the faster convergence of stochastic mirror descent compared to projected stochastic gradient descent, consistent with their theoretical complexities (Theorems \ref{Th:SAunreg} and \ref{Th:MD}).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{images/SGDMD.png}
\caption{Convergence of projected stochastic gradient descent, and stochastic mirror descent to the true barycenter of $2\times10^4$ Gaussian measures in the $2$-Wasserstein distance. }
\label{fig:gausbarcomparison1}
\end{figure}
\subsection{The SAA Approach }
Similarly to the SA approach, we provide the proper choice of the regularization parameter $\gamma$ in the SAA approach
so that the
solution of strongly convex problem \eqref{def:populationWBFrech} approximates
a solution of convex problem
\eqref{def:population_unregFrech}.
\begin{theorem}\label{Th:SAAunreg}
Let $\hat p_{\varepsilon'}$ satisfy
\begin{equation*}
\frac{1}{m} \sum_{i=1}^m W_{\gamma}( \hat p_{\varepsilon'}, q^i) - \frac{1}{m} \sum_{k=1}^m W_{\gamma}(\hat p^*_{\gamma}, q^i) \leq \varepsilon',
\end{equation*}
where $ \hat p_\gamma^* = \arg\min\limits_{p\in \Delta_n}\frac{1}{m}\sum\limits_{i=1}^m W_\gamma(p,q^i)$, $\varepsilon' = O \left(\frac{\varepsilon^2 \gamma}{n\|C\|_\infty^2} \right)$, $m = O\left( \frac{n\|C\|_\infty^2}{\alpha \gamma \varepsilon} \right)$, and $\gamma = {\varepsilon}/{(2 \mathcal{R}^2)} $ with $\mathcal{R}^2 = 2 \log n$.
Then, with probability at least $1-\alpha$ the following holds
\[W( \hat p_{\varepsilon'}) - W(p^*)\leq \varepsilon.\]
The total complexity of the accelerated IBP computing $\hat p_{\varepsilon'}$ is
\begin{equation*}
\widetilde O\left(\frac{n^4\|C\|_\infty^4}{\alpha \varepsilon^4} \right).
\end{equation*}
\end{theorem}
\begin{proof}
The proof follows from Theorem \ref{Th:contract2} and the proof of Theorem \ref{Th:SAunreg} with $\gamma = {\varepsilon}/{(4 \log n)} $.
\end{proof}
\subsection{Penalization of the WB problem}
For the population Wasserstein barycenter problem, we construct a penalty function, based on a Bregman divergence, that is 1-strongly convex in the $\ell_1$-norm.
We consider the following prox-function \cite{ben-tal2001lectures}
\[d(p) = \frac{1}{2(a-1)}\|p\|_a^2, \quad a = 1 + \frac{1}{2\log n}, \qquad p\in \Delta_n\]
that is 1-strongly convex in the $\ell_1$-norm. Then the Bregman divergence $ B_d(p,p^1)$ associated with $d(p)$ is
\[B_d(p,p^1) = d(p) - d(p^1) - \langle \nabla d(p^1), p - p^1 \rangle.\]
$B_d(p,p^1)$ is 1-strongly convex w.r.t. $p$ in the $\ell_1$-norm and $\tilde{O}(1)$-Lipschitz continuous in the $\ell_1$-norm on $\Delta_n$. One of the advantages of this penalization over the negative-entropy penalization proposed in \cite{ballu2020stochastic,bigot2019penalization} is that we obtain an upper bound on the Lipschitz constant while the strong convexity in the $\ell_1$-norm on $\Delta_n$ is preserved. Moreover, this penalization yields a better wall-clock time complexity than quadratic penalization \cite{bigot2019penalization}: the Lipschitz constant of $W(p,q)$ with respect to the $\ell_1$-norm is a factor of $\sqrt{n}$ better than with respect to the $\ell_2$-norm, whereas $R^2 = \|p^* - p^1\|_2^2\leq \|p^* - p^1\|_1^2 \leq \sqrt{2} $ and $R^2_d = B_d(p^*,p^1) = O(\log n)$ coincide up to a logarithmic factor.
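A minimal numerical sketch of this prox-function, its gradient, and the associated Bregman divergence (Python/NumPy; the factory-function name is ours) follows the formulas above:

```python
import numpy as np

def make_prox(n):
    """d(p) = ||p||_a^2 / (2(a-1)) with a = 1 + 1/(2 log n), its gradient,
    and the Bregman divergence B_d(p, p1)."""
    a = 1.0 + 1.0 / (2.0 * np.log(n))

    def d(p):
        return np.sum(p ** a) ** (2.0 / a) / (2.0 * (a - 1.0))

    def grad_d(p):
        # [grad d(p)]_i = ||p||_a^{2 - a} p_i^{a - 1} / (a - 1)
        norm_a = np.sum(p ** a) ** (1.0 / a)
        return norm_a ** (2.0 - a) * p ** (a - 1.0) / (a - 1.0)

    def bregman(p, p1):
        return d(p) - d(p1) - grad_d(p1) @ (p - p1)

    return d, grad_d, bregman
```

Convexity of $d$ makes $B_d$ nonnegative with $B_d(p,p)=0$, and the gradient formula can be sanity-checked against finite differences.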
The regularized SAA problem is the following
\begin{equation}\label{EWB_Bregman_Reg}
\min_{p\in \Delta_n}\left\{\frac{1}{m}\sum_{k=1}^m W(p,q^k) + \lambda B_d(p,p^1)\right\}.
\end{equation}
The next theorem is a particular case of Theorem \ref{th_reg_ERM} for the population WB problem \eqref{def:population_unregFrech} with $r(p,p^1) = B_d(p,p^1)$.
\begin{theorem}\label{Th:newreg}
Let $\hat p_{\varepsilon'}$ be such that
\begin{equation}\label{EWB_eps}
\frac{1}{m}\sum_{k=1}^m W(\hat p_{\varepsilon'},q^k) + \lambda B_d(\hat p_{\varepsilon'},p^1) -
\min_{p\in \Delta_n}\left\{\frac{1}{m}\sum_{k=1}^m W(p,q^k) + \lambda B_d(p,p^1)\right\}
\le \varepsilon'.
\end{equation}
To satisfy
\[ W( \hat p_{\varepsilon'}) - W(p^*)\leq \varepsilon\]
with probability at least $1-\alpha$,
we need to take $\lambda = \varepsilon/(2{R_d^2})$ and
\[m = \widetilde O\left(\frac{ \|C\|_\infty^2}{\alpha \varepsilon^2}\right), \]
where
$ R_d^2 = B_d(p^*,p^1) = {O(\log n)}$. The precision $\varepsilon'$ is defined as
\[\varepsilon' = \widetilde O\left(\frac{\varepsilon^3}{\|C\|_\infty^2 }\right).\]
The total complexity of Mirror Prox computing $\hat p_{\varepsilon'}$ is
\[
\widetilde O\left( \frac{ n^2\sqrt n\|C\|^5_\infty}{\varepsilon^5} \right).\]
\end{theorem}
\begin{proof}
The proof is based on a saddle-point reformulation of the WB problem, which we explain next.
First, we rewrite the OT problem as \cite{jambulapati2019direct}
\begin{equation}\label{eq:OTreform}
W(p,q) = \min_{x \in \Delta_{n^2}} \max_{y\in [-1,1]^{2n}} \{d^\top x +2\|d\|_\infty(~ y^\top Ax -b^\top y)\},
\end{equation}
where $b = (p^\top, q^\top)^\top$, $d$ is the vectorized cost matrix $C$, $x$ is the vectorized transport plan $X$, and $A\in\{0,1\}^{2n\times n^2}$ is an incidence matrix.
Then we reformulate the WB problem as a saddle-point problem \cite{dvinskikh2020improved}
\begin{align}\label{eq:alm_distr}
\min_{ \substack{ p \in \Delta^n, \\ \mathbf{x} \in \mathcal X \triangleq \underbrace{\Delta_{n^2}\times \ldots \times \Delta_{n^2}}_{m} }} \max_{ \mathbf{y} \in [-1,1]^{2mn}}
\frac{1}{m} \left\{\boldsymbol d^\top \mathbf{x} +2\|d\|_\infty\left(\mathbf{y}^\top\boldsymbol A \mathbf{x} -\mathbf b^\top \mathbf{y} \right)\right\},
\end{align}
where
$\mathbf{x} = (x_1^\top ,\ldots,x_m^\top )^\top $,
$\mathbf{y} = (y_1^\top,\ldots,y_m^\top)^\top $,
$\mathbf b = (p^\top, q_1^\top, ..., p^\top, q_m^\top)^\top$,
$\boldsymbol d = (d^\top, \ldots, d^\top )^\top $,
and $\boldsymbol A = {\rm diag}\{A, ..., A\} \in \{0,1\}^{2mn\times mn^2}$ is a block-diagonal matrix.
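The incidence matrix $A$ in \eqref{eq:OTreform} can be assembled from Kronecker products; the following sketch (Python/NumPy, our naming) maps a row-major vectorized plan to its stacked marginals:

```python
import numpy as np

def incidence_matrix(n):
    """A in {0,1}^{2n x n^2} with A x = (X 1, X^T 1) for x = X.ravel()
    (row-major vectorization of the transport plan X)."""
    A_rows = np.kron(np.eye(n), np.ones((1, n)))   # selects row sums of X
    A_cols = np.kron(np.ones((1, n)), np.eye(n))   # selects column sums of X
    return np.vstack([A_rows, A_cols])
```

Applying $A$ to a vectorized plan returns its two marginals stacked, which is exactly the constraint $Ax=b$ with $b=(p^\top,q^\top)^\top$.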
Similarly to \eqref{eq:alm_distr} we reformulate \eqref{EWB_Bregman_Reg} as a saddle-point problem
\begin{align*}
\min_{ \substack{ p \in \Delta^n, \\ \mathbf{x} \in \mathcal X}} \max_{ \mathbf{y} \in [-1,1]^{2mn}}
~f_\lambda(\mathbf{x},p,\mathbf{y})
&\triangleq
\frac{1}{m} \left\{\boldsymbol d^\top \mathbf{x} +2\|d\|_\infty\left(\mathbf{y}^\top\boldsymbol A \mathbf{x} -\mathbf b^\top \mathbf{y} \right)\right\} +\lambda B_d(p,p^1).
\end{align*}
The gradient operator for $f_\lambda(\mathbf{x},p,\mathbf{y})$ is defined by
\begin{align}\label{eq:gradMPrec}
G(\mathbf{x}, p, \mathbf{y}) =
\begin{pmatrix}
\nabla_\mathbf{x} f \\
\nabla_p f\\
-\nabla_\mathbf{y} f
\end{pmatrix} =
\frac{1}{m} \begin{pmatrix}
\boldsymbol d + 2\|d\|_\infty \boldsymbol A^\top \mathbf{y} \\
- 2\|d\|_\infty \{[y_{i}]_{1...n}\}_{i=1}^m +\lambda(\nabla d(p) - \nabla d(p^1)) \\
2\|d\|_\infty(\boldsymbol A\mathbf{x} - \mathbf{b})
\end{pmatrix},
\end{align}
where $[\nabla d(p)]_i = \frac{1}{a-1}\|p\|_a^{2-a}[p]_i^{a-1}$.
To get the complexity of MP, we use the same reasoning as in \cite{dvinskikh2020improved} with \eqref{eq:gradMPrec}. The total complexity is
\[
\widetilde O\left( \frac{ mn^2\sqrt{n} \|C\|_\infty}{\varepsilon'} \right).
\]
Then we use Theorem \ref{th_reg_ERM}
and get the expressions for $m$ and $\varepsilon'$ with $\lambda = \varepsilon/(2{R_d}^2)$, where ${R_d}^2 = B_d(p^*,p^1)$.
The number of measures is
\[m = \frac{ 32 M_\infty^2 R_d^2}{\alpha \varepsilon^2} = \widetilde O\left(\frac{\|C\|_\infty^2}{\alpha \varepsilon^2}\right). \]
The precision $\varepsilon'$ is defined as
\[\varepsilon' = \frac{\varepsilon^3}{64M_\infty^2 {R_d}^2} = O\left(\frac{\varepsilon^3}{\|C\|_\infty^2}\right).\]
\end{proof}
\subsection{Comparison of the SA and the SAA for the WB Problem}
Now we compare the complexity bounds for the SA and the SAA implementations solving problem \eqref{def:population_unregFrech}. Table \ref{Tab:OT} presents the total complexity for the numerical algorithms.
{
\begin{table}[H]
\caption{Total complexity of the SA and the SAA implementations for $ \min\limits_{p\in \Delta_n}\mathbb E_q W(p,q)$. }
\begin{center}
\begin{tabular}{lll}\toprule
\textbf{Algorithm} & \textbf{Theorem} & \textbf{Complexity} \\
\midrule
\makecell[l]{Projected SGD (SA) \\ \text{with }$\gamma = \frac{\varepsilon}{4 \log n}$}
& \ref{Th:SAunreg} & $ \widetilde O \left(\frac{n^3\sqrt n \|C\|^3_\infty}{\varepsilon^3 \sqrt{\varepsilon \kappa }}\right)$ \\
\midrule
Stochastic MD (SA) & \ref{Th:MD} & $\widetilde O\left( \frac{n^3\|C\|^2_{\infty}}{\varepsilon^2}\right) $ \\
\midrule
\makecell[l]{Accelerated IBP (SAA) \\ \text{with }$\gamma = \frac{\varepsilon}{4 \log n}$}
& \ref{Th:SAAunreg} &
$\widetilde{O}\left(\frac{n^4\|C\|^4_{\infty}}{\varepsilon^4}\right)$
\\
\midrule
\makecell[l]{ Mirror Prox with $B_d(p^*,p^1)$\\ penalization (SAA)} & \ref{Th:newreg} & $\widetilde O\left(\frac{ n^{2}\sqrt{n}\|C\|^{5} _{\infty}}{\varepsilon^{5} }\right)$ \\
\bottomrule
\end{tabular}
\label{Tab:OT}
\end{center}
\end{table}
}
For the SA algorithms (Stochastic MD and Projected SGD), we conclude the following: the non-regularized approach (Stochastic MD) exploits the simplex prox structure and achieves better complexity bounds; indeed, the Lipschitz constant in the $\ell_1$-norm is $M_\infty = O(\|C\|_\infty)$, whereas the Lipschitz constant in the Euclidean norm is $M = O(\sqrt{n}\|C\|_\infty)$. A practical comparison of Stochastic MD (Algorithm \ref{Alg:OnlineMD}) and Projected SGD (Algorithm \ref{Alg:OnlineGD}) can be found in Figure \ref{fig:gausbarcomparison1}.
For the SAA approaches (Accelerated IBP and Mirror Prox with specific penalization), we conclude the following: the entropy-regularized approach (Accelerated IBP) has a better dependence on $\varepsilon$ than the penalized approach (Mirror Prox with specific penalization), but a worse dependence on $n$. Using the Dual Extrapolation method for the WB problem from \cite{dvinskikh2020improved} instead of Mirror Prox allows one to remove the factor $\sqrt{n}$ in the penalized approach.
One of the main advantages of the SAA approach is that it can be performed in a decentralized manner,
in contrast to the SA approach, which cannot be executed in a decentralized (or even distributed or parallel) fashion for a non-smooth objective \cite{gorbunov2019optimal}. This is precisely the case for the Wasserstein barycenter problem, whose objective is Lipschitz continuous but not Lipschitz smooth.
\section*{Acknowledgements}
The work was
supported by the Russian Science Foundation (project 18-71-10108), \url{https://rscf.ru/project/18-71-10108/}; and by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) number 075-00337-20-03, project No. 0714-2020-0005.
\bibliographystyle{tfs}
\section{Introduction}
In order to assist authors in the process of preparing a manuscript for a journal, the Taylor \& Francis `\textsf{Interact}' layout style has been implemented as a \LaTeXe\ class file based on the \texttt{article} document class. A \textsc{Bib}\TeX\ bibliography style file and a sample bibliography are also provided in order to assist with the formatting of your references.
Commands that differ from or are provided in addition to standard \LaTeXe\ are described in this document, which is \emph{not} a substitute for a \LaTeXe\ tutorial.
The \texttt{interacttfssample.tex} file can be used as a template for a manuscript by cutting, pasting, inserting and deleting text as appropriate, using the preamble and the \LaTeX\ environments provided (e.g.\ \verb"\begin{abstract}", \verb"\begin{keywords}").
\subsection{The \textsf{Interact} class file}\label{class}
The \texttt{interact} class file preserves the standard \LaTeXe\ interface such that any document that can be produced using \texttt{article.cls} can also be produced with minimal alteration using the \texttt{interact} class file as described in this document.
If your article is accepted for publication it will be typeset as the journal requires in Minion Pro and/or Myriad Pro. Since most authors will not have these fonts installed, the page make-up is liable to alter slightly with the change of font. Also, the \texttt{interact} class file produces only single-column format, which is preferred for peer review and will be converted to two-column format by the typesetter if necessary during preparation of the proofs. Please therefore do not try to match the typeset format exactly, but use the standard \LaTeX\ fonts instead and ignore details such as slightly long lines of text or figures/tables not appearing in exact synchronization with their citations in the text: these details will be dealt with by the typesetter. Similarly, it is unnecessary to spend time addressing warnings in the log file -- if your .tex file compiles to produce a PDF document that correctly shows how you wish your paper to appear, such warnings will not prevent your source files being imported into the typesetter's program.
\subsection{Submission of manuscripts prepared using \emph{\LaTeX}}
Manuscripts for possible publication should be submitted to the Editors for review as directed in the journal's Instructions for Authors, and in accordance with any technical instructions provided in the journal's ScholarOne Manuscripts or Editorial Manager site. Your \LaTeX\ source file(s), the class file and any graphics files will be required in addition to the final PDF version when final, revised versions of accepted manuscripts are submitted.
Please ensure that any author-defined macros used in your article are gathered together in the preamble of your .tex file, i.e.\ before the \verb"\begin{document}" command. Note that if serious problems are encountered in the coding of a document (missing author-defined macros, for example), the typesetter may resort to rekeying it.
\section{Using the \texttt{interact} class file}
For convenience, simply copy the \texttt{interact.cls} file into the same directory as your manuscript files (you do not need to install it in your \TeX\ distribution). In order to use the \texttt{interact} document class, replace the command \verb"\documentclass{article}" at the beginning of your document with the command \verb"\documentclass{interact}".
The following document-class options should \emph{not} be used with the \texttt{interact} class file:
\begin{itemize}
\item \texttt{10pt}, \texttt{11pt}, \texttt{12pt} -- unavailable;
\item \texttt{oneside}, \texttt{twoside} -- not necessary, \texttt{oneside} is the default;
\item \texttt{leqno}, \texttt{titlepage} -- should not be used;
\item \texttt{twocolumn} -- should not be used (see Subsection~\ref{class});
\item \texttt{onecolumn} -- not necessary as it is the default style.
\end{itemize}
To prepare a manuscript for a journal that is printed in A4 (two column) format, use the \verb"largeformat" document-class option provided by \texttt{interact.cls}; otherwise the class file produces pages sized for B5 (single column) format by default. The \texttt{geometry} package should not be used to make any further adjustments to the page dimensions.
\section{Additional features of the \texttt{interact} class file}
\subsection{Title, authors' names and affiliations, abstracts and article types}
The title should be generated at the beginning of your article using the \verb"\maketitle" command.
In the final version the author name(s) and affiliation(s) must be followed immediately by \verb"\maketitle" as shown below in order for them to be displayed in your PDF document.
To prepare an anonymous version for double-blind peer review, you can put the \verb"\maketitle" between the \verb"\title" and the \verb"\author" in order to hide the author name(s) and affiliation(s) temporarily.
Next you should include the abstract if your article has one, enclosed within an \texttt{abstract} environment.
The \verb"\articletype" command is also provided as an \emph{optional} element which should \emph{only} be included if your article actually needs it.
For example, the titles for this document begin as follows:
\begin{verbatim}
\articletype{ARTICLE TEMPLATE}
\title{Taylor \& Francis \LaTeX\ template for authors (\textsf{Interact}
layout + reference style S)}
\author{
\name{A.~N. Author\textsuperscript{a}\thanks{CONTACT A.~N. Author.
Email: [email protected]} and John Smith\textsuperscript{b}}
\affil{\textsuperscript{a}Taylor \& Francis, 4 Park Square, Milton
Park, Abingdon, UK; \textsuperscript{b}Institut f\"{u}r Informatik,
Albert-Ludwigs-Universit\"{a}t, Freiburg, Germany} }
\maketitle
\begin{abstract}
This template is for authors who are preparing a manuscript for a
Taylor \& Francis journal using the \LaTeX\ document preparation system
and the \texttt{interact} class file, which is available via selected
journals' home pages on the Taylor \& Francis website.
\end{abstract}
\end{verbatim}
An additional abstract in another language (preceded by a translation of the article title) may be included within the \verb"abstract" environment if required.
A graphical abstract may also be included if required. Within the \verb"abstract" environment you can include the code
\begin{verbatim}
\\\resizebox{25pc}{!}{\includegraphics{abstract.eps}}
\end{verbatim}
where the graphical abstract is to appear, where \verb"abstract.eps" is the name of the file containing the graphic (note that \verb"25pc" is the recommended maximum width, expressed in pica, for the graphical abstract in your manuscript).
\subsection{Abbreviations}
A list of abbreviations may be included if required, enclosed within an \texttt{abbreviations} environment, i.e.\ \verb"\begin{abbreviations}"\ldots\verb"\end{abbreviations}", immediately following the \verb"abstract" environment.
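For example, an abbreviations list might be coded as follows (the entries and their punctuation are placeholders only):
\begin{verbatim}
\begin{abbreviations}
QUBO, quadratic unconstrained binary optimization;
RMSE, root mean square error.
\end{abbreviations}
\end{verbatim}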
\subsection{Keywords}
A list of keywords may be included if required, enclosed within a \texttt{keywords} environment, i.e.\ \verb"\begin{keywords}"\ldots\verb"\end{keywords}". Additional keywords in other languages (preceded by a translation of the word `keywords') may also be included within the \verb"keywords" environment if required.
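For example (the keywords shown are placeholders):
\begin{verbatim}
\begin{keywords}
Sections; lists; figures; tables; mathematics; references
\end{keywords}
\end{verbatim}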
\subsection{Subject classification codes}
AMS, JEL or PACS classification codes may be included if required. The \texttt{interact} class file provides an \texttt{amscode} environment, i.e.\ \verb"\begin{amscode}"\ldots\verb"\end{amscode}", a \texttt{jelcode} environment, i.e.\ \verb"\begin{jelcode}"\ldots\verb"\end{jelcode}", and a \texttt{pacscode} environment, i.e.\ \verb"\begin{pacscode}"\ldots\verb"\end{pacscode}" to assist with this.
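For example, AMS codes could be supplied as follows (the codes themselves are placeholders):
\begin{verbatim}
\begin{amscode}
49J15; 34H05
\end{amscode}
\end{verbatim}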
\subsection{Additional footnotes to the title or authors' names}
The \verb"\thanks" command may be used to create additional footnotes to the title or authors' names if required. Footnote symbols for this purpose should be used in the order
$^\ast$~(coded as \verb"$^\ast$"), $\dagger$~(\verb"$\dagger$"), $\ddagger$~(\verb"$\ddagger$"), $\S$~(\verb"$\S$"), $\P$~(\verb"$\P$"), $\|$~(\verb"$\|$"),
$\dagger\dagger$~(\verb"$\dagger\dagger$"), $\ddagger\ddagger$~(\verb"$\ddagger\ddagger$"), $\S\S$~(\verb"$\S\S$"), $\P\P$~(\verb"$\P\P$").
Note that any \verb"footnote"s to the main text will automatically be assigned the superscript symbols 1, 2, 3, etc. by the class file.\footnote{If preferred, the \texttt{endnotes} package may be used to set the notes at the end of your text, before the bibliography. The symbols will be changed to match the style of the journal if necessary by the typesetter.}
\section{Some guidelines for using the standard features of \LaTeX}
\subsection{Sections}
The \textsf{Interact} layout style allows for five levels of section heading, all of which are provided in the \texttt{interact} class file using the standard \LaTeX\ commands \verb"\section", \verb"\subsection", \verb"\subsubsection", \verb"\paragraph" and \verb"\subparagraph". Numbering will be automatically generated for all these headings by default.
\subsection{Lists}
Numbered lists are produced using the \texttt{enumerate} environment, which will number each list item with arabic numerals by default. For example,
\begin{enumerate}
\item first item
\item second item
\item third item
\end{enumerate}
was produced by
\begin{verbatim}
\begin{enumerate}
\item first item
\item second item
\item third item
\end{enumerate}
\end{verbatim}
Alternative numbering styles can be achieved by inserting an optional argument in square brackets to each \verb"item", e.g.\ \verb"\item[(i)] first item"\, to create a list numbered with roman numerals at level one.
Bulleted lists are produced using the \texttt{itemize} environment. For example,
\begin{itemize}
\item First bulleted item
\item Second bulleted item
\item Third bulleted item
\end{itemize}
was produced by
\begin{verbatim}
\begin{itemize}
\item First bulleted item
\item Second bulleted item
\item Third bulleted item
\end{itemize}
\end{verbatim}
\subsection{Figures}
The \texttt{interact} class file will deal with positioning your figures in the same way as standard \LaTeX. It should not normally be necessary to use the optional \texttt{[htb]} location specifiers of the \texttt{figure} environment in your manuscript; you may, however, find the \verb"[p]" placement option or the \verb"endfloat" package useful if a journal insists on the need to separate figures from the text.
Figure captions appear below the figures themselves, therefore the \verb"\caption" command should appear after the body of the figure. For example, Figure~\ref{sample-figure} with caption and sub-captions is produced using the following commands:
\begin{verbatim}
\begin{figure}
\centering
\subfloat[An example of an individual figure sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph1.eps}}}\hspace{5pt}
\subfloat[A slightly shorter sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph2.eps}}}
\caption{Example of a two-part figure with individual sub-captions
showing that captions are flush left and justified if greater
than one line of text.} \label{sample-figure}
\end{figure}
\end{verbatim}
\begin{figure}
\centering
\subfloat[An example of an individual figure sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph1.eps}}}\hspace{5pt}
\subfloat[A slightly shorter sub-caption.]{%
\resizebox*{5cm}{!}{\includegraphics{graph2.eps}}}
\caption{Example of a two-part figure with individual sub-captions
showing that captions are flush left and justified if greater
than one line of text.} \label{sample-figure}
\end{figure}
To ensure that figures are correctly numbered automatically, the \verb"\label" command should be included just after the \verb"\caption" command, or in its argument.
The \verb"\subfloat" command requires \verb"subfig.sty", which is called in the preamble of the \texttt{interacttfssample.tex} file (to allow your choice of an alternative package if preferred) and included in the \textsf{Interact} \LaTeX\ bundle for convenience. Please supply any additional figure macros used with your article in the preamble of your .tex file.
The source files of any figures will be required when the final, revised version of a manuscript is submitted. Authors should ensure that these are suitable (in terms of lettering size, etc.) for the reductions they envisage.
The \texttt{epstopdf} package can be used to incorporate encapsulated PostScript (.eps) illustrations when using PDF\LaTeX, etc. Please provide the original .eps source files rather than the generated PDF images of those illustrations for production purposes.
\subsection{Tables}
The \texttt{interact} class file will deal with positioning your tables in the same way as standard \LaTeX. It should not normally be necessary to use the optional \texttt{[htb]} location specifiers of the \texttt{table} environment in your manuscript; you may, however, find the \verb"[p]" placement option or the \verb"endfloat" package useful if a journal insists on the need to separate tables from the text.
The \texttt{tabular} environment can be used as shown to create tables with single horizontal rules at the head, foot and elsewhere as appropriate. The captions appear above the tables in the \textsf{Interact} style, therefore the \verb"\tbl" command should be used before the body of the table. For example, Table~\ref{sample-table} is produced using the following commands:
\begin{table}
\tbl{Example of a table showing that its caption is as wide as
the table itself and justified.}
{\begin{tabular}{lcccccc} \toprule
& \multicolumn{6}{l}{Type} \\ \cmidrule{2-7}
Class & One & Two & Three & Four & Five & Six \\ \midrule
Alpha\textsuperscript{a} & A1 & A2 & A3 & A4 & A5 & A6 \\
Beta & B1 & B2 & B3 & B4 & B5 & B6 \\
Gamma & C1 & C2 & C3 & C4 & C5 & C6 \\ \bottomrule
\end{tabular}}
\tabnote{\textsuperscript{a}This footnote shows how to include
footnotes to a table if required.}
\label{sample-table}
\end{table}
\begin{verbatim}
\begin{table}
\tbl{Example of a table showing that its caption is as wide as
the table itself and justified.}
{\begin{tabular}{lcccccc} \toprule
& \multicolumn{6}{l}{Type} \\ \cmidrule{2-7}
Class & One & Two & Three & Four & Five & Six \\ \midrule
Alpha\textsuperscript{a} & A1 & A2 & A3 & A4 & A5 & A6 \\
Beta & B1 & B2 & B3 & B4 & B5 & B6 \\
Gamma & C1 & C2 & C3 & C4 & C5 & C6 \\ \bottomrule
\end{tabular}}
\tabnote{\textsuperscript{a}This footnote shows how to include
footnotes to a table if required.}
\label{sample-table}
\end{table}
\end{verbatim}
To ensure that tables are correctly numbered automatically, the \verb"\label" command should be included just before \verb"\end{table}".
The \verb"\toprule", \verb"\midrule", \verb"\bottomrule" and \verb"\cmidrule" commands are those used by \verb"booktabs.sty", which is called by the \texttt{interact} class file and included in the \textsf{Interact} \LaTeX\ bundle for convenience. Tables produced using the standard commands of the \texttt{tabular} environment are also compatible with the \texttt{interact} class file.
\subsection{Landscape pages}
If a figure or table is too wide to fit the page it will need to be rotated, along with its caption, through 90$^{\circ}$ anticlockwise. Landscape figures and tables can be produced using the \verb"rotating" package, which is called by the \texttt{interact} class file. The following commands (for example) can be used to produce such pages.
\begin{verbatim}
\setcounter{figure}{1}
\begin{sidewaysfigure}
\centerline{\epsfbox{figname.eps}}
\caption{Example landscape figure caption.}
\label{landfig}
\end{sidewaysfigure}
\end{verbatim}
\begin{verbatim}
\setcounter{table}{1}
\begin{sidewaystable}
\tbl{Example landscape table caption.}
{\begin{tabular}{@{}llllcll}
.
.
.
\end{tabular}}\label{landtab}
\end{sidewaystable}
\end{verbatim}
Before any such float environment, use the \verb"\setcounter" command as above to fix the numbering of the caption (the value of the counter being the number given to the preceding figure or table). Subsequent captions will then be automatically renumbered accordingly. The \verb"\epsfbox" command requires \verb"epsfig.sty", which is called by the \texttt{interact} class file and is also included in the \textsf{Interact} \LaTeX\ bundle for convenience.
Please note that if the \verb"endfloat" package is used, one or both of the commands
\begin{verbatim}
\DeclareDelayedFloatFlavor{sidewaysfigure}{figure}
\DeclareDelayedFloatFlavor{sidewaystable}{table}
\end{verbatim}
will need to be included in the preamble of your .tex file, after the \verb"endfloat" package is loaded, in order to process any landscape figures and/or tables correctly.
\subsection{Theorem-like structures}
A predefined \verb"proof" environment is provided by the \texttt{amsthm} package (which is called by the \texttt{interact} class file), as follows:
\begin{proof}
More recent algorithms for solving the semidefinite programming relaxation are particularly efficient, because they explore the structure of the MAX-CUT problem.
\end{proof}
\noindent This was produced by simply typing:
\begin{verbatim}
\begin{proof}
More recent algorithms for solving the semidefinite programming
relaxation are particularly efficient, because they explore the
structure of the MAX-CUT problem.
\end{proof}
\end{verbatim}
Other theorem-like environments (theorem, definition, remark, etc.) need to be defined as required, e.g.\ using \verb"\newtheorem{theorem}{Theorem}" in the preamble of your .tex file (see the preamble of \verb"interacttfssample.tex" for more examples). You can define the numbering scheme for these structures however suits your article best. Please note that the format of the text in these environments may be changed if necessary to match the style of individual journals by the typesetter during preparation of the proofs.
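For example, a theorem environment numbered within sections could be defined in the preamble by
\begin{verbatim}
\newtheorem{theorem}{Theorem}[section]
\end{verbatim}
and then used in the body as follows (the statement shown is only a placeholder):
\begin{verbatim}
\begin{theorem}\label{sample-theorem}
A continuous real-valued function on a closed bounded
interval attains its maximum.
\end{theorem}
\end{verbatim}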
\subsection{Mathematics}
\subsubsection{Displayed mathematics}
The \texttt{interact} class file will set displayed mathematical formulas centred on the page without equation numbers if you use the \texttt{displaymath} environment or the equivalent \verb"\[...\]" construction. For example, the equation
\[
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\]
was typeset using the commands
\begin{verbatim}
\[
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\]
\end{verbatim}
For those of your equations that you wish to be automatically numbered sequentially throughout the text for future reference, use the \texttt{equation} environment, e.g.
\begin{equation}
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\end{equation}
was typeset using the commands
\begin{verbatim}
\begin{equation}
\hat{\theta}_{w_i} = \hat{\theta}(s(t,\mathcal{U}_{w_i}))
\end{equation}
\end{verbatim}
Part numbers for sets of equations may be generated using the \texttt{subequations} environment, e.g.
\begin{subequations} \label{subeqnexample}
\begin{equation}
\varepsilon \rho w_{tt}(s,t) = N[w_{s}(s,t),w_{st}(s,t)]_{s},
\label{subeqnparta}
\end{equation}
\begin{equation}
w_{tt}(1,t)+N[w_{s}(1,t),w_{st}(1,t)] = 0, \label{subeqnpartb}
\end{equation}
\end{subequations}
which was typeset using the commands
\begin{verbatim}
\begin{subequations} \label{subeqnexample}
\begin{equation}
\varepsilon \rho w_{tt}(s,t) = N[w_{s}(s,t),w_{st}(s,t)]_{s},
\label{subeqnparta}
\end{equation}
\begin{equation}
w_{tt}(1,t)+N[w_{s}(1,t),w_{st}(1,t)] = 0, \label{subeqnpartb}
\end{equation}
\end{subequations}
\end{verbatim}
This is made possible by the \texttt{amsmath} package, which is called by the class file. If you put a \verb"\label" just after the \verb"\begin{subequations}" command, references can be made to the collection of equations, i.e.\ `(\ref{subeqnexample})' in the example above. Or, as the example also shows, you can label and refer to each equation individually -- i.e.\ `(\ref{subeqnparta})' and `(\ref{subeqnpartb})'.
Displayed mathematics should be given end-of-line punctuation appropriate to the running text sentence of which it forms a part, if required.
\subsubsection{Math fonts}
\paragraph{Superscripts and subscripts}
Superscripts and subscripts will automatically come out in the correct size in a math environment (i.e.\ enclosed within \verb"\(...\)" or \verb"$...$" commands in running text, or within \verb"\[...\]" or the \texttt{equation} environment for displayed equations). Sub/superscripts that are physical variables should be italic, whereas those that are labels should be roman (e.g.\ $C_p$, $T_\mathrm{eff}$). If the subscripts or superscripts need to be other than italic, they must be coded individually.
\paragraph{Upright Greek characters and the upright partial derivative sign}
Upright lowercase Greek characters can be obtained by inserting the letter `u' in the control code for the character, e.g.\ \verb"\umu" and \verb"\upi" produce $\umu$ (used, for example, in the symbol for the unit microns -- $\umu\mathrm{m}$) and $\upi$ (the ratio of the circumference of a circle to its diameter). Similarly, the control code for the upright partial derivative $\upartial$ is \verb"\upartial". Bold lowercase as well as uppercase Greek characters can be obtained by \verb"{\bm \gamma}", for example, which gives ${\bm \gamma}$, and \verb"{\bm \Gamma}", which gives ${\bm \Gamma}$.
\section*{Acknowledgement(s)}
An unnumbered section, e.g.\ \verb"\section*{Acknowledgements}", may be used for thanks, etc.\ if required and included \emph{in the non-anonymous version} before any Notes or References.
\section*{Disclosure statement}
An unnumbered section, e.g.\ \verb"\section*{Disclosure statement}", may be used to declare any potential conflict of interest and included \emph{in the non-anonymous version} before any Notes or References, after any Acknowledgements and before any Funding information.
\section*{Funding}
An unnumbered section, e.g.\ \verb"\section*{Funding}", may be used for grant details, etc.\ if required and included \emph{in the non-anonymous version} before any Notes or References.
\section*{Notes on contributor(s)}
An unnumbered section, e.g.\ \verb"\section*{Notes on contributors}", may be included \emph{in the non-anonymous version} if required. A photograph may be added if requested.
\section*{Nomenclature/Notation}
An unnumbered section, e.g.\ \verb"\section*{Nomenclature}" (or \verb"\section*{Notation}"), may be included if required, before any Notes or References.
\section*{Notes}
An unnumbered `Notes' section may be included before the References (if using the \verb"endnotes" package, use the command \verb"\theendnotes" where the notes are to appear, instead of creating a \verb"\section*").
\section{References}
\subsection{References cited in the text}
References should be cited in the text by numbers in square brackets based on the order in which they appear in an alphabetical list of references at the end of the document (not the order of citation), so the first reference cited in the text might be [23]. For example, these may take the forms [32], [5,\,6,\,14], [21--55] (\emph{not} [21]--[55]). For further details on this reference style, see the Instructions for Authors on the Taylor \& Francis website.
Each bibliographic entry has a key, which is assigned by the author and is used to refer to that entry in the text. In this document, the key \verb"Ali95" in the citation form \verb"\cite{Ali95}" produces `\cite{Ali95}', and the keys \verb"Bow76" and \verb"BGP02" in the citation form \verb"\cite{Bow76,BGP02}" produce `\cite{Bow76,BGP02}'. The citation for a range of bibliographic entries (e.g. `\cite{GMW81,Hor96,Con96,Har97,FGK03,Fle80,Ste98,Ell98,Str97,Coo03,Kih01,Hol03,Hai01,Hag03,Hov03,Mul00,GHGsoft,Pow00}') will automatically be produced by \verb"\cite{GMW81,Hor96,Con96,Har97,FGK03,Fle80,Ste98,Ell98,Str97,Coo03,Kih01," \verb"Hol03,Hai01,Hag03,Hov03,Mul00,GHGsoft,Pow00}".
Optional notes may be included at the beginning and/or end of a citation by the use of square brackets, e.g. \verb"\cite[cf.][]{Tay03}" produces `\cite[cf.][]{Tay03}', and \verb"\cite[see][and references therein]{Wei95}" produces `\cite[see][and references therein]{Wei95}'.
\subsection{The list of references}
To produce the list of references, the bibliographic data about each reference item should be listed in the \texttt{thebibliography} environment in alphabetical order. References with the same author or group of authors are further sorted chronologically, beginning with the earliest. The following list shows some sample references prepared in Taylor \& Francis' Reference Style S.
\section{Introduction}
Quantum computers are expected to offer speedups over classical computers in solving various computational tasks.
The recent demonstration of quantum computational advantage by Google researchers \cite{arute2019quantum} further strengthened the case for the practical potential of quantum computation. However, despite the many promising results, the limitations of near-term Noisy Intermediate-Scale Quantum (NISQ) devices have also been highlighted \cite{preskill2018quantum}. Currently, various architectures and physical realizations are being considered for quantum processors, e.g., superconducting qubits \cite{corcoles2015demonstration, barends2014superconducting, ofek2016extending}, ion-trap-based systems \cite{debnath2016demonstration,monz2016realization}, and integrated quantum optics \cite{qiang2018large,elshaari2020hybrid}.
The different quantum hardware implementations have different strengths and weaknesses. For example, scaling up the number of qubits is easier in superconducting architectures than in ion-trap systems, while the latter allow for deeper circuits. Due to these differing features, there has been intensive discussion about how to define a suitable metric for quantifying a quantum processor's performance, one such measure being the so-called quantum volume \cite{cross2019validating}. Consequently, the challenge of compiling large problems into programs (circuits) that minimize the number of qubits and/or the circuit depth has become of central importance in the quantum computing community.
Our paper addresses this challenge by introducing a new space-efficient embedding of the graph coloring problem. Applying this method as an input for the Quantum Approximate Optimization Algorithm \cite{farhi2014quantum} yields a deeper circuit, but the number of required qubits (the circuit width) is reduced exponentially in the number of colors compared to the standard quadratic binary embedding method. Our numerical studies also indicate that the increase in depth may be less significant for larger system sizes, since fewer levels are needed in the space-efficient embedded version. Moreover, the number of iterations required to reach nearly optimal parameters is also significantly decreased compared to the standard version.
The paper is organized as follows. In the next section, graph coloring as a QUBO problem is reviewed. In Sec.~\ref{sec:space-efficient-method}, we present our novel space-efficient embedding method for the graph coloring problem. In order to test the current quantum hardware's performance on graph coloring, Sec.~\ref{sec:annealer} is devoted to experimenting on D-Wave's quantum annealer, covering a wide range of \emph{Erd\H{o}s-R\'{e}nyi} random graph instances. In Sec.~\ref{sec:QAOA}, we outline the Quantum Approximate Optimization Algorithm (QAOA). We also present numerical simulation results for both the standard and the space-efficient QAOA methods applied to graph coloring problems of different graphs. Finally, Sec.~\ref{sec:conclusion} summarizes our findings.
\section{Graph Coloring and its Reformulation as a QUBO problem}
\label{sec:gc_as_qubo}
In this section, we briefly review the basics of the coloring problem and its standard formulation as a Quadratic Unconstrained Binary Optimization (QUBO) problem, introduce the relevant notation, and relate our results to previous studies.
\subsection{Graph Coloring}
Graph coloring is a way of labeling the vertices of a graph with colors
such that no two adjacent vertices are assigned the same color. A coloring using at most $k$ different labels is called a $k$-coloring, and the smallest number of colors needed to color a graph $G$ is called its {\it chromatic number}.
Graph coloring has many applications in a wide range of industrial and scientific fields, such as social networks \cite{rossi2014coloring}, telecommunication \cite{bandh2009graph}, and compiler theory \cite{chaitin1982register}. It has therefore long been a focus of attention for researchers in computer science and operations research. Although graph coloring is NP-hard in general \cite{gary1979computers}, the hardness of a coloring problem depends strongly on the graph structure and the number of colors, and for some special cases there exist polynomial-time exact solvers. For the general case, approximate solutions can be obtained using heuristic algorithms such as Tabu search \cite{hertz1987using} and Simulated Annealing \cite{johnson1991optimization}.
\subsection{Graph Coloring as a QUBO Problem}
\label{sec:qubo-problem}
QUBO is a standard model in optimization theory that is frequently used in quantum computing as
it can serve as an input for algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) \cite{farhi2014quantum} or Quantum Annealing (QA) \cite{kadowaki1998quantum, farhi2002quantum}. A QUBO problem asks for the minimization of a pseudo-Boolean objective function $f : \left\{ 0,1 \right\} ^N \to \ensuremath{\mathbb{R}}$ of the following form:
\begin{align}
\minimize f(x) &= x^T Q x =\sum_{i,j=1}^N Q_{ij} x_ix_j\enspace , \nonumber \\
x^* &= \argmin_{x \in \{ 0,1 \}^N} f(x), \nonumber
\end{align}
\noindent
where $Q$ is a real symmetric matrix, $f$ is often called the {\it cost function} and $x^*$ is referred to as a {\it solution bit string} or a {\it global minimizer} of $f$.
Such a QUBO problem is equivalent to finding the ground-state energy and configurations of the following $N$-qubit Ising Hamiltonian \cite{lucas2014ising}:
\begin{align}
H&= \sum_{i,j=1}^N Q_{i j} (\mathbbm{1}-Z_i) (\mathbbm{1}-Z_j),
\end{align}
\noindent
where $Z_k$ denotes the operator that acts as the Pauli-$Z$ gate on the $k$th qubit and as identity on the other qubits.
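As a quick sanity check of this correspondence (a minimal sketch with our own variable names, not part of the original formulation), substituting the $Z$-eigenvalues $z_i = 1 - 2x_i$ gives $(1-z_i)(1-z_j) = 4\,x_i x_j$, so on computational basis states the Ising expression reproduces the QUBO cost up to an overall positive factor:

```python
# Sketch: check that sum_ij Q_ij (1 - z_i)(1 - z_j), with z_i = 1 - 2 x_i
# (the Z-eigenvalue on |x_i>), equals 4 * f(x) for every bit string x.
import itertools

def qubo_cost(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def ising_cost(Q, x):
    z = [1 - 2 * xi for xi in x]
    n = len(x)
    return sum(Q[i][j] * (1 - z[i]) * (1 - z[j])
               for i in range(n) for j in range(n))

Q = [[1.0, -2.0, 0.5],
     [-2.0, 3.0, 1.0],
     [0.5, 1.0, -1.0]]  # arbitrary symmetric example matrix

for x in itertools.product([0, 1], repeat=3):
    assert abs(ising_cost(Q, x) - 4 * qubo_cost(Q, x)) < 1e-12
```

Since the two costs differ only by a positive scaling, they share the same minimizers.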
The coloring problem, similarly to several other families of NP-complete problems, can be naturally formulated as a QUBO problem. The QUBO description of the $k$-coloring problem for a graph with $n$ nodes uses $N = n \cdot k$ bits. The bits $x_{v,i}$ in this formulation have double labels $(v,i)$, where $v \in \{1, \ldots, n \}$ labels the vertices and $i \in \{1, \ldots , k \}$ labels the colors. One uses a one-hot encoding, i.e., if vertex $v$ is assigned the color $j$ we set $x_{v,j}=1$ and for all $i \ne j$ we set $x_{v, i}=0$. To ensure that the solution of the QUBO satisfies such a one-hot encoding requirement, one employs a penalty term for each vertex $v$ of the form $(1 - \sum_{i=1}^k x_{v,i})^2$. Next, for all pairs $(v,w)$ of neighboring sites, one penalizes the same-color assignments by the term $\sum_{i=1}^k x_{v,i}x_{w,i}$. Thus, in total, the cost function for the $k$-coloring of a graph with $n$ nodes and adjacency matrix $A$ can be written as follows:
\begin{equation}
f(x) {=} C\sum_{v=1}^n \left(1 {-} \sum_{i=1}^k x_{v,i} \right)^2 {+} D\sum_{v,w=1}^n \sum_{i=1}^k A_{vw} x_{v,i}x_{w,i},
\end{equation}
where $C$ and $D$ can be arbitrary positive numbers. The corresponding Ising model is thus:
\begin{align}
H & = C \sum_{v=1}^n \Big(2\mathbbm{1}- \sum_{i=1}^k (\mathbbm{1}{-}Z_{v,i}) \Big)^2 \nonumber \\
&+ D \sum_{v,w=1}^n \sum_{i=1}^k A_{vw}(\mathbbm{1} -Z_{v,i})(\mathbbm{1}{-}Z_{w,i}). \label{eq:color_ising1}
\end{align}
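As an illustrative sketch (the helper name and the tiny triangle instance are our own, not from the paper), the cost function $f(x)$ above can be evaluated directly on candidate assignments; a bit string encodes a proper $k$-coloring exactly when its cost vanishes:

```python
# Sketch: evaluate the one-hot graph-coloring QUBO cost
# f(x) = C * sum_v (1 - sum_i x[v][i])^2
#      + D * sum_{v,w} sum_i A[v][w] x[v][i] x[w][i].
def coloring_cost(A, x, C=1.0, D=1.0):
    n, k = len(x), len(x[0])
    one_hot = sum((1 - sum(x[v])) ** 2 for v in range(n))
    clashes = sum(A[v][w] * x[v][i] * x[w][i]
                  for v in range(n) for w in range(n) for i in range(k))
    return C * one_hot + D * clashes

# Triangle graph: every pair of vertices is adjacent.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

proper = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # three distinct colors -> cost 0.0
clash  = [[1, 0, 0], [1, 0, 0], [0, 0, 1]]  # vertices 0 and 1 share a color -> cost 2.0
```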
\section{Space-Efficient Graph Coloring Embedding}
\label{sec:space-efficient-method}
We now introduce a method that maps the $k$-coloring problem to the ground-state problem of a Hamiltonian using only $n \lceil \log k \rceil$ qubits, instead of the $n k$ qubits required by the standard QUBO method. This embedding will be used to set up a space-efficient QAOA method for coloring in Section~\ref{sec:QAOA}.
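For concreteness, the two encodings require the following qubit counts (a trivial sketch of the counting argument; the function names are ours):

```python
import math

def qubits_standard(n, k):
    # One-hot QUBO encoding: one qubit per (vertex, color) pair.
    return n * k

def qubits_space_efficient(n, k):
    # Binary color encoding: ceil(log2 k) qubits per vertex.
    return n * math.ceil(math.log2(k))

# e.g. for n = 10 vertices and k = 8 colors: 80 qubits vs. 30 qubits
```

The saving grows with the number of colors, since $\lceil \log k \rceil$ grows only logarithmically in $k$.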
We will first describe the embedding of the $4$-coloring problem of an $n$-vertex graph into a $2n$-bit Hamiltonian optimization problem.
The four colors will be encoded by 2 bits ($00$, $01$, $10$, $11$). To each vertex $v$, we assign two bits $b_{v,1}$ and $b_{v,2}$, and the bit-string $(b_{v,1},b_{v,2})$ encodes the color that we assign to vertex $v$. To ensure that two neighboring vertices do not have the same color, we introduce the penalty term
\begin{align}
&b_{v,1}b_{w,1}b_{v,2}b_{w,2} {+} (1{-}b_{v,1})(1{-}b_{w,1})(1{-}b_{v,2})(1{-}b_{w,2})\nonumber \\ &+(1{-}b_{v,1})(1{-}b_{w,1})b_{v,2}b_{w,2}
+b_{v,1}b_{w,1}(b_{v,2}{-}1)(b_{w,2}{-}1), \nonumber
\end{align}
since this term is only zero if the colors assigned to $v$ and $w$ differ. Thus, in the case of a graph with adjacency matrix $A$, the $4$-coloring problem translates to the ground-state problem of the Hamiltonian
\begin{align}
H {=} \sum_{v, w=1}^n &A_{vw} \Big((\mathbbm{1} {-} Z_{v, 1}) (\mathbbm{1} {-} Z_{v, 2}) (\mathbbm{1} {-} Z_{w, 1}) (\mathbbm{1} {-}{ }Z_{w, 2}) \nonumber \\[-3mm]
& \; \quad + (\mathbbm{1} {+} Z_{v, 1}) (\mathbbm{1} {+} Z_{v, 2}) (\mathbbm{1} {+} Z_{w, 1}) (\mathbbm{1} {+} Z_{w, 2}) \nonumber \\[1mm]
& \; \quad + (\mathbbm{1} {+} Z_{v, 1}) (\mathbbm{1} {-} Z_{v, 2}) (\mathbbm{1} {+} Z_{w, 1}) (\mathbbm{1} {-} Z_{w, 2}) \nonumber \\
& \; \quad + (\mathbbm{1} {-} Z_{v, 1}) (\mathbbm{1} {+} Z_{v, 2}) (\mathbbm{1} {-} Z_{w, 1}) (\mathbbm{1} {+} Z_{w, 2}) \Big) \nonumber \\
\phantom{H} {=} \sum_{v, w=1}^n & 4A_{vw} \Big( Z_{v,1}Z_{v,2}Z_{w,1}Z_{w,2} {+} Z_{v,1}Z_{w,1} {+} Z_{v,2}Z_{w,2}\Big) \nonumber \\
&+ c_1 \mathbbm{1}, \label{eq:4-colors}
\end{align}
where the irrelevant constant term in the last line can be omitted.
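The bit-level penalty above can be verified exhaustively (a minimal sketch with our own naming): each of the four products is the indicator that both vertices carry one particular 2-bit color, so their sum vanishes exactly when the two colors differ.

```python
# Sketch: the four penalty products from the text, one per shared color
# (1,1), (0,0), (0,1) and (1,0) respectively.
import itertools

def pair_penalty(bv1, bv2, bw1, bw2):
    return (bv1 * bw1 * bv2 * bw2
            + (1 - bv1) * (1 - bw1) * (1 - bv2) * (1 - bw2)
            + (1 - bv1) * (1 - bw1) * bv2 * bw2
            + bv1 * bw1 * (bv2 - 1) * (bw2 - 1))

for bv1, bv2, bw1, bw2 in itertools.product([0, 1], repeat=4):
    same_color = (bv1, bv2) == (bw1, bw2)
    assert pair_penalty(bv1, bv2, bw1, bw2) == (1 if same_color else 0)
```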
Analogously, in the case of $k=2^m$ colors, we can label the possible colors by $m$ bits. To each vertex $v$ of the graph, we assign a string of $m$ bits $b_{v,j}$ ($j=1, \ldots, m$) which labels the color of $v$. Considering the usual correspondence between $b_{v,j}$ and $(\mathbbm{1}-Z_{v,j})$, for a graph with adjacency matrix $A$, the graph coloring problem can be embedded into the ground state problem
of the $(n \log k)$-qubit Hamiltonian
\begin{align}
H {=} \sum_{v, w =1}^n A_{vw}\sum_{ \underline{a} \in \{0,1 \}^m} \prod_{\ell=1}^m (\mathbbm{1} {+} (-1)^{a_\ell} Z_{v, \ell}) (\mathbbm{1} {+} (-1)^{a_\ell} Z_{w, \ell}), \label{eq:k-color}
\end{align}
since the computational basis state $\otimes_{v,j}\lvert b_{v,j}\rangle$ is a ground state (in this case a $0$-energy state) of $H$ iff the bitstrings $b_{v,j}$ provide a proper coloring of the graph.
If the number of colors $k$ is not a power of $2$, i.e., $ 2^{m-1} < k < 2^{m}$, then we again label the colors with $m$-long bitstrings $(b_1,b_2, \ldots, b_m)$, but only those bitstrings are allowed for which
$ \sum_{j=1}^m 2^{m-j}b_j < k$ is satisfied. One can consistently add new terms to the Hamiltonian
such that the non-allowed bit-strings are penalized, as we show in \cite{Adam}.
In particular, for the case of $k=3$ colors, the Hamiltonian is a sum of two terms: the first is the same Hamiltonian as for the $4$-coloring problem, Eq.~\eqref{eq:4-colors}, and the second is $\sum_{v=1}^{n} (\mathbbm{1}-Z_{v,1})(\mathbbm{1}-Z_{v,2})$, which penalizes the non-allowed $(b_{v,1}, b_{v,2}) =(1,1)$ assignments while giving zero for the allowed $(b_{v,1}, b_{v,2})$ values.
\section{Quantum Annealer Experiments}
\label{sec:annealer}
In this section, to explore current possibilities of quantum hardware, we present a study to uncover the main limiting factors of graph coloring solved with quantum annealer (QA) devices. We created a series of experiments to be performed with the currently available D-Wave QA hardware.
\subsection{Quantum Annealing for the Coloring Problem}
When graph coloring is reformulated as a QUBO problem, as discussed in Sec.~\ref{sec:gc_as_qubo}, its cost function is equivalent to an Ising model and its global optimum can be approximated by QA \cite{kadowaki1998quantum, farhi2002quantum}. We test the limits of this approach using the commercially available D-Wave 2000Q quantum annealer, which implements a programmable Ising spin network using superconducting flux qubits \cite{johnson2011quantum}.
The D-Wave 2000Q QPU has up to $2048$ available qubits arranged in a C$16$ Chimera topology (the \emph{working graph}), consisting of a $16 \times 16$ matrix of $8$-qubit bipartite graphs.
To create a bridge between logical and physical representation of qubits, a technique called minor-embedding maps logical qubits to physical ones.
Since minor-embedding is itself an NP-hard problem, heuristic algorithms are employed in practice to find an embedding of the coloring problems \cite{cai2014practical}.
While theoretically any graph with $n$ nodes can be minor-embedded into a Chimera graph with $O(n^2)$ nodes \cite{bienstock1994algorithmic}, several studies of minor-embedding algorithms suggest that the set of completely embeddable problems is also limited by the effectiveness of the minor-embedding algorithm \cite{boothby2016fast, yang2016graph, rieffel2015case}.
Another problem with minor-embedding is that it connects physical qubits otherwise not connected (due to the sparse Chimera structure), creating \emph{chains} of physical qubits that tend to grow very long for problems of high complexity.
We measured how the lengths of these physical qubit chains, created by automated minor-embedding, are affected by the number of nodes, the number of colors, and the edge probability of the ER graphs.
Fig.~\ref{fig:embed} summarizes the successful embeddings for random graphs, as the result of more than $50\,000$ embedding trials.
\begin{figure}[t!]
\hspace*{-2mm}
\centering
{\includegraphics[width=0.36\textwidth]{num_nodes_num_colors_prob_edge__max_chain_len.pdf}}
\caption{Longest chain lengths, indicated by colors, as the problem complexity increases. Results of more than $50\,000$ attempts to minor-embed problems into the Chimera structure of the D-Wave 2000Q QPU are presented; the included lengths are the longest ones of the best embeddings found.}
\label{fig:embed}
\end{figure}
\subsection{Coloring Experiments with Random Graphs}
\begin{table}[t!]
\caption{Summary of D-Wave QA minor-embeddings and colorings for the Erd\H{o}s-R\'{e}nyi random graphs.}
\begin{center}
\begin{tabular}{|l|r|}
\hline
Colors & $3 - 8$ \\
\hline
Graph size & $3 - 40$ \\
\hline
Edge probability [\%] & $2 - 32$ \\
\hline
Max. problem volume & $96.0$ \\
\hline
Connected graphs & $3837$ \\
\hline
Successful embeddings & $1677$ \\
\hline
Max. embedded problem volume & $40.96$ \\
\hline
Coloring success [\%] & $27.46$ \\
\hline
\end{tabular}
\label{tab:color_succ}
\end{center}
\end{table}
To test the quantum annealing algorithm for the $k$-coloring problem, we used Erd\H{o}s-R\'{e}nyi random graphs generated according to the $G(n,p)$ model, i.e., a graph of $n$ nodes is generated by randomly including each edge with probability $p$.
The \textit{average connectivity} $c$ of such graphs is given by $ c= pn$.
It is known that there exists a threshold of average graph connectivity above which the graph cannot be colored with $k$ colors; this threshold grows asymptotically as $2k \ln k$ \cite{luczak1991chromatic}.
For smaller numbers of colors there are different ways to estimate this threshold. For example, a heuristic local search algorithm based on a Potts spin-glass representation of graph coloring \cite{Mulet_2002} showed that one can find $3$-, $4$-, and $5$-colorings of random graphs with average connectivity of at most $4.69$, $8.9$, and $13.69$, respectively.
Our dataset included quantum anneals of graphs more connected than could be optimally colored; however, we paid attention to these limits in our evaluation.
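The instance parameters are easy to reproduce; the following stdlib-only sketch (function names are ours) samples a $G(n,p)$ graph and evaluates the connectivity and problem-volume measures used in this section:

```python
import random
from itertools import combinations

def erdos_renyi(n, p, seed=None):
    """Sample a G(n, p) graph: each of the n*(n-1)/2 possible edges is
    included independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

def average_connectivity(n, p):
    """Average connectivity c = p * n, the quantity the colorability
    thresholds above refer to."""
    return p * n

def problem_volume(n, p, k):
    """Problem volume v = p * n * k, used below to rank instance size."""
    return p * n * k

# Parameters of the largest D-Wave-solved instance discussed later:
# n = 27 nodes, p = 0.1, k = 5 colors.
edges = erdos_renyi(27, 0.1, seed=1)
c = average_connectivity(27, 0.1)   # well below the 5-coloring threshold 13.69
v = problem_volume(27, 0.1, 5)
```

The helper is only an illustration of the $G(n,p)$ model; the actual experiments used the embedding and sampling pipeline of the D-Wave stack.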
\subsection{Experimental Results and Findings}
To measure the problem size, we use the term \textit{problem volume}, which is simply the product of the problem parameters ($v = p n k$).
On small scales ($v < 7$), the D-Wave annealer was able to solve every single problem, including the evaluation graph coloring problems mentioned in Sec.~\ref{sec:QAOA}.
As the logical connectivity of the problem graph
increased, the quality of the solutions started to degrade, as indicated by gradually emerging coloring errors.
These errors appeared either as missing node colors or as pairs of adjacent nodes sharing the same color.
The sum of these errors is shown in Fig.~\ref{fig:solerr}.
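The two error types can be counted directly from a measured one-hot assignment; a minimal stdlib-only sketch (the function name and bit layout are our own, not D-Wave's output format):

```python
def coloring_errors(n, k, edges, bits):
    """Count the two QA error types: nodes whose one-hot color register does
    not encode exactly one color ('missing'), and edges whose endpoints share
    a color ('adjacent').  bits[v][i] == 1 means node v is assigned color i."""
    missing = sum(1 for v in range(n) if sum(bits[v]) != 1)
    adjacent = sum(1 for u, v in edges
                   if any(bits[u][i] == bits[v][i] == 1 for i in range(k)))
    return missing + adjacent

# Triangle with k = 3: a proper coloring has zero errors; a faulty sample has
# one adjacent-color error on edge (0, 1) plus one uncolored node.
triangle = [(0, 1), (1, 2), (0, 2)]
good = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
bad  = [[1, 0, 0], [1, 0, 0], [0, 0, 0]]
assert coloring_errors(3, 3, triangle, good) == 0
assert coloring_errors(3, 3, triangle, bad) == 2
```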
\begin{figure}[t!]
\centering
\subfloat[Correlation between problem volume and chain length.\label{subfig:clen_numq}]{
{\includegraphics[width=0.36\textwidth]{complexity_avg_chain_len__num_errors.pdf}}}
\subfloat[Coloring quality as the number of physical and logical qubits increases.\label{subfig:phy_numq}]{{\includegraphics[width=0.36\textwidth]{num_qubits_num_phy_qubits__num_errors.pdf}}}
\caption{
The quality of the D-Wave QPU solutions for graph coloring problems described in Sec.~\ref{sec:QAOA}. More than $200$ random coloring problems of different volume were selected. The number of coloring errors (missing/adjacent colors) is indicated by the color and size of the scatter points.
Sub-figure~\protect\subref{subfig:clen_numq} depicts how coloring errors arise as the maximal chain length of the minor-embedded problem increases with the problem volume, whereas \protect\subref{subfig:phy_numq} shows the physical qubits required for different problem sizes (measured in logical qubits), and how the number of coloring errors grows with them.}
\label{fig:solerr}
\end{figure}
While the D-Wave machine performed well on the smaller problems, the illustrations show how it failed to solve most of the complex problems due to the sparse connectivity of the working graph.
However, we believe that by using custom embedding procedures, or by fine-tuning the parameters of the solver, these results could be improved significantly \cite{yarkoni2019boosting}.
The largest random-graph problem ($27$ nodes) that we could solve on the D-Wave machine is depicted in Fig.~\ref{fig:big_graph}.
\begin{figure}[t!]
\hspace*{-2mm}
\centering
{\includegraphics[width=0.36\textwidth]{big_graph.pdf}}
\caption{A $27$-node random graph with $0.1$ edge probability, successfully colored with $5$ colors by the D-Wave quantum annealer (with a problem volume of $13.5$). This represents the highest-complexity random graph coloring problem that could be solved without fine-tuning the parameters of the QPU.
}
\label{fig:big_graph}
\end{figure}
It is worth mentioning that we were able to solve more than $90\%$ of the embeddable problems perfectly with a simple Tabu-search algorithm, with search run-times restricted to a couple of seconds.
For this purpose, we used the algorithms provided by D-Wave's Hybrid framework.
We also ran the problems on the D-Wave Leap's Hybrid Solver, which is a cloud-based quantum-classical solver.
The hybrid solver managed to solve all but one of the hardest problems (ranked by the embedding chain lengths) with the computing time limited to $3$ seconds (the minimum time limit for this solver).
It should be noted, however, that the QPU time per run used by the hybrid solver exceeded twice the time that was required for a pure QPU sampling run.
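For intuition about the classical side of this comparison, even a naive first-fit greedy pass (far weaker than the Tabu search actually used) colors sparse instances quickly; a stdlib-only sketch with our own function name, shown purely as an illustrative baseline:

```python
def greedy_coloring(n, k, edges):
    """First-fit greedy coloring: scan nodes in index order and assign the
    smallest color not used by an already-colored neighbour.  Returns the
    color list, or None if more than k colors would be needed.
    (Illustrative baseline only; the study used D-Wave's Tabu-search and
    hybrid solvers, not this heuristic.)"""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = [None] * n
    for v in range(n):
        used = {colors[u] for u in adj[v] if colors[u] is not None}
        c = min(set(range(k + 1)) - used)   # value k signals "k colors not enough"
        if c >= k:
            return None
        colors[v] = c
    return colors

# A 4-cycle is 2-colorable; a triangle is not.
assert greedy_coloring(4, 2, [(0, 1), (1, 2), (2, 3), (3, 0)]) == [0, 1, 0, 1]
assert greedy_coloring(3, 2, [(0, 1), (1, 2), (0, 2)]) is None
```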
\section{Simulations of the standard and space-efficient QAOA for graph coloring}
\label{sec:QAOA}
In this section, we present results for the simulation of the newly introduced space-efficient QAOA algorithm for the coloring problem and compare it to the standard approach that uses the QUBO embedding. We refer to the evaluation graphs shown in Fig.~\ref{fig:graphs} with the notation $n$-$k$, where $n$ represents the number of graph nodes and $k$ denotes the number of colors.
\subsection{Quantum Approximate Optimization Algorithm}
The Quantum Approximate Optimization Algorithm, introduced originally in Ref.~\cite{farhi2014quantum}, is considered to be one of the most promising approaches towards using near-term quantum computers for practical applications. Its experimental feasibility has recently been demonstrated in a Google experiment \cite{arute2020quantum}.
The purpose of QAOA is to find an approximate ground state, and the corresponding ground-state energy, of a classical cost
Hamiltonian ${{H}_{c}}$, which usually encodes some combinatorial problem.
The algorithm starts with applying Hadamard gates on all qubits, i.e., the state of the system is initially transformed to $\left| + \right\rangle^{\otimes N} $. Next, we apply sequentially the cost Hamiltonian ${{H}_{c}}$ and a mixer Hamiltonian ${H}_{m}$ for time-parameters $\beta_i$ and $\gamma_i$, respectively. After $p$ iterations, the resulting state is of the form:
$U({{H}_{m}},{{\gamma }_{p}})U({{H}_{c}},{{\beta }_{p}})\cdots U({{H}_{m}},{{\gamma }_{1}})U({{H}_{c}},{{\beta }_{1}})\left| + \right\rangle^{\otimes N} $. One then extracts the expectation value of $H_c$ in the given state and, using some classical optimizer, updates the parameters $\beta_i$ and $\gamma_i$ until the minimum value for the given level $p$ is reached. When building the quantum circuit for QAOA, one has to decompose $U({{H}_{c}},{\beta })$ into single-qubit rotation gates and CNOTs. In the standard case $H_c$ contains only single-body and two-body terms, $Z_i$ and $Z_i Z_j$. Our space-efficient method uses higher-order Hamiltonians. In particular, in our simulations we will have Hamiltonians of the form of Eq.~\eqref{eq:4-colors}, containing fourth-order terms, which can be decomposed as shown in Fig.~\ref{fig:decomp}.
A large number of studies have been performed to characterize the properties of QAOA algorithms, both in general and for different application types. These include both rigorous proofs of computational power and reachability properties \cite{morales2020universality, lloyd2018quantum, hastings2019classical, farhi2016quantum, farhi2020quantum} and characterizations through heuristics, numerical experiments, and extensions of the algorithm \cite{hadfield2019quantum, do2020planning, akshay2020reachability, garcia2019quantum, ho2019efficient, yao2020policy, matos2020quantifying, zhu2020adaptive, wierichs2020avoiding}.
\begin{figure}[t!]
\centering
\subfloat[Graph $A$ \newline $4-3$\label{subfig:graphAB}]{\includegraphics[width=0.13\textwidth, trim={6cm 2.5cm 6cm 2cm}, clip]{Fig_Graph_A_B.pdf}}
\subfloat[Graph $B$ \newline $5-4$\label{subfig:graphF}]{\includegraphics[width=0.13\textwidth, trim={6cm 2.5cm 6cm 2cm}, clip]{Fig_Graph_F.pdf}}
\subfloat[Graph $C$ \newline $6-4$\label{subfig:graphH}]{\includegraphics[width=0.13\textwidth, trim={6cm 2.5cm 6cm 2cm}, clip]{Fig_Graph_H.pdf}}
\caption{Graph coloring problems used for the performance tests of the space-efficient QAOA method. Graph $A$ with $4$ nodes is used for a $3$-color problem, Graph $B$ contains $5$ nodes and is colored with $4$ colors, while Graph $C$ has $6$ nodes and is to be colored with $4$ colors. The product $n\cdot k$ is twice as large for Graph $C$ as for Graph $A$. The space-efficient method reduces the problem representation size exponentially in the number of colors.}
\label{fig:graphs}
\end{figure}
\begin{figure}[t!]
\[
\Qcircuit @C=1em @R=1em {
& \ctrl{1} & \qw & \qw & \qw & \qw & \qw & \ctrl{1} & \qw \\
& \targ & \ctrl{1} &\qw & \qw & \qw & \ctrl{1} & \targ & \qw \\
& \qw & \targ & \ctrl{1} & \qw & \ctrl{1} & \targ & \qw & \qw \\
& \qw & \qw & \targ & \gate{R_z(\theta)} & \targ & \qw & \qw & \qw \\
}
\]
\caption{Decomposition of $\textrm{e}^{i \theta( Z \otimes Z \otimes
Z \otimes Z)/2}$. The gate-sequence decomposition of exponentiated Hamiltonian terms such as those in Eq.~\eqref{eq:4-colors} can be achieved by the symmetric circuit of CNOTs and a central $Z$-rotation gate. } \label{fig:decomp}
\end{figure}
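The decomposition of Fig.~\ref{fig:decomp} can be verified numerically with a few lines of NumPy (helper names are ours). Note that with the common convention $R_z(\theta)=\mathrm{diag}(e^{-i\theta/2},e^{i\theta/2})$, the CNOT-ladder circuit realizes $e^{-i\theta ZZZZ/2}$, so the stated exponential is obtained by flipping the sign of $\theta$:

```python
import numpy as np

def cnot(n, c, t):
    """CNOT on n qubits (qubit 0 = leftmost tensor factor), control c, target t."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for b in range(dim):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[sum(bit << (n - 1 - q) for q, bit in enumerate(bits)), b] = 1.0
    return U

theta = 0.7
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
Rz_on_q3 = np.kron(np.eye(8), Rz)                       # rotation on the last qubit

# CNOT ladder, central rotation, mirrored ladder (as in the figure).
# Matrix products act right-to-left, so CNOT(0,1) is applied first in time.
ladder = cnot(4, 2, 3) @ cnot(4, 1, 2) @ cnot(4, 0, 1)
circuit = ladder.conj().T @ Rz_on_q3 @ ladder           # mirrored ladder = inverse

Z = np.diag([1.0, -1.0]).astype(complex)
ZZZZ = np.kron(np.kron(Z, Z), np.kron(Z, Z))
target = np.diag(np.exp(-1j * theta / 2 * np.diag(ZZZZ)))  # e^{-i theta ZZZZ / 2}
assert np.allclose(circuit, target)
```

The check works because the ladder accumulates the parity $b_1\oplus b_2\oplus b_3\oplus b_4$ on the last qubit, where the single $R_z$ applies the parity-dependent phase.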
Finally, let us mention that an important aspect of running a QAOA-based method is the choice of the classical optimizer. The Nelder-Mead algorithm \cite{nelder_simplex_1965} and constrained optimization by linear approximation (COBYLA) \cite{powell_direct_1998} are commonly used algorithms since they require a low number of function evaluations. Other approaches, such as sequential least squares programming (SLSQP) \cite{kraft1988software}, can be efficient if the search space of the problem is larger. These methods can be found in the SciPy Python package, and Qiskit Aqua has already integrated them into its sub-modules, enabling us to automatically optimize variational quantum circuits. We chose to use the Nesterov-momentum \cite{ruder_overview_2016} variant of gradient descent. This optimizer shows good performance in both shallow and deep circuits when the momentum and learning-rate values are set carefully. We used the PennyLane quantum machine learning framework \cite{bergholm2018pennylane} combined with TensorFlow.
We found that in deep QAOA circuits the backpropagation method provided by TensorFlow showed a better convergence rate than the parameter-shift rule, while in shallow circuits both methods performed equally well.
\subsection{Comparison of Standard and Space-Efficient QAOA}
\label{subsec:caseA}
Let us now compare the results for a coloring problem first solved by \emph{i) a standard QAOA approach} presented in Sec.~\ref{sec:qubo-problem} and then by \emph{ii) a space-efficient QAOA} with the method presented in Sec.~\ref{sec:space-efficient-method}. The considered problem is the $3$-coloring of Graph A, shown in Fig.~\ref{subfig:graphAB}.
Looking at the convergence characteristics, the first problem was solved with a level-$10$ QAOA algorithm, using a circuit of $12$ qubits, resulting in a circuit depth of $170$. The convergence shown in Fig.~\ref{fig:AB} is towards the zero energy level, as we shifted the minimum energy eigenvalue to this value. For the coloring problem, when the gap reaches a size less than $0.75$, the corresponding solution gives a correct coloring of the graph with probability $0.25$. For the original algorithm, this energy level is reached at iteration no.~$240$, while the enhanced space-efficient QAOA reaches the same level already at iteration no.~$44$, showing a substantial improvement in performance.
Another metric to characterize the performance gain of the new algorithm is the overall CPU time usage. In this metric, the enhanced method increases execution speed by a factor of $75$. Moreover, the new method decreased the memory usage by a factor of $3.47$.
From the probability distribution over the best circuit's output, we can calculate the probability of measuring a state corresponding to a good coloring solution.
We find the probabilities to be $0.557$ for the original circuit, and $0.883$ for our space-efficient algorithm.
The depth of a single QAOA level for the enhanced circuit is $37$, whereas for the original circuit this number is $17$. However, the enhanced algorithm needs only a level-$6$ QAOA, using a circuit of $8$ qubits. Hence, although the total depth is increased, this increase is modest, while the number of required qubits and of needed iterations is far smaller.
\begin{figure}[t!]
\hspace*{-2mm}
\centering
{\includegraphics[width=0.36\textwidth]{AB.pdf}}
\caption{The convergence of the standard (blue square) and the space-efficient (red cross) QAOA simulations for the coloring of Graph $A$. The standard method requires $12$ qubits and level-$10$ circuits to converge, while for the space-efficient method it is sufficient to use $8$ qubits and a level-$6$ circuit. The performance gain from the space-efficient modification is measurable in both the lower number of iterations for reaching near optimum and the overall reduction in CPU time used.
}
\label{fig:AB}
\end{figure}
\subsection{Application of Stochastic Gradient Descent}
We also studied the convergence properties of both the gradient descent with exact Hamiltonian expectation values and the {\it stochastic gradient descent} (SGD) \cite{sweke2019stochastic, elsafty2020stochastic} obtained by estimating the expectation value from only a finite number $n$ of shots.
For this investigation, we considered the coloring problem of Graph B represented in Fig.~\ref{subfig:graphF} with $k=4$ colors.
During the simulations, we used a level-$6$ QAOA, which has a circuit depth of $354$. We found that after the optimization, the probability of measuring a bitstring representing a good coloring of Graph B is $0.845$.
As shown in Fig.~\ref{fig:F_layer_6}, the true gradient descent optimization approaches the global minimum value to within an energy distance of $0.75$ already at iteration no.~$13$, while the SGD with $50$ and $150$ shots does so at iterations $10$ and $17$, respectively. The global minimum is approached to within a distance of $0.5$ after $32$ iterations when using the true gradient descent method, and at iteration steps $11$ and $43$ when using SGD with $50$ and $150$ shots, respectively.
That is, the SGD performed on par with the gradient descent using exact expectation values, while requiring only $150$ measurements per iteration.
\begin{figure}[t!]
\hspace*{-2mm}
\centering
{\includegraphics[width=0.36\textwidth]{layer_6.pdf}}
\caption{Space-efficient QAOA simulations for the coloring problem of Graph $B$. The figure shows the comparison between the exact expectation value (black cross) and stochastic approximations using $50$ measurements (blue square) and $150$ measurements (red dot). It is clearly visible that the $150$-shot approximation reaches the same level of coloring efficiency as the exact simulation. This method can further decrease the load on a quantum device in terms of the number of runs and measurements required.}
\label{fig:F_layer_6}
\end{figure}
\subsection{Evaluation of the $6$-node, $4$-color problem on Graph $C$}
\label{subsec:caseH}
Here we present our findings on the evaluation problem of Graph C, which is the coloring task described in Fig.~\ref{subfig:graphH}. In this more complex example, we looked for a successful coloring (with $k=4$ colors) of the $6$-node graph. To simulate this problem using the space-efficient embedding technique, it is enough to consider a circuit containing only $12$ qubits. Fast convergence can be achieved using only a level-$9$ QAOA algorithm, which is, once again, lower than the requirements for the simplest problem, Graph $A$, solved with the standard embedding. While the convergence pattern is different from the ones seen in Fig.~\ref{fig:AB} and Fig.~\ref{fig:F_layer_6}, we observe a significant improvement in convergence characteristics. The QAOA output energy reaches the theoretical minimum to within a gap of $0.1$ in only $100$ iterations. Our simulations show that the observed probability of finding the correct solution is as high as $97.4\%$. These results represent a performance improvement of $97.3\%$ compared to the Graph A simulation in Sec.~\ref{subsec:caseA} in terms of CPU time required.
\begin{figure}[t!]
\hspace*{-2mm}
\centering
{\includegraphics[width=0.36\textwidth]{H.pdf}}
\caption{Numerical simulation of the space-efficient QAOA on Graph $C$. A fast convergence can be seen within as low as $100$ iterations by only using a level-$9$ circuit. Although the problem space is of higher complexity than the evaluation problem over Graph $A$, a lower level algorithm performs better with the space-efficient embedding technique.}
\label{fig:H}
\end{figure}
\section{Conclusion and Outlook}
\label{sec:conclusion}
We introduced a space-efficient embedding for quantum circuits solving the graph coloring problem. Through a series of investigations, we presented the performance gain of this method. We showed the limitations of the existing QA hardware solutions and then, with various numerical simulations, compared the standard and enhanced QAOA circuits. The required circuit width to embed the coloring problem is exponentially reduced in the number of colors; and although the depth of a single QAOA layer is increased, the number of required layers and of optimization iteration steps to reach an optimal solution is also decreased.
The presented method and comparative study can be extended to a benchmarking framework for such performance gain analyses.
Furthermore, analogous space-efficient embedding techniques could be used to improve upon other graph-related quantum optimization methods. We leave this for future work.
\section*{Acknowledgment}
Z.T. and Z.Z. would like to thank the support of the Hungarian Quantum Technology National Excellence Program (Project No. 2017-1.2.1-NKP-2017-00001). Z.Z. acknowledges also support from the Hungarian National Research, Development and Innovation Office (NKFIH) through Grants No. K124351, K124152, K124176 KH129601 and the J\'anos Bolyai Scholarship. A.G. was supported by the Polish National Science Center under the grant agreements 2019/32/T/ST6/00158 and 2019/33/B/ST6/02011.
\newpage
\bibliographystyle{IEEEtran}
The weak equivalence principle (WEP) is a fundamental postulate of general relativity as well as of
many other metric theories of gravity. One statement of the WEP is that the trajectory of any freely
falling, uncharged test body does not depend on its internal structure and composition \cite{2006LRR.....9....3W,2014LRR....17....4W}.
It implies that different species of messenger particles (e.g., photons, neutrinos, or
gravitational waves), or the same species of particles but with different internal structures (e.g,
energies or polarization states), if radiated simultaneously from the same astrophysical source and
passing through the same gravitational field, should arrive at our Earth at the same time.
The WEP test can therefore be performed by comparing the arrival-time differences between correlated
particles from the same astrophysical source (e.g., \cite{1988PhRvL..60..176K,1988PhRvL..60..173L,2015ApJ...810..121G,2015PhRvL.115z1101W,
2016ApJ...818L...2W,2016JCAP...08..031W,2017JCAP...11..035W,2019JHEAp..22....1W,
2016PhLB..756..265K,2016ApJ...827...75L,2016ApJ...821L...2N,2016MNRAS.460.2282S,
2016ApJ...820L..31T,2016PhRvL.116o1101W,2016PhRvD..94b4061W,2017PhRvD..95j3004W,
2016PhRvD..94j1501Y,2017ApJ...848L..13A,2017PhLB..770....8L,2017ApJ...837..134Z,
2018EPJC...78...86D,2018ApJ...861...66L,2018PhRvD..97h3013S,2018ApJ...860..173Y,
2019EPJC...79..185B,2019PhRvD.100j3002L,2019ApJ...882L..13X}).
Additionally, if the WEP is invalid then arrival times of photons with right- and left-handed circular
polarizations should differ slightly, leading to a frequency-dependent rotation of the polarization
plane of a linearly polarized light. Thus, polarimetric observations of astrophysical sources can
also be used to test the WEP \cite{2017MNRAS.469L..36Y,2019PhRvD..99j3012W,2020MNRAS.493.1782Y}. Currently, the best
upper limit on a deviation from the WEP has been obtained from the gamma-ray polarization measurement
of gamma-ray burst (GRB) 061122 \cite{2019PhRvD..99j3012W}. The WEP passes this extraordinarily
stringent test with an accuracy of $\mathcal{O}(10^{-33})$.
Lorentz invariance is a foundational symmetry of Einstein's theory of relativity. However, many
quantum gravity theories seeking to unify quantum mechanics and general relativity predict that
Lorentz invariance may be broken at the Planck energy scale $E_{\rm Pl}\simeq1.22\times10^{19}$ GeV
\cite{1989PhRvD..39..683K,1991NuPhB.359..545K,1995PhRvD..51.3923K,1998Natur.393..763A,
2005LRR.....8....5M,2005hep.ph....6054B,2013LRR....16....5A,2014RPPh...77f2901T}.
As a consequence of Lorentz invariance violation (LIV), the polarization vector of linearly polarized
photons would make an energy-dependent rotation, also known as vacuum birefringence.
Lorentz invariance can therefore be tested with astrophysical polarization measurements
(e.g.,
\cite{1999PhRvD..59l4021G,
2001PhRvD..64h3007G,
2001PhRvL..87y1304K,
2006PhRvL..97n0401K,
2007PhRvL..99a1601K,
2008ApJ...689L...1K,
2013PhRvL.110t1601K,
2003Natur.426Q.139M,
2004PhRvL..93b1101J,
2007MNRAS.376.1857F,
2009JCAP...08..021G,
2011PhRvD..83l1301L,
2011APh....35...95S,
2012PhRvL.109x1104T,
2013MNRAS.431.3550G,
2014MNRAS.444.2776G,
2016MNRAS.463..375L,
2017PhRvD..95h3013K,
2019PhRvD..99c5045F,
2019MNRAS.485.2401W}).
The presence of linear polarization in the prompt gamma-ray emission of GRBs sets the strictest upper
limit to date on the birefringent parameter, namely $\eta<\mathcal{O}(10^{-16})$ \cite{2012PhRvL.109x1104T,2013MNRAS.431.3550G,2014MNRAS.444.2776G,2016MNRAS.463..375L,2019MNRAS.485.2401W}
(see also \cite{2011RvMP...83...11K,2013CQGra..30m3001L} and summary constraints for LIV therein).
In general, it is hard to know the intrinsic polarization angles for photons with different energies
from a given source. If one possesses this information, a rotation angle of the polarization plane,
which is induced by astrophysical effects (e.g., the WEP violation or LIV), could be
directly extracted by measuring the difference between the known intrinsic polarization angle and
the observed polarization angle for photons at a certain energy. Even in the absence of such knowledge,
however, birefringent effects can still be constrained for sources at arbitrary redshifts. The reason
is as follows \cite{2012PhRvL.109x1104T}. It is believed that if the rotation angle (denoted by
$\Delta\phi$) differs by more than $\pi/2$ over an energy range $[E_{1},\;E_{2}]$, then the net
polarization of the signal would be substantially depleted and could not be as high as the observed level.
That is, the detection of high polarization means that the relative rotation angle
$|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ should not be too large. Therefore, some upper limits on
violations of the WEP and Lorentz invariance can be obtained under the assumption that
$|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ is smaller than $\pi/2$. However, through the detailed analyses
for the evolution of GRB polarization arising from violations of the WEP and Lorentz invariance,
Lin et al. \cite{2016MNRAS.463..375L} and Wei \& Wu \cite{2019PhRvD..99j3012W} proved that more than 60\% of the initial
polarization can be conserved even if $|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ is as large as $\pi/2$.
This conflicts with the intuition that $|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ could not be larger
than $\pi/2$ when high polarization is detected. Hence, it is inappropriate to simply use $\pi/2$ as
the upper limit of $|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ to constrain deviations from the WEP and
Lorentz invariance. Furthermore, even though some upper limits of the violations were found to be
extremely small \cite{2012PhRvL.109x1104T,2013MNRAS.431.3550G,2014MNRAS.444.2776G,2016MNRAS.463..375L,
2017MNRAS.469L..36Y,2019MNRAS.485.2401W,2019PhRvD..99j3012W}, these limits lack
significant statistical robustness.
In this work, we propose that an intrinsic polarization angle can be extracted and a more robust
bound on a deviation from the WEP or from Lorentz invariance can be derived as well, by directly fitting
the multiwavelength polarization observations of astrophysical sources. More importantly,
the analysis of the multiwavelength polarimetric data also allows us to simultaneously test
the WEP and Lorentz invariance, when we consider that the rotation angle is caused by violations of
both the WEP and Lorentz invariance.
\section{Tests of the Weak Equivalence Principle}
\label{sec:WEP}
Adopting the parameterized post-Newtonian (PPN) formalism, the time interval required for test particles to
travel across a given distance would be longer in the presence of a gravitational potential $U(r)$ by
\begin{equation}
t_{\rm gra}=-\frac{1+\gamma}{c^3}\int_{r_e}^{r_o}~U(r)dr\;,
\end{equation}
where $\gamma$ is one of the PPN parameters ($\gamma$ reflects the level of space curved by unit rest
mass) and the integration is along the propagation path from the emitting source $r_e$ to the observer $r_o$.
This effect is known as Shapiro time delay \cite{1964PhRvL..13..789S}. It is important to note that all metric theories of
gravity satisfying the WEP predict that any two test particles traveling in the same gravitational
field must follow the same trajectory and undergo the identical Shapiro delay. In other words, as long
as the WEP is valid, all metric theories predict that the measured value of $\gamma$ should be the
same for all test particles \cite{2006LRR.....9....3W,2014LRR....17....4W}. The accuracy of the WEP
can therefore be characterized by placing constraints on the differences of the $\gamma$ values for
different particles.
Linearly polarized light is a superposition of two monochromatic waves with opposite circular polarizations
(labeled with $r$ and $l$). If the WEP is broken, different $\gamma$ values might be measured with right-
and left-handed circularly polarized photons, leading to the slight arrival-time difference of these two
circular components. The arrival-time lag is then given by
\begin{equation}
\Delta t_{\rm gra}=\left|\frac{\Delta\gamma}{c^3}\int_{r_e}^{r_o}~U(r)dr\right|\;,
\label{eq:delta-tgra}
\end{equation}
where $\Delta\gamma=\gamma_{r}-\gamma_{l}$ corresponds to the difference of the $\gamma$ values for different circular polarization states.
To compute $\Delta t_{\rm gra}$ with Equation~(\ref{eq:delta-tgra}), we have to figure out the gravitational
potential $U(r)$. For a cosmic source, $U(r)$ should have contributions from the gravitational potentials of
the Milky Way $U_{\rm MW}(r)$, the intergalactic space $U_{\rm IG}(r)$, and the source host galaxy $U_{\rm host}(r)$.
Since the potential models of $U_{\rm IG}(r)$ and $U_{\rm host}(r)$ are poorly understood, for the purposes
of obtaining conservative limits, we here just consider the Milky Way gravitational potential. Adopting a
Keplerian potential\footnote{Although the potential model of the Milky Way is still not well known,
ref.~\cite{1988PhRvL..60..176K} examined two popular potential models (i.e., the Keplerian potential and the isothermal
potential) and suggested that the adoption of a different model for $U_{\rm MW}(r)$ has only a minimal influence
on the WEP tests.} $U(r)=-GM/r$ for the Milky Way, we thus have \cite{1988PhRvL..60..173L,2016PhRvD..94b4061W}
\begin{eqnarray}
\Delta t_{\rm gra}= \Delta\gamma \frac{GM_{G}}{c^{3}} \times \qquad\qquad\qquad\qquad\qquad\qquad\qquad\\ \nonumber
\ln \left\{ \frac{ \left[d+\left(d^{2}-b^{2}\right)^{1/2}\right] \left[r_{G}+s_{\rm n}\left(r_{G}^{2}-b^{2}\right)^{1/2}\right] }{b^{2}} \right\}\;,
\label{eq:gammadiff}
\end{eqnarray}
where $M_{G}\simeq6\times10^{11}M_{\odot}$ is the mass of the Milky Way \cite{2012ApJ...761...98K}, $d$ is
approximated as the distance from the source to our Earth, $b$ represents the impact parameter of the light paths
relative to our Galactic center, and $r_{G}=8.3$ kpc denotes the distance to our Galactic center. Here we use
$s_{\rm n}=+1$ or $s_{\rm n}=-1$ to correspond to the cases where the source is located in the direction of the Galactic center
or of the anti-Galactic center, respectively. The impact parameter $b$ can be estimated as
\begin{equation}
b=r_{G}\sqrt{1-(\sin \delta_{s} \sin \delta_{G}+\cos \delta_{s} \cos \delta_{G} \cos(\beta_{s}-\beta_{G}))^{2}}\;,
\label{eq:b}
\end{equation}
where $\beta_{s}$ and $\delta_{s}$ are the right ascension and declination of the source in the equatorial coordinate
system and ($\beta_{G}=17^{\rm h}45^{\rm m}40.04^{\rm s}$, $\delta_{G}=-29^{\circ}00^{'}28.1^{''}$) represent
the coordinates of the Galactic center \cite{2009ApJ...692.1075G}.
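Equation~(\ref{eq:b}) is straightforward to evaluate numerically; the following stdlib-only sketch (function name ours) uses the Galactic-center coordinates quoted above, with a small numerical guard against rounding pushing $1-\cos^2$ slightly negative:

```python
import math

def impact_parameter_kpc(ra_hours, dec_deg, r_g_kpc=8.3):
    """Impact parameter b of a source's line of sight with respect to the
    Galactic center, Eq. (4), from equatorial coordinates
    (R.A. in hours, Dec. in degrees)."""
    # Galactic-center coordinates: 17h45m40.04s, -29d00'28.1"
    beta_g = math.radians((17 + 45 / 60 + 40.04 / 3600) * 15.0)
    delta_g = -math.radians(29 + 28.1 / 3600)
    beta_s = math.radians(ra_hours * 15.0)
    delta_s = math.radians(dec_deg)
    cos_ang = (math.sin(delta_s) * math.sin(delta_g)
               + math.cos(delta_s) * math.cos(delta_g)
               * math.cos(beta_s - beta_g))
    # max(0, .) guards against tiny negative values from floating-point error
    return r_g_kpc * math.sqrt(max(0.0, 1.0 - cos_ang ** 2))

# Sanity checks: b vanishes toward the Galactic center and never exceeds r_G;
# the GRB 020813 coordinates used below serve as an example input.
b_gc = impact_parameter_kpc(17 + 45 / 60 + 40.04 / 3600, -(29 + 28.1 / 3600))
b_grb = impact_parameter_kpc(19 + 46 / 60 + 38 / 3600, -(19 + 35 / 60 + 16 / 3600))
```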
As mentioned above, a possible violation of the WEP can lead to the slight arrival-time difference of photons with
right- and left-handed circular polarizations. In this case, the polarization vector of a linearly polarized light
will rotate during the propagation. The rotation angle induced by the WEP violation is expressed as \cite{2017MNRAS.469L..36Y,2019PhRvD..99j3012W}
\begin{equation}
\Delta\phi_{\rm WEP}\left(E\right)=\Delta t_{\rm gra}\frac{2\pi c}{\lambda}=\Delta t_{\rm gra}\frac{E}{\hbar}\;,
\label{eq:theta-WEP}
\end{equation}
where $E$ is the observed photon energy.
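Combining Eqs.~(3) and (5) gives the rotation angle directly as a function of $\Delta\gamma$; a stdlib-only sketch (SI constants, names ours; the distance $d$ is taken as a plain input, ignoring cosmological subtleties):

```python
import math

G_SI  = 6.674e-11      # m^3 kg^-1 s^-2
C_SI  = 2.998e8        # m / s
HBAR  = 1.055e-34      # J s
M_SUN = 1.989e30       # kg
KPC_M = 3.086e19       # m
EV_J  = 1.602e-19      # J

def delta_phi_wep(dgamma, d_kpc, b_kpc, energy_ev, s_n=1,
                  m_gal_msun=6e11, r_g_kpc=8.3):
    """Rotation angle of the polarization plane, Eq. (5), using the
    Milky-Way Keplerian Shapiro-delay difference of Eq. (3)."""
    d, b, r_g = d_kpc * KPC_M, b_kpc * KPC_M, r_g_kpc * KPC_M
    log_term = math.log((d + math.sqrt(d * d - b * b))
                        * (r_g + s_n * math.sqrt(r_g * r_g - b * b)) / (b * b))
    dt_gra = abs(dgamma) * G_SI * m_gal_msun * M_SUN / C_SI**3 * log_term
    return dt_gra * energy_ev * EV_J / HBAR   # radians

# The rotation grows linearly with photon energy and vanishes with dgamma;
# 3.1 eV and 1.5 eV correspond roughly to 4000 A and 8300 A photons.
phi_blue = delta_phi_wep(1e-33, 1.0e6, 5.0, 3.1)
phi_red  = delta_phi_wep(1e-33, 1.0e6, 5.0, 1.5)
```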
If the birefringent effect arising from the WEP violation is considered here,
the observed linear polarization angle ($\phi_{\rm obs}$) for photons emitted with energy $E$ from an astrophysical
source should consist of two terms
\begin{equation}
\phi_{\rm obs}=\phi_{0}+\Delta\phi_{\rm WEP}\left(E\right)\;,
\end{equation}
where $\phi_{0}$ represents the intrinsic polarization angle. As $\phi_{0}$ is unknown, the exact value of
$\Delta\phi_{\rm WEP}$ is not available. Yet, an upper limit on the $\gamma$ discrepancy ($\Delta\gamma$)
can be obtained by setting the upper limit of the relative rotation angle $|\Delta\phi_{\rm WEP}(E_{2})-\Delta\phi_{\rm WEP}(E_{1})|$
to be $\pi/2$, which is based on the argument that the observed polarization degree will be significantly
suppressed if $|\Delta\phi_{\rm WEP}(E_{2})-\Delta\phi_{\rm WEP}(E_{1})|>\pi/2$ over an observed energy range $[E_{1},\;E_{2}]$,
regardless of the intrinsic polarization fraction at the corresponding rest-frame energy range \cite{2017MNRAS.469L..36Y}.
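The suppression argument can be made concrete with a toy calculation: if the polarization angle rotates linearly by $\Delta\phi$ across a flat band, the band-averaged polarization is reduced by the factor $|\!\int_0^1 e^{2i\Delta\phi t}\,dt| = |\sin\Delta\phi|/\Delta\phi$ relative to the intrinsic value. A minimal numerical sketch (the flat-band assumption is ours, for illustration only):

```python
import math

# Net polarization suppression factor when the angle rotates linearly by
# delta_phi across a flat band: |<exp(2i*phi)>| averaged over the band.
def suppression(delta_phi, n=2000):
    re = sum(math.cos(2.0 * delta_phi * k / n) for k in range(n)) / n
    im = sum(math.sin(2.0 * delta_phi * k / n) for k in range(n)) / n
    return math.hypot(re, im)

print(round(suppression(0.0), 3))           # no rotation: no suppression
print(round(suppression(math.pi / 2), 3))   # pi/2 rotation: factor 2/pi
print(round(suppression(math.pi), 3))       # pi rotation: full cancellation
```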
Instead of requiring the more complicated and indirect argument, here we simply assume that all photons
in the observed bandpass are emitted with the same (unknown) intrinsic polarization angle. In this case,
we expect to observe the birefringent effect induced by the WEP violation as an energy-dependent linear
polarization vector. Such an observation could confirm the existence of a birefringent effect and give
a robust limit on the WEP violation. We look for a similar energy-dependent trend in multiwavelength
polarization observations of present astrophysical sources. One can see from Equation~(\ref{eq:theta-WEP})
that much more stringent constraints on $\Delta\gamma$ can be obtained from the higher energy band of
polarization observations. Unfortunately, there are no existing multiwavelength polarization observations
(i.e., with more than three wavelength bins) in the gamma-ray or X-ray energy bands. We explore here the implications
and limits that can be set by multiwavelength linear polarization observations from the optical afterglows
of GRB 020813 \cite{2003ApJ...584L..47B} and GRB 021004 \cite{2003A&A...410..823L}.
GRB 020813 was detected by the \emph{High Energy Transient Explorer 2} (\emph{HETE2}) on 13 August 2002, with
coordinates R.A.=$19^{\rm h}46^{\rm m}38^{\rm s}$ and Dec.=$-19^{\circ}35^{'}16^{''}$ \cite{2002GCN..1471....1V}.
Its redshift has been measured to be $z=1.255$ \cite{2003ApJ...584L..47B}. The multiwavelength
[(4000, 5000, 6300, 7300, 8300)$\pm$500 {\AA}] polarization measurements of the optical afterglow of GRB 020813
were carried out during 4.7--7.9 hr after the burst. The observed linear polarization is in the range of
1.8\%--2.4\% (see Table 2 of ref.~\cite{2003ApJ...584L..47B}). At $t\sim7.36$ hr after the burst, the observed
polarization angles in five wavelength bins are $153^{\circ}\pm1^{\circ}$, $149^{\circ}\pm1^{\circ}$,
$156^{\circ}\pm1^{\circ}$, $153^{\circ}\pm1^{\circ}$, and $149^{\circ}\pm1^{\circ}$, respectively.
At $t\sim6.27$ hr, the corresponding polarization angles are $160^{\circ}\pm1^{\circ}$, $155^{\circ}\pm1^{\circ}$,
$151^{\circ}\pm1^{\circ}$, $150^{\circ}\pm1^{\circ}$, and $151^{\circ}\pm2^{\circ}$, respectively.
At $t\sim5.16$ hr, the corresponding polarization angles are $161^{\circ}\pm1^{\circ}$, $159^{\circ}\pm1^{\circ}$,
$158^{\circ}\pm1^{\circ}$, $155^{\circ}\pm1^{\circ}$, and $155^{\circ}\pm1^{\circ}$, respectively.
We allow a temporal variation of the polarization and consider only the relative polarization angle.
In each wavelength bin, the time-averaged polarization angle during three observational periods
is calculated through $\overline{\phi}_{\rm obs}=\sum_{i}\phi_{{\rm obs},i}/3$.
The scatters in the shift between $\phi_{{\rm obs},i}$ and $\overline{\phi}_{\rm obs}$, i.e.,
$\sigma_{\phi_{i}}=|\phi_{{\rm obs},i}-\overline{\phi}_{\rm obs}|$, provide an estimate of the error in $\overline{\phi}_{\rm obs}$,
i.e., $\sigma_{\overline{\phi}}=(\sum_{i}\sigma^{2}_{\phi_{i}})^{1/2}/3$. The time-averaged polarization angle
as a function of energy is displayed in the upper-left panel of Figure~\ref{fig:f1}. We fit the $\gamma$ discrepancy
($\Delta\gamma$), along with the intrinsic polarization angle $\phi_{0}$, by maximizing the likelihood
function:
\begin{equation}
\mathcal{L} = \prod_{i}\frac{1}{\sqrt{2\pi}\,\sigma_{{\rm tot}, i}}\times
\exp\left[-\frac{\left(\phi_{{\rm obs},i}-\phi_{\rm th}\left(E_{i}\right)\right)^{2}}{2\sigma^{2}_{{\rm tot}, i}}\right]\;,
\label{eq:likelihood}
\end{equation}
where $\phi_{\rm th}\left(E_{i}\right)=\phi_{0}+\Delta\phi_{\rm WEP}\left(\Delta\gamma,\;E_{i}\right)$ and
the variance
\begin{equation}
\sigma_{{\rm tot},i}^{2}=\sigma_{\phi_{{\rm obs},i}}^{2}+\left(\frac{\Delta\phi_{\rm WEP}}{E_{i}}\sigma_{E_{i}}\right)^{2}
\end{equation}
is given in terms of the measurement error $\sigma_{\phi_{{\rm obs},i}}$ in $\phi_{{\rm obs},i}$ and the
propagated error of $\sigma_{E_{i}}$. The 1--3$\sigma$ confidence levels in the $\Delta\gamma$--$\phi_{0}$ plane
are presented in the upper-right panel of Figure~\ref{fig:f1}. The best-fitting values are $\Delta\gamma=0.81\times10^{-24}$
and $\phi_{0}=2.57$ rad. Our constraints show that $\Delta\gamma$ is consistent with $0$ at the $2.5\sigma$ confidence level,
implying that there is no convincing evidence for the violation of the WEP. At the $3\sigma$ confidence level, the limits on
$\Delta\gamma$ are $-1.6\times10^{-25}<\Delta\gamma<2.7\times10^{-24}$.
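The fitting procedure just described can be sketched numerically. The Python snippet below reproduces the time-averaging from the angles quoted above and then maximizes the likelihood of Equation~(\ref{eq:likelihood}) for the restricted linear model $\phi=\phi_{0}+aE$ by a crude grid search. Two simplifications are ours: the slope $a$ is proportional to $\Delta\gamma$ only through the line-of-sight Shapiro-delay factor, which is not computed here, and the energy-error term in $\sigma_{{\rm tot},i}$ is omitted ($hc=12398.4$~eV\,\AA{} is assumed for the wavelength-to-energy conversion).

```python
import math

# Angles (deg) of GRB 020813 in five wavelength bins at t ~ 7.36, 6.27
# and 5.16 hr after the burst, as quoted in the text.
epochs = [
    [153.0, 149.0, 156.0, 153.0, 149.0],
    [160.0, 155.0, 151.0, 150.0, 151.0],
    [161.0, 159.0, 158.0, 155.0, 155.0],
]
wavelengths = [4000.0, 5000.0, 6300.0, 7300.0, 8300.0]   # Angstrom
energies = [12398.4 / lam for lam in wavelengths]        # E = hc/lambda [eV]

# Time-averaged angle and scatter-based error per bin (converted to rad)
phi_obs, sigma = [], []
for k in range(5):
    phis = [epochs[i][k] for i in range(3)]
    mean = sum(phis) / 3.0
    err = sum((p - mean) ** 2 for p in phis) ** 0.5 / 3.0
    phi_obs.append(math.radians(mean))
    sigma.append(math.radians(err))

def neg_log_like(a, phi0):
    """Gaussian -ln L for phi = phi0 + a*E; converting the slope a into
    Delta_gamma would additionally need the Shapiro-delay factor."""
    return sum(0.5 * ((p - (phi0 + a * E)) / s) ** 2
               for p, E, s in zip(phi_obs, energies, sigma))

a_grid = [x * 0.002 for x in range(-60, 61)]          # slope [rad/eV]
phi0_grid = [2.0 + y * 0.005 for y in range(301)]     # intrinsic angle [rad]
nll, a_best, phi0_best = min(
    ((neg_log_like(a, p0), a, p0) for a in a_grid for p0 in phi0_grid),
    key=lambda t: t[0])
print(round(a_best, 3), round(phi0_best, 3))
```

The recovered slope is positive (angles increase with energy) and the intercept lands near the best-fit $\phi_{0}\simeq2.57$~rad quoted above.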
\begin{figure}
\vskip-0.2in
\centerline{\includegraphics[angle=0,width=1.1\hsize]{f1.eps}}
\vskip-0.3in
\caption{Fit to multiwavelength polarimetric observations of the optical afterglows of GRB 020813
(upper panels; the polarization angles are time-averaged) and GRB 021004 (lower panels).
Left panels: observed polarization angle $\phi_{\rm obs}$ as a function of the energy $E$,
and the best-fit theoretical curves for the case of the WEP violation.
Right panels: 1--3$\sigma$ confidence levels in the $\Delta\gamma$--$\phi_{0}$ plane.
The plus symbol represents the best fit, corresponding to the reduced chi-square value $\chi^{2}_{\rm dof}$.}
\label{fig:f1}
\end{figure}
\begin{figure}
\vskip-0.2in
\centerline{\includegraphics[angle=0,width=1.0\hsize]{f2.eps}}
\vskip-0.1in
\caption{Individual and joint constraints on the difference of the $\gamma$ values
from the optical polarimetry of GRB 020813 and GRB 021004.}
\label{fig:f2}
\end{figure}
GRB 021004 was detected by \emph{HETE2} on 4 October 2002, with coordinates R.A.=$00^{\rm h}26^{\rm m}57^{\rm s}$
and Dec.=$+18^{\circ}55^{'}44^{''}$ \cite{2002GCN..1565....1S}. Its redshift is $z=2.328$ \cite{2002GCN..1618....1M}.
Here we directly take the reduced spectropolarimetric observations of the optical counterpart to GRB 021004
presented in ref.~\cite{2003A&A...410..823L}. As illustrated in the lower-left panel of Figure~\ref{fig:f1},
the observed polarization angle has a negative dependence on the energy, rather than a positive dependence.
The parameter constraints are shown in the lower-right panel of Figure~\ref{fig:f1}. We see here that
the best-fit corresponds to $\Delta\gamma=-1.17\times10^{-24}$ and $\phi_{0}=2.23$ rad. The data set is consistent
with the possibility of no WEP violation at all (i.e., $\Delta\gamma=0$) at the $2.4\sigma$ confidence level.
At the $3\sigma$ confidence level, we have $-2.7\times10^{-24}<\Delta\gamma<3.1\times10^{-25}$.
Now we combine the polarimetric data of GRB 020813 and GRB 021004 to further investigate the possible birefringent
effect that arises from the WEP violation. The calculation procedure is as follows. For each data set,
we first derive the 1-D probability distribution of $\Delta\gamma$ by marginalizing over the intrinsic polarization
angle $\phi_{0}$. The joint probability of each $\Delta\gamma$ for two bursts is then calculated with the total
likelihood function
$\mathcal{L}_{\rm tot}(\Delta\gamma_{i})\propto\mathcal{L}_{\rm 020813}(\Delta\gamma_{i})\cdot \mathcal{L}_{\rm 021004}(\Delta\gamma_{i})$,
where $i$ indicates the $i$th $\Delta\gamma$. In Figure~\ref{fig:f2}, we present the marginalized likelihood
distributions of $\Delta\gamma$ derived from the optical polarimetry of GRB 020813 (dotted curve), GRB 021004
(dashed curve), and their combination (solid curve), respectively. At the $3\sigma$ confidence level,
the combined constraint on $\Delta\gamma$ is $\left(0.2^{+0.8}_{-1.0}\right)\times10^{-24}$.
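The combination step (multiply the marginalized 1-D likelihoods on a common $\Delta\gamma$ grid and renormalize) can be sketched as follows; the two input curves here are illustrative Gaussians standing in for the actual marginalized distributions, which we do not reproduce.

```python
import math

# Illustrative 1-D marginalized likelihoods on a shared Delta-gamma grid
# (Gaussians standing in for the GRB 020813 and GRB 021004 curves).
grid = [x * 1e-26 for x in range(-500, 501)]          # Delta gamma values

def gauss(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2)

L1 = [gauss(g, 0.8e-24, 0.6e-24) for g in grid]       # stand-in: burst 1
L2 = [gauss(g, -1.2e-24, 0.7e-24) for g in grid]      # stand-in: burst 2

# L_tot(dg_i) is proportional to L1(dg_i) * L2(dg_i); normalize to unit sum
Ltot = [a * b for a, b in zip(L1, L2)]
norm = sum(Ltot)
Ltot = [v / norm for v in Ltot]

best = grid[max(range(len(grid)), key=lambda i: Ltot[i])]
print(best)   # lies between the two individual peaks
```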
Under the same assumption that the Shapiro delay is attributed to the Milky Way's gravity,
ref. \cite{2017MNRAS.469L..36Y} obtained the current best limit of $\Delta\gamma<1.6\times10^{-27}$
from the gamma-ray polarimetric data of GRB 110721A. While our combined limit is three orders of magnitude
less precise, our analysis, which may be unique in the literature, does constrain $\Delta\gamma$ by directly
fitting the multiwavelength polarimetric observations in the optical band. As such, our analysis provides
an independent test of the WEP and has the promise to complement existing WEP tests.
\section{Constraints on the Violation of Lorentz Invariance}
\label{sec:LIV}
In the photon sector, the Lorentz-violating dispersion relation can be expressed as \cite{2003PhRvL..90u1601M}
\begin{equation}\label{eq:dispersion}
E_{\pm}^2=p^2c^2\pm \frac{2\eta}{E_{\rm pl}} p^3c^3\;,
\end{equation}
where $E_{\rm pl}\approx 1.22\times 10^{19}$ GeV is the Planck energy, $\pm$ denotes the left- or right-handed
circular polarization states, and $\eta$ is a dimensionless parameter characterizing the broken degree of Lorentz invariance.
If $\eta\neq0$, then group velocities for different circular polarization states should differ slightly, leading
to vacuum birefringence and a phase rotation of linear polarization \cite{1999PhRvD..59l4021G,2001PhRvD..64h3007G,
2003Natur.426Q.139M,2004PhRvL..93b1101J}. The rotation angle of the polarization vector for linearly polarized photons
propagating from the source at redshift $z$ to the observer is given by \cite{2011PhRvD..83l1301L,2012PhRvL.109x1104T}
\begin{equation}\label{eq:theta-LIV}
\Delta\phi_{\rm LIV}(E)=\eta\frac{E^2}{\hbar E_{\rm pl}}\int_0^z\frac{1+z'}{H(z')}dz'\;,
\end{equation}
where $E$ is the energy of the observed light. Also, $H(z)=H_0\left[\Omega_{\rm m}(1+z)^3+\Omega_{\Lambda}\right]^{1/2}$
is the Hubble parameter at $z$, where the standard flat $\Lambda$CDM model with parameters
$H_{0}=67.36$ km $\rm s^{-1}$ $\rm Mpc^{-1}$, $\Omega_{\rm m}=0.315$, and $\Omega_{\Lambda}=1-\Omega_{\rm m}$ is adopted
\cite{2018arXiv180706209P}.
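The redshift-dependent factor in Equation~(\ref{eq:theta-LIV}) is easily evaluated by quadrature. The sketch below computes the dimensionless integral $\int_0^z (1+z')\,dz'/E(z')$ with $E(z)=H(z)/H_0$, so the physical factor is this value divided by $H_0$; the trapezoidal rule and step count are illustrative choices.

```python
# Dimensionless LIV distance factor I(z) = int_0^z (1+z')/E(z') dz',
# with E(z) = sqrt(Om*(1+z)^3 + OL); the physical integral in the
# rotation-angle formula is I(z)/H0. Flat LambdaCDM values as in the text.
Om, OL = 0.315, 1.0 - 0.315

def E(z):
    return (Om * (1.0 + z) ** 3 + OL) ** 0.5

def liv_factor(z, n=10000):
    """Trapezoidal quadrature of (1+z')/E(z') from 0 to z."""
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + (1.0 + z) / E(z))
    for k in range(1, n):
        zk = k * h
        s += (1.0 + zk) / E(zk)
    return s * h

print(round(liv_factor(1.255), 4))  # GRB 020813, z = 1.255
print(round(liv_factor(2.328), 4))  # GRB 021004, z = 2.328
```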
If the birefringent effect arising from LIV is considered here, the observed polarization angle for photons
at a certain energy $E$ with an intrinsic polarization angle $\phi_{0}$ should be
\begin{equation}
\phi_{\rm obs}=\phi_{0}+\Delta\phi_{\rm LIV}\left(E\right)\;.
\end{equation}
The observed polarization angles of GRB 020813 and GRB 021004 as a function of $E^{2}$ are shown in the left panels
of Figure~\ref{fig:f3}. To find the best-fitting birefringent parameter $\eta$ and the intrinsic polarization angle
$\phi_{0}$, we also adopt the method of maximum likelihood estimation. The adopted likelihood function is the same as
Equation~(\ref{eq:likelihood}), except now $\phi_{\rm th}\left(E_{i}\right)=\phi_{0}+\Delta\phi_{\rm LIV}\left(\eta,\;E_{i}\right)$ and
$\sigma_{{\rm tot},i}^{2}=\sigma_{\phi_{{\rm obs},i}}^{2}+\left(2\frac{\Delta\phi_{\rm LIV}}{E_{i}}\sigma_{E_{i}}\right)^{2}$.
The resulting constraints on $\eta$ and $\phi_{0}$ are displayed in the right panels of Figure~\ref{fig:f3}.
For GRB 020813, the best-fitting parameters are $\eta=1.58\times10^{-7}$ and $\phi_{0}=2.63$ rad, and $\eta$
is consistent with 0 (i.e., no evidence of LIV) at the $2.5\sigma$ confidence level.
For GRB 021004, the best-fit corresponds to $\eta=-1.07\times10^{-7}$ and $\phi_{0}=2.14$ rad, and the data set
is consistent with the possibility of no LIV at all (i.e., $\eta=0$) at the $2.2\sigma$ confidence level.
The 1-D marginalized likelihood distributions of $\eta$ derived from the optical polarimetry of GRB 020813 (dotted curve),
GRB 021004 (dashed curve), and their combination (solid curve) are plotted in Figure~\ref{fig:f4}.
We find that the $3\sigma$ level joint-constraint is $\eta=\left(-0.1^{+1.2}_{-1.7}\right)\times10^{-7}$,
which is in good agreement with the result of ref.~\cite{2007MNRAS.376.1857F}.
\begin{figure}
\vskip-0.2in
\centerline{\includegraphics[angle=0,width=1.1\hsize]{f3.eps}}
\vskip-0.3in
\caption{Similar to Figure~\ref{fig:f1}, but for the case of LIV.}
\label{fig:f3}
\end{figure}
\begin{figure}
\vskip-0.2in
\centerline{\includegraphics[angle=0,width=1.0\hsize]{f4.eps}}
\vskip-0.1in
\caption{Individual and joint constraints on the birefringent parameter $\eta$
from the optical polarimetry of GRB 020813 and GRB 021004.}
\label{fig:f4}
\end{figure}
Using the detections of prompt emission polarization in GRBs, ref.~\cite{2019MNRAS.485.2401W}
set the hitherto most stringent constraint on the birefringent parameter, i.e., $\eta<\mathcal{O}(10^{-16})$
(see also \cite{2013MNRAS.431.3550G,2014MNRAS.444.2776G,2016MNRAS.463..375L}).
While our optical polarization constraint is not competitive with observations of gamma-ray polarization,
there is merit to the result. We use the spectropolarimetric data in order to directly
measure the energy-dependent change of the polarization angle. This is an improvement over many previous analyses,
which made use of the argument that birefringence would significantly reduce polarization over a broad bandwidth
to obtain limits on the birefringent parameter $\eta$.
\section{Limits on violations of both the WEP and Lorentz Invariance}
\label{sec:both}
The Einstein equivalence principle entails three assumptions:
the universality of free fall (WEP), local Lorentz invariance, and
local position invariance of non-gravitational experiments
\cite{2006LRR.....9....3W,2014LRR....17....4W}. Indeed, it is well
known that particles endowed with a modified dispersion relation like
Equation~(\ref{eq:dispersion}) violate not only Lorentz invariance
but also the WEP, since in general they do not follow the geodesics of any
metric. Moreover, to describe such modified dispersion relations,
one has to assume the presence of an extra field beyond the metric,
such as an aether vector field; see, e.g., the Einstein-{\AE}ther gravity theory
\cite{2001PhRvD..64b4028J,2004PhRvD..70b4003J,2004PhRvD..69f4005E,2006PhRvD..73f4015F}.
Since both the WEP violation and the LIV effect can lead to an energy-dependent rotation of the linear polarization
angle, the observed polarization angle
\begin{equation}
\phi_{\rm obs}=\phi_{0}+\Delta\phi_{\rm WEP}\left(E\right)+\Delta\phi_{\rm LIV}\left(E\right)
\end{equation}
in principle should have contributions from the intrinsic polarization angle and the rotation angles induced by
violations of the WEP and Lorentz invariance, respectively. If we suppose that the energy-dependent rotation angle
is attributed to these two causes, the difference of the $\gamma$ values and the birefringent parameter $\eta$
can be simultaneously constrained by fitting the multiwavelength polarimetric observations of the optical afterglows
of GRB 020813 and GRB 021004. Similarly, we maximize the likelihood function (Equation~(\ref{eq:likelihood})) to
find the best-fitting parameters, except now
$\phi_{\rm th}\left(E_{i}\right)=\phi_{0}+\Delta\phi_{\rm WEP}\left(\Delta\gamma,\;E_{i}\right)+\Delta\phi_{\rm LIV}\left(\eta,\;E_{i}\right)$
and $\sigma_{{\rm tot},i}^{2}=\sigma_{\phi_{{\rm obs},i}}^{2}+\left[\left(\Delta\phi_{\rm WEP}+2\Delta\phi_{\rm LIV}\right)\frac{\sigma_{E_{i}}}{E_{i}}\right]^{2}$.
In Figure~\ref{fig:f5}, we show the marginalized likelihood distributions of $\Delta\gamma$ and $\eta$
derived from the polarimetry of GRB 020813 (dotted curves), GRB 021004 (dashed curves), and their combination
(solid curves), respectively. For the combination of the two bursts, the $3\sigma$ confidence level constraints
on the parameters are $\Delta\gamma=\left(-4.5^{+10.0}_{-16.0}\right)\times10^{-24}$ and $\eta=\left(6.5^{+15.0}_{-14.0}\right)\times10^{-7}$.
With such stringent constraints on both $\Delta\gamma$ and $\eta$, we can conclude that
the birefringence effect arising from violations of both the WEP and Lorentz invariance is insignificant.
These are the first simultaneous verifications of the WEP and Lorentz invariance in the photon sector.
\begin{figure*}
\vskip-0.2in
\centerline{\includegraphics[angle=0,width=1.0\hsize]{f5.eps}}
\vskip-0.1in
\caption{Individual and joint constraints on both the difference of the $\gamma$ values and the birefringent parameter $\eta$
from the optical polarimetry of GRB 020813 and GRB 021004.}
\label{fig:f5}
\end{figure*}
\section{Summary and discussion}
\label{sec:summary}
The WEP states that any two test particles, if emitted from the same astronomical object and traveling through
the same gravitational field, should follow the identical trajectory and undergo the same $\gamma$-dependent
Shapiro delay ($\gamma$ is one of the PPN parameters), regardless of their internal structures (e.g., energies
or polarization states) and compositions. Once the WEP fails, then photons with different circular polarization
states might correspond to different $\gamma$ values, which results in slightly different arrival times for
these two polarization states, leading to birefringence and a frequency-dependent rotation of the polarization
vector of a linearly polarized wave. Therefore, linear polarization measurements of astrophysical sources can
provide stringent tests of the WEP through the relative differential variations of the $\gamma$ parameter.
A key challenge in the idea of searching for a frequency-dependent linear polarization vector, however, is to
distinguish the rotation angle induced by the WEP violation from any source intrinsic polarization angle
in the emission of photons at different energies. In this work, we simply assume that the intrinsic
polarization angle is an unknown constant, and try to search for the birefringent effect in multiwavelength
polarization measurements of astrophysical sources. By fitting the multiwavelength polarimetric
data of the optical afterglows of GRB 020813 and GRB 021004, we place a statistically robust limit on the WEP
violation at the $3\sigma$ confidence level, i.e., $\Delta\gamma=\left(0.2^{+0.8}_{-1.0}\right)\times10^{-24}$.
As a consequence of LIV, the plane of linear polarization can also generate a frequency-dependent rotation.
Assuming that the rotation angle is mainly caused by LIV, a robust limit on the birefringent parameter $\eta$
quantifying the broken degree of Lorentz invariance can be obtained through a fitting procedure similar to
that used in testing the WEP. Using the optical polarimetry of GRB 020813 and GRB 021004, we find that the $3\sigma$
level joint-constraint on the birefringent parameter is $\eta=\left(-0.1^{+1.2}_{-1.7}\right)\times10^{-7}$.
If we consider that the frequency-dependent rotation angle is attributed to violations of both the WEP and Lorentz
invariance, the analysis of the spectropolarimetric data also allows us to simultaneously constrain the difference of
the $\gamma$ values and the birefringent parameter $\eta$. For the combination of GRB 020813 and GRB 021004,
the $3\sigma$ confidence level constraints on both $\Delta\gamma$ and $\eta$ are
$\Delta\gamma=\left(-4.5^{+10.0}_{-16.0}\right)\times10^{-24}$ and $\eta=\left(6.5^{+15.0}_{-14.0}\right)\times10^{-7}$.
While the optical polarimetry of GRBs does not currently have the best sensitivity to WEP tests and LIV
constraints, there is nonetheless merit to the result. First, this is the first time, to our knowledge, that
it has been possible to simultaneously test the WEP and Lorentz invariance through direct fitting of the
multiwavelength polarimetric data of a GRB. Second, thanks to the adoption of multiwavelength polarimetric data sets,
our constraints are much more statistically robust than previous results, which provided only upper limits.
Compared with previous works \cite{2012PhRvL.109x1104T,2013MNRAS.431.3550G,2014MNRAS.444.2776G,2017MNRAS.469L..36Y},
which constrained $\Delta\gamma$ or $\eta$ based on the indirect argument that the relative rotation angle
$|\Delta\phi(E_{2})-\Delta\phi(E_{1})|$ is smaller than $\pi/2$, our present analysis is independent of this argument.
As more and more GRB polarimeters (such as TSUBAME,
COSI, and GRAPE) enter service \cite{2017NewAR..76....1M}, it is reasonable to expect that multiwavelength
polarization observations in the prompt gamma-ray emission of GRBs will be available. Much stronger limits on violations
of both the WEP and Lorentz invariance can be expected as the analysis presented here is applied to a larger number of
GRBs with higher-energy polarimetry.
It should be noted that the rotation of the linear polarization plane can also be affected by magnetized plasmas
(the so-called Faraday rotation). The rotation angle induced by the Faraday rotation is
$\frac{\Delta\phi_{\rm Far}}{\rm rad}=8.1\times10^{5}\left(\frac{\lambda}{\rm m}\right)^{2}\int_{0}^{L}\left(\frac{B_{\parallel}}{\rm Gs}\right)
\left(\frac{n_{e}}{\rm cm^{-3}}\right)\frac{dL}{\rm pc}$, where $\lambda$ is the wavelength in units of meter,
$B_{\parallel}$ is the magnetic field strength in the intergalactic medium (IGM) parallel to the line-of-sight
(in units of Gauss), $n_{e}$ is the number density of electrons per $\rm cm^{3}$, and $L$ is the distance in units
of pc. Assuming that a cosmic source occurs at $z=2$ (corresponding to a distance of $L\sim10^{10}$ pc),
then for typical IGM with $n_{e}\sim10^{-6}$ ${\rm cm^{-3}}$ and $B_{\parallel}\leq10^{-9}$ Gs,
and at the wavelength of $\lambda\sim10^{-6}$ m where the optical polarimeter operates, we have $\Delta\phi_{\rm Far}\leq10^{-11}$ rad.
It is obvious that the rotation angle $\Delta\phi_{\rm Far}$ at the optical and higher energy band is extremely small,
therefore the Faraday rotation is negligible for the purposes of this work.
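The order-of-magnitude estimate above can be checked in a few lines (uniform-IGM values as quoted in the text):

```python
# Order-of-magnitude Faraday-rotation check from the text:
# dphi_Far [rad] = 8.1e5 * (lambda/m)^2 * B_par[G] * n_e[cm^-3] * L[pc],
# approximating the line-of-sight integral by uniform IGM values.
lam = 1e-6        # optical wavelength [m]
B_par = 1e-9      # upper limit on IGM parallel field [G]
n_e = 1e-6        # IGM electron density [cm^-3]
L = 1e10          # path length to z ~ 2 [pc]

dphi_far = 8.1e5 * lam ** 2 * B_par * n_e * L
print(dphi_far)   # ~8.1e-12 rad, consistent with the <= 1e-11 rad quoted
```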
\begin{acknowledgements}
This work is partially supported by the National Natural Science Foundation of China
(grant Nos. 11673068, 11725314, and U1831122), the Youth Innovation Promotion
Association (2017366), the Key Research Program of Frontier Sciences (grant Nos. QYZDB-SSW-SYS005
and ZDBS-LY-7014), and the Strategic Priority Research Program ``Multi-waveband gravitational wave universe''
(grant No. XDB23000000) of Chinese Academy of Sciences.
\end{acknowledgements}
\section{Proof of Proposition 1}\label{app:relay_in_num_ant}
If TX $i$ transmits $S_i\leq M$ data streams, then $M-S_i$ columns
of $\bP_i$ are zeros. For example, in a system with 4 subcarriers where TX $i$ transmits 2 data streams spread over 3 subcarriers,
$\bP_i$ has the following form,
\begin{equation}
\bP_i= \left[ \begin{array}{cccc}
\ast & \ast & 0 & 0\\
\ast & \ast& 0 & 0\\
0 & 0 & 0 & 0\\
\ast & \ast& 0 & 0
\end{array}
\right].
\end{equation}
Denote the non-zero columns of $\bP_i$ by $\hat{\bP}_i \in \bbC^{M \times S_i}$. The information
leakage constraint \eqref{eqt:in_const} is equivalent to
\begin{equation}\label{eqt:in_const1}
\left(\bH_{ji} + \G_j^{\her} \R \F_i \right) \hat{\bP}_i=0, \hspace{1cm} i,j=1,\ldots, K, i \neq j.
\end{equation}
For each $i$, we stack the constraints for all $j \neq i$ by using $\G^{\her}_{-i}$ from \eqref{eqt:in_out_eave} and defining
\begin{equation*}
\bH_{-i}=[\bH^{\her}_{1i}, \ldots, \bH^{\her}_{(i-1)i}, \bH^{\her}_{(i+1)i},
\ldots, \bH^{\her}_{Ki} ]^{\her}.
\end{equation*}
We write \eqref{eqt:in_const1} as
\begin{equation}
\left(\bH_{-i}+ \G^{\her}_{-i} \R \F_i\right) \hat{\bP}_i =0, \hspace{1cm} i=1,\ldots, K
\end{equation}
which can be manipulated to the following by performing vectorization on the matrices,
\begin{equation}\label{eqt:in_const2}
\left( \left( \hat{\bP}^{\tran}_i \F_i^{\tran} \right) \otimes \G^{\her}_{-i} \right) \bvec(\R)
= - \bvec \left(\bH_{-i} \hat{\bP}_i\right), \hspace{0.2cm} i=1, \ldots, K.
\end{equation}
The matrix $\bH_{-i}$ has dimension $(K-1) M \times M$ and the matrix $\hat{\bP}_i$ has dimension $M \times S_i$.
Hence, the product $\bH_{-i} \hat{\bP}_i$ has dimension $(K-1)M \times S_i$. The number of
constraints in \eqref{eqt:in_const2}
is the number of elements in $\bH_{-i} \hat{\bP}_i$, which is $(K-1)M S_i$. Summing up the constraints for $i=1,\ldots,K$,
we obtain a total of $(K-1)M \sum_{i=1}^K S_i$ constraints. The number of variables is the number of elements in
$\R$, which equals $M^2 N^2$. To neutralize information leakage at all users, we must satisfy \eqref{eqt:in_const2}
for all $i$. To this end, the number of relay antennas $N$ must satisfy $ M^2 N^2 \geq (K-1)M\sum_{i=1}^K S_i$, or
\begin{equation}
N \geq \sqrt{\frac{K-1}{M}\sum_{i=1}^K S_i }.
\end{equation}
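The counting argument translates directly into a feasibility check. A minimal sketch (Python; the function and its example arguments are illustrative, not from the paper):

```python
import math

def min_relay_antennas(K, M, streams):
    """Smallest integer N satisfying M^2 N^2 >= (K-1) M sum_i S_i."""
    assert len(streams) == K and all(0 < S <= M for S in streams)
    needed = (K - 1) * sum(streams) / M       # lower bound on N^2
    return math.ceil(math.sqrt(needed))

# Example: K = 3 users, M = 4 subcarriers, 2 streams each
print(min_relay_antennas(3, 4, [2, 2, 2]))    # ceil(sqrt(3)) = 2
```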
\section{Proof of Proposition 2}\label{app:relay_in_form}
Stacking the matrices in \eqref{eqt:in_const1} for all $i$, we obtain $\A \bvec(\R)= \bb$.
The matrix $\A$ is a block matrix with vertically stacked blocks
$\left( \hat{\bP}^{\tran}_i \F_i^{\tran} \right) \otimes \G^{\her}_{-i}$ , for $i=1, \ldots, K$, and therefore
has dimension $\sum_{i=1}^{K} S_i (K-1)M \times M^2 N^2$.
The matrix $\G_{-i}$ concatenates matrices $\G_j$ for $j \neq i$, e.g., $\G_{-1}=[\G_2, \ldots, \G_K]$.
As $\G_{-i}$ are not mutually independent, $\A$ is of low rank. Denote the number of rows of $\A$ by
$\alpha=\sum_{i=1}^{K} S_i (K-1)M$ and the rank of $\A$ by $\beta=\rank(\A)$.
The pseudo-inverse of $\A$ can be computed by performing a singular value decomposition of $\A$,
\begin{equation}
\begin{aligned}
& \left[\A\right]_{\alpha \times M^2 N^2}\\
&= \left[ \U_1 | \U_2 \right] \left[\begin{array}{cc}
\vGamma & \0_{\beta \times (M^2 N^2-\beta)}\\
\0_{(\alpha-\beta) \times \beta} & \0_{(\alpha-\beta) \times (M^2 N^2-\beta)}
\end{array}
\right] \left[ \begin{array}{c}
\V_1^{\her}\\
\V_2^{\her}
\end{array}
\right],
\end{aligned}
\end{equation}
where $\U_1 \in \bbC^{\alpha \times \beta}, \U_2 \in \bbC^{\alpha \times (\alpha-\beta)}$ are the left singular vectors in
the signal space and null space of $\A$ respectively; $\V_1^{\her} \in \bbC^{\beta \times M^2 N^2}$, $\V_2^{\her} \in \bbC^{(M^2 N^2-\beta) \times M^2 N^2}$
are the right singular vectors in the signal space and null space of $\A$ respectively; $\vGamma \in \bbC^{\beta \times \beta}$ holds the
non-zero singular values on its diagonal and zeros everywhere else. Thus, the solution of $\bvec(\R)$ satisfying $\A \bvec(\R)=\bb$ is
\begin{equation}\label{eqt:R_vec}
\bvec(\R)= \V_1 \vGamma^{-1} \U_1^{\her}\bb + \V_2 \y
\end{equation} where $\y$ is any vector in $\bbC^{(M^2 N^2-\beta) \times 1}$. The result follows by setting $\z=\V_2 \y$ as a vector
in the null space of $\A$.
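The structure of \eqref{eqt:R_vec} — a pseudo-inverse particular solution plus an arbitrary null-space component — can be verified on a toy system. The sketch below uses a rank-one matrix $A=av^{T}$, whose pseudo-inverse has the closed form $va^{T}/(\|a\|^{2}\|v\|^{2})$; all matrices are illustrative, not the relay quantities.

```python
# Rank-1 example: A = a v^T, with pinv(A) = v a^T / (|a|^2 |v|^2)
a = [1.0, 2.0]
v = [1.0, 1.0, 0.0]
A = [[ai * vj for vj in v] for ai in a]               # 2 x 3, rank 1

norm2 = sum(x * x for x in a) * sum(x * x for x in v)
pinvA = [[vi * aj / norm2 for aj in a] for vi in v]   # 3 x 2

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

x_true = [0.3, -0.7, 2.0]
b = matvec(A, x_true)                 # consistent right-hand side
x_p = matvec(pinvA, b)                # particular (minimum-norm) solution

# Any z in the null space of A keeps A(x_p + z) = b;
# here null(A) is spanned by (1, -1, 0) and (0, 0, 1).
z = [1.0, -1.0, 3.0]                  # combination of the two basis vectors
lhs = matvec(A, [xp + zi for xp, zi in zip(x_p, z)])
print(x_p, lhs, b)
```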
\section{Proof of Corollary \ref{cor:min_power}}\label{app:min_power}
Using the properties of Kronecker products, the relay transmit power from \eqref{eqt:pow_constraint} is equivalent to
$\left(\A^{\dagger} \bb +\z\right)^{\her} \left( \left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right)
\otimes \I_{\mn} \right) \left(\A^{\dagger} \bb+\z\right)$.
By Proposition 2 and \eqref{eqt:pow_constraint}, the minimum transmit power required to satisfy information leakage neutralization is
\begin{equation}
\begin{aligned}
& \min_{\z} \left(\A^{\dagger} \bb +\z\right)^{\her} \left( \left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right)
\otimes \I_{\mn} \right) \left(\A^{\dagger} \bb+\z\right)\\
& \xLeftrightarrow[]{\z=\0} \tr\left( \left(\A^{\dagger} \bb \right)\left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right)
\left(\A^{\dagger} \bb\right)^{\her} \right)\leq P_r^{max},
\end{aligned}
\end{equation} where the transition is due to the fact that $\z$ is in the null space of $\A$ and
the fact that $\Q= \left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right)
\otimes \I_{\mn} $ is positive semi-definite and $\z^{\her} \Q \z \geq 0$ for any $\z$.
\section{Formulation of $\cQ_2'$}\label{app:cq2}
Let $\E_i^T= \e_i^T \otimes \I_M$, $\bar{\T}_i=[\T_i, \I_M]$ and
\begin{equation}\label{eqt:xyz}
\begin{aligned}
\tilde{\F}&=\left( \F \bP \right)^{\dagger}\left( \F \bP \right)^{\her \dagger},
\X_i=\sum_{m=1}^K \sum_{l=1}^K \bH_{im} \bP_m \tilde{\F}_{ml} \bP_l^{\her} \bH_{il}^{\her},\\
\Y_i&=\left[ \begin{array}{cc}
\tilde{\F}_{ii} & -\sum_{l=1}^K \tilde{\F}_{il} \bP_l^{\her} \bH_{il}^H\\
-\sum_{m=1}^K \bH_{im} \bP_m \tilde{\F}_{mi} & \I_M
\end{array}
\right], \\
\Z_i&= \left[ \begin{array}{cc}
\I_{M} & \0_{M} \\
\0_{M}& \0_M
\end{array}\right] + \Y_i.
\end{aligned}
\end{equation}
With the equality constraint \eqref{eqt:in_block}, the amplification noise can be written as \eqref{eqt:amp_noise1}
\begin{figure*}[!h]
\begin{equation}\label{eqt:amp_noise1}
\begin{aligned}
&\G_i^{\her} \R \R^{\her} \G_i
= \E_i^{\tran}\G^{\her} \R \R^{\her} \G \E_i\\
&= \E_i^{\tran} \left( \T- \bH \bP\right) \left( \F \bP \right)^{\dagger}\left( \F \bP \right)^{\her \dagger}\left( \T- \bH \bP\right)^{\her} \E_i\\
&= \left[- \bH_{i1} \bP_1 , \ldots, \T_i - \bH_{ii} \bP_{i}, \ldots, -\bH_{iK} \bP_K \right] \left[\begin{array}{ccc}
\tilde{\F}_{11} &\ldots & \tilde{\F}_{1K}\\
\vdots & \ddots & \vdots\\
\tilde{\F}_{K1} & \ldots & \tilde{\F}_{KK}
\end{array}
\right] \left[ \begin{array}{c}
- \bP_1^{\her} \bH_{i1}^{\her}\\
\vdots\\
\T_i^{\her}- \bP_i^{\her} \bH_{ii}^{\her}\\
\vdots\\
-\bP_K^{\her} \bH_{iK}^{\her}
\end{array}
\right]\\
&= \sum_{m=1}^K \sum_{l=1}^K \bH_{im} \bP_m \tilde{\F}_{ml} \bP_l^{\her} \bH_{il}^{\her}- \T_i \sum_{l=1}^K \tilde{\F}_{il} \bP_l^{\her} \bH_{il}^H
-\sum_{m=1}^K \bH_{im} \bP_m \tilde{\F}_{mi}\T_i^{\her} + \T_i \tilde{\F}_{ii} \T_i^{\her}\\
&= \X_i - \I_M + \left[\T_i, \I_M \right] \left[ \begin{array}{cc}
\tilde{\F}_{ii} & -\sum_{l=1}^K \tilde{\F}_{il} \bP_l^{\her} \bH_{il}^H\\
-\sum_{m=1}^K \bH_{im} \bP_m \tilde{\F}_{mi} & \I_M
\end{array}
\right] \left[ \begin{array}{c}
\T_i^H\\
\I_M
\end{array}
\right]\\
&= \X_i - \I_M + \bar{\T}_i \Y_i \bar{\T}_i^{\her},
\end{aligned}
\end{equation}
\hrule
\begin{equation}\label{eqt:pow_const1}
\begin{aligned}
&\tr\left(\R \left(\F \bP \bP^{\her} \F^{\her}+ \I_{\mn}\right) \R^{\her}\right) \\
&=\tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \F \bP \right)^{\dagger} \left(\F \bP \bP^{\her} \F^{\her}+
\I_{\mn}\right)\left( \F \bP \right)^{\dagger \her}\left( \T- \bH \bP\right)^{\her}\G^{ \dagger}\right)\\
&=\tr \left( \G^{\her \dagger} \left( \T- \bH \bP \right) \left( \left( \F \bP \right)^{\dagger} \left( \F \bP \right)^{\dagger \her} +
\I_{KM} \right)\left( \T- \bH \bP \right)^{\her}\G^{ \dagger}\right)\\
&=\tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \tilde{\F}+ \I_{KM} \right)\left( \T- \bH \bP\right)^{\her}\G^{ \dagger}\right)\leq P_r^{max}.
\end{aligned}
\end{equation}
\hrule
\end{figure*}
where
$\tilde{\F}_{ml}\in \bbC^{M \times M}$ is the $(m,l)$-th block matrix in $\tilde{\F}$.
As a result, the objective can be written as
\begin{equation*}
\begin{aligned}
& \sum_{i=1}^K \cC \left( \I_M + \T_i \T_i^{\her} \left( \G_i^{\her} \R \R^{\her} \G_i+ \I_M\right)^{-1} \right)\\
&= \sum_{i=1}^K \Bigg( \cC \left( \I_M + \T_i \T_i^{\her} + \G_i^{\her} \R \R^{\her} \G_i\right)\\
& \hspace{0.5cm} - \cC \left( \I_M + \G_i^{\her} \R \R^{\her} \G_i \right) \Bigg)\\
&= \sum_{i=1}^K \Bigg( \cC \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)
-\cC \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^{\her} \right) \Bigg).
\end{aligned}
\end{equation*}
Similarly, the power constraint is written as \eqref{eqt:pow_const1}.
\section{Computation of the gradient of Lagrangian \eqref{eqt:lagrangian} }\label{app:lagrangian}
Recall the Lagrangian from \eqref{eqt:lagrangian},
\begin{equation*}
\begin{aligned}
& L(\T,\lambda)=\sum_{i=1}^K \left( \cC \left( \X_{i} + \bar{\T}_i \Z_{i} \bar{\T}_i^H \right)
-\cC \left( \X_i + \bar{\T}_i \Y_{i} \bar{\T}_i^{\her} \right) \right)\\
& -\lambda \left( \tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \tilde{\F}+ \I_{MK}\right)
\left( \T- \bH \bP\right)^{\her}\G^{ \dagger}\right)- P_r^{max} \right)\\
&= \sum_{i=1}^{K} f_i(\T_i)- \lambda g(\T),
\end{aligned}
\end{equation*} where $f_i(\T_i)= \cC \left( \X_{i} + \bar{\T}_i \Z_{i} \bar{\T}_i^H \right)
-\cC \left( \X_i + \bar{\T}_i \Y_{i} \bar{\T}_i^{\her} \right) $ denotes the secrecy rate of TX \nolinebreak$i$ and $g(\T)$ denotes the power constraint.
We compute the gradient of the Lagrangian \eqref{eqt:lagrangian} with respect to $\T$,
\begin{equation*}
\cD_{\T^*}L(\T,\lambda)= \cD_{\T^*} \sum_{i=1}^K f_i(\T_i)-\lambda \cD_{\T^*} g(\T).
\end{equation*}
As $f_i(\T_i)$ is independent of $\T_j$ for $j \neq i$, the derivative can be written in a block diagonal form
\begin{equation}\label{eqt:gra1}
\cD_{\T^*}L(\T,\lambda) =\diag\left(
\cD_{\T_1^*}f_1(\T_1), \ldots, \cD_{\T_K^*}f_K(\T_K) \right) -\lambda \cD_{\T^*} g(\T).
\end{equation}
The gradient of the objective function $f_i(\T_i)$ with respect to $\T_i^*$ is
\begin{equation}
\cD_{\T_i^*} f_i(\T_i)=\cD_{\T_i^*}\cC \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)
-\cD_{\T_i^*}\cC \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^{\her} \right).
\end{equation}
We begin with
\begin{equation}\label{eqt:gradient1}
\begin{aligned}
& \ln(2) \cD_{\T_i^*}\cC \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)\\
&= \cD_{\bar{\T}_i^*}\ln \det\left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)\cdot\cD_{\T_i^*}\bar{\T}_i^*\\
&= \bvec\left(\left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \Z_i \right)^{\tran}\cdot \frac{\partial \bvec(\bar{\T}_i^*)}{\partial \bvec(\T_i^*)}\\
&= \bvec\left(\left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \Z_i \right)^{\tran}\left[ \begin{array}{c}
\I_{M^2}\\
\0_{M^2}
\end{array}
\right]\\
&= \left[\left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \Z_i \right]_{(:,1: M)}\\
&= \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \left[\begin{array}{c}
\I_{M} +\tilde{\F}_{ii}\\
-\sum_{m =1}^K\bH_{im} \bP_m \tilde{\F}_{mi}
\end{array}\right].
\end{aligned}
\end{equation}
Similarly, we have
\begin{equation}\label{eqt:gradient2}
\begin{aligned}
& \ln(2) \cD_{\T_i^*}\cC \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^H \right)\\
&=\left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \left[\begin{array}{c}
\tilde{\F}_{ii}\\
-\sum_{m=1}^K \bH_{im} \bP_m \tilde{\F}_{mi}
\end{array}\right].
\end{aligned}
\end{equation}
Thus, we have the gradient of $f_i(\T_i)$ as
\begin{equation}\label{eqt:gra2}
\begin{aligned}
& \cD_{\T_i^*} f_i(\T_i)\\
&=\frac{1}{\ln(2)}\left(\left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \left[\begin{array}{c}
\I_{M} +\tilde{\F}_{ii}\\
-\sum_{m =1}^K\bH_{im} \bP_m \tilde{\F}_{mi}
\end{array}\right] \right.\\
& \left. - \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^H \right)^{-1} \bar{\T}_i \left[\begin{array}{c}
\tilde{\F}_{ii}\\
-\sum_{m=1}^K \bH_{im} \bP_m \tilde{\F}_{mi}
\end{array}\right]\right).
\end{aligned}
\end{equation}
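The matrix-calculus identity underlying \eqref{eqt:gradient1} and \eqref{eqt:gradient2} is $\cD \ln\det\left(\X + \T\Z\T^{\her}\right) = \left(\X+\T\Z\T^{\her}\right)^{-1}\T\Z$ (up to the chain-rule factors of $\bar{\T}_i$). A finite-difference sanity check, with illustrative sizes and randomly drawn Hermitian positive (semi)definite $\X$ and $\Z$:

```python
import numpy as np

rng = np.random.default_rng(8)
M = 3

def cplx(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

A, Bm, T = cplx(M, M), cplx(M, M), cplx(M, M)
X = A @ A.conj().T + M * np.eye(M)   # Hermitian positive definite
Z = Bm @ Bm.conj().T                 # Hermitian positive semidefinite

def f(T):
    return np.linalg.slogdet(X + T @ Z @ T.conj().T)[1]

# closed-form gradient factor: (X + T Z T^H)^{-1} T Z
G = np.linalg.inv(X + T @ Z @ T.conj().T) @ T @ Z

# directional derivative of the real-valued f along E equals 2 Re tr(G^H E)
E = cplx(M, M)
h = 1e-6
num = (f(T + h * E) - f(T - h * E)) / (2 * h)
assert abs(num - 2 * np.real(np.trace(G.conj().T @ E))) < 1e-4
```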
The last step of computing the gradient of the Lagrangian is to compute
\begin{equation}\label{eqt:gradient3}
\begin{aligned}
&\cD_{\T^*}\tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \tilde{\F}+ \I_{KM}\right) \left( \T- \bH \bP\right)^{\her}
\G^{ \dagger}\right)\\
&= \G^{ \dagger}\G^{\her \dagger} \left( \T- \bH \bP\right)
\left( \tilde{\F}+ \I_{KM}\right).
\end{aligned}
\end{equation}
Combining \eqref{eqt:gra1}, \eqref{eqt:gra2} and \eqref{eqt:gradient3}, the gradient of the Lagrangian is obtained.
\section{Introduction}
Future wireless network systems are trending towards spectrum sharing across different wireless infrastructures such as
LTE networks, smart grid sensor networks and WiMAX networks. With isolated
wireless infrastructures, such as multiple non-cooperating LTE cells (as shown in Figure \ref{fig:ltecell}), ensuring data security remains a major
technical challenge. While cryptographic techniques are employed in most established communication standards,
physical layer security techniques provide an alternative approach when the communicating front-ends have limited
computational capability and cannot carry out standard cryptographic methods such as symmetric-key and asymmetric-key encryption.
Such applications include, but are not limited to, ubiquitous or pervasive computing \cite{Mattern2004}.
\begin{figure}
\begin{center}
\resizebox{\linewidth}{4cm}{
\input{fig_lte_pretty}
}
\caption{Three overlapping LTE cells. The sum secrecy rate over the cells can be improved if a smart multi-antenna relay is introduced into the system. The
emphasized arrows from BS 1 to the smart relay in the middle and then to UE 1 illustrate that the desired signal strength (together
with the direct channel path in red) can be boosted
by choosing an appropriate relay strategy. The emphasized arrows from BS 2 to the smart relay and then to UE 1
illustrate that information leakage (shown by a dashed arrow in blue) can be neutralized by choosing the relay strategy appropriately.}
\label{fig:ltecell}
\end{center}
\end{figure}
With the rapidly growing demand for wireless applications in recent years, communication security has become ever more important.
Physical layer security techniques \cite{Liang2009a, Liu2009a, Bloch2011} provide an additional protection to the conventional
secure transmission methods using cryptography.
As early as four decades ago, the seminal work on the secrecy capacity of the wire-tap channel \cite{Wyner1975a} - the most fundamental model,
consisting of one source node, one destination node and one eavesdropper - started the era of research on physical layer security.
Extensive analyses and designs have been conducted ever since; physical layer security results can be found in \cite{Liang2009a, Liu2009a, Bloch2011}
and recent tutorial papers \cite{Shiu2011,Poor2012}.
With advantages such as increased cell coverage and transmission rates, relays are incorporated into the standards of current wireless
infrastructures. The wireless resources in these systems are frequently shared by
many users/subscribers, and a potentially malicious user in the system can compromise confidentiality. Many novel
strategies have been proposed to improve secrecy in
\begin{itemize}
\item relay systems, including
cooperative jamming (CJ) \cite{Zheng2011,Dong2011,Huang2011}, noise-forwarding (NF) \cite{Lai2008}, a mixture of CJ and NF \cite{Bassily2012},
signal-forwarding strategies such as amplify-and-forward (AF) and decode-and-forward (DF) \cite{Petropulu2010,Ng2011,Bassily2012a}%
\footnote{All aforementioned works assume that the relays are cooperative and trusted. For secure transmission strategies with untrusted relays,
please refer to \cite{He2010,Khodakarami2011,Jeong}.}.
\item multi-carrier systems \cite{Jorswieck2008a, Wang2011b,Renna2012}
and multi-carrier relay systems with external eavesdropper(s) \cite{Jeong2011, Ng2011}.
\end{itemize}
Yet, a joint optimization of secrecy rates over the frequency-spatial resources in a relay-assisted multi-user interference channel (with
internal eavesdroppers) remains an open problem, as considered here.
We assume that the relay employs an amplify-and-forward (AF) strategy which provides flexibility in implementation as the relay is
transparent to the modulation and coding schemes and induces negligible signal processing delays \cite{Berger2009}.
The novel notion of \emph{relays-without-delay}, also known as instantaneous relays if the relays are memoryless \cite{ElGamal2005,ElGamal2007,Cadambe2009a,Lee2011},
refers to relays that forward signals consisting of both the current symbol and past symbols, instead of only past symbols as in
conventional relays. As shown in Figure \ref{fig:irc}, the instantaneous relay model matches layer\nobreakdash-\hspace{0pt}1 repeater-connected networks (such
as LTE networks) and helps us analyze the
performance of today's repeater-connected networks\footnote{In modern networks such as LTE, wireless links are often connected
using boosters or layer-1 repeaters (simple amplifiers)
\cite{Seidel2009}. If the time consumed for the signals to travel from a source to a repeater or from a repeater to a destination is counted as one unit,
then the total time for the signal to travel from a source to a destination is two units - the same amount of time for the signal to travel from a source
through a smart AF relay to a destination. }.
In order to provide secure transmission over relay-assisted multi-carrier networks, we propose a relay strategy termed
\emph{information leakage neutralization}, which chooses the relay forwarding matrix so as to algebraically neutralize the
information leakage from each transmitter in the network
to each eavesdropper on each frequency subcarrier. This method is adapted from a technique for relay networks termed
interference neutralization (IN).
IN has been applied to eliminate interference in various single-carrier systems, such as deterministic channels
\cite{Mohajer2008, Mohajer2009a}, two-hop relay channels
\cite{Berger2005, Rankov2007, Berger2009} and instantaneous relay channels \cite{Ho2011d}. Our prior work shows that IN
is effective in improving secrecy rates in a two-hop wiretap channel \cite{Gerbracht2012}. The method proposed in this paper
differs from the previous works above because neutralization over multi-carrier systems is of considerably higher complexity. Another important difference is that
here the colluding eavesdroppers as well as the relay have multiple antennas.
The contribution and outline of this manuscript are summarized as follows:
\begin{itemize}
\item We transform a general and complicated sum secrecy rate optimization problem on a relay-assisted multi-carrier interference channel with
mutually eavesdropping users into an optimization-ready formulation. Systematic optimization techniques can then be applied to
solve for the sum-secrecy-rate-optimal relay strategies and precoding matrices at the transmitters.
\item An illustrative example is given in Section \ref{sec:example} for a basic setting to highlight the efficiency of information
leakage neutralization.
\item We propose novel information leakage neutralization strategies in Section \nolinebreak\ref{sec:in}. These strategies neutralize information
leakage from each user to its colluding eavesdroppers on each frequency-spatial channel. The resulting secrecy rate expression is significantly simplified.
Detailed analyses for the multi-carrier information leakage neutralization methods are provided.
In particular, the minimum number
of antennas at the relay for complete information leakage neutralization is computed in Proposition 1. The required number of antennas depends on
the number of data streams sent by each user, the number of frequency subcarriers and the number of users in the system. Relevant to
applications where relay power must be reserved, the minimum power at the relay required for information leakage neutralization is computed in Proposition 2.
\item We propose an efficient and simple information leakage neutralization strategy (EFFIN) which ensures
secure transmissions in the scenarios of limited power and computational resources at relay and transmitters. With sufficient power
at the relay, we propose an optimized information leakage neutralization technique (OPTIN) to maximize the secrecy rates while ensuring zero information
leakage.
\item The achievable secrecy rates from proposed strategies EFFIN and OPTIN are compared to several baseline strategies by numerical simulations
in Section \ref{sec:sim}. Baseline 1
is a scenario where the relay is a layer-1 repeater and baseline 2 is a scenario with no relay. Simulation results show that the proposed
strategies outperform the baseline strategies significantly over a wide range of operating SNRs.
\end{itemize}
\subsection{Notations}
The set $\mathbb{C}^{a \times b}$ denotes a set of complex matrices of size $a$ by $b$ and is shortened to $\mathbb{C}^a$ when $a=b$.
The notation $\mathcal{N}(\mathbf{A})$ is the null space of $\A$.
The operator $\otimes$ denotes the Kronecker product. The superscripts $^{\tran}$, $^{\her}$, $^\dagger$
represent transpose, Hermitian transpose and Moore-Penrose inverse
respectively whereas the superscript $^*$ denotes the conjugation operation. The Euclidean norm for scalars
is written as $|.|$. The trace of matrix $\mathbf{A}$ is denoted as $\tr(\mathbf{A})$.
Vectorization stacks the columns of a matrix $\A$ to form a long column vector denoted as $\bvec(\mathbf{A})$.
The function $\mathcal{C}(\A)$ denotes the log-determinant function of matrix $\A$, $\log \det \left(\A \right)$.
The identity and zero matrices of dimension $K\times K$ are written as $\I_K$ and $\0_K$. The vector $\e_i$ represents a column vector with zero elements everywhere and one at the $i$-th position.
The notation $[\A]_{ml}$ denotes the $m$-th row and $l$-th column element of the matrix $\A$.
The notation $\p_{a:b}$, $0 \leq a\leq b\leq n$, denotes a vector which has elements $[p_a, p_{a+1},\ldots, p_b]$ where $\p=[p_1,\ldots,p_n]$.
\section{System Model}
\input{fig_irc}
In the following subsection, we give an example of a two-user interference relay channel in which the relay has two antennas and all nodes share two frequency
subcarriers. We shall illustrate that the conventional assumption of a block diagonal relay matrix (which maximizes achievable rates in peaceful systems) cannot be
adopted a priori when secrecy rates are considered.
\subsection{An example of two users on two frequencies with two antennas at the relay}\label{sec:example}
Transmitter $i$, $i=1,2$, transmits symbols $\x_i \in \bbC^{M \times 1}$ which are spread
over $M$ frequency subcarriers by precoding matrix $\bP_i$. For ease of notation, we assume that precoding matrix $\bP_i$
is a square matrix $\bP_i \in \bbC^{M}$. When user $i$ transmits $S_i\leq M$ symbols, then zeros are padded in $\x_i$ so that its
dimension is always $M \times 1$ and correspondingly zero columns are padded in $\bP_i$.
We assume that the users do not overload the system and therefore $S_i$ is smaller than or equal to
the number of frequency subcarriers, here two. Note that $\bP_i$ may have low row rank when certain subcarriers are not used.
For example, if user $i$
transmits one symbol on subcarrier 1 but nothing on subcarrier 2, then $\bP_i= [a, 0; 0, 0]$ for some complex scalar $a$.
If $\bP_i$ is diagonal, then each
symbol is only sent on one frequency.
Denote the $m$-th transmit symbol of user $i$ as $x_i(m)$; the symbols are randomly generated, mutually independent, and the symbol vector $\x_i$ has covariance matrix $\I_{2}$.
The precoding matrix $\bP_i$ satisfies
the transmit power constraint of user $i$: $\tr\left( \bP_i \bP_i^{\her}\right)\leq P_i^{max}$.
Denote the channel gain from transmitter (TX) $i$ to receiver (RX) $j$ on frequency $m$ as $h_{ji}(m)$.
For simplicity of the example, we let $S_i$ equal two. The received
signal of user $i$ is a vector whose $m$-th element is the received signal on the $m$-th frequency subcarrier,
\begin{equation}
\y_i = \left[\begin{array}{c}
\y_i(1)\\ \y_i(2)
\end{array}
\right]= \sum_{j=1}^2 \left[ \begin{array}{cc}
h_{ij}(1) & 0\\
0 & h_{ij}(2)
\end{array}
\right] \bP_j \left[ \begin{array}{c}
x_j(1)\\
x_j(2)
\end{array}
\right] + \left[ \begin{array}{c}
n_i(1)\\
n_i(2)
\end{array}
\right].
\end{equation} The circular Gaussian noise with unit variance received on the $m$-th subcarrier at RX $i$ is denoted as $n_i(m)$.
If a relay with two antennas is introduced into the system, it receives the broadcasting signal from TXs and forwards
them to RXs. We denote the received signal at the relay as a stacked vector of the received signal at each frequency $m$, with
$\y_r(m)\in \bbC^{2 \times 1}$ representing the received signal on frequency $m$ and the $a$-th element in $\y_r(m)$ representing
the signal at the $a$-th antenna:
\begin{equation}
\y_r= \left[\begin{array}{c}
\y_r(1)\\ \y_r(2)
\end{array}
\right]= \sum_{j=1}^2 \left[ \begin{array}{cc}
\f_j(1) & \0_{2 \times 1}\\
\0_{2 \times 1} & \f_j(2)
\end{array}
\right] \bP_j \left[ \begin{array}{c}
x_j(1)\\
x_j(2)
\end{array}
\right]+ \left[ \begin{array}{c}
\n_r(1)\\
\n_r(2)
\end{array}
\right]
\end{equation} where $\n_r(m)\in \bbC^{2 \times 1}$ is a circular Gaussian noise vector received at frequency $m$ with
identity covariance matrix and $\f_j(m)$ is the complex vector channel from user $j$ to the relay on frequency $m$.
The relay processes the received signal $\y_r$ by a multiplication of matrix $\R \in \bbC^{4}$ and forwards the signal
to the RXs. Denote the channel from relay to RX $i$ on frequency $m$ by $\g_i(m)\in \bbC^{2 \times 1}$. At RX $i$, the received
signal is
\begin{equation}\label{eqt:received_sig_ex}
\begin{aligned}
\y_i& = \sum_{j=1}^2 \left( \left[ \begin{array}{cc}
h_{ij}(1) & 0\\
0 & h_{ij}(2)
\end{array}
\right]+ \left[ \begin{array}{cc}
\g^{\her}_i(1) & \0_{1 \times 2}\\
\0_{1 \times 2} & \g^{\her}_i(2)
\end{array}
\right] \R \left[ \begin{array}{cc}
\f_j(1) & \0_{2 \times 1}\\
\0_{2 \times 1} & \f_j(2)
\end{array}
\right] \right) \bP_j \left[ \begin{array}{c}
x_j(1)\\
x_j(2)
\end{array}
\right]\\
& + \left[ \begin{array}{cc}
\g^{\her}_i(1) & \0_{1 \times 2}\\
\0_{1 \times 2} & \g^{\her}_i(2)
\end{array}
\right] \R \left[ \begin{array}{c}
\n_r(1)\\
\n_r(2)
\end{array}
\right]+ \left[ \begin{array}{c}
n_i(1)\\
n_i(2)
\end{array}
\right].
\end{aligned}
\end{equation}
Denote channel matrices
\begin{equation*}
\bH_{ij}=\left[ \begin{array}{cc}
h_{ij}(1) & 0\\
0 & h_{ij}(2)
\end{array}
\right], \hspace{0.5cm} \G^{\her}_i=\left[ \begin{array}{cc}
\g^{\her}_i(1) & \0_{1 \times 2}\\
\0_{1 \times 2} & \g^{\her}_i(2)
\end{array}
\right], \hspace{0.5cm} \F_i= \left[ \begin{array}{cc}
\f_j(1) & \0_{2 \times 1}\\
\0_{2 \times 1} & \f_j(2)
\end{array}
\right]
\end{equation*}
and the equivalent channel from TX $j$ to RX $i$ as
$
\bar{\bH}_{ij}=\bH_{ij} + \G_i^{\her} \R \F_j.
$
An achievable rate of user 1 is
\begin{equation}
r_1(\R)= \cC\left( \I_2 + \bar{\bH}_{11} \bP_1 \bP_1^{\her} \bar{\bH}^{\her}_{11}
\left( \bar{\bH}_{12} \bP_2 \bP_2^{\her} \bar{\bH}^{\her}_{12} + \G^{\her}_1 \R \R^{\her} \G_1 + \I_2\right)^{-1} \right).
\end{equation}
Consider that RX $2$ is an eavesdropper. We consider the worst-case scenario in which RX $2$ decodes all other symbols perfectly
before decoding the messages from TX $1$; RX 2 then sees a MIMO channel and decodes messages $x_1(1)$ and $x_1(2)$ utilizing
both frequencies (with an MMSE receive filter, for example).
\begin{equation}\label{eqt:in_leakage_ex}
\begin{aligned}
\y_{2 \leftarrow 1}&=\left( \left[ \begin{array}{cc}
h_{21}(1) & 0\\
0 & h_{21}(2)
\end{array}
\right]+ \left[ \begin{array}{cc}
\g^{\her}_2(1) & \0_{1 \times 2}\\
\0_{1 \times 2} & \g^{\her}_2(2)
\end{array}
\right] \R \left[ \begin{array}{cc}
\f_1(1) & \0_{2 \times 1}\\
\0_{2 \times 1} & \f_1(2)
\end{array}
\right] \right) \bP_1 \left[ \begin{array}{c}
x_1(1)\\
x_1(2)
\end{array}
\right]\\
& + \left[ \begin{array}{cc}
\g^{\her}_2(1) & \0_{1 \times 2}\\
\0_{1 \times 2} & \g^{\her}_2(2)
\end{array}
\right] \R \left[ \begin{array}{c}
\n_r(1)\\
\n_r(2)
\end{array}
\right]+ \left[ \begin{array}{c}
n_2(1)\\
n_2(2)
\end{array}
\right]
\end{aligned}
\end{equation} An achievable rate is then
$
r_{2 \leftarrow 1}(\R)= \cC \left( \I_2 + \bar{\bH}_{21} \bP_1 \bP_1^{\her} \bar{\bH}^{\her}_{21} \left( \G_2^{\her} \R \R^{\her} \G_2 + \I_2\right)^{-1}\right).
$
An achievable secrecy rate of user 1 is then the achievable rate of user 1 $r_1(\R)$ minus the leakage rate to user 2 $r_{2 \leftarrow 1}(\R)$ \cite{Khisti2010a}:
\begin{equation}\label{eqt:secrecy_rate_ex1}
\begin{aligned}
r_1^{s}(\R)&= \left(r_1(\R) -r_{2 \leftarrow 1}(\R) \right)^{+}\\
&= \Bigg( \cC\left( \I_2 + \bar{\bH}_{11} \bP_1 \bP_1^{\her} \bar{\bH}^{\her}_{11} \left( \bar{\bH}_{12} \bP_2 \bP_2^{\her} \bar{\bH}^{\her}_{12} + \G^{\her}_1 \R \R^{\her} \G_1 + \I_2\right)^{-1} \right)\\
&-\cC \left( \I_2 + \bar{\bH}_{21} \bP_1 \bP_1^{\her} \bar{\bH}^{\her}_{21} \left( \G_2^{\her} \R \R^{\her} \G_2 + \I_2\right)^{-1}\right) \Bigg)^+.
\end{aligned}
\end{equation}
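As a sanity check, the achievable secrecy rate above is straightforward to evaluate numerically. The sketch below uses randomly drawn stand-ins for the equivalent channels, precoders and relay terms (none of the values come from the paper's tables; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def cplx(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

def C(A):
    """Log-determinant in bits."""
    return np.linalg.slogdet(A)[1] / np.log(2)

M = 2
# stand-ins for the equivalent channels \bar{H}_{ij} and precoders
H11, H12, H21 = cplx(M, M), cplx(M, M), cplx(M, M)
P1, P2 = cplx(M, M), cplx(M, M)
G1R, G2R = cplx(M, 2 * M), cplx(M, 2 * M)   # stand-ins for G_i^H R

I = np.eye(M)

def rate(H, P, cov):
    """C(I + H P P^H H^H cov^{-1})."""
    return C(I + H @ P @ P.conj().T @ H.conj().T @ np.linalg.inv(cov))

r1 = rate(H11, P1, H12 @ P2 @ P2.conj().T @ H12.conj().T + G1R @ G1R.conj().T + I)
r_leak = rate(H21, P1, G2R @ G2R.conj().T + I)
r1_sec = max(r1 - r_leak, 0.0)              # the ( . )^+ operation
```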
The relay processing matrix is defined as
\begin{equation}
\R = \left[ \begin{array}{cc}
\R_{11} & \R_{12}\\
\R_{21} & \R_{22}
\end{array}
\right]
\end{equation} where each submatrix block $\R_{mn}$ forwards signals from frequency $n$ to frequency $m$.
In a peaceful MIMO IRC, $\R$ bears a block diagonal structure, $\R_{12}=\R_{21}=\0_2$.
The intuition is that relays should not generate cross talk over frequency channels. However,
it is not trivial to examine the effect of $\R_{12}$ and $\R_{21}$ on secrecy rates, as illustrated below, and the conventional block diagonal
structure should not be assumed a priori.
\input{table_channel}
As a numerical example, we compute the secrecy rates with the following randomly generated channels given in Table \ref{table:channel}. We set the
precoding matrices of TX 1 and TX 2 to be
\begin{equation*}
\bP_1=\left[ \begin{array}{cc}
1& 0\\
1&0
\end{array}
\right], \hspace{0.3cm} \bP_2=\left[ \begin{array}{cc} 1&4\\-4 & 1 \end{array}\right]
\end{equation*} which means that TX 1 transmits only one data stream on both subcarriers and TX 2 transmits two
data streams spread over both frequency subcarriers with orthogonal sequences. With relay matrix $\R^{\IN}$ (see Table \ref{table:channel})
a sum secrecy rate of 3.4104 is achievable whereas with block diagonal matrix $\R^{\IN,d}$ the sum secrecy rate is 3.1881.
A block diagonal relay matrix does not always yield a higher secrecy rate and therefore in the following we assume a general non-block-diagonal structure $\R$. In fact, the relay matrix $\R^{\IN}$ is chosen such that the secrecy leakage is zero: $(\bH_{12} + \G_1^{\her} \R \F_2) \bP_2=\0$ and
$(\bH_{21} + \G_2^{\her} \R \F_1) \bP_1=\0$.
Thus, the secrecy rate from \eqref{eqt:secrecy_rate_ex1} can be simplified to the following
\begin{equation}\label{eqt:ach_rate_in_ex2}
r_1^{s} = \cC \left( \I_2 + \bar{\bH}_{11} \bP_1 \bP_1^{\her} \bar{\bH}^{\her}_{11} \left( \G^{\her}_1 \R \R^{\her} \G_1 + \I_2\right)^{-1} \right).
\end{equation}
This motivates our following proposition on information leakage neutralization techniques. Interestingly, with information leakage
neutralization, we can simplify the optimization problem significantly. The idea is to set the
information leakage from each user at each frequency to zero, in particular, by setting the equivalent
channel of $\x_1$ from TX 1 to RX 2 and vice versa in \eqref{eqt:in_leakage_ex} to zero,
\begin{equation}\label{eqt:in_ex_block}
\left\{
\begin{aligned}
& \left(\bH_{12}+ \G_1^{\her} \R \F_2 \right) \bP_2 = \0\\
& \left(\bH_{21}+ \G_2^{\her} \R \F_1 \right) \bP_1 = \0.\\
\end{aligned}
\right.
\end{equation}
With the properties of the Kronecker product, \eqref{eqt:in_ex_block} can be written as
\begin{equation}
\left[
\begin{array}{c}
\left( \left(\F_2 \bP_2 \right)^{\tran} \otimes \G_1^{\her} \right)\\
\left( \left(\F_1 \bP_1 \right)^{\tran} \otimes \G_2^{\her} \right)
\end{array} \right] \bvec(\R) = \B \bvec(\R)= \left[\begin{array}{c}
-\bvec(\bH_{12} \bP_2)\\
-\bvec(\bH_{21} \bP_1)\\
\end{array} \right] = \bb.
\end{equation}
The stacked matrix $\B$ in the above equation is a fat matrix\footnote{Care must be taken when users send fewer than $M$ data streams
(i.e., when $\bP_i$ has zero columns); more discussion is provided in Proposition 2.}. We obtain a relay matrix that performs information leakage neutralization:
\begin{equation}\label{eqt:r_i^sn_num1}
\bvec(\R)= \B^{\her} \left( \B \B^{\her} \right)^{-1} \bb.
\end{equation}
Substituting the channel realizations in Table \ref{table:channel} into the above equation and reversing the vectorization operation,
we obtain the relay matrix $\R^{\IN}$ (please refer to the table for numerical values).
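The computation of \eqref{eqt:r_i^sn_num1} can be sketched numerically as follows. The channels here are randomly generated stand-ins for those in Table \ref{table:channel} (values are illustrative only, two subcarriers and two relay antennas), and the final check confirms that both equivalent cross channels are indeed neutralized:

```python
import numpy as np

rng = np.random.default_rng(6)

def cplx(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

def blkdiag(a, b):
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]), complex)
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

# stand-in channels: H_ij diagonal, F_j and G_i block diagonal over subcarriers
H12, H21 = np.diag(cplx(2)), np.diag(cplx(2))
F1, F2 = blkdiag(cplx(2, 1), cplx(2, 1)), blkdiag(cplx(2, 1), cplx(2, 1))
G1, G2 = blkdiag(cplx(2, 1), cplx(2, 1)), blkdiag(cplx(2, 1), cplx(2, 1))
P1, P2 = cplx(2, 2), cplx(2, 2)

vec = lambda A: A.reshape(-1, 1, order="F")   # column-stacking vec(.)
B = np.vstack([np.kron((F2 @ P2).T, G1.conj().T),
               np.kron((F1 @ P1).T, G2.conj().T)])
b = -np.vstack([vec(H12 @ P2), vec(H21 @ P1)])

# minimum-norm solution vec(R) = B^H (B B^H)^{-1} b
vecR = B.conj().T @ np.linalg.solve(B @ B.conj().T, b)
R = vecR.reshape(4, 4, order="F")

# both equivalent cross channels are neutralized
assert np.allclose((H12 + G1.conj().T @ R @ F2) @ P2, 0)
assert np.allclose((H21 + G2.conj().T @ R @ F1) @ P1, 0)
```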
\begin{Remark}
If the precoding matrices $\{\bP_i\}$ are invertible, then the relay matrix $\R$ obtained using \eqref{eqt:r_i^sn_num1} is block diagonal.
A block diagonal relay matrix means that the relay sets cross talk over frequency subcarriers to zero and, due to the information
leakage neutralization, the interference from users on the same frequency is also zero. This results in $KM$ parallel channels without
interference. We propose in Section \ref{sec:effin} a suboptimal but very efficient algorithm which optimizes the achievable rates in
this case\footnote{The achievable rates here are secrecy rates as the information leakage is zero.}.
\end{Remark}
In fact, the matrix in \eqref{eqt:r_i^sn_num1}
is not unique: any matrix which is the sum of $\bvec(\R)$ in \eqref{eqt:r_i^sn_num1} and
a vector in the null space of $\B$ can also neutralize information leakage,
\begin{equation}\label{eqt:r_i^sn_num2}
\bvec(\R)= \B^{\her} \left( \B \B^{\her} \right)^{-1} \bb + \z,
\end{equation} where $\z \in \mathcal{N}(\B)$. With the channel realizations given in Table \ref{table:channel},
we can generate another matrix $\R^{\IN,z}$ which achieves a higher secrecy rate of 4.1553, a 17.8\% increase in secrecy
rate obtained by optimizing over $\z$. This motivates us to investigate an efficient method to find $\z$, and consequently $\R$,
which neutralizes information leakage and optimizes the secrecy rate at the same time.
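The null-space family in \eqref{eqt:r_i^sn_num2} can be sketched as follows, with a generic fat, full-row-rank matrix $\B$ standing in for the stacked neutralization constraints (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
# generic fat system B vec(R) = b, as in the neutralization constraints
B = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
b = rng.standard_normal((8, 1)) + 1j * rng.standard_normal((8, 1))

r0 = B.conj().T @ np.linalg.solve(B @ B.conj().T, b)   # minimum-norm solution
_, _, Vh = np.linalg.svd(B)
Z = Vh[8:].conj().T                                    # 16 x 8 basis of null(B)
z = Z @ (rng.standard_normal((8, 1)) + 1j * rng.standard_normal((8, 1)))

assert np.allclose(B @ (r0 + z), b)   # leakage stays neutralized for any such z
```

Any such $\z$ preserves the neutralization constraints, leaving $\dim \nnull(\B)$ complex degrees of freedom free for secrecy rate optimization.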
\begin{Remark}
With the optimization over $\z$, the relay matrix is no longer block diagonal, which couples the frequency channels. Although
the problem is more complicated, we have shown in the above example that one can obtain a better secrecy rate performance.
In Section \ref{sec:optin}, we propose an iterative sum secrecy rates optimization over the relay matrix $\R$ and the precoding matrices $\{\bP_i\}$.
\end{Remark}
In the following section, we illustrate how the relay matrix can
be chosen carefully to amplify the desired signal strength and at the same time neutralize information leakage
in the multi-user scenario.
\section{General multi-user multi-antenna multi-carrier scenario}
In this section, we let the number of TXs and RXs be $K\geq2$. The TXs and RXs each have a single antenna and
the relay has $N$ antennas. Let the number of frequency subcarriers be $M$.
Denote the complex channel from TX $i$ to RX $j$ as a diagonal matrix $\bH_{ji} \in \bbC^{M}$ and the complex channel from
TX $i$ to the relay as $\F_{i} \in \bbC^{MN \times M}$ and from the relay to RX $j$ as $\G_{j} \in \bbC^{MN \times M}$. The signal received at the relay is,
\begin{equation}
\mathbf{y}_r= \sum_{i=1}^K \F_{i} \bP_i\x_i+ \mathbf{n}_r
\end{equation}
where $\F_i=\diag\left(\f_i(1), \ldots, \f_i(M) \right)$ and
$\x_i \in \bbC^{M \times 1}$ are the circular Gaussian transmit symbols from TX $i$, with zero mean and identity covariance matrix.
The matrix $\bP_i\in \bbC^{M}$ satisfies the power constraint:
\begin{equation}
\tr \left( \bP_i \bP_i^{\her} \right) \leq P_i^{max}.
\end{equation}
With the AF strategy, the relay multiplies the received signal $\y_r$ on the left by the processing matrix $\R$ and transmits $\R \y_r$.
The transmit power of the relay is constrained by $P_r^{max}$,
\begin{equation}\label{eqt:pow_constraint}
\tr\left( \R \left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right) \R^H\right) \leq P_r^{max}.
\end{equation}
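The relay power constraint \eqref{eqt:pow_constraint} is easy to evaluate, and a candidate $\R$ can be made feasible by simple rescaling. The sketch below uses randomly drawn stand-in channels and precoders (all sizes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
N, M, K = 2, 2, 2          # illustrative: relay antennas, subcarriers, users

def cplx(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

R = cplx(N * M, N * M)
F = [cplx(N * M, M) for _ in range(K)]     # stacked TX-to-relay channels F_i
P = [cplx(M, M) for _ in range(K)]         # precoders P_i

# covariance of the relay input: sum_i F_i P_i P_i^H F_i^H + I
Q = sum(Fi @ Pi @ Pi.conj().T @ Fi.conj().T for Fi, Pi in zip(F, P)) + np.eye(N * M)
power = np.real(np.trace(R @ Q @ R.conj().T))

P_r_max = 10.0
R_scaled = R * np.sqrt(P_r_max / power)    # rescale to meet the budget exactly
new_power = np.real(np.trace(R_scaled @ Q @ R_scaled.conj().T))
assert np.isclose(new_power, P_r_max)
```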
The received signal at RX $j$ is
\begin{equation}\label{eqt:in_out}
\y_j= \sum_{i=1}^K \left( \bH_{ji} + \G_{j}^{\her} \R \F_{i}\right) \bP_i \x_i + \G_{j}^{\her} \R \mathbf{n}_r + \n_j
\end{equation} where $\n_j$ is the circular Gaussian noise at RX $j$ with zero mean and identity covariance matrix and
$\G_{j}=\diag(\g_j(1), \ldots, \g_j(M))$.
For the ease of notation, we define the equivalent channel from $i$ to $j$ as
\begin{equation}\label{eqt:equiv}
\bar{\bH}_{ji}=\bH_{ji} + \G_{j}^H \R \F_{i}
\end{equation} and its $(f,m)$-element is $[\bar{\bH}_{ji}]_{fm}=h_{ji}(f)\delta_{fm}+ \g_j^{\her}(f) \R_{fm} \f_i(m)$, where $\delta_{fm}$ is the Kronecker delta; this is the equivalent channel
from user $i$ frequency $m$ to user $j$ frequency $f$.
Each RX is not only interested in decoding its own signal but also in eavesdropping on other TXs. In the following,
we define the worst case achievable secrecy rate with colluding eavesdroppers.
For messages $\x_i$,
all RXs except RX $i$ collaborate to form an eavesdropper with multiple antennas and the message $\x_i$ goes
through a multi-carrier MIMO channel to the colluding eavesdroppers. A worst case secrecy rate is then to assume that
all other messages $\x_j, j\neq i$ are decoded perfectly and subtracted before decoding $\x_i$.
The received signals at RX $i$ and the colluding eavesdroppers are
\begin{equation}\label{eqt:in_out_eave}\left\{
\begin{aligned}
\y_i &=\sum_{k=1}^K \bar{\bH}_{ik} \bP_k \x_k + \G^{\her}_{i} \R \mathbf{n}_r + \n_i\\
\y_{-i}&= \left[ \begin{array}{c}
\bar{\bH}_{1i}\\
\vdots\\
\bar{\bH}_{(i-1)i} \\
\bar{\bH}_{(i+1)i}\\ \vdots\\
\bar{\bH}_{Ki}
\end{array}
\right] \bP_i \x_i +
\left[ \begin{array}{c}
\G_{1}^{\her} \\
\vdots\\
\G_{i-1}^{\her} \\
\G_{i+1}^{\her} \\ \vdots\\
\G_{K}^{\her}
\end{array}
\right] \R \n_r+
\left[ \begin{array}{c}
\n_1\\
\vdots\\
\n_{i-1}\\
\n_{i+1}\\
\vdots\\
\n_{K}
\end{array}
\right]\\
&= \bar{\bH}_{-i} \bP_i \x_i + \G_{-i}^{\her} \R \n_r + \n_{-i}.
\end{aligned} \right.
\end{equation}
The secrecy rate of user $i$ is \cite{Khisti2010a},
\begin{equation}\label{eqt:secrecy_rate}
\begin{aligned}
r_i^s&= \Bigg( \cC\left( \I_{M} + \bar{\bH}_{ii} \bP_i \bP_i^{\her} \bar{\bH}^{\her}_{ii}
\left( \sum_{j \neq i} \bar{\bH}_{ij} \bP_j \bP_j^{\her} \bar{\bH}^{\her}_{ij} + \G^{\her}_{i}\R \R^{\her} \G_{i} + \I_{M}\right)^{-1} \right) \\
&- \cC \bigg( \I_{M(K-1)} + \bar{\bH}_{-i} \bP_i \bP_i^{\her} \bar{\bH}^{\her}_{-i} \left( \G^{\her}_{-i}\R \R^{\her} \G_{-i} + \I_{M(K-1)}\right)^{-1} \bigg) \Bigg)^+ .
\end{aligned}
\end{equation}
Recall from \eqref{eqt:equiv} that the equivalent channel from TX $j$ to RX $i$, $\bar{\bH}_{ij}$, is a function of the relay
processing matrix $\R$, $\bar{\bH}_{ij}=\bH_{ij} + \G_i^{\her} \R \F_j$. The optimization of the aforementioned
secrecy rates is highly complicated due to their non-convex structure. In the following, we propose the information leakage neutralization
technique \cite{Ho2011d} which is able to neutralize
all information leakage to all eavesdroppers in the air by choosing the relay strategy in a careful manner.
As illustrated in the previous section, with information leakage neutralization, the secrecy rate expression \eqref{eqt:secrecy_rate}
can be simplified to
\begin{equation}\label{eqt:secrecy_rate_multiuser}
r_i^s=\cC\left( \I_{M} + \bar{\bH}_{ii} \bP_i \bP_i^{\her} \bar{\bH}^{\her}_{ii}
\left( \G^{\her}_{i}\R \R^{\her} \G_{i} + \I_{M}\right)^{-1} \right).
\end{equation}
In the following section, we illustrate how we can choose $\R$ to achieve a secrecy rate as such.
\subsection{Information Leakage Neutralization}\label{sec:in}
We choose $\R$ such that the equivalent channel of message $\x_i$ to the eavesdropper in
\eqref{eqt:in_out_eave} is neutralized to zero. The challenge of information leakage neutralization in a multi-subcarrier environment, as compared to
the single-subcarrier case \cite{Ho2011d}, is that the information leakage neutralization constraints must be modified
to incorporate frequency sharing:
\begin{equation}\label{eqt:in_const}
\left(\bH_{ji} + \G_j^{\her} \R \F_i \right) \bP_i=\0, \hspace{1cm} i,j=1,\ldots, K, i \neq j.
\end{equation}
Note that we
consider the most general scenario where users may use only part of the spectrum and send fewer than $M$ data streams;
thus $\bP_i$ may have zero rows and zero columns.
In the following, we show the dependency of the number of antennas at the relay for information leakage neutralization
on these system parameters.
\PropBox{
\label{prop:in_dim}
The number of antennas at the relay, $N$, required to neutralize all information leakage from each of the $K$ users at each frequency subcarrier, in
a total of $M$ subcarriers, satisfies
\begin{equation}
N \geq \sqrt{\frac{K-1}{M}\sum_{i=1}^K S_i }
\end{equation}
where $S_i$ is the number of data streams sent by TX $i$.
}
For the proof, please refer to Appendix \ref{app:relay_in_num_ant}.
Proposition 1 offers the minimum number of antennas required to ensure secrecy, which
depends on the number of users $K$, the number of subcarriers $M$ and the number of data streams transmitted $S_i$.
\begin{itemize}
\item If every user employs full frequency multiplexing $S_i=M$, we have then
\begin{equation}
N \geq \sqrt{\frac{K-1}{M}\sum_{i=1}^K M}= \sqrt{K(K-1)}.
\end{equation} As $N$ is an integer, we have $N \geq K$, which is the same criterion as in the flat-fading case \cite{Ho2011d}.
\item If every user sends $S_i=aM$ data streams and $0 \leq a \leq 1$, we have then
\begin{equation}
N \geq \sqrt{\frac{K-1}{M} \sum_{i=1}^K a M} = \sqrt{a K(K-1)}.
\end{equation}
For example, in a scenario of $K=3$ users, $M=16$ frequency subcarriers and each user transmits
$S_i=8$ data streams $\left(a=\frac{1}{2}\right)$,
the relay must have at least $\left\lceil \sqrt{ \frac{1}{2} \cdot 3 \cdot 2} \right\rceil=
\left\lceil \sqrt{3} \right\rceil=2$
antennas to completely remove any information leakage from any TX to any RX. This is
less than $\lceil \sqrt{3(2)} \rceil=3$ if all users send $S_i=M=16$ data streams.
\item Note that the number of antennas required for information leakage neutralization is \emph{independent}
of the number of frequency subcarriers used by each user (the number of non-zero rows of $\bP_i$)\footnote{The reason is that even if a user does not transmit
on a certain frequency, the relay must make sure that it does not forward the user's information from other subcarriers to
this subcarrier, at which the eavesdroppers could decode the information.}. However, the power required to neutralize information leakage depends on
how crowded the subcarriers are. If
many frequency subcarriers are occupied, the relay may not have enough power to neutralize all information leakage, as we will see
in the following.
\end{itemize}
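For concreteness, the bound in Proposition 1 is easy to evaluate numerically. The following sketch is our illustration, not part of the paper; the helper name \texttt{min\_relay\_antennas} is ours. It reproduces the worked examples above:

```python
import math

def min_relay_antennas(K, M, streams):
    """Smallest integer N with N >= sqrt((K-1)/M * sum_i S_i), as in Proposition 1."""
    return math.ceil(math.sqrt((K - 1) / M * sum(streams)))

# K = 3 users, M = 16 subcarriers, S_i = 8 streams each (a = 1/2):
print(min_relay_antennas(3, 16, [8, 8, 8]))     # 2 antennas suffice
# Full frequency multiplexing S_i = M = 16: ceil(sqrt(K(K-1))):
print(min_relay_antennas(3, 16, [16, 16, 16]))  # 3 antennas
```

Note that the bound depends on the loads $S_i$ only through their sum, so halving every user's stream count can reduce the antenna requirement, exactly as in the example above.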
When the number of antennas at the relay is sufficient for information leakage neutralization, we can
use the following method to compute the relay forwarding matrix $\R$ for such purpose.
\PropBox{
Any relay matrix $\R$ satisfying the information leakage neutralization constraint \eqref{eqt:in_const} has the following form:
\begin{equation*}
\bvec(\R)= \A^{\dagger} \bb + \z
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\A&=\left[\left( \left( \hat{\bP}^{\tran}_1 \F_1^{\tran} \right) \otimes \G^{\her}_{-1} \right)^{\her},
\ldots, \left( \left( \hat{\bP}^{\tran}_K \F_K^{\tran} \right) \otimes \G^{\her}_{-K} \right)^{\her} \right]^{\her}\\
\bb&= \left[ -\bvec\left(\bH_{-1} \hat{\bP}_1\right)^{\her}, \ldots,
-\bvec\left( \bH_{-K} \hat{\bP}_K\right)^{\her}\right]^{\her}\\
\z&\in \nnull\left( \A \right).
\end{aligned}
\end{equation*} where $\hat{\bP}_i$ is the submatrix of $\bP_i$ containing its non-zero columns.
}
\vspace{0.2cm}
For the proof, please refer to Appendix \ref{app:relay_in_form}. From Proposition 2,
it follows that there is a minimum power requirement for information leakage neutralization.
\begin{Corollary}\label{cor:min_power}
The relay power budget required for information leakage neutralization satisfies
\begin{equation*}
P_r^{max} \geq \left(\A^{\dagger} \bb \right)^{\her} \left( \left( \sum_{i=1}^K \F_{i} \bP_i \bP_i^{\her} \F_{i}^{\her} + \I_{\mn} \right)
\otimes \I_{\mn} \right) \left(\A^{\dagger} \bb\right).
\end{equation*}
\vspace{0.05cm}
\end{Corollary}
For the proof, please refer to Appendix \ref{app:min_power}.
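The minimum-norm construction $\bvec(\R)=\A^{\dagger}\bb$ of Proposition 2 (the choice $\z=\0$) can be verified numerically. The sketch below is our illustration, not from the paper: a single-subcarrier, single-stream toy instance with random channels. It builds $\A$ and $\bb$ row by row via the vectorization identity $\g_j^{\her}\R\f_i=(\f_i^{\tran}\otimes\g_j^{\her})\bvec(\R)$ and checks that the resulting $\R$ neutralizes every cross link:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 3                        # users; relay antennas (N >= ceil(sqrt(K(K-1))) = 3)

def c(*shape):                     # random complex Gaussian channels
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

h = c(K, K)                        # h[j, i]: cross channel TX i -> RX j (diagonal unused)
f = [c(N) for _ in range(K)]       # f_i: TX i -> relay
g = [c(N) for _ in range(K)]       # g_j: relay -> RX j

rows, rhs = [], []
for i in range(K):
    for j in range(K):
        if j != i:
            # g_j^H R f_i = (f_i^T kron g_j^H) vec(R), vec(.) column-wise
            rows.append(np.kron(f[i].reshape(1, -1), g[j].conj().reshape(1, -1)))
            rhs.append(-h[j, i])
A = np.vstack(rows)                # K(K-1) x N^2
b = np.array(rhs)

R = (np.linalg.pinv(A) @ b).reshape(N, N, order='F')   # minimum-norm solution A^+ b

leak = max(abs(h[j, i] + g[j].conj() @ R @ f[i])
           for i in range(K) for j in range(K) if j != i)
print(leak)   # at machine-precision scale: all leakage neutralized
```

Since $\A$ has $K(K-1)=6$ rows and $N^2=9$ unknowns, the system is generically consistent, in agreement with the antenna requirement of Proposition 1.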
Depending on the available transmit power at the relay, one may have only enough power to neutralize information leakage but not enough
power to further improve the transmission rates. If the power resource is limited and
one must ensure secure transmission with as little power as possible, one can set $\z$ in Proposition 2 to zero. If secrecy rates have
high priority and transmit power is abundant, one can optimize $\z$ for the purpose of sum secrecy rate maximization.
In the following, we investigate algorithms to address these applications.
\section{Information Leakage Neutralization Algorithms }\label{sec:propose}
In the previous section, we have shown that secrecy rates \eqref{eqt:secrecy_rate_multiuser} are achievable by information leakage neutralization. Also,
in order to implement information leakage neutralization, the number of antennas at the relay, the number of frequency subcarriers and the number of users
in the system must satisfy the relation in Proposition 1. In Proposition 2, we computed the minimum relay power required in order to perform information
leakage neutralization. With more power available at the relay, we can improve the achievable secrecy rates by optimizing the relay matrix and
the precoding matrices. The optimization of the sum
secrecy rates can be stated formally as follows:
\boxeqn{
\begin{aligned}
\max_{\R,\{ \bP_i\}} \hspace{0.5cm} & \sum_{i=1}^{K} \cC\left( \I_{M} + \bar{\bH}_{ii} \bP_i \bP_i^{\her} \bar{\bH}^{\her}_{ii}
\left( \G^{\her}_{i}\R \R^{\her} \G_{i} + \I_{M}\right)^{-1} \right)\\
\st \hspace{0.5cm} & \tr \left( \bP_i \bP_i^{\her} \right) \leq P_i^{max}\\
& \tr\left(\R \left(\sum_{i=1}^K \F_i \bP_i \bP_i^{\her} \F_i^{\her} \right) \R^{\her} \right) \leq P_r^{max}.
\end{aligned}
}
In the following, we propose two algorithms. The first algorithm EFFIN, in Section \ref{sec:effin}, considers the scenario where $\z=\0$ in Proposition 2
and all users transmit
the maximum number of data streams allowed $S_i=M$. We observe that in this situation, information leakage neutralization
decomposes the system into $KM$ parallel channels and consequently both the relay processing matrix $\R$ and the precoding matrix $\bP_i$
can be computed very efficiently. The second algorithm OPTIN, in Section \ref{sec:optin}, investigates a systematic method for the computation of
$\R$ and $\bP_i$ when there is enough transmit power budget at the relay to allow further optimization of secrecy rates.
\subsection{Efficient Information Leakage Neutralization (EFFIN)}\label{sec:effin}
When every user transmits $S_i=M$ data streams and $\bP_i$ is invertible, we propose the following algorithm, which decomposes the
$K$-user interference relay channel with $M$ frequency subcarriers and $N$ antennas at the relay into $KM$ parallel secure channels
\emph{with no interference and no information leakage}.
The information leakage neutralization criterion
$
\left( \bH_{ij} + \G_{i}^{\her} \R \F_j \right) \bP_j = \0,
$ when $\bP_j$ is invertible, is equivalent to
\begin{equation*}
\bH_{ij} + \G_{i}^{\her} \R \F_j = \0.
\end{equation*} Due to the block diagonal structure of $\bH_{ij}$, $\G_i$ and $\F_j$, one feasible solution of the above equation is a block diagonal $\R$.
With the block diagonal structure, the resulting secrecy rates may be suboptimal, but the information leakage neutralization constraint
decomposes into constraints on the diagonal blocks $\R_{mm}$ of $\R$:
\begin{equation}
h_{ji}(m)+ \g_j^{\her}(m) \R_{mm} \f_i(m)=0, \hspace{0.3cm} i,j=1,\ldots,K, i\neq j.
\end{equation} Following the same approach as before, we stack the constraints for all $j \neq i$ and define
\begin{equation*}
\begin{aligned}
\h_{-i}(m)&=\left[h^{\her}_{1i}(m), \ldots, h^{\her}_{(i-1)i}(m), h^{\her}_{(i+1)i}(m),\ldots, h^{\her}_{Ki}(m)\right]^{\her}\\
\G_{-i}(m)&=\left[\g_1(m),\ldots,\g_{i-1}(m),\g_{i+1}(m),\ldots, \g_{K}(m) \right].
\end{aligned}
\end{equation*} We obtain $\h_{-i}(m) + \G^{\her}_{-i}(m) \R_{mm} \f_i(m)=\0_{(K-1) \times 1}$ which is equivalent to
\begin{equation*}
\left(\f_i^{\tran}(m) \otimes \G^{\her}_{-i}(m)\right) \bvec\left(\R_{mm} \right)= - \h_{-i}(m).
\end{equation*} Stacking constraints for all $i$, we have
\begin{equation*}
\A(m)=\left[\begin{array}{c}
\left(\f_1^{\tran}(m) \otimes \G^{\her}_{-1}(m)\right) \\
\vdots\\
\left(\f_K^{\tran}(m) \otimes \G^{\her}_{-K}(m)\right)
\end{array} \right], \hspace{0.5cm}
\bb(m)= \left[\begin{array}{c}
- \h_{-1}(m) \\
\vdots\\
- \h_{-K}(m)
\end{array}
\right].
\end{equation*} With a limited power budget at the relay, we implement information leakage neutralization with
the least relay transmit power. Utilizing the result from Proposition 2, the relay matrix has its $m$-th diagonal block equal to
\begin{equation} \label{eqt:block_diag_R}
\R_{mm}= \bvec^{-1}\left(\left(\A(m) \right)^{\dagger} \bb(m)\right)
\end{equation} where $\bvec^{-1}(\cdot)$ reverses the column-wise vectorization, mapping a vector back to an $N \times N$ matrix.
After the computation of the relay matrix in \eqref{eqt:block_diag_R}, $\R=\diag\left( \R_{11},\ldots, \R_{MM}\right)$,
the optimal precoding matrices $\{\bP_i\}$ are computed by solving $\cQ_1$.
\boxeqn{
\begin{aligned}
\cQ_1: \hspace{0.1cm} \max_{\{ \Q_i \}, \Q_i \succeq 0} \hspace{0.3cm} & \sum_{i=1}^K \cC \left( \I_M + \Q_i \W_i \right)\\
\st \hspace{0.3cm} & \tr \left( \Q_i \right) \leq P_i^{max}, \hspace{0.2cm} i=1,\ldots, K,\\
& \sum_{i=1}^K \tr \left( \Q_i \X_i \right) \leq \bar{P}_r^{max}.
\end{aligned}
}
where we replace $\bP_i \bP_i^{\her}$ by positive semi-definite variable $\Q_i$ and denote the following matrices
\begin{equation}
\begin{aligned}
\W_i&=\left( \bH_{ii}+ \G_i^{\her}\R \F_i \right)^{\her} \left( \G_i^{\her} \R \R^{\her} \G_i+ \I_M\right)^{-1} \left( \bH_{ii}+ \G_i^{\her}\R \F_i \right),\\
\X_i&=\F_{i}^{\her}\R^{\her} \R \F_{i}, \\
\bar{P}_r^{max}&= P_r^{max}- \tr \left( \R \R^{\her}\right).
\end{aligned}
\end{equation}
The objective in $\cQ_1$
is concave in $\Q_i$ as $\W_i$ is positive semi-definite and the constraints are linear in $\Q_i$.
Thus, $\cQ_1$ is a semi-definite program and can be solved readily using convex optimization solvers, e.g. CVX%
\footnote{Given the block diagonal $\R$ in \eqref{eqt:block_diag_R}, the equivalent channel $\W_i$ and the matrix $\X_i$ are also
block diagonal. It is possible to solve $\cQ_1$ using water-filling with $K+1$ Lagrange multipliers, and for large problem sizes
a tailor-made water-filling method may be more computationally efficient. For medium-sized problems, and for illustrative purposes,
we propose here to solve it by semi-definite programming.}. The optimal $\bP_i$ is obtained by performing eigenvalue
decomposition on $\Q_i= \U_i \D_i \U^{\her}_i$ and $\bP_i=\U_i \D_i^{1/2}$.
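The final factorization step, recovering a precoder from the optimized covariance, is standard; the following minimal sketch is our illustration (random positive semi-definite $\Q$ standing in for a solver output), using NumPy's Hermitian eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Q = B @ B.conj().T                              # a positive semi-definite covariance

d, U = np.linalg.eigh(Q)                        # Q = U diag(d) U^H with real d >= 0
P = U @ np.diag(np.sqrt(np.clip(d, 0, None)))   # P = U D^{1/2}

print(np.allclose(P @ P.conj().T, Q))           # True: P P^H recovers Q
```

The clipping guards against tiny negative eigenvalues introduced by floating-point roundoff.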
The pseudocode of EFFIN is given in Algorithm \ref{algo:effin}.
\begin{algorithm}
\caption{The pseudo-code for Efficient Information Leakage Neutralization (EFFIN) \label{algo:effin}}
\begin{algorithmic}[1]
\For{$m=1 \to M$} \Comment{Compute block diagonal relay processing matrix }
\State Compute $\R_{mm}=\bvec^{-1}\left(\left(\A(m) \right)^{\dagger} \bb(m)\right)$ with
\begin{equation*}
\A(m)=\left[\begin{array}{c}
\left(\f_1^{\tran}(m) \otimes \G^{\her}_{-1}(m)\right) \\
\vdots\\
\left(\f_K^{\tran}(m) \otimes \G^{\her}_{-K}(m)\right)
\end{array} \right], \hspace{0.5cm}
\bb(m)= \left[\begin{array}{c}
-\h_{-1}(m) \\
\vdots\\
-\h_{-K}(m)
\end{array}
\right].
\end{equation*}
\EndFor
\State The relay processing matrix is $\R=\diag\left(\R_{11},\ldots, \R_{MM} \right)$.
\State Solve $\cQ_1$ using convex optimization solvers and obtain optimal $\{\Q_i\}$.
\For{$i=1 \to K$}\Comment{Compute precoding matrices}
\State Perform eigen-value decomposition, $\Q_i=\U_i \D_i \U_i^{\her}$. Set $\bP_i=\U_i \D_i^{1/2}$.
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Optimized Information Leakage Neutralization (OPTIN)}\label{sec:optin}
In the previous subsection, we discussed a simple, efficient and power-saving choice of the relay matrix and precoding matrices for secure transmission.
One drawback of this efficient method is that its performance may be suboptimal. In this subsection, we discuss how to choose the relay and precoding matrices
such that the sum secrecy rates are optimized while ensuring zero information leakage.
To this end, we rewrite the information leakage neutralization constraint \eqref{eqt:in_const} to promote the optimization of secrecy rates,
\begin{equation}\label{eqt:in_block}
\left(\bH + \G^{\her} \R \F \right) \bP = \T
\end{equation} where $\bH=\left[\bH_{11}, \ldots, \bH_{1K}; \ldots; \bH_{K1},\ldots, \bH_{KK} \right]$, $\G^{\her}=\left[ \G_1^{\her}; \ldots; \G_K^{\her}\right]$,
$\F=\left[ \F_1, \ldots, \F_K\right]$ and $\bP=\diag(\bP_1,\ldots, \bP_K)$. The block diagonal matrix $\T=\diag(\T_1,\ldots, \T_K)$ is
the new optimization variable. $\T_i$ is the equivalent desired channel from TX $i$ to RX $i$ as $\T_i= (\bH_{ii} + \G^{\her}_i \R \F_i) \bP_i$.
By applying pseudo-inverse
\footnote{Note that $\G^{\her}$
has dimension $MK \times MN$ and $\F \bP$ has dimension $MN \times KM$. If $MN\geq MK$,
then $\G^{\her \dagger}= \G \left( \G^{\her} \G\right)^{-1}$ and
$ \left( \F \bP \right)^{\dagger}=\left( \left( \F \bP \right)^{\her}\left( \F \bP \right)\right)^{-1}\left( \F \bP \right)^{\her}$.
If $MN< KM$,
then $\G^{\her \dagger}=\left(\G \G^{\her}\right)^{-1}\G$ and
$\left( \F \bP \right)^{\dagger}=\left(\F \bP\right)^{\her} \left( \F \bP \left(\F \bP\right)^{\her}\right)^{-1}$.}
of $\G^{\her}$ and $\F \bP$ ($\G^{\her \dagger}$ and $\left(\F \bP \right)^\dagger$ respectively),
one can rewrite \eqref{eqt:in_block} to the following
\begin{equation}\label{eqt:r_i^sntermsof_t}
\R= \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \F \bP \right)^{\dagger}.
\end{equation}
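The closed-form pseudo-inverses quoted in the footnote are the usual left and right inverses for full-column-rank and full-row-rank matrices, and they agree with the Moore--Penrose inverse in those cases. A quick numerical check (our sketch; the random matrices merely stand in for $\G^{\her}$ and $\F\bP$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Tall matrix with full column rank: pinv is the left inverse (A^H A)^{-1} A^H.
A = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
left = np.linalg.inv(A.conj().T @ A) @ A.conj().T
print(np.allclose(left, np.linalg.pinv(A)))    # True

# Wide matrix with full row rank: pinv is the right inverse A^H (A A^H)^{-1}.
W = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
right = W.conj().T @ np.linalg.inv(W @ W.conj().T)
print(np.allclose(right, np.linalg.pinv(W)))   # True
```

Random Gaussian matrices have full rank almost surely, which is why the explicit inverses above are well defined.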
The maximum achievable sum secrecy rate is the solution of the following problem
\begin{subequations}
\begin{align}
\max_{ \R, \T, \{\bP_i \}} \hspace{0.3cm} &
\sum_{i=1}^K \cC \left( \I_M + \T_i \bP_i \bP_i^{\her} \T_i^{\her} \left( \G_i^{\her} \R \R^{\her} \G_i+ \I_M\right)^{-1} \right)\\
\st \hspace{0.3cm} & \tr \left( \bP_i \bP_i^{\her}\right) \leq P_i^{max}, \hspace{0.2cm} i=1, \ldots, K, \label{opt:trans_pow_constraint}\\
& \left(\bH + \G^{\her} \R \F \right) \bP = \T, \label{opt:in_constraint}\\
& \tr \left( \R \left( \F \bP \bP^{\her} \F^{\her} + \I_{\mn}\right) \R^{\her} \right) \leq P_r^{max}\label{opt:relay_pow_constraint}\\
& \T=\diag\left( \T_1,\ldots, \T_K\right).
\end{align}
\end{subequations}
Note that in the objective function, the information leakage is neutralized for each user.
Constraints \eqref{opt:trans_pow_constraint} and \eqref{opt:relay_pow_constraint} are
the transmit power constraints at the TXs and at the relay respectively. The information leakage neutralization constraint is written as \eqref{opt:in_constraint}.
The optimization is not jointly convex in $\R$, $\T$ and $\{\bP_i\}$.
To simplify the optimization problem,
we propose the following iterative optimization algorithm. Given $\R$ and $\T$, we solve for the $\bP_i$ optimally using $\cQ_1$ from EFFIN.
The second part of the iterative algorithm computes the optimal relay strategy $\R$ and the auxiliary variable $\T$ (by solving $\cQ_2$),
given the precoding matrices $\bP_i$ obtained as the solutions of $\cQ_1$.
\boxeqn{
\begin{aligned}
\cQ_2: \hspace{0.3cm} \max_{ \R, \T} \hspace{0.3cm} & \sum_{i=1}^K \cC \left( \I_M + \T_i \T_i^{\her} \left( \G_i^{\her}
\R \R^{\her} \G_i+ \I_M\right)^{-1} \right)\\
\st \hspace{0.3cm} & \R= \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \F \bP \right)^{\dagger},\\
& \tr \left( \R \left( \F \bP \bP^{\her} \F^{\her} + \I_{\mn}\right) \R^{\her} \right) \leq P_r^{max},\\
& \T=\diag(\T_1,\ldots, \T_K).
\end{aligned}
}
Problem $\cQ_2$ is non-convex. The major challenge is due to the sum of log-determinants in the objective function and
the equality constraints.
In the following, we utilize the first equality constraint and replace $\R$ as a function of $\T$.
The optimization problem $\cQ_2$ can be written as,
\boxeqn{
\begin{aligned}
\cQ_2': \hspace{0.3cm} \max_{ \T} \hspace{0.3cm} & \sum_{i=1}^K \left( \cC \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^{\her} \right)
-\cC \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^{\her} \right) \right)\\
\st \hspace{0.3cm} & \tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \tilde{\F}+ \I_{MK}\right)
\left( \T- \bH \bP\right)^{\her}\G^{ \dagger}\right)\leq P_r^{max},\\
& \bar{\T}_i =\left[ \T_i, \I_M\right],\\
& \T=\diag(\T_1,\ldots, \T_K).
\end{aligned}
}
Please see the proof and the definition of $\X_i, \Y_i, \Z_i$ in \eqref{eqt:xyz} in Appendix \ref{app:cq2}.
Although the optimization problem is simplified, it is still non-convex in $\T$. In the following, we propose to solve $\cQ_2'$ with a gradient descent method.
To this end, we write the Lagrangian of $\cQ_2'$ as $L(\T,\lambda)$,
\begin{equation}\label{eqt:lagrangian}
\begin{aligned}
L(\T,\lambda)&=\sum_{i=1}^K \left( \cC \left( \X_i + \bar{\T}_i \Z_i \bar{\T}_i^{\her} \right)
-\cC \left( \X_i + \bar{\T}_i \Y_i \bar{\T}_i^{\her} \right) \right)\\
& -\lambda \left( \tr \left( \G^{\her \dagger} \left( \T- \bH \bP\right) \left( \tilde{\F}+ \I_{MK}\right)
\left( \T- \bH \bP\right)^{\her}\G^{ \dagger}\right)- P_r^{max} \right)\\
&= \sum_{i=1}^K f_i(\T_i) - \lambda g(\T).
\end{aligned}
\end{equation}
The gradient of the Lagrangian with respect to $\T^*$ is
\begin{equation}\label{eqt:gradient}
\begin{aligned}
\cD_{\T^*} L(\T,\lambda)
&= \frac{1}{\ln(2)} \left[ \begin{array}{cccc}
\cD_{\T_1^*} f_1(\T_1) & \0_M & \ldots & \0_M\\
\0_M & \cD_{\T_2^*} f_2(\T_2) & \ldots & \0_M\\
& & \ddots & \vdots\\
\0_M &\ldots & & \cD_{\T_K^*} f_K(\T_K)
\end{array}
\right]\\
& - \lambda \G^{ \dagger}\G^{\her \dagger} \left( \T- \bH \bP\right)
\left( \tilde{\F}+ \I_{KM}\right).
\end{aligned}
\end{equation} Please see the proof in Appendix \ref{app:lagrangian}.
We summarize in Algorithm \ref{algo:optin} the proposed iterative algorithm on sum secrecy rate optimization.
\begin{algorithm}
\caption{The pseudo-code for Optimized Information Leakage Neutralization (OPTIN)\label{algo:optin}}
\begin{algorithmic}[1]
\State Initialize $\{\bP_i\}$ and $\R$ as the solutions of EFFIN.
\While{convergence not reached}\Comment{Compute relay processing matrix}
\State Solve $\cQ_2'$ using gradient descent method with gradient \eqref{eqt:gradient} and obtain optimal solution $\T$.
Obtain relay processing matrix $\R$ from $\T$ using \eqref{eqt:r_i^sntermsof_t}.
\State With $\R$ and $\T$ above, solve $\cQ_1$ using convex optimization solvers and obtain optimal $\{\Q_i\}$.
\For{$i=1 \to K$}\Comment{Compute precoding matrices}
\State Perform eigen-value decomposition, $\Q_i=\U_i \D_i \U_i^{\her}$. Set $\bP_i=\U_i \D_i^{1/2}$.
\EndFor
\If{ sum secrecy rate improvement is less than a predefined threshold}
\State Convergence reached. Break.
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
\section{Simulation Results}\label{sec:sim}
To illustrate the effectiveness of the proposed algorithms, we provide in this section numerical simulations for different system settings. As an
example, we simulate the secrecy rates of a relay-assisted network with $K=2$ users, $M=8$ frequency subcarriers and $N=2$ antennas at the relay,
unless otherwise stated.
To examine the performance of the algorithms with respect to the system signal-to-noise ratio, we vary the transmit power constraint at the relay from $0$ to $30 \dB$
while keeping the transmit power constraint at the TXs at $10 \dB$ (see Figure \ref{fig:relay_pow}). Similarly, we examine the algorithms by varying
the transmit power constraint at the TXs from $0$ to $30 \dB$ while keeping the relay power constraint at $23$, $27$ or $30 \dB$. Note that in varying the
power constraints, we do not force the power of the optimized precoding matrices and the relay processing matrix to equal the power constraints.
We compare the following algorithms:
\begin{itemize}
\item Baseline 1 (Repeater): the relay is a layer 1 relay and is only able to forward signals without additional signal processing. This corresponds to
setting $\R=\I_{MN} \sqrt{\frac{P_r^{max}}{MN}}$.
\item Baseline 2 (IC): the relay shuts down, i.e. $\R=\0_{MN}$, and we obtain an interference channel in which the users eavesdrop on each other.
\item Proposed algorithm EFFIN: an efficient relay and precoding matrices optimization algorithm outlined in Algorithm \ref{algo:effin}.
\item Proposed algorithm OPTIN: an optimized algorithm whose performance exceeds that of EFFIN at the price of higher complexity. OPTIN is outlined in
Algorithm \ref{algo:optin}.
\end{itemize}
For each baseline algorithm, we examine the effect of spectrum sharing on achievable secrecy rates by employing either one of the following spectrum
sharing methods:
\begin{itemize}
\item Full spectrum sharing (FS): users are allowed to use the entire spectrum. Each TX measures the channel qualities of the direct channel and
of the channels from itself to the other RXs. Based on the measured channel qualities, each TX excludes frequency subcarriers with zero secrecy rates and transmits
on the subcarriers with non-zero secrecy rates. For subcarriers on which more than one user would like to transmit, we assume that the TXs coordinate
so that the TX with the higher secrecy rate transmits on that subcarrier. Despite such coordination, each user eavesdrops on the other users on each subcarrier.
\item Orthogonal spectrum sharing (OS): users are assigned exclusive portions of the spectrum. Each TX excludes subcarriers with zero secrecy rates and
transmits on the subcarriers with non-zero secrecy rates. Each user eavesdrops on the other users on each subcarrier.
\end{itemize}
\subsection{Secrecy rates with increasing relay power}
In Figure \ref{fig:relay_pow}, we show the achievable sum secrecy rates as the transmit power constraint at the relay varies from $0$ to $30 \dB$
while the transmit power constraint at the TXs is kept at $10 \dB$. As the IC does not
utilize the relay, its achievable sum secrecy rates (plotted with triangles) remain constant as the
relay power constraint increases. As expected from intuition, the performance of IC with
FS is better than with OS, because OS imposes the additional constraint of subcarrier assignment.
The sum secrecy rates achieved by the repeater decrease with the relay transmit power,
due to the increased amplification noise in AF relaying. Interestingly, a non-intelligent
relaying scheme such as a repeater may decrease the secrecy rate significantly, performing even worse than switching
off the relay. However, by utilizing an intelligent relay and optimizing the relaying scheme, one
can improve the achievable secrecy rate significantly: about 550\% over a simple repeater and about 200\%
over IC. Although EFFIN is very simple and efficient, it achieves 94.5\% of the sum secrecy rate attained by the
more complicated algorithm OPTIN.
\begin{figure}
\begin{center}
\input{fig_sim_relaypow}
\caption{The achievable secrecy rates of a two-user IRC with 8 frequency subcarriers are shown with varying relay power constraint.
The TX power constraints are $10 \dB$ and there
are two antennas at the relay. The proposed schemes EFFIN and OPTIN outperform the baseline algorithms Repeater and IC by 550\% and 200\%, respectively.}
\label{fig:relay_pow}
\end{center}
\end{figure}
\subsection{Secrecy rates with increasing TX power}
In Figure \ref{fig:tx_pow}, we simulate the achievable sum secrecy rate as the transmit power constraint at the TXs varies from
$0$ to $30 \dB$ while the relay power constraint is kept at $23$, $27$ or $30 \dB$. As the transmit power at the TXs
increases, the sum secrecy rates saturate for both baseline algorithms, Repeater and IC. With the proposed information
leakage neutralization, the sum secrecy rates grow unbounded with the TX power, as each user enjoys
a leakage-free frequency channel. Note that the sum secrecy rates achieved with relay power constraints of $23$, $27$ and $30 \dB$ are
plotted in dotted, dashed and solid lines, respectively. When only $23 \dB$ is available, there is only enough power
for information leakage neutralization, but
not enough to further optimize the system performance. Hence, the achievable sum secrecy rates of EFFIN and OPTIN overlap.
With more power available, it is possible to optimize the sum secrecy rates while neutralizing information leakage, and the performance
of OPTIN exceeds that of EFFIN.
\begin{figure}
\begin{center}
\input{fig_sim_txpow}
\caption{The achievable secrecy rates of a two-user IRC with 8 frequency subcarriers are shown with varying transmitter power constraints.
The relay power constraint is $30 \dB$ and there
are two antennas at the relay. The secrecy rates achieved by EFFIN and OPTIN grow unbounded with the transmit power at the TXs, whereas
the secrecy rates achieved by the baseline algorithms saturate in the high-SNR regime.}
\label{fig:tx_pow}
\end{center}
\end{figure}
\subsection{Secrecy rates with larger systems}
In Figure \ref{fig:sim_txpow_m16}, we examine the performance of the proposed algorithms in a slightly larger system with
$N=4$ antennas at the relay and $M=16$ frequency subcarriers. The relay processing matrix is therefore a $64 \times 64$ matrix.
The proposed schemes EFFIN and OPTIN outperform the baseline algorithms Repeater and IC by 200\%, and the efficient EFFIN algorithm achieves 94.86\%
of the sum secrecy rate performance of OPTIN.
\begin{figure}
\begin{center}
\input{fig_sim_txpow_m16}
\caption{The achievable sum secrecy rates of a two-user IRC with 16 frequency subcarriers and 4 antennas at the relay are shown with varying
relay power constraint.
The TX power constraints are $10 \dB$. The proposed schemes EFFIN and OPTIN outperform the baseline algorithms Repeater and IC by 200\%. EFFIN achieves 94.86\%
of the sum secrecy rate performance of OPTIN.}
\label{fig:sim_txpow_m16}
\end{center}
\end{figure}
\input{appendices}
\bibliographystyle{IEEEbib}
In 1985, Bloom \cite{Bloom} proved a two-weight version of the celebrated commutator theorem of Coifman, Rochberg and Weiss \cite{CRW}.
Specifically, \cite{Bloom} characterized the two-weight norm of the commutator $[b, H]$ with the Hilbert transform in terms of the norm of $b$
in a certain weighted BMO space:
$$ \| [b, H] : L^p(\mu) \rightarrow L^p(\lambda)\| \simeq \|b\|_{BMO(\nu)}, $$
where $\mu, \lambda$ are $A_p$ weights, $1<p<\infty$, and $\nu := \mu^{1/p}\lambda^{-1/p}$.
Recently, this was extended to the $n$-dimensional case of Calder\'{o}n-Zygmund operators in \cite{HLW2}, using the modern dyadic methods
started by \cite{P} and continued in \cite{HytRep}. The main idea in these methods is to represent continuous operators like the Hilbert transform
in terms of dyadic shift operators.
This theory was recently extended to biparameter singular integrals in \cite{MRep}.
In this paper we extend the Bloom theory to commutators with
biparameter Calder\'{o}n-Zygmund operators, also known as Journ\'{e} operators, and characterize their norms
in terms of a weighted version of the little bmo space of Cotlar and Sadosky \cite{CotlarSadosky}.
The main results are:
\begin{thm}[Upper Bound] \label{T:UB}
Let $T$ be a biparameter Journ\'{e} operator on $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1}\otimes\mathbb{R}^{n_2}$, as defined in Section \ref{Ss:JourneDef}.
Let $\mu$ and $\lambda$ be $A_p(\mathbb{R}^{\vec{n}})$ weights, $1<p<\infty$, and define $\nu := \mu^{1/p} \lambda^{-1/p}$. Then
\begin{equation}
\| [b, T] : L^p(\mu) \rightarrow L^p(\lambda) \| \lesssim \|b\|_{bmo(\nu)},
\end{equation}
where $\|b\|_{bmo(\nu)}$ denotes the norm of $b$ in the weighted little $bmo(\nu)$ space on $\mathbb{R}^{\vec{n}}$.
\end{thm}
We make a few remarks about the proof of this result. At its core, the strategy is the same as in \cite{HLW2},
and may be roughly stated as:
\begin{enumerate}
\item Use a representation theorem to reduce the problem from bounding the norm of $[b, T]$ to bounding the norm of
$[b, \text{Dyadic Shift}]$.
\item Prove the two-weight bound for $[b, \text{Dyadic Shift}]$ by decomposing into paraproducts.
\end{enumerate}
However, the biparameter case presents some significant new obstacles. In \cite{HLW2}, $T$ was a Calder\'{o}n-Zygmund operator on
$\mathbb{R}^n$, and the representation theorem was that of Hyt\"{o}nen \cite{HytRep}. In the present paper, $T$ is a biparameter Journ\'{e} operator on $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$ (see Section \ref{Ss:JourneDef}) and we use Martikainen's representation theorem \cite{MRep} to reduce the problem to commutators $[b, \mathbbm{S}_{\bm{\mathcal{D}}}]$, where $\mathbbm{S}_{\bm{\mathcal{D}}}$ is now a \textit{biparameter} dyadic shift. These can be cancellative, i.e., all Haar functions have mean zero (defined in Section \ref{Ss:CShifts}), or non-cancellative (defined in Section \ref{Ss:NCShifts}). The strategy is summarized in Figure \ref{fig:M1}.
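As a concrete illustration of the dyadic machinery behind these shifts (our sketch, not from the paper): in one parameter, on $[0,1)$ discretized into $2^3$ cells, the cancellative Haar functions have mean zero and, together with the non-cancellative average, form an orthonormal basis:

```python
import numpy as np

def haar_matrix(n):
    """Rows: the normalized average, then the Haar functions h_I = |I|^{-1/2}(1_left - 1_right)."""
    H = [np.ones(n) / np.sqrt(n)]          # non-cancellative average
    size = n
    while size > 1:                        # sweep dyadic intervals from coarse to fine
        half = size // 2
        for start in range(0, n, size):
            h = np.zeros(n)
            h[start:start + half] = 1.0
            h[start + half:start + size] = -1.0
            H.append(h / np.sqrt(size))
        size = half
    return np.array(H)

H = haar_matrix(8)
print(np.allclose(H[1:].sum(axis=1), 0))   # True: cancellative rows have mean zero
print(np.allclose(H.T @ H, np.eye(8)))     # True: orthonormal, so coefficients reconstruct f
```

Biparameter shifts act on tensor products of two such bases, one in each variable, which is why their coefficients need not factor as in a composition of one-parameter shifts.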
The main difficulty arises from the structure of the biparameter dyadic shifts. At first glance, the cancellative shifts are ``almost'' compositions of two one-parameter shifts $\mathbbm{S}_{\mathcal{D}_1}$ and $\mathbbm{S}_{\mathcal{D}_2}$ applied in each variable -- if this were so, many of the results would follow trivially by iteration of the one-parameter results. Unfortunately, there is no reason for the coefficients $a_{P_1Q_1R_1P_2Q_2R_2}$ in the biparameter shifts to ``separate'' into a product $a_{P_1Q_1R_1} \cdot a_{P_2Q_2R_2}$, as would be required in a composition of two one-parameter shifts. Therefore, many of the inequalities needed for biparameter shifts must be proved from scratch.
Even more difficult is the case of non-cancellative shifts. As outlined in Section \ref{Ss:NCShifts}, these are really paraproducts, and there are three possible types that arise from the representation theorem:
\begin{enumerate}
\item Full standard paraproducts;
\item Full mixed paraproducts;
\item Partial paraproducts.
\end{enumerate}
This problem was considered previously in \cite{OPS} and \cite{OP} for the unweighted, $p = 2$ case.
In \cite{OPS} it was shown that
\begin{equation} \label{E:OPS}
\| [b, T] : L^2(\mathbb{R}^{\vec{n}}) \rightarrow L^2(\mathbb{R}^{\vec{n}}) \| \lesssim \|b\|_{bmo(\mathbb{R}^{\vec{n}})},
\end{equation}
where $T$ is a \textit{paraproduct-free} Journ\'{e} operator. This restriction essentially means that all the dyadic shifts in the representation of $T$ are \textit{cancellative}, so the case of non-cancellative shifts remained open. This gap was partially filled in \cite{OP}, which treats the case of non-cancellative shifts of standard paraproduct type. So the case of general Journ\'{e} operators, which includes non-cancellative shifts of mixed and partial type in the representation, remained open even in the unweighted, $p = 2$ case. These types of paraproducts are notoriously difficult -- see also \cite{MOrponen} for a wonderful discussion of this issue.
We fill this gap in Section \ref{Ss:NCShifts}, where we prove two-weight bounds of the type
$$ \|[b, \mathbbm{S}_{\bm{\mathcal{D}}}] : L^p(\mu) \rightarrow L^p(\lambda) \| \lesssim \|b\|_{bmo(\nu)},$$
where $\mathbbm{S}_{\bm{\mathcal{D}}}$ is a non-cancellative shift. The same is proved for cancellative shifts in Section \ref{Ss:CShifts}.
\begin{figure}
\centering
\begin{tikzpicture}[
level 1/.style={sibling distance=100mm},
edge from parent/.style={<-,draw},
>=latex]
\node[root] {$\| [b, T] : L^p(\mu) \rightarrow L^p(\lambda) \| \lesssim \|b\|_{bmo(\nu)}$}
child [below=.9cm]{node[level 2] (c1)
{$\| [b, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}}] : L^p(\mu) \rightarrow L^p(\lambda) \| \lesssim \|b\|_{bmo(\nu)}$\\
with at most polynomial bounds in $i, j$.}
edge from parent node[left,draw=none] {Martikainen representation theorem}};
\begin{scope
\node[level 3] [below = .5cm of c1] (c11) {Cancellative Shifts: \\ Theorem \ref{T:CShifts}};
\node[level 3] [right = 2cm of c11] (cr) {Two-weight bounds for paraproducts: \\ Section \ref{S:Para}};
\node[level 3] [below of = c11] (c12) {Non-Cancellative Shifts};
\node[level 4] [below right = of c12] (c121) {Full standard paraproduct:\\ Theorem \ref{T:NCS-1}};
\node[level 4] [below of = c121] (c122) {Full mixed paraproduct:\\ Theorem \ref{T:NCS-2}};
\node[level 4] [below of = c122] (c123) {Partial paraproduct: \\ Theorem \ref{T:NCS-3}};
\end{scope}
\draw[<-] (c1.195) |- (c11.west);
\draw[<-] (c1.195) |- (c12.west);
\draw[<-] (c12) |- (c121);
\draw[<-] (c12) |- (c122);
\draw[<-] (c12) |- (c123);
\draw[<-] (c11) -- (cr);
\draw[<-] (c121.east) -- + (1,0) |- (cr);
\draw[<-] (c122.east) -- + (1,0) |- (cr);
\draw[<-] (c123.east) -- + (1,0) |- (cr);
\end{tikzpicture}
\caption{Strategy for Theorem \ref{T:UB}} \label{fig:M1}
\end{figure}
At the backbone of all these proofs will be the biparameter paraproducts, developed in Section \ref{S:Para}, and a variety of biparameter square functions, developed in Section \ref{S:BDSF}.
For instance, in the case of the cancellative shifts, one can decompose the commutator as
$$ [b, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f = \sum \: [\mathsf{P}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f + \sum \: [\mathsf{p}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f + \mathcal{R}_{\vec{i},\vec{j}}f.$$
Here $\mathsf{P}_{\mathsf{b}}$ runs through nine paraproducts associated with \textit{product BMO}, and $\mathsf{p}_{\mathsf{b}}$ runs through six paraproducts associated with \textit{little bmo}, so we are dealing with fifteen paraproducts in total in the biparameter case.
Some of these are straightforward generalizations of the one-parameter paraproducts, while some are more complicated ``mixed'' paraproducts. Two-weight bounds are proved for all these paraproducts in Section \ref{S:Para}, building on two essential blocks: the biparameter square functions in Section \ref{S:BDSF}, and the weighted $H^1 - BMO$ duality in the product setting, developed in Section \ref{S:BWBMO}. In fact, Section \ref{S:BWBMO} is a self-contained presentation of large parts of the weighted biparameter BMO theory.
Once the paraproducts are bounded, all that is left is to bound the so-called ``remainder term'' $\mathcal{R}_{\vec{i},\vec{j}}f$, of the form $\Pi_{\mathbbm{S} f}b - \mathbbm{S} \Pi_f b$, where one can no longer appeal directly to the paraproducts. At this point however, things become very technical, so bounding the remainder terms is no easy task. To help guide the reader, we outline below the general strategy we will employ. This applies to Theorem \ref{T:CShifts}, and in large part to Theorems \ref{T:NCS-1}, \ref{T:NCS-2}, and \ref{T:NCS-3}:
\begin{enumerate}[1.]
\item We break up the remainder term into more convenient sums of operators of the type $\mathcal{O}(b, f)$, involving both $b \in bmo(\nu)$ and $f \in L^p(\mu)$. We want to show $\|\mathcal{O}(b, f) : L^p(\mu) \rightarrow L^p(\lambda)\| \lesssim \|b\|_{bmo(\nu)}$.
Using duality this amounts to showing that
$$ |\left\langle \mathcal{O}(b, f), g\right\rangle | \lesssim \|b\|_{bmo(\nu)} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')}.$$
\item Some of these operators $\mathcal{O}(b, f)$ involve full Haar coefficients $\widehat{b}(Q_1\times Q_2)$ of $b$, while others involve a Haar coefficient in one variable and averaging in the other variable, such as $\left\langle b, h_{Q_1} \times \mathbbm{1}_{Q_2}/|Q_2| \right\rangle $.
Since, ultimately, we wish to use some type of $H^1-BMO$ duality, the goal will be to ``separate out'' $b$ from the inner product $\left\langle \mathcal{O}(b, f), g\right\rangle $.
If $\mathcal{O}(b, f)$ involves full Haar coefficients of $b$, we use duality with \textit{product BMO} and obtain
$$|\left\langle \mathcal{O}(b, f), g\right\rangle | \lesssim \|b\|_{BMO(\nu)} \|S_{\bm{\mathcal{D}}}\phi(f, g)\|_{L^1(\nu)},$$
where $\phi(f, g)$ is the operator we are left with after separating out $b$, and $S_{\bm{\mathcal{D}}}$ is the full biparameter dyadic square function.
If $\mathcal{O}(b,f)$ involves terms of the form $\left\langle b, h_{Q_1} \times \mathbbm{1}_{Q_2}/|Q_2| \right\rangle $, we use duality with \textit{little bmo}, and obtain something of the form
$$|\left\langle \mathcal{O}(b, f), g\right\rangle | \lesssim \|b\|_{bmo(\nu)} \|S_{\mathcal{D}_1}\phi(f, g)\|_{L^1(\nu)},$$
where $S_{\mathcal{D}_1}$ is the dyadic square function in the first variable. Obviously this is replaced with $S_{\mathcal{D}_2}$ if the Haar coefficient on $b$ is in the second variable.
\item Then the next goal is to show that
$$S_{\bm{\mathcal{D}}}\phi(f, g) \lesssim (\mathcal{O}_1 f) (\mathcal{O}_2 g),$$
where $\mathcal{O}_{1,2}$ will be operators satisfying a \textit{one-weight} bound of the type $L^p(w) \rightarrow L^p(w)$. These operators will usually be a combination of the biparameter square functions in Section \ref{S:BDSF}. Once we have this, we are done.
\end{enumerate}
In Theorem \ref{T:CShifts}, dealing with cancellative shifts, the crucial part is really step 1. At first glance, the remainder term
$\mathcal{R}_{\vec{i},\vec{j}}f$ seems intractable using this method, since it involves average terms $\left\langle b\right\rangle _{Q_1\times Q_2}$ instead of Haar coefficients of $b$. So the key here is to decompose these terms in some convenient form.
In Section \ref{Ss:NCShifts}, dealing with non-cancellative shifts, the proofs follow this strategy in spirit, but deviate as we advance through the more and more difficult operators.
The main issue here is that we are really dealing with terms of the form $|\left\langle \mathcal{O}(a, b, f), g \right\rangle |$, where now the operator $\mathcal{O}$ involves a function $b$ in the \textit{weighted little} $bmo(\nu)$, and a function $a$ in \textit{unweighted product} BMO. In the most difficult case of partial paraproducts, $a$ is even more complicated, because it is essentially a \textit{sequence} of \textit{one-parameter} unweighted BMO functions. In all these cases, the creature $\phi$ in the last step is really $\phi(a, f, g)$. While in the previous case involving $\phi(f,g)$ it was straightforward to see the correct operators $\mathcal{O}_{1,2}$ to achieve step 3, in this case nothing straightforward seems to work.
There are two key new ideas in these cases: one is to combine the cumbersome remainder term with a cleverly chosen third term, which will make the decompositions easier to handle. The other is to temporarily employ martingale transforms -- which works for us because this does not increase the BMO norms. We briefly describe the three situations below. As above, we will be rather non-rigorous about the notations in this expository section. There is plenty of notation later, and the purpose here is just to explain the main ideas and guide the reader through the technical proofs in Section \ref{Ss:NCShifts}.
\begin{enumerate}[1.]
\item \textit{The full standard paraproduct} -- Theorem \ref{T:NCS-1}. This case only requires simple martingale transforms ($a_{\tau}$ and $g_{\tau}$, which have all non-negative Haar coefficients), and otherwise follows the strategy outlined above. However, we already start to see the operators $\mathcal{O}_{1,2}$ becoming strange compositions of ``standard'' operators and unweighted paraproducts, such as
$$ S_{\bm{\mathcal{D}}}\phi \leq (M_S \Pi^*_{a_{\tau}} g_{\tau}) (S_{\bm{\mathcal{D}}}f).$$
\item \textit{The full mixed paraproduct} -- Theorem \ref{T:NCS-2}. Here we introduce the idea of combining the remainder term
$\Pi_{\mathbbm{S} f}b - \mathbbm{S} \Pi_f b$ with a third term $T$, and we analyze $(\Pi_{\mathbbm{S} f}b -T)$ and $(T - \mathbbm{S} \Pi_f b)$ separately. This allows us to express the remainder as
$$\sum [\mathsf{P}_{\mathsf{a}}, \mathsf{p}_{\mathsf{b}}]f + T_{a,b}^{(1,0)}f - T_{a,b}^{(0,1)}f,$$
a sum of \textit{commutators of paraproduct operators}, and a new remainder term.
The new remainder has no cancellation properties, so we prove separately that the $T_{a,b}$ operators satisfy
$$|\left\langle T_{a,b}f, g\right\rangle | \lesssim \|b\|_{bmo(\nu)}\|f\|_{L^p(\mu)}\|g\|_{L^{p'}(\lambda')}.$$
Here is where we employ the strategy outlined earlier, combined with a martingale transform $a_{\tau}$ applied to $a$. Interestingly, this transform depends on the particular argument $f$ of $[b, \mathbbm{S}_{\bm{\mathcal{D}}}]f$. This will be absorbed in the end by the $BMO$ norm of the symbol for $\mathbbm{S}_{\bm{\mathcal{D}}}$, so ultimately the choice of $f$ will not matter.
\item \textit{The partial paraproducts} -- Theorem \ref{T:NCS-3}. Here we again combine the remainder terms with a third term $T$, and this time end up with terms of the form $\mathsf{p}_{\mathsf{b}} F$, where $F$ is a term depending on $a$ and $f$. So we are done if we can show that $\|F\|_{L^p(\mu)} \lesssim \|f\|_{L^p(\mu)}$. Without getting too technical about the notations, we reiterate that here $a$ is not \textit{one function} but rather a \textit{sequence} $a_{PQR}$ of one-parameter unweighted BMO functions. So the difficulty here is that the inner products look something like
$$\left\langle F, g\right\rangle = \sum \left\langle \Pi^*_{a_{PQR}}\widetilde{f}, \widetilde{g}\right\rangle ,$$
where each summand has its own BMO function! The trick is then to write this as $\sum \left\langle a_{PQR}, \phi_{PQR}(f, g)\right\rangle $. The happy ending is that these functions $a_{PQR}$ have uniformly bounded BMO norms, so at this point we apply unweighted one-parameter $H^1-BMO$ duality and we are left to work with $\|S_{\mathcal{D}}\phi(f, g)\|_{L^1(\mathbb{R}^n)}$; this is manageable. In one case, we do have to work with $F_{\tau}$ instead, which is again obtained by applying martingale transforms chosen in terms of $f$ -- only this time to each function $a_{PQR}$.
\end{enumerate}
Finally, we see no reason why this result cannot be generalized to $k$-parameter Journ\'{e} operators. The main trouble in such a generalization should be strictly computational, as the number of paraproducts will blow up.
In Section \ref{S:mix} we recall the definition of the mixed ${\text{BMO}}_{\mathcal{I}}$ classes between Chang-Fefferman's product BMO and Cotlar-Sadosky's little BMO. In the same way as in \cite{OPS} we deduce a corollary from Theorem \ref{T:UB}:
\begin{thm}[Upper bound, iterated, unweighted case]\label{upperbd_all_journe}
Let us consider $\mathbb{R}^{\vec{d}}$, $\vec{d}=(d_1,\ldots ,d_t)$ with a partition $\mathcal{I}=(I_s)_{1\le s \le l}$ of $\{1,\ldots ,t\}$. Let $b\in {\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})$ and let $T_s$ denote a multi-parameter Journ\'e operator acting on functions defined on $\bigotimes_{k\in I_s}\mathbb{R}^{d_k}$. Then we have the estimate
\[
\|[T_1,\ldots[T_l,b]\ldots]\|_{L^p(\mathbb{R}^{\vec{d}})\to L^p(\mathbb{R}^{\vec{d}})}\lesssim \|b\|_{{\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})}.
\]
\end{thm}
Coming back to the Bloom setting, we prove the lower estimate below, via a modification of the unweighted one-parameter argument of Coifman-Rochberg-Weiss.
\begin{thm}[Lower Bound] \label{T:LB}
Let $\mu, \lambda$ be $A_p(\mathbb{R}^n\times\mathbb{R}^n)$ weights, and set $\nu = \mu^{1/p}\lambda^{-1/p}$. Then
\begin{equation}
\|b\|_{bmo(\nu)}
\lesssim \sup_{1 \leqslant k, l \leqslant n} \| [b, R^1_k R^2_l] \|_{L^p(\mu) \rightarrow L^p(\lambda)},
\end{equation}
where $R_{k}^1$ and $R_l^2$ are the Riesz transforms acting in
the first and second variable, respectively.
\end{thm}
This lower estimate allows us to see the tensor products of Riesz transforms as a representative testing class for all Journ\'e operators.
We point out that in our quest to prove Theorem \ref{T:UB}, we also obtain a much simplified proof of the following one-weight result for Journ\'{e} operators, originally due to R. Fefferman:
\begin{thm}[Weighted Inequality for Journ\'{e} Operators] \label{T:Journe}
Let $T$ be a biparameter Journ\'{e} operator on $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1}\otimes\mathbb{R}^{n_2}$. Then $T$ is bounded
$L^p(w) \rightarrow L^p(w)$ for all $w\in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$.
\end{thm}
A version of Theorem \ref{T:Journe} first appeared in R. Fefferman and E. M. Stein \cite{RStein}, with restrictive assumptions on the kernel.
Subsequently the kernel assumptions were weakened significantly by R. Fefferman in \cite{RF}, at the cost of assuming
the weight belongs to the more restrictive class $A_{p/2}$. This was due to the use of his sharp function
$T^{\#}f = M_S (f^2)^{1/2}$, where $M_S$ is the strong maximal function. Finally, R. Fefferman improved his own result in \cite{RF2}, where he showed that the $A_p$ class sufficed and obtained the full statement of Theorem \ref{T:Journe}. This was achieved by an involved bootstrapping argument based on his previous result \cite{RF}.
Our proof in Section \ref{Ss:FeffProof} of Theorem \ref{T:Journe} is significantly simpler. This may seem like a ``tough sell'' in light of the many pages of highly technical calculations that precede it. However, our proof of Theorem \ref{T:Journe} is based only on one-weight bounds for the biparameter dyadic shifts, of the form
\begin{equation}\label{E:1wtDS}
\| \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} : L^p(w) \rightarrow L^p(w) \| \lesssim 1.
\end{equation}
These had to be proved along the way, as part of our proof of the two-weight upper bound for commutators, Theorem \ref{T:UB}.
These one-weight bounds are useful in themselves, and their proofs are not that long:
the proof for cancellative shifts, given in \eqref{E:2pDShift1wt}, is easy, and the proof for the non-cancellative shifts of partial paraproduct type is given in Proposition \ref{P:NCS-3-1wt}. Once we have \eqref{E:1wtDS}, the proof of Theorem \ref{T:Journe} follows immediately from Martikainen's representation theorem -- just as in the one-parameter case, a weighted bound for Calder\'{o}n-Zygmund operators follows trivially from Hyt\"{o}nen's representation theorem, once one has the one-weight bounds for the one-parameter dyadic shifts.
The paper is organized as follows. In Section \ref{S:BN} we review the necessary background, both one- and bi-parameter, and set up the notation.
In Section \ref{S:BDSF} we set up the types of dyadic square functions we will need throughout the rest of the paper. In Section \ref{S:BWBMO}, we
discuss the weighted and Bloom BMO spaces in the biparameter setting, and use some of these results in Section \ref{S:LB} to prove the lower bound result.
Section \ref{S:Para} is dedicated to biparameter paraproducts, which will be crucial in the final Section \ref{S:UB}, which proves the upper bound by
an appeal to Martikainen's \cite{MRep} representation theorem. Finally, we prove Theorem \ref{T:Journe}.
\section{Background and Notation}
\label{S:BN}
In this section we review some of the basic building blocks of one-parameter dyadic harmonic analysis on $\mathbb{R}^n$,
followed by their biparameter versions for $\mathbb{R}^{\vec{n}} := \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$.
\subsection{Dyadic Grids on $\mathbb{R}^n$}
Let $\mathcal{D}_0 := \{ 2^{-k}([0,1)^n + m) : k \in \mathbb{Z}, m \in \mathbb{Z}^n\}$ denote the standard dyadic grid on $\mathbb{R}^n$.
For every $\omega = (\omega_j)_{j \in \mathbb{Z}} \in (\{0, 1\}^n)^{\mathbb{Z}}$ define the shifted dyadic grid $\mathcal{D}_{\omega}$:
$$ \mathcal{D}_{\omega} := \{ Q \stackrel{\cdot}{+} \omega : Q \in \mathcal{D}_0\} \text{, where }
Q \stackrel{\cdot}{+} \omega := Q + \sum_{j: 2^{-j}<l(Q)} 2^{-j}\omega_j, $$
and $l(Q)$ denotes the side length of a cube $Q$.
The indexing parameter $\omega$ is rarely relevant in what follows: it only appears when we are dealing with
$\mathbb{E}_{\omega}$ -- expectation with respect to the standard probability measure on the space
of parameters $\omega$.
In fact, an important feature of the (by now standard) methods we employ in this paper is obtaining upper bounds
for dyadic operators that are independent of the choice of dyadic grid. The focus therefore is on the geometrical
properties shared by any dyadic grid $\mathcal{D}$ on $\mathbb{R}^n$:
\begin{itemize}
\item $P \cap Q \in \{ P, Q, \emptyset \}$ for every $P, Q \in \mathcal{D}$.
\item The cubes $Q \in \mathcal{D}$ with $l(Q) = 2^{-k}$, for some fixed integer $k$, partition $\mathbb{R}^n$.
\end{itemize}
For every $Q \in \mathcal{D}$ and every non-negative integer $k$ we define:
\begin{itemize}
\item $Q^{(k)}$ -- the $k^{th}$ generation ancestor of $Q$ in $\mathcal{D}$, i.e.
the unique element of $\mathcal{D}$ which contains $Q$ and has side length $ 2^k l(Q)$.
\item $(Q)_k$ -- the collection of $k^{th}$ generation descendants of $Q$ in $\mathcal{D}$, i.e.
the $2^{kn}$ disjoint subcubes of $Q$ with side length $2^{-k} l(Q)$.
\end{itemize}
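For a concrete illustration (in dimension $n = 1$, with the standard grid $\mathcal{D}_0$): if $Q = [0, \frac{1}{2})$, then $Q^{(1)} = [0,1)$ and $Q^{(2)} = [0,2)$, while $(Q)_1 = \{[0,\frac{1}{4}), [\frac{1}{4},\frac{1}{2})\}$; note that $(Q^{(1)})_1 = \{[0,\frac{1}{2}), [\frac{1}{2}, 1)\}$ recovers $Q$ together with its dyadic sibling.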
\subsection{The Haar system on $\mathbb{R}^n$}
Recall that every dyadic interval $I$ in $\mathbb{R}$ is associated with two Haar functions:
$$ h_I^{0} := \frac{1}{\sqrt{|I|}} (\mathbbm{1}_{I_-} - \mathbbm{1}_{I_+}) \text{ and } h_I^1 := \frac{1}{\sqrt{|I|}}\mathbbm{1}_I, $$
the first one being cancellative (it has mean $0$).
Given a dyadic grid $\mathcal{D}$ on $\mathbb{R}^{n}$, every dyadic cube $Q = I_1 \times\ldots\times I_n$, where all $I_i$ are dyadic intervals in $\mathbb{R}$ with common length $l(Q)$,
is associated with $2^n-1$ cancellative Haar functions:
$$ h^\epsilon_{Q}(x) := h_{I_1\times\ldots\times I_n}^{(\epsilon_1, \ldots, \epsilon_n)}(x_1, \ldots, x_n) := \prod_{i=1}^n h_{I_i}^{\epsilon_i}(x_i),$$
where $\epsilon \in \{0,1\}^n\setminus \{(1, \ldots, 1)\}$ is the signature of $h_Q^{\epsilon}$. To simplify notation, we assume that signatures are never the identically $1$ signature, in which case
the corresponding Haar function would be non-cancellative.
The cancellative Haar functions form an orthonormal basis for $L^2(\mathbb{R}^n)$. We write
$$ f = \sum_{Q\in\mathcal{D}} \widehat{f}(Q^\epsilon) h_Q^\epsilon, $$
where $\widehat{f}(Q^\epsilon) := \left\langle f, h_Q^\epsilon\right\rangle $, $\left\langle f, g\right\rangle := \int_{\mathbb{R}^n} fg\,dx$, and summation over $\epsilon$ is assumed.
We list here some other useful facts which will come in handy later:
\begin{itemize}
\item $h_P^{\epsilon}(x)$ is constant on any subcube $Q \in \mathcal{D}$, $Q \subsetneq P$. We denote this value by $h_P^{\epsilon}(Q)$.
\item The average of $f$ over a cube $Q \in \mathcal{D}$ may be expressed as:
\begin{equation} \label{E:1pavg}
\left\langle f\right\rangle _Q = \sum_{P \in \mathcal{D}, P \supsetneq Q} \widehat{f}(P^{\epsilon}) h_P^{\epsilon}(Q).
\end{equation}
\item Then, if $Q \subsetneq R \in \mathcal{D}$:
\begin{equation} \label{E:1pavgdiff}
\left\langle f\right\rangle _Q - \left\langle f\right\rangle _R = \sum_{P \in \mathcal{D}: Q\subsetneq P \subset R} \widehat{f}(P^{\epsilon}) h_P^{\epsilon}(Q).
\end{equation}
\item For $Q\in\mathcal{D}$:
\begin{equation} \label{E:1punitmo}
\mathbbm{1}_Q (f - \left\langle f\right\rangle _Q) = \sum_{P \in\mathcal{D}: P \subset Q} \widehat{f}(P^{\epsilon})h_P^{\epsilon}.
\end{equation}
\item For two \textit{distinct} signatures $\epsilon\neq\delta$, define the signature $\epsilon+\delta$ by letting $(\epsilon+\delta)_i$ be $1$ if $\epsilon_i = \delta_i$ and $0$ otherwise.
Note that $\epsilon+\delta$ is distinct from both $\epsilon$ and $\delta$, and is not the identically $\vec{1}$ signature. Then
$$ h_Q^\epsilon h_Q^\delta = \frac{1}{\sqrt{|Q|}}h_Q^{\epsilon+\delta} \text{, if } \epsilon\neq\delta \text{, and } h_Q^\epsilon h_Q^\epsilon = \frac{\mathbbm{1}_Q}{|Q|}. $$
Again to simplify notation, we assume throughout this paper that we only write $h_Q^{\epsilon+\delta}$ for \textit{distinct} signatures $\epsilon$ and $\delta$.
\end{itemize}
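As a concrete instance of the last product rule (an elementary verification in dimension $n = 2$): take $Q = I\times I$, $\epsilon = (0,1)$ and $\delta = (1,0)$, so $\epsilon + \delta = (0,0)$. Since $h_I^1 = \mathbbm{1}_I/\sqrt{|I|}$ and $|Q| = |I|^2$,
$$ h_Q^{(0,1)} h_Q^{(1,0)} = (h_I^0 h_I^1) \otimes (h_I^1 h_I^0) = \frac{1}{|I|}\, h_I^0 \otimes h_I^0 = \frac{1}{\sqrt{|Q|}}\, h_Q^{(0,0)}, $$
as claimed.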
Given a dyadic grid $\mathcal{D}$, we define the dyadic square function on $\mathbb{R}^n$ by:
$$ S_{\mathcal{D}}f(x) := \bigg( \sum_{Q \in \mathcal{D}} |\widehat{f}(Q^{\epsilon})|^2 \frac{\mathbbm{1}_Q(x)}{|Q|} \bigg)^{1/2}. $$
Then $\|f\|_p \simeq \|S_{\mathcal{D}}f\|_p$ for all $1 < p < \infty$. We also define the dyadic version of the maximal function:
$$ M_{\mathcal{D}}f(x) = \sup_{Q \in \mathcal{D}} \left\langle |f|\right\rangle _Q \mathbbm{1}_Q(x). $$
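For orientation, the case $p = 2$ of the equivalence $\|f\|_p \simeq \|S_{\mathcal{D}}f\|_p$ is an identity, immediate from orthonormality of the cancellative Haar system:
$$ \|S_{\mathcal{D}}f\|_2^2 = \sum_{Q\in\mathcal{D}} |\widehat{f}(Q^{\epsilon})|^2 \, \frac{1}{|Q|}\int_{\mathbb{R}^n} \mathbbm{1}_Q(x)\,dx = \sum_{Q\in\mathcal{D}} |\widehat{f}(Q^{\epsilon})|^2 = \|f\|_2^2. $$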
\subsection{$A_p(\mathbb{R}^n)$ Weights}
Let $w$ be a weight on $\mathbb{R}^n$, i.e. $w$ is an almost everywhere positive, locally integrable function.
For $1 < p < \infty$, let $L^p(w) \defeq L^p(\mathbb{R}^n; w(x)\,dx)$. For a cube $Q$ in $\mathbb{R}^n$, we let
$$w(Q) := \int_Q w(x)\,dx \text{ and } \left<w\right>_Q := \frac{w(Q)}{|Q|}.$$
We say that $w$ belongs to the Muckenhoupt $A_p(\mathbb{R}^n)$ class provided that:
$$[w]_{A_p} := \sup_{Q} \left<w\right>_Q \left<w^{1-p'}\right>_Q^{p-1} < \infty,$$
where $p'$ denotes the H\"older conjugate of $p$ and the supremum above is over all cubes $Q$ in $\mathbb{R}^n$ with sides parallel to the axes.
The weight $w' := w^{1-p'}$ is sometimes called the weight ``conjugate'' to $w$, because
$w \in A_p$ if and only if $w' \in A_{p'}$.
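A standard family of examples, recorded here only for illustration, is given by the power weights: $w(x) = |x|^{\alpha}$ belongs to $A_p(\mathbb{R}^n)$ precisely when $-n < \alpha < n(p-1)$. In this case $w' = |x|^{-\alpha/(p-1)}$, and
$$ -n < -\frac{\alpha}{p-1} < n(p'-1) = \frac{n}{p-1} \iff -n < \alpha < n(p-1), $$
so the equivalence $w \in A_p \Leftrightarrow w' \in A_{p'}$ can be read off directly in this family.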
We recall the classical inequalities for the maximal and square functions:
$$\|M f\|_{L^p(w)} \lesssim \|f\|_{L^p(w)} \text{ and } \|f\|_{L^p(w)} \simeq \|S_{\mathcal{D}}f\|_{L^p(w)},$$
for all $w \in A_p(\mathbb{R}^n)$, $1 < p < \infty$, where throughout this paper ``$A \lesssim B$'' denotes $A \leq cB$ for some
constant $c$ which may depend on the dimensions and the weight $w$.
In dealing with dyadic shifts, we will also need to consider the following shifted dyadic square function: given non-negative integers $i$ and $j$, define
$$ S_{\mathcal{D}}^{i, j}f(x) := \bigg[ \sum_{R\in\mathcal{D}} \Big( \sum_{P \in (R)_i} |\widehat{f}(P^\epsilon)| \Big)^2 \Big( \sum_{Q\in (R)_j} \frac{\mathbbm{1}_Q(x)}{|Q|}\Big) \bigg]^{1/2}. $$
It was shown in \cite{HLW2} that
\begin{equation} \label{E:ShiftedDSF1p}
\| S_{\mathcal{D}}^{i,j} : L^p(w) \rightarrow L^p(w) \| \lesssim 2^{\frac{n}{2}(i+j)},
\end{equation}
for all $w \in A_p(\mathbb{R}^n)$, $1 < p < \infty$.
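As a consistency check: taking $i = j = 0$ gives $(R)_0 = \{R\}$, so $S_{\mathcal{D}}^{0,0}f$ is pointwise comparable to $S_{\mathcal{D}}f$ (up to a dimensional constant accounting for the implicit summation over signatures), and \eqref{E:ShiftedDSF1p} reduces to the classical weighted bound for $S_{\mathcal{D}}$.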
A \textit{martingale transform} on $\mathbb{R}^n$ is an operator of the form
$$ f \mapsto f_{\tau} := \sum_{P\in\mathcal{D}} \tau_P^{\epsilon} \widehat{f}(P^{\epsilon}) h_P^{\epsilon}, $$
where each $\tau_P^{\epsilon}$ is either $+1$ or $-1$. Obviously $S_{\mathcal{D}}f = S_{\mathcal{D}}f_{\tau}$,
so one can work with $f_{\tau}$ instead when convenient, without increasing the $L^p(w)$-norm of $f$.
\subsection{The Haar system on $\mathbb{R}^{\vec{n}}$}
In $\mathbb{R}^{\vec{n}} := \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$, we work with dyadic rectangles
$$\bm{\mathcal{D}} := \mathcal{D}_1 \times \mathcal{D}_2 = \{ R = Q_1 \times Q_2 : Q_i \in \mathcal{D}_i \},$$
where each $\mathcal{D}_i$ is a dyadic grid on $\mathbb{R}^{n_i}$.
While we unfortunately lose the nice nestedness and partitioning properties of
one-parameter dyadic grids, we do have the tensor product Haar wavelet orthonormal basis
for $L^2(\mathbb{R}^{\vec{n}})$, defined by
$$ h_R^{\vec{\epsilon}} (x_1, x_2) := h_{Q_1}^{\epsilon_1}(x_1) \otimes h_{Q_2}^{\epsilon_2}(x_2), $$
for all $R = Q_1 \times Q_2 \in \bm{\mathcal{D}}$ and $\vec{\epsilon} = (\epsilon_1, \epsilon_2)$.
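To see concretely how nestedness fails (a simple illustrative example): the dyadic rectangles $R = [0,1)\times[0,\frac{1}{2})$ and $R' = [0,\frac{1}{2})\times[0,1)$ intersect in the square $[0,\frac{1}{2})^2$, yet neither contains the other, so the dichotomy $P\cap Q \in \{P, Q, \emptyset\}$ enjoyed by dyadic cubes has no analogue for $\bm{\mathcal{D}}$.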
We often write
$$ f = \sum_{Q_1\times Q_2} \widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) h_{Q_1}^{\epsilon_1} \otimes h_{Q_2}^{\epsilon_2},$$
short for summing over $Q_1 \in \mathcal{D}_1$ and $Q_2 \in \mathcal{D}_2$, and of course over all signatures, where
$$\widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) := \left< f, h_{Q_1}^{\epsilon_1} \otimes h_{Q_2}^{\epsilon_2} \right> =
\int_{\mathbb{R}^{\vec{n}}} f(x_1, x_2) h_{Q_1}^{\epsilon_1}(x_1) h_{Q_2}^{\epsilon_2}(x_2)\,dx_1\,dx_2 .$$
While the averaging formula \eqref{E:1pavg} has a straightforward biparameter analogue:
\begin{equation} \label{E:2pavg}
\left\langle f\right\rangle _{Q_1\times Q_2} = \sum_{P_1 \supsetneq Q_1; \: P_2 \supsetneq Q_2}
\widehat{f} (P_1^{\epsilon_1}\times P_2^{\epsilon_2}) h_{P_1}^{\epsilon_1}(Q_1) h_{P_2}^{\epsilon_2}(Q_2),
\end{equation}
the expression in \eqref{E:1punitmo} takes a slightly messier form in two parameters: for any $R = Q_1 \times Q_2$
\begin{align}
\mathbbm{1}_{R} (f - \left\langle f\right\rangle _{R}) &=
\sum_{\substack{P_1\subset Q_1 \\ P_2 \subset Q_2}} \widehat{f} (P_1^{\epsilon_1}\times P_2^{\epsilon_2}) h_{P_1}^{\epsilon_1} \otimes h_{P_2}^{\epsilon_2} & \nonumber \\
& + \sum_{P_2\subset Q_2} \left\langle f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{P_2}^{\epsilon_2} \right\rangle \mathbbm{1}_{Q_1} \otimes h_{P_2}^{\epsilon_2}
+ \sum_{P_1\subset Q_1} \left\langle f, h_{P_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle h_{P_1}^{\epsilon_1}\otimes \mathbbm{1}_{Q_2} & \label{E:2punitmo} \\
&= \sum_{\substack{P_1\subset Q_1 \\ P_2 \subset Q_2}} \widehat{f} (P_1^{\epsilon_1}\times P_2^{\epsilon_2}) h_{P_1}^{\epsilon_1} \otimes h_{P_2}^{\epsilon_2}
+ \mathbbm{1}_R [m_{Q_1}f(x_2) - \left\langle f\right\rangle _R] + \mathbbm{1}_R [m_{Q_2}f(x_1) - \left\langle f\right\rangle _R], &\nonumber
\end{align}
where for any cubes $Q_i \in \mathcal{D}_i$:
\begin{equation} \label{E:mQnotation}
m_{Q_1}f(x_2) := \frac{1}{|Q_1|}\int_{Q_1} f(x_1, x_2)\,dx_1 \text{, and }
m_{Q_2}f(x_1) := \frac{1}{|Q_2|} \int_{Q_2} f(x_1, x_2)\,dx_2.
\end{equation}
As we shall see later, this particular expression will be quite relevant for biparameter BMO spaces.
\subsection{$A_p(\mathbb{R}^{\vec{n}})$ Weights}
A weight $w(x_1, x_2)$ on $\mathbb{R}^{\vec{n}}$ belongs to the class $A_p(\mathbb{R}^{\vec{n}})$, for some $1 < p < \infty$, provided that
$$ [w]_{A_p} := \sup_R \left\langle w\right\rangle _R \left\langle w^{1-p'}\right\rangle _R^{p-1} < \infty, $$
where the supremum is over all \textit{rectangles} $R$.
These are the weights which characterize $L^p(w)$ boundedness of the \textit{strong} maximal function:
$$ M_Sf(x_1, x_2) := \sup_R \left\langle |f|\right\rangle _R \mathbbm{1}_R(x_1, x_2), $$
where the supremum is again over all rectangles. As is well-known, the usual weak $(1, 1)$ inequality fails for the
strong maximal function, where it is replaced by an Orlicz norm expression. In the weighted case, we have \cite{BagbyKurtz}
for all $w \in A_p(\mathbb{R}^{\vec{n}})$:
\begin{equation} \label{E:LlogLMS}
w\{ x \in \mathbb{R}^{\vec{n}}: M_Sf(x) > \lambda \} \lesssim \int_{\mathbb{R}^{\vec{n}}} \left( \frac{|f(x)|}{\lambda} \right)^p \left( 1 + \log^{+} \frac{|f(x)|}{\lambda} \right)^{k-1}\,dw(x).
\end{equation}
Moreover, $w$ belongs to $A_p(\mathbb{R}^{\vec{n}})$ if and only if $w$ belongs to the \textit{one-parameter}
classes $A_p(\mathbb{R}^{n_i})$ in each variable separately and uniformly:
$$ [w]_{A_p(\mathbb{R}^{\vec{n}})} \simeq \max \bigg\{
\esssup_{x_1 \in \mathbb{R}^{n_1}} [w(x_1, \cdot)]_{A_p(\mathbb{R}^{n_2})}, \:\:\:
\esssup_{x_2 \in \mathbb{R}^{n_2}} [w(\cdot, x_2)]_{A_p(\mathbb{R}^{n_1})} \bigg\}.$$
It also follows as in the one-parameter case that $w \in A_p(\mathbb{R}^{\vec{n}})$
if and only if $w' := w^{1-p'} \in A_{p'}(\mathbb{R}^{\vec{n}})$, and
$L^p(w)^* \simeq L^{p'}(w')$, in the sense that:
\begin{equation} \label{E:ApDuality}
\|f\|_{L^p(w)} = \sup\{ |\left\langle f, g\right\rangle |: g \in L^{p'}(w'), \|g\|_{L^{p'}(w')} \leq 1 \}.
\end{equation}
We may also define weights $m_{Q_1}w$ and $m_{Q_2}w$ on $\mathbb{R}^{n_2}$ and $\mathbb{R}^{n_1}$, respectively, as in \eqref{E:mQnotation}. As shown below, these are then also uniformly in their respective one-parameter $A_p$ classes:
\begin{prop}\label{P:avg2ParAp}
If $w \in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$, then $m_{Q_1}w \in A_p(\mathbb{R}^{n_2})$ and $m_{Q_2}w \in A_p(\mathbb{R}^{n_1})$ for any cubes $Q_i \subset \mathbb{R}^{n_i}$, with uniformly bounded $A_p$ constants:
$$ [m_{Q_i}w]_{A_p(\mathbb{R}^{n_j})} \leq [w]_{A_p(\mathbb{R}^{\vec{n}})}, $$
for all $Q_i \subset \mathbb{R}^{n_i}$, $i \in \{1,2\}$, $i \neq j$.
\end{prop}
\begin{proof}
Fix a cube $Q_1 \subset \mathbb{R}^{n_1}$. Then for every $x_2 \in \mathbb{R}^{n_2}$,
$$ |Q_1| = \int_{Q_1} 1\,dx_1 \leq \bigg( \int_{Q_1} w(x_1, x_2)\,dx_1\bigg)^{1/p}
\bigg( \int_{Q_1} w'(x_1, x_2)\,dx_1\bigg)^{1/p'}, $$
and so
$$ ( m_{Q_1}w)'(x_2) := ( m_{Q_1}w)^{1-p'}(x_2) \leq m_{Q_1}w'(x_2).$$
Then for all cubes $Q_2 \subset \mathbb{R}^{n_2}$,
$$ \left\langle m_{Q_1}w\right\rangle _{Q_2} \left\langle (m_{Q_1}w)'\right\rangle _{Q_2}^{p-1} \leq \left\langle w\right\rangle _{Q_1\times Q_2} \left\langle w'\right\rangle _{Q_1\times Q_2}^{p-1}
\leq [w]_{A_p(\mathbb{R}^{\vec{n}})}, $$
proving the result for $m_{Q_1}w$. The other case follows symmetrically.
\end{proof}
Finally, we will later use a reverse H\"{o}lder property of biparameter $A_p$ weights. This is well-known to experts, but we include a proof here for completeness.
\begin{prop}\label{P:2ParRH}
If $w \in A_p(\mathbb{R}^{\vec{n}})$, then there exist positive constants $C, \epsilon, \delta > 0$ (depending only on $\vec{n}$, $p$,
and $[w]_{A_p(\mathbb{R}^{\vec{n}})}$), such that
\begin{enumerate}[i).]
\item For all rectangles $R \subset \mathbb{R}^{\vec{n}}$,
$$ \bigg( \frac{1}{|R|} \int_R w(x)^{1+\epsilon}\,dx\bigg)^{\frac{1}{1+\epsilon}} \leq
\frac{C}{|R|}\int_R w(x)\,dx. $$
\item For all rectangles $R \subset \mathbb{R}^{\vec{n}}$ and all measurable subsets $E\subset R$,
$$ \frac{w(E)}{w(R)} \leq C \bigg( \frac{|E|}{|R|}\bigg)^{\delta}. $$
\end{enumerate}
\end{prop}
\begin{proof}
Note first that \textit{ii).} follows easily from \textit{i).} by applying the H\"{o}lder inequality with exponents $1+\epsilon$
and $\frac{1+\epsilon}{\epsilon}$ in $w(E) = \int_E w(x)\,dx$. This gives \textit{ii).} with $\delta = \frac{\epsilon}{1+\epsilon}$.
In order to prove \textit{i).} we first recall a more general statement of the one-parameter reverse H\"{o}lder property of $A_p$ weights (see Remark 9.2.3 in \cite{Grafakos}):
\setlength{\leftskip}{1cm}
\noindent\textit{For any $1<p<\infty$ and $B > 1$, there exist positive constants}
\begin{equation}\label{E:tempStar}
D = D(n, p, B) \text{ and } \beta = \beta(n, p, B)
\end{equation}
\textit{such that for all $v \in A_p(\mathbb{R}^n)$ with $[v]_{A_p(\mathbb{R}^{n})} \leq B$, the reverse H\"{o}lder condition}
\begin{equation}\label{E:tempStarStar}
\bigg( \frac{1}{|Q|} \int_Q v(t)^{1+\beta}\,dt\bigg)^{\frac{1}{1+\beta}} \leq
\frac{D}{|Q|}\int_Q v(t)\,dt
\end{equation}
\textit{holds for all cubes $Q\subset \mathbb{R}^n$.}
\setlength{\leftskip}{0pt}
\noindent It is easy to see that if a weight $v$ satisfies the reverse H\"{o}lder condition \eqref{E:tempStarStar} with constants
$D, \beta$, then it also satisfies it with any constants $C, \epsilon$ with $C \geq D$ and $\epsilon \leq \beta$.
Now let $w\in A_p(\mathbb{R}^{\vec{n}})$, set $B := [w]_{A_p(\mathbb{R}^{\vec{n}})}$, and for $i \in \{1, 2\}$ let
$ D_i := D(n_i, p, B)$ and $\beta_i := \beta(n_i, p, B) $
be as in \eqref{E:tempStar}. Fix a rectangle $R = Q_1\times Q_2$ and set
$$ C := \max(D_1, D_2)^2 \text{ and } \epsilon:= \min(\beta_1, \beta_2). $$
For almost all $x_1 \in \mathbb{R}^{n_1}$, the weight $w(x_1, \cdot) \in A_p(\mathbb{R}^{n_2})$ with $[w(x_1, \cdot)]_{A_p(\mathbb{R}^{n_2})} \leq B$,
so $w(x_1, \cdot)$ satisfies reverse H\"{o}lder with constants $D_2, \beta_2$ -- and therefore also with constants $\sqrt{C}, \epsilon$. So
\begin{align}
\frac{1}{|R|} \int_R w(x)^{1+\epsilon}\,dx &= \frac{1}{|Q_1|} \int_{Q_1}
\bigg(\frac{1}{|Q_2|} \int_{Q_2} w(x_1, x_2)^{1+\epsilon}\,dx_2\bigg)\,dx_1\\
& \leq \frac{1}{|Q_1|} \int_{Q_1} \bigg( \frac{\sqrt{C}}{|Q_2|} \int_{Q_2} w(x_1, x_2)\,dx_2\bigg)^{1+\epsilon} \,dx_1\\
& = \frac{C^{(1+\epsilon)/2}}{|Q_1|} \int_{Q_1} (m_{Q_2}w(x_1))^{1+\epsilon}\,dx_1.
\end{align}
By Proposition \ref{P:avg2ParAp}, the weight $m_{Q_2}w\in A_p(\mathbb{R}^{n_1})$ with $[m_{Q_2}w]_{A_p(\mathbb{R}^{n_1})} \leq B$, so this weight satisfies reverse H\"{o}lder with constants $D_1, \beta_1$ -- and therefore also with constants $\sqrt{C}, \epsilon$. Then the last inequality above gives that
$$\bigg( \frac{1}{|R|} \int_R w(x)^{1+\epsilon}\,dx\bigg)^{\frac{1}{1+\epsilon}} \leq \frac{C}{|Q_1|}\int_{Q_1}
m_{Q_2}w(x_1)\,dx_1 = \frac{C}{|R|}\int_R w(x)\,dx.$$
\end{proof}
\section{Biparameter Dyadic Square Functions}
\label{S:BDSF}
Throughout this section, fix dyadic rectangles $\bm{\mathcal{D}} := \mathcal{D}_1 \times \mathcal{D}_2$ on $\mathbb{R}^{\vec{n}}$.
The dyadic square function associated with $\bm{\mathcal{D}}$ is then defined in the obvious way:
$$ S_{\bm{\mathcal{D}}}f(x_1, x_2) := \bigg( \sum_{R\in\bm{\mathcal{D}}} |\widehat{f}(R^{\vec{\epsilon}})|^2 \frac{\mathbbm{1}_R(x_1, x_2)}{|R|} \bigg)^{1/2}. $$
We also want to look at the dyadic square functions \textit{in each variable}, namely
$$ S_{\mathcal{D}_1}f(x_1, x_2) := \bigg( \sum_{Q_1\in\mathcal{D}_1} |H_{Q_1}^{\epsilon_1}f(x_2)|^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \bigg)^{1/2}
; \:\:\:
S_{\mathcal{D}_2}f(x_1, x_2) := \bigg( \sum_{Q_2\in\mathcal{D}_2} |H_{Q_2}^{\epsilon_2}f(x_1)|^2 \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \bigg)^{1/2},$$
where for every $Q_i \in \mathcal{D}_i$ and signatures $\epsilon_i$, we denote
$$ H_{Q_1}^{\epsilon_1} f(x_2) := \int_{\mathbb{R}^{n_1}} f(x_1, x_2) h_{Q_1}^{\epsilon_1}(x_1)\,dx_1 ; \:\:\:
H_{Q_2}^{\epsilon_2} f(x_1) := \int_{\mathbb{R}^{n_2}} f(x_1, x_2) h_{Q_2}^{\epsilon_2}(x_2)\,dx_2. $$
Then for any $w \in A_p(\mathbb{R}^{\vec{n}})$:
$$ \|f\|_{L^p(w)} \simeq \|S_{\bm{\mathcal{D}}}f\|_{L^p(w)} \simeq \|S_{\mathcal{D}_1}f\|_{L^p(w)} \simeq \|S_{\mathcal{D}_2}f\|_{L^p(w)}. $$
More generally, define the shifted biparameter square function, for pairs $\vec{i} = (i_1, i_2)$ and $\vec{j} = (j_1, j_2)$ of non-negative integers, by:
\begin{equation}\label{E:ShiftedDSF2pDef}
S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}f := \bigg[ \sum_{\substack{R_1\in\mathcal{D}_1 \\ R_2\in\mathcal{D}_2}}
\bigg( \sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in (R_2)_{i_2}}} |\widehat{f}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})| \bigg)^2
\bigg( \sum_{\substack{Q_1\in (R_1)_{j_1} \\ Q_2\in (R_2)_{j_2}}} \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\bigg) \bigg]^{1/2}.
\end{equation}
We claim that:
\begin{equation} \label{E:ShiftedDSF2p}
\| S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} : L^p(w) \rightarrow L^p(w) \| \lesssim 2^{\frac{n_1}{2}(i_1+j_1)} 2^{\frac{n_2}{2}(i_2+j_2)},
\end{equation}
for all $w \in A_p(\mathbb{R}^{\vec{n}})$, $1 < p < \infty$.
This follows by iteration of the one-parameter result in \eqref{E:ShiftedDSF1p}, through the following vector-valued version of the extrapolation theorem (see Corollary 9.5.7 in \cite{Grafakos}):
\begin{prop} \label{P:VvalExt}
Suppose that an operator $T$ satisfies $\|T: L^2(w) \rightarrow L^2(w)\| \leq AC_n[w]_{A_2}$ for all $w \in A_2(\mathbb{R}^n)$, for some constants $A$ and $C_n$, where the latter only depends on the dimension. Then:
$$ \left\| \bigg( \sum_{j} |T f_j |^2 \bigg)^{1/2} \right\|_{L^p(w)}
\leq A C_n'[w]_{A_p}^{\max(1, \frac{1}{p-1})} \left\| \bigg( \sum_j |f_j |^2 \bigg)^{1/2} \right\|_{L^p(w)},$$
for all $w\in A_p(\mathbb{R}^n)$, $1 < p < \infty$ and all sequences $\{f_j\}\subset L^p(w)$, where $C_n'$ is a dimensional constant.
\end{prop}
\begin{proof}[Proof of \eqref{E:ShiftedDSF2p}]
Note that $(S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f)^2 = \sum_{R_1\in\mathcal{D}_1} (S_{\mathcal{D}_2}^{i_2, j_2} F_{R_1})^2$, where
$$ F_{R_1}(x_1, x_2) := \sum_{P_2\in\mathcal{D}_2} \bigg( \sum_{P_1\in (R_1)_{i_1}} |\widehat{f}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})| \bigg)
\bigg( \sum_{Q_1\in (R_1)_{j_1}} \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \bigg)^{1/2} h_{P_2}^{\epsilon_2}(x_2).$$
Then
$$ \| S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f\|^p_{L^p(w)} = \int_{\mathbb{R}^{n_1}} \int_{\mathbb{R}^{n_2}}
\bigg( \sum_{R_1\in\mathcal{D}_1} (S_{\mathcal{D}_2}^{i_2, j_2} F_{R_1}(x_1, x_2))^2 \bigg)^{p/2} w(x_1, x_2)\,dx_2\,dx_1.$$
For almost all fixed $x_1\in\mathbb{R}^{n_1}$, $w(x_1, \cdot)$ is in $A_p(\mathbb{R}^{n_2})$ uniformly, so we may apply Proposition \ref{P:VvalExt} and \eqref{E:ShiftedDSF1p} to the inner integral and obtain:
$$ \| S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f\|^p_{L^p(w)} \lesssim 2^{\frac{pn_2}{2}(i_2+j_2)}
\int_{\mathbb{R}^{n_1}} \int_{\mathbb{R}^{n_2}} \bigg( \sum_{R_1\in\mathcal{D}_1} |F_{R_1}(x_1, x_2)|^2 \bigg)^{p/2} w(x_1, x_2) \,dx_2\,dx_1. $$
Now, we can express the integral above as
$$ \int_{\mathbb{R}^{n_2}} \int_{\mathbb{R}^{n_1}} \bigg( S_{\mathcal{D}_1}^{i_1, j_1}f_{\tau}(x_1, x_2) \bigg)^p w(x_1, x_2) \,dx_1\,dx_2
\lesssim 2^{\frac{pn_1}{2}(i_1+j_1)} \|f_{\tau}\|^p_{L^p(w)},$$
where $f_{\tau} := \sum_{P_1\times P_2 \in \bm{\mathcal{D}}} |\widehat{f}(P_1^{\epsilon_1} \times P_2^{\epsilon_2})| h_{P_1}^{\epsilon_1} \otimes h_{P_2}^{\epsilon_2}$
is just a biparameter martingale transform applied to $f$; since $S_{\bm{\mathcal{D}}}f_{\tau} = S_{\bm{\mathcal{D}}}f$ pointwise, passing to the square function gives $\|f_{\tau}\|_{L^p(w)}\simeq \|f\|_{L^p(w)}$.
\end{proof}
\subsection{Mixed Square and Maximal Functions} \label{Ss:MixedSquare}
We will later encounter mixed operators such as:
$$ [SM]f(x_1, x_2) := \left( \sum_{Q_1\in\mathcal{D}_1} \left( M_{\mathcal{D}_2}(H_{Q_1}^{\epsilon_1}f)(x_2) \right)^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \right)^{1/2}, $$
$$ [MS]f(x_1, x_2) := \left( \sum_{Q_2\in\mathcal{D}_2} \left( M_{\mathcal{D}_1}(H_{Q_2}^{\epsilon_2}f)(x_1) \right)^2 \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \right)^{1/2}. $$
Next we show that these operators are bounded $L^p(w) \rightarrow L^p(w)$ for all $w\in A_p(\mathbb{R}^{\vec{n}})$. The proof only relies on the fact that the one-parameter maximal function satisfies a weighted bound. So we state the result in a slightly more general form below, replacing $M_{\mathcal{D}_2}$ and $M_{\mathcal{D}_1}$ by any one-parameter operator that satisfies a weighted bound.
\begin{prop} \label{P:Mixed2pSF}
Let $T$ denote a (one-parameter) operator acting on functions on $\mathbb{R}^n$ that satisfies $\|T: L^2(v) \rightarrow L^2(v)\| \leq C$ for all $v \in A_2(\mathbb{R}^n)$.
Define the following operators on $\mathbb{R}^{\vec{n}}$:
$$ [ST]f(x_1, x_2) := \left( \sum_{Q_1\in\mathcal{D}_1} \left( T(H_{Q_1}^{\epsilon_1}f)(x_2) \right)^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \right)^{1/2}, $$
$$ [TS]f(x_1, x_2) := \left( \sum_{Q_2\in\mathcal{D}_2} \left( T(H_{Q_2}^{\epsilon_2}f)(x_1) \right)^2 \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \right)^{1/2}, $$
where $T$ acts on $\mathbb{R}^{n_2}$ in the first operator, and on $\mathbb{R}^{n_1}$ in the second.
Then $[ST]$ and $[TS]$ are bounded $L^p(w) \rightarrow L^p(w)$ for all $w\in A_p(\mathbb{R}^{\vec{n}})$.
\end{prop}
\begin{proof}
\begin{align*}
\| [ST]f \|^p_{L^p(w)} &= \int_{\mathbb{R}^{n_1}} \int_{\mathbb{R}^{n_2}}
\bigg( \sum_{Q_1\in\mathcal{D}_1} \bigg(T(H_{Q_1}^{\epsilon_1}f)(x_2) \frac{\mathbbm{1}_{Q_1}(x_1)}{\sqrt{|Q_1|}}\bigg)^2 \bigg)^{p/2} w(x_1, x_2)\,dx_2\,dx_1 \\
& \lesssim \int_{\mathbb{R}^{n_1}} \int_{\mathbb{R}^{n_2}}
\bigg( \sum_{Q_1\in\mathcal{D}_1} |H_{Q_1}^{\epsilon_1}f(x_2)|^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \bigg)^{p/2} w(x_1, x_2)\,dx_2\,dx_1\\
&= \|S_{\mathcal{D}_1}f\|^p_{L^p(w)} \lesssim \|f\|^p_{L^p(w)},
\end{align*}
where the first inequality follows as before from Proposition \ref{P:VvalExt}. The proof for $[TS]$ is symmetrical.
\end{proof}
More generally, define \textit{shifted} versions of these mixed operators:
$$ [ST]^{i_1, j_1} f(x_1, x_2) := \left( \sum_{R_1\in\mathcal{D}_1}
\bigg( \sum_{P_1\in (R_1)_{i_1}} T(H_{P_1}^{\epsilon_1}f)(x_2) \bigg)^2 \sum_{Q_1\in(R_1)_{j_1}} \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \right)^{1/2}, $$
$$ [TS]^{i_2, j_2} f(x_1, x_2) := \left( \sum_{R_2\in\mathcal{D}_2}
\bigg( \sum_{P_2\in (R_2)_{i_2}} T(H_{P_2}^{\epsilon_2}f)(x_1) \bigg)^2 \sum_{Q_2\in(R_2)_{j_2}} \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \right)^{1/2}. $$
Under the same assumptions on $T$, it is easy to see that
\begin{equation} \label{E:Mixed2pSFShift}
\| [ST]^{i_1, j_1} : L^p(w) \rightarrow L^p(w) \| \lesssim 2^{\frac{n_1}{2}(i_1+j_1)}
\text{ and }
\| [TS]^{i_2, j_2} : L^p(w) \rightarrow L^p(w) \| \lesssim 2^{\frac{n_2}{2}(i_2+j_2)},
\end{equation}
for all $w \in A_p(\mathbb{R}^{\vec{n}})$. Specifically,
\begin{align*}
\| [ST]^{i_1, j_1} f\|^p_{L^p(w)} &= \int |S_{\mathcal{D}_1}^{i_1, j_1}F(x_1, x_2)|^p \,dw
\text{, where } F(x_1, x_2) := \sum_{P_1\in\mathcal{D}_1} T(H_{P_1}^{\epsilon_1}f)(x_2) h_{P_1}^{\epsilon_1}(x_1),
\end{align*}
so $ \| [ST]^{i_1, j_1} f\|_{L^p(w)} \lesssim 2^{\frac{n_1}{2}(i_1+j_1)} \|F\|_{L^p(w)}$.
Now, $ \|F\|_{L^p(w)} \simeq \|S_{\mathcal{D}_1}F\|_{L^p(w)} = \| [ST] f\|_{L^p(w)} \lesssim \|f\|_{L^p(w)}$.
\section{Biparameter Weighted BMO Spaces}
\label{S:BWBMO}
Given a weight $w$ on $\mathbb{R}^n$, a locally integrable function $b$ is said to be in the weighted $BMO(w)$ space if
$$\|b\|_{BMO(w)} := \sup_Q \frac{1}{w(Q)}\int_Q |b(x) - \left\langle b\right\rangle _Q|\,dx < \infty,$$
where the supremum is over all cubes $Q$ in $\mathbb{R}^n$. If $w = 1$, we obtain the unweighted $BMO(\mathbb{R}^n)$ space.
The dyadic version $BMO_{\mathcal{D}}(w)$ is obtained by only taking supremum over $Q\in\mathcal{D}$ for some given dyadic grid $\mathcal{D}$
on $\mathbb{R}^n$. If the weight $w\in A_p(\mathbb{R}^n)$ for some $1<p<\infty$, Muckenhoupt and Wheeden show in \cite{MuckWheeden} that
\begin{equation}\label{E:MuckWheeden-1p}
\|b\|_{BMO(w)} \simeq \|b\|_{BMO(w';p')} := \sup_Q \bigg( \frac{1}{w(Q)} \int_Q |b - \left\langle b\right\rangle _Q|^{p'}\,dw' \bigg)^{1/p'},
\end{equation}
where $w' := w^{1-p'}$ is the conjugate weight to $w$.
Moreover, if $w\in A_2(\mathbb{R}^n)$, Wu's argument in \cite{Wu} shows that $BMO_{\mathcal{D}}(w) \simeq H^1_{\mathcal{D}}(w)^*$,
where the dyadic Hardy space $H_{\mathcal{D}}^1(w)$ is defined by the norm
$$ \|\phi\|_{H^1_{\mathcal{D}}(w)} := \| S_{\mathcal{D}}\phi\|_{L^1(w)}.$$
Then
\begin{equation} \label{E:H1BMO-1p}
|\left\langle b, \phi\right\rangle | \lesssim \|b\|_{BMO_{\mathcal{D}}(w)} \|S_{\mathcal{D}}\phi\|_{L^1(w)} \text{, for all } w\in A_2(\mathbb{R}^n).
\end{equation}
Now suppose $\mu$ and $\lambda$ are $A_p(\mathbb{R}^n)$ weights for some $1<p<\infty$, and define the Bloom weight $\nu: = \mu^{1/p} \lambda^{-1/p}$.
As shown in \cite{HLW2}, the weight $\nu \in A_2(\mathbb{R}^n)$, which means we may use \eqref{E:H1BMO-1p} with $\nu$.
A two-weight John-Nirenberg theorem for the Bloom BMO space $BMO(\nu)$ is also proved in \cite{HLW2}, namely
\begin{equation} \label{E:Bloom-JN-1p}
\|b\|_{BMO(\nu)} \simeq \|b\|_{BMO(\mu,\lambda,p)} \simeq \|b\|_{BMO(\lambda',\mu',p')},
\end{equation}
where
\begin{align}
& \|b\|_{BMO(\mu,\lambda,p)} := \sup_Q \bigg( \frac{1}{\mu(Q)} \int_Q |b - \left\langle b\right\rangle _Q|^p\,d\lambda \bigg)^{1/p},\\
& \|b\|_{BMO(\lambda',\mu',p')} := \sup_Q \bigg( \frac{1}{\lambda'(Q)} \int_Q |b - \left\langle b\right\rangle _Q|^{p'}\,d\mu' \bigg)^{1/p'}.
\end{align}
We now look at weighted BMO spaces in the product setting $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1}\otimes\mathbb{R}^{n_2}$.
Suppose $w(x_1, x_2)$ is a weight on $\mathbb{R}^{\vec{n}}$. Then we have three BMO spaces:
\begin{itemize}
\item \underline{Weighted Little $bmo(w)$:} is the space of all locally integrable functions $b$ on $\mathbb{R}^{\vec{n}}$ such that
$$ \|b\|_{bmo(w)} := \sup_R \frac{1}{w(R)} \int_R |b - \left\langle b\right\rangle _R| \,dx < \infty,$$
where the supremum is over all \textit{rectangles} $R = Q_1\times Q_2$ in $\mathbb{R}^{\vec{n}}$.
Given a choice of dyadic rectangles $\bm{\mathcal{D}} = \mathcal{D}_1\times\mathcal{D}_2$, we define the dyadic weighted little
$bmo_{\bm{\mathcal{D}}}(w)$ by taking supremum over $R\in\bm{\mathcal{D}}$.
\item \underline{Weighted Product $BMO_{\bm{\mathcal{D}}}(w)$:} is the space of all locally integrable functions $b$ on $\mathbb{R}^{\vec{n}}$ such that:
\begin{equation}
\|b\|_{BMO_{\bm{\mathcal{D}}}(w)} := \sup_{\Omega} \left( \frac{1}{w(\Omega)} \sum_{R \subset \Omega; R \in \bm{\mathcal{D}}} | \widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R} \right)^{1/2} < \infty,
\end{equation}
where the supremum is over all \textit{open sets} $\Omega \subset \mathbb{R}^{\vec{n}}$ with $w(\Omega) < \infty$.
\item \underline{Weighted Rectangular $BMO_{\bm{\mathcal{D}}, Rec}(w)$:} is defined in a similar fashion to the unweighted case --
just like product BMO, but taking supremum over rectangles instead of over open sets:
$$ \|b\|_{BMO_{\bm{\mathcal{D}}, Rec}(w)} := \sup_R \left( \frac{1}{w(R)} \sum_{T \subset R} |\widehat{b}(T^\epsilon)|^2 \frac{1}{\left\langle w\right\rangle _T} \right)^{1/2}, $$
where the supremum is over all rectangles $R$, and the summation is over all subrectangles $T \in \bm{\mathcal{D}}$, $T \subset R$.
\end{itemize}
We have the inclusions
$$ bmo_{\bm{\mathcal{D}}}(w) \subsetneq BMO_{\bm{\mathcal{D}}}(w) \subsetneq BMO_{\bm{\mathcal{D}}, Rec}(w). $$
Let us look more closely at some of these spaces.
\subsection{Weighted Product $BMO_{\bm{\mathcal{D}}}(w)$}
As in the one parameter case, we define the dyadic weighted Hardy space $\mathcal{H}^1_{\bm{\mathcal{D}}}(w)$ to be the space of all $\phi \in L^1(w)$ such that $S_{\bm{\mathcal{D}}}\phi \in L^1(w)$,
a Banach space under the norm $ \|\phi\|_{\mathcal{H}^1_{\bm{\mathcal{D}}}(w)} := \| S_{\bm{\mathcal{D}}} \phi\|_{L^1(w)}$.
The following result exists in the literature under various forms, but we include a proof here for completeness.
\begin{prop} \label{P:H1BMO}
With the notation above, $\mathcal{H}^1_{\bm{\mathcal{D}}}(w)^* \equiv BMO_{\bm{\mathcal{D}}}(w)$. Specifically,
every $b \in BMO_{\bm{\mathcal{D}}}(w)$ determines a continuous linear functional on $\mathcal{H}^1_{\bm{\mathcal{D}}}(w)$
by $\phi \mapsto \left\langle b, \phi \right\rangle $:
\begin{equation} \label{E:H1BMOprod}
\left| \left\langle b, \phi \right\rangle \right| \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(w)} \|S_{\bm{\mathcal{D}}} \phi\|_{L^1(w)},
\end{equation}
and, conversely, every $L \in \mathcal{H}^1_{\bm{\mathcal{D}}}(w)^*$ may be realized as $L \phi = \left\langle b, \phi\right\rangle $ for some $b \in BMO_{\bm{\mathcal{D}}}(w)$.
\end{prop}
\begin{proof}
To prove the first statement, let $b \in BMO_{\bm{\mathcal{D}}}(w)$ and $\phi \in \mathcal{H}^1_{\bm{\mathcal{D}}}(w)$. For every $j \in \mathbb{Z}$, define the set
$ U_j := \{ x \in \mathbb{R}^{\vec{n}}: S_{\bm{\mathcal{D}}}\phi(x) > 2^j \}, $
and the collection of rectangles
$ \mathcal{R}_j := \{ R\in\bm{\mathcal{D}}: w(R \cap U_j) > \frac{1}{2} w(R)\}. $
Clearly $U_{j+1} \subset U_j$ and $\mathcal{R}_{j+1} \subset \mathcal{R}_j$. Moreover,
\begin{equation}\label{E:DualP-b}
\sum_{j \in \mathbb{Z}} 2^j w(U_j) \simeq \|S_{\bm{\mathcal{D}}}\phi\|_{L^1(w)},
\end{equation}
which comes from the measure-theoretic fact that for any integrable function $f$ on a measure space $(\mathcal{X}, \mu)$:
$ \|f\|_{L^1(\mu)} \simeq \sum_{j \in \mathbb{Z}} 2^j \mu\{x\in\mathcal{X}: |f(x)|>2^j\}. $
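For completeness, here is the short argument for this fact, via the layer-cake formula:
$$ \|f\|_{L^1(\mu)} = \int_0^{\infty} \mu\{x\in\mathcal{X}: |f(x)|>t\}\,dt = \sum_{j\in\mathbb{Z}} \int_{2^j}^{2^{j+1}} \mu\{|f|>t\}\,dt, $$
and since $t \mapsto \mu\{|f|>t\}$ is non-increasing, each summand lies between $2^j \mu\{|f|>2^{j+1}\}$ and $2^j \mu\{|f|>2^j\}$; summing over $j$ gives the claimed two-sided comparison, with constant $2$.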
As shown in Proposition \ref{P:2ParRH}, there exist $C, \delta > 0$ such that
$\frac{w(E)}{w(R)} \leq C \left( \frac{|E|}{|R|} \right)^{\delta},$
for all rectangles $R$ and measurable subsets $E \subset R$. Define then for every $j \in \mathbb{Z}$ the (open) set:
$$ V_j := \{ x \in \mathbb{R}^{\vec{n}}: M_S\mathbbm{1}_{U_j}(x) > \theta \} \text{, where } \theta := \left(\frac{1}{2C}\right)^{1/\delta}.$$
First note that if $R \in \mathcal{R}_j$, then
$$ \frac{1}{2} < \frac{w(R\cap U_j)}{w(R)} \leq C\left( \frac{|R\cap U_j|}{|R|} \right)^{\delta} \text{, so } \theta < \left\langle \mathbbm{1}_{U_j}\right\rangle _{R} \leq M_S \mathbbm{1}_{U_j}(x)
\text{, for all } x \in R.$$
Therefore
\begin{equation} \label{E:DualP-c}
\bigcup_{R \in \mathcal{R}_j} R \subset V_j.
\end{equation}
Using \eqref{E:LlogLMS}, we have that
\begin{equation}\label{E:DualP-d}
w(V_j) \lesssim \int_{U_j} \frac{1}{\theta^p} \left( 1 + \log^{+} \frac{1}{\theta}\right)^{k-1}\,dw \simeq w(U_j).
\end{equation}
Now suppose $R\in\bm{\mathcal{D}}$ but $R \notin \bigcup_{j\in\mathbb{Z}} \mathcal{R}_j$. Then $w(R \cap \{S_{\bm{\mathcal{D}}}\phi \leq 2^j\}) \geq \frac{1}{2}w(R)$
for all $j \in \mathbb{Z}$, and so
$$ w(R \cap \{S_{\bm{\mathcal{D}}}\phi = 0\}) = w\left( \bigcap_{j=1}^{\infty} R \cap \{S_{\bm{\mathcal{D}}}\phi \leq 2^{-j}\} \right) \geq \frac{1}{2} w(R).$$
Then $|\{S_{\bm{\mathcal{D}}}\phi = 0\}| \geq |R \cap \{S_{\bm{\mathcal{D}}}\phi = 0\}| \geq \theta |R| > 0$, and we may write
$$ |\widehat{\phi}(R)|^2 = \int_{\{S_{\bm{\mathcal{D}}}\phi = 0\}} |\widehat{\phi}(R)|^2 \frac{\mathbbm{1}_R}{|R\cap \{S_{\bm{\mathcal{D}}}\phi = 0\}|} \,dx
\leq \frac{1}{\theta} \int_{\{S_{\bm{\mathcal{D}}}\phi = 0\}} (S_{\bm{\mathcal{D}}}\phi)^2\, dx = 0.$$
So
\begin{equation} \label{E:DualP-e}
\widehat{\phi}(R) = 0 \text{, for all } R \in \bm{\mathcal{D}},\: R \notin \bigcup_{j \in \mathbb{Z}}\mathcal{R}_j.
\end{equation}
Finally, if $R \in \bigcap_{j \in \mathbb{Z}}\mathcal{R}_j$, then
$$0 = w(R \cap \{S_{\bm{\mathcal{D}}}\phi = \infty\}) = \lim_{j \rightarrow \infty} w(R \cap \{S_{\bm{\mathcal{D}}}\phi > 2^j\}) \geq \frac{1}{2} w(R),$$
a contradiction. In light of this and \eqref{E:DualP-e},
\begin{eqnarray*}
\sum_{R \in \bm{\mathcal{D}}} |\widehat{b}(R)| |\widehat{\phi}(R)| &=& \sum_{j \in \mathbb{Z}} \sum_{R \in \mathcal{R}_j \setminus \mathcal{R}_{j+1}} |\widehat{b}(R)| |\widehat{\phi}(R)|\\
&\leq& \sum_{j \in \mathbb{Z}} \left( \sum_{R \in \mathcal{R}_j \setminus \mathcal{R}_{j+1}} |\widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R} \right)^{1/2}
\left( \sum_{R \in \mathcal{R}_j \setminus \mathcal{R}_{j+1}} |\widehat{\phi}(R)|^2 \left\langle w\right\rangle _R \right)^{1/2}
\end{eqnarray*}
To estimate the first term, we simply note that
$$ \sum_{R \in \mathcal{R}_j \setminus \mathcal{R}_{j+1}} |\widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R} \leq \sum_{R \in \mathcal{R}_j} |\widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R}
\leq \sum_{R\subset V_j; R \in \bm{\mathcal{D}}} |\widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R} \leq \|b\|^2_{BMO_{\bm{\mathcal{D}}}(w)} w(V_j), $$
where the second inequality follows from \eqref{E:DualP-c}.
For the second term, remark that any $R \in \mathcal{R}_{j} \setminus \mathcal{R}_{j+1}$ satisfies $R \subset V_j$ and $w(R \setminus U_{j+1}) \geq \frac{1}{2}w(R)$.
Then
\begin{eqnarray*}
\sum_{R \in \mathcal{R}_j \setminus \mathcal{R}_{j+1}} |\widehat{\phi}(R)|^2 \left\langle w\right\rangle _R &\leq&
2\sum_{R\in\mathcal{R}_j\setminus\mathcal{R}_{j+1}} |\widehat{\phi}(R)|^2 \frac{w(R\setminus U_{j+1})}{|R|}\\
&=& 2 \int_{V_j \setminus U_{j+1}} \sum_{R\in\mathcal{R}_j\setminus\mathcal{R}_{j+1}} |\widehat{\phi}(R)|^2 \frac{\mathbbm{1}_R}{|R|}\,dw\\
&\leq& 2 \int_{V_j\setminus U_{j+1}} (S_{\bm{\mathcal{D}}}\phi)^2\,dw \lesssim 2^{2j} w(V_j),
\end{eqnarray*}
since $S_{\bm{\mathcal{D}}}\phi \leq 2^{j+1}$ off $U_{j+1}$. Finally, we have by \eqref{E:DualP-d}:
$$ \sum_{R \in \bm{\mathcal{D}}} |\widehat{b}(R)| |\widehat{\phi}(R)| \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(w)}
\sum_{j \in \mathbb{Z}} 2^j w(V_j) \simeq \|b\|_{BMO_{\bm{\mathcal{D}}}(w)} \sum_{j\in\mathbb{Z}} 2^j w(U_j).$$
Combining this with \eqref{E:DualP-b}, we obtain \eqref{E:H1BMOprod}.
To see the converse, let $L \in \mathcal{H}^1_{\bm{\mathcal{D}}}(w)^*$. Then $L$ is given by $L \phi = \left\langle b, \phi\right\rangle $ for some function $b$. Fix an open set $\Omega$ with $w(\Omega) < \infty$. Then
$$ \left( \sum_{R\subset \Omega; R\in\bm{\mathcal{D}}} |\widehat{b}(R)|^2 \frac{1}{\left\langle w\right\rangle _R} \right)^{1/2}
\leq \sup_{\|\phi\|_{l^2(\Omega, w)} \leq 1} \left| \sum_{R\subset\Omega, R\in\bm{\mathcal{D}}} \widehat{b}(R) \widehat{\phi}(R)\right|, $$
where $ \|\phi\|_{l^2(\Omega, w)}^2 := \sum_{R\subset\Omega, R\in\bm{\mathcal{D}}} |\widehat{\phi}(R)|^2 \left\langle w\right\rangle _R$. By a simple application of H\"{o}lder's inequality,
$$ \left| \sum_{R\subset\Omega, R\in\bm{\mathcal{D}}} \widehat{b}(R) \widehat{\phi}(R)\right| \lesssim \|L\|_{\star} \|\phi\|_{\mathcal{H}^1_{\bm{\mathcal{D}}}(w)}
\leq \|L\|_{\star} (w(\Omega))^{1/2} \|\phi\|_{l^2(\Omega, w)}, $$
so $\|b\|_{BMO_{\bm{\mathcal{D}}}(w)} \lesssim \|L\|_{\star}$, where $\|L\|_{\star}$ denotes the norm of $L$ as a functional on $\mathcal{H}^1_{\bm{\mathcal{D}}}(w)$.
\end{proof}
\subsection{Weighted little $bmo_{\bm{\mathcal{D}}}(w)$}
In this case, we also want to look at each variable separately. Specifically, for each $x_2 \in \mathbb{R}^{n_2}$ we consider the space
$BMO(w_1, x_2)$: the weighted BMO space over $\mathbb{R}^{n_1}$, with respect to the weight $w(\cdot, x_2)$.
$$ BMO(w_1, x_2) := BMO(w(\cdot, x_2); \:\: \mathbb{R}^{n_1}) \text{, for each } x_2 \in \mathbb{R}^{n_2}. $$
The norm in this space is given by
$$ \|b(\cdot, x_2)\|_{BMO(w_1, x_2)} := \sup_{Q_1} \frac{1}{w(Q_1, x_2)} \int_{Q_1} | b(x_1, x_2) - m_{Q_1}b(x_2) |\,dx_1, $$
where
$$ w(Q_1, x_2) := \int_{Q_1} w(x_1, x_2)\,dx_1 \:\:\text{ and }\:\: m_{Q_1}b(x_2) := \frac{1}{|Q_1|} \int_{Q_1} b(x_1, x_2)\,dx_1. $$
The space $BMO(w_2, x_1)$ and the quantities $w(Q_2, x_1)$ and $m_{Q_2}b(x_1)$ are defined symmetrically.
\begin{prop} \label{P:bmo-eachVar}
Let $w(x_1, x_2)$ be a weight on $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$. Then $b \in L^1_{loc}(\mathbb{R}^{\vec{n}})$ is in $bmo(w)$ if and only if $b$
is in the one-parameter weighted BMO spaces $BMO(w_i, x_j)$ separately in each variable, uniformly:
\begin{equation} \label{E:bmoSep}
\|b\|_{bmo(w)} \simeq \max\left\{
\esssup_{x_1\in\mathbb{R}^{n_1}} \|b(x_1, \cdot)\|_{BMO(w_2, x_1)};\:\: \esssup_{x_2\in\mathbb{R}^{n_2}} \|b(\cdot, x_2)\|_{BMO(w_1, x_2)}
\right\}.
\end{equation}
\end{prop}
\begin{rem} \label{R1}
In the \textit{unweighted} case $bmo(\mathbb{R}^{\vec{n}})$, if we fixed $x_2\in\mathbb{R}^{n_2}$, we would look at $b(\cdot, x_2)$ in the space $BMO(\mathbb{R}^{n_1})$ -- the \textit{same}
one-parameter BMO space for all $x_2$. In the weighted case however, the one-parameter space for $b(\cdot, x_2)$ \textit{changes with} $x_2$, because the weight
$w(\cdot, x_2)$ changes with $x_2$.
\end{rem}
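\noindent To illustrate the remark, consider for instance a product weight $w(x_1, x_2) = u(x_1) v(x_2)$, with $u, v$ weights on $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$ respectively. Then $w(Q_1, x_2) = v(x_2)\, u(Q_1)$, so
$$ \|b(\cdot, x_2)\|_{BMO(w_1, x_2)} = \frac{1}{v(x_2)} \sup_{Q_1} \frac{1}{u(Q_1)} \int_{Q_1} |b(x_1, x_2) - m_{Q_1}b(x_2)|\,dx_1 = \frac{1}{v(x_2)} \|b(\cdot, x_2)\|_{BMO(u)}, $$
that is, $BMO(w_1, x_2)$ is the fixed space $BMO(u)$ for every $x_2$, with norm rescaled by $1/v(x_2)$; the phenomenon described in Remark \ref{R1} only becomes visible for genuinely non-product weights.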
\begin{proof}
Suppose first that $b \in bmo(w)$. Then for all cubes $Q_1$, $Q_2$:
\begin{eqnarray*}
\|b\|_{bmo(w)} &\geq& \frac{1}{w(Q_1\times Q_2)} \int_{Q_1} \int_{Q_2} |b(x_1, x_2) - \left\langle b\right\rangle _{Q_1 \times Q_2}|\,dx_2\,dx_1 \\
&\geq& \frac{1}{w(Q_1 \times Q_2)} \int_{Q_1} \left| \int_{Q_2} b(x_1, x_2) - \left\langle b\right\rangle _{Q_1 \times Q_2} \,dx_2\right|\,dx_1, \\
\end{eqnarray*}
so
\begin{equation} \label{E:bmoSep1}
\int_{Q_1} | m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1 \times Q_2} |\,dx_1 \leq \frac{w(Q_1\times Q_2)}{|Q_2|} \|b\|_{bmo(w)}.
\end{equation}
Now fix a cube $Q_2$ in $\mathbb{R}^{n_2}$ and let $f_{Q_2}(x_1) := \int_{Q_2} |b(x_1, x_2) - m_{Q_2}b(x_1)|\,dx_2$. Then for any $Q_1$:
\begin{eqnarray*}
\left\langle f_{Q_2}\right\rangle _{Q_1} &\leq& \frac{1}{|Q_1|} \int_{Q_1} \int_{Q_2} |b(x_1, x_2) - \left\langle b\right\rangle _{Q_1\times Q_2}|\,dx
+ \frac{1}{|Q_1|} \int_{Q_1} \int_{Q_2} |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|\,dx\\
&\leq& \frac{w(Q_1\times Q_2)}{|Q_1|} \|b\|_{bmo(w)} + \frac{|Q_2|}{|Q_1|} \int_{Q_1} |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|\,dx_1\\
&\leq& 2 \frac{w(Q_1\times Q_2)}{|Q_1|} \|b\|_{bmo(w)} = 2 \left\langle w(Q_2, \cdot)\right\rangle _{Q_1} \|b\|_{bmo(w)},
\end{eqnarray*}
where the last inequality follows from \eqref{E:bmoSep1}. By the Lebesgue differentiation theorem:
$$ f_{Q_2}(x_1) = \lim_{Q_1 \rightarrow x_1} \left\langle f_{Q_2}\right\rangle _{Q_1} \leq 2\|b\|_{bmo(w)} \lim_{Q_1 \rightarrow x_1} \left\langle w(Q_2, \cdot)\right\rangle _{Q_1}
= 2\|b\|_{bmo(w)} w(Q_2, x_1),$$
for almost all $x_1 \in \mathbb{R}^{n_1}$, where $Q_1 \rightarrow x_1$ denotes a sequence of cubes containing $x_1$ with side length tending to $0$.
We would like to say at this point that $\|b(x_1, \cdot)\|_{BMO(w_2, x_1)} = \sup_{Q_2} \frac{1}{w(Q_2, x_1)} f_{Q_2}(x_1)$ is uniformly (a.a. $x_1$)
bounded. However, we must be a little careful and note that at this point we really have that for every cube $Q_2$ in $\mathbb{R}^{n_2}$, there is a
\textit{null set} $N(Q_2) \subset \mathbb{R}^{n_1}$ such that
$$ f_{Q_2}(x_1) \leq 2\|b\|_{bmo(w)} w(Q_2, x_1) \text{ for all } x_1 \in \mathbb{R}^{n_1} \setminus N(Q_2). $$
In order to obtain the inequality we want, holding for a.a. $x_1$, let $N:= \bigcup N(\widetilde{Q_2})$, where $\widetilde{Q_2}$ ranges over the cubes in $\mathbb{R}^{n_2}$ with rational side length
and centers with rational coordinates. Then $N$ is a null set and $ f_{\widetilde{Q_2}}(x_1) \leq 2\|b\|_{bmo(w)} w(\widetilde{Q_2}, x_1)$ for all $x_1 \in \mathbb{R}^{n_1} \setminus N$.
By density, this statement then holds for \textit{all} cubes $Q_2$ and $x_1 \notin N$, so
$$ \esssup_{x_1\in\mathbb{R}^{n_1}} \|b(x_1, \cdot)\|_{BMO(w_2, x_1)} \leq 2 \|b\|_{bmo(w)}. $$
The result for the other variable follows symmetrically.
Conversely, suppose
$$ \|b(x_1, \cdot)\|_{BMO(w_2, x_1)} \leq C_1 \text{ for a.a. } x_1 \text{, and } \|b(\cdot, x_2)\|_{BMO(w_1, x_2)} \leq C_2 \text{ for a.a. } x_2. $$
Then for any $R = Q_1 \times Q_2$:
\begin{eqnarray*}
\int_R |b - \left\langle b\right\rangle _R|\,dx &\leq& \int_{Q_1}\int_{Q_2} |b(x_1, x_2) - m_{Q_2}b(x_1)|\,dx + \int_{Q_1} |Q_2| |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|\,dx_1 \\
&\leq& \int_{Q_1} C_2 w(Q_2, x_1)\,dx_1 + \int_{Q_1}\int_{Q_2} |b(x_1, x_2) - m_{Q_1}b(x_2)|\,dx_2\,dx_1\\
&\leq& C_2 w(R) + \int_{Q_2} C_1 w(Q_1, x_2)\,dx_2\\
&=& (C_1 + C_2) w(R),
\end{eqnarray*}
so
$$\|b\|_{bmo(w)} \leq 2\max\left\{
\esssup_{x_1\in\mathbb{R}^{n_1}} \|b(x_1, \cdot)\|_{BMO(w_2, x_1)};\:\: \esssup_{x_2\in\mathbb{R}^{n_2}} \|b(\cdot, x_2)\|_{BMO(w_1, x_2)}
\right\}.$$
\end{proof}
\begin{cor} \label{C:A2bmo2}
Let $w \in A_2(\mathbb{R}^{\vec{n}})$ and $b \in bmo_{\bm{\mathcal{D}}}(w)$. Then
$$ |\left\langle b, \phi\right\rangle | \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(w)} \|S_{\mathcal{D}_i}\phi\|_{L^1(w)},$$
for all $i \in \{1, 2\}$.
\end{cor}
\begin{proof}
This follows immediately from the one-parameter result in \eqref{E:H1BMO-1p} and the proposition above:
\begin{align*}
|\left\langle b, \phi\right\rangle | & \leq \int_{\mathbb{R}^{n_1}} | \left\langle b(x_1, \cdot), \phi(x_1, \cdot)\right\rangle _{\mathbb{R}^{n_2}} |\,dx_1\\
& \lesssim \int_{\mathbb{R}^{n_1}} \|b(x_1, \cdot)\|_{BMO_{\mathcal{D}_2}(w(x_1, \cdot))} \|S_{\mathcal{D}_2}\phi(x_1, \cdot)\|_{L^1(w(x_1, \cdot))} \,dx_1\\
& \lesssim \|b\|_{bmo(w)} \|S_{\mathcal{D}_2}\phi\|_{L^1(w)},
\end{align*}
and similarly for $S_{\mathcal{D}_1}$.
\end{proof}
We now look at the little bmo version of \eqref{E:MuckWheeden-1p}.
\begin{prop} \label{P:MuckWheeden-2p}
If $w\in A_p(\mathbb{R}^{\vec{n}})$ for some $1<p<\infty$, then
$$ \|b\|_{bmo(w)} \simeq \|b\|_{bmo(w;p')} := \sup_R \bigg( \frac{1}{w(R)} \int_R |b - \left\langle b\right\rangle _R|^{p'}\,dw' \bigg)^{1/p'}. $$
\end{prop}
\begin{proof}
By Proposition \ref{P:bmo-eachVar} and \eqref{E:MuckWheeden-1p}:
$$\|b\|_{bmo(w)} \simeq \max\left\{
\esssup_{x_1\in\mathbb{R}^{n_1}} \|b(x_1, \cdot)\|_{BMO(w(x_1, \cdot); p')};\:\: \esssup_{x_2\in\mathbb{R}^{n_2}} \|b(\cdot, x_2)\|_{BMO(w(\cdot, x_2);p')}
\right\}. $$
Suppose first that $b \in bmo(w;p')$. Note that for any function $g$ on $\mathbb{R}^{\vec{n}}$ and any cube $Q_2$ in $\mathbb{R}^{n_2}$, H\"{o}lder's inequality gives
$$ \int_{Q_2} |g(x_1, x_2)|^{p'} w'(x_1, x_2)\,dx_2 \geq \frac{1}{w(Q_2,x_1)^{p'-1}} \left| \int_{Q_2} g(x_1, x_2)\,dx_2 \right|^{p'}.$$
Then, for $R = Q_1 \times Q_2$:
\begin{align*}
\|b\|^{p'}_{bmo(w;p')} &\geq \frac{1}{w(R)} \int_{Q_1} \frac{1}{w(Q_2, x_1)^{p'-1}}
\left| \int_{Q_2} b(x_1, x_2) - \left\langle b\right\rangle _{Q_1\times Q_2}\,dx_2 \right|^{p'}\,dx_1\\
&= \frac{1}{w(R)} \int_{Q_1} \left| m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2} \right|^{p'} \frac{|Q_2|^{p'}}{w(Q_2,x_1)^{p'-1}}\,dx_1\\
&\geq \frac{1}{w(R)} \int_{Q_1} \left| m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2} \right|^{p'} w'(Q_2, x_1)\,dx_1,
\end{align*}
where the last inequality follows from
$$ \frac{|Q_2|^{p'}}{w(Q_2,x_1)^{p'-1}} = |Q_2| \frac{1}{\left\langle w(x_1, \cdot)\right\rangle _{Q_2}^{p'-1}} \geq |Q_2| \frac{\left\langle w'(x_1, \cdot)\right\rangle _{Q_2}}{[w(x_1,\cdot)]_{A_p}^{p'-1}}
\simeq w'(Q_2, x_1). $$
Now fix $Q_2$ and consider $f_{Q_2}(x_1) := \int_{Q_2} |b(x_1, x_2) - m_{Q_2}b(x_1)|^{p'} w'(x_1, x_2)\,dx_2$. Then
\begin{align*}
\left\langle f_{Q_2}\right\rangle _{Q_1} &\lesssim \frac{1}{|Q_1|} \int_{Q_1} \int_{Q_2} \bigg(
|b(x_1, x_2) - \left\langle b\right\rangle _{Q_1\times Q_2}|^{p'} + |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|^{p'} \bigg) w'(x_1, x_2)\,dx_2\,dx_1\\
&\lesssim \frac{w(Q_1\times Q_2)}{|Q_1|} \|b\|^{p'}_{bmo(w;p')} + \frac{1}{|Q_1|} \int_{Q_1} |m_{Q_2} b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|^{p'}
w'(Q_2, x_1)\,dx_1\\
& \lesssim \frac{w(Q_1\times Q_2)}{|Q_1|} \|b\|^{p'}_{bmo(w;p')}.
\end{align*}
Then for almost all $x_1$:
$$ f_{Q_2}(x_1) = \lim_{Q_1\rightarrow x_1} \left\langle f_{Q_2}\right\rangle _{Q_1} \lesssim \lim_{Q_1\rightarrow x_1} \frac{w(Q_1\times Q_2)}{|Q_1|} \|b\|^{p'}_{bmo(w;p')}
= w(Q_2, x_1) \|b\|^{p'}_{bmo(w;p')}.$$
Taking again rational cubes, we obtain
$$ \|b(x_1, \cdot)\|_{BMO(w(x_1, \cdot);p')} = \sup_{Q_2} \bigg( \frac{1}{w(Q_2, x_1)} f_{Q_2}(x_1) \bigg)^{1/p'} \lesssim \|b\|_{bmo(w;p')},$$
for almost all $x_1$.
Conversely, if $b\in bmo(w)$, then there exist $C_1$ and $C_2$ such that
$$ \|b(x_1, \cdot)\|_{BMO(w(x_1, \cdot);p')}\leq C_1 \text{ a.a. } x_1\text{, and }
\|b(\cdot, x_2)\|_{BMO(w(\cdot, x_2);p')} \leq C_2 \text{ a.a. } x_2.$$
Then
\begin{align*}
\int_R |b - \left\langle b\right\rangle _R|^{p'}\,dw' \lesssim & \int_{Q_1}\int_{Q_2} |b(x_1, x_2) - m_{Q_2}b(x_1)|^{p'} w'(x_1, x_2)\,dx_2\,dx_1\\
&+ \int_{Q_1} \int_{Q_2} |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|^{p'} w'(x_1, x_2)\,dx_2\,dx_1.
\end{align*}
The first integral is easily seen to be bounded by
$$ \int_{Q_1} \|b(x_1, \cdot)\|^{p'}_{BMO(w(x_1,\cdot);p')} w(Q_2, x_1)\,dx_1 \leq C_1^{p'} w(Q_1\times Q_2).$$
The second integral is equal to:
\begin{align*}
& \int_{Q_1} |m_{Q_2}b(x_1) - \left\langle b\right\rangle _{Q_1\times Q_2}|^{p'} w'(Q_2, x_1)\,dx_1\\
&\leq \int_{Q_1} \frac{w'(Q_2, x_1)}{|Q_2|^{p'}} \bigg( \int_{Q_2} |b(x_1,x_2) - m_{Q_1}b(x_2)|\,dx_2 \bigg)^{p'}\,dx_1\\
&\leq \int_{Q_1} \frac{w'(Q_2,x_1) w(Q_2, x_1)^{p'-1}}{|Q_2|^{p'}} \int_{Q_2} |b(x_1,x_2) - m_{Q_1}b(x_2)|^{p'} w'(x_1, x_2)\,dx_2\,dx_1.
\end{align*}
The factor $\frac{w'(Q_2,x_1)\, w(Q_2, x_1)^{p'-1}}{|Q_2|^{p'}}$ equals $\left\langle w'(x_1, \cdot)\right\rangle _{Q_2} \left\langle w(x_1, \cdot)\right\rangle _{Q_2}^{p'-1} \lesssim [w]_{A_p}^{p'-1}$ for almost all $x_1$. The integral is therefore
further bounded by
$$\int_{Q_2} w(Q_1, x_2) \|b(\cdot, x_2)\|_{BMO(w(\cdot, x_2);p')}\,dx_2
\lesssim C_2^{p'} w(Q_1\times Q_2).$$
Finally, this gives
$$ \|b\|_{bmo(w;p')} \lesssim (C_1^{p'} + C_2^{p'})^{1/p'} \lesssim \max(C_1, C_2) \simeq \|b\|_{bmo(w)}. $$
\end{proof}
We also have a two-weight John--Nirenberg theorem for the Bloom little bmo space, whose proof closely follows the one above.
\begin{prop}\label{P:Bloom-JN-2p}
Let $\mu, \lambda \in A_p(\mathbb{R}^{\vec{n}})$ for $1<p<\infty$, and $\nu := \mu^{1/p}\lambda^{-1/p}$. Then
\begin{equation} \label{E:Bloom-JN-2p}
\|b\|_{bmo(\nu)} \simeq \|b\|_{bmo(\mu,\lambda,p)} \simeq \|b\|_{bmo(\lambda',\mu',p')},
\end{equation}
where
\begin{align}
& \|b\|_{bmo(\mu,\lambda,p)} := \sup_R \bigg( \frac{1}{\mu(R)} \int_R |b - \left\langle b\right\rangle _R|^p\,d\lambda \bigg)^{1/p},\\
& \|b\|_{bmo(\lambda',\mu',p')} := \sup_R \bigg( \frac{1}{\lambda'(R)} \int_R |b - \left\langle b\right\rangle _R|^{p'}\,d\mu' \bigg)^{1/p'}.
\end{align}
\end{prop}
\noindent Remark that it also easily follows that $\nu \in A_2(\mathbb{R}^{\vec{n}})$.
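Indeed, for any rectangle $R$, two applications of H\"{o}lder's inequality with exponents $p$ and $p'$ give
$$ \left\langle \nu \right\rangle _R \left\langle \nu^{-1} \right\rangle _R
\leq \left\langle \mu \right\rangle _R^{1/p} \left\langle \lambda^{1-p'} \right\rangle _R^{1/p'}
\left\langle \lambda \right\rangle _R^{1/p} \left\langle \mu^{1-p'} \right\rangle _R^{1/p'}
= \left( \left\langle \mu \right\rangle _R \left\langle \mu^{1-p'} \right\rangle _R^{p-1} \right)^{1/p}
\left( \left\langle \lambda \right\rangle _R \left\langle \lambda^{1-p'} \right\rangle _R^{p-1} \right)^{1/p}, $$
using that $\nu = \mu^{1/p}\lambda^{-1/p}$, $\nu^{-1} = \lambda^{1/p}\mu^{-1/p}$, $-p'/p = 1-p'$ and $1/p' = (p-1)/p$; whence $[\nu]_{A_2(\mathbb{R}^{\vec{n}})} \leq \big( [\mu]_{A_p(\mathbb{R}^{\vec{n}})}\, [\lambda]_{A_p(\mathbb{R}^{\vec{n}})} \big)^{1/p}$.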
\section{Proof of the Lower Bound}
\label{S:LB}
\begin{proof}[Proof of Theorem \ref{T:LB}]
To see the lower bound, we adapt the argument of Coifman, Rochberg and Weiss \cite{CRW}. Let $\{ X_k (x) \}$ and
$\{ Y_l (y) \}$ be orthonormal bases for the space of spherical harmonics of degree $n$
in $\mathbb{R}^n$. Then $\sum_k | X_k (x) |^2 = c_n | x
|^{2 n}$ and thus
\[ 1 = \frac{1}{c_n} \sum_k \frac{X_k (x - x')}{| x - x' |^{2 n}} X_k (x - x')
\]
and similarly for $Y_l$.
Furthermore, $X_k (x - x') = \sum_{| \alpha | + | \beta | = n}
\mathbf{x}^{(k)}_{\alpha \beta} x^{\alpha} x'^{\beta}$, and likewise for
$Y_l$. Recall that
\[ b (x, y) \in {bmo(\nu)} \Longleftrightarrow \| b \|_{{bmo(\nu)}} = \sup_Q
\frac{1}{\nu(Q)} \int_Q | b (x, y) - \langle b \rangle_Q | d x d y < \infty
. \]
Here, $Q = I \times J$ and $I$ and $J$ are cubes in $\mathbb{R}^n$. Let us
define the function
\[ \Gamma_Q (x, y) = \text{sign} (b (x, y) - \langle b \rangle_Q)
\mathbf{1}_Q (x, y) . \]
So
\begin{eqnarray*}
& & | b (x, y) - \langle b \rangle_Q | | Q | \mathbf{1}_Q (x, y)\\
& = & (b (x, y) - \langle b \rangle_Q) | Q | \Gamma_Q (x, y)\\
& = & \int_Q (b (x, y) - b (x', y')) \Gamma_Q (x, y) d x' d y'\\
& \sim & \sum_{k, l} \int_Q (b (x, y) - b (x', y')) \frac{X_k (x - x')}{| x
- x' |^{2 n}} X_k (x - x') \frac{Y_l (y - y')}{| y - y' |^{2 n}} Y_l (y
- y') \Gamma_Q (x, y) d x' d y'\\
& = & \sum_{k, l} \int_{\mathbb{R}^{2 n}} \frac{b (x, y) - b (x', y')}{| x
- x' |^{2 n} | y - y' |^{2 n}} X_k (x - x') Y_l (y - y') \cdot\\
& & \cdot \sum_{| \alpha | + | \beta | = n} \mathbf{x}^{(k)}_{\alpha
\beta} x^{\alpha} x'^{\beta} \sum_{| \gamma | + | \delta | = n}
\mathbf{y}^{(l)}_{\gamma \delta} y^{\gamma} y'^{\delta} \Gamma_Q (x, y)
\mathbf{1}_Q (x', y') d x' d y' .
\end{eqnarray*}
Note that
\begin{align*}
& \int_{\mathbb{R}^{2 n}} \frac{b (x, y) - b (x', y')}{| x - x' |^{2
n} | y - y' |^{2 n}} X_k (x - x') Y_l (y - y') \cdot x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y') d x' d y' \\
=& [b, T_k T_l] (x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y')) .
\end{align*}
Here $T_k $ and $T_l$ are the Calder\'on-Zygmund operators that correspond to the
kernels
\[ \frac{X_k (x)}{| x |^{2 n}} \text{ and } \frac{Y_l (y)}{| y |^{2 n}} . \]
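To illustrate, consider the simplest case $n = 1$: the space of spherical harmonics of degree one in $\mathbb{R}$ is spanned by $X (x) = x$, whose associated kernel is
\[ \frac{X (x)}{| x |^2} = \frac{x}{| x |^2} = \frac{1}{x}, \]
so both operators reduce, up to a constant, to Hilbert transforms, and the argument tests $b$ against the commutator with the double Hilbert transform $H_1 H_2$.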
Observe that these have the correct homogeneity due to the homogeneity of the
$X_k$ and $Y_l$. With this notation, the above becomes
\begin{eqnarray*}
& & | b (x, y) - \langle b \rangle_Q | | Q | \mathbf{1}_Q (x, y)\\
& = & \sum_{k, l} \sum_{| \alpha | + | \beta | = n} \sum_{| \gamma | + |
\delta | = n} \mathbf{x}^{(k)}_{\alpha \beta} x^{\alpha}
\mathbf{y}^{(l)}_{\gamma \delta} y^{\gamma} \Gamma_Q (x, y) [b, T_k T_l]
(x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y')) (x, y) .
\end{eqnarray*}
Now we take the $L^p$ norm with respect to $(x, y)$ and the measure $\lambda$. Let
us assume for a moment that both $I$ and $J$ are centered at $0$, and thus $Q$
is centered at $0$. In this case, since $\Gamma_Q$ and $\mathbf{1}_Q$ are
supported in $Q$, only points with $x, x', y, y' \in Q$ contribute.
\begin{eqnarray*}
& & | Q | \left( \int_Q | b (x, y) - \langle b \rangle_Q |^p d \lambda (x,
y) \right)^{1 / p}\\
& \leqslant & \sum_{k, l} \sum_{| \alpha | + | \beta | = n} \sum_{| \gamma
| + | \delta | = n} \| \mathbf{x}^{(k)}_{\alpha \beta} x^{\alpha}
\mathbf{y}^{(l)}_{\gamma \delta} y^{\gamma} \Gamma_Q (x, y) [b, T_k T_l]
(x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y')) (x, y) \|_{L^p (\lambda)}\\
& \lesssim & \sum_{k, l} \sum_{| \alpha | + | \beta | = n} \sum_{| \gamma |
+ | \delta | = n} \mathfrak{l} (I)^{| \alpha |} \mathfrak{l} (J)^{| \gamma
|} \| [b, T_k T_l] (x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y')) \|_{L^p
(\lambda)}\\
& \lesssim & \sum_{k, l} \sum_{| \alpha | + | \beta | = n} \sum_{| \gamma |
+ | \delta | = n} \mathfrak{l} (I)^{| \alpha |} \mathfrak{l} (J)^{| \gamma
|} \| [b, T_k T_l] \|_{L^p ({\mu}) \rightarrow L^p (\lambda)} \|
x'^{\beta} y'^{\delta} \mathbf{1}_Q (x', y') \|_{L^p ({\mu})}\\
& \lesssim & \sum_{k, l} \sum_{| \alpha | + | \beta | = n} \sum_{| \gamma |
+ | \delta | = n} \mathfrak{l} (I)^{| \alpha |} \mathfrak{l} (J)^{| \gamma
|} \mathfrak{l} (I)^{| \beta |} \mathfrak{l} (J)^{| \delta |} \| [b, T_k
T_l] \|_{L^p ({\mu}) \rightarrow L^p (\lambda)} {\mu} (Q)^{1 / p}
\end{eqnarray*}
We disregarded the coefficients of the $X_k$ and $Y_l$ at the cost of a constant.
Notice that $T_k$ and $T_l$ are homogeneous polynomials in Riesz
transforms. Therefore the commutator $[b, T_k T_l]$ can be written as a linear
combination of terms of the form $M [b, R_i^1 R_j^2] N$, where $M$ and $N$ are
compositions of Riesz transforms. Indeed, in a first step write $[b, T_k T_l]$ as a
linear combination of terms of the form $[b, R_{(n)}^k R_{(n)}^l]$, where
\[ R_{(n)}^k = \prod_s R_{i_s^{(k)}}^1 \]
is a composition of $n$ Riesz transforms acting in the variable $1$, with a
choice $i^{(k)} = (i^{(k)}_s)^n_{s = 1} \in \{ 1, \ldots, n \}^n$ for each $k$,
and similarly for $R_{(n)}^l$ acting in variable $2$. Then, for each such term, apply
the identity $[A B, b] = A [B, b] + [A, b] B$ successively: use $A = R_{i_1}^1
R_{j_1}^2$ and $B$ of the form $R_{(n - 1)}^k R_{(n - 1)}^l$, and repeat. It is
decisive here that $T_k$ and $T_l$ are homogeneous polynomials in Riesz
transforms of the same degree. Since we required that all commutators of the form
$[b, R^1_i R^2_j]$ are bounded, this shows the ${bmo}$ estimate for $b$ for
rectangles $Q$ whose sides are centered at $0$. We now translate $b$ in the two
directions separately and obtain what we need, by Proposition \ref{P:Bloom-JN-2p}:
$$ \|b\|_{bmo(\nu)} \simeq \|b\|_{bmo(\mu,\lambda,p)} := \sup_R \bigg( \frac{1}{\mu(R)} \int_R |b - \left\langle b\right\rangle _R|^p\,d\lambda \bigg)^{1/p}
\lesssim \sup_{1 \leqslant k, l \leqslant n} \| [b, R^1_k R^2_l] \|_{L^p
({\mu}) \rightarrow L^p (\lambda)} . $$
\end{proof}
\section{Biparameter Paraproducts}
\label{S:Para}
Decomposing two functions $b$ and $f$ on $\mathbb{R}^n$ into their Haar series adapted to some dyadic grid $\mathcal{D}$ and analyzing the different inclusion properties of the dyadic cubes, one may express their product as
$$ bf = \Pi_b f + \Pi_b^*f + \Gamma_b f + \Pi_f b,$$
where
$$ \Pi_b f := \sum_{Q\in\mathcal{D}} \widehat{b}(Q^{\epsilon}) \left\langle f\right\rangle _Q h_Q^{\epsilon},
\:\:\: \Pi_b^*f := \sum_{Q\in\mathcal{D}} \widehat{b}(Q^{\epsilon}) \widehat{f}(Q^{\epsilon}) \frac{\mathbbm{1}_Q}{|Q|}, \text{ and }
\Gamma_b f := \sum_{Q \in \mathcal{D}} \widehat{b}(Q^{\epsilon}) \widehat{f}(Q^{\delta}) \frac{1}{\sqrt{|Q|}} h_Q^{\epsilon+\delta}. $$
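Formally, and modulo the usual convergence issues, this identity is obtained by expanding $b f = \sum_{Q, R \in \mathcal{D}} \widehat{b} (Q^{\epsilon}) \widehat{f} (R^{\delta}) h_Q^{\epsilon} h_R^{\delta}$ and sorting the terms by the relative position of the cubes: the terms with $Q \subsetneq R$ give $\Pi_b f$, since $h_R^{\delta}$ is constant on $Q$ and $\sum_{R \supsetneq Q} \widehat{f} (R^{\delta}) h_R^{\delta} (Q) = \left\langle f\right\rangle _Q$; symmetrically, the terms with $R \subsetneq Q$ give $\Pi_f b$; and the diagonal terms $Q = R$ give $\Pi_b^* f$ when $\epsilon = \delta$, since $(h_Q^{\epsilon})^2 = \mathbbm{1}_Q / |Q|$, and $\Gamma_b f$ when $\epsilon \neq \delta$, since then $h_Q^{\epsilon} h_Q^{\delta} = |Q|^{-1/2} h_Q^{\epsilon + \delta}$, with the convention that signatures add coordinatewise modulo $2$.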
In \cite{HLW2}, it was shown that, when $b \in BMO(\nu)$, the operators $\Pi_b$, $\Pi_b^*$, and $\Gamma_b$ are bounded $L^p(\mu) \rightarrow L^p(\lambda)$.
\subsection{Product BMO Paraproducts}\label{Ss:ProdPara}
In the biparameter setting $\bm{\mathcal{D}} = \mathcal{D}_1 \times \mathcal{D}_2$, we have \textit{fifteen} paraproducts. We treat them beginning with the nine paraproducts associated with product BMO.
First, we have the three ``pure'' paraproducts, direct adaptations of the one-parameter paraproducts:
\begin{align}
\Pi_b f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \left\langle f\right\rangle _{Q_1 \times Q_2} h_{Q_1}^{\epsilon_1} \otimes h_{Q_2}^{\epsilon_2},\\
\Pi_b^*f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \widehat{f}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes\frac{\mathbbm{1}_{Q_2}}{|Q_2|},\\
\Gamma_{b}f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \widehat{f}(Q_1^{\delta_1}\times Q_2^{\delta_2})
\frac{1}{\sqrt{|Q_1|}} \frac{1}{\sqrt{|Q_2|}} h_{Q_1}^{\epsilon_1+\delta_1} \otimes h_{Q_2}^{\epsilon_2+\delta_2} = \Gamma_{b}^*f.
\end{align}
Next, we have the ``mixed'' paraproducts. We index these based on the types of Haar functions acting on $f$, since the action on $b$ is the same for all of them,
namely $\widehat{b}(Q_1\times Q_2)$ -- this is the property which associates these paraproducts with product $BMO_{\bm{\mathcal{D}}}$: in a proof using duality,
one would separate out the $b$ function and be left with the biparameter square function $S_{\bm{\mathcal{D}}}$. They are:
\begin{align}
\Pi_{b;(0,1)}f &:= \sum_{Q_1\times Q_2} \widehat{b}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \left\langle f, h_{Q_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle
\frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{Q_2}^{\epsilon_2}\\
\Pi_{b;(1,0)}f &:= \sum_{Q_1\times Q_2} \widehat{b}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \left\langle f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{Q_2}^{\epsilon_2} \right\rangle
h_{Q_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} = \Pi_{b; (0,1)}^*\\
\Gamma_{b; (0,1)}f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \left\langle f, h_{Q_1}^{\delta_1} \otimes
\frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle \frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\epsilon_1+\delta_1}\otimes h_{Q_2}^{\epsilon_2} \\
\Gamma_{b; (0,1)}^*f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \widehat{f}(Q_1^{\delta_1}\times Q_2^{\epsilon_2})
\frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\epsilon_1+\delta_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
\Gamma_{b; (1,0)}f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \left\langle f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes
h_{Q_2}^{\delta_2} \right\rangle \frac{1}{\sqrt{|Q_2|}} h_{Q_1}^{\epsilon_1}\otimes h_{Q_2}^{\epsilon_2+\delta_2} \\
\Gamma_{b; (1,0)}^*f &:= \sum_{Q_1 \times Q_2} \widehat{b}(Q_1^{\epsilon_1} \times Q_2^{\epsilon_2}) \widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\delta_2})
\frac{1}{\sqrt{|Q_2|}} \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{Q_2}^{\epsilon_2+\delta_2}.
\end{align}
\begin{prop} \label{P:BMOparaprod}
If $\nu := \mu^{1/p}\lambda^{-1/p}$ for $A_p(\mathbb{R}^{\vec{n}})$ weights $\mu$ and $\lambda$, and $\mathsf{P}_{\mathsf{b}}$ denotes any one of the
nine paraproducts defined above, then
\begin{equation} \label{E:BMOparaprod}
\left\| \mathsf{P}_{\mathsf{b}} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)},
\end{equation}
where $\|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)}$ denotes the norm of $b$ in the dyadic weighted product $BMO_{\bm{\mathcal{D}}}(\nu)$ space on $\mathbb{R}^{\vec{n}}$.
\end{prop}
\begin{proof}
We first outline the general strategy we use to prove \eqref{E:BMOparaprod}.
From \eqref{E:ApDuality}, it suffices to take $f \in L^p(\mu)$ and $g \in L^{p'}(\lambda')$ and show that:
$$ |\left\langle \mathsf{P}_{\mathsf{b}} f, g\right\rangle | \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')}. $$
\begin{enumerate}[1.]
\item Write $\left\langle \mathsf{P}_{\mathsf{b}} f, g\right\rangle = \left\langle b, \phi\right\rangle $, where $\phi$ depends on $f$ and $g$. By \eqref{E:H1BMOprod}, $|\left\langle \mathsf{P}_{\mathsf{b}} f, g\right\rangle | \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|S_{\bm{\mathcal{D}}}\phi\|_{L^1(\nu)}$.
\item Show that $S_{\bm{\mathcal{D}}}\phi \lesssim (\mathcal{O}_1 f) (\mathcal{O}_2 g)$, where $\mathcal{O}_1$ and $\mathcal{O}_2$ are operators satisfying a \textit{one-weight bound}
$L^p(w) \rightarrow L^p(w)$, for all $w \in A_p(\mathbb{R}^{\vec{n}})$ -- these operators will usually be a combination of maximal and square functions.
\item Then the $L^1(\nu)$-norm of $S_{\bm{\mathcal{D}}}\phi$ can be separated into the $L^p(\mu)$ and $L^{p'}(\lambda')$ norms of these operators $\mathcal{O}_i$, by a simple application of
H\"{o}lder's inequality:
$$ \|S_{\bm{\mathcal{D}}}\phi\|_{L^1(\nu)} \lesssim \|\mathcal{O}_1f\|_{L^p(\mu)} \|\mathcal{O}_2g\|_{L^{p'}(\lambda')}
\lesssim \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')},$$
and the result follows.
\end{enumerate}
Remark also that we will not have to treat the adjoints $\mathsf{P}_{\mathsf{b}}^*$ separately: interchanging the roles of $f$ and $g$ in the proof strategy above will show that $\mathsf{P}_{\mathsf{b}}$ is also bounded
$L^{p'}(\lambda') \rightarrow L^{p'}(\mu')$, which means that $\mathsf{P}_{\mathsf{b}}^*$ is bounded $L^p(\mu) \rightarrow L^p(\lambda)$.
Let us begin with $\Pi_b f$. We write
$$ \left\langle \Pi_b f, g\right\rangle = \left\langle b, \phi\right\rangle \text{, where } \phi :=
\sum_{Q_1\times Q_2} \left\langle f\right\rangle _{Q_1\times Q_2} \widehat{g}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) h_{Q_1}^{\epsilon_1}\otimes h_{Q_2}^{\epsilon_2}. $$
Then
$$ (S_{\bm{\mathcal{D}}}\phi)^2 \leq \sum_{Q_1\times Q_2} \left\langle |f|\right\rangle ^2_{Q_1\times Q_2} |\widehat{g}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2})|^2
\frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \leq (M_{S}f)^2 \cdot (S_{\bm{\mathcal{D}}}g)^2,$$
so
$$ |\left\langle \Pi_b f, g\right\rangle | \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|M_S f\|_{L^p(\mu)} \|S_{\bm{\mathcal{D}}}g\|_{L^{p'}(\lambda')} \lesssim
\|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')}. $$
Note that if we take instead $f \in L^{p'}(\lambda')$ and $g \in L^p(\mu)$, we have
$$ |\left\langle \Pi_bf, g\right\rangle | \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|M_S f\|_{L^{p'}(\lambda')} \|S_{\bm{\mathcal{D}}}g\|_{L^p(\mu)} \lesssim
\|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|f\|_{L^{p'}(\lambda')} \|g\|_{L^p(\mu)}, $$
proving that $\|\Pi_b : L^{p'}(\lambda') \rightarrow L^{p'}(\mu')\| = \| \Pi_b^* : L^p(\mu) \rightarrow L^p(\lambda)\| \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)}$.
For $\Gamma_b$:
$$ \left\langle \Gamma_b f, g\right\rangle = \left\langle b, \phi\right\rangle \text{, where } \phi :=
\sum_{Q_1\times Q_2} \widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \widehat{g}(Q_1^{\delta_1}\times Q_2^{\delta_2})
\frac{1}{\sqrt{|Q_1|}} \frac{1}{\sqrt{|Q_2|}} h_{Q_1}^{\epsilon_1+\delta_1} \otimes h_{Q_2}^{\epsilon_2+\delta_2},$$
from which it easily follows that $S_{\bm{\mathcal{D}}}\phi \lesssim S_{\bm{\mathcal{D}}}f \cdot S_{\bm{\mathcal{D}}}g$.
Let us now look at $\Pi_{b; (0,1)}$. In this case:
$$ \phi := \sum_{Q_1\times Q_2} \left\langle f, h_{Q_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\right\rangle
\left\langle g, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\epsilon_2}\right\rangle h_{Q_1}^{\epsilon_1}\otimes h_{Q_2}^{\epsilon_2}.$$
Then
\begin{align}
(S_{\bm{\mathcal{D}}}\phi)^2 &= \sum_{Q_1\times Q_2} \left\langle f, h_{Q_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\right\rangle ^2
\left\langle g, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\epsilon_2}\right\rangle ^2 \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
&= \sum_{Q_1\times Q_2} \left\langle H_{Q_1}^{\epsilon_1}f\right\rangle _{Q_2}^2 \left\langle H_{Q_2}^{\epsilon_2}g\right\rangle _{Q_1}^2 \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
&\leq \Bigg( \sum_{Q_1} \big( M_{\mathcal{D}_2} H_{Q_1}^{\epsilon_1}f \big)^2(x_2) \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|}\Bigg)
\Bigg( \sum_{Q_2} \big( M_{\mathcal{D}_1} H_{Q_2}^{\epsilon_2}g \big)^2(x_1) \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|}\Bigg)\\
& = [SM]^2f \cdot [MS]^2g,
\end{align}
where $[SM]$ and $[MS]$ are the mixed square-maximal operators in Section \ref{Ss:MixedSquare}.
Boundedness of $\Pi_{b;(0,1)}$ then follows from Proposition \ref{P:Mixed2pSF}.
By the usual duality trick, the same holds for $\Pi_{b; (1,0)}$.
Finally, for $\Gamma_{b; (0,1)}$:
$$ \phi = \sum_{Q_1\times Q_2} \left\langle H_{Q_1}^{\delta_1}f\right\rangle _{Q_2} \frac{1}{\sqrt{|Q_1|}} \widehat{g}(Q_1^{\epsilon_1+\delta_1}\times Q_2^{\epsilon_2}) h_{Q_1}^{\epsilon_1}\otimes
h_{Q_2}^{\epsilon_2}, $$
so $S_{\bm{\mathcal{D}}}\phi \lesssim [SM]f \cdot S_{\bm{\mathcal{D}}}g$. Note that $\Gamma_{b; (1,0)}$ works the same way, except we bound $S_{\bm{\mathcal{D}}}\phi$ by $[MS]f \cdot S_{\bm{\mathcal{D}}}g$, and the remaining two paraproducts follow by duality.
\end{proof}
\subsection{Little bmo Paraproducts} \label{Ss:bmoPara}
Next, we have the six paraproducts associated with little bmo. We denote these by the lowercase Greek letters corresponding to the previous paraproducts, and
index them based on the Haar functions acting on $b$ -- in this case, separating out the $b$ function will yield
one of the square functions $S_{\mathcal{D}_i}$ in one of the variables:
\begin{align*}
\pi_{b; (0,1)}f &:= \sum_{Q_1\times Q_2} \left\langle b, h_{Q_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle
\left\langle f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{Q_2}^{\epsilon_2} \right\rangle h_{Q_1}^{\epsilon_1} \otimes h_{Q_2}^{\epsilon_2} \\
\pi_{b; (0,1)}^*f &:= \sum_{Q_1\times Q_2} \left\langle b, h_{Q_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle
\widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{Q_2}^{\epsilon_2}\\
\pi_{b; (1,0)}f &:= \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\epsilon_2} \right\rangle
\left\langle f, h_{Q_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle h_{Q_1}^{\epsilon_1}\otimes h_{Q_2}^{\epsilon_2}\\
\pi_{b; (1,0)}^*f &:= \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\epsilon_2} \right\rangle
\widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) h_{Q_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
\gamma_{b; (0,1)} f &:= \sum_{Q_1\times Q_2} \left\langle b, h_{Q_1}^{\delta_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \right\rangle
\widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\epsilon_1+\delta_1} \otimes h_{Q_2}^{\epsilon_2} = \gamma_{b; (0,1)}^*f \\
\gamma_{b; (1,0)} f &:= \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\delta_2} \right\rangle
\widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \frac{1}{\sqrt{|Q_2|}} h_{Q_1}^{\epsilon_1}\otimes h_{Q_2}^{\epsilon_2+\delta_2} = \gamma_{b; (1,0)}^*f.
\end{align*}
\begin{prop} \label{P:bmoparaprod}
If $\nu := \mu^{1/p}\lambda^{-1/p}$ for $A_p(\mathbb{R}^{\vec{n}})$ weights $\mu$ and $\lambda$, and $\mathsf{p}_{\mathsf{b}}$ denotes any one of the
six paraproducts defined above, then
\begin{equation} \label{E:bmoparaprod}
\left\| \mathsf{p}_{\mathsf{b}} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)},
\end{equation}
where $\|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}$ denotes the norm of $b$ in the dyadic weighted little $bmo_{\bm{\mathcal{D}}}(\nu)$ space on $\mathbb{R}^{\vec{n}}$.
\end{prop}
\begin{proof}
The proof strategy is the same as that of the product BMO paraproducts, with the modification that we use one of the $S_{\mathcal{D}_i}$ square functions
and Corollary \ref{C:A2bmo2}. For instance, in the case of $\pi_{b;(0,1)}$ we write
$$ \left\langle \pi_{b; (0,1)}f, g\right\rangle = \left\langle b, \phi\right\rangle \text{, where }
\phi:= \sum_{Q_1\times Q_2} \left\langle H_{Q_2}^{\epsilon_2}f \right\rangle _{Q_1} \widehat{g}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) h_{Q_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}. $$
Then
\begin{align}
(S_{\mathcal{D}_1}\phi)^2 &\leq \sum_{Q_1} \bigg( \sum_{Q_2} \left\langle |H_{Q_2}^{\epsilon_2}f|\right\rangle _{Q_1}^2 \mathbbm{1}_{Q_1}(x_1) \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \bigg)
\bigg( \sum_{Q_2} |\widehat{g}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2})|^2 \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \bigg)\frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|}\\
&\leq \bigg( \sum_{Q_2} M_{\mathcal{D}_1}^2(H_{Q_2}^{\epsilon_2}f)(x_1) \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|} \bigg)
\bigg( \sum_{Q_1} \sum_{Q_2} |\widehat{g}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2})|^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|} \otimes \frac{\mathbbm{1}_{Q_2}(x_2)}{|Q_2|}\bigg)\\
&= [MS]^2f \cdot S_{\bm{\mathcal{D}}}^2g,
\end{align}
and so
$$ |\left\langle \pi_{b;(0,1)}f, g\right\rangle | \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|S_{\mathcal{D}_1}\phi\|_{L^1(\nu)}
\lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')}.$$
The proof for $\pi_{b;(1,0)}$ is symmetrical -- we take $S_{\mathcal{D}_2}\phi$, which will be bounded by $[SM]f \cdot S_{\bm{\mathcal{D}}}g$. The adjoint paraproducts
$\pi_{b; (0,1)}^*$ and $\pi_{b; (1,0)}^*$ follow again by duality. Finally,
for $\gamma_{b; (0,1)}$:
$$ \phi := \sum_{Q_1\times Q_2} \widehat{f}(Q_1^{\epsilon_1}\times Q_2^{\epsilon_2}) \frac{1}{\sqrt{|Q_1|}} \widehat{g}(Q_1^{\epsilon_1+\delta_1}\times Q_2^{\epsilon_2})
h_{Q_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|},$$
from which it easily follows that $S_{\mathcal{D}_1}\phi \leq S_{\bm{\mathcal{D}}}f \cdot S_{\bm{\mathcal{D}}}g$. The proof for $\gamma_{b; (1,0)}$ is symmetrical.
\end{proof}
\section{Commutators with Journ\'{e} Operators}
\label{S:UB}
\subsection{Definition of Journ\'{e} Operators} \label{Ss:JourneDef}
We begin with the definition of biparameter Calder\'{o}n-Zygmund operators, or Journ\'{e} operators, on $\mathbb{R}^{\vec{n}} := \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$,
as outlined in \cite{MRep}. As shown later in \cite{Herran}, these conditions are
equivalent to the original definition of Journ\'{e} \cite{Journe}.
\vspace{0.05in}
\noindent I. \underline{Structural Assumptions:} Given $f = f_1\otimes f_2$ and $g = g_1\otimes g_2$, where $f_i, g_i : \mathbb{R}^{n_i} \rightarrow \mathbb{C}$ satisfy $spt(f_i) \cap spt(g_i) = \emptyset$ for $i = 1, 2$, we assume the kernel representation
$$ \left\langle Tf, g\right\rangle = \int_{\mathbb{R}^{\vec{n}}} \int_{\mathbb{R}^{\vec{n}}} K(x, y) f(y) g(x) \,dx\,dy. $$
The kernel $K : \mathbb{R}^{\vec{n}} \times \mathbb{R}^{\vec{n}} \setminus \{ (x, y) \in \mathbb{R}^{\vec{n}}\times\mathbb{R}^{\vec{n}}: x_1=y_1 \text{ or } x_2=y_2 \} \rightarrow \mathbb{C}$ is assumed to satisfy:
\begin{enumerate}[1.]
\item Size condition:
$$ |K(x, y)| \leq C \frac{1}{|x_1-y_1|^{n_1}} \frac{1}{|x_2-y_2|^{n_2}}. $$
\item H\"{o}lder conditions:
\begin{enumerate}[2a.]
\item If $|y_1 - y_1'| \leq \frac{1}{2}|x_1 - y_1|$ and $|y_2 - y_2'| \leq \frac{1}{2}|x_2-y_2|$:
$$ \left| K(x,y) - K\big( x, (y_1, y_2') \big) - K\big( x, (y_1', y_2) \big) + K(x, y') \right|
\leq C \frac{|y_1-y_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}} \frac{|y_2-y_2'|^{\delta}}{|x_2-y_2|^{n_2+\delta}}.$$
\item If $|x_1 - x_1'| \leq \frac{1}{2}|x_1 - y_1|$ and $|x_2 - x_2'| \leq \frac{1}{2}|x_2-y_2|$:
$$ \left| K(x,y) - K\big( (x_1, x_2'), y \big) - K\big( (x_1', x_2), y \big) + K(x', y) \right|
\leq C \frac{|x_1-x_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}} \frac{|x_2-x_2'|^{\delta}}{|x_2-y_2|^{n_2+\delta}}.$$
\item If $|y_1 - y_1'| \leq \frac{1}{2}|x_1 - y_1|$ and $|x_2 - x_2'| \leq \frac{1}{2}|x_2-y_2|$:
$$ \left| K(x,y) - K\big( (x_1, x_2'), y \big) - K\big( x, (y_1', y_2) \big) + K\big( (x_1, x_2'), (y_1', y_2) \big) \right|
\leq C \frac{|y_1-y_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}} \frac{|x_2-x_2'|^{\delta}}{|x_2-y_2|^{n_2+\delta}}.$$
\item If $|x_1 - x_1'| \leq \frac{1}{2}|x_1 - y_1|$ and $|y_2 - y_2'| \leq \frac{1}{2}|x_2-y_2|$:
$$ \left| K(x,y) - K\big( x, (y_1, y_2') \big) - K\big( (x_1',x_2), y \big) + K\big( (x_1', x_2), (y_1, y_2') \big) \right|
\leq C \frac{|x_1-x_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}} \frac{|y_2-y_2'|^{\delta}}{|x_2-y_2|^{n_2+\delta}}.$$
\end{enumerate}
\item Mixed size and H\"{o}lder conditions:
\begin{align}
& \text{3a. If } |x_1 - x_1'| \leq \frac{1}{2}|x_1 - y_1| \text{, then }
\left| K(x,y) - K\big( (x_1', x_2), y \big) \right| \leq C \frac{|x_1 - x_1'|^{\delta}}{|x_1 - y_1|^{n_1+\delta}} \frac{1}{|x_2 - y_2|^{n_2}}.\\
& \text{3b. If } |y_1 - y_1'| \leq \frac{1}{2}|x_1 - y_1| \text{, then }
\left| K(x,y) - K\big( x, (y_1', y_2) \big) \right| \leq C \frac{|y_1 - y_1'|^{\delta}}{|x_1 - y_1|^{n_1+\delta}} \frac{1}{|x_2 - y_2|^{n_2}}.\\
& \text{3c. If } |x_2 - x_2'| \leq \frac{1}{2} |x_2 - y_2| \text{, then }
\left| K(x, y) - K\big( (x_1, x_2'), y \big) \right| \leq C \frac{1}{|x_1 - y_1|^{n_1}} \frac{|x_2 - x_2'|^{\delta}}{|x_2 - y_2|^{n_2+\delta}}.\\
& \text{3d. If } |y_2 - y_2'| \leq \frac{1}{2} |x_2 - y_2| \text{, then }
\left| K(x, y) - K\big( x, (y_1, y_2') \big) \right| \leq C \frac{1}{|x_1 - y_1|^{n_1}} \frac{|y_2 - y_2'|^{\delta}}{|x_2 - y_2|^{n_2+\delta}}.
\end{align}
\item Calder\'{o}n-Zygmund structure in $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$ separately: If $f = f_1 \otimes f_2$ and $g = g_1 \otimes g_2$ with
$spt(f_1) \cap spt(g_1) = \emptyset$, we assume the kernel representation:
$$ \left\langle Tf, g\right\rangle = \int_{\mathbb{R}^{n_1}} \int_{\mathbb{R}^{n_1}} K_{f_2, g_2}(x_1, y_1) f_1(y_1) g_1(x_1) \,dx_1\,dy_1, $$
where the kernel $K_{f_2, g_2} : \mathbb{R}^{n_1} \times \mathbb{R}^{n_1} \setminus \{ (x_1, y_1) \in \mathbb{R}^{n_1}\times\mathbb{R}^{n_1} : x_1=y_1 \} \rightarrow \mathbb{C}$
satisfies the following size condition:
$$ |K_{f_2, g_2} (x_1, y_1)| \leq C(f_2, g_2) \frac{1}{|x_1 - y_1|^{n_1}}, $$
and H\"{o}lder conditions:
$$\text{If } |x_1 - x_1'| \leq \frac{1}{2}|x_1-y_1| \text{, then }
\left| K_{f_2, g_2} (x_1, y_1) - K_{f_2, g_2}(x_1', y_1) \right| \leq C(f_2, g_2) \frac{|x_1 - x_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}},$$
$$\text{If } |y_1 - y_1'| \leq \frac{1}{2}|x_1-y_1| \text{, then }
\left| K_{f_2, g_2} (x_1, y_1) - K_{f_2, g_2}(x_1, y_1') \right| \leq C(f_2, g_2) \frac{|y_1 - y_1'|^{\delta}}{|x_1-y_1|^{n_1+\delta}}.$$
We only assume the above representation and a certain control of $C(f_2, g_2)$ on the diagonal, that is:
$$ C(\mathbbm{1}_{Q_2}, \mathbbm{1}_{Q_2}) + C(\mathbbm{1}_{Q_2}, u_{Q_2}) + C(u_{Q_2}, \mathbbm{1}_{Q_2}) \leq C |Q_2|,$$
for all cubes $Q_2\subset\mathbb{R}^{n_2}$ and all ``$Q_2$-adapted zero-mean'' functions $u_{Q_2}$ -- that is, $spt(u_{Q_2}) \subset Q_2$,
$|u_{Q_2}|\leq 1$, and $\int u_{Q_2} = 0$.
We assume the symmetrical representation with kernel $K_{f_1, g_1}$ in the case $spt(f_2)\cap spt(g_2) = \emptyset$.
\end{enumerate}
\vspace{0.05in}
\noindent II. \underline{Boundedness and Cancellation Assumptions:}
\begin{enumerate}[1.]
\item Assume $T1, T^*1, T_1(1)$ and $T_1^*(1)$ are in product $BMO(\mathbb{R}^{\vec{n}})$, where $T_1$ is the partial adjoint of $T$, defined by
$\left\langle T_1(f_1\otimes f_2), g_1\otimes g_2\right\rangle = \left\langle T(g_1\otimes f_2), f_1\otimes g_2\right\rangle $.
\item Assume $|\left\langle T(\mathbbm{1}_{Q_1} \otimes \mathbbm{1}_{Q_2}), \mathbbm{1}_{Q_1}\otimes\mathbbm{1}_{Q_2} \right\rangle | \leq C |Q_1| |Q_2|$, for all cubes $Q_i \subset \mathbb{R}^{n_i}$ (weak boundedness).
\item Diagonal BMO conditions: for all cubes $Q_i\subset\mathbb{R}^{n_i}$ and all zero-mean functions $a_{Q_1}$ and $b_{Q_2}$
that are $Q_1-$ and $Q_2-$ adapted, respectively, assume:
\begin{align*}
& |\left\langle T(a_{Q_1} \otimes \mathbbm{1}_{Q_2}), \mathbbm{1}_{Q_1}\otimes\mathbbm{1}_{Q_2} \right\rangle | \leq C|Q_1||Q_2|,
& |\left\langle T(\mathbbm{1}_{Q_1} \otimes \mathbbm{1}_{Q_2}), a_{Q_1}\otimes\mathbbm{1}_{Q_2} \right\rangle | \leq C|Q_1||Q_2|,\\
& |\left\langle T(\mathbbm{1}_{Q_1} \otimes b_{Q_2}), \mathbbm{1}_{Q_1}\otimes\mathbbm{1}_{Q_2} \right\rangle | \leq C|Q_1||Q_2|,
& |\left\langle T(\mathbbm{1}_{Q_1} \otimes \mathbbm{1}_{Q_2}), \mathbbm{1}_{Q_1}\otimes b_{Q_2} \right\rangle | \leq C|Q_1||Q_2|.
\end{align*}
\end{enumerate}
\subsection{Biparameter Dyadic Shifts and Martikainen's Representation Theorem}
\label{Ss:RepThm}
Given dyadic rectangles $\bm{\mathcal{D}} = \mathcal{D}_1 \times \mathcal{D}_2$ and pairs of non-negative integers $\vec{i} = (i_1, i_2)$ and $\vec{j} = (j_1, j_2)$,
a (cancellative) biparameter dyadic shift is an operator of the form:
\begin{equation} \label{E:CShiftDef}
\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f := \sum_{\substack{R_1\in\mathcal{D}_1\\R_2\in\mathcal{D}_2}}
\sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in (R_2)_{i_2}}}
\sum_{\substack{Q_1\in(R_1)_{j_1} \\ Q_2 \in (R_2)_{j_2} }}
a_{P_1Q_1R_1P_2Q_2R_2} \: \widehat{f}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \: h_{Q_1}^{\delta_1} \otimes h_{Q_2}^{\delta_2},
\end{equation}
where
$$ |a_{P_1Q_1R_1P_2Q_2R_2}| \leq \frac{\sqrt{|P_1||Q_1|}}{|R_1|} \frac{\sqrt{|P_2||Q_2|}}{|R_2|} = 2^{\frac{-n_1}{2}(i_1+j_1)}
2^{\frac{-n_2}{2}(i_2+j_2)}. $$
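The last equality is simply dyadic scaling: writing $(R)_i$ for the collection of dyadic subcubes of $R$ of side length $2^{-i} \mathfrak{l} (R)$, a cube $P_1 \in (R_1)_{i_1}$ has volume $|P_1| = 2^{-i_1 n_1} |R_1|$, and likewise $|Q_1| = 2^{-j_1 n_1} |R_1|$, so
\[ \frac{\sqrt{|P_1| |Q_1|}}{|R_1|} = 2^{\frac{-n_1}{2} (i_1 + j_1)}, \]
and the same computation applies in the second variable.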
We suppress for now the signatures of the Haar functions, and assume summation over them is understood. We use the simplified notation
$$ \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f := \sum_{\mathbf{R}, \mathbf{P}, \mathbf{Q}}^{\vec{i}, \vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \: \widehat{f}(P_1\times P_2) \: h_{Q_1}\otimes h_{Q_2}$$
for the summation above.
First note that
\begin{align}
S_{\bm{\mathcal{D}}}^2 (\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}}f) &= \sum_{R_1\times R_2} \sum_{\substack{Q_1\in(R_1)_{j_1}\\ Q_2\in(R_2)_{j_2}}}
\bigg( \sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in(R_2)_{i_2}}}
a_{P_1Q_1R_1P_2Q_2R_2} \: \widehat{f}(P_1\times P_2)
\bigg)^2 \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|} \\
& \lesssim 2^{-n_1(i_1+j_1)} 2^{-n_2(i_2+j_2)} \bigg( S_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}} f \bigg)^2,
\end{align}
where $S_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}$ is the shifted biparameter square function in \eqref{E:ShiftedDSF2pDef}. Then, by \eqref{E:ShiftedDSF2p}:
\begin{equation} \label{E:2pDShift1wt}
\|\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} f \|_{L^p(w)} \lesssim 2^{\frac{-n_1}{2}(i_1+j_1)}
2^{\frac{-n_2}{2}(i_2+j_2)} \|S_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}} f \|_{L^p(w)} \lesssim \|f\|_{L^p(w)},
\end{equation}
for all $w \in A_p(\mathbb{R}^{\vec{n}})$.
Next, we state Martikainen's Representation Theorem \cite{MRep}:
\begin{thm}[Martikainen]\label{T:MRep}
For a biparameter singular integral operator $T$ as defined in Section \ref{Ss:JourneDef},
there holds for some biparameter shifts $\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}}$ that:
$$ \left\langle Tf, g\right\rangle = C_T \mathbb{E}_{\omega_1} \mathbb{E}_{\omega_2}
\sum_{\vec{i}, \vec{j} \in \mathbb{Z}^2_{+}} 2^{-\max(i_1, j_1) \delta/2} 2^{-\max(i_2, j_2) \delta/2}
\left\langle \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}} f, g\right\rangle ,$$
where non-cancellative shifts may only appear if $(i_1, j_1) = (0,0)$ or $(i_2, j_2)=(0,0)$.
\end{thm}
In light of this theorem, in order to prove Theorem \ref{T:UB}, it suffices to prove the two-weight bound for commutators
$[b, \mathbbm{S}_{\bm{\mathcal{D}}}]$ with the dyadic shifts, with the requirements that the bounds be \textit{independent} of the choice of $\bm{\mathcal{D}}$
and that they depend on $\vec{i}$ and $\vec{j}$ at most \textit{polynomially}. We first look at the case of cancellative shifts,
and then treat the non-cancellative case in Section \ref{Ss:NCShifts}.
\subsection{Cancellative Case} \label{Ss:CShifts}
\begin{thm} \label{T:CShifts}
Let $\bm{\mathcal{D}} = \mathcal{D}_1\times\mathcal{D}_2$ be dyadic rectangles in $\mathbb{R}^{\vec{n}} = \mathbb{R}^{n_1}\otimes\mathbb{R}^{n_2}$ and
$\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}$ be a cancellative dyadic shift as defined in \eqref{E:CShiftDef}. If $\mu, \lambda \in A_p(\mathbb{R}^{\vec{n}})$, $1 < p < \infty$,
and $\nu = \mu^{1/p}\lambda^{-1/p}$, then
\begin{equation}
\left\| [b, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] : L^p(\mu) \rightarrow L^p(\lambda)\right\| \lesssim
\bigg( (1+\max(i_1, j_1))(1+\max(i_2, j_2)) \bigg) \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)},
\end{equation}
where $\|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}$ denotes the norm of $b$ in the dyadic weighted little $bmo(\nu)$ space on $\mathbb{R}^{\vec{n}}$.
\end{thm}
\begin{proof}
We may express the product of two functions $b$ and $f$ on $\mathbb{R}^{\vec{n}}$ as
\begin{equation} \label{E:2pParaprodDecomp}
bf = \sum \mathsf{P}_{\mathsf{b}} f + \sum \mathsf{p}_{\mathsf{b}} f + \Pi_f b,
\end{equation}
where $\mathsf{P}_{\mathsf{b}}$ runs through the nine paraproducts associated with $BMO_{\bm{\mathcal{D}}}(\nu)$ in Section \ref{Ss:ProdPara}, and $\mathsf{p}_{\mathsf{b}}$ runs through the six paraproducts associated with $bmo_{\bm{\mathcal{D}}}(\nu)$ in Section \ref{Ss:bmoPara}. Then
$$ [b, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f = \sum \: [\mathsf{P}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f + \sum \: [\mathsf{p}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] f + \mathcal{R}_{\vec{i},\vec{j}}f,$$
where
$$ \mathcal{R}_{\vec{i},\vec{j}} f := \Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}f} b - \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} \Pi_f b. $$
From the two-weight inequalities for the paraproducts in Propositions \ref{P:BMOparaprod} and \ref{P:bmoparaprod},
and the one-weight inequality for the shifts in \eqref{E:2pDShift1wt},
$$ \left\| \sum \: [\mathsf{P}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] + \sum \: [\mathsf{p}_{\mathsf{b}}, \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}}] : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)},$$
so we are left with bounding the remainder term $\mathcal{R}_{\vec{i}, \vec{j}}$. We claim that:
\begin{equation} \label{E:CSRem}
\left\| \mathcal{R}_{\vec{i},\vec{j}} : L^p(\mu) \rightarrow L^p(\lambda)\right\| \lesssim
\bigg( (1+\max(i_1, j_1))(1+\max(i_2, j_2)) \bigg) \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)},
\end{equation}
from which the result follows.
A straightforward calculation shows that
$$ \mathcal{R}_{\vec{i},\vec{j}} f = \sum_{\mathbf{R},\mathbf{P},\mathbf{Q}}^{\vec{i}, \vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1 \times P_2)
\bigg( \left\langle b\right\rangle _{Q_1\times Q_2} - \left\langle b\right\rangle _{P_1\times P_2} \bigg) h_{Q_1} \otimes h_{Q_2}.$$
We write this as a sum $\mathcal{R}_{\vec{i},\vec{j}} f = \mathcal{R}^1_{\vec{i},\vec{j}} f + \mathcal{R}^2_{\vec{i},\vec{j}} f$ by
splitting the term in parentheses as:
$$ \left\langle b\right\rangle _{Q_1\times Q_2} - \left\langle b\right\rangle _{P_1\times P_2} = \bigg( \left\langle b\right\rangle _{Q_1\times Q_2} - \left\langle b\right\rangle _{R_1\times R_2} \bigg)
+ \bigg( \left\langle b\right\rangle _{R_1\times R_2} - \left\langle b\right\rangle _{P_1\times P_2} \bigg).$$
For the first term, we may apply the biparameter version of \eqref{E:1pavgdiff}, where we keep in mind that $R_1 = Q_1^{(j_1)}$ and $R_2 = Q_2^{(j_2)}$:
\begin{align}
\left\langle b\right\rangle _{Q_1\times Q_2} - \left\langle b\right\rangle _{R_1\times R_2} = &
\sum_{\substack{1\leq k_1\leq j_1 \\ 1\leq k_2\leq j_2}} \widehat{b}(Q_1^{(k_1)} \times Q_2^{(k_2)}) h_{Q_1^{(k_1)}}(Q_1) h_{Q_2^{(k_2)}}(Q_2)\\
& + \sum_{1\leq k_1\leq j_1} \left\langle b, h_{Q_1^{(k_1)}} \otimes \frac{\mathbbm{1}_{R_2}}{|R_2|} \right\rangle h_{Q_1^{(k_1)}}(Q_1)
+ \sum_{1\leq k_2\leq j_2} \left\langle b, \frac{\mathbbm{1}_{R_1}}{|R_1|}\otimes h_{Q_2^{(k_2)}} \right\rangle h_{Q_2^{(k_2)}}(Q_2).
\end{align}
Then, we may write the operator $\mathcal{R}^1_{\vec{i},\vec{j}} $ as
\begin{equation} \label{temp1}
\mathcal{R}^1_{\vec{i},\vec{j}} f = \sum_{\substack{1\leq k_1\leq j_1\\ 1\leq k_2\leq j_2}} A_{k_1,k_2}f + \sum_{1\leq k_1\leq j_1} B_{k_1}^{(0,1)}f + \sum_{1\leq k_2\leq j_2} B_{k_2}^{(1,0)}f,
\end{equation}
where
$$ A_{k_1, k_2}f := \sum_{\mathbf{R},\mathbf{P},\mathbf{Q}}^{\vec{i},\vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1\times P_2) \widehat{b}(Q_1^{(k_1)}\times Q_2^{(k_2)})
h_{Q_1^{(k_1)}}(Q_1) h_{Q_2^{(k_2)}}(Q_2) h_{Q_1}\otimes h_{Q_2},$$
$$ B_{k_1}^{(0,1)}f := \sum_{\mathbf{R},\mathbf{P},\mathbf{Q}}^{\vec{i},\vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1\times P_2)
\left\langle b, h_{Q_1^{(k_1)}} \otimes \frac{\mathbbm{1}_{R_2}}{|R_2|} \right\rangle h_{Q_1^{(k_1)}}(Q_1) h_{Q_1}\otimes h_{Q_2}, $$
and
$$ B_{k_2}^{(1,0)}f := \sum_{\mathbf{R},\mathbf{P},\mathbf{Q}}^{\vec{i},\vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1\times P_2)
\left\langle b, \frac{\mathbbm{1}_{R_1}}{|R_1|} \otimes h_{Q_2^{(k_2)}} \right\rangle h_{Q_2^{(k_2)}}(Q_2) h_{Q_1}\otimes h_{Q_2}. $$
We show that these operators satisfy:
$$ \left\| A_{k_1,k_2} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)}\text{, for all } k_1, k_2, $$
$$ \left\| B_{k_1}^{(0,1)} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \text{, for all } k_1,
\text{ and } \left\| B_{k_2}^{(1,0)} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \text{, for all } k_2.$$
Going back to the decomposition in \eqref{temp1}, these inequalities will give that
$$ \left\| \mathcal{R}^1_{\vec{i},\vec{j}} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim (j_1j_2 + j_1 + j_2)\|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
A symmetrical proof for the term $\mathcal{R}^2_{\vec{i},\vec{j}}$ coming from $(\left\langle b\right\rangle _{R_1\times R_2} - \left\langle b\right\rangle _{P_1\times P_2})$ will show that
$$ \left\| \mathcal{R}^2_{\vec{i},\vec{j}} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim (i_1i_2 + i_1 + i_2)\|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
Putting these estimates together, we obtain the desired result
$$ \left\| \mathcal{R}_{\vec{i},\vec{j}} : L^p(\mu) \rightarrow L^p(\lambda)\right\| \lesssim
(i_1 + i_2 + i_1i_2 + j_1 + j_2 + j_1j_2) \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \lesssim
(1+\max(i_1, j_1))(1+\max(i_2, j_2)) \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}. $$
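For the last inequality, remark that each term on the left is dominated by a term in the expansion of the product: since $i_1, j_1 \leq \max(i_1, j_1)$ and $i_2, j_2 \leq \max(i_2, j_2)$,
$$ i_1 + i_2 + i_1 i_2 + j_1 + j_2 + j_1 j_2 \leq 2\big( \max(i_1, j_1) + \max(i_2, j_2) + \max(i_1, j_1)\max(i_2, j_2) \big) \leq 2(1+\max(i_1, j_1))(1+\max(i_2, j_2)). $$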
Remark that we allow one of the situations $(i_1, i_2) = (0,0)$ or $(j_1, j_2) = (0,0)$, but not both; in the first case the term $\mathcal{R}^2_{\vec{i},\vec{j}} f$ vanishes, and in the second the term $\mathcal{R}^1_{\vec{i},\vec{j}} f$ vanishes.
Let us now look at the estimate for $A_{k_1, k_2}$. Taking again $f \in L^p(\mu)$ and $g \in L^{p'}(\lambda')$, we write
$\left\langle A_{k_1, k_2}f, g\right\rangle = \left\langle b, \phi\right\rangle $, where
\begin{align}
\phi &= \sum_{\mathbf{R},\mathbf{P},\mathbf{Q}}^{\vec{i},\vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1\times P_2) h_{Q_1^{(k_1)}}(Q_1) h_{Q_2^{(k_2)}}(Q_2)
\widehat{g}(Q_1\times Q_2) h_{Q_1^{(k_1)}}\otimes h_{Q_2^{(k_2)}}\\
&= \sum_{R_1\times R_2} \sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in(R_2)_{i_2}}}
\sum_{\substack{N_1\in (R_1)_{j_1-k_1}\\ N_2\in(R_2)_{j_2-k_2}}} \widehat{f}(P_1\times P_2)
\bigg( \sum_{\substack{Q_1\in (N_1)_{k_1} \\ Q_2\in (N_2)_{k_2}}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{g}(Q_1\times Q_2) h_{N_1}(Q_1) h_{N_2}(Q_2) \bigg)
h_{N_1}\otimes h_{N_2}.
\end{align}
Then
\begin{align*}
S_{\bm{\mathcal{D}}}^2\phi &\lesssim \sum_{N_1\times N_2} \bigg(
\sum_{\substack{P_1\in (N_1^{(j_1-k_1)})_{i_1} \\ P_2\in (N_2^{(j_2-k_2)})_{i_2} }} |\widehat{f}(P_1\times P_2)|
\sum_{\substack{ Q_1\in (N_1)_{k_1} \\ Q_2\in (N_2)_{k_2} }} |a_{\mathbf{P}\mathbf{Q}\mathbf{R}}| |\widehat{g}(Q_1\times Q_2)| \frac{1}{\sqrt{|N_1|}} \frac{1}{\sqrt{|N_2|}}
\bigg)^2 \frac{\mathbbm{1}_{N_1}\otimes \mathbbm{1}_{N_2}}{|N_1| |N_2|} \\
&\lesssim 2^{-n_1(i_1+j_1)} 2^{-n_2(i_2+j_2)} \sum_{N_1\times N_2}
\bigg( \sum_{\substack{P_1\in (N_1^{(j_1-k_1)})_{i_1} \\ P_2\in (N_2^{(j_2-k_2)})_{i_2} }} |\widehat{f}(P_1\times P_2)| 2^{n_1k_1/2}2^{n_2k_2/2}
\left\langle |g|\right\rangle _{N_1\times N_2}
\bigg)^2 \frac{\mathbbm{1}_{N_1}\otimes \mathbbm{1}_{N_2}}{|N_1| |N_2|}\\
&\lesssim 2^{-n_1(i_1+j_1-k_1)} 2^{-n_2(i_2+j_2-k_2)} (M_S g)^2
\sum_{R_1\times R_2} \bigg( \sum_{\substack{ P_1\in (R_1)_{i_1}\\ P_2\in (R_2)_{i_2} }} |\widehat{f}(P_1\times P_2)| \bigg)^2
\sum_{\substack{ N_1\in (R_1)_{j_1-k_1} \\ N_2\in (R_2)_{j_2-k_2} }} \frac{\mathbbm{1}_{N_1}\otimes\mathbbm{1}_{N_2}}{|N_1||N_2|}\\
&= 2^{-n_1(i_1+j_1-k_1)} 2^{-n_2(i_2+j_2-k_2)} (M_S g)^2 \bigg( S_{\bm{\mathcal{D}}}^{(i_1, i_2), (j_1-k_1, j_2-k_2)} f \bigg)^2,
\end{align*}
where the last operator is the shifted square function in \eqref{E:ShiftedDSF2pDef}. Then, from \eqref{E:ShiftedDSF2p}:
\begin{align}
\left| \left\langle A_{k_1, k_2} f, g\right\rangle \right| & \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|S_{\bm{\mathcal{D}}}\phi\|_{L^1(\nu)}\\
&\lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} 2^{\frac{-n_1}{2}(i_1+j_1-k_1)} 2^{\frac{-n_2}{2}(i_2+j_2-k_2)} \|M_Sg\|_{L^{p'}(\lambda')} \|S_{\bm{\mathcal{D}}}^{(i_1, i_2), (j_1-k_1, j_2-k_2)} f\|_{L^p(\mu)}\\
&\lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|g\|_{L^{p'}(\lambda')} \|f\|_{L^p(\mu)}.
\end{align}
Finally, we look at $B_{k_1}^{(0,1)}$, with the proof for $B_{k_2}^{(1,0)}$ being symmetrical. We write again $\left\langle B_{k_1}^{(0,1)}f, g\right\rangle = \left\langle b, \phi\right\rangle $, where
$$ \phi = \sum_{\mathbf{R}, \mathbf{P}, \mathbf{Q}}^{\vec{i}, \vec{j}} a_{\mathbf{P}\mathbf{Q}\mathbf{R}} \widehat{f}(P_1\times P_2) h_{Q_1^{(k_1)}}(Q_1) \widehat{g}(Q_1\times Q_2) h_{Q_1^{(k_1)}}\otimes
\frac{\mathbbm{1}_{R_2}}{|R_2|}.$$
Then
\begin{equation}
S_{\mathcal{D}_1}^2\phi \lesssim 2^{-n_1(i_1+j_1)} 2^{-n_2(i_2+j_2)}
\sum_{\substack{R_1\in\mathcal{D}_1 \\ N_1\in(R_1)_{j_1-k_1}}} \frac{\mathbbm{1}_{N_1}}{|N_1|}
\bigg( \sum_{R_2\in\mathcal{D}_2} \sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in (R_2)_{i_2}}} |\widehat{f}(P_1\times P_2)|
\sum_{Q_2\in (R_2)_{j_2}} \left\langle |H_{Q_2}g|\right\rangle _{N_1} 2^{n_1k_1/2} \frac{\mathbbm{1}_{R_2}}{|R_2|} \bigg)^2,
\end{equation}
and the summation above is bounded by:
$$\bigg(\sum_{\substack{R_1\in\mathcal{D}_1 \\ N_1\in(R_1)_{j_1-k_1}}} \frac{\mathbbm{1}_{N_1}}{|N_1|}
\sum_{R_2\in\mathcal{D}_2} \big(
\sum_{\substack{P_1\in (R_1)_{i_1} \\ P_2\in (R_2)_{i_2} }}|\widehat{f}(P_1\times P_2)| \big)^2 \frac{\mathbbm{1}_{R_2}}{|R_2|}\bigg)
\bigg( \sum_{R_2\in\mathcal{D}_2} \big(\sum_{Q_2\in(R_2)_{j_2}} M_{\mathcal{D}_1}(H_{Q_2}g)\big)^2 \frac{\mathbbm{1}_{R_2}}{|R_2|} \bigg),$$
which is exactly
$$ \bigg( S_{\bm{\mathcal{D}}}^{(i_1, i_2), (j_1-k_1,0)} f\bigg)^2 \big([MS]^{j_2, 0}g\big)^2.$$
From \eqref{E:ShiftedDSF2p} and \eqref{E:Mixed2pSFShift}, we obtain $\|S_{\mathcal{D}_1}\phi\|_{L^1(\nu)} \lesssim \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')}$, and the proof is complete.
\end{proof}
\subsection{The Non-Cancellative Case} \label{Ss:NCShifts}
Following Martikainen's proof in \cite{MRep}, we are left with three types of terms to consider -- all of paraproduct type:
\begin{itemize}
\item The full standard paraproduct: $\Pi_a$ and $\Pi^*_a$,
\item The full mixed paraproducts: $\Pi_{a;(0,1)}$ and $\Pi_{a;(1,0)}$,
\end{itemize}
where, in each case, $a$ is some fixed function in \textit{unweighted} product $BMO(\mathbb{R}^{\vec{n}})$, with $\|a\|_{BMO(\mathbb{R}^{\vec{n}})} \leq 1$,
and
\begin{itemize}
\item The \textit{partial} paraproducts, defined for every $i_1, j_1 \geq 0$ as:
$$ \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1} f := \sum_{\substack{R_1\in\mathcal{D}_1\\R_2\in\mathcal{D}_2}}
\sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in (R_1)_{j_1}}} \widehat{a}_{P_1Q_1R_1}(R_2^{\delta_2})
\widehat{f}(P_1^{\epsilon_1} \times R_2^{\epsilon_2}) h_{Q_1}^{\delta_1} \otimes \frac{\mathbbm{1}_{R_2}}{|R_2|},$$
where, for every fixed $P_1$, $Q_1$, $R_1$, $a_{P_1Q_1R_1}(x_2)$ is a $BMO(\mathbb{R}^{n_2})$ function with
$$ \|a_{P_1Q_1R_1}\|_{BMO(\mathbb{R}^{n_2})} \leq \frac{\sqrt{|P_1|}\sqrt{|Q_1|}}{|R_1|} = 2^{\frac{-n_1}{2} (i_1+j_1)}, $$
and
$$\widehat{a}_{P_1Q_1R_1}(R_2^{\delta_2}) := \left\langle a_{P_1Q_1R_1}, h_{R_2}^{\delta_2}\right\rangle _{\mathbb{R}^{n_2}}
:= \int_{\mathbb{R}^{n_2}} a_{P_1Q_1R_1}(x_2) h_{R_2}^{\delta_2}(x_2)\,dx_2.$$
The symmetrical partial paraproduct $\mathbbm{S}_{\bm{\mathcal{D}}}^{i_2, j_2}$ is defined analogously.
\end{itemize}
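We remark that the normalization above is an elementary computation: $P_1 \in (R_1)_{i_1}$ gives $|P_1| = 2^{-n_1 i_1}|R_1|$ and $Q_1 \in (R_1)_{j_1}$ gives $|Q_1| = 2^{-n_1 j_1}|R_1|$, so
$$ \frac{\sqrt{|P_1|}\sqrt{|Q_1|}}{|R_1|} = 2^{-\frac{n_1 i_1}{2}}\, 2^{-\frac{n_1 j_1}{2}}\, \frac{|R_1|}{|R_1|} = 2^{-\frac{n_1}{2}(i_1+j_1)}. $$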
We treat each case separately.
\subsubsection{The full standard paraproduct}
In this case, we are looking at the commutator $[b, \Pi_a]$ where
$$ \Pi_a f := \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \left\langle f\right\rangle _{R} h_{R}, $$
and $a\in BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})$ with $\|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \leq 1$. We prove the following.
\begin{thm} \label{T:NCS-1}
Let $\mu, \lambda \in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$ and $\nu := \mu^{1/p}\lambda^{-1/p}$. Then
$$ \left\| [b, \Pi_a] : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
\end{thm}
\begin{proof}
Remark first that
$$ \Pi_a(bf) = \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \left\langle bf\right\rangle _R h_R \text{ and }
\Pi_{\Pi_af}b = \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \left\langle b\right\rangle _R \left\langle f\right\rangle _R h_R, $$
so
\begin{align}
\Pi_a(bf) - \Pi_{\Pi_a f}b &= \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \big( \left\langle bf\right\rangle _R - \left\langle b\right\rangle _R \left\langle f\right\rangle _R \big) h_R\\
& = \Pi_a \big( \sum \mathsf{P}_{\mathsf{b}} f + \sum \mathsf{p}_{\mathsf{b}} f + \Pi_f b \big) - \Pi_{\Pi_a f}b,
\end{align}
where the last equality was obtained by simply expanding $bf$ into paraproducts.
Then
$$ \Pi_{\Pi_a f}b - \Pi_a \Pi_f b = \sum \Pi_a \mathsf{P}_{\mathsf{b}} f + \sum \Pi_a \mathsf{p}_{\mathsf{b}} f -
\sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \big( \left\langle bf\right\rangle _R - \left\langle b\right\rangle _R \left\langle f\right\rangle _R \big) h_R.$$
Noting that
$$ [b, \Pi_a]f = \sum \mathsf{P}_{\mathsf{b}} \Pi_a f + \sum \mathsf{p}_{\mathsf{b}} \Pi_a f - \sum \Pi_a\mathsf{P}_{\mathsf{b}} f - \sum \Pi_a \mathsf{p}_{\mathsf{b}} f + \Pi_{\Pi_a f}b - \Pi_a \Pi_f b, $$
we obtain
$$ [b, \Pi_a] f = \sum \mathsf{P}_{\mathsf{b}} \Pi_a f + \sum \mathsf{p}_{\mathsf{b}} \Pi_a f - \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \big( \left\langle bf\right\rangle _R - \left\langle b\right\rangle _R \left\langle f\right\rangle _R\big) h_R. $$
The first two sums are easily handled:
$$ \|\mathsf{P}_{\mathsf{b}} \Pi_a f\|_{L^p(\lambda)} \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|\Pi_af\|_{L^p(\mu)} \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)} \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|f\|_{L^p(\mu)}, $$
$$ \|\mathsf{p}_{\mathsf{b}} \Pi_a f\|_{L^p(\lambda)} \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|\Pi_af\|_{L^p(\mu)} \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|f\|_{L^p(\mu)}. $$
So we are left with the third term.
Now, for any dyadic rectangle $R$:
$$ \left\langle bf\right\rangle _R - \left\langle b\right\rangle _R\left\langle f\right\rangle _R = \frac{1}{|R|} \int_{\mathbb{R}^{\vec{n}}} f(x) \mathbbm{1}_R(x) (b(x) - \left\langle b\right\rangle _R)\,dx. $$
Expressing $\mathbbm{1}_R(b - \left\langle b\right\rangle _R)$ as in \eqref{E:2punitmo} and writing $R = Q_1\times Q_2$, we obtain
\begin{align}
\left\langle bf\right\rangle _R - \left\langle b\right\rangle _R\left\langle f\right\rangle _R =& \frac{1}{|R|} \sum_{\substack{P_1\subset Q_1 \\ P_2\subset Q_2}} \widehat{b}(P_1\times P_2) \widehat{f}(P_1\times P_2)\\
&+ \frac{1}{|R|} \sum_{P_1\subset Q_1} \left\langle b, h_{P_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\right\rangle \left\langle f, h_{P_1}\otimes \mathbbm{1}_{Q_2}\right\rangle
+ \frac{1}{|R|} \sum_{P_2\subset Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}\right\rangle \left\langle f, \mathbbm{1}_{Q_1}\otimes h_{P_2}\right\rangle .
\end{align}
Therefore
$$ \sum_{R\in\bm{\mathcal{D}}} \widehat{a}(R) \big( \left\langle bf\right\rangle _R - \left\langle b\right\rangle _R \left\langle f\right\rangle _R\big) h_R =
\Lambda_{a, b}f + \lambda_{a,b}^{(0,1)}f + \lambda_{a,b}^{(1,0)}f, $$
where:
$$ \Lambda_{a,b}f := \sum_{Q_1\times Q_2} \widehat{a}(Q_1\times Q_2) \frac{1}{|Q_1||Q_2|}
\bigg( \sum_{\substack{P_1\subset Q_1 \\ P_2\subset Q_2}} \widehat{b}(P_1\times P_2) \widehat{f}(P_1\times P_2) \bigg) h_{Q_1}\otimes h_{Q_2},$$
$$ \lambda_{a,b}^{(0,1)} f := \sum_{Q_1\times Q_2} \widehat{a}(Q_1\times Q_2) \frac{1}{|Q_1||Q_2|}
\bigg( \sum_{P_1\subset Q_1} \left\langle b, h_{P_1}\otimes \frac{\mathbbm{1}_{Q_2}}{|Q_2|}\right\rangle
\left\langle f, h_{P_1}\otimes \mathbbm{1}_{Q_2}\right\rangle \bigg) h_{Q_1}\otimes h_{Q_2}, $$
$$ \lambda_{a,b}^{(1,0)} f := \sum_{Q_1\times Q_2} \widehat{a}(Q_1\times Q_2) \frac{1}{|Q_1||Q_2|}
\bigg( \sum_{P_2\subset Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}\right\rangle \left\langle f, \mathbbm{1}_{Q_1}\otimes h_{P_2}\right\rangle \bigg) h_{Q_1}\otimes h_{Q_2}. $$
To analyze the term $\Lambda_{a,b}$, we write $\left\langle \Lambda_{a,b}f, g\right\rangle = \left\langle b, \phi\right\rangle $, where
\begin{align}
\phi &= \sum_{P_1\times P_2} \widehat{f}(P_1\times P_2)
\bigg( \sum_{\substack{Q_1\supset P_1\\ Q_2\supset P_2}} \widehat{a}(Q_1\times Q_2) \widehat{g}(Q_1\times Q_2) \frac{1}{|Q_1||Q_2|} \bigg) h_{P_1}\otimes h_{P_2}\\
&= \sum_{R\in\bm{\mathcal{D}}} \widehat{f}(R) \bigg( \sum_{T\in\bm{\mathcal{D}}: T\supset R} \widehat{a}(T)\widehat{g}(T)\frac{1}{|T|} \bigg) h_R.
\end{align}
So $|\left\langle \Lambda_{a,b}f, g\right\rangle | \lesssim \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)}\|S_{\bm{\mathcal{D}}}\phi\|_{L^1(\nu)}$, and
$$ S_{\bm{\mathcal{D}}}^2\phi = \sum_{R\in\bm{\mathcal{D}}} |\widehat{f}(R)|^2 \bigg( \sum_{T\in\bm{\mathcal{D}}: T\supset R} \widehat{a}(T) \widehat{g}(T) \frac{1}{|T|} \bigg)^2 \frac{\mathbbm{1}_R}{|R|}
\leq \sum_{R\in\bm{\mathcal{D}}} |\widehat{f}(R)|^2 \bigg( \sum_{T\in\bm{\mathcal{D}}: T\supset R} \widehat{a}_{\tau}(T) \widehat{g}_{\tau}(T)\frac{1}{|T|} \bigg)^2 \frac{\mathbbm{1}_R}{|R|}, $$
where $a_{\tau} := \sum_{R\in\bm{\mathcal{D}}} |\widehat{a}(R)|h_R$ and $g_{\tau} := \sum_{R\in\bm{\mathcal{D}}} |\widehat{g}(R)|h_R$ are martingale transforms which preserve
the dyadic $BMO$ norm of $a$ and, up to a constant, the $L^{p'}(\lambda')$ norm of $g$.
Now note that
$$ \left\langle \Pi_{a_{\tau}}^*g_{\tau}\right\rangle _R = \sum_{T \subsetneq R} \widehat{a}_{\tau}(T) \widehat{g}_{\tau}(T) \frac{1}{|R|} + \sum_{T \supset R}
\widehat{a}_{\tau}(T) \widehat{g}_{\tau}(T) \frac{1}{|T|},$$
and since all the Haar coefficients of $a_{\tau}$ and $g_{\tau}$ are non-negative, we may write
$$ \sum_{T\supset R} \widehat{a}_{\tau}(T) \widehat{g}_{\tau}(T)\frac{1}{|T|} \leq \left\langle \Pi_{a_{\tau}}^*g_{\tau}\right\rangle _R.$$
Then
$$
S_{\bm{\mathcal{D}}}^2\phi \leq \sum_{R\in\bm{\mathcal{D}}} |\widehat{f}(R)|^2 \left\langle \Pi_{a_{\tau}}^* g_{\tau}\right\rangle _R^2 \frac{\mathbbm{1}_R}{|R|}
\leq \big( M_S \Pi_{a_{\tau}}^* g_{\tau} \big)^2 S_{\bm{\mathcal{D}}}^2f,
$$
and
\begin{align}
\|S_{\bm{\mathcal{D}}}\phi\|_{L^1(\nu)} &\leq \|M_S \Pi_{a_{\tau}}^* g_{\tau}\|_{L^{p'}(\lambda')} \|S_{\bm{\mathcal{D}}}f\|_{L^p(\mu)} \\
& \lesssim \|\Pi_{a_{\tau}}^* g_{\tau} \|_{L^{p'}(\lambda')} \|f\|_{L^p(\mu)}\\
& \lesssim \|a_{\tau}\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|g_{\tau}\|_{L^{p'}(\lambda')} \|f\|_{L^p(\mu)},
\end{align}
which gives us the desired estimate
$$ \left\| \Lambda_{a,b} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|b\|_{BMO_{\bm{\mathcal{D}}}(\nu)}.$$
Finally, we analyze the term $\lambda_{a,b}^{(0,1)}$, the analysis of $\lambda_{a,b}^{(1,0)}$ being symmetrical. We have
$\left\langle \lambda_{a,b}^{(0,1)}f, g\right\rangle = \left\langle b, \phi\right\rangle $ with
$$ \phi = \sum_{P_1} \bigg(
\sum_{P_2} \left\langle f, h_{P_1}\otimes \mathbbm{1}_{P_2}\right\rangle \frac{1}{|P_2|} \sum_{Q_1\supset P_1} \widehat{a}(Q_1\times P_2)
\widehat{g}(Q_1\times P_2) \frac{1}{|Q_1|} \frac{\mathbbm{1}_{P_2}}{|P_2|}
\bigg) h_{P_1}, $$
and $| \left\langle \lambda_{a,b}^{(0,1)} f, g\right\rangle | \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|S_{\mathcal{D}_1}\phi\|_{L^1(\nu)}$.
Now
$$ S_{\mathcal{D}_1}^2\phi \leq \sum_{P_1} \bigg(
\sum_{P_2} \left\langle |H_{P_1}f|\right\rangle _{P_2} \big(
\sum_{Q_1\supset P_1} \widehat{a}_{\tau}(Q_1\times P_2) \widehat{g}_{\tau}(Q_1\times P_2) \frac{1}{|Q_1|} \big) \frac{\mathbbm{1}_{P_2}}{|P_2|}
\bigg)^2 \frac{\mathbbm{1}_{P_1}}{|P_1|}, $$
where we are using the same martingale transforms as above. Note that
$$ \left\langle \Pi_{a_{\tau}}^* g_{\tau}, \frac{\mathbbm{1}_{P_1}}{|P_1|}\right\rangle _{\mathbb{R}^{n_1}}(x_2) = \sum_{P_2} \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|}
\sum_{Q_1} \widehat{a}_{\tau}(Q_1\times P_2) \widehat{g}_{\tau}(Q_1\times P_2) \frac{|Q_1\cap P_1|}{|Q_1||P_1|}, $$
and again since all terms are non-negative:
\begin{align}
S_{\mathcal{D}_1}^2\phi & \leq \sum_{P_1} M_{\mathcal{D}_2}^2(H_{P_1}f)(x_2)
\bigg( \sum_{Q_1\supset P_1} \sum_{P_2} \widehat{a}_{\tau}(Q_1\times P_2) \widehat{g}_{\tau}(Q_1\times P_2) \frac{1}{|Q_1|} \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|}
\bigg)^2 \frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|}\\
&\leq \sum_{P_1} M_{\mathcal{D}_2}^2(H_{P_1}f)(x_2) \bigg( \left\langle \Pi_{a_{\tau}}^* g_{\tau}, \frac{\mathbbm{1}_{P_1}}{|P_1|}\right\rangle _{\mathbb{R}^{n_1}}(x_2) \bigg)^2 \frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|}\\
&\leq \bigg( M_{\mathcal{D}_1}(\Pi_{a_{\tau}}^* g_{\tau}) (x_1, x_2) \bigg)^2 \sum_{P_1} M_{\mathcal{D}_2}^2(H_{P_1}f)(x_2) \frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|}
= \bigg( M_{\mathcal{D}_1}(\Pi_{a_{\tau}}^* g_{\tau}) (x_1, x_2) \bigg)^2 \bigg( [SM]f(x_1, x_2) \bigg)^2.
\end{align}
Then
$$ \|S_{\mathcal{D}_1}\phi \|_{L^1(\nu)} \lesssim \|\Pi_{a_{\tau}}^* g_{\tau}\|_{L^{p'}(\lambda')} \|[SM]f\|_{L^p(\mu)} \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|g\|_{L^{p'}(\lambda')}\|f\|_{L^p(\mu)},$$
and so
$$ \left\|\lambda_{a,b}^{(0,1)} : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)},$$
and the proof is complete.
\end{proof}
\subsubsection{The full mixed paraproduct}
We are now dealing with $[b, \Pi_{a;(0,1)}]$, where
$$ \Pi_{a;(0,1)}f := \sum_{P_1\times P_2} \widehat{a}(P_1\times P_2) \left\langle f, h_{P_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{\mathbbm{1}_{P_1}}{|P_1|} \otimes h_{P_2}. $$
\begin{thm} \label{T:NCS-2}
Let $\mu, \lambda \in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$ and $\nu := \mu^{1/p}\lambda^{-1/p}$. Then
$$ \left\| [b, \Pi_{a; (0,1)}] : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
\end{thm}
Note that the case $[b, \Pi_{a;(1,0)}]$ follows symmetrically.
\begin{proof}
By the standard considerations, we only need to bound the remainder term
$$ \mathcal{R}^{(0,1)}_{a,b} f := \Pi_{\Pi_{a;(0,1)}f}b - \Pi_{a;(0,1)} \Pi_f b. $$
Explicitly, these terms are:
\begin{align}
& \Pi_{\Pi_{a;(0,1)}f}b = \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg( \sum_{Q_1\supsetneq P_1} \left\langle b\right\rangle _{Q_1\times P_2} h_{Q_1}^{\delta_1}(P_1) h_{Q_1}^{\delta_1}(x_1) \bigg) h_{P_2}^{\epsilon_2}(x_2), \\
& \Pi_{a;(0,1)} \Pi_f b = \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\bigg( \sum_{Q_2\supsetneq P_2} \widehat{f}(P_1^{\epsilon_1}\times Q_2^{\delta_2}) \left\langle b\right\rangle _{P_1\times Q_2} h_{Q_2}^{\delta_2}(P_2) \bigg)
\frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|} \otimes h_{P_2}^{\epsilon_2}(x_2).
\end{align}
Consider now a third term
$$ T := \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle b\right\rangle _{P_1\times P_2}
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{\mathbbm{1}_{P_1}}{|P_1|}\otimes h_{P_2}^{\epsilon_2}. $$
Using the one-parameter formula:
$$ \frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|} = \sum_{Q_1\supsetneq P_1} h_{Q_1}^{\delta_1} (P_1) h_{Q_1}^{\delta_1}(x_1), $$
we write $T$ as
$$ T = \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg( \sum_{Q_1\supsetneq P_1} \left\langle b\right\rangle _{P_1\times P_2} h_{Q_1}^{\delta_1}(P_1) h_{Q_1}^{\delta_1}(x_1) \bigg) h_{P_2}^{\epsilon_2}(x_2),$$
allowing us to combine this term with $\Pi_{\Pi_{a;(0,1)}f}b$:
$$ \Pi_{\Pi_{a;(0,1)}f}b - T =
\sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg( \sum_{Q_1\supsetneq P_1} \big(\left\langle b\right\rangle _{Q_1\times P_2} - \left\langle b\right\rangle _{P_1\times P_2}\big) h_{Q_1}^{\delta_1}(P_1) h_{Q_1}^{\delta_1}(x_1) \bigg) h_{P_2}^{\epsilon_2}(x_2).$$
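The one-parameter expansion of $\mathbbm{1}_{P_1}/|P_1|$ used above can be verified on Haar coefficients: $h_{Q_1}^{\delta_1}$ is constant on $P_1$ when $Q_1 \supsetneq P_1$, and has vanishing mean on every $Q_1 \subseteq P_1$, so
$$ \left\langle \frac{\mathbbm{1}_{P_1}}{|P_1|}, h_{Q_1}^{\delta_1} \right\rangle = \frac{1}{|P_1|} \int_{P_1} h_{Q_1}^{\delta_1}(x_1)\,dx_1 = \left\{ \begin{array}{l}
h_{Q_1}^{\delta_1}(P_1) \text{, if } Q_1 \supsetneq P_1\\
0 \text{, otherwise.}
\end{array}\right. $$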
Using \eqref{E:1pavgdiff}:
$$ \left\langle b\right\rangle _{Q_1\times P_2} - \left\langle b\right\rangle _{P_1\times P_2} = - \sum_{R_1: P_1\subsetneq R_1\subset Q_1}
\left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{R_1}^{\tau_1}(P_1). $$
Then the term in parentheses above becomes
\begin{equation}\label{temp2}
-\sum_{Q_1\supsetneq P_1} \bigg( \sum_{R_1: P_1\subsetneq R_1\subset Q_1} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
h_{R_1}^{\tau_1}(P_1) \bigg) h_{Q_1}^{\delta_1}(P_1) h_{Q_1}^{\delta_1}(x_1).
\end{equation}
Next, we analyze this term depending on the relationship between $R_1$ and $Q_1$:
\vspace{0.05in}
\noindent \underline{Case 1: $R_1\subsetneq Q_1$:} Then we may rewrite the sum as
\begin{align}
& \sum_{R_1\supsetneq P_1} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{R_1}^{\tau_1}(P_1)
\sum_{Q_1\supsetneq R_1} \underbrace{h_{Q_1}^{\delta_1}(P_1)}_{= h_{Q_1}^{\delta_1}(R_1)} h_{Q_1}^{\delta_1}(x_1)
= \sum_{R_1\supsetneq P_1} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{R_1}^{\tau_1}(P_1)
\frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|}.
\end{align}
This then leads to
\begin{align}
& \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg( \sum_{R_1\supsetneq P_1} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{R_1}^{\tau_1}(P_1)
\frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|} \bigg) h_{P_2}^{\epsilon_2}(x_2) \\
=& \sum_{R_1\times P_2} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg( \sum_{P_1\subsetneq R_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1}\otimes\frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{R_1}^{\tau_1}(P_1)
\bigg) \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|}\otimes h_{P_2}^{\epsilon_2}(x_2)\\
=& \sum_{R_1\times P_2} \left\langle b, h_{R_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|} \right\rangle
\left\langle \Pi_{a;(0,1)} f, h_{R_1}^{\tau_1}\otimes h_{P_2}^{\epsilon_2}\right\rangle \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|}\otimes h_{P_2}^{\epsilon_2}(x_2)\\
=& \pi_{b;(0,1)}^* \Pi_{a;(0,1)}f.
\end{align}
\vspace{0.05in}
\noindent \underline{Case 2(a): $R_1 = Q_1$ and $\tau_1\neq \delta_1$:} Then \eqref{temp2} becomes:
$$ -\sum_{Q_1\supsetneq P_1} \left\langle b, h_{Q_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\tau_1+\delta_1}(P_1) h_{Q_1}^{\delta_1}(x_1),$$
which leads to
\begin{align}
& \sum_{Q_1\times P_2} \left\langle b, h_{Q_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\delta_1}(x_1) h_{P_2}^{\epsilon_2}(x_2)
\sum_{P_1\subsetneq Q_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle h_{Q_1}^{\tau_1+\delta_1}(P_1)\\
=& \sum_{Q_1\times P_2} \left\langle b, h_{Q_1}^{\tau_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\left\langle \Pi_{a;(0,1)}f, h_{Q_1}^{\tau_1+\delta_1}\otimes h_{P_2}^{\epsilon_2}\right\rangle \frac{1}{\sqrt{|Q_1|}} h_{Q_1}^{\delta_1}(x_1) \otimes h_{P_2}^{\epsilon_2}(x_2)\\
=& \gamma_{b; (0,1)} \Pi_{a;(0,1)}f.
\end{align}
\vspace{0.05in}
\noindent \underline{Case 2(b): $R_1 = Q_1$ and $\tau_1 = \delta_1$:} Then \eqref{temp2} becomes:
$$ -\sum_{Q_1\supsetneq P_1} \left\langle b, h_{Q_1}^{\delta_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{1}{|Q_1|} h_{Q_1}^{\delta_1}(x_1),$$
which gives rise to the term
$$ T_{a,b}^{(0,1)}f := \sum_{Q_1\times P_2} \left\langle b, h_{Q_1}^{\delta_1} \otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
h_{Q_1}^{\delta_1}(x_1) h_{P_2}^{\epsilon_2} (x_2) \frac{1}{|Q_1|} \sum_{P_1\subsetneq Q_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|} \right\rangle . $$
We have proved that
\begin{equation}\label{temp3}
\Pi_{\Pi_{a;(0,1)}f}b - T = - \pi_{b; (0,1)}^* \Pi_{a;(0,1)}f - \gamma_{b; (0,1)} \Pi_{a;(0,1)}f - T_{a,b}^{(0,1)}f.
\end{equation}
Expressing $T$ instead as
$$ T = \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\bigg( \sum_{Q_2\supsetneq P_2} \widehat{f}(P_1^{\epsilon_1}\times Q_2^{\delta_2}) \left\langle b\right\rangle _{P_1\times P_2} h_{Q_2}^{\delta_2}(P_2) \bigg)
\frac{\mathbbm{1}_{P_1}}{|P_1|}\otimes h_{P_2}^{\epsilon_2}, $$
we are able to pair it with $\Pi_{a;(0,1)}\Pi_f b$. Then, a similar analysis yields
\begin{equation}\label{temp4}
T - \Pi_{a;(0,1)} \Pi_f b = \Pi_{a;(0,1)} \pi_{b;(1,0)}f + \Pi_{a;(0,1)} \gamma_{b;(1,0)}f + T_{a,b}^{(1,0)}f,
\end{equation}
where
$$ T_{a,b}^{(1,0)}f := \sum_{P_1\times P_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \frac{\mathbbm{1}_{P_1}(x_1)}{|P_1|}\otimes h_{P_2}^{\epsilon_2}(x_2)
\bigg( \sum_{Q_2\supsetneq P_2} \left\langle b, \frac{\mathbbm{1}_{P_1}}{|P_1|}\otimes h_{Q_2}^{\delta_2}\right\rangle \widehat{f}(P_1^{\epsilon_1}\times Q_2^{\delta_2}) \frac{1}{|Q_2|} \bigg). $$
Then
$$ \mathcal{R}_{a,b}^{(0,1)} f = \Pi_{a;(0,1)} \pi_{b;(1,0)}f + \Pi_{a;(0,1)} \gamma_{b;(1,0)}f
- \pi_{b; (0,1)}^* \Pi_{a;(0,1)}f - \gamma_{b; (0,1)} \Pi_{a;(0,1)}f + T_{a,b}^{(1,0)}f - T_{a,b}^{(0,1)}f.$$
It is now obvious that the first four terms are bounded as desired, and it remains to bound the terms $T_{a,b}$.
We look at $T_{a,b}^{(0,1)}$, for which we can write $\left\langle T_{a,b}^{(0,1)}f, g\right\rangle = \left\langle b, \phi\right\rangle $, where
$$ \phi = \sum_{Q_1\times P_2} \widehat{g}(Q_1^{\delta_1}\times P_2^{\epsilon_2}) \frac{1}{|Q_1|}
\bigg( \sum_{P_1\subsetneq Q_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \bigg) h_{Q_1}^{\delta_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}.$$
Then $|\left\langle T_{a,b}^{(0,1)}f, g \right\rangle | \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|S_{\mathcal{D}_1}\phi\|_{L^1(\nu)}$, and
$$ S_{\mathcal{D}_1}^2\phi = \sum_{Q_1} \bigg(
\sum_{P_2} \widehat{g}(Q_1^{\delta_1}\times P_2^{\epsilon_2}) \bigg(
\frac{1}{|Q_1|} \sum_{P_1\subsetneq Q_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle
\bigg) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|}
\bigg)^2 \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|}. $$
Now,
$$ \left\langle \Pi_{a;(0,1)}f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|} \otimes h_{P_2}^{\epsilon_2}\right\rangle = \sum_{P_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \frac{|P_1\cap Q_1|}{|P_1||Q_1|}.$$
Define the martingale transform $a \mapsto a_{\tau} = \sum_{P_1\times P_2} \tau_{P_1,P_2}^{\epsilon_1,\epsilon_2} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) h_{P_1}^{\epsilon_1}\otimes h_{P_2}^{\epsilon_2}$,
where
$$
\tau_{P_1,P_2}^{\epsilon_1,\epsilon_2} = \left\{ \begin{array}{l}
+1 \text{, if } \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \geq 0\\
-1 \text{, otherwise,}
\end{array}\right.
$$
so that every term $\widehat{a}_{\tau}(P_1^{\epsilon_1}\times P_2^{\epsilon_2}) \left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle$ is non-negative.
Remark that, while this transform depends on $f$, this causes no trouble in the end, as the dependence is absorbed into the product $BMO$ norm of $a_{\tau}$, which equals that of $a$.
Then we have
$$ \frac{1}{|Q_1|} \left| \sum_{P_1\subsetneq Q_1} \widehat{a}(P_1^{\epsilon_1}\times P_2^{\epsilon_2})
\left\langle f, h_{P_1}^{\epsilon_1}\otimes \frac{\mathbbm{1}_{P_2}}{|P_2|}\right\rangle \right| \leq \left\langle \Pi_{a_{\tau};(0,1)}f, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}^{\epsilon_2} \right\rangle .$$
Returning to the square function estimate, we now have
\begin{align}
S_{\mathcal{D}_1}^2\phi &\leq \sum_{Q_1} \bigg( \sum_{P_2} |\widehat{g}(Q_1^{\delta_1}\times P_2^{\epsilon_2})|^2 \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|} \bigg)
\bigg( \sum_{P_2} \left\langle |H_{P_2}^{\epsilon_2} \Pi_{a_{\tau};(0,1)} f| \right\rangle ^2_{Q_1} \mathbbm{1}_{Q_1}(x_1) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|} \bigg) \frac{\mathbbm{1}_{Q_1}(x_1)}{|Q_1|}\\
&\leq S_{\bm{\mathcal{D}}}^2g \bigg( \sum_{P_2} M_{\mathcal{D}_1}^2 (H_{P_2}^{\epsilon_2} \Pi_{a_{\tau};(0,1)} f)(x_1) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|} \bigg)
= S_{\bm{\mathcal{D}}}^2 g \bigg( [MS] \Pi_{a_{\tau};(0,1)}f \bigg)^2.
\end{align}
Finally,
\begin{align}
\|S_{\mathcal{D}_1}\phi\|_{L^1(\nu)} &\leq \|S_{\bm{\mathcal{D}}}g\|_{L^{p'}(\lambda')} \left\| [MS] \Pi_{a_{\tau};(0,1)}f \right\|_{L^p(\mu)}\\
&\lesssim \|g\|_{L^{p'}(\lambda')} \underbrace{ \|\Pi_{a_{\tau};(0,1)}f\|_{L^p(\mu)} }_{\lesssim \|a_{\tau}\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|f\|_{L^p(\mu)} }
\lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\lambda')},
\end{align}
showing that
$$ \| T_{a,b}^{(0,1)} : L^p(\mu) \rightarrow L^p(\lambda)\| \lesssim \|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
The estimate for $T_{a,b}^{(1,0)}$ follows similarly.
\end{proof}
\subsubsection{The partial paraproducts} We work with
$$ \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f := \sum_{R_1\times R_2} \sum_{\substack{P_1\in (R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{f}(P_1^{\epsilon_1}\times R_2^{\epsilon_2}) h_{Q_1}^{\delta_1} \otimes \frac{\mathbbm{1}_{R_2}}{|R_2|},$$
where $i_1, j_1$ are non-negative integers, and for every $P_1, Q_1, R_1$:
$$ a_{P_1Q_1R_1}(x_2) \in BMO(\mathbb{R}^{n_2}) \text{ with } \|a_{P_1Q_1R_1}\|_{BMO(\mathbb{R}^{n_2})} \leq 2^{\frac{-n_1}{2}(i_1+j_1)}.$$
\begin{thm} \label{T:NCS-3}
Let $\mu, \lambda \in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$ and $\nu := \mu^{1/p}\lambda^{-1/p}$. Then
$$ \left\| [b, \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}] : L^p(\mu) \rightarrow L^p(\lambda) \right\| \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}.$$
\end{thm}
First we need the one-weight bound for the partial paraproducts:
\begin{prop} \label{P:NCS-3-1wt}
For any $w\in A_p(\mathbb{R}^{\vec{n}})$, $1<p<\infty$:
\begin{equation} \label{E:NCS-3-1wt}
\left\| \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1} : L^p(w) \rightarrow L^p(w) \right\| \lesssim 1.
\end{equation}
\end{prop}
\begin{proof}
Let $f \in L^p(w)$ and $g \in L^{p'}(w')$; it suffices to show that $| \left\langle \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1,j_1}f, g \right\rangle | \lesssim \|f\|_{L^p(w)} \|g\|_{L^{p'}(w')}$.
\begin{align}
\left| \left\langle \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1,j_1}f, g \right\rangle \right| &\leq \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\left| \left\langle a_{P_1Q_1R_1}, \phi_{P_1Q_1R_1} \right\rangle _{\mathbb{R}^{n_2}} \right| \\
& \leq \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\|a_{P_1Q_1R_1}\|_{BMO(\mathbb{R}^{n_2})} \|S_{\mathcal{D}_2}\phi_{P_1Q_1R_1}\|_{L^1(\mathbb{R}^{n_2})}\\
&\leq 2^{\frac{-n_1}{2}(i_1+j_1)} \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}} \|S_{\mathcal{D}_2}\phi_{P_1Q_1R_1}\|_{L^1(\mathbb{R}^{n_2})},
\end{align}
where for every $P_1, Q_1, R_1$:
$$ \phi_{P_1Q_1R_1}(x_2) := \sum_{R_2} \widehat{f}(P_1\times R_2) \left\langle g, h_{Q_1}\otimes \frac{\mathbbm{1}_{R_2}}{|R_2|}\right\rangle h_{R_2}(x_2).$$
Now,
\begin{align}
S^2_{\mathcal{D}_2}\phi_{P_1Q_1R_1} &= \sum_{R_2} |\widehat{H_{P_1}f}(R_2)|^2 \left\langle |H_{Q_1}g|\right\rangle _{R_2}^2 \frac{\mathbbm{1}_{R_2}(x_2)}{|R_2|}\\
&\leq (M_{\mathcal{D}_2} H_{Q_1}g)^2(x_2) (S_{\mathcal{D}_2} H_{P_1}f)^2(x_2),
\end{align}
so
\begin{align}
&\sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}} \|S_{\mathcal{D}_2}\phi_{P_1Q_1R_1}\|_{L^1(\mathbb{R}^{n_2})}
\leq \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}} \int_{\mathbb{R}^{n_2}} (M_{\mathcal{D}_2} H_{Q_1}g)(x_2) (S_{\mathcal{D}_2} H_{P_1}f)(x_2)\,dx_2\\
=& \int_{\mathbb{R}^{n_2}} \int_{\mathbb{R}^{n_1}} \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
(M_{\mathcal{D}_2} H_{Q_1}g)(x_2) (S_{\mathcal{D}_2} H_{P_1}f)(x_2) \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|}\,dx_1\,dx_2\\
\leq& \int_{\mathbb{R}^{\vec{n}}} \bigg(\sum_{R_1} \bigg( \sum_{P_1\in(R_1)_{i_1}} S_{\mathcal{D}_2} H_{P_1}f (x_2)\bigg)^2 \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|} \bigg)^{1/2}
\bigg( \sum_{R_1} \bigg( \sum_{Q_1\in(R_1)_{j_1}} M_{\mathcal{D}_2}H_{Q_1}g(x_2) \bigg)^2 \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|} \bigg)^{1/2}\,dx\\
=& \int_{\mathbb{R}^{\vec{n}}} [S S_{\mathcal{D}_2}]^{i_1, 0} f \cdot [SM_{\mathcal{D}_2}]^{j_1, 0}g w^{1/p} w^{-1/p}\,dx.
\end{align}
Then, from the estimates in \eqref{E:Mixed2pSFShift}:
\begin{align}
\left| \left\langle \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1,j_1}f, g \right\rangle \right| &\leq 2^{\frac{-n_1}{2}(i_1+j_1)}
\left\| [SS_{\mathcal{D}_2}]^{i_1, 0} f\right\|_{L^p(w)} \left\| [SM_{\mathcal{D}_2}]^{j_1, 0}g \right\|_{L^{p'}(w')}\\
&\lesssim 2^{\frac{-n_1}{2}(i_1+j_1)} 2^{\frac{n_1i_1}{2}}\|f\|_{L^p(w)} 2^{\frac{n_1j_1}{2}} \|g\|_{L^{p'}(w')},
\end{align}
and the result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:NCS-3}]
In light of \eqref{E:NCS-3-1wt}, we only need to bound the remainder term
$$ \mathcal{R}^{i_1, j_1}f := \Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f}b - \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1} \Pi_f b. $$
The proof is somewhat similar to that of the full mixed paraproducts, in that we combine each of these terms:
\begin{align}
& \Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f}b = \sum_{R_1\times R_2} \sum_{ \substack{ P_1\in(R_1)_{i_1} \\ Q_1 \in (R_1)_{j_1} } }
\widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{f}(P_1^{\epsilon_1}\times R_2^{\epsilon_2})
\bigg( \sum_{Q_2\supsetneq R_2} \left\langle b\right\rangle _{Q_1\times Q_2} h_{Q_2}^{\delta_2}(R_2) h_{Q_2}^{\delta_2}(x_2) \bigg) h_{Q_1}^{\delta_1}(x_1),\\
& \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1} \Pi_f b = \sum_{R_1\times R_2} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{f}(P_1^{\epsilon_1}\times R_2^{\epsilon_2}) \left\langle b\right\rangle _{P_1\times R_2} h_{Q_1}^{\delta_1}(x_1) \otimes \frac{\mathbbm{1}_{R_2}(x_2)}{|R_2|},
\end{align}
with a third term:
$$T := \sum_{R_1\times R_2} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{f}(P_1^{\epsilon_1}\times R_2^{\epsilon_2}) \left\langle b\right\rangle _{Q_1\times R_2} h_{Q_1}^{\delta_1}\otimes \frac{\mathbbm{1}_{R_2}}{|R_2|}.$$
As before, expanding the indicator function in $T$ into its Haar series, we may combine $T$ with $\Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f}b$:
$$ \Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f}b - T = \sum_{R_1\times R_2} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
\widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{f}(P_1^{\epsilon_1}\times R_2^{\epsilon_2}) T_b(x_2) h_{Q_1}^{\delta_1}(x_1), $$
where
\begin{align}
T_b(x_2) &= \sum_{Q_2\supsetneq R_2} \bigg( \left\langle b\right\rangle _{Q_1\times Q_2} - \left\langle b\right\rangle _{Q_1\times R_2} \bigg) h_{Q_2}^{\delta_2}(R_2) h_{Q_2}^{\delta_2}(x_2) \\
&= \sum_{Q_2\supsetneq R_2} \bigg( \sum_{P_2: R_2\subsetneq P_2 \subset Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}^{\tau_2}\right\rangle
h_{P_2}^{\tau_2}(R_2) \bigg)h_{Q_2}^{\delta_2}(R_2) h_{Q_2}^{\delta_2}(x_2).
\end{align}
We analyze this term depending on the relationship of $P_2$ with $Q_2$.
\vspace{0.05in}
\noindent \underline{Case 1: $P_2\subsetneq Q_2$:} Then
$$ T_b(x_2) = \sum_{P_2\supsetneq R_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}^{\tau_2}\right\rangle h_{P_2}^{\tau_2}(R_2) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|},$$
which gives the operator
\begin{align}
& \sum_{Q_1 \times P_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}^{\tau_2}\right\rangle h_{Q_1}^{\tau_1}(x_1) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|}
\bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \sum_{R_2\subsetneq P_2} \widehat{a}_{P_1Q_1R_1} (R_2^{\epsilon_2}) \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\epsilon_2})
h_{P_2}^{\tau_2}(R_2) \bigg)\\
=& \sum_{Q_1 \times P_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{P_2}^{\tau_2}\right\rangle h_{Q_1}^{\tau_1}(x_1) \frac{\mathbbm{1}_{P_2}(x_2)}{|P_2|}
\bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \left\langle \Pi_{a_{P_1Q_1R_1}}^*(H_{P_1}^{\epsilon_1}f), h_{P_2}^{\tau_2} \right\rangle _{\mathbb{R}^{n_2}} \bigg) \\
=& \pi_{b;(1,0)}^*F,
\end{align}
where
$$ F := \sum_{Q_1} \bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \Pi_{a_{P_1Q_1R_1}}^*(H_{P_1}^{\epsilon_1}f)(x_2) \bigg) h_{Q_1}^{\delta_1}(x_1). $$
Now
$$ \| \pi_{b;(1,0)}^*F\|_{L^p(\lambda)} \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)} \|F\|_{L^p(\mu)},$$
so we are done if we can show that
\begin{equation}\label{temp5}
\|F\|_{L^p(\mu)} \lesssim \|f\|_{L^p(\mu)}.
\end{equation}
Take $g \in L^{p'}(\mu')$. Then
\begin{align}
|\left\langle F, g\right\rangle | &\leq \sum_{Q_1} \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \left| \left\langle \Pi^*_{a_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f), H_{Q_1}^{\delta_1}g \right\rangle _{\mathbb{R}^{n_2}} \right|.
\end{align}
Notice that we may write
$$ \left\langle \Pi^*_{a_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f), H_{Q_1}^{\delta_1}g \right\rangle _{\mathbb{R}^{n_2}} = \left\langle a_{P_1Q_1R_1}, \phi_{P_1Q_1R_1}\right\rangle _{\mathbb{R}^{n_2}}, $$
where
$$ \phi_{P_1Q_1R_1}(x_2) = \sum_{R_2} \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\delta_2}) \left\langle H_{Q_1}^{\delta_1}g\right\rangle _{R_2} h_{R_2}^{\delta_2}(x_2). $$
Then
\begin{align}
|\left\langle F, g\right\rangle | &\leq \sum_{Q_1} \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \|a_{P_1Q_1R_1}\|_{BMO(\mathbb{R}^{n_2})} \|S_{\mathcal{D}_2}\phi_{P_1Q_1R_1}\|_{L^1(\mathbb{R}^{n_2})}\\
&\leq 2^{\frac{-n_1}{2}(i_1+j_1) } \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}} \int_{\mathbb{R}^{n_2}} \bigg(
\sum_{R_2} |\widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\delta_2})|^2 \left\langle |H_{Q_1}^{\delta_1}g|\right\rangle ^2_{R_2} \frac{\mathbbm{1}_{R_2}(x_2)}{|R_2|}
\bigg)^{1/2}\,dx_2\\
&\leq 2^{\frac{-n_1}{2}(i_1+j_1) } \int_{\mathbb{R}^{\vec{n}}} \sum_{R_1} \sum_{\substack{P_1\in(R_1)_{i_1} \\ Q_1\in(R_1)_{j_1}}}
(M_{\mathcal{D}_2}H_{Q_1}^{\delta_1}g)(x_2) (S_{\mathcal{D}_2} H_{P_1}^{\epsilon_1}f)(x_2) \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|}\,dx.
\end{align}
The integral above is bounded by
\begin{align}
& \int_{\mathbb{R}^{\vec{n}}}
\bigg( \sum_{R_1} \bigg( \sum_{P_1\in(R_1)_{i_1}} (S_{\mathcal{D}_2}H_{P_1}^{\epsilon_1}f)(x_2) \bigg)^2 \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|} \bigg)^{1/2}
\bigg( \sum_{R_1} \bigg( \sum_{Q_1\in(R_1)_{j_1}} (M_{\mathcal{D}_2}H_{Q_1}^{\delta_1}g)(x_2) \bigg)^2 \frac{\mathbbm{1}_{R_1}(x_1)}{|R_1|} \bigg)^{1/2}\,dx \\
&= \int_{\mathbb{R}^{\vec{n}}} \bigg( [SS_{\mathcal{D}_2}]^{i_1, 0}f \bigg) \bigg( [SM_{\mathcal{D}_2}]^{j_1, 0}g \bigg)\,dx
\leq \left\| [SS_{\mathcal{D}_2}]^{i_1, 0}f \right\|_{L^p(\mu)} \left\| [SM_{\mathcal{D}_2}]^{j_1, 0}g \right\|_{L^{p'}(\mu')} \\
&\lesssim 2^{\frac{n_1}{2}(i_1+j_1)} \|f\|_{L^p(\mu)} \|g\|_{L^{p'}(\mu')} \text{, by \eqref{E:Mixed2pSFShift}.}
\end{align}
The desired estimate in \eqref{temp5} is now proved.
\vspace{0.05in}
\noindent \underline{Case 2(a): $P_2 = Q_2$ and $\tau_2 \neq \delta_2$:} Then
$$T_b(x_2) = \sum_{Q_2\supsetneq R_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\tau_2}\right\rangle
\frac{1}{\sqrt{|Q_2|}} h_{Q_2}^{\tau_2+\delta_2}(R_2) h_{Q_2}^{\delta_2}(x_2),$$
giving rise to the operator
\begin{align}
& \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\tau_2}\right\rangle \bigg(
\sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \left\langle \Pi^*_{a_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f), h_{Q_2}^{\tau_2+\delta_2} \right\rangle _{\mathbb{R}^{n_2}}
\bigg) \frac{1}{\sqrt{|Q_2|}} h_{Q_1}^{\delta_1}\otimes h_{Q_2}^{\delta_2}
= \gamma_{b;(1,0)}F,
\end{align}
which is handled as in the previous case.
\vspace{0.05in}
\noindent \underline{Case 2(b): $P_2 = Q_2$ and $\tau_2 = \delta_2$:} In this case, $T_b(x_2)$ gives rise to the operator
$$ T' := \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\delta_2}\right\rangle h_{Q_1}^{\delta_1}\otimes h_{Q_2}^{\delta_2}
\sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \frac{1}{|Q_2|} \sum_{R_2\subsetneq Q_2} \widehat{a}_{P_1Q_1R_1} (R_2^{\epsilon_2}) \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\epsilon_2}).$$
Now define
$$ F_{\tau} := \sum_{Q_1} \bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \Pi_{a^{\tau}_{P_1Q_1R_1}}^*(H_{P_1}^{\epsilon_1}f)(x_2) \bigg) h_{Q_1}^{\delta_1}(x_1), $$
just as we defined $F$ before, except now to every function $a_{P_1Q_1R_1}$ we apply the martingale transform
$$ a_{P_1Q_1R_1} \mapsto a^{\tau}_{P_1Q_1R_1} = \sum_{R_2} \tau_{R_2}^{\epsilon_2} \widehat{a}_{P_1Q_1R_1}(R_2^{\epsilon_2}) h_{R_2}^{\epsilon_2}
\text{, where } \tau_{R_2}^{\epsilon_2} := \left\{ \begin{array}{l}
+1 \text{, if } \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\epsilon_2}) \geq 0,\\
-1 \text{, otherwise.}
\end{array}\right.$$
Since this does not increase the $BMO(\mathbb{R}^{n_2})$ norms of the $a_{P_1Q_1R_1}$ functions,
the estimate \eqref{temp5} still holds: $\|F_{\tau}\|_{L^p(\mu)} \lesssim \|f\|_{L^p(\mu)}$.
Moreover, note that
$$ \left\langle \Pi^*_{a^{\tau}_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f)\right\rangle _{Q_2} = \sum_{R_2}
\underbrace{\widehat{a^{\tau}}_{P_1Q_1R_1}(R_2^{\epsilon_2}) \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\epsilon_2}) }_{\geq 0} \frac{|R_2\cap Q_2|}{|R_2||Q_2|}$$
and that
$$ \pi_{b;(1,0)}F_{\tau} = \sum_{Q_1\times Q_2} \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\delta_2}\right\rangle
\sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \left\langle \Pi^*_{a^{\tau}_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f)\right\rangle _{Q_2} h_{Q_1}^{\delta_1}\otimes h_{Q_2}^{\delta_2}. $$
Then
\begin{align}
S_{\bm{\mathcal{D}}}^2 T' & \leq \sum_{Q_1\times Q_2} \left| \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\delta_2}\right\rangle \right|^2
\bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \frac{1}{|Q_2|} \sum_{R_2\subsetneq Q_2} | \widehat{a}_{P_1Q_1R_1} (R_2^{\epsilon_2}) \widehat{H_{P_1}^{\epsilon_1}f}(R_2^{\epsilon_2})| \bigg)^2
\frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes\frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
& \leq \sum_{Q_1\times Q_2} \left| \left\langle b, \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes h_{Q_2}^{\delta_2}\right\rangle \right|^2
\bigg( \sum_{P_1\in (Q_1^{(j_1)})_{i_1}} \left\langle \Pi^*_{a^{\tau}_{P_1Q_1R_1}}(H_{P_1}^{\epsilon_1}f)\right\rangle _{Q_2} \bigg)^2 \frac{\mathbbm{1}_{Q_1}}{|Q_1|}\otimes\frac{\mathbbm{1}_{Q_2}}{|Q_2|}\\
&= S_{\bm{\mathcal{D}}}^2(\pi_{b;(1,0)} F_{\tau}).
\end{align}
Finally, this gives us that
\begin{align}
\|T'\|_{L^p(\lambda)} & \simeq \|S_{\bm{\mathcal{D}}}T'\|_{L^p(\lambda)} \leq \|S_{\bm{\mathcal{D}}} \pi_{b;(1,0)}F_{\tau}\|_{L^p(\lambda)}
\simeq \|\pi_{b;(1,0)}F_{\tau}\|_{L^p(\lambda)} \lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}\|F_{\tau}\|_{L^p(\mu)}\\
&\lesssim \|b\|_{bmo_{\bm{\mathcal{D}}}(\nu)}\|f\|_{L^p(\mu)}.
\end{align}
This proves that $\Pi_{\mathbbm{S}_{\bm{\mathcal{D}}}^{i_1, j_1}f}b - T$ obeys the desired bound, and the case $T - \mathbbm{S}_{\bm{\mathcal{D}}}^{i_1,j_1}\Pi_f b$
is handled similarly.
\end{proof}
\subsection{Proof of Theorem \ref{T:Journe}} \label{Ss:FeffProof}
Having now proved all the one-weight inequalities for dyadic shifts, we may conclude that
$$\| \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i}, \vec{j}} : L^p(w) \rightarrow L^p(w) \| \lesssim 1,$$
for all $w\in A_p(\mathbb{R}^{\vec{n}})$.
For the cancellative shifts, this was proved in \eqref{E:2pDShift1wt}. For the non-cancellative shifts,
the first two types are simply paraproducts with symbol $\|a\|_{BMO_{\bm{\mathcal{D}}}(\mathbb{R}^{\vec{n}})} \leq 1$,
while the third type, a partial paraproduct, was proved to be bounded on $L^p(w)$ in
Proposition \ref{P:NCS-3-1wt}.
Theorem \ref{T:Journe} now follows trivially from Martikainen's representation Theorem \ref{T:MRep}:
take $f \in L^p(w)$ and $g \in L^{p'}(w')$. Then
\begin{align}
|\left\langle Tf, g\right\rangle | & \leq C_T \mathbb{E}_{\omega_1} \mathbb{E}_{\omega_2}
\sum_{\vec{i}, \vec{j} \in \mathbb{Z}^2_{+}} 2^{-\max(i_1, j_1) \delta/2} 2^{-\max(i_2, j_2) \delta/2}
\left|\left\langle \mathbbm{S}_{\bm{\mathcal{D}}}^{\vec{i},\vec{j}} f, g\right\rangle \right| \\
& \lesssim \|f\|_{L^p(w)} \|g\|_{L^{p'}(w')} \sum_{\vec{i}, \vec{j} \in \mathbb{Z}^2_{+}} 2^{-\max(i_1, j_1) \delta/2} 2^{- \max(i_2, j_2) \delta/2} \\
& \simeq \|f\|_{L^p(w)} \|g\|_{L^{p'}(w')}.
\end{align}
\qed
\section{The unweighted case of higher order Journ\'e commutators}\label{S:mix}
Here is the definition of the BMO spaces which are in between little BMO and product BMO.
\begin{defn}\label{definitionlpbmo}
Let $b : \mathbb{R}^{\vec{d}} \to \mathbb{C}$ with $\vec{d}=(d_1,\cdots,d_t)$. Take a partition $\mathcal{I}=\{I_s:1\le s\le l\}$ of $\{1,2,...,t\}$ so that $\dot{\cup}_{1\le s \le l} I_s=\{1,2,...,t\}$. We say that $b \in \text{BMO}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})$ if, for any choice ${\boldsymbol{v}}=(v_s)$ with $v_s\in I_{s}$, $b$ is uniformly in product BMO in the variables indexed by the ${v_s}$.
We call a $\text{BMO}$ space of this type a `little product BMO'. If for any $\vec{x}=(x_1,...,x_t) \in \mathbb{R}^{\vec{d}}$ we define $\vec{x}_{\hat{\boldsymbol{v}}}$ by removing those variables indexed by the ${v_s}$, the little product BMO norm becomes $$\|b\|_{\text{BMO}_{\mathcal{I}}}=\max_{{\boldsymbol{v}}} \{\sup_{\vec{x}_{\hat{\boldsymbol{v}}}}\|b(\vec{x}_{\hat{\boldsymbol{v}}})\|_{\text{BMO}}\}$$ where the BMO norm is product BMO in the variables indexed by the ${v_s}$.
\end{defn}
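For instance, the finest partition $\mathcal{I}=\{\{1\},\ldots,\{t\}\}$ forces the choice ${\boldsymbol{v}}=(1,\ldots,t)$ and recovers Chang--Fefferman product BMO in all $t$ variables, while the coarsest partition $\mathcal{I}=\{\{1,\ldots,t\}\}$ requires that
$$ b(\cdot, \vec{x}_{\hat{v}_1}) \in \text{BMO}(\mathbb{R}^{d_{v_1}}) \text{ uniformly, for each } 1\le v_1\le t, $$
i.e., that $b$ belong to little BMO; the spaces $\text{BMO}_{\mathcal{I}}$ for intermediate partitions lie between these two extremes.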
In \cite{OPS} it was proved that commutators involving tensor products of Riesz transforms in $L^p$ are a testing class for these BMO spaces:
\begin{thm}[Ou-Petermichl-Strouse]\label{corollary_riesz}
Let $\vec{j}=(j_1,\ldots,j_t)$ with $1\le j_k\le d_k$ and, for each $1\le s\le l$, let $\vec{j}^{(s)}=(j_k)_{k\in I_s}$ be associated with the tensor product of Riesz transforms $\vec{R}_{s,\vec{j}^{(s)}}=\bigotimes_{k\in I_s}R_{k,j_k}$; here $R_{k,j_{k}}$ denotes the $j_k^{\text{th}}$ Riesz transform acting on functions of the $k^{\text{th}}$ variable.
We have the two-sided estimate $$\|b\|_{BMO_{\mathcal{I}}(\mathbb{R}^{\vec{d}})} \lesssim \sup_{\vec{j}}\|[\vec{R}_{1,\vec{j}^{(1)}},\ldots,[\vec{R}_{t,\vec{j}^{(t)}},b]\ldots]\|_{L^p(\mathbb{R}^{\vec{d}})\to L^p(\mathbb{R}^{\vec{d}})}\lesssim \|b\|_{BMO_{\mathcal{I}}(\mathbb{R}^{\vec{d}})}.$$
\end{thm}
It was also proved that the estimate self-improves to paraproduct-free Journ\'e commutators in $L^2$, where $T$ is said to be paraproduct-free if $T(1\otimes \cdot)=T(\cdot\otimes 1)=T^*(1\otimes \cdot)=T^*(\cdot\otimes 1)=0$.
\begin{thm}[Ou-Petermichl-Strouse]\label{upperbd_journe}
Let us consider $\mathbb{R}^{\vec{d}}$, $\vec{d}=(d_1,\ldots ,d_t)$ with a partition $\mathcal{I}=(I_s)_{1\le s \le l}$ of $\{1,\ldots ,t\}$ as discussed before. Let $b\in {\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})$ and let $T_s$ denote a multi-parameter paraproduct-free Journ\'e operator acting on functions defined on $\bigotimes_{k\in I_s}\mathbb{R}^{d_k}$. Then we have the estimate below
\[
\|[T_1,\ldots[T_l,b]\ldots]\|_{L^2(\mathbb{R}^{\vec{d}})\to L^2(\mathbb{R}^{\vec{d}})}\lesssim \|b\|_{{\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})}.
\]
\end{thm}
This estimate was generalised somewhat in \cite{OP}, where the paraproduct-free condition was slightly weakened. The considerations in the present text, in combination with arguments from \cite{DO} and \cite{OPS} to pass to the iterated case, readily give the following full result, for all Journ\'e operators and all $p$:
\begin{thm}
Let us consider $\mathbb{R}^{\vec{d}}$, $\vec{d}=(d_1,\ldots ,d_t)$ with a partition $\mathcal{I}=(I_s)_{1\le s \le l}$ of $\{1,\ldots ,t\}$ as discussed before. Let $b\in {\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})$ and let $T_s$ denote a multi-parameter Journ\'e operator acting on functions defined on $\bigotimes_{k\in I_s}\mathbb{R}^{d_k}$. Then we have the estimate below
\[
\|[T_1,\ldots[T_l,b]\ldots]\|_{L^p(\mathbb{R}^{\vec{d}})\to L^p(\mathbb{R}^{\vec{d}})}\lesssim \|b\|_{{\text{BMO}}_{\mathcal{I}}(\mathbb{R}^{\vec{d}})}.
\]
\end{thm}
\noindent \textbf{Acknowledgement: }
We are grateful to Jill Pipher and Yumeng Ou for helpful suggestions, especially in pointing out to us the connection with \cite{RF} and \cite{RF2}.
The first and second author would like to thank MSRI for their support during January 2017, as well as Michigan State University's Department of Mathematics for their hospitality in May 2017.
Finally, we are grateful to the referee for the careful reading of our paper and the very valuable suggestions to improve the presentation.
\begin{bibdiv}
\begin{biblist}
\normalsize
\bib{BagbyKurtz}{article}{
author={Bagby, R.J.},
author={Kurtz, D.S.},
title={L(log L) spaces and weights for the strong maximal function},
journal={J. Anal. Math.},
volume={44},
number={1},
date={1984},
pages={21--31},
}
\bib{Bloom}{article}{
author={Bloom, S.},
title={A commutator theorem and weighted BMO},
journal={Trans. Amer. Math. Soc.},
volume={292},
date={1985},
number={1},
pages={103--122}
}
\bib{CRW}{article}{
author={Coifman, R. R.},
author={Rochberg, R.},
author={Weiss, Guido},
title={Factorization theorems for Hardy spaces in several variables},
journal={Ann. of Math. (2)},
volume={103},
date={1976},
number={3},
pages={611--635},
}
\bib{CotlarSadosky}{article}{
author={Cotlar, M.},
author={Sadosky, C.},
title={Two distinguished subspaces of product BMO and Nehari-AAK theory for Hankel operators on the torus},
journal={Integr. Equ. Oper. Theory},
volume={26},
number={3},
date={1996},
}
\bib{DO}{article}{
author={Dalenc, L.},
author={Ou, Y.},
title={Upper bound for multi-parameter iterated commutators},
journal={Preprint},
date={2014}
}
\bib{RStein}{article}{
author={Fefferman, R.},
author={Stein, E. M.},
title={Singular integrals on product spaces},
journal={Adv. Math.},
volume={45},
number={2},
date={1982},
}
\bib{RF}{article}{
author={Fefferman, R.},
title={Harmonic Analysis on Product Spaces},
journal={Ann. of Math.},
volume={126},
number={1},
date={1987},
}
\bib{RF2}{article}{
author={Fefferman, R.},
title={$A^p$ weights and singular integrals},
journal={Amer. J. Math.},
volume={110},
number={5},
date={1988},
}
\bib{Grafakos}{book}{
author={Grafakos, L.},
title={Classical and Modern Fourier Analysis},
publisher={Pearson/Prentice Hall},
date={2004},
}
\bib{Herran}{article}{
author={Grau de la Herran, A.},
title={Comparison of T1 conditions for multi-parameter operators},
journal={Proc. Amer. Math. Soc},
date={2016},
volume={144},
number={6},
}
\bib{HLW2}{article}{
author={Holmes, I.},
author={Lacey, M. T.},
author={Wick, B. D.},
title={Commutators in the two-weight setting},
journal={Math. Ann.},
volume={367},
date={2017},
number={1-2},
pages={51--80},
eprint={http://arxiv.org/abs/1506.05747},
}
\bib{HytRep}{article}{
author={Hyt{\"o}nen, T.},
title={The sharp weighted bound for general Calder\'on-Zygmund operators},
journal={Ann. of Math. (2)},
volume={175},
date={2012},
number={3},
pages={1473--1506},
}
\bib{Journe}{article}{
author={Journ\'{e}, J-L.},
title={Calder\'{o}n-Zygmund operators on product spaces},
journal={Rev. Mat. Iberoamericana},
volume={1},
number={3},
date={1985},
}
\bib{MRep}{article}{
author={Martikainen, H.},
title={Representation of bi-parameter singular integrals by dyadic operators},
date={2012},
journal={Adv. Math.},
volume={229},
number={3},
}
\bib{MOrponen}{article}{
author={Martikainen, H.},
author={Orponen, T.},
title={Some obstacles in characterising the boundedness of bi-parameter singular integrals},
journal={Math. Z.},
volume={282},
issue={1--2},
date={2016},
}
\bib{MuckWheeden}{article}{
author={Muckenhoupt, B.},
author={Wheeden, R. L.},
title={Weighted bounded mean oscillation and the Hilbert transform},
journal={Studia Math.},
volume={54},
number={3},
date={1975/76},
}
\bib{OPS}{article}{
author={Ou, Y.},
author={Petermichl, S.},
author={Strouse, E.},
title={Higher order Journ\'e commutators and characterizations of
multi-parameter BMO},
journal={Adv. Math.},
volume={291},
date={2016},
pages={24--58}
}
\bib{OP}{article}{
author={Ou, Y.},
author={Petermichl, S.},
title={Little bmo and Journ\'{e} commutators},
eprint={http://yumengou.mit.edu/sites/default/files/documents/LittleBMOJourneYOSP.pdf},
}
\bib{P}{article}{
author={Petermichl, S.},
title={Dyadic shifts and a logarithmic estimate for Hankel operators with matrix symbol},
journal={C. R. Acad. Sci. Paris S\'{e}r. I Math.},
volume={330},
number={6},
date={2000},
}
\bib{RdF}{article}{
author={Rubio de Francia, J. L.},
author={Ruiz, J.},
author={Torrea, J. L.},
title={Calder\'{o}n-Zygmund theory for operator-valued kernels},
journal={Adv. Math.},
date={1986},
volume={62},
pages={7--48},
}
\bib{Wu}{article}{
author={Wu, S.},
title={A wavelet characterization for weighted Hardy Spaces},
journal={Rev. Mat. Iberoamericana},
volume={8},
number={3},
date={1992},
}
\end{biblist}
\end{bibdiv}
\end{document}
Composite materials now make up more than 50\% of a modern aircraft
design by weight~\cite{krauklis2020composite}. This is due to their higher specific strength
and better fatigue properties for high-tension components compared with
conventional aluminium alternatives. Composite materials also have lower electrical and thermal conductivity than the
supplanted aluminium, which can adversely affect the response of
aircraft components to lightning strikes. The initial stages of a
lightning strike result in a large current flow through the aircraft
skin, which, for materials with low electrical conductivity, can result
in large energy input through Joule heating. In Carbon Fibre
Reinforced Polymer (CFRP), for example, the high energy input can
result in fibre fracture, resin decomposition, delamination and
thermal ablation.
The interaction of lightning with aircraft exterior surfaces can be
further complicated by the integration of conductive components, such
as metallic fasteners, with higher conductivity than
the surrounding composite substrates. Aerospace fasteners, used to
join skin panels, are commonly manufactured from
titanium alloys that are lightweight, strong and corrosion
resistant. These can, however, act as a preferred pathway for the
current to access the internal airframe and embedded carbon composite
panels. High current flow through a fastener can arise either from a
direct attachment of a lightning arc to the fastener
head or indirectly as current conducted from a remote attachment
site. In addition to paint, panel and sealant damage, the high current
flow in the metallic fastener can cause a thermal ejection of hot
particles (energetic discharge) from the interfaces
between fastener components. Energetic discharge from fastener
assemblies can represent a potential threat in safety critical regions
of an aircraft, such as in integrated fuel tanks where significant
fuel vapour is present.
Chemartin et al.~\cite{chemartin2012direct} outline three important
mechanisms through which a fastener can undergo energetic
discharge. The first, termed `outgassing', results from the current
passage across small resistive gaps between the fastener and
skin. The formation of a plasma in the gap, and
subsequent Joule heating, increases the internal plasma pressure
until a blow out of sparks and hot gas occurs at the component
interface. Further information regarding the characteristics and
causes of outgassing (also known as pressure sparking) is provided in
the comprehensive review of
Evans~\cite{evans2018thesisCharacterisation}. The second energetic
discharge mechanism outlined in Chemartin et al.~\cite{chemartin2012direct} is thermal sparking between contacting components,
i.e., the fastener nut and rib. Odam et al.~\cite{odam1991lightning,
odam2001factors} suggest that thermal sparking occurs when a very high
current is forced to cross a joint between two conducting materials,
which have imperfect mating between their surfaces. This process is
noted to be distinct from voltage sparking, which occurs when the
voltage difference between two conducting materials is sufficient to
break down the intervening medium, whether this is air or another
dielectric medium. The final energetic discharge mechanism outlined in Chemartin et al.~\cite{chemartin2012direct} is edge
glow. This is defined by Revel et al.~\cite{revel2016edge} as
consisting of a bright glow combined with strong material ejections,
and occurs on the edges of composite materials. Two
mechanisms responsible for the presence of edge glow are proposed in the available literature. The first of
these occurs when the potential difference between adjacent composite
plies exceeds a threshold value, irrespective of whether a
pre-existing contact between the plies is present. The second
mechanism occurs when the power deposition into the substructure is
above a threshold power value that produces a glow due to heating of
sealant.
These energetic discharge mechanisms can be distinguished using an appropriate experimental approach.
Day and Kwon~\cite{kwon2009optical} describe a method which analyses light
over a narrow spectral range using a spectrometer and can identify hot
particle ejection, arcing and edge glow. Work by Haigh et
al.~\cite{haigh1991measurements}, in contrast, applies two-colour
spectroscopy to estimate the temperature of sparks emitted using
red/blue ratios, comparing with baseline nickel and tungsten
sparks. Later work by Haigh et al.~\cite{haighCrookTerzino2013} uses
photography to detect light sources that are potential ignition
hazards on a T-joint section with multiple fasteners. Focusing
specifically on outgassing events, Mulazimoglu and
Haylock~\cite{mulazimoglu2011recent} relate sparking intensity to the
fastener material and geometry choice using energy dispersion
spectrometer chemical analysis, and determine that the principal
constituent of the ejected particle debris in question is polysulphide sealant,
with small quantities of metallic droplets and carbon fibre
particles. They surmise that the chemical composition of the debris
means that electrical arcing occurs between the bolt and the CFRP hole
surface. The ablated material is then carried by hot gases during the
outgassing ejection event due to the arcing pressure. The
microstructure of the resulting damage is analysed using scanning
electron microscopy and the outgassing characteristics that result
from deliberate design additions are examined. These additions include
the introduction of metallic sleeve components, dielectric bolt head
coverings and bolt-line metallic meshes.
The wide range of potential fastener configurations, along with the
various mechanisms through which sparking can occur, mean that
computational simulation can provide an efficient and cost effective
technique for rapidly exploring the available parameter space.
Computational techniques can also provide a useful tool in the design
of experimental testing for proposed fastener configurations. Finite
element methods, for example, are a common, single-material approach to model the
effects of high current flow through carbon composite substrates.
This is achieved through prescribing a current waveform along the
upper surface, see, for example Ogasawara et
al.~\cite{ogasawara2010coupled}, Abdelal and
Murphy~\cite{abdelal2014nonlinear}, Guo et
al.~\cite{guo2017comparison}, Dong et al.~\cite{dong2015coupled}, Wang
et al.~\cite{wang2014ablation} and Liu et al.~\cite{liu2015combining}
and for commercial software by Wang and
Pasiliao~\cite{wang2018modeling}, Kamiyama et
al.~\cite{kamiyama2018delamination, kamiyama2019damage}, Fu and
Ye~\cite{fu2019modelling} and Evans et
al.~\cite{evans2014comsolansys}. The prescription of a current
waveform along the upper surface of a composite material is perhaps
most suited to cases in which the damage to individual ply and resin
layers is of direct interest, since inter-ply loading and damage
characteristics can be efficiently modelled using modest computational
resources for comparison with experimental results. Modelling a
lightning strike using this approach in isolation can, however,
neglect the dynamic change in current and pressure loading on the
upper surface of the substrate by an evolving plasma arc, and the
non-linear feedback from these changes, which in turn affect the arc
behaviour.
To allow the evolution of the plasma arc to effect the time-dependent
substrate current and pressure loading, the `co-simulation' approach
couples two software packages or approaches. Using this technique a
magnetohydrodynamic (MHD) code can be used to describe the evolution
of temperature, pressure and current density within the arc. The
results of running the MHD code are then fed as initial and boundary
conditions to a second simulation that models the mechanical, thermal
and electrodynamic evolution of the substrate. Examples of this
approach include Tholin et al.~\cite{tholin2015numerical}, who couple
two distinct software packages (C\`edre and Code--Saturn) to model the
plasma arc attachment to single material substrates, and Millen et
al.~\cite{millen2019sequential} who couple two commercial software
packages (COMSOL Multiphysics and Abaqus FEA) to model the mechanical
loading and electromagnetic effects on a carbon composite
substrate. Kirchdoerfer et al.~\cite{kirchdoerfer2017,
kirchdoerfer2018cth} apply the co-simulation approach to aerospace
fasteners, coupling the results from COMSOL Multiphysics with a
research shock-physics code developed at Sandia National Laboratories
(CTH). The electric and magnetic fields, and current density, are
solved in COMSOL and used to determine Joule heating effects. One-way
coupling is then applied with the CTH solver using an effective
heating power, computed from the modelled Joule heating, allowing
the simulation of the fluid-structure interaction. This one-way
coupled solution is used to determine the temperature and pressure
rise in an internal cavity between a bolt, nut, and surrounding CFRP
panels, with the final pressure rise being compared to experimental
results.
The co-simulation approach results in a one-directional coupling
between materials where the substrate behaviour does not influence the
evolution of the arc. However, experimental results, such as the
optical measurements of Martins~\cite{martins2016etude}, indicate that
changes in the electrical conductivity and substrate shape can alter
the arc attachment characteristics. This work uses a multi-physics
methodology introduced in Millmore and
Nikiforakis~\cite{millmore2019multi}, to simulate a dynamic
non-linearly coupled system comprising the plasma arc, titanium
aerospace fastener components, surrounding aircraft panels and the
internal supporting structure. The electromagnetic, thermal and
elastoplastic response of individual fastener components is captured
by this method, allowing a critical analysis of fastener design and
material layering. Dynamic feedback between the components is achieved
in this multi-physics approach by simultaneously solving the governing
hyperbolic partial differential equations for each material in a
single system. The non-linear dynamic feedback between adjacent
materials achieved by this approach provides a distinct improvement
over existing numerical methods for modelling lightning strike
attachment. The underlying numerical methods and implementation used
in this paper are outlined in Millmore \&
Nikiforakis~\cite{millmore2019multi}, and extended in Michael et
al.~\cite{michael2019multi} and Tr{\"a}uble et
al.~\cite{trauble2021improved}. Millmore and
Nikiforakis compare numerical results from
the non-linear multi-physics approach used in this paper with the
optical measurements of Martins~\cite{martins2016etude}, for a plasma
arc attachment to a single material substrate, and demonstrate that
the two-way interaction between the substrate and plasma is accurately
captured by this method.
The key aim of this work is to use this approach to model the rise in pressure within an internal cavity between a titanium
fastener and a CFRP panel. The breakdown of air in this cavity
requires a strategy for defining an internal plasma region, and the
influence of parametric changes in the cavity geometry on the pressure
rise through Joule heating can be studied. This mechanism is
acknowledged by Chemartin et al.~\cite{chemartin2012direct} and
Evans~\cite{evans2018thesisCharacterisation} as a major contributing
factor in outgassing, hence this paper focuses on this mechanism
over thermal sparking and edge glow. An overview of the multi-physics
methodology is given in section~\ref{sec:MathematicalModel} and an
assessment of the methodology for modelling lightning strikes on
aerospace fasteners is made in
section~\ref{sec:FastenerModelling}. The model is validated by
comparing to experimental measurements of a fastener holding together
carbon composite and aluminium panels, electrically isolated from each
other by a dielectric layer. The multi-physics methodology is then
exercised in section~\ref{sec:SensitivityStudies} to investigate how
parametric changes in the design of an idealised fastener influence
the pressure loading, component temperature rise and electrical
current path characteristics. This study considers a variety of
fastener design and material layering choices, and permits the
pressure rise in confined internal plasma regions to be numerically
quantified. Section~\ref{sec:Conclusions} provides a summary and an
outlook to future work.
\section{Mathematical Model Description}
\label{sec:MathematicalModel}
This section gives an overview of the mathematical model used to
simulate the response of aerospace fastener assemblies to lightning attachment.
\subsection{Plasma model}
\label{sec:PlasmaModelling}
The plasma arc is described through a system of equations suitable for
simulating the evolution and ionisation of air caused by an electric
discharge. This model must describe the change in the chemical
composition of the plasma under the high temperatures of the arc.
Additionally electromagnetic effects can have a strong influence on
the arc evolution. This requires an MHD formulation which includes the
effects of current flow in the arc through the Lorentz and Joule
effects. For lightning plasma, this system can be assumed to be under
local thermodynamic
equilibrium~\cite{tholin2015numerical,martins2016etude}. The governing
system of equations is therefore given by
\begin{equation}
\frac{\partial \rho}{\partial t} + \nabla \cdot \left(\rho{\bf v}\right) = 0,
\label{eq:mhdrho}
\end{equation}
\begin{equation}
\frac{\partial }{\partial t}\left(\rho {\bf v}\right) + \nabla\cdot\left(\rho{\bf v}\otimes{\bf v}\right) + \nabla p = {\bf J}\times{\bf B},
\label{eq:mhdcont}
\end{equation}
\begin{equation}
\frac{\partial E}{\partial t} + \nabla \cdot \left[\left(E + p\right) {\bf v}\right] = {\bf v} \cdot \left({\bf J} \times {\bf B}\right) + \eta {\bf J} \cdot {\bf J} - S_r,
\label{eq:mhdene}
\end{equation}
\begin{equation}
-\nabla^2 {\bf A} = \mu_0 {\bf J}.
\label{eq:mhdmag}
\end{equation}
where $\rho$ is density, ${\bf v}$ is velocity, $p$
is the pressure, $E$ is the total energy, ${\bf J}$
is the current density, ${\bf B}$ is the magnetic field, $\eta$ is the
resistivity of the plasma, $S_r$ is a term for the radiative losses
from a heated material, and ${\bf A}$ is the magnetic vector
potential, related to the magnetic field through
${\bf B} = \nabla \times {\bf A}$. The current density is governed by
the charge continuity equation
\begin{equation}
\nabla \cdot {\bf J} = -\nabla \cdot \left(\sigma \nabla \phi\right) = 0,
\label{eq:current-cont}
\end{equation}
where $\phi$ is the electric potential and $\sigma = 1/\eta$ is the
electrical conductivity of the plasma arc.
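In one dimension, equation~(\ref{eq:current-cont}) reduces to $-\mathrm{d}/\mathrm{d}x\left(\sigma\,\mathrm{d}\phi/\mathrm{d}x\right) = 0$, and its discrete solution shows how current partitions between materials of differing conductivity. The following Python sketch is purely illustrative, not part of the solver described here: the geometry, applied voltage and discretisation are arbitrary choices, with conductivities of the order used later in this paper for titanium and isotropic CFRP. It recovers the analytic series-resistance current.

```python
import numpy as np

# Illustrative 1D solve of -d/dx(sigma dphi/dx) = 0 across two layered
# materials.  Geometry and applied voltage are arbitrary illustration
# choices; conductivities are of the order quoted later in the text for
# titanium and (isotropic) CFRP.
layers = [             # (thickness [m], conductivity [S/m])
    (2.0e-3, 2.38e6),  # titanium-like layer
    (6.0e-3, 4.1e4),   # CFRP-like layer
]
n_per_layer = 50
sigma = np.concatenate([np.full(n_per_layer, s) for _, s in layers])
dx = np.concatenate([np.full(n_per_layer, t / n_per_layer) for t, _ in layers])

g = sigma / dx                # conductance of each face between nodes i, i+1
n_nodes = len(g) + 1
V = 100.0                     # applied potential difference (arbitrary)

# Assemble the linear system: current balance at interior nodes, Dirichlet
# conditions at the ends (rows scaled to the local conductance magnitude).
A = np.zeros((n_nodes, n_nodes))
b = np.zeros(n_nodes)
A[0, 0] = g[0]
b[0] = g[0] * V
A[-1, -1] = g[-1]
b[-1] = 0.0
for i in range(1, n_nodes - 1):
    A[i, i - 1] = -g[i - 1]
    A[i, i] = g[i - 1] + g[i]
    A[i, i + 1] = -g[i]
phi = np.linalg.solve(A, b)

# Charge conservation: the current density through every face is identical,
# and matches the analytic series-resistance result.
J = g * (phi[:-1] - phi[1:])
J_analytic = V / sum(t / s for t, s in layers)
```

In this 1D setting the current density is uniform, but the resistance is dominated by the low-conductivity layer, mirroring the preference for current to remain in high-conductivity components discussed later.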
\subsubsection{Equation of state}
\label{sec:equation-state}
The system of equations~(\ref{eq:mhdrho})--(\ref{eq:mhdmag}) comprises
8 equations for 10 unknown variables: $\rho$, $E$,
$p$, the vectors ${\bf v}$ and ${\bf B}$, and the scalar potential $\phi$.
These are closed using equation~(\ref{eq:current-cont}) and an
equation of state which describes the thermodynamic properties of the
system. The equation of state is typically written in the form
$p = p\left(e, \rho\right)$, where $e$ is the specific internal
energy, and is related to the total energy through
$E = \rho e + \frac{1}{2}\rho v^2$. The equation of state of a plasma
is complex, since its thermodynamic properties depend strongly on the
degree of ionisation.
To capture this behaviour, a tabulated equation of state for air
plasma is used in this paper. This was developed by Tr{\"a}uble et
al.~\cite{trauble2021improved}, based on the theoretical model of
d'Angola et al.~\cite{d2008thermodynamic}. This considers the 19 most
important species present in an air plasma at temperatures up to
$60,000$~K over a pressure range of $0.01<p<100$~atm.
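Since the tabulated equation of state must be queried at every cell and time step, the lookup amounts to interpolation on a pre-computed grid. The sketch below illustrates a bilinear lookup $p = p(\rho, e)$ on a small synthetic table; the axis ranges and table values are placeholders (an ideal-gas-like surface), not the d'Angola air-plasma data used in this work.

```python
import numpy as np

# Sketch of a tabulated equation-of-state lookup, p = p(rho, e), using
# bilinear interpolation.  The table below is a synthetic placeholder,
# not the d'Angola air-plasma data used in the paper.
rho_axis = np.linspace(1e-3, 1.3, 40)   # kg/m^3 (illustrative range)
e_axis = np.linspace(1e5, 1e8, 60)      # J/kg   (illustrative range)
# Placeholder table: an ideal-gas-like surface p = (gamma - 1) rho e
p_table = 0.4 * rho_axis[:, None] * e_axis[None, :]

def eos_pressure(rho, e):
    """Bilinearly interpolate pressure from the (rho, e) table."""
    i = np.clip(np.searchsorted(rho_axis, rho) - 1, 0, len(rho_axis) - 2)
    j = np.clip(np.searchsorted(e_axis, e) - 1, 0, len(e_axis) - 2)
    tr = (rho - rho_axis[i]) / (rho_axis[i + 1] - rho_axis[i])
    te = (e - e_axis[j]) / (e_axis[j + 1] - e_axis[j])
    return ((1 - tr) * (1 - te) * p_table[i, j]
            + tr * (1 - te) * p_table[i + 1, j]
            + (1 - tr) * te * p_table[i, j + 1]
            + tr * te * p_table[i + 1, j + 1])
```

Because the placeholder surface is itself bilinear in $(\rho, e)$, the interpolant reproduces it exactly; for real plasma data the table resolution controls the interpolation error.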
\subsection{Elastoplastic model}
\label{sec:MultimaterialModelling}
In this work, the material substrate uses an elastoplastic solid
model described by Barton et al.~\cite{Barton20097046} and
Schoch et al.~\cite{schoch2013eulerian, schoch2013propagation}, based
on the formulation by Godunov and
Romenskii~\cite{godunov1972nonstationary}. The plasticity model
follows the work of Miller and Colella~\cite{miller2001high}. The
elastoplastic implementation used in the present work is described in
Michael et al.~\cite{michael2019multi}, therefore only a brief
overview is provided here for completeness.
To account for the material deformation, the elastic deformation
gradient is defined as
\begin{equation}
\label{eq:elas-def-grad}
\mathrm{F}^e_{ij} = \frac{\partial x_i}{\partial X_j}
\end{equation}
which maps a coordinate in the original configuration, $X_i$, to its
evolved coordinate in the deformed configuration, $x_i$. The
deformation gradient, along with the momentum, energy and a scalar material
history parameter, $\kappa$, gives a hyperbolic system of conservation
laws
\begin{equation}
\frac{\partial \rho u_i}{\partial t} + \frac{\partial}{\partial
x_k}\left(\rho u_iu_k - \sigma_{ik}\right) = 0,
\label{eq:solid-form-1}
\end{equation}
\begin{equation}
\frac{\partial \rho E}{\partial t} + \frac{\partial}{\partial
x_k}\left(\rho E u_k - u_i\sigma_{ik}\right) = \eta J_i J_i
\label{eq:solid-form-2}
\end{equation}
\begin{equation}
\frac{\partial \rho \mathrm{F}_{ij}^e}{\partial t} +
\frac{\partial}{\partial x_k}\left(\rho u_k\mathrm{F}_{ij}^e - \rho
u_i\mathrm{F}_{kj}^e\right)
= -u_i\frac{\partial \rho \mathrm{F}_{kj}^e}{\partial x_k} + \mathrm{P}_{ij},
\label{eq:solid-form-2_5}
\end{equation}
\begin{equation}
\frac{\partial \rho \kappa}{\partial t} + \frac{\partial}{\partial
x_i} \left( \rho u_i \kappa \right) = \rho \dot{\kappa},
\label{eq:solid-form-3}
\end{equation}
\begin{equation}
-\nabla^2 A_i = \mu_0 J_i
\label{eq:solid-form-4}
\end{equation}
where $\sigma$ is the stress tensor, given by
\begin{equation}
\label{eq:stress-defn}
\sigma_{ij} = \rho \mathrm{F}^e_{ik} \frac{\partial e}{\partial \mathrm{F}^e_{jk}}
\end{equation}
and $e$ is the specific internal energy. The scalar material history,
$\kappa$, tracks work hardening of the material through plastic
deformation. Source terms associated with the plastic update are
denoted $\mathrm{\bf P}$. The density is given by
\begin{equation}
\label{eq:solid-density}
\rho = \frac{\rho_0}{\mathrm{det}\,{\mathrm{\bf F}}^e}
\end{equation}
where $\rho_0$ is the density of the initial, unstressed material and
the system is coupled with compatibility constraints on the
deformation gradient
\begin{equation}
\label{eq:compat-const}
\frac{\partial \rho \mathrm{F}_{ji}^e}{\partial x_j} = 0
\end{equation}
which ensure deformations of the solid remain physical.
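Equation~(\ref{eq:solid-density}) can be evaluated directly from the deformation gradient. The short sketch below illustrates this, using the aluminium reference density quoted later in the text; the deformation states themselves are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of equation (solid-density): the density of the deformed solid
# follows from the determinant of the elastic deformation gradient.
# rho_0 is the aluminium reference density quoted later in the text; the
# deformation gradients are arbitrary illustrative states.
rho_0 = 2710.0  # kg/m^3, unstressed aluminium

def density_from_deformation(F_e, rho_0):
    """rho = rho_0 / det(F^e)."""
    return rho_0 / np.linalg.det(F_e)

rho_undeformed = density_from_deformation(np.eye(3), rho_0)
rho_compressed = density_from_deformation(np.diag([0.95, 1.0, 1.0]), rho_0)
```

A 5\% uniaxial compression reduces $\mathrm{det}\,\mathrm{\bf F}^e$ below unity and so raises the density above $\rho_0$, as expected.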
\subsubsection{Equations of state for elastoplastic materials}
\label{sec:SubstrateElectrodynamics}
As with the plasma model, in order to describe the thermodynamic
properties of the model, and to close the system of
equations~(\ref{eq:solid-form-1})--(\ref{eq:solid-form-4}), an
equation of state is required. A variety of solid materials are used
in aerospace fasteners, in particular, aluminium, composite materials,
titanium, dielectric coatings and sealants. Additionally,
experimental studies include an electrode, which is typically tungsten
or steel. This is used to generate a plasma arc and evolution
within this electrode does not affect the simulation dynamics (and
damage issues are not an issue for simulation purposes). Therefore
any conductive metal is typically used as a substitute.
Aluminium is typically described through a Romenskii equation of
state~\cite{Barton20105518}, for which the specific internal energy of
the metal is given by
\begin{equation}
\label{eq:romenskii-eos}
\begin{array}{c}
e = \frac{K_0}{2\alpha^2}\left({\mathcal{I}_3^{\alpha/2} - 1}\right)^2 + c_v T_0
\mathcal{I}_3^{\gamma/2} \left({\mathrm{exp}\left({S/c_v}\right)-1}\right) + \\
\frac{B_0}{2}\mathcal{I}_3^{\beta/2}\left(\frac{\mathcal{I}_1^2}{3}-\mathcal{I}_2\right)
\end{array}
\end{equation}
where
\begin{equation}
\label{eq:romesnk-coeffs}
K_0 = c_0^2 - \frac{4}{3}b_0^2, \qquad B_0 = b_0^2
\end{equation}
so that $K_0$ and $B_0$ are the squared bulk sound speed and squared
shear wave speed respectively, where $c_0$ is the sound speed and $b_0$
the shear wave speed. Here $S$ is the entropy, $c_v$ is the heat
capacity at constant volume and $T_0$ is a reference temperature. The
constants $\alpha$, $\beta$ and $\gamma$ are related
to the non-linear dependence of sound speeds on temperature and
density, and must be determined experimentally for each material. The
quantities $\mathcal{I}_K$ are invariants of the Finger strain tensor
$G = F^{-T} F^{-1}$, and are given by
\begin{equation}
\label{eq:finger-invar}
\mathcal{I}_1 = \mathrm{tr}\left(G\right), \quad \mathcal{I}_2 =
\frac{1}{2}\left[\left(\mathrm{tr}\left(G\right)\right)^2 - \mathrm{tr}\left(G^2\right)\right],
\quad \mathcal{I}_3 = \mathrm{det}\left(G\right).
\end{equation}
The entropy is computed from the primitive variables and a reference
density $\rho_0$,
\begin{equation}
\label{eq:romensk-entropy}
S = c_v \mathrm{log} \left({\frac{\frac{p}{\rho} - K_0 \alpha
\left({\frac{\rho}{\rho_0}}\right)^\alpha
\left[{\left({\frac{\rho}{\rho_0}}\right)^\alpha - 1}\right]}{\gamma c_v T_0
\left({\frac{\rho}{\rho_0}}\right)^\gamma} + 1}\right).
\end{equation}
The parameters for aluminium, as used in Barton et al.~\cite{barton2010eulerian}, are given by
\begin{equation}
\label{eq:al1100-params}
\begin{array}{c}
\rho_0=2710 \text{\,kg\,m}^{-3},\,
c_v=900\text{\,J\,kg$^{-1}$\,K$^{-1}$} \\
T_0=300\text{\,K},\,b_0=3160\text{\,m\,s}^{-1} \\
c_0=6220\text{\,m\,s}^{-1},\, \alpha=1 \\
\beta=3.577,\, \gamma=2.088.
\end{array}
\end{equation}
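As a sketch of how equation~(\ref{eq:romenskii-eos}) is evaluated in practice, the Python fragment below computes the specific internal energy from a given elastic deformation gradient using the aluminium parameters above. A useful sanity check is that every term vanishes in the unstressed state ($\mathrm{\bf F}^e = \mathrm{\bf I}$, $S = 0$); the deformed state used here is an arbitrary illustration.

```python
import numpy as np

# Sketch of the Romenskii specific internal energy, equation
# (romenskii-eos), evaluated with the aluminium parameters above.
rho_0, c_v, T_0 = 2710.0, 900.0, 300.0
b_0, c_0 = 3160.0, 6220.0
alpha, beta, gamma = 1.0, 3.577, 2.088
K_0 = c_0**2 - (4.0 / 3.0) * b_0**2
B_0 = b_0**2

def romenskii_energy(F_e, S):
    """Specific internal energy from elastic deformation gradient and entropy."""
    G = np.linalg.inv(F_e).T @ np.linalg.inv(F_e)   # Finger tensor F^-T F^-1
    I1 = np.trace(G)
    I2 = 0.5 * (np.trace(G)**2 - np.trace(G @ G))
    I3 = np.linalg.det(G)
    hydro = K_0 / (2.0 * alpha**2) * (I3**(alpha / 2.0) - 1.0)**2
    thermal = c_v * T_0 * I3**(gamma / 2.0) * (np.exp(S / c_v) - 1.0)
    shear = 0.5 * B_0 * I3**(beta / 2.0) * (I1**2 / 3.0 - I2)
    return hydro + thermal + shear

e_rest = romenskii_energy(np.eye(3), 0.0)                       # zero
e_comp = romenskii_energy(np.diag([0.95, 1.0, 1.0]), 0.0)       # positive
```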
Carbon composites are anisotropic materials, and thus have a more
complex equation of state. These are described in the work of
Lukyanov~\cite{lukyanov2010equation}, though their implementation in
this model is, at present, beyond the scope of this work. Additional
work is required within the elastoplastic model described above to
deal with material anisotropy. Following Millmore and
Nikiforakis~\cite{millmore2019multi}, an isotropic approximation to
CFRP can be made, which is suitable for modelling `with weave' and
`against weave' configurations due to the symmetries of the problem.
This isotropic approximation uses the equation of state as for
aluminium, but with electrical conductivity values that approximate
CFRP.
A Romenskii equation of state is used for modelling titanium
components due to the lack of readily available equations of state
for this material. In the configurations considered in this work,
electromagnetic effects dominate, and thus this approximation does not
have a significant effect on the evolution. The parameters for this
equation of state are
\begin{equation}
\label{eq:titanium-params}
\begin{array}{c}
\rho_0=8030 \text{\,kg\,m}^{-3},\,
c_v=500\text{\,J\,kg$^{-1}$\,K$^{-1}$} \\
T_0=300\text{\,K},\, b_0=3100\text{\,m\,s}^{-1} \\
c_0=5680\text{\,m\,s}^{-1},\, \alpha=0.596 \\
\beta=2.437,\, \gamma=1.563.
\end{array}
\end{equation}
The dielectric coatings on aircraft skins are similarly complex, often
comprising several layers of material and equations of state for these
materials are not openly available. For such coatings used in this
work, the plastic poly(methyl methacrylate) (PMMA) is used, which has
been widely studied due to its use in improvised
explosives~\cite{Christou201248,hamada2004performance}. For PMMA, a
Mie-Gr{\"u}neisen equation of state is employed which directly
relates pressure and density through
\begin{equation}
\label{eq:mie-grun-eos}
p = \frac{\rho_0 c_0^2}{s\left({1-s\eta}\right)}\left({\frac{1}{1-s\eta} -
1}\right), \qquad \eta = 1 - \frac{\rho_0}{\rho}
\end{equation}
where $c_0$ and $\rho_0$ again refer to the speed of sound in the
material and the reference density, whilst $s$ is a single
experimentally determined coefficient. These quantities are
given by
\begin{equation}
\label{eq:pmma-mg-values}
\rho_0 = 1180\ \mathrm{kgm}^{-3}, \quad c_0 = 2260\
\mathrm{ms}^{-1}, \quad s = 1.82.
\end{equation}
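A minimal evaluation of equation~(\ref{eq:mie-grun-eos}) with these PMMA parameters is sketched below; the 5\% compression state is an arbitrary illustration.

```python
# Sketch of the Mie-Gruneisen relation, equation (mie-grun-eos), with the
# PMMA parameters quoted above.
rho_0, c_0, s = 1180.0, 2260.0, 1.82

def mie_gruneisen_pressure(rho):
    """Pressure (Pa) as a function of density (kg/m^3) for the PMMA layers."""
    eta = 1.0 - rho_0 / rho
    return rho_0 * c_0**2 / (s * (1.0 - s * eta)) * (1.0 / (1.0 - s * eta) - 1.0)

p_ambient = mie_gruneisen_pressure(rho_0)          # zero at reference density
p_compressed = mie_gruneisen_pressure(1.05 * rho_0)
```

Note that the expression above is algebraically equivalent to the more common shock-Hugoniot form $p = \rho_0 c_0^2 \eta / (1 - s\eta)^2$.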
For all materials, an electrical conductivity is also required for use
in equation~(\ref{eq:solid-form-2}). Over the temperature ranges
observed within the substrate, this can be considered constant for all
three materials used in this work. The electrical
conductivity employed for aluminium is
$\sigma = 3.2\times10^7$~Sm$^{-1}$ (e.g., Tholin et al.~\cite{tholin2015numerical}), for
titanium, $\sigma = 2.38\times 10^6$~Sm$^{-1}$, and for PMMA,
$\sigma = 2.6\times10^{-5}$~Sm$^{-1}$. The electrical conductivity of PMMA is several orders of magnitude lower than that of the other materials, so these layers effectively behave as dielectrics. The carbon
composite is modelled using the isotropic approximation of Millmore
and Nikiforakis~\cite{millmore2019multi} where, unless otherwise
stated, a bulk electrical conductivity of $4.1\times 10^4$~Sm$^{-1}$
is used.
\subsection{Numerical approach}
\label{sec:NumericalMethod}
The coupled multi-physics system requires the solution for the plasma,
equations~(\ref{eq:mhdcont})-(\ref{eq:mhdene}), and for the
elastoplastic solid components,
equations~(\ref{eq:solid-form-1})-(\ref{eq:solid-form-3}), each of
which is a hyperbolic system of partial differential equations. In
addition, the elliptic equations for the magnetic vector potential and
the electric potential, equations~(\ref{eq:mhdmag}) and
(\ref{eq:current-cont}), must be solved across all materials. The
hyperbolic equations are solved using a finite volume methodology; in
this case, a second order, slope limited centred method is employed
for solving the discrete form of the equations and updating each
material independently.
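The class of scheme used here can be illustrated with a minimal example. The sketch below is not the centred scheme of this paper, but a MUSCL-Hancock-style upwind variant from the same family of second-order, slope-limited finite volume methods, applied to scalar linear advection with a minmod limiter and periodic boundaries; the flux form guarantees discrete conservation.

```python
import numpy as np

# Minimal 1D finite-volume sketch of a second-order slope-limited scheme:
# a MUSCL-Hancock-style update for scalar advection q_t + a q_x = 0 with a
# minmod limiter and periodic boundaries.  This illustrates the class of
# method only; it is not the multi-material solver used in the paper.
def minmod(a, b):
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def advect(q, a, dx, dt, steps):
    nu = a * dt / dx                 # CFL number (assumes a > 0, nu <= 1)
    for _ in range(steps):
        # Limited slope in each cell
        dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
        # Upwind-biased face value at each cell's right edge, evolved by a
        # half time step (Hancock predictor)
        q_face = q + 0.5 * (1.0 - nu) * dq
        flux = a * q_face            # flux through each cell's right face
        q = q - (dt / dx) * (flux - np.roll(flux, 1))
    return q

n, a = 200, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
q0 = np.exp(-200.0 * (x - 0.3)**2)   # smooth initial profile
dt = 0.5 * dx / a
q1 = advect(q0, a, dx, dt, steps=100)
```

Because the update is written in conservative flux form, the total of $q$ is preserved to machine precision, and the limiter prevents the creation of new extrema.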
Information is passed across material boundaries using a ghost fluid
method, with level set methods (one for each material) being used to
track the location of the interfaces between each material. Dynamic
boundary conditions are applied at the interfaces using the Riemann
ghost fluid method of Sambasivan and
Udaykumar~\cite{sambasivan2009ghost1}, which provides these conditions
through a mixed-material Riemann solver. The projection method of
Losasso et al.~\cite{losasso2006multiple} is used to correct the level
set for non-unique level-set values caused by potential numerical
approximation errors. A comprehensive overview of the multi-material
approach used in this work is given by Michael et
al.~\cite{michael2019multi}.
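As an illustration of the level set description of interfaces, the sketch below builds a signed-distance function for a circular material region on a Cartesian grid and recovers interface normals from its gradient; the geometry is arbitrary and unrelated to the fastener components.

```python
import numpy as np

# Sketch of the level-set interface description: a signed-distance function
# for a circular material region, with interface normals obtained from the
# level-set gradient.  The geometry is purely illustrative.
n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r0 = 0.4
phi = np.sqrt(X**2 + Y**2) - r0       # phi < 0 inside the material region

# Outward interface normal n = grad(phi) / |grad(phi)|; for a true signed
# distance function, |grad(phi)| = 1 everywhere away from the centre.
gx, gy = np.gradient(phi, x, x)
grad_mag = np.sqrt(gx**2 + gy**2)
```

The zero contour of $\phi$ marks the material interface, and the unit-gradient property makes extrapolation of ghost states across the interface straightforward.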
The elliptic equations are solved by coupling the system to a finite
element solver, in this case, the DOLFIN package for the
FEniCS framework~\cite{langtangen2016solving} is used.
\section{Experimental comparison of the numerical model}
\label{sec:FastenerModelling}
In this section the multi-physics methodology is applied to an
aerospace fastener configuration used in experimental testing by
Evans~\cite{evans2018thesisCharacterisation}. This can validate the
suitability of the approach outlined in this paper for modelling high
current flow through a complex fastener geometry, comprising a number
of materials with different electrical and thermal properties in
contact. Specifically, a high conductivity titanium fastener
surrounded by successive horizontal layers of carbon fibre and
dielectric is modelled. The methodology used in this paper has
previously been validated against experimental results for lightning
strikes on thin aluminium and carbon composite panels by Millmore and
Nikiforakis~\cite{millmore2019multi}. The experiment of
Evans~\cite{evans2018thesisCharacterisation}, used in this section,
investigates the electrical resistance and current distribution in a
countersunk fastener assembly under attachment of an arc with a
50~kA peak current, an experimental input which is representative of
a lightning strike. The countersunk head fastener design is typically
used in practice to maintain the smooth profile of the external
aircraft skin. An interference fit of the fastener with the
surrounding CFRP and Glass Fibre Reinforced Polymer (GFRP) layers is
used in the experiment and is replicated in this simulation. An
axisymmetric simulation models the physical experiment in which a
cylindrical fastener is surrounded by a square carbon fibre
panel. Return fasteners are equally spaced along the outer edge of the
panel in the experiment, justifying the axisymmetric approach. A
two-dimensional cross-section of the simulation set-up is given in
figure~\ref{fig:validation-evans-component-diagram}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{ValidationDiagramObjectsC}
\caption{Fastener simulation layout. (a) Physical components, A:
Electrode, B: Pre-heated arc, C: Fastener shank, D: Fastener
collar, E: Dielectric layer, F: CFRP {panel}, G: Dielectric
layer, H: CFRP {panel}, I: Dielectric layer, J: Air. Illustration
shows a cross-section of the axisymmetric simulation. The
computational domain extends $\mathrm{L_{R}}$=50.8~mm and
$\mathrm{L_{Z}}$=62~mm in the radial and axial directions
respectively. (b) Simulation boundary conditions. Boundary
conditions (i-xi) given in
Table~\ref{table:Validation-evans-boundary-table}.}
\label{fig:validation-evans-component-diagram}
\end{figure}
Figure~\ref{fig:validation-evans-component-diagram}\,(a) identifies
individual fastener material components using the labels C-I. In
addition, the electrode and pre-heated arc are identified using labels
A and B for reference and the ambient air below the fastener assembly
is labelled J. The radial, $\mathrm{L_{R}}$, and axial,
$\mathrm{L_{Z}}$, lengths of the computational domain are also shown,
where in this work $\mathrm{L_{R}}$=50.8~mm and
$\mathrm{L_{Z}}$=62~mm. The countersunk titanium fastener, labelled C
in figure~\ref{fig:validation-evans-component-diagram}\,(a), has a
shank diameter of 6.35~mm. The titanium retaining nut, labelled D, has
an outer diameter of 12.4~mm. In the experiment of Evans, a high
voltage electrode is placed in contact with the fastener head. To
replicate this in the present work, in which an electrode is placed
40~mm above the fastener head, a thin dielectric layer (labelled E) is
positioned between the arc and the CFRP panel. This dielectric layer,
which is 0.6~mm thick, ensures that the radially expanding arc
maintains contact only with the fastener head throughout the
simulation. The upper panel, labelled F, is 2.032~mm thick and is
directly in contact with the fastener head. To electrically isolate
the upper panel from the fastener shank and from the lower panel, a
further 0.844~mm thick GFRP dielectric layer, labelled G, is placed
between the two panels. The lower carbon fibre panel, labelled H in
figure~\ref{fig:validation-evans-component-diagram}\,(a) is in
electrical contact with the fastener shank only and is 6.096~mm
thick. To electrically isolate the fastener collar from the lower
panel, a further dielectric layer is used, labelled I, and is 0.5~mm
thick. Above the fastener head, a 4~mm wide plasma arc is initially
defined with a temperature of 8000~K, this is labelled B in
figure~\ref{fig:validation-evans-component-diagram}\,(a). This
pre-heated region is necessary since the breakdown of the air, forming
the initial plasma arc, is not modelled in this framework. This region
is given a sufficient temperature such that a conductive path is
formed between the electrode and the substrate, based on the approach
of Chemartin et al.~\cite{chemartin2011modelling}. Larsson et
al.~\cite{larsson2000lightning} test a number of initial high
temperature (pre-heated) columns up to 20,000~K and conclude that the
values within the preheated region do not affect the overall evolution
of the plasma arc. Due to the lack of available equations of state for
GFRP, these layers are approximated in this work using a PMMA equation
of state. For simulating the isotropic approximation to CFRP used in
this work, an electrical conductivity of $\sigma = 8872$~Sm$^{-1}$ is
used, as defined by Evans~\cite{evans2018thesisCharacterisation}. At
the electrode, a modified `Component A' electrical current waveform,
as defined in ARP 5412B~\cite{ARP5412B}, is applied. For this test,
the input current waveform has a peak current of 50\,kA and is given
by,
\begin{equation}
I\left(t\right) = I_{0}\left(e^{-\alpha t} - e ^{-\beta t} \right)\left(1 - e^{-\gamma t}\right)^{2}
\label{eq:CurrentComponentB}
\end{equation}
where $\alpha$ = 51,000 s$^{-1}$, $\beta$ = 90,000 s$^{-1}$,
$\gamma$ = 5,423,540 s$^{-1}$ {and} $I_{0}$ = 243,000\,A.
Outside of the plasma arc, the unionised gas is initialised using
ambient ideal gas conditions.
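For reference, equation~(\ref{eq:CurrentComponentB}) with these parameters can be evaluated directly; the sketch below samples the waveform and locates its peak (the time grid is an arbitrary sampling choice).

```python
import numpy as np

# The double-exponential current waveform of equation (CurrentComponentB),
# evaluated with the parameters quoted above for this 50 kA test.
alpha, beta, gamma_ = 51_000.0, 90_000.0, 5_423_540.0
I0 = 243_000.0

def waveform(t):
    """Electrical current (A) at time t (s)."""
    return I0 * (np.exp(-alpha * t) - np.exp(-beta * t)) * (1.0 - np.exp(-gamma_ * t))**2

t = np.linspace(0.0, 200.0e-6, 20_001)
I = waveform(t)
peak, t_peak = I.max(), t[np.argmax(I)]
```

Sampling the waveform in this way gives a peak of approximately 50~kA at around 15~$\mu$s, consistent with the stated peak current of the experiment.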
Figure~\ref{fig:validation-evans-component-diagram}\,(b) highlights the
boundary conditions applied in the simulation by labelling eleven
regions of interest, i-xi. Conditions are required for the conserved
variables, $\mathbf{q}$, the electrical potential, $\phi$, as well as the
radial and axial components of the magnetic vector potential, $A_{r}$ and $A_{z}$, as defined in
table~\ref{table:Validation-evans-boundary-table}.
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
Boundary & \textbf{q} & $\phi$ & $A_r$ & $A_z$ \\
\hline
\hline
i & Transmissive &$\frac{\partial \phi}{\partial \eta}=-\frac{1}{\sigma}\frac{I(t)}{\pi r^{2}_{c}}$ & $A_{r}=0$ & $\frac{\partial A_{z}}{\partial \eta} = 0$ \\
\hline
ii & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $A_{r}=0$ & $\frac{\partial A_{z}}{\partial \eta} = 0$ \\
\hline
iii & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
iv & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
v & Transmissive & $\phi=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
vi & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
vii & Transmissive & $\phi=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
viii & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
ix & Transmissive & $\phi=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
x & Transmissive & $\frac{\partial \phi}{\partial \eta}=0$ & $\frac{\partial A_{r}}{\partial \eta} = 0$ & $A_{z} = 0$ \\
\hline
xi & Symmetry & $\frac{\partial \phi}{\partial \eta}=0$ & $A_{r}=0$ & $\frac{\partial A_{z}}{\partial \eta} = 0$ \\
\hline
\end{tabular}
\caption{Simulation boundary conditions for the conserved
variables, $\textbf{q}$, electric potential,
$\phi$, the radial and axial components of magnetic potential,
$A_{r}$ and $A_{z}$. Boundary indices in the first column
correspond to
figure~\ref{fig:validation-evans-component-diagram}(b).}\label{table:Validation-evans-boundary-table}
\end{center}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{FastenerJdensity25_withLeg}
\caption{Fastener current density and plasma temperature at a time
of 30 $\mu$s. The plasma region has been
truncated in this figure to allow greater detail of the fastener
to be shown.}
\label{fig:validation-evans-snapshot25}
\end{figure}
Figure~\ref{fig:validation-evans-snapshot25} shows a snapshot of this
test case at a time of 30~$\mu$s. This plot shows current density
within the composite substrate materials and the temperature within
the plasma arc; the dielectric layers are shown in red. The regions of
highest current density are highlighted as being at the attachment
point at the top of the countersunk fastener bolt head, at the outer
radial tip of the countersunk head and along the upper panel. It is
clear that the preferred path for current flow is through the upper
panel, close to the surface. A localised peak in current density
magnitude is visible at the upper radial edge of the countersunk
fastener head, at the interface with the upper carbon composite
panel. The difference in electrical conductivity between the titanium
fastener and the low conductivity panels results in a preference for
the current path to remain in the higher conductivity fastener for as
long as possible. The shape of the countersunk fastener head outer
edge acts to promote a gradient in current density magnitude between
the top and bottom of the upper carbon composite panel, at the
interface with the fastener. Together with the small axial cross
sectional area of the fastener at the tip of the countersunk head,
this results in a local peak in current density magnitude, and could
cause issues with high localised temperature and pressure through
Joule heating. The experimental post-test specimen in Kirchdoerfer
et al.~\cite{kirchdoerfer2017} shows the greatest arc-induced erosion
of the CFRP panel at this location in their countersunk fastener case
using a 40~kA peak current waveform. At the bottom of the fastener in
figure~\ref{fig:validation-evans-snapshot25}, the electrical isolation
of the fastener nut by the bottom dielectric layer results in
extremely low current density magnitude values in the fastener shank
and nut at this location.
In the experiment of Evans~\cite{evans2018thesisCharacterisation}, the
current flow through the upper and lower composite panels is measured
independently using alternating cut-out sections at the outer edge of
each panel so that each return fastener contacts only the upper or
lower panel. The relative proportion of total current passing through
the upper and lower panels is then measured over time. The electrical
current share is computed from the simulation results by integrating
current density along a line normal to the current flow
direction. This line is taken at a radius of 12~mm, at which point the
current density streamlines are effectively purely radial. The
measurements of Evans show that for an interference fit fastener, over
three quarters of the total current passes through the upper panel,
identifying this route as the preferred current
path. Figure~\ref{fig:validation-evans-current-share} compares the
experimental and simulation results for the relative share of total
current passing through the upper and lower composite panels over the
first 40~$\mu$s.
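The current-share diagnostic described above amounts to a line integral of the radial current density at fixed radius. The sketch below illustrates this computation; the panel thicknesses match those quoted earlier, but the $J_r$ profiles are synthetic placeholders invented for illustration, not simulation output.

```python
import numpy as np

# Sketch of the current-share diagnostic: integrate the radial current
# density over each panel thickness at a fixed radius,
# I = 2*pi*r * integral of J_r dz.  The J_r profiles are synthetic
# placeholders, not simulation output; panel thicknesses follow the text.
r_line = 12.0e-3                              # sampling radius (m)
z_upper = np.linspace(0.0, 2.032e-3, 100)     # upper panel thickness (m)
z_lower = np.linspace(0.0, 6.096e-3, 300)     # lower panel thickness (m)
J_upper = np.full_like(z_upper, 8.0e6)        # A/m^2, placeholder profile
J_lower = np.full_like(z_lower, 0.9e6)        # A/m^2, placeholder profile

def line_current(J, z, r):
    """2*pi*r times the trapezoidal integral of J over z."""
    return 2.0 * np.pi * r * np.sum(0.5 * (J[1:] + J[:-1]) * np.diff(z))

I_upper = line_current(J_upper, z_upper, r_line)
I_lower = line_current(J_lower, z_lower, r_line)
share_upper = I_upper / (I_upper + I_lower)
```

With these placeholder values the upper-panel share is roughly three quarters, of the same order as the measured split, though the real diagnostic uses the simulated current density field.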
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{EvansFastenerShare_FinalE}
\caption{Percentage of current passing through the upper CFRP
panel via the countersunk head (solid lines)
and through the lower CFRP panel via the fastener
shank (dash-dot line). Numerical results are shown
with red lines, and the experimental results of
Evans~\cite{evans2018thesisCharacterisation} with black lines.}
\label{fig:validation-evans-current-share}
\end{figure}
Comparing the simulation and the experiment in
figure~\ref{fig:validation-evans-current-share}, there is a similar
distribution of current between the upper and lower panels in both
cases. The significantly higher proportion of current flowing through
the upper panel reflects the large area of contact between the upper
panel and the fastener head, as well as the preference for current to
travel via the least resistive route to ground. The percentage of
current passing through the upper panel is slightly greater in the
simulation than in the measurements of
Evans~\cite{evans2018thesisCharacterisation}. This may in part be due
to the difference in current input at the top of the fastener, with
the simulation using an attached arc, rather than a point source. The
radial expansion of the arc increases the contact area between the
simulated arc and fastener head. Once the arc radius has increased
past the radius of the fastener shank, a significant proportion of the
current density input is close to the outer radius of the fastener
head, resulting in a greater current flow across the short distance
between the top of the fastener head and the upper panel. However, the
close match in behaviour over time does confirm that this
computational approach is capable of simulating the transient current
flow in a complex fastener geometry comprising a number of distinct
layers of materials with differing electrical and thermal properties.
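The roughly three-to-one split can be rationalised as a simple parallel-resistance current divider, since the two panels form parallel paths to ground. The sketch below is a back-of-envelope illustration only: the effective path resistances are assumed values chosen to reproduce the observed share, not quantities extracted from the simulation.

```python
# Illustrative current-divider estimate of the share of total current
# carried by two parallel paths to ground. The resistance values are
# assumed for demonstration; the simulation resolves the full geometry.

def current_share(r_upper, r_lower):
    """Fraction of the total current through each of two parallel paths."""
    g_upper, g_lower = 1.0 / r_upper, 1.0 / r_lower
    total = g_upper + g_lower
    return g_upper / total, g_lower / total

# Assumed effective resistances (ohms): the upper-panel route via the
# fastener head is taken to be three times less resistive.
upper, lower = current_share(r_upper=1.0e-3, r_lower=3.0e-3)
print(f"upper panel: {upper:.0%}, lower panel: {lower:.0%}")
```

With the assumed 1:3 resistance ratio the divider reproduces a 75/25 split, consistent with the measurements of Evans; the coupled simulation obtains this behaviour directly from the geometry and material conductivities.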
\section{Fastener design sensitivity studies}
\label{sec:SensitivityStudies}
This section considers the influence of fastener design choices on the
distribution of current flow and associated pressure and temperature
increases in the fastener assembly. In
section~\ref{sec:FastenerGeometry} the location of dielectric layers
in the fastener assembly is considered. Dielectric layers can be
included in fastener assemblies through judicious design to control
current flow away from sensitive components, or as a result of
component sealant and resin use. The inclusion of a clearance gap
between the fastener shank and the panel is also considered in this
section, by comparison with an interference fit type
fastener. Section~\ref{sec:FastenerPreHeating} develops the clearance
fit gap analysis by considering the necessity in the present numerical
modelling method for pre-heating of the clearance fit gap to promote
an electrically conductive path across the gap. The effect of
clearance gap width is then considered in
section~\ref{sec:FastenerShankWidth}, both with and without
pre-heating. All simulations in this section use the current
described by equation~(\ref{eq:CurrentComponentB}) with parameters as
defined in ARP 5412B~\cite{ARP5412B}, $\alpha$ = 11,354 s$^{-1}$,
$\beta$ = 647,265 s$^{-1}$, $\gamma$ = 5,423,540 s$^{-1}$ {and}
$I_{0}$ = 43,762\,A.
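As a rough numerical check on these parameters, the sketch below evaluates a double-exponential waveform with a squared rise factor, $I(t) = I_{0}\,(e^{-\alpha t} - e^{-\beta t})(1 - e^{-\gamma t})^{2}$, a form commonly used for ARP 5412 current components. Whether equation~(\ref{eq:CurrentComponentB}) takes exactly this form is an assumption made for this illustration.

```python
import math

# Hedged sketch: evaluate an assumed ARP 5412-style waveform
#   I(t) = I0 * (exp(-a t) - exp(-b t)) * (1 - exp(-g t))**2
# with the parameters quoted in the text. The exact functional form of
# equation (CurrentComponentB) is an assumption for this illustration.

ALPHA, BETA, GAMMA = 11_354.0, 647_265.0, 5_423_540.0  # s^-1
I0 = 43_762.0                                          # A

def current(t):
    """Assumed input current (A) at time t (s)."""
    return I0 * (math.exp(-ALPHA * t) - math.exp(-BETA * t)) \
              * (1.0 - math.exp(-GAMMA * t)) ** 2

# Sample the first 100 us to locate the approximate peak.
times = [i * 1.0e-8 for i in range(10_000)]
i_peak, t_peak = max((current(t), t) for t in times)
print(f"peak ~{i_peak / 1e3:.1f} kA at ~{t_peak * 1e6:.1f} us")
```

Under this assumed form the peak is roughly 40~kA a few microseconds after attachment, consistent with the transient rise and fall described in the following sections.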
\subsection{The effect of dielectric layers and fastener clearance fitting}
\label{sec:FastenerGeometry}
The multi-physics methodology outlined in
section~\ref{sec:MathematicalModel} allows not only the mechanical,
thermodynamic and electrodynamic evolution of an aerospace fastener
subject to lightning attachment to be captured, but also for the
sensitivity of these properties to changes in fastener design to be
assessed.
Figure~\ref{fig:Case7and8InitialAnnotated} shows an example of two
different idealised aerospace fastener configurations. Both fasteners
comprise a titanium nut and bolt and a single carbon composite
panel. An electrode is placed 40~mm above the substrate surface, and a
pre-heated region exists between this and the substrate.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{FastenerCase7and8Initial_AnnotatedWithZoom}
\caption{Annotated schematic of two different fastener
geometries. (a) A dielectric layer above the panel prevents
direct contact between the panel and the arc and an interference
fit exists between the fastener and the panel. (b) Dielectric
layers are placed on both the top and bottom of the panel,
preventing direct contact between the fastener nut and the
underside of the panel, and a clearance fit exists between the
fastener and the panel. The insert shows an enlarged section
highlighting the clearance gap.}
\label{fig:Case7and8InitialAnnotated}
\end{figure}
The first fastener design, shown in
figure~\ref{fig:Case7and8InitialAnnotated}\,(a), has a dielectric
coating layer on the upper face of the carbon composite panel. This
layer, which is 0.5~mm thick, represents, for example, a painted
surface or other coating. The titanium bolt is 6~mm in diameter and is
in direct contact with a composite panel of thickness 12~mm,
representing an interference-type fit. A titanium nut, of diameter
12~mm and thickness 2~mm, is in direct contact with the lower face
of the carbon composite panel. The second idealised fastener design,
figure~\ref{fig:Case7and8InitialAnnotated}\,(b) has dielectric layers at
both upper and lower surfaces of the carbon composite panel, each
of thickness 0.5~mm. A clearance fit air gap between the titanium bolt
and the surrounding carbon composite panel of width 0.3~mm is defined
in the second fastener design. This is shown as an enlarged region in
figure~\ref{fig:Case7and8InitialAnnotated}\,(b). The only area of direct
contact between the fastener bolt and the carbon composite panel is a
2.5~mm length directly below the top dielectric layer.
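The disparity in bolt-panel electrical contact between the two designs can be quantified directly from the stated dimensions. The sketch below treats each contact region as the lateral surface of a plain cylinder (the thread is neglected, as in the simulations); the resulting factor of roughly five in contact area underlies the concentration of current density seen for the clearance-fit case.

```python
import math

# Hedged sketch: compare the bolt-panel contact areas implied by the two
# idealised designs. Dimensions follow the text; treating the shank-panel
# contact as a plain cylinder (no thread) is an assumption.

D_SHANK = 6.0e-3           # bolt shank diameter (m)
PANEL_THICKNESS = 12.0e-3  # full-thickness contact, design (a) (m)
CONTACT_LENGTH_B = 2.5e-3  # contact strip below dielectric, design (b) (m)

def lateral_area(diameter, length):
    """Lateral surface area of a cylindrical contact region (m^2)."""
    return math.pi * diameter * length

area_a = lateral_area(D_SHANK, PANEL_THICKNESS)   # interference fit
area_b = lateral_area(D_SHANK, CONTACT_LENGTH_B)  # clearance fit
print(f"design (a): {area_a * 1e6:.0f} mm^2, design (b): {area_b * 1e6:.0f} mm^2")
print(f"ratio: {area_a / area_b:.1f}x")
```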
In both fastener designs, a pin hole punctures the radial centre of
the upper dielectric layer. This pin hole is a technique frequently
used in lightning experiments, and is used in this case to initialise a
conductive path between the plasma arc and the titanium fastener. In
practice, this pin hole may crudely represent a small region of
dielectric ablation during the initial attachment of the lightning
strike. The diameter of the hole at the centre of the upper dielectric
layer is reduced from 6~mm to 3~mm in the latter configuration. The
final difference between the two configurations is the introduction of
a narrow strip of non-ionised air between the fastener shank and the
panel. In order to reduce the computational resolution overhead in
these idealised fastener designs, the thread between the bolt and nut
is neglected.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{FastenerCase7and8_Streamlines_WithLegend}
  \caption{Temperature profile within the plasma arc, and current
    density magnitude within the substrate, after 20~$\mu$s for the
    two fastener designs shown in
    figure~\ref{fig:Case7and8InitialAnnotated}. Current density
    streamlines are also shown in the substrate, as white lines. In
    the case of direct contact between the titanium nut and the
    substrate in (a), despite the interference fit between
    the fastener and the substrate, the preferred path for current is
    through the nut. When a second dielectric layer is introduced,
    in (b), the current now flows directly from the fastener to the
    substrate. Although the clearance fit results in the
    majority of the current entering the substrate where it is in
    contact with the fastener, it subsequently spreads out as it
    travels radially outwards.}
\label{fig:Case7and8Streamlines}
\end{figure}
Figure~\ref{fig:Case7and8Streamlines} shows the current density
profiles in the substrate, and temperature in the plasma arc, for the
two fastener configurations shown in
figure~\ref{fig:Case7and8InitialAnnotated} at a time of
20~$\mu$s. Current density streamlines are also shown in the
substrate materials. As expected, for the first fastener configuration
in figure~\ref{fig:Case7and8Streamlines}\,(a), the main electrical
current path is along the titanium fastener bolt, through the titanium
nut and into the lower face of the carbon composite panel to the
ground location at the outer radius of the panel. The second fastener
configuration, figure~\ref{fig:Case7and8Streamlines}\,(b), is
electrically insulated between the titanium nut and the panel, and due
to the clearance fit used in this case, the preferred path for current
is through the small region of contact between the fastener and the
panel.
This results in high current density close to the top of the fastener,
seen in figure~\ref{fig:Case7and8Streamlines}\,(b), and subsequent
current flow into the substrate at this point. It is clear from the
current density streamlines that the subsequent flow through the panel
is then spread over a wider area than in the case where the lower dielectric layer
is not used.
The reduction in the hole diameter of the upper dielectric and changes
in current density magnitude also influence the shape of the arc above
the fastener. A local radial restriction, or `pinching', of the plasma
arc is evident in figure~\ref{fig:Case7and8Streamlines}\,(b),
immediately above the attachment point; this pinching is not evident
to the same extent in figure~\ref{fig:Case7and8Streamlines}\,(a). The
ability of the present numerical method to identify the dependence of
arc characteristics on the configuration, material choice and layering
of the substrate highlights the advantage of a fully coupled
system. This coupling was demonstrated and validated against
experimental results in Millmore and
Nikiforakis~\cite{millmore2019multi}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{FastenerCase7_J_Times1to4_B}
\caption{ Evolution of current density magnitude in the substrate
and temperature in the plasma arc over the first 40~$\mu$s
following plasma arc attachment for the fastener configuration
illustrated in
figure~\ref{fig:Case7and8InitialAnnotated}\,(a). Times shown are
(a) 5~$\mu$s, (b) 10~$\mu$s, (c) 20~$\mu$s and (d) 40~$\mu$s.
The radial expansion of the arc is clearly visible, with some
pinching effects at later times. The reduction in current
density magnitude over time within the fastener follows the input
current profile at the electrode, and the higher values
    throughout the fastener, compared to the panel, show this to be
    the preferred path for current flow.}
\label{fig:Case7_Snapshots}
\end{figure}
The thermal, mechanical and electrodynamic development of the first
fastener configuration over the first 40~$\mu$s is shown in
figure~\ref{fig:Case7_Snapshots}, with snapshots shown at 5~$\mu$s,
10~$\mu$s, 20~$\mu$s and 40~$\mu$s. The increase in the radial extent
of the plasma arc is evident, as is the rise and fall in current
density in the fastener over time as the current input in the
electrode rises and falls according to
equation~(\ref{eq:CurrentComponentB}). Figure~\ref{fig:Case7_Snapshots}\,(c)
corresponds with the current density streamlines shown in
figure~\ref{fig:Case7and8Streamlines}\,(a).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{FastenerCase8_J_Times1to4_B}
\caption{ Evolution of current density magnitude in the substrate
and temperature in the plasma arc over the first 40~$\mu$s
following plasma arc attachment for the fastener configuration
illustrated in
figure~\ref{fig:Case7and8InitialAnnotated}\,(b). Times shown are
(a) 5~$\mu$s, (b) 10~$\mu$s, (c) 20~$\mu$s and (d) 40~$\mu$s. The
    pinching of the plasma arc is much more visible in this case
    than in figure~\ref{fig:Case7_Snapshots}, and is apparent from
    10~$\mu$s onwards. Due to the small contact region between the
fastener and the panel, the current density magnitude remains
high at the attachment point.}
\label{fig:Case8_Snapshots}
\end{figure}
The thermal, mechanical and electrodynamic development of the second
fastener configuration over the first 40~$\mu$s is shown in
figure~\ref{fig:Case8_Snapshots}, at times directly comparable to
figure~\ref{fig:Case7_Snapshots}. In this case,
figure~\ref{fig:Case8_Snapshots}\,(c) corresponds with the current
density streamline plot shown in
figure~\ref{fig:Case7and8Streamlines}\,(b).
The increase in arc radius over time is again evident, as is the
difference in arc shape between the two fastener configurations. The
current density at the region of direct contact between the fastener
bolt and the carbon composite panel remains high throughout the
evolution, and this then leads to higher local pressures at the
interface between the fastener and panel through increased Joule
heating. The interface between the high conductivity titanium fastener
and the lower conductivity panel is one area of particular concern in
fastener design to reduce the possibility of local breakdown in
material integrity. Such behaviour is more likely with the higher
current density magnitude over a limited spatial distribution around
the upper region of the fastener bolt in the case shown in
figure~\ref{fig:Case8_Snapshots}, in comparison to
figure~\ref{fig:Case7_Snapshots}. As this is associated with
higher levels of Joule heating and increased pressure rise at the
top of the bolt, figures~\ref{fig:Case7_Snapshots} and~\ref{fig:Case8_Snapshots} highlight the advantage of fastener
designs that maximise the physical contact area of conductive materials across the fastener-panel interface. This approach is evident in the metallic
sleeve designs of Mulazimoglu \& Haylock~\cite{mulazimoglu2011recent}.
The analysis in this section is extended in the next section to assess how changes in
fastener design can affect the temperature and pressure rise in a gap
between the fastener and carbon composite panels. Excessive pressure
rise in internal fastener gaps is a potential cause of unwanted
outgassing events and electrical sparking in
fasteners~\cite{kirchdoerfer2017,kirchdoerfer2018cth,teulet2017energy}.
\subsection{The effect of clearance gap pre-heating}
\label{sec:FastenerPreHeating}
In practice, a fastener assembly may contain internal gaps between,
for example, the bolt and the composite panels, or between the bolt
and nut. Under the high current input conditions of a
lightning strike, ionisation of the gas within the internal voids
may occur. Ionisation of the trapped gas can establish an electrically
conductive path across the void, which can lead to changes in the
current distribution of the assembly and significant increases in the gap
pressure. In this section, a method for modelling the
development of a conductive path across internal voids is
considered and a pre-heating approach to initialise this is investigated.
Building on the initial application of the multi-physics methodology,
which enabled assessment of changing thermal and mechanical behaviour
from variations in fastener design and material choice, we now extend
this analysis to the internal gap between the fastener bolt, nut and
adjacent panel.
To enable this extension, a modification is made
to the fastener design in the previous section to include a bolt
head. It is recognised that in practical fastener designs, a
counter-sunk fastener head is often used to enable a flush panel
surface and to maximise direct contact with the surrounding
substrate. The idealised fastener chosen as an exercise in this
section has, however, been designed to limit the direct contact
between the fastener bolt and the panel to a region around the bolt
head. This is expected to result in a localised region of high current
density, with an associated rise in the pressure and temperature
through Joule heating. This would therefore make a poor aerospace
fastener design in reality but satisfactorily serves here as an
idealised model to investigate numerically establishing a conductive
path across internal voids. The titanium bolt has a clearance fit with
the surrounding panel. The gap is initially assumed
to contain air at ambient conditions. In later simulations in this
section, this assumption is relaxed by assuming that electrical
breakdown has already occurred.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{BoeingFastener_DesignB_initialA}
\caption{ Schematic of an idealised fastener geometry
for investigating the behaviour of air gaps between the fastener
and substrate panels. Two carbon composite panels are used, and
the fastener comprises a bolt head, which is in direct contact
with the upper panel, and a shank with a clearance fit between
it and the panels. The fastener collar, however, is in contact
with the lower panel, and a dielectric coating is considered, as
in previous tests, on top of the upper panel.}
\label{fig:BoeingFastener_InitialAnnotated}
\end{figure}
The idealised fastener design used in this section is shown in
figure~\ref{fig:BoeingFastener_InitialAnnotated}. The fastener head
is located above two low conductivity panels, sitting flush
with the upper surface of a dielectric coating. The titanium fastener
has a shank diameter of 6~mm, and this screws into a securing titanium
nut (collar) with an outer diameter of 12~mm. Again, in this
simplified example the fastener thread detail is not considered. The
titanium collar rests on the underside of the lower carbon composite
panel, establishing a direct electrical contact between the collar and
the lower panel. There is a narrow gap between the fastener and the
collar, highlighted as a blue region in
figure~\ref{fig:BoeingFastener_InitialAnnotated}. The width of this
gap (0.38~mm) is larger than the gap that exists between the fastener
bolt and the carbon composite panel (0.1~mm). The carbon composite
substrate is split into two narrower panels, each of thickness
1.7~mm.
The evolution of this system is governed not only by the
electrical conductivity of the various components, but also by the
contact resistance between them. A contact resistance of 1~m$\Omega$
is defined between the two carbon composite panels and a further
contact resistance region of 1~m$\Omega$ is defined between the lower
panel and the titanium collar. The contact resistance is taken as a
typical value from the literature, as used by, for example, Chemartin
et al.~\cite{chemartin2013modeling}. In this work it is assumed to be
constant. Mastrolembo~\cite{mastrolembo2017understanding}, however,
discusses the change in contact resistance that can occur with variations in mechanical loading.
In the present fastener configuration, this mechanical loading would relate to the tightening torque of the fastener.
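The significance of a 1~m$\Omega$ contact layer can be gauged from the instantaneous Joule dissipation $P = I^{2}R$. The sketch below uses an assumed representative peak current of 40~kA; the actual current through any one interface is a simulation output, and will be lower where the current divides between paths.

```python
# Hedged sketch: instantaneous Joule dissipation in a contact-resistance
# layer, P = I^2 * R. The 1 mOhm value follows the text; the 40 kA
# current is an assumed representative peak, not a simulation output.

R_CONTACT = 1.0e-3  # ohm, as quoted in the text
I_PEAK = 40.0e3     # A, assumed representative peak current

def joule_power(current, resistance):
    """Instantaneous dissipated power (W) in the contact layer."""
    return current ** 2 * resistance

p = joule_power(I_PEAK, R_CONTACT)
print(f"peak dissipation ~{p / 1e6:.1f} MW")  # concentrated at the interface
```

Even a milliohm-scale contact resistance therefore dissipates megawatt-scale power at peak current, which is why the contact regions are candidate sites for local temperature and pressure rise.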
The transient current waveform given by
equation~(\ref{eq:CurrentComponentB}) is applied along the upper domain
boundary, and a narrow pre-heated region is imposed at the radial
centre of the domain. This pre-heated region is at a temperature
(8000~K) sufficient for partial ionisation of the gas. In this first
simulation, the
upper and lower fastener gaps are under atmospheric conditions, i.e.\
there is no initial ionised material present. This is intended to
contrast with later simulations in which a pre-heated region is
present close to the top of the gap at the start of the computation.
The temperature profiles for the plasma and substrate over the
first 40~$\mu$s in this configuration are shown in
figure~\ref{fig:BoeingFastener_Snapshots}. A high temperature is
evident in the low conductivity panels, in contrast to the highly
conductive titanium fastener bolt and collar which show little sign of
heating. A rise in temperature occurs in both upper and lower panels,
highlighting two main current paths; through the fastener bolt head
and through direct contact between the collar and lower panel.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{BoeingFastener_DesignA_T_Times1to4A}
\caption{ Temperature evolution over the first 40~$\mu$s after arc
attachment for the configuration shown in
figure~\ref{fig:BoeingFastener_InitialAnnotated}. Times shown
are: (a) 5~$\mu$s, (b) 10~$\mu$s, (c) 20~$\mu$s and (d)
    40~$\mu$s. It is clear that there is current flow through both
    carbon composite panels; both show a strong temperature rise,
    with the lower panel reaching a slightly higher temperature than
    the upper panel.}
\label{fig:BoeingFastener_Snapshots}
\end{figure}
Plotting the current density magnitude and streamlines, as shown in
figure~\ref{fig:BoeingFastener_Streamlines}, highlights these two
primary current pathways through the fastener geometry. In this
preliminary study, only the lower panel is grounded; the current in
the upper panel passes to the lower panel at the radial extent of the
lower panel. This is visible as a small region of higher current
density magnitude at the outer radial edge of the inter-panel region.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{BoeingFastener_DesignA_Streamlines}
\caption{ Current density magnitude and streamlines (white lines)
within the substrate materials for the fastener configuration
shown in figure~\ref{fig:BoeingFastener_InitialAnnotated} after
10~$\mu$s. It is clear that current passes through both panels
of carbon composite, though due to the grounding of the lower
panel, the current density magnitude is greater here. The rise
in current density at the outer edges of the panels is due to
current flow from the upper panel to the lower panel, and hence
to the ground site.}
\label{fig:BoeingFastener_Streamlines}
\end{figure}
The local increase in current density at the fastener bolt results in
a local pressure and temperature rise from Joule heating. Through
the multi-material boundary conditions at the interface between the
fastener gap and the surrounding materials, this leads to a rise in
these properties within the gap itself. This is particularly apparent
in the upper-most region of the fastener gap, where conditions are
sufficient for ionisation of the confined air to occur. Once
this happens, the fastener gap becomes a viable path for current
passage between the fastener and the panel.
Although there is a significant increase in temperature of over
5000~K within the fastener gap over the first 40~$\mu$s, the
corresponding rise in plasma pressure in the cavity is markedly below
published experimental measurements for comparable
configurations. For example, pressures of 25\,-\,30~MPa (typical)
and 70~MPa (peak) are reported in Kirchdoerfer et
al.~\cite{kirchdoerfer2017} for a comparable peak input current, or
24~MPa\,-\,33~MPa for the 10~mm$^{3}$ volume sample tested in Teulet et
al.~\cite{teulet2017energy}.
The behaviour within the fastener gap, shown in
figures~\ref{fig:BoeingFastener_Snapshots} and
\ref{fig:BoeingFastener_Streamlines}, assumes that ionisation within
this region results only from mechanical effects at the gap interfaces
raising temperature and pressure. However, electromagnetic effects
can lead to breakdown of the air within this region, and this offers an
alternative mechanism for the generation of plasma. Modelling this
breakdown is a complex issue and is beyond the scope of this work; however,
such behaviour can be approximated by defining a pre-heated region within
the fastener gap. This approach is similar to the method used to initialise the
plasma arc from the electrode. The necessity to approximate the
breakdown of air in this gap is also reported by Kirchdoerfer et
al.~\cite{kirchdoerfer2017}, who artificially augment the energy in the
gap between a fastener, panel and nut. In the work of Kirchdoerfer et
al.~\cite{kirchdoerfer2017}, the energy in the confined gas void is
increased through the detonation of a small charge in the fastener gap
at a time corresponding to the moment when the local electric field
adjacent to the lower panel exceeds the dielectric strength of air,
3~MVm$^{-1}$. This raises the resulting pressure in the gap, and leads
to simulation results which better approximate experimental measurements.
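The field criterion used as a trigger by Kirchdoerfer et al.\ can be illustrated with the uniform-field relation $E = V/d$: breakdown is deemed to occur once the voltage across a gap of width $d$ exceeds $E_{\mathrm{breakdown}}\,d$. The sketch below applies this to the two gap widths defined earlier in this section; assuming a uniform field at these scales is a simplification.

```python
# Hedged sketch: uniform-field breakdown criterion, E = V / d. The
# dielectric strength and gap widths follow the text; uniform-field
# behaviour across such narrow gaps is a simplifying assumption.

E_BREAKDOWN = 3.0e6  # V/m, dielectric strength of air quoted in the text

def breakdown_voltage(gap_width):
    """Voltage (V) at which a uniform-field gap of this width breaks down."""
    return E_BREAKDOWN * gap_width

for name, width in (("shank gap", 0.1e-3), ("collar gap", 0.38e-3)):
    print(f"{name} ({width * 1e3:.2f} mm): ~{breakdown_voltage(width):.0f} V")
```

Only a few hundred volts across the narrow shank gap would satisfy this criterion, which motivates approximating early breakdown with a pre-heated region rather than resolving the breakdown physics directly.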
It is therefore of interest to establish a conductive path by
pre-heating a small section at the top of the gap which may then increase the
pressure in the remaining gap over the course of the
simulation. Understanding the effect of any approximation to
breakdown and the sensitivity this has on the pressure within the gap
is important because high pressure close to the bolt-nut interface is considered
to result in energetic discharge (outgassing) from an interface of this type. An
example configuration with pre-heating within the fastener gap region
is shown in figure~\ref{fig:BoeingFastener_PreHeating}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.3\textwidth]{BoeingGeomLayout_PMMAPlasma0p5VolumeCoverage}
\caption{ An example fastener configuration with a pre-heated upper
region of the fastener gap, visible as a yellow region between
the fastener and the carbon composite panels. All other features
in this configuration are the same as in
figure~\ref{fig:BoeingFastener_InitialAnnotated}. }
\label{fig:BoeingFastener_PreHeating}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{BoeingFastener_SparkVersusNoSpark_BackToBack_Times1to4A}
  \caption{Comparison of the temperature between fastener
configurations with no pre-heating (left-half) and with
pre-heating (right-half) of the upper region of the fastener
gap. Snapshots are shown at times (a) 5~$\mu$s, (b) 10~$\mu$s,
(c) 20~$\mu$s and (d) 40~$\mu$s. It is clear that the pre-heated
region leads to a large temperature rise within the gap, but also
to greater temperatures within the panels.}
\label{fig:BoeingFastener_Snapshots_BacktoBack}
\end{figure}
Figure~\ref{fig:BoeingFastener_Snapshots_BacktoBack} shows a
comparison of the temperature profile for configurations with and without a pre-heated gap.
The temperature evolution for the fastener configuration with pre-heating is shown as
the right-half of each sub-figure, whilst the left-half is the case
without, with results reproduced from
figure~\ref{fig:BoeingFastener_Snapshots}. It is clear that the high
temperature region in the upper section of the fastener gap results
in significant evolution of the material in this gap, filling much of
it with plasma. This evolution slows at later times as it expands
into the larger volume collar gap. Temperatures in this
region are sufficient for a partially ionised plasma to form, hence
electrical conductivity increases to provide a new current pathway
between the fastener and the surrounding carbon composite panels. The
current passage through the plasma in the gap consequently increases
the temperature and pressure.
In addition to the evolution within the fastener gap, there is also an
increase in temperature in the carbon composite panels for the
pre-heated case. This is due to direct energy deposition in
the panels, since ionised material in the fastener gap provides a
highly conductive path for the majority of the current to
flow.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{BoeingFastener_withSpark_StreamlinesA}
\caption{Current density magnitude and streamlines (white lines)
within the substrate materials for the fastener configuration
shown in figure~\ref{fig:BoeingFastener_Snapshots_BacktoBack} after
    10~$\mu$s. The pre-heated region within the fastener gap results
    in current flow throughout the panels, in comparison to
    figure~\ref{fig:BoeingFastener_Streamlines} where the current is
    confined to the top and bottom surfaces of the panels.}
\label{fig:BoeingFastener_Streamlines_withPreheating}
\end{figure}
Figure~\ref{fig:BoeingFastener_Streamlines_withPreheating}
shows the current streamlines flowing through the panels.
This can be compared with the case without pre-heating in figure~\ref{fig:BoeingFastener_Streamlines}, where current
flow is restricted to the top of the upper panel and the bottom of the
lower panel. The electrical conductivity in the plasma gap at this time
is still lower than the electrical conductivity in the titanium
fastener and shank, hence a significant proportion of the current
still passes through the fastener shank, nut and lower side of the
carbon composite panel, and directly between the fastener head and the
upper side of the carbon composite panel. The addition of a viable path
in figure~\ref{fig:BoeingFastener_Streamlines_withPreheating} across
the fastener gap further raises the plasma temperature, pressure and electrical
conductivity, reinforcing this route for current flow.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.38\textwidth]{BoeingGeomLayout_PMMAPlasmaVersion_Points1}
  \caption{Location of points for recording evolution data within
    the fastener gap.}
\label{fig:BoeingFastener_DataPoints}
\end{figure}
The behaviour of the material in the fastener gap can be monitored
over the course of the simulation by defining data collection
points. Five fixed spatial points are illustrated in figure~\ref{fig:BoeingFastener_DataPoints}.
Three of these points monitor behaviour in the narrow section of the
fastener gap and two further data points are located in the wider cavity between
the fastener bolt and collar. The temperature over the first
100~$\mu$s is shown for each of these locations in
figure~\ref{fig:BoeingFastener_TemperatureDevelopment}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{BoeingFastener_Parametric_8000K_Points1to5E_ALL}
\caption{Temperature evolution at the five fixed data recording
locations shown in figure~\ref{fig:BoeingFastener_DataPoints};
solid line: point 1, dashed line and squares: point 2, dotted
line: point 3, dashed line and circles: point 4, dotted and
dashed line: point
5. For points 1-3, the temperature rises at early times, but
expansion into the wider region between the collar, and a
reduction in current flow, leads to a drop in temperature at
later times. For the two points in the wider region of the gap,
temperature initially rises, and then remains close to constant
throughout the simulation.}
\label{fig:BoeingFastener_TemperatureDevelopment}
\end{figure}
The temperature at the highest location, point 1, in
figure~\ref{fig:BoeingFastener_TemperatureDevelopment} rises
gradually from the pre-heated temperature over the first
20~$\mu$s. This is then accompanied by a rapid rise in temperature
between 20~$\mu$s and 40~$\mu$s as this region becomes a viable
current path. The temperature continues to increase to 60~$\mu$s,
after which the reduction in input current and the expansion into
the wider gap between the fastener and collar results in a drop in
temperature. Points 2-5 are initially located just outside the
pre-heated region, hence the material here is initially under ambient
conditions. The initial discontinuity between the pre-heated and
ambient regions results in a shock wave which travels along the
fastener gap, raising the pressure and temperature in the lower fastener gap region.
This is visible through the initial rise in temperature at points 2 and 3, and also in the initial decrease in
temperature at point 1 over the first 5~$\mu$s. After this time, the current flow through the material is the dominant cause of
evolution, and the temperature continues to rise, with point 2
reaching a peak value of 1540~K at 55~$\mu$s. The subsequent decrease
in temperature at point 2 is greater than for point 1, falling to
1200~K at a time of 100~$\mu$s, whilst point 1 falls to 1420~K. Point
3 shows a similar temperature profile though the rise in temperature
from ambient conditions occurs later than at point
2. Figure~\ref{fig:BoeingFastener_TemperatureDevelopment} also
shows that the peak temperature at point 3 is again lower than for the
two higher locations, rising to 1145~K at 40~$\mu$s. The two points
inside the larger collar cavity, points 4 and 5, appear
to be outside the main current path, with the rise in temperature in
this region largely due to flow from the higher temperature regions
above. This continued movement from the upper cavity region to the
lower region causes the temperature at the lowest point, point 5, to
continue to increase over the entire course of the simulation, though
the final temperature here remains considerably lower than the upper
four points. This gradient highlights the difference in gap temperature (and hence
pressure) that can result from the existence of a viable current path
across only part of the fastener gap. In the next section, the data
collection points are used to investigate how the pressure in the gap
changes with alterations in the width of the narrow fastener gap.
\subsection{The effect of clearance gap size}
\label{sec:FastenerShankWidth}
One of the contributory factors in outgassing and thermal sparking
from fastener joints is considered to be the pressure rise in the
collar gap. Understanding the role of gap size on the pressure rise
close to the interface is therefore important for guiding design
choices. The present numerical approach can be used to investigate the
effects of geometry-related changes in pressure within the fastener
gap.
Using configurations similar to those presented in
section~\ref{sec:FastenerPreHeating}, the effect of changing the
radial distance between the fastener bolt and the carbon composite panel is
considered, for pre-heated and non pre-heated fastener gaps.
Seven cases are considered, with the distance between fastener and
panels referred to as the `shank gap'. The larger gap, between the
fastener and the collar, is termed the `collar gap' in this section and is
kept constant throughout. The shank gap is varied between 50~$\mu$m and
200~$\mu$m. All other parameters, including the
larger collar gap and the panel thickness, are kept the same.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{GapWidth_points4and5_NoKirchdoerfer_withAdditional_withLines}
\caption{ The effects of the shank gap width on the pressure at
data points 4 (hollow triangles) and 5 (filled triangles) after
100~$\mu$s with no pre-heating within the fastener gap. For
narrow gaps, pressure at these points rises with increasing shank
gap size, though as this gap gets wider still, there is then a
drop in the pressure at these points. }
\label{fig:BoeingFastener_GapWidth_NoPreHeating}
\end{figure}
The key region at which outgassing is likely to occur is where the collar meets
the carbon composite panel, hence the primary interest in this study
is to compare the pressure rise at data collection points 4 and 5, as identified in
figure~\ref{fig:BoeingFastener_DataPoints}. Figure~\ref{fig:BoeingFastener_GapWidth_NoPreHeating}
shows the pressure at these two points after 100~$\mu$s for the case
without pre-heating. This time is chosen to
allow the pressure evolution to equilibrate between the collar and
shank gaps. The pressure is shown to vary significantly with the width
of the shank gap, though in all cases the pressure at point 4 (the higher of the
two locations) remains greater than at point 5. The
pressure decreases sharply at point 5 above a shank gap width of
0.1~mm, though it remains high at this width for point 4. This may
indicate a reduction in flow from the upper to the lower
region of the shank gap. The subsequent reduction in pressure at both
data point locations for wider shank gaps highlights the decrease in
electrical conductivity at the top of the gap. It is also noted
that for all shank gap widths, the pressure is about two orders of
magnitude below the values cited in experiments.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{GapWidth_points4and5_withInitialisation_withAdditional_withLines}
\caption{ The effects of shank gap width on the pressure at data
points 4 (hollow triangles) and 5 (filled triangles) after
100~$\mu$s with an initial pre-heated region at the top of the
shank gap. The effects of increasing the shank gap width are
more pronounced for point 5, for which a wider gap correlates with
a lower pressure. For point 4, this behaviour is only seen for
shank gaps above about 0.125~mm.}
\label{fig:BoeingFastener_GapWidth_PreHeated}
\end{figure}
Figure~\ref{fig:BoeingFastener_GapWidth_PreHeated} shows the pressure
at points 4 and 5 after 100~$\mu$s for varying shank gap width where
the upper region of the shank gap has been pre-heated. In all cases,
the pressures at both points are significantly greater than those
without pre-heating, in
figure~\ref{fig:BoeingFastener_GapWidth_NoPreHeating}. In this case,
a maximum pressure of 11.8~MPa occurs at point 4 for the 0.1~mm shank
gap width. Interestingly, for the narrowest shank gap, 0.05~mm, the
pressure at point 5 is greater than that at point 4. This is not
mirrored in any of the other gap widths and may be associated with a
greater penetration of the high temperature fluid from the shank gap
into the collar gap. A similar trend to that shown in
figure~\ref{fig:BoeingFastener_GapWidth_NoPreHeating}, with pressure
decreasing as shank gap width increases, is observed in
figure~\ref{fig:BoeingFastener_GapWidth_PreHeated}. Again, the drop in
pressure resulting from the increasing shank gap width at point 4 lags
behind that of point 5. From analysis of the corresponding pressure
traces in the shank gap, this decrease in pressure appears to be
predominantly associated with a decrease in the length of time over
which the electrical conductivity in the upper regions of the shank
gap is sufficiently high for the fastener gap to remain a viable
current path.
The results in this section demonstrate that the multi-physics
methodology outlined in this paper is capable of computing the effect
of transient changes in current flow through a fastener geometry on
the pressure and temperature within the substrate materials, and
in gas-filled voids. These results also indicate that this
approach is capable of accounting for the influence of geometric
changes on the current distribution and associated thermal and
mechanical fastener behaviour. The increase in temperature and
pressure in the pre-heated gas results further highlights the
influence of gas-filled gaps on the electrical and thermal
development of the fastener as a whole.
The importance of considering direct lightning attachment in fastener
design is highlighted by the simulations presented in this paper.
Understanding the path taken by the current through the fastener
geometry can lead to judicious use of dielectric layers to manipulate
the current path. Potentially this can mean directing the current
away from electrically sensitive components, minimising indirect
attachment to remote components, or reducing the possibility of energetic discharge, thermal sparking and
edge glow. The simulations in this paper also provide confirmation of
existing best practice, that to minimise the possibility of
outgassing, the fastener design should maximise the electrical contact
between fastener components. This, in turn, minimises the contact
resistance and reduces the potential for ionisation of the gas-filled gaps
between components. Sections~\ref{sec:FastenerPreHeating}
and~\ref{sec:FastenerShankWidth} suggest that the plasma heating and
mass transport characteristics in regions of confined plasma can be
influenced by the size, shape and relative location of internal voids.
The simulations presented in this work also suggest
that convective transport of the hot plasma can lead to a pressure
rise in connected void regions away from the direct current path.
\section{Conclusions}
\label{sec:Conclusions}
This paper presents a multi-physics methodology that provides dynamic,
non-linear coupling of a plasma arc with an elastoplastic
multi-material model description of an aerospace fastener
assembly. This methodology simultaneously solves hyperbolic partial
differential equations for each material to achieve a two-way coupled
system between the plasma arc and the fastener materials. The
advantage of this approach is that the transient changes in the
mechanical and electrodynamic properties for each material, and their
associated influence on the surrounding materials, are captured.
The ability for the numerical model to capture the dynamic influence
of changes in material choice, and layering design, on the electrical
current path and associated thermal and mechanical properties is
demonstrated. This highlights how injudicious use of dielectric
layers, together with the presence of unsealed internal gaps in the
fastener design, may create conditions under which structural and
sparking issues occur during a transient, high-current event. Joule
heating of the substrate materials in these regions can result in high
local stresses and material temperatures. Large potential differences
across internal gas-filled voids can cause ionisation and the
promotion of a high pressure plasma which is considered to be a
driving mechanism for energetic discharge from fasteners through
sparking. The inclusion of simple contact resistances in the numerical
model accounts for surface roughness, fibre ply pull-out and other
surface imperfections between fastener components. This could be
extended to include dynamically changing contact resistances to
account for transient changes in the mechanical and thermal loading of
components over the course of the lightning strike.
Additionally, the model presented in this work could be extended to
include a statistical approximation to the microscopic surface
imperfections that exist at the contact surface between adjacent
materials. This would allow additional mechanisms for energetic
discharge to be studied, in particular thermal sparking, as detailed
by Odam et al.~\cite{odam1991lightning, odam2001factors}.
Incorporating a fully anisotropic equation of state for
composite materials, for example the method of
Lukyanov~\cite{lukyanov2010equation}, would allow for
directionally-dependent effects to be studied.
The flexibility of the numerical model for undertaking parameter
studies is demonstrated with an example in which the dimensions of an
internal gap between fastener components are systematically
altered. This paper is intended to tentatively highlight the potential
for the multi-physics methodology to be used as an engineering tool in
the design and optimisation of aerospace components subject to plasma
arc attachment, or as a developmental aid in experiment design.
\section*{Acknowledgements}
The authors acknowledge the funding support of Boeing Research \&
Technology (BR\&T) through project number SSOW-BRT-L0516-0569. The
authors would also like to thank Micah Goldade, Philipp Boettcher and
Louisa Michael of BR\&T for technical input throughout this work.
\bibliographystyle{unsrt}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.1 (and
\LaTeXe) in manuscripts prepared for submission to AIP
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://authors.aip.org} and in the documentation for
REV\TeX~4.1 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. Either format may be used for submission
purposes; however, for peer review and production, AIP will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AIP that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of
\verb+\pageref{#1}+ to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. They have no effect if \texttt{preprint} formatting is
chosen instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands are available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, citations are numerical; \cite{feyn54} author-year citations are an option.
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
REV\TeX\ provides the ability to properly punctuate textual citations in author-year style;
this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files
\verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for
numbered and author-year bibliographies,
respectively.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical, and
you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+aipsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
aipsamp}) after the first pass of \LaTeX\ produces the file
\verb+aipsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Author-year and numerical author-year citation styles (each for its own reason) cannot use this method.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that was used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use of \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases---do not use it to number all
equations in a paper.
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after
\verb+\begin{subequations}+ allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\end{equation}+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1}%
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2}%
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AIP journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section*{Data Availability Statement}
AIP Publishing believes that all datasets underlying the conclusions of the paper should be available to readers. Authors are encouraged to deposit their datasets in publicly available repositories or present them in the main manuscript. All research articles must include a data availability statement stating where the data can be found. In this section, authors should add the respective statement from the chart below based on the availability of data in their paper.
\begin{center}
\renewcommand\arraystretch{1.2}
\begin{tabular}{| >{\raggedright\arraybackslash}p{0.3\linewidth} | >{\raggedright\arraybackslash}p{0.65\linewidth} |}
\hline
\textbf{AVAILABILITY OF DATA} & \textbf{STATEMENT OF DATA AVAILABILITY}\\
\hline
Data available on request from the authors
&
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\\\hline
Data available in article or supplementary material
&
The data that support the findings of this study are available within the article [and its supplementary material].
\\\hline
Data openly available in a public repository that issues datasets with DOIs
&
The data that support the findings of this study are openly available in [repository name] at http://doi.org/[doi], reference number [reference number].
\\\hline
Data openly available in a public repository that does not issue DOIs
&
The data that support the findings of this study are openly available in [repository name], reference number [reference number].
\\\hline
Data sharing not applicable – no new data generated
&
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\\\hline
Data generated at a central, large scale facility
&
Raw data were generated at the [facility name] large scale facility. Derived data supporting the findings of this study are available from the corresponding author upon reasonable request.
\\\hline
Embargo on data due to commercial restrictions
&
The data that support the findings will be available in [repository name] at [DOI link] following an embargo from the date of publication to allow for commercialization of research findings.
\\\hline
Data available on request due to privacy/ethical restrictions
&
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due [state restrictions such as privacy or ethical restrictions].
\\\hline
Data subject to third party restrictions
&
The data that support the findings of this study are available from [third party]. Restrictions apply to the availability of these data, which were used under license for this study. Data are available from the authors upon reasonable request and with the permission of [third party].
\\\hline
\end{tabular}
\end{center}
\section{Employment Records as a Resource for Record-linkage in Organizational Social Science}
As large datasets on individual political behavior have become more common, scholars have focused increasing attention on the methodological problem of linking records from different sources \citep{Enamorado2019,Herzog2010,Larsen2001}. Record linkage is an all too common task for researchers building datasets. When a unique identifier (such as a social security number) is shared between datasets and available to researchers, the problem of record linkage is significantly reduced. Errors in linkage, presumably rare, may be regarded as sources of noise. In cases where unique identifiers like social security numbers are not available, the recent literature has developed sophisticated probabilistic linkage algorithms that can find the same individual in two datasets using stable characteristics such as birth year and race, or even mutable characteristics such as address \citep{Enamorado2019}. The rise of such techniques has allowed research that would have been costly or impossible to conduct in previous eras (e.g., \citet{Figlio2014a,Bolsen2014,Hill2017}).
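The characteristic-based linkage just described can be sketched in a few lines of Python. The records, field names, and similarity threshold below are hypothetical illustrations of the general idea, not the procedure of any cited study.

```python
from difflib import SequenceMatcher

# Hypothetical records with no shared unique identifier.
file_a = [{"name": "John A. Smith", "birth_year": 1962, "race": "white"}]
file_b = [
    {"name": "Jon Smith", "birth_year": 1962, "race": "white"},
    {"name": "John Smith", "birth_year": 1980, "race": "white"},
]

def link(rec, candidates, threshold=0.8):
    """Keep candidates agreeing on stable covariates, then rank by name similarity."""
    agree = [c for c in candidates
             if c["birth_year"] == rec["birth_year"] and c["race"] == rec["race"]]
    scored = [(SequenceMatcher(None, rec["name"].lower(), c["name"].lower()).ratio(), c)
              for c in agree]
    scored = [(s, c) for (s, c) in scored if s >= threshold]
    return max(scored, key=lambda t: t[0], default=None)

# The 1980 record is ruled out by the covariates despite its closer name;
# the misspelled "Jon Smith" (1962) survives on similarity.
best = link(file_a[0], file_b)
```

Full probabilistic approaches estimate per-field agreement probabilities rather than imposing a hard similarity threshold; the sketch only conveys why shared covariates matter for disambiguation.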
Despite progress on the record-linkage problem for data on political behavior, these developments have had less of an impact so far on scholarship concerning organizational entities such as corporations, universities, trade associations, think tanks, religious groups, non-profits, and international associations---entities that are important players in theories of political economy, American politics, and other subfields. Similar to researchers on individuals, scholars of organizations also seek to combine multiple data streams to develop evidence-based models. However, in addition to lacking shared unique identifiers, datasets on organizations also often lack common covariate data that form the basis for probabilistic linkage algorithms. Therefore, scholars must (and do) rely heavily on exact or fuzzy string matching to link records on organizations---or alternatively, pay the often significant costs of manually linking datasets.
To take a recent example from the applied political science literature, \citet{Crosson2020} compare the ideology scores of organizations with political action committees (PACs) to those without. Scores are calculated from a dataset of interest group position-taking compiled by a non-profit (Maplight). The list of organizations with PACs comes from Federal Election Commission (FEC) records. Maplight and the FEC do not refer to organizations using the same names. There is no covariate data to help with the linkage. The authors state that matching records in this situation is ``challenging'' (p.~32), and consider both exact and fuzzy matching. Ultimately, they perform exact matching on names after considerable pre-processing\footnote{The literature offers little guidance on what pre-processing steps are desirable in these cases, and so researchers are doing their best to make the choices that seem reasonable at the time.} because of concerns about false positives, acknowledging that they inevitably do not link all records as a result. Indeed, the authors supplement the 545 algorithmic matches with 243 additional hand matches, implying that their first algorithmic effort missed at least one in three correct matches.
The challenge faced by Crosson, Furnas, and Lorenz is typical for scholars studying organizations in the US or other contexts. Given the relatively small size of their matching problem, the authors are able to directly match the data themselves and bring to bear their subject matter expertise. In many cases, where the number of matches sought is not in the hundreds but in the thousands, practical necessity requires using computational algorithms like fuzzy matching or hiring one or more non-expert coders (with the latter usually being undergraduates or participants in online markets such as Amazon's Mechanical Turk). Both string matching and relying on human coders without real domain expertise have serious limitations and downsides.
Even though string distance metrics can link records whose identifiers contain minor differences, they optimize no objective matching quality function in themselves and have trouble handling the diversity of monikers an organization may have. For example, ``JPM'' and ``Chase Bank'' refer to the same organization, yet these strings share no characters. Likewise, string matching and research assistants would both have difficulty detecting a relationship between Fannie Mae and the Federal National Mortgage Association. Such complex matches can be especially difficult for human coders from outside a study's geographic context, as these coders may lack the required contextual information for performing such matches.\footnote{These problems are compounded when one attempts to link datasets from different source languages, as, for example, Chinese and English names will not share common characters. Research on organizations is hampered by such basic challenges connecting data sources, which impose substantial start-up costs to conducting research in this area.}
Methodologists have begun to address the challenges that researchers face in matching organizational records. \citet{kaufman_klevs_2021}, for example, propose an adaptive learning algorithm that does many different kinds of fuzzy matching and uses a human-in-the-loop to ``adapt'' possible fuzzy matched data to the researcher's particular task. While their approach represents a significant advance over contemporary research practice, an adaptive system based on fuzzy matching still requires substantial time investment by the researcher in producing manual matches, and will also inevitably struggle to make connections in the relatively common situation where shared characters are few and far between (e.g.~Chase Bank and JPM) or where characters are shared but the strings have very different lengths (e.g. Fannie Mae and Federal National Mortgage Association).
In this paper, we develop methodologies for linking organizational records that leverage abundant information available on employment networking sites such as LinkedIn. In particular, we begin by developing two distinct but complementary approaches to using half a billion open-collaborated records. The first method is based on machine learning. The second is based on network analysis and community detection. We then combine both methods to produce an even more powerful automated matching method than either approach would be able to achieve on its own using this novel data source.
In our first approach, we view the large set of open-collaborated records as a massive training corpus and apply machine learning. We outline two assumptions that once made, allow us to compute the match probability for any two organizational names. Using continuous representations for each character in each text alias to avoid problems due to abbreviations or spelling differences (e.g., between American and British English), we then construct an organizational name-matching model that draws from the literature in natural language processing to predict the probability that two aliases refer to the same organization using the trillions of alias pair training examples from the LinkedIn corpus.
The tool just described improves on fuzzy matching by explicitly modeling the match probability using training data, but suffers from the same character dependency problem as fuzzy matching. This leads to challenges in cases such as the ``Chase Bank'' versus ``JP Morgan'' example. Therefore, we also develop an alternative approach that assists organizational linkage by exploiting information contained in the network representation of the LinkedIn data. In this approach, we use the LinkedIn corpus to construct a network of organizational aliases and URLs. In this formulation, ``JP Morgan'' is one node in a graph, while ``Chase Bank'' is another. Relationships between nodes are also extracted from the data because users have asserted that these organizations are connected to the same organizational profile pages (e.g.~\url{linkedin.com/company/chase}). A community of nodes with dense connections will likely refer to the same organization. If the algorithm is successful, ``Chase Bank'' and ``JP Morgan'' will find themselves placed in the same community.\footnote{Some readers may ask why community detection on the network is necessary rather than assuming the network as given is initially true. There are two concerns. First, there are likely to be occasional ``noisy'' links, such as between Bank of America and JP Morgan, so the network as given will be over-inclusive of matches one would wish to make. Second, there are entities with only second- or third-order connections in the raw data which should be connected---for example, distinct subsidiaries of a parent organization. The raw network without community detection will under-include such linkages.} We apply two distinct community detection algorithms to alternative representations of this network and use the detected communities to solve the record linkage problem. Both community detection algorithms produce similarly substantial performance gains over fuzzy matching. 
We also introduce a unified approach towards LinkedIn-assisted organizational record linkage that combines the machine learning and the graph theoretic methods, which in some of our application tasks yields optimal performance.
Taken together, we propose several new tools and data assets which enable researchers to better link datasets on organizations from over 100 countries and in dozens of languages. Intuitively, each approach uses the combined wisdom of millions of human beings with first-hand knowledge of these organizations. We illustrate the usefulness of our methods through two applications involving lobbying data.\footnote{We also include, in the Appendix, a linkage task between English and Chinese company names.} An open-source package (\Verb|LinkOrgs|) implements the computationally efficient methods we develop.
\section{Employment Networking Data as a Resource for Scholars of
Organizational
Politics}\label{social-networking-data-as-a-resource-for-scholars-of-organizational-politics}
Here we describe how human-generated records from social media companies contain a large amount of information that can potentially be useful to scholars of organizational politics, particularly in their all-too-common challenge of assembling datasets. Although our approach will at times refer to specifics of our source data (LinkedIn), one may expect that other networking site data could allow for similar approaches.
Users on LinkedIn provide substantial information about their current
and previous employers. For the sake of our illustration, we will use a
census of the LinkedIn network circa 2017, which we acquired from a vendor, \Verb|Datahut.co|. Future researchers have the legal right to update this corpus using new scrapes of the website (as the Ninth Circuit Court of Appeals established in \textit{HIQ Labs, Inc., v. LinkedIn Corporation} (2017)). The dataset contains about 350 million unique profiles drawn from over 200 countries---a size and coverage consistent with LinkedIn's internal estimates that it has reported on its website and marketing materials.
To construct a linkage directory for assisting dataset merges, we here use the professional experience category posted by users. In each profile on LinkedIn, a user may list the name of their employer as free-response text. We will refer to the free-response name (or ``alias'') associated with unit \(i\) as \(A_i\). In this professional experience category, users also often post the URL link to their employer's LinkedIn page. In essence, the URL link serves as a kind of unique identifier for each organization. There are some complications we will have to deal with, however, as large internally differentiated organizations may maintain multiple profiles (e.g., ICPSR and the University of Michigan both have distinct organizational profile URLs on LinkedIn). Below, we provide examples of the professional experience data for several public figures, with the organizational alias used by these figures listed along with the profile URL to which they linked themselves.
\begin{table}[ht]
\caption{Illustration of source data using three figures who obtained public notoriety several years after the data was collected.}\label{tab:example}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l l l l l l}
\hline \hline
\textit{Name} & \textit{Title} & \textit{Organization} & \textit{Organization URL Path} (\Verb|linkedin.com/company/|) & \textit{Start date} & \textit{End date} \\
\hline
Michael Cohen & EVP \& Special Counsel to Donald J. Trump & The Trump Organization & \url{the-trump-organization} & 20070501 & 20170418 \\
Allen Weisselberg & EVP/CFO & The Trump Organization & \url{the-trump-organization} & & 20170316 \\
Michael Avenatti & Founding Partner & Eagan Avenatti, LLP & & 20070101 & 20170318 \\
Michael Avenatti & Chairman & Tully's Coffee & \url{tully's-coffee} & 20120101 & 20170318 \\
Michael Avenatti & Attorney & Greene Broillet \& Wheeler, LLP & & 20030101 & 20070101 \\
Michael Avenatti & Attorney & O'Melveny \& Myers LLP & \url{o'melveny-\&-myers-llp} & 20000101 & 20030101 \\
\hline \hline
\end{tabular}
}
\end{table}
We provide some descriptive statistics about the scope of the dataset as it relates to organizational name usage in Table \ref{tab:descriptive}. The statistics reveal that, on average, users refer to organizations linguistically in about three different ways and that each of the 15 million aliases, on average, links to slightly more than one organizational URL.
\begin{table}[ht]
\centering
\caption{Descriptive statistics for the LinkedIn data.}\label{tab:descriptive}
\begin{tabular}{lc}
\hline \hline
\textit{Statistic} & \textit{Value} \\
\hline
{\# unique aliases} & 15,270,027 \\
{\# unique URLs} & 5,950,995 \\
{Mean \# of unique aliases per URL} & 2.88 \\
{Mean \# of URL links per unique alias} & 1.12 \\
{Total \# of alias pair examples} & $>10^{14}$ \\
\hline \hline
\end{tabular}
\end{table}
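To make the construction of the quantities in Table \ref{tab:descriptive} concrete, the following sketch computes the same statistics from a hypothetical miniature set of (alias, URL) co-occurrences; the records below are invented for illustration and are not drawn from the corpus.

```python
from collections import defaultdict

# Hypothetical miniature corpus of (alias, URL) co-occurrences; the real
# data contain roughly 350 million profiles rather than a handful of rows.
records = [
    ("JP Morgan", "linkedin.com/company/chase"),
    ("Chase Bank", "linkedin.com/company/chase"),
    ("JPM", "linkedin.com/company/chase"),
    ("University of Michigan", "linkedin.com/company/university-of-michigan"),
    ("ICPSR", "linkedin.com/company/university-of-michigan"),
    ("ICPSR", "linkedin.com/company/icpsr"),
]

urls_per_alias = defaultdict(set)
aliases_per_url = defaultdict(set)
for alias, url in records:
    urls_per_alias[alias].add(url)
    aliases_per_url[url].add(alias)

n_aliases = len(urls_per_alias)  # number of unique aliases
n_urls = len(aliases_per_url)    # number of unique URLs
mean_aliases_per_url = sum(len(s) for s in aliases_per_url.values()) / n_urls
mean_urls_per_alias = sum(len(s) for s in urls_per_alias.values()) / n_aliases

print(n_aliases, n_urls, mean_aliases_per_url, mean_urls_per_alias)
```

The same aggregation, run over the full corpus, yields the figures reported in the table.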
We have described the human-contributed data source that provides hundreds of millions of examples of people using organizational aliases. We next turn to the task of deducing how this information can be converted into something meaningful for social scientists seeking to link datasets on publicly traded firms, NGOs, or government agencies.
\section{Modeling Organizational Linkage Using the LinkedIn
Corpus}\label{modeling-organizational-linkage-using-the-linkedin-corpus}
We will introduce two distinct ways in which researchers can conceptualize the record linkage problem using employment networking data. In the first approach, the LinkedIn corpus will assist with record linkage framed as a prediction problem: the employment network data serves as a massive training set for name pairs. We build a machine learning model that (a) generates continuous numerical representations of the semantic content of the organizational names to (b) learn the mapping between that content and the underlying probability that two aliases are matched to the same organization. In the second approach, the problem is framed in graph theoretic terms. Here, record linkage will be assisted by considering the network structure of alias-to-URL pairings.
\subsection{The Machine Learning Approach}\label{method-1-organizational-record-linkage-as-prediction-problem}
Here, we assume that the alias-to-URL pairings are relevant for understanding alias-to-organization ties. That is, we assume what we term validity: \begin{equation}
\textit{Validity Assumption:} \ \Pr(O_i = O_j \mid A_i = a, A_j=a') = \Pr(U_i = U_j \mid A_i=a, A_j=a'),
\end{equation}
where \(O_i\), \(O_j\) refer to the organizations associated with index \(i\) and \(j\), \(A_i\), \(A_j\) refer to the aliases, and \(U_i\), \(U_j\) refer to the URLs. This assumption states that the overall behavior of users linking their aliases to organizational URLs is the same as if they were linking to actual organizations. Intuitively, the assumption means that the LinkedIn name-to-URL patterns give us information about the name-to-organization mapping.
Because, in organizational linkage tasks, the organizational unit of analysis is defined by the researcher, this assumption may be satisfied in some circumstances but not in others. Indeed, a match for one purpose may not be a match for others. For example, General Motors has divisions called Buick and Cadillac, which were initially separate companies. A researcher might want these entities to match each other in some datasets. Other researchers may not want these to match, however, and for plenty of purposes, it may not make sense to make these entities connect. Therefore, whether alias-to-URL linkage is valid for a particular alias-to-organization match will depend on the way in which organizations are defined in the researchers' analysis. Even in cases where researchers do not want subsidiaries such as Buick and Cadillac to connect, typically, researchers would prefer to reject such connections, which are often close calls, than to struggle with a random draw from the set of potential matches (e.g., Buick and Buc-ee's\footnote{Buc-ee's, for those not aware, is a chain of gas stations in the US South whose logo features a buck-toothed beaver.}).
Next, we assume semantic mapping, which states that the organizational match probability is a function of the semantic content of the aliases. That is,
\begin{equation}\label{eq:semantic}
\textit{Semantic Mapping Assumption:} \ \Pr(O_i = O_j \mid A_i, A_j) = f_p( f_n(A_i), f_n(A_j)),
\end{equation}
where \(f_n\) represents a function that numericizes each alias string, and \(f_p\) represents a function that transforms the alias numeric representations into overall match probabilities. This assumption, which depends on the existence of a single match function, is not insignificant given the many possibilities for the researcher-defined organizational unit of analysis. It also could be questionable for cross-language link tasks. Nevertheless, assuming that this function does exist, we will employ machine learning algorithms in an attempt to learn this function from the trillions of LinkedIn organizational alias pairs.
Both the validity and semantic mapping assumptions would be vulnerable to errors in the training data. Regardless of how researchers define their organizational unit of analysis, inaccuracies due to the selection of incorrect URLs by users would limit our ability to capture the link probability using the alias and URL information. It could introduce systematic bias as well. However, as a professional networking site, users have an incentive to maintain accurate information about themselves on the page, hopefully limiting the degree of systematic bias.
Finally, to generate ground truth data for our prediction model training, we assume independence of indices \(i\) and \(j\), which allows us to calculate \(\Pr(O_i = O_j \mid A_i, A_j)\). This independence assumption would be violated if, for example, users could coordinate so that one use of an alias linking to a given URL gave additional information about another use. Given the wide geographic reach of the LinkedIn network, this sort of coordination is presumably rare, yet is difficult to evaluate for all pairs of indices. The assumption, in any case, is critical because, using it, we can calculate the overall organizational link probability:
\begin{align}
\Pr(O_i = O_j \mid A_i=a, A_j=a') &= \Pr(U_i = U_j \mid A_i=a, A_j=a') \;\; \textrm{(by \emph{Validity})}
\\ &= \sum_{u\in\mathcal{U}} \Pr(U_i = u, U_j = u \mid A_i=a, A_j=a')
\\ &= \sum_{u\in\mathcal{U}} \Pr(U_i = u \mid A_i=a)\Pr(U_j = u \mid A_j=a') \;\; \textrm{(by \emph{Independence})} \label{eq:GetMatchProb}
\end{align}
The two terms in the final expression can be calculated using the LinkedIn network with empirical frequencies (e.g., the fraction of times alias \(a\) and \(a'\) link to \(u\)).
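Equation \ref{eq:GetMatchProb} thus reduces to simple count arithmetic over the corpus. A minimal sketch, using hypothetical co-occurrence counts in place of the LinkedIn data:

```python
from collections import Counter, defaultdict

# Hypothetical (alias, URL) co-occurrence counts standing in for the corpus.
counts = Counter({
    ("Chase Bank", "chase"): 90,
    ("JP Morgan", "chase"): 60,
    ("JP Morgan", "jpmorgan"): 40,
    ("Bank of America", "bofa"): 100,
})

# Total occurrences of each alias, for computing empirical frequencies.
totals = defaultdict(int)
for (a, u), n in counts.items():
    totals[a] += n

def p_url_given_alias(u, a):
    """Empirical Pr(U = u | A = a): fraction of a's occurrences linking to u."""
    return counts[(a, u)] / totals[a] if totals[a] else 0.0

def match_prob(a, a_prime):
    """Organizational match probability: sum over URLs of the product of
    the two conditional link frequencies (the final expression above)."""
    urls = {u for (_, u) in counts}
    return sum(p_url_given_alias(u, a) * p_url_given_alias(u, a_prime)
               for u in urls)

print(match_prob("Chase Bank", "JP Morgan"))        # 1.0 * 0.6 = 0.6
print(match_prob("Chase Bank", "Bank of America"))  # 0.0: no shared URL
```

These corpus-derived probabilities then serve as the ground truth targets for the prediction model described next.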
\subsubsection{Modeling Approach}\label{ss:ml}
There are many model classes that could be consistent with the core assumptions described above. Indeed, as machine learning methods continue to evolve, the future will no doubt yield improvements in any machine-learning approach to discovering these match probabilities. Nevertheless, for concreteness, we summarize one modeling strategy that we use throughout. In particular, we assume the following functional form for the match probabilities in an effort to build an effective yet computationally efficient machine-learning-based matching model:
\begin{equation}
\textit{Distance-to-Probability Mapping: } \log\left(\frac{\Pr(O_i = O_j \mid A_i=a, A_j=a')}{1-\Pr(O_i = O_j \mid A_i=a, A_j=a')}\right) \propto -||f_n(a) - f_n(a')||_2.
\end{equation}
In other words, the match probability is a function of the Euclidean distance between the numerical representations for aliases \(a\) and \(a'\). This captures the intuition that similar names like ``apple'' and ``apple inc.'' are more likely to be matched than dissimilar names like ``apple'' and ``jp morgan''.
Besides this intuition, this distance-to-probability mapping also has important computational benefits: if we seek to calculate the match probability between two datasets of 100 observations each, we do not have to apply a computationally expensive non-linear \(f_p\) function to all 10,000 possible pairs. Instead, we only need to execute a non-linear \(f_n\) function 200 times---once for each alias in each dataset---after which we take the Euclidean distance between each pair of numerical representations in a step requiring little computational effort even for large data merge tasks.
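The computational pattern just described can be sketched as follows. Each alias is embedded once, and Euclidean distances are then mapped to probabilities through a logistic link; the two-dimensional ``embeddings'' and the slope and intercept below are hypothetical stand-ins for the learned LSTM representations and fitted logistic coefficients.

```python
import math

# Hypothetical 2-d embeddings standing in for the learned f_n output.
fake_embeddings = {
    "apple": (0.9, 0.1),
    "apple inc.": (0.85, 0.15),
    "jp morgan": (-0.7, 0.6),
}

def f_n(alias):
    """Stand-in embedding function (learned from data in the paper)."""
    return fake_embeddings[alias]

def match_prob(a, b, alpha=4.0, beta=2.0):
    """Distance-to-probability mapping: log-odds proportional to the
    negative Euclidean distance (alpha, beta are assumed coefficients)."""
    d = math.dist(f_n(a), f_n(b))
    return 1.0 / (1.0 + math.exp(-(beta - alpha * d)))

# Embed each dataset once (n + m calls to f_n), then compare cheaply.
left, right = ["apple"], ["apple inc.", "jp morgan"]
for a in left:
    for b in right:
        print(a, "|", b, "->", round(match_prob(a, b), 3))
```

The expensive non-linear work happens only in \texttt{f\_n}; the pairwise comparison step is a cheap distance computation.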
Instead of making assumptions about how the organizational names should be numerically represented, we learn this representation from data. The specific modeling approach for generating each alias's numerical representation is motivated by advances in natural language sequence modeling. The approach we employ here builds from the success of vector representations for words popularized in \citet*{mikolov2013distributed}. Word vectors enabled improved performance in natural language processing tasks \citep{egger2022text} in part because they overcome the sparsity issue present in other text representations (such as word counts): most linguistic features (e.g., words) do not occur in most language examples (e.g., documents), but if each linguistic feature is represented by a vector of 256 dimensions,\footnote{The choice of 256 dimensions is an arbitrary but necessary decision. Fewer dimensions have greater computational efficiency with less informational richness, while more dimensions have the reverse.} information on every latent semantic dimension is present for every example. For instance, one dimension of a word might relate to the extent to which it is specific vs. abstract, whereas another may refer to the degree to which a word is associated with lower or higher positions in a hierarchy.
In our case, we build a model for organizational name matches from the characters on up. In particular, we model each \emph{character} as a vector (with each dimension representing some latent quality of that character), each \emph{word} as a summary of some time series representation of character vectors, and each \emph{organizational name} as a summary of some time series representation of learned word vectors. To model the time series dynamics of character and word representations, we employ two Long Short-Term Memory (LSTM) neural network architectures \citep{sundermeyer2012lstm}\footnote{We also experimented with several other possibilities, including the Transformer model class used in large language models \citep{vaswani2017attention}. Future work should continue down this line of effort.}: the idea is to ``read'' the sequence of characters (and then the sequence of words), where ``read'' here means combining information together non-linearly through a recursive non-linear model. Once we have obtained the final representation for two aliases after ``reading'' the sequence of sequences, we then form the overall match probability by taking the normalized Euclidean distance between those pairs and essentially applying a univariate logistic regression model that maps each distance to a number between 0 and 1. Figure \ref{fig:MLillustrate} illustrates the modeling approach; additional details are found in Section \ref{ss:ModelingDetails} of the Appendix.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{LinkOrgsFigureML.pdf}
\caption{High-level illustration of the modeling strategy. We learn from data how to represent (a) the characters that constitute words, (b) the words that constitute organizational names, and (c) how to compare two organizational name representations efficiently. \label{fig:MLillustrate}}
\end{center}
\end{figure}
The character vectors and LSTM weights are model parameters, which are jointly updated using stochastic gradient descent to minimize prediction error on our training data computed using the LinkedIn corpus. Specifically, we minimize the KL divergence between the estimated match probability and the true match probability, \(\Pr(O_i=O_j|A_i=a,A_j=a')\), where the KL divergence between two distributions, \(P\) and \(Q\), denoted by \(D_{\textrm{KL}}(P||Q)\), measures the information lost when \(Q\) is used to approximate \(P\) and is 0 when the two distributions coincide. That is, we minimize:
\begin{equation}\label{eq:minimize}
\textrm{minimize} \sum_{a,a'\in\mathcal{A}} D_{\textrm{KL}}\left(\widehat{\Pr}(O_i=O_j|A_i=a,A_j=a') \; || \;
\Pr(O_i=O_j|A_i=a,A_j=a')\right),
\end{equation}
where \(\mathcal{A}\) denotes the set of all aliases. In this way, we learn from data how the aliases should be numerically represented---learning along the way the representation of characters, words, and how these things combine together in a name---with the overall goal of predicting the organizational link probability between each alias pair. We apply the approach here jointly to the entire LinkedIn corpus, so that name representations in many languages are learned together.
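For intuition, each per-pair term of the objective in Equation \ref{eq:minimize} compares two Bernoulli distributions: the corpus-derived match probability and the model's estimate. A toy sketch with hypothetical probabilities:

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q); zero iff p == q."""
    p = min(max(p, eps), 1 - eps)  # clamp away from 0/1 for numerical safety
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical corpus-derived match probabilities for three alias pairs,
# alongside the model's current estimates of those probabilities.
est_p = [0.90, 0.20, 0.55]
true_p = [0.95, 0.10, 0.60]

loss = sum(kl_bernoulli(p, q) for p, q in zip(est_p, true_p))
print(loss)  # gradient descent drives this sum toward 0
```

In the actual model, the gradient of this loss flows back through the logistic link and the LSTM layers to update the character vectors and network weights.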
\subsubsection{Model Illustration}\label{model-illustration}
In Figure \ref{fig:showML}, we examine the output of the machine learning model. In the left panel of Figure \ref{fig:showML}, we see the distributions of match probabilities for an out-of-sample set of data points. The distribution is formed separately for matches and non-matches. A Kolmogorov-Smirnov (KS) test for assessing whether the probabilities are drawn from the same distribution yields a statistic of 0.83 ($p < 10^{-16}$). If there were \emph{total} overlap between these two distributions, the test statistic would be 0. If we could perfectly distinguish matches from non-matches, the test statistic would be 1. We are far closer to this second possibility, and the performance appears relatively good.\footnote{As a point of comparison, a KS test on the same data using a Jaccard distance metric is 0.55.}
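The KS statistic used here is simply the largest vertical gap between the two empirical CDFs, which can be computed directly; the scores below are hypothetical.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    grid = sorted(set(sample_a) | set(sample_b))
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in grid)

# Hypothetical model scores for known matches vs. known non-matches.
matches = [0.91, 0.85, 0.78, 0.95, 0.66]
non_matches = [0.05, 0.12, 0.30, 0.02, 0.41]
print(ks_statistic(matches, non_matches))  # 1.0: perfectly separated here
```

A statistic of 1 means the two score distributions are perfectly separable; 0 means they coincide.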
In the right panel, we see pairs of aliases representing the same URL (pairs are represented by the same graphical mark type). We project the 256-dimensional alias embedding space down to two dimensions via Principal Components Analysis (PCA) for plotting purposes. We see that aliases representing the same organization are generally quite close in this embedding space. For example, aliases ``apple'' and ``apple co'' are near each other, as the two aliases are linguistically similar and do indeed refer to the same organization. The model seems to be able to handle less salient information reasonably well: ``oracle'' and ``oracle corporation'' are quite close in this embedding space even though the presence of the long word ``corporation'' would substantially affect string distance measures based only on the presence or absence of discrete letter combinations. While, in some situations, researchers may simply drop common, presumably low-signal words like ``corporation'', such choices are likely to be ad hoc, and researchers will likely feel uncertain about their justification in making choices about what words to drop. The optimized matching model learns to ignore or emphasize certain words or character combinations from the data: for example, ``JP Morgan Chase Commercial Banking'' has a reasonably high estimated match probability with ``JP Morgan Chase Asset Management'' (0.62). Also, the set of characters used on the LinkedIn network, both Latin and non-Latin, are all included in the model. Hence, the Chinese names for Apple Incorporated and Apple Corporation (``\begin{CJK*}{UTF8}{gbsn}苹果公司\end{CJK*}'' and ``\begin{CJK*}{UTF8}{gbsn}苹果股份有限公司\end{CJK*}'') are quite close in this embedding space as well (indicating high match probability), as the model estimates match probabilities between organizational names across the many languages used on LinkedIn.
\begin{figure}[t]
\includegraphics[width=0.50\textwidth]{TransformerVectorCors.pdf}
\includegraphics[width=0.50\textwidth]{PCFigDistances.jpg}
\caption{Visualizing the machine learning model output. The left panel shows how, on average, organizational alias matches have higher match probabilities compared with the set of all possible non-matches. The right panel shows how similar organizational names are close in this machine-learning generated vector space (which has been projected to two dimensions using PCA).\label{fig:showML}}
\end{figure}
Despite these successes, there are true links that would remain hard to model using this prediction-oriented framework. For instance, the aliases ``Chase Bank'' and ``JP Morgan'' have a relatively low match probability (0.22). To handle such difficult linkage problems, we will need to exploit an entirely different kind of information contained in the LinkedIn network.
\subsection{The Network Representation and Community Detection Approach}\label{method-2-organizational-record-linkage-as-community-detection}
When the semantic mapping assumption described in Equation \ref{eq:semantic} is violated, the linguistic information in alias names has limited usefulness in linking organizations. However, we can still proceed by exploiting a different kind of information contained in the LinkedIn data. This second approach will allow us to weaken the semantic mapping assumption while taking a graph theoretic approach toward organizational name matching.
We represent the data source explicitly as a network, where organizational aliases are nodes. Aliases may be connected by edges, which may be directed or undirected. A cluster of nodes with relatively dense connections is a ``community'' of aliases. Instead of viewing the record linkage tasks as matching two lists of organizations directly, one can instead view the problem as placing a list of organization names into the organizational community to which it belongs. For example, the names ``MIT'' and ``Massachusetts Institute of Technology'' are nodes that will fall into the same alias community. Implicitly, this approach assumes that each real organization has one community of aliases, and one community of aliases identifies a discrete organization.
Because community detection is a well-studied problem that occurs in many different contexts \citep{rohe2011spectral}, we consider two algorithms established in the literature: (a) Markov clustering algorithm \citep{van2008graph} and (b) greedy clustering \citep{clauset2004finding}. We focus on these algorithms because they model the network in two distinct ways and are computationally efficient.
\subsubsection{Alias Linkage Assistance through a Probabilistic Network
Representation}\label{community-detection-in-the-probabilistic-network-representation}
Our first approach towards organizational alias community detection uses Markov clustering \citep{van2008graph}. The Markov clustering algorithm operates on weighted adjacency matrices that have a probabilistic interpretation.
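To fix ideas, the following is a minimal, unoptimized sketch of Markov clustering, alternating expansion (matrix squaring) and inflation (elementwise powering plus renormalization) on a column-stochastic matrix; the four-node alias graph is hypothetical, and production use should rely on the optimized implementations in the cited literature.

```python
def mcl(adj, inflation=2.0, iters=20):
    """Toy Markov clustering on a small adjacency matrix (list of lists).
    Returns communities as a set of frozensets of node indices."""
    n = len(adj)
    # Add self-loops, then column-normalize into a stochastic matrix.
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    def normalize(mat):
        for j in range(n):
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s
        return mat
    m = normalize(m)
    for _ in range(iters):
        # Expansion: square the matrix (random-walk flow spreads out).
        m = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # Inflation: elementwise power strengthens within-community flow.
        m = normalize([[v ** inflation for v in row] for row in m])
    # Read off communities: nodes attracted to the same row cluster together.
    clusters = {}
    for j in range(n):
        attractor = max(range(n), key=lambda i: m[i][j])
        clusters.setdefault(attractor, set()).add(j)
    return {frozenset(c) for c in clusters.values()}

# Hypothetical alias graph: nodes 0-1 ("JP Morgan", "Chase Bank") linked,
# and nodes 2-3 ("University of Michigan", "ICPSR") linked.
adj = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
print(mcl(adj))  # two communities: {0, 1} and {2, 3}
```

Each detected community of aliases is then treated as a single organization for linkage purposes.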
We denote the organization names in the source data as aliases \(a\). These aliases constitute nodes on our graph. As in Table \ref{tab:example}, many aliases appear simultaneously with profile URLs in our source data. One possible approach to defining links between aliases is to assume that two aliases are connected whenever they both link to the same profile URL. However, this approach would place as strong a connection on aliases that rarely link to the same profile URL as those that are frequently connected. For example, the URL \url{linkedin.com/company/university-of-michigan} has been associated with an alias 75,462 distinct times in our data. The vast majority of these connections occur via the alias ``University of Michigan'', but a link with the alias ``Michigan State University'' does in fact occur a handful of times. In defining links on the network, it will help to up-weight the former but down-weight the latter in a data-driven way.
While there are many possible ways to ensure more frequently made connections are weighted higher than connections that are rarer, we adopt an approach inspired by naive Bayesian classification methods. We define the edge weight, \(w_{aa'}\), between node \(a\) and \(a'\) using Equation \ref{formula:edgeweight}: \begin{equation}\label{formula:edgeweight} w_{aa'}=\sum_{u\in\mathcal{U}}\frac{\text{\# of times } a\text{ co-occurs with }u}{\text{\# of times }u\text{ co-occurs with any alias}} \times \frac{\text{\# of times \ensuremath{u} co-occurs with alias }a'}{\text{\# of times }a' \text{ occurs with any profile URL}} \end{equation} These edge weights then make up the full adjacency matrix, \(\mathbf{W}\), which captures the interrelationship between all pairs of aliases.
The connection of this expression to naive Bayesian classifiers requires some elaboration. As a pure formalism, one can write the law of total probability as
\begin{align}\label{formula:complex}
\Pr(A_{i}=a\mid A_{j}=a')=\sum_{u\in\mathcal{U}}\Pr\left(A_{i}=a\mid U=u,A_{j}=a'\right)\Pr\left(U=u\mid A_{j}=a'\right)
\end{align}
By asserting the equation as a formalism, we do not dwell on questions about what exactly the probability of \(A_i\) given \(A_j\) means in this context, which might distract us from the core task of showing how community detection methods on networks can be useful for record linkage problems. Taking this equation as given, then, suppose further one were to ``naively'' assume conditional independence of \(A_i\) and \(A_j\) given \(U\) \citep{rish2001empirical}, or more formally \(\Pr\left(A_{i}=a\mid U=u,A_{j}=a'\right)=\Pr\left(A_{i}=a\mid U=u\right)\), which is similar to the independence assumption of Section \ref{ss:ml}. Equation \ref{formula:edgeweight} is then what would result if one also replaced the true probabilities with sample proportions.
There are several benefits to using this formula, especially over the more obvious ``shared link'' approach. Besides using much more of the data, the naive Bayesian calculation reweights edges in ways that are proportionate to the actual prevalence of relationships found in the data. One corollary is that the edge weights are asymmetric, and the specificity with which an alias and link are shared matters a great deal. For example, ICPSR is a rare alias, but when it does occur, it is very often entered with \url{linkedin.com/company/university-of-michigan}, which in turn usually points to the ``University of Michigan'' alias. Therefore, the weight \(w_{\text{``University of Michigan''}, \text{``ICPSR''}}\) will be close to 1 even if \(w_{\text{``ICPSR''},\text{``University of Michigan''}}\) is very small. By contrast, since ``Michigan State University'' and ``University of Michigan'' are both relatively common aliases, one spurious link between them will not result in large weight in either direction.
Having built an adjacency matrix, \(\mathbf{W}\), the probabilistic network of alias-to-alias links, we next apply the Markov clustering algorithm to it to find communities of aliases that tend to link to the same URLs.
Briefly, this clustering algorithm proceeds in two steps---expansion and inflation---that are repeated until convergence. In the expansion step, the network's adjacency matrix is multiplied by itself \(k\) times. This matrix multiplication simulates the diffusion of a Markov process on the nodes (i.e.~``traveling'' \(k\) steps on the network, with probabilities of where to go defined by \(\mathbf{W}\)). In the inflation step, entries of the resulting matrix are raised to some power \(p\), and the matrix is renormalized into a valid Markov transition matrix. Since small probabilities shrink faster under exponentiation than large probabilities, the inflation step causes higher probability states to stand out (i.e.~likely places are made even more likely). After alternating expansion and inflation, the output converges: row \(i\) will have only one non-zero valued entry in column \(j\), which defines the representative node in the community. All rows that have a one in column \(j\) are part of the same community of aliases linking to (hopefully) the same organizational entity.
This clustering process involves hyperparameters, in particular, the number of matrix multiplications to do in the expansion step (\(k\)) and the power of exponentiation (\(p\)) in the inflation step. For both steps, \(k\) and \(p\) are frequently set to 2. The total number of clusters is not explicitly specified but instead controlled indirectly through \(p\) and \(k\). We adopt the common choice of \(p=k=2\).\footnote{Alternative choices yielded substantially similar results in the applications that follow.}
\begin{figure}[t]
{\centering \includegraphics[width=0.8\linewidth]{Figuresmarkov-illustration-1.pdf}
}
\caption{Illustration of Markov clustering.}\label{fig:markov-illustration}
\end{figure}
Figure \ref{fig:markov-illustration} illustrates the process on a subset of our data. Darker shades reflect heavier weights in the initial transition matrix. Some links are much stronger than others. In the initial weighting, two cliques reflect a set of names associated with ``JP Morgan Chase.'' Another reflects names associated with ``Bank of America.'' However, these initial links are dense, making it difficult to distinguish one cluster of aliases from another. As the algorithm iterates, some links weaken and disappear while others strengthen. Eventually, each node links to exactly one other node. Notably, the final cliques contain lexicographically dissimilar nodes that do indeed belong in the same cluster. For example, the ``Chase'' clique contains ``Wamu'', ``Paymenttech'', and ``\begin{CJK*}{UTF8}{gbsn}摩根大通\end{CJK*}'' which are all Chase affiliates. The ``Bank of America'' clique includes ``Countrywide Financial'' and ``MBNA,'' both under the Bank of America umbrella. In this way, the semantic mapping assumption from Section \ref{ss:ml} has been weakened; graph-theoretic information has been exploited to assist organizational matching.
\subsubsection{Alias Linkage Assistance through a Bipartite Network Representation}\label{community-detection-in-a-bipartite-network-representation}
Although the network formulation just described has the benefit of a probabilistic interpretation, it involves processing the data with some assumptions. Here, we offer a second community detection approach requiring no extra processing of the LinkedIn network data (formally, we will keep the data's natural bipartite representation instead of projecting the network into alias nodes).
Here, we shall assume that both organization names and profile URLs are ``aliases'' for a real organization, which is synonymous with a community of aliases. Organization names do not link directly to one another; rather, organization names are associated with URLs. Therefore, these associations can be understood as representing a bipartite network: we have alias nodes and URL nodes and undirected links between aliases and URLs represented by weighted edges (with the weights being the number of times alias \(a\) is linked to URL \(u\)).
In this bipartite network representation, linking datasets using name aliases is equivalent to finding the best alias-URL community structure, where aliases within community \(k\) all tend to link to URLs within that community too. The final clustering of aliases then provides a directory that researchers can use for determining whether alias \(a\) and \(a'\) refer to the same organization or not. We do not mean to suggest that the alias matching task is equivalent in some sense to the community detection task, but only that community detection, viewed from the right perspective, can offer assistance to the matching task in organizational linkage.
There are many possible criteria for quantifying the quality of one clustering or another in a bipartite network. We rely on previous literature, particularly \citet{clauset2004finding}, and use a criterion, called a modularity score, that (a) is \(0\) when community ties between aliases and URLs occur as if communities were assigned randomly and (b) gets larger when the proposed community structure places aliases that tend to link to the same URLs in the same community. Details are left to Section \ref{s:Greedy} in the Appendix. Here, the main hyperparameter is the stopping criterion for greedy optimization. Like in the Markov clustering approach, the number of clusters is not itself a hyperparameter. Instead, the number of clusters is a byproduct of the optimization and this stopping threshold. The end product of the community detection clustering applied to the bipartite representation of the LinkedIn data is a set of names that group together in referring to the same organization (with this information extracted through the name-to-URL link behavior).
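As an illustration of the bipartite strategy, the sketch below builds a small weighted alias-URL graph and applies greedy (Clauset-Newman-Moore) modularity maximization---the criterion of \citet{clauset2004finding}---via its \texttt{networkx} implementation. The aliases, URLs, and counts are hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical alias-URL co-occurrence counts (illustrative only).
cooccurrences = [
    ("JPMorgan Chase", "linkedin.com/company/jpmorgan-chase", 40),
    ("Chase", "linkedin.com/company/jpmorgan-chase", 25),
    ("Wamu", "linkedin.com/company/jpmorgan-chase", 3),
    ("Bank of America", "linkedin.com/company/bank-of-america", 50),
    ("MBNA", "linkedin.com/company/bank-of-america", 4),
]

# Bipartite graph: alias nodes on one side, URL nodes on the other,
# with edge weights equal to co-occurrence counts.
G = nx.Graph()
for alias, url, n in cooccurrences:
    G.add_edge(alias, url, weight=n)

# Greedy modularity maximization; each resulting community mixes the
# aliases and URLs that tend to occur together.
communities = greedy_modularity_communities(G, weight="weight")
directory = {node: cid for cid, members in enumerate(communities)
             for node in members}
```

The resulting `directory` maps each alias (and URL) to a community identifier, which is the form of output researchers would use to decide whether two aliases refer to the same organization.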
\subsection{Approach 3: Unified Organizational Record Linkage Using the LinkedIn Corpus}\label{s:UnifiedApproach}
Finally, to make use of both the graph theoretic and semantic information contained in the LinkedIn corpus, we here propose a unified approach to organizational record linkage using assistance from the LinkedIn data. The LinkedIn-calibrated machine learning model uses complex semantic information to assist matching, but does not make use of graph-theoretic information. The network-based assistance methods use network information, but do not use semantic information to help link names in that network. We therefore propose this third, unified approach that uses both the semantic and graph theoretic information content contained in the massive LinkedIn corpus.
\begin{figure}
\centering
\captionsetup[sub]{
labelformat=step}
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=1\linewidth]{checkered_flag.pdf}
\caption{Direct linkage through machine learning-optimized string matching.}
\label{fig:Unified-a}
\end{subfigure}
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=1\linewidth]{checked_flags2.pdf}
\caption{Indirect linkage through directory constructed using community detection.}
\label{fig:Unified-b}
\end{subfigure}
\caption[Checkered Flags]{Checkered flag diagrams illustrating the unified approach to name record linkage.}
\label{fig:Unified}
\end{figure}
The unified approach is an ensemble of both previous methods and involves three steps. Figure \ref{fig:Unified} presents checkered flag diagrams to illustrate a toy example involving the merging of two datasets $\mathbf{X}$ and $\mathbf{Y}$. $\mathbf{X}$ contains data about Wells Fargo Bank, JP Morgan Chase Bank, and Goldman Sachs. $\mathbf{Y}$ contains data about Wells Fargo Advisors, Washington Mutual (at one time a wholly owned subsidiary of JP Morgan Chase), and Saks Fifth Avenue. Ideally, one would hope to link the Wells Fargo and JP Morgan Chase subsidiaries without matching Goldman Sachs to Saks Fifth Avenue. In step (a), machine learning-assisted name linkage is directly applied between the two datasets. Similar to fuzzy string matching, scores are calculated on the cross product of two sets of names; scores exceeding a threshold (in the figure, set to 0.5) are said to match. In this particular example, fuzzy matching could produce similar results, although the thresholds and scores would differ, and as a result, so too would performance. In step (b), machine learning-assisted name linkage is applied to an intermediary directory built using community detection. Initially, dataset $\mathbf{X}$ is linked to the directory ($\mathbf{D}$) associating the set of all aliases on LinkedIn with a community of names identifier from the graph clustering step.
Again, we perform this linkage via the machine-learning-assisted string matching method. As the Figure shows, both under-matching and over-matching are expected. For instance, the Chinese name for Goldman Sachs will have low similarity with its English name using any character-based method, but this kind of problem also happens even within tasks where all names are in the same language. Further, organizations often have multiple aliases that have high similarity in terms of characters or words that tend to co-occur (e.g., ``Wells Fargo Bank, NA'' and ``Wells Fargo Mortgage''). Eliminating redundant matches is therefore important. Machine-learning linkage is also performed on dataset $\mathbf{Y}$ against the directory $\mathbf{D}$, and this enables the datasets $\mathbf{X}$ and $\mathbf{Y}$ to be combined through the community IDs with which they have previously been associated. Note that through step (b), Washington Mutual in dataset $\mathbf{Y}$ finds its community, and this allows linkage to dataset $\mathbf{X}$. Human coders, let alone character-based methods, might easily miss such connections. In step (c), we combine the results from steps (a) and (b) and remove redundancies.
This toy example shows how the unified approach can maximize the potential of both methods presented thus far. Through step (a), it can link datasets for organizations that do not appear in LinkedIn but whose naming conventions are similar. Through step (b), the unified method picks up on relationships that are not apparent from names alone and would otherwise require specialized domain expertise to identify. While bringing a great deal more data to the linkage problem should help in multiple ways, how much it helps will certainly vary by application. Therefore, we now turn to the task of assessing how these different algorithmic approaches and representations of the LinkedIn corpus perform in examples from contemporary social scientific practice.
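The three steps of the unified approach can be sketched as follows. The similarity function and directory below are toy stand-ins for the LinkedIn-calibrated ML matcher and the community-detection output, respectively; the names mirror the toy example in Figure \ref{fig:Unified}.

```python
def match_prob(a, b):
    """Stand-in for the LinkedIn-calibrated ML match score in [0, 1];
    here, token-set Jaccard similarity for illustration."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Toy directory: alias -> community ID from the graph-clustering step.
directory = {
    "Wells Fargo Bank": 1, "Wells Fargo Advisors": 1,
    "JP Morgan Chase Bank": 2, "Washington Mutual": 2,
}

def unified_link(X, Y, threshold=0.5):
    links = set()
    # Step (a): direct ML-scored linkage between the two datasets.
    links |= {(x, y) for x in X for y in Y if match_prob(x, y) >= threshold}
    # Step (b): indirect linkage through the community-ID directory.
    def community(name):
        best, cid = max((match_prob(name, d), c) for d, c in directory.items())
        return cid if best >= threshold else None
    links |= {(x, y) for x in X for y in Y
              if community(x) is not None and community(x) == community(y)}
    # Step (c): the set union has already removed redundant matches.
    return links
```

On the toy datasets of the figure, the direct step links the two Wells Fargo subsidiaries, the directory step links JP Morgan Chase to Washington Mutual, and Goldman Sachs is (correctly) left unmatched to Saks Fifth Avenue.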
\subsection{Summary}
We have presented several high-level approaches to using a massive dataset of human-contributed records about organizations to assist with organizational name linkage. The first used machine learning and the second used community detection. The third combined the strengths of both. We are aware that there are a large number of alternative modeling strategies that fall within both buckets of community detection and machine learning. That said, although we have experimented with several possible permutations, we have not found truly substantial differences in performance, though we cannot rule out that some alternative modeling strategy might perform substantially better. The important point is that we expect many implementations of these high-level strategies to produce substantial gains over contemporary research practice on this very common and important task.
\section{Illustration Tasks Using Campaign Contribution and Lobbying Data}\label{three-illustration-tasks-using-campaign-contribution-lobbying-and-cross-language-data}
\subsection{Method Evaluation}\label{method-evaluation}
Before we describe our illustration tasks, we first introduce our competitive baseline and our evaluation metrics. This introduction will help put the performance of the proposed methods in context.
\subsubsection{Fuzzy String Matching Baseline}\label{fuzzy-string-matching-baseline}
We examine the performance of the LinkedIn-assisted methods against a fuzzy string matching baseline. While there are many ways to calculate string similarity, we focus on fuzzy string matching using the Jaccard distance measure in order to keep the number of comparisons manageable. Other string discrepancy measures, such as cosine distance, produce similar results.
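For reference, a minimal version of this baseline computes Jaccard distance over character \(q\)-grams (here, bigrams). This is one common implementation choice; the exact tokenization used in practice may differ.

```python
def qgrams(s, q=2):
    """Set of character q-grams of a lowercased string."""
    s = s.lower()
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard_distance(a, b, q=2):
    """Jaccard distance over q-gram sets: 1 - |A & B| / |A | B|."""
    A, B = qgrams(a, q), qgrams(b, q)
    return 1 - len(A & B) / len(A | B)
```

Identical strings have distance 0, strings sharing no \(q\)-grams have distance 1, and longer suffixes such as ``Holdings PLC'' push otherwise related names far apart, which is the failure mode discussed in Task 1 below.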
\subsubsection{Performance Metrics}\label{performance-metrics}
We consider two measures of performance for each organizational matching algorithm. We will examine these measures as we vary the distance threshold for accepting a match between two aliases.\footnote{Results for the network-based linkage approaches also vary with this parameter because we first match aliases with entries in the directory in order to find the position of those aliases within the community structure of the LinkedIn network.}
First, we examine the fraction of true matches discovered as we vary the acceptance threshold. This value is defined to be
\begin{equation}
\textrm{True positive rate} = \frac{\textrm{\# of true positives found}}{\textrm{\# of true positives in total}}
\end{equation}
This measure is important because, in some cases, researchers may have the willingness or ability to manually evaluate the set of proposed matches, rejecting false positive matches. The true positive rate is therefore more relevant for the setting where scholars use an automated method as an initial processing step and then evaluate the resulting matches themselves, as may occur for smaller match tasks.
While the true positive rate captures our ability to find true matches, it does not weigh the cost involved in deciding between true positives and false positives. Therefore, we also examine a measure that considers the presence of true positives, false positives, and false negatives known as the \(F_\beta\) score \citep{lever2016classification}. This score is defined as
\begin{equation}
F_\beta = \frac {(1 + \beta^2) \cdot \mathrm{true\ positive} }{(1 + \beta^2) \cdot \mathrm{true\ positive} + \beta^2 \cdot \mathrm{false\ negative} + \mathrm{false\ positive}}
\end{equation}
In the best-case scenario, the \(F_\beta\) score is 1, which occurs when all true matches are found with no false negatives/positives. In the worst-case scenario, the score approaches 0, which occurs when few true positives are found or many false negatives/positives are made. The \(\beta\) parameter controls the relative costs of false negatives compared to false positives. If \(\beta>1\), false negatives are regarded as more costly than false positives; if \(\beta<1\), then the reverse. In the matching context, errors of \emph{inclusion} are typically less costly than errors of \emph{exclusion}: the list of successful matches is usually shorter and easier to double-check than the list of non-matched pairs. Therefore, we examine the \(F_2\) score, a common choice in evaluation tasks (e.g., \citet{devarriya2020unbalanced}). In contrast to the true positive rate, the \(F_2\) measure is more relevant for the setting where the datasets to be merged are so large that scholars cannot reasonably evaluate all the resulting matches manually.
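Both metrics are simple functions of the match counts; a reference implementation of the two formulas above is:

```python
def true_positive_rate(tp, fn):
    """Fraction of true matches discovered: TP / (TP + FN)."""
    return tp / (tp + fn)

def f_beta(tp, fp, fn, beta=2.0):
    """F_beta score; beta > 1 penalizes false negatives more heavily."""
    num = (1 + beta ** 2) * tp
    return num / (num + beta ** 2 * fn + fp)
```

For example, with 8 true positives, 4 false positives, and 2 false negatives, recall (0.8) exceeds precision (0.67), so the recall-weighted \(F_2\) score exceeds the balanced \(F_1\) score.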
\subsubsection{Comparing Algorithm Performance Across Acceptance
Thresholds}\label{comparing-algorithm-performance-across-acceptance-thresholds.}
Approximate matching algorithms have a parameter that controls how close is close enough to deem a match acceptable. Two algorithms might perform differently depending on how the acceptance threshold parameter is set. Unfortunately, the parameters used by our machine learning and network-based algorithms are not directly comparable. A change of 0.1 in the match probability tolerance under the ML algorithm implies a much different change in matched dataset size than a 0.1 change in the Jaccard string distance tolerance. To compare the performance of these algorithms, our figures and discussion focus on the size of matched dataset an acceptance threshold induces. The most stringent choice produces the smallest dataset (i.e., consisting of the exact matches) while the lowest possible acceptance threshold produces the cross product of the two datasets (i.e., everything matches everything). Between the two, different thresholds produce larger and smaller datasets. By comparing performance across matched dataset sizes, we can evaluate how the algorithms perform for different acceptance thresholds.
\subsection{Task 1: Matching Performance on a Lobbying
Dataset}\label{task-1-matching-performance-on-a-lobbying-dataset}
We first illustrate the use of the organizational directory on a record linkage task involving lobbying and the stock market. \citet{Libgober2020a} shows that firms that meet with regulators tend to receive positive returns in the stock market after the regulator announces the policies on which those firms lobbied. These returns are significantly higher than the positive returns experienced by market competitors and firms that send regulators written correspondence. Matching of meeting logs to stock market tickers is burdensome because there are almost 700 distinct organization names described in the lobbying records and around 7,000 public companies listed on major US exchanges. Manual matching typically involves research on these 700 entities using tools such as Google Finance. While the burden of researching seven hundred organizations in this fashion is not enormous, \citet{Libgober2020a} only considers meetings with one regulator. If one were to increase the scope to cover more agencies or all lobbying efforts in Congress, the burden could become insurmountable.
Treating the human-coded matches in \citet{Libgober2020a} as ground truth, our results show how the incorporation of the LinkedIn corpus into the matching process can improve performance. Figure \ref{fig:performance} shows that the LinkedIn-assisted approaches almost always yield higher \(F_2\) scores and true positive rates across the range of acceptance thresholds. The highest \(F_2\) score is over 0.6, a level achieved by the unified approaches, the machine-learning approach, and the bipartite graph-assisted matching. The best-performing algorithm across the range of acceptance thresholds is the unified approach using the bipartite network representation combined with the distance measure obtained via machine learning. The percentage gain in performance of the LinkedIn-based approaches is higher when the acceptance threshold is closer to 0; as we increase the threshold so that the matched dataset is ten or more times larger than the true matched dataset, the \(F_2\) score for all algorithms approaches 0, and the true positive rate approaches 1.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{Example2_reduceDataFALSE_truePositives.pdf}\includegraphics[width=0.45\textwidth]{Example2_reduceDataFALSE_Fscore.pdf}
\caption{We find that dataset linkage using any one of the approaches using the LinkedIn network obtains favorable performance relative to fuzzy string matching both when examining only the raw percentage of correct matches obtained (left panel) and when adjusting for the rate of false positives and false negatives in the $F_2$ score (right panel). \label{fig:performance}}
\end{figure}
It is also instructive to consider an example from this linkage task where fuzzy matching failed, but the LinkedIn approach, for example, the Markov clustering method, had success. Fuzzy matching fails to link the organizational log entry associated with ``HSBC Holdings PLC'' to the stock market data associated with ``HSBC.'' Their fuzzy string distance is 0.57, which is much higher than the distance of ``HSBC Holdings PLC'' to its fuzzy match (0.13 for ``AMC Entertainment Holdings, Inc.''). ``HSBC Holdings PLC'', however, has an exact match in the LinkedIn-based directory, so that the two organizations are successfully paired using the patterns learned from the LinkedIn corpus.
Overall, results from this task illustrate how our methods would appear to yield superior performance compared to a leading alternative method in the common use case when researchers do not have access to shared covariates across organizational datasets.
\subsection{Task 2: Linking Financial Returns and Lobbying Expenditures from Fortune 1000 Companies}\label{task-3-linking-financial-returns-and-campaign-contributions-from-fortune-1000-companies}
In our next evaluation exercise, we focus on a substantive question drawn from the study of organizational lobbying: how do a company's economic resources correlate with political contribution behavior? Based on prior research \citep{chen2015corporate}, we should expect a positive association between company size and lobbying activity: larger firms have more resources that they can use in lobbying, perhaps further increasing their performance \citep{ridge2017beyond,eun2021aspirations}. We here explore an example where there are such strong theoretical expectations in order to understand when results from different organizational matching algorithms do or do not yield plausible results---something that would not be possible without a robust \emph{a priori} hypothesis.
For this exercise, we use firm-level data on the total dollar amount spent between 2013-2018 on lobbying activity. This data has been collected by Open Secrets, a non-profit organization focused on improving access to publicly available federal campaign contributions and lobbying data \citep{OpenSecrets}. We match this firm-level data to the Fortune 1000 dataset on the largest 1000 US companies \citep{Fortune1000}, where the data point of interest is the mean total company assets in the 2013-2018 period. The key linkage variable will be organizational names that are present in the two datasets. We manually obtained hand-coded matches to provide ground truth data.
In Figure \ref{fig:fortuneSubstantive}, we explore the substantive implications of different matching choices, that is, how researchers' conclusions may be affected by the quality of organizational matches. In the left panel, we see that the true coefficient relating log organizational assets to log campaign contributions is about 2.5. In the dataset constructed using fuzzy matching, this coefficient is underestimated by about half. The situation is better for the datasets constructed using the LinkedIn-assisted approaches, with the effect estimates being closer to the true value. Notice that, for all algorithms examined in the left panel of Figure \ref{fig:fortuneSubstantive}, there is significant attenuation bias in the estimated coefficient towards 0 as we increase the size of the matched dataset. The inclusion of poor-quality matches injects random noise into estimation, biasing the coefficient severely towards 0. Overall, we see from the right panel that match quality depends on algorithm choice as well as string distance threshold, with encouraging results for the approaches proposed here.
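The attenuation-bias mechanism just described is easy to reproduce in a toy simulation: re-pairing a growing fraction of observations at random, as poor-quality matches effectively do, drives an OLS slope toward 0. All numbers below are illustrative and unrelated to the paper's data.

```python
import random

random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.5 * xi + random.gauss(0, 1) for xi in x]   # true slope = 2.5

def ols_slope(x, y):
    """Simple-regression OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def slope_with_mismatches(frac):
    """Re-pair a random fraction of the y values, mimicking bad matches."""
    y_bad = y[:]
    idx = random.sample(range(n), int(frac * n))
    shuffled = idx[:]
    random.shuffle(shuffled)
    for i, j in zip(idx, shuffled):
        y_bad[i] = y[j]
    return ols_slope(x, y_bad)
```

Mismatched pairs contribute roughly zero covariance, so re-pairing half the sample pulls the estimated slope from about 2.5 toward roughly half that value, mirroring the decay visible in the left panel of Figure \ref{fig:fortuneSubstantive}.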
The right panel of Figure \ref{fig:fortuneSubstantive} presents a kind of plot that is particularly helpful in understanding the output of any choice of string matching algorithm, LinkedIn-assisted or otherwise. For a given choice of matching algorithm, the figure presents the sequence of estimated coefficients as the size of the matched dataset varies with the distance threshold. It also presents 95\% confidence intervals (obtained from heteroskedasticity-consistent standard errors) as a function of the distance as well. We can therefore visually decompose the point and uncertainty estimates as a function of the matched dataset size (where larger dataset sizes---beyond a point---mean poorer data quality). Researchers should expect effect estimates to decay towards 0 as the dataset size grows beyond a point due to the attenuation bias effect. In short, with this sort of illustration, researchers can visually examine the point at which attenuation bias begins to kick in and can reason about the estimated reliability given their preferred choice of matched dataset size. This kind of plot, along with all matching estimators used in this analysis, is available in the \verb|LinkOrgs| package (see Section \ref{s:LinkOrgs} for a tutorial).
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{Example4_reduceDataFALSE_Application.pdf}
\includegraphics[width=0.5\textwidth]{Example4_reduceDataFALSE_Application2.pdf}
\caption{We see the coefficient on log(Assets) for predicting log(1+Contributions) using the ground truth data is about 2.5 (bold gray line, 95\% confidence interval displayed using dotted gray lines). We see in the left panel that, at its best point, fuzzy matching underestimates this quantity by about half. The LinkedIn-based matching algorithms better recover the coefficient. The right panel illustrates the robustness figure researchers should consider including in their analyses following organizational matching tasks (the matching results from the machine learning algorithm are used).\label{fig:fortuneSubstantive}}
\end{figure}
\section{Discussion: Perspectives on the Limits and Future Uses of the LinkedIn Data in Improving Record Linkage}\label{discussion}
We have shown how to use half a billion user-contributed records from a prominent employment networking site to help link datasets about organizations. It is hard to overstate how frequently researchers find themselves needing to link datasets based on names alone because shared covariates are not available. Existing methods, notably human coding, fuzzy matching, or some combination of the two, are costly to apply and often involve ad hoc decision-making by scholars about what seems to be working well (or well enough). Our alternative approach uses machine learning to understand the meaning of words in names, and also network information contributed by users for cases when the words are not terribly helpful for linking records. These approaches are summarized in Table \ref{tab:ModelComparison}.
Our first framework uses the LinkedIn data as a massive training corpus for learning how to distinguish matching name pairs from non-matching pairs. We outlined assumptions on which the approach depends and developed a machine learning model that uses continuous, as opposed to discrete, character representations. Future research can likely improve upon the machine learning models as these approaches become more sophisticated all the time. For example, in the context of the semantic mapping assumption, new machine learning architectures could better approximate the function mapping two names into their match probability. Moreover, weighting strategies could improve the robustness of the validity assumption by up-weighting in the model training portions of the LinkedIn network most relevant to a particular dataset. For example, we have taken all user-contributed records at face value, but lower quality records might be down-weighted and thereby improve aggregate performance. While such strategies would come at an additional computational cost, they could improve performance.
Our second framework uses the LinkedIn data to build a directory of organizational name matches by using the data's intrinsic network structure. This approach captures information about how the different names interrelate, and, by finding communities in that network structure, we can find organizational name pairs without using linguistic information. While we explore two distinct network modeling and community detection approaches, additional research could improve on this by more explicitly tying linguistic and network information in predicting name matches in a joint manner. In contrast, our unified approach predicts name matches using machine learning and builds name communities separately, although both approaches are then applied sequentially.
\begin{table}[ht]
\centering
\renewcommand\arraystretch{1.5}
\caption{Comparing different approaches to organizational record linkage.}\label{tab:ModelComparison}
\begin{tabular}[t]{
>{\raggedright}p{0.23\linewidth}
>{\raggedright}p{0.15\linewidth}
>{\raggedright}p{0.15\linewidth}
>{\raggedright\arraybackslash}p{0.15\linewidth}
>{\raggedright\arraybackslash}p{0.15\linewidth}
}
\toprule\midrule
& {\it Fuzzy String Matching} & {\it LinkedIn-Calibrated ML} & {\it LinkedIn Network Approaches} & {\it Combined ML+Network Approach}
\\ \midrule
\underline{\it Character} & & & &
\\ Optimized for organizational name matching? & No & Yes & No & Partially
\\ Text representation & Discrete & Continuous & Discrete & Continuous
\\ Information utilized & Semantic & Semantic & Graph theoretic & Semantic + graph theoretic
\\ Hyper-parameters& Acceptance threshold; $q$-gram settings& Acceptance threshold; ML model architecture & Acceptance threshold; $q$-gram settings; clustering hyperparameters & Acceptance threshold; ML model architecture; clustering hyperparameters
\vspace{0.4cm}
\\ \underline{\it Data Requirements} & & & &
\\ Requires access to saved matching model parameters? & No & Yes & No & Yes
\\ Requires access to saved alias clustering? & No & No &Yes & Yes
\\ \bottomrule
\end{tabular}
\end{table}%
We apply these approaches to different research tasks drawn from substantive papers. We find performance gains that are substantial relative to current practice. For example, in the second application, regression coefficients from our LinkedIn-assisted merges fall within the confidence interval found when using the human-coded datasets, but not when using fuzzy matching alone.
Our results have several implications for applied researchers. The choice of record linkage method is potentially consequential for the ultimate regressions that one runs and intends to present to other scholars. Using our unified approach, we were able to estimate a coefficient of theoretical interest to within a 95\% confidence interval of the estimate arrived at with ground truth. Using other methods, and particularly fuzzy matching, we were unable to recover the coefficient of interest: although the sign was correct, the magnitude was statistically and substantively very different. Typically, scholars do not have access to ground truth and therefore will not have a sense of how well or how badly they are doing in aggregate. This is a potentially serious problem affecting research on organizations; however, we do not believe that our research should cast substantial doubt on what scholars have been doing. Typically, researchers use a mix of hand-coding and automated methods, and we expect that this kind of approach will do better than a purely automated approach (especially one relying on string distance metrics alone). We think that mixed workflows will still make sense with our approach, and we expect that the higher number of true positives and the better ratio of true positives to false positives that our methods provide will substantially reduce the researcher costs of linking data on organizations. For those linkage problems that are too big for mixed workflows, our work suggests that it is important to do as well as possible and to test sensitivity to linkage approaches. We provide some initial examples of how that might be done.
For methodologically interested researchers, we have released the database of name embeddings from the machine learning approach (available on a stable Dataverse page, URL to be released upon publication). That is, we have applied the model from Section \ref{ss:ml} and obtained numerical representations for each LinkedIn organization name. These could be useful for other research endeavors (e.g., are organizations with similar embeddings similar in other ways?). We have also made the LinkedIn name pair training data available so that others can build upon this work. Due to the computational effort required in calculating the LinkedIn match probabilities from Equation \ref{eq:GetMatchProb}, we can only make available a subset of the match data. Still, this dataset comprises over 6.3 million match examples.
While the methods introduced here would seem to improve organizational match performance on real data tasks, there are many avenues for future extensions in addition to those already mentioned.
First, to incorporate auxiliary information and to adjust for uncertainty about merging in post-merge analyses, probabilistic linkage models are an attractive option for record linkage tasks on individuals \citep{Enamorado2019}. In such models, a latent mixing variable indicates whether a pair of records does or does not represent a match. This latent mixing variable is inferred through an Expectation Maximization (EM) algorithm incorporating information about the agreement level for a set of variables such as birthdate, name, place of residence, and, potentially, employer. Information from our LinkedIn-assisted algorithms can be readily incorporated in these algorithms for inferring match probabilities on individuals.
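To make this class of models concrete, the following is a minimal sketch of a two-component mixture EM on binary field-agreement vectors, in the spirit of the latent-mixing-variable model described above. All data, parameter values, and the three agreement fields are synthetic stand-ins (e.g., for agreement on birthdate, name, and place of residence); this is an illustration, not the estimator used in the cited work.

```python
import numpy as np

def em_linkage(gamma, n_iter=200):
    """Two-class Bernoulli mixture EM for record-pair agreement vectors.

    gamma: (n_pairs, n_fields) binary agreement matrix (1 = fields agree).
    Returns the posterior match probability for each pair.
    """
    n, k = gamma.shape
    lam = 0.5                      # P(match), the latent mixing weight
    m = np.full(k, 0.9)            # P(field agrees | match)
    u = np.full(k, 0.1)            # P(field agrees | non-match)
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a match
        log_m = gamma @ np.log(m) + (1 - gamma) @ np.log(1 - m)
        log_u = gamma @ np.log(u) + (1 - gamma) @ np.log(1 - u)
        num = lam * np.exp(log_m)
        post = num / (num + (1 - lam) * np.exp(log_u))
        # M-step: update mixing weight and agreement probabilities
        lam = post.mean()
        m = np.clip(post @ gamma / post.sum(), 1e-6, 1 - 1e-6)
        u = np.clip((1 - post) @ gamma / (1 - post).sum(), 1e-6, 1 - 1e-6)
    return post

# Synthetic pairs: first 50 mostly agree on 3 fields, last 50 mostly disagree.
rng = np.random.default_rng(0)
matches = rng.random((50, 3)) < 0.95
nonmatches = rng.random((50, 3)) < 0.05
gamma = np.vstack([matches, nonmatches]).astype(float)
posterior = em_linkage(gamma)
```

LinkedIn-derived match probabilities could enter such a model as an additional comparison field or as an informative prior on the mixing variable.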
The methods described here can also incorporate covariate information about companies. For instance, researchers can incorporate such information in the final layer of our machine learning model and re-train that layer using a small training corpus. This process, an application of transfer learning, enables extra information to be brought to bear while also retaining the rich numerical representations obtained from the original training process performed on our massive LinkedIn dataset. Finally, the approaches here are complementary to those described in \citet{kaufman_klevs_2021}, and it would be interesting to explore possible combined performance gains.
\section{Conclusion}\label{conclusion}
Data sources that are important to scholars of organizational politics often lack common covariate data. This lack of shared information makes it difficult to apply probabilistic linkage methods and motivates the widespread use of fuzzy matching algorithms. Yet, fuzzy matching is often an inadequate tool for the task at hand, while human coding is frequently very costly, particularly if one wants human coders with the specialized domain knowledge necessary to generate high-quality matches. We have presented new tools for improving the matching of organizational entities using over half a billion open-collaborated employment records from a prominent online employment network. This approach can match organizations sharing no common words or even characters. We validate the approach on example tasks and show favorable performance relative to the most common automated alternative (fuzzy matching), with gains of up to 60\%. We also illustrated how substantive insights can be improved when match quality is increased, with better statistical precision and predictive accuracy. Our primary contribution is showing how one can apply network analysis methods and machine learning to the record linkage problem for organizations. Future work may attempt more sophisticated approaches to machine learning architecture, graph construction, community detection, and hybrid matching while using this unique and useful data source. \hfill \(\square\)
\section{Data Availability Statement}
All methods are accessible in an open-source software package available at \Verb|github.com/cjerzak/LinkOrgs-software|. Replication data will be made available in a Harvard Dataverse.
\printbibliography
\renewcommand{\thefigure}{A.I.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{A.I.\arabic{table}}
\setcounter{table}{0}
\renewcommand{\thesection}{A.I.\arabic{section}}
\setcounter{section}{1}
\setcounter{subsection}{0}
\newpage \clearpage
\setcounter{page}{1}
\section*{Appendix I: Machine Learning Model Details}\label{ss:ModelingDetails}
\subsection{Step 1. Obtaining Organizational Name
Representations}\label{step-1.-obtaining-organizational-name-representations}
We first outline the modeling strategy for each organizational name.
\begin{center}
\emph{Input}: An organizational name (e.g., ``j.p. morgan chase'')
\end{center}
\begin{enumerate}
\item Break each name into words.
\begin{itemize}
\item For each word of length $l_w$, query the vector representation for each character to form a $l_w\times 128$-dimensional representation for that word.
\item Apply batch normalization to the feature dimension.
\item Apply a bidirectional LSTM model to each word to obtain a representation for each word.
\end{itemize}
\item For each alias of $l_a$ words, query the word vectors obtained from the previous step to form a $l_a\times 128$-dimensional representation for the alias.
\begin{itemize}
\item Apply batch normalization to the feature dimension.
\item Apply a bidirectional LSTM model to each alias to obtain a representation for each sequence of words.
\end{itemize}
\item Apply two feed-forward neural network layers to the initial alias representation to form the final alias representation.
\end{enumerate}
\begin{center}
\emph{Output}: 256-dimensional representation of the organizational name
\end{center}
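To make the pipeline concrete, here is a deliberately simplified sketch in which mean pooling stands in for the batch-normalization and bidirectional-LSTM stages, and an 8-dimensional character space stands in for the 128-dimensional one. The random character vectors are purely illustrative; in the actual model all of these parameters are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative stand-in for the 128-dimensional character features
char_vecs = {c: rng.normal(size=DIM) for c in "abcdefghijklmnopqrstuvwxyz. "}

def word_repr(word):
    # Stack the l_w x DIM character matrix for the word, then pool it.
    # (Mean pooling stands in for batch norm + bidirectional LSTM.)
    mat = np.stack([char_vecs[c] for c in word])
    return mat.mean(axis=0)

def name_repr(name):
    # Stack the l_a x DIM word matrix for the alias, then pool it
    # to obtain the final alias representation.
    words = np.stack([word_repr(w) for w in name.split()])
    return words.mean(axis=0)

e = name_repr("j.p. morgan chase")
```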
\subsection{Step 2. Transforming Name Representations into Match Probabilities}\label{step-2.-transforming-name-representations-into-match-probabilities}
With the name representations in hand, we now discuss in greater detail how we form the final match probabilities.
\begin{center}
\emph{Input}: Numerical representations/embeddings for two organizational names
\end{center}
\begin{enumerate}
\item Take the normalized Euclidean distance between the two embeddings, $e_i$, $e_j$:
\begin{align*}
d_{ij} = \frac{|| e_i - e_j ||}{\sqrt{\textrm{dim}(e_i)}}.
\end{align*}
\item Form the match probability from the distance using unidimensional logistic regression:
\begin{equation*}
\Pr(O_i = O_j | A_i = a, A_j = a') = \frac{1}{1+\exp(-[\beta_0+\beta_1 d_{ij}])}.
\end{equation*}
\end{enumerate}
\begin{center}
\emph{Output}: A match probability indicating the probability that two names refer to the same organization
\end{center}
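This second step can be sketched directly; the coefficient values $\beta_0$, $\beta_1$ below are hypothetical placeholders for the fitted values (in practice they are learned jointly with the embedding model).

```python
import numpy as np

def match_probability(e_i, e_j, beta0=5.0, beta1=-10.0):
    """Normalized Euclidean distance between two embeddings mapped to a
    match probability by a unidimensional logistic regression.
    beta0 and beta1 are illustrative; the paper's values are estimated."""
    d = np.linalg.norm(e_i - e_j) / np.sqrt(e_i.shape[0])
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * d)))

e = np.ones(256)
p_same = match_probability(e, e)    # distance 0 -> probability near 1
p_far = match_probability(e, -e)    # distance 2 -> probability near 0
```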
The batch normalization layers are included to avoid exploding gradients (e.g., large values are centered and scaled) and to make the learning more robust to initialization \citep{santurkar2018does}. The \(\sim\)\hspace{-0.1mm}1.3 million parameters used in steps 1 and 2 are updated jointly using a modified form of stochastic gradient descent. (Future progress can likely be made here, given the development of large language models such as GPT-3 involving billions of parameters.) We next give more details about the sampling procedure involved in the model training.
\subsection{Sampling Design Details for Training of the Machine Learning Model}\label{sampling-design-details-for-training-of-the-machine-learning-model}
Recall that we seek to optimize the following: \[ \textrm{minimize} \sum_{a, a' \in \mathcal{A}} D_{\textrm{KL}}\left(\widehat{\Pr}(O_i=O_j|A_i=a,A_j=a') \; || \; \Pr(O_i=O_j|A_i=a,A_j=a')\right). \] One challenge in performing this optimization in practice is that most \(a'\) are very distinct semantically from \(a\), so that the learning process is slowed by the vast number of ``easy'' predictions (e.g., where \(\Pr(O_i=O_j|A_i,A_j)\) is clearly 0). To make matters worse, most match probabilities are 0, a kind of imbalance that can impair final model performance \citep{kuhn2013remedies}.
We adopt two measures to address these two problems. First, we implement a balanced sampling scheme: in the optimization, we ensure half of all training points have \(\Pr(O_i=O_j|A_i=a,A_j=a')=0\) and half have \(\Pr(O_i=O_j|A_i=a,A_j=a')>0\).
Second, we select \(a, a'\) pairs that will be maximally informative so that the model is able to more quickly learn from semantically similar non-matches (e.g., ``The University of Chicago'' versus ``The University of Colorado'') or semantically distinct matches (e.g., ``Massachusetts Institute of Technology'' and ``MIT''). In our adversarial sampling approach (which is somewhat similar to \citet{kim2020model}), we select, for non-matches in our training batch, alias pairs that have \(\Pr(O_i=O_j|A_i=a,A_j=a') = 0\) but have the largest \(\widehat{\Pr}(O_i=O_j|A_i=a,A_j=a')\). Likewise, for matches, we select pairs that have the lowest predicted probability. More formally, we find the negative \(a\) and \(a'\) pairs by solving: \begin{equation*}
\textrm{argmax}_{a,a' \textrm{ s.t. } \Pr(O_i=O_j|A_i=a,A_j=a') = 0 } \; D_{\textrm{KL}}\left(\Pr(O_i=O_j|A_i=a,A_j=a') \; || \; \widehat{\Pr}(O_i=O_j|A_i=a,A_j=a')\right), \;
\end{equation*}
with a similar approach applied for positive pairs.
We achieve this in practice by, at the current state of the model, finding the closest alias to \(a'\) in the alias vector space for which \(\Pr(O_i=O_j|A_i=a,A_j=a')\) is 0. A similar procedure yields high-information positive matches. Intuitively, this process upweights, in the learning process, the importance of similar aliases that refer to different organizations, so that the model learns the semantic mapping of aliases into match probabilities.
We found this adaptive sampling scheme to be important in learning to quickly distinguish similar aliases that refer to different organizations. However, we use this adversarial approach to obtain only half of each training batch, to mitigate potential problems with it (such as the model being fit only on high-surprise alias pairs that arise when users incorrectly link to the same URL).
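A toy sketch of this hardest-negative / hardest-positive selection follows, with the fitted match probability replaced by a hypothetical negative-distance score over one-dimensional embeddings. Names and dimensions are illustrative only.

```python
import numpy as np

def adversarial_batch(emb, labels, n_each=2):
    """Pick the hardest negatives (non-matching pairs the current model
    scores highest) and hardest positives (matching pairs scored lowest).

    emb: (n, d) array of current alias embeddings.
    labels: labels[i] is the organization id of alias i.
    Scores are negative distances, a stand-in for the match probability.
    """
    n = emb.shape[0]
    pairs, scores, is_match = [], [], []
    for i in range(n):
        for j in range(i + 1, n):
            pairs.append((i, j))
            scores.append(-np.linalg.norm(emb[i] - emb[j]))
            is_match.append(labels[i] == labels[j])
    scores = np.array(scores)
    is_match = np.array(is_match)
    neg_idx = np.where(~is_match)[0]
    pos_idx = np.where(is_match)[0]
    # hardest negatives: highest score among non-matching pairs
    hard_neg = neg_idx[np.argsort(scores[neg_idx])[::-1][:n_each]]
    # hardest positives: lowest score among matching pairs
    hard_pos = pos_idx[np.argsort(scores[pos_idx])[:n_each]]
    return [pairs[k] for k in hard_neg], [pairs[k] for k in hard_pos]

emb = np.array([[0.0], [0.1], [5.0], [5.2]])
labels = [0, 1, 2, 2]        # aliases 2 and 3 name the same organization
hard_neg, hard_pos = adversarial_batch(emb, labels, n_each=1)
```

Here the non-matching pair (0, 1) is nearly as close as the matching pair (2, 3), so it is exactly the kind of high-information negative the scheme selects.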
\renewcommand{\thefigure}{A.II.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{A.II.\arabic{table}}
\setcounter{table}{0}
\renewcommand{\thesection}{A.II.\arabic{section}}
\setcounter{section}{1}
\setcounter{subsection}{0}
\newpage \clearpage
\setcounter{page}{1}
\section*{Appendix II: Additional Network Model \& Fuzzy String Distance Details}
\subsection{Greedy Bipartite Clustering Details}
\label{s:Greedy}
Following \citet{clauset2004finding}, our modularity score for bipartite community detection is
\begin{equation}\label{eq:modularity}
\text{Modularity Score} = \frac{1}{2m} \sum_{a,u} \bigg[ B_{au} - \frac{ k_a k_u}{2m}\bigg] \mathbb{I}\big\{ c_a = c_u \big\},
\end{equation}
where \(m\) denotes the total number of edges in the graph, \(B_{au}\) denotes the \(au\)-th entry in the weighted bipartite network (i.e., the number of ties between alias \(a\) and URL \(u\)), and \(k_a\) denotes the degree of node \(a\) (i.e., \(k_a=\sum_{u} B_{au}\)). The indicator variable, \(\mathbb{I}\{\cdot\}\), is 1 when the community for \(a\) equals the community for \(u\) (these communities are denoted as \(c_a\) and \(c_u\)).
The intuition in Equation \ref{eq:modularity} is that the modularity score is maximized when the number of ties between \(a\) and \(u\) is large (i.e., \(B_{au}\) is big) and \(a, u\) are placed in the same cluster (so that \(c_a=c_u\)). We obtain community estimates based on the greedy optimization of Equation \ref{eq:modularity} (see \citet{clauset2004finding} for details).
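A direct implementation of the modularity score above, evaluated on a tiny hypothetical alias-by-URL incidence matrix, illustrates the intuition: a community assignment that respects the block structure scores higher than one that does not.

```python
import numpy as np

def bipartite_modularity(B, c_alias, c_url):
    """Modularity score of the equation above for a weighted alias-by-URL
    incidence matrix B and community labels for alias and URL nodes."""
    m = B.sum()              # total (weighted) number of edges
    k_a = B.sum(axis=1)      # alias degrees
    k_u = B.sum(axis=0)      # URL degrees
    Q = 0.0
    for a in range(B.shape[0]):
        for u in range(B.shape[1]):
            if c_alias[a] == c_url[u]:
                Q += B[a, u] - k_a[a] * k_u[u] / (2 * m)
    return Q / (2 * m)

# Two aliases and two URLs forming two clean communities.
B = np.array([[2.0, 0.0],
              [0.0, 2.0]])
q_good = bipartite_modularity(B, c_alias=[0, 1], c_url=[0, 1])
q_bad = bipartite_modularity(B, c_alias=[0, 1], c_url=[1, 0])
```

For this matrix the block-respecting partition scores 0.375 while the crossed partition scores -0.125, which is what greedy optimization exploits.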
\subsection{Fuzzy String Matching Details}
\label{s:AppendixFuzzy}
Fuzzy string distances in our baseline linkage method are calculated as follows. Let \(\tilde{a}\) and \(\tilde{a}'\) denote the decomposition of organizational aliases \(a\) and \(a'\) into all their \(q\)-character combinations (known as \(q\)-grams). For example, if \(q=2\), ``bank'' would be decomposed into the set \(\{\)``ba'',``an'',``nk''\(\}\). The Jaccard measure is then defined to be \[d(a, a') = 1 - \frac{ |\tilde{a} \cap \tilde{a}'| }{ |\tilde{a} \cup \tilde{a}'|}.\] If all \(q\)-grams co-occur within \(a\) and \(a'\), this measure returns 0. If none co-occur, the measure returns 1. If exactly half co-occur, the measure returns 0.5. Following \citep{Navarro2009}, we set \(q=2\).
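A minimal implementation of this \(q\)-gram Jaccard distance makes the baseline concrete:

```python
def qgrams(s, q=2):
    """Decompose a string into its set of q-character substrings."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard_distance(a, b, q=2):
    """1 minus the Jaccard similarity of the two q-gram sets."""
    A, B = qgrams(a, q), qgrams(b, q)
    return 1.0 - len(A & B) / len(A | B)

d_same = jaccard_distance("bank", "bank")  # identical strings -> 0
d_half = jaccard_distance("bank", "tank")  # shares "an", "nk" -> 0.5
```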
\renewcommand{\thefigure}{A.III.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{A.III.\arabic{table}}
\setcounter{table}{0}
\renewcommand{\thesection}{A.III.\arabic{section}}
\setcounter{section}{1}
\setcounter{subsection}{0}
\newpage \clearpage
\setcounter{page}{1}
\section*{Appendix III}
\subsection{An Additional Task Matching English and Non-English Company Names}\label{s:NonLatinEx}
In this supplementary evaluation task, we examine the performance of our approach in a particularly challenging case where we seek to match organizational entities using their names written in English and in Mandarin Chinese. This matching task is especially challenging because English and Mandarin Chinese are based on two entirely different kinds of writing systems (i.e., alphabetic and logographic).
The data for this task is from FluentU, an organization focusing on
language learning \citep{FluentU}. The organization has provided a directory of 76 companies with their English names paired with their Chinese names. Table \ref{tab:crosslang} displays some of the companies found in this dataset, which is composed primarily of large multinational corporations.
\begin{table}[!htbp] \centering
\begin{tabular}{@{\extracolsep{5pt}} cc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
English Name & Chinese Name \\
\hline \\[-1.8ex]
amazon & \begin{CJK*}{UTF8}{gbsn}亚马逊\end{CJK*} \\
coca cola & \begin{CJK*}{UTF8}{gbsn}可口可乐 \end{CJK*} \\
jp morgan & \begin{CJK*}{UTF8}{gbsn}摩根\end{CJK*} \\
marlboro & \begin{CJK*}{UTF8}{gbsn}万宝路\end{CJK*} \\
\hline \\[-1.8ex]
\end{tabular}
\caption{A sample of the organizational entities in the cross-language dataset. We attempt to match the pool of English names to their associated names in Mandarin Chinese.}
\label{tab:crosslang}
\end{table}
We compare linkage performance using our community detection algorithms based on the LinkedIn network, our character-based machine learning approach, and the baseline fuzzy matching approach. The fuzzy matching approach gives \emph{no} matches (except when the fuzzy distance threshold is set to 1 so that all English names are matched with all Chinese names). This occurs because none of the English names share any Latin character combinations with any of the Chinese names.
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{Example5_reduceDataFALSE_truePositives.pdf}\includegraphics[width=0.45\textwidth]{Example5_reduceDataFALSE_Fscore.pdf}
\caption{In the left panel, we see that the LinkedIn-based community detection algorithms find a significant fraction of the true positive matches between the English and non-English aliases. In the right panel, we see that these true positive matches are found without introducing numerous false positives (so that the $F_2$ scores for the community detection algorithms approach 0.5). The machine learning approach yields some true positive matches; fuzzy matching yields none.\label{fig:chinese}}
\end{figure}
The machine learning linkage approach trained on the LinkedIn data fares somewhat better---yielding a true positive rate of 0.2 when the matched dataset size approaches 1,000. In contrast to fuzzy matching, it also achieves a non-zero \(F_2\) score. However, the community detection approaches using the LinkedIn network fare better still, successfully picking out 33 of the 76 matches with only five false positives. The left panel of Figure \ref{fig:chinese} shows that the Markov clustering approach somewhat outperforms the bipartite clustering, although both approaches achieve similar maximum true positive rates and \(F_2\) scores just above 0.4. In this way, the network information present in the LinkedIn corpus allows us to find almost half of the true matches, even though the names to be matched are written in entirely different writing systems.
Still, it is once again instructive to examine names for which the community detection approaches succeed and fail in this cross-language matching task. The approach succeeds in cases where an organization has employees who use both English and non-English organizational names while linking to the same company URL. For example, we successfully recover the ``Coca-cola''--``\begin{CJK*}{UTF8}{gbsn}可口可乐\end{CJK*}'' match, as Coca-cola has at least six Chinese offices, and employees in these offices often use Coca-Cola's Chinese name in their LinkedIn URL. However, we do not find the ``lamborghini''--``\begin{CJK*}{UTF8}{gbsn}兰博基尼\end{CJK*}'' match. Lamborghini does operate offices in China, but its employees use the name ``Lamborghini'' in their LinkedIn profiles. In this way, the community-detection algorithms based on the LinkedIn network are, for cross-language merge tasks, best suited to matching organizational entities that have many employees across linguistic contexts and whose non-English names are commonly used on LinkedIn.
\renewcommand{\thefigure}{A.IV.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{A.IV.\arabic{table}}
\setcounter{table}{0}
\renewcommand{\thesection}{A.IV.\arabic{section}}
\setcounter{section}{1}
\setcounter{subsection}{0}
\newpage \clearpage
\setcounter{page}{1}
\section*{Appendix IV}
\subsection{A LinkOrgs Package Tutorial}\label{s:LinkOrgs}
The package can be downloaded via \Verb|GitHub|:
\begin{Verbatim}
# Download package via github
devtools::install_github("cjerzak/LinkOrgs-software/LinkOrgs")
# load it into your R session
library(LinkOrgs)
\end{Verbatim}
There are several major functions in the package. The first is \Verb|FastFuzzyMatch|, which performs traditional string distance matching using all available CPU cores. This offers a substantial time speedup compared to the sequential application of a fuzzy distance calculator. The second function, \Verb|LinkOrgs|, is the main estimation function used in this paper to calculate the match probabilities. We also have a function, \Verb|AssessMatchPerformance|, that computes performance metrics of interest. To get help with any of these functions, one can run the following:
\begin{Verbatim}
# To see package documentation, enter
# ?LinkOrgs::LinkOrgs
# ?LinkOrgs::AssessMatchPerformance
# ?LinkOrgs::FastFuzzyMatch
\end{Verbatim}
Here is the syntax for how to use the package using synthetic data:
\begin{Verbatim}
# Create synthetic data to try everything out
x_orgnames <- c("apple","oracle","enron inc.","mcdonalds corporation")
y_orgnames <- c("apple corp","oracle inc","enron","mcdonalds co")
x <- data.frame("orgnames_x"=x_orgnames)
y <- data.frame("orgnames_y"=y_orgnames)
# Perform a simple merge with the package
linkedOrgs <- LinkOrgs(x = x,
y = y,
by.x = "orgnames_x",
by.y = "orgnames_y",
algorithm = "bipartite")
# Print results
print( linkedOrgs )
\end{Verbatim}
An up-to-date tutorial in light of ongoing evolutions to the \Verb|LinkOrgs| package (in terms of efficiency and new capabilities) is available at \Verb|github.com/cjerzak/LinkOrgs-software/blob/master/README.md|.
\renewcommand{\thefigure}{A.V.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{A.V.\arabic{table}}
\setcounter{table}{0}
\renewcommand{\thesection}{A.V.\arabic{section}}
\setcounter{section}{1}
\setcounter{subsection}{0}
\newpage \clearpage
\setcounter{page}{1}
\section*{Appendix V: Algorithm Execution Times by Task}
\subsection{Matching Execution Times by Algorithm by Task}\label{s:Time}
\input{ExecutionTime2.tex}
\input{ExecutionTime4.tex}
\input{ExecutionTime5.tex}
\end{document}
\section{\label{EOM}Equations of motion and their link to the 1D Dirac Equation}
Consider a periodic array having two resonators per unit cell [Fig.~\ref{s1}(a)]. We denote the displacement of resonator $n$ by $x_n(t)$. From here, we can describe our system using the coupled-mode equations
\begin{equation}
i\frac{d x_n(t)}{dt}+\gamma[x_{n+1}(t)+x_{n-1}(t)]-(-1)^n\Delta\cdot x_n(t)=0,
\end{equation}
where $\gamma$ is the coupling strength and $2\Delta$ is the resonance frequency mismatch between the resonators within each unit cell. The periodic array supported by this system of equations has two bands separated by a band gap of size $2\Delta$ (Fig.~\ref{s2}), defined by the dispersion curves $\omega_{\pm}(k)=\pm\sqrt{\Delta^2+4\gamma^2\cos^2(k)}$, assuming unit lattice constant.
Near the edge of the first Brillouin zone, for non-zero $\Delta$, the dispersion curves form two opposite hyperbolas--mimicking the typical energy-momentum relation of a freely moving relativistic massive particle \cite{dreisow_classical_2010}. In this way, phonons moving in the lattice with wave number $k\approx\pi/2$ simulate the temporal dynamics of the relativistic Dirac equation.
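A quick numerical check of the dispersion relation (with arbitrary illustrative parameter values) confirms that the two bands are separated by a gap of $2\Delta$, with the gap minimum at $k=\pm\pi/2$ and the band maximum at $k=0$:

```python
import numpy as np

gamma, delta = 1.0, 0.3  # coupling and half-gap, arbitrary units
k = np.linspace(-np.pi / 2, np.pi / 2, 2001)

# Two-band dispersion of the dimerized chain
omega_plus = np.sqrt(delta**2 + 4 * gamma**2 * np.cos(k)**2)
omega_minus = -omega_plus

gap = omega_plus.min() - omega_minus.max()  # smallest band separation
```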
If we transform the coupled-mode equations by setting
\begin{equation}
\begin{cases}
\begin{aligned}
& x_{2n}(t)=(-1)^n\psi_1(n,t),\\
& x_{2n-1}(t)=-i(-1)^n\psi_2(n,t),
\end{aligned}
\end{cases}
\end{equation} and switching to the continuous transverse coordinate $n\to\xi$, the two-component spinor wave function describing displacements in a unit cell $\psi(\xi,t)=(\psi_1,\psi_2)^T$ satisfies the one-dimensional Dirac equation for a freely moving particle of mass $m$\begin{equation}
i\frac{\partial\psi}{\partial t}=-i\gamma\hat\sigma_x\frac{\partial\psi}{\partial\xi}+\Delta\hat\sigma_z\psi,
\end{equation} where the formal change is made
$\gamma\to c$, $\Delta\to\frac{mc^2}{\hbar}$.
We therefore recognize the term $\Delta$ as the Dirac mass for a periodic array which could take on either positive or negative values [Fig.~\ref{s1}(b,c)]. Furthermore, it has been shown that at the interface between two such lattices with Dirac mass of opposite signs ($\Delta_1=-\Delta_2$), a Jackiw-Rebbi-type edge state will emerge \cite{tran_linear_2017}.
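This mass-kink bound state can be checked numerically by diagonalizing a finite chain in which the sign of the staggered detuning flips at the center; the parameter values below are illustrative, not those of the experiment. A mode appears inside the bulk gap $|\omega|<\Delta$, localized at the interface, consistent with the Jackiw-Rebbi picture:

```python
import numpy as np

N, gamma, delta = 201, 1.0, 0.3
n0 = N // 2                                   # domain-wall position
sign = np.where(np.arange(N) < n0, 1.0, -1.0)
onsite = (-1.0) ** np.arange(N) * delta * sign

# Coupled-mode Hamiltonian: i dx_n/dt = -gamma (x_{n+1} + x_{n-1}) + onsite_n x_n
H = np.diag(onsite) - gamma * (np.eye(N, k=1) + np.eye(N, k=-1))
w, v = np.linalg.eigh(H)

i0 = np.argmin(np.abs(w))                     # candidate midgap mode
peak = int(np.argmax(np.abs(v[:, i0])))       # where that mode is localized
```

Bulk states obey $|\omega|\geq\Delta$, so any eigenvalue inside the gap must belong to a localized state; finding it peaked at the wall identifies the interface mode.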
In our work, we tune the Dirac mass terms along an array by adding third-order nonlinearities to our resonators such that a JR-type mode would spontaneously emerge as the injected excitation level increases above a certain threshold [See Main Text \cite{main}, Fig.~3(d-g)]. In the next section, we explain how we control the nonlinearities of our resonators.\\
\begin{figure}[hb]
\centering
\includegraphics[width=.5\linewidth]{sup1.jpg}
\caption{Toy models for dimerized periodic resonator chain and configurations for positive/negative Dirac mass as used in the main text.}\label{s1}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=.35\linewidth]{sup2.jpg}
\caption{Dispersion of the linear array shown in Fig.~1(a) of the main text \cite{main}. The gap size is labeled with green arrows.}\label{s2}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{sup3.jpg}
\caption{\label{s3}Spatial configurations to create cubic nonlinearities. $\mathbf{a,b)}$ Position and orientation of fixed tuning magnets relative to the resonator magnet in the softening and stiffening configurations, respectively. Note that all magnets are aligned along the $\hat x$ direction and each resonator magnet rotates about its $\hat z$ axis (the rotation is depicted with exaggeration in the cartoon). $\mathbf{c)}$ Cartoon of the array used in nonlinear experiments.}
\end{figure}
\section{\label{nl}Resonator Nonlinearity Control}
Magnetically sensitive mechanical oscillators in a nonuniform magnetic field can experience a magnetostatic spring effect in addition to the usual mechanical restoring force. Such an effect has been studied in detail \cite{grinberg_magnetostatic_2019} and was found to be capable of creating magneto-mechanical resonators with softening as well as stiffening nonlinearities, depending on the spatial relationship of the magneto-mechanical resonator with respect to the local magnetic field.
We show in Fig.~\ref{s3}(a) and Fig.~\ref{s3}(b) two configurations where the magnetic field from a fixed disc magnet gives the magneto-mechanical resonators softening and stiffening nonlinearities, respectively (example data shown in Fig.~3 of Main Text \cite{main}). Then, by adding tuning magnets, we are able to construct a compact resonator array with alternating types of nonlinearities as shown in Fig.~\ref{s3}(c). A drive coil is then used to excite the array from its left edge.
\section{\label{fs}Inferring Resonator Frequencies in a Nonlinear Experiment}
Prior to assembling the nonlinear array, we calibrate each nonlinear resonator and the accompanying Hall sensor using three drive amplitudes so that the nonlinear frequency detuning can be measured against the oscillation amplitudes (shown in Fig.~\ref{s4} for softening and stiffening resonators). Fitting of these amplitude-frequency nonlinearities is then done using Main Text \cite{main}, Eq.~(3) for each resonator.
In the experiment, we measure the displacements of all resonators and acquire the steady-state oscillation amplitudes for them at each excitation level. These measured amplitudes are then used to back-extract the effective individual resonance frequencies shown in the Main Text \cite{main}, Fig.~3(e-g).
\begin{figure*}[!tbph]
\includegraphics[width=0.90\textwidth]{sup4.jpg}
\caption{Individually measured nonlinear detuning for all softening and stiffening resonators from the nonlinear experiment. Detuning and frequency responses are measured in a similar fashion to Main Text \cite{main}, Fig.~3(a,b), at three excitation amplitudes.}\label{s4}
\end{figure*}
\FloatBarrier
\section{Prime Modular Numbers}
\begin{definition}
Let $n \in \mathbb{N}$ and let $p_i$, $i=1,\dots,n$, denote the first $n$ prime numbers.\newline
The \textbf{prime modular numbers of level $n$} are tuples of length $n$ defined by
\begin{center}
$PM(n) := \{\colvec{6}{t_1 \mod 2}{t_2 \mod 3}{t_3 \mod 5}{t_4 \mod 7}{...}{t_n \mod p_n}| t_i \in N\}$.
\end{center}
The set of all prime modular numbers of level $n$ is denoted $PM(n)$.
\end{definition}
\begin{example}
For level n=1 there are two prime modular numbers:
$$PM(1) = \{\colvec{1}{0},\colvec{1}{1}\}$$
For level n=2 there are six prime modular numbers:
$$PM(2) = \{\colvec{2}{0}{0},\colvec{2}{0}{1},\colvec{2}{0}{2},\colvec{2}{1}{0},\colvec{2}{1}{1},\colvec{2}{1}{2}\}$$
\end{example}
\begin{definition}
The \textbf{primorial number} $\prod\limits_{i=1}^np_i$ is defined as the product of the first $n$ prime numbers.
\end{definition}
\begin{lemma}
The number of elements in $PM(n)$ equals the primorial number $\prod\limits_{i=1}^np_i$.
\end{lemma}
\begin{proof}
The number of distinct elements at position $i\in \{1,..,n\}$ equals $p_i$ by definition of the mod() function. Hence the number of distinct tuples equals the product $\prod\limits_{i=1}^np_i$. \qed
\end{proof}
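The count is easy to verify by enumeration for a small level; the check below, for level $n=3$, is an illustration rather than part of the proof:

```python
from itertools import product
from math import prod

primes = [2, 3, 5]                               # level n = 3
pm = set(product(*[range(p) for p in primes]))   # all tuples in PM(3)
primorial = prod(primes)                         # 2 * 3 * 5
```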
\section{Isomorphism $f: F_{\prod{p_i}} \rightarrow PM(n)$}
\begin{definition}
We define addition in $PM(n)$ by:
\begin{center}
$\colvec{5}{s_1 \mod 2}{s_2 \mod 3}{s_3 \mod 5}{...}{s_n \mod p_n}
+ \colvec{5}{t_1 \mod 2}{t_2 \mod 3}{t_3 \mod 5}{...}{t_n \mod p_n}
:= \colvec{5}{(s_1 + t_1) \mod 2}{(s_2 + t_2) \mod 3}{(s_3 + t_3) \mod 5}{...}{(s_n + t_n) \mod p_n}$.
\end{center}
\end{definition}
\begin{definition}
We define multiplication in $PM(n)$ by:
\begin{center}
$\colvec{5}{s_1 \mod 2}{s_2 \mod 3}{s_3 \mod 5}{...}{s_n \mod p_n}
* \colvec{5}{t_1 \mod 2}{t_2 \mod 3}{t_3 \mod 5}{...}{t_n \mod p_n}
:= \colvec{5}{(s_1 * t_1) \mod 2}{(s_2 * t_2) \mod 3}{(s_3 * t_3) \mod 5}{...}{(s_n * t_n) \mod p_n}$.
\end{center}
\end{definition}
\begin{definition}
Let $F_{\prod{p_i}} := \{1,2,\dots,\prod\limits_{i=1}^np_i\}$, with addition and multiplication taken modulo $\prod\limits_{i=1}^np_i$. \newline
$F_{\prod{p_i}}$ is then a finite commutative ring of size $\prod\limits_{i=1}^np_i$. \newline
We define the homomorphism $f: F_{\prod{p_i}} \rightarrow PM(n)$:
$$f(k) := \colvec{5}{k \mod 2}{k \mod 3}{k \mod 5}{...}{k \mod p_n}$$
\end{definition}
\newtheorem{propIso}{Proposition}
\begin{propIso}[Isomorphism from $F_{\prod{p_i}}$ into PM(n)]
Let $k \in F_{\prod{p_i}}$ and let $f$ be the homomorphism: $$f(k) :=\colvec{5}{k \mod 2}{k \mod 3}{k \mod 5}{...}{k \mod p_n} \in PM(n).$$
Then $f$ is an isomorphism.
\end{propIso}
\begin{proof}
To prove that $f$ is an isomorphism, we show that
$f(m+n) = f(m) + f(n)$, $f(m*n) = f(m) * f(n)$, and that $f$ is injective; since $|F_{\prod{p_i}}| = |PM(n)|$, injectivity implies bijectivity.
a) $$f(m+n) = \colvec{5}{m+n \mod 2}{m+n \mod 3}{m+n \mod 5}{...}{m+n \mod p_n}
= \colvec{5}{m \mod 2 + n \mod 2}{m \mod 3 +n \mod 3}{m \mod 5 + n \mod 5}{...}{m \mod p_n + n \mod p_n} $$ $$ = \colvec{5}{m \mod 2}{m \mod 3}{m \mod 5}{...}{m \mod p_n} + \colvec{5}{n \mod 2}{n \mod 3}{n \mod 5}{...}{n \mod p_n} = f(m) + f(n)$$
b) $$f(m*n) = \colvec{5}{m*n \mod 2}{m*n \mod 3}{m*n \mod 5}{...}{m*n \mod p_n}
= \colvec{5}{m \mod 2 * n \mod 2}{m \mod 3 *n \mod 3}{m \mod 5 * n \mod 5}{...}{m \mod p_n * n \mod p_n} $$ $$ = \colvec{5}{m \mod 2}{m \mod 3}{m \mod 5}{...}{m \mod p_n} * \colvec{5}{n \mod 2}{n \mod 3}{n \mod 5}{...}{n \mod p_n} = f(m) * f(n)$$
c) f is injective:\newline
$$f(1)=\colvec{4}{1}{1}{...}{1} \text{ and } f(\prod{p_i}) = \colvec{4}{0}{0}{...}{0}$$
$$\forall m \in F_{\prod{p_i}}: f(m+1) = \colvec{4}{m+1 \mod 2}{m+1 \mod 3}{...}{m+1 \mod p_n} $$
Thus $$\forall m,n \in F_{\prod{p_i}}: f(m) = f(n) \Rightarrow f(m+1) = f(n+1)$$
Thus $$\forall m,n,r \in F_{\prod{p_i}}: f(m) = f(n) \Rightarrow f(m+r) = f(n+r) \text{ }[\alpha]$$
Suppose there are two values $$ m,n \in F_{\prod{p_i}} \text{ with } n>m \text{ and } f(m) = f(n).$$
Set $$ r:= \prod{p_i} -n; \text{ then } f(n+r)= f(\prod{p_i})= \colvec{4}{0}{0}{...}{0}$$
Due to $[\alpha]$: $$f(n+r) = f(m+r) = \colvec{4}{0}{0}{...}{0}$$
So the value $m+r < \prod{p_i}$ would be divisible by the first n primes.
This is impossible, therefore f is injective.\qed
\end{proof}
\begin{example}
For level n=2:
\begin{center}
\begin{tabular}{c|cccccc|}
k& 1& 2& 3& 4& 5& 6\\
\hline
$f(k)$& $\colvec{2}{1}{1}$& $\colvec{2}{0}{2}$& $\colvec{2}{1}{0}$& $\colvec{2}{0}{1}$& $\colvec{2}{1}{2}$& $\colvec{2}{0}{0}$ \\
\end{tabular}
\end{center}
For level n=3:
$$f(1)=\colvec{3}{1}{1}{1}, f(2)=\colvec{3}{0}{2}{2}, f(3)=\colvec{3}{1}{0}{3}, f(4)=\colvec{3}{0}{1}{4}, f(5)=\colvec{3}{1}{2}{0}, f(6)=\colvec{3}{0}{0}{1},$$
$$f(7)=\colvec{3}{1}{1}{2}, f(8)=\colvec{3}{0}{2}{3}, f(9)=\colvec{3}{1}{0}{4}, f(10)=\colvec{3}{0}{1}{0}, f(11)=\colvec{3}{1}{2}{1}, f(12)=\colvec{3}{0}{0}{2},$$
$$f(13)=\colvec{3}{1}{1}{3}, f(14)=\colvec{3}{0}{2}{4}, f(15)=\colvec{3}{1}{0}{0}, f(16)=\colvec{3}{0}{1}{1}, f(17)=\colvec{3}{1}{2}{2}, f(18)=\colvec{3}{0}{0}{3},$$
$$f(19)=\colvec{3}{1}{1}{4}, f(20)=\colvec{3}{0}{2}{0}, f(21)=\colvec{3}{1}{0}{1}, f(22)=\colvec{3}{0}{1}{2}, f(23)=\colvec{3}{1}{2}{3}, f(24)=\colvec{3}{0}{0}{4},$$
$$f(25)=\colvec{3}{1}{1}{0}, f(26)=\colvec{3}{0}{2}{1}, f(27)=\colvec{3}{1}{0}{2}, f(28)=\colvec{3}{0}{1}{3}, f(29)=\colvec{3}{1}{2}{4}, f(30)=\colvec{3}{0}{0}{0} $$
\end{example}
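The mapping tables above can be reproduced directly; a minimal sketch, where the helper \texttt{f} mirrors the definition of the homomorphism:

```python
def f(k, ps):
    """Residue tuple of k modulo the given list of primes."""
    return tuple(k % p for p in ps)

ps3 = [2, 3, 5]                     # level n = 3
assert f(7, ps3) == (1, 1, 2)       # matches the table entry for f(7)
assert f(15, ps3) == (1, 0, 0)
assert f(30, ps3) == (0, 0, 0)
# injectivity on 1..30: all 30 residue tuples are distinct
assert len({f(k, ps3) for k in range(1, 31)}) == 30
```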
\newtheorem{corIso}{Corollary}
\begin{corIso}
PM(n) is a finite commutative ring of size $\prod\limits_{i=1}^np_i$.
\end{corIso}
\begin{remark}
The main drawback of calculating with prime modular numbers is that it is no longer possible to compare two numbers and decide which one is bigger. In return we gain other advantages, because it becomes much easier to decide whether a number is relative prime.
There is an inverse function $f^{-1}$: $PM(n) \rightarrow F_{\prod{p_i}}$ that will be discussed in Section 4.
\end{remark}
\section{Relative prime elements}
\begin{definition}
Given the set PM(n). An element in PM(n) is said to be \textbf{relative prime} if it has no zeros in its tuple.
\end{definition}
\begin{lemma}
The number of relative prime elements in PM(n) is $\prod\limits_{i=1..n} (p_i -1)$.
\end{lemma}
\begin{proof}
At each tuple position of a prime modular number there are $p_i -1$ nonzero values.
Therefore the number of such tuples is $\prod\limits_{i=1..n} (p_i -1)$. \qed
\end{proof}
\begin{example}
In PM(3) there are 8 relative prime elements: \newline $ 1=\colvec{3}{1}{1}{1}$,$7=\colvec{3}{1}{1}{2}$,$13=\colvec{3}{1}{1}{3}$,$19=\colvec{3}{1}{1}{4}$,\newline $11=\colvec{3}{1}{2}{1}$,$17=\colvec{3}{1}{2}{2}$,$23=\colvec{3}{1}{2}{3}$,$29=\colvec{3}{1}{2}{4}$.
\end{example}
\begin{definition}
Given the set PM(n) and the isomorphism $f: F_{\prod{p_i}} \rightarrow PM(n)$. We define $f^{-1}: PM(n) \rightarrow F_{\prod{p_i}}$ as the \textbf{inverse function} to f with $$\forall k \in F_{\prod{p_i}} : f^{-1}(f(k)) = k.$$
\end{definition}
\begin{definition}
Given the set PM(n). We define the \textbf{prime} elements \newline $t \in PM(n)$ as all elements for which $ f^{-1}(\colvec{4}{t_1}{t_2}{...}{t_{n}}) $ is prime in $F_{\prod{p_i}}$.
\end{definition}
\begin{lemma}
Let $p_{n+1}$ be the (n+1)-th prime number in N.
\newline Let $Q = \{q \mid q \textup{ is prime } \wedge p_{n+1} \leq q < \prod\limits_{i=1}^np_i \}$. \newline $ \forall q \in Q: f(q) $ is relative prime.
\end{lemma}
\begin{proof}
Each $q \in Q$ is prime and therefore not divisible by the first n primes $p_1 .. p_n$.
$$f(q) = \colvec{5}{q \mod 2}{q \mod 3}{q \mod 5}{...}{q \mod p_n}$$ has no zero value in its tuple and therefore is relative prime. \qed
\end{proof}
\begin{remark}
$f(1)=\colvec{4}{1}{1}{..}{1}$ is never prime in PM(n).
\newline
In PM(2) and PM(3) all relative prime elements (except 1) are prime.
\newline
For $n \geq 4$ there are many other relative prime elements in PM(n), that are not prime.
\newline
In PM(4) (range 11..210) all examples for this are $$11^2=121=\colvec{4}{1}{1}{1}{2},11*13=143=\colvec{4}{1}{2}{3}{3},$$ $$11*17=187=\colvec{4}{1}{1}{2}{5},13^2=169=\colvec{4}{1}{1}{4}{1},11*19=209=\colvec{4}{1}{2}{4}{6}.$$
\end{remark}
\begin{lemma}
Let $p_{n+1}$ be the (n+1)-th prime number in N.
Each relative prime number $t \in PM(n)$ with $f^{-1}(t) = k$ and $p_{n+1} \leq k < p_{n+1}^2$ is prime.
\end{lemma}
\begin{proof}
Each relative prime number in PM(n) is not divisible by the first n primes. Therefore the minimal relative prime element $q$ that is not prime must satisfy $f^{-1}(q) = p_{n+1}^2$. \qed
\end{proof}
\begin{lemma}
Given the set PM(n). We define the subset $$PM^{'}(n) := \{t \in PM(n) \mid t \text{ is not prime } \wedge \text{ minimal prime factor }(f^{-1}(t)) > p_n\}.$$ $PM^{'}(n)$ contains all relative prime elements (other than 1) that are not prime.
\end{lemma}
\begin{proof}
If the minimal prime factor of $f^{-1}(t)$ is greater than $p_n$, then \newline all tuple values are $> 0$, hence \newline t is relative prime by definition. \qed
\end{proof}
\newtheorem{propPrim}{Proposition}
\begin{propPrim}
Let $p_{n+1}$ be the (n+1)-th prime number in N.
The number of all primes q with $p_{n+1} \leq q < \prod\limits_{i=1}^np_i$ is
$$\prod\limits_{i=1..n} (p_i -1) -1 - \text{ size of } PM^{'}(n)$$
\end{propPrim}
\begin{proof}
The number of relative primes in PM(n) is $\prod\limits_{i=1..n} (p_i -1)$.
From this we subtract one for the relative prime $1=\colvec{4}{1}{1}{..}{1}$.
The remaining relative primes, that are not prime are defined in the set $PM^{'}(n)$. \qed
\end{proof}
\begin{example}
The number of primes between 6 and 30: (1*2*4) -1 -0 = 7 \newline
The number of primes between 10 and 210: (1*2*4*6) -1 -5 = 42.
\end{example}
\newtheorem{corRel}{Corollary}
\begin{corRel}
Let $p_{n+1}$ be the (n+1)-th prime number in N.
The number of all primes q with $q < \prod\limits_{i=1}^np_i$ is
$$ n + \prod\limits_{i=1..n} (p_i -1) -1 - \text{ size of } PM^{'}(n)$$
\end{corRel}
\section{The inverse function $f^{-1}$}
\begin{definition}
Let the set PM(n) be given. We define the \textbf{unary} elements in PM(n) as all elements with (n-1) zeros and one 1 in the tuple.
\end{definition}
\begin{example}
There are n unary tuples in PM(n).
In PM(4) there are 4 unary elements $\colvec{4}{1}{0}{0}{0}$,$\colvec{4}{0}{1}{0}{0}$,$\colvec{4}{0}{0}{1}{0}$,$\colvec{4}{0}{0}{0}{1}$.
\end{example}
\begin{lemma}
Each unary element in PM(n) generates a subset of PM(n) that is a finite field.
Therefore each nonzero element t in this subset has a multiplicative inverse.
\end{lemma}
\begin{proof}
Let $$PM(n)_{u_i} := \{ \colvec{6}{0}{..}{k}{..}{0}{0} | k \in \{0,..,p_i -1\}\}$$ be the subset of all tuples whose values are zero everywhere except at the i-th position.
This subset forms a field under addition and multiplication and is isomorphic to the finite field $Z/Z_{p_i}$. \qed
\end{proof}
\begin{lemma}
Each element in PM(n) can be constructed by the unary elements.
\end{lemma}
\begin{proof}
$$\colvec{5}{k_1 \mod 2}{k_2 \mod 3}{k_3 \mod 5}{...}{k_n \mod p_n} =
\sum_{i = 1}^{k_1} \colvec{5}{1}{0}{0}{...}{0} +
\sum_{i = 1}^{k_2} \colvec{5}{0}{1}{0}{...}{0} +
... +
\sum_{i = 1}^{k_n} \colvec{5}{0}{0}{0}{...}{1} $$
\qed
\end{proof}
\begin{definition}
Let the finite field $Z/Z_{p_i} $ with $p_i$ prime be given.\newline
For each element $t \in Z/Z_{p_i} $ \newline there exists an inverse element $t^{-1}$ with $t * t^{-1} = 1$.
\newline
We define $t \textbf{ modInverse}(p_i) := t^{-1}$
\end{definition}
\newtheorem{propUnary}{Proposition}
\begin{propUnary}
For a unary element $u_i \in PM(n)$, the value $v_i := f^{-1}(u_i)$ can be calculated by:
$$v_i = {(\prod\limits_{j=1}^np_j) / p_i} * [(\prod\limits_{j=1}^np_j) / p_i \text{ modInverse }(p_i)]$$
\end{propUnary}
\begin{proof}
The value $v_i = f^{-1}(u_i)$ must be \newline [condition a]: divisible by all primes $ p_j, j \in \{1..n\}, j \neq i $ except $p_i$ \newline
and [condition b]: $v_i \mod(p_i) = 1 $.\newline
$v_i := (\prod\limits_{j=1}^np_j / p_i)$ resolves condition a, but not in all cases condition b.
To resolve [condition b] we multiply the value $v_i$ by $(\prod\limits_{j=1}^np_j / p_i) \text{ modInverse }(p_i)$; this inverse exists because $Z/Z_{p_i}$ is a field.
\newline
$v_i^* := {v_i} * [(\prod\limits_{j=1}^np_j) / p_i \text{ modInverse }(p_i)]$ satisfies $v_i^* \mod p_i = 1$ and thus resolves both conditions.
\newline
Due to the isomorphism in Section 2 there can only be one $v_i^* \text{ with } f(v_i^*)=u_i$. \qed
\end{proof}
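The proposition can be checked with Python's built-in modular inverse \texttt{pow(x, -1, p)} (available from Python 3.8); the values below match the coefficients tabulated in the example at the end of this section:

```python
from math import prod

def unary_values(ps):
    """v_i = (M/p_i) * ((M/p_i) modInverse p_i), with M the primorial."""
    M = prod(ps)
    return [(M // p) * pow(M // p, -1, p) for p in ps]

# v_i is congruent to 1 mod p_i and to 0 mod every other p_j
assert unary_values([2, 3, 5]) == [15, 10, 6]
assert unary_values([2, 3, 5, 7]) == [105, 70, 126, 120]
assert unary_values([2, 3, 5, 7, 11]) == [1155, 1540, 1386, 330, 210]
```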
\newtheorem{thmPM}{Theorem}
\begin{thmPM}[inverse function]
Let PM(n) be given.\newline Let $g: PM(n) \rightarrow F_{\prod{p_i}}$ be the map defined by $$g(\colvec{5}{t_1}{t_2}{t_3}{...}{t_n}) = (\sum_{i=1..n} t_i * v_i) \mod (\prod\limits_{i=1}^np_i)$$ \newline Then g is the inverse function to f.
\end{thmPM}
\begin{proof}
Each $t \in PM(n)$ can be constructed from the unary elements, and for each unary element $u_i$ the preimage $v_i = f^{-1}(u_i)$ is known.
Since f is an isomorphism, $f(g(t)) = \sum_{i=1..n} t_i f(v_i) = t$, hence $g = f^{-1}$.
\qed
\end{proof}
\begin{remark}
If all $t_i = 0$, then $g(t) = 0$. In $F_{\prod{p_i}}$ this is equivalent to $\prod{p_i}$.
\end{remark}
\begin{example}
\begin{center}
\begin{tabular}{c|c|c|c|}
n& range& rel. primes& coeffs\\
\hline
1& 1 - 2& 1& $f^{-1}(t)=(1t_1) \mod 2$\\
2& 1 - 6& 2& $f^{-1}(t)=(3t_1 + 4t_2) \mod 6$\\
3& 1 - 30& 8& $f^{-1}(t)=(15t_1 + 10t_2 + 6t_3) \mod 30 $\\
4& 1 - 210& 48& $f^{-1}(t)=(105t_1 + 70t_2 + 126t_3 + 120t_4) \mod 210 $\\
5& 1 - 2310& 480& $f^{-1}(t)=(1155t_1 + 1540t_2 + 1386t_3 + 330t_4+ 210t_5) \mod 2310 $\\
\end{tabular}
with $t_1 \in \{0,1\}, t_2 \in \{0,1,2\}, t_3 \in \{0,1,2,3,4\}, t_4 \in \{0,1,2,3,4,5,6\},t_5 \in \{0,1,2,3,4,5,6,7,8,9,10\}$
\end{center}
\end{example}
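A round-trip check of the theorem for n = 4 (a sketch; following the remark above, the helper \texttt{g} maps the zero tuple to $\prod p_i$):

```python
from math import prod

def f(k, ps):
    return tuple(k % p for p in ps)

def g(t, ps):
    """Inverse function via the unary coefficients v_i."""
    M = prod(ps)
    vs = [(M // p) * pow(M // p, -1, p) for p in ps]
    r = sum(ti * vi for ti, vi in zip(t, vs)) % M
    return r if r != 0 else M        # the zero tuple corresponds to M

ps = [2, 3, 5, 7]
M = prod(ps)
# g inverts f on all of F
assert all(g(f(k, ps), ps) == k for k in range(1, M + 1))
assert g((1, 2, 4, 6), ps) == 209    # largest relative prime below 210
```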
\section{Conclusions}
The inverse function provides an algorithm to test for higher primes:
We can iterate all relative prime elements in PM(n) to calculate all relative prime values up to $\prod\limits_{i=1}^np_i$.
Start for example with $\colvec{4}{1}{2}{4}{6}$ and subtract one from the last value $\colvec{4}{1}{2}{4}{5}, \colvec{4}{1}{2}{4}{4}, ...$ until $\colvec{4}{1}{2}{4}{1}$ and then switch to $\colvec{4}{1}{2}{3}{6}$ and then repeat until we reach $\colvec{4}{1}{1}{1}{1}$.
For each element we can calculate $f^{-1}(t)$ as a prime candidate. This is very similar to wheel factorization of primes, but not in sequential order. Instead of iterating the wheel multiple times we just go up to $\prod\limits_{i=1}^np_i$.
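The enumeration described above can be sketched as follows: iterating all zero-free tuples of PM(4) yields exactly the 48 prime candidates below 210 (the iteration order is left to \texttt{itertools.product} here, not the descending order of the text):

```python
from itertools import product
from math import prod

ps = [2, 3, 5, 7]
M = prod(ps)
vs = [(M // p) * pow(M // p, -1, p) for p in ps]

# every zero-free tuple is a relative prime element; its preimage
# under f is a candidate: either 1, a prime, or a product of primes > 7
cands = sorted(sum(t * v for t, v in zip(tup, vs)) % M
               for tup in product(*(range(1, p) for p in ps)))
assert len(cands) == 48
assert cands[0] == 1 and cands[-1] == 209
```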
Some questions remain open to make this algorithm efficient. We still have to check each relative prime element for primality.
As part of a solution, it is possible to extend each element $$t = \colvec{4}{t_1}{t_2}{..}{t_n} \in PM(n)$$ by tuple positions $i > n$. Together with $f^{-1}(t) \leq \prod\limits_{i=1}^np_i$ these extensions are unique. \newline The extended element $t'= \colvec{7}{t_1}{t_2}{..}{t_n}{t_{n+1}}{..}{t_{n+m}}$ can easily be checked for primality.
\section{Remarks}
Thanks to all the reviewers for their valuable feedback.
A special thanks to Ralf Schiffler (University of Connecticut) for his help with the calculation of the inverse function.
\newline
After publication of this article Ramin Zahedi contacted me with his own article, which provides similar results from a different perspective.\cite{zahedi}
\section{Introduction}
\label{sec:intro}
Quantum information processing and quantum optics with superconducting circuits have been experimentally realized in the past two decades~\cite{Blais04, Wallraff04, Koch07, Gu17, Krantz19, Arute19}. The semi-one-dimensional architecture of circuit quantum electrodynamics (QED) systems features large spatial mode matching among various types of artificial atoms and their detectors~\cite{Gu17}. The trade-off is that the eigenmodes of each quantum element have limited degrees of freedom, which thus restricts the diversity of selection rules. The lack of metastable states in these artificial atoms restricts the implementation of atomic and molecular optics in circuit QED systems. For instance, the category of $\Lambda$-type artificial atom has been rather unexplored. In most of the reported approaches, Purcell-protected qubits~\cite{Yang04, Liu16, Gu16, Novikov16, Long18}, additional coherent drive~\cite{Gu16, Long18}, and indirect Raman transition~\cite{Novikov16, Earnest18, Kelly10} are utilized. To set up a rather simple $\Lambda$-type artificial atom with an inherent metastable state, especially with decent frequency tunability, is still of great interest in superconducting quantum circuits.
An alternative strategy is to consider two coupled qubits, which provides an additional degree of freedom to the system, thereby modifying the parity characteristics of the eigenmodes. Symmetric and antisymmetric states of two coupled qubits are well known in the literature~\cite{Dicke54, Ficek02}. Moreover, there are theoretical and experimental studies on coupled transmon qubits~\cite{KL13, vLoo13, Mlynek14, Majer07, Gambetta11, Srinivasan11, Zhang17}. The spontaneous decay of the symmetric (antisymmetric) state is inherently enhanced (suppressed), featuring its short (long) life time. Furthermore, the symmetric (antisymmetric) state is easy (difficult) to manipulate and measure. Nevertheless, the transition moment between these two extreme states is absent. It is known that two nonidentical Cooper-pair boxes form a system with nearly symmetric and antisymmetric states with weakly allowed transition between them~\cite{SR09}. Introducing a mechanism that bridges symmetric and antisymmetric states could allow one to construct a promising $\Lambda$-type system.
On the other hand, fast in-situ level tunability of superconducting qubits brings various conveniences to circuit QED systems. For example, manipulation of qubit-qubit interaction through parametric modulation has been demonstrated~\cite{Niskanen07, McKay16, Roth17, Mundada19}. Here we introduce a simple parametric protocol to induce transition between symmetric and antisymmetric states in coupled identical qubits. It provides a tunable $\Lambda$-type scheme made of a resonant transmon qubit pair.
This paper is organized as follows. In Sec.~\ref{sec:ParametricDrive}, we introduce the protocol and the configuration of a general two-qubit system that constructs an effective $\Lambda$-type system. The results discussed in Sec.~\ref{sec:ParametricDrive} are verified numerically by considering a pair of capacitively coupled transmon qubits in Sec.~\ref{sec:lambda}. Sec.~\ref{sec:EIT} demonstrates its continuous-wave applications on $\Lambda$-type electromagnetically induced transparency (EIT) and Autler-Townes splitting (ATS). Sec.~\ref{sec:STIRAP} demonstrates its pulsed control application on $\Lambda$-type stimulated Raman adiabatic passage (STIRAP). Sec.~\ref{sec:discussion} discusses the features and potential applications of this $\Lambda$-type system. Compared to other $\Lambda$-type systems, its large frequency tunability is also highlighted. Sec.~\ref{sec:conclusion} summarizes our paper.
\section{Parametric drive induced mode transfer}
\label{sec:ParametricDrive}
Consider a pair of coupled qubits $Q_a$ and $Q_b$ (see Fig.~\ref{fig:setup}(a)). The Jaynes-Cummings Hamiltonian in the bare-qubit basis is written as
\begin{equation}
\begin{aligned}
H_0 = \omega_a \vert eg \rangle \langle eg \vert + \omega_b \vert ge \rangle \langle ge \vert + J [\vert eg \rangle \langle ge \vert + \vert ge \rangle \langle eg \vert],
\end{aligned}
\label{eq:JC}
\end{equation}
\noindent where $\vert eg \rangle$ indicates that $Q_a$ is in the excited state and $Q_b$ is in the ground state. The excited state resembles dipole oscillation, which can be driven by an external electric field. $\omega_i$ is the transition frequency of $Q_i$. In the platform of superconducting qubits, the transverse inter-qubit coupling $J$ can be realized either by direct capacitive coupling or by an additional waveguide structure~\cite{Krantz19, Gu17}. A realistic example will be discussed in detail in Sec.~\ref{sec:lambda}. In the resonant case $\omega_a = \omega_b = \omega_0$, the three lowest eigenstates and eigenenergies of the combined system are (refer to Fig.~\ref{fig:setup}(b))
\begin{subequations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
\vert G \rangle & = \vert gg \rangle &, \quad \omega_G & = 0, \\
\vert D \rangle &= \frac{1}{\sqrt{2}}[\vert eg \rangle - \vert ge \rangle ] &, \quad \omega_D & = \omega_0 - J, \\
\vert B \rangle &=\frac{1}{\sqrt{2}}[\vert eg \rangle + \vert ge \rangle ] &, \quad \omega_B & = \omega_0 + J.
\end{align}
\label{eq:GDB}
\end{subequations}
Consider that the spatial separation between the two qubits is much smaller than the transition wavelength $\lambda = 2 \pi c / \omega_0$, where $c$ is the speed of light. The two qubits see the oscillation of the electric field in phase. The antisymmetric state $\vert D \rangle$ resembles the out-of-phase dipole oscillation. It has zero total dipole moment and thus it is inherently protected from the external field and the vacuum fluctuation. Therefore, for the state $\vert D \rangle$ the spontaneous decay rate $\Gamma^1_D = 0$. It is also referred to as the \textit {Dark} state. On the other hand, the symmetric state $\vert B \rangle$ features the in-phase dipole oscillation. Thus $\vert B \rangle$ has an enhanced interaction with the external field, with the total dipole moment twice that of the single qubit~\cite{Gambetta11, Srinivasan11}. Consequently, the spontaneous decay rate is enhanced by 4-fold, $\Gamma^1_B = 4\Gamma^1$. The $\vert B \rangle$ state is also referred to as the \textit{Bright} state~\cite{vLoo13, Mlynek14, Majer07, Gambetta11, Srinivasan11, Zhang17}.
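The symmetric/antisymmetric structure of Eq.~(2) can be verified by diagonalizing the single-excitation block of Eq.~(1) numerically; a sketch with illustrative numbers (not the device parameters of Sec.~III):

```python
import numpy as np

# single-excitation block of Eq. (1) in the basis {|eg>, |ge>},
# for the resonant case w_a = w_b = w0 (illustrative values, GHz)
w0, J = 5.0, 0.7
H = np.array([[w0, J],
              [J, w0]])
vals, vecs = np.linalg.eigh(H)       # eigenvalues in ascending order
assert np.allclose(vals, [w0 - J, w0 + J])
# the lower state is the antisymmetric (dark) combination and the
# upper state the symmetric (bright) one, up to a global sign
dark, bright = vecs[:, 0], vecs[:, 1]
assert np.isclose(abs(dark @ np.array([1, -1]) / np.sqrt(2)), 1.0)
assert np.isclose(abs(bright @ np.array([1, 1]) / np.sqrt(2)), 1.0)
```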
\begin{figure}
\includegraphics[width=85 mm]{Fig1.png}
\caption{
Level diagram of the tunable $\Lambda$-type system.
(a) Representation of the control protocol in the basis of two resonant qubits $Q_a$ and $Q_b$. The transition frequency $\omega_a$ is modulated sinusoidally (red wavy, see main text), while $\omega_b$ stays static. $J$ represents the inter-qubit coupling.
(b) Level diagram described in the system eigenstates, with corresponding transition driven by the parametric modulation in (a) (red circular). It is also referred to as the coupling beam in Sec.~\ref{sec:EIT} or the Stokes tone in Sec.~\ref{sec:STIRAP}. The blue arrow from $\vert G \rangle$ to $\vert B \rangle$ indicates the dipole-allowed transition, named as the probe beam in Sec.~\ref{sec:EIT} or the pump tone in Sec.~\ref{sec:STIRAP}.
}
\label{fig:setup}
\end{figure}
The decoherence rates of the state $\vert D \rangle$ and $\vert B \rangle$ are
\begin{subequations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
\gamma_{DD} &= \frac{1}{2}(\Gamma_a^{\phi}+\Gamma_b^{\phi}), \\
\gamma_{BB} &= \frac{1}{2}(\Gamma_a^{\phi}+\Gamma_b^{\phi}) + \frac{1}{2}(4 \Gamma^1).
\end{align}
\label{eq:decay}
\end{subequations}
\noindent Here $\Gamma_i^{\phi}$ is the pure dephasing rate of $Q_i$. Shining an electromagnetic wave is unable to induce transition between state $\vert D \rangle$ and $\vert B \rangle$ due to their parity characteristics (Fig.~\ref{fig:Selection}). However, it is reported that proper level modulation from Stark shift can induce coherent transfer between $\vert D \rangle $ and $\vert B \rangle$ ~\cite{Feng17}. Here we take advantage of the in-situ tunability of level spacing of a superconducting qubit. The transition frequency of $Q_a$ is sinusoidally modulated by its local flux line, denoted as $\omega_a(t) = \omega_0 + 2\Omega_{\Phi}(t) = \omega_0 + 2\Omega^0_{\Phi}(t)\sin{(\omega_{\Phi}t)}$. Here the factor 2 is introduced only for convenience. Meanwhile, $\omega_b = \omega_0 $ stays fixed (see Fig.~\ref{fig:setup}(a)). The Hamiltonian under parametric modulation reads
\begin{equation}
\begin{aligned}
H_{\rm{m}}= [\omega_0 + 2\Omega_{\Phi}(t)] \vert eg \rangle \langle eg \vert + \omega_0 \vert ge \rangle \langle ge \vert \\ + [J \vert eg \rangle \langle ge \vert + \rm{H.c.}].
\end{aligned}
\label{eq:bareHm}
\end{equation}
\noindent Rewriting $H_{\rm{m}}$ in the basis of $\{\vert G \rangle, \vert D \rangle, \vert B \rangle\}$, one gets
\begin{equation}
\begin{aligned}
H_{\rm{m }}= [ \omega_{D} + \Omega_{\Phi}(t) ] \vert D \rangle \langle D \vert + [\omega_B+ \Omega_{\Phi}(t) ] \vert B \rangle \langle B \vert \\
+ [\Omega_{\Phi}(t) \vert B \rangle \langle D \vert + \rm{H.c.}].
\label{eq:bareHmGDB}
\end{aligned}
\end{equation}
Eq.~\ref{eq:bareHmGDB} shows that the energies of the levels of interest, $\vert D \rangle$ and $\vert B \rangle$, vary in time in general. Nevertheless, for fast modulation $ \omega_{\Phi} \gg \Omega^0_{\Phi}$, motional averaging takes place~\cite{Shevchenko10, Li13, Silveri15, Wen20, LZS}, and one can ignore $ \Omega_{\Phi}(t)$ in the diagonal terms of Eq.~\ref{eq:bareHmGDB}. That is, the $\vert D \rangle$ and $\vert B \rangle$ levels act as if they stay at the centered frequencies:
\begin{equation}
\begin{aligned}
{\tilde{H}_{\rm{m }} = \omega_{D} \vert D \rangle \langle D \vert+ \omega_B \vert B \rangle \langle B \vert+ [\Omega_{\Phi}(t) \vert B \rangle \langle D \vert + \rm{H.c.} ]}.
\label{eq:HmGDB}
\end{aligned}
\end{equation}
From the above discussion one easily sees that the parametric drive $2\Omega_{\Phi}(t) = 2\Omega^0_{\Phi}(t)\sin{(\omega_{\Phi}t)}$ is able to induce a coherent transition between state $\vert D \rangle$ and $\vert B \rangle$ as $\omega_{\Phi} \approx \omega_B - \omega_D = 2J$. When the effective Rabi frequency $\Omega^0_{\Phi}(t) \ll \omega_{\Phi}$, the rotating wave approximation (RWA) is satisfied. Therefore, $\omega_{\Phi} \approx 2J \gg \Omega^0_{\Phi}$ is the working regime of the scheme presented in this paper. The strong inter-qubit coupling is essential to achieve motional averaging and to avoid the breakdown of RWA.
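A direct integration of Eq.~(5) in the $\{\vert D \rangle, \vert B \rangle\}$ subspace illustrates the resonant mode transfer; a sketch in dimensionless units, where the values of $2J$ and $\Omega^0_{\Phi}$ are illustrative and chosen to satisfy $\Omega^0_{\Phi} \ll \omega_{\Phi}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# two-level subspace {|D>, |B>} in units with hbar = 1;
# illustrative numbers: 2J = 1.0, Omega0 = 0.02 (angular frequencies)
twoJ, Om0 = 1.0, 0.02

def rhs(t, psi):
    """Schrodinger equation, state stored as [Re(c), Im(c)]."""
    c = psi[:2] + 1j * psi[2:]
    H = np.array([[0.0, Om0 * np.sin(twoJ * t)],
                  [Om0 * np.sin(twoJ * t), twoJ]])
    dc = -1j * (H @ c)
    return np.concatenate([dc.real, dc.imag])

psi0 = np.array([0.0, 1.0, 0.0, 0.0])      # start in |B>
t_pi = np.pi / Om0                          # expected pi-pulse duration
sol = solve_ivp(rhs, (0.0, t_pi), psi0, rtol=1e-9, atol=1e-11)
pD = sol.y[0, -1] ** 2 + sol.y[2, -1] ** 2
assert pD > 0.95   # near-complete |B> -> |D> transfer at resonance
```

The population returns to $\vert B \rangle$ after another half period, consistent with the Rabi chevron of Fig.~3(a).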
In Eq.~(\ref{eq:bareHm}) to Eq.~(\ref{eq:HmGDB}), we consider the resonant condition, $\omega_a=\omega_b=\omega_0$. The parametric drive still works when the two qubits have unequal transition moments from their ground states to excited states. In such a case, the transition moment between the symmetric and antisymmetric states can be nonzero. However, the antisymmetric state is then no longer ideally ``dark'', nor is the symmetric state ideally ``bright''~\cite{SR09}. Slight detuning between $\omega_a$ and $\omega_b$ has a similar effect on the eigenstates, while the effective Rabi frequency $\Omega^0_{\Phi}(t)$ is slightly modified. See Appendix~\ref{sec:detune} for a related discussion. A coupled identical pair forms the exact bright state $\vert B \rangle$ and dark state $\vert D \rangle$. With the introduced parametric drive, direct Rabi swapping between the two extreme states becomes possible, without smearing their critical properties.
A mechanical analogy for the phenomenon discussed above is a pair of coupled identical oscillators with the vacuum mode $\vert G \rangle$, the out-of-phase oscillation $\vert D \rangle$ and the in-phase oscillation $\vert B \rangle$. By a sinusoidal modulation of one of the spring constants, the system picks up a relative phase between the two oscillators, resulting in the coherent mode transfer between the out-of-phase and the in-phase oscillation. The energy is injected or extracted by the external modulation agency. It is reported that a similar principle is used to manipulate phonon modes in far-detuned coupled mechanical oscillators~\cite{Okamoto13}.
\section{Effective $\Lambda$-type system made of transmon pair}
\label{sec:lambda}
\begin{figure}
\includegraphics[width=85 mm]{Fig2.png}
\caption{Tunable $\Lambda$-type system is constructed with two transmon qubits.
(a) Two capacitively coupled and resonant split-junction transmon qubits, with time-dependent flux $\Phi_a(t)$ threading the loop of $Q_a$. Level spacing of the on-resonance system is set by the static fluxes $\bar{\Phi}_a = \Phi_b = \bar{\Phi}$ threading the two qubits. The parametric drive $\Omega_{\Phi}(t)$ is through the local flux line nearby $Q_a$. The electromagnetic excitation $\Omega_p$ is through the waveguide that capacitively coupled to $Q_a$ and $Q_b$. (b) Frequency tunability of the system. The transition frequency of the bright state $\omega_B$ (thick solid), the dark state $\omega_D$ (dotted) and bare qubit frequency $\omega_0$ (dashed green) as a function of dc flux bias $\bar{\Phi}$.}
\label{fig:TCQ}
\end{figure}
The degenerate tunable coupling qubit (TCQ) architecture~\cite{Gambetta11, Srinivasan11, Zhang17}, i.e., two identical split junction transmon qubits with strong capacitive coupling, provides a good platform for our proposed $\Lambda$-type system. The schematic is shown in Figure~\ref{fig:TCQ}(a). The Hamiltonian of a TCQ in terms of the net Cooper pair number states $\{\vert n_i\rangle\}$ on qubit $i\in \{a, b\}$ reads
\begin{equation}
\begin{aligned}
H_{\rm{TCQ}} &= \sum_{i=a,b}\sum_{n_i} 4E_{C_i}(n_{i}-n_{g_{i}})^2\vert n_i \rangle \langle n_i \vert \\
&- \sum_{i=a,b}\sum_{n_i}\frac{1}{2}E_{J_i}[\vert n_i+1 \rangle \langle n_i \vert + \vert n_i-1\rangle \langle n_i \vert] \\
&+ \sum_{i,j=a,b} \sum_{n_i, n_j} 4E_I n_i n_j \vert n_i \rangle \langle n_j \vert ,
\end{aligned}
\label{eq:TCQcharge}
\end{equation}
\noindent where $n_{g_{i}}$ is the gate charge applied on $Q_i$. $E_{C_i}$ and $E_{J_i}$ denote the charging energy and Josephson energy of $Q_i$. $E_{C_a}=E_{C_b}=E_{C}$ is assumed. The inter-qubit coupling $J = 2E_I[E_{J_a}/E_{C_a}]^{1/4}[E_{J_b}/E_{C_b}]^{1/4}$~\cite{Gambetta11,Srinivasan11}, where $E_I$ denotes the interaction energy of the capacitively coupled two-qubit system. The TCQ is biased in degeneracy, i.e., the Josephson energy $\bar{E}_{J_a}(\Phi_a) = E_{J_b}(\Phi_b)$. We numerically solve the eigenstates of the TCQ in the charge basis $\{n_a,n_b\}$ from Eq.~(\ref{eq:TCQcharge}) (Appendix~\ref{sec:selection}). The characteristics of its lowest three eigenstates $\{ \vert G \rangle, \vert D \rangle, \vert B \rangle \}$ resemble that of Eq.~(\ref{eq:GDB}). As the parametric modulation is introduced, $E_{J_a}(t) = E^{\rm{max}}_{J_a}\cos{[\pi\Phi_a(t)/\Phi_0]}\sqrt{1+d^2\tan^2{(\pi\Phi_a(t)/\Phi_0)}}$ varies in time, where $\Phi_0 =h/2e$ is the magnetic flux quantum and $d$ denotes the SQUID asymmetry~\cite{Koch07,Hutchings17}. Note that the selective modulation of $E_{J_i}(\Phi_i)$ is through the much stronger mutual inductance between the flux line and its nearest SQUID loop of $Q_i$, compared to that to the other qubit. It does not conflict the idea of the \textit {small} atom (molecule) assumption, which relies on small spatial separation between the antenna electrodes, compared to the wavelength of the field $\lambda$ in the case of transmon qubits. Refer to Appendix~\ref{sec:Crosstalk} for the discussion on the effect of finite flux crosstalk.
The dynamics of the lowest six eigenstates of the TCQ Hamiltonian (Eq.~(\ref{eq:TCQcharge})), $\{ \vert G \rangle, \vert D \rangle, \vert B \rangle, \vert D2 \rangle, \vert E \rangle, \vert B2 \rangle \}$, is studied to examine the transition between $\vert D \rangle$ and $\vert B \rangle$ by a time-varying flux $\Phi_a(t)$. The density matrix $\rho$ written in the basis of these states is evolved via the master equation
\begin{figure}
\includegraphics[width=85mm]{Fig3.png}
\caption{
Numerical demonstration of parametrically induced transition between $\vert B \rangle$ to $\vert D \rangle$, started with $\rho(t=0) = \vert B\rangle \langle B\vert$. The external flux $\Phi_a(t) = \bar{\Phi} + \delta \Phi \cos{(\omega_{\Phi}t)}$ is applied to $Q_a$. $\bar{\Phi} = 0.25\Phi_0$. $\delta \Phi = 10^{-3}\Phi_0$, corresponds to $\Omega^0_{\Phi} /2\pi=$ 2.32 MHz. The parametric driving frequency $\omega_{\Phi}$ is set around $\omega_B - \omega_D \cong 2J $.
(a) Rabi oscillation chevron. $\rho_{DD}$(colored) as a function of time, against detuning $\Delta_{\Phi}$. (b) Density matrix element $\rho_{DD}$(violet), $\rho_{BB}$(yellow) and $\rho_{GG}$(gray) as a function of time at $\Delta_{\Phi}=0$. The decay envelope (dashed) refers to the overall excitation decay rate $\Gamma_B^1/2$ to the ground state.
(c) Extracted Rabi frequency $\Omega^{\rm{fit}}_{\Phi}$ (hollow symbols) against flux modulation amplitude $\delta\Phi$ at different flux bias point $\bar{\Phi}$ with $\Delta_{\Phi}=0$. They agree well with the evaluation $\Omega^0_{\Phi} = \frac{1}{2}\frac{\partial \omega_a}{\partial \bar{\Phi}} \delta \Phi$ (solid lines). Refer to Appendix~\ref{sec:RWA} for $\rho(t)$ under different $J/\Omega^0_{\Phi}$ ratio. The detailed mapping of $\Omega^{\rm{fit}}_{\Phi}$ and the validity of applying RWA against $\bar{\Phi}$ is in Appendix~\ref{sec:tunability}. Calculation results that includes only the lowest three levels of a TCQ are presented in (b) (black lines) and (c) (solid symbols) as well.
}
\label{fig:Lambda}
\end{figure}
\begin{equation}
\begin{aligned}
\dot{\rho}= &i [\rho, H_{\rm{TCQ}}] \\
&+ \sum_{k,l} {\bigl\{\frac{\Gamma_{kl}^1}{2} \mathcal{D}[\vert l\rangle \langle k \vert]\rho
+ \delta_{kl} \Gamma^{\phi} \mathcal{D}[\vert l \rangle \langle k \vert]\rho\bigr\}},
\label{eq:ME}
\end{aligned}
\end{equation}
\noindent where $\delta_{kl}$ is the Kronecker delta. $\vert k \rangle, \vert l \rangle \in \{ \vert G \rangle, \vert D \rangle, \vert B \rangle, \vert D2 \rangle, \vert E \rangle, \vert B2 \rangle \}$. Note that while truncation of higher energy state space is employed, the full Rabi Hamiltonian is considered in this six-state subspace simulation, i.e., the rotating wave approximation is not applied. The dissipation term $\mathcal{D}[\mathcal{O}]\rho = 2\mathcal{O}\rho\mathcal{O}^\dagger - \mathcal{O}^\dagger\mathcal{O}\rho - \rho\mathcal{O}^\dagger\mathcal{O}$. Note also that the spontaneous decay from $\vert B \rangle$ to $\vert D \rangle$ is absent due to the selection rule $\langle D \vert ( \hat{n}_a + \hat{n}_b )\vert B \rangle=0$.
For the following discussion, typical transmon parameters $E^{\rm{max}}_{J_a}/2\pi = E^{\rm{max}}_{J_b}/2\pi = E^{\rm{max}}_{J}/2\pi =15$ GHz, $E_{C}/2\pi = 400$ MHz and $E_I/2\pi=180$ MHz are used and the corresponding $J/2\pi = 700$ MHz. This strong inter-qubit coupling can be experimentally realized by direct capacitive coupling of 11.5 fF~\cite{Gambetta11, Srinivasan11, Zhang17}, and ensures that $\Omega^0_{\Phi}$ can achieve 100 MHz (see later discussion and Fig.~\ref{fig:Lambda}(c)). Note that, despite strong coupling $J$ being assumed here, the discussion based on the Jaynes-Cummings Hamiltonian in Sec. II is still valid. See Appendix~\ref{sec:FullRabiH} for the related discussion. The SQUID asymmetry factor $d=0.6$ is assumed. The corresponding eigenfrequencies are $\omega_D/2\pi = 5.13$ GHz and $\omega_B /2\pi= 6.53$ GHz. As shown in Fig.~\ref{fig:TCQ}, the frequency of the bright state $\omega_B(\bar{\Phi})$ can be varied nearly 1 GHz by applying static flux biases. Potential applications of a tunable $\Lambda$-type system thus can be expected. As described by Eq.~(\ref{eq:decay}), the pure dephasing rate of the qubits $\Gamma_i^{\phi}$ limits the coherence of state $\vert D \rangle$. Throughout the discussion we set $\Gamma_i^{\phi}/2\pi = 0.2$ MHz, which is reported~\cite{Hutchings17} even if the transmons are biased away from their flux sweet spots. The spontaneous decay rate of the state $\vert B \rangle$, $\Gamma_B^1/2\pi =40$ MHz when embedded in a one-dimensional open transmission line and $\Gamma_B^1/2\pi =0.1$ MHz when embedded in a far-detuned resonator.
Figure~\ref{fig:Lambda} illustrates the parametric drive induced mode transfer obtained by the aforementioned numerical method. By applying a sinusoidal flux $\Phi_a(t) = \bar{\Phi} + \delta\Phi\sin(\omega_{\Phi}t)$ with $\omega_{\Phi} \approx 2J$, $\omega_a$ is parametrically modulated. As a result, the atomic population can be swapped between $\vert B \rangle$ and $\vert D \rangle$ (Fig.~\ref{fig:Lambda}(a)). Spontaneous decay from $\vert B \rangle$ to the ground state $\vert G \rangle$ is present. In Fig.~\ref{fig:Lambda}(b), the step-wise growth of $\rho_{GG}$ reflects the contrasting decay rates $\Gamma^1_{B}$ and $\Gamma^1_{D}$. The swapping rate is $\Omega_{\rm eff} = \sqrt{{\Omega^0_{\Phi}}^2+\Delta^2_{\Phi}}$, where the Rabi frequency $\Omega^0_{\Phi} = \frac{1}{2}\frac{\partial \omega_a}{\partial \bar{\Phi}} \delta \Phi$ and the detuning $\Delta_{\Phi} \equiv \omega_{\Phi} - (\omega_B - \omega_D ) \cong \omega_{\Phi} - 2J$.
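The swap dynamics above follow the standard two-level Rabi formula; a minimal numeric sketch (not the full master-equation simulation, and ignoring decay) evaluating the $\vert D \rangle$ population under the parametric drive is given below. The drive amplitude and detuning values are illustrative assumptions.

```python
import math

def swap_population(t, omega0, delta):
    """Two-level Rabi formula for the |D> population when starting in |B>:
    P_D(t) = (omega0^2 / omega_eff^2) * sin^2(omega_eff * t / 2),
    with omega_eff = sqrt(omega0^2 + delta^2) (angular frequencies)."""
    omega_eff = math.sqrt(omega0 ** 2 + delta ** 2)
    return (omega0 / omega_eff) ** 2 * math.sin(omega_eff * t / 2) ** 2

# Resonant drive (delta = 0): a full swap after half a Rabi period.
omega0 = 2 * math.pi * 100e6          # Rabi frequency 100 MHz (rad/s)
t_pi = math.pi / omega0               # pi-pulse duration, 5 ns
print(swap_population(t_pi, omega0, 0.0))   # ~ 1.0 (complete transfer)

# Detuned drive: the swap is incomplete.
delta = 2 * math.pi * 50e6
print(swap_population(t_pi, omega0, delta))
```

A finite detuning $\Delta_\Phi$ both speeds up the oscillation and caps its amplitude at $\Omega_0^2/\Omega_{\rm eff}^2 < 1$, consistent with the $\Omega_{\rm eff}$ expression above.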
The upper limit of the available $\Omega^0_{\Phi}$ depends on two factors: the flux bias point $\bar\Phi$ and the inter-qubit coupling $J$. Figure~\ref{fig:Lambda}(c) shows $\Omega^0_{\Phi}$ as a function of $\delta{\Phi}$ at different $\bar\Phi$, and therefore different $\omega_B$. Remarkably, $\Omega^0_{\Phi}$ scales linearly with $\delta{\Phi}$ over a wide range, from sub-MHz up to 100 MHz, as long as $\Omega^0_{\Phi} \ll J$. This result shows great potential for fast-control applications.
We also perform the simulation in the subspace constituted by only the $\{ \vert G \rangle, \vert D \rangle, \vert B \rangle \}$ states. The three-state subspace simulation results are shown in Fig.~\ref{fig:Lambda}(b) (black lines) and (c) (solid symbols) for comparison. The parametric drive induced mode transfer shows no difference between the three-state and six-state subspace simulations. This is because the system has no transition near resonance with the parametric drive other than that between $\vert D \rangle$ and $\vert B \rangle$ (see Appendix~\ref{sec:selection}). Meanwhile, the strong coupling $J$ has little effect on the parametric drive induced transfer between $\vert D \rangle$ and $\vert B \rangle$ (see Appendix~\ref{sec:FullRabiH} for details). The appropriateness of the three-state subspace simulation echoes the principal idea of the parametrically induced transition based on the Jaynes-Cummings Hamiltonian (Eq.~(\ref{eq:JC})) described in Section~\ref{sec:ParametricDrive}. In the following sections, most of the numerical demonstrations are done in the three-state subspace to save computational resources.
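For readers who wish to reproduce the qualitative three-state dynamics, a self-contained Lindblad integrator is sketched below. It uses a generic resonant $\vert D \rangle\!\leftrightarrow\!\vert B \rangle$ drive and a single collapse operator $\vert G \rangle\langle B \vert$ with hypothetical dimensionless rates, not the exact parameters of Eq.~(\ref{eq:ME}); the dissipator convention matches the one defined above, since $\tfrac{\Gamma}{2}\mathcal{D}[\mathcal{O}]\rho = \Gamma(\mathcal{O}\rho\mathcal{O}^\dagger - \tfrac{1}{2}\{\mathcal{O}^\dagger\mathcal{O},\rho\})$.

```python
import numpy as np

def evolve(rho, H, c_ops, dt, steps):
    """RK4 integration of drho/dt = -i[H,rho] + sum_k rate_k *
    (O rho O^+ - {O^+O, rho}/2), i.e., (rate/2) D[O] rho in the
    dissipator convention of the text."""
    def f(r):
        out = -1j * (H @ r - r @ H)
        for rate, O in c_ops:
            Od = O.conj().T
            out += rate * (O @ r @ Od - 0.5 * (Od @ O @ r + r @ Od @ O))
        return out
    for _ in range(steps):
        k1 = f(rho)
        k2 = f(rho + 0.5 * dt * k1)
        k3 = f(rho + 0.5 * dt * k2)
        k4 = f(rho + dt * k3)
        rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Basis ordering: |G> = 0, |D> = 1, |B> = 2 (dimensionless units)
Om, Gamma_B = 1.0, 0.5
H = np.zeros((3, 3), complex)
H[1, 2] = H[2, 1] = Om / 2                 # resonant |D><B| drive
O_GB = np.zeros((3, 3), complex)
O_GB[0, 2] = 1.0                           # collapse |G><B|
rho0 = np.zeros((3, 3), complex)
rho0[1, 1] = 1.0                           # start in |D>
rho = evolve(rho0, H, [(Gamma_B, O_GB)], dt=0.01, steps=2000)
print(rho[0, 0].real)                      # population pumped into |G>
```

As in Fig.~\ref{fig:Lambda}(b), the drive cycles population through $\vert B \rangle$, from which it leaks irreversibly into $\vert G \rangle$; the trace is preserved throughout.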
\begin{figure*}
\includegraphics[width=170mm]{Fig4.png}
\caption
{
Demonstration of $\Lambda$-type EIT and ATS in a degenerate TCQ. Refer to Fig.~\ref{fig:setup} for the setup and Sec.~\ref{sec:lambda} for the simulation parameters. Steady-state atomic absorption response Im($\rho^{SS}_{GB}$) as a function of probe detuning $\delta_p$ for (a) $\Omega^0_{\Phi}/2\pi = 13.1$ MHz and (c) $\Omega^0_{\Phi}/2\pi = 69.0$ MHz, respectively. (b) and (d) show the corresponding dispersion responses Re($\rho^{SS}_{GB}$) of (a) and (c), respectively. The symbols indicate the results of numerical simulation from Eq.~(\ref{eq:ME}); the crosses show the results where the 6 lowest levels of the system are included. The fitting curves adapted from Eq.~(\ref{eq:GEN}) are presented as black solid lines. (e) Fitting parameters $\gamma_{BB}/2\pi$ (black star), $\gamma_{DD}/2\pi$ (black cross) and $\Omega^0_{\Phi}/2\pi$ (red square) extracted by fitting Eq.~(\ref{eq:GEN}) to Im($\rho^{SS}_{GB}$), as a function of $\Omega^0_{\Phi}$. The lines indicate the parameter values used in the simulation. (f) $\bar{w}_{\rm{EIT}}$ and $\bar{w}_{\rm{ATS}}$ against $\Omega^0_{\Phi}$. $\bar{w}_{\rm{EIT}} > 0.5$ indicates where the EIT spectrum fits better. The shaded region indicates the corresponding EIT regime $\Omega_{\Phi}^0 < \left|\gamma_{BB}-\gamma_{DD}\right|$.
}
\label{fig:EIT}
\end{figure*}
\section{Electromagnetically Induced Transparency and Autler-Townes Splitting}
\label{sec:EIT}
Consider a TCQ embedded in an open transmission line. When a probe beam $\Omega_p$ is applied to the waveguide, together with a parametric drive $\Omega_{\Phi}$ applied to the flux line (Fig.~\ref{fig:setup}(b)), the system Hamiltonian reads
\begin{equation}
\begin{aligned}
H_{\rm{m}}=& \omega_{D} \vert D \rangle \langle D \vert + \omega_{B} \vert B \rangle \langle B\vert\\
&+ [ \Omega_{p}(t) \vert G \rangle \langle B \vert + \Omega_{\Phi}(t) \vert D \rangle \langle B\vert + \rm{H.c.} ]
\label{eq:HLambda}
\end{aligned}
\end{equation}
\noindent The weak probe beam is $\Omega_{p}(t) = \Omega_{p}^0\sin{(\omega_pt)}$ with detuning $\delta_p \equiv \omega_p - \omega_{B}$, and the coupling beam is $\Omega_{\Phi}(t)=\Omega_{\Phi}^0\sin{(\omega_{\Phi}t)}$ with detuning $\Delta_{\Phi} = 0$. In Fig.~\ref{fig:EIT}, the master equation (Eq.~(\ref{eq:ME})) is evolved until a steady state is reached to spectroscopically demonstrate $\Lambda$-type EIT and ATS in a one-dimensional open transmission line, where we set $\Gamma_B^1/2\pi=40$ MHz and $\Gamma_i^{\phi}/2\pi = 0.2$ MHz. Correspondingly, $\gamma_{BB}/2\pi=40.2$ MHz and $\gamma_{DD}/2\pi=0.2$ MHz. Ref.~\cite{Sun14} gives the general solution of the steady-state optical susceptibility $\chi_{GB}(\delta_p)$ of Eq.~(\ref{eq:HLambda})
\begin{equation}
\begin{aligned}
\chi_{GB}(\delta_p) = \frac{\vert p_{GB}\vert^2}{\delta_- - \delta_+} [\frac{\delta_++i\gamma_{DD}}{\delta_p-\delta_+} -\frac{\delta_-+i\gamma_{DD}}{\delta_p-\delta_-}] ,
\label{eq:GEN}
\end{aligned}
\end{equation}
\noindent where $\delta_{\pm} = (-i\gamma_{DD}-i\gamma_{BB}\pm\Omega_T)/2$, with $\Omega_T = \sqrt{{\Omega^0_{\Phi}}^2 - (\gamma_{DD} - \gamma_{BB})^2}$. $p_{GB} = \langle G \vert ( \hat{n}^2_a + \hat{n}^2_b ) \vert B \rangle$ denotes the transition moment between $\vert G \rangle$ and $\vert B \rangle$. Additionally, $\chi_{GB}(\omega_p)= \tilde{\alpha}\rho^{SS}_{GB}(\omega_p)$, where $\rho^{\rm{SS}}_{GB}(\omega_p)$ is the steady-state oscillation amplitude of the density matrix element. The factor $\tilde{\alpha} = \Omega_p^0/(\varepsilon_0\vert E_p \vert^2 )$, with $\varepsilon_0$ the vacuum permittivity and $E_p$ the amplitude of the probe field.
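Equation~(\ref{eq:GEN}) is straightforward to evaluate numerically. The sketch below (with the transition moment normalized to $\vert p_{GB}\vert^2 = 1$, an arbitrary choice for illustration) reproduces the transparency dip at line center in the EIT regime using the decoherence rates quoted above:

```python
import cmath

def chi_GB(delta_p, omega0, g_DD, g_BB, p2=1.0):
    """Steady-state susceptibility of Eq. (GEN); all rates and
    frequencies in the same (angular) units, |p_GB|^2 = p2."""
    Om_T = cmath.sqrt(omega0 ** 2 - (g_DD - g_BB) ** 2)
    d_plus = (-1j * g_DD - 1j * g_BB + Om_T) / 2
    d_minus = (-1j * g_DD - 1j * g_BB - Om_T) / 2
    return (p2 / (d_minus - d_plus)) * (
        (d_plus + 1j * g_DD) / (delta_p - d_plus)
        - (d_minus + 1j * g_DD) / (delta_p - d_minus)
    )

g_BB, g_DD = 40.2, 0.2        # MHz (a common 2*pi factor is implied)
Om_EIT = 13.1                 # EIT regime: Om < |g_BB - g_DD|
dip = chi_GB(0.0, Om_EIT, g_DD, g_BB).imag    # absorption at line center
side = chi_GB(20.0, Om_EIT, g_DD, g_BB).imag  # absorption off center
print(dip < side)             # transparency window at delta_p = 0
```

In the EIT regime $\Omega_T$ is imaginary, both poles sit on the imaginary axis, and the destructive interference of the two terms suppresses absorption at $\delta_p = 0$ down to a floor set by $\gamma_{DD}$.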
The coherence criterion of $\Lambda$-type EIT, $\gamma_{BB} \gg \gamma_{DD}$, is satisfied in this system. The EIT effect competes with the ATS effect. When $\Omega_{\Phi}^0 < \left|\gamma_{DD}-\gamma_{BB}\right|$, i.e., when the control-tone Rabi frequency is smaller than the linewidth of the $\vert B \rangle$ state, quantum interference dominates. Eq.~(\ref{eq:GEN}) then reduces to $\chi_{\rm{EIT}}$, and a narrow Lorentzian transmission window emerges at the center of the $\vert G \rangle$ to $\vert B \rangle$ absorption, as illustrated in Fig.~\ref{fig:EIT}(a)(b). When $\Omega_{\Phi}^0 > \left|\gamma_{DD}-\gamma_{BB}\right|$, ATS~\cite{Silla09, Jian11} takes over the optical response: Eq.~(\ref{eq:GEN}) reduces to $\chi_{\rm{ATS}}$, i.e., two Lorentzian absorption windows separated by $\Omega^0_{\Phi}$, as illustrated in Fig.~\ref{fig:EIT}(c)(d). In both regimes, our simulation based on Eq.~(\ref{eq:ME}) matches the analytical solution Eq.~(\ref{eq:GEN}) with excellent agreement. Fig.~\ref{fig:EIT}(e) shows the fitting parameters $\{\gamma_{DD}, \gamma_{BB}, \Omega^0_{\Phi}\}$ obtained by fitting Eq.~(\ref{eq:GEN}) to ${\rm{Im}}(\rho^{\rm{SS}}_{GB})$ for different modulation amplitudes $\Omega^0_\Phi$. The result confirms the validity of the effective $\Lambda$-type system activated by the parametric drive induced transition.
Akaike's information criterion~\cite{Anisimov11} is applied to analyze the steady-state optical response ${\rm{Im}}(\rho^{\rm{SS}}_{GB})$ as a function of $\Omega^0_{\Phi}$ (Fig.~\ref{fig:EIT}(f)). ${\rm{Im}}(\rho^{\rm{SS}}_{GB})$ is fitted to both $\chi_{\rm{EIT}}$ and $\chi_{\rm{ATS}}$. The per-point weights
\begin{subequations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
\bar{w}_{\rm{EIT}} &= \frac{e^{-\bar{I}_{\rm{EIT}}}} { e^{-\bar{I}_{\rm{EIT}}} + e^{-\bar{I}_{\rm{ATS}}}} \\
\bar{w}_{\rm{ATS}} &= 1 - \bar{w}_{\rm{EIT}}
\label{eq:AIC}
\end{align}
\end{subequations}
\noindent are then evaluated, where $\bar{I} = \ln{R/N} + 2k/N$, with $N$ the number of data points, $R$ the sum of squared fitting residuals and $k$ the number of fitting parameters~\cite{Liu16}. Ref.~\cite{Sun14} gives the control-tone power window for EIT, $2\gamma_{DD}\sqrt{\gamma_{DD}/(\gamma_{BB}+2\gamma_{DD})}<\Omega^0_{\Phi}< \left|\gamma_{DD}-\gamma_{BB}\right|$, which corresponds to 0.04 MHz$<\Omega^0_{\Phi}/2\pi<$ 20 MHz in our scheme. This agrees with the weights shown in Fig.~\ref{fig:EIT}(f).
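The per-point weights are simple to compute once the two residual sums are known; below is a short sketch with hypothetical residual values standing in for the actual $\chi_{\rm EIT}$ and $\chi_{\rm ATS}$ fit results:

```python
import math

def akaike_weights(residuals, n_points, n_params):
    """Per-point Akaike weights from sums of squared residuals R:
    I_bar = ln(R/N) + 2k/N,  w_i proportional to exp(-I_bar_i)."""
    I = [math.log(R / n_points) + 2 * k / n_points
         for R, k in zip(residuals, n_params)]
    w = [math.exp(-i) for i in I]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical residuals: the EIT model fits better (smaller R)
w_eit, w_ats = akaike_weights([0.5, 2.0], n_points=200, n_params=[3, 3])
print(w_eit, w_ats)
```

By construction the two weights sum to one, so $\bar{w}_{\rm EIT} > 0.5$ directly flags the regime where the EIT model is the better description, as in Fig.~\ref{fig:EIT}(f).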
The non-unity transparency at the center is limited by the finite decoherence rate $\gamma_{DD}$ of the metastable state $\vert D \rangle$. Practically, 1 MHz $ < \gamma_{BB} /2\pi < $ 50 MHz is dominated by the spontaneous decay, while 0.1 MHz $ < \gamma_{DD} /2\pi < $ 10 MHz is dominated by the pure dephasing. Consequently, the bottleneck threshold for the emergence of EIT, $\gamma_{BB} > 2 \gamma_{DD}$, can be easily overcome. Here we illustrate the case $\gamma_{BB} / \gamma_{DD} \cong 100$. Note that in principle $\gamma_{BB}$ can be enhanced while $\gamma_{DD}$ is suppressed independently, owing to their distinct decoherence channels.
\section{Stimulated Raman adiabatic passage}
\label{sec:STIRAP}
\begin{figure}
\includegraphics[width=75mm]{Fig5.png}
\caption{
$\Lambda$-type STIRAP to swap population from $\vert G \rangle$ to $\vert D \rangle$ state, started with $\rho({t=0}) = \vert G \rangle \langle G \vert$ and stopped at time $t_{\rm{f}}=0.8 \rm{\mu}$s $= 2.57\times 2\pi/\gamma_{BB}$.
Refer to Fig.~\ref{fig:setup}(b) for the setup.
(a) Control sequence, with the parametric Stokes pulse $\Omega^0_{\Phi}(t)$ (red) followed by the pump pulse $\Omega^0_p(t)$ (blue). The pulse separation $2\tau = 182$ ns and the pulse duration $2T = 200$ ns. The black dotted line indicates $\Omega^0_{\rm{rms}}(t)$ with peak Rabi frequency 67.2 MHz.
(b) Corresponding atomic response as a function of time. $\rho_{DD}(t_{\rm{f}}) = 0.969$ (dotted) and $\rho_{GG}(t_{\rm{f}}) = 0.031$ (gray solid). The intermediate state population $\rho_{BB}$ (thick solid black) reaches 0.014 during the process. Simulation results that include the 6 lowest levels of a TCQ are presented as colored shaded lines for comparison.
(c) Transfer efficiency as a function of peak pulse amplitude $\Omega^{\rm{pk}}$. Colors represent different pulse delays $\tau$. Solid lines represent the analytical modeling accounting for nonadiabaticity~\cite{Yatsenko02, Yatsenko14}.
}
\label{fig:STIRAP}
\end{figure}
In this section, we show the application of the proposed $\Lambda$-type system to transient optical response by considering STIRAP \cite{ Vitanov17, Torosov13, Bergmann19, Kumar16,AV19, Premaratne17}. Refer to Eq.~(\ref{eq:HLambda}) for the system Hamiltonian. The Stokes tone $\Omega_{\Phi}(t) = \Omega^0_{\Phi}(t)\sin(\omega_{\Phi}t)$ is applied parametrically while the pump tone $\Omega_p(t)= \Omega^0_p(t)\sin(\omega_pt)$ is applied onto the waveguide. A typical STIRAP sequence is composed of two partially overlapping pulses in the time domain, $\Omega_{\Phi} (t)$ followed by $\Omega_p (t)$, with equal peak Rabi frequency $\Omega^{\rm{pk}}$. With an optimized pulse sequence, the population can be transferred from $\vert G \rangle$ to $\vert D \rangle$ with negligible population in the dissipative intermediate state $\vert B \rangle$. Fig.~\ref{fig:STIRAP}(a) shows an illustrative protocol of a $\Lambda$-type STIRAP and Fig.~\ref{fig:STIRAP}(b) shows the simulated response of the system, where the master equation (Eq.~(\ref{eq:ME})) is used. The detunings $\delta_p = \Delta_{\Phi} = 0$ are assumed. The atomic state is swapped from $\vert G \rangle$ to $\vert D \rangle$ by a single set of hyper-Gaussian pulses, $\Omega^0_p(t) = \Omega^{\rm{pk}}\rm{exp}[-(t-\tau/2)^4/T^4] $ and $\Omega^0_{\Phi}(t) = \Omega^{\rm{pk}}\rm{exp}[-(t+\tau/2)^4/T^4]$. Here $\tau$ is the pulse separation and $2T$ characterizes the pulse width. The transfer efficiency is defined as the metastable state population at the end of the process, $\rho_{DD}(t_{\rm{f}})$. The non-unit transfer efficiency of the process stems from the nonadiabaticity of the protocol, and is described by~\cite{Yatsenko02, Yatsenko14}
\begin{equation}
\rho_{DD}(t_{\rm{f}}) = \exp{ \biggl\{ - \bigintsss_0^{t_{\rm{f}}}{4\gamma_{BB} {\biggl(\frac{\dot{\theta}}{\Omega^0_{\rm{rms}}}\biggr)}^2\,dt} \biggr\}},
\label{eq:STIRAPadb}
\end{equation}
\noindent where $\Omega^0_{\rm{rms}}(t) = \sqrt{{\Omega^0_{\Phi}}(t)^2 + \Omega^0_p(t)^2} $ is the average Rabi frequency of the two pulses and $\theta(t) = \tan^{-1}{[\Omega^0_p(t) / \Omega^0_{\Phi}(t)]}$ is the mixing angle. $\rho_{DD}(t_{\rm{f}})$ approaches unity as the local adiabatic condition for the sequential pulses $\Omega^0_{\rm{rms}}(t) \gg \lvert{\dot{\theta}(t)}\rvert$ is satisfied.
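The adiabaticity integral can be evaluated by direct quadrature. The sketch below uses the hyper-Gaussian pulse shapes defined above with illustrative parameters close to those of Fig.~\ref{fig:STIRAP}; the specific drive amplitudes and integration window are assumptions, not a reproduction of the figure:

```python
import math

def stirap_efficiency(omega_pk, gamma_BB, tau, T, t0, t1, n=20000):
    """Midpoint quadrature of the transfer efficiency
    exp{-int 4*gamma_BB*(theta_dot/Omega_rms)^2 dt}
    for hyper-Gaussian pump/Stokes envelopes (angular units, time in us)."""
    def pulses(t):
        om_p = omega_pk * math.exp(-((t - tau / 2) / T) ** 4)  # pump (late)
        om_s = omega_pk * math.exp(-((t + tau / 2) / T) ** 4)  # Stokes (early)
        return om_p, om_s
    def theta(t):
        p, s = pulses(t)
        return math.atan2(p, s)                # mixing angle
    dt = (t1 - t0) / n
    integral = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        om_p, om_s = pulses(t)
        om_rms = math.hypot(om_p, om_s)
        if om_rms == 0.0:
            continue                           # far outside both pulses
        th_dot = (theta(t + dt) - theta(t - dt)) / (2 * dt)
        integral += 4 * gamma_BB * (th_dot / om_rms) ** 2 * dt
    return math.exp(-integral)

gamma_BB = 2 * math.pi * 40.0                  # 40 MHz, in rad/us
tau, T = 0.182, 0.1                            # pulse delay and width (us)
weak = stirap_efficiency(2 * math.pi * 20.0, gamma_BB, tau, T, -0.4, 0.4)
strong = stirap_efficiency(2 * math.pi * 67.2, gamma_BB, tau, T, -0.4, 0.4)
print(weak, strong)                            # efficiency grows with drive
```

Since the exponent scales as $1/\Omega_{\rm pk}^2$, stronger pulses push the efficiency toward unity, reproducing the qualitative trend of Fig.~\ref{fig:STIRAP}(c).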
Fig.~\ref{fig:STIRAP}(c) shows that the incomplete population transfer $\rho_{DD}(t_{\rm{f}})$ for various protocol parameters \{$\tau$, $\Omega$\} can be explained by Eq.~(\ref{eq:STIRAPadb}). Note that despite the sequence length being comparable to the lifetime of the intermediate state $\vert B \rangle$, the transfer efficiency can still approach unity as long as adiabaticity holds throughout the process.
Pioneering experiments have demonstrated the STIRAP process in a $\Xi$-type artificial atom or a qubit-cavity system in the circuit QED architecture \cite{KKumar16, Premaratne17, AV19}. Here we address the possibility of applying STIRAP to this effective $\Lambda$-type artificial atom, with a relaxation-free final state. Note that despite the large excited-state decoherence $\gamma_{BB}$, a high transfer efficiency $\rho_{DD}(t_{\rm{f}})$ is obtained. With the TCQ placed in a far-detuned cavity, the system, with suppressed $\gamma_{BB}$, could approach unit transfer efficiency.
In the sense of coupled oscillator dynamics, this STIRAP process is analogous to the cooperation of the in-phase drive on both objects and stiffness modulation on one of them. It excites the system from vacuum to out-of phase oscillation, without any in-phase oscillation.
\section{Discussion}
\label{sec:discussion}
Realization of EIT in superconducting quantum circuits has been studied and carried out in recent years~\cite{AS10, Anisimov11, Sun14, Gu16, Liu16, Novikov16, Long18,Andersson20}. The main challenge is to create a reliable metastable state~\cite{AS10, Anisimov11, Sun14}. In most of the successful approaches to creating effective $\Lambda$-type levels, a single qubit is dispersively coupled with a cavity (QC). A strong pumping field is also applied either to excite a higher order transition~\cite{Novikov16} or to modify the level configuration~\cite{Gu16, Long18}. These approaches satisfy the coherence criterion by making use of the decay rates of the cavity-like state (excited state) and the Purcell-protected qubit-like state (metastable state). However, the desire for a high cavity decay rate $\kappa$ conflicts with the desire for a low Purcell-protected qubit decay $\gamma_{\kappa} = (g/\Delta)^2\kappa$, thereby compromising the choice of the cavity bandwidth. Here $g$ and $\Delta$ denote the coupling and detuning between the qubit and the cavity, respectively. Regarding frequency tunability of the QC approach, a tunable qubit can slightly shift the transparency window by $g^2/\Delta$ as it moves toward the cavity frequency; meanwhile, the degraded Purcell protection limits the performance of the metastable state. Alternatively, SQUID loops with high critical current can be integrated into the coplanar waveguide cavity to achieve high tunability, at the cost of more complicated manufacturing. In contrast, the degenerate TCQ itself is a $\Lambda$-type system, with ideally zero spontaneous decay from the dark state. It allows a transparency window tunable up to a few GHz (Fig.~\ref{fig:TCQ}(b)) with standard SQUID loops on both qubits. Meanwhile, the spontaneous decay remains eliminated~\cite{Gambetta11, Srinivasan11}, and insignificant degradation of the pure dephasing could be achieved~\cite{Hutchings17}.
Figure~\ref{fig:EIT}(b) indicates a slow light effect with $v_g\cong 5\% $ of the speed of light. Benefiting from the tunability of the system, frequency transduction between the encoded and retrieved light could be possible in such a quantum memory.
Note that the main difference between this scheme and a conventional $\Lambda$-type atomic level structure is that the spontaneous decay from $\vert B \rangle$ to $\vert D \rangle$ is negligible, being mediated only by the local flux lines with noise near $\omega_B - \omega_D$; $\Gamma^1_{DB}$ could be below 1 Hz~\cite{Koch07, Hutchings17}. Therefore, frequency down-conversion due to spontaneous emission~\cite{Koshino13, Inomata14} is absent in this system.
The proposed scheme can be generalized to different platforms. For example, it can be applied to different types of qubits, as well as to alternative realizations of strong transverse coupling. The shortcoming of this scheme is that the frequency tunability of the system introduces non-negligible dephasing, which is the main decoherence source of the metastable state. Further effort can be made to suppress the pure dephasing, either by surface treatment~\cite{Kumar16} or by considering different types of qubits~\cite{Yan16}.
\section{Conclusion}
\label{sec:conclusion}
In conclusion, we propose a simple and effective $\Lambda$-type system made of a resonant superconducting qubit pair. The symmetric state plays the role of the excited state and the antisymmetric state represents the metastable state. The two are coupled by a parametric drive on one of the qubits. Its applications to $\Lambda$-type EIT, ATS and STIRAP are numerically demonstrated. Compared to other approaches in the circuit QED architecture, our proposed $\Lambda$-type scheme features large level tunability, while retaining sufficient coherence of the metastable state. The device footprint is compact and the fabrication can be directly implemented with the transmon-type approach. It provides a practical route to realizing a $\Lambda$-type system in on-chip superconducting quantum circuits.
\begin{acknowledgments}
The authors wish to thank Io-Chun Hoi, Jeng-Chung Chen and George Thomas for helpful discussion. This work has been supported by Ministry of Science and Technology in Taiwan under Grants No. MOST 109-2112-M-008-025 and MOST-110-2112-M-008-024.
\end{acknowledgments}
In this section we describe the experiments carried out to evaluate the proposed encoder-decoder architecture for trajectory prediction with uncertainty.
\subsection{Dataset benchmark}
We used an AIS dataset extracted
from the historical data made freely available by the Danish Maritime Authority (DMA)~\cite{DMA},
comprising $394$ trajectories of \textit{tanker} vessels belonging to two specific motion patterns of interest in the period ranging from January to February 2020. The complete dataset used in this paper is illustrated in Fig.~\ref{fig:training_dataset}, and a detailed description of the dataset preparation method can be found in~\cite{Capobianco2021}.
Paths falling into the same maritime pattern are described as sequences of positions
in planar coordinates assigned through the Universal Transverse Mercator (UTM) projection (zone 32V),
and correlated by voyage-related attributes including departure and destination areas.
In order to provide regular input sequences to train the model in a supervised fashion, we applied a temporal interpolation to each trajectory with a fixed sampling time of $\Delta = 15$ minutes to resample the data, and segmented the data with a sliding-window approach to produce fixed-length input and output sequences of length $\ell=h=12$ vessel positions (i.e., $3$ hours).
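The segmentation step can be sketched in a few lines; the function below is a simplified stand-in for the full preprocessing of~\cite{Capobianco2021}, producing the fixed-length input/output pairs from one resampled track (the toy random-walk track is an assumption for illustration):

```python
import numpy as np

def sliding_windows(track, ell=12, h=12):
    """Segment one resampled trajectory of shape (T, d) into
    stride-1 input windows (n, ell, d) and output windows (n, h, d)."""
    T = len(track)
    X = np.stack([track[k - ell:k] for k in range(ell, T - h + 1)])
    Y = np.stack([track[k:k + h] for k in range(ell, T - h + 1)])
    return X, Y

# A toy 2-D UTM track: 40 samples (15-minute spacing assumed)
track = np.cumsum(np.random.default_rng(0).normal(size=(40, 2)), axis=0)
X, Y = sliding_windows(track)
print(X.shape, Y.shape)   # (17, 12, 2) (17, 12, 2)
```

Each output window starts exactly where its input window ends, so consecutive pairs overlap by all but one sample, which is what makes the sliding-window dataset much larger than the number of raw trajectories.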
\subsection{Models}
We propose an encoder-decoder architecture with attention mechanism composed of
a BiLSTM encoder layer with $64$ hidden units,
and an LSTM decoder layer with $64$ hidden units.
For Bayesian modeling of the epistemic uncertainty, we used MC dropout \cite{Gal2016} with $M=100$ samples,
and dropout rate applied to recurrent connection $p = 0.05$ in both encoder and decoder layers.
The model was trained by applying AdamW optimizer~\cite{Loshchilov19}
with a learning rate of $0.0001$ and weight decay of $0.0001$ to minimize the mean absolute error loss function,
an early-stopping rule
with $3000$-epoch patience,
and a mini-batch size of $200$ samples.
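The epistemic statistics obtained from MC dropout reduce to a sample mean and per-step covariance over the $M$ stochastic forward passes. Below is a sketch of that aggregation step only, with random numbers standing in for the network outputs:

```python
import numpy as np

def mc_dropout_stats(samples):
    """samples: (M, h, 2) positions from M forward passes with dropout
    kept active at test time. Returns the predictive mean (h, 2) and the
    per-step epistemic covariance (h, 2, 2) across the M passes."""
    mean = samples.mean(axis=0)
    dev = samples - mean
    cov = np.einsum('mhi,mhj->hij', dev, dev) / (samples.shape[0] - 1)
    return mean, cov

rng = np.random.default_rng(0)
M, h = 100, 12
samples = rng.normal(size=(M, h, 2))       # stand-in for network outputs
mean, cov = mc_dropout_stats(samples)
print(mean.shape, cov.shape)               # (12, 2) (12, 2, 2)
```

In the full model this epistemic covariance is combined with the aleatoric term predicted by the network head to form the total covariance used in the uncertainty ellipses.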
In \cite{Ristic08} it is shown how
better performance in terms of prediction accuracy can be achieved
by labeling the input data based on a high-level pattern information $\bm{\psi}$,
such as the vessel's intended destination.
In this regard, we compare the following two different prediction methods
based on labeled or unlabeled trajectories, where
in the \textit{labeled} case the neural model is trained to exploit also $\bm{\psi}$.
\begin{figure*}
\centering%
\subfloat[][]{%
\includegraphics[trim=5 10 0 10,clip,width=.95\columnwidth]{fig/complete_test_set/compare_ape_aes_letter.pdf}%
\label{fig:compare_dist_perf}
}%
\hfil %
\subfloat[][]{%
\includegraphics[trim=5 10 0 10,clip,width=.95\columnwidth]{fig/complete_test_set/compare_cov_det_aes_letter.pdf}%
\label{fig:compare_cov_perf}
}%
\caption{APE~\protect\subref{fig:compare_dist_perf} and predicted total variance~\protect\subref{fig:compare_cov_perf}. Both are computed at the $h$-th prediction sample (i.e., prediction horizon 3 hours) and are expressed as a function of the vessel distance from a fixed point, located in the upper left corner of Fig.~\ref{fig:compare_dist_data}. Panel~\protect\subref{fig:compare_cov_perf} shows the square root of the determinant of the covariance matrix produced by the network, which is also known as generalized variance and is proportional to the area of the prediction uncertainty ellipse.}%
\label{fig:performance_comparison}%
\end{figure*}
\begin{itemize}
\item \textit{Unlabeled (U)}: we train the predictive model and perform prediction using unlabeled
data, i.e.
using only
low-level context representation encoded from
a sequence of
past observations,
without any
high-level
information about the motion pattern.
\item \textit{Labeled (L)}: we train the predictive model and perform prediction using labeled
data, i.e.
using
low-level context representation encoded from
a sequence of
past observations,
as well as
additional inputs $\bm{\psi}$
about high-level intention behavior of the vessel.
This model includes the intention regularization mechanism with dropout probability $\gamma=0.3$, shown to be the highest-performing model in Appendix~\ref{appendix}.
\end{itemize}
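One simple way to implement the labeled variant is to one-hot encode the pattern descriptor $\bm{\psi}$ and append it to every input time step; the conditioning scheme below is an illustrative choice, not necessarily the exact wiring used inside the network:

```python
import numpy as np

def encode_intention(pattern_idx, n_patterns):
    """1-of-P one-hot encoding of the high-level pattern descriptor."""
    psi = np.zeros(n_patterns)
    psi[pattern_idx] = 1.0
    return psi

def label_inputs(X, pattern_idx, n_patterns=2):
    """Append the (repeated) intention vector to each time step of the
    (ell, d) input sequence, so the encoder sees it at every step."""
    psi = encode_intention(pattern_idx, n_patterns)
    return np.concatenate([X, np.tile(psi, (len(X), 1))], axis=1)

X = np.random.default_rng(0).normal(size=(12, 2))   # ell=12 past positions
Xl = label_inputs(X, pattern_idx=1)
print(Xl.shape)                                      # (12, 4)
```

The unlabeled model simply skips this concatenation, which is the only input-side difference between the two variants compared below.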
\subsection{Results}
In this section we demonstrate the effectiveness of the proposed
encoder-decoder \textit{VarLSTM}
in learning trajectory predictions with quantified uncertainty for the maritime domain using labeled and unlabeled data.
From the complete dataset in Fig.~\ref{fig:training_dataset}, we isolated a set of trajectories to be used as test set. The selected test trajectories are illustrated in Fig.~\ref{fig:compare_dist_data}, with a color that changes with the distance traveled from a fixed origin point, located in the upper left corner of the figure.
Figure~\ref{fig:compare_dist_perf} shows the Average Prediction Error (APE) computed as the average Euclidean distance between the predicted position and the ground truth at the $h$-th prediction sample (i.e., prediction horizon 3 hours)
as a function of the vessel's
traveled distance
from the fixed origin point.
This representation allows mapping all prediction errors on a common axis and identifying
regions where the prediction error is high.
Figure~\ref{fig:compare_dist_perf} shows that
the two models achieve comparable prediction errors,
apart from
a specific waypoint area $WP$ (distance between $80$ and $100$ nmi), highlighted in Fig.~\ref{fig:compare_dist_data},
where the APE obtained with the unlabeled model is much higher than that of the labeled model.
This is a crossroad area where vessels can take three different paths towards the same destination,
which makes it challenging for the prediction system to anticipate which direction the vessel will follow after the crossroad.
The plot shows that high-level information is key to improving prediction performance in deviation areas (e.g., area $WP$ in Fig.~\ref{fig:compare_dist_data}).
The major difference between unlabeled and labeled predictions is that unlabeled models use only past observed positions to generate future states, while labeled models additionally use high-level pattern information, i.e., the ship's intended destination.
In the area $WP$ (Fig.~\ref{fig:compare_dist_data}),
the unavailability of any prior information
puts the unlabeled model at a disadvantage
for deciding which future path to be followed by the vessel after a crossroad.
Instead, the labeled model has more information available, and can generate
more realistic future trajectories by exploiting the information related to the pattern descriptor.
In Fig.~\ref{fig:compare_cov_perf} we show the performance in terms
of predictive uncertainty estimation by using the notion
of \textit{generalized variance}, defined as the determinant of the covariance matrix $\overline{\bm\Sigma}_{k}$ in \eqref{eq:var_tot} produced by the unlabeled and labeled models; more precisely, in Fig.~\ref{fig:compare_cov_perf}, the square root of the determinant of the prediction covariance matrix (averaged over all the trajectories) is plotted, which is proportional to the area of the uncertainty ellipse.
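As a concrete example of the metric plotted in Fig.~\ref{fig:compare_cov_perf} (the covariance values here are made up for illustration), recall that for the confidence ellipse $\{x : x^\top \Sigma^{-1} x \le c\}$ the area is $\pi c \sqrt{\det \Sigma}$, so the square root of the determinant directly tracks the ellipse size:

```python
import numpy as np

def generalized_std(cov):
    """sqrt(det(Sigma)) for a 2x2 prediction covariance; the area of the
    confidence ellipse {x: x^T Sigma^{-1} x <= c} equals pi*c*sqrt(det)."""
    return np.sqrt(np.linalg.det(cov))

cov = np.array([[4.0, 0.0],
                [0.0, 9.0]])       # std of 2 and 3 along the two axes
print(generalized_std(cov))        # -> 6.0
```

A larger value thus means a wider spread of the MC-dropout predictions, which is exactly what happens for the unlabeled model near the crossroad area.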
Figure~\ref{fig:predictions} shows the results obtained by the proposed model on a specific trajectory, which intersects the area $WP$ (Fig.~\ref{fig:compare_dist_data})
at three time steps $k_i$, $i=1,2,3$.
We perform the predictions considering the sliding window approach, so the input sequence is fed in each neural model to predict the next output sequence.
We can see that the predictions using the
unlabeled model
shown in Figs.~\ref{fig:predictions_unlabeled_1}--\ref{fig:predictions_unlabeled_3}
detect the correct direction to be followed only after the critical point of a crossroad at time step $k_3$,
while, conversely, the ones using the
labeled model shown in Figs.~\ref{fig:predictions_labeled_1}--\ref{fig:predictions_labeled_3} are able to
anticipate the direction to be followed by the vessel after the crossroad many steps ahead.
Another important point regarding the predictive uncertainty modeling is that the uncertainty quantified by the unlabeled model is larger and covers multiple possible directions when the path to be taken is still undetermined (e.g., in correspondence to a crossroad, as in Fig.~\ref{fig:predictions_unlabeled_2}). The dataset used to train the network (represented in Fig.~\ref{fig:training_dataset}) contains multiple paths that can be taken by a vessel after the waypoint; for this reason, the unlabeled model computes a predicted trajectory that is the average among multiple possible paths, even if this unimodal prediction is not necessarily a valid future behaviour, as is especially apparent in Fig.~\ref{fig:predictions_unlabeled_2}.
In contrast, the predictive uncertainty diminishes when the decision of the predictor based on the training data is less complicated due to the direct path of the vessel as shown in Fig.~\ref{fig:predictions_unlabeled_3}.
In the labeled case, due to the additional high-level information available, the predictions are shown to be more accurate, especially when the vessel is before the crossroad, as shown in Fig.~\ref{fig:predictions_labeled_1} and Fig.~\ref{fig:predictions_labeled_2}.
The labeled model can compute more accurate predictions, as some of the possible future paths (specifically, after the waypoint) can be excluded, precisely on the basis of the additional high-level information. This can be easily noted with a comparative inspection of Figures~\ref{fig:predictions_unlabeled_2} and~\ref{fig:predictions_labeled_2}, where the uncertainty of the unlabeled model, contrary to the labeled one, has to account also for the additional South-East path. It should be noted that the intention information is most useful when it can \textit{reduce} the multimodal nature of the task to a unimodal prediction task. For instance, in Fig.~\ref{fig:predictions_labeled_2}, the high-level information allows excluding the South-East path, but cannot help further in deciding \textit{exactly} which one of the two South-West paths the vessel is going to follow. Still, the labeled model achieves better prediction performance, as the additional information allows going from a higher modality prediction task to a lower modality one.
\section{Introduction}
\label{sec:intro}
Trajectory prediction -- for collision avoidance, anomaly detection and risk assessment -- is a crucial functional component of intelligent maritime surveillance systems and next-generation autonomous ships.
Maritime surveillance systems are increasingly relying on the huge amount of data
made available by terrestrial and satellite networks of Automatic Identification System (AIS) receivers.
The availability of maritime big data enables the automatic extraction of spatial-temporal mobility patterns that can be processed
by modern deep learning networks
to enhance trajectory forecasting.
Today, most commercial systems rely primarily on
trajectory prediction methods based on the Nearly Constant Velocity (NCV) model, since this linear model is simple, fast, and practical to operate
for short-term predictions of straight-line trajectories~\cite{survey19}.
However, the NCV model tends to overestimate the actual uncertainty as the prediction horizon increases.
A more recent linear model, based on the Ornstein-Uhlenbeck (OU) stochastic process,
has become recognized as a reliable means to improve long-term predictions~\cite{Millefiori2016}, with a particular focus on reducing uncertainty by estimating the current navigation settings.
\begin{figure}[t!]
\centering%
\includegraphics[trim=80 10 80 10,clip,width=\columnwidth]{fig/trajectories_fusion2021.pdf}%
\caption{Vessel trajectory prediction with representation of the prediction uncertainty depicted as confidence ellipses (at different confidence levels). Contains data from the Danish Maritime Authority that is used in accordance with the conditions for the use of Danish public data~\cite{DMA}.}
\label{fig:pred_unc}
\end{figure}
Although most maritime traffic is very regular,
so that model-based methods can be easily applied,
such models tend to lack the desired prediction accuracy
when the ship exhibits maneuvering behavior.
In such cases, nonlinear and data-driven methods including
adaptive kernel density estimation~\cite{Ristic08},
nonlinear filtering~\cite{Perera2012,Mazzarella15},
nearest-neighbor search methods~\cite{Hexeberg17,Dalsnes2018},
and machine learning techniques~\cite{Zissis2017},
may provide
more suitable solutions.
Furthermore, the latest advances in deep learning-based predictive models and the combined availability of large volumes of AIS data
allow for enhanced vessel trajectory prediction and maritime surveillance.
Recent approaches based on deep learning
are documented in~\cite{Nguyen2018,Nguyen2018b,Yu2020,Zhou2020,Murray2020,Murray2021,Forti2020,Capobianco2021}.
However, standard deep learning models
cannot provide
predictive uncertainty,
hence no quantification of the confidence with which the prediction outputs can be trusted is available.
Accompanying deep learning results with their associated confidence levels has recently become increasingly important, and significant research attention has naturally been directed toward this goal. Data-driven methods for uncertainty quantification have recently been proposed to estimate model and data uncertainty based on ensemble~\cite{Laks2017}
or Bayesian~\cite{Gal2016,Kendall2017,Bhatta2018,Xiao2019,KuleshovFE18}
learning.
Bayesian deep learning methods apply Bayesian modeling and
variational inference to neural networks, leading to Bayesian neural networks (BNNs) that treat the network parameters as random variables
instead of deterministic unknowns
to represent the model uncertainty on its predictions.
BNNs can capture the uncertainty within the learning model, while maintaining the flexibility of deep neural networks, and hence they are particularly appealing
for safety-critical applications (e.g., autonomous transportation systems, robotics, medical and space systems)
where uncertainty estimates can be propagated in an end-to-end neural architecture
to enable improved decision making.
In particular,~\cite{Gal2016} shows that \emph{dropout}, a well-known regularization technique to prevent neural network over-fitting~\cite{Srivastava14},
can be used as a variational Bayesian approximation of the predictive uncertainty
in existing deep learning models trained with standard dropout,
by performing Monte Carlo (MC) sampling with dropout at test time,
i.e., by sampling the network with random omission of
units representing MC samples obtained from the posterior distribution
over models.
In~\cite{Kendall2017} a Bayesian deep learning approach for the combined quantification of both data and model uncertainty, extracted from BNNs, is proposed for computer vision applications.
In addition,
a neural network architecture based on Long Short-Term Memory (LSTM) with uncertainty modeling is proposed in~\cite{Jung2020}
to incorporate non-Markovian dynamic models in the prediction step of a standard Kalman filter for target tracking.
In this paper, we present a vessel trajectory learning and prediction framework to generate future vessel trajectory samples
given the sequence of the latest AIS observations.
The proposed method is built upon the LSTM encoder-decoder architecture presented in~\cite{Forti2020,Capobianco2021}, which has emerged as an effective and scalable model for sequence-to-sequence learning of maritime trajectories.
We extend~\cite{Forti2020,Capobianco2021} by providing a practical quantification of the predictive uncertainty via Bayesian learning tools based on MC dropout~\cite{Gal2016}.
Preliminary work on trajectory prediction with uncertainty quantification applied to the maritime domain
was presented in \cite{fusion2021}.
In this work, we extend \cite{fusion2021} by
providing a comprehensive description of the prediction uncertainty modeling, a detailed introduction of the variational LSTM model used to implement our encoder-decoder architecture,
and by proposing an alternative decoder scheme with a novel regularization method for the high-level information about the vessel's intention, which may be available in the maritime surveillance data.
Moreover, we present novel results on the estimated predicted variance, and
a performance comparison demonstrating the gain in state prediction accuracy with respect to \cite{fusion2021}.
Fig.~\ref{fig:pred_unc}
shows an example of how the proposed model is able to predict future trajectories and prediction uncertainty given input sequences from past data.
To summarize, the main contributions of this work are:
\begin{enumerate}
\item A model for the aleatoric and epistemic uncertainty of trajectory predictions
provided by encoder-decoder RNNs
using Bayesian deep learning tools;
\item A novel regularization method for the decoding phase to prevent complex co-adaptations between the encoded data and the high-level information about the vessel’s intention;
\item Experimental results on real-world AIS data showing the effectiveness of the proposed encoder-decoder architecture with uncertainty modeling in learning trajectory predictions with well-quantified uncertainty estimates using labeled and unlabeled data.
\end{enumerate}
The paper is organized as follows. In Section~\ref{sec:problem}, we introduce the vessel trajectory prediction problem.
In Section~\ref{sec:encdec},
we present the proposed attention-based encoder-decoder framework for trajectory prediction with uncertainty.
Experimental results on a real-world AIS dataset are presented and discussed in Section~\ref{sec:experiments}. Finally, we conclude the paper in Section~\ref{sec:discussion}.
\section{Problem formulation}
\label{sec:problem}
We formulate the vessel trajectory prediction problem
as a supervised learning process
by following a sequence-to-sequence deep learning approach
to directly generate
the distribution of an
output sequence of future states
given an input sequence of past AIS observations.
From a probabilistic perspective, the objective is to
determine the following predictive distribution
\begin{equation}\label{eq:predictive_distr}
p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \mathcal{D} ),
\end{equation}
which represents
the probability of
an output sequence
$\mathbf{y}_{1:h}^* \triangleq
\{ \mathbf{s}_{k} \}_{k=1}^{h} \in \mathbb{R}^{d \times h}$
of
$h \geq 1$ future
$d$-dimensional states
of the vessel
given a new input sequence
$\mathbf{x}_{1:\ell}^* \triangleq
\{ \mathbf{s}_{k} \}_{k=1}^{\ell} \in \mathbb{R}^{d \times \ell}$
of $\ell \geq 1$ observed states,
and the available training data
$\mathcal{D}=\{\mathcal{X},\mathcal{Y},\Psi\}$
containing
the two sets
$\mathcal{X} = \{ \mathbf{x}_{1:\ell}^i \}_{i=1}^N$
and
$\mathcal{Y} = \{ \mathbf{y}_{1:h}^i \}_{i=1}^N$
of training input and, respectively, output sequences.
Note that each sequence element $\mathbf{s}^i_k \triangleq \mathbf{s}^i(t_k) \in \mathbb{R}^d$ represents the vessel's position
in latitude and longitude coordinates,
taken from the available time-stamped AIS messages.
In many practical cases, it is also possible to exploit
some additional information available from AIS data
such as the vessel destination.
The destination port is an example of voyage related information provided by the AIS that may be relevant to anticipate a vessel trajectory.
We denote this (possibly available) input feature,
which may be salient to predict the $i$-th output sequence,
by
$\Psi = \{ \bm{\psi}^i \}_{i=1}^{N}$,
where $\bm{\psi}^i \in \{0,1\}^v$ is the $v$-way categorical feature of trajectory $i$, representing the class label of the specific motion pattern encoded into a one-hot vector of size $v$~\cite{Capobianco2021}.
For example,
$\bm{\psi}^i = [0, 0, 1]$ would mean that this particular vessel followed a trajectory that has been labeled $\#3$ based on three possible destinations.
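As a concrete illustration, the one-hot encoding of the destination label can be sketched in a few lines (an illustrative example; the function name is ours):

```python
import numpy as np

def one_hot(label_index, num_classes):
    """Encode a class label as a 1-of-v binary vector."""
    psi = np.zeros(num_classes, dtype=int)
    psi[label_index] = 1
    return psi

# vessel bound for destination #3 out of v = 3 possible ports
psi = one_hot(2, 3)  # -> array([0, 0, 1])
```

Any $v$-way categorical feature available from AIS voyage-related data can be encoded in this way before being fed to the network.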
\subsection{Modeling prediction uncertainty}
Uncertainty on the prediction estimates can be captured with
recently developed Bayesian deep learning tools,
which offer a practical framework for representing uncertainty in deep learning models \cite{Kendall2017,Bhatta2018,Xiao2019}.
In the context of supervised learning,
two forms of uncertainty, i.e.,
\textit{aleatoric} and \textit{epistemic} uncertainty, are considered,
where epistemic
is the reducible and aleatoric the irreducible part of uncertainty \cite{Kendall2017}.
Aleatoric (or data) uncertainty captures noise inherent in the observations,
whereas epistemic (or model) uncertainty accounts for uncertainty in the neural network model parameters \cite{Kendall2017}.
Epistemic uncertainty is a particular concern for neural networks
given their many free parameters, and can be large for
data that is significantly different from the training set.
Thus, for any real-world application of neural network uncertainty
estimation, it is critical that it be taken into account.
We follow a combined aleatoric-epistemic model \cite{Kendall2017} to capture both aleatoric and epistemic uncertainty in our prediction model.
Following a Bayesian framework \cite{Neal96} with prior distributions placed over the parameters of the NN,
epistemic uncertainty can be captured by learning a distribution of NN models $p(F|\mathcal{D})$ representing the posterior distribution over the space of functions $\mathbf{y}_{1:h} = F(\mathbf{x}_{1:\ell})$ that are \emph{likely} to have generated our dataset $\mathcal{D}$.
The predictive probability \eqref{eq:predictive_distr} is then obtained by marginalizing over the implied posterior distribution of models, i.e.,
\begin{equation}
\label{eq:predictive_distr2}
p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \mathcal{D} ) =
\int p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, F )
p(F | \mathcal{D} ) d F .
\end{equation}
In our case, the functions $F$ are RNN encoder-decoder models~\cite{Capobianco2021}
assumed to be described by a finite set of parameters $\bm{\theta}$, such that
\begin{equation}
\label{eq:predictive_distr3}
p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \mathcal{D} ) =
\int p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \bm{\theta} )
p(\bm{\theta} | \mathcal{D}) d \bm{\theta} .
\end{equation}
Since
$p(\bm{\theta} | \mathcal{D})$ cannot be obtained analytically,
it can be approximated using variational inference with
an approximating distribution $q(\bm{\theta})$, which allows for efficient sampling.
This results in the approximate predictive distribution
\begin{equation}
\label{eq:predictive_distr4}
q(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \mathcal{D} ) =
\int p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \bm{\theta} )
q(\bm{\theta}) d \bm{\theta},
\end{equation}
which can be kept as close as possible to the original distribution
by minimizing the Kullback-Leibler (KL) divergence
between $q(\bm{\theta})$ and the true posterior $p(\bm{\theta} | \mathcal{D})$ during the training stage.
Let us consider a generic NN architecture with $L$ transformation layers,
and denote by $\mathbf{W}_l$ the weight matrix of size $K_l \times K_{l-1}$ for each layer $l=1,\dots,L$.
Then, following~\cite{Gal2016_nips}, by setting the set of weight matrices\footnote{All bias terms are omitted to simplify the notation.}
of the NN architecture
as the set of parameters, i.e.,
$\bm{\theta} = \{\mathbf{W}_l\}_{l=1}^L$,
and
by using a Bernoulli approximating variational distribution, it is possible to relate variational inference in Bayesian NNs to the dropout mechanism.
In particular, the approximating distribution
can be defined
for each row $i$ of $\mathbf{W}_l$
as
\begin{equation}
\label{eq:bernoulli}
q( \mathbf{w}_i ) = \gamma \, \mathcal{N}( \mathbf{w}_i; \mathbf{0}, \rho^2 \mathbf{I} )
+ ( 1-\gamma )
\mathcal{N}( \mathbf{w}_i; \mathbf{m}_i, \rho^2 \mathbf{I} ),
\end{equation}
where $\mathbf{m}_i$ is the vector of variational parameters,
$\rho$ the standard deviation of the Gaussian prior distribution placed over $\mathbf{w}_i$,
and $\gamma$ the Bernoulli probability used for dropout.
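To make the mixture in~\eqref{eq:bernoulli} concrete, drawing one weight row from $q(\mathbf{w}_i)$ can be sketched as follows (a minimal numerical illustration; the parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weight_row(m_i, gamma, rho, rng):
    """Draw w_i from the Gaussian mixture of Eq. (4): with probability
    gamma the row is centred at zero (the unit is dropped), otherwise
    it is centred at the variational mean m_i."""
    centre = np.zeros_like(m_i) if rng.random() < gamma else m_i
    return centre + rho * rng.standard_normal(m_i.shape)

m = np.ones(4)                     # illustrative variational mean
rows = np.stack([sample_weight_row(m, 0.5, 1e-3, rng) for _ in range(2000)])
# about half of the rows cluster near 0, the other half near m
```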
Then,
evaluating the model output
$F_{\widehat {\bm{\theta}}}$,
where the sample $\widehat {\bm{\theta}} \sim q({\bm{\theta}})$,
corresponds to performing dropout
by randomly masking rows of the weight matrices $\mathbf{W}_l$ during the forward pass.
Predictions can then be obtained by performing
MC dropout \cite{Gal2016} which consists of
executing dropout at test time and averaging results for all $M$ samples,
i.e.,
by approximating \eqref{eq:predictive_distr4} as
\begin{equation}
\label{eq:predictive_distr5}
q(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \mathcal{D} ) \approx
\frac{1}{M} \sum_{j=1}^M
p(\mathbf{y}_{1:h}^* | \mathbf{x}_{1:\ell}^*, \widehat {\bm{\theta}}_j ),
\end{equation}
with $\widehat {\bm{\theta}}_j \sim q({\bm{\theta}})$.
Note that in the case of
MC dropout for RNNs,
the same parameter realizations $\widehat {\bm{\theta}}_j$
are used for each time step of the input sequence \cite{Gal2016_nips}.
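The MC dropout approximation in~\eqref{eq:predictive_distr5} can be illustrated with a toy linear model (the model, weights, and dropout rate below are assumptions for illustration only, not the encoder-decoder architecture of this paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_forward(x, W, gamma, rng):
    """One forward pass with dropout kept active on the input units;
    the mask is resampled at every pass, as in MC dropout."""
    mask = rng.random(x.shape) >= gamma        # keep each unit w.p. 1 - gamma
    return W @ (mask * x) / (1.0 - gamma)      # inverted-dropout rescaling

def mc_dropout_predict(x, W, gamma, M, rng):
    """Average M stochastic samples; their spread is a crude estimate
    of the epistemic uncertainty."""
    samples = np.stack([stochastic_forward(x, W, gamma, rng) for _ in range(M)])
    return samples.mean(axis=0), samples.var(axis=0)

W = np.array([[1.0, 2.0]])
mean, var = mc_dropout_predict(np.array([1.0, 1.0]), W, 0.5, 1000, rng)
# mean approaches W @ x = [3.0] as M grows, while var > 0 reflects model uncertainty
```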
As shown in \cite{Kendall2017}, aleatoric uncertainty can be modeled together with epistemic uncertainty
by estimating the sufficient statistics of a given distribution describing the measurement noise of data.
By fixing a Gaussian likelihood to model aleatoric uncertainty, the predictive distribution of an
output sequence
$\mathbf{y}_{1:h}$ for a given input sequence $\mathbf{x}_{1:\ell}$
can be approximated by a sequence of multivariate Gaussian distributions, where each output is Gaussian $\mathcal{N}(\mathbf{y}_{k};\hat{\mathbf{y}}_{k},\bm\Sigma_k)$ with predictive mean $\hat{\mathbf{y}}_{k}$ and covariance $\bm\Sigma_k$, $k=1,\dots,h$.
\subsection{Aleatoric and epistemic uncertainty}
The supervised learning task
\eqref{eq:predictive_distr}
can be recast as a sequence regression problem~\cite{Bishop96},
which aims at
training a neural network model
$F$
to predict, given an input sequence $\mathbf{x}_{1:\ell}$ of length $\ell$,
the
predictive mean
$\hat{\mathbf{y}}_{1:h}$
and the predictive covariance $\bm\Sigma_{1:h}$, i.e.
\begin{equation}\label{eq:pred}
[ \hat{\mathbf{y}}_{1:h},\bm\Sigma_{1:h} ] = F_{\widehat {\bm{\theta}}} ( \mathbf{x}_{1:\ell} )
\end{equation}
where $F$ is a BNN parameterized by model weights $\widehat {\bm{\theta}}$ drawn from the approximate dropout variational distribution $\widehat {\bm{\theta}} \sim q(\bm{\theta})$.
Note that a single network can be used to
predict both
$\hat{\mathbf{y}}_{1:h}$ and $\bm\Sigma_{1:h}$.
By setting the distribution modeling aleatoric uncertainty as Gaussian, we induce a minimization objective of
the loss function $\mathcal{L}_{1:h} \triangleq \mathcal{L}( \mathbf{y}_{1:h} )$,
here defined via the negative log-likelihood (up to an additive constant)
\begin{eqnarray}\label{eq:neg_log_like}
\mathcal{L}_{1:h}
= \sum_{k=1}^h \frac{1}{2} (\mathbf{y}_{k} - \hat{\mathbf{y}}_{k})^T \bm\Sigma_{k}^{-1} (\mathbf{y}_{k} - \hat{\mathbf{y}}_{k})
+ \frac{1}{2} \operatorname{ln} |\bm\Sigma_{k}|
\end{eqnarray} \normalsize
which enables
simultaneous training of both
$\hat{\mathbf{y}}_{k}$ and $\bm\Sigma_{k}$
in an
end-to-end optimization process of the entire
vessel prediction and uncertainty estimation framework.
Thus, for each state of the predicted sequence, the network outputs a Gaussian distribution parameterized by $\hat{\mathbf{y}}_{k}$ and $\bm\Sigma_{k}$
such that the negative log-likelihood in \eqref{eq:neg_log_like} of the ground truth vessel positions $\mathbf{y}_{k}, k=1,\dots,h$ over all training data is as small as possible.
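A minimal sketch of the per-sequence loss~\eqref{eq:neg_log_like} is given below (constant terms are dropped, as in the text; the implementation is illustrative, not the training code used in the experiments):

```python
import numpy as np

def gaussian_nll(y, y_hat, Sigma):
    """Negative log-likelihood of Eq. (7): sum over the h steps of
    0.5 * (y_k - y_hat_k)^T Sigma_k^{-1} (y_k - y_hat_k) + 0.5 * ln|Sigma_k|."""
    loss = 0.0
    for k in range(len(y)):
        r = y[k] - y_hat[k]
        loss += 0.5 * r @ np.linalg.solve(Sigma[k], r)
        loss += 0.5 * np.log(np.linalg.det(Sigma[k]))
    return loss

y = [np.array([1.0, 2.0])]
y_hat = [np.array([1.0, 2.0])]
Sigma = [np.eye(2)]
loss = gaussian_nll(y, y_hat, Sigma)  # perfect prediction, unit covariance -> 0.0
```

Minimizing this quantity jointly trains $\hat{\mathbf{y}}_{k}$ and $\bm\Sigma_{k}$: predicting a larger covariance attenuates the residual term at the cost of the log-determinant penalty.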
The epistemic uncertainty can be estimated through MC dropout \cite{Gal2016,Kendall2017}
from $M$ samples
$\{\hat{\mathbf{y}}_{1:h}^j,\bm\Sigma_{1:h}^j\}_{j=1}^M$,
and the total predictive uncertainty for the output $\mathbf{y}_{k}$
can then be approximated as follows:
\begin{eqnarray}\label{eq:var_tot}
&&\hspace{-.7cm}
\overline{\bm\Sigma}_{k}
= \frac{1}{M} \sum_{j=1}^M \hat{\mathbf{y}}_{k}^j \hat{\mathbf{y}}_{k}^{j^{T}}
- \Bigg[
\frac{1}{M} \sum_{j=1}^M \hat{\mathbf{y}}_{k}^j
\frac{1}{M} \sum_{j=1}^M \hat{\mathbf{y}}_{k}^{j^{T}}
\Bigg]
\nonumber
\\
&&
+\frac{1}{M}
\sum_{j=1}^M \bm\Sigma_{k}^j,
\end{eqnarray}
where the first two terms of the sum correspond to the epistemic uncertainty,
and the third corresponds to the aleatoric
uncertainty.
Finally, we average the uncertainty across time steps $k=1,\dots,h$
to obtain the uncertainty estimate of the complete output sequence.
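For concreteness, the combination of the $M$ MC dropout samples into the total covariance~\eqref{eq:var_tot} can be sketched as follows (an illustrative implementation for a single time step $k$):

```python
import numpy as np

def total_uncertainty(y_samples, Sigma_samples):
    """Eq. (8) for one time step: the epistemic part is the second moment
    of the M predictive means minus the outer product of their average;
    the aleatoric part is the average of the M predicted covariances."""
    y = np.asarray(y_samples)                                  # shape (M, d)
    y_bar = y.mean(axis=0)
    epistemic = (y[:, :, None] * y[:, None, :]).mean(axis=0) - np.outer(y_bar, y_bar)
    aleatoric = np.asarray(Sigma_samples).mean(axis=0)
    return epistemic + aleatoric

# identical predictive means -> the epistemic term vanishes
Sig = total_uncertainty([np.ones(2)] * 4, [np.eye(2)] * 4)     # -> identity
```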
\section{Learning to predict under uncertainty}
\label{sec:encdec}
We extend the encoder-decoder architecture
based on recurrent networks with attention
proposed for
vessel trajectory prediction in~\cite{Forti2020,Capobianco2021}
by providing a practical quantification of the total predictive uncertainty (i.e., comprising both the epistemic model uncertainty and the aleatoric data uncertainty) following a Bayesian deep learning approach.
The encoder-decoder architecture consists of a bidirectional encoder RNN that reads a sequence of vessel positions one state at a time, encoding the input information into a sequence of hidden states,
and a decoder RNN that generates an output sequence
of future states step-by-step conditioned on a \emph{context} representation of the input sequence
generated from the encoder's hidden states through an attention aggregation layer.
Moreover, we show that, if available, additional information on the intention of the vessel can be exploited to generate the decoder's outputs.
The RNN encoder-decoder architecture consists of an encoder model with parameters $\bm{\theta}_{E}$ and a decoder model with parameters $\bm{\theta}_D$. Each model is implemented as a fully trainable RNN containing different weight matrices. We define the Bernoulli variational distribution $q(\bm{\theta})$ over the union of all the weight matrices of our architecture, i.e., $\bm{\theta} = \{\bm{\theta}_E, \bm{\theta}_D\}$. In the next sections we show how the full model learns to predict future trajectories under uncertainty.
\subsection{Variational LSTM}
LSTM networks are a special kind of RNN capable of capturing long temporal patterns in sequential data, and they work well on a large variety of problems~\cite{Capobianco2021,Forti2020,Graves2013}.
A Bayesian view of RNNs has been proposed in~\cite{Gal2016_nips}, where the authors interpret LSTMs as probabilistic models whose network weights are random variables trained through suitable likelihood functions.
This has been shown to be equivalent to implementing a variant of dropout for RNN models that approximates the posterior distribution over the weights with a mixture of Gaussians, leading to a tractable optimization objective.
Following~\cite{Gal2016_nips}, we extend the deterministic LSTM architecture by implementing dropout
on the input, output, and recurrent connections,
with the same network units dropped at each time step.
In this paper, the LSTM architecture maps the input sequence $\mathbf{x}_{1}, \dots, \mathbf{x}_{\ell}$ into a sequence of cell activation and hidden states by applying the \textit{tied-weights} LSTM parameterization
in \cite{Gal2016_nips}:
\begin{equation}\label{eq:lstm_mat}
\begin{bmatrix}
\tilde{\mathbf{i}}_t\\
\tilde{\mathbf{f}}_t\\
\tilde{\mathbf{o}}_t\\
\tilde{\mathbf{g}}_t
\end{bmatrix} = \underbrace{\begin{bmatrix}
\mathbf{W}_i & \mathbf{U}_i \\
\mathbf{W}_f & \mathbf{U}_f \\
\mathbf{W}_o & \mathbf{U}_o \\
\mathbf{W}_g & \mathbf{U}_g
\end{bmatrix}}_{\mathbf{W}_R} \begin{bmatrix}
\mathbf{x}_t \\
\mathbf{h}_{t-1}
\end{bmatrix} = \begin{bmatrix}
\mathbf{W}_i\mathbf{x}_t + \mathbf{U}_i\mathbf{h}_{t-1} \\
\mathbf{W}_f\mathbf{x}_t + \mathbf{U}_f\mathbf{h}_{t-1} \\
\mathbf{W}_o\mathbf{x}_t + \mathbf{U}_o\mathbf{h}_{t-1} \\
\mathbf{W}_g\mathbf{x}_t + \mathbf{U}_g\mathbf{h}_{t-1}
\end{bmatrix},
\end{equation}
where $\mathbf{W}_{i}, \mathbf{W}_f,\mathbf{W}_o, \mathbf{W}_g$ are all $p$-by-$d$ weight matrices, and $\mathbf{U}_{i}, \mathbf{U}_f,\mathbf{U}_o, \mathbf{U}_g$ have dimension $p$ by $p$, where $d$ is the number of input features and $p$ is the dimension of the (unidirectional) hidden state, i.e., $\mathbf{h}_{t-1} \in \mathbb{R}^p$. The \emph{input}, \emph{forget}, \emph{output}, and \emph{input modulation} gates are
$\mathbf{i}_t$, $\mathbf{f}_t$, $\mathbf{o}_t$ and $\mathbf{g}_t$, respectively, and can be computed as
\begin{IEEEeqnarray}{rClCrCl}\label{eq:ifog}
\mathbf{i}_t &=& \operatorname{sigm}(\tilde{\mathbf{i}}_t), &\qquad\qquad&
\mathbf{f}_t &=& \operatorname{sigm}(\tilde{\mathbf{f}}_t), \IEEEnonumber \\
\mathbf{o}_t &=& \operatorname{sigm}(\tilde{\mathbf{o}}_t), &\qquad\qquad&
\mathbf{g}_t &=& \tanh(\tilde{\mathbf{g}}_t).
\end{IEEEeqnarray}
Finally, the cell activation state and hidden state vectors, respectively $\mathbf{c}_t$ and $\mathbf{h}_t$, are:
\begin{eqnarray}\label{eq:lstm}
\mathbf{c}_t &=& \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t\odot\mathbf{g}_t \nonumber\\
\mathbf{h}_{t} &=& \mathbf{o}_t\odot \operatorname{tanh}(\mathbf{c}_{t}),
\end{eqnarray}
where $\odot$ denotes the Hadamard product. Note that the matrix $\mathbf{W}_R \in \mathbb{R}^{(4p)\times(d+p)}$ is a compact representation of all the weight matrices of the four LSTM gates.
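For concreteness, one step of the tied-weights parameterization \eqref{eq:lstm_mat}--\eqref{eq:lstm} can be sketched in NumPy as follows (a minimal illustration; the function and variable names are ours, and biases are omitted as in the text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(W_R, x_t, h_prev, c_prev):
    """One tied-weights LSTM step: W_R is the (4p)x(d+p) matrix that
    stacks [W_i U_i; W_f U_f; W_o U_o; W_g U_g]."""
    p = h_prev.shape[0]
    z = W_R @ np.concatenate([x_t, h_prev])   # [i~; f~; o~; g~], shape (4p,)
    i, f, o = (sigmoid(z[k * p:(k + 1) * p]) for k in range(3))
    g = np.tanh(z[3 * p:])                    # input modulation gate
    c = f * c_prev + i * g                    # cell state update
    h = o * np.tanh(c)                        # hidden state
    return h, c
```

A single matrix-vector product produces all four pre-activations at once, which is exactly why the compact $\mathbf{W}_R$ representation is convenient for the dropout variant below.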
In order to perform approximate variational inference over the weights,
we may write the dropout variant of the parameterization in \eqref{eq:lstm_mat} as
\begin{equation}\label{eq:lstm_variational}
\begin{bmatrix}
\tilde{\mathbf{i}}_t\\
\tilde{\mathbf{f}}_t\\
\tilde{\mathbf{o}}_t\\
\tilde{\mathbf{g}}_t
\end{bmatrix} =
\mathbf{W}_R
\begin{bmatrix}
\mathbf{x}_t \odot \mathbf{d}_x\\
\mathbf{h}_{t-1} \odot \mathbf{d}_h
\end{bmatrix}
\end{equation}
with $\mathbf{d}_x$, $\mathbf{d}_h$ random masks repeated at all time steps.
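The point of the variational variant \eqref{eq:lstm_variational} is that, unlike standard dropout, the masks are sampled once per sequence and then reused at every time step. A minimal sketch (names are ours):

```python
import numpy as np

def variational_masks(d, p, rate, rng):
    """Sample the masks d_x, d_h once per sequence; the same masks
    multiply x_t and h_{t-1} at every time step (Gal & Ghahramani)."""
    keep = 1.0 - rate
    d_x = rng.binomial(1, keep, size=d) / keep   # inverted-dropout scaling
    d_h = rng.binomial(1, keep, size=p) / keep
    return d_x, d_h

def masked_inputs(x_t, h_prev, d_x, d_h):
    """Input to W_R at one step: [x_t * d_x ; h_{t-1} * d_h]."""
    return np.concatenate([x_t * d_x, h_prev * d_h])
```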
In the following sections, we will see how to apply the above Variational LSTM (\textit{VarLSTM}) model to our trajectory prediction task under uncertainty, using an encoder-decoder RNN architecture.
\subsection{Encoder Network}
The encoder network is designed as a bidirectional RNN~\cite{Schuster97} to capture and analyze the temporal patterns of the vessel trajectory in the positive and negative time directions simultaneously. Following the encoder architecture used in~\cite{Capobianco2021}, we use two \textit{VarLSTM} networks to encode the input sequence, one for each time direction.
In our \textit{VarLSTM} encoder implementation, the dropout mask $\mathbf{d}_h$ is applied only to the recurrent connections, therefore it does not perturb the input vessel trajectory. In the end, the trainable parameters are $\bm{\theta}_E = \{\overrightarrow{\mathbf{W}}_R, \overleftarrow{\mathbf{W}}_R\}$, one weight matrix for each \textit{VarLSTM}.
The bidirectional \textit{VarLSTM} maps the input sequence
into two different temporal representations:
the forward hidden sequence by iterating the
forward layer from
$t = 1$ to $\ell$,
and the backward hidden sequence by iterating the backward layer from $t = \ell$ to $1$.
In this way, the encoder network
is able to learn long-term patterns in both temporal directions.
The output of the encoder is then formed as
a compact hidden state representation
obtained by
concatenating the forward and backward hidden states, i.e.,
$\overleftrightarrow{\mathbf{h}}_{t} = \overrightarrow{\mathbf{h}}_{t} \oplus \overleftarrow{\mathbf{h}}_{t}$,
yielding the output vectors of the encoder layer $\overleftrightarrow{\mathbf{h}}_1,\dots,\overleftrightarrow{\mathbf{h}}_\ell$
for an input sequence of length $\ell$.
Each element $\overleftrightarrow{\mathbf{h}}_t$ encodes bidirectional spatio-temporal information of the input sequence extracted from the
states of the vessel preceding and following the
$t$-th component of the sequence.
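The two directional passes and the concatenation can be sketched as follows, with a generic recurrent cell standing in for the \textit{VarLSTM} (a simplified sketch; names are ours):

```python
import numpy as np

def bidirectional_encode(xs, step_fwd, step_bwd, p):
    """Iterate a forward layer t = 1..ell and a backward layer
    t = ell..1, then concatenate: h_t = fwd_t + bwd_t in R^{2p}."""
    ell = len(xs)
    fwd, bwd = [None] * ell, [None] * ell
    h = np.zeros(p)
    for t in range(ell):               # forward direction
        h = step_fwd(xs[t], h)
        fwd[t] = h
    h = np.zeros(p)
    for t in reversed(range(ell)):     # backward direction
        h = step_bwd(xs[t], h)
        bwd[t] = h
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```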
\subsection{Attention-based Decoder Network with Uncertainty}
In the proposed architecture, the decoder RNN is trained to learn
the following conditional probability
\begin{equation}\label{eq:decoder2}
p(\mathbf{y}_{1}, \dots, \mathbf{y}_{h} | \mathbf{x}_{1}, \dots, \mathbf{x}_{\ell}) = \prod_{t=1}^{h}
p(\mathbf{y}_{t}|\{\mathbf{y}_{1}, \dots, \mathbf{y}_{t-1}\},\mathbf{z}_t, \bm{\psi}),
\end{equation}
where each factor in \eqref{eq:decoder2} can be modeled by an RNN $D$, with trainable parameters $\bm{\theta}_{D}$, of the form
\begin{equation}\label{eq:decoder3}
p(\mathbf{y}_{t}|\{\mathbf{y}_{1}, \dots, \mathbf{y}_{t-1}\},\mathbf{z}_t) = D(\hat{\mathbf{y}}_{t-1}, \mathbf{s}_{t-1}, \mathbf{z}_t;\bm{\theta}_{D}).
\end{equation}
The decoder~\eqref{eq:decoder3} generates the next kinematic state $\hat{\mathbf{y}}_{t}$
given the context vector $\mathbf{z}_t$,
the possibly available information on the high-level intention $\bm{\psi}$ of the vessel,
the decoder's hidden state $\mathbf{s}_{t-1}$,
and the previous predictions $\hat{\mathbf{y}}_{t-1}$,
which are fed back into the model in a recursive fashion
as additional inputs to predict further into the future.
In addition, \eqref{eq:decoder3} is initialized by setting
\begin{equation}\label{eq:init_map}
\mathbf{s}_{0} = g( \overrightarrow{\mathbf{h}_\ell}, \bm{\psi})
\end{equation}
to map the last encoder (forward) hidden state and the intention into the initial decoder hidden state $\mathbf{s}_{0}$.
Note that, similar to~\cite{Capobianco2021}, this work partially addresses the multimodal nature of the prediction task with the use of the vessel's intention (i.e., destination) $\bm{\psi}$. However, different from~\cite{Capobianco2021} and as an additional measure to avoid overfitting, here the intention information is used only to initialize the decoder through \eqref{eq:init_map}, which in this case takes the form
\begin{equation}\label{eq:init_map2}
\bm{s}_0 =
\tanh(\mathbf{W}_{\psi}\bm{\eta})
\end{equation}
with
$\bm{\eta} = \overrightarrow{\mathbf{h}_\ell} \oplus \bm{\psi}$,
$\mathbf{W}_{\psi}$
being the trainable parameters\footnote{\label{note_biases}Again, all biases are omitted for simplicity.}
of the neural network \eqref{eq:init_map}.
The attention mechanism~\cite{bahdanau2015} is adopted as an intermediate layer between the encoder and the decoder
to learn the relationship between the observed and the predicted kinematic states
while preserving the spatio-temporal structure of the input. We extend the attention module
of \cite{bahdanau2015} implemented in \cite{Capobianco2021}
by applying a random dropout mask repeated at all time steps $\mathbf{d}_a$ to the input hidden features, previously computed by the encoder network.
This is achieved by allowing the context representation to be a set of fixed-size vectors, or context set
$\mathbf{z}=\{\mathbf{z}_t\}_{t=1}^{\ell}$,
where
each context vector $\mathbf{z}_t$ in~\eqref{eq:decoder3}
can be computed as a weighted sum of the encoded input states, i.e.,
$
\mathbf{z}_t = \sum_{j=1}^\ell \alpha_{tj}\mathbf{h}_j \odot \mathbf{d}_a ,
$
where
$
\alpha_{tj} = \operatorname{exp}(e_{tj})
/
\sum_{k=1}^\ell \operatorname{exp}(e_{tk})
$
represents the attention weight,
and $e_{tj} = a(\mathbf{s}_{t-1},\mathbf{h}_j \odot \mathbf{d}_a; \mathbf{W}_a)$
is a variational neural network with parameters
$\mathbf{W}_a$
and dropout mask $\mathbf{d}_a$.
The variational attention network is trained jointly with
the prediction model to quantify the level of matching between the inputs around position $j$ and the output at position $t$
based on the $j$-th encoded input state $\mathbf{h}_j$
and the decoder's state $\mathbf{s}_{t-1}$ (generating the $t$-th output).
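One concrete choice for $a(\cdot)$ consistent with the text is the usual additive (Bahdanau-style) score; the variational attention step can then be sketched as follows (the exact form of $a$ and all names are our assumptions):

```python
import numpy as np

def attention_context(H, s_prev, W_h, W_s, v, d_a):
    """Compute z_t = sum_j alpha_tj (h_j * d_a) with additive scores
    e_tj = v^T tanh(W_h (h_j * d_a) + W_s s_{t-1}).
    H: (ell, q) encoder states; d_a: mask repeated at all time steps."""
    Hm = H * d_a                                   # masked encoder states
    e = np.tanh(Hm @ W_h.T + s_prev @ W_s.T) @ v   # scores, shape (ell,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                           # softmax attention weights
    return alpha @ Hm, alpha
```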
In order to deal with uncertainty,
the decoder~\eqref{eq:decoder3}
is implemented as a unidirectional \textit{VarLSTM}
which generates the sequence of future predicted distributions by iterating $\forall t = 1, \dots, h$:
\begin{eqnarray}
\bm{c}_t &=& \hat{\mathbf{y}}_{t-1} \oplus \mathbf{z}_t\\
\mathbf{s}_t &=& \operatorname{\textit{VarLSTM}}(\bm{c}_t, \mathbf{s}_{t-1}; \mathbf{W}_R)
\label{eq:var_lstm}
\\
\hat{\mathbf{y}}_{t}
&=& \mathbf{W}_{\mu} \mathbf{s}_{t}
\label{bnn_mean}
\\
\left[ \log b_{11}, \log b_{22}, b_{21}\right]^T &=& \mathbf{W}_{\Sigma} \mathbf{s}_{t} \label{bnn_variance}
\end{eqnarray}
where
$\hat{\mathbf{y}}_0 = \mathbf{x}_\ell$,
$\mathbf{z}_t$ is the context vector computed through the attention mechanism,
and $\mathbf{W}_{\mu}$, $\mathbf{W}_{\Sigma}$ are trainable parameters\footnoteref{note_biases} of a single neural network mapping the \textit{VarLSTM} output $\mathbf{s}_t$
to the parameters
in~\eqref{bnn_mean}-\eqref{bnn_variance}
used to estimate
the predictive uncertainty at time step $t$,
here modeled as a multivariate Gaussian distribution $\mathcal{N}(
\hat{\mathbf{y}}_{t}, \mathbf{\Sigma}_t)$.
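The recursive structure of the decoder — each prediction fed back as the next input — can be sketched generically, with the cell and the two linear heads as placeholders for the \textit{VarLSTM} and for \eqref{bnn_mean}--\eqref{bnn_variance} (a sketch, not the actual implementation):

```python
import numpy as np

def decode(y0, s0, contexts, cell, head_mu, head_sigma):
    """Iterate t = 1..h: c_t = [y_{t-1}; z_t], s_t = cell(c_t, s_{t-1}),
    then emit the mean and the raw covariance parameters from s_t."""
    y, s, out = y0, s0, []
    for z in contexts:
        c = np.concatenate([y, z])    # previous prediction fed back
        s = cell(c, s)
        y = head_mu(s)                # predictive mean y_t
        out.append((y, head_sigma(s)))
    return out
```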
Note that,
while the predictive mean
$\hat{\mathbf{y}}_{t}$
can be directly obtained through~\eqref{bnn_mean},
to stabilize training and enforce positive-definite predictive covariance matrices,
the network~\eqref{bnn_variance} is trained to predict the elements of a lower triangular matrix with real and positive diagonal entries $\mathbf{B} = \begin{bmatrix}
b_{11} & 0 \\ b_{21} & b_{22}
\end{bmatrix}$,
such that
\begin{equation} \label{chol}
\bm{\Sigma}_t = \mathbf{B}\mathbf{B}^T = \begin{bmatrix}
\sigma_{t,x}^2 & \sigma_{t,x}\sigma_{t,y}\rho_t \\
\sigma_{t,x}\sigma_{t,y}\rho_t & \sigma_{t,y}^2
\end{bmatrix}
\end{equation}
is guaranteed to be positive definite using the Cholesky decomposition~\eqref{chol}.
Note that $\sigma_{t,x}$ and $\sigma_{t,y}$ in~\eqref{chol} denote the predictive standard deviations along the $x$ and $y$ directions, respectively.
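The covariance parameterization above can be sketched directly: exponentiating the two diagonal outputs guarantees that $\mathbf{B}$ has a positive diagonal, so $\mathbf{B}\mathbf{B}^T$ is positive definite by construction.

```python
import numpy as np

def predictive_covariance(raw):
    """Map the three network outputs [log b11, log b22, b21] to
    Sigma_t = B B^T with B lower triangular, positive diagonal."""
    log_b11, log_b22, b21 = raw
    B = np.array([[np.exp(log_b11), 0.0],
                  [b21, np.exp(log_b22)]])
    return B @ B.T
```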
Notice also that in the proposed \textit{VarLSTM} decoder implementation, the $\mathbf{d}_h$ dropout mask is applied only on the recurrent connections.
This end-to-end solution is trained by the stochastic gradient descent algorithm in order to learn an optimal function approximation \eqref{eq:pred}.
A decoder \eqref{eq:decoder3} with such a recursive structure
offers the advantage of being able to handle
sequences of arbitrary length.
The model consists of a set of trainable parameters $\bm{\theta}_{D} = \{\mathbf{W}_{\psi},\mathbf{W}_R,\mathbf{W}_a, \mathbf{W_\mu}, \mathbf{W}_\Sigma\}$ respectively used to model the decoder initialization \eqref{eq:init_map2}, the unidirectional \textit{VarLSTM} \eqref{eq:var_lstm}, the attention mechanism, and the output distribution given by \eqref{bnn_mean}-\eqref{bnn_variance}.
\begin{figure}[t!]
\centering%
\includegraphics[width=\columnwidth]{fig/seq2seq-arch.pdf}%
\caption{Diagram of the proposed attention-based \textit{VarLSTM} encoder-decoder architecture for vessel trajectory prediction with uncertainty estimation.
The encoder is a bidirectional \textit{VarLSTM} that maps the input sequence into a sequential hidden state.
Then, the encoded information, randomly masked through variational dropout, is passed to an intermediate attention module to generate a context representation of the input.
This intermediate representation is fed to the decoder network, implemented as a unidirectional \textit{VarLSTM}, that generates the sequence of output distributions $\mathcal{N}(\hat{\mathbf{y}}_{t}, \mathbf{\Sigma}_t), t = 1, \dots, h$. Finally, the sequence of total predictive distributions $\mathcal{N}(\overline{\mathbf{y}}_{t}, \overline{\mathbf{\Sigma}}_t)$ is obtained by combining aleatoric and epistemic uncertainty, the latter estimated through MC dropout.
Note that the decoder network is initialized using a regularization method on the intention information.}
\label{fig:arch}
\end{figure}
\begin{comment}
\subsection{Attention mechanism}
\label{ssec:attention}
The aggregation function $A$ in \eqref{eq:aggregate} allows us to compress all the information encoded in $\mathbf{H}$ into a single contextual representation
which finally will be used by the decoder to produce the output sequence.
We rely on the attention mechanism.
Attention-based neural networks have recently demonstrated
great success in a wide range of tasks including question answering, machine translation,
and image captioning \cite{Hermann2015}, \cite{bahdanau2015}, \cite{Xu2015}.
The attention mechanism is introduced
as an intermediate layer
between the encoder and the decoder networks to address two issues
that usually characterize encoder-decoder architectures using simple aggregation functions:
i) the limited capacity of a fixed-dimensional context vector regardless of the
amount of information in the input \cite{Cho2014}, and
ii) the lack of interpretability \cite{bahdanau2015}.
The attention mechanism can be adopted
to model the relation between the encoder and the decoder hidden states
and to learn the relation between the observed and the predicted kinematic states
while preserving the spatio-temporal structure of the input.
This is achieved by allowing the context representation to be a set of fixed-size vectors, or context set
$\mathbf{z}=[ \mathbf{z}_1, \dots, \mathbf{z}_{\ell} ]$.
Consider the matrix $\mathbf{H}$ of the encoder's hidden states
$\mathbf{h}_{1:\ell}$
and the hidden state $\mathbf{s}_{t-1}$ of the decoder network,
then
each context vector $\mathbf{z}_t$ can be computed as a weighted sum of the hidden states, i.e.,
\begin{equation}
\mathbf{z}_t = \sum_{j=1}^\ell \alpha_{tj}\mathbf{h}_j,
\end{equation}
where $\alpha_{tj}$ represents the attention weight calculated,
from the score $e_{tj}$,
via the softmax operator as
\begin{equation}
\alpha_{tj} = \frac{\operatorname{exp}(e_{tj})}{\sum_{k=1}^\ell \operatorname{exp}(e_{tk})}
\end{equation}
with
\begin{eqnarray}
e_{tj} &=& \vect{v}_m^\top\operatorname{tanh}(\mathbf{W}_{h}\mathbf{h}_j + \mathbf{W}_{s}\mathbf{s}_{t-1}). \label{eq:combine}
\end{eqnarray}
Note that the transformation matrices $\mathbf{W}_{h} \in \mathbb{R}^{2q\times q}$ and $\mathbf{W}_{s} \in \mathbb{R}^{q\times q}$ in the trainable neural network \eqref{eq:combine} are used to combine together all the encoder's hidden states $\mathbf{h}_j$ for each decoder state $\mathbf{s}_{t-1}$, before applying the hyperbolic tangent as activation function.
The matrix multiplication applies the transformation matrix $\mathbf{v}_{m} \in \mathbb{R}^{m}$ to produce element $e_{tj}$, which scores the quality of alignment between the inputs around position $j$ and the output at position $t$, i.e., how well they match.
The main task of the attention mechanism is to score
each context vector $\mathbf{z}_t$
with respect to the decoder's hidden state $\mathbf{s}_{t-1}$
while it is generating the output trajectory
to better approximate the predictive distribution $\eqref{eq:predictive_distr}$.
This scoring operation corresponds to assigning a probability to each context of being attended by the decoder.
\end{comment}
\subsection{Intention Regularization}\label{sec:regular}
\iffalse
\begin{equation}\label{eq:init_map3}
\bm{s}_0 = g(\bm{h}_\ell, \bm{\psi}) = \tanh([\bm{h}_\ell, \bm{\psi}]\mathbf{W}_{\Psi} + \bm{b}_{\Psi}).
\end{equation}
\begin{eqnarray}
X = [\bm{h}_\ell, \bm{\psi}]\\
\bm{s}_0 = g(\bm{h}_\ell, \bm{\psi}) = \tanh(X\mathbf{W}_{\Psi} + \bm{b}_{\Psi}).
\end{eqnarray}
\fi
Despite the proposed decoder initialization~\eqref{eq:init_map2}, the encoder-decoder architecture may overfit the intention information and predict erroneous future maneuvers of the vessel.
This happens because the model tends to base its predictions mainly on the high-level intention rather than on the information encoded in the past trajectory.
To avoid this behavior, and inspired by the dropout mechanism~\cite{Srivastava2014},
we apply a regularization technique based on dropout noise \cite{Wager2013}
by randomly masking the intention information at train time
in order to prevent complex co-adaptations between the encoded past trajectory and the high-level information.
In particular,
we apply a random dropout noise to the intention at each iteration of the training procedure
by feeding
the intention information $\bm{\psi}$ into \eqref{eq:init_map2} with some probability $q$ (a predefined hyperparameter), or
setting
it to zero otherwise.
Thus, the decoder initialization~\eqref{eq:init_map2} takes the following form
\begin{equation}
\bm{s}_0 = \tanh( \mathbf{W}_{\psi}(\bm{\eta} \odot \mathbf{d}_{\eta} )),
\end{equation}
where
$\mathbf{d}_{\eta} = \frac{p+v}{p} \bm{\nu}$,
and $\bm{\nu} \in \{0,1\}^{p+v}$ is a random binary mask on the input information such that $\nu_i=1$, $i=1,\dots,p$,
and $\nu_i= \nu \sim \operatorname{Bern}(1-\gamma)$, $i=p+1,\dots,p+v$. The random variable $\nu$
is Bernoulli with parameter $1-\gamma$, i.e., it takes the value $0$ with probability $\gamma$ for each training sample of the intention components of $\bm{\eta}$, otherwise it is $1$.
In other words, this random mask is applied to $\bm{\eta}$ in order to regularize via dropout noise only the intention information $\bm{\psi} \in \{0,1\}^v$, while leaving the encoded feature $\overrightarrow{\mathbf{h}_\ell} \in \mathbb{R}^p$ untouched.
\iffalse
\begin{equation}
\bar{\bm{\eta}} = \begin{cases}
[ \overrightarrow{\mathbf{h}_\ell}, \bm{\psi}], & \text{if $r = 1$} \\
[\frac{1}{1 - b} \mathbf{h}_\ell, 0], & \text{if $r = 0$}
\end{cases}
\end{equation}
\fi
Then, a scaling factor
$\frac{p+v}{p}$
is applied to the random mask at train time, while leaving the forward pass at test time unchanged.
Note that this pre-scaling performed at train time, commonly referred to as the \textit{inverted dropout} implementation, does not require any changes to the network to compensate for the absence of information during test time, as usually done in traditional dropout \cite{Srivastava2014}.
This additional regularization of the intention information
can be viewed as a Multimodal Dropout method \cite{Neverova2016}, in which the input features belonging to the same
group (or modality) are either all dropped out or all preserved to avoid false co-adaptations between different groups of input information, and to handle missing data in one of the groups at test time.
In our case, this has been shown to improve the prediction performance by preventing co-adaptations between the encoded past observations and the possibly available intention information.
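The group-wise mask $\mathbf{d}_{\eta}$ described above can be sketched as follows (names are ours; the single Bernoulli draw for the intention group and the $(p+v)/p$ train-time scaling follow the text):

```python
import numpy as np

def intention_mask(p, v, gamma, rng, train=True):
    """Mask for eta = [h_ell; psi]: the first p entries (encoded
    trajectory) are always kept; the last v entries (intention) are
    dropped together with probability gamma, one draw per sample."""
    if not train:
        return np.ones(p + v)          # forward pass unchanged at test time
    nu = np.ones(p + v)
    nu[p:] = rng.binomial(1, 1.0 - gamma)  # single Bernoulli for the group
    return (p + v) / p * nu
```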
The improvements in performance with respect to a previous version of the labeled architecture \cite{fusion2021} are reported in Appendix~\ref{appendix} for varying values of the dropout probability $\gamma$.
\begin{figure}
\centering%
\includegraphics[trim=52 10 52 10,clip,width=.95\columnwidth]{fig/dataset_train_fusion2021.png}%
\caption{Complete dataset of AIS positions used in the experiments; the training and testing sets are subsets of the dataset showed here. Each dot in the image corresponds to a ship's position. Contains data from the Danish Maritime Authority that is used in accordance with the conditions for the use of Danish public data~\cite{DMA}.}%
\label{fig:training_dataset}%
\end{figure}
\begin{figure}
\centering%
\includegraphics[trim=137 10 137 10,clip,width=.95\columnwidth]{fig/complete_test_set/distance_aes_letter.pdf}%
\caption{Test dataset used for the evaluation of performance metrics. Each dot in the image corresponds to a ship's position and is colored based on its distance from a fixed point, located in the upper left corner of the map. Contains data from the Danish Maritime Authority that is used in accordance with the conditions for the use of Danish public data~\cite{DMA}.}%
\label{fig:compare_dist_data}%
\end{figure}
\input{experiments.tex}
\section{CONCLUSION}
\label{sec:discussion}
\begin{figure*}
\centering %
\subfloat[]{%
\label{fig:predictions_unlabeled_1}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/unlabeled/237_14.png
} \hfill
\subfloat[]{%
\label{fig:predictions_unlabeled_2}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/unlabeled/237_18.png
} \hfill
\subfloat[]{%
\label{fig:predictions_unlabeled_3}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/unlabeled/237_27.png
} \\
\subfloat[]{%
\label{fig:predictions_labeled_1}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/labeled/237_14.png
} \hfill
\subfloat[]{%
\label{fig:predictions_labeled_2}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/labeled/237_18.png
} \hfill
\subfloat[]{%
\label{fig:predictions_labeled_3}
\includegraphics[trim=50 10 50 10, clip, width=.3\textwidth]{fig/labeled/237_27.png
}
\caption{Predictions computed with unlabeled models~\protect\subref{fig:predictions_unlabeled_1}--\protect\subref{fig:predictions_unlabeled_3} compared to labeled models~\protect\subref{fig:predictions_labeled_1}--\protect\subref{fig:predictions_labeled_3}. Contains data from the Danish Maritime Authority that is used in accordance with the conditions for the use of Danish public data~\cite{DMA}.}
\label{fig:predictions}
\end{figure*}
In this paper, we proposed
an attention-based recurrent encoder-decoder architecture to address the problem of
trajectory prediction with uncertainty quantification applied to a maritime domain case study.
The predictive uncertainty is estimated through Bayesian learning by combining both aleatoric and epistemic uncertainty, with the latter modeled via Monte Carlo dropout.
Experimental results show that the proposed architecture is able to learn maritime sequential patterns from historical AIS data, and successfully predict
future vessel trajectories with a reliable quantification of the predictive uncertainty.
Two models are compared and show how prediction performance can be improved by exploiting high-level intention behavior of vessels (e.g., their intended destination) when available. Future lines of research on this topic include the investigation of multimodal prediction techniques in combination with high-level intention modeling to further improve the prediction performance when the intention information alone is not sufficient to fully account for the multimodality of the prediction task.
\appendices
\section{Intention Regularization Performance}\label{appendix}
In this appendix we investigate how the prediction model can benefit from the intention regularization technique proposed in Section \ref{sec:regular} for the novel version of the labeled architecture (Lv2), by comparing the results with those obtained with the previous version of the labeled architecture (Lv1) presented in \cite{fusion2021}, in which the intention information is injected for each time step during the decoding phase.
For this comparison, we used the complete AIS dataset from \cite{Capobianco2021} shown in Fig.~\ref{fig:training_dataset}.
In our experiment we split the original dataset composed of \num{394} full trajectories into \num{284} trajectories for training, \num{32} for validation, and \num{78} for testing.
The windowing procedure proposed in \cite{Capobianco2021} produces \num{8574} input/output sequences of length $\ell=h=12$ for training, \num{1054} sequences for validation,
and \num{2379} sequences for the testing phase.
In the evaluation of our experiment,
we use
the APE
for different horizons (i.e., 1, 2, and 3 hours),
and the Average Displacement Error (ADE) as
the average Euclidean distance
between the predicted trajectory and the ground truth
(i.e., over all the predicted positions of a trajectory) \cite{Pellegrini09, Alahi16}.
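The ADE computation just described is a one-liner (a straightforward sketch with our own names):

```python
import numpy as np

def ade(pred, truth):
    """Average Displacement Error: mean Euclidean distance between
    predicted and ground-truth positions over all predicted steps."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.linalg.norm(pred - truth, axis=-1).mean())
```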
The performance evaluation of the proposed intention regularization method is shown in
Table~\ref{tbl:compare},
where only the most informative results are shown, for varying values of the intention dropout probability $\gamma$.
As shown in Table~\ref{tbl:compare},
the proposed architecture Lv2 performs better than
Lv1, with the best performance achieved by setting $\gamma=0.3$.
\begin{table}
\centering%
\caption{Comparison of the impact of intention regularization\\ on the APE and the ADE metrics}
\label{tbl:compare}
\begin{tabular}{lc|ccc|c}
\toprule
& & \multicolumn{3}{c|}{\footnotesize APE} & {\footnotesize ADE} \\
{\footnotesize Model} & {\footnotesize Mask ($\gamma$)} & {\footnotesize 1h} & {\footnotesize 2h} & {\footnotesize 3h} & \\
\hline
Lv1~\cite{fusion2021} & -- & \tablenum{0.57} & \tablenum{1.14} & \tablenum{1.90} & \tablenum{0.97} \\
Lv2 (ours) & \tablenum[detect-weight=true]{0} & \tablenum{0.56} & \tablenum{1.11} & \tablenum{1.86} & \tablenum{0.95} \\
{\bfseries Lv2 (ours)} & {\bfseries \tablenum[detect-weight=true]{0.3}} & {\bfseries \tablenum[detect-weight=true]{0.51}} & {\bfseries \tablenum[detect-weight=true]{1.03}} & {\bfseries \tablenum[detect-weight=true]{1.78}} & {\bfseries \tablenum[detect-weight=true]{0.88}} \\
Lv2 (ours) & \tablenum[detect-weight=true]{0.5} & \tablenum{0.54} & \tablenum{1.06} & \tablenum{1.81} & \tablenum{0.91}\\
\bottomrule
\end{tabular}%
\end{table}
\bibliographystyle{IEEEtran}
A Hermitian connection on a Hermitian manifold $(M,J,g)$ is a connection which leaves both $J$ and $g$ parallel. Each Hermitian manifold admits plenty of these connections. Among them, there is only one whose torsion is totally skew-symmetric. This unique connection is called the Bismut connection associated to $(J,g)$, and it is also known as the Strominger connection or the KT connection (for K\"ahler with torsion). In this article we will denote it by $\nabla^b$.
As with any connection, it is important to determine which are the Hermitian manifolds whose corresponding Bismut connection is flat. Well-known examples of such manifolds are given by Lie groups equipped with a bi-invariant Riemannian metric and a compatible left invariant complex structure (see \cite{IP}). In particular, this family contains the Hermitian manifolds $(G,J,g)$ where $G$ is a compact Lie group, $J$ is one of the left invariant complex structures constructed by Samelson in \cite{Sam} and $g$ is a bi-invariant metric. More recently, it was proved in \cite{WYZ} that every compact Bismut-flat Hermitian manifold is closely related to these examples: indeed, if $M$ is a compact Hermitian manifold with flat Bismut connection, then its universal cover is a Lie group $G'$ equipped with a bi-invariant metric and a left invariant complex structure compatible with the metric. In particular, $G'$ is the product of a compact semisimple Lie group and a real vector space.
Since the flat case is already settled, it is interesting to analyze other Hermitian manifolds whose associated Bismut connection have special curvature properties. One way is to consider the notion of \emph{K\"ahler-like}. In \cite{AOUV}, the authors conjectured that if the Bismut connection is K\"ahler-like, then the metric is pluriclosed (i.e., $\partial\overline{\partial} \omega=0$, where $\omega$ denotes the fundamental $2$-form $\omega=g(J\cdot,\cdot)$). This conjecture has been recently proved in \cite{ZZ}. Another way, and this is the aim of the paper, is to study the holonomy group $\operatorname{Hol}^b$ of the Bismut connection $\nabla^b$. Since both the complex structure and the Hermitian metric are $\nabla^b$-parallel we have that $\operatorname{Hol}^b\subseteq \operatorname{U}(n)$, where $2n$ is the real dimension of the manifold. In particular, $2n$-dimensional Hermitian manifolds whose Bismut holonomy is contained in $\operatorname{SU}(n)$ have attracted plenty of attention. These manifolds are known as \textit{Calabi-Yau with torsion}, and they appear in heterotic string theory, related to the Strominger system in six dimensions. It has been shown that this reduction to $\operatorname{SU}(n)$ is related in certain cases to the Hermitian metric being \textit{balanced}, that is, when the fundamental $2$-form $\omega$ satisfies $d\omega^{n-1}=0$ or, equivalently, $d^\ast\omega=0$.
For instance, it was shown in \cite{St,LY} that if the compact Hermitian manifold $(M^{2n},J,g)$ has holomorphically trivial canonical bundle, then $\operatorname{Hol}^b\subseteq \operatorname{SU}(n)$ if and only if $g$ is conformally balanced; in particular, $(M,J)$ admits a balanced metric. In the case when $M^{2n}$ is a nilmanifold, that is, $M=\Gamma\backslash G$ where $G$ is a nilpotent Lie group and $\Gamma$ is a co-compact discrete subgroup of $G$, more can be said, since it was proved in \cite{FPS} that an invariant Hermitian structure $(J,g)$ on $M$ satisfies $\operatorname{Hol}^b\subseteq \operatorname{SU}(n)$ if and only if $g$ is balanced.
In the Gray-Hervella classification of almost Hermitian structures, balanced metrics fall into the class $\mathcal{W}_3$. In this article we are interested in Hermitian manifolds which belong to the class $\mathcal{W}_4$, namely \textit{locally conformally K\"ahler} manifolds (or LCK for short). As the name suggests, these Hermitian manifolds are characterized by the property that each point has a neighbourhood where the metric is conformal to a K\"ahler metric. This condition is equivalent to the existence of a closed $1$-form $\theta$ satisfying $d\omega=\theta\wedge \omega$. The $1$-form $\theta$ is known as the Lee form, and it is given by $\theta=-\frac{1}{n-1} d^\ast\omega\circ J$, where $2n$ is the real dimension of the manifold. A distinguished class of LCK manifolds is given by those where the Lee form is parallel with respect to the Levi-Civita connection of the Hermitian metric. These manifolds were first studied by I. Vaisman in the late '70s (see for instance \cite{V}) and, accordingly, they are nowadays known as \textit{Vaisman} manifolds. Not all LCK manifolds are Vaisman, for instance, the Oeljeklaus-Toma manifolds of type $(s,1)$ are compact complex manifolds which admit LCK metrics but do not admit any Vaisman metric (see \cite{OT,K}).
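The local equivalence between the two characterizations just stated can be checked in one line (a standard computation, included for convenience): if $g = e^{f} g_K$ on a neighbourhood, with $g_K$ K\"ahler, then $\omega = e^{f}\omega_K$ and

```latex
d\omega = e^{f}\, df \wedge \omega_K + e^{f}\, d\omega_K
        = df \wedge e^{f}\omega_K
        = df \wedge \omega ,
```

so the Lee form is locally exact, $\theta = df$, and in particular closed. Conversely, on any open set where $\theta = df$, one has $d(e^{-f}\omega) = e^{-f}(d\omega - df\wedge\omega) = 0$, so the conformal metric $e^{-f}g$ is K\"ahler there.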
Our main goal is to study the Bismut holonomy of Vaisman manifolds and exhibit explicit examples where this holonomy can be computed. We point out that the Riemannian holonomy of compact Vaisman manifolds has been analyzed in \cite{MMP}. Examples of Vaisman manifolds are given by the classical Hopf manifolds, that is, quotients of $\C^{n}-\{0\}$ by a group of automorphisms generated by $z\to \lambda z$, where $\lambda$ is a complex number with $|\lambda|>1$. These manifolds are all diffeomorphic to $S^1\times S^{2n-1}$ and do not admit any K\"ahler structure.
Another family of examples of compact Vaisman manifolds was introduced in \cite{CFL} in 1986. They are defined as compact quotients of the nilpotent Lie groups $H_{2n+1}\times\R$ by a discrete subgroup $\Gamma$, where $H_{2n+1}$ denotes the $(2n+1)$-dimensional Heisenberg Lie group; they are thus examples of \textit{nilmanifolds}. In these examples the Vaisman structures are left invariant, and recently Bazzoni proved in \cite{Ba} that if a nilmanifold $\Gamma\backslash N$ admits a Vaisman structure (invariant or not) then $N$ is isomorphic to $H_{2n+1}\times\R$. Examples of Vaisman structures on solvmanifolds (i.e., compact quotients of a simply connected solvable Lie group by a discrete subgroup) first appeared in \cite{MP} in 1997. More recently, there have been advances on the structure of the Lie algebras associated to solvmanifolds equipped with invariant Vaisman structures, see for instance \cite{AHK,AO}. In particular, the description given in \cite{AO} will be very useful for us in order to analyze the Bismut connection on these Vaisman solvmanifolds and compute its holonomy.
The paper is structured as follows: In Section 2 we start by collecting some known results about Gauduchon connections, holonomy and the Ambrose-Singer theorem as a tool for determining the holonomy. In Section 3 we study the Bismut connection on Vaisman manifolds and its curvature. The main results are Corollary~\ref{hol} and Corollary~\ref{parallel torsion}, where we prove that the holonomy of the Bismut connection on Vaisman manifolds of real dimension $2n$ reduces to $\operatorname{U}(n-1)$ and that the Bismut torsion 3-form is $\nabla^b$-parallel. Section 4 is devoted to solvmanifolds endowed with an invariant Vaisman structure. In this setting, we prove that the holonomy of the Bismut connection has dimension 1 and is not contained in $\operatorname{SU}(n)$. Some classical Hopf manifolds are studied in Section 5. Using a global parallelization of these manifolds which is compatible with the Vaisman structure, we determine explicitly the holonomy group of the Bismut connection, obtaining the group $\operatorname{U}(n-1)$. Non-Vaisman LCK Oeljeklaus-Toma manifolds are considered in Section 6, and we show that there is no reduction of the Bismut holonomy in this case. Finally, we study the parallelism of the Lee form $\theta$ of a Vaisman manifold for the line of Gauduchon connections and, more generally, for the $2$-parameter family of metric connections introduced in \cite{OUV}.
All manifolds considered in this paper have real dimension $\geq 4$.
\
\section{Preliminaries on holonomy and the Ambrose-Singer theorem}
We collect here some well-known facts on holonomy groups and the Ambrose-Singer theorem that will be useful in subsequent sections.
\medskip
Let $\nabla$ denote any linear connection on a connected manifold $M$ and let us fix a point $p\in M$. If $\gamma:[0,1]\to M$ is a piecewise smooth loop based at $p$, the connection $\nabla$ gives rise to a parallel transport map $P_\gamma:T_pM \to T_pM$, which is linear and invertible. The holonomy group of $\nabla$ based at $p\in M$ is defined as
\[ \operatorname{Hol}_p(\nabla)=\{ P_\gamma\in \operatorname{GL}(T_pM) \mid \gamma \text{ is a loop based at } p\}.\]
It turns out that $\operatorname{Hol}_p(\nabla)$ is a Lie subgroup of $\operatorname{GL}(T_pM)$.
Since $M$ is connected, the holonomy groups based at two different points are conjugate, and therefore we can speak of the holonomy group of $\nabla$, denoted simply by $\operatorname{Hol}(\nabla)$. If $\dim M=n$, we can identify $\operatorname{Hol}(\nabla)$ with a Lie subgroup of $\operatorname{GL}(n,\R)$, after some choice of basis.
The holonomy group need not be connected, and its identity component is denoted by $\operatorname{Hol}_0(\nabla)$; it is known as the restricted holonomy group of $\nabla$ and it consists of the parallel transport maps $P_\gamma$ where $\gamma$ is null-homotopic. Clearly, if $M$ is also simply connected then $\operatorname{Hol}(\nabla)=\operatorname{Hol}_0(\nabla)$.
We point out that if $\nabla$ is a metric connection on a Riemannian manifold $(M,g)$, i.e. $\nabla g=0$, then $P_\gamma$ is an isometry of $(T_pM,g_p)$, while if $\nabla$ satisfies $\nabla J=0$ on an almost complex manifold $(M,J)$ then $P_\gamma J=JP_\gamma$. Therefore, if $\nabla$ is a Hermitian connection ($\nabla g=\nabla J=0$) on an almost Hermitian manifold $(M^{2n},J,g)$ then
\[ \operatorname{Hol}(\nabla)\subseteq \operatorname{O}(2n)\cap \operatorname{GL}(n,\C)=\operatorname{U}(n).\]
In \cite{Ga} a one-parameter family $\{\nabla^t\}_{t\in\R}$ of Hermitian connections on any Hermitian manifold $(M,g,J)$ was introduced. These are known as the \textit{Gauduchon} connections (or \textit{canonical} connections), and they can be written as
\[ g(\nabla^t_XY,Z)=g(\nabla^g_XY,Z)+\frac{t-1}{4}(d^c\omega)(X,Y,Z)+\frac{t+1}{4}(d^c\omega)(X,JY,JZ), \quad X,Y,Z\in\mathfrak{X}(M), \]
where $\omega$ is the fundamental 2-form $\omega(X,Y)=g(JX,Y)$ and $d^c:\Omega^r(M)\to\Omega^{r+1}(M)$ is the operator defined by
$d^c=(-1)^rJdJ$. More explicitly, for $\alpha\in\Omega^2(M)$ we have that $d^c\alpha(U,V,W)=-d\alpha(JU,JV,JW)$ for any $U,V,W\in\mathfrak{X}(M)$, so that the expression for $\nabla^t$ becomes
\begin{equation}\label{canonical}
g(\nabla^t_XY,Z)=g(\nabla^g_XY,Z)-\frac{t-1}{4}d\omega(JX,JY,JZ)-\frac{t+1}{4}d\omega(JX,Y,Z).
\end{equation}
When $(M,J,g)$ is K\"ahler this family collapses to a single connection, namely the Levi-Civita connection. However, in general, the torsion $T^t$ of these connections is non-zero, where $T^t$ is the $(1,2)$-tensor defined by $T^t(X,Y)=\nabla^t_XY-\nabla^t_YX-[X,Y]$ for $X,Y$ vector fields on $M$.
For particular values of $t\in\R$ we obtain well-known Hermitian connections. For instance, for $t=1$ we have the \textit{Chern connection}, while for $t=0$ we have the \textit{first canonical connection}.
\medskip
In this article we will focus on the \textit{Bismut connection}, which is the connection $\nabla^{-1}$ obtained for $t=-1$. From now on the Bismut connection will be denoted by $\nabla^b$, with corresponding torsion $T^b$. It was introduced in \cite{Bis} and it can be defined as the unique Hermitian connection whose torsion $T^b$ is totally skew-symmetric, i.e. $c(X,Y,Z):=g(X,T^b(Y,Z))$ is a $3$-form on $M$. It follows from \eqref{canonical} that its expression is given by
\begin{equation} \label{bismut}
g(\nabla^b_XY,Z)=g(\nabla^g_XY,Z)+\frac12 d\omega(JX,JY,JZ),
\end{equation}
and its torsion $3$-form $c$ is:
\begin{equation} \label{3-form}
c(X,Y,Z)=d\omega(JX,JY,JZ),
\end{equation}
for any $X,Y,Z\in\mathfrak{X}(M)$.
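Indeed, \eqref{3-form} can be checked directly from \eqref{bismut}: since the Levi-Civita connection $\nabla^g$ is torsion-free,
\begin{align*}
g(X,T^b(Y,Z)) & = g(\nabla^b_YZ-\nabla^b_ZY-[Y,Z],X) \\
& = g(\nabla^g_YZ-\nabla^g_ZY-[Y,Z],X)+\frac12 d\omega(JY,JZ,JX)-\frac12 d\omega(JZ,JY,JX) \\
& = d\omega(JY,JZ,JX) = d\omega(JX,JY,JZ),
\end{align*}
where the last equality holds because $d\omega$ is a $3$-form and hence invariant under cyclic permutations of its arguments.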
\medskip
As a matter of notation, we will use $\operatorname{Hol}^b(M)$ to refer to the holonomy of the Bismut connection on the Hermitian manifold $M$.
\
The Ambrose-Singer theorem provides a way to compute the holonomy group of a linear connection; indeed, it describes the Lie algebra $\mathfrak{hol}_p(\nabla)$ of $\operatorname{Hol}_p(\nabla)$ in terms of curvature endomorphisms $R_p(x,y)$ for $x,y\in T_pM$:
\begin{theorem}\label{AS1}\cite{AS}
The holonomy algebra $\mathfrak{hol}_p(\nabla)$ is the smallest subalgebra of $\mathfrak{gl}(T_pM)$ containing the endomorphisms $P_\sigma^{-1}\circ R_p(x,y)\circ P_\sigma$, where $x,y$ run through $T_pM$, $\sigma$ runs through all piecewise smooth paths starting from $p$ and $P_\sigma$ denotes the parallel transport map along $\sigma$.
\end{theorem}
In particular, $\mathfrak{hol}_p(\nabla)$ contains all the curvature endomorphisms $R_p(x,y),\, x,y\in T_pM$. This fact will be used in Section \ref{Hopf}.
\medskip
Let us consider the particular case when $M=G$ is a Lie group with Lie algebra $\frg$. A linear connection $\nabla$ on $G$ is said to be left invariant if the left translations on $G$ are affine maps. As a consequence, if $X,Y$ are left invariant vector fields then $\nabla_XY$ is also left invariant. Therefore $\nabla$ is uniquely determined by a bilinear multiplication $\frg \times \frg \to \frg$, still denoted by $\nabla$. We also denote by $\nabla_x:\frg\to\frg$ the endomorphism defined by left multiplication with $x\in\frg$. In this case the Ambrose-Singer theorem takes the following form:
\begin{theorem}\label{AS2}\cite{Alek}
Let $\nabla$ be a left invariant linear connection on the Lie group $G$, and let $\frg$ denote the Lie algebra of $G$. Then the holonomy algebra $\mathfrak{hol}(\nabla)$, based at the identity element $e\in G$, is the smallest subalgebra of $\mathfrak{gl}(\frg)$ containing the curvature endomorphisms $R(x,y)$ for any $x,y\in\frg$, and closed under commutators with the left multiplication operators $\nabla_x:\frg\to \frg$.
\end{theorem}
This version of the Ambrose-Singer theorem will be used in Sections \ref{solvmanifold} and \ref{section-OT}.
\
\section{Curvature of the Bismut connection on Vaisman manifolds}
Let $(J,g)$ be a Hermitian structure on a connected manifold $M$ with fundamental $2$-form~$\omega$. This structure is called \textit{locally conformally K\"ahler} (LCK for short) if there exists an open covering $\{ U_i\}_{i\in I}$ of $M$ and differentiable functions $f_i:U_i \to \R$, $i\in I$, such that each local
metric $g_i=\exp(-f_i)\,g|_{U_i}$ is K\"ahler.
Equivalently, $(J,g)$ is LCK if there exists
a closed $1$-form $\theta$ such that the differential of $\omega$ is given by
\begin{equation}\label{dif}
d\omega=\theta\wedge \omega.
\end{equation}
The $1$-form $\theta$ is known as the Lee form. We denote by $A\in\mathfrak{X}(M)$ the vector field which is metric dual to $\theta$, i.e., $g(A,U)=\theta(U)$ for all $U\in\mathfrak{X}(M)$. If, moreover, $\nabla^g\theta=0$, the Hermitian structure $(J,g)$ is called \textit{Vaisman}.
\
\begin{remark}
{\rm (i) The Lee form is uniquely determined by
\begin{equation} \label{d-theta}
\theta=-\frac{1}{n-1}(d^\ast\omega)\circ J,
\end{equation}
where $d^\ast$ is the codifferential and $2n$ is the real dimension of $M$.
\smallskip (ii) If the Lee form $\theta$ is exact, i.e. $\theta=df$ with $f\in C^\infty(M)$, then $\exp(-f)g$ is a K\"ahler metric on $M$. Therefore any simply connected LCK manifold admits a global K\"ahler metric; consequently, ``genuine'' LCK metrics occur on non-simply connected manifolds.
\smallskip (iii) The LCK structure is K\"ahler if and only if $\theta=0$. Indeed, $\theta \wedge \omega=0$ and $\omega$ non-degenerate imply $\theta=0$.}
\end{remark}
\
Let us start with some results concerning the torsion of the Bismut connection on an LCK manifold.
\medskip
\begin{lemma}\label{torsion}
Let $(M,J,g)$ be an LCK manifold with fundamental $2$-form $\omega$ and Lee form~$\theta$. Then the torsion $3$-form $c$ of the Bismut connection is given by
\[ c=-J\theta \wedge \omega,\]
where $J\theta$ denotes the $1$-form on $M$ defined by $J\theta(X)=-\theta(JX)$.
\end{lemma}
\begin{proof}
Recall that the torsion $3$-form $c$ is given by $c(X,Y,Z)=d\omega(JX,JY,JZ)$. It follows from \eqref{dif} that
\begin{align*}
c(X,Y,Z) & = \theta\wedge \omega(JX,JY,JZ) \\
& = \theta(JX)\omega(JY,JZ)+\theta(JY)\omega(JZ,JX)+\theta(JZ)\omega(JX,JY) \\
& = -J\theta(X)\omega(Y,Z)-J\theta(Y)\omega(Z,X)-J\theta(Z)\omega(X,Y) \\
& = -J\theta\wedge \omega(X,Y,Z),
\end{align*}
for any vector fields $X,Y,Z$ on $M$.
\end{proof}
\begin{corollary}\label{torsion-A}
The torsion $3$-form $c$ of the Bismut connection satisfies $c(A,\cdot,\cdot) = 0$.
\end{corollary}
\begin{proof} Using the previous expression for the torsion, one gets:
\begin{align*}
c(A,X,Y) &= -(J\theta\wedge \omega)(A,X,Y) \\ & = \theta(JA)\omega(X,Y) +\theta(JX)\omega(Y,A)+\theta(JY)\omega(A,X)\\
& =g(A,JA)g(JX,Y) +g(A,JX)g(JY,A)+g(A,JY)g(JA,X) \\
& = 0,
\end{align*} since $g(A,JA)=0$ and $J$ is skew-symmetric.
\end{proof}
\begin{corollary}\label{torsion-12}
The $(1,2)$-torsion tensor $T^b$ of the Bismut connection is given by
\[ T^b(X,Y)= \theta(JX)JY-\theta(JY)JX-\omega(X,Y)JA, \]
for any vector fields $X,Y$ on $M$.
\end{corollary}
\begin{proof}
Recalling that $c(X,Y,Z)=c(Z,X,Y)=g(Z,T^b(X,Y))$, this follows easily from Lemma \ref{torsion}.
\end{proof}
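Explicitly, for any $Z\in\mathfrak{X}(M)$, using $g(Z,JY)=\omega(Y,Z)$ and $g(Z,JA)=J\theta(Z)$ we compute
\begin{align*}
g(Z,\theta(JX)JY-\theta(JY)JX-\omega(X,Y)JA) & = -J\theta(X)\omega(Y,Z)-J\theta(Y)\omega(Z,X)-J\theta(Z)\omega(X,Y) \\
& = -(J\theta\wedge \omega)(Z,X,Y)=c(Z,X,Y)=c(X,Y,Z).
\end{align*}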
\medskip
As a consequence, we have that
\begin{equation}\label{nablab-formula}
\nabla^b_XY=\nabla^g_XY+\frac12\left( \theta(JX)JY-\theta(JY)JX-\omega(X,Y)JA \right),
\end{equation}
for any vector fields $X,Y$ on $M$.
\
From now on, we focus on Vaisman manifolds. Our objective is to compute the Bismut connection on Vaisman manifolds in terms of the vector fields $A,JA$ and the distribution $\mathcal D$ orthogonal to $A$ and $JA$, and then study the symmetries of the corresponding curvature tensor $R^b$. In particular, we obtain that the torsion $3$-form $c$ is always $\nabla^b$-parallel on a Vaisman manifold, and we obtain a first reduction of its holonomy.
\
\subsection{Results about the Bismut connection on Vaisman manifolds}
On a Vaisman manifold, the vector field $A$ $g$-dual to the Lee form $\theta$ satisfies $\nabla^g A=0$. It follows that $|A|$ is constant on $M$ and, therefore, by rescaling the metric, we may assume from now on that $|A|=1$, so that $\theta(A)=1$.
\smallskip
In the next result we collect some well-known facts about Vaisman manifolds which will be used throughout this article.
\begin{proposition}\cite{V}\label{propiedades-Vaisman}
Let $(M,J,g)$ be a Vaisman manifold with associated Lee form $\theta$. Let $A$ be the vector field which is metric dual to $\theta$, with $|A|=1$. Then:
\begin{enumerate}
\item[(a)] $[A,JA]=0$;
\item[(b)] both $A$ and $JA$ are Killing vector fields;
\item[(c)] $\mathcal{L}_AJ=\mathcal{L}_{JA}J=0$, where $\mathcal{L}$ denotes the Lie derivative. That is, $[A,JX]=J[A,X]$, $[JA,JX]=J[JA,X]$ for any vector field $X$ on $M$.
\end{enumerate}
\end{proposition}
\medskip
We denote by $\mathcal D$ the distribution on $M$ such that $\mathcal{D}_p$ is the orthogonal complement of $\text{span}\{A_p,J_pA_p\}$ in $T_pM$ for any $p\in M$. Clearly, $\mathcal{D}$ is $J$-invariant and $\theta(X)=J\theta(X)=0$ for any $X\in\Gamma(\mathcal{D})$. Moreover, $\mathcal D$ is not involutive, since using $d\omega(A,X,Y)=(\theta\wedge \omega)(A,X,Y)$ and Proposition \ref{propiedades-Vaisman} it can be seen that $g(JA,[X,Y])=\omega(X,Y)$ for any $X,Y\in\Gamma(\mathcal D)$. However, we can show that
\begin{corollary}\label{D}
If $X\in\Gamma(\mathcal{D})$ then $[A,X]\in\Gamma(\mathcal{D})$ and $[JA,X]\in\Gamma(\mathcal{D})$.
\end{corollary}
\begin{proof}
Since $d\theta=0$,
\begin{align*}
0=d\theta(A,X) & = A(\theta(X))-X(\theta(A))-\theta([A,X])\\
& =Ag(A,X)-Xg(A,A)-g(A,[A,X])\\
& = -g(A,[A,X]).
\end{align*}
Replacing $X\in\Gamma(\mathcal D)$ by $JX\in\Gamma(\mathcal D)$ in this expression and using Proposition \ref{propiedades-Vaisman}, we obtain
\[ 0=-g(A,[A,JX])=g(JA,[A,X]).\]
Thus, $[A,X]\in\Gamma(\mathcal{D})$.
The fact that $[JA,X]\in\Gamma(\mathcal{D})$ follows in the same way from $d\theta(JA,X)=0$.
\end{proof}
\
We will prove next that the Lee form $\theta$ on an LCK manifold is parallel with respect to the Levi-Civita connection (i.e. the manifold is Vaisman) if and only if it is parallel with respect to the Bismut connection. This fact was already mentioned in \cite{Sc}.
\begin{theorem}\label{theta-parallel}
Let $(M,J,g)$ be an LCK manifold with fundamental $2$-form $\omega$ and Lee form~$\theta$. Then $(M,J,g)$ is Vaisman (i.e., $\nabla^g\theta=0$) if and only if $\nabla^b\theta=0$.
\end{theorem}
\begin{proof}
Let us compute $(\nabla^b_X \theta)Y$ for any $X,Y\in\mathfrak{X}(M)$:
\begin{align*}
(\nabla^b_X \theta)Y & = X(\theta(Y))-\theta(\nabla^b_XY) \\
& = Xg(A,Y)-g(\nabla^b_XY,A) \\
& = g(\nabla^g_XA,Y)+g(A,\nabla^g_XY)- \left(g(\nabla^g_XY,A)+\frac12 c(X,Y,A)\right) \\
& = g(\nabla^g_XA,Y)
\end{align*}
according to
Corollary \ref{torsion-A}. Since $g(\nabla^g_XA,Y)=(\nabla^g_X \theta)Y$, the result follows.
\end{proof}
Since $\nabla^b J=0$, it follows from Theorem \ref{theta-parallel} that $\nabla^b J\theta=0$, or equivalently, $\nabla^b JA=0$. As an immediate consequence we have the following important result:
\begin{corollary}\label{hol}
If $(M,J,g)$ is Vaisman and $\dim M=2n$, then the holonomy group $\operatorname{Hol}^b(M)$ of the Bismut connection is contained in $\operatorname{U}(n-1)$.
\end{corollary}
Here $\operatorname{U}(n-1)$ is considered as a subgroup of $\operatorname{U}(n)$ in the following way:
\[ \operatorname{U}(n-1)\hookrightarrow \operatorname{U}(n), \qquad B \mapsto\bigg( \begin{array}{c|c} 1& \\ \hline & B \end{array}\bigg).\]
\
Also, combining $\nabla^b J\theta=0$ and $\nabla^b \omega=0$ with Lemma \ref{torsion} we obtain
\begin{corollary}\label{parallel torsion}
On any Vaisman manifold the torsion $3$-form $c$ of the Bismut connection is $\nabla^b$-parallel.
\end{corollary}
\medskip
\begin{remark}
{\rm The converse of Corollary \ref{parallel torsion} holds: if $(M,J,g)$ is an LCK manifold such that the torsion of the Bismut connection is $\nabla^b$-parallel then $(M,J,g)$ is Vaisman.
Indeed, according to Lemma \ref{torsion} the torsion of $\nabla^b$ is $c=-J\theta\wedge\omega$. If $\nabla^b_X(J\theta\wedge \omega)=0$ for any vector field $X$, then $\nabla^b_X(J\theta)\wedge \omega=0$, since $\nabla^b\omega=0$. Since $\omega$ is non-degenerate, the operator $-\wedge \omega$ is injective on $1$-forms, hence $\nabla^b_X(J\theta)=0$. We deduce from Theorem \ref{theta-parallel} that $M$ is Vaisman.}
\end{remark}
\
In order to compute the holonomy of the Bismut connection on a Vaisman manifold, we look first for parallel tensors. Let us consider the following skew-symmetric $(1,1)$-tensor:
\begin{equation}\label{tensor-phi}
\varphi=J-\theta\otimes JA+J\theta\otimes A.
\end{equation}
This tensor was introduced by Vaisman in \cite{V} and it is an \textit{$f$-structure} (i.e. $\varphi$ satisfies $\varphi^3+\varphi=0$). It has some important properties related to the Bismut connection, as the following propositions show.
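Indeed, the identity $\varphi^3+\varphi=0$ can be checked directly: since $J\theta(A)=0$ and $J\theta(JA)=\theta(A)=1$, we have
\[ \varphi(A)=JA-\theta(A)JA=0, \qquad \varphi(JA)=J(JA)+J\theta(JA)A=-A+A=0, \]
while $\varphi(X)=JX\in\Gamma(\mathcal D)$ for any $X\in\Gamma(\mathcal D)$, because $\theta$ and $J\theta$ vanish on $\mathcal D$. Hence $\varphi^2=-\operatorname{Id}$ on $\mathcal D$ and $\varphi^2=0$ on $\operatorname{span}\{A,JA\}$, so that $\varphi^3=-\varphi$.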
\begin{proposition}\label{phi paralelo}
On any Vaisman manifold, the tensor $\varphi$ is $\nabla^b$-parallel.
\end{proposition}
\begin{proof}
For any $X\in\mathfrak{X}(M)$ we have that
\[ \nabla^b_X\varphi= \nabla^b_XJ+\nabla^b_X(-\theta\otimes JA+J\theta\otimes A).\]
Since $\nabla^bJ=0$, we only have to check that the second term vanishes. We compute
\begin{align*}
\nabla^b_X(-\theta\otimes JA+J\theta\otimes A) & = -\nabla^b_X\theta \otimes JA -\theta\otimes \nabla^b_XJA +\nabla^b_X J\theta\otimes A + J\theta\otimes \nabla^b_XA \\
& = 0,
\end{align*}
using again that $\nabla^b\theta=0$, $\nabla^bA=0$ and $\nabla^bJ=0$.
\end{proof}
\medskip
\begin{proposition}\label{torsion-JA}
On any Vaisman manifold, the torsion $3$-form $c$ of the Bismut connection satisfies $c(JA,X,Y) = -g(\varphi(X),Y)$.
\end{proposition}
\begin{proof}
For $X,Y\in\mathfrak{X}(M)$, using Lemma \ref{torsion} we have that
\begin{align*}
c(JA,X,Y) & = -J\theta \wedge \omega (JA,X,Y) \\
& = -J\theta(JA)\omega(X,Y)-J\theta(X)\omega(Y,JA)-J\theta(Y)\omega(JA,X) \\
& = -g(JX,Y)-J\theta(X)g(Y,A)-g(A,JY)g(A,X) \\
& = -g(JX,Y)-J\theta(X)g(A,Y)+\theta(X)g(JA,Y) \\
& = -g(\varphi(X),Y),
\end{align*}
and the proof is complete.
\end{proof}
\medskip
The tensor $\varphi$ is closely related to the $2$-form $d(J\theta)$, as the following result shows:
\begin{corollary}\label{coro-phi}
The $2$-form $d(J\theta)$ satisfies:
\begin{enumerate}
\item[(a)] $d(J\theta)(X,Y)=c(JA,X,Y)$ (hence also equal to $-g(\varphi(X),Y)$),
\item[(b)] $d(J\theta)$ is $\nabla^b$-parallel,
\item[(c)] $d(J\theta)(JX,JY)=d(J\theta)(X,Y)$ for any $X,Y\in\mathfrak{X}(M)$,
\item[(d)] $d(J\theta)(A,\cdot)=d(J\theta)(JA,\cdot)=0$.
\end{enumerate}
\end{corollary}
\begin{proof}
For $X,Y\in\mathfrak{X}(M)$, we compute
\begin{align*}
d(J\theta)(X,Y) & = X(J\theta (Y))-Y(J\theta(X))-J\theta([X,Y]) \\
& = -X(\theta(JY))+Y(\theta(JX))+\theta(J[X,Y]) \\
& = -Xg(A,JY)+Yg(A,JX)+g(A,J[X,Y]).
\end{align*}
Since $\nabla^bg=0$, we have that
\[ d(J\theta)(X,Y) = -g(\nabla^b_XA,JY)-g(A,\nabla^b_X JY)+g(\nabla^b_YA,JX)+g(A,\nabla^b_Y JX)+g(A,J[X,Y]). \]
According to Theorem \ref{theta-parallel}, we have that $\nabla^bA=0$. Now, using that $\nabla^bJ=0$ and $J$ is skew-symmetric, we obtain that
\[ d(J\theta)(X,Y) = g(JA,T^b(X,Y))=c(JA,X,Y). \]
Therefore (a) holds. Now, (b) follows immediately from Proposition \ref{torsion-JA}.
Finally, (c) and (d) follow readily from (a). Indeed, for (c) we use that $\varphi$ and $J$ commute, and for (d) we use that $\varphi(A)=\varphi(JA)=0$.
\end{proof}
\
\subsection{Explicit computation of $\nabla^b$}
We describe next explicitly the Bismut connection $\nabla^b$ on Vaisman manifolds. We begin by computing $\nabla^b_A$. Since $c(A,\cdot,\cdot) = 0$ (see Corollary~\ref{torsion-A}), we have that $\nabla^b_AY = \nabla^g_AY$ for any $Y\in\mathfrak{X}(M)$. Moreover, due to $\nabla^g A=0$, we have that $\nabla^g_AY=[A,Y]$ and therefore $\nabla^b_AY=[A,Y]$ for all $Y\in\mathfrak{X}(M)$.
\medskip
Next, we determine $\nabla^b_{JA}$. In order to do this, we compute first $g(\nabla^g_{JA}X,Y)$ for any $X,Y\in\mathfrak{X}(M)$, using the Koszul formula:
\begin{align*}
g(\nabla^g_{JA}X,Y) = & \frac12\left\{JA g(X,Y)+Xg(Y,JA)-Yg(JA,X) \right. \\
& \qquad \left. +g([JA,X],Y)-g([X,Y],JA)+g([Y,JA],X)\right\}.
\end{align*}
Since $JA$ is a Killing vector field, we have that $JA g(X,Y)=g([JA,X],Y)+g(X,[JA,Y])$, so that the expression above becomes
\begin{align*}
g(\nabla^g_{JA}X,Y) & =\frac12\left\{2g([JA,X],Y)+X(J\theta(Y))-Y(J\theta(X))-J\theta([X,Y]) \right\}\\
& = g([JA,X],Y)+\frac12 d(J\theta)(X,Y)\\
& = g([JA,X],Y)-\frac12 g(\varphi(X),Y)\\
& = g\left([JA,X]-\frac12\varphi(X),Y\right).
\end{align*}
Hence we obtain $\nabla^g_{JA}X=[JA,X]-\frac12\varphi(X)$ for any $X\in\mathfrak{X}(M)$. Now, using \eqref{bismut} and Corollary~\ref{torsion-JA}:
\begin{align*}
g(\nabla^b_{JA}X,Y) = & g(\nabla^g_{JA}X,Y) +\frac12 c(JA,X,Y) \\
= & g([JA,X]-\frac12\varphi(X),Y)-\frac12 g(\varphi(X),Y)\\
= & g([JA,X]-\varphi(X),Y),
\end{align*}
so that $\nabla^b_{JA}X=[JA,X]-\varphi(X)$.
\medskip
Finally, we obtain from \eqref{nablab-formula} that, for $X,Y\in\Gamma(\mathcal{D})$,
\begin{equation}\label{XYenD}
\nabla^b_XY=\nabla^g_XY-\frac12 \omega(X,Y)JA,
\end{equation}
with $\nabla^b_XY\in\Gamma(\mathcal D)$. Indeed, observe that
\[ g(\nabla^b_XY,A)=Xg(Y,A)-g(Y,\nabla^b_XA)=0 \]
and also
\[ g(\nabla^b_XY,JA)=Xg(Y,JA)-g(Y,\nabla^b_XJA)=0, \]
so that $\nabla^b_XY\in\Gamma(\mathcal{D})$.
\
To sum up, we state the following theorem.
\begin{theorem}\label{nabla_b}
With notation as above, the Bismut connection $\nabla^b$ on the Vaisman manifold $(M,J,g)$ is given by:
\begin{itemize}
\item $\nabla^bA=\nabla^bJA=0$,
\item $\nabla^b_AX=[A,X]$ for any $X\in\mathfrak{X}(M)$,
\item $\nabla^b_{JA}X=[JA,X]-\varphi(X)$ for any $X\in\mathfrak{X}(M)$,
\item if $X,Y\in\Gamma(\mathcal D)$ then $\nabla^b_XY\in \Gamma(\mathcal D)$ and, moreover, $\nabla^b_XY=\nabla^g_XY-\frac12 \omega(X,Y)JA$.
\end{itemize}
\end{theorem}
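As an illustration, anticipating the examples of Section \ref{solvmanifold}, consider the $4$-dimensional Lie algebra underlying the nilmanifolds of \cite{CFL}: $\frg=\mathfrak{h}_3\oplus\R$ with orthonormal basis $\{e_1,e_2,e_3,e_4\}$, only non-zero bracket $[e_1,e_2]=e_3$, and complex structure $Je_1=e_2$, $Je_4=e_3$. Then $\omega=e^1\wedge e^2+e^4\wedge e^3$ satisfies $d\omega=\theta\wedge\omega$ with $\theta=e^4$, so that $A=e_4$, $JA=e_3$ and $\mathcal D=\operatorname{span}\{e_1,e_2\}$. Since $e_3$ and $e_4$ are central, Theorem \ref{nabla_b} gives
\[ \nabla^b e_3=\nabla^b e_4=0, \qquad \nabla^b_{e_4}=0, \qquad \nabla^b_{e_3}=-\varphi, \]
while $\nabla^b_XY=\nabla^g_XY-\frac12\omega(X,Y)e_3=0$ for $X,Y\in\{e_1,e_2\}$, since $\nabla^g_{e_1}e_2=\frac12 e_3=-\nabla^g_{e_2}e_1$ and $\nabla^g_{e_1}e_1=\nabla^g_{e_2}e_2=0$. Therefore the only non-zero curvature operator is
\[ R^b(e_1,e_2)=-\nabla^b_{[e_1,e_2]}=-\nabla^b_{e_3}=\varphi, \]
and, since each left multiplication operator $\nabla^b_x$ is a multiple of $\varphi$, Theorem \ref{AS2} yields $\mathfrak{hol}^b=\R\,\varphi$, which is one-dimensional.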
\medskip
\subsection{Curvature of $\nabla^b$}
We will use the convention $R^b(X,Y)Z=\nabla^b_X\nabla^b_YZ-\nabla^b_Y\nabla^b_XZ-\nabla^b_{[X,Y]}Z$ for the $(1,3)$-curvature tensor $R^b$ of the Bismut connection. We will denote also by $R^b$ the associated $(0,4)$-curvature tensor: $R^b(X,Y,Z,W)=g(R^b(X,Y)Z,W)$.
\
In the following result we state some symmetries of the Bismut curvature tensor on Vaisman manifolds.
\begin{lemma}\label{curvatura-JJ}
On any Vaisman manifold $(M,J,g)$, the curvature tensor $R^b$ of the Bismut connection satisfies:
\begin{enumerate}
\item[(a)] $R^b(X,Y)JZ=JR^b(X,Y)Z$,
\item[(b)] $R^b(X,Y,Z,W)=R^b(Z,W,X,Y)$,
\item[(c)] $R^b(JX,JY)=R^b(X,Y)$,
\item[(d)] $R^b(A,X)=R^b(JA,X)=0$,
\end{enumerate}
for any vector fields $X,Y,Z,W$ on $M$.
\end{lemma}
\begin{proof}
(a) holds for the Bismut connection on any Hermitian manifold, since $\nabla^bJ=0$.
\medskip
(b) holds for any metric connection with \textit{parallel} skew-symmetric torsion, according to \cite[Lemma 2.2]{CMS}. Recall that this is the case for the Bismut connection on a Vaisman manifold, due to Corollary \ref{parallel torsion}.
\medskip
(c) follows from (a) and (b). Indeed, for any vector fields $X,Y,Z,W$ on $M$, we have that
\begin{align*}
g(R^b(JX,JY)Z,W) & = R^b(JX,JY,Z,W) \\
& = R^b(Z,W,JX,JY) \\
& = g(R^b(Z,W)JX,JY) \\
& = g(R^b(Z,W)X,Y)\\
& = g(R^b(X,Y)Z,W).
\end{align*}
\medskip
(d) follows from (b). Indeed, for vector fields $X,U,V$ on $M$ we compute
\[ g(R^b(A,X)U,V)=g(R^b(U,V)A,X)=0 \]
since $A$ is $\nabla^b$-parallel. The analogous result holds for $JA$ since it is also $\nabla^b$-parallel.
\end{proof}
\
Next, we will establish an explicit relation between the Bismut curvature $R^b$ and the Riemannian curvature $R^g$. For this, we will use the following formula from \cite{IP}, which in this case has been simplified since the torsion $3$-form $c$ is $\nabla^b$-parallel:
\begin{align*}
R^b(X,Y,Z,U)= & R^g(X,Y,Z,U)+\frac12g(T^b(X,Y),T^b(Z,U))\\
& \qquad +\frac14 g(T^b(X,U),T^b(Y,Z))+\frac14 g(T^b(Y,U),T^b(Z,X)),
\end{align*}
for any vector fields $X,Y,Z,U$ on $M$. Using the expression for $T^b$ given in Corollary \ref{torsion-12}, and after lengthy computations, we arrive at:
\begin{align}
R^b(X,Y)Z & = R^g(X,Y)Z -\frac14\theta(JY) \theta(JZ)X +\frac14 \theta(JX)\theta(JZ)Y \nonumber\\
& \quad +\frac14 g(\varphi(Y),Z)JX-\frac14 g(\varphi(X),Z)JY+\frac12g(\varphi(X),Y)JZ \label{Rb-Rg}\\
& \quad +\frac14(-\omega(X,Y) \theta(JZ)+J\theta\wedge\omega(X,Y,Z))A \nonumber\\
& \quad -\frac14(J\theta\wedge\omega(X,Y,JZ)+\theta\wedge\omega(X,Y,Z))JA. \nonumber
\end{align}
Observe that, since $R^b(JA,\cdot)=0$, we obtain from \eqref{Rb-Rg} the following expression for $R^g(JA,Y)$:
\begin{equation}\label{ric-JA}
R^g(JA,Y)Z= \frac14 \theta(JZ)Y -\frac14 \theta(Y)\theta(JZ)A+\frac14 \{\theta(JY)\theta(JZ)+g(\varphi(Y),JZ) \} JA,
\end{equation}
where we have used Lemma \ref{torsion} and Corollary \ref{coro-phi}.
\
We study now some properties of the Bismut Ricci curvature $\operatorname{Ric}^b$, defined as usual by $\operatorname{Ric}^b(X,Y)=\operatorname{tr}(Z\mapsto R^b(Z,X)Y)$. The next result follows easily from Lemma \ref{curvatura-JJ}:
\begin{corollary}\label{ric-sym}
The Bismut Ricci curvature $\operatorname{Ric}^b$ of a Vaisman manifold satisfies:
\begin{enumerate}
\item[(a)] $\operatorname{Ric}^b$ is symmetric;
\item[(b)] $\operatorname{Ric}^b(JX,JY)=\operatorname{Ric}^b(X,Y)$ for any vector fields $X,Y$.
\end{enumerate}
\end{corollary}
We point out that $\operatorname{Ric}^b$ being symmetric is not a surprising fact, since it holds for any metric connection with parallel skew-symmetric torsion.
\medskip
Now, we are able to obtain an expression for $\operatorname{Ric}^b$, the Bismut Ricci curvature, in terms of the Riemannian Ricci curvature $\operatorname{Ric}^g$. Indeed, let us consider a local orthonormal frame of the form $\{A,JA\}\cup\{e_1,\ldots,e_{2n-2}\}$ where $e_i$ is a local section of $\mathcal D$ for each $i$. Therefore, for any vector fields $Y,Z$ on $M$,
\begin{align*}
\operatorname{Ric}^b(Y,Z) & =g(R^b(A,Y)Z,A)+g(R^b(JA,Y)Z,JA)+\sum_i g( R^b(e_i,Y)Z,e_i) \\
& = \sum_i g(R^b(e_i,Y)Z,e_i) ,
\end{align*}
due to Lemma \ref{curvatura-JJ}(d).
Using \eqref{Rb-Rg} we obtain
\begin{align*}
\operatorname{Ric}^b(Y,Z) & = \sum_i g(R^g(e_i,Y)Z,e_i) \\
& \qquad+\sum_i \left( -\frac14 \theta(JY)\theta(JZ)-\frac14 g(Je_i,Z)g(JY,e_i)+\frac12 g(Je_i,Y)g(JZ,e_i) \right)\\
& = \sum_i g(R^g(e_i,Y)Z,e_i)
-\frac14\sum_i \left( \theta(JY)\theta(JZ)+ g(JZ,e_i)g(JY,e_i) \right).
\end{align*}
Since $\nabla^g A=0$ we have that $R^g(A,\cdot)=0$, hence
\begin{align*}
\operatorname{Ric}^g(Y,Z) & =g(R^g(JA,Y,)Z,JA)+\sum_i g(R^g(e_i,Y)Z,e_i)\\
& =\frac14 g(\varphi(Y),JZ)+\sum_i g(R^g(e_i,Y)Z,e_i)\\
& = \frac14 (g(Y,Z)-\theta(Y)\theta(Z)-\theta(JY)\theta(JZ))+\sum_i g(R^g(e_i,Y)Z,e_i),
\end{align*}
where we have used \eqref{ric-JA} in the second equality and the definition of $\varphi$ in the third. Therefore, combining both expressions:
\begin{align}
\operatorname{Ric}^b(Y,Z) & = \operatorname{Ric}^g(Y,Z)-\frac14 (g(Y,Z)-\theta(Y)\theta(Z)-\theta(JY)\theta(JZ)) \nonumber\\
& \quad -\frac14 \left( (2n-2)\theta(JY)\theta(JZ)+g(Y,Z)-g(JZ,A)g(JY,A)-g(JZ,JA)g(JY,JA)\right) \label{ricb-ricg}\\
& = \operatorname{Ric}^g(Y,Z)-\frac12 g(Y,Z)+\frac12 \theta(Y)\theta(Z)-\frac{n-2}{2}\theta(JY)\theta(JZ). \nonumber
\end{align}
\
As expected, according to Corollary \ref{ric-sym}, $\operatorname{Ric}^b$ is symmetric since the expression above is symmetric in $Y$ and $Z$. It was proved in \cite{IP} that the symmetry of $\operatorname{Ric}^b$ is equivalent to the torsion $3$-form being co-closed, thus we obtain:
\begin{corollary}
On any Vaisman manifold, the Bismut torsion $3$-form $c$ is co-closed.
\end{corollary}
\
On the other hand, concerning the closedness of the torsion $3$-form $c$, the following result shows that $c$ is never closed in high dimensions.
\begin{proposition}\label{c not closed}
On a Vaisman manifold of dimension $2n\geq 6$, the Bismut torsion $3$-form $c$ is not closed.
\end{proposition}
\begin{proof}
The $3$-form $c$ is given by $c=-J\theta\wedge \omega$, according to Lemma \ref{torsion}. Therefore $dc$ is given by
\[ dc=-d(J\theta)\wedge \omega +J\theta\wedge d\omega=-(d(J\theta)-J\theta\wedge \theta)\wedge \omega. \]
So, if $dc=0$ then $\eta:=d(J\theta)-J\theta\wedge \theta=0$, since in dimensions at least $6$ the operator $-\wedge \omega$ is injective on $2$-forms.
However, it follows from Corollary \ref{coro-phi}(d) that
\[ d(J\theta)(A,JA)=0. \]
On the other hand,
\[(J\theta \wedge \theta)(A,JA)=J\theta(A)\theta(JA)-J\theta(JA)\theta(A)=-1. \]
Hence $\eta(A,JA)=1\neq 0$, a contradiction. As a consequence, $dc\neq 0$.
\end{proof}
\smallskip
\begin{remark}
{\rm
(i) A Hermitian metric whose associated Bismut torsion $3$-form $c$ is closed is called \textit{pluriclosed} or \textit{strong K\"ahler with torsion (SKT)}. This condition is equivalent to $\partial\overline{\partial} \omega=0$. According to Proposition \ref{c not closed}, a Vaisman metric in dimension $\geq 6$ is never pluriclosed. This result was already known in the compact case, since it was proved in \cite{AI} that on a compact Hermitian manifold of dimension at least 6, the Hermitian metric cannot be LCK and pluriclosed simultaneously, unless the metric is K\"ahler.
\smallskip (ii) Notice that according to \cite[Theorem A]{FT}, if $(M,J,g)$ is a $4$-dimensional Vaisman manifold then the Hermitian structure $(J,g)$ is pluriclosed and $\nabla^b$ satisfies the first Bianchi identity. In particular, in real dimension $4$ the torsion $3$-form is harmonic. However, more can be said: $c$ is also $\nabla^g$-parallel, which can be seen from the relation $c=-\ast \theta$, proved in \cite{IP}, which holds for any 4-dimensional LCK manifold. Belgun provided in \cite{Bel} the classification of compact complex surfaces which admit Vaisman metrics: they are properly elliptic surfaces, Kodaira surfaces (either primary or secondary), elliptic Hopf surfaces and Hopf surfaces of class $1$.
\smallskip
(iii) On Vaisman manifolds of dimension greater than or equal to 6, according to Corollary~\ref{parallel torsion}, Proposition~\ref{c not closed} and \cite[Theorem 3.2]{FT}, the Bismut connection does not satisfy the first Bianchi identity, and therefore it is not K\"ahler-like. However, due to Lemma~\ref{curvatura-JJ}(c), the Bismut connection satisfies the \emph{type condition} (see for instance \cite{AOUV})}.
\end{remark}
\
\section{Bismut holonomy of Vaisman solvmanifolds}\label{solvmanifold}
In this section we will study the Bismut holonomy of a concrete family of Vaisman manifolds; namely, solvmanifolds equipped with invariant Vaisman structures. We will call them simply Vaisman solvmanifolds. In order to perform this analysis, we will use the results appearing in \cite{AO}.
\
Let $G$ be a Lie group with a left invariant complex structure $J$ and a left invariant metric $g$, i.e. the left translations $L_g:G\to G$ defined by $L_g(h)=gh$ for $h\in G$ are both biholomorphisms and isometries. If $(G,J,g)$ satisfies the LCK condition \eqref{dif}, then $(J,g)$ is called a
{\em left invariant LCK structure} on the Lie group $G$. In this case, it follows from \eqref{d-theta} that the corresponding Lee form $\theta$ on $G$ is also left invariant.
We will restrict our study to solvable Lie groups equipped with left invariant Vaisman structures. If the solvable Lie group $G$ is simply connected then any left invariant Vaisman structure on $G$ turns out to be globally conformal to a K\"ahler structure. Therefore we will consider quotients $M_\Gamma:=\Gamma\backslash G$ where $\Gamma$ is a co-compact discrete subgroup of $G$, so that $M_\Gamma$ is a compact manifold such that the canonical projection $G\to M_\Gamma$ is a local diffeomorphism. The compact quotient $M_\Gamma$ is not simply connected (as $\pi_1(M_\Gamma)=\Gamma$) and it inherits a Vaisman structure. The aim of this section is to analyze the holonomy of the Bismut connection on $M_\Gamma$ associated to this induced structure.
A co-compact discrete subgroup $\Gamma$ of a simply connected solvable Lie group $G$ is called a lattice and the quotient $M_\Gamma=\Gamma\backslash G$ is known as a solvmanifold. We point out that, according to \cite{Mi}, if $G$ admits a lattice then $G$ is unimodular (i.e., $\operatorname{tr} \operatorname{ad}_x=0$ for all $x\in \operatorname{Lie}(G)$).
\
Since we are dealing with left invariant structures on Lie groups, we can work at the Lie algebra level. Therefore we will consider LCK or Vaisman structures on Lie algebras, that is, a Hermitian structure $(J,\langle \cdotp,\cdotp \rangle )$ on a Lie algebra $\frg$, where $\langle \cdotp,\cdotp \rangle $ is an inner product on $\frg$ and $J:\frg\to\frg$ is a skew-symmetric endomorphism of $\frg$ that satisfies
\[ J^2=-\operatorname{I}, \quad \text{and} \quad [Jx,Jy]-[x,y]-J([Jx,y]+[x,Jy])=0,\] for any $x,y\in \frg$. Moreover, $d\omega=\theta\wedge \omega$ for some closed $1$-form $\theta\in \frg^*$, and $\nabla^g\theta= 0$ in the Vaisman case.
As before, let $A\in \frg$ denote the vector dual to $\theta$, i.e., $\theta(U)=\langle A,U\rangle$ for all $U\in\frg$. We may assume $|A|=1$. In this context, Proposition \ref{propiedades-Vaisman} takes the following form:
\begin{proposition}
If $(\frg,J,\langle \cdotp,\cdotp \rangle )$ is Vaisman then
\begin{enumerate}
\item[(a)] $[A,JA]=0$;
\item[(b)] $\operatorname{ad}_A$ and $\operatorname{ad}_{JA}$ are skew-symmetric;
\item[(c)] $J\circ\operatorname{ad}_A=\operatorname{ad}_A\circ J$.
\end{enumerate}
\end{proposition}
\
Solvable Lie groups equipped with left invariant Vaisman structures, and their associated Vaisman solvmanifolds, were studied in \cite{AO}. We will recall some of the results from that article that will be needed for our study.
\medskip
\begin{lemma}\label{lem-centro}\cite{AO}
Let $\frg$ be a unimodular solvable Lie algebra equipped with a Vaisman structure $(J,\langle \cdotp,\cdotp \rangle )$ and let $\mathfrak{z}(\frg)$ denote the center of $\frg$. Then $JA\in\mathfrak{z}(\frg)$. Moreover $\frz(\frg)\subset \operatorname{span}\{A,JA\}$.
\end{lemma}
\medskip
The subspace $\ker \theta$ is in fact an ideal of $\frg$, since $\theta$ is closed, and $JA\in\ker\theta$. Denoting $\frk:=(\text{span}\{A,JA\})^\perp$ (which plays the role of $\mathcal{D}$ in Section 3), we have a decomposition
\[\ker\theta=\R JA \stackrel{\perp}{\oplus}\mathfrak{k}. \]
For $x,y\in\frk$, we have that $[x,y]\in \ker\theta$ and it can be proved that
\begin{equation}\label{ka}
[x,y]=\omega(x,y)JA + [x,y]_\frk,
\end{equation}
where $[x,y]_\frk$ is the component in $\frk$ of $[x,y]$.
\medskip
It follows from \cite{AO} that $[\cdot,\cdot]_\mathfrak{k}$ is a Lie bracket on $\frk$ and, moreover, $(\mathfrak{k},[\cdot,\cdot]_\mathfrak{k},J|_\mathfrak{k},\langle \cdotp,\cdotp \rangle |_\mathfrak{k})$ is a K\"ahler Lie algebra. Therefore $\ker\theta$ is a $1$-dimensional central extension of $(\frk,[\cdot,\cdot]_\mathfrak{k})$: $\ker\theta=\R JA\oplus_\omega \mathfrak{k}$.
Moreover, since $\frg$ is unimodular we have that $\frk$ is unimodular as well. Due to a classical result of Hano \cite{Hano}, it follows that $\langle \cdotp,\cdotp \rangle |_\frk$ is flat. The main result in \cite{AO} is:
\begin{theorem}\cite{AO}
If $(\frg,J,\langle \cdotp,\cdotp \rangle )$ is Vaisman with $\frg$ unimodular and solvable, then:
\[ \frg=\R A\ltimes (\R JA\oplus_\omega \frk) , \]
where:
\begin{itemize}
\item $\operatorname{ad}_A$ is a skew-symmetric derivation of $\ker\theta=\R JA\oplus_\omega \mathfrak{k}$ with $\operatorname{ad}_A(JA)=0$;
\item $(\mathfrak{k},J_{\mathfrak k}, \langle \cdotp,\cdotp \rangle |_\mathfrak{k})$ is a K\"ahler flat Lie algebra;
\item $D:=\operatorname{ad}_A|_{\mathfrak k}$ is a skew-symmetric derivation of $(\mathfrak{k},\langle \cdotp,\cdotp \rangle |_\mathfrak{k})$ which commutes with $J|_{\mathfrak k}$ (i.e. $D\in\mathfrak{u}(\mathfrak k)$).
\end{itemize}
\end{theorem}
\medskip
\begin{example}
{\rm In \cite{AO} many examples of unimodular solvable Lie algebras were provided. We recall here one such family of examples. Let us consider the Lie algebras $\frg$ with basis $\{ A,B, e_1,\dots,e_{2n-2}\}$ and Lie bracket given by
\[
[A,e_{2i-1}]=a_i e_{2i}, \quad [A,e_{2i}]=-a_i e_{2i-1}, \quad [e_{2i-1},e_{2i}]=B, \quad i=1,\ldots, n-1,
\]
for some $a_i\in \R$. Let $\langle \cdotp,\cdotp \rangle $ denote the inner product on $\frg$ such that the basis above is orthonormal, and let $J$ denote the skew-symmetric complex structure on $\frg$ given by
\[ JA=B, \quad Je_{2i-1}=e_{2i}, \quad i=1,\ldots, n-1. \]
Then it is easy to verify that the Hermitian structure $(J,\langle \cdotp,\cdotp \rangle )$ is Vaisman, where the Lee form $\theta$ is the metric dual of $A$: $\theta(\cdot)=\langle A,\cdot \, \rangle$. Note that $\ker \theta=\text{span}\{B, e_1,\dots,e_{2n-2} \}$ is isomorphic to the $(2n-1)$-dimensional Heisenberg Lie algebra $\frh_{2n-1}$ (so that $\frg=\R\ltimes \frh_{2n-1}$), and the subspace $\frk=\text{span}\{ e_1,\dots,e_{2n-2}\}$, equipped with the Lie bracket $[\cdot,\cdot]_\mathfrak{k}$, is an abelian Lie algebra (which is clearly a flat K\"ahler Lie algebra equipped with the restrictions of $(J,\langle \cdotp,\cdotp \rangle )$).
It was also shown in \cite{AO} that whenever $a_i\in\Q$ for every $i$ the corresponding simply connected Lie group admits lattices. If $a_i=0$ for all $i$, then $\frg$ is the direct product $\frg=\R\times \frh_{2n-1}$, with the well-known Vaisman structure given in \cite{CFL}.
}
\end{example}
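The algebraic identities underlying this family can be double-checked numerically. The following sketch (the parameter value $a_1=0.7$ and the lowest-dimensional case $n=2$ are arbitrary choices made for the test) verifies the Jacobi identity, the integrability of $J$, and the properties listed in the proposition above:

```python
import numpy as np

# Structure constants of the example with n = 2, basis (A, B, e1, e2);
# the value a = 0.7 is an arbitrary test choice for the parameter a_1.
a = 0.7
A, B, e1, e2 = 0, 1, 2, 3
C = np.zeros((4, 4, 4))
C[A, e1, e2] = a          # [A, e1] = a e2
C[A, e2, e1] = -a         # [A, e2] = -a e1
C[e1, e2, B] = 1.0        # [e1, e2] = B
C = C - np.transpose(C, (1, 0, 2))   # antisymmetry of the bracket

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

E = np.eye(4)

# Jacobi identity
for x in E:
    for y in E:
        for z in E:
            jac = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
                   + bracket(z, bracket(x, y)))
            assert np.allclose(jac, 0)

# complex structure: JA = B, J e1 = e2 (columns are images of basis vectors)
J = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
assert np.allclose(J @ J, -np.eye(4))

# integrability: [Jx, Jy] - [x, y] - J([Jx, y] + [x, Jy]) = 0
for x in E:
    for y in E:
        nij = bracket(J@x, J@y) - bracket(x, y) - J @ (bracket(J@x, y) + bracket(x, J@y))
        assert np.allclose(nij, 0)

# Proposition: [A, JA] = 0, ad_A skew-symmetric, ad_A commutes with J
adA = np.array([bracket(E[A], v) for v in E]).T
assert np.allclose(bracket(E[A], J @ E[A]), 0)
assert np.allclose(adA + adA.T, 0)
assert np.allclose(adA @ J - J @ adA, 0)
print("Vaisman example: all structure checks passed")
```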
\
We compute next the Bismut connection on unimodular solvable Lie algebras equipped with Vaisman structures, using Theorem \ref{nabla_b}. We denote by $\nabla^\frk$ the (flat) Levi-Civita connection on the K\"ahler Lie algebra $\frk$. Recall the skew-symmetric operator $\varphi$ defined in \eqref{tensor-phi}; it satisfies $\varphi(A)=\varphi(JA)=0$ and $\varphi(x)= Jx$ for $x\in\frk$.
%
\begin{lemma}\label{bismut-g}
The Bismut connection $\nabla^b$ on $\frg$ is given as follows:
\begin{itemize}
\item $\nabla^bA=\nabla^b JA=0$,
\item $\nabla^b_Ax=[A,x] \in \frk$ for any $x\in\frg$,
\item $\nabla^b_{JA}x=-\varphi(x)$ for any $x\in\frg$,
\item $\nabla^b_xy=\nabla^\frk_xy\in \frk$ for any $x,y\in\frk$.
\end{itemize}
\end{lemma}
\begin{proof}
The first three items follow directly from Theorem \ref{nabla_b}, recalling that $JA$ is a central element of $\frg$, due to Lemma \ref{lem-centro}. As for the fourth, we compute $\nabla^g_xy$ for $x,y\in\frk$. Since $\nabla^gA=0$, we have that
\[ \langle\nabla^g_xy,A\rangle=-\langle y,\nabla^g_x A\rangle=0. \]
On the other hand, we know that $\nabla^b_xy\in \frk$ (Theorem \ref{nabla_b}) and it follows from \eqref{XYenD} that
\[ \langle\nabla^g_xy,JA\rangle=\frac12 \omega(x,y).\]
For $z\in\frk$, we have
\begin{align*}
\langle\nabla^g_xy,z\rangle & = \frac12\{\langle [x,y],z\rangle -\langle [y,z],x\rangle+\langle [z,x],y\rangle\} \\
& = \frac12\{\langle[x,y]_\frk,z\rangle-\langle [y,z]_\frk,x\rangle+\langle [z,x]_\frk,y\rangle\} \quad \text{(using \eqref{ka})}\\
& = \langle \nabla^\frk_xy,z\rangle.
\end{align*}
Therefore, $\nabla^g_xy=\frac12 \omega(x,y)JA+\nabla^\frk_xy$ for any $x,y\in\frk$. Comparing with Theorem \ref{nabla_b} we obtain $\nabla^b_xy=\nabla^\frk_xy$, $x,y\in\frk$.
\end{proof}
\
Finally, we are able to compute the curvature tensor $R^b$ of the Bismut connection on $\frg$ in terms of the endomorphism $\varphi$.
\begin{theorem}\label{curvature}
If $R^b$ denotes the curvature tensor of the Bismut connection, then $R^b$ is given by
\[ R^b(u,v)=\langle \varphi( u),v\rangle \varphi, \qquad u,v\in\frg.\]
\end{theorem}
\begin{proof}
Note first that $R^b(u,v)A=R^b(u,v)JA=0$, since both $A$ and $JA$ are $\nabla^b$-parallel. Therefore, each $R^b(u,v)$ is determined by its values on elements of $\frk$.
Next, recall that $R^b(A,\cdot)=R^b(JA,\cdot)=0$, according to Lemma \ref{curvatura-JJ}(d). Thus, it suffices to compute $R^b(x,y)z$ for $x,y,z\in\frk$. First note that, according to \eqref{ka},
\[ \nabla^b_{[x,y]}z=\nabla^b_{[x,y]_\frk}z+\omega(x,y)\nabla^b_{JA}z = \nabla^b_{[x,y]_\frk}z -\omega(x,y)Jz=\nabla^\frk_{[x,y]_\frk}z -\omega(x,y)Jz, \]
where we have used Lemma \ref{bismut-g} in the last equality. Hence we have that
\begin{align*}
R^b(x,y)z & = \nabla^b_x\nabla^b_y z-\nabla^b_y\nabla^b_x z - \nabla^b_{[x,y]}z \\
& = \nabla^\frk_x\nabla^\frk_y z-\nabla^\frk_y\nabla^\frk_x z-\nabla^\frk_{[x,y]_\frk}z +\omega(x,y)Jz \\
& = R^\frk(x,y)z+\langle Jx,y\rangle Jz \\
& = \langle Jx,y\rangle Jz
\end{align*}
since $\nabla^\frk$ is flat. The result follows.
\end{proof}
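As an independent sanity check of the theorem, $R^b$ can be computed directly from the connection of Lemma \ref{bismut-g} on the four-dimensional example above and compared with $\langle \varphi(u),v\rangle\varphi$; the parameter value below is an arbitrary test choice.

```python
import numpy as np

# Example algebra with basis (A, JA, e1, e2); a = 0.7 is an arbitrary choice.
a = 0.7
A, B, e1, e2 = 0, 1, 2, 3
C = np.zeros((4, 4, 4))
C[A, e1, e2], C[A, e2, e1], C[e1, e2, B] = a, -a, 1.0
C = C - np.transpose(C, (1, 0, 2))
bracket = lambda x, y: np.einsum('i,j,ijk->k', x, y, C)

# phi vanishes on span{A, JA} and acts as J on k = span{e1, e2}
phi = np.zeros((4, 4))
phi[e2, e1], phi[e1, e2] = 1.0, -1.0

# Bismut connection (Lemma above): nabla^b_A = ad_A, nabla^b_{JA} = -phi,
# and nabla^b vanishes on k, since k is abelian and hence nabla^k = 0.
adA = np.array([bracket(np.eye(4)[A], v) for v in np.eye(4)]).T
N = [adA, -phi, np.zeros((4, 4)), np.zeros((4, 4))]
nabla = lambda x: sum(x[i] * N[i] for i in range(4))

def Rb(x, y):
    return nabla(x) @ nabla(y) - nabla(y) @ nabla(x) - nabla(bracket(x, y))

# Theorem: R^b(u, v) = <phi(u), v> phi
E = np.eye(4)
for u in E:
    for v in E:
        assert np.allclose(Rb(u, v), ((phi @ u) @ v) * phi)
print("curvature identity R^b(u,v) = <phi(u),v> phi verified")
```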
\medskip
\begin{corollary}\label{endo}
Any curvature endomorphism $R^b(u,v)$, $u,v\in\frg$, is parallel with respect to~$\nabla^b$.
\end{corollary}
\begin{proof}
This is a straightforward consequence of Theorem \ref{curvature} and Corollary \ref{phi paralelo}.
\end{proof}
\
Regarding the holonomy group of the Bismut connection of a Vaisman solvmanifold, we have
\begin{theorem}\label{phi}
If $M=\Gamma\backslash G$ is a $2n$-dimensional Vaisman solvmanifold then its holonomy group $\operatorname{Hol}^b(M)$ has dimension $1$ and it is not contained in $\operatorname{SU}(n)$.
\end{theorem}
\begin{proof}
The restricted holonomy group $\operatorname{Hol}^b_0(M)$ coincides with the holonomy group $\operatorname{Hol}^b(G)$. According to Theorem \ref{AS2}, its Lie algebra $\mathfrak{hol}^b(M)$ is generated by all the curvature endomorphisms $R^b(u,v)$, $u,v\in\frg$, together with their covariant derivatives of any order. Therefore, it follows from Theorem \ref{curvature} and Corollary \ref{endo} that $\mathfrak{hol}^b(M)$ is spanned by $\varphi$, and hence it is one-dimensional.
Moreover, in an adapted basis $\{A,JA,e_1,f_1,\ldots,e_{n-1},f_{n-1}\}$ with $Je_i=f_i$, we have that the matrix of $\varphi$ is given by
\[ \varphi=\left(\begin{array}{ccccccc}
0 & 0 & & & & & \\
0 & 0 & & & & & \\
& & 0 & -1 & & & \\
& & 1 & 0 & & & \\
& & & & \ddots & & \\
& & & & & 0 & -1\\
& & & & & 1 & 0 \\
\end{array}\right)\in\mathfrak{u}(n),\]
but it is clear that $\varphi\notin \mathfrak{su}(n)$.
\end{proof}
\
Moreover, a result stronger than Corollary \ref{endo} can also be obtained as a consequence of Theorem \ref{curvature}:
\begin{proposition}
On any Vaisman solvmanifold $\Gamma\backslash G$, the Bismut curvature tensor $R^b$ is $\nabla^b$-parallel: $\nabla^bR^b=0$.
\end{proposition}
\begin{proof}
This is an immediate consequence of Theorem \ref{curvature}. Indeed, we need only verify that $(\nabla^b_xR^b)(y,z)w=0$ for any $x,y,z,w\in\frg$. We compute
\begin{align*}
(\nabla^b_xR^b)(y,z)w & = \nabla^b_x(R^b(y,z)w)-R^b(\nabla^b_xy,z)w-R^b(y,\nabla^b_xz)w-R^b(y,z)(\nabla^b_xw)\\
& = \langle\varphi(y),z\rangle \nabla^b_x\varphi(w)-\langle \varphi \nabla^b_xy,z\rangle \varphi(w) - \langle \varphi(y),\nabla^b_xz\rangle \varphi(w) - \langle \varphi(y),z\rangle \varphi \nabla^b_xw.
\end{align*}
The first and the last terms cancel out since $\varphi$ is $\nabla^b$-parallel, and the second and third terms also cancel out, since
\[ \langle \varphi \nabla^b_xy,z\rangle=\langle \nabla^b_x\varphi(y),z\rangle=-\langle \varphi(y),\nabla^b_x z \rangle. \]
This completes the proof.
\end{proof}
\begin{remark}
{\rm According to \cite{AP}, the Bismut connection on a Vaisman solvmanifold is a \textit{Hermitian Ambrose-Singer connection}, since $\nabla^bT^b=0$ and $\nabla^bR^b=0$. In particular, any Vaisman solvmanifold is a locally homogeneous Hermitian space \cite{Ki,Se}. However, this is true for any solvmanifold $M:=\Gamma\backslash G$ equipped with an invariant almost Hermitian structure $(J,g)$, since the connection on $G$ defined by $\nabla_x y=0$ for any $x,y\in\frg=\operatorname{Lie}(G)$ induces a connection $\nabla$ on $M$ satisfying $\nabla J=\nabla g=0$.
}
\end{remark}
\medskip
Concerning the Bismut Ricci curvature of a Vaisman solvmanifold, we have the following straightforward consequence of Theorem \ref{curvature}:
\begin{corollary}
The Bismut Ricci curvature of a Vaisman solvmanifold $\Gamma\backslash G$ is given by
\[
\operatorname{Ric}^b(u,v)=-\langle u,v\rangle+\theta(u)\theta(v)+\theta(Ju)\theta(Jv), \quad u,v\in\frg. \]
In particular, $\operatorname{Ric}^b\neq 0$.
\end{corollary}
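The trace computation behind this corollary can be checked numerically on the four-dimensional example, using the curvature formula of Theorem \ref{curvature}; this is a sketch, with a conventional choice of trace $\operatorname{Ric}^b(u,v)=\sum_i \langle R^b(E_i,u)v,E_i\rangle$.

```python
import numpy as np

# Trace check of the Ricci formula on the four-dimensional example.
A, B, e1, e2 = 0, 1, 2, 3
phi = np.zeros((4, 4))
phi[e2, e1], phi[e1, e2] = 1.0, -1.0           # phi = J on k, 0 on span{A, JA}
Rb = lambda u, v: ((phi @ u) @ v) * phi        # Theorem: R^b(u,v) = <phi(u),v> phi

E = np.eye(4)
def ric_b(u, v):
    # Ric^b(u, v) = sum_i <R^b(E_i, u) v, E_i>
    return sum((Rb(E[i], u) @ v) @ E[i] for i in range(4))

theta = lambda u: u[A]                         # theta is the metric dual of A
J = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)

for u in E:
    for v in E:
        rhs = -u @ v + theta(u) * theta(v) + theta(J @ u) * theta(J @ v)
        assert np.isclose(ric_b(u, v), rhs)
print("Ricci formula verified")
```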
\smallskip
Using \eqref{ricb-ricg} we are able to determine the Riemannian Ricci curvature of a $2n$-dimensional Vaisman solvmanifold $\Gamma\backslash G$:
\[ \operatorname{Ric}^g(u,v)=-\frac12 \langle u,v\rangle+\frac12 \theta(u)\theta(v)+\frac{n}{2}\theta(Ju)\theta(Jv), \quad u,v\in\frg.\]
\
For a general non-Vaisman LCK solvmanifold, we cannot expect a reduction of the holonomy of the Bismut connection, as the following example shows.
\begin{example}\label{example}
{\rm
Let $G$ be the simply connected solvable Lie group with Lie algebra $\frg$ generated by $\{e_1,e_2,e_3,e_4\}$ with non-zero brackets given by
\[ [e_1,e_2]=\mu e_2, \quad [e_1,e_3]=-\frac{\mu}{2}e_3+y e_4, \quad [e_1,e_4]=-ye_3-\frac{\mu}{2}e_4, \]
for some $\mu\neq 0$ and $y\in\R$. Note that $G$ is an almost abelian Lie group; it was proved in \cite{AO1} that for certain values of $\mu$ and $y$ the Lie group $G$ admits lattices. The associated solvmanifolds are Inoue surfaces of type $S^0$.
Consider on $\frg$ the inner product $\langle \cdotp,\cdotp \rangle $ such that the basis above is orthonormal and the endomorphism $J:\frg\to\frg$ given by $Je_1=e_2,\, Je_3=e_4,\, J^2=-\operatorname{Id}$. It is easy to verify that the almost complex structure $J$ is integrable and hence $(J,\langle \cdotp,\cdotp \rangle )$ determines a Hermitian structure on $\frg$ with associated fundamental $2$-form $\omega$ given by $\omega=e^{12}+e^{34}$. Here, $\{e^1, e^2, e^3, e^4\}$ is the dual basis of $\{e_1, e_2, e_3, e_4\}$ and $e^{ij}$ stands for the wedge product $e^i\wedge e^j$. Note that $d\omega=\mu e^1\wedge \omega$, which means that $(J,\langle \cdotp,\cdotp \rangle )$ is LCK since $\mu\neq 0$ and $de^1=0$. Clearly, the Lee form $\theta$ is $\theta=\mu e^1$.
Computing the Bismut connection on $\frg$ using \eqref{bismut}, we obtain
\begin{gather*}
\nabla^b_{e_1}= \begin{pmatrix} \\ \\ && 0& -y \\ && y & 0 \end{pmatrix}, \qquad \nabla^b_{e_2}= \begin{pmatrix} 0 &\mu && \\ -\mu & 0 &&\\ && 0& \frac{\mu}{2} \\ && -\frac{\mu}{2} & 0 \end{pmatrix}, \\
\nabla^b_{e_3}= \begin{pmatrix} && -\frac{\mu}{2} & 0 \\ &&0&-\frac{\mu}{2}\\ \frac{\mu}{2}&0 & & \\ 0 & \frac{\mu}{2} & & \end{pmatrix},
\qquad \nabla^b_{e_4}= \begin{pmatrix} &&0&-\frac{\mu}{2}\\ &&\frac{\mu}{2} &0 \\ 0& -\frac{\mu}{2} & & \\ \frac{\mu}{2}&0 & & \end{pmatrix}.
\end{gather*}
Note that $\nabla^b \theta\neq 0$, so that the LCK metric is not Vaisman. The curvature endomorphisms $R^b(e_i,e_j)$ are given by
\begin{gather*}
R^b(e_1,e_2)= \begin{pmatrix} 0 & -\mu^2 &&\\ \mu^2 & 0 && \\ && 0 & -\frac{\mu^2}{2} \\ && \frac{\mu^2}{2} & 0 \end{pmatrix}, \quad
R^b(e_1,e_3)=-R^b(e_2,e_4)= \begin{pmatrix} && -\frac{\mu^2}{4} & 0 \\ && 0 &-\frac{\mu^2}{4}\\ \frac{\mu^2}{4}& 0 & & \\ 0 & \frac{\mu^2}{4} & & \end{pmatrix} \\
R^b(e_1,e_4)=R^b(e_2,e_3)=
\begin{pmatrix} && 0 &-\frac{\mu^2}{4}\\ &&\frac{\mu^2}{4} & 0 \\ 0 & -\frac{\mu^2}{4} & & \\ \frac{\mu^2}{4}& 0 & & \end{pmatrix},
\quad R^b(e_3,e_4)=\begin{pmatrix} 0 &\frac{\mu^2}{2} && \\ -\frac{\mu^2}{2} & 0 &&\\ && 0 & -\frac{\mu^2}{2} \\ && \frac{\mu^2}{2} & 0 \end{pmatrix}.
\end{gather*}
Since $\mu\neq 0$, it follows easily that all these curvature endomorphisms are linearly independent, hence the subspace $\text{span}\{R^b(e_i,e_j)\mid i<j \}$ of $\mathfrak{u}(2)$ has dimension $4$. According to Theorem \ref{AS2}, we have that $\mathfrak{hol}^b=\mathfrak{u}(2)$. There is no reduction of the holonomy in this case.
\medskip
This example will be generalized in Section \ref{section-OT}.
}
\end{example}
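The rank computation in this example can be reproduced numerically from the displayed connection matrices; the values $\mu=1$, $y=1/2$ below are arbitrary test choices with $\mu\neq 0$.

```python
import numpy as np

mu, y = 1.0, 0.5   # arbitrary test values with mu != 0

# structure constants of g = span{e1, e2, e3, e4}
C = np.zeros((4, 4, 4))
C[0, 1, 1] = mu                      # [e1, e2] = mu e2
C[0, 2, 2], C[0, 2, 3] = -mu/2, y    # [e1, e3] = -mu/2 e3 + y e4
C[0, 3, 2], C[0, 3, 3] = -y, -mu/2   # [e1, e4] = -y e3 - mu/2 e4
C = C - np.transpose(C, (1, 0, 2))
bracket = lambda u, v: np.einsum('i,j,ijk->k', u, v, C)

# Bismut connection matrices as displayed above (columns = images of e_j)
N = [np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -y], [0, 0, y, 0]]),
     np.array([[0, mu, 0, 0], [-mu, 0, 0, 0], [0, 0, 0, mu/2], [0, 0, -mu/2, 0]]),
     np.array([[0, 0, -mu/2, 0], [0, 0, 0, -mu/2], [mu/2, 0, 0, 0], [0, mu/2, 0, 0]]),
     np.array([[0, 0, 0, -mu/2], [0, 0, mu/2, 0], [0, -mu/2, 0, 0], [mu/2, 0, 0, 0]])]
nabla = lambda u: sum(u[i] * N[i] for i in range(4))

def Rb(u, v):
    return nabla(u) @ nabla(v) - nabla(v) @ nabla(u) - nabla(bracket(u, v))

E = np.eye(4)
mats = np.array([Rb(E[i], E[j]).ravel() for i in range(4) for j in range(i+1, 4)])
assert np.linalg.matrix_rank(mats) == 4       # hol^b = u(2), no reduction
print("span of curvature endomorphisms has dimension 4")
```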
\
\section{Bismut holonomy of Hopf manifolds}\label{Hopf}
In this section we determine explicitly the holonomy group of the Bismut connection on some classical \textit{Hopf manifolds}, which are the archetypical examples of compact Vaisman manifolds. These are defined as a quotient of $\C^n-\{0\}$, $n\geq 2$, by the action of the cyclic group generated by the transformation $z\mapsto \lambda z$, for some $\lambda \in\C$ with $|\lambda|>1$. The underlying smooth manifold is $S^1\times S^{2n-1}$; hence the first Betti number is $b_1=1$, and therefore these manifolds cannot admit a K\"ahler metric. We also point out that for different values of $\lambda$ the corresponding compact complex manifolds are non-biholomorphic (this can be seen with the same arguments used in \cite{KS} for the case of Hopf surfaces, i.e., for $n=2$).
\smallskip
We will describe in more detail their construction for $\lambda\in\R$, $\lambda>1$, exhibiting in this case an explicit parallelization of $S^1\times S^{2n-1}$, generalizing the one given in \cite{Par1} (and at greater length in \cite{Par2}) for $\lambda=e^{2\pi}$. It will be easy to express the usual Vaisman structure in terms of this parallelization, and using this expression we will be able to show that the associated Bismut holonomy group is equal to $\operatorname{U}(n-1)$, which is the largest possible holonomy group, according to Corollary~\ref{hol}.
\smallskip
\begin{remark}
{\rm These classical Hopf manifolds are not homeomorphic to a solvmanifold for $n>1$. Indeed, the universal cover of a $2n$-dimensional solvmanifold is homeomorphic to $\R^{2n}$, whereas the universal cover of $S^1\times S^{2n-1}$ is $\R^{2n}-\{0\}$. }
\end{remark}
\
\subsection{Revisiting the construction of a Vaisman structure on Hopf manifolds}
Let us consider $\R^{2n}$ with the usual Cartesian coordinates $(x_1,\ldots,x_{2n})$. Let us denote by $N$ the unit normal vector field of $S^{2n-1}\subset \R^{2n}$, that is, $N=\sum_{i=1}^{2n} x_i\frac{\partial}{\partial x_i}$. For any $i=1,\ldots,2n$, the orthogonal projection of $\frac{\partial}{\partial x_i}$ on $S^{2n-1}$ gives a vector field $T_i$ on $S^{2n-1}$, which can be expressed as $T_i=\frac{\partial}{\partial x_i}-x_i N$. The vector field $T_i$ is called the $i^{th}$ meridian vector field on $S^{2n-1}$.
Let $\lambda$ be a real number, $\lambda>1$. Let $\Gamma_\lambda$ be the cyclic infinite group of transformations of $\R^{2n}-\{0\}$ generated by the map $x \mapsto \lambda x$. Then the projection $p_\lambda:\R^{2n}-\{0\} \to S^1\times S^{2n-1}$ given by \[ p_\lambda(x)=\left(\exp\left(2\pi i\,\frac{\log|x|}{\log \lambda}\right),\frac{x}{|x|}\right) \]
induces a diffeomorphism between $(\R^{2n}-\{0\})/\Gamma_\lambda$ and $S^1\times S^{2n-1}$. Consider the smooth function $F:\R^{2n}-\{0\}\to \R$ given by $F(x)=|x|$. Then the vector fields $F\frac{\partial}{\partial x_i}$, $i=1,\ldots,2n$, are $\Gamma_\lambda$-invariant\footnote{Here we are using $\lambda >0$; this does not hold for general $\lambda\in \C$.} and therefore the vector fields $U_i^\lambda$ on $S^1\times S^{2n-1}$ given by $U_i^\lambda=(p_\lambda)_\ast(F\frac{\partial}{\partial x_i})$ are well-defined and, moreover, $\{U_1^\lambda,\ldots,U_{2n}^\lambda\}$ defines a parallelization of $S^1\times S^{2n-1}$. In terms of the meridian vector fields $T_i$, it can be shown, suitably modifying the computations in \cite{Par2}, that $U_i^\lambda=T_i+\frac{2\pi}{\log \lambda}x_i\frac{\partial}{\partial t}$, where $t$ denotes the usual coordinate on $S^1$. It follows that
\begin{gather*}
[U_i^\lambda,U_j^\lambda]= x_iU_j^\lambda-x_jU_i^\lambda, \\
U_i^\lambda(x_j)=\delta_{ij}-x_ix_j,
\end{gather*}
for $i,j=1,\ldots,2n$. Here we are considering the functions $x_j\in C^\infty(S^1\times S^{2n-1})$ defined as $x_j(e^{it},(p_1,\ldots, p_{2n}))=p_j$, for $j=1,\ldots, 2n$ and $(p_1,\ldots,p_{2n})\in S^{2n-1}$. In particular, $\sum_{j=1}^{2n} x_j^2=1$. Observe that differentiating this expression we get
\begin{equation}\label{dxj}
\sum_{j=1}^{2n} x_j dx_j=0.
\end{equation}
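The bracket relations above can be confirmed symbolically, working upstairs in $\R^{2n}-\{0\}$ with the $\Gamma_\lambda$-invariant fields $F\frac{\partial}{\partial x_i}$; the sketch below treats the case $n=2$, which is representative, and the identity obtained descends to the stated relation for the $U_i^\lambda$.

```python
import sympy as sp

# Coordinates on R^4 - {0} (the case n = 2; the computation is identical for any n).
x = sp.symbols('x1:5', real=True)
F = sp.sqrt(sum(xi**2 for xi in x))

# V_i = F d/dx_i, encoded by its component functions V[i][k] = F delta_{ik}
V = [[F if k == i else sp.Integer(0) for k in range(4)] for i in range(4)]

def lie_bracket(Vi, Vj):
    # [Vi, Vj]^k = sum_m ( Vi^m d(Vj^k)/dx_m - Vj^m d(Vi^k)/dx_m )
    return [sp.simplify(sum(Vi[m]*sp.diff(Vj[k], x[m]) - Vj[m]*sp.diff(Vi[k], x[m])
                            for m in range(4))) for k in range(4)]

for i in range(4):
    for j in range(4):
        lhs = lie_bracket(V[i], V[j])
        # expected: (x_i/F) V_j - (x_j/F) V_i, which descends to x_i U_j - x_j U_i
        rhs = [x[i]/F * V[j][k] - x[j]/F * V[i][k] for k in range(4)]
        assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
print("bracket relations [U_i, U_j] = x_i U_j - x_j U_i verified")
```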
If $\langle \cdotp,\cdotp \rangle $ denotes the usual metric on $\R^{2n}$, then the Riemannian metric $\frac{1}{F^2}\langle \cdot,\cdot\rangle$ on $\R^{2n}-\{0\}$ is $\Gamma_\lambda$-invariant and then it induces a Riemannian metric $g_\lambda$ on $S^1\times S^{2n-1}$. It can be easily seen that $g_\lambda$ coincides with the product metric $g_\lambda=\left(\frac{\log\lambda}{2\pi}\right)^2g_{S^1}+g_{S^{2n-1}}$, where $g_{S^k}$ denotes the round metric on $S^k$.
On $\R^{2n}-\{0\}$ there is the canonical complex structure given by
\[ J\left(\frac{\partial}{\partial x_{2i-1}}\right) = \frac{\partial}{\partial x_{2i}},\quad J\left(\frac{\partial}{\partial x_{2i}}\right) = -\frac{\partial}{\partial x_{2i-1}}, \quad i=1,\ldots,n. \]
This complex structure is $\Gamma_\lambda$-invariant and therefore defines a complex structure $J_\lambda$ on $S^1\times S^{2n-1}$ given by
\[ J_\lambda U_{2i-1}^\lambda=U_{2i}^\lambda, \quad J_\lambda U_{2i}^\lambda=-U_{2i-1}^\lambda,\quad i=1,\ldots,n. \]
It is clear that $J_\lambda$ is $g_\lambda$-orthogonal and therefore $(J_\lambda,g_\lambda)$ is a Hermitian structure on $S^1\times S^{2n-1}$ for any $\lambda>1$.
Let $\eta_i^\lambda$ denote the $1$-form on $S^1\times S^{2n-1}$ which is $g_{\lambda}$-dual to $U_i^\lambda$. Then $\eta_i^\lambda=dx_i+\frac{\log\lambda}{2\pi}x_i dt$ and it can be seen that $d\eta_i^\lambda=\eta_i^\lambda\wedge\alpha^\lambda$, where $\alpha^\lambda$ is the $1$-form defined by $\alpha^\lambda:=\sum_j x_j\eta_j^\lambda$. Note that applying \eqref{dxj} we obtain $\alpha^\lambda=\frac{\log\lambda}{2\pi}dt$.
\
We summarize all the equations we have obtained so far in the following lemma:
\begin{lemma}\label{lema-hopf}
The manifold $S^1\times S^{2n-1}$ admits a family of Hermitian structures $(J_{\lambda}, g_{\lambda})$ for $\lambda>1$. In terms of the $g_\lambda$-orthonormal global frame $\{U_1^\lambda,\ldots,U_{2n}^\lambda\}$ and functions $x_i\in C^\infty(S^1\times S^{2n-1})$, $i=1,\ldots,2n$, described above, we have:
\begin{align*}
J_\lambda U_{2i-1}^\lambda& =U_{2i}^\lambda, \quad J_\lambda U_{2i}^\lambda=-U_{2i-1}^\lambda \text{ for all } i,\\
[U_i^\lambda,U_j^\lambda] &= x_iU_j^\lambda-x_jU_i^\lambda, \quad
U_i^\lambda(x_j) =\delta_{ij}-x_ix_j,
\end{align*}
and
\[ d\eta_i^\lambda =\eta_i^\lambda\wedge \alpha^\lambda,\]
where $\eta_i^{\lambda}$ is the form $g_{\lambda}$-dual to $U_i^{\lambda}$ and $\alpha^\lambda:=\sum_j x_j\eta_j^\lambda$.
\end{lemma}
\medskip
Note that these expressions do not actually depend on $\lambda$, so that from now on we will omit the subscript/superscript $\lambda$ in all the forthcoming computations.
\
We will verify next the well-known fact that the previous Hermitian structure $(J,g)$ is Vaisman. Indeed, computing $d\omega$ for $\omega:=g(J\cdot,\cdot)=\sum_i \eta_{2i-1}\wedge\eta_{2i}$ and applying \eqref{dxj}, we obtain that
\[ d\omega=-2\alpha\wedge\omega,\quad d\alpha=0,\]
so that $(J,g)$ is LCK with Lee form $\theta=-2\alpha$. The vector field $H$ on $S^1\times S^{2n-1}$ which is $g$-dual to $\alpha$ will play an important role in our computations. It can be written in terms of the frame $\{U_i\}$ as
\[ H=\sum_{i=1}^{2n} x_i U_i,\]
and coincides with $\frac{2\pi}{\log \lambda}\frac{\partial}{\partial t}$. Observe that $H$ is a multiple of the vector field $A$ defined in previous sections for any Vaisman manifold; more precisely, $H = -A/2$. In order to show that $\alpha$, or equivalently $H$, is $\nabla^g$-parallel, we determine the Levi-Civita connection $\nabla ^g$ of $g$ in terms of $\{U_i\}$. Using the Koszul formula together with Lemma \ref{lema-hopf} we obtain
\begin{lemma}\label{nabla_g}
The Levi-Civita connection $\nabla^g$ on $S^1\times S^{2n-1}$ is given by
\begin{itemize}
\item $\nabla^g_{U_i}U_j=-x_j U_i$, if $i\neq j$;
\item $\nabla^g_{U_i}U_i= \sum_{k\neq i}x_kU_k=H-x_iU_i$.
\end{itemize}
\end{lemma}
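Since the frame $\{U_i\}$ is $g$-orthonormal, all the products $g(U_i,U_j)$ are constant and the Koszul formula reduces to its three bracket terms; the lemma can then be checked numerically at an arbitrary point, as in the following sketch (the dimension and the sample point are arbitrary choices, and the identity does not use $\sum_j x_j^2=1$).

```python
import numpy as np

# In the orthonormal frame {U_i} the Koszul formula reduces to
# 2 g(nabla_{U_i} U_j, U_k) = g([U_i,U_j],U_k) - g([U_j,U_k],U_i) + g([U_k,U_i],U_j),
# with [U_i, U_j] = x_i U_j - x_j U_i.
rng = np.random.default_rng(0)
m = 6                       # 2n = 6, i.e. S^1 x S^5
xv = rng.normal(size=m)     # values of the functions x_i at the chosen point

delta = np.eye(m)
def bk(i, j, k):            # k-th component of [U_i, U_j]
    return xv[i]*delta[j, k] - xv[j]*delta[i, k]

for i in range(m):
    for j in range(m):
        # claimed: nabla^g_{U_i} U_j = -x_j U_i + delta_ij H, with H = sum_k x_k U_k
        claimed = -xv[j]*delta[i] + (xv if i == j else 0)
        for k in range(m):
            koszul = 0.5*(bk(i, j, k) - bk(j, k, i) + bk(k, i, j))
            assert np.isclose(koszul, claimed[k])
print("Levi-Civita formula of the lemma verified")
```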
\smallskip
Therefore,
\begin{align*}
\nabla^g_{U_i}H & =\nabla^g_{U_i}(\sum_j x_jU_j) \\
& = \sum_j(U_i(x_j)U_j+x_j\nabla^g_{U_i}U_j) \\
& = (1-x_i^2)U_i+x_i(H-x_iU_i)+\sum_{j\neq i} (-x_ix_jU_j+x_j(-x_j)U_i)\\
& = (1-x_i^2)U_i+x_i(H-x_iU_i)-x_i(H-x_iU_i)-(\sum_{j\neq i}x_j^2) U_i\\
&= 0 \qquad (\text{since } \sum_j x_j^2=1).
\end{align*}
Thus we recover the fact that $S^1\times S^{2n-1}$ with the Hermitian structure $(J,g)$ is a Vaisman manifold.
\
\subsection{Computation of the Bismut holonomy of Hopf manifolds}
In what follows, we will study the Bismut connection $\nabla^b$ associated to $(J,g)$. We will first express $\nabla^b$ in terms of the frame $\{U_i\}$, and later we will determine its curvature tensor $R^b$ and its holonomy group.
\begin{proposition}
The Bismut connection $\nabla^b$ associated to $(J,g)$ on $S^1\times S^{2n-1}$ is given by
\begin{itemize}
\item $\nabla^b_{U_{2i-1}}U_{2j-1}=-x_{2j-1}U_{2i-1}+x_{2j}U_{2i}-x_{2i}U_{2j}$, $(i\neq j)$;
\item $\nabla^b_{U_{2i-1}}U_{2i-1}=\sum_{k\neq 2i-1}x_kU_k=H-x_{2i-1}U_{2i-1}$;
\item $\nabla^b_{U_{2i-1}}U_{2j}= -x_{2j}U_{2i-1}+x_{2i}U_{2j-1}-x_{2j-1}U_{2i}$, $(i\neq j)$;
\item $\nabla^b_{U_{2i-1}}U_{2i}=-\sum_{k}x_{2k}U_{2k-1}+\sum_{k\neq i}x_{2k-1}U_{2k}=JH-x_{2i-1}U_{2i}$;
\item $\nabla^b_{U_{2i}}U_{2j-1}=-x_{2j}U_{2i-1}-x_{2j-1}U_{2i}+x_{2i-1}U_{2j}$, $(i\neq j)$;
\item $\nabla^b_{U_{2i}}U_{2i-1}=-\sum_{k}x_{2k-1}U_{2k}+\sum_{k\neq i}x_{2k}U_{2k-1}=-JH-x_{2i}U_{2i-1}$;
\item $\nabla^b_{U_{2i}}U_{2j}=-x_{2i-1}U_{2j-1}+x_{2j-1}U_{2i-1}-x_{2j}U_{2i}$, $(i\neq j)$;
\item $\nabla^b_{U_{2i}}U_{2i}=\sum_{k\neq 2i}x_{k}U_{k}=H-x_{2i}U_{2i}$.
\end{itemize}
\end{proposition}
\begin{proof}
The proof follows from \eqref{bismut}, using Lemma \ref{nabla_g} and the fact that $d\omega=-2\alpha\wedge \omega$, where $\alpha=\sum x_j\eta_j, \, \omega=\sum \eta_{2j-1}\wedge \eta_{2j}$. We prove only the first two expressions; the others follow analogously. Also take into account that $\nabla^b_X U_{2r} = J(\nabla^b_X U_{2r-1})$ since $J$ is $\nabla^b$-parallel.
\medskip
For $i\neq j$:
\begin{align*}
g(\nabla^b_{U_{2i-1}}U_{2j-1},U_{2k-1}) & = -x_{2j-1}\delta_{ik}-\alpha\wedge \omega(U_{2i},U_{2j},U_{2k})\\
& = -x_{2j-1}\delta_{ik},
\end{align*}
while
\begin{align*}
g(\nabla^b_{U_{2i-1}}U_{2j-1},U_{2k}) & = \alpha\wedge \omega(U_{2i},U_{2j},U_{2k-1})\\
& = \alpha(U_{2i})\omega(U_{2j},U_{2k-1})+\alpha(U_{2j})\omega(U_{2k-1},U_{2i})\\
& = -x_{2i}\delta_{jk}+x_{2j}\delta_{ik}.
\end{align*}
Therefore: $\nabla^b_{U_{2i-1}}U_{2j-1}=-x_{2j-1}U_{2i-1}+x_{2j}U_{2i}-x_{2i}U_{2j}$. Now, for $i=j$, and for any vector field $X$ on $S^1\times S^{2n-1}$ we have that
\[ g(\nabla^b_{U_{2i-1}}U_{2i-1},X)=g(\nabla^g_{U_{2i-1}}U_{2i-1},X)-\alpha\wedge\omega(U_{2i},U_{2i},JX)=g(\nabla^g_{U_{2i-1}}U_{2i-1},X). \]
Therefore, $\nabla^b_{U_{2i-1}}U_{2i-1}=\nabla^g_{U_{2i-1}}U_{2i-1}=H-x_{2i-1}U_{2i-1}$, as claimed.
\end{proof}
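One can verify numerically that the connection listed above is both metric and complex, i.e., that each endomorphism $\nabla^b_{U_a}$ is skew-symmetric and commutes with $J$; the sketch below encodes the eight formulas in $0$-based frame indices (the dimension $n=3$ and the sample point are arbitrary test choices).

```python
import numpy as np

n = 3; m = 2*n                     # the case S^1 x S^5; n is an arbitrary choice
rng = np.random.default_rng(1)
xv = rng.normal(size=m)
xv /= np.linalg.norm(xv)           # a point: sum_k x_k^2 = 1

E = np.eye(m)
H = xv.copy()                      # H = sum_k x_k U_k in frame coordinates
J = np.zeros((m, m))
for p in range(n):                 # J U_{2p+1} = U_{2p+2} (0-based columns)
    J[2*p+1, 2*p], J[2*p, 2*p+1] = 1.0, -1.0
JH = J @ H

def nabla(a, b):
    """frame components of nabla^b_{U_{a+1}} U_{b+1} (0-based indices)."""
    i, j = a // 2, b // 2
    if a % 2 == 0 and b % 2 == 0:                       # odd-odd
        if i == j: return H - xv[a]*E[a]
        return -xv[2*j]*E[2*i] + xv[2*j+1]*E[2*i+1] - xv[2*i+1]*E[2*j+1]
    if a % 2 == 0:                                      # odd-even
        if i == j: return JH - xv[2*i]*E[2*i+1]
        return -xv[2*j+1]*E[2*i] + xv[2*i+1]*E[2*j] - xv[2*j]*E[2*i+1]
    if b % 2 == 0:                                      # even-odd
        if i == j: return -JH - xv[2*i+1]*E[2*i]
        return -xv[2*j+1]*E[2*i] - xv[2*j]*E[2*i+1] + xv[2*i]*E[2*j+1]
    if i == j: return H - xv[2*i+1]*E[2*i+1]            # even-even
    return -xv[2*i]*E[2*j] + xv[2*j]*E[2*i] - xv[2*j+1]*E[2*i+1]

for a in range(m):
    Na = np.array([nabla(a, b) for b in range(m)]).T    # matrix of nabla^b_{U_a}
    assert np.allclose(Na + Na.T, 0)                    # metric compatibility
    assert np.allclose(Na @ J - J @ Na, 0)              # nabla^b J = 0
print("nabla^b is metric and commutes with J")
```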
\
Next, we will determine the curvature endomorphisms $R^b(U_i,U_j)$ associated to $\nabla^b$. This will be done in Propositions \ref{curvatura1}, \ref{curvatura3} and \ref{curvatura2}; their proofs are long but straightforward computations and therefore we omit them\footnote{Another expression for the Bismut curvature tensor on Hopf manifolds was given recently in \cite{B}.}.
\begin{proposition}\label{curvatura1}
For any $i=1,\ldots,n$, the curvature endomorphism $R^b(U_{2i-1},U_{2i})$ is given by
\begin{align*}
R^b(U_{2i-1},U_{2i}) U_{2i-1} & = R^b(U_{2i-1},U_{2i})U_{2i} = 0, \\
R^b(U_{2i-1},U_{2i}) U_{2j-1} & = 2(1-x_{2i-1}^2-x_{2i}^2-x_{2j-1}^2-x_{2j}^2)U_{2j} \\
& \quad +2\sum_{k\neq i,j}(x_{2j-1}x_{2k}-x_{2j}x_{2k-1})U_{2k-1} -2\sum_{k\neq i,j}(x_{2j-1}x_{2k-1}+x_{2j}x_{2k})U_{2k}
\end{align*} and $R^b(U_{2i-1},U_{2i})U_{2j}=J(R^b(U_{2i-1},U_{2i})U_{2j-1})$ by Lemma~\ref{curvatura-JJ}(a).
\end{proposition}
\
\begin{proposition}\label{curvatura3}
For any $i,j=1,\ldots,n$, $(i\neq j)$, the curvature endomorphism $R^b(U_{2i-1},U_{2j})$ is given by
\begin{align*}
R^b(U_{2i-1},U_{2j})U_{2i-1} & = -(1-x_{2i-1}^2-x_{2i}^2-x_{2j-1}^2-x_{2j}^2)U_{2j} \\
& \qquad +\sum_{k\neq i,j}(x_{2j}x_{2k-1}-x_{2j-1}x_{2k})U_{2k-1}+ \sum_{k\neq i,j}(x_{2j-1}x_{2k-1}+x_{2j}x_{2k})U_{2k} \\
R^b(U_{2i-1},U_{2j})U_{2j-1} & = -(1-x_{2i-1}^2-x_{2i}^2-x_{2j-1}^2-x_{2j}^2)U_{2i} \\
& \qquad +\sum_{k\neq i,j}(x_{2i}x_{2k-1}-x_{2i-1}x_{2k})U_{2k-1}+ \sum_{k\neq i,j}(x_{2i-1}x_{2k-1}+x_{2i}x_{2k})U_{2k} \\
R^b(U_{2i-1},U_{2j})U_{2k-1} & = (x_{2j-1}x_{2k}-x_{2j}x_{2k-1})U_{2i-1}+(x_{2j-1}x_{2k-1}+x_{2j}x_{2k})U_{2i} \\
& \qquad +(x_{2i-1}x_{2k}-x_{2i}x_{2k-1})U_{2j-1}+(x_{2i-1}x_{2k-1}+x_{2i}x_{2k})U_{2j} \\
& \qquad - 2(x_{2i-1}x_{2j-1}+x_{2i}x_{2j})U_{2k} \quad (k\neq i,j)\\
\end{align*}
Moreover, $R^b(U_{2i-1},U_{2j})U_{2r}=J(R^b(U_{2i-1},U_{2j})U_{2r-1})$ by Lemma~\ref{curvatura-JJ}(a).
\end{proposition}
\
\begin{proposition}\label{curvatura2}
For any $i,j=1,\ldots,n$, $(i\neq j)$, the curvature endomorphism $R^b(U_{2i-1},U_{2j-1})$ is given by
\begin{align*}
R^b(U_{2i-1},U_{2j-1})U_{2i-1} & = -(1-x_{2i-1}^2-x_{2i}^2-x_{2j-1}^2-x_{2j}^2)U_{2j-1} \\
& \qquad +\sum_{k\neq i,j}(x_{2j-1}x_{2k-1}+x_{2j}x_{2k})U_{2k-1}+ \sum_{k\neq i,j}(x_{2j-1}x_{2k}-x_{2j}x_{2k-1})U_{2k} \\
R^b(U_{2i-1},U_{2j-1})U_{2j-1} & = (1-x_{2i-1}^2-x_{2i}^2-x_{2j-1}^2-x_{2j}^2)U_{2i-1} \\
& \qquad -\sum_{k\neq i,j}(x_{2i-1}x_{2k-1}+x_{2i}x_{2k})U_{2k-1}+ \sum_{k\neq i,j}(x_{2i}x_{2k-1}-x_{2i-1}x_{2k})U_{2k} \\
R^b(U_{2i-1},U_{2j-1})U_{2k-1} & = -(x_{2j-1}x_{2k-1}+x_{2j}x_{2k})U_{2i-1}+(x_{2j-1}x_{2k}-x_{2j}x_{2k-1})U_{2i} \\
& \qquad +(x_{2i-1}x_{2k-1}+x_{2i}x_{2k})U_{2j-1}+(x_{2i}x_{2k-1}-x_{2i-1}x_{2k})U_{2j} \\
& \qquad + 2(x_{2i-1}x_{2j}-x_{2i}x_{2j-1})U_{2k} \quad (k\neq i,j)\\
\end{align*}
and $R^b(U_{2i-1},U_{2j-1})U_{2r}=J(R^b(U_{2i-1},U_{2j-1})U_{2r-1})$ by Lemma~\ref{curvatura-JJ}(a).
\end{proposition}
\
\begin{remark}
{\rm It follows from Lemma~\ref{curvatura-JJ}(c) that $R^b(U_{2i},U_{2j})=R^b(U_{2i-1},U_{2j-1})$ for any $i\neq j$.}
\end{remark}
\medskip
As a consequence of Propositions \ref{curvatura1}, \ref{curvatura3} and \ref{curvatura2}, we recover the familiar fact that the Bismut connection on $S^1\times S^3$ is flat: for $n=2$ the sums over $k\neq i,j$ are empty and the coefficient $1-x_1^2-x_2^2-x_3^2-x_4^2$ vanishes identically, since $\sum_j x_j^2=1$.
\begin{corollary}\label{Rb-flat}
On $S^1\times S^3$ the Bismut connection is flat, i.e., $R^b\equiv 0$.
\end{corollary}
\begin{remark}
{\rm It was proved by Samelson in \cite{Sam} that any compact Lie group of even dimension admits a left invariant complex structure compatible with a bi-invariant metric. Moreover, it was shown in \cite{AI,J} that such a Hermitian manifold is Bismut flat. We point out that $S^1\times S^{2n-1}$ (with $n\geq 2$) is a Lie group only for $n=2$. In this case we have that $S^1\times S^3$ is isomorphic to $S^1\times \SU(2)$, and the Hermitian structure $(J,g)$ is left-invariant (in fact, $g$ is bi-invariant) and can be obtained with Samelson's construction.}
\end{remark}
\
In what follows we will determine the holonomy group $\operatorname{Hol}^b(S^1\times S^{2n-1})$ of the Bismut connection $\nabla^b$. Since $S^1\times S^{2n-1}$ is connected, we can choose any point $p$ as base point, and it will be convenient for us to choose $p=\left(1,\frac{1}{\sqrt{2n}}(1,\ldots,1)\right)\in S^1\times S^{2n-1}$. We will use Theorem \ref{AS1} in order to determine its Lie algebra $\mathfrak{hol}^b$. We begin with some auxiliary results.
\begin{lemma}\label{Rb-1}
For $n\geq 3$, we have that:
\begin{enumerate}
\item[(a)] the set $\{R^b(U_{2i-1},U_{2j})_p \mid 1\leq i<j\leq n\}$ is linearly independent.
\item[(b)] the set $\{R^b(U_{2i-1},U_{2j-1})_p \mid 1\leq i<j\leq n-1\}$ is linearly independent.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) It follows from Proposition \ref{curvatura3}, evaluating at $p=\left(1,\frac{1}{\sqrt{2n}}(1,\ldots,1)\right)$, that
for $i\neq j$ we have
\begin{align*}
R^b(U_{2i-1},U_{2j})_p U_{2i-1} & = -\frac{n-2}{n}U_{2j}+\frac{1}{n}\sum_{k\neq i,j}U_{2k},\\
R^b(U_{2i-1},U_{2j})_p U_{2j-1} & = -\frac{n-2}{n}U_{2i}+\frac{1}{n}\sum_{k\neq i,j}U_{2k},\\
R^b(U_{2i-1},U_{2j})_p U_{2k-1} & = \frac{1}{n}U_{2i}+\frac{1}{n}U_{2j}-\frac{2}{n}U_{2k},
\end{align*}
for $k\neq i,j$.
\smallskip
Let us consider a linear combination
\[ \sum_{i<j} c_{ij} R^b(U_{2i-1},U_{2j})_p=0.\]
Expanding $g(\sum_{i<j} c_{ij} R^b(U_{2i-1},U_{2j})_p U_{2r-1},U_{2s})=0$ for $r<s$, we obtain
\begin{equation}\label{system1}
-(n-2)c_{rs}+\sum_{r<j\neq s} c_{rj}+\sum_{i<r}c_{ir}+\sum_{s<j}c_{sj}+\sum_{r\neq i<s} c_{is}=0, \qquad 1\leq r<s\leq n.
\end{equation}
Therefore \eqref{system1} defines a homogeneous linear system $Mc=0$ of $\binom{n}{2}$ equations with $\binom{n}{2}$ unknowns ordered lexicographically. It turns out that the matrix $M$ is symmetric, the elements on the diagonal are all equal to $-(n-2)$, the elements off the diagonal are equal to either $0$ or $1$, and all the rows and columns have the same sum, namely, $n-2$. The $\binom{n}{2}\times\binom{n}{2}$-matrix $\tilde M=(\tilde{M}_{ij})$ given by:
\[ \tilde{M}_{ij}=\begin{cases}
-\frac{2n-6}{n(n-2)}, \quad \text{if } i=j,\\
-\frac{n-6}{2n(n-2)}, \quad \text{if } M_{ij}=1,\\
\frac{2}{n(n-2)}, \quad \text{if } M_{ij}=0.
\end{cases} \]
is the inverse of $M$, which means that the system $Mc=0$ has the unique solution $c_{ij}=0$ for all $i<j$. The proof of (a) is complete.
\medskip
(b) It follows from Proposition \ref{curvatura2}, evaluating at $p=\left(1,\frac{1}{\sqrt{2n}}(1,\ldots,1)\right)$, that when $i\neq j$ we have
\begin{align*}
R^b(U_{2i-1},U_{2j-1})_p U_{2i-1} & = -\frac{n-2}{n}U_{2j-1}+\frac{1}{n}\sum_{k\neq i,j}U_{2k-1},\\
R^b(U_{2i-1},U_{2j-1})_p U_{2j-1} & = \frac{n-2}{n}U_{2i-1}-\frac{1}{n}\sum_{k\neq i,j}U_{2k-1},\\
R^b(U_{2i-1},U_{2j-1})_p U_{2r-1} & = -\frac{1}{n}U_{2i-1}+\frac{1}{n}U_{2j-1},
\end{align*}
for $r\neq i,j$.
Let us consider now a linear combination
\[ {\sum_{i<j}}' c_{ij} R^b(U_{2i-1},U_{2j-1})_p=0,\]
where ${\sum}'$ means that the indices run up to $n-1$, i.e. $1\leq i<j\leq n-1$. Expanding $\displaystyle{g({\sum_{i<j}}' c_{ij} R^b(U_{2i-1},U_{2j-1})_p U_{2r-1},U_{2s-1})=0}$ for $1\leq r<s\leq n-1$, we obtain
\begin{equation}\label{system2}
-(n-2)c_{rs}+{\sum_{r<j\neq s}}'c_{rj}-{\sum_{i<r}}'c_{ir}-{\sum_{s<j}}'c_{sj}+{\sum_{r\neq i<s}}'c_{is}=0.
\end{equation}
Therefore \eqref{system2} defines a homogeneous linear system $Mc=0$ of $\binom{n-1}{2}$ equations with $\binom{n-1}{2}$ unknowns ordered lexicographically. It turns out that the matrix $M$ is symmetric, the elements on the diagonal are all equal to $-(n-2)$ and the elements off the diagonal are equal to either $0$ or $\pm 1$. The $\binom{n-1}{2}\times\binom{n-1}{2}$-matrix $\tilde M=(\tilde{M}_{ij})$ given by:
\[ \tilde{M}_{ij}=\begin{cases}
-\frac{3}{n}, \quad \text{if } i=j,\\
\mp\frac{1}{n}, \quad \text{if } M_{ij}=\pm1,\\
0, \quad \text{if } M_{ij}=0.
\end{cases} \]
is the inverse of $M$, and therefore $c_{ij}=0$ for all $1\leq i<j\leq n-1$. The proof of (b) is complete.
\end{proof}
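Both homogeneous systems above lend themselves to a quick numerical sanity check. The sketch below (in Python; the helper names and the pair orderings are ours, read off from \eqref{system1} and \eqref{system2}) builds the two coefficient matrices, verifies for part (a) that $M$ is symmetric with row sums $n-2$ and nonzero determinant (so that $Mc=0$ forces $c=0$), and verifies for part (b) that the displayed matrix $\tilde M$ is indeed the inverse of $M$:

```python
import numpy as np
from itertools import combinations

def system1_matrix(n):
    """Coefficient matrix of (system1): unknowns c_{ij}, 1 <= i < j <= n,
    in lexicographic order; two distinct pairs interact (entry 1) exactly
    when they share one index, and the diagonal entries are -(n-2)."""
    pairs = list(combinations(range(1, n + 1), 2))
    M = np.zeros((len(pairs), len(pairs)))
    for a, p in enumerate(pairs):
        for b, q in enumerate(pairs):
            if p == q:
                M[a, b] = -(n - 2)
            elif len(set(p) & set(q)) == 1:
                M[a, b] = 1.0
    return M

def system2_matrix(n):
    """Coefficient matrix of (system2): unknowns c_{ij}, 1 <= i < j <= n-1.
    Reading off (system2), the entry between two pairs sharing one index is
    +1 when the shared index occupies the same slot in both pairs
    (first-first or second-second) and -1 otherwise."""
    pairs = list(combinations(range(1, n), 2))
    M = np.zeros((len(pairs), len(pairs)))
    for a, (r, s) in enumerate(pairs):
        for b, (p, q) in enumerate(pairs):
            if (r, s) == (p, q):
                M[a, b] = -(n - 2)
            else:
                shared = {r, s} & {p, q}
                if len(shared) == 1:
                    t = shared.pop()
                    M[a, b] = 1.0 if (t == r) == (t == p) else -1.0
    return M

for n in range(3, 9):
    # part (a): M is symmetric, rows sum to n-2, and M is invertible
    M1 = system1_matrix(n)
    assert np.allclose(M1, M1.T)
    assert np.allclose(M1.sum(axis=1), n - 2)
    assert abs(np.linalg.det(M1)) > 1e-8
    # part (b): the matrix with -3/n on the diagonal and -M_ij/n off it
    # (the displayed tilde-M) inverts M
    M2 = system2_matrix(n)
    M2t = np.where(np.eye(len(M2), dtype=bool), -3.0 / n, -M2 / n)
    assert np.allclose(M2 @ M2t, np.eye(len(M2)))
```

A structural remark behind these checks: both matrices are of the form $G-nI$ for a Gram matrix $G$, namely that of the vectors $e_r+e_s$ in case (a) and of $e_r-e_s$ in case (b); this yields eigenvalues in $\{n-2,-2,-n\}$ and $\{-1,-n\}$ respectively, so both matrices are invertible for $n\geq 3$.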
\
Now we can prove the main result in this section.
\begin{theorem}
Let $n\geq 3$ and $\lambda>1$. If $S^1\times S^{2n-1}$ is equipped with the Vaisman structure $(J_\lambda,g_\lambda)$ described above, then the associated Bismut connection $\nabla^b$ has holonomy group $\operatorname{Hol}^b(S^1\times S^{2n-1})=\U(n-1)$.
\end{theorem}
\begin{proof}
We will compute, as before, the holonomy group $\operatorname{Hol}^b(S^1\times S^{2n-1})$ (and its Lie algebra $\mathfrak{hol}^b:=\mathfrak{hol}^b(S^1\times S^{2n-1})$) at the point $p=\left(1,\frac{1}{\sqrt{2n}}(1,\ldots,1)\right)\in S^1\times S^{2n-1}$.
Let us recall first from Corollary \ref{hol} that $\operatorname{Hol}^b(S^1\times S^{2n-1}) \subseteq \U(n-1)$. Therefore $\dim\mathfrak{hol}^b\leq (n-1)^2$.
Now, according to Theorem \ref{AS1}, $\mathfrak{hol}^b\subseteq \mathfrak{u}(n-1)$ contains all the curvature operators $R^b(X,Y)_p$ for any vector fields $X,Y$ on $S^1\times S^{2n-1}$. In particular,
\begin{equation}\label{set}
\{R^b(U_{2i-1},U_{2j})_p\mid 1\leq i<j\leq n\} \cup \{R^b(U_{2i-1},U_{2j-1})_p\mid 1\leq i<j\leq n-1\} \subset \mathfrak{hol}^b. \end{equation}
It follows from Lemma \ref{Rb-1} that each subset in the left-hand side of \eqref{set} is linearly independent; moreover, it is easy to see that their union is also linearly independent. Hence
\[ \dim\mathfrak{hol}^b\geq \binom{n}{2}+\binom{n-1}{2}= (n-1)^2.\]
Therefore $\dim\mathfrak{hol}^b=(n-1)^2$. This implies that $\mathfrak{hol}^b=\mathfrak{u}(n-1)$. Since $\operatorname{U}(n-1)$ is connected we have that $\operatorname{Hol}^b(S^1\times S^{2n-1})=\operatorname{U}(n-1)$.
\end{proof}
\medskip
We end this section by writing down the Bismut Ricci curvature of the Hopf manifolds we are considering. The following result follows in a straightforward manner from Propositions \ref{curvatura1}, \ref{curvatura3} and \ref{curvatura2}.
\begin{proposition}
The Bismut Ricci curvature on $(S^1\times S^{2n-1},J_\lambda,g_\lambda)$ for $\lambda>1$ is given in terms of the orthonormal basis $\{U_1,\ldots,U_{2n}\}$ by
\begin{align*}
\operatorname{Ric}^b(U_{2r-1},U_{2s-1}) & = -2(n-2)(x_{2r-1}x_{2s-1}+x_{2r}x_{2s} - \delta_{rs}), \\
\operatorname{Ric}^b(U_{2r-1},U_{2s}) & = -2(n-2)(x_{2r-1}x_{2s}-x_{2r}x_{2s-1}).
\end{align*}
Also, $\operatorname{Ric}^b(U_{2r},U_{2s})=\operatorname{Ric}^b(U_{2r-1},U_{2s-1})$ for all $r,s$, according to Corollary \ref{ric-sym}.
\end{proposition}
\begin{corollary}
The Bismut connection associated to $(S^1\times S^{2n-1},J_\lambda,g_\lambda)$ has $\operatorname{Ric}^b=0$ if and only if $n=2$, and in this case $\nabla^b$ is flat (see Corollary \ref{Rb-flat}).
\end{corollary}
\
\section{Bismut holonomy of LCK Oeljeklaus-Toma manifolds}\label{section-OT}
In this section we study the Bismut holonomy of Oeljeklaus-Toma manifolds (OT manifolds for short) admitting an LCK metric. OT manifolds appeared in \cite{OT}; they are non-K\"ahler compact complex manifolds arising from certain number fields admitting $s$ real embeddings and $2t$ complex embeddings (OT manifolds of type $(s,t)$). When $t=1$ these OT manifolds admit LCK metrics. However, it was shown recently in \cite{DV, D, Vu} that they do not admit any LCK metric when $t\geq 2$.
It was proved in \cite{K} that all OT manifolds are in fact solvmanifolds, whose complex structure is induced by a left invariant one on the corresponding solvable Lie group. Using this solvmanifold structure, Kasuya also showed in \cite{K} that OT manifolds of any type do not admit Vaisman metrics. Moreover, for OT manifolds of type $(s,1)$, the LCK Hermitian structure is also induced by a left invariant one on the solvable Lie group. It can be deduced from \cite{K} that the Lie algebra $\frg$ of the Lie group corresponding to an OT manifold of type $(s,1)$ has a basis $\{A_1,\ldots,A_s,B_1,\ldots,B_s,C_1,C_2\}$ with Lie brackets given by\footnote{Note that for $s=1$ we obtain a Lie bracket isomorphic to the one in Example \ref{example}.}:
\begin{align*}
[A_i,B_i] &=B_i, \qquad i=1,\ldots, s, \nonumber \\
[A_i,C_1] &=-\frac12 C_1 +r_i C_2, \\
[A_i,C_2] &=-r_i C_1 -\frac12 C_2, \nonumber
\end{align*}
for some real numbers $r_i\in\R$, $i=1,\ldots, s$. The complex structure $J$ on $\frg$ takes the following expression:
\[ JA_i=B_i \; (i=1,\ldots, s), \qquad JC_1=C_2,\]
and the fundamental $2$-form is given by
\[ \omega=2\sum_i \alpha_i\wedge\beta_i + \sum_{i\neq j}\alpha_i\wedge \beta_j + \gamma_1\wedge \gamma_2, \]
where $\{\alpha_1,\ldots,\alpha_s,\beta_1,\ldots,\beta_s,\gamma_1,\gamma_{2}\}$ is the dual basis of $\frg^*$. If $g=\omega(\cdot,J\cdot)$ denotes the corresponding Hermitian metric, then $(\frg,J,g)$ is an LCK Lie algebra, with Lee form $\theta=\alpha_1+\cdots+\alpha_s$. Note that the non-zero values of the metric $g$, in terms of the basis of $\frg$, are given by:
\[g(A_i,A_i)=g(B_i,B_i)=2,\quad g(A_i,A_j)=g(B_i,B_j)=1, \; (i\neq j),\quad g(C_1,C_1)=g(C_2,C_2)=1. \]
Note that the vectors $A$ and $JA$, $g$-dual to the Lee form $\theta$ and $J\theta$ respectively, are given by
\[ A=\frac{1}{s+1}(A_1+\cdots+A_s), \quad JA=\frac{1}{s+1}(B_1+\cdots+B_s).\]
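The displayed expressions for $A$ and $JA$ can be double-checked numerically: since $g(A_i,B_j)=g(A_i,C_k)=0$ and $\theta(B_j)=\theta(C_k)=0$, the $g$-dual of $\theta$ lies in $\operatorname{span}\{A_1,\ldots,A_s\}$, where the Gram matrix of $g$ is $I+J$, with $J$ the all-ones matrix. A short sketch of this check (Python, our notation):

```python
import numpy as np

for s in range(1, 11):
    # Gram matrix of g on span{A_1, ..., A_s}: g(A_i, A_j) = 1 + delta_ij
    G = np.eye(s) + np.ones((s, s))
    # theta = alpha_1 + ... + alpha_s, so theta(A_j) = 1 for every j
    theta = np.ones(s)
    # the g-dual vector A solves G x = theta
    x = np.linalg.solve(G, theta)
    assert np.allclose(x, np.ones(s) / (s + 1))  # A = (A_1+...+A_s)/(s+1)
```

The same computation applies verbatim to $JA$ on $\operatorname{span}\{B_1,\ldots,B_s\}$, whose Gram matrix is again $I+J$.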
\
The main result of this section is Theorem \ref{hol-OT}, where we determine the holonomy group of OT manifolds of type $(s,1)$. Since these metrics are not Vaisman, one cannot expect a reduction of the holonomy group as stated in Corollary~\ref{hol}. In fact, in Theorem~\ref{hol-OT} we obtain the whole group $\operatorname{U}(s+1)$ as holonomy group. In order to get this result, several computations are needed. We start by determining the curvature endomorphisms $R^b$.
Using the well-known Koszul formula for the computation of the Levi-Civita connection $\nabla^g$, we obtain that $\nabla^g$, expressed in terms of the basis of $\frg$, is given by the following:
\begin{multicols}{2}
\begin{itemize}
\item $\nabla^g_{A_i}A_j=0, \; \forall i,j$
\item $\nabla^g_{A_i}B_j=-\frac12 B_i+ \frac12(1+\delta_{ij}) JA$
\item $\nabla^g_{A_i}C_1=r_i C_2$
\item $\nabla^g_{A_i}C_2=-r_i C_1$
\item $\nabla^g_{B_i}A_j=-(\frac12+\delta_{ij}) B_j+\frac12(1+\delta_{ij}) JA$
\item $\nabla^g_{B_i}B_j=(1+\delta_{ij})\left[\frac12(A_i+A_j)-A\right]$
\item $\nabla^g_{B_i}C_k=0$, $k=1,2$
\item $\nabla^g_{C_k}A_i=\frac12 C_k$, $k=1,2$
\item $\nabla^g_{C_k}B_i=0$, $k=1,2$
\item $\nabla^g_{C_k}C_k=-\frac12 A$, $k=1,2$
\item $\nabla^g_{C_i}C_j=0$, $i\neq j$.
\end{itemize}
\end{multicols}
\
Applying \eqref{nablab-formula}, we arrive at the following result:
\begin{proposition}
The Bismut connection of an OT manifold of type $(s,1)$, expressed in terms of the basis $\{A_i,B_i,C_1,C_2\}$ of the corresponding Lie algebra $\frg$, is given by
\begin{multicols}{2}
\begin{itemize}
\item $\nabla^b_{A_i}A_j=0 \; \forall i,j$
\item $\nabla^b_{A_i}C_1=r_i C_2$
\item $\nabla^b_{B_i}A_j=(1+\delta_{ij}) (-B_j + JA)$
\item $\nabla^b_{B_i}C_1=-\frac12 C_2$
\item $\nabla^b_{C_k}A_i=\frac12 C_k$, $k=1,2$
\item $\nabla^b_{C_1}C_1=-\frac12 A$
\item $\nabla^b_{C_2}C_1=\frac12 JA$
\end{itemize}
\end{multicols}
The missing values can be deduced from $\nabla^b_X JY = J(\nabla^b_X Y)$.
\end{proposition}
\
Using the previous proposition we obtain the following expressions for the curvature $R^b$. The proof consists of long but standard computations.
\begin{proposition}\label{Rb-OT}
The curvature $R^b$ of the Bismut connection on an OT manifold of type $(s,1)$ is given by:
\begin{multicols}{2}
\begin{itemize}
\item $R^b(A_i,A_j)=0$
\item $R^b(A_i,B_j)=0 \; (i\neq j)$
\item $R^b(A_i,B_i)A_j=(1+\delta_{ij})(B_j-JA)$
\item $R^b(A_i,B_i)C_1=\frac12 C_2$
\item $R^b(B_i,B_j)A_k=\frac{(1+\delta_{jk})A_i-(1+\delta_{ik})A_j}{s+1}$
\item $R^b(B_i,B_j)C_1=0$
\item $R^b(A_i,C_k)A_j=\frac14 C_k \; (\forall i,j,k)$
\item $R^b(A_i,C_1)C_1=-\frac14 A$
\item $R^b(A_i,C_2)C_1=\frac14 JA$
\item $R^b(B_i,C_1)A_j=\frac{-s+1+2\delta_{ij}}{4(s+1)}C_2$
\item $R^b(B_i,C_1)C_1=\frac{1}{2(s+1)}B_i-\frac14 JA$
\item $R^b(B_i,C_2)A_j=\frac{s-1-2\delta_{ij}}{4(s+1)}C_1$
\item $R^b(B_i,C_2)C_1=\frac{1}{2(s+1)}A_i-\frac14 A$
\item $R^b(C_1,C_2)A_i=-\frac12 JA$
\item $R^b(C_1,C_2)C_1=\frac{s}{2(s+1)} C_2$
\end{itemize}
\end{multicols}
The missing values are either $0$ or can be deduced from the ones in this list using that $R^b(\cdot,\cdot)J=JR^b(\cdot,\cdot)$.
\end{proposition}
\medskip
\begin{remark}
{\rm It follows from Proposition \ref{Rb-OT} that the curvature operators $R^b(A_i,C_1)$ and $R^b(A_i,C_2)$ are independent of $i$. We will denote simply
\[ R^b_{AC1}:=R^b(A_i,C_1), \qquad R^b_{AC2}:=R^b(A_i,C_2),\]
for any $i$. Moreover, for $s\neq 2$, these operators satisfy a linear relation with the other curvature operators, since
\[
R^b_{AC1}=\frac{1}{s-2}\sum_i R^b(B_i,C_2), \qquad
R^b_{AC2}=-\frac{1}{s-2}\sum_i R^b(B_i,C_1).
\] }
\end{remark}
\
We may now compute the holonomy algebra $\mathfrak{hol}^b(M)$ of the Bismut connection associated to the LCK structure on an OT manifold $M=\Gamma\backslash G$ of type $(s,1)$. Recall that $\mathfrak{hol}^b(M)$ is the smallest subalgebra of $\mathfrak{u}(\frg)\cong \mathfrak{u}(s+1)$ containing the curvature operators $R^b(X,Y)$, $X,Y\in\frg$, and being closed under commutators with $\nabla^b_X$, $X\in\frg$, due to Theorem \ref{AS2}.
\
For low dimensions, it is straightforward to verify that $\mathfrak{hol}^b(M)= \mathfrak{u}(\frg)$, that is, there is no reduction of the Bismut holonomy group. Indeed, computing all the corresponding curvature operators and the commutators between any two of them we obtain:
\begin{lemma}\label{dim-baja}
For $s=1$ and $s=2$, the holonomy algebra $\mathfrak{hol}^b(M)$ of the Bismut connection on an OT manifold $M$ of type $(s,1)$ coincides with $\mathfrak{u}(\frg)$.
\end{lemma}
Note that the computations for the case $s=1$ were carried out in Example \ref{example}.
\
Therefore, we assume from now on that $s\geq 3$. Our strategy for computing the holonomy consists in finding two sets of linearly independent endomorphisms belonging to $\mathfrak{hol}^b(M)$ (see Lemma~\ref{li-1} and Lemma~\ref{li-2}) with a suitable number of elements.
\begin{lemma}\label{li-1}
The elements of the subset \[ \mathcal{U}:=\{R^b(B_i,C_1), R^b(B_i,C_2)\}_{1\leq i\leq s} \cup \{R^b(B_i,B_j)\}_{1\leq i<j\leq s}\]
of $\mathfrak{hol}^b(M)$ are linearly independent. \end{lemma}
\begin{proof}
Consider a linear combination of elements of $\mathcal U$
\[ T:= \sum_{1\leq i<j\leq s} x_{ij}R^b(B_i,B_j)+\sum_{i=1}^s c_i R^b(B_i,C_1)+\sum_{i=1}^s d_i R^b(B_i,C_2), \]
and assume $T=0$.
For any $k=1,\ldots,s$, we look for the coefficient of $C_1$ in $T(A_k)=0$ and we obtain, according to Proposition \ref{Rb-OT},
\[ (s-3)d_k+(s-1)\sum_{i\neq k} d_i=0. \]
This implies
\begin{equation}\label{dk}
-2d_k+(s-1)\sum_i d_i=0,
\end{equation}
and summing this equality over all $k=1,\ldots,s$, we get
\[ -2\sum_k d_k+s(s-1)\sum_k d_k=0, \]
which is equivalent to
\[ (s+1)(s-2)\sum_k d_k=0.\]
Since $s\geq 3$, we deduce that $\sum_k d_k=0$, which together with \eqref{dk} gives $d_k=0$ for all $k$.
Taking into account the coefficient of $C_2$ in $T(A_k)=0$ we obtain, in a similar fashion, that $c_k=0$ for all $k$.
Now, $T(A_k)=0$ is equivalent to $\sum_{i<j} x_{ij} R^b(B_i,B_j)(A_k)=0$, which when expanded gives
\[ \sum_{i<j} x_{ij}(A_i-A_j)+\sum_{i<k} x_{ik} A_i-\sum_{j>k} x_{kj} A_j=0, \]
for all $k$. Note that the first sum in the equation above is independent of $k$, so that
\[ \sum_{i<k} x_{ik} A_i-\sum_{j>k} x_{kj} A_j=v \]
for some constant vector $v\in\text{span}\{A_1,\ldots,A_s\}$, for any $k$. Fix now a pair $(p,q)$ with $1\leq p<q\leq s$, we have then
\[ \sum_{i<p} x_{ip} A_i-\sum_{j>p} x_{pj} A_j= \sum_{i<q} x_{iq} A_i-\sum_{j>q} x_{qj} A_j.\]
The coefficient of $A_q$ in the left-hand side is $-x_{pq}$, whereas on the right-hand side is $0$. Therefore $x_{pq}=0$ for all $1\leq p<q\leq s$, and the proof is complete.
\end{proof}
\
According to Lemma \ref{li-1}, we can forget about the operators $R^b_{AC1}$ and $R^b_{AC2}$ when searching for a basis of $\mathfrak{hol}^b(M)$.
We should also analyze the commutators between any two curvature operators. We will prove that we need not compute all these commutators, but only the ones appearing in the next result.
\
For any $i=1,\ldots,s$, let $S_i$ denote the endomorphism of $\frg$ defined as follows:
\[ S_i(A_j)=\delta_{ij}\left(-sB_j+\sum_{k\neq j} B_k\right), \qquad S_i(C_1)=-\frac12 C_2, \qquad S_iJ=JS_i.\]
It is easy to verify that $S_i$ is skew-symmetric, and therefore $S_i\in \mathfrak{u}(\frg)$. Moreover, the following result relates them with the curvature operators $R^b(A_i,B_i)$:
\begin{lemma}\label{Si}
The endomorphisms $\{S_i \}_{1\leq i\leq s}$ in $\mathfrak{u}(\frg)$ are linearly independent and, furthermore,
\[ \text{span}\{S_i \}_{1\leq i\leq s}=\text{span}\{R^b(A_i,B_i)\}_{1\leq i\leq s}.\]
In particular, $S_i\in\mathfrak{hol}^b(M)$.
\end{lemma}
\begin{proof}
The fact that $\{S_1,\ldots, S_s\}$ are linearly independent is very easy to verify. As for the second statement, it is a consequence of the following expressions:
\[ R^b(A_i,B_i)=-\frac{1}{s+1}\left(2S_i+\sum_{k\neq i} S_k\right),\]
and
\[ S_i=-sR^b(A_i,B_i)+\sum_{k\neq i}R^b(A_k,B_k).\]
\end{proof}
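In matrix terms, writing $R^b(A_i,B_i)=\sum_k C_{ik}S_k$ and $S_i=\sum_k D_{ik}R^b(A_k,B_k)$, the two displayed expressions give $C=-\frac{1}{s+1}(I+J)$ and $D=J-(s+1)I$ (with $J$ the all-ones $s\times s$ matrix), and $CD=I$ follows from $J^2=sJ$. A quick numerical confirmation of this consistency (a sketch in our notation):

```python
import numpy as np

for s in range(1, 11):
    I, J = np.eye(s), np.ones((s, s))
    C = -(I + J) / (s + 1)   # expresses R^b(A_i, B_i) in terms of the S_k
    D = J - (s + 1) * I      # expresses S_i in terms of the R^b(A_k, B_k)
    # the two changes of basis are mutually inverse
    assert np.allclose(C @ D, I)
    assert np.allclose(D @ C, I)
```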
\
Let us consider now the endomorphisms $T_{ij}\in\mathfrak{hol}^b(M)$, $i\neq j$, defined by
\[ T_{ij}=[S_i,R^b(B_i,B_j)]. \]
Direct computations prove the following:
\begin{lemma}\label{Tij-new}
The operators $T_{ij}$, $i\neq j$, act on $\frg$ as follows:
\begin{itemize}
\item $T_{ij}(A_i)=\frac{1}{s+1}\left(-sB_i-s B_j+\sum_{k\neq i,j}B_k\right)$,
\item $T_{ij}(A_j)=\frac{1}{s+1}\left(-2s B_i+2\sum_{k\neq i}B_k\right)$,
\item $T_{ij}(A_l)=\frac{1}{s+1}\left(-sB_i+\sum_{k\neq i}B_k\right)$, for $l\neq i,j$,
\item $T_{ij}(C_1)=0$.
\end{itemize}
The missing values can be deduced from $T_{ij}J=JT_{ij}$.
\end{lemma}
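The action stated in Lemma \ref{Tij-new} can be verified by representing $S_i$ and $R^b(B_i,B_j)$ as matrices in the ordered basis $(A_1,\ldots,A_s,B_1,\ldots,B_s,C_1,C_2)$, using $J$-equivariance to fill in the values on the $B_k$ and on $C_2$, and computing the commutator numerically. A sketch (the helper and the sampled values of $s$, $i$, $j$ are ours; indices in the code are $0$-based):

```python
import numpy as np

def build_ops(s, i, j):
    """Matrices of S_i and R^b(B_i,B_j) in the ordered basis
    (A_1,...,A_s, B_1,...,B_s, C_1, C_2); here i, j are 0-based, i != j.
    Values on the B_k and on C_2 are filled in via J-equivariance
    (J A_k = B_k, J B_k = -A_k, J C_1 = C_2, J C_2 = -C_1)."""
    n = 2 * s + 2
    C1, C2 = 2 * s, 2 * s + 1

    S = np.zeros((n, n))
    for k in range(s):
        # S_i(A_i) = -s B_i + sum_{k != i} B_k (the other A_l are sent to 0)
        S[s + k, i] = -s if k == i else 1.0
        # S_i(B_i) = J S_i(A_i) = s A_i - sum_{k != i} A_k
        S[k, s + i] = s if k == i else -1.0
    S[C2, C1] = -0.5   # S_i(C_1) = -1/2 C_2
    S[C1, C2] = 0.5    # S_i(C_2) = J S_i(C_1) = 1/2 C_1

    R = np.zeros((n, n))
    for k in range(s):
        # R^b(B_i,B_j) A_k = ((1+d_jk) A_i - (1+d_ik) A_j)/(s+1),
        # the same coefficients on the B_k, and R^b(B_i,B_j) C_k = 0
        R[i, k] = (1 + (k == j)) / (s + 1)
        R[j, k] = -(1 + (k == i)) / (s + 1)
        R[s + i, s + k] = (1 + (k == j)) / (s + 1)
        R[s + j, s + k] = -(1 + (k == i)) / (s + 1)
    return S, R

s, i, j = 5, 1, 3
S, R = build_ops(s, i, j)
T = S @ R - R @ S   # the commutator T_ij = [S_i, R^b(B_i, B_j)]

# the action stated in the lemma
expected = np.zeros_like(T)
for l in range(s):
    if l == i:
        col = [-s if k in (i, j) else 1.0 for k in range(s)]
    elif l == j:
        col = [-2 * s if k == i else 2.0 for k in range(s)]
    else:
        col = [-s if k == i else 1.0 for k in range(s)]
    expected[s:2 * s, l] = np.array(col) / (s + 1)
    expected[:s, s + l] = -expected[s:2 * s, l]   # T(B_l) = J T(A_l)
# T(C_1) = 0 and T(C_2) = J T(C_1) = 0: those columns stay zero

assert np.allclose(T, expected)
```

The same check goes through for the other admissible choices of $s\geq 3$ and $i\neq j$ we sampled.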
\medskip
\begin{lemma}\label{li-2}
The elements of the subset \[ \mathcal{V}:=\{S_i\}_{1\leq i\leq s}\cup \{R^b(C_1,C_2)\} \cup\{T_{ij}\}_{1\leq i<j\leq s}\] of $\mathfrak{hol}^b(M)$
are linearly independent.
\end{lemma}
\begin{proof}
Let us consider a linear combination of elements of $\mathcal V$
\[ S:=\sum_{i=1}^s x_i S_i + y R^b(C_1,C_2) + \sum_{1\leq i<j\leq s} a_{ij}T_{ij},\]
for some $x_i,y,a_{ij}\in\R$, and assume $S=0$.
Let us see first that $y=0$.
It follows from $S(C_1)=0$ that
\begin{equation}\label{suma-x}
\sum_i x_i=\frac{s}{s+1}y.
\end{equation}
Fix now $l\in\{1,\ldots,s\}$, and consider $S(A_l)=0$:
\begin{equation}\label{SAl}
\begin{array}{lll}
S(A_l)& =& x_l(-s\,B_l + \displaystyle\sum_{k\neq l} B_k) - \frac{y}{2(s+1)}(B_1+\cdots+B_s) \\[5pt] && + \frac{1}{s+1} \sum_{i<l} a_{il} (-2s B_i + 2 \displaystyle\sum_{k\neq i} B_k)\\[5pt]
&& + \frac{1}{s+1} \displaystyle\sum_{j>l} a_{lj} (-s B_l -s B_j + \displaystyle\sum_{k\neq l,j} B_k) + \frac{1}{s+1} \displaystyle\sum_{l\neq i<j \neq l} a_{ij} (-s B_i + \displaystyle\sum_{k\neq i} B_k)\\& =& 0.
\end{array}
\end{equation}
The coefficient of $B_l$ in \eqref{SAl} is zero and, using Lemmas \ref{Si} and \ref{Tij-new}, it is given by
\begin{equation}\label{Bl}
-s x_l-\frac{y}{2(s+1)}+\frac{2}{s+1}X_l-\frac{s}{s+1}Y_l+\frac{1}{s+1}Z_l=0,
\end{equation}
where
\[ X_l=\sum_{i<l} a_{il}, \qquad Y_l= \sum_{j>l} a_{lj}, \qquad Z_l=\sum_{l\neq i<j\neq l}a_{ij}. \]
Note that $X_l+Y_l+Z_l=\sum_{i<j}a_{ij}$ for all $l$ and, moreover,
\[ \sum_{l=1}^s X_l=\sum_{i<j}a_{ij}, \quad \sum_{l=1}^s Y_l=\sum_{i<j}a_{ij}, \quad \sum_{l=1}^s Z_l=(s-2)\sum_{i<j}a_{ij}.\]
Summing \eqref{Bl} over $l=1,\ldots,s$ and using \eqref{suma-x} we arrive at
\[ -s\frac{s}{s+1}y-s\frac{y}{2(s+1)}+\frac{2}{s+1}
\sum_{i<j}a_{ij}-\frac{s}{s+1}\sum_{i<j} a_{ij}+\frac{s-2}{s+1}\sum_{i<j}a_{ij}=0. \]
From this we deduce
\[ \frac{-s(2s+1)}{2(s+1)}y=0, \]
thus
\[ y=0. \]
Fixing again $l\in\{1,\ldots,s\}$, we compute now the coefficient of $B_r$ in \eqref{SAl}. Using Proposition \ref{Rb-OT}, Lemmas \ref{Si} and \ref{Tij-new} and $y=0$ we obtain the system:
\begin{align}
-s x_l+\frac{2}{s+1}X_l-\frac{s}{s+1}Y_l+\frac{1}{s+1}Z_l & =0, \qquad \text{for } r=l, \nonumber\\
x_l-a_{rl}+\frac{2}{s+1}X_l+\frac{1}{s+1}Y_l+\frac{1}{s+1}Z_l & =Y_r, \qquad \text{for } r<l,\label {system}\\
x_l-a_{lr}+\frac{2}{s+1}X_l+\frac{1}{s+1}Y_l+\frac{1}{s+1}Z_l & =Y_r, \qquad \text{for } r>l.\nonumber
\end{align}
Summing all the equations in the system \eqref{system} we obtain
\begin{equation*}
-x_l-X_l-Y_l+\frac{2s}{s+1}X_l-\frac{1}{s+1}Y_l+\frac{s}{s+1}Z_l= \sum_{r\neq l} Y_r,
\end{equation*}
and, since $\sum_{r\neq l}Y_r=\sum_{r}Y_r-Y_l=\sum_{i<j}a_{ij}-Y_l$, we deduce
\begin{equation}\label{xl}
x_l=\frac{s-1}{s+1}X_l-\frac{1}{s+1}Y_l+\frac{s}{s+1}Z_l-\sum_{i<j}a_{ij}.
\end{equation}
Summing \eqref{xl} over $l=1,\ldots, s$, and recalling \eqref{suma-x} with $y=0$, we get
\[ 0=\sum_l x_l =\sum_{l=1}^s\left(\frac{s-1}{s+1}X_l-\frac{1}{s+1}Y_l+\frac{s}{s+1}Z_l -\sum_{i<j}a_{ij} \right)=-2\sum_{i<j} a_{ij}. \]
Thus,
\begin{equation}\label{suma-a}
\sum_{i<j} a_{ij}=0,
\end{equation}
which is equivalent to
\[ X_l+Y_l+Z_l=0, \]
for all $l$. Now, replacing $Z_l=-X_l-Y_l$ in \eqref{xl} and using \eqref{suma-a} we arrive at
\begin{equation}\label{xl-bis}
x_l=-\frac{1}{s+1}X_l-Y_l.
\end{equation}
Now, if we replace this value of $x_l$ in the first equation of the system \eqref{system}, we arrive at
\begin{equation}\label{eq-2}
X_l+(s-1)Y_l=0,
\end{equation}
for all $l=1,\ldots,s$.
\medskip
\textsl{Claim:} $X_l=Y_l=Z_l=0$ for all $l=1,\ldots,s$. Observe that it suffices to prove that $X_l = Y_l = 0$.
\smallskip
The proof of the claim follows by induction on $l$. We begin with the case $l=1$. It is clear that $X_1=0$, and it follows from \eqref{eq-2} for $l=1$ that $Y_1=0$. The case $l=1$ is proved.
Next, fix $1<l\leq s$ and assume that $X_r=Y_r=Z_r=0$ for all $r<l$. With this hypothesis, the equations corresponding to $r<l$ in the system \eqref{system} can be written as
\[ x_l-a_{rl}+\frac{1}{s+1}X_l=0. \]
Substituting the value of $x_l$ from \eqref{xl-bis}, we obtain that
\[ Y_l=- a_{rl}. \]
Summing over $r<l$, we obtain
\[ (l-1)Y_l = -X_l,\]
which together with \eqref{eq-2} gives
\[ X_l=Y_l=0, \quad \text{for} \quad l\leq s-1.\]
For $l=s$ we simply need to point out that $Y_s=0$, due to the very definition of $Y_s$. It follows from \eqref{eq-2} that $X_s=0$ and hence the claim is proved.
\
We notice next that the claim just proved and \eqref{xl-bis} imply that $x_l=0$ for all $l$. Now it is clear from the system \eqref{system} that $a_{ij}=0$ for all $i<j$.
\end{proof}
\
\begin{lemma}\label{lema-final}
The subset $\mathcal{U}\cup \mathcal{V}$ of $\mathfrak{hol}^b(M)$ is linearly independent.
\end{lemma}
\begin{proof}
Analyzing the action on $\frg$ of each of the operators in $\mathcal{U}\cup \mathcal{V}$, it is easy to verify that the linear independence of $\mathcal{U}\cup \mathcal{V}$ is equivalent to the linear independence of $\mathcal{U}$ and $\mathcal{V}$ separately. Thus this result follows from Lemmas \ref{li-1} and \ref{li-2}.
\end{proof}
\
With these lemmas we are able to finally prove the main result of this section.
\begin{theorem}\label{hol-OT}
The holonomy group of the Bismut connection $\nabla^b$ on an OT manifold $M$ of type $(s,1)$ (hence of dimension $2s+2$) is $\operatorname{Hol}^b(M)=\operatorname{U}(s+1)$, for any $s\in\N$. Therefore, there is no reduction of the Bismut holonomy. \end{theorem}
\begin{proof}
If $s=1,2$, then $\mathfrak{hol}^b(M)= \mathfrak{u}(s+1)$, according to Lemma \ref{dim-baja}.
For $s\geq 3$, the cardinality of the subset $\mathcal{U}$ is $2s + \binom{s}{2} = \frac{s^2+3s}{2}$, whereas the cardinality of $\mathcal{V}$ is $s + 1 + \binom{s}{2} = \frac{s^2+s+2}{2}$. Therefore, the cardinality of the subset of $\mathfrak{hol}^b(M)$ given in Lemma \ref{lema-final} is \[\frac{s^2+3s}{2} + \frac{s^2+s+2}{2} = (s+1)^2,\]
so that $\dim\mathfrak{hol}^b(M)\geq (s+1)^2$. On the other hand, we know that $\mathfrak{hol}^b(M)$ is a subalgebra of $\mathfrak{u}(\frg)\cong \mathfrak{u}(s+1)$, so that $\dim\mathfrak{hol}^b(M)\leq (s+1)^2$. Therefore we arrive at
\[ \mathfrak{hol}^b(M)= \mathfrak{u}(\frg)\cong \mathfrak{u}(s+1).\]
Since $\operatorname{U}(s+1)$ is connected, this implies that $\operatorname{Hol}^b(M)=\operatorname{U}(s+1)$, for all $s\geq 1$.
\end{proof}
\
\section{Gauduchon connections and the Vaisman condition}
In the last section of the article we study the relation between the Gauduchon connections on an LCK manifold and the Vaisman condition. In fact, we prove that if the Lee form of a compact LCK manifold is non-zero and parallel with respect to a Gauduchon connection $\nabla^t$, then the LCK manifold is Vaisman and, moreover, $t=-1$. In other words, the Lee form can only be parallel with respect to the Bismut connection and in this case it is also parallel with respect to the Levi-Civita connection (recall Theorem \ref{theta-parallel}).
\begin{theorem}\label{gauduchon}
Let $(M,J,g)$ be a connected compact LCK manifold with corresponding Lee form $\theta$ and let $\{\nabla^t\}_{t\in\R}$ be the family of associated Gauduchon connections \eqref{canonical}. If $\nabla^t\theta=0$ for some $t\in\R$ then either:
\begin{enumerate}
\item[(a)] $\theta=0$, i.e., $(M,J,g)$ is K\"ahler (and therefore $\nabla^t=\nabla^g$ for all $t$), or
\item[(b)] $(M,J,g)$ is Vaisman and $t=-1$ (therefore $\nabla^{-1}=\nabla^{b}$ is the Bismut connection).
\end{enumerate}
\end{theorem}
\begin{proof}
As usual, let us denote by $A$ the vector field on $M$ which is $g$-dual to $\theta$. Then it follows from \eqref{canonical} that, for any $X,Y\in\mathfrak{X}(M)$,
\begin{align*}
(\nabla^t_X\theta)(Y) & = g(\nabla^t_X A,Y) \\
& = g(\nabla^g_X A,Y)-\frac{t-1}{4} d\omega (JX,JA,JY) -\frac{t+1}{4}d\omega (JX,A,Y).
\end{align*}
From \eqref{3-form} we obtain that $d\omega(JX,JA,JY)=c(X,A,Y)$, where $c$ is the torsion $3$-form of the corresponding Bismut connection. Taking now into account Corollary \ref{torsion-A} we arrive at $d\omega(JX,JA,JY)=0$ for any $X,Y$. As a consequence the expression for $(\nabla^t_X\theta)(Y)$ becomes
\begin{align*}
(\nabla^t_X\theta)(Y) & = g(\nabla^g_X A,Y)- \frac{t+1}{4}\theta\wedge \omega (JX,A,Y)\\
& =g(\nabla^g_X A,Y)- \frac{t+1}{4}\left(\theta(JX)g(JA,Y)+|A|^2g(X,Y)-g(A,X)g(A,Y)\right).
\end{align*}
Therefore, if $\nabla^t\theta=0$ for some $t\in\R$ then
\begin{equation}\label{nabla-t}
\nabla^g_X A = \frac{t+1}{4}\left(|A|^2X-\theta(X)A+\theta(JX)JA\right)
\end{equation}
for any vector field $X$ on $M$. Note that the $(1,1)$-tensor $\nabla^g A$ is symmetric (in accordance with $\theta$ being closed) and, moreover, it commutes with the complex structure $J$. Hence, it follows from \cite[Lemma 3]{MMO} that $A$ is holomorphic, i.e., $\mathcal{L}_AJ=0$.
\smallskip
Next, we observe that, since $\nabla^t$ is a metric connection and $\theta$ is $\nabla^t$-parallel, $|A|$ is a constant function. That is, $|A|=c$ for some $c\in \R$, $c\geq 0$. If $c=0$ then $\theta=0$ and therefore $(M,J,g)$ is K\"ahler.
\smallskip
Assume now $c>0$. We have proved that $A$ is holomorphic with constant length. According to \cite[Theorem 1(i)]{MMO}, we have that the compact LCK manifold $(M,J,g)$ is actually Vaisman, that is, $\nabla^gA=0$. Choosing $0\neq v\in T_pM$ for some $p\in M$ with $\theta_p(v)=\theta_p(J_pv)=0$, it follows from \eqref{nabla-t} that
\[ 0=\nabla^g_vA=\frac{t+1}{4}c^2v.\]
Thus $t=-1$, and the proof is complete.
\end{proof}
\medskip
\begin{example}
{\rm There exist compact LCK manifolds which admit a Hermitian connection with respect to which the Lee form is parallel, but the Hermitian structure is not Vaisman. Indeed, consider a solvmanifold $M=\Gamma\backslash G$ from Example \ref{example} equipped with the LCK structure exhibited there. We can define a Hermitian connection $\nabla$ on $G$ in terms of the basis $\{e_1,\ldots,e_4\}$ of left invariant vector fields simply by setting
\[ \nabla_{e_1}e_3=e_4, \quad \nabla_{e_1}e_4=-e_3, \quad \nabla_{e_i}e_j=0,\]
for all other possible choices of $(i,j)$. The Lee form on $G$ is parallel with respect to $\nabla$ and the same happens on $M$ with the induced connection.
}
\end{example}
\
We can generalize Theorem \ref{gauduchon} to a larger class of metric connections. Indeed, in \cite{OUV} a $2$-parameter family $\{\nabla^{\varepsilon,\rho}\mid (\varepsilon,\rho)\in \R^2\}$ of metric connections was introduced on any Hermitian manifold. Inspired by formula \eqref{canonical}, these connections are defined by
\begin{equation}\label{e-r}
g(\nabla^{\varepsilon,\rho}_XY,Z)=g(\nabla^g_XY,Z)-\varepsilon\, d\omega(JX,JY,JZ)-\rho\, d\omega(JX,Y,Z).
\end{equation}
Note that the Gauduchon connections $\nabla^t$ correspond to $\nabla^{\varepsilon,\rho}$ with $\varepsilon+\rho=\frac12$ and $t=1-4\varepsilon$. In particular, $\nabla^b=\nabla^{1/2,0}$; moreover, $\nabla^g=\nabla^{0,0}$.
It is clear that all these connections are metric, i.e., $\nabla^{\varepsilon,\rho}g=0$, since the expression $\varepsilon\, d\omega(JX,JY,JZ)+\rho\, d\omega(JX,Y,Z)$ is skew-symmetric in $Y,Z$. However, it is not true that they are all compatible with $J$: it was proved in \cite{OUV} that
\[ \nabla^{\varepsilon,\rho}J= -2\left(\varepsilon+\rho-\frac12\right)\nabla^gJ;\]
therefore, if $(M,J,g)$ is not K\"ahler, then $\nabla^{\varepsilon,\rho}$ is a Hermitian connection if and only if $\varepsilon+\rho=\frac12$, i.e., it is a Gauduchon connection.
\medskip
With the exact same proof as that of Theorem \ref{gauduchon} we can show the following result.
\begin{theorem}
Let $(M,J,g)$ be a connected compact LCK manifold with corresponding Lee form $\theta$ and consider the metric connections $\nabla^{\varepsilon,\rho}$ on $M$ defined as in \eqref{e-r}. If $\nabla^{\varepsilon,\rho}\theta=0$ for some $(\varepsilon,\rho)\in \R^2$ then either:
\begin{enumerate}
\item[(a)] $\theta=0$, i.e., $(M,J,g)$ is K\"ahler (and therefore $\nabla^{\varepsilon,\rho}=\nabla^g$ for all $(\varepsilon,\rho)$), or
\item[(b)] $(M,J,g)$ is Vaisman and $\rho=0$.
\end{enumerate}
\end{theorem}
\
We point out that on a Vaisman manifold $(M,J,g)$ with Lee form $\theta$ the line $\{\nabla^{\varepsilon, 0}\mid \varepsilon\in \R\}$ of metric connections goes through $\nabla^g$ (for $\varepsilon=0$) and $\nabla^b$ (for $\varepsilon=1/2)$ and, moreover, $\theta$ is parallel with respect to each one of them.
\begin{corollary}
On a Vaisman manifold $(M,J,g)$ with Lee form $\theta$ there exist infinitely many metric connections with respect to which $\theta$ is parallel.
\end{corollary}
\
The pyrite CrO$_2$ phase has been demonstrated to be a stable ferromagnetic (FM) half-metallic state occurring at a critical pressure of $\sim$45 GPa \cite{Li2012}. Here, we use the phonon spectrum, which is a useful way to investigate the stability and structural rigidity. The method of force constants has been used to calculate the phonon frequencies, as implemented in the PHONOPY package~\cite{Togo1,Togo2,Togo3}. We employ a
$3 \times 3 \times 3$ supercell with 108 Cr atoms and 216 O atoms to obtain the real-space force constants. Our result for the phonon dispersions at ambient pressure is shown in Fig.~\ref{figS1}. We find no imaginary frequencies over the entire BZ, demonstrating that pyrite CrO$_{2}$ is dynamically stable.
\section{The other Weyl points close to Fermi level between $(N$-$1)$th and $N$th bands }
Pyrite CrO$_2$ exhibits FM metallic rather than semimetallic features. Due to its complex topological electronic band structure, some other Weyl points between the $(N-1)$th and $N$th bands also appear close to the Fermi level. We find that there are three pairs of such Weyl points whose energies relative to the Fermi level are lower than 0.3 eV. Their positions in momentum space are $\frac{2\pi}{a}$(0.0011, 0.1414, 0.0842), $\frac{2\pi}{a}$(0.1419, 0.0844, 0.0010), and $\frac{2\pi}{a}$(0.0907, 0.0006, 0.1613), and they are the most relevant ones, located only 0.111, 0.137, and 0.141 eV below the Fermi level, respectively. Although we found a plethora of topological features formed by the $(N-1)$th and $N$th bands, these additional Weyl points and their associated Fermi arcs may overlap with the bulk states when projected onto a surface, such as the (001) or (110) surfaces. Hence, these Weyl points cannot contribute visible spectroscopic signatures of surface Fermi arcs.
\begin{figure}
\centering
\includegraphics[scale=0.3]{PHONON.pdf}
\caption{The phonon dispersions of pyrite CrO$_2$.\label{figS1}}
\end{figure}
\section{The topological features with magnetization along [001] direction}
Our first-principles calculations suggest that there are only tiny energy differences among all magnetic configurations in pyrite CrO$_2$, implying that an applied magnetic field can easily manipulate the spin-polarized direction. Therefore, we also perform calculations for magnetization along the [001] direction. When the FM magnetization is parallel to the [001] direction, the system reduces to the magnetic space group $D_{4h}(C_{4h})$ and the three-fold rotational symmetry $C_3$ is broken. Hence, the Weyl points arising from the splitting of the triply-degenerate points need not lie on the $\Gamma$-R axis. In this case, we only pay attention to the Weyl points between the $N$th and $(N+1)$th bands. There are five pairs of Weyl points formed at the boundary of the electron and hole pockets. Furthermore, the presence of an odd number of pairs of Weyl points between the $N$th and $(N+1)$th bands can be clarified by the product of the inversion eigenvalues of the $N$ occupied bands at the eight time-reversal-invariant momenta $k_{\mathrm{inv}}$ \cite{Hughes2011}, as
\begin{equation}
\chi_{P}=\prod\limits_{k_{\mathrm{inv}};i\in \mathrm{occ}} \zeta_i (k_{\mathrm{inv}}).
\end{equation}
Our calculations show that the value of $\chi_{P}$ is $-1$, implying that the system may be a WSM hosting an odd number of pairs of Weyl points. In pyrite CrO$_2$ with magnetization along the [001] direction, five pairs of Weyl points between the $N$th and $(N+1)$th bands are present. Their precise positions in momentum space, Chern numbers, and energies relative to the Fermi level $E_F$ are listed in Table \ref{tableS}.
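As a minimal illustration of how the parity criterion above is evaluated, the sketch below computes $\chi_P$ as the product of the inversion eigenvalues $\zeta_i(k_{\mathrm{inv}})$ of the occupied bands over the eight time-reversal-invariant momenta. The eigenvalues listed here are placeholders for illustration only; the actual values must be read off from the first-principles wave functions.

```python
# Hypothetical inversion eigenvalues (+1/-1) of the occupied bands at the
# eight time-reversal-invariant momenta of a simple cubic lattice.
# These numbers are placeholders, NOT the ab initio values for CrO2.
parities = {
    "Gamma": (+1, -1, -1),
    "X1": (+1, +1, +1), "X2": (+1, +1, +1), "X3": (+1, +1, +1),
    "M1": (-1, +1, +1), "M2": (+1, +1, +1), "M3": (+1, +1, +1),
    "R":  (+1, +1, +1),
}

chi_P = 1
for k_inv, zetas in parities.items():
    for zeta in zetas:
        chi_P *= zeta

print(chi_P)  # -1 for this toy data: an odd total number of -1 eigenvalues
```

A value of $-1$ signals an odd number of pairs of Weyl points between the corresponding bands, as discussed in the text.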
\begin{table}
\caption{
The Weyl points between $N$th and $(N+1)$th bands with magnetization along [001] direction. The positions (in reduced coordinates $k_x$, $k_y$, and $k_z$), Chern numbers, and the energies relative to $E_F$ are listed.
The coordinates of the other WPs are related to the ones listed by the $I$ symmetry.}
\begin{tabular}{p{1.0cm}|p{4.0cm}p{1.4cm}p{1.4cm}}
\hline
\hline
Weyl & \centering Coordinates [$k_x(2\pi/a)$, &\centering Chern & $E-E_F$ \\
points & \centering $k_y(2\pi/a)$, $k_z(2\pi/a)$] &\centering number & (meV) \\
\hline
1 &\centering (0.0821, 0.0, -0.0549) &\centering $-1$ &{\centering 19} \\
2 &\centering (0.0, 0.0, 0.0159) &\centering $+1$ & 45 \\
3 &\centering (0.0821, 0.0, 0.0549) &\centering $+1$ &{\centering 19} \\
4 &\centering (0.0, 0.0549, 0.0821) &\centering $-1$ &{\centering 19} \\
5 &\centering (0.0, 0.0549, -0.0821) &\centering $+1$ &{\centering 19} \\
\hline
\hline
\end{tabular}
\label{tableS}
\end{table}
\section{The triply-degenerate points in the absence of spin-orbit coupling}
In the absence of spin-orbit coupling, the symmetry group is the point group $T_h$, which contains four three-fold rotation ($C_3$) axes along [111], $[1\bar{1}1]$, $[11\bar{1}]$, and $[\bar{1}11]$, the inversion $I$, and three mirror symmetries $M_x$, $M_{y}$, and $M_z$. The mirror symmetries act as
\begin{equation}
\begin{split}
M_x: (x, y, z) \rightarrow (-x, y, z),\\
M_y: (x, y, z) \rightarrow (x, -y, z),\\
M_z: (x, y, z) \rightarrow (x, y, -z),\\
\end{split}
\end{equation}
and $C_3^{111}$ and the product $IM_x M_y M_z$ of inversion $I$ and the mirror reflection symmetries leave every momentum point along the $\Gamma$-R (or $\mathbf{k}\parallel [111]$) axis invariant. Hence, at each point along the $\Gamma$-R axis, the Bloch states that form a possibly degenerate eigenspace (band) of the Hamiltonian must be invariant under $C_3^{111}$ and $IM_x M_y M_z$. Without SOC, there are three eigenvalues of the $C_3$ rotational symmetry, namely, $e^{-i \frac{2\pi}{3}}$, $e^{i \frac{2\pi}{3}}$, and 1, and we denote the corresponding eigenstates as $\psi_{1}$, $\psi_{2}$, and $\psi_{3}$, respectively. Using the basis ($\psi_{1}$, $\psi_{2}$, $\psi_{3}$), the representation of an operator $O$ is determined as
\begin{equation}
O_{ij}=\langle \psi_{i}|O|\psi_{j}\rangle,
\end{equation}
so $C_3 ^{111}$ and mirror symmetries $M_x$, $M_y$, and $M_z$ can be expressed as
\begin{equation}
C_3 ^{111}=\mathrm{diag}\{e^{-i \frac{2\pi}{3}}, e^{i \frac{2\pi}{3}}, 1\},
\end{equation}
\begin{equation}
M_x=\left(
\begin{array}{ccc}
0 & -1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1 \\
\end{array}
\right),
\end{equation}
\begin{equation}
M_y=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1 \\
\end{array}
\right),
\end{equation}
\begin{equation}
M_z=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right).
\end{equation}
It can be seen that $C_3^{111}$ and $M_i$ ($i=x$, $y$, $z$) do not commute with each other, so $C_3^{111}$ and $M_i$ cannot be simultaneously diagonalized. Therefore, in the absence of SOC, along the $\Gamma$-R (or $\mathbf{k}\parallel [111]$) axis, the three bands with the three different eigenvalues of $C_3^{111}$ always appear as a singly-degenerate band ($\psi_{3}$) and a doubly-degenerate band ($\psi_{1}$ and $\psi_{2}$). If the singly-degenerate and the doubly-degenerate bands cross each other accidentally, a triply-degenerate node will form because their different $C_3^{111}$ eigenvalues prohibit hybridization \cite{Changarxiv}. When spin-orbit coupling is considered, the triply-degenerate node splits into Weyl points in a manner that depends on the magnetic space group.
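The non-commutativity can be checked explicitly from the matrices above; for instance, comparing the (1,2) entries of the two products,
\begin{equation}
\bigl(C_3^{111} M_x\bigr)_{12}=-e^{-i\frac{2\pi}{3}},\qquad
\bigl(M_x C_3^{111}\bigr)_{12}=-e^{i\frac{2\pi}{3}},
\end{equation}
so that $[C_3^{111},M_x]_{12}=e^{i\frac{2\pi}{3}}-e^{-i\frac{2\pi}{3}}=i\sqrt{3}\neq 0$.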
\section{Introduction}
The coalescence of black hole (BH)-neutron star (NS) binaries is one
of the most promising sources for kilometer-size laser-interferometric
gravitational-wave detectors such as LIGO \cite{LIGO} and VIRGO
\cite{VIRGO}. A statistical study based on the stellar evolution
synthesis suggests that the detection rate of gravitational waves from
BH-NS binaries will be about 1/20--1/3 of the rate expected for the merger
of the NS-NS binaries in the universe \cite{BHNS,Bel}. Then,
the detection rate of such systems will be $\sim 0.5$--50
events per year for advanced detectors such as advanced-LIGO
\cite{advLIGO}, and hence, the detection is expected to be achieved in
the near future. For clarifying the nature of the sources of
gravitational waves and for extracting their physical information,
theoretical templates of gravitational waves are necessary
for the data analysis. For theoretically computing gravitational
waveforms for the late inspiral and merger phases of BH-NS binaries,
numerical relativity is the unique approach.
The final fate of coalescing BH-NS binaries is divided into two cases
depending primarily on the BH mass. When the BH mass is small enough,
the companion NS will be tidally disrupted before it is swallowed by
the BH. By contrast, when the BH mass is large enough, the NS will be
swallowed by the BH without tidal disruption. The latest study for the
BH-NS binary in a quasiequilibrium indicates that the tidal disruption
of the NSs by a nonspinning BH will occur for the case that the BH mass
is $\alt 4M_{\odot}$, for the hypothetical mass and radius of the NS,
$M_{\rm NS}\sim 1.35M_{\odot}$ and $R_{\rm NS}\sim 11$--12 km,
respectively \cite{TBFS,TBFS2,FKPT}. The tidally disrupted NSs may
form a disk or a torus around the BH if the tidal disruption occurs
outside the innermost stable circular orbit (ISCO). A system
consisting of a rotating BH surrounded by a massive, hot torus has
been proposed as one of the likely sources for the central engine of
gamma-ray bursts with a short duration \cite{grb2} (hereafter
SGRBs). Hence, the merger of a low-mass BH and its companion NS can be
a candidate of the central engine. According to the observational
results by the {\it Swift} and {\it HETE-2} satellites \cite{Swift},
the total energy of the SGRBs is larger than $\sim 10^{48}$ ergs, and
typically $10^{49}$--$10^{50}$ ergs. The studies of hypercritical,
neutrino-dominated accretion disks around a BH suggest that the disk mass
should be larger than $\sim 0.01M_{\odot}$ to provide such a high
energy for generating gamma-rays via neutrino processes
\cite{GRBdisk,GRBdisk1}. The question is whether or not the mass and
thermal energy of the disk (torus) are large enough for driving the
SGRBs of the huge total energy. Numerical-relativity simulation plays
an important role for answering this question.
In the last three years, general relativistic numerical simulations
for the inspiral and merger of BH-NS binaries have been performed by
three groups \cite{SU06,ST08,ILLINOIS,SACRA,CORNELL}. However, most of
these previous works should be regarded as preliminary ones because
the simulations were performed only for a short time scale: In the
early works \cite{SU06,ST08,ILLINOIS}, the inspiral motion is followed
for at most two orbits. With such short-term simulations, the orbit is
not guaranteed to be quasicircular at the onset of the merger, and
hence, the obtained results are not likely to be very realistic
because of the presence of the unrealistic eccentricity and incorrect
approaching velocity. Moreover, by such incorrect conditions at the
onset of the merger, the final outcome such as mass and spin of the BH
and the physical condition for the disk surrounding the BH is not likely
to be computed correctly, although a rough qualitative feature of the
merger mechanism and gravitational waveforms have been found from
these works. In the latest works \cite{SACRA,CORNELL}, the inspiral
motion is followed for $\sim 4$ orbits, but in each of these works,
the simulation was performed only for one model. More systematic study
is obviously necessary to clarify the quantitative details about the
merger process, the final outcome, and gravitational waveforms.
In this paper, we report our latest work in which a long-term
simulation of BH-NS binaries is performed for a wide variety of BH
masses, NS masses, and NS radii for the first time. In the present
simulation, the inspiral motion is followed for 4--7 orbits. With this
setting, the eccentricity of the last inspiral motion appears to be
negligible and the approaching velocity at the onset of the merger is
correctly taken into account. Furthermore, we systematically choose
the BH and NS masses and NS radii for a wide range, and as a result,
it becomes possible to clarify the dependence of the merger process,
final outcome, and gravitational waveforms on the mass ratio of the
binary and the compactness of the NS. One drawback in the present work
is that the NSs are modeled by a simple equation of state (EOS).
However, the various features found in this paper will qualitatively
hold irrespective of the EOS and thus the present work will be an
important first step toward more detailed simulation in the near
future in which the NSs are modeled by more realistic EOSs.
The paper is organized as follows. Section II summarizes the initial
conditions chosen in this paper. Section III briefly describes the
formulation and methods for the numerical simulation. Section IV
presents the numerical results of the simulation focusing primarily on
the dependence of the merger mechanism and gravitational waveforms on
the mass ratio and radius of NSs. Section V is devoted to a summary.
Throughout this paper, the geometrical units of $c=G=1$, where $c$ and
$G$ are the speed of light and gravitational constant, are used unless
otherwise stated. The irreducible mass of a BH, rest mass of an NS,
gravitational mass and circumferential radius of an NS in isolation,
Arnowitt-Deser-Misner (ADM) mass of system, and sum of the BH and NS
mass (often referred to as the total mass) are denoted by $M_{\rm
BH}$, $M_*$, $M_{\rm NS}$, $R_{\rm NS}$, $M$, and $m_0 (=M_{\rm
BH}+M_{\rm NS})$, respectively. The mass ratio of the binary and the
compactness of the NS are defined by $Q \equiv M_{\rm BH}/M_{\rm NS}$
and ${\cal C}\equiv GM_{\rm NS}/c^2R_{\rm NS}$, respectively. Note
that $M_{\rm BH}$ is equal to the ADM mass of the BH in isolation for
a nonspinning BH. Greek indices and Latin indices denote the spacetime
and space components, respectively; Cartesian coordinates are used
for the spatial coordinates.
\section{Initial condition}
BH-NS binaries in a quasiequilibrium state are employed as an initial
condition of the numerical simulation. Following our previous papers
\cite{SU06,ST08}, the quasiequilibrium state is computed in the
moving-puncture framework \cite{BB,BB2}. The formulation in this
framework is slightly different from that in the excision framework
which is adopted in most of the previous
works~\cite{GRAN,TBFS0,TBFS,TBFS2,FKPT}, although both formulations
are based on the conformal-flatness formalism for the three-metric
\cite{IWM}. In the present work, we adopt the same basic equations as
those in Ref.~\cite{SU06}. However, we change the condition for
defining the center of mass of system to improve the quality of the
quasiequilibrium (specifically, to reduce the orbital eccentricity).
To clarify which part is changed, we summarize the basic equations and
method for determining the quasiequilibrium state again in the
following. Detailed numerical solutions and their properties are
presented in an accompanying paper \cite{KST}, to which the reader may
refer for details.
\subsection{Formulation}
If the orbital separation of a BH-NS binary is large enough, the time
scale of gravitational-wave emission ($\tau_{\rm GW}$) is much longer
than the orbital period ($P_0$). In the present work, we follow the
inspiral motion of the BH-NS binaries for more than 4 orbits
(typically $\agt 5$ orbits). This implies that the initial binary
separation is always large enough to satisfy the condition, $\tau_{\rm
GW} \gg P_0$. Thus, the binaries initially given should be in a
quasicircular orbit (i.e., the BH and NS are approximately in an
equilibrium in the comoving frame with an angular velocity
$\Omega$). To obtain such a state, we may assume the presence of a
helical Killing vector around the center of mass of the system as
follows, \begin{equation} \ell^{\mu}=(\partial_t)^{\mu} +\Omega (\partial_{\varphi})^{\mu},
\end{equation} where $\Omega$ denotes the orbital angular velocity.
We assume that the NSs have the irrotational velocity field because it
is believed to be realistic for the BH-NS binaries in close orbits
\cite{KBC}. For the fluid of the irrotational velocity field, the
Euler and continuity equations reduce to a first integral of motion
and an elliptic-type equation for the velocity potential in the
presence of the helical Killing vector \cite{ST}. As a result, the
density profile and velocity field are determined by solving these
hydrostatic equations.
For computing a solution of the geometric variables of a
quasiequilibrium state, we employ the so-called conformal-flatness
formalism for the three-geometry \cite{IWM}. In this formalism, a
solution is obtained by solving Hamiltonian and momentum constraint
equations, and an equation for the lapse function ($\alpha$) which is
derived by imposing the maximal slicing condition as $K=0=\partial_t K$
where $K$ is the trace part of the extrinsic curvature $K_{ij}$.
These equations lead to the equations for the conformal factor $\psi$, a
rescaled tracefree extrinsic curvature $\hat A_{i}^{~j} \equiv \psi^6
K_{i}^{~j}$, and a weighted lapse $\Phi\equiv \alpha\psi$ as
\begin{eqnarray}
&&\Delta \psi = -2\pi \rho_{\rm H} \psi^5 -{1 \over 8} \hat A_{i}^{~j}
\hat A_{j}^{~i}\psi^{-7}, \label{ham2} \\ && \hat A^{~j}_{i~,j} = 8\pi
J_i \psi^6,\label{mom2} \\ && \Delta \Phi = 2\pi \Phi \Big[\psi^4
(\rho_{\rm H} + 2 S) +{7 \over 16\pi} \psi^{-8}\hat A_{i}^{~j} \hat
A_{j}^{~i}\Big],
\label{alpsi}
\end{eqnarray}
where $\Delta$ denotes the flat Laplacian,
$\rho_{\rm H}=\rho h (\alpha u^t)^2-P$, $J_i=\rho h \alpha u^t u_i$,
and $S=\rho h [(\alpha u^t)^2-1]+3P$. $\rho$ is the rest-mass density,
$h$ is the specific enthalpy defined by $1+\varepsilon+P/\rho$,
$\varepsilon$ is the specific internal energy, $P$ is the pressure, and
$u^{\mu}$ is the four-velocity.
For the relation among $\rho$, $\varepsilon$, and $P$, we adopt the
polytropic EOS as,
\begin{equation}
P=\kappa \rho^{\Gamma},\label{EOSEOS}
\end{equation}
where $\kappa$ is the adiabatic constant and $\Gamma$ the adiabatic
index for which we choose 2 in this paper. In this EOS,
$\varepsilon$ is given by $P/[(\Gamma-1)\rho]$.
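For the $\Gamma=2$ case adopted here, these relations combine into a particularly simple closed form for the specific enthalpy:
\begin{equation}
h = 1+\varepsilon+\frac{P}{\rho}
= 1+\frac{\kappa\rho^{\Gamma-1}}{\Gamma-1}+\kappa\rho^{\Gamma-1}
= 1+2\kappa\rho \quad (\Gamma=2).
\end{equation}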
We solve the elliptic equations (\ref{ham2})--(\ref{alpsi})
in the moving-puncture framework \cite{BB,BB2,BB4}. Assuming that
the puncture is located at ${\mbox{\boldmath$r$}}_{\rm P}(=x_{\rm P}^k)$,
we set $\psi$ and $\Phi$ as
\begin{eqnarray}
\psi=1+{M_{\rm P} \over 2 r_{\rm BH}} + \phi
\ \ \mbox{and} \ \
\Phi=1 - {M_{\Phi} \over r_{\rm BH}} + \eta, \label{eq6}
\end{eqnarray}
where $M_{\rm P}$ and $M_{\Phi}$ are positive constants of mass dimension,
and $r_{\rm BH}=|x^k_{\rm BH}|$ ($x^k_{\rm BH}=x^k-x^k_{\rm P}$). Then,
substituting Eq. (\ref{eq6}) into Eqs. (\ref{ham2}) and (\ref{alpsi}),
elliptic equations for new non-singular functions $\phi$ and $\eta$ are
derived.
The mass parameter $M_{\rm P}$ may be arbitrarily given, and thus,
it is appropriately chosen to obtain a desired BH mass. For a
given value of $M_{\rm P}$, $M_{\Phi}$ is determined by the virial
relation, which should hold for stationary spacetimes
(e.g., Ref.~\cite{vir}), as
\begin{eqnarray}
\oint_{r \rightarrow \infty} \partial_i \Phi dS^i=
-\oint_{r \rightarrow \infty} \partial_i \psi dS^i=2\pi M_0,
\end{eqnarray}
where $M_0$ is the initial ADM mass of the system.
For a solution obtained in this formalism, $\alpha$ always becomes
negative near the puncture (approximately, inside the apparent horizon),
but such a lapse is not favorable for numerical simulation. Thus,
following previous papers \cite{SU06,ST08}, the initial condition for
$\alpha$ is appropriately modified so as to satisfy the condition
$\alpha > 0$ everywhere.
Equation (\ref{mom2}) is rewritten by setting
\begin{eqnarray}
\hat A_{ij}(=\hat A_i^{~k}\delta_{jk})
=W_{i,j}+W_{j,i}-{2 \over 3}\delta_{ij} \delta^{kl}
W_{k,l}+K^{\rm P}_{ij},\label{hataij}
\end{eqnarray}
where $W_i(=W^i)$ denotes an auxiliary three-dimensional function and
$K^{\rm P}_{ij}$ denotes a weighted extrinsic curvature
associated with the linear momentum of a BH;
\begin{eqnarray}
K^{\rm P}_{ij}={3 \over 2 r_{\rm BH}^2}\biggl(n_i P_j +n_j P_i
+(n_i n_j-\delta_{ij}) P_k n^k \biggr).
\end{eqnarray}
Here, $n_k=n^k=x^k_{\rm BH}/r_{\rm BH}$ and $P_i(=P^i)$
denotes the linear momentum of the BH, determined from the condition
that the total linear momentum of system is zero as
\begin{eqnarray}
P_i=-\int J_i \psi^6 d^3x. \label{P}
\end{eqnarray}
The right-hand side of Eq. (\ref{P}) denotes the linear momentum
of the companion NS.
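This weighted extrinsic curvature is of the standard Bowen-York form; as a consistency check (using $\int n_i n_k\, d\Omega=(4\pi/3)\delta_{ik}$ and the fact that $K=0$ on the maximal slice), the ADM linear-momentum surface integral indeed evaluates to $P_i$:
\begin{equation}
\frac{1}{8\pi}\oint_{r_{\rm BH}\rightarrow\infty} K^{\rm P}_{ij}\,dS^j
=\frac{3}{16\pi}\int \bigl(P_i + n_i n_k P^k\bigr)\,d\Omega
=\frac{3}{16\pi}\Bigl(4\pi+\frac{4\pi}{3}\Bigr)P_i = P_i,
\end{equation}
where we used $K^{\rm P}_{ij}n^j=(3/2r_{\rm BH}^2)(P_i+n_i n_k P^k)$ and $dS^j=n^j r_{\rm BH}^2\,d\Omega$.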
Then, the total angular momentum of the system is derived from
\begin{eqnarray}
J=\int J_{\varphi} \psi^6 d^3x + \epsilon_{zjk} x_{\rm P}^j P^k,
\end{eqnarray}
where we assume that the $z$ axis is the axis of orbital rotation.
Substituting Eq. (\ref{hataij}) into Eq. (\ref{mom2}),
an elliptic equation for $W_i$ is derived in the form
\begin{eqnarray}
\Delta W_i + {1 \over 3}\partial_i \partial_k W^k=8\pi J_i \psi^6. \label{weq}
\end{eqnarray}
Denoting $W_i=7 B_i - (\chi_{,i}+B_{k,i} x^k)$ where $\chi$ and $B_i$
are auxiliary functions,
we decompose Eq. (\ref{weq}) into two linear elliptic equations
\begin{equation}
\Delta B_i = \pi J_i \psi^6~~{\rm and}~~\Delta \chi= -\pi J_i x^i \psi^6.
\end{equation}
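This decomposition can be verified directly. Using $\Delta B_i=\pi J_i\psi^6$, $\Delta\chi=-\pi J_k x^k \psi^6$, and the identity $\Delta(B_{k,i}x^k)=x^k\partial_i(\Delta B_k)+2\partial_i\partial_k B^k$, one finds
\begin{equation}
\Delta W_i = 8\pi J_i \psi^6 - 2\partial_i\partial_k B^k, \qquad
\partial_k W^k = 6\,\partial_k B^k,
\end{equation}
so that $\Delta W_i + \frac{1}{3}\partial_i \partial_k W^k = 8\pi J_i \psi^6$, in agreement with Eq. (\ref{weq}).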
To determine a BH-NS binary in quasiequilibrium states, in addition,
we have to determine the shift vector, $\beta^i$.
The reason for this is that $\beta^i$ appears in the hydrostatic equations
\cite{KST}. In the conformal-flatness formalism, the relation between
$\beta^i$ and $\hat A_{ij}$ is written as
\begin{eqnarray}
\delta_{jk} \partial_i \beta^k +\delta_{ik} \partial_j \beta^k
-{2\over 3}\delta_{ij} \partial_k \beta^k ={2\alpha \over \psi^6} \hat A_{ij}.
\label{eq13}
\end{eqnarray}
Applying $\delta^{jl}\partial_l$ to this equation, an elliptic equation
is derived:
\begin{eqnarray}
\Delta \beta^i + {1 \over 3} \delta^{ik} \partial_k\partial_j\beta^j
=2 \partial_j (\alpha \psi^{-6}) \hat A^{ij}+16\pi\alpha J_j \delta^{ij}.
\label{betaeq}
\end{eqnarray}
For $\hat A_{ij}$ in the right-hand side of this equation, we
substitute the relation of Eq. (\ref{hataij}) [not Eq. (\ref{eq13})].
As a result, no singular term appears in the right-hand side of
Eq. (\ref{betaeq}), and thus, $\beta^i$ is solved in the same manner as
that for $W_i$.
In this formulation, the elliptic equations for the
gravitational-field components, $\phi$, $\eta$, $B_i$, $\chi$, and
$\beta^i$, and the velocity potential have to be solved. For a
numerical solution of them, we use the LORENE library \cite{LORENE},
by which a high-precision numerical solution can be computed using the
spectral method. We note that in the puncture framework, we basically
do not have to impose an inner boundary condition around the BH. In
this point, the moving-puncture framework differs from the excision
framework, and this may be a demerit of the moving-puncture framework
because a physical condition may not be imposed on the BH. However,
this may be also a merit of this framework because there remains a
degree of freedom which can be used to adjust the property of the
quasiequilibrium to a desired state, as mentioned in the following.
\begin{figure*}[t]
\epsfxsize=3.2in
\leavevmode
(a)\epsffile{fig1a.ps}
\epsfxsize=3.2in
\leavevmode
~~(b)\epsffile{fig1b.ps}
\vspace{-2mm}
\caption{Evolution of the orbital separation (a) for model M20.145 and (b)
  for model M50.145. The solid and dotted curves denote the results
  with the initial conditions obtained in the 3PN-J and
  $\beta^{\varphi}$ conditions, respectively. The dashed curve in
  panel (a) denotes the result for model M20.145N. To align the curves
  at the onset of the merger, the time is appropriately shifted for the
  results of M20.145N, M20.145b, and M50.145b. Note that the merger
  sets in when the orbital separation becomes $\sim 5M_0$. The
  binaries for models M20.145 and M50.145 spend $\sim 5$ and 7.5 orbits
  in the inspiral phase, respectively (see Sec. IV), whereas
  those for models M20.145b and M50.145b spend only $\sim 4$ and
  5.25 orbits, respectively.
\label{FIG1}}
\end{figure*}
The final remaining task in the moving-puncture framework is to
determine the center of mass of system. The issue in this framework is
that we do not have any natural physical condition for determining it.
(By contrast, the condition is automatically derived in the excision
framework \cite{TBFS2}, although it is not clear whether the condition
is really physical and whether the resulting quasiequilibrium is a
quasicircular state; see, e.g., Ref.~\cite{BERTI} which assesses the
circularity of the quasiequilibrium.) In our first paper \cite{SU06},
we employed a condition that the dipole part of $\psi$ at spatial
infinity is zero. However, in this method, the angular momentum
derived for a close orbit of $m_0 \Omega \agt 0.03$ is by $\sim 2\%$
smaller than that derived by the third post-Newtonian (3PN)
approximation \cite{Luc}. (Note $m_0=M_{\rm BH} + M_{\rm NS}$.)
Because the 3PN approximation should be an excellent approximation of
general relativity for describing a binary of a fairly distant orbit
as $m_0 \Omega \approx 0.03$, we should consider that the obtained
initial data deviates from the true quasicircular state, and thus, the
initial orbit would be eccentric. Such initial condition is not
suitable for quantitatively accurate numerical simulation of
the inspiraling BH-NS binaries.
In the subsequent work \cite{ST08}, we adopted a condition that the
azimuthal component of the shift vector $\beta^{\varphi}$ at the
location of the puncture (${\mbox{\boldmath$r$}}={\mbox{\boldmath$r$}}_{\rm P}$) is equal to $-\Omega$;
i.e., we imposed a corotating gauge condition at the location of the
puncture. In the following, we refer to this condition as
``$\beta^{\varphi}$ condition''. This is slightly better than the
original condition, but the angular momentum derived for a close orbit
of $m_0 \Omega \agt 0.03$ is still by $\agt 1\%$ smaller than that
derived by the 3PN relation for a large mass ratio $Q \geq 2$ (see
Ref.~\cite{KST} for detailed numerical results). The disagreement is
larger for the larger mass ratio. As a result of this, the initial
condition is likely to deviate from the true quasicircular state and
hence the initial orbital eccentricity is also not negligible (see
Sec. IV C of Ref.~\cite{SACRA} for numerical evolution of such initial
data), in particular, for binaries of large mass ratios. This also
suggests that the $\beta^{\varphi}$ condition is not suitable for
deriving a realistic quasicircular state.
If the simulations are started with a sufficiently large orbital
separation, the eccentricity, which is initially $\sim 0.1$, will
decrease to $\sim 0.01$ within several orbits, because gravitational
radiation reaction has a strong effect to reduce the orbital
eccentricity \cite{PM}. However, the separation has to be large enough
for the large initial eccentricity to be reduced. This implies that a
long-term simulation, which is not computationally favored, is
required. (As we illustrate in this paper, the eccentricity is not
sufficiently suppressed to $\alt 0.01$ in $\sim 5$ orbits if we adopt
the initial condition obtained in the $\beta^{\varphi}$ condition.)
In this paper, we employ a new condition in which the center of mass
is determined in a phenomenological manner: We impose the condition
that the total angular momentum of the system for a given value of
$m_0\Omega$ agrees with that derived by the 3PN approximation (see
Ref.~\cite{Husa} for a similar concept). This can be achieved by
appropriately choosing a hypothetical position of the center of mass
(it may be better to refer to this position not as ``the center of
mass'' but simply as ``the rotation axis''). With this method, the
total energy of the system does not agree completely with that derived
by the 3PN approximation \cite{KST}, and thus, the eccentricity does
not become zero. However, the resulting eccentricity in this condition
is much smaller than that in the $\beta^{\varphi}$ condition (see
Fig.~\ref{FIG1}), and thus, for a moderately long-term simulation (in
$\sim 2$--3 orbits), the effect of the eccentricity is suppressed to
an acceptable level for a scientific discussion, as shown in
Sec.~\ref{sec4}. We refer to this condition as ``3PN-J condition'' in
the following.
Figure \ref{FIG1} plots evolution of the orbital separation for ${\cal
C}=0.145$ and for $Q=2$ and 5. Here, the separation is defined as the
coordinate separation between the positions of the puncture and of the
maximum density of the NS, $r_{\rm sep}=|x^i_{\rm sep}|$; see
Eq. (\ref{eqsep}). For the results with the initial condition derived
in the $\beta^{\varphi}$ condition, the separation oscillates strongly
with time and the amplitude of this oscillation is still conspicuous
even after 3--4 orbital motions. Consequently, the eccentricity is not
negligible even at the last orbit just before the merger. By contrast,
for the case that the initial condition is derived in the 3PN-J
condition, the amplitude of the oscillation is much smaller and,
furthermore, this oscillation damps to an inconspicuous level within
about two orbital motions. Although the
eccentricity does not become exactly zero, it is within an acceptable
level at the last orbit just before the merger (see, e.g.
gravitational waveforms shown in Sec. \ref{sec:gw}).
We note that the coordinate separation is a gauge-dependent quantity,
and hence, the discussion here is not based on a truly physical
quantity; a physical quantity such as the orbital eccentricity cannot
be rigorously extracted from it. However, a similar quantitative
feature is seen if we plot the evolution of the gravitational-wave
frequency as a function of time (which is a physical quantity); the
oscillation amplitude of the orbital separation indicates the magnitude
of the orbital eccentricity at least approximately (see Sec.
\ref{sec:gw} for a more physical analysis).
\subsection{Chosen models}
\begin{table*}[t]
\caption{Key parameters for the initial conditions adopted for the
numerical simulation in units of $\kappa=1$. The mass ratio
($Q=M_{\rm BH}/M_{\rm NS}$), BH mass ($M_{\rm BH}$), mass parameter of
the puncture ($M_{\rm p}$), rest mass of the NS ($M_{*}$), mass
($M_{\rm NS}$) and compactness (${\cal C}=M_{\rm NS}/R_{\rm NS}$:
$R_{\rm NS}$ is the circumferential radius) of the NS when it is in
isolation, maximum density of the NS ($\rho_{\rm max}$), ADM mass of
the system ($M_0$), total angular momentum ($J_0$) in units of
$M_0^2$, orbital period ($P_0$) in units of $M_0$, and orbital angular
velocity ($\Omega_0$) in units of $m_0^{-1}$. The first and second
numerical values described for the model name (the first column)
denote the values of $Q$ and ${\cal C}$, respectively.
The first 10 models are computed in the 3PN-J condition and the
last 4 are in the $\beta^{\varphi}$ condition.}
\begin{tabular}{cccccccccccc} \hline
3PN-J &&&&&&&&&&& \\
~~~Model~~~
& ~~$Q$~~ & ~~~$M_{\rm BH}$~~~ & ~~~$ M_{\rm p}$~~~ & ~~~~$M_{*}$~~~
& ~~~$M_{\rm NS}$~~~ & $M_{\rm NS}/R_{\rm NS}$ & ~~~$\rho_{\rm max}$~~~
& ~~~$M_0$~~~ & ~~$J_0/M_0^2$~~ & ~$P_0/M_0$~ & ~~$m_0\Omega_0$~~ \\ \hline
M15.145 & 1.5 & 0.2093 & 0.2043 & 0.1500 & 0.1395 & 0.145
& 0.1262 & 0.3452 & 0.8393 & 212.5 & 0.02987 \\ \hline
M20.145 & 2 & 0.2790 & 0.2736 & 0.1500 & 0.1395 & 0.145
& 0.1261 & 0.4146 & 0.8603 & 213.7 & 0.02968 \\ \hline
M20.145N& 2 & 0.2790 & 0.2732 & 0.1500 & 0.1395 & 0.145
& 0.1260 & 0.4143 & 0.8418 & 191.8 & 0.03309 \\ \hline
M20.160 & 2 & 0.2957 & 0.2900 & 0.1600 & 0.1478 & 0.160
& 0.1512 & 0.4394 & 0.8608 & 214.4 & 0.02958 \\ \hline
M20.178 & 2 & 0.3119 & 0.3059 & 0.1700 & 0.1560 & 0.178
& 0.1890 & 0.4635 & 0.8582 & 211.4 & 0.03001 \\ \hline
M30.145 & 3 & 0.4185 & 0.4121 & 0.1500 & 0.1395 & 0.145
& 0.1255 & 0.5534 & 0.7091 & 192.2 & 0.03298 \\ \hline
M30.160 & 3 & 0.4435 & 0.4367 & 0.1600 & 0.1478 & 0.160
& 0.1504 & 0.5864 & 0.7096 & 192.9 & 0.03285 \\ \hline
M30.178 & 3 & 0.4679 & 0.4608 & 0.1700 & 0.1560 & 0.178
& 0.1878 & 0.6187 & 0.7103 & 193.8 & 0.03269 \\ \hline
M40.145 & 4 & 0.5580 & 0.5512 & 0.1500 & 0.1395 & 0.145
& 0.1252 & 0.6926 & 0.6042 & 192.1 & 0.03294 \\ \hline
M50.145 & 5 & 0.6975 & 0.6905 & 0.1500 & 0.1395 & 0.145
& 0.1249 & 0.8318 & 0.5238 & 191.8 & 0.03296 \\ \hline
$\beta^{\varphi}$ &&&&&&&&&&& \\
Model & $Q$ & $M_{\rm BH}$ & $ M_{\rm p}$ & $M_{*}$
& $M_{\rm NS}$ & $M_{\rm NS}/R_{\rm NS}$ & $\rho_{\rm max}$
& $M_0$ & $J_0/M_0^2$ & $P_0/M_0$ & $m_0\Omega_0$ \\ \hline
M20.145b & 2 & 0.2790 & 0.2737 & 0.1500 & 0.1395 & 0.145
& 0.1261 & 0.4144 & 0.8462 & 211.0 & 0.03007 \\ \hline
M30.145b & 3 & 0.4185 & 0.4121 & 0.1500 & 0.1395 & 0.145
& 0.1256 & 0.5531 & 0.6960 & 189.5 & 0.03345 \\ \hline
M40.145b & 4 & 0.5580 & 0.5512 & 0.1500 & 0.1395 & 0.145
& 0.1252 & 0.6922 & 0.5905 & 188.9 & 0.03351 \\ \hline
M50.145b & 5 & 0.6975 & 0.6905 & 0.1500 & 0.1395 & 0.145
& 0.1250 & 0.8315 & 0.5134 & 189.1 & 0.03345 \\ \hline
\end{tabular}
\end{table*}
In the polytropic EOS, the adiabatic constant, $\kappa$, is a free
parameter, and thus, physical units such as mass, radius, and time can
be rescaled arbitrarily by simply changing the value of $\kappa$:
i.e., when a numerical result for a particular value (say
$\kappa=\kappa_1$) is obtained, we can also obtain the numerical
results of these quantities for $\kappa=\kappa_2$ by simply rescaling
them by a factor of $(\kappa_2/\kappa_1)^{1/2(\Gamma-1)}$. This
implies that $\kappa$ can be completely scaled out of the problem. In
this paper, we present the results in units of $\kappa=1$ (and
$c=G=1$), because such units are popular in other groups
\cite{ILLINOIS,CORNELL}. In the polytropic EOS, the nondimensional
quantities such as ${\cal C}=GM_{\rm NS}/c^2R_{\rm NS}$, $M_0\Omega$,
$M_0\kappa^{-1/2(\Gamma-1)}$, $R_{\rm NS}\kappa^{-1/2(\Gamma-1)}$, and
mass ratio ($Q$) are unchanged irrespective of the value of $\kappa$
and have an invariant meaning.
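For $\Gamma=2$, the rescaling exponent is $1/2(\Gamma-1)=1/2$; for example, a numerical result obtained with $\kappa=\kappa_1$ translates to one with $\kappa=\kappa_2$ via
\begin{equation}
M \rightarrow \biggl(\frac{\kappa_2}{\kappa_1}\biggr)^{1/2} M,\qquad
R \rightarrow \biggl(\frac{\kappa_2}{\kappa_1}\biggr)^{1/2} R,\qquad
t \rightarrow \biggl(\frac{\kappa_2}{\kappa_1}\biggr)^{1/2} t,
\end{equation}
while dimensionless quantities such as ${\cal C}$, $Q$, and $m_0\Omega$ are unchanged.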
In the present work, the numerical simulation is performed for a wide
variety of initial conditions (see Table I), restricting our attention
to the case of zero BH spin. We characterize the BH-NS binaries by the mass
ratio, $Q=M_{\rm BH}/M_{\rm NS}$, and the compactness of the NS,
${\cal C}=M_{\rm NS}/R_{\rm NS}$. (Note that in
Refs.~\cite{ST08,SACRA}, we use $q=1/Q$ instead of $Q$ to specify the
model.) The mass ratio is chosen in the range between 1.5 and 5.0,
and the compactness of the NS is in the range, 0.145--0.178. We note
that the typical mass of the NS in nature is 1.3--$1.4M_{\odot}$
\cite{Stairs} and the likely lower bound of the BH mass is $\sim
2M_{\odot}$. This implies that $Q$ should be chosen to be larger than
$\sim 1.5$. The circumferential radius of an NS of $M_{\rm
NS}=1.35M_{\odot}$ and ${\cal C}=0.145$, 0.160, and 0.178 is 13.8,
12.5, and 11.2 km, which are reasonable values for modeling the NSs.
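These radii follow directly from the definition of the compactness; restoring physical units with $GM_{\odot}/c^2 \approx 1.477$ km,
\begin{equation}
R_{\rm NS}=\frac{GM_{\rm NS}}{c^2 {\cal C}}
\approx \frac{1.477\,{\rm km}}{{\cal C}}\,
\biggl(\frac{M_{\rm NS}}{M_{\odot}}\biggr),
\end{equation}
which for $M_{\rm NS}=1.35M_{\odot}$ gives $R_{\rm NS}\approx 13.8$, 12.5, and 11.2 km for ${\cal C}=0.145$, 0.160, and 0.178, respectively.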
Figure \ref{FIG2} shows the initial conditions in the parameter space
of $({\cal C}, Q)$. The meaning of the dashed curve, derived in
Ref.~\cite{TBFS2}, is as follows: For the binaries located above the
dashed curve, mass-shedding of the NS by the tidal effect of the
companion BH does not occur until the binary reaches the ISCO, whereas
for the binary shown below the dashed curve, the mass-shedding will
occur for the NS before the ISCO is reached and in such a system,
tidal disruption may subsequently occur. Thus, many of the NSs in the
chosen binary systems will be subject to the tidal effect of the
companion BH in a close orbit, but for some of them (e.g., $({\cal C},
Q)= (0.145, 5)$ and (0.178, 3)), the tidal effect is unlikely to play
an essential role in the inspiral phase. We also note that an NS
undergoing mass-shedding is not always tidally disrupted immediately,
although such an NS is a candidate for tidal disruption; that is, the
condition for inducing tidal disruption is in general more restrictive
than that for inducing mass-shedding (see Sec.~IV).
\begin{figure}[t]
\epsfxsize=3.2in
\leavevmode
\epsffile{fig2.ps}
\vspace{-5mm}
\caption{Initial conditions in the parameter space of $({\cal C}, Q)$
are plotted by the solid circles. The meaning of the dashed curve is
as follows \cite{TBFS2}: For the binaries located above the dashed
curve, mass-shedding (MS) does not occur until the binaries reach
the ISCO, whereas for the binaries below the dashed curve, the
mass-shedding will occur for the NS due to the tidal force of the BH
before the ISCO is reached.
\label{FIG2}}
\end{figure}
In the present work, we prepare quasiequilibrium states with
$m_0\Omega_0 \approx 0.030$ for $Q=1.5$ and 2, and $m_0\Omega_0
\approx 0.033$ for $Q=2$--5, where $\Omega_0$ is the orbital angular
velocity. We choose these values of $\Omega_0$ so that the BH-NS
binaries experience more than 4 orbits before the onset of the merger.
In such a long-term evolution, the eccentricity present at $t=0$
decreases to an acceptable level during the inspiral phase, and also
the nonzero radial velocity associated with the gravitational
radiation reaction is approximately taken into account from the late
inspiral phase.
In Table I, several key quantities for the quasiequilibrium states
adopted in this paper are listed. Specifically, we prepare 10
models. The first two and last three digits of each model name denote
the values of $Q$ and ${\cal C}$, respectively. For comparison, the
same quantities are also listed for 4 selected quasiequilibrium states
obtained in the $\beta^{\varphi}$ condition. We find that their
angular momentum is smaller by $\sim 2\%$ than that of the
quasiequilibrium states obtained in the 3PN-J condition for $m_0
\Omega_0 \approx 0.033$.
\section{Preparation for numerical simulation}
\begin{table*}[tbh]
\caption{Parameters of the grid structure for the numerical simulation
with our AMR algorithm. In the column named ``Levels'', the number of
total refinement levels is shown (in the bracket, the numbers of
coarser and finer levels are specified; see Ref.~\cite{SACRA} for the
definition of the coarser and finer levels). $\Delta x(=h_7)$ is the
minimum grid spacing for $N=36$, $R_{\rm diam}$ the coordinate length
of semi-major diameter of the NS, $L$ the location of outer boundaries
along each axis, $\lambda_0(=\pi/\Omega_0)$ the gravitational
wavelength at $t=0$, and $\Delta x_{\rm gw}$ the grid spacing, by
which gravitational waves are extracted for $N=36$. Note that $M_{\rm
BH}$ denotes the BH mass (the irreducible mass) at $t=0$. The grid
structures for models M20.145b--M50.145b are the same as those for
models M20.145--M50.145, respectively. For the simulations with
$N\not=36$, the size of each domain (and hence $L$) is unchanged, and
thus, the grid spacing $h_l$ is simply changed.
\label{BHNSGRID}}
\begin{tabular}{cccccc} \hline
Run & ~~Levels~~ & $\Delta x/M_0~(\Delta x/M_{\rm BH})$ &
~$R_{\rm diam}/\Delta x$~
& ~$L/M_0~(L/\lambda_0)$~ & ~$\Delta x_{\rm gw}/M_0$~ \\ \hline
M15.145 & 8~(4+4) &0.0407~(0.0672)& 112 & 187.7~(1.77)&1.30--5.21\\ \hline
M20.145 & 8~(4+4) &0.0377~(0.0560)& 99.3 & 173.7~(1.63)&1.21--4.82\\ \hline
M20.145N& 8~(4+4) &0.0377~(0.0560)& 99.4 & 173.8~(1.81)&1.21--4.83\\ \hline
M20.160 & 8~(4+4) &0.0356~(0.0528)& 92.9 & 163.9~(1.53)&1.14--4.55\\ \hline
M20.178 & 8~(4+4) &0.0324~(0.0481)& 89.1 & 149.1~(1.41)&1.04--4.14\\ \hline
M30.145 & 8~(4+4) &0.0305~(0.0403)& 90.0 & 140.5~(1.46)&0.98--3.90\\ \hline
M30.160 & 8~(3+5) &0.0266~(0.0352)& 91.2 & 122.8~(1.27)&0.85--3.41\\ \hline
M30.178 & 8~(3+5) &0.0253~(0.0334)& 84.1 & 116.3~(1.20)&0.81--3.23\\ \hline
M40.145 & 8~(3+5) &0.0244~(0.0302)& 89.0 & 112.3~(1.17)&0.78--3.12\\ \hline
M50.145 & 8~(3+5) &0.0203~(0.0242)& 88.4 & 93.5~(0.98) &0.65--2.60\\ \hline
\end{tabular}
\end{table*}
\subsection{Brief summary of formulation and methods}
The numerical simulations are performed using a code {\tt SACRA}
recently developed in our group \cite{SACRA}. The details of the chosen
scheme, formulation, gauge condition, and methods for the analysis
are described in Ref.~\cite{SACRA} to which the reader may refer.
Reference \cite{SACRA} also shows that {\tt SACRA} can successfully
simulate the inspiral and merger of BH-BH, NS-NS, and BH-NS binaries.
In {\tt SACRA}, the Einstein equations are solved in a moving-puncture
version \cite{BB,BB2,BB4} of the Baumgarte-Shapiro-Shibata-Nakamura
formalism \cite{BSSN}. Specifically, we evolve $W \equiv
\gamma^{-1/6}$ \cite{WWW}, the conformal three-metric $\tilde \gamma_{ij}
\equiv \gamma^{-1/3}\gamma_{ij}$, the tracefree part of the extrinsic
curvature, $\tilde A_{ij} \equiv \gamma^{-1/3}(K_{ij}-K\gamma_{ij}/3)$,
the trace part of $K_{ij}$, $K$, and the auxiliary variables $\Gamma^i
\equiv -\partial_j \tilde \gamma^{ij}$ (or $F_i \equiv \partial_j \tilde
\gamma_{ij}$). Here, $\gamma_{ij}$ is the three-metric, $K_{ij}$ the
extrinsic curvature, and $\gamma \equiv {\rm det}(\gamma_{ij})$. In
the numerical simulation, a fourth-order finite differencing scheme in
space and time is used implementing an adaptive mesh refinement (AMR)
algorithm (at refinement boundaries, a second-order interpolation
scheme is partly adopted). Advection terms such as
$\beta^i\partial_i \tilde \gamma_{jk}$ are evaluated by a fourth-order
non-centered finite difference, as proposed in Ref.~\cite{BB4}.
The moving-puncture approach is adopted for evolving the BH
\cite{BB2} (see also Ref.~\cite{BB3});
i.e., we adopt one of the moving-puncture gauge
conditions as \cite{BB4}
\begin{eqnarray}
&&(\partial_t -\beta^j\partial_j) \beta^i=0.75B^i, \\
&&(\partial_t -\beta^j\partial_j) B^i =(\partial_t -\beta^j\partial_j) \tilde \Gamma^i
-\eta_s B^i,
\label{shift2}
\end{eqnarray}
where $B^i$ is an auxiliary variable and $\eta_s$ is an arbitrary
constant. In the present paper, we set $\eta_s=0.414$ in units of
$\kappa=c=G=1$.
For the numerical hydrodynamics, we evolve $\rho_* \equiv \rho \alpha
u^t e^{6\phi}$, $\hat u_i \equiv h u_i$, and $e_* \equiv \rho h \alpha
u^t -P/(\rho \alpha u^t)$. To handle the advection terms in the
hydrodynamic equations, a high-resolution central scheme \cite{KT} is
adopted with a third-order piecewise parabolic interpolation and with
a steep min-mod limiter in which the limiter parameter $b$ is set to
be 3 (see appendix A of Ref.~\cite{S03}). We adopt the $\Gamma$-law
EOS in the simulation as
\begin{equation}
P=(\Gamma-1)\rho \varepsilon,
\end{equation}
where $\Gamma=2$.
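For reference, the $\Gamma$-law EOS reduces to the polytropic relation
$P=\kappa\rho^\Gamma$ when $\varepsilon=\kappa\rho^{\Gamma-1}/(\Gamma-1)$,
as holds for the initial data. The following small consistency sketch
(our illustration, not the simulation code) makes this explicit:

```python
# Illustrative sketch: Gamma-law EOS, P = (Gamma - 1) rho eps, with
# Gamma = 2, in units with kappa = c = G = 1.
GAMMA = 2.0
KAPPA = 1.0

def pressure(rho, eps, gamma=GAMMA):
    """Gamma-law EOS: P = (Gamma - 1) rho eps."""
    return (gamma - 1.0) * rho * eps

def polytropic_eps(rho, kappa=KAPPA, gamma=GAMMA):
    """Specific internal energy consistent with P = kappa rho^Gamma."""
    return kappa * rho**(gamma - 1.0) / (gamma - 1.0)

# Consistency check: with eps from the polytrope, the Gamma-law
# pressure reduces to the polytropic one, P = kappa rho^Gamma.
rho = 0.126  # maximum NS density quoted in the caption of Fig. 4
assert abs(pressure(rho, polytropic_eps(rho)) - KAPPA * rho**GAMMA) < 1e-15
```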
Properties of the BHs, such as the mass and spin, are determined by
analyzing the area and circumferential radii of the apparent horizons.
The numerical scheme of our apparent horizon finder is described in
Refs.~\cite{SACRA,AH}.
Gravitational waves are computed by extracting the outgoing part of
the Newman-Penrose quantity (the so-called $\Psi_4$). The extraction
of $\Psi_4$ is carried out for several constant coordinate radii, $r
\approx 50$--$100M_0$. The plus and cross modes of gravitational waves
are obtained by performing time integration of $\Psi_4$ twice, with
appropriate choice of integration constants and subtraction of
unphysical drift which is caused primarily by the drift of the center
of mass of the system. (Because we extract $\Psi_4$ for fixed, finite
coordinate radii, the drift of the center of mass spuriously affects
gravitational waveforms.) Specifically, whenever the time integration
is performed, we subtract a function of the form $a_2 t^2 + a_1 t +
a_0$ where $a_0$--$a_2$ denote constants which are determined by the
least-square fitting of the numerical data.
We compute the modes of $2 \leq l \leq 4$ and $|m| \leq l$ for
$\Psi_4$, and find that the quadrupole mode $(l, |m|)=(2, 2)$ is
always dominant, but the $(l, |m|)=(3, 3)$, (4, 4), and (2, 1) modes
also contribute to the energy and angular momentum dissipation by more
than $1\%$ for some of the models (in particular for large values of
$Q$; cf. Table IV).
We also estimate the kick velocity from the linear momentum flux of
gravitational waves. The linear momentum flux $dP_i/dt$ is computed by
the same method as that given in, e.g., Refs.~\cite{BB4,kick2}.
Specifically, the coupling terms between $(l, m)=(2, \pm 2)$ and $(3,
\mp 3)$ modes, between $(2, \pm 2)$ and $(2, \mp 1)$ modes, and
between $(3, \pm 3)$ and $(4, \mp 4)$ modes contribute primarily to
the linear momentum flux \cite{Buonanno}. From the total linear
momentum dissipated by gravitational waves,
\begin{eqnarray}
\Delta P_i = \int {dP_i \over dt} dt,
\end{eqnarray}
the kick velocity is defined by $\Delta P_i/M_0$ where $M_0$ is the
initial ADM mass of the system.
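Since $\Delta P_i/M_0$ is dimensionless (in units in which $c=1$),
converting the kick to km/s only requires multiplying by the speed of
light. A trivial sketch (our illustration, with a hypothetical input
value):

```python
# Convert the dimensionless kick |Delta P|/M_0 (units of c) to km/s.
C_KMS = 2.99792458e5  # speed of light in km/s

def kick_velocity_kms(delta_p_over_m0):
    """Kick velocity in km/s from the radiated linear momentum per ADM mass."""
    return delta_p_over_m0 * C_KMS

# e.g. a hypothetical |Delta P|/M_0 = 1e-3 corresponds to ~300 km/s
```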
\subsection{Setting grids for AMR scheme}
The Einstein and hydrodynamic equations are solved in an AMR algorithm
described in Ref.~\cite{SACRA}. In the present work, we prepare 8
refinement-level domains of different grid resolutions and domain
sizes. Each domain is composed of a uniform vertex-centered grid with
$(2N+1,2N+1,N+1)$ grid points for $(x,y,z)$, where $N$ is a constant
chosen to be 24, 30, or 36 to check the convergence of the
numerical results. The equatorial plane symmetry with respect to the
$z=0$ plane is assumed. The length of a side for the largest domain is
denoted by $2L$, and the grid spacing for each domain is $h_l=L/N/2^l$
for $l=0$--7. As described in Ref.~\cite{SACRA}, the regions around a BH
and an NS are covered by ``finer'' domains which move with the BH and
NS. On the other hand, ``coarser'' domains which cover wider regions
do not move and their center is fixed approximately to be the
center of mass of the system.
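The grid hierarchy above can be made concrete with a short sketch
(illustrative only): with $h_l=L/(N\,2^l)$, the quoted finest spacing
for model M20.145 in Table II is recovered.

```python
# Sketch of the AMR grid hierarchy: 8 refinement levels l = 0..7,
# each a uniform domain with spacing h_l = L / (N 2^l).
def grid_spacings(L, N, levels=8):
    """Grid spacing of each refinement level, coarsest first."""
    return [L / (N * 2**l) for l in range(levels)]

# Example with the values quoted for model M20.145 and N = 36:
# L = 173.7 M_0 gives a finest spacing h_7 ~ 0.0377 M_0 (Table II).
h = grid_spacings(173.7, 36)
assert abs(h[-1] - 0.0377) < 5e-4
```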
Table II lists the parameters for the grid structure in our AMR
scheme. $\lambda_0$ denotes the wavelength of gravitational waves at
$t=0$ ($\lambda_0=\pi/\Omega_0$). For all the cases, $L$ is chosen to
be 1--2$\lambda_0$, implying that the outer boundaries are located in
a wave zone. The NS is covered by the finest and second-finest domains
and the coordinate radius of the apparent horizon for the BH is always
covered by more than 15 grid points for $N=36$. Because the numerical
results (such as the time spent in the inspiral phase, the mass and
spin for the final state of the BH, and total energy radiated by
gravitational waves) for $N \geq 30$ depend only weakly on the grid
resolution, we conclude that convergence is approximately achieved
for $N=36$ (except for the rest mass of the disks formed for the
models with small values of ${\cal C}$ and $Q$; see Sec. \ref{sec4.1}).
For $N=36$, the total memory required for the simulation with 13
domains is about 5 GBytes. We perform all the simulations using
personal computers of 8 GBytes memory and 2--8 processors (in one job,
only two processors are used with an OpenMP library). The typical
computation time for one model with $N=36$ is 5--7 weeks on the
personal computers of clock speed 3 GHz.
\begin{figure*}[t]
\epsfxsize=2.7in
\leavevmode
(a)\hspace{-1cm}\epsffile{fig3a.ps}
\epsfxsize=2.7in
\leavevmode
\hspace{-0.5cm}(b)\hspace{-1cm}\epsffile{fig3b.ps}
\epsfxsize=2.7in
\leavevmode
\hspace{-0.5cm}(c)\hspace{-1cm}\epsffile{fig3c.ps}
\vspace{-2mm}
\caption{Coordinate separation between the BH and NS (a) for models
M20.145, (b) M40.145, and (c) M50.145. Here, $r_{\rm sep}=|x^i_{\rm
NS}-x^i_{\rm BH}|$; see Eq.~(\ref{eqsep}).
\label{FIG3}}
\end{figure*}
\subsection{Atmosphere}
Because any conservative scheme of hydrodynamics is unable to evolve a
vacuum, we have to introduce an artificial atmosphere outside the NSs.
The density of the atmosphere has to be small enough to exclude its
spurious effect on the orbital motion of the BH and NS, and to avoid
overestimating the total rest mass of a disk surrounding the BH,
which may be formed after the merger if the NS is tidally disrupted. We
initially assign a small rest-mass density to the atmosphere as follows:
\begin{eqnarray}
\rho_{\rm atmo}=\left\{
\begin{array}{ll}
\rho_{\rm crit} & r \leq r_0 \\
\rho_{\rm crit}e^{1-r/r_0} & r > r_0.
\end{array}
\right.
\end{eqnarray}
where $r_0$ is a constant chosen to be $5L/32$. We choose $\rho_{\rm
crit}=\rho_{\rm max} \times 10^{-9}$, where $\rho_{\rm max}$ is the
maximum rest-mass density of the NS initially given. With this
choice, the total rest mass of the atmosphere is less than
$10^{-5}M_*$. Accretion of an atmosphere of such small mass onto
the BH and NS plays a negligible role in their orbital evolution. As
stated in Sec.~I, one of the important tasks for merger simulations of
BH-NS binaries is to determine the rest mass of disks surrounding the
BH formed after the merger. In the simulation, we pay attention only
to cases in which the disk mass is larger than $10^{-5}M_*$.
During the evolution, we also adopt an artificial treatment for the
low-density region in the following manner: (i) If the density is
smaller than $\rho_{\rm atmo}$, we set $\rho=\rho_{\rm atmo}$ and
$u_i=0$. Then, the specific internal energy $\varepsilon$ is set to be
$\kappa \rho_{\rm crit}^{\Gamma-1}/(\Gamma-1)$. (ii) Even if the
density is larger than $\rho_{\rm atmo}$, we reduce the specific
momentum $h u_i$ by a factor of $1-\exp[-\rho/\rho_{\rm crit}]$; i.e.,
for a fluid of density $\alt 5 \rho_{\rm crit}$, the specific momentum
is artificially reduced. The reason we adopt these treatments is
that numerical instabilities resulting in a negative density or
pressure often occur accidentally in the low-density region in the
absence of such artificial treatment. By limiting the unphysical
growth of the specific momentum as described above, such
instabilities are excluded. For some models, we checked whether the
magnitude of $\rho_{\rm crit}$ affects the numerical results; as long
as a sufficiently small value is chosen, as in the present work, the
dependence of the numerical results on the magnitude of $\rho_{\rm
crit}$ is negligible.
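The atmosphere prescription above can be summarized in a few lines.
The following Python sketch (our illustration, not the {\tt SACRA}
code) implements the floor-density profile and the low-density
treatments (i) and (ii):

```python
import math

# Illustrative sketch of the atmosphere prescription described above.
def rho_atmo(r, r0, rho_crit):
    """Initial atmosphere density: constant inside r0, exponential decay outside."""
    return rho_crit if r <= r0 else rho_crit * math.exp(1.0 - r / r0)

def apply_low_density_treatment(rho, hu_i, rho_atm, rho_crit):
    """(i) reset density and momentum on the floor; (ii) damp the specific
    momentum h u_i by 1 - exp(-rho/rho_crit) for low-density fluid."""
    if rho < rho_atm:
        return rho_atm, [0.0] * len(hu_i)
    f = 1.0 - math.exp(-rho / rho_crit)
    return rho, [f * u for u in hu_i]
```

For densities well above $\rho_{\rm crit}$ the damping factor is
essentially unity, so the treatment only affects the near-vacuum
region, as intended.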
\section{Numerical results}\label{sec4}
\begin{figure*}[th]
\caption{Snapshots of the density contour curves and density contrasts
as well as the location of the BH in the merger and ringdown phases for
model M20.145. The contour curves are plotted for $\rho=10^{-i}$
where $i=2, 3, 4, 5$ in the first four panels, whereas in the
last two, $\rho=10^{-i}$ where $i=3$, 4, and 5. The first panel
denotes the state just at the onset of the merger. The filled circles
show the region inside the apparent horizon. Note that the maximum value
of $\rho$ for the NS in the inspiral phase is $\approx 0.126$.
\label{FIG4}}
\end{figure*}
\subsection{Orbital evolution, tidal disruption, and disk mass}\label{sec4.1}
Figure \ref{FIG3} plots the evolution of the coordinate separation
between the BH and NS, $x^i_{\rm sep}$, for models M20.145,
M40.145, and M50.145. $x^i_{\rm sep}$ is defined by
\begin{equation}
x^i_{\rm sep}=x^i_{\rm NS}-x^i_{\rm BH}, \label{eqsep}
\end{equation}
where $x^i_{\rm NS}$ and $x^i_{\rm BH}$ denote the positions of the
maximum rest-mass density of the NS and of the puncture,
respectively. This figure illustrates that the binaries are
in slightly eccentric orbits for the first $\sim 2$ orbits because a
strictly circular orbit is not provided initially. Also,
in the first $\sim 2$ orbits, the decrease rate of the orbital
separation due to gravitational radiation reaction is not as large as
that predicted by the PN theory, in particular for larger
values of $Q$. However, because the eccentricity decreases due to
the gravitational radiation reaction during the evolution and the
initial eccentricity is not very large (see, e.g., Fig.~\ref{FIG1}),
the orbit gradually approaches a quasicircular one
after a few orbits. The resulting final 2--3 orbits before the onset
of the merger appear to be close to quasicircular. This behavior is
much better than that obtained when an initial condition computed in
the $\beta^{\varphi}$ condition is adopted
(see, e.g., Fig.~15 of Ref.~\cite{SACRA}).
For models M20.145, M40.145, and M50.145, the binaries spend $\sim
4.8$, 6.5, and 7.8 orbits in the inspiral phase, respectively. For
larger values of $Q$ with an approximately fixed value of
$m_0\Omega_0$, the number of inspiral orbits is larger, because the
luminosity of gravitational waves is approximately proportional to
$Q^2/(1+Q)^4$, and as a result, the binary evolves more slowly for
larger values of $Q$.
\begin{figure*}[p]
\caption{The same as Fig.~\ref{FIG4} but for model M40.145.
\label{FIG5}}
\end{figure*}
\begin{figure*}[p]
\caption{The same as Fig.~\ref{FIG4} but for model M50.145.
\label{FIG6}}
\end{figure*}
Figures \ref{FIG4}--\ref{FIG6} plot the late-time evolution of the
rest-mass density contour curves and density contrasts as well as the
location of the BHs for models M20.145, M40.145, and M50.145,
respectively. For model M20.145, the NS is tidally disrupted in the
late inspiral phase (see the first panel of Fig.~\ref{FIG4}).
Subsequently, the material of the NS forms a one-armed spiral
around its companion BH (second panel of Fig.~\ref{FIG4}). In the
spiral arm, transport of angular momentum from the inner to the
outer region is likely to operate efficiently. The spiral arm then
winds around the BH (third panel of Fig.~\ref{FIG4}), and most of the
fluid elements, which do not have angular momentum large enough to
maintain an orbit around the BH, fall into the BH. In the first $\sim 200
M_0$ after the tidal disruption, $\sim 98\%$ of the material falls
into the BH (see Fig.~\ref{FIG7}). However, a small fraction of the
fluid elements obtain angular momentum large enough to escape the
capture by the BH, and form a disk around the BH. For $t-t_{\rm
merger} \agt 300 M_0$, where $t_{\rm merger}$ approximately denotes
the time at which the merger sets in, the accretion rate decreases, and
then, the disk relaxes to a quasisteady state (fourth--sixth panels).
For model M20.145, the rest mass of the disk is $\sim 0.01M_*$ at
$t-t_{\rm merger}\approx 1000M_0$ for $N=36$
(cf. Fig.~\ref{FIG7}). For a hypothetical value of $M_{\rm
NS}=1.35M_{\odot}$, $1000M_0$ is approximately equal to 20 ms, and
thus, the lifetime of the formed disk is likely to be much longer than
20 ms. Also, for a hypothetical value of $M_{\rm NS}=1.35M_{\odot}$,
$\rho=10^{-4}$ in the units of $c=G=\kappa=1$ corresponds to
$\rho\approx 6.0 \times 10^{11}~{\rm g/cm^3}$. Thus, for that
hypothetical mass, the rest-mass density of the disk is as high as
$\sim 10^{11}$--$10^{12}~{\rm g/cm^3}$. The evolution and the final
outcome for model M15.145 are similar to those for model M20.145,
although the disk mass is larger by a factor of $\sim 2$ due to the
smaller value of $Q$.
The NS for model M40.145 is also subject to tidal deformation and mass
shedding by the tidal effects of the companion BH in the late inspiral
phase, as indicated in Fig.~\ref{FIG2} and in the first panel of
Fig.~\ref{FIG5}. However, the tidal effects in this binary become
important only for the inspiral orbits close to the ISCO. Because the
approaching velocity is a substantial fraction of the orbital velocity
at such close orbits, the time scale available for the NS to be
tidally deformed before the onset of the merger is too short to
efficiently transport angular momentum inside the NS. As a result, a
one-armed spiral is formed in a less conspicuous manner than
for model M20.145 (see the second panel of Fig.~\ref{FIG5}), although
the NS is highly elongated at the merger. Rather, most of the material
of the NS falls into the BH on a short time scale of $\sim 200M_0$
(see the third panel of Fig.~\ref{FIG5}). A tiny fraction of the
material still spreads outward during the merger phase (see the fourth
panel of Fig.~\ref{FIG5}), but specific angular momentum for such
material is not large enough to escape the capture by the BH. In this
case, more than 99.999\% of the material is eventually swallowed by
the BH (cf. Fig.~\ref{FIG7}).
For model M50.145, even the mass shedding does not occur in the
inspiral phase, as indicated in Fig.~\ref{FIG2}. During the merger,
the NS is deformed by the tidal field of its companion BH (see the
first and second panels of Fig.~\ref{FIG6}), but a spiral arm is not
formed, nor does angular momentum transport operate. As a result,
nearly all the material is swallowed by the BH on a short time scale
of $\alt 100M_0$ (see the third and fourth panels of Fig.~\ref{FIG6}),
and the final outcome is a rotating BH in an approximately vacuum
spacetime.
\begin{figure*}[t]
\epsfxsize=3.2in
\leavevmode
\epsffile{fig7a.ps}
\epsfxsize=3.2in
\leavevmode
~~~~\epsffile{fig7b.ps}\\
\epsfxsize=3.2in
\leavevmode
\epsffile{fig7c.ps}
\epsfxsize=3.2in
\leavevmode
~~~~\epsffile{fig7d.ps}
\vspace{-4mm}
\caption{Evolution of the rest mass of the material located outside
the apparent horizon (a) for models M15.145, M20.145, M30.145,
M40.145, and M50.145 with $N=36$, (b) for model M20.145, M20.145N, and
M20.145b with $N=36$, (c) for models M20.145, M20.160, M20.178,
M30.145, M30.160, and M30.178 with $N=36$, and (d) for models M15.145,
M20.145, M20.160, and M20.178 with $N=36$ (solid curves) and $N=30$
(dotted curves). For a hypothetical value of $M_{\rm NS}=1.35M_{\odot}$,
$100M_0 \approx 1.66$, 2.00, 2.66, 3.33, and 3.99 ms for
$Q=1.5$, 2, 3, 4, and 5, respectively.
\label{FIG7}}
\end{figure*}
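The unit conversions quoted in the caption of Fig.~\ref{FIG7} follow
from $GM_\odot/c^3 \approx 4.9255~\mu{\rm s}$. A quick cross-check
(our sketch, assuming $M_{\rm NS}=1.35M_\odot$ and $M_0=(1+Q)M_{\rm NS}$):

```python
# Geometric-to-physical time conversion: 100 M_0 of coordinate time
# equals 100 * (1 + Q) * M_NS * (G Msun / c^3) for M_NS in solar masses.
G_MSUN_OVER_C3_MS = 4.92549e-3  # G * Msun / c^3 in milliseconds

def time_100M0_ms(Q, m_ns_msun=1.35):
    """100 M_0 of coordinate time in ms, for total mass M_0 = (1+Q) M_NS."""
    return 100.0 * (1.0 + Q) * m_ns_msun * G_MSUN_OVER_C3_MS

# Reproduces the caption of Fig. 7: ~1.66, 2.00, 2.66, 3.33, 3.99 ms
# for Q = 1.5, 2, 3, 4, 5, respectively.
```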
To clarify the infall process of the material into the
companion BH, we plot the evolution of the rest mass of the material
located outside the apparent horizon, $M_{r>r_{\rm AH}}$, for several
models in Fig.~\ref{FIG7}. Here, $M_{r> r_{\rm AH}}$ is defined by
\begin{eqnarray}
M_{r> r_{\rm AH}} \equiv \int_{r > r_{\rm AH}} \rho_* d^3x,
\end{eqnarray}
and $r_{\rm AH}(\theta, \varphi)$ denotes the radius of the apparent
horizon for given angular coordinates.
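For illustration, this diagnostic can be sketched as a Riemann sum
over grid cells outside the horizon (a schematic on a uniform grid,
not the actual AMR implementation):

```python
import math

# Schematic sketch of M_{r > r_AH} = Int_{r > r_AH} rho_* d^3x,
# evaluated as a sum over cells, skipping cells inside the horizon.
def rest_mass_outside_horizon(rho_star, coords, r_ah, dV):
    """rho_star: conserved densities per cell; coords: (x, y, z) per cell;
    r_ah(theta, phi): apparent-horizon radius; dV: cell volume."""
    total = 0.0
    for rs, (x, y, z) in zip(rho_star, coords):
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue  # the center is inside the horizon
        theta = math.acos(z / r)
        phi = math.atan2(y, x)
        if r > r_ah(theta, phi):
            total += rs * dV
    return total
```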
Figure \ref{FIG7}(a) plots $M_{r > r_{\rm AH}}$ as a function of
$t-t_{\rm merger}$, where $t_{\rm merger}$ denotes the approximate onset
time of the merger, for models M15.145, M20.145, M30.145, M40.145, and
M50.145. This shows that (i) for $Q \leq 2$, a disk of rest mass
$\sim 0.01$-- $0.02M_*$ is formed and the lifetime of the disk is much
longer than the dynamical time scale; (ii) for $Q \geq 3$,
approximately all the materials are swallowed by the BH in $\sim
500M_0$ (the rest mass of the material located outside the apparent
horizon is less than $10^{-5}M_*$ at the final stage; see also Table
III). The infall time scale is shorter for larger values of
$Q$. These facts hold irrespective of the initial orbital separation
and grid resolution, as long as $Q \geq 3$. Recall that
the compactness of the NSs for these models is 0.145, which is a
relatively small value; the typical compactness for the NS of mass
1.3--$1.4M_{\odot}$ is between $\sim 0.14$ and $\sim 0.21$ according
to theories of high-density matter \cite{LP}. Because the less
compact NS (i.e., the NS of larger radius) is more subject to disk
formation, we conclude that the disk (or torus) mass around a BH
formed after the merger is negligible, for the mass ratio $Q \geq
3$. By contrast, if the mass ratio is smaller than $\sim 2$ and the
compactness of the NS is relatively small as ${\cal{C}}=0.145$, a disk of
mass $\sim 0.01$--$0.02M_*$ may be formed. (We note that in the
presence of BH spin, this conclusion changes; this is a subject of
work in progress in our group.)
To provide further evidence that a disk is indeed formed for $Q=2$
and ${\cal C}=0.145$, we generate Fig.~\ref{FIG7}(b). In this figure,
we plot $M_{r > r_{\rm AH}}$ as a function of time for model M20.145,
for model M20.145N for which the initial orbital separation is smaller
than that of model M20.145, and for model M20.145b for which the
initial condition is computed in the $\beta^{\varphi}$ condition. This
figure shows that the disk mass at $t-t_{\rm merger} \approx 1000M_0$
is $\sim 0.01M_{*}$ irrespective of the initial conditions for a given
grid resolution ($N=36$). This fact indicates that the initial
orbital separation is large enough to exclude spurious effects
associated with noncircularity of the initial condition to the
resulting disk mass. We note that in previous early-stage works
\cite{ST08,ILLINOIS}, the simulations were performed from initial
conditions of small orbital separation, and consequently, it was
found that the disk mass depends strongly on the initial separation,
preventing a definitive conclusion about the disk mass. This
drawback is overcome in the present work.
To show the dependence of $M_{r > r_{\rm AH}}$ as a function of $t$ on
the compactness of the NSs, in Fig.~\ref{FIG7}(c) we compare the
results for models M20.145, M20.160, and M20.178, and for models
M30.145, M30.160, and M30.178. As this figure shows, the disk mass
systematically and steeply decreases with increasing compactness
${\cal C}$ for $Q=2$; the disk mass for model M20.160 is smaller by
a factor of $\sim 15$ than that for model M20.145. This
dependence is simply caused by the fact that the tidal disruption of
more compact NSs occurs for an orbit closer to the ISCO, suppressing
disk formation. This result implies that even for a small mass ratio
$Q=2$, disks are not formed if the radius of the NS is not very
large. Note that for a hypothetical mass of $M_{\rm NS}=1.35M_{\odot}$
for model M20.160, the circumferential radius of the NS is $R_{\rm
NS}=12.5$ km, which is not a large value because nuclear theories
predict it in the range $\sim 10$--15 km \cite{LP}. Nevertheless, the
disk is not formed. This implies that for a typical NS of
$M_{\rm NS}=1.35M_{\odot}$ and $R_{\rm NS}=11$--12 km, a disk is formed
only in a highly restricted case, $Q < 2$, i.e., $M_{\rm BH} <
2.7M_{\odot}$. This conclusion agrees with a conjecture by Miller
\cite{CMiller}, and is also consistent with the results of Duez et
al. \cite{CORNELL}, in which they show that the disk mass is at most
$\sim 0.01M_*$ for a compact NS of ${\cal C}=0.174$ with the most
optimistic mass ratio $Q=1$. The present result also suggests that
most BH-NS binaries may not be promising candidates for the
central engine of SGRBs \cite{GRBdisk,GRBdisk1}, unless the radius of
the NS is fairly large, $\agt 14$ km, or the BH has a spin.
Figure \ref{FIG7}(c) also compares $M_{r > r_{\rm AH}}$ as a function
of $t$ for models M30.145, M30.160, and M30.178. For all these cases,
approximately all the material of the NS eventually falls into the BH
in $\sim 500M_0$ after the onset of the merger. However, the merger
process and subsequent infalling process into the BH depend strongly
on the compactness of the NSs. For model M30.145, the mass-shedding
and subsequent tidal disruption occur before the binary reaches the
ISCO, as in model M20.145 (see also Fig.~18 of Ref.~\cite{SACRA}, in
which essentially the same result as for model M30.145 is shown). As a
result, a spiral arm is formed around the companion BH and a fraction
of the material spreads outward. Then, a disk of rest mass $\agt
0.1M_*$ surrounding the BH is transiently formed, although a large
fraction of the material is swallowed by the BH in $\sim 50M_0$ after
the onset of the merger. Subsequently, the material gradually
accretes from the disk onto the BH over a duration of $\sim
150M_0$. During this phase, the disk mass decreases from $\sim
0.1M_*$ to $\sim 0.02M_*$, and thus, for the first $\sim 150M_0$ after
its formation, a massive disk is present. However, the disk material
does not have specific angular momentum large enough to maintain
orbits around the BH, and eventually falls into the BH in a runaway
manner. In the end, more than 99.999\% of the material is swallowed
by the BH.
For model M30.160, a disk is formed around the BH transiently.
However, its mass is much smaller than that for model M30.145 because
the tidal disruption occurs at an orbit close to the ISCO, as in model
M40.145. For model M30.178, a disk is not formed around the BH even
transiently, as in model M50.145. The reason is that the NS in this
model is not tidally disrupted before the binary reaches the ISCO.
For these two cases, more than 99.999\% of the material is swallowed
by the BH within a short duration of 200--$300M_0$.
Before closing this section, we note that the final disk mass depends
very weakly on the grid resolution for models M30.145, M30.160,
M30.178, M20.160, and M20.178, whereas for models M15.145 and M20.145,
for which the disk mass is $\agt 0.01M_*$, the final disk mass {\em
increases} with improving grid resolution [see
Fig.~\ref{FIG7}(d)]. This implies that (i) for the cases in which a
massive disk is not formed, our conclusion is based on a convergent
result, whereas (ii) for the cases in which a disk of mass $\agt
0.01M_*$ is formed, the results with $N=36$ should be regarded as a
lower bound on the disk mass, and the disk mass would be larger than
$0.01M_*$ for model M20.145 and $0.02M_*$ for model M15.145. This
dependence on the grid resolution results from the fact that with
poorer grid resolutions, numerical dissipation of angular momentum is
larger, increasing the amount of material which falls into the BH.
However, this systematic dependence shows that a disk of mass $\agt
0.01M_*$ is indeed formed.
\subsection{Black hole mass and spin after merger}\label{BHspin}
\begin{figure}[t]
\begin{center}
\epsfxsize=3.1in
\leavevmode
(a)\epsffile{fig8a.ps}\\
\epsfxsize=3.1in
\leavevmode
(b)\epsffile{fig8b.ps}
\end{center}
\vspace{-3mm}
\caption{Evolution of $C_p/C_e$, $C_e/4\pi M_0$, and $M_{\rm irr}/M_0$ as
functions of time (a) for models M20.145 and (b) M40.145.
\label{FIG8}}
\end{figure}
Figure \ref{FIG8} plots $M_{\rm irr}/M_0$, $C_p/C_e$, and $C_e/4\pi
M_0$ of the BH as functions of time for models M20.145 and M40.145
(similar behavior is found for the other models). Here, $C_p$ and $C_e$
are the polar and equatorial circumferential radii of the BH. The
irreducible mass, $M_{\rm irr}$, of the BH is defined from the area of
the apparent horizon, $A_{\rm AH}$, by
\begin{eqnarray}
M_{\rm irr}=\sqrt{{A_{\rm AH} \over 16\pi}}.
\end{eqnarray}
$C_e/4\pi$ is equal to the BH mass in a stationary vacuum BH
spacetime. We follow its evolution, assuming that it is approximately
equal to the BH mass even in the dynamical spacetime.
Figure \ref{FIG8} shows that the values of these three quantities
remain approximately constant before the onset of the merger (more
specifically, before the material of the NS falls into the companion
BH). Because the BH is not spinning initially, the hypothetical ``BH
mass'', $C_e/4\pi$, should be approximately equal to the irreducible
mass $M_{\rm irr}$, and $C_p/C_e$ should be approximately equal to
unity. These relations hold to within a small numerical error of $\sim 1\%$.
After the onset of tidal disruption, $C_e/4\pi$ and $M_{\rm irr}$
quickly increase as the material of the NS falls into the BH, and
finally, they approach approximately constant values. By contrast, $C_p/C_e$
decreases due to spin-up of the BH caused by mass accretion. Because
of the presence of the BH spin, $C_e/4\pi$ becomes unequal to the
irreducible mass after the onset of the merger.
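If the final BH is assumed to be close to a Kerr BH, the measured
$M_{\rm irr}$ and $C_e/4\pi$ translate directly into a spin estimate.
The following sketch uses the exact Kerr relations $C_e=4\pi M$ and
$(M_{\rm irr}/M)^2=(1+\sqrt{1-\chi^2})/2$; this is a standard
illustration, not necessarily the exact procedure adopted in the paper:

```python
import math

# Spin estimate assuming a Kerr BH: invert
# (M_irr / M)^2 = (1 + sqrt(1 - chi^2)) / 2
# for the nondimensional spin chi, with M taken as C_e / 4 pi.
def kerr_spin_from_mirr(m_irr, m_bh):
    x = 2.0 * (m_irr / m_bh) ** 2 - 1.0
    return math.sqrt(max(0.0, 1.0 - x * x))

# A nonspinning BH has M_irr = M, giving chi = 0; an extremal Kerr BH
# has M_irr = M / sqrt(2), giving chi = 1.
assert kerr_spin_from_mirr(1.0, 1.0) == 0.0
```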
In addition to $C_e/4\pi$, the mass of the BH formed after the merger
may be estimated approximately by evaluating the total energy
dissipated by gravitational waves, $\Delta E$, and the baryon rest
mass of the material surrounding the BH, using an approximate relation
of energy conservation,
\begin{eqnarray}
M_{\rm BH,f} \equiv M_0 - M_{r> r_{\rm AH}}-\Delta E. \label{mass_bhf}
\end{eqnarray}
We note that this formula ignores the binding energy between the BH
and the surrounding material. Thus, $M_{\rm BH,f}$ is likely to
slightly overestimate the true BH mass.
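Equation (\ref{mass_bhf}) is simple bookkeeping; a minimal sketch with
hypothetical numbers (e.g., a disk of $0.01M_*$ and a radiated energy
of order $1\%$ of $M_0$, all in units of the initial ADM mass):

```python
# Energy-conservation estimate of the final BH mass:
# M_BH,f = M_0 - M_{r > r_AH} - Delta E (binding energy neglected,
# so this slightly overestimates the true BH mass).
def final_bh_mass(m0_adm, m_outside_ah, delta_e):
    return m0_adm - m_outside_ah - delta_e

# Hypothetical example: M_0 = 1, disk mass 0.01, Delta E = 0.017.
m_bhf = final_bh_mass(1.0, 0.01, 0.017)
```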
The values of $\Delta E$, $M_{r> r_{\rm AH}}$, $M_{\rm BH,f}$, and
$C_e/4\pi$ are listed for all the models chosen in this paper in Tables
\ref{TABLE3} and IV. In Table \ref{TABLE3}, the results at the end of
the simulation for $N=36$ are presented, whereas Table IV lists the
results for the energy and angular momentum radiated by gravitational
waves. Here, the values of $\Delta E$ and $\Delta J$ vary with the
extraction radius by 1--3\% for $N=36$. In addition, they
systematically increase with improving grid resolution. Thus, we infer
these quantities by an extrapolation of the data for $N=30$ and 36,
which is carried out assuming third-order convergence. (We note that
the Einstein and hydrodynamic equations are solved with fourth- and
third-order schemes, respectively.)
Table \ref{TABLE3} shows that $M_{\rm BH,f}$ agrees with $C_e/4\pi$
within $\sim 0.5\%$ error, but $M_{\rm BH,f}$ is systematically larger
than $C_e/4\pi$. The likely reason is that $\Delta E$, which is used in
computing $M_{\rm BH,f}$, is slightly underestimated for $N=36$
because of numerical dissipation of the gravitational-wave amplitude:
Indeed, the numerical results for the energy and angular momentum radiated
by gravitational waves increase with improving grid resolution as
mentioned above, and for the extrapolated results shown in Table \ref{TABLE4},
another conservation relation,
\begin{eqnarray}
{C_e \over 4\pi} + M_{r> r_{\rm AH}} + \Delta E = M_0,
\end{eqnarray}
holds to within a numerical error of $\sim 0.1$--0.2\%.
As described in Ref.~\cite{ST08}, there are at least three methods for
approximately estimating the final BH spin. In this paper, we use the
following methods. In the first method, we approximately
estimate the mass and spin of the BH by conservation laws: namely, we
determine them by subtracting the total energy and angular momentum
dissipated by gravitational waves, and the rest mass and angular momentum
of the disk surrounding the BH, from the initial ADM mass and angular
momentum, respectively. As shown in Eq. (\ref{mass_bhf}), the BH mass
is estimated as $M_{\rm BH,f}$. In the same manner, the angular
momentum of the BH may be estimated by
\begin{eqnarray}
J_{\rm BH,f} \equiv J_0 - J_{r> r_{\rm AH}} - \Delta J, \label{ang_bhf}
\end{eqnarray}
where $\Delta J$ is the total angular momentum radiated
by gravitational waves and $J_{r> r_{\rm AH}}$ is the angular momentum
of the material located outside the apparent horizon, defined by
\begin{eqnarray}
J_{r> r_{\rm AH}} \equiv \int_{r> r_{\rm AH}} \rho_* h u_{\varphi} d^3x.
\label{j_disk}
\end{eqnarray}
Here, $u_{\varphi}=(x-x_{\rm P})u_y-(y-y_{\rm P})u_x$ and
$(x_{\rm P}, y_{\rm P})$ denote the position of the puncture.
$J_{r> r_{\rm AH}}$ gives the angular momentum of the material exactly in
an axisymmetric and stationary spacetime. In the late phase of the
merger, the spacetime relaxes to a quasistationary and nearly
axisymmetric state. Thus, we may expect that $J_{r> r_{\rm AH}}$
provides an approximate measure of the angular momentum of the disk.
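As a toy check of the discretized integral in Eq. (\ref{j_disk}), one can sum $\rho_* h u_\varphi$ on a Cartesian grid for a rigidly rotating uniform disk with $\rho_* h = 1$, taking $u_\varphi$ to be the standard azimuthal component $(x-x_{\rm P})u_y-(y-y_{\rm P})u_x$; the angular velocity, disk radius, and grid spacing below are arbitrary illustration values.

```python
import numpy as np

# Rigid rotation about the puncture: u_x = -omega*(y-yP), u_y = omega*(x-xP),
# so u_phi = (x-xP)*u_y - (y-yP)*u_x = omega * varpi^2, with rho_* h = 1.
omega, R, dx = 0.3, 0.8, 0.005
xP = yP = 0.0
x = np.arange(-1.0, 1.0, dx) + 0.5 * dx      # cell centers
X, Y = np.meshgrid(x, x, indexing="ij")
varpi2 = (X - xP) ** 2 + (Y - yP) ** 2
inside = varpi2 < R ** 2                     # disk of coordinate radius R

# Discretized Eq. (j_disk), per unit height in z
J = np.sum(omega * varpi2[inside]) * dx ** 2

# Analytic value for the same configuration: omega * pi * R^4 / 2
J_exact = omega * np.pi * R ** 4 / 2.0
```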
From $J_{\rm BH,f}$ and $M_{\rm BH,f}$, we define a nondimensional
spin parameter $a_{\rm f1} \equiv J_{\rm BH,f}/M_{\rm BH,f}^2$ (see
Table \ref{TABLE3}).
In the second method, the spin is determined from the following geometric
quantities of the apparent horizon: $C_e/4\pi$ and $M_{\rm irr}$. For
Kerr BHs of spin $a$, the following relation holds,
\begin{eqnarray}
M_{\rm irr}={C_e \over 4\sqrt{2}\pi}
\Bigl(1+\sqrt{1-a^2}\Bigr)^{1/2}, \label{relK2}
\end{eqnarray}
and hence, $a$ is determined from $C_e$ and $M_{\rm irr}$. We
estimate the spin assuming that Eq. (\ref{relK2}) holds even for the BH
surrounded by the disk \cite{bhdisk07}. The BH spin
determined by this method is referred to as $a_{\rm f2}$.
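Equation (\ref{relK2}) can be inverted in closed form: writing $\mu = M_{\rm irr}/(C_e/4\pi)$, one has $\sqrt{1-a^2} = 2\mu^2 - 1$. A minimal sketch, using the $N=36$ values for model M30.145 from Table \ref{TABLE3}:

```python
import math

def spin_from_horizon(M_irr, Ce_over_4pi):
    """a_f2 from the irreducible mass and C_e/(4 pi), inverting Eq. (relK2)."""
    mu = M_irr / Ce_over_4pi
    s = 2.0 * mu ** 2 - 1.0        # equals sqrt(1 - a^2) for a Kerr horizon
    return math.sqrt(max(0.0, 1.0 - s * s))

# Model M30.145 (Table III): M_irr/M0 = 0.942, C_e/(4 pi M0) = 0.985
a_f2 = spin_from_horizon(0.942, 0.985)   # ~0.56, consistent with the table
```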
In the third method, $C_p/C_e$ is used. For Kerr BHs, it is
calculated to give
\begin{eqnarray}
{C_p \over C_e}={\sqrt{2 \hat r_+} \over \pi}
E(a^2/2 \hat r_+), \label{eq:cpce}
\end{eqnarray}
where $\hat r_+=1+\sqrt{1-a^2}$ and $E(z)$ is an elliptic
integral defined by
\begin{equation}
E(z)=\int^{\pi/2}_0 \sqrt{1-z\sin^2\theta} d\theta.
\end{equation}
Thus, assuming that the same relation holds even for the BH surrounded by
the disk, we may estimate a BH spin from $C_p/C_e$. We refer to this
spin as $a_{\rm f3}$.
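Because Eq. (\ref{eq:cpce}) gives $C_p/C_e$ as a monotonically decreasing function of $a$, $a_{\rm f3}$ can be obtained by simple bisection. The sketch below evaluates $E(z)$ with a midpoint rule rather than a library routine, and checks against model M30.145 ($C_p/C_e = 0.934$, $a_{\rm f3} \approx 0.564$ in Table \ref{TABLE3}); the quadrature resolution and tolerance are arbitrary choices.

```python
import math

def ellipE(z, n=1000):
    # E(z) = int_0^{pi/2} sqrt(1 - z sin^2 theta) dtheta, midpoint rule
    h = (math.pi / 2.0) / n
    return h * sum(math.sqrt(1.0 - z * math.sin((k + 0.5) * h) ** 2)
                   for k in range(n))

def cp_over_ce(a):
    # Eq. (eq:cpce) for a Kerr BH of spin a
    rp = 1.0 + math.sqrt(1.0 - a * a)
    return math.sqrt(2.0 * rp) / math.pi * ellipE(a * a / (2.0 * rp))

def spin_from_cp_ce(ratio, tol=1.0e-8):
    # C_p/C_e decreases monotonically from 1 at a = 0, so bisect for a_f3
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cp_over_ce(mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Model M30.145 (Table III): C_p/C_e = 0.934 gives a_f3 close to 0.564
a_f3 = spin_from_cp_ce(0.934)
```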
The values of $a_{\rm f1}$--$a_{\rm f3}$ for $N=36$ are listed in
Table \ref{TABLE3}. We find that the three values agree within a few \% for
$Q=3$--5. The values of the spin depend only weakly on the compactness of
the NSs, and hence, we conclude that the spin parameter of the formed
BH is $\approx 0.56 \pm 0.01$, $0.48 \pm 0.01$, and $0.42 \pm 0.01$
for $Q=3$, 4, and 5, respectively. For smaller values of $Q$, the
final BH spin is larger. The reason is that the total angular
momentum of the system, $J$, at a given value of $m_0\Omega$ in the
inspiral orbit is approximately proportional to $m_0^2 Q/(1+Q)^2$,
which decreases with increasing $Q$ for $Q \geq 1$ (see also the
initial conditions in Table I). Thus, binaries with smaller values of
$Q$ form a BH of higher spin.
For the case that $Q \leq 2$ and the disk mass is $\agt 0.01M_*$,
$a_{\rm f1}$ does not agree well with $a_{\rm f2}$ and $a_{\rm f3}$,
with an error of $\sim 0.05$. Because $a_{\rm f2}$ and $a_{\rm f3}$
agree well with each other, the error in $a_{\rm f1}$ appears to be much
larger than those of $a_{\rm f2}$ and $a_{\rm f3}$. The possible error
sources are (i) underestimation of the angular momentum that the disk
possesses and/or (ii) underestimation of the angular momentum dissipated by
gravitational waves. Possibility (ii) is not very likely, because
for $Q \geq 3$ and for models M20.160 and M20.178, $a_{\rm
f1}$--$a_{\rm f3}$ agree much better, indicating that
gravitational waves are computed with good accuracy. A possible
reason that the angular momentum of the disk is underestimated is that
much of the disk material is located not in the finest grid domain
of the AMR grid but in the second--fourth finest domains, in which the
grid resolution may not be high enough, and hence, the angular momentum
is spuriously dissipated. This interpretation is also supported by
Fig.~\ref{FIG7} (d), which shows that the disk mass surrounding the BH
depends on the grid resolution.
\begin{table*}[t]
\caption{Rest mass of the material located outside the apparent horizon
($M_{r > r_{\rm AH}}$), BH mass estimated by the energy-conservation law
($M_{\rm BH,f}$), BH mass estimated from the equatorial circumferential
radius ($C_e/4\pi$), irreducible mass of the BH
[$M_{\rm irr}=\sqrt{A_{\rm AH}/16\pi}$], ratio of the polar
circumferential radius ($C_p$) to the equatorial one ($C_e$) of the
apparent horizon (i.e., $C_p/C_e$), and estimated spin parameters of the
final state of the BH. $a_{\rm f1}$, $a_{\rm f2}$, and $a_{\rm f3}$ are
computed from the BH mass and angular momentum estimated by conservation
laws, from $M_{\rm irr}$ and $C_e$ of the apparent horizon, and from
$C_p/C_e$, respectively. All the values presented here are measured
for the state obtained at the end of the simulations for $N=36$. Note
that the parameters of the BH still vary with time at the end of
the simulation for models M15.145, M20.145, and M20.145N because of
gradual mass accretion. The error for $C_e$ and
$C_p/C_e$ is $\alt 0.1\%$, whereas that for $a_{\rm f1}$, $a_{\rm
f2}$, and $a_{\rm f3}$ is $\alt 0.01$, except for the case that a disk is
formed, for which the error of $a_{\rm f1}$ would be $\sim 0.05$.
The values for the final state of the BH depend very weakly on the
grid resolution as far as $N \geq 30$, but the mass of the
disk, which is present only for $Q \leq 2$, systematically increases
with $N$. The present results should be regarded as a lower bound
on the disk mass.
\label{TABLE3}}
\begin{tabular}{ccccccccc} \hline
Model & $M_{r > r_{\rm AH}}/M_*$ & $M_{\rm BH,f}/M_0$ & $C_e/(4\pi
M_0)$ & $M_{\rm irr}/M_0$ & $C_p/C_e$ & $a_{\rm f1}$ &
$a_{\rm f2}$ & $a_{\rm f3}$ \\ \hline
M15.145 & 0.023
& 0.983 & 0.981 & 0.895 & 0.872 & ~0.801 & ~0.747 & ~0.750 \\ \hline
M20.145 & 0.010
& 0.988 & 0.984 & 0.915 & 0.898 & ~0.717 & ~0.684 & ~0.682 \\ \hline
M20.145N & 0.011
& 0.988 & 0.983 & 0.916 & 0.900 & ~0.721 & ~0.677 & ~0.676 \\ \hline
M20.160 & $6 \times 10^{-4}$
& 0.988 & 0.987 & 0.919 & 0.899& ~0.694 & ~0.679 & ~0.680 \\ \hline
M20.178 & $< 10^{-5}$
& 0.983 & 0.983 & 0.917 & 0.904& ~0.676 & ~0.672 & ~0.666 \\ \hline
M30.145 & $<10^{-5}$
& 0.989 & 0.985 & 0.942 & 0.934& ~0.566 & ~0.559 & ~0.564 \\ \hline
M30.160 & $<10^{-5}$
& 0.985 & 0.983 & 0.940 & 0.935& ~0.547 & ~0.560 & ~0.562 \\ \hline
M30.178 & $<10^{-5}$
& 0.982 & 0.981 & 0.940 & 0.937 & ~0.551 & ~0.550 & ~0.552 \\ \hline
M40.145 & $<10^{-5}$
& 0.988 & 0.986 & 0.955 & 0.953& ~0.485 & ~0.482 & ~0.474 \\ \hline
M50.145 & $<10^{-5}$
& 0.989 & 0.986 & 0.963 & 0.964& ~0.408 & ~0.419 & ~0.425 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{Several quantities characterizing the emitted gravitational
waves: total radiated energy ($\Delta E$) and angular momentum
($\Delta J$) in units of the initial ADM mass ($M_0$) and initial
angular momentum ($J_0$), fraction of the radiated energy in the
$(l, |m|)=(2,2)$, (3,3), (4,4), and (2,1) modes, frequency of the
fundamental quasinormal mode, kick velocity, and type of the
gravitational waveform. The radiated energy for each mode is shown
in units of $M_0$ in $\%$. The error bars primarily reflect the
numerical error associated with the finite grid resolution and, in
part, the finite extraction radii of gravitational waves. Note that
gravitational waves are extracted at several coordinate radii of
$50$--$100M_0$.
\label{TABLE4}}
\begin{tabular}{cccccccccc} \hline
Model & $\Delta E/M_0$ (\%) & $\Delta J/J_0$ (\%)
& ~~~~~~(2,2)~~~~~~ & ~~~~~~(3,3)~~~~~~ & ~~~~~~(4,4)~~~~~~ & ~~~~~~(2,1)~~~~~~
& $f_{\rm QNM} M_{\rm BH}$ & $V_{\rm kick}$ (km/s) & Type \\ \hline
M15.145 & $0.68 \pm 0.02$ & $14 \pm 1$ &$0.66 \pm 0.02$ & $0.006\pm 0.002$
&$0.003 \pm 0.001$&$\alt 0.01$ & --- & $<5$ & I \\ \hline
M20.145 & $0.87 \pm 0.02$ & $17.4 \pm 0.3$ &$0.85 \pm 0.01$&$0.017\pm 0.002$
&$0.005 \pm 0.001$ &$0.003 \pm 0.001$ & --- & $21 \pm 5$ & I \\ \hline
M20.145N & $0.78 \pm 0.02$ & $15.0 \pm 0.2$ &$0.76 \pm 0.01$&$0.014\pm 0.001$
& $0.005 \pm 0.001$ &$0.002 \pm 0.001$ & --- & $11 \pm 2$ & I \\ \hline
M20.160 & $1.22 \pm 0.02$ & $22 \pm 1$ & $1.19 \pm 0.02$ & $0.025 \pm 0.005$
&$0.006 \pm 0.002$ & $\alt 0.01$ & --- & $62 \pm 15$ & II \\ \hline
M20.178 & $1.7 \pm 0.1$ & $25 \pm 1$ & $1.6 \pm 0.1$ & $0.05 \pm 0.01$
&$0.01 \pm 0.005$ & $0.03 \pm 0.01$ & 0.087 & $126 \pm 20$ & II \\ \hline
M30.145 & $1.3 \pm 0.1$ & $22.0 \pm 0.4$ & $1.2 \pm 0.1$ & $0.08 \pm 0.01$
&$0.014 \pm 0.002$ & $0.03 \pm 0.02$ & 0.081 &$98 \pm 9$ & II \\ \hline
M30.160 & $1.6 \pm 0.1$ & $26 \pm 1$ & $1.4 \pm 0.1$ & $0.09 \pm 0.02$
&$0.020 \pm 0.003$& $0.05 \pm 0.02$ & 0.080 &$153 \pm 16$ & III\\ \hline
M30.178 & $1.8 \pm 0.1$ & $26 \pm 1$ & $1.7 \pm 0.1$ & $0.12 \pm 0.01$
&$0.02 \pm 0.01$& $0.04 \pm 0.03$ & 0.080 &$137 \pm 43$ & III \\ \hline
M40.145 & $1.3 \pm 0.1$ & $23.5 \pm 0.5$ & $1.1 \pm 0.1$ & $0.13 \pm 0.01$
& $0.025 \pm 0.005$ & $0.06 \pm 0.02$ & 0.077 &$136 \pm 12$ & III \\ \hline
M50.145 & $1.11 \pm 0.05$ & $24 \pm 1$ & $0.85 \pm 0.02$ & $0.13 \pm 0.01$
& $0.04 \pm 0.01$ & $0.09 \pm 0.01$ & 0.074 &$137 \pm 6$ & III \\ \hline
\end{tabular}
\end{table*}
\subsection{Gravitational waves}\label{sec:gw}
\subsubsection{Comparison with Taylor T4 waveform}
\begin{figure*}[t]
\vspace{-8mm}
\begin{center}
\epsfxsize=3.5in
\leavevmode
\epsffile{fig9a.ps}
\epsfxsize=3.5in
\leavevmode
~\epsffile{fig9b.ps} \\
\vspace{-9mm}
\epsfxsize=3.5in
\leavevmode
\epsffile{fig9c.ps}
\epsfxsize=3.5in
\leavevmode
~\epsffile{fig9d.ps} \\
\vspace{-9mm}
\epsfxsize=3.5in
\leavevmode
\epsffile{fig9e.ps}
\epsfxsize=3.5in
\leavevmode
~\epsffile{fig9f.ps}
\end{center}
\vspace{-12mm}
\caption{Gravitational waveforms observed along the $z$ axis (solid
curve) for models (a) M20.145, (b) M30.145, (c) M40.145, (d) M50.145,
(e) M20.160, and (f) M30.178. $t_{\rm ret}$ denotes the retarded time
[see Eq. (\ref{ret}) for definition] and $m_0$ is the total mass
defined by $M_{\rm BH}+M_{\rm NS}$. The amplitude at a hypothetical
distance $D$ can be found from Eq. (\ref{hamp}). The dot-dotted curves
denote the waveforms derived by the Taylor-T4 formula.
\label{FIG9}}
\end{figure*}
\begin{figure*}[t]
\vspace{-8mm}
\begin{center}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig10a.ps}
\epsfxsize=3.3in
\leavevmode
~~\epsffile{fig10b.ps} \\
\vspace{-1.2cm}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig10c.ps}
\epsfxsize=3.3in
\leavevmode
~~\epsffile{fig10d.ps}
\end{center}
\vspace{-13mm}
\caption{Orbital angular velocity computed from $\Psi_4$
as a function of the retarded time for models (a) M20.145, (b)
M30.145, (c) M40.145, and (d) M50.145. $t_{\rm ret}$ denotes the
retarded time and $m_0$ is the total mass. For all the panels, the
results for two different grid resolutions with $N=30$ (dashed curve)
and 36 (solid curve) and the results for M20.145b--M50.145b
(dot-dashed curves) are shown together. The dot-dotted curve denotes
the result derived by the Taylor-T4 formula.
\label{FIG10}}
\end{figure*}
Figure \ref{FIG9} (a)--(f) plot gravitational waveforms ($+$ mode)
observed along the $z$ axis as a function of the retarded time for models
M20.145--M50.145, M20.160, and M30.178. ($h$ denotes the
gravitational-wave amplitude.) The retarded time is approximately defined by
\begin{eqnarray}
t_{\rm ret} \equiv t - D - 2M_0 \ln (D/M_0), \label{ret}
\end{eqnarray}
where $D$ is the distance between the source and
an observer.
The gravitational waveforms shown here are obtained by performing the
time integration of the Newman-Penrose quantity of $l=|m|=2$ mode.
From the values of $D h/m_0$ shown in Fig.~\ref{FIG9},
the amplitude of gravitational waves at a distance $D$ is evaluated as
\begin{eqnarray}
&&h_{\rm gw} \approx 2.4 \times 10^{-22}
\biggl( {D h/m_0 \over 0.1}\biggr) \nonumber \\
&& ~~~~~~~~~~~~~~~~~~~\times
\biggl({100~{\rm Mpc} \over D}\biggr)
\biggl({m_0 \over 5 M_{\odot}}\biggr). \label{hamp}
\end{eqnarray}
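The scaling in Eq. (\ref{hamp}) is just $h_{\rm gw} = (Dh/m_0)\,(Gm_0/c^2)/D$ restored to physical units; a quick check with standard constants:

```python
# Standard physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Msun = 1.989e30      # kg
Mpc = 3.086e22       # m

def h_gw(Dh_over_m0, m0_in_Msun, D_in_Mpc):
    # Restore Eq. (hamp) to physical units: h = (D h/m0) * (G m0/c^2) / D
    m0_meters = G * m0_in_Msun * Msun / c ** 2
    return Dh_over_m0 * m0_meters / (D_in_Mpc * Mpc)

h = h_gw(0.1, 5.0, 100.0)    # ~2.4e-22, reproducing the prefactor in Eq. (hamp)
```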
To validate the numerical waveforms presented here, we first compare
the waveforms in the inspiral phase with those derived by the
so-called Taylor-T4 formula for two point masses in quasicircular
orbits. In the Taylor-T4 formula, one calculates evolution of the
angular velocity, $\Omega$, of the quasicircular orbits due to the
gravitational radiation reaction up to the 3.5PN level beyond the
quadrupole formula: The circular orbits at a given value of $\Omega$
are determined by the 3PN equations of motion neglecting gravitational
radiation reaction, and then, one considers an adiabatic evolution of
$\Omega$ using the 3.5PN formula for gravitational radiation reaction
(see, e.g., Refs.~\cite{BHBH12,BHBH} for a detailed description of
various formulas based on the PN theory). Recent high-accuracy
simulations for equal-mass (nonspinning or corotating) BH-BH binaries
have shown that the Taylor-T4 formula reproduces their orbital
evolution and gravitational waveforms with high accuracy at least up
to about one orbit before the onset of the merger. Assuming that this
also holds for unequal-mass binaries, we calibrate our numerical results by
comparing them with the results of the Taylor-T4 formula. (Indeed,
our numerical results indicate that the Taylor-T4 formula provides a
good approximate solution for $\Omega(t)$ even for unequal-mass
binaries, as shown below.) In the present work, the 3PN formula
\cite{Kidder} is employed for calculating the amplitude of
gravitational waves in the Taylor-T4 formula.
More specifically, the comparison of the numerical waveforms and
semianalytic ones derived by the Taylor-T4 formula is carried out in
the following manner. First, we derive the orbital angular velocity
as a function of time for a numerical result from $\Psi_4$ by
\cite{BHBH12}
\begin{eqnarray}
\Omega(t)
={1 \over 2} {|\Psi_4(l=m=2)| \over \displaystyle \Big|\int dt
\Psi_4(l=m=2)\Big|},
\label{gwangv}
\end{eqnarray}
where $\Psi_4(l=m=2)$ is the $l=m=2$ mode of $\Psi_4$. Then, we
compare the numerical result for $\Omega(t)$ with the semianalytic one
derived by the Taylor-T4 formula (see Fig.~\ref{FIG10}). When
comparing the two results, we have a degree of freedom in the time
translation. Thus, first of all, by shifting the time axis of the
Taylor-T4 result, we align the origin of time. As shown in
Fig.~\ref{FIG10}, we can always shift the time axis appropriately to
align the two results for $\Omega(t)$.
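Equation (\ref{gwangv}) can be exercised on a toy constant-frequency $\Psi_4$. The sketch below uses a cumulative trapezoidal integral and removes the constant of integration by subtracting the mean over an integer number of cycles; this is only a stand-in for the more careful integration needed for real, chirping data.

```python
import numpy as np

Omega0 = 0.05
T = np.pi * 20 / Omega0                  # exactly 20 cycles of the 2*Omega0 signal
t = np.linspace(0.0, T, 40001)
dt = t[1] - t[0]
psi4 = np.exp(-2j * Omega0 * t)          # toy l=m=2 mode at constant Omega0

# Cumulative trapezoidal time integral of Psi4
integ = np.concatenate(([0.0], np.cumsum(0.5 * (psi4[1:] + psi4[:-1]) * dt)))
integ = integ - integ.mean()             # remove the constant of integration

# Eq. (gwangv): recovers Omega0 at every sample for this toy signal
Omega = 0.5 * np.abs(psi4) / np.abs(integ)
```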
After the appropriate time translation, we compare the gravitational
waveforms obtained by the numerical simulation and by the Taylor-T4
formula. Then, we still have a degree of freedom in choosing the
wave phase. Thus, we iteratively change the phase of the Taylor-T4
waveform until a good matching of the two waveforms is achieved.
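One simple way to realize this two-step alignment (time shift first, then phase) can be sketched on toy complex strains; the Gaussian envelope, frequency, and offsets below are arbitrary illustration values, not the procedure actually used for the simulation data.

```python
import numpy as np

t = np.linspace(0.0, 100.0, 2001)
dt = t[1] - t[0]
env = lambda tc: np.exp(-((t - tc) / 10.0) ** 2)
h_num = env(55.0) * np.exp(1j * 0.5 * t)             # toy "numerical" waveform
h_t4 = env(47.5) * np.exp(1j * (0.5 * t - 1.2))      # misaligned toy "Taylor-T4"

# Step 1: time translation -- slide h_t4 and maximize the overlap magnitude
shifts = np.arange(-300, 301)
overlaps = [abs(np.vdot(h_num, np.roll(h_t4, s))) for s in shifts]
best = int(shifts[np.argmax(overlaps)])
time_shift = best * dt                               # recovers the 7.5 offset

# Step 2: wave phase -- rotate by the argument of the remaining overlap
phase = np.angle(np.vdot(np.roll(h_t4, best), h_num))
h_t4_aligned = np.roll(h_t4, best) * np.exp(1j * phase)
```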
\begin{figure*}[thb]
\vspace{-8mm}
\begin{center}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig11a.ps}
\epsfxsize=3.3in
\leavevmode
~~~~~~~\epsffile{fig11b.ps} \\
\vspace{-1.2cm}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig11c.ps}
\epsfxsize=3.3in
\leavevmode
~~~~~~~\epsffile{fig11d.ps}
\end{center}
\vspace{-13mm}
\caption{Gravitational waves (the real part of $\Psi_4$) emitted in
the merger and ringdown phases for models (a)
M30.145, (b) M40.145, (c) M50.145, and (d) M30.145 (dashed curve),
M30.160 (dotted curve), and M30.178 (solid curve). For (a)--(c), a
fitting waveform given by Eq. (\ref{QNM}) is plotted together by the
dot-dotted curve.
\label{FIG11}}
\end{figure*}
In Fig.~\ref{FIG9}, the dot-dotted curves denote the resulting
semianalytic waveforms derived from the Taylor-T4 formula. The
numerical waveforms are found to agree with the Taylor-T4 results with
good accuracy, except for the early stage of the simulations (i.e.,
the first few wave cycles), during which the eccentricity of the
binaries is not small. In particular, for the last several wave cycles
(except for the orbit just before the merger), the numerical wave
phases agree with those derived from the Taylor-T4 formula to within
$\sim 3\%$ error. This indicates that the binaries computed in the
present simulation are indeed in approximately quasicircular orbits of
small eccentricity, at least for the last several inspiral orbits. It
also indicates that the Taylor-T4 formula provides a good approximate
solution for $\Omega(t)$ even for unequal-mass binaries, because
the agreement holds systematically irrespective of the mass
ratio.
Figure \ref{FIG10} also shows good agreement between the numerical and
Taylor-T4's results for $\Omega(t)$. The numerical results for two
grid resolutions [$N=36$ (solid curve) and 30 (dashed curve)] are
shown for illustrating that a convergence is approximately
achieved. In addition, the results for models M20.145b--M50.145b are
plotted for comparison. This figure shows that the evolution of
$\Omega(t)$ for models M20.145--M50.145 agrees well with those
predicted by the Taylor-T4 formula for a long time duration
irrespective of the mass ratio. The modulation in the evolution of
$\Omega$ is $\Delta \Omega/\Omega \alt 6\%$ during the first few
inspiral orbits (irrespective of whether $N=30$ or 36), and thus, the
eccentricity, approximately estimated as $2\Delta
\Omega/3\Omega$, is $\alt 4\%$. In the last several orbits, the
eccentricity appears to be at most $\sim 1\%$.
Comparing the results of M20.145--M50.145 with those of
M20.145b--M50.145b, we find that the modulation is larger by a factor of
$\agt 2$ for the initial conditions computed with the
$\beta^{\varphi}$ condition. This unfavorable behavior is more
significant for larger values of $Q$. This demonstrates the
advantage of using initial conditions computed with the 3PN-J
condition.
\subsubsection{Classification of waveforms}
Figure \ref{FIG9} shows that gravitational waveforms in the merger and
ringdown phases depend sensitively on the mass ratio and the compactness
of the NS. Comparison of the waveforms for models M20.145 and M20.160
[see Figs.~\ref{FIG9} (a) and (e)], for which the masses of the BH and NS
are identical, illustrates a strong dependence of the
merger waveforms on the NS compactness. The wave amplitude for model
M20.145 decreases suddenly in the middle of the inspiral phase due to
tidal disruption. By contrast, the wave amplitude for model M20.160
does not decrease as quickly, because tidal disruption does not
occur far outside the ISCO.
Gravitational waveforms in the merger and ringdown phases for models
M30.145 and M30.178 [see Figs.~\ref{FIG9} (b) and (f), and
Fig.~\ref{FIG11}(d)] are also distinguishable, because the amplitude in
these phases is much larger for model M30.178. This reflects the
fact that the NS for model M30.178 is not strongly affected by the
tidal force of the companion BH even at the ISCO. By contrast, the
tidal effects play an important role in deforming the NS in
close orbits for model M30.145. Because the NS is disrupted near the
ISCO for this model, the wave amplitude in the merger and ringdown
phases is significantly suppressed. These results illustrate that
gravitational waves emitted in the merger and ringdown phases carry
potential information about the compactness of the NS (see also
Sec. \ref{spectrum}).
As these comparisons clarify, there are three types of gravitational
waveforms. For the case that the NS is tidally disrupted during the
inspiral phase (e.g., for model M20.145), the wave amplitude quickly
decreases at tidal disruption, as the waveforms associated with
the inspiral motion are suddenly shut off. Namely, the waveform is
composed only of the inspiral waveform and a subsequent sudden shut-off,
and the merger and ringdown waveforms are essentially absent. We
refer to this type as type I.
Even in this case, most of the material of the tidally disrupted NS
falls into the companion BH (see Fig.~\ref{FIG7}). During the tidal
disruption and subsequent infalling into the BH, an orbital motion of
the disrupted material and an oscillation of the BH may excite merger
and ringdown gravitational waves (here ``ringdown gravitational
waves'' imply gravitational waves associated with quasinormal modes of
the BH). However, such waveforms are not seen. The likely reason for
the absence of the merger waveform is that the NS is significantly
elongated in a short time scale and its density quickly decreases,
suppressing an efficient excitation of gravitational waves. The reason
for the absence of the ringdown waveform is that the material of the
NS, which falls into the BH, does not have a compact configuration but
rather an elongated, low-density one. When such
low-density diffuse matter falls incoherently into the BH, the
excitation of the quasinormal modes is significantly suppressed due to
the phase-cancellation effect \cite{SaNa}.
For models M30.145 and M20.160, mass shedding occurs before the
binaries reach the ISCO. However, the sudden shut-off of the inspiral
waveform is not seen, because the tidal disruption and the subsequent
spreading of the material do not occur during the inspiral
phase. Rather, most of the elongated NS falls into the BH before
the tidal disruption is completed. In this case, the ringdown waveform
is seen, but its amplitude is low because the quasinormal mode is not
excited efficiently. By contrast, just before the ringdown
gravitational waves are emitted, gravitational waves are significantly
excited by the matter motion around the BH. Thus, the merger
gravitational waves are present. Namely, in these cases, the
gravitational waveforms are composed primarily of the inspiral and
merger waveforms. We refer to this type as type II.
For models M40.145, M50.145, and M30.178, tidal effects on the NS do
not play an important role. In this case, the gravitational waveform
is composed of the inspiral, merger, and ringdown waveforms, as in the
merger of BH-BH binaries (e.g., \cite{BHBH12,BHBH}). We refer to this
type as type III.
\subsubsection{Ringdown waveforms}
For models M30.145, M40.145, M50.145, M30.160, M20.178, and M30.178,
ringdown gravitational waves associated with quasinormal modes of the
formed BH are excited in the final phase. These gravitational waves
are approximately described by
\begin{eqnarray}
A e^{-t/t_d} \sin(2\pi f_{\rm QNM} t + \delta), \label{QNM}
\end{eqnarray}
where $A$ and $\delta$ are constants, and $f_{\rm QNM}$ and $t_d$ are
the frequency and damping time scale of the fundamental quasinormal
mode. A perturbation study predicts the frequency and damping time
scale for a BH of mass $M_{\rm BH}$ and spin $a$ as \cite{leaver}
\begin{eqnarray}
&& f_{\rm QNM} M_{\rm BH} \approx 0.16
[ 1-0.63(1-a)^{0.3}], \label{eqbh} \\
&& t_d \approx {2(1-a)^{-0.45} \over \pi f_{\rm QNM}}.
\label{QNMf}
\end{eqnarray}
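Equations (\ref{eqbh}) and (\ref{QNMf}) are straightforward to evaluate; the sketch below reproduces the $f_{\rm QNM} M_{\rm BH}$ values quoted next for $a \approx 0.56$, 0.48, and 0.42.

```python
import math

def f_qnm_times_M(a):
    # Eq. (eqbh): fundamental quasinormal-mode frequency times BH mass
    return 0.16 * (1.0 - 0.63 * (1.0 - a) ** 0.3)

def damping_time(a):
    # Eq. (QNMf): damping time, so that pi * t_d * f_QNM = 2 (1-a)^(-0.45)
    return 2.0 * (1.0 - a) ** (-0.45) / (math.pi * f_qnm_times_M(a))

vals = [round(f_qnm_times_M(a), 3) for a in (0.56, 0.48, 0.42)]
# vals == [0.081, 0.077, 0.074], matching the quoted f_QNM M_BH values
```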
For models M30.145, M40.145, and M50.145, the BH spin is $\approx
0.56$, 0.48, and 0.42 as shown in Sec. \ref{BHspin}. (For models
M30.160 and M30.178, the spin agrees approximately with that for model
M30.145.) Thus, for each of these models, $f_{\rm QNM} M_{\rm BH}
\approx 0.081$, 0.077, and 0.074 ($f_{\rm QNM} M_{0} \approx 0.082$,
0.078, and 0.075), and $\pi t_d f_{\rm QNM} \approx 2.9$, 2.7, and
2.6, respectively.
Figure~\ref{FIG11} (a)--(c) compares the numerical waveforms in
the ringdown phase with the hypothetical analytic waveforms for models
M30.145--M50.145. The numerical waveforms are fitted fairly well by
the analytic form (\ref{QNM}), and $f_{\rm QNM}$ and $t_d$
computed from the data of the BH geometry agree approximately with
those computed from the gravitational waveforms. However, the numerical
waveforms do not agree completely with the hypothetical ones. This
disagreement is reasonable because, for these models, the material of
the NS does not fall into the BH simultaneously at the merger. Thus,
gravitational waves are emitted both by the motion of the material
moving in the vicinity of the BH and by the quasinormal-mode
oscillation of the BH. In addition, the quasinormal modes are not
excited simultaneously, because the material does not fall into the BH
simultaneously, and the resulting waveform may be composed of many
ringdown waveforms, as well as of waveforms excited by material
moving around the BH. Furthermore, the system is neither completely in
vacuum nor in a stationary state, and hence, the numerical waveforms
may not be fitted precisely by the analytic results derived under
idealized assumptions.
The wavelength of gravitational waves emitted in the merger phase
(around the time when the peak amplitude is reached) is slightly
longer than that of the quasinormal mode (i.e., the frequency is lower
than $f_{\rm QNM}$). This indicates that these gravitational waves
are not emitted by an oscillation of the BH but are likely
emitted primarily by the motion of material moving in the
vicinity of the BH. In the merger phase, the amplitude gradually
decreases after the peak is reached. The reason for this behavior is
that the material is elongated by the tidal effect of the BH during
the infall; i.e., the reason is not the damping associated with the
quasinormal-mode oscillation. The amplitude emitted in the merger
phase is much larger than that emitted in the ringdown phase, although
the characteristic frequencies of these two types of gravitational
waves are not very different. In Sec. \ref{spectrum}, we find that
the Fourier spectrum has a plateau in a high-frequency region, $f m_0
\sim 0.06$--0.08, for the case that $Q=3$--5. This plateau is primarily
generated by gravitational waves emitted in the merger phase, not by
the quasinormal-mode oscillation.
Figure~\ref{FIG11} (d) compares the waveforms emitted in the
merger and ringdown phases for models M30.145, M30.160, and
M30.178. For these models, the masses of the BH and NS are identical,
but the compactness of the NSs differs. This figure
shows that the amplitude is larger for the model with the larger value
of ${\cal C}$. This is reasonable because more compact NSs are less
subject to the tidal effects of the companion BH, and hence, the
material of the NS falls into the BH more simultaneously,
resulting in a coherent excitation of gravitational waves.
The difference in the amplitude of gravitational waves emitted in the
merger and ringdown phases is also reflected in the noticeable
difference in the energy and angular momentum carried by gravitational
waves. As shown in Table \ref{TABLE4}, for example, the total energy
radiated for models M30.160 and M30.178 is larger by $\sim 35\%$ and
$\sim 50\%$, respectively, than that for model M30.145.
\subsection{Gravitational wave spectrum}\label{spectrum}
\begin{figure*}[t]
\vspace{-4mm}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig12a.ps}
\epsfxsize=3.3in
\leavevmode
~~~~~\epsffile{fig12b.ps}\\
\vspace{3mm}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig12c.ps}
\epsfxsize=3.3in
\leavevmode
~~~~~\epsffile{fig12d.ps}\\
\vspace{3mm}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig12e.ps}
\epsfxsize=3.3in
\leavevmode
~~~~~\epsffile{fig12f.ps}
\vspace{-3mm}
\caption{(a)--(d) The spectrum of gravitational waves $f h(f) D/m_0$
(solid curves) for models (a) M20.145 and M20.160, (b) M30.145, (c)
M40.145, and (d) M50.145, respectively. The dashed and long-dashed
curves denote the relation according to Eq. (\ref{Nspec}) and
spectra of gravitational waves computed in the Taylor-T4 formula.
The dotted curves denote the results of fitting. The upper
horizontal and right vertical axes show the value in a hypothetical
value of $M_{\rm NS}=1.35M_{\odot}$ and $D=100$ Mpc. The arrow
indicates the frequency of the fundamental quasinormal mode
calculated by Eq. (\ref{QNMf}). (e) The same as (a) but for
gathering the spectra for models M20.145--M50.145 (long-dashed,
dotted, dashed, and solid curves). To clarify the qualitative
feature of the spectra, a smoothing procedure is applied for the
numerical data. The three lines in the upper left side denote the
predicted frequency at which mass shedding of the NS occurs due to
tidal field of the companion BH for models M20.145--M40.145 ($Q$
denotes the mass ratio). (f) The same as (b) but for models M30.145
(dot-dotted curve), M30.160 (dotted curve), and M30.178 (solid
curve). A smoothing procedure is also applied for the numerical
data.
\label{FIG12}}
\end{figure*}
To determine the effective amplitude of gravitational waves at a
given frequency, the Fourier spectrum of the $l=|m|=2$
modes of gravitational waves is computed. In this paper, we define the
Fourier spectrum as
\begin{eqnarray}
h(f) \equiv \sqrt{{|h_+(f)|^2+|h_{\times}(f)|^2 \over 2}},
\end{eqnarray}
where $f$ is the frequency, and
$h_+(f)$ and $h_{\times}(f)$ are the Fourier transforms of
the plus and cross modes of gravitational waves observed along the $z$
axis:
\begin{eqnarray}
&&h_+(f)=\int e^{2\pi i f t} h_+(t)dt,\\
&&h_{\times}(f)=\int e^{2\pi i f t} h_{\times}(t)dt.
\end{eqnarray}
Then, the most optimistic effective amplitude of gravitational
waves for a given frequency is defined by $f h(f)$.
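On discrete data, $h_+(f)$ and $h_\times(f)$ are approximated by an FFT times the sampling step. The toy signal below (an arbitrary circularly polarized sinusoid, not simulation output) simply checks that the peak of the resulting $h(f)$ sits at the injected frequency.

```python
import numpy as np

dt = 0.1
t = np.arange(4096) * dt
f0 = 0.25
hp = np.cos(2.0 * np.pi * f0 * t)        # plus polarization (toy)
hx = np.sin(2.0 * np.pi * f0 * t)        # cross polarization (toy)

freqs = np.fft.rfftfreq(len(t), dt)
hp_f = np.fft.rfft(hp) * dt              # approximate continuous Fourier transform
hx_f = np.fft.rfft(hx) * dt
h_f = np.sqrt((np.abs(hp_f) ** 2 + np.abs(hx_f) ** 2) / 2.0)

f_peak = freqs[np.argmax(h_f)]           # close to f0 for the toy signal
```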
In the numerical simulations, BH-NS binaries with a finite orbital
separation are prepared as the initial condition, and the inspiral
phase is computed for a finite duration. Consequently, the Fourier
spectrum on the low-frequency side (for $f \alt \Omega_0/\pi$)
is absent if we naively perform the Fourier transformation of
the numerical data. To compensate for the missing low-frequency part of
the spectrum, we attach a hypothetical waveform computed by the
Taylor-T4 formula, as is often done (e.g., Refs.~\cite{BHBH12,ST08}). To
do so, we match the numerical waveforms with those of the Taylor-T4
formula at a time when the corresponding binary orbit in the numerical
simulation is approximately quasicircular. As shown in
Fig.~\ref{FIG9}, the two waveforms match well over a wide range of time, so
the resulting Fourier spectrum depends only weakly on the chosen
matching time.
Figure \ref{FIG12} plots $f h(f) D /m_0$ for various models. In
Figs.~\ref{FIG12} (a)--(d), we also plot the Fourier spectra of the
gravitational waveform derived from the Taylor-T4 formula (long-dashed
curves) and the Newtonian waveform (dashed curves), given by (e.g., \cite{CF})
\begin{eqnarray}
f h(f) =\sqrt{{5 \over 24\pi}} {m_0 \over D}
{Q^{1/2} \over 1+Q} (\pi m_0 f)^{-1/6}. \label{Nspec}
\end{eqnarray}
Here, ``Newtonian'' implies that the orbital motion is calculated in
the Newtonian plus 2.5 PN equations of motion, and the gravitational
waveform is computed by the quadrupole formula.
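As a spot check of Eq.~(\ref{Nspec}) in the dimensionless units of
Fig.~\ref{FIG12}, the following sketch evaluates $f h(f) D/m_0$ at
$f m_0 = 0.01$ for $Q=2$; the resulting value $\simeq 0.216$ and the
$f^{-1/6}$ scaling are properties of the quoted formula only, not of
the numerical spectra.

```python
import numpy as np

# Eq. (Nspec) in dimensionless form: f h(f) D/m0 as a function of f m0.
def fhf_newtonian(f_m0, Q):
    return (np.sqrt(5.0 / (24.0 * np.pi)) * np.sqrt(Q) / (1.0 + Q)
            * (np.pi * f_m0) ** (-1.0 / 6.0))

val = fhf_newtonian(0.01, 2.0)   # ~0.216 at f m0 = 0.01, Q = 2
```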
As Figure \ref{FIG12} indicates, the spectra of gravitational waves
emitted in the inspiral, merger, and ringdown phases are smoothly
connected. Nevertheless, we still see modulations of small amplitude
for $0.01 \alt f m_0 \alt 0.02$, which are likely to be caused by
slight modulations in the amplitude and/or phase of numerical
gravitational waveforms.
On the upper horizontal and right vertical axes, we plot the frequency
and averaged effective amplitude for hypothetical values $M_{\rm
NS}=1.35M_{\odot}$ and $D=100$ Mpc. Here, the averaged effective
amplitude is defined by the average of $f h(f)$ over the source
direction and the direction of the binary orbital plane as
\begin{eqnarray}
h_{\rm eff} & \equiv & 0.4 f h(f) \nonumber \\
&=& 9.6 \times 10^{-23} \biggl({f h(f) D/m_0 \over 0.1}\biggr)
\biggl({m_0 \over 5M_{\odot}}\biggr) \nonumber \\
&&~~~~~~~~~~~~~~~~ \times \biggl({D \over 100~{\rm Mpc}}\biggr)^{-1}.
\label{heff}
\end{eqnarray}
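The numerical prefactor in Eq.~(\ref{heff}) can be reproduced directly;
the physical constants below are standard values assumed for this
check, not taken from the paper.

```python
# h_eff = 0.4 f h(f), evaluated for f h(f) D/m0 = 0.1, m0 = 5 Msun,
# D = 100 Mpc (assumed standard constants; m0 in G = c = 1 units).
G_MSUN_KM = 1.476625          # G Msun / c^2 [km]
MPC_KM = 3.085678e19          # 1 Mpc [km]
m0_km = 5.0 * G_MSUN_KM
D_km = 100.0 * MPC_KM
h_eff = 0.4 * 0.1 * m0_km / D_km   # ~9.6e-23, as quoted
```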
Figure \ref{FIG12} shows that the spectrum shape has the following
universal qualitative feature: Irrespective of the values of ${\cal
C}$ and $Q$, $f h(f)$ (hereafter referred to as the spectrum
amplitude) decreases with the increase of $f$ and above a ``cut-off''
frequency, $f_{\rm cut}$, it decreases exponentially. For $f
\rightarrow 0$, $f h(f)$ is universally proportional to $f^{-1/6}$,
and for $f < f_{\rm cut}$, $f h(f)$ can always be written as $\propto
f^{-1/6}F(f)$, where $F(f)$ is a slowly varying function of $f$ with
value $\alt 1$. However, the detailed spectrum shape depends on
the values of ${\cal C}$ and $Q$, and Fig.~\ref{FIG12} exhibits a wide
variety of the possible spectrum shape as explained in the following.
For the case that the NS is tidally disrupted outside the ISCO, type I
gravitational waves are emitted (e.g., for model M20.145). In this
case, the spectrum amplitude monotonically decreases, and above a
cut-off frequency, it exponentially decreases with
the increase of $f$. The cut-off frequency for model M20.145 is
$f_{\rm cut} \approx 0.04/m_0$. For $f \leq 0.03/m_0 \sim 0.8f_{\rm
cut}$, $F(f)$ may be approximately fitted by the 3PN formula for the
amplitude \cite{Kidder}, and thus, the spectrum may be described,
e.g., in the following way
\begin{eqnarray}
h(f) =h_{\rm 3PN}(f) e^{-(f/f_{\rm cut})^{\sigma}}, \label{fit1}
\end{eqnarray}
where $h_{\rm 3PN}$ is the amplitude of gravitational waves derived in
the 3PN theory (see Eq. (79) of Ref.~\cite{Kidder}), and $\sigma$ is a
constant of order unity. The dotted curve in Fig.~\ref{FIG12} (a)
denotes the result of the fitting for $f_{\rm cut}=0.038/m_0$ and
$\sigma=2.2$, and we see that the fitting works well. For model
M15.145, the fitting also works well and gives $f_{\rm cut}=0.030/m_0$
and $\sigma=2.2$. Because the NS is tidally disrupted farther outside
the ISCO, the value of $f_{\rm cut}$ is smaller for model M15.145.
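The fit of Eq.~(\ref{fit1}) can be sketched as below. Since the 3PN
amplitude of Eq.~(79) of Ref.~\cite{Kidder} is not reproduced here, the
leading-order (Newtonian) amplitude from Eq.~(\ref{Nspec}) is
substituted as a stand-in and the ``data'' are synthetic; the sketch
only illustrates that $(f_{\rm cut},\sigma)$ are recoverable by least
squares, not the actual fit for model M20.145.

```python
import numpy as np
from scipy.optimize import curve_fit

Q = 2.0

def h_insp(f):
    # Stand-in inspiral amplitude h(f) (Newtonian, D = m0 = 1),
    # i.e. Eq. (Nspec) divided by f; the paper uses h_3PN(f) instead.
    return (np.sqrt(5.0 / (24.0 * np.pi)) * np.sqrt(Q) / (1.0 + Q)
            * (np.pi * f) ** (-1.0 / 6.0) / f)

def model(f, fcut, sigma):
    # Eq. (fit1): h(f) = h_insp(f) exp[-(f/fcut)^sigma]
    return h_insp(f) * np.exp(-(f / fcut) ** sigma)

f = np.linspace(0.005, 0.06, 200)
data = model(f, 0.038, 2.2)          # synthetic M20.145-like spectrum

popt, _ = curve_fit(model, f, data, p0=[0.03, 2.0],
                    bounds=([1e-3, 0.5], [0.2, 5.0]))
fcut_fit, sigma_fit = popt           # recovers (0.038, 2.2)
```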
The latest high-precision numerical study of the quasiequilibrium
states of BH-NS binaries shows that mass shedding of the NS
occurs if the following condition is satisfied \cite{TBFS2}
\begin{eqnarray}
\Omega \geq \Omega_{\rm MS}
=C \biggl({GM_{\rm NS} \over R_{\rm NS}^3}\biggr)^{1/2}
\biggl(1+{M_{\rm NS} \over M_{\rm BH}}\biggr)^{1/2}, \label{MSEQ}
\end{eqnarray}
where the value of $C$ is $\approx 0.270$ for the $\Gamma=2$ polytropic
EOS. (Note that this relation holds only for an NS with an
irrotational velocity field and a BH with no spin.)
From this relation, we expect that the mass shedding sets in at a
frequency $f_{\rm MS}=\Omega_{\rm MS}/\pi$. We plot this expected
frequency (0.0174, 0.0219, and $0.0265/m_0$) for models M20.145--M40.145
together with the Fourier spectra for models M20.145--M50.145 in
Fig.~\ref{FIG12} (e).
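The quoted mass-shedding frequencies follow directly from
Eq.~(\ref{MSEQ}). In $G=c=1$ units,
$(M_{\rm NS}/R_{\rm NS}^3)^{1/2} = {\cal C}^{3/2}/M_{\rm NS}$ and
$m_0 = M_{\rm NS}(1+Q)$, so $f_{\rm MS} m_0$ depends only on
${\cal C}$ and $Q$; the following sketch reproduces the three values.

```python
import numpy as np

# f_MS m0 = (C/pi) calC^{3/2} (1+Q) sqrt(1 + 1/Q), from Eq. (MSEQ)
# with C = 0.270 and calC = M_NS/R_NS = 0.145 (models M20.145--M40.145).
C, calC = 0.270, 0.145
fms = {Q: (C / np.pi) * calC ** 1.5 * (1.0 + Q) * np.sqrt(1.0 + 1.0 / Q)
       for Q in (2.0, 3.0, 4.0)}
# fms -> {2.0: ~0.0174, 3.0: ~0.0219, 4.0: ~0.0265}
```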
Figure \ref{FIG12} (a) and (e) show that no special feature is seen
at $f=f_{\rm MS}$ in the spectrum even for the case that the NS
is tidally disrupted before the binary reaches the ISCO, e.g., for
model M20.145 (a small bump seen for $f < f_{\rm MS}$ is due to
modulation of the wave amplitude and wave frequency in the numerical
data). Also, it is found that $f_{\rm cut}$ is much larger than
$f_{\rm MS}$, as already pointed out in Ref.~\cite{ST08}. This implies
that at $f=f_{\rm MS}$, the mass-shedding sets in for the NS, but the
NS is not tidally disrupted at such a low frequency. Even for the closer
orbits of $f > f_{\rm MS}$, the NS behaves as a self-gravitating star
although it transfers a small amount of mass to the companion BH, and
gravitational waves from this BH-NS binary are characterized basically
by the inspiral waveforms. However, because the tidal effect becomes
more important for the smaller orbital separation, the tidal
deformation of the NS is enhanced more with time, and eventually, the
tidal disruption occurs. At the tidal disruption, the gravitational
wave amplitude should quickly decrease, so it is natural to identify
the frequency of the last inspiral wave as $f_{\rm cut}$ in this case.
Here, we should emphasize that {\em the compactness (or radius) of the
NSs is reflected in $f_{\rm cut}$ not in $f_{\rm MS}$.} In the
existing idea, one naively assumes that the tidal disruption occurs at
$f=f_{\rm MS}$ and discusses that the radius of the NS (i.e., the EOS
of the NS) may be constrained by identifying the values of $f_{\rm
MS}$ (e.g. Ref.~\cite{valli}). However, our present result indicates
that this possibility is unlikely. To constrain the EOS of the NS from
gravitational waves detected, we have to theoretically determine
$f_{\rm cut}$ for a variety of the EOSs and masses of the BH and NS.
For models M20.160 and M30.145 [see Fig.~\ref{FIG12} (a) and (b)], the
spectrum shape is similar to that for model M20.145, but the cut-off
frequency is higher, $f_{\rm cut} \sim 0.06/m_0$ (see below). This
cut-off frequency is still lower than the frequency of the fundamental
quasinormal mode ($f_{\rm QNM}$; see the arrow of Fig.~\ref{FIG12} (a)
and (b)). This indicates that the tidal disruption occurs at an
orbit corresponding to $f=f_{\rm cut}$. This cut-off frequency, which
is higher than that for model M20.145, reflects the fact that the
tidal disruption occurs at a closer orbit than that for model M20.145
due to the larger compactness or mass ratio of the system. The cut-off
frequency appears to be much higher than the frequency of
gravitational waves emitted at the ISCO, which is $f_{\rm lso} \sim
0.03/m_0$. This implies that gravitational waves of $f_{\rm lso} \alt
f \alt f_{\rm cut}$ are not emitted by the inspiral motion of the
binary but by a motion in the merger phase. (This is the reason why we
do not classify the waveforms for models M20.160 and M30.145 into type I
but into type II.) In this case, the spectrum should be fitted not by
Eq. (\ref{fit1}) but by a different function (see below).
Another interesting feature found in the spectrum of model M30.145 is
that no peak associated with the quasinormal mode appears at $f=f_{\rm
QNM}$ (see arrow in Fig.~\ref{FIG12} (b)). The reason is simply that
the amplitude of the ringdown waveform is much smaller than that of
the merger waveform, as pointed out in Sec.~\ref{sec:gw}.
For models M40.145, M50.145, and M30.178 (see Fig.~\ref{FIG12} (c),
(d), and (f)), the gravitational waveforms are type III. In these
cases, the spectrum amplitude also steeply decreases above a cut-off
frequency, but the feature of the spectrum shape is qualitatively
different from those of models M20.145, M30.145, and M20.160, because
the gradual decrease of the spectrum amplitude continues approximately up
to the frequency of the quasinormal mode (see the arrows in these
figures). Namely, $f_{\rm cut}$ is approximately equal to $f_{\rm
QNM}$, and thus, the cut-off frequency does not indicate that
the tidal disruption occurs at an orbit of $f=f_{\rm cut}$. In other
words, the signal of the tidal disruption is absent in these cases.
Another difference is seen in the spectrum shape for a frequency
slightly smaller than $f_{\rm cut}$. For models M20.145, M30.145, and
M20.160, a spectral index, $n \equiv -d\ln(fh(f))/d\ln(f)$, for $f
\alt f_{\rm cut}$ monotonically increases with $f$. For models
M40.145, M50.145, and M30.178, by contrast, the value of $n$ slightly
decreases with $f$, and a ``plateau'' appears. This spectrum shape is
similar to that for the merger of unequal-mass BH-BH binaries
\cite{BBBB}. This is reasonable because in this case, tidal effects
do not play an important role during the inspiral and merger phases,
and the merger process should be qualitatively the same as that in
BH-BH binaries.
In Fig.~\ref{FIG12} (f), we compare the spectrum shape for models
M30.145, M30.160, and M30.178, for which the masses of the BH and NS
are identical whereas the compactness of the NS is different. This
figure illustrates that for the larger compactness of the NS, the cut-off
frequency is higher. Also, the width of the plateau is larger for the
more compact models (M30.160 and M30.178). As discussed above, these
differences reflect the difference in the evolution process of the
late inspiral and merger phases. Because the spectrum shape near
$f=f_{\rm cut}$ is significantly different among three models,
observing such high-frequency gravitational waves will play a special
role in constraining the compactness (i.e., radius) of the NS.
To systematically identify the cut-off frequency, the fitting of the
spectrum with a specific function is useful, as illustrated for model
M20.145. The spectra associated with the inspiral phase for models
M30.145--M50.145, M20.160, M20.178, M30.160, and M30.178 are also fitted
by the 3PN amplitude, $h_{\rm 3PN}(f)$, for $f \alt f_{\rm lso} \sim
0.03/m_0$. However, this should not be the case for $f_{\rm lso} \alt
f \alt f_{\rm cut}$ because gravitational waves for this
high-frequency component are not emitted by the inspiral motion, but
by a motion of the material associated with the merger and infalling
into the BH. Hence, the spectrum should not be fitted by
Eq. (\ref{fit1}) but by
\begin{eqnarray}
h(f) =h_{\rm 3PN}(f) e^{-(f/f_{\rm ins})^{\sigma}}
+h_{\rm merger}(f),
\label{fit2}
\end{eqnarray}
where $h_{\rm merger}(f)$ denotes the spectrum of gravitational
waves emitted in the merger phase, and $f_{\rm ins}$ is a
frequency of 0.01--$0.03/m_0$. In this paper, we fix $\sigma=3.5$
to reduce the total number of free parameters, and
choose the following function for $h_{\rm merger}(f)$,
\begin{eqnarray}
h_{\rm merger}(f)={A m_0 \over D f} e^{-(f/f_{\rm cut})^{\sigma_{\rm cut}}}
[1-e^{-(f/f_{\rm ins2})^5}],
\end{eqnarray}
where $A$ and $\sigma_{\rm cut}$ are free parameters. We add a free
parameter $f_{\rm ins2}$, whose value is close to $f_{\rm ins}$,
because it improves the quality of the fit.
The dotted curves in Fig.~\ref{FIG12}~(b)--(d) denote the results for
$(f_{\rm ins}m_0,f_{\rm ins2}m_0, A, f_{\rm cut}m_0, \sigma_{\rm
cut})=$(0.014, 0.014, 0.130, 0.063, 2.90) for model M30.145, (0.016,
0.016, 0.103, 0.079, 4.60) for model M40.145, and (0.019, 0.020,
0.090, 0.087, 3.70) for model M50.145. As Fig.~\ref{FIG12} shows, the
spectrum is well fitted by this simple fitting function.
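The merger part of the fit can be evaluated with the quoted best-fit
parameters. The sketch below implements only $h_{\rm merger}(f)$ in
units $D=m_0=1$ for model M30.145; the inspiral term of
Eq.~(\ref{fit2}) again requires the 3PN amplitude and is omitted.

```python
import numpy as np

def h_merger(f, A, fcut, sigma_cut, fins2):
    # Merger part of the fitting model, in units D = m0 = 1
    return ((A / f) * np.exp(-(f / fcut) ** sigma_cut)
            * (1.0 - np.exp(-(f / fins2) ** 5)))

# Quoted best-fit parameters for model M30.145 (sigma = 3.5 is fixed)
fins, fins2, A, fcut, sigma_cut = 0.014, 0.014, 0.130, 0.063, 2.90
hm = h_merger(0.05, A, fcut, sigma_cut, fins2)   # value at f m0 = 0.05
```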
The fitting for models M50.145 and M30.178 is not as good as that
for the other models. In these cases, the slope of the plateau
remains rather flat in the high-frequency region. This feature is
universal for the gravitational waveforms emitted in the merger of
BH-BH binaries. Thus, in such a case, another fitting function
proposed, e.g. in Ref.~\cite{BBBB}, may be better. The fitting
procedure here is, however, still robust in extracting important
physical information, as discussed in the following.
\subsection{Cut-off frequency and determining the compactness of the NS}
In the fitting procedure described in the previous subsection, the
most important output is the cut-off frequency, $f_{\rm cut}$, in
which information about the compactness (or radius) of the NS may be
reflected. For the case that the NS is swallowed by the BH with no
tidal disruption (e.g., $Q \agt 4$), $f_{\rm cut}$ approximately
indicates the frequency of the quasinormal mode of the formed BH, and
thus, it will
not be possible to extract the compactness of the NSs from the
gravitational-wave signal. By contrast, for the case that the NS is
tidally disrupted before it is swallowed by the BH (i.e., for the
small values of $Q$ and ${\cal C}$), $f_{\rm cut}$ is much smaller
than $f_{\rm QNM}$ and indicates a characteristic frequency of
gravitational waves emitted at the tidal disruption (see
Fig. \ref{FIG13}). We find that for models M20.145, M20.160, M20.178,
M30.145, and M30.160, $f_{\rm cut} m_0 \approx 0.038$, 0.056, 0.070,
0.063, and 0.083. For $Q=2$, the value of $f_{\rm cut}$ is smaller
than the frequency of the quasinormal mode, $f_{\rm QNM} \sim
0.09/m_0$, for a wide range of ${\cal C}$. This indicates that we will
be able to constrain the compactness of the NS if we detect
gravitational waves in the merger phase for a binary composed of a
low-mass BH and an NS with $Q \alt 2$. For model M30.160, $f_{\rm
cut}$ is approximately equal to $f_{\rm QNM}$, whereas $f_{\rm cut} <
f_{\rm QNM}$ for model M30.145. Thus, if the compactness of the NS is
fairly small, ${\cal C} < 0.16$, we will have a chance that the
detection of gravitational waves from a binary of $Q \approx 3$
constrains the compactness of the NS. By contrast for $Q \geq 4$,
gravitational waves are not likely to have robust information about
the compactness of the NSs.
\begin{figure}[t]
\begin{center}
\epsfxsize=3.3in
\leavevmode
\epsffile{fig13.ps}
\end{center}
\vspace{-7mm}
\caption{$f_{\rm cut}$ as a function of ${\cal C}$ for various values
of $Q$. The dashed and dotted lines denote the value of $f_{\rm QNM}$
for $Q=2$ and 3, respectively. For $Q=4$ and 5, $f_{\rm QNM}$ is
slightly smaller than that for $Q=3$, and $\approx 0.078/m_0$ and
$0.075/m_0$, respectively.
\label{FIG13}}
\end{figure}
Figure \ref{FIG12} (a)--(d) and (f) show that the cut-off frequency is
$f=f_{\rm cut} \sim 1.4$--2.2 kHz and the effective amplitude at
$f=f_{\rm cut}$ is universally $\sim 1 \times 10^{-22}$ for
hypothetical values $D=100$ Mpc and $M_{\rm NS}=1.35M_{\odot}$. Even
for the optimistic direction of the source with respect to the plane
of the detector's arm and of its binary orbital plane, the effective
amplitude is at most $\approx 2.5 \times 10^{-22}$. The designed
sensitivity of the advanced LIGO is $\leq 3 \times 10^{-22}$ at $f
\geq 1$ kHz \cite{KIP}. This implies that it will not be possible to
detect gravitational waves during the merger and ringdown phases with
laser-interferometric detectors of standard design for $D \agt 100$
Mpc. To detect gravitational waves at such a high frequency, detectors
equipped with special instruments (e.g., resonant-sideband
extraction), which have a high sensitivity to high-frequency
gravitational waves, are necessary. As mentioned above, gravitational
waves for $f \alt f_{\rm cut}$ carry important information of the NS
radius. It is strongly desirable that a detector sensitive in the
frequency band 1 kHz $\alt f \alt 3$ kHz be developed in the
future.
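For reference, converting the dimensionless $f_{\rm cut} m_0$ to a
physical frequency only requires $m_0$ in seconds; e.g., for model
M20.145 ($Q=2$, $f_{\rm cut} m_0 \approx 0.038$,
$M_{\rm NS}=1.35M_\odot$) one recovers $\approx 1.9$ kHz, within the
quoted range. The solar-mass constant below is an assumed standard
value.

```python
# f_cut [Hz] = (f_cut m0) / (m0 [s]), with m0 = M_NS (1 + Q)
G_MSUN_SEC = 4.925491e-6               # G Msun / c^3 [s] (assumed constant)
m0_sec = 1.35 * (1.0 + 2.0) * G_MSUN_SEC
f_cut_hz = 0.038 / m0_sec              # ~1.9 kHz for model M20.145
```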
\subsection{Kick velocity}
\begin{figure}[thb]
\vspace{-4mm}
\begin{center}
\epsfxsize=3.2in
\leavevmode
\epsffile{fig14.ps}
\end{center}
\vspace{-5mm}
\caption{Kick velocity as a function of $\nu \equiv Q/(1+Q)^2$ for
different compactness of the NSs. The dot-dotted curve denotes the
fitting formula derived in Ref.~\cite{kick2} for the merger of
nonspinning BH-BH binaries.
\label{FIG14}}
\end{figure}
The kick velocity induced by anisotropic gravitational radiation is
estimated from the total linear momentum carried by gravitational
waves. Figure \ref{FIG14} plots the kick velocity as a function of the
ratio of reduced mass to total mass for different compactness of
the NSs (see also Table IV).
We find that the kick velocity depends strongly on the mass ratio and
compactness of the NS. More specifically, it depends strongly on whether
the NS is tidally disrupted or not. If the tidal disruption does not
occur, the NS simply falls into the companion BH in a basic picture
(although tidal deformation and strong elongation of the NS still
occur just before it falls into the BH). In such a case, the merger
process is similar to that in the merger of nonspinning BH-BH
binaries, and the kick velocity should agree approximately with the
results for those systems. To confirm this fact, we compare
our numerical results with the fitting formula derived in
Ref.~\cite{kick2} (see dot-dotted curve in Fig.~\ref{FIG14}). Figure
\ref{FIG14} shows that our results for models M40.145, M50.145,
M30.160, and M30.178 are in a reasonable agreement with the fitting
formula of Ref.~\cite{kick2}; the kick velocity is $\sim 100$--200
km/s. This indicates that our numerical results are reliable.
In the case that the NS is tidally disrupted, the kick velocity is
significantly suppressed. The primary reason is that the kick is
excited most efficiently when two stars merge; more specifically, when
the amplitude of gravitational waves becomes maximum. The maximum
amplitude of inspiral gravitational waves is higher for the case
that two stars have more compact orbits (i.e., the NS is more
compact). Also, the amplitude of gravitational waves in the merger
phase becomes higher when an NS falls into a companion BH without
tidal deformation. For the case that the tidal disruption happens,
close inspiral orbits are prohibited, and as a result, the enhancement
of the gravitational-wave amplitude is suppressed as we showed in Sec.
\ref{sec:gw}. Due to this fact, the kick velocity is smaller for the
smaller values of compactness and mass ratio.
\section{Summary}
New general relativistic simulations for the inspiral, merger, and
ringdown of BH-NS binaries are performed with a new AMR code {\tt
SACRA}. This paper presents, for the first time, the numerical results
of long-term simulations for a variety of mass ratios and
compactnesses of the NS. For the simulation, we adopt initial conditions for
which the unphysical eccentricity is much smaller than that in our
previous study \cite{SU06,ST08}, and also the initial orbital
separation is much larger. In the present results, the binaries spend
4--7 orbits in the inspiral phase, and in the last several orbits,
the eccentricity is sufficiently suppressed to be $\sim 0.01$ and the
orbit becomes approximately quasicircular. This is reflected in
successful computation of gravitational waveforms for the inspiral
phase which agree well with those predicted by the PN theory. Because
the moving-puncture approach is adopted, the merger and subsequent
evolution of the BH with the surrounding material are simulated for a
long time until the system relaxes to a quasisteady state. As a
result, accurate gravitational waveforms for the late inspiral,
merger, and ringdown phases are derived successfully.
By the present self-consistent simulation, we reconfirm the finding in
our previous paper \cite{ST08} that the merger process depends
sensitively on the mass ratio, $Q$, and compactness of the NS, ${\cal
C}$. Only for the case that both $Q$ and ${\cal C}$ are sufficiently
small, the NS is tidally disrupted by the companion BH. For example,
for the chosen EOS in this paper (polytropic EOS with $\Gamma=2$), the
NS of ${\cal C}=0.145$ is tidally disrupted only for $Q \alt 3$ before
the binary reaches the ISCO. By analyzing the spectrum of
gravitational waves, we confirm that even in the case that the NS
satisfies the condition for the onset of the mass shedding [see
Eq. (\ref{MSEQ})], the tidal disruption does not occur immediately.
Rather, the NS behaves as a self-gravitating star for a while during
the mass-shedding phase. Therefore, the tidal disruption occurs for
the more restricted case than the mass shedding does. This fact also
implies that for determining the condition of tidal disruption,
numerical-relativity simulation is the unique approach.
The parameter space for the formation of a long-lived disk surrounding
the BH is even more restricted. For ${\cal C}=0.145$, the disk of
lifetime $\gg 10$ ms is formed only for $Q \alt 2$, and for ${\cal
C}=0.160$, the disk mass is at most $10^{-3}M_*$ even for $Q=2$. The
mass of the formed disk is $\alt 0.01$--$0.02M_*$ for $Q=2$--1.5 even for
relatively less compact NSs with ${\cal C}=0.145$, and thus, the disk
is not very massive, even if it is formed. This conclusion is
consistent with the results of Ref.~\cite{CORNELL}.
At present, the precise EOS of high-density nuclear matter is
unknown \cite{LP}. However, many EOSs predict that the radius of NSs
of canonical mass 1.25--$1.45M_{\odot}$ is smaller than $\sim 12$ km
(e.g., Ref.~\cite{EOS}). Namely, the compactness of the NS is larger than
0.155. If the EOS is really stiff and the NS is compact as the nuclear
theory predicts, the disk will not be formed after the merger between
nonspinning BH and NS, as pointed out by Miller \cite{CMiller}.
(However, this will not be the case if the BH has the spin of
substantial magnitude.)
We find that gravitational waveforms from BH-NS binaries are roughly
classified into the following three types. (i) When the NS is tidally
disrupted during the inspiral phase, the gravitational waveform is
characterized by the inspiral waveform and subsequent sudden
shut-down. In this case, the amplitude of the Fourier spectrum
monotonically decreases with the increase of the frequency, and at a
cut-off frequency $f_{\rm cut}$, which is the frequency of
gravitational waves when the NS is tidally disrupted (not equal to
$f_{\rm MS}$), the spectrum amplitude decreases exponentially [see
Eqs.~(\ref{fit1}) and (\ref{fit2})]. We refer to this waveform as type
I in this paper. (ii) In the case that the NS is tidally disrupted
but most of the material falls into the companion BH before the tidal
disruption is completed, the gravitational waveform is characterized
by the inspiral and merger waveforms. In this case, ringdown
gravitational waves are excited in the final phase, but its amplitude
is much smaller than those of late inspiral and merger
gravitational waves. As a result, the shape of the Fourier spectrum
is similar to that of type I, but the cut-off frequency corresponds
neither to the frequency of the last inspiral orbit nor to the
frequency of the quasinormal mode. We refer to this waveform as type
II. (iii) When the NS is not tidally disrupted before it is swallowed
by the BH, the gravitational waveform is characterized by the
inspiral, merger, and ringdown waveforms. In this case, the amplitude
of the Fourier spectrum monotonically decreases with the increase of
the frequency in the inspiral phase as in the cases (i) and
(ii). However, in the late inspiral and merger phases, the
gravitational-wave amplitude increases and, as a result, a plateau
appears in the Fourier spectrum of $f \alt f_{\rm cut}$. Then the
spectrum amplitude exponentially decreases for $f > f_{\rm cut}$ [see
Eq. (\ref{fit2})]. In this case, $f_{\rm cut}$ is approximately equal
to the frequency of the fundamental quasinormal mode of the formed BH,
$f_{\rm QNM}$, and does not have any information about tidal
disruption (and thus compactness of the NS). We refer to this waveform
as type III.
We fit the spectrum of gravitational waves with hypothetical functions
of a small number of parameters. We find that the fitting works well
irrespective of the values of $Q$ and ${\cal C}$. By this fitting, we
systematically determine the value of the cut-off frequency, $f_{\rm
cut}$, and confirm the following fact: for the case that the NS is not
tidally disrupted, the value of $f_{\rm cut}$ is approximately equal
to the value of $f_{\rm QNM}$, as mentioned above. By contrast, for
the case that the tidal disruption occurs, the value of $f_{\rm cut}$
is smaller than $f_{\rm QNM}$, and it depends strongly on $Q$ and
${\cal C}$. The tidal disruption occurs for a wide range of ${\cal C}$
if $Q$ is smaller than $\sim 2$. Thus, if gravitational waves from
binaries composed of a low-mass BH and NS at tidal disruption are
observed, it may be possible to constrain the compactness of the NS,
and as a result, the EOS of the NS.
We also reconfirm that the frequency at the onset of the mass shedding of
the NSs is not reflected in the spectrum in an outstanding manner.
Namely, the compactness (or radius) of the NSs is reflected in $f_{\rm
cut}$ not in $f_{\rm MS}$. In the existing idea, one naively assumes
that tidal disruption occurs at $f=f_{\rm MS}$ and discusses that the
radius of the NS (i.e., the EOS of the NS) may be constrained by
identifying the values of $f_{\rm MS}$
(e.g. Ref.~\cite{valli}). However, our present result indicates that
this possibility is unlikely. To constrain the EOS of the NS from
gravitational waves detected, we have to determine $f_{\rm cut}$ for a
variety of the EOSs and masses of the BH and NS, which can be done
only by numerical-relativity simulations.
The kick velocity induced by asymmetric gravitational-wave emission is
also computed. When tidal disruption does not occur (e.g., for $Q=5$),
our result agrees approximately with that derived for the merger of
nonspinning BH-BH binaries; the kick velocity is $\sim 100$--200 km/s.
For the case that the tidal disruption occurs, the kick velocity is
significantly suppressed, and it is much smaller than 100 km/s. This
is due to the fact that gravitational wave amplitude in the last
inspiral and merger phases is suppressed.
As shown in this paper, gravitational waveforms in the merger and
ringdown phases depend strongly on the compactness of the NS and mass
ratio of the binary. This results primarily from the fact that the
degree of the tidal deformation and condition for the onset of the mass
shedding and tidal disruption depend strongly on these two
parameters. Gravitational waveforms should also depend on the EOS of
the NS and on the spin of the BH, as illustrated, e.g. in
Ref.~\cite{ISM}. Thus, in the subsequent work, we will systematically
perform the simulation for a variety of the EOSs of the NS and BH spin,
to clarify the dependence of the gravitational waveform and its
spectrum on these additional parameters.
{\em Note added in proof}: After this paper was completed, we noticed
that the Illinois group posted a preprint presenting their latest
results on the inspiral and merger of BH-NS binaries
\cite{XXX}. Gravitational waveforms they presented are qualitatively
in good agreement with ours. However, in their results, the mass of
disk surrounding a BH, which is formed after tidal disruption of the
NS, is much larger than our results. For example, for $Q=3$ and ${\cal
C}=0.145$ (with no spin for the BH), the disk is not formed in our
results irrespective of the grid resolution, but in their result, the
disk mass is of order $0.01M_*$. Currently, we do not understand the
reason for this discrepancy.
\begin{acknowledgments}
All the initial conditions for the present numerical simulation are
computed using the free library LORENE \cite{LORENE}. We thank
members in the Meudon Relativity Group, in particular, Eric
Gourgoulhon, for developing LORENE. We also thank Alessandra Buonanno
for explaining methods for the analysis of numerical gravitational
waveforms. The numerical computations were in part performed on
NEC-SX8 at Yukawa Institute of Theoretical Physics of Kyoto
University. This work was supported by Monbukagakusho Grant
No. 19540263 and 20105004.
\end{acknowledgments}
\section{Interference in compensation}
\label{sec1}
Additive manufacturing, or 3D printing, refers to a class of technology
for the direct fabrication of physical products from 3D Computer-Aided
Design (CAD) models. In contrast to material removal processes in
traditional machining, the printing process adds material layer by
layer. This enables direct printing of geometrically complex products
without affecting building efficiency. No extra effort is necessary for
molding construction or fixture tooling design, making 3D printing a
promising manufacturing technique [\citet{hilton2000rapid,gibson2009additive,melchels2010review,campbell2011could}].
Despite
these promising features, accurate control of a product's printed
dimensions remains a major bottleneck. Material solidification during
layer formation leads to product deformation, or shrinkage
[\citet{wang1996influence}], which reduces the utility of printed
products. Shrinkage control is crucial to overcome the accuracy barrier
in 3D printing.
To control detailed features along the boundary of a printed product,
\citet{tong2003parametric} and \citet{tong2008error} used polynomial
regression models to first analyze shrinkage in different directions
separately, and then compensate for product deformation by changing the
original CAD accordingly. Unfortunately, their predictions are
independent of the product's geometry, which is not consistent with the
physical manufacturing process. \citet{huangzhangsabbaghidasgupta}
built on this work, establishing a generic, physically consistent
approach to model and predict product deformations, and to derive
compensation plans. The essence of this new modeling approach is to
transform in-plane geometric errors from the Cartesian coordinate
system into a functional profile defined on the polar coordinate
system. This representation decouples the geometric shape complexity
from the deformation modeling, and a generic formulation of shape
deformation can thus be achieved. The approach was developed for a
stereolithography process, and in experiments achieved an improvement
of one order of magnitude in reduction of deformation for cylinder
products.
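The coordinate transformation at the heart of this approach can be
sketched as follows; the cosine-type shrinkage below is hypothetical
test data, not a fitted deformation model from the cited work.

```python
import numpy as np

# Hypothetical cylinder boundary: nominal radius r0, observed boundary
# with a uniform shrinkage plus a harmonic perturbation (test data only).
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r0 = 1.0
r_obs = r0 * (1.0 - 0.005 + 0.002 * np.cos(2.0 * theta))
x, y = r_obs * np.cos(theta), r_obs * np.sin(theta)

# Transform the Cartesian measurements into the functional deformation
# profile delta r(theta) defined on the polar coordinate system.
r_hat = np.hypot(x, y)
th_hat = np.mod(np.arctan2(y, x), 2.0 * np.pi)
delta_r = r_hat - r0          # deformation profile; mean ~ -0.005
```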
\begin{figure}
\includegraphics{762f01.eps}
\caption{A discretized compensation plan (dashed line) to the nominal
boundary (solid line). Note that compensation could be negative.}
\label{compensationexample}
\end{figure}
However, an important issue not yet addressed in the previously cited
work on deformation control for 3D printing is how the application of
compensation to one section of a product will affect the deformation of
its neighbors. Compensation plans are always discretized according to
the tolerance of the 3D printer, in the sense that sections of the CAD
are altered by single amounts, for example, as in Figure~\ref{compensationexample}. Furthermore, when planning an experiment to
assess the effect of compensation on product deformation, it is natural
to discretize the quantitative ``compensation'' factor into a finite
number of levels, which also leads to a product having a more complex
boundary. Ultimately, such changes may introduce interference between
different sections of the printed product, which is defined to occur
when one section's deformation depends not only on its assigned
compensation, but also on compensations assigned to its neighbors
[\citet{Rubin1980}]. For example, in Figure~\ref{compensationexample},
the deformation for points near the boundary of two neighboring
sections should depend on compensations applied to both. By the same
logic, interference becomes a practical issue when printing products
with complex geometry. Therefore, to improve quality control in 3D
printing, it is important to formally investigate complications
introduced by the interference that results from discretization in
compensation plans. We take the first step with an experiment involving
a discretized compensation plan for a simple shape.
We begin in Section~\ref{sec2} with a review of interference, models
for product
deformation, and the effect of compensation. Adoption of the Rubin
Causal Model
[RCM, \citet{holland1986}] is a significant and novel feature of our
investigation,
and facilitates the study of interference. Section~\ref{secnocompensationfit}
summarizes the basic model and analysis for deformation of cylinders
given by
\citet{huangzhangsabbaghidasgupta}. Our analyses are in Sections~\ref{secexperimentaldesign}--\ref{secrefinedmodelinterference}: we first
describe an experiment hypothesized to generate interference, then
proceed with
posterior predictive checks to demonstrate the existence of
interference, and finally
conclude with a model that captures interference. A key statistical
idea in
Section~\ref{secassessinginterference} is that, in experiments with
distinct units of
analysis and units of interpretation [\citet{CoxDonnelly}, pages
18--19], the
posterior distribution of model parameters, based on ``benchmark''
data, yields a
simple assessment and inference for interference in the experiment,
similar to that
suggested by \citet{sobel} and \citet{rosenbaum}. Analyses in Sections~\ref{secsimplemodelinterference}--\ref{secrefinedmodelinterference}
demonstrate how discretized compensation plans complicate quality
control through the
introduction of interference. This illustrates the fact that in complex
manufacturing
processes, a proper definition of experimental units and understanding
of interference
are critical to quality control.
\section{Potential outcomes and interference}
\label{sec2}
\subsection{Experimental units and potential outcomes}
\label{secunitsoutcomes}
We use the general framework for product deformation given by
\citeauthor{huangzhangsabbaghidasgupta} [(\citeyear{huangzhangsabbaghidasgupta}), pages 3--6].
Suppose a product
has intended shape $\psi_0$ and observed shape $\psi$ under a 3D
printing process. Deformation is informally described as the difference
between $\psi$ and $\psi_0$, where we can represent both either in the
Cartesian coordinate system $(x,y,z)$ or cylindrical coordinate system
$(r,\theta,z)$. Cylindrical coordinates facilitate deformation modeling
and are used throughout.
For illustrative purposes, we define terms for two-dimensional products
(notation for three dimensions follows immediately). Quality control
requires an understanding of deformation in different regions of the
product that receive different amounts of compensation. We therefore
define a finite number $N$ of points on the boundary of the product,
corresponding to specific angles $\theta_1, \ldots, \theta_N$, as the
experimental units. The desired boundary from the CAD model is defined
by the function $r_0(\theta)$, denoting the nominal radius at angle
$\theta$. We consider only one (quantitative) treatment factor,
compensation to the CAD, defined as a change in the nominal radius of
the CAD by $x_i$ units at $\theta_i$ for $i = 1, \ldots, N$.
Compensation is not restricted to be nonnegative. The potential radius
for $\theta_i$ under compensation $\mathbf{x} = (x_1, \ldots, x_N)$ to
$\theta_1, \ldots, \theta_N$ is a function of $\theta_i$, $r_0(\cdot)$,
and $\mathbf{x}$, denoted by $r(\theta_i, r_0(\cdot), \mathbf{x})$. The
difference between the potential and nominal radius at $\theta_i$
defines deformation, and so
\begin{equation}
\label{eqpotentialoutcomes}
\Delta r\bigl(\theta_i, r_0(\cdot),
\mathbf{x}\bigr) = r\bigl(\theta_i, r_0(\cdot),
\mathbf{x}\bigr) - r_0(\theta_i)
\end{equation}
is defined as our potential outcome for $\theta_i$. Potential outcomes
are viewed as fixed numbers, with randomness introduced in Section~\ref{secmodeling} in our general model for the potential outcomes.
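Concretely, (\ref{eqpotentialoutcomes}) amounts to subtracting the nominal radius from the observed radius at each measured angle. A minimal numerical sketch, using made-up measurements rather than data from this study:

```python
import numpy as np

def deformation(theta, r_observed, r0_fn):
    """Potential outcome (deformation): observed minus nominal radius."""
    return r_observed - r0_fn(theta)

# Made-up measurements: a nominal unit circle that shrank uniformly by 1%.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
r_obs = 0.99 * np.ones_like(theta)
delta_r = deformation(theta, r_obs, lambda t: np.ones_like(t))
```

Here the deformation is $-0.01$ at every angle, matching the uniform shrinkage built into the synthetic radii.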
This definition of the potential outcome is convenient for visualizing
shrinkage. For example, suppose the desired shape of the product is the
solid line, and the manufactured product when $\mathbf{x} = \mathbf{0}
= (0, \ldots, 0)$ is the dashed line, in Figure~\ref{deformationcurve}(a). Plotting the deformation at each angle yields a
visualization amenable to analysis [Figure~\ref{deformationcurve}(b)].
Orientation is fixed: we match the coordinate axes of the printed
product with those of the CAD model.
\begin{figure}
\includegraphics{762f02.eps}
\caption{(\textup{a}) Ideal shape (solid line) versus the actual shape
(dashed line). \textup{(b)} Visualization of shrinkage.}
\label{deformationcurve}
\end{figure}
\subsection{Interference}
\label{secinterference}
A unit $\theta_i$ is said to be affected by interference if
\[
\Delta r\bigl(\theta_i, r_0(\cdot), \mathbf{x}\bigr)
\neq\Delta r\bigl(\theta_i, r_0(\cdot),
\mathbf{x}'\bigr)
\]
for at least one pair of distinct treatment vectors $\mathbf{x}, \mathbf
{x}' \in\mathbb{R}^N$ with $x_i = x'_i$
[\citet{Rubin1980}].
If there is no interference, then $\Delta r(\theta
_i, r_0(\cdot), \mathbf{x})$ is a function of $\mathbf{x}$ only via the
component $x_i$. As the experimental units reside on a connected
boundary, the deformation of one unit may depend on compensations
assigned to its neighbors when the compensation plan is discretized.
Perhaps less plausible, but equally serious, is the possible leakage of
assigned compensations across units. These considerations explain the
presence of the vector $\mathbf{x}$, containing compensations for all
units, in the potential outcome notation (\ref{eqpotentialoutcomes}).
Practically, accommodations made for interference should reduce bias in
compensation plans for complex products and improve quality control.
\subsection{General deformation model}
\label{secmodeling}
Following \citeauthor{huangzhangsabbaghidasgupta}
[(\citeyear{huangzhangsabbaghidasgupta}),
pages 6--8], our
potential outcome model under compensation plan $\mathbf{x} = \mathbf
{0}$ is decomposed into three components:
\begin{equation}
\label{eqdecomp1}
\Delta r\bigl(\theta_i, r_0(\cdot),
\mathbf{0}\bigr) = f_1\bigl(r_0(\cdot)\bigr) +
f_2\bigl(\theta_i, r_0(\cdot), \mathbf{0}
\bigr) + \varepsilon_{i}.
\end{equation}
Function $f_1(r_0(\cdot))$ represents average deformation of a given
nominal shape $r_0(\cdot)$ independent of location $\theta_i$, and
$f_2(\theta_i, r_0(\cdot), \mathbf{0})$ is the additional
location-dependent deformation, geometrically and physically related to
the CAD model. We can also interpret $f_1(\cdot)$ as a low-order
component and $f_2(\cdot, \cdot, \mathbf{0})$ as a high-order component
of deformation. The $\varepsilon_{i}$ are random variables representing
high-frequency components that add on to the main trend, with
expectation $\mathbb{E}(\varepsilon_{i}) = 0$ and $\operatorname{Var}(\varepsilon
_{i}) < \infty$ for all $i = 1, \ldots, N$.
Figure~\ref{deformationcurve} demonstrates model (\ref{eqdecomp1}).
In this example, $r_0(\theta) = r_0$, so $f_1(\cdot)$ is a function of
$r_0$, and $f_2(0, r_0, \mathbf{0}) = f_2(2\pi, r_0, \mathbf{0})$.
Decomposition of deformation into lower and higher order terms yields
\begin{equation}
\label{eqfourierseries} \Delta r(\theta_i, r_0, \mathbf{0}) =
c_{r_0} + \sum_k \bigl\{
a_{r_0, k} \cos(k\theta_i) + b_{r_0, k} \sin (k
\theta_i) \bigr\} + \varepsilon_{i},
\end{equation}
where $f_1(r_0) = c_{r_0}$, and $\{a_{r_0, k}, b_{r_0, k}\}$ are
coefficients of a Fourier series expansion of $f_2(\cdot, \cdot, \mathbf
{0})$. The $\{ a_{r_0, k}, b_{r_0, k} \}$ terms with large $k$
represent the product's surface roughness, which is not of primary interest.
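A sketch of how the low- and high-order components in (\ref{eqfourierseries}) might be estimated by least squares, on synthetic deformations; the surface-roughness terms with large $k$ are simply truncated:

```python
import numpy as np

def fit_fourier_deformation(theta, delta_r, K):
    """Least-squares fit of c + sum_k {a_k cos(k theta) + b_k sin(k theta)},
    truncated at K to exclude high-frequency surface roughness."""
    cols = [np.ones_like(theta)]
    for k in range(1, K + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, delta_r, rcond=None)
    return coef  # ordering: [c, a_1, b_1, ..., a_K, b_K]

# Synthetic deformations with the cylinder's dominant cos(2 theta) harmonic.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
delta_r = -0.01 + 0.005 * np.cos(2.0 * theta)
coef = fit_fourier_deformation(theta, delta_r, K=3)
```

On this noiseless example the fit recovers $c = -0.01$ and $a_2 = 0.005$ with all other coefficients near zero.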
\subsection{General compensation and interference models}
\label{seccompensation}
Under the polar coordinate system, a compensation of $x_i$ units at
$\theta_i$ can be thought of as an extension of the product's radius by
$x_i$ units in that direction. Bearing this in mind, we first
follow \citeauthor{huangzhangsabbaghidasgupta} [(\citeyear{huangzhangsabbaghidasgupta}), page 8] to extend (\ref{eqdecomp1})
to accommodate compensations, and then build upon this to
give an extension that can help capture interference resulting from
discretized compensation plans.
Let $r(\theta_i, r_0(\cdot), (x_i, \ldots, x_i)) = r(\theta_i, r_0(\cdot
), x_i\mathbf{1})$ denote the potential radius for $\theta_i$ under
compensation of $x_i$ units to all points. Compensation $x_i\mathbf{1}$
is equivalent, in terms of the final manufactured product, to
initially submitting a CAD model with nominal radius $r_0(\cdot) + x_i$ and compensation $\mathbf{0}$ to the 3D printer. Then
\begin{eqnarray}
\label{eqintstep1}
r\bigl(\theta_i, r_0(\cdot),
x_i\mathbf{1}\bigr) - \bigl\{ r_0(\theta_i)
+ x_i \bigr\} &=& r\bigl(\theta_i, r_0(
\cdot) + x_i, \mathbf{0}\bigr) - \bigl\{ r_0(\theta
_i) + x_i \bigr\}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \Delta r\bigl(\theta_i, r_0(\cdot) +
x_i, \mathbf{0}\bigr),
\end{eqnarray}
where $\Delta r(\theta_i, r_0(\cdot) + x_i, \mathbf{0})$ follows the
same form as (\ref{eqdecomp1}), abbreviated as
\begin{equation}
\label{eqintstep2} \Delta r\bigl(\theta_i, r_0(\cdot) +
x_i, \mathbf{0}\bigr) = \mathbb{E} \bigl\{ \Delta r\bigl(
\theta_i, r_0(\cdot) + x_i, \mathbf{0}
\bigr) \bigr\} + \varepsilon_i.
\end{equation}
Consequently, the potential outcome for $\theta_i$ is
\begin{eqnarray}
\label{eqmodelcomp2}
\Delta r\bigl(\theta_i, r_0(\cdot),
x_i\mathbf{1}\bigr) &=& r\bigl(\theta_i,
r_0(\cdot ), x_i\mathbf{1}\bigr) - r_0(
\theta_i)
\nonumber
\\
&=& r\bigl(\theta_i, r_0(\cdot), x_i
\mathbf{1}\bigr) - \bigl\{ r_0(\theta_i) +
x_i \bigr\} + x_i
\nonumber
\\[-8pt]
\\[-8pt]
&=& \Delta r\bigl(\theta_i, r_0(\cdot) +
x_i, \mathbf{0}\bigr) + x_i
\nonumber
\\
&=& \mathbb{E} \bigl\{ \Delta r\bigl(\theta_i, r_0(
\cdot) + x_i, \mathbf{0}\bigr) \bigr\} + x_i +
\varepsilon_i.\nonumber
\end{eqnarray}
The last two steps follow from (\ref{eqintstep1}) and (\ref
{eqintstep2}), respectively. If $x_i$ is small relative to
$r_0(\theta_i)$, then (\ref{eqmodelcomp2}) can be approximated using
the first and second terms of the Taylor expansion of
$\mathbb{E} \{ \Delta r(\theta_i, r_0(\cdot) + x_i, \mathbf{0})
\}$ at $r_0(\theta_i)$:
\begin{eqnarray}
\label{eqcomptaylor}
\hspace*{2pt}\quad\Delta r\bigl(\theta_i, r_0(\cdot),
x_i\mathbf{1}\bigr) &\approx & \mathbb{E} \bigl\{ \Delta r\bigl(
\theta_i, r_0(\cdot), \mathbf{0}\bigr) \bigr\}
\nonumber
\\
&&{}+ (x_i - 0) \biggl[ \frac{d}{dx} \mathbb{E} \bigl\{ \Delta
r\bigl(\theta_i, r_0(\cdot) + x, \mathbf{0}\bigr) \bigr
\} \biggr]_{x = 0} + x_i + \varepsilon _{i}
\\
&=& \Delta r\bigl(\theta_i,r_0(\cdot),\mathbf{0}\bigr)
+ \bigl\{ 1 + h\bigl(\theta_i, r_0(\cdot), \mathbf{0}
\bigr)\bigr\} x_i,
\nonumber
\end{eqnarray}
where $h(\theta_i, r_0(\cdot), \mathbf{0}) = [ d/dx \mathbb{E}
\{ \Delta r(\theta_i, r_0(\cdot) + x, \mathbf{0}) \}
]_{x = 0}$. Under a specified parametric model for the potential
outcomes, this Taylor expansion is performed conditional on the model
parameters. When there is no interference,
\[
\Delta r\bigl(\theta_i, r_0(\cdot), \mathbf{x}\bigr) =
\Delta r\bigl(\theta_i, r_0(\cdot), x_i
\mathbf{1}\bigr)
\]
for any $\mathbf{x} \in\mathbb{R}^N$, and so (\ref{eqcomptaylor}) is
a model for compensation effects in this case.
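The sensitivity $h(\theta_i, r_0(\cdot), \mathbf{0})$ in (\ref{eqcomptaylor}) can be approximated numerically for any given mean-deformation model. A sketch using a made-up mean surface, not the fitted cylinder model:

```python
import numpy as np

def mean_deformation(theta, r0):
    """Hypothetical mean surface E{Delta r(theta, r0, 0)}, for illustration
    only (not the fitted model from the paper)."""
    return -0.01 * r0 + 0.005 * r0 * np.cos(2.0 * theta)

def h_sensitivity(theta, r0, eps=1e-6):
    """Central-difference estimate of
    h = [d/dx E{Delta r(theta, r0 + x, 0)}]_{x = 0}."""
    return (mean_deformation(theta, r0 + eps)
            - mean_deformation(theta, r0 - eps)) / (2.0 * eps)

def compensated_deformation(theta, r0, x):
    """First-order Taylor prediction: E{Delta r(theta, r0, 0)} + (1 + h) x."""
    return mean_deformation(theta, r0) + (1.0 + h_sensitivity(theta, r0)) * x

pred = compensated_deformation(np.pi / 4.0, r0=1.0, x=0.02)
```

At $\theta = \pi/4$ the cosine term vanishes, so $h \approx -0.01$ and the prediction is $-0.01 + 0.99 \times 0.02 = 0.0098$.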
We can generalize this model to incorporate interference in a simple
manner for a compensation plan $\mathbf{x}$ with different units
assigned different compensations. As all units are connected on the
boundary of the product, unit $\theta_i$'s treatment effect will change
due to interference from its neighbors, so that $\theta_i$ will deform
not just according to its assigned compensation $x_i$, but instead
according to a compensation $g_i(\mathbf{x})$. Thus, we generalize (\ref
{eqcomptaylor}) to
\begin{equation}
\label{eqcomptaylorinterference}
\Delta r\bigl(\theta_i, r_0(\cdot),
\mathbf{x}\bigr) \approx \Delta r\bigl(\theta_i ,r_0(
\cdot), \mathbf{0}\bigr) + \bigl\{ 1 + h\bigl(\theta_i,
r_0(\cdot), \mathbf{0}\bigr) \bigr\} g_i(\mathbf{x}),
\end{equation}
where the \textit{effective treatment} $g_i(\mathbf{x})$ is a function of
$x_i$ and assigned compensations for neighbors of $\theta_i$ (with the
definition of neighboring units naturally dependent on the specific
product), hence potentially a function of the entire vector $\mathbf
{x}$. Allowing the treatment effect for $\theta_i$ to depend on
treatments assigned to its neighboring units effectively incorporates
interference in a meaningful manner, as will be seen in the analysis of
our experiment.
\section{Experimental design and analysis for interference}
\label{sec3}
\subsection{Compensation model for cylinders}
\label{secnocompensationfit}
\citeauthor{huangzhangsabbaghidasgupta} [(\citeyear{huangzhangsabbaghidasgupta}), page 12] constructed four
cylinders with $r_0 = 0.5, 1, 2$, and $3$ inches, and used $N_{0.5} =
749, N_1 = 707, N_2 = 700$, and $N_3 = 721$ equally-spaced units from
each. Based on the logic in Section~\ref{secmodeling}, they fitted
\begin{equation}\label{eqnocompmodel}
\Delta r(\theta_i, r_0, \mathbf{0}) = x_0
+ \alpha(r_0+x_0)^a + \beta(r_0+x_0)^b
\cos(2 \theta_i) + \varepsilon_{i}
\end{equation}
to the data, with $\varepsilon_{i} \sim\mathrm{N}(0, \sigma^2)$
independently, and parameters $\alpha, \beta, a, b, x_0$, and $\sigma$
independent of $r_0$. Specifically, for the cylinder, the
location-independent term is thought to be proportional to $r_0$, so
that with overexposure of $x_0$ units it would be of the form $x_0 +
\alpha(r_0 + x_0)$. Furthermore, the location-dependent term is thought
to be a harmonic function of $\theta_i$, and also proportional to
$r_0$, of the form $\beta(r_0 + x_0)\cos(2\theta_i)$ with
overexposure. Independent errors are used throughout because the focus
is on a correct specification of the mean trend in deformation
(Appendix~\ref{seccorrelation} contains a discussion on this point).
\citet{huangzhangsabbaghidasgupta} specified
\[
a \sim\mathrm{N}\bigl(1, 2^2\bigr), \qquad b \sim\mathrm{N}
\bigl(1,1^2\bigr), \qquad \log (x_0) \sim\mathrm{N}
\bigl(0,1^2\bigr)
\]
and placed flat priors on $\alpha, \beta$, and $\log (\sigma)$,
with all parameters independent {a priori}. Posterior draws of
the parameters were obtained by Hamiltonian Monte Carlo [HMC,
\citet{duanehybrid1987}] and are summarized in Table~\ref{posteriorpredictivetable}, with convergence diagnostics discussed in
Appendix~\ref{secdiagnostics}. A simple comparison of the posterior
predictive distribution of product deformation to the observed data
[\citet{huangzhangsabbaghidasgupta}, page 19] demonstrates the good fit,
and so we proceed with this specification and parameter inferences to
design and analyze an experiment for interference.
\begin{table}
\caption{Summary of 1000 posterior draws of parameters after a
burn-in of $500$ when no compensation is applied. These results are reproduced from
Table~5 in \citet{huangzhangsabbaghidasgupta}. Effective sample size
is abbreviated as ESS throughout}\label{posteriorpredictivetable}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ld{2.9}d{1.9}d{2.9}d{3.15}c@{}}
\hline
& \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{SD}} & \multicolumn{1}{c}{\textbf{Median}} & \multicolumn{1}{c}{$\bolds{95\%}$
\textbf{credible} \textbf{interval}} & \multicolumn{1}{c@{}}{\textbf{ESS}} \\\hline
$\alpha$ & -1.34 \times 10^{-2} & 1.6 \times 10^{-4} & -1.34 \times 10^{-2} & (-1.37, -1.31) \times 10^{-2} & $8198$ \\
$\beta$ & 5.7 \times 10^{-3} & 3.1 \times 10^{-5} & 5.71 \times 10^{-3} & (5.65, 5.8) \times 10^{-3} & $9522$ \\
$a$ & 8.61 \times 10^{-1} & 7.33 \times 10^{-3} & 8.61 \times 10^{-1} & (8.47, 8.75) \times 10^{-1} & $8223$ \\
$b$ & 1.13 & 5.46 \times 10^{-3} & 1.13 & (1.12, 1.14) & $9424$ \\
$x_0$ & 8.79 \times 10^{-3} & 1.5 \times 10^{-4} & 8.79 \times 10^{-3} & (8.5, 9.07) \times 10^{-3} & $8211$ \\
$\sigma$ & 8.7 \times 10^{-4} & 1.18 \times 10^{-5} & 8.7 \times 10^{-4} & (8.5, 8.9) \times 10^{-4} & $9384$ \\\hline
\end{tabular*}
\end{table}
Substituting $\Delta r(\theta_i, r_0, \mathbf{0})$ from (\ref{eqnocompmodel}) into the general model (\ref{eqmodelcomp2}), we have
\begin{eqnarray}
\label{compcylinderorigin}
&& \Delta r(\theta_i, r_0,
x_i\mathbf{1})
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&& \qquad = x_0 + x_i +
\alpha(r_0 + x_0 + x_i)^a +
\beta(r_0 + x_0 + x_i)^b \cos(2
\theta_i) + \varepsilon_{i}.
\end{eqnarray}
The Taylor expansion at $r_0 + x_0$, as in (\ref{eqcomptaylor}),
yields the model
\begin{eqnarray}
\label{eqmodelcompcylinder} &&\Delta r(\theta_i, r_0,
x_i\mathbf{1}) \nonumber\\
&& \qquad = x_0 + \alpha(r_0 +
x_0)^a + \beta(r_0 + x_0)^b
\cos(2 \theta_i)
\\
&&\qquad\quad {}+ \bigl\{ 1 + a\alpha(r_0 + x_0)^{a-1}
+ b\beta(r_0 + x_0)^{b-1}\cos(2
\theta_i) \bigr\}x_i + \varepsilon_{i}.\nonumber
\end{eqnarray}
We incorporate interference for a plan $\mathbf{x}$ with different
units assigned different compensations by changing $x_i$ in the right
side of (\ref{eqmodelcompcylinder}) to $g_i(\mathbf{x})$, with the
functional form of $g_i(\mathbf{x})$ derived by exploratory means in
Section~\ref{secassessinginterference}.
\subsection{Experimental design for interference}
\label{secexperimentaldesign}
Under a discretized compensation plan, the boundary of a product is
divided into sections, with all points in one section assigned the same
compensation. In the terminology of
\citeauthor{CoxDonnelly} [(\citeyear{CoxDonnelly}), pages 18--19], these sections constitute units
of analysis, and individual angles are units of interpretation. We
expect interference for angles near neighboring sections. Interference
should be substantial for a large difference in neighboring
compensations, and negligible otherwise.
This reasoning led to the following restricted Latin square design to
study interference. We apply compensations to four cylinders of radius
$0.5, 1, 2$, and $3$ inches, with each cylinder divided into $16$
equal-sized sections of $\pi/8$ radians. One unit of compensation is
$0.004, 0.008, 0.016$, and $0.03$ inch for each respective cylinder,
and there are only four possible levels of compensation, $-1, 0, +1$,
and $+2$ units. Two blocking factors are considered. The first is the
quadrant and the second is the ``symmetry group,'' consisting of $\pi/8$-radian sections that are reflections of one another about the
coordinate axes. Symmetric sections form a meaningful block: if compensation
$x$ is applied to all units, then we have from (\ref{eqmodelcompcylinder}) that for $0 \leq\theta\leq\pi/2$,
\begin{eqnarray*}
\mathbb{E} \bigl\{ \Delta r(\theta, r_0, x\mathbf{1}) \vert \alpha,
\beta, a, b, x_0, \sigma \bigr\} &=& \mathbb{E} \bigl\{ \Delta r(\pi-
\theta, r_0, x\mathbf{1}) \vert \alpha, \beta, a, b,
x_0, \sigma \bigr\}
\\
&= & \mathbb{E} \bigl\{ \Delta r(\pi+ \theta, r_0, x\mathbf{1}) \vert
\alpha, \beta, a, b, x_0, \sigma \bigr\}
\\
&=& \mathbb{E} \bigl\{ \Delta r(2\pi- \theta, r_0, x\mathbf{1}) \vert \alpha, \beta, a, b, x_0, \sigma \bigr\},
\end{eqnarray*}
suggesting a need to control for this symmetry in the experiment. Thus,
for each product, we conceive of the $16$ sections as a $4 \times4$
table, with symmetry groups forming the column blocking factor and
quadrants the row blocking factor. Based on prior concerns about the
possible severity of interference and resulting scope of inference from
our model (\ref{eqcomptaylor}), the set of possible designs was
restricted to Latin squares (each compensation level occurs only once
in any quadrant and symmetry group), where the absolute difference in
assigned treatments between two neighboring sections does not exceed
two levels of compensation. Each product was randomly assigned one
design from this set, with no further restriction that all the products
have the same design.
Our restricted Latin square design forms a discretized compensation
plan that blocks on two factors suggested by the previous deformation
model, and remains model-robust to a certain extent. The chosen
experimental designs are in Figure~\ref{design}, and observed
deformations for the manufactured products are in Figure~\ref{experimentaldata}. There are $N_{0.5} = 6159, N_1 = 6022, N_2 =
6206$, and $N_3 = 6056$ equally spaced angles considered for the four cylinders.
\begin{figure}
\includegraphics{762f03.eps}
\caption{Experimental designs. Dashed lines represent assigned compensations.}
\label{design}
\end{figure}
\begin{figure}
\includegraphics{762f04.eps}
\caption{Observed deformations in the experiment. Dashed lines
represent sections, and numbers at the bottom of each represent
assigned compensations.}
\label{experimentaldata}
\end{figure}
\subsection{Assessing the structure of interference}
\label{secassessinginterference}
Our first task is to assess which units have negligible interference in
the experiment. To do so, we use the suggestions of \citet{sobel}
and \citet{rosenbaum}, who describe when interest exists in
comparing a treatment assignment $\mathbf{x}$ to a baseline.
We have in Section~\ref{secnocompensationfit} data on cylinders that
receive no compensation (denoted by $\mathbf{D}_{n}$) and a model (\ref
{eqnocompmodel}) that provides a good fit. Furthermore, we have a
hypothesized model (\ref{eqmodelcompcylinder}) for compensation
effects when interference is negligible, which is a function of
parameters in (\ref{eqnocompmodel}). If the manufacturing process is
in control, posterior inferences based on $\mathbf{D}_n$ then yield, by
(\ref{eqmodelcompcylinder}), predictions for the experiment. In the
absence of any other information, units in the experiment with observed
deformations deviating strongly from their predictions can be argued to
have substantial interference. After all, if $\theta_i$ has negligible
interference under assignment $\mathbf{x} = (x_1, \ldots, x_N)$, then
\[
\Delta r(\theta_i, r_0, \mathbf{x}) = \Delta r\bigl(
\theta_i, r_0, (x_i, \ldots,
x_i)\bigr) = \Delta r(\theta_i, r_0,
x_i\mathbf{1}).
\]
This suggests the following procedure to assess interference:
\begin{longlist}[(3)]
\item[(1)] Calculate the posterior distribution of the parameters
conditional on $\mathbf{D}_{n}$, denoted by
$\pi(\alpha, \beta, a, b, x_0, \sigma\vert \mathbf{D}_{n})$.
\item[(2)] For every angle in the four cylinders, form the posterior
predictive distribution of the potential outcome
corresponding to the observed treatment assignment (Figure~\ref{design}) using model
(\ref{eqmodelcompcylinder}) and $\pi(\alpha, \beta, a, b, x_0,
\sigma\vert \mathbf{D}_{n})$.
\item[(3)] Compare the posterior predictive distributions to the observed
deformations in the experiment.
\begin{itemize}
\item If a unit's observed outcome falls within the $99\%$ central
posterior predictive interval and follows
the posterior predictive mean trend, it is deemed to have negligible
interference.
\item Otherwise, we conclude that the unit has substantial interference.
\end{itemize}
\end{longlist}
This procedure is similar to the construction of control charts [\citet{boxetal}]. When an observed outcome lies outside the $99\%$ central
posterior predictive interval, we suspect existence of a special cause.
As the entire product is manufactured simultaneously, we believe that
the only reasonable assignable cause is interference.
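Step 3 of the procedure is essentially a per-unit interval check. A minimal sketch, assuming posterior predictive draws are already available as an array; the numbers below are synthetic, not the study's data:

```python
import numpy as np

def flag_interference(delta_obs, predictive_draws, level=0.99):
    """Flag units whose observed deformation falls outside the central
    `level` posterior predictive interval (step 3 of the procedure).
    predictive_draws has shape (n_draws, n_units)."""
    tail = (1.0 - level) / 2.0
    lo = np.quantile(predictive_draws, tail, axis=0)
    hi = np.quantile(predictive_draws, 1.0 - tail, axis=0)
    return (delta_obs < lo) | (delta_obs > hi)

# Synthetic illustration: predictive draws centered at zero with small
# spread; the third unit's observation is far outside its interval.
rng = np.random.default_rng(0)
draws = rng.normal(0.0, 1e-3, size=(1000, 4))
obs = np.array([0.0, 5e-4, 1e-2, -2e-4])
flags = flag_interference(obs, draws)
```

Only the third unit is flagged, mimicking a unit with substantial interference.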
We implemented this procedure and observed that approximately 70\%--80\% of units, primarily in the central regions of sections, have
negligible interference (Appendix~\ref{secposteriorpredictivecheck}). This is clearly seen with another
graph that assesses effective treatments, which we proceed to describe.
Taking expectations in (\ref{eqmodelcompcylinder}), the treatment
effectively received by $\theta_i$ is
\begin{eqnarray}
\label{eqtreatmentinterference}
&& \bigl( \mathbb{E} \bigl\{ \Delta r(\theta_i, r_0, \mathbf{x}) \vert
\alpha, \beta, a, b, x_0, \sigma \bigr\} -
x_0 \nonumber\\
&& \qquad {}- \alpha(r_0 + x_0)^a - \beta(r_0 + x_0)^b\cos(2 \theta
_i) \bigr)\\
&& \qquad \hspace{2pt}/ \bigl( 1 + a \alpha(r_0 + x_0)^{a-1} + b \beta(r_0 + x_0)^{b-1}\cos(2 \theta_i)\bigr).\nonumber
\end{eqnarray}
We gauge $g_i(\mathbf{x})$ by plugging observed data from the
experiment and posterior draws of the parameters based on $\mathbf
{D}_{n}$ into (\ref{eqtreatmentinterference}). These discrepancy-measure
[\citet{Meng}] calculations, summarized in Figure~\ref{posteriorpredictivetreatment}, again suggest that central angles in
each section have negligible interference: estimates of their effective
treatments correspond to their assigned treatments. There is a slight
discrepancy between assigned treatments and inferred effective
treatments for some central angles, but this is likely due to different
parameter values for the two data sets. Of more importance is the
observation that the effective treatment of a boundary angle $\theta_i$
is a weighted average of the treatment assigned to its section,
$x_{i,M}$, and its nearest neighboring section, $x_{i,\mathit{NM}}$, with the
weights a function of the distances (in radians) between $\theta_i$ and
the midpoint angle of its section, $\theta_{i,M}$, and the midpoint
angle of its nearest neighboring section, $\theta_{i,\mathit{NM}}$. All these
observations correspond to the intuition that interference should be
substantial near section boundaries.
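Given draws of the parameters, (\ref{eqtreatmentinterference}) can be evaluated directly. The sketch below uses a single set of hypothetical parameter values near the posterior means, and verifies the computation by a round trip:

```python
import numpy as np

def inferred_effective_treatment(theta, r0, delta_obs, alpha, beta, a, b, x0):
    """Back out g_i(x): subtract the no-compensation mean trend from the
    observed deformation, then divide by the treatment multiplier."""
    base = r0 + x0
    trend = x0 + alpha * base**a + beta * base**b * np.cos(2.0 * theta)
    mult = (1.0 + a * alpha * base**(a - 1.0)
            + b * beta * base**(b - 1.0) * np.cos(2.0 * theta))
    return (delta_obs - trend) / mult

# Round trip with hypothetical parameter values close to the posterior means:
# a deformation generated under effective treatment g should return g.
pars = dict(alpha=-1.34e-2, beta=5.7e-3, a=0.861, b=1.13, x0=8.79e-3)
theta, r0, g = np.pi / 3.0, 2.0, 0.016
base = r0 + pars["x0"]
trend = (pars["x0"] + pars["alpha"] * base**pars["a"]
         + pars["beta"] * base**pars["b"] * np.cos(2.0 * theta))
mult = (1.0 + pars["a"] * pars["alpha"] * base**(pars["a"] - 1.0)
        + pars["b"] * pars["beta"] * base**(pars["b"] - 1.0) * np.cos(2.0 * theta))
g_hat = inferred_effective_treatment(theta, r0, trend + mult * g, **pars)
```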
\begin{figure}
\includegraphics{762f05.eps}
\caption{Gauging effective treatment $g_i(\mathbf{x})$ using (\protect\ref{eqtreatmentinterference}). Four horizontal lines in each subfigure
denote the possible compensations, and dots denote estimates of
treatments that units effectively received in the experiment.}
\label{posteriorpredictivetreatment}
\end{figure}
\subsection{A simple interference model}
\label{secsimplemodelinterference}
We first alter (\ref{eqmodelcompcylinder}) to
\begin{eqnarray}
\label{eqfullmodelcylinder}
&& \Delta r(\theta_i, r_0, \mathbf{x})\nonumber\\
&& \qquad =
x_0 + \alpha(r_0+x_0)^a +
\beta (r_0+x_0)^b \cos(2
\theta_i)
\\
&&\qquad\quad {}+ \bigl\{ 1 + a\alpha(r_0+x_0)^{a-1}
+ b\beta(r_0+x_0)^{b-1}\cos (2
\theta_i) \bigr\} g_i(\mathbf{x}) +
\varepsilon_{i},\nonumber
\end{eqnarray}
where
\begin{eqnarray}
\label{eqweightedtreatment}
g_i(\mathbf{x}) &=& \bigl\{ 1 + \exp \bigl( -
\lambda_{r_0} | \theta _i - \theta_{i,\mathit{NM}}| +
\lambda_{r_0} |\theta_i - \theta_{i,M}| \bigr) \bigr\}^{-1} x_{i,M}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&{}+ \bigl\{ 1 + \exp \bigl( \lambda_{r_0} |
\theta_i - \theta_{i,\mathit{NM}}| - \lambda_{r_0} |
\theta_i - \theta_{i,M}| \bigr) \bigr\}^{-1}
x_{i,\mathit{NM}},
\end{eqnarray}
with $\theta_{i,M}$ and $\theta_{i,\mathit{NM}}$ denoting the midpoint angles of
the $\pi/8$-radian section containing $\theta_i$ and of its nearest
neighboring section, respectively, and $x_{i,M}, x_{i,\mathit{NM}}$ the compensations assigned to these
sections. Effective treatment $g_i(\mathbf{x})$ is a weighted average
of the unit's assigned treatment $x_i = x_{i,M}$ and the treatment
$x_{i,\mathit{NM}}$ assigned to its nearest neighboring section. Although the
form of the weights is chosen for computational convenience, we
recognize that (\ref{eqweightedtreatment}) belongs to a class of
models agreeing with prior subject-matter knowledge that interference
may be negligible if the implemented compensation plan is sufficiently
``continuous,'' in the sense that the theoretical compensation plan is
a continuous function of $\theta$ and the tolerance of the 3D printer
is sufficiently fine so that discretization of compensation is
negligible (Appendix~\ref{secnote}).
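The logistic weights in (\ref{eqweightedtreatment}) interpolate between the two sections' compensations. A small sketch with a hypothetical value of $\lambda_{r_0}$: at a section midpoint the unit's own compensation dominates, while at a section boundary, equidistant from the two midpoints, the weights are exactly one half:

```python
import numpy as np

def effective_treatment(theta, theta_M, theta_NM, x_M, x_NM, lam):
    """Logistic weighting: the unit's own compensation x_M dominates near its
    section midpoint theta_M, the neighbor's x_NM near the boundary."""
    w = 1.0 / (1.0 + np.exp(-lam * abs(theta - theta_NM)
                            + lam * abs(theta - theta_M)))
    return w * x_M + (1.0 - w) * x_NM

# Own section: midpoint pi/16, compensation 2; nearest neighboring section:
# midpoint 3*pi/16, compensation 0; lam = 80 is hypothetical.
mid = effective_treatment(np.pi / 16, np.pi / 16, 3 * np.pi / 16, 2.0, 0.0, 80.0)
edge = effective_treatment(np.pi / 8, np.pi / 16, 3 * np.pi / 16, 2.0, 0.0, 80.0)
```

At the midpoint the effective treatment is essentially the assigned compensation ($\approx 2$); at the boundary it is the exact average of the two ($= 1$).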
We fit the model in (\ref{eqfullmodelcylinder}) and (\ref
{eqweightedtreatment}), having $10$ total parameters, to the
experiment data. The prior specification remains the same, with $\log (\lambda_{r_0}) \sim\mathrm{N}(0,4^2)$ independently {a
priori} for $r_0 = 0.5, 1, 2$, and $3$ inches. A HMC algorithm was used
to obtain $1000$ draws from the joint posterior distribution after a
burn-in of $500$, and these are summarized in Table~\ref{posteriorpredictivetableexperimental}.
\begin{table}
\centering
\caption{Summary of posterior draws for simple interference model}
\label{posteriorpredictivetableexperimental}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}ld{2.9}d{1.9}d{2.9}d{3.15}c@{}} \hline
& \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{SD}} & \multicolumn{1}{c}{\textbf{Median}} & \multicolumn{1}{c}{$\bolds{95\%}$ \textbf{credible}
\textbf{interval}} & \multicolumn{1}{c@{}}{\textbf{ESS}} \\
\hline
$\alpha$ & -1.06 \times 10^{-2} & 1.53 \times 10^{-4} & -1.06 \times 10^{-2} & (-1.09,-1.03) \times 10^{-2} & 8078 \\
$\beta$ & 5.79 \times 10^{-3} & 3.69 \times 10^{-5} & 5.79 \times 10^{-3} & (5.72, 5.86) \times 10^{-3} & 8237 \\
$a$ & 9.5 \times 10^{-1} & 9.46 \times 10^{-3} & 9.5 \times 10^{-1} & (9.31, 9.69) \times 10^{-1} & 8150 \\
$b$ & 1.12 & 6.64 \times 10^{-3} & 1.12 & (1.0, 1.13) & 8504 \\
$x_0$ & 7.1 \times 10^{-3} & 1.43 \times 10^{-4} & 7.1 \times 10^{-3} & (6.82, 7.39) \times 10^{-3} & 8404 \\
$\sigma$ & 3.14 \times 10^{-3} & 1.36 \times 10^{-5} & 3.14 \times 10^{-3} & (3.11, 3.17) \times 10^{-3} & 8924 \\
$\lambda_{0.5}$ & 32.66 & 2.05 & 32.62 & (28.69, 36.76) & 8686 \\
$\lambda_{1}$ & 48.24 & 2 & 48.12 & (44.5, 52.6) & 8666 \\
$\lambda_{2}$ & 76.83 & 1.78 & 76.78 & (73.42, 80.44) & 8770 \\
$\lambda_{3}$ & 86.08
& 0.83
& 86.06 & (84.49, 87.68)
& 8385
\\
\hline
\end{tabular*}
\end{table}
\begin{figure}
\includegraphics{762f06.eps}
\caption{\textup{(a)} An example of the type of erroneous predictions
made by model (\protect\ref{eqfullmodelcylinder}),
(\protect\ref{eqweightedtreatment}) for the $3$ inch cylinder. The vertical
line is drawn at $\theta= \pi$, marking the boundary between two
sections. Units to the left of this line were given $0$ compensation,
and units to the right were given $+2$ compensation. The posterior mean
trend is represented by the solid line, and posterior quantiles are
represented by dashed lines. Observed data are denoted by dots.
\textup{(b)} Corresponding inferred effective treatment for $15\pi/16
\leq\theta\leq17\pi/16$.
\textup{(c)} Refined posterior predictions for $r_0 = 3$ inches, $15\pi
/16 \leq\theta\leq17\pi/16$.
\textup{(d)} Comparing inferred effective treatments (solid line) with
refined effective treatment model (dashed line) for
the $3$ inch cylinder.}
\label{posteriorpredictive3error}
\end{figure}
This model provides a good fit for the $0.5$ and $1$ inch cylinders,
but not the others. As an example, in Figure~\ref{posteriorpredictive3error}(a) the posterior mean trend does not
correctly capture the observed transition across sections for the $3$
inch cylinder. The problem appears to reside in (\ref{eqweightedtreatment}). This specification implies that effective
treatments of units $\theta_i = k\pi/8$ for $k \in\mathbb{Z}_{>0}$ are
equal-weighted averages of compensations applied to units $k\pi/8 \pm
\pi/16$. To assess the validity of this implication, we use the
posterior distribution of the parameters to calculate, for each $\theta
_i$, the inferred effective treatment in (\ref{eqtreatmentinterference}).
An example of these calculations, Figure~\ref{posteriorpredictive3error}(b), shows that the inferred
effective treatment for $\theta_i = \pi$ is nearly $0.06$ inch, the
compensation applied to the right-side section. Thus, specification
(\ref{eqweightedtreatment}) is invalidated by the experiment.
Another posterior predictive check helps clarify the problem. From (\ref{eqweightedtreatment}),
\[
g_i(\mathbf{x}) = w_i x_{i,M} + (1 -
w_i) x_{i,\mathit{NM}},
\]
and so
\begin{equation}\label{eqinferredweightfunction}
w_i = \frac{g_i(\mathbf{x}) - x_{i,\mathit{NM}}}{x_{i,M} - x_{i,\mathit{NM}}},
\end{equation}
which is well defined because $x_{i,M} \neq x_{i,\mathit{NM}}$ in this
experiment. Plugging in the inferred effective treatments, calculated
from (\ref{eqtreatmentinterference}), into (\ref{eqinferredweightfunction}), we then diagnose how to modify (\ref{eqweightedtreatment}) to better model
interference in the experiment.
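Equation (\ref{eqinferredweightfunction}) is a one-line computation; a sketch with hypothetical numbers:

```python
def inferred_weight(g_hat, x_M, x_NM):
    """Solve g = w * x_M + (1 - w) * x_NM for the own-section weight w;
    well defined only when x_M != x_NM, as in the experiment."""
    if x_M == x_NM:
        raise ValueError("weight not identified when x_M == x_NM")
    return (g_hat - x_NM) / (x_M - x_NM)

# Hypothetical boundary unit: inferred effective treatment of 1.5 units, own
# section assigned +2 and nearest neighbor 0, implies own-section weight 0.75.
w = inferred_weight(1.5, x_M=2.0, x_NM=0.0)
```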
This calculation was made for all cylinders, and the results for $r_0 =
3$ inches are summarized in Figure~\ref{radius3weight} as an example.
Rows in this figure show the weights for each quadrant, and we focus on
their behavior in neighborhoods of integral multiples of $\pi/8$.
Neither the decay in the weights [represented by $\lambda_{r_0}$ in
(\ref{eqweightedtreatment})] nor the weight for integral multiples of
$\pi/8$ remain constant across sections. In fact, these figures suggest
that $\lambda_{r_0}$ is a function of $\theta_{i,M}, \theta_{i,\mathit{NM}}$,
and that a location term is required. They also demonstrate a possible,
subtle quadrant effect and, as our experiment blocks on this factor, we
are better able to use these posterior predictive checks to refine our
simple interference model and capture this unexpected deformation pattern.
\begin{figure}
\includegraphics{762f07.eps}
\caption{Inferring weights $w_i$ in the interference model for the $r_0
= 3$ inch cylinder, using effective treatments calculated from equation
(\protect\ref{eqtreatmentinterference}), based on the posterior distribution
of parameters from Section~\protect\ref{secsimplemodelinterference} and
equation (\protect\ref{eqinferredweightfunction}). Vertical lines represent
$\theta= k\pi/8$ for $k = 1, \ldots, 16$, and numbers at the bottom of
each subfigure represent assigned compensations.}
\label{radius3weight}
\end{figure}
\subsection{A refined interference model}
\label{secrefinedmodelinterference}
Our refined effective treatment model is of the same form as (\ref
{eqweightedtreatment}), with $\lambda_{r_0}$ replaced by $\lambda
_{r_0}(\theta_{i,M}, \theta_{i,\mathit{NM}})$, and $|\theta_i - \theta_{i,M}|,
|\theta_i - \theta_{i,\mathit{NM}}|$ replaced by $|\theta_i - \theta_{i,M} -
\delta_{r_0}(\theta_{i,M}, \theta_{i,\mathit{NM}})|, |\theta_i - \theta_{i,\mathit{NM}} -
\delta_{r_0}(\theta_{i,M}, \theta_{i,\mathit{NM}})|$, respectively. Here, $\delta
_{r_0}(\theta_{i,M}, \theta_{i,\mathit{NM}})$ represents the location shifts across
sections suggested by the previous posterior predictive checks.
Our specific model is
\begin{eqnarray}
\label{eqdeltaFourierexpansion}
\delta_{r_0}(\theta_{i,M},
\theta_{i,\mathit{NM}}) & =& \delta_{r_0,0} + \sum
_{k=1}^3 \bigl\{ \delta_{r_0,k}^c
\cos(k\theta_{i,B}) + \delta_{r_0,k}^s
\sin(k\theta_{i,B}) \bigr\},
\\
\label{eqlambdarefined}
\lambda_{r_0}(\theta_{i,M},
\theta_{i,\mathit{NM}}) &=& \mathbb{I}\bigl(|x_{i,M} - x_{i,\mathit{NM}}| = 1\bigr)
\lambda_{r_0,1}
\nonumber
\\[-8pt]
\\[-8pt]
&&{}+ \mathbb{I}\bigl(|x_{i,M} - x_{i,\mathit{NM}}| = 2\bigr)
\lambda_{r_0,2},\nonumber
\end{eqnarray}
where $\theta_{i,B} = (\theta_{i,M} + \theta_{i,\mathit{NM}})/2$ and $|x_{i,M} -
x_{i,\mathit{NM}}|$ is measured in absolute units of compensation. From Figure~\ref{radius3weight} and the fact that
\[
\delta_{r_0}(\theta_{i,M}, \theta_{i,\mathit{NM}}) =
\delta_{r_0}(\theta_{i,M} + 2\pi, \theta_{i,\mathit{NM}} + 2\pi),
\]
location shifts should be modeled using harmonic functions.
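As a sanity check on this functional form, the harmonic expansion for $\delta_{r_0}$ can be evaluated directly; the sketch below uses placeholder coefficients, not the fitted posterior values.

```python
import math

# Location-shift term of the refined interference model: a third-order
# harmonic series in the section midpoint theta_B = (theta_M + theta_NM)/2.
# The coefficients below are illustrative placeholders.

def delta(theta_M, theta_NM, d0, dc, ds):
    theta_B = 0.5 * (theta_M + theta_NM)
    return d0 + sum(dc[k - 1] * math.cos(k * theta_B)
                    + ds[k - 1] * math.sin(k * theta_B) for k in (1, 2, 3))

d0, dc, ds = 0.01, [0.004, -0.002, 0.001], [0.003, 0.002, -0.001]

# Shifting both section angles by 2*pi leaves the shift unchanged: the
# periodicity that motivates a harmonic expansion.
a = delta(math.pi / 8, 3 * math.pi / 16, d0, dc, ds)
b = delta(math.pi / 8 + 2 * math.pi, 3 * math.pi / 16 + 2 * math.pi, d0, dc, ds)
print(abs(a - b) < 1e-9)  # True
```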
This model provides a better fit. Comparing Figure~\ref{posteriorpredictive3error}(c), which displays posterior predictions
from the refined model (based on one chain of posterior draws using a
standard random walk Metropolis algorithm), with the previous model's
predictions in Figure~\ref{posteriorpredictive3error}(a), we
immediately see that the refined model better captures the posterior
predictive mean trend. Similar improvements exist for the other
sections and cylinders. We also compare the original inferred effective
treatments obtained from (\ref{eqtreatmentinterference}) with the
refined model in Figure~\ref{posteriorpredictive3error}(d) and again
observe that the new model better captures interference.
\subsection{Summary of the experimental design and analysis}
\label{secdiscussion}
Three key ingredients relating to the data, model, and experimental
design have made our series of analyses possible, and are relevant and
useful across a wide variety of disciplines. First is the availability
of benchmark data, for example, every unit on the cylinder receiving
zero compensation. Second is the potential outcomes model~(\ref{eqmodelcompcylinder}) for compensation effects when there is no
interference, defined in terms of a fixed number of parameters that do
not depend on the compensation plan $\mathbf{x}$. These two enable
calculation of the posterior predictive distribution of potential
outcomes under the assumption of negligible interference. The final
ingredient is the explicit distinction between units of analysis and
units of interpretation in our design, which provides the means to
assess and model interference in the experiment. Comparing observed
outcomes from the experiment to posterior predictions allows one to
infer the structure of interference, which can be validated by further
experimentation.
These considerations suggest that our methodology can be generalized
and applied to other experimental situations with units residing on
connected surfaces. In general, when experimenting with units on a
connected surface, a principled and step-by-step analysis using the
three ingredients above, as illustrated in this paper, can ultimately
shed more light on the substantive question of interest.
\section{Conclusion: Ignoring interference inhibits improvements}
\label{sec4}
To manufacture 3D printed products satisfying dimensional accuracy
demands, it is important to address the problem of interference in a
principled manner.
\citet{huangzhangsabbaghidasgupta} recognized that continuous
compensation plans implemented on printers with a sufficiently fine
tolerance can effectively control a product's printed dimensions
without inducing additional complications through interference. Their
models for product deformation motivated our experiment that introduces
interference through the application of a discretized compensation plan
to the boundary of a cylinder. Combining this experiment's data with
inferences based on data for which every unit received no compensation
led to an assessment of interference in terms of how units' effective
treatments differed from that physically assigned. Further analyses
effectively modeled interference in the experiment.
It is important to note that the refined interference model's location
and scale terms (\ref{eqdeltaFourierexpansion}), (\ref
{eqlambdarefined}) are a function of the compensation plan. For
example, reflecting the assigned compensations across the $y$-axis would
accordingly change the location shifts. The implication of this and all
our previous observations for manufacturing is that severely
discretized compensation plans introduce interference, and, if this
fact is ignored, then quality control of 3D printed products will be
hindered, especially for geometrically complex products relevant in
real-life manufacturing.
Many research challenges and opportunities for both statistics and
additive manufacturing remain to be addressed. Perhaps the most
important is experimental design in the presence of interference. For
example, when focus is on the construction of specific classes of
products (e.g., complicated gear structures), optimum designs can lead
to precise estimates of model parameters, hence improved compensation
plans and control of deformation. An important and subtle statistical
issue that then arises is how the structure of interference changes as
a function of the compensation plan derived from the experimental
design. Instead of being a weighted average of the treatment applied to
its section and nearest neighboring section, the derived compensation
plan may cause a unit's effective treatment to be a weighted average of
treatments applied to other sections as well, with weights depending on
the absolute difference in applied compensations. Knowledge of the
relationship between compensation plans derived from specific
experimental designs and interference is necessary to improve quality
control in general, and therefore is an important issue to address for
3D printing.
\begin{appendix}
\section{Correlation in \texorpdfstring{$\varepsilon$}{varepsilon}}
\label{seccorrelation}
In all our analyses, we assumed the $\varepsilon_i$ were independent. As
pointed out by a referee, when units reside on a constrained boundary,
independence of error terms is generally unrealistic. However, we
believe that our specific context helps justify this simplifying
assumption for several reasons.
First, the major objective driving our work on 3D printing is
compensation for product deformation. To derive compensation plans, it
is important to accurately specify the mean trend in deformation.
Although incorporating correlation may change parameter estimates that
govern the mean trend, we do not believe that modeling the correlation
in errors will substantially help us compensate for printed product
deformations. This is something we intend to address further in our
future work.
\begin{figure}
\includegraphics{762f08.eps}
\caption{Residuals for the model fit in Section~\protect\ref{secnocompensationfit}. Here, the residual is defined as the
difference between the observed deformation and the posterior mean of
deformation for each angle $\theta_i$.}\vspace*{-5pt}
\label{figresiduals}
\end{figure}
Second, there is a factor that may further confound the potential
benefits of including correlated errors in our model: the resolution of
the CAD model. To illustrate, consider the model fit in Section~\ref{secnocompensationfit}.
We display the residual plots in Figure~\ref{figresiduals}. All residuals are (in absolute value) less than $1\%$
of the nominal radius for $r_0 = 0.5$ inch and at most approximately
$0.1\%$ of the nominal radius for $r_0 = 1, 2, 3$ inches, supporting
our claim that we have accurately modeled the mean trend in deformation
for these products. However, we note that for $r_0 = 1, 2, 3$ inches,
there is substantial negative correlation in residuals between adjacent
units, with the residuals following a high-frequency harmonic trend.
There is a simple explanation for this phenomenon. Our first
manufactured products were $r_0 = 1, 2, 3$ inches, and the CAD models
for these products had low resolution. Low resolution in the CAD model
yields the high-frequency pattern in the residual plots. The next
product we constructed was $r_0 = 0.5$ inch, and its CAD model had
higher resolution than that previously used, which helped to remove
this high-frequency pattern. Minor trends appear to exist in this
particular plot, and an ACF plot formally reveals significant
autocorrelations. Accordingly, we observe that the correlation in
residuals is a function of the resolution of the initial CAD model. In
consideration of our current data and our primary objective to
accurately capture the mean trend in deformation, we use independent
$\varepsilon_i$ throughout. We intend to pursue this issue further in our
future work, for example, in the direction of \citet
{colosimosemeraropacella}.
Furthermore, as pointed out by the Associate Editor, correlations in
residuals for more complicated products may be accounted for by
modeling the interference between units, which is precisely the focus
of this manuscript.
\section{MCMC convergence diagnostics}
\label{secdiagnostics}
Convergence of our MCMC algorithms was gauged by analysis of ACF and
trace plots, and effective sample size (ESS) and \citeauthor{GelmanRubin} [(\citeyear{GelmanRubin}), GR] statistics, which were calculated using $10$
independent chains of $1000$ draws after a burn-in of $500$. In
Sections~\ref{secnocompensationfit} and \ref{secsimplemodelinterference}, the ESS were all above $8000$ (the
maximum is $10\mbox{,}000$), and the GR statistics were all $1$.
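For reference, the GR (potential scale reduction) statistic reduces to a comparison of between-chain and within-chain variance; a minimal implementation of the standard formula (not the authors' code) is:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for a (num_chains, num_draws) array.

    Compares the between-chain variance B to the pooled within-chain
    variance W; values near 1 indicate the chains have mixed.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(10, 1000))         # 10 chains targeting one density
print(float(gelman_rubin(mixed)) < 1.05)    # True for well-mixed chains
```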
\section{Assessing interference}
\label{secposteriorpredictivecheck}
The results of the first procedure described in Section~\ref{secassessinginterference} are displayed in Figure~\ref{posteriorpredictiveexperiment}: bold lines represent posterior
means, dashed lines quantiles forming the 99\% central posterior\vadjust{\goodbreak}
intervals, and dots the observed outcomes in the experiment, with
separate figures for each nominal radius and compensation. For example,
the graph in the first row and column of Figure~\ref{posteriorpredictiveexperiment} contains the observed data for angles
in the $0.5$ inch radius cylinder that received $-$1 compensation. This
figure also contains the posterior predictive mean and 99\%
intervals for all angles under the assumption that $-$1 compensation
was applied uniformly to the cylinder. Although only four sections of
the cylinder received this compensation in the experiment, forming this
distribution makes the posterior predictive mean trend transparent, and
so helps identify when a unit's observed outcome deviates strongly from
its prediction.
\begin{figure}
\includegraphics{762f09.eps}
\caption{Assessing interference in the experiment based on posterior
inferences drawn from the no-compensation data.
Clockwise from top left: predictions for units that received $-1$, $0$,
$+1$, and $+2$ compensation.}\vspace*{-6pt}
\label{posteriorpredictiveexperiment}
\end{figure}
\section{Note on a class of interference models}
\label{secnote}
Compensation is applied in practice by discretizing the plan at a
finite number of points, according to some tolerance specified\vadjust{\goodbreak} by the
size (in radians) for each section or, alternatively, the maximum value
of $|\theta_{i,M} - \theta_{i,\mathit{NM}}|$.
Suppose compensation plan $x(\theta)$ is a continuous function of
$\theta$, and define
\[
w_i = \frac{h(|\theta_i - \theta_{i,M} |)}{
h(|\theta_i - \theta_{i,M} |) + h(|\theta_i - \theta_{i,\mathit{NM}}|)},
\]
with $h\dvtx \mathbb{R} \rightarrow\mathbb{R}_{>0}$ a monotonically
decreasing continuous function, and
\[
g_i(\mathbf{x}) = w_i x_{i,M} + (1 -
w_i) x_{i, \mathit{NM}}.
\]
Then for the cylinders considered in our experiment, $g_i(\mathbf{x})
\rightarrow x_i$ as $|\theta_{i,M} - \theta_{i, \mathit{NM}}| \rightarrow0$.
This is because $|x_{i,M} - x_{i,\mathit{NM}}| \rightarrow0$ as $|\theta_{i,M}
- \theta_{i,\mathit{NM}}| \rightarrow0$, and
\[
0 \leq |\theta_i - \theta_{i,\mathit{NM}} | - |\theta_i
- \theta_{i,M} | \leq |\theta_{i,M} - \theta_{i,\mathit{NM}}|.
\]
\end{appendix}
\section*{Acknowledgements}
We are grateful to Xiao-Li Meng, Joseph Blitzstein, David Watson,
Matthew Plumlee, the Editor, Associate Editor, and a referee for their
valuable comments, which improved this paper.
\def\Thebibliography#1{\section*{\centering REFERENCES}\list
{\arabic{enumi}.}{\settowidth\labelwidth{#1.}\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}}
\def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em}
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax
\let\endThebibliography=\endlist
\begin{document}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{flushright}
hep-ph/9509381 \\
September 1995 \\
\end{flushright}
\vspace{0.1cm}
\begin{center}
{\Huge\bf Charmonium Theory\footnote{Talk presented at the
Workshop on the Tau/Charm Factory, Argonne National Laboratory,
June 21--23, 1995.}}
\vspace{0.2in}
\end{center}
\begin{center}
{\Large Aida X. El-Khadra\footnote{Address after Sept. 1, 1995:
Physics Department, University of Illinois, 1110 W. Green St.,
Urbana, IL~61801}}
\end{center}
\vskip 0.2in
\begin{center}
{\small\em Physics Department, Ohio State University, 174 W 18th Ave,
Columbus, OH 43210, U.S.A.}
\vspace{0.3in}
\end{center}
\begin{quote}
\small{\bf Abstract.}
Recent theoretical progress in calculations of the spectrum and decays
of charmonium is reviewed.
Traditionally, our understanding of charmonium was based on
potential models. This is now being replaced by first principles.
\end{quote}
\vskip 1cm
\setcounter{footnote}{0}
\renewcommand{\thefootnote}{\alph{footnote}}
\section*{\centering INTRODUCTION} \label{sec:IM}
The charmonium system is
at present among the theoretically best understood hadronic systems.
The charm quark mass is large compared to the typical QCD
scale, $\Lambda_{QCD}$. The $c\bar{c}$ bound states are therefore
governed by non-relativistic dynamics. Historically, while the
QCD potential was not known from first principles,
relatively simple guesses for
phenomenological potentials had proven quite successful in describing
the experimentally measured bound state spectrum of charmonium \cite{quigg}.
Likewise, for annihilation decays of charmonium,
an intuitive factorization
{\em ansatz}, involving the wave function at the origin (calculable in
potential models) worked quite well, for the most part.
This model-based theoretical understanding is now being replaced
by first principles. For charmonium spectroscopy, the progress comes
from using lattice QCD. Charmonium decays can be treated rigorously
with the help of non-relativistic QCD (or NRQCD).
Lattice field theory offers a systematic first
principles approach to solving QCD.
It has been argued by Lepage \cite{lepage} that charmonium is one of
the easiest systems to study with lattice QCD, with the potential
of complete control over all systematic errors.
Finite-volume errors are much easier to control for
quarkonia than for light hadrons.
Lattice-spacing errors, on the other hand, can be larger
for quarkonia and need to be considered.
An alternative to reducing the lattice spacing in order to control
this systematic error is improving the action (and operators).
For quarkonia, the size of lattice-spacing errors in a numerical
simulation can be {\em anticipated} by calculating expectation
values of the corresponding operators using potential model wave
functions. They are therefore ideal systems to test and establish
improvement techniques.
Most of the work of phenomenological relevance is done in
what is generally referred to as the ``quenched''
(and sometimes as the ``valence'') approximation.
In this approximation gluons are not allowed to split into
quark - anti-quark pairs (sea quarks). In the case of
charmonium, potential model phenomenology can be used to
estimate this systematic error.
Control over systematic
errors in turn allows the extraction of Standard Model parameters
from the quarkonia spectra.
Non-relativistic systems, like charmonium, are best described
with an effective field theory, non-relativistic QCD (or NRQCD).
It was shown in Ref.~\cite{bbl} that the application of NRQCD
to the problem of annihilation decays of quarkonium leads to
a general factorization formula. It reproduces the earlier
({\em ad hoc}) factorization {\em ansatz} for S-wave decays while putting it
on a firm theoretical footing. In the case of P-wave decays, the
previous {\em ansatz} is modified.
Lattice QCD and NRQCD are introduced in the following subsections.
The remainder of the talk is organized in two parts.
The first part reviews recent progress in calculations of the
charmonium spectrum based on lattice QCD.
The second part reviews progress in understanding
charmonium decays based on NRQCD.
\subsection*{\centering An Introduction to Lattice QCD} \label{sec:primer}
Lattice field theory is formulated using the Feynman path integral
in Euclidean space. The quantities that are actually calculated
are expectation values of Green's functions (${\cal G}$), which are
products of gauge and fermion fields. The physical quantities of interest,
hadron masses, matrix elements, etc., are then extracted from these
Green's functions.
The discretization of space-time (with lattice spacing $a$) regulates
the path integral at short distances or in the ultraviolet.
A finite volume (of length $L$) is necessary for numerical
techniques and also introduces an infrared cut-off or momentum-space
discretization.
The vacuum expectation value of a Green's function, ${\cal G}$,
is defined as:
\begin{equation} \label{eq:lim}
\langle {\cal G} \rangle = \lim_{L \rightarrow \infty} \,
\lim_{a \rightarrow 0} \, \langle {\cal G} \rangle_{L,a}
\;\;,\;\;\;
\langle {\cal G} \rangle_{L,a} = Z^{-1}_{L,a}
\int {\cal D}\psi {\cal D}\bar{\psi} {\cal D}U \, {\cal G}
\, e^{-S_{\rm lat}} \;\;\;.
\end{equation}
$Z_{L,a}$ normalizes the expectation value.
I have omitted spin and color indices for compactness.
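In practice, $\langle {\cal G} \rangle_{L,a}$ is estimated by Monte Carlo: configurations are sampled with weight $e^{-S_{\rm lat}}$ and the observable is averaged over them. A deliberately minimal sketch of the idea, with the path integral replaced by a single real variable (an illustrative toy, not a lattice code):

```python
import math
import random

# Toy Monte Carlo evaluation of an expectation value: sample with weight
# exp(-S) via Metropolis updates and average the observable. Here the
# "lattice" is one real variable with action S(x) = x^2/2, so <x^2> = 1.

def metropolis_samples(action, n_sweeps, step=1.0, seed=1):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_sweeps):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < min(1.0, math.exp(action(x) - action(x_new))):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis_samples(lambda x: 0.5 * x * x, 200_000)
estimate = sum(x * x for x in samples) / len(samples)
print(abs(estimate - 1.0) < 0.05)  # True
```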
The gauge degrees of freedom are written as (path ordered) exponentials
of the gauge field, $A_{\mu}$:
\begin{equation}
U_{\mu} (x) = e^{ i \int_x^{x+a} dx' A_{\mu} (x')} \simeq e^{ia A_{\mu} (x)}
\;\;\;,
\end{equation}
which makes it easy to maintain gauge invariance.
The link fields, $U$, are $SU(3)$ matrices.
The (Euclidean) QCD action,
\begin{equation} \label{eq:qcd}
S = S_g + S_f \;\;, \;\;\;\;
S_g = \frac{1}{4g^2} \int d^4x \, F_{\mu \nu}F^{\mu \nu} \;\;, \;\;\;\;
S_f = \int d^4x \, \bar{\psi}(x) (D\!\!\!\!\slash + m ) \psi (x) \;\;,
\end{equation}
is discretized, such that Eq.~(\ref{eq:qcd}) is recovered in
the continuum ($a \rightarrow 0$) limit:
\begin{equation} \label{eq:limit}
S_{\rm lat} = S + {\cal O} (a^n)
\;\;, \;\;\; n \geq 1 \;\;\;.
\end{equation}
I will not go into the explicit formulations of $S_{\rm lat}$
here, but instead refer the reader to pedagogical introductions~\cite{intro}.
The most common form for the gauge action is Wilson's \cite{wilson_g},
written in terms of plaquettes
-- products of $U$ fields around the smallest closed loop on a lattice.
Wilson's gauge action has discretization errors of ${\cal O}(a^2)$.
For fermions the situation is more complicated.
The discretization of
\begin{equation}
M \equiv D\!\!\!\!\slash + m \;\;,
\end{equation}
is a sparse, finite dimensional matrix.
Two different approaches are in use.
In Wilson's formulation \cite{wilson_f} chiral symmetry is explicitly
broken, but restored in the continuum limit.
The pay-off is a solution of the so-called fermion doubling problem.
Staggered fermions \cite{ks} keep a $U(1)$ chiral symmetry
at the expense of dealing with 4 degenerate flavors of
fermions.
Eq.~(\ref{eq:lim}) emphasizes that QCD is a limit of lattice QCD.
However, in numerical calculations these limits cannot be
taken explicitly, only by extrapolation. This is feasible, because
theoretical guidance for both limits is available.
The zero-lattice-spacing limit is guided by asymptotic freedom, since
the lattice spacing is related to the gauge coupling by the
renormalization group.
Quantum field theories in large but finite volumes have also been
analyzed theoretically \cite{ml_vol}.
In a numerical calculation the limits are taken by considering
a series of lattices, as illustrated in Figure~\ref{fig:lim}.
While keeping the physical volume (or $L$) fixed, the lattice spacing is
successively reduced; then, keeping the lattice spacing fixed the
volume is increased. The calculation is in the continuum (infinite
volume) limit once the hadron spectrum or matrix elements of
interest become independent of the lattice spacing (volume).
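For orientation, the renormalization-group relation between the bare coupling and the lattice spacing can be evaluated directly. The sketch below uses the standard two-loop asymptotic-scaling formula for quenched SU(3), shown only to illustrate that $a$ shrinks as $g$ is reduced:

```python
import math

# Two-loop asymptotic-scaling relation a*Lambda_lat(g) for SU(3) with
# n_f flavors; standard textbook coefficients of the beta function.

def a_times_lambda(g, nf=0):
    b0 = (11 - 2 * nf / 3) / (16 * math.pi ** 2)
    b1 = (102 - 38 * nf / 3) / (16 * math.pi ** 2) ** 2
    return ((b0 * g * g) ** (-b1 / (2 * b0 ** 2))
            * math.exp(-1 / (2 * b0 * g * g)))

# Reducing the bare coupling drives the lattice spacing toward zero.
print(a_times_lambda(1.0) > a_times_lambda(0.9) > a_times_lambda(0.8))  # True
```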
\begin{figure}[htb]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(110,80)(0,-5)
\multiput(115,15)(5,0){3}{\line(0,-1){10}}
\multiput(115,15)(0,-5){3}{\line(1,0){10}}
\multiput(55,15)(2.5,0){5}{\line(0,-1){10}}
\multiput(55,15)(0,-2.5){5}{\line(1,0){10}}
\multiput(25,15)(1.25,0){9}{\line(0,-1){10}}
\multiput(25,15)(0,-1.25){9}{\line(1,0){10}}
\multiput(20,65)(1.25,0){17}{\line(0,-1){20}}
\multiput(20,65)(0,-1.25){17}{\line(1,0){20}}
\put(100,10){\vector(-1,0){20}}
\put(85,15){\makebox(0,0)[l]{\Large $0 \leftarrow a$}}
\put(30,20){\vector(0,1){20}}
\put(35,30){\makebox(0,0)[l]{\Large $V \rightarrow \infty$}}
\put(0,0){\vector(0,1){70}}
\put(0,0){\vector(1,0){130}}
\put(0,73){\makebox(0,0)[b]{\Large $L$ (fm)}}
\put(120,-5){\makebox(0,0)[r]{\Large $a$ (fm)}}
\end{picture}
\end{center}
\caption{Illustration of the continuum and infinite-volume limits.}
\label{fig:lim}
\end{figure}
In practice, however, limitations in computational resources
do not permit the ideal lattice QCD
calculation just described. In particular, the computational
cost of reducing
the lattice spacing naively scales like $(L/a)^4$. (The
computational cost is really higher, because of numerical problems
at smaller lattice spacings.)
Eq.~(\ref{eq:limit}) illustrates
an alternative. By improving the discretization errors in
the lattice action (and operators), the continuum limit
can be reached at coarser lattice spacings than before.
Simulations with improved actions can come at only a slightly
higher computational price.
The ideas underlying improvement were developed some time
ago \cite{sym,lw,sw}, and have since been
revitalized \cite{it,us_thy,perfect,adhlm}.
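The site-count part of the cost scaling above is easy to make concrete (the exponent here ignores the additional algorithmic slowing down at small $a$):

```python
# Naive cost of approaching the continuum limit at fixed physical volume:
# the number of lattice sites grows like (L/a)^4, so halving the lattice
# spacing multiplies the site count by 16.

def sites(L_over_a):
    return L_over_a ** 4

print(sites(32) // sites(16))  # 16
```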
If the quark mass is large compared to the typical QCD scale,
$\Lambda_{QCD}$, effective theories (such as NRQCD) are most
adequate in describing the physics \cite{eff}. In that case, the lattice
spacing cannot be taken to zero. Lattice-spacing errors can, however, be
systematically reduced by improvement \cite{nrqcd_thy}.
The problem is now (more or less) set up. I again refer the reader
to the literature \cite{intro} for more details on the organization
of typical lattice QCD calculations.
\subsection*{\centering An Introduction to NRQCD (Non-Relativistic QCD)}
In non-relativistic systems, the velocity $v$ of the heavy quark
inside the bound state is a small parameter; in
charmonium $v^2 \sim 0.3$. The important momentum scales with
regard to the structure of the bound state are the heavy quark momentum
($mv$) and its kinetic energy ($mv^2$).
Momenta of the order of the heavy quark mass ($m$), or above, are
relatively unimportant in the bound state dynamics.
If $v$ is small enough, then all the different momentum
scales are well separated, in particular:
\begin{equation}
\Lambda_{\rm QCD} \;, \; mv^2 \;,\; mv \ll m \;\;.
\end{equation}
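Putting in rough numbers makes the hierarchy concrete; a charm mass of order $1.5$ GeV is assumed here purely for illustration:

```python
import math

# Momentum scales in charmonium, using v^2 ~ 0.3 from the text and a charm
# mass of order 1.5 GeV (an illustrative value, not a fitted parameter).
m = 1.5                      # GeV
v = math.sqrt(0.3)
scales = {"m v^2": m * 0.3, "m v": m * v, "m": m}
print({k: round(x, 2) for k, x in scales.items()})
# Roughly 0.45, 0.82, and 1.5 GeV: the hierarchy m v^2 < m v < m holds,
# though the separation is only marginal for charm.
```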
The most adequate description of such systems is in terms of an
effective field theory, non-relativistic
QCD (or NRQCD). The theory has a cut-off $\Lambda \sim m$.
The effective lagrangian can be written as an expansion in powers of
$1/m$.
The NRQCD lagrangian is \cite{eff}
\begin{equation} \label{eq:eff}
{\cal L}_{\rm NRQCD} = {\cal L}_{\rm light} + {\cal L}_{\rm heavy}
+ \delta{\cal L} \;\;.
\end{equation}
${\cal L}_{\rm light}$ is the fully relativistic lagrangian
for the light quarks and gluons.
${\cal L}_{\rm heavy}$ is the heavy quark (and anti-quark) lagrangian,
\begin{equation} \label{eq:nrqcd}
{\cal L}_{\rm heavy} = \psi^{\dagger}
\left( iD_0 + \frac{\bm{D}^2}{2m} \right) \psi
+ \chi^{\dagger} \left( iD_0 - \frac{\bm{D}^2}{2m} \right) \chi
\;\;,
\end{equation}
where $\psi$ and $\chi$ are 2-component Pauli spinors describing quark and
anti-quark degrees of freedom.
Relativistic effects of full QCD introduce corrections to
Eq.~(\ref{eq:nrqcd}) which appear in $\delta{\cal L}$ as local,
non-renormalizable interactions, with coefficients that are
calculable in perturbation theory \cite{eff,nrqcd_thy}.
In principle, infinitely many terms must be considered to reproduce full
QCD.
In practice, however, only a finite number is needed,
since every operator scales with a certain
power of $v$, as shown in Ref.~\cite{nrqcd_thy}. These power
counting rules (e.g., $\psi \sim (mv)^{3/2}$, $\bm{D} \sim mv$, etc.)
effectively order the terms in the NRQCD lagrangian by powers of $v$.
The annihilation of a $Q\bar{Q}$ pair occurs at momenta of order
$m$. This short distance physics cannot be treated directly in NRQCD.
However, the annihilation contribution to low-energy
$Q\bar{Q} \rightarrow Q\bar{Q}$ scattering can be incorporated in NRQCD by
adding local 4-fermion operators to $\delta {\cal L}$ \cite{bbl},
\begin{equation} \label{eq:4f}
\delta {\cal L}_{\rm 4-fermion} = \sum_i \frac{f_i}{m^{d_i -4}} \,
{\cal O}_i \;\;.
\end{equation}
For example,
\begin{equation}
{\cal O}_1 (^1S_0) = \psi^{\dagger} \chi \chi^{\dagger} \psi \;\;.
\end{equation}
The $f_i$ are again calculable in perturbation theory as expansions
in $\alpha_s (m)$.
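In annihilation rates, these matrix elements and coefficients combine through the factorization formula of Ref.~\cite{bbl}; schematically (suppressing the dependence on the cut-off),

```latex
\[
\Gamma(H \rightarrow \mbox{light hadrons}) = \sum_i
\frac{2 \, {\rm Im}\, f_i}{m^{d_i - 4}} \,
\langle H | {\cal O}_i | H \rangle \;\;,
\]
```

so the short-distance physics resides entirely in the perturbative coefficients $f_i$, while the matrix elements $\langle H | {\cal O}_i | H \rangle$ carry the non-perturbative bound-state dynamics.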
\section*{\centering CHARMONIUM SPECTROSCOPY} \label{sec:QQ}
Two different formulations for fermions have been used in lattice
calculations of these spectra.
Lepage and collaborators \cite{lepage,nrqcd_thy} have adapted the NRQCD
formalism to the lattice regulator. Several groups have performed
numerical calculations of quarkonia in this approach.
In Ref.~\cite{nrqcd_cc} the
NRQCD action is used to calculate
the charmonium spectrum, including
terms of ${\cal O} (mv^4)$ and ${\cal O}(a^2)$.
In addition, this group has calculated the $b\bar{b}$ spectrum in
the quenched approximation ($n_f=0$) \cite{nrqcd_spec} and also using gauge
configurations that include 2 flavors of sea quarks
\cite{nrqcd_als,nrqcd_mb}.
The Fermilab group \cite{us_thy} developed a generalization of previous
approaches, which encompasses the non-relativistic
limit for heavy quarks as well as Wilson's relativistic action
for light quarks. Lattice-spacing artifacts are analyzed for quarks with
arbitrary mass. Ref.~\cite{us} uses this approach to calculate
the $c\bar{c}$ (and $b\bar{b}$) spectra in the quenched approximation. We
considered the effect of reducing lattice-spacing errors from
${\cal O}(a)$ to ${\cal O}(a^2)$.
The two groups mentioned above use gauge configurations generated
with the Wilson action, leaving ${\cal O}(a^2)$ lattice-spacing errors
in the results. The lattice spacings, in this case, are in the range
$a \simeq 0.05 - 0.2$ fm.
Ref.~\cite{adhlm} uses an improved gauge action (to ${\cal O}(a^4)$)
together with a non-relativistic quark action improved to the same order
(but without spin-dependent terms) on coarse ($a \simeq 0.24 - 0.4$ fm)
lattices.
The first step in any lattice QCD calculation is the determination of the
two free parameters of the theory, the gauge coupling and quark
mass, from experiment. This is discussed in the following subsection.
The results for the charmonium spectrum from all groups are
summarized in Figure~\ref{fig:cc}.
\begin{figure}[htb]
\begin{center}
\epsfxsize= 0.65\textwidth
\leavevmode
\epsfbox[11 219 588 560]{ccbar.ps}
\end{center}
\caption[xx]{A comparison of lattice QCD results for the $c\bar{c}$ spectrum
using the quenched approximation ($n_f = 0$). The error bars
are statistical only.
---: Experiment; $\Box$: FNAL \cite{us}; $\circ$: NRQCD \cite{nrqcd_cc};
$\Diamond$: ADHLM \cite{adhlm}.}
\label{fig:cc}
\end{figure}
The agreement between the experimentally observed spectrum and
lattice QCD calculations is respectable. As indicated in the
preceding paragraphs, the lattice artifacts are different for all groups.
Figure~\ref{fig:cc} therefore emphasizes the level of
control over systematic errors.
I should also note, however, that the theoretical errors (from the
Monte Carlo integration alone) are still much larger than
the experimental ones. They also grow with every level of
excitation considered.
The first quarkonium results with 2 flavors of degenerate sea quarks
have appeared \cite{nrqcd_als,kek_als,cdhw} with lattice-spacing
and finite-volume errors similar to the quenched calculations,
significantly reducing this systematic error.
In Refs.~\cite{kek_als,cdhw} the 1P-1S splitting in
charmonium is calculated, while Ref.~\cite{nrqcd_als}
considers the $b\bar{b}$ spectrum.
Several systematic effects associated with the
inclusion of sea quarks must still be studied.
They include the dependence of the quarkonium
spectrum on the number of flavors of sea quarks and
on the sea-quark action (staggered vs.\ Wilson). The inclusion
of sea quarks with realistic light-quark masses is very
difficult. However, quarkonia are expected to depend only
very mildly on the masses of the light quarks. This expectation
has not yet been verified and should be checked
numerically.
The first (and second) generation of lattice QCD calculations,
as described here,
is focused on the simplest physical quantities, like the
low lying states or simple (decay) matrix elements, to establish
the method. Once first principles calculations have been achieved,
this technology can and will (given sufficient motivation) be
used to look at more complicated problems,
like higher excited states, mixing with glueballs, hybrids, etc.
\subsection*{\centering Standard Model Parameters from
Charmonium} \label{sec:sm}
The first step, the determination of the lattice gauge coupling and
quark mass, follows from comparing appropriate quantities calculated
on the lattice with the corresponding experimental measurements.
The lattice parameters can then be converted to their counterparts
in continuum QCD with perturbation theory.
Precise determinations of Standard Model parameters
are an interesting by-product of lattice QCD calculations of
charmonium (and bottomonium). However, the theoretical uncertainties
must be reduced by an order of magnitude before they become comparable
to the present experimental errors.
After discussing the determination
of the lattice spacing (which sets the scale in Figure~\ref{fig:cc}),
I will summarize the results for the strong coupling and
the charm quark mass from charmonium.
\subsubsection*{\it\centering Determination of the Lattice Spacing, $a$}
The input gauge coupling sets the lattice spacing, $a$, which is
determined in physical units by comparing a suitable quantity on
the lattice with its experimental value.
For this purpose, one should identify quantities that are insensitive
to lattice errors. In quarkonia, spin-averaged splittings are good
candidates. The experimentally observed 1P-1S and 2S-1S splittings
depend only mildly on the quark mass (for masses between $m_b$ and $m_c$),
as shown in table~\ref{tab:QQexp}.
\begin{table}
\caption{Spin-averaged splittings in the $J/\psi$ and $\Upsilon$
systems in comparison.} \label{tab:QQexp}
\begin{center}
\begin{tabular}{rrr}
\hline
& $c\bar{c}$ (MeV) & $b\bar{b}$ (MeV) \\ \hline
$m({\rm 1P-1S})$ & $456.8$ & $452\;\;\;$ \\
$m({\rm 2S-1S})$ & $596\;\;\;$ & $563\;\;\;$ \\
$m({\rm 2P-1P})$ & ---$\;\,$ & $359.7$ \\ \hline
\end{tabular}
\end{center}
\end{table}
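For reference, the spin-averaged S-wave mass entering these splittings weights each state by its spin multiplicity $2J+1$; for the charmonium 1S level, for example,
\[
\overline{m}({\rm 1S}) \;=\; \tfrac{1}{4}\left( m_{\eta_c} + 3\, m_{J/\psi} \right) ,
\]
which cancels the leading effect of the hyperfine interaction.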
Figure~\ref{fig:1p1s}
shows the observed mass dependence of the 1P-1S splitting
in a lattice QCD calculation. The comparison between results from
different lattice actions illustrates that
higher-order lattice-spacing errors for these splittings
are small\cite{nrqcd_als,us}.
In contrast, Figure~\ref{fig:hyp} shows the hyperfine splitting
as an example of a quantity that strongly depends on both the mass
and the lattice action. It would therefore be a poor choice for a
determination of the lattice spacing.
\begin{figure}[htb]
\begin{center}
\epsfxsize= 0.65\textwidth
\leavevmode
\epsfbox{m1p1s.ps}
\end{center}
\caption[xx]{The 1P-1S splitting as a function of the 1S mass
(statistical errors only) from Ref. \cite{us};
$\Box$: ${\cal O}(a^2)$ errors; $\times$: ${\cal O}(a)$ errors.}
\label{fig:1p1s}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfxsize= 0.65\textwidth
\leavevmode
\epsfbox{mhyp.ps}
\end{center}
\caption[xx]{The hyperfine splitting as a function of the 1S mass
(statistical errors only) from Ref. \cite{us};
$\Box$: ${\cal O}(a^2)$ errors; $\times$: ${\cal O}(a)$ errors.}
\label{fig:hyp}
\end{figure}
\subsubsection*{\it\centering The Strong Coupling, $\alpha_s$}
Within the framework of lattice QCD the conversion from the bare
to a renormalized coupling can, in principle, be made
non-perturbatively \cite{luesch}.
An alternative is to define a renormalized coupling through short
distance lattice quantities \cite{lm}.
The size of higher-order corrections associated with the
coupling constant defined above can be tested by comparing
perturbative predictions for short-distance lattice quantities
with non-perturbative results \cite{lm}.
At this point the relation to the $\overline{{\rm MS}}$ coupling is known
to 1-loop, leading to a $5 \,\%$ uncertainty. It has recently
been calculated to 2-loops \cite{lw-2l}
in the quenched approximation (no sea quarks, $n_f = 0$).
The extension to $n_f \neq 0$ will significantly reduce
the uncertainty due to the use of perturbation theory.
{\em Sea Quark Effects.\ }
Calculations that properly include all sea-quark effects
do not yet exist.
If we want to make contact with the ``real world'', these effects
have to be estimated phenomenologically or extrapolated away.
The phenomenological correction necessary to account for
the sea-quark effects omitted in calculations of quarkonia
that use the quenched approximation gives rise to the dominant
systematic error in these calculations \cite{prl,nrqcd_l93}.
Similar ideas were used to
correct for sea-quark effects in early calculations of quarkonia
spectra from the heavy-quark potential calculated in quenched
lattice QCD \cite{rebbi}.
By demanding that, say, the spin-averaged 1P-1S splitting calculated on
the lattice reproduce the experimentally observed one (which
sets the lattice spacing, $a^{-1}$, in physical units), the effective
coupling of the quenched potential is in effect matched to the
coupling of the effective 3 flavor potential at the typical
momentum scale of the quarkonium states in question. The difference
in the evolution of the zero flavor and 3,4 flavor couplings
from the effective low-energy scale to the ultraviolet cut-off,
where $\alpha_s$ is determined, is the perturbative estimate
of the correction.
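Schematically (the notation here is introduced only for illustration), the quenched coupling is matched to the three-flavor coupling at a typical low-energy scale $q^*$ of the bound state, and the correction follows from the different one-loop running of the two couplings up to the ultraviolet cut-off $\pi/a$:
\[
\alpha^{(0)}_s(q^*) = \alpha^{(3)}_s(q^*) \;, \qquad
\frac{1}{\alpha^{(n_f)}_s(\pi/a)} \simeq \frac{1}{\alpha^{(n_f)}_s(q^*)}
+ \frac{\beta_0^{(n_f)}}{4\pi}\,\ln\frac{(\pi/a)^2}{(q^*)^2} \;,
\]
with $\beta_0^{(n_f)} = 11 - \tfrac{2}{3}\, n_f$.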
For comparison with other determinations of $\alpha_s$, the $\overline{{\rm MS}}$
coupling can be evolved to the $Z$ mass scale. An average \cite{als_rev} of
Refs.~\cite{us,nrqcd_l93} yields for $\alpha_s$ from calculations
in the quenched approximation:
\begin{equation} \label{eq:nf0}
\alpha^{(5)}_{\overline{{\rm MS}}} (m_Z) = 0.110 \pm 0.006 \;\;\;.
\end{equation}
The experimental error in this determination from the quarkonium
mass splitting is much smaller than the theoretical uncertainty
and does not contribute to the total.
The phenomenological correction described in the previous paragraphs
has been tested from first principles in
Refs.~\cite{nrqcd_als,kek_als,cdhw}. All groups calculate quarkonium
splittings with 0 and 2 flavors of sea quarks. After extrapolating
to the physical 3 flavor case and evolving the coupling to
$m_Z$, Refs.~\cite{kek_als,cdhw} find for the strong coupling from
charmonium
\begin{equation}
\alpha^{(5)}_{\overline{{\rm MS}}} (m_Z) = 0.111 \pm 0.005
\end{equation}
in good agreement with the previous result in Eq.~(\ref{eq:nf0}).
The total error is now dominated by the rather large
statistical errors and the perturbative uncertainty.
At present, the result of Ref.~\cite{nrqcd_als} has the
smallest statistical and systematic errors for the strong
coupling (in this case from the $b\bar{b}$ spectrum):
\begin{equation} \label{eq:ms5}
\alpha^{(5)}_{\overline{{\rm MS}}} (m_Z) = 0.115 \pm 0.002 \;\;\;.
\end{equation}
Phenomenological corrections are a necessary evil that enter most
coupling constant determinations.
In contrast, lattice QCD calculations with complete control over
systematic errors will yield truly first-principles determinations
of $\alpha_s$ from the experimentally observed hadron spectrum.
At present, determinations of $\alpha_s$ from the experimentally measured
quarkonia spectra using lattice
QCD are comparable in reliability and accuracy to other determinations
based on perturbative QCD from high energy experiments. They
are therefore part of the 1994 world average for $\alpha_s$ \cite{als_rev}.
The phenomenological corrections for the most important sources
of systematic errors in lattice QCD calculations of quarkonia are
now being replaced by first principles, which will significantly increase
the accuracy of $\alpha_s$ determinations from quarkonia.
In particular, the systematic errors associated with the
inclusion of sea quarks into the simulation have to be checked.
\subsubsection*{\it\centering The Charm Quark Mass} \label{sec:mass}
Because of confinement, the quark masses cannot be measured
directly, but have to be inferred from experimental
measurements of hadron masses, and depend on the calculational
scheme employed.
In lattice QCD quark masses are determined non-perturbatively,
by tuning the input lattice quark mass ($m^{\rm lat}_Q$) so that,
for example, the experimentally observed $J/\psi$ mass is reproduced
by the calculation.
Phenomenologically useful quark masses are the perturbatively defined
pole and $\overline{{\rm MS}}$ masses, which the bare lattice mass can be related
to by perturbation theory:
\begin{equation} \label{eq:mms}
m_Q^{\rm pole} = Z^{\rm pole}_m \, m^{\rm lat}_Q \;\;\;,\hspace{2cm}
m_Q^{\overline{{\rm MS}}} (m_Q) = Z^{\overline{{\rm MS}}}_m \, m^{\rm lat}_Q \;\;\;.
\end{equation}
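For orientation, the two perturbative mass definitions in Eq.~(\ref{eq:mms}) are related to each other by the standard one-loop result
\[
m_Q^{\rm pole} \;=\; m_Q^{\overline{{\rm MS}}} (m_Q)
\left( 1 + \frac{4}{3}\, \frac{\alpha_s(m_Q)}{\pi} + {\cal O}(\alpha_s^2) \right) ,
\]
so a determination of either mass fixes the other to the stated perturbative accuracy.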
Of course, as always, all systematic errors arising from the
lattice QCD calculation need to be under control for a
phenomenologically interesting result;
in particular, the systematic error introduced by the
(partial) omission of sea quarks has to be removed.
The short-distance corrections that introduced the dominant
uncertainty into the $\alpha_s$ determination from quarkonia
are absent for the pole mass determination, because the pole
mass does not run at momentum scales below the quark mass.
An analysis of the $b$-quark mass from the $b\bar{b}$ spectrum
with and without sea quarks is consistent with this
expectation \cite{nrqcd_mb}.
Ref.~\cite{us_mc} analyzes the charm quark mass
from the charmonium spectrum with the preliminary result,
$m_c^{\rm pole} = 1.5 (2)$ GeV.
The $\overline{{\rm MS}}$ mass for the charm quark has also been determined
from a compilation of $D$ meson calculations in the quenched
approximation \cite{elc_mq}, with
$m_c^{\overline{{\rm MS}}} (2 \,{\rm GeV}) = 1.47 (28)$ GeV. The error includes
statistical errors from the original calculations and the
perturbative error. However, sea-quark effects cannot, in this case,
be estimated phenomenologically, leaving this systematic
error uncontrolled.
\section*{\centering CHARMONIUM DECAYS} \label{sec:decays}
Historically \cite{itep,quigg}, charmonium annihilation decays were treated
with a factorization {\em ansatz}, which divides the decay rate into
a short distance, perturbative part, and a long distance,
non-perturbative part, usually parametrized as the wave
function at the origin, $R(0)$.
This {\em ansatz} worked quite well for S-wave decays,
even after including radiative corrections at next-to-leading order
\cite{s-decays}.
For P-wave decays, the long distance piece was
identified as the derivative of the wave function at the origin,
$R'(0)$.
However, the radiative corrections were found to have infra-red
divergences \cite{p-decays} for $J=1$ P-wave states already at leading order,
and for $J=0,2$ P-wave states at next-to-leading order,
preventing reliable theoretical predictions.
This {\em ansatz} also did not allow for a systematic inclusion of higher
Fock states, or relativistic corrections.
When the effective field theory framework of NRQCD is used to describe
annihilation decays, a general factorization formula was shown \cite{bbl}
to hold for the decay rates.
The annihilation decay rate of a quarkonium state $A$ can
be written as
\begin{equation}
\Gamma ( A ) = 2 \; {\rm Im} \langle A | \delta {\cal L}_{\rm 4-fermion}
| A \rangle \;\;.
\end{equation}
Using Eq.~(\ref{eq:4f}) this gives
\begin{equation} \label{eq:bbl}
\Gamma ( A ) = \sum_{i} \frac{F_i}{m^{d_i -4}}
\; \langle A | {\cal O}_i | A \rangle \;\;\;.
\end{equation}
The coefficients $F_i$ are the imaginary parts of the $f_i$ in
Eq.~(\ref{eq:4f}). Because of the power counting rules
of Ref.~\cite{nrqcd_thy}, the matrix elements scale with some power
of the velocity,
\begin{equation}
\langle A | {\cal O}_i | A \rangle \sim v^{2n} \;\;,
\end{equation}
which usually increases as higher dimensional
operators are considered in the sum of Eq.~(\ref{eq:bbl}).
Effectively, this approach expresses the quarkonium decay
rates as expansions in the short distance parameter $\alpha_s(m)$
and the long distance parameter $v^2$.
The desired accuracy of the theoretical prediction thus serves
as a truncation criterion for the (infinite) sum in Eq.~(\ref{eq:bbl}).
The matrix elements in Eq.~(\ref{eq:bbl}) are calculable from first
principles using lattice NRQCD. These calculations are in progress
by the ANL group \cite{anl}. In the meantime,
the matrix elements can also be extracted from experimental measurements
of charmonium decay rates.
It was shown in Ref.~\cite{bbl} that heavy-quark spin symmetry and
the vacuum-saturation approximation reduces the number of independent
matrix elements that have to be determined non-perturbatively or
phenomenologically.
The matrix elements of the 4-fermion operators,
which contribute to the charmonium decays into light hadrons
can be simplified using the vacuum saturation approximation.
This approximation is valid in NRQCD up to relative order $v^4$.
For example,
\begin{equation} \label{eq:vac}
\langle \eta_c | \psi^{\dagger} \chi \chi^{\dagger} \psi | \eta_c \rangle =
|\langle 0 | \chi^{\dagger} \psi | \eta_c \rangle |^2
( 1 + {\cal O}(v^4) ) \;\;.
\end{equation}
This relates the matrix elements appearing in electro-magnetic
charmonium decays to those in strong decays (into light hadrons).
Heavy-quark spin symmetry relates matrix elements of states in
the same radial and orbital levels but different spins. For example,
\begin{equation} \label{eq:spin}
\bm{\epsilon}^* \cdot
\langle 0 | \chi^{\dagger} \bm{\sigma} \psi | J/\psi \rangle
= \langle 0 | \chi^{\dagger} \psi | \eta_c \rangle
( 1 + {\cal O}(v^2) ) \;\;.
\end{equation}
The following two subsections discuss the application
of this approach to S- and P-wave decays, using specific
examples.
\subsection*{\centering S-Wave Decays}
I discuss the theoretical knowledge of S-wave decays using
$\eta_c \rightarrow \gamma\gamma$ as an example.
At leading order in $v^2$ the rate is
\begin{equation} \label{eq:s-v2}
\Gamma ( \eta_c \rightarrow \gamma\gamma )
= \frac{F_{\gamma\gamma} (^1S_0)}{m^2} |\overline{R_S}|^2 \;\;,
\end{equation}
where by spin symmetry (see Eq.~(\ref{eq:spin})) $\overline{R_S}$
can be taken as
$\overline{R_{\eta_c}}$ or the spin-averaged combination of
$\overline{R_{\eta_c}}$ and $\overline{R_{\psi}}$, with
\begin{equation}
\overline{R_{\eta_c}} = \sqrt{\frac{2\pi}{3}} \;
\langle 0 | \chi^{\dagger} \psi | \eta_c \rangle \;\;.
\end{equation}
The other $\eta_c$ and $J/\psi$ decays differ from Eq.~(\ref{eq:s-v2})
in the short distance coefficient but, using Eqs.~(\ref{eq:vac})
and (\ref{eq:spin}),
have the same long distance parameter, $\overline{R_S}$.
Identifying $\overline{R_S}$ with the wave function at the origin,
$R_S(0)$, leads back to the old factorization {\em ansatz}.
The short distance coefficients, $F_i$, are known to next-to-leading
order in $\alpha_s$.
At next to leading order in $v^2$, the decay rate becomes
\begin{equation}
\Gamma ( \eta_c \rightarrow \gamma\gamma )
= \frac{F_{\gamma\gamma} (^1S_0)}{m^2} \; |\overline{R_{\eta_c}}|^2
+ \frac{G_{\gamma\gamma} (^1S_0)}{m^4} \;
{\rm Re}(\overline{R_S}^* \overline{\bm{\nabla}^2R_S}) \;\;.
\end{equation}
Now the difference between $\overline{R_{\eta_c}}$ and $\overline{R_S}$,
which is of order $v^2$ (see Eq.~(\ref{eq:spin})), has to be taken into
account for consistency.
$\overline{\bm{\nabla}^2R_S}$ can be interpreted as the Laplacian
of the wave function at the origin,
\begin{equation}
\langle 0 | \chi^{\dagger} (\bm{D}^2) \psi | \eta_c \rangle =
\sqrt{\frac{3}{2\pi}} \; \overline{\bm{\nabla}^2R_S}
\; ( 1 + {\cal O}(v^2) ) \;\;.
\end{equation}
The short distance coefficients, $G_i$, are known to leading
order in $\alpha_s$.
The long distance parameters, $\overline{R_{\eta_c}}$,
$\overline{R_{\psi}}$ and $\overline{\bm{\nabla}^2R_S}$ can be
calculated in lattice NRQCD, estimated using potential models
or extracted from a phenomenological analysis of experimental data.
The relative errors (from the truncation of the perturbative and
non-relativistic series) are $\alpha_s(m_c)^2$, $\alpha_s (m_c) v^2$
and $v^4$, adding up to ${\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 15 \;\%$.
It is not inconceivable that at least for the electro-magnetic
S-wave decays some (or all) of the higher order calculations
will be performed, reducing the theoretical error accordingly.
\subsection*{\centering P-Wave Decays}
The structure of P-wave decay rates is more complicated than
that of S-waves. I shall start with a simple example,
the decay $\chi_{cJ} \rightarrow \gamma \gamma$:
\begin{equation}
\Gamma ( \chi_{cJ} \rightarrow \gamma \gamma ) =
\frac{ F_{\gamma \gamma}( ^3P_J)}{m^4} \; |\overline{R'_P}|^2 \;\;,
\end{equation}
where again, by spin symmetry $\overline{R'_P}$ can be taken
from the matrix elements of any of the P-wave states (or their
spin-averaged combination). For example,
\begin{equation}
\langle 0 | \chi^{\dagger} (\frac{1}{2}\bm{D\cdot \sigma}) \psi
| \chi_{c0} \rangle =
\sqrt{\frac{27}{2\pi}} \; \overline{R'_{\chi_{c0}}} \;
(1 + {\cal O} (v^2) ) \;\;.
\end{equation}
The obvious interpretation of $\overline{R'_P}$ as the derivative
of the wave function at the origin, $R'_P(0)$, again connects this
formalism to the old factorization {\em ansatz}.
The coefficients, $F_{\gamma\gamma}$, are known to next-to-leading
order in $\alpha_s$. The relative error is $v^2$ or $\sim 25\, \%$ of the
decay rate.
The situation is different for strong decays of
P-wave states. Taking as an example $h_c \rightarrow LH$ (light
hadrons), Eq.~(\ref{eq:bbl}) gives at leading order in $v^2$,
\begin{equation}
\Gamma ( h_c \rightarrow LH)
= \frac{F_1 (^1P_1)}{m^4} \langle {\cal O}_1 \rangle
+ \frac{F_8 (^1S_0)}{m^4} \langle {\cal O}_8 \rangle \;\;,
\end{equation}
where
\begin{eqnarray}
\langle {\cal O}_1 \rangle & = &
\langle h_c | \psi^{\dagger} (\frac{1}{2} \bm{D \cdot \sigma}) \chi
\chi^{\dagger} (\frac{1}{2} \bm{D \cdot \sigma}) \psi | h_c
\rangle \\
& = & \frac{9}{2 \pi} \; |\overline{R'_P}|^2 \; ( 1 + {\cal O}(v^2) )
\nonumber
\end{eqnarray}
and
\begin{equation}
\langle {\cal O}_8 \rangle =
\langle h_c | \psi^{\dagger} T^a \chi \chi^{\dagger}T^a \psi | h_c \rangle
\;\;.
\end{equation}
The two matrix elements $\langle {\cal O}_1 \rangle$ and
$\langle {\cal O}_8 \rangle$ enter at the same order in $v^2$, because
they are both suppressed by $v^2$ with respect to the leading order
S-wave matrix elements. ${\cal O}_1$ is a dimension eight operator; the two
powers of $\bm{D}$ give the $v^2$ suppression.
${\cal O}_8$ is of dimension
six, but its matrix element picks out the
$|c\bar{c} g\rangle$ Fock state, where the $c\bar{c}$ pair is in a color-octet $^1S_0$
state. The dominant Fock state of charmonium is the color-singlet
$|c\bar{c} \rangle$:
\begin{equation}
| A \rangle = \Psi_{c\bar{c}} | c\bar{c} \rangle + \Psi_{c\bar{c} g} | c\bar{c} g \rangle +
\ldots \;\;,
\end{equation}
with $\Psi_{c\bar{c} g} \sim {\cal O}(v)$. This gives the $v^2$ suppression.
This departure from the old factorization {\em ansatz} solves the problem
of infrared divergences, and thus leads to a consistent treatment of
this case similar to the other charmonium decays.
The short distance coefficients, $F_1$, are known to next-to-leading
order, while the $F_8$'s are only known to leading order in $\alpha_s$.
At present, the dominant relative error is thus $\alpha_s$ and $v^2$,
adding up to $\sim 40 \, \%$.
It should be possible to
extend the (perturbative) calculations of P-wave decays in this
framework to the same order as what is presently available
for S-wave decays, reducing the theoretical error to ${\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 15 \, \%$
(assuming knowledge of the relevant matrix elements).
A comparison of theory and experiment for P-wave decays
shows fair agreement, albeit with rather large errors
from both sides \cite{mangano}.
\section*{\centering CONCLUSIONS} \label{sec:con}
Quarkonia were, upon their discovery, called the hydrogen
atoms of particle physics. Their non-relativistic nature
justified the use of potential models, which gave a nice,
phenomenological understanding of these systems.
This phenomenology is at present useful to control
systematic errors in lattice QCD calculations of the charmonium
spectrum. However, we are quickly moving towards
truly first-principles calculations of quarkonia using
lattice QCD, thereby testing QCD non-perturbatively.
In this sense, quarkonia are still the hydrogen atoms of
particle physics.
Precise determinations of the Standard Model parameters,
$\alpha_s$, $m_c$ (and $m_b$), are by-products of this work.
Still lacking for a first-principles result is the
proper inclusion of sea quarks. The most difficult
problem in this context is the inclusion of sea quarks
with physical light quark masses. At present, this can
only be achieved by extrapolation (from $m_q \simeq 0.3 - 0.5 m_s$
to $m_{u,d}$).
If the light quark mass dependence of the quarkonia spectra
is mild, as anticipated, the associated systematic error
can be controlled.
First-principles calculations of quarkonia could then be
performed with currently available computational resources.
The present theoretical status of charmonium annihilation
decays is rather promising. The framework developed in Ref.~\cite{bbl}
leads to a systematic expansion in $\alpha_s(m_c)$ and $v^2$, with
controllable uncertainties. Until first-principles calculations
of the non-perturbative matrix elements become available, this
formalism can still be tested phenomenologically, using experimental
data. Theoretical predictions for
the decay rates should be available with uncertainties of
$ {\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 10 \, \%$, in most cases before the Tau/Charm factory
turns on.
It is conceivable that by the time a Tau/Charm factory turns
on, the theory of charmonium will be solidly based upon first
principles with accurate predictions for the spectrum and decays
of the low-lying states.
We will have moved to the next stage in theoretical (first-principles)
calculations concerning, for example, the properties of hybrid states
or mixing with glueballs.
Experimental information gathered at the Tau/Charm
factory and earlier experiments \cite{ginsburg} will then
give us {\em precision} tests of perturbative and non-perturbative QCD
in the charmonium system.
\section*{\centering ACKNOWLEDGEMENTS}
I thank the organizers for an enjoyable conference, and
G. Bodwin, E. Braaten, A. Kronfeld, P. Lepage, P. Mackenzie,
C. Quigg and J. Shigemitsu for discussions while preparing
this talk.
\begin{Thebibliography}{99}
\bibitem{quigg} W. Kwong, J. Rosner and C. Quigg, \ARNPS{37}{87}{325}.
\bibitem{lepage} P. Lepage, \NPBproc{26}{92}{45};
B. Thacker and P. Lepage, \PRD{43}{91}{196};
P. Lepage and B. Thacker, \NPBproc{4}{88}{199}.
\bibitem{bbl} E. Braaten, G. Bodwin and P. Lepage, \PRD{46}{92}{1914};
\PRD{51}{95}{1125}; E. Braaten, NUHEP-TH-94-22, hep-ph/9409286.
\bibitem{intro} For pedagogical introductions to Lattice Field Theory,
see, for example:
M. Creutz, {\em Quarks, Gluons and Lattices} (Cambridge
University Press, New York 1985);
A. Hasenfratz and P. Hasenfratz, \ARNPS{35}{85}{559};
A. Kronfeld, in {\em Perspectives in the Standard Model},
R. Ellis, C. Hill and J. Lykken (eds.) (World Scientific,
Singapore 1992), p. 421;
see also A. Kronfeld and P. Mackenzie, \ARNPS{43}{93}{793};
A.~El-Khadra, in {\em Physics in Collision 14}, S. Keller and
H. Wahl (eds.) (Editions Frontieres, Cedex - France 1995), p. 209;
for introductory reviews of lattice QCD.
\bibitem{wilson_g} K. Wilson, \PRD{10}{74}{2445}.
\bibitem{wilson_f} K. Wilson, in {\em New Phenomena in Subnuclear Physics},
A. Zichichi (ed.) (Plenum, New York 1977).
\bibitem{ks} L. Susskind, \PRD{16}{77}{3031};
T. Banks, J. Kogut and L. Susskind, \PRD{13}{76}{1043}.
\bibitem{ml_vol} M. L\"{u}scher, \CMP{104}{86}{177};
\CMP{105}{86}{153}.
\bibitem{sym} See for example, T. Bell and K. Wilson, \PRB {11}{75}{3431};
K. Symanzik, \NPB{226}{83}{187}; {\em ibid.} 205.
\bibitem{lw} P. Weisz, \NPB{212}{83}{1}; M. L\"{u}scher and P. Weisz,
\NPB {212}{84}{349}; \CMP{97}{85}{59}; (E) {\bf 98} (1985) 433.
\bibitem{sw} B. Sheikholeslami and R. Wohlert, \NPB{259}{85}{572}.
\bibitem{it} C. Heatlie, {\em et al.}, \NPB {352}{91}{266}.
\bibitem{us_thy} P. Mackenzie, \NPBproc{30}{93}{35};
A. Kronfeld, \NPBproc{30}{93}{445};
A. El-Khadra, A. Kronfeld and P. Mackenzie,
Fermilab PUB-93/195-T.
\bibitem{perfect} P. Hasenfratz, \NPBproc {34}{94}{3};
P. Hasenfratz and F. Niedermayer, \NPB{414}{94}{785};
U. Wiese, \PLB{315}{93}{417};
W. Bietenholz and U. Wiese, \NPBproc{34}{94}{516}.
\bibitem{adhlm} P. Lepage, in {\em The Building Blocks of Creation},
S. Raby and T. Walker (eds.) (World Scientific, Singapore 1994),
hep-lat/9403018;
M. Alford, {\em et al.}, \NPBproc{42}{95}{787}; hep-lat/9507010.
\bibitem{eff} E. Eichten and F. Feinberg, \PRD{23}{81}{2724};
W. Caswell and P. Lepage, \PLB{167}{86}{437}.
\bibitem{nrqcd_thy} P. Lepage, {\em et al.}, \PRD{46}{92}{4052}.
\bibitem{nrqcd_cc} C. Davies, {\em et al.}, hep-lat/9506026.
\bibitem{nrqcd_spec} C. Davies, {\em et al.}, \PRD{50}{94}{6963}.
\bibitem{nrqcd_als} C. Davies, {\em et al.}, \PLB{345}{95}{42}.
\bibitem{nrqcd_mb} C. Davies, {\em et al.}, \PRL{73}{94}{2654}.
\bibitem{us} A. El-Khadra, G. Hockney, A. Kronfeld, P. Mackenzie,
T. Onogi and J. Simone, Fermilab PUB-94/091-T.
\bibitem{kek_als} S. Aoki, {\em et al.}, \PRL{74}{95}{22}.
\bibitem{cdhw} M. Wingate, {\em et al.}, hep-lat/9501034.
\bibitem{luesch} For a review of $\alpha_s$ from the heavy-quark
potential, see K. Schilling and G. Bali, \NPBproc{34}{94}{147};
M. L\"{u}scher, R. Sommer, P. Weisz, and U. Wolff,
\NPB{413}{94}{481}; G. de Divitiis, {\em et al.}, \NPB{433}{95}{390};
\NPB{437}{95}{447};
C. Bernard, C. Parrinello and A. Soni, \PRD{49}{94}{1585}.
\bibitem{lm} P. Lepage and P. Mackenzie, \PRD{48}{92}{2250}.
\bibitem{lw-2l} M. L\"{u}scher and P. Weisz, \PLB{349}{95}{165};
hep-lat/9505011.
\bibitem{prl} A. El-Khadra, G. Hockney, A. Kronfeld and P. Mackenzie,
\PRL{69}{92}{729}; A. El-Khadra, \NPBproc{34}{94}{141}.
\bibitem{nrqcd_l93} The NRQCD Collaboration, \NPBproc{34}{94}{417}.
\bibitem{rebbi} D. Barkai, K. Moriarty and C. Rebbi, \PRD{30}{84}{2201};
M. Campostrini, \PLB{147}{84}{343}.
\bibitem{als_rev} For reviews on the status of $\alpha_s$
determinations, see, for example:
B. Webber, ICHEP'94; I. Hinchliffe, DPF'94 and
\PRD{50}{94}{1173}, p.1297.
\bibitem{us_mc} A. El-Khadra and B. Mertens, \NPBproc{42}{95}{406}.
\bibitem{elc_mq} C. Allton, {\em et al.}, \NPB{431}{94}{667}.
\bibitem{itep} V. Novikov, {\em et al.}, \PRepC{41}{78}{1}.
\bibitem{s-decays} R. Barbieri, G. Curci, E. d'Emilio and E. Remiddi,
\NPB{154}{79}{535};
K. Hagiwara, C. Kim, T. Yoshino, \NPB{177}{81}{461};
P. Mackenzie and P. Lepage, \PRL{47}{81}{1244}.
\bibitem{p-decays} R. Barbieri, R. Gatto and R. K\"{o}gerler,
\PLB{60}{76}{183};
R. Barbieri, R. Gatto and E. Remiddi, \PLB{61}{76}{465};
R. Barbieri, M. Caffo, R. Gatto and E. Remiddi, \PLB{95}{80}{93};
\NPB{192}{81}{61}.
\bibitem{anl} G. Bodwin, S. Kim and D. Sinclair, \NPBproc{34}{94}{434};
\NPBproc{42}{95}{306}.
\bibitem{mangano} M. Mangano and A. Petrelli, \PLB{352}{95}{445}.
\bibitem{ginsburg} C. Ginsburg (E760 collaboration), these proceedings.
\end{Thebibliography}
\end{document}
\section{Introduction}
\label{sec.int}
The KKM theorem of Knaster, Kuratowski, and Mazurkiewicz~\cite{KKM} is a set covering variant of Brouwer's fixed point theorem.
It states that for any covering of the $k$-simplex $\Delta_k$ on vertex set $[k+1]$ with closed sets $A_1, \dots, A_{k+1}$ such that
the face spanned by the vertices in $S$ is contained in $\bigcup_{i\in S} A_i$ for every $S \subset [k+1]$, the intersection $\bigcap_{i \in [k+1]} A_i$
is nonempty.
The KKM theorem has inspired many extensions and variants, some of which we will briefly survey in Section~\ref{sec:komiya}. Important strengthenings
include a colorful extension of the KKM theorem due to Gale~\cite{gale1984} that deals with $k+1$ possibly distinct coverings of the $k$-simplex and the
KKMS theorem of Shapley~\cite{shapley}, where the sets in the covering are associated to faces of the $k$-simplex instead of its vertices. Further generalizations
of the KKMS theorem are a polytopal version due to Komiya~\cite{komiya} and the colorful KKMS theorem of Shih and Lee~\cite{ShihLee}.
In this note we prove a colorful polytopal KKMS theorem, extending all results above. This result is finally sufficiently general to also
specialize to B\'ar\'any's celebrated colorful Carath\'eodory theorem~\cite{barany} from 1982, which asserts that if
$X_1, \dots, X_{k+1}$ are subsets of $\mathbb{R}^k$ with $0 \in \conv X_i$ for every $i \in [k+1]$, then there exists a choice of points
$x_1 \in X_1, \dots, x_{k+1} \in X_{k+1}$ such that $0 \in \conv\{x_1, \dots, x_{k+1}\}$ --- as in the case of the colorful KKM theorem,
Carath\'eodory's classical result is the case $X_1 = X_2 = \dots = X_{k+1}$. We deduce the colorful Carath\'eodory theorem
from our main result in Section~\ref{sec5}.
For a set $\sigma \subset \mathbb{R}^k$ we denote by $C_\sigma$ the \emph{cone of $\sigma$}, that is, the union of all rays emanating
from the origin that intersect~$\sigma$; in symbols, $C_\sigma = \{\, t\,x \,:\, t \ge 0,\ x \in \sigma \,\}$. Our main result is the following:
\begin{theorem}
\label{thm:col-komiya}
Let $P$ be a $k$-dimensional polytope with~${0 \in P}$. Suppose for every nonempty, proper face $\sigma$ of $P$ we are given $k+1$ points
$y^{(1)}_\sigma, \dots, y^{(k+1)}_\sigma \in C_\sigma$ and $k+1$ closed sets $A^{(1)}_\sigma, \dots, A^{(k+1)}_\sigma \subset P$.
If $\sigma \subset \bigcup_{\tau \subset \sigma} A^{(j)}_\tau$ for every face~$\sigma$ of~$P$ and every~${j\in [k+1]}$,
then there exist faces $\sigma_1, \dots, \sigma_{k+1}$ of $P$ such that
$0 \in \conv\{y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}\}$ and $\bigcap_{i=1}^{k+1} A^{(i)}_{\sigma_i}\neq \emptyset.$
\end{theorem}
Our proof of this result is entirely different from B\'ar\'any's proof of the colorful Carath\'eodory theorem and relies on a
topological mapping degree argument, and thus provides a new topological route to prove this theorem, which is less involved
than the deduction recently given by Meunier, Mulzer, Sarrabezolles, and Stein~\cite{meunier2017} to show that algorithmically
finding the configuration whose existence is guaranteed by the colorful Carath\'eodory theorem is in PPAD, that is, informally
speaking, it can be found by a (directed) path-following algorithm. Our method involves a limiting argument and thus does not have immediate
algorithmic consequences. Furthermore, our proof also exhibits a surprisingly simple way to prove KKMS-type results and their colorful extensions.
As an application of Theorem~\ref{thm:col-komiya} we prove a bound on the piercing numbers of colorful $d$-interval hypergraphs.
A {\em $d$-interval} is a union of at most $d$ disjoint closed
intervals on~$\mathbb{R}$. A $d$-interval $h$ is {\em separated} if it consists of $d$ disjoint components $h = h^1 \cup \dots \cup h^d$ with $h^{i+1} \subset (i, i + 1)$ for $i \in\{0, \dots, d-1\}$.
A {\em hypergraph of (separated) $d$-intervals} is a hypergraph $H$ whose vertex set is $\mathbb{R}$ and whose edge set is a finite family of (separated) $d$-intervals.
A {\em matching} in a hypergraph $H=(V,E)$ with vertex set $V$ and
edge set $E$ is a set of disjoint edges. A {\em cover} is a subset of
$V$ intersecting all edges. The \emph{matching number} $\nu(H)$ is the maximal size of a matching, and
the \emph{covering number} (or {\em piercing number}) $\tau(H)$ is the minimal size of a
cover.
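Note that the two parameters always satisfy an elementary inequality, since the edges of a matching are pairwise disjoint and each must contain a point of any cover:
\[
\nu(H) \;\le\; \tau(H) .
\]
The content of Theorem~\ref{t:kaiser} below is thus a reverse bound, with a multiplicative constant depending only on $d$.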
Tardos \cite{tardos} and Kaiser \cite{kaiser} proved the following bound on the covering number in
hypergraphs of $d$-intervals:
\begin{theorem}[Tardos \cite{tardos}, Kaiser \cite{kaiser}]\label{t:kaiser}
In every hypergraph $H$ of
$d$-intervals we have
$\tau(H) \leq (d^2-d+1)\nu(H).$ Moreover, if $H$ is a hypergraph of separated $d$-intervals then $ \tau(H) \leq (d^2-d)\nu(H).$
\end{theorem}
Matou\v{s}ek~\cite{matousek} showed that this bound is not far from
the truth: There are examples of hypergraphs of
$d$-intervals in which $\tau = \Omega(\frac{d^2}{\log
d}\nu)$. Aharoni, Kaiser and Zerbib~\cite{AKZ} gave a proof of Theorem~\ref{t:kaiser} that used the KKMS theorem and Komiya's polytopal extension, Theorem~\ref{thm:komiya}.
Using Theorem \ref{thm:col-komiya} we prove here a colorful generalization of Theorem~\ref{t:kaiser}:
\begin{theorem}
\label{coloreddintervals}
Let $F_i,~ i\in[k+1]$, be $k+1$ hypergraphs of $d$-intervals and let $\mathcal{F}= \bigcup_{i=1}^{k+1} F_i$.
\begin{enumerate}[1.]
\item If $\tau(F_i)>k$ for all $i\in[k+1]$, then there exists a collection $\mathcal{M}$ of pairwise disjoint $d$-intervals in $\mathcal{F}$ of size $|\mathcal{M}| \ge \frac{k+1}{d^2-d+1}$, with $|\mathcal{M}\cap F_i| \le 1$ for every $i\in[k+1]$.
\item If every $F_i$ consists of separated $d$-intervals and $\tau(F_i)>kd$ for all $i \in [k+1]$, then there exists a collection $\mathcal{M}$ of pairwise disjoint separated $d$-intervals in $\mathcal{F}$ of size $|\mathcal{M}| \ge \frac{k+1}{d-1}$, with $|\mathcal{M}\cap F_i| \le 1$ for every $i\in[k+1]$.
\end{enumerate}
\end{theorem}
Note that Theorem~\ref{t:kaiser} is the case where all the hypergraphs $F_i$ are the same.
In Section~\ref{sec:komiya} we introduce some notation and, as an introduction to our methods, provide a new simple proof of Komiya's theorem. Then, in Section~\ref{sec5}, we prove Theorem~\ref{thm:col-komiya} and use it to derive B\'ar\'any's colorful Carath\'eodory theorem. Section~\ref{sec:interval}
is devoted to the proof of Theorem~\ref{coloreddintervals}.
\section{Coverings of polytopes and Komiya's theorem}
\label{sec:komiya}
Let $\Delta_k$ be the $k$-dimensional simplex with vertex set $[k+1]$ realized in $\mathbb{R}^{k+1}$ as
$\{x \in \mathbb{R}^{k+1}_{\ge 0} \: : \: \sum_{i=1}^{k+1} x_i = 1\}$. For every $S\subset[k+1]$ let $\Delta^S$ be the face of $\Delta_k$ spanned by the vertices in~$S$.
Recall that the KKM theorem asserts that if $A_1,\dots,A_{k+1}$ are closed sets covering $\Delta_k$ so that
$\Delta^S \subset \bigcup_{i\in S} A_i$ for every $S\subset [k+1]$, then the intersection of all the sets $A_i$ is non-empty.
We will refer to covers $A_1, \dots, A_{k+1}$ as above as \emph{KKM covers}.
A generalization of this result, known as the KKMS theorem, was proven by Shapley~\cite{shapley} in 1973. Here one is given a cover of $\Delta_k$ by closed sets $A_T,~T\subset [k+1]$, so that $\Delta^S \subset \bigcup_{T\subset S} A_T$ for every $S\subset [k+1]$. Such a collection of sets $A_T$ is called a {\em KKMS cover}. The conclusion of the KKMS theorem is that there exists a balanced collection $T_1,\dots,T_m$ of subsets of $[k+1]$ for which $\bigcap_{i=1}^{m} A_{T_i} \neq \emptyset$. Here $T_1,\dots,T_m$ form a balanced collection if the barycenters of the corresponding faces $\Delta^{T_1},\dots,\Delta^{T_m}$ contain the barycenter of $\Delta_k$ in their convex hull.
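For instance (an illustrative example of ours, not taken from the cited works), for $k=2$ the three edges $T_1=\{1,2\}$, $T_2=\{2,3\}$, $T_3=\{1,3\}$ form a balanced collection: the barycenter of $\Delta_2$ is the average of the three edge barycenters,

```latex
b(\Delta_2) = \Bigl(\tfrac13,\tfrac13,\tfrac13\Bigr)
            = \tfrac13\Bigl(\tfrac12,\tfrac12,0\Bigr)
            + \tfrac13\Bigl(0,\tfrac12,\tfrac12\Bigr)
            + \tfrac13\Bigl(\tfrac12,0,\tfrac12\Bigr).
```

So $\{T_1,T_2,T_3\}$ is one of the balanced collections that may appear in the conclusion of the KKMS theorem.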
A different generalization of the KKM theorem is a colorful version due to Gale~\cite{gale1984}. It states that given $k+1$ KKM covers
$A_1^{(i)}, \dots, A_{k+1}^{(i)}$, $i \in [k+1]$, of the $k$-simplex~$\Delta_k$, there is a permutation $\pi$ of $[k+1]$
such that $\bigcap_{i \in [k+1]} A_{\pi(i)}^{(i)}$ is nonempty. This theorem is colorful in the sense that we think of each
KKM cover as having a different color; the theorem then asserts that there is an intersection of $k+1$ sets of pairwise
distinct colors associated to pairwise distinct vertices. Asada et al.~\cite{asada2017} showed that one can additionally prescribe~$\pi(1)$.
In 1993 Shih and Lee~\cite{ShihLee} proved a common generalization of the KKMS theorem and Gale's colorful KKM theorem: given $k+1$ KKMS covers $A_T^{i},~T\subset [k+1],~i\in [k+1]$, of $\Delta_k$, there exists a balanced collection $T_1,\dots,T_{k+1}$ of subsets of $[k+1]$ for which $\bigcap_{i=1}^{k+1} A_{T_i}^i \ne \emptyset$.
Another far reaching extension of the KKMS theorem to general polytopes is due to Komiya~\cite{komiya} from 1994. Komiya proved that the simplex $\Delta_k$ in the KKMS theorem can be replaced by any $k$-dimensional polytope $P$, and that the barycenters of the faces can be replaced by any points $y_{\sigma}$ in the face~$\sigma$:
\begin{theorem}[Komiya's theorem~\cite{komiya}]
\label{thm:komiya}
Let $P$ be a polytope, and for every nonempty face $\sigma$ of $P$ choose a point $y_\sigma \in \sigma$
and a closed set $A_\sigma \subset P$. If $\sigma \subset \bigcup_{\tau \subset \sigma} A_\tau$
for every face~$\sigma$ of~$P$, then there are faces $\sigma_1, \dots, \sigma_m$ of $P$ such that
$y_P \in \conv\{y_{\sigma_1}, \dots, y_{\sigma_m}\}$ and $\bigcap_{i=1}^m A_{\sigma_i} \neq \emptyset$.
\end{theorem}
This specializes to the KKMS theorem if $P$ is the simplex and each point $y_\sigma$ is the barycenter
of the face~$\sigma$. Moreover, there are quantitative versions of the KKM theorem due to De Loera, Peterson, and Su~\cite{deLoera2002}
as well as Asada et al.~\cite{asada2017} and KKM theorems for general pairs of spaces due to Musin~\cite{musin2017}.
To set the stage we will first present a simple proof of Komiya's theorem.
The KKM theorem can be easily deduced from Sperner's lemma on vertex labelings of triangulations of a simplex.
Our proof of Komiya's theorem -- just as Shapley's original proof of the KKMS theorem -- first establishes an equivalent Sperner-type
version. A \emph{Sperner--Shapley labeling} of a triangulation $T$ of a polytope $P$ is a map
$f\colon V(T) \longrightarrow \{\sigma \: : \: \sigma \ \text{a nonempty face of} \ P\}$ from the vertex set $V(T)$ of $T$ to the set of nonempty
faces of $P$ such that $f(v) \subset \supp(v)$, where $\supp(v)$ is the minimal face of $P$ containing $v$.
We prove the following polytopal Sperner--Shapley theorem that will imply Theorem~\ref{thm:komiya} by a limiting and
compactness argument:
\begin{theorem}
\label{thm:sperner-s}
Let $T$ be a triangulation of the polytope~$P \subset \mathbb{R}^k$, and let $f\colon V(T) \longrightarrow \{\sigma \: : \: \sigma \ \text{a nonempty face of} \ P\}$
be a Sperner--Shapley labeling of $T$. For every nonempty face $\sigma$ of $P$ choose a point $y_\sigma \in \sigma$.
Then there is a face $\tau$ of $T$ such that $y_P \in \conv\{y_{f(v)} \: : \: v \ \text{vertex of} \ \tau\}$.
\end{theorem}
\begin{proof}
The Sperner--Shapley labeling $f$ maps vertices of
the triangulation $T$ of $P$ to faces of~$P$; thus mapping vertex $v$ to the chosen point $y_{f(v)}$ in the face~$f(v)$
and extending linearly onto faces of~$T$ defines a continuous map $F\colon P \longrightarrow P$. By the Sperner--Shapley
condition for every face $\sigma$ of $P$ we have that $F(\sigma) \subset \sigma$. This implies that $F$ is
homotopic to the identity on~$\partial P$, and thus $F|_{\partial P}$ has degree one. Then $F$ is surjective
and we can find a point $x \in P$ such that $F(x) = y_P$. Let $\tau$ be the smallest face of $T$ containing~$x$.
By definition of $F$ the image $F(\tau)$ is equal to the convex hull $\conv\{y_{f(v)} \: : \: v \ \text{vertex of} \ \tau\}$, and since $x \in \tau$ we conclude $y_P = F(x) \in \conv\{y_{f(v)} \: : \: v \ \text{vertex of} \ \tau\}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:komiya}]
Let $\varepsilon > 0$, and let $T$ be a triangulation of~$P$ such that every face of $T$ has diameter at most~$\varepsilon$.
Given a cover $\{A_\sigma \: : \: \sigma \ \text{a nonempty face of} \ P\}$ that satisfies the covering condition of the theorem
we define a Sperner--Shapley labeling in the following way:
For a vertex $v$ of $T$, label $v$ by a face $\sigma \subset \supp(v)$ such that $v\in A_{\sigma}$.
Such a face $\sigma$ exists since $v\in\supp(v) \subset \bigcup_{\sigma \subset \supp(v)} A_{\sigma}$. Thus by
Theorem~\ref{thm:sperner-s} there is a face $\tau$ of $T$ whose vertices are labeled by faces
$\sigma_1, \dots, \sigma_m$ of $P$ such that $y_P \in \conv\{y_{\sigma_1}, \dots, y_{\sigma_m}\}$.
In particular, the $\varepsilon$-neighborhoods of the sets $A_{\sigma_i}$, $i \in [m]$, intersect.
Now let $\varepsilon$ tend to zero. As there
are only finitely many collections of faces of~$P$, one collection $\sigma_1, \dots, \sigma_m$ must appear infinitely
many times. By compactness of $P$ the sets $A_{\sigma_i}$, $i \in [m]$, then all intersect since they are closed.
\end{proof}
Note that Theorem \ref{thm:komiya} is true also if all the sets $A_{\sigma}$ are open in~$P$.
Indeed, given an open cover $\{A_\sigma \: : \: \sigma \ \text{a nonempty face of} \ P\}$ of $P$ as in Theorem~\ref{thm:komiya},
we can find closed sets $B_\sigma \subset A_\sigma$ that have the same nerve as $A_\sigma$ and
still satisfy $\sigma \subset \bigcup_{\tau \subset \sigma} B_\tau$ for every face~$\sigma$ of~$P$.
\section{A colorful Komiya theorem}\label{sec5}
Recall that the colorful KKMS theorem of Shih and Lee~\cite{ShihLee} states the following: If for every $i\in [k+1]$ the collection $\{A^{i}_{\sigma} \: : \: \sigma \text{ a nonempty face of }\Delta_k\}$ forms a KKMS cover of $\Delta_k$, then there exists a balanced collection of faces $\sigma_1,\dots,\sigma_{k+1}$ so that $\bigcap_{i=1}^{k+1}A_{\sigma_i}^i \neq \emptyset$. Theorem~\ref{thm:col-komiya}, proved in this section, is a colorful extension of Theorem~\ref{thm:komiya}, and thus generalizes the colorful KKMS theorem to arbitrary polytopes.
Let $P$ be a $k$-dimensional polytope. Suppose that for every nonempty face $\sigma$ of $P$ we choose $k+1$ points
$y^{(1)}_\sigma, \dots, y^{(k+1)}_\sigma \in \sigma$ and $k+1$ closed sets $A^{(1)}_\sigma, \dots, A^{(k+1)}_\sigma \subset P$, so that $\sigma \subset \bigcup_{\tau \subset \sigma} A^{(j)}_\tau$ for every face~$\sigma$ of~$P$ and every~${j\in [k+1]}$.
Theorem~\ref{thm:komiya} now guarantees that for every fixed $j \in [k+1]$ there are faces $\sigma_1^{(j)},\dots, \sigma_{m_j}^{(j)}$ of $P$ such that
$y_P^{(j)} \in \conv\{y_{\sigma_1^{(j)}}^{(j)}, \dots, y_{\sigma_{m_j}^{(j)}}^{(j)}\}$ and $\bigcap_{i=1}^{m_j} A_{\sigma_i^{(j)}}^{(j)}$ is nonempty.
Now let us choose $y_P^{(1)} = y_P^{(2)} = \dots = y_P^{(k+1)}$ and denote this point by~$y_P$.
The colorful Carath\'eodory theorem implies the existence of points $z_j \in \{y_{\sigma_1^{(j)}}^{(j)}, \dots, y_{\sigma_{m_j}^{(j)}}^{(j)}\}$,
$j \in [k+1]$, such that $y_P \in \conv\{z_1, \dots, z_{k+1}\}$. Theorem \ref{thm:col-komiya} shows that this can be achieved simultaneously with the existence of sets
$B_j \in \{A_{\sigma_1^{(j)}}^{(j)}, \dots, A_{\sigma_{m_j}^{(j)}}^{(j)}\}$, $j \in [k+1]$, such that $\bigcap_{j=1}^{k+1} B_j$ is nonempty.
We prove Theorem~\ref{thm:col-komiya} by applying the Sperner--Shapley version of Komiya's theorem -- Theorem~\ref{thm:sperner-s} -- to
a labeling of the barycentric subdivision of a triangulation of~$P$. The same idea was used by Su~\cite{su1999} to prove a colorful Sperner's lemma.
For related Sperner-type results for multiple Sperner labelings see Babson~\cite{babson2012}, Bapat~\cite{bapat1989},
and Frick, Houston-Edwards, and Meunier~\cite{frick2017}.
\begin{proof}[Proof of Theorem~\ref{thm:col-komiya}]
Let $\varepsilon > 0$, and let $T$ be a triangulation of~$P$ such that every face of $T$ has diameter at most~$\varepsilon$.
We will also assume that $y_P = 0$, and that the chosen points $y^{(1)}_\sigma, \dots, y^{(k+1)}_\sigma$ are contained in~$\sigma$. This assumption does not restrict the generality of our proof
since $0 \in \conv\{x_1, \dots, x_{k+1}\}$ for vectors $x_1, \dots, x_{k+1} \in \mathbb{R}^k$ if and only if
$0 \in \conv\{\alpha_1x_1, \dots, \alpha_{k+1}x_{k+1}\}$ with arbitrary coefficients $\alpha_i > 0$.
Denote by $T'$ the barycentric subdivision of~$T$. We now define a Sperner--Shapley labeling of the vertices of $T'$:
For $v\in V(T')$ let $\sigma_v$ be the face of $T$ so that $v$ lies at the barycenter of $\sigma_v$, let $\ell=\dim \sigma_v$, and let $\sigma$ be the minimal supporting face of $P$ containing $\sigma_v$. By the conditions of the theorem $v$ is contained in a set $A^{(\ell+1)}_\tau$ where $\tau \subset \sigma$.
We label $v$ by~$\tau$. Thus by Theorem~\ref{thm:sperner-s} there exists a face $\tau$ of $T'$ (without loss of generality $\tau$ is a facet)
whose vertices are labeled by faces $\sigma_1, \dots, \sigma_{k+1}$ of $P$ such that
$0 \in \conv\{y^{(1)}_{\sigma_1}, \dots, y^{(k+1)}_{\sigma_{k+1}}\}$. In particular, the $\varepsilon$-neighborhoods of the
sets $A^{(i)}_{\sigma_i}$, $i \in [k+1]$, intersect. Now use a limiting argument as before.
\end{proof}
Note that by the same argument as before, Theorem \ref{thm:col-komiya} is true also if all the sets $A^{(i)}_\sigma$ are open.
For a point $x\neq 0$ in $\mathbb{R}^k$ let $H(x) = \{y \in \mathbb{R}^k \: : \: \langle x,y \rangle = 0\}$ be the hyperplane perpendicular to $x$ and let $H^+(x) = \{y \in \mathbb{R}^k \: : \: \langle x,y \rangle \ge 0\}$ be the closed halfspace with boundary $H(x)$ containing~$x$.
Let us now show that B\'ar\'any's colorful Carath\'eodory theorem is a special case of Theorem~\ref{thm:col-komiya}.
\begin{theorem}[Colorful Carath\'eodory theorem, B\'ar\'any~\cite{barany}]
\label{thm:col-car}
Let $X_1, \dots, X_{k+1}$ be finite subsets of $\mathbb{R}^k$ with $0 \in \conv X_i$ for every $i \in [k+1]$. Then there are
$x_1 \in X_1, \dots, x_{k+1} \in X_{k+1}$ such that $0 \in \conv\{x_1, \dots, x_{k+1}\}$.
\end{theorem}
\begin{proof}
We will assume $0 \notin X_i$ for each $i \in [k+1]$, for otherwise we are done immediately.
Let $P \subset \mathbb{R}^k$ be a polytope containing $0$ in its interior, such that if points $x$ and $y$ belong to the same face of $P$ then ${\langle x,y \rangle \ge 0}$.
For example, a sufficiently fine subdivision of any polytope that contains $0$ in its interior satisfies this condition.
We can assume that any ray emanating from the origin intersects each $X_i$ in at most one point by arbitrarily deleting
any additional points from~$X_i$. This will not affect the property that~${0 \in \conv X_i}$. Furthermore, we can choose $P$ in such a way that
for each face $\sigma$ and $i \in [k+1]$ the intersection $C_\sigma \cap X_i$ contains at most one point, where $C_\sigma = \{\lambda x \: : \: x \in \sigma,\ \lambda > 0\}$ denotes the open cone spanned by~$\sigma$.
For every $i\in [k+1]$ let $y_P^{(i)} = 0$ and $A_P^{(i)} = \emptyset$. Now for each nonempty, proper
face $\sigma$ of $P$ choose points $y_{\sigma}^{(i)}$ and sets $A_{\sigma}^{(i)}$ in the following way:
If there exists $x \in C_\sigma \cap X_i$ then let $y_\sigma^{(i)} = x$ and
$A_\sigma^{(i)} = \{y \in P \: : \: \langle y, x \rangle \ge 0\} = P\cap H^+(x)$; otherwise let $y_\sigma^{(i)}$ be some point in $\sigma$
and let~${A_\sigma^{(i)} = \sigma}$.
Suppose the statement of the theorem were false; then, in particular, we can slightly perturb the vertices of $P$ and those points $y_{\sigma}^{(i)}$
that were chosen arbitrarily in~$\sigma$, to make sure that for any collection of points $y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}$
and any subset $S$ of this collection of size at most $k$, $0\notin \conv S$.
Let us now check that with these definitions the conditions of Theorem~\ref{thm:col-komiya} hold. Clearly, all the sets $A_\sigma^{(i)}$ are closed.
The fact that
$P$ is covered by the sets $A_\sigma^{(i)}$ for every fixed~$i$ follows from the condition $0 \in \conv X_i$. Indeed, this condition implies that for every $p \in P$ there exists a point $x \in X_i$ with $\langle p, x \rangle \ge 0$, and therefore, for the face $\sigma$ of $P$
for which~${x \in C_\sigma}$ we have ${p \in A_{\sigma}^{(i)}}$.
Now fix a proper face $\sigma$ of~$P$. We claim that $\sigma \subset A_\sigma^{(i)}$ for every~$i$. Indeed, either
$X_i \cap C_\sigma = \emptyset$ in which case $A_\sigma^{(i)} = \sigma$, or otherwise,
pick $x \in X_i \cap C_\sigma$ and let $\lambda > 0$ such that $\lambda x \in \sigma$; then for every $p \in \sigma$
we have $\langle p, \lambda x \rangle\ge 0$ by our assumption on~$P$, and thus $\langle p, x \rangle \ge 0$,
or equivalently~${p \in A_\sigma^{(i)}}$.
Thus by Theorem~\ref{thm:col-komiya} there exist faces $\sigma_1, \dots, \sigma_{k+1}$ of $P$
such that $\bigcap_{i=1}^{k+1} A_{\sigma_i}^{(i)} \ne \emptyset$ and $0 \in \conv\{y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}\}$.
We claim that $\bigcap_{i=1}^{k+1} A_{\sigma_i}^{(i)}$ can contain only the origin. Indeed, suppose that $0\neq x_0 \in \bigcap_{i=1}^{k+1} A_{\sigma_i}^{(i)}$. Fix $i\in [k+1]$. If $y_{\sigma_i}^{(i)}\in C_{\sigma_i} \cap X_i$, then since $x_0 \in A_{\sigma_i}^{(i)}$ we have $y_{\sigma_i}^{(i)} \in H^+(x_0)$ by definition.
Otherwise $x_0 \in A_{\sigma_i}^{(i)} = \sigma_i$ and $y_{\sigma_i}^{(i)}\in \sigma_i$, so by our choice of $P$ we obtain again that $y_{\sigma_i}^{(i)} \in H^+(x_0)$.
Thus all the points $y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}$ are in $H^+(x_0)$. But
since $0 \in \conv\{y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}\}$ this implies that the convex hull of the points in
$\{y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}\} \cap H(x_0)$ contains the origin. Now, the dimension of $H(x_0)$ is $k-1$, and thus by Carath\'eodory's theorem there exists a set $S$ of
at most $k$ of the points in $y_{\sigma_1}^{(1)}, \dots, y_{\sigma_{k+1}}^{(k+1)}$ with $0\in \conv S$, in contradiction to our general position assumption.
We have shown that $\bigcap_{i=1}^{k+1} A_{\sigma_i}^{(i)} = \{0\}$, and thus in particular, $A_{\sigma_i}^{(i)} \ne \sigma_i$ for all $i$. By our definitions, this implies $y_{\sigma_i}^{(i)} \in X_i$ for all $i$, concluding the proof of the theorem.
\end{proof}
\section{A colorful $d$-interval theorem}
\label{sec:interval}
Recall that
a {\em fractional matching} in a hypergraph $H=(V,E)$ is a function $f\colon E \longrightarrow \mathbb{R}_{\ge 0}$ satisfying $\sum_{e:~ e\ni v} f(e)\le 1$
for all~${v\in V}$. A {\em fractional cover} is a function $g\colon V \longrightarrow \mathbb{R}_{\ge 0}$ satisfying $\sum_{v:~ v \in e} g(v)\ge 1$
for all~${e\in E}$. The \emph{fractional matching number} $\nu^*(H)$ is the maximum of $\sum_{e\in E} f(e)$ over all fractional matchings $f$ of $H$, and
the \emph{fractional covering number} $\tau^*(H)$ is the minimum of $\sum_{v\in V} g(v)$ over all fractional covers $g$. By linear programming duality, $\nu\le \nu^*=\tau^*\le \tau$.
A {\em perfect fractional matching} in $H$ is a fractional matching $f$ in which $\sum_{e:v\in e} f(e) = 1$ for every $v\in V$.
It is a simple observation that a collection of sets $\mathcal{I} \subset 2^{[k+1]}$ is balanced if and only if the hypergraph $H=([k+1], \mathcal{I})$ has a perfect fractional
matching; see~\cite{AKZ}.
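To make the observation concrete, here is a quick numerical check (a toy example of ours, not taken from~\cite{AKZ}): for $k=2$ the three edges of a triangle form a balanced collection, and the constant weight $1/2$ is a perfect fractional matching of the corresponding hypergraph.

```python
# Toy check on [3] (k = 2): the triangle's edge set is balanced,
# and f = 1/2 on every edge is a perfect fractional matching.
edges = [{1, 2}, {2, 3}, {1, 3}]
f = {frozenset(e): 0.5 for e in edges}
vertices = {1, 2, 3}

# Perfect fractional matching: every vertex is covered with total weight 1.
for v in vertices:
    assert abs(sum(w for e, w in f.items() if v in e) - 1.0) < 1e-12

# Balancedness: weights proportional to f(e) * |e| express the barycenter
# (1/3, 1/3, 1/3) of the simplex as a convex combination of the
# barycenters of the faces spanned by the edges.
def barycenter(T):
    return [1.0 / len(T) if v in T else 0.0 for v in (1, 2, 3)]

C = sum(w * len(e) for e, w in f.items())
coords = [sum(w * len(e) / C * barycenter(e)[i] for e, w in f.items())
          for i in range(3)]
assert all(abs(c - 1.0 / 3.0) < 1e-12 for c in coords)
```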
The {\em rank} of $H$ is the maximal size of an edge in $H$. A hypergraph $H = (V,E)$ is $d$-partite if there exists a partition $V_1,\dots,V_d$ of $V$ such that $|e\cap V_i| =1$ for every $e\in E$ and $i\in [d]$.
For the proof of Theorem \ref{coloreddintervals} we will use the following theorem by F\"uredi.
\begin{theorem}[F\"uredi \cite{furedi}]\label{furedi}
If $H$ is a hypergraph of rank $d\ge 2$ then
$\nu(H) \ge \frac{\nu^*(H)}{d-1+\frac{1}{d}}.$ If, in addition, $H$ is $d$-partite then $\nu(H) \ge \frac{\nu^*(H)}{d-1}.$
\end{theorem}
We will also need the following simple counting argument.
\begin{lemma}\label{rank}
If a hypergraph $H=(V,E)$ of rank $d$ has a perfect fractional matching then $\nu^*(H)\ge \frac{|V|}{d}$.
\end{lemma}
\begin{proof}
Let $f\colon E\longrightarrow \mathbb{R}_{\ge 0}$ be a perfect fractional matching of $H$. Then
$\sum_{v\in V}\sum_{e: v\in e} f(e) = \sum_{v\in V} 1 = |V|.$
Since $f(e)$ was counted $|e|\le d$ times in this equation for every edge $e\in E$, we have that
$\nu^*(H)\ge \sum_{e\in E} f(e) \ge \frac{|V|}{d}.$
\end{proof}
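The double count in this proof can be checked numerically on a small example (ours, using the Fano plane, where the bound of the lemma is attained with equality):

```python
# Double-counting check of the lemma on a rank-3 example: the Fano
# plane has 7 points and 7 lines, and f = 1/3 on every line is a
# perfect fractional matching, so nu* >= |V|/d holds with equality.
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
V = set().union(*lines)
d = max(len(e) for e in lines)                  # rank: d = 3
f = {frozenset(e): 1.0 / 3.0 for e in lines}

# Every point is covered with total weight exactly 1 ...
for v in V:
    assert abs(sum(w for e, w in f.items() if v in e) - 1.0) < 1e-12

# ... so sum_v sum_{e: v in e} f(e) = |V|, and since each f(e) is
# counted |e| <= d times, sum_e f(e) >= |V|/d (here 7/3 exactly).
double_count = sum(w * len(e) for e, w in f.items())
assert abs(double_count - len(V)) < 1e-12
assert sum(f.values()) >= len(V) / d - 1e-12
```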
We are now ready to prove Theorem~\ref{coloreddintervals}. The proof is an adaptation of the methods in~\cite{AKZ}.
For the first part we need the simplex version of Theorem~\ref{thm:col-komiya}, which was already proven by Shih and Lee~\cite{ShihLee},
while the second part requires our more general polytopal extension.
\begin{proof}[Proof of Theorem~\ref{coloreddintervals}.]
For a point
$\vec{x}=(x_1,\dots,x_{k+1}) \in \Delta_k$ let $p_{\vec{x}}({j})=\sum_{t=1}^j x_t \in [0,1]$.
Since $\mathcal{F}$ is finite, by rescaling $\mathbb{R}$ we may assume that
every $d$-interval in $\mathcal{F}$ is contained in $(0,1)$.
For every $T\subset [k+1]$ let $B^i_T$ be the set consisting of all $\vec{x} \in
\Delta_k$ for which there exists a $d$-interval $f \in F_i$ satisfying:
(a) $f\subset \bigcup_{j\in T} (p_{\vec{x}}({j-1}),p_{\vec{x}}({j}))$, and
(b) $f \cap
(p_{\vec{x}}({j-1}),p_{\vec{x}}({j})) \neq \emptyset$ for each $j
\in T$.
Note that $B^i_T =
\emptyset$ whenever $|T| > d$.
Clearly, the sets $B^i_{T}$ are open. The assumption $\tau(F_i)>k$ implies that for every $\vec{x}=(x_1,\dots,x_{k+1}) \in \Delta_k$, the set $P(\vec{x})=\{p_{\vec{x}}({j}) \mid j\in [k]\}$ is not a cover of $F_i$,
meaning that there exists $f\in F_i$ not
containing any $p_{\vec{x}}(j)$. This, in turn, means that $\vec{x}
\in B^i_{T}$ for some $T \subseteq [k+1]$, and thus the sets $B^i_{T}~$ form a cover of $\Delta_k$ for every $i\in[k+1]$.
To show that this is a KKMS cover, let $\Delta^S$ be a face of $\Delta_k$ for some $S\subset [k+1]$. If $\vec{x}\in \Delta^S$ then
$(p_{\vec{x}}({j-1}),p_{\vec{x}}({j}))=\emptyset$ for $j \notin S$,
and hence condition (b), ${f \cap (p_{\vec{x}}({j-1}),p_{\vec{x}}({j}))\neq \emptyset}$, cannot hold for any $j\notin S$.
Thus $\vec{x} \in B^i_{T}$
for some $T \subseteq S$. This proves that $\Delta^S \subseteq \bigcup_{T
\subseteq S}B^i_{T}$ for all $i\in [k+1]$.
By Theorem~\ref{thm:col-komiya}
there exists
a balanced set $\mathcal{T}=\{T_1,\dots,T_{k+1}\}$ of subsets of $[k+1]$, satisfying
$\bigcap_{i=1}^{k+1} B^i_{T_i}\neq \emptyset$. In particular,
$|T_i|\le d$ for all $i$. Then the hypergraph $H=([k+1],\mathcal{T})$ of rank $d$ has a perfect fractional matching, and thus by Lemma \ref{rank} we have $\nu^*(H) \ge
\frac{k+1}{d}$. Therefore, by Theorem \ref{furedi}, $\nu(H) \ge
\frac{\nu^*(H)}{d-1+\frac{1}{d}}\ge \frac{k+1}{d^2-d+1}.$
Let $M$ be a matching in $H$ of size $m \ge \frac{k+1}{d^2-d+1}$.
Let $\vec{x} \in \bigcap_{i=1}^{k+1} B^i_{T_i}$. For every $i\in [k+1]$ let $f(T_i)$ be the $d$-interval of $F_i$ witnessing the fact that
$\vec{x} \in B^i_{T_i}$. Then the set $\mathcal{M}=\{f(T_i)\mid T_i \in M\}$ is a matching of
size $m$ in $\mathcal{F}$ with $|\mathcal{M}\cap F_i| \le 1$ for every $i\in[k+1]$. This proves the first assertion of the theorem.
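The combinatorial encoding used in this part of the proof can be sketched in code (an illustration of ours; the numbers are hypothetical): a point $\vec{x}\in\Delta_k$ induces partition points $p_{\vec{x}}(j)$ of $(0,1)$, and a $d$-interval avoiding all of them determines the set $T$ of parts it meets.

```python
# A point x in the k-simplex induces partition points p(j) of (0, 1);
# a d-interval f (a union of open intervals) lies in B_T for the set T
# of parts it meets, provided no component crosses a partition point.
def partition_points(x):
    """p(j) = x_1 + ... + x_j for j = 0, ..., k+1."""
    p = [0.0]
    for xi in x:
        p.append(p[-1] + xi)
    return p

def touched_parts(x, f):
    """Return T = {j : f meets (p(j-1), p(j))}, or None if some
    component of f crosses a partition point (condition (a) fails)."""
    p = partition_points(x)
    T = set()
    for a, b in f:                 # open interval components of f
        # part containing the left endpoint of the component
        j = next(j for j in range(1, len(p)) if p[j - 1] <= a < p[j])
        if b > p[j]:               # the component crosses p(j)
            return None
        T.add(j)
    return T

# k = 2, x = (0.2, 0.5, 0.3): partition points 0.2 and 0.7.
# The 2-interval with components (0.05, 0.15) and (0.75, 0.9)
# avoids both partition points and meets parts 1 and 3.
T = touched_parts((0.2, 0.5, 0.3), [(0.05, 0.15), (0.75, 0.9)])
assert T == {1, 3}
```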
Now suppose that $F_i$ is a hypergraph of separated $d$-intervals for all $i\in [k+1]$.
For $f \in \mathcal{F}$ let $f^t \subset (t-1,t)$ be the $t$-th interval component of~$f$.
We can assume w.l.o.g. that $f^t$ is nonempty.
Let $P=(\Delta_k)^d$. For a $d$-tuple $T=(j_1,\dots,j_d) \in [k+1]^d$ let $B^i_T$ consist of all $\vec{X}=\vec{x}^1\times\cdots\times\vec{x}^d \in
P$ for which there exists $f \in F_i$ satisfying
$f^t\subset (t-1+p_{\vec{x}^t}({j_t-1}),t-1+p_{\vec{x}^t}({j_t}))$ for all $t\in[d]$.
Since $\tau(F_i)>kd$ for every $i$, the points $t-1+p_{\vec{x}^t}(j),~ t\in[d],~ j\in [k],$ do not form a cover of any $F_i$. Therefore,
by the same argument as before, the sets $B^i_{T}$ are open and satisfy the covering condition of Theorem \ref{thm:col-komiya}. Therefore,
by Theorem \ref{thm:col-komiya},
there exists
a set $\mathcal{T}=\{T_1,\dots,T_{k+1}\}$ of $d$-tuples in $[k+1]^d$ whose corresponding points contain the point $(\frac{1}{k+1},\dots,\frac{1}{k+1})\times\dots \times(\frac{1}{k+1},\dots,\frac{1}{k+1})\in P$ in their convex hull, and satisfying $\bigcap_{i\in [k+1]} B^i_{T_i} \neq \emptyset$. Then the $d$-partite hypergraph $H=(\bigcup_{t=1}^d V_t,\mathcal{T})$, where $V_t=[k+1]$ for all $t$, has a perfect fractional matching, and thus by Lemma \ref{rank} we have $\nu^*(H) \ge k+1$. Therefore, by Theorem \ref{furedi}, $\nu(H) \ge$
\frac{\nu^*(H)}{d-1}\ge \frac{k+1}{d-1}
.$ Now, by the same argument as before, by taking $\vec{X} \in \bigcap_{i\in [k+1]} B^i_{T_i}$ we obtain a matching in $\mathcal{F}$ of the same size as a maximal matching in $H$, concluding the proof of the theorem.
\end{proof}
\section*{Acknowledgment}
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester.
\bibliographystyle{amsplain}
\section{Introduction}
The continued need for increasing the information storage content of high density magnetic recording devices requires the development of new nanostructured magnetic materials such as chains, one-dimensional (1D) periodic linear arrangements of atoms. Most of the experimental methods and potential industrial applications require a high packing density of these chains. 1D periodic linear chains have been investigated experimentally \cite{shen1,shen2,mo2,Cyrus,Gambardella} and theoretically.\cite{lds08,lsw03,tung,mbb07,mzk08,sh02,sh03,sh03a,sh04a,mo1}
Stepped surfaces are common templates for 1D nanostructures\cite{teg09} since they can take advantage of 1D symmetry provided by an array of parallel steps on a vicinal surface. Cu surfaces can be prepared with a large number of atom-high steps through a procedure known as step decoration. In this process, material is deposited on a stepped surface and subsequently nucleates along the edges of the steps with chains or nanostripes growing on the lower terraces along ascending step edges. However, Shen \emph{et al.}\cite{shen1,shen2} demonstrated that Fe nanostripes grow on the upper terraces of stepped Cu(111) surfaces.
In an important study of the growth of linear Fe nanostructures on a stepped Cu(111) surface, Mo \emph{et al.}\cite{mo1} examined elementary diffusion and exchange processes of Fe atoms on the surface by means of \emph{ab initio} calculations based on density functional theory (DFT). This study demonstrated the existence of a special two-stage kinetic pathway leading to the formation of Fe nanowires. In the first stage, Fe adatoms form a very stable 1D atom chain embedded in the Cu substrate behind a row of Cu atoms on the descending step. In the second stage, the embedded Fe chain acts as an attractor for subsequent Fe atoms deposited on the surface, since Fe-Fe bonds are stronger than Fe-Cu bonds. This attraction assists in the formation of a secondary chain of Fe atoms on top of the original embedded Fe chain (cf. Fig.~\ref{fig:1}), resulting in a very stable two-atom-wide iron nanowire on the Cu surface. Total-energy calculations revealed that the position of the Fe chain at the upper edge is energetically more favorable than a Fe chain located at the step edge only if another row of Fe atoms is incorporated underneath the exposed row.\cite{mo1}
In a scanning tunnelling microscopy (STM) investigation aided by DFT calculations, Guo \textit{et al.}\cite{mo2} confirmed this growth process. A careful study of all atomic processes along the lines of Ref.~[\onlinecite{mo1}] was used to perform kinetic Monte Carlo simulations.\cite{negulyaev2008} These simulations reproduced the growth process predicted by Mo \emph{et al.}, which has been proven experimentally.\cite{mo2}
The interplay between dimensionality, local environment and magnetic properties has attracted special interest in such systems. In the following, a single linear periodic arrangement of atoms is referred to as a chain, while two parallel chains, either isolated or embedded in the Cu(111) surface, are called a wire.
The present investigation provides a systematic discussion of the magnetic properties of 1D Fe nanostructures grown on a vicinal Cu(111) surface using the above-mentioned template (cf. Fig.~\ref{fig:1}). Detailed information on the real structure and magnetic states of such systems is given in Refs.~[\onlinecite{hashemiprb,hashemijmmm}]. Ferromagnetic ordering is achieved for Fe wires deposited on this template. We present a systematic investigation of the magnetic couplings for Fe embedded in the Cu surface with terraces ranging from three to eight lattice constants in width. The analysis of the exchange coupling and of the magnetocrystalline anisotropy allows us to set up a classical Heisenberg model to study finite-temperature effects.
The outline of this paper is as follows. Sec. \ref{sec:Theoreticalmethod} is devoted to a brief description of the theoretical framework and setup we have used. Exchange parameters extracted from the DFT calculations are discussed in Sec. \ref{sec:Magneticexchangeinteractions}. The magnetic phase transition to the paramagnetic state and an adequate estimation of the critical temperature on the basis of numerical simulations is discussed in Sec. \ref{sec:MagnetismatFiniteTemperatures}. Finally, in Sec. \ref{sec:Conclusions} we summarize our main results and conclude.
\begin{figure}[th!]
\centering
\includegraphics[trim = 0cm 0.3cm 0.1cm 1.7cm, clip, scale=1.1, angle=0]{Fe-Fe.png}
\caption{ \label{fig:1} A one-atom-wide Fe double chain (brown) is formed on a vicinal
Cu(111) surface (blue). Fe chains at the top (t) and embedded (e) positions are also shown.}
\end{figure}
\section{Theoretical method\label{sec:Theoreticalmethod}}
The calculations were performed within the framework of
spin-polarized density functional theory, using the Vienna \emph{ab initio}
simulation package (VASP) \cite{vasp1,vasp2}. The frozen-core
full-potential projector augmented-wave method (PAW) was used
\cite{paw1}, applying the generalized gradient approximation of
Perdew, Burke, and Ernzerhof (GGA-PBE) \cite{PhysRevLett.77.3865}.
Computational details and convergence checks are the same as those in our previous study,\cite{hashemiprb} with minor changes that are explained in Sec. \ref{sec:Resultsanddiscussion}.
A supercell containing twelve Cu layers, corresponding to between 72 and 192 Cu atoms for Cu($n$+2,$n$,$n$) with $n$=2--7, was constructed to model the stepped Cu(111) surface. Correspondingly, the terraces range from three to eight lattice constants in width. The distance from one slab to its nearest image was 13.5~\AA. The number of k points was chosen such that the product of the number of atoms and the number of k points in the irreducible Brillouin zone is kept approximately constant.
\section{Results and discussion\label{sec:Resultsanddiscussion}}
\subsection{Real structure of embedded Fe wires \label{sec:RealstructureofembeddedFewires}}
We compared the relaxation of a Cu(111) surface with an embedded Fe chain with that of a clean Cu(111) surface. The extent of relaxation in the second subsurface layer is generally small. The relaxation in the first subsurface layer is larger, but only significantly so at the step edge. The relaxation of the surface layer of Cu(111) is dominated by lateral and inward relaxations. Lateral relaxation is directed towards the center of the terrace, causing compression; in the middle of the terrace it is small. In general the surface layer shows an inward relaxation, which is large at the step edge. Together with the outward relaxation of the first Cu atom of the terrace, these relaxations reduce the interatomic distances at the step edge.

Significantly larger relaxation is observed when one row of Cu atoms is substituted by one row of Fe atoms behind the step edge. From a structural point of view the Fe chain acts as a ``center of attraction''. On the clean Cu(111) surface practically no lateral shift can be seen in the center of a terrace, whereas in the presence of the embedded Fe chain a Cu atom at the same site shifts towards the chain. The Cu atoms at the step edge are also strongly attracted to the Fe chain. The inward relaxation of the Fe chain is much larger than the corresponding relaxation of a Cu atom at this site. In summary, the Fe chain dramatically increases the tendency for compression near the step. The predominant structural reorganization is an inward relaxation of the Fe atoms relative to their ideal positions. For the Fe wire the inward relaxations of the Fe atoms at the top and embedded positions are 22.5\,\% and 5.9\,\%, relative to the Cu lattice plane distance, respectively.
\subsection{Magnetic exchange interactions \label{sec:Magneticexchangeinteractions}}
The analysis of the exchange couplings and of the magnetocrystalline anisotropy allows us to set up a classical Heisenberg model to study finite-temperature effects in Sec. \ref{sec:MagnetismatFiniteTemperatures}.
For the Fe wires grown on the vicinal Cu(111) surface, the absolute magnetic moments of Fe atoms at the top and embedded positions are 2.41 $\mu_B$ and 2.94 $\mu_B$, respectively.\cite{hashemiprb} There are two magnetic interactions between these moments: the intrawire ($J_{\parallel}$) and interwire ($J_{\perp}$) magnetic couplings (cf. Fig.~\ref{fig:2}), both of which will be explored in this study.
\begin{figure}[th!]
\centering
\includegraphics[trim = 0cm 2.8cm 0cm 8.4cm, clip, scale=0.17, angle=0]{Jpe_Jpa.png}
\caption{ \label{fig:2} Schematic picture showing magnetic interactions of Fe wires. $J_{\parallel}$: Intrawire exchange coupling, and
$J_{\perp}$: Interwire exchange coupling.}
\end{figure}
The Heisenberg theory of magnetism maps magnetic interactions in a material onto localized spin moments. The resulting classical Hamiltonian,
\begin{equation}
H=-\sum_{i \neq j} J_{ij} \mathbf{e}_i \cdot \mathbf{e}_j - \sum_{i} K_{i} (\mathbf{e}_i \cdot \mathbf{e}_K)^2
\label{equ:aHeisenbergHamiltonian}
\end{equation}
contains the unit vectors $\mathbf{e}_{i(j)}$ of the magnetic moments, the exchange parameters $J_{ij}$, the magnetic anisotropy energy (MAE) $K_{i}$ (at site $i$), and the unit vector along the magnetization easy axis $\mathbf{e}_K$. Here $i$ and $j$ index the sites.
The MAE is 4.76 meV per site.\cite{hashemianisotropy} Using a constant value for the MAE does not affect the calculation of the exchange couplings, since our previous study showed that the MAEs of the top and the embedded Fe sublattices are equal.\cite{hashemianisotropy}
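As a concrete illustration of the model above, the following minimal sketch (illustrative only, not the production code of this work) evaluates the classical Heisenberg energy for a short open chain of unit moments. Each nearest-neighbor bond is counted once, $J$ is set to the tabulated freestanding-wire value $J_t$, and $K$ to the MAE per site quoted above; the chain length and open boundary are simplifications.

```python
import numpy as np

# Illustrative evaluation of the classical Heisenberg energy above for a short
# open chain of unit moments; each nearest-neighbor bond is counted once, and
# the anisotropy axis e_K is taken along z.

def heisenberg_energy(spins, J, K, e_K=np.array([0.0, 0.0, 1.0])):
    exchange = -J * np.sum(np.einsum('ij,ij->i', spins[:-1], spins[1:]))
    anisotropy = -K * np.sum((spins @ e_K) ** 2)
    return exchange + anisotropy

J, K, N = 80.33, 4.76, 4                             # meV; J_t and MAE per site
fm = np.tile([0.0, 0.0, 1.0], (N, 1))                # all moments along +z
afm = fm * np.array([(-1.0) ** i for i in range(N)])[:, None]  # alternating

E_fm, E_afm = heisenberg_energy(fm, J, K), heisenberg_energy(afm, J, K)
```

For $J>0$ the parallel configuration is lower by $2J$ per bond, consistent with the ferromagnetic ground states found below.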
$J_{ij}$ can be calculated by making parallel and antiparallel alignment of the moments. Therefore,
\begin{equation}
J_{ij} = \frac{H_{AF} - H_{FM}}{2}
\label{equ:Jij}
\end{equation}
where $H_{AF}$ and $H_{FM}$ are the DFT total energies calculated for antiparallel and parallel alignment of the moments, respectively.
The interwire coupling constants are calculated in a similar manner by exploiting supercells doubled in the direction perpendicular to the wires, and for parallel and antiparallel alignment
of the moments on the two wires on each side of the supercell.
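The energy-difference extraction can be sketched in a few lines. The two total energies below are made-up stand-ins for DFT supercell energies (chosen only so that the extracted coupling matches the order of magnitude of the values reported here), not actual computed numbers.

```python
# Hypothetical illustration of the energy-difference formula for J_ij.

def exchange_constant(E_afm, E_fm):
    # J = (H_AF - H_FM) / 2; J > 0 means parallel alignment is favored
    return 0.5 * (E_afm - E_fm)

# Made-up supercell total energies in meV, differing only in the relative
# orientation of the two moments whose coupling is being probed:
E_fm_supercell, E_afm_supercell = -80.33, 80.33
J_est = exchange_constant(E_afm_supercell, E_fm_supercell)   # ferromagnetic
```

A negative result of `exchange_constant` would instead signal that the antiparallel alignment is favored.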
\subsubsection{$J_{\parallel}$: Intrawire exchange coupling \label{sec:Intrawireexchangecoupling}}
In order to systematically study the intrawire exchange coupling of the Fe wires, three systems were investigated: freestanding Fe chains, freestanding Fe wires, and embedded Fe wires. In the freestanding Fe chains the atomic distances were constrained to the Cu bond length of the Cu(111) substrate in order to simulate a freestanding equivalent of a single Fe chain of the Fe wire on the substrate. A freestanding wire was studied as an equivalent of the wire embedded in the Cu(111) surface; all interatomic distances correspond to the Cu bond length of the substrate in this case as well.
A central task for mapping onto a classical Heisenberg model is the determination of exchange constants and magnetocrystalline
anisotropy. The exchange constants can be extracted from DFT calculations either by comparing the total energies of several
artificial collinear magnetic structures or by applying the magnetic force theorem in the framework of the Korringa-Kohn-Rostoker (KKR) Green's function
method.\cite{gun,lic} In addition to these two methods, artificial noncollinear structures can be used to study the exchange interactions of the Heisenberg model, by choosing noncollinear states in which individual interactions can be controllably switched on and off. The noncollinear configurations used to calculate the exchange parameters for the free-standing and embedded wires are given in Refs.~[\onlinecite{hashemijap,hashemiphysica}].
\emph{Ab initio} investigations of the freestanding Fe wire revealed that nearest neighbor exchange interactions dominate. \cite{hashemijap,hashemiphysica}
Next-nearest neighbor interactions are an order of magnitude smaller than nearest-neighbor interactions; therefore, we restrict the Heisenberg model to nearest-neighbor interactions only.
In this approach, the exchange constants $J_{ij}$ are extracted from DFT calculations by comparing the total energies of several artificial noncollinear magnetic structures, deliberately chosen such that the interaction between atoms $i$ and $j$ is selectively switched on or off.
$J_{\parallel}$ can be broken down into three main couplings (see Fig.~\ref{fig:j}): magnetic couplings between Fe atoms at the top position, $J_{t}$, crossing magnetic couplings between the Fe chain at the
top position and the embedded Fe chain, $J_{c}$, and magnetic coupling between embedded Fe atoms, $J_{e}$.
The noncollinear configurations used for the calculation of the exchange parameters of the freestanding Fe chain and the freestanding and embedded Fe wires can be found in Ref.~[\onlinecite{hashemijap}]. The exchange parameters obtained in Ref.~[\onlinecite{hashemijap}] for these Fe systems are summarized in Tab.~\ref{tab:1}. As depicted in Fig.~\ref{fig:j}, the top and the embedded Fe moments are ferromagnetically ordered.\cite{hashemiprb}
Our calculations show that the magnetic moments are constant across the different noncollinear configurations within a given system, and that symmetrically equivalent arrangements lead to the same exchange parameters. The exchange constant obtained for the freestanding Fe chain is consistent with the literature.\cite{tung, mokrousov,hashemiprb} The magnetic ground state of the freestanding Fe chain agrees with the results of Ref.~[\onlinecite{tung}] for the relaxed chain, since the relaxed bond length in the ferromagnetic state is close to the Cu bond length.
The calculated exchange constants show that the Fe wires have ferromagnetic ground states and that relaxation effects are visible in the exchange constants of the embedded systems. The exchange constants of the Fe wires are generally smaller than those of the corresponding linear Fe chains, owing to the increased coordination number in the planar equilateral-triangle ribbon geometry of the wires. The stronger hybridization caused by the inward relaxation of the Fe wires leads to smaller intrawire exchange constants.
\begin{figure}
\centering
\includegraphics[trim = 1.8cm 4cm 11cm 3.8cm, clip,scale=0.180, angle=0]{j.png}
\caption{ \label{fig:j} Schematic picture showing the Fe wires have the $\textrm{FF}$ ground state and also the magnetic interactions within the Fe wires; $J_{t}$, $J_{c}$, and $J_{e}$. }
\end{figure}
\begin{table}[h!]
\caption{\label{tab:1} Exchange constants for the freestanding Fe
chain, freestanding Fe wires, and the embedded Fe wires.
The definition of the constants in the Heisenberg model
incorporates the magnetic spin moments. All values are given in meV.}
\begin{ruledtabular}
\begin{tabular}{ccc}
\vspace*{-6pt}\\
\multicolumn{3}{c}{\textit{freestanding Fe chain}} \rule[0pt]{0pt}{5pt} \\
\hline
$J$ & 114.54 \\
\hline
\vspace*{-6pt}\\
\multicolumn{3}{c}{\textit{freestanding Fe wire}} \rule[0pt]{0pt}{5pt} \\
\hline
$J_{t}$ & 80.33 \\
$J_{c}$ &150.42 \\
$J_{e}$ & 80.33 \\
\hline
\vspace*{-6pt}\\
\multicolumn{3}{c}{\textit{embedded Fe wire}}\rule[0pt]{0pt}{5pt} \\
\hline
$J_{t}$ &79.76\\
$J_{c}$ & 80.00 \\
$J_{e}$ & 74.58
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{$J_{\perp}$: Interwire exchange coupling \label{sec:Interwireexchangecoupling}}
It is known that surface-state electrons on the (111) surfaces of noble metals form a two-dimensional (2D) nearly free electron gas confined to the top layers of the surface. Electrons in these states move along the surface and are scattered by nanostructures formed on it. This scattering leads to quantum interference patterns in the local density of states (LDOS) and to long-range oscillatory interactions between adsorbates.\cite{Blongrange} In previous studies, these long-range interactions have been attributed to the surface states of Cu(111).\cite{PhysRevLett.98.056601,Stepanyuk2006272, PhysRevB.76.033409} Based on the calculations presented in Ref.~[\onlinecite{hashemithesis}], we concluded that slabs of at least 18 Cu layers are required to reproduce a band structure comparable to the experimental energy dispersion. Here, however, the interwire coupling constants are determined from parallel and antiparallel alignments of the moments in the Fe wires, i.e., from energy differences rather than from absolute total energies. Our calculations showed that these energy differences converge faster than the absolute energies and are already converged for slabs as thin as 12 layers. As one can see in Fig.~\ref{fig:exchange}, using a 15 or a 12 layer slab of Cu gives a negligible difference for the interwire couplings obtained for two interwire separations. Therefore, a 12 layer slab of Cu is used to simulate the Cu(111) surface and its surface states. A direct relaxation calculation for such a large system is very expensive; therefore the four topmost relaxed layers of an eight layer slab of Cu(111) were taken and placed on the corresponding geometry of a twelve layer slab of the Cu(111) surface, mimicking the relaxed geometry, while the eight remaining bottom layers are fixed at their ideal bulk positions.
The strength of the interwire magnetic coupling can be deduced from the energy difference between ferromagnetically and antiferromagnetically oriented wires, using supercells doubled in the direction perpendicular to the wires.
To construct a Heisenberg Hamiltonian that takes all magnetic interactions into account, both the effective intrawire couplings $J_{\parallel}$ and the interwire couplings $J_{\perp}$ are required. The effective intrawire couplings were determined in Sec. \ref{sec:Intrawireexchangecoupling}; we now discuss how to estimate the interwire couplings. In principle, there are three interchain coupling constants in the cell: the coupling between the embedded Fe chain and the embedded Fe chain of the nearest neighboring wire, the coupling between the embedded Fe chain and the deposited Fe chain at the top position of the nearest neighboring wire, and the coupling between the deposited Fe chain on top and the deposited Fe chain at the top position of the nearest neighboring wire. Resolving these individually is computationally demanding, since magnetic configurations other than the ferromagnetic and antiferromagnetic ones would have to be considered; such configurations would be needed as corrections to the exchange couplings even though they may not be energetically favored. Moreover, the differences between the individual interchain couplings may be too small to have a significant effect on the estimated transition temperature. Therefore the current study is confined to effective interwire coupling constants, estimated from the energy-difference expression for $J_{ij}$ given above by making parallel and antiparallel alignments of the moments on the two wires on each side of the supercell.
The calculated exchange couplings as a function of the interwire separation are shown in Fig. \ref{fig:exchange} and summarized in Tab. \ref{tab:jperp}. The exchange constants show that the wires have a ferromagnetic ground state on Cu(422), whereas there is a strong antiferromagnetic coupling for the nanowires on Cu(533). On Cu(644) the coupling becomes weakly ferromagnetic again, while on the higher-index Cu(111) vicinal surfaces we observe weak antiferromagnetic ordering. Relaxation effects are insignificant for the interwire couplings.
As in the current study, RKKY interactions have been observed not only in metallic layered systems but also between magnetic nanostructures deposited on metal surfaces, where the magnetic interactions are often mediated by surface-state electrons.\cite{Khajetoorians1062,PhysRevB.85.045429,PhysRevB.83.224416, PhysRevB.78.165413}
A rough estimate of the envelope of the magnitude of $J_{\perp}$ suggests an asymptotic decay with the inverse square of the interwire separation. This is in agreement with \emph{ab initio} calculations and STM experiments that found similar surface-state-mediated interactions between 3$d$ magnetic nanostructures on a Cu(111) surface. \cite{PhysRevB.68.205410, PhysRevB.70.075414,PhysRevB.83.224416} It is worth noting that the Cu bulk states can affect the interaction energies at relatively short interwire separations, because the Fe wires couple to the Cu bulk bands as well; the magnetic interaction energy mediated by bulk states asymptotically decays as the inverse fifth power of the interatomic distance.\cite{LAU197869}
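The functional form behind this behavior can be written down explicitly. The sketch below encodes a surface-state-mediated, RKKY-like interaction with a $1/d^2$ envelope; the amplitude and phase are free parameters, the default $k_F = 0.21$\,\AA$^{-1}$ is a commonly quoted value for the Cu(111) surface state used here only for illustration, and nothing is fitted to the table above.

```python
import numpy as np

# Model form of a surface-state-mediated, RKKY-like interaction: oscillation
# at 2*k_F with an envelope decaying as 1/d^2 (d in Angstrom).

def surface_rkky(d, A=1.0, k_F=0.21, phase=0.0):
    return A * np.cos(2.0 * k_F * d + phase) / d ** 2

d = np.linspace(6.0, 18.0, 200)
J_model = surface_rkky(d)   # oscillates in sign, as J_perp does in Tab. II
```

Doubling the separation at an oscillation maximum reduces the envelope by a factor of four, which is the inverse-square behavior estimated above.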
Long-range interactions between nanostructures on vicinal Cu(111) surfaces differ from those on the flat Cu(111) surface for two reasons: first, the surface states are affected by the electronic potential at the steps, which lies close to the Fermi energy on vicinal Cu(111) surfaces; second, the Fe wires significantly affect the surface states on vicinal surfaces.\cite{PhysRevB.75.155428}
\begin{figure}
\centering
\includegraphics[trim = 0cm 0cm 0cm 0cm, clip, scale=0.3, angle=0]{Exchange.png}
\caption{The interwire coupling constants of the Fe wires as functions of the interwire separation. Blue square and red triangle symbols represent the couplings calculated using 12 and 15 layer slabs, respectively. The blue line connecting the symbols is a guide to the eye. \label{fig:exchange}}
\end{figure}
\begin{table}[h!]
\caption{Exchange coupling constants ($J_{\perp}$) of Fe wires on Cu(111) stepped surfaces. \label{tab:jperp}}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
Surface & (4,2,2) & (5,3,3) & (6,4,4) & (7,5,5) & (8,6,6) & (9,7,7) \\
\hline
Separation (\AA) & 6.31 & 8.45 & 10.62 & 12.81 & 15.02 & 17.23 \\
\hline
$J_{\perp}$ (meV) & +51.75 & -33.52 & +2.16 & -0.55 & -0.32 & -0.14 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Magnetism at Finite Temperatures\label{sec:MagnetismatFiniteTemperatures}}
As a check that the interactions are implemented correctly, and to characterize the magnetic ordering, we compute a well-defined macroscopic property: the critical temperature ($T_c$) of the investigated systems, determined using Monte Carlo (MC) simulations. The $T_c$ of a nanowire is primarily determined by the strength
of the exchange interaction between the spins and by the magnetic anisotropy energies. For the interwire interaction, $J_{\perp}$ is used as given in Tab. \ref{tab:jperp}.
In the Monte Carlo simulations, lattices with 40$\times$40, 42$\times$42, and 44$\times$44 unit cells with 4 atoms per unit cell are used. The MAE is taken to be equal for the top and the embedded Fe sublattices.\cite{hashemianisotropy} Periodic boundary conditions are applied both along and perpendicular to the wires. The system is first relaxed into thermodynamic equilibrium with 10,000 MC steps per temperature step. Correlation effects are accounted for by averaging over 15,000 measurements separated by two MC steps each. To improve the statistics, an average over 20 temperature loops is performed, and importance sampling is carried out using the Metropolis algorithm. The values of $T_c$ are determined from the specific heat and, in the case of a ferromagnetic system, from the susceptibility $\chi$ and the fourth-order cumulant $U_4$.
Figure~\ref{fig:Specificheat} shows the specific heat for the Fe wires on the vicinal Cu(111) surfaces. We observe a finite $T_c$ for all systems. In our previous study,\cite{hashemijap} we observed no magnetic ordering for a vanishing MAE, in agreement with the Mermin--Wagner theorem.\cite{PhysRevLett.17.1133} Our results also show that an increase in the interwire couplings stabilizes the moments against thermal fluctuations and thus leads to an increase of $T_c$. Furthermore, the phase transition broadens as the interwire couplings increase. The calculated values of $T_c$ are below room temperature and close to each other for the Fe wires on Cu(977), Cu(866), and Cu(755); for these systems the interwire couplings $J_{\perp}$ are very small, and $T_c$ is determined by the intrawire couplings, which are the interactions responsible for the ferromagnetic ordering within the Fe wires. The $T_c$ is close to room temperature for the Fe wire on Cu(644) and well above room temperature for Cu(533) and Cu(422), which can be traced back to the high values of $J_{\perp}$ for these systems. A summary of the $T_c$ values of all systems for MAE\,=\,4.76 meV can be found in Tab. \ref{tab:Tc}.
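The sampling loop itself can be sketched in a few lines. The following is a toy version (a single short open chain of classical unit spins in reduced units, with parameters chosen only for illustration), not the production code behind the tabulated results; locating $T_c$ additionally requires the lattice sizes, measurement counts, and observables described above.

```python
import numpy as np

# Toy Metropolis sampler for a short open chain of classical unit spins with
# nearest-neighbor exchange J and easy-axis anisotropy K (reduced units).

rng = np.random.default_rng(0)

def energy(s, J=1.0, K=0.05):
    exch = -J * np.sum(np.einsum('ij,ij->i', s[:-1], s[1:]))
    anis = -K * np.sum(s[:, 2] ** 2)
    return exch + anis

def random_unit(rng):
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def metropolis_sweep(s, T, rng):
    for i in range(len(s)):
        old, e_old = s[i].copy(), energy(s)
        s[i] = random_unit(rng)          # propose a fresh random direction
        dE = energy(s) - e_old
        if dE > 0.0 and rng.random() >= np.exp(-dE / T):
            s[i] = old                   # reject: restore the previous spin
    return s

N, T = 16, 0.1                           # temperature in units of J / k_B
spins = np.array([random_unit(rng) for _ in range(N)])
E0 = energy(spins)
for _ in range(400):                     # equilibration sweeps
    spins = metropolis_sweep(spins, T, rng)
E1 = energy(spins)
```

Measuring the energy fluctuations over many such sweeps gives the specific heat $C = (\langle E^2\rangle - \langle E\rangle^2)/T^2$, whose peak as a function of $T$ is one of the estimators of $T_c$ used above.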
\begin{figure}
\begin{center}
\centering
\includegraphics[scale=0.26]{Cv.png}
\caption{\label{fig:Specificheat} Heat capacity of the Fe nanowires for the MAE of 4.76 meV.}
\end{center}
\end{figure}
\begin{table}[h!]
\caption{Critical temperatures ($T_c$) of Fe wires on the vicinal Cu(111) surfaces for the MAE of 4.76 meV. The temperature values are given in Kelvin.\label{tab:Tc}}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
Surface & (4,2,2) & (5,3,3) & (6,4,4) & (7,5,5) & (8,6,6) & (9,7,7) \\
\hline
$T_{c}$ & 445 & 431 & 312 & 271 & 266 & 262 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions\label{sec:Conclusions}}
\textit{Ab initio} DFT calculations have been used to set up a classical anisotropic Heisenberg model to study the finite-temperature properties of Fe wires embedded in Cu(111). To this end, exchange parameters comprising intrawire and interwire couplings were extracted from noncollinear and collinear DFT calculations, respectively. The intrawire couplings are one order of magnitude larger than the interwire couplings for relatively large interwire separations. A slab of Cu at least 12 layers thick is needed to simulate the Cu(111) surface states and to obtain converged values for the interwire couplings. The interwire exchange couplings of the Fe wires across the vicinal Cu(111) surface oscillate with the interwire separation, which provides a reliable means of stabilizing the magnetic ordering of the Fe nanowires in either a ferromagnetic or an antiferromagnetic configuration.
This study suggests a technologically feasible way of tailoring 1D magnetic nanostructures adsorbed on a vicinal Cu(111) surface. The critical temperatures of the systems with shorter interwire separations are well above room temperature, a strong indication that these nanowires have potential applications in high-density magnetic data storage.
\noindent \textbf{Acknowledgement}\\
The work was supported by the cluster of excellence
Nanostructured Materials of the state Saxony-Anhalt and the
International Max Planck Research School for Science and
Technology of Nanostructures. We acknowledge the resources
provided by the University of Michigan’s Advanced Research Computing.
Hidden sectors are frequently proposed as part of the physics
beyond the standard model. Since their interactions with the
visible sectors are very weak, so are the current experimental
bounds. In fact, motivated both from a bottom-up and a top-down
perspective, those sectors might even contain light particles
with masses in the sub-GeV range that have so far escaped
detection. Among those weakly interacting slim particles
(WISPs) are hidden U(1) gauge bosons, the CP-odd Higgs of the
NMSSM, and other axion-like particles (ALPs).
Such particles are of great interest in many models that seek
to interpret recent terrestrial and astrophysical anomalies in
terms of dark matter (DM). The rise in the positron-fraction
with energy as observed by PAMELA (cf.~\cite{talk-Sparvoli})
and the deviation from the power-law in the $e^+ + e^-$
spectrum measured by FERMI (cf.~\cite{talk-Strigari}) together
with the absence of an excess in anti-protons require the DM
candidate to annihilate dominantly into leptons
\textit{(leptophil)} with a cross section much larger than the
one giving the correct relic abundance. Different direct
detection measurements like the annual modulation observed by
DAMA/LIBRA~\cite{talk-Cerulli} and the null results of CDMS and
XENON~\cite{talk-Balakishiyeva,talk-Oberlack} seem somewhat
contradicting. Consistency might still be possible if either
the DM candidate is light ($m \sim 5-10$ GeV) with elastic
scattering or heavy with excited states (mass splitting $\Delta
m \sim 100$ keV) generating inelastic scattering. Those
properties are challenging for standard DM candidates and
alternative scenarios like hidden sectors with light messenger
particles have been considered because of the following
advantageous features. A long range attractive force mediated
by such a light messenger generates a so called Sommerfeld
enhancement of the annihilation cross section. Dark matter
annihilation proceeding through this messenger -- if light
enough -- is naturally leptophilic due to kinematics. Inelastic
scattering on nuclei can also be mediated by such a light
particle. Possible examples of messenger particles that have
already been studied are ALPs like the NMSSM CP-odd
Higgs~\cite{Hooper:2009gm} and hidden U(1) photons of a generic
hidden
sector~\cite{ArkaniHamed:2008qn,Cheung:2009qd,Morrissey:2009ur,Cohen:2010kn}
or of asymmetric mirror worlds~\cite{An:2009vq}.
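The Sommerfeld enhancement invoked here has a simple closed form in the limit of a very light mediator, where the potential is effectively Coulomb-like: $S = (\pi\alpha'/v)/(1 - e^{-\pi\alpha'/v})$. The sketch below implements that standard expression; the hidden coupling $\alpha'$ and the velocities are illustrative inputs, and a finite mediator mass would cut off the enhancement at very small $v$.

```python
import math

# Standard attractive-Coulomb Sommerfeld factor, valid in the light-mediator
# limit; alpha_hidden and the velocities below are illustrative choices.

def sommerfeld(v, alpha_hidden):
    x = math.pi * alpha_hidden / v
    return x / (1.0 - math.exp(-x))

alpha_h = 1e-2
S_halo = sommerfeld(1e-3, alpha_h)    # galactic-halo velocities, v ~ 1e-3 c
S_freeze = sommerfeld(0.3, alpha_h)   # velocities around thermal freeze-out
```

At halo velocities the annihilation rate is boosted by roughly $\pi\alpha'/v$, while near freeze-out $S \approx 1$; this velocity dependence is what allows a thermal relic to show the large present-day annihilation rates suggested by PAMELA and FERMI.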
From a top-down perspective, hidden sectors appear naturally in
various supersymmetric models descending from string theory.
Mediator particles are generally weakly coupled to the visible
sector and can also be light. Specifically
in~\cite{Lebedev:2009ag} it was found that the heterotic string
can reproduce the NMSSM in a Peccei-Quinn limit with a light
pseudo-Goldstone boson, an axion-like particle. The breaking of
larger groups down to the SM gauge group can in general yield
hidden U(1) symmetries which may remain unbroken down to small
energy scales. Their hidden photon may be light and couple
weakly to the visible sector through kinetic
mixing~\cite{Holdom:1985ag,Goodsell:2009xc}.
In the following we present various constraints on the NMSSM
CP-odd Higgs as representative of an axion-like particle and
the hidden photon for masses below the muon threshold.
\section{NMSSM CP-odd Higgs}
The extension of the MSSM with an additional scalar field $S$
to the NMSSM is motivated by the fact that it solves the $\mu$-problem
by replacing the $\mu$-parameter with a SM singlet
$S$~\cite{Ellis:1988er}. Additionally, the enlargement of the
particle content by an additional CP-odd Higgs $A^0$ alleviates
the little hierarchy problem if $A^0$ is light by opening an
additional Higgs decay channel $h \to 2 A^0$, thereby reducing
the LEP limit on the Higgs mass. We focus our analysis on the
$Z_3$-symmetric NMSSM, a special version without direct
$\mu$-term, with superpotential
\begin{equation}
W = \lambda S H_u H_d + \frac{1}{3} \kappa S^3. \nonumber
\end{equation}
In the limit $\kappa \rightarrow 0$, the Higgs potential
possesses an approximate Peccei-Quinn symmetry and a naturally
light pseudoscalar $A^0$ arises with $m_{A^0}^2 \simeq \kappa
\cdot \mathcal{O}(\mathrm{EW \, scale})^2$ where $\kappa \ll
1$. In the heterotic string example of~\cite{Lebedev:2009ag},
$\kappa$ can be as small as $10^{-6}$ resulting in a $100$~MeV
pseudoscalar. Its couplings to fermions are according
to~\cite{Dermisek:2010mg} given by
\begin{equation}
\Delta\mathcal{L} = - i \frac{g}{2 m_W} \ C_{Aff} \ \biggl(m_d~ \bar{d}
\gamma_5 d + \frac{1}{\tan^2 \beta} m_u~ \bar{u} \gamma_5 u + m_l~ \bar{l}
\gamma_5 l \biggr) \ A^0. \nonumber
\end{equation}
We treat $C_{Aff}$ as a free parameter, focusing on the range
$10^{-2} \lesssim C_{Aff} \lesssim 10^2$ to avoid violation of
perturbativity and/or finetuning and summarize the constraints
derived in~\cite{Andreas:2010ms} in the following.
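Before turning to the constraints, a quick numeric check of the Peccei-Quinn-limit mass estimate quoted above, $m_{A^0}^2 \simeq \kappa \cdot \mathcal{O}(\mathrm{EW\,scale})^2$; taking the electroweak scale to be 100 GeV here is purely an illustrative assumption.

```python
import math

# Numerical check of the PQ-limit estimate m_A0^2 ~ kappa * (EW scale)^2,
# with the electroweak scale set to 100 GeV purely for illustration.

def m_A0_GeV(kappa, ew_scale_GeV=100.0):
    return math.sqrt(kappa) * ew_scale_GeV

m = m_A0_GeV(1e-6)   # kappa ~ 1e-6, as in the heterotic example
```

For $\kappa = 10^{-6}$ this gives $m_{A^0} \approx 0.1$ GeV, reproducing the 100 MeV pseudoscalar mentioned above.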
Different meson-decays set bounds for two distinct cases
depending on the lifetime of $A^0$. If it is sufficiently long
lived to escape the detector, invisible decays $X \! \to Y +
A^0 \to Y + $ inv. place limits requiring $\Gamma^{X \to Y A^0}
/ \Gamma^{\rm tot} < \mathcal{B}^{\rm exp}_{\rm inv}$. Larger
values of $C_{Aff}$ for which $A^0$ decays within the detector
are constrained by visible decays $X \! \to Y + A^0 \to Y + e^+
e^-$ demanding \mbox{$\rm{BR}^{X \to Y A^0} \rm{BR}^{A^0 \to
e^+ e^-} \!\!\!\! < \mathcal{B}^{\rm exp}_{e^+ e^-}$}. Together
with the limit from a search for a peak in the $\pi^+$~momentum
spectrum in $K^+ \! \to \pi^+ + X$, meson decays cover most of
the parameter space in Fig.~\ref{Fig:NMSSM}.
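The split between the long-lived (invisible) and short-lived (visible) cases is set by the lab-frame decay length of $A^0$. A rough, self-contained estimate (our own back-of-the-envelope sketch, not code from the analysis of Ref.~\cite{Andreas:2010ms}) uses the standard tree-level width for a pseudoscalar with the Yukawa-like coupling of the Lagrangian above, $\Gamma(A^0 \to e^+e^-) = C_{Aff}^2\, G_F\, m_e^2\, m_{A^0}\, \beta / (4\sqrt{2}\,\pi)$ with $\beta = \sqrt{1 - 4m_e^2/m_{A^0}^2}$; all numerical inputs ($C_{Aff}=1$, $m_{A^0}=100$ MeV, $E_{A^0}=1$ GeV) are illustrative, and the numbers should be read as order-of-magnitude only.

```python
import math

G_F = 1.1664e-5          # Fermi constant, GeV^-2
M_E = 0.000511           # electron mass, GeV
HBARC = 1.9733e-16       # hbar*c in GeV * m

def width_A_ee(m_A, C_Aff=1.0):
    """Tree-level Gamma(A0 -> e+ e-) in GeV (standard pseudoscalar formula)."""
    if m_A <= 2 * M_E:
        return 0.0
    beta = math.sqrt(1.0 - (2 * M_E / m_A) ** 2)
    return C_Aff ** 2 * G_F * M_E ** 2 * m_A * beta / (4 * math.sqrt(2) * math.pi)

def lab_decay_length(m_A, E_A, C_Aff=1.0):
    """gamma * beta * c * tau in meters for an A0 of lab energy E_A (GeV)."""
    gamma_beta = math.sqrt(max(E_A ** 2 - m_A ** 2, 0.0)) / m_A
    return gamma_beta * HBARC / width_A_ee(m_A, C_Aff)

L = lab_decay_length(0.1, 1.0)   # 100 MeV A0 at 1 GeV energy, C_Aff = 1
```

For $C_{Aff}\sim 1$ the decay length comes out at the 10 cm scale, i.e. the decay typically happens inside a detector, which is why both the invisible- and visible-decay searches above are relevant in different parts of the $(m_{A^0}, C_{Aff})$ plane.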
Complementary constraints arising from the pion-decay $\pi^0
\to e^+ e^-$ and the muon anomalous magnetic moment $a_\mu$
completely close the available parameter space. The former
process which proceeds in the SM through loop diagrams receives
a tree level contribution from $A^0$ and sets a limit requiring
$\Gamma^{\pi^0 \overset{A^0}{\rightarrow} e^+ e^-} /
\Gamma^{\rm tot} \! < \mathcal{B}^{\rm exp}_{\pi^0 \to e^+
e^-}$. Since the NMSSM contributes to $a_\mu$ with terms of
both signs, and the negative loop contribution from
$A^0$ worsens the current discrepancy $a_\mu^{\rm exp} >
a_\mu^{\rm SM}$, we derive a constraint by demanding that $A^0$ not
worsen this discrepancy beyond $5\sigma$.
Additional constraints can be derived from beam-dump and
reactor experiments (lines and shaded regions, respectively, in
Fig.\ref{Fig:NMSSM}, right) searching for the decay $A^0 \to
e^+ e^-$. Like any ALP, $A^0$ can be emitted in the former via
bremsstrahlung from an $e$- or $p$-beam and in the latter in
place of photons in transitions between nuclear levels.
\begin{wrapfigure}{r}{0.7\textwidth}
\centerline{
\includegraphics[width=0.342\textwidth]{andreas_sarah_fig1.pdf}
\includegraphics[width=0.342\textwidth]{andreas_sarah_fig2.pdf}}
\caption{Excluded regions for the NMSSM CP-odd
Higgs~\cite{Andreas:2010ms}.} \label{Fig:NMSSM}
\vspace{-1cm}
\end{wrapfigure}
In summary, for masses below the muon threshold, the CP-odd
Higgs is excluded or required to couple to matter at least 4
orders of magnitude weaker than the SM Higgs which can hardly
be achieved in the NMSSM. Those constraints as they are plotted
in Fig.~\ref{Fig:NMSSM} apply in general to the coupling of a
light pseudoscalar to matter.
\vspace{0.3cm}
\section{Hidden U(1) gauge boson}
Many SM extensions contain additional U(1) symmetries in the
hidden sector under which the SM is neutral. The corresponding
gauge boson, the hidden photon $\gamma'$ and the ordinary
photon kinetically mix~\cite{Holdom:1985ag,talk-Redondo}
induced by loops of heavy particles charged under both U(1)
groups.
The most general Lagrangian is
\begin{equation}
{\cal L} = -\frac{1}{4}F_{\mu \nu}F^{\mu \nu} - \frac{1}{4}X_{\mu \nu}X^{\mu \nu} +
\frac{\chi}{2} X_{\mu \nu} F^{\mu \nu} + \frac{m_{\gamma^{\prime}}^2}{2} X_{\mu}X^\mu \nonumber
\end{equation}
where $F^{\mu \nu}$ is the usual electromagnetic field
strength and $X^{\mu \nu}$ the one corresponding to the hidden
gauge field $X^\mu$. The kinetic mixing $\chi$ is typically of
the size of a radiative correction $\sim \mathcal{O}(10^{-4} -
10^{-3})$. Kinetic mixing allows $\gamma'$ to couple and decay
to SM fermions thereby making it accessible for experimental
searches, the constraints of which are presented in the
following.
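The experimental reach discussed below follows from the kinetically induced coupling $\chi e$ of $\gamma'$ to electrons. Below the muon threshold the only SM decay channel is $\gamma' \to e^+e^-$, with the standard tree-level width $\Gamma = \tfrac{1}{3}\alpha\chi^2 m_{\gamma'}\sqrt{1-4m_e^2/m_{\gamma'}^2}\,(1+2m_e^2/m_{\gamma'}^2)$; the sketch below evaluates it for illustrative values of $(m_{\gamma'}, \chi)$.

```python
import math

ALPHA = 1.0 / 137.036    # fine-structure constant
M_E = 0.000511           # electron mass, GeV
HBARC = 1.9733e-16       # hbar*c in GeV * m

def width_gammaprime_ee(m, chi):
    """Gamma(gamma' -> e+ e-) in GeV for kinetic mixing chi."""
    if m <= 2 * M_E:
        return 0.0
    r = (M_E / m) ** 2
    return (ALPHA * chi ** 2 / 3.0) * m * math.sqrt(1 - 4 * r) * (1 + 2 * r)

# proper decay length for an illustrative 100 MeV gamma' with chi = 1e-3:
ctau = HBARC / width_gammaprime_ee(0.1, 1e-3)
```

The resulting proper decay lengths are microscopic for $\chi \sim 10^{-3}$ and grow as $1/\chi^2$; this interplay between production rate and decay length is what shapes the beam-dump exclusion regions and the gap targeted by the fixed-target proposals.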
\begin{wrapfigure}{r}{0.7\textwidth}
\vspace{-0.4cm} \centerline{
\includegraphics[width=0.342\textwidth]{andreas_sarah_fig3.pdf}
\includegraphics[width=0.342\textwidth]{andreas_sarah_fig4.pdf}}
\caption{Exclusion regions \textit{(left)} as well as projected
sensitivities and phenomenological motivations \textit{(right)}
for the hidden photon.}\label{Fig:HP} \vspace{-0.4cm}
\end{wrapfigure}
Similarly to the CP-odd Higgs, limits arise from one-loop
contributions of the hidden photon to the muon and electron
anomalous magnetic moment~\cite{Pospelov:2008zw}. Also
beam-dump experiments in which $\gamma'$ is emitted through
bremsstrahlung from an $e$-beam can set constraints by
searching for the decay $\gamma' \to e^+
e^-$~\cite{Bjorken:2009mm}. The resulting limits (shaded in
Fig.~\ref{Fig:HP}) leave an unexplored region in the parameter
space which is best explored by fixed-target
experiments~\cite{Bjorken:2009mm,Freytsis:2009bh}. Dedicated
proposals are being developed at DESY (HIPS, see
also~\cite{talk-Mnich}), JLab (APEX~\cite{talk-Afanasev},
HPS~\cite{talkSLAC-HPS},
DarkLight~\cite{Freytsis:2009bh,talkSLAC-DarkLight}), and MAMI,
with complementary sensitivities (cf. lines in
Fig.~\ref{Fig:HP}, right).
The whole allowed parameter range in Fig.~\ref{Fig:HP} is
phenomenologically interesting for DM with dark photons of a
generic hidden U(1)~\cite{ArkaniHamed:2008qn} or mirror photons
in asymmetric mirror DM models (AMDM)~\cite{An:2009vq}. The
former can reproduce DAMA for inelastic DM~\cite{Essig:2009nc}
(orange ``iDM'' band) and achieve naturally the leptophilic DM
annihilation required for PAMELA~\cite{Meade:2009iu}, while the
latter is able to explain the DAMA and CoGeNT measurements with
mirror neutrons as DM~\cite{An:2010kc} (colored ``AMDM''
bands).
\section{Conclusions}
Hidden sectors are well motivated by DM, SM extensions, and
string theory. They might contain light particles that despite
their very weak couplings to the SM can be constrained
experimentally. In particular, the NMSSM CP-odd Higgs has to be
heavier than 210 MeV or couple much weaker to fermions than the
SM Higgs. Hidden photons on the contrary are less constrained
and can be searched for in complementary experiments at DESY,
JLab and MAMI.
\begin{footnotesize}
Let $X$ be an irreducible normal projective variety, defined over an algebraically closed field.
Fix a very ample line bundle on $X$ in order to define (semi)stable sheaves. Here (semi)stability will always
refer to $\mu$-(semi)stability.
Let $E$ be a reflexive semistable sheaf on $X$. Then $E$ admits a unique maximal polystable subsheaf
$E_1\, \subset\, E$ such that $\mu(E)\,=\, \mu(E_1)$ and $E/E_1$ is torsionfree. It may be mentioned that
a similar result holds also for Gieseker semistable sheaves. However, if $E$ is just a torsionfree semistable
sheaf on $X$, then a similar subsheaf $E_1$ does not exist in general (see Section \ref{se3.1} for such
an example).
We prove the following (see Proposition \ref{prop2} and Proposition \ref{prop3}):
\begin{proposition}\label{prop-i}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$. Then
there is a unique reflexive subsheaf $V\, \subset\, F$ satisfying the following three conditions:
\begin{enumerate}
\item $V$ is polystable with $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all reflexive subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
Assume that $F$ contains a polystable locally free subsheaf $F''$ with $\mu(F'')\,=\, \mu(F)$. Then
there is a unique locally free subsheaf $W\, \subset\, F$
satisfying the following three conditions:
\begin{enumerate}
\item $W$ is polystable with $\mu(W)\,=\, \mu(F)$,
\item $F/W$ is torsionfree, and
\item $W$ is maximal among all locally free subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
\end{proposition}
In the first part of Proposition \ref{prop-i}, if $F$ does not contain any polystable reflexive subsheaf
$F'$ with $\mu(F')\,=\, \mu(F)$, then we set $V\,=\,0$.
In the second part of Proposition \ref{prop-i}, if $F$ does not contain any polystable locally free subsheaf
$F''$ with $\mu(F'')\,=\, \mu(F)$, then we set $W\,=\,0$.
A semistable sheaf $F$ is called a pseudo-stable sheaf if $F$ admits a filtration of
subsheaves
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is a stable reflexive sheaf with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for all
$1\, \leq\, i\, \leq \, n$. If $F$ and all the quotients $F_i/F_{i-1}$ are locally free, then the
pseudo-stable sheaf $F$ is called a pseudo-stable bundle.
An iterative application of Proposition \ref{prop-i} gives the following (see Theorem \ref{thm1}
and Theorem \ref{thm2}):
\begin{theorem}\label{thm-i}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$. Then
there is a unique pseudo-stable subsheaf $V\, \subset\, F$
satisfying the following three conditions:
\begin{enumerate}
\item $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all pseudo-stable subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
Assume that $F$ contains a polystable locally free subsheaf $F''$ with $\mu(F'')\,=\, \mu(F)$. Then
there is a unique subsheaf $W\, \subset\, F$
satisfying the following four conditions:
\begin{enumerate}
\item $\mu(W)\,=\, \mu(F)$,
\item $W$ is a pseudo-stable bundle,
\item $F/W$ is torsionfree, and
\item $W$ is maximal among all subsheaves of $F$ satisfying the above three conditions.
\end{enumerate}
\end{theorem}
As before, in the first part of Theorem \ref{thm-i}, if $F$ does not contain any polystable reflexive subsheaf
$F'$ with $\mu(F')\,=\, \mu(F)$, then we set $V\,=\,0$.
In the second part of Theorem \ref{thm-i}, if $F$ does not contain any polystable locally free subsheaf
$F''$ with $\mu(F'')\,=\, \mu(F)$, then we set $W\,=\,0$.
In Section \ref{se5.1} we show that the canonical subsheaves in Theorem \ref{thm-i} behave well
with respect to the pullback operation by \'etale Galois covering maps; see Proposition \ref{prop4}
and Proposition \ref{prop5}.
Let $X$ and $Y$ be irreducible normal projective varieties over $k$, and let
$$
\phi\, :\, Y \, \longrightarrow\, X
$$
be a separable finite surjective map. Then $\mu_{\rm max}(\phi_*{\mathcal O}_Y)\,=\, 0$. Let
$F\, \subset\, \phi_*{\mathcal O}_Y$ be the first nonzero term of the Harder--Narasimhan filtration
of $\phi_*{\mathcal O}_Y$. Let
\begin{equation}\label{ie1}
W\, \subset\, F
\end{equation}
be the unique locally free pseudo-stable bundle given by the second part of
Theorem \ref{thm-i}. Then we have ${\mathcal O}_X\, \subset\, W$.
We prove the following (see Proposition \ref{propn2}):
\begin{proposition}\label{propni}
The homomorphism of \'etale fundamental groups induced by $\phi$
$$
\phi_*\, :\, \pi^{\rm et}_{1}(Y) \, \longrightarrow\, \pi^{\rm et}_{1}(X)
$$
is surjective if and only if $W\,=\, {\mathcal O}_X$.
\end{proposition}
When $\dim X\,=\,1$ (equivalently, $\dim Y\,=\,1$), Proposition \ref{propni} was proved in \cite{BP}.
We note that $W\,=\, F$ in \eqref{ie1} when $\dim X\,=\,1$.
\section{Pseudo-stable sheaves}
Let $k$ be an algebraically closed field. Let $X$ be an irreducible normal
projective variety defined over $k$. Fix a very ample line bundle $L$ on $X$;
using it, define the degree
$$
{\rm degree}(F)\, \in\,\mathbb Z
$$
of any torsionfree coherent sheaf $F$ on $X$ \cite[p.~13--14, Definition 1.2.11]{HL}, and also
define the \textit{slope} of $F$
$$
\mu(F)\, :=\, \frac{{\rm degree}(F)}{{\rm rank}(F)}\, .
$$
We recall that $F$ is called
\textit{stable} (respectively, \textit{semistable}) if
$$
\mu(V)\, <\, \mu(F) \ \ \text{(respectively,}\ \mu(V)\, \leq\, \mu(F)\text{)}
$$
for all coherent subsheaves $V\, \subset\, F$ with $0\, <\, {\rm rank}(V)\, <\, {\rm rank}(F)$
\cite[p.~14, Definition 1.2.11]{HL}. Also, $F$ is called \textit{polystable} if
\begin{itemize}
\item $F$ is semistable, and
\item $F$ is a direct sum of stable sheaves.
\end{itemize}
Following \cite{BS} we define:
\begin{definition}\label{def1}\mbox{}
\begin{enumerate}
\item A semistable sheaf $F$ is called a \textit{pseudo-stable} sheaf if $F$ admits a filtration of
subsheaves
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is a stable reflexive sheaf with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for all
$1\, \leq\, i\, \leq \, n$.
\item A semistable vector bundle $F$ is called a \textit{pseudo-stable} bundle if $F$ admits a filtration of
subbundles
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is stable with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for all
$1\, \leq\, i\, \leq \, n$.
\end{enumerate}
\end{definition}
Note that a polystable reflexive sheaf is a pseudo-stable sheaf, and a polystable
vector bundle is a pseudo-stable bundle.
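For an explicit example of a pseudo-stable bundle that is not polystable, let $X$ be an elliptic
curve (with any very ample line bundle $L$), and let $F$ be the unique nontrivial extension
$$
0\, \longrightarrow\, {\mathcal O}_X \, \longrightarrow\, F \, \longrightarrow\, {\mathcal O}_X
\, \longrightarrow\, 0\, .
$$
Then $F$ is semistable of slope zero, and the filtration $0\, \subsetneq\, {\mathcal O}_X\, \subsetneq\, F$
has successive quotients ${\mathcal O}_X$, which are stable of slope $\mu(F)$; hence $F$ is a
pseudo-stable bundle. However, $F$ is not polystable, because the above extension does not split.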
\begin{remark}\label{rem1}
Since any polystable sheaf is a direct sum of stable sheaves of the same slope, we conclude
that a semistable sheaf $F$ is pseudo-stable if it admits a filtration of
subsheaves
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is a polystable reflexive sheaf with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for all
$1\, \leq\, i\, \leq \, n$. Similarly, a semistable vector bundle $F$ is a pseudo-stable bundle if $F$ admits
a filtration of subbundles
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is polystable with $\mu(F_i/F_{i-1})\,=\, \mu(F)$ for all
$1\, \leq\, i\, \leq \, n$.
\end{remark}
\begin{lemma}\label{lem1}
Let $F$ be a pseudo-stable sheaf on $X$. Then $F$ is reflexive.
\end{lemma}
\begin{proof}
Let
$$
0\, \longrightarrow\, A \, \longrightarrow\, E \, \longrightarrow\, B \, \longrightarrow\, 0
$$
be a short exact sequence of coherent sheaves on $X$ such that both $A$ and $B$ are reflexive. Then it can
be shown that $E$ is reflexive. To prove this, consider the commutative diagram
$$
\begin{matrix}
0 & \longrightarrow & A & \stackrel{a}{\longrightarrow} & E & \stackrel{b}{\longrightarrow} & B & \longrightarrow & 0\\
&& \,\,\,\Big\downarrow\widehat{\iota} && \,\,\, \Big\downarrow\iota && \,\, \,\Big\downarrow\iota'\\
0 & \longrightarrow & A^{**} & \stackrel{a^{**}}{\longrightarrow} & E^{**} & \stackrel{b^{**}}{\longrightarrow} & B^{**}
\end{matrix}
$$
Since $A$ and $B$ are reflexive it follows that $\widehat{\iota}$ and $\iota'$ are isomorphisms. The homomorphism
$b^{**}$ is surjective because both $b$ and $\iota'$ are surjective. This and the fact that both
$\widehat{\iota}$ and $\iota'$ are isomorphisms together imply that $\iota$ is an isomorphism. Hence $E$
is reflexive.
Since $F$ admits a filtration of subsheaves
$$
0\,=\, F_0\, \subsetneq\, F_1\, \subsetneq\, F_2\, \subsetneq\, \cdots \, \subsetneq\, F_{n-1}
\, \subsetneq\, F_n\,=\, F
$$
such that $F_i/F_{i-1}$ is reflexive for all $1\, \leq\, i\, \leq \, n$, using the above observation and
arguing inductively we conclude that $F$ is reflexive.
\end{proof}
\section{Maximal polystable subsheaves of semistable sheaves}\label{se3}
We recall a proposition from \cite{BDL}.
\begin{proposition}[{\cite[p.~1034, Proposition 3.1]{BDL}}]\label{prop1}
Let $F$ be a reflexive semistable sheaf on $X$. Then there is a unique coherent subsheaf $V\, \subset\, F$
satisfying the following three conditions:
\begin{enumerate}
\item $V$ is polystable with $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
\end{proposition}
Although the base field in Proposition 3.1 of \cite{BDL} is $\mathbb C$, it is straightforward to check
that the proof of Proposition 3.1 in \cite{BDL} gives Proposition \ref{prop1}.
A similar result for Gieseker semistable sheaves is known (see \cite[p.~23, Lemma 1.5.5]{HL}).
In Proposition \ref{prop1} the assumption that $F$ is reflexive is essential, as shown by the
following example.
\subsection{An example}\label{se3.1}
Take $(X,\, L)$ with $\dim X\, \geq\, 2$ such that there are two line bundles $A$ and $B$ on $X$ satisfying
the following two conditions:
\begin{itemize}
\item $A\, \not=\, B$, and
\item $\text{degree}(A)\,=\, \text{degree}(B)$.
\end{itemize}
Fix a point $x\, \in\, X$, and take a line $S\, \subset\, (A\oplus B)_x\,=\, A_x\oplus B_x$ such that
$$
S\, \not=\, A_x \ \ \text{ and }\ \ S\, \not=\, B_x\, .
$$
Consider the composition of homomorphisms
$$
A\oplus B \, \longrightarrow\, (A\oplus B)_x \, \longrightarrow\, (A\oplus B)_x/S\, ;
$$
both $(A\oplus B)_x$ and $S$ are torsion sheaves supported at $x$.
Let $F$ denote the kernel of this composition of homomorphisms. We list some properties of $F$:
\begin{itemize}
\item $F$ is torsionfree as it is a subsheaf of $A\oplus B$.
\item $F$ is not reflexive, because $F^{**}\,=\, A\oplus B$; here the condition that
$\dim X\, \geq\, 2$ is used.
\item $\mu(F)\,=\, \mu(A\oplus B)\,=\, \mu(A)\,=\, \mu(B)$; indeed, the quotient $(A\oplus B)/F$ is
supported at the point $x$, which has codimension at least two in $X$, so
${\rm degree}(F)\,=\, {\rm degree}(A\oplus B)$.
\item $F/(F\cap A)\,=\, B$ and $F/(F\cap B)\,=\, A$.
\item $F$ is semistable because $A\oplus B$ is so.
\item $F$ is not polystable. This is because the short exact sequence
$$
0\, \longrightarrow\, F\cap A\, \longrightarrow\, F \, \longrightarrow\, B
\, \longrightarrow\, 0
$$
does not split.
\end{itemize}
Therefore, for both choices $V\,=\, F\cap A$ and $V\,=\, F\cap B$, the following three conditions
are valid:
\begin{enumerate}
\item $V$ is polystable with $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
Since neither of these two subsheaves contains the other, the uniqueness in Proposition \ref{prop1}
fails for $F$.
\subsection{A unique maximal reflexive subsheaf}\label{se3.2}
Although Proposition \ref{prop1} fails for general non-reflexive torsionfree sheaves, the following variation
of it holds.
\begin{proposition}\label{prop2}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$.
Then there is a unique coherent subsheaf $V\, \subset\, F$
satisfying the following four conditions:
\begin{enumerate}
\item $V$ is reflexive,
\item $V$ is polystable with $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all subsheaves of $F$ satisfying the above three conditions.
\end{enumerate}
\end{proposition}
\begin{proof}
The coherent sheaf $F^{**}$ is reflexive and semistable; also, we have $\mu(F^{**})\,=\, \mu(F)$.
Apply Proposition \ref{prop1} to $F^{**}$. Let
$$
W\,\subset\, F^{**}
$$
be the unique polystable subsheaf satisfying the three conditions in Proposition \ref{prop1}.
Let $\mu_0\, :=\, \mu(W)\,=\, \mu(F)$.
Since $W$ is polystable, it is a direct sum of stable sheaves of slope $\mu_0$. Let
${\mathcal C}$ denote the set of all coherent subsheaves $E\, \subset\, W$ satisfying the following two conditions:
\begin{enumerate}
\item $E$ is a direct summand of $W$, meaning there is a coherent subsheaf $E'\, \subset\, W$ such that the
natural homomorphism $E\oplus E'\, \longrightarrow\, W$ is an isomorphism.
\item $E$ is indecomposable, meaning if $E\,=\, E_1\oplus E_2$, then either $E_1\,=\, 0$ or $E_2\,=\, 0$.
\end{enumerate}
Note that any $E\, \in\, {\mathcal C}$ is stable with $\mu(E)\,=\, \mu_0$. Moreover, $E$ is reflexive, because it
is a direct summand of the reflexive sheaf $W$. Since $W$ is polystable, for any subset ${\mathcal C}'\, \subset\,
{\mathcal C}$, the coherent subsheaf of $W$ generated by all subsheaves $E\,\in\, {\mathcal C}'$ is a direct
summand of $W$. Let
\begin{equation}\label{e1}
{\mathcal C}_F\, \subset\, {\mathcal C}
\end{equation}
be the subset consisting of all subsheaves $E\, \in\, {\mathcal C}$ such that $E\, \subset\, F$.
The set ${\mathcal C}_F$ is nonempty, because
$F$ contains a polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$.
Let
\begin{equation}\label{e2}
V\, \subset\, W
\end{equation}
be the coherent subsheaf of $W$ generated by all subsheaves $E\,\in\, {\mathcal C}_F$, where ${\mathcal C}_F$
is defined in \eqref{e1}. We will show that $V$ defined in \eqref{e2} satisfies all the conditions in the proposition.
Recall from Proposition \ref{prop1} that $W$ is reflexive and polystable with $\mu(W)\,=\, \mu_0$.
Since $V$ in \eqref{e2} is a direct summand of $W$, we conclude that $V$ is reflexive and polystable with
$\mu(V)\,=\, \mu_0$.
We note that $F^{**}/V$ fits in the short exact sequence
$$
0\, \longrightarrow\, W/V \, \longrightarrow\, F^{**}/V \, \longrightarrow\, F^{**}/W \, \longrightarrow\, 0\, .
$$
Since both $F^{**}/W$ and $W/V$ are torsionfree, it follows that $F^{**}/V$ is also torsionfree. Hence the
subsheaf $F/V\, \subset\, F^{**}/V$ is torsionfree.
Let $\widetilde{V}\, \subset\,F$ be a subsheaf satisfying the following three conditions:
\begin{enumerate}
\item $\widetilde{V}$ is reflexive,
\item $\widetilde{V}$ is polystable with $\mu(\widetilde{V})\,=\, \mu(F)$, and
\item $F/\widetilde{V}$ is torsionfree.
\end{enumerate}
Then from the properties of $W$ we know that $\widetilde{V}\, \subset\, W$. From this it follows
that $\widetilde{V}\, \subset\, V$. This proves the uniqueness of $V$ satisfying the four
conditions in the proposition.
\end{proof}
\begin{remark}\label{rem2}
In Proposition \ref{prop2}, if $F$ does not contain any polystable reflexive subsheaf $F'$ with $\mu(F')
\,=\, \mu(F)$, then we set the maximal subsheaf
$V$ in Proposition \ref{prop2} to be the zero subsheaf $0\, \subset\, F$. To
explain this convention, consider the sheaf $F$ in Section \ref{se3.1}. Note that $F$
does not contain any reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$. Hence, by this convention,
the subsheaf $V$ of $F$ given by Proposition \ref{prop2}
is the zero subsheaf $0\, \subset\, F$.
\end{remark}
\subsection{A unique maximal locally free subsheaf}\label{se3.3}
Proposition \ref{prop2} has the following variation.
\begin{proposition}\label{prop3}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable locally free subsheaf $F''$ with $\mu(F'')\,=\, \mu(F)$.
Then there is a unique coherent subsheaf $V^f\, \subset\, F$
satisfying the following four conditions:
\begin{enumerate}
\item $V^f$ is locally free,
\item $V^f$ is polystable with $\mu(V^f)\,=\, \mu(F)$,
\item $F/V^f$ is torsionfree, and
\item $V^f$ is maximal among all subsheaves of $F$ satisfying the above three conditions.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof is very similar to the proof of Proposition \ref{prop2}.
Take $W\, \subset\, F^{**}$ as in the proof of Proposition \ref{prop2}. Consider
${\mathcal C}_F$ defined in \eqref{e1}. Let
\begin{equation}\label{e3}
{\mathcal C}^f_F\, \subset\, {\mathcal C}_F
\end{equation}
be the subset consisting of all subsheaves $E\, \in\, {\mathcal C}_F$ such that $E$ is locally free.
We note that the set ${\mathcal C}^f_F$ is nonempty, because
$F$ contains a polystable locally free subsheaf $F''$ with $\mu(F'')\,=\, \mu(F)$.
Let
\begin{equation}\label{e4}
V^f\, \subset\, W
\end{equation}
be the coherent subsheaf of $W$ generated by all subsheaves $E\,\in\, {\mathcal C}^f_F$, where ${\mathcal C}^f_F$
is defined in \eqref{e3}.
We will describe an alternative construction of the subsheaf $V^f$ in \eqref{e4}.
A theorem of Atiyah says that any coherent sheaf ${\mathcal E}$ on $X$ can be expressed as a direct sum
of indecomposable sheaves, and if
$$
{\mathcal E}\,=\, \bigoplus_{i=1}^{m_1} {\mathcal E}^1_i\,=\,\bigoplus_{i=1}^{m_2} {\mathcal E}^2_i
$$
are two decompositions of ${\mathcal E}$ into direct sum of indecomposable sheaves, then
\begin{itemize}
\item $m_1\,=\, m_2$, and
\item there is a permutation $\sigma$ of $\{1,\, \cdots,\, m_1\}$ such that
${\mathcal E}^1_i$ is isomorphic to ${\mathcal E}^2_{\sigma(i)}$ for all $1\, \leq\, i\, \leq\, m_1$.
\end{itemize}
(See \cite[p.~315, Theorem 2(i)]{At}.) From this theorem of Atiyah it follows immediately that any
coherent sheaf ${\mathcal E}$ on $X$ can be expressed as
\begin{equation}\label{ed}
{\mathcal E}\,=\, {\mathcal E}^f\oplus {\mathcal E}'\, ,
\end{equation}
where ${\mathcal E}^f$ is locally free, and every indecomposable component of ${\mathcal E}'$ is
\textit{not} locally free. Furthermore, the decomposition of ${\mathcal E}$ in \eqref{ed} is unique if
\begin{itemize}
\item there is no nonzero homomorphism from ${\mathcal E}^f$ to ${\mathcal E}'$, and
\item there is no nonzero homomorphism from ${\mathcal E}'$ to ${\mathcal E}^f$.
\end{itemize}
Note that this condition is satisfied for $W$; this is because any nonzero homomorphism between two stable
reflexive sheaves of the same slope is an isomorphism.
Let
$$
W^f\, \subset\, W
$$
be the (unique) maximal locally free component of $W$. So $W^f$ is a direct sum of stable vector bundles of slope
$\mu_0$. The subsheaf $V^f$ in \eqref{e4} is generated by all direct summands of $W^f$ that are contained in $F$.
Since $V^f$ is a direct summand of $W^f$, it follows that $V^f$ is locally free. It clearly satisfies
all the conditions in the proposition, and it is unique.
\end{proof}
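The following simple illustration of the decomposition in \eqref{ed} may be helpful.
\begin{remark}
Assume that $\dim X\, \geq\, 2$, and let $I_x\, \subset\, {\mathcal O}_X$ be the ideal sheaf of a
point $x\, \in\, X$. For
$$
{\mathcal E}\,=\, {\mathcal O}_X\oplus I_x\, ,
$$
the sheaf $I_x$ is torsionfree of rank one, hence indecomposable, but it is not locally free because
$\dim X\, \geq\, 2$. Therefore, in the decomposition \eqref{ed} we have ${\mathcal E}^f\,=\,
{\mathcal O}_X$ and ${\mathcal E}'\,=\, I_x$.
\end{remark}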
\begin{remark}\label{rem3}
In Proposition \ref{prop3}, if $F$ does not contain any polystable locally free subsheaf $F''$ with $\mu(F'')
\,=\, \mu(F)$, then we set the maximal subsheaf $V^f$ in Proposition \ref{prop3} to be the zero subsheaf
$0\, \subset\, F$. For the sheaf $F$ in Section \ref{se3.1}, the subsheaf given by
Proposition \ref{prop3} is the zero subsheaf $0\, \subset\, F$.
\end{remark}
\section{Maximal pseudo-stable subsheaves}
\begin{theorem}\label{thm1}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$.
Then there is a unique pseudo-stable subsheaf $V\, \subset\, F$
satisfying the following three conditions:
\begin{enumerate}
\item $\mu(V)\,=\, \mu(F)$,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all pseudo-stable subsheaves of $F$ satisfying the above two conditions.
\end{enumerate}
\end{theorem}
\begin{proof}
The idea is to apply Proposition \ref{prop2} iteratively. Let
$$
V_1\, \subset\, F
$$
be the subsheaf given by Proposition \ref{prop2}. Assume that $V_1\, \not=\, 0$. Let
\begin{equation}\label{q1}
q_1\, :\, F\, \longrightarrow\, F/V_1
\end{equation}
be the quotient map. We note that $F/V_1$ is torsionfree and semistable; moreover,
we have $\mu(F/V_1)\,=\, \mu(F)$, if $F/V_1\, \not=\, 0$. Let
$$
V'_1\, \subset\, F/V_1
$$
be the subsheaf given by Proposition \ref{prop2}. Define
$$
V_2\, :=\, q^{-1}_1(V'_1)\, \subset\, F\, ,
$$
where $q_1$ is the projection in \eqref{q1}.
So $F/V_2\,=\, (F/V_1)/V'_1$ is torsionfree and semistable; moreover,
we have $\mu(F/V_2)\,=\, \mu(F)$, if $F/V_2\, \not=\, 0$.
Proceeding step-by-step in this way, we obtain a filtration of subsheaves
\begin{equation}\label{e5}
0\, =\, V_0\, \subset\, V_1\, \subset\, V_2\, \subset\, \cdots\, \subset\, V_{n-1}\, \subset\, V_n\,=\, V
\, \subset\, F
\end{equation}
such that $V_i/V_{i-1}$ is the subsheaf of $F/V_{i-1}$ given by Proposition \ref{prop2}, for
every $1\, \leq\, i\, \leq\, n$, and the subsheaf of $F/V_n$ given by Proposition \ref{prop2} is the
zero subsheaf.
{}From Proposition \ref{prop2} it follows immediately that
\begin{itemize}
\item $V$ is pseudo-stable (see Remark \ref{rem1}),
\item $\mu(V)\,=\, \mu(F)$, and
\item $F/V$ is torsionfree.
\end{itemize}
We need to show that $V$ is the unique maximal one among all subsheaves of $F$ satisfying these three
conditions.
Let $$W\, \subset\, F$$ be a coherent subsheaf such that
\begin{itemize}
\item $W$ is pseudo-stable,
\item $\mu(W)\,=\, \mu(F)$, and
\item $F/W$ is torsionfree.
\end{itemize}
Let
$$
0\, =\, W_0\, \subset\, W_1\, \subset\, W_2\, \subset\, \cdots\, \subset\, W_{m-1}\, \subset\, W_m\,=\, W
$$
be a filtration of $W$ such that
$W_i/W_{i-1}$ is a stable reflexive sheaf with $\mu(W_i/W_{i-1})\,=\, \mu(W)$ for all
$1\, \leq\, i\, \leq \, m$. Then it follows immediately that $W_1\, \subset\, V_1$, where $V_1$ is the subsheaf
in \eqref{e5}. For the same reason, we have $W_i\, \subset\, V_i$ for all $i$ (see \eqref{e5}).
This proves that $V$ is the unique maximal one among all subsheaves of $F$ satisfying the three
conditions.
\end{proof}
\begin{remark}\label{rem4}
In Theorem \ref{thm1}, if $F$ does not contain any polystable reflexive subsheaf $F'$ with $\mu(F')\,=\, \mu(F)$,
then we set the maximal subsheaf $V$ in Theorem \ref{thm1} to be the zero subsheaf $0\, \subset\, F$.
\end{remark}
\begin{theorem}\label{thm2}
Let $F$ be a torsionfree semistable sheaf on $X$.
Assume that $F$ contains a polystable locally free subsheaf $F''$ with $\mu(F'')\,=\, \mu(F)$.
Then there is a unique subsheaf $V\, \subset\, F$
satisfying the following four conditions:
\begin{enumerate}
\item $\mu(V)\,=\, \mu(F)$,
\item $V$ is a pseudo-stable bundle,
\item $F/V$ is torsionfree, and
\item $V$ is maximal among all subsheaves of $F$ satisfying the above three conditions.
\end{enumerate}
\end{theorem}
\begin{proof}
We apply Proposition \ref{prop3} iteratively, just as Proposition \ref{prop2} was applied iteratively
in the proof of Theorem \ref{thm1}. Apart from that, the proof is identical to the proof of Theorem \ref{thm1};
we omit the details.
\end{proof}
\begin{remark}\label{rem5}
In Theorem \ref{thm2}, if $F$ does not contain any polystable locally free subsheaf $F''$ with $\mu(F'')
\,=\, \mu(F)$, then we set the maximal subsheaf $V$ in Theorem \ref{thm2} to be the zero subsheaf $0\, \subset\, F$.
\end{remark}
\section{Galois coverings and descent}
\subsection{Galois covering}\label{se5.1}
As before, $X$ is an irreducible normal projective variety over $k$, equipped
with an ample line bundle $L$. Let $Y$ be an irreducible projective variety and
\begin{equation}\label{e6}
\phi\, :\, Y\, \longrightarrow\, X
\end{equation}
an \'etale Galois covering. The Galois group $\text{Gal}(\phi)$ will be denoted by $\Gamma$.
Note that the line bundle $\phi^*L$ on $Y$ is ample. The degree of sheaves on $Y$ will be defined
using $\phi^*L$. For any torsionfree coherent sheaf $E$ on $X$,
\begin{equation}\label{f1}
\text{degree}(\phi^*E)\,=\, (\#\Gamma)\cdot \text{degree}(E)\, .
\end{equation}
Let $E$ be a semistable torsionfree sheaf on $X$. Then it can be shown that $\phi^*E$ is semistable. To prove
this, assume that $\phi^*E$ is not semistable. Let
$$
E'\, \subset\, \phi^*E
$$
be the maximal semistable subsheaf, namely the first term of the Harder--Narasimhan filtration of $\phi^*E$
(see \cite[p.~16, Theorem 1.3.4]{HL}). From the uniqueness of $E'$ it follows immediately that the natural
action of the Galois group $\Gamma$ on $\phi^*E$ preserves $E'$. Therefore, there is a unique subsheaf
$$
E_1\, \subset\, E
$$
such that $\phi^*E_1\,=\, E'$. Since $E'$ contradicts the semistability condition for $\phi^*E$, using
\eqref{f1} we conclude that $E_1$ contradicts the semistability condition for $E$. But $E$ is
semistable. This proves that $\phi^*E$ is semistable.
\subsection{An example}\label{se5.2}
Let
$$
V\, \subset\, E\ \ \ \text{ and }\ \ \ W\, \subset\, \phi^*E
$$
be the polystable subsheaves given by Proposition \ref{prop1}. In general, $W\, \not=\, \phi^*V$. Such an
example is given below.
Take $\Gamma$ such that the $\Gamma$--module $k[\Gamma]$ is not completely reducible; by Maschke's
theorem, this requires the characteristic of $k$ to be positive (and to divide the order of $\Gamma$). Set
$E\,=\, \phi_*{\mathcal O}_Y$. We note that $\phi_*{\mathcal O}_Y$ is the vector bundle associated to
the principal $\Gamma$--bundle $Y\, \stackrel{\phi}{\longrightarrow}\, X$ for the $\Gamma$--module $k[\Gamma]$.
Then $\phi^*E$ is the trivial vector bundle
$Y\times k[\Gamma]\, \longrightarrow\, Y$ with fiber $k[\Gamma]$. Hence the polystable subsheaf
$$
W\, \subset\, \phi^*E
$$
given by Proposition \ref{prop1} is $\phi^*E$ itself. But $E$ is not polystable (though it is semistable) because
the $\Gamma$--module $k[\Gamma]$ is not completely reducible. Therefore, the
polystable subsheaf
$$
V\, \subset\, E
$$
given by Proposition \ref{prop1} is a proper subsheaf of $E$. In particular, we have $W\, \not=\, \phi^*V$.
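For a concrete instance of this phenomenon, take $k$ of characteristic $p\,>\,0$ and
$\Gamma\,=\, \mathbb Z/p\mathbb Z$. Writing $t\,=\, \gamma-1$, where $\gamma$ is a generator of
$\Gamma$, we have
$$
k[\Gamma]\,\cong\, k[t]/(t^p)\, ,
$$
which is a local $k$--algebra; so the trivial module $k$ is the unique simple quotient of $k[\Gamma]$,
and $k[\Gamma]$ is not completely reducible. Coverings as in \eqref{e6} with such a Galois group do
occur; for instance, an ordinary elliptic curve in characteristic $p$ admits a connected \'etale
isogeny of degree $p$.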
\subsection{Descent of the reflexive subsheaf}
We continue with the set-up of Section \ref{se5.1}.
\begin{proposition}\label{prop4}
Let $F$ be a semistable torsionfree sheaf on $X$. Let
$$
V\, \subset\, F\ \ \ \text{ and }\ \ \ W\, \subset\, \phi^*F
$$
be the pseudo-stable subsheaves given by Theorem \ref{thm1}. Then
$$
W\,=\, \phi^*V
$$
as subsheaves of $\phi^*F$.
\end{proposition}
\begin{proof}
We trace the construction of the subsheaf in Proposition \ref{prop2}. First note that we have
$\phi^*(F^{**})\,=\,
(\phi^*F)^{**}$, because $\phi$ is \'etale. Therefore, the Galois group $\Gamma$ has a natural
action on $(\phi^*F)^{**}$. Since the polystable subsheaf $$B'\, \subset\, (\phi^*F)^{**}$$
given by Proposition \ref{prop1} is unique, the action of $\Gamma$ on $(\phi^*F)^{**}$ preserves
$B'$. Consequently, the polystable subsheaf
$$
B\, \subset\, \phi^*F
$$
given by Proposition \ref{prop2} is preserved by the action of $\Gamma$ on $\phi^*F$.
Hence there is a unique coherent subsheaf
$$
A\, \subset\, F
$$
such that $\phi^*A\,=\, B$ as subsheaves of $\phi^*F$.
The sheaf $A$ is semistable: if a subsheaf $A_1\, \subset\, A$ contradicts the semistability
condition for $A$, then the subsheaf $\phi^*A_1\, \subset\, \phi^*A\,=\, B$ contradicts the semistability
condition for $B$. We will prove that $A$ is pseudo-stable.
To prove that $A$ is pseudo-stable, first note that $A$ is reflexive, because $\phi$ is
\'etale and $B\,=\, \phi^*A$ is reflexive. Let
$$
0\, \longrightarrow\, S \, \longrightarrow\, A \, \longrightarrow\, Q \, \longrightarrow\, 0
$$
be a short exact sequence such that
\begin{itemize}
\item $\mu(S)\,=\, \mu(A)$, and
\item $Q$ is torsionfree.
\end{itemize}
Since $A$ is semistable, and $\mu(S)\,=\, \mu(A)$, it follows that both $S$ and $Q$ are semistable
and also we have $\mu(Q)\,=\, \mu(A)$.
To prove that $A$ is pseudo-stable, it suffices to show that $Q$ is reflexive.
As noted in Section \ref{se5.1}, the semistability of $S$ implies the
semistability of $\phi^*S$. We also have $\mu(\phi^*S)\,=\, \mu(\phi^*A)$, because
$\mu(S)\,=\, \mu(A)$. On the other hand $\phi^*A\,=\, B$ is polystable. These together imply
that $\phi^*S$ is a direct summand of $\phi^*A$. Fix a subsheaf
$$
S'\, \subset\, \phi^*A
$$
such that the natural homomorphism $S'\oplus \phi^*S\, \longrightarrow\, \phi^*A$ is an
isomorphism. Since $\phi^*A$ is reflexive, it follows that $S'$ is also reflexive. But
$S'\,=\, \phi^*Q$. Therefore, we conclude that $Q$ is reflexive.
As noted before, this proves that $A$ is pseudo-stable.
Now following the iterative construction in Theorem \ref{thm1} it is straightforward to deduce
that
\begin{equation}\label{e8}
W\,\subset\, \phi^*V\, .
\end{equation}
To complete the proof we need to show that
\begin{equation}\label{e7}
\phi^*V \,\subset\, W\, .
\end{equation}
We note that to prove \eqref{e7} it suffices to show the following:
\textit{Let $E$ be a reflexive stable sheaf on $X$. Then $\phi^*E$ is pseudo-stable.}
To prove the above statement, let $E$ be a reflexive stable sheaf on $X$. So $\phi^*E$ is reflexive
and semistable (shown in Section \ref{se5.1}). Let
$$
E_1\, \subset\, \phi^*E
$$
be the polystable subsheaf given by Proposition \ref{prop1}. As observed earlier, from the uniqueness of $E_1$ it
follows immediately that the natural action of $\Gamma$ on $\phi^*E$ preserves $E_1$. Let $$E'\, \subset\, E$$
be the unique subsheaf such that $$E_1\,=\, \phi^*E'$$ as subsheaves of $\phi^*E$.
As noted before, $E'$ is semistable, because $\phi^*E$ is so; also, we have $\mu(E')\,=\, \mu(E)$, because
$\mu(\phi^*E')\,=\, \mu(\phi^*E)$. Furthermore, $E'$ is reflexive because $\phi^*E'$ is so.
Since $E$ is polystable, these together imply that $E'$ is a direct summand of $E$. Since $E$ is reflexive,
and $E/E'$ is a direct summand of $E$, we conclude that $E/E'$ is also reflexive. Consequently, the pullback
$$
\phi^*(E/E')\,=\, (\phi^*E)/E_1
$$
is also reflexive. Using this it is straightforward to deduce that $\phi^*E$ is pseudo-stable. Indeed,
in the above argument, substitute $E/E'$ in place of $E$, and iterate.
Hence the inclusion in \eqref{e7} holds. The proposition follows from \eqref{e8} and \eqref{e7}.
\end{proof}
\subsection{Descent of the locally free subsheaf}
\begin{proposition}\label{prop5}
Let $F$ be a semistable torsionfree sheaf on $X$. Let
$$
V\, \subset\, F\ \ \ \text{ and }\ \ \ W\, \subset\, \phi^*F
$$
be the pseudo-stable bundles given by Theorem \ref{thm2}. Then
$$
W\,=\, \phi^*V
$$
as subsheaves of $\phi^*F$.
\end{proposition}
\begin{proof}
The proof is very similar to the proof of Proposition \ref{prop4}.
The construction of the subsheaf in Proposition \ref{prop3} needs to be traced
instead of the construction in Proposition \ref{prop2}. We omit the details.
\end{proof}
\section{Direct image of structure sheaf}
\subsection{The case of group quotients}
Let $Z$ be an irreducible normal projective variety over $k$ such that a finite (reduced) group $\Gamma$ is acting
faithfully on it. Then the quotient $X\, :=\, Z/\Gamma$ is also an irreducible normal projective variety,
and the quotient map
\begin{equation}\label{n1}
\varphi\, :\, Z \, \longrightarrow\, Z/\Gamma\,=:\, X
\end{equation}
is separable. Note that we are not assuming that the action of $\Gamma$ is free; so the map $\varphi$ need
not be \'etale. We fix an ample line bundle $L$ on $X$, and we equip $Z$ with $\varphi^*L$; so we have
the notion of degree of sheaves on both $X$ and $Z$.
Consider the direct image $\varphi_*{\mathcal O}_Z$; it is reflexive because $X$ and $Z$ are normal.
We have
\begin{equation}\label{n0}
(\varphi^*\varphi_*{\mathcal O}_Z)/{\rm Torsion}\, \subset\, {\mathcal O}_Z\otimes_k k[\Gamma]\, .
\end{equation}
Indeed, over the open subset $U\, \subset\, Z$ where $\varphi$ is flat, we have
$$\widehat{\varphi}^*\widehat{\varphi}_*{\mathcal O}_U\, \subset\, {\mathcal O}_U\otimes_k k[\Gamma]\, ,$$
where $\widehat{\varphi}\,=\, \varphi\vert_U$.
Since the codimension of $Z\setminus U\, \subset\, Z$ is at least two, and $Z$ is normal, this inclusion
map over $U$ extends to an inclusion map as in \eqref{n0}. From \eqref{n0} it
follows that $\mu_{\rm max}((\varphi^*\varphi_*{\mathcal O}_Z)/{\rm Torsion})\, \leq\, 0$, and hence
$$
\mu_{\rm max}(\varphi_*{\mathcal O}_Z)\, \leq\, 0\, .
$$
On the other hand, we have ${\mathcal O}_X\, \subset\, \varphi_*{\mathcal O}_Z$. Combining these
it follows that
\begin{equation}\label{n2}
\mu_{\rm max}(\varphi_*{\mathcal O}_Z)\, =\, 0\, .
\end{equation}
Let
\begin{equation}\label{n3}
F_1\, \subset\, \varphi_*{\mathcal O}_Z
\end{equation}
be the maximal semistable subsheaf of degree zero (see \eqref{n2}); in other words, $F_1$ is the first
nonzero term of the Harder--Narasimhan filtration of $\varphi_*{\mathcal O}_Z$. We note that $F_1$ is reflexive.
Let
\begin{equation}\label{n4}
W_1\, \subset\, F_1
\end{equation}
be the unique maximal locally free pseudo-stable bundle given by Theorem \ref{thm2} for
the sheaf $F_1$ in \eqref{n3}. We note that
Theorem \ref{thm2} is applicable because ${\mathcal O}_X\, \subset\, F_1$.
\begin{lemma}\label{lemn1}
The pullback $\varphi^*W_1$ of the sheaf $W_1$ in \eqref{n4} by the map in \eqref{n1} is a trivial bundle on $Z$.
\end{lemma}
\begin{proof}
We know that $\varphi^*W_1$ is locally free of degree zero; let $r$ be its rank. Using the inclusion map
in \eqref{n0} we have
\begin{equation}\label{n5}
\bigwedge\nolimits^r \varphi^*W_1\, \subset\, \bigwedge\nolimits^r ({\mathcal O}_Z\otimes_k k[\Gamma])\,=\,
{\mathcal O}_Z\otimes_k \bigwedge\nolimits^r k[\Gamma]\, .
\end{equation}
Since $\bigwedge\nolimits^r \varphi^*W_1$ is a line bundle of degree zero, any nonzero homomorphism
$\bigwedge\nolimits^r \varphi^*W_1\, \longrightarrow\, {\mathcal O}_Z$ is an isomorphism. So
the subsheaf $\bigwedge\nolimits^r \varphi^*W_1$ in \eqref{n5} is a subbundle. From this it follows
that $\varphi^*W_1$ is a subbundle of ${\mathcal O}_Z\otimes_k k[\Gamma]$.
Any subbundle of degree zero of the trivial bundle is trivial. This proves that $\varphi^*W_1$ is trivial.
\end{proof}
\subsection{The general case}
Let $X$ and $Y$ be irreducible normal projective varieties over $k$, and let
\begin{equation}\label{m1}
\phi\, :\, Y \, \longrightarrow\, X
\end{equation}
be a separable finite surjective map. Let
\begin{equation}\label{m2}
\varphi\, :\, Z \, \longrightarrow\, X
\end{equation}
be the normal Galois closure of $\phi$. So there is a commutative diagram
$$
\begin{matrix}
Z & \stackrel{\varphi}{\longrightarrow}& X\\
\Big\downarrow && \Vert\\
Y & \stackrel{\phi}{\longrightarrow}& X
\end{matrix}
$$
We fix an ample line bundle $L$ on $X$, and we equip both $Y$ and $Z$ with its pullback.
We have
\begin{equation}\label{m0}
\phi_*{\mathcal O}_Y\, \subset\, \varphi_*{\mathcal O}_Z\, .
\end{equation}
Hence from \eqref{n2} it follows that
$$
\mu_{\rm max}(\phi_*{\mathcal O}_Y)\, =\, 0\, ;
$$
recall that ${\mathcal O}_X\, \subset\, \phi_*{\mathcal O}_Y$.
Let
\begin{equation}\label{m3}
F\, \subset\, \phi_*{\mathcal O}_Y
\end{equation}
be the maximal semistable subsheaf of degree zero (equivalently,
it is the first nonzero term of the Harder--Narasimhan filtration); let
\begin{equation}\label{m4}
W\, \subset\, F
\end{equation}
be the unique maximal locally free pseudo-stable bundle given by Theorem \ref{thm2} for
the sheaf $F$ in \eqref{m3}. As before, Theorem \ref{thm2} is applicable because
${\mathcal O}_X\, \subset\, F$.
The algebra structure on ${\mathcal O}_Y$ produces an algebra structure on $\phi_*{\mathcal O}_Y$. Let
\begin{equation}\label{n6}
{\mathbf m}\, :\, (\phi_*{\mathcal O}_Y)\otimes (\phi_*{\mathcal O}_Y)\, \longrightarrow\,
\phi_*{\mathcal O}_Y
\end{equation}
be the corresponding multiplication map.
\begin{proposition}\label{propn1}
The subsheaf $W\,\subset\,\phi_*{\mathcal O}_Y$ (see \eqref{m4}, \eqref{m3}) satisfies the condition
$$
{\mathbf m} (W\otimes W)\, \subset\, W\, ,
$$
where ${\mathbf m}$ is the map in \eqref{n6}.
\end{proposition}
\begin{proof}
{}From the inclusion in \eqref{m0} it follows that $F\, \subset\, F_1$ (defined in \eqref{m3}
and \eqref{n3}), and hence we have
$$
W\, \subset\, W_1
$$
(defined in \eqref{m4} and \eqref{n4}). This implies that
$$
\varphi^* W\, \subset\, \varphi^* W_1\, .
$$
Now, $\varphi^* W_1$ is trivial by Lemma \ref{lemn1}, and $\varphi^* W$ is a locally free
subsheaf of it of degree zero. Therefore, using the argument in the proof of Lemma \ref{lemn1}
we conclude that $\varphi^* W$ is a trivial subbundle of $\varphi^* W_1$.
Since $\varphi^* W$ is trivial, it follows that $(\varphi^* W)\otimes (\varphi^* W)\,=\,
\varphi^* (W\otimes W)$ is trivial. This implies that $W\otimes W$ is semistable. Also,
$\text{degree}(W\otimes W)\,=\, 0$, because $\text{degree}(W)\,=\, 0$. From these it follows
that
$$
{\mathbf m} (W\otimes W)\, \subset\, F
$$
(see \eqref{m3} and \eqref{n6} for $F$ and $\mathbf m$).
Any torsionfree quotient, of degree zero, of a trivial vector bundle is also a
trivial vector bundle; this follows using the argument in Lemma \ref{lemn1}. From this, and the
characterization of $W$ in Theorem \ref{thm2}, we have ${\mathbf m} (W\otimes W)\, \subset\, W$.
\end{proof}
\begin{proposition}\label{propn2}
Take $\phi$ as in \eqref{m1}. Then the induced homomorphism of \'etale fundamental groups
$$
\phi_*\, :\, \pi^{\rm et}_{1}(Y) \, \longrightarrow\, \pi^{\rm et}_{1}(X)
$$
is surjective if and only if $W\,=\, {\mathcal O}_X$, where $W$ is defined in \eqref{m4}.
\end{proposition}
\begin{proof}
First assume that the homomorphism induced by $\phi$
\begin{equation}\label{n7}
\phi_*\, :\, \pi^{\rm et}_{1}(Y) \, \longrightarrow\, \pi^{\rm et}_{1}(X)
\end{equation}
is not surjective. Since $\phi$ is a finite surjective map, $$\phi_*(\pi^{\rm et}_{1}(Y))\,\subset\,
\pi^{\rm et}_{1}(X)$$ is a subgroup of finite index. Let
$$
\phi'\, :\, Y'\, \longrightarrow\, X
$$
be the \'etale covering corresponding to this finite index subgroup $\phi_*(\pi^{\rm et}_{1}(Y))$. So there
is a morphism
$$
\phi''\, :\, Y\, \longrightarrow\, Y'
$$
such that $\phi\,=\, \phi'\circ\phi''$.
Since $\phi'$ is \'etale it follows that $\text{degree}(\phi'_*{\mathcal O}_{Y'})\,=\, 0$
\cite[p.~306, Ch.~IV, Ex.~2.6(d)]{Ha}.
Let
$$
\varphi_1\, :\, Z'\, \longrightarrow\, X
$$
be the normal Galois closure of $\phi'$. Since $\text{degree}(\phi'_*{\mathcal O}_{Y'})\,=\, 0$,
from the proof of Proposition \ref{propn1} it follows that $\varphi^*_1\phi'_*{\mathcal O}_{Y'}$
is trivial. Hence
$$
\phi'_*{\mathcal O}_{Y'}\, \subset\, W\, .
$$
But $\text{rank}(\phi'_*{\mathcal O}_{Y'}) \,=\, \text{degree}(\phi')\, >\, 1$, because $\phi_*$
in \eqref{n7} is not surjective. This implies that $\text{rank}(W)\, \geq\,
\text{rank}(\phi'_*{\mathcal O}_{Y'})\, >\, 1$, and hence $W\, \not=\, {\mathcal O}_X$.
To prove the converse assume that the homomorphism $\phi_*$ in \eqref{n7} is surjective.
Recall that ${\mathcal O}_X\, \subset\, W$.
Assume that $W\, \not=\, {\mathcal O}_X$. Hence we have $\text{rank}(W)\, \geq\,2$.
The spectrum of the subalgebra bundle $W\, \subset\, \phi_*{\mathcal O}_Y$ in Proposition \ref{propn1}
produces a covering (it need not be \'etale)
\begin{equation}\label{l1}
\phi'\, :\, Y'\, \longrightarrow\, X\, .
\end{equation}
In the proof of Proposition \ref{propn1} we saw that $\varphi^*W$ is trivial, where $\varphi$ is the map
in \eqref{m2}. This implies that the pullback of the covering $\phi'$ in \eqref{l1} to $Z$ in \eqref{m2}
is trivial. Consequently, the map $\phi'$ is \'etale.
The inclusion map of $W$ in $\phi_*{\mathcal O}_Y$ (see \eqref{m3} and \eqref{m4}) produces a covering
$$\phi''\, :\, Y\, \longrightarrow\, Y'$$ such that $\phi\,=\, \phi'\circ\phi''$, where $\phi'$
is the map in \eqref{l1}.
But we have $\text{degree}(\phi')\,=\, \text{rank}(W)\, \geq\,2$. Hence
$\phi_*$ in \eqref{n7} is not surjective. Since this contradicts the hypothesis,
we conclude that $W\, =\, {\mathcal O}_X$. This completes the proof.
\end{proof}
\section{Introduction}
The spin $S=1/2$ antiferromagnetic (AFM) Heisenberg model is the natural starting point for describing the magnetic properties of many electronic
insulators with localized spins \cite{Anderson52}. The two-dimensional (2D) square-lattice variant of the model came to particular prominence due to its
relevance to the undoped parent compounds of the cuprate high-temperature superconductors \cite{Anderson87,Manousakis91}, e.g., \chemfig{La_2CuO_4},
and it has also remained a fruitful testing ground for quantum magnetism more broadly. Though there is no rigorous proof of the existence of
AFM long-range order at temperature $T=0$ in the case of $S=1/2$ spins (while for $S \ge 1$ there is such a proof \cite{Neves86}), series-expansion
\cite{Singh89} and quantum Monte Carlo (QMC) calculations \cite{Reger88,Runge92,sandvik97,sandvik10a,Jiang11} have convincingly demonstrated a sublattice
magnetization in close agreement with the simple linear spinwave theory. Thermodynamic properties and the spin correlations at $T>0$ \cite{Beard96,Kim98,Beard98}
also conform very nicely with the expectations \cite{Chakravarty89,Hasenfratz93} for a ``renormalized classical'' system with exponentially divergent correlation
length when $T \to 0$. Thus, at first sight it may appear that the case is settled and the system lacks ``exotic'' quantum-mechanical features. However, it has
been known for some time that the dynamical properties of the model at short wavelengths cannot be fully described by spinwave theory. Along the line
${\bf q}=(\pi,0)$ to $(\pi/2,\pi/2)$ in the Brillouin zone (BZ) of the square lattice (with lattice spacing one), the magnon energy is maximal and constant
within linear spinwave theory. However, various numerical calculations have pointed to a significant suppression of the magnon energy and an anomalously large
continuum of excitations in the dynamic spin structure factor $S({\bf q},\omega)$ around ${\bf q}=(\pi,0)$ \cite{Singh95,sandvik01,zheng05,Powalski15,Powalski17}.
At ${\bf q}=(\pi/2,\pi/2)$ the energy is instead elevated and the continuum is smaller. Conventional spinwave theory can only capture a small fraction
of the $(\pi,0)$
anomaly, even when pushed to high orders in the $1/S$ expansion \cite{Igarashi92,Canali93,Igarashi05,syromyatnikov10}.
A large continuum at high energies for ${\bf q}$ close to $(\pi,0)$ was also observed in neutron scattering experiments on \chemfig{La_2CuO_4}, but an opposite
trend in the energy shifts is apparent there; a reduction at ${\bf q}=(\pi/2,\pi/2)$ and increase at $(\pi,0)$ \cite{coldea01,headings10}. It was realized that
this is due to the fact that the exchange constant $J$ is large in this case ($J\approx 100~{\rm meV}$), and, when considering its origin from an electronic Hubbard
model, higher-order exchange processes play an important role \cite{Peres02,Wan09,Delannoy09,Piazza12}. Interestingly, in \chemfig{Cu{(DCOO)}_2\cdot4D_2O} (CFTD), which is
considered the best realization of the square-lattice Heisenberg model to date, anomalous features in close agreement with those in the Heisenberg model have been
observed \cite{ronnow01,christensen07,piazza15}. In this case the exchange constant is much smaller, $J\approx 6~{\rm meV}$, and the higher-order interactions
are expected to be relatively much smaller than in \chemfig{La_2CuO_4}.
The existence of a large continuum in the excitation spectrum close to ${\bf q}=(\pi,0)$ has for some time prompted speculations of physics beyond magnons in
materials such as \chemfig{La_2CuO_4} and CFTD. In particular, in recent low-temperature polarized neutron scattering experiments on CFTD \cite{piazza15}, the
broad and spin-isotropic continuum in $S({\bf q},\omega)$ at ${\bf q}=(\pi,0)$ was interpreted as a sign of deconfinement of spinons, i.e., that the $S=1$
degrees of freedom excited by a neutron at this wavevector would fractionalize into two independently propagating $S=1/2$ objects. In contrast, the
$(\pi/2,\pi/2)$ scattering remained more magnon-like, with a small spin-anisotropic continuum. Calculation within a class of variational resonating-valence-bond
(RVB) wave functions gave some support to this picture \cite{piazza15}, showing that a pair of spinons originating from a ``broken''
valence bond \cite{tang13} at ${\bf q}=(\pi,0)$ could deconfine and account for both the energy suppression and the broad continuum.
A potential problem with the spinon interpretation is that there is still also a magnon pole at ${\bf q}=(\pi,0)$, even though its amplitude is suppressed, and
this would indicate that the lowest-energy excitations there are still magnons. Lacking AFM long-range order, the RVB wave-function does not contain
any magnon pole, and the interplay between the magnon and putative spinon continuum was not considered in Ref.~\onlinecite{piazza15}. Many
different calculations have indicated a magnon pole in the entire BZ of the 2D Heisenberg model \cite{Singh95,sandvik01,zheng05,Powalski15,Powalski17}.
The prominent continuum at and close to ${\bf q}=(\pi,0)$ has been ascribed to multi-magnon processes, and systematic expansions \cite{Powalski17} in the
number of magnons indeed converge rapidly and give results for the relative weight of the single-magnon pole in close agreement \cite{Powalski17note} with
series-expansion and QMC calculations \cite{Singh95,sandvik01}. Since the results also agree very well with the neutron data for CFTD, the spinon
interpretation of the experiments can be questioned.
Despite the apparent success of the multi-magnon scenario in accounting for the observations, one may still wonder whether spinons could
have some relevance in the Heisenberg model and materials such as CFTD and \chemfig{La_2CuO_4}---this question is the topic of the present paper. Our main motivation
for revisiting the spinon scenario is the direct connection between the Heisenberg model and deconfined quantum criticality: If a certain four-spin interaction
$Q$ is added to the Heisenberg exchange $J$ on the square lattice (the $J$-$Q$ model \cite{sandvik07}), the system can be driven into a spontaneously dimerized
ground state; a valence-bond solid (VBS). At the dimerization point, $Q_c/J \approx 22$, the AFM order also vanishes, in what appears to be a continuous quantum phase transition
\cite{melko08,harada13,shao16}, in accord with the scenario of deconfined quantum critical points \cite{senthil04a,senthil04b}. At the critical point, linearly
dispersing gapless triplets emerge at ${\bf q}=(\pi,0)$ and $(0,\pi)$ \cite{spanu06,suwa16} in addition to the gapless points $(0,0)$ and $(\pi,\pi)$ in the long-range
ordered AFM, and all the low-energy $S=1$ excitations around these points should comprise spinon pairs. Thus, it is possible that the reduction in $(\pi,0)$ excitation
energy observed in the Heisenberg model and CFTD is a precursor to deconfined quantum criticality. If that is the case, then it may indeed be possible to also
describe the continuum in $S({\bf q},\omega)$ around ${\bf q}=(\pi,0)$ in terms of spinons, as already proposed in Ref.~\onlinecite{Powalski17}. However, the persistence of
the magnon pole remains unexplained in this scenario.
Here we will revise and complete the picture of deconfined spinon states in the continuum by also investigating the nature of the sharp
magnon-like state in the Heisenberg model and its fate as the deconfined critical point is approached. Using QMC calculations and an improved numerical analytic
continuation technique (also presented in this paper) to obtain the dynamic structure factor from imaginary-time dependent spin correlations, we will show
that the $(\pi,0)$ magnon pole in the Heisenberg model is fragile---it is destroyed in the presence of even a very small $Q$ interaction, well before the
critical point where the AFM order vanishes. In contrast, the $(\pi/2,\pi/2)$ magnon is robust and survives even at the critical point. We will explain
these behaviors within an effective magnon-spinon mixing model, where a bare magnon in the Heisenberg model becomes dressed by fluctuating in and out of a
two-spinon continuum at higher energy. The mixing is strongest at ${\bf q}=(\pi,0)$, the point of minimum gap between the magnon and the spinon continuum. Our results indicate that there
already exist spinons close above the magnon band in the Heisenberg model, and a small perturbation, here the $Q$ interaction, can cause their bare energy
to dip below the magnon, thus destabilizing this part of the magnon band and changing the nature of the excitation from a well-defined magnon-spinon resonance to
a broad continuum of spinon states. In contrast, the $(\pi/2,\pi/2)$ spinons, which are at their dispersion maximum, never fall below the magnon energy,
thus explaining the robust magnon in this case.
The proximity of the square-lattice Heisenberg AFM to a so-called AF* phase has been proposed as the reason for the $(\pi,0)$ anomaly \cite{piazza15}.
The AF* phase has topological $Z_2$ order but still also has AFM long-range order, and it hosts gapped spinon excitations in addition to low-energy magnons
\cite{balents99,senthil00}. In our scenario it is instead the proximity to a VBS and the intervening deconfined quantum critical point that is responsible for the
presence of high-energy spinons and the excitation anomaly in the Heisenberg model. Our results for the $J$-$Q$ model show that the ${\bf q}=(\pi,0)$ magnon pole
is very fragile in the Heisenberg model and the magnon picture should fail completely around this wavevector even with a rather weak deformation of the
model, likely also with other perturbations than the $Q$-term considered here (e.g., frustrated further-neighbor couplings, ring exchange, or perhaps
even spin-phonon couplings). Thus, although the almost ideal Heisenberg magnet CFTD should only host nearly deconfined spinons, other materials may possibly
have sufficient additional quantum fluctuations to cause full deconfinement close to ${\bf q}=(\pi,0)$.
Our numerical results for $S({\bf q},\omega)$ rely heavily on an improved stochastic method for analytic continuation of QMC-computed imaginary-time
correlation functions. It allows us to test for the presence of a $\delta$-function in the spectral function and determine its weight. In Sec.~\ref{sec:sac}
we will summarize the features of the method that are of critical importance to the present work (leaving more extensive discussions of a broader range of
applications of similar ideas for a future publication \cite{shao17}). We also present tests using synthetic data, which show that the kind of spectral
function expected in the Heisenberg model indeed can be reproduced with QMC data of typical quality. Readers who are not interested in technical details
can skip this section and go directly to Sec.~\ref{sec:hberg}, where we present a brief recapitulation of the key aspects of the method before discussing
the dynamic structure factor of the Heisenberg model. In addition to the QMC results, we also compare with Lanczos exact diagonalization (ED) results for
small systems and study finite-size behaviors with both methods. We compare our results with the recent experimental data for CFTD. In Sec.~\ref{sec:jq}
we discuss results for the $J$-$Q$ model, focusing on the points ${\bf q}=(\pi,0)$ and ${\bf q}=(\pi/2,\pi/2)$, where the excitation spectrum evolves in
completely different ways as the $Q$-interactions are increased and the deconfined critical point is approached. In Sec.~\ref{sec:heff} we present
the effective magnon-spinon mixing model for the excitations and discuss numerical solutions of it. We summarize and further discuss our main
conclusions in Sec.~\ref{sec:summary}.
\section{Stochastic Analytic Continuation}\label{sec:sac}
We will consider a spectral function---the dynamic spin structure factor---at temperature $T=0$. A general spectral function of any bosonic
operator $O$ can be written in the basis of eigenstates $|n\rangle$ and eigenvalues $E_n$ of the Hamiltonian as
\begin{equation}
S(\omega)=\frac{1}{\pi} \sum_{n}|\langle n|O|0\rangle|^2\delta(\omega - [E_n-E_0]).
\label{somegasum}
\end{equation}
For the dynamic spin structure factor $S({\bf q},\omega)$ at momentum transfer ${\bf q}$ and energy transfer $\omega$, the corresponding
operator is the Fourier transform of a spin operator, e.g., the $z$ component
\begin{equation}
O^z_q = \frac{1}{\sqrt{N}}\sum_{i=1}^N {\rm e}^{-i {\bf r}_i \cdot {\bf q}} S^z_i,
\label{szft}
\end{equation}
where ${\bf r}_i$ is the coordinate of site $i$; here on the square lattice with the lattice spacing set to unity.
In this section we will keep the discussion general and do not need to consider the form of the operator.
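As a simple illustration of the operator in Eq.~(\ref{szft}), the following Python sketch (hypothetical helper code, not part of any QMC program used in this work) evaluates $O^z_{\bf q}$ on a single Ising basis state of an $L\times L$ lattice:

```python
import numpy as np

def sz_fourier(sz_config, L, q):
    """Evaluate the Fourier-transformed spin operator
    O^z_q = N^{-1/2} sum_i exp(-i q . r_i) S^z_i  (lattice spacing 1)
    on one Ising configuration of an L x L square lattice.
    sz_config: array of shape (L, L) with entries +-1/2."""
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    phase = np.exp(-1j * (q[0] * x + q[1] * y))
    return (phase * sz_config).sum() / L      # N^{-1/2} = 1/L for N = L^2
```

For the classical N\'eel pattern this gives weight $L/2$ at ${\bf q}=(\pi,\pi)$ and zero at ${\bf q}=(0,0)$, as expected for staggered order.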
\subsection{Preliminaries}
\label{sec:sac1}
In QMC simulations we calculate the corresponding correlation function in imaginary time,
\begin{equation}
G(\tau)=\langle O(\tau)O(0)\rangle,
\label{otau}
\end{equation}
where $O(\tau) = {\rm e}^{\tau H} O {\rm e}^{-\tau H}$, and its relationship to the real-frequency spectral function is
\begin{equation}
G(\tau) = \int_{0}^\infty d\omega S(\omega){\rm e}^{-\tau \omega}.
\label{gtau}
\end{equation}
Some QMC methods, such as the SSE method \cite{sse1} applied here to the Heisenberg model, can provide an unbiased
stochastic approximation $\bar G_i \equiv\bar G(\tau_i)$ to the true correlation function $G_i\equiv G(\tau_i)$ for a set of imaginary
times $\tau_i$, $i=1,\ldots,N_\tau$ \cite{sse2,sse3}. These data points have statistical errors $\sigma_i$ (one standard
deviation of the mean value). Since the statistical errors are correlated, their full characterization requires the
covariance matrix, which can be evaluated with the QMC data divided up into a large number of bins. Denoting the QMC bin averages
by $G^{b}_{i}$ for bins $b=1,2,\ldots,N_B$, we have $\bar G_i = \sum_b G^{b}_{i}/N_B$ and the covariance matrix is given by
\begin{equation}
C_{ij} = \frac{1}{N_B(N_B-1)}\sum_{b=1}^{N_B} (G^b_i-\bar G_i)(G^b_j-\bar G_j),
\label{covmat}
\end{equation}
where we also assume that the bins are based on sufficiently long simulations to be statistically independent.
The diagonal elements of $C$ are the squares of the standard statistical errors; $\sigma_i^2 = C_{ii}$.
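As a concrete illustration (a minimal sketch under our own naming conventions, not the implementation used in this work), the covariance matrix of Eq.~(\ref{covmat}) and the standard errors $\sigma_i$ can be estimated from stored bin averages as follows:

```python
import numpy as np

def covariance_from_bins(g_bins):
    """Covariance matrix of the bin-averaged correlation function:
    C_ij = 1/(N_B(N_B-1)) * sum_b (G^b_i - Gbar_i)(G^b_j - Gbar_j).
    g_bins: array of shape (N_B, N_tau) holding the bin averages G^b_i."""
    n_bins = g_bins.shape[0]
    g_bar = g_bins.mean(axis=0)
    dg = g_bins - g_bar                       # deviations from the mean
    cov = dg.T @ dg / (n_bins * (n_bins - 1))
    sigma = np.sqrt(np.diag(cov))             # standard errors sigma_i
    return g_bar, cov, sigma
```

The diagonal of the returned matrix reproduces the usual error-of-the-mean estimator, i.e., the single-point variance divided by the number of bins.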
In a numerical analytic continuation procedure, the spectral function is parametrized in some way, e.g., with a large number of
$\delta$-functions on a dense grid of frequencies or with adjustable positions in the frequency continuum. The parameters (e.g.,
the amplitudes of the $\delta$-functions) are adjusted for compatibility with the QMC data using the relationship Eq.~(\ref{gtau}).
Given a proposal for $S(\omega)$, there is then a set of numbers $\{G_i\}$ whose closeness to the corresponding
QMC-computed function is quantified in the standard way in a data-fitting procedure by the ``goodness of the fit''
\begin{equation}
\chi^2 = \sum_{i=1}^{N_\tau}\sum_{j=1}^{N_\tau} (G_i-\bar G_i)C^{-1}_{ij}(G_j-\bar G_j).
\label{chi2}
\end{equation}
In practice, we compute the eigenvalues $\epsilon^2_i$ and eigenvectors of $C$ and transform the kernel ${\rm e}^{-\tau \omega}$ of
Eq.~(\ref{gtau}) to this basis. With $\Delta_i=G_i-\bar G_i$ transformed to the same basis, the
goodness of the fit takes the diagonal form
\begin{equation}
\chi^2 = \sum_{i=1}^{N_\tau} \left ( \frac{\Delta_i}{\epsilon_i} \right )^2,
\label{chi2dia}
\end{equation}
and can be more rapidly evaluated.
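The diagonal evaluation of the goodness of fit can be sketched as follows (hypothetical helper code; the function name and argument order are ours):

```python
import numpy as np

def chi2_eigenbasis(g_fit, g_bar, cov):
    """Goodness of fit chi^2 = Delta^T C^{-1} Delta evaluated in the
    eigenbasis of the covariance matrix: with C = U diag(eps_i^2) U^T and
    Delta = G - Gbar rotated by U^T, chi^2 = sum_i (Delta_i / eps_i)^2."""
    eps2, u = np.linalg.eigh(cov)             # eigenvalues eps_i^2, columns of u
    delta = u.T @ (np.asarray(g_fit) - np.asarray(g_bar))
    return float(np.sum(delta**2 / eps2))
```

In practice the rotation of the kernel and of $\bar G$ is performed once, so that only the diagonal sum has to be recomputed during the sampling.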
A reliable diagonalization of the covariance matrix requires more than $N_\tau$ bins and we here typically use at least $10 \times N_\tau$ bins,
with $N_\tau$ in the range $50-100$ and the $\tau$ points chosen on a uniform or quadratic grid. We evaluate the covariance matrix (\ref{covmat})
by bootstrapping, with the total number of bootstrap samples (each consisting of $N_B$ random selections out of the $N_B$ bins) even
larger than the number of bins. In Appendix \ref{app:covar} we show some examples of covariance eigenvalues and eigenvectors.
Minimizing $\chi^2$ does not produce useful results. If positive-definiteness of the spectrum is imposed, the ``best''
solution consists of a typically small number of sharp peaks \cite{schuttler86,sandvik98}, and there are many other very different solutions with almost
the same $\chi^2$-value, reflecting the ill-posed nature of the inverse of the Laplace transform in Eq.~(\ref{gtau}). Without positive-definiteness
the problem is even more ill-posed. Some regularization mechanism therefore has to be applied.
In the standard Maximum-Entropy (ME) method \cite{gull84,silver90,jarrell96}, an entropy $E$,
\begin{equation}
E = - \int_0^\infty d\omega S(\omega)\ln \left ( \frac{S(\omega)}{D(\omega)} \right ),
\label{entropy}
\end{equation}
of the spectrum with respect to a ``default model'' $D(\omega)$ is defined (i.e., $E$ is maximized when $S=D$), and the data is taken into
account by maximizing the function
\begin{equation}
Q=\alpha E-\chi^2.
\label{qalpha}
\end{equation}
This produces the most likely spectrum, given the data and the entropic prior. Different variants of the
method prescribe different ways of determining the parameter $\alpha$, or, in some variants, results are averaged over $\alpha$.
Here we will use stochastic analytic continuation~\cite{sandvik98,beach04,syljuasen08,fuchs10} (SAC), where the entropy is not imposed explicitly
as a prior but is generated implicitly by a Monte Carlo sampling procedure of a suitably parametrized spectrum. We will introduce a parametrization that
enables us to study a spectrum containing a sharp $\delta$-function, which is impossible to resolve with the standard ME approaches (and also with
standard SAC) because of the low entropy of such spectra.
\subsection{Sampling Procedures}
Following one of the main lines of the SAC approach \cite{sandvik98,beach04,syljuasen08,fuchs10}, we sample the spectrum with a
probability distribution resembling the Boltzmann distribution of a statistical-mechanics problem, with $\chi^2/2$ playing the role of the energy of
a system at a fictitious temperature $\Theta$;
\begin{equation}
P(S) \propto \exp \left (-\frac{\chi^2}{2\Theta} \right ).
\label{pofs}
\end{equation}
Lowering $\Theta$ leads to smaller fluctuations and a smaller mean value $\langle \chi^2\rangle$, and this parameter therefore plays a regularization
role similar to $\alpha$ in the ME function, Eq.~(\ref{qalpha}) \cite{beach04}. Several proposals for how to choose the value of
$\Theta$ have been put forward \cite{sandvik98,beach04,syljuasen08,fuchs10}. There is also another line of SAC methods in which good spectra
(in the sense of low $\chi^2$ values) are generated not by sampling at a fictitious temperature, but according to some other distribution with
other regularizing parameters \cite{mishchenko02}. Using Eq.~(\ref{pofs}) allows us to construct direct analogues with statistical mechanics,
e.g., as concerns configurational entropy \cite{sandvik16}. Before describing our scheme of fixing $\Theta$, we discuss a parametrization of the
spectrum specifically adapted to the dynamic spin structure factor of interest in this work.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig01.pdf}
\caption{Parametrizations of the spectral function used in this work. In (a) a large number of $\delta$-functions with the same amplitude occupy
frequencies $\omega_i$ in the continuum (or, in practice, on a very fine frequency grid). The locations are sampled in the SAC procedure.
In (b), the $\delta$-function at the lowest frequency $\omega_0$ has a larger amplitude, $a_0 > a_i$ for $i>0$, and this amplitude is
optimized in the way described in the text. The frequencies of all the $\delta$-functions, including $\omega_0$, are sampled as in (a),
but with the constraint $\omega_0 < \omega_i ~\forall ~i>0$.}
\label{deltas}
\end{figure}
We parametrize the spectrum by a number $N_{\omega}$ of $\delta$-functions in the continuum, as illustrated
in Fig.~\ref{deltas};
\begin{equation}
S(\omega) = \sum_{i=0}^{N_\omega-1}{a_i}\delta(\omega-\omega_i),
\label{somegadeltas}
\end{equation}
working with a normalized spectrum, so that
\begin{equation}
\sum_{i=0}^{N_\omega-1}{a_i}=1,
\label{normdeltas}
\end{equation}
which corresponds to $G(0)=1$ in Eq.~(\ref{gtau}). The pre-normalized value of $G(0)$ is used as a factor in the final result.
In sampling the spectrum, we never change the normalization, and $G(0)$ therefore is not included in the data set
defining $\chi^2$ in Eq.~(\ref{chi2dia}). The covariance matrix, Eq.~(\ref{covmat}), is also computed with normalization to $G(0)=1$ for each
bootstrap sample, which has a consequence that the individual statistical errors $\sigma_i \to 0$ for $\tau_i \to 0$, as discussed
further in Appendix \ref{app:covar}.
In Fig.~\ref{deltas}(a) the $\delta$-functions all have the same weight, $a_i=N^{-1}_\omega$, with $N_\omega$ typically ranging from
$500$ to $2000$ in the calculations presented in this paper. The sampling corresponds to changing the locations (frequencies) $\omega_i$ of
the $\delta$-functions, with the standard Metropolis probability used to accept or reject a change $\omega_i \to \omega_i + d$,
with $d$ chosen at random within a window centered at $d=0$. The width of the window is adjusted to give an acceptance rate close to $1/2$.
We collect the spectral weight in a histogram, averaging over sufficiently many updating cycles of the frequencies to obtain smooth results.
In practice, in order to be able to use a precomputed kernel ${\rm e}^{-\omega_j\tau_i}$ in Eq.~(\ref{gtau}) for all times $\tau_i$ and frequencies
$\omega_j$, we use a very fine grid of allowed frequencies (much finer than the histogram used for collecting the spectrum), e.g., with spacing
$\Delta_\omega=10^{-5}$ in typical cases where the dominant spectral weight is roughly within the range $0-5$. We then also need to
impose a maximum frequency, e.g., $\omega_{\rm max}=20$ under the above conditions. With $\approx 100$ $\tau$-values the amount of memory needed to
store the kernel is then still reasonable, and in practice the fine grid produces results indistinguishable from ones obtained in the
continuum (strictly speaking double-precision floating-point numbers) without limitation, i.e., without even an upper bound imposed on
the frequencies.
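The frequency-update sampling described above can be summarized in the following simplified sketch (not the production code). For brevity it evaluates $\chi^2$ with only the diagonal errors $\sigma_i$ rather than in the covariance eigenbasis, uses a coarser default grid than the $\Delta_\omega=10^{-5}$ quoted above, and discards the first half of the sweeps as equilibration:

```python
import numpy as np

def sample_frequencies(g_bar, sigma, taus, n_delta=500, theta=1.0,
                       d_omega=1e-3, omega_max=20.0, sweeps=2000, seed=0):
    """Metropolis sampling of n_delta equal-amplitude delta functions on a
    fine frequency grid, with configuration weight exp(-chi^2/(2*Theta))."""
    rng = np.random.default_rng(seed)
    grid = np.arange(0.0, omega_max, d_omega)     # fine frequency grid
    kernel = np.exp(-np.outer(taus, grid))        # precomputed e^{-tau*omega}
    amp = 1.0 / n_delta                           # normalization G(0) = 1
    idx = rng.integers(0, len(grid), n_delta)     # initial delta positions
    g_fit = amp * kernel[:, idx].sum(axis=1)
    chi2 = float(np.sum(((g_fit - g_bar) / sigma) ** 2))
    window = max(1, int(0.1 / d_omega))           # proposal window (tuned in practice)
    hist = np.zeros(len(grid))
    for sweep in range(sweeps):
        for k in range(n_delta):
            new = idx[k] + rng.integers(-window, window + 1)
            if new < 0 or new >= len(grid):
                continue                          # reject moves off the grid
            g_new = g_fit + amp * (kernel[:, new] - kernel[:, idx[k]])
            chi2_new = float(np.sum(((g_new - g_bar) / sigma) ** 2))
            d_chi2 = chi2_new - chi2
            if d_chi2 <= 0.0 or rng.random() < np.exp(-d_chi2 / (2.0 * theta)):
                idx[k], g_fit, chi2 = new, g_new, chi2_new
        if sweep >= sweeps // 2:                  # accumulate after equilibration
            np.add.at(hist, idx, amp)
    return grid, hist / (sweeps - sweeps // 2), chi2
```

Feeding the routine a synthetic $G(\tau)={\rm e}^{-\tau}$, corresponding to a single $\delta$-function at $\omega=1$, the accumulated spectral weight concentrates near that frequency.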
We have found that not changing the amplitudes of the $\delta$-functions is an advantage in terms of the sampling time required to obtain good
results, and there are other advantages as well, as will be discussed further in a forthcoming technical article \cite{shao17}. One can also initialize
the amplitudes with a range of different weights (e.g., of the form $a_i \propto i^\alpha$, with $\alpha>0$), while maintaining the
normalization Eq.~(\ref{normdeltas}). This modification of the scheme can help if the spectrum has a gap separating regions of significant
spectral weight, since an additional amplitude-swap update, $a_i \leftrightarrow a_j$, can easily transfer weight between two separate regions
when the weights are all different, thus speeding up the sampling (but we typically do not find significant differences in the final results
as compared with all-equal $a_i$). This method was already applied to spectral functions of a 3D quantum critical antiferromagnet in
Ref.~\onlinecite{qin17}. Here we do not have any indications of mid-spectrum gaps and use the constant-weight ensemble, however, with
a crucial modification.
As illustrated in Fig.~\ref{deltas}(b), in order to reproduce the kind of spectral function expected in the 2D Heisenberg model---a magnon pole
followed by a continuum---we have developed a modified parametrization where we give special treatment to the $\delta$-function with lowest frequency
$\omega_0$. We adjust its amplitude $a_0$ in a manner described further below but keep it fixed in the sampling of frequencies. The common amplitude for
the other $\delta$-functions is then $a_i = (1-a_0)/(N_\omega-1)$. The determination of the best $a_0$ value also relies on how the sampling temperature
$\Theta$ is chosen, which we discuss next.
Consider first the case of all $\delta$-functions having equal amplitude; Fig.~\ref{deltas}(a). As an initial step, we carry out a simulated
annealing procedure with slowly decreasing $\Theta$ to find the lowest, or very close to the lowest, possible value of $\chi^2$ (which will never be
exactly $0$, no matter how many $\delta$-functions are used, because of the positive-definiteness imposed on the spectrum). We then raise $\Theta$ to a
value where the sampled mean value $\langle \chi^2\rangle$ of the goodness of fit is higher than the minimum value $\chi^2_{\rm min}$ by an amount of the
order of the standard deviation of the $\chi^2$ distribution, i.e., moving away from the overfitting region where the process becomes sensitive
to the detrimental effects of the statistical errors (i.e., producing a nonphysical spectrum with a small number of sharp peaks). Statistically,
the best fit is expected to have $\chi^2_{\rm min} \approx N_{\rm dof}=N_\tau-N_{\rm para}$, where $N_{\rm para}$ is the (unknown)
effective number of parameters of the spectrum, so that the minimum $\chi^2$ value can be taken as an estimate of the effective number of degrees of
freedom; $\chi^2_{\rm min} \approx N_{\rm dof}$. Hence, the standard deviation $\sigma_{\chi^2}=(2N_{\rm dof})^{1/2}$ can be replaced by the
statistically valid approximation
\begin{equation}
\sigma_{\chi^2} \approx \sqrt{2\chi^2_{\rm min}}.
\end{equation}
Thus, we adjust $\Theta$ such that
\begin{equation}
\langle \chi^2\rangle \approx \chi^2_{\rm min} +a\sqrt{2\chi^2_{\rm min}},
\label{chi2crit}
\end{equation}
with the constant $a$ of order one. For spectral functions with no sharp features, we find that this method with the parametrization in Fig.~\ref{deltas}(a)
produces good, stable results, with very little dependence of the average spectrum on $a$ as long as it is of order one. For $a \to 0$ the data become
overfitted, leading eventually to a spectrum consisting of a small number of sharp peaks with little resemblance to the true spectrum.
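Schematically, the criterion of Eq.~(\ref{chi2crit}) can be enforced by a simple search on $\Theta$; in this sketch (our own construction, for illustration only) the sampled average $\langle\chi^2\rangle(\Theta)$ is supplied as a generic callable, whereas in practice it is measured by the sampling itself:

```python
import numpy as np

def target_chi2(chi2_min, a=0.5):
    """Target <chi^2> = chi2_min + a*sqrt(2*chi2_min), with a of order one."""
    return chi2_min + a * np.sqrt(2.0 * chi2_min)

def adjust_theta(mean_chi2, chi2_min, a=0.5, theta_lo=1e-4, theta_hi=1e4):
    """Log-scale bisection for the sampling temperature Theta at which the
    sampled average <chi^2>(Theta), supplied as a callable, meets the target;
    assumes <chi^2> grows monotonically with Theta."""
    target = target_chi2(chi2_min, a)
    while theta_hi / theta_lo > 1.001:
        theta = np.sqrt(theta_lo * theta_hi)      # geometric midpoint
        if mean_chi2(theta) > target:
            theta_hi = theta
        else:
            theta_lo = theta
    return np.sqrt(theta_lo * theta_hi)
```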
Using the unrestricted sampling with the parametrization in Fig.~\ref{deltas}(a), with QMC data of typical quality one cannot expect to resolve
a very sharp peak---in the extreme case a $\delta$-function---because it will be washed out by entropy. Therefore, in most of the calculations reported
in this paper we proceed in a different way in order to incorporate the expected $\delta$-function. After determining $\chi^2_{\rm min}$, we switch to the
parametrization in Fig.~\ref{deltas}(b), and the next step is to find an optimal value of the amplitude $a_0$. To this end we rely on the insight from
Ref.~\onlinecite{sandvik16}
that the optimal value of a parameter affecting the amount of configurational entropy in the spectrum can be determined by monitoring $\langle\chi^2\rangle$
as a function of that parameter at fixed sampling temperature $\Theta$. In the case of $a_0$, increasing its value will remove entropy from the spectrum.
Since entropy is what tends to spread out the spectral weight excessively into regions where there should be little weight or no weight at all, a reduced
entropy can be reflected in a smaller value of $\langle \chi^2\rangle$. Thus, in cases where the spectrum is gapped, a sampling with the parametrization
in Fig.~\ref{deltas}(a) will lead to spectral weight in the gap and an overall distorted spectrum. However, upon switching to the parametrization in
Fig.~\ref{deltas}(b) and gradually increasing $a_0$, no weight can appear below $\omega_0$ and $\langle \omega_0\rangle$ will gradually increase (and
note again that $\omega_0$ is not fixed but is sampled along with the other frequencies $\omega_i$) because a good match with the QMC data $\{\bar G\}$
cannot be obtained if there is too much weight in the gap. In this process $\langle \chi^2\rangle$ will decrease. Upon increasing $a_0$ further,
$\langle \omega_0\rangle$ will eventually be pushed too far above the gap, and then $\langle \chi^2\rangle$ clearly must start to increase. Thus, if there is
a $\delta$-function at the lower edge of the spectrum being sought, one can in general expect a minimum in $\langle \chi^2\rangle$ versus $a_0$, and, if the QMC data
are good enough, this minimum should be close to the true value of $a_0$. When fixing $a_0$ to its optimal value at the $\langle \chi^2\rangle$-minimum, the
frequency $\omega_0$ should fluctuate around its correct value (with normally very small fluctuations so that the final result is a very sharp peak). If
there is no such $\delta$-function in the true spectrum, one would expect the $\langle \chi^2\rangle$ minimum to lie very close to $a_0=0$. Extensive testing,
to be reported elsewhere \cite{shao17}, has confirmed this picture. We here show test results relevant to the type of spectral function expected for
the 2D Heisenberg model.
One might think that we could also sample the weight $a_0$ instead of optimizing its fixed value. The reason why this does not work is at the heart
of our approach: if Monte Carlo updates changing the value of $a_0$ (and thus also of all other weights $a_{i>0}$, to maintain normalization) are included, entropic pressures
will favor values close to the other amplitudes, and the results (which we have confirmed) are indistinguishable from those obtained without special
treatment of the lower edge, i.e., the parametrization in Fig.~\ref{deltas}(a). The entropy associated with different parametrizations will be further
discussed in a separate article \cite{shao17}.
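The sampling step itself is ordinary Metropolis importance sampling of the $\delta$-function configurations, with the standard SAC weight $P(S)\propto \exp[-\chi^2/(2\Theta)]$. A minimal sketch in Python follows (our own illustration, not the production code: it assumes the zero-temperature kernel $e^{-\omega\tau}$ and a diagonal covariance, and it recomputes $\chi^2$ from scratch at every move, whereas an efficient implementation updates $G(\tau)$ incrementally and works in the eigenbasis of the full covariance matrix).

```python
import numpy as np

def chi2(amps, freqs, tau, Gbar, sigma):
    """Goodness of fit for a spectrum of delta functions at `freqs` with
    weights `amps`, assuming the T=0 kernel exp(-omega*tau) and a
    diagonal covariance (the full covariance matrix is used in practice)."""
    G = np.exp(-np.outer(tau, freqs)) @ amps
    return float(np.sum(((G - Gbar) / sigma) ** 2))

def sample_spectrum(tau, Gbar, sigma, theta, n_omega=500, n_sweeps=1000,
                    omega_max=10.0, step=0.1, seed=None):
    """Metropolis sampling of the delta-function frequencies at sampling
    temperature theta, with weight P ~ exp(-chi^2 / (2 theta)).
    Returns <chi^2> and all sampled frequencies (for histogramming)."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(0.0, omega_max, n_omega)
    amps = np.full(n_omega, 1.0 / n_omega)        # equal amplitudes a_i
    x2 = chi2(amps, freqs, tau, Gbar, sigma)
    chi2_sum, samples = 0.0, []
    for _ in range(n_sweeps):
        for i in rng.integers(0, n_omega, n_omega):   # one sweep
            new = freqs[i] + rng.uniform(-step, step)
            if not 0.0 <= new <= omega_max:
                continue                              # stay inside the box
            old, freqs[i] = freqs[i], new
            x2_new = chi2(amps, freqs, tau, Gbar, sigma)
            if x2_new <= x2 or rng.random() < np.exp((x2 - x2_new) / (2.0 * theta)):
                x2 = x2_new                           # accept the move
            else:
                freqs[i] = old                        # reject, restore
        chi2_sum += x2
        samples.append(freqs.copy())
    return chi2_sum / n_sweeps, np.concatenate(samples)
```

The returned frequency samples, weighted by the corresponding amplitudes, are what is later collected into the histogram representing the averaged spectrum.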
\subsection{Tests on synthetic data}
To test whether the method can resolve the kind of spectral features that are expected in the 2D Heisenberg model, we construct a synthetic spectral
function with a $\delta$-function of weight $a_0$ and frequency $\omega_0$, followed by a continuum with total weight $1-a_0$. The relationship in
Eq.~(\ref{gtau}) is used to obtain $G(\tau)$ for a set of $\tau$-points and normal-distributed noise is added to the $G$ values, with standard deviation
typical in QMC results. To provide an even closer approximation to real QMC data, we construct correlated noise. Here one can adjust the autocorrelation
time of the correlated noise to be close to what is observed in QMC data. The way we do this is discussed in more detail in Appendix \ref{app:covar}.
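A minimal synthetic-data generator of this kind can be sketched as follows (our own illustration: the actual covariance construction of Appendix \ref{app:covar} is more elaborate, and the exponential smoothing kernel used here for the noise correlations is an assumption, not the scheme used in the paper).

```python
import numpy as np

def synthetic_G(tau, a0=0.4, w0=1.0, sigma_c=1.0, n_grid=4000):
    """G(tau) for a delta function of weight a0 at w0 plus a truncated
    Gaussian continuum of weight 1-a0 (no weight below w0), assuming
    the T=0 kernel G(tau) = int dw S(w) exp(-w tau)."""
    w, dw = np.linspace(w0, w0 + 8.0 * sigma_c, n_grid, retstep=True)
    cont = np.exp(-0.5 * ((w - w0) / sigma_c) ** 2)
    cont *= (1.0 - a0) / (cont.sum() * dw)        # continuum weight 1 - a0
    return a0 * np.exp(-w0 * tau) + np.exp(-np.outer(tau, w)) @ cont * dw

def correlated_noise(n, sigma=1e-5, xi=1.0, seed=None):
    """Gaussian noise with autocorrelation time ~xi grid points, built by
    smoothing white noise with an exponential kernel and rescaling the
    standard deviation to sigma."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n + 20)
    kernel = np.exp(-np.arange(10) / xi)
    smooth = np.convolve(white, kernel, mode="valid")[:n]
    return sigma * smooth / smooth.std()
```

The noisy data $\bar G(\tau_i) = G(\tau_i) + \epsilon_i$ are then fed to the SAC procedure exactly as real QMC data would be.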
As we will discuss in Sec.~\ref{sec:hberg}, for the 2D Heisenberg model we find that the smallest relative weight of the magnon pole is $\approx 0.4$
at ${\bf q}=(\pi,0)$. We therefore
here test with $a_0=0.4$, set $\omega_0=1$ and take for the continuum a truncated Gaussian (with no weight below $\omega_0$) of width
$\sigma=1$. This situation of no gap between the $\delta$-function and the continuum should be expected to be very challenging for any
analytic continuation method. Extracting $a_0$ and $\omega_0$ by simply fitting an exponential $a_0{\rm e}^{-\omega_0\tau}$ to the QMC
data for large $\tau$ is difficult because there will never be any purely exponential decay (unlike the case where there is a gap
between the $\delta$-function and the continuum) and the best one could hope for is to extrapolate the parameters based on different
ranges of $\tau$ included in the fit, or with some more sophisticated analysis \cite{suwa16}. As we will see below, with noise levels in the
synthetic data similar to our real QMC data, the SAC procedure outlined above not only produces good results for $a_0$ and $\omega_0$
but also reproduces the continuum well.
\begin{figure}[t]
\centering
\includegraphics[width=75mm, clip]{fig02.pdf}
\caption{The goodness of the fit versus the amplitude $a_0$ of the lowest $\delta$-function in three runs with different
noise realizations for a synthetic spectrum with a $\delta$-function of weight $a_0=0.4$ at $\omega_0=1$. The continuum is
a Gaussian of width $1$ centered at the same $\omega_0$, with the weight below $\omega_0$ excluded. The noise level
is $\sigma_i \approx 10^{-5}$ and the errors are correlated with autocorrelation time $1$ according to the description
in Appendix \ref{app:covar}. The inset shows the data close to the $\langle \chi^2\rangle$ minimum on a different scale.}
\label{chi2fig}
\end{figure}
When looking for the minimum value of $\langle \chi^2\rangle$ versus $a_0$, it is better to start with a somewhat higher $\Theta$ than
what is obtained with the $\chi^2$ criterion in Eq.~(\ref{chi2crit}), so that the minimum can be more pronounced. Staying in the regime where the fit
can still be considered good and the effects on $S(\omega)$ of a slightly elevated $\Theta$ are very minor,
we aim for $\langle \chi^2\rangle \approx \chi^2_{\rm min} + bN_\tau$ with
$b=1$ or $2$ at the initial stage of fixing $\Theta$ without the special treatment of the lowest $\delta$-function. With the $\Theta$ so obtained, we scan
over $a_0$ with some step size $\Delta a_0$. The scan is terminated when $\langle \chi^2\rangle$ has increased well past its minimum.
The $\langle \chi^2\rangle$ curve can be analyzed later to locate the optimal $a_0$ value. If all the spectra generated in the scan have been
saved one can simply use the best one. Since $\langle \chi^2\rangle$ normally will be significantly smaller at the optimal value of $a_0$ than
at the starting point with $a_0=0$, there is typically no need for further adjustments of $\Theta$ later, though one can also do a final run
at the optimal $a_0$ with the criterion in Eq.~(\ref{chi2crit}).
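The scan over $a_0$ can be organized as below (a sketch with our own helper interface; `avg_chi2` stands for a full sampling run at fixed $\Theta$ and fixed $a_0$, returning $\langle\chi^2\rangle$).

```python
import numpy as np

def scan_a0(avg_chi2, da0=0.02, a0_max=1.0, rise_factor=1.5):
    """Scan the weight a0 of the macroscopic delta function upward from
    zero, recording <chi^2>(a0), and terminate once the curve has risen
    well past its running minimum; return the optimal a0 and the curve."""
    a_vals, chi_vals, best = [], [], np.inf
    a0 = 0.0
    while a0 <= a0_max:
        x2 = avg_chi2(a0)
        a_vals.append(a0)
        chi_vals.append(x2)
        best = min(best, x2)
        if a0 > 0.0 and x2 > rise_factor * best:
            break                       # well past the minimum; stop
        a0 += da0
    i = int(np.argmin(chi_vals))
    return a_vals[i], np.array(a_vals), np.array(chi_vals)
```

If all spectra generated during the scan are saved, the one at the returned optimal $a_0$ can be used directly, as described in the text.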
Fig.~\ref{chi2fig} shows typical $\langle \chi^2\rangle$ behaviors in tests with a spectrum consisting of a $\delta$-function and a continuum of relative size
and width similar to what we will report for the Heisenberg model in the next section. Here we used $80$ $\tau$-points on a uniform grid with spacing $\Delta_\tau=0.1$
and noise level $\sigma_i \approx 10^{-5}$ for $\tau$ points sufficiently far from $\tau=0$. We built in covariance similar to what is observed in the QMC data
(also discussed in Appendix \ref{app:covar}). We can indeed observe a clear minimum in the $\langle \chi^2\rangle$ curve close to the expected value $a_0=0.4$.
The deviations from this point reflect the effects of the statistical errors. In several runs at much smaller noise level, $\sigma_i \approx 10^{-6}$, the
minimum was always at $0.40$ in scans with $\Delta a_0=0.01$.
\begin{figure}[t]
\centering
\includegraphics[width=70mm, clip]{fig03.pdf}
\caption{Mean value of $\omega_0$ in SAC runs with four different noise realizations (shown in different colors) graphed vs
the amplitude parameter $a_0$. The noise level is $10^{-5}$ and $10^{-6}$ in (a) and (b), respectively, and the number of $\delta$-functions was
$N_\omega=1000$ and $N_\omega=2000$. In both cases $G(\tau)$ values were generated on a uniform grid with $\Delta_\tau=0.1$
for $\tau$ up to the point where the relative error exceeds $10\%$.}
\label{w0}
\end{figure}
The effects of the noise are smaller in the mean location $\langle \omega_0\rangle$ of the lowest $\delta$-function. Fig.~\ref{w0} shows
results versus $a_0$ from several different runs. At the correct value $a_0=0.4$, the error in the frequency is typically less than
$10^{-3}$ at noise level $10^{-5}$ and smaller still at $10^{-6}$. Considering the uncertainty in the location of the minimum
in Fig.~\ref{chi2fig}, the total error on $\omega_0$ of course becomes higher, but still the precision is typically better than $10^{-2}$ for
noise level $10^{-5}$ and much better at $10^{-6}$.
\begin{figure}[t]
\centering
\includegraphics[width=70mm, clip]{fig04.pdf}
\caption{Two typical SAC-computed spectral functions (red and blue curves, obtained with different noise realizations) compared with the underlying
true synthetic spectrum (thicker black curve, with the half-Gaussian containing $60\%$ of the weight). The parameters of the spectrum
are the same as in Fig.~\ref{chi2fig}. The noise level is $10^{-5}$ and $10^{-6}$ in (a) and (b), respectively.}
\label{sw}
\end{figure}
The full SAC spectral functions at both noise levels are shown in Fig.~\ref{sw}, for two noise realizations in each case
(with the spectra taken at their respective optimal $a_0$ values). When constructing the histogram for averaging the spectrum, here with a
bin width $\Delta\omega=0.005$, we also include the main $\delta$-peak. If the fluctuations in $\omega_0$ are large, a broadened peak will
result. Here the fluctuations are very small and no significant broadening is seen beyond that due to the histogram binning. As discussed
above, the location of the main peak is very well reproduced. The continuum typically shows the strongest deviations from the correct
curve close to the edge. The improvements when going from noise level $10^{-5}$ to $10^{-6}$ are obvious in the figure.
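Accumulating the sampled configurations into the histogram, with the macroscopic $\delta$-function treated on the same footing as the small ones, can be sketched as follows (our own minimal version).

```python
import numpy as np

def accumulate(hist, edges, freqs, amps):
    """Add one sampled configuration of delta functions (positions
    `freqs`, weights `amps`) to a running histogram with bin `edges`;
    the leading macroscopic delta is simply included in `freqs`, so
    fluctuations in omega_0 appear as a (normally very narrow) peak."""
    idx = np.searchsorted(edges, freqs, side="right") - 1
    ok = (idx >= 0) & (idx < hist.size)
    np.add.at(hist, idx[ok], amps[ok])   # handles repeated bin indices
    return hist

def average_spectrum(hist, edges, n_samples):
    """Normalize the accumulated weight to a spectral density S(omega)."""
    return hist / (n_samples * np.diff(edges))
```
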
Statistical errors of order $10^{-5}$ in the correlation function $G(\tau)$ normalized to $1$ at $\tau=0$ are relatively easy to achieve
in QMC calculations, and in many cases it is possible to go to $10^{-6}$ or even better. The tests here show that quite detailed information
can be obtained with such data for spectral functions with a prominent $\delta$-function at the lower edge followed by a broad continuum.
Importantly, the approach also involves the estimation of the statistical error on the weight of the $\delta$-function through a bootstrapping
procedure, and based on tests such as those above, as well as additional cases, we do not see any signs of further systematic errors in the
weight and location of the $\delta$-function, i.e., the method is unbiased in this regard. It is still of course not easy to discriminate
between a spectrum with an extremely narrow peak and one with a true $\delta$-function, but a broad peak will manifest itself in the loss of
amplitude $a_0$, accumulation of the ``background'' $\delta$-functions as a leading maximum at the edge, and in large fluctuations in the
lower edge $\omega_0$. We therefore have good reasons to believe that the approach is suitable in general both for reproducing spectra with
an extremely narrow peak and for detecting when such a peak is absent.
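The bootstrapping over QMC data bins mentioned above can be sketched as follows (our own generic version; `estimator` stands in for the full SAC procedure, returning, e.g., the optimal $a_0$).

```python
import numpy as np

def bootstrap_errors(bins, estimator, n_boot=100, seed=None):
    """Bootstrap error bar: resample the QMC bins (rows of `bins`, one
    bin average of G(tau) per row) with replacement, rerun the full
    analysis on each resampled mean, and take the spread of the results."""
    rng = np.random.default_rng(seed)
    n = bins.shape[0]
    vals = np.array([estimator(bins[rng.integers(0, n, n)].mean(axis=0))
                     for _ in range(n_boot)])
    return vals.mean(), vals.std()
```
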
\section{Heisenberg model}\label{sec:hberg}
In quantum magnetism the most important spectral function is the dynamic spin structure factor $S^\alpha ({\bf q},\omega)$, corresponding to
the correlations of the spin operator $S_{\bf q}^\alpha$ $(\alpha=x,y,z)$, the Fourier transform of the real-space spin operator $S_{\bf r}^\alpha$
as in Eq.~(\ref{szft}). This spectral function is directly proportional to the inelastic neutron-scattering cross-section at wavevector
transfer ${\bf q}$ and energy transfer $\omega$ \cite{formfactor}. In this paper we focus on isotropic spin systems and do not break the symmetry
in the finite-size calculations; thus all components $\alpha$ are the same, corresponding to the total cross-section averaged over the longitudinal
and transverse channels (i.e., as obtained in experiments with unpolarized neutrons). We consider the $z$-component in the SSE-QMC calculations and
hereafter use the notation $S({\bf q},\omega)$ without any
$\alpha$ superscript. With sufficiently large inverse temperature, here $\beta=4L$ in most QMC simulations, we obtain ground-state properties for all
practical purposes for ${\bf q}$ at which the gap $\omega_{\bf q}$ is sufficiently large. More precisely, we have well-converged data for all
${\bf q}$ except for ${\bf q}=(\pi,\pi)$, where the finite-size gap closes as $1/L^2$ (this being the lowest excitation in the Anderson
tower of quantum-rotor states), much faster than the lowest magnon excitation, which has a gap $\propto 1/L$. Therefore, in the following we do not
analyze the not fully-converged ${\bf q}=(\pi,\pi)$ data. In addition to the QMC calculations, where we go up to linear system sizes $L=48$,
we also report exact $T=0$ Lanczos ED results for lattices with up to $N=40$ spins.
For the square-lattice Heisenberg antiferromagnet, the spectral function in calculations such as conventional spinwave expansions
\cite{Igarashi92,Canali93,Igarashi05,syromyatnikov10} and continuous unitary transformations (an approach which also starts from spinwave theory,
formulated with the Dyson-Maleev representation of the spin operators) \cite{Powalski15,Powalski17} contains a dominant $\delta$-function at
the lowest frequency $\omega_{\bf q}$ and a continuum above this frequency,
\begin{equation}
S({\bf q},\omega)=S_0({\bf q})\delta(\omega-\omega_{\bf q})+S_c({\bf q},\omega),
\label{Sfunction}
\end{equation}
where $\omega_{\bf q}$ is also the single-magnon dispersion and $S_0({\bf q})$ is the spectral weight in the magnon pole.
We define the relative weight of the single-magnon contribution as
\begin{equation}
a_0({\bf q})=\frac{S_0({\bf q})}{\int d\omega S({\bf q},\omega)},
\label{define-a0}
\end{equation}
in the same way as the generic $a_0$ in Sec.~\ref{sec:sac}.
In principle the single-magnon pole may be broadened, but the damping
processes causing this are of very high order in the spinwave interaction terms and we are not
aware of any calculations estimating these effects quantitatively. In general it is expected that the broadening of the magnon pole itself should be very small
in bipartite (collinear AFM-ordered) Heisenberg systems \cite{chernyshev06,chernyshev09}. Accordingly, we can here make the simplifying assumption that there
is no broadening at $T=0$ of the single-magnon pole itself, i.e., that interaction effects are manifested as spectral weight transferred from the
$\delta$-function to the continuum above it. In contrast, in non-bipartite (frustrated) antiferromagnets with non-collinear order, there are
other lower-order magnon damping mechanisms present that cause significant broadening of the $\delta$-function \cite{chernyshev06,chernyshev09}.
In a previous QMC calculation where the analytic continuation was carried out by function fitting including a $\delta$-function edge \cite{sandvik01},
the continuum $S_c({\bf q},\omega)$ was modeled with a specific functional form with a number of parameters (adjusted to fit the QMC data).
Here we do not make any prior assumptions on the shape of the continuum, instead applying the SAC procedure with the parametrization illustrated
in Fig.~\ref{deltas}(b). If the $\delta$-function is actually substantially broadened, such that the separation of the spectrum into two distinct parts in
Eq.~(\ref{Sfunction}) becomes inappropriate, we expect our SAC approach to simply give a very small amplitude $S_0({\bf q})$ when this is the case.
We will see examples of this kind of full depletion of the magnon pole later in Sec.~\ref{sec:jq}, where other interactions are added to the Heisenberg
model (the $J$-$Q$ model). Later in this section we will also show some results for the Heisenberg model obtained without assuming a $\delta$-function
in Eq.~(\ref{Sfunction}).
To briefly recapitulate the version of SAC we developed in Sec.~\ref{sec:sac}, after fixing a proper sampling temperature using the spectrum without special
treatment of the leading $\delta$-function, i.e., the parametrization of the dynamic structure factor illustrated in Fig.~\ref{deltas}(a), in the final stage
of the sampling process we use the parametrization of Fig.~\ref{deltas}(b). The amplitude of the leading $\delta$-function is optimized based on the entropic
signal---a minimum in the mean goodness of the fit, $\langle \chi^2\rangle$. The location of this special $\delta$-function is sampled along with all the other
``small'' ones representing the continuum, and the spectral weight as a function of the frequency is collected in a histogram (here typically with bin size
$\Delta\omega=0.005$). Thus,
in the final averaged spectrum the magnon pole may be broadened by fluctuations in its location, but, as we will see below, the width is typically very narrow
and for all practical purposes it remains a $\delta$-function contribution. Here the level of the statistical QMC errors, with the definitions discussed in
Sec.~\ref{sec:sac}, is $10^{-5}$ or better (some raw data are shown in Appendix \ref{app:covar}). Extensive testing, exemplified in Fig.~\ref{sw}, demonstrates
that the method is well capable of reproducing the type of spectral function of interest here to a good degree with this data quality. The number
$N_\omega$ of $\delta$-functions required in the continuum in order to obtain well converged results depends on the quality of the QMC data. We have
carried out tests with different $N_\omega$ and find good convergence of the results when $N_\omega \approx 500-1000$. The results presented
below were obtained with $N_\omega=2000$.
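The convergence check in $N_\omega$ amounts to a simple doubling test, which can be sketched as below (our own helper interface; `run_sac` stands for a full sampling run at a given $N_\omega$, returning the binned spectrum).

```python
import numpy as np

def converged_in_n_omega(run_sac, n_start=250, tol=1e-2, n_max=4000):
    """Double the number of delta functions until the averaged spectrum
    stops changing, within tol in relative integrated absolute difference."""
    n = n_start
    S_prev = run_sac(n)
    while n < n_max:
        n *= 2
        S = run_sac(n)
        if np.abs(S - S_prev).sum() / np.abs(S).sum() < tol:
            return n, S                 # converged at this N_omega
        S_prev = S
    return n, S_prev
```
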
\subsection{Spectral functions at different wavevectors}
For an overview, we first show the spectral function for the $L=48$ system with a color plot in Fig.~\ref{haf-1}, where the $x$-axis
corresponds to the wavevector along a standard path in the BZ and the $y$-axis is the frequency $\omega$. The location of the magnon
pole (the dispersion relation) is indicated, and for the continuum a color coding is used. We also show an upper spectral bound defined
such that $95\%$ of the weight for each ${\bf q}$ falls between the two curves. Due to matrix-element effects related to conservation of
the magnetization ($S^z_{q=0}$) of the Heisenberg model, the total spectral weight vanishes as $q\to 0$ and it is seen in Fig.~\ref{haf-1} to
be small in a wide region around this point. Both the total weight and the low-energy scattering are maximized as ${\bf q} \to (\pi,\pi)$. As
mentioned above, exactly at $(\pi,\pi)$ our calculations are not $T \to 0$ converged, and we therefore do not show any results for this case.
The width in $\omega$ of the region in which
$95\%$ of the weight is concentrated is seen to be almost independent of ${\bf q}$. However, since the total spectral weight for ${\bf q}$ close
to $(\pi,\pi)$ is very large there is significant weight extending up to $\omega \approx 6$, while in other ${\bf q}$ regions the weight extends
roughly up to $4.5-5$ [except close to $(0,0)$, where no significant weight can be discerned in the density plot with the color coding used].
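The upper bound drawn in Fig.~\ref{haf-1} is simply a weight quantile; for each ${\bf q}$ it can be computed as follows (our own sketch for a spectrum on a uniform frequency grid).

```python
import numpy as np

def weight_quantile(omega, S, frac=0.95):
    """Frequency below which a fraction `frac` of the spectral weight of
    S(omega) lies; used here for the 95% upper bound of the spectrum."""
    cum = np.cumsum(S)
    return omega[np.searchsorted(cum / cum[-1], frac)]
```
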
\begin{figure}[t]
\centering
\includegraphics[width=83mm]{fig05.pdf}
\caption{The dynamic structure factor of the 2D Heisenberg model computed on an $L=48$ lattice along
the path in the BZ indicated on the $x$-axis. The $y$-axis is the energy transfer $\omega$ in units of the coupling $J$.
The magnon peak ($\delta$-function) at the lower edge of the spectrum is marked in white irrespective of its weight, while
the continuum is shown with color coding on an arbitrary scale where the highest value is $1$. The upper white curve
corresponds to the location where, for given ${\bf q}$, $5\%$ of the spectral weight remains above it.}
\label{haf-1}
\end{figure}
More detailed frequency profiles at four different wavevectors are shown in Fig.~\ref{haf-2}. In addition to the points
$(\pi,0)$ and $(\pi/2,\pi/2)$, on which many prior works have focused, results for the points closest to the gapless points
$(0,0)$ and $(\pi,\pi)$ are also shown. The results at $(\pi,0)$ and $(\pi/2,\pi/2)$ are in general in good agreement with the
previous QMC calculations \cite{sandvik01} in which the $\delta$-function contributions were also explicitly included in the
parametrization of the spectrum. The relative weight in the $\delta$-function, indicated in each panel in Fig.~\ref{haf-2}, is also in reasonably
good agreement with series expansions around the Ising limit \cite{zheng05}. The relative spectral weight of the continuum, $1-a_0({\bf q})$,
can be taken as a measure of the effect of spinwave interactions, which leads to the multi-magnon contributions often assumed to be responsible
for the continuum. We will argue later that the particularly large continuum at $(\pi,0)$ is actually due to nearly deconfined spinons.
It is not clear whether the small maxima to the right of the $\delta$-function, which we see consistently throughout the BZ, are real spectral
features or whether they reflect the statistical errors of the QMC data in a way similar to the most common distortion resulting from noisy
synthetic data, as seen in the tests presented in Fig.~\ref{sw}. The error level of the QMC data in all cases is a bit below $10^{-5}$, i.e.,
similar to Fig.~\ref{sw}(a). The behavior does not suggest any gap between the $\delta$-functions and the continuum.
\begin{figure}[t]
\centering
\includegraphics[width=65mm]{fig06.pdf}
\caption{Dynamic structure factor for $L=48$ system at four different momenta. The smallest momentum increment $2\pi/L$ is
denoted by $k$ in (a) and (d). The relative amplitude of the magnon pole is indicated in each panel.}
\label{haf-2}
\end{figure}
\subsection{Finite-size effects}
It is important to investigate the size dependence of the spectral functions. For very small lattices at $T=0$, $S({\bf q},\omega)$ computed
according to Eq.~(\ref{somegasum}) for each ${\bf q}$ contains only a rather small number of $\delta$-functions and it is not possible to draw
a curve approximating a smooth continuum following the leading $\delta$-function. Therefore, the SAC procedure does not reproduce exact Lanczos
results very well---we obtain a single broad continuum following the leading $\delta$-function, instead of several small peaks. Because the
continuum also has weight close to the leading $\delta$-function, between it and the second peak of the actual spectrum, the SAC method also
slightly underestimates the weight in the first $\delta$-function. If the continuum emerging as the system size increases indeed is, as expected,
broad and does not exhibit any unresolvable fine-structure, the tests in Sec.~\ref{sec:sac} suggest that our methods should be able to reproduce it.
\begin{figure}[t]
\centering
\includegraphics[width=65mm]{fig07.pdf}
\caption{Size dependence of the single-magnon energy (a) and weight in the magnon pole (b) at wavevectors ${\bf q}=(\pi,0)$, $(\pi/2,\pi/2)$,
and $(\pi,\pi)$. Lanczos ED results for small systems ($L\times L$ lattices with $L=4$ and $L=6$ as well as tilted lattices
with $N=20,32$, and $40$ sites) are shown as open circles and QMC-SAC data are presented as solid circles with error bars. The error bars were
estimated by bootstrap analysis (i.e., carrying out the SAC procedure multiple times with random samples of the QMC data bins).}
\label{haf-6}
\end{figure}
For the $6\times 6$ lattice at ${\bf q}=(\pi,0)$, our SAC result underestimates the weight in the magnon
pole by about $5\%$, while the energy deviates by less than $1\%$. We expect these systematic errors to decrease with increasing system size,
for the reasons explained above. Fig.~\ref{haf-6} shows the size dependence of the single-magnon weight and energy at wavevectors
${\bf q}=(\pi,0)$, $(\pi/2,\pi/2)$, and $(\pi,\pi)$. At $(\pi,\pi)$ we only have Lanczos results, but even with the small
systems accessible with this method it can be seen that indeed the energy decays toward zero. The magnon weight is large, converging rapidly toward
about $97\%$, which is similar to the series-expansion result \cite{zheng05}. The energies at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$ also converge rapidly,
with no detectable differences between $L=32$ and $L=48$, and a smooth transition between the ED results for small systems and QMC results for larger sizes.
The magnon weights at these wavevectors show more substantial size dependence, though again the results for the two largest sizes agree within error
bars. Here the connection between the ED and QMC results does not appear completely smooth at $(\pi,0)$, due to the difficulties for the SAC method to deal
with a spectrum with a small number of $\delta$-functions. Nevertheless, even the ED results indicate a drop in the amplitude for the larger system
sizes. The trends in $1/L$ for the QMC results suggest that the weight converges to slightly below $40\%$ at ${\bf q}=(\pi,0)$ and slightly below
$70\%$ at ${\bf q}=(\pi/2,\pi/2)$, both in very good agreement with the series-expansion results \cite{zheng05}. This agreement with a completely
different method provides strong support to the accuracy of the QMC-SAC procedures. The energies also agree very well with the previous QMC results
where particular functional forms were used to model the continuum, and the magnon amplitudes agree within $5-10\%$ (with the values indicated
in the insets of Fig.~3 in Ref.~\onlinecite{sandvik01}).
\subsection{Comparisons with experiments}
In the discussion of the recent neutron-scattering experiments on CFTD \cite{piazza15}, it was argued that the large continuum in the
$(\pi,0)$ spectrum is due to fully deconfined spinons, and a variational RVB wavefunction was used to support this interpretation. We will
discuss our different picture of nearly deconfined spinons further in Sec.~\ref{sec:heff}. Here we first compare the $(\pi,0)$ and $(\pi/2,\pi/2)$
results with the experimental data without invoking any interpretation. The experimental scattering cross section in Ref.~\onlinecite{piazza15}
was shown versus the frequency $\omega/J$ normalized by the estimated value of the coupling constant ($J \approx 6.11$ meV). Keeping the same scale,
we should only convolute our spectral functions with an experimental Gaussian broadening. We optimize this broadening to match the data and find
that a half-width $\sigma=0.12J$ of the Gaussian works well for both wavevectors---which is the same as the instrumental broadening reported
for the experiment \cite{piazza15}. Since the neutron data are presented with an arbitrary scale for the scattering intensity we also have to multiply
our $S({\bf q},\omega)$ for each ${\bf q}$ by a common factor. The agreement with the data at both $(\pi,0)$ and $(\pi/2,\pi/2)$ is very good, and
can be further improved by dividing $\omega/J$ in the experimental data by $1.02$, corresponding to $J \approx 6.23$ meV, which should still
be within the errors of the experimentally estimated value. As shown in Fig.~\ref{haf-3}, the agreement with the experiments is not perfect but
probably as good as could possibly be expected, considering small effects of the weakly ${\bf q}$-dependent form factor \cite{formfactor}
and some influence of weak interactions beyond $J$ (longer-range exchange, ring exchange, spin-phonon couplings, disorder, etc.).
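The comparison thus involves only a resolution convolution and one overall scale factor per wavevector. The broadening step can be sketched as below (our own version, treating $\sigma$ as the width parameter of a normalized Gaussian).

```python
import numpy as np

def broaden(omega, S, sigma):
    """Convolute S(omega), given on a uniform grid, with a normalized
    Gaussian of width sigma (e.g., the resolution sigma = 0.12 J)."""
    dw = omega[1] - omega[0]
    n = int(np.ceil(4.0 * sigma / dw))                  # +-4 sigma window
    g = np.exp(-0.5 * (np.arange(-n, n + 1) * dw / sigma) ** 2)
    return np.convolve(S, g / g.sum(), mode="same")
```
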
\begin{figure}[t]
\centering
\includegraphics[width=75mm]{fig08.pdf}
\caption{Comparison of the CFTD experimental data \cite{piazza15} (the full scattering cross section corresponding to unpolarized neutrons)
and our QMC-SAC spectral functions at wavevectors ${\bf q}=(\pi,0)$ and ${\bf q}=(\pi/2,\pi/2)$.
To account for experimental resolution, we have convoluted the QMC-SAC spectral functions in Figs.~\ref{haf-2}(b,c) with a common Gaussian broadening
(half-width $\sigma=0.12J$). We have renormalized the exchange constant by a factor $1.02$ relative to the original value in Ref.~\onlinecite{piazza15}, and
to match the arbitrary factor in the experimental data we have further multiplied both of our spectra by a factor $\approx 50$.}
\label{haf-3}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig09.pdf}
\caption{Single-magnon dispersion $\omega_{\bf q}$ along a representative path of the magnetic BZ. The CFTD experimental data from Ref.~\onlinecite{piazza15}
are shown as blue squares and the QMC-SAC data (the location of the magnon pole) are shown with red circles. We also show the linear SWT dispersion
(black curve) adjusted by a common factor corresponding to the exact spinwave velocity $c=1.65847$ \cite{sen15}.}
\label{haf-4}
\end{figure}
The single-magnon dispersion, the energy $\omega_{\bf q}$ in Eq.~(\ref{Sfunction}), is compared with the corresponding experimental peak
position in Fig.~\ref{haf-4}. The linear spinwave dispersion is shown as a reference, using the best available value of the renormalized
velocity $c=1.65847$ \cite{sen15}. Our results agree very well with the spinwave dispersion at low energies, and with the experimental CFTD
data \cite{piazza15} also in the high-energy regions where the spinwave results are not applicable. The only statistically significant
deviation, though rather small, is at ${\bf q} \approx (\pi/2,\pi/2)$, where the experimental energy is lower (as seen also in the peak
location in Fig.~\ref{haf-3}). Still, overall, one must conclude that CFTD is an excellent realization of the square-lattice Heisenberg model
at the level of current state-of-the-art experiments. It would certainly be interesting to improve the frequency resolution further and try to
analyze higher-order effects, which should become possible in future neutron scattering experiments.
\subsection{Wavevector dependence of the single-magnon amplitude}
We next look at the variation of the relative magnon weight $a_0({\bf q})$ along the representative path of the BZ for $L=48$,
shown in Fig.~\ref{haf-5}. For ${\bf q}\to(0,0)$ and $(\pi,\pi)$ the weight $a_0$ increases and appears to approach values close
to $1$. From the results exactly at $(\pi,\pi)$ in Fig.~\ref{haf-6} we know that in this case the remaining weight in the continuum
should be about $3\%$, which is also in good agreement with the series results in Ref.~\onlinecite{zheng05}, where a similar non-zero multi-magnon
weight was also found as $q \to 0$. At ${\bf q}=(\pi/2,\pi/2)$, as also shown in Fig.~\ref{haf-6}, the magnon pole contains about
$70\%$ of the weight, while at ${\bf q}=(\pi,0)$ this weight is reduced to about $40\%$. Both of these are also in good agreement with
Ref.~\onlinecite{zheng05}, and in fact throughout the BZ path we find no significant deviations from the series results. This again reaffirms
the ability of the SAC procedure to correctly optimize the amplitude of the leading $\delta$-function. It should be noted that the series
expansion around the Ising model does not produce the full spectral functions, only the single-magnon dispersion and weight.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig10.pdf}
\caption{Relative spectral weight of the single-magnon pole along the representative path in the BZ for the $L=48$ Heisenberg system.
Error bars were estimated by bootstrapping.}
\label{haf-5}
\end{figure}
The depletion seen in Fig.~\ref{haf-5} of the single-magnon weight in a neighborhood of ${\bf q}=(\pi,0)$ can also be
related to the experimental data for CFTD. In Fig.~1(a) of Ref.~\onlinecite{piazza15}, a color coding is used for the scattering
intensity such that even a modest reduction in the coherent single-magnon weight has a large visual impact. The region
in which the spectral function is smeared out with no sharp feature in this representation corresponds closely to the
region where the single-magnon weight drops from about $60\%$ to $40\%$ in our Fig.~\ref{haf-5}.
\subsection{Alternative ways of analytic continuation}
One could of course argue that the existence of the magnon pole at $(\pi,0)$ is not proven by our calculations since it has
been built into our parametrization of the spectral function. While it is clear that our approach cannot distinguish between a very
narrow peak and a $\delta$-function, if the broadening is significant for some ${\bf q}$, so that the main peak essentially becomes
part of the continuum, we would expect the optimal amplitude $a_0({\bf q})$ to be very small or vanish. Nevertheless, to explore the
possibility of spectra without a magnon pole, we have also carried out the analytic continuation in two alternative ways, using
the parametrization in Fig.~\ref{deltas}(a) without special treatment of the lowest frequency, or by imposing a lower frequency
bound.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig11.pdf}
\caption{Spectral functions at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$ obtained using unconstrained SAC with the
parametrization in Fig.~\ref{deltas}(a). The insets show comparisons with the experimental data \cite{piazza15}, where
we have only adjusted a common amplitude to match the areas under the peaks.}
\label{sqw0}
\end{figure}
Sampling without any constraints with $N_\omega=1000$
$\delta$-functions gives the results at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$ shown in Fig.~\ref{sqw0}. Here one can distinguish a peak
in each case in the general neighborhood of where the $\delta$-function is located in Figs.~\ref{haf-2}(b,c), with the maximum shifted
slightly to higher frequencies and weight extending significantly to lower frequencies. At
${\bf q}=(\pi/2,\pi/2)$ there is now a shallow minimum before a low broad distribution at higher energies. This kind of behavior is typical
for analytic continuation methods when there is too much broadening at low frequency, which leads to a compensating (in order to match
the QMC data) depletion of weight above the main peak. Similarly, the up-shift of the location of the peak frequency at both ${\bf q}$
relative to Fig.~\ref{haf-2} is due to there being weight also at $\omega < \omega_{\bf q}$ where there should be none or much less weight.
In the insets of Fig.~\ref{sqw0} we show comparisons with the CFTD experimental data. Here the SAC spectral functions are broader than
the experimental profiles and we have not applied any additional broadening. It is clear that the SAC results here do not match the
experiments as well as in Fig.~\ref{haf-3}, most likely because the QMC data are not sufficiently precise to reproduce a narrow magnon
pole, thus also leading to other distortions at higher energy.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig12.pdf}
\caption{Spectral functions obtained using sampling with the parametrization in
Fig.~\ref{deltas}(a) under the constraint that no weight falls below the lower bounds determined with a $\delta$-function
at the lower edge (Fig.~\ref{haf-2}); $\omega_q=2.13$ and $2.40$ for ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$, respectively.
The inset shows the results on a different scale to make the continua better visible. The smaller panels within the inset show comparisons
with the experimental data, where we have broadened the numerical results by Gaussian convolution and adjusted a common amplitude.}
\label{sqw2}
\end{figure}
In order to reduce the broadening and other distortions arising as a consequence of spectral weight spreading out in the SAC sampling
procedure due to entropic pressure \cite{sandvik16} into regions where there should be no weight, we also carried out SAC runs with the
constraint that no $\delta$-function can go below the lowest energy determined with the dominant $\delta$-function present. These energies,
$\omega_q = 2.13$ and $2.40$ for ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$, respectively, are in excellent agreement with the series expansions around
the Ising limit \cite{zheng05} and, in the case of $(\pi/2,\pi/2)$, also with the well-converged high-order spin-wave expansion
\cite{Igarashi92,Canali93,Igarashi05,syromyatnikov10}. There is therefore good reason to trust these as being close to the actual
energies. As seen in Fig.~\ref{sqw2}, there is a dramatic effect of imposing the lower bound---the main peak is much higher and narrower
than in Fig.~\ref{sqw0} and an edge is formed at $\omega_q$. Most likely the peaks are still broadened on the right side,
and again this broadening has as a consequence a local minimum in spectral weight before a broad second peak, which is now seen
for both ${\bf q}$ points. In this case the comparisons with the experiments (insets of Fig.~\ref{sqw2}) are overall somewhat better than
with the completely unconstrained sampling in Fig.~\ref{sqw0}, but still we see signs of a depletion of spectral weight to the right of
the main peak that is not present in the experimental data. We take the $\omega_0$-constrained spectra as upper limits in terms of the
widths of the main magnon peaks, and most likely the true spectra are much closer to those obtained with the optimized $\delta$-functions
in Fig.~\ref{haf-2}.
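The Gaussian convolution mentioned in the caption of Fig.~\ref{sqw2} is a standard weight-preserving broadening; the following minimal sketch (grid, width, and pole position are arbitrary illustration values, not those used in the figure) shows the operation:

```python
import numpy as np

def gaussian_broaden(omega, s, fwhm):
    """Convolve a spectrum s on a uniform omega grid with a unit-weight Gaussian."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    dw = omega[1] - omega[0]
    half = int(5 * sigma / dw)
    kernel = np.exp(-0.5 * (dw * np.arange(-half, half + 1) / sigma) ** 2)
    kernel /= kernel.sum()            # normalize: total spectral weight is preserved
    return np.convolve(s, kernel, mode="same")

omega = np.linspace(0.0, 6.0, 601)
s = np.zeros_like(omega)
s[np.argmin(np.abs(omega - 2.13))] = 1.0   # delta-function pole at omega = 2.13
s_b = gaussian_broaden(omega, s, fwhm=0.3)
```

Broadened curves of this kind can then be compared directly with resolution-limited experimental line shapes.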
In summary, the results of these alternative ways of carrying out the SAC process reaffirm that there indeed should be a leading very narrow
magnon pole, close to a $\delta$-function, at both ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$. While the pole strictly speaking may have some damping,
our good fits with a pure $\delta$-function in Fig.~\ref{haf-3} indicate that such damping should be extremely weak, as also expected on
theoretical grounds \cite{chernyshev06,chernyshev09}.
\section{$J$-$Q$ model}
\label{sec:jq}
The AFM order parameter in the ground state of the Heisenberg model is significantly reduced by zero-point quantum fluctuations from its classical
value $m_s=1/2$ to about $0.307$ \cite{Reger88,sandvik10a}. It can be further reduced when frustrated interactions are included, eventually leading
to a quantum-phase transition into a non-magnetic state, e.g., in the frustrated $J_1$-$J_2$ Heisenberg model
\cite{dagotto89,gelfand89,hu13,gong14,morita15,wang17}. In the
$J$-$Q$ model \cite{sandvik07}, the quantum phase transition driven by the four-spin coupling $Q$ appears to be a realization of the deconfined
quantum critical point \cite{shao16}, which separates the AFM state and a spontaneously dimerized ground state, a columnar VBS. The model is amenable
to large-scale QMC simulations and we consider it here in order to investigate the evolution of the dynamic structure factor upon reduction of
the AFM order and approaching spinon deconfinement.
The $J$-$Q$ Hamiltonian can be written as
\cite{sandvik07},
\begin{equation}
H = -J \sum_{\langle ij\rangle} P_{ij} -Q \sum_{\langle ijkl\rangle} P_{ij}P_{kl},
\label{jqham}
\end{equation}
where $P_{ij}$ is a singlet projector on sites $ij$,
\begin{equation}
P_{ij} = 1/4 - {\bf S}_i \cdot {\bf S}_j,
\end{equation}
here on the nearest-neighbor sites. In the four-spin interaction $Q$ the site pairs $ij$ and $kl$ form horizontal and vertical edges
of $2\times 2$ plaquettes. All translations and $90^\circ$ rotations of the operators are included in Eq.~(\ref{jqham}) so that all
the symmetries of the square lattice are preserved.
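As a concrete check of the notation, the two-site singlet projector can be constructed explicitly as a $4\times 4$ matrix; the short illustration below (ours, using the standard spin-$1/2$ matrices) verifies that $P_{ij}$ has eigenvalue $1$ on the singlet and $0$ on the triplets, so that $-J\sum_{\langle ij\rangle} P_{ij}$ is the Heisenberg exchange up to a constant.

```python
import numpy as np

# Spin-1/2 operators in the S^z basis (|up>, |down>)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet projector P_ij = 1/4 - S_i . S_j on the two-site Hilbert space
P = 0.25 * np.eye(4) - sum(np.kron(s, s) for s in (sx, sy, sz))

assert np.allclose(P @ P, P)                      # P is a projector
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)    # (|ud> - |du>)/sqrt(2)
assert np.allclose(P @ singlet, singlet)          # eigenvalue 1 on the singlet
triplet0 = np.array([0, 1, 1, 0]) / np.sqrt(2)
assert np.allclose(P @ triplet0, 0 * triplet0)    # annihilates the triplets
assert abs(np.trace(P).real - 1.0) < 1e-12        # rank 1
```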
\begin{figure}[t]
\centering
\includegraphics[width=65mm]{fig13.pdf}
\caption{Results for the $J$-$Q$ model at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$, calculated on the $L=32$ lattice.
The lowest excitation energy $\omega_q$ (a) and the relative weight of the single-magnon contribution (b) are shown as functions
of the coupling ratio $Q/J$ from the Heisenberg limit ($Q/J=0$) to the deconfined quantum critical point ($Q_c/J\approx 22$).}
\label{jq}
\end{figure}
In addition to strong numerical evidence of a continuous AFM--VBS transition in the $J$-$Q$ model (most recently in Ref.~\onlinecite{shao16}),
there are also results pointing directly to spinon excitations at the critical point, in accord with the scenario of deconfined quantum criticality
\cite{senthil04a,senthil04b} (where, strictly speaking, there may be weak residual spinon-spinon interactions, though those may be important
in practice only at very low energies \cite{tang13}). Moreover, the set of gapless points is expanded from just the points ${\bf q}=(0,0)$ and
$(\pi,\pi)$ in the N\'eel state to also ${\bf q}=(\pi,0)$ and $(0,\pi)$ \cite{spanu06,suwa16} at the critical point. Recent results point to
linearly dispersing spinons with a common velocity around all the gapless points \cite{suwa16}.
Here our primary aim is to study how the magnon poles and continua in $S({\bf q},\omega)$ at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$ evolve as the coupling
ratio $Q/J$ is increased. We use the same SAC parametrization as in the previous section, with a leading $\delta$-function whose amplitude is optimized by
finding the minimum in $\langle \chi^2\rangle$ versus $a_0({\bf q})$. We first consider the $L=32$ lattice and show our results for the energy and the
relative amplitude in Fig.~\ref{jq} as functions of the coupling ratio $Q/J$ all the way from the Heisenberg limit to the deconfined quantum critical
point. Here the most notable aspect is the rapid drop in the magnon weight at ${\bf q}=(\pi,0)$, even for small values of
$Q/J$, while at ${\bf q}=(\pi/2,\pi/2)$ the weight stays large, $70$--$80\%$, over the entire range. The energies depend on the normalization
and here we have chosen $J + Q$ as the unit. We know from past work that the ${\bf q}=(\pi,0)$ energy at $Q_c/J$ vanishes in the thermodynamic limit
but the reduction in the finite-size gap with the system size is rather slow \cite{suwa16}, and for the $L=32$ lattice considered here we are still
far from the gapless behavior.
\begin{figure}[t]
\centering
\includegraphics[width=65mm]{fig14.pdf}
\caption{Size dependence of the excitation energy $\omega_{\bf q}$ (a) and the relative weight of the magnon pole $a_0({\bf q})$ (b)
at ${\bf q}=(\pi,0)$ close to the Heisenberg limit of the $J$-$Q$ model.}
\label{fss}
\end{figure}
We focus on the effects of small $Q$, where reliable extrapolations to infinite size are possible, and show the size dependence of the lowest excitation energy
and the magnon amplitude at ${\bf q}=(\pi,0)$ for several cases in Fig.~\ref{fss}. We again show Lanczos ED results for small systems and QMC-SAC
results for larger sizes. For the only common system size, $L=6$, the energies agree very well, as in the pure Heisenberg case discussed in the previous
section, while the QMC-SAC calculations underestimate the magnon weight by a few percent due to the inability to resolve the details of a spectrum
consisting of just a small number of $\delta$-functions. The most interesting feature is the dramatic reduction in the magnon weight even for very small
ratios $Q/J$. For $Q/J=0.25$ and $0.5$, the size dependence indicates small remaining magnon poles, while at $Q/J=1$ it appears that the $\delta$-function
completely vanishes in the thermodynamic limit.
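The thermodynamic-limit statements above rest on extrapolations of the finite-size data. Schematically (with hypothetical numbers chosen only for illustration, not our actual data), such an extrapolation is a low-order polynomial fit in $1/L$ evaluated at $1/L=0$:

```python
import numpy as np

# Hypothetical finite-size pole weights a0(pi,0) (illustration only, not our data)
L = np.array([8, 12, 16, 24, 32, 48])
a0 = np.array([0.31, 0.27, 0.25, 0.23, 0.22, 0.21])

coef = np.polyfit(1.0 / L, a0, 2)   # quadratic in 1/L
a0_inf = np.polyval(coef, 0.0)      # extrapolated thermodynamic-limit weight
```

The appropriate fitting form and the range of sizes included must of course be judged case by case from the quality of the fit.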
In Fig.~\ref{sqw4} we show the full ${\bf q}=(\pi,0)$ dynamic structure factor at $Q/J=4$, obtained with both the parametrizations in Fig.~\ref{deltas}.
The optimal weight of the leading $\delta$-function is only $1.4\%$ for this $L=32$ lattice, and the finite-size behavior indicates that no magnon pole at all
should be present in the thermodynamic limit in this case. When no leading $\delta$-function is included in the SAC treatment, i.e., with unrestricted
SAC sampling with the parametrization in Fig.~\ref{deltas}(a), there is a little shoulder close to where the $\delta$-function is located with the other
parametrization. The differences at higher frequencies are very minor. This is very different from the large change in the entire spectrum when
unrestricted sampling is used for the same wavevector in the pure Heisenberg model, Fig.~\ref{sqw0}, which is clearly because of the much larger magnon
pole in the latter case. This comparison also reinforces the ability of our SAC method to extract the correct weight of the leading $\delta$-function.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig15.pdf}
\caption{The ${\bf q}=(\pi,0)$ dynamic structure factor of the $J$-$Q$ model at $Q/J=4$ obtained using SAC with the two parametrizations
of the spectrum in Figs.~\ref{deltas}(a,b). The relative weight of the leading $\delta$-function in (b) is $1.4\%$.}
\label{sqw4}
\end{figure}
These results for the $J$-$Q$ model show that the magnon picture at ${\bf q}=(\pi,0)$ fails even with a rather weak deformation of
the Heisenberg model. Thus, it seems likely that the reduced excitation energy and coherent single-magnon weight at ${\bf q}=(\pi,0)$,
observed in the Heisenberg model as well as experimentally in CFTD, is a precursor to deconfined quantum criticality. If that is indeed
the case, then it may be possible not only to describe the continuum in $S({\bf q},\omega)$ around ${\bf q}=(\pi,0)$ in terms of spinons
\cite{piazza15}, but also to characterize the influence of spinons on the remaining sharp magnon pole. We next consider a simple effective
Hamiltonian to address this possibility.
\section{Nature of the excitations}
\label{sec:heff}
Motivated by the numerical results presented in Secs.~\ref{sec:hberg} and \ref{sec:jq}, we here propose a mechanism
for the excitations in the square-lattice Heisenberg model, where the magnons have an internal structure corresponding to
a mixing with spinons at higher energy. Our physical picture is that the magnon resonates in and out of the spinon space,
which, in the absence of spinon-magnon couplings, exists above the bare magnon energy. We will construct a simple effective
coupled magnon-spinon model describing such a mechanism. The model resembles the simplest model for the exciton-polariton
problem, where the mixing is between light and a bound electron-hole pair (exciton). Here a bare photon can be absorbed by
generating an exciton, and subsequently the electron and hole can recombine and emit a photon. This resulting collective
resonating electron-hole-photon state is called an exciton-polariton \cite{hopfield58,mahan}. The spinon-magnon model introduced
here is more complex, because the magnon interacts not just with a single bound state but with a whole continuum of spinon states
with or without (depending on model parameters) spinon-spinon interactions.
We start below by discussing the dispersion relations of the bare magnon and spinons, and then present details of the mixing process
and the effective Hamiltonian. We will show that the model can reproduce the salient spectral features found for the Heisenberg
and $J$-$Q$ models in the preceding section, in particular the differences between wavevectors $(\pi,0)$ and $(\pi/2,\pi/2)$ and
the evolution of the spectral features when the $Q$ interaction is turned on, which in the effective model corresponds to
lowering the bare spinon energy.
\subsection{Effective Hamiltonian}
In spinwave theory, the excitations of the square-lattice Heisenberg antiferromagnet are described as magnons, which to order
$1/S$ disperse according to
\begin{equation}
\omega^{\text m}({\bf q})=c^{\text m}\sqrt{2-\frac{1}{2}\left[\cos(q_x)+\cos(q_y)\right]^2},
\label{m-dispersion}
\end{equation}
where $c^{\text m}$ is the spin wave velocity (the value of which is $c^{\text m}=1.637412$ when calculated to this order). We will take
this form of $\omega^{\text m}({\bf q})$ as the bare magnon energy in our model but treat the velocity as an adjustable bare parameter.
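A point worth emphasizing numerically: Eq.~(\ref{m-dispersion}) is gapless at $(0,0)$ and $(\pi,\pi)$ and exactly degenerate at $(\pi,0)$ and $(\pi/2,\pi/2)$, so the energy difference between these two wavevectors found in the QMC calculations must originate beyond this order of spin-wave theory. A quick check (ours, for illustration):

```python
import numpy as np

def omega_m(qx, qy, c_m=1.637412):
    """Bare magnon dispersion, Eq. (m-dispersion)."""
    return c_m * np.sqrt(2.0 - 0.5 * (np.cos(qx) + np.cos(qy)) ** 2)

pi = np.pi
e_pi0 = omega_m(pi, 0.0)          # energy at (pi,0)
e_pp2 = omega_m(pi / 2, pi / 2)   # energy at (pi/2,pi/2): identical at this order
```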
Spinons are well understood in the $S=1/2$ AFM Heisenberg chain, where the dispersion relation is \cite{faddeev81,moller81}
\begin{equation}
\omega(k)=\frac{\pi}{2}\sin(k),
\label{1d-omega}
\end{equation}
and an $S=1$ excitation with wavenumber ${q}$ can exist at all energies $\omega({k}_1)+\omega({k}_2)$ with ${k}_1+{k}_2={q}$.
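From Eq.~(\ref{1d-omega}), the boundaries of the two-spinon continuum at total momentum $q$ follow by extremizing $\omega(k_1)+\omega(q-k_1)$ over $k_1$. A brute-force scan (our own sketch) reproduces the well-known bounds $\tfrac{\pi}{2}|\sin q|$ and $\pi|\sin(q/2)|$:

```python
import numpy as np

def omega_1d(k):
    """Single-spinon dispersion, Eq. (1d-omega), extended periodically via |sin|."""
    return 0.5 * np.pi * np.abs(np.sin(k))

q = 0.7 * np.pi                        # arbitrary total momentum for illustration
k1 = np.linspace(0.0, 2.0 * np.pi, 20001)
e2 = omega_1d(k1) + omega_1d(q - k1)   # two-spinon energies at total momentum q

lower, upper = e2.min(), e2.max()      # numerical continuum boundaries
```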
In 2D, we use as input results of a recent QMC study of the excitation spectrum at the deconfined quantum critical point of the $J$-$Q$ model \cite{suwa16},
where four gapless points at ${\bf q}=(0,0),(\pi,0),(0,\pi)$, and $(\pi,\pi)$ were found in the $S=1$ excitation spectrum (confirming a general expectation
of a system at a continuous AFM--VBS transition \cite{spanu06}). This dispersion relation is interpreted as the lower bound of a two-spinon continuum,
which should also be the dispersion relation for a single spinon. In the effective model we will use the simplest spinon dispersion relation with
the above four gapless points and shape in general agreement with the findings in Ref.~\onlinecite{suwa16},
\begin{equation}
\omega^\text{s}({\bf q})=c^{\text s}\sqrt{1-\cos^2(q_x)\cos^2(q_y)},
\label{free-s-dispersion}
\end{equation}
which can also be regarded as a 2D generalization of the 1D spinon dispersion, Eq.~(\ref{1d-omega}). The common velocity $c^{\text s}$ at the gapless
points was determined for the critical $J$-$Q$ model \cite{suwa16} but here we will regard it as a free parameter.
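Eq.~(\ref{free-s-dispersion}) can be checked to vanish at exactly the four points listed above, while attaining its maximal value $c^{\text s}$ at, e.g., ${\bf q}=(\pi/2,\pi/2)$ (a simple numerical verification of the stated properties):

```python
import numpy as np

def omega_s(qx, qy, c_s=1.0):
    """Bare spinon dispersion, Eq. (free-s-dispersion)."""
    return c_s * np.sqrt(1.0 - np.cos(qx) ** 2 * np.cos(qy) ** 2)

pi = np.pi
gapless = [(0.0, 0.0), (pi, 0.0), (0.0, pi), (pi, pi)]
gaps = [omega_s(qx, qy) for qx, qy in gapless]   # all (numerically) zero
```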
One of our basic assumptions will be that spinons exist in the system also in the AFM phase, but they are no longer gapless and interact with
the magnon excitations. We will add a constant $\Delta$ to the spinon energy Eq.~(\ref{free-s-dispersion}) to model the evolution of the bare spinon
dispersion from completely above the magnon energy $\omega^{\text m}({\bf q})$ at all ${\bf q}$ deep in the AFM phase to gradually approaching
$\omega^{\text m}({\bf q})$ and eventually dipping below the magnon in parts of the BZ---which happens first at ${\bf q}=(\pi,0)$---as the AFM order
is reduced. For two spinons, with one of them at wavevector ${\bf p}$ and the total wavevector being ${\bf q}$, the bare energy of the spinon pair is then,
\begin{eqnarray}
\tilde\omega^\text{s}({\bf q,p})&=& 2\Delta +c^{\text s}\sqrt{1-\cos^2(p_x)\cos^2(p_y)} \label{s-dispersion} \\
&+&c^{\text s}\sqrt{1-\cos^2(q_x-p_x)\cos^2(q_y-p_y)}. \nonumber
\end{eqnarray}
Here it should be noted that, in the simple picture of spinons in the basis of bipartite valence bonds, an $S=1$ excitation corresponds to breaking
a valence bond (singlet), thereby creating a triplet of two spins, one on each of the sublattices A and B \cite{tang13}. The unpaired spins are always confined
to their respective sublattices. There are also two species of magnons, and creating one of them corresponds to a change in magnetization by $\Delta S^z= 1$
or $\Delta S^z= -1$, depending on the sublattice. Since $S^z$ must be conserved, we only need to consider one species of the magnons (e.g., $\Delta S^z= 1$,
which we associate with sublattice A) and that dictates the magnetization of the spinon pair that it can resonate with.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig16.pdf}
\caption{(a) Dispersions of the bare excitations of the effective model along a path through the BZ. The lower branch is for the magnon,
and the upper branch is for a single spinon. The latter is also the lower edge of the two-spinon continuum. In this example, the
spinons in the circled region close to ${\bf q}=(\pi,0)$ almost touch the magnon band, leading to significant spinon-magnon mixing.
(b) The black curve shows the lowest energy of the mixed spinon-magnon system obtained with the dispersions in (a)
and strength $g=5.1$ of the mixing term. The red circles show the results of the QMC-SAC calculations for the Heisenberg model on the
$L=48$ lattice from Sec.~\ref{sec:hberg}.}
\label{eff-2}
\end{figure}
Instead of adding twice the gap as we do in Eq.~(\ref{s-dispersion}), we could include $\Delta^2$ under each of the square-roots. This
would cause some rounding of the V-shapes of the spinon dispersion. We have confirmed that there are no significant differences between
the two ways of lifting the spinon energies in the coupled spinon-magnon system.
Using second-quantized notation, the non-interacting effective Hamiltonian in the space spanning single-magnon and spinon pair
excitations can be written as
\begin{eqnarray}
&&H^{\text{eff}-0}_A=\sum_{{\bf q}}\omega^\text{m}({\bf q})d^\dag_{A,{\bf q}}d_{A,{\bf q}}\nonumber\\
&&+\sum_{\bf q,p}\tilde\omega^\text{s}({\bf q,p})c^\dag_{A,{\bf p}}c^\dag_{B,{\bf q-p}}c_{A,{\bf p}}c_{B,{\bf q-p}}
\label{eff-H0}
\end{eqnarray}
where $c^\dag$ ($c$) and $d^\dag$ ($d$) are the spinon and magnon creation (annihilation) operators, respectively, and there is also an implicit constraint
on the Hilbert space to states with either a single magnon (here on the A sublattice) or two spinons (one on each sublattice). Note that both kinds of
particles are bosons based on the broken-valence-bond picture of the spinons \cite{tang13}. For brevity of the notation we will hereafter drop the
sublattice index, but in the calculations we always treat the two spinons as distinguishable particles.
Fig.~\ref{eff-2}(a) shows an example of the spinon and magnon dispersions corresponding to the situation we posit for the Heisenberg model. Here the
spinon offset $\Delta$ is sufficiently large to push the entire two-spinon continuum (of which we only show the lower edge) up above the magnon energy,
but at $(\pi,0)$ the spinons almost touch the magnon band. It is clear that any resonance process between the magnon and spinon Hilbert spaces will be
most effective at this point, thus reducing the energy and accounting for the dip in the dispersion found in the QMC study of the Heisenberg model. In
Fig.~\ref{eff-2}(b) we show how well the dispersion relation can be reproduced by the effective model, using a simple spinon-magnon mixing term that we
will specify next.
\begin{figure}[t]
\centering
\includegraphics[width=80mm]{fig17.pdf}
\caption{Illustration of the mixing process between the magnon (black circle) and the spinon pair (red circles). With mixing strength $g$,
a magnon on a given sublattice splits up into a spinon pair occupying nearest-neighbor sites. The spinon pair can recombine and form a magnon
on the original sublattice.}
\label{eff-1}
\end{figure}
Our basic premise is that the magnon and spinon subspaces mix, through processes where a magnon is split into two spinons and vice versa.
We use the simplest form of this mechanism, where the two spinons are created on neighboring sites, one of those sites being the one on
which the magnon is destroyed. The interaction Hamiltonian in real space is
\begin{equation}
H^\text I=g\sum_{\bf r,\bf{e}}(c^\dag_{\bf r+\bf{e}}c^\dag_{\bf r}d_{\bf r}+d_{\bf r}^\dag c_{\bf r}c_{{\bf r}+\bf{e}}),
\label{H_I}
\end{equation}
where ${\bf e}$ denotes the four unit lattice vectors as illustrated in Fig.~\ref{eff-1}. In motivating this interaction, we have in mind
how an $S=1$ excitation is created locally, e.g., in a neutron scattering experiment, by flipping a single spin. Spinwave theory describes
the eigenstates of such excitations in momentum space and this leads to the bare magnon dispersion. A spinon in one dimension can be regarded
as a point-like domain wall, and as such is associated with a lattice link instead of a site. However, in the valence bond
basis, the spinons arise from broken bonds and are associated with sites (in any number of dimensions) \cite{tang13}. In this basis, the
initial creation of the magnon also corresponds to creating two unpaired spins, and the distinction between a magnon and two deconfined
spinons only becomes clear when examining the nature of the eigenstates (where the spinons may or may not be well-defined particles,
and they can be confined or deconfined). In the actual spin system, the magnon and spinons in the sense proposed here would
never exist as independent particles (not even in any known limit), but the simplified coupled system can still provide
a good description of the true excitations at the phenomenological level, as was also pointed out in the proposal of the AF* state (which also hosts
topological order that is not present within our proposal) \cite{balents99}. Our
way of coupling the two idealized bare systems according to Eq.~(\ref{H_I}) is intended as the simplest local description of the mixing of
the two posited parts of the Hilbert space. In the end, beyond its compelling physical picture with key ingredients taken from deconfined quantum
criticality and the AF* state, the justification of the effective model will come from its ability to reproduce the key properties of the
excitations of the Heisenberg model.
The magnon-spinon coupling in reciprocal space is
\begin{equation}
H^\text I=\sum_{{\bf q},{\bf p}}I({\bf p})(c_{\bf p}^\dag c^\dag_{{\bf q}-{\bf p}}d_{\bf q}+{\rm h.c.}),
\label{H_I_2}
\end{equation}
where ${\bf q}$ again is the conserved total momentum and ${\bf p}$ is the momentum of the $A$ spinon (more precisely, the above spinon pair creation
operator is $c_{A,\bf p}^\dag c^\dag_{B,{\bf q}-{\bf p}}$), and the form factor corresponding to the
mixing strength $g$ in real space is
\begin{equation}
I({\bf p})=g\sqrt{\frac{2}{N}}\left [\cos (p_x)+\cos (p_y) \right ].
\label{ifunc}
\end{equation}
If this interaction is used directly in a Hamiltonian with the bare magnon and spinon dispersions, we encounter the problem that the ground
state is unstable---the mixing term will push the energy of the lowest excitations below that of the vacuum because the magnon mixes with the
spinon and reduces its energy also at the gapless points. This behavior is analogous to what would happen to the exciton-polariton spectrum by including
the light-matter interaction without the diamagnetic term. In reality, since $p^2 \to (\mathbf{p}-q{\bf A} )^2$, the minimal exciton-photon
coupling is also responsible for a modification of the photon Hamiltonian, in a way which preserves the gapless spectrum \cite{hopfield58,mahan}.
Following the analogy between magnons/spinon-pairs and photons/excitons, we consider the coupling to arise from modified spinon-pair operators
by the following substitution in Eq.~(\ref{eff-H0}):
\begin{equation}
c^\dag_{{\bf p}}c^\dag_{{\bf q-p}} \to c^\dag_{{\bf p}}c^\dag_{{\bf q-p}}+G({\bf q,p})d^\dag_{{\bf q}},
\label{spinons_modified}
\end{equation}
where the mixing function is given by:
\begin{equation}
G({\bf q,p})=\frac{I({\bf p})}{\tilde\omega^\text{s}({\bf q,p})}.
\label{gdef}
\end{equation}
This substitution generates the following effective magnon-spinon Hamiltonian:
\begin{eqnarray}
&&H^{\text{eff}}=\sum_{{\bf q}}\left(\omega^\text{m}({\bf q})+\sum_{\bf p}\tilde\omega^\text{s}({\bf q,p})G^2({\bf q,p})\right)d^\dag_{{\bf q}}d_{{\bf q}}\nonumber\\
&& +\sum_{{\bf p},{\bf q}}\left[\tilde\omega^\text{s}({\bf q,p})c^\dag_{{\bf p}}c^\dag_{{\bf q-p}}c_{{\bf p}}c_{{\bf q-p}}
+I({\bf p})c_{\bf p}^\dag c^\dag_{{\bf q}-{\bf p}}d_{\bf q}+{\rm h.c.}\right].
\label{eff-H0-modified}
\end{eqnarray}
Here we see explicitly how the interaction also affects the magnon dispersion (similar to the effect of the diamagnetic term in the exciton-polariton problem),
so that the dressed magnons acquire a slightly renormalized velocity. This procedure guarantees that the ground state is stable and that the
full spectrum of the coupled system is still gapless.
Some aspects of the observed behaviors in the Heisenberg and $J$-$Q$ models can be better reproduced if we also introduce a spinon-spinon interaction
term $V$, to be specified later. Defining the modified magnon dispersion
\begin{equation}
\tilde\omega^\text{m}({\bf q})=\omega^\text{m}({\bf q})+\sum_{\bf p}\tilde\omega^\text{s}({\bf q,p})G^2({\bf q,p}),
\end{equation}
the Hamiltonian in the sector of given total momentum ${\bf q}$ can be written as
\begin{eqnarray}
&&H^{\text{eff}}({\bf q})=\tilde\omega^\text{m}({\bf q})d^\dag_{{\bf q}}d_{{\bf q}}
+\sum_{{\bf k},{\bf p}}V({\bf k},{\bf p})c^\dag_{{\bf k}}c_{{\bf p}}c^\dag_{{\bf q-k}}c_{{\bf q-p}} \nonumber\\
&&+\sum_{\bf p}\left[\tilde\omega^\text{s}({\bf q,p})c^\dag_{{\bf p}}c^\dag_{{\bf q-p}}c_{{\bf p}}c_{{\bf q-p}}
+I({\bf p})c_{\bf p}^\dag c^\dag_{{\bf q}-{\bf p}}d_{\bf q}+{\rm h.c.}\right]. \nonumber\\
\label{eff-H}
\end{eqnarray}
Here it should be noted that, if spinon-spinon interactions are present, $V \not=0$, the definition of the function $G$ changes from Eq.~(\ref{gdef}) in
the following simple way: the non-interacting two-spinon energies $\tilde\omega^\text{s}({\bf q,p})$ should be replaced by the eigenenergies of the
interacting two-spinon subsystem, and the momentum label ${\bf p}$ accordingly changes to a different index labeling the eigenstates. The mixing term
is also transformed accordingly by using the proper basis in Eq.~(\ref{spinons_modified}).
We study the effective Hamiltonian by numerical ED on $L\times L$ lattices with $L$ up to $64$. Our effective model is clearly very
simplified and one should of course not expect it to provide a fully quantitative description of the excitations of the many-body spin Hamiltonians.
Nevertheless, it is interesting that the parameters $c^m,c^s,\Delta,$ and $g$ can be chosen such that an almost perfect agreement with
the Heisenberg magnon dispersion obtained in Sec.~\ref{sec:hberg} is reproduced, as shown in Fig.~\ref{eff-2}(b) (where no spinon-spinon interactions are
included). In the following we will not attempt to make any further detailed fits to the results for the spin systems, but focus on the general behaviors
of the model and how they can be related to the salient features of the Heisenberg and $J$-$Q$ spectral functions.
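For $V=0$ the ED reduces, at fixed total ${\bf q}$, to diagonalizing an arrowhead matrix in the basis of the single magnon plus the $L^2$ spinon-pair states. The sketch below (with illustrative parameter values, not the fitted ones quoted elsewhere) assembles $H^{\text{eff}}({\bf q})$ from Eqs.~(\ref{H_I_2}), (\ref{gdef}), and (\ref{eff-H}) and extracts the magnon weights $|\langle n|d^\dagger_{\bf q}|{\rm vac}\rangle|^2$:

```python
import numpy as np

# Illustrative parameters (not the fitted values quoted in the text)
L = 16
c_m, c_s, delta, g = 1.66, 2.0, 1.0, 1.0
ks = 2.0 * np.pi * np.arange(L) / L

def w_m(qx, qy):
    """Bare magnon dispersion, Eq. (m-dispersion)."""
    return c_m * np.sqrt(2.0 - 0.5 * (np.cos(qx) + np.cos(qy)) ** 2)

def w_s(px, py):
    """Bare single-spinon dispersion, Eq. (free-s-dispersion)."""
    return c_s * np.sqrt(1.0 - np.cos(px) ** 2 * np.cos(py) ** 2)

def spectrum(qx, qy):
    """Diagonalize H_eff(q) for V = 0; return energies and magnon weights."""
    P = [(px, py) for px in ks for py in ks]      # momentum p of the A spinon
    n = len(P)
    ws2 = np.array([2 * delta + w_s(px, py) + w_s(qx - px, qy - py) for px, py in P])
    Ip = g * np.sqrt(2.0 / n) * np.array([np.cos(px) + np.cos(py) for px, py in P])
    G = Ip / ws2                                  # mixing function, Eq. (gdef)
    H = np.zeros((n + 1, n + 1))
    H[0, 0] = w_m(qx, qy) + np.dot(ws2, G**2)     # dressed magnon diagonal entry
    H[1:, 1:] = np.diag(ws2)                      # free two-spinon energies
    H[0, 1:] = H[1:, 0] = Ip                      # magnon <-> spinon-pair mixing
    E, U = np.linalg.eigh(H)
    return E, U[0, :] ** 2                        # |<n|d^dag|vac>|^2 per eigenstate

E1, w1 = spectrum(np.pi, 0.0)            # spinon continuum dips below the magnon
E2, w2 = spectrum(np.pi / 2, np.pi / 2)  # magnon below the continuum
```

With these parameters the bare two-spinon continuum dips below the magnon at $(\pi,0)$ but not at $(\pi/2,\pi/2)$, and correspondingly the ground-state magnon weight comes out substantially larger at the latter point, qualitatively mirroring the $J$-$Q$ results of Sec.~\ref{sec:jq}.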
\subsection{Mixing states and spectral functions}
For a given total momentum ${\bf q}$, the eigenstates $|n,{\bf q}\rangle$ of the effective Hamiltonian in Eq.~(\ref{eff-H}) have overlaps
$\langle n,{\bf q}|{\bf q}\rangle$ with the bare magnon state $|{\bf q}\rangle$. Without spinon-spinon interactions ($V=0$), with the bare spinons
above the magnon band for all ${\bf q}$, and when the mixing parameter $g$ is suitable for describing the Heisenberg model [i.e., giving good agreement
with the QMC dispersion relation, as in Fig.~\ref{eff-2}(b)], we find that all but the first and the last of these overlaps become very small when the
lattice size $L$ increases. Thus, the two particular states are magnon-spinon resonances and the rest are essentially free two-spinon states.
When attractive spinon-spinon interactions are included, the picture changes qualitatively, with the magnon also mixing in strongly with all spinon
bound states. An example of spinon levels in the presence of spinon-spinon interactions is shown in Fig.~\ref{levels}, where a number of bound states
separated by gaps can be distinguished. The stronger mixing with the bound states is simply a reflection of the fact that two bound spinons have
a finite probability to occupy nearest-neighbor sites, so that the mixing process with the magnon (Fig.~\ref{eff-1}) can take place, while the
probability of this vanishes when $L \to \infty$ for free spinons. Note that the total weight $|\langle n,{\bf q}|{\bf q}\rangle|^2$ summed over
all free-spinon states can still be non-zero, due to the increasing number of these states.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig18.pdf}
\caption{Energy levels versus the total wavevector of two spinons interacting through a potential $V(r)=-6.2{\rm e}^{-r/2}$.
The bare dispersion relation of the single spinon is given by Eq.~(\ref{free-s-dispersion}) with $c^s=3.1$. We only show a few of the
levels between the lower and upper energy bounds.}
\label{levels}
\end{figure}
The fact that the dispersion relation resulting from $H^{\text{eff}}$ can be made to match the QMC-SAC results for the Heisenberg model (Fig.~\ref{eff-2})
is a tantalizing hint that the dispersion anomaly at ${\bf q}=(\pi,0)$ may be a precursor of spinon deconfinement as some interaction brings the system
further toward the AFM--VBS transition. In the weak magnon-spinon mixing limit, the lowest-energy spinons will, in the absence of attractive spinon-spinon
interactions $V$, deconfine close to ${\bf q}=(\pi,0)$ if the spinon continuum falls below the magnon band at this wave vector, while the magnon-spinon
resonance remains the lowest excitation in parts of the BZ where the bare spinons stay above the magnon. The resonance state should still be considered as a magnon,
as the spinons are spatially confined and constitute an internal structure to the magnon.
This simple behavior, which essentially follows from the postulated bare dispersion relations, is very intriguing because it is precisely what we observed
in Sec.~\ref{sec:jq} for the $J$-$Q$ model when $Q$ is turned on but is still far away from the deconfined critical point. We found (Figs.~\ref{jq} and
\ref{fss}) that the low-energy magnon pole vanishes at $(\pi,0)$, while it remains
prominent at $(\pi/2,\pi/2)$. Thus, we propose that increasing $Q/J$ corresponds to a reduction of the energy shift $\Delta$ in the bare spinon energy
in Eq.~(\ref{s-dispersion}), reaching $\Delta=0$ at the deconfined quantum-critical point. At the same time the bare magnon and spinon velocities should also
evolve in some way. The observation that the $(\pi/2,\pi/2)$ magnon survives even at the critical point would suggest that the magnon band remains below
the spinon continuum at this wave vector.
Let us now investigate the spectral function of the effective model. Within the model, the spectral function corresponding to the dynamic spin
structure factor of the spin models is that of the magnon creation operator $d^\dagger_{\bf q}$
\begin{equation}
S({\bf q},\omega)=\sum_{n}|\langle n|d^\dagger_{\bf q}|{\rm vac} \rangle|^2\delta(\omega-E_n),
\label{sqweff}
\end{equation}
where $|{\rm vac} \rangle$ is the vacuum representing the ground state of the spin system and $E_n$ is the energy of the eigenstate $|n \rangle$. The
matrix element is nothing but the absolute square of the magnon overlap $\langle n,{\bf q}|{\bf q}\rangle$ discussed above. Thus, with non-interacting
spinons the spectral function consists of two $\delta$-functions, corresponding to the two spinon-magnon resonance states, and a weak continuum arising from
a large number of deconfined 2-spinon states. The situation changes if we include spinon-spinon interactions. Then, as mentioned above, the spinon
bound states mix more significantly with the magnon and give rise to more spectral weight in Eq.~(\ref{sqweff}) away from the edges of the spectrum,
and the $\delta$-function at the upper edge essentially vanishes. To attempt to model the spinon-spinon interactions quantitatively would be beyond
the scope of the simplified effective model, but by considering a reasonable case of short-range interactions we will observe interesting features
that match to a surprisingly high degree with what was observed in the spin systems.
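The broadening of $\delta$-functions used for visualization in the figures below amounts to replacing each contribution $|\langle n|d^\dagger_{\bf q}|{\rm vac}\rangle|^2\,\delta(\omega-E_n)$ by a unit-area Gaussian. A minimal numerical sketch of this step (the level energies, weights, and function name here are illustrative inputs, not output of the actual effective-model calculation):

```python
import numpy as np

def broadened_spectral_function(levels, weights, omega, sigma=0.05):
    """Sum of Gaussian-broadened delta-functions:
    S(omega) = sum_n w_n * delta(omega - E_n), with each delta replaced
    by a unit-area Gaussian of width sigma for plotting."""
    S = np.zeros_like(omega)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for E_n, w_n in zip(levels, weights):
        S += w_n * norm * np.exp(-((omega - E_n) ** 2) / (2.0 * sigma ** 2))
    return S
```

Since each Gaussian integrates to one, the total spectral weight of each level is preserved by the broadening.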
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig19.pdf}
\caption{Dispersion relation (a) and wavevector dependence of the relative weight of the magnon pole (b) calculated with the effective
Hamiltonian with the parameters $c^m=3.1, c^s=3.1, \Delta=1.94, g=1.86$, and the spinon-spinon potential $V(r)=-6.2{\rm e}^{-r/2}$.}
\label{effresults}
\end{figure}
The ${\bf q}$ dependence of the total spectral weight of the spin system cannot be modeled with our approach here, because the effective model completely
neglects the structure of the ground state, replacing it by a trivial vacuum, and the magnon creation operator is also an oversimplification of the spin
operator. Because of these simplifications the total spectral weight is unity for all ${\bf q}$. A main focus in Secs.~\ref{sec:hberg} and \ref{sec:jq}
was on the relative weight $a_0({\bf q})$ of the leading magnon pole, and this quantity does have its counterpart in Eq.~(\ref{sqweff}):
\begin{equation}
a_0({\bf q})=|\langle n=0 | d^\dagger_{\bf q}|{\rm vac} \rangle|^2 = |\langle 0|{\bf q}\rangle|^2,
\end{equation}
where $|n=0\rangle$ is the lowest-energy eigenstate and $a_0({\bf q})$ can be compared with the QMC-SAC results in Fig.~\ref{haf-5}. Given that the
Hilbert space of the effective model contains only a single magnon, the spectral function should correspond to the transverse component in situations where
the transverse and longitudinal contributions are separated (e.g., polarized neutron scattering).
We now include attractive spinon-spinon interactions such that bare (before mixing with the magnon) bound states are produced,
as in Fig.~\ref{levels}. The other model parameters are again adjusted such that the dispersion relation resembles that in the Heisenberg model,
with the anomaly at ${\bf q}=(\pi,0)$. The resulting dispersion (location of the dominant $\delta$-function, which constitutes the lower edge of the spectral
function) as well as the relative magnon amplitude are graphed in Fig.~\ref{effresults}. The dispersion relation is very similar to that obtained
without spinon-spinon interactions in Fig.~\ref{eff-2}. Comparing the amplitude $a_0({\bf q})$ in Fig.~\ref{effresults}(b) with the Heisenberg results in Fig.~\ref{haf-5},
we can see very similar features, with minima and maxima at the same wavevectors, though the variations in the amplitude are larger in the
Heisenberg model.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig20.pdf}
\caption{Spectral functions of the effective model at (a) ${\bf q}=(\pi,0)$ and (b) ${\bf q}=(\pi/2,\pi/2)$, using model parameters corresponding
to the Heisenberg model; $c^m=3.1, c^s=3.1, \Delta=1.94, g=1.86$ and the spinon-spinon potential $V(r)=-6.2{\rm e}^{-r/2}$ (same as used
in Fig.~\ref{levels} and \ref{effresults}). The $\delta$-functions in the exact spectral function (computed here using
an $L=64$ lattice) have been broadened for visualization.}
\label{hbsqweff}
\end{figure}
The full spectral functions at ${\bf q}=(\pi,0)$ and $(\pi/2,\pi/2)$ are displayed in Fig.~\ref{hbsqweff}. Here we have broadened all $\delta$-functions
to obtain continuous spectral functions. As already discussed, the prominent $\delta$-function corresponding to the magnon is similar to what is observed
in the Heisenberg model, though clearly the shapes of the continua above the main $\delta$-function are different from those in Fig.~\ref{haf-2}. Upon
reducing the spinon energy offset $\Delta$ so that the bare energy falls below the magnon energy close to ${\bf q}=(\pi,0)$, we observe a very interesting
behavior in Fig.~\ref{jqsqweff}. We see that the main magnon peak is washed out, due to decay into the lower spinon states. This is very similar to what we
found for the $J$-$Q$ model in Sec.~\ref{sec:jq}, where already a relatively small value of $Q/J$ led to a broad spectrum without magnon pole at ${\bf q}=(\pi,0)$.
At $(\pi/2,\pi/2)$ the magnon pole remained strong, however, and this is also what we see for the effective model in Fig.~\ref{jqsqweff}. Without
spinon-spinon interactions, when the bare magnon is inside the spinon continuum a sharp (single $\delta$-function) spinon-magnon resonance remains
inside the continuum of free spinon states. Thus, for the magnon pole to completely decay, spinon-spinon interactions are essential in the
effective model.
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{fig21.pdf}
\caption{Spectral functions as in Fig.~\ref{hbsqweff}, but with the parameters of the effective model chosen to give behaviors
similar to the $J$-$Q$ model with $Q \approx J$; $c^m=3.1, c^s=6.2, \Delta=0.39, g=1.86$ and the spinon-spinon potential
$V(r)=-6.2{\rm e}^{-r/2}$.}
\label{jqsqweff}
\end{figure}
These results for a simple effective model provide compelling evidence for the mechanism of magnon-spinon mixing outlined above. The results
also suggest that the absence of a magnon pole at and close to ${\bf q} = (\pi,0)$ does not necessarily imply complete spinon deconfinement, as we
have to include explicitly attractive interactions in the effective model in order to reproduce the behavior in the full spin systems. Weak
attractive spinon-spinon interactions have previously been detected explicitly in the $J$-$Q$ model at the deconfined critical point \cite{tang13},
and they are also expected based on the field-theory description, where the spinons are never completely deconfined due to their coupling to an
emergent gauge field \cite{senthil04a}. The loss of the magnon pole observed here then signifies that the magnon changes character, from a single
spatially well-resolved small resonance particle to a more extended particle (with more spinon characteristics) as a weak $Q$ interaction is turned on,
and finally to complete disintegration into a continuum of weakly bound spinon pairs and deconfined spinons.
\section{Conclusions}
\label{sec:summary}
We have investigated the long-standing problem of the excitation anomaly at wavevectors ${\bf q} \approx (\pi,0)$ in the spin-$1/2$ square lattice
Heisenberg antiferromagnet, and established its relationship to deconfined quantum criticality by also studying the $J$-$Q$ model. Using an improved
stochastic (sampling) method for analytic continuation of QMC correlation functions, we have been able to quantify the evolution of the magnon pole
in the dynamic structure factor $S({\bf q},\omega)$ as the AFM order is weakened with increasing ratio $Q/J$, all the way from the Heisenberg limit
($Q=0$) to the deconfined critical point at $Q/J \approx 22$. For the Heisenberg model, our results agree with other numerical approaches (series
expansions \cite{zheng05} and continuous similarity transformations within the Dyson-Maleev formalism \cite{Powalski15})
and also with recent inelastic neutron scattering experiments of
the quasi-2D antiferromagnet CFTD \cite{piazza15}. Upon increasing $Q/J$, we found a rapid loss of single-magnon weight at ${\bf q} \approx (\pi,0)$,
but not at ${\bf q} \approx (\pi/2,\pi/2)$, where the magnon pole remains robust even at the critical point. At first sight these behaviors appear
surprising, but we can consistently explain them through the proposed connection to deconfined quantum criticality.
Motivated by the numerical results, we have constructed an effective model of magnon-spinon mixing that can phenomenologically explain not only the
fragile, almost fractionalized $(\pi,0)$ magnon of the Heisenberg model and its decay into spinon pairs with increasing $Q/J$, but also the reason
for the stability of the $(\pi/2,\pi/2)$ magnon in the $J$-$Q$ model for large $Q$ (as discovered with the QMC-SAC calculations). The essential
ingredient is a gapped spinon band with a dispersion minimum at $(\pi,0)$, for which we find motivation in the fact that this point becomes gapless at the
deconfined quantum critical point. If the continuum of bare spinon excitations remains above the
magnon band throughout the BZ (as in Fig.~\ref{eff-2}), then the lowest excitations are always magnons. However, since the two bands are coupled
in the effective model, via a term that destroys a magnon and creates two spinons (as well as its conjugate destroying the spinons and creating a magnon),
the magnons fluctuate in and out of the spinon space, and this effect is the largest at the point in the BZ where the gap between the two bare branches
is the smallest, i.e., at ${\bf q}=(\pi,0)$. We find that this effect can account quantitatively for the dip in the magnon dispersion relation,
and qualitatively the wavevector dependence of the relative weight of the $\delta$-function at the lower edge of the spectrum is also captured.
Within this effective model, the deconfinement mechanism in the $J$-$Q$ model is explained as the bare spinon dispersion dipping below the magnon at
${\bf q}=(\pi,0)$. This can happen already for small $Q/J$, far away from the AFM--VBS transition, because the bare magnon-spinon gap is already small for $Q=0$.
As $Q/J$ increases, an increasing fraction of the BZ becomes deconfined, until finally the gapless spinons deconfine at the critical point. Our QMC-SAC
results indicate that the excitations at higher energy remain confined, as exemplified by ${\bf q}=(\pi/2,\pi/2)$. Within the effective model this
follows from the bare spinon dispersion staying above the magnon band in this region of wavevectors.
Clearly the effective model should not be taken as a quantitative description of the Heisenberg and $J$-$Q$ systems; motivated by aspects of deconfined
quantum-criticality and the AF* state, we have introduced it mainly as a phenomenological tool for elucidating the behaviors observed in the QMC studies
of the model Hamiltonians. Nevertheless, it is remarkable how well the essential observed features are captured and how otherwise non-intuitive aspects of the
deconfinement mechanism follow naturally from the magnon-spinon mixing under mild assumptions on the bare parameters of the effective model. Thus,
even in the absence of a strict microscopic derivation, the effective model can be justified by its many non-trivial confirmed predictions.
Considering the mechanism leading to the loss of the magnon pole with increasing $Q$, it is interesting to note that it does not appear to involve
significant broadening of the $\delta$-function, but instead the spectral weight of this peak is distributed out into the continuum by the spinon
mixing process. This is in accord with the general belief that quantum antiferromagnets with collinear order lack the damping processes that
cause the broadening of the magnon pole in frustrated, non-collinear magnets \cite{chernyshev06,ma16,kamiya17}. Our proposed mechanism of spinon mixing
is, thus, very different from standard magnon damping.
The scenario of a nearly fractionalized magnon in the Heisenberg model does not necessarily stand in conflict with the expansion in multi-magnon processes
\cite{Powalski15,Powalski17}, which can account for the dynamic structure factor without invoking any spinon mixing effects. We have only discussed the
effective model of the excitations at the level of a single magnon and its mixing with the spinon continuum, and our results for the Heisenberg model show that
the magnon is significantly dressed by spinons around ${\bf q}=(\pi,0)$ but is not yet fractionalized. The magnon-spinon mixing then represents a description
of the internal structure of the magnon, and we have not considered the further effects of multi-magnon processes. It is remarkable that the results of
Ref.~\onlinecite{Powalski17} match the experimental data (and also numerical data for the Heisenberg model) so well without taking into account the internal
spinon structure of the magnons, if indeed this structure is present. Here we can draw a loose analogy with nuclear physics, where the
inter-nucleon force has an effective description in terms of exchange of mesons (pions) between nucleons. Yukawa proposed mesons as the carriers of the
force without knowledge of the quark structure of the nucleons and mesons that is ultimately involved in the interaction (residual strong force) process,
and quantitatively satisfactory results in nuclear physics are obtained with the effective interaction (and calculations with the full strong force
between quarks mediated by gluons are in practice too complicated to work with quantitatively). The significant attractive interaction between magnons in
the Heisenberg model \cite{Powalski15,Powalski17} might perhaps similarly be regarded as mediated by spinon pairs (which themselves constitute magnons),
and, by the pion analogy, the magnons and their residual attractive interactions could also provide an accurate description of the excitations without
invoking the internal spinon structure. To investigate the relationship between the two pictures further, it would be interesting to treat the $J$-$Q$
model with the method of Ref.~\onlinecite{Powalski17}. Based on our scenario we predict that the multi-magnon expansion should break down rapidly close
to ${\bf q}=(\pi,0)$ as the $Q$ interaction is turned on but remain convergent at low energies until the system comes close to the deconfined
quantum-critical point.
The fragility of the magnons at and close to ${\bf q}=(\pi,0)$ suggests that these excitations may become completely fractionalized also by other interactions
than the $Q$-terms considered here, e.g., ring exchange or longer-range pair exchange. These interactions have recently also been investigated
in the context of possible topological order and spinon excitations in the cuprates \cite{chatterjee17}. Earlier the so-called AF* state had
been proposed, largely on phenomenological grounds, where topological order coexists with AFM order and there is a spinon continuum similar
to the one in our effective model \cite{balents99,senthil00}. Though in our scenario the reason for the spinon continuum is different---the
proximity to a deconfined quantum critical point---a generic conclusion valid in either case is that spinon deconfinement can set in at
${\bf q}=(\pi,0)$ well before any ground state transition at which the low-energy spinons deconfine.
In this context the quasi-2D square-lattice antiferromagnet Cu(pz)$_2$(ClO$_4$)$_2$ is very interesting. It has a weak frustrated next-nearest-neighbor
coupling and has been modeled within the $J_1$-$J_2$ Heisenberg model \cite{tsyrulin09}. Neutron scattering experiments on the material and
series-expansion calculations for the model show an even larger suppression of the $(\pi,0)$ energy than in the pure Heisenberg model, similar to what
we have observed in the presence of a weak $Q$ interaction. The experimental $(\pi,0)$ line shape also seems to have a smaller magnon pole than CFTD,
in accord with our scenario of a fragile magnon pole, although we are not aware of any quantitative analysis of the weight of the magnon pole and
no line-shape calculations were reported in Ref.~\onlinecite{tsyrulin09}. It would clearly be interesting to carry out neutron experiments at higher resolution
and to make detailed comparisons with calculations beyond the dispersion relation.
Ultimately the $J_1$-$J_2$ system should be different from the $J$-$Q$ model, because the deconfined quantum critical point of the $J$-$Q$ model is most likely replaced
by an extended gapless spin liquid phase in the $J_1$-$J_2$ model \cite{hu13,gong14,morita15,wang17}.
However, since this phase should also be associated with deconfined spinons, the
evolution of the excitations as this phase is approached may be very similar to what we have discussed within the $J$-$Q$ model on its approach to the
deconfined quantum critical point. A state with topological order and spinon excitations may instead be approached when strong ring-exchange interactions are added \cite{chatterjee17}, but
given that $J$ is weak in Cu(pz)$_2$(ClO$_4$)$_2$ these interactions may not play a significant role in this case. Ring exchange should be more important
in Sr$_2$CuO$_2$Cl$_2$, where excitation anomalies have also been observed \cite{guarise10}.
The magnetic-field ($h$) dependence of the excitation spectrum of Cu(pz)$_2$(ClO$_4$)$_2$ was also studied in Ref.~\onlinecite{tsyrulin09}. Since the energy scale
of the Heisenberg exchange is even smaller than in CFTD, it was possible to study field strengths of order $J$ and observe significant changes in the
dispersion relation and the $(\pi,0)$ line shape. The methods we have developed here can also be applied to systems in an external magnetic field
and it would be interesting to study the dynamics of the $J$-$Q$-$h$ model. Some results indicating destabilization of magnons due to the field in
the Heisenberg model are already available \cite{syljuasen08b}, and our improved analytic continuation technique could potentially improve on the
frequency resolution.
\begin{acknowledgments}
We thank Wenan Guo, Akiko Masaki-Kato, Andrey Mishchenko, Martin Mourigal, Henrik R{\o}nnow, Kai Schmidt, Cenke Xu, and Seiji Yunoki for useful
discussions. Experimental data from Ref.~\cite{piazza15} were kindly provided by N. B. Christensen and H. M. R{\o}nnow. H.S. was supported by
the China Postdoctoral Science Foundation under Grant Nos.~2016M600034 and 2017T100031. St.C was funded by the NSFC under Grant Nos.~11574025 and U1530401.
Y.Q.Q. and Z.Y.M. acknowledge funding from the Ministry of Science and Technology of China through the National Key Research and Development Program
under Grant No.~2016YFA0300502, and from the NSFC under Grant Nos.~11574359 and 11674370, as well as the National Thousand-Young Talents Program of China.
A.W.S. was funded by the NSF under Grant Nos.~DMR-1410126 and DMR-1710170, and by the Simons Foundation. In addition H.S., Y.Q.Q., and Sy.C. thank
Boston University's Condensed Matter Theory Visitors program for support, and A.W.S. thanks the Beijing CSRC and the Institute of Physics, Chinese
Academy of Sciences for visitor support. We thank the Center for Quantum Simulation Sciences at the Institute of Physics, Chinese Academy of Sciences,
the Tianhe-1A platform at the National Supercomputer Center in Tianjin, and Boston University's Shared Computing Cluster for their technical support and
generous allocation of CPU time.
\end{acknowledgments}
Researchers---especially social scientists---often find themselves estimating causal effects in complex, dynamic settings. For example, when studying the effects of political campaign strategies, researchers must confront how candidates change their messaging and behavior in response to the shifting landscape of the campaign. While most social scientists ignore these dynamics, a handful of scholars have applied the marginal structural model (MSM) approach to estimating the effects of time-varying treatments \citep{RobHerBru00} to a variety of social science settings \citep{Blackwell13, LadHarWin18, CreSim19, Kurer20}. Given the richness of dynamic phenomena in the social sciences, it is perhaps surprising that such methods are not applied more widely in those fields. One key reason is that most approaches to estimating the effects of these treatments---including marginal structural models, but also parametric g-computation and structural nested mean models---require a no unmeasured confounding assumption that many social scientists find implausible. In this paper, we extend the current MSM framework to allow for time-constant unmeasured confounding and apply this approach to estimating the time-varying effects of negative advertising in U.S.\ elections.
Marginal structural models have been a popular way to address the key challenges that time-varying treatments pose, especially in the biomedical sciences. The unique challenges of these treatments include the presence of time-varying confounders, or those variables that are affected by past treatment and confound the relationship between future treatment and the outcome. In the electoral context, changes in polling for one candidate or the other might affect a candidate's decision to go on the attack, but those attacks might affect future polling.
Typical adjustment for these covariates via regression or matching will lead to post-treatment bias,
while omitting them from these methods will obviously lead to confounding bias. To resolve this dilemma,
Robins and colleagues have developed an inverse probability of treatment weighting (IPTW) approach that allows
for the estimation of causal effects of treatment histories without inducing post-treatment bias
\citep{Robins98, Robins98b, Robins99, RobHerBru00}.
This weighting approach is often applied to the estimation of parameters from a model for the marginal mean of the potential outcomes, which these authors call the marginal structural model.
One limitation of the IPTW approach to marginal structural models is that it usually relies on an assumption of
sequential ignorability, which essentially states that there are no unmeasured confounders between the treatment
at time $t$ and the outcome conditional on the treatment and covariate history up to that point.
In the social sciences, this assumption could be suspect when units select into treatment based on data not available to the researcher.
While sensitivity analyses can help diagnose the effects of unmeasured confounding on MSM estimates \citep{BruHerHan04, Blackwell14}, these approaches require knowledge of the severity of the unmeasured confounding, which is difficult for any applied researcher to possess.
When this type of confounding is constant over time and the researcher has panel data on both treatment and outcomes, social scientists often use the so-called linear fixed effects (LFE) approach that
transforms all variables to deviations from their unit means \citep[for a review see][Ch. 5]{AngPis09}.
While this can purge time-constant unmeasured confounding under certain assumptions, the linear fixed effects approach has two important limitations for social science applications.
First, the LFE approach requires the outcome and the treatment to be measured at multiple points in time,
but political outcomes such as final vote share in an election are often single endline measures.
Second, LFEs generally cannot estimate the effects of treatment histories under sequential ignorability
without much stronger assumptions that rule out feedback between the outcome and the treatment \citep{Sobel12, ImaKim19}.
To overcome these issues, this article extends the marginal structural models approach to estimating the effects of time-varying treatments to allow for time-constant unmeasured confounding.
To do so, we propose a straightforward modification to IPTW: to include unit-specific fixed effects in the propensity score model used to construct the inverse-probability weights.
While this approach will lead to an incidental parameters problem for the propensity score model \citep{NeySco48},
we show that if this model is correctly specified and the number of time periods grows at the same rate as the number of units,
the IPTW with fixed effects estimator (IPTW-FE) will lead to a consistent and asymptotically normal estimator for the parameters of the marginal structural model.
This is true even when we only have a single measurement of the outcome after the final instance of treatment. This approach relies on a within-unit version of sequential ignorability, which allows the type of feedback between the treatment and outcome usually ruled out by LFE estimators.
The essential logic of the IPTW-FE is quite simple. If the propensity score model is stable over time and we observe a sufficient number of time periods,
we can allow for each unit to have a unique offset to the propensity score model that should incorporate any time-constant variables, measured or unmeasured.
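As a rough illustration of this idea, one can fit a pooled logistic propensity model with unit-specific intercepts and read off the fitted probabilities that enter the inverse-probability weights. This is only a minimal numpy sketch under assumed data shapes; the function name, Newton solver, and simulated design are our own constructions, not the estimator studied formally below:

```python
import numpy as np

def fit_fe_propensity(D, X, n_iter=50):
    """Pooled logistic regression of treatment on unit dummies plus
    time-varying covariates:
        logit P(D_it = 1 | X_it, alpha_i) = alpha_i + X_it' beta.
    D is an (N, T) binary treatment array; X is (N, T, K) holding, e.g.,
    lagged treatment and lagged confounders. Returns unit intercepts,
    covariate coefficients, and fitted propensities (N, T).
    Note: alpha_i is not identified for units that are always or never
    treated, so such units must be dropped or their scores imputed.
    Plain Newton-Raphson on the full dummy-variable design; illustrative
    only, not efficient for large N."""
    N, T = D.shape
    K = X.shape[2]
    Z = np.zeros((N * T, N + K))
    for i in range(N):
        Z[i * T:(i + 1) * T, i] = 1.0      # unit dummy (fixed effect)
        Z[i * T:(i + 1) * T, N:] = X[i]    # time-varying covariates
    y = D.reshape(-1)
    theta = np.zeros(N + K)
    for _ in range(n_iter):
        eta = np.clip(Z @ theta, -30.0, 30.0)  # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = Z.T @ (y - p)
        W = p * (1.0 - p)
        hess = (Z * W[:, None]).T @ Z + 1e-8 * np.eye(N + K)
        theta = theta + np.linalg.solve(hess, grad)
    eta = np.clip(Z @ theta, -30.0, 30.0)
    probs = (1.0 / (1.0 + np.exp(-eta))).reshape(N, T)
    return theta[:N], theta[N:], probs
```

The fitted propensities would then replace those from a common (no fixed effect) propensity model when constructing the inverse-probability weights.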
In addition to the literature on marginal structural models, we also build on a robust literature on nonlinear panel models.
Initially, these models follow the path of LFEs in side-stepping the incidental parameters problem
by leveraging estimation techniques that did not require direct estimates of the unit-specific unobserved effects \citep[see][for a review]{Lancaster00}.
Unfortunately, these approaches have a severe limitation for our purposes: by avoiding the estimation of the unit-specific effects,
they preclude the ability to calculate the types of predicted probabilities required for IPTW estimation.
Due to this limitation, we build on approaches that focus on large-$T$ approximations, which consider asymptotic results where the number of time periods grows with the number of units. A host of papers have taken this approach to investigate the properties of maximum likelihood estimators of nonlinear panel models with unit (and time) effects \citep{HahNew04, AreHah07, FernandezVal09, HahKue11, FerWei16, FerWei18}. Many of these approaches have developed bias correction techniques since these estimators are often asymptotically biased. Our approach avoids this issue with these estimators for two reasons. First, we follow the MSM literature and focus on estimating the parameters of the MSM at the slower $\sqrt{N}$ rate rather than the $\sqrt{NT}$ rate so that the asymptotic bias described in this literature converges to 0. Second, we focus on the effect of a finite number of lags of treatment, which limits how much the bias from noisy fixed effect estimation can affect the estimates of the MSM parameters.
There are some important limitations to our IPTW-FE approach.
First, given the asymptotic setup, this approximation will be more accurate
when the number of units and the number of time periods are not drastically different and when the treatment process provides new information over time. The latter requirement is captured by a strong mixing assumption on the treatment and covariate processes, though we do not impose stationarity.
Second, we derive the asymptotic properties of the IPTW-FE estimator for a MSM that is a function of a fixed number of periods
in the treatment history, rather than the entire history. Third, the performance of IPTW-FE depends heavily on how severe the unit-specific heterogeneity is.
This is due to extreme unit fixed effects in the propensity score models leading to extreme weights, making estimation less stable.
Relatedly, units that never or always receive treatment in the sample have unit-fixed effects that are not identified.
We explore two procedures for handling these situations: dropping units without treatment variation and imputing their propensity scores
to values close to zero or one.
In spite of these limitations, we find in simulations that IPTW-FE can outperform naive IPTW without fixed effects, even when the unobserved heterogeneity is severe.
Our approach is also related to recent work on causal inference in fixed effects settings. \citet{ArkImb19} is most closely related to our approach here. They investigate how to use inverse probability weighting with fixed effects when a set of sufficient statistics for the treatment process is available, though in a fixed-$T$ setting with no dynamic feedback between the treatment and the outcome and no time-varying covariates. Other work has explained how this dynamic feedback stymies estimation of both contemporaneous effects and the effects of treatment histories with fixed effects assumptions \citep{Sobel12, ImaKim19}. In contrast, our approach allows for feedback between the treatment and the outcome, so long as sequential ignorability holds conditional on the unit-specific effect.
The paper proceeds as follows. We begin in Section~\ref{s:motivation} with a description of our motivating application of the effects of negative advertising in U.S.\ elections. In Section~\ref{s:illustration}, we develop the core intuition for our approach in a simple setting of unit-specific randomized experiments without covariates. Next in Section~\ref{s:review-msm} we review marginal structural models and inverse probability of treatment weighting as they are currently deployed in applied research. We then introduce our fixed-effect approach in Section~\ref{s:iptw-fe}, describing both the assumptions that justify its use and its large-sample properties under these assumptions. In Section~\ref{s:simulation}, we present simulation evidence of the finite-sample performance of this estimator, which shows that it works well, especially when the amount of unmeasured heterogeneity is limited. Finally, we apply the method to the context of negative advertising in Section~\ref{s:application} and conclude with some ideas for future research in Section~\ref{s:conclusion}.
\section{Motivating Application: Negative Advertising in Electoral Campaigns}\label{s:motivation}
In the United States, political advertising is an important way that candidates for office attempt to influence the electorate. One tool that candidates have is the tone of their ads---they might air ads that promote themselves and their agenda, or they may show ads that contrast themselves with their opponent. The latter, which we call negative advertising, is commonplace in campaigns, but its effects are not well understood. In this paper, we apply the IPTW-FE approach to a key empirical question in this literature: what is the effect of negative advertising versus positive advertising on vote shares for a particular electoral candidate? A long literature has explored how and when the tone of an advertisement might affect both the decision to vote (voter mobilization) and the vote choice conditional on voting (persuasion) \citep{LauSigRov07, Blackwell13}, but the results are not conclusive.
Most studies of negative advertising have ignored the sequential nature of this treatment---polling data affects the decision to go negative which in turn affects future polling---though \cite{Blackwell13} used IPTW and MSMs to adjust for this time-varying confounding to investigate the time-varying effects of negative advertising. One worry with that approach, however, is that some campaigns will have higher baseline probabilities of going negative for reasons that could be related to the outcome. For instance, challengers of seats whose incumbents have taken unpopular votes or actions might attack more and be more likely to win. Since fully adjusting for these characteristics is difficult, a fixed-effects approach that adjusts for unmeasured baseline confounders could reduce the potential for bias in the estimates of the MSM parameters. Below, we find that our IPTW-FE approach can lead to different substantive conclusions about how negative advertising affects electoral outcomes compared to a traditional IPTW approach.
\section{Simple Illustration: Unit-specific Randomized Experiments}\label{s:illustration}
We begin with a simple setting that illustrates many of the main features of the IPTW-FE methodology. Suppose we observe $N$ units indexed by $i=1,\ldots,N$, each with a binary treatment measured at $T$ points in time, $D_i = \{D_{i1}, \ldots, D_{iT}\}$. We are interested in assessing the effect of the last treatment on some endline measure. We define $Y_i(d_1, \ldots, d_T)$ as the potential outcome under a particular treatment history. For exposition in this simple setting, we assume an extreme no-carryover setting where the outcome only depends on the last period's treatment, $Y_i(d_T) \equiv Y_i(d_{1}, \ldots, d_{T-1}, d_T)$.
Below, we will weaken this assumption for marginal structural models in Section~\ref{s:iptw-fe}, but it is useful here to highlight key parts of the approach. We make the usual consistency assumption, $Y_i = Y_i(1)D_{iT} + Y_i(0)(1 - D_{iT})$. Letting $\tau_d = \mathbb{E}[Y_i(d)]$ for $d = 0, 1$, we define the quantity of interest to be $\tau = \tau_1 - \tau_0$.
We consider a setting where each unit repeatedly makes treatment decisions with unit-specific propensities, but those propensities are unknown to the researcher. In particular, we assume there is a time-constant unit-specific shock, $\alpha_i$, that potentially affects both treatment and the outcome, but that ignorability of treatment assignment holds conditional on that shock:
\[
D_i \!\perp\!\!\!\perp Y_i(d_T) \mid \alpha_i \qquad \forall i.
\]
We assume that the $D_{it}$ are independently randomly assigned within a unit with probability $\mathbb{P}(D_{it} = 1 \mid \alpha_i) = \pi(\alpha_i) = \pi_i$, where both $\pi_i$ and $1-\pi_i$ are bounded below by $c > 0$ for all $i$.
If the unit-specific propensity scores were known, estimation and inference could proceed as usual. Define the following infeasible IPTW estimator:
\begin{equation}
\label{eq:simple-ipw-infeasible}
\widetilde{\tau} = \frac{\sum_{i=1}^N (D_{iT} / \pi_i)Y_i}{\sum_{i=1}^N (D_{iT} / \pi_i)} - \frac{\sum_{i=1}^N ((1-D_{iT}) / (1-\pi_i))Y_i}{\sum_{i=1}^N ((1-D_{iT}) / (1-\pi_i))}
\end{equation}
We can verify the consistency of this estimator using the standard IPTW approach. In particular, we can write the first term on the right-hand side of~\eqref{eq:simple-ipw-infeasible} as the solution to the sample version of the following population moment condition:
\[
0 = \mathbb{E}\left[\frac{D_{iT}}{\pi_i}\left( Y_i - \tau_1 \right)\right] = \mathbb{E}\left[\frac{D_{iT}}{\pi_i}\left( Y_i(1) - \tau_1 \right)\right] = \mathbb{E}\left[ \frac{\mathbb{E}[D_{iT} \mid \alpha_i]}{\pi_i}\mathbb{E}[Y_i(1) - \tau_1 \mid \alpha_i] \right] = \mathbb{E}[Y_i(1) - \tau_1]
\]
Under regularity conditions, the first term on the right-hand side of~\eqref{eq:simple-ipw-infeasible} will be consistent for $\mathbb{E}[Y_i(1)]$. Applying a similar argument to the second term, we can establish that $\widetilde{\tau}$ is consistent for $\tau$. Using standard asymptotic techniques, we can write this estimator in its asymptotically linear form
\[
\sqrt{N}(\widetilde{\tau} - \tau) = \frac{1}{\sqrt{N}} \sum_{i=1}^N U_i + o_p(1),
\]
where $U_i = U_{i1} - U_{i0}$, and
\[
U_{i1} = (D_{iT}/\pi_i)(Y_i(1) - \tau_1), \qquad U_{i0} = ((1-D_{iT})/(1-\pi_i))(Y_i(0) - \tau_0).
\]
The first term in this expansion converges in distribution to $N(0, V)$, where $V = \mathbb{E}[U_i^2]$.
What if, as is almost always the case, we do not know the true propensity scores? In the typical cross-sectional case, estimation of the causal effects would not be possible because there could be unmeasured confounding between treatment (captured in the unit-specific propensity scores) and the outcome. In the present setting, however, we have additional information available to us that can be used to adjust for the confounding. In particular, we have the entire treatment history that we can use to estimate the unknown, unit-specific propensity scores. Let $\widehat{\pi}_i = T^{-1} \sum_{t=1}^T D_{it}$ be the sample proportion of treated time periods for unit $i$. We now define the unit-specific IPTW estimator:
\begin{equation}\label{eq:feasible-IPW}
\widehat{\tau} = \frac{\sum_{i=1}^N (D_{iT} / \widehat{\pi}_i)Y_i}{\sum_{i=1}^N (D_{iT} / \widehat{\pi}_i)} - \frac{\sum_{i=1}^N ((1 - D_{iT}) / (1 - \widehat{\pi}_i))Y_i}{\sum_{i=1}^N ((1 - D_{iT}) / (1 - \widehat{\pi}_i))}
\end{equation}
This plug-in estimator replaces the unknown unit-specific propensity scores with the over-time, within-unit sample averages of the treatment indicators. Somewhat surprisingly, this estimator is consistent and asymptotically normal in spite of the unmeasured confounding, as shown by the following proposition. The proof is in Appendix~\ref{appendix:proof-prop-simple}.
\begin{proposition}\label{prop:simple}
As $N,T \rightarrow \infty$ and $N/T \rightarrow \rho$, where $0 < \rho < \infty$,
the asymptotic distribution of the estimator $\widehat{\tau}$ defined in Equation~\eqref{eq:feasible-IPW}
is given by $\sqrt{N}(\widehat{\tau} - \tau) \overset{d}{\to} \mathcal{N}(0, V)$, where $V = \mathbb{E}[U_i^2]$.
\end{proposition}
Proposition~\ref{prop:simple} shows that when the sample size does not dominate the number of time periods and when the history of treatment provides independent information about the propensity score of interest, using the unit-specific propensity scores in an IPTW estimator leads to a consistent and asymptotically normal estimator with the same variance as if the propensity scores were known. This is in spite of the fact that there may be unmeasured confounding between treatment and the outcome across units. Furthermore, unlike most extant ways of adjusting for time-constant unmeasured confounding, this result only requires repeated measurements of the treatment, not of both treatment and the outcome.
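The behavior described in Proposition~\ref{prop:simple} is easy to see in a small simulation. The following sketch (our own illustrative numbers and variable names, not taken from the paper or its application) draws unit-specific propensities from an unmeasured shock that also confounds the outcome, and compares the feasible unit-specific IPTW estimator to the naive difference in means:

```python
import numpy as np

# Illustrative simulation of the setting of Proposition 1: an unmeasured
# shock alpha_i drives both the unit-specific propensity pi_i and the outcome.
rng = np.random.default_rng(0)
N, T = 2000, 2000  # the asymptotics require N/T -> rho with 0 < rho < infinity

alpha = rng.normal(size=N)
pi = np.clip(1 / (1 + np.exp(-alpha)), 0.05, 0.95)  # overlap: c <= pi_i <= 1 - c
D = rng.binomial(1, pi[:, None], size=(N, T))       # i.i.d. over t given alpha_i

tau = 1.0                                           # true last-period effect
Y = tau * D[:, -1] + alpha + rng.normal(size=N)     # alpha_i confounds D and Y

# Feasible unit-specific propensity scores: within-unit treated shares.
pi_hat = np.clip(D.mean(axis=1), 1 / T, 1 - 1 / T)

w1 = D[:, -1] / pi_hat
w0 = (1 - D[:, -1]) / (1 - pi_hat)
tau_hat = (w1 @ Y) / w1.sum() - (w0 @ Y) / w0.sum()

# For contrast: the naive difference in means, which is confounded by alpha_i.
naive = Y[D[:, -1] == 1].mean() - Y[D[:, -1] == 0].mean()
```

In runs of this kind, the weighted estimate recovers the true effect while the naive contrast is pulled away from it by the unmeasured shock.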
How does this approach avoid the incidental parameters problem? It is instructive to review a Taylor expansion of this estimator around the true values of the propensity scores. Letting $\widehat{\tau}_1$ be the first term in~\eqref{eq:feasible-IPW}, we can expand it as
\[
\sqrt{N}(\widehat{\tau}_1 - \tau_1) = \frac{1}{\sqrt{N}} \sum_{i=1}^N U_{i1} - \frac{1}{\sqrt{N}} \sum_{i=1}^N \frac{(\widehat{\pi}_i - \pi_i)}{\pi_i} U_{i1} + o_p(1/T)
\]
The first term on the right-hand side of this expansion is the treatment group's contribution to the influence function when the propensity scores are known, and the second term is the first-order effect of estimating the propensity scores. Because of the within-unit ignorability and independence of treatment over time conditional on $\alpha_i$, we can show that the second term is $O_p(1/\sqrt{T})$, so it can be ignored in a first-order asymptotic approximation. In a typical panel data setting, there would be an outcome in every time period and typical methods require a $\sqrt{NT}$ rate of convergence, in which case the second term would contribute bias and variance to the asymptotic distribution. While we have focused on the effect of the last blip of treatment, it is straightforward to extend this result to investigate the effect of some subset of the treatment history so long as the length of the subset is fixed as $N,T \rightarrow \infty$. The fixed window ensures that the incidental parameters bias does not dominate the asymptotic distribution.
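To see the rate for the second term concretely (an informal calculation, not a substitute for the proof in Appendix~\ref{appendix:proof-prop-simple}), note that conditional on $\alpha_i$, $\widehat{\pi}_i$ is an average of $T$ independent Bernoulli$(\pi_i)$ draws, so
\[
\mathbb{E}\left[ (\widehat{\pi}_i - \pi_i)^2 \mid \alpha_i \right] = \frac{\pi_i(1-\pi_i)}{T} \leq \frac{1}{4T},
\]
and hence $\widehat{\pi}_i - \pi_i = O_p(T^{-1/2})$ for each unit, which is the order claimed for the second term of the expansion.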
One advantage of the unit-specific weights is that they act as \emph{balancing weights} for any baseline covariate, measured or unmeasured. In particular, the true propensity scores satisfy $\pi_i = \mathbb{P}(D_{it}=1 \mid \mathcal{F}_i)$, where $\mathcal{F}_i$ is the set of all possible time-constant variables influencing treatment. As a result, for any $Z_i \in \mathcal{F}_i$, $\mathbb{E}[D_{iT}Z_i/\pi_i] = \mathbb{E}[Z_i]$ and
\[
\mathbb{E}\left[\frac{D_{iT}Z_i}{\pi_i} - \frac{(1 - D_{iT})Z_i}{1-\pi_i} \right] = 0,
\]
so that the true weights balance the baseline covariates on average.
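This balancing property can be checked numerically. The following sketch (purely illustrative; all quantities are simulated, not from the application) draws a baseline variable $Z_i$ that depends on the unmeasured shock $\alpha_i$ and verifies that the true-weight contrast is approximately zero:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
alpha = rng.normal(size=N)
pi = np.clip(1 / (1 + np.exp(-alpha)), 0.05, 0.95)  # true unit propensities
D = rng.binomial(1, pi)                             # last-period treatment

Z = alpha + alpha**2   # any baseline covariate, even an unmeasured one
contrast = np.mean(D * Z / pi - (1 - D) * Z / (1 - pi))
# contrast is approximately zero: the true weights balance Z across groups
```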
This setting was highly restricted to demonstrate the core intuition behind the IPTW-FE approach. There were no covariates, time-varying or otherwise, and we relied on a nonparametric estimator for the propensity score. In the rest of the paper, we expand our scope to focus on the more general class of marginal structural models.
\section{Review of Marginal Structural Models and Inverse Propensity Score Weighting}\label{s:review-msm}
The combination of marginal structural models and inverse probability of treatment weighting was developed by \cite{Robins98b} and has since become an important method across a number of scientific domains. \cite{RobHerBru00} provides a general introduction to the method. A robust methodological literature has built up around the method, focusing on stabilizing the construction of the weights \citep{ColHer08, XiaMooAbr13, ImaRat15, KalSan19}, using machine learning methods to make estimation more flexible \citep{Diavan11, BruLogJar15}, or developing doubly robust versions of the approach \citep{BanRob05, RotLeiSue12}. Our contribution to this literature is to show how these methods may be applied when a researcher suspects there may be time-constant unmeasured confounding.
We now review the use of IPTW in the context of marginal structural models. Two main features distinguish it from the previous discussion. First, we allow for potential outcomes to be a function of the entire treatment history. Let $\overline{D}_{it} = (D_{i1},\ldots,D_{it})$ be the treatment history up to time $t$ and $\underline{D}_{it} = (D_{it}, \ldots, D_{iT})$ be the history from $t$ to $T$. Let $\overline{D}_i = \overline{D}_{iT}$, where these take values in $\mathcal{D}_T \equiv \{0,1\}^T$. We define the potential outcomes as $Y_i(\overline{d})$, where $\overline{d} \in \mathcal{D}_T$, which is the outcome that unit $i$ would have if they had followed treatment history $\overline{d}$. Second, we allow for the possibility of time-varying confounders. Let $X_{it}$ be a vector of time-varying covariates that are causally prior to $D_{it}$. Even though this vector is labelled in period $t$, it may include lagged as well as contemporaneous covariates. We define $\overline{X}_{it}$, $\underline{X}_{it}$, and $\underline{X}_i$ similarly to the treatment history.
The MSM methodology is based on a sequential ignorability assumption that treatment at time $t$ is unrelated to the potential outcomes conditional on (some function of) the history of treatment and the time-varying covariates. In particular, there is some vector of time-varying covariates, such that,
\[
Y_i(\overline{d}) \!\perp\!\!\!\perp D_{it} \mid X_{it}, \overline{D}_{i,t-1},
\]
where again $X_{it}$ may include information about the history of the covariates as well. This assumption is a time-varying version of a selection-on-observables assumption applied repeatedly to treatment in each period. One drawback with this approach in the social sciences is that units may have differing baseline probabilities of treatment based on traits that are difficult to measure, as we had in the last section. In the context of negative advertising, this may occur if a candidate faces a strong challenger whose strength is not fully captured in polling data. This limitation of sequential ignorability is one motivation for developing the fixed-effects approach we introduce below.
A marginal structural model is a model for the marginal mean of the potential outcomes as a function of the treatment history:
\[
\mathbb{E}[Y_i(\overline{d})] = g(\overline{d}; \gamma_0)
\]
The dimensionality of $\overline{d}$ grows quickly in $T$, so even when $T$ is moderate, $g(\cdot)$ will usually impose some parametric restrictions on the response surface. Even if these modeling restrictions are correct, the observed conditional expectation function $\mathbb{E}[Y_i\mid \overline{D}_i = \overline{d}] \neq g(\overline{d}; \gamma_0)$ due to confounding by $X_{it}$. On the other hand, including the covariates in the conditional expectation will lead to post-treatment bias so that $\mathbb{E}[Y_i \mid \overline{D}_i = \overline{d},\overline{X}_i] \neq g(\overline{d}; \gamma_0)$. \citet{Robins99} showed how an inverse probability of treatment weighting scheme could avoid these two biases. In particular, he showed that a weighted conditional expectation can recover the parameters of the MSM when the weights are proportional to the inverse of the conditional probability of the unit's treatment history given their covariate history. Let $\pi_t(\overline{d}_{t-1}, x) = \mathbb{P}(D_{it} = 1 \mid \overline{D}_{i,t-1} = \overline{d}_{t-1}, X_{it} = x)$ and let $\pi_{it} = \pi_t(\overline{D}_{i,t-1}, X_{it})$. Then, the IPTW weights for our MSM become
\begin{equation}\label{eq:msm-weights}
W_i = \prod_{t=1}^T \pi_{it}^{-D_{it}}(1 - \pi_{it})^{-(1-D_{it})}
\end{equation}
With these weights, \citet{Robins99} showed that $\mathbb{E}[\bm{1}\{\overline{D}_i = \overline{d}\} W_{i} Y_i] = g(\overline{d}; \gamma_0)$.
In observational studies, the propensity scores used to construct the weights are not usually known to the analyst and so must be estimated. The standard approach in the MSM literature is to specify a parametric model for treatment and estimate its parameters via maximum likelihood. Define a parametrization of the propensity score $\pi(x, \overline{d}_{t-1}; \beta)$, where the true value of this parameter satisfies $\pi(x, \overline{d}_{t-1}; \beta_0) = \mathbb{P}(D_{it} = 1 \mid X_{it} = x, \overline{D}_{i,t-1} = \overline{d}_{t-1})$. We then define the estimated propensity scores as $\widehat{\pi}_{it} = \pi(X_{it}, \overline{D}_{i,t-1};\widehat{\beta})$, where $\widehat{\beta}$ is the MLE:
\[
\begin{gathered}
\widehat{\beta} = \mathop{\rm arg~max}\limits_{\beta} \frac{1}{NT} \sum_{i=1}^N \sum_{t=1}^{T} \ell_{it}(\beta),\\
\ell_{it}(\beta) = D_{it}\log \pi(X_{it}, \overline{D}_{i,t-1}; \beta) + (1 - D_{it})\log(1 - \pi(X_{it}, \overline{D}_{i,t-1}; \beta)).
\end{gathered}
\]
These estimated propensity scores can then be used to generate estimated weights, $\widehat{W}_i = \prod_{t=1}^T \widehat{\pi}_{it}^{-D_{it}}(1 - \widehat{\pi}_{it})^{-(1-D_{it})}$. With these estimated weights, an IPTW estimator for the MSM can be constructed by solving the empirical version of the following estimating equation for $\gamma$:
\[
\mathbb{E}\left\{\widehat{W}_{i} h(\underline{D}_i)(Y_i - g(\underline{D}_i;\gamma)) \right\} = 0.
\]
For example, when $g(\cdot)$ is the identity function, this approach reduces to weighted least squares. \cite{Robins98} showed that this procedure produces a consistent and asymptotically normal estimator for the parameters of the MSM.
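For concreteness, a minimal solver for the identity-link case might look as follows (a sketch; the function name and interface are ours, not from any particular package):

```python
import numpy as np

def wls(H, Y, W):
    """Solve the weighted estimating equation
    sum_i W_i h_i (Y_i - h_i' gamma) = 0,
    i.e. weighted least squares of Y on the design matrix H (rows h_i')
    with weights W."""
    HW = H * W[:, None]                      # rows W_i * h_i'
    return np.linalg.solve(HW.T @ H, HW.T @ Y)
```

With all weights equal, this reduces to ordinary least squares.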
The weights in~\eqref{eq:msm-weights} can often be unstable when the true or estimated propensity scores are close to one or zero, which can lead to highly variable estimates. A common practice in this case is to include a stabilizing numerator that is the marginal probability of the treatment history, $\overline{\pi}_{it} = \mathbb{P}(D_{it} = 1 \mid \overline{D}_{i,t-1})$. In this case, the stabilized weights become
\[
\widetilde{W}_i = \prod_{t=1}^T \left(\frac{\overline{\pi}_{it}}{\pi_{it}}\right)^{D_{it}}\left( \frac{1 - \overline{\pi}_{it}}{1 - \pi_{it}} \right)^{1-D_{it}}.
\]
Another common practice is to trim the weights to additionally guard against unstable causal parameter estimates \citep{ColHer08}.
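In code, constructing stabilized (and optionally trimmed) weights from fitted conditional and marginal propensity scores might look like this (a sketch with hypothetical names; the clipping bounds are a common but application-specific choice):

```python
import numpy as np

def stabilized_weights(D, p_cond, p_marg, clip=(0.01, 0.99)):
    """Stabilized IPTW weights for an N x T panel of binary treatments.

    D      : (N, T) observed treatments
    p_cond : (N, T) fitted P(D_it = 1 | covariate and treatment history)
    p_marg : (N, T) fitted P(D_it = 1 | treatment history), the numerator
    """
    p_cond = np.clip(p_cond, *clip)   # trim to guard against extreme weights
    p_marg = np.clip(p_marg, *clip)
    # Period-t ratio evaluated at the observed treatment D_it.
    ratio = np.where(D == 1, p_marg / p_cond, (1 - p_marg) / (1 - p_cond))
    return ratio.prod(axis=1)         # product over t = 1, ..., T
```

When the marginal and conditional probabilities coincide, the weights are identically one, which is the sense in which stabilization centers the weights.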
\section{Fixed-effect Propensity Score Estimators}\label{s:iptw-fe}
\subsection{Setting and Assumptions}
We now focus on estimating propensity scores with fixed effects for MSMs when time-constant unmeasured confounding exists. As with the traditional MSM case, we assume that $(Y_i, \overline{D}_{i}, \overline{X}_i)$ are independent across observations. In order to adjust for unit-specific heterogeneity, we do require restrictions beyond the typical MSM case. First and foremost, we focus on marginal structural models for a treatment history of a fixed length rather than the entire treatment history. In particular, we focus on MSMs of the form $\mathbb{E}[Y_{i}(\underline{d}_{T-k})] = g(\underline{d}_{T-k};\gamma)$, where $\underline{d}_{T-k} = (d_{T-k},\ldots, d_T)$, $k$ is fixed, and the parameter vector $\gamma$ is of length $J$. This restriction is a choice of the quantity of interest, not a substantive assumption about the effect of the treatment history. By the usual consistency assumption, we can define these ``shorter'' potential outcomes as $Y_i(\underline{d}_{T-k}) \equiv Y_i(\overline{D}_{i,T-k-1}, \underline{d}_{T-k})$,
so that treatment history before $k$ lags acts more like a baseline confounder. Compared to typical MSM practice, the main limitation of this restriction is to rule out functional forms where the cumulative sum of the treatment history is included as part of the MSM.
We now describe the key identification assumption of the IPTW-FE approach, which is a combination of the unit-specific randomized experiments assumption of Section~\ref{s:illustration} and the standard MSM framework in Section~\ref{s:review-msm}. Let $\underline{X}_{i,t+1}(d_t)$ and $\underline{D}_{i,t+1}(d_t)$ represent the potential outcomes of the future covariate and treatment histories for an intervention on the treatment process at time $t$.
\begin{assumption}[Unit-specific Sequential Ignorability]\label{a:sequential-ignorability} For all $i$ and $t$,
\[\{Y_i(\overline{d}), \underline{X}_{i,t+1}(\overline{d})\} \!\perp\!\!\!\perp D_{it} \mid \overline{X}_{it}, \overline{D}_{i,t-1} = \overline{d}_{t-1}, \alpha_i.\]
\end{assumption}
\noindent Assumption~\ref{a:sequential-ignorability} states that conditional on the unit-specific effect, the treatment history, and (a function of) the covariate history, treatment is independent of future potential outcomes for both the outcome and the covariate process. In essence, treatment is randomized with respect to future covariates and the outcome, conditional on the past and time-constant features of the unit. This assumption allows for both time-varying confounding by measured covariates and time-constant confounding by measured and unmeasured covariates. We do assume that the time-constant unmeasured confounding can be captured by the unidimensional, $\alpha_i$, which might represent a combination of several unit-specific factors.
Assumption~\ref{a:sequential-ignorability} involves potential outcomes for full treatment histories, $Y_i(\overline{d})$, but above we defined our main marginal structural models in terms of treatment histories of fixed length, $\mathbb{E}[Y_i(\underline{d}_{T-k})]$. Thus, the requirements of sequential ignorability go beyond the treatments of interest in the marginal structural model and apply to the potential outcomes for the entire treatment history. This allows the fixed-effect propensity score estimators to be consistent even without a no-carryover assumption that treatment before $T-k$ has no effect on the outcome.
We now turn to defining the propensity scores with fixed effects. We assume a parametric model for this propensity score (up to the unmeasured heterogeneity) with a function of the covariate and treatment histories acting as the covariates: $V_{it} = b(\overline{X}_{it}, \overline{D}_{i,t-1})$, where $V_{it}$ is an $R \times 1$ vector. Let $\mathbb{P}(D_{it} = 1 \mid \overline{X}_{it} = \overline{x}_{t}, \overline{D}_{i,t-1} = \overline{d}_{t-1}, \alpha_i) = \mathbb{P}(D_{it} = 1 \mid V_{it} = v, \alpha_i) = \pi(v; \beta, \alpha_i)$ be the probability of treatment given the covariate and treatment histories, where $v = b(\overline{x}_{t}, \overline{d}_{t-1})$. We define $\pi_{it}(\beta, \alpha_i) = \pi(V_{it}; \beta, \alpha_i)$ and assume the following functional form:
\[
\pi(v; \beta, \alpha_i) = F(v^{\top}\beta + \alpha_{i}),
\]
where $F(\cdot)$ is the cumulative distribution function of either the standard normal or standard logistic distribution. One restriction here is that the function $b(\cdot)$ is time-constant which allows us to pool information across the time dimension effectively. Let $\alpha_{0} = (\alpha_{10}, \ldots, \alpha_{N0})$ and $\beta_0$ be the values of the parameters that generate the treatment process. In particular, we assume that these values are the solution to the following population conditional maximum likelihood condition
\begin{equation}
\label{eq:pop-mle}
(\beta_0, \alpha_0) = \mathop{\rm arg~max}\limits_{(\beta, \alpha) \in \mathbb{R}^{d_\beta + N}} \sum_{i=1}^N\sum_{t=1}^T \mathbb{E}[\ell_{it}(\beta, \alpha) \mid \alpha_i],
\end{equation}
where the expectation is taken with respect to the distribution of the data conditional on the unobserved effect. Under Assumption~\ref{a:panel-data} below, these parameters will be identified from the model. This explicitly assumes a correctly specified parametric model for the propensity scores. Such an assumption is common in applications of MSMs, and the current version is still considerably weaker than in those settings since it allows for unit-specific heterogeneity.
To account for the time-constant unmeasured confounding, we construct weights with these unit-specific effects to estimate the MSMs. In particular, we use the following weights
\[
W_{i}(\beta, \alpha_i) = \prod_{j=1}^k \left( \frac{1}{\pi_{i,T-j}(\beta, \alpha_i)} \right)^{D_{i,T-j}} \left( \frac{1}{1-\pi_{i,T-j}(\beta, \alpha_i)} \right)^{1 - D_{i,T-j}},
\]
where we only take the product over the last $k$ time periods because our quantity of interest focuses on those periods. As with the standard MSM case, we can replace the numerator with the marginal probability of the treatment history, $\overline{\pi}_{it}$, which can stabilize the variance of the estimator without affecting identification.
The IPTW approach to estimating this MSM is to rely on the estimating equation
\[
U_i(\gamma, \beta, \alpha_i) = W_i(\beta, \alpha_{i})h(\underline{D}_{i,T-k})(Y_i - g(\underline{D}_{i,T-k}; \gamma)),
\]
where $h(\cdot)$ is a function with $J$-length output, chosen by the researcher. For example, if $Y_i$ is continuous and $g$ is linear and additive, it is common to use $h(\underline{D}_{i,T-k}) = \underline{D}'_{i,T-k}$. Under the fixed-effects sequential ignorability assumption, the MSM satisfies the following restriction:
\begin{equation}\label{eq:msm-population-condition}
\mathbb{E}\left[ U_i(\gamma_0, \beta_0, \alpha_{i0})\right] = 0
\end{equation}
This is an identification result because the restriction identifies the causal parameters, $\gamma_0$, solely in terms of sample quantities (up to the propensity score parameters). This result follows the standard g-computation algorithm with the unit-specific heterogeneity, $\alpha_i$, included in the place of a baseline covariate \citep{Robins99, Robins00}.
Of course, to build a feasible estimator for $\gamma$, we need estimates of the propensity score parameters. We will consider first-step estimators that find the MLEs for these parameters. To allow for fixed effects in these models, we make the following assumptions.
\begin{assumption}[Treatment Regularity Conditions]\label{a:panel-data}
Let $\nu > 0$, $\mu > 4(8 + \nu)/\nu$, and let $\mathcal{B}_{0}(\epsilon)$ be an $\epsilon$-neighborhood of $(\beta_0, \alpha_{i0})$ for all $i,t,N,T$.
\begin{enumerate}[label=(\roman*), ref=\theassumption(\roman*)]
\item \label{a:rates} (Asymptotics) Let $N,T \rightarrow \infty$ such that $N/T \rightarrow \rho$ where $0 < \rho < \infty$.
\item \label{a:sampling} (Sampling) For all $N$ and $T$, $\{(Y_i(\underline{d}), \underline{D}_i, \underline{X}_i, \alpha_i): i = 1,\ldots,N\}$ are i.i.d. across $i$. Letting $Z_{it} = (D_{it}, X_{it})$ for $t = 1,\ldots,T$ and $Z_{i,T+1} = (Y_i(\underline{d}))$, then for each $i$, $\{Z_{it}: t = 1,\ldots,T+1\}$ is $\alpha$-mixing conditional on $\alpha_i$ with mixing coefficients satisfying $\sup_i a_i(m) = O(m^{-\mu})$ as $m\rightarrow \infty$ where
\[
a_i(m) \equiv \sup_t \sup_{A\in\mathcal{A}_{it}, B \in \mathcal{B}_{i,t+m}} |\mathbb{P}(A \cap B) - \mathbb{P}(A)\mathbb{P}(B)|,
\]
and $\mathcal{A}_{it}$ is the sigma field generated by $(Z_{it}, Z_{i,t-1}, \ldots)$ and $\mathcal{B}_{i,t}$ is the sigma field generated by $(Z_{it}, Z_{i,t+1}, \ldots)$.
\item \label{a:bounded-derivatives} We assume that $(\beta, \alpha) \mapsto \ell_{it}(\beta, \alpha)$ is four-times continuously differentiable over $\mathcal{B}_{0}(\epsilon)$ almost surely. The partial derivatives of $\ell_{it}(\beta, \alpha)$ with respect to the elements of $(\beta, \alpha)$ are bounded in absolute value uniformly over $(\beta,\alpha) \in \mathcal{B}_{0}(\epsilon)$ by a function $M(Z_{it}) > 0$ almost surely and $\max_{i,t} \mathbb{E}[M(Z_{it})^{8+\nu}]$ is almost surely uniformly bounded over $N,T$.
\item \label{a:bounded-propensity-scores} For all $i,t$ we have $\pi_{it}(\beta,\alpha)$ bounded away from $0$ and $1$ uniformly over $(\beta, \alpha) \in \mathcal{B}_0(\epsilon)$.
\item \label{a:concavity} (Concavity) For all $N$ and $T$, the map $(\beta, \alpha) \mapsto \ell_{it}(\beta, \alpha)$ is strictly concave over $\mathbb{R}^{\text{dim}(\beta)+1}$ almost surely. Furthermore, there exist $b_{\min}$ and $b_{\max}$ such that for all $(\beta, \alpha) \in \mathcal{B}_0(\epsilon)$, $0 < b_{\min} \leq -\mathbb{E}[V_{it\alpha} \mid \alpha_i] \leq b_{\max}$ almost surely uniformly over $i$, $t$, $N$, and $T$.
\end{enumerate}
\end{assumption}
Assumption~\ref{a:panel-data} mostly derives from \citet{FerWei16}, who used these conditions to establish the asymptotic properties of nonlinear panel models with unit- and time-specific effects, though we focus only on unit effects. Assumption~\ref{a:rates} establishes the large-$N$, large-$T$ asymptotic framework, which has been widely used for nonlinear panel models in econometrics \citep{HahNew04, AreHah07, FernandezVal09, HahKue11, FerWei16, FerWei18}. The strong mixing condition in Assumption~\ref{a:sampling} allows us to rely on laws of large numbers and central limit theorems in the time dimension. This assumption is substantially weaker than independence over time or even stationarity. In particular, it allows for time trends, which are a common feature of propensity score models in MSMs. The assumption that the data and the fixed effects are i.i.d. across units is common to IPTW approaches; it allows us to take averages over the unit-specific heterogeneity and has been used before for average partial effects in nonlinear panel models \citep{FerWei16}. It is possible to replace this assumption with stationarity of $X_{it}$ over time, but this would rule out lagged treatment in the propensity score model along with time trends.
Assumption~\ref{a:bounded-derivatives} requires the log-likelihood of the propensity score model and its derivatives to be sufficiently smooth to allow for the higher-order asymptotic expansions we use. With a binary response, this assumption could be replaced by a moment condition on the distribution of the covariates. We invoke a locally uniform version of positivity in Assumption~\ref{a:bounded-propensity-scores}. Note that Assumption~\ref{a:bounded-propensity-scores} implicitly restricts $\alpha_i$, since if $\alpha_i$ were completely unrestricted, then we may have $\pi_{it} \rightarrow 0$ or $1$. Finally, Assumption~\ref{a:concavity} ensures that the MLE is identified and should be satisfied in the usual parametric models for binary data when the covariates $V_{it}$ vary in the time and unit dimensions.
In addition to these assumptions on the treatment process, we also make the following regularity conditions on the marginal structural model and outcome.
\begin{assumption}[Outcome Regularity Conditions]\label{a:msm-assumptions}
Let $\nu > 0$ and let $\mathcal{B}_{0}(\epsilon)$ be an $\epsilon$-neighborhood of $(\gamma_0, \beta_0, \alpha_{i0})$ for all $i, N$.
\begin{enumerate}[label=(\roman*), ref=\theassumption(\roman*)]
\item \label{a:bounded-moments} (Bounded outcome moments) $\mathbb{E}[|Y_{i}(d)|^{4+\nu}]$ and $\mathbb{E}[|Y_{i}(d)|^{4+\nu} \mid \alpha_i, \underline{D}_i, X_{iT}]$ are bounded by finite constants, uniformly over $i$.
\item \label{a:msm-regularity} (MSM regularity) The parameters $\phi = (\gamma, \beta, \alpha) \in \interior \Phi$ where $\Phi$ is a compact, convex subset of $\mathbb{R}^{J+R+1}$ with $J = \dim(\gamma)$ and $R = \dim(\beta)$. The map $\gamma \mapsto U_i(\gamma, \beta, \alpha)$ is continuously differentiable over $(\gamma, \beta, \alpha) \in \mathcal{B}_{0}(\epsilon)$ with $\mathbb{E}[\sup_{\gamma \in \mathcal{B}_{0}(\epsilon)} \Vert\partial_{\gamma} U_i(\gamma, \beta, \alpha)\Vert] < \infty$.
\end{enumerate}
\end{assumption}
Assumption~\ref{a:bounded-moments} ensures the potential outcomes have sufficiently bounded (conditional) moments. Assumption~\ref{a:msm-regularity} is a set of standard regularity conditions for the marginal structural model.
\subsection{Proposed Method}
We propose a two-step approach to estimating the parameters of the marginal structural model using inverse probability of treatment weighting. These two steps are:
\begin{enumerate}
\item Estimate the parameters of the propensity score model $(\widehat{\beta}, \widehat{\alpha}_i)$ using conditional maximum likelihood, treating the unit-specific effects $\alpha_i$ as fixed parameters to be estimated. Construct the estimated weights $W_i(\underline{D}_{i,T-k}; \widehat{\beta}, \widehat{\alpha}_i)$.
\item Pass the estimated weights to a weighted estimating equation $N^{-1}\sum_{i=1}^NU_i(\widehat{\gamma}, \widehat{\beta}, \widehat{\alpha}_i) = 0$ to obtain estimates of the MSM parameters, $\gamma$.
\end{enumerate}
The first step in this procedure can be implemented with a sample conditional maximum likelihood estimator. Letting $\widehat{\alpha} = (\widehat{\alpha}_1, \ldots, \widehat{\alpha}_N)$, we have
\begin{equation}
\label{eq:mle}
(\widehat{\beta}, \widehat{\alpha}) = \mathop{\rm arg~max}\limits_{(\beta, \alpha) \in \mathbb{R}^{d_{\beta} + N}} \mathbb{E}_{NT}[\ell_{it}(\beta, \alpha_i)]
\end{equation}
Equivalently, under these assumptions, the estimates can be computed by profiling out the unit-specific effects:
\[
\widehat{\beta} = \mathop{\rm arg~max}\limits_{\beta} \frac{1}{NT} \sum_{i=1}^N \sum_{t=1}^{T} \ell_{it}(\beta, \widehat{\alpha}_i(\beta)), \qquad
\widehat{\alpha}_i(\beta) = \mathop{\rm arg~max}\limits_{\alpha} \frac{1}{T} \sum_{t=1}^T \ell_{it}(\beta, \alpha).
\]
These maximum likelihood estimates are subject to the usual incidental parameters problem, which results in bias that shrinks as $T\rightarrow\infty$. Even when $N$ and $T$ grow at the same rate, \citet{HahNew04} showed that the asymptotic distribution of such fixed-effect maximum likelihood estimators is not correctly centered at the $\sqrt{NT}$ rate, and a large literature has developed proposing several bias correction techniques \citep{AreHah07, FerWei18}. We sidestep these issues in our own results because we target the slower convergence rate of $\sqrt{N}$ that is typical in the MSM literature.
To obtain estimates of the MSM parameters, $\widehat{\gamma}$, we use the sample version of the MSM moment condition~\eqref{eq:msm-population-condition},
\[
\frac{1}{N} \sum_{i=1}^N W_i(\underline{D}_{i,T-k};\widehat{\beta}, \widehat{\alpha}_i)h(\underline{D}_{i,T-k})(Y_i - g(\underline{D}_{i,T-k};\widehat{\gamma})) = \frac{1}{N} \sum_{i=1}^N U_i(\widehat{\gamma}, \widehat{\beta}, \widehat{\alpha}_i) = 0.
\]
This estimator depends on the link function for the marginal structural model and a function $h(\cdot)$. One particularly straightforward estimator in this class is weighted least squares for the identity link with continuous outcomes. Often $h(\cdot)$ can be chosen to enhance the efficiency of the estimator \citep{Robins99}, but we do not explore that here.
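The two steps can be put together in a stylized end-to-end sketch. The simulation below reflects simplifying choices we make purely for illustration (a logistic propensity model with one covariate, $k = 1$, an identity-link MSM, and an alternating-Newton fitting routine); all numbers and names are ours, not from the paper's application:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 2000, 1000
beta0, gamma1 = 0.5, 1.0            # true propensity slope and MSM effect

alpha = rng.normal(size=N)          # unmeasured unit-specific effects
X = rng.normal(size=(N, T))         # measured time-varying confounder
pi = 1 / (1 + np.exp(-(beta0 * X + alpha[:, None])))
D = rng.binomial(1, pi)
Y = gamma1 * D[:, -1] + 0.5 * X[:, -1] + 0.5 * alpha + rng.normal(size=N)

# Step 1: logit with unit fixed effects, fit by alternating Newton updates.
beta, a = 0.0, np.zeros(N)
for _ in range(30):
    p = 1 / (1 + np.exp(-(beta * X + a[:, None])))
    # alpha_i-step: one Newton update per unit (a 1-D concave problem),
    # clipped to guard against extreme units.
    a = np.clip(a + (D - p).sum(axis=1) / (p * (1 - p)).sum(axis=1), -6, 6)
    p = 1 / (1 + np.exp(-(beta * X + a[:, None])))
    # beta-step: one Newton update on the common slope.
    beta += ((D - p) * X).sum() / (p * (1 - p) * X**2).sum()

# Weights over the last k = 1 period only, with light trimming.
p_T = np.clip(1 / (1 + np.exp(-(beta * X[:, -1] + a))), 0.01, 0.99)
W = np.where(D[:, -1] == 1, 1 / p_T, 1 / (1 - p_T))

# Step 2: weighted least squares for the MSM E[Y(d_T)] = gamma_0 + gamma_1 d_T.
H = np.column_stack([np.ones(N), D[:, -1]])
HW = H * W[:, None]
gamma_hat = np.linalg.solve(HW.T @ H, HW.T @ Y)
```

Because the weights are built from only the last period while the propensity parameters are fit on all $NT$ observations, the first-step noise is negligible relative to the sampling variability of the second step, mirroring the asymptotic argument below.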
We now show in Theorem~\ref{thm:msm-iptw-fe} that under regularity conditions and the above assumptions, this estimator is consistent and asymptotically normal.
The proof is in Appendix~\ref{appendix:proof-thm-iptwfe}.
\begin{theorem}\label{thm:msm-iptw-fe}
Under Assumptions~\ref{a:sequential-ignorability}, \ref{a:panel-data}, and~\ref{a:msm-assumptions}, $\widehat{\gamma} \overset{p}{\to} \gamma_0$ and
\begin{equation}
\sqrt{N}(\widehat{\gamma} - \gamma_0) \xrightarrow{d} N(0, V_{\gamma_0}),
\end{equation}
where $V_{\gamma_0} = G^{-1}\mathbb{E}[U_iU_i^{\top}]G^{-1}$ and
\[
G = \mathbb{E}\left\{ \frac{\partial U_i(\gamma, \beta, \alpha)}{\partial \gamma} \right\}_{\gamma=\gamma_0} \qquad U_i = U_i(\gamma_0, \beta_0, \alpha_{i0}).
\]
\end{theorem}
We can build a consistent variance estimator in the usual way with $\widehat{V}_{\gamma} = \widehat{G}^{-1}\widehat{\Omega}\widehat{G}^{-1}$, where
\[
\widehat{G} = \frac{1}{N} \sum_{i=1}^N \frac{\partial \widehat{U}_i}{\partial \gamma}, \qquad \widehat{\Omega} = \frac{1}{N} \sum_{i=1}^N \widehat{U}_i\widehat{U}_i^{\top}, \qquad \widehat{U}_i = U_i(\widehat{\gamma}, \widehat{\beta}, \widehat{\alpha}_i).
\]
This is a standard sandwich estimator for estimators based on estimating equations.
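For the identity-link/WLS case, the sandwich computation above can be sketched as follows (Python/NumPy; function and variable names are our own, and the sketch treats the weights as given, consistent with Theorem~\ref{thm:msm-iptw-fe}):

```python
import numpy as np

def sandwich_vcov(y, X, w, gamma):
    """Sandwich variance V = G^{-1} Omega G^{-1} for the WLS moment
    U_i = w_i x_i (y_i - x_i' gamma), with
        G_hat     = -(1/N) sum_i w_i x_i x_i'
        Omega_hat =  (1/N) sum_i U_i U_i'.
    Returns the variance of sqrt(N)(gamma_hat - gamma_0);
    divide by N for standard errors. Illustrative sketch only."""
    n = len(y)
    resid = y - X @ gamma
    U = w[:, None] * X * resid[:, None]      # rows are U_i'
    G = -(X.T @ (w[:, None] * X)) / n
    Omega = U.T @ U / n
    Ginv = np.linalg.inv(G)
    return Ginv @ Omega @ Ginv.T
```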
Theorem~\ref{thm:msm-iptw-fe} establishes that the IPTW-FE estimator for MSMs is asymptotically normal and that we can asymptotically ignore the estimation of the weights. In the standard IPTW case, the estimation of the weights does impact the distribution of the MSM estimates. Here, however, the estimation of the weights does not affect the first-order asymptotic distribution because we are using $NT$ observations to estimate the propensity score parameters but only a fraction of the observations, $Nk$, to create the weights, where $k$ is fixed as $T\to\infty$. Thus, $\widehat{\beta}$ converges much faster than $\widehat{\gamma}$, and so we can ignore its estimation.
In typical nonlinear panel models, plugging in noisy estimates of the fixed-effect parameters leads to a bias that converges to 0 slowly enough to create asymptotic bias. In our setting, however, the strong mixing property of the treatment process ensures that this bias fades over time and so allows us to ignore the estimation of the fixed-effect parameters as well. In the literature on nonlinear panel models, there is a similar result for estimating partial effects, or differences in the conditional expectation, as opposed to parameters of the nonlinear model. For example, \cite{FerWei18} showed how these average partial effects can converge at a slower rate with parameter estimation not having a first-order effect on the asymptotic distribution \cite[see also][]{FerWei16}. The current approach is similar since we are only interested in the parameters of the weighting model insofar as they provide consistent estimates of the IPTW weights.
This result establishes that it is possible to adjust for unmeasured baseline confounding in MSMs when the time dimension is long and provides sufficiently new information within units. The quality of this adjustment will depend on both how long the panels are and how severe the unmeasured heterogeneity is. A second-order expansion of the estimator shows that the second-order bias (which can be ignored in our asymptotic analysis) is inversely related to the propensity scores. Thus, strong unit-specific heterogeneity will push propensity scores close to zero or one and create more finite-sample bias. Longer panels help with this finite-sample bias since these second-order terms will be of order $O_P(1/\sqrt{T})$. A fruitful avenue for future research would be to use analytic or computational approaches like the jackknife to adjust for these second-order terms.
\subsection{Trimming Weights}
One drawback of the IPTW-FE approach is that the fixed-effect parameters of the propensity score model are not identified when units are either always treated or always control. Even if we maintain the population-level positivity assumption, this in-sample positivity violation means that some units will have undefined weights. We propose three ways to address this issue. First, one could simply omit the no-treatment-variance units and estimate the parameters of the MSM for the units that have at least one treated and one control period. This is the simplest procedure but could induce confounding bias, especially if the $\alpha_i$ have a nonlinear relationship with the outcome. Second, we could use an ad hoc rule for imputing propensity scores of the no-treatment-variance units.
For example, we could set these units to have $\widehat{\pi}_{it} = 0.01$ if $D_{it} = 0$ for all $t$ and $\widehat{\pi}_{it} = 0.99$ if $D_{it} = 1$ for all $t$. Depending on the lag length $k$ in the MSM and the exact trimming this may lead to extreme weights which themselves could require trimming. Alternatively, one could place bounds on the range of the unit-specific effects in the MLE estimation to $\alpha_i \in [a_0, a_1]$ and set the estimates of those effects as $\widehat{\alpha}_i = a_0$ or $\widehat{\alpha}_i = a_1$ if $D_{it} = 0$ or $D_{it} = 1$ for all $t$, respectively. The amount of trimming of the weights in this approach amounts to a bias-variance trade-off similar to weight trimming in standard IPTW estimators for MSMs \citep{ColHer08}.
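The ad hoc imputation rule just described takes only a few lines. A Python sketch (the 0.01/0.99 caps are the example values from the text; the function name and array layout are our own):

```python
import numpy as np

def impute_no_variance_units(pi_hat, D, lo=0.01, hi=0.99):
    """Replace undefined propensity scores for units with no treatment
    variance: always-control units get `lo`, always-treated get `hi`.

    pi_hat : (n, T) estimated propensity scores (entries for
             no-variance units may be undefined/NaN);
    D      : (n, T) binary treatment matrix.
    """
    out = pi_hat.copy()
    always_treated = D.all(axis=1)     # D_it = 1 for all t
    never_treated = ~D.any(axis=1)     # D_it = 0 for all t
    out[always_treated, :] = hi
    out[never_treated, :] = lo
    return out
```

Units with at least one treated and one control period are left untouched.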
Finally, one alternative approach to handling positivity violations would be to focus on a different quantity of interest. \cite{Kennedy19} proposed estimating the effect of incremental propensity score interventions, which are interventions that shift the propensity score rather than set treatment histories to specific values. The identification and estimation of these effects does not depend on positivity, and under the assumption of a correctly specified propensity score model, a simple inverse probability weighting estimator is available \citep[p. 650]{Kennedy19}.
\section{Simulation Evidence}\label{s:simulation}
In this section, we conduct simulation studies to evaluate the finite sample performance of the proposed approach.
\subsection{Setup}
We simulate a balanced panel of $n$ units with $T$ time points
where the number of units varies $n \in \{200, 500, 1000, 3000\}$.
We fix the ratio of the number of units to the number of time periods $ n/T = \rho \in \{5, 10, 50\}$.
This setup mimics the asymptotic regime of our theoretical results; a larger value of $\rho$ implies a smaller number of time points, $T = n / \rho$.
The treatment sequence is generated as a function of individual unobserved effect $\alpha_{i}$, the past treatment $D_{i,t-1}$
and the time-varying covariates, $\*X_{it}$.
\begin{equation*}
D_{it} \sim \text{Bernoulli}(
\text{expit}(\alpha_{i} + \varphi D_{i,t-1} + \bm{\beta}^{\top}\mathbf{X}_{it})
)
\end{equation*}
where $\text{expit}(x) = 1 / (1 + \exp(-x))$ is the inverse logit (logistic) function.
The individual heterogeneity is drawn from a uniform distribution with support on $[-a, a]$ for $a \in \{1, 2\}$.
The value of $a$ is chosen such that the variance of individual heterogeneity explains
$25\%$ ($a = 1$) or $50\%$ ($a = 2$) of the variance of the linear predictor.
The time-varying covariates $\*X_{it}$ are generated exogenous to the treatment, drawn from the multivariate normal distribution,
$\*X_{it} \sim \mathcal{N}(-1/2\bm{1}, \Sigma)$
where $\Sigma_{jj} = 1$ and $\Sigma_{jj'} = 0.2$ for $j \neq j'$.
Finally, we set $\varphi = 0.3$ and $\bm{\beta} = (-0.5, -0.5)$
when the number of covariates is two or $\bm{\beta} = (-0.5, -0.5, 1.0, -0.5)$ when the number of covariates is four.
The outcome is generated by the linear model
with individual unobserved variable $\alpha_{i}$,
the final treatment $D_{iT}$,
the cumulative treatments $\sum^{T-1}_{t = T-3}D_{it}$
and the average of the time-varying covariates, $\overline{\*X}_{i} = \sum^{T}_{t=1}\*X_{it}/T$,
all of which are generated in the previous step.
\begin{equation*}
Y_{i} = \alpha_{i} +
\tau_{F}D_{iT} +
\tau_{C}\sum^{T-1}_{t = T-3}D_{it} +
\bm{\gamma}^{\top}\overline{\mathbf{X}}_{i} +
\epsilon_{i},\quad
\epsilon_{i} \sim \mathcal{N}(0, 1)
\end{equation*}
where we set $\tau_{F} = 1$, $\tau_{C} = 0.3$, and $\bm{\gamma} = (1.0, 0.5)$ or
$\bm{\gamma} = (1.0, 0.5, 1.0, 1.0)$ depending on the number of covariates used in each simulation.
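The data-generating process above can be coded compactly. A Python/NumPy sketch of the two-covariate design (variable names, seed handling, and the column convention $t = 1,\dots,T$ mapping to columns $0,\dots,T-1$ are our own choices):

```python
import numpy as np

def simulate_panel(n, T, a=1.0, varphi=0.3, beta=(-0.5, -0.5),
                   tau_F=1.0, tau_C=0.3, gamma=(1.0, 0.5), seed=0):
    """Simulate the two-covariate DGP from the text (sketch).
    D[:, T-1] is the final treatment D_iT."""
    rng = np.random.default_rng(seed)
    beta, gamma = np.asarray(beta), np.asarray(gamma)
    p = len(beta)
    alpha = rng.uniform(-a, a, size=n)            # unit heterogeneity
    Sigma = np.full((p, p), 0.2) + 0.8 * np.eye(p)
    X = rng.multivariate_normal(-0.5 * np.ones(p), Sigma, size=(n, T))
    D = np.zeros((n, T))
    d_prev = np.zeros(n)
    for t in range(T):
        lin = alpha + varphi * d_prev + X[:, t, :] @ beta
        D[:, t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
        d_prev = D[:, t]
    Xbar = X.mean(axis=1)
    cum = D[:, T - 4:T - 1].sum(axis=1)           # sum_{t=T-3}^{T-1} D_it
    Y = (alpha + tau_F * D[:, T - 1] + tau_C * cum
         + Xbar @ gamma + rng.normal(size=n))
    return D, X, Y
```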
\begin{figure}[H]
\centerline{
\includegraphics[width=\textwidth]{figures/msm_simulation_cm_cov2.pdf}
}
\caption{
Bias, standard error (Std. Error) and coverage probability of 90\% confidence intervals (Coverage) for the estimation of the final period effect $\tau_{F}$ and the cumulative effect $\tau_{C}$ under the ``low'' heterogeneity ($a = 1$) -- first two columns --
and the ``high'' heterogeneity ($a = 2$) -- last two columns -- scenario.
Solid lines in \textcolor{simblue}{blue} show the proposed estimator (\texttt{IPTW-FE}),
solid lines in \textcolor{simgrey}{grey} show the estimator based on the true propensity score (\texttt{IPTW-True}), and
dashed lines in \textcolor{simgreen}{green} show the estimator based on the estimated propensity score without fixed effects (\texttt{IPTW}).
Shapes correspond to the $n$ to $T$ ratio $\rho$: squares represent $\rho = 5$ (the largest number of time periods), circles represent $\rho = 10$, and triangles represent $\rho = 50$ (the smallest number of time periods).}
\label{fig:msm-cov2}
\end{figure}
\subsection{Results}
We compare the performance of the proposed method in terms of estimating two causal quantities: the final period effect $\tau_{F}$
and the cumulative effect $\tau_{C}$.
We estimate the two quantities jointly in the framework of weighted least squares,
\begin{equation*}
(\widehat{\tau}_{F}, \widehat{\tau}_{C}) = \mathop{\rm arg~min}\limits_{\tau_{F}, \tau_{C}}
\sum^{n}_{i=1} \widehat{W}_{i}
\bigg\{
Y_{i} - \alpha - \tau_{F}D_{iT} - \tau_{C}\sum^{T-1}_{t = T-3}D_{it}
\bigg\}^{2}
\end{equation*}
where $\widehat{W}_{i}$ is constructed as described in the previous section.
The variance of $\widehat{\tau}_{F}$ and $\widehat{\tau}_{C}$ is estimated using the standard sandwich formula with the HC2 option.
In addition to the fixed effect approach,
we consider two other strategies to obtain the weights $\widehat{W}_{i}$
as benchmarks to the proposed method.
First, we use the true propensity score to construct the weights.
Second, we use the propensity score estimated without fixed effects.
We expect the estimator with known propensity scores
to be least biased and the estimator without fixed effects to be most biased.
Figure~\ref{fig:msm-cov2} shows the results
for the two-covariate case.
Bias (first row), standard errors (second row)
and coverages (third row) are computed based on 500 Monte Carlo simulations.
Additional simulation results are presented in Appendix~\ref{appx:additional-simulation}.
The first two columns correspond to the ``low'' heterogeneity case
where the support of the fixed effect is $[-1, 1]$,
whereas the last two columns correspond to the ``high'' heterogeneity
scenario where the support of $\alpha_{i}$ is set to $[-2, 2]$.
Solid lines in blue show the proposed estimator (\texttt{IPTW-FE}),
solid lines in grey show the estimator based on the true propensity score (\texttt{IPTW-True}), and
dashed lines in green show the estimator based on the estimated propensity score without fixed effects (\texttt{IPTW}).
Shapes correspond to the $n$ to $T$ ratio $\rho$ such that
squares represent $\rho = 5$ (the largest number of time periods),
circles represent $\rho = 10$, and
triangles represent $\rho = 50$ (the smallest number of time periods).
We can see that under the low heterogeneity setting,
where the unobserved individual heterogeneity explains roughly 25\% of the variance of the treatment assignment,
the bias of the proposed estimator (\texttt{IPTW-FE}) is indistinguishable from that of the estimator based on the true propensity score (\texttt{IPTW-True})
and the confidence interval estimates maintain the nominal coverage across different values of $n$ and $\rho$.
Under this scenario, the proposed method performs well even in the smallest case of $n = 200$ and $T = 4$.
When the variance of the individual heterogeneity is high ($a = 2$),
such that it explains roughly 50\% of the variance of the treatment assignments, the proposed estimator shows relatively larger bias compared with \texttt{IPTW-True}, while the bias of the estimator without fixed effects (\texttt{IPTW}) is substantially larger.
Under this setting, the coverage results are mainly driven by the bias; thus the figure shows that as $n$ increases, the coverage also improves thanks to the reduction in bias.
We can also see that, in general, the estimator without fixed effects
shows smaller standard errors than \texttt{IPTW-FE}.
This implies that the proposed method (\texttt{IPTW-FE}) trades off some efficiency for lower bias.
Finally, we highlight that a small Monte Carlo bias is observed even for \texttt{IPTW-True} under this scenario. This is possibly due to the high variability of the weights, which are products of inverse probabilities over four time periods, even with stabilization.
Overall, these results point to two key tensions in controlling for time-constant unmeasured heterogeneity through fixed effects in the propensity score models. First, high degrees of unmeasured heterogeneity in the propensity scores may produce near violations of the positivity assumption, leading to the kind of instability we see when $a = 2$. Second, larger magnitudes of heterogeneity may require more time periods to achieve good finite sample performance compared to when the heterogeneity is relatively small.
\section{Empirical Application: The Effectiveness of Negative Advertising}\label{s:application}
We now apply these techniques to estimate the effectiveness of negative advertising in U.S.\ Senate and Gubernatorial elections. We build on \citet{Blackwell13} who investigated the same question using an MSM approach without fixed effects for elections over the period from 2000 to 2008. We expand the data to include additional Senate races from 2010 until 2016, and focus on the effect of Democratic candidate negativity on three outcomes: the Democratic percentage of the two-party vote, percent of the voting-eligible population casting Democratic votes, and percentage of the voting-eligible population casting Republican votes. The latter two outcomes use the voting-eligible population as a denominator, allowing us to explore the possibility that Democratic negativity mobilizes each party differently. We organize the data into a race-week panel and focus on the time period between the primary election for the race and the general election, so that we have $N=201$ with the length of the campaign ranging from 8 to 40 weeks with a median of 20 weeks. Our marginal structural model is:
\[
\mathbb{E}[Y_i(\underline{d})] = \gamma_0 + \gamma_1\left(\sum_{k=0}^4 d_{T-k}\right),
\]
where the time index here is weeks of the campaign. The main quantity of interest, $\gamma_1$, can be interpreted as the effect of an additional week of negative advertising in the last five weeks on the outcomes. The data on advertising comes from the Wesleyan Media Project, which provides data on the number, timing, and tone of political television ads for most recent elections. The outcomes come from the MIT Election Data Science Project, and we collected polling data from several polling aggregators.
We apply several different estimation approaches to this MSM: the proposed IPTW-FE approach, a standard IPTW approach without fixed effects, and a naive approach that ignores time-varying covariates altogether. For each of these methods, we consider ``vanilla'' MSMs that only include treatment history and augmented MSMs that additionally control for baseline covariates. For the weighting model, we included various time-varying covariates: average Democratic share of the two-party preferences in polls in the previous week (and the square of this term), the average percentage reporting undecided or voting for third-party candidates in the previous week, measures of Republican negativity over the last six weeks, and the cumulative number of ads shown by the Democrat and Republican (and their squared terms). For the fixed effects approach, we additionally include a race fixed effect term in the specification. For the IPTW approach without fixed effects, we include several baseline covariates including the length of the general election campaign, baseline polling, whether the Democrat is the incumbent, and an indicator for the type of elected position.
\begin{figure}[t!]
\centerline{\includegraphics[scale=0.75]{figures/negativity-iptwfe-results.pdf}}
\caption{Estimated effects of the number of weeks of negativity in the last five weeks of the campaign with different methods.}\label{fig:negativity-coefplot}
\end{figure}
Figure~\ref{fig:negativity-coefplot} shows the results of these methods for each of the outcomes. Substantively, the methods generally agree that there is a positive effect of Democratic negativity on Democratic electoral performance, though there is disagreement between the methods on why. The results of the standard IPTW approach indicates that this electoral edge comes from a positive effect on Democratic turnout, with relatively little impact on Republican turnout. The IPTW-FE approach, on the other hand, indicates that negativity actually demobilized Republican turnout without a large effect on Democratic turnout, raising the overall Democratic share of the vote. One other key difference between the IPTW and IPTW-FE approaches is that the fixed effect approach is much less sensitive to the inclusion of baseline covariates, whereas the IPTW sees relatively large changes in the point estimates across those specifications.
\section{Conclusion}\label{s:conclusion}
In this paper, we have shown how it is possible to control for time-constant unmeasured confounding in marginal structural models by using a fixed effects approach to estimate the propensity score of the time-varying treatment. We derived the large-sample properties of this estimator under an asymptotic setup where the number of time periods and the number of units grow together. Simulations showed that the proposed method outperforms a naive approach that omits fixed effects and performs well overall, especially when the magnitude of the heterogeneity is moderate. We applied this approach to estimating the time-varying effect of negative advertising on election outcomes in United States statewide elections and found that the fixed effect approach led to different conclusions about how negativity affects vote shares. An obvious place for future research would be to apply these methods to data where we have repeated measurements of the outcomes as well as the treatment. In those situations, it may be possible to develop doubly-robust estimators under fixed effects assumptions.
\bibliographystyle{chicago}
\section{Introduction} \label{se:introduction}
{\em Simultaneous embedding} is a flourishing area of research studying
topological and geometric properties of planar drawings of multiple graphs on
the same point set. The seminal paper in the area is the one of Bra{\ss} {\em et
al.}~\cite{bcd-spge-07}, in which two types of simultaneous embedding are
defined, namely {\em with mapping} and {\em with no mapping}. In the former
variant, a bijective mapping between the vertex sets of any two graphs $G_1$ and
$G_2$ to be drawn is part of the problem's input, and the goal is to construct a
planar drawing of $G_1$ and a planar drawing of $G_2$ so that corresponding
vertices are mapped to the same point. In the latter variant, the drawing
algorithm is free to map any vertex of $G_1$ to any vertex of $G_2$ (still the
$n$ vertices of $G_1$ and the $n$ vertices of $G_2$ have to be placed on the
same $n$ points). Simultaneous embeddings have been studied with respect to two
different drawing standards: In {\em geometric simultaneous embedding}, edges are required to be straight-line segments.
In {\em simultaneous embedding with fixed edges} (also known as {\sc Sefe}), edges can be arbitrary Jordan curves, but each edge that belongs to
two graphs $G_1$ and $G_2$ has to be represented by the same Jordan curve in the
drawing of $G_1$ and in the drawing of $G_2$.
Many papers deal with the problem of constructing geometric simultaneous
embeddings and simultaneous embeddings with fixed edges of pairs of planar
graphs in the variant {\em with mapping}. Typical considered problems include:
(i) determining notable classes of planar graphs that always or not always admit
a simultaneous embedding; (ii) designing algorithms for constructing
simultaneous embeddings within small area and with few bends on the edges; (iii)
determining the time complexity of testing the existence of a simultaneous
embedding for a given set of graphs. We refer the reader to the recent survey by
Bl\"asius, Kobourov, and Rutter~\cite{bkr-sepg-13}.
In contrast to the large number of papers dealing with simultaneous embedding
{\em with mapping}, little progress has been made on the {\em no mapping}
version of the problem.
Bra{\ss} {\em et
al.}~\cite{bcd-spge-07} showed that any planar graph admits a geometric
simultaneous embedding with no mapping with any number of outerplanar graphs.
They left open the following attractive question: Do every two $n$-vertex planar
graphs admit a geometric simultaneous embedding with no mapping?
In this paper we initiate the study of simultaneous
embeddings with fixed edges and no mapping, called
{\sc SefeNoMap~} for brevity. In this setting, the natural counterpart of the Bra{\ss}
{\em et al.}~\cite{bcd-spge-07} question reads as follows: Do every two
$n$-vertex planar graphs admit a {\sc SefeNoMap~}?
Since answering this question seems to be an elusive goal, we tackle
the following generalization of the problem: What is the largest $k\leq n$ such
that every $n$-vertex planar graph and every $k$-vertex planar graph admit a
{\sc SefeNoMap~}? That is: What is the largest $k\leq n$ such that every
$n$-vertex planar graph $G_1$ and every $k$-vertex planar graph $G_2$ admit two
planar drawings $\Gamma_1$ and $\Gamma_2$ with their vertex sets mapped to point
sets $P_1$ and $P_2$, respectively, so that $P_2 \subseteq P_1$ and so that if
edges $e_1$ of $G_1$ and $e_2$ of $G_2$ have their end-vertices mapped to the
same two points $p_a$ and $p_b$, then $e_1$ and $e_2$ are represented by the
same Jordan curve in $\Gamma_1$ and in $\Gamma_2$? We prove that $k\geq n/2$:
\begin{theorem} \label{th:sefe-main}
Every $n$-vertex planar graph and every $(n/2)$-vertex planar graph have a
{\sc SefeNoMap~}.
\end{theorem}
Observe that the previous theorem would be easily proved if $n/2$ were replaced
with $n/4$: First, consider an $(n/4)$-vertex independent set $I$ of any
$n$-vertex planar graph $G_1$ (which always exists, as a consequence of the four
color theorem~\cite{ah-epfci-77,ahk-epfcii-77}). Then, construct any planar
drawing $\Gamma_1$ of $G_1$, and let $P(I)$ be the point set on which the
vertices of $I$ are mapped in $\Gamma_1$. Finally, construct a planar drawing
$\Gamma_2$ of any $(n/4)$-vertex planar graph $G_2$ on point set $P(I)$ (e.g.
using Kaufmann and Wiese's technique~\cite{kw-evpfbspg-02}). Since $I$ is an
independent set, any bijective mapping between the vertex set of $G_2$ and $I$
ensures that $G_1$ and $G_2$ share no edges. Thus, $\Gamma_1$ and $\Gamma_2$ are
a {\sc SefeNoMap~} of $G_1$ and $G_2$.
In order to get the $n/2$ bound, we study the problem of finding a large
induced outerplane graph in a plane graph.
A {\em plane graph} is a
planar graph together with a {\em plane embedding}, that is, an equivalence class
of planar drawings, where two planar drawings $\Gamma_1$ and $\Gamma_2$ are
equivalent if: (1) each vertex has the same {\em rotation scheme} in $\Gamma_1$
and in $\Gamma_2$, i.e., the same clockwise order of the edges incident to it;
(2) each face has the same {\em facial cycles} in $\Gamma_1$ and in $\Gamma_2$,
i.e., it is delimited by the same set of cycles; and (3) $\Gamma_1$ and
$\Gamma_2$ have the same {\em outer face}. An {\em outerplane graph} is a graph
together with an {\em outerplane embedding}, that is a plane embedding where all
the vertices are incident to the outer face. An {\em outerplanar graph} is a
graph that admits an outerplane embedding; a plane embedding of an outerplanar
graph is not necessarily outerplane. Consider a plane graph $G$ and a subset
$V'$ of its vertex set. The {\em induced plane graph} $G[V']$ is the subgraph of
$G$ induced by $V'$ together with the plane embedding {\em inherited} from $G$,
i.e., the embedding obtained from the plane embedding of $G$ by removing all the
vertices and edges not in $G[V']$.
We show the following result:
\begin{theorem} \label{th:outerplane-main}
Every $n$-vertex plane graph $G(V,E)$ has a vertex set $V'\subseteq V$ with
$|V'|\geq n/2$ such that $G[V']$ is an outerplane graph.
\end{theorem}
Theorem~\ref{th:outerplane-main} and the results of Gritzmann {\em et al.}~\cite{gmpp-eptvs-91} yield a proof of Theorem~\ref{th:sefe-main}, as follows:
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c c}
\mbox{\includegraphics[scale=0.47]{Figures/PlanarGraphG1andG2.eps}} \hspace{2mm} &
\mbox{\includegraphics[scale=0.47]{Figures/Straight-lineDrawingG2andG1induced.eps}} \hspace{2mm} &
\mbox{\includegraphics[scale=0.47]{Figures/SEFENoMapG2andG1.eps}}\\
(a) \hspace{2mm} & (b) \hspace{2mm} & (c)
\end{tabular}
\caption{(a) A $10$-vertex planar graph $G_1$ (solid lines) and a $5$-vertex planar graph $G_2$ (dashed lines). A $5$-vertex induced outerplane graph $G_1[V']$ in $G_1$ is colored black. Vertices and edges of $G_1$ not in $G_1[V']$ are colored gray. (b) A straight-line planar drawing $\Gamma(G_2)$ of $G_2$ with no three collinear vertices, together with a straight-line planar drawing of $G_1[V']$ on the point set $P_2$ defined by the vertices of $G_2$ in $\Gamma(G_2)$. (c) A {\sc SefeNoMap~} of $G_1$ and $G_2$.}
\label{fig:illustration}
\end{center}
\end{figure}
\begin{proofx}
Consider any $n$-vertex plane graph $G_1$ and any $(n/2)$-vertex plane graph $G_2$ (see Fig.~\ref{fig:illustration}(a)). Let $\Gamma(G_2)$ be any straight-line planar drawing of $G_2$ in which no three vertices are collinear. Denote by $P_2$ the set of $n/2$ points to which the vertices of $G_2$ are mapped in $\Gamma(G_2)$. Consider any vertex subset $V'\subseteq V(G_1)$ such that $G_1[V']$ is an outerplane graph. Such a set exists by Theorem~\ref{th:outerplane-main}. Construct a straight-line planar drawing $\Gamma(G_1[V'])$ of $G_1[V']$ in which its vertices are mapped to $P_2$ so that the resulting drawing has the same (outerplane) embedding as $G_1[V']$. Such a drawing exists by results of Gritzmann {\em et al.}~\cite{gmpp-eptvs-91}, and it can be found efficiently by results of Bose~\cite{b-eogps-02} (see Fig.~\ref{fig:illustration}(b)). Construct any planar drawing $\Gamma(G_1)$ of $G_1$ in which the drawing of $G_1[V']$ is $\Gamma(G_1[V'])$. Such a drawing exists, given that $\Gamma(G_1[V'])$ is a planar drawing of a plane subgraph $G_1[V']$ of $G_1$ preserving the embedding of $G_1[V']$ in $G_1$ (see Fig.~\ref{fig:illustration}(c)). Both $\Gamma(G_1)$ and $\Gamma(G_2)$ are planar, by construction. Also, the only edges that are possibly shared by $G_1$ and $G_2$ are those between two vertices that are mapped to $P_2$. However, such edges are drawn as straight-line segments both in $\Gamma(G_1)$ and in $\Gamma(G_2)$. Thus, $\Gamma(G_1)$ and $\Gamma(G_2)$ are a {\sc SefeNoMap~} of $G_1$ and $G_2$.
\end{proofx}
By the standard observation that the vertices in the odd (or even) levels of a breadth-first search tree of a planar graph induce an \emph{outerplanar} graph,
we know that $G$ has an induced outerplanar graph with at least $n/2$ vertices. However, since its embedding in $G$ may not be outerplane, this seems insufficient to prove the existence of a {\sc SefeNoMap~} of every $n$-vertex and every $(n/2)$-vertex planar graph.
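The BFS-level observation can be made concrete: compute BFS levels, split the vertices by level parity, and keep the larger class, which then has at least $n/2$ vertices inducing an outerplanar graph. A short Python sketch (adjacency-dict input for a connected graph; names are ours):

```python
from collections import deque

def larger_bfs_parity_class(adj, root):
    """Return the larger of the two BFS-level parity classes of a
    connected graph given as an adjacency dict {v: [neighbors]}.
    For a planar graph, each class induces an outerplanar subgraph
    (the standard observation cited in the text), so the returned set
    has >= n/2 vertices inducing an outerplanar graph -- though, as
    noted above, its inherited embedding need not be outerplane."""
    level = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    even = {v for v, l in level.items() if l % 2 == 0}
    odd = set(level) - even
    return even if len(even) >= len(odd) else odd
```

For example, on $K_4$ (which is planar) a BFS from any vertex puts the other three vertices at level 1, and the returned class induces a triangle, which is outerplanar.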
Theorem~\ref{th:outerplane-main} might be of independent interest, as it is
related to (in fact it is a weaker version of) one of the most famous and
long-standing graph theory conjectures:
\begin{conjecture} \label{conj:induced-forests} {\em (Albertson and Berman
1979~\cite{ab-cpg-79})}
Every $n$-vertex planar graph $G(V,E)$ has a vertex set $V'\subseteq V$ with
$|V'|\geq n/2$ such that $G[V']$ is a forest.
\end{conjecture}
Conjecture~\ref{conj:induced-forests} would prove the existence of an
$(n/4)$-vertex independent set in a planar graph without using the four color
theorem~\cite{ah-epfci-77,ahk-epfcii-77}. The best known partial result related
to Conjecture~\ref{conj:induced-forests} is that every planar graph has a vertex
subset with $2/5$ of its vertices inducing a forest, which is a consequence of
the {\em acyclic $5$-colorability} of planar graphs~\cite{Borodin79}. Variants
of the conjecture have also been studied such that the planar graph in which the
induced forest has to be found is bipartite~\cite{aw-mifpg-87}, or is
outerplanar~\cite{h-iftog-90}, or such that each connected component of the
induced forest is required to be a path~\cite{p-milfog-04,p-lvapg-90}.
The topological structure of an outerplane graph is arguably much closer to that of a forest than the one of a non-outerplane graph. Thus the importance of Conjecture~\ref{conj:induced-forests} may justify the study of induced outerplane graphs in plane graphs in its own right.
To complement the results of the paper, we also show the following:
\begin{theorem} \label{th:plane3trees-main}
Every $n$-vertex planar graph and every $n$-vertex planar partial $3$-tree have a {\sc SefeNoMap~}.
\end{theorem}
\section{Proof of Theorem~\ref{th:outerplane-main}} \label{se:outerplane}
In this section we prove Theorem~\ref{th:outerplane-main}. We assume that the input graph $G$ is a {\em maximal} plane graph, that is, a plane graph such that no edge can be added to it while maintaining planarity. In fact, if $G$ is not maximal, then dummy edges can be added to it in order to make it a maximal plane graph $G'$. Then, the vertex set $V'$ of an induced outerplane graph $G'[V']$ in $G'$ induces an outerplane graph in $G$, as well.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c c}
\mbox{\includegraphics[scale=0.25]{Figures/OuterplanarLevels.eps}} \hspace{7mm} &
\mbox{\includegraphics[scale=0.32]{Figures/OuterplanarLevels-3.eps}} \hspace{7mm} &
\mbox{\includegraphics[scale=0.32]{Figures/OuterplanarLevels-5.eps}}\\
(a) \hspace{7mm}& (b) \hspace{7mm} & (c)
\end{tabular}
\caption{(a) A maximal plane graph $G$ with outerplanarity $4$. (b) Graphs $G[V_1]$ (on the top) and $G[V_2]$ (on the bottom). (c) Graphs $G[V_3]$ (on the top) and $G[V_4]$ (on the bottom).}
\label{fig:outerplanelevels}
\end{center}
\end{figure}
Let $G^*_1=G$ and, for any $i\geq 1$, let $G^*_{i+1}$ be the plane graph
obtained by removing from $G^*_i$ the set $V_i$ of vertices incident to the
outer face of $G^*_i$ and their incident edges. Vertex set $V_i$ is the {\em
$i$-th outerplane level} of $G$. Denote by $k$ the maximum index such that $V_k$
is non-empty; then $k$ is the {\em outerplanarity} of $G$. For any $1\leq i\leq
k$, graph $G[V_i]$ is a (not necessarily connected) outerplane graph and graph
$G^*_i$ is a (not necessarily connected) {\em internally-triangulated} plane
graph, that is, a plane graph whose internal faces are all triangles. See
Fig.~\ref{fig:outerplanelevels}. For $1\leq i \leq k$, denote by
$H^*_{i,1},\dots,H^*_{i,h_i}$ the connected components of $G^*_i$ and, for
$1\leq j \leq h_i$, denote by $H_{i,j}$ the outerplane graph induced by the
vertices incident to the outer face of $H^*_{i,j}$. Since $G$ is maximal, for
any $1\leq i \leq k$ and for any internal face $f$ of $G[V_i]$, at most one
connected component of $G^*_{i+1}$ lies inside $f$.
A {\em $2$-coloring} $\psi=(W^*,B^*)$ of a graph $H^*$ is a partition of the vertex
set $V(H^*)$ into two sets $W^*$ and $B^*$. We say that the vertices in $W^*$
are \emph{white} and the ones in $B^*$ are \emph{black}. Given a $2$-coloring
$\psi=(W^*,B^*)$ of a plane graph $H^*$, the subgraph $H^*[W^*]$ of $H^*$ is
{\em strongly outerplane} if it is outerplane and it contains no black vertex
inside any of its internal faces. We define the \emph{surplus} of $\psi$ as
$s(H^*,\psi)=|W^*|-|B^*|$.
A {\em cutvertex} in a connected graph $H^*$ is a vertex whose removal
disconnects $H^*$. A \emph{maximal $2$-connected component} of $H^*$, also
called a \emph{block} of $H^*$, is an induced subgraph $H^*[V']$ of $H^*$ such
that $H^*[V']$ is $2$-connected and there exists no $V''\subseteq
V(H^*)$ where $V'\subset V''$ and $H^*[V'']$ is $2$-connected. The {\em
block-cutvertex tree} ${\cal BC}(H^*)$ of $H^*$ is a tree that represents the
arrangement of the blocks of $H^*$ (see Figs.~\ref{fig:bctree}(a)
and~\ref{fig:bctree}(b)). Namely, ${\cal BC}(H^*)$ contains a \emph{${\cal
B}$-node} for each block of $H^*$ and a \emph{${\cal C}$-node} for each
cutvertex of $H^*$; further, there is an edge between a ${\cal B}$-node $b$ and
a ${\cal C}$-node $c$ if $c$ is a vertex of $b$. Given a $2$-coloring
$\psi=(W^*,B^*)$ of $H^*$, the {\em contracted block-cutvertex tree} ${\cal
CBC}(H^*,\psi)$ of $H^*$ is the tree obtained from ${\cal BC}(H^*)$ by
identifying all the ${\cal B}$-nodes that are adjacent to the same black
cutvertex $c$, and by removing $c$ and its incident edges (see
Fig.~\ref{fig:bctree}(c)). Each node of ${\cal CBC}(H^*,\psi)$ is either a
${\cal C}$-node $c$ or a ${\cal BU}$-node $b$. In the former case, $c$
corresponds to a white ${\cal C}$-node in ${\cal BC}(H^*)$. In the latter case,
$b$ corresponds to a maximal connected subtree ${\cal BC}(H^*(b))$ of ${\cal
BC}(H^*)$ only containing ${\cal B}$-nodes and black ${\cal C}$-nodes. The {\em
subgraph $H^*(b)$ of $H^*$ associated with a ${\cal BU}$-node $b$} is the union
of the blocks of $H^*$ corresponding to ${\cal B}$-nodes in ${\cal BC}(H^*(b))$.
Finally, we denote by $H(b)$ the outerplane graph induced by the vertices
incident to the outer face of $H^*(b)$. We have the following:
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c c}
\mbox{\includegraphics[scale=0.5]{Figures/BC-tree-1.eps}} \hspace{3mm} &
\mbox{\includegraphics[scale=0.55]{Figures/BC-tree-2.eps}} \hspace{3mm} &
\mbox{\includegraphics[scale=0.55]{Figures/BC-tree-3.eps}}\\
(a) \hspace{3mm}& (b) \hspace{3mm} & (c)
\end{tabular}
\caption{(a) A connected internally-triangulated plane graph $H^*$ with a $2$-coloring $\psi$, (b) the block-cutvertex tree ${\cal BC}(H^*)$, and (c) the contracted block-cutvertex tree ${\cal CBC}(H^*,\psi)$.}
\label{fig:bctree}
\end{center}
\end{figure}
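The contraction operation defining ${\cal CBC}(H^*,\psi)$ can be phrased as a small union-find computation. The following sketch is illustrative only and not part of the proof; it assumes that the blocks and the cutvertices of $H^*$ have already been computed.

```python
def contracted_bc_tree(blocks, cutvertices, black):
    # blocks      : list of vertex sets, one per block of the graph
    # cutvertices : set of cutvertices of the graph
    # black       : set of black vertices of the 2-coloring
    # BU-nodes are obtained by merging all B-nodes that share a black
    # cutvertex; black cutvertices then disappear from the tree.
    parent = list(range(len(blocks)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for c in cutvertices & black:
        touching = [i for i, blk in enumerate(blocks) if c in blk]
        for i in touching[1:]:
            parent[find(i)] = find(touching[0])

    bu_nodes = {}
    for i, blk in enumerate(blocks):
        bu_nodes.setdefault(find(i), set()).update(blk)

    # white cutvertices survive as C-nodes, linking the BU-nodes
    edges = {(find(i), c)
             for c in cutvertices - black
             for i, blk in enumerate(blocks) if c in blk}
    return bu_nodes, edges
```

For instance, for three blocks $\{1,2,3\}$, $\{3,4,5\}$, $\{5,6,7\}$ with cutvertices $3$ and $5$, coloring $3$ black merges the first two blocks into a single ${\cal BU}$-node, while the white cutvertex $5$ remains a ${\cal C}$-node between the two resulting ${\cal BU}$-nodes.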
\begin{lemma} \label{th:inductive}
For any $1\leq i\leq k$ and any $1\leq j\leq h_i$, there exists a $2$-coloring $\psi=(W^*_{i,j},B^*_{i,j})$ of $H^*_{i,j}$ such that:
\begin{itemize}
\item[(1)] the subgraph $H^*_{i,j}[W^*_{i,j}]$ of $H^*_{i,j}$ induced by $W^*_{i,j}$ is strongly outerplane; and
\item[(2)] for any ${\cal BU}$-node $b$ in ${\cal CBC}(H^*_{i,j},\psi)$, one of the following holds:
\begin{itemize}
\item[(a)] $s(H^*_{i,j}(b),\psi)\geq |W^*_{i,j} \cap V(H_{i,j}(b))|+1$;
\item[(b)] $s(H^*_{i,j}(b),\psi)= |W^*_{i,j} \cap V(H_{i,j}(b))|$ and there exists an edge with white end-vertices incident to the outer face of $H^*_{i,j}(b)$; or
\item[(c)] $s(H^*_{i,j}(b),\psi)=1$ and $H^*_{i,j}(b)$ is a single vertex.
\end{itemize}
\end{itemize}
\end{lemma}
Lemma~\ref{th:inductive} implies Theorem~\ref{th:outerplane-main} as follows: Since $G$ is a maximal plane graph, $G^*_1$ has one $2$-connected component, hence $H^*_{1,1}(b)=H^*_{1,1}=G^*_1=G$. By Lemma~\ref{th:inductive}, there exists a $2$-coloring $\psi=(W,B)$ of $G$ such that $G[W]$ is an outerplane graph and $|W|-|B|\geq |W \cap V_1| \geq 0$, hence $|W| \geq n/2$.
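Spelled out, the final count combines the surplus bound with the fact that $W$ and $B$ partition the vertex set of $G$:
\[
|W|-|B| \;=\; s(G,\psi) \;\geq\; |W\cap V_1| \;\geq\; 0
\qquad\text{and}\qquad
|W|+|B| \;=\; n,
\]
whence $2|W|\geq n$, that is, $|W|\geq n/2$.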
We emphasize that Lemma~\ref{th:inductive} shows the existence of a large induced subgraph $H^*_{i,j}[W^*_{i,j}]$ of $H^*_{i,j}$ satisfying an even stronger property than just being outerplane; namely, the $2$-coloring $\psi=(W^*_{i,j},B^*_{i,j})$ is such that $H^*_{i,j}[W^*_{i,j}]$ is outerplane and contains no vertex belonging to $B^*_{i,j}$ in any of its internal faces.
In order to prove Lemma~\ref{th:inductive}, we start by showing some sufficient conditions for a $2$-coloring to induce a strongly outerplane graph in $H^*_{i,j}$. We first state a lemma arguing that a $2$-coloring $\psi$ of $H^*_{i,j}$ satisfies Condition (1) of Lemma~\ref{th:inductive} if it satisfies the same condition ``inside each internal face'' of $H_{i,j}$. For any face $f$ of $H_{i,j}$, we denote by $C_f$ the cycle delimiting $f$; also, we denote by $H^*_{i,j}[W^*_{i,j}(f)]$ the subgraph of $H^*_{i,j}$ induced by the white vertices inside or belonging to $C_f$.
\begin{lemma} \label{le:single-faces}
Let $\psi=(W^*_{i,j},B^*_{i,j})$ be a $2$-coloring of $H^*_{i,j}$. Assume that, for each internal face $f$ of $H_{i,j}$, graph $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane. Then, $H^*_{i,j}[W^*_{i,j}]$ is strongly outerplane.
\end{lemma}
\begin{proof}
Suppose, for a contradiction, that $H^*_{i,j}[W^*_{i,j}]$ is not strongly outerplane. Then, it contains a simple cycle $C$ that contains in its interior some vertex $x$ of $H^*_{i,j}$. Assume, w.l.o.g., that $C$ is minimal, that is, there exists no cycle $C'$ that contains $x$ in its interior and such that $V(C')\subset V(C)$. By hypothesis, $C$ is not a subgraph of $H^*_{i,j}[W^*_{i,j}(f)]$, for any internal face $f$ of $H_{i,j}$. Consider a maximal path $P$ in $C$ all of whose edges belong to $H^*_{i,j}[W^*_{i,j}(f)]$, for some internal face $f$ of $H_{i,j}$. Let $u$ and $v$ be the end-vertices of $P$; also, let $w$ be the vertex adjacent to $v$ in $C$ and not belonging to $P$. See Fig.~\ref{fig:single-face}. Vertex $v$ belongs to $C_f$, as otherwise edge $(v,w)$ would cross $C_f$, given that $w$ neither lies inside $C_f$ nor belongs to it, by the maximality of $P$. Let $v'$ and $v''$ be the vertices adjacent to $v$ on $C_f$. The path $C - P$ obtained from $C$ by removing the edges and the internal vertices of $P$ contains $v'$ or $v''$; indeed, if this were not the case, $C - P$ would pass twice through $v$, contradicting the fact that $C$ is a simple cycle. However, if $C$ contains one of $v'$ and $v''$, say $v'$, then $v'$ is a white vertex, thus edge $(v,v')$ splits $C$ into two cycles $C'$ and $C''$, with $V(C')\subset V(C)$ and $V(C'')\subset V(C)$, one of which contains $x$ in its interior, thus contradicting the minimality of $C$.
\begin{figure}[tb]
\begin{center}
\mbox{\includegraphics[scale=0.55]{Figures/SingleFace.eps}}
\caption{Illustration for the proof of Lemma~\ref{le:single-faces}. The thick solid line represents $P$ together with edge $(v,w)$. The thin solid lines represent the edges of $C_f$ not in $P$. The dashed lines represent some edges of $H_{i,j}$ not in $C_f$. The color of the gray vertices is not important for the proof.}
\label{fig:single-face}
\end{center}
\end{figure}
\end{proof}
An internal face $f$ of $H_{i,j}$ is {\em empty} if it contains no vertex of $G^*_{i+1}$ in its interior. Also, for a $2$-coloring $\psi$ of $H^*_{i,j}$, an internal face $f$ of $H_{i,j}$ is {\em trivial} if it contains in its interior a connected component $H^*_{i+1,k}$ of $G^*_{i+1}$ that is a single white vertex or such that all the vertices incident to the outer face of $H^*_{i+1,k}$ are black. We have the following.
\begin{lemma} \label{le:trivial-faces}
Let $\psi=(W^*_{i,j},B^*_{i,j})$ be a $2$-coloring of $H^*_{i,j}$ and let $f$ be a trivial face of $H_{i,j}$. Let $H^*_{i+1,k}$ be the connected component of $G^*_{i+1}$ in $f$'s interior. If $H^*_{i+1,k}[W^*_{i,j}]$ is strongly outerplane and if $C_f$ contains at least one black vertex, then $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane.
\end{lemma}
\begin{proof}
Suppose, for a contradiction, that $H^*_{i,j}[W^*_{i,j}(f)]$ is not strongly outerplane. Then, it contains a simple cycle $C$ that contains in its interior some vertex $x$ of $H^*_{i,j}$. Cycle $C$ contains at least one vertex of $C_f$, by the assumption that $H^*_{i+1,k}[W^*_{i,j}]$ is strongly outerplane. Also, $C$ does not coincide with $C_f$, since $C_f$ contains at least one black vertex. Then, $C$ contains vertices of $C_f$ and vertices internal to $C_f$. This provides a contradiction in the case in which $H^*_{i+1,k}$ is a single white vertex, as no other vertex is internal to any cycle in $H^*_{i,j}[W^*_{i,j}(f)]$, and it provides a contradiction in the case in which all the vertices incident to the outer face of $H^*_{i+1,k}$ are black, as no edge connects a white vertex of $C_f$ with a white vertex internal to $C_f$.
\end{proof}
We now prove Lemma~\ref{th:inductive} by induction on the outerplanarity of $H^*_{i,j}$.
In the base case, the outerplanarity of $H^*_{i,j}$ is $1$; then, we color white all the vertices of $H^*_{i,j}$. Since the outerplanarity of $H^*_{i,j}$ is $1$, graph $H^*_{i,j}[W^*_{i,j}]=H^*_{i,j}$ is a strongly outerplane graph, as there is no black vertex at all, thus satisfying Condition (1) of Lemma~\ref{th:inductive}. Also, consider any ${\cal BU}$-node $b$ in the contracted block-cutvertex tree ${\cal CBC}(H^*_{i,j},\psi)$ (which coincides with the block-cutvertex tree ${\cal BC}(H^*_{i,j})$, given that all the vertices of $H^*_{i,j}$ are white). All the vertices of $H^*_{i,j}(b)$ are white, hence either Condition (2b) or Condition (2c) of Lemma~\ref{th:inductive} is satisfied, depending on whether $H^*_{i,j}(b)$ contains at least one edge or is a single vertex, respectively.
In the inductive case, the outerplanarity of $H^*_{i,j}$ is greater than $1$.
First, we inductively construct a $2$-coloring
$\psi_k=(W^*_{i+1,k},B^*_{i+1,k})$, satisfying the conditions of
Lemma~\ref{th:inductive}, of each connected component $H^*_{i+1,k}$ of
$G^*_{i+1}$, for $1\leq k \leq h_{i+1}$. The $2$-coloring $\psi$ of $H^*_{i,j}$
is such that each connected component $H^*_{i+1,k}$ of $G^*_{i+1}$ that lies
inside an internal face of $H_{i,j}$ ``maintains'' the coloring $\psi_k$, i.e.,
a vertex of $H^*_{i+1,k}$ is white in $\psi$ if and only if it is white in
$\psi_k$. Then, in order to determine $\psi$, it suffices to describe how to
color the vertices of $H_{i,j}$.
Second, we look at the internal faces of $H_{i,j}$ one at a time. When we look at a face $f$, we determine a set $B_f$ of vertices of $C_f$ that will be colored black. This is done in such a way that the graph $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane even if we color white all the vertices in $V(C_f)\setminus B_f$. By Lemma~\ref{le:single-faces}, a $2$-coloring of $H^*_{i,j}$ such that $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane for every internal face $f$ of $H_{i,j}$ is such that $H^*_{i,j}[W^*_{i,j}]$ is strongly outerplane. We remark that, when the vertices in a set $B_f$ are designated to be colored black, the vertices in $V(C_f)\setminus B_f$ are not necessarily colored white, as a vertex in $V(C_f)\setminus B_f$ might belong to the set $B_{f'}$ of vertices that are colored black for a face $f'\neq f$ of $H_{i,j}$. In fact, only after the sets $B_f$ have been determined for {\em every} internal face $f$ of $H_{i,j}$ are the remaining uncolored vertices of $H_{i,j}$ colored white.
We now describe in more detail how to color the vertices of $H_{i,j}$. We present an algorithm, which we call {\em algorithm cycle-breaker}, that associates a set $B_f$ with each internal face $f$ of $H_{i,j}$ as follows.
{\bf Empty faces:} For any empty face $f$ of $H_{i,j}$, let $B_f=\emptyset$.
{\bf Trivial faces:} While there exists a vertex $v^*_{1,2}$ incident to two
trivial faces $f_1$ and $f_2$ of $H_{i,j}$ to which no sets $B_{f_1}$ and
$B_{f_2}$ have been associated yet, respectively, let
$B_{f_1}=B_{f_2}=\{v^*_{1,2}\}$. When no such vertex exists, for any trivial
face $f$ of $H_{i,j}$ to which no set $B_f$ has been associated yet, let $v$ be
any vertex of $C_f$ and let $B_f=\{v\}$.
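The greedy pairing of trivial faces can be sketched as follows; the code is illustrative and not part of the proof, and assumes each trivial face is given by the vertex list of its boundary cycle $C_f$.

```python
def assign_trivial_faces(trivial_faces):
    # trivial_faces : dict mapping a face identifier to the list of
    #                 vertices of its boundary cycle C_f
    # While two still-unassigned trivial faces share a vertex, reuse
    # that vertex for both, so the pair contributes one black vertex
    # in total; leftover faces each receive an arbitrary vertex.
    B = {}
    unassigned = set(trivial_faces)
    changed = True
    while changed:
        changed = False
        for f1 in list(unassigned):
            for f2 in list(unassigned - {f1}):
                shared = set(trivial_faces[f1]) & set(trivial_faces[f2])
                if shared:
                    v = min(shared)          # any shared vertex works
                    B[f1] = B[f2] = {v}
                    unassigned -= {f1, f2}
                    changed = True
                    break
            if changed:
                break
    for f in unassigned:
        B[f] = {trivial_faces[f][0]}         # any vertex of C_f works
    return B
```

For instance, two trivial faces with boundary cycles $(1,2,3)$ and $(3,4,5)$ share vertex $3$ and hence contribute a single black vertex in total.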
{\bf Non-trivial non-empty faces:} Consider any non-trivial non-empty internal
face $f$ of $H_{i,j}$. Denote by $H^*_{i+1,k}$ the connected component of
$G^*_{i+1}$ inside $f$. By induction, for any ${\cal BU}$-node $b$ in the
contracted block-cutvertex tree ${\cal CBC}(H^*_{i+1,k},\psi_k)$, either
$s(H^*_{i+1,k}(b),\psi_k)\geq |W^*_{i+1,k} \cap V(H_{i+1,k}(b))|+1$ holds, or
$s(H^*_{i+1,k}(b),\psi_k)= |W^*_{i+1,k} \cap V(H_{i+1,k}(b))|$ and there exists
an edge incident to the outer face of $H^*_{i+1,k}(b)$ both of whose
end-vertices are white. (Condition (2c) of Lemma~\ref{th:inductive} cannot
apply here, since $H^*_{i+1,k}(b)$ is a single vertex only if $H^*_{i+1,k}$ is
a single vertex, which would make $f$ trivial.)
We repeatedly perform the following actions: (i) We pick any ${\cal BU}$-node
$b$ that is a leaf in ${\cal CBC}(H^*_{i+1,k},\psi_k)$; (ii) we insert some
vertices of $C_f$ into $B_f$, based on the structure and the coloring of
$H^*_{i+1,k}(b)$; and (iii) we remove $b$ from ${\cal CBC}(H^*_{i+1,k},\psi_k)$,
also removing the ${\cal C}$-node adjacent to $b$ if its degree becomes one
after the removal of $b$. We now describe action (ii) in more detail.
For every white vertex $u$ incident to the outer face of $H^*_{i+1,k}(b)$, we
define the {\em rightmost neighbor $r(u,b)$ of $u$ in $C_f$ from $b$} as
follows. Denote by $u'$ the vertex following $u$ in the clockwise order of the
vertices along the cycle delimiting the outer face of $H^*_{i+1,k}(b)$. Vertex
$r(u,b)$ is the vertex preceding $u'$ in the clockwise order of the neighbors of
$u$. Observe that, since $H^*_{i,j}$ is internally-triangulated, $r(u,b)$
belongs to $C_f$. Also, $r(u,b)$ is well-defined because $u$ is not a cutvertex
of $H^*_{i+1,k}(b)$ (it might be a cutvertex of $H^*_{i+1,k}$, but it is not a
cutvertex of $H^*_{i+1,k}(b)$, since that graph contains no white cutvertex).
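With the embedding given as a clockwise rotation system, $r(u,b)$ amounts to a single lookup. The sketch below is illustrative only; the representation of the embedding and the vertex labels are assumptions, not part of the proof.

```python
def rightmost_neighbor(rotation, outer_cycle, u):
    # rotation    : dict mapping each vertex to the list of its
    #               neighbours in clockwise order around it
    # outer_cycle : vertices of the outer face of H*(b), clockwise
    # r(u, b) is the neighbour of u that precedes, in the clockwise
    # rotation at u, the vertex u' following u on the outer cycle.
    i = outer_cycle.index(u)
    u_next = outer_cycle[(i + 1) % len(outer_cycle)]
    nbrs = rotation[u]
    return nbrs[nbrs.index(u_next) - 1]      # wraps around at index 0
```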
Suppose that $s(H^*_{i+1,k}(b),\psi_k)\geq |W^*_{i+1,k} \cap V(H_{i+1,k}(b))|+1$. Then, for every white vertex $u$ incident to the outer face of $H^*_{i+1,k}(b)$, we add $r(u,b)$ to $B_f$.
Suppose that $s(H^*_{i+1,k}(b),\psi_k)= |W^*_{i+1,k} \cap V(H_{i+1,k}(b))|$ and there exists an edge $(v,v')$ incident to the outer face of $H^*_{i+1,k}(b)$ such that $v$ and $v'$ are white. Assume, w.l.o.g., that $v'$ follows $v$ in the clockwise order of the vertices along the cycle delimiting the outer face of $H^*_{i+1,k}(b)$. Then, for every white vertex $u\neq v$ incident to the outer face of $H^*_{i+1,k}(b)$, we add $r(u,b)$ to $B_f$.
After the execution of algorithm cycle-breaker, a set $B_f$ has been defined for every internal face $f$ of $H_{i,j}$. Then, color black all the vertices in $\bigcup_{f} B_f$, where the union is over all the internal faces $f$ of $H_{i,j}$. Also, color white all the vertices of $H_{i,j}$ that are not colored black. Denote by $\psi=(W^*_{i,j},B^*_{i,j})$ the resulting coloring of $H^*_{i,j}$. We have the following lemma, which completes the induction and hence the proof of Lemma~\ref{th:inductive}.
\begin{lemma} \label{le:correctness}
Coloring $\psi$ satisfies Conditions (1) and (2) of Lemma~\ref{th:inductive}.
\end{lemma}
\begin{proof}
{\em We prove that $\psi$ satisfies Condition (1) of Lemma~\ref{th:inductive}}. Namely, we prove that, for every internal face $f$ of $H_{i,j}$, graph $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane. By Lemma~\ref{le:single-faces}, this implies that graph $H^*_{i,j}[W^*_{i,j}]$ is strongly outerplane.
For any {\bf empty face} $f$ of $H_{i,j}$, graph $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane, as no vertex of $H^*_{i,j}$ is in the interior of $f$.
By construction, for any {\bf trivial face} $f$ of $H_{i,j}$, there is a black vertex in $C_f$; hence, by Lemma~\ref{le:trivial-faces}, graph $H^*_{i,j}[W^*_{i,j}(f)]$ is strongly outerplane.
Let $f$ be any {\bf non-empty non-trivial internal face} of $H_{i,j}$. Denote by $H^*_{i+1,k}$ the connected component of $G^*_{i+1}$ inside $f$. Such a component exists because $f$ is non-empty. Suppose, for a contradiction, that $H^*_{i,j}[W^*_{i,j}(f)]$ is not strongly outerplane. Then, it contains a simple cycle $C$ that contains in its interior some vertex $x$ of $H^*_{i,j}$. Assume, w.l.o.g., that $C$ is minimal, that is, there exists no cycle $C'$ in $H^*_{i,j}[W^*_{i,j}(f)]$ that contains $x$ in its interior and such that $V(C')\subset V(C)$.
Cycle $C$ contains at least one vertex of $C_f$, since by induction $H^*_{i+1,k}[W^*_{i,j}]$ is strongly outerplane. Suppose, for a contradiction, that $C$ coincides with $C_f$. Since $f$ is non-trivial, when algorithm cycle-breaker picks the first leaf $b$ in ${\cal CBC}(H^*_{i+1,k},\psi_k)$, there exists at least one vertex $u$ incident to the outer face of $H^*_{i+1,k}(b)$ that is white. Then, either $r(u,b)\in V(C_f)$ is black, thus obtaining a contradiction, or there exists an edge $(u,u')$ incident to the outer face of $H^*_{i+1,k}(b)$ such that $u$ and $u'$ are both white, where $u'$ follows $u$ in the clockwise order of the vertices along the cycle delimiting the outer face of $H^*_{i+1,k}(b)$. Then $r(u',b)\in V(C_f)$ is black, thus obtaining a contradiction. Hence, we can assume that $C$ contains vertices of $C_f$ {\em and} vertices in the interior of $C_f$.
Consider a maximal path $P$ in $C$ all of whose edges belong to $C_f$. Let $u$ and $v$ be the end-vertices of $P$. Let $u'$ and $v'$ be the vertices adjacent to $u$ and $v$ in $C$, respectively, and not belonging to $P$. By the maximality of $P$, we have that $u'$ and $v'$ are in the interior of $C_f$. It might be the case that $u=v$ or that $u'=v'$ (however the two equalities do not hold simultaneously). Assume w.l.o.g. that $u'$, $u$, $v$, and $v'$ appear in this clockwise order along $C$. Denote by $b$ any node of ${\cal CBC}(H^*_{i+1,k},\psi_k)$ such that $u'$ belongs to $H^*_{i+1,k}(b)$.
Suppose first that algorithm cycle-breaker inserted vertex $r(u',b)$ into $B_f$ as the rightmost neighbor of $u'$ in $C_f$ from $b$. Assume also that $v'\neq u'$. By the assumption on the clockwise order of the vertices along $C$ and since all the vertices of $P$ are white, we have that edge $(v,v')$ crosses edge $(u',r(u',b))$, a contradiction to the planarity of $H^*_{i+1,k}(b)$ (see Fig.~\ref{fig:correctness}(a)). Assume next that $v'=u'$. Consider the edge $(u',u'')$ that follows $(u',u)$ in the clockwise order of the edges incident to $u'$. (Note that $u'' \neq v$, otherwise $C$ would be an empty triangle.) If $u''$ belongs to $C_f$, then it either belongs to $P$ or it does not. In the former case, a cycle $C'$ can be obtained from $C$ by replacing path $(u',u,u'')$ with edge $(u',u'')$; since $H^*_{i,j}$ is internally-triangulated, then $(u,u',u'')$ is a cycle delimiting a face of $H^*_{i,j}$, hence if $C$ contains a vertex $x$ in its interior, then $C'$ contains $x$ in its interior as well, thus contradicting the minimality of $C$ (see Fig.~\ref{fig:correctness}(b)). In the latter case, edge $(u',u'')$ crosses $C$, thus contradicting the planarity of $H^*_{i,j}$. We can hence assume that $u''$ belongs to $H^*_{i+1,k}$. Let $b'$ be the node in ${\cal CBC}(H^*_{i+1,k},\psi_k)$ such that $H^*_{i+1,k}(b')$ contains edge $(u',u'')$ (it is possible that $b'=b$). Observe that, by the planarity of $H^*_{i,j}$, graph $H^*_{i+1,k}(b')-u'$ lies entirely in the interior of $C$. This implies that $u$ is the rightmost neighbor $r(u',b')$ of $u'$ in $C_f$ from $b'$. Thus, if algorithm cycle-breaker inserted $r(u',b')$ into $B_f$, then we immediately get a contradiction to the fact that $u$ is white. Otherwise, vertex $u''$ is white, and algorithm cycle-breaker inserted into $B_f$ the rightmost neighbor $r(u'',b')$ of $u''$ in $C_f$ from $b'$. 
Since every vertex of $P$ is white, $r(u'',b')$ does not belong to $P$; hence, edge $(u'',r(u'',b'))$ crosses edge $(u',u)$ or edge $(u',v)$, a contradiction to the planarity of $H^*_{i,j}$ (see Fig.~\ref{fig:correctness}(c)).
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c c c}
\mbox{\includegraphics[scale=0.48]{Figures/Correctness1.eps}} \hspace{2mm} &
\mbox{\includegraphics[scale=0.48]{Figures/Correctness2.eps}} \hspace{2mm} &
\mbox{\includegraphics[scale=0.48]{Figures/Correctness3.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.48]{Figures/Correctness4.eps}}\\
(a) \hspace{2mm}& (b) \hspace{2mm} & (c) \hspace{1mm} & (d)
\end{tabular}
\caption{Contradiction to the existence of a cycle $C$ (shown by thick lines) in $H^*_{i,j}[W^*_{i,j}(f)]$ containing a vertex $x$ in its interior. The color of the gray vertices is not important for the proof. Figures (a), (b), and (c) illustrate the case in which algorithm cycle-breaker inserted a vertex $r(u',b)$ into $B_f$, and in which $v'\neq u'$ (a), $v'= u'$ and $u'' \in V(P)$ (b), and $v'= u'$ and $u''\in V(H^*_{i+1,k})$ (c). Figure (d) illustrates the case in which algorithm cycle-breaker did not insert a vertex $r(u',b)$ into $B_f$ and $v'\neq u',u''$.}
\label{fig:correctness}
\end{center}
\end{figure}
Suppose next that algorithm cycle-breaker did not insert vertex $r(u',b)$ into $B_f$ as the rightmost neighbor of $u'$ in $C_f$ from $b$. Denote by $u''$ the vertex that follows $u'$ in the clockwise order of the vertices along the outer face of $H^*_{i+1,k}(b)$. Assume first that $v'=u''$. Then, edge $(u',v')$ splits $C$ into two cycles $C'$ and $C''$, with $V(C')\subset V(C)$ and $V(C'')\subset V(C)$, one of which contains $x$ in its interior, thus contradicting the minimality of $C$. Hence, we can assume that $v' \neq u''$. In the case in which $v' = u'$, a contradiction can be derived with {\em exactly} the same arguments as in the case in which algorithm cycle-breaker inserted vertex $r(u',b)$ into $B_f$. Hence, we can assume that $v' \neq u'$. Since algorithm cycle-breaker did not insert vertex $r(u',b)$ into $B_f$, it follows that $u''$ is white and that algorithm cycle-breaker inserted into $B_f$ the rightmost neighbor $r(u'',b)$ of $u''$ in $C_f$ from $b$. Since every vertex of $P$ is white, it follows that $r(u'',b)$ does not belong to $P$. Hence, edge $(u'',r(u'',b))$ crosses edge $(u,u')$ or edge $(v,v')$, a contradiction to the planarity of $H^*_{i+1,k}(b)$ (see Fig.~\ref{fig:correctness}(d)).
{\em We prove that $\psi$ satisfies Condition (2) of Lemma~\ref{th:inductive}}. Consider any ${\cal BU}$-node $b$ in the contracted block-cutvertex tree ${\cal CBC}(H^*_{i,j},\psi)$. Denote by $H_{i,j}(b)$ the outerplane graph induced by the vertices incident to the outer face of $H^*_{i,j}(b)$ or, equivalently, the subgraph of $H_{i,j}$ induced by the vertices in $H^*_{i,j}(b)$.
We distinguish three cases. In {\em Case A}, graph $H_{i,j}(b)$ contains at least one non-trivial non-empty internal face; in {\em Case B}, all the faces of $H_{i,j}(b)$ are either trivial or empty, and there exists a vertex $v^*_{1,2}$ incident to two trivial faces $f_1$ and $f_2$ of $H_{i,j}(b)$; finally, in {\em Case C}, all the faces of $H_{i,j}(b)$ are either trivial or empty, and there exists no vertex incident to two trivial faces of $H_{i,j}(b)$. We prove that, in Cases~A and~B, Condition (2a) of Lemma~\ref{th:inductive} is satisfied, while in Case~C, Condition (2a) or Condition (2b) of Lemma~\ref{th:inductive} is satisfied.
In all cases, the surplus $s(H^*_{i,j}(b),\psi)$ is the sum of the surpluses $s(H^*_{i+1,k},\psi)$ of the connected components $H^*_{i+1,k}$ of $G^*_{i+1}$ inside the internal faces of $H_{i,j}(b)$, plus the number $|W^*_{i,j} \cap V(H_{i,j}(b))|$ of white vertices in $H_{i,j}(b)$, minus the number $|B^*_{i,j} \cap V(H_{i,j}(b))|$ of black vertices in $H_{i,j}(b)$, which is equal to $|\bigcup_{f} B_f|$, where the union is over the internal faces $f$ of $H_{i,j}(b)$. Denote by $n_t$ the number of trivial faces of $H_{i,j}(b)$ and by $n_n$ the number of non-trivial non-empty internal faces of $H_{i,j}(b)$.
\begin{itemize}
\item We discuss {\bf Case A}. First, the number of vertices inserted in $\bigcup_{f} B_f$ by algorithm cycle-breaker when looking at trivial faces of $H_{i,j}(b)$ is at most $n_t$, as for every trivial face $f$ of $H_{i,j}(b)$ at most one vertex is inserted into $B_f$. Also, the sum of the surpluses $s(H^*_{i+1,k},\psi)$ of the connected components $H^*_{i+1,k}$ of $G^*_{i+1}$ inside trivial faces of $H_{i,j}(b)$ is at least $n_t$, given that each connected component $H^*_{i+1,k}$ inside a trivial face is either a single white vertex, or it is such that all the vertices incident to the outer face of $H^*_{i+1,k}$ are black (hence by induction $s(H^*_{i+1,k},\psi)\geq |W^*_{i+1,k} \cap V(H_{i+1,k})|+1 = 1$).
In the following, we prove that, for every non-trivial non-empty internal face $f$ of $H_{i,j}(b)$ containing a connected component $H^*_{i+1,k}$ of $G^*_{i+1}$ in its interior, algorithm cycle-breaker inserts into $B_f$ at most $s(H^*_{i+1,k},\psi)-1$ vertices. The claim implies that Condition (2a) of Lemma~\ref{th:inductive} is satisfied by $H^*_{i,j}(b)$. In fact, (1) the sum of the surpluses $s(H^*_{i+1,k},\psi)$ of the connected components $H^*_{i+1,k}$ of $G^*_{i+1}$ inside the internal faces of $H_{i,j}(b)$ is at least $n_t + \sum_f s(H^*_{i+1,k},\psi)$ (where the sum is over each connected component $H^*_{i+1,k}$ inside a non-trivial non-empty internal face $f$ of $H_{i,j}(b)$), (2) the number of white vertices in $H_{i,j}(b)$ is $|W^*_{i,j} \cap V(H_{i,j}(b))|$, and (3) the number of black vertices in $H_{i,j}(b)$ is at most $n_t + \sum_f (s(H^*_{i+1,k},\psi)-1)$ (where the sum is over each connected component $H^*_{i+1,k}$ inside a non-trivial non-empty internal face $f$ of $H_{i,j}(b)$). Hence, $s(H^*_{i,j}(b),\psi)\geq n_t - n_t + |W^*_{i,j} \cap V(H_{i,j}(b))| + n_n$. By the assumption of Case A, we have $n_n\geq 1$, and Condition (2a) of Lemma~\ref{th:inductive} follows.
Consider any non-trivial non-empty internal face $f$ of $H_{i,j}(b)$ containing a connected component $H^*_{i+1,k}$ of $G^*_{i+1}$ in its interior. We consider the ${\cal BU}$-nodes of the contracted block-cutvertex tree ${\cal CBC}(H^*_{i+1,k},\psi)$ of $H^*_{i+1,k}$ one at a time. Denote by $n_{bu}$ the number of ${\cal BU}$-nodes in ${\cal CBC}(H^*_{i+1,k},\psi)$ and by $b_1,b_2,\dots,b_{n_{bu}}$ the ${\cal BU}$-nodes of ${\cal CBC}(H^*_{i+1,k},\psi)$ in any order.
We prove that, when algorithm cycle-breaker deals with ${\cal BU}$-node $b_l$, for any $1\leq l\leq n_{bu}$, it inserts into $B_f$ a number of vertices which is at most $s(H^*_{i+1,k}(b_l),\psi)-1$. Namely, if $s(H^*_{i+1,k}(b_l),\psi)\geq |W^*_{i+1,k} \cap V(H_{i+1,k}(b_l))|+1$, then it suffices to observe that, for each white vertex incident to the outer face of $H^*_{i+1,k}(b_l)$, at most one black vertex is inserted into $B_f$; further, if $s(H^*_{i+1,k}(b_l),\psi)= |W^*_{i+1,k} \cap V(H_{i+1,k}(b_l))|$ and there exists an edge $e$ incident to the outer face of $H^*_{i+1,k}(b_l)$ both of whose end-vertices are white, then, for each white vertex incident to the outer face of $H^*_{i+1,k}(b_l)$, at most one black vertex is inserted into $B_f$, with the exception of one of the end-vertices of $e$, for which no black vertex is inserted into $B_f$. Hence, the number of vertices inserted into $B_f$ by algorithm cycle-breaker is at most $\sum_{l=1}^{n_{bu}} (s(H^*_{i+1,k}(b_l),\psi)-1)=\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)-n_{bu}$.
It remains to prove that $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)= s(H^*_{i+1,k},\psi)+n_{bu}-1$, which is done as follows. (Roughly speaking, if $n_{bu}>1$, then $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)>s(H^*_{i+1,k},\psi)$ holds because white cutvertices in $H^*_{i+1,k}$ belong to more than one graph $H^*_{i+1,k}(b_l)$, hence they contribute more than $1$ to $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)$, while they contribute exactly $1$ to $s(H^*_{i+1,k},\psi)$.) Root ${\cal CBC}(H^*_{i+1,k},\psi)$ at any ${\cal BU}$-node, and orient all its edges towards the root. We now {\em assign} each white cutvertex $c_x$ in $H^*_{i+1,k}$ to the only ${\cal BU}$-node $b_l$ such that edge $(c_x,b_l)$ is oriented from $c_x$ to $b_l$. Such an assignment corresponds to regarding $b_l$ as the only ${\cal BU}$-node in which $c_x$ is counted when relating $s(H^*_{i+1,k},\psi)$ to $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)$. Now the difference $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi) - s(H^*_{i+1,k},\psi)$ is equal to the number of pairs $(c_x,b_l)$ such that $c_x$ belongs to $H^*_{i+1,k}(b_l)$ but is not assigned to $b_l$, that is, the number of edges $(c_x,b_l)$ that are oriented from $b_l$ to $c_x$. For each ${\cal BU}$-node $b_l$ of ${\cal CBC}(H^*_{i+1,k},\psi)$, there is exactly one such edge, except for the root, for which there is none. Hence, we get that $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)= s(H^*_{i+1,k},\psi)+n_{bu}-1$.
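The same identity can also be checked by a direct degree count (a rephrasing of the orientation argument above, not an additional claim): each white cutvertex $c$ of degree $\deg(c)$ in ${\cal CBC}(H^*_{i+1,k},\psi)$ is counted $\deg(c)$ times in $\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi)$ and once in $s(H^*_{i+1,k},\psi)$, while every other vertex is counted once in both. Hence,
\[
\sum_{l=1}^{n_{bu}} s(H^*_{i+1,k}(b_l),\psi) - s(H^*_{i+1,k},\psi)
\;=\; \sum_{c} \big(\deg(c)-1\big)
\;=\; (n_{bu}+n_c-1) - n_c
\;=\; n_{bu}-1,
\]
where the sums range over the $n_c$ white cutvertices of $H^*_{i+1,k}$, using that ${\cal CBC}(H^*_{i+1,k},\psi)$ is a tree with $n_{bu}+n_c$ nodes in which every edge joins a ${\cal BU}$-node and a ${\cal C}$-node.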
\item We now discuss {\bf Case B}. First, the sum of the surpluses $s(H^*_{i+1,k},\psi)$ of the connected components $H^*_{i+1,k}$ of $G^*_{i+1}$ inside the internal faces of $H_{i,j}(b)$ is at least $n_t$, given that each connected component $H^*_{i+1,k}$ is either a single white vertex, or it is such that all the vertices incident to the outer face of $H^*_{i+1,k}$ are black (hence by induction $s(H^*_{i+1,k},\psi)\geq |W^*_{i+1,k} \cap V(H_{i+1,k})|+1 = 1$).
Second, we prove that algorithm cycle-breaker defines $B_{f_1}=B_{f_2}=\{v^*_{1,2}\}$ for two trivial faces $f_1$ and $f_2$ of $H_{i,j}(b)$ sharing a vertex $v^*_{1,2}$. Suppose the contrary, for a contradiction. Two trivial faces $f_1$ and $f_2$ of $H_{i,j}(b)$ sharing a vertex $v^*_{1,2}$ exist by the assumption of Case B. Hence, algorithm cycle-breaker does not define $B_{f_1}=B_{f_2}=\{v^*_{1,2}\}$ only if it does define $B_{f_1}=B_{f_3}=\{v^*_{1,3}\}$, for some vertex $v^*_{1,3}$ incident to $f_1$ and to a trivial face $f_3$ of $H_{i,j}$ (possibly after swapping the labels of $f_1$ and $f_2$). If $f_3$ is a face of $H_{i,j}(b)$, we immediately get a contradiction. If $f_3$ is not a face of $H_{i,j}(b)$, then we get a contradiction since any vertex that is incident to an internal face of $H_{i,j}(b)$ and to an internal face of $H_{i,j}$ not in $H_{i,j}(b)$ is white, by definition of contracted block-cutvertex tree, hence it is not in $\bigcup_{f} B_f$.
Thus, $|B^*_{i,j} \cap V(H_{i,j}(b))|=|\bigcup_{f} B_f|<n_t$. In fact, each trivial face contributes at most one vertex to $\bigcup_{f} B_f$, and at least two trivial faces of $H_{i,j}(b)$ contribute a single vertex in total to $\bigcup_{f} B_f$.
Hence, $s(H^*_{i,j}(b),\psi) \geq n_t + |W^*_{i,j} \cap V(H_{i,j}(b))| - (n_t -1) = |W^*_{i,j} \cap V(H_{i,j}(b))| +1$, thus Condition (2a) of Lemma~\ref{th:inductive} is satisfied.
\item We now discuss {\bf Case C}. As in Case B, the sum of the surpluses of the connected components $H^*_{i+1,k}$ of $G^*_{i+1}$ inside the internal faces of $H_{i,j}(b)$ is at least $n_t$.
Second, $|B^*_{i,j} \cap V(H_{i,j}(b))|=|\bigcup_{f} B_f|= n_t$, as each trivial face contributes exactly one vertex to $\bigcup_{f} B_f$. (Notice that, since no two trivial faces share a vertex, no two trivial faces contribute the same vertex to $\bigcup_{f} B_f$.)
Hence, $s(H^*_{i,j}(b),\psi) \geq n_t + |W^*_{i,j} \cap V(H_{i,j}(b))| - n_t = |W^*_{i,j} \cap V(H_{i,j}(b))|$. If the inequality is strict, then Condition (2a) of Lemma~\ref{th:inductive} is satisfied. Otherwise, in order to prove that Condition (2b) of Lemma~\ref{th:inductive} is satisfied, it remains to prove that there exists an edge incident to the outer face of $H^*_{i,j}(b)$ whose end-vertices belong to $W^*_{i,j}$. We will in fact prove that there exists an edge incident to the outer face of $H^*_{i,j}(b)$ whose end-vertices belong to $W^*_{i,j}$ in {\em every} $2$-connected component $D^*_{i,j}(b)$ of $H^*_{i,j}(b)$. Denote by $D_{i,j}(b)$ the outerplane graph induced by the vertices incident to the outer face of $D^*_{i,j}(b)$. Refer to Fig.~\ref{fig:casec}.
\begin{figure}[tb]
\begin{center}
\mbox{\includegraphics[scale=0.6]{Figures/CaseC.eps}}
\caption{Illustration for the proof that there exists an edge incident to the outer face of $D^*_{i,j}(b)$ whose end-vertices belong to $W^*_{i,j}$. }
\label{fig:casec}
\end{center}
\end{figure}
Suppose, for a contradiction, that no edge incident to the outer face of $D^*_{i,j}(b)$ has both its end-vertices in $W^*_{i,j}$. If all the faces of $D_{i,j}(b)$ are empty, then every edge incident to the outer face of $D^*_{i,j}(b)$ has both its end-vertices in $W^*_{i,j}$, thus obtaining a contradiction. Assume next that $D_{i,j}(b)$ has at least one trivial face $f$. Since exactly one of the vertices incident to $f$, say vertex $z$, belongs to $\bigcup_{f} B_f$ (as otherwise two trivial faces of $D_{i,j}(b)$ would exist sharing a vertex), it follows that at least one of the edges delimiting $f$ has both its end-vertices in $W^*_{i,j}$. Denote by $(u,v)$ such an edge and assume, w.l.o.g., that $u$, $v$, and $z$ appear in this clockwise order along the cycle delimiting $f$. If $(u,v)$ is incident to the outer face of $D_{i,j}(b)$, we immediately have a contradiction. Otherwise, $(u,v)$ is an internal edge of $D_{i,j}(b)$. Denote by $u=u_1,u_2,\dots,u_l=v$ the clockwise order of the vertices along the cycle delimiting the outer face of $D_{i,j}(b)$ from vertex $u$ to vertex $v$. We assume w.l.o.g. that $(u,v)$ is {\em maximal}, that is, there is no edge $(u_x,u_y)\neq (u,v)$ such that (1) $1\leq x < y \leq l$, (2) $u_x$ and $u_y$ are both white, and (3) there exists a trivial face $f'$ of $D_{i,j}(b)$ that is incident to edge $(u_x,u_y)$ and that is internal to cycle $C_{1,2}=(u_1,u_2,\dots,u_x,u_y,u_{y+1},\dots,u_l)$. Then, consider vertex $u_2$. If it is white, then we have a contradiction, as edge $(u_1,u_2)$ is incident to the outer face of $D_{i,j}(b)$. Otherwise, $u_2$ is black. Then, there exists a trivial face $f'$ such that $B_{f'}=\{u_2\}$. Since no vertex incident to $f'$ and different from $u_2$ belongs to $\bigcup_{f} B_f$ (as otherwise two trivial faces of $D_{i,j}(b)$ would exist sharing a vertex), it follows that at least one of the edges delimiting $f'$, say $e'=(u_{x'},u_{y'})$, has both its end-vertices in $W^*_{i,j}$. 
By planarity, the end-vertices of $e'$ are among $u_1,u_2,\dots,u_l$. Further, they are different from $u_1$ and $u_l$, as otherwise $f$ and $f'$ would share a vertex. Hence, $f'$ is internal to cycle $(u_1,u_2,\dots,u_{x'},u_{y'},u_{y'+1},\dots,u_l)$, thus contradicting the maximality of $(u,v)$.
\end{itemize}
This concludes the proof of the lemma.
\end{proof}
\section{Proof of Theorem~\ref{th:plane3trees-main}} \label{se:3trees}
In this section we prove Theorem~\ref{th:plane3trees-main}. It suffices to
prove Theorem~\ref{th:plane3trees-main} for an $n$-vertex {\em maximal} plane
graph $G_1$ and an $n$-vertex ({\em maximal}) plane $3$-tree $G_2$. In fact, if $G_1$ and $G_2$ are not maximal, then they can be augmented to an $n$-vertex
maximal plane graph $G'_1$ and an $n$-vertex plane $3$-tree $G'_2$, respectively; the latter augmentation can always be performed, as proved in~\cite{kv-nppt-12}. Then, a {\sc SefeNoMap~} can be constructed for $G'_1$ and $G'_2$, and finally
the edges not in $G_1$ and $G_2$ can be removed, thus obtaining a {\sc SefeNoMap~} of $G_1$
and $G_2$. In the following we assume that $G_1$ and $G_2$ are an $n$-vertex
maximal plane graph and an $n$-vertex plane $3$-tree, respectively, for
some $n\geq 3$. Denote by $C_i=(u_i,v_i,z_i)$ the cycle delimiting the outer
face of $G_i$, for $i=1,2$, where vertices $u_i$, $v_i$, and $z_i$ appear in
this clockwise order along $C_i$.
Let $p_u$, $p_v$, and $p_z$ be three points in the plane. Let $s_{uv}$,
$s_{vz}$, and $s_{zu}$ be three curves connecting $p_u$ and $p_v$,
connecting $p_v$ and $p_z$, and connecting $p_z$ and $p_u$, respectively, that
do not intersect except at their common end-points. Let $\Delta_{uvz}$ be
the closed curve $s_{uv}\cup s_{vz}\cup s_{zu}$. Assume that $p_u$, $p_v$, and
$p_z$ appear in this clockwise order along $\Delta_{uvz}$. Denoting by
$\int(\Delta)$ the interior of a closed curve $\Delta$, let
$cl(\Delta)=\int(\Delta) \cup \Delta$. Let $P$ be a set of $n-3 \geq 0$ points
in $\int(\Delta_{uvz})$ and let $R$ be a set of points on $\Delta_{uvz}$, where
$p_u,p_v,p_z\in R$. Let $S$ be a set of curves whose end-points are in
$R\cup P$ such that: (i) No two curves in $S$ intersect, except possibly
at common end-points, (ii) no two curves in $S$ connect the same pair of
points in $R\cup P$, (iii) each curve in $S$ is contained in
$cl(\Delta_{uvz})$, (iv) any point in $R$, except possibly for $p_u$, $p_v$,
and $p_z$, has exactly one incident curve in $S$, and (v) no curve in
$S$ connects two points of $R$ both lying on $s_{uv}$, or both lying on
$s_{vz}$, or both lying on $s_{zu}$. See Fig.~\ref{fig:plane3treesdrawing}(a).
We show the following.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c}
\mbox{\includegraphics[scale=0.4]{Figures/Plane3TreeSetting.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.4]{Figures/Plane3TreeDrawing.eps}}\\
(a) \hspace{2mm}& (b)
\end{tabular}
\caption{(a) Setting for Lemma~\ref{le:splittingpoints}. White circles are points in $P$. White squares are points in $R$. Dashed curves are in $S$. Curves $s_{uv}$, $s_{vz}$, and $s_{zu}$ are solid thin curves. (b) A planar drawing of $G_2$ (solid thick lines) satisfying the properties of Lemma~\ref{le:splittingpoints}.}
\label{fig:plane3treesdrawing}
\end{center}
\end{figure}
\begin{lemma} \label{le:splittingpoints}
There exists a planar drawing $\Gamma_2$ of $G_2$ such that:
\begin{itemize}
\item[(a)] Vertices $u_2$, $v_2$, and $z_2$ are mapped to $p_u$, $p_v$, and $p_z$, respectively;
\item[(b)] edges $(u_2,v_2)$, $(v_2,z_2)$, and $(z_2,u_2)$ are represented by curves $s_{uv}$, $s_{vz}$, and $s_{zu}$, respectively;
\item[(c)] the internal vertices of $G_2$ are mapped to the points of $P$;
\item[(d)] each edge of $G_2$ that connects two points $p_1,p_2\in P \cup\{p_u, p_v, p_z\}$ such that there exists a curve $s\in S$ connecting $p_1$ and $p_2$ is represented by $s$ in $\Gamma_2$; and
\item[(e)] each edge $e$ of $G_2$ and each curve $s \in S$ such that $e$ is not represented by $s$ in $\Gamma_2$ cross at most once.
\end{itemize}
\end{lemma}
\begin{proof}
We prove the statement by induction on $n$. If $n=3$, then construct $\Gamma_2$ by mapping vertices $u_2$, $v_2$, and $z_2$ to $p_u$, $p_v$, and $p_z$ (thus satisfying Property (a)), respectively, and by mapping edges $(u_2,v_2)$, $(v_2,z_2)$, and $(z_2,u_2)$ to curves $s_{uv}$, $s_{vz}$, and $s_{zu}$ (thus satisfying Property (b)). Property (c) is trivially satisfied since $G_2$ has no internal vertices and hence $P=\emptyset$. Property (d) is trivially satisfied since $G_2$ has no internal edges and hence $S=\emptyset$. Finally, Property (e) is trivially satisfied since $S=\emptyset$.
Suppose next that $n>3$. By the properties of plane $3$-trees, $G_2$ has an internal vertex $w_2$ that is connected to all of $u_2$, $v_2$, and $z_2$. Also, the subgraphs $G^{uv}_2$, $G^{vz}_2$, and $G^{zu}_2$ of $G_2$ induced by the vertices inside or on the border of cycles $C_2^{uv}=(u_2,v_2,w_2)$, $C_2^{vz}=(v_2,z_2,w_2)$, and $C_2^{zu}=(z_2,u_2,w_2)$, respectively, are plane $3$-trees with $n_{uv}$, $n_{vz}$, and $n_{zu}$ internal vertices, respectively, where $n_{uv}+n_{vz}+n_{zu}=n-4$.
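The decomposition just described can be exercised computationally. Below is a minimal Python sketch (our own illustrative construction, not part of the proof): it builds a random plane $3$-tree by repeatedly inserting a vertex into a triangular face, then recovers the internal vertex playing the role of $w_2$, i.e., the unique internal vertex adjacent to all three outer vertices.

```python
import random

def random_plane_3tree(n):
    """Random plane 3-tree on vertices 0..n-1, as an adjacency dict.
    Start from the triangle (0,1,2); each new vertex is placed inside a
    uniformly chosen triangular face, splitting it into three faces."""
    adj = {v: set() for v in range(n)}
    def add_edge(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        add_edge(a, b)
    faces = [(0, 1, 2)]                      # current triangular faces
    for v in range(3, n):
        a, b, c = faces.pop(random.randrange(len(faces)))
        for u in (a, b, c):
            add_edge(v, u)
        faces += [(a, b, v), (b, c, v), (c, a, v)]
    return adj

def apex(adj, outer):
    """The unique internal vertex adjacent to all three outer vertices
    (it exists and is unique whenever the 3-tree has more than 3 vertices)."""
    a, b, c = outer
    candidates = (adj[a] & adj[b] & adj[c]) - set(outer)
    assert len(candidates) == 1
    return candidates.pop()

random.seed(7)
adj = random_plane_3tree(20)
w = apex(adj, (0, 1, 2))     # plays the role of w_2 above
print(w, len(adj) - 4)       # apex, and the n-4 remaining internal vertices
```

In this construction the apex is always the first inserted vertex, and the remaining $n-4$ internal vertices are distributed among the three sub-triangles, matching $n_{uv}+n_{vz}+n_{zu}=n-4$.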
We claim that there exists a point $p_w \in P$ and three curves $s_{uw}$, $s_{vw}$, and $s_{zw}$ connecting $p_u$ and $p_w$, connecting $p_v$ and $p_w$, and connecting $p_z$ and $p_w$, respectively, such that the following hold (see Fig.~\ref{fig:plane3treesclaim}):
\begin{figure}[htb]
\begin{center}
\mbox{\includegraphics[scale=0.48]{Figures/Plane3TreeClaim.eps}}
\caption{Illustration for the claim to prove Lemma~\ref{le:splittingpoints}. White circles represent points in $P$. White squares represent points in $R$. Points in $R_{uv}$, in $R_{vz}$, and in $R_{zu}$ are black squares. Dashed curves are in $S$. Curves $s_{uv}$, $s_{vz}$, $s_{zu}$, $s_{uw}$, $s_{vw}$, and $s_{zw}$ are solid thick curves. In this example $n_{uv}=1$, $n_{vz}=2$, and $n_{zu}=6$. }
\label{fig:plane3treesclaim}
\end{center}
\end{figure}
\begin{itemize}
\item[(P1)] $s_{uw}$, $s_{vw}$, and $s_{zw}$ do not intersect each other and do not intersect $s_{uv}$, $s_{vz}$, and $s_{zu}$, other than at common end-points;
\item[(P2)] if there exists a curve $s_a\in S$ connecting $p_u$ and $p_w$, then $s_{uw}$ coincides with $s_a$; if there exists a curve $s_b\in S$ connecting $p_v$ and $p_w$, then $s_{vw}$ coincides with $s_b$; if there exists a curve $s_c\in S$ connecting $p_z$ and $p_w$, then $s_{zw}$ coincides with $s_c$;
\item[(P3)] for any curve $s\in S$ that does not coincide with $s_{uw}$, curves $s$ and $s_{uw}$ cross at most once; for any curve $s\in S$ that does not coincide with $s_{vw}$, curves $s$ and $s_{vw}$ cross at most once; for any curve $s\in S$ that does not coincide with $s_{zw}$, curves $s$ and $s_{zw}$ cross at most once;
\item[(P4)] the closed curve $\Delta_{uvw}=s_{uv}\cup s_{uw}\cup s_{vw}$ contains in its interior a subset $P_{uv}$ of $P$ with $n_{uv}$ points; the closed curve $\Delta_{vzw}=s_{vz}\cup s_{vw}\cup s_{zw}$ contains in its interior a subset $P_{vz}$ of $P$ with $n_{vz}$ points; and the closed curve $\Delta_{zuw}=s_{zu}\cup s_{zw}\cup s_{uw}$ contains in its interior a subset $P_{zu}$ of $P$ with $n_{zu}$ points; and
\item[(P5)] if a curve $s\in S$ has both its end-points in $cl(\Delta_{uvw})$, in $cl(\Delta_{vzw})$, or in $cl(\Delta_{zuw})$, then it is entirely contained in $cl(\Delta_{uvw})$, in $cl(\Delta_{vzw})$, or in $cl(\Delta_{zuw})$, respectively.
\end{itemize}
We first prove that the claim implies the lemma, and we later prove the claim.
Suppose that the claim holds. Denote by $R_{uv}$ the set of points consisting of: (i) the points in $R$ lying on $s_{uv}$, (ii) the intersection points of $s_{uw}$ with the curves in $S$, if $s_{uw}$ does not coincide with any curve in $S$, and (iii) the intersection points of $s_{vw}$ with the curves in $S$, if $s_{vw}$ does not coincide with any curve in $S$. Analogously define $R_{vz}$ and $R_{zu}$. Let $S^+$ be the set of curves obtained by subdividing each curve $s\in S$ with its intersection points with $s_{uw}$, $s_{vw}$, and $s_{zw}$ (e.g., if $s$ connects points $q_1,q_2 \in R\cup P$ and it has an intersection $q_3$ with $s_{uw}$, then $S^+$ contains two curves connecting $q_1$ and $q_3$, and connecting $q_3$ and $q_2$, respectively). Observe that no curve $s\in S^+$ has both its endpoints lying on $s_{uw}$, or both its endpoints lying on $s_{vw}$, or both its endpoints lying on $s_{zw}$, as otherwise the curve in $S$ of which $s$ is part would cross $s_{uw}$, $s_{vw}$, or $s_{zw}$ twice, respectively, contradicting Property P3 of the claim. Also, denote by $S_{uv}$, $S_{vz}$, and $S_{zu}$ the subsets of $S^+$ composed of the curves in $cl(\Delta_{uvw})$, in $cl(\Delta_{vzw})$, and in $cl(\Delta_{zuw})$, respectively.
Apply induction three times. The first time to construct a drawing $\Gamma^{uv}_2$ of $G^{uv}_2$ (where the parameters $p_u$, $p_v$, $p_z$, $s_{uv}$, $s_{vz}$, $s_{zu}$, $\Delta_{uvz}$, $P$, $R$, and $S$ in the statement of Lemma~\ref{le:splittingpoints} are replaced with $p_u$, $p_v$, $p_w$, $s_{uv}$, $s_{uw}$, $s_{vw}$, $\Delta_{uvw}$, $P_{uv}$, $R_{uv}$, and $S_{uv}$, respectively), the second time to construct a drawing $\Gamma^{vz}_2$ of $G^{vz}_2$ (where the parameters $p_u$, $p_v$, $p_z$, $s_{uv}$, $s_{vz}$, $s_{zu}$, $\Delta_{uvz}$, $P$, $R$, and $S$ in the statement of Lemma~\ref{le:splittingpoints} are replaced with $p_v$, $p_z$, $p_w$, $s_{vz}$, $s_{vw}$, $s_{zw}$, $\Delta_{vzw}$, $P_{vz}$, $R_{vz}$, and $S_{vz}$, respectively), and the third time to construct a drawing $\Gamma^{zu}_2$ of $G^{zu}_2$ (where the parameters $p_u$, $p_v$, $p_z$, $s_{uv}$, $s_{vz}$, $s_{zu}$, $\Delta_{uvz}$, $P$, $R$, and $S$ in the statement of Lemma~\ref{le:splittingpoints} are replaced with $p_z$, $p_u$, $p_w$, $s_{zu}$, $s_{zw}$, $s_{uw}$, $\Delta_{zuw}$, $P_{zu}$, $R_{zu}$, and $S_{zu}$, respectively). Observe that induction can be applied since, by Property P4 of the claim, the number of points in $P_{uv}$, in $P_{vz}$, and in $P_{zu}$ is equal to the number of internal vertices of $G^{uv}_2$, of $G^{vz}_2$, and of $G^{zu}_2$, respectively.
Placing $\Gamma^{uv}_2$, $\Gamma^{vz}_2$, and $\Gamma^{zu}_2$ together results in a drawing $\Gamma_2$ of $G_2$; in particular, edge $(u_2,w_2)$ is represented by curve $s_{uw}$ both in $\Gamma^{uv}_2$ and in $\Gamma^{zu}_2$ (analogous statements hold for $(v_2,w_2)$ and $(z_2,w_2)$). Drawing $\Gamma_2$ is planar by induction and by Property P1 of the claim; also, $\Gamma_2$ satisfies Properties (a) and (b) of the lemma by induction; further, $\Gamma_2$ satisfies Property (c) of the lemma by induction and by definition of $p_w$; moreover, $\Gamma_2$ satisfies Property (d) of the lemma by induction and by Properties P2 and P5 of the claim; finally, $\Gamma_2$ satisfies Property (e) of the lemma by induction and by Properties P3 and P5 of the claim.
We now prove the claim. First, we {\em almost-triangulate} the interior of $\Delta_{uvz}$, that is, we add a maximal set of curves $S'$ to $S$ such that: (i) No two curves in $S \cup S'$ intersect, except possibly at common end-points, (ii) no two curves in $S \cup S'$ connect the same pair of points in $R\cup P$, (iii) each curve in $S \cup S'$ is contained in $cl(\Delta_{uvz})$, (iv) any point in $R$, except possibly for $p_u$, $p_v$, and $p_z$, has exactly one incident curve in $S \cup S'$, and (v) no curve in $S \cup S'$ connects two points of $R$ both lying on $s_{uv}$, or both lying on $s_{vz}$, or both lying on $s_{zu}$.
Second, we prove the claim by induction on $n_{uv}+n_{vz}+n_{zu}$. In the base case, $n_{uv}+n_{vz}+n_{zu}=0$. Then, let $p_w$ be the only point in $P$. Refer to Fig.~\ref{fig:plane3treesbasecase}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c c c}
\mbox{\includegraphics[scale=0.4]{Figures/Plane3TreeBaseCase.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.4]{Figures/Plane3TreeBaseCase-2.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.4]{Figures/Plane3TreeBaseCase-3.eps}}
\end{tabular}
\caption{Three examples for the base case $n_{uv}+n_{vz}+n_{zu}=0$.}
\label{fig:plane3treesbasecase}
\end{center}
\end{figure}
We show how to draw $s_{uw}$.
\begin{itemize}
\item If a curve $s_a\in S$ exists connecting $p_u$ and $p_w$, then $s_{uw}$ coincides with $s_a$.
\item Suppose that no curve exists in $S$ connecting $p_u$ and $p_w$. If a curve $s_a\in S$ exists connecting $p_w$ and a point in $R$ on $s_{uv}$, then let $q_{uv}$ be the point in $R$ on $s_{uv}$ that is ``closest'' to $u$, i.e., no curve exists connecting $p_w$ and a point $q'_{uv}$ in $R$ such that $q'_{uv}$ lies on the part $s_{quv}$ of $s_{uv}$ between $q_{uv}$ and $u$. Draw $s_{uw}$ as a curve arbitrarily close to the curve $s_a \cup s_{quv}$.
\item Suppose that no curve exists in $S$ connecting $p_w$ and a point in $R$ on $s_{uv}$. If a curve $s_b\in S$ exists connecting $p_w$ and a point in $R$ on $s_{zu}$, then define $s_{quz}$ analogously to $s_{quv}$, and draw $s_{uw}$ as a curve arbitrarily close to the curve $s_b \cup s_{quz}$.
\item Finally, suppose that no curve exists in $S$ connecting $p_w$ with any point in $R$ on $s_{uv}$ or on $s_{zu}$. We claim that $p_w$ and $s_{uv}$ or $p_w$ and $s_{zu}$ are incident to a common face in $cl(\Delta_{uvz})$. In fact, if $p_w$ and $s_{uv}$ are not incident to a common face in $cl(\Delta_{uvz})$, then a curve $s_x$ exists in $S$ connecting a point in $R$ on $s_{vz}$ and a point in $R$ on $s_{zu}$ in such a way that $s_x$ ``separates'' $p_w$ from $s_{uv}$; analogously, if $p_w$ and $s_{zu}$ are not incident to a common face in $cl(\Delta_{uvz})$, then a curve $s_y$ exists in $S$ connecting a point in $R$ on $s_{uv}$ and a point in $R$ on $s_{vz}$ in such a way that $s_y$ ``separates'' $p_w$ from $s_{zu}$; however this implies that $s_x$ and $s_y$ cross, contradicting the assumptions on $S$. Then, say that $p_w$ and $s_{uv}$ are incident to a common face $f$ in $cl(\Delta_{uvz})$. Insert a dummy point $q_{uv}$ in $R$ on $s_{uv}$ incident to $f$ together with a curve $s_c$ connecting $p_w$ and $q_{uv}$ inside $f$. Define $s_{quv}$ as the part of $s_{uv}$ between $q_{uv}$ and $p_u$. Draw $s_{uw}$ as a curve arbitrarily close to the curve $s_c \cup s_{quv}$.
\end{itemize}
We draw $s_{vw}$ and $s_{zw}$ analogously to $s_{uw}$.
The constructed drawing is easily shown to satisfy Properties P1--P5 of the claim. In particular, Property P3 can be shown to be satisfied by $s_{uw}$ as follows (the proof for $s_{vw}$ and $s_{zw}$ is analogous). Consider any curve $s\in S$. If $s_{uw}$ coincides with a curve $s'\in S$, then $s_{uw}$ and $s$ do not cross, given that $s'$ and $s$ do not cross by the assumptions on $S$. Otherwise, $s_{uw}$ is arbitrarily close to $s_a \cup s_{quv}$, for some curve $s_a\in S$ incident to a point in $R$ on $s_{uv}$ or for some curve $s_a$ lying in the interior of a face $f$ of $cl(\Delta_{uvz})$. Hence, if $s$ is incident to a point in $R$ on $s_{quv}$ different from $q_{uv}$, then $s_{uw}$ and $s$ cross exactly once arbitrarily close to $s_{quv}$. Otherwise, either $s_{uw}$ and $s$ share $p_w$ as common endpoint and do not cross at any other point, or $s_{uw}$ and $s$ do not share $p_w$ as common endpoint and do not cross at all, given that $s_a$ and $s$, as well as $s_{quv}$ and $s$, do not cross by the assumptions on $S$.
In the inductive case, $n_{uv}+n_{vz}+n_{zu}>0$. Suppose, w.l.o.g., that $n_{uv}>0$, the other cases being analogous.
We say that a point $p\in P$ is {\em close to $s_{uv}$} if the following condition holds. Let $G(P\cup R,S)$ be the plane graph whose vertices are the points in $P\cup R$ and whose edges are the curves in $S$. See Fig.~\ref{fig:closetouv}(a). Also, let $G(u,v,p)$ be the plane subgraph of $G(P\cup R,S)$ whose vertices are the points in $P\cup R$ and whose edges are: (i) The curves that compose $s_{uv}$, and (ii) every curve in $S$ that is incident to $p$ and to a point on $s_{uv}$ (including $p_u$ and $p_v$). Then $p$ is close to $s_{uv}$ if: (a) It is incident to the same face of $G(P\cup R,S)$ a point on $s_{uv}$ is incident to, and (b) no point in $P$ lies inside any of the bounded faces of $G(u,v,p)$.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c c c}
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv1.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv2.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv3.eps}} \\
(a) \hspace{1mm} & (b) \hspace{1mm} & (c) \\
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv4.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv5.eps}} \hspace{1mm} &
\mbox{\includegraphics[scale=0.41]{Figures/CloseToUv6.eps}} \\
(d) \hspace{1mm} & (e) \hspace{1mm} & (f)
\end{tabular}
\caption{(a) Graph $G(P\cup R,S)$. Point $p$ is close to $s_{uv}$. In fact, $p$ and a point $r_p$ on $s_{uv}$ are incident to the same face of $G(P\cup R,S)$, and the bounded faces of $G(u,v,p)$, which are shown gray, do not contain any point in $P\cup R$. Point $q$ is not close to $s_{uv}$. In fact, although $q$ and a point $r_q$ on $s_{uv}$ are incident to the same face of $G(P\cup R,S)$, the only bounded face of $G(u,v,q)$, which is shown with gray stripes, contains a point in $P$. (b) Removal of $p$ from $G(P\cup R,S)$. The face $f$ in which $p$ used to lie is colored gray. (c) Insertion of dummy vertices $r'_{y-x},r'_{y-x-1},\dots,r'_1$ (small black squares) on edge $(u_1,u_2)$, and insertion of dummy edges $e_1,e_2,\dots,e_{y-x}$ (solid thin lines) inside $f$. (d) Inductively constructed drawings $s_{uw}$, $s_{vw}$, and $s_{zw}$. (e) Reintroduction of $p$ and its incident edges. (f) Restoration of the position of $p$ and of the drawing of its incident edges, while preserving the topology of the drawing. Only the part of $s_{uw}$, $s_{vw}$, and $s_{zw}$ in the interior of $f$ has to be modified for this sake.}
\label{fig:closetouv}
\end{center}
\end{figure}
A point close to $s_{uv}$ always exists. Namely, consider any point $p\in P$ incident to the same face of $G(P\cup R,S)$ a point on $s_{uv}$ is incident to. If no point in $P$ lies inside any of the bounded faces of $G(u,v,p)$, then $p$ is the desired point. Otherwise, a set $F$ of isolated vertices lies inside a bounded face $f$ of $G(u,v,p)$. Among those points, there is a point $p'$ that is incident to the same face of $G(P\cup R,S)$ a point on $s_{uv}$ is incident to. Then, consider $G(u,v,p')$. If no point in $P$ lies inside any of the bounded faces of $G(u,v,p')$, then $p'$ is the desired point. Otherwise, a set $F'$ of isolated vertices lies inside a bounded face $f'$ of $G(u,v,p')$. However, $F'$ is a subset of $F$, thus repeating this argument eventually leads to a point close to $s_{uv}$.
Let $p$ be any point close to $s_{uv}$. Remove $p$ and its incident edges from $G(P\cup R,S)$. Let $f$ be the face of $G(P\cup R,S)$ in which $p$ used to lie and let $C_f$ be the cycle delimiting $f$. Since $p$ is close to $s_{uv}$, the vertices in $R$ lying on $s_{uv}$ appear consecutively along $C_f$. Denote by $u_1,u_2,\dots,u_y$ the clockwise order of the vertices along $C_f$, where $u_1,u_2,\dots,u_x$ are the vertices in $R$ on $s_{uv}$ in order from $p_u$ to $p_v$. See Fig.~\ref{fig:closetouv}(b). It holds $x\geq 2$, given that $p$ is incident to the same face of $G(P\cup R,S)$ a point on $s_{uv}$ is incident to, and given that every point in $R$ has exactly one incident curve in $S$.
Insert $y-x$ dummy vertices $r'_{y-x},r'_{y-x-1},\dots,r'_1$ in $R$ in this
order on edge $(u_1,u_2)$. Insert dummy edges $e_1,e_2,\dots,e_{y-x}$ inside $f$
from $u_{x+i}$ to $r'_i$, for every $1\leq i\leq y-x$. See
Fig.~\ref{fig:closetouv}(c). Inductively draw $s_{uw}$, $s_{vw}$, and $s_{zw}$
so that Properties P1--P5 of the claim are satisfied, where Property P4 ensures
that $\Delta_{uvw}$, $\Delta_{vzw}$, and $\Delta_{zuw}$ contain $n_{uv}-1$,
$n_{vz}$, and $n_{zu}$ points of $P\setminus\{p\}$ in their interior,
respectively. See Fig.~\ref{fig:closetouv}(d).
Introduce $p$ in a point arbitrarily close to edge $(u_1,u_2)$. Reintroduce the edges incident to $p$ as follows. Draw curves connecting $p$ and its neighbors among $u_1,u_2,\dots,u_x$ inside $f$ and arbitrarily close to $s_{uv}$. Also, for each neighbor $u_{x+i}$ of $p$ with $1\leq i\leq y-x$, draw a curve connecting $p$ and $u_{x+i}$ as composed of two curves, the first one arbitrarily close to $s_{uv}$, the second one coinciding with part of the edge $e_i$. Remove dummy vertices $r'_1,r'_2,\dots,r'_{y-x}$ and dummy edges $e_1,e_2,\dots,e_{y-x}$ from the drawing. See Fig.~\ref{fig:closetouv}(e). Finally, restore the placement of $p$ and the drawing of its incident edges. In order to do so while maintaining the topology of the drawing (i.e. the number and order of the crossings along each edge), curves $s_{uw}$, $s_{vw}$, and $s_{zw}$ have to be modified in the interior of $f$. See Fig.~\ref{fig:closetouv}(f).
The constructed drawing of $s_{uw}$, $s_{vw}$, and $s_{zw}$ satisfies Properties P1--P5 of the claim as shown in the following.
\begin{itemize}
\item Property P1 directly comes from induction.
\item We prove Property P2. Consider any curve $s\in S$. We prove that, if $s$ connects $p_u$ and $p_w$, then $s_{uw}$ coincides with $s$. Suppose first that $s$ is incident to $p$. By construction, $p\neq p_u,p_w$, hence in this case there is nothing to prove. Suppose next that $s$ is not incident to $p$. By induction, if $s$ connects $p_u$ and $p_w$, then $s_{uw}$ coincides with $s$ before restoring the position of $p$ and the drawing of its incident edges. Moreover, the only part of the drawing of $s_{uw}$ that can be possibly modified in order to restore the position of $p$ and the drawing of its incident edges is the one lying in the interior of $f$. However, if $s_{uw}$ coincides with $s$, then no part of it lies in the interior of $f$, hence $s_{uw}$ still coincides with $s$ after the modification. It can be analogously proved that if $s$ connects $p_v$ and $p_w$, or $p_z$ and $p_w$, then $s_{vw}$ or $s_{zw}$ coincides with $s$, respectively.
\item We prove Property P3. Consider any curve $s\in S$ and assume that $s$ does not coincide with $s_{uw}$. We prove that $s$ and $s_{uw}$ cross at most once. Since restoring the position of $p$ and the drawing of its incident edges does not alter the number of crossings between $s$ and $s_{uw}$, it suffices to prove that they cross at most once when $p$ and its incident edges are first reintroduced inside $f$. Suppose first that $s$ is not incident to $p$. Then, by induction $s$ and $s_{uw}$ cross at most once. Suppose next that $s$ is incident to $p$ and to a point $u_i$, for some $1\leq i\leq y$. If $1\leq i\leq x$, then $s$ is arbitrarily close to $s_{uv}$, hence it does not cross $s_{uw}$ at all. If $x+1\leq i\leq y$, then $s$ is composed of two curves, the first one arbitrarily close to $(u_1,u_2)$ (hence, such a curve does not cross $s_{uw}$ at all), the second one coinciding with part of edge $e_i$ (hence such a curve crosses $s_{uw}$ at most once, by induction). It can be analogously proved that if $s$ does not coincide with $s_{vw}$ or with $s_{zw}$, then $s$ and $s_{vw}$ or $s$ and $s_{zw}$ cross at most once, respectively.
\item In order to prove Property P4, it suffices to observe that by induction $\Delta_{uvw}$, $\Delta_{vzw}$, and $\Delta_{zuw}$ contain in their interior subsets of $P\setminus \{p\}$ with $n_{uv}-1$ points, with $n_{vz}$ points, and with $n_{zu}$ points, respectively, and that by construction $p$ lies in $\Delta_{uvw}$, which hence contains $n_{uv}$ points of $P$.
\item We prove Property P5. Consider any curve $s\in S$. If $s$ is not incident to $p$, then it satisfies the property by induction. Otherwise, $s$ is incident to $p$, hence it has at least one of its end-points inside $cl(\Delta_{uvw})$. Thus, we only need to show that, if its second end-point is inside $cl(\Delta_{uvw})$, then $s$ is entirely contained in $cl(\Delta_{uvw})$. Since $s$ crosses each of $s_{uw}$ and $s_{vw}$ at most once, it follows that $s$ is not entirely contained in $cl(\Delta_{uvw})$ if and only if it crosses each of $s_{uw}$ and $s_{vw}$ {\em exactly} once. Suppose that $s$ connects $p$ with a point $u_i$. If $1\leq i\leq x$, then $s$ is arbitrarily close to $s_{uv}$, hence it does not cross $s_{uw}$ nor $s_{vw}$ at all. If $x+1\leq i\leq y$, then $s$ is composed of a curve arbitrarily close to $(u_1,u_2)$, which does not cross $s_{uw}$ nor $s_{vw}$ at all, and of a curve coinciding with part of edge $e_i$. If the latter curve crosses both $s_{uw}$ and $s_{vw}$, then $e_i$ would have both its end-points inside $cl(\Delta_{uvw})$ and still would not entirely lie inside $cl(\Delta_{uvw})$, which is not possible by induction.
\end{itemize}
This concludes the proof of the claim and of the lemma.
\end{proof}
Fig.~\ref{fig:plane3treesdrawing}(b) shows a planar drawing of $G_2$ satisfying
the properties of Lemma~\ref{le:splittingpoints}. Lemma~\ref{le:splittingpoints}
implies a proof of Theorem~\ref{th:plane3trees-main}. Namely, construct any
planar drawing $\Gamma_1$ of $G_1$. Denote by $P$ the point set to which the
$n-3$ internal vertices of $G_1$ are mapped in $\Gamma_1$. Let $s_{uv}$,
$s_{vz}$, and $s_{zu}$ be the curves representing edges $(u_1,v_1)$,
$(v_1,z_1)$, and $(z_1,u_1)$ in $\Gamma_1$, respectively. Let $S$ be the set of
curves representing the internal edges of $G_1$ in $\Gamma_1$. Let $p_u$,
$p_v$, and $p_z$ be the points on which $u_1$, $v_1$, and $z_1$ are drawn,
respectively. Let $R=\{p_u, p_v, p_z\}$. Construct a planar drawing $\Gamma_2$
of $G_2$ satisfying the properties of Lemma~\ref{le:splittingpoints}. Then,
$\Gamma_1$ and $\Gamma_2$ are planar drawings of $G_1$ and $G_2$, respectively.
By Properties (a) and (c) of Lemma~\ref{le:splittingpoints}, the $n$ vertices of
$G_2$ are mapped to the same $n$ points to which the vertices of $G_1$ are
mapped. Finally, by Properties (b) and (d) of Lemma~\ref{le:splittingpoints}, if
edges $e_1$ of $G_1$ and $e_2$ of $G_2$ have their end-vertices mapped to the
same two points $p_a,p_b\in P \cup\{p_u, p_v, p_z\}$, then $e_1$ and $e_2$ are
represented by the same Jordan curve in $\Gamma_1$ and in $\Gamma_2$; hence,
$\Gamma_1$ and $\Gamma_2$ are a {\sc SefeNoMap~} of $G_1$ and $G_2$.
\section{Conclusions} \label{se:conclusions}
In this paper we studied the problem of determining the largest $k_1\leq n$ such that every $n$-vertex planar graph and every $k_1$-vertex planar graph admit a {\sc SefeNoMap~}. We proved that $k_1\geq n/2$. No upper bound smaller than $n$ is known. Hence, tightening this bound (and in particular proving whether $k_1=n$ or not) is a natural research direction.
To achieve the above result, we proved that every $n$-vertex plane graph has an $(n/2)$-vertex induced outerplane graph, a result related to a famous conjecture stating that every planar graph contains an induced forest with half of its vertices~\cite{ab-cpg-79}. A suitable triangulation of a set of nested $4$-cycles shows that $n/2$ is a tight bound for our algorithm, up to a constant. However, we have no example of an $n$-vertex plane graph whose largest induced outerplane graph has fewer than $2n/3$ vertices (a triangulation of a set of nested $3$-cycles shows that the $2n/3$ bound cannot be improved). The following question arises: What are the largest $k_2$ and $k_3$ such that every $n$-vertex plane graph has an induced outerplane graph with $k_2$ vertices and an induced outerplanar graph with $k_3$ vertices? Any bound $k_2>n/2$ would improve our bound for the {\sc SefeNoMap~} problem, while any bound $k_3>3n/5$ would improve the best known bound for Conjecture~\ref{conj:induced-forests}, via the results in~\cite{h-iftog-90}.
A different technique to prove that every $n$-vertex planar graph and every $k_4$-vertex planar graph have a {\sc SefeNoMap~} is to ensure that a mapping between their vertex sets exists that generates no shared edge. Thus, we ask: What is the largest $k_4\leq n$ such that an injective mapping exists from the vertex set of any $k_4$-vertex planar graph to the vertex set of any $n$-vertex planar graph generating no shared edge? It is easy to see that $k_4\geq n/4$ (a consequence of the four color theorem~\cite{ah-epfci-77,ahk-epfcii-77}) and that $k_4\leq n-5$ (an $n$-vertex planar graph with minimum degree $5$ does not admit such a mapping with an $(n-4)$-vertex planar graph having a vertex of degree $n-5$).
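The lower bound $k_4\geq n/4$ rests on a pigeonhole step: properly four-color the $n$-vertex host graph, take the largest color class (an independent set with at least $n/4$ vertices), and map the guest vertices onto it, so that no host edge joins two image points. The following Python sketch sanity-checks this step on a small instance; the octahedron host and the brute-force coloring are our own illustrative choices, not part of the argument.

```python
from itertools import combinations, product

def proper_coloring(n, edges, k=4):
    """Brute-force a proper k-coloring; the four color theorem guarantees
    one exists with k=4 when the graph is planar."""
    for col in product(range(k), repeat=n):
        if all(col[u] != col[v] for u, v in edges):
            return col
    return None

# Host: the octahedron K_{2,2,2} (planar, 6 vertices, 12 edges).
n = 6
non_edges = ({0, 1}, {2, 3}, {4, 5})
edges = [(u, v) for u, v in combinations(range(n), 2) if {u, v} not in non_edges]

col = proper_coloring(n, edges)
classes = [[v for v in range(n) if col[v] == c] for c in range(4)]
big = max(classes, key=len)
assert 4 * len(big) >= n      # pigeonhole: some color class has >= n/4 vertices
# `big` is an independent set of the host, so mapping the guest vertices
# onto it produces no shared edge.
assert all(tuple(sorted(e)) not in edges for e in combinations(big, 2))
print(len(big))
```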
Finally, it would be interesting to study the geometric version of our problem. That is: What is the largest $k_5\leq n$ such that every $n$-vertex planar graph and every $k_5$-vertex planar graph admit a geometric simultaneous embedding with no mapping? Surprisingly, we are not aware of any super-constant lower bound for the value of $k_5$.
\section{Introduction}
Recent years have witnessed a great success of supervised learning in various pattern recognition problems, such as image classification, object detection, and speech recognition. Standard supervised learning relies heavily on the {\it i.i.d.} data assumption; however, dataset-bias is unavoidable in many situations due to selection bias or mechanism changes. For example, this problem has been well recognized in the computer vision community \cite{torralba2011unbiased,khosla2012undoing}: the widely adopted vision datasets have their special properties and are not representative of the visual world. In medical diagnosis, the distribution of cell types varies from patient to patient, and we need to train a classifier on the data collected from previous patients that generalizes well to unseen patients \cite{blanchard2011generalizing,muandet9}. These problems are known as domain generalization, in which the training set consists of data from heterogeneous source domains, say patients, and the test data distribution is different from that of the training data.
To handle the distribution changes, many existing domain generalization methods aim to learn domain-invariant representations that have stable distributions across all source domains \cite{muandet9,erfani23,ghifary8}. The learned invariant representations are expected to generalize well to any unseen test set under the assumption that the changes of distribution across source and test domains are caused by some common factors whose effects are removed in the invariant representations. In computer vision, such factors could be illumination, camera viewpoints, and backgrounds. These methods have achieved good performance in computer vision \cite{ghifary15,ghifary8} and medical diagnosis \cite{muandet9}.
However, existing methods that learn domain-invariant representations assume that only $\mathbb{P}(X)$ changes across domains while the conditional distribution $\mathbb{P}(Y|X)$ is rather stable. Thus, the conditional distribution $\mathbb{P}(Y|h(X))$ is also invariant, and the learning problem reduces to ensuring that the marginal distribution $\mathbb{P}(h(X))$ is invariant across domains. This assumption greatly simplifies the problem, but it is unclear whether it holds in practical situations. According to some recent results in causal learning \cite{scholkopf16,janzing17}, $\mathbb{P}(Y|X)$ can be stable when $\mathbb{P}(X)$ changes in the situation where $X$ is the cause for $Y$, i.e., the causal structure is $X\rightarrow Y$. This is because the mechanism that generates the cause, i.e., $\mathbb{P}(X)$, is not coupled with the mechanism that generates the effect from the cause, i.e., $\mathbb{P}(Y|X)$, and not vice versa. That is to say, if $Y$ is the cause and $X$ is the effect, $\mathbb{P}(X)$ often changes together with $\mathbb{P}(Y|X)$. In this situation, if $\mathbb{P}(X)$ changes, it is very likely that $\mathbb{P}(Y|X)$ also changes across domains, which violates the assumption that $\mathbb{P}(Y|X)$ is stable. In practice, we have plenty of problems where the causal structure is $Y\rightarrow X$. For example, in face recognition, $Y$ is the person identity, $X$ is the image feature, and $\theta$ is the viewpoint. If we consider each viewpoint as a domain, then in each domain we have the conditional distribution $\mathbb{P}(X|Y,\theta=\theta_i)$. According to Bayes' theorem, $\mathbb{P}(Y|X,\theta=\theta_i) = \mathbb{P}(X|Y,\theta=\theta_i)\mathbb{P}(Y|\theta=\theta_i)/\mathbb{P}(X|\theta=\theta_i)$, so $\mathbb{P}(Y|X,\theta=\theta_i)$ changes across domains. This conflicts with the previous assumption that $\mathbb{P}(Y|X)$ remains unchanged. There are also other examples, e.g., speaker recognition and person re-identification \cite{yang2017enhancing}.
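The Bayes-rule argument above can be made concrete with a toy numeric example (the distributions below are hypothetical, chosen only for illustration): keeping the class prior fixed while the generating mechanism $\mathbb{P}(X|Y)$ shifts with the domain makes the posterior $\mathbb{P}(Y|X)$ shift as well.

```python
import numpy as np

def posterior(p_x_given_y, p_y):
    """P(Y|X) from P(X|Y) and P(Y) via Bayes' theorem."""
    joint = p_x_given_y * p_y[:, None]   # P(X, Y); rows index Y, columns index X
    return joint / joint.sum(axis=0)     # divide by the evidence P(X)

p_y = np.array([0.5, 0.5])               # class prior, fixed across domains
# P(X|Y) under two domain parameters theta_1, theta_2 (rows: Y, cols: X)
pxy_1 = np.array([[0.9, 0.1], [0.2, 0.8]])
pxy_2 = np.array([[0.6, 0.4], [0.2, 0.8]])

print(posterior(pxy_1, p_y)[0])          # P(Y=0 | X) in domain 1
print(posterior(pxy_2, p_y)[0])          # P(Y=0 | X) in domain 2: different
```

Even though $\mathbb{P}(Y)$ is identical in both domains, the two printed posteriors differ, illustrating why stability of $\mathbb{P}(Y|X)$ cannot be taken for granted when $Y$ causes $X$.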
In this paper, we assume both $\mathbb{P}(X)$ and $\mathbb{P}(Y|X)$ change across domains. We aim to find a feature transformation $h(X)$ that has invariant class-conditional distribution $\mathbb{P}(h(X)|Y)$. To achieve so, we propose to minimize two regularization terms that enforce distribution invariance across source domains. The first term measures the variance of each class-conditional distribution across all source domains and then sums up the variances for all classes. The second term is the variance of class prior-normalized marginal distribution $\mathbb{P}_N(h(X))$, which measures the global distribution discrepancy. The normalization of class priors is introduced to remove the effects brought by possible changes in $\mathbb{P}(Y)$ across source domains. If the prior distribution $\mathbb{P}(Y)$ does not change across source domains, the second term reduces to the common technique used in existing domain-invariant representation learning methods \cite{muandet9,ghifary8}. To preserve the discriminative power of the learned representation, we also incorporate the intra-class and inter-class distances used in kernel Fisher discriminant analysis (FDA)\cite{mika1999fisher}. \par
Compared to existing domain-invariant representation learning methods, our method does not require the assumption of stable $\mathbb{P}(Y|X)$ by exploiting the labels on the source domains which were overlooked in the previous methods. Especially, if the prior distribution $\mathbb{P}(Y)$ on the test sets is the same as that on the training set containing all source domains, our method is able to learn representations $h(X)$ that have invariant joint distribution $\mathbb{P}(h(X),Y)$ across all domains. We conduct a series of experiments on both synthetic and real data, and the results demonstrate the effectiveness of our method.
\section{Related Work}
Domain generalization has been widely applied in classification tasks \cite{xu13,duan14,muandet9,ghifary8,ghifary15,erfani23}. Compared with standard supervised learning, domain generalization methods aim to reduce data bias across different domains and improve the generalization of the learned model to unseen but related domains. For example, \cite{xu13} assumed that positive samples within the same shared latent domain should have similar likelihoods and proposed to exploit the low-rank structure of latent domains for domain generalization. \cite{muandet9} proposed domain-invariant component analysis (DICA), which learns an invariant feature representation $h(X)$ by minimizing the difference between the marginal distributions $\mathbb{P}(h(X))$. \cite{ghifary8} proposed a unified framework called scatter component analysis for domain adaptation and domain generalization, which combines domain scatter \cite{muandet9}, kernel PCA \cite{scholkopf1998nonlinear}, and kernel FDA \cite{mika1999fisher} in a single objective function. However, all these methods assume that the distributions of different domains differ only in the marginal distribution $\mathbb{P}(X)$ while the conditional distribution $\mathbb{P}(Y|X)$ remains stable across domains. This assumption simplifies the domain generalization problem, but it is easily violated in real-world applications. \par
Domain adaptation is a related problem which has been extensively studied in the literature \cite{6751205,Huang07,Pan11,pmlr-v70-long17a,shao2014generalized,shao2016spectral,luogeneral,liu2017understanding}. Assuming that only $\mathbb{P}(X)$ changes, the distribution changes can be corrected by importance reweighting \cite{Huang07} or domain-invariant feature learning \cite{Pan11,6751205}, using unlabeled data from source and target domains. Recently, several works have considered the situation where both $\mathbb{P}(X)$ and $\mathbb{P}(Y|X)$ change across domains \cite{zhang11,gong10,pmlr-v70-long17a}. \cite{zhang11} and \cite{gong10} proposed to consider the domain adaptation problem in the generalized target shift (GeTarS) scenario where the causal direction is $Y \rightarrow X$. In this scenario, both the change of the prior distribution $\mathbb{P}(Y)$ and of the conditional distribution $\mathbb{P}(X|Y)$ are considered to reduce the data bias across domains. \cite{zhang11} assumed that features from source domains can be transferred to the target domain by a location-scale transformation, which is restrictive in real-world applications because of the presence of noise in features. \cite{gong10} proposed to learn components whose conditional distribution $\mathbb{P}(h(X)|Y)$ is invariant across domains and to estimate the target label distribution $\mathbb{P}^t(Y)$ from labeled source domain data and unlabeled target domain data. Since there are no labels in the target domain to match class-conditionals, the invariance of $\mathbb{P}(h(X)|Y)$ is achieved by minimizing the discrepancy of the marginal distribution $\mathbb{P}(h(X))$ under some untestable assumptions. \cite{pmlr-v70-long17a} proposed an iterative way to match the conditionals by using the predicted labels from previous iterations as pseudo labels. Different from domain adaptation methods, domain generalization does not require unlabeled data from the target domains.
\section{Conditional Invariant Domain Generalization}
In this section, we first establish the basic notations of domains and formally introduce the definition of domain generalization. Then we give a detailed description of the proposed conditional invariant domain generalization (CIDG) method.
\subsection{Problem Definition}
Denote $\mathcal{X}$ and $\mathcal{Y}$ as the input feature and label spaces, respectively. A domain defined on $\mathcal{X} \times \mathcal{Y}$ can be represented by a joint probability distribution $\mathbb{P}(X,Y)$. For simplicity, we denote the joint probability distribution $\mathbb{P}^s(X,Y)$ of the $s$-th source domain as $\mathbb{P}^s$. The domain $\mathbb{P}^s$ is associated with a sample $D_s = \{x_i^s, y_i^s\}_{i=1}^{n^s}$, where $(x_i^s,y_i^s) \sim \mathbb{P}^s$ and $n^s$ denotes the sample size of the domain $\mathbb{P}^s$. Then we can define domain generalization as follows. \par
\begin{defi}[Domain Generalization]
Given multiple related source domains $\Omega = \{\mathbb{P}^1,\mathbb{P}^2,...,\mathbb{P}^m\}$, where each domain is associated with a sample $D_s=\{x_i^s,y_i^s\}_{i=1}^{n^s} \sim \mathbb{P}^s$, $s \in \{1,2,\ldots,m\}$, the goal of domain generalization is to learn a classification function $f: \mathcal{X} \rightarrow \mathcal{Y}$ from the source domain datasets $\{D_s\}_{s=1}^m$ and apply it to an unseen but related target domain $\mathbb{P}^t(X,Y)$.
\end{defi}
\subsection{Kernel Mean Embedding}
Before introducing the proposed method, we briefly review the kernel mean embedding of distributions, which is an important mathematical tool to represent and compare distributions \cite{song2013kernel,sriperumbudur2010hilbert}. Let $\mathcal{H}$ denote a characteristic reproducing kernel Hilbert space (RKHS) on $\mathcal{X}$ associated with a kernel $k(\cdot,\cdot): \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$, and $\phi$ be an associated mapping such that $\phi(x) \in \mathcal{H}$. Suppose we have two observations $x_1^s \in \mathcal{X}$ and $x_2^s \in \mathcal{X}$ from domain $s$, then we have $\left<\phi(x_1^s),\phi(x_2^s)\right> = k(x_1^s,x_2^s)$. The kernel embedding of a distribution $\mathbb{P}(X)$ can be formulated as the following:
\begin{equation}
\mu_{\mathbb{P}_X}:=E_{X\sim \mathbb{P}_X}[\phi(X)]=E_{X\sim \mathbb{P}_X}[k(X,\cdot)],
\end{equation}
where $\mathbb{P}_X$ denotes $\mathbb{P}(X)$ for simplicity. If a kernel is characteristic, the mean embedding $\mu_{\mathbb{P}_X}$ is injective, so all the information about the distribution is preserved \cite{sriperumbudur2010hilbert}. The kernel embedding cannot be computed directly and is usually estimated from observations. Given a sample $D = \{x_i\}_{i=1}^{n}$, where $n$ is the sample size, the kernel embedding can be empirically estimated as follows:
\begin{equation}\label{empirical}
\hat{\mu}_{\mathbb{P}_X} = \frac{1}{n} \sum \limits_{i=1}^{n} \phi(x_i)=\frac{1}{n}\sum \limits_{i=1}^{n} k(x_i,\cdot).
\end{equation}
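In code, the empirical embedding (\ref{empirical}) never needs to be formed explicitly: RKHS inner products between embeddings reduce to averages of kernel evaluations. The following numpy sketch (illustrative only; the RBF kernel and the bandwidth $\sigma$ are our own choices) computes the squared RKHS distance $\|\hat{\mu}_{\mathbb{P}_1} - \hat{\mu}_{\mathbb{P}_2}\|_{\mathcal{H}}^2$ between the embeddings of two samples, which is the basic quantity appearing inside all the scatter terms of this paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def embedding_distance_sq(X1, X2, sigma=1.0):
    """Squared RKHS distance ||mu_P1 - mu_P2||_H^2 between the empirical
    kernel mean embeddings of two samples; expanding the norm turns each
    inner product <mu_P1, mu_P2> into a mean of kernel evaluations."""
    k11 = rbf_kernel(X1, X1, sigma).mean()
    k22 = rbf_kernel(X2, X2, sigma).mean()
    k12 = rbf_kernel(X1, X2, sigma).mean()
    return k11 + k22 - 2.0 * k12
```

The distance is zero when the two samples coincide and grows as the underlying distributions drift apart.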
\subsection{Proposed Approach}
The proposed conditional invariant domain generalization (CIDG) method aims to find a conditional invariant representation $h(X)$ (a linear transformation of the original features) that reduces the variance of the conditional distribution $\mathbb{P}(h(X)|Y)$ across source domains. Suppose we can learn a perfect conditional invariant representation $h(X)$, which satisfies $\mathbb{P}^{s=i}(h(X)|Y)=\mathbb{P}^{s=j}(h(X)|Y)=\mathbb{P}^t(h(X)|Y)$, where $i,j \in \{1,2,...,m\}$ and $\mathbb{P}^t$ denotes the target domain. We can then gather all the source domains into a single new domain with joint distribution $\mathbb{P}^t(h(X)|Y)\mathbb{P}^{\text{new}}(Y)$. Therefore, under the condition $\mathbb{P}^{\text{new}}(Y)=\mathbb{P}^t(Y)$, the learned $h(X)$ has an invariant joint distribution across training and test domains. In contrast, previous methods can only guarantee that $\mathbb{P}(h(X))$ is invariant, while whether $\mathbb{P}(Y|h(X))$ is invariant remains unknown. If $\mathbb{P}^{\text{new}}(Y)$ is different from $\mathbb{P}^t(Y)$, our method cannot guarantee the invariance of the joint distribution either. Nevertheless, our method can at least guarantee invariant class-conditional distributions, which is still better than previous methods, because $\mathbb{P}(Y|h(X))$ is usually not very sensitive to changes in the prior $\mathbb{P}(Y)$ if $h(X)$ is highly correlated with $Y$.
The learning of conditional invariant representations is achieved mainly through two regularization terms: the total scatter of class-conditional distributions and the scatter of class prior-normalized marginal distributions. The first term measures the variance of $\mathbb{P}(h(X)|Y)$ locally, while the second measures it globally. In addition to these two terms, we incorporate terms that measure the discriminative power of the representation $h(X)$, as done in previous works. By minimizing the distribution variance across domains and maximizing the discriminative power in one objective function, we obtain a conditional invariant representation that is predictive of the labels on unseen target domains. \par
\subsubsection{Total scatter of class-conditional distributions}
Suppose we have $m$ related domains $\{\mathbb{P}^1,\mathbb{P}^2,...,\mathbb{P}^m\}$ on $\mathcal{X} \times \mathcal{Y}$. The marginal distribution on $\mathcal{X}$ of the $s$-th domain is denoted as $\mathbb{P}^s_X$. Suppose the class labels of each domain vary from $1$ to $C$. For simplicity, the $j$-th class conditional distribution $\mathbb{P}^s(X|Y=j)$ of the $s$-th domain is denoted as $\mathbb{P}_j^s$.
The total scatter of class-conditional distributions across domains can be formulated as:
\begin{equation}
\Psi\left(\{\mu_{\mathbb{P}_1^1}, \mu_{\mathbb{P}_2^1},...,\mu_{\mathbb{P}_C^m} \}\right) = \sum \limits_{j=1}^{C}\frac{1}{m} \sum \limits_{s=1}^m \| \mu_{\mathbb{P}_j^s} - \overline{\mu}_j \|_{\mathcal{H}}^2,
\end{equation}
where $\overline{\mu}_j = \frac{1}{m} \sum \limits_{s=1}^m \mu_{\mathbb{P}_j^s}$ and $\frac{1}{m}\sum \limits_{s=1}^m \| \mu_{\mathbb{P}_j^s} - \overline{\mu}_j \|_{\mathcal{H}}^2$ is called the domain scatter \cite{ghifary8} or distributional variance \cite{muandet9}. Instead of measuring the domain scatter w.r.t. the marginal distributions $\mathbb{P}^s_X$ as done in previous works like \cite{ghifary8}, we measure the domain scatter w.r.t. each class-conditional distribution and then sum them together.
Before introducing the computation of the above scatter, we first give the formulation of the learned feature transformation. Denote by $\bm{X} = [x_1, x_2,...,x_{n}]^{\top} \in \mathbb{R}^{n \times d}$ the data matrix of samples from all $m$ source domains, where $d$ is the dimension of the feature space $\mathcal{X}$ and $n= \sum \nolimits_{s=1}^m n^s$. Define $\bm{\Phi}=[\phi(x_1),\phi(x_2),...,\phi(x_n)]^{\top}$ with the feature map $\phi: \mathbb{R}^d \rightarrow \mathcal{H}$. We aim to find a linear transformation $\bm{W}: \mathcal{H} \rightarrow \mathbb{R}^q$ that maps $\mathcal{H}$ into a finite-dimensional subspace, that is, $h(x)=\bm{W}^\top\phi(x)$. Following kernel principal component analysis (KPCA) \cite{scholkopf1998nonlinear}, the linear transformation can be formulated as a linear combination of $\bm{\Phi}$, i.e., $\bm{W} = \bm{\Phi}^{\top}\bm{B}$, where $\bm{B} \in \mathbb{R}^{n\times q}$ is the coefficient matrix. With this representation, we can avoid explicitly computing the feature map $\phi$ and use the kernel trick instead.\par
For simplicity, denote $\Psi\left(\{\mu_{\mathbb{P}_1^1}, \mu_{\mathbb{P}_2^1},...,\mu_{\mathbb{P}_C^m} \}\right)$ as $\Psi^{con}$,
\begin{equation}
\begin{aligned}
\Psi^{con} &= \frac{1}{m} \sum \limits_{s=1}^m \sum \limits_{j=1}^{C} \| \mu_{\mathbb{P}_j^s} - \overline{\mu}_j \|_{\mathcal{H}}^2 \\
&= \frac{1}{m} \sum \limits_{s=1}^m \sum \limits_{j=1}^{C} Tr\left((\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )^{\top}\right) \\
&= Tr\left(\frac{1}{m} \sum \limits_{s=1}^m \sum \limits_{j=1}^{C}(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )^{\top} \right),
\end{aligned}
\end{equation}
where $Tr(\cdot)$ is the trace operator. To measure the scatter of the distributions $\mathbb{P}(h(X)|Y)$, we apply the linear feature transformation $\bm{W}$ to the above scatter and obtain
\begin{flalign}\label{con}
& \Psi_{\bm{B}}^{con} \nonumber\\
&= Tr\left(\frac{1}{m} \sum \limits_{s=1}^m \sum \limits_{j=1}^{C}\bm{B}^{\top}\bm{\Phi}(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )^{\top} \bm{\Phi}^{\top}\bm{B} \right) \nonumber\\
&= Tr\left(\bm{B}^{\top}\bm{H}\bm{B}\right),
\end{flalign}
where $\bm{H}$ is:
\begin{equation}\label{eq:H}
\bm{H} = \sum \limits_{s=1}^m \frac{1}{m}\sum \limits_{j=1}^{C} \bm{\Phi}(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )(\mu_{\mathbb{P}_j^s} -\overline{\mu}_j )^{\top} \bm{\Phi}^{\top},
\end{equation}
in which $\mu_{\mathbb{P}_j^s}$ and $\overline{\mu}_j$ can be computed according to the empirical estimation shown in equation (\ref{empirical}). Denote by $x_{k \sim j}^{s}$ the $k$-th sample belonging to the $j$-th class in the $s$-th domain, where $s \in \{1, 2,...,m\}$ and $j \in \{1,2,...,C\}$. Letting $n_j^s$ denote the sample size of the $j$-th class in the $s$-th domain, we have:
\begin{equation}
\begin{aligned}
\hat{\mu}_{\mathbb{P}_j^{s}} &= \frac{1}{n^s_j} \sum \limits_{k=1}^{n^s_j} \phi(x^s_{k \sim j}), ~
\hat{\overline{\mu}}_j = \frac{1}{m} \sum \limits_{s=1}^{m} \hat{\mu}_{\mathbb{P}_j^{s}},
\end{aligned}
\end{equation}
where $k\sim j$ denotes the indices of examples in the $j$-th class.
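Empirically, $\bm{\Phi}\hat{\mu}_{\mathbb{P}_j^s}$ equals $\bm{K}\bm{a}_j^s$, where $\bm{K}$ is the kernel matrix over all source samples and $\bm{a}_j^s$ puts weight $1/n_j^s$ on the samples of class $j$ in domain $s$, so $\bm{H}$ in (\ref{eq:H}) can be assembled from kernel evaluations only. A minimal numpy sketch (illustrative, not the authors' implementation; every class is assumed to appear in every domain):

```python
import numpy as np

def class_conditional_scatter(K, y, d):
    """Estimate H = (1/m) sum_s sum_j  K (a_js - abar_j)(a_js - abar_j)^T K,
    where K @ a_js equals Phi @ mu_hat_{P_j^s} from the text.
    K: (n, n) kernel matrix, y: class labels, d: domain labels."""
    n = K.shape[0]
    domains, classes = np.unique(d), np.unique(y)
    m = len(domains)
    H = np.zeros((n, n))
    for j in classes:
        a = []
        for s in domains:
            mask = (y == j) & (d == s)          # samples of class j, domain s
            v = np.zeros(n)
            v[mask] = 1.0 / mask.sum()          # weight 1/n_j^s
            a.append(v)
        abar = np.mean(a, axis=0)               # corresponds to mu_bar_j
        for v in a:
            u = K @ (v - abar)
            H += np.outer(u, u) / m
    return H
```

If the per-class samples of two domains coincide, the class means agree and $\bm{H}$ vanishes, which is the invariance the regularizer rewards.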
\subsubsection{Scatter of class prior-normalized marginal distributions}
The scatter of each class-conditional distribution is estimated locally using the samples from that class. When the number of examples in each class is small, optimizing (\ref{con}) can easily overfit the data. To further improve the estimation accuracy, we propose another regularization term which measures the scatter of class prior-normalized marginal distributions. This term captures the global distance between all class-conditionals. In the $s$-th domain, the marginal distribution is defined as
\begin{equation}
\begin{aligned}
\mathbb{P}^s(X) = \sum_{j=1}^C \mathbb{P}^s(X|Y=j)\mathbb{P}^s(Y=j).
\end{aligned}
\end{equation}
If the class prior distribution $\mathbb{P}(Y)$ does not change across domains and we can find a feature representation whose class-conditional $\mathbb{P}(h(X)|Y)$ is invariant across source domains, then $\mathbb{P}(h(X))$ is also domain-invariant, but not vice versa. Nevertheless, searching for a representation that reduces the discrepancy between the marginal distributions can to some extent reduce the discrepancy between the class-conditional distributions, though the original purpose was to match marginal distributions only \cite{muandet9,ghifary8}. However, if the class prior changes across source domains, the above statements no longer hold. That is, even if the class conditionals are domain-invariant, the marginal distributions are not invariant because of the changes in $\mathbb{P}(Y)$. To mitigate this issue, we propose to match the class prior-normalized marginal distribution, which is defined as follows:
\begin{equation}
\mathbb{P}^s_N(X) = \sum_{j=1}^C \mathbb{P}^s(X|Y=j)\frac{1}{C}.
\end{equation}
It can be seen that the class prior-normalized marginal distribution enforces the same prior probability for each class. Therefore, the changes in the prior distribution across source domains are adjusted for, which guarantees that the prior-normalized marginal distribution is domain-invariant when the class conditionals are invariant. By embedding the class prior-normalized marginal distribution into a Hilbert space, the scatter of the normalized marginal distribution across domains can be formulated as:
\begin{equation} \label{prior}
\begin{aligned}
\Psi^{prior} &= \frac{1}{m} \sum \limits_{s=1}^{m} \| \overline{\mu}_N - \mu_{\mathbb{P}^s_{N}} \|_{\mathcal{H}}^2,
\end{aligned}
\end{equation}
where $\mu_{\mathbb{P}_N^s} = E_{x\sim \mathbb{P}^s_{N}}[\phi(x)]$, and $\mathbb{P}^s_N$ is the prior-normalized marginal distribution of the $s$-th domain. $\overline{\mu}_N = \frac{1}{m} \sum \nolimits_{s=1}^m \mu_{\mathbb{P}^s_N}$ is the kernel mean of the class prior-normalized marginal distribution $\mathbb{P}_{N}$ of all domains. To learn the domain-invariant representation, we apply the linear feature transformation $\bm{W}$ to the above scatter, resulting in:
\begin{flalign}\label{prior1}
&\Psi^{prior}_{\bm{B}} \nonumber\\
&= Tr\left(\frac{1}{m} \sum \limits_{s=1}^m \bm{B}^{\top} \bm{\Phi} (\overline{\mu}_N - \mu_{\mathbb{P}_N^s})(\overline{\mu}_N - \mu_{\mathbb{P}_N^s})^{\top} \bm{\Phi}^{\top} \bm{B} \right)\nonumber \\
&= Tr\left(\bm{B}^{\top}\bm{L}\bm{B}\right),
\end{flalign}
where $\bm{L}$ can be formulated as follows:
\begin{equation}\label{eq:L}
\bm{L} = \frac{1}{m} \sum \limits_{s=1}^{m} \bm{\Phi}(\overline{\mu}_N - \mu_{\mathbb{P}_N^s})(\overline{\mu}_N - \mu_{\mathbb{P}_N^s})^{\top} \bm{\Phi}^{\top}.
\end{equation}
$\mu_{\mathbb{P}_N^s}$ in (\ref{eq:L}) can be empirically estimated from the observations as:
\begin{equation}
\hat{\mu}_{\mathbb{P}_N^s} = \frac{1}{C} \sum \limits_{j=1}^{C} \frac{1}{n^s_j} \sum \limits_{k=1}^{n^s_j} \phi(x_{k \sim j}^{s}).
\end{equation}
Note that if the $n_j^s$ are identical for all $j$, that is, the classes are balanced, the class prior-normalized marginal distribution reduces to the empirical estimate of the original marginal distribution $\hat{\mu}_{\mathbb{P}^s} = \frac{1}{n^s} \sum \limits_{k=1}^{n^s} \phi(x_{k}^{s})$ adopted in \cite{muandet9,ghifary8}.
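Like $\bm{H}$, the matrix $\bm{L}$ in (\ref{eq:L}) can be assembled from per-domain weight vectors: $\bm{K}\bm{b}^s$ equals the embedding of the class prior-normalized marginal of domain $s$ when $\bm{b}^s$ gives each class total weight $1/C$. A numpy sketch (ours, for illustration):

```python
import numpy as np

def prior_normalized_scatter(K, y, d):
    """Estimate L = (1/m) sum_s K (b_s - bbar)(b_s - bbar)^T K, where K @ b_s
    is the embedding of the class prior-normalized marginal of domain s:
    each class contributes total weight 1/C regardless of its sample count."""
    n = K.shape[0]
    domains, classes = np.unique(d), np.unique(y)
    m, C = len(domains), len(classes)
    b = []
    for s in domains:
        v = np.zeros(n)
        for j in classes:
            mask = (y == j) & (d == s)
            v[mask] = 1.0 / (C * mask.sum())   # weight 1/(C * n_j^s)
        b.append(v)
    bbar = np.mean(b, axis=0)
    L = np.zeros((n, n))
    for v in b:
        u = K @ (v - bbar)
        L += np.outer(u, u) / m
    return L
```

As noted in the text, when the classes are balanced the weights $1/(C\,n_j^s)$ collapse to the uniform weights $1/n^s$ of the plain marginal embedding.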
\subsubsection{Preserving Discriminative Power}
In addition to the two domain-invariance regularization terms proposed above, we also consider extra terms to preserve the discriminativeness of the learned representation. There have been plenty of works on supervised dimension reduction in the $i.i.d.$ case, and kernel Fisher discriminant analysis \cite{mika1999fisher} is a representative method which has been used in domain generalization \cite{ghifary8}. Because the focus of our method is to better learn domain-invariant representations, we incorporate kernel Fisher discriminant analysis for a fair comparison with existing methods. Specifically, examples with the same label should be similar and examples with different labels should be well separated. These two constraints can be formulated as two regularization terms: within-class scatter and between-class scatter, which are briefly described as follows. \par
Between-class scatter:
\begin{equation}\label{between}
\Psi^{between}_{\bm{B}} = Tr(\bm{B}^{\top}\bm{P}\bm{B}),
\end{equation}
where matrix $\bm{P}$ can be computed as:
\begin{equation}\label{eq:P}
\bm{P} = \sum \limits_{j=1}^C n_j \bm{\Phi} (\mu_j - \overline{\mu}_b)(\mu_j - \overline{\mu}_b)^{\top} \bm{\Phi}^{\top},
\end{equation}
and $n_j= \sum \nolimits_{s=1}^m n^s_j$ denotes the number of examples in the $j$-th class from all domains. Note that $\mu_j$ and $\overline{\mu}_b$ can be empirically estimated as $\hat{\mu}_j = \frac{1}{n_j} \sum \nolimits_{s=1}^{m} \sum \nolimits_{k=1}^{n_j^s} \phi(x^s_{k\sim j})$ and $\hat{\overline{\mu}}_b = \frac{1}{n} \sum \nolimits_{j=1}^C n_j \hat{\mu}_j$. \par
Within-class scatter:
\begin{equation}\label{within}
\Psi^{within}_{\bm{B}} = Tr(\bm{B}^{\top}\bm{Q}\bm{B}),
\end{equation}
where the matrix $\bm{Q}$ can be computed as:
\begin{equation}\label{eq:Q}
\bm{Q} = \sum \limits_{j=1}^C \sum \limits_{s=1}^m \sum \limits_{k=1}^{n_j^s} \bm{\Phi}(\phi(x_{k\sim j}^s) - \mu_j)(\phi(x_{k\sim j}^s) - \mu_j)^{\top} \bm{\Phi}^{\top}.
\end{equation}
\subsubsection{Objective Function and Optimization}
In this subsection, we first formulate our objective function with the above regularization terms and then find the solutions by maximizing the objective function. \par
The proposed CIDG aims to learn an invariant feature transformation by solving the following optimization problem:
\begin{equation}
\argmax_{\bm{B}} \frac{\Psi^{between}_{\bm{B}}}{\Psi^{con}_{\bm{B}} + \Psi^{prior}_{\bm{B}} + \Psi^{within}_{\bm{B}}}.
\end{equation}
The numerator enforces the distance between features in different classes to be large. The denominator aims to learn a conditional invariant feature representation and reduce the distance between features in the same class simultaneously.\par
Replacing the scatters using equations (\ref{con}), (\ref{prior1}), (\ref{between}), and (\ref{within}), and introducing trade-off parameters $\gamma$ and $\alpha$, the above objective function can be reformulated as follows:
\begin{equation}\label{obj_1}
\argmax_{\bm{B}} \frac{Tr(\bm{B}^{\top}\bm{P}\bm{B})}{Tr(\bm{B}^{\top}(\gamma\bm{H} + \alpha\bm{L} + \bm{Q})\bm{B})},
\end{equation}
where $\gamma > 0$ and $\alpha > 0$ are trade-off parameters, which are selected on a validation set. \par
Note that the above objective function is invariant when rescaling $\bm{B} \rightarrow \eta \bm{B}$, where $\eta$ is a constant. Consequently, (\ref{obj_1}) can be reformulated as the following constrained optimization problem:
\begin{equation}
\begin{aligned}
\argmax_{\bm{B}} \quad& Tr(\bm{B}^{\top}\bm{P}\bm{B}) \\
s.t. \quad & Tr(\bm{B}^{\top}(\gamma\bm{H} + \alpha\bm{L} + \bm{Q})\bm{B}) = 1,
\end{aligned}
\end{equation}
which yields the Lagrangian:
\begin{equation}\label{lagrange}
\begin{aligned}
L(\bm{B}) = & Tr(\bm{B}^{\top}\bm{P}\bm{B}) \\
&- Tr((\bm{B}^{\top}(\gamma\bm{H} + \alpha\bm{L} + \bm{Q})\bm{B} - \bm{I}_q) \bm{\Gamma}),
\end{aligned}
\end{equation}
where $\bm{I}_q$ is an identity matrix of dimension $q$ and $\bm{\Gamma} = \mathrm{diag}(\lambda_1,\lambda_2,...,\lambda_q)$ is a diagonal matrix with the Lagrange multipliers on the diagonal. Solving (\ref{lagrange}) by setting the derivative w.r.t. $\bm{B}$ to zero, we arrive at a generalized eigenvalue problem:
\begin{equation}\label{result}
\bm{P}\bm{B} = (\gamma \bm{H} + \alpha \bm{L} + \bm{Q})\bm{B}\bm{\Gamma}.
\end{equation}
In practice, a small ridge term $\epsilon \bm{I}$ is added to $(\gamma \bm{H} + \alpha \bm{L} + \bm{Q})$ to obtain a more stable solution, giving $(\gamma \bm{H} + \alpha \bm{L} + \bm{Q} + \epsilon \bm{I})$. We summarize our CIDG method in Algorithm \ref{algorithm}.
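Equation (\ref{result}) with the ridge term is a generalized eigenvalue problem. A small numpy sketch (ours, for illustration; it uses a plain linear solve for clarity, whereas a production implementation might prefer a symmetric generalized eigensolver):

```python
import numpy as np

def solve_cidg(P, H, L, Q, gamma, alpha, q, eps=1e-4):
    """Top-q solutions of  P B = (gamma H + alpha L + Q + eps I) B Gamma,
    obtained as the leading eigenpairs of M^{-1} P with the ridge eps I."""
    n = P.shape[0]
    M = gamma * H + alpha * L + Q + eps * np.eye(n)
    vals, vecs = np.linalg.eig(np.linalg.solve(M, P))
    order = np.argsort(-vals.real)[:q]          # keep q leading eigenvalues
    return vecs[:, order].real, vals[order].real
```

The returned columns of $\bm{B}$ and eigenvalues $\bm{\Gamma}$ satisfy the generalized eigenvalue relation up to numerical precision.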
\begin{algorithm}
\caption{Conditional invariant domain generalization}
\label{algorithm}
\begin{algorithmic}[1]
\REQUIRE $m$ source domains with datasets $S_D = \{D_s = \{x_i^s,y_i^s\}_{i=1}^{n^s}, s = \{1,2,...,m\}\}$, trade-off parameters $\gamma,\alpha$.
\ENSURE Invariant feature transformation $\bm{B}^*$ and corresponding eigenvalues $\bm{\Gamma}^*$
\STATE Construct the kernel matrix $\bm{K}$ from the data samples of all domains, $\bm{K}(i,j) = k(x_i, x_j)$, $\forall x_i,x_j \in S_D$, and construct matrices $\bm{H},\bm{L},\bm{P},\bm{Q}$ according to equations (\ref{con}), (\ref{prior1}), (\ref{between}), and (\ref{within}).
\STATE Center the kernel matrix $\bm{K} \leftarrow \bm{K} - \bm{1}_n\bm{K} - \bm{K}\bm{1}_n + \bm{1}_n\bm{K}\bm{1}_n$, where $n = \sum \nolimits_{s=1}^m n^s$ and $\bm{1}_n \in \mathbb{R}^{n \times n}$ denotes a matrix with all entries equal to $\frac{1}{n}$.
\STATE Solve the equation (\ref{result}) to get the optimal feature transformation matrix $\bm{B}^*$ and the corresponding eigenvalues $\bm{\Gamma}^*$ with the first $q$ leading eigenvalues.
\STATE Given a target domain with data $D_t = \{x_i^t,y_i^t\}_{i=1}^{n^t}$, construct a kernel matrix $\bm{K}^t$ between samples from the source domains and samples from the target domain, $\bm{K}^t(i,j) = k(x_i, x_j^t), \forall x_i \in S_D, x_j^t \in D_t$. Then apply the centering operation $\bm{K}^t \leftarrow \bm{K}^t - \bm{1}_n\bm{K}^t - \bm{K}^t\bm{1}_{n^t} + \bm{1}_n \bm{K}^t \bm{1}_{n^t}$, where $\bm{1}_{n^t} \in \mathbb{R}^{n^t \times n^t}$ denotes a matrix with all entries equal to $\frac{1}{n^t}$.
\STATE The learned feature matrix of the target domain can be computed as $\bm{X}^* = (\bm{K}^t)^{\top}\bm{B}^*(\bm{\Gamma}^*)^{-\frac{1}{2}}$.
\end{algorithmic}
\end{algorithm}
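Steps 2, 4, and 5 of Algorithm \ref{algorithm} amount to centering kernel matrices and projecting. The following numpy sketch implements exactly the formulas stated in the algorithm (function names are ours):

```python
import numpy as np

def center_kernel(K):
    """Step 2: K <- K - 1_n K - K 1_n + 1_n K 1_n, with (1_n)_{ij} = 1/n."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one

def center_cross_kernel(Kt, n, nt):
    """Step 4: center the (n, nt) source-target kernel as stated in the
    algorithm, using 1_n (entries 1/n) and 1_{n^t} (entries 1/n^t)."""
    one_n = np.full((n, n), 1.0 / n)
    one_t = np.full((nt, nt), 1.0 / nt)
    return Kt - one_n @ Kt - Kt @ one_t + one_n @ Kt @ one_t

def project_target(Kt_centered, B, vals):
    """Step 5: learned target features X* = (K^t)^T B* (Gamma*)^{-1/2}."""
    return Kt_centered.T @ B @ np.diag(1.0 / np.sqrt(vals))
```

After centering, the kernel matrix has zero row and column means, which is what the $\bm{1}_n$ terms enforce.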
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
domain index & \multicolumn{3}{c|}{domain 1} & \multicolumn{3}{c|}{domain 2} & \multicolumn{3}{c|}{domain 3} \\
\hline
class index &$ 1$ & $2$ & $3$ & $ 1$ & $2$ & $3$ &$ 1$ & $2$ & $3$ \\
\hline
x &(1,0.3)& (2, 0.3) & (3, 0.3) & (3.5, 0.3) & (4.5, 0.3) & (5.5, 0.3) & (8, 0.3) & (9.5, 0.3) & (10, 0.3) \\
\hline
y &(2,0.3)& (1, 0.3) & (2, 0.3) & (2.5, 0.3) & (1.5, 0.3) & (2.5, 0.3) & (2.5, 0.3) & (1.5, 0.3) & (2.5, 0.3) \\
\hline
\# samples & 30 & 20 & 30 & 20 & 60 & 40 & 40 & 40 & 40\\
\hline
\end{tabular}
\caption{Details of the generated distributions of three domains.}
\label{synthetic}
\end{center}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{./domains_all.pdf}
\caption{Performance comparison between different methods. The figures in the first row visualize the samples according to three different domains (yellow, magenta, cyan). The figures in the second row visualize the samples of three classes (green, red, blue) in different domains (star, circle, cross). Note that the left two domains (yellow, magenta) are source domains and the right one (cyan) is target domain.}
\label{visualization}
\end{figure*}
\section{Experiments}
In this section, we conduct experiments on one synthetic dataset and two real-world image classification datasets to demonstrate the effectiveness of our conditional invariant domain generalization (CIDG) method. The synthetic data are two-dimensional, which facilitates comparing the performance of different methods by visualizing the data distributions. The two real-world image classification datasets are the VLCS and Office+Caltech datasets, which are widely used to evaluate the performance of domain generalization and domain adaptation \cite{ghifary8,gong10,khosla2012undoing}. We compare our CIDG with several state-of-the-art domain generalization methods, which are summarized below.
\begin{itemize}
\item K-nearest neighbors (KNN) using the original features, which serves as the baseline method.
\item Kernel principal component analysis (KPCA) \cite{scholkopf1998nonlinear} which finds the dominant components of the original features. KNN is applied for classification on the KPCA features.
\item Undo-Bias \cite{khosla2012undoing}, a multi-task learning method that aims to reduce data bias. Because Undo-Bias is a binary classification algorithm, we use the one-vs-rest strategy for multi-class classification.
\item Domain invariant component analysis (DICA) \cite{muandet9}, a domain generalization method that learns a domain-invariant feature representation in terms of marginal distributions. We use KNN for classification on the learned feature representation.
\item Scatter component analysis (SCA) \cite{ghifary8}, another method that learns domain-invariant features in terms of marginal distributions. It incorporates discriminative terms and domain scatter terms into a unified framework.
\end{itemize}
Note that we also conducted experiments using kernel Fisher discriminant analysis (KFDA); however, it performs worse than KPCA, so we do not report its results in this paper.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Source & Target & 1NN & KPCA & DICA & Undo-bias & SCA & CIDG \\
\hline
L,C,S & V & $53.27 \pm 1.52 $ & $58.62 \pm 1.44$ & $58.29 \pm 1.51$ & $57.73 \pm 1.02$ & $57.48 \pm 1.78$ & $\bm{65.65 \pm 0.52}$ \\
\hline
V,C,S & L & $50.35 \pm 0.94$ & $53.80 \pm 1.78$ & $50.35 \pm 1.45$ & $58.16 \pm 2.13$ & $52.07 \pm 0.86$ & $\bm{60.43 \pm 1.57}$ \\
\hline
V,L,S & C & $76.82 \pm 1.56$ & $85.84 \pm 1.64$ & $73.32 \pm 4.13$ & $82.18 \pm 1.77$ & $70.39 \pm 1.42$ & $\bm{91.12 \pm 1.62}$ \\
\hline
V,C,L & S & $51.78 \pm 2.07$ & $53.23 \pm 0.62$ & $54.97 \pm 0.61$ & $55.02 \pm 2.53$ & $54.46 \pm 2.71$ & $\bm{60.85 \pm 1.05}$ \\
\hline
C,S & V,L & $52.44 \pm 1.87$ & $55.74 \pm 1.01$ & $53.76 \pm 0.96$ & $56.83 \pm 0.67$ & $56.05 \pm 0.98$ & $\bm{59.25 \pm 1.21}$ \\
\hline
C,L & V,S & $45.04 \pm 2.49$ & $45.13 \pm 3.01$ & $44.81 \pm 1.62$ & $52.16 \pm 0.80$ & $48.97 \pm 1.04$ & $\bm{54.04 \pm 0.91}$ \\
\hline
C,V & L,S & $47.09 \pm 2.49$ & $55.79 \pm 1.57$ & $49.81 \pm 1.40$ & $59.00 \pm 2.49$ & $53.47 \pm 0.71$ & $\bm{61.61 \pm 0.67}$ \\
\hline
L,S & V,C & $57.09 \pm 1.43$ & $\bm{58.50 \pm 3.84}$ & $44.09 \pm 0.58$ & $51.16 \pm 3.52$ & $49.98 \pm 1.84$ & $55.65 \pm 3.57$ \\
\hline
L,V & S,C & $59.21 \pm 1.84$ & $63.88 \pm 0.36$ & $61.22 \pm 0.95$ & $64.26 \pm 2.77$ & $66.68 \pm 1.09$ & $\bm{70.89 \pm 1.31}$ \\
\hline
V,S & L,C & $58.39 \pm 0.78$ & $64.56 \pm 0.99$ & $60.68 \pm 1.36$ & $68.58 \pm 1.62$ & $63.29 \pm 1.34$ & $\bm{70.44 \pm 1.43}$ \\
\hline
\end{tabular}
\caption{Performance comparison between different methods with respect to accuracy ($\%$) on the VLCS dataset.}
\label{VLCS}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\label{office+caltech}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Source & Target & 1NN & KPCA & DICA & Undo-bias & SCA & CIDG \\
\hline
W,D,C & A & $87.65 \pm 2.46$ & $90.92 \pm 1.03$ & $80.34 \pm 2.65$ & $89.56 \pm 1.55$ & $89.97 \pm 1.85$ & $\bm{93.24 \pm 0.71}$ \\
\hline
A,W,D & C & $67.00 \pm 0.67$ & $74.23 \pm 1.34$ & $64.55 \pm 2.85$ & $82.27 \pm 1.49$ & $77.90 \pm 1.28$ & $\bm{85.07 \pm 0.93}$ \\
\hline
A,W,C & D & $97.36 \pm 1.92$ & $94.34 \pm 1.19$ & $93.21 \pm 1.92$ & $95.28 \pm 2.45$ & $93.21 \pm 3.50$ & $\bm{97.36 \pm 0.92}$ \\
\hline
A,C,D & W & $82.11 \pm 0.67$ & $88.84 \pm 2.17$ & $69.68 \pm 3.22$ & $90.18 \pm 2.10$ & $81.26 \pm 3.15$ & $\bm{90.53 \pm 2.66}$ \\
\hline
A,C & D,W & $60.95 \pm 1.31$ & $75.81 \pm 2.94$ & $60.41 \pm 1.94$ & $80.24 \pm 2.21$ & $76.89 \pm 0.99$ & $\bm{83.65 \pm 2.24}$ \\
\hline
D,W & A,C & $60.47 \pm 0.99$ & $65.75 \pm 1.74$ & $43.02 \pm 3.24$ & $\bm{74.14 \pm 3.45}$ & $69.53 \pm 1.87$ & $65.91 \pm 1.42$ \\
\hline
A,W & C,D & $71.11 \pm 0.81$ & $76.26 \pm 1.13$ & $69.29 \pm 1.77$ & $81.77 \pm 1.77$ & $78.99 \pm 1.54$ & $\bm{83.89 \pm 2.97}$ \\
\hline
A,D & C,W & $60.95 \pm 1.31$ & $75.81 \pm 2.94$ & $68.49 \pm 2.88$ & $81.23 \pm 2.17$ & $75.84 \pm 1.66$ & $\bm{84.66 \pm 3.27}$ \\
\hline
C,W & A,D & $89.08 \pm 2.26$ & $91.45 \pm 1.27$ & $83.01 \pm 2.42$ & $91.73 \pm 0.67$ & $90.46 \pm 1.72$ & $\bm{93.41 \pm 0.92}$ \\
\hline
C,D & A,W & $86.19 \pm 1.58$ & $90.36 \pm 1.26$ & $79.69 \pm 1.11$ & $90.67 \pm 1.87$ & $88.61 \pm 0.38$ & $\bm{91.70 \pm 1.35}$ \\
\hline
\end{tabular}
\caption{Performance comparison between different methods with respect to accuracy ($\%$) on the Office+Caltech dataset.}
\end{center}
\end{table*}
\subsection{Synthetic Dataset}
In this section, we randomly generate two-dimensional examples for the source domains and the target domain from different Gaussian distributions $\mathcal{N}(\mu, \sigma)$, where $\mu$ is the mean and $\sigma$ is the standard deviation. The $(\mu, \sigma)$ pairs of the different classes in the three domains are given in Table \ref{synthetic}. We consider the first two domains as source domains and the third one as the target domain. The first row of Figure \ref{visualization} visualizes the samples from the three domains in three different colors (yellow, magenta, cyan); from left to right, the domains are domain $1$, domain $2$ and domain $3$. The second row of Figure \ref{visualization} shows that each domain has three clusters (green, red, blue) corresponding to the three classes, with the domains represented by different shapes (star, circle, cross). The first column illustrates the raw feature distributions. \par
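The sampling procedure described above can be sketched as follows. The $(\mu, \sigma)$ pairs below are hypothetical placeholders -- the actual values are those listed in Table \ref{synthetic} -- but they reproduce the structure: three domains, three classes per domain, with class-conditional distributions that shift across domains.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(class_params, n_per_class=50):
    """Draw 2D Gaussian samples for one domain; class_params: label -> (mu, sigma)."""
    X, y = [], []
    for label, (mu, sigma) in class_params.items():
        X.append(rng.normal(loc=mu, scale=sigma, size=(n_per_class, 2)))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

# Illustrative (mu, sigma) pairs only; the class means shift across domains,
# so both P(X) and P(X|Y) are domain-dependent.
domains = [
    {0: ((0.0, 0.0), 0.3), 1: ((2.0, 0.0), 0.3), 2: ((0.0, 2.0), 0.3)},  # domain 1
    {0: ((0.5, 0.5), 0.3), 1: ((2.5, 0.5), 0.3), 2: ((0.5, 2.5), 0.3)},  # domain 2
    {0: ((1.0, 1.0), 0.3), 1: ((3.0, 1.0), 0.3), 2: ((1.0, 3.0), 0.3)},  # domain 3
]
(X1, y1), (X2, y2), (X3, y3) = [make_domain(p) for p in domains]
# Domains 1 and 2 serve as source domains; domain 3 is the unseen target.
```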
We compare our CIDG with 1NN, KPCA, DICA, and SCA to evaluate the distributions of the learned feature representations across domains. Since Undo-Bias is an SVM-based method that does not explicitly learn a feature representation, we do not compare against it on the synthetic data. We use the RBF kernel for all methods that involve the computation of kernel matrices. In all experiments, domain 1 and domain 2 are used as source domains and domain 3 is used as the unseen target domain. From the results in Figure \ref{visualization}, we can see that the proposed CIDG achieves the best accuracy of $86.67\%$. KPCA shows almost no improvement over the baseline 1NN method on the synthetic dataset. DICA clusters one class (blue) well but performs poorly on the other two classes. SCA learns a better feature distribution, but the blue and green classes are mixed in the learned representation; additionally, samples of the same class lie along a line rather than in a clear cluster. Our CIDG learns more robust feature representations, and the learned features of each class form a well-shaped cluster. \par
\subsection{VLCS Dataset}
VLCS is an image classification dataset widely used for evaluating the performance of domain generalization. This dataset contains images from four different sub-datasets corresponding to four domains: PASCAL VOC2007 (V) \cite{everingham2010pascal}, LabelMe (L) \cite{russell2008labelme}, Caltech-101 (C) \cite{griffin2007caltech}, and SUN09 (S) \cite{choi2010exploiting}. Five shared classes (bird, car, chair, dog and person) are selected from these four datasets. The images are preprocessed by subtracting the mean values and cropping the central $224 \times 224$ region from the $256 \times 256$ resized images. The preprocessed images are then fed into the DeCAF network to extract the 4096-dimensional DeCAF\textsubscript{6} features \cite{donahue2014decaf}.
We randomly select $70\%$ of the data from each domain as the training set and repeat the random selection five times. The mean classification accuracy and standard deviation over the five random selections are given for each method. All parameters are selected through validation, in which $30\%$ of the training data is used as the validation set. All kernel methods use an RBF kernel, and the learned features are classified using 1NN, except for Undo-Bias. The results are shown in Table \ref{VLCS}. \par
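The split-and-evaluate protocol above can be sketched as follows, here with a NumPy-only kernel PCA (as the representation learner) followed by 1NN classification of the target domain. The kernel width and number of components are illustrative placeholders, not the values selected by validation in our experiments.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_fit_transform(X_fit, X_apply, n_comp=2, gamma=1.0):
    """Fit kernel PCA on X_fit and project X_apply into the leading components."""
    K = rbf_kernel(X_fit, X_fit, gamma)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    w, V = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(w)[::-1][:n_comp]               # leading components
    alpha = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    Ka = rbf_kernel(X_apply, X_fit, gamma)
    Ka_c = Ka - Ka.mean(1, keepdims=True) - K.mean(0) + K.mean()
    return Ka_c @ alpha

def one_nn(Ztr, ytr, Zte):
    d2 = ((Zte[:, None, :] - Ztr[None, :, :]) ** 2).sum(-1)
    return ytr[np.argmin(d2, axis=1)]

def evaluate(Xs, ys, Xt, yt, n_splits=5, train_frac=0.7, seed=0):
    """Mean/std target accuracy over random 70% training selections."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_splits):
        tr = rng.permutation(len(Xs))[: int(train_frac * len(Xs))]
        Z = kpca_fit_transform(Xs[tr], np.vstack([Xs[tr], Xt]))
        Ztr, Zte = Z[: len(tr)], Z[len(tr):]
        accs.append(float((one_nn(Ztr, ys[tr], Zte) == yt).mean()))
    return float(np.mean(accs)), float(np.std(accs))
```

The same driver applies to any of the compared representation learners by swapping `kpca_fit_transform` for the corresponding feature map.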
From the results in Table \ref{VLCS}, we can see that our conditional invariant domain generalization (CIDG) performs the best on 9 of the 10 domain generalization tasks. KPCA performs the best when L,S are the source domains and V,C are the target domains. Note that most of the domain generalization methods outperform 1NN on raw features; however, on several tasks some methods perform even worse than 1NN on raw features. This is mainly because the features of real-world images are complicated and noisy, so the learned features are not necessarily discriminative when generalized to target domains. \par
\subsection{Office+Caltech Dataset}
The Office+Caltech image dataset consists of ten overlapping categories between the Office dataset and the Caltech-256 dataset (C). Because the Office dataset contains three sub-datasets: AMAZON (A), DSLR (D), and WEBCAM (W), we have four different domains in total. Similarly, we randomly select $70\%$ of the data from each domain as the training set and repeat the random selection five times. The mean classification accuracy and standard deviation over the five random selections are reported for each method. The feature extraction is the same as that used for the VLCS dataset, except that we use the CAFFE network \cite{jia2014caffe} instead of the DeCAF network. The other settings are the same as those in the experiments on the VLCS dataset. \par
From the results in Table \ref{office+caltech}, we find that the proposed CIDG achieves the best performance on 9 of the 10 domain generalization tasks. This further validates that enforcing conditional invariance is more reasonable than enforcing only marginal invariance. Note that Undo-Bias is an SVM-based method, which is possibly the main reason why it outperforms CIDG when D,W are used as source domains and A,C as target domains. \par
\section{Conclusion}
In this paper, we have proposed a conditional invariant domain generalization approach considering the situation that both $\mathbb{P}(X)$ and $\mathbb{P}(Y|X)$ change across domains. Different from previous works which assume that only $\mathbb{P}(X)$ changes, our proposed method can learn representations that have invariant joint distribution $\mathbb{P}(h(X),Y)$ across domains if the prior distribution $\mathbb{P}(Y)$ does not change between the source domains and the target domains. Two regularization terms that enforce class-conditional distribution invariance across domains are proposed and validated on both synthetic and real datasets.
\section*{Acknowledgments}
This work was supported by National Key Research and Development Program of China 2017YFB1002203, NSFC No.61572451, No.61390514, and No. 61632019, Youth Innovation Promotion Association CAS CX2100060016, Fok Ying Tung Education Foundation WF2100060004, and Australian Research Council Projects FL-170100117, DP-180103424, DP-140102164, LP-150100671.
\bibliographystyle{aaai}
\section*{Introduction}
\label{sec:introduction}
We are concerned with the development and rigorous analysis of novel efficient model order reduction methods for parameter optimization constrained by coercive variational state equations using the {\em first optimize, then discretize} approach.
The methods are based on successive enrichment of the underlying reduced order models within the framework of Trust-Region optimization.
Optimization problems constrained by partial differential equations (PDEs) arise in many fields of application
in engineering and across all sciences.
Examples of such problems include optimal (material) design or optimal control of processes and inverse problems,
where parameters of a PDE model are unknown and need to be estimated from measurements.
The numerical solution of such problems is very challenging as the underlying PDEs have to be solved repeatedly
within outer optimization algorithms and the dimension of the parameters that need to be optimized might be
very high or even infinite dimensional.
PDE constrained optimization problems have been of interest for many decades. Classically, the underlying
PDE (forward problem) is approximated by a high dimensional full order model (FOM) that results from discretization, e.g.,
by the Finite Element or Finite Volume method.
Hence, the complexity of the optimization problem directly depends on
the numbers of degrees of freedom (DOF) of the FOM.
Mesh adaptivity has been employed to minimize the number of DOFs; see, e.g.,
\cite{BKR2000,MR2556843,Clever2012,MR2905006,MR1887737,MR2892950} and the references therein.
\textbf{Model order reduction for PDE constrained optimization and optimal control.}
A more recent
approach is the usage of model order reduction (MOR) methods in
order to replace the FOM by a surrogate reduced order model (ROM) of possibly very low dimension.
MOR is a very active research field that has seen
tremendous development in recent years, both from a theoretical and application
point of view. For an introduction and overview
we refer to the monographs and collections
\cite{BCOW2017,BOP+2017,HRS2016,QMN2016}.
A particularly promising model reduction approach for parameterized partial differential equations (pPDEs)
is the Reduced Basis (RB) method that relies on the approximation of the solution manifold of pPDEs
by low dimensional linear approximation spaces that are spanned from suitably selected particular solutions,
called snapshots.
A posteriori error estimation for solutions of the ROM with respect to the FOM
is the basis for efficient Greedy algorithms to select the snapshots in a quasi-optimal way
\cite{BCD+2011,Haasdonk2013}. Alternatively, reduced bases may be constructed using proper orthogonal decomposition
(POD) \cite{GV17}.
The construction of a reduced basis and the respective projected ROM is generally called the offline phase,
whereas evaluating the ROM is called online phase.
There exists a large amount of literature using such reduced order surrogate models for optimization methods.
A posteriori error estimates for reduced approximation of
linear-quadratic optimization problems and parametrized
optimal control problems with control constraints were studied, e.g., in \cite{Dede2012,GK2011,KTV13,NRMQ2013,OP2007}.
In \cite{dihl15} an RB approach is proposed which also enables
an estimation on the actual error on the control variable and not only on the gradient of the output functional.
Certified Reduced Basis methods for parametrized elliptic
optimal control problems with distributed controls were studied in \cite{KTGV2018}.
With the help of an a posteriori error estimator, ROMs
can be constructed with respect to a desired accuracy but also with respect to a local area in the parameter set \cite{EPR2010,HDO2011}.
For very high-dimensional parameter sets, simultaneous parameter and state reduction has been proposed \cite{HO2014,HO2015,LWG2010}.
However, constructing a reduced order surrogate for a prohibitively expensive
forward problem can also take a significant amount of computational resources. To remedy this, it is
beneficial to use optimization methods that optimize on a local level of the control variable, assuming
the surrogate only to be accurate enough in the respective parameter region.
Hence, we require an approach which goes beyond the classical offline/online decomposition.
Recently, RB methods with a progressive construction of ROMs have been proposed \cite{BMV2020,GHH2016,ZF2015}.
Also localized RB methods that are based on efficient localized a posteriori error control and online enrichment \cite{BEOR2017,OS2015}
overcome traditional offline/online splitting and are thus particularly well suited for applications in optimization or inverse
problems \cite{OSS2018,OS2017}.
\textbf{Trust-Region reduced order models for second-order methods.}
Trust-Region (TR) approaches are a class of optimization methods that are advantageous for the usage
of locally accurate surrogate models. The main idea is to solve optimization sub-problems only in a local
area of the parameter set which resolves the burden of constructing a global RB space.
A problem that might occur is that during this minimization one usually moves away from the original parameters at which
the reduced order model was built, so the quality of the reduced model can no longer be guaranteed.
For that reason, a priori and a posteriori error analysis are required to ensure accurate reduced order
approximations for the optimization problem; cf.~\cite{GV17,HV08,KTV13}.
In \cite{AFS00,SS13} a TR approach was proposed to control the quality of the (POD) reduced
order model, referred to as TR-POD, by now a well-established method in applications; cf.~\cite{BC08,CAN12}.
TR methods ensure global convergence for locally convergent methods.
In each iteration of the TR algorithm the nonlinear objective is replaced by a model function which can be optimized with much less effort; cf.~\cite{CGT00,NW06}.
One suitable choice for the model is a reduced order discretization of the objective (e.g., by utilizing a second-order Taylor approximation).
To ensure convergence to stationary points the accuracy of the model function and of its gradient have to be monitored.
In \cite{RoggTV17} a posteriori error bounds are utilized to monitor the approximation quality of the gradient.
We also refer to \cite{GGMRV17}, where the authors utilize basis update strategies to improve the reduced order approximation scheme with respect to the optimization goal.
The TR strategy can be combined with second-order methods for nonlinear optimization: with the Newton method to solve the reduced problem and with the SQP method for the all-at-once approach; cf.~\cite{HV06}.
Constraints on the control and the metric for the Trust-Region radius can affect the convergence of the method.
For an error-aware TR method, the TR radius is directly characterized by the a posteriori error estimator for the
cost functional of the surrogate model. Thus, the offline phase of the RB method can completely be omitted since the RB model can
be adaptively enriched during the outer optimization loop. With this procedure the surrogate model eventually
will have a high accuracy around the optimum of the optimization problem, ignoring the accuracy of the part
which the outer (and inner) optimization loop does not approach at all. Error-aware TR-RB methods can be utilized in many
different ways.
One possible TR-RB approach has been extensively studied
in \cite{QGVW2017} for linear parametric elliptic equations, which ensures convergence of the nonlocal TR-RB.
Note that the experiments in \cite{QGVW2017} are for up to six-dimensional parameter sets without inequality constraints.
In \cite{YM2013}, the TR framework is combined with an efficient RB error bound for defining the Trust-Region in the design optimization of vibrating structures using frequency domain formulations.
\textbf{Main results.}
In this contribution we present several significant advances for adaptive
Trust-Region Reduced Basis optimization methods for parameterized partial differential equations:
\begin{itemize}
\item For the model function in the TR-RB approach, we follow a non-conforming dual (NCD) approach by choosing as model function the Lagrangian associated to the optimization problem. This permits more accurate results in terms of approximation of the optimal solution;
\item we provide efficiently computable a posteriori error estimates for all reduced quantities for different choices of the cost functional and its (approximate) gradient;
\item we rigorously prove the convergence of the TR-RB method with bilateral inequality constraints on the
parameters;
\item we devise several new adaptive enrichment strategies for the progressive construction of the
Reduced Basis spaces;
\item we demonstrate in numerical experiments that our new TR-RB methods outperform existing model reduction approaches for large scale optimization problems in well defined benchmark problems.
\end{itemize}
\textbf{Organization of the article.}
In Section~\ref{sec:problem} we introduce the PDE constrained optimization problem and state first- and second-order optimality conditions. These serve as a basis for the full order discretization
derived in Section~\ref{sec:mor}.
Moreover, in Section~\ref{sec:mor} we introduce different strategies of model order reduction for the full order model and
derive rigorous a posteriori error estimates for all equations, functionals, and gradient information.
Section~\ref{sec:TRRB_and_adaptiveenrichment} is devoted to the derivation of Trust-Region -- Reduced Basis methods and the presentation of the convergence analysis of the adaptive TR-RB algorithm.
In addition, we discuss in detail several variants of new TR-RB algorithms that differ in their respective reduced gradient information as well as in the enrichment strategies for the
construction of the corresponding reduced models.
All variants are thoroughly analyzed numerically in Section~\ref{sec:num_experiments}, where we consider three well defined benchmark problems. We also compare with selected state of the art optimization methods from the literature.
\section{Problem formulation}
\label{sec:problem}
Given $\mu_\mathsf{a},\mu_\mathsf{b}\in\mathbb{R}^P$ with $P \in \mathbb{N}$ we consider the compact and convex admissible parameter set
\[
\mathcal{P}:= \left\{\mu\in\mathbb{R}^P\,|\,\mu_\mathsf{a} \leq \mu \leq \mu_\mathsf{b} \right\} \subset \mathbb{R}^P,
\]
where $\leq$ is understood component-wise. Let $V$ be a real-valued Hilbert space with
inner product $(\cdot \,,\cdot)$ and induced norm $\|\cdot\|$. We are interested in efficiently approximating
PDE-constrained parameter optimization problems with the quadratic continuous cost functional
\[
\mathcal{J}: V \times \mathcal{P} \to \mathbb{R}, \quad (u,\mu) \mapsto \mathcal{J}(u, \mu) = \Theta(\mu) + j_\mu(u) + k_\mu(u, u),
\]
where $\Theta: \mathcal{P} \to \mathbb{R}$ denotes a parameter functional and, for each $\mu \in \mathcal{P}$, $j_\mu \in V'$ is a parameter-dependent continuous linear functional and $k_\mu: V \times V \to \mathbb{R}$ a continuous symmetric bilinear form. To be more precise, we consider the following constrained minimization problem:
{\color{white}
\begin{equation}
\tag{P}
\label{P}
\end{equation}
}\vspace{-45pt}
\begin{subequations}\begin{equation}
\min_{(u,\mu)\in V\times\mathcal{P}} \mathcal{J}(u, \mu),
\tag{P.a}\label{P.argmin} \end{equation}
subject to $(u,\mu)$ satisfying the \emph{state -- or primal -- equation}
\begin{align}
a_\mu(u, v) = l_\mu(v) && \text{for all } v \in V,
\tag{P.b}\label{P.state}
\end{align}\end{subequations}%
\setcounter{equation}{0}
where, for each $\mu \in \mathcal{P}$, $a_\mu: V \times V \to \mathbb{R}$ denotes a continuous and coercive symmetric bilinear form and $l_\mu \in V'$ denotes a continuous linear functional. For given $u \in V$, $\mu \in \mathcal{P}$, we introduce the primal residual $r_\mu^\textnormal{pr}(u) \in V'$ associated with \eqref{P.state} by
\begin{align}
r_\mu^\textnormal{pr}(u)[v] := l_\mu(v) - a_\mu(u, v) &&\text{for all }v \in V.
\label{eq:primal_residual}
\end{align}
The primal residual plays a crucial role for a posteriori error analysis and for sensitivities of solution maps.
\begin{remark}
The Lagrange functional for \eqref{P} is given by $\mathcal{L}(u,\mu,p) = \mathcal{J}(u,\mu) + r_\mu^\textnormal{pr}(u)[p]$ for $(u,\mu)\in V\times\mathcal{P}$ and for $p\in V$.
\end{remark}
To apply RB methods efficiently, we require the parametrization of the problem to be separable from $V$ throughout the work. This separability is a standard assumption for RB methods and can be circumvented by using empirical interpolation techniques \cite{BMNP2004,CS2010,DHO2012}.
\begin{assumption}[Parameter-separability]
\label{asmpt:parameter_separable}
We assume $a_\mu$, $l_\mu$, $j_\mu$, $k_\mu$ to be parameter separable with $\Xi^a, \Xi^l, \Xi^j, \Xi^k \in \mathbb{N}$ non-parametric components $a_\xi: V \times V \to \mathbb{R}$ for $1 \leq \xi \leq \Xi^a$, $l_\xi \in V'$ for $1 \leq \xi \leq \Xi^l$, $j_\xi \in V'$ for $1 \leq \xi \leq \Xi^j$ and $k_\xi: V \times V \to \mathbb{R}$ for $1 \leq \xi \leq \Xi^k$, and respective parameter functionals $\theta_\xi^a, \theta_\xi^l, \theta_\xi^j, \theta_\xi^k: \mathcal{P} \to \mathbb{R}$, such that
\begin{align*}
a_\mu(u, v) &= \sum_{\xi = 1}^{\Xi^a} \theta_\xi^a(\mu)\, a_\xi(u, v)
, &&&&& l_\mu(v) &= \sum_{\xi = 1}^{\Xi^l} \theta_\xi^l(\mu)\, l_\xi(v),
\end{align*}
and analogously for $j_\mu$ and $k_\mu$.
\end{assumption}
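For illustration (this is not one of the benchmark problems considered later), a diffusion problem with piecewise constant conductivity on a partition $\overline{\Omega} = \bigcup_{\xi} \overline{\Omega_\xi}$ satisfies Assumption~\ref{asmpt:parameter_separable} with

```latex
% Illustrative parameter-separable bilinear form: the parameter components
% act as subdomain-wise diffusion coefficients.
a_\mu(u, v) = \sum_{\xi = 1}^{\Xi^a} \mu_\xi \int_{\Omega_\xi} \nabla u \cdot \nabla v \,\mathrm{d}x,
\qquad\text{i.e.,}\quad
\theta_\xi^a(\mu) = \mu_\xi
\quad\text{and}\quad
a_\xi(u, v) = \int_{\Omega_\xi} \nabla u \cdot \nabla v \,\mathrm{d}x,
```

where $0 < \mu_\mathsf{a} \leq \mu \leq \mu_\mathsf{b}$ ensures uniform coercivity; the right-hand side $l_\mu$ can be treated analogously.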
Due to Assumption~\ref{asmpt:parameter_separable}, all quantities which linearly depend on $a_\mu$, $l_\mu$, $j_\mu$ and $k_\mu$ (such as $\mathcal{J}$ or the primal residual) are also separable w.r.t.~the parameter.
Since we will use a Lagrangian ansatz for an explicit computation of derivatives, we require some notation that we use throughout this paper.
\subsection{A note on differentiability}
\label{sec:differentiability}
If $\mathcal{J}: V \times \mathcal{P} \to \mathbb{R}$ is Fr\'echet differentiable w.r.t.~each $\mu \in \mathcal{P}$, for each $u \in V$ and each $\mu \in \mathcal{P}$ there exists a bounded linear functional $\partial_\mu \mathcal{J}(u, \mu) \in \mathbb{R}^P$, such that the Fr\'echet derivative of $\mathcal{J}$ w.r.t.~its second argument in the direction of $\nu \in \mathbb{R}^P$ is given by $\partial_\mu \mathcal{J}(u, \mu) \cdot \nu$ (noting that the dual space of $\mathbb{R}^P$ is itself). We refer to $\partial_\mu \mathcal{J}(u, \mu)$ as the derivative w.r.t.~$\mu$. In addition, for $u \in V$, $\mu \in \mathcal{P}$ we denote the \emph{partial derivative} of $\mathcal{J}(u, \mu)$ w.r.t.~the $i$-th component of $\mu$ by $d_{\mu_i}\mathcal{J}(u, \mu)$ for $1 \leq i \leq P$. Note that $d_{\mu_i}\mathcal{J}(u, \mu) = \partial_\mu \mathcal{J}(u, \mu)\cdot e_i $, where $e_i \in \mathbb{R}^P$ denotes the $i$-th canonical unit vector. Furthermore, we denote the \emph{gradient} w.r.t.~its second argument -- the vector of components $d_{\mu_i}\mathcal{J}(u, \mu)$ -- by the operator $\nabla_\mu \mathcal{J}: V \times \mathcal{P} \to \mathbb{R}^P$. Similarly, if $\mathcal{J}$ is Fr\'echet differentiable w.r.t.~each $u \in V$, for each $u \in V$ and each $\mu \in \mathcal{P}$ there exists a bounded linear functional $\partial_u \mathcal{J}(u, \mu)\in V'$, such that the Fr\'echet derivative of $\mathcal{J}$ w.r.t.~its first argument in any direction $v \in V$ is given by $\partial_u \mathcal{J}(u, \mu)[v]$. We refer to $\partial_u \mathcal{J}(u, \mu)$ simply as the derivative w.r.t.~$u$.
If $\mathcal{J}$ is twice Fr\'echet differentiable w.r.t.~each $\mu \in \mathcal{P}$, we denote its \emph{hessian} w.r.t.~its second argument by the operator $\mathcal{H}_\mu \mathcal{J}: V \times \mathcal{P} \to \mathbb{R}^{P \times P}$.
We treat $a$, $l$, $j$ and $k$ in a similar manner, although, for notational compactness, we indicate their parameter-dependency differently.
For instance, interpreting the bilinear form $a$ as a map $a: V \times V \times \mathcal{P} \to \mathbb{R}$, $\left(u, v, \mu\right) \mapsto a_\mu(u, v)$, we denote the Fr\'echet derivatives of $a$ w.r.t.~the first, second and third argument of said map in the direction of $w\in V$, $\nu \in \mathbb{R}^P$ by $\partial_u a_\mu(u, v)[w] \in \mathbb{R}$, $\partial_v a_\mu(u, v)[w] \in \mathbb{R}$ and $\partial_\mu a_\mu(u, v)\cdot \nu \in \mathbb{R}$, respectively.
Similarly, interpreting the linear functional $l$ as a map $l: V \times \mathcal{P} \to \mathbb{R}$, $(v, \mu) \mapsto l_\mu(v)$, we denote the Fr\'echet derivatives of $l$ w.r.t.~the first and second argument of said map in the direction of $w \in V$, $\nu \in \mathbb{R}^P$ by $\partial_v l_\mu(v)[w] \in \mathbb{R}$ and $\partial_\mu l_\mu(v) \cdot \nu \in \mathbb{R}$, respectively. We omit the word Fr\'{e}chet when referring to the derivatives of $\mathcal{J}$, $a$, $l$, $j$ and $k$, in order to simplify the notation, unless it is strictly necessary to specify it.
We apply this notation for Fr\'echet and partial derivatives for functionals and bilinear forms throughout this manuscript.
Note that we denote the derivatives w.r.t.~the symbol of the argument in the original definition of the functional or bilinear form, not w.r.t.~the symbol of the actual argument, i.e.~we use $\partial_u \mathcal{J}(u_\mu, \mu)$ for the derivative w.r.t.~the first argument, not $\partial_{u_\mu} \mathcal{J}(u_\mu, \mu)$ or $\partial_v a_\mu(u, p)$ for the derivative w.r.t.~the second argument, not $\partial_p a_\mu(u, p)$. Note also that, due to Assumption~\ref{asmpt:parameter_separable}, we can exchange the order of differentiation w.r.t.~$V$ and $\mathbb{R}^P$, i.e.~$\partial_u\big(\partial_\mu \mathcal{J}(u, \mu)\cdot\nu\big)[w] = \partial_\mu\big(\partial_u \mathcal{J}(u, \mu)[w]\big)\cdot\nu$.
\begin{assumption}[Differentiability of $a$, $l$ and $\mathcal{J}$]
\label{asmpt:differentiability}
We assume $a_{\mu}$, $l_{\mu}$ and $\mathcal{J}$ to be twice Fr\'{e}chet differentiable w.r.t.~$\mu$. This obviously requires that all parameter-dependent coefficient functions in Assumption~{\rm{\ref{asmpt:parameter_separable}}} are twice differentiable as well.
\end{assumption}
\begin{remark}[Derivatives w.r.t.~$V$]
\label{prop:gateaux_wrt_V}
Due to the (bi-)linearity of $a$, $l$, $j$ and $k$, we can immediately compute their derivatives w.r.t.~arguments in $V$. For $u, v \in V$, $\mu \in \mathcal{P}$, the derivatives of $a$, $l$ and $\mathcal{J}$ w.r.t.~arguments in $V$ in the direction of $w \in V$ are given, respectively, by
\[
\partial_u a_\mu(u, v)[w] = a_\mu(w, v),\hspace{1em}\partial_v a_\mu(u, v)[w] = a_\mu(u, w), \hspace{1em} \partial_v l_\mu(v)[w] = l_\mu(w), \hspace{1em}\partial_u \mathcal{J}(u, \mu)[w] = j_\mu(w) + 2 k_\mu(w, u).
\]
\end{remark}
We compute the partial derivatives of $a$ and $l$ w.r.t.~the parameter by means of their separable decomposition.
\begin{remark}[Derivatives w.r.t.~$\mathcal{P}$]
\label{rem:gateaux_wrt_P}
For $\mu \in \mathcal{P}$, $u, v, \in V$ the derivatives of $a$ and $l$ w.r.t.~$\mu$ in the direction of $\nu \in \mathbb{R}^P$ are given by
\begin{align}
\partial_\mu a_\mu(u, v) \cdot \nu &= \sum_{\xi = 1}^{\Xi^a} \big(\partial_\mu \theta_\xi^a(\mu) \cdot \nu\big)\, a_\xi(u, v)&&\text{and}&&&
\partial_\mu l_\mu(v) \cdot \nu &= \sum_{\xi = 1}^{\Xi^l} \big(\partial_\mu \theta_\xi^l(\mu) \cdot \nu\big)\, l_\xi(v),
\notag
\end{align}
respectively, if $u, v$ do not depend on $\mu$.
We also introduce the following shorthand notation for the derivative of functionals and bilinear forms w.r.t.~the parameter in the direction of $\nu \in \mathbb{R}^P$, e.g.~for $\mu \in \mathcal{P}$ we introduce
\begin{align}
\partial_\mu l_\mu &\cdot \nu \in V'&&& v \mapsto \big(\partial_\mu l_\mu \cdot \nu\big)(v) &:= \partial_\mu l_\mu(v)\cdot \nu&&\text{and}
\notag\\
\partial_\mu a_\mu &\cdot \nu : V \times V \to \mathbb{R}&&& u, v \mapsto \big(\partial_\mu a_\mu \cdot \nu\big)(u, v) &:= \partial_\mu a_\mu(u, v)\cdot \nu,
\notag
\end{align}
and note that $\partial_\mu l_\mu$ and $\partial_\mu a_\mu$ are continuous and separable w.r.t.~the parameter, owing to Assumption~{\rm\ref{asmpt:parameter_separable}}.
\end{remark}
The bilinear form $a_\mu(\cdot\,,\cdot)$ is continuous and coercive for all $\mu\in\mathcal{P}$. Thus we can define the bounded solution map $\mathcal{S}:\mathcal{P} \to V$, $\mu \mapsto u_\mu:= \mathcal S(\mu)$, where $u_\mu$ is the unique solution to \eqref{P.state} for a given $\mu$. The Fr\'echet derivatives of $\mathcal{S}$ are a common tool for RB methods and optimization, e.g., for constructing Taylor RB spaces that consist of primal solutions as well as their sensitivities (see \cite{HAA2017}) or for deriving optimality conditions for \eqref{P} (see \cite{HPUU2009}).
\begin{proposition}[Fr\'{e}chet derivative of the solution map]
\label{prop:solution_dmu_eta}
Considering the solution map $\mathcal S:\mathcal{P} \to V$, $\mu \mapsto u_\mu$, its Fr\'{e}chet derivative $d_{\nu} u_\mu \in V$ w.r.t.~a direction $\nu\in\mathbb{R}^P$ is the unique solution of
\begin{align}\label{eq:primal_sens}
a_\mu(d_{\nu}u_\mu, v) = \partial_\mu r_\mu^\textnormal{pr}(u_\mu)[v] \cdot \nu &&\text{for all } v \in V.
\end{align}
\end{proposition}
\begin{proof}
We refer to \cite{HPUU2009} for the proof of this result.
\end{proof}
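A short formal sketch of the computation behind Proposition~\ref{prop:solution_dmu_eta}, obtained by differentiating the state equation \eqref{P.state} with respect to $\mu$ (for fixed $v \in V$):

```latex
% Differentiate a_\mu(u_\mu, v) = l_\mu(v) w.r.t. \mu in the direction \nu
% (product rule: the form depends on \mu directly and through u_\mu):
\partial_\mu a_\mu(u_\mu, v)\cdot\nu + a_\mu(d_\nu u_\mu, v)
  = \partial_\mu l_\mu(v)\cdot\nu
% Rearranging and using the definition (1) of the primal residual:
a_\mu(d_\nu u_\mu, v)
  = \partial_\mu l_\mu(v)\cdot\nu - \partial_\mu a_\mu(u_\mu, v)\cdot\nu
  = \partial_\mu r_\mu^\textnormal{pr}(u_\mu)[v]\cdot\nu
```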
\subsection{Optimal solution and optimality conditions}
\label{sec:first_order_optimality_conditions}
In this section, we discuss the existence of an optimal solution for problem \eqref{P}. Then, we characterize a locally optimal solution through first- and second-order optimality conditions. Throughout the paper, a bar indicates optimality.
\begin{theorem}[Existence of an optimal solution]
\label{Thm:existence_optimalsolution}
Problem \eqref{P} admits an optimal solution pair $(\bar u, \bar \mu)\in V\times\mathcal{P}$, where $\bar u:= u_{\bar \mu}$ is the solution of \eqref{P.state} for the parameter $\bar\mu$.
\end{theorem}
\begin{proof}
Note that the quantities involved in problem \eqref{P} satisfy Assumption~1.44 in \cite{HPUU2009}. Thus the existence follows from \cite[Theorem~1.45]{HPUU2009}.
\end{proof}
Let us introduce the reduced cost functional $\hat{\mathcal{J}}: \mathcal{P}\to \mathbb{R},\,\mu\mapsto \hat{\mathcal{J}}(\mu) := \mathcal{J}(u_\mu, \mu)= \mathcal{J}( \mathcal S(\mu),\mu)$. Then problem \eqref{P} is equivalent to the so-called reduced problem
\begin{align}
\min_{\mu \in \mathcal{P}} \hat{\mathcal{J}}(\mu).
\tag{$\hat{\textnormal{P}}$}\label{Phat}
\end{align}
\begin{remark} \hfill
\begin{enumerate}
\item Since $r^\textnormal{pr}_\mu(u_\mu)[p] = 0$ for any $p\in V$, it follows that $\hat{\mathcal{J}}(\mu) = \mathcal L(u_\mu,\mu,p)$ for any $p\in V$.
\item The cost functional $\hat{\mathcal{J}}$ is in general non-convex, thus the existence of a unique minimum for $\hat{\mathcal{J}}$ (and thus of $\mathcal{J}$) cannot be guaranteed.
\item Let $(\bar u, \bar \mu) \in V \times \mathcal{P}$ be a local optimal solution to \eqref{P} with $\bar u := u_{\bar \mu}$ the solution of the primal equation \eqref{P.state} for the parameter $\bar\mu$. Then the following constraint qualification holds true: For any $f\in V'$ there exists a pair $(u,\mu) \in V\times \mathbb{R}^P$ solving
\begin{align*}
a_{\bar \mu}(u, v) - \partial_\mu r_{\bar \mu}^\textnormal{pr}(\bar u)[v] \cdot \mu = f(v) &&\text{for all } v \in V.
\end{align*}
\item Theorem~{\rm{\ref{Thm:existence_optimalsolution}}} does not provide any solution method.
\end{enumerate}
\end{remark}
One can derive first-order necessary optimality conditions in order to compute candidates for a local optimal solution of \eqref{P}. We refer to \cite[Cor. 1.3]{HPUU2009} for a proof of the following result:
\begin{proposition}[First-order necessary optimality conditions for \eqref{P}]
\label{prop:first_order_opt_cond}
Let $(\bar u, \bar \mu) \in V \times \mathcal{P}$ be a local optimal solution to \eqref{P}. Moreover, let Assumption~{\rm{\ref{asmpt:differentiability}}} hold true. Then there exists a unique Lagrange multiplier $\bar p\in V$ such that the following first-order necessary optimality conditions hold:
\begin{subequations}
\label{eq:optimality_conditions}
\begin{align}
r_{\bar \mu}^\textnormal{pr}(\bar u)[v] &= 0 &&\text{for all } v \in V,
\label{eq:optimality_conditions:u}\\
\partial_u \mathcal{J}(\bar u,\bar \mu)[v] - a_{\bar\mu}(v,\bar p) &= 0 &&\text{for all } v \in V,
\label{eq:optimality_conditions:p}\\
(\nabla_\mu \mathcal{J}(\bar u,\bar \mu)+\nabla_{\mu} r^\textnormal{pr}_{\bar\mu}(\bar u)[\bar p]) \cdot (\nu-\bar \mu) &\geq 0 &&\text{for all } \nu \in \mathcal{P}.
\label{eq:optimality_conditions:mu}
\end{align}
\end{subequations}
\end{proposition}
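For the box-constrained set $\mathcal{P}$, the stationarity condition \eqref{eq:optimality_conditions:mu} is equivalent to the fixed-point property $\bar\mu = \mathrm{P}_{\mathcal{P}}\big(\bar\mu - t\,(\nabla_\mu \mathcal{J}(\bar u,\bar \mu)+\nabla_{\mu} r^\textnormal{pr}_{\bar\mu}(\bar u)[\bar p])\big)$ for any $t>0$, where $\mathrm{P}_{\mathcal{P}}$ denotes the componentwise projection onto $[\mu_\mathsf{a}, \mu_\mathsf{b}]$. The following sketch illustrates the resulting projected-gradient iteration for a caller-supplied gradient of the reduced functional; it is a generic illustration, not the TR-RB algorithm developed later.

```python
import numpy as np

def proj_box(mu, mu_a, mu_b):
    """Componentwise projection onto P = {mu : mu_a <= mu <= mu_b}."""
    return np.clip(mu, mu_a, mu_b)

def projected_gradient(grad, mu0, mu_a, mu_b, step=0.1, tol=1e-8, maxit=500):
    """Fixed-point iteration mu <- proj_P(mu - step * grad(mu)).

    `grad` is a caller-supplied gradient; in the setting above it would
    return the gradient of the reduced cost functional at mu."""
    mu = proj_box(np.asarray(mu0, dtype=float), mu_a, mu_b)
    for _ in range(maxit):
        mu_next = proj_box(mu - step * grad(mu), mu_a, mu_b)
        if np.linalg.norm(mu_next - mu) <= tol:  # first-order criticality
            return mu_next
        mu = mu_next
    return mu
```

For the quadratic test functional $\hat{\mathcal{J}}(\mu) = \|\mu - c\|^2$ with $c \notin \mathcal{P}$, the iteration converges to the projection of $c$ onto the box, as the fixed-point characterization predicts.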
Note that \eqref{eq:optimality_conditions:u} resembles the state equation \eqref{P.state}. From \eqref{eq:optimality_conditions:p} we deduce the \emph{adjoint -- or dual -- equation} with unique solution $p_{\mu} \in V$ for a fixed $\mu \in \mathcal{P}$, i.e.
\begin{align}
a_\mu(v, p_\mu) = \partial_u \mathcal{J}(u_\mu, \mu)[v]
= j_\mu(v) + 2 k_\mu(v, u_\mu)&&\text{for all } v \in V,
\label{eq:dual_solution}
\end{align}
given the solution $u_\mu \in V$ to the state equation \eqref{P.state}. From \eqref{eq:optimality_conditions:p} we observe that the variable $\bar p$ of the optimal triple solves the dual equation \eqref{eq:dual_solution} for $\bar \mu$. Similarly to the primal solution, we can consider the dual solution map $\mathcal A:\mathcal{P} \to V$, $\mu \mapsto \mathcal A(\mu) := p_\mu$, where $p_\mu$ is the solution of \eqref{eq:dual_solution} for the parameter $\mu$. In particular, $\bar p = p_{\bar \mu}$. For given $u, p \in V$, we also introduce the dual residual $r_\mu^\textnormal{du}(u, p) \in V'$ associated with \eqref{eq:dual_solution} by
\begin{align}
r_\mu^\textnormal{du}(u, p)[v] := j_\mu(v) + 2k_\mu(v, u) - a_\mu(v, p)&&\text{for all }v \in V.
\label{eq:dual_residual}
\end{align}
In addition, from the dual equation \eqref{eq:dual_solution}, we obtain the following formulation for the dual sensitivities.
\begin{proposition}[Fr\'echet derivative of the dual solution map]
\label{prop:dual_solution_dmu_eta}
Considering the dual solution map $\mathcal A:\mathcal{P} \to V$, $\mu \mapsto p_\mu$, its directional derivative $d_{\eta} p_\mu\in V$ w.r.t.~a direction $\eta \in \mathcal{P}$ is the solution of
\begin{equation} \label{eq:dual_sens}
\begin{split}
a_\mu(q, d_{\eta} p_\mu) &= -\partial_\mu a_\mu(q, p_\mu)\cdot \eta + d_\mu \partial_u \mathcal{J}(u_\mu, \mu)[q] \cdot \eta\\
&= \partial_\mu r_\mu^\textnormal{du}(u_\mu, p_\mu)[q] \cdot \eta + 2 k_\mu(q, d_{\eta}u_\mu)
\end{split}
\end{equation}
for all $q \in V$, where the latter equality holds for quadratic $\mathcal{J}$ as in \eqref{P.argmin}.
\end{proposition}
\begin{proof}
Note that $\mathcal A$ is well defined because the bilinear form $a_{\mu}(\cdot \, , \cdot)$ is continuous and coercive. For a proof of the other claims we refer to \cite{HPUU2009}, for instance.
\end{proof}
Furthermore, we can compute first-order derivatives of $\hat{\mathcal{J}}$.
\begin{proposition}[Gradient of $\hat{\mathcal{J}}$]
\label{prop:grad_Jhat}
For given $\mu\in\mathcal{P}$, the gradient of $\hat{\mathcal{J}}$, $\nabla_\mu\hat{\mathcal{J}}: \mathcal{P} \to \mathbb{R}^P$, is given by
\begin{align*}
\nabla_{\mu}\hat{\mathcal{J}}(\mu) &= \nabla_{\mu}\mathcal{J}(u_{\mu}, \mu)+\nabla_{\mu}r_\mu^\textnormal{pr}(u_{\mu})[p_{\mu}]\\
&= \nabla_\mu \Theta(\mu) + \nabla_\mu j_\mu(u_\mu) + \nabla_\mu k_\mu(u_\mu, u_\mu) + \nabla_\mu l_\mu(p_\mu) - \nabla_\mu a_\mu(u_\mu, p_\mu).
\end{align*}
\end{proposition}
\begin{proof}
This follows from \eqref{eq:primal_residual}, \eqref{eq:primal_sens}, \eqref{eq:dual_solution} and \eqref{P.argmin}, cf.~\cite{HPUU2009}.
\end{proof}
\begin{remark}
The proof of Proposition~{\rm{\ref{prop:grad_Jhat}}} relies on the fact that both $u_\mu$ and $p_\mu$ belong to the same space $V$; cf.~\emph{\cite{HPUU2009}}. In particular, for any $\mu\in\mathcal{P}$, we have $\nabla_\mu \hat{\mathcal{J}}(\mu) = \nabla_\mu \mathcal L(u_\mu,\mu,p_\mu)$.
\end{remark}
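As a numerical illustration, the adjoint representation of $\nabla_\mu \hat{\mathcal{J}}$ can be checked on a small instance. The following Python sketch assumes a hypothetical toy problem $A(\mu)u = l$ with $A(\mu) = A_0 + \mu A_1$ symmetric positive definite and quadratic cost $j\cdot u + u^\top K u$; all names (\texttt{A0}, \texttt{A1}, \texttt{K}, \texttt{j}, \texttt{l}) are illustrative and not part of any implementation accompanying this paper.

```python
import numpy as np

# Hypothetical toy instance: A(mu) = A0 + mu*A1 symmetric positive definite,
# quadratic cost J(u) = j.u + u^T K u (no explicit mu-dependence in J).
rng = np.random.default_rng(0)
n = 6
A0 = np.diag(np.arange(2.0, 2.0 + n))      # coercive baseline
A1 = np.eye(n)                             # affine parameter direction
M = rng.standard_normal((n, n))
K = M @ M.T / 10.0                         # symmetric PSD quadratic term "k"
j = rng.standard_normal(n)
l = rng.standard_normal(n)

def J_hat(mu):
    u = np.linalg.solve(A0 + mu * A1, l)   # primal (state) equation
    return j @ u + u @ K @ u

def grad_J_hat(mu):
    A = A0 + mu * A1
    u = np.linalg.solve(A, l)
    p = np.linalg.solve(A.T, j + 2 * K @ u)  # dual equation for p_mu
    # gradient = d_mu J + d_mu r^pr(u)[p]; since J and l carry no explicit
    # mu-dependence here, only the term -p^T (dA/dmu) u survives
    return -p @ A1 @ u

mu, h = 0.7, 1e-6
fd = (J_hat(mu + h) - J_hat(mu - h)) / (2 * h)
print(abs(grad_J_hat(mu) - fd))            # agrees up to finite-difference error
```

The adjoint gradient matches a central finite difference of $\hat{\mathcal{J}}$ to high accuracy, at the cost of one additional (dual) solve per parameter.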
For $\bar\mu$ satisfying the first-order necessary optimality conditions, $\bar \mu$ is a stationary point of the cost functional $\hat{\mathcal{J}}$. Thus, $\bar \mu$ can be a local minimum, a saddle point or a local maximum of $\hat{\mathcal{J}}$ (and obviously the same relationship holds between $(\bar u,\bar \mu)$ and $\mathcal{J}$). We therefore consider second-order sufficient optimality conditions in order to characterize local minima of $\hat{\mathcal{J}}$, which requires its Hessian.
\begin{proposition}[Hessian of $\hat{\mathcal{J}}$]
\label{prop:hessian_Jhat}
The Hessian of $\hat{\mathcal{J}}$, $\hat{\mathcal{H}}_\mu := \mathcal{H}_\mu \hat{\mathcal{J}}: \mathcal{P} \to \mathbb{R}^{P \times P}$, is determined by its application to a direction $\nu\in \mathbb{R}^P$, given by
\begin{align}
\hat{\mathcal{H}}_\mu(\mu) \cdot \nu
= \nabla_\mu\Big(&\partial_u \mathcal{J}(u_\mu, \mu)[d_{\nu}u_\mu] + l_\mu(d_{\nu}p_\mu) - a_\mu(d_{\nu}u_\mu, p_\mu) - a_\mu(u_\mu, d_{\nu}p_\mu)
\notag\\
&+\big(\partial_\mu \mathcal{J}(u_\mu, \mu)+ \partial_\mu l_\mu(p_\mu)- \partial_\mu a_\mu(u_\mu, p_\mu)\big)\cdot \nu\Big),
\notag
\end{align}
where $u_\mu, p_\mu \in V$ denote the primal and dual solutions, respectively.
For a quadratic $\mathcal{J}$ as in \eqref{P.argmin} the above formula simplifies to
\begin{align}
\hat{\mathcal{H}}_\mu(\mu) \cdot \nu
= \nabla_\mu\Big(&j_\mu(d_{\nu}u_\mu) + 2k_\mu(d_{\nu}u_\mu, u_\mu) + l_\mu(d_{\nu}p_\mu) - a_\mu(d_{\nu}u_\mu, p_\mu) - a_\mu(u_\mu, d_{\nu}p_\mu)
\notag\\
&+\big(\partial_\mu \mathcal{J}(u_\mu, \mu)+ \partial_\mu l_\mu(p_\mu)- \partial_\mu a_\mu(u_\mu, p_\mu)\big)\cdot \nu\Big).
\notag
\end{align}
\end{proposition}
\begin{proof}
See, e.g., \cite{HPUU2009} for the first part. The second one follows from Remark~\ref{prop:gateaux_wrt_V}.
\end{proof}
\begin{proposition}[Second-order sufficient optimality conditions]\label{prop:second_order}
Let Assumption~{\rm{\ref{asmpt:differentiability}}} hold true. Suppose that $\bar \mu\in \mathcal{P}$ satisfies the first-order necessary optimality conditions \eqref{eq:optimality_conditions}. If $\hat{\mathcal{H}}_\mu(\bar \mu)$ is positive definite on the \emph{critical cone} $\mathcal C(\bar\mu)$ at $\bar\mu\in\mathcal{P}$, i.e., if $\nu \cdot(\hat{\mathcal{H}}_\mu(\bar \mu)\cdot \nu) > 0$ for all $\nu\in\mathcal C(\bar\mu)\setminus\{0\}$, with
\begin{align*}
\mathcal C(\bar\mu):= \big\{\nu\in\mathbb{R}^P\,\big|\, \exists \mu\in\mathcal{P},\,c_1>0: \nu = c_1(\mu-\bar \mu),\, \nabla_\mu \hat{\mathcal{J}}(\bar\mu)\cdot \nu = 0 \big\},
\end{align*}
then $\bar \mu$ is a strict local minimum of \eqref{Phat}.
\end{proposition}
\begin{proof}
For this result we refer to \cite{CasTr15,Nocedal06}, for instance.
\end{proof}
\section{High dimensional discretization and model order reduction}
\label{sec:mor}
We first discretize the optimization problem \eqref{P} as well as the corresponding optimality conditions
using the classical Ritz-Galerkin projection into a possibly high dimensional approximation space $V_h \subset V$, such as conforming Finite Elements.
Note that we restrict ourselves to a conforming approximation for simplicity and that we do not further specify the choice of $V_h$, as neither impacts the analysis below.
Based on this idea, we then derive different ways for the ROM using the Reduced Basis method with possibly different reduced primal and dual
state spaces.
Thus, the resulting ROM optimality system will in general not be equivalent to a Ritz-Galerkin projection of the FOM system onto a reduced space $V_{r} \subset V_h$. For this reason, we will introduce a non-conforming dual-corrected (NCD-corrected) approach; cf.~Section~\ref{sec:ncd_approach}.
\subsection{FOM for the optimality system}
\label{sec:problem_fom}
For the discretization of the optimization problem we assume that a finite-dimensional subspace $V_h \subset V$ is given and obtain the FOM for the optimality system of \eqref{P} by Ritz-Galerkin projection of equations \eqref{eq:optimality_conditions} onto $V_h$.
In particular, we have for each $\mu \in \mathcal{P}$ the solution $u_{h, \mu} \in V_h$ of the \emph{discrete primal equation}
\begin{align}
a_{\mu}(u_{h, \mu}, v_h) = l_{\mu}(v_h) &&\text{for all } v_h \in V_h,
\label{eq:state_h}
\end{align}
and hence $r_{\mu}^\textnormal{pr}(u_{h, \mu})[v_h] = 0$ for all $v_h \in V_h$, $\mu \in \mathcal{P}$.
We also have for each $\mu \in \mathcal{P}$ the solution $p_{h, \mu} \in V_h$ of the \emph{discrete dual equation}
\begin{align}
a_{\mu}(v_h, p_{h, \mu}) = \partial_u \mathcal{J}(u_{h, \mu}, \mu)[v_h] = j_{\mu}(v_h) + 2 k_{\mu}(v_h, u_{h, \mu}) &&\text{for all } v_h \in V_h,
\label{eq:dual_solution_h}
\end{align}
and hence $r_{\mu}^\textnormal{du}(u_{h, \mu}, p_{h, \mu})[v_h] = 0$ for all $v_h \in V_h$, $\mu \in \mathcal{P}$. Similarly, the \emph{discrete primal sensitivity equations} for solving for $d_{\nu} u_{h, \mu} \in V_h$ as well as
\emph{discrete dual sensitivity equations} for solving for $d_{\nu} p_{h, \mu} \in V_h$ at any direction $\nu \in \mathbb{R}^P$
follow analogously to Propositions \ref{prop:solution_dmu_eta} and \ref{prop:dual_solution_dmu_eta}.
Furthermore, $\hat{\mathcal{J}}$ is approximated by the \emph{discrete reduced functional}
\begin{align}
\hat{\mathcal{J}}_h(\mu) := \mathcal{J}(u_{h, \mu}, \mu) = \mathcal L(u_{h,\mu},\mu,p_h) && \text{for all } p_h\in V_h,
\label{eq:Jhat_h}
\end{align}
where $u_{h, \mu} \in V_h$ is the solution of \eqref{eq:state_h} and we formulate the discrete optimization problem
\begin{align}
\min_{\mu \in \mathcal{P}} \hat{\mathcal{J}}_h(\mu).
\tag{$\hat{\textnormal{P}}_h$}\label{Phat_h}
\end{align}
Further, $\bar\mu_h$ denotes a locally optimal solution to \eqref{Phat_h} satisfying the first- and second-order optimality conditions.
\begin{remark}
Since $u_{h,\mu}$ and $p_{h,\mu}$ belong to the same space $V_h$, Propositions~{\rm\ref{prop:first_order_opt_cond}}--{\rm\ref{prop:grad_Jhat}} and {\rm\ref{prop:hessian_Jhat}}--{\rm\ref{prop:second_order}} hold for the FOM as well, with all quantities replaced by their discrete counterparts.
\end{remark}
As usual in the context of RB methods, we eliminate the issue of ``truth'' by assuming that the high dimensional space $V_h$ is accurate enough to approximate the true solution.
\begin{assumption}[This is the ``truth'']
\label{asmpt:truth}
We assume that the primal discretization error $\|u_\mu - u_{h, \mu}\|$, the dual error $\|p_\mu - p_{h, \mu}\|$,
the primal sensitivity errors $\|d_{\mu_i} u_\mu - d_{\mu_i} u_{h, \mu}\|$ and the dual sensitivity errors $\|d_{\mu_i} p_\mu - d_{\mu_i} p_{h, \mu}\|$
are negligible for all $\mu \in \mathcal{P}$, $1 \leq i \leq P$.
\end{assumption}
To define suitable ROMs, in what follows, we assume that we have computed problem adapted RB spaces $V_{r}^\textnormal{pr}, V_{r}^\textnormal{du} \subset V_h$,
the construction of which is detailed in Section~\ref{sec:construct_RB}. We stress here that $V_{r}^\textnormal{pr}$ and $V_{r}^\textnormal{du}$ might not coincide; this requires a more careful discussion of the RB approximation of the optimality system \eqref{eq:optimality_conditions}.
\subsection{ROM for the optimality system -- Standard approach}
\label{sec:standard_approach}
Given a RB space $V_{r}^\textnormal{pr} \subset V_h$ of low dimension $n := \dim V_{r}^\textnormal{pr}$ and dual RB space $V_{r}^\textnormal{du} \subset V_h$ of low dimension $m := \dim V_{r}^\textnormal{du}$, we obtain the RB approximation of state and adjoint equations as follows:
\begin{subequations}
\label{eq:optimality_conditionsRB}
\begin{itemize}
\item RB approximation for \eqref{eq:optimality_conditions:u}: For each $\mu \in \mathcal{P}$ the primal variable $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$
of the \emph{RB approximate primal equation} is defined through
%
\begin{align}
a_{\mu}(u_{{r}, \mu}, v_{r}) = l_{\mu}(v_{r}) &\qquad \text{for all } v_{r} \in V_{r}^\textnormal{pr}.
\label{eq:state_red}
\end{align}
%
\item RB approximation for \eqref{eq:optimality_conditions:p}: For each $\mu \in \mathcal{P}$, $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ the dual/adjoint variable $p_{{r}, \mu} \in V_{r}^\textnormal{du}$ satisfies the \emph{RB approximate dual equation} through
\begin{align}
a_{\mu}(q_{r}, p_{{r}, \mu}) = \partial_u \mathcal{J}(u_{{r}, \mu}, \mu)[q_{r}] = j_{\mu}(q_{r}) + 2 k_{\mu}(q_{r}, u_{{r}, \mu}) &&\text{for all } q_{r} \in V_{r}^\textnormal{du}.
\label{eq:dual_solution_red}
\end{align}
\end{itemize}
\end{subequations}
Analogously to Proposition~\ref{prop:solution_dmu_eta}, we define the \emph{RB solution map} $\mathcal{S}_{r}:\mathcal{P} \to V_{r}^\textnormal{pr}$ by $\mu \mapsto u_{{r}, \mu}$ and, analogously to Proposition~\ref{prop:dual_solution_dmu_eta}, the \emph{RB dual solution map} $\mathcal{A}_{r}:\mathcal{P} \to V_{r}^\textnormal{du}$ by $\mu \mapsto p_{{r}, \mu}$, where $u_{{r}, \mu}$ and $p_{{r}, \mu}$ denote the primal and dual reduced solutions of \eqref{eq:state_red} and \eqref{eq:dual_solution_red}, respectively.
To approximate \eqref{Phat_h}, we introduce the \emph{RB reduced functional} by
\begin{align}
\hat{J}_{r}(\mu) := \mathcal{J}(u_{{r}, \mu}, \mu)=\mathcal{J}(\mathcal S_r(\mu), \mu),&&
\text{where $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ is the solution of \eqref{eq:state_red}}
\label{eq:Jhat_red}
\end{align}
instead of $\hat{\mathcal{J}}_h$ and the
problem of finding a locally optimal solution $\bar \mu_{r}$ of
\begin{align}
\min_{\mu \in \mathcal{P}} \hat{J}_{r}(\mu).
\label{eq:Phat_red_uncorrected}
\end{align}
Now, a solution to the optimality system \eqref{eq:optimality_conditions} is approximated by the RB triple $(u_{{r},\bar\mu_{r}},\bar\mu_{r},p_{{r},\bar\mu_{r}})$.
As proposed in \cite{QGVW2017}, an approximation of the gradient of $\hat{J}_{r}$ can be computed from the formula in Proposition~\ref{prop:grad_Jhat} by replacing $u_{\mu}$ and $p_{\mu}$ with their RB counterparts. However, if $V^\textnormal{pr}_{r}$ and $V^\textnormal{du}_{r}$ are chosen to be different, it cannot be guaranteed in general that the computed quantity is the actual gradient of $\hat{J}_{r}$. To see this, we first consider the Lagrangian and note that, for $1\leq i\leq P$ and all $p\in V$, it holds
\begin{equation}
\hat{J}_{r}(\mu) = \mathcal{L}(u_{{r},\mu},\mu,p), \qquad
\label{Gradient_lagrangian}
\big(\nabla_\mu \hat{J}_{r}(\mu)\big)_i = \partial_u \mathcal{L}(u_{{r},\mu},\mu,p)[d_{\mu_i}u_{{r},\mu}]
+ d_{\mu_i} \mathcal{L}(u_{{r},\mu},\mu,p).
\end{equation}
Now, following \cite{QGVW2017}, we define the \emph{inexact gradient} $\widetilde\nabla_\mu \hat{J}_{r}:\mathcal{P} \to \mathbb{R}^P$ by
\begin{equation}
\label{naive:red_grad}
\big(\widetilde\nabla_\mu \hat{J}_{r}(\mu)\big)_i := d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu) + d_{\mu_i} r_\mu^\textnormal{pr}(u_{{r}, \mu})[p_{{r}, \mu}] = d_{\mu_i} \mathcal{L}(u_{{r},\mu},\mu,p_{{r},\mu})
\end{equation}
for all $1 \leq i \leq P$ and $\mu \in \mathcal{P}$, where $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $p_{{r}, \mu} \in V_{r}^\textnormal{du}$ denote the primal and approximate dual reduced solutions of \eqref{eq:state_red} and \eqref{eq:dual_solution_red}, respectively. With the tilde we stress that $\widetilde\nabla_\mu \hat{J}_{r}(\mu)$ is not the actual gradient of $\hat{J}_{r}$, but only an approximation of it. Choosing $p= p_{{r},\mu}\in V^\textnormal{du}_{r}$ in \eqref{Gradient_lagrangian} and considering \eqref{naive:red_grad} leads to
\begin{align*}
\big(\nabla_\mu \hat{J}_{r}(\mu)\big)_i & = \partial_u \mathcal{L}(u_{{r},\mu},\mu,p_{{r},\mu})[d_{\mu_i}u_{{r},\mu}] + \big(\widetilde\nabla_\mu \hat{J}_{r}(\mu)\big)_i.
\end{align*}
Note that, in general, it does not hold that $\partial_u\mathcal L(u_{{r},\mu},\mu, p_{{r},\mu})= 0$, since \eqref{eq:dual_solution_red} is not the dual equation with respect to the optimization problem \eqref{eq:Phat_red_uncorrected}, cf.~\cite[Section~1.6.4]{HPUU2009}, which would only be true if $V^\textnormal{du}_{r}\subseteq V^\textnormal{pr}_{r}$.
Thus, with the choice made in \cite{QGVW2017}, \eqref{naive:red_grad} defines only an approximation of the true gradient of $\hat{J}_{r}$. This introduces an additional approximation error in reconstructing the solution of the optimality system \eqref{eq:optimality_conditions}, which is clearly visible in our numerical experiments (see Section~\ref{sec:mmexc_example}): the standard RB approach leads to a significant loss of accuracy, requiring additional steps to enrich the RB space and close this gap. Based on these remarks, we therefore propose to add a correction term to $\hat{J}_{r}$.
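The gap between the inexact gradient \eqref{naive:red_grad} and the true gradient of $\hat{J}_{r}$ can be reproduced on a small example. The Python sketch below is a hypothetical minimal instance (Euclidean inner product; all names such as \texttt{Vpr}, \texttt{Vdu}, \texttt{A0}, \texttt{K} are illustrative): for $V_{r}^\textnormal{pr} = V_{r}^\textnormal{du}$ the inexact gradient coincides with a finite-difference gradient of $\hat{J}_{r}$, while for distinct spaces a systematic gap remains.

```python
import numpy as np

# Hypothetical toy instance: A(mu) = A0 + mu*A1 SPD, cost J(u) = j.u + u^T K u.
rng = np.random.default_rng(1)
n = 6
A0 = np.diag(np.arange(2.0, 2.0 + n)); A1 = np.eye(n)
M = rng.standard_normal((n, n)); K = M @ M.T / 10.0
j = rng.standard_normal(n); l = rng.standard_normal(n)

def rb(mu, Vpr, Vdu):
    """RB primal/dual solutions of the reduced state and dual equations."""
    A = A0 + mu * A1
    u_r = Vpr @ np.linalg.solve(Vpr.T @ A @ Vpr, Vpr.T @ l)
    p_r = Vdu @ np.linalg.solve(Vdu.T @ A.T @ Vdu, Vdu.T @ (j + 2 * K @ u_r))
    return u_r, p_r

def J_r(mu, Vpr, Vdu):
    u_r, _ = rb(mu, Vpr, Vdu)
    return j @ u_r + u_r @ K @ u_r

def inexact_grad(mu, Vpr, Vdu):
    u_r, p_r = rb(mu, Vpr, Vdu)
    return -p_r @ A1 @ u_r     # d_mu L(u_r, mu, p_r): only the -p^T A1 u term

mu, h = 0.5, 1e-6
Vpr = np.linalg.qr(rng.standard_normal((n, 2)))[0]   # primal RB basis
Vdu = np.linalg.qr(rng.standard_normal((n, 2)))[0]   # different dual RB basis
fd = lambda Vp, Vd: (J_r(mu + h, Vp, Vd) - J_r(mu - h, Vp, Vd)) / (2 * h)
# shared spaces: the inexact gradient IS the gradient of J_r
print(abs(inexact_grad(mu, Vpr, Vpr) - fd(Vpr, Vpr)))   # ~ 0
# distinct spaces: in general a systematic gap remains
print(abs(inexact_grad(mu, Vpr, Vdu) - fd(Vpr, Vdu)))
```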
\subsection{ROM for the optimality system -- NCD-corrected approach}
\label{sec:ncd_approach}
Following the primal-dual RB approach for linear output functionals \cite[Section~2.4]{HAA2017}, it is more suitable to add a correction term to the output functional for which improved error estimates are available. We seek to minimize the Lagrangian corresponding to problem \eqref{P}. A similar approach, in the context of adaptive finite elements, can be found in \cite{BKR2000,Rannacher2006}.
We utilize \eqref{eq:dual_solution_red} to extend the primal-dual RB approach of \cite[Section~2.4]{HAA2017} to quadratic output functionals and define the \emph{NCD-corrected RB reduced functional}
\begin{align}
{{\Jhat_{r}}}(\mu) := \mathcal{L}(u_{{r},\mu},\mu,p_{{r},\mu}) = \hat{J}_{r}(\mu) + r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]
\label{eq:Jhat_red_corected}
\end{align}
with $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $p_{{r}, \mu} \in V_{r}^\textnormal{du}$ the solutions of \eqref{eq:state_red} and \eqref{eq:dual_solution_red}, respectively.
Note that ${{\Jhat_{r}}}$ coincides with the functional $\hat{J}_{r}$ in \eqref{eq:Jhat_red} if $V_{r}^\textnormal{du} = V_{r}^\textnormal{pr}$.
We then consider the \emph{RB reduced optimization problem} of finding a locally optimal solution $\bar \mu_{r}$ of
\begin{align}
\min_{\mu \in \mathcal{P}} {{\Jhat_{r}}}(\mu).
\tag{$\hat{\textnormal{P}}_{r}$}\label{Phat_{r}}
\end{align}
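The correction term $r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]$ in \eqref{eq:Jhat_red_corected} is cheap to evaluate and, by Galerkin orthogonality, vanishes whenever $V_{r}^\textnormal{du} = V_{r}^\textnormal{pr}$. The following Python sketch (a hypothetical toy instance in the Euclidean setting; all variable names are illustrative) evaluates both the standard and the NCD-corrected reduced functional and checks this orthogonality property.

```python
import numpy as np

# Hypothetical toy instance: A(mu) = A0 + mu*A1 SPD, cost J(u) = j.u + u^T K u.
rng = np.random.default_rng(2)
n = 8
A0 = np.diag(np.arange(2.0, 2.0 + n)); A1 = np.eye(n)
M = rng.standard_normal((n, n)); K = M @ M.T / 10.0
j = rng.standard_normal(n); l = rng.standard_normal(n)
mu = 0.3
A = A0 + mu * A1

u_h = np.linalg.solve(A, l)                       # "truth" primal solution
J_h = j @ u_h + u_h @ K @ u_h

Vpr = np.linalg.qr(rng.standard_normal((n, 3)))[0]
Vdu = np.linalg.qr(rng.standard_normal((n, 3)))[0]
u_r = Vpr @ np.linalg.solve(Vpr.T @ A @ Vpr, Vpr.T @ l)
p_r = Vdu @ np.linalg.solve(Vdu.T @ A.T @ Vdu, Vdu.T @ (j + 2 * K @ u_r))

J_r = j @ u_r + u_r @ K @ u_r                     # standard RB functional
corr = p_r @ (l - A @ u_r)                        # r^pr(u_r)[p_r]
J_r_ncd = J_r + corr                              # NCD-corrected functional
print(abs(J_h - J_r), abs(J_h - J_r_ncd))         # correction typically helps

# with V_du = V_pr the correction vanishes (Galerkin orthogonality)
p_same = Vpr @ np.linalg.solve(Vpr.T @ A.T @ Vpr, Vpr.T @ (j + 2 * K @ u_r))
print(abs(p_same @ (l - A @ u_r)))                # ~ machine precision
```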
The actual gradient of the NCD-corrected RB reduced functional is given in the next proposition, which can be proved following \cite[Section~1.6.2]{HPUU2009}.
\begin{proposition}[Gradient of the NCD-corrected RB reduced functional]
\label{prop:true_corrected_reduced_gradient_adj}
The $i$-th component of the true gradient of ${{\Jhat_{r}}}$ is given by
\begin{align*}
\big(\nabla_\mu {{\Jhat_{r}}}(\mu)\big)_i & = d_{\mu_i}\mathcal{J}(u_{{r},\mu},\mu) + d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}+w_{{r},\mu}] - d_{\mu_i}r^\textnormal{du}_\mu (u_{{r},\mu},p_{{r},\mu})[z_{{r},\mu}]
\end{align*}
where $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $p_{{r}, \mu} \in V_{r}^\textnormal{du}$ denote the RB approximate primal and dual solutions of \eqref{eq:state_red} and \eqref{eq:dual_solution_red}, $z_{{r},\mu} \in V_{r}^\textnormal{du}$ solves
\begin{equation}
\label{A2:z_eq}
a_\mu(z_{{r},\mu},q) = -r_\mu^\textnormal{pr}(u_{{r},\mu})[q] \quad \forall q\in V^\textnormal{du}_r
\end{equation}
and $w_{{r},\mu} \in V_{r}^\textnormal{pr}$ solves
\begin{equation}
\label{A2:w_eq}
a_\mu(v,w_{{r},\mu}) = r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})[v]-2k_\mu(z_{{r},\mu},v), \quad \forall v\in V^\textnormal{pr}_r.
\end{equation}
\end{proposition}
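The adjoint representation above can be verified numerically against a finite difference of the NCD-corrected functional. The Python sketch below is a hypothetical toy instance (Euclidean inner product, symmetric $A(\mu)$; all names are illustrative) implementing the auxiliary problems \eqref{A2:z_eq} and \eqref{A2:w_eq} in matrix form.

```python
import numpy as np

# Hypothetical toy instance: A(mu) = A0 + mu*A1 SPD, cost J(u) = j.u + u^T K u,
# with distinct primal/dual RB spaces Vpr, Vdu.
rng = np.random.default_rng(3)
n = 7
A0 = np.diag(np.arange(2.0, 2.0 + n)); A1 = np.eye(n)
M = rng.standard_normal((n, n)); K = M @ M.T / 10.0
j = rng.standard_normal(n); l = rng.standard_normal(n)
Vpr = np.linalg.qr(rng.standard_normal((n, 3)))[0]
Vdu = np.linalg.qr(rng.standard_normal((n, 3)))[0]

def rb(mu):
    A = A0 + mu * A1
    u_r = Vpr @ np.linalg.solve(Vpr.T @ A @ Vpr, Vpr.T @ l)
    p_r = Vdu @ np.linalg.solve(Vdu.T @ A.T @ Vdu, Vdu.T @ (j + 2 * K @ u_r))
    return A, u_r, p_r

def J_ncd(mu):
    # NCD-corrected reduced functional: J(u_r) + r^pr(u_r)[p_r]
    A, u_r, p_r = rb(mu)
    return j @ u_r + u_r @ K @ u_r + p_r @ (l - A @ u_r)

def grad_J_ncd(mu):
    A, u_r, p_r = rb(mu)
    # z in V_du:  a(z, q) = -r^pr(u_r)[q]                for all q in V_du
    z = Vdu @ np.linalg.solve(Vdu.T @ A @ Vdu, -Vdu.T @ (l - A @ u_r))
    # w in V_pr:  a(v, w) = r^du(u_r, p_r)[v] - 2k(z, v) for all v in V_pr
    rhs = j + 2 * K @ u_r - A.T @ p_r - 2 * K @ z
    w = Vpr @ np.linalg.solve(Vpr.T @ A.T @ Vpr, Vpr.T @ rhs)
    # d_mu r^pr(u)[v] = -v^T A1 u ;  d_mu r^du(u, p)[q] = -p^T A1 q
    return -(p_r + w) @ A1 @ u_r + p_r @ A1 @ z

mu, h = 0.4, 1e-6
fd = (J_ncd(mu + h) - J_ncd(mu - h)) / (2 * h)
print(abs(grad_J_ncd(mu) - fd))   # matches up to finite-difference error
```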
\subsection{A posteriori error analysis}
\label{sec:a_post_error_estimates}
A posteriori error estimates are required to control the accuracy of the reduced order model.
In addition, we use them for the error aware TR method, which is explained in Section~\ref{sec:TR}.
We derive a posteriori error estimates for all reduced terms that are needed for the TR method. Moreover, we suggest further improvements for the reduction of sensitivities and gradients.
From a model reduction perspective, these error estimates need to be computed efficiently, so that the cost of evaluating them for many parameters is negligible.
Note that Assumption~\ref{asmpt:parameter_separable} is crucial for this, since it allows us to precompute most of the required terms.
For any functional $l \in V_h'$ or bilinear form $a: V_h \times V_h \to \mathbb{R}$, we denote their dual and operator norms $\|l\|$ and $\|a\|$ by the continuity constants $\cont{l}$ and $\cont{a}$, respectively. The norm $\|\cdot\|$ of the residuals is likewise understood as the dual norm in $V_h'$.
For $\mu \in \mathcal{P}$, we denote the coercivity constant of $a_{\mu}$ w.r.t.~the $V_h$-norm by $\underline{a_{\mu}} > 0$.
\subsubsection{Standard RB estimates for the optimality system}
\label{sec:estimation_for_the_standard_and_corrected_reduced_order_model}
We start with the residual based a posteriori error estimation for the primal variable, which is a standard result from RB theory and has extensively been used in the literature. For a proof, we refer to \cite{rozza2007}.
\begin{proposition}[Upper bound on the primal model reduction error] \label{prop:primal_rom_error}
For $\mu \in \mathcal{P}$ let $u_{h, \mu} \in V_h$ be the solution of \eqref{eq:state_h} and $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$
the solution of \eqref{eq:state_red}. Then it holds
\begin{align}
\|u_{h, \mu} - u_{{r}, \mu}\| \leq \Delta_\textnormal{pr}(\mu) := \underline{a_{\mu}}^{-1}\, \|r_{\mu}^\textnormal{pr}(u_{{r}, \mu})\|.
\notag
\end{align}
\end{proposition}
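In the Euclidean setting, where the coercivity constant of a symmetric positive definite system matrix is its smallest eigenvalue, the bound of Proposition~\ref{prop:primal_rom_error} can be checked directly. The Python sketch below is a hypothetical toy instance (illustrative names throughout).

```python
import numpy as np

# Hypothetical toy instance: verify ||u_h - u_r|| <= ||r^pr(u_r)|| / alpha,
# with alpha the smallest eigenvalue (coercivity constant) of the SPD A(mu).
rng = np.random.default_rng(4)
n = 10
A0 = np.diag(np.arange(2.0, 2.0 + n)); A1 = np.eye(n)
l = rng.standard_normal(n)
mu = 0.6
A = A0 + mu * A1

u_h = np.linalg.solve(A, l)                              # FOM solution
Vpr = np.linalg.qr(rng.standard_normal((n, 3)))[0]       # RB basis
u_r = Vpr @ np.linalg.solve(Vpr.T @ A @ Vpr, Vpr.T @ l)  # RB solution

alpha = np.linalg.eigvalsh(A).min()                      # coercivity constant
Delta_pr = np.linalg.norm(l - A @ u_r) / alpha           # residual-based bound
err = np.linalg.norm(u_h - u_r)
print(err, Delta_pr)                                     # err <= Delta_pr
```

The estimator only requires the reduced solution and the residual, not $u_{h,\mu}$, which is what makes it usable in the online phase.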
For the reduced dual problem, a similar idea can be used to derive the following estimation, accounting for the fact that $p_{{r}, \mu}$ is not a Galerkin projection of
$p_{h, \mu}$. For a proof, we refer to \cite[Lemma 3]{QGVW2017}.
\begin{proposition}[Upper bound on the dual model reduction error]
\label{prop:dual_rom_error}
For $\mu \in \mathcal{P}$, let $p_{h, \mu} \in V_h$ be the solution of \eqref{eq:dual_solution_h} and $p_{{r}, \mu} \in V_{r}^\textnormal{du}$
the solution of \eqref{eq:dual_solution_red}. Then it holds
\begin{align}
\|p_{h, \mu} - p_{{r}, \mu}\| &\leq \Delta_\textnormal{du}(\mu) := \underline{a_{\mu}}^{-1}\big(2 \cont{k_{\mu}}\;\Delta_\textnormal{pr}(\mu) + \|r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r}, \mu})\|\big).
\notag
\end{align}
\end{proposition}
In the next proposition we state the result of the standard approach from \cite[Theorem~4]{QGVW2017}. Furthermore, we show an improved version which uses, in contrast to \cite{QGVW2017}, the NCD-corrected reduced functional, resulting in a higher-order a posteriori upper bound without lower-order terms.
\begin{proposition}[Upper bound on the model reduction error of the reduced output]
\label{prop:Jhat_error}
\hfill
\begin{enumerate}[(i)]
\item [\emph{(i)}] With the notation from above, we have for the standard RB reduced cost functional
\begin{align}
|\hat{\mathcal{J}}_h(\mu) - &\hat{J}_r(\mu)| \leq \Delta_{\hat{J}_{r}}(\mu) := \Delta_\textnormal{pr}(\mu) \|r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r},\mu})\| + \Delta_\textnormal{pr}(\mu)^2 \cont{k_{\mu}} + \big|r_{\mu}^\textnormal{pr}(u_{{r}, \mu})[p_{{r},\mu}]\big|.
\notag
\end{align}
\item [\emph{(ii)}] Furthermore, we have for the NCD-corrected RB reduced cost functional (or equivalently for the Lagrangian for any $p\in V_h$)
\begin{align}
|\hat{\mathcal{J}}_h(\mu) - &{{\Jhat_{r}}}(\mu)| = |\mathcal{L}(u_{h,\mu},\mu,p)-\mathcal{L}(u_{{r},\mu},\mu,p)| \leq \Delta_{{{\Jhat_{r}}}}(\mu) := \Delta_\textnormal{pr}(\mu) \|r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r},\mu})\| + \Delta_\textnormal{pr}(\mu)^2 \cont{k_{\mu}}.
\notag
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}
We refer to \cite[Theorem~4]{QGVW2017} for a proof of $(i)$.
Regarding $(ii)$, using the shorthand $e_{h, \mu}^\textnormal{pr} := u_{h, \mu} - u_{{r}, \mu}$ and the identity $a_\mu(e_{h, \mu}^\textnormal{pr},p_{{r},\mu}) = r_{\mu}^\textnormal{pr}(u_{{r}, \mu})[p_{{r},\mu}]$ leads us to
\begin{align*}
|\hat{\mathcal{J}}_h(\mu) - {{\Jhat_{r}}}(\mu)| &= |j_{\mu}(e_{h, \mu}^\textnormal{pr}) + k_{\mu}(u_{h, \mu}, u_{h, \mu}) - k_{\mu}(u_{{r}, \mu}, u_{{r}, \mu}) - a_\mu(e_{h, \mu}^\textnormal{pr},p_{{r},\mu})| \notag\\
&= |r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r},\mu})[e_{h, \mu}^\textnormal{pr}] - 2k_{\mu}(u_{{r}, \mu},e_{h, \mu}^\textnormal{pr})
+ k_{\mu}(u_{h, \mu}, u_{h, \mu}) - k_{\mu}(u_{{r}, \mu}, u_{{r}, \mu})| \notag \\
&\leq \|r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r},\mu})\|\; \|e_{h, \mu}^\textnormal{pr}\| +\cont{k_{\mu}} \; \|e_{h, \mu}^\textnormal{pr}\|^2,
\notag
\end{align*}
where we used the definition of the dual residual in the second equality and the Cauchy-Schwarz inequality for the final estimate.
The assertion follows by using Proposition~\ref{prop:primal_rom_error}.
\end{proof}
\begin{remark}\label{continuity_of_estimator}
The estimator $\Delta_{{{\Jhat_{r}}}}(\mu)$ is continuous w.r.t.~$\mu$,
since the Riesz-representative of the residual is continuous.
\end{remark}
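The higher-order bound $\Delta_{{{\Jhat_{r}}}}(\mu)$ from Proposition~\ref{prop:Jhat_error}(ii) can also be verified numerically. The Python sketch below is a hypothetical toy instance in the Euclidean setting, where the continuity constant of $k$ is the spectral norm of its matrix; all names are illustrative.

```python
import numpy as np

# Hypothetical toy instance: verify the higher-order output bound
# |J_h - J_ncd| <= Delta_pr * ||r^du|| + cont(k) * Delta_pr^2.
rng = np.random.default_rng(5)
n = 9
A0 = np.diag(np.arange(2.0, 2.0 + n)); A1 = np.eye(n)
M = rng.standard_normal((n, n)); K = M @ M.T / 10.0
j = rng.standard_normal(n); l = rng.standard_normal(n)
mu = 0.2
A = A0 + mu * A1

u_h = np.linalg.solve(A, l)
J_h = j @ u_h + u_h @ K @ u_h                            # FOM output

Vpr = np.linalg.qr(rng.standard_normal((n, 3)))[0]
Vdu = np.linalg.qr(rng.standard_normal((n, 3)))[0]
u_r = Vpr @ np.linalg.solve(Vpr.T @ A @ Vpr, Vpr.T @ l)
p_r = Vdu @ np.linalg.solve(Vdu.T @ A.T @ Vdu, Vdu.T @ (j + 2 * K @ u_r))
J_ncd = j @ u_r + u_r @ K @ u_r + p_r @ (l - A @ u_r)    # NCD-corrected output

alpha = np.linalg.eigvalsh(A).min()
Delta_pr = np.linalg.norm(l - A @ u_r) / alpha           # primal estimator
r_du = np.linalg.norm(j + 2 * K @ u_r - A.T @ p_r)       # dual residual norm
bound = Delta_pr * r_du + np.linalg.norm(K, 2) * Delta_pr**2
print(abs(J_h - J_ncd), bound)                           # error <= bound
```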
For the inexact and NCD-corrected gradient, we derive the following a posteriori estimators.
\begin{proposition}[Upper bound on the model reduction error of the gradient of reduced output]
\label{prop:grad_Jhat_error}
\hfill
\begin{enumerate}[(i)]
\item [\emph{(i)}] For the inexact gradient $\widetilde\nabla_\mu \hat{J}_{r}(\mu)$ from the standard-RB approach \eqref{naive:red_grad}, we have
\begin{align}
\big\|\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \widetilde\nabla_\mu \hat{J}_{r}(\mu)\big\|_2 &\leq \Delta_{\widetilde\nabla \hat{J}_{r}}(\mu) = \big\|\underline{\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)}\big\|_2 \quad\quad\text{with}
\notag\\
\big(\underline{\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)}\big)_i
:= 2\Delta_\textnormal{pr}(\mu) \|u_{{r}, \mu}\|\; \cont{d_{\mu_i} k_{\mu}}
&+ \Delta_\textnormal{pr}(\mu)\big(\cont{d_{\mu_i} j_{\mu}} + \cont{d_{\mu_i} a_{\mu}}\; \|p_{{r}, \mu}\|\big)
\notag\\
+ \Delta_\textnormal{du}(\mu)\big(\cont{d_{\mu_i} l_{\mu}} &+ \cont{d_{\mu_i} a_{\mu}}\; \|u_{{r}, \mu}\|\big)
+ \Delta_\textnormal{pr}(\mu)\; \Delta_\textnormal{du}(\mu)\; \cont{d_{\mu_i} a_{\mu}}
+ \Delta_\textnormal{pr}(\mu)^2\; \cont{d_{\mu_i} k_{\mu}}.
\notag
\end{align}
\item [\emph{(ii)}] For the gradient $\nabla_{\mu} {{\Jhat_{r}}}(\mu)$ of the NCD-corrected reduced functional, computed with the adjoint approach
from Definition~{\rm{\ref{prop:true_corrected_reduced_gradient_adj}}}, we have
\begin{align*}
&\big\|\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \nabla_{\mu} {{\Jhat_{r}}}(\mu)\big\|_2 \leq \Delta^*_{\nabla \hat{\mathcal{J}}_{r}}(\mu)
= \big\|\underline{\Delta^*_{\nabla \hat{\mathcal{J}}_{r}}(\mu)}\big\|_2 \quad\quad\text{with} \\
\big(\underline{\Delta^*_{\nabla \hat{\mathcal{J}}_{r}}(\mu)}\big)_i &:=
2\Delta_\textnormal{pr}(\mu) \|u_{{r}, \mu}\|\; \cont{d_{\mu_i} k_{\mu}}
+ \Delta_\textnormal{pr}(\mu)\big(\cont{d_{\mu_i} j_{\mu}} + \cont{d_{\mu_i} a_{\mu}}\; \|p_{{r}, \mu}\|\big)\\
&+ \Delta_\textnormal{du}(\mu)\big(\cont{d_{\mu_i} l_{\mu}} + \cont{d_{\mu_i} a_{\mu}}\; \|u_{{r}, \mu}\|\big)
+ \Delta_\textnormal{pr}(\mu)\; \Delta_\textnormal{du}(\mu)\; \cont{d_{\mu_i} a_{\mu}}
+ \Delta_\textnormal{pr}(\mu)^2\; \cont{d_{\mu_i} k_{\mu}} \\
&+ (\cont{d_{\mu_i}l_{\mu}} + \cont{d_{\mu_i}a_{\mu}} \| u_{{r},\mu} \|) \underline{a_{\mu}}^{-1} \big( \|r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})\|
+ 2\cont{k_\mu} \underline{a_{\mu}}^{-1} \|r_\mu^\textnormal{pr}(u_{{r},\mu})\| \big) \\
&+ \underline{a_{\mu}}^{-1} \|r_\mu^\textnormal{pr}(u_{{r},\mu})\| \big( \cont{d_{\mu_i}j}+ 2\cont{d_{\mu_i}k} \|u_{{r},\mu}\|
+ \cont{d_{\mu_i}a} \|p_{{r},\mu}\| \big).
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}
(i) For $\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)$, we have
\begin{align*}
\big(\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \widetilde\nabla_{\mu} \hat{J}_{r}(\mu)\big)_i
&=d_{\mu_i}\mathcal{J}(u_{h, \mu}, \mu) - d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu) + d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}].
\end{align*}
Regarding the first contribution, we obtain with $\|u_{h, \mu}\| \leq \|e_{h, \mu}^\textnormal{pr}\| + \|u_{{r}, \mu}\|$
\begin{align*}
\big|d_{\mu_i}\mathcal{J}(u_{h, \mu}, \mu) - d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu)\big| &=
| d_{\mu_i}j_{\mu}(e_{h, \mu}^\textnormal{pr}) + d_{\mu_i}k_{\mu}(e_{h, \mu}^\textnormal{pr}, u_{{r}, \mu}) + d_{\mu_i}k_{\mu}(u_{h, \mu}, e_{h, \mu}^\textnormal{pr})|
\\
&\leq \Delta_\textnormal{pr}(\mu)\Big( \cont{d_{\mu_i} j_{\mu}} + \cont{d_{\mu_i} k_{\mu}}\big(2 \|u_{{r}, \mu}\| + \Delta_\textnormal{pr}(\mu)\big)\Big).
\end{align*}
For the other contributions we refer to \cite[Theorem~5]{QGVW2017}.
(ii) For the adjoint estimator $\Delta^*_{\nabla \hat{\mathcal{J}}_{r}}(\mu)$, we have
\begin{align*}
\partial_\mu {{\Jhat_{r}}}(\mu)\cdot\nu & = \partial_\mu \mathcal{J}(u_{{r},\mu},\mu)\cdot\nu + \partial_\mu r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}+w_{{r},\mu}]\cdot\nu
- \partial_\mu r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})[z_{{r},\mu}]\cdot\nu
\end{align*}
and thus
\begin{align*}
\big(\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \nabla_{\mu} {{\Jhat_{r}}}(\mu)\big)_i
&=d_{\mu_i}\mathcal{J}(u_{h, \mu}, \mu) - d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu) + d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}] \\
& \qquad \qquad - d_{\mu_i} r_\mu^\textnormal{pr}(u_{{r},\mu})[w_{{r},\mu}] - d_{\mu_i} r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})[z_{{r},\mu}].
\end{align*}
The terms in the first line are bounded by the estimator $\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)$ from part $(i)$,
while the first term of the second line can be estimated by
\begin{align*}
d_{\mu_i} r_\mu^\textnormal{pr}(u_{{r},\mu})[w_{{r},\mu}] &\leq \cont{d_{\mu_i}l_{\mu}}\|w_{\mu}\| + \cont{d_{\mu_i}a_{\mu}} \| u_{{r},\mu} \| \|w_{\mu}\|.
\end{align*}
The second term can analogously be estimated by
\begin{align*}
d_{\mu_i} r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})[z_{\mu}] \leq \cont{d_{\mu_i}j} \|z_{\mu}\| + 2\cont{d_{\mu_i}k} \| z_{\mu} \| \|u_{{r},\mu}\|
+ \cont{d_{\mu_i}a} \|z_{\mu}\| \|p_{{r},\mu}\|.
\end{align*}
We also have
\begin{align*}
\underline{a_{\mu}} \|w_{\mu}\|^2 &\leq a_{\mu}(w_{\mu},w_{\mu}) = r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})[w_{\mu}]-2k_\mu(z_{\mu},w_{\mu})
\leq \|r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})\| \|w_{\mu}\| + 2\cont{k_\mu} \|z_{\mu}\| \|w_{\mu}\|
\end{align*}
which gives
\begin{equation*}
\|w_{\mu}\| \leq \underline{a_{\mu}}^{-1} \left( \|r_\mu^\textnormal{du}(u_{{r},\mu},p_{{r},\mu})\| + 2\cont{k_\mu} \|z_{\mu}\| \right).
\end{equation*}
For $z_\mu$ we estimate
\begin{align*}
\underline{a_{\mu}} \|z_{\mu}\|^2 &\leq a_{\mu}(z_{\mu},z_{\mu}) = -r_\mu^\textnormal{pr}(u_{{r},\mu})[z_\mu]\leq \|r_\mu^\textnormal{pr}(u_{{r},\mu})\| \|z_{\mu}\|.
\end{align*}
Summing all together gives the assertion.
\end{proof}
In view of Section~\ref{sec:ncd_approach}, we emphasize that the estimator for the NCD-corrected gradient does not reflect a better approximation
of the FOM gradient, since additional terms are added to the standard estimate. Proposition~\ref{prop:Jhat_error}(ii) suggests that there exists an estimator of higher order, which we derive in the following section.
\subsubsection{Sensitivity based approximation and estimation}
\label{sec:sens_estimation}
We derive a sharper estimator for the NCD-corrected gradient by using sensitivities of the reduced primal and dual
solutions. In addition, approximate sensitivities that are computed from the FOM sensitivities suggest an even better approximation of the FOM gradient.
We define the derivatives of the primal and dual solution maps associated with \eqref{eq:optimality_conditionsRB} in direction $\nu \in \mathbb{R}^P$ as the solutions
$d_{\nu} u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $d_{\nu} p_{{r}, \mu} \in V_{r}^\textnormal{du}$ of
\begin{align}
a_\mu(d_{\nu} u_{{r}, \mu}, v_{r}) &= \partial_{\mu} r_\mu^\textnormal{pr}(u_{{r}, \mu})[v_{r}]\cdot\nu &&\text{for all } v_{r} \in V_{r}^\textnormal{pr} \text{ and}\label{eq:true_primal_reduced_sensitivity}\\
a_\mu(q_{r}, d_{\nu} p_{{r}, \mu}) &= -\partial_\mu a_\mu(q_{r}, p_{r, \mu})\cdot\nu + d_\mu\partial_u \mathcal{J}(u_{{r}, \mu}, \mu)[q_{r}]\cdot\nu\notag\\
&= \partial_\mu r_\mu^\textnormal{du}(u_{{r}, \mu}, p_{{r}, \mu})[q_{r}]\cdot\nu + 2k_\mu(q_{r}, d_{\nu} u_{{r}, \mu}) &&\text{for all } q_{r} \in V_{r}^\textnormal{du},
\label{eq:true_dual_reduced_sensitivity}
\end{align}
respectively, analogously to Propositions \ref{prop:solution_dmu_eta} and \ref{prop:dual_solution_dmu_eta},
where the last equality holds for quadratic functionals as in \eqref{P.argmin}.
With these sensitivities we can compute the same gradient of the NCD-corrected RB reduced functional from Proposition~\ref{prop:true_corrected_reduced_gradient_adj} in a different manner.
\begin{proposition}[Gradient of the NCD-corrected RB reduced functional -- Sensitivity approach]
\label{prop:true_corrected_reduced_gradient}
The $i$-th component of the true gradient of ${{\Jhat_{r}}}$, $\nabla_\mu{{\Jhat_{r}}}:\mathcal{P} \to \mathbb{R}^P$, is given by
\begin{align*}
\big(\nabla_\mu {{\Jhat_{r}}}(\mu)\big)_i & = d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu) + d_{\mu_i} r_\mu^\textnormal{pr}(u_{{r}, \mu})[p_{{r}, \mu}]
+ r_\mu^\textnormal{pr}(u_{{r}, \mu})[d_{\mu_i}p_{{r}, \mu}]
+ r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[d_{\mu_i} u_{{r}, \mu}]
\end{align*}
for all $1 \leq i \leq P$ and $\mu \in \mathcal{P}$, where $u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $p_{{r}, \mu} \in V_{r}^\textnormal{du}$ solve \eqref{eq:optimality_conditionsRB}, $d_{\mu_i}u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ and $d_{\mu_i}p_{{r}, \mu} \in V_{r}^\textnormal{du}$ denote
the derivatives of RB primal and dual solution maps from \eqref{eq:true_primal_reduced_sensitivity} and \eqref{eq:true_dual_reduced_sensitivity}.
\end{proposition}
\begin{proof}
It follows from the chain rule and Remark~\ref{prop:gateaux_wrt_V}; see \cite[Section~1.6.1]{HPUU2009}.
\end{proof}
Note that the sensitivity-based gradient is mathematically equivalent to the one in Proposition~\ref{prop:true_corrected_reduced_gradient_adj}, but the latter only requires solving \eqref{A2:z_eq} and \eqref{A2:w_eq} once,
since their solutions can be reused for every component $\mu_i$, whereas the computation of the gradient in Proposition~\ref{prop:true_corrected_reduced_gradient} requires solving \eqref{eq:true_primal_reduced_sensitivity}
and \eqref{eq:true_dual_reduced_sensitivity} for each $1\leq i\leq P$; cf.~\cite{HPUU2009}.
In terms of numerical approximation w.r.t.~the FOM functional, we note that, e.g., a solution $d_{\mu_i} u_{{r}, \mu} \in V_{r}^\textnormal{pr}$ of \eqref{eq:true_primal_reduced_sensitivity} need not be a good approximation of the FOM sensitivity $d_{\mu_i} u_{h, \mu} \in V_h$, even if $u_{h, \mu}$ is contained in $V_{r}^\textnormal{pr}$, since the high-dimensional sensitivities are in general not contained in the respective reduced space (cf.~Proposition~\ref{prop:primal_solution_dmui_error}).
To remedy this we could compute the FOM sensitivities for all canonical directions and either include them in the respective primal and dual space (thus forming Taylor RB spaces) or distribute all directional sensitivities to
problem adapted RB spaces for the primal and dual sensitivities w.r.t.~all canonical directions: $V_{r}^{\textnormal{pr},d_{\mu_i}}, V_{r}^{\textnormal{du},d_{\mu_i}} \subset V_h$.
Thus, we again commit a variational crime.
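To make the trade-off between the two gradient computations concrete, the following minimal sketch compares them on a generic parametric linear-quadratic toy problem. All data are made up and the setup is only structurally analogous to the problems above (a parametric linear operator and a quadratic output), not the paper's discretization: the sensitivity approach needs one extra linear solve per parameter component, while the adjoint approach reuses a single extra solve, and both yield the same gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P = 8, 3

def spd():
    """Random symmetric positive semi-definite matrix (toy building block)."""
    M = rng.standard_normal((n, n))
    return (M @ M.T) / n

# Toy parametric operator A(mu) = A_0 + sum_i mu_i A_i and quadratic output
A0 = spd() + 2.0 * np.eye(n)
A_list = [0.1 * spd() for _ in range(P)]
f = rng.standard_normal(n)
K = spd()
j_vec = rng.standard_normal(n)

def A(mu):
    return A0 + sum(m * Am for m, Am in zip(mu, A_list))

def J(mu):
    u = np.linalg.solve(A(mu), f)
    return 0.5 * u @ K @ u + j_vec @ u

mu = np.array([0.3, 0.7, 0.1])
u = np.linalg.solve(A(mu), f)
g = K @ u + j_vec                      # gradient of the output w.r.t. the state

# Sensitivity approach: one extra linear solve per parameter component,
# using du/dmu_i = -A(mu)^{-1} (A_i u)
grad_sens = np.array([g @ np.linalg.solve(A(mu), -(Am @ u)) for Am in A_list])

# Adjoint approach: a single extra solve, reused for all components
p = np.linalg.solve(A(mu).T, g)
grad_adj = np.array([-(p @ (Am @ u)) for Am in A_list])

print(np.max(np.abs(grad_sens - grad_adj)))   # agree up to round-off
```

Both vectors coincide up to solver round-off, mirroring the mathematical equivalence of the two approaches, while the adjoint variant amortizes its single extra solve over all $P$ directions.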
\begin{definition}[Approximate partial derivatives of the RB primal and dual solution maps]
\label{prop:red_sens}
Considering the reduced primal and dual solution maps $\mathcal{P} \to V_{r}^\textnormal{pr}$, $\mu \mapsto u_{{r}, \mu}$ and $\mathcal{P} \to V_{r}^\textnormal{du}$, $\mu \mapsto p_{{r}, \mu}$,
respectively, where $u_{{r}, \mu}$ and $p_{{r}, \mu}$ are the solutions of \eqref{eq:state_red} and \eqref{eq:dual_solution_red},
we define their approximate partial derivatives w.r.t.~the $i$-th component of $\mu$ by $\dred{\mu_i} u_{{r}, \mu} \in V_{r}^{\textnormal{pr},d_{\mu_i}}$
and $\dred{\mu_i} p_{{r}, \mu} \in V_{r}^{\textnormal{du},d_{\mu_i}}$, respectively, as solutions of the sensitivity equations
\begin{align}
a_{\mu}(\dred{\mu_i} u_{{r}, \mu}, v_{r}) &= \partial_\mu r_{\mu}^\textnormal{pr}(u_{{r}, \mu})[v_{r}] \cdot e_i&&\text{for all } v_{r} \in V_{r}^{\textnormal{pr},d_{\mu_i}},
\label{eq:red_sens_pr} \\
a_{\mu}(q_{r}, \dred{\mu_i} p_{{r}, \mu}) &= \partial_\mu r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r}, \mu})[q_{r}] \cdot e_i
+ 2k_{\mu}(q_{r}, \dred{\mu_i} u_{{r}, \mu})&&\text{for all }q_{r} \in V_{r}^{\textnormal{du},d_{\mu_i}}.
\label{eq:red_sens_du}
\end{align}
Similarly, we denote the approximate partial derivatives in direction $\nu \in \mathbb{R}^P$ by $\dred{\nu} u_{{r}, \mu}$ and $\dred{\nu} p_{{r}, \mu}$,
respectively, defined by substituting $e_i$ with $\nu$ above.
\end{definition}
Following Propositions \ref{prop:solution_dmu_eta} and \ref{prop:dual_solution_dmu_eta} we would obtain
$\dred{\mu_i} u_{{r}, \mu} = d_{\mu_i} u_{{r}, \mu}$, if $V_{r}^{\textnormal{pr},d_{\mu_i}} =V_{r}^{\textnormal{pr}}$ and
$\dred{\mu_i} p_{{r}, \mu} = d_{\mu_i} p_{{r}, \mu}$, if $V_{r}^{\textnormal{du},d_{\mu_i}} =V_{r}^{\textnormal{du}}$.
Moreover, the approximate partial derivatives depend on the choice of the corresponding reduced approximation spaces.
\begin{definition}[Approximate gradient of the NCD-corrected RB reduced functional]
\label{prop:red_approx_grad}
We define the approximate gradient $\gradred{\mu}{{\Jhat_{r}}}:\mathcal{P} \to \mathbb{R}^P$ of ${{\Jhat_{r}}}$ by
\begin{align}
\big(\gradred{\mu} {{\Jhat_{r}}}(\mu)\big)_i := d_{\mu_i}\mathcal{J}(u_{{r},\mu}, \mu) &+ d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]
+ r_\mu^\textnormal{pr}(u_{{r}, \mu})[\dred{\mu_i}p_{{r}, \mu}] + r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[\dred{\mu_i} u_{{r}, \mu}],
\end{align}
for $1 \leq i \leq P$, where $u_{{r},\mu} \in V_{r}^\textnormal{pr}$, $p_{{r},\mu} \in V_{r}^\textnormal{du}$ denote the reduced primal and dual solutions and
$\dred{\mu_i} u_{{r}, \mu} \in V_{r}^{\textnormal{pr},d_{\mu_i}}$ and $\dred{\mu_i} p_{{r}, \mu} \in V_{r}^{\textnormal{du},d_{\mu_i}}$ denote the solutions of \eqref{eq:red_sens_pr}
and \eqref{eq:red_sens_du}.
\end{definition}
Both gradients, from Proposition~\ref{prop:true_corrected_reduced_gradient} and Definition~\ref{prop:red_approx_grad}, admit higher-order error estimates.
To show this, we first derive error estimates for the reduction error of the reduced sensitivities from \eqref{eq:true_primal_reduced_sensitivity} and
\eqref{eq:true_dual_reduced_sensitivity} as well as for \eqref{eq:red_sens_pr} and \eqref{eq:red_sens_du}.
For $v_h \in V_h$, the residuals of the equations in Propositions~\ref{prop:solution_dmu_eta} and \ref{prop:dual_solution_dmu_eta} for the canonical directions are respectively given by
\begin{align}
r_{\mu}^{\textnormal{pr},d_{\mu_i}}(u_{h, \mu}, d_{\mu_i} u_{h, \mu})[v_h] &:= d_{\mu_i} r_{\mu}^\textnormal{pr}(u_{h, \mu})[v_h] - a_{\mu}(d_{\mu_i} u_{h, \mu}, v_h),
\label{sens_res_pr}\\
r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{h, \mu}, p_{h, \mu}, d_{\mu_i} u_{h, \mu}, d_{\mu_i} p_{h, \mu})[v_h]
&:= d_{\mu_i} r_{\mu}^\textnormal{du}(u_{h, \mu}, p_{h, \mu})[v_h] + 2k_{\mu}(v_h, d_{\mu_i} u_{h, \mu}) - a_{\mu}(v_h, d_{\mu_i} p_{h, \mu}) .
\label{sens_res_du}
\end{align}
\begin{proposition}[Residual based upper bound on the model reduction error of the sensitivity of the primal solution map]
\label{prop:primal_solution_dmui_error}
For $\mu \in \mathcal{P}$ and $1 \leq i \leq P$, let $d_{\mu_i}u_{h, \mu} \in V_h$ be
the solution of the discrete version of \eqref{eq:primal_sens} and $d_{\mu_i}u_{{r}, \mu} \in V_{r}^{\textnormal{pr}}$
be the solution of \eqref{eq:true_primal_reduced_sensitivity}. We then have
\begin{align}
\|d_{\mu_i}u_{h, \mu} - d_{\mu_i}u_{{r}, \mu}\| &\leq \Delta_{d_{\mu_i}\textnormal{pr}}(\mu) := \underline{a_{\mu}}^{-1}\Big(\cont{d_{\mu_i} a_{\mu}} \Delta_\textnormal{pr}(\mu) + \|r_{\mu}^{\textnormal{pr},d_{\mu_i}}(u_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu})\| \Big).
\notag
\end{align}
\end{proposition}
\begin{proof}
Using the shorthand $d_{\mu_i} e_{h, \mu}^\textnormal{pr} := d_{\mu_i}u_{h, \mu} - d_{\mu_i}u_{{r}, \mu}$, we obtain
\begin{align*}
\underline{a_{\mu}}\, \|d_{\mu_i} e_{h, \mu}^\textnormal{pr}\|^2 &\leq a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{pr}, d_{\mu_i} e_{h, \mu}^\textnormal{pr})
= a_{\mu}(d_{\mu_i}u_{h, \mu}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) -\, a_{\mu}(d_{\mu_i}u_{{r}, \mu}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) \\
&= d_{\mu_i} r_{\mu}^\textnormal{pr}(u_{h, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{pr}] - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) \\
&= d_{\mu_i} r_{\mu}^\textnormal{pr}(u_{h, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{pr}] - d_{\mu_i} r_{\mu}^\textnormal{pr}(u_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{pr}] + d_{\mu_i} r_{\mu}^\textnormal{pr}(u_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{pr}] - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) \\
&= - d_{\mu_i} a_{\mu}(e_{h, \mu}^\textnormal{pr}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) + r_{\mu}^{\textnormal{pr},d_{\mu_i}}(u_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{pr}] \\
&\leq \cont{d_{\mu_i} a_{\mu}} \;\|e_{h, \mu}^\textnormal{pr}\| \;\|d_{\mu_i} e_{h, \mu}^\textnormal{pr}\| + \|r_{\mu}^{\textnormal{pr},d_{\mu_i}}(u_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu})\|\; \|d_{\mu_i} e_{h, \mu}^\textnormal{pr}\|
\end{align*}
using the coercivity of $a_{\mu}$ in the first inequality, the definition of $d_{\mu_i} e_{h, \mu}^\textnormal{pr}$ in the first equality,
Proposition~\ref{prop:solution_dmu_eta} applied to $u_{h, \mu}$ in the second equality,
the definition of the discrete primal sensitivity residual \eqref{sens_res_pr} in the third equality and the continuity of $d_{\mu_i} a_{\mu}$ in the last inequality.
\end{proof}
We emphasize that the same result can be shown for $\dred{\mu_i} u_{{r}, \mu}$ by replacing $d_{\mu_i} u_{{r}, \mu}$ and using the equation \eqref{eq:red_sens_pr}
instead of \eqref{eq:true_primal_reduced_sensitivity}. We call the resulting error estimator $\Delta_{\dred{\mu_i}\textnormal{pr}}(\mu)$.
\begin{proposition}[Residual based upper bound on the model reduction error of the sensitivity of the dual solution map]
\label{prop:dual_solution_dmui_error}
For $\mu \in \mathcal{P}$ and $1 \leq i \leq P$, let $d_{\mu_i}p_{h, \mu} \in V_h$ be
the solution of the discrete version of \eqref{eq:dual_sens} and $d_{\mu_i}p_{{r}, \mu} \in V_{r}^{\textnormal{du}}$
be the solution of \eqref{eq:true_dual_reduced_sensitivity}. We then obtain
\begin{align*}
\|d_{\mu_i}p_{h, \mu} - d_{\mu_i}p_{{r}, \mu}\| \leq \Delta_{ d_{\mu_i}\textnormal{du}}(\mu)\hspace*{5cm}\text{with}\\
\Delta_{ d_{\mu_i}\textnormal{du}}(\mu) := \underline{a_{\mu}}^{-1}\Big(
2 \cont{d_{\mu_i} k_{\mu}} \; \Delta_\textnormal{pr}(\mu) + \cont{d_{\mu_i} a_{\mu}} \; \Delta_\textnormal{du}(\mu) + 2 \cont{k_{\mu}} \; \Delta_{d_{\mu_i}\textnormal{pr}}(\mu)
+ \| r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu}) \| \Big).
\end{align*}
\end{proposition}
\begin{proof}
Using the shorthand $d_{\mu_i} e_{h, \mu}^\textnormal{du} := d_{\mu_i}p_{h, \mu} - d_{\mu_i}p_{{r}, \mu}$ and $e_{h, \mu}^\textnormal{du} := p_{h, \mu} - p_{{r}, \mu}$, we obtain
\begin{align*}
\underline{a_{\mu}}\;&\|d_{\mu_i} e_{h, \mu}^\textnormal{du}\|^2 \leq a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i} e_{h, \mu}^\textnormal{du})
= \underbrace{a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}p_{h, \mu})}_{= d_{\mu_i} r_{\mu}^\textnormal{du}(u_{h, \mu}, p_{h, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}]
+ 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{h, \mu})} - a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}p_{{r}, \mu})
\\
&= d_{\mu_i} r_{\mu}^\textnormal{du}(u_{h, \mu}, p_{h, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}] + 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{h, \mu})
- d_{\mu_i} r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}] \\
&\qquad - 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{{r}, \mu})
+ d_{\mu_i} r_{\mu}^\textnormal{du}(u_{{r}, \mu}, p_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}] + 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{{r}, \mu})
- a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}p_{{r}, \mu}) \\
&= d_{\mu_i} j_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}) + 2 d_{\mu_i}k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, u_{h, \mu}) - d_{\mu_i}a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, p_{h, \mu})
- d_{\mu_i}j_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}) - 2 d_{\mu_i}k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, u_{{r}, \mu})\\
&\qquad
+ d_{\mu_i}a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, p_{{r}, \mu}) +\! 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{h, \mu}) \!-\! 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i}u_{{r}, \mu}) \\
&\qquad + r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}] \\
&= 2 d_{\mu_i}k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, e_{h, \mu}^\textnormal{pr}) - d_{\mu_i}a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, e_{h, \mu}^\textnormal{du})
+ 2k_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{du}, d_{\mu_i} e_{h, \mu}^\textnormal{pr}) \\
&\qquad
+ r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[d_{\mu_i} e_{h, \mu}^\textnormal{du}]\\
&\leq \big(2 \cont{d_{\mu_i} k_{\mu}} \; \|e_{h, \mu}^\textnormal{pr}\| + \cont{d_{\mu_i} a_{\mu}} \; \|e_{h, \mu}^\textnormal{du}\| \big)\|d_{\mu_i} e_{h, \mu}^\textnormal{du}\| \\
&\qquad + 2 \cont{k_{\mu}} \; \|d_{\mu_i} e_{h, \mu}^\textnormal{pr}\| \; \|d_{\mu_i} e_{h, \mu}^\textnormal{du}\|
+ \| r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu}) \| \; \|d_{\mu_i} e_{h, \mu}^\textnormal{du}\|
\end{align*}
using the coercivity of $a_{\mu}$ in the first inequality, the definition of $d_{\mu_i} e_{h, \mu}^\textnormal{du}$ in the first equality, Proposition~\ref{prop:dual_solution_dmu_eta} applied to $p_{h, \mu}$ in the second equality, the definition of the dual sensitivity residual \eqref{sens_res_du} in the third equality and the continuity of all involved forms in the last inequality.
\end{proof}
Again, the same result holds for $\dred{\mu_i} p_{{r}, \mu}$ if we replace $d_{\mu_i} p_{{r}, \mu}$ and use \eqref{eq:red_sens_du} instead of
\eqref{eq:true_dual_reduced_sensitivity}. The resulting error estimator is then called $\Delta_{\dred{\mu_i}\textnormal{du}}(\mu)$.
Using the residual based a posteriori error estimates for the primal sensitivities, we are able to state two a posteriori error bounds on the model reduction error of
the true gradient and the approximated gradient of the NCD-corrected functional.
\begin{proposition}[Upper bound on the model reduction error of the gradient of the reduced output -- sensitivity approach]
\label{prop:grad_Jhat_error_sens}
\hfill
\begin{enumerate}[(i)]
\item [\emph{(i)}] For the gradient $\nabla_{\mu} {{\Jhat_{r}}}(\mu)$ of the NCD-corrected RB reduced functional, computed with sensitivities
according to Proposition~{\rm{\ref{prop:true_corrected_reduced_gradient}}}, we have
\begin{align*}
\big\|\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \nabla_{\mu} {{\Jhat_{r}}}(\mu)\big\|_2 &\leq \Delta_{\nabla\hat{\mathcal{J}}_{r}}(\mu)
= \big\|\underline{\Delta_{\nabla\hat{\mathcal{J}}_{r}}(\mu)}\big\|_2 \quad\quad\text{with} \\
\big(\underline{\Delta_{\nabla\hat{\mathcal{J}}_{r}}(\mu)}\big)_i := \cont{d_{\mu_i} k_{\mu}} \, \left(\Delta_\textnormal{pr}(\mu)\right)^2 &+ \cont{a_{\mu}} \,
\Delta_{d_{\mu_i}\textnormal{pr}}(\mu) \,\Delta_\textnormal{du}(\mu) + \| r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu}) \| \,
\Delta_\textnormal{pr}(\mu).
\end{align*}
\item [\emph{(ii)}] Furthermore, we have for the approximate gradient from Definition~{\rm{\ref{prop:red_approx_grad}}}
\begin{align*}
\big\|\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \gradred{\mu} {{\Jhat_{r}}}(\mu)\big\|_2 &\leq \Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)
= \big\|\underline{\Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)}\big\|_2 \quad\quad\text{with} \\
\big(\underline{\Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)}\big)_i := \cont{d_{\mu_i} k_{\mu}} \, \left(\Delta_\textnormal{pr}(\mu)\right)^2 &+ \cont{a_{\mu}} \,
\Delta_{\dred{\mu_i}\textnormal{pr}}(\mu) \,\Delta_\textnormal{du}(\mu) + \| r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, &p_{{r}, \mu}, \dred{\mu_i}u_{{r}, \mu}, \dred{\mu_i}p_{{r}, \mu}) \| \,
\Delta_\textnormal{pr}(\mu).
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}
(i) To prove the first assertion, we use $r_\mu^\textnormal{pr}(u_{h, \mu})[d_{\mu_i}p_{{r}, \mu}] = 0$ and $r_\mu^\textnormal{du}(u_{h, \mu},p_{h, \mu})[d_{\mu_i} u_{{r}, \mu}]=0$ to obtain
\begin{align*}
\big(\nabla_\mu \hat{\mathcal{J}}_h(\mu) &- \nabla_\mu {{\Jhat_{r}}}(\mu)\big)_i
=d_{\mu_i}\mathcal{J}(u_{h, \mu}, \mu) - d_{\mu_i}\mathcal{J}(u_{{r}, \mu}, \mu) + d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}] \\
&= \partial_\mu j_{\mu}(e_{h, \mu}^\textnormal{pr}) + \partial_\mu k_{\mu}(u_{h, \mu}, u_{h, \mu}) - \partial_\mu k_{\mu}(u_{{r}, \mu},u_{{r}, \mu})
+ d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}] \\
&\qquad + \underset{=(*)}{\underbrace{r_\mu^\textnormal{pr}(e_{h, \mu}^\textnormal{pr})[d_{\mu_i}p_{{r}, \mu}]}}
+ \underset{=(**)}{\underbrace{r_\mu^\textnormal{du}(e_{h, \mu}^\textnormal{pr},e_{h, \mu}^\textnormal{du})[d_{\mu_i} u_{{r}, \mu}]}}.
\end{align*}
For the last two residual terms we have
\begin{align*}
(*) &= l_{\mu}(d_{\mu_i}p_{{r}, \mu}) - l_{\mu}(d_{\mu_i}p_{{r}, \mu})
- a_{\mu}(e_{h, \mu}^\textnormal{pr},d_{\mu_i}p_{{r}, \mu}) \\
&= - a_{\mu}(e_{h, \mu}^\textnormal{pr},d_{\mu_i}p_{{r}, \mu})
+ d_{\mu_i} r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
+ 2k_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{pr})
- d_{\mu_i} r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
- 2k_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{pr}) \\
&= r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu}, d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
- d_{\mu_i} r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
- 2k_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{pr})
\end{align*}
and
\begin{align*}
(**) &= j_{\mu}(d_{\mu_i}u_{{r}, \mu}) - j_{\mu}(d_{\mu_i}u_{{r}, \mu})
+ 2k_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{pr}) - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{du}).
\end{align*}
Thus, by summing both terms we have
\begin{align*}
(*) + (**)=
r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu},d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
- \underset{=(***)}{\underbrace{d_{\mu_i} r_\mu^\textnormal{du}(u_{{r}, \mu},p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]}} - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{du})
\end{align*}
and for $(***)$ it holds
\begin{align*}
(***) = d_{\mu_i}j_{\mu}(e_{h, \mu}^\textnormal{pr}) + 2 d_{\mu_i}k_{\mu}(e_{h, \mu}^\textnormal{pr}, u_{{r},\mu}) - d_{\mu_i}a_{\mu}(e_{h, \mu}^\textnormal{pr},p_{{r},\mu}).
\end{align*}
Combining $(*)$, $(**)$ and $(***)$ with the previous result, we have
\begin{align*}
\big(\nabla_\mu \hat{\mathcal{J}}_h(\mu) &- \nabla_\mu {{\Jhat_{r}}}(\mu)\big)_i
= \partial_\mu j_{\mu}(e_{h, \mu}^\textnormal{pr}) + \partial_\mu k_{\mu}(u_{h, \mu}, u_{h, \mu}) - \partial_\mu k_{\mu}(u_{{r}, \mu},u_{{r}, \mu}) \\
&\qquad - d_{\mu_i}j_{\mu}(e_{h, \mu}^\textnormal{pr}) - 2 d_{\mu_i}k_{\mu}(e_{h, \mu}^\textnormal{pr}, u_{{r},\mu}) + d_{\mu_i}a_{\mu}(e_{h, \mu}^\textnormal{pr},p_{{r},\mu}) + d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] \\
&\qquad - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]
+ r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu},d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
- a_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{du}) \\
& = d_{\mu_i} k_{\mu}(e_{h, \mu}^\textnormal{pr}, e_{h, \mu}^\textnormal{pr})
+ r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu},d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}] \\
& \qquad + \underset{=(****)}{\underbrace{d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] - d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]
+ d_{\mu_i}a_{\mu}(e_{h, \mu}^\textnormal{pr},p_{{r},\mu}) - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{du})}}.
\end{align*}
Further, we have
\begin{align*}
d_{\mu_i}r_\mu^\textnormal{pr}(u_{h,\mu})[p_{h,\mu}] &- d_{\mu_i}r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}] = d_{\mu_i} l_{\mu}(e_{h, \mu}^\textnormal{du}) -
d_{\mu_i} a_{\mu}(u_{h, \mu}, p_{h, \mu}) + d_{\mu_i} a_{\mu}(u_{{r}, \mu}, p_{{r}, \mu}) \\
& = a_{\mu}(d_{\mu_i}u_{h, \mu}, e_{h, \mu}^\textnormal{du}) + d_{\mu_i} a_{\mu}(u_{h, \mu}, e_{h, \mu}^\textnormal{du})
- d_{\mu_i} a_{\mu}(u_{h, \mu}, p_{h, \mu}) + d_{\mu_i} a_{\mu}(u_{{r}, \mu}, p_{{r}, \mu}),
\end{align*}
where we used the discretized version of \eqref{eq:primal_sens} in the second equality. Inserting this into $(****)$ gives
\begin{align*}
(****) =& a_{\mu}(d_{\mu_i}u_{h, \mu}, e_{h, \mu}^\textnormal{du}) - a_{\mu}(d_{\mu_i}u_{{r}, \mu}, e_{h, \mu}^\textnormal{du}) \\ & +
\underset{=0}{\underbrace{d_{\mu_i} a_{\mu}(u_{h, \mu}, e_{h, \mu}^\textnormal{du}) - d_{\mu_i} a_{\mu}(u_{h, \mu}, p_{h, \mu})
+ d_{\mu_i} a_{\mu}(u_{{r}, \mu}, p_{{r}, \mu})
+d_{\mu_i}a_{\mu}(e_{h, \mu}^\textnormal{pr},p_{{r},\mu}) }}
= a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{pr}, e_{h, \mu}^\textnormal{du}).
\end{align*}
In total, we have
\begin{align*}
& &\big(\nabla_\mu \hat{\mathcal{J}}_h(\mu) - \nabla_\mu {{\Jhat_{r}}}(\mu)\big)_i
= d_{\mu_i} k_{\mu}(e_{h, \mu}^\textnormal{pr}, e_{h, \mu}^\textnormal{pr})
+ a_{\mu}(d_{\mu_i} e_{h, \mu}^\textnormal{pr}, e_{h, \mu}^\textnormal{du}) + r_{\mu}^{\textnormal{du},d_{\mu_i}}(u_{{r}, \mu}, p_{{r}, \mu},d_{\mu_i}u_{{r}, \mu}, d_{\mu_i}p_{{r}, \mu})[e_{h, \mu}^\textnormal{pr}]
\end{align*}
which proves the assertion. \\
(ii) The estimate follows analogously to (i), by replacing $d_{\mu_i} u_{{r}, \mu}$ and $d_{\mu_i} p_{{r}, \mu}$ with $\dred{\mu_i} u_{{r}, \mu}$
and $\dred{\mu_i} p_{{r}, \mu}$, respectively.
\end{proof}
To conclude, $\Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)$ and $\Delta_{\nabla\hat{\mathcal{J}}_{r}}(\mu)$ both decay with second order (cf.~Section~\ref{sec:estimator_study}).
We also point out that $\Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)$ is an improved estimator which can be used to replace the poor estimator
$\Delta^{{r},*}_{\nabla_\mu \hat{\mathcal{J}}_{r}}(\mu)$.
However, both higher-order estimators $\Delta_{\widetilde\nabla\hat{\mathcal{J}}_{r}}(\mu)$ and $\Delta_{\nabla\hat{\mathcal{J}}_{r}}(\mu)$
come at the price of computing the dual norms of the sensitivity residuals in \eqref{sens_res_pr} and \eqref{sens_res_du} for each direction,
which increases the computational cost.
\section{The Trust-Region Method and adaptive enrichment strategies}
\label{sec:TRRB_and_adaptiveenrichment}
To solve problem \eqref{P} we apply the TR method, which iteratively computes a first-order critical point of \eqref{P}. At each iteration $k\geq 0$, we consider a so-called model function $m^{(k)}$, which is a cheaply computable approximation of the quadratic cost functional $\hat{\mathcal{J}}$ in a neighbourhood of the current parameter $\mu^{(k)}$, the so-called Trust-Region. Therefore, for $k\geq 0$, given a TR radius $\delta^{(k)}$, we consider the TR minimization sub-problem
\begin{equation}
\label{TRsubprob}
\min_{s\in \mathbb{R}^P} m^{(k)}(s) \, \text{ subject to } \|s\|_2 \leq \delta^{(k)},\, \widetilde{\mu}:= \mu^{(k)}+s \in\mathcal{P} \text{ and } r_{\widetilde{\mu}}^\textnormal{pr}(u_{\widetilde{\mu}})[v]= 0 \, \text{ for all } v\in V.
\end{equation}
Under suitable assumptions on $m^{(k)}$, problem \eqref{TRsubprob} admits a unique solution $\bar s^{(k)}$, which is used to compute the next iterate $\mu^{(k+1)} = \mu^{(k)} + \bar s^{(k)}$.
\subsection{The Trust-Region Reduced Basis Method}
\label{sec:TR}
In slight contrast to \cite{AFS00,QGVW2017}, we choose as model function the NCD-corrected RB reduced functional ${{\Jhat_{r}}}^{(k)}$ defined in (\ref{eq:Jhat_red_corected}), i.e.~$m^{(k)}(\cdot)= {{\Jhat_{r}}}^{(k)}(\mu^{(k)}+\cdot)$ for $k\geq 0$, where the super-index $(k)$ indicates that we use different RB spaces $V_{r}^{*, (k)}$ in each iteration. As indicated in Proposition~\ref{prop:Jhat_error} and shown in our numerical experiments below, ${{\Jhat_{r}}}^{(k)}$ converges to $\hat{\mathcal{J}}$ with higher order
in comparison to the standard RB reduced functional from \eqref{eq:Jhat_red}, which has been considered in \cite{QGVW2017}.
We initialize the RB spaces using the initial guess $\mu^{(0)}$, i.e.~setting $V^{\textnormal{pr},0}_{r} = \left\{u_{h,\mu^{(0)}}\right\}$ and $V^{\textnormal{du},0}_{r} = \left\{p_{h,\mu^{(0)}}\right\}$. At every iteration $k$ we may -- depending on the a posteriori estimates -- enrich the obtained space using the computed parameter $\mu^{(k+1)}$; for further details see Section~\ref{sec:construct_RB}. Possible sufficient and necessary conditions for convergence, dependent on the approximate generalized Cauchy point (AGC) $\mu^{(k)}_\text{\rm{AGC}}$ (see Definition~\ref{Def:AGC}), are given in \cite{QGVW2017}. In contrast to \cite{QGVW2017}, we consider additional bilateral parameter constraints in \eqref{TRsubprob}. In particular, the presence of these inequality constraints requires a review of the proof of convergence for the TR-RB algorithm. In \cite{QGVW2017}, the convergence is based on the results contained in \cite{YM2013}, where the authors consider an equality-constrained optimization problem. We state first how our method differs from the one in \cite{QGVW2017}, then we prove the convergence of this modified algorithm. According to \cite{QGVW2017}, the inexact RB version of problem \eqref{TRsubprob} is
\begin{equation}
\label{TRRBsubprob}
\min_{\widetilde{\mu}\in\mathcal{P}} {{\Jhat_{r}}}^{(k)}(\widetilde{\mu}) \text{ s.t. } \frac{\Delta_{{{\Jhat_{r}}}}(\widetilde{\mu})}{{{\Jhat_{r}}}^{(k)}(\widetilde{\mu})}\leq \delta^{(k)},
\end{equation}
where $\widetilde{\mu}:= \mu^{(k)}+s$, the equality constraint $r_{\widetilde\mu}^\textnormal{pr}(u_{\widetilde{\mu}})[v]= 0$ is hidden in the definition of ${{\Jhat_{r}}}$ and the inequality constraints are encoded in the requirement $\widetilde{\mu}\in \mathcal{P}$. As also remarked in \cite{QGVW2017}, the projected BFGS method \cite{Kel99}, which we use in order to solve \eqref{TRRBsubprob}, computes the AGC point $\mu^{(k)}_\text{\rm{AGC}}$ in the first iterate and generates a sequence $\{\mu^{(k,\ell)}\}_{\ell=1}^L$, where $L$ is the last BFGS iteration. In what follows, $\mu^{(k,1)}:= \mu^{(k)}_\text{\rm{AGC}}$ and the TR iterate is $\mu^{(k+1)}:=\mu^{(k,L)}$. Throughout the paper, the index $k$ refers to the current outer TR iteration, whereas $\ell$ refers to the inner BFGS iteration. Note that $L$ may be different for each iteration $k$, but we will indicate it only when strictly necessary in order to simplify the notation. To describe the projected BFGS method in detail, we define
\begin{align}
\label{eq:General_Opt_Step_point}
\mu^{(k,\ell)}(j):= \text{P}_\mathcal{P}(\mu^{(k,\ell)} + \kappa^j d^{(k,\ell)}) \in\mathcal{P} && \text{for } j\geq 0,
\end{align}
where $\kappa\in(0,1)$, $d^{(k,\ell)}\in\mathbb{R}^P$ is the chosen descent direction at the iteration $(k,\ell)$ and the projection operator $\text{P}_\mathcal{P}: \mathbb{R}^P\rightarrow \mathcal{P}$ is defined as
\begin{align*}
(\text{P}_\mathcal{P}(\mu))_i:= \left\{ \begin{array}{ll}
(\mu_\mathsf{a})_i & \text{if } (\mu)_i\leq (\mu_\mathsf{a})_i, \\
(\mu_\mathsf{b})_i & \text{if } (\mu)_i\geq (\mu_\mathsf{b})_i, \\
(\mu)_i & \text{otherwise,}
\end{array} \right.
&& \text{for } i=1,\ldots,P.
\end{align*}
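As a minimal sketch, the projection $\text{P}_\mathcal{P}$ defined above is plain componentwise clipping onto the box $[\mu_\mathsf{a}, \mu_\mathsf{b}]$; the function and variable names below are illustrative and not tied to any implementation accompanying the paper.

```python
import numpy as np

def project_P(mu, mu_a, mu_b):
    """Componentwise clipping onto the parameter box [mu_a, mu_b]."""
    return np.minimum(np.maximum(np.asarray(mu, dtype=float), mu_a), mu_b)

# example box: first component in [0, 1], second in [0, 2]
mu_a, mu_b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
print(project_P([-0.5, 1.5], mu_a, mu_b))  # each component clipped into its interval
```

The clipping is idempotent and nonexpansive in the Euclidean norm, which is exactly the Lipschitz-continuity with constant one used below.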
Note that the operator $\text{P}_\mathcal{P}$ is Lipschitz continuous with constant one; cf.~\cite{Kel99}. For computing the descent direction $d^{(k,\ell)}$ we follow the projected BFGS algorithm reported in \cite[Section~5.5.3]{Kel99}. Furthermore, we enforce the Armijo-type condition and the additional TR constraint on ${{\Jhat_{r}}}^{(k)}$, respectively,
\begin{align}
\label{Armijo}{{\Jhat_{r}}}^{(k)}(\mu^{(k,\ell)}(j)) - {{\Jhat_{r}}}^{(k)}(\mu^{(k,\ell)}) & \leq -\frac{\kappa_{\mathsf{arm}}}{\kappa^j} \big\| \mu^{(k,\ell)}(j)-\mu^{(k,\ell)}\big\|^2_2, \\
\label{TR_radius_condition} \frac{\Delta_{{{\Jhat_{r}}}}(\mu^{(k,\ell)}(j))}{{{\Jhat_{r}}}^{(k)}(\mu^{(k,\ell)}(j))} & \leq \delta^{(k)},
\end{align}
by selecting $\mu^{(k,\ell+1)} = \mu^{(k,\ell)}(j^{(k,\ell)})$ for $\ell\geq 1$, where $j^{(k,\ell)}<\infty$ is the smallest index for which the conditions \eqref{Armijo}-\eqref{TR_radius_condition} hold for some $\kappa_{\mathsf{arm}}\in(0,\frac{1}{2})$, generally $\kappa_{\mathsf{arm}}=10^{-4}$; cf.~\cite{QGVW2017}. Moreover, we use as termination criteria for the optimization sub-problem\newline
\begin{subequations}\label{Termination_crit_subproblem}
\noindent\begin{minipage}{0.6\textwidth}
\begin{equation}
\label{FOC_subproblem}
\big\|\mu^{(k,\ell)}-\text{P}_\mathcal{P}(\mu^{(k,\ell)}-\nabla_\mu \hat{\mathcal{J}}_{r}^{(k)}(\mu^{(k,\ell)}))\big\|_2\leq \tau_\text{\rm{sub}}
\hspace{2em} \text{or}
\end{equation}
\end{minipage}%
\begin{minipage}{0.4\textwidth}
\vspace{0.2em}
\begin{equation}
\label{Cut_of_TR}
\beta_2\delta^{(k)} \leq \frac{\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu)}{\hat{\mathcal{J}}_{r}^{(k)}(\mu)} \leq \delta^{(k)},
\end{equation}
\end{minipage}\vskip1em
\end{subequations}
\noindent where $\tau_\text{\rm{sub}}\in(0,1)$ is a predefined tolerance and $\beta_2\in(0,1)$ is generally close to one. Condition \eqref{Cut_of_TR} is used to prevent the optimizer from spending too much time close to the boundary of the Trust-Region, where the model approximation is poor; cf.~\cite{QGVW2017}. Note that, without the projection operator $\text{P}_\mathcal{P}$, conditions \eqref{Armijo}-\eqref{Termination_crit_subproblem} coincide with the ones in \cite{QGVW2017}, apart from using the NCD-corrected RB reduced functional. Furthermore, in addition to \cite{QGVW2017}, we consider a condition which allows enlarging the TR radius. A drawback of the TR algorithm proposed in \cite{QGVW2017} is that the TR radius may be significantly shrunk at the beginning, i.e.~while the TR model approximation is still poor. Afterwards, even if the RB space is enriched, i.e.~the approximation quality of the TR model function is improved, the TR radius is kept small. Thus, one misses the local superlinear rate of convergence of the BFGS method. More precisely, if $\mu^{(k,\ell)}$ is close to the locally optimal solution $\bar \mu^{(k)}$ of the TR sub-problem, we want to make full BFGS steps, which gives us faster convergence. The possibility to enlarge the TR radius at each iteration will also decrease the number of outer iterations needed to converge. As a condition for enlarging the radius we check whether the sufficient reduction predicted by the model function ${{\Jhat_{r}}}^{(k)}$ is realized by the objective function, i.e.~we check if
\begin{equation}
\label{TR_act_decrease}
\varrho^{(k)}:= \frac{\hat{\mathcal{J}}_h(\mu^{(k)})-\hat{\mathcal{J}}_h(\mu^{(k+1)})}{{{\Jhat_{r}}}^{(k)}(\mu^{(k)})-{{\Jhat_{r}}}^{(k)}(\mu^{(k+1)})} \geq \eta_\varrho
\end{equation}
for a tolerance $\eta_\varrho \in [3/4,1)$. Condition \eqref{TR_act_decrease} seems costly because of the evaluation of the FOM cost functional $\hat{\mathcal{J}}_h$, but, after the
enrichment of the RB space, the quantities in the numerator of \eqref{TR_act_decrease} are cheaply accessible, since one has already
solved the FOM to generate the new snapshots for the RB space enrichment. Note that this also implies that we can cheaply evaluate the FOM gradient $\nabla_\mu \hat{\mathcal{J}}_h(\mu^{(k+1)})$ in case of an enrichment. This knowledge will be used for the stopping criterion in the outer loop of the algorithm. Finally, let us define the AGC point for our constrained case.
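A hedged sketch of the projected backtracking described above (which, with the steepest-descent direction at $\ell=0$, also yields the AGC point) might look as follows. The reduced functional, the relative error estimator, and the projection are passed in as callables; all names and default values are illustrative, not part of the paper's implementation.

```python
import numpy as np

def backtracking_step(mu, d, J_r, rel_err, delta, project,
                      kappa=0.5, kappa_arm=1e-4, j_max=50):
    """Return the first trial point mu(j) = project(mu + kappa**j * d) that
    satisfies both the Armijo-type decrease condition and the relative
    error-based TR constraint rel_err(mu(j)) <= delta."""
    J0 = J_r(mu)
    for j in range(j_max):
        mu_j = project(mu + kappa**j * np.asarray(d, dtype=float))
        step = np.linalg.norm(mu_j - mu)
        armijo = J_r(mu_j) - J0 <= -(kappa_arm / kappa**j) * step**2
        if armijo and rel_err(mu_j) <= delta:
            return mu_j
    return mu  # no admissible step found within j_max trials

# toy usage: unconstrained quadratic, trivial estimator and projection
J_quad = lambda m: float(np.dot(m, m))
mu0 = np.array([1.0, 1.0])
print(backtracking_step(mu0, -2.0 * mu0, J_quad, lambda m: 0.0, 1.0, lambda x: x))
```

With `project` the box projection and `rel_err` the relative estimator from the TR constraint, this returns the next inner iterate; starting from $\mu^{(k,0)}=\mu^{(k)}$ with the negative model gradient as direction, it produces the AGC point of the definition below.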
\begin{definition}[AGC point for simple bounds]
\label{Def:AGC}
At iteration $k$, we define the AGC point as
\[
\mu_\text{\rm{AGC}}^{(k)}:= \mu^{(k,0)}(j^{(k)}_c)= \text{P}_\mathcal{P}(\mu^{(k,0)} + \kappa^{j^{(k)}_c} d^{(k,0)}),
\]
where $\mu^{(k,0)}:= \mu^{(k)}$, $d^{(k,0)}:= -\nabla_\mu {{\Jhat_{r}}}^{(k)}(\mu^{(k,0)})$ and $j^{(k)}_c$ is the smallest non-negative integer $j$ for which $\mu^{(k,0)}(j)$ satisfies \eqref{Armijo}-\eqref{TR_radius_condition} for $\ell=0$.
\end{definition}
We refer to Algorithm~\ref{Alg:TR-RBmethod} for the proposed TR-RB algorithm.
\begin{algorithm2e}[!h]
\KwData{Initial TR radius $\delta^{(0)}$, TR shrinking factor $\beta_1\in (0,1)$, tolerance for enlarging the TR radius $\eta_\varrho\in [\frac{3}{4},1)$, initial parameter $\mu^{(0)}$, stopping tolerance for the sub-problem $\tau_\text{\rm{sub}}\ll 1$, stopping tolerance for the first-order critical condition $\tau_\text{\rm{FOC}}$ with $\tau_\text{\rm{sub}}\leq\tau_\text{\rm{FOC}}\ll 1$, safeguard for TR boundary $\beta_2\in(0,1)$.}
Set $k=0$ and \textsf{Loop\_flag}$=$\textsf{True}\;
\While{
{\normalfont\textsf{Loop\_flag}}}{
Compute $\mu^{(k+1)}$ as solution of \eqref{TRRBsubprob} with termination criteria \eqref{Termination_crit_subproblem}\label{TRRB-optstep}\;
\uIf{\label{Suff_condition_TRRB}${{\Jhat_{r}}}^{(k)}(\mu^{(k+1)})+\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu^{(k+1)})<{{\Jhat_{r}}}^{(k)}(\mu^{(k)}_\text{\rm{AGC}})$} {
Accept $\mu^{(k+1)}$,
update the RB model at $\mu^{(k+1)}$ and compute $\varrho^{(k)}$ from \eqref{TR_act_decrease}\;
\eIf {$\varrho^{(k)}\geq\eta_\varrho$} {
Enlarge the TR radius $\delta^{(k+1)} = \beta_1^{-1}\delta^{(k)}$\;
}
{
Set $\delta^{(k+1)}=\delta^{(k)}$\;
}
}
\uElseIf {\label{Nec_condition_TRRB}${{\Jhat_{r}}}^{(k)}(\mu^{(k+1)})-\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu^{(k+1)})> {{\Jhat_{r}}}^{(k)}(\mu^{(k)}_\text{\rm{AGC}})$} {
Reject $\mu^{(k+1)}$, shrink the TR radius $\delta^{(k+1)} = \beta_1\delta^{(k)}$ and go to \ref{TRRB-optstep}\;
}
\Else {
Update the RB model at $\mu^{(k+1)}$ and compute $\varrho^{(k)}$ from \eqref{TR_act_decrease}\;
\eIf {${{\Jhat_{r}}}^{(k+1)}(\mu^{(k+1)})\leq {{\Jhat_{r}}}^{(k)}(\mu^{(k)}_\text{\rm{AGC}})$} {
Accept $\mu^{(k+1)}$\;
\eIf {$\varrho^{(k)}\geq\eta_\varrho$} {
Enlarge the TR radius $\delta^{(k+1)} = \beta_1^{-1}\delta^{(k)}$\;
}
{
Set $\delta^{(k+1)}=\delta^{(k)}$\;
}
}{
Reject $\mu^{(k+1)}$, shrink the TR radius $\delta^{(k+1)} = \beta_1\delta^{(k)}$ and go to \ref{TRRB-optstep}\;
}
}
\If { $\|\mu^{(k+1)}-P_\mathcal{P}(\mu^{(k+1)}-\nabla_\mu \hat{\mathcal{J}}_h(\mu^{(k+1)}))\|_2\leq \tau_{\text{\rm{FOC}}}$ } {
Set \textsf{Loop\_flag}$=$\textsf{False}\;
}
Set $k=k+1$\;
}
\caption{TR-RB algorithm}
\label{Alg:TR-RBmethod}
\end{algorithm2e}
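To fix ideas, the outer loop of Algorithm~\ref{Alg:TR-RBmethod} can be condensed into the following \texttt{Python} sketch. It is a deliberate simplification of (and not identical to) our pyMOR-based implementation: the AGC point is replaced by the current iterate in the decrease checks, \texttt{solve\_sub} and \texttt{enrich} are user-supplied callables, and the reduced quantities are stateless, whereas in the algorithm the enrichment also updates $\hat{\mathcal{J}}_r$ and the estimator.

```python
import numpy as np

def tr_rb(mu0, J_h, grad_J_h, J_r, Delta_J_r, solve_sub, enrich,
          lb, ub, delta0=0.1, beta_1=0.5, eta=0.75, tau_foc=1e-6, max_it=100):
    """Skeleton of the outer TR-RB loop: cheap sufficient/necessary decrease
    checks via the error estimator, TR-radius adaptation, and the FOM-based
    FOC stopping criterion (cheap after an enrichment)."""
    proj = lambda p: np.clip(p, lb, ub)
    mu, delta = np.asarray(mu0, float), delta0
    for _ in range(max_it):
        mu_new = solve_sub(mu, delta)
        # cheap sufficient condition (current iterate in place of the AGC point)
        if J_r(mu_new) + Delta_J_r(mu_new) < J_r(mu):
            enrich(mu_new)
            pred = J_r(mu) - J_r(mu_new)
            rho = (J_h(mu) - J_h(mu_new)) / pred if pred > 0 else 1.0
            if rho >= eta:
                delta /= beta_1              # enlarge the TR radius
            mu = mu_new
        # cheap necessary condition violated -> reject and shrink
        elif J_r(mu_new) - Delta_J_r(mu_new) > J_r(mu):
            delta *= beta_1
            continue
        else:                                # undecided: enrich, then decide
            enrich(mu_new)
            if J_r(mu_new) <= J_r(mu):
                mu = mu_new
            else:
                delta *= beta_1
                continue
        # FOM first-order criticality, cheap after the enrichment
        if np.linalg.norm(mu - proj(mu - grad_J_h(mu))) <= tau_foc:
            break
    return mu
```

With an exact surrogate (\texttt{J\_r = J\_h}, vanishing estimator) the sketch reduces to a plain projected descent method, which is a convenient sanity check.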
\subsection{Convergence analysis}
\label{sec:TR_convergence_analysis}
In order to guarantee the well-posedness (because of \eqref{TR_radius_condition}) and the convergence of the method, we make the following assumption.
\begin{assumption}
\label{asmpt:bound_J}
The cost functional $\mathcal{J}(u,\mu)$ is strictly positive for all $u\in V$ and all parameters $\mu\in\mathcal{P}$.
\end{assumption}
Note that this assumption is not too restrictive, since boundedness from below is a standard assumption in optimization to guarantee the existence of a solution of the minimization problem. If a global lower bound for the cost functional is known, one can add a sufficiently large constant without changing the position of its local minima and maxima. Another important requirement, pointed out in \cite{QGVW2017,YM2013}, is that an error-aware sufficient decrease condition
\begin{align}
\label{Suff_decrease_condition}
{{\Jhat_{r}}}^{(k+1)}(\mu^{(k+1)})\leq {{\Jhat_{r}}}^{(k)}(\mu_\text{\rm{AGC}}^{(k)}) && \text{ for all } k\in\mathbb{N}
\end{align}
is fulfilled at each iteration $k$ of the TR-RB algorithm. As in \cite{QGVW2017,YM2013}, we consider cheaply computable sufficient and necessary conditions for \eqref{Suff_decrease_condition} in Algorithm~\ref{Alg:TR-RBmethod} (Step~\ref{Suff_condition_TRRB} and Step~\ref{Nec_condition_TRRB}, respectively). The TR-RB algorithm then rejects any computed point which does not satisfy \eqref{Suff_decrease_condition}. One may be concerned that Algorithm~\ref{Alg:TR-RBmethod} could be trapped in an infinite loop in which every computed point is rejected and the TR radius is shrunk over and over. We point out that this never happened in our numerical tests. Nevertheless, we include a safety termination criterion, which is triggered when the TR radius falls below double machine precision. To prove convergence of Algorithm~\ref{Alg:TR-RBmethod}, in what follows, we assume that this situation cannot occur.
\begin{assumption}
\label{asmpt:rejection}
For each $k\geq 0$, there exists a radius $\widetilde{\delta}^{(k)}>\tau_{\rm{\text{mac}}}$ for which a solution of \eqref{TRRBsubprob} is such that \eqref{Suff_decrease_condition} is verified, where $\tau_{\rm{\text{mac}}}= 2.22\cdot10^{-16}$ is the double machine precision.
\end{assumption}
\begin{lemma}
\label{Lemma:AGC_search}
Let Assumptions~{{\rm{\ref{asmpt:parameter_separable}}}--{\rm{\ref{asmpt:rejection}}}} hold true. The search of the AGC point defined in Definition~{\rm{\ref{Def:AGC}}} takes finitely many iterations at each step $k$ of the TR-RB Algorithm.
\end{lemma}
\begin{proof}
We want to prove that there exists an index $j_c^{(k)}<\infty$ for each $k\geq 0$, for which $\mu_\text{\rm{AGC}}^{(k)}= \mu^{(k,0)}(j^{(k)}_c)$ satisfies
\eqref{Armijo}-\eqref{TR_radius_condition} for $\ell=0$. From \cite[Theorem~5.4.5]{Kel99} (and the subsequent discussion) we conclude that for all $k\in\mathbb{N}$ there exists a strictly positive index $j^{(k)}_1\in\mathbb{N}$ such that $\mu^{(k,0)}(j)$ satisfies \eqref{Armijo} for $j\geq j^{(k)}_1$ and $\ell=0$. If $k=0$, by construction we have that $\Delta_{\hat{\mathcal{J}}_{r}^{(0)}}(\mu^{(0)}) = 0$. Therefore, there exists a sufficiently large (but finite) index $j^{(0)}_2\in\mathbb{N}$ such that $\mu^{(0,0)}(j)$ satisfies \eqref{TR_radius_condition} for all $j\geq j^{(0)}_2$ and $\ell=0$. The reason relies on the
continuity w.r.t.~$\mu$ of the error estimator $\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu)$ (cf.~Remark~\ref{continuity_of_estimator}) and of the cost functional ${{\Jhat_{r}}}^{(k)}(\mu)$ for all $k\in\mathbb{N}$. Hence there exists $j^{(0)}_c=\max(j^{(0)}_1,j^{(0)}_2)<\infty$, for which $\mu^{(0,0)}(j)$ satisfies \eqref{Armijo}-\eqref{TR_radius_condition} for $\ell=0$. If $k\geq 1$,
since the model has been enriched, i.e.~$\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu^{(k)}) = 0$, we can show the claim arguing as we did for $k=0$.
Note, in fact, that we increase the iteration counter only when $\mu^{(k)}$ is accepted at iteration $k-1$ and, thus, when the RB model is enriched at this parameter.
\end{proof}
\begin{theorem}
\label{Thm:convergence_of_TR}
Let the hypotheses of Lemma~{\rm{\ref{Lemma:AGC_search}}} be verified.
Then every accumulation point $\bar\mu$ of the sequence $\{\mu^{(k)}\}_{k\in\mathbb{N}}\subset \mathcal{P}$ generated by the TR-RB algorithm
is an approximate first-order critical point for $\hat{\mathcal{J}}_h$ (up to the chosen tolerance $\tau_\text{\rm{sub}}$), i.e., it holds
\begin{equation}
\label{First-order_critical_condition}
\|\bar \mu-P_\mathcal{P}(\bar \mu-\nabla_\mu \hat{\mathcal{J}}_h(\bar \mu))\|_2 \leq \tau_\text{\rm{sub}}.
\end{equation}
\end{theorem}
\begin{proof}
The set $\mathcal{P}\subset \mathbb{R}^P$ is compact. Therefore there exists a sequence of indices $\left\{k_i\right\}_{i\in\mathbb{N}}$ such that the sub-sequence $\{\mu^{(k_i)}\}_{i\in\mathbb{N}}$ converges to a point $\bar \mu\in \mathcal{P}$.
It remains to show that $\bar \mu$ is an approximate first-order critical point. At first, note that once the RB space is enriched at a point $\mu^{(k)}$, we have $\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu^{(k)})= 0$. Hence, also $q^{(k)}(\mu^{(k)}) = 0$ holds, where
\begin{align*}
q^{(k)}(\mu) := \frac{\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}(\mu)}{\hat{\mathcal{J}}_{r}^{(k)}(\mu)} && \text{for all } k\in\mathbb{N},\, \mu\in\mathcal{P}.
\end{align*}
Note that both the estimator $\Delta_{\hat{\mathcal{J}}_{r}^{(k)}}$ and $q^{(k)}$ are uniformly continuous on $\mathcal{P}$ for all $k\geq 0$. This follows directly from Remark~\ref{continuity_of_estimator} and the Heine-Cantor theorem. When the model is enriched at a parameter $\mu^{(k)}$, from the uniform continuity of $q^{(k)}$ it follows that for all $\varepsilon>0$ there exists an $\eta^{(k)}>0$ (depending on $\varepsilon$) such that
$\|\mu^{(k)}-\mu\|_2< \eta^{(k)}$ implies
\[
\big|q^{(k)}(\mu)-\underset{= 0}{\underbrace{q^{(k)}(\mu^{(k)})}}\big| < \varepsilon.
\]
Furthermore, due to the convergence of the sub-sequence $\left\{\mu^{(k_i)}\right\}_{i\in\mathbb{N}}\subset\mathcal{P}$, we have that there exists a sufficiently large constant $I>0$ and a constant $\gamma>0$ such that $\|\mu^{(k_i)}-\mu^{(k_{i+1})}\|_2< \gamma < \eta^{(k_i)}$ for all $i \geq I$. Then we have
\begin{align}
\label{unif_cont_estimator_step}
q^{(k_i)}(\mu^{(k_{i+1})}) = \frac{\Delta_{\hat{\mathcal{J}}_{r}^{(k_i)}}(\mu^{(k_{i+1})})}{\hat{\mathcal{J}}_{r}^{(k_i)}(\mu^{(k_{i+1})})}<\varepsilon && \text{ for all } i\geq I.
\end{align}
We want to prove that $q^{(k_{i+1}-1)}(\mu^{(k_{i+1})})< \beta_2 \delta^{(k_{i+1}-1)}$ for all $i\geq I$, such that the unique solution $\mu^{(k_{i+1})}$ to \eqref{TRRBsubprob} (for $k=k_{i+1}-1$) does not trigger the termination criterion \eqref{Cut_of_TR}. Note that $\varepsilon$ in \eqref{unif_cont_estimator_step} can be chosen appropriately (which implies a certain $\eta^{(k_i)}$ for all $i\in\mathbb{N}$ and thus a sufficiently large index $I$). Since the RB space is enriched at each iteration of Algorithm~\ref{Alg:TR-RBmethod}, we especially have that $\Delta_{\hat{\mathcal{J}}_{r}^{(k_{i+1}-1)}}(\mu^{(k_{i+1})})\leq \Delta_{\hat{\mathcal{J}}_{r}^{(k_i)}}(\mu^{(k_{i+1})})$. Using \eqref{unif_cont_estimator_step}, we find that $\Delta_{\hat{\mathcal{J}}_{r}^{(k_{i+1}-1)}}(\mu^{(k_{i+1})})\leq \Delta_{\hat{\mathcal{J}}_{r}^{(k_i)}}(\mu^{(k_{i+1})})< \varepsilon \hat{\mathcal{J}}_{r}^{(k_i)}(\mu^{(k_{i+1})})$. Hence, $q^{(k_{i+1}-1)}(\mu^{(k_{i+1})})< \beta_2 \delta^{(k_{i+1}-1)}$ holds for
\[
\varepsilon = \beta_2 \delta^{(k_{i+1}-1)}\frac{\hat{\mathcal{J}}_{r}^{(k_{i+1}-1)}(\mu^{(k_{i+1})})}{\hat{\mathcal{J}}_{r}^{(k_i)}(\mu^{(k_{i+1})})}
\]
and for all $i\geq I$.
This shows that from a certain iteration $I$, we are far enough from the
boundary of the Trust-Region for all $i\geq I$, so that \eqref{Cut_of_TR} does not affect the projected BFGS algorithm. Thus, \eqref{FOC_subproblem} must hold for $\mu^{(k_{i+1})}= \mu^{(k_{i+1}-1,L^{(k_{i+1}-1)})}$ for $i\geq I$. Hence, we have proved that each $\mu^{(k_{i+1})}$ is an approximate first-order critical point for ${{\Jhat_{r}}}^{(k_{i+1}-1)}$ (up to the chosen tolerance $\tau_\text{\rm{sub}}$) for all $i\geq I$, which yields
\begin{align*}
\|\mu^{(k_{i+1})}-P_\mathcal{P}(\mu^{(k_{i+1})}-\nabla_\mu{{\Jhat_{r}}}^{(k_{i+1}-1)}(\mu^{(k_{i+1})}))\|_2 \leq \tau_\text{\rm{sub}}, && \text{for all } i\geq I.
\end{align*}
Moreover, taking into account the RB method properties and the fact that $V_h$ is a finite dimensional space, there exists a constant $I_\nabla>0$ sufficiently large, such that $\nabla_\mu {{\Jhat_{r}}}^{(k_i)}(\mu)= \nabla_\mu \hat{\mathcal{J}}_h(\mu)+\epsilon^{(k_i)}$ for all $\mu$ in a neighborhood of $\bar\mu$ and for $i\geq I_{\nabla}$, with $\epsilon^{(k_i)}\to 0$ as $i\to\infty$.
Thus, exploiting the continuity of the projection operator and assuming $i\geq \max(I,I_\nabla)$, we have that
\[
\begin{aligned}
\tau_\text{\rm{sub}} & \geq \|\mu^{(k_{i+1})}-P_\mathcal{P}(\mu^{(k_{i+1})}-\nabla_\mu{{\Jhat_{r}}}^{(k_{i+1}-1)}(\mu^{(k_{i+1})}))\|_2 \\
& = \|\mu^{(k_{i+1})}-P_\mathcal{P}(\mu^{(k_{i+1})}-\nabla_\mu\hat{\mathcal{J}}_h(\mu^{(k_{i+1})})-\epsilon^{(k_{i+1}-1)})\|_2 \to \|\bar\mu - P_\mathcal{P}(\bar\mu-\nabla_\mu\hat{\mathcal{J}}_h(\bar \mu))\|_2.
\end{aligned}
\]
Hence, the accumulation point $\bar \mu$ is an approximate first-order critical point (up to the tolerance $\tau_\text{\rm{sub}}$).
\end{proof}
\begin{remark}
\label{rmk:local_minimum_TR}
What remains to prove is that $\bar \mu$ is a local minimum of $\hat{\mathcal{J}}_h$ (or rather a sufficiently close approximation of a local minimum). Exploiting the sufficient decrease condition, one can easily show by contradiction that $\bar \mu$ is not a maximum of $\hat{\mathcal{J}}_h$. It can still be a saddle point as well as a local minimum. In the numerical experiments, to verify that the computed point $\bar \mu$ is actually a local minimum, we employ the second-order sufficient optimality conditions after the algorithm terminates.
\end{remark}
\subsection{Construction of RB spaces}
\label{sec:construct_RB}
In an enrichment step of the outer loop of Algorithm~\ref{Alg:TR-RBmethod} at a parameter $\mu \in \mathcal{P}$, we assume to have access to the primal and dual FOM solutions $u_{h, \mu}, p_{h, \mu} \in V_h$ and consider two strategies to enrich the RB spaces.
\begin{enumerate}[(a)]
\item \emph{Lagrangian RB spaces}: \label{enrich:lag} We add each FOM solution to the RB space that is directly related to its respective reduced formulation, i.e.~for a
given $\mu \in \mathcal{P}$, we enrich by
$
V^{\textnormal{pr},k}_{{r}} = V^{\textnormal{pr},k-1}_{{r}} \cup \{u_{h,\mu}\},
V^{\textnormal{du},k}_{{r}} = V^{\textnormal{du},k-1}_{{r}} \cup \{p_{h,\mu}\}.
$
\item \emph{Single RB space}: We add all available information into a single RB space,
i.e.~$V^{\textnormal{pr}, k}_{{r}} = V^{\textnormal{du}, k}_{{r}} = V^{\textnormal{pr},k-1}_{{r}} \cup \{u_{h,\mu}, p_{h,\mu}\}$.
According to Section~\ref{sec:mor} this results in
${{\Jhat_{r}}}(\mu)$ being equal to the standard RB reduced functional from \eqref{eq:Jhat_red}.
\label{enrich:single}
\end{enumerate}
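Both enrichment strategies reduce to extending an orthonormal snapshot basis. A minimal sketch, using the Euclidean inner product and a plain Gram-Schmidt step instead of the stabilized, energy-product-based variant of our implementation, could read:

```python
import numpy as np

def gram_schmidt_extend(basis, vec, tol=1e-12):
    """Orthonormally extend a list of basis vectors by a new snapshot;
    the snapshot is discarded if it is (numerically) linearly dependent."""
    v = np.asarray(vec, float).copy()
    for b in basis:
        v -= np.dot(b, v) * b
    n = np.linalg.norm(v)
    return basis + [v / n] if n > tol else basis

def enrich_lagrangian(V_pr, V_du, u_h, p_h):
    """(a) Lagrangian RB spaces: separate primal and dual bases."""
    return gram_schmidt_extend(V_pr, u_h), gram_schmidt_extend(V_du, p_h)

def enrich_single(V, u_h, p_h):
    """(b) single RB space: both snapshots go into one basis."""
    return gram_schmidt_extend(gram_schmidt_extend(V, u_h), p_h)
```

Strategy (b) yields a basis of twice the size per enrichment, which is the source of the quadratic offline cost difference discussed below.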
These strategies for constructing RB spaces have a significant impact on the performance and accuracy of the TR-RB method.
Note that offline computations for the construction of RB models scale quadratically with the number of
basis functions in the RB space.
Thus, Lagrangian RB spaces as in \eqref{enrich:lag} are computationally beneficial compared to \eqref{enrich:single}, at a potential loss of accuracy of the corresponding RB models (since less information is added). Moreover, using different spaces as in \eqref{enrich:lag} breaks the duality of state and adjoint equations, cf.~Section~\ref{sec:standard_approach}.
\subsection{Trust-Region variants based on adaptive enrichment strategies}
\label{sec:high_order_TR}
A major contribution of this article is to introduce and analyze variants of adaptive TR-RB methods with projected BFGS as sub-problem solver for efficiently computing a
solution of the optimization problem \eqref{P}.
In terms of performance we need to account for all computational costs, including traditional offline and online costs of the algorithms.
The proposed methods mainly differ in terms of the model function and its gradient information.
Following Section~\ref{sec:TR}, we propose a TR method which adaptively builds an RB space along the path of optimization (see Algorithm \ref{Alg:TR-RBmethod}). From a MOR perspective, this significantly reduces the offline time of the ROM, since no global RB space (with respect to the parameter domain) has to be built in advance.
We enrich the model after the sub-problem \eqref{TRRBsubprob} of the TR method has been solved.
We distinguish three different approaches:
\begin{enumerate}[1.]
\item \emph{standard approach:} Following Section~\ref{sec:standard_approach}, the standard approach for the functional
is to replace the FOM quantities by their respective ROM counterpart, i.e.~we consider the map $\mu \mapsto \hat{J}_{r}(\mu)$ from \eqref{eq:Jhat_red}.
Gradient information can be computed by reducing the corresponding FOM gradient, which results in
$\widetilde\nabla_\mu \mathcal{J}(u_{{r}, \mu}, \mu)$ from \eqref{naive:red_grad}. Consequently, this approach does not allow for a higher-order estimate; only
$\Delta_{\hat{J}_{r}}(\mu)$ is available.
\item \emph{semi NCD-corrected approach:} A first correction strategy is to replace the functional by the NCD-corrected RB reduced functional ${{\Jhat_{r}}}$
from \eqref{eq:Jhat_red_corected} but stick with the inexact gradient of the standard approach.
This allows us to use the higher-order estimator for the functional, i.e.
$\Delta_{{{\Jhat_{r}}}}(\mu)$.
\item \emph{NCD-corrected approach:}
We propose to consider the NCD-corrected RB reduced functional ${{\Jhat_{r}}}$ from \eqref{eq:Jhat_red_corected} and its actual gradient according to
Proposition~\ref{prop:true_corrected_reduced_gradient_adj}.
Note that we only need to solve two additional equations, independently of the dimension $P$ of the parameter space.
\end{enumerate}
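The difference between the standard and the NCD-corrected reduced functional can be illustrated on an assumed linear-quadratic toy problem $A u = f$ with output $J(u) = \frac{1}{2} u^\top K u + j^\top u$. The sketch below is not our pyMOR implementation, and the dual equation follows the dual-weighted-residual sign convention, which may differ from the adjoint convention of the paper:

```python
import numpy as np

def reduced_functionals(A, f, K, j, V):
    """Standard vs. NCD-corrected RB reduced functional for the toy problem
    A u = f with quadratic output J(u) = 0.5*u@K@u + j@u (assumed notation).
    V is a matrix whose columns form an orthonormal RB basis."""
    u_r = V @ np.linalg.solve(V.T @ A @ V, V.T @ f)                # reduced primal
    z_r = V @ np.linalg.solve(V.T @ A.T @ V, V.T @ (K @ u_r + j))  # reduced dual
    J_std = 0.5 * u_r @ K @ u_r + j @ u_r       # standard approach (variant 1)
    J_ncd = J_std + z_r @ (f - A @ u_r)         # residual-corrected (variants 2/3)
    return J_std, J_ncd
```

If the RB space contains the exact primal solution, the residual vanishes and both variants reproduce the FOM value; in general, the correction term makes the functional error higher order, which is the rationale behind variants 2 and 3.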
For the basis construction, we may use variants \eqref{enrich:lag} or \eqref{enrich:single} from Section~\ref{sec:construct_RB}.
Note, however, that when using the basis enrichment \eqref{enrich:single}, all approaches 1.--3. are equivalent.
Using variant \eqref{enrich:lag} with BFGS is inspired by \cite{QGVW2017}. However, our algorithms differ from the
TR-RB approach in \cite{QGVW2017}, since we are working with the NCD-corrected reduced cost functional (in 2.) and its actual gradient (in 2. and 3.).
Note that the presence of inequality constraints, which are missing in \cite{QGVW2017}, requires a projection-based optimization algorithm.
In addition, we stress that, differently from \cite{QGVW2017}, we take advantage of the proposed condition for
enlarging the TR radius and of a stopping criterion independent of the RB a posteriori estimates, as presented in Section~\ref{sec:TR}.
\begin{remark}
Note that we do not use the sensitivity based quantities from Section~{\rm{\ref{sec:sens_estimation}}}, although they promise the highest
numerical accuracy w.r.t.~the FOM optimality system.
For the experiments in Section~\ref{sec:num_experiments}, the
additional computational cost of computing
FOM sensitivities does not pay off in the TR-RB algorithm, especially for high-dimensional parameter spaces.
\end{remark}
\section{Numerical experiments}
\label{sec:num_experiments}
We present numerical experiments to demonstrate the adaptive TR-RB variants from Section~\ref{sec:high_order_TR} with both RB constructions from Section~\ref{sec:construct_RB} for quadratic objective functionals with elliptic PDE constraints as in \eqref{P}, and compare them to state-of-the art algorithms from the literature.
We also validate the higher-order a posteriori error estimates from Section \ref{sec:a_post_error_estimates} numerically.
We consider two setups: first, the elliptic thermal fin problem from \cite[Sec.~5.1.1]{QGVW2017} (where the correction term of the proposed NCD-corrected approach vanishes) in Section~\ref{sec:thermalfin}. Second, we consider a more challenging optimization problem in Section~\ref{sec:mmexc_example}, including a detailed analysis of the a posteriori error estimates from Section~\ref{sec:a_post_error_estimates}.
All simulations have been performed with a pure \texttt{Python} implementation based on the open source MOR library pyMOR \cite{milk2016pymor}, making use of pyMOR's built-in vectorized \texttt{numpy/scipy}-based discretizer for the FOM and generic MOR algorithms for projection and orthonormalization (such as a stabilized Gram-Schmidt algorithm) to effortlessly obtain efficient ROMs.
The source code to reproduce all results (including detailed interactive \texttt{jupyter}-notebooks\footnote{Available at \url{https://github.com/TiKeil/NCD-corrected-TR-RB-approach-for-pde-opt}.}) is available at \cite{Code}.
All experiments are based on the same implementation (including a reimplementation of \cite{QGVW2017}) and were performed on the same machine multiple times to avoid caching or multi-query effects.
Timings may thus be used to compare and judge the computational efficiency of the different algorithms.
We consider stationary heat transfer in a bounded connected spatial domain $\Omega \subset \mathbb{R}^2$ with polygonal boundary $\partial\Omega$ partitioned into a non-empty Robin boundary $\Gamma_\text{\rm{R}} \subset \partial\Omega$ and possibly empty distinct Neumann boundary $\Gamma_\text{\rm{N}} = \partial\Omega\backslash \Gamma_\text{\rm{R}}$, and unit outer normal $n: \partial\Omega \to \mathbb{R}^2$.
We consider the Hilbert space $V = H^1(\Omega) := \{v \in L^2(\Omega) \,|\, \nabla v \in L^2(\Omega) \}$ of weakly differentiable functions and, for an admissible parameter $\mu \in \mathcal{P}$, we seek the temperature $u_\mu \in V$ as the solution of
\begin{align}
-\nabla\cdot\big(\kappa_\mu \nabla u_\mu \big) &= f_\mu \ \text{in }\Omega, &
\kappa_\mu \nabla u_\mu\cdot n &= c_\mu (u_\text{\rm{out}} - u_\mu) \ \text{on }\Gamma_\text{\rm{R}},
\label{eq:heat_equation}
& \kappa_\mu\nabla u_\mu \cdot n &= g_\text{\rm{N}} \ \text{on }\Gamma_\text{\rm{N}}
\end{align}
in the weak sense, with the admissible parameter set, the spatial domain and its boundaries and the data functions $\kappa_\mu \in L^\infty(\Omega)$, $f_\mu \in L^2(\Omega)$, $c_\mu \in L^\infty(\Gamma_\text{\rm{R}})$ and $u_\text{\rm{out}} \in L^2(\Gamma_\text{\rm{R}})$ defined in the respective experiment.
The bilinear form $a$ and linear functional $l$ in \eqref{P.state} are thus given for all $\mu \in \mathcal{P}$ and $v, w \in V$ by
\begin{align}
a_\mu(v, w) := \int_\Omega\kappa_\mu \nabla v \cdot \nabla w \intend{x} + \int_{\Gamma_\text{\rm{R}}}\hspace{-5pt}c_\mu\, vw\intend{s} &&\text{and}&& l_\mu(v) := \int_\Omega f_\mu\,v\intend{x} + \int_{\Gamma_\text{\rm{R}}}\hspace{-5pt}c_\mu\, u_\text{\rm{out}}v\intend{s} + \int_{\Gamma_\text{\rm{N}}}\hspace{-5pt}g_\text{\rm{N}}v\intend{s}.
\end{align}
For the FOM we fix a fine enough reference simplicial or cubic mesh and define $V_h \subset V$ as the respective space of continuous piecewise (bi-)linear Finite Elements.
Since the inner product and norm have a strong influence on the computational efficiency of the a posteriori error estimates as well as on their sharpness, we use the mesh-independent energy product $(u, v) := a_{\check{\mu}}(u, v)$ for a fixed parameter $\check{\mu} \in \mathcal{P}$, which defines an inner product on $V$ due to the symmetry, continuity and coercivity of the bilinear form for each example below.
Owing to this choice of the product, we may use the $\min$-theta approach from \cite[Prop.~2.35]{HAA2017} to obtain lower bounds on coercivity constants and the $\max$-theta approach from \cite[Ex.~5.12]{HAA2017} to obtain upper bounds on continuity constants, each required for the a posteriori error estimates.
Compared to the more general Successive Constraint Method \cite{pat07}, this approach yields quite sharp estimates and is computationally more efficient, both offline and online.
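The $\min$-/$\max$-theta bounds themselves are one-liners over the coefficient functions. A sketch, assuming strictly positive coefficients $\theta_q$ and the energy product at $\check\mu$ (cf.~\cite[Prop.~2.35, Ex.~5.12]{HAA2017}):

```python
import numpy as np

def min_theta_coercivity_lb(thetas_mu, thetas_mu_check):
    """Min-theta lower bound for the coercivity constant of
    a_mu = sum_q theta_q(mu) a_q w.r.t. the energy product a_{mu_check},
    assuming theta_q > 0 and positive semi-definite components a_q:
    alpha_mu >= min_q theta_q(mu) / theta_q(mu_check)."""
    t = np.asarray(thetas_mu, float)
    tc = np.asarray(thetas_mu_check, float)
    return float(np.min(t / tc))

def max_theta_continuity_ub(thetas_mu, thetas_mu_check):
    """Analogous max-theta upper bound for the continuity constant."""
    t = np.asarray(thetas_mu, float)
    tc = np.asarray(thetas_mu_check, float)
    return float(np.max(t / tc))
```

Both bounds are parameter-separable themselves and thus available online at negligible cost, in contrast to the Successive Constraint Method.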
Due to Assumption~\ref{asmpt:parameter_separable} and the bi-linearity of the objective functional, we may carry out the preassembly of all high-dimensional quantities after each enrichment, which is well-known for RB methods \cite[Sec.~2.5]{HAA2017}.
We would like to point out that while the more accurate and stable preassembly of the estimates from \cite{buhr14} is readily available in pyMOR, the slightly cheaper standard preassembly of the estimates was sufficient for our experiments.
For all experiments, we use an initial TR radius of $\delta^{(0)} = 0.1$,
a TR shrinking factor $\beta_1=0.5$, an Armijo step-length $\kappa=0.5$,
a truncation of the TR boundary of $\beta_2 = 0.95$,
a tolerance for enlarging the TR radius of $\eta_\varrho = 0.75$,
a stopping tolerance for the TR sub-problems of $\tau_\text{\rm{sub}} = 10^{-8}$,
a maximum number of TR iterations $K = 40$, a maximum number of sub-problem iterations $K_{\text{\rm{sub}}}= 400$ and
a maximum number of Armijo iterations of $50$. The stopping tolerance for the FOC condition
$\tau_\text{\rm{FOC}}$ is specified in each experiment.
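For reference, these defaults can be collected in a single dictionary (variable names are ours, not pyMOR's):

```python
# Default TR-RB hyperparameters used in all experiments of this section,
# collected in one place (names are ours, not pyMOR's).
TR_DEFAULTS = dict(
    delta_0=0.1,      # initial TR radius
    beta_1=0.5,       # TR shrinking factor
    kappa=0.5,        # Armijo step-length
    beta_2=0.95,      # truncation of the TR boundary
    eta_rho=0.75,     # tolerance for enlarging the TR radius
    tau_sub=1e-8,     # stopping tolerance for the TR sub-problems
    K_max=40,         # maximum number of TR iterations
    K_sub=400,        # maximum number of sub-problem iterations
    max_armijo=50,    # maximum number of Armijo iterations
)
```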
\subsection{State of the art optimization methods}
\label{sec:state_of_the_art_methods}
\noindent
We compare our proposed methods to the following ones from the literature:
\textbf{Adaptive TR-RB with BFGS sub-problem solver and Lagrangian RBs \cite{QGVW2017}}:
We consider the same method as in \cite{QGVW2017}, where the authors used the standard functional and gradient from Section~\ref{sec:standard_approach}.
Furthermore, no enlarging strategy has been used for the TR-radius and no projection for parameter constraints has been considered.
Importantly, the authors did not take advantage of the fact that the full order FOC condition in line 23 of Algorithm \ref{Alg:TR-RBmethod}
is cheaply available after an enrichment step. Instead, they used the reduced FOC condition plus the estimator for the gradient of the
cost functional, $\|\widetilde\nabla_\mu \hat{J}_{r}(\mu^{(k+1)})\|_2 + \Delta_{\widetilde\nabla_\mu \hat{J}_{r}}(\mu^{(k+1)}) \leq \tau_\text{\rm{FOC}}$, in line 23. Note that this
approach has multiple drawbacks: first, the evaluation is more costly due to the estimator; second, it is less accurate; and third, it can
prevent the TR-RB method from converging in case the estimator cannot become small enough (for instance, due to large constants or numerical
issues in the estimator).
\textbf{FOM projected BFGS}: We consider a standard projected BFGS method,
which uses FOM evaluations of the forward model to compute the reduced cost functional and its gradient.
We restrict the maximum number of iterations to $400$.
\subsection{Model problem 1: Elliptic thermal fin model problem}
\label{sec:thermalfin}
We consider the six-dimensional elliptic thermal fin example from \cite[Sec.~5.1.1]{QGVW2017} and refer to Figure~\ref{fig:fin} for the problem definition.
The purpose of this experiment is to show the applicability of the proposed algorithms and to compare them to the one proposed in \cite{QGVW2017}.
For all runs we prescribe the same desired parameter $\mu^\text{d} \in \mathcal{P}$ by randomly drawing $k_1, \dots, k_4$ strictly within $\mathcal{P}$ and by setting $k_0 = 0.1$ and $\text{Bi} = 0.01$, to artificially mimic the situation where parameter constraints have to be tackled.
Defining $T^\text{d} := q(u_{\mu^\text{d}})$ where $u_{\mu^\text{d}} \in V$ is the solution of \eqref{P.state} associated with the desired parameter and where $q(v) := \int_{\Gamma_\text{\rm{N}}} v \intend{s}$ for $v\in V$ denotes the mean temperature at the root of the fin, we consider a cost functional $\mathcal{J}(u, \mu) = \Theta(\mu) + j_\mu(u) + k_\mu(u, u)$ as in \eqref{P.argmin} with $\Theta(\mu) := (\|\mu^\text{d} - \mu\| / \|\mu^\text{d}\|)^2 + {T^\text{d}}^2 + 1$, $j_\mu(v) := - T^\text{d}\,q(v)$ and $k_\mu(v, w) := 1/2\,q(v)\,q(w)$.
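Completing the square shows that this functional is, up to the additive constant $\tfrac{1}{2}{T^\text{d}}^2 + 1$, a misfit of the root temperature plus a parameter-Tikhonov term. A small sketch, assuming the scalar output $q(u)$ has already been computed:

```python
import numpy as np

def fin_cost(q_u, mu, mu_d, T_d):
    """Thermal-fin cost functional J(u, mu) = Theta(mu) + j_mu(u) + k_mu(u, u),
    with the scalar q_u = q(u) (mean root temperature) assumed precomputed."""
    mu = np.asarray(mu, float)
    mu_d = np.asarray(mu_d, float)
    theta = np.sum((mu_d - mu) ** 2) / np.sum(mu_d ** 2) + T_d ** 2 + 1.0
    return theta - T_d * q_u + 0.5 * q_u ** 2   # = Theta + j_mu + k_mu
```

At the desired parameter with $q(u) = T^\text{d}$, the sketch attains the value $\tfrac{1}{2}{T^\text{d}}^2 + 1 > 0$, consistent with Assumption~\ref{asmpt:bound_J}.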
We would like to point out that the authors in \cite{QGVW2017} dropped the ${T^\text{d}}^2 + 1$ term from the definition of $\Theta$, which we re-add to ensure Assumption~\ref{asmpt:bound_J}. This constant term does not change the position of local minima and the derivatives of the cost functional. However, this makes the Trust-Region radius shrink especially at the beginning, slowing down the TR-RB methods. This does not affect the comparison among the TR-RB methods, since all suffer from this issue.
Note that for this particular example, the proposed NCD-correction term vanishes, see Remark~\ref{rem:fin}.
For the FOM, we generate an unstructured simplicial mesh using pyMOR's \texttt{gmsh} (see \cite{gmsh}) bindings, resulting in $\dim V_h = 77537$.
\begin{SCfigure}[50][ht]
\centering\footnotesize
\begin{tikzpicture}[scale=1.1]
\draw (-.5,0) -- (-.5,.75);
\draw (-.5,.75) -- (-3.,.75);
\node at (-1.8,0.86) {{\tiny $k_1$}};
\node at (-1.8,1.86) {{\tiny $k_2$}};
\node at (-1.8,2.86) {{\tiny $k_3$}};
\node at (-1.8,3.86) {{\tiny $k_4$}};
\node at (1.8,0.86) {{\tiny $k_1$}};
\node at (1.8,1.86) {{\tiny $k_2$}};
\node at (1.8,2.86) {{\tiny $k_3$}};
\node at (1.8,3.86) {{\tiny $k_4$}};
\node at (0,2) {{\tiny $k_0$}};
\draw (-3.,.75) -- (-3.,1) -- (-.5,1);
\draw (-.5,1) -- (-.5,1.75) -- (-3.,1.75) -- (-3.,2) -- (-.5,2);
\draw (-.5,2) -- (-.5,2.75) -- (-3.,2.75) -- (-3.,3) -- (-.5,3);
\draw (-.5,3) -- (-.5,3.75) -- (-3.,3.75) -- (-3.,4) -- (3,4);
\draw (.5,0) -- (.5,.75) -- (3.,.75) -- (3.,1) -- (.5,1);
\draw (.5,1) -- (.5,1.75) -- (3.,1.75) -- (3.,2) -- (.5,2);
\draw (.5,2) -- (.5,2.75) -- (3.,2.75) -- (3.,3) -- (.5,3);
\draw (.5,3) -- (.5,3.75) -- (3.,3.75) -- (3.,4);
\draw (-.5,0) -- (.5,0) node [midway,above] {$\Gamma_\text{\rm{N}}$};
\draw[densely dotted] (-.5,.75) -- (-.5,1);
\draw[densely dotted] (-.5,1.75) -- (-.5,2);
\draw[densely dotted] (-.5,2.75) -- (-.5,3);
\draw[densely dotted] (-.5,3.75) -- (-.5,4);
\draw[densely dotted] (.5,.75) -- (.5,1);
\draw[densely dotted] (.5,1.75) -- (.5,2);
\draw[densely dotted] (.5,2.75) -- (.5,3);
\draw[densely dotted] (.5,3.75) -- (.5,4);
\draw[->|] (3.2,0.45) -- (3.2,0.75);
\draw[->|] (3.2,1.3) -- (3.2,1.);
\draw[<->|] (0.5,0.45) -- (3,0.45) node[midway, fill=white] {$L$};
\draw (3,0.75) -- (3,1) node[midway,right] {\small $t$};
\end{tikzpicture}
\caption
{\small
Problem definition of the thermal fin example from Section~\ref{sec:thermalfin}.
Depicted is the spatial domain $\Omega$ (with $L=2.5$ and $t=0.25$) with Neumann boundary at the bottom with $|\Gamma_{\text{N}}| = 1$ and Robin boundary $\Gamma_\text{\rm{R}} := \partial\Omega \backslash \Gamma_\text{\rm{N}}$, as well as the values $k_0, \dots, k_4 > 0$ of the diffusion $k_\mu$, which is piecewise constant in the respective indicated part of the domain.
The other data functions in \eqref{eq:heat_equation} are given by $f_\mu = 0$, $g_\text{\rm{N}} = -1$, $u_\text{\rm{out}} = 0$ and $c_\mu = \text{Bi} \in \mathbb{R}$, the Biot number.
We allow to vary the six parameters $(k_0, \dots, k_4, \text{Bi})$ and define the set of admissible parameters as $[0.1, 10]^5 \times [0.01, 1] \subset \mathbb{R}^P$ with $P = 6$.
We choose $\check{\mu} = (1, 1, 1, 1, 1, 0.1)$ for the energy product.}
}
\label{fig:fin}
\end{SCfigure}
Starting with ten different randomly drawn initial parameters $\mu^{(0)}$, we measure the total computational runtime, the number of TR iterations $k$ and the error in the optimal parameter for all combinations of adaptive TR algorithms from Section~\ref{sec:TRRB_and_adaptiveenrichment} and choice of RB spaces from Section~\ref{sec:construct_RB}, as well as for the state of the art methods from the literature from Section~\ref{sec:state_of_the_art_methods}.
\begin{table}
\centering\footnotesize
\begin{tabular}{l|cc|c|cc}
& av.~(min/max) runtime[s]
& speed-up
& av.~(min/max) iter.
& rel.~error
& FOC cond.\\
\hline
FOM proj.~BFGS
& 967.86 (176.69/3401.06)
& --
& 111.20 (25/400)
& $3.13\cdot 10^{-3}$
& $1.19\cdot 10^{-2}$\\
TR-RB from \cite{QGVW2017}
& 68.06 (43.28/88.21)
& 10.40
& 7.20 (8/13)
& $1.34\cdot 10^{-6}$
& $4.31\cdot 10^{-5}$\\
1(a) TR-RB with $V_{r}^\textnormal{pr} \neq V_{r}^\textnormal{du}$
& 44.56 (34.22/74.96)
& 21.72
& 8.80 (8/11)
& $3.08\cdot 10^{-6}$
& $4.64\cdot 10^{-5}$\\
1(b) TR-RB with $V_{r}^\textnormal{pr} = V_{r}^\textnormal{du}$
& 43.86 (34.09/74.35)
& 22.07
& 8.70 (8/10)
& $3.37\cdot 10^{-6}$
& $6.40\cdot 10^{-5}$
\end{tabular}
\vspace{0.15cm}
\caption{%
{\small Performance and accuracy of selected algorithms for the example from Section~\ref{sec:thermalfin} for ten optimization runs with randomly drawn initial guesses $\mu^{(0)}$: averaged, minimum and maximum total computational time (column 2) and speed-up compared to the FOM variant (column 3); average, minimum and maximum number of iterations $k$ required until convergence (column 4), average relative error in the parameter (column 5) and average FOC condition (column 6).}
}
\label{table:fin}
\end{table}
\begin{SCfigure}
\centering\footnotesize
\input{Pictures/mu_error_FIN.tex}
\caption{%
{\small Error decay and performance of selected algorithms for the example from Section~\ref{sec:thermalfin} for a single optimization run with random initial guess $\mu^{(0)}$ for $\tau_\text{\rm{FOC}} = 5\cdot 10^{-4}$: for each algorithm each marker corresponds to one (outer) iteration of the optimization method and indicates the absolute error in the current parameter, measured against the known desired optimum $\bar{\mu} = \mu^\text{d}$.
In all except the FOM variant, the ROM is enriched in each iteration corresponding to Algorithm \ref{Alg:TR-RBmethod}, depending on the variant in question.}
}
\label{fig:fin_timing_mu_d}
\end{SCfigure}
All considered optimization methods converged (up to a tolerance), but we restrict the presentation to the most informative ones (all results can be found in the accompanying code).
As we observe from Table~\ref{table:fin}, the ROM based adaptive TR-RB algorithms vastly outperform the FOM variant, noting that the computational time of the ROM variants includes all offline and online computations.
Figure~\ref{fig:fin_timing_mu_d} details the decay of the error in the optimal parameter during the optimization for a selected random initial guess.
We observe that the choice of the RB enrichment does not impact the performance of the algorithm much for this example, see Remark~\ref{rem:fin}.
Also methods 2(\ref{enrich:lag}) and 3(\ref{enrich:lag}) show a comparable computational speed (not shown).
We also observe that the method from \cite{QGVW2017} requires more time and more iterations on average;
variants 1 are faster due to the enlargement of the TR radius and the use of a termination criterion which does not depend on a posteriori estimates, which may otherwise force additional TR iterations.
\begin{remark}[Vanishing NCD-correction for the fin problem]
\label{rem:fin}
It is important to notice that this model problem is not suitable to fully demonstrate the capabilities of the NCD-corrected approach. The reason is
that the choice of the functional is a misfit on only the root edge of the thermal fin, plus a Tikhonov regularization term.
Since the root of the thermal fin is also the source of
the primal problem, the dual solutions $p_{{r}, \mu}$ of the reduced dual equation
\eqref{eq:dual_solution_red} are thus linearly dependent on the respective primal solutions $u_{{r}, \mu}$ and the correction term $r_\mu^\textnormal{pr}(u_{{r},\mu})[p_{{r},\mu}]$ for the NCD-corrected RB reduced functional from \eqref{eq:Jhat_red_corected} vanishes.
In general, for quadratic objective functionals, this is not the case; for this particular example, however, all variants with correction terms spend unnecessary computational time on a vanishing correction.
\end{remark}
\subsection{Model problem 2: stationary heat distribution in a building}
\label{sec:mmexc_example}
For these experiments we consider as objective functional a weighted $L^2$-misfit on a domain of interest $D \subseteq \Omega$ and a weighted Tikhonov term comparable to design optimization, optimal control or inverse problems, i.e.
\begin{align}
\mathcal{J}(v, \mu) = \frac{\sigma_D}{2} \int_{D}^{} (v - u^{\text{d}})^2 + \frac{1}{2} \sum^{M}_{i=1} \sigma_i (\mu_i-\mu^{\text{d}}_i)^2 + 1,
\label{eq:mmexc_example_J}
\end{align}
with given desired state $u^{\text{d}} \in V$ and parameter $\mu^{\text{d}} \in \mathcal{P}$ and weights $\sigma_D, \sigma_i$ specified further below. With respect to \eqref{P.argmin}, we thus have
$\Theta(\mu) = \frac{1}{2} \sum^{M}_{i=1} \sigma_i (\mu_i-\mu^{\text{d}}_i)^2 + \frac{\sigma_D}{2} \int_{D}^{} u^{\text{d}} u^{\text{d}} + 1$,
$j_{\mu}(u) = -\sigma_D \int_{D}^{} u^{\text{d}}u$
and
$k_{\mu}(u,v) = \frac{\sigma_D}{2} \int_{D}^{} uv$.
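As a quick numerical sanity check of this splitting (not part of the original formulation), one can verify $\mathcal{J}(v, \mu)=\Theta(\mu)+j_{\mu}(v)+k_{\mu}(v,v)$ on a simple discretization; the grid size, quadrature and parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# crude midpoint quadrature on the domain of interest D
n = 200
h = 1.0 / n                        # quadrature weight per cell
u = rng.normal(size=n)             # discrete state v
u_d = rng.normal(size=n)           # desired state u^d
sigma_D = 100.0
sigma = rng.uniform(0.1, 1.0, size=10)
mu = rng.normal(size=10)
mu_d = np.zeros(10)

def integral_D(w):
    return h * np.sum(w)

# full functional J(v, mu) as defined above (including the constant +1)
J = (sigma_D / 2) * integral_D((u - u_d) ** 2) \
    + 0.5 * np.sum(sigma * (mu - mu_d) ** 2) + 1.0

# the three pieces Theta, j_mu and k_mu of the splitting
Theta = 0.5 * np.sum(sigma * (mu - mu_d) ** 2) \
    + (sigma_D / 2) * integral_D(u_d * u_d) + 1.0
j_lin = -sigma_D * integral_D(u_d * u)
k_quad = (sigma_D / 2) * integral_D(u * u)

assert np.isclose(J, Theta + j_lin + k_quad)
```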
\begin{SCfigure}[][h]
\centering\footnotesize
\begin{subfigure}[c]{0.60\textwidth}
\includegraphics[width=\textwidth]{Pictures/full_diffusion_with_big_numbers_with_D.png}
\end{subfigure}
\centering
\vspace{2em}
\caption{{\small Definition~of Model problem 2: with $\Omega := [0,2] \times [0,1] \subset \mathbb{R}^2$.
Numbers indicate affine components, where $i.$ is a window, $\underbar{i}$ are doors, and $i|$ are walls.
The $i$-th heater is located under the $i$-th window. With respect to \eqref{eq:heat_equation}, we consider $\Gamma_\text{\rm{R}}:= \partial \Omega$, where
$c_{\mu}$ contains outside wall $10|$, outside doors $\underbar{8}$ and $\underbar{9}$ and all windows. All other diffusion components enter the coefficient $\kappa_\mu$,
whereas the heaters enter into the source term $f_\mu$. Furthermore, we set $u_{\text{out}}=5$ and
the green region illustrates the domain of interest $D$.}}
\label{ex1:blueprint}
\end{SCfigure}
Motivated by ensuring a desired temperature in a single room of a building floor,
we consider
blueprints
with windows, heaters, doors and walls, yielding parameterized diffusion, forces and boundary values
as sketched in Figure~\ref{ex1:blueprint}.\footnote{See \url{https://github.com/TiKeil/NCD-corrected-TR-RB-approach-for-pde-opt} for the definition of the data functions.}
For simplicity we omit a realistic modeling of temperature and restrict
ourselves to academic numbers of the diffusion and heat source quantities.
We seek to ensure a desired temperature $u^\text{d}=18$ and set $\mu^\text{d}_i = 0$.
For the FOM discretization we choose a cubic mesh which resolves all features of the data functions derived from Figure~\ref{ex1:blueprint}, resulting in $\dim V_h = 80601$ degrees of freedom. We consider a ten-dimensional parameter example with three wall sets $\{1|,2|,3|,8|\}$, $\{4|,5|,6|,7|\}$ and $\{9|\}$ and
seven heater sets, $\{1,2\}$,
$\{3,4\}$ and $\{5\}$, $\{6\}$, $\{7\}$, $\{8\}$ and $\{9,10,11,12\}$ (each set governed by a single parameter component).
The set of admissible parameters is given by $\mathcal{P}= [0.025,0.1]^3\times[0,100]^7$ and we choose
$\sigma_D= 100$ and $(\sigma_i)_{1\leq i\leq 10} = (10\sigma_w,5\sigma_w,\sigma_w,2\sigma_h,$
$2\sigma_h,\sigma_h,\sigma_h,\sigma_h,\sigma_h,4\sigma_h)$
in \eqref{eq:mmexc_example_J},
with $\sigma_w = 0.05$ and $\sigma_h= 0.001$.
The choice of $\sigma_i$ is related to the measure of the walls and how many heaters are considered in each group.
The other components of the data functions are fixed and thus not directly involved in the optimization process.
Briefly, the diffusion coefficient of air and inside doors is set to $0.5$, of the outside wall to $0.001$,
of outside doors $\underline{8}$ and $\underline{9}$ to $0.01$ and of windows to $0.025$.
For the energy product, we choose $\check{\mu}= (0.05,0.05,0.05,10,10,10,10,10,10,10)$.
We use this setup to inspect different TR-RB algorithms in Section \ref{sec:mmexc_opt_results}, but also to study the a posteriori error estimates from Section \ref{sec:a_post_error_estimates} in the following section.
\subsubsection{Numerical validation of the a posteriori error estimates}
\label{sec:estimator_study}
To study the performance of the a posteriori error estimates proposed in Section \ref{sec:a_post_error_estimates},
we neglect the outer-loop optimization and simply use a goal oriented adaptive greedy algorithm \cite{HDO2011} with basis extension (a) from Section \ref{sec:construct_RB}
to generate a ROM which ensures that the worst relative estimated error for the reduced functional and its gradient over the adaptively generated training set and a randomly chosen
validation set is below a prescribed tolerance of $\tau_{\text{\rm{FOC}}}= 5 \cdot 10^{-4}$.
In particular we first ensure $\Delta_{\hat{J}_{r}}(\mu)/\hat{J}_{r}(\mu) < \tau_{\rm{FOC}}$ for $\Delta_{\hat{J}_{r}}$ from Proposition \ref{prop:Jhat_error}.i and continue with $\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)/\|\widetilde\nabla \hat{J}_{r}(\mu)\|_2 < \tau_{\widetilde\nabla \hat{J}}$ for $\Delta_{\widetilde\nabla \hat{J}_{r}}$ from Proposition \ref{prop:grad_Jhat_error}.i, cf. \cite[Algorithm 2]{QGVW2017}.
Let us mention that the goal for $\Delta_{\hat{J}_{r}}$ is fulfilled after $24$ basis enrichments and we have $\Delta_{\widetilde\nabla \hat{J}_{r}}(\mu)/\|\widetilde\nabla \hat{J}_{r}(\mu)\| < 4.84$ after $56$ basis enrichments, where we artificially stop the algorithm
since the associated computational effort is already roughly 17 hours, demonstrating the need for the proposed adaptive TR-RB algorithm studied in the next section.
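Schematically, such a goal-oriented adaptive greedy loop can be sketched as follows; `estimator` and `enrich` stand in for the relative a posteriori estimator and the RB enrichment and are mocked here, they are not taken from the accompanying code.

```python
def adaptive_greedy(training_set, estimator, enrich, tol, max_enrichments):
    """Enrich the ROM at the worst estimated parameter until the
    relative estimated error drops below tol on the training set."""
    for k in range(max_enrichments):
        mu_star = max(training_set, key=estimator)
        if estimator(mu_star) < tol:
            return k              # number of enrichments performed
        enrich(mu_star)
    return max_enrichments        # stopped artificially, cf. the text


# mock usage: an estimator whose value halves with every enrichment
state = {"err": 1.0}
n = adaptive_greedy(
    training_set=[0.1, 0.5, 0.9],
    estimator=lambda mu: state["err"],
    enrich=lambda mu: state.update(err=state["err"] / 2),
    tol=1e-3,
    max_enrichments=100,
)
```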
\begin{figure}[ht]
\centering%
\footnotesize%
\input{Pictures/estimator_study.tex}
\caption{%
{\small Evolution of the true and estimated model reduction error (top) in the reduced functional and its approximations (A) and the gradient of the reduced functional and its approximations (B), as well as error estimator efficiencies (bottom), during adaptive greedy basis generation for the experiment from Section \ref{sec:estimator_study}.
Top: depicted is the $L^\infty(\mathcal{P}_\textnormal{val})$-error for a validation set $\mathcal{P}_\textnormal{val} \subset \mathcal{P}$ of $100$ randomly selected parameters, i.e.~$|\hat{J}_h - \hat{J}_{r}|$ corresponds to $\max_{\mu \in \mathcal{P}_\textnormal{val}} |\hat{J}_h(\mu) - \hat{J}_{r}(\mu)|$, $\Delta_{{{\Jhat_{r}}}}$ corresponds to $\max_{\mu \in \mathcal{P}_\textnormal{val}}\Delta_{{{\Jhat_{r}}}}(\mu)$, $\|\nabla \hat{\mathcal{J}}_h - \nabla{{\Jhat_{r}}}\|_2$ corresponds to $\max_{\mu \in \mathcal{P}_\textnormal{val}}\|\nabla \hat{\mathcal{J}}_h(\mu) - \nabla{{\Jhat_{r}}}(\mu)\|_2$, and so forth.
Bottom: depicted is the worst efficiency of the respective error estimate (higher: better), i.e.~``$\Delta_{\hat{J}_{r}}$ eff.'' corresponds to $\min_{\mu \in \mathcal{P}_\textnormal{val}} |\hat{J}_h(\mu) - \hat{J}_{r}(\mu)|\,/\,\Delta_{\hat{J}_{r}}(\mu)$, and so forth.}
}
\label{fig:estimator_study}
\end{figure}
As we observe from Figure \ref{fig:estimator_study}, the error of the NCD-corrected terms is several orders of magnitude smaller than that of the corresponding terms of the standard approach.
It can also be seen that the (computationally more costly) sensitivity-based quantities, i.e.~$\widetilde\nabla{{\Jhat_{r}}}$, show the smallest error.
However, all estimators for the corrected and sensitivity-based quantities show a worse efficiency, hinting that there is still room for improvement.
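The worst-case efficiencies shown in Figure~\ref{fig:estimator_study} amount to a pointwise minimum of the ratio of true error to estimated error over the validation set; a minimal sketch with synthetic numbers (the arrays are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic validation data: true errors and rigorous upper bounds
true_err = rng.uniform(1e-6, 1e-4, size=100)          # |J_h(mu) - J_r(mu)| per parameter
delta = true_err * rng.uniform(1.0, 50.0, size=100)   # estimator Delta(mu) >= true error

# worst efficiency over the validation set ("higher is better", at most 1)
worst_efficiency = np.min(true_err / delta)
```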
\subsubsection{Optimization results}
\label{sec:mmexc_opt_results}
Similar to Section \ref{sec:thermalfin}, starting with ten different randomly drawn initial parameters $\mu^{(0)}$, we measure the total computational runtime, the number of TR iterations $k$ and the error in the optimal parameter for all combinations of adaptive TR algorithms from Section~\ref{sec:TRRB_and_adaptiveenrichment} and choice of RB spaces from Section~\ref{sec:construct_RB}, as well as for the state of the art methods from the literature from Section~\ref{sec:state_of_the_art_methods}.
All algorithms converged
(up to a tolerance) to the same point $\bar\mu$ and it was verified a posteriori that this point is a local minimum of $\hat{\mathcal{J}}$,
i.e.~it satisfies the second-order sufficient optimality conditions.
The value of $\bar\mu$ used to compute the relative error was calculated with the
FOM projected Newton method for a FOC condition tolerance of $10^{-12}$ and,
thanks to the choice of the cost functional weights,
the target $u^\text{d}$ is approximated by $\bar u$ with a relative error of $1.7\cdot 10^{-6}$ in $D$.
We consider the same setup for two different stopping tolerances $\tau_{\text{\rm{FOC}}}= 5\cdot 10^{-4}$ and $\tau_{\text{\rm{FOC}}}= 10^{-6}$ to demonstrate that the performance (both in terms of time and convergence) of the methods vastly depends on the choice of $\tau_{\text{\rm{FOC}}}$.
\begin{table}
\centering\footnotesize
\begin{subtable}{\textwidth}
\centering
\begin{tabular}{l|cc|c|cc}
\textsc{\textbf{(A) Result for $\tau_{\text{\rm{FOC}}}= 5\cdot 10^{-4}$}}
& av.~(min/max) runtime[s]
& speed-up
& av.~(min/max) iter.
& rel.~error
& FOC cond.\\
\hline
FOM proj.~BFGS
& 332.57 (196.51/591.85)
& --
& 44.30 (30/60)
& $1.40\cdot 10^{-3}$
& $1.80\cdot 10^{-4}$\\
TR-RB from \cite{QGVW2017}
& 117.87 (70.29/166.31)
& 2.82
& 10.10 (6/14)
& $5.46\cdot 10^{-4}$
& $1.41\cdot 10^{-4}$\\
1(a) TR-RB with $V_{r}^\textnormal{pr} \neq V_{r}^\textnormal{du}$
& 91.50 (47.07/230.29)
& 3.63
& 8.30 (5/10)
& $2.01\cdot 10^{-3}$
& $2.04\cdot 10^{-4}$\\
1(b) TR-RB with $V_{r}^\textnormal{pr} = V_{r}^\textnormal{du}$
& 78.65 (54.69/114.36)
& 4.23
& 6.90 (5/9)
& $2.53\cdot 10^{-4}$
& $8.23\cdot 10^{-5}$ \\
2(a) TR-RB semi NCD-corrected
& 79.47 (63.38/94.28)
& 4.18
& 8.50 (7/10)
& $5.98\cdot 10^{-5}$
& $1.02\cdot 10^{-4}$ \\
3(a) TR-RB NCD-corrected
& 71.84 (50.38/87.16)
& 4.63
& 7.40 (5/9)
& $1.09\cdot 10^{-3}$
& $6.12\cdot 10^{-5}$ \\
\end{tabular}
\vspace{0.15cm}
\end{subtable}
\vspace{0.15cm}
\begin{subtable}{\textwidth}
\centering
\begin{tabular}{l|cc|c|cc}
\textsc{\textbf{(B) Result for $\tau_{\text{\rm{FOC}}}= 10^{-6}$}}
& av.~(min/max) runtime[s]
& speed-up
& av.~(min/max) iter.
& rel.~error
& FOC cond.\\
\hline
FOM proj.~BFGS
& 409.28 (317.25/637.55)
& --
& 57.00 (49/71)
& $2.82\cdot 10^{-6}$
& $3.35\cdot 10^{-7}$\\
TR-RB from \cite{QGVW2017}
& 614.81 (566.66/671.97)
& 0.66
& 40.00 (40/40)
& $8.46\cdot 10^{-7}$
& $8.44\cdot 10^{-8}$\\
1(a) TR-RB with $V_{r}^\textnormal{pr} \neq V_{r}^\textnormal{du}$
& 165.48 (92.26/417.24)
& 2.47
& 15.30 (10/40)
& $3.29\cdot 10^{-6}$
& $5.43\cdot 10^{-7}$\\
1(b) TR-RB with $V_{r}^\textnormal{pr} = V_{r}^\textnormal{du}$
& 86.39 (62.68/124.43)
& 4.74
& 7.80 (6/10)
& $3.52\cdot 10^{-6}$
& $3.03\cdot 10^{-7}$ \\
2(a) TR-RB semi NCD-corrected
& 90.37 (80.97/102.60)
& 4.53
& 9.80 (9/11)
& $8.12\cdot 10^{-7}$
& $2.26\cdot 10^{-7}$ \\
3(a) TR-RB NCD-corrected
& 88.24 (58.18/108.90)
& 4.64
& 8.90 (6/10)
& $2.65\cdot 10^{-6}$
& $2.73\cdot 10^{-7}$ \\
\end{tabular}
\vspace{0.15cm}
\end{subtable}
\caption{%
{\small Performance and accuracy of selected algorithms for two choices of $\tau_\text{\rm{FOC}}$ for the example from Sec.~\ref{sec:mmexc_opt_results} for ten optimization runs with random initial guess, compare Table \ref{table:fin}.}
}
\label{table:MP2_com}
\end{table}
From Table~\ref{table:MP2_com}, we observe that all proposed TR-RB methods speed up the FOM projected BFGS method, with
the NCD-corrected approach outperforming the others,
since the gradient it uses is the exact gradient of the model function $\hat{\mathcal{J}}_{r}$.
Moreover, independently of the model function, the algorithm from \cite{QGVW2017} is much slower,
demonstrating the positive impact of the suggested improvements, namely
the enlargement of the TR radius and the termination criterion based on cheaply available FOM information
(instead of relying on an a posteriori estimate);
this is also visible in the
number of outer TR iterations.
Comparing our proposed TR variants in terms of iterations,
it is more beneficial to consider a single RB space, i.e.~$V^\textnormal{pr}_{r} = V^\textnormal{du}_{r}$.
While enrichment (a) is more costly and the time-to-ROM-solution is slightly larger, the richer space seems to allow for better approximations of $\hat{\mathcal{J}}_h$.
All methods approximate the optimal parameter $\bar\mu$ with a small relative error and reach the desired tolerance for the FOC condition.
However, in view of the resulting relative error in Table~\ref{table:MP2_com} and Figure~\ref{Fig:_FOC_cond}, we observe that the choice $\tau_\text{\rm{FOC}}= 5\cdot10^{-4}$ is not sufficiently small for this model problem. In fact, we observe for most of the variants that this choice for the tolerance $\tau_\text{\rm{FOC}}$ does not guarantee an adequately low relative error in approximating $\bar\mu$ and
affects the timings by stopping the method too early.
We conclude that the choice $\tau_{\text{\rm{FOC}}}= 10^{-6}$ instead results in a valid optimum for all variants (up to a tolerance of $10^{-6}$).
Importantly, for this choice of $\tau_\text{\rm{FOC}}$, we point out that the variant from \cite{QGVW2017} only stopped because
we restricted the maximum number of iterations to $40$, even though the FOC condition had already dropped below the depicted tolerance of $10^{-6}$.
\begin{figure}[h]
\centering\footnotesize
\hspace{-0.6cm}
\begin{subfigure}{0.45\textwidth}
\input{Pictures/mu_error_EXC_10_.tex}
\end{subfigure}
\hspace{0.6cm}
\begin{subfigure}{0.45\textwidth}
\input{Pictures/mu_error_EXC_10_1e-6.tex}
\end{subfigure}
\caption{{\small Error decay and performance of selected algorithms for two choices of $\tau_{\text{\rm{FOC}}}$ for the example from Section~\ref{sec:mmexc_opt_results} for a single optimization run with random initial guess, compare Figure \ref{fig:fin_timing_mu_d}.}
}
\label{Fig:_FOC_cond}
\end{figure}
This is caused by the fact that in \cite{QGVW2017} the a posteriori estimate,
which is added to the FOC condition, cannot become numerically small enough, showing the limit of the stopping criterion proposed in \cite{QGVW2017}. From Figure~\ref{Fig:_FOC_cond}(B) we conclude that the NCD-corrected approaches 2(\ref{enrich:lag}) and 3(\ref{enrich:lag}) outperform the standard ROM variant 1(\ref{enrich:lag}), which also
reached the maximum number of iterations for one of the ten samples.
Consequently, the NCD-correction entirely resolves the issue of the variational crime (introduced by splitting the reduced spaces), since it shows roughly the same performance as
variant 1(\ref{enrich:single}). However, looking at the minimum and maximum computational times in Table~\ref{table:MP2_com}, variant 3(\ref{enrich:lag}) shows a less volatile and more robust
behavior.
\section{Conclusion}
In this work we proposed and analyzed several variants of new adaptive Trust-Region Reduced Basis methods for parameterized partial differential equations.
First, we proved convergence of the modified algorithm in case of additional bilateral constraints on the parameter set, making this method more appealing for real-world applications.
Second, the use of a NCD-corrected RB reduced functional improves the RB approximation compared to the standard approach,
and enables the possibility of using an exact gradient in the case of separate RB spaces (each variant accompanied by rigorous a posteriori error estimates).
This approach turns out to be the most reliable in terms of computational time and accuracy,
outperforming the existing TR-RB method.
Furthermore, the proposed cheaply-computable criteria for enlarging the TR radius and for terminating the iterations ensure a faster convergence.
In future work we are interested in replacing the projected BFGS method used in this contribution by a projected Newton method.
This entails additional effort in developing a posteriori estimates for the RB approximation of the Hessian and of the optimal parameter.
In addition, we are interested in combining the proposed TR-RB algorithm with localized RB methods for large-scale applications.
{\small
\bibliographystyle{abbrv}
This paper is a continuation of our previous work~\cite{Ronniger} on the
description of baryon resonances in a covariant Bethe-Salpeter framework.
While in~\cite{Ronniger} we concentrated on the description of the mass
spectrum, in the present paper we discuss the results on electromagnetic
transition amplitudes obtained on the basis of the Salpeter amplitudes
determined previously. The dynamical ingredients of the relativistically
covariant quark model used are instantaneous interaction kernels describing
confinement, a spin-flavour dependent interaction kernel motivated by
instanton effects, and in addition a phenomenologically introduced
spin-flavour dependent interaction. The latter was found to improve the description
obtained previously in~\cite{LoeMePe1,LoeMePe2,LoeMePe3} in particular of
Roper-like scalar excitations as well as the position of some negative parity
$\Delta$-resonances slightly below 2 GeV. With instantaneous interaction
kernels the Bethe-Salpeter equation reduces to the more tractable Salpeter
equation which can be cast in the form of an eigenvalue equation for the
masses and the Salpeter amplitudes. These in turn determine the vertex
functions for any on-shell momentum of the baryons which then enter the
electromagnetic current matrix elements. The details of this procedure can be
found in~\cite{Merten}\,.
Spin-flavour dependent effective quark-quark interactions have also been studied
previously by the Graz
group~\cite
Glozman1996,Glozman1997,Glozman1998_1
Glozman1998_2,Theussl,Glantschnig,Melde07,Melde08
Plessas}
which obtained very satisfactory results for the mass spectra up to 1.7 GeV as
well as for the corresponding nucleon form factors on the basis of a truncated
pseudo-vector coupled Yukawa potential, where the tensor force terms were
neglected. In~\cite{Ronniger} we found a phenomenological \textit{Ansatz}
which includes a short ranged flavour-singlet and a flavour-octet exchange
with pseudoscalar-like coupling of Gaussian form to be most effective for the
improvements mentioned above. This newly introduced interaction kernel
increased the number of parameters of seven in the former model (Model
$\mathcal{A}$, see~\cite{LoeMePe2,LoeMePe3}) to ten in the new version (Model
$\mathcal{C}$, see~\cite{Ronniger}), which we still consider to be acceptable
in view of the multitude of baryon masses described accurately in this
manner. For the values of the parameters we refer to table~\ref{tab:par},
where we listed an improved set of parameters for the interaction kernels of
model $\mathcal C$ together with the values used in~\cite{Ronniger} displayed
in brackets. Likewise the parameters for the interaction kernels of model
$\mathcal{A}$ obtained from calculations within a larger model space, see
also~\cite{Ronniger}\, are listed along with the values (in brackets) as
determined in \cite{LoeMePe2,LoeMePe3} in smaller model spaces.
\begin{table}[!htb]
\centering
\caption{
Model parameter values for the novel model $\mathcal{C}$ in comparison to
those of model $\mathcal{A}$~\cite{LoeMePe2,LoeMePe3}. The bracketed numbers
in the column for model $\mathcal{A}$ are the parameters as found
in~\cite{LoeMePe2,LoeMePe3} and the numbers above them are recalculated with
higher numerical accuracy as commented in~\cite{Ronniger}.
The numbers in the column for model $\mathcal{C}$ represent an improved set
with respect to the values quoted in~\cite{Ronniger} which are listed in
brackets. Note that the Dirac structure for the confinement interaction
kernel is different in models $\mathcal{A}$ and $\mathcal{C}$\,, see
also~\cite{Ronniger}\,.
\label{tab:par}
}
\begin{tabular}{l@{\hspace*{5pt}}l@{\hspace*{5pt}}r@{\hspace*{5pt}}r}
\toprule
parameter & & model $\mathcal{C}$ & model $\mathcal{A}$ \\
\midrule
masses & $m_n$ [MeV] & 350.0 & \multirow{2}{*}{330.0} \\
& & [325.0] & \\
& $m_s$ [MeV] & 625.0 & \multirow{2}{*}{670.0} \\
& & [600.0] & \\
\midrule
\multirow{2}{*}{confinement} & \multirow{2}{*}{$a$ [MeV]} & -370.8 & -734.6 \\
& &[-366.8] & [-700.0]\\
& \multirow{2}{*}{$b$ [MeV/fm]} & 208.4 & 453.6 \\
& & [212.86] & [440.0]\\
\midrule
\multirow{2}{*}{instanton} & \multirow{2}{*}{$g_{nn}$ [MeV fm$^3$]} & 317.9 & 130.3 \\
& & [341.5] & [136.0]\\
\multirow{2}{*}{induced} & \multirow{2}{*}{$g_{ns}$ [MeV fm$^3$]} & 260.0 & 81.8 \\
& & [273.6] & [96.0]\\
interaction & $\lambda$ [fm] & 0.4 & 0.4 \\
\midrule
octet & \multirow{2}{*}{$\frac{g_{8}^2}{4\pi}$ [MeV fm$^3$]} & 118.0 & \multirow{2}{*}{--} \\
exchange & & [100.86] & \\
singlet & \multirow{2}{*}{$\frac{g_{0}^2}{4\pi}$ [MeV fm$^3$]} & 1715.5 & \multirow{2}{*}{--} \\
exchange & &[1897.4] & \\
& $\lambda_{8}=\lambda_0$ [fm] & 0.25 & -- \\
\bottomrule
\end{tabular}
\end{table}
Note that the two models $\mathcal{A}$ and $\mathcal{C}$, specified in
table~\ref{tab:par} employ different confinement Dirac-structures for the
constant part (offset) $\Gamma_0$ and the linear part (slope) $\Gamma_s$
(see~\cite{Ronniger,LoeMePe2} for more information). All calculations in the
present paper are based on the parameter values of table~\ref{tab:par}.
Furthermore we want to point out, that the calculation of helicity amplitudes
or transition form factors (such as that for the nucleon-$\Delta(1232)$
magnetic transition) in lowest order as considered here does not introduce any
additional parameter in the underlying models as discussed in~\cite{Ronniger}
before. Since in the new model $\mathcal{C}$ we can account for more baryon
excitations accurately we can now also offer predictions for some
$\Delta$-resonances which could not be covered before in model $\mathcal{A}$.
In particular these are: $\Delta_{1/2^+}(1750)$, $\Delta_{3/2^+}(1600)$,
$\Delta_{1/2^-}(1900)$, $\Delta_{3/2^-}(1940)$ and $\Delta_{5/2^-}(1930)$ as
reported in~\cite{Ronniger}. Additionally there now exists new data for photon
decay amplitudes from Anisovich \textit{et
al.}~\cite{Anisovich_1,Anisovich_2,Anisovich_3} and for helicity amplitudes
from Aznauryan \textit{et
al.}~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009,Aznauryan2012} as well as
the MAID analysis~\cite{Drechsel,Tiator}, in particular the analysis and
parametrisations in the recent overview~\cite{Tiator2011}, with information
also on longitudinal amplitudes which can serve as a test of the present model beyond the
comparison done previously in~\cite{Merten,Kreuzer,PhDMerten} on the basis of
the amplitudes determined in model $\mathcal{A}$ of~\cite{LoeMePe2}. For the
definition of the helicity amplitudes we use the conventions as in
Tiator \textit{et al.}~\cite{Tiator2011} as mentioned in Eqs.~(\ref{TransFF_eq4a})
to~(\ref{TransFF_eq4c}) in the subsequent sec.~\ref{TransFF}.
The paper is organised as follows: After a brief recapitulation on some
improvements concerning model $\mathcal{C}$ in sec.~\ref{ImprovedModel} and
the determination of the helicity amplitudes for electro-excitation in the
Salpeter model, we shall present the results in sec.~\ref{TransFF}, which
contains three subsections: Sec.~\ref{NNHelAmpl}, covers the helicity
amplitudes for the electro-excitation of nucleon resonances,
sec.~\ref{NDeltaHelAmpl} contains the helicity amplitudes for the
electro-excitation of $\Delta$-resonances, while sec.~\ref{PhotonCoupl}
summarises the photon decay amplitudes. Sec.~\ref{MagnTransFF} contains a
short discussion of the magnetic and electric transition form factor of the
$\Delta(1232)$ resonance before we conclude with a summary in
sec.~\ref{Summary}.
\section{Improvements to model $\mathcal{C}$\label{ImprovedModel}}
In the course of the investigations within the novel model
$\mathcal{C}$\cite{Ronniger} a new parameter set was found which led to an
improved description in particular of the nucleon form factors. This new set
of parameters is listed in table~\ref{tab:par} of the introduction. The
corresponding baryon mass spectra are very similar to those published
in~\cite{Ronniger}; only for some higher excitation deviations up to
$30\,\textrm{MeV}$ with respect to the values presented in~\cite{Ronniger}
were found. We therefore refrain from displaying the mass spectra here. The
calculated masses of those baryons which enter the helicity amplitudes
calculated in this work can be found in tables~\ref{tab:PhotonCoupl1} and
\ref{tab:PhotonCoupl2}\,. However for nucleon form factors which were also
given in~\cite{Ronniger} some small but partially significant modifications
were found and the results will be discussed subsequently.
In Fig.~\ref{EFF_Proton} the calculated electric proton form factor (divided by
its dipole-shape)
\begin{align}
\label{eq:EFF}
G_D(Q^2) = \frac{1}{(1+Q^2/M_V^2)^2}\,,
\end{align}
with $M_V^2=0.71\,\textrm{GeV}^2$ (see~\cite{Mergell,Bodek}) in both versions
of model $\mathcal C$ is compared to experimental data.
It is found that with the new set of parameters model $\mathcal{C}$ describes
the data for momentum transfers $Q^2 \lesssim 3\,\textrm{GeV}^2$ slightly
better than with the older set of~\cite{Ronniger}\,, but the modification is
rather small.
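For reference, a minimal evaluation of the dipole normalisation $G_D(Q^2)$ from Eq.~(\ref{eq:EFF}), using the value $M_V^2=0.71\,\textrm{GeV}^2$ quoted above:

```python
def dipole(Q2, MV2=0.71):
    """Standard dipole form G_D(Q^2) = (1 + Q^2/M_V^2)^(-2), Q^2 in GeV^2."""
    return 1.0 / (1.0 + Q2 / MV2) ** 2
```

By construction, $G_D$ equals $1$ at the photon point $Q^2=0$ and $1/4$ at $Q^2=M_V^2$.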
\begin{figure}[!htb]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G_E^p(Q^2)/G_D(Q^2)$}
\psfrag{MMD}[r][r]{\scriptsize MMD~\cite{Mergell}}
\psfrag{Christy}[r][r]{\scriptsize Christy~\cite{Christy}}
\psfrag{Qattan}[r][r]{\scriptsize Qattan~\cite{Qattan}}
\psfrag{Gauss-old}[r][r]{\scriptsize model $\mathcal{C}$~\cite{Ronniger}}
\psfrag{Gauss-new}[r][r]{\scriptsize improved model $\mathcal{C}$}
\includegraphics[width=\linewidth]{EFF_Dipole_Improved_P.eps}
\caption{
The electric form factor of the proton divided by the dipole form $G_D(Q^2)$, Eq.~(\ref{eq:EFF}).
MMD-Data are taken from Mergell \textit{et al.}~\cite{Mergell}, supplemented by
data from Christy \textit{et al.}~\cite{Christy} and Qattan \textit{et al.}~\cite{Qattan}\,.
The solid line represents the results from model $\mathcal{C}$ with the new
set of parameters, while the dashed line those from
model $\mathcal{C}$~\cite{Ronniger}. Red data points are taken
from polarisation experiments and black ones are obtained by Rosenbluth separation.
\label{EFF_Proton} }
\end{figure}
Fig.~\ref{EFF_Neutron} shows the electric neutron form factor where the effect
of the new parameter set yields a significantly improved description,
reflecting the fact that this small quantity is thus very sensitive to
parameter changes. Although the deviation between both curves could be
interpreted as an estimate of the uncertainty in the model prediction the new
version demonstrates that within the model it is indeed possible to account
for the momentum transfer dependence and the position of the maximum rather
accurately.
\begin{figure}[!htb]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G_E^n(Q^2)$}
\psfrag{MMD}[r][r]{\scriptsize MMD \cite{Mergell}}
\psfrag{Rosenbluth}[r][r]{\scriptsize \cite{Eden,Herberg,Ostrick,Passchier,Schiavilla}}
\psfrag{Polarisation}[r][r]{\scriptsize \cite{Rohe,Golak,Zhu,Madey,Warren,Glazier,Alarcon}}
\psfrag{Gauss-old}[r][r]{\scriptsize model $\mathcal{C}$~\cite{Ronniger}}
\psfrag{Gauss-new}[r][r]{\scriptsize improved model $\mathcal{C}$}
\includegraphics[width=1.0\linewidth]{EFF_Improved_N.eps}
\caption{
The electric form factor of the neutron. MMD-Data are
taken from the compilation of Mergell \emph{et al.}~\cite{Mergell}.
The solid line represents the results from the improved model
$\mathcal{C}$; the dashed line is the result from model
$\mathcal{C}$~\cite{Ronniger}. Red data points are taken from
polarisation experiments and black ones are obtained by Rosenbluth
separation.\label{EFF_Neutron}
}
\end{figure}
The new results are very similar to the results from the Bhaduri-Cohler-Nogami
quark model quoted as BCN in~\cite{Melde07}\,, whereas the older version
from~\cite{Ronniger} is closer to the result quoted as GBE-model
in~\cite{Melde07}\,.
\begin{figure}[!htb]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G_M^p(Q^2)/G_D(Q^2)/\mu_p$}
\psfrag{MMD}[r][r]{\scriptsize MMD \cite{Mergell}}
\psfrag{Christy}[r][r]{\scriptsize Christy \cite{Christy}}
\psfrag{Qattan}[r][r]{\scriptsize Qattan \cite{Qattan}}
\psfrag{Bartel}[r][r]{\scriptsize Bartel \cite{Bartel}}
\psfrag{Gauss-old}[r][r]{\scriptsize model $\mathcal{C}$~\cite{Ronniger}}
\psfrag{Gauss-new}[r][r]{\scriptsize improved model $\mathcal{C}$}
\includegraphics[width=1.0\linewidth]{MFF_Dipole_Improved_P.eps}
\caption{
The magnetic form factor of the proton divided by the dipole form
$G_D(Q^2)$, Eq.~(\ref{eq:EFF}) and the magnetic moment of the
proton $\mu_p=2.793\,\mu_N$\,. MMD-Data are taken from the compilation
of Mergell \emph{et al.}~\cite{Mergell}. Additionally, polarisation
experiments are marked in red. The black marked data points are obtained by
Rosenbluth separation.
\label{MFF_Proton}
}
\end{figure}
Also for the magnetic form factors displayed in Fig.~\ref{MFF_Proton}
and~\ref{MFF_Neutron} improvements are observed: this concerns in particular
the description of the magnetic proton form factor at higher momentum
transfers $Q^2 \gtrsim 1\,\textrm{GeV}^2$\,.
\begin{figure}[!htb]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G_M^n(Q^2)/G_D(Q^2)/\mu_n$}
\psfrag{MMD}[r][r]{\scriptsize MMD \cite{Mergell}}
\psfrag{Anklin}[r][r]{\scriptsize Anklin \cite{Anklin}}
\psfrag{Kubon}[r][r]{\scriptsize Kubon \cite{Kubon}}
\psfrag{Xu}[r][r]{\scriptsize Xu \cite{Xu}}
\psfrag{Madey}[r][r]{\scriptsize Madey \cite{Madey}}
\psfrag{Alarcon}[r][r]{\scriptsize Alarcon \cite{Alarcon}}
\psfrag{Gauss-old}[r][r]{\scriptsize model $\mathcal{C}$~\cite{Ronniger}}
\psfrag{Gauss-new}[r][r]{\scriptsize improved model $\mathcal{C}$}
\includegraphics[width=1.0\linewidth]{MFF_Dipole_Improved_N.eps}
\caption{
The magnetic form factor of the neutron divided by the dipole form $G_D(Q^2)$,
Eq.~(\ref{eq:EFF}) and the magnetic moment of the neutron $\mu_n=-1.913\,\mu_N$.
MMD-Data are taken from the compilation by Mergell \emph{et al.}~\cite{Mergell} and
from more recent results from MAMI~\cite{Anklin,Kubon}\,. Additionally, polarisation
experiments are marked by red data points. The black marked ones are obtained by
Rosenbluth separation.
\label{MFF_Neutron}
}
\end{figure}
\begin{figure}[!htb]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G_A^3(Q^2)/G_D^A(Q^2)/g_A$}
\psfrag{Amaldi}[r][r]{\scriptsize Amaldi \cite{Amaldi}}
\psfrag{Brauel}[r][r]{\scriptsize Brauel \cite{Brauel}}
\psfrag{Bloom}[r][r]{\scriptsize Bloom \cite{Bloom}}
\psfrag{Del Guerra}[r][r]{\scriptsize D. Guerra \cite{DGuerra}}
\psfrag{Joos}[r][r]{\scriptsize Joos \cite{Joos}}
\psfrag{Baker}[r][r]{\scriptsize Baker \cite{Baker}}
\psfrag{Miller}[r][r]{\scriptsize Miller \cite{Miller}}
\psfrag{Kitagaki}[r][r]{\scriptsize Kita. \cite{Kitagaki83,Kitagaki90}}
\psfrag{Allasia}[r][r]{\scriptsize Allasia \cite{Allasia}}
\psfrag{Gauss-old}[r][r]{\scriptsize model $\mathcal{C}$ \cite{Ronniger}}
\psfrag{Gauss-new}[r][r]{\scriptsize impr. model $\mathcal{C}$}
\includegraphics[width=1.0\linewidth]{AFF_Improved_Dipole_3.eps}
\caption{The axial form factor of the nucleon divided by the axial dipole
form of Eq.~(\ref{eq:AFF}) and by the axial coupling $g_A=1.267$. The solid
line is the improved result of model $\mathcal C$, the dashed line the
result of model $\mathcal{C}$ in~\cite{Ronniger}. Experimental
data are taken from the compilation by Bernard \emph{et al.}~\cite{Bernard}\,.
\label{AFF_3}}
\end{figure}
The axial form factor divided by its dipole-shape
\begin{align}
\label{eq:AFF}
G^A_D(Q^2)=\frac{g_A}{(1+Q^2/M_A^2)^2}\,,
\end{align}
with the parameters $M_A=1.014\pm0.014\,\textrm{GeV}$ and $g_A=1.267$ taken
from Bodek \textit{et al.}~\cite{Bodek} is shown in Fig.~\ref{AFF_3}. Here no
clear preference for either parameter set can be inferred; in general
the description is satisfactory in both versions. In subsequent sections all
calculations will use the parameters of the improved model $\mathcal C$\,,
which shall be quoted simply as model $\mathcal C$\,.
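For orientation, the dipole shape of Eq.~(\ref{eq:AFF}) fixes the axial radius
entirely in terms of $M_A$; the following relation is a standard consequence of
the parametrisation, and the numerical value for $M_A=1.014\,\textrm{GeV}$ is
added here merely as an illustration:
\begin{align}
\langle r_A^2 \rangle
= -\frac{6}{g_A}\,
\frac{\textrm{d}G^A_D(Q^2)}{\textrm{d}Q^2}\bigg|_{Q^2=0}
= \frac{12}{M_A^2}
\approx 0.45\,\textrm{fm}^2\,.
\end{align}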
\section{Helicity amplitudes and transition form factors
from the current matrix elements\label{TransFF}}
Following the elaboration on the transition current matrix elements in Merten
\textit{et al.}~\cite{Merten} one finds in lowest order for an initial baryon
state with four-momentum $\bar{P}_i=M_i=(M_i,\vec 0)$ in its rest frame and a
final baryon state with four-momentum $\bar{P}_f$ the expression
\begin{eqnarray}
\label{TransFF_eq1}
&&\langle \bar P_f | j^\mu(0) | M_i \rangle
=
\!-3\!\!\int\!\!\frac{\textrm{d}^4p_\xi}{(2\pi)^4}\!\!
\int\!\!\frac{\textrm{d}^4p_\eta}{(2\pi)^4}\!
\textstyle
{\bar{\Gamma}}^\Lambda_{\bar{P}_f}\!\left(p_\xi,p_\eta\!-\!\tfrac{2}{3}q\right)
\nonumber
\\
&&\hspace*{1em}
\textstyle
S^1_F\!\left(\tfrac{1}{3}M_i+\!p_\xi\!+\!\tfrac{1}{2}p_\eta\right)\!
\otimes\!
S^2_F\!\left(\tfrac{1}{3}M_i-\!p_\xi\!+\!\tfrac{1}{2}p_\eta\right)
\nonumber
\\
&&\hspace*{1em}
\textstyle
\otimes
S^3_F\!\left(\tfrac{1}{3}M_i\!-\!p_\xi\!-\!p_\eta\!+\!q\right)
\widehat q\gamma^\mu
S^3_F\!\left(\tfrac{1}{3}M_i\!-\!p_\xi\!-\!p_\eta\right)
\nonumber
\\
&&
\hspace*{10em}
\textstyle
\Gamma^\Lambda_{M_i}\left(\vec p_\xi,\vec p_\eta\right)\,,
\end{eqnarray}
where the so-called vertex function $\Gamma^\Lambda_{M_i}\left(\vec p_\xi,\vec
p_\eta\right)$ is given in the rest frame by
\begin{eqnarray}
\label{TransFF_eq2}
\lefteqn{
\Gamma^\Lambda_{M_i}\left(\vec p_\xi,\vec p_\eta\right)
:=
-\textrm{i}
\int\frac{\textrm{d}^4p'_\xi}{(2\pi)^4}
\int\frac{\textrm{d}^4p'_\eta}{(2\pi)^4}
}\nonumber
\\
&&
\hspace*{1em}\left[
V^{(3)}_\Lambda\left(\vec p_\xi,\vec p_\eta;\vec p'_\xi,\vec p'_\eta\right)
+
V^{\textrm{eff}}_\Lambda\left(\vec p_\xi,\vec p_\eta;\vec p'_\xi,\vec
p'_\eta\right)
\right]\nonumber
\\
&&
\hspace*{10em}\Phi^\Lambda_{M_i}\left(\vec p'_\xi,\vec p'_\eta\right)\,,
\end{eqnarray}
and where the Salpeter amplitude $\Phi^\Lambda_{M_i}$ is normalised to
$\sqrt{2M_i}$. For an arbitrary on-shell momentum $\bar{P}_f$ with
$\bar{P}_f^2 = M_f^2$ the vertex function
$\Gamma^\Lambda_{\bar{P}_f}\!\left(p_\xi,p_\eta\!-\!\tfrac{2}{3}q\right)$ is
obtained from $\Gamma^\Lambda_{M_f}\left(\vec p_\xi,\vec p_\eta\right)$ by
applying a boost.
Note that by this procedure we can determine the required vertex function only
on the mass shell, which precludes a calculation of transition amplitudes in
the time-like region.
The electromagnetic current operator is then defined as
\begin{align}
j^E_{\mu}(x) =\,\,:\bar{\Psi}(x)\hat{q}\gamma_{\mu}\Psi(x):\label{TransFF_eq3}
\end{align}
in terms of the charge operator $\hat{q}$ and the quark field-operator
$\Psi(x)$\,.
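For the nucleon states considered here only the light flavours enter; in the
flavour basis the charge operator is presumably the usual diagonal matrix of
quark charges, stated here for completeness (in units of the elementary
charge, which is absorbed into the kinematical factor $K\propto\sqrt{\alpha}$
introduced below):
\begin{align}
\hat{q} = \textrm{diag}\!\left(\tfrac{2}{3},\,-\tfrac{1}{3},\,-\tfrac{1}{3}\right)
\quad\textrm{acting on the flavour triplet } (u,d,s)\,.
\end{align}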
With
\begin{align}
j^E_{\pm}(x) = j^E_1(x) \pm \textrm{i} j^E_{2}(x)\,,\label{TransFF_eq4}
\end{align}
and with our normalisation of the Salpeter amplitudes, in accordance with the
definitions in Warns \textit{et al.}~\cite{Warns1990}, Tiator \textit{et
al.}~\cite{Tiator2011} and Aznauryan and Burkert~\cite{Aznauryan2012}\,, the
transverse and longitudinal helicity amplitudes $A^N_{\nicefrac{1}{2}}$\,,
$A^N_{\nicefrac{3}{2}}$ and $S^N_{\nicefrac{1}{2}}$, respectively, are related
to the transition current matrix elements in the rest frame of the baryon $B$
with rest mass $M_B$ via
\begin{subequations}
\begin{align}
A^N_{\frac{1}{2}}\!(Q^2)
=
&
\frac{\zeta}{\sqrt{2}}\,K\,
\Big\langle
B, M_B,\tfrac{1}{2} \Big| j_+^E(0) \Big| N,\bar P_N,-\tfrac{1}{2}
\Big\rangle,\label{TransFF_eq4a}
\\
A^N_{\frac{3}{2}}\!(Q^2)
=
&
\frac{\zeta}{\sqrt{2}}\,K\,
\Big\langle
B, M_B,\tfrac{3}{2} \Big| j_+^E(0) \Big| N,\bar P_N,\tfrac{1}{2}
\Big\rangle,\label{TransFF_eq4b}
\\
S^N_{\frac{1}{2}}\!(Q^2)
=
&
\zeta\,K\,
\Big\langle
B, M_B,\tfrac{1}{2} \Big| j_0^E(0) \Big| N,\bar P_N,\tfrac{1}{2}
\Big\rangle,\label{TransFF_eq4c}
\end{align}
\end{subequations}
where $K := \sqrt{(\pi\,\alpha)/(M_N(M_B^2-M_N^2))}$\,, $\alpha$ is the fine
structure constant, and $N$ denotes the ground state
nucleon ($N=(p,n)$) with four-momentum $\bar P_N=(\sqrt{M_N^2+\vec k^2},-\vec
k)$ related to the momentum transfer $Q^2$ by $\vec k^2 = (M_B^2-M_N^2-Q^2)^2
/ (4 M_B^2) + Q^2$\,.
Note that the common phase $\zeta$ is not determined in the present
calculation.
In most cases we shall fix $\zeta$ so as to reproduce the sign of the photon
decay amplitude of the proton reported in~\cite{PDG}\,.
Furthermore, note that
$\langle p,\,\bar P_N,\tfrac{1}{2}|j_0^E(0)|p,\,M_N,\tfrac{1}{2}\rangle$
is normalised to +1 at $Q^2=0$\,.
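For orientation, at the photon point the above relation for $\vec k^2$ reduces
to the usual equivalent-photon momentum; the numerical value for the
$S_{11}(1535)$ is added here merely as an illustrative check:
\begin{align}
|\vec k\,|\Big|_{Q^2=0} = \frac{M_B^2-M_N^2}{2M_B}
\approx 0.48\,\textrm{GeV}
\quad\textrm{for}\quad M_B=1.535\,\textrm{GeV}\,,\;
M_N=0.939\,\textrm{GeV}\,.
\end{align}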
\subsection{Helicity amplitudes for electro-excitation\label{ElecHelAmpl}}
In the last decade new experiments were performed at the Jefferson Laboratory
in order to study helicity amplitudes up to momentum transfers of
$6\,\textrm{GeV}^2$. These new
experiments were designed to determine the helicity amplitudes for the
electro-excitation of the $P_{11}(1440)$, $S_{11}(1535)$ and $D_{13}(1520)$
resonances. The results can be found
in~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009} and~\cite{Drechsel}. In
addition, novel data for the longitudinal $S_{1/2}^N$ amplitudes were obtained.
We calculated the corresponding helicity amplitudes of these and other states
on the basis of the Salpeter amplitudes obtained in the novel model $\mathcal
C$~\cite{Ronniger}\,. As mentioned above we are now able to solve the
eigenvalue problem with higher numerical accuracy by an expansion into a
larger basis which presently includes all three-particle harmonic oscillator
states up to an excitation quantum number $N_{\textrm{max}}=18$\,, whereas
previously~\cite{LoeMePe2,LoeMePe3,Merten} the results for baryon masses and
amplitudes in model $\mathcal{A}$ were obtained with
$N_{\textrm{max}}=12$\,. For comparison and to study the effects of the newly
introduced phenomenological flavour-dependent interaction of model
$\mathcal{C}$ we thus also recalculated the spectrum and the amplitudes for
model $\mathcal{A}$ within the same larger model space. The corresponding
changes in the determination of the interaction parameters are indicated in
table~\ref{tab:par}\,.
\subsubsection{Helicity amplitudes for nucleons\label{NNHelAmpl}}
We will now turn to the discussion of $N\to N^\ast$ helicity amplitudes for each angular
momentum $J$ and parity $\pi$\,.
\paragraph{The $J=1/2$ resonances:}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Ku-Th}[r][r]{\scriptsize~\cite{Kummer,Beck,Alder,Breuker,Brasse78,Benmerrouche,Krusche,Armstrong,Thompson}, p}
\psfrag{Capstick}[r][r]{\scriptsize Capstick~\cite{Keister}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Aznauryan-fit}[r][r]{\scriptsize Fit: Aznau.~\cite{Aznauryan2012}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S11_1535.eps}
\caption{
Comparison of the $S_{11}(1535)$ transverse helicity amplitude $A_{1/2}^N$
for electro-excitation from the proton and the neutron calculated in the
model $\mathcal C$ (solid and dashed-dotted line) and model $\mathcal A$ (dashed lines) to
experimental
data~\cite{PDG,Kummer,Beck,Alder,Breuker,Brasse78,%
Benmerrouche,Krusche,Armstrong,Thompson,%
Aznauryan05_1,Aznauryan05_2,Aznauryan2009,%
Drechsel,Tiator}\,. The dotted
line is the result obtained by Keister and Capstick~\cite{Keister}.
Additionally
recent fits obtained by Tiator \textit{et al.}~\cite{Tiator2011}
and by Aznauryan \textit{et al.}~\cite{Aznauryan2012}
are displayed as green dotted and dashed-dotted lines, respectively.
Note that the results
for model $\mathcal A$ were recalculated with higher numerical accuracy and
thus deviate from the results published previously in~\cite{Merten}\,.
\label{TRANSFF_A12_S11_1535}
}
\end{figure}
A comparison of calculated transverse and longitudinal helicity amplitudes
with experimental data for the electro-excitation of the $S_{11}(1535)$
resonance is given in Figs.~\ref{TRANSFF_A12_S11_1535}
and~\ref{TRANSFF_S_S11_1535}, respectively. Whereas the values of the
transverse amplitudes at the photon point ($Q^2=0$) are accurately reproduced
for both the proton and the neutron, in particular by the new model
$\mathcal{C}$\,, in general the calculated transverse amplitudes are too small
by a factor of two; in comparison to the results from model $\mathcal{A}$ the
amplitudes of model $\mathcal{C}$ decrease more slowly with increasing
momentum transfer, in better agreement with the experimental data. However, in
particular the near constancy of the proton data for $0<Q^2 < 1\,\textnormal{GeV}^2$
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Kreuzer-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Kreuzer-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_S11_1535.eps}
\caption{
Comparison of the $S_{11}(1535)$ longitudinal electro-excitation helicity
amplitude $S_{1/2}^N$ of proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines) with experimental
data~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009,%
Drechsel,Tiator}. Note that for the data points of the MAID-analysis by
Tiator \textit{et al.}~\cite{Tiator} no errors are quoted. See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_S11_1535}}
\end{figure}
is not reflected by any of the calculated results. For
comparison we also plotted the results from the quark model calculation of the
transverse $A_{1/2}^p$-amplitude by Keister and Capstick~\cite{Keister} for
$Q^2\lesssim3\,\textrm{GeV}^2$ and the fits obtained by Aznauryan \textit{et
al.}~\cite{Aznauryan2012} and Tiator \textit{et al.}~\cite{Tiator2011}.
Contrary to this, the momentum transfer dependence of the calculated longitudinal
helicity amplitudes hardly bears any resemblance to what has been determined
experimentally; in particular the minimum found for the proton at $Q^2 \approx
1.5\,\textnormal{GeV}^2$ is not reproduced. Only the non-relativistic calculation
of Capstick and Keister~\cite{Capstick1995} shows a pronounced minimum for the
longitudinal $S_{11}(1535)$ amplitude; however, this minimum is predicted at the
wrong position.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S11_1650.eps}
\caption{
Comparison of the $S_{11}(1650)$ transverse electro-excitation helicity
amplitude $A_{1/2}^N$ of proton and neutron calculated in model
$\mathcal C$ (solid and dashed-dotted line) and model $\mathcal A$ (dashed lines) to experimental
data from~\cite{PDG,Burkert,Aznauryan05_2,Drechsel,Tiator}\,. See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_S11_1650}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_S11_1650.eps}
\caption{
Comparison of the $S_{11}(1650)$ longitudinal electro-excitation helicity
amplitude $S_{1/2}^N$ of proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines).
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_S_S11_1650}
}
\end{figure}
Also the calculated transverse proton helicity amplitude $A_{1/2}^p$ for the
next resonance, $S_{11}(1650)$, shows a large disagreement
with the experimental data, as shown in Fig.~\ref{TRANSFF_A12_S11_1650}.
This discrepancy was already found in the previous calculation of Merten
\textit{et al.}~\cite{Merten} and obviously is not resolved within model
$\mathcal C$\,. Note, however, that the neutron amplitude $A_{1/2}^n$ calculated
at the photon point does correspond to the data from the PDG~\cite{PDG}, as
illustrated in Fig.~\ref{TRANSFF_A12_S11_1650}.
The rather small longitudinal $S_{11}(1650)$ amplitude $S_{1/2}^N$ seems to
agree with the scarce medium-$Q^2$ data from the MAID-analysis
of~\cite{Drechsel,Tiator}\,; however, for lower $Q^2$ the single data point of
Aznauryan \textit{et al.}~\cite{Aznauryan05_2} seems to indicate a zero
crossing of this amplitude that is not reproduced by either of the model calculations of
the $S_{1/2}^p$-amplitude for the $S_{11}(1650)$-resonance (see
Fig.~\ref{TRANSFF_S_S11_1650}).
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{Merten-p-A}[r][r]{\scriptsize model $\mathcal A$, p 3rd}
\psfrag{Merten-n-A}[r][r]{\scriptsize model $\mathcal A$, n 3rd}
\psfrag{Merten-p-B}[r][r]{\scriptsize model $\mathcal A$, p 4th}
\psfrag{Merten-n-B}[r][r]{\scriptsize model $\mathcal A$, n 4th}
\psfrag{Gauss-p-A}[r][r]{\scriptsize model $\mathcal C$, p 3rd}
\psfrag{Gauss-n-A}[r][r]{\scriptsize model $\mathcal C$, n 3rd}
\psfrag{Gauss-p-B}[r][r]{\scriptsize model $\mathcal C$, p 4th}
\psfrag{Gauss-n-B}[r][r]{\scriptsize model $\mathcal C$, n 4th}
\includegraphics[width=\linewidth]{TRANSFF_S11_1895.eps}
\caption{
Comparison of the $S_{11}(1895)$ transverse helicity amplitude $A_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines) with the single photon point value from
Anisovich \textit{et al.}~\cite{Anisovich_1}. See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_S11_1895}
}
\end{figure}
The third and fourth $J^\pi=1/2^-$ nucleon resonances are predicted in model $\mathcal A$
at $1872\,\textrm{MeV}$ and $1886\,\textrm{MeV}$ and in model $\mathcal C$ at
$1839\,\textrm{MeV}$ and $1882\,\textrm{MeV}$\,, respectively. Indeed within the
Bonn-Gatchina Analysis of the CB-ELSA collaboration data~\cite{Anisovich_3}
evidence for a $J^\pi=1/2^-$ nucleon resonance at $1895\,\textrm{MeV}$ was
found. As can be seen from Fig.~\ref{TRANSFF_A12_S11_1895}, the predicted
transverse amplitudes for the third resonance are rather large in both models, and the
calculated value at the photon point ($Q^2=0$) is much larger than the
experimental value quoted in~\cite{Anisovich_1,Anisovich_3}, whereas the value of the
fourth resonance matches the PDG photon decay amplitude.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Aniso.~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{BurkertGerhardt}[r][r]{\scriptsize~\cite{Burkert,Gerhardt}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznau.~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_P11_1440.eps}
\caption{
Comparison of the $P_{11}(1440)$ transverse helicity amplitude $A_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}\,.
\label{TRANSFF_P11_1440}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG}[r][r]{\scriptsize $C^p$ PDG~\cite{PDG}}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Kreuzer-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Kreuzer-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_P11_1440.eps}
\caption{
Comparison of the $P_{11}(1440)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). Note that for the data points of the
MAID-analysis by Tiator \textit{et al.}~\cite{Tiator} no errors are quoted.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_P11_1440}
}
\end{figure}
The transverse and longitudinal helicity amplitudes of the Roper resonance
$P_{11}(1440)$ are displayed in Figs.~\ref{TRANSFF_P11_1440} and~\ref{TRANSFF_S_P11_1440},
respectively. It is obvious that the zero crossing
found in the data at $Q^2\approx 0.5\,\textnormal{GeV}^2$, see
Fig.~\ref{TRANSFF_P11_1440}, is not reproduced in the calculated curves, although
the $Q^2$ dependence of the positive values at higher momentum transfers can
be accounted for in both models after changing the sign of the old
prediction~\cite{Merten}\,. On the other hand, we do find a satisfactory description
of the longitudinal $C^p_{1/2}$-amplitude displayed in
Fig.~\ref{TRANSFF_S_P11_1440} in particular in the new model $\mathcal C$\,.
Helicity amplitudes of higher-lying resonances in the $P_{11}$ channel are
only poorly studied experimentally. Nevertheless we shall briefly discuss the
$P_{11}(1710)$ helicity amplitude before treating the higher excitations
$P_{11}(1880)$ and $P_{11}(2100)$\,.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_P11_1710.eps}
\caption{
Comparison of the $P_{11}(1710)$ transverse helicity
amplitude $A_{1/2}^N$ for proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_P11_1710}
}
\end{figure}
For the $P_{11}(1710)$ resonance only the photon decay amplitude is reported~\cite{PDG}.
In Figs.~\ref{TRANSFF_P11_1710} and~\ref{TRANSFF_S_P11_1710} we display
our predictions for these amplitudes.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_P11_1710.eps}
\caption{
Prediction of the $P_{11}(1710)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_P11_1710}
}
\end{figure}
The transverse $A_{1/2}^N$-amplitude of model $\mathcal C$ matches the PDG data
at the photon point, in contrast to model $\mathcal A$, which overestimates the
proton and neutron amplitudes by a factor of two. On the other hand, the latter would
be in accordance with the larger value obtained by Anisovich \textit{et al.}~\cite{Anisovich_3},
$A^p_{1/2}=(52\pm15)\times10^{-3}\,\textrm{GeV}^{-\nicefrac{1}{2}}$\,. The prediction of the longitudinal
$S_{1/2}^N$-amplitudes is given in Fig.~\ref{TRANSFF_S_P11_1710}.
Finally we present the results for the fourth and fifth
$J^\pi=\tfrac{1}{2}^+$ nucleon states in Fig.~\ref{TRANSFF_P11_3rd-4th_Res}\,,
where we show the transverse helicity amplitudes only. The corresponding\linebreak
masses predicted by model $\mathcal A$ are $1905\,\textrm{MeV}$ for the
fourth and $1953\,\textrm{MeV}$ for the fifth state; for model
$\mathcal C$ the predicted masses are $1872\,\textrm{MeV}$ and
$1968\,\textrm{MeV}$\,, respectively. The two data points at the photon point marked
``01'' and ``02'' were obtained by the CB-ELSA collaboration within the
Bonn-Gatchina analysis as reported in~\cite{Anisovich_1,Anisovich_3} for the
$N_{1/2}^+(1880)$ resonance.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich $N_{1/2^+}(1880)$~\cite{Anisovich_1}}
\psfrag{Anisovich-2}[r][r]{\scriptsize Anisovich $N_{1/2^+}(1880)$~\cite{Anisovich_3}}
\psfrag{01}[c][c]{\scriptsize 01}
\psfrag{02}[c][c]{\scriptsize 02}
\psfrag{Merten-p-3rd}[r][r]{\scriptsize model $\mathcal A$, p 4th}
\psfrag{Merten-n-3rd}[r][r]{\scriptsize model $\mathcal A$, n 4th}
\psfrag{Merten-p-4th}[r][r]{\scriptsize model $\mathcal A$, p 5th}
\psfrag{Merten-n-4th}[r][r]{\scriptsize model $\mathcal A$, n 5th}
\psfrag{Gauss-p-3rd}[r][r]{\scriptsize model $\mathcal C$, p 4th}
\psfrag{Gauss-n-3rd}[r][r]{\scriptsize model $\mathcal C$, n 4th}
\psfrag{Gauss-p-4th}[r][r]{\scriptsize model $\mathcal C$, p 5th}
\psfrag{Gauss-n-4th}[r][r]{\scriptsize model $\mathcal C$, n 5th}
\includegraphics[width=\linewidth]{TRANSFF_P11_3rd-4th_Res.eps}
\caption{
Prediction of the $P_{11}$ transverse helicity amplitudes $A_{1/2}^N$ for
the third and fourth excitation of the proton and the neutron calculated
within model $\mathcal C$ (solid and dashed-dotted lines) and model $\mathcal A$ (dashed lines).
The data at the photon point marked ``01'' and ``02'' were reported
in~\cite{Anisovich_1,Anisovich_3} as alternatives for the $N_{1/2}^+(1880)$
resonance. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_P11_3rd-4th_Res}}
\end{figure}
They correspond to two different partial-wave solutions used to extract the corresponding
baryon mass and helicity amplitudes. The prediction for the fourth state lies between these
values; the values found for the fifth state are much smaller. This also applies to higher
$J^\pi=\tfrac{1}{2}^+$ excitations not displayed here.
\paragraph{The $J=3/2$ resonances:}
In Figs.~\ref{TRANSFF_A12_P13_1720} and~\ref{TRANSFF_A32_P13_1720} the transverse
helicity amplitudes of the $P_{13}(1720)$ resonance are displayed.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_P13_M2_1720.eps}
\caption{
Comparison of the $P_{13}(1720)$ transverse helicity amplitude $A_{1/2}^N$
of proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines). Note that for the data points
of the MAID-analysis by Tiator \textit{et al.}~\cite{Tiator} no errors
are quoted. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_P13_1720}}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_P13_M4_1720.eps}
\caption{
Comparison of the $P_{13}(1720)$ transverse helicity amplitude $A_{3/2}^N$
of proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). Note that for the data
points of the MAID-analysis by Tiator \textit{et al.}~\cite{Tiator} no
errors are quoted. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A32_P13_1720}}
\end{figure}
Although a reasonable agreement with the data of Aznauryan \textit{et
al.}~\cite{Aznauryan05_2} and with the photon decay amplitude is found for both
models, the data from the MAID analysis~\cite{Drechsel,Tiator} indicate a
sign change for the $A^p_{\nicefrac{1}{2}}$ amplitude at
$Q^2 \approx 3\,\textnormal{GeV}^2$ not reproduced by either model.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_P13_1720.eps}
\caption{
Comparison of the $P_{13}(1720)$ longitudinal electro-excitation helicity amplitude $S_{1/2}^N$
of proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines). Note that for the data points of the
MAID-analysis by Tiator \textit{et al.}~\cite{Tiator} no errors are quoted.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S12_P13_1720}}
\end{figure}
Although neither model can account for the large $A^p_{\nicefrac{3}{2}}$
amplitude found experimentally, the longitudinal helicity amplitude reported
in the MAID analysis, with the exception of the value at $Q^2 \approx 1\,
\textnormal{GeV}^2$\,, is reproduced rather well by both models, as shown in
Fig.~\ref{TRANSFF_S12_P13_1720}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Gerhardt}[r][r]{\scriptsize Gerhardt~\cite{Gerhardt}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Aznauryan-fit}[r][r]{\scriptsize Fit: Aznauryan~\cite{Aznauryan2012}, p}
\psfrag{Ahrens}[r][r]{\scriptsize Ahrens~\cite{Ahrens}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D13_M2_1520.eps}
\caption{
Comparison of the $D_{13}(1520)$ transverse helicity amplitude $A_{1/2}^N$
of proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_D13_1520}}
\end{figure}
For the transverse helicity amplitude $A_{1/2}^p$ (see
Fig.~\ref{TRANSFF_A12_D13_1520}) of the $D_{13}(1520)$-resonance we find
reasonable quantitative agreement with the experimental data at low momentum
transfers; apart from the fact that in model $\mathcal{C}$ the amplitude is too small by about
a factor of two, the $Q^2$ dependence is reproduced up to $Q^2 \approx 6\,
\textnormal{GeV}^2$\,. The minimum at $Q^2 \approx 1\,\textnormal{GeV}^2$ is
clearly visible for model $\mathcal{A}$ whereas this feature is not so
pronounced in model $\mathcal{C}$\,.
The $A_{3/2}^p$-amplitudes are displayed in Fig.~\ref{TRANSFF_A32_D13_1520}; here both models
underestimate the data by more than a factor of three.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Gerhardt}[r][r]{\scriptsize Gerhardt~\cite{Gerhardt}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Aznauryan-fit}[r][r]{\scriptsize Fit: Aznauryan~\cite{Aznauryan2012}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Ahrens}[r][r]{\scriptsize Ahrens~\cite{Ahrens}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D13_M4_1520.eps}
\caption{
Comparison of the $D_{13}(1520)$ transverse helicity amplitude $A_{3/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A32_D13_1520}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Kreuzer-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Kreuzer-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p-S}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n-S}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_D13_1520.eps}
\caption{
Comparison of the $D_{13}(1520)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_D13_1520}
}
\end{figure}
Likewise the calculated neutron $A_{1/2}^n$- and $A_{3/2}^n$-amplitudes at the
photon point are too small. In particular for the $A_{1/2}^n$-amplitude the
predicted value, close to zero, contradicts the experimental value of
$(-59\pm9) \times 10^{-3}\,\textrm{GeV}^{-1/2}$ from the PDG~\cite{PDG}.
Unfortunately, although the $Q^2$ dependence of the magnitude of the
longitudinal amplitude $S_{1/2}^p$\,, see Fig.~\ref{TRANSFF_S_D13_1520}\,,
would describe the experimental data of Aznauryan \textit{et
al.}~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009} and
MAID~\cite{Drechsel} very well, the amplitude has the wrong sign. Note that
although, as mentioned above, the common phase $\zeta$ in the definition of
the helicity amplitudes is not determined in our framework, relative signs
between the three helicity amplitudes are fixed.
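The size of the $A_{1/2}^n$ discrepancy quoted above can be quantified with a quick back-of-the-envelope significance estimate. The sketch below only uses the PDG value $(-59\pm9)\times10^{-3}\,\textrm{GeV}^{-1/2}$ and a predicted value of exactly zero as an assumed stand-in for "close to zero"; it is not part of either model calculation.

```python
# Toy significance estimate for the A_{1/2}^n amplitude of D13(1520):
# compare a predicted value (assumed ~0) with the PDG measurement quoted
# above, both in units of 10^-3 GeV^(-1/2).

def discrepancy_sigma(predicted: float, measured: float, error: float) -> float:
    """Return the deviation between prediction and measurement in units
    of the experimental standard error."""
    return abs(predicted - measured) / error

# PDG: A_{1/2}^n = (-59 +/- 9) x 10^-3 GeV^(-1/2); prediction taken as zero.
z = discrepancy_sigma(predicted=0.0, measured=-59.0, error=9.0)
print(f"deviation: {z:.1f} sigma")  # roughly 6.6 sigma
```

Even this crude estimate makes clear that the near-vanishing prediction is far outside the quoted experimental error.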
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D13_M2_1700.eps}
\caption{
Comparison of the $D_{13}(1700)$ transverse helicity amplitude $A_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_D13_1700}
}
\end{figure}
The transverse amplitudes for the next ${3/2}^-$ nucleon resonance,
\textit{i.e.} $D_{13}(1700)$\,, are displayed in Figs.~\ref{TRANSFF_A12_D13_1700}
and~\ref{TRANSFF_A32_D13_1700}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D13_M4_1700.eps}
\caption{
Comparison of the $D_{13}(1700)$ transverse helicity amplitude $A_{3/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A32_D13_1700}}
\end{figure}
In contrast to the situation for the $D_{13}(1520)$-resonance described above,
here both models are in accordance with the PDG-data~\cite{PDG} as well as with
the data from Aznauryan \textit{et al.}~\cite{Aznauryan05_2} for the
$A_{1/2}$-amplitude, whereas for the $A_{3/2}$-amplitude only the
PDG-data~\cite{PDG} are reproduced, but not the data point from Aznauryan
\textit{et al.}~\cite{Aznauryan05_2} at finite momentum transfer.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_D13_1700.eps}
\caption{
Prediction of the $D_{13}(1700)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and model
$\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_D13_1700}}
\end{figure}
The prediction for the longitudinal $D_{13}(1700)$ amplitudes is given in
Fig.~\ref{TRANSFF_S_D13_1700}. The calculated amplitudes turn out to be rather small.
\paragraph{The $J=5/2$ resonances:}
Although the transverse\linebreak $D_{15}(1675)$ helicity amplitudes at the photon point
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Park}[r][r]{\scriptsize MAID~\cite{Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D15_M2_1675.eps}
\caption{
Comparison of the $D_{15}(1675)$ transverse helicity amplitude $A_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_D15_1675}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Park}[r][r]{\scriptsize MAID~\cite{Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_D15_M4_1675.eps}
\caption{Comparison of the $D_{15}(1675)$ transverse electro-excitation helicity
amplitude $A_{3/2}^N$ for proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_A32_D15_1675}}
\end{figure}
reproduce the experimental data from MAID~\cite{Drechsel,Tiator} and the
PDG~\cite{PDG} rather well, as displayed in Figs.~\ref{TRANSFF_A12_D15_1675}
and~\ref{TRANSFF_A32_D15_1675}, neither calculation can account for the
apparent zero of the experimental $A_{1/2}^p$-amplitude at $Q^2 \approx 1.5\,
\textnormal{GeV}^2$\,. Furthermore the $A_{3/2}^p$-amplitude, displayed in
Fig.~\ref{TRANSFF_A32_D15_1675}, is severely underestimated in magnitude by
both models, and model $\mathcal C$ even yields the wrong sign. The
transverse amplitudes for the neutron are predicted to be negative; here the
calculated value at the photon point in model $\mathcal A$ is closer to the
experimental value than that in model $\mathcal C$\,.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Park}[r][r]{\scriptsize MAID~\cite{Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_D15_1675.eps}
\caption{
Comparison of the $D_{15}(1675)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_D15_1675}
}
\end{figure}
The longitudinal amplitudes are calculated to be very small in both models.
Only experimental data from the MAID-analysis~\cite{Tiator} exist, indicating
that the experimental values are consistent with zero (see
Fig.~\ref{TRANSFF_S_D15_1675}).
There also exist data for the helicity amplitudes of the
$F_{15}(1680)$-resonance. The comparison with the calculated values is given in
Figs.~\ref{TRANSFF_A12_F15_1680} and~\ref{TRANSFF_A32_F15_1680}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_F15_M2_1680.eps}
\caption{
Comparison of the $F_{15}(1680)$ transverse helicity amplitude $A_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_F15_1680}}
\end{figure}
In particular in model $\mathcal{C}$ a reasonable description of the
$A_{1/2}^p$-amplitude is found for the newer data from Aznauryan \textit{et
al.}~\cite{Aznauryan05_1,Aznauryan05_2} and MAID~\cite{Drechsel,Tiator},
both at the photon point and at higher momentum transfers.
The values calculated in model $\mathcal{A}$ are in better accordance with
the older data from Burkert \textit{et al.}~\cite{Burkert}, which are larger in
magnitude.
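Statements of the kind "model $\mathcal A$ accords better with the older, larger-magnitude data" can be made quantitative with a reduced chi-square per data point. The sketch below illustrates the bookkeeping only; all numerical values are invented for illustration and are not actual $F_{15}(1680)$ data or model values.

```python
# Toy sketch: decide which data set a model curve "accords with better"
# via a reduced chi-square, as judged qualitatively in the text.
# All numbers below are invented, not actual helicity-amplitude data.

def reduced_chi2(model, data, errors):
    """Chi-square per data point between model values and measurements."""
    assert len(model) == len(data) == len(errors)
    chi2 = sum(((m - d) / e) ** 2 for m, d, e in zip(model, data, errors))
    return chi2 / len(data)

model_vals = [-20.0, -10.0, -5.0]   # hypothetical model curve
older_data = [-22.0, -11.0, -6.0]   # hypothetical larger-magnitude data set
newer_data = [-12.0, -6.0, -3.0]    # hypothetical smaller-magnitude data set
errs = [3.0, 2.0, 1.5]

chi2_old = reduced_chi2(model_vals, older_data, errs)
chi2_new = reduced_chi2(model_vals, newer_data, errs)
print(chi2_old < chi2_new)  # here the curve tracks the larger-magnitude set
```

In practice one would of course also propagate the partly unquoted systematic uncertainties of the analyses before drawing conclusions.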
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Kreuzer-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Kreuzer-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_F15_M4_1680.eps}
\caption{
Comparison of the $F_{15}(1680)$ transverse helicity amplitude $A_{3/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed line). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A32_F15_1680}}
\end{figure}
In contrast to this the $A_{3/2}^p$-amplitudes are again severely underestimated in
magnitude, see Fig.~\ref{TRANSFF_A32_F15_1680}\,.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, p}
\psfrag{Kreuzer-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Kreuzer-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_F15_1680.eps}
\caption{
Comparison of the $F_{15}(1680)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_F15_1680}
}
\end{figure}
In contrast, for the longitudinal $S_{1/2}^p$-amplitude we observe rather good
agreement with the data, as displayed in Fig.~\ref{TRANSFF_S_F15_1680},
although the values obtained in model $\mathcal C$ are too small at lower
momentum transfers.
\paragraph{The $J=7/2$ resonances:}
For positive parity the PDG~\cite{PDG} lists the $F_{17}(1990)$ resonance,
rated with two stars. Both in model $\mathcal{A}$ and in model $\mathcal{C}$ we
can relate this to states with calculated masses of 1954 MeV and 1997 MeV,
respectively. The corresponding photon amplitudes are very small, see
Table~\ref{tab:PhotonCoupl1}. Otherwise, among the
$J=7/2$ resonances there exists only one negative parity resonance with at
least a three-star rating, the $G_{17}(2190)$. The corresponding predictions for
transverse and longitudinal helicity amplitudes are shown in Figs.~\ref{TRANSFF_A12_G17_2190}
and~\ref{TRANSFF_S_G17_2190}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{Merten-M2-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{1/2}$}
\psfrag{Merten-M2-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{1/2}$}
\psfrag{Merten-M4-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{3/2}$}
\psfrag{Merten-M4-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{3/2}$}
\psfrag{Gauss-M2-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{1/2}$}
\psfrag{Gauss-M2-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{1/2}$}
\psfrag{Gauss-M4-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{3/2}$}
\psfrag{Gauss-M4-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_G17_2190.eps}
\caption{
Prediction of the $G_{17}(2190)$ transverse helicity amplitudes $A_{1/2}^N$ and
$A_{3/2}^N$ for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_G17_2190}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG-p}[r][r]{\scriptsize PDG~\cite{PDG}, p}
\psfrag{PDG-n}[r][r]{\scriptsize PDG~\cite{PDG}, n}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, p}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2}, p}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, p}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_G17_2190.eps}
\caption{
Prediction of the $G_{17}(2190)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_G17_2190}
}
\end{figure}
\paragraph{The $J=9/2$ resonances:}
The transverse and longitudinal helicity amplitudes of the $J^\pi=9/2^-$ resonance
$G_{19}(2250)$ are predicted to be very small as shown in Figs.~\ref{TRANSFF_A12_G19_2250}
and~\ref{TRANSFF_S_G19_2250} and coincide with the estimate by Anisovich
\textit{et al.}~\cite{Anisovich_3} for the transverse amplitudes. Obviously,
the $A^p_{\nicefrac{3}{2}}$ amplitude of model $\mathcal C$ and the longitudinal
amplitudes of model $\mathcal A$ are effectively zero.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{Merten-M2-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{1/2}$}
\psfrag{Merten-M2-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{1/2}$}
\psfrag{Merten-M4-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{3/2}$}
\psfrag{Merten-M4-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{3/2}$}
\psfrag{Gauss-M2-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{1/2}$}
\psfrag{Gauss-M2-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{1/2}$}
\psfrag{Gauss-M4-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{3/2}$}
\psfrag{Gauss-M4-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_G19_2250.eps}
\caption{
Comparison of the $G_{19}(2250)$ transverse helicity amplitudes
$A_{1/2}^N$ and $A_{3/2}^N$ for proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). The error bar at the photon point
corresponds to an estimate by Anisovich \textit{et al.}~\cite{Anisovich_3}
for $A_{1/2}^N$ and $A_{3/2}^N$ within $|A^p|<10\times10^{-3}\,\textrm{GeV}^{-\nicefrac{1}{2}}$\,.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_A12_G19_2250}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_G19_2250.eps}
\caption{
Prediction of the $G_{19}(2250)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_G19_2250}
}
\end{figure}
Although the resonance with $J^\pi=9/2^+$\,, $H_{19}(2220)$\,, has a four-star
rating by the PDG, only the proton photon decay amplitude has been estimated
in~\cite{Anisovich_3}. The calculated values are displayed in Fig.~\ref{TRANSFF_A12_H19_2220}
and Fig.~\ref{TRANSFF_S_H19_2220}\,;
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, p}
\psfrag{Merten-M2-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{1/2}$}
\psfrag{Merten-M2-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{1/2}$}
\psfrag{Merten-M4-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{3/2}$}
\psfrag{Merten-M4-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{3/2}$}
\psfrag{Gauss-M2-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{1/2}$}
\psfrag{Gauss-M2-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{1/2}$}
\psfrag{Gauss-M4-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{3/2}$}
\psfrag{Gauss-M4-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_H19_2220.eps}
\caption{
Comparison of the $H_{19}(2220)$ transverse helicity amplitudes
$A_{1/2}^N$ and $A_{3/2}^N$ for proton and neutron calculated in model $\mathcal C$
(solid and dashed-dotted lines) and model $\mathcal A$ (dashed lines). The error bar at the photon point
corresponds to an estimate by Anisovich \textit{et al.}~\cite{Anisovich_3}
for $A_{1/2}^N$ and $A_{3/2}^N$ within $|A^p|<10\times10^{-3}\,\textrm{GeV}^{-\nicefrac{1}{2}}$\,.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_A12_H19_2220}
}
\end{figure}
the amplitudes turn out to be smaller in model $\mathcal{C}$ than in
model $\mathcal{A}$\,, in better agreement with the estimate of~\cite{Anisovich_3}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_H19_2220.eps}
\caption{
Prediction of the $H_{19}(2220)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_H19_2220}
}
\end{figure}
\paragraph{The $J=11/2$ resonances:}
Figs.~\ref{TRANSFF_A12_I111_2600} and~\ref{TRANSFF_S_I111_2600} show predictions
of the transverse and longitudinal helicity amplitudes for the
$J^{\pi}=11/2^-$ $I_{1\,11}(2600)$-resonance. So far no data are available.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-M2-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{1/2}$}
\psfrag{Merten-M2-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{1/2}$}
\psfrag{Merten-M4-p}[r][r]{\scriptsize model $\mathcal A$, p $A_{3/2}$}
\psfrag{Merten-M4-n}[r][r]{\scriptsize model $\mathcal A$, n $A_{3/2}$}
\psfrag{Gauss-M2-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{1/2}$}
\psfrag{Gauss-M2-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{1/2}$}
\psfrag{Gauss-M4-p}[r][r]{\scriptsize model $\mathcal C$, p $A_{3/2}$}
\psfrag{Gauss-M4-n}[r][r]{\scriptsize model $\mathcal C$, n $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_I111_2600.eps}
\caption{
Prediction of the $I_{1\,11}(2600)$ transverse helicity amplitudes $A_{1/2}^N$ and
$A_{3/2}^N$ for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_I111_2600}
}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-p}[r][r]{\scriptsize model $\mathcal A$, p}
\psfrag{Merten-n}[r][r]{\scriptsize model $\mathcal A$, n}
\psfrag{Gauss-p}[r][r]{\scriptsize model $\mathcal C$, p}
\psfrag{Gauss-n}[r][r]{\scriptsize model $\mathcal C$, n}
\includegraphics[width=\linewidth]{TRANSFF_S_I111_2600.eps}
\caption{
Prediction of the $I_{1\,11}(2600)$ longitudinal helicity amplitude $S_{1/2}^N$
for proton and neutron calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_I111_2600}
}
\end{figure}
\subsubsection{Nucleon$\to \Delta$ helicity amplitudes\label{NDeltaHelAmpl}}
We now turn to a discussion of the results for $N \to \Delta$ electro-excitation.
\paragraph{The $J=1/2$ resonances:}
We start the discussion with the negative parity $S_{31}(1620)$-resonance.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{PDG}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{Burkert}[r][r]{\scriptsize Burkert~\cite{Burkert}, $A_{1/2}$}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, $A_{1/2}$}
\psfrag{Aznauryan-S}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, $S_{1/2}$}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $A_{1/2}$}
\psfrag{Maid-S}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $S_{1/2}$}
\psfrag{Tiator-A12}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, $A_{1/2}$}
\psfrag{Tiator-S12}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, $S_{1/2}$}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_S31_1620.eps}
\caption{
Comparison of the $S_{31}(1620)$ transverse and the longitudinal helicity
amplitudes $A_{1/2}^N$ and $S_{1/2}^N$ calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines) with experimental data
from~\cite{PDG,Burkert,Aznauryan05_2,Drechsel,Tiator}\,. Note that for
the data points of the MAID-analysis by Tiator \textit{et al.}~\cite{Tiator}
no errors are quoted. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_AS12_S31_1620}
}
\end{figure}
For the $S_{31}(1620)$ transverse and longitudinal helicity amplitudes,
depicted in Fig.~\ref{TRANSFF_AS12_S31_1620}, a wide variety of experimental
data at and near the photon point exists. Owing to the spread in the partially
contradictory experimental data, the calculated values lie well within the
experimental range, but an assessment of the quality of the description is
hardly possible. The positive longitudinal amplitude $S_{1/2}^N$ in
Fig.~\ref{TRANSFF_AS12_S31_1620}, as determined in~\cite{Drechsel,Tiator},
together with the single data point from~\cite{Aznauryan05_2}, suggests a sign
change in the region $Q^2 \approx 0.7-1.0\, \textnormal{GeV}^2$ that is
reproduced by neither calculation; this clearly needs further experimental
clarification.
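A sign-change region such as the one suggested above can be read off a sampled amplitude by linear interpolation between the two points that bracket the zero. The $(Q^2, S)$ pairs below are invented values that merely mimic a sign change between $0.7$ and $1.0\,\textnormal{GeV}^2$; they are not the MAID or Aznauryan data points.

```python
# Toy sketch: locate the zero crossing of a sampled amplitude S_{1/2}(Q^2)
# by linear interpolation between the bracketing grid points.
# The (Q^2, S) samples below are hypothetical illustration values.

def zero_crossing(q2, s):
    """Return the interpolated Q^2 at which the sampled amplitude changes
    sign, or None if it never does."""
    for (x0, y0), (x1, y1) in zip(zip(q2, s), zip(q2[1:], s[1:])):
        if y0 == 0.0:
            return x0
        if y0 * y1 < 0.0:
            # solve y0 + (y1 - y0) * (x - x0) / (x1 - x0) = 0 for x
            return x0 - y0 * (x1 - x0) / (y1 - y0)
    return None

q2_grid = [0.3, 0.5, 0.7, 1.0, 1.5]    # GeV^2, hypothetical grid
s_vals = [12.0, 8.0, 3.0, -2.0, -6.0]  # 10^-3 GeV^(-1/2), hypothetical
q2_zero = zero_crossing(q2_grid, s_vals)
print(f"sign change near Q^2 = {q2_zero:.2f} GeV^2")
```

With data as sparse and contradictory as in this channel, the interpolated crossing point should only be read as a rough localisation, consistent with the broad range quoted in the text.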
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Awaji}[r][r]{\scriptsize Awaji~\cite{Awaji}, $A_{1/2}$}
\psfrag{Crawford}[r][r]{\scriptsize Crawford~\cite{Crawford}, $A_{1/2}$}
\psfrag{Gauss-A}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_S31_1900.eps}
\caption{
Comparison of the $S_{31}(1900)$ transverse helicity amplitude $A_{1/2}^N$ (solid line) and
the longitudinal helicity amplitude $S_{1/2}^N$ (dashed-dotted line) calculated in model
$\mathcal C$\,. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S31_1900}
}
\end{figure}
The next excitation in this channel is the $S_{31}(1900)$ resonance; the
corresponding transverse and longitudinal helicity amplitudes are displayed in
Fig.~\ref{TRANSFF_S31_1900}\,. Here we only give the results for model
$\mathcal{C}$ since the original model $\mathcal{A}$ does not describe a
resonance in this region. The values at the photon point seem to be in better
agreement with the data from Crawford \textit{et al.}~\cite{Crawford} than
with the data from Awaji \textit{et al.}~\cite{Awaji} and Anisovich
\textit{et al.}~\cite{Anisovich_3}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Penner}[r][r]{\scriptsize Penner~\cite{Penner}}
\psfrag{Gauss-A}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_P31_1750.eps}
\caption{
Comparison of the $P_{31}(1750)$ transverse $A_{1/2}^N$ and longitudinal
$S_{1/2}^N$ helicity amplitude calculated in model $\mathcal C$ (solid line)
and the data from Penner \textit{et al.}~\cite{Penner}. See also caption
to Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_A12_P31_1750}}
\end{figure}
Note that for both $S_{31}$-resonances we judiciously fixed the
phase $\zeta$ in order to reproduce the sign of the PDG value at the photon
point, as has been mentioned above.
Reversing the sign of $\zeta$ would in case of the $S_{31}(1620)$-resonance
in fact better reproduce the data at larger momentum transfers.
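The role of the undetermined common phase $\zeta$ described here can be sketched in a few lines: flipping $\zeta$ changes the sign of all helicity amplitudes at once, so only the overall sign can be fixed (for instance to the PDG sign at the photon point), while the relative signs are untouched. The amplitude values below are hypothetical and only illustrate the bookkeeping; $\zeta$ is restricted to $\pm1$ for real amplitudes in this toy version.

```python
# Toy sketch of fixing the undetermined common phase zeta = +/-1:
# flip the overall sign of all helicity amplitudes so that A_{1/2} at the
# photon point matches the sign of the (hypothetical) PDG value. The
# relative signs between A_{1/2}, A_{3/2} and S_{1/2} are unaffected.

def fix_common_phase(amps, pdg_a12_sign):
    """amps = (A12, A32, S12) at Q^2 = 0; return the amplitudes with the
    overall sign zeta chosen to reproduce the PDG sign of A_{1/2}."""
    a12 = amps[0]
    zeta = 1.0 if a12 * pdg_a12_sign > 0 else -1.0
    return tuple(zeta * a for a in amps)

calc = (-30.0, 15.0, -5.0)   # hypothetical calculated amplitudes
fixed = fix_common_phase(calc, pdg_a12_sign=+1.0)
print(fixed)                  # (30.0, -15.0, 5.0)

# relative signs (pairwise sign products) are invariant under zeta:
def rel_signs(t):
    return (t[0] * t[1] > 0, t[0] * t[2] > 0)

assert rel_signs(calc) == rel_signs(fixed)
```

This also makes explicit why, as noted for the $S_{31}(1620)$, choosing $\zeta$ at the photon point can worsen the agreement at larger momentum transfers: the single sign choice applies to the whole $Q^2$ range at once.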
Also the lowest positive parity $\Delta$-resonance $P_{31}(1750)$ is only
reproduced in model $\mathcal C$ as shown in~\cite{Ronniger}. The calculation
does not account for the large value found by Penner \textit{et al.}~\cite{Penner}
at the photon point, see Fig.~\ref{TRANSFF_A12_P31_1750}\,. The longitudinal
amplitude is predicted to be negative for this resonance.
The helicity amplitudes for the next excited state,\linebreak $P_{31}(1910)$, are shown
in Fig.~\ref{TRANSFF_AS12_P31_1910}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N\,/\,S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG}[r][r]{\scriptsize PDG~\cite{PDG}}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}}
\psfrag{Merten-A-A}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$ 1st}
\psfrag{Merten-B-A}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$ 2nd}
\psfrag{Merten-A-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$ 1st}
\psfrag{Merten-B-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$ 2nd}
\psfrag{Gauss-A}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_P31_1910.eps}
\caption{
Comparison of the $P_{31}(1910)$ transverse $A_{1/2}^N$ and longitudinal
$S_{1/2}^N$ helicity amplitude calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_AS12_P31_1910}
}
\end{figure}
Note that model $\mathcal A$ does produce two nearby resonances at the position of
the\linebreak $P_{31}(1910)$ resonance, see~\cite{LoeMePe2,Ronniger}. The calculated
amplitudes for both resonances as well as the calculated amplitude in model
$\mathcal{C}$ are very small and in rough agreement with the experimental
value found at the photon point, which has a large error. Again the
assessment cannot be conclusive. Also shown are the predictions for the
rather small longitudinal amplitudes.
\paragraph{The $J=3/2$ resonances:}
We shall discuss the electro-excitation of the ground-state $\Delta$ resonance,
$P_{33}(1232)$ in some more detail below; the
transverse amplitudes are shown in Fig.~\ref{TRANSFF_A132_P33_1232} while
Fig.~\ref{TRANSFF_S_P33_1232} displays the results for the longitudinal amplitude.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N(Q^2)/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-A12}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-A32}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-A12}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-A32}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Maid-A12}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $A_{1/2}$}
\psfrag{Maid-A32}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $A_{3/2}$}
\psfrag{Merten-M2}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-M4}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_P33_1232.eps}
\caption{
Comparison of the $P_{33}(1232)$ transverse helicity amplitudes
$A_{1/2}^N$ and $A_{3/2}^N$ as calculated in model $\mathcal C$
(solid and dashed-dotted line) and model $\mathcal A$ (dashed lines).
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535} and the magnetic
and electric transition form factor in Figs.~\ref{MagneticTRANSFF}
and~\ref{ElectricTRANSFF}, respectively.\label{TRANSFF_A132_P33_1232}}
\end{figure}
With the exception of the low momentum transfer region $Q^2<0.5\,\textrm{GeV}^2$
we observe fair agreement with the experimental data for the transverse
amplitude $A_{1/2}^N$ in both models. Both models, however, show a minimum
in the amplitude at $Q^2 \lesssim 0.5\, \textnormal{GeV}^2$, which, in
contradiction to the data, also persists in the magnetic form factor (see
Fig.~\ref{MagneticTRANSFF}), whereas the data show a minimum of kinematical
origin at much smaller momentum transfers, $Q^2 \lesssim 0.1\,
\textnormal{GeV}^2$.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{PDG}[r][r]{\scriptsize PDG~\cite{PDG}}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$}
\includegraphics[width=\linewidth]{TRANSFF_S_P33_1232.eps}
\caption{
Comparison of the $P_{33}(1232)$ longitudinal helicity amplitude $S_{1/2}^N$
calculated in model $\mathcal C$ (solid line) and model $\mathcal A$ (dashed line)
to experimental data from~\cite{PDG,Drechsel,Tiator}\,. See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535} and the Coulomb transition form factor in
Fig.~\ref{CoulombTRANSFF}.\label{TRANSFF_S_P33_1232}
}
\end{figure}
The experimental data for the $S_{1/2}^N$ helicity amplitude are accounted
for by the calculated curve of model $\mathcal{C}$ only at the highest
momentum transfers, while the amplitude calculated in model $\mathcal{A}$
is much smaller. Note that more data are available for the magnetic transition
form factor, which is a linear combination of the $A_{1/2}^N$ and $A_{3/2}^N$
amplitudes, see sec.~\ref{MagnTransFF}.
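Schematically, and up to a sign and a $Q^2$-dependent kinematical
normalization whose precise form depends on the convention adopted, this
combination reads
\[
G_M^\ast(Q^2) \,\propto\, A_{1/2}^N(Q^2) + \sqrt{3}\,A_{3/2}^N(Q^2)\,,
\]
so that the comparatively abundant data on $G_M^\ast$ constrain both
transverse amplitudes simultaneously.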
The Roper-like excitation of the ground state $\Delta$ resonance,
$P_{33}(1600)$, is only described adequately in model $\mathcal{C}$\,.
The corresponding helicity amplitudes $A^N_{1/2}$\,, $A^N_{3/2}$
and $S^N_{1/2}$ are displayed in Fig.~\ref{TRANSFF_AS132_P33_1600}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A^N_{\tfrac{1}{2}}/A^N_{\tfrac{3}{2}}/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-M2}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-M4}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_P33_1600.eps}
\caption{
Comparison of the $P_{33}(1600)$ transverse and longitudinal helicity
amplitudes $A_{1/2}^N$, $A_{3/2}^N$ and $S_{1/2}^N$ calculated in model
$\mathcal C$ with the PDG-data~\cite{PDG} and~\cite{Anisovich_3}.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_AS132_P33_1600}
}
\end{figure}
The $A^N_{\nicefrac{1}{2}}$ amplitude is calculated to be smaller than the decay
amplitude quoted by the PDG~\cite{PDG}. Contrary to this, we find
a rather large $A^N_{\nicefrac{3}{2}}$ amplitude with a pronounced minimum around $Q^2
\approx 0.75\,\textrm{GeV}^2$. However, in this case the value at the photon point is
overestimated.
For the $P_{33}(1920)$ state with positive parity the
helicity amplitudes are displayed in Figs.~\ref{TRANSFF_AS132_P33_1920}
and~\ref{TRANSFF_S_P33_1920}. In~\cite{Ronniger} it is shown that there exist
several states around 1920 MeV which correspond to the second and third excited
$\Delta_{\nicefrac{3}{2}^+}$ state and which are predicted at 1834 MeV and
at 1912 MeV for model $\mathcal A$ and at 1899 MeV and at 1932 MeV for model
$\mathcal C$\,, respectively.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A^N_{\tfrac{1}{2}}/A^N_{\tfrac{3}{2}}\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{APHA-M2}[r][r]{\scriptsize~\cite{Awaji,Horn,Penner}, $A_{1/2}$}
\psfrag{APHA-M4}[r][r]{\scriptsize~\cite{Awaji,Horn,Penner}, $A_{3/2}$}
\psfrag{Merten-M2-2nd}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$ 2nd}
\psfrag{Merten-M2-3rd}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$ 3rd}
\psfrag{Merten-M4-2nd}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$ 2nd}
\psfrag{Merten-M4-3rd}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$ 3rd}
\psfrag{Gauss-M2-2nd}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$ 2nd}
\psfrag{Gauss-M2-3rd}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$ 3rd}
\psfrag{Gauss-M4-2nd}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$ 2nd}
\psfrag{Gauss-M4-3rd}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$ 3rd}
\includegraphics[width=\linewidth]{TRANSFF_P33_1920.eps}
\caption{
Comparison of the $P_{33}(1920)$ transverse helicity amplitudes
$A_{1/2}^N$ and $A_{3/2}^N$ as calculated in model $\mathcal C$ (solid and dashed-dotted line)
and model $\mathcal A$ (dashed lines) with the data from~\cite{Awaji,Horn,Penner}. Note
that the values at $Q^2=0$ of Anisovich \textit{et al.}~\cite{Anisovich_3},
$A^p_{1/2}=130^{+30}_{-60}\times10^{-3}\textrm{GeV}^{-1/2}$ and
$A^p_{3/2}=-150^{+25}_{-50}\times10^{-3}\textrm{GeV}^{-1/2}$, are beyond the range displayed.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_AS132_P33_1920}
}
\end{figure}
The transverse amplitudes are in general very small and match the photon decay data
of~\cite{Awaji,Horn,Penner} whereas the data of Anisovich \textit{et al.}~\cite{Anisovich_3}
cannot be reproduced. The predictions for the longitudinal amplitude as well as the
$A^N_{\nicefrac{3}{2}}$ amplitude for the third excitation are effectively zero.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-S-2nd}[r][r]{\scriptsize model $\mathcal A$, 2nd}
\psfrag{Merten-S-3rd}[r][r]{\scriptsize model $\mathcal A$, 3rd}
\psfrag{Gauss-S-2nd}[r][r]{\scriptsize model $\mathcal C$, 2nd}
\psfrag{Gauss-S-3rd}[r][r]{\scriptsize model $\mathcal C$, 3rd}
\includegraphics[width=\linewidth]{TRANSFF_S_P33_1920.eps}
\caption{
Prediction of the $P_{33}(1920)$ longitudinal helicity amplitude
$S_{1/2}^N$ calculated in model $\mathcal C$ (solid lines)
and model $\mathcal A$ (dashed lines). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.\label{TRANSFF_S_P33_1920}
}
\end{figure}
We now turn to negative parity excited $\Delta$-resonances. For the
$D_{33}(1700)$ transition amplitudes we find that the predictions of both
models are rather close, as displayed in Figs.~\ref{TRANSFF_A132_D33_1700}
and~\ref{TRANSFF_S_D33_1700}. Note that the calculated masses of the
$D_{33}(1700)$-resonance, \textit{viz.} $M=1594\,\textrm{MeV}$ for model
$\mathcal A$ and $M=1600\,\textrm{MeV}$
for model $\mathcal C$, are about $100\,\textrm{MeV}$ lower than the
experimental mass at approximately $1700\,\textrm{MeV}$. This of course
affects the pre-factors in Eqs.~(\ref{TransFF_eq4a}) and~(\ref{TransFF_eq4b}),
leading to the conclusion that the current matrix elements are calculated to
be too small.
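The origin of this mass dependence can be sketched as follows (this is a
schematic form only; conventions for signs and current components differ,
and the precise prefactors used here are those of Eqs.~(\ref{TransFF_eq4a})
and~(\ref{TransFF_eq4b})): the transverse helicity amplitudes carry a
kinematical prefactor through the equivalent real-photon momentum $k_W$ of
the resonance,
\[
A_{\lambda}^N \,\propto\, \sqrt{\frac{2\pi\alpha}{k_W}}\,
\big\langle R,\lambda\big|J_+\big|N,\lambda-1\big\rangle\,,\qquad
k_W=\frac{M_R^2-m_N^2}{2M_R}\,,
\]
so that a resonance mass calculated about $100\,\textrm{MeV}$ below the
experimental value directly modifies the overall normalization of the
amplitudes.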
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-M2}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-M4}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Burkert-M2}[r][r]{\scriptsize Burkert~\cite{Burkert}, $A_{1/2}$}
\psfrag{Burkert-M4}[r][r]{\scriptsize Burkert~\cite{Burkert}, $A_{3/2}$}
\psfrag{Aznauryan-M2}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, $A_{1/2}$}
\psfrag{Aznauryan-M4}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}, $A_{3/2}$}
\psfrag{Maid-M2}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $A_{1/2}$}
\psfrag{Maid-M4}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}, $A_{3/2}$}
\psfrag{Tiator-M2}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, $A_{1/2}$}
\psfrag{Tiator-M4}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}, $A_{3/2}$}
\psfrag{Merten-M2}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-M4}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\includegraphics[width=\linewidth]{TRANSFF_D33_1700.eps}
\caption{
Comparison of the $D_{33}(1700)$ transverse helicity amplitudes $A_{1/2}^N$
and $A_{3/2}^N$ calculated in model $\mathcal C$ (solid and dashed-dotted line) and
model $\mathcal A$ (dashed lines). See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A132_D33_1700}
}
\end{figure}
For the transverse amplitude $A_{1/2}^N$ only the single data point from Aznauryan
\textit{et al.}~\cite{Aznauryan05_2} is close to the calculated curves. At the photon point
the calculated values also agree with the PDG data~\cite{PDG}. In contrast,
the data from MAID~\cite{Drechsel,Tiator} and Burkert \textit{et
al.}~\cite{Burkert} cannot be accounted for. Similar observations are made
for the $A_{3/2}^N$-amplitude. The longitudinal $S_{1/2}^N$ amplitude
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\frac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Aznauryan}[r][r]{\scriptsize Aznauryan~\cite{Aznauryan05_2}}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$}
\includegraphics[width=\linewidth]{TRANSFF_S_D33_1700.eps}
\caption{
Comparison of the $D_{33}(1700)$ longitudinal helicity amplitude $S_{1/2}^N$
of the nucleon calculated in model $\mathcal C$ (solid line) and model
$\mathcal A$ (dashed line). Note that for the data points of the MAID-analysis
by Tiator \textit{et al.}~\cite{Tiator} no errors are quoted. See also caption
to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_D33_1700}}
\end{figure}
has a sign opposite to that of the sparse data from Aznauryan
\textit{et al.}~\cite{Aznauryan05_2} and MAID~\cite{Drechsel,Tiator}, as shown in
Fig.~\ref{TRANSFF_S_D33_1700}. Note, however, that the MAID-analysis of Tiator
\textit{et al.}~\cite{Tiator} yields a vanishing $S_{1/2}^N$ amplitude in
contrast to the appreciable amplitudes found in the calculations.
Figs.~\ref{TRANSFF_A12_D33_1940} and~\ref{TRANSFF_S_A12_D33_1940} contain the prediction
for the transverse and longitudinal helicity amplitudes of the $D_{33}(1940)$
resonance in model $\mathcal C$. Note that in this model two resonances with
masses $M=1895\,\textrm{MeV}$ and $M=1959\,\textrm{MeV}$ are predicted in this
energy range, as shown in~\cite{Ronniger}. Accordingly we have displayed two
alternative predictions for the helicity amplitudes.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A^N_{\tfrac{1}{2}}/A^N_{\tfrac{3}{2}}\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}}
\psfrag{Horn-M2}[r][r]{\scriptsize Horn~\cite{Horn}, $A_{1/2}$}
\psfrag{Awaji-M2}[r][r]{\scriptsize Awaji~\cite{Awaji}, $A_{1/2}$}
\psfrag{Horn-M4}[r][r]{\scriptsize Horn~\cite{Horn}, $A_{3/2}$}
\psfrag{Awaji-M4}[r][r]{\scriptsize Awaji~\cite{Awaji}, $A_{3/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$ 2nd}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$ 2nd}
\psfrag{Gauss-M2-2nd}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$ 3rd}
\psfrag{Gauss-M4-2nd}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$ 3rd}
\includegraphics[width=\linewidth]{TRANSFF_D33_1940.eps}
\caption{
Comparison of the $D_{33}(1940)$ transverse helicity amplitudes $A_{1/2}^N$
(solid lines) and $A_{3/2}^N$ (dashed lines) calculated in model $\mathcal C$
with the data from Awaji \textit{et al.}~\cite{Awaji}. Since
model $\mathcal C$ offers two alternatives for the $D_{33}(1940)$-resonance,
as shown in~\cite{Ronniger}, both amplitudes, labelled ``second''
and ``third'', are displayed. Note that the values at $Q^2=0$ of Horn
\textit{et al.}~\cite{Horn}, $A^p_{1/2}=(160\pm40)\times10^{-3}\textrm{GeV}^{-1/2}$
and $A^p_{3/2}=(130\pm30)\times10^{-3}\textrm{GeV}^{-1/2}$,
are beyond the range displayed. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A12_D33_1940}
}
\end{figure}
The results for the transverse amplitudes (see Fig.~\ref{TRANSFF_A12_D33_1940})
for both resonances are rather similar. The measured photon decay amplitudes
from Horn \textit{et al.}~\cite{Horn} and Awaji \textit{et al.}~\cite{Awaji}
are in conflict; the calculated values favour a small negative value at the
photon point, which agrees with the data from Awaji \textit{et al.}~\cite{Awaji}.
In Fig.~\ref{TRANSFF_S_A12_D33_1940} we also show the corresponding longitudinal
amplitudes.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$\,, 2nd}
\psfrag{Gauss-2nd}[r][r]{\scriptsize model $\mathcal C$\,, 3rd}
\includegraphics[width=\linewidth]{TRANSFF_S_D33_1940.eps}
\caption{
Prediction of the $D_{33}(1940)$ longitudinal electro-excitation helicity amplitudes
$S_{1/2}^N$ of the nucleon calculated in model $\mathcal C$\,. Since
model $\mathcal C$ offers two alternatives for the $D_{33}(1940)$-resonance,
as shown in~\cite{Ronniger}, both amplitudes, labelled ``second'' and ``third'',
are displayed. See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_S_A12_D33_1940}}
\end{figure}
\paragraph{The $J=5/2$ resonances:}
In Fig.~\ref{TRANSFF_A_D35_1930} we show the $A_{1/2}^N$, $A_{3/2}^N$ and $S_{1/2}^N$
helicity amplitudes
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-M2}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-M4}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_D35_1930.eps}
\caption{
Comparison of the $D_{35}(1930)$ helicity amplitudes $A_{1/2}^N$
(black line), $A_{3/2}^N$ (red line) and $S_{1/2}^N$ (blue line)
calculated in model $\mathcal C$ with the PDG-data~\cite{PDG}.
See also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A_D35_1930}
}
\end{figure}
calculated in model $\mathcal C$~\cite{Ronniger} for the $D_{35}(1930)$
resonance. Also displayed is the PDG-data at the photon point~\cite{PDG},
where we find that the transverse amplitudes agree well with the experimental
values. The longitudinal amplitude is found to be almost vanishing. Since model
$\mathcal A$ cannot account for a resonance in this energy region, no results
are given in this case.
Both models are able to reproduce the lowest $J=5/2$ $\Delta$ resonance with
positive parity. The predictions for the helicity amplitudes of the
$F_{35}(1905)$ are shown in Fig.~\ref{TRANSFF_A_F35_1905}.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-M2}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-M4}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Merten-M2}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-M4}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$}
\psfrag{Merten-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_F35_1905.eps}
\caption{
Comparison of the $F_{35}(1905)$ transverse and longitudinal helicity
amplitudes $A_{1/2}^N$, $A_{3/2}^N$ and $S_{1/2}^N$ calculated in model
$\mathcal C$ (solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). See
also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A_F35_1905}}
\end{figure}
Both models can account very well for the PDG-data at the photon point for
the $A_{1/2}$ transverse amplitudes, but the
$A_{3/2}$ amplitude is found with a sign opposite to that of the data.
As for the previously discussed resonance, the results for the longitudinal
amplitudes turn out to be very small.
\paragraph{The $J=7/2$ resonances:}
For $J=7/2$ there exists only one four-star resonance, the $F_{37}(1950)$. The predictions
of the corresponding transverse and longitudinal helicity amplitudes are shown
in Fig.~\ref{TRANSFF_A_F37_1950}\,.
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Anisovich-M2}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{1/2}$}
\psfrag{Anisovich-M4}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}, $A_{3/2}$}
\psfrag{PDG-M2}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{1/2}$}
\psfrag{PDG-M4}[r][r]{\scriptsize PDG~\cite{PDG}, $A_{3/2}$}
\psfrag{Merten-M2}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-M4}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$}
\psfrag{Merten-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_F37_1950.eps}
\caption{
Comparison of the $F_{37}(1950)$ transverse and longitudinal helicity
amplitudes $A_{1/2}^N$, $A_{3/2}^N$ and $S_{1/2}^N$ calculated in model
$\mathcal C$ (solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). See
also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_A_F37_1950}
}
\end{figure}
Here the predicted transverse amplitudes are much too small to account
for the experimental photon couplings.
\paragraph{The $J=11/2$ resonances:}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$A_{\tfrac{1}{2}}^N/A_{\tfrac{3}{2}}^N/S_{\tfrac{1}{2}}^N(Q^2)\,[10^{-3}\textrm{GeV}^{-\tfrac{1}{2}}]$}
\psfrag{Merten-M2}[r][r]{\scriptsize model $\mathcal A$, $A_{1/2}$}
\psfrag{Merten-M4}[r][r]{\scriptsize model $\mathcal A$, $A_{3/2}$}
\psfrag{Merten-S}[r][r]{\scriptsize model $\mathcal A$, $S_{1/2}$}
\psfrag{Gauss-M2}[r][r]{\scriptsize model $\mathcal C$, $A_{1/2}$}
\psfrag{Gauss-M4}[r][r]{\scriptsize model $\mathcal C$, $A_{3/2}$}
\psfrag{Gauss-S}[r][r]{\scriptsize model $\mathcal C$, $S_{1/2}$}
\includegraphics[width=\linewidth]{TRANSFF_H311_2420.eps}
\caption{
Prediction of the $H_{3\,11}(2420)$ transverse and longitudinal helicity
amplitudes $A_{1/2}^N$, $A_{3/2}^N$ and $S_{1/2}^N$ calculated in model
$\mathcal C$ (solid and dashed-dotted line) and model $\mathcal A$ (dashed lines). See
also caption to Fig.~\ref{TRANSFF_A12_S11_1535}.
\label{TRANSFF_H311_2420}
}
\end{figure}
Fig.~\ref{TRANSFF_H311_2420} shows the prediction of the transverse and longitudinal
helicity amplitudes of the $\Delta_{11/2^+}(2420)$ resonance.
The amplitudes found in model $\mathcal C$ are slightly smaller than those in model
$\mathcal A$. In both cases the longitudinal amplitude virtually vanishes.
\subsection{Photon couplings\label{PhotonCoupl}}
\begin{table*}[ht!]
\centering
\caption{
Transverse photon couplings calculated for $N\to N^\ast$ transitions in model
$\mathcal A$ and $\mathcal C$ in comparison to experimental data.
All calculated photon couplings were determined by calculating the helicity
amplitudes at $Q^2=10^{-4}\,\textrm{GeV}^2$ close to the photon point.
A hyphen indicates that data do not exist. All amplitudes are
in units of $10^{-3}\textrm{GeV}^{-1/2}$\,, all masses are given in MeV.
The references~\cite{Penner,Barbour1978} do not quote errors.
\label{tab:PhotonCoupl1}}
\begin{tabular}{lc|cc|c*{4}{r@{\hspace*{5pt}}}|*{2}{c@{\hspace*{5pt}}}r}\toprule
State & & \multicolumn{2}{c|}{Mass}& & \multicolumn{2}{c@{\hspace*{5pt}}}{Model $\mathcal A$} & \multicolumn{2}{c@{\hspace*{5pt}}}{Model $\mathcal C$} & \multicolumn{2}{|c@{\hspace*{5pt}}}{Exp.} & Ref. \\
& Rat. & model $\mathcal A$ & model $\mathcal C$ & Ampl. & $p$ & $n$ & $p$ & $n$ & $p$ & $n$ & \\\midrule
$S_{11}(1535)$ & **** & 1417 & 1475 & $A_{\nicefrac{1}{2}}$ & 111.68 & -74.75 & 85.93 & -54.96 & 90$\pm$30& -46$\pm$27 & \cite{PDG} \\
$S_{11}(1650)$ & **** & 1618 & 1681 & $A_{\nicefrac{1}{2}}$ & 2.55 & -16.03 & -4.56 & -6.86 & 53$\pm$16& -15$\pm$21 & \cite{PDG} \\
\multirow{2}{*}{$S_{11}(1895)$} & \multirow{2}{*}{**} & 1872 & 1839 & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$}& 43.36&-23.93&52.71&-29.01&\multirow{2}{*}{12$\pm$6}&\multirow{2}{*}{--}&\multirow{2}{*}{\cite{Anisovich_3}} \\
& & 1886 & 1882 & & 38.95 & -18.44 & 17.18 & -8.27 & & & \\\midrule
$P_{11}(1440)$ & **** & 1498 & 1430 & $A_{\nicefrac{1}{2}}$ & 33.51 & -18.68 & 33.10 & -17.43 & -60$\pm$4 & 40$\pm$10 & \cite{PDG} \\
$P_{11}(1710)$ & *** & 1700 & 1712 & $A_{\nicefrac{1}{2}}$ & 58.36 & -30.59 & 30.95 & -13.57 & 24$\pm$10& -2$\pm$14 & \cite{PDG} \\
$P_{11}(1880)$ & ** & 1905 & 1872 & $A_{\nicefrac{1}{2}}$ & 24.35 & -15.55 & 24.44 & -11.87 & 14$\pm$3 & -- & \cite{Anisovich_3} \\\midrule
$P_{13}(1720)$ & **** & 1655 & 1690 & $A_{\nicefrac{1}{2}}$ & 81.69 & -33.06 & 50.28 & -22.56 & 18$\pm$30 & 1$\pm$15 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & -26.24 & 11.76 &-17.10 & 2.69 & -19$\pm$20& -29$\pm$61 & \cite{PDG} \\
\multirow{2}{*}{$P_{13}(1900)$} & \multirow{2}{*}{***} & 1859 & \multirow{2}{*}{1840} & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & 5.06 & 3.17 & \multirow{2}{*}{2.31} & \multirow{2}{*}{5.17} & \multirow{2}{*}{26$\pm$15/-17} & \multirow{2}{*}{--/-16} & \multirow{2}{*}{\cite{Anisovich_3}/\cite{Penner}} \\
& & 1894 & & & 12.58 & -14.53 & & & & & \\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & 2.29 & 18.15 & \multirow{2}{*}{4.03} & \multirow{2}{*}{13.79} & \multirow{2}{*}{-65$\pm$30/31} & \multirow{2}{*}{--/-2} & \multirow{2}{*}{\cite{Anisovich_3}/\cite{Penner}} \\
& & & & & 6.49 & -14.90 & & & & & \\\midrule
$D_{13}(1520)$ & **** & 1453 & 1520 & $A_{\nicefrac{1}{2}}$ & -54.80 & 2.47 &-39.39 & 0.65 & -24$\pm$9 & -59$\pm$9 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & 48.45 & -52.27& 32.80 & -31.64 & 150$\pm$15&-139$\pm$11 & \cite{PDG} \\
$D_{13}(1700)$ & *** & 1573 & 1686 & $A_{\nicefrac{1}{2}}$ & -20.69 & 16.52 &-10.16 & 10.65 & -18$\pm$13& 0$\pm$50 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & -5.45 & 38.89 & -7.08 & 26.42 & -2$\pm$24& -3$\pm$44 & \cite{PDG} \\
\multirow{2}{*}{$D_{13}(1875)$} & \multirow{2}{*}{***} & 1896 & 1849 & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & 49.87 & -19.04 & 42.29 & -13.71 & 18$\pm$10/-20$\pm$8& \multirow{2}{*}{7$\pm$13} & \cite{Anisovich_3}/\cite{Awaji}/\cite{PDG}/ \\
& & 1920 & 1921 & & 1.62 & -6.73 & -3.72 & -6.76 & 12/26$\pm$52 & & \cite{Penner}/\cite{Devenish1974}\\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & -20.86 & 13.11 & -21.46 & 10.17 & -9$\pm$5/17$\pm$11& \multirow{2}{*}{-53$\pm$34} & \cite{Anisovich_3}/\cite{Awaji}/\cite{PDG}/ \\
& & & & & -5.78 & -2.38 & 0.64 & -4.27 & -10/128$\pm$57 & & \cite{Penner}/\cite{Devenish1974}\\\midrule
$D_{15}(1675)$ & **** & 1623 & 1678 & $A_{\nicefrac{1}{2}}$ & 3.74 & -25.80 & 6.16 & -19.91 & 19$\pm$8 & -43$\pm$12 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & 5.39 & -36.41 & -1.36 & -22.98 & 15$\pm$9 & -58$\pm$13 & \cite{PDG} \\
\multirow{2}{*}{$D_{15}(2060)$} & \multirow{2}{*}{**} & 1935 & 1922 & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & 50.63 & -28.09 & 26.71 & -16.48 & \multirow{2}{*}{65$\pm$12} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & 2063 & 2017 & & 0.83 & -14.53 & 2.74 & -12.84 & \\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & -17.97 & 10.01 & -8.99 & 2.06 & \multirow{2}{*}{55$^{+15}_{-35}$} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & & & & 1.35 & -20.16 & -2.92 & -17.67 & & & \\\midrule
$F_{15}(1680)$ & **** & 1695 & 1734 & $A_{\nicefrac{1}{2}}$ & -45.91 & 32.65 &-29.98 & 22.25 & -15$\pm$6 & 29$\pm$10 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & 42.16 & -12.85 & 24.10 & -6.95 & 133$\pm$12& -33$\pm$9 & \cite{PDG} \\
\multirow{2}{*}{$F_{15}(1860)$} & \multirow{2}{*}{**} & 1892 & 1933 & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & -9.86 & -11.41 & 1.22 & -13.86 & \multirow{2}{*}{20$\pm$12} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & 1918 & 1978 & & -5.33 & 17.12 & -5.41 & 4.31 & & & \\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & -0.41 & -23.28 & -0.60 & -11.28 & \multirow{2}{*}{50$\pm$20} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & & & & -5.34 & 6.48 & -2.21 & -2.67 & & & \\
\multirow{2}{*}{$F_{15}(2000)$} & \multirow{2}{*}{**} & \multirow{2}{*}{2082} & 1978 & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & \multirow{2}{*}{-0.05} & \multirow{2}{*}{0.59} & -5.41 & 4.31 & \multirow{2}{*}{35$\pm$15} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & & 2062 & & & & 32.96 & -21.35 & & & \\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & \multirow{2}{*}{-0.02} & \multirow{2}{*}{0.61} & -2.21 & -2.70 & \multirow{2}{*}{50$\pm$14} & \multirow{2}{*}{--} & \multirow{2}{*}{\cite{Anisovich_3}} \\
& & & & & & &-16.72 & 6.06 & & & \\\midrule
\multirow{2}{*}{$F_{17}(1990)$} & \multirow{2}{*}{**} & \multirow{2}{*}{1954} & \multirow{2}{*}{1997} & \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & \multirow{2}{*}{-2.98} & \multirow{2}{*}{-9.19} &\multirow{2}{*}{-3.94} & \multirow{2}{*}{-3.22} & 42$\pm$14/30$\pm$29 & --/-1 & \cite{Anisovich_3}/\cite{Awaji} \\
& & & & & & & & & 40 & -69 & \cite{Barbour1978} \\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & \multirow{2}{*}{-3.96} & \multirow{2}{*}{-11.81} & \multirow{2}{*}{0.39} & \multirow{2}{*}{-5.88} & 58$\pm$12/86$\pm$60& --/-178 & \cite{Anisovich_3}/\cite{Awaji} \\
& & & & & & & & & 4 & -72 & \cite{Barbour1978} \\\midrule
$G_{17}(2190)$ & **** & 1986 & 1980 & $A_{\nicefrac{1}{2}}$ & -27.72 & 8.47 & -12.42 & 2.69 & -65$\pm$8 & -- & \cite{Anisovich_3} \\
& & & & $A_{\nicefrac{3}{2}}$ & 19.04 & -13.45 & 8.80 & -6.48 & 35$\pm$17 & -- & \cite{Anisovich_3} \\\midrule
$G_{19}(2250)$ & **** & 2181 & 2169 & $A_{\nicefrac{1}{2}}$ & 1.26 & -11.16 & 2.18 & -6.65 & $|A^p_{\nicefrac{1}{2}}|<10$ & -- & \cite{Anisovich_3} \\
 & & & & $A_{\nicefrac{3}{2}}$ & 1.64 & -13.70 & -0.27 & -6.54 & $|A^p_{\nicefrac{3}{2}}|<10$ & -- & \cite{Anisovich_3} \\\midrule
$H_{19}(2220)$ & **** & 2183 & 2159 & $A_{\nicefrac{1}{2}}$ & 22.06 & -13.65 & 10.63 & -6.93 & $|A^p_{\nicefrac{1}{2}}|<10$ & -- & \cite{Anisovich_3} \\
 & & & & $A_{\nicefrac{3}{2}}$ & -17.40 & 7.04 & -7.78 & 3.05 & $|A^p_{\nicefrac{3}{2}}|<10$ & -- & \cite{Anisovich_3} \\\midrule
$I_{1,11}(2600)$& *** & 2394 & 2342 & $A_{\nicefrac{1}{2}}$ & 14.06 & -5.45 & -5.56 & -1.89 & -- & -- & -- \\
& & & & $A_{\nicefrac{3}{2}}$ & -10.07 & 5.97 & -4.07 & 2.46 & -- & -- & -- \\\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[ht!]
\centering
\caption{
Transverse photon couplings calculated for $N\to\Delta$ transitions in model
$\mathcal A$ and $\mathcal C$ in comparison to experimental data.
All calculated photon couplings were determined by calculating the helicity
amplitudes at $Q^2=10^{-4}\,\textrm{GeV}^2$ close to the photon point.
A hyphen indicates that data do not exist. All amplitudes are
in units of $10^{-3}\textrm{GeV}^{-1/2}$\,, all masses are given in MeV.
Reference~\cite{Penner} does not quote errors.
\label{tab:PhotonCoupl2}}
\begin{tabular}{lc|cc|c*{2}{r@{\hspace*{5pt}}}|c@{\hspace*{5pt}}r}\toprule
State & & \multicolumn{2}{c|}{Mass}& & Model $\mathcal A$ & Model $\mathcal C$ & Exp. & Ref. \\
& Rat. & model $\mathcal A$ & model $\mathcal C$ & Ampl. & & & & \\\midrule
$S_{31}(1620)$ & **** & 1620 & 1636 & $A_{\nicefrac{1}{2}}$ & 16.63 & 15.33 & 27$\pm$11 & \cite{PDG} \\
$S_{31}(1900)$ & ** & -- & 1956 & $A_{\nicefrac{1}{2}}$ & -- & -1.43 & 59$\pm$16/29$\pm$8/-4$\pm$16 & \cite{Anisovich_3}/\cite{Awaji}/\cite{Crawford} \\\midrule
$P_{31}(1750)$ & * & -- & 1765 & $A_{\nicefrac{1}{2}}$ & -- & 6.27 & 53 & \cite{Penner} \\
$P_{31}(1910)$ & **** & 1829/1869 & 1892 & $A_{\nicefrac{1}{2}}$ & 2.38/0.69& 1.98 & 3$\pm$14 & \cite{PDG} \\\midrule
$P_{33}(1232)$ & **** & 1233 & 1231 & $A_{\nicefrac{1}{2}}$ & -93.23 & -68.08 & -135$\pm$6 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ &-158.61 &-122.08 & -250$\pm$8 & \cite{PDG} \\
$P_{33}(1600)$ & *** & -- & 1596 & $A_{\nicefrac{1}{2}}$ & -- & -14.98 & -23$\pm$20 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & -- & -35.24 & -9$\pm$21 & \cite{PDG} \\
\multirow{2}{*}{$P_{33}(1920)$} & \multirow{2}{*}{***} & \multirow{2}{*}{1834/1912}& \multirow{2}{*}{1899/1932}& \multirow{2}{*}{$A_{\nicefrac{1}{2}}$} & \multirow{2}{*}{20.89/1.79} & \multirow{2}{*}{14.89/11.90} & 130$^{+30}_{-60}$/40$\pm$14/ & \cite{Anisovich_3}/\cite{Awaji}/ \\
& & & & & & & 22$\pm$8/-7 & \cite{Horn}/\cite{Penner}\\
& & & & \multirow{2}{*}{$A_{\nicefrac{3}{2}}$} & \multirow{2}{*}{-18.56/-0.58} & \multirow{2}{*}{1.36/9.16}& -115$^{+25}_{-50}$/23$\pm$17/ & \cite{Anisovich_3}/\cite{Awaji}/ \\
& & & & & & & 42$\pm$12/-1 & \cite{Horn}/\cite{Penner} \\\midrule
$D_{33}(1700)$ & **** & 1594 & 1600 & $A_{\nicefrac{1}{2}}$ & 64.99 & 63.39 & 104$\pm$15 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & 67.25 & 71.47 & 85$\pm$22 & \cite{PDG} \\
$D_{33}(1940)$ & ** & -- & 1895/1959 & $A_{\nicefrac{1}{2}}$ & -- & -16.86/-14.98 & -36$\pm$58/160$\pm$40 & \cite{Awaji}/\cite{Horn} \\
& & & & $A_{\nicefrac{3}{2}}$ & -- & -12.56/-27.19 & -31$\pm$12/110$\pm$30 & \cite{Awaji}/\cite{Horn} \\\midrule
$D_{35}(1930)$ & *** & -- & 2022 & $A_{\nicefrac{1}{2}}$ & -- & -7.27 & -9$\pm$28 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & -- & -19.49 & -18$\pm$28 & \cite{PDG} \\\midrule
$F_{35}(1905)$ & **** & 1860 & 1896 & $A_{\nicefrac{1}{2}}$ & 18.46 & 12.42 & 26$\pm$11 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & 41.22 & 23.54 & -45$\pm$20 & \cite{PDG} \\\midrule
$F_{37}(1950)$ & **** & 1918 & 1934 & $A_{\nicefrac{1}{2}}$ & -24.80 & -14.22 & -76$\pm$12 & \cite{PDG} \\
& & & & $A_{\nicefrac{3}{2}}$ & -31.94 & -18.62 & -97$\pm$10 & \cite{PDG} \\\midrule
$H_{39}(2420)$ & **** & 2399 & 2363 & $A_{\nicefrac{1}{2}}$ & 11.62 & 4.92 & -- & -- \\
& & & & $A_{\nicefrac{3}{2}}$ & 13.78 & 5.90 & -- & -- \\
\bottomrule
\end{tabular}
\end{table*}
In tables~\ref{tab:PhotonCoupl1} and~\ref{tab:PhotonCoupl2} we have summarised the results for the photon
decay amplitudes, as partially discussed already in
subsections~\ref{NNHelAmpl} and \ref{NDeltaHelAmpl}\,. These tables also list
the available experimental data.
Most of the decay amplitudes can be accounted for quite satisfactorily. In
general, no large differences between the two models are found.
For some amplitudes of resonances with higher angular momentum no
experimental data are available to our knowledge.
\subsection{The nucleon$\,\to\Delta(1232)$ transition form factors\label{MagnTransFF}}
The $N\to\Delta$ electric and magnetic transition form factors between
the ground-state nucleon and the $P_{33}(1232)$ state are related to
the helicity amplitudes by Eqs.~(\ref{TransFF_eq6a}) and~(\ref{TransFF_eq6b})
\begin{subequations}
\begin{align}
G^{\ast}_E(Q^2)
:=
&
F(Q^2)\left(\tfrac{1}{\sqrt{3}}A^N_{\nicefrac{3}{2}}-A^N_{\nicefrac{1}{2}}\right)\label{TransFF_eq6a},\!\!
\\
G^{\ast}_M(Q^2)
:=
&
F(Q^2)\left(\sqrt{3}A^N_{\nicefrac{3}{2}}+A^N_{\nicefrac{1}{2}}\right)\label{TransFF_eq6b},
\end{align}
\end{subequations}
respectively, where $F(Q^2)$ is a kinematical pre-factor defined as
\begin{align}
F(Q^2) =
-\sqrt{\frac{M_N}{4\pi\alpha}\frac{M_\Delta^2-M_N^2}{2M_\Delta^2}}\frac{M_N}{|\vec
k|}
\label{TransFF_eq7}
\end{align}
in the notation of Ash \textit{et al.}~\cite{Ash1967}.
Furthermore, for the sake of completeness the Coulomb-transition form factor is
given by
\begin{align}
G^{\ast}_C(Q^2)=-2\frac{M_\Delta}{|\vec k|}F(Q^2)\sqrt{2}\,S_{\nicefrac{1}{2}}^N\,. \label{TransFF_eq8}
\end{align}
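For illustration, the relations~(\ref{TransFF_eq6a}), (\ref{TransFF_eq6b}) and~(\ref{TransFF_eq7}) can be evaluated numerically as in the following sketch. The masses, amplitude values and photon three-momentum $|\vec k|$ used below are placeholder inputs chosen for the example, not results of either model, and $|\vec k|$ is deliberately left as a caller-supplied parameter.

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def prefactor(m_n, m_delta, k_abs):
    """Kinematical pre-factor F(Q^2) of the text; the photon
    three-momentum |k| must be supplied by the caller (all in GeV)."""
    return (-math.sqrt(m_n / (4.0 * math.pi * ALPHA)
                       * (m_delta**2 - m_n**2) / (2.0 * m_delta**2))
            * m_n / k_abs)

def transition_form_factors(a12, a32, m_n, m_delta, k_abs):
    """Electric and magnetic N -> Delta transition form factors
    from the transverse helicity amplitudes A_1/2 and A_3/2."""
    f = prefactor(m_n, m_delta, k_abs)
    g_e = f * (a32 / math.sqrt(3.0) - a12)   # G*_E
    g_m = f * (math.sqrt(3.0) * a32 + a12)   # G*_M
    return g_e, g_m
```

Note the structural check built into the formulas: for $A_{\nicefrac{3}{2}}=\sqrt{3}\,A_{\nicefrac{1}{2}}$ the electric form factor vanishes identically.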
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$\frac{G^{\ast p}_M}{3G_D}(Q^2)$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}}
\psfrag{PDG}[r][r]{\scriptsize PDG~\cite{PDG}}
\psfrag{Bartel}[r][r]{\scriptsize Bartel~\cite{Bartel}}
\psfrag{Stein}[r][r]{\scriptsize Stein~\cite{Stein}}
\psfrag{Frolov}[r][r]{\scriptsize Frolov~\cite{Frolov}}
\psfrag{Foster}[r][r]{\scriptsize Foster~\cite{Foster}}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}}
\psfrag{Villano}[r][r]{\scriptsize Villano~\cite{Villano2009}}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$}
\includegraphics[width=\linewidth]{Magnetic_DipoleTRANSFF.eps}
\caption{Comparison of $\Delta(1232)$ magnetic transition form factor
$G^{\ast p}_M$ calculated within model $\mathcal C$ (solid line)
and model $\mathcal A$ (dashed line). See also caption
to Fig.~\ref{TRANSFF_A12_S11_1535}.\label{MagneticTRANSFF}}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G^{\ast p}_E(Q^2)$}
\psfrag{Anisovich}[r][r]{\scriptsize Anisovich~\cite{Anisovich_3}}
\psfrag{PDG}[r][r]{\scriptsize PDG~\cite{PDG}}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$}
\includegraphics[width=\linewidth]{ElectricTRANSFF.eps}
\caption{Comparison of $\Delta(1232)$ electric transition form factor
$G^{\ast p}_E$ calculated within model $\mathcal C$ (solid line) and
model $\mathcal A$ (dashed line). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.\label{ElectricTRANSFF}}
\end{figure}
\begin{figure}[ht!]
\centering
\psfrag{x-axis}[c][c]{$Q^2\,[\textrm{GeV}^2]$}
\psfrag{y-axis}[c][c]{$G^{\ast p}_C(Q^2)$}
\psfrag{Maid}[r][r]{\scriptsize MAID~\cite{Drechsel,Tiator}}
\psfrag{Tiator}[r][r]{\scriptsize Fit: Tiator~\cite{Tiator2011}}
\psfrag{Merten}[r][r]{\scriptsize model $\mathcal A$}
\psfrag{Gauss}[r][r]{\scriptsize model $\mathcal C$}
\includegraphics[width=\linewidth]{CoulombTRANSFF.eps}
\caption{Comparison of $\Delta(1232)$ Coulomb transition form factor
$G^{\ast p}_C$ calculated within model $\mathcal C$ (solid line)
and model $\mathcal A$ (dashed line). See also caption to
Fig.~\ref{TRANSFF_A12_S11_1535}.\label{CoulombTRANSFF}}
\end{figure}
In Fig.~\ref{MagneticTRANSFF} the calculated magnetic transition form
factor divided by thrice the standard dipole form factor is compared to
experimental data and analyses. This representation enhances the discrepancies
between the calculated and experimental results: although model $\mathcal{A}$
still gives a fair description at larger momentum transfers, albeit in general
too small, model $\mathcal{C}$ yields too large values in this regime.
In both models the values at low momenta are too small, a
discrepancy which this calculation shares with virtually all calculations within
a constituent quark model. Usually this is regarded as an indication of
effects due to the coupling to pions. In Fig.~\ref{ElectricTRANSFF} we also present
the corresponding electric transition form factor. Only model $\mathcal C$ agrees
with the PDG-data~\cite{PDG} and the MAID-analysis~\cite{Drechsel,Tiator},
whereas model $\mathcal A$ even has the wrong sign.
In model $\mathcal{A}$ we recalculated the form factor with a higher numerical
accuracy than was done by Merten \textit{et al.} in~\cite{Merten}.
The Coulomb transition form factor is displayed in
Fig.~\ref{CoulombTRANSFF}\,. Although the calculated result in model
$\mathcal{C}$ is significantly larger than in model $\mathcal{A}$\,, both are too
small to account for the data from the
MAID-analysis~\cite{Drechsel,Tiator,Tiator2011}\,.
\section{Summary and conclusion\label{Summary}}
In this paper we supplement the investigation of a novel spin-flavour dependent
interaction within the framework of a relativistically covariant constituent
quark model for the structure of baryon resonances, as presented in~\cite{Ronniger}
by a calculation of helicity amplitudes for electro-excitation of the
nucleon. The calculational framework for the computation of current-matrix
elements is the same as presented by Merten \textit{et al.}~\cite{Merten}.
In the current contribution the Salpeter-amplitudes were obtained in a calculation of the
baryon mass spectra, see~\cite{Ronniger}, where, in addition to confinement and
a spin-flavour dependent interaction motivated by instanton effects as used in
an older version called model $\mathcal A$\,, a phenomenological short ranged
spin-flavour dependent interaction was introduced in order to improve in particular upon
the description of some excited negative parity $\Delta$ resonances at
$\approx$ 1.9 GeV; this novel version of the model is called $\mathcal{C}$\,.
On the basis of these amplitudes the current-matrix elements relevant for the
helicity amplitudes were calculated in lowest order without introducing any
additional parameters. In the course of the investigations within the novel
version of the relativistic quark model $\mathcal{C}$ an improved parameter
set was found as discussed in sec.~\ref{ImprovedModel}\,. The modification
affects mainly the neutron form factor which now rather accurately reproduces
the experimental data.
The calculated results are compared to experimental data as far as available
for resonances with a three or four star rating according to the
PDG~\cite{PDG}\,. The experimental data comprise the couplings at the photon
point from PDG~\cite{PDG} and~\cite{Anisovich_3} as well as recent determinations
of transverse and longitudinal amplitudes as reported by
Aznauryan~\cite{Aznauryan05_1,Aznauryan05_2,Aznauryan2009,Aznauryan2012} and
in the MAID-analysis~\cite{Drechsel,Tiator}, see also~\cite{Tiator2011}\,.
The results for the helicity amplitudes of nucleon resonances can be summarised
as follows:
\begin{itemize}
\item
A satisfactory description of the data for the $S_{11}(1535)$, $P_{11}(1440)$,
$D_{13}(1520)$ and $F_{15}(1680)$ resonances was found. Exceptions are: a
node in the transverse $P_{11}(1440)$-amplitude as found experimentally was
not reproduced by the calculations; we also do not find the observed minimum
in the longitudinal $S_{11}(1535)$-amplitude, and the calculations
underestimate the transverse $S_{11}(1650)$- as well as the longitudinal
$F_{15}(1680)$-amplitude for low momentum transfers. Also the amplitudes of
the $D_{13}(1520)$ resonance are slightly too small in model $\mathcal C$.
Furthermore, predictions of helicity amplitudes are given for higher excited resonances
for both models. Some of these were only recently found~\cite{Anisovich_1,Anisovich_2,Anisovich_3},
\textit{e.g.} the $N_{1/2}^+(1880)$ and $N_{1/2}^-(1895)$ resonances.
\end{itemize}
For the nucleon-$\Delta$ transition helicity amplitudes, as discussed in
sec.~\ref{NDeltaHelAmpl}, we note that:
\begin{itemize}
\item
There exists agreement with the scarce data for the $S_{31}(1620)$ amplitude
if we disregard two data points for the longitudinal amplitude. There is an
indication of a sign disagreement between the data of Aznauryan \textit{et al.}~\cite{Aznauryan05_2}
and those of the MAID-analysis~\cite{Drechsel,Tiator}; alternatively, a node
exists in the amplitude which is reproduced by neither model in this case.
\item
The $P_{33}(1232)$ helicity amplitudes are generally underestimated by both
models, slightly more so in model $\mathcal C$\,. For the longitudinal
amplitude in particular we find a maximum in the theoretical curves for which
there exists no experimental evidence.
\item
Predictions of the negative parity excited $\Delta^{\ast}(1900,$ $1940,$ $1930)$ helicity amplitudes
can be made in model $\mathcal C$. The position of these states cannot be
reproduced in the original model $\mathcal{A}$ and was the main motivation to
supplement the dynamics of the model by an additional short-ranged
spin-flavour dependent interaction. It is rewarding that the calculated photon
decay amplitudes agree reasonably well with the PDG data for these three
resonances.
\end{itemize}
In addition we presented predictions for helicity amplitudes of some lower
rated resonances, such as $P_{31}(1750)$ and $D_{33}(1940)$,
as well as predictions for some photon decay amplitudes analysed by the CB-ELSA
collaboration~\cite{Anisovich_3}. The corresponding photon decay
amplitude data from the CB-ELSA collaboration are presently mostly included
in the new PDG-data and thus only occasionally listed explicitly
in tables~\ref{tab:PhotonCoupl1} and~\ref{tab:PhotonCoupl2}. For the nucleon photon
decay amplitudes displayed in table~\ref{tab:PhotonCoupl1} we found that model
$\mathcal C$ reproduces the data for $A_{1/2}^p$ slightly better than model
$\mathcal A$\,. The $A_{3/2}^p$ decay amplitudes in both models
are in general too small. Analysing table~\ref{tab:PhotonCoupl2} for
$\Delta$-transition amplitudes we find a slightly better agreement for
model $\mathcal A$ than for model $\mathcal C$\,.
For the magnetic form factor of the $\Delta(1232)$-$N$ transition we have
found that both models cannot accurately account for the data, the
calculated values being too small in model $\mathcal A$
generally and in model $\mathcal C$ for $Q^2 < 1.5\, \textnormal{GeV}^2$.
With the MAID-data~\cite{Drechsel,Tiator} and the fits reported
in~\cite{Tiator2011} it also becomes possible to make
a statement on the electric transition form factor which is
better reproduced by model $\mathcal C$\,. Model $\mathcal A$ even produces
a wrong sign compared to the MAID-data. The momentum dependence of the
Coulomb transition form factor is well described by model $\mathcal C$, but
too small in magnitude by more than a factor 3. Model
$\mathcal A$ yields almost vanishing values in this case.
The corresponding elastic nucleon form factor calculations were already
presented in~\cite{Ronniger}.
Although the overall agreement of calculated and experimental helicity data
in both versions $\mathcal A$ and $\mathcal C$ of the relativistic quark model
is of similar quality, the new model $\mathcal C$, apart from accounting better
for the baryon mass spectrum, also improves on specific observables such as
the ground-state form factors.
\section*{Acknowledgements}
We gratefully acknowledge the assistance of Simon T\"olle in optimising the
program-code. We are indebted to the referee for a multitude of valuable
comments and suggestions.
Realization theory of stochastic LTI state-space
representations with exogenous inputs is a mature
theory \cite{LindquistBook,Katayama:05}. In particular,
there is a constructive procedure for obtaining a
minimal stochastic LTI state-space representation of
a process $\mathbf{y}$ with exogenous input $\mathbf{w}$. The construction uses geometric ideas, and it is based on oblique projection of future outputs onto past inputs and outputs.
Note that in system identification it is
often assumed that $(\mathbf{y},\mathbf{w})$ has a realization by an autonomous stochastic LTI system driven by white noise.
Indeed, if $\mathbf{w}$ has a realization by a stochastic LTI
state-space representation, and $\mathbf{y}$ has a realization
by an LTI state-space representation with exogenous input
$\mathbf{w}$, then under some mild assumptions
(absence of feedback from $\mathbf{y}$ to $\mathbf{w}$)
$(\mathbf{y},\mathbf{w})$ will be the output of an autonomous stochastic LTI state-space representation. It is then natural
to ask how to construct a minimal stochastic
LTI realization of $\mathbf{y}$ with input $\mathbf{w}$, from an LTI realization of the joint process $(\mathbf{y},\mathbf{w})$, instead of
computing a realization of $\mathbf{y}$ using oblique projections.
In this paper we present an explicit construction
of a minimal stochastic LTI state-space representation
of $\mathbf{y}$ with an exogenous input $\mathbf{w}$ from an autonomous stochastic LTI state-space representation of the joint process $(\mathbf{y},\mathbf{w})$. The basic idea is as follows: we will assume that there is no feedback from $\mathbf{y}$ to $\mathbf{w}$ and then use
the result of \cite{jozsa2018relationship} stating that
there exists a minimal LTI realization of $(\mathbf{y},\mathbf{w})$
the system matrices of which admit a block upper-triangular structure.
An explicit construction of a realization of $\mathbf{y}$ with
input $\mathbf{w}$ could be useful for several reasons: it could be
useful in proofs, and it has the potential to provide
an alternative to existing system identification algorithms. There are many subtleties in the consistency
analysis of subspace identification algorithms with inputs
\cite{LindquistBook,Katayama:05,chiuso2004ill}, so an alternative approach involving the identification of
an autonomous model of $(\mathbf{y},\mathbf{w})$ could be advantageous
in some cases.
Our motivation for developing an explicit construction of
an LTI realization of $\mathbf{y}$ with input $\mathbf{w}$ from a LTI realization of $(\mathbf{y},\mathbf{w})$ was that
this construction turned out to be useful in providing non-asymptotic error bounds of PAC-Bayesian type \cite{alquier-15} for LTI systems \cite{Arxiv}. The latter could be a first step towards extending the PAC-Bayesian framework for stochastic state-space representations, in particular, to recurrent neural networks.
More precisely, one of the byproducts of the construction of this paper is a one-to-one relationship between LTI
systems which generate $(\mathbf{y},\mathbf{w})$ and optimal linear predictors of future values of $\mathbf{y}$ based on past values of $\mathbf{w}$. The latter predictor is just the deterministic part of a realization of $\mathbf{y}$ with input $\mathbf{w}$ in forward
innovation form.
That is, finding a realization of $(\mathbf{y},\mathbf{w})$ boils down
to finding the best predictor in a parametrized family of
predictors. This is done by minimizing the empirical
prediction error. In \cite{Arxive} PAC-Bayesian
error bounds are formulated relating the expected
prediction error and the empricial
prediction error for each predictor. In turn, the derivation of these error bounds require the use of LTI state-space representations of $(\mathbf{y},\mathbf{w})$ and of minimal
LTI realizations of $\mathbf{y}$ with input $\mathbf{w}$ in forward innovation form.
The contribution of the paper can also be viewed
as follows.
We wish to construct an estimator of $\mathbf{y}(t)$ given past $(s<t)$ and present $(s=t)$ measurements of $\mathbf{w}(s)$. We consider a specific class of relationships, namely when the two processes are related by a common stochastic linear time-invariant (LTI) state-space system, i.e. $[\mathbf{y}^T(t)\;\mathbf{w}^T(t)]^T$ is an output of an LTI system. The problem of finding this estimator can also be thought of as trying to estimate non-measurable quantities of a system from measurable quantities.
\paragraph*{Related work}
As it was pointed out above, stochastic realization theory
with inputs is a mature topic with several publications,
see the monographs \cite{LindquistBook,Katayama:05,CainesBook} and
the references therein. However, we have not found in the literature an explicit procedure for constructing a stochastic LTI state-space realization in forward innovation form of $\mathbf{y}$ with input $\mathbf{w}$ from the joint stochastic LTI state-space realization of $(\mathbf{y},\mathbf{w})$. The current note is intended to fill this gap.
We will need to further analyse the relationship between $\mathbf{y}$ and $\mathbf{w}$ under a feedback-free assumption. In \cite{GRANGER196328}, the author defines what it means for one process to cause another, a notion similar to feedback. In \cite{CainesFeedback}, the authors further extend this notion and define weakly and strongly feedback-free processes. As the strong feedback-free condition implies the weak one, we consider the relaxed case of weakly feedback-free processes throughout the paper. Using causal real rational transfer function matrices to describe the processes $\mathbf{y}$ and $\mathbf{w}$, and analysing these processes under the feedback-free assumption, yields a straightforward construction of an estimator of $\mathbf{y}$ given $\mathbf{w}$ in the frequency domain, see \cite{feedbackProcesses} and \cite{Gevers1982OnJS}.
In this paper we study this problem in time domain, using LTI state-space representations.
\paragraph*{Outline}
This paper is organised as follows. Below we start by defining the notation and terminology used in this paper; then, in Section \ref{sec:sec1}, we reformulate the state-space system driven by the innovation of $[\mathbf{y}^T(t)\;\mathbf{w}^T(t)]^T$ into a state-space system, which yields a realisation of $\mathbf{y}$, driven by $\mathbf{w}$ and the innovation of the purely non-deterministic part of $\mathbf{y}$. Afterwards, in Section \ref{sec:OptimalPred}, given this new realisation, we provide the optimal (in the sense of minimum error variance) predictor of $\mathbf{y}$. Finally, we summarise by stating an algorithm to compute such predictors in Section \ref{sec:sum}.
{\bf Notation and terminology} Let $\mathbf{F}$ denote a $\sigma$-algebra on the set $\Omega$ and $\mathbf{P}$ be a probability measure on $\mathbf{F}$. Unless otherwise stated, all probabilistic considerations will be with respect to the probability space $(\Omega,\mathbf{F},\mathbf{P})$. In this paragraph let $\mathbb{E}$ denote some Euclidean space. We associate with $\mathbb{E}$ the topology generated by the 2-norm $||\cdot||_2$, and the Borel $\sigma$-algebra generated by the open sets of $\mathbb{E}$. The closure of a set $M$ is denoted $clM$. For $S\subseteq\mathbb{N}$ and stochastic variables $\mathbf{y},\mathbf{z}_1,\mathbf{z}_2,\dots$ with values in $\mathbb{R}$ we denote by $\mathbf{E}(\mathbf{y}~|~\{\mathbf{z}_i\}_{i\in S})$ the conditional expectation of $\mathbf{y}$ with respect to the $\sigma$-algebra $\sigma(\{\mathbf{z}_i\})$ generated by the family $\{\mathbf{z}_i\}_{i\in S}$. Recall that $\mathbf{E}(\mathbf{z}\mathbf{x})$ defines an inner product in $L^2(\Omega,\mathbf{F},\mathbf{P})$ and that $\mathbf{E}(\mathbf{y}~|~\{\mathbf{z}_i\}_{i\in S})$ can be interpreted as the orthogonal projection onto the closed subspace $L^2(\Omega,\sigma(\{\mathbf{z}_i\}_{i\in S}),\mathbf{P})$, which can also be identified with the closure of the subspace generated by $\{\mathbf{z}_i\}_{i\in S}$. That is,
\begin{align}\label{zigen}
\textstyle
L^2(\Omega,\sigma(\{\mathbf{z}_i\}_{i\in S}),\mathbf{P})=cl\left\{\sum_{i\in S}\alpha_i\mathbf{z}_i~|~\alpha_i\in\mathbb{R}\right\}
\end{align}
with only a finite number of summands in \eqref{zigen} being nonzero when $S=\mathbb{N}$. Moreover, for a closed subspace $H$ of $L^2(\Omega,\mathbf{F},\mathbf{P})$ and a stochastic variable $\mathbf{y}$ with values in $\mathbb{E}$ and $\mathbf{E}(||\mathbf{y}||_2^2)<\infty$, we let $\mathbf{E}(\mathbf{y}~|~H)$ denote the $\dim(\mathbb{E})$-dimensional vector with $i$th coordinate equal to $\mathbf{E}(\mathbf{y}_i~|~H)$ with $\mathbf{y}_i$ denoting the $i$th coordinate of $\mathbf{y}$.
There are two closed subspaces of particular importance. Following \cite{LindquistBook}, for a discrete time stochastic process $\mathbf{z}(t)$ with values in $\mathbb{E}$ and $\mathbf{E}(||z(t)||_2^2)<\infty$, we write $H_t^-(\mathbf{z})$ for the closure of the subspace in $L^2(\Omega,\mathbf{F},\mathbf{P})$ generated by the coordinate functions $\mathbf{z}_i(s)$ of $\mathbf{z}(s)$ for all $s<t$. That is,
\begin{align}\label{h-}
\textstyle
H^-_t(\mathbf{z})=cl\left\{\sum_{i=-\infty}^{t-1}\alpha_i^T\mathbf{z}(i)~|~\alpha_i\in\mathbb{E}\right\}
\end{align}
with ${}^T$ indicating transpose and only a finite number of summands in \eqref{h-} being nonzero. In a similar manner we define
\begin{align}
H^+_t(\mathbf{z})=&\textstyle
cl\left\{\sum_{i=t}^{\infty}\alpha_i^T\mathbf{z}(i)~|~\alpha_i\in\mathbb{E}\right\},\\
H(\mathbf{z})=&\textstyle
cl\left\{\sum_{i=-\infty}^{\infty}\alpha_i^T\mathbf{z}(i)~|~\alpha_i\in\mathbb{E}\right\}.
\end{align}
Let $A$, $B$ and $C$ be closed subspaces of $L^2(\Omega,\mathbf{F},\mathbf{P})$. We then define
\begin{align}
A\vee B=cl\{a+b~|~a\in A,~b\in B\}
\end{align}
and say that $A$ and $B$ are orthogonal given $C$, denoted $A\perp B~|~C$, if
\begin{align}
\mathbf{E}\Big(\big(a-\mathbf{E}(a~|~C)\big)\big(b-\mathbf{E}(b~|~C)\big)\Big)=0
\end{align}
for all $a\in A$ and $b\in B$.
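In finite dimensions the conditional-orthogonality condition $A\perp B~|~C$ can be checked explicitly, since the conditional expectation reduces to a least-squares projection. The following sketch is purely illustrative of the definition (the function names are ours, not from the cited literature).

```python
import numpy as np

def project_onto(v, basis):
    """Orthogonal projection of the vector v onto the span of the
    columns of `basis`, via least squares -- the finite-dimensional
    analogue of the conditional expectation E(v | C)."""
    coef, *_ = np.linalg.lstsq(basis, v, rcond=None)
    return basis @ coef

def orthogonal_given(a, b, c_basis, tol=1e-8):
    """Check E[(a - E(a|C))(b - E(b|C))] = 0 for sample vectors a, b,
    with C spanned by the columns of c_basis."""
    ra = a - project_onto(a, c_basis)
    rb = b - project_onto(b, c_basis)
    return abs(ra @ rb) < tol * len(a)
```

For example, two vectors sharing only a common component along $C$ are orthogonal given $C$, while a vector is never conditionally orthogonal to itself unless it lies entirely in $C$.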
We use the following notation, $\mathcal{Y}=\mathbb{R}^{p}$, $\mathcal{W}=\mathbb{R}^{q}$ and for the disjoint union $\mathcal{W}^{*}=\bigsqcup_{k=1}^{\infty} \mathcal{W}^k$ we write $w=(w_1,\ldots,w_k)$ in place of the more correct $(w,k)=((w_1,\ldots,w_k),k)$ for an element in $\mathcal{W}^{*}$.
\section{Formulating a Realisation}\label{sec:sec1}
Suppose we want to construct an estimator of the output stochastic process $\mathbf{y}(t):\Omega\to\mathcal{Y}$ given a sequence of measurements as inputs obtained from the stochastic process $\mathbf{w}(t):\Omega\to\mathcal{W}$.
In order to narrow down and formally describe the estimation problem, we assume that the processes $\mathbf{y}(t)$ and $\mathbf{w}(t)$ can be represented as outputs of an LTI system in forward innovation form:
\begin{Assumption}\label{as:generator}
The processes $\mathbf{y}(t)$ and $\mathbf{w}(t)$ can be generated by a stochastic discrete-time minimal LTI system of the form
\begin{subequations}\label{eq:generator}
\begin{align}
\mathbf{x}(t+1) & =A_g\mathbf{x}(t)+K_g\mathbf{e}_g(t)\\
\begin{bmatrix}
\mathbf{y}(t)\\
\mathbf{w}(t)
\end{bmatrix}
& =C_g\mathbf{x}(t)+\mathbf{e}_g(t)
\end{align}
\end{subequations}
where $A_g \in \mathbb{R}^{n \times n}$, $K_g \in \mathbb{R}^{n \times m}$, $C_g \in \mathbb{R}^{(p+q) \times n}$ for $n \ge 0$, $m,p>0$, and $\mathbf{x}\in\mathbb{R}^n$, $\mathbf{y}\in\mathbb{R}^p$, $\mathbf{w}\in\mathbb{R}^q$ and $\mathbf{e}_g$ are stationary, square-integrable, zero-mean, and jointly Gaussian stochastic processes. The processes $\mathbf{x}$ and $\mathbf{e}_g$ are called the state and noise process, respectively. Recall that stationarity and square-integrability imply constant expectation and that the covariance matrix\\ $Cov(\mathbf{y}(t),\mathbf{y}(s)) = \mathbf{E} [(\mathbf{y}(t)-\mathbf{E}[\mathbf{y}(t)])(\mathbf{y}(s)-\mathbf{E}[\mathbf{y}(s)])^T]$\\ only depends on the time lag $(t-s)$.
Furthermore, we require that $A_g$ is stable (all its eigenvalues are inside the open unit circle) and that for any $t,k \in \mathbb{Z}$, $k \geq 0$, $E[\mathbf{e}_g(t)\mathbf{e}_g^T(t\!-\! k\!-\! 1)]=0$, $E[\mathbf{e}_g(t)\mathbf{x}^T(t-k)]=0$, i.e., the stationary Gaussian process $\mathbf{e}_g(t)$ is white noise and uncorrelated with $\mathbf{x}(t-k)$.
We identify the system \eqref{eq:generator} with the tuple
$(A_g,K_g,C_g,I,\mathbf{e}_g)$; note that the state process $\mathbf{x}$ is uniquely defined by
the infinite sum $\mathbf{x}(t)=\sum_{k=1}^{\infty} A_g^{k-1}K_g\mathbf{e}_g(t-k)$.
\end{Assumption}
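For intuition, the recursion in \eqref{eq:generator} can be simulated directly. The matrices below are arbitrary illustrative choices (not taken from any identified model), with $A_g$ chosen stable as the assumption requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n = 2 states, p = 1, q = 1, so m = p + q = 2.
A_g = np.array([[0.5, 0.1],
                [0.0, 0.3]])   # stable: eigenvalues inside the unit circle
K_g = np.array([[1.0, 0.0],
                [0.2, 0.5]])
C_g = np.array([[1.0, 0.0],
                [0.0, 1.0]])

def simulate(T):
    """Generate (y(t), w(t)) from the innovation form of the assumption."""
    x = np.zeros(2)
    out = np.zeros((T, 2))
    for t in range(T):
        e = rng.standard_normal(2)   # white Gaussian noise e_g(t)
        out[t] = C_g @ x + e         # [y(t); w(t)] = C_g x(t) + e_g(t)
        x = A_g @ x + K_g @ e        # x(t+1) = A_g x(t) + K_g e_g(t)
    return out[:, :1], out[:, 1:]    # y, w
```

Since $A_g$ is stable and $\mathbf{e}_g$ is white, the simulated state converges in distribution to the stationary state $\mathbf{x}(t)=\sum_{k=1}^{\infty} A_g^{k-1}K_g\mathbf{e}_g(t-k)$.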
Before we can continue we have to consider the relationship between $\mathbf{y}$ and $\mathbf{w}$. For technical reasons we cannot have feedback from $\mathbf{y}$ to $\mathbf{w}$, as $\mathbf{w}$ would then be determined by a dynamical relation involving the past of the process $\mathbf{y}$. This leads to Assumption \ref{as:NoFeedback}.
\begin{Assumption}\label{as:NoFeedback}
There is no feedback from $\mathbf{y}$ to $\mathbf{w}$, following definition 17.1.1. from \cite{LindquistBook}, i.e., $$H_t^-(\mathbf{y}) \perp H_t^+(\mathbf{w}) \mid H_t^-(\mathbf{w})$$ holds, i.e., the future of $\mathbf{w}$ is conditionally uncorrelated with the past of $\mathbf{y}$, given the past of $\mathbf{w}$.
\end{Assumption}
As a passing remark, the no-feedback assumption is equivalent to the weak feedback-free assumption \cite{CainesFeedback} and to Granger non-causality \cite{GRANGER196328}. Thus the no-feedback assumption can be stated as: $\mathbf{y}$ does not Granger-cause $\mathbf{w}$.\\
Several results can now be deduced from Assumption \ref{as:NoFeedback}. First, by \cite[Proposition 2.4.2]{LindquistBook}, we obtain the following relation between projections
\begin{align}
E[\mathbf{y}(t)|H(\mathbf{w})]&=E[\mathbf{y}(t)|H_{t+1}^-(\mathbf{w})],\\
E[\mathbf{w}(t)|H_{t}^-(\mathbf{w})\vee H_{t}^-(\mathbf{y})]&=E[\mathbf{w}(t)|H_{t}^-(\mathbf{w})].
\end{align}
Secondly, from \cite[Ch. 17]{LindquistBook} it follows that the process $\mathbf{y}$ can then be decomposed into a deterministic part $\mathbf{y}_d$ and a stochastic part $\mathbf{y}_s$, as follows
\begin{align}
\mathbf{y}(t)&=\mathbf{y}_d(t)+\mathbf{y}_s(t), \label{eq:decomp}\\
\mathbf{y}_d(t)&=E[\mathbf{y}(t) | H(\mathbf{w})]=E[\mathbf{y}(t) | H_{t+1}^-(\mathbf{w})],\label{eq:ydDef}\\
\mathbf{y}_s(t)&=\mathbf{y}(t)-\mathbf{y}_d(t).\label{eq:ysDef}
\end{align}
Note that, as a consequence of \eqref{eq:ydDef} and \eqref{eq:ysDef}
$$E[\mathbf{y}_d(t)\mathbf{y}_s^T(\tau)]=0\;\forall t,\tau\; ,$$
i.e., $\mathbf{y}_d$ and $\mathbf{y}_s$ are uncorrelated.
Moreover, the process $\mathbf{y}_s$ can be realised by a state-space system in forward innovation form
\begin{subequations}
\begin{align}
\mathbf{x}_s(t+1)&=A_s\mathbf{x}_s(t)+K_s\mathbf{e}_s(t),\\
\mathbf{y}_s(t)&=C_s\mathbf{x}_s(t)+\mathbf{e}_s(t),\\
\mathbf{e}_s(t)&=\mathbf{y}_s(t)-E[\mathbf{y}_s(t)|H_t^-(\mathbf{y}_s)].\label{esdef}
\end{align}
\end{subequations}
Finally, from \cite[Proposition 17.1.3.]{LindquistBook} we get
\begin{equation}
H_{t}^-(\mathbf{y})\vee H_{t+1}^-(\mathbf{w})=H_{t}^-(\mathbf{y}_s)\oplus H_{t+1}^-(\mathbf{w}),
\end{equation}
where $\oplus$ denotes orthogonal sum, and
\begin{equation}
\mathbf{e}_s(t)=\mathbf{y}(t)-E[\mathbf{y}(t)|H_{t}^-(\mathbf{y})\vee H_{t+1}^-(\mathbf{w})]. \label{eq:es}
\end{equation}
Now consider a similarity transformation $T$ of \eqref{eq:generator} such that $\bar{A}_g=TA_gT^{-1}$, $\bar{K}_g=TK_g$ and $\bar{C}_g=C_gT^{-1}$ are upper block triangular (see also Section~\ref{sec:sum})
\begin{subequations}\label{eq:similar}
\begin{flalign}
\begin{bmatrix}
\bar{x}_{g,1}(t+1)\\
\bar{x}_{g,2}(t+1)
\end{bmatrix}&=\underset{\bar{A}_g}{\underbrace{\begin{bmatrix}A_{1,1}&A_{1,2}\\0&A_{2,2} \end{bmatrix}}}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix}+\underset{\bar{K}_g}{\underbrace{\begin{bmatrix} K_{1,1} & K_{1,2} \\ 0 & K_{2,2} \end{bmatrix}}}\begin{bmatrix} \mathbf{e}_1(t)\\ \mathbf{e}_2(t)\end{bmatrix} &&\\
\begin{bmatrix} \mathbf{y}(t)\\ \mathbf{w}(t)\end{bmatrix} &= \underset{\bar{C}_g}{\underbrace{\begin{bmatrix} C_{1,1} & C_{1,2} \\ 0 & C_{2,2} \end{bmatrix}}}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix} + \begin{bmatrix} \mathbf{e}_1(t)\\ \mathbf{e}_2(t)\end{bmatrix} \label{eq:similarOuput}
\end{flalign}
\end{subequations}
where $[\mathbf{e}_1^T(t)\;\mathbf{e}_2^T(t)]^T=\mathbf{e}_g(t)$, and such that $(A_{2,2},C_{2,2})$ is observable. Moreover, $A_{i,j}\in\mathbb{R}^{p_i\times p_j}$, $K_{i,j}\in\mathbb{R}^{p_i\times r_j}$, $C_{i,j}\in\mathbb{R}^{r_i\times p_j}$, with $r_1=p$ and $r_2=q$. From \cite{jozsa2018relationship} it then follows that $(A_{2,2},K_{2,2},C_{2,2},\mathbf{e}_2)$ is a minimal Kalman representation of $\mathbf{w}$, and hence $\mathbf{e}_2(t)$ is the innovation process of $\mathbf{w}$, i.e.,
\begin{align}
\mathbf{e}_2(t)&=\mathbf{w}(t) - E[\mathbf{w}(t) \mid H_{t}^-(\mathbf{w})]\nonumber\\
&=\mathbf{w}(t) - E[\mathbf{w}(t) \mid H_{t}^-(\mathbf{y}) \vee H_{t}^-(\mathbf{w})]. \label{def:e2}
\end{align}
Moreover, the transformed system \eqref{eq:similar} induces a relation between the output $\mathbf{y}$ and the input $\mathbf{w}$. In detail, from \eqref{eq:similarOuput} we also have
\begin{equation}
\mathbf{e}_2(t)=\mathbf{w}(t)-C_{2,2}\bar{x}_{g,2}(t). \label{eq:e2}
\end{equation}
Hence, substituting \eqref{eq:e2} in \eqref{eq:similar} yields the following realisation of $\mathbf{y}$
{\normalsize
\begin{subequations}\label{inout}
\begin{flalign}
\begin{bmatrix}
\bar{x}_{g,1}(t+1)\\
\bar{x}_{g,2}(t+1)
\end{bmatrix}&=\begin{bmatrix}A_{1,1}&A_{1,2}-K_{1,2}C_{2,2}\\0&A_{2,2}-K_{2,2}C_{2,2} \end{bmatrix}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix} &&\\
&+\begin{bmatrix}K_{1,2}\\K_{2,2}\end{bmatrix}\mathbf{w}(t) +\begin{bmatrix} K_{1,1} \\ 0 \end{bmatrix}\mathbf{e}_1(t) &&\\
\mathbf{y}(t) &= \begin{bmatrix} C_{1,1} & C_{1,2} \end{bmatrix}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix} + \mathbf{e}_1(t)&&
\end{flalign}
\end{subequations} }
Note that $\mathbf{e}_1(t)$ is the innovation process of $\mathbf{y}$ (with respect to $\mathbf{w}$), i.e.,
\begin{equation}\label{e1}
\mathbf{e}_1(t)=\mathbf{y}(t)-E[\mathbf{y}(t) \mid H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w})].
\end{equation}
\subsection{Optimal predictor} \label{sec:OptimalPred}
The goal in this section is to derive an optimal predictor (in the sense of minimum variance). Firstly, we claim that
\begin{equation}
\mathbf{e}_s(t)=\mathbf{e}_1(t)-E[\mathbf{y}(t)|\mathbf{e}_2(t)]=\mathbf{e}_1(t)-D_0\mathbf{e}_2(t) \label{eq:esClaim}
\end{equation}
where $D_0=(E[\mathbf{y}(t)\mathbf{e}_2^T(t)])^T(E[\mathbf{e}_2(t)\mathbf{e}_2^T(t)])^{-1}$ is the minimum variance linear estimator of $\mathbf{y}(t)$ given $\mathbf{e}_2(t)$, see \cite[Proposition 2.2.3.]{LindquistBook}.
In order to show \eqref{eq:esClaim}, we first show that
\begin{equation}
H_t^-(\mathbf{y}) \vee H_{t+1}^-(\mathbf{w}) = ( H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w}) ) \oplus H(\mathbf{e}_2(t) ), \label{orthSum}
\end{equation}
where $H(\mathbf{e}_2(t))=\{\alpha^T \mathbf{e}_2(t) \mid \alpha \in \mathbb{R}^q\}$ is the space spanned by the innovation process $\mathbf{e}_2(t)$, considered only at time $t$. By definition we have
\begin{align}
&( H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w}) ) \vee H(\mathbf{e}_2(t) ) =\nonumber\\
&\quad\qquad cl\Big\{ \sum_{i=-\infty}^{t-1} \gamma_i^T \mathbf{y}(i) + \sum_{i=-\infty}^{t-1} \eta_i^T \mathbf{w}(i)\nonumber\\
&\qquad\qquad + \lambda_t^T\mathbf{e}_2(t)
\mid \gamma_i \in \mathbb{R}^p, \eta_i\in \mathbb{R}^q, \lambda_t \in \mathbb{R}^q\Big\} \label{eq:esfe12}
\end{align}
However, using definition of $\mathbf{e}_2(t)$ from \eqref{def:e2} we have
\begin{align}
&( H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w}) ) \vee H(\mathbf{e}_2(t) ) =\nonumber \\
&\quad\qquad cl\Big\{ \sum_{i=-\infty}^{t-1} \gamma_i^T \mathbf{y}(i) + \sum_{i=-\infty}^{t-1} \eta_i^T \mathbf{w}(i)\nonumber\\
&\qquad\qquad + \lambda_t^T\mathbf{w}(t)
\mid \gamma_i \in \mathbb{R}^p, \eta_i\in \mathbb{R}^q, \lambda_t \in \mathbb{R}^q \Big\},
\end{align}
which equals $H_t^-(\mathbf{y})\vee H_{t+1}^-(\mathbf{w})$ and therefore
\begin{align}
( H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w}) ) \vee H(\mathbf{e}_2(t) ) = H_t^-(\mathbf{y}) \vee H_{t+1}^-(\mathbf{w})
\end{align}
Again from \eqref{def:e2} it follows that $\mathbf{e}_2(t) \perp H_{t}^-(\mathbf{w}) \vee H_{t}^-(\mathbf{y})$, thus \eqref{orthSum} holds.
The relation \eqref{eq:esClaim} now follows since
\begin{align}
&E[\mathbf{y}(t) \mid H_t^-(\mathbf{y})\vee H_{t+1}^-(\mathbf{w})]\nonumber\\
&\qquad=E[\mathbf{y}(t) \mid H_t^-(\mathbf{y})\vee H_t^-(\mathbf{w})]
+E[\mathbf{y}(t) \mid \mathbf{e}_2(t)] && \nonumber \\
&\qquad=E[\mathbf{y}(t) \mid H_t^-(\mathbf{y})\vee H_t^-(\mathbf{w})]+D_0\mathbf{e}_2(t),
\end{align}
and therefore
\begin{align}
\mathbf{e}_s(t)
&=\mathbf{y}(t)-E[\mathbf{y}(t)\mid H_t^-(\mathbf{y}) \vee H_t^-(\mathbf{w})]-D_0\mathbf{e}_2(t)\\
&=\mathbf{e}_1(t)-D_0\mathbf{e}_2(t)
\end{align}
by using \eqref{e1}.
Now from \eqref{eq:esClaim} and \eqref{eq:e2} we get
\begin{equation}
\mathbf{e}_1(t)=\mathbf{e}_s(t)+D_0\mathbf{w}(t)-D_0C_{2,2}\bar{x}_{g,2}(t),
\end{equation}
which can be applied to \eqref{inout} to obtain the following realization of $\mathbf{y}$
\begin{subequations}
\begin{flalign}
&\begin{bmatrix}
\bar{x}_{g,1}(t+1)\\
\bar{x}_{g,2}(t+1)
\end{bmatrix}=\underset{\tilde{A}}{\underbrace{\begin{bmatrix}A_{1,1}&A_{1,2}-K_{1,2}C_{2,2}-K_{1,1}D_0C_{2,2}\\0&A_{2,2}-K_{2,2}C_{2,2} \end{bmatrix}}}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix}\nonumber &&\\
&+\underset{\tilde{K}}{\underbrace{\begin{bmatrix}K_{1,2}+K_{1,1}D_0\\K_{2,2}\end{bmatrix}}}\mathbf{w}(t)+\begin{bmatrix} K_{1,1} \\ 0 \end{bmatrix}\mathbf{e}_s(t)\label{xbar} &&\\
&\mathbf{y}(t) = \underset{\tilde{C}}{\underbrace{\begin{bmatrix} C_{1,1} & C_{1,2}-D_0C_{2,2} \end{bmatrix}}}\begin{bmatrix}
\bar{x}_{g,1}(t)\\
\bar{x}_{g,2}(t)
\end{bmatrix} +D_0\mathbf{w}(t) + \mathbf{e}_s(t) &&\label{y}
\end{flalign}
\end{subequations}
Finally we are in a position to derive a formula for the minimum variance predictor $E[\mathbf{y}(t)\mid H_{t+1}^-(\mathbf{w})]$. That is, a formula for the orthogonal projection of $\mathbf{y}(t)$ given past and present values of $\mathbf{w}$. First define $\hat{x}_g(t)=E[\bar{x}_g(t)\mid H_{t+1}^-(\mathbf{w})]$, then from \eqref{y} we get
\begin{flalign}
&E[\mathbf{y}(t)\mid H_{t+1}^-(\mathbf{w})]= E[\tilde{C}\bar{x}_g(t)+D_0\mathbf{w}(t) + \mathbf{e}_s(t)\mid H_{t+1}^-(\mathbf{w})]&&\\
&=\tilde{C} \hat{x}_g(t) +D_0\mathbf{w}(t) + E[\mathbf{e}_s(t)|H_{t+1}^-(\mathbf{w})]&&\\
&=\tilde{C} \hat{x}_g(t) +D_0\mathbf{w}(t)\label{last}&&
\end{flalign}
where \eqref{last} follows from \eqref{eq:es}.
Now \eqref{xbar} can be used to derive a dynamical expression for $\hat{x}_g$ as follows
\begin{flalign}
&E[\bar{x}_g(t+1)\mid H_{t+2}^-(\mathbf{w})] && \nonumber \\
&=E\Bigg [ \tilde{A}\bar{x}_{g}(t)+\tilde{K}\mathbf{w}(t)+\begin{bmatrix} K_{1,1} \\ 0 \end{bmatrix}\mathbf{e}_s(t) \Bigg | H_{t+2}^-(\mathbf{w}) \Bigg ] && \\
&=\tilde{A}E[\bar{x}_{g}(t)|H_{t+2}^-(\mathbf{w})]\label{proxbar}+\tilde{K} E[\mathbf{w}(t)|H_{t+2}^-(\mathbf{w})]\nonumber &&\\
&\hspace{1cm}+\begin{bmatrix} K_{1,1} \\ 0 \end{bmatrix}E[\mathbf{e}_s(t)|H_{t+2}^-(\mathbf{w})] &&
\end{flalign}
Clearly $E[\mathbf{w}(t)|H_{t+2}^-(\mathbf{w})]=\mathbf{w}(t)$. For the projection in \eqref{proxbar} we have $$E[\bar{x}_{g}(t)|H_{t+2}^-(\mathbf{w})]=E[\bar{x}_{g}(t)|H_{t+1}^-(\mathbf{w})]$$ since the state vector $\bar{x}_g(t)$ can be expressed as an infinite sum using \eqref{xbar}, where \eqref{eq:es} is used for $\mathbf{e}_s(t)$
\begin{multline}
\bar{x}_g(t)
=\sum_{i=1}^\infty M_i\mathbf{w}(t-i) + \sum_{i=1}^\infty N_i\mathbf{y}(t-i)\\ - \sum_{i=1}^\infty N_iE[\mathbf{y}(t-i)\mid H_{t-i}^-(\mathbf{y}) \vee H_{t-i+1}^-(\mathbf{w})]\label{eq:barxsum}
\end{multline}
Finally, from \eqref{esdef} we observe that
\begin{flalign}
&E[\mathbf{e}_s(t)|H_{t+2}^-(\mathbf{w})] && \nonumber \\
&=E[\mathbf{y}_s(t)|H_{t+2}^-(\mathbf{w})]-E[E[\mathbf{y}_s(t)|H_t^-(\mathbf{y}_s)]|H_{t+2}^-(\mathbf{w})] &&\\
&=0-0=0 && \nonumber
\end{flalign}
since $H(\mathbf{y}_s)\perp H(\mathbf{w})$ by \eqref{eq:ysDef}.
In summary we obtain the following formula for the minimum variance predictor
\begin{subequations}
\begin{align}
\hat{x}_{g}(t+1)&=\tilde{A}\hat{x}_{g}(t)+\tilde{K}\mathbf{w}(t),\\
E[\mathbf{y}(t)\mid H_{t+1}^-(\mathbf{w})] &= \tilde{C}\hat{x}_{g}(t)+D_0\mathbf{w}(t).
\end{align}\label{eq:T0sys}
\end{subequations}
In order to predict $\mathbf{y}(t)$ based on $H_{t+1}^-(\mathbf{w})$, the system \eqref{eq:T0sys} would yield the minimum prediction error variance.
\section{Summary}\label{sec:sum}
Given $(A,B,C,D)$, which describe the system
\begin{subequations}
\begin{align}
\mathbf{x}(t+1)&=A\mathbf{x}(t)+Bv(t),\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= C\mathbf{x}(t)+Dv(t),\\
v(t)&\sim \mathcal{N}(0,I),
\end{align}
\end{subequations}
and assume that there is no feedback from $\mathbf{y}$ to $\mathbf{w}$.\\
\textbf{Step 1} Compute forward innovation representation \cite[Section 6.9]{LindquistBook}, by first
solving the discrete Lyapunov equation
\begin{equation}
P=APA^T+BB^T
\end{equation}
then compute
\begin{align}
\bar{C}&=CPA^T+DB^T\\
\Lambda_{0y}&=CPC^T+DD^T
\end{align}
Then solve a discrete algebraic Riccati equation
\begin{equation}
\Pi=A\Pi A^T+(\bar{C}^T-A\Pi C^T)(\Lambda_{0y}-C\Pi C^T)^{-1}(\bar{C}^T-A\Pi C^T)^T
\end{equation}
Finally compute the covariance of innovation process $\mathbf{e}_g(t)$ and the Kalman gain
\begin{align}
Q_e&=\Lambda_{0y}-C\Pi C^T\\
K&=(\bar{C}^T-A\Pi C^T)Q_e^{-1}
\end{align}
Then we obtain a realisation in forward innovation form
\begin{subequations}
\begin{align}
\mathbf{x}(t+1)&=A\mathbf{x}(t)+K\mathbf{e}_g(t)\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= C\mathbf{x}(t)+\mathbf{e}_g(t)
\end{align}
\end{subequations}
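As a rough illustration of Step 1, the Lyapunov and Riccati equations above can be solved by fixed-point iteration starting from the zero matrix; the NumPy sketch below follows the formulas in the text (the function name and tolerance defaults are our choices, not part of the method).

```python
import numpy as np

def forward_innovation_form(A, B, C, D, tol=1e-12, max_iter=10000):
    """Sketch of Step 1: Kalman gain K and innovation covariance Q_e."""
    # State covariance: P = A P A^T + B B^T (discrete Lyapunov, fixed point).
    P = np.zeros_like(A)
    for _ in range(max_iter):
        P_new = A @ P @ A.T + B @ B.T
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    C_bar = C @ P @ A.T + D @ B.T
    L0 = C @ P @ C.T + D @ D.T
    # Riccati equation by iteration from Pi = 0 (the Kalman filter
    # state-covariance recursion, converging to the minimal solution).
    Pi = np.zeros_like(A)
    for _ in range(max_iter):
        G = C_bar.T - A @ Pi @ C.T
        Qe = L0 - C @ Pi @ C.T
        Pi_new = A @ Pi @ A.T + G @ np.linalg.solve(Qe, G.T)
        if np.max(np.abs(Pi_new - Pi)) < tol:
            break
        Pi = Pi_new
    Qe = L0 - C @ Pi @ C.T
    K = (C_bar.T - A @ Pi @ C.T) @ np.linalg.inv(Qe)
    return K, Qe, Pi
```

On the matrices of the illustrative example below, this recovers $Q_e=\begin{bsmallmatrix}2&1\\1&1\end{bsmallmatrix}$, as the worked example reports.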
\textbf{Step 2} Find a similarity transformation to obtain the system in upper block triangular form \eqref{eq:similar}. \\
First, compute the SVD of the observability matrix associated with $\mathbf{w}$:
\begin{align}
C&=\begin{bmatrix} C_y \\ C_w \end{bmatrix}\\
USV^T&=\begin{bmatrix}
C_w\\ C_wA\\ \vdots \\ C_wA^n
\end{bmatrix}\label{obsM}
\end{align}
where $S$ is a diagonal matrix with entries $[\sigma_1,\sigma_2,\dots,\sigma_n]$ and $\sigma_1\leq \sigma_2\leq \dots \leq \sigma_n$. Note that the order of the singular values is reversed compared to standard practice. The non-singular matrix $T$, which performs the desired similarity transformation, is given by $T=V^T$. Moreover, $p_1$ (from \eqref{eq:similar}) is given by the number of singular values which are zero, i.e., the nullity of the observability matrix in \eqref{obsM}, and $p_2=n-p_1$.
Now compute the upper block triangular representation by
\begin{align}
TAT^{-1}&=\begin{bmatrix} A_{1,1} & A_{1,2} \\ \; 0_{p_2\times p_1} & A_{2,2}\end{bmatrix},\; TK=\begin{bmatrix} K_{1,1}& K_{1,2} \\ \; 0_{p_2\times p} & K_{2,2}\end{bmatrix}\\
CT^{-1}&=\begin{bmatrix} C_{1,1} & C_{1,2} \\ \; 0_{q\times p_1} & C_{2,2}\end{bmatrix}
\end{align}
and partition
\begin{equation}
Q_e=\begin{bmatrix} Q_1 & Q_2\\ Q_2^T& Q_3\end{bmatrix},
\end{equation}
with $Q_2\in\mathbb{R}^{p\times q}$, $Q_3\in \mathbb{R}^{q\times q}$.\\
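Step 2 can be sketched numerically as follows; this is an illustrative NumPy snippet (function name and rank tolerance are our choices) that builds the observability matrix of $(A,C_w)$, reverses the SVD ordering as described above, and reads off $p_1$ as its nullity. Only $n$ block rows are stacked, since higher powers of $A$ add nothing by Cayley--Hamilton.

```python
import numpy as np

def triangularizing_transform(A, C_w, tol=1e-9):
    """Sketch of Step 2: T = V^T from the SVD of the observability
    matrix of (A, C_w), with singular values in ascending order."""
    n = A.shape[0]
    # Stack C_w, C_w A, ..., C_w A^(n-1).
    blocks, M = [], C_w
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    Obs = np.vstack(blocks)
    # NumPy returns singular values in descending order; reverse them
    # (and the corresponding rows of V^T) to match the text.
    _, s, Vt = np.linalg.svd(Obs)
    s, Vt = s[::-1], Vt[::-1]
    p1 = int(np.sum(s < tol))  # nullity = number of unobservable states
    return Vt, p1
```

Applying the returned $T$ as $TAT^{-1}$, $TK$, $CT^{-1}$ should then produce the upper block triangular form \eqref{eq:similar}.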
\textbf{Step 3} Compute the minimum variance predictor \eqref{eq:T0sys}.
\begin{subequations}
\begin{flalign}
&\hat{\mathbf{x}}(t+1)=\begin{bmatrix}A_{1,1}&A_{1,2}-K_{1,2}C_{2,2}-K_{1,1}D_0C_{2,2}\\0&A_{2,2}-K_{2,2}C_{2,2} \end{bmatrix}
\hat{x}(t)\nonumber &&\\
&\qquad\qquad+\begin{bmatrix}K_{1,2}+K_{1,1}D_0\\K_{2,2}\end{bmatrix}\mathbf{w}(t)&&\\
&\hat{\mathbf{y}}(t) = \begin{bmatrix} C_{1,1} & C_{1,2}-D_0C_{2,2} \end{bmatrix}
\hat{x}(t)+Q_2Q_3^{-1}\mathbf{w}(t) &&
\end{flalign}
\end{subequations}
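The predictor of Step 3 is simply a linear filter driven by $\mathbf{w}$; a minimal sketch of running the recursion over a recorded input sequence is shown below (it assumes $\tilde{A}$, $\tilde{K}$, $\tilde{C}$ and $D_0$ have already been computed as above; the function name and zero initial state are our choices).

```python
import numpy as np

def mv_predictor(A_t, K_t, C_t, D0, w_seq, x0=None):
    """Run the minimum variance predictor recursion over w(0), ..., w(T-1).
    Returns the predictions of y(t) given w up to and including time t."""
    n = A_t.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    y_hat = []
    for w in w_seq:
        y_hat.append(C_t @ x + D0 @ np.atleast_1d(w))
        x = A_t @ x + K_t @ np.atleast_1d(w)
    return np.array(y_hat)
```

With a stable $\tilde{A}$ and zero input, the prediction decays to zero, as expected from the recursion.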
\section{Illustrative Example}
To illustrate the findings consider the system
\begin{subequations}\label{eq:exSystem1}
\begin{align}
\mathbf{x}(t+1)&=\begin{bmatrix} 0.85 & 1\\ 0 & 0.5\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}1&1\\0&1\end{bmatrix} v(t),\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= \begin{bmatrix} 1&0\\0 & 1\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}1&1\\0&1\end{bmatrix} v(t),\\
v(t)&\sim \mathcal{N}(0,I),
\end{align}
\end{subequations}
From system \eqref{eq:exSystem1} we can see that $\mathbf{w}$ is feedback free from $\mathbf{y}$. To make the example interesting, we will not work with system \eqref{eq:exSystem1} directly, but instead apply a similarity transformation so that the triangular structure is no longer apparent. We use
\begin{equation}
T=\begin{bmatrix} 0.5& 0.9\\0.5& 0.1\end{bmatrix}
\end{equation}
and obtain
\begin{subequations}\label{eq:exSystem2}
\begin{align}
\mathbf{x}(t+1)&=\begin{bmatrix} 1.08 & -0.23\\ 0.58 & 0.26\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}0.5&1.4\\0.5&0.6\end{bmatrix} v(t),\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= \begin{bmatrix} -0.25&2.25\\1.25 & -1.25\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}1&1\\0&1\end{bmatrix} v(t)
\end{align}
\end{subequations}
Note that if we had started with system \eqref{eq:exSystem2}, it would not be clear whether $\mathbf{w}$ is feedback free of $\mathbf{y}$. In this example we know that $\mathbf{w}$ is feedback free, therefore we proceed by computing the forward innovation form as in \textbf{Step 1} and obtain
\begin{subequations}\label{eq:exSystemInnov}
\begin{align}
\mathbf{x}(t+1)&=\begin{bmatrix} 1.08 & -0.23\\ 0.58 & 0.26\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}0.5&0.9\\0.5&0.1\end{bmatrix} e(t),\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= \begin{bmatrix} -0.25&2.25\\1.25 & -1.25\end{bmatrix}\mathbf{x}(t)+e(t)\\
Q_e&=\begin{bmatrix} 2&1\\1&1\end{bmatrix}
\end{align}
\end{subequations}
The triangular form is obtained by applying SVD to the observability matrix
\begin{equation}
U\begin{bmatrix} 0 & 0 \\ 0 & 1.9764\end{bmatrix}\begin{bmatrix} -0.707&-0.707\\-0.707&0.707\end{bmatrix} = \begin{bmatrix} 1.25 & -1.25\\0.62&-0.62 \end{bmatrix}
\end{equation}
Note that there is one singular value equal to zero, therefore there is one unobservable state and as such $p_1=1$. Taking $T=V^T$ we obtain a system in triangular form
\begin{subequations}\label{eq:exSystemTriang}
\begin{align}
\mathbf{x}(t+1)&=\begin{bmatrix} 0.85 & 0.81\\ 0 & 0.5\end{bmatrix}\mathbf{x}(t)+\begin{bmatrix}-0.7&-0.7\\0&-0.56\end{bmatrix} e(t),\\
\begin{bmatrix} \mathbf{y}(t) \\ \mathbf{w}(t) \end{bmatrix} &= \begin{bmatrix} -1.41&1.76\\0 & -1.76\end{bmatrix}\mathbf{x}(t)+e(t)
\end{align}
\end{subequations}
and finally we can compute the estimator
\begin{subequations}
\begin{flalign}
&\hat{\mathbf{x}}(t+1)=\begin{bmatrix}0.85&-1.6875\\0&-0.5\end{bmatrix}
\hat{x}(t)+\begin{bmatrix}-1.41\\-0.56\end{bmatrix}\mathbf{w}(t)&&\\
&\hat{\mathbf{y}}(t) = \begin{bmatrix} -1.41 & 3.53 \end{bmatrix}
\hat{x}(t)+\mathbf{w}(t) &&
\end{flalign}
\end{subequations}
\begin{figure}
\centering
\vspace*{3mm}
\includegraphics[width=3in]{figures/pred.pdf}
\caption{Process $\mathbf{y}$ and its prediction $\hat{\mathbf{y}}$.}
\label{fig:gibbs}
\end{figure}
\bibliographystyle{cas-model2-names}
Reinforcement learning (RL) is a computational approach to automate goal-directed learning of a policy by maximizing cumulative reward in an environment \citep{sutton2018reinforcement}. In combination with Artificial Neural Networks as function approximators, RL can deal with high dimensional spaces, like images and depth data \citep{openaiSolvingRubikCube2019,kalashnikovQTOptScalableDeep2018}. In recent years, RL has shown impressive results in video games \citep{mnihPlayingAtariDeep} and control applications \citep{lillicrapContinuousControlDeep2015}. A lot of research effort went into improving the state-of-the-art methods, with the goal of large-scale real-world applications \citep{kalashnikovQTOptScalableDeep2018}. To achieve this goal, a certain degree of reliability is a requirement.
Previous work has shown that reproducing results of state-of-the-art RL algorithms is very difficult, and that the robustness of these algorithms can be brittle \citep{hendersonDeepReinforcementLearning2017}. Minor differences in implementation can lead to major changes in performance \citep{hendersonDeepReinforcementLearning2017,pardoTimeLimitsReinforcement2017}. Furthermore, different codebases use different tricks to improve performance and to ensure stability.
This paper aims to investigate implementation details and tricks, which have a significant impact on the performance of RL algorithms, validate their usefulness, and examine their effects. Our main contribution is to identify techniques that have the most impact on the overall performance. These techniques can be split into three categories: initialization schemes, input normalization, and adapting learning rates and gradients. For the evaluation, we chose commonly used algorithms on a set of control benchmark tasks. Our experimental analysis aims to provide guidelines on which techniques to use and which to avoid. For the sake of reproducibility, we open-source the implementation of the algorithms and our evaluation procedures: \url{https://github.com/Nirnai/DeepRL}.
\section{Related Work}
It has been established in recent years that RL research suffers regarding reproducibility and reusability \citep{hendersonDeepReinforcementLearning2017,islamReproducibilityBenchmarkedDeep2017}. \cite{hendersonDeepReinforcementLearning2017} investigated how reproducibility is affected by different hyperparameters and the number of samples. They also examine the effect of environmental characteristics and different open-source codebases. These results can strongly vary based on different hyperparameters and codebases. The details in the codebases, which caused the divergent performances, were not further investigated. Our work analyses those details in-depth and explores their impact.
Due to the algorithms' stochastic nature, a sufficient number of samples is necessary to get an insight into the performance of a population.
Confidence intervals and significance tests are necessary to determine if the difference in populations is, in fact, significant.
\cite{colasHitchhikerGuideStatistical2019,Khetarpal2018REEVALUATERI,colasHowManyRandom2018} propose new evaluation practices to improve upon some of those issues. \cite{colasHitchhikerGuideStatistical2019} investigated statistical significance tests based on the performance population of SAC and TD3 on the Half-Cheetah-v2 environment.
Furthermore, the RL research community made steps towards providing benchmarks with a large variety of tasks \citep{tassaDeepMindControlSuite2018,duanBenchmarkingDeepReinforcement,mnihPlayingAtariDeep}. Some of these benchmarks focus on continuous state and action spaces \citep{tassaDeepMindControlSuite2018,duanBenchmarkingDeepReinforcement}, others focus on discrete state and action spaces \citep{mnihPlayingAtariDeep}. For instance, the Deep Mind Control Suite introduces a unified reward structure, enabling robust performance measures across environments.
Open source baseline implementations are publicly available\footnote[1]{\url{https://github.com/openai/baselines}}\footnote[2]{\url{https://github.com/openai/spinningup}}\footnote[3]{\url{https://github.com/rll/rllab}}. The details of those implementations vary strongly and different hand-engineered techniques are used across codebases. In this work, we analyze those details and their impact on learning behavior and final performance. We hope to provide insights into previously unquestioned defaults and build a better understanding of commonalities and disparities between different implementations.
\section{Common Implementation Details}
This section introduces the techniques that we found to impact learning behavior and performance the most. First, we discuss initialization methods for neural networks used in deep learning. Second, we introduce input normalization and its theoretical justification. Last, we investigate adaptive learning techniques, which either modify the learning rate or the gradients themselves.
\subsection{Initialization}
Initialization can determine whether an iterative algorithm, like RL, converges. If the algorithm converges, initialization could furthermore influence the learning speed and the quality of the solution. Designing principled initialization schemes for neural networks is not trivial because neural network optimization is still not fully understood. Thereby Deep RL also lacks such initialization schemes and, hence, commonly follows a heuristically motivated procedure. Common initialization schemes in supervised learning (SL) are Xavier-, Kaiming/He-, LeCun-, and Orthogonal initialization \citep{montavonNeuralNetworksTricks2012,glorotUnderstandingDifficultyTraining,heDelvingDeepRectifiers2015,saxeExactSolutionsNonlinear2013}. Xavier initialization is usually employed if activation functions of a neural network are symmetric (like sigmoid or tanh).
It is common to use symmetric activation functions for policy gradient methods like Trust Region Policy Optimization (TRPO) \citep{schulmanTrustRegionPolicy2015} and Proximal Policy Optimization (PPO) \citep{schulmanProximalPolicyOptimization2017}. If the activation function is non-symmetric, for example ReLU, Kaiming initialization is preferred. This activation function is often used in Q-learning methods like Twin Delayed Deep Deterministic Policy Gradient (TD3) or Soft Actor-Critic (SAC). LeCun and Orthogonal initialization are applicable for any activation function.
These initialization methods are supposed to break the symmetry of a neural network and reduce the chance of exploding and vanishing gradients \citep{goodfellowDeepLearning2016,montavonNeuralNetworksTricks2012}.
In RL, initialization has an additional impact on exploration since it determines the initial distribution from which actions are sampled. This impacts policy gradient algorithms differently than Q-learning algorithms because they use differently parameterized policies. The initialization schemes used in different baselines are not consistent. The default in the OpenAI Baseline\footnotemark[1] implementation is the Orthogonal initialization, whereas OpenAI SpinningUp\footnotemark[2] uses Tensorflow's default, the Xavier initialization.
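For illustration, orthogonal initialization can be sketched via a QR decomposition of a Gaussian matrix; this is a common construction (deep-learning libraries offer equivalents), not the exact code of any particular baseline.

```python
import numpy as np

def orthogonal_init(shape, gain=1.0, rng=None):
    """Orthogonal initialization of a 2-D weight matrix via QR."""
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = shape
    flat = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(flat)
    # Fix signs so the distribution is uniform over orthogonal matrices.
    q *= np.sign(np.diag(r))
    if rows < cols:
        q = q.T
    return gain * q[:rows, :cols]
```

The resulting rows (or columns) are orthonormal up to the gain factor, which preserves the norm of activations passing through the layer.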
\subsection{Input Normalization}
Deep Learning commonly uses input normalization to increase the learning speed \citep{montavonNeuralNetworksTricks2012}. Likewise, learning speed, and therefore sample efficiency, is a significant concern in RL.
Input normalization ensures that the input data is distributed according to a standard normal distribution by transforming it with the following equation:
\begin{equation}
\tilde{s} = \frac{s - \mu}{\sigma}
\end{equation}
Here, $s$ is the input state, $\tilde{s}$ the normalized state, $\mu$ the mean, and $\sigma$ the standard deviation. If the mean of the inputs is close to zero, it is easier and faster for weights to change their sign if necessary. Furthermore, rescaling the inputs to unit variance balances out the rate at which the weights connected to the input nodes learn \citep{montavonNeuralNetworksTricks2012}.
In SL, applying this transformation is straightforward since the entire dataset is available at the beginning. In RL, this is not the case. Furthermore, the distribution of the input variables is non-stationary. Running estimates of the distribution parameters are necessary to implement input normalization in an RL setting. Welford's algorithm \citep{welfordNoteMethodCalculating1962} is most commonly used to implement such a running estimation\footnotemark[1].
\begin{subequations}
\begin{align}
\mu_n &\leftarrow \mu_{n-1} + \frac{s_n-\mu_{n-1}}{n} \\
\sigma^2_n &\leftarrow \sigma^2_{n-1} + \frac{ (s_n-\mu_{n-1})(s_n-\mu_n)}{n}
\end{align}
\label{eq:RunningMeanVar}%
\end{subequations}
where $\mu_n$ and $\sigma_n$ are respectively the estimated mean and standard deviation after $n$ samples.
The parameter estimates are usually not very accurate at the beginning of the learning process. This is due to the limited number of samples available in the early stages of learning. Hence, this technique does not necessarily impact the performance in the same way as would be expected from SL. This technique is represented in some of the major codebases\footnotemark[1]\footnotemark[3] but is missing in others\footnotemark[2].
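A minimal sketch of such a running normalizer is given below. Note that, as is standard for numerical stability, it tracks the sum of squared deviations ($M_2$) rather than the variance itself; the resulting estimates agree with eq.~\eqref{eq:RunningMeanVar}. The class name and the $\epsilon$ guard are our choices.

```python
import numpy as np

class RunningNormalizer:
    """Online input normalization using Welford's algorithm."""
    def __init__(self, dim, eps=1e-8):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)  # running sum of squared deviations
        self.eps = eps

    def update(self, s):
        s = np.asarray(s, dtype=float)
        self.n += 1
        delta = s - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (s - self.mean)

    def normalize(self, s):
        var = self.m2 / max(self.n, 1)
        return (np.asarray(s) - self.mean) / np.sqrt(var + self.eps)
```

In practice, `update` is called on every observed state before `normalize` is applied to the network input.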
\subsection{Adaptive Learning Techniques}
We refer to methods that modify the gradients or the learning rate during training by adaptive learning techniques.
Adaptive learning is especially crucial for Policy Gradient methods since they suffer from very noisy gradients and, hence, can potentially take updates in directions that do not point towards the true gradient. PPO and TRPO reduce the impact of noisy gradients with trust regions. Since trust regions are not usually computed exactly, additional techniques are required to deal with approximation errors. The clipped objective in PPO, for example, follows the heuristic of keeping consecutive policies close to each other. However, it does not enforce a trust-region \citep{ilyasAreDeepPolicy2018}. The desired property of small policy changes is often not fulfilled, making the vanilla implementation of PPO very unstable.
This section introduces five techniques that are most commonly found in implementations of policy gradient algorithms\footnotemark[1]\footnotemark[2]\footnotemark[4].
These techniques are Learning Rate Schedules, Advantage Normalization, Gradient Clipping, KL-Stopping, and KL-Cutoff. They are often necessary to achieve state-of-the-art performance.
\textbf{Learning Rate Schedules\footnotemark[1] (LRS)} are used to decay the learning rate gradually throughout learning. This technique only appears in one baseline implementation\footnotemark[1], where it is used in combination with stochastic gradient descent to reduce noise. It can help to manage noise especially, though not exclusively, in policy gradient methods. A popular choice for the schedule is a linear decay
\begin{equation}
\alpha_t = \alpha_0 \cdot \left(1-\frac{t}{t_{\text{total}}}\right)
\end{equation}
where $\alpha_t$ is the learning rate at step $t$ and $t_{\text{total}}$ is the total number of steps.
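The linear schedule amounts to a one-line function; a sketch (clipping at zero once $t$ exceeds $t_{\text{total}}$ is our choice, to avoid negative rates):

```python
def linear_lr(alpha_0, t, t_total):
    """Linearly decayed learning rate, clipped at zero after t_total steps."""
    return alpha_0 * max(0.0, 1.0 - t / t_total)
```

The returned value would typically be written into the optimizer's learning rate before each update.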
\textbf{Advantage Normalization\footnote[1]{\url{https://github.com/openai/baselines}}\footnote[2]{\url{https://github.com/openai/spinningup}}\footnote[3]{\url{https://github.com/rll/rllab}}\footnote[4]{\url{https://github.com/joschu/modular_rl}} (AN)} transforms the advantages $\hat{A}_{\pi}$, used to compute the policy gradient, to have zero mean and unit variance. This acts as an adaptive learning rate heuristic and bounds the gradient variance \citep{tuckerMirageActionDependentBaselines2018}. It is one of the most commonly used techniques and is represented in all the baseline implementations we investigated\footnotemark[1]\footnotemark[2]\footnotemark[3]\footnotemark[4].
\begin{equation}
\bar{A}_{\pi} = \frac{\hat{A}_{\pi} - \mu(\hat{A}_{\pi})}{\sigma(\hat{A}_{\pi})}
\end{equation}
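A sketch of advantage normalization as typically implemented per batch (the small $\epsilon$ in the denominator is a common guard against constant advantage batches, not part of the equation above):

```python
import numpy as np

def normalize_advantages(adv, eps=1e-8):
    """Zero-mean, unit-variance advantages within one batch."""
    adv = np.asarray(adv, dtype=float)
    return (adv - adv.mean()) / (adv.std() + eps)
```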
\textbf{Gradient Clipping\footnotemark[1]} limits the maximum norm of the policy gradient. Whenever the gradient norm exceeds the threshold $\alpha$, the gradient is rescaled according to the following update rule.
\begin{equation}
\nabla_{\theta} J\left(\pi_{\theta}\right) \leftarrow \nabla_{\theta} J\left(\pi_{\theta}\right) \cdot \frac{\alpha}{\left\|\nabla_{\theta} J\left(\pi_{\theta}\right)\right\|_{2}}
\end{equation}
$\pi_{\theta}$ is the policy with parameters $\theta$, and $\nabla_{\theta} J\left(\pi_{\theta}\right)$ is the policy gradient. This technique limits the step size in parameter space, thereby avoiding very large gradient updates. It is commonly used to address the issue of exploding gradients and was only found in the OpenAI Baseline\footnotemark[1].
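A sketch of clipping by the global norm, operating on a flat gradient vector for simplicity (deep-learning frameworks provide equivalents that act on lists of parameter gradients):

```python
import numpy as np

def clip_grad_norm(grad, max_norm):
    """Rescale the gradient so its global L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad
```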
\textbf{KL-Stopping\footnotemark[2]} explicitly estimates the KL-Divergence after an update with the following equation
\begin{equation}
\hat{D}_{\text{KL}} = \mathbb{E}\Big[ \log(\pi_{\theta_{\text{old}}}(a|s)) - \log(\pi_{\theta_{\text{new}}}(a|s)) \Big]
\end{equation}
If this value is larger than a threshold, optimization on the current data is stopped, and new data is collected. This technique directly addresses the fact that the clipped PPO objective does not guarantee small updates. The only baseline implementation using this technique calls it early stopping\footnotemark[2]. In the following, we refer to this technique as KL-Stopping to avoid any confusion with early stopping from an SL context.
\textbf{KL-Cutoff} also addresses the aforementioned issues with the clipped objective. Instead of stopping optimization after violating the KL constraint, a correction term is added to the loss function of the next update, which is defined as follows.
\begin{equation}
L = L^{\text{CLIP}} - \alpha \cdot (\hat{D}_{\text{KL}} > D_{\text{thr}}) \cdot (\hat{D}_{\text{KL}} - D_{\text{thr}})^2
\end{equation}
where $L^{\text{CLIP}}$ is the clipped objective and $D_{\text{thr}}$ is the KL constraint. After a violation of the constraint has occurred, this objective prioritizes the reduction of the KL-Divergence over the original objective. The resulting gradient step corrects the bad update and resumes optimization on the same data. This technique was used in the original implementation of PPO\footnotemark[3] but was not found in other baseline implementations\footnotemark[1]\footnotemark[2]\footnotemark[4].
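A sketch of the cutoff objective; the penalty coefficient `alpha` here is a hypothetical default, not a value taken from any baseline.

```python
def kl_cutoff_loss(l_clip, kl_hat, kl_thr, alpha=50.0):
    """Clipped objective with a quadratic KL penalty that activates
    only once kl_hat exceeds kl_thr."""
    penalty = alpha * max(0.0, kl_hat - kl_thr) ** 2
    return l_clip - penalty
```

Below the threshold the penalty vanishes and the clipped objective is optimized unchanged; above it, the quadratic term dominates and pushes the policy back toward the constraint set.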
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{images/init_histograms.pdf}
\caption{Probability density of actions selected under the initial policy for different
initialization schemes in the cartpole environment for the swingup task.}
\label{fig:histInit}
\end{figure}
\section{Experimental Analysis}
For the experimental analysis of initialization schemes and input normalization, we report results for TD3 and TRPO. These algorithms use fundamentally different parameterizations and represent the class of Q-learning methods and Policy Gradient methods, respectively. Additionally, we ran experiments on SAC and CGP to investigate Q-learning algorithms with differently parameterized policies. To facilitate the readability of the plots, we report additional results only if they provide additional insights.
Adaptive learning techniques are evaluated on PPO since they are mainly used for policy gradient methods as they show particularly unstable learning behavior. All algorithms are implemented with the autograd library PyTorch \citep{pytorch}.
We run experiments on six continuous control tasks from the Deep Mind Control Suite, namely cartpole-balance, cartpole-swingup, acrobot-swingup, cheetah-run, hopper-hop, and walker-run. They cover linear and non-linear dynamics in low- and high-dimensional state and action spaces. Returns are collected in an offline manner and averaged over ten episodes. We run each algorithm for ten different seeds and report the mean performance.
This choice is common in recent publications \citep{fujimotoAddressingFunctionApproximation2018,simmons-edlerQLearningContinuousActions2019}. All observed effects have an effect size larger than two standard deviations and hence indicate strong evidence for these effects.
Bootstrap confidence intervals are used to give a $95\%$ confidence level for the reported result.
\subsection{Initialization}
Due to the underlying assumption that initialization influences differently parameterized policies in a different manner, we chose TD3, SAC, and TRPO.
TD3 uses a deterministic policy with additive noise. As for SAC, it uses a stochastic policy. Both policies bound their outputs by passing them through the $\tanh$ function. TRPO also uses a Gaussian policy, but without bounding the output. This results in a clipping of the actions by the environment.
Note that initialization only affects the mean value of the Gaussian policies. The standard deviations are initialized with the absolute value of the action limits. The standard deviation of the additive noise for the deterministic policy is set to $0.1$.
\begin{figure}
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/TD3_init.pdf}
\caption{Mean performance of TD3 under LeCun, Kaiming and Orthogonal initialization.}
\label{fig:TD3Init}
\end{minipage}%
\hfill%
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/SAC_init.pdf}
\caption{Mean performance of SAC under LeCun, Kaiming and Orthogonal initialization.}
\label{fig:SACInit}
\end{minipage}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{images/late_start.pdf}
\caption{Replay buffer of TD3 initialized with 10,000 data points sampled from a uniform action distribution, to reduce the dependence on the initial policy parameters.}
\label{fig:TD3LateStart}
\end{figure}
Figure~\ref{fig:histInit} shows the initial action distributions for the three policies under the different initialization schemes.
5000 states were randomly sampled from a standard normal distribution. Each density estimate was averaged over 100 random initializations. The figure shows that the spread of the means increases across the initializations, starting with LeCun and reaching its maximum with Xavier/Kaiming initialization. Because of TD3's low exploration noise level, the different initialization schemes greatly impact the initial action distribution. If the spread of the action distribution is larger than the action limits of the environment, the edges account for a large share of the probability mass due to action clipping. Heuristically, it is desirable to have a large spread while avoiding clipping by the environment, in order to increase the diversity of actions explored. The experimental results on TD3 support these heuristics, as shown in figure~\ref{fig:TD3Init}.
Orthogonal initialization outperforms all other schemes in terms of learning speed and final performance. In the more complex environments, Kaiming initialization only slows down convergence, and the final performance is almost as good as with Orthogonal initialization.
In the cartpole environment, however, the final performance is significantly worse.
This drop is caused by some of the Kaiming-initialized runs not learning at all, which weighs down the overall average performance.
This finding indicates that the cartpole environment is more sensitive to reduced exploration.
The original publication of TD3 \citep{fujimotoAddressingFunctionApproximation2018} suggests a custom procedure to reduce the impact of the initial parameters on the performance. This technique fills the replay buffer with a predefined number of transitions. The actions for those transitions are not sampled from the policy but a uniform distribution over the action space. This increases the performance and learning speed in our experiments but does not achieve similar performance levels as TD3 with orthogonal initialization.
Figure~\ref{fig:TD3LateStart} shows how the proposed replay buffer initialization, in addition to the LeCun method, compares to the plain LeCun and Orthogonal schemes. The proposed technique claims to remove the dependency on the initial policy parameters. If this were the case, it should perform at least as well as the best initialization scheme.
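The replay buffer initialization from \citep{fujimotoAddressingFunctionApproximation2018} amounts to filling the buffer with uniformly sampled actions before the policy is ever queried; a minimal sketch, in which the environment and buffer interfaces are hypothetical stand-ins:

```python
import random
from collections import deque

def warm_start(env, buffer, n_transitions, action_low, action_high):
    """Fill the replay buffer with uniformly random actions before
    training starts, so that the earliest experience does not depend on
    the initial policy parameters."""
    s = env.reset()
    for _ in range(n_transitions):
        a = random.uniform(action_low, action_high)  # uniform over actions
        s_next, r, done = env.step(a)
        buffer.append((s, a, r, s_next, done))
        s = env.reset() if done else s_next

class ToyEnv:
    """Tiny 1-D environment used only to exercise the routine."""
    def reset(self):
        self.x = 0.0
        return self.x
    def step(self, a):
        self.x += a
        return self.x, -abs(self.x), abs(self.x) > 5.0

buffer = deque(maxlen=10_000)
warm_start(ToyEnv(), buffer, n_transitions=10_000,
           action_low=-1.0, action_high=1.0)
```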
The initial action distribution of SAC's stochastic policy is close to uniform. Only the Kaiming-initialized version has more probability mass at the edges. This concentration of probability mass degrades the learning speed and the final performance, as shown in figure~\ref{fig:SACInit}.
The similar action distributions under LeCun and Orthogonal initialization result in almost identical learning behavior for all environments. This finding validates the hypothesis that the influence of initialization on the initial action distribution has a significant impact on learning behavior.
TRPO under different schemes shows similar results; we report these results in Appendix \ref{app:trpo}. Due to the high standard deviation at the start of training and the unbounded policy, most of the probability density is focused on the edges. The smaller the estimated means' variation, the smaller the overall variance of the initial action distribution will be. This, in turn, leads to more probability density for the actions in the center of the distribution. The initialization scheme that provides the least amount of spread to the means is LeCun. The performance of this initialization scheme is slightly better than that of Xavier and Orthogonal initialization. However, we do not observe a statistically significant effect, and experiments with more samples need to be conducted for confirmation. The difference in performance for the Xavier scheme is statistically significant, at least in cartpole-swingup and cheetah-run.
\begin{figure}
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/TRPO_norm.pdf}
\caption{TRPO return with normalized vs. regular input states.}
\label{fig:TRPONorm}
\end{minipage}%
\hfill%
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/TD3_norm.pdf}
\caption{TD3 return with normalized vs. regular input states.}
\label{fig:TD3Norm}
\end{minipage}
\end{figure}
\subsection{Input Normalization}
Due to its conservative updates, TRPO converges very slowly. The slow convergence is especially evident on the more complex tasks, where the algorithm fails to converge within the given $1000$ episodes. With input normalization, the algorithm achieves significantly better performance on those tasks, as shown in figure~\ref{fig:TRPONorm}.
Even with normalized inputs, however, the final performance on the high-dimensional tasks is still not satisfactory. Furthermore, the confidence bounds indicate that the variance of the learning curve is reduced when normalization is employed.
This effect is not observable for TD3, where input normalization decreases the overall performance, as shown in figure~\ref{fig:TD3Norm}. Similar results were observed with SAC, which might indicate an incompatibility of input normalization with a replay buffer.
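Input normalization is typically implemented as a running mean/variance estimate updated as states stream in; the sketch below uses the standard Welford update, with an illustrative interface.

```python
import numpy as np

class RunningNormalizer:
    """Online mean/variance estimate (Welford) for normalizing states."""
    def __init__(self, dim, eps=1e-8):
        self.n, self.mean = 0, np.zeros(dim)
        self.m2, self.eps = np.zeros(dim), eps

    def update(self, x):
        # Welford's single-pass update of mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def __call__(self, x):
        var = self.m2 / max(self.n - 1, 1)
        return (x - self.mean) / np.sqrt(var + self.eps)

norm = RunningNormalizer(dim=3)
rng = np.random.default_rng(0)
states = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))
for s in states:
    norm.update(s)
z = np.array([norm(s) for s in states])  # normalized states
```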
\begin{figure}
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/adaptive_0.pdf}
\caption{Baseline PPO performance compared with added Learning Rate Schedule and Advantage Normalization.}
\label{fig:adaptive0}
\end{minipage}%
\hfill%
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/adaptive_kl1.pdf}
\caption{Mean KL-Divergence for PPO with Learning Rate Schedules and Advantage Normalization. The dashed line represents the desired trust region.}
\label{fig:adaptive1KL}
\end{minipage}
\end{figure}
\subsection{Adaptive Learning}
For experimental analysis of adaptive learning techniques, we chose to focus only on PPO because it relies on many of them and exhibits the most unstable learning behavior without using any of those techniques.
Figure~\ref{fig:adaptive0} shows how PPO performs in all environments if not aided by any learning adaptation.
The combination of LRS and AN is necessary to achieve any learning progress in some environments. The KL-Divergence may explain why the vanilla implementation performs so poorly. Figure~\ref{fig:adaptive1KL} shows how the KL-Divergence of the vanilla implementation violates the desired trust region. Especially in the cartpole environment, large spikes are observable. LRS seems to reduce the average KL-Divergence but still allows for occasional spikes. Conversely, AN does not reduce the average but filters out the spikes. Using both techniques together achieves low KL-Divergence in all environments. The desired average KL-Divergence is commonly set to $0.01$, which seems reasonable for most environments except for cartpole-balance. One potential reason is that the closeness to linear dynamics might allow for more drastic policy changes.
In that environment, even with both adaptation techniques, the KL-Divergence is much higher than the desired value, yet performance is very good.
One of the remaining techniques can be added on top of LRS and AN to keep the KL-Divergence even more tightly within this threshold; we refer to LRS and AN together as the adapted baseline. Figure~\ref{fig:adaptive2} shows the performance of the modified algorithms.
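Both baseline techniques are small in code: AN standardizes each advantage batch, and a KL-driven LRS rescales the step size whenever the measured divergence drifts from the target. The thresholds and factors below are illustrative, not the exact values of our implementation.

```python
import numpy as np

def normalize_advantages(adv, eps=1e-8):
    """Advantage Normalization: standardize the advantage batch."""
    adv = np.asarray(adv, dtype=float)
    return (adv - adv.mean()) / (adv.std() + eps)

def adapt_lr(lr, kl, kl_target=0.01, factor=1.5, lr_min=1e-6, lr_max=1e-2):
    """KL-based Learning Rate Schedule: shrink the step size when the
    measured KL overshoots the trust region, grow it when it undershoots."""
    if kl > 2.0 * kl_target:
        lr /= factor
    elif kl < 0.5 * kl_target:
        lr *= factor
    return float(np.clip(lr, lr_min, lr_max))
```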
\begin{figure}
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/adaptive_1.pdf}
\caption{PPO performance with Learning Rate Schedules, Advantage Normalization and one additional technique.}
\label{fig:adaptive2}
\end{minipage}%
\hfill%
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[width=\columnwidth]{images/adaptive_kl2.pdf}
\caption{Mean KL-Divergence for PPO with LR Schedule, Advantage Normalization and an additional technique. The dashed line represents the desired trust region.}
\label{fig:adaptive2KL}
\end{minipage}
\end{figure}
The figure shows that KL-Cutoff generally achieves performance as good as or better than the adapted baseline. However, it does reduce the learning speed slightly.
KL-Stopping significantly reduces the learning speed and does not always provide better performance. Gradient Clipping does not seem to offer a significant benefit over the adapted baseline.
The resulting KL-Divergence is shown in figure~\ref{fig:adaptive2KL}. The only technique that manages to uphold the desired threshold is KL-Cutoff. KL-Stopping even increases the average KL-Divergence by not allowing the clipped objective to optimize over enough epochs.
This is surprising since baselines like OpenAI Baselines\footnote[1]{\url{https://github.com/openai/baselines}} and OpenAI SpinningUp\footnote[2]{\url{https://github.com/openai/spinningup}} do not use this technique in their implementations. Only the original implementation\footnote[4]{\url{https://github.com/joschu/modular_rl}} deploys KL-Cutoff.
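One common realization of KL-Cutoff, following the spirit of the original implementation, augments the clipped surrogate with a quadratic penalty that activates once the mean KL-Divergence exceeds a multiple of the target; the coefficients below are illustrative.

```python
import numpy as np

def ppo_loss_with_kl_cutoff(ratio, adv, kl, clip=0.2,
                            kl_target=0.01, cutoff_coef=1000.0):
    """Clipped PPO surrogate plus a KL-Cutoff term: a quadratic penalty
    that activates once the mean KL exceeds twice the target."""
    ratio = np.asarray(ratio, dtype=float)
    adv = np.asarray(adv, dtype=float)
    surrogate = np.minimum(ratio * adv,
                           np.clip(ratio, 1.0 - clip, 1.0 + clip) * adv).mean()
    loss = -surrogate
    if kl > 2.0 * kl_target:               # cutoff region entered
        loss += cutoff_coef * (kl - 2.0 * kl_target) ** 2
    return float(loss)
```

Minimizing this loss leaves the surrogate untouched while the policy stays inside the trust region, but penalizes updates heavily once it leaves.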
In general, it can be seen that without these techniques, PPO is unable to solve the benchmark tasks. State-of-the-art performance is achievable only with the right combination of methods.
\section{Discussion and Conclusion}
We investigate the impact of initialization, normalization, and adaptive learning on state-of-the-art deep reinforcement learning algorithms through experimental methods. The results show that initialization changes the initial action distribution and therefore influences Deep RL algorithms differently than Deep SL algorithms. In deep RL, the initial action distribution dictates the exploration behavior, at least at the early stages. Thus, there is a need to develop RL specific initialization methods that account for the initial action distribution. Currently, Orthogonal initialization provides the best results and can be used for any architecture.
Input normalization improved the performance of TRPO, but could not do so for TD3 and SAC. In general, our findings could indicate that input normalization should not be used with Q-learning algorithms.
Finally, we investigate adaptive learning techniques applied to PPO. Our experiments show that the algorithm could not achieve state-of-the-art performance without such methods. We observe the best performance when using a combination of Learning Rate Scheduling, Advantage Normalization, and KL-Cutoff.
In general, Initialization and Adaptive Learning are implementation details that need to be carefully considered when implementing deep RL algorithms. Both can influence the performance and even cause algorithms not to learn at all. Overall, we conclude that implementation details have a strong influence on the final performance of an algorithm. Our findings encourage full transparency in RL research for the sake of reproducibility and reusability. It should become standard practice to include implementation details in publications if they affect performance significantly.
\section*{ACKNOWLEDGEMENTS}
We gratefully acknowledge the funding of this work by Microsoft Germany and the Alfried Krupp von Bohlen und Halbach Foundation.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{W}{ireless} communication technologies in fifth-generation (5G) mobile networks provide multigigabit-per-second data rates, which fulfill the backhaul rate requirements \cite{niu2015survey, rangan2014millimeter}.
A key technology of 5G systems is millimeter-wave (mmWave) communications, which is advantageous because its broad spectral band increases the communication capacity \cite{cao2018capacity}.
In contrast to time- and cost-intensive optical fiber deployments, a mmWave wireless backhaul network has the advantages of high flexibility, cost efficiency, and rapid deployment of backhaul connections \cite{niu2015survey}.
However, signals transmitted in the mmWave frequency band experience larger path loss, which mandates the use of large antenna arrays and adaptive control of the array weights to ensure the main lobes of the antennas are pointing toward each other\cite{wei2014key}.
This adaptive control of the antenna array is required not only for initial access, but also during operation in mobile scenarios or in quasi-static scenarios where mmWave nodes are occasionally displaced owing to various perturbations.
The latter is termed beam-tracking, and in view of the large overhead involved in tracking, developing efficient beam-tracking methods has attracted considerable research interest, as is briefly discussed below.
\subsection{Related Work and Motivations}
\begin{table*}[!t]
\caption{Comparison with Previous Works on Beam-Tracking}
\begin{center}
\begin{tabular}{cccc}
\toprule
Reference & 1) Periodically solving beam-optimization? & 2) Learning-Based? & \textbf{Addressing Training-Test Gap?} \\
\midrule
\cite{11ad, alexandropoulos2017position, zhang2016tracking, va2016beam, shaham2019fast, larew2019adaptive, lim2019beam} & Yes & No & -- \\
\cite{lin2019beamforming, elbir2019joint, elbir2019deep, wang2018mmwave, wang2019mmwave, klautau20185g} & No & Yes (SL) & No \\
\cite{wang2019reinforcement, mismar2019deep} & No & Yes (RL) & No \\
Previous version \cite{shinzaki2020deep, koda2020millimeter} & No & Yes (RL) & No \\
This paper & No & Yes (RL) & \textbf{Yes} \\
\bottomrule
\end{tabular}
\label{tab:previous_works_ver2}
\end{center}
\end{table*}
Previous research addressed the beam-tracking problem mainly via the following two approaches: 1) periodically solving beam-optimization problems \cite{11ad, alexandropoulos2017position, zhang2016tracking, va2016beam, shaham2019fast, larew2019adaptive, lim2019beam}, 2) learning a beam-tracking policy beforehand \cite{lin2019beamforming, elbir2019joint, elbir2019deep, wang2018mmwave, wang2019mmwave, klautau20185g, wang2019reinforcement, mismar2019deep, shinzaki2020deep, koda2020millimeter}.
In the first approach, a mmWave node optimizes the array antenna weights based on the estimated channels or angles of arrival (AoAs)/angles of departure (AoDs).
For example, in the IEEE 802.11ad standard \cite{11ad}, channels are surveyed by steering the transmitter/receiver beams; thus the array antenna weights are optimized.
The work in \cite{alexandropoulos2017position} estimated the AoAs/AoDs, and optimized the array antenna weights based on these estimations.
The works in \cite{zhang2016tracking, va2016beam, shaham2019fast, larew2019adaptive, lim2019beam} used filtering methods to estimate the AoAs/AoDs based on the surveyed channel and calculated the optimal antenna weights.
Although adaptive, this approach incurs computational overhead to periodically solve optimization problems whose complexity scales with the number of antennas.
In the second approach, with the help of powerful machine-learning (ML) techniques approximating the input-output relationships, appropriate antenna weights or beam steering angles are learned beforehand.
Although the computational overhead increases during the training procedure, appropriate antenna weights or beam-steering angles can thereafter be output with far fewer computations than by solving the optimization problems.
Prior studies mainly leveraged supervised learning (SL), where the training data for optimal angles are given in advance \cite{lin2019beamforming, elbir2019joint, elbir2019deep, wang2018mmwave, wang2019mmwave, klautau20185g}.
Wang \textit{et al.} and Mismar \textit{et al.} \cite{wang2019reinforcement, mismar2019deep} leveraged reinforcement learning (RL) techniques, where a mmWave node learns appropriate beam steering angles from the received power without being given the optimal angles as training data.
Our previous studies relating to this work \cite{shinzaki2020deep, koda2020millimeter} were also categorized as focusing on RL-based beam tracking, where we considered the beam tracking of a mmWave node placed on an overhead messenger wire.
Although this second approach is attractive with respect to its ability to perform beam tracking, its applicability to the real environment is arguable because this approach may experience a training-test gap in terms of the environmental parameters that affect the node dynamics, which is the main focus of this study.
Generally, ML models perform worse as the gap between training and testing increases in terms of dataset distributions in SL or parameters that determine the state dynamics in RL.
Hence, this performance deterioration may naturally occur in the above learning-based beam tracking when a training-test gap exists.
To the best of our knowledge, this problem has been overlooked in the aforementioned prior studies pertaining to learning-based beam tracking.
Note that this is also the main difference from our prior studies \cite{shinzaki2020deep, koda2020millimeter}, where the training-test gap affecting node dynamics was not considered; in this sense, the contribution is different from these previous studies.
The differences between this work and the previous work are summarized in Table~\ref{tab:previous_works_ver2}.
\subsection{Contributions}
In view of the above problem, this work addresses the following two questions as the contribution of this study: 1) what would happen if a training-test gap were to exist in learning-based beam-tracking? 2) If this gap has a negative effect on beam tracking, how could we solve the problem caused by this training-test gap?
To address the first question, we confirm that the training-test gap causes the received power to deteriorate via numerical evaluation.
To address the second question, we first conceptualize \textit{zero-shot adaptation} as our objective.
Here, zero-shot adaptation implies that a learning agent exhibits feasible performance in test scenarios even when there are training-test gaps.
To realize zero-shot adaptation in mmWave beam-tracking, we applied robust adversarial reinforcement learning (RARL)\cite{pinto2017robust}, which is detailed below.
It should be noted that when examining these problems, we consider a particular scenario in which a mmWave node is placed on an overhead messenger wire, similar to our previous work (\cite{shinzaki2020deep} and \cite{koda2020millimeter}).
This is because this scenario is affected by several parameters that determine the node dynamics, such as the mass and tension of the wire, which facilitate the investigation of the training-test gap problem.
We believe that, although we are considering a particular scenario, this work provides firsthand insight regarding the overlooked problem caused by the training-test gap and provides the opportunity to rethink learning-based systems in the wireless communication research area.
In view of this, the contributions of this work are summarized as follows:
\begin{itemize}
\item
Given an example scenario in which a mmWave node is placed on an overhead messenger wire (see Fig.~\ref{fig:system_model}), through numerical evaluations, we confirm that the gap between the training and test scenarios deteriorates the beam-tracking performance.
This debunks the importance of addressing training-test gaps to provide reliable mmWave links, and to the best of our knowledge, this perspective has not yet been reported in the literature.
\item We demonstrate the feasibility of zero-shot adaptation of learning-based mmWave beam-tracking in the aforementioned scenarios.
The key idea is to leverage RARL, wherein a beam-tracking agent is trained competitively to correspond to an intelligent adversary that attempts to cause beam misalignment by introducing additional wind disturbance.
Through numerical evaluations, we show that even if the test scenarios were different from the training scenarios in terms of the parameters that affect the node dynamics, the proposed method prevents a drastic performance loss in terms of received power without adaptively fine-tuning the test scenario.
\end{itemize}
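The alternation at the heart of RARL can be illustrated with a toy scalar system: the protagonist and the adversary are updated in turn, each against the other's frozen policy. The hill-climbing updates and all constants below are illustrative stand-ins for the deep RL agents and the wire environment used in this paper.

```python
import random

def rollout(protagonist_gain, adversary_gain, steps=50):
    """Toy episode: the protagonist steers x toward 0 while the adversary
    injects a disturbance; the protagonist's return is the negative of
    the adversary's (zero-sum)."""
    x, ret = 1.0, 0.0
    for _ in range(steps):
        x = x - protagonist_gain * x + 0.1 * adversary_gain
        ret += -abs(x)
    return ret

def improve(gain, objective, sigma=0.05, trials=10):
    """Hill-climbing stand-in for one RL policy-update phase."""
    best, best_val = gain, objective(gain)
    for _ in range(trials):
        cand = gain + random.gauss(0.0, sigma)
        val = objective(cand)
        if val > best_val:
            best, best_val = cand, val
    return best

random.seed(0)
p_gain, a_gain = 0.0, 0.0
for _ in range(20):  # RARL alternation
    # 1) update the protagonist against the frozen adversary
    p_gain = improve(p_gain, lambda g: rollout(g, a_gain))
    # 2) update the adversary against the frozen protagonist
    a_gain = improve(a_gain, lambda g: -rollout(p_gain, g))
```

Because the protagonist is forced to counteract an adversary that keeps adapting, the resulting policy tends to be robust to disturbances it never saw verbatim during training, which is exactly the property exploited for zero-shot adaptation.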
In a nutshell, our main scope is to validate the effectiveness of adding an intelligent adversary during training, thereby confirming the concept of zero-shot adaptation in the context of mmWave beam-tracking.
Thus, particularly in the evaluation in Section~\ref{sec:simulation}, we focus on the difference between the proposed method with the adversary and a baseline method without the adversary.
For this reason, delving into problems that apply equally to RL-based beam-tracking methods with and without the adversary (e.g., tracking delay and the accuracy of state acquisition) is beyond the scope of this paper.
We believe that, even without addressing these problems, our evaluations sufficiently validate the aforementioned contributions.
The remainder of this paper is organized as follows.
In Section~\ref{sec:motivation}, we introduce zero-shot adaptation to overcome the training-test gap in RL problems and provide the motivation for solving this training-test gap in the mmWave beam-tracking problem.
In Section \ref{sec:beam_tracking}, we formulate the system model as an RL task.
In Section \ref{sec:adversarial_rl}, we explain the adversarial RL algorithm.
In Section \ref{sec:simulation}, we describe the simulation evaluation of the proposed beam-tracking policy.
Finally, we present our conclusions in Section \ref{sec:conclusion}.
\section{Definition and Motivation of Zero-Shot Adaptation}
\label{sec:motivation}
\subsection{Definition of Zero-Shot Adaptation}
We define the zero-shot adaptation problem considered in this study.
Let us consider a Markov decision process (MDP) $(\mathcal{S}, \mathcal{A}, r, p_{\bm \theta})$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action space, respectively, $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the reward function, and $p_{\bm \theta}:\mathcal{S}\times\mathcal{A}\to \mathcal{S}$ is the state transition rule.
Note that the state transition rule is subject to the static parameter $\bm \theta$, which we refer to as the environmental parameters.
Therein, at each time step $k=1, 2, \dots, K$, a decision maker observes a state $s_k\in\mathcal{S}$ and determines an action $a_k\in\mathcal{A}$ according to a policy $\pi:\mathcal{S}\to\mathcal{A}$, and receives a reward $r_k$.
The objective of the decision-maker is to seek a policy that maximizes the expected sum of rewards $\sum_{k = 1}^{K}\gamma^{k-1}r_k$, where $\gamma\in[0, 1]$ is the discount factor.
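The interaction loop and the discounted-return objective can be written generically by treating the transition rule as a function parameterized by $\bm \theta$; the concrete dynamics, policy, and reward below are illustrative.

```python
import numpy as np

def discounted_return(policy, step_fn, theta, s0, K=100, gamma=0.99, seed=0):
    """Roll out a policy in an MDP whose transition rule p_theta is
    parameterized by the environmental parameters theta."""
    rng = np.random.default_rng(seed)
    s, ret = s0, 0.0
    for k in range(K):
        a = policy(s)                      # a_k = pi(s_k)
        s, r = step_fn(s, a, theta, rng)   # (s_{k+1}, r_k) under p_theta
        ret += gamma ** k * r              # discounted sum of rewards
    return ret

# Illustrative 1-D dynamics: theta scales the state decay, and the
# reward penalizes distance from the origin.
def step_fn(s, a, theta, rng):
    s_next = theta * s + a + 0.01 * rng.normal()
    return s_next, -abs(s_next)

ret = discounted_return(policy=lambda s: -0.5 * s, step_fn=step_fn,
                        theta=0.9, s0=1.0)
```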
Given the aforementioned decision process, we define the zero-shot adaptation problem as follows:
\begin{definition}[Zero-shot adaptation]
Let us consider two MDPs $\mathit{MDP}_{\mathrm{train}} = (\mathcal{S}, \mathcal{A}, r, p_{\bm \theta_{\mathrm{train}}})$ and $\mathit{MDP}_{\mathrm{test}} = (\mathcal{S}, \mathcal{A}, r, p_{\bm \theta_{\mathrm{test}}})$ that differ in terms of the environmental parameters, that is, $\bm \theta_{\mathrm{train}}\neq \bm \theta_{\mathrm{test}}$.
During the learning procedure of a policy, the decision-maker can act only in $\mathit{MDP}_{\mathrm{train}}$ and cannot access $\bm \theta_{\mathrm{test}}$.
At the same time, the decision-maker or another third-party agent can ``manipulate'' training data, that is, they can replace several elements in the obtained state-action trajectory $(s_1, a_1, \dots, s_{K}, a_{K})$ with other values.
Given this constraint, zero-shot adaptation is defined as finding a policy that maximizes the expected sum of rewards in $\mathit{MDP}_{\mathrm{test}}$ without any re-training of the policy.
\end{definition}
In a nutshell, the zero-shot adaptation considered in this study is to find the optimal policy in an unseen environment for a decision-maker with the tolerance of manipulating the state-action trajectory.
Hence, the problem reduces to the manipulation of the state-action trajectory, that is, the training data, for which RARL is an effective approach, as demonstrated throughout this study.
Note that this problem is not necessarily identical to a ``zero-shot learning problem'' \cite{wang2019survey} in an SL context in terms of the way in which the problem regarding the training-test gap is overcome.
In the zero-shot learning as defined in \cite{wang2019survey}, an SL model is trained to enable the model to classify not only samples with a class label seen during training, but also those with unseen classes during training.
Rather than achieving this by manipulating the training data, it is accomplished with ``auxiliary knowledge,'' which indicates pre-obtained feature information involving class labels unseen in training\footnote{A well-known example is the problem of classifying an image of a ``zebra.'' Even if an image of the zebra is not contained in the training data, it would be possible to predict that it is an image of a zebra when it is known that a zebra has the appearance of a ``striped horse,'' and images labeled with ``horse'' and ``striped'' appear in the training dataset.}.
However, we also consider our problem to be ``zero-shot'' by focusing on the common aspect underlying both problems concerned with the training-test gap.
\subsection{Motivation of Zero-Shot Adaptation in mmWave Beam-Tracking}
As is shown in the subsequent sections, we consider the beam-tracking problem on an overhead messenger wire.
This scenario involves several environmental parameters, and among these, we select an overall mass $m$ and spring constant (i.e., wire tension) $k_0$ of the wire to which a mmWave node is attached.
These parameters can hardly be measured precisely, particularly when a messenger wire is pre-installed.
Hence, to attach the mmWave nodes to such a pre-installed messenger wire, a beam-tracking policy should be trained without accessing these parameters.
One possible approach is to train the beam-tracking policy after attaching the nodes to a pre-installed messenger wire such that $\bm \theta_{\mathrm{train}} \simeq \bm \theta_{\mathrm{test}}$.
However, this approach has the following two drawbacks, both of which can be solved via zero-shot adaptation.
First, in this approach, because it is necessary to train the beam-tracking policy after the attachment, supplying the connections is delayed.
However, if the beam-tracking policy could be pre-trained by simulations via zero-shot adaptation, the connection could be supplied immediately, which would be preferable for real deployments.
Second, even if the beam-tracking policy could be trained quickly using this approach, the aforementioned parameters gradually but certainly vary with time owing to the degradation of the wire over time.
This yields another test scenario $\bm \theta'_{\mathrm{test}}$, where $\bm \theta_{\mathrm{train}} \neq \bm \theta'_{\mathrm{test}}$.
Hence, to obtain robustness against the yielded training-test gap, it is worthwhile to consider zero-shot adaptation.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{figs/result_previous.pdf}
\caption{
Heatmap depicting the robustness against training-test gap without zero-shot adaptation in terms of the received power.
The cross mark at $\bm \theta_{\mathrm{train}} = [10\,\mathrm{kg}, 100\,\mathrm{N/m}]^{\mathrm{T}}$ represents the environmental parameters of the total wire mass and spring constant (i.e., wire tension) used during training.
The learned policy was evaluated under different environmental parameter settings $\bm \theta_{\mathrm{test}}$, where the wire mass and spring constant are shown on the horizontal and vertical axes, respectively.
}
\label{fig:noadv_result}
\end{figure}
At the same time, it remains unclear whether the training-test gap is harmful in the context of mmWave beam-tracking.
Hence, for the sake of clarity, we conclude this section by presenting, in Fig.~\ref{fig:noadv_result}, a partial result from Section~\ref{subsec:simu_robustness} that shows what happens if there exists a training-test gap.
The beam-tracking policy was trained with the parameter setting indicated by a cross mark, that is, a wire mass of 10\,$\mathrm{kg}$ and wire tension 100\,$\mathrm{N/m}$; namely, $\bm \theta_{\mathrm{train}} = [10\,\mathrm{kg}, 100\,\mathrm{N/m}]^{\mathrm{T}}$.
We then deploy this beam-tracking policy under other parameter settings $\bm \theta_{\mathrm{test}} = [m_{\mathrm{test}}, k_{\mathrm{test}}]^{\mathrm{T}}$, which are depicted on the horizontal and vertical axes of Fig.~\ref{fig:noadv_result}, respectively.
Hence, apart from the cross mark point, there exists a training-test gap, that is, $\bm \theta_{\mathrm{test}} \neq \bm \theta_{\mathrm{train}}$.
As shown in Fig.~\ref{fig:noadv_result}, under several settings of $\bm \theta_{\mathrm{test}}$, the learned beam-tracking policy does not perform as well as it does under $\bm \theta_{\mathrm{train}}$ in terms of the received power.
In particular, the learned policy exhibited poorer received power when the wire mass in the test scenario was lower than that in the training scenario.
This is because a wire with a smaller mass exhibits more vibrant dynamics, which makes beam-tracking more challenging than in the training scenario.
This example led us to address the aforementioned zero-shot adaptation problem in the context of mmWave beam-tracking on a messenger wire.
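The kind of sweep behind Fig.~\ref{fig:noadv_result} can be emulated with a toy damped mass-spring system: a controller tuned once is evaluated over a grid of test masses and spring constants. The proxy dynamics and all numbers below are illustrative and are not the wire model of Section~\ref{sec:beam_tracking}.

```python
def episode_return(kp, m, k0, dt=0.01, steps=500):
    """Damped mass-spring proxy: the damping force -kp*v plays the role
    of the tracking policy, and the reward penalizes displacement."""
    x, v, ret = 1.0, 0.0, 0.0
    for _ in range(steps):
        a = (-k0 * x - kp * v) / m   # spring force + control force
        v += a * dt                  # semi-implicit Euler step
        x += v * dt
        ret += -abs(x)
    return ret

# "Policy" tuned once under the training parameters, then frozen.
kp_train, m_train, k_train = 20.0, 10.0, 100.0
masses = [1.0, 5.0, 10.0, 20.0]    # grid of test masses
springs = [50.0, 100.0, 200.0]     # grid of test spring constants
heat = {(m, k): episode_return(kp_train, m, k)
        for m in masses for k in springs}
```

Inspecting `heat` over the grid reproduces the qualitative picture: a controller tuned for one (mass, stiffness) pair degrades as the test dynamics drift away from the training point.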
\section{System Model}
\label{sec:beam_tracking}
Fig.~\ref{fig:system_model} shows the system model for an on-wire small-cell base station (SBS) mmWave backhaul connection.
The SBS on the overhead messenger wire communicates with the gateway BS mounted on the building surface through a mmWave link to relay data from the gateway BS to the overhead messenger wire, and vice versa.
The SBS is also physically connected to the optical fibers installed along the wire and receives the data to be transferred to the gateway BS via an upper layer protocol rather than the physical (PHY) layer (e.g., the internet protocol).
A possible configuration for data delivery is as follows.
Both the SBS and gateway BS function as routers that forward the data from/to the end user, who sits in the house in Fig.~\ref{fig:system_model}.
The SBS hereby receives data routed to the end user via the upper layer protocol because the SBS is the natural waypoint for data delivery to the end user.
Briefly, the SBS provides a connection to the Internet infrastructure directly from an overhead messenger wire near the building.
However, it should be noted that the problem of beam-tracking that we consider in this study lies in the PHY layer, which is different from the data delivery problem in terms of the OSI reference model.
Hence, we focus on the former problem without a specific consideration for the upper layer protocols, which is sufficient to validate the effectiveness of the proposed beam-tracking method.
In this system, the SBS installed on an overhead messenger wire with a total mass of $m$ performs beamforming to increase the received signal power at the gateway BS.
The endpoints of the overhead messenger wire are fixed to the telephone poles at a height $h_\mathrm w$, with a distance $d_\mathrm w$ between the poles.
The gateway BS is mounted at a height $h_\mathrm r$, and the distance between the overhead messenger wire and the gateway is $d_\mathrm r$.
For practical usage, beam-tracking on both the gateway side and the SBS side should be addressed; nonetheless, we consider only SBS-side beam-tracking.
This is because of our focus on validating the effectiveness of adding the adversary during training, thereby confirming the feasibility of the zero-shot adaptation.
Indeed, gateway-side beam-tracking is more challenging than SBS-side beam-tracking in the sense that the gateway BS cannot immediately obtain the necessary state information in the RL-based beam tracking (i.e., position/velocity of the SBS).
However, this challenge is equally posed to beam-tracking trained both with and without an adversary, indicating that the difference between gateway-side and SBS-side beam tracking does not have a specific impact on the comparison between these two beam-tracking methods (i.e., with or without adversaries).
Hence, to focus on the comparison, we consider only SBS-side beam-tracking by assuming that the gateway-side beam-tracking was performed perfectly, which is sufficient to validate our contributions.
\begin{table}[t]
\centering
\caption{Summary of Notations in System Model}
\begin{tabular}{cc}
\toprule
$h_\mathrm w$ & Height of endpoints of wire \\
$d_\mathrm w$ & Direct distance between endpoints \\
$h_\mathrm r$ & Height of gateway BS \\
$d_\mathrm r$ & Distance between wire and gateway BS \\
$m$ & Total wire mass \\
$N$ & Number of proxy points on wire \\
$i$ & Index of proxy point on wire \\
$k_0$ & Spring constant, i.e., coefficient for tensile force\cite{gladwell1986inverse} \\
$\bm g$ & Gravitational acceleration \\
$\bm a_i(t)$ & Acceleration of proxy point $i$ at time $t$ \\
$\bm v_i(t)$ & Velocity of proxy point $i$ at time $t$ \\
$\bm x_i(t)$ & Position of proxy point $i$ at time $t$ \\
$c_0$ & Drag constant \cite{zarate2016sde} \\
$\bm v_\mathrm o(t)$ & Wind velocity \\
$\bm V_\mathrm o$ & Covariance matrix of wind velocity \\
$\bm W_i(t)$ & Standard Wiener process \\
$P_\mathrm t$ & Transmit power \\
$\lambda$ & Radio-wave wavelength \\
$\theta_{\mathrm{AoD}}, \phi_{\mathrm{AoD}}$ & Arbitrary zenith/azimuth angles \\
$\theta_{\mathrm{s}}, \phi_{\mathrm{s}}$ & Zenith/azimuth angles of antenna main-lobe \\
$d$ & Distance between SBS and gateway BS \\
$A_\mathrm r$ & Receiver antenna gain \\
$G_{\mathrm{max}}$ & Antenna gain of main-lobe \\
$A_{\mathrm{m}}$ & Front-back ratio \cite{rebato2018study} \\
$\theta_{\mathrm{3dB}}, \phi_{\mathrm{3dB}}$ & Vertical/horizontal 3\,dB beamwidth \\
$\mathit{SLA}_{\mathrm{V}}$ & Side-lobe level limit \cite{rebato2018study} \\
$n_{\mathrm{V}}$, $n_{\mathrm{H}}$ & Number of vertical/horizontal array elements \\
$\bm w$ & Beamforming vector \\
$\Delta_\mathrm V$, $\Delta_\mathrm H$ & Vertical/horizontal array spacing distances \\
$\tau$ & Decision interval, i.e., interval between time steps \\
$k$ & Index of time step \\
$T$ & Observation time \\
$\bm x^{(k)}_{\mathrm{S}}$ & Position of SBS at time step $k$ \\
$\bm x_{\mathrm{G}}$ & Position of gateway BS \\
\bottomrule
\end{tabular}
\label{tbl:parameters_definitions}
\end{table}
\subsection{Model of Dynamics in Overhead Messenger Wire}
According to \cite{gladwell1986inverse}, we modeled the overhead messenger wire as a chain of several proxy mass points that align and are separated by an equal distance, where the proxy mass points are affected by a tensile force from adjacent mass points.
Let $N$ and $m$ denote the number of proxy mass points and the total mass of the wire, respectively.
In the model, the mass of each point is assigned equally, that is, the mass of each point is $m/N$.
We denote the two ends of the chain as points~1 and~$N$, and number the remaining mass points in order of their proximity to point~1 as points~$2, \dots, N - 1$.
In the model, the tensile force applied to point~$i\in \{2, \dots, N - 1\}$ is proportional to the relative position of the adjacent points, i.e., points $i-1$ and $i+1$, where the total tensile force is calculated as:
$k_0\bigl[\bm x_{i+1}(t) + \bm x_{i-1}(t) - 2\bm x_i(t)\bigr]$, where $\bm x_i(t)$ is the position of point~$i$ at time $t$ measured in the coordinate system in Fig.~\ref{fig:coordinate}.
The term $k_0$ is a constant that determines the wire tension.
This model can be regarded as a spring chain, where the mass points are connected via springs; hence, we refer to the constant $k_0$ as the ``spring constant,'' hereinafter.
We now describe the dynamics of these proxy mass points.
Let $\bm a_i(t)\in\mathbb R^3$ denote the acceleration of point $i$ at time $t$.
The accelerations of points $1$ and $N$ fixed to the telephone poles are expressed as $\bm a_1(t)=\bm a_N(t)=\bm 0$.
For $i=2,3,\dots,N-1$, from the equation of motion, $\bm a_i(t)$ is given by
\begin{align}
\label{eq:accel}
\bm a_i(t) = \bm g + \underbrace{\frac{k_0N}{m}\bigl[\bm x_{i+1}(t) + \bm x_{i-1}(t) - 2\bm x_i(t)\bigr]}_{\text{acceleration from tensile force}},
\end{align}
where $\bm g\in\mathbb R^3$ denotes the gravitational acceleration.
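As a concrete illustration (the paper itself contains no code), the per-point acceleration in \eqref{eq:accel} can be computed as in the following Python sketch; the function name, array layout, and the gravity default are our own illustrative choices:

```python
import numpy as np

def accelerations(x, k0, m, g=np.array([0.0, 0.0, -9.81])):
    """Acceleration of every proxy mass point of the wire.

    x  : (N, 3) positions of the proxy points; the first and last
         points are fixed to the telephone poles.
    k0 : spring constant; m : total wire mass (each point weighs m/N).
    """
    N = x.shape[0]
    a = np.zeros_like(x)
    # Interior points feel gravity plus the tensile force from neighbors.
    a[1:-1] = g + (k0 * N / m) * (x[2:] + x[:-2] - 2.0 * x[1:-1])
    return a  # endpoints keep zero acceleration
```

For a straight, equally spaced chain, the tensile term cancels and interior points accelerate only under gravity, as expected.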
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{figs/system_model.pdf}
\caption{On-wire SBS in a millimeter-wave backhaul connection.}
\label{fig:system_model}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figs/coordinate.pdf}
\caption{Coordinate system of the system model.}
\label{fig:coordinate}
\end{figure}
As the perturbations that are responsible for the dynamics in the wire, we consider a wind perturbation and consider the wind drag model in \cite{zarate2016sde}.
In this model, the wind drag primarily consists of frictional drag and pressure drag \cite{zarate2016sde}.
The frictional drag increases proportionally with the velocity of the mass point of interest $\bm v_i(t)$ relative to the wind velocity $\bm v_{\mathrm{o}}(t)$.
We denote the constant of the proportionality as $c_0$ and refer to it as the ``drag constant.''
The pressure drag has random magnitude regardless of the wind speed.
The derivatives of the velocity and position of point~$i$ at time $t$ are denoted by $\mathrm d\bm v_i(t)\in\mathbb R^3$ and $\mathrm d\bm x_i(t)\in\mathbb R^3$, respectively.
The derivatives of the velocities and positions of points $1$ and $N$ are expressed as $\mathrm d\bm v_1(t)=\mathrm d\bm v_N(t)=\mathrm d\bm x_1(t)=\mathrm d\bm x_N(t)=\bm 0$, respectively.
For $i=2,3,\dots,N-1$, $\mathrm d\bm v_i(t)$ and $\mathrm d\bm x_i(t)$ are calculated as follows \cite{zarate2016sde, shiri2019massive}:
\begin{align}
\label{eq:pos_vel}
\mathrm d\bm v_i(t) & = \bm a_i(t) \,\mathrm dt \underbrace{-c_0\bigl[\bm v_i(t)-\bm v_\mathrm o(t)\bigr] \mathrm dt + \bm V_\mathrm o \,\mathrm d\bm W_i(t)}_{\text{derivatives from the wind}}, \notag \\
\mathrm d\bm x_i(t) & = \bm v_i(t) \,\mathrm dt,
\end{align}
where $\bm V_\mathrm o\in\mathbb R^{3\times 3}$ and $\bm W_i(t)\in\mathbb R^3$ denote the covariance matrix of the wind speed and the standard Wiener process, which is independent and identically distributed across the points $i$, respectively.
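A single Euler--Maruyama integration step of \eqref{eq:pos_vel} may then be sketched as follows; the function signature is an assumption of ours, and the endpoints are pinned as stated above:

```python
import numpy as np

def em_step(x, v, a, v_wind, c0, V0, dt, rng):
    """One Euler--Maruyama step of the wind-driven wire dynamics."""
    dW = rng.standard_normal(x.shape) * np.sqrt(dt)  # Wiener increments
    # Acceleration, frictional drag toward the wind, and pressure drag.
    dv = a * dt - c0 * (v - v_wind) * dt + dW @ V0.T
    x_new = x + v * dt
    v_new = v + dv
    # The first and last points are fixed to the telephone poles.
    x_new[[0, -1]] = x[[0, -1]]
    v_new[[0, -1]] = 0.0
    return x_new, v_new
```

With zero noise covariance and the point velocities equal to the wind velocity, both drag terms vanish and the interior points simply advect, which is a quick sanity check of the discretization.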
\subsection{Radio-Wave Propagation Model}
According to the free-space path loss model \cite{friis1946note}, we consider the received signal power of the gateway BS to be determined by the distance $d$ between the SBS and the gateway BS and the antenna radiation pattern.
This free-space path loss model is consistent with mmWave channel measurements conducted under line-of-sight (LoS) conditions in an open space \cite{geng2008millimeter}, and this is a feasible assumption considering that the SBS and gateway BS in the above-mentioned scenario are likely to be deployed under such conditions.
Note that under non-line-of-sight (NLoS) conditions, the above assumption does not hold; however, we do not consider NLoS conditions because our focus is on the beam-tracking inaccuracy caused by the training-test gap, which occurs even under LoS conditions.
In other words, considering only the LoS condition is sufficient to validate the contributions of this study.
We consider the use of a directional antenna; hence, the antenna radiation pattern is determined by the zenith and azimuth AoDs, i.e., $\theta_{\mathrm{AoD}}$ and $\phi_{\mathrm{AoD}}$, respectively, and by the zenith and azimuth steering angles, i.e., $\theta_\mathrm s$ and $\phi_\mathrm s$, respectively.
Note that these angles are measured in the coordinate system illustrated in Fig.~\ref{fig:coordinate}.
From the Friis transmission equation \cite{friis1946note}, the received signal power $P_\mathrm r(d,\theta_{\mathrm{AoD}},\phi_{\mathrm{AoD}},\theta_\mathrm s,\phi_\mathrm s)$ is given by
\begin{align}
P_\mathrm r(d,\theta_{\mathrm{AoD}},\phi_{\mathrm{AoD}},\theta_\mathrm s,\phi_\mathrm s) = \left(\frac{\lambda}{4\pi d}\right)^{\!\!2} A_\mathrm t(\theta_{\mathrm{AoD}},\phi_{\mathrm{AoD}},\theta_\mathrm s,\phi_\mathrm s)\, A_\mathrm rP_\mathrm{t},
\end{align}
where $P_\mathrm t$, $\lambda$, and $ A_\mathrm r$ are constants, and denote the transmission power of the SBS, wavelength of the radio waves, and receiver antenna gain, respectively.
Moreover, $A_\mathrm t(\theta_{\mathrm{AoD}},\phi_{\mathrm{AoD}},\theta_\mathrm s,\phi_\mathrm s)$ denotes the transmission antenna gain, with its maximum value at $\theta_{\mathrm{AoD}}=\theta_\mathrm s$ and $\phi_{\mathrm{AoD}}=\phi_\mathrm s$.
For the sake of simplicity, we omit the subscript $_{\mathrm{AoD}}$, hereinafter.
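For illustration, the Friis-type received power above can be evaluated as follows; note that, as an assumption of this sketch, the antenna gains are supplied in dB and converted to linear scale internally:

```python
import numpy as np

def received_power(P_t, lam, d, A_t_dB, A_r_dB):
    """Free-space received power in the style of the Friis equation.

    P_t : transmit power; lam : wavelength; d : SBS-gateway distance.
    A_t_dB, A_r_dB : transmit/receive antenna gains in dB (our own
    convention for this sketch; they are converted to linear scale).
    """
    A_t = 10.0 ** (A_t_dB / 10.0)
    A_r = 10.0 ** (A_r_dB / 10.0)
    return (lam / (4.0 * np.pi * d)) ** 2 * A_t * A_r * P_t
```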
We consider the array antenna model in \cite{3gpp,rebato2018study}, where the transmission antenna gain $A_\mathrm t(\theta,\phi,\theta_\mathrm s,\phi_\mathrm s)$ is given by
\begin{align}
\label{eq:antenna_gain}
A_\mathrm t(\theta, \phi,\theta_\mathrm s,\phi_\mathrm s) = A_\mathrm E(\theta, \phi) + \mathit{AF}(\theta, \phi, \theta_\mathrm s, \phi_\mathrm s),
\end{align}
where $A_\mathrm E(\theta,\phi)$ and $\mathit{AF}(\theta, \phi, \theta_\mathrm s, \phi_\mathrm s)$ denote the element radiation pattern and array factor, respectively.
The element radiation pattern $A_\mathrm E(\theta,\phi)$ of each single antenna element is composed of horizontal and vertical radiation patterns.
The element radiation pattern $A_\mathrm E(\theta,\phi)$ is given by
\begin{align}
A_\mathrm E(\theta, \phi) = G_\mathrm{max} -\min\left\{-\left[A_{\mathrm E,\mathrm V}(\theta)+A_{\mathrm E,\mathrm H}(\phi)\right], A_\mathrm m\right\},
\end{align}
where $A_{\mathrm E,\mathrm V}(\theta)$, $A_{\mathrm E,\mathrm H}(\phi)$, $G_\mathrm{max}$, and $A_\mathrm m$ denote the vertical and horizontal radiation patterns, maximum directional gain of the antenna element, and front-back ratio, respectively.
The vertical and horizontal radiation patterns $A_{\mathrm E,\mathrm V}(\theta)$ and $A_{\mathrm E,\mathrm H}(\phi)$, respectively, are obtained as follows:
\begin{align}
A_{\mathrm E,\mathrm V}(\theta) & = -\min\left\{12\left(\frac{\theta-90^\circ}{\theta_{3\mathrm{dB}}}\right)^{\!\!2}, \mathit{SLA}_\mathrm V\right\}, \notag \\
A_{\mathrm E,\mathrm H}(\phi) & = -\min\left\{12\left(\frac{\phi}{\phi_{3\mathrm{dB}}}\right)^{\!\!2}, A_\mathrm m\right\},
\end{align}
where $\theta_{3\mathrm{dB}}$, $\phi_{3\mathrm{dB}}$, and $\mathit{SLA}_\mathrm V$ are the vertical $3\,\mathrm{dB}$ beamwidth, horizontal $3\,\mathrm{dB}$ beamwidth, and side-lobe level limit, respectively.
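The element pattern above can be sketched as follows; the default parameter values are illustrative placeholders rather than values prescribed by the paper:

```python
def element_gain_db(theta, phi, G_max=8.0, A_m=30.0, SLA_V=30.0,
                    theta_3dB=65.0, phi_3dB=65.0):
    """3GPP-style element radiation pattern; angles in degrees.

    The defaults (G_max, A_m, SLA_V, beamwidths) are illustrative
    placeholders, not values from the paper.
    """
    # Vertical and horizontal cuts of the element pattern.
    A_V = -min(12.0 * ((theta - 90.0) / theta_3dB) ** 2, SLA_V)
    A_H = -min(12.0 * (phi / phi_3dB) ** 2, A_m)
    # Combine the cuts, limited by the front-back ratio A_m.
    return G_max - min(-(A_V + A_H), A_m)
```

At boresight ($\theta=90^\circ$, $\phi=0^\circ$) the gain equals $G_{\mathrm{max}}$, and far off boresight it is bounded below by $G_{\mathrm{max}}-A_{\mathrm m}$.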
The array factor $\mathit{AF}(\theta,\phi,\theta_\mathrm s,\phi_\mathrm s)$ models the directivity of the antenna array, which
is expressed for an array of $n=n_\mathrm V n_\mathrm H$ elements as
\begin{align}
\mathit{AF}(\theta, \phi, \theta_\mathrm s, \phi_\mathrm s) = 10\log_{10}\Bigl[1+\Bigl(\left|(1/\sqrt{n})\bm w\right|^{2}-1\Bigr)\Bigr],
\end{align}
where $n_\mathrm V$ and $n_\mathrm H$ denote the number of vertical and horizontal elements, respectively, and $\bm w\in\mathbb C^n$ denotes the beamforming vector, which is given by
\begin{align}
\bm w & = \left[w_{1,1}, w_{1,2}, \ldots, w_{n_\mathrm V, n_\mathrm H}\right]^{\mathrm{T}}, \notag \\
w_{p,r} & = \mathrm e^{\mathrm j2\pi[(p-1)\Delta_\mathrm V\Psi_p + (r-1)\Delta_\mathrm H\Psi_r]/\lambda}, \notag \\
\Psi_p & = \cos\theta - \cos\theta_\mathrm s, \notag \\
\Psi_r & = \sin\theta\sin\phi - \sin\theta_\mathrm s\sin\phi_\mathrm s,
\end{align}
where $\Delta_\mathrm V$ and $\Delta_\mathrm H$ denote the spacing distances between the vertical and horizontal elements of the array, respectively.
As a specific characteristic of mmWave communications, the spacing distances $\Delta_{\mathrm{V}}$ and $\Delta_{\mathrm{H}}$ are of the order of several millimeters (e.g., $\Delta_{\mathrm{V}}=\Delta_{\mathrm{H}}=2.5\,\mathrm{mm}$ at an RF frequency of $60\,\mathrm{GHz}$).
This follows the common design rule that the spacing distances should not exceed half the wavelength to avoid high grating lobes.
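For illustration, the beamforming vector $\bm w$ above can be constructed as follows (angles in radians; the function and argument names are our own):

```python
import numpy as np

def beamforming_vector(theta, phi, theta_s, phi_s, n_V, n_H,
                       d_V, d_H, lam):
    """Beamforming vector w for an n_V x n_H array; angles in radians."""
    psi_p = np.cos(theta) - np.cos(theta_s)
    psi_r = np.sin(theta) * np.sin(phi) - np.sin(theta_s) * np.sin(phi_s)
    # Zero-based indices play the role of (p-1) and (r-1).
    p = np.arange(n_V)[:, None]   # vertical element indices
    r = np.arange(n_H)[None, :]   # horizontal element indices
    w = np.exp(1j * 2.0 * np.pi * (p * d_V * psi_p + r * d_H * psi_r) / lam)
    return w.ravel()
```

When the AoD coincides with the steering angle, all phase terms vanish and every weight equals one, i.e., the array points its main lobe at the gateway.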
The SBS periodically observes both the instantaneous received signal power and its position and velocity.
Hereinafter, we let the notation $\tau$ denote the observation interval and term the time instants for the observation as the ``time step.''
Accordingly, we consider a steering angle that can be moved up, down, left, or right by an angle $\beta$ at each time step.
The problem for determining the steering angles at each time step is formulated in the next section.
\subsection{Initial Access Procedure}
Among the initial access procedures, we discuss only the beam alignment between the SBS and the gateway BS in the initial stage.
Indeed, many other procedures are mandatory to initialize mmWave communications (e.g., device discovery and association frame exchanges); however, our focus is on beam-tracking, which is disjoint from these initial access procedures.
Hence, providing a concrete design of the initial access procedures is beyond the scope of this paper, and we describe only the beam alignment in the initial stage, which is the procedure most relevant to beam-tracking.
The beam alignment in the initial stage should allow the beam of the SBS to point toward the gateway BS, thereby allowing the gateway BS to receive the maximum power.
Indeed, one can assume arbitrary beam-alignment procedures as long as the beam of the SBS points toward the gateway BS.
As an example of this procedure, in the evaluation in Section~V, we employed position-aware beam alignment based on our assumption that the positions of the SBS and gateway BS are given in advance.
Therein, we simply calculated the orientation of the gateway BS from these positions under the initial conditions without wind.
Subsequently, we established the array antenna weights such that the beam pointed toward the gateway BS.
\subsection{Formulation}
Let $\mathcal N=\{1,2,\dots,\lfloor T/\tau\rfloor\}$ denote the set of indices of the time steps, where the index $k\in\mathcal{N}$ corresponds to the time $t = k\tau$.
The term $T$ is the total time length for the beam-tracking.
Moreover, we let the superscript $(k)$ indicate that the variables of interest are measured at the time $t = k\tau$.
The optimization problem is formulated as follows:
\begin{maxi!}[1]
{\substack{\left(a^{(k)}_\theta, a^{(k)}_\phi\right)_{k\in\mathcal{N}}}}
{\frac{1}{\lfloor T/\tau \rfloor}\sum_{k \in \mathcal{N}} P_\mathrm{r}\left(d^{(k)}, \theta^{(k)}, \phi^{(k)}, \theta_\mathrm s^{(k)}, \phi_\mathrm s^{(k)}\right)}{}{}.
\addConstraint{d^{(k)}}{= \left\|\bm x^{(k)}_\mathrm{S}-\bm x_\mathrm{G}\right\|}
\addConstraint{\theta^{(k)}}{= \arccos \frac{x^{(k)}_{\mathrm{S},\mathrm z}-x_{\mathrm{G}, \mathrm z}}{d^{(k)}}}
\addConstraint{\phi^{(k)}}{= \arctan \frac{x^{(k)}_{\mathrm{S},\mathrm y}-x_{\mathrm{G}, \mathrm y}}{x^{(k)}_{\mathrm{S}, \mathrm x}-x_{\mathrm{G}, \mathrm x}}}
\addConstraint{\theta^{(k)}_\mathrm s}{= \theta^{(k-1)}_\mathrm s + a^{(k)}_\theta\beta}
\addConstraint{\phi^{(k)}_\mathrm s}{= \phi^{(k-1)}_\mathrm s + a^{(k)}_\phi\beta}
\addConstraint{a^{(k)}_\theta, a^{(k)}_\phi}{\in \{-1, 0, 1\}}
\addConstraint{\left|a^{(k)}_\theta\right|+\left|a^{(k)}_\phi\right|}{\leq 1},
\end{maxi!}
\noindent where $a^{(k)}_\theta$ and $a^{(k)}_\phi$ denote the action for the zenith and azimuth angles at time step $k$, respectively, which can move the zenith and azimuth steering angles $\theta_\mathrm s^{(k)}, \phi_\mathrm s^{(k)}$ by angle $\beta$, respectively.
Moreover, $d^{(k)}$, $\theta^{(k)}$, and $\phi^{(k)}$ denote the distance, beam zenith, and azimuth angle from the SBS to the gateway BS at time step $k$, respectively.
These variables are defined by the positions of the SBS and gateway BS, i.e., $\bm x^{(k)}_\mathrm{S} = [x^{(k)}_{\mathrm{S}, \mathrm x}, x^{(k)}_{\mathrm{S}, \mathrm y}, x^{(k)}_{\mathrm{S}, \mathrm z}]^{\mathrm{T}} \in\mathbb R^3, \bm x_\mathrm{G} = [x_{\mathrm{G}, \mathrm x}, x_{\mathrm{G}, \mathrm y}, x_{\mathrm{G}, \mathrm z}]^{\mathrm{T}} \in\mathbb R^3$, respectively.
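For illustration, the distance and angles in the constraints above can be computed from the SBS and gateway positions as follows; note that we use \texttt{arctan2}, the quadrant-safe variant of the $\arctan$ ratio in the formulation:

```python
import numpy as np

def beam_geometry(x_S, x_G):
    """Distance and zenith/azimuth angles from the SBS to the gateway BS.

    x_S, x_G : 3-vectors of SBS and gateway positions.
    """
    diff = x_S - x_G
    d = float(np.linalg.norm(diff))
    theta = float(np.arccos(diff[2] / d))       # zenith angle
    phi = float(np.arctan2(diff[1], diff[0]))   # azimuth angle (quadrant-safe)
    return d, theta, phi
```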
\section{Adversarial RL-based Beam-Tracking Based on Zero-Shot Adaptation}
\label{sec:adversarial_rl}
\subsection{Reason for Adversarial RL}
Motivated by the importance of zero-shot adaptation as discussed in Section~\ref{sec:motivation}, we propose an RARL-based beam-tracking method.
The key reason for using RARL is to develop the capability to overcome the training and test gap by: 1) regarding the gap as a disturbance from an adversarial agent that impedes the legitimate agent; 2) training both the adversarial and legitimate agent, thereby allowing the legitimate agent to experience more severe disturbances.
The explanation more specific to our beam-tracking problem is as follows: By training the adversarial agent to disturb the on-wire SBS with additional wind, the beam-tracking agent experiences more rapid displacements in the on-wire SBS than without such adversarial agents.
We hypothesize that this closely simulates situations in which correcting the directional beams is difficult, e.g., when the actual wire mass or spring constant is smaller than that used for training.
This means that the adversarial agent provides richer experiences to the beam-tracking agent in view of the existence of the training and test gap; hence, the beam-tracking agent would be expected to obtain a robust beam-tracking policy against these training and test gaps.
\subsection{Overview of RARL-Based Beam-Tracking}
The training procedure for the RARL-based beam-tracking is shown in Fig.~\ref{fig:scenario_train}.
In the training scenario, as shown in Fig.~\ref{fig:scenario_train}, the protagonist corresponding to the beam-tracking agent learns to maximize the average received signal power.
In contrast, the adversary learns to minimize the average received signal power by generating additional wind.
To achieve these purposes, the protagonist and adversary observe a state, select an action, and observe a reward to update their NN from experienced transitions.
In the test scenario shown in Fig.~\ref{fig:scenario_test}, the protagonist corrects the beam misalignment according to the policy learned in the training scenario.
To examine the feasibility of zero-shot adaptation, the environmental parameters, for example, the spring constant $k_0$ and total wire mass $m$, are varied between the training and test scenarios.
As an example of disturbance caused by an adversary, we assume that the adversary can affect the wind speed in the simulation.
Thus, the adversary can append discontinuous additional wind to continuous wind in the environment.
At every time step $k$ in the training scenario, by considering the wind speed in the environment $\bm v_\mathrm e^{(k)}\in\mathbb R^3$ and the additional wind speed appended by the adversary $\bm v_\mathrm a^{(k)}\in\mathbb R^3$, the wind speed $\bm v_\mathrm o^{(k)}\in\mathbb R^3$ in \eqref{eq:pos_vel} is calculated as
\begin{align}
\bm v_\mathrm o^{(k)} = \bm v_\mathrm e^{(k)} + \bm v_\mathrm a^{(k)}.
\end{align}
Conversely, because the adversary does not exist in the test scenarios, the wind speed $\bm v_\mathrm o^{(k)}\in\mathbb R^3$ in \eqref{eq:pos_vel} is given by
\begin{align}
\bm v_\mathrm o^{(k)} = \bm v_\mathrm e^{(k)}.
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figs/scenario_train.pdf}
\caption{Training scenario of the adversarial RL.}
\label{fig:scenario_train}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figs/scenario_test.pdf}
\caption{Test scenario of the adversarial RL.}
\label{fig:scenario_test}
\end{figure}
\subsection{State, Action, and Reward}
The state set $\mathcal S$ of the protagonist and adversary is defined as
\begin{align}
\label{eq:state_space}
\mathcal S \coloneqq \mathcal S_{\bm x}\times\mathcal S_{\bm v}\times\mathcal S_{\bm b},
\end{align}
where $\mathcal S_{\bm x}\coloneqq\{\,\bm x\mid\bm x\in\mathbb R^3\,\}$ and $\mathcal S_{\bm v}\coloneqq\{\,\bm v\mid\bm v\in\mathbb R^3\,\}$ denote the set of possible three-dimensional positions and velocities of the SBS, respectively.
Note that for a practical implementation, these three-dimensional positions/velocities can be obtained if the SBS is equipped with an accelerometer and integrates the measured accelerations.
This position/velocity acquisition involves an error, and this error may affect the accuracy of the beam-tracking.
Nonetheless, we assume that these three-dimensional positions/velocities can be obtained without any measurement errors in view of the scope of this study.
As discussed in Section~I, our main scope is to validate the effectiveness of adding an adversary during training, thereby confirming the feasibility of our concept of zero-shot adaptation.
This objective can be achieved by comparing the proposed method (with the adversary) with the baseline method (without the adversary), and the measurement error does not have a specific impact on this comparison because the inaccuracy of the beam-tracking due to the measurement error occurs commonly in both methods.
Hence, this assumption is sufficient to validate the contribution of this study, and delving into the measurement precisions of the positions/velocities is beyond the scope of this study.
In \eqref{eq:state_space}, $\mathcal S_{\bm b}\coloneqq\{\,\bm b\mid\bm b\in\mathbb R^3\,\}$ denotes the set of possible beam directions, where the beam direction at time step $k$, denoted by $\bm b^{(k)}$, is given by
\begin{align}
\bm b^{(k)} = \big[\sin\theta^{(k)}_\mathrm s\cos\phi^{(k)}_\mathrm s, \sin\theta^{(k)}_\mathrm s\sin\phi^{(k)}_\mathrm s, \cos\theta^{(k)}_\mathrm s\big]^{\mathrm{T}}.
\end{align}
These state settings are consistent with our previous works \cite{shinzaki2020deep, koda2020millimeter} to ensure a fair comparison.
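As a small sketch, the beam-direction component $\bm b^{(k)}$ of the state can be computed from the steering angles as follows:

```python
import numpy as np

def beam_direction(theta_s, phi_s):
    """Unit beam-direction vector from zenith/azimuth steering angles
    (radians)."""
    return np.array([np.sin(theta_s) * np.cos(phi_s),
                     np.sin(theta_s) * np.sin(phi_s),
                     np.cos(theta_s)])
```

By construction the result is a unit vector, so the beam-direction part of the state lives on the unit sphere.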
The action set of the protagonist $\mathcal A_\mathrm p$ is defined as
\begin{align}
\mathcal A_\mathrm p \coloneqq \{\mathrm{stay}, \mathrm{up},\mathrm{down},\mathrm{left},\mathrm{right}\},
\end{align}
where the action $\mathrm{stay}$ denotes that the beam direction is maintained.
Moreover, the actions $\mathrm{up}$, $\mathrm{down}$, $\mathrm{left}$, and $\mathrm{right}$ denote that the beam direction is moved up, down, left, and right by $\beta$, respectively.
The actions for the zenith and azimuth angle $a^{(k)}_\theta, a^{(k)}_\phi$ are given by
\begin{align}
\left[a^{(k)}_\theta, a^{(k)}_\phi\right] =
\begin{cases}
[0, 0]^{\mathrm{T}}, & a_k = \mathrm{stay}; \\
[-1, 0]^{\mathrm{T}}, & a_k = \mathrm{up}; \\
[1, 0]^{\mathrm{T}}, & a_k = \mathrm{down}; \\
[0, 1]^{\mathrm{T}}, & a_k = \mathrm{left}; \\
[0, -1]^{\mathrm{T}}, & a_k = \mathrm{right}.
\end{cases}
\end{align}
The action set of the adversary $\mathcal A_\mathrm a$ is defined as
\begin{align}
\mathcal A_\mathrm a \coloneqq \{\mathrm{stay}, \mathrm{up},\mathrm{down},\mathrm{left},\mathrm{right}, \mathrm{front}, \mathrm{back}\},
\end{align}
where the action $\mathrm{stay}$ denotes that no additional wind is appended.
Furthermore, in the training scenarios, the actions $\mathrm{up}$, $\mathrm{down}$, $\mathrm{left}$, $\mathrm{right}$, $\mathrm{front}$, and $\mathrm{back}$ denote that additional wind with speed $v_\mathrm a$ is appended in the upward, downward, leftward, rightward, forward, and backward directions, respectively.
The additional wind velocity appended by the adversary $\bm v_\mathrm a^{(k)}$ is given by
\begin{align}
\bm v_\mathrm a^{(k)} =
\begin{cases}
[0, 0, 0]^{\mathrm{T}}, & a_k = \mathrm{stay}; \\
[0, 0, v_\mathrm a]^{\mathrm{T}}, & a_k = \mathrm{up}; \\
[0, 0, -v_\mathrm a]^{\mathrm{T}}, & a_k = \mathrm{down}; \\
[-v_\mathrm a, 0, 0]^{\mathrm{T}}, & a_k = \mathrm{left}; \\
[v_\mathrm a, 0, 0]^{\mathrm{T}}, & a_k = \mathrm{right}; \\
[0, v_\mathrm a, 0]^{\mathrm{T}}, & a_k = \mathrm{front}; \\
[0, -v_\mathrm a, 0]^{\mathrm{T}}, & a_k = \mathrm{back}.
\end{cases}
\end{align}
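The two discrete action maps above can be expressed compactly as follows; this Python sketch is illustrative, and the container names are our own:

```python
import numpy as np

# Protagonist action -> (a_theta, a_phi) increments of the steering angles.
PROTAGONIST_ACTIONS = {
    "stay":  (0, 0),  "up":    (-1, 0), "down": (1, 0),
    "left":  (0, 1),  "right": (0, -1),
}

def adversary_wind(action, v_a):
    """Additional wind velocity appended by the adversary for a given
    action; v_a is the adversary's wind-speed magnitude."""
    table = {
        "stay":  (0.0,  0.0,  0.0), "up":    (0.0,  0.0,  v_a),
        "down":  (0.0,  0.0, -v_a), "left":  (-v_a, 0.0,  0.0),
        "right": (v_a,  0.0,  0.0), "front": (0.0,  v_a,  0.0),
        "back":  (0.0, -v_a,  0.0),
    }
    return np.array(table[action])
```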
The immediate reward of the protagonist is defined as the instantaneous received signal power at the next time step, clipped using the technique in \cite{mnih2015human}:
\begin{align}
\label{eq:clipping}
r_{\mathrm p, k} =
\begin{cases}
1, & (P^{(k+1)}_\mathrm r-b_\mathrm c)/d_\mathrm c >1; \\
(P^{(k+1)}_\mathrm r-b_\mathrm c)/d_\mathrm c, & -1\leq(P^{(k+1)}_\mathrm r-b_\mathrm c)/d_\mathrm c\leq 1; \\
-1, & (P^{(k+1)}_\mathrm r-b_\mathrm c)/d_\mathrm c < -1,
\end{cases}
\end{align}
where $P^{(k+1)}_\mathrm r \coloneqq P_\mathrm{r}\left(d^{(k+1)}, \theta^{(k+1)}, \phi^{(k+1)}, \theta_\mathrm s^{(k+1)}, \phi_\mathrm s^{(k+1)}\right)$.
Moreover, $b_\mathrm c$ and $d_\mathrm c$ denote the offset and scale of the clipping, respectively.
The immediate reward of the adversary $r_{\mathrm a, k}$ is defined by inverting the sign of that of the protagonist to encourage the adversary to disturb the beam-tracking agent, that is, $r_{\mathrm a, k} = -r_{\mathrm p, k}$.
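The reward clipping in \eqref{eq:clipping} and the sign inversion for the adversary can be sketched as follows:

```python
def clipped_reward(P_r, b_c, d_c):
    """Protagonist reward: received power offset by b_c, scaled by d_c,
    and clipped to [-1, 1]."""
    return max(-1.0, min(1.0, (P_r - b_c) / d_c))

def adversary_reward(P_r, b_c, d_c):
    """Adversary reward: negation of the protagonist reward."""
    return -clipped_reward(P_r, b_c, d_c)
```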
\subsection{Adversarial RL Algorithm}
\label{subsec:training_procedure}
Given the definition of the state, action, and reward, the training of the protagonist beam-tracking agent and adversary is performed via deep Q-learning \cite{mnih2015human} for the two agents.
In deep Q-learning, the following value, termed the optimal action-value function, is predicted via a neural network:
\begin{multline}
Q_{\mathit{agent}}^{\star}(s, a) = \mathbb{E}_{\pi_{\mathit{agent}}^{\star}}\!\left[\sum_{k'=0}^\infty \gamma^{k'} r_{\mathit{agent}, k+1+k'}\,\middle|\,s_k = s, a_{\mathit{agent}, k} = a\right],\\
s\in\mathcal{S}, a\in\mathcal{A}_{\mathit{agent}},
\end{multline}
where $s_k$ and $\gamma\in[0, 1)$ denote the state at the time step $k$ and discount factor, respectively.
For $\mathit{agent}\in\{\mathrm{p}, \mathrm{a}\}$, which indicates the protagonist or adversary, $a_{\mathit{agent}, k}$ and $r_{\mathit{agent}, k}$ denote the action and reward given to $\mathit{agent}$ at time step $k$, respectively.
The term $\pi_{\mathit{agent}}^{\star}:\mathcal{S}\to\mathcal{A}_{\mathit{agent}}$ is the optimal policy, i.e., the action rule that maximizes the discounted reward.
Although finding the optimal policy is the main objective of this algorithm, in deep Q-learning, we first find the optimal action-value function and then determine the optimal policy by $\textstyle \mathop \mathrm{arg~max}\limits_{a\in\mathcal{A}_{\mathit{agent}}}Q_{\mathit{agent}}^{\star}(s, a)$.
Hence, the problem boils down to finding the optimal action-value function, which is conducted by training a neural network known as a deep Q network (DQN) in deep Q-learning such that the DQN is a good approximation of the optimal action-value function.
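As a minimal sketch, the greedy recovery of the policy from a learned action-value function reads as follows (the action labels are illustrative):

```python
import numpy as np

def greedy_policy(q_row, actions):
    """Pick the action maximizing the (approximated) action-value
    function for the current state."""
    return actions[int(np.argmax(q_row))]
```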
Let $Q_{\mathit{agent}}(s, a; \bm \theta_{\mathit{agent}})$ denote the DQN for $\mathit{agent}$.
The procedure used to train the two DQNs is detailed below.
Note that procedures 1 and 2 are conducted at every time step, whereas procedures 3, 4, and 5 are performed on a per-episode basis.
Here, an episode consists of the finite time steps $k = 1, 2, \dots, \lfloor T / \tau \rfloor$.
\vspace{.3em}\noindent
\textbf{1. Exploring and Storing Experience.}\quad
In this procedure, the protagonist and adversary collect the ingredients to create training data, termed experience.
The experience is defined as $(s_k, a_{\mathit{agent}, k}, r_{\mathit{agent}, k}, s_{k + 1})$ for $\mathit{agent}$ and is collected while interacting with the environment, that is, taking the action, obtaining the rewards, and the subsequent states.
In this procedure, the protagonist and adversary synchronously perform actions following $\epsilon$-greedy policies, which is a common assumption, also adopted in prior work on adversarial RL \cite{pinto2017robust}.
The experience for $\mathit{agent}$ is stored in the experience memory denoted by $\mathcal{D}_{\mathit{agent}}$.
\vspace{.3em}\noindent
\textbf{2. Training DQNs.}\quad
Given the experience memories $\mathcal{D}_{\mathit{agent}}$ for $\mathit{agent}\in\{\mathrm{p}, \mathrm{a}\}$, the DQNs are trained.
In deep Q-learning, the DQN is trained to minimize the difference metric between $Q_{\mathit{agent}}(s, a; \bm \theta_{\mathit{agent}})$ and $\mathit{target}_{\mathit{agent}}\coloneqq r + \gamma \max_{a\in\mathcal{A}_{\mathit{agent}}}Q_{\mathit{agent}}(s, a; \bm \theta^{-}_{\mathit{agent}})$, where $Q_{\mathit{agent}}(s, a; \bm \theta^{-}_{\mathit{agent}})$ is termed the target network and is updated less frequently than the DQN.
We refrain from delving into the details of the target network in order to focus on the training procedure; interested readers are referred to the paper of Mnih \textit{et al.} \cite{mnih2015human}.
The training procedure is as follows.
First, we calculate $\mathit{target}_{\mathit{agent}}$ by sampling the experience from $\mathcal{D}_{\mathit{agent}}$ uniformly.
Subsequently, we train each DQN to minimize the difference metric between the DQN $Q_{\mathit{agent}}(s, a; \bm \theta_{\mathit{agent}})$ and $\mathit{target}_{\mathit{agent}}$ via the Adam optimizer \cite{kingma2015adam}.
Note that the protagonist and adversary DQNs are trained synchronously, which is found to be sufficient to demonstrate the robustness of the protagonist against training-test gaps.
During this training procedure, we leveraged more advanced techniques in the evaluation described in Section~\ref{sec:simulation}.
More specifically, we leveraged the Huber loss \cite{varga2018deeprn} as the difference metric instead of taking the square of $Q_{\mathit{agent}}(s, a; \bm \theta_{\mathit{agent}})-\mathit{target}_{\mathit{agent}}$.
In addition, we leveraged dueling DQN \cite{wang2015dueling} in the experiment.
However, detailing these techniques is beyond the objective of this section; hence, we detail them in the Appendix.
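To make the training step concrete, the following sketch computes the TD target $r + \gamma \max_{a}Q_{\mathit{agent}}(s', a; \bm\theta^{-}_{\mathit{agent}})$ and the Huber difference metric mentioned above; the threshold $\kappa=1$ is our own illustrative choice:

```python
import numpy as np

def dqn_target(r, q_next_target, gamma):
    """TD target: reward plus discounted max of the target network's
    Q-values at the next state."""
    return r + gamma * float(np.max(q_next_target))

def huber(delta, kappa=1.0):
    """Huber loss on the TD error delta, used instead of the squared
    error; kappa = 1 is an illustrative choice."""
    a = abs(delta)
    return 0.5 * delta ** 2 if a <= kappa else kappa * (a - 0.5 * kappa)
```

The DQN parameters are then updated to reduce `huber(q_predicted - target)` over a minibatch sampled uniformly from the experience memory.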
\vspace{.3em}\noindent
\textbf{3. Updating Target DQNs.}\quad
The parameters of the target DQNs $\bm \theta^{-}_{\mathit{agent}}$ are updated such that $\bm \theta^{-}_{\mathit{agent}}$ is equal to $\bm \theta_{\mathit{agent}}$.
This update is generally performed less frequently than that of the main DQNs \cite{mnih2015human}; hence, we conduct this procedure every $C$ episodes.
This target network update is performed synchronously for the protagonist and adversary.
\vspace{.3em}\noindent
\textbf{4. Checking Protagonist Performance.}\quad This procedure is performed to check whether an appropriate beam-tracking policy is learned regardless of the adversary.
In this procedure, for additional test steps, the protagonist performs beam-tracking by greedily determining the action with respect to the DQN $Q_{\mathrm{p}}(s, a;\bm \theta_{\mathrm{p}})$, that is, it chooses the action by $\textstyle \mathop \mathrm{arg~max}\limits_{a\in\mathcal{A}_{\mathrm{p}}}Q_{\mathrm{p}}(s, a;\bm \theta_{\mathrm{p}})$, and the received power averaged over these steps is obtained.
Therein, the adversary is not activated, so that the protagonist performance is checked under conditions reflecting real deployments.
\vspace{.3em}\noindent
\textbf{5. Checking Adversary Performance.}\quad This procedure is performed to check whether the adversary reliably learns a policy that disturbs the protagonist.
In this procedure, for additional $\mathit{test}$ steps, the adversary disturbs the protagonist by greedily determining the action with respect to the DQN $Q_{\mathrm{a}}(s, a;\bm \theta_{\mathrm{a}})$.
In contrast to the previous procedure, checking the performance of the adversary still requires a protagonist; however, using the protagonist being trained in this procedure, i.e., $Q_{\mathrm{p}}(s, a;\bm \theta_{\mathrm{p}})$, would underestimate the performance of the adversary because that protagonist may already be robust against the adversary.
To avoid this, we prepared another proxy protagonist, \textit{pre-trained} without the adversary, and it is this proxy protagonist that is disturbed by the adversary in this phase.
This proxy protagonist did not learn a robust policy against the adversary; therefore, we can keep track of the performances of the adversary without underestimation.
Note that, letting $Q_{\mathrm{p}}(s, a;\bm \theta_{\mathrm{p, proxy}})$ denote the DQN of the proxy protagonist, this protagonist also determines its action greedily with respect to $Q_{\mathrm{p}}(s, a;\bm \theta_{\mathrm{p, proxy}})$.
The performance of the adversary is measured by the average received power obtained by the proxy protagonist during this phase.
\vspace{.3em}
These procedures are iterated for a predefined number of episodes $M$.
The value of $M$ is chosen to be sufficiently large that the protagonist performance, which is measured in procedure~4, has converged.
Note that during training, the adversary is always active except in procedure~4.
The overall training procedure is summarized in Algorithm~\ref{alg:adversarial}.
\begin{algorithm}[t]
\caption{Training the protagonist beam-tracking agent and adversary via adversarial RL}
\label{alg:adversarial}
\setlength{\baselineskip}{11.4pt}
\begin{algorithmic}
\State Initialize main DQN and target DQN of protagonist, i.e., $Q_\mathrm p(s,a;\bm\theta_\mathrm p)$ and $Q_\mathrm p(s,a;\bm\theta^-_\mathrm p)$, respectively, and experience memory of protagonist $\mathcal D_\mathrm p$
\State Initialize main DQN and target DQN of adversary, i.e., $Q_\mathrm a(s,a;\bm\theta_\mathrm a)$ and $Q_\mathrm a(s,a;\bm\theta^-_\mathrm a)$, respectively, experience memory of adversary $\mathcal D_\mathrm a$
\State \parbox[t]{205pt}{
Observe initial state $s_{1}$ by obtaining $\bm x_\mathrm S^{(1)}$, velocity $\bm v_\mathrm S^{(1)}$, and beam direction $\bm b_\mathrm S^{(1)}$ \strut
}
\For{Episode $e = 1, 2, \dots, M$}
\For{time step $k = 1, 2,\dots,\lfloor T/\tau \rfloor$}
\Exploration
\State \parbox[t]{190pt}{
Select actions of protagonist and adversary $a_{\mathrm{p},k}$ and $a_{\mathrm{a}, k}$ synchronously
}
\State Calculate reward for protagonist $r_{k,\mathrm{p}}$ from \eqref{eq:clipping}
\State Calculate reward $r_{k,\mathrm{a}}$ by $r_{k,\mathrm{a}}\leftarrow -r_{k,\mathrm{p}}$
\State \parbox[t]{205pt}{
Observe state $s_{k+1}$ by obtaining $\bm x_\mathrm S^{(k+1)}$, velocity $\bm v_\mathrm S^{(k+1)}$, and beam direction $\bm b_\mathrm S^{(k+1)}$ \strut
}
\State Store experience $(s_{k}, a_{\mathrm p,k}, r_{\mathrm p,k}, s_{k+1})$ into $\mathcal D_\mathrm p$
\State Store experience $(s_{k}, a_{\mathrm a,k}, r_{\mathrm a,k}, s_{k+1})$ into $\mathcal D_\mathrm a$
\Training
\State \parbox[t]{220pt}{Synchronously update weights of main DQNs of protagonist and adversary $\bm \theta_\mathrm p, \bm \theta_\mathrm a$\strut}
\Target
\If{$e\equiv 0\ (\mathrm{mod}\,C)$}
\State Update target DQNs $\bm\theta^-_\mathrm p\gets\bm\theta_\mathrm p$ and $\bm\theta^-_\mathrm a\gets\bm\theta_\mathrm a$
\EndIf
\EndFor
\PerformanceCheck
\For{time step $k = \lfloor T/\tau \rfloor + 1,\dots,\lfloor T/\tau \rfloor + \mathit{test}$}
\State \parbox[t]{190pt}{Select actions of protagonist $a_{\mathrm{p},k}$ greedily w.r.t. $Q_\mathrm{p}(s, a; \bm\theta_{\mathrm{p}})$}
\State \parbox[t]{205pt}{
Observe state $s_{k+1}$ in the same way as that in procedure~1 \strut
}
\State Observe received power
\EndFor
\PerformanceCheckad
\For{time step $k = \lfloor T/\tau \rfloor + 1,\dots,\lfloor T/\tau \rfloor + \mathit{test}$}
\State \parbox[t]{190pt}{Select actions of proxy protagonist $a_{\mathrm{p},k}$ greedily w.r.t. $Q_\mathrm{p}(s, a; \bm\theta_{\mathrm{p, proxy}})$ (pre-trained without adversary)}
\State \parbox[t]{190pt}{Select actions of adversary $a_{\mathrm{a},k}$ greedily w.r.t. $Q_\mathrm{a}(s, a; \bm\theta_{\mathrm{a}})$}
\State \parbox[t]{205pt}{
Observe state $s_{k+1}$ in the same way as that in procedure~1 \strut
}
\State Observe received power
\EndFor
\State Check average received power
\EndFor
\end{algorithmic}
\end{algorithm}
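The zero-sum bookkeeping of Algorithm~\ref{alg:adversarial} (synchronous action selection, $r_{k,\mathrm a} = -r_{k,\mathrm p}$, separate experience memories) can be sketched as follows. This is a minimal illustration, not the simulator used in the paper: the environment dynamics, the reward shape, the linear stand-ins for the DQNs, and the 9-dimensional state layout (concatenating $\bm x_\mathrm S$, $\bm v_\mathrm S$, and $\bm b_\mathrm S$) are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 9                # x_S, v_S, b_S concatenated (assumed layout)
N_ACT_P, N_ACT_A = 5, 7      # |A_p| = 5, |A_a| = 7
EPSILON = 0.2                # exploration rate for the epsilon-greedy policy

# Placeholder linear Q-functions standing in for the protagonist/adversary DQNs
theta_p = rng.normal(size=(N_ACT_P, STATE_DIM))
theta_a = rng.normal(size=(N_ACT_A, STATE_DIM))

def eps_greedy(theta, s):
    """Epsilon-greedy action selection w.r.t. a (here linear) Q-function."""
    if rng.random() < EPSILON:
        return int(rng.integers(theta.shape[0]))
    return int(np.argmax(theta @ s))

def step_env(s, a_p, a_a):
    """Toy stand-in for the wire/beam simulator; dynamics are placeholders."""
    s_next = rng.normal(size=STATE_DIM)
    r_p = float(np.tanh(a_p - 0.5 * a_a + s[0]))   # clipped-style reward
    return s_next, r_p

D_p, D_a = [], []            # separate experience memories
s = rng.normal(size=STATE_DIM)
for k in range(50):
    a_p = eps_greedy(theta_p, s)   # protagonist and adversary act
    a_a = eps_greedy(theta_a, s)   # synchronously on the same state
    s_next, r_p = step_env(s, a_p, a_a)
    r_a = -r_p                     # zero-sum: adversary reward is -r_p
    D_p.append((s, a_p, r_p, s_next))
    D_a.append((s, a_a, r_a, s_next))
    s = s_next
    # (gradient updates of theta_p, theta_a and periodic target-network
    #  copies every C episodes would go here; omitted in this sketch)
```

The essential structural point is that both agents observe the same state and the adversary's stored reward is exactly the negative of the protagonist's.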
\section{Simulation Results}
\label{sec:simulation}
\subsection{Simulation Parameters}
The simulation parameters are listed in Table~\ref{tbl:SimurationParam}, where $\bm E\in\mathbb R^{3\times 3}$ is the identity matrix.
The SBS is installed at the midpoint of the overhead messenger wire, which experiences significant movement as a result of the wind.
The wind speed in the environment $\bm v_\mathrm e^{(k)}$ is given by
\begin{align}
\bm v_\mathrm e^{(k)} = \left[5\sin\frac{2\pi k\tau}{4}, 5\sin\frac{2\pi k\tau}{6}, 5\sin\frac{2\pi k\tau}{8}\right]^{\mathrm{T}},
\end{align}
which follows our previous studies\cite{shinzaki2020deep, koda2020millimeter}.
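The wind model above is a deterministic superposition of three sinusoids with periods of 4, 6, and 8 s and amplitude 5 m/s; a direct transcription (with $\tau = 0.01$ s from Table~\ref{tbl:SimurationParam}):

```python
import numpy as np

TAU = 0.01  # observation interval tau in seconds (Table II)

def wind_speed(k, tau=TAU):
    """Environmental wind speed v_e^(k) in m/s from the equation above."""
    t = k * tau
    return np.array([5.0 * np.sin(2.0 * np.pi * t / 4.0),
                     5.0 * np.sin(2.0 * np.pi * t / 6.0),
                     5.0 * np.sin(2.0 * np.pi * t / 8.0)])
```

For example, at step $k = 100$ (i.e., $t = 1$ s) this returns $[5,\,5\sin(\pi/3),\,5\sin(\pi/4)] \approx [5.0,\,4.33,\,3.54]$ m/s.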
\subsubsection{Antenna Pattern}
Fig.~\ref{fig:antenna_pattern} shows the antenna pattern based on the simulation parameters listed in Table~\ref{tbl:SimurationParam}.
The red trace represents the transmission antenna gain with the array factor $\mathit{AF}(\theta, \phi, \theta_\mathrm s, \phi_\mathrm s)$.
The blue curve represents the transmission antenna gain without the array factor, that is, $\mathit{AF}(\theta, \phi, \theta_\mathrm s, \phi_\mathrm s) = 0$ in \eqref{eq:antenna_gain}.
The directivity of the transmission antenna gain with the array factor is higher than that of the transmission antenna gain without the array factor.
For the gain with the array factor, the smallest positive angle at which a local minimum occurs is $3.6^\circ$.
\subsubsection{Architecture of the Neural Network}
We used a neural network with four hidden layers, as shown in Fig.~\ref{fig:nn}, where $|\mathcal A|$ denotes the number of actions.
The number of actions of the protagonist is $|\mathcal A_\mathrm p| = 5$, and that of the adversary is $|\mathcal A_\mathrm a| = 7$.
The hidden layers were all fully connected and had $32$ units.
The activation function of the hidden layers is the rectified linear unit $R(x)$, which is given by
\begin{align}
R(x) = \max \{x, 0\}.
\end{align}
Adam \cite{kingma2015adam} was used as the gradient descent method, and the learning rates of the protagonist and adversary $\alpha_\mathrm p, \alpha_\mathrm a$ were $10^{-3}$.
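A minimal forward pass through the architecture described above (four fully connected hidden layers of 32 ReLU units, one Q-value output per action) can be sketched as follows. The random weight initialization and the 9-dimensional state layout are illustrative assumptions; in the paper the weights are trained with Adam at learning rate $10^{-3}$.

```python
import numpy as np

def make_mlp(state_dim, n_actions, n_hidden=4, width=32, seed=0):
    """Random weights for a 4-hidden-layer, 32-unit MLP (placeholder init)."""
    rng = np.random.default_rng(seed)
    sizes = [state_dim] + [width] * n_hidden + [n_actions]
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Q-values for every action; R(x) = max{x, 0} on hidden layers."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # rectified linear unit
    W, b = params[-1]
    return x @ W + b                      # linear output layer

q_p = forward(make_mlp(state_dim=9, n_actions=5), np.ones(9))  # protagonist
q_a = forward(make_mlp(state_dim=9, n_actions=7), np.ones(9))  # adversary
```

The protagonist and adversary share this architecture and differ only in the size of the output layer, $|\mathcal A_\mathrm p| = 5$ versus $|\mathcal A_\mathrm a| = 7$.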
\begin{table}[t]
\centering
\caption{Simulation Parameters}
\begin{tabular}{cc}
\toprule
Height of endpoints of wire $h_\mathrm w$ & $5\,\mathrm{m}$ \\
Distance between endpoints $d_\mathrm w$ & $10\,\mathrm{m}$ \\
Height of gateway BS $h_\mathrm r$ & $5\,\mathrm{m}$ \\
Distance between wire and gateway BS $d_\mathrm r$ & $5\,\mathrm{m}$ \\
Transmission power $P_\mathrm t$ & $23\,\mathrm{dBm}$ \\
Radio-wave wavelength $\lambda$ & $5\,\mathrm{mm}$ \\
Receiver antenna gain $A_\mathrm r$ & $8\,\mathrm{dBi}$ \\
Gravitational acceleration $\bm{g}$ & $[0, 0, -9.8]\,\mathrm{ms^{-2}}$ \\
Spring constant $k_0$ & $100\,\mathrm{N}\mathrm{m}^{-1}$ \\
Drag constant $c_0$ & $1\,\mathrm{s}^{-1}$ \\
Number of points $N$ & 11 \\
Total wire mass $m$ & $10\,\mathrm{kg}$ \\
Covariance matrix of the wind speed $\bm{V}_0$ & $0.1\bm{E}$ \\
Vertical $3\,\mathrm{dB}$ beamwidth $\theta_{3\mathrm{dB}}$ & $65^\circ$ \\
Horizontal $3\,\mathrm{dB}$ beamwidth $\phi_{3\mathrm{dB}}$ & $65^\circ$ \\
Side-lobe level limit $\mathit{SLA}_\mathrm V$ & $30\,\mathrm{dB}$ \\
Front-back ratio $A_\mathrm m$ & $30\,\mathrm{dB}$ \\
Number of vertical elements $n_\mathrm V$ & 32 \\
Number of horizontal elements $n_\mathrm H$ & 32 \\
Zenith steering angle $\theta_\mathrm s$ & $90^\circ$ \\
Azimuth steering angle $\phi_\mathrm s$ & $0^\circ$ \\
Vertical spacing distance $\Delta_\mathrm V$ & $2.5\,\mathrm{mm}$ \\
Horizontal spacing distance $\Delta_\mathrm H$ & $2.5\,\mathrm{mm}$ \\
Number of episodes $M$ & $400$ \\
Point of SBS installation & Point 6 \\
Observation time $T$ & $10\,\mathrm{s}$ \\
Observation interval $\tau$ & $0.01\,\mathrm{s}$ \\
Angle when moving beam direction $\beta$ & $1^\circ$ \\
Constant in \eqref{eq:clipping} $b_\mathrm c$ & 27 \\
Constant in \eqref{eq:clipping} $d_\mathrm c$ & 3 \\
Exploration rate for $\epsilon$-greedy policy & 0.2 \\
Discount factor $\gamma$ & $0.99$ \\
Target DQN frequency $C$ & $5$ \\
Wind speed added by the adversary $v_\mathrm a$ & $10\,\mathrm{m/s}$ \\
\bottomrule
\end{tabular}
\label{tbl:SimurationParam}
\end{table}
\begin{figure}[t]
\centering
\subfigure[{$\phi\in[-180^\circ,180^\circ]$.}]{
\includegraphics[width=0.45\columnwidth]{figs/antenna_pattern1.pdf}
\label{fig:gcn_layer}
}
\centering
\subfigure[{$\phi\in[-8^\circ,8^\circ]$.}]{
\includegraphics[width=0.45\columnwidth]{figs/antenna_pattern2.pdf}
\label{fig:nn_layer}
}
\caption{Antenna pattern. The vertical angle $\theta$ was fixed at $90^\circ$.}
\label{fig:antenna_pattern}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figs/nn.pdf}
\caption{Architecture of the NN of the protagonist and adversary.}
\label{fig:nn}
\end{figure}
\subsection{Baseline Method}
To evaluate the average received signal power achieved by the learned policy of the protagonist, we compared the proposed method with the \textit{stay method} and the \textit{upper-limit method}.
Because these baselines are intended to evaluate the policy of the protagonist without the disturbance caused by the adversary, no adversary is present in either the stay or the upper-limit method.
In the stay method, the beam direction is fixed at the initial beam direction.
The policy $\pi(a\,\vert\,s)$ of the stay method is given by
\begin{align}
\pi(a\,\vert\,s) =
\begin{cases}
1, & a = \mathrm{stay}; \\
0, & a = \mathrm{up},\mathrm{down},\mathrm{left},\mathrm{right}.
\end{cases}
\end{align}
In the upper-limit method, the actions are chosen to maximize the transmission antenna gain, which yields the best possible performance in terms of the received signal power.
To evaluate the robustness of the learned policy of the protagonist, we compared the proposed method with the \textit{no adversary method} and \textit{random 10\,m/s method}.
In the no adversary method, the policy of the protagonist is learned in the scenario in which the adversary appends no additional wind, i.e., $v_\mathrm a=0$.
In the random $10\,\mathrm{m/s}$ method, the policy of the protagonist is learned in the scenario in which the adversary takes each action at random with equal probability.
\subsection{Learning Curve}
The result of the performance check of the protagonist and adversary during training is shown in Fig.~\ref{fig:adversarial_learning_curve_10}, along with the performance of the baseline methods.
The red curve shows the performance of the protagonist obtained during procedure ``4.~Checking Protagonist Performance,'' whereas the blue curve shows the performance of the adversary obtained during procedure ``5.~Checking Adversary Performance,'' both detailed in Section~\ref{subsec:training_procedure}.
Note that for the performance of the adversary (the blue curve) only, lower received power is better, because the performance of the adversary is measured by the extent to which it disturbs the proxy protagonist.
As shown in Fig.~\ref{fig:adversarial_learning_curve_10}, even under adversarial training, the performance of both the protagonist and the adversary increases as the episodes elapse.
This first shows that the protagonist can acquire an appropriate beam-tracking policy that converges close to the upper limit despite the adversary.
Moreover, although the received power fluctuates, the adversary gradually acquires a policy that disturbs the protagonist as the episodes elapse.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figs/adversarial_learning_curve_10.pdf}
\caption{Learning curve of the protagonist and the adversary. For the adversary performance (blue curve) only, lower received power is better. The performance of both the protagonist and the adversary increases as the episodes elapse.}
\label{fig:adversarial_learning_curve_10}
\end{figure}
\subsection{Robustness of the Learned Policy}
\label{subsec:simu_robustness}
To demonstrate the robustness to variations in the spring constant, we compared the average received signal power of the proposed method with that of the no adversary and random $10\,\mathrm{m/s}$ methods, as shown in Fig.~\ref{fig:spring_constant}.
In Fig.~\ref{fig:spring_constant}, the spring constant of the messenger wire is shown on the horizontal axis, where the training parameter of the spring constant is represented by the dashed line.
When the spring constant $k_0=100\,\mathrm{N}\mathrm{m}^{-1}$, which is the setting in the training, all the methods, including baseline methods, exhibited an almost identical amount of received power, that is, approximately $-12.9\,\mathrm{dBm}$.
However, in the scenario of when the spring constant was lower than that in training, that is, $k_0=10\,\mathrm{Nm^{-1}}$, the average received signal power of the proposed method was $-13.2\,\mathrm{dBm}$, whereas that of the no adversary and random $10\,\mathrm{m/s}$ methods dropped to $-14.5$ and $-13.8\,\mathrm{dBm}$, respectively.
Indeed, the random 10\,m/s method (orange curve) helps the protagonist learn a more robust policy than the no adversary method does.
However, the proposed method with the trained adversary is even more effective in enabling the protagonist to learn a robust policy.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figs/compare_k.pdf}
\caption{Robustness to variations in the spring constant of the overhead messenger wire. In the test scenarios, the average received signal power of the proposed method is more robust to different spring constants than the compared methods. The dashed line represents the training parameter of the spring constant.}
\label{fig:spring_constant}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figs/compare_mass.pdf}
\caption{Robustness to variations in total wire mass. In the test scenarios, the average received signal power of the proposed method is more robust to various total wire masses than the compared methods. The dashed line represents the training parameter of the total wire mass.}
\label{fig:mass}
\end{figure}
To evaluate the robustness to the variations in the total wire mass, we compared the average received signal power of the proposed method with that of the no adversary and random $10\,\mathrm{m/s}$ methods, as shown in Fig.~\ref{fig:mass}.
In Fig.~\ref{fig:mass}, the total wire mass is shown on the horizontal axis, where the training parameter of the total wire mass is represented by the dashed line, that is, $10\,\mathrm{kg}$.
Similar to the results pertaining to the spring constant, the proposed method exhibited a larger amount of received power for various values of the total wire mass.
Hence, the proposed method achieves more robust beam-tracking against the training-and-test gaps in terms of the total wire mass.
Note that these results are achieved without any adaptive fine-tuning to the total wire mass and hence demonstrate the feasibility of the aforementioned zero-shot adaptation in mmWave beam-tracking.
We discuss the robustness to variations in the spring constants and total wire mass by plotting the average received signal power for various values of the spring constant and total wire mass in Fig.~\ref{fig:heatmap}.
The color of the heat map represents the average received signal power, where the cross mark represents the training parameters of the spring constant and total wire mass.
In Fig.~\ref{fig:heatmap}, areas in which the average received power is high are shown in red, whereas areas in which it is low are shown in white.
Fig.~\ref{fig:heatmap} demonstrates that the area representing the high average received power of the proposed method is wider than that of the no adversary method.
This implies that the proposed method enables adaptation to a wide range of test settings for beam-tracking without the requirement for adaptive fine-tuning; thus, in this sense, this result demonstrates the feasibility of zero-shot adaptation in learning-based beam-tracking.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\begin{minipage}[t]{0.384\hsize}
\centering
\includegraphics[width=1.0\columnwidth]{figs/compare_km_random0.pdf}
\end{minipage}
\begin{minipage}[t]{0.57\hsize}
\centering
\includegraphics[width=1.0\columnwidth]{figs/compare_km_adversarial10.pdf}
\end{minipage}
\end{tabular}
\caption{Heat map depicting the robustness to variations in the spring constant and the total wire mass of the messenger wire. In the test scenarios, the average received signal power of the proposed method is more robust to the variations in the total wire mass than the no adversary method. The cross mark represents the training parameters of the spring constant and total wire mass.}
\label{fig:heatmap}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We discussed zero-shot adaptation in learning-based mmWave beam-tracking.
To demonstrate the feasibility of zero-shot adaptation, we proposed an adversarial RL-based beam-tracking method to obtain a robust beam-tracking policy to overcome the differences between the training and test scenarios, such as variations in the wire tension and the total wire mass.
We developed an RARL-based algorithm in which the adversary generated additional wind.
We demonstrated that the proposed method is more robust not only than the no adversary method but also than the random $10\,\mathrm{m/s}$ method, which assumes an adversary that does not learn.
This shows that disturbance by an intelligent adversary increases the robustness of the beam-tracking policy to variations in the wire tension, that is, the spring constant, and in the total wire mass, beyond what is achieved with an adversary that does not learn.
\section{Introduction}
The main aim of this work is to achieve better understanding of large
annihilation rates observed for many polyatomic
molecules \cite{Paul:63,Heyland:82,Surko:88,Iwata:95}. In particular, we
use an exactly solvable model to verify the prediction
\cite{Gribakin:00,Gribakin:01} that positron capture into vibrational
Feshbach resonances (VFR) gives rise to a strong enhancement of the
annihilation rate. We also use this model to investigate the dependence
of positron binding energy for polyatomics on the size of the molecule.
Such binding was postulated in \cite{Gribakin:00,Gribakin:01} as a
necessary condition for VFRs.
The annihilation rate for positrons in a gas of atoms or molecules can be
expressed in terms of an effective number of electrons, $Z_{\rm eff}$, by
\begin{equation}
\label{annil}
\lambda = \pi {\it r}_0^2cZ_{\rm eff}{\it n},
\end{equation}
where $r_0$ is the classical electron radius, $c$ is the speed of light and
$n$ is the number density of the gas. Measurements by Paul and
Saint-Pierre in the early sixties \cite{Paul:63} indicated unusually large
positron annihilation rates in certain polyatomic molecular gases, with $Z_{\rm eff}$
exceeding the actual number of electrons by orders of magnitude.
They speculated that this might be caused by the formation of
positron-molecule bound states, and later Smith and Paul \cite{Smith:70}
explored the possibility that the large rates might be caused by a vibrational
resonance. Research on the alkanes and similar molecules since
that time \cite{Heyland:82,Surko:88,Iwata:95} uncovered a rapid growth
of $Z_{\rm eff}$ with the size of the molecule and very strong chemical sensitivity
of $Z_{\rm eff}$. However, only recently has a plausible physical
picture of this phenomenon begun to emerge
\cite{Gribakin:00,Gribakin:01,Gribakin:03}. These papers put forward a
mechanism which is operational for molecules with positive positron
affinities, and which involves capture of positrons into VFRs.
Recent measurements of annihilation with a positron beam at high
resolution (25 meV) \cite{Gilb:02,BGS03} have shown resonances in the energy
dependence of the annihilation rate parameter, $Z_{\rm eff}$, of alkane molecules.
Most of the features observed have been unambiguously identified as related to
molecular vibrations. In particular, for all alkanes heavier than methane
$Z_{\rm eff}$ displays a prominent C--H stretch vibrational peak. The experiments found
that the magnitude of $Z_{\rm eff}$ in the peak increases rapidly with the size of the
molecule (similarly to the increase in $Z_{\rm eff}$ observed with thermal
room-temperature positrons \cite{Heyland:82,Surko:88,Iwata:95}). Another
remarkable finding concerns the position of the C--H peak. While for ethane
its energy is close to the mode energy of C--H stretch vibrations ($0.37$ eV),
for heavier alkanes the resonances appear at an energy $\sim 20$ meV lower
for each carbon added. This downward shift provides evidence of positron
binding to molecules. The binding energies observed increase from
about 14 meV in C$_3$H$_8$ to 205 meV in C$_{12}$H$_{26}$. Very recent
experiments show evidence of a second bound state for alkanes with
12 and 14 carbons \cite{BYS05}.
So far, realistic molecular calculations have not been able to reproduce
enhanced $Z_{\rm eff}$. For hydrocarbons and a number of other polyatomics,
calculations have been done using a model positron-molecule correlation
potential in a fixed nuclei approximation \cite{GMO01,OG03}. Such
calculations often provide a reasonable description of low-energy
positron-molecule scattering. However, their results, almost without
exception, underestimate experimental $Z_{\rm eff}$, in some cases by an order of
magnitude. This suggests that to describe enhanced
$Z_{\rm eff}$, dynamical coupling to molecular vibrations must be included.
Such coupling was considered earlier for diatomics and CO$_2$
\cite{GM99,GM00}, where it had a relatively small effect on $Z_{\rm eff}$. (These
molecules most likely do not form bound states with the positron, and do
not possess VFR.) Calculations by the Schwinger multichannel method
\cite{SGL94}, which treats electron-positron correlations {\it ab initio}
for fixed nuclei, also underestimate $Z_{\rm eff}$ for molecules such as C$_2$H$_4$
\cite{SGL96} and C$_2$H$_2$ \cite{CVL00} by an order of magnitude \cite{VCL02}.
To examine the effect of vibrations on positron scattering and annihilation,
we consider a simple model of Kr$_2$ dimer using the zero-range potential
(ZRP) method \cite{DO88}. In this model the interaction of the positron with
each of the atoms is parametrised using the atomic value of the scattering
length. It is applicable at low energies when the de Broglie wavelength
of the projectile is much larger than the typical size of the scatterers.
Once ZRP is adopted, the problem of the positron-molecule interaction,
including the vibrational dynamics, can be solved practically exactly. In the
previous paper \cite{Gribakin:02} the interaction between the atoms in the
dimer was treated using the harmonic approximation (HA), which allowed the
vibrational coupling matrix elements to be calculated analytically.
A parabolic potential does not describe well the shallow asymmetric interatomic
potential for a weakly bound van der Waals molecule such as Kr$_2$. In this
work we use the Morse potential to provide a better description of the
molecular interaction. It is a good approximation to the best potential
available for Kr$_2$ \cite{Hal:03}. We examine how the use of a realistic
molecular potential affects the positions and magnitudes of the VFR.
To explore positron binding to
polyatomics we again use the ZRP method. Specifically, we model alkanes
by representing the CH$_2$ and CH$_3$ groups by ZRPs. We investigate the
dependence of the binding energy on the number of monomers and find that a
second bound state emerges for a molecule with ten carbons.
\section{Zero-range model for a molecular dimer}
In a van der Waals molecule the atoms are far apart and are only weakly
perturbed by each other. This makes it an ideal system for applying the
ZRP method. In this work we model the interaction between the two Kr atoms
using the Morse potential (MP),
\begin{equation} \label{Morse}
U(R) = U_{\rm{min}}[ e^{-2\alpha (R-R_0)}-2e^{-\alpha (R-R_0)}],
\end{equation}
with the parameters $R_0=7.56$ a.u.,
$U_{\rm min} = 6.32\times 10^{-4}~\mbox{a.u.}= 17.2$ meV, and
$\omega = (2U_{\rm min} \alpha ^2/m)^{1/2}= 1.1\times 10^{-4}~\mbox{a.u.}=2.99$
meV \cite{Rad:86,Hub:79}, where $m$ is the reduced mass of Kr$_2$. The energy
eigenvalues and eigenfunctions of the MP
are given by simple analytical formulae \cite{Landau}. In Fig. \ref{pot_plot}
we compare the Morse potential to an accurate fit of the best available
Kr$_2$ potential \cite{Hal:03},
\begin{equation}
\label{PotForm}
V(R)=Ae^{-\alpha R-\beta R^2}-\sum_{n=3}^{8}f_{2n}(R,b)\frac{C_{2n}}{R^{2n}},
\end{equation}
where $\alpha $, $\beta $, and $A$ characterise the short-range part of
the potential, and $C_{2n}$ is a set of six dispersion coefficients. The
function $f_{2n}(R,b)$ is the damping function \cite{Tan:84},
\begin{equation} \label{damp}
f_{2n}(R,b)=1-e^{-bR}\sum_{k=0}^{2n}\frac{(bR)^k}{k!}.
\end{equation}
The values of the parameters given in \cite{Hal:03} are:
$\alpha = 1.43158$, $\beta = 0.031743$, $A = 264.552$, $b = 2.80385$,
$C_6 = 144.979$, $C_8 = 3212.89$, $C_{10} = 92633.0$,
$C_{12} = 3.57245\times 10^6$, $C_{14}=1.79665\times 10^8$, and
$C_{16} = 1.14709 \times 10^{10}$ (atomic units are used throughout).
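The fit above is straightforward to evaluate numerically; the sketch below transcribes the potential and the damping function and checks that the minimum of the fit lies near the Morse values $R_0 = 7.56$ a.u. and depth $\approx 6.3\times 10^{-4}$ a.u. quoted above (all quantities in atomic units; the search window for the minimum is an arbitrary choice).

```python
import numpy as np
from math import exp, factorial

# Parameters of the Kr2 fit (atomic units)
ALPHA, BETA, A, B = 1.43158, 0.031743, 264.552, 2.80385
C = {6: 144.979, 8: 3212.89, 10: 92633.0,
     12: 3.57245e6, 14: 1.79665e8, 16: 1.14709e10}

def damping(n2, r, b=B):
    """Damping function f_{2n}(R, b): -> 0 at small R, -> 1 at large R."""
    x = b * r
    return 1.0 - exp(-x) * sum(x**k / factorial(k) for k in range(n2 + 1))

def potential(r):
    """Kr2 potential V(R): exponential repulsion minus damped dispersion."""
    short = A * exp(-ALPHA * r - BETA * r**2)
    disp = sum(damping(n2, r) * c / r**n2 for n2, c in C.items())
    return short - disp

r = np.linspace(6.5, 9.0, 2501)          # search window (a.u.), assumed
v = np.array([potential(x) for x in r])
r_min, v_min = r[v.argmin()], v.min()    # minimum of the fitted potential
```

The minimum found this way sits close to the Morse parameters, consistent with Fig.~\ref{pot_plot}.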
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Kr2_pot.eps}
\end{center}
\caption{Comparison of the best Kr$_2$ potential (solid curve) with
the Morse potential (dashed curve) and harmonic approximation (dotted curve).
Chain curve is the adiabatic potential for the $e^+$Kr$_2$.}
\label{pot_plot}
\end{figure}
Figure \ref{pot_plot} shows that the Morse potential is close to the best
Kr$_2$ potential, while the HA is valid only in the vicinity of the minimum.
This conclusion is supported by the comparison of the vibrational
spacings. For the MP we have $\omega_{10}\equiv E_1-E_0 = 2.74$ meV,
$\omega_{21} = 2.47$ meV, $\omega_{32} = 2.21$ meV, which agree well
with $\omega_{10}= 2.65$ meV, $\omega_{21} = 2.39$ meV,
$\omega_{32} = 2.12$ meV for the Kr$_2$ potential \cite{Hal:03}. Both
potentials are strongly anharmonic, with $\omega _{n+1,n}$ deviating
markedly from $\omega = 2.99$ meV of HA. Obviously, the MP is
a much better approximation than HA for modelling the Kr$_2$ potential.
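The spacings quoted above follow directly from the standard Morse eigenvalue formula; a quick numerical check using the rounded parameter values in meV (small discrepancies with the quoted numbers stem from this rounding):

```python
U_MIN = 17.2   # Morse well depth U_min in meV
OMEGA = 2.99   # vibrational quantum omega in meV

def morse_level(n, u_min=U_MIN, omega=OMEGA):
    """Morse eigenvalue measured from the dissociation limit (meV):
    E_n = -U_min + omega*(n + 1/2) - [omega*(n + 1/2)]**2 / (4*U_min)."""
    x = omega * (n + 0.5)
    return -u_min + x - x * x / (4.0 * u_min)

spacings = [morse_level(n + 1) - morse_level(n) for n in range(3)]
# spacings ~ [2.73, 2.47, 2.21] meV: omega_10, omega_21, omega_32
```

The anharmonic decrease of the spacings, roughly $\omega^2/2U_{\rm min}$ per level, is what the harmonic approximation misses entirely.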
In the ZRP model the interaction between a positron and an atom
is expressed as a boundary condition for the positron wavefunction $\psi $,
\begin{equation}\label{eq:zero}
\left. \frac{1}{r\psi}\frac{d(r\psi)}{dr}\right| _{r{\rightarrow}0}=-\kappa_0,
\end{equation}
where $\kappa _0$ is the reciprocal of the scattering length \cite{DO88}.
Positron-Kr scattering calculations yield $\kappa _0=-0.1$ a.u.
\cite{MSC80,DFG96,JL03}.
When applied to a two-centre problem, this condition can be expressed
as
\begin{equation}\label{eq:zero2}
\Psi |_{{\bf r}{\rightarrow }{\bf R}_i}\simeq {\rm const} \times
\left(\frac{1}{|{\bf r}-{\bf R}_i|}-\kappa_0 \right) ,
\end{equation}
where ${\bf r}$ is the positron coordinate, and ${\bf R}_i$ ($i=1,\,2$) are
the coordinates of the two atoms.
Outside the (zero) range of action of the atomic potentials, the positron-dimer
wavefunction can be written as a linear combination of the incident and
scattered waves,
\begin{equation}\label{eq:Psi}
\Psi = e^{i{\bf k}_0 \cdot{\bf r}}\Phi _0({\bf R}) +
\sum_n A_n \Phi _n({\bf R}) \frac{e^{ik_n|{\bf r}-{\bf R}_1|}}
{|{\bf r}-{\bf R}_1|}+
\sum_n B_n \Phi _n({\bf R}) \frac{e^{ik_n|{\bf r}-{\bf R}_2|}}
{|{\bf r}-{\bf R}_2|},
\end{equation}
where ${\bf k}_0$ is the incident positron momentum, $\Phi_n $ is the
$n$th vibrational state of the molecule ($n=0,\,1,\dots $),
and ${\bf R}= {\bf R}_1 - {\bf R}_2$ is the interatomic distance.
Equation (\ref{eq:Psi}) is written for the
case when the positron collides with a ground-state molecule.
The coefficients $A_n$ and $B_n$ determine the excitation amplitude of the
$n$th vibrational state of the molecule, and
$k_n=[k_0^2-2(E_n-E_0)]^{1/2}$ is the corresponding positron momentum.
Applying (\ref{eq:zero2}) gives a set of linear equations for $A_n$ and
$B_n$,
\begin{align}
&(\kappa _0+ik_n)A_n+\sum _m\left(\frac{e^{ik_mR}}{R}\right) _{nm}B_m
= -(e^{i{\bf k}_0\cdot {\bf n}R/2})_{n0}, \label{eq:A} \\
&(\kappa _0+ik_n)B_n+\sum _m\left(\frac{e^{ik_mR}}{R}\right) _{nm}A_m
= -(e^{-i{\bf k}_0\cdot {\bf n}R/2})_{n0}, \label{eq:B}
\end{align}
where ${\bf n}$ is a unit vector along the molecular axis (whose direction
we assume to be fixed during the collision), and the matrix elements are given
by
\begin{equation}\label{n0}
\left(e^{\pm i{\bf k}_0\cdot{\bf n}R/2}\right)_{n0} = \int
\Phi _n^*(R)e^{\pm i{\bf k}_0\cdot {\bf n}R/2}\Phi _0(R)dR ,
\end{equation}
\begin{equation}\label{nm}
\left( \frac{e^{ik_mR}}{R}\right)_{nm} = \int
\Phi_n^*(R)\frac{e^{ik_mR}}{R}\Phi _m(R)dR.
\end{equation}
In HA these matrix elements can be evaluated analytically, (\ref{n0}) --
exactly, and (\ref{nm}) in the leading order \cite{Gribakin:02}. For the
MP we calculated them numerically.
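In HA the diagonal matrix element of $e^{iqR}$ (with $q = \pm\mathbf{k}_0\cdot\mathbf{n}/2$) over the Gaussian ground state equals $e^{iqR_0}e^{-q^2/4m\omega}$; the sketch below checks this analytic result against direct numerical quadrature, in atomic units. The specific value of $q$ and the assumed atomic mass of Kr ($83.8$ amu, giving the reduced mass) are illustrative choices.

```python
import numpy as np

M_RED = 0.5 * 83.8 * 1822.89   # reduced mass of Kr2 in a.u. (83.8 amu assumed)
OMEGA = 1.1e-4                 # HA vibrational frequency in a.u.
R0 = 7.56                      # equilibrium separation in a.u.
Q = 0.05                       # example momentum q = +-(k0.n)/2 (arbitrary)

# Normalized harmonic-oscillator ground state centred at R0
r = np.linspace(R0 - 1.5, R0 + 1.5, 20001)
dr = r[1] - r[0]
phi0 = (M_RED * OMEGA / np.pi) ** 0.25 * np.exp(-0.5 * M_RED * OMEGA * (r - R0) ** 2)

numeric = np.sum(phi0 ** 2 * np.exp(1j * Q * r)) * dr          # Eq. (10), n = 0
analytic = np.exp(1j * Q * R0) * np.exp(-Q ** 2 / (4.0 * M_RED * OMEGA))
```

For the MP the same quadrature applies, with the Gaussian replaced by the (asymmetric) Morse eigenfunctions.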
After solving equations ($\ref{eq:A}$)--($\ref{eq:B}$) for $A_n$ and $B_n$,
one finds the total elastic ($0\rightarrow 0$) and vibrational excitation
($0\rightarrow n$, $n=1,\,2,\dots $) cross sections,
\begin{equation} \label{cs_eqn}
\sigma_{0\rightarrow n}=4\pi\frac{k_n}{k_0}|A_n+B_n|^2,
\end{equation}
and the positron annihilation rate,
\begin{equation}
\label{Z_eqn}
Z_{\rm eff}=Z_{\rm eff}^{(0)}\kappa_0^2\sum_ n (|A_n|^2+|B_n|^2),
\end{equation}
where $Z_{\rm eff}^{(0)}$ is the positron-atom $Z_{\rm eff}$ at $k=0$ (see
\cite{Gribakin:02} for details). For Kr we use
$Z_{\rm eff}^{(0)}=81.6$ \cite{MSC80}.
Equations (\ref{eq:A})--(\ref{eq:B}) also allow one to determine the energies
of bound states of the positron-dimer system, by looking for the poles of
$A_n$ and $B_n$ at negative projectile energies, i.e., for imaginary
positron momenta $k_0=i|k_0|$.
For doing numerical calculations, the set of equations
(\ref{eq:A})--(\ref{eq:B})
can be truncated by assuming that $A_n=B_n=0$ for $n>N_c$. This means
that only the first $N_c+1$ channels with $n = 0,\,1,\dots ,\,N_c$ are taken
into consideration. At low projectile energies only a small number of channels
are open, and one obtains converged results with a relatively small $N_c$.
In the calculations we used $N_c=15$ and 10, for the HA and MP, respectively.
This value for MP is the total number of bound excited states.
In the single-channel approximation, $N_c = 0$, the HA results practically
coincide with those of the fixed-nuclei approximation, since the matrix
elements (\ref{n0}) and (\ref{nm}) become
$e^{\pm i{\bf k}_0\cdot{\bf n}R_0/2}$ and
$e^{ik_mR_0}/R_0$, respectively (neglecting the second-order and higher
corrections in the small parameter $k_0(m\omega )^{-1/2}$ \cite{Gribakin:02}).
A similar calculation for MP produces slightly different results, because
of the asymmetry of the vibrational ground-state wavefunction, which
gives rise to first-order corrections to these matrix elements.
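In the single-channel limit the equations above reduce to a $2\times 2$ linear system that can be solved directly. The sketch below evaluates it at zero positron energy with fixed nuclei at $R = R_0$ (where the incident-wave factors reduce to unity), illustrating the strong enhancement of $Z_{\rm eff}$ produced by the shallow bound state; this is an illustrative model evaluation, not a replacement for the multichannel calculation.

```python
import numpy as np

KAPPA0 = -0.1      # inverse positron-Kr scattering length kappa_0 (a.u.)
R0 = 7.56          # fixed internuclear separation R = R_0 (a.u.)
ZEFF_ATOM = 81.6   # positron-Kr Z_eff at k = 0

# Single-channel system in the k0 -> 0 limit (incident-wave factors -> 1):
#   kappa0*A + (1/R)*B = -1
#   (1/R)*A + kappa0*B = -1
mat = np.array([[KAPPA0, 1.0 / R0],
                [1.0 / R0, KAPPA0]])
A, B = np.linalg.solve(mat, np.array([-1.0, -1.0]))

sigma = 4.0 * np.pi * abs(A + B) ** 2                  # elastic cross section (a.u.)
z_eff = ZEFF_ATOM * KAPPA0 ** 2 * (A ** 2 + B ** 2)    # annihilation parameter
```

With these parameters $A = B \approx -31.0$ and $Z_{\rm eff} \approx 1.6\times 10^3$, a large enhancement over the atomic value of $81.6$, driven by the weakly bound state of the positron-dimer system.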
\section{Results for Kr$_2$}
Table $\ref{tab1}$ shows the energies of the bound states (negative) and the
VFRs (positive) of the e$^+$Kr$_2$ complex obtained with MP and in HA. In the
$N_c=0$ approximation the binding energies are $\varepsilon_0 = -3.77$ meV
and $\varepsilon_0=-3.48$ meV for HA and MP, respectively. The binding
energy for the MP is smaller due to the asymmetry of the potential curve.
The corresponding energies obtained from a multichannel calculation
given in Table \ref{tab1} are lower, because allowing the nuclei to move
leads to stabilisation of the $e^+$Kr$_2$ complex.
\begin{table}[ht!]
\caption{Energies of the bound states and resonances for $e^+$Kr$_2$.}
\begin{tabular}{crr}
\hline
& \multicolumn{2}{c}{Energy (meV)}\\
\cline{2-3}
Level & HA & MP \\
\hline
0 & $-4.23$ & $-3.89$\\
1 & $-1.41$ & $-0.74$\\
2 & $ 1.42$ & $ 2.16$\\
3 & $ 4.25$ & $ 4.83$\\
\hline
\end{tabular}
\label{tab1}
\end{table}
The ground-state energy of the complex can also be compared to the results
of an adiabatic calculation. For fixed nuclei the energy of the positron
bound state is $-\kappa ^2/2$, where $\kappa $ is a positive root of the
equation $\kappa =\kappa _0+e^{-\kappa R}/R$. Adding this energy
to the Kr$_2$ potential, one obtains the adiabatic potential for the
$e^+$Kr$_2$ complex (chain curve in Fig. \ref{pot_plot}).
Its minimum is about 3.94~meV below that of the Kr$_2$, which is close
to the MP value of $\varepsilon _0$ in Table \ref{tab1}.
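Solving $\kappa = \kappa_0 + e^{-\kappa R}/R$ at the equilibrium separation $R_0 = 7.56$ a.u. with $\kappa_0 = -0.1$ a.u. gives the fixed-nuclei binding energy $\kappa^2/2 \approx 3.77$ meV, matching the $N_c = 0$ value quoted above; a bisection sketch:

```python
from math import exp

KAPPA0, R = -0.1, 7.56     # a.u.
HARTREE_MEV = 27211.4      # 1 hartree in meV

def f(kappa):
    """Root of f gives the fixed-nuclei positron bound state:
    kappa = kappa_0 + exp(-kappa*R)/R."""
    return kappa - KAPPA0 - exp(-kappa * R) / R

lo, hi = 1e-4, 0.1         # bracket: f(lo) < 0 < f(hi)
for _ in range(60):        # bisection to machine precision
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
kappa = 0.5 * (lo + hi)
binding_mev = 0.5 * kappa ** 2 * HARTREE_MEV
```

This yields $\kappa \approx 0.0166$ a.u. and a binding energy of $\approx 3.77$ meV; repeating the solve along $R$ and adding the result to the Kr$_2$ potential gives the adiabatic curve of Fig.~\ref{pot_plot}.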
The first vibrational excitation energy of the $e^+$Kr$_2$ complex,
$\omega' _{10}= \varepsilon _1-\varepsilon _0$, for MP is 3.15~meV, while
in HA it is 2.82~meV. Thus, MP calculations predict a ``stiffening'' of the
vibrational motion of the complex in comparison with that of Kr$_2$.
This effect is caused by a shift of the equilibrium position of the atoms
to the left (see Fig. \ref{pot_plot}), towards the steep repulsive part of
the interatomic potential. Note that in MP, the second bound state with
$\varepsilon_1=-0.74$~meV lies just below the threshold. We will see that
this causes a steep rise in $\sigma_{0\rightarrow 0}$ and $Z_{\rm eff}$ as
$k\rightarrow 0$. In HA this bound state is lower, i.e., further away
from threshold, and its effect on the cross sections is less noticeable.
This combination of a lower binding energy and a greater vibrational
frequency in MP means that the first resonance observed in the
cross sections and in $Z_{\rm eff}$ will be at a greater energy than in HA.
Figure \ref{MPHAcs_plot} shows the elastic and vibrational excitation
cross sections obtained from Eq. (\ref{cs_eqn}) for MP and in HA.
To highlight the effect of resonances, the elastic scattering cross
section has been calculated in both multichannel and single-channel ($N_c=0$)
approximations. The single-channel (``fixed nuclei'') elastic cross
sections from the two calculations are quite close. The multichannel
cross sections are qualitatively similar, the main difference being
in the positions and widths of the resonances and energies of the
vibrational thresholds. The magnitudes of the $\sigma_{0\rightarrow 1}$ and
$\sigma_{0\rightarrow 2}$ cross sections are greater for MP. Another
noticeable difference is in the rise of $\sigma_{0\rightarrow 0}$ towards
zero positron energy in the MP calculation.
\begin{figure}[!ht]
\begin{center}
\begin{minipage}{6.5cm}
\begin{center}
\includegraphics[width=6.5cm]{newHAcs.eps}
\end{center}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{6.5cm}
\begin{center}
\includegraphics[width=6.5cm]{newMPcs.eps}
\end{center}
\end{minipage}
\caption{Cross sections for positron scattering from Kr$_2$ calculated
using HA (left) and MP (right): solid curve, elastic scattering,
$\sigma_{0\rightarrow 0}$; long-dashed curve, vibrational excitation,
$\sigma_{0\rightarrow 1}$ (times $10^2$); chain curve, vibrational excitation,
$\sigma_{0\rightarrow 2}$ (times $10^4$); short-dashed curve, single-channel
elastic cross section.}
\label{MPHAcs_plot}
\end{center}
\end{figure}
\noindent
Figure \ref{ZHAMP_plot} shows the positron annihilation rate (\ref{Z_eqn})
obtained with and without the coupling to the vibrational motion, i.e.,
from the multichannel and single-channel calculations.
The background (``fixed nuclei'', $N_c=0$) $Z_{\rm eff}$ at low positron momenta
is enhanced due to the large positron-Kr$_2$ scattering length.
Such enhancement, first predicted in Ref.~\cite{Goldanskii:64}, affects
both $Z_{\rm eff}$ and the elastic cross section, which in this case are proportional
to each other, $Z_{\rm eff}\sim \sigma _{\rm el}/4\pi $ in atomic units
\cite{Gribakin:00,Gribakin:01}. The effect of VFRs on $Z_{\rm eff}$ is much more
prominent than in scattering, with the strongest resonance four orders of
magnitude above the background. The widths of the resonances in MP and HA
are quite different, e.g., $\Gamma =2.8~\mu$eV (MP) vs. $\Gamma =16.7~\mu$eV
(HA) for the strongest $n=2$ resonance. This difference, also seen in the
scattering cross sections (Fig. \ref{MPHAcs_plot}), means that anharmonicity
of the Kr$_2$ potential reduces the coupling between the incident positron
and vibrationally-excited $e^+$Kr$_2$ compound. A possible explanation for
this is that positron binding has only a small effect on the equilibrium
position of the nuclei (as seen from the adiabatic potential curve of $e^+$Kr$_2$
in Fig. \ref{pot_plot}).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{ZeffnewHAMP.eps}
\caption{$Z_{\rm eff}$ for positrons on Kr$_2$ calculated using MP and HA in the
multichannel (solid and chain curves, respectively) and single-channel
(dashed and dotted curves, respectively) approximations.}
\label{ZHAMP_plot}
\end{center}
\end{figure}
To compare the integral contribution of the resonances, we averaged
$Z_{\rm eff}$ over the Maxwellian positron energy distribution,
\begin{equation}\label{eq:ZeffT}
\bar Z _{\rm eff}(T)=\int_0^{\infty}Z_{\rm eff}(k)
\frac{e^{-k^2/2k_{\rm B}T}}{(2\pi k_{\rm B}T)^{3/2}}4\pi k^2dk.
\end{equation}
Figure \ref{ZeffT_plot} shows $\bar Z_{\rm eff}(T)$ for HA and MP. In both
cases the VFR gives a very large contribution, increasing $Z_{\rm eff}$ by an order
of magnitude at $T\sim 1$ meV, i.e., for positron energies close
to that of the resonance. Its effect is seen even at much higher
temperatures, raising $Z_{\rm eff}$ above the non-resonant background by a factor
of three for room temperature positrons.
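As a rough numerical illustration of the thermal average \eqref{eq:ZeffT} (a sketch with invented parameters, not the Kr$_2$ data of this paper), one can average a toy $Z_{\rm eff}(k)$, consisting of a constant background plus a single narrow Breit--Wigner resonance, over the Maxwellian distribution; at $k_{\rm B}T$ comparable to the resonance energy the resonance lifts the average well above the background:

```python
import math

# Toy model for the thermally averaged annihilation rate (illustrative
# parameters only; these are NOT the Kr2 values of the paper).
Z_BG = 100.0      # non-resonant background Z_eff
EPS_R = 3.7e-5    # resonance position, a.u. (about 1 meV)
GAMMA = 1.0e-6    # resonance width, a.u.
Z_PEAK = 1.0e4    # resonance peak height

def z_eff(k):
    """Background plus a Breit-Wigner resonance in the energy eps = k^2/2."""
    eps = 0.5 * k * k
    half = 0.5 * GAMMA
    return Z_BG + Z_PEAK * half * half / ((eps - EPS_R) ** 2 + half * half)

def z_bar(kT, n=40000, kmax_sigmas=10.0):
    """Trapezoidal evaluation of the Maxwellian average of z_eff."""
    sigma = math.sqrt(kT)
    h = kmax_sigmas * sigma / n
    norm = (2.0 * math.pi * kT) ** 1.5
    total = 0.0
    for i in range(n + 1):
        k = i * h
        weight = 0.5 if i in (0, n) else 1.0
        maxwell = 4.0 * math.pi * k * k * math.exp(-0.5 * k * k / kT) / norm
        total += weight * z_eff(k) * maxwell * h
    return total

# At k_B T comparable to the resonance energy the single VFR lifts the
# average well above the background value Z_BG.
print(z_bar(3.7e-5))
```

The grid step must resolve the resonance width in momentum, $\Delta k\sim\Gamma/k_r$, which is why a fairly fine mesh is used here.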
\begin{figure}[!ht]
\begin{center}
\begin{minipage}{6.5cm}
\begin{center}
\includegraphics[width=6.5cm]{ZeffnewTHA.eps}
\end{center}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{6.5cm}
\begin{center}
\includegraphics[width=6.5cm]{ZeffnewTMP.eps}
\end{center}
\end{minipage}
\caption{Thermally averaged $Z_{\rm eff}$ for positrons on Kr$_2$, obtained
using HA (left) and MP (right): long-dashed curve, single-channel
approximation; chain curve, multichannel calculation, non-resonant
background; solid curve, multichannel calculation, total, including
the VFR. For comparison, $Z_{\rm eff}$ for Kr is shown (short-dashed curve).}
\label{ZeffT_plot}
\end{center}
\end{figure}
\section{ZRP model of positron binding to polyatomics}
In positron-molecule collisions VFRs occur when the energy of the incident
positron plus the energy released in positron-molecule binding, equals the
energy of a vibrational excitation of the positron-molecule complex. For weakly
bound positron states the latter should be close to a vibrational excitation
energy of the neutral molecule. Hence, by observing VFRs one can obtain
information on the binding energy. This procedure was applied to
electron scattering from (CO$_2$)$_N$ clusters \cite{LBF00}. In this
system the redshift of the VFR was found to increase with the cluster size
by about 12 meV per unit. A simple model of a spherically-symmetric cluster
with a constant potential inside and $-\alpha e^2/r^4$ potential outside
was able to reproduce the dependence of the electron binding energy on the
cluster size.
In a similar way, the measurements of the energy dependence of $Z_{\rm eff}$
for alkanes with a high-resolution positron beam allow one to determine
their positron binding energies \cite{Gilb:02,BGS03}. In contrast, an accurate
{\em ab initio} calculation of positron binding to a polyatomic molecule is
probably beyond the capability of present-day theory. Even for atoms,
calculations of positron binding remain a challenging task because of the need
to carefully account for strong electron-positron and many-electron
correlations (see, e.g., \cite{MBR02}).
In this work we have adopted a different approach. To examine positron
binding to alkanes, we model the molecule by representing each CH$_2$ or
CH$_3$ group by a ZRP centred on the corresponding carbon atom. The wave
function of a bound state decreases as $r^{-1}e^{-\kappa r}$ at large
positron-molecule separation $r$, where $\kappa^2/2$ is the binding energy.
For weakly bound states this wavefunction is very diffuse
($\kappa \ll 1$ a.u.), which means that the positron moves mostly
far away from the molecule. The actual binding energy is determined by their
interaction at small distances, and the ZRP model is a simple way of
parametrizing such interaction. It allows us to account for the scaling of the
positron-molecule interaction with the number of monomers (to the extent that
the monomers do not have a large effect on each other), and to use a realistic
geometry of the molecule. We will consider two cases, a straight carbon chain,
Fig. \ref{alkaneC_plot}~(a), and a ``zigzag'' carbon chain,
Fig. \ref{alkaneC_plot}~(b), which matches the actual structure
of the molecule, Fig. \ref{alkaneC_plot}~(c). Unlike the Kr$_2$ model,
the $\kappa_0$ parameter of the ZRP for alkanes is adjusted semiempirically
(see below).
\begin{figure}[!ht]
\begin{center}
\begin{minipage}{4.5cm}
\includegraphics[width=4.5cm]{straight.eps}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}{3.8cm}
\includegraphics[width=3.8cm]{zigzag.eps}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}{4.7cm}
\includegraphics[width=4.7cm]{C4H10_3d_1.eps}
\end{minipage}
\caption{Zero-range potential models of the alkane molecule (butane,
C$_4$H$_{10}$, is shown). In (a) and (b) the shaded circles represent a
CH$_2$ or CH$_3$ group, while (c) is a true 3D molecular structure \cite{NIST}.
The parameters used are $R_0=2.911$ a.u. and $\theta =113^\circ $.}
\label{alkaneC_plot}
\end{center}
\end{figure}
The bound-state positron wavefunction in the field of $N$ ZRP centres
has the form \cite{DO88},
\begin{equation}\label{eq:bound}
\Psi = \sum_{i=1}^N A_i \frac{e^{-\kappa |{\bf r}-{\bf R}_i|}}
{|{\bf r}-{\bf R}_i|}.
\end{equation}
Subjecting it to $N$ boundary conditions (\ref{eq:zero2}) with parameters
$\kappa _{0i}$, we find the positron energy as $\varepsilon _0=-\kappa^2/2$,
where $\kappa $ is a positive root of the equation
\begin{equation}\label{eq:kap}
\det \left[ (\kappa_{0i}-\kappa)\delta_{ij} + \frac{e^{-\kappa R_{ij}}}
{R_{ij}}(1-\delta _{ij})\right] = 0.
\end{equation}
Here $R_{ij}=|{\bf R}_i-{\bf R}_j|$ is the distance between the $i$th and $j$th
ZRP.
For modelling alkanes we choose the distance between the neighbouring ZRPs
equal to the length of the C--C bond. All ZRPs are characterised
by the same value of $\kappa _{0i}=-1.5$. This value ensures that the
molecule with three ZRP centres (which models propane, C$_3$H$_8$) has
a small binding energy, 7 meV for the straight chain, and 12 meV for the
zigzag chain. These values are close to those inferred from experiment
\cite{Gilb:02,BGS03}, where propane is the first molecule for which a
downshift of the C--H peak from the corresponding vibrational energy
can be seen.
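To illustrate how binding energies follow from Eq. (\ref{eq:kap}), the sketch below finds the ground-state root for a straight chain of $N$ identical ZRPs. For the nodeless ground state, whose coefficients $A_i$ are all positive, the condition is equivalent to $\kappa=\kappa_0+\lambda_{\max}(\kappa)$, where $\lambda_{\max}(\kappa)$ is the largest (Perron) eigenvalue of the coupling matrix $e^{-\kappa R_{ij}}/R_{ij}$, $i\neq j$. The value $\kappa_0=-0.5$ below is purely illustrative (it is not the semiempirical $\kappa_0=-1.5$ of this section), chosen so that a weakly bound state already exists for $N=3$ with this simple nearest-distance setup:

```python
import math

R0 = 2.911        # a.u., C-C bond length (as in the paper)
KAPPA0 = -0.5     # a.u., ILLUSTRATIVE ZRP strength, not the paper's value
HARTREE_MEV = 27211.4

def lambda_max(kappa, n):
    """Largest (Perron) eigenvalue of C_ij = exp(-kappa*R_ij)/R_ij, i != j,
    for a straight chain R_ij = |i-j|*R0, via power iteration."""
    c = [[0.0 if i == j else
          math.exp(-kappa * abs(i - j) * R0) / (abs(i - j) * R0)
          for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 0.0
    for _ in range(300):
        w = [sum(c[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                    # Perron eigenvector is positive
        v = [x / lam for x in w]
    return lam

def binding_mev(n):
    """Ground-state binding energy kappa^2/2 (in meV) of an n-centre chain."""
    g = lambda k: KAPPA0 + lambda_max(k, n) - k   # decreasing in k
    if g(0.0) <= 0.0:
        return None                     # no bound state for this chain
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    k = 0.5 * (lo + hi)
    return 0.5 * k * k * HARTREE_MEV

for n in (3, 4, 6, 8):
    print(n, binding_mev(n))
```

The binding energy grows monotonically with the number of centres, since the coupling matrix of a shorter chain is a principal submatrix of that of a longer one.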
In Fig. \ref{BE4variousC_plot} the results of our calculations are
compared with the experimental binding energies. As we move from a straight
ZRP chain, Fig. \ref{alkaneC_plot} (a), to a ``zigzag'' chain,
Fig. \ref{alkaneC_plot} (b), the binding energy increases. This is expected
as the carbon atoms beyond the nearest neighbour become closer to one another.
The overall dependence of the binding
energy on the number of monomers $n$ predicted by the ZRP model is similar
to that of the experimental data, while the absolute values predicted by
our simple theory are within a factor of two of the measurements. One may also
notice that the measured binding energies increase almost linearly with $n$,
while the calculation shows signs of saturation. These discrepancies might
be related to the fact that the ZRP model disregards the long-range
$-\alpha e^2/r^4$ nature of the positron-molecule interaction, which would
restrict its validity to very small binding energies.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{alk_BE_kap1.5.eps}
\end{center}
\caption{Positron binding energies $|\varepsilon _0|$ for alkanes modelled
using straight (circles) and ``zigzag'' (triangles) ZRP chains. Experimental
results (crosses) \cite{Gilb:02,BGS03} are shown for comparison.}
\label{BE4variousC_plot}
\end{figure}
A remarkable feature of the model calculations is the emergence of a
second bound state for $n=10$ in both straight and ``zigzag'' chains.
This prediction supports the experimental evidence for the second bound state,
in the form of a C--H peak, which re-emerges at 0.37 eV for dodecane
($n=12$) \cite{Gilb:02,BGS03} and stabilises by about 50 meV for
C$_{14}$H$_{30}$ \cite{BYS05}.
\section{Summary and outlook}
We have used the Morse potential to model the interaction between the atoms
in the Kr$_2$ dimer. We find that the positron binding energies and the
positions and widths of the VFR change compared with the harmonic
approximation. However, the overall picture remains similar, with the lowest
VFR enhancing the Maxwellian-averaged positron annihilation rate by
an order of magnitude to $Z_{\rm eff}\sim 10^4$ at $T\sim 1$ meV.
In principle, a similar approach could be applied to positron interaction
with other rare-gas dimers and clusters. For Ar and lighter atoms, the
positron-atom attraction is too weak to allow formation of positron bound
states and VFRs. For Xe$_2$, on the contrary, the attraction is much stronger
($\kappa_0\sim -0.01$ \cite{DFG96}). This leads to a much greater
positron-dimer binding energy ($\sim 40$ meV), which means that many
vibrationally excited states of $e^+$Xe$_2$ are bound, and only those with
high $n$ lie above the positron-dimer continuum threshold. The coupling
between these states and the target ground state is extremely weak, and
we have not been able to find any VFR for positrons incident on ground-state
Xe$_2$ in our calculations.
A zero-range potential model for positron binding to alkanes
yields binding energies that are in qualitative agreement with experiment.
Our calculation predicts the emergence of the second bound state for a
molecule with ten carbon atoms. Such a bound state may have already been
observed for dodecane and tetradecane.
The zero-range potential is an exceptionally simple model. The accuracy of our
predictions is of course limited by the nature of the ZRP model. In particular,
ZRPs disregard the long-range character of the positron-target polarization
attraction. This is a reasonable approximation for very weakly bound states,
but as the binding gets stronger, larger errors may be introduced.
Given the difficulty
of performing {\it ab initio} calculations for positron interaction with
polyatomics, one hopes that more sophisticated yet tractable models
could be developed. They should provide a more accurate description of
positron-molecule binding and capture into vibrational Feshbach resonances.
One may then hope to fully explain the dramatic enhancement of the annihilation
rates and their strong chemical sensitivity for polyatomic molecules.
\section*{Acknowledgements}
We are grateful to L. D. Barnes, C. M. Surko, and J. A. Young for drawing our
attention to the experimental evidence for a second positron bound state
for larger alkanes.
The celebrated theorem of Charles Fefferman from \cite{FeffBall} shows that the ball multiplier is an unbounded operator on $L^p(\mathbb{R}^n)$ for all $p\neq 2$ whenever $n\geq 2$. A well-known argument originally due to Yves Meyer, \cite{DeGuzman}, exhibits the intimate relationship of the ball multiplier with vector-valued estimates for directional singular integrals along all possible directions. Fefferman proves in \cite{FeffBall} the impossibility of such estimates by testing these vector-valued inequalities on a Kakeya set.
Besicovitch or Kakeya sets are compact sets in the Euclidean space that contain a line segment of unit length in every direction. Sets of this type with zero Lebesgue measure do exist. However, in two dimensions, Kakeya sets are necessarily of full Hausdorff dimension. The question of the Hausdorff dimension of Kakeya sets can be then formulated as a question of quantitative boundedness of the Kakeya maximal function, which is a maximal directional average along rectangles of fixed eccentricity and pointing along arbitrary directions.
The importance of the ball multiplier for the summation of higher dimensional Fourier series, as well as its intimate connection to Kakeya sets, have motivated a host of problems in harmonic analysis which have been driving relevant research since the 1970s. Finitary or smooth models of the ball multiplier such as the polygon multiplier and the Bochner-Riesz means quantify the failure of boundedness of the ball multiplier and formalize the close relation of these operators with directional maximal and singular averages.
This paper is dedicated to the study of a variety of operators in the plane that are all connected in one way or another with the ball multiplier. Our point of view is through the analysis of directional operators mapping into $L^p(\mathbb{R}^2;\ell^q)$-spaces where the inner $\ell^q$-norm is taken with respect to the set of directions. Different values of $q$ are relevant in our analysis but the cases $q=2$ and $q=\infty$ are of particular interest. On one hand, the case $q=\infty$ arises when considering maximal directional averages and the corresponding differentiation theory along directions; see \cite{BatTrans,CDR,DPPAlg,KatzDuke} for classical and recent work on the subject. On the other hand, the case $q=2$ is especially relevant for Meyer's argument that bounds the norm of a vector-valued directional Hilbert transform by the norm of the ball multiplier. It also arises when dealing with square functions associated to conical or directional Fourier multipliers of the type
\[
f\mapsto \{C_j f: j=1,\ldots, N\}
\]
where each $C_j$ is adapted to a different coordinate pair and the $C_j$ have disjoint or well-separated Fourier support. These estimates are directional analogues of the celebrated square function estimate for Fourier restriction to families of disjoint cubes, due to Rubio de Francia \cite{RdF}, and they appear naturally when seeking quantitative estimates on the $N$-gon Fourier multiplier.
While such square function estimates have been considered previously in the literature, and usually approached directly via weighted norm inequalities, our treatment is novel and leads to improved and in certain cases sharp estimates in terms of the cardinality of the set of directions. It rests on a new directional Carleson measure condition and corresponding embedding theorem, which is subsequently applied to intrinsic directional square functions of time-frequency nature. The link between the abstract Carleson embedding theorem and the applications is provided by directional, one and two-parameter time-frequency analysis models. The latter allow us to reduce estimates for directional operators to those of the corresponding intrinsic square functions involving directional wave packet coefficients. We note that in the fixed coordinate system case, related square functions have appeared in Lacey's work \cite{LR}, while a single-scale directional square function similar to those of Section \ref{s:isf} is present in \cite{DP+} by Guo, Thiele, Zorin-Kranich and the second author.
Having clarified the context of our investigation, we turn to the detailed description of our main results and techniques.
\subsection*{A new approach to directional square functions} While we address several types of square functions associated to directional multipliers, our analysis of each relies on a common first step. This is an $L^4$-square function inequality for abstract Carleson measures associated with one and two-parameter collections of rectangles in $\mathbb{R}^2$, pointing along a finite set of $N$ directions; this setup is presented in Section \ref{sec:carleson} and the central result is Theorem \ref{thm:carleson}. Section \ref{sec:carleson} builds upon the proof technique first introduced by Katz \cite{KatzDuke} and revisited by Bateman \cite{BatTrans} in the study of sharp weak $L^2$-bounds for maximal directional operators. Our main novel contributions are the formulation of an abstract directional Carleson condition which is flexible enough to be applied in the context of time-frequency square functions, and the realization that square functions in $L^4$ can be treated in a $TT^*$-like fashion. The advancements over \cite{BatTrans,KatzDuke} also include the possibility of handling two-parameter collections of rectangles.
In Section \ref{s:isf}, we verify that the Carleson condition, which is a necessary assumption in the directional embedding of Theorem \ref{thm:carleson}, is satisfied by the intrinsic directional wave packet coefficients associated with certain time-frequency tile configurations, and Theorem \ref{thm:carleson} may be thus applied to obtain sharp estimates for discrete time-frequency models of directional Rubio de Francia square function (for instance). Establishing the Carleson condition requires a precise control of spatial tails of the wave packets: this control is obtained by a careful use of Journ\'e's product theory lemma.
The estimates obtained for the time-frequency model square functions are then applied to three main families of operators described below. All of them are defined in terms of an underlying set of $N$ directions. As in Fefferman's counterexample for the ball multiplier, the Kakeya set is the main obstruction to obtaining uniform estimates. Depending on the type of operator, the usable estimates will be restricted to the range $2<p<4$ for square function estimates, or to the range $4/3<p<4$ for the self-adjoint case of the polygon multiplier. The fact that the estimates should be logarithmic in $N$ in the $L^p$-ranges above is dictated by the Besicovitch construction of the Kakeya set. It is easy to see that for $p$ outside this range the only available estimates are essentially trivial polynomial estimates. Further obstructions rule out any estimates for Rubio de Francia type square functions in the range $p<2$, already in the one-directional case.
\subsection*{Sharp Rubio de Francia square function estimates in the directional setting}
Section \ref{sec:cone} concerns quantitative estimates of Rubio de Francia type for the square function associated with $N$ finitely overlapping cone multipliers, of both rough and smooth type. Beginning with the seminal article of Nagel, Stein and Wainger \cite{NSW}, square functions of this type are crucial in the theory of maximal operators, in particular along lacunary directions, see for instance \cite{PR,SS}. In the case of $N$ uniformly spaced cones, logarithmic estimates with unspecified dependence were proved by A.\ C\'ordoba in \cite{CorGFA} using weighted theory.
In order to make the discussion above more precise, and to give a flavor of the results of this paper, we introduce some basic notation. Let $\uptau\subset (0,2\uppi)$ be an interval and consider the corresponding smooth restriction to the frequency cone subtended by $\uptau$, namely
\[
C_\uptau^\circ f(x) \coloneqq \int_{0}^{2\uppi} \int_{0}^\infty \widehat f(\varrho{\rm e}^{i\vartheta}) \upbeta_\uptau\left(\vartheta\right) {\rm e}^{ix\cdot\varrho{\rm e}^{i\vartheta} } \, \varrho{\rm d} \varrho {\rm d} \vartheta,\qquad x\in\mathbb{R}^2,
\]
where $\upbeta_\uptau$ is a smooth indicator on $\uptau$, namely it is supported in $\uptau$ and is identically one on the middle half of $\uptau$.
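One concrete choice of such a smooth indicator, sketched below for illustration (the construction itself is not taken from this paper, which only uses the support and flatness properties of $\upbeta_\uptau$), glues two $C^\infty$ ramps of width $|\uptau|/4$ to a flat plateau on the middle half of $\uptau$:

```python
import math

# One possible smooth indicator for the interval tau = (a, b): supported in
# tau and identically 1 on its middle half. Illustrative construction only.

def smoothstep(t):
    """C-infinity transition from 0 to 1 on (0, 1), built from exp(-1/t)."""
    def f(u):
        return math.exp(-1.0 / u) if u > 0.0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def beta(theta, a, b):
    quarter = 0.25 * (b - a)
    if theta <= a or theta >= b:
        return 0.0                                   # supported in (a, b)
    if theta < a + quarter:
        return smoothstep((theta - a) / quarter)     # smooth ramp up
    if theta > b - quarter:
        return smoothstep((b - theta) / quarter)     # smooth ramp down
    return 1.0                                       # one on the middle half
```

All derivatives of `smoothstep` vanish at the endpoints, so `beta` is $C^\infty$ across the gluing points.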
One of the main results of this paper is a quantitative estimate for a square function associated with the smooth conical multipliers of a finite collection of intervals with bounded overlap. In the statement of the theorem below $\ell^2 _{\bm{\uptau}}$ denotes the $\ell^2$-norm on the finite set of directions $\bm{\uptau}$.
\begin{theorem} \label{thm:smoothcones} Let $\bm{\uptau}=\{\uptau\}$ be a finite collection of intervals in $[0,2\uppi)$ with bounded overlap, namely
\[
\Bigl\| \sum_{\uptau\in \bm{\uptau}} \cic{1}_{\uptau} \Bigr\|_\infty \lesssim 1.
\]
We then have the square function estimate
\begin{align*}
& \left\| \{C_\uptau^\circ f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim_p (\log \# \bm{\uptau} )^{\frac{1}2-\frac1p}\|f\|_p
\end{align*}
for $2\leq p<4$,
as well as the restricted type analogue valid for all measurable sets $E$
\begin{align*}
& \left\| \{C_\uptau^\circ(f\bm{1}_E)\} \right\|_{L^4(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim (\log \# \bm{\uptau} )^{\frac{1}4}|E|^{\frac14}\|f\|_\infty.
\end{align*}
The dependence on $\# \bm{\uptau}$ in the estimates above is best possible.
\end{theorem}
The sharp estimate of Theorem~\ref{thm:smoothcones} above can be suitably bootstrapped in order to provide an estimate for rough conical frequency projections; the precise statement can be found in Theorem \ref{thm:wdcones} of Section~\ref{sec:cone}. The sharpness of the estimates in Theorem~\ref{thm:smoothcones} above is discussed in \S\ref{sec:cexconical}.
A similar square function estimate associated with disjoint rectangular directional frequency projections is presented in Section~\ref{sec:rdf}. This is a square function that is very close in spirit to the one originally considered by Rubio de Francia in \cite{RdF}, and especially to the two-parameter version of Journ\'e from \cite{Journe} and revisited by Lacey in \cite{LR}. The novel element is the directional aspect which comes from the fact that the frequency rectangles are allowed to point along a set of $N$ different directions. Our method of proof can deal equally well with one-parameter rectangular projections or collections of arbitrary eccentricities. As before, we prove an estimate, sharp in terms of the number of directions, for the smooth square function associated with rectangular frequency projections along $N$ directions; this is the content of Theorem~\ref{thm:rdfsmooth}. The main term in the upper bound of Theorem~\ref{thm:rdfsmooth} matches the logarithmic lower bound associated with the Kakeya set.
\subsection*{The polygon multiplier} The square function estimates discussed above may be combined with suitable vector-valued estimates in the directional setting in order to obtain a quantitative estimate for the operator norm of the $N$-gon multiplier, namely the Fourier restriction to a regular $N$-gon $\mathcal{P}_N$,
\begin{equation}
\label{e:Ngon}
T_{\mathcal{P}_N} f(x) \coloneqq \int_{\mathcal{P}_N} \widehat f(\xi) {\rm e}^{ix\cdot \xi} \, {\rm d} \xi, \qquad x\in\mathbb{R}^2.
\end{equation}
In Section~\ref{sec:polygon} we give the details and proof of the following quantitative estimate for the polygon multiplier.
\begin{theorem} \label{thm:polygon} Let $\mathcal P_{N}$ be a regular $N$-gon in $\mathbb{R}^2$ and $T_{\mathcal{P}_N}$ be the corresponding Fourier restriction operator defined in \eqref{e:Ngon}. We have the estimate
\[
\left\|T_{\mathcal P_N} : L^p(\mathbb{R}^2) \right\| \lesssim (\log N)^{4\left|\frac12-\frac1p\right|} , \qquad \frac43 <p <4.
\]
\end{theorem}
We limit ourselves to treating the regular $N$-gon case; however, it will be clear from the proof that this restriction may be significantly weakened by requiring instead a well-distribution type assumption on the arcs defining the polygon, similar to the one that is implicit in Theorem \ref{thm:smoothcones}.
Precise $L^p$-bounds for the $N$-gon multiplier as a function of $N$ quantify Fefferman's counterexample and so the failure of boundedness of the ball multiplier when $p\neq 2$. A logarithmic type estimate for $T_{\mathcal{P}_N}$ was first obtained by A.\ C\'ordoba in \cite{CorPoly}. While the exact dependence in \cite{CorPoly} is not explicitly tracked, the upper bound on the operator norm obtained in \cite{CorPoly} must be necessarily larger than $O(\log N)^{\frac54}$ for $p$ close to the endpoints of the relevant interval: see Remark \ref{rem:compcor} and \S\ref{sec:cexradial} for details.
While the dependence obtained in Theorem \ref{thm:polygon} is a significant improvement over previous results, it does not match the currently best known lower bound, which is the same as that for the Meyer lemma constant in Lemma~\ref{lem:meyer} and \S\ref{sec:cexmeyer}.
\begin{remark*} Let $\updelta>0$ and $T_j$ be a smooth frequency restriction to one of the $O(\updelta^{-1})$ tangential $\updelta \times \updelta^2$ boxes covering the $\updelta^2$ neighborhood of $\mathbb S^1$. Unlike the sharp forward square function estimate we prove in this article, the \emph{reverse square function} estimate
\begin{equation}
\label{e:revsf}
\|f\|_{p} \leq C_{p,\updelta} \left\| \left\{T_jf: 1\leq j \leq O{\textstyle \left(\frac 1\updelta \right)} \right\} \right\|_{L^p(\mathbb{R}^2:\ell^2_j)},
\end{equation}
holds with $C_{4,\updelta}=O(1)$ at the endpoint $p=4$. For the proof of this $L^4$-decoupling estimate see \cites{CorPoly,Feff}. An extension to the range $2<p<4$ is at the moment only possible via vector-valued methods, which introduce the loss $C_{p,\updelta}=O(|\log \updelta|^{{1}/{2}-1/p} )$. In fact \eqref{e:revsf} with the loss $C_{p,\updelta}$ claimed above follows easily from Lemma~\ref{l:Tj}; the details are contained in Remark~\ref{rmrk:invsf}.
Reverse square function inequalities of the type \eqref{e:revsf} have been popularized by Wolff in his proof of local smoothing estimates in the large $p$ regime; see also the related works \cite{GS,LP,LW,PS}.
We refer to Carbery's note \cite{CarbA} for a proof that the $p=2n/(n-1)$ case of the $\mathbb S^{n-1}$ reverse square function estimate implies the corresponding $L^{n}(\mathbb{R}^n)$ Kakeya maximal inequality, as well as the Bochner-Riesz conjecture. In \cite{CarbA}, the author also asks whether a $\updelta$-free estimate holds in the range $2<p<2n/(n-1)$. At the moment this is not known in any dimension.
On a different but related note, weakening \eqref{e:revsf} by replacing the right-hand side with the larger quantity $\big(\sum_j \|T_jf\|_p^2\big)^{1/2}$ yields a sample (weak) \emph{decoupling} inequality: a full range of sharp decoupling inequalities for hypersurfaces with curvature have been established starting from the recent, seminal paper by Bourgain and Demeter \cite{BD1}. In the case of $\mathbb S^1$, the weak decoupling inequality holds in the wider range $2\leq p \leq 6$, with $C_\varepsilon\updelta^{-\varepsilon}$ type bounds outside of $[2,4]$: our methods do not seem to provide insights on the quantitative character of weak decoupling in this wider range.
\end{remark*}
\subsection*{Weighted estimates for the maximal directional function} The simplest example of application of the directional Carleson embedding theorem is the adjoint of the directional maximal function; this was already noticed by Bateman \cite{BatTrans}, re-elaborating on the approach of Katz \cite{KatzDuke}. By duality, the $L^2$-directional Carleson embedding theorem of Section~\ref{sec:carleson} yields the sharp bound
for the weak $(2,2)$ norm of the Hardy--Littlewood maximal function $M_N$ along $N$ \emph{arbitrary directions}
\[
\|M_N:L^2(\mathbb{R}^2)\to L^{2,\infty}(\mathbb{R}^2)\|\sim \sqrt{\log N};
\]
this result first appeared in the quoted article \cite{KatzDuke} by Katz.
Theorem \ref{thm:carleson} may be extended to the directional weighted setting. We describe this extension in Section \ref{s:wcarleson}, see Theorem \ref{thm:wcarleson}, and derive several novel weighted estimates for directional maximal and singular integrals as an application.
More specifically, our weighted Carleson embedding Theorem \ref{thm:wcarleson} yields a Fefferman-Stein type inequality for the operator $M_N$ with sharp dependence on the number of directions; this result is the content of Theorem~\ref{thm:maxweight}. Specializing to $A_1$-weights in the directional setting yields the first sharp weighted result for the maximal function along arbitrary directions. Furthermore, Theorem \ref{thm:singweight} contains an $L^{2,\infty}(w)$-estimate for the maximal directional singular integrals along $N$ directions, for suitable directional weights $w$, with a quantified logarithmic dependence in $N$. This is a weighted counterpart of the results of \cite{Dem,DDP}.
\subsection*{Acknowledgments} The authors are grateful to Ciprian Demeter and Jongchon Kim for fruitful discussions on reverse square function estimates, and for providing additional references on the subject.
\section{An $L^2$-inequality for directional Carleson sequences}\label{sec:carleson}
In this section we prove an abstract $L^2$-inequality for certain Carleson sequences adapted to sets of directions: the main result is Theorem \ref{thm:carleson} below. The Carleson sequences we will consider are indexed by parallelograms with long side pointing in a given set of directions in $\mathbb{R}^2$, and possessing certain natural properties. The definitions below are motivated by the applications we have in mind, all of them lying in the realm of directional singular and averaging operators.
\subsection{Parallelograms and sheared grids}Fix a coordinate system and the associated horizontal and vertical projections
of $A\subset \mathbb{R}^2$:
\[
\uppi_1(A)\coloneqq\left\{x\in\mathbb{R}:\, \{x\}\times \mathbb{R} \cap A \neq \varnothing\right\},\qquad \uppi_2(A)\coloneqq\left\{y\in\mathbb{R}:\,\mathbb{R} \times\{ y\} \cap A \neq \varnothing\right\}.
\]
Fix a finite set of slopes $S\subset [-1,1]$. Throughout, we indicate by $N=\#S$ the number of elements of $S$. In general we will deal with sets of directions
\[
V\coloneqq \{(1,s):\, s\in S\},\quad V^\perp\coloneqq \{ (-s,1):\, s\in S\}.
\]
We will conflate the descriptions of directions in terms of slopes in $S$ and in terms of vectors in $V$ without further mention.
For each $s\in S$ let
\begin{equation}
\label{e:shearing}
A_s\coloneqq \left[ \begin{matrix} 1 & 0
\\
s & 1 \end{matrix}\right]
\end{equation}
be the corresponding shearing matrix. A \emph{parallelogram along $s$} is the image $P=A_s(I\times J)$ of the rectangular box $I\times J$ in the fixed coordinate system with $|I|\geq |J|$. We denote the collection of parallelograms along $s$ by $\mathcal P_s^2$ and
\[
\mathcal P_S ^2 \coloneqq \bigcup _{s\in S} \mathcal P_s^2.
\]
In order to describe the setup for our general result we introduce a collection of directional dyadic grids of parallelograms. In order to define these grids we consider the two-parameter product dyadic grid
\[
\mathcal D^2 _{0}\coloneqq\left\{R=I\times J:\, I,J\in \mathcal D(\mathbb{R}),\, |I|\geq |J| \right\}
\]
obtained by taking the Cartesian product of the standard dyadic grid $\mathcal D(\mathbb{R})$ with itself; we note that we only consider the rectangles in $\mathcal D\times \mathcal D$ whose horizontal side is at least as long as their vertical one. Define the sheared grids
\[
\mathcal D^2 _{s}\coloneqq\left\{A_sR:\, R\in \mathcal D^2 _0 \right\},\quad s\in S,\qquad \mathcal D^2 _S\coloneqq \bigcup_{s\in S}\mathcal D^2 _s.
\]
We will also use the notation
\[
\mathcal D^2 _{s,k_1,k_2}\coloneqq \left\{A_sR:\, R=I\times J\in \mathcal D^2 _0,\, |I|=2^{-k_1},\, |J|=2^{-k_2} \right\},\quad s\in S,\qquad k_1,k_2 \in{\mathbb Z}, \quad k_1\leq k_2.
\]
Note that $\mathcal D^2 _s$ is a special subcollection of $\mathcal P_s^2$. In particular, $R\in \mathcal D_s ^2$ is a parallelogram oriented along $v=(1,s)$ with vertical sides parallel to the $y$-axis and such that $\uppi_1(R)$ is a standard dyadic interval. Furthermore our assumptions on $S$ and the definition of $\mathcal D_0 ^2$ imply that the parallelograms in $\mathcal D^2 _S$ have long side with slope $|s|\leq 1$ and a vertical short side. With a slight abuse of language we will continue referring to the rectangles in $\mathcal D_S ^2$ as \emph{dyadic}.
\begin{figure}[ht]
\centering
\def<desired width>{330pt}
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\newcommand*\fsize{\dimexpr\f@size pt\relax}%
\newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}%
\ifx<desired width>\undefined%
\setlength{\unitlength}{723.58294505bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{<desired width>}%
\fi%
\global\let<desired width>\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.49324622)%
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{grids.pdf}}%
\put(0.10598895,0.1379423){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$R=I\times[0,1]\in\mathcal D_0 ^2$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=2]{grids.pdf}}%
\put(0.66037296,0.17032085){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$A_s R\in\mathcal D_s ^2$\end{tabular}}}}%
\put(-0.00125515,0.19638162){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$1$\end{tabular}}}}%
\put(0.52627843,0.19559189){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$1$\end{tabular}}}}%
\put(0.53338587,0.07239541){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$0$\end{tabular}}}}%
\put(0.72181329,0.09434969){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$ \tan \theta=s$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=3]{grids.pdf}}%
\put(0.4417783,0.31168086){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$A_s$\end{tabular}}}}%
\put(0.00032428,0.07358004){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$0$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=4]{grids.pdf}}%
\put(0.22571024,0.00292328){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$I$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=5]{grids.pdf}}%
\put(0.75654146,0.00529166){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$I$\end{tabular}}}}%
\end{picture}%
\endgroup%
\caption{The axis-parallel rectangle $R\in\mathcal D^2 _0$ is mapped to the slanted parallelogram $A_sR\in\mathcal D^2 _s$.}\label{fig:RtoRs}
\end{figure}
Several results in this paper will involve collections of parallelograms $\mathcal R\subset \mathcal D^2 _S$. Writing $\mathcal R_s\coloneqq \mathcal R \cap\mathcal D_s ^2$ we have the natural decomposition of $\mathcal R $ into $\#S=N$ subcollections
\[
\mathcal R =\bigcup_{s\in S} \mathcal R_s.
\]
In general for any collection $\mathcal R$ of parallelograms we will use the notation
\[
\mathsf{sh}(\mathcal R)\coloneqq \bigcup_{R\in\mathcal R} R
\]
for the \emph{shadow} of the collection. Finally, for any collection of parallelograms $\mathcal R$ we define the corresponding maximal operator
\[
\mathrm{M}_{\mathcal R} f(x) \coloneqq \sup_{R\in \mathcal R} \langle |f| \rangle_R \bm{1}_R(x),\qquad f\in L^1 _{\mathrm{loc}}(\mathbb{R}^2),\quad x\in\mathbb{R}^2.
\]
We will also use the following notations for directional maximal functions:
\begin{equation}\label{eq:dirMv}
{\mathrm M}_v f(x)\coloneqq \sup_{r>0} \frac{1}{2r}\int_{-r} ^r |f(x+tv)|\,{\rm d} t,\qquad {\mathrm M}_jf(x)\coloneqq {\mathrm M}_{e_j}f(x), \quad j\in\{1,2\},\quad x\in\mathbb{R}^2.
\end{equation}
If $V\subset \mathbb{R}^2$ is a compact set of directions with $0\notin V$, we write
\begin{equation}
\label{e:Mv}
{\mathrm M}_Vf\coloneqq \sup_{v\in V} {\mathrm M}_v f.
\end{equation}
In the definitions above and throughout the paper we use the notation
\[
\langle g \rangle_{E} = \Xint-_E g \coloneqq \frac{1}{|E|}\int_E g(x)\,{\rm d} x
\]
whenever $g$ is a locally integrable function in $\mathbb{R}^2$ and $E\subset \mathbb{R}^2$ has finite measure.
\subsection{An embedding theorem for directional Carleson sequences}
In this section we will be dealing with Carleson-type sequences $a=\{a_R\}_{R\in\mathcal D_S ^2}$, indexed by dyadic parallelograms. In order to define them precisely we need a preliminary notion.
\begin{definition} Let $\mathcal L\subset \mathcal P_S ^2$ be a collection of parallelograms and let $s\in S$. We will say that $\mathcal L$ is subordinate to a collection $\mathcal T\subset \mathcal P_s^2$ if for each $L\in\mathcal L$ there exists $T\in\mathcal T$ such that $L\subset T$; see Figure~\ref{fig:subordinate}.
\end{definition}
\begin{figure}[ht]
\centering
\def<desired width>{330pt}
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\ifx<desired width>\undefined%
\setlength{\unitlength}{311.20868bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{<desired width>}%
\fi%
\global\let<desired width>\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.44220777)%
\put(0,0){\includegraphics[width=\unitlength,page=1]{subordinate.pdf}}%
\put(0.09935607,0.34055067){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\mathcal L$}}}%
\put(0,0){\includegraphics[width=\unitlength,page=2]{subordinate.pdf}}%
\put(0.16325441,0.41877092){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$T \in \mathcal T$}}}%
\put(0.69867839,0.00453373){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$T' \in \mathcal T$}}}%
\put(0,0){\includegraphics[width=\unitlength,page=3]{subordinate.pdf}}%
\put(0.55105117,0.28289539){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\mathcal L$}}}%
\put(0,0){\includegraphics[width=\unitlength,page=4]{subordinate.pdf}}%
\end{picture}%
\endgroup%
\caption{A collection $\mathcal L$ subordinate to a collection $\mathcal T\subset \mathcal P_0 ^2$.}
\label{fig:subordinate}
\end{figure}
It is important to stress that collections $\mathcal L$ are subordinate to collections $\mathcal T\subset \mathcal P_s^2$ of parallelograms \emph{having a fixed slope $s$}.
The Carleson sequences $a=\{a_R\}_{R\in\mathcal R}$ we will be considering will fall under the scope of the following definition.
\begin{definition}\label{def:carleson}Let $a=\{a_R\}_{R\in\mathcal D_S ^2}$ be a sequence of nonnegative numbers. Then $a$ will be called an \emph{$L^\infty$-normalized} Carleson sequence if for every $\mathcal L \subset \mathcal D_S ^2$ which is subordinate to some collection $\mathcal T \subset \mathcal P^2_\uptau$ for some fixed $\uptau\in S$, we have
\[
\sum_{L\in\mathcal L}a_L\leq |\mathsf{sh}(\mathcal T )|
\]
and the quantity
\[
\mathsf{mass}_a \coloneqq \sum_{R\in\mathcal D_S ^2} a_R
\]
is finite.
Given a Carleson sequence $a=\{a_R: R\in\mathcal D_S ^2 \}$ and a collection $\mathcal R\subset \mathcal D_S ^2$ we define the corresponding \emph{balayage}
\[
T_{\mathcal R}(a)(x) \coloneqq \sum_{R\in\mathcal R} a_R \frac{\cic{1}_R(x)}{|R|},\qquad x\in\mathbb{R}^2.
\]
We write $T(a)$ for $T_\mathcal R(a)$ when $\mathcal R=\mathcal D_S ^2$.
For $1\leq p \leq 2$ we then define the balayage norms
\[
\mathsf{mass} _{a,p} (\mathcal R)\coloneqq \left\| T_{\mathcal R}(a) \right\|_{L^p}.
\]
Note that $\mathsf{mass}_{a,1} (\mathcal R)=\sum_{R\in\mathcal R} a_R\leq\mathsf{mass}_a$.
\end{definition}
\begin{remark}[Elementary properties of $\mathsf{mass}$]\label{rmrk:dircarl} Let $ \mathcal R \subset \mathcal D_\uptau ^2 $ for some fixed $\uptau\in S$. Then $\mathcal R$ is subordinate to itself and if $a$ is an $L^\infty$-normalized Carleson sequence we have
\[
\mathsf{mass}_{a,1} (\mathcal R)=\sum_{R\in\mathcal R }a_R\leq |\mathsf{sh}(\mathcal R )|.
\]
Also, the very definition of $\mathsf{mass}$ and the $\log$-convexity of the $L^p$-norm imply
\begin{equation}\label{eq:convex}
\mathsf{mass}_{a,p} (\mathcal R)\leq \mathsf{mass}_{a,1} (\mathcal R) ^{1-\frac{2}{p'}}\mathsf{mass}_{a,2} (\mathcal R) ^\frac{2}{p'}
\end{equation}
for all $1\leq p \leq 2$, with $p'$ its dual exponent.
\end{remark}
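For completeness we record the short proof of \eqref{eq:convex}: writing $\frac1p=(1-\uptheta)+\frac{\uptheta}{2}$ with $\uptheta=\frac{2}{p'}$, H\"older's inequality applied to $f\coloneqq T_{\mathcal R}(a)$ gives
\[
\mathsf{mass}_{a,p}(\mathcal R)=\|f\|_{p}\leq \|f\|_{1} ^{1-\uptheta}\, \|f\|_{2} ^{\uptheta}=\mathsf{mass}_{a,1}(\mathcal R)^{1-\frac{2}{p'}}\,\mathsf{mass}_{a,2}(\mathcal R)^{\frac{2}{p'}},
\]
where we used that $\|f\|_1=\mathsf{mass}_{a,1}(\mathcal R)$ since the $a_R$ are nonnegative.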
We are now ready to state the main result of this section. The result below should be interpreted as a reverse H\"older-type bound for the balayages of directional Carleson sequences.
\begin{theorem}
\label{thm:carleson}Let $S\subset[-1,1]$ be a finite set of $N$ slopes and $\mathcal R\subset \mathcal D_S ^2$. Suppose that the maximal operators $\{{\mathrm M}_{\mathcal R_s}: s\in S \}$ satisfy
\[
\sup_{s\in S}\big\| {\mathrm M}_{\mathcal R_s}: \, L^p\to L^{p,\infty}\big\|\lesssim (p')^\upgamma,\qquad p\to 1^+
\]
for some $\upgamma\geq 0$.
Then for every $L^\infty$-normalized Carleson sequence $a=\{a_R\}_{R\in\mathcal D_S ^2}$
\[
\mathsf{mass}_{a,2}(\mathcal R) \lesssim (\log N)^{\frac12} \big((1+\upgamma)\log\log N\big)^{\frac\upgamma2}\mathsf{mass}_{a,1}(\mathcal R) ^\frac12.
\]
\end{theorem}
The proof of Theorem \ref{thm:carleson} occupies the next subsection. The argument relies on several lemmata, whose proofs are postponed to the final Subsection \ref{ss2:lp}.
\begin{remark} There are essentially two cases in the assumption of Theorem~\ref{thm:carleson} above. If for each $s\in S$ the family $\mathcal R_s$ happens to be a one-parameter family, then the corresponding maximal operator ${\mathrm M}_{\mathcal R_s}$ is of weak-type $(1,1)$, whence the assumption holds with $\upgamma=0$. In the generic case $\mathcal R=\mathcal D_S ^2$, for each $s$ the operator ${\mathrm M}_{\mathcal R_s}={\mathrm M}_{\mathcal D^2 _s}$ is a skewed copy of the strong maximal function and the assumption holds with $\upgamma=1$.
\end{remark}
\subsection{Main line of proof of Theorem \ref{thm:carleson}} \label{ss2:mlp} Throughout the proof, we use the following partial order between parallelograms $Q,R\in\mathcal D_S ^2$:
\begin{equation}\label{eq:partorder}
Q\leq R \overset{\text{def}}{\iff} Q\cap R\neq \varnothing,\,\uppi_1(Q)\subseteq \uppi_1(R).
\end{equation}
Notice that, since $Q,R\in\mathcal D_S ^2$, we have that $\uppi_1(R),\uppi_1(Q)$ belong to the standard dyadic grid $\mathcal D$ on $\mathbb{R}$.
It is convenient to encode the main inequality of Theorem \ref{thm:carleson} by means of the following dimensionless quantity associated with a collection $\mathcal R\subset \mathcal D_S ^2$ and a Carleson sequence $a=\{a_R\}_{R\in\mathcal D_S ^2}$:
\[
{\mathsf U} _p (\mathcal R) \coloneqq \sup_{\substack{\mathcal L\subset \mathcal R\\ a=\{a_R\}}}\frac{\mathsf{mass}_{a,p}(\mathcal L)}{\mathsf{mass}_{a,1}(\mathcal L)^\frac1p} ,
\]
where the supremum is taken over all finite subcollections $\mathcal L\subset \mathcal R$ and all $L^\infty$-normalized Carleson sequences $a=\{a_R\}_{R\in\mathcal D_S ^2}$.
There is an easy, albeit lossy, \emph{a priori} estimate for $\mathsf{U}_p(\mathcal R)$ for general $\mathcal R\subset \mathcal D_S ^2$.
\begin{lemma}\label{l:apriori} Let $S\subset[-1,1]$ be a finite set of $N$ slopes. For every $\mathcal R\subset \mathcal D_S ^2$ we have the estimate
\[
\mathsf U_p(\mathcal R)\lesssim N^{\frac{1}{p'}} \sup_{s\in S}\big\|{\mathrm M}_{\mathcal R_s}:L^{p'}\to L^{p',\infty}\big\|,\qquad 1<p<\infty.
\]
\end{lemma}
Theorem \ref{thm:carleson} is then an easy consequence of the following bootstrap-type estimate. For an arbitrary finite collection of parallelograms $\mathcal R\subset \mathcal D_S ^2$ we will prove the estimate
\begin{equation}\label{eq:iter}
\mathsf U_2(\mathcal R) ^2 \lesssim (\log \mathsf U_2(\mathcal R) )^\upgamma \log N
\end{equation}
with absolute implicit constant. Note also that the boundedness assumption on ${\mathrm M}_{\mathcal R_s}$ for some $p<2$ and Lemma~\ref{l:apriori} yield the \emph{a priori} estimate $\mathsf U_2 (\mathcal R) \lesssim N^{1/2}$. Inserting this \emph{a priori} estimate into \eqref{eq:iter} and bootstrapping will then complete the proof of Theorem \ref{thm:carleson}. It thus suffices to prove \eqref{eq:iter} to obtain Theorem \ref{thm:carleson}.
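To make the bootstrap explicit: inserting the \emph{a priori} bound $\mathsf U_2(\mathcal R)\lesssim N^{1/2}$ into \eqref{eq:iter} gives
\[
\mathsf U_2(\mathcal R)^2\lesssim (\log N)^{1+\upgamma},
\]
and inserting this improved bound back into \eqref{eq:iter} yields
\[
\mathsf U_2(\mathcal R)^2\lesssim \Big(\log \big[(\log N)^{\frac{1+\upgamma}{2}}\big]\Big)^{\upgamma}\log N\lesssim \big((1+\upgamma)\log\log N\big)^{\upgamma}\log N,
\]
which is exactly the estimate claimed in Theorem~\ref{thm:carleson}; further iterations yield no additional gain.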
The remainder of the section is dedicated to the proof of \eqref{eq:iter}. We begin by expanding the square of the $L^2$-norm of $T_\mathcal R(a)$ as follows:
\begin{equation}
\label{e:badnessnorm}
\mathsf{mass}_{a,2}(\mathcal R)^2 =
\|T_{\mathcal R}(a)\|_2 ^2 \leq 2 \sum_{R\in\mathcal R} a_R \frac{1}{|R|}\int_{R} \sum_{\substack{{Q}\in\mathcal R\\{Q}\leq R}}a_{{Q}}\frac{\cic{1}_{{Q}}}{|{Q}|}\eqqcolon 2 \sum_{R\in\mathcal R}a_R B_R ^{\mathcal R}.
\end{equation}
For any $\mathcal L \subset \mathcal R$ and $R\in\mathcal R$ we have implicitly defined
\begin{equation}
\label{eq:brL}
B_R ^{\mathcal L } \coloneqq \frac{1}{|R|}\int_{R} \sum_{\substack{{Q}\in\mathcal L\\{Q}\leq R}} a_{{Q}}\frac{\cic{1}_{{Q}}}{|Q|}.
\end{equation}
\begin{remark}\label{rmrk:BRbig} Observe that for any $\mathcal L\subset \mathcal R$ and every fixed $s\in S$ we have
\[
\bigcup \left\{R\in\mathcal R_s:\, B_R ^{\mathcal L}>\uplambda\right\}\subset \left\{x\in \mathbb{R}^2:{\mathrm M}_{\mathcal R_s}\bigg[\sum_{Q\in\mathcal L} a_Q\frac{\cic{1}_Q}{|Q|}\bigg](x)>\uplambda\right\}
\]
which by our assumption on the weak $(p,p)$ norm of ${\mathrm M}_{\mathcal R_s}$ implies
\[
\sup_{s\in S}\Big|\bigcup \left\{R\in\mathcal R_s:\, B_R ^{\mathcal L}>\uplambda\right\}\Big|\lesssim (p')^\upgamma \frac{ \mathsf{mass}_{a,p}(\mathcal L) ^p}{\uplambda^p},\qquad p\to 1^+.
\]
\end{remark}
For a numerical constant $\uplambda\geq 1$, to be chosen at the end of the proof, a nonnegative integer $k$ and $s\in S$ we consider subcollections of $\mathcal R_s$ as follows
\begin{equation}\label{eq:rsk}
\mathcal R_{s,k}\coloneqq\left\{R:\, R\in\mathcal R_s,\: \uplambda k\leq B_R ^{\mathcal R } < \uplambda(k+1)\right\},\qquad k\in\mathbb N,\quad s\in S .
\end{equation}
Using \eqref{e:badnessnorm} we have
\begin{equation}\label{eq:main_split}
\begin{split}
\|T_{\mathcal R}(a)\|_2 ^2 & \lesssim \sum_{s\in S}\sum_{0\leq k\leq \log N} k\uplambda \sum_{R\in\mathcal R_{s,k}} a_R + N \sup_{s\in S} \Big[\sum_{k>\log N} k\uplambda \sum_{R\in\mathcal R_{s,k}} a_R\Big]
\\
&\qquad \lesssim \uplambda (\log N) \mathsf{mass}_{a,1}(\mathcal R)+ \uplambda N \sum_{k> \log N} k \sup_{s\in S} |\mathsf{sh}(\mathcal R_{s,k})|.
\end{split}
\end{equation}
Here $\uplambda\geq 1$ is the constant used to define the collections $\mathcal R_{s,k}$ and in the last line we used the definition of a Carleson sequence and Remark~\ref{rmrk:dircarl}.
The following lemma encodes the exponential decay relation between mass and $ B^{\mathcal L}_R$ and is in fact the main step of the proof of Theorem~\ref{thm:carleson}.
\begin{lemma} \label{lem:iter} Let $a=\{a_R:R\in \mathcal D_S ^2\}$ be an $L^\infty$-normalized Carleson sequence, $ S\subset [-1,1]$, and $\mathcal L,\mathcal R \subset \mathcal D_S ^2$ with $\mathcal L \subseteq \mathcal R$. We assume that for some $p\in[1,2)$
\[
A_p\coloneqq \sup_{s\in S} \|{\mathrm M}_{\mathcal R_s}:L^p\to L^{p,\infty}\|<+\infty.
\]
If $\,\uplambda \geq C\max(1,A_p \mathsf{U}_2(\mathcal L) ^{\frac{2}{p'}})$ for a sufficiently large numerical constant $C>1$ then there exists $\mathcal L_1 \subset \mathcal L $ such that:
\begin{itemize}
\item [\emph{(i)}] $\mathsf{mass}_{a,1}({\mathcal L_1})\leq \frac{1}{2} \mathsf{mass}_{a,1}({\mathcal L}) $;
\item[\emph{(ii)}] fixing $s\in S$ and denoting by
$\mathcal R_{s} '$ the collection of rectangles $R$ in $\mathcal R_s$ with $B^{\mathcal L} _R>\uplambda$, cf.\ \eqref{eq:brL}, we have that
\[
B^{\mathcal L} _R \leq \uplambda +B^{\mathcal L_1} _{R}\qquad \forall R\in\mathcal R_s '.
\]
\end{itemize}
\end{lemma}
The final lemma we make use of in the argument translates the exponential decay of the mass of each $\mathcal R_{s,k}$ into exponential decay of the support size, which is what we need in the estimate \eqref{eq:main_split}.
\begin{lemma}\label{lem:shadow} Let $S\subset[-1,1]$ and define the collections $\mathcal R_{s,k}$ by \eqref{eq:rsk} with $\uplambda$ defined as in Lemma~\ref{lem:iter} for $\mathcal L= \mathcal R$:
\[
\uplambda\coloneqq C \max\Big(1, A_p \mathsf{U} _2(\mathcal R) ^\frac{2}{p'}\Big).
\]
We assume that the operators $\{{\mathrm M}_{\mathcal R_s}:\, s\in S\}$ map $L^p(\mathbb{R}^2)$ to $L^{p,\infty}(\mathbb{R}^2)$ uniformly with constant $A_{p}$. For $k\geq 1$ we then have the estimate
\[
|\mathsf{sh}(\mathcal R_{s,k})|\lesssim 2^{-k} \mathsf{mass}_{a,1}(\mathcal R)
\]
with absolute implicit constant.
\end{lemma}
With these lemmata in hand we now return to the proof of \eqref{eq:iter}. Substituting the estimate of Lemma~\ref{lem:shadow} into \eqref{eq:main_split} yields
\[
\|T_{\mathcal R}(a)\|_{2} ^2 \lesssim \uplambda \mathsf{mass}_{a,1}(\mathcal R) \bigg[(\log N) + N\sum_{k\geq \log N}k2^{-k}\bigg] \lesssim \uplambda \mathsf{mass}_{a,1}(\mathcal R)(\log N).
\]
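In the last step we used that the geometric tail is dominated, up to a constant, by its first term: $\sum_{k\geq k_0}k2^{-k}\lesssim k_0 2^{-k_0}$ for $k_0\geq 1$, so that, interpreting $\log N$ as $\log_2 N$, say,
\[
N\sum_{k\geq \log N}k2^{-k}\lesssim N\cdot\frac{\log N}{N}=\log N.
\]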
This was proved for an arbitrary collection $\mathcal R$ and so also for every $\mathcal L\subset \mathcal R$. Thus the estimate above and our assumption $A_p\lesssim (p')^\upgamma$ imply
\[
\mathsf{U}_2(\mathcal R) ^2 \lesssim \uplambda (\log N),\qquad \uplambda \gtrsim \max(1,(p')^\upgamma \mathsf{U}_2(\mathcal R) ^{\frac{2}{p'}}).
\]
Now observe that we may assume $ \mathsf{U}_2(\mathcal R) \gtrsim 1$, since otherwise there is nothing to prove. In this case we can take
\[
\uplambda \simeq (p')^\upgamma \mathsf{U}_2(\mathcal R) ^\frac{2}{p'}
\]
for every $p>1$. The choice $p'\coloneqq (\log \mathsf{U}_2(\mathcal R))$ guarantees that $[\mathsf{U}_2(\mathcal R)]^{\frac{1}{p'}}\lesssim 1$ and leads to
\[
\mathsf{U}_2(\mathcal R) ^2 \lesssim (\log \mathsf{U}_2(\mathcal R))^\upgamma \log N.
\]
This is the desired estimate \eqref{eq:iter} and so the proof of Theorem~\ref{thm:carleson} is complete.
\subsection{Proof of the lemmata} \label{ss2:lp}
\begin{proof}[Proof of Lemma \ref{l:apriori}] We follow the proof of \cite{LR}*{Lemma 3.11}. We may assume that ${\mathcal R}$ is a finite collection; by duality, pick $g$ with $\|g\|_{p'}=1$ such that
\[
\Big\|\sum_{R\in{\mathcal R}}a_R \frac{\cic{1}_R}{|R|}\Big\|_p=\int \sum_{R\in{\mathcal R}}a_R \frac{\cic{1}_R}{|R|}g.
\]
Define ${\mathcal R}':=\{R\in{\mathcal R}: \, \langle g\rangle_R > [cN/ \mathsf{mass}_{a,1} ({\mathcal R})]^{1/{p'}}\}$ for some $c>1$ and ${\mathcal R}'_s\coloneqq {\mathcal R}'\cap \mathcal D_s ^2$ for $s\in S$. Then,
\[
\begin{split}
\int \sum_{R\in{\mathcal R}} a_R \frac{\cic{1}_R}{|R|} g &\le \sum_{R\in{\mathcal R}\setminus{\mathcal R}'}a_R \langle g\rangle_R+ \Big\|\sum_{R\in{\mathcal R}'}a_R \frac{\cic{1}_R}{|R|}\Big\|_p
\\
& \le (cN)^{\frac{1}{p'}}(\sum_{R\in{\mathcal R} } a_R)^{\frac1p}+ N \sup_{s\in S}\Big\|\sum_{R\in{\mathcal R}'_s}a_R\frac{\cic{1}_R}{|R|}\Big\|_p.
\end{split}\]
This means
\[
\Big\|\sum_{R\in{\mathcal R}}a_R \frac{\cic{1}_R}{|R|}\Big\|_p\lesssim (c N)^{\frac{1}{p'}} \Big(1+\frac{N^{\frac1p}}{c^{\frac{1}{p'}}} \sup_{s\in S} \frac{\Big\|\sum_{R\in{\mathcal R}'_s}a_R\frac{\cic{1}_R}{|R|}\Big\|_p}{(\sum_{R\in{\mathcal R}' _s} a_R)^{\frac1p}} \frac{(\sum_{R\in{\mathcal R}' _s} a_R)^{\frac1p}}{(\sum_{R\in{\mathcal R} _s} a_R)^{\frac1p}}\Big)(\sum_{R\in{\mathcal R} } a_R)^{\frac1p}.
\]
We have proved that for an arbitrary collection $\mathcal R$ we have
\[
\mathsf{U}_p(\mathcal R) \leq (cN)^{\frac{1}{p'}}\bigg(1+\frac{N^{\frac1p}}{c^{\frac{1}{p'} }} \sup_s\mathsf{U}_p(\mathcal R_s ') \frac{\mathsf{mass}_{a,1}(\mathcal R_s ')^\frac{1}{p}}{\mathsf{mass}_{a,1}(\mathcal R)^{\frac1p}}\bigg).
\]
We claim that $\sup_{s\in S}\mathsf{U}_p({\mathcal R}_s ')\lesssim \sup_{s\in S}\|{\mathrm M}_{{\mathcal R}_s}:L^{p'}\to L^{p',\infty}\|$. Assuming this for a moment and using Remark~\ref{rmrk:dircarl} we can estimate
\[
\begin{split}
\sum_{R\in{\mathcal R}' _s} a_R &\le |\mathsf{sh}({\mathcal R}'_s)|\le \big|\big\{{\mathrm M}_{{\mathcal R}_s}(g)>(cN/\mathsf{mass}_{a,1}({\mathcal R}))^{1/{p'}}\big\}\big|
\\
&\leq \sup_{s\in S}\big\|{\mathrm M}_{\mathcal R_s}:L^{p'}\to L^{p',\infty} \big\|^{p'}
\frac{\mathsf{mass}_{a,1}({\mathcal R})}{c N}.
\end{split}
\]
This proves the lemma upon choosing $c \gtrsim \sup_{s\in S}\big\|{\mathrm M}_{\mathcal R_s}:L^{p'}\to L^{p',\infty} \big\|^{p'}$.
It remains to prove the claim. Note that since ${\mathcal R}' _s$ is a collection in a fixed direction the inequality $\mathsf{U}_p({\mathcal R}'_s) \lesssim \sup_{s\in S}\|{\mathrm M}_{{\mathcal R}_s}:L^{p'}\to L^{p',\infty}\|$ follows by the John-Nirenberg inequality in the product setting and Remark~\ref{rmrk:dircarl}; see \cite{LR}*{Lemma 3.11}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:iter}] By the invariance under shearing of our statement, we can work in the case $s=0$. Therefore, $\mathcal R_0 '$ will stand for the collection of rectangles in $\mathcal R_0$ such that $B_R ^\mathcal L>\uplambda$, where $\uplambda\geq C$ and $C>1$ will be specified at the end of the proof. We write $R=I_R \times L_R$ for $R\in \mathcal R_0$.
\subsubsection*{Inside-outside splitting}For $I\in \{\uppi_1(R): R \in \mathcal R_0'\}$ and any interval $K$ we define
\[
\begin{split}
\mathcal L_{I,K} ^{\mathrm{in}} \coloneqq\{Q\in\mathcal L:Q\leq I\times K, \,\uppi_2(Q)\subset 3K\},\qquad \mathcal L_{I,K} ^{\mathrm{out}} \coloneqq\{Q\in\mathcal L:Q\leq I\times K, \,\uppi_2(Q)\nsubseteq 3K\},
\end{split}
\]
where we recall that the definition of partial order $Q\leq R$ was given in \eqref{eq:partorder}. Set also
\[
B_{I,K} ^{\mathrm{in}}\coloneqq \Xint-_{I\times K} \sum_{Q\in \mathcal L_{I,K} ^{\mathrm{in}} }\frac{a_Q}{|Q|}\cic{1}_{Q},
\quad
B_{I,K} ^{\mathrm{out}}\coloneqq \Xint-_{I\times K} \sum_{Q\in \mathcal L_{I,K} ^{\mathrm{out}} }\frac{a_Q}{|Q|}\cic{1}_{Q}.
\]
We claim that if $K\subset \mathbb{R}$ is any interval then for all $\upalpha\in K$ we have
\begin{equation}
\label{eq:slice}
\begin{split}
&\Xint-_{I\times\{\upalpha\}} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,K} } a_Q \frac{\cic{1}_Q}{|Q|} = \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,K} } a_Q\frac{|Q\cap (I \times \{\upalpha\})|}{|Q|}
\lesssim \Xint-_{I\times 3K} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,K} } a_Q \frac{\cic{1}_Q}{|Q|}.
\end{split}
\end{equation}
To see this note that in order for a $Q$-term appearing in the sum of the left hand side above to be non-zero we must have
\[
\uppi_1(Q)\subset I, \qquad \uppi_2(Q) \cap K \neq \varnothing, \qquad \uppi_2(Q) \cap \mathbb{R} \setminus 3 K \neq \varnothing.
\]
Let us write $\uptheta_Q=\arctan \sigma$ if $Q\in\mathcal D^2 _{\sigma}$ for some $\sigma \in S$. A computation then reveals that
\[
|Q\cap (I\times\{\upalpha\})|=\min\big(|J_Q|,{\rm dist}(\upalpha,\mathbb{R}\setminus \uppi_2(Q) )\big) \cot \uptheta_Q.
\]
We also observe that $\uppi_2(Q) \cap (3K \setminus K)$ contains an interval $A=A(\upalpha)$ of length $|K|/3$, whence for all $\upalpha'\in A$ we have
\[
{\rm dist}(\upalpha,\mathbb{R}\setminus \uppi_2(Q) )\leq {\rm dist}(\upalpha,\upalpha') +{\rm dist}(\upalpha',\mathbb{R}\setminus \uppi_2(Q) ) \lesssim |K|+{\rm dist}(\upalpha',\mathbb{R}\setminus \uppi_2(Q) )\lesssim {\rm dist}(\upalpha',\mathbb{R}\setminus \uppi_2(Q) );
\]
see Figure~\ref{fig:slice}. This clearly implies that for every $\upalpha\in K$ we have
\[
|Q\cap (I\times\{\upalpha\})| \lesssim \Xint-_{A} |Q\cap (I\times\{\upalpha'\})|\,{\rm d} \upalpha'\lesssim \Xint-_{3K} |Q\cap (I\times\{\upalpha'\})|\,{\rm d} \upalpha'
\]
which proves the claim.
\subsubsection*{Smallness of the local average.} We now use the previously obtained \eqref{eq:slice} to prove (ii). Let $\mathcal R_0 ^\star$ denote the family of parallelograms $R=I_R\times L_R\in\mathcal R_0 '$ such that $B_{I_R,L_R} ^{\mathrm{out}}>{\uplambda}$. For each such $R$ let $K_R$ be the maximal interval $K\in\{L_R,3L_R,\ldots,3^kL_R,\ldots\}$ such that $B_{I_R,K} ^{\mathrm {out}}>{\uplambda}$; the existence of the maximal interval $K_R$ is guaranteed for example by the \emph{a priori} estimate of Lemma~\ref{l:apriori} and the assumption $R\in\mathcal R_0 ^\star$. Obviously $K_R\supseteq L_R$ and $B_{I_R,3K_R} ^{\mathrm{out}}\leq \uplambda$.
We show that for $R\in\mathcal R_0 ^\star$ we have
\begin{equation}\label{eq:Ravg}
\Xint-_R \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} } a_Q \frac{\cic{1}_Q}{|Q|} \leq \upkappa \uplambda
\end{equation}
for some numerical constant $\upkappa\geq 1$.
\begin{figure}[t]
\centering
\def<desired width>{330pt}
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\newcommand*\fsize{\dimexpr\f@size pt\relax}%
\newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}%
\ifx<desired width>\undefined%
\setlength{\unitlength}{482.14248709bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{<desired width>}%
\fi%
\global\let<desired width>\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.77448698)%
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{rectangles.pdf}}%
\put(0.17369807,0.31981504){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\upalpha'$\end{tabular}}}}%
\put(0.75277102,0.44643419){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\uptheta_Q$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=2]{rectangles.pdf}}%
\put(0.61857882,0.73095941){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$Q$\end{tabular}}}}%
\put(0.46980148,0.09069451){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$I$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=3]{rectangles.pdf}}%
\put(0.56117215,0.34727745){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$R=I\times L$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=4]{rectangles.pdf}}%
\put(0.16946619,0.17990661){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$K$\end{tabular}}}}%
\put(0.2456552,0.34017265){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$Q\cap (I\times\{\upalpha'\})$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=5]{rectangles.pdf}}%
\put(-0.00188368,0.19864912){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$3K$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=6]{rectangles.pdf}}%
\put(0.22698852,0.2601726){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$Q\cap (I\times\{\upalpha\})$\end{tabular}}}}%
\put(0.17680919,0.24203717){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\upalpha$\end{tabular}}}}%
\end{picture}%
\endgroup%
\caption{A rectangle $Q$ with angle $\uptheta_Q$ intersecting $R=I\times L\subset I\times K$.}
\label{fig:slice}
\end{figure}
Indeed it is a consequence of \eqref{eq:slice} that
\[
\begin{split}
&\qquad \Xint-_{I_R\times\{\upalpha\}} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} } a_Q \frac{\cic{1}_Q}{|Q|} \lesssim \Xint-_{I_R\times 3K_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} } a_Q \frac{\cic{1}_Q}{|Q|}
\\
& \leq \Xint-_{I_R\times 3K_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,3K_R} } a_Q\frac{\cic{1}_Q}{|Q|} +\Xint-_{I_R\times 3K_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R, K_R}\setminus \mathcal L^{\mathrm{out}} _{I_R,3K_R}} a_Q \frac{\cic{1}_Q}{|Q|}.
\end{split}
\]
The first summand is estimated using the maximality of $K_R$
\[
\Xint-_{I_R\times 3K_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,3K_R} } a_Q\frac{\cic{1}_Q}{|Q|} =B_{I_R,3K_R} ^{\mathrm{out}}\leq \uplambda.
\]
The second summand can be further analyzed by observing that the parallelograms $Q$ appearing in it satisfy $\uppi_1(Q)\subset I_R$ and $\uppi_2(Q)\subset 9K_R$ since $Q\notin \mathcal L_{I_R,3K_R} ^{\mathrm{out}}$; that is, $\mathcal L^{\mathrm{out}} _{I_R,K_R}\setminus \mathcal L^{\mathrm{out}} _{I_R,3K_R}$ is subordinate to the singleton collection $\{I_R\times 9K_R\}$. Applying the Carleson sequence property
\[
\begin{split}
&\quad \Xint-_{I_R\times 3K_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R}\setminus \mathcal L^{\mathrm{out}} _{I_R,3K_R}} a_Q \frac{\cic{1}_Q}{|Q|}\leq \sum_{ Q\in\mathcal L_{I_R,K_R} ^{\mathrm{out}}\setminus \mathcal L_{I_R,3K_R} ^\mathrm{out} } a_Q \frac{|Q\cap (I_R\times 3K_R)|}{|Q||I_R\times 3K_R|}\lesssim 1 \leq \uplambda
\end{split}
\]
by our assumption on $\uplambda$. Combining the estimates above shows that
\[
\Xint-_{I_R\times\{\upalpha\}} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} } a_Q \frac{\cic{1}_Q}{|Q|} \lesssim \uplambda
\]
for all $\upalpha\in K_R$. Since $\uppi_2(R)\subset K_R$ this implies \eqref{eq:Ravg}.
Observe that if $R=I_R\times L_R\in\mathcal R_0 ' \setminus \mathcal R_0 ^\star$ then
\begin{equation}\label{eq:except}
B_{I_R,L_R} ^{\mathrm{out}}= \Xint-_{I_R\times L_R} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,L_R} } a_Q \frac{\cic{1}_Q}{|Q|} \leq\uplambda.
\end{equation}
\subsubsection*{Defining the subcollection $\mathcal L_1$} We set
\[
\mathcal L_1 {'} \coloneqq \bigcup_{R\in \mathcal R_0 ^\star} \mathcal L_{I_R,K_R} ^{\mathrm{in}},\quad \mathcal L_1 {''}\coloneqq \bigcup_{R\in\mathcal R_0 {'} \setminus \mathcal R_0 ^\star} \mathcal L_{I_R,L_R} ^{\mathrm{in}},\qquad \mathcal L_1 \coloneqq \mathcal L_1 {'} \cup \mathcal L_1 {''}.
\]
Now note that for each $R\in \mathcal R_0 ^\star$ and $K=K_R\in \mathcal K_{\uppi_1(R)}$ we have that
\[
B_R ^{\mathcal L} \leq \Xint-_R \sum_{Q\in\mathcal L_{I_R,K_R} ^{\mathrm{out}}}a_Q\frac{\cic{1}_Q}{|Q|}+\Xint-_R \sum_{Q\in\mathcal L_{I_R,K_R} ^{\mathrm{in}}}a_Q\frac{\cic{1}_Q}{|Q|}\leq \upkappa \uplambda+B_R ^{\mathcal L_1}
\]
while for $R\in \mathcal R_0 ' \setminus \mathcal R_0 ^\star$ the same estimate holds using $L_R$ in place of $K_R$. It remains to show the desired estimate for $\mathsf{mass}_{a,1}(\mathcal L_1)$ in (i) of the lemma.
\subsubsection*{Smallness of $\mathsf{mass}_{a,1}(\mathcal L_1)$} By the definition of the collections $\mathcal L_{I,K} ^{\mathrm{in}}$ we have that
\[
\mathsf{sh}(\mathcal L_1)\subset \bigcup_{R\in\mathcal R_0 ^\star} I_R\times 3K_R \cup \bigcup_{R\in\mathcal R_0 '\setminus \mathcal R_0 ^\star} I_R\times 3L_R.
\]
If $K=K_R$ for some $R\in\mathcal R_0 ^\star$ we have by definition that $B_{I_R,K_R} ^{\mathrm{out}}>\uplambda$. On the other hand for $R \in \mathcal R_0 '\setminus \mathcal R_0 ^\star$ we have that $B_R ^{\mathcal L}=B_{I_R,L_R} ^{\mathcal L}>\uplambda$.
Define
\[
E\coloneqq \bigg\{(x,y)\in \mathbb{R}^2:\, {\mathrm M}_v \bigg[ \sum_{Q\in\mathcal L} a_Q \frac{\cic{1}_Q}{|Q|}\bigg](x,y)\geq \frac12 \uplambda\bigg\}
\]
where ${\mathrm M}_v={\mathrm M}_{(1,s)}={\mathrm M}_1$ is the directional Hardy-Littlewood maximal function acting along the direction $v=(1,s)=(1,0)$, see \eqref{eq:dirMv}, since we have assumed $s=0$. We will show that
\[
\bigcup_{\substack{R\in\mathcal R_0^\star }} I_R\times 3K_R \subset \big\{(x,y)\in\mathbb{R}^2:\, {\mathrm M}_2 (\cic{1}_E)(x,y)\geq C\big\}
\]
for a sufficiently small constant $C>0$, where ${\mathrm M}_2$ is as in \eqref{eq:dirMv}. To this end let us define
\[
\uppsi(\upalpha)\coloneqq \frac{1}{|I_R|}\int_{I_R\times\{\upalpha\}} \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} }a_Q \frac{\cic{1}_Q}{|Q|}.
\]
Note that
\[
\begin{split}
& \uplambda<B_{I_R,K_R} ^{\mathrm{out}}=\Xint-_{K_R} \uppsi(\upalpha) \,{\rm d} \upalpha\leq \frac{1}{|K_R|}\int_{\{\upalpha\in K_R:\,\uppsi(\upalpha)>\uplambda/2\}}\uppsi(\upalpha)\,{\rm d} \upalpha+\frac{\uplambda}{2}
\leq \frac{c\uplambda}{|K_R|}|\{\upalpha\in K_R:\, \uppsi(\upalpha)>\uplambda/2\}|+\frac\uplambda2
\end{split}
\]
which readily yields the existence of $K'\subset K_R$ with
\[
|K_R|\lesssim |K'|, \qquad \inf_{x\in I_R}\inf_{y\in K'}{\mathrm M}_v \bigg[ \sum_{Q\in\mathcal L^{\mathrm{out}} _{I_R,K_R} } a_Q \frac{\cic{1}_Q}{|Q|}\bigg](x,y)>\frac\uplambda 2.
\]
This in turn implies that ${\mathrm M}_2(\cic{1}_E)\gtrsim 1$ on $I_R\times 3K_R$: indeed $I_R\times K'\subset E$, and every vertical interval of length $7|K_R|$ centered at a point of $3K_R$ contains $K'$, whence ${\mathrm M}_2(\cic{1}_E)\geq |K'|/(7|K_R|)\gtrsim 1$ there. Now we can conclude
\begin{equation}\label{eq:measure}
\begin{split}
&\quad \bigg |\bigcup_{\substack{R\in\mathcal R_0^\star }} I_R\times 3K_R\bigg| \leq \big| \{{\mathrm M}_2(\cic{1}_E)\gtrsim 1\}\big| \lesssim |E| \lesssim\frac{1}{\uplambda} \mathsf{mass}_{a,1}({\mathcal L})
\end{split}
\end{equation}
by the weak $(1,1)$ inequality of the directional Hardy-Littlewood maximal function ${\mathrm M}_{(1,0)}$.
On the other hand we have for the rectangles $R\in \mathcal R_0 '\setminus \mathcal R_0 ^\star$ that
\[
\bigcup_{R\in\mathcal R_0 '\setminus \mathcal R_0 ^\star} I_R\times 3L_R\subset \Big\{{\mathrm M}_{\mathcal R_0}\Big(\sum_{Q\in\mathcal L} a_Q\frac{\cic{1}_Q}{|Q|}\Big)>\frac{\uplambda}{3}\Big\}.
\]
Thus we get by the weak $(p,p)$ assumption for ${\mathrm M}_{\mathcal R_0}$ that
\[
\begin{split}
&\bigg|\bigcup_{R\in\mathcal R_0 '\setminus \mathcal R_0 ^\star} I_R\times 3L_R\bigg|\le \bigg|\Big\{{\mathrm M}_{\mathcal R_0}\Big(\sum_{Q\in\mathcal L} a_Q\frac{\cic{1}_Q}{|Q|}\Big)>\frac{\uplambda}{3} \Big\}\bigg|\lesssim \frac{A_p ^p}{\uplambda ^p}\mathsf{mass}_{a,p}(\mathcal L)^p
\\
&\qquad\qquad\lesssim \frac{A_p ^p}{\uplambda^p}\mathsf{mass}_{a,1}(\mathcal L)\mathsf{U}_2(\mathcal L) ^{2(p-1)}.
\end{split}
\]
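For the reader's convenience we record the exponent bookkeeping in the last step, which follows from \eqref{eq:convex}: since $p'=p/(p-1)$ we have $\frac{2p}{p'}=2(p-1)$ and $p-\frac{2p}{p'}=2-p$, so that
\[
\mathsf{mass}_{a,p}(\mathcal L)^p\leq \mathsf{mass}_{a,1}(\mathcal L)^{2-p}\,\mathsf{mass}_{a,2}(\mathcal L)^{2(p-1)}=\mathsf{mass}_{a,1}(\mathcal L)\bigg(\frac{\mathsf{mass}_{a,2}(\mathcal L)^2}{\mathsf{mass}_{a,1}(\mathcal L)}\bigg)^{p-1}=\mathsf{mass}_{a,1}(\mathcal L)\,\mathsf{U}_2(\mathcal L)^{2(p-1)}.
\]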
By the subordination property of $\mathcal L_1$ we get
\[
\mathsf{mass}_{a,1}(\mathcal L_1)\leq \bigg|\bigcup_{R\in\mathcal R_0 ^\star} I_R\times 3K_R \cup \bigcup_{R\in\mathcal R_0 '\setminus \mathcal R_0 ^\star} I_R\times 3L_R\bigg|\leq \frac12 \mathsf{mass}_{a,1}(\mathcal L)
\]
upon choosing $\uplambda \geq C \max(1,A_p\mathsf{U}_2 (\mathcal L)^\frac{2}{p'})$ with sufficiently large $C>1$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:shadow}] Fix $s\in S$ and choose $\uplambda$ in the definition of $\mathcal R_{s,k}$ to be the value given by Lemma \ref{lem:iter} with $\mathcal L=\mathcal R=\cup_{s\in S}\mathcal R_s$. Let $j=0$ and $\mathcal L_0=\mathcal L_{j}\coloneqq \mathcal R$. Construct $\mathcal L_{1}=\mathcal L_{j+1}\subset \mathcal R$ such that $\mathsf{mass}_{a,1}({\mathcal L_1})\leq \frac 12 \mathsf{mass}_{a,1}(\mathcal L_0)$. Since $B_R ^{\mathcal L_0}>k\uplambda $ for all $R\in\mathcal R_{s,k}$ we have
\[
\uplambda k<B_R ^{\mathcal L_0 }\leq \uplambda +B_R ^{\mathcal L_1}\implies B_R ^{\mathcal L_1}>\uplambda(k-1).
\]
Repeat the procedure inductively with $j+1$ in place of $j$. When $j=k-1$ we have reached the collection $\mathcal L_{k-1}$ with $\mathsf{mass}_{a,1}({\mathcal L_{k-1}})\lesssim 2^{-k}\mathsf{mass}_{a,1}(\mathcal L_0)$ and $B_R ^{\mathcal L_{k-1}}>\uplambda$. This last condition and Remark~\ref{rmrk:BRbig} imply that
\[
\mathsf{sh}(\mathcal R_{s,k}) \subset \bigg\{{\mathrm M}_{\mathcal R_s}\bigg[\sum_{Q\in\mathcal L_{k-1}}a_Q\frac{\cic{1}_Q}{|Q|}\bigg]>\uplambda\bigg\}
\]
and so, using \eqref{eq:convex},
\[
\begin{split}
& \big | \mathsf{sh}(\mathcal R_{s,k})\big|\leq \frac{A_p ^p}{\uplambda ^p}\mathsf{mass}_{a,p}({\mathcal L_{k-1}}) ^p \leq \frac{A_p ^p}{\uplambda ^p} \mathsf{mass}_{a,1}(\mathcal L_{k-1}) ^{p-\frac{2p}{p'}} \mathsf{mass}_{a,2}(\mathcal L_{k-1}) ^{\frac{2p}{p'}}
\\
& \qquad \leq 2^{-k}\mathsf{mass}_{a,1} (\mathcal L_0) \frac{C A_p ^p }{\uplambda^p}\Bigg(\frac{\mathsf{mass}_{a,2}(\mathcal L_0 )^2}{\mathsf{mass}_{a,1}(\mathcal L_0)}\Bigg)^{p-1} = 2^{-k}\mathsf{mass}_{a,1} (\mathcal L_0) \frac{C A_p ^p }{\uplambda^p}\mathsf{U}_2(\mathcal L_0)^{2(p-1)}
\end{split}
\]
and the lemma follows by the definition of $\uplambda$ since $\mathcal L_0=\mathcal R$.
\end{proof}
\section{A weighted Carleson embedding and applications to maximal directional operators}
\label{s:wcarleson} In this section, we provide a weighted version of the directional Carleson embedding theorem. We then derive, as applications, novel weighted norm inequalities for maximal and singular directional operators.
The proof of the weighted Carleson embedding follows the strategy used for Theorem~\ref{thm:carleson}, with suitable modifications. In order to simplify the presentation, we restrict our scope to collections of parallelograms $\mathcal R=\bigcup_{s\in S}\mathcal R_s$ with the property that the maximal operator ${\mathrm M}_{\mathcal R_s}$ associated to each collection $\mathcal R_s$ satisfies the appropriate weighted weak-$(1,1)$ inequality. This is the case, for instance, when the collections $\mathcal R_s$ are of the form
\begin{equation}
\label{e:onepar2020}
\mathcal R_s \subset \mathcal D^2_{s,\cdot,k}\, , \qquad \mathcal D^2_{s,\cdot,k} \coloneqq
\bigcup_{k_1\leq k}
\mathcal D^2 _{s,k_1,k}
\end{equation}
for a fixed $k\in \mathbb Z$. In other words, the parallelograms in direction $s$ have fixed vertical sidelength and arbitrary eccentricity.
\subsection{Directional weights} Let $S$ be a set of slopes and $w,u\in L^1 _{\mathrm{loc}}(\mathbb{R}^2)$ be nonnegative functions, which we refer to as \emph{weights} from now on. Our weight classes are related to the maximal operator
\[\mathrm{M}_{S;2}\coloneqq \mathrm{M}_{V}\circ {\mathrm M}_{(0,1)},\] recalling that ${\mathrm M}_V={\mathrm M}_{\{(1,s):\,s\in S\}}$ is the directional maximal operator defined in \eqref{e:Mv}.
We introduce the two-weight directional constant
\[
[w,u]_{S}\coloneqq \sup_{x\in \mathbb{R}^2} \frac{ \mathrm{M}_{S;2} w(x)}{u(x)}.
\]
We pause to point out some relevant examples of pairs $w,u$ with $[w,u]_{S}<\infty$. Recall that for $p>2$, $\|\mathrm{M}_{S;2}\|_{p\to p} \lesssim (\log \#S)^{1/p}$; this is actually a special case of Theorem \ref{thm:carleson} and interpolation. Therefore, if $g\geq 0$ belongs to the unit sphere of $L^p(\mathbb{R}^2)$,
\[
w\coloneqq \sum_{\ell=0}^\infty \frac{\mathrm{M}_{S;2}^{[\ell]} g}{2^\ell\left\|\mathrm{M}_{S;2}\right\|_{p\to p}^\ell}
\]
satisfies $[w,w]_{S}\leq 2 \|\mathrm{M}_{S;2} \|_{p\to p}$; here $T^{[\ell]}$ denotes $\ell$-fold composition of an operator $T$ with itself. We also highlight the relevance of $[w,u]_{S}$ in Theorem \ref{thm:wcarleson} below by noticing that
\begin{equation}
\label{e:weighted1bd}
\sup_{s\in S } \big\| {\mathrm M}_{\mathcal D^2_{s,\cdot,k}}: \, L^1(u)\to L^{1,\infty}(w)\big\| \lesssim [w,u]_{S}
\end{equation}
with absolute implicit constant. This result is obtained via the classical Fefferman-Stein inequality in direction $s$ paired with the remark that ${\mathrm M}_{\mathcal D^2_{s,\cdot,k}} w \lesssim {\mathrm M}_{S; 2} w \leq [w,u]_{S} u$.
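Indeed, the classical Fefferman-Stein inequality, applied in the direction $s$, gives for every $\uplambda>0$ and $f\in L^1(u)$
\[
w\big(\big\{{\mathrm M}_{\mathcal D^2_{s,\cdot,k}}f>\uplambda\big\}\big)\lesssim \frac{1}{\uplambda}\int_{\mathbb{R}^2} |f|\,{\mathrm M}_{\mathcal D^2_{s,\cdot,k}}w \leq \frac{[w,u]_{S}}{\uplambda}\int_{\mathbb{R}^2} |f|\,u,
\]
which is exactly \eqref{e:weighted1bd}.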
\subsection{Weighted Carleson sequences} We begin with the weighted analogue of Definition~\ref{def:carleson}, which is given with respect to a fixed weight $w$.
\begin{definition}\label{def:wcarleson}Let $a=\{a_R\}_{R\in\mathcal D_S ^2}$ be a sequence of nonnegative numbers. Then $a$ will be called an \emph{$L^\infty$-normalized} $w$-Carleson sequence if for every $\mathcal L \subset \mathcal D_S ^2$ which is subordinate to some collection $\mathcal T \subset \mathcal P^2_\uptau$ for some fixed $\uptau\in S$, we have
\[
\sum_{L\in\mathcal L}a_L\leq w(\mathsf{sh}(\mathcal T )),
\qquad
\mathsf{mass}_a \coloneqq \sum_{R\in\mathcal D_S ^2} a_R
<\infty.\]
\end{definition}
As before, if $\mathcal R\subset \mathcal D_\uptau ^2$ for some fixed $\uptau\in S$ then $\mathcal R$ is subordinate to itself and
\[
\mathsf{mass}_{a,1} (\mathcal R)=\sum_{R\in\mathcal R }a_R\leq w(\mathsf{sh}(\mathcal R )),\qquad \mathcal R \subset \mathcal D_\uptau ^2\quad\text{for some fixed}\,\, \uptau\in S.
\]
Throughout this section all Carleson sequences and related quantities are taken with respect to some fixed weight $w$ which is suppressed from the notation.
We can now state our weighted Carleson embedding theorem.
\begin{theorem}
\label{thm:wcarleson}Let $S\subset[-1,1]$ be a finite set of $N$ slopes and $\mathcal R\subset \mathcal D_S ^2$. Let $w,u$ be weights with $[w,u]_{S}<\infty $ and such that \[
\sup_{s\in S}\big\| {\mathrm M}_{\mathcal R_s}: \, L^1(u)\to L^{1,\infty}(w)\big\|\lesssim [w,u]_S.
\]
Then for every $L^\infty$-normalized $w$-Carleson sequence $a=\{a_R\}_{R\in\mathcal D_S ^2}$ we have
\[
\left(\int \left|T_{\mathcal R}(a)(x)\right|^2 \frac{\mathrm d x}{{\mathrm M}_{\mathcal R}u(x)}\right)^{\frac12} \lesssim (\log N)^{\frac12} [w,u]_S \mathsf{mass}_{a,1}(\mathcal R) ^\frac12.
\]
\end{theorem}
\subsection{Proof of Theorem~\ref{thm:wcarleson}} We follow the proof of Theorem~\ref{thm:carleson} and only highlight the differences to accommodate the weighted setting. Write $\upsigma\coloneqq [{\mathrm M}_{\mathcal R}u]^{-1}$. Expanding the $L^2(\upsigma)$-norm we have
\[
\|T_{\mathcal R}(a)\|_{L^2(\upsigma)}^2 \leq2\sum_{R\in\mathcal R}a_R \sum_{\substack{Q\in\mathcal R\\ Q\leq R}}a_Q \frac{\upsigma(Q\cap R)}{|Q||R|}.
\]
From the definition of $\upsigma$ we have that
\[
\upsigma(Q\cap R)\leq \frac{|Q\cap R|}{\inf_Q {\mathrm M}_{\mathcal R}u} \leq \frac{|Q|}{u(Q)}|Q\cap R |
\]
whence
\[
\|T_{\mathcal R}(a)\|_{L^2(\upsigma)}^2 \leq2 \sum_{R\in\mathcal R}a_R \Xint-_R\sum_{\substack{Q\in\mathcal R\\ Q\leq R}}a_Q\frac{\cic{1}_Q}{u(Q)} \eqqcolon 2 \sum_{R\in\mathcal R}a_R B_R ^{\mathcal R}
\]
where now for any $\mathcal L\subset \mathcal R$ we have defined
\[
B_R ^{\mathcal L}\coloneqq \Xint-_R \sum_{\substack{Q\in\mathcal L\\ Q\leq R}} a_Q\frac{\cic{1}_Q}{u(Q)}.
\]
Defining the families $\mathcal R_{s,k}$ for $s\in S$ and $k\in\mathbb{N}$ as in \eqref{eq:rsk} we then have the estimate
\[
\|T_{\mathcal R}(a)\|_{L^2(\upsigma)}^2\leq 2 \uplambda \Big[(\log N) \mathsf{mass}_{a,1}(\mathcal R)+ N \sum_{k> \log N} k \sup_{s\in S} w(\mathsf{sh}(\mathcal R_{s,k}))\Big].
\]
Again $\uplambda>0$ is a constant that will be determined later in the proof; in the last line we used the $w$-Carleson assumption on the sequence $a=\{a_R\}$ for rectangles in a fixed direction.
We need the weighted version of Lemma~\ref{lem:iter}, which is given under the standing assumptions of Theorem \ref{thm:wcarleson}.
\begin{lemma} \label{lem:witer} Let $a=\{a_R:R\in \mathcal D_S ^2\}$ be an $L^\infty$-normalized $w$-Carleson sequence, $s\in S\subset [-1,1]$, and $\mathcal L,\mathcal R \subset \mathcal D_S ^2 $ with $\mathcal L \subseteq \mathcal R$. For every $\uplambda>C[w,u]_S$ where $C$ is a suitably chosen absolute constant, there exists $\mathcal L_1 \subset \mathcal L $ such that:
\begin{itemize}
\item [\emph{(i)}] $\mathsf{mass}_{a,1}({\mathcal L_1})\leq \frac{1}{2} \mathsf{mass}_{a,1}({\mathcal L}) $;
\item[\emph{(ii)}] denoting by
$\mathcal R_{s} '$ the collection of rectangles $R$ in $\mathcal R_s$ with $B^{\mathcal L} _R>\uplambda$ we have that
\[
B^{\mathcal L} _R \leq \uplambda +B^{\mathcal L_1} _{R}\qquad \forall R\in\mathcal R_s '.
\]
\end{itemize}
\end{lemma}
\begin{proof} We can assume that $s=0$ and let $\mathcal R_0 '$ be the collection of rectangles in $\mathcal R_0$ such that $B_R ^\mathcal L>\uplambda$, where $\uplambda$ is as in the statement of the lemma and $C$ will be specified at the end of the proof. For $I\in \{\uppi_1(R): R \in \mathcal R_0'\}$ and any interval $K\subset \mathbb{R}$ we define $\mathcal L_{I,K} ^{\mathrm{in}} $ and $\mathcal L_{I,K} ^{\mathrm{out}}$ as in the proof of Theorem~\ref{thm:carleson} but now we set
\[
B_{I,K} ^{\mathrm{in}}\coloneqq \Xint-_{I\times K} \sum_{Q\in \mathcal L_{I,K} ^{\mathrm{in}} } \frac{a_Q}{u(Q)}\cic{1}_{Q},
\quad
B_{I,K} ^{\mathrm{out}}\coloneqq \Xint-_{I\times K} \sum_{Q\in \mathcal L_{I,K} ^{\mathrm{out}} }\frac{a_Q}{u(Q)}\cic{1}_{Q}.
\]
We define $\mathcal R_0 ''$ to be the subcollection of those $R=I\times L\in\mathcal R_0 '$ such that $B_{I,L} ^{\mathrm{out}}\leq \uplambda $. By linearity we get for each $R\in\mathcal R_0''$ that $B_R ^{\mathcal L} \leq \uplambda+B_{I,L} ^{\mathrm{in}}\leq \uplambda +B_R ^{\mathcal L_1 ''}$ where
\[
\mathcal L _1 {''} \coloneqq \bigcup_{R=I\times L\in\mathcal R_0 ''}\mathcal L_{I,L} ^{\mathrm{in}},\qquad
\mathsf{sh}(\mathcal L'' _1) \subset \bigcup_{R=I\times L \in \mathcal R_0 ''} I\times 3L .
\]
Since $\mathcal R_0 ''\subset \mathcal R_0 '$ we conclude as before that
\[
\begin{split}
&w \big (\mathsf{sh}(\mathcal L'' _1 )\big) \leq w\bigg ( \bigcup_{R=I\times L\in\mathcal R_0 ''} I\times 3L \bigg)\leq w\Big (\Big\{ {\mathrm M}_{\mathcal R_{0}} \Big(\sum_{Q\in\mathcal L} \frac{a_Q\cic{1}_Q}{u(Q)}\Big)>\frac{\uplambda}{3}\Big\}\Big)
\\
&\qquad \lesssim \frac{[w,u]_{S}}{\uplambda} \int_{\mathbb{R}^2} \sum_{Q\in\mathcal L} a_Q \frac{\cic{1}_Q}{u(Q)}\,{\rm d} u = \frac{[w,u]_S}{\uplambda} \mathsf{mass}_{a,1}( \mathcal L)
\end{split}
\]
by the two-weight weak type $(1,1)$ inequality for ${\mathrm M}_{\mathcal R_{s}}={\mathrm M}_{\mathcal R_{0}}$. Now $\mathcal L'' _1$ is subordinate to the collection $\{I\times 3L:I\times L \in \mathcal R_0 '' \}$. Using the definition of a Carleson sequence we have
\[
\begin{split}
\sum_{Q\in\mathcal L'' _1} a_Q &\leq w \bigg(\bigcup_{R=I\times L \in \mathcal R_0 ''} I\times 3L\bigg) \lesssim \frac{[w,u]_S}{\uplambda}\mathsf{mass}_{a,1}(\mathcal L)
\end{split}
\]
and so $ \mathsf{mass}_{a,1}({\mathcal L'' _1 })\lesssim [w,u]_S \mathsf{mass}_{a,1}({\mathcal L})/\uplambda$.
It remains to deal with the parallelograms
\[
R=I\times L
\in \mathcal R^\star_0\coloneqq\mathcal R_0 '\setminus \mathcal R_0''\qquad \text{for which}\quad B_{I,L} ^{\mathrm{out}}> \uplambda.
\]
\]
We define the maximal $K_R$ such that $B_{I,K_R} ^{\mathrm{out}}>\uplambda$ as before; the existence of this maximal interval can be guaranteed for example by assuming the collection $\mathcal R$ is finite. We have for each $R=I\times L\in \mathcal R^\star _0$ that $B_{I,L} ^{\mathrm{out}}> \uplambda$ so $K_R\supset L$ and $B_{I,3K_R } ^{\mathrm{out}}\leq\uplambda$ by maximality.
Now using \eqref{eq:slice} we get that
\[
\Delta\coloneqq \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,3K_R} } a_Q \frac{|Q\cap( I\times\{\upalpha\})|}{{u}(Q)|I|}\lesssim \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,3K_R} }a_Q \frac{|Q\cap (I\times 3K_R)|}{{u}(Q)|3K_R||I|}=\Xint-_{I\times 3K_R}\sum_{Q\in \mathcal L^{\mathrm{out}} _{I,3K_R} }a_Q\frac{\cic{1}_Q}{{u}(Q)}\lesssim \uplambda
\]
by the maximality of $K_R$. On the other hand
\[
\Xi\coloneqq \sum_{Q\in\mathcal L^{\mathrm{out}} _{I,K_R} \setminus \mathcal L^{\mathrm{out}} _{I,3K_R} }a_Q \Xint-_{I\times\{\upalpha\}} \frac{\cic{1}_Q}{{u}(Q)} \lesssim \sum_{Q\subset I\times 9K_R} a_Q \frac{|Q\cap (I\times 3K_R)|}{|I\times 3K_R|{u}(Q)}.
\]
Since ${\mathrm M}_{\mathcal R_s} w\leq {\mathrm M}_V{\mathrm M}_2w\leq [w,u]_S {u}$ uniformly in $s$ we get that for $Q\subset I\times 9K_R$
\[
{u}(Q) \gtrsim [w,u]_S ^{-1}\frac{w(I\times 9K_R)}{|I\times 9K_R|}|Q|
\]
and by this and the $w$-Carleson property, applied to the cubes $Q$ subordinate to $I\times 9K_R$, we get
\[
\Xi\lesssim [w,u]_S \leq \uplambda
\]
provided $\uplambda \geq [w,u]_S$.
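In more detail, combining the two previous displays with the $w$-Carleson property, which for the cubes subordinate to the singleton collection $\{I\times 9K_R\}$ gives $\sum_{Q\subset I\times 9K_R}a_Q\leq w(I\times 9K_R)$, we obtain
\[
\Xi\lesssim [w,u]_S\, \frac{|I\times 9K_R|}{w(I\times 9K_R)}\cdot\frac{1}{|I\times 3K_R|}\sum_{Q\subset I\times 9K_R}a_Q\leq 3\,[w,u]_S.
\]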
We now define
\[
\mathcal L_1 {'} \coloneqq \bigcup_{R\in \mathcal R_0 ^\star} \mathcal L_{\uppi_1(R),K_R} ^{\mathrm{in}}
\]
so that
\[
\mathsf{sh}(\mathcal L' _1)\subset \bigcup_{R\in\mathcal R_0 ^\star} \uppi_1(R)\times 3K_R.
\]
Arguing as in the unweighted case of Theorem~\ref{thm:carleson} we can estimate
\[
w(\mathsf{sh}(\mathcal L' _1))\leq w\Bigg(\bigcup_{R\in\mathcal R_0 ^\star} \uppi_1(R)\times 3K_R\Bigg)\lesssim w(\{{\mathrm M}_2 (\cic{1}_E)\gtrsim 1\})
\]
where
\[
E\coloneqq \bigg\{(x,y)\in \mathbb{R}^2:\, {\mathrm M}_v \bigg[ \sum_{Q\in\mathcal L} a_Q \frac{\cic{1}_Q}{{u}(Q)}\bigg](x,y)\geq \frac12 \uplambda\bigg\}.
\]
In the definition of $E$ above we have that ${\mathrm M}_v={\mathrm M}_{(1,s)}={\mathrm M}_{1}$ since we have reduced to the case $v=(1,s)=(1,0)$. Using the subordination property of $\mathcal L' _1$ and the Fefferman-Stein inequality once in the direction $e_2$ for ${\mathrm M}_2$ and once in the direction $v=(1,s)=(1,0)$ for ${\mathrm M}_v$ we estimate
\[
\mathsf{mass}_{a,1}(\mathcal L' _1)\leq w\Bigg(\bigcup_{R\in\mathcal R_0 ^\star} \uppi_1(R)\times 3K_R\Bigg)\lesssim \frac{1}{\uplambda} \sum_{Q\in\mathcal L} a_Q\frac{{\mathrm M}_V{\mathrm M}_2w(Q)}{{u}(Q)}\leq \frac{[w,u]_S}{\uplambda}\mathsf{mass}_{a,1}(\mathcal L).
\]
We have thus proved the lemma upon setting $\mathcal L_1 \coloneqq \mathcal L_1 ^{''}\cup \mathcal L_1 '$ and choosing $\uplambda\geq C [w,u]_S$ for a sufficiently large numerical constant $C>1$.
\end{proof}
Repeating the steps in the proof of Lemma~\ref{lem:shadow} for $\uplambda$ as in the statement of Lemma~\ref{lem:witer} we get for the sets $\mathcal R_{s,k}$ defined with respect to this $\uplambda$ that
\[
w(\mathsf{sh}(\mathcal R_{s,k}))\lesssim 2^{-k} \mathsf{mass}_{a,1}(\mathcal R),
\]
and this completes the proof of Theorem \ref{thm:wcarleson}.
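In more detail, the display above combined with the estimate
\[
\|T_{\mathcal R}(a)\|_{L^2(\upsigma)}^2\leq 2 \uplambda \Big[(\log N) \mathsf{mass}_{a,1}(\mathcal R)+ N \sum_{k> \log N} k \sup_{s\in S} w(\mathsf{sh}(\mathcal R_{s,k}))\Big]
\]
and the elementary bound $N\sum_{k>\log N}k\,2^{-k}\lesssim \log N$ yield
\[
\|T_{\mathcal R}(a)\|_{L^2(\upsigma)}^2\lesssim \uplambda\,(\log N)\,\mathsf{mass}_{a,1}(\mathcal R),
\]
and the conclusion follows upon taking square roots and recalling the choice of $\uplambda$.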
\subsection{Applications of Theorem \ref{thm:wcarleson}} The first corollary of Theorem \ref{thm:wcarleson} is a two-weighted estimate for the directional maximal operator ${\mathrm M}_V$ from \eqref{e:Mv}.
\begin{theorem}\label{thm:maxweight}Let $V\subset \mathbb S^1$ be a finite set of $N$ slopes and $w$ be a weight on $\mathbb{R}^2$. Then
\[
\left\|{\mathrm M}_V : {L^{2}(\widetilde{{\mathrm M}_{V}}w)}\to {L^{2,\infty}(w)}\right\|\lesssim \sqrt{\log N},
\qquad \widetilde{{\mathrm M}_{V}} \coloneqq {\mathrm M}_{V} \circ {\mathrm M}_{V} \circ \max\{{\mathrm M}_{(1,0)},{\mathrm M}_{(0,1)}\}.
\]
\end{theorem}
\begin{remark} In the proof below we argue for almost horizontal $V$, and use ${\mathrm M}_{(0,1)}$ in place of $\max\{{\mathrm M}_{(1,0)},{\mathrm M}_{(0,1)}\}$. The maximum $\max\{{\mathrm M}_{(1,0)},{\mathrm M}_{(0,1)}\}$ is needed in order to make the statement of the theorem invariant under rotations of $V$.
\end{remark}
\begin{proof} By standard limiting arguments, it suffices to prove that for each $k\in \mathbb Z$ the estimate
\begin{equation}
\label{e:weightpf1}
\left\|{\mathrm M}_{\mathcal R} : {L^{2}(z)}\to {L^{2,\infty}(w)}\right\|\lesssim \sqrt{\log N}, \qquad z\coloneqq {\mathrm M}_{\mathcal R} \circ {\mathrm M}_{V} \circ {\mathrm M}_{(0,1)}w,
\end{equation}
holds uniformly in $k$ whenever $\mathcal R$ is a one-parameter collection as in \eqref{e:onepar2020}.
For a nonnegative function $f\in\mathcal S(\mathbb{R}^2)$ let $Uf$ be a linearization of ${\mathrm M}_{\mathcal R}f$, namely
\[
{\mathrm M}_{\mathcal R}f(x)=U f(x)= \frac{1}{|R(x)|}\int_{R(x)}f(y)\,{\rm d} y = \sum_{R\in \mathcal R} \langle f \rangle_R \bm{1}_{F_R}(x), \qquad F_R\coloneqq\{x\in R:\, R(x)=R \}.
\] By duality, \eqref{e:weightpf1} turns into \begin{equation}
\label{e:weightpf2}
\|U^*(w\bm{1}_E) \|_{L^2(z^{-1}) }\lesssim \sqrt{\log N} \sqrt{w(E)},\qquad \forall E\subset \mathbb{R}^2.
\end{equation}
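The implication from \eqref{e:weightpf2} to \eqref{e:weightpf1} is the standard duality argument: if $E\subset\{{\mathrm M}_{\mathcal R}f>\uplambda\}$ has finite $w$-measure then
\[
\uplambda\, w(E)\leq \int_{E} Uf\, w\,{\rm d} x=\int_{\mathbb{R}^2} f\, U^*(w\bm{1}_E)\,{\rm d} x\leq \|f\|_{L^2(z)}\big\|U^*(w\bm{1}_E)\big\|_{L^2(z^{-1})}\lesssim \sqrt{\log N}\,\|f\|_{L^2(z)}\,\sqrt{w(E)},
\]
and dividing by $\sqrt{w(E)}$ and taking the supremum over such $E$ gives \eqref{e:weightpf1}.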
We can easily calculate
\[
U^*(w\bm{1}_E)= \sum_{R\in\mathcal R} w(E\cap F_R)\frac{\cic{1}_R}{|R|}
\]
and it is routine to check that $\{w(E\cap F_ R)\}_{R\in\mathcal R}$ is a $w$-Carleson sequence according to Definition~\ref{def:wcarleson}. The main point here is that the sets $\{E\cap F_R\}_{R\in\mathcal R}$ are by definition pairwise disjoint and $F_R\subseteq R$ for each $R\in\mathcal R$.
Setting $u\coloneqq {\mathrm M}_{V} \circ {\mathrm M}_{{(0,1)}} w$ and letting $S$ be the set of slopes of $V$, it is clear that $[w,u]_S\lesssim 1$ and that $z^{-1}= ({\mathrm M}_{\mathcal R} u)^{-1}$. Therefore \eqref{e:weightpf2} follows from an application of Theorem \ref{thm:wcarleson}.
\end{proof}
We may in turn use Theorem \ref{thm:maxweight} to establish a weighted norm inequality for maximal directional singular integrals with controlled dependence on the cardinality $\#V=N$. Similar considerations may be used to yield weighted bounds for directional singular integrals in $L^p(\mathbb{R}^2)$ for $p>2$; we do not pursue this issue.
\begin{theorem}\label{thm:singweight} Let $K$ be a standard Calder\'on-Zygmund convolution kernel on $\,\mathbb{R}$ and $V\subset \mathbb S^1$ be a finite set of $N$ slopes. For $\,v\in V$ we define
\[
T_vf (x) = \sup_{\upepsilon>0}\left| \int_{\upepsilon<t<\frac{1}{\upepsilon}} f(x+tv) K(t) \,{\rm d} t\right|, \qquad T_V f(x)= \sup_{v\in V} |T_v f(x)|.
\]
Let $w$ be a weight on $\mathbb{R}^2$ with $[w]_{A_1^V}\coloneqq \|{{\mathrm M}_{V}}w/w\|_\infty<\infty$. Then
\[
\left\|T_{V} :\,{L^{2}(w)}\to {L^{2,\infty}(w)}\right\|\lesssim (\log N)^{\frac32} [w]_{A_1^V}^{\frac52}.
\]
\end{theorem}
We sketch the proof, which is a weighted modification of the arguments for \cite[Theorem 1]{DDP}. Hunt's classical exponential good-$\uplambda$ inequality, see \cite[Proposition 2.2]{DDP} for a proof, may be upgraded to
\begin{equation}
\label{e:weightpf3}
w\left(\left\{ x\in \mathbb{R}^2 :T_vf(x) > 2\uplambda, {\mathrm M}_{v} f(x) \leq \upgamma \uplambda \right\}\right) \lesssim \exp\left(-{\textstyle\frac{c}{\upgamma [w]_{A_1^V}}}\right) w\left(\left\{ x\in \mathbb{R}^2 :T_vf(x) > \uplambda\right\}\right)
\end{equation}
by using that $[w]_{A_1^V}$ dominates the $A_\infty$ constant of the one-dimensional weight $t\mapsto w(x+tv)$ for all $x\in \mathbb{R}^2,v\in V$, together with Fubini's theorem. With \eqref{e:weightpf3} in hand, Theorem \ref{thm:singweight} follows from Theorem~\ref{thm:maxweight} via standard good-$\uplambda$ inequalities, selecting $\upgamma^{-1}\sim[w]_{A_1^V} \log N$. Note that the right-hand side of the estimate in the conclusion of Theorem~\ref{thm:maxweight} becomes $[w]_{A_1} ^\frac{3}{2} \sqrt{\log N}$ when the estimate is specialized to $A_1 ^V$ weights such as the ones we consider here.
\section{Tiles, adapted families, and intrinsic square functions} \label{s:isf}
We define here some general notions of tiles and adapted families of wave-packets: definitions in this spirit have appeared in, among others \cite{BarrLac,DDP,LL2,LR,LL1}. These will be essential for the time-frequency analysis square functions we use in this paper in order to model the main operators of interest. After presenting these abstract definitions we show some general orthogonality estimates for wave packet coefficients. We then detail how these notions are specialized in three particular cases of interest.
\subsection{Tiles and wavelet coefficients}Throughout this section we fix a finite set of slopes $S\subset [-1,1]$. Recall that we will alternatively refer to the set of vectors $V\coloneqq\{(1,s):\, s\in S\}$. A \emph{tile} is a set $t\coloneqq R_t \times \Omega_t \subset \mathbb{R}^2\times \mathbb{R}^2$ where $R_t \in\mathcal D_S ^2$ and $\Omega_t\subset \mathbb{R}^2$ is a measurable set with $|R_t||\Omega_t|\gtrsim 1$. We denote by $s(t)\in S$ the slope such that $R_t\in \mathcal D_{s(t)} ^2$, and then
\[
R_t = A_{s(t)}(I_t \times J_t)\quad\text{with}\quad I_t\times J_t \in \mathcal D _0 ^2.
\]
We also use the notation $v_t\coloneqq (1,s(t))$. There are several different collections of tiles used in this paper; they will generically be denoted by ${\mathbf{T}},{\mathbf{T}}_1,{\mathbf{T}} '$ or similar. Given any collection of tiles ${\mathbf{T}}$ we will often use the notation $\mathcal R_{{\mathbf{T}} }\coloneqq\{R_t:\, t\in {\mathbf{T}}\}$ for the collection of spatial components of the tiles in ${\mathbf{T}}$. The exact geometry of these tiles will be clear from context; however, several estimates hold for generic collections of tiles, as we will see in \S\ref{sec:orthocone}.
Let $t=R_t\times \Omega_t$ be a tile and $M\geq 2$. We denote by $\mathcal A_t^{M}$ the collection of Schwartz functions $\upphi$ on $\mathbb{R}^2$ such that:
\begin{itemize}
\item [(i)] $\mathrm{supp}(\widehat{\upphi})\subset \Omega_t$,
\item[(ii)] There holds
\[
\sup_{0\leq \upalpha,\upbeta\leq M} \sup_{x\in \mathbb{R}^2}
|R_t|^{\frac12} |I_t|^\upalpha|J_t|^\upbeta \left( 1 + \frac{|x\cdot v_t|}{|I_t||v_t|}\right)^{M} \left( 1 + \frac{|x\cdot e_2|}{|J_t|} \right)^{M}
\left|\partial_{v_t}^\upalpha \partial_{e_2}^\upbeta \upphi (x+c_{R_t}) \right| \leq 1.
\]
\end{itemize}
In the above display $c_{R_t}$ refers to the center of $R_t$ and $\partial_{v_t}(\cdot)\coloneqq \frac{v_t}{|v_t|}\cdot \nabla (\cdot)$. An immediate consequence of property (ii) is the normalization
\[
\sup_{\upphi \in \mathcal A_t^{M} }\|\upphi\|_{2} \lesssim 1.
\]
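Indeed, taking $\upalpha=\upbeta=0$ in (ii) and integrating the resulting pointwise bound, and using that the shear $A_{s(t)}$ preserves area so that $|R_t|=|I_t||J_t|$, while $|v_t|\simeq 1$ for $s(t)\in[-1,1]$, we get
\[
\|\upphi\|_{2}^2\leq \frac{1}{|R_t|}\int_{\mathbb{R}^2}\Big(1+\frac{|x\cdot v_t|}{|I_t||v_t|}\Big)^{-2M}\Big(1+\frac{|x\cdot e_2|}{|J_t|}\Big)^{-2M}\,{\rm d} x\lesssim \frac{|I_t||J_t|}{|R_t|}\simeq 1.
\]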
We thus refer to $\mathcal A_t^{M}$ as the collection of \emph{$L^2$-normalized wave packets adapted to $t$ of order $M$}. { For our purposes, it will suffice to work with moderate values of $M$, say $2^3\leq M \leq 2^{50}$. In fact, we use $M=M_0=2^{50}$ in the definition of the \emph{intrinsic wavelet coefficient} associated with the tile $t$ and the Schwartz function $f$:\begin{equation}
\label{e:intwcdef}
a_{t}(f) \coloneqq \sup_{\upphi\in \mathcal A_t^{M_0} }| \langle f,\upphi \rangle|^2,\qquad M_0=2^{50}.
\end{equation}}
This section is dedicated to square functions involving wavelet coefficients associated with particular collections of tiles which formally look like
\[
\Delta_{\mathbf{T}}(f)^2 \coloneqq \sum_{t\in{\mathbf{T}}} a_t(f)\frac{\cic{1}_{R_t}}{|R_t|},\qquad{\mathbf{T}}\,\text{is a collection of tiles}.
\]
We begin by proving some general global and local orthogonality estimates for collections of tiles with finitely overlapping frequency components. These estimates will be essential in showing that the sequence $\{a_t(f)\}_{t\in{\mathbf{T}}}$ is Carleson in the sense of Section~\ref{sec:carleson}, when $|f|\leq \cic{1}_E$ for some measurable set $E\subset \mathbb{R}^2$ with $0<|E|<\infty$. This in turn will allow us to use the directional Carleson embedding of Theorem~\ref{thm:carleson} in order to conclude corresponding estimates for intrinsic square functions defined on collections of tiles.
\subsection{Orthogonality estimates for collections of tiles}\label{sec:orthocone} We begin with an easy orthogonality estimate for wave packet coefficients. For completeness we present a sketch of the proof, which has a $TT^*$ flavor. The argument follows the lines of the proof of \cite[Proposition 3.3]{LR}.
\begin{lemma}\label{l:globalortho}Let $\mathbf T$ be a set of tiles such that $\sum_{t\in{\mathbf{T}}}\cic{1}_{\Omega_t}\lesssim 1$, let $M\geq 2^3$ and $\{\upphi_t:t\in{\mathbf{T}}\}$ be such that $\upphi_t \in \mathcal A_{t}^{M} $ for all $t\in {\mathbf{T}}$. We have the estimate
\begin{equation}\label{e:ttstarlemma}
\sum_{t\in \mathbf T} |\langle f,\upphi_t \rangle|^2 \lesssim\|f\|_2^2,
\end{equation}
and as a consequence
\[
\sum_{t\in \mathbf T} a_{t}(f) \lesssim \|f\|_2^2.
\]
\end{lemma}
\begin{proof} Fix $M\geq 2^3$.
It suffices to prove that for $\|f\|_2=1$ and an arbitrary adapted family of wave packets $\{\upphi_t:\, \upphi_t\in \mathcal A_t ^M,\, t\in {\mathbf{T}}\}$ there holds
\begin{equation}
\label{e:ttstar}
B\coloneqq \sum_{t\in \mathbf T} |\langle f,\upphi_t \rangle|^2 \lesssim 1.
\end{equation}
Let us first fix some $\Omega\in \Omega({\mathbf{T}})\coloneqq \{\Omega_t:\, t\in{\mathbf{T}}\}$ and consider the family
\[
{\mathbf{T}}(\{\Omega\})\coloneqq \{t\in{\mathbf{T}}:\,\Omega_t = \Omega\}.
\]
To prove \eqref{e:ttstar}, we introduce
\[
B_\Omega(g)\coloneqq \sum_{t\in \mathbf T(\{\Omega\})} |\langle g,\upphi_t \rangle|^2,\qquad S_\Omega(g)\coloneqq (\hat g\cic{1}_\Omega)^\vee.
\]
We claim that $B_\Omega(g)\lesssim \|g\|_2 ^2$ for all $g$, uniformly in $\Omega\in\Omega({\mathbf{T}})$. Assuming the claim for a moment and remembering the finite overlap assumption on the frequency components of the tiles we have
\[
B=\sum_{\Omega\in\Omega({\mathbf{T}})}B_\Omega(S_\Omega f)\lesssim \sum_{\Omega\in\Omega({\mathbf{T}})}\|S_\Omega(f)\|_2 ^2\leq \bigg\|\sum_{\Omega\in\Omega({\mathbf{T}})}\cic{1}_\Omega \bigg\|_\infty \| f\|_2 ^2\lesssim 1
\]
as desired. It thus suffices to prove the claim. To this end let
\[
P_\Omega(g)\coloneqq \sum_{t\in \mathbf T(\{\Omega\})} \langle g,\upphi_t \rangle \upphi_t.
\]
Then for any $g$ with $\|g\|_2=1$ we have that $B_\Omega(g)=\langle P_\Omega(g),g \rangle \leq \|P_\Omega(g)\|_2 $ and it suffices to prove that $\|P_\Omega(g)\|_2^2\lesssim B_\Omega(g)$. A direct computation reveals that
\[
\|P_\Omega(g)\|_2^2 \leq B_\Omega(g)\sup_{t' \in {\mathbf{T}}(\{\Omega\})} \sum_{ t \in {\mathbf{T}}(\{\Omega\}) } |\langle \upphi_t,\upphi_{t'} \rangle| \lesssim B_\Omega(g)
\]
where the second inequality in the last display follows from the polynomial decay of the wave packets $\{\upphi_t:\, \Omega_t=\Omega\}$. This completes the proof of the lemma.
\end{proof}
We present below a localized orthogonality statement which is needed in order to verify that the coefficients $a_{t}(f)$ form a Carleson sequence in the sense of \S\ref{sec:carleson}. Verifying this Carleson condition relies on a variation of Journ\'e's lemma that can be found in \cite[Lemma 3.23]{CLMP}; we rephrase it here adjusted to our notation. In the statement of the lemma below we denote by $\mathrm{M}_{\mathcal P_s ^2}$ the \emph{maximal function} corresponding to the collection $\mathcal P_s ^2$ where $s\in S$ is a fixed slope. Note that the proof in \cite{CLMP} corresponds to the case of slope $s=0$ but the general case $s\in S$ follows easily by a change of variables. Remember here that we have $S\subset [-1,1]$.
In the statement of the lemma below two parallelograms are called \emph{incomparable} if none of them is contained in the other.
\begin{lemma} \label{l:journe} Let $s\in S$ be a slope and $\mathcal T\subset \mathcal D_s ^2$ be a collection of pairwise incomparable parallelograms. Define
\[
\mathsf{sh}^\star(\mathcal T)\coloneqq \big\{\mathrm{M}_{\mathcal P_s ^2} \bm{1}_{\mathsf{sh}(\mathcal T)} > 2^{-6} \big\}
\]
and for each $R \in \mathcal T$ let $u_R$ be the least integer $u$ such that $2^{u}R\not\subset \mathsf{sh}^\star(\mathcal T) .$ Then
\[
\sum_{\substack{R\in \mathcal T\\ u_R=u}} |R| \lesssim 2^u| \mathsf{sh}(\mathcal T)|.
\]
\end{lemma}
With the suitable analogue of Journ\'e's lemma in hand we are ready to state and prove the localized orthogonality condition for the coefficients $a_{t}(f)$.
\begin{lemma}\label{l:localortho} Let $s\in S$ be a slope, $\mathcal T\subset \mathcal P_s ^2$ be a given collection of parallelograms and ${\mathbf{T}}$ be a collection of tiles such that
\[
\mathcal R_{{\mathbf{T}}} \coloneqq \{R_t:\, t\in {\mathbf{T}}\}
\]
is subordinate to $\mathcal T$. Then we have
\[
\sum_{t\in {\mathbf{T}}} a_{t}(f)\lesssim \left| \mathsf{sh}(\mathcal T) \right| \|f\|_\infty^2.
\]
\end{lemma}
\begin{proof} We first make a standard reduction that allows us to pass to a collection of dyadic rectangles. To do this we use that there exist at most $9^2$ shifted dyadic grids $\mathcal D_{s,j} ^2$ such that for each parallelogram $T\in \mathcal T$ there exists $\widetilde{T}\in\cup_j \mathcal D^2 _{s,j}$ with $T\subset \widetilde{T}$ and $|T|\leq |\widetilde{T}|\lesssim |T|$; see for example \cite{HLP}. Set $\widetilde{\mathcal T}\coloneqq\{\widetilde T:\, T\in\mathcal T\}$ and note that for each $T\in \mathcal T$ we have
\[
\frac{|T\cap \widetilde{T}|}{|\widetilde{T}|}\gtrsim 1,\qquad \mathsf{sh}(\widetilde {\mathcal T})\subset \big\{{\mathrm M}_{\mathcal P_s ^2}(\cic{1}_{\mathsf{sh}(\mathcal T)})\gtrsim 1\big\}
\]
and so $|\mathsf{sh}(\widetilde{\mathcal T})|\lesssim |\mathsf{sh}(\mathcal T)|$. It is thus clear that we may replace $\mathcal T$ by the dyadic collection $\widetilde{\mathcal T}$ in the assumption. Furthermore, there is no loss of generality in assuming that $\mathcal T$ is a pairwise incomparable collection. We do so for the rest of the proof, and continue writing $\mathcal T$ for what is now a dyadic collection.
Since $\mathcal R_{{\mathbf{T}}}$ is subordinate to $\mathcal T$ we have the decomposition
\[
{\mathbf{T}}=\bigcup_{T\in\mathcal T}{\mathbf{T}}(T), \qquad {\mathbf{T}}(T)\coloneqq \{t\in {\mathbf{T}}:\, R_t\subset T\}.
\]
Now if $f$ is supported on $\mathsf{sh}^\star (\mathcal T)$ and $\upphi_t \in \mathcal A_t ^{M_0}$ for each $t\in{\mathbf{T}}$ then
\[
\sum_{t \in {\mathbf{T}}} |\langle f,\upphi_t\rangle|^2 \lesssim \|f\|_2^2\leq |\mathsf{sh}^\star(\mathcal T)|\|f\|_\infty^2\lesssim |\mathsf{sh}(\mathcal T)|\|f\|_\infty^2
\]
by Lemma \ref{l:globalortho}. We may thus assume that $f$ is supported outside $\mathsf{sh}^\star(\mathcal T) $.
By Lemma \ref{l:journe} it then suffices to prove that
\[
\sum_{t\in {\mathbf{T}}(T)} |\langle f, \upphi_t\rangle|^2 \lesssim 2^{-10u} |T|
\]
whenever $u$ is the least integer such that $2^{u}T\not\subset \mathsf{sh}^\star(\mathcal T)$ and $\|f\|_\infty=1$. As $f$ is supported off $\mathsf{sh}^\star(\mathcal T)$ we have for this choice of $u$ that
\[
f= \sum_{n \geq 0 } f_n, \qquad f_n\coloneqq f\bm{1}_{2^{u+n}T\setminus 2^{u+n-1} T}.
\]
Let $z_T$ be the center of $T$ and suppose that $T=A_s(I_T\times J_T)$ with $I_T\times J_T \in \mathcal D_0 ^2$; remember that we write $v_s\coloneqq (1,s)$. Let
\begin{equation}\label{eq:chiT}
\chi_T(x)\coloneqq\Big(1+ \frac{|(x-z_T)\cdot v_s|}{|I_T||v_s|}\Big)^{-20}\big(1+|J_T|^{-1} |(x-z_T)\cdot e_2|\big)^{-20}.
\end{equation}
Observe preliminarily that
\[
\|f_n\chi_T\|_\infty\lesssim 2^{-20 (u+n) }
\]
so that for any constant $c>0$ we have
\[
\begin{split}
\bigg(\sum_{t\in {\mathbf{T}}(T)} |\langle f, \upphi_t\rangle|^2\bigg)^{\frac12} &\leq \sum_{n \geq 0} \bigg(\sum_{t\in {\mathbf{T}}(T)} |\langle f_n, \upphi_t\rangle|^2\bigg)^{\frac12}= \sum_{n\geq 0}\bigg(\sum_{t\in{\mathbf{T}}(T)} |\langle f_nc^{-1}\chi_T, c\chi_T^{-1}\upphi_t\rangle|^2\bigg)^{\frac12}
\\
& \lesssim \sum_{n\geq 0} \|f_n\chi_T\|_2 \lesssim \sum_{n\geq 0} \|f_n\chi_T\|_\infty |2^{u+n}T|^{\frac12} \lesssim 2^{-5u}|T|^{\frac12}
\end{split}
\]
as claimed. To pass to the second line we have used estimate \eqref{e:ttstarlemma} of Lemma \ref{l:globalortho}, together with the easily verifiable fact that for each $t\in {\mathbf{T}}(T)$ the wave packet $c\chi_T^{-1}\upphi_t$ is adapted to $t$ with order $M_0-20\geq 2^3$, provided the absolute constant $c$ is chosen small enough.
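We also indicate how the last numerical bound is obtained. Since $\|f\|_\infty=1$ and $u\geq 1$, the estimate $\|f_n\chi_T\|_\infty\lesssim 2^{-20(u+n)}$ recorded above gives, by summing a geometric series,
\[
\sum_{n\geq 0} \|f_n\chi_T\|_\infty |2^{u+n}T|^{\frac12}\lesssim \sum_{n\geq 0}2^{-20(u+n)}\, 2^{u+n}|T|^{\frac12}=|T|^{\frac12}\, 2^{-19u}\sum_{n\geq 0}2^{-19 n}\lesssim 2^{-5u}|T|^{\frac12},
\]
which upon squaring yields the claimed gain $2^{-10u}|T|$.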
\end{proof}
\subsection{The intrinsic square function associated with rough frequency cones}\label{sec:conetiles} Let $S$ be our finite set of slopes. As usual we write $v_s\coloneqq(1,s)$ for $s\in S$ and $V\coloneqq\{v_s:\, s\in S\}$, and we switch between describing directions by slopes or by vectors as convenient, with no particular mention. Now assume we are given a finitely overlapping collection of arcs $\{\upomega_s\}_{s\in S}$ with each $\upomega_s \subset\mathbb S^1$ centered at $(v_s/|v_s|)^\perp$. We will adopt the notation $\upomega_s \eqqcolon ((v_{s ^-}/|v_{s ^-}|)^\perp,(v_{s ^+}/|v_{s ^+}|)^\perp)$, assuming that the positive direction on the circle is counterclockwise and $s^-<s<s^+$.
For $s\in S$ we define the conical sectors
\begin{equation}
\label{e:freqsconical}
\Omega_{s,k}\coloneqq \left\{\xi \in \mathbb{R}^2:\, 2^{k-1} < |\xi| < 2^{k+1}, \, \frac{\xi}{|\xi|} \in \upomega_s \right\}, \qquad k \in \mathbb Z;
\end{equation}
these are an overlapping cover of the cone
\[
C_s\coloneqq \left\{\xi\in \mathbb{R}^2\setminus\{0\}:\, \frac{\xi}{|\xi|} \in \upomega_s\right\}
\]
with $k\in \mathbb Z$ playing the role of the annular parameter. Each sector $\Omega_{s,k}$ is strictly contained in the cone $C_s$.
For each $s\in S$ let $\ell_s\in \mathbb Z$ be chosen such that $2^{-\ell_s}<|\upomega_s|\leq 2^{-\ell_s+1}$. We perform a further discretization of each conical sector $\Omega_{s,k}$ by considering Whitney-type decompositions with respect to the distance to the lines determined by the boundary rays $r_{s^-}$ and $r_{s^+}$; here $r_{s^+}$ denotes the ray emanating from the origin in the direction of $v_{s^+} ^\perp$ and similarly for $r_{s^-}$. For each sector $\Omega_{s,k}$ a central piece which we call $\Omega_{s,k,0}$ is left uncovered by these Whitney decompositions. This is merely a technical issue and we will treat these central pieces separately in what follows.
To make this precise let $s,k$ be fixed and define the regions
\begin{equation}
\begin{split}\label{eq:whitney}
\Omega_{s,k,m}\coloneqq\Big\{\xi\in\Omega_{s,k}:\, \frac13 2^{-|m|-1} \leq \frac{\mathrm{dist}(\xi,r_{s^+})}{2^k|\upomega_s|} \leq \frac13 2^{-|m|+1} \Big\},\qquad m>0,
\\
\Omega_{s,k,m}\coloneqq\Big\{\xi\in\Omega_{s,k}:\, \frac13 2^{-|m|-1} \leq \frac{\mathrm{dist}(\xi,r_{s^-}) }{2^k|\upomega_s|} \leq \frac13 2^{-|m|+1} \Big\},\qquad m<0.
\end{split}
\end{equation}
The central part that was left uncovered corresponds to $m=0$ and is described as
\begin{equation}\label{eq:whitney0}
\Omega_{s,k,0}\coloneqq \Big\{\xi\in\Omega_{s,k}:\, \min(\mathrm{dist}(\xi,r_{s^-}),\mathrm{dist}(\xi,r_{s^+})) \geq \frac12 \frac13 2^k|\upomega_s|\Big\}.
\end{equation}
Notice that the collection $\{\Omega_{s,k,m}\}_{m\in\mathbb{Z}}$ is a finitely overlapping cover of $\Omega_{s,k}$. Furthermore the family $\{\Omega_{s,k,m}\}_{s,k,m}$ has finite overlap, as the cones $\{C_s\}_{s\in S}$ have finite overlap and for fixed $s$ the family $\{\Omega_{s,k,m}\}_{k,m}$ is Whitney both in $k$ and in $m$.
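Let us briefly justify the claimed overlap properties. If $\xi\in\Omega_{s,k,m}\cap\Omega_{s,k,m'}$ with $m,m'>0$ then the two-sided condition in \eqref{eq:whitney} pins down $2^{-|m|}$ and $2^{-|m'|}$ to within a factor of $4$ of each other, so that $|m-m'|\leq 2$; the case $m,m'<0$ is identical, and mixed signs contribute at most $O(1)$ further regions. Similarly, a point $\xi$ belongs to $\Omega_{s,k}$ for at most two consecutive values of $k$. All in all, for each fixed $s\in S$,
\[
\sup_{\xi\in\mathbb{R}^2} \#\big\{(k,m)\in{\mathbb Z}^2:\, \xi\in\Omega_{s,k,m}\big\}\lesssim 1 .
\]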
These geometric considerations are depicted in Figure~\ref{fig:conetiles} below.
\begin{figure}[ht]
\centering
\def<desired width>{330pt}
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\newcommand*\fsize{\dimexpr\f@size pt\relax}%
\newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}%
\ifx<desired width>\undefined%
\setlength{\unitlength}{955.56878988bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{<desired width>}%
\fi%
\global\let<desired width>\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.52752546)%
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{conetiles.pdf}}%
\put(0.73352601,0.39585592){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{v_{s} =(1,s)}$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=2]{conetiles.pdf}}%
\put(0.65892369,0.09870227){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{R_t \text{ dual to } \Omega_{s,k,0}}$\end{tabular}}}}%
\put(0.009892,0.02296516){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{|\xi|=2^{k+1}}$\end{tabular}}}}%
\put(0.24263737,0.02400128){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{|\xi|=2^{k-1}}$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=3]{conetiles.pdf}}%
\put(0.20754137,0.39357333){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{2^{k+1}|\omega_s|\eqsim 2^{k-\ell_s}}$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=4]{conetiles.pdf}}%
\put(0.32090948,0.33414535){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{\Omega_{s,k}}$\end{tabular}}}}%
\put(0.0093418,0.22583676){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{r_{s^+}}$\end{tabular}}}}%
\put(0.13727607,0.38045671){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{r_{s^-}}$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=5]{conetiles.pdf}}%
\put(0.32247923,0.27371016){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{\Omega_{s,k,0}}$\end{tabular}}}}%
\put(0.02795961,0.3315946){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{ v_{s} ^\perp=(-s,1)}$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=6]{conetiles.pdf}}%
\put(0.07949608,0.10867378){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{0}\smash{\begin{tabular}[t]{l}$\scriptstyle{\Omega_{s,k,m},\,} \scriptscriptstyle{m}>0$\end{tabular}}}}%
\put(0,0){\includegraphics[width=\unitlength,page=7]{conetiles.pdf}}%
\end{picture}%
\endgroup%
\caption{The decomposition of the sector $\Omega_{s,k}$ into Whitney regions, and the spatial grid corresponding to the middle region $\Omega_{s,k,0}$.}
\label{fig:conetiles}
\end{figure}
The collection of tiles ${\mathbf{T}}$ corresponding to this decomposition is obtained as
\[
{\mathbf{T}}\coloneqq \bigcup_{s\in S} {\mathbf{T}}_{s} ^- \cup {\mathbf{T}}^0 _s \cup {\mathbf{T}}_{s} ^+
\]
where
\begin{alignat}{2}\label{eq:tilesskm}
\notag \qquad & {\mathbf{T}}_{s} ^- \coloneqq\bigcup_{k\in{\mathbb Z},\,m<0}{\mathbf{T}}_{s^-,k,m},\qquad &&{\mathbf{T}} _{s^-,k,m} \coloneqq \big\{t=R_t \times \Omega_{s,k,m}: \, R_t \in \mathcal D_{s^-,k,k-\ell_{s}+|m|}\big\}, \quad m<0,
\\
\qquad & {\mathbf{T}}_{s} ^0 \coloneqq\bigcup_{k\in{\mathbb Z}}{\mathbf{T}}_{s,k,0}, \qquad &&{\mathbf{T}} _{s,k,0} \coloneqq \big\{t=R_t \times \Omega_{s,k,0}: \, R_t \in \mathcal D_{s,k,k-\ell_{s} }\big\},
\\
\qquad & {\mathbf{T}}_{s} ^+\coloneqq\bigcup_{k\in{\mathbb Z},\,m>0}{\mathbf{T}}_{s^+,k,m}, \qquad &&{\mathbf{T}} _{s^+,k,m} \coloneqq \big\{t=R_t \times \Omega_{s,k,m}: \, R_t \in \mathcal D_{s^+,k,k-\ell_{s}+|m|}\big\}, \quad m>0.
\end{alignat}
We stress here that for each cone $C_s$ we introduce tiles in three possible directions $v_{{s^-}},v_s,v_{s^+}$. This turns out to be a technical nuisance more than anything else, as the total number of directions is still comparable to $\#S$, and our estimates will be uniform over all $S$ with the same cardinality. However, in order to avoid confusion we set
\begin{equation}\label{eq:s*}
S^*\coloneqq S\cup\{s^-:\, s\in S\}\cup\{s^+:\, s\in S\}\eqqcolon S^-\cup S\cup S^+.
\end{equation}
Note also that for fixed $s,k,m$ the choice of scales for $R_t$ yields that the tile $t=R_t \times \Omega_{s,k,m}$ obeys the uncertainty principle in both radial and tangential directions.
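To make this quantitative we record the approximate dimensions involved; this count is heuristic and will not be used verbatim in the sequel. For a tile $t=R_t\times\Omega_{s,k,m}$ the frequency component has extent $\eqsim 2^{k}$ in the radial direction and $\eqsim 2^{k}|\upomega_s|2^{-|m|}\eqsim 2^{k-\ell_s-|m|}$ in the tangential one, and the spatial grids in \eqref{eq:tilesskm} are chosen precisely so that $R_t$ has dimensions comparable to the dual lengths
\[
2^{-k} \quad\text{(radial)}\qquad\text{and}\qquad 2^{-k+\ell_s+|m|}\quad\text{(tangential)},
\]
so that the product of dual side lengths is $\eqsim 1$ in each of the two directions.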
We then define the associated intrinsic square function by
\begin{equation}
\label{e:intrinsicsf}
\begin{split}
&\Delta_{{\mathbf{T}}} (f) \coloneqq \Bigg( \sum_{t \in \mathbf T}a_{t}(f) \frac{\cic{1}_{R_t}}{|R_t|} \Bigg)^\frac12,
\end{split}
\end{equation}
where the set of slopes $S$ is kept implicit in the notation. Here we remember that the notation $a_t(f)$ was introduced in \eqref{e:intwcdef}. Using the orthogonality estimates of \S\ref{sec:orthocone} as input for Theorem \ref{thm:carleson} we readily obtain the estimates of the following theorem.
\begin{theorem} \label{t:isf} We have the estimates
\begin{align}\label{e:isfp}
& \big\|\Delta_{{\mathbf{T}}}: L^{p}(\mathbb{R}^2) \big\| \lesssim_p (\log \#S)^{\frac12-\frac1p} (\log\log\#S)^{\frac12-\frac1p}, \qquad 2\leq p<4, \\
& \sup_{E,f}
\label{e:isfrest} \frac{\left\|\Delta_{{\mathbf{T}}} (f\bm{1}_E)\right\|_{4}}{|E|^{\frac14}} \lesssim (\log \#S)^{\frac14} (\log\log\#S)^{\frac14},
\end{align}
where the supremum in the last display is taken over all measurable sets $E\subset \mathbb{R}^2$ of finite positive measure and all Schwartz functions $f$ on $\mathbb{R}^2$ with $\|f\|_\infty\leq 1$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{t:isf}] First of all, observe that the case $p=2$ of \eqref{e:isfp} is exactly the conclusion of Lemma \ref{l:globalortho}. By restricted weak type interpolation it thus suffices to prove \eqref{e:isfrest} in order to obtain the remaining cases of \eqref{e:isfp}; we now turn to this task.
Recall from \eqref{eq:s*} that $S^*=S^-\cup S\cup S^+$ is the actual set of slopes of tiles in ${\mathbf{T}}$. Let
\[
\mathcal R_{{\mathbf{T}}}\coloneqq\{R_t:\,t\in{\mathbf{T}}\} \subset \mathcal D_{S^*} ^2.
\]
Observe that we can write
\[
\Delta_{{\mathbf{T}}}(f\cic{1}_E)^2 =\sum_{R\in \mathcal R_{{\mathbf{T}}}}\bigg(\sum_{t\in{\mathbf{T}}:\, R_t=R}a_t(f\cic{1}_E)\bigg)\frac{\cic{1}_{R }}{|R |}\eqqcolon \sum_{R\in \mathcal R_{{\mathbf{T}}}}a_R\frac{\cic{1}_R}{|R|}
\]
where
\[
a\coloneqq\Big\{a_R= \sum_{t\in{\mathbf{T}}:\, R_t=R}a_t(f\cic{1}_E): \, R \in \mathcal R_{{\mathbf{T}}}\Big\}.
\]
We fix $E$ and $f$ as in the statement and we will obtain \eqref{e:isfrest} from an application of Theorem \ref{thm:carleson} to the Carleson sequence $a=\{a_R\}_{R \in \mathcal R_{{\mathbf{T}}}}$.
First, $\mathsf{mass}_a\lesssim |E|$ as a consequence of Lemma \ref{l:globalortho} since
\[
\sum_{R \in \mathcal R_{{\mathbf{T}}}}a_R
=\sum_{R \in \mathcal R_{{\mathbf{T}}}}\sum_{t\in{\mathbf{T}}:\, R_t=R}a_t(f\cic{1}_E)=\sum_{t\in{\mathbf{T}}} a_t(f\cic{1}_E)\lesssim\|f\cic{1}_E\|_2 ^2\lesssim |E|.
\]
Further, the fact that $a$ is (a constant multiple of) an $L^\infty$-normalized Carleson sequence is a consequence of the localized estimate of Lemma \ref{l:localortho}. To verify this we need to check the validity of Definition~\ref{def:carleson} for the sequence $a$ above. To that end let $\mathcal L\subset \mathcal D_{S^*} ^2$ be a collection of parallelograms which is subordinate to $\mathcal T\subset \mathcal D_\sigma ^2$ for some fixed $\sigma\in S^*$. Then
\[
\sum_{R\in\mathcal L} a_R = \sum_{R\in\mathcal L} \sum_{t\in{\mathbf{T}}:\, R_t=R}a_t(f\cic{1}_E)=\sum_{t\in{\mathbf{T}} _{\mathcal L}} a_t(f\cic{1}_E)
\]
where ${\mathbf{T}} _{\mathcal L}\coloneqq\{t\in {\mathbf{T}}:\, R_t\in\mathcal L\}.$ By Lemma~\ref{l:localortho} the right hand side of the display above can be estimated by a constant multiple of $|\mathsf{sh}(\mathcal T)| \|f\cic{1}_E\|_{\infty} ^2\leq |\mathsf{sh}(\mathcal T)|$. This shows the desired property in the definition of a Carleson sequence.
Finally if ${\mathbf{T}}_{\sigma}\coloneqq \{t\in{\mathbf{T}}:\, s(t)=\sigma\}$ for $\sigma\in S^*$ we have that
\[
\sup_{\sigma\in S^*}\big\| {\mathrm M}_{\mathcal R_{{\mathbf{T}}_{\sigma}}}: \, L^p(\mathbb{R}^2)\to L^{p,\infty}(\mathbb{R}^2)\big\|\lesssim p',\qquad p\to 1^+.
\]
Indeed note that for fixed direction $\sigma\in S^*$ each maximal operator appearing in the estimate above is bounded by the strong maximal function in the coordinates $(v,e_2)$ with $v=(1,\sigma)$.
Now Theorem \ref{thm:carleson} applies to the Carleson sequence $a=\{a_R\}_{R\in\mathcal R_{{\mathbf{T}}}}$ yielding
\[
\|\Delta_{{\mathbf{T}}} (f\bm{1}_E)\|_4^4 = \|T_{\mathcal R_{{\mathbf{T}}}}(a)\|_2^2 \lesssim (\log \#S^*)(\log\log \#S^*) \mathsf{mass}_a\lesssim (\log \#S)(\log\log \#S) |E|
\]
which is the claimed estimate \eqref{e:isfrest} as $\#S^*\simeq \#S$. The proof of Theorem~\ref{t:isf} is thus complete.
\end{proof}
\subsection{The intrinsic square function associated with smooth frequency cones}\label{sec:smoothtiles} The tiles in the previous subsection were used to model rough frequency projections on a collection of essentially disjoint cones. Indeed note that all decompositions were of Whitney type with respect to all the singular sets of the corresponding rough multiplier. In the case of smooth frequency projections on cones we need a simplified collection of tiles that we briefly describe below.
Assuming $S$ is a finite set of slopes and the arcs $\{\upomega_s\}_{s\in S}$ on $\mathbb S^1$ have finite overlap as before we now define for $s\in S$ and $k\in{\mathbb Z}$ the collections
\begin{equation}\label{eq:smoothconetiles}
{\mathbf{T}}_{s,k}\coloneqq\big\{t=R_t\times \Omega_{s,k}:\,R_t\in\mathcal D_{s,k,k-\ell_s} \big\},\qquad {\mathbf{T}}_s\coloneqq\bigcup_{k\in{\mathbb Z}} {\mathbf{T}}_{s,k},\qquad {\mathbf{T}}\coloneqq \bigcup_{s\in S} {\mathbf{T}}_s,
\end{equation}
with $\Omega_{s,k}$ given by \eqref{e:freqsconical}. Here we also assume that $2^{-\ell_s}< |\upomega_s|\leq 2^{-\ell_s+1}$. Notice that each conical sector $\Omega_{s,k}$ now generates exactly one frequency component of possible tiles, in contrast with the previous subsection where we needed a whole Whitney collection for every $s$ and every $k$; in fact the tiles ${\mathbf{T}}_{s,k}$ are for all practical purposes the same as the tiles ${\mathbf{T}}_{s,k,0}$ considered in \S\ref{sec:conetiles}. It is of some importance to note here that for each fixed $s\in S$ the collection $\mathcal R_{{\mathbf{T}}_s}\coloneqq\{R_t:\, t\in{\mathbf{T}}_s\}$ consists of parallelograms of fixed eccentricity $2^{\ell_s}$, and thus the corresponding maximal operator $\mathrm{M}_{\mathcal R_{{\mathbf{T}}_s}}$ is of weak-type $(1,1)$ uniformly in $s\in S$:
\[
\sup_{s\in S}\big\| {\mathrm M}_{\mathcal R_{{\mathbf{T}}_s}}: \, L^1(\mathbb{R}^2)\to L^{1,\infty}(\mathbb{R}^2)\big\|\lesssim 1.
\]
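Let us briefly indicate why this uniform bound holds, using only the affine invariance of maximal averages: for any invertible linear map $L$ on $\mathbb{R}^2$ and any collection $\mathcal R$ of sets of finite positive measure one has
\[
\mathrm{M}_{L^{-1}(\mathcal R)}(f\circ L)=\big(\mathrm{M}_{\mathcal R}f\big)\circ L,\qquad L^{-1}(\mathcal R)\coloneqq\{L^{-1}(R):\, R\in\mathcal R\},
\]
since the Jacobian factors cancel in each average, so the weak $(1,1)$ norm of $\mathrm{M}_{\mathcal R}$ is unchanged under such a change of variables. As all parallelograms in $\mathcal R_{{\mathbf{T}}_s}$ share the direction $s$ and the eccentricity $2^{\ell_s}$, one may choose $L$ so that $L^{-1}(\mathcal R_{{\mathbf{T}}_s})$ becomes a family of squares, reducing the claim to the weak $(1,1)$ bound for the Hardy--Littlewood maximal function over squares, whose constant is absolute.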
The intrinsic square function $\Delta_{\mathbf{T}}$ is formally given as in \eqref{e:intrinsicsf} but defined with respect to the new collection of tiles in \eqref{eq:smoothconetiles}. A repetition of the arguments that led to the proof of Theorem~\ref{t:isf} yields the following.
\begin{theorem} \label{thm:smoothconeintrinsic} For ${\mathbf{T}}$ defined by \eqref{eq:smoothconetiles} we have the estimates
\[
\begin{split}
& \big\|\Delta_{{\mathbf{T}}}: L^{p}(\mathbb{R}^2) \big\| \lesssim_p (\log \#S)^{\frac12-\frac1p}, \qquad 2\leq p<4, \\
& \sup_{E,f} \frac{\left\|\Delta_{{\mathbf{T}}} (f\bm{1}_E)\right\|_{4}}{|E|^{\frac14}} \lesssim (\log \#S)^{\frac14},
\end{split}
\]
where the supremum in the last display is taken over all measurable sets $E\subset \mathbb{R}^2$ of finite positive measure and all Schwartz functions $f$ on $\mathbb{R}^2$ with $\|f\|_\infty\leq 1$.
\end{theorem}
\subsection{The intrinsic square function associated with rough frequency rectangles}\label{sec:intrinsicrdf}
The considerations in this subsection aim at providing the appropriate time-frequency analysis in order to deal with a Rubio de Francia type square function, given by frequency projections on disjoint rectangles in finitely many directions. The intrinsic setup is described by considering again a finite set of slopes $S$ and corresponding directions $V$. Suppose that we are given a finitely overlapping collection of rectangles $\mathcal F=\cup_{s\in S}\mathcal F_s$, consisting of rectangles which are tensor products of intervals in the coordinates $v,v^\perp$, $v=(1,s)$, for some $s\in S$. Namely a rectangle $F\in \mathcal F_s$ is a \emph{rotation} by $s$ of an axis-parallel rectangle. We stress that the rectangles in each collection $\mathcal F_s$ are generic two-parameter rectangles, namely their sides have independent lengths (there is no restriction on their eccentricity).
We also note that $\mathcal F_s$ consists of rectangles rather than parallelograms and this difference is important when one deals with rough frequency projections. Our techniques are sufficient to deal with the case of parallelograms as well but we just choose to detail the setup for the rectangular case. The interested reader will have no trouble adjusting the proof for variations of our main statement below for the case of parallelograms, or for the case that the families $\mathcal F_s$ are in fact one-parameter families.
Given $F\in\mathcal F_s$ we define a two-parameter Whitney discretization as follows. Let $F=\mathrm{rot}_s(I\times J)+y_F$ for some $y_F\in \mathbb{R}^2$, where $\mathrm{rot_s}$ denotes the counterclockwise rotation by the angle $\arctan s$ about the origin and $I\times J$ is an axis-parallel rectangle centered at the origin. Note that $I=(-|I|/2,|I|/2)$ and similarly for $J$. Then we define for $(k_1,k_2)\in\mathbb{N}^2$, $k_1,k_2\neq 0$,
\[
W_{k_1,k_2}(F)\coloneqq \Big\{\xi\in I\times J:\, \frac13 2^{-k_1-1} \leq \frac12- \frac{|\xi_1|}{|I|}\leq \frac 13 2^{-k_1+1} ,\, \frac13 2^{-k_2-1} \leq \frac12- \frac{|\xi_2|}{|J|}\leq \frac 13 2^{-k_2+1} \Big\}.
\]
The definition has to be adjusted for $k_1=0$ or $k_2=0$. For example we define for $k_2\neq 0$
\[
W_{0,k_2}(F)\coloneqq \Big\{\xi\in I\times J:\, |I|/2-|\xi_1|\geq \frac12 \frac13|I|,\, \frac13 2^{-k_2-1}|J| \leq |J|/2- |\xi_2|\leq \frac 13 2^{-k_2+1} |J|\Big\}
\]
and symmetrically for $k_1\neq 0$ and $k_2=0$. Finally
\[
W_{0,0}(F)\coloneqq \Big\{\xi\in I\times J:\,|I|/2-|\xi_1|\geq \frac12 \frac13|I|,\, |J|/2-|\xi_2|\geq \frac12 \frac13|J| \Big\}.
\]
Then for $k=(k_1,k_2)\in \mathbb{N}^2 $ we set $\Omega_{s,k_1,k_2}(F)\coloneqq\mathrm{rot}_s(W_{k_1,k_2}(F))+y_F$.
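We record the basic properties of this two-parameter Whitney family, which are verified by direct inspection of the definitions: for $k_1,k_2\geq 1$,
\[
|W_{k_1,k_2}(F)| \eqsim 2^{-k_1-k_2}|I||J|,\qquad \sum_{k_1,k_2\geq 0}|W_{k_1,k_2}(F)|\eqsim |I||J|=|F|,
\]
and every interior point of $I\times J$ belongs to at most $O(1)$ of the regions $W_{k_1,k_2}(F)$. The same statements transfer to the regions $\Omega_{s,k_1,k_2}(F)$, since $\mathrm{rot}_s$ is measure preserving.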
We can define tiles for this system as follows. If $F\in\mathcal F_s$ for some $s\in S$ and $F=\mathrm{rot}_s(I\times J)+y_F$ with $I\times J$ as above, then we choose $\ell_I ^F,\ell_J ^F\in{\mathbb Z}$ such that $2^{\ell^F _I}<|I| \leq 2^{\ell^F _I+1}$ and $2^{\ell^F _J}<|J| \leq 2^{\ell^F _J+1}$. We then set
\begin{equation}\label{eq:rdftiles}
{\mathbf{T}}^{\mathcal F}\coloneqq \bigcup_{s\in S}{\mathbf{T}}_s ^{{\mathcal F}},\qquad {\mathbf{T}}_{s} ^{\mathcal F} \coloneqq\bigcup_{F\in\mathcal F_s} {\mathbf{T}} _{s}(F),\qquad {\mathbf{T}}_s(F)\coloneqq \bigcup_{(k_1,k_2)\in\mathbb{N}^2} {\mathbf{T}}_{s,k_1,k_2}(F),\quad F\in\mathcal F_s,
\end{equation}
where
\[
{\mathbf{T}}_{s,k_1,k_2}(F)\coloneqq\Big\{t=R_t\times \Omega_{s,k_1,k_2}(F):\, R_t\in\mathcal D_{s,-k_2+\ell_J ^F,-k_1+\ell_I ^F}\Big \},\qquad F\in\mathcal F_s.
\]
Note again that the tiles defined above obey the uncertainty principle in both $v,v^\perp$ for every fixed $v=(1,s)$ with $s\in S$.
The intrinsic square function associated with the collection $\mathcal F$ is denoted by $\Delta_{{\mathbf{T}}^{\mathcal F}}$ and formally has the same definition as \eqref{e:intrinsicsf}, where now the ${\mathbf{T}}$ are given by the collection ${\mathbf{T}} ^{\mathcal F}$ of \eqref{eq:rdftiles}. The corresponding theorem is the intrinsic analogue of a multiparameter directional Rubio de Francia square function estimate.
\begin{theorem} \label{thm:intrrdf} Let $\mathcal F$ be a finitely overlapping collection of two-parameter rectangles in directions given by $S$, namely
\[
\Big \| \sum_{F\in\mathcal F}\cic{1}_F \Big \|_\infty \lesssim 1.
\]
Consider the collection of tiles ${\mathbf{T}}^\mathcal F$ defined in \eqref{eq:rdftiles} and let $\Delta_{{\mathbf{T}}^{\mathcal F}}$ be the corresponding intrinsic square function. We have the estimates
\begin{align}
& \big\|\Delta_{{\mathbf{T}}^{\mathcal F}}: L^{p}(\mathbb{R}^2) \big\| \lesssim_p (\log \#S)^{\frac12-\frac1p}(\log\log\#S)^{\frac12-\frac1p}, \qquad 2\leq p<4, \\
& \sup_{E,f}
\frac{\left\|\Delta_{{\mathbf{T}}^{\mathcal F}} (f\bm{1}_E)\right\|_{4}}{|E|^{\frac14}} \lesssim (\log \#S)^{\frac14} (\log\log\#S)^{\frac14},
\end{align}
where the supremum in the last display is taken over all measurable sets $E\subset \mathbb{R}^2$ of finite positive measure and all Schwartz functions $f$ on $\mathbb{R}^2$ with $\|f\|_\infty\leq 1$.
\end{theorem}
\begin{remark}\label{rmrk:1paramrecs} As before, there is a slight improvement in the case of one-parameter spatial components in each direction. More precisely, suppose that $\mathcal F=\cup_{s\in S}\mathcal F_s$ is a given collection of disjoint rectangles in directions given by $S$. If for each $s\in S$ the family $\mathcal R_{\mathcal F_s}\coloneqq\{R_t:\, t\in{\mathbf{T}}_{s} ^{\mathcal F}\}$ yields a weak-type $(1,1)$ maximal operator then the estimates of Theorem~\ref{thm:intrrdf} hold without the $\log\log$-terms.
\end{remark}
\begin{remark}\label{rmrk:slanted} Suppose that $\mathcal R=\bigcup_{s\in S}\mathcal R_s\subset \mathcal P_S ^2$ is a family of parallelograms in directions given by $S$, namely we have that if $R\in\mathcal R_s$ then $R=A_s (I\times J)+y_R$ for some rectangle $I\times J$ in $\mathbb{R}^2$ with sides parallel to the coordinate axes and centered at $0$, and $y_R\in \mathbb{R}^2$. Now there is an obvious way to construct a Whitney partition of each $R\in\mathcal R$. Indeed we just define the frequency components
\[
\Omega_{s,k_1,k_2}(R)\coloneqq A_s(W_{k_1,k_2}(I\times J))+y_R
\]
with $W_{k_1,k_2}(I\times J)$ as constructed before. Then
\[
{\mathbf{T}}_{s,k_1,k_2}(R)\coloneqq\Big\{t=R_t\times \Omega_{s,k_1,k_2}(R):\, R_t\in\mathcal D_{s,-k_2+\ell_J ^R,-k_1+\ell_I ^R}\Big \},\qquad R\in\mathcal R_s
\]
with $\ell_I ^R,\ell_J ^R$ chosen for $I\times J$ exactly as before, and ${\mathbf{T}}$ is given as in \eqref{eq:rdftiles}. With this definition there is a corresponding intrinsic square function $\Delta_{{\mathbf{T}}_\mathcal R}$ which satisfies the bounds of Theorem~\ref{thm:intrrdf}. The improvement of Remark~\ref{rmrk:1paramrecs} is also valid if $\mathcal R=\cup_{s\in S}\mathcal R_s$ and each $\mathcal R_s$ consists of parallelograms of fixed eccentricity.
\end{remark}
The proof of Theorem~\ref{thm:intrrdf} relies again on the global and local orthogonality estimates of \S\ref{sec:orthocone} and a subsequent application of the directional Carleson embedding theorem, Theorem~\ref{thm:carleson}. We omit the details.
\section{Sharp bounds for conical square functions} \label{sec:cone}
We begin this section by recalling the definition for the smooth conical frequency projections given in the introduction. Let $\uptau\subset (0,2\uppi)$ be an interval and consider the corresponding rough cone multiplier
\[
C_\uptau f(x) \coloneqq \int_{0}^{2\uppi} \int_{0}^\infty \widehat f(\varrho{\rm e}^{i\vartheta}) \bm{1}_{\uptau}(\vartheta) {\rm e}^{ix\cdot\varrho{\rm e}^{i\vartheta} } \, \varrho{\rm d} \varrho {\rm d} \vartheta,\qquad x\in\mathbb{R}^2,
\]
and its smooth analogue
\begin{equation}
\label{e:smoothcones}
C_\uptau^\circ f(x) \coloneqq \int_{0}^{2\uppi} \int_{0}^\infty \widehat f(\varrho{\rm e}^{i\vartheta}) \upbeta\left(\frac{\vartheta-c_\uptau}{|\uptau|/2}\right) {\rm e}^{ix\cdot\varrho{\rm e}^{i\vartheta} } \, \varrho{\rm d} \varrho {\rm d} \vartheta,\qquad x\in\mathbb{R}^2,
\end{equation}
where $\upbeta$ is a smooth function on $\mathbb{R}$ supported on $[-1,1]$ and equal to $1$ on $[-\frac12,\frac12]$, while $c_\uptau$ and $|\uptau|$ stand respectively for the center and the length of $\uptau$.
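In other words, $C_\uptau^\circ$ is the Fourier multiplier operator with symbol, written in polar coordinates,
\[
m_\uptau(\varrho\, {\rm e}^{i\vartheta})\coloneqq \upbeta\Big(\frac{\vartheta-c_\uptau}{|\uptau|/2}\Big),\qquad \varrho>0,
\]
so that $m_\uptau$ is homogeneous of degree zero, smooth away from the origin, supported in the closed cone over $\uptau$, and identically $1$ on the cone over the middle half of $\uptau$.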
This section is dedicated to the proofs of two related theorems concerning conical square functions. The first is a quantitative estimate for a square function associated with the smooth conical multipliers of a finite collection of intervals with bounded overlap given in Theorem~\ref{thm:smoothcones}, namely the estimates
\begin{align}
\label{e:sfsmooth}& \left\| \{C_\uptau^\circ f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim_p (\log \# \bm{\uptau} )^{\frac{1}2-\frac1p}\|f\|_p
\end{align}
for $2\leq p<4$,
as well as the restricted type analogue valid for all measurable sets $E$
\begin{align}
\label{e:sfsmoothrs}& \left\| \{C_\uptau^\circ(f\bm{1}_E)\} \right\|_{L^4(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim (\log \# \bm{\uptau} )^{\frac{1}4}|E|^{\frac14}\|f\|_\infty,
\end{align}
under the assumption of finite overlap
\begin{equation} \label{e:bo}
\Bigl\| \sum_{\uptau\in \bm{\uptau}} \cic{1}_{\uptau} \Bigr\|_\infty \lesssim 1.
\end{equation}
The second theorem concerns an estimate for the rough conical square function for a collection of finitely overlapping cones $\bm{\uptau}$.
\begin{theorem} \label{thm:wdcones} Let $\bm{\uptau}$ be a finite collection of intervals in $(0,2\uppi)$ with finite overlap as in \eqref{e:bo}. Then the square function estimate
\begin{align} \label{e:sfrough}
& \left\| \{C_\uptau f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim_p (\log \# \bm{\uptau} )^{1-\frac2p}(\log\log\#\bm{\uptau})^{\frac12-\frac1p}\|f\|_p.
\end{align}
holds for each $2\leq p<4.$
\end{theorem}
Theorem \ref{thm:smoothcones} is sharp, in terms of the $\log \# \bm{\uptau}$-dependence, for all $2\leq p<4$, and for $p=4$ up to the restricted type.
Theorem \ref{thm:wdcones} improves on \cite[Theorem 1]{CorGFA}, where the dependence on cardinality is unspecified.
Examples providing a lower bound of $(\log \# \bm{\uptau} )^{\frac12-\frac1p}\|f\|_p$ for the left hand side of \eqref{e:sfrough}, and showing the sharpness of Theorem \ref{thm:smoothcones}, are detailed in Section \ref{sec:cex}.
The remainder of the section is articulated as follows. In the upcoming Subsection~\ref{sec:proofA} we show Theorem \ref{thm:smoothcones}. The subsequent subsection is dedicated to the proof of Theorem \ref{thm:wdcones}.
\subsection{Proof of Theorem \ref{thm:smoothcones}}\label{sec:proofA} We are given a finite collection of intervals $\uptau\in \bm{\uptau}$ having bounded overlap as in \eqref{e:bo}. By finite splitting we may reduce to the case of the $\uptau\in \bm{\uptau}$ being pairwise disjoint; we treat this case throughout.
The first step in the proof of Theorem \ref{thm:smoothcones} is a radial decoupling. Let $\uppsi$ be a smooth radial function on $\mathbb{R}^2$ with
\[
\cic{1}_{[1,2]}(|\xi|)\leq \uppsi(\xi) \leq \cic{1}_{[2^{-1},2^{2}]}(|\xi|)
\]
and define the Littlewood-Paley projection
\[
S_kf(x)\coloneqq \int \uppsi(2^{-k}\xi)\widehat f(\xi) \, {\rm e}^{ix\cdot \xi} \, {\rm d} \xi, \qquad x\in \mathbb{R}^2.
\]
The following weighted Littlewood-Paley inequality is contained in \cite{BeHa}*{Proposition 4.1}.
\begin{proposition}[Bennett--Harrison, \cite{BeHa}]\label{prop:beha} Let $w$ be a non-negative locally integrable function. Then
\[
\int_{\mathbb{R}^2} | f|^2 w \lesssim \int_{\mathbb{R}^2} \sum_{k\in\mathbb Z} |S_k(f)|^2 {\mathrm M}^{[3]}w
\]
with implicit constant independent of $w,f$, where we recall that ${\mathrm M}^{[3]}$ denotes the three-fold iteration of the Hardy--Littlewood maximal function ${\mathrm M}$ with itself.
\end{proposition}
We may easily deduce the next lemma from the proposition.
\begin{lemma}\label{lem:wLP} For any $p\geq 2$ we have
\begin{equation}
\label{e:radialdec}
\left\| \{C_{\uptau}^\circ f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\bm{\uptau}})} \lesssim \Big\|\Big(\sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}} | C_\uptau^\circ S_k(f)|^2 \Big)^\frac12\Big\|_{p}.
\end{equation}
\end{lemma}
\begin{proof} The case $p=2$ is trivial so we assume $p>2$. Letting $r\coloneqq \frac p2>1$ there exists some $w\in L^{r'}(\mathbb{R}^2)$ with $\|w\|_{r'}=1$ such that
\[
\left\| \{C_{\uptau}^\circ f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\bm{\uptau}})}^2= \sum_{\uptau\in \bm{\uptau}} \int_{\mathbb{R}^2} |C_\uptau^\circ f |^2 w\lesssim \sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}}\int_{\mathbb{R}^2}|C_\uptau^\circ S_k(f)|^2 {\mathrm M}^{[3]}w
\]
and the lemma follows by H\"older's inequality and the boundedness of ${\mathrm M}^{[3]}$ on $L^{r'}(\mathbb{R}^2)$.
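Explicitly, the last step reads: with $r=\frac p2$,
\[
\sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}}\int_{\mathbb{R}^2}|C_\uptau^\circ S_k(f)|^2\, {\mathrm M}^{[3]}w\leq \Big\|\sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}}|C_\uptau^\circ S_k(f)|^2\Big\|_{r}\big\|{\mathrm M}^{[3]}w\big\|_{r'}\lesssim \Big\|\Big(\sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}}|C_\uptau^\circ S_k(f)|^2\Big)^{\frac12}\Big\|_{p}^2.
\]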
\end{proof}
The second and final step of the proof of Theorem \ref{thm:smoothcones} is the reduction of the operator appearing in the right hand side of \eqref{e:radialdec} to the model operator of Theorem \ref{thm:smoothconeintrinsic}.
In order to match the notation of \S\ref{sec:conetiles} we write $\{\upomega_s\}_{s\in S}$ for the collection of arcs in $\mathbb S^1$ corresponding to the collection of intervals $\bm{\uptau}$, namely for $\uptau\in\bm{\uptau}$ we implicitly define $s=s_\uptau$ by means of $v_s ^\perp/|v_s ^\perp|\coloneqq {\rm e}^{ic_\uptau}={(1,s)}/{|(1,s)|}$. We set $S\coloneqq\{s_\uptau:\, \uptau\in\bm{\uptau}\}$ and define the corresponding arcs in $\mathbb S^1$ as
\[
\upomega_{s_\uptau}\coloneqq\{{\rm e}^{i\theta}:\, \theta\in\uptau\}.
\]
Now the cone $C_\uptau$ is the same as the cone $C_s$ and $\#S=\#\bm{\uptau}$. Similarly we write $C^\circ _\uptau=C^\circ _{s_\uptau}$ so the cones can now be indexed by $s\in S$. Define $\ell_s$ such that $2^{-\ell_s} \leq |\upomega_s|\leq 2^{-\ell_s+1}$.
By finite splitting and rotational invariance there is no loss in generality in assuming that $S\subset[-1,1]$. Notice that the support of the multiplier of $C_{s}^\circ S_k$ is contained in the frequency sector $\Omega_{s,k}$ defined in \eqref{e:freqsconical}.
By standard procedures of time-frequency analysis, as for example in \cite[Section 6]{DDP}, the operator $C_{s}^\circ S_k$ can be recovered by appropriate averages of operators
\begin{equation}
\mathsf{C}_{s,k} f \coloneqq
\sum_{t \in {\mathbf{T}}_{s,k}} \langle f,\upphi_t\rangle \upphi_t
\end{equation}
where $\upphi_t\in \mathcal{A}_t^{8M_0}$ for all $t\in {\mathbf{T}}_{s,k}$ and ${\mathbf{T}}_{s,k}$ is defined in \eqref{eq:smoothconetiles}. Here $M_0=2^{50}$ is as chosen in \eqref{e:intwcdef}. Fixing $s,k$ for the moment we preliminarily observe that for each $\upnu\geq 1$ the collection $\mathcal R_{s,k}\coloneqq \mathcal R_{{\mathbf{T}}_{s,k}}= \{R_t:\,t\in{\mathbf{T}}_{s,k} \}$ can be partitioned into subcollections $ \{\mathcal R_{s,k,\upnu}^j: \, 1\leq j \leq 2^{8\upnu}\}$ with the property that
\[
R_1,R_2 \in \mathcal R_{s,k,\upnu}^j \implies 2^{2\upnu+4}R_1 \cap 2^{2\upnu+4} R_2 =\varnothing.
\]
We will also use below the Schwartz decay of $\upphi_t\in \mathcal A_{t}^{8M_0}$ in the form
\[
\sqrt{|R_t|} |\upphi_t | \lesssim \bm{1}_{R_t} + \sum_{\upnu\geq 0} 2^{-8M_0\upnu} \sum_{\substack{\uprho \in \mathcal{R}_{s,k} \\ \uprho \not \subset 2^{\upnu} R_t, \, \uprho \subset 2^{\upnu+1}R_t}} \bm{1}_\uprho .\]
Using the Schwartz decay of $\upphi_t$ twice, in particular to bound by an absolute constant the second factor obtained by Cauchy-Schwarz after the first step, we get
\[
\begin{split}
&\quad |\mathsf{C}_{s,k} f |^2 \lesssim \left(\sum_{t \in {\mathbf{T}}_{s,k}} | \langle f,\upphi_t\rangle|^2 \frac{|\upphi_t |}{\sqrt{|R_t|}} \right) \left(\sum_{t \in {\mathbf{T}}_{s,k}} \sqrt{|R_t|} |\upphi_t| \right)
\\
& \lesssim \sum_{t \in {\mathbf{T}}_{s,k}} | \langle f,\upphi_t\rangle|^2 \frac{ \cic{1}_{R_t} }{ |R_t| }+\sum_{\upnu \geq 0} 2^{-8M_0\upnu} \sum_{t \in {\mathbf{T}}_{s,k}} \sum_ {\substack{\uprho \in \mathcal{R}_{s,k} \\ \uprho \not \subset 2^{\upnu} R_t, \uprho \subset 2^{\upnu+1}R_t}} | \langle f,\upphi_t\rangle|^2\frac{ \bm{1}_{\uprho}}{|\uprho|}
\\
&\leq \sum_{t\in {\mathbf{T}}_{s,k}} | \langle f,\upphi_t\rangle|^2 \frac{ \cic{1}_{R_t} }{ |R_t| }+ \sum_{\upnu \geq 0} 2^{-8M_0\upnu} \sum_{j=1}^{2^{8\upnu}} \sum_{t\in {\mathbf{T}}_{s,k}} \sum_{\substack{\uprho \in \mathcal{R}_{s,k,\upnu}^j \\ \uprho \not \subset 2^{\upnu} R_t, \uprho \subset 2^{\upnu+1}R_t}} |\langle f,\upphi_t\rangle|^2 \frac{\cic{1}_\uprho}{|\uprho|}.
\end{split}
\]
Now for fixed $s,k,\upnu,j$ and $t\in {\mathbf{T}}_{s,k}$ observe that there is at most one $\uprho=\uprho_{s,k,\upnu}^j(t)\in\mathcal R_{s,k,\upnu} ^j$ such that $ \uprho \not\subset 2^{\upnu} R_t, \uprho \subset 2^{\upnu+1}R_t$. Thus the estimate above can be written in the form
\begin{equation}\label{eq:R+R}
\quad |\mathsf{C}_{s,k} f |^2 \lesssim \sum_{t\in {\mathbf{T}}_{s,k}} | \langle f,\upphi_t\rangle|^2 \frac{ \cic{1}_{R_t} }{ |R_t| }+ \sum_{\upnu \geq 0} 2^{-8M_0 \upnu}\sum_{j=1}^{2^{8\upnu}} \sum_{t\in {\mathbf{T}}_{s,k}} |\langle f,\upphi_{t}\rangle|^2 \frac{\cic{1}_{\uprho_{s,k,\upnu}^j(t)}}{|\uprho_{s,k,\upnu}^j(t)|}.
\end{equation}
Observe that if $t\in {\mathbf{T}}_{s,k}$
\[
\upphi_{t}\in\mathcal A_{t} ^{8M_0},\quad \uprho \in \mathcal R_{s,k},\quad \uprho \subset 2^{\upnu+1}R_t \implies 2^{-4M_0\upnu}|\langle f,\upphi_{t}\rangle|^2 \leq a_{t_{\uprho}}(f)
\]
where $t_\uprho=\uprho \times \Omega_{s,k} \in {\mathbf{T}}_{s,k} $ is the unique tile with spatial localization given by $\uprho$: this is because $2^{-4M_0\upnu}\upphi_t\in \mathcal A_{t_\uprho}^{M_0}$. We thus conclude that
\begin{equation}
\label{e:Cvkdec}
|\mathsf{C}_{s,k} f|^2 \lesssim \sum_{t\in {\mathbf{T}}_{s,k}} a_{t}(f) \frac{ \bm{1}_{R_t}}{|R_t|}.
\end{equation}
Comparing with the definition of $\Delta_{\mathbf{T}}$ given in \eqref{e:intrinsicsf} we may summarize the discussion in the lemma below.
\begin{lemma} \label{l:modelsumconical} Let $1<p<\infty$. Then
\[
\sup_{\|f\|_p=1} \Big\|\Big(\sum_{k\in\mathbb Z,\, \uptau\in \bm{\uptau}} | C_\uptau^\circ S_k(f)|^2 \Big)^\frac12\Big\|_{p} \lesssim \sup_{\|f\|_p=1} \big\|\Delta_{{\mathbf{T}}} (f) \big\|_{p}
\]
where
\[
{\mathbf{T}} \coloneqq \bigcup_{s\in S}\bigcup_{k\in{\mathbb Z}} {\mathbf{T}}_{s,k}
\]
and ${\mathbf{T}}_{s,k}$ is defined in \eqref{eq:smoothconetiles}.
\end{lemma}
The proof of the upper bound in Theorem \ref{thm:smoothcones} is then completed by juxtaposing the estimates of Lemmata \ref{lem:wLP} and \ref{l:modelsumconical} with Theorem \ref{thm:smoothconeintrinsic}. For the optimality of the estimate see \S\ref{sec:cexconical}.
\subsection{Proof of Theorem~\ref{thm:wdcones}} \label{ss:rts} The proof of Theorem~\ref{thm:wdcones} is necessarily more involved than its smooth counterpart Theorem~\ref{thm:smoothcones}. In particular we need to decompose each cone not only in the radial direction as before, but also in the directions perpendicular to the singular boundary of each cone. We describe this procedure below.
Consider a collection of intervals $\bm{\uptau}=\{\uptau\}$ as in the statement. By the same correspondence as in the proof of Theorem~\ref{thm:smoothcones} we pass to a family $\{\upomega_s\}_{s\in S}$ consisting of finitely overlapping arcs on $\mathbb S^1$ centered at $v_s ^\perp/|v_s ^\perp|$ and corresponding cones $C_s$. Note that the sectors $\{\Omega_{s,k}\}_{s\in S,k\in{\mathbb Z}}$, defined in \eqref{e:freqsconical}, form a finitely overlapping cover of $\cup_{s\in S}C_s$. We remember here that $v_s=(1,s)$, that the endpoints of the interval $\upomega_s$ are given by $(v_{s^-} ^\perp,v_{s^+} ^\perp)$, and that the positive direction is counterclockwise.
Now, for each fixed $s\in S$ the cover $\{\Omega_{s,k,m}\}_{(k,m)\in{\mathbb Z}^2}$ defined in \eqref{eq:whitney}, \eqref{eq:whitney0}, is a Whitney cover of $C_s$ in the product sense: for each $\Omega_{s,k,m}$ the distance from the origin is comparable to $2^k$ and the distance to the boundary is comparable to $2^{k-|m|}|\upomega_s|$.
The radial decomposition in $k$ will be taken care of by the Littlewood-Paley decomposition $\{S_k\}_{k\in{\mathbb Z}}$, defined as in the proof of Theorem~\ref{thm:smoothcones}. Now for fixed $s,k$ we consider a smooth partition of unity subordinated to the cover $\{\Omega_{s,k,m}\}_{m\in{\mathbb Z}}$. This can easily be achieved by choosing $\{ \upvarphi_{s,m}\}_{m<0}$ to be a one-sided (contained in $C_s$) Littlewood-Paley decomposition in the negative direction $v^-=v_{s^-}$, constant in the direction $(v^-)^\perp$, and similarly defining $ \upvarphi_{s,m}$ for $m>0$ with respect to the positive direction $v^+=v_{s^+}$. The central piece $\Omega_{s,k,0}$ corresponds to $ \upvarphi_{s,0}$, defined implicitly as
\[
\upvarphi_{s,0} =\cic{1}_{C_s}- \sum_{m\neq 0}\upvarphi_{s,m} .
\]
Now the desired partition of unity is $\pi_{s,k,m}(\xi)\coloneqq \cic{1}_{C_s}(\xi)\upvarphi_{s,m}(\xi)\uppsi_k(\xi)=\upvarphi_{s,m}(\xi)\uppsi_k(\xi)$, where $\uppsi_k\coloneqq \uppsi(2^{-k}\cdot)$ with the $\uppsi$ constructed in the proof of Theorem~\ref{thm:smoothcones}. Remember that $ S_k f \coloneqq (\uppsi_k\hat f)^\vee$ and let us define $ \Phi_{s,m}f \coloneqq (\upvarphi_{s,m} \hat f)^\vee$.
An important step in the proof is the following square function estimate in $L^p(\mathbb{R}^2)$, with $2\leq p<4$, that decouples the Whitney pieces in every cone $C_s$. It comes at a loss in $N$ which appears to be inevitable because of the directional nature of the problem.
\begin{lemma}\label{lem:decoupling} Let $\{C_s\}_{s\in S}$ be a family of frequency cones, given by a family of finitely overlapping arcs $\bm{\upomega}\coloneqq\{\upomega_s\}_{s\in S}$ as above. For $2\leq p <4$ there holds
\[
\big\|\{C_sf\}\big\|_{L^p(\mathbb{R}^2;\ell^2 _{\bm{\upomega}})}\lesssim \frac{1}{4-p}(\log\#S)^{\frac12-\frac{1}{p}} \|\{S_k \Phi_{s,m} f\}\|_{L^p(\mathbb{R}^2;\ell^2 _{\bm{\upomega}\times{\mathbb Z}\times{\mathbb Z}})}.
\]
\end{lemma}
\begin{proof} Observe that the desired estimate is trivial for $p=2$ so let us fix some $p\in (2,4)$. There exists some $g\in L^{q}(\mathbb{R}^2)$ with $\|g\|_q=1$, where $q=(p/2)'=p/(p-2)$, such that
\[
A^2\coloneqq \big\|\{C_sf\}\big\|_{L^p(\mathbb{R}^2;\ell^2 _{\bm{\upomega}})} ^2 = \int_{\mathbb{R}^2}\sum_{s\in S}|C_s f|^2 g
\]
and so by Proposition~\ref{prop:beha} we get
\[
A^2\lesssim \sum_{k\in{\mathbb Z}} \sum_{s\in S} \int_{\mathbb{R}^2}|C_s S_k f|^2 {\mathrm M}^{[3]}g
\]
where we recall that ${\mathrm M}^{[3]}$ denotes three iterations of the Hardy-Littlewood maximal function ${\mathrm M}$. Fixing $s$ and $k$ for a moment we use Proposition~\ref{prop:beha} in the directions $v_{s^-},v_s$ and $v_{s^+}$ to further estimate
\[
\int_{\mathbb{R}^2}|C_s S_k f|^2 {\mathrm M}^{[3]}g\lesssim \sum_{m\in{\mathbb Z}}\sum_{\varepsilon\in\{-,0,+\}}\int_{\mathbb{R}^2}| S_k\Phi_{s,m} f|^2 {\mathrm M}_{v_{s^\varepsilon} } ^{[3]}{\mathrm M}^{[3]}g
\]
where we adopted the convention $s^0\coloneqq s$ for brevity, and ${\mathrm M}_v$ is given by \eqref{eq:dirMv}. Remember also that $\Phi_{s,m}$ corresponds to directions $s^+$ for $m>0$, to directions $s^-$ for $m<0$, and to directions $s^0=s$ for $m=0$. Now for any $v\in \mathbb S^1$, $r>1$, and non-negative locally integrable $G$ we have that
\[
{\mathrm M}_v ^{[3]} G \lesssim (r')^2 [{\mathrm M}_v G^r]^{\frac1r};
\]
see for example \cite{CP}. Thus ${\mathrm M}_{v_{s^\varepsilon} } ^{[3]}{\mathrm M}^{[3]}g \lesssim (r')^2 [{\mathrm M}_{V^*} [{\mathrm M}^{[3]} g]^r]^{\frac1r}$, where ${\mathrm M}_{V^*}f\coloneqq \sup_{v\in V^*}{\mathrm M}_v f$ and $V^*\coloneqq \{(1,s):\, s\in S^*\}$ with $S^*$ as in \eqref{eq:s*}.
It is known \cite{KatzDuke} that ${\mathrm M}_{V^*}$ maps $L^p(\mathbb{R}^2)$ to $L^p(\mathbb{R}^2)$ with a bound $(\log\#V^*)^{\frac1p}$ for $p>2$. As $p<4$ there exists a choice of $1<r<\frac{p}{2(p-2)}$ so that $\frac{p}{r(p-2)}>2$ and the theorem of Katz from \cite{KatzDuke} applies. Using this fact together with H\"older's inequality proves the lemma.
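For the reader's convenience we spell out the bookkeeping (recall $q=(p/2)'$ and normalize $\|g\|_{q}=1$; moreover $\log\#V^*\sim\log\#S$): by H\"older's inequality and the Katz bound on $L^{q/r}(\mathbb{R}^2)$,
\[
A^2\lesssim (r')^2\Big\|\sum_{k\in{\mathbb Z},\,s\in S,\,m\in{\mathbb Z}}|S_k\Phi_{s,m}f|^2\Big\|_{\frac p2}\Big\|\big[{\mathrm M}_{V^*}\big[({\mathrm M}^{[3]}g)^r\big]\big]^{\frac1r}\Big\|_{q}\lesssim (r')^2(\log\#S)^{\frac1q}\big\|\{S_k \Phi_{s,m} f\}\big\|_{L^p(\mathbb{R}^2;\ell^2)}^2,
\]
and since $\frac{1}{2q}=\frac12-\frac1p$ this produces the logarithmic exponent in the statement.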
\end{proof}
The proof of Theorem~\ref{thm:wdcones} can now be completed as follows. For each $(s,k,m)\in S\times{\mathbb Z}\times {\mathbb Z}$ the operator $S_k \Phi_{s,m}$ is a smooth frequency projection adapted to the rectangular box $\Omega_{s,k,m}$. Following the same procedure that led to \eqref{e:Cvkdec} in the proof of Theorem~\ref{thm:smoothcones} we can approximate each piece $S_k \Phi_{s,m} f$ by an operator of the form
\[
\mathsf{C}_{s^\varepsilon,k,m} f \coloneqq \sum_{t \in {\mathbf{T}}_{s^\varepsilon,k,m}} \langle f,\upphi_t\rangle \upphi_t,\qquad |\mathsf{C}_{s^\varepsilon,k,m} f|^2\lesssim \sum_{t \in {\mathbf{T}}_{s^\varepsilon,k,m}} a_t(f)\frac{\cic{1}_{R_t}}{|R_t|},
\]
where $s^\varepsilon$ follows the sign of $m$ and coincides with $s$ if $m=0$. The collections of tiles ${\mathbf{T}}_{s^\varepsilon,k,m}$ are the ones given in \eqref{eq:tilesskm}. Now Lemma~\ref{lem:decoupling} and Theorem~\ref{t:isf} are combined to complete the proof of Theorem~\ref{thm:wdcones}.
\section{Directional Rubio de Francia square functions}\label{sec:rdf}In his seminal paper \cite{RdF}, Rubio de Francia proved a one-sided Littlewood-Paley inequality for arbitrary intervals on the line. This estimate was later extended by Journ\'e, \cite{Journe}, to the case of rectangles ($n$-dimensional intervals) in $\mathbb{R}^n$; a proof more akin to the arguments of the present paper appears in \cite{LR}. The aim of this section is to present a generalization of the one-sided Littlewood-Paley inequality to the case of rectangles in $\mathbb{R}^2$ with sides parallel to a given set of directions. The set of directions must necessarily be finite, because of Kakeya-type counterexamples.
As in the case of cones of \S\ref{sec:cone} we will present two versions, one associated with smooth frequency projections and one with rough. To set things up let $S$ be a finite set of slopes and $V$ be the corresponding directions. We consider a family of rotated rectangles $\mathcal F$ as in \S\ref{sec:intrinsicrdf} where $\mathcal F=\cup_{s\in S}\mathcal F_s$. For each $s\in S$ a rectangle $F\in\mathcal F_s$ is a rotation by $s$ of an axes-parallel rectangle, so that the sides of $F$ are parallel to $(v,v^\perp)$ with $v=(1,s)$. We will write $F=\mathrm{rot}_s(I_F\times J_F)+y_F$ for some $y_F\in \mathbb{R}^2$ in order to identify the axes-parallel rectangle $I_F\times J_F$ producing $F$ by an $s$-rotation; this writing assumes that $I_F\times J_F$ is centered at the origin.
Now for each $F\in\mathcal F$ we consider the rough frequency projection
\[
P_Ff(x)\coloneqq \int_{\mathbb{R}^2} \hat f(\xi)\cic{1}_F(\xi) e^{ix\cdot\xi}\,{\rm d} \xi,\qquad x\in \mathbb{R}^2,
\]
and its smooth analogue
\[
P_F ^\circ f(x)\coloneqq \int_{\mathbb{R}^2} \hat f(\xi) \upgamma_F(\xi)e^{ix\cdot\xi}\, {\rm d}\xi,\quad x\in\mathbb{R}^2,
\]
where $\upgamma_F$ is a smooth function on $\mathbb{R}^2$, supported in $F$, and identically $1$ on $\mathrm{rot}_s(\frac12 I_F \times \frac12 J_F)+y_F$.
We first state the smooth square function estimate.
\begin{theorem}\label{thm:rdfsmooth} Let $\mathcal F$ be a collection of rectangles in $\mathbb{R}^2$ with sides parallel to $(v,v^\perp)$ for some $v$ in a finite set of directions $V$. Assume that $\mathcal F$ has finite overlap. Then
\begin{align}
& \left\| \{P_F ^\circ f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\mathcal F})} \lesssim_p (\log \#V )^{\frac{1}2-\frac1p}(\log\log \#V )^{\frac{1}2-\frac1p}\|f\|_p
\end{align}
for $2\leq p<4$, as well as the restricted type analogue valid for all measurable sets $E$
\begin{align}
& \left\| \{P_F ^\circ(f\bm{1}_E)\} \right\|_{L^4(\mathbb{R}^2; \ell^2_{\mathcal F})} \lesssim (\log \#V )^{\frac{1}4}(\log\log \#V )^{\frac14}|E|^{\frac14}\|f\|_\infty.
\end{align}
The dependence on $\#V$ in the estimates above is best possible up to the doubly logarithmic term.
\end{theorem}
\begin{remark}\label{rmrk:1paramrdf} We record a small improvement of the estimates above in some special cases. Suppose that for fixed $s\in S$ all the rectangles $F\in\mathcal F_s$ have one side-length fixed, or that they have fixed eccentricity. In both these cases the collections of spatial components of the tiles needed to discretize these operators, $\mathcal R_{{\mathbf{T}}_s ^\mathcal F}\coloneqq\{R_t:\, t\in{\mathbf{T}}_s ^{\mathcal F}\}$, with ${\mathbf{T}}_s$ as in \eqref{eq:rdftiles}, give rise to maximal operators that are of weak-type $(1,1)$. Then Remark~\ref{rmrk:1paramrecs} shows that the estimates of Theorem~\ref{thm:rdfsmooth} hold without the doubly logarithmic terms, and as shown in \S\ref{sec:rdfcex} this is best possible.
\end{remark}
The rough version of this Rubio de Francia type theorem is slightly worse in terms of the dependence on the number of directions. The reason is that, as in the case of conical projections, passing from rough to smooth projections in the directional setting incurs a loss of logarithmic terms, essentially originating in the corresponding maximal function bound.
\begin{theorem}\label{thm:rdfrough} Let $\mathcal F$ be a collection of rectangles in $\mathbb{R}^2$ with sides parallel to $(v,v^\perp)$ for some $v$ in a finite set of directions $V$. Assume that $\mathcal F$ has finite overlap. Then the following square function estimate holds for $2\leq p<4$
\begin{align}
& \left\| \{P_F f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_{\mathcal F})} \lesssim_p (\log \#V )^{\frac{3}{2}-\frac3p} (\log\log\#V)^{\frac12-\frac1p}\|f\|_p .
\end{align}
\end{theorem}
The proofs of these theorems follow the by now familiar path of introducing local Littlewood-Paley decompositions on each multiplier, approximating with time-frequency analysis operators, establishing a directional Carleson condition on the wave-packet coefficients and finally applying Theorem~\ref{thm:carleson}. We will very briefly comment on the proofs below.
\begin{proof}[Proof of Theorem~\ref{thm:rdfrough} and Theorem~\ref{thm:rdfsmooth}] We first sketch the proof of Theorem~\ref{thm:rdfrough} which is slightly more involved. The first step here is a decoupling lemma which is completely analogous to Lemma~\ref{lem:decoupling}, with the difference that now we need to use two directional Littlewood-Paley decompositions, whereas in the case of cones one sufficed. This explains the extra logarithmic term of the statement.
Remember that $\mathcal F=\cup_{s\in S}\mathcal F_s$ where $v=(1,s)$ for $s\in S$; here $s$ gives the directions $(v,v^\perp)$ of the rectangles in $\mathcal F_s$. Using the finitely overlapping Whitney decomposition of \S\ref{sec:intrinsicrdf} we have for each $F\in\mathcal F_s$ a collection of tiles
\[
{\mathbf{T}}_s(F)=\bigcup_{(k_1,k_2)\in{\mathbb Z}^2}{\mathbf{T}}_{s,k_1,k_2}(F)
\]
as in \eqref{eq:rdftiles}. Let us for a moment fix $s$ and $F\in\mathcal F_s$. The frequency components of the tiles in ${\mathbf{T}}_s(F)$ form a two-parameter Whitney decomposition of $F$, so let $\{\phi_{F,k_1,k_2}\}_{(k_1,k_2)\in{\mathbb Z}^2}$ be a smooth partition of unity subordinated to this cover and denote by $\Phi_{F,k_1,k_2}$ the Fourier multiplier with symbol $\phi_{F,k_1,k_2}$.
The promised analogue of Lemma~\ref{lem:decoupling} is the following estimate: for $2\leq p<4$ there holds
\begin{equation}\label{eq:2paramdecoupling}
\big\|\{P_F f\}\big\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})}\lesssim \frac{1}{(4-p)^2}(\log\#V)^{1-\frac{2}{p}} \|\{\Phi_{F,k_1,k_2} f\}\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F\times{\mathbb Z}\times{\mathbb Z}})}.
\end{equation}
The proof of this estimate is a two-parameter repetition of the proof of Lemma~\ref{lem:decoupling}, where one applies Proposition~\ref{prop:beha} once in the direction of $v$ and once in the direction of $v^\perp$. Using the familiar scheme we can approximate each $\Phi_{F,k_1,k_2}f$ by time-frequency analysis operators
\[
\mathsf{P}_{F,k_1,k_2}f\coloneqq \sum_{t\in {\mathbf{T}}_{s,k_1,k_2}(F)}\langle f,\upphi_t\rangle \upphi_t, \qquad|\mathsf{P}_{F,k_1,k_2}f|^2\lesssim \sum_{t\in {\mathbf{T}}_{s,k_1,k_2}(F)} a_t(f)\frac{\cic{1}_{R_t}}{|R_t|}
\]
and by \eqref{eq:2paramdecoupling} the proof of Theorem~\ref{thm:rdfrough} follows by corresponding bounds for the intrinsic square function of Theorem~\ref{thm:intrrdf}, defined with respect to the tiles ${\mathbf{T}}^{\mathcal F}$ given by \eqref{eq:rdftiles}.
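For the exponent bookkeeping note that, granting that the intrinsic square function of Theorem~\ref{thm:intrrdf} contributes, as in the smooth case, a factor of $(\log \#V)^{\frac12-\frac1p}(\log\log \#V)^{\frac12-\frac1p}$, the combination with \eqref{eq:2paramdecoupling} gives
\[
(\log\#V)^{1-\frac2p}\cdot(\log\#V)^{\frac12-\frac1p}(\log\log\#V)^{\frac12-\frac1p}=(\log\#V)^{\frac32-\frac3p}(\log\log\#V)^{\frac12-\frac1p},
\]
which matches the bound claimed in Theorem~\ref{thm:rdfrough}.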
For Theorem~\ref{thm:rdfsmooth} things are a bit simpler as the decoupling step of \eqref{eq:2paramdecoupling} is not needed. Apart from that one needs to consider for each $F$ a new set of tiles which is very easy to define: If $F\in\mathcal F_s$ with $F=\mathrm{rot}_s(I_F\times J_F)+y_F$
\[
{\mathbf{T}}'(F)\coloneqq \big\{t=R_t\times F: \, R_t\in\mathcal D^2 _{s,\ell_J,\ell_I}\big\}
\]
and then ${\mathbf{T}}'\coloneqq \cup_{F\in\mathcal F}{\mathbf{T}}' (F)$. One can recover $P_F ^\circ$ by operators of the form
\[
\mathsf{P}^\circ _F f\coloneqq \sum_{t\in{\mathbf{T}}'(F)} \langle f,\upphi_t\rangle \upphi_t,\qquad |\mathsf{P}^\circ _F f|^2\lesssim \sum_{t\in{\mathbf{T}}'(F)} a_t(f)\frac{\cic{1}_{R_t}}{|R_t|}
\]
as before. Using the orthogonality estimates of \S\ref{sec:orthocone} in Theorem~\ref{thm:carleson} yields the upper bound in Theorem~\ref{thm:rdfsmooth}. The optimality of the estimates in the statement of Theorem~\ref{thm:rdfsmooth} is discussed in \S\ref{sec:rdfcex}.
\end{proof}
\section{The multiplier problem for the polygon}\label{sec:polygon} Let $\mathcal P=\mathcal P_{N}$ be a regular $N$-gon and $T_{\mathcal{P}_N}$ be the corresponding Fourier restriction operator on $\mathcal P$
\[
T_{\mathcal P}f (x) \coloneqq \int_{\mathbb{R}^2} \widehat f(\xi) \cic{1}_{\mathcal P}(\xi) {\rm e}^{ix\cdot \xi} \, {\rm d} \xi,\qquad x\in\mathbb{R}^2.
\]
In this section we prove Theorem~\ref{thm:polygon}, namely we will prove the estimate
\[
\left\|T_{\mathcal P_N}\right\|_{L^p(\mathbb{R}^2)} \lesssim (\log N)^{4\left|\frac12-\frac1p\right|} , \qquad \frac43 <p <4.
\]
The idea is to reduce the multiplier problem for the polygon to the directional square function estimates of Theorem~\ref{thm:rdfsmooth} and combine those with vector-valued inequalities for directional averages and directional Hilbert transforms.
We introduce some notation. The large integer $N$ is fixed throughout and left implicit in the notation. By scaling, it will be enough to consider a regular polygon $\mathcal P$ with the following geometric properties: first, $\mathcal P$ has vertices
\[
\{v_j ={\rm e}^{i\vartheta_j}:\, 1\leq j \leq N+1 \},\qquad \vartheta_j\coloneqq \frac{2\uppi (j-1)}{N},
\]
on the unit circle $\mathbb S^1$, with $\vartheta_1=0$, $\vartheta_{N+1}=2\uppi$ identified with $0$, and oriented counterclockwise so that $ \vartheta_{j+1}- \vartheta_{j}>0$. The associated Fourier restriction operator is then defined by
\[
T_{\mathcal P}f \coloneqq (\cic{1}_{\mathcal P} \hat f)^\vee.
\]
The proof of the estimate of Theorem \ref{thm:polygon} for $T_{\mathcal P}$ occupies the remainder of this section: by self-duality of the estimate it will suffice to consider the range $2\leq p<4$.
\subsection{A preliminary decomposition} Let $N$ be a large positive integer and take $\upkappa$ such that $2^{\upkappa-1}< N\leq 2^{\upkappa}$. For each $-2\upkappa\leq k\leq 0$ consider a smooth radial multiplier $m_k$ which is supported on the annulus
\[
A_k\coloneqq \Big\{\xi\in\mathbb{R}^2:\, 1-\frac{2^{-k-1}}{2^{2\upkappa} }< |\xi| <1- \frac{2^{-k-5}}{2^{2\upkappa}} \Big\}
\]
and is identically $1$ on the smaller annulus
\[
a_k\coloneqq \Big\{\xi\in\mathbb{R}^2:\, 1-\frac{2^{-k-2}}{2^{2\upkappa} }< |\xi| <1- \frac{2^{-k-4}}{2^{2\upkappa}} \Big\}.
\]
Now consider the corresponding radial multiplier operators $T_k$
\[
T_k f \coloneqq (m_k \hat f)^\vee, \qquad m_{\upkappa} \coloneqq \sum_{k=-2\upkappa} ^0 m_k.
\]
We note that $m_\upkappa$ is supported in the annulus
\[
\Big\{\xi\in\mathbb{R}^2:\, \frac{1}{2}<|\xi|<1-\frac{2^{-5}}{2^{2\upkappa}}\Big\}.
\]
With this in mind let us consider radial functions $m_0,m_{\mathcal P}\in\mathcal S(\mathbb{R}^2)$ with $0\leq m_0,m_{\mathcal P}\leq 1$ such that
\begin{equation}
\label{e:pfpoly10}
\Big(m_0 +m_\upkappa +m_{\mathcal P}\Big)\cic{1}_{\mathcal P}=\cic{1}_{\mathcal P},
\end{equation}
with the additional requirement that
\begin{equation}
\label{e:pfpoly11}
\mathrm{supp}(m_{\mathcal P})\subset A_{\mathcal P}\coloneqq \big\{\xi\in\mathbb{R}^2: \, 1- 2^{-2\upkappa-3} \leq|\xi| \leq1 + 2^{-2\upkappa-3} \big\}.
\end{equation}
Defining
\[
\begin{split}
&\widehat{T_0 f} \coloneqq \widehat f m_{0}, \qquad \widehat{T_\upkappa f} \coloneqq \widehat f m_{\upkappa} , \qquad
\widehat{O_{\mathcal P}f} \coloneqq \widehat f m_{\mathcal P} \cic{1}_{\mathcal P} ,
\end{split}
\]
identity \eqref{e:pfpoly10} implies that $T_{\mathcal P}= T_0 + T_\upkappa+ O_{\mathcal P}$. Observing that $T_0$ is bounded on $L^p(\mathbb{R}^2)$ for all $1<p<\infty$ with bounds $O_p(1)$ we have
\begin{equation} \label{e:pfpoly60}
\|T_{\mathcal P}\|_{L^p(\mathbb{R}^2)} \lesssim_p 1 + \|T_{\upkappa} \|_{L^p(\mathbb{R}^2)}+\|O_{\mathcal P} \|_{L^p(\mathbb{R}^2)},\qquad 1<p<\infty.
\end{equation}
\subsection{Estimating $T_\upkappa$} \label{sec:inner}
We aim for the estimate
\begin{equation}\label{eq:innerannuli}
\|T_{\upkappa} f\|_{p} \lesssim \upkappa^{4(\frac12-{\frac1p})} \|f\|_p, \qquad 2\leq p<4.
\end{equation}
The case $p=2$ is obvious whence it suffices to prove the restricted type version at the endpoint $p=4$
\begin{equation}
\label{e:pfpoly21}
\|T_{\upkappa} (f\cic{1}_E)\|_{4} \lesssim \upkappa|E|^{\frac14} \|f\|_\infty.
\end{equation}
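Indeed, restricted type interpolation of \eqref{e:pfpoly21} with the trivial $L^2$ bound yields \eqref{eq:innerannuli}: writing $\frac1p=\frac{1-\theta}{2}+\frac{\theta}{4}$ we get
\[
\|T_{\upkappa}\|_{L^p(\mathbb{R}^2)}\lesssim \upkappa^{\theta},\qquad \theta=4\Big(\frac{1}{2}-\frac{1}{p}\Big), \qquad 2\leq p<4.
\]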
Now we have that for any $g$
\[
|T_{\upkappa}g| =\Big|\sum_{k=-2\upkappa} ^0 T_k g\Big|\lesssim \Big(\sum_{k=-2\upkappa} ^0 |T_k g|^4\Big)^{\frac14}\upkappa^{\frac34}
\]
and thus
\begin{equation}\label{eq:4/3}
\|T_{\upkappa}g\|_4 \lesssim\upkappa^{\frac34} \Big(\sum_{k=-2\upkappa} ^0 \|T_k g\|_4 ^4\Big)^{\frac14}.
\end{equation}
Let $\{\upomega_{j}:\, j\in J\}$ be the collection of intervals on $\mathbb S^1$ centered at the vertices $v_j$ and of length $2^{-\upkappa}$. Note that these intervals have finite overlap and their centers $v_j$ form a $\sim 1/N$-net on $\mathbb S^1$. Now let $\{\upbeta_j:\, j\in J\}$ be a smooth partition of unity subordinated to the finitely overlapping open cover $\{\upomega_{j}:\, j\in J\}$ so that each $\upbeta_j$ is supported in $\upomega_j$. We can decompose each $T_k$ as
\[
(T_k f)^{\wedge}(\xi)=\sum_{j\in J} m_k(|\xi|)\upbeta_j \Big(\frac{\xi}{|\xi|}\Big)\hat f(\xi)\eqqcolon \sum_{j\in J} m_{j,k}(\xi)\hat f(\xi)\eqqcolon\sum_{j\in J} (T_{j,k} f)^{\wedge}(\xi), \qquad \xi\in \mathbb{R}^2.
\]
For $j\in J$ and $-2\upkappa\leq k\leq 0 $ we define the conical sectors
\begin{equation}
\label{e:amaranth1}
\Omega_{j,k}\coloneqq \left\{\xi \in \mathbb{R}^2:\, \xi\in A_k,\: \xi/|\xi|\in \upomega_j\right\}
\end{equation}
and note that each one of the multipliers $m_{j,k}$ is supported in $\Omega_{j,k}$. Each $\Omega_{j,k}$ is an annular sector around the circle of radius $1-2^{-k}/2^{2\upkappa}$ of width $\sim 2^{-k}/2^{2\upkappa}$, where $-2\upkappa \leq k \leq 0$. It is a known observation, usually attributed to C\'ordoba, \cite{CorPoly}*{Theorem 2} or C. Fefferman, \cite{Feff}, that for such parameters we have
\begin{equation}\label{eq:cordobaoverlap}
\sum_{j,j'\in J} \cic{1}_{\Omega_{j,k}+\Omega_{j',k} }\lesssim 1.
\end{equation}
This pointwise inequality and Plancherel's theorem allows us to decouple the pieces $T_{j,k}$ in $L^4$: for each fixed $k$ as above we have
\begin{equation}\label{eq:cordecoupling}
\|T_kf\|_4 \lesssim \Big\| \big(\sum_{j\in J}|T_{j,k}f|^2\big)^\frac{1}{2}\Big\|_4;
\end{equation}
see also the proof of Lemma~\ref{l:Tj} below for a vector-valued version of this estimate. Combining the last estimate with \eqref{eq:4/3} and dominating the $\ell^2$-norm by the $\ell^1$-norm yields
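For the single scale estimate \eqref{eq:cordecoupling} the standard argument runs as follows: the Fourier support of the product $T_{j,k}f\, T_{j',k}f$ is contained in $\Omega_{j,k}+\Omega_{j',k}$, so Plancherel and the finite overlap \eqref{eq:cordobaoverlap} give
\[
\|T_kf\|_4^4=\Big\|\sum_{j,j'\in J}T_{j,k}f\, T_{j',k}f\Big\|_{2}^2\lesssim \sum_{j,j'\in J}\big\|T_{j,k}f\, T_{j',k}f\big\|_2^2=\int_{\mathbb{R}^2}\Big(\sum_{j\in J}|T_{j,k}f|^2\Big)^2,
\]
which is \eqref{eq:cordecoupling} raised to the fourth power.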
\[
\begin{split}
&\|T_{\upkappa}f\|_4 \lesssim \upkappa^{\frac34}\Bigg( \int_{\mathbb{R}^2}\sum_{k=-2\upkappa} ^0 \big(\sum_{j\in J}|T_{j,k}f|^2\big)^{2} \Bigg)^{\frac14}
\\
&\qquad\qquad \leq \upkappa^{\frac34}\Bigg( \int_{\mathbb{R}^2}\bigg[\sum_{k=-2\upkappa} ^0 \sum_{j\in J}|T_{j,k}f|^2 \bigg]^{2} \Bigg)^{\frac14}\eqqcolon \upkappa^{\frac34} \| \Delta_{J,\upkappa}f\|_4
\end{split}
\]
with
\[
\Delta_{J,\upkappa}f\coloneqq \Big(\sum_{k=-2\upkappa} ^0 \sum_{j\in J}|T_{j,k}f|^2\Big)^{\frac12}.
\]
But now note that $\{T_{j,k}\}_{j,k}$ is a finitely overlapping family of smooth frequency projections on a family of rectangles in at most $\sim N$ directions. Furthermore all these rectangles have one side of fixed length since $|\upomega_j|=2^{-\upkappa}$ for all $j\in J$. So Theorem~\ref{thm:rdfsmooth} with the improvement of Remark~\ref{rmrk:1paramrdf} applies to yield
\begin{equation}\label{eq:sqfnctpol}
\|\Delta_{J,\upkappa}(f\cic{1}_E)\|_4 \lesssim (\log N)^{\frac14}\|f\|_\infty |E|^\frac{1}{4}\simeq \upkappa^{\frac14}\|f\|_\infty |E|^{\frac14}.
\end{equation}
The last two displays establish \eqref{e:pfpoly21} and thus \eqref{eq:innerannuli}.
\begin{remark} \label{rem:compcor}The term $T_{\upkappa}$ is also present in the argument of \cite{CorPoly}. Therein, an upper estimate of order $O(\upkappa^{\frac54})$ for $p$ near $4$ is obtained, by using the triangle inequality and the bound $\sup\,\{\|T_{k}\|_{L^4(\mathbb{R}^2)}:\, {-2\upkappa\leq k\leq 0}\}\sim \upkappa^{\frac14}$ for the smooth restriction to a single annulus.
\end{remark}
\subsection{Estimating $O_{\mathcal P}$}\label{sec:OP} In this subsection we will prove the estimate
\begin{equation}\label{eq:OP}
\|O_{\mathcal P} f \|_p\lesssim \upkappa^{4(\frac12-\frac1p)}\|f\|_p.
\end{equation}
Let $\Phi$ be a smooth radial function with support in the annular region $\{\xi\in\mathbb{R}^2:\, 1-c2^{-2\upkappa}<|\xi|< 1+c2^{-2\upkappa}\}$, where $c$ is a fixed small constant, and satisfying $0\leq \Phi\leq 1$. Let $\{\upbeta_j:\, j\in J\}$ be a partition of unity on $\mathbb S^1$ relative to intervals $\upomega_j$ as in \S\ref{sec:inner}.
Define the Fourier multiplier operators on $\mathbb{R}^2$
\begin{equation}
\label{e:pfpoly50}
\widehat{T_j f} (\xi) \coloneqq \Phi(\xi) \upbeta_j\left( \frac{\xi}{|\xi|} \right)\hat f(\xi),\qquad \xi \in \mathbb{R}^2.
\end{equation}
The operators $T_j$ satisfy a square function estimate
\begin{equation}
\label{e:pfpoly40}
\begin{split}
&\left\| \{ T_j f\} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)} \lesssim \upkappa^{\frac12-\frac1p} \|f\|_p , \qquad 2\leq p<4,\\
&\left\| \{ T_j (f\bm{1}_E)\} \right\|_{L^4(\mathbb{R}^2; \ell^2_J)} \lesssim \upkappa^{\frac14 }|E|^{\frac14}\|f\|_\infty,
\end{split}
\end{equation}
which follows in the same way as \eqref{eq:sqfnctpol}, by using Theorem~\ref{thm:rdfsmooth} with the improvement of Remark~\ref{rmrk:1paramrdf}. They also obey a vector-valued estimate
\begin{equation}
\label{e:pfpoly41}
\begin{split}
&\left\| \{ T_j f_j\} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)} \lesssim \upkappa^{\frac12-\frac1p} \left\| \{ f_j\} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)} , \qquad 2\leq p<4,
\\
&\left\| \{ T_j (f_j\bm{1}_F)\} \right\|_{L^4(\mathbb{R}^2; \ell^2_J)} \lesssim \upkappa^{\frac14 }|F|^{\frac14}\left\| \{ f_j\} \right\|_{L^\infty(\mathbb{R}^2; \ell^2_J)}.
\end{split}
\end{equation}
These estimates are easy to prove. Indeed note that it suffices to prove the endpoint restricted estimate at $p=4$. Using the Fefferman-Stein inequality for fixed $j\in J$ we can estimate for each function $g$ with $\|g\|_2=1$
\[
\begin{split}
& \int_{\mathbb{R}^2} \sum_{j\in J}|T_j(f_j\cic{1}_F)|^2 g \lesssim \sum_{j\in J}\int_{\mathbb{R}^2} |f_j\cic{1}_F|^2 {\mathrm M}_{j} g \leq \left\| \{ f_j\} \right\|_{L^\infty(\mathbb{R}^2; \ell^2_J)} ^2 \int_F \sup_{j\in J}{\mathrm M}_j g
\\
&\qquad\qquad\lesssim |F|^{\frac 12} \left\| \{ f_j\} \right\|_{L^\infty(\mathbb{R}^2; \ell^2_J)} ^2 \big\|\sup_{j\in J}{\mathrm M}_j g \big\|_{L^{2,\infty}(\mathbb{R}^2)},
\end{split}
\]
where ${\mathrm M}_j$ is the Hardy-Littlewood maximal function with respect to the collection of parallelograms in $\mathcal D^2 _{s_j,-2\upkappa,-\upkappa}$ with $s_j$ defined through $(-s_j,1)\coloneqq v_j$. Now $\sup_{j\in J}{\mathrm M}_j$ is the maximal function associated with the set of directions $\{v_j:\,j\in J\}$, and the number of directions involved in its definition is comparable to $N\sim 2^{\upkappa}$. Then the maximal theorem of Katz from \cite{KatzDuke} applies to give the estimate
\[
\big\|\sup_{j\in J}{\mathrm M}_j g\big \|_{L^{2,\infty}(\mathbb{R}^2)}\lesssim \upkappa ^{\frac12}.
\]
This proves the second of the estimates \eqref{e:pfpoly41} and thus both of them by interpolation.
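For the record, the interpolation bookkeeping behind the exponent in \eqref{e:pfpoly41} is elementary: interpolating the trivial $L^2$ bound, which carries no loss in $\upkappa$, with the restricted $L^4$ estimate gives

```latex
\[
\frac1p=\frac{1-\theta}{2}+\frac{\theta}{4}
\;\Longleftrightarrow\;
\theta=2-\frac4p,
\qquad
\big(\upkappa^{\frac14}\big)^{\theta}
=\upkappa^{\frac14\left(2-\frac4p\right)}
=\upkappa^{\frac12-\frac1p},
\]
```

which matches the exponent claimed for $2\leq p<4$.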
In the estimate for $O_{\mathcal P}$ we will also need the following decoupling result.
\begin{lemma} \label{l:Tj}
Let $2\leq p<4$. Then
\[
\Big\| \sum_{j} T_j f_j \Big\|_{p} \lesssim \upkappa^{\frac12-\frac1p}\big\| \{ f_j\} \big\|_{L^p(\mathbb{R}^2; \ell^2_J)}.
\]
\end{lemma}
\begin{proof} Note that the case $p=2$ of the conclusion is trivial due to the finite overlap of the supports of the multipliers of the operators $T_j$. Thus by vector-valued restricted type interpolation of the operator
\[
\{f_j\} \mapsto O(\{f_j\})\coloneqq \sum_{j\in J} T_j f_j
\]
it suffices to prove a restricted type $L^{4,1}\to L^{4}$ estimate:
\begin{equation}
\label{e:pfpoly31}
\left\| O(\{f_j\})\right\|_{4} \lesssim \upkappa^{\frac14}|E|^{\frac14}
\end{equation}
for functions with $\|\{f_j\}\|_{\ell^2}\leq \bm{1}_E$.
To do so, note that the finite overlap of the supports of $\widehat{T_j f_j}*\widehat{T_k f_k}$ over $j,k$, as in \eqref{eq:cordobaoverlap}, entails
\[
\left\| O(\{f_j\}) \right\|_{4} \lesssim \left\| \{ T_j f_j \} \right\|_{L^4(\mathbb{R}^2; \ell^2_J)}
\]
and the restricted type estimate \eqref{e:pfpoly31} follows from \eqref{e:pfpoly41}.
\end{proof}
We come to the main argument for $O_{\mathcal P}$.
Let $m_{\mathcal P}$ be as in \eqref{e:pfpoly10}-\eqref{e:pfpoly11} and let $T_j$ be the multiplier operators from \eqref{e:pfpoly50} corresponding to the choice $\Phi=m_{\mathcal P}$. Then obviously
\[
m_{\mathcal P} \hat f = \sum_{j\in J} \widehat {T_j f}.
\]
We may also tweak $\Phi$ and the partition of unity on $\mathbb S^1$ to obtain further multiplier operators $\widetilde T_j$ as in \eqref{e:pfpoly50}, such that the symbol of $\widetilde T_j$ equals one on the support of the symbol of $T_j$; in particular $\widetilde T_j T_j = T_j$.
With these definitions in hand we estimate for $2<p<4$
\begin{equation}
\label{e:pfpoly32}
\begin{split}
\|O_{\mathcal P} f \|_p &= \Big\| \sum_{j} \widetilde T_j (T_j T_{\mathcal P} f) \Big\|_p
\lesssim \upkappa^{\frac12-\frac1p}\left\| \{T_{\mathcal P}(T_j f) \} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)}
\\
&= \upkappa^{\frac12-\frac1p}\left\| \{H_jH_{j+1}(T_j f) \} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)}.
\end{split}
\end{equation}
The first inequality is an application of Lemma~\ref{l:Tj} for $\widetilde T_j$. The last equality is obtained by observing that the polygon multiplier $T_{\mathcal P}$, on the support of each $T_j$, may be written as a sum of $O(1)$ directional biparameter multipliers $H_jH_{j+1}$ of iterated Hilbert transform type, where $H_j$ is the Hilbert transform along the direction $\nu_j$, the unit vector perpendicular to the $j$-th side of the polygon and pointing inside the polygon; there are at most $\sim N$ such directions.
In order to complete our estimate for $O_{\mathcal P}$ we need the following Meyer-type lemma for directional Hilbert transforms of the form
\[
H_v f (x)\coloneqq \int_{\mathbb{R}^2} \hat f(\xi)\cic{1}_{\{\xi\cdot v>0\}}e^{ix\cdot \xi}\,{\rm d}\xi,\qquad x\in\mathbb{R}^2.
\]
\begin{lemma}\label{lem:meyer} Let $V\subset \mathbb S^1$ be a finite set of directions and $H_v$ be the Hilbert transform in the direction $v$. Then for $\frac43< p<4$ we have
\[
\begin{split}
&\left\| \{ H_v f_v\} \right\|_{L^p(\mathbb{R}^2; \ell^2_V)} \lesssim (\log\#V)^{\left|\frac12-\frac1p\right|} \left\| \{ f_v\} \right\|_{L^p(\mathbb{R}^2; \ell^2_V)} .
\end{split}
\]
The dependence on $\#V$ is best possible.
\end{lemma}
\begin{proof} It suffices to prove the estimate for $2<p<4$; the range $\frac43<p<2$ then follows by duality, while the case $p=2$ is immediate. The proof is by way of duality and uses the following inequality for the Hilbert transform: for $r>1$ and $w$ a non-negative locally integrable function we have
\[
\int_{\mathbb{R}^2} |H_v f|^2 w \lesssim \int_{\mathbb{R}^2} |f|^2 ({\mathrm M}_v |w|^r )^\frac1r
\]
with ${\mathrm M}_v$ given by \eqref{eq:dirMv}. See for example \cite{CP} and the references therein. Using this we have for a suitable $g\in L^{(p/2)'}$ of norm one that
\[\begin{split}
& \left\| \{ H_v f_v\} \right\|_{L^p(\mathbb{R}^2; \ell^2_V)} ^2=\int_{\mathbb{R}^2}\sum_{v\in V} |H_vf_v|^2 g \lesssim \sum_{v\in V}\int_{\mathbb{R}^2} |f_v|^2 ({\mathrm M}_v |g|^r )^\frac1r
\\
&\qquad\qquad\lesssim \|\{f_v\}\|_{L^p(\mathbb{R}^2;\ell^2 _V)} ^2 \big\|({\mathrm M}_V |g|^r )^\frac1r \big\|_{L^{(p/2)'}(\mathbb{R}^2)}
\end{split}
\]
with ${\mathrm M}_V g\coloneqq \sup_{v\in V}{\mathrm M}_vg$. Now for $2<p<4$ there is a choice of $1<r<\frac{p}{2(p-2)}$ so that $\frac{p}{r(p-2)}>2$. This means that the maximal theorem of Katz from \cite{KatzDuke} applies again to give
\[
\big\|({\mathrm M}_V |g|^r )^\frac1r \big\|_{L^{(p/2)'}(\mathbb{R}^2)}\lesssim (\log\#V)^{1 -\frac2p}
\]
and so the proof of the upper bound is complete. The optimality is discussed in \S\ref{sec:cexmeyer}.
\end{proof}
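For the reader's convenience, the exponent bookkeeping in the proof above is the following elementary arithmetic: since $(p/2)'=\frac{p}{p-2}$,

```latex
\[
\frac{(p/2)'}{r}=\frac{p}{r(p-2)}>2
\;\Longleftrightarrow\;
r<\frac{p}{2(p-2)},
\qquad
\Big((\log\#V)^{1-\frac2p}\Big)^{\frac12}=(\log\#V)^{\frac12-\frac1p},
\]
```

and the admissible range for $r$ is nonempty precisely because $p<4$, which gives $\frac{p}{2(p-2)}>1$.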
Let us now go back to the estimate for $O_{\mathcal P}$. The right hand side of \eqref{e:pfpoly32} contains an iterated (double) directional Hilbert transform. By a twofold application of Lemma~\ref{lem:meyer} we thus have
\[
\left\| \{H_jH_{j+1}(T_j f) \} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)}\lesssim \upkappa ^{1-\frac2p}\left\| \{T_j f \} \right\|_{L^p(\mathbb{R}^2; \ell^2_J)}
\]
since the number of directions is $N\sim 2^{\upkappa}$. The final estimate for the right hand side of the display above is a direct application of \eqref{e:pfpoly40}, which together with \eqref{e:pfpoly32} yields the estimate for $\|O_{\mathcal P} f \|_p$ claimed in \eqref{eq:OP}.
Now the decomposition \eqref{e:pfpoly60} together with the estimate of \S\ref{sec:inner} for $T_\upkappa$ and the estimate \eqref{eq:OP} for $O_{\mathcal P}$ complete the proof of Theorem~\ref{thm:polygon}.
\begin{remark}\label{rmrk:invsf} Consider a function $f$ in $\mathbb{R}^2$ such that $\mathrm{supp}(\hat f)\subseteq A_\updelta$, where $A_{\updelta}$ is an annulus of width $\updelta^2$ around $\mathbb S^1$. Decomposing $A_{\updelta}$ into a union of $O(1/\updelta)$ finitely overlapping annular boxes of radial width $\updelta^2$ and tangential width $\updelta$, we can write $f=\sum_{j\in J} T_j f$ where each $T_j$ is a smooth frequency projection onto one of these annular boxes, indexed by $j$. If $\widetilde T_j$ is a multiplier operator whose symbol is identically one on the frequency support of $T_j f$ and supported on a slightly larger box, we can write $f=\sum_j \widetilde{T_j}T_j f$, as in \eqref{e:pfpoly32} above. Lemma~\ref{l:Tj} then yields
\[
\|f\|_{L^p(\mathbb{R}^2)} \lesssim (\log (1/\updelta))^{\frac{1}{2}-\frac{1}{p}} \|\{T_jf\}\|_{L^p(\mathbb{R}^2;\ell^2 _J)}.
\]
This is the inverse square function estimate claimed in the remark after Theorem~\ref{thm:polygon} in the introduction.
\end{remark}
\section{Lower bounds and concluding remarks}\label{sec:cex}
\subsection{Sharpness of Meyer's lemma}\label{sec:cexmeyer}
We briefly sketch the quantitative form of Fefferman's counterexample \cite{FeffBall} proving the sharpness of Lemma~\ref{lem:meyer}. Let $N$ be a large dyadic integer. Using a standard Besicovitch-type construction we produce rectangles $\{R_j: \, j=1,\ldots, N\}$ with sidelengths $1\times \frac1N$, so that the long side of $R_j$ is oriented along $v_j\coloneqq \exp(2\uppi i j/N)$. Now we consider the set $E\coloneqq \bigcup_{j=1}^N R_j$, which by the Besicovitch construction satisfies
\[
|E| \lesssim \frac{1}{\log N}.
\]
Denoting by $\widetilde {R_j}$ the $2$-translate of $R_j$ in the direction of $v_j$ we gather that $ \{\widetilde {R_j}:j=1,\ldots,N\}$ is a pairwise disjoint collection. Furthermore if $H_j$ is the Hilbert transform in direction $v_j$, there holds
\[
|H_j \bm{1}_{R_j}| \geq c \bm{1}_{\widetilde {R_j}}.
\]
Therefore for all $1<p<\infty$
\[
\bigg\|\Big(\sum_{j=1}^N |H_j \bm{1}_{R_j}|^2\Big)^{\frac12} \bigg\|_{p} \geq c \bigg| \bigcup_{j=1}^N \widetilde {R_j} \bigg|^{\frac1p} \geq c
\]
while for $p\leq 2$
\[
\bigg\|\Big(\sum_{j=1}^N |\bm{1}_{R_j}|^2\Big)^{\frac12} \bigg\|_{p} \leq \bigg(\sum_{j=1}^N |R_j| \bigg)^{\frac12} |E|^{\frac1p-\frac12} \lesssim (\log N)^{\frac{1}{2}-\frac{1}{p}}.
\]
Self-duality of the square function estimate then entails the optimality of the estimate of Lemma~\ref{lem:meyer}.
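Spelling out the self-duality step: if the vector-valued bound of Lemma~\ref{lem:meyer} held with some exponent $\upepsilon$ in place of $\big|\frac12-\frac1p\big|$ for a fixed $p\leq 2$, testing it against the Besicovitch rectangles and combining the two displays above would give

```latex
\[
c\leq \bigg\|\Big(\sum_{j=1}^N |H_j \bm{1}_{R_j}|^2\Big)^{\frac12} \bigg\|_{p}
\lesssim (\log N)^{\upepsilon}\,\bigg\|\Big(\sum_{j=1}^N |\bm{1}_{R_j}|^2\Big)^{\frac12} \bigg\|_{p}
\lesssim (\log N)^{\upepsilon+\frac12-\frac1p},
\]
```

and letting $N\to\infty$ forces $\upepsilon\geq \frac1p-\frac12=\big|\frac12-\frac1p\big|$.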
\subsection{Sharpness of the directional square function bound}\label{sec:rdfcex} In this subsection we prove that the bound of Theorem~\ref{thm:rdfrough} is best possible, up to the doubly logarithmic terms. In particular we prove that the bound of Remark~\ref{rmrk:1paramrdf} is best possible.
We begin by showing a lower bound for the rough square function estimate
\begin{equation}\label{eq:cexrdf}
\|\{P_F g\}\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})} \leq \|\{P_F\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})\| \|g\|_p,\qquad 2\leq p <4,
\end{equation}
where the notations are as in \S\ref{sec:rdf}. Now as in Fefferman's argument in \cite{FeffBall} one can easily show that the estimate above implies the vector-valued inequality for directional averages, for directions corresponding to the directions of rectangles in $\mathcal F$. For this let $\#V=N$ where $V$ is the set of directions of rectangles in $\mathcal F$. Now consider functions $\{g_F\}_{F\in\mathcal F}$ with compact Fourier support; by modulating these functions we can assume that $\mathrm{supp}(\widehat{g_F})\subset B(c_F,A)$ for some $A>1$ and $\{c_F\}_{F\in\mathcal F}$ a $100AN$-net in $\mathbb{R}^2$. If $F$ is a rectangle centered at $c_F$ with short side $1$ parallel to a direction $v_F\in V$ and long side of length $N$ parallel to $v_F ^\perp$, then we have that $|P_Fg_F|=|A_{v_F}g_F|$ where $A_{v_F}$ is the averaging operator
\[
A_{v_F}f(x)\coloneqq 2N \int_{|t|\leq 1/2} \int_{N|s|<1} f(x-tv_F-sv_F ^\perp)\,{\rm d} t \,{\rm d} s,\qquad x\in \mathbb{R}^2.
\]
Note that this is a single-scale average with respect to rectangles of dimensions $1\times 1/N$ in the directions $v_F,v_F ^\perp$ respectively. Since the frequency supports of these functions are well-separated we gather that for all choices of signs $\varepsilon_F\in\{-1,1\}$ we have
\[
\sum_{T\in\mathcal F} |P_T G|^2\coloneqq \sum_{T\in\mathcal F} \Big|P_T \big(\sum_{F\in \mathcal F} \varepsilon_F g_F \big)\Big|^2 = \sum_{T\in\mathcal F}|P_T g_T|^2.
\]
Thus applying \eqref{eq:cexrdf} with the function $G$ as above and averaging over random signs we get
\[
\big\| \{A_{v_F}g_F\}\big\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})}\leq \|\{P_F\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})\| \|\{g_F\}\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})},\qquad 2\leq p <4.
\]
Now we just need to note that as in \S\ref{sec:cexmeyer} we have that
\[
A_{v_F}\cic{1}_{R_F}\gtrsim \cic{1}_{\widetilde {R_F}}
\]
where $\{R_F\}_{F\in\mathcal F}$ are the rectangles used in the Besicovitch construction in \S\ref{sec:cexmeyer}. As before we get
\[
\big\|\{P_F\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})\big\| \gtrsim (\log \#V)^{\frac12-\frac1p} .
\]
For $p<2$ the square function estimate \eqref{eq:cexrdf} is known to fail even in the case of a single direction; see for example the counterexample in \cite{RdF}*{\S1.5}.
One can use the same argument in order to show a lower bound for the norm of the smooth square function
\begin{equation}\label{eq:cexrdfsmmoth}
\|\{P_F ^{\circ} g\}\|_{L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})} \leq \|\{P_F ^\circ\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _{\mathcal F})\| \|g\|_p,\qquad 2\leq p <4.
\end{equation}
Indeed, following the exact same steps we can deduce a vector-valued inequality for smooth averages
\[
A_{v_F} ^\circ f(x)\coloneqq \int_{\mathbb{R}}\int_{\mathbb{R}} f(x-tv_F-sv_F ^\perp) \upgamma_F(t,s) \,{\rm d} t \,{\rm d} s,\qquad x\in \mathbb{R}^2,
\]
where $\upgamma_F$ is the smooth product bump function used in the definition of $P_F ^\circ$ in \S\ref{sec:rdf}. By a direct computation one easily shows the analogous lower bound $A_{v_F} ^\circ\cic{1}_{R_F}\gtrsim \cic{1}_{\widetilde {R_F}}$ for the rectangles of the Besicovitch construction and this completes the proof of the lower bound for smooth projections as well.
\subsection{Sharpness of C\'ordoba's bound for radial multipliers}\label{sec:cexradial}
We first recall the definition of the radial multipliers $P_\updelta$: let $\Phi:\mathbb{R}\to\mathbb{R}$ be a smooth function supported in $[-1,1]$ and define
\[
P_{\updelta} f(x) \coloneqq\int_{\mathbb{R}^2} \widehat f(\xi) \Phi\big(\updelta^{-1} (1-|\xi|) \big) {\rm e}^{ix\cdot \xi}\, {\rm d} \xi,\qquad x\in \mathbb{R}^2.
\]
These smooth radial multipliers were used extensively in \S\ref{sec:polygon}. In \cite{CordBR} C\'ordoba proved the bound
\[
\|P_\updelta f\|_p \lesssim (\log1/\updelta)^{\left|\frac12-\frac1p\right|}\|f\|_p,\qquad \frac43\leq p \leq 4.
\]
In fact the same bound is implicitly proved in \S\ref{sec:polygon} in a more refined form, but only in the open range $p\in(4/3,4)$ with weak-type analogues at the endpoints. More precisely we have discretized $P_{\updelta}$ into a sum of pieces $\{P_{\updelta,j}\}_{j\in J}$, where each $P_{\updelta,j}$ is a smooth projection onto an annular box of width $\updelta$ and length $\sqrt{\updelta}$, pointing along one of $N$ equispaced directions $\nu_j$. Then it follows from the considerations in \S\ref{sec:polygon} that
\begin{equation}\label{eq:vvaluedradial}
\begin{split}
\| \{P_{\updelta,j} f\}\|_{L^p(\mathbb{R}^2;\ell^2 _J)} &\lesssim (\log(1/\updelta))^{\frac12-\frac1p}\|f\|_p,\qquad 2<p<4,
\\
\| \{P_{\updelta,j} (f\cic{1}_F)\}\|_{L^4(\mathbb{R}^2;\ell^2 _J)} &\lesssim (\log(1/\updelta))^{\frac14}\|f\|_\infty |F|^\frac14.
\end{split}
\end{equation}
Obviously one gets the same bound by duality for $4/3<p<2$ while the $L^2$-bound is trivial. Now these estimates imply C\'ordoba's estimate for $P_{\updelta}$ in the open range $(4/3,4)$ by the decoupling inequality \eqref{eq:cordecoupling}, also due to C\'ordoba. On the other hand C\'ordoba's estimate is sharp. Indeed one uses the same rescaling and modulation arguments as in the previous subsection in order to deduce a vector-valued inequality for smooth averages, starting from C\'ordoba's estimate. Testing this vector-valued estimate against the rectangles of the Besicovitch construction proves the familiar lower bound for $P_\updelta$ and thus also shows the optimality of the estimates in \eqref{eq:vvaluedradial}. We omit the details.
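The duality step here only uses that the multiplier $\Phi\big(\updelta^{-1}(1-|\xi|)\big)$ is real-valued, so that $P_\updelta$ is self-adjoint, together with the symmetry of the exponent under conjugation of exponents:

```latex
\[
\|P_\updelta\|_{L^p\to L^p}=\|P_\updelta\|_{L^{p'}\to L^{p'}},
\qquad
\Big|\frac12-\frac1{p'}\Big|=\Big|\frac12-\Big(1-\frac1p\Big)\Big|=\Big|\frac12-\frac1p\Big|,
\]
```

so the bounds proved for $2<p<4$ transfer verbatim to $\frac43<p<2$.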
\subsection{Lower bounds for the conical square function}\label{sec:cexconical} We conclude this section with a simple example that provides a lower bound for the operator norm of the conical square function $\|C_\upomega(f):{\ell^2_{\bm{\upomega}}}\| $ of Theorem~\ref{thm:wdcones} and the smooth conical square function $\|C_\upomega^\circ:\ell^2_{\bm{\upomega}}\|$ of Theorem~\ref{thm:smoothcones}. The considerations in this subsection also rely on the Besicovitch construction so we adopt again the notations of \S\ref{sec:cexmeyer} for the rectangles $\{R_j:\, 1\leq j \leq N\}$ and their union $E$. Let $H^+ _j$ denote the frequency projection in the half-space $\{\xi\in\mathbb{R}^2:\, \xi \cdot v_j>0\}$ where $v_j\coloneqq\exp(2\uppi i j/N)$. We begin by observing that
\begin{equation}
\label{e:cexbes1}
H_j ^+ f - H_{j+1} ^+ f = C_{j}P_+f -C_{j}P_{-}f,
\end{equation}
where $P_{+},P_{-}$ denote the rough frequency projections onto the upper and lower half-space respectively and $C_{j}$ is the multiplier associated with the cone bordered by ${v_j,v_{j+1}}$. Since $H_j ^+$ is a linear combination of the identity with the usual directional Hilbert transform $H_j$ along $v_j$ we conclude that
\[
\Big\|\Big(\sum_{j=1} ^N |(H_{j+1}-H_j)f|^2\Big)^\frac12\Big\|_p\lesssim \big\| \{C_{j}\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _j)\big\|\, \|f\|_p,\qquad 2\leq p <4.
\]
Now note that for each fixed $1\leq k\leq N$ we have
\begin{equation}
\label{e:cexbes0}
\cic{1}_{\widetilde{R_k}} \sum_j (H_{j }-H_{j+1}) \cic{1}_{R_j} = \cic{1}_{\widetilde{R_k}} H_k\cic{1}_{R_k} \gtrsim \cic{1}_{\widetilde{R_k}}
\end{equation}
if $\widetilde{R_k}$ is
a sufficiently large translation of $R_k$ in the positive direction $v_k$. Thus
\[
\begin{split}
\Big|\int \cic{1}_{\cup_k \widetilde{R_k}} \sum_{j=1}^N (H_{j+1}-H_j)\cic{1}_{R_j}\Big| \gtrsim \Big|\sum_k \int_{\widetilde{R_k}}\cic{1}_{\widetilde{R_k}}\Big|\simeq 1.
\end{split}
\]
On the other hand the left hand side of the display above is bounded by a constant multiple of
\[
\big\| \{C_{j}\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _j)\big\| \, \Big\| \big(\sum_j \cic{1}_{R_j}^2\big)^{\frac12}\Big\|_{p'}\lesssim \big\| C_V:\, L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2)\big\| (\log N)^{\frac{1}{2}-\frac{1}{p'}}
\]
for all $2\leq p<4$. We thus conclude that
\[
\big\| \{C_{j}\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2)\big\| \gtrsim (\log N)^ {\frac12-\frac1p} ,\qquad 2\leq p <4.
\]
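The exponent in this conclusion follows from the two preceding displays by solving for the operator norm, using $\frac12-\frac1{p'}=\frac1p-\frac12$:

```latex
\[
1\lesssim \big\| \{C_{j}\}\big\|\,(\log N)^{\frac12-\frac1{p'}}
=\big\| \{C_{j}\}\big\|\,(\log N)^{\frac1p-\frac12}
\;\Longrightarrow\;
\big\| \{C_{j}\}\big\|\gtrsim (\log N)^{\frac12-\frac1p}.
\]
```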
We explain how this counterexample can be modified to get a lower bound for the smooth cone multipliers $C^\circ_\upomega$ from \eqref{e:smoothcones} matching the upper bound of Theorem \ref{thm:smoothcones}. For $t\in \mathbb{R}$ write $v_j^t\coloneqq\exp(2\uppi i (j+t)/N)$ and let $H_{j}^t$ and $H_{j}^{t,+} $ be the directional Hilbert transform and analytic projection along $v_j^t$, respectively. Let $\updelta>0$ be a small parameter to be chosen later and for each $1\leq j\leq N$ let $\upomega_j$ be an interval of size $\updelta N^{-1}$ centered around $2\uppi j/N$. Arguing as in \eqref{e:cexbes1},
\begin{equation}
C_{\upomega_j}^\circ P_+f - C_{\upomega_j}^\circ P_{-}f= \Xint-_{N|t|<\updelta} \upalpha\left({\textstyle\frac{Nt}{\updelta}}\right) \left( H_j ^{t,+} f - H_{j+1}^{t,+} f \right) {\rm d} t
\end{equation}
for a suitable nonnegative averaging function $\upalpha$ which equals $1$ on $[-\frac14,\frac14]$. Now, if $\widetilde{R_k}$ is again a sufficiently large translation of $R_k$ in the positive direction $v_k$ and $\updelta$ is chosen sufficiently small depending only on the translation amount, the analogue of \eqref{e:cexbes0} holds:
\begin{equation}
\cic{1}_{\widetilde{R_k}}\inf_{N|t|<\updelta} \sum_{j=1}^N(H_{j}^t -H_{j+1}^t ) \cic{1}_{R_j} = \cic{1}_{\widetilde{R_k}} \inf_{N|t|<\updelta} H_k^t[\cic{1}_{R_k}] \gtrsim \cic{1}_{\widetilde{R_k}}.
\end{equation}
The lower bound for $\| \{C_{\upomega_j}^\circ\}: L^p(\mathbb{R}^2)\to L^p(\mathbb{R}^2;\ell^2 _j)\|$ then follows exactly as in the previous case.
Thermoelectric (TE) materials, which can directly convert heat to electricity, have been extensively studied
over the last several decades since they play an important role in the area of environmentally
friendly energy technology.~\cite{Snyder_NM, Zebarjadi_EES}
The conversion efficiency of TE materials is usually evaluated by a dimensionless figure of merit,
$ZT=S^2\sigma T/\kappa$, where $S$, $\sigma$, $T$, and $\kappa$ are the Seebeck coefficient, electrical conductivity,
absolute temperature, and thermal conductivity, respectively. The thermal conductivity $\kappa$ of a material can be further
divided into two contributions: $\kappa = \kappa_e + \kappa_L$, where $\kappa_e$ and $\kappa_L$ are the electronic and lattice
thermal conductivities, respectively.
For practical applications, the $ZT$ of a TE material should be larger than 1.
To be competitive, a higher $ZT$ of 4 is needed.\cite{DiSalvo_sci}
However, these physical quantities ($S$, $\sigma$, and $\kappa$) cannot be
tuned separately in the same material because they are interdependent. In general, the electrical conductivity ($\sigma$) and
electronic thermal conductivity ($\kappa_e$) increase with increasing carrier concentration,
while the Seebeck coefficient ($S$) decreases with it. \cite{Snyder_NM}
Therefore, most good TE materials are semiconductors,
such as IV-VI compounds PbTe ~\cite{Heremans_science} and SnSe.~\cite{SnSe_nature, SnSe_science}
On the other hand, the lattice thermal conductivity ($\kappa_L$) is not directly related to
the carrier concentration, so reducing $\kappa_L$ is an effective way to enhance the $ZT$
of a material.
Typical materials with low lattice thermal conductivities are glasses. However, glasses are
bad TE materials because of their very low carrier concentration and mobility compared
with crystalline semiconductors. \cite{Snyder_NM} Therefore, good TE materials require a
rather peculiar property in the same system: `phonon-glass electron-crystal'. ~\cite{slack} In other words,
low lattice thermal conductivity is a necessary condition for good TE materials, although not a sufficient one.
In the classic physics picture, the lattice thermal conductivity can be approximated by the
formula: $\kappa_L = \frac{1}{3} C_v v l = \frac{1}{3} C_v v^2 \tau $ , where $C_v$,
$v$, $l$, and $\tau$ are the heat capacity, phonon velocity, mean free path (MFP),
and relaxation time. Furthermore, the phonon velocity is often simply replaced by
the sound velocity, which is proportional to $\sqrt{ B/\rho} $, where $B$ and $\rho$ are
the elastic modulus and mass density of a material. \cite{Toberer}
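As a rough numerical illustration of the kinetic formula $\kappa_L = \frac{1}{3} C_v v^2 \tau$ (and not a substitute for the first-principles calculations below), the following sketch evaluates it for hypothetical, order-of-magnitude inputs; the specific values of $C_v$, $v$, and $\tau$ are illustrative assumptions, not data for Na$_2$MgSn or Na$_2$MgPb.

```python
def kappa_lattice(c_v, v, tau):
    """Kinetic-theory estimate kappa_L = (1/3) * C_v * v^2 * tau.

    c_v : volumetric heat capacity in J m^-3 K^-1
    v   : sound (phonon) velocity in m s^-1
    tau : phonon relaxation time in s
    Returns kappa_L in W m^-1 K^-1.
    """
    return c_v * v ** 2 * tau / 3.0


# Hypothetical order-of-magnitude inputs for a soft crystalline solid:
# C_v ~ 1.5e6 J m^-3 K^-1, v ~ 2000 m/s, tau ~ 1 ps
print(kappa_lattice(1.5e6, 2.0e3, 1.0e-12))  # -> 2.0 (W m^-1 K^-1)
```

With these inputs, a picosecond-scale relaxation time and a km/s-scale sound velocity already place $\kappa_L$ at a few W/m$\cdot$K, which illustrates why sub-1 W/m$\cdot$K values, such as those computed below, point to short relaxation times and/or low phonon velocities.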
Accordingly, one method to design materials with low lattice thermal conductivity is to search for
high-density materials, which have low sound velocities, such as Bi$_2$Te$_3$.
Besides, materials with complex crystal structures also usually have low sound velocities,
such as Yb$_{14}$MnSb$_{11}$.~\cite{Yb14MnSb11}
nano-structures to scatter phonons. \cite{Toberer} Of course, a large intrinsic anharmonic
effect (large Gr\"uneisen parameters) in a material will also reduce the relaxation time
through phonon-phonon scattering. A distinct example is SnSe, in which the large anharmonicity
leads to its exceptionally low lattice thermal conductivity.~\cite{SnSe_nature}
However, such effects are hard to anticipate without quantitative calculations.
In this work, we predict by first-principles calculations that the layered intermetallics
Na$_2$MgSn and Na$_2$MgPb have very low and anisotropic intrinsic lattice thermal conductivities.
Both materials have a very simple layered structure (8 atoms in the unit cell) and a low mass density
(2.82 and 4.01 g/cm$^3$ for Na$_2$MgSn and Na$_2$MgPb, respectively).
In particular, we propose that Na$_2$MgSn is a promising room-temperature TE material.
An intermetallic is a solid-state compound exhibiting metallic bonding, defined stoichiometry,
and an ordered crystal structure. Intermetallics form a large material family with a wide variety of crystal
structures, ranging from zero- to three-dimensional.
Despite their metallic bonding, some intermetallics are semiconductors,
which is the precondition for their TE applications.
which is the precondition for their TE applications.
There are many works about the potential TE intermetallic materials such as
Mg$_3$Sb$_2$ \cite{Mg3Sb2,Mg3Sb2_snyder,Mg3Sb2_Bhardwaj}, CaMgSi \cite{CaMgSi},
MGa$_3$ (M=Fe, Ru, and Os) \cite{TMGa3}, YbAl$_3$ \cite{YbAl3}, Zn$_4$Sb$_3$ \cite{Zn4Sb3},
Al$_2$Fe$_3$Si$_3$ \cite{AlFeSi}, MIn$_3$ (M=Ru and Ir) \cite{MIn3}, M$_2$Ru (M=Al and Ga) \cite{MRu} and etc.
In particular, many half- and full-Heusler compounds, which are magnetic intermetallics, are found to be good
TE materials.~\cite{heusler1,heusler2,heusler3,heusler4}
In 2012, Yamada \textit{et al.} reported the synthesis, crystal structure, and basic physical properties
of hexagonal intermetallic Na$_2$MgSn.~\cite{NaMgSn} They found that polycrystalline Na$_2$MgSn is a small band gap
semiconductor with a large Seebeck coefficient of +390 $\mu$V/K
and an electrical resistivity of about 10 m$\Omega$\ cm at 300 K.
As a result, the power factor of Na$_2$MgSn is almost 40\% of that of Bi$_2$Te$_3$.~\cite{NaMgSn}
Two years later, the same group synthesized the related intermetallic Na$_2$MgPb, ~\cite{NaMgPb}
which is a metal with three different phases from 300 to 700 K.
In the experiment, the electrical resistivity of Na$_2$MgPb is much lower than that of
Na$_2$MgSn, which is only 0.4 m$\Omega$\ cm at 300 K.
From the preliminary experimental results, Na$_2$MgSn could be a potential TE material.
However, the thermal conductivity of Na$_2$MgSn has not yet been studied, either in experiments~\cite{NaMgSn, NaMgPb}
or in the subsequent theoretical work. ~\cite{wang_NaMgSn}
The rest of the paper is organized as follows. In section II,
we will give the computational details
about phonon and thermal conductivity calculations.
In section III, we will present our main results about phonon
dispersions, temperature dependent and accumulated lattice thermal conductivity,
group velocity, Gr\"uneisen parameter, phonon lifetime, and Seebeck coefficient of
Na$_2$MgSn and Na$_2$MgPb. Some comparisons among different TE materials are also given.
Finally, a short conclusion is presented.
\section{ COMPUTATIONAL DETAILS }
The structural properties of Na$_2$MgSn and Na$_2$MgPb are calculated by
the Vienna ab-initio simulation package (VASP) ~\cite{vasp1,vasp2} based on the density functional theory.
The projected augmented wave method ~\cite{paw1,paw2} and the
generalized gradient approximation with the
Perdew-Burke-Ernzerhof exchange-correlation functional ~\cite{pbe} are used.
The plane-wave cutoff energy is set to be 350 eV. Both the internal atomic positions and the
lattice constants are allowed to relax until the maximal residual Hellmann-Feynman forces
on atoms are smaller than 0.001 eV/\AA.
An $8 \times 8 \times 4$ k-mesh was used in the optimization.
Both the second- and third-order interatomic force constants (IFCs)
are calculated by the finite displacement method.
The second-order IFCs in the harmonic approximation and the phonon dispersions
of Na$_2$MgSn and Na$_2$MgPb are calculated by the Phonopy code. ~\cite{phonopy}
The third-order (anharmonic) IFCs and the lattice thermal conductivity are
calculated by the Phono3py code.~\cite{phono3py}
We use a $2\times 2 \times 2$ supercell (64 atoms) for the calculations of the
second- and third-order IFCs in Na$_2$MgSn and Na$_2$MgPb, and a q-mesh of $20\times 20\times 10$
is used for the calculation of the lattice thermal conductivity by the Phono3py code.
We do not truncate the third-order IFCs with a cutoff distance, although
we have checked that a cutoff distance of 4 \AA \, already yields a good thermal conductivity.
We also checked a larger supercell of $3 \times 3 \times 2$ for Na$_2$MgSn with a cutoff distance of 5 \AA,
and found that the thermal conductivity changed little.
The Seebeck coefficients of Na$_2$MgSn and Na$_2$MgPb are calculated by
the BoltzTraP2 program \cite{boltztrap2} with the Boltzmann transport theory.
The electron eigenvalues in the whole Brillouin zone are calculated by the VASP code
with the hybrid functional of Heyd-Scuseria-Ernzerhof (HSE06) \cite{hse06} and spin-orbit coupling.
A k-mesh of 20 $\times$ 20 $\times$ 10 is used in the calculations of Seebeck coefficient.
\section{ RESULTS AND DISCUSSIONS }
\subsection{Crystal Structure and Phonon Dispersions}
\begin{figure}
\includegraphics[width=0.8\textwidth]{Fig_structure.eps}
\caption{\label{fig:structure} (Color online) ({\bf a}) Layered crystal structure and ({\bf b}) Brillouin zone and
high symmetry k-points of hexagonal Na$_2$MgSn and Na$_2$MgPb. }
\end{figure}
\begin{table}
\caption{ Calculated and experimental lattice constants of hexagonal Na$_2$MgSn and Na$_2$MgPb.
}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline\hline
Material & Method & $a$ (\AA) & $c$ (\AA) \\ \hline
& present work & 5.0825 & 10.1075 \\ \cline{2-4}
Na$_2$MgSn & exp. (293 K) \footnote{ from reference~\cite{NaMgSn} } & 5.0486 & 10.0950 \\ \cline{2-4}
& GGA-USP \footnote{from reference~\cite{wang_NaMgSn} } & 5.0085 & 10.1314 \\ \hline
Na$_2$MgPb & present work & 5.1415 & 10.1873 \\ \cline{2-4}
& exp. (293 K) \footnote{from reference~\cite{NaMgPb} } & 5.1102 & 10.1714 \\ \hline \hline
\end{tabular}
\end{center}
\end{table}
As shown in Fig. 1(a), Na$_2$MgSn and Na$_2$MgPb
share the same hexagonal crystal structure with the space group of $P6_3/mmc$ (No. 194).
It is noted that Na$_2$MgPb has three phases from 300 to 700 K. \cite{NaMgPb}
However, from 300 to 500 K, Na$_2$MgPb and Na$_2$MgSn have the same hexagonal crystal structure. \cite{NaMgSn, NaMgPb}
Mg and Sn (or Pb) atoms lie in the same plane and form a two-dimensional (2D) honeycomb structure
stacking along the $c$ axis. Two layers of Na atoms are intercalated between the
adjacent Mg-Sn (or Mg-Pb) layers.
This is quite different from other alkali-intercalated layered materials, such as Na$_x$CoO$_2$ ~\cite{NaCoO}
or Na$_x$RhO$_2$ ~\cite{zbb_NRO}, in which only one Na layer is intercalated between the adjacent CoO$_2$ or RhO$_2$ layers.
The Brillouin zone and high symmetry k-points of Na$_2$MgSn and Na$_2$MgPb are given in Fig. 1(b).
The optimized lattice constants as well as the experimental ones of Na$_2$MgSn
and Na$_2$MgPb are given in Table I.
We find that the theoretical and experimental lattice constants agree well with each other,
with the largest difference being less than 1\%. Our calculated lattice constants are also consistent with
Wang's calculation by the generalized gradient approximation
with the ultra-soft pseudopotentials (GGA-USP). ~\cite{wang_NaMgSn}
\begin{figure}
\includegraphics[width=0.9\textwidth]{Fig_phonon.eps}
\caption{\label{fig:phonon} (Color online) Calculated phonon dispersions and
density of states of hexagonal Na$_2$MgSn (left column) and Na$_2$MgPb (right column).}
\end{figure}
Based on the optimized structures, the phonon dispersions and density of states (DOS)
of Na$_2$MgSn and Na$_2$MgPb are calculated by the Phonopy code, given in Fig. 2.
It is obvious that the two materials show very similar phonon dispersions
owing to their common crystal structure. The highest frequency is about 7.5 and 7.0 THz
for Na$_2$MgSn and Na$_2$MgPb respectively. There is also a clear band gap
from 5 to 6.5 THz for Na$_2$MgSn and from 5 to 6 THz for Na$_2$MgPb.
From the phonon DOS in Fig. 2(c) and (d),
it is found that the high-frequency phonon modes above the band gap arise mainly
from the vibrations of Mg ions. This feature is the same for Na$_2$MgSn and Na$_2$MgPb.
The main difference in phonon DOS between the two materials is the vibrations of Sn and Pb ions.
Due to the larger atomic mass of Pb ions, the vibrational frequencies of Pb ions are mainly below 2.5 THz in
Na$_2$MgPb, while the phonon modes of Sn ions extend from 0 to 4 THz in Na$_2$MgSn.
The vibrations of Na ions are similar in both materials, which spread from 0 to 5 THz.
It is also noted that the mid-frequency phonon modes from 2.5 to 5 THz in Na$_2$MgPb
arise mainly from the vibrations of Na ions, whereas the phonon modes in the same frequency range
in Na$_2$MgSn also receive a significant contribution from the vibrations of Sn ions.
In other words, the vibrations of Na, Mg, and Pb ions in Na$_2$MgPb are well separated in different frequency regions,
while there is a relatively large overlap in Na$_2$MgSn.
\subsection{Lattice Thermal Conductivities}
\begin{figure}
\includegraphics[width=0.7\textwidth]{Fig_kappa.eps}
\caption{\label{fig:kappa} (Color online) ({\bf a}) Calculated lattice thermal conductivities and ({\bf b}) their average values
of hexagonal Na$_2$MgSn and Na$_2$MgPb from 300 to 800 K. The red dashed lines mean
that the high temperature lattice thermal conductivities are calculated based on a low temperature crystal structure
of Na$_2$MgPb. }
\end{figure}
\begin{table}
\caption{ Calculated lattice thermal conductivities $\kappa_a$ and $\kappa_c$ along $a$ and $c$ axes
and their average value $\bar{\kappa}$ of hexagonal Na$_2$MgSn and Na$_2$MgPb at 300 K. The unit is W/m$\cdot$K.}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline\hline
Material & $\kappa_a$ & $\kappa_c$ & $\bar{\kappa}$ \\ \hline
Na$_2$MgSn & 1.75 & 0.80 & 1.27 \\ \hline
Na$_2$MgPb & 0.51 & 0.31 & 0.44 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Based on the harmonic and anharmonic IFCs, we have calculated the lattice thermal conductivity ($\kappa_L$)
by using the Phono3py code.
Fig. 3(a) shows the temperature-dependent lattice thermal conductivities along the $a$ and $c$ axes
of Na$_2$MgSn and Na$_2$MgPb, while the average ones are also given in Fig. 3(b).
It is quite surprising to find that the $\kappa_L$ values of the two intermetallics are
very low. As shown in Fig. 3 and Table II, the $\kappa_L$ of Na$_2$MgSn is only 1.75 and 0.80 W/m$\cdot$K
along $a$ and $c$ axes at 300 K, while it is even much lower for Na$_2$MgPb,
which is 0.51 and 0.31 W/m$\cdot$K along $a$ and $c$ axes at the same temperature.
Specifically, the $\kappa_L$ along the $c$ axis in Na$_2$MgPb even approaches the predicted
amorphous limit (0.25 W/m$\cdot$K), \cite{Cahill_PRB} which is extremely low for crystalline solids.
The lattice thermal conductivity of Na$_2$MgSn is comparable with
those of typical good TE materials, which will be discussed later.
Meanwhile, the lattice thermal conductivity of Na$_2$MgPb is even smaller than that of the
recently discovered high-performance TE material SnSe, which is 0.8, 2.0, and 1.7 W/m$\cdot$K
along the $a$, $b$, and $c$ axes at 300 K from first-principles calculations. \cite{Guo_PRB}
It is noted that the lattice thermal conductivity of Na$_2$MgPb above 500 K is meaningless
since Na$_2$MgPb has a different crystal structure above that temperature.
However, since we mainly focus on the room temperature behavior of Na$_2$MgPb,
it is not necessary to study its high temperature thermal conductivity.
Furthermore, it is obvious that Na$_2$MgSn and Na$_2$MgPb both show anisotropic lattice thermal conductivities
due to their layered crystal structures. However, the ratio of thermal conductivities between the $a$ and $c$
directions in both materials is smaller than 2. This small anisotropy suggests that the easily formed textured
structures in layered compounds have little effect on the thermoelectric performance of these compounds.
We also calculated the average lattice thermal conductivity
$\bar{\kappa} $, defined by the formula $ 3/\bar{\kappa} = 2/\kappa_a + 1/\kappa_c $,
shown in Fig. 3(b) and Table II.
The average $\bar{\kappa} $ for Na$_2$MgSn and Na$_2$MgPb are 1.27 and 0.44 W/m$\cdot$K at 300 K respectively.
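As a quick sanity check of the averaging formula, the rounded 300 K values from Table II can be plugged in directly (a sketch; the small deviations from the quoted 1.27 and 0.44 W/m$\cdot$K arise from rounding of the tabulated $\kappa_a$ and $\kappa_c$):

```python
def kappa_avg(kappa_a, kappa_c):
    # Weighted harmonic mean: 3 / kbar = 2 / kappa_a + 1 / kappa_c
    return 3.0 / (2.0 / kappa_a + 1.0 / kappa_c)

print(round(kappa_avg(1.75, 0.80), 2))  # Na2MgSn: ~1.25 W/m·K
print(round(kappa_avg(0.51, 0.31), 2))  # Na2MgPb: ~0.42 W/m·K
```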
We then estimate the electronic thermal conductivity ($\kappa_e$) by the Wiedemann-Franz law:
$\kappa_e = LT\sigma$,
where $L$ is the Lorenz number (2.44 $\times$ 10$^{-8}$ W$\mathrm{\Omega}$K$^{-2}$),
$\sigma$ is the electrical conductivity, and $T$ is the absolute temperature.
According to previous experiments~\cite{NaMgSn, NaMgPb}, the electrical
resistivities ($\rho$) of Na$_2$MgSn and Na$_2$MgPb are about 10 and 0.4 m$\Omega$\ cm at 300 K, respectively.
Therefore, the electronic thermal conductivities
of Na$_2$MgSn and Na$_2$MgPb are estimated to be 0.073 and 1.83 W/m$\cdot$K, respectively.
For the semiconducting Na$_2$MgSn, its electronic thermal conductivity is much smaller than the lattice one.
While for the metallic Na$_2$MgPb, the electronic contribution to the thermal conductivity exceeds the lattice one.
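The Wiedemann--Franz estimates above are straightforward to reproduce from the quoted resistivities (a minimal sketch):

```python
LORENZ = 2.44e-8  # Lorenz number, W·Ohm/K^2
T = 300.0         # temperature, K

def kappa_e(rho_mohm_cm):
    # kappa_e = L * T * sigma, with sigma = 1 / rho
    rho = rho_mohm_cm * 1e-5  # mOhm·cm -> Ohm·m
    return LORENZ * T / rho

print(f"{kappa_e(10.0):.3f} W/m·K")  # Na2MgSn: 0.073
print(f"{kappa_e(0.4):.2f} W/m·K")   # Na2MgPb: 1.83
```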
Of course, it has to be noted that the experimental samples are both polycrystals.
No single crystal sample has been reported yet.
The sound velocity is also calculated by the slopes of three acoustic phonon branches near the $\Gamma$ point.
For each direction, the sound velocity is averaged over the two transverse acoustic modes (TA1 and TA2)
and one longitudinal acoustic mode (LA) by the formula:
$ {3}/{{v}_x^3} = 1/v_{x,\mathrm{TA1}}^3 + 1/v_{x,\mathrm{TA2}}^3 + 1/v_{x,\mathrm{LA}}^3 $,
where $x$ denotes the $a$ or $c$ axis.
In Table III, we can see that along both the $a$ and $c$ axes, the sound velocities of Na$_2$MgSn are higher than those of Na$_2$MgPb,
which explains why Na$_2$MgSn has higher lattice thermal conductivities.
For both materials, the sound velocity along the $a$ axis is higher than that along the $c$ axis,
which explains the anisotropic lattice thermal conductivities of the two materials.
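The harmonic-cube averaging over the three acoustic branches can be sketched as follows; the branch velocities below are hypothetical placeholders for illustration only (the paper reports only the averaged values in Table III):

```python
def v_avg(v_ta1, v_ta2, v_la):
    # 3 / v^3 = 1/v_TA1^3 + 1/v_TA2^3 + 1/v_LA^3
    return (3.0 / (v_ta1**-3 + v_ta2**-3 + v_la**-3)) ** (1.0 / 3.0)

# Hypothetical branch velocities in km/s, for illustration only
v = v_avg(2.4, 2.6, 4.5)
print(round(v, 2))
```

Because of the inverse-cube weighting, the average is dominated by the slower transverse branches, which is why it lies close to the TA values.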
\begin{table}
\caption{ Calculated sound velocities along $a$ and $c$ axes of Na$_2$MgSn and Na$_2$MgPb. The unit is km/s.}
\begin{center}
\begin{tabular}{c|c|c}
\hline\hline
Material & $ v_a$ & $v_c$ \\ \hline
Na$_2$MgSn & 2.75 & 2.41 \\ \hline
Na$_2$MgPb & 2.16 & 1.90 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[width=0.8\textwidth]{Fig_accu_kappa.eps}
\caption{\label{fig:kappa} (Color online) Normalized directional accumulated lattice thermal conductivities
of hexagonal ({\bf a}) Na$_2$MgSn and ({\bf b}) Na$_2$MgPb at 300 K. }
\end{figure}
We further plot the directional cumulative lattice thermal conductivity with respect to the
phonon MFP in Na$_2$MgSn and Na$_2$MgPb at 300 K in Fig. 4.
It is found that the $\kappa_L$ of Na$_2$MgSn is mainly dominated by the phonons whose MFPs are
less than 100 nm. However, for Na$_2$MgPb, the $\kappa_L$
is mostly contributed by the phonons whose MFPs are less than 20 nm.
This indicates that the phonon MFP in Na$_2$MgPb is much shorter than that in Na$_2$MgSn,
leading to a much lower $\kappa_L$ in Na$_2$MgPb than that in Na$_2$MgSn.
We also give the representative MFP (rMFP) for the two materials in Fig. 4.
The rMFP is defined such that all phonons with MFPs shorter than the rMFP together contribute half of the total thermal conductivity.
It is clear to see that the rMFP of Na$_2$MgPb is much shorter than that of Na$_2$MgSn.
For Na$_2$MgSn, the rMFP along the $a$ and $c$ axes
are 10.31 and 12.37 nm respectively, while they are only 3.56 and 3.05 nm along the same axes for Na$_2$MgPb.
The rMFP of Na$_2$MgPb is even a little shorter than the ones of SnSe, which are 4.1, 4.9, and 5.6 nm
along $a$, $b$, and $c$ axes at 300 K from the first-principles calculations. \cite{Guo_PRB}
Due to their different MFPs, we suggest that for Na$_2$MgSn,
structural engineering, such as nano-structuring or polycrystalline structures,
can be employed in experiments to effectively reduce the thermal conductivity,
while such approaches would be much less effective for Na$_2$MgPb.
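The rMFP can be read off the normalized cumulative curves in Fig. 4 by locating the MFP at which the cumulative $\kappa_L$ reaches one half. A sketch of this extraction with made-up per-mode data (the MFPs and contributions below are illustrative, not from the calculation):

```python
import numpy as np

def representative_mfp(mfps, contribs):
    """MFP below which the phonons carry half of the total kappa_L."""
    order = np.argsort(mfps)
    mfps = np.asarray(mfps, dtype=float)[order]
    contribs = np.asarray(contribs, dtype=float)[order]
    cum = np.cumsum(contribs) / np.sum(contribs)  # normalized cumulative kappa
    return float(np.interp(0.5, cum, mfps))       # MFP at the 50% crossing

# Hypothetical (MFP in nm, mode contribution) pairs for illustration
mfps = [1, 2, 3, 5, 8, 12, 20, 40]
contribs = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
print(round(representative_mfp(mfps, contribs), 1))  # 5.0
```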
\subsection{Group velocity and phonon anharmonicity}
\begin{figure}
\includegraphics[width=0.8\textwidth]{Fig_gv_gruneisen.eps}
\caption{\label{fig:kappa} (Color online) Calculated group velocities and mode-dependent
Gr\"uneisen parameters of Na$_2$MgSn (left column) and Na$_2$MgPb (right column) .}
\end{figure}
In order to further understand the intrinsic lattice thermal conductivities
of Na$_2$MgSn and Na$_2$MgPb and their difference, we also calculated the frequency-dependent
phonon group velocities, mode-dependent Gr\"uneisen parameters, and phonon lifetimes.
In Fig. 5 (a) and (b), we have given the magnitude of frequency dependent phonon group velocities
of Na$_2$MgSn and Na$_2$MgPb.
We can see that the group velocities of Na$_2$MgPb are only slightly smaller than those of Na$_2$MgSn,
which alone cannot explain the significant difference in the $\kappa_L$ of the two materials.
Therefore, we need to further study their anharmonic effect: the Gr\"uneisen parameters.
In general, a larger Gr\"uneisen parameter indicates larger anharmonicity of the material
and a lower lattice thermal conductivity. In Fig. 5 (c) and (d), the magnitude of the Gr\"uneisen parameters of Na$_2$MgPb is
obviously much larger than that of Na$_2$MgSn, which means that Na$_2$MgPb has a much stronger phonon anharmonicity.
In other words, stronger phonon scattering leads to lower lattice thermal conductivity in Na$_2$MgPb
than that in Na$_2$MgSn.
The frequency-dependent phonon lifetimes of Na$_2$MgSn and Na$_2$MgPb are also
calculated by the Phono3py code from third-order anharmonic IFCs, plotted in Fig. 6.
The color bar in Fig. 6 represents the density of phonon modes.
In general, we found that the phonon lifetimes of both materials are very short, roughly ranging
from 0.4 to 4.5 ps, which are much shorter than those of SnSe (from 0 to 30 ps). \cite{Guo_PRB}
In detail, we can see that in the low frequency region below 2.5 THz, the phonon lifetimes of both materials
show a large broad distribution with a maximal value about 4.5 ps.
The lifetimes of the high frequency phonons (above the band gap) of the two materials are also similar with a
narrow distribution from 0.5 to 1.0 ps.
The major differences between the two materials are from the mid-frequency phonons, i.e.
from 2.5 to 5 THz. In this region, the phonons of Na$_2$MgSn have a relatively broader distribution from 0 to 1.5 ps,
while in Na$_2$MgPb, the phonons have much smaller lifetimes, located from 0.4 to 0.8 ps.
Therefore, we conclude that the phonon modes between 2.5 and 5 THz are scattered
much more strongly in Na$_2$MgPb than in Na$_2$MgSn, which is the main reason
why the lattice thermal conductivity of Na$_2$MgPb is lower than that of Na$_2$MgSn.
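As an order-of-magnitude consistency check, a kinetic-theory mean free path $\Lambda \approx v\tau$ built from representative numbers (sound velocity $\sim$2 km/s from Table III, mid-frequency lifetime $\sim$0.6 ps for Na$_2$MgPb from Fig. 6) lands in the nanometer range, in line with the rMFP of about 3 nm extracted from Fig. 4:

```python
v = 2.0e3      # representative sound velocity of Na2MgPb, m/s (Table III)
tau = 0.6e-12  # representative mid-frequency lifetime, s (Fig. 6)

mfp_nm = v * tau * 1e9  # Lambda = v * tau, in nm
print(f"{mfp_nm:.1f} nm")  # ~1.2 nm
```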
\begin{figure}
\includegraphics[width=0.8\textwidth]{Fig_lifetime.eps}
\caption{\label{fig:kappa} (Color online) Calculated phonon lifetimes of ({\bf a}) Na$_2$MgSn and ({\bf b}) Na$_2$MgPb
at 300 K. The color in the figure represents the phonon density. A brighter color means a higher phonon density.}
\end{figure}
\subsection{Seebeck coefficient}
In order to estimate the $ZT$ of Na$_2$MgPb, we have also calculated the Seebeck coefficient
of the two materials, shown in Fig. 7. The electron band structure calculations indicate that
Na$_2$MgSn is a small gap (about 0.18 eV) semiconductor and Na$_2$MgPb is a semi-metal.
In Fig. 7(a), the maximal absolute Seebeck coefficient of Na$_2$MgSn at 300 K is about 250
$\mu$V/K, which is lower than the experimental value (390 $\mu$V/K). This is possibly because
the experimental Na$_2$MgSn sample is polycrystalline.
In Fig. 7(b), it is natural to find that the Seebeck coefficient of metallic Na$_2$MgPb is quite low.
At the Fermi energy, its Seebeck coefficient is about -22 $\mu$V/K.
\begin{figure}
\includegraphics[width=0.8\textwidth]{Fig_seebeck.eps}
\caption{\label{fig:seebeck} (Color online) Calculated Seebeck coefficient of ({\bf a}) Na$_2$MgSn and ({\bf b}) Na$_2$MgPb
at 300 K. The Fermi energy is set to 0. }
\end{figure}
\subsection{Discussion}
\begin{table}
\caption{ Comparison of the theoretical lattice thermal conductivity ($\kappa_L$), mass density ($\rho$),
bulk modulus ($B$), sound velocity ($v$), rMFP, and maximal phonon lifetime ($\tau$)
of Na$_2$MgSn and Na$_2$MgPb at 300 K with those of other TE materials. All physical quantities
except the mass density are theoretical ones. The data of Na$_2$MgSn and Na$_2$MgPb are from the present work.
The (a), (b), and (c) in the table indicate the $a$, $b$, and $c$ axes, respectively.}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c}
\hline\hline
Material& $\kappa_L$ (W/m$\cdot$K) & $\rho$ (g/cm$^3$) & $B$ (GPa) & $v$ (km/s) & rMFP (nm) &$\tau$ (ps) \\ \hline
Na$_2$MgSn & 1.75(a)/0.80(c) & 2.82 \cite{NaMgSn} & 30.6 & 2.75(a)/2.41(c) & 10.31(a)/12.37(c) &$\sim$ 4.5 \\ \cline{2-7} \hline
Na$_2$MgPb & 0.51(a)/0.31(c) & 4.01 \cite{NaMgPb} & 27.0 & 2.16(a)/1.90(c) & 3.56(a)/3.05(c) &$\sim$ 4.5 \\ \cline{2-7} \hline
SnSe \cite{Guo_PRB} & 0.8(a)/2.0(b)/1.7(c) & $\sim$6.1 \cite{SnSedensity} & 39.4 & 0.40(a)/0.63(b)/0.58(c)
& 5.6(a)/4.9(b)/4.1(c) & $\sim$ 30 \\ \cline{2-6} \hline
SnS \cite{Guo_PRB} & 0.9(a)/2.3(b)/1.6(c) & 5.1 \cite{SnSdensity} & 41.6 & 0.44(a)/0.71(b)/0.60(c)
& 4.3(a)/5.2(b)/4.0(c) &$\sim$ 30 \\ \cline{2-6} \hline
Bi$_2$Te$_3$ & 1.2(a)/0.4(c) \cite{BiTe3} & 7.88 \cite{Bi2Te3pccp} & 36.4 \cite{Bi2Te3pccp} & 1.68(a)/1.79(c) \cite{BiTe4}& 1.5(a) \cite{BiTe3} \\ \cline{2-6} \hline
PbTe & 2.1 \cite{PbTe1} & 8.24 \cite{PbTe3} & 45.51 \cite{PbTe4} & 1.98 \cite{PbTe4} & 6 \cite{PbTe1} & $\sim$ 100 \cite{PbTe1} \\ \cline{2-6}
\hline\hline
\end{tabular}
\end{center}
\end{table}
Now, we compare some theoretical physical properties of Na$_2$MgSn and Na$_2$MgPb with other
well-known TE materials, such as SnSe, SnS, Bi$_2$Te$_3$, and PbTe, shown in Table IV.
It is found that Na$_2$MgSn and Na$_2$MgPb have comparable lattice thermal
conductivities as those TE materials. The main difference
is that Na$_2$MgSn and Na$_2$MgPb have a much smaller mass density, a smaller bulk modulus,
and a relatively higher sound velocity. In particular, the mass density of Na$_2$MgSn is less than
half that of SnSe, while the two materials have almost the same lattice thermal conductivities.
This is a quite unique behavior in low $\kappa$ materials.
In spite of the very low mass density and high sound velocity, Na$_2$MgSn and Na$_2$MgPb show
a very low lattice thermal conductivity due to their large anharmonicity.
From Table IV, we can see that the maximal lifetime of Na$_2$MgSn and Na$_2$MgPb
is much smaller than those of SnSe, SnS, Bi$_2$Te$_3$, and PbTe.
We think the large anharmonicity is probably due to the Na intercalated layered structures.
In Na$_2$MgSn and Na$_2$MgPb, there are two layers of Na ions loosely
confined between adjacent Mg-Sn (or Mg-Pb) layers. The possible rattling modes of
the Na ions could suppress the lattice thermal conductivity, as has been found in some cage structure materials, such as
Ba$_8$Ga$_{16}$Ge$_{30}$\cite{cage} and layered structure materials, such as Na$_x$CoO$_2$. \cite{voneshen}
\begin{table}
\caption{ Comparison of the experimental electrical conductivity ($\sigma$), absolute Seebeck coefficient ($S$),
total thermal conductivity ($\kappa$), and figure of merit $ZT$ of Na$_2$MgSn and Na$_2$MgPb at 300 K
with other TE materials. The values with $\ast$ are theoretical
ones or estimated ones based on the theoretical results.
In the table, only the SnSe sample is a single crystal. }
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline\hline
Material & $\sigma$ ($\Omega^{-1}$ cm$^{-1}$ ) & $S$ ( $\mu$V/K) & $\kappa$ (W/m$\cdot$K) & $ZT$ \\ \hline
Na$_2$MgSn \cite{NaMgSn} & 100 & 390 & 1.343* & 0.34* \\ \cline{2-4} \hline
Na$_2$MgPb \cite{NaMgPb} & 2500 & 22* & 2.27* &0.016* \\ \cline{2-4} \hline
SnSe \cite{SnSe_nature} & 1.6(a)/10(b)/10.3(c) & 542(a)/522(b)/515(c) & 0.46(a)/0.7(b)/0.68(c) & 0.03(a)/0.12(b)/0.12(c) \\ \cline{2-5} \hline
SnS \cite{AgSnS} & 0.001& 400 & 1.25 &$\sim$ 0 \\ \cline{2-5} \hline
Bi$_2$Te$_3$ \cite{Bi2Te3} &962 & 226 &1.47 &1.003 \\ \cline{2-5}\hline
PbTe \cite{PbTe3} & 200 & 192 & 1.7 & 0.13 \\ \cline{2-5} \hline \hline
\end{tabular}
\end{center}
\end{table}
Finally, we can compare some experimental properties of Na$_2$MgSn and Na$_2$MgPb with other TE
materials, as shown in Table V. It is found that Na$_2$MgSn has a high Seebeck coefficient at room temperature,
which is higher than those of Bi$_2$Te$_3$ and PbTe, but lower than those of SnSe and SnS.
It also has a good electric conductivity, which is much higher than those of SnSe and SnS,
but lower than those of Bi$_2$Te$_3$ and PbTe.
Based on these experimental data and our calculated average total thermal conductivity, we
can estimate that the $ZT$ of Na$_2$MgSn is about 0.34 at 300 K, which is comparable with the
$ZT$ values of Bi$_2$Te$_3$ and PbTe, but much higher than those of SnSe and SnS at the same temperature.
We also note that the thermal conductivity of a polycrystalline sample is usually greatly suppressed.
For example, the experimental thermal conductivity of polycrystalline ZrTe$_5$ is only 2.2
W/m$\cdot$K at room temperature,\cite{ZT1} which is much lower than the experimental and theoretical values
of single crystalline ZrTe$_5$ (about 8 W/m$\cdot$K at 300 K).\cite{ZT2, ZT3}
Therefore, we expect that the thermal conductivity of polycrystalline Na$_2$MgSn should be much smaller than our
calculated value and its $ZT$ could be possibly much larger than 0.34 at 300 K.
It is also noted that SnSe and SnS reach their best performance at high temperatures (above 700 K),
while we believe that Na$_2$MgSn would reach its highest $ZT$ near room temperature due to its small band gap.
On the other hand, although Na$_2$MgPb has an ultra-low lattice thermal conductivity, its total thermal conductivity
(about 2.27 W/m$\cdot$K) is not small due to its metallicity. In experiment,
Yamada \textit{et al.} found a very small electrical resistivity of polycrystalline Na$_2$MgPb
(only 0.4 m$\Omega$ cm at 300 K).
Combined with our theoretical Seebeck coefficient, we can estimate the $ZT$ of Na$_2$MgPb
is very small, which is only about 0.016 at 300 K, as shown in Table V.
Therefore, metallic Na$_2$MgPb could not be a good TE material.
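The $ZT$ estimates quoted above follow from $ZT = S^2 \sigma T / \kappa$, using the experimental $\sigma$ and $S$ together with the estimated total $\kappa$ from Table V (note that $\kappa = \bar{\kappa} + \kappa_e$, i.e. $1.27 + 0.073 \approx 1.343$ and $0.44 + 1.83 = 2.27$ W/m$\cdot$K). A quick numerical check:

```python
def zt(S_uV_per_K, sigma_per_ohm_cm, kappa_W_per_mK, T=300.0):
    # ZT = S^2 * sigma * T / kappa, after converting to SI units
    S = S_uV_per_K * 1e-6            # V/K
    sigma = sigma_per_ohm_cm * 1e2   # (Ohm·cm)^-1 -> S/m
    return S**2 * sigma * T / kappa_W_per_mK

print(round(zt(390.0, 100.0, 1.343), 2))  # Na2MgSn: 0.34
print(round(zt(22.0, 2500.0, 2.27), 3))   # Na2MgPb: 0.016
```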
\section{ CONCLUSIONS}
We have studied the lattice thermal conductivities and thermoelectric properties of
Na$_2$MgSn and Na$_2$MgPb based on the density functional theory and linearized Boltzmann equation.
Despite their very low mass densities and simple crystal structures, both materials show very
low lattice thermal conductivities, compared with other well-known TE materials.
The lattice thermal conductivities along $a$ and $c$ axes of Na$_2$MgSn are 1.75 and 0.80 W/m$\cdot$K
respectively at 300 K, while they are much lower in Na$_2$MgPb,
which are 0.51 and 0.31 W/m$\cdot$K along $a$ and $c$ axes at the same temperature.
The main reason for the low thermal conductivities is their large anharmonicity.
Na$_2$MgPb could not be a good TE material due to its metallicity.
However, we predict that Na$_2$MgSn is a potential room-temperature TE material with a considerably
large $ZT$ of at least 0.34.
Since intermetallics form a large material family, our work may stimulate
further experimental and theoretical research on thermoelectricity in simple layered intermetallic compounds.
\textit{Note Added:} During the preparation of this manuscript, B. Peng \textit{et al.} predicted that
Na$_2$MgPb is a Dirac semi-metal, while Na$_2$MgSn is a trivial indirect semiconductor with
a small band gap of about 0.2 eV. \cite{peng}
\begin{acknowledgments}
We thank Dr. Yang Han for invaluable discussions.
This work is supported by the National Key R\&D Program of China (2016YFA0201104) and
the National Science Foundation of China (Nos. 91622122 and 11474150).
The use of the computational resources in the High Performance Computing Center of Nanjing University
for this work is also acknowledged.
\end{acknowledgments}
\section{ParkPredict+}
\label{sec:approach}
Our model has two main components: 1) a Convolutional Neural Network (CNN) model to predict vehicle intent based on local contextual information, and 2) a CNN-Transformer model to predict future trajectory based on the trajectory history, image history, and the predicted intent.
\subsection{Intent Prediction}
\label{sec:approach-intent}
We first consider a distribution
\begin{equation*}
\Tilde{p} = \left[ p_{\mathrm{s}}^{[1]}, \dots, p_{\mathrm{s}}^{[M_{\mathrm{s}}]}, p_{-\mathrm{s}}\right] \in \Delta^{M_{\mathrm{s}}}
\end{equation*}
where there are only the probabilities of choosing the spots $\{1, \dots, M_{\mathrm{s}}\}$ and bypassing all of them (denoted $p_{-\mathrm{s}}$). Compared to Eq.~\eqref{eq:intent_prob_all}, it is intuitive to have
\begin{equation*}
p_{-\mathrm{s}} = \sum_{j=1}^{M_{\mathrm{d}}} p_{\mathrm{d}}^{[j]}.
\end{equation*}
Given the $i$-th spot intent $\eta_{\mathrm{s}}^{[i]} = (x_{\mathrm{s}}^{[i]}, y_{\mathrm{s}}^{[i]})$, we can generate a spot-specific image $I^{[i]}(0)$ by painting the corresponding spot a different color in the BEV image $I(0)$, as shown in Fig.~\ref{fig:intent_model}. We could also generate supplementary features for this intent, such as the distance to it $\| \eta_{\mathrm{s}}^{[i]} \|$, and the difference of heading angle $| \Delta \psi^{[i]} | = | \mathrm{arctan2} \left( y_{\mathrm{s}}^{[i]}, x_{\mathrm{s}}^{[i]} \right) |$.
Denoting by $f: \mathbb{R}^{n \times n} \times \mathbb{R} \times \mathbb{R} \rightarrow [0, 1]$ a mapping that assigns a ``score'' to an intent, we can compute the scores for all $M_{\mathrm{s}}$ spot intents as
\begin{equation}
\hat{s}^{[i]} = f\left(I^{[i]}(0), \| \eta_{\mathrm{s}}^{[i]} \|, | \Delta \psi^{[i]} | \right), i \in \mathcal{N}_{\mathrm{s}}.
\end{equation}
Without painting any particular spot on $I(0)$, and with the distance and angle set to $0$, the score for bypassing all spots is
\begin{equation}
\hat{s}^{[0]} = f\left(I(0), 0, 0 \right).
\end{equation}
Getting a higher score means that the corresponding intent is more likely.
\begin{figure}
\vspace{0.5em}
\centering
\includegraphics[width=\linewidth]{intent_model.pdf}
\caption{\small Mapping $f$ constructed by CNN and feed forward layers. The $i$-th spot is painted in purple on image $I^{[i]}(0)$.}
\label{fig:intent_model}
\vspace{-1em}
\end{figure}
We train a CNN based model as the mapping $f$. The model architecture is demonstrated in Fig.~\ref{fig:intent_model}. The loss function is binary cross-entropy (BCE) between the predicted score $\hat{s}^{[i]}$ and the ground truth label $s^{[i]}_\mathrm{gt}$
\begin{equation}
\ell = - \left[s^{[i]}_\mathrm{gt} \log\hat{s}^{[i]} + \left( 1 - s^{[i]}_\mathrm{gt}\right) \log \left( 1-\hat{s}^{[i]} \right)\right] ,
\end{equation}
where $s^{[i]}_\mathrm{gt}=1$ if the vehicle chooses the $i$-th spot, and $s^{[i]}_\mathrm{gt}=0$ if not.
The probability outputs are normalized scores
\begin{subequations}
\begin{align}
p_{\mathrm{s}}^{[i]} & = \frac{\hat{s}^{[i]}}{\sum_{j=0}^{M_{\mathrm{s}}} \hat{s}^{[j]}}, i \in \mathcal{N}_{\mathrm{s}}, \\
p_{-\mathrm{s}} & = \frac{\hat{s}^{[0]}}{\sum_{j=0}^{M_{\mathrm{s}}} \hat{s}^{[j]}}.
\end{align}
\end{subequations}
To split $p_{-\mathrm{s}}$ into the probabilities of continuing to drive through different lanes, we generate a set of weights $\left\{ w^{[j]} \in [0,1] | j \in \mathcal{N}_{\mathrm{d}} \right\}$ as an arithmetic sequence and reorder them based on a simple heuristic: an intent $\eta_{\mathrm{d}}^{[j]}$ that requires more steering and a longer driving distance receives a lower weight $w^{[j]}$. Then, the probabilities are split as
\begin{equation}
p_{\mathrm{d}}^{[j]} = \frac{w^{[j]}}{\sum_{k=1}^{M_{\mathrm{d}}} w^{[k]}} p_{-\mathrm{s}}, j \in \mathcal{N}_{\mathrm{d}}.
\end{equation}
We would like to highlight here that since $f$ is only a mapping from a single intent to its score, the intent prediction model proposed above is invariant to the number of detected intents $M_{\mathrm{s}}$ and $M_{\mathrm{d}}$ in the local context.
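The normalization and splitting steps above can be sketched end to end as follows; the raw scores and lane weights in the example are made up for illustration:

```python
import numpy as np

def intent_probabilities(scores, drive_weights):
    """scores: [s_hat_0 (bypass), s_hat_1 .. s_hat_Ms]; drive_weights: w_j per lane intent."""
    scores = np.asarray(scores, dtype=float)
    p = scores / scores.sum()          # normalized scores
    p_bypass, p_spots = p[0], p[1:]    # p_{-s} and the spot probabilities p_s^[i]
    w = np.asarray(drive_weights, dtype=float)
    p_drive = p_bypass * w / w.sum()   # split p_{-s} across the lane intents
    return p_spots, p_drive

p_s, p_d = intent_probabilities([0.2, 0.9, 0.5, 0.4], [3.0, 2.0, 1.0])
print(p_s.round(3), p_d.round(3))  # all probabilities sum to 1 overall
```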
\subsection{Trajectory Prediction}
\label{sec:approach-traj}
\begin{figure}
\vspace{0.5em}
\centering
\includegraphics[width=0.8\linewidth]{traj_model.pdf}
\caption{\small Trajectory prediction model based on Transformer.}
\label{fig:traj_model}
\vspace{-1em}
\end{figure}
We leverage the Multi-Head Attention mechanism and Transformer architecture~\cite{vaswaniAttentionAllYou2017} to construct the trajectory prediction model $\mathcal{F}: \mathbb{R}^{N_{\mathrm{hist}} \times 3} \times \mathbb{R}^{N_{\mathrm{hist}} \times n \times n} \times \mathbb{R}^{2} \rightarrow \mathbb{R}^{N_{\mathrm{pred}} \times 3}$, which predicts future trajectory $\mathcal{Z}_{\mathrm{pred}}$ with the trajectory history $\mathcal{Z}_{\mathrm{hist}}$, image history $\mathcal{I}_{\mathrm{hist}}$, and intent $\eta$. The model structure is illustrated in Fig.~\ref{fig:traj_model} and will be elaborated below.
\subsubsection{Positional Encoding}
As pointed out in~\cite{vaswaniAttentionAllYou2017}, the Attention mechanism contains no recurrence or convolution, so we need to inject a unique positional encoding to inform the model of the relative position of the data along the horizon. The positional encoding mask $PE$ is calculated as sinusoidal waves~\cite{suiJointIntentionTrajectory2021}:
\begin{equation}
\label{eq:positional_encoding}
PE_{t, i} = \left\{
\begin{aligned}
& \sin \left( \frac{t}{10000^{i/D}} \right), i = 2k \\
& \cos \left( \frac{t}{10000^{(i-1)/D}} \right), i = 2k+1
\end{aligned}
\right.
\end{equation}
where $t$ denotes the time step along the input horizon, $D$ denotes the model dimension, $i \in \{1, \dots, D\}$.
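For even model dimension $D$, the sinusoidal mask above can be generated as in the following NumPy sketch:

```python
import numpy as np

def positional_encoding(horizon, D):
    """Sinusoidal positional encoding: sin on even dimensions, cos on odd ones."""
    PE = np.zeros((horizon, D))
    t = np.arange(horizon)[:, None]   # time steps along the horizon
    k = np.arange(0, D, 2)[None, :]   # even dimension indices
    angle = t / np.power(10000.0, k / D)
    PE[:, 0::2] = np.sin(angle)
    PE[:, 1::2] = np.cos(angle)
    return PE

pe = positional_encoding(10, 8)
print(pe.shape)  # (10, 8)
```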
\subsubsection{Transformer Encoder}
\label{sec:transformer_encoder}
The image history $\mathcal{I}$ is first processed by a base CNN network $g: \mathbb{R}^{n \times n} \rightarrow \mathbb{R}^{d_{\mathrm{img}}}$ to encode contextual information.
\begin{equation}
X^{\mathrm{in}}_{\mathrm{img}}(t) = g(I(t)), t \in \mathcal{T}_{\mathrm{hist}}.
\end{equation}
Subsequently the processed image history is concatenated with the trajectory history, projected to the model dimension $D$ with a linear layer $\Phi_{\mathrm{en}}: \mathbb{R}^{d_{\mathrm{img}}+3} \rightarrow \mathbb{R}^{D}$, and summed with the positional encoding as in Eq.~\eqref{eq:positional_encoding}:
\begin{equation}
X^{\mathrm{in}}_{\mathrm{en}}(t) = \Phi_{\mathrm{en}}\left[ \mathrm{Concat}\left( X^{\mathrm{in}}_{\mathrm{img}}(t), z(t)\right) \right] \oplus PE_{t}, t \in \mathcal{T}_{\mathrm{hist}}
\end{equation}
We apply a classical Transformer Encoder $\mathcal{F}_{\mathrm{en}}: \mathbb{R}^{N_{\mathrm{hist}} \times D} \rightarrow \mathbb{R}^{N_{\mathrm{hist}} \times D}$ that consists of $n_{\mathrm{head}}$ self-attention layers and a fully connected layer:
\begin{equation}
X^{\mathrm{out}}_{\mathrm{en}} = \mathcal{F}_{\mathrm{en}}(X^{\mathrm{in}}_{\mathrm{en}}).
\end{equation}
Residual connections are employed around each layer.
\subsubsection{Intent Embedding}
Given a certain intent $\eta \in \mathbb{R}^2$, we apply a fully connected layer $\Phi_{\mathrm{it}}: \mathbb{R}^2 \rightarrow \mathbb{R}^D$ to embed the intent as latent state for the Transformer Decoder:
\begin{equation}
X^{\mathrm{out}}_{\mathrm{it}} = \Phi_{\mathrm{it}}(\eta).
\end{equation}
We use the ground truth intent to train the model, and at run time we obtain multimodal trajectory predictions by picking intents with high probabilities.
\subsubsection{Transformer Decoder}
The decoder predicts the future trajectory in an autoregressive fashion. The input of the decoder $X_{\mathrm{de}}^{\mathrm{in}} \in \mathbb{R}^{N_{\mathrm{{pred}}} \times D}$ is the trajectory prediction shifted one step to the right, together with a linear projection and positional encoding as the encoder in Sect.~\ref{sec:transformer_encoder}. The Masked Multi-Head Attention block also prevents the decoder from looking ahead to future time steps.
The Encoder Multi-Head Attention block uses the Transformer Encoder output $X^{\mathrm{out}}_{\mathrm{en}} \in \mathbb{R}^{N_{\mathrm{hist}} \times D}$ as the key ($K$) and value ($V$), and the output of the Masked Multi-Head Attention block $X^{\mathrm{out}}_{\mathrm{mm}} \in \mathbb{R}^{N_{\mathrm{pred}} \times D}$ as the query ($Q$) to compute cross-attention
\begin{equation}
\mathrm{attn}_{\mathrm{en}}=\mathrm{softmax}\left(\frac{X^{\mathrm{out}}_{\mathrm{mm}} X^{\mathrm{out}, \top}_{\mathrm{en}}}{\sqrt{D}}\right) X^{\mathrm{out}}_{\mathrm{en}}.
\end{equation}
We add the third block, Intent Multi-Head Attention, to compute the attention weights using intent so that the final trajectory prediction will be affected by intent. Here, we use intent embedding $X^{\mathrm{out}}_{\mathrm{it}} \in \mathbb{R}^{D}$ as key ($K$) and value ($V$), and the output of the previous Multi-Head Attention block $X^{\mathrm{out}}_{\mathrm{ma}} \in \mathbb{R}^{N_{\mathrm{pred}} \times D}$
\begin{equation}
\mathrm{attn}_{\mathrm{it}}=\mathrm{softmax}\left(\frac{X^{\mathrm{out}}_{\mathrm{ma}} X^{\mathrm{out}, \top}_{\mathrm{it}}}{\sqrt{D}}\right) X^{\mathrm{out}}_{\mathrm{it}}.
\end{equation}
Finally, we apply fully connected layers at the end to generate the trajectory output $\mathcal{Z}_{\mathrm{pred}} = \left\{ \hat{z}(t) \right\}^{N_{\mathrm{pred}}}_{t=1} \in \mathbb{R}^{N_{\mathrm{pred}} \times 3}$. Residual connections are also employed in the decoder.
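A single-head sketch of the cross-attention blocks above (the real model uses multiple heads and learned projections of $Q$, $K$, and $V$; the shapes mirror $X^{\mathrm{out}}_{\mathrm{mm}}$ attending over $X^{\mathrm{out}}_{\mathrm{en}}$):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    # attn = softmax(Q K^T / sqrt(D)) V  (single head, no masking)
    D = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(D)) @ V

rng = np.random.default_rng(0)
N_pred, N_hist, D = 10, 10, 16
Q = rng.standard_normal((N_pred, D))      # decoder queries
K = V = rng.standard_normal((N_hist, D))  # encoder outputs
out = cross_attention(Q, K, V)
print(out.shape)  # (10, 16)
```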
The loss function is L1 loss since 1) the gradient does not decrease as the prediction gets closer to the ground truth, and 2) L1 is more resistant to outliers in the dataset:
\begin{equation}
\mathcal{L} = \frac{1}{3N_{\mathrm{pred}}} \sum_{t=1}^{N_{\mathrm{pred}}} \sum_{i=1}^3 | \hat{z}(t)_i - z_{\mathrm{gt}}(t)_i |.
\end{equation}
\section{Conclusion}
\label{sec:conclusion}
In this work, we investigate the problem of multimodal intent and trajectory prediction for vehicles in parking lots. CNN and Transformer networks are leveraged to build the prediction model, which takes the trajectory history and vehicle-centered BEV images as input. The proposed model can be flexibly applied to various types of parking maps with an arbitrary number of intents. The results show that the model learns to predict the intent with almost 100\% accuracy among the top-3 candidates, and to generate multimodal trajectory rollouts. Furthermore, we release the DLP dataset, the first-ever human driving dataset from a parking lot, together with its software kit. The dataset can be applied to a wide range of prospective applications.
\section{Experiments}
\label{sec:experiments}
\subsection{Dataset}
\label{sec:exp-dataset}
\begin{figure}
\vspace{0.5em}
\centering
\begin{subfigure}[t]{0.48\columnwidth}
\centering
\includegraphics[width=\textwidth]{dlp_annotation.png}
\caption{\small Annotated video data.}
\label{fig:dlp_annotation}
\end{subfigure}%
~
\begin{subfigure}[t]{0.48\columnwidth}
\centering
\includegraphics[width=\textwidth]{dlp_semantic.png}
\caption{\small Rasterized semantic view.}
\label{fig:dlp_semantic}
\end{subfigure}
\caption{\small DLP dataset.}
\vspace{-1em}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{dlp_interactive_examples.png}
\caption{\small Interaction-intense scenarios in DLP dataset.}
\label{fig:dlp_interaction}
\vspace{-2em}
\end{figure}
We collected 3.5 hours of video data by flying a drone above the parking lot of Dragon Lake Wetland Park at Zhengzhou, Henan, China. We therefore refer to it as the Dragon Lake Parking (DLP) dataset. The videos were taken in 4K resolution, covering a parking area of 140 m $\times$ 80 m with about 400 parking spots (Fig.~\ref{fig:dlp_annotation}). With precise annotation, we obtain the dimension, position, heading, velocity, and acceleration of all vehicles at 25 fps. Abundant vehicle parking maneuvers and interactions are recorded, as shown in Fig.~\ref{fig:dlp_interaction}. To the best of our knowledge, this is the first and largest public dataset designated for the parking scenario, featuring high data accuracy and a rich variety of realistic human driving behavior.
We are also releasing a Python toolkit~\footnote{ \url{https://github.com/MPC-Berkeley/dlp-dataset}.} which provides convenient APIs to query and visualize data (Fig.~\ref{fig:dlp_semantic} and all BEV images included in this paper). The data is organized in a graph structure with the following components:
\begin{enumerate}
\item Instance: An instance is the state of a vehicle at a time step, which includes position, orientation, velocity, and acceleration. It also points to the preceding / subsequent instance along the vehicle's trajectory.
\item Agent: An agent is a vehicle that has moved in this scene. It contains the vehicle's dimension, type, and trajectory as a list of instances.
\item Frame: A frame is a discrete sample from the recording. It contains a list of visible instances at this time step, and points to the preceding / subsequent frame.
\item Obstacle: Obstacles are vehicles that never move in this recording.
\item Scene: A scene represents a consecutive video recording with a certain length. It points to all frames, agents, and obstacles in this recording.
\end{enumerate}
The entire DLP dataset contains $30$ scenes, $317,873$ frames, $5,188$ agents, and $15,383,737$ instances.
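As an illustration of the graph structure, the first two components could be represented roughly as below; the field names are simplified stand-ins for this sketch, not the actual toolkit API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Instance:
    """State of one vehicle at one time step."""
    position: tuple                    # (x, y), meters
    heading: float                     # rad
    velocity: float                    # m/s
    acceleration: float                # m/s^2
    prev: Optional["Instance"] = None  # preceding instance on the trajectory
    next: Optional["Instance"] = None  # subsequent instance

@dataclass
class Agent:
    """A vehicle that moves in the scene."""
    dimensions: tuple                  # (length, width), meters
    vehicle_type: str
    trajectory: List[Instance] = field(default_factory=list)

car = Agent(dimensions=(4.5, 1.8), vehicle_type="sedan",
            trajectory=[Instance((0.0, 0.0), 0.0, 1.0, 0.0)])
print(len(car.trajectory))  # 1
```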
In this work, we use the sensing limit $L=10$ m and resolution $r = 0.1$ m/pixel, so that the semantic BEV image $I$ is of size $200\times 200$. The sampling time interval is $\Delta t = 0.4$ s, and $N_\mathrm{hist} = N_\mathrm{tail} = N_\mathrm{pred} = 10$. In other words, a total of $8$ s of information history is encoded in the inputs, and we predict the vehicle trajectory over the next $4$ s.
After data cleaning, filtering, and random shuffling, we obtain a training set of size 51750 and a validation set of size 5750. The models are trained on an Alienware Area 51m PC with a 9th Gen Intel Core i9-9900K, 32GB RAM, and an NVIDIA GeForce RTX 2080 GPU. The source code is available online~\footnote{\small \url{https://github.com/XuShenLZ/ParkSim/}.}.
Since most well-known motion prediction benchmarks do not cover the parking scenario, we choose the physics-based EKF model presented in the ParkPredict~\cite{shenParkPredictMotionIntent2020} paper as our baseline. The CNN-LSTM approach in that work requires a different, global map encoding, so a fair comparison cannot be established.
\subsection{Intent Prediction}
\label{sec:exp-intent}
\subsubsection{Hyperparameters}
\begin{table}[t]
\vspace{0.5em}
\centering
\caption{\small Hyperparameters of the Intent Prediction Model}
\label{tab:intent_parameter}
\begin{tabular}{@{}ll@{}}
\toprule
\multicolumn{1}{c}{Types} & \multicolumn{1}{c}{Parameters} \\ \midrule
Conv2d $\rightarrow$ BatchNorm & 8 $\times$ (7 $\times$ 7) $\rightarrow$ 8 \\
Dropout $\rightarrow$ LeakyReLU $\rightarrow$ MaxPool & 0.2 $\rightarrow$ 0.01 $\rightarrow$ 2 \\ \midrule
Conv2d $\rightarrow$ BatchNorm & 8 $\times$ (5 $\times$ 5) $\rightarrow$ 8 \\
Dropout $\rightarrow$ LeakyReLU $\rightarrow$ MaxPool & 0.2 $\rightarrow$ 0.01 $\rightarrow$ 2 \\ \midrule
Conv2d $\rightarrow$ BatchNorm & 3 $\times$ (3 $\times$ 3) $\rightarrow$ 3 \\
Dropout $\rightarrow$ LeakyReLU $\rightarrow$ MaxPool & 0.2 $\rightarrow$ 0.01 $\rightarrow$ 2 \\ \midrule
Linear & 6629 $\times$ 100 \\
Linear \& Sigmoid & 100 $\times$ 1
\end{tabular}
\vspace{-2em}
\end{table}
The hyperparameters of the intent prediction model in Fig.~\ref{fig:intent_model} are outlined in Table~\ref{tab:intent_parameter}. We use the Adam optimizer with learning rate $0.001$ and stop early at convergence.
\subsubsection{Evaluation Metrics}
We use the top-$k$ accuracy to evaluate our model and compare with the baseline: Let the set $\Gamma_k \subseteq \{1, \dots, M\}$ include the $k$ most likely intent indices in the predicted distribution $\hat{p}$ and the ground truth intent be at index $l$, then the top-$k$ accuracy $\mathcal{A}_k$ is computed as
\begin{equation*}
\mathcal{A}_k = \frac{1}{M_\mathrm{D}} \sum_{i=1}^{M_\mathrm{D}} \mathbb{I} \left( l \in \Gamma_k \right).
\end{equation*}
The variable $M_\mathrm{D}$ here corresponds to the cardinality of the validation set and $\mathbb{I}(\cdot)$ is the indicator function.
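For concreteness, the top-$k$ accuracy metric can be implemented as follows (a straightforward NumPy sketch, not the paper's evaluation code):

```python
import numpy as np

def top_k_accuracy(pred_probs, labels, k):
    """Fraction of samples whose ground-truth intent index is among the
    k highest-probability intents.

    pred_probs: (M_D, M) predicted distributions, labels: (M_D,) indices."""
    pred_probs = np.asarray(pred_probs)
    labels = np.asarray(labels)
    # indices of the k most likely intents per sample (Gamma_k in the text)
    top_k = np.argsort(-pred_probs, axis=1)[:, :k]
    # indicator of the ground truth being covered, averaged over the set
    return np.mean([labels[i] in top_k[i] for i in range(len(labels))])
```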
\subsubsection{Results}
\begin{figure}
\vspace{0.5em}
\centering
\includegraphics[width=0.8\linewidth]{intent_top_k.png}
\caption{\small Top-k accuracy of intent prediction}
\label{fig:intent_top_k}
\vspace{-1em}
\end{figure}
Fig.~\ref{fig:intent_top_k} shows the top-$k$ prediction accuracy up to $k=5$. The proposed method outperforms the EKF baseline for all values of $k$ and achieves almost 100\% top-3 accuracy, which means the ground truth intent is reliably covered when making multimodal predictions.
\subsection{Trajectory Prediction}
\label{sec:exp-traj}
\subsubsection{Hyperparameters}
We use the same convolutional layers as in Table~\ref{tab:intent_parameter} for the Base CNN in the trajectory prediction model. The subsequent feed-forward layers reshape the inputs $X^{\mathrm{in}}_{\mathrm{en}}$ and $X^{\mathrm{in}}_{\mathrm{de}}$ to the model dimension $D=52$. We construct 16 encoder layers, 8 decoder layers, and 4 heads for all Multi-Head Attention blocks. The dropout rate is $0.14$ for all blocks. We use the SGD optimizer with learning rate $0.0025$ and stop early at convergence.
\subsubsection{Evaluation Metrics}
Given a prediction $\mathcal{Z}_{\mathrm{pred}} = \left\{ \hat{z}(t) = \left( \hat{x}(t), \hat{y}(t), \hat{\psi}(t) \right) \right\}^{N_{\mathrm{pred}}}_{t=1}$ and the ground truth label $\left\{ z_{\mathrm{gt}}(t) = \left( x_{\mathrm{gt}}(t), y_{\mathrm{gt}}(t), \psi_{\mathrm{gt}}(t) \right) \right\}^{N_{\mathrm{pred}}}_{t=1}$, we can look at the mean position error $e_{\mathrm{p}}(t)$ and mean absolute error (MAE) of the heading angle $e_{\mathrm{a}}(t)$ as a function of the time step $t$:
\begin{subequations}
\vspace{-1em}
\begin{align*}
e_{\mathrm{p}}(t) & = \frac{1}{M_\mathrm{D}} \sum_{i=1}^{M_\mathrm{D}} \sqrt{\big(\hat{x}^{(i)}(t) - x^{(i)}_{\mathrm{gt}}(t)\big)^2 + \big(\hat{y}^{(i)}(t) - y^{(i)}_{\mathrm{gt}}(t)\big)^2}, \\
e_{\mathrm{a}}(t) & = \frac{1}{M_\mathrm{D}} \sum_{i=1}^{M_\mathrm{D}} \big| \hat{\psi}^{(i)}(t) - \psi^{(i)}_{\mathrm{gt}}(t) \big|.
\end{align*}
\end{subequations}
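Both metrics are simple to compute from stacked predictions; a possible NumPy sketch (our own helper, not the paper's code):

```python
import numpy as np

def trajectory_errors(pred, gt):
    """Per-time-step mean position error e_p(t) and heading MAE e_a(t).

    pred, gt: arrays of shape (M_D, N_pred, 3) holding (x, y, psi)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Euclidean position error per sample/time step, averaged over samples
    e_p = np.mean(np.hypot(pred[..., 0] - gt[..., 0],
                           pred[..., 1] - gt[..., 1]), axis=0)
    # mean absolute heading error, averaged over samples
    e_a = np.mean(np.abs(pred[..., 2] - gt[..., 2]), axis=0)
    return e_p, e_a
```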
\subsubsection{Results}
\begin{figure}
\vspace{0.5em}
\centering
\begin{subfigure}[t]{0.48\columnwidth}
\centering
\includegraphics[width=\textwidth]{distance_error.png}
\caption{\small Positional displacement.}
\label{fig:position_error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.50\columnwidth}
\centering
\includegraphics[width=\textwidth]{angular_error.png}
\caption{\small Difference of heading angle.}
\label{fig:angular_error}
\end{subfigure}
\caption{\small Trajectory prediction error vs time step. Blue line represents the proposed (complete) Transformer model, red and yellow lines represent removing the intent and image inputs respectively, and green line represents the EKF model.}
\label{fig:traj_prediction_result}
\vspace{-1em}
\end{figure}
We report the performance of our trajectory prediction model and the results of an ablation study in Fig.~\ref{fig:traj_prediction_result}. For the positional displacement error in Fig.~\ref{fig:position_error}, our model outperforms the EKF in both the short and the long term. However, when the intent information is removed, the model performs almost the same as the EKF, and when the image is removed, the performance degrades severely. The heading angle error in Fig.~\ref{fig:angular_error} differs less across models but reflects the same trend. These comparisons demonstrate that the model extracts substantial information from the image and intent inputs to generate accurate predictions, especially in the long term.
\subsection{Case Study}
\begin{figure}
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{live_prediction_parking.png}
\caption{\small Multi-modal prediction in the middle of parking maneuver.}
\label{fig:live_prediction_parking}
\end{subfigure}%
\\
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{live_prediction_intersection.png}
\caption{\small Multi-modal prediction while cruising with high speed.}
\label{fig:live_prediction_intersection}
\end{subfigure}
\caption{\small Examples of multimodal prediction. The top-3 modes are visualized by orange, green, and purple. The stars and the text close by indicate the predicted intents and their probabilities. The white trajectories are the ground truth future trajectories.}
\label{fig:live_prediction}
\vspace{-2em}
\end{figure}
Fig.~\ref{fig:live_prediction} presents our prediction for four representative scenarios in the parking lot. The model generates accurate trajectory predictions when compared to the ground truth, while providing other possible behaviors according to the local context. Video link:~{\small \url{https://youtu.be/rjWdeuvRXp8}}.
In Fig.~\ref{fig:live_prediction_parking}, the vehicles have already slowed down and started steering towards parking spots. In the left example, it is still hard to tell the actual intention of the vehicle, so the top-3 intents have similar probabilities. Among them, the mode marked in green matches the ground truth of steering into the corresponding spot. At the same time, the mode in purple describes that the vehicle might continue driving towards the intersection. The mode in orange represents the case that the vehicle might prepare for a reverse parking maneuver. The trajectory prediction in orange is so short that it is mostly occluded by the other two modes, indicating the need for the car to slow down before backing up. The scenario is simpler in the right example, where the model believes with high probability that the vehicle is backing up into the spot marked by the purple intent.
Fig.~\ref{fig:live_prediction_intersection} shows two other scenarios in which the vehicles are still driving at relatively high speed (indicated by the length of the fading tail). In the left example, the model first predicts the most likely spot, marked in orange, and the resulting trajectory prediction matches the ground truth. Since there is an intersection ahead, it is possible that the vehicle will continue driving across it; the model predicts this behavior with the modes in green and purple. The example on the right shows a situation where the vehicle needs to turn right at the intersection. The mode in purple successfully predicts this according to the local context. Since no empty spot is visible, the model assigns lower probabilities to the other two intents along the lanes. The orange and green trajectories also try to reach the corresponding intents by slowing the vehicle down.
\section{Problem Formulation}
\label{sec:formulation}
We aim to generate multimodal intent and trajectory predictions for vehicles driving in parking lots.
\subsection{Inputs}
We make predictions based on two types of input:
\subsubsection{Trajectory history}
Denoting the current time step as $0$, the trajectory history $\mathcal{Z}_{\mathrm{hist}} = \left\{ z(t) \right\}^0_{t=-(N_{\mathrm{hist}}-1)} \in \mathbb{R}^{N_{\mathrm{hist}} \times 3}$ is the sequence of target vehicle states $z(t) = \left(x(t), y(t), \psi(t)\right) \in \mathbb{R}^3$ sampled backward in time from $0$ with horizon $N_{\mathrm{hist}}$ and step interval $\Delta t$. For convenience of notation, we denote the index set of history time steps by $\mathcal{T}_{\mathrm{hist}} = \{ -(N_{\mathrm{hist}}-1), \dots, 0\}$.
To obtain better numerical properties and to generalize the model to different maps, all states mentioned in this paper are expressed in the vehicle body frame at $t=0$, so that $z(0) \equiv (0, 0, 0)$.
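As a concrete sketch, this change of coordinates is a standard rigid transform (our own helper, not part of the paper's code):

```python
import numpy as np

def to_body_frame(states):
    """Express a trajectory history of (x, y, psi) states in the body frame
    of the last state, so that the transformed state at t = 0 is (0, 0, 0)."""
    states = np.asarray(states, dtype=float)
    x0, y0, psi0 = states[-1]
    c, s = np.cos(psi0), np.sin(psi0)
    dx, dy = states[:, 0] - x0, states[:, 1] - y0
    # rotate the translated positions by -psi0 and shift the headings
    return np.column_stack([c * dx + s * dy,
                            -s * dx + c * dy,
                            states[:, 2] - psi0])
```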
\subsubsection{Local Contextual Information}
\begin{figure}
\vspace{0.5em}
\centering
\begin{subfigure}[t]{0.44\columnwidth}
\centering
\includegraphics[width=\textwidth]{inst_centric_wo_tail.png}
\caption{\small $N_\mathrm{tail}=0$}
\label{fig:inst_centric_wo_tail}
\end{subfigure}%
~
\begin{subfigure}[t]{0.44\columnwidth}
\centering
\includegraphics[width=\textwidth]{inst_centric_10_tail.png}
\caption{\small $N_\mathrm{tail}=10$.}
\label{fig:inst_centric_10_tail}
\end{subfigure}
\caption{\small Rasterized BEV images with different fading tails.}
\vspace{-1em}
\end{figure}
Similarly to~\cite{djuricShorttermMotionPrediction2018}, we generate a rasterized bird's eye view (BEV) image of size $n \times n$ and resolution $r$ that encodes the local environment and neighboring agents of the target vehicle. The image covers the sensing limit $L$ of the target vehicle in all four directions. Each color represents a set of objects with a common type. As shown in Fig.~\ref{fig:inst_centric_wo_tail}, the driving lanes are plotted in gray, static obstacles in blue, open spaces in green, the target vehicle in red, and other agents in yellow.
Since we are interested in representing the local context of the target vehicle, we always position the target vehicle at the center pixel $(n/2, n/2)$ and rotate the image so that the vehicle always faces east. Denoting the image at time $t$ as $I(t)$, along the same horizon $\mathcal{T}_{\mathrm{hist}}$, we obtain the image histories as $\mathcal{I} = \left\{ I(t) | t \in \mathcal{T}_{\mathrm{hist}} \right\} \in \mathbb{R}^{N_{\mathrm{hist}} \times n \times n }$.
The agents' motion histories can also be encoded in a single image by plotting polygons with reduced brightness, resulting in a ``fading tail'' behind each agent. By setting the length of the fading tail $N_{\mathrm{tail}}$, a longer history up to $t = - (N_\mathrm{hist} + N_{\mathrm{tail}}-1)$ can be encoded implicitly in $\mathcal{I}$. Fig.~\ref{fig:inst_centric_10_tail} shows a BEV image with $N_{\mathrm{tail}}=10$.
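One simple way to render such a fading tail, given per-time-step footprint masks of an agent, is to overlay them with linearly decaying brightness (an illustrative sketch, not necessarily the rasterizer used in the paper):

```python
import numpy as np

def draw_fading_tail(image, footprints, color, n_tail):
    """Overlay past agent footprints with linearly decaying brightness.

    image:      (H, W, 3) float array in [0, 1]
    footprints: list of (H, W) boolean masks, oldest first
    color:      RGB triple for this agent type"""
    color = np.asarray(color, dtype=float)
    # iterate newest-to-oldest over the last n_tail footprints
    for age, mask in enumerate(reversed(footprints[-n_tail:])):
        brightness = 1.0 - age / n_tail   # newest footprint is brightest
        image[mask] = np.maximum(image[mask], brightness * color)
    return image
```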
Note that both the trajectory and image inputs can be constructed from on-board sensors such as LiDAR. The model is therefore adaptable to different environments without access to a global map or Vehicle-to-Everything (V2X) technologies.
\subsection{Outputs}
The outputs are the probability distributions over intents and future trajectory sequence:
\subsubsection{Intent}
We define the intent to be a location $\eta = (x, y)$ in the local context that the vehicle is ``aiming at'' in the long term. The vehicle does not need to reach $\eta$ at the end of the prediction horizon, but the effort to approach it is implicitly contained along the trajectory.
In this work, we are interested in two types of intents around the target vehicle: empty parking spots and lanes. We assume the existence of algorithms to detect all possible intents from the BEV or other on-board sensor outputs, as shown in Fig.~\ref{fig:intents_example}. Supposing that there are $M_{\mathrm{s}}$ empty spots and $M_{\mathrm{d}}$ lanes, the output of the intent prediction is a probability distribution over the $M = M_{\mathrm{s}} + M_{\mathrm{d}}$ detected intents
\begin{equation}
\label{eq:intent_prob_all}
\hat{p} = \left[ p_{\mathrm{s}}^{[1]}, \dots, p_{\mathrm{s}}^{[M_{\mathrm{s}}]}, p_{\mathrm{d}}^{[1]}, \dots, p_{\mathrm{d}}^{[M_{\mathrm{d}}]}\right] \in \Delta^{M-1}
\end{equation}
where $p_{\mathrm{s}}^{[i]}, i\in \mathcal{N}_{\mathrm{s}} = \{1,\dots,M_{\mathrm{s}}\}$ represents the probability that the vehicle will choose the $i$-th parking spot centered at $\eta_{\mathrm{s}}^{[i]} = (x_{\mathrm{s}}^{[i]}, y_{\mathrm{s}}^{[i]})$, and $p_{\mathrm{d}}^{[j]}, j\in \mathcal{N}_{\mathrm{d}} = \{1,\dots,M_{\mathrm{d}}\}$ represents the probability that the vehicle will bypass all spots and continue driving along the $j$-th lane towards $\eta_{\mathrm{d}}^{[j]} = (x_{\mathrm{d}}^{[j]}, y_{\mathrm{d}}^{[j]})$, which is at the boundary of its sensing limit.
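Since $M$ varies per sample with the number of detected spots and lanes, the output head must map a variable-length score vector onto the simplex $\Delta^{M-1}$. One simple way to do this is a numerically stable softmax over per-intent scores; this is an illustrative choice, not necessarily the exact head used in the model:

```python
import numpy as np

def intent_distribution(scores):
    """Map M per-intent scores (spot scores first, then lane scores)
    onto the probability simplex, for any M >= 1."""
    scores = np.asarray(scores, dtype=float)
    z = np.exp(scores - scores.max())   # subtract max for numerical stability
    return z / z.sum()
```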
\begin{figure}
\vspace{0.5em}
\centering
\includegraphics[width=0.5\linewidth]{intent_example_add_star.png}
\caption{\small The orange stars indicate the detected possible intents, including the center of 4 empty spots and 3 lanes to drive away.}
\label{fig:intents_example}
\vspace{-1em}
\end{figure}
\subsubsection{Trajectory}
The trajectory output $\mathcal{Z}_{\mathrm{pred}} = \left\{ \hat{z}(t) \right\}^{N_{\mathrm{pred}}}_{t=1} \in \mathbb{R}^{N_{\mathrm{pred}} \times 3}$ is the sequence of future vehicle states along the prediction horizon $N_{\mathrm{pred}}$.
As we will discuss in Section~\ref{sec:approach-traj}, the trajectory prediction model takes the intent as input, so given a probability $p^{[m]}$ of choosing intent $\eta^{[m]}, m \in \{1, \dots, M\}$, the resulting trajectory prediction $\mathcal{Z}_{\mathrm{pred}}^{[m]}$ is also assigned probability $p^{[m]}$.
\section{Introduction}
\label{sec:introduction}
While the rapid advancement of self-driving technology in the past decade has brought people much closer to an era with automated mobility, autonomous vehicles (AVs) still face great challenges in interacting with other road users safely and efficiently. In addition to having a robust perception system, the ability to make predictions and infer the potential future of other vehicles will be essential for AVs to make optimal decisions.
Researchers have made great strides in the field of motion prediction. Physics model-based methods~\cite{lefevreSurveyMotionPrediction2014} such as the Kalman Filter and its variants~\cite{schubertcomparison2008}, which leverage the kinematic and dynamic properties of the vehicle to propagate the state of the vehicle forward, are easy to implement and generate intuitive short-term predictions. Reachability study~\cite{leungInfusingReachabilitybasedSafety2020} also provides a formal way to measure the uncertainties of vehicle behavior for the control design.
When vehicles operate in complex environments, model-based approaches tend to suffer from the difficulty of accurate modeling, along with the burden of heavy computation. In contrast, deep learning methods have demonstrated great potential to incorporate various forms of information and generalize to new environments. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks~\cite{maTrafficPredictTrajectoryPrediction2019} are widely known for learning sequential data, and various research papers have also focused on adapting the networks to account for the interactions among nearby agents~\cite{alahiSocialLSTMHuman2016}. There is also interest in using Convolutional Neural Networks (CNNs) to make predictions~\cite{cuiMultimodalTrajectoryPredictions2019, djuricShorttermMotionPrediction2018}, where vehicle trajectories and local environments can be embedded in images efficiently.
In recent years, Transformer networks~\cite{vaswaniAttentionAllYou2017} have achieved great success in Natural Language Processing tasks. Their attention mechanism helps to keep track of global dependencies in input and output sequences regardless of the relative position, which overcomes the limitations of RNNs in learning from long temporal sequences~\cite{liEndtoendContextualPerception2020}. For trajectory prediction tasks~\cite{quintanarPredictingVehiclesTrajectories2021}, Transformer networks have been shown to outperform the LSTMs in many aspects, including accuracy~\cite{giuliariTransformerNetworksTrajectory2020} and interaction modeling~\cite{liEndtoendContextualPerception2020}.
Most of the existing work mentioned above focuses on pedestrians or vehicles driving in a road network. These environments feature simple dynamics or clear lane markings and traffic rules. However, for vehicles driving in a parking lot, we are faced with the following challenges:
\begin{enumerate}
\item There is no strict enforcement of traffic rules regarding lane directions and boundaries.
\item The vehicles need to perform complex parking maneuvers to drive in and out of the parking spots.
\item There are few public datasets available for vehicles in parking lots.
\end{enumerate}
The ParkPredict~\cite{shenParkPredictMotionIntent2020} work addresses the vehicle behavior prediction problem in parking lots by using LSTM and CNN networks. However, in~\cite{shenParkPredictMotionIntent2020} the driving data was collected using a video game simulator to model just a single vehicle. Also, the problem formulation was restricted to a specific global map and could not be generalized. In this work, we present ParkPredict+, an extensible approach which generates multimodal intent and trajectory predictions based on CNN and Transformer networks. We also present the Dragon Lake Parking (DLP) dataset, the first human driving dataset in a parking lot environment. We offer the following contributions:
\begin{enumerate}
\item We propose a CNN-based model to predict the probabilities of vehicle intents in parking lots. The model is agnostic to the global map and the number of intents.
\item We propose a Transformer and CNN-based model to predict the future vehicle trajectory based on intent, image, and trajectory history. Multimodality is achieved by coupling the model with the top-k intent prediction.
\item We release the DLP dataset and its Python toolkit for autonomous driving research in parking lots.
\end{enumerate}
The paper is organized as follows: Section~\ref{sec:formulation} formulates the multimodal intent and trajectory prediction problem, Section~\ref{sec:approach} elaborates on the model design of ParkPredict+, Section~\ref{sec:experiments} discusses the dataset, experiment settings, and results, and finally Section~\ref{sec:conclusion} concludes the paper.
\section*{ACKNOWLEDGMENT}
We would like to thank Michelle Pan and Dr. Vijay Govindarajan for their contributions to building the DLP dataset.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Random walks in random environments (RWREs) are well-known mathematical
models for motion
through disorganized (random) media. They generalize
ordinary random walks, usually on the $d$-dimensional lattice $\mathbb Z^d$,
via a two-stage random procedure. First, the environment
is generated according to some probability
distribution (e.g., on a set ${\cal U}^{\, \mathbb Z}$, where ${\cal U}$ is the set
of all possible environment states at any position). Second, the
walker performs an ordinary random walk $\{X_n, n = 0,1, \ldots\}$ in which the transition
probabilities at any state are
determined by the environment at that state. RWREs exhibit
interesting and unusual behavior that is not seen in ordinary random
walks. For example, the walk can tend to infinity almost surely,
while the
speed (also called drift) is 0; that is, $\mathbb P(\lim_{n \rightarrow \infty} X_n
= \infty) = 1$, while $\lim_{n \rightarrow \infty} X_n/n = 0$. The reason for
such surprising behavior is that RWREs can spend a long time in
(rare) regions from which it is difficult to escape --- in effect, the
walker becomes ``trapped'' for a long time.
Since the late 1960s a vast body of knowledge has been built up on the
behavior of RWREs.
Early applications can be found in
Chernov \cite{chernov67} and Temkin \cite{temkin69}; see also
Kozlov \cite{kozlov85} and references therein.
Recent applications to charge transport in designed materials
are given in
Brereton et al.\ \cite{brereton12} and
Stenzel et al.\ \cite{stenzel14}.
The mathematical framework for RWREs was laid by
Solomon \cite{solomon75}, who proved conditions for
recurrence/transience for one-dimensional RWREs and also derived law
of large number properties for such processes.
Kesten et al.\ \cite{kesten75} were the first to establish
central limit-type
scaling laws for transient RWREs, and
Sinai \cite{sinai83} proved such results for the recurrent case,
showing remarkable ``sub-diffusive'' behavior.
Large deviations for these processes were obtained in Greven and Den Hollander
\cite{greven94}. The main focus in these papers was on one-dimensional
random walks in independent
environments.
Markovian environments were investigated in Dolgopyat
\cite{dolgopyat2008} and
Mayer-Wolf et al. \cite{wolf04}.
Alili \cite{alili99} showed that in the one-dimensional case much of the
theory for independent environments
could be
generalized to the case where the environment process is stationary and ergodic.
Overviews of the current state of the art, with a focus on
higher-dimensional RWREs, can be found, for example, in
Hughes \cite{hughes2}, Sznitman \cite{sznitman04},
Zeitouni \cite{zeit04,zeit2012}, and R\'ev\'esz \cite{revesz}.
Although the theoretical behavior of one-dimensional RWREs is nowadays
well understood (in terms of transience/recurrence, laws of large numbers, central limits, and
large deviations), it remains difficult to find easy-to-compute
expressions for key measures such as the drift of the
process. To the best of our knowledge, such expressions are only
available in simple one-dimensional cases with independent random
environments. The purpose of this paper is to develop theory and
methodology for the computation of the drift of the random walk for
various dependent environments, including one where the environment
is obtained as a moving average of independent environments.
The rest of the paper is organized as follows. In
Section~\ref{sec:model} we formulate the model for a one-dimensional
RWRE in a stationary and ergodic environment and review
some of the key results from \cite{alili99}. We then formulate
special cases for the environment: the iid, the Markovian, the
$k$-dependent, and the moving average environment.
In Section~\ref{sec:eval} we derive explicit (computable)
expressions for the drift for each of these models, using a novel
construction involving an auxiliary Markov chain. Conclusions and
directions for future research are given in Section~\ref{sec:concl}.
\section{Model and preliminaries}\label{sec:model}
In this section we review some key results on one-dimensional
RWREs and introduce the class of ``swap-models''
that we will study in more detail. We mostly follow the notation of Alili \cite{alili99}.
\subsection{General theory}
Consider a stochastic
process $\{X_n,n=0,1,2,\ldots\}$ with state space $\mathbb Z$, and a stochastic
``underlying'' environment $\mathbf{U}$ taking values in some set ${\cal
U}^{\,\mathbb Z}$, where ${\cal U}$ is the set of possible environment states
for each site in $\mathbb Z$. We assume that $\mathbf{U}$ is stationary (under $\mathbb P$)
as well as ergodic (under the natural shift operator on $\mathbb Z$).
The evolution of $\{X_n\}$ depends on the realization of $\mathbf{U}$, which is random
but fixed over time. For any realization $\mathbf{u}$ of $\mathbf{U}$ the process $\{X_n\}$ behaves as a simple random walk with transition probabilities
\begin{equation}
\begin{split} \label{transitions}
\mathbb P(X_{n+1} &= i+1 \,|\, X_n = i, \mathbf{U} = \mathbf{u}) = \alpha_i(\mathbf{u})\\
\mathbb P(X_{n+1} &= i-1 \,|\, X_n = i, \mathbf{U} = \mathbf{u}) =
\beta_i(\mathbf{u})=1-\alpha_i(\mathbf{u}).
\end{split}
\end{equation}
The general behavior of $\{X_n\}$ is well
understood. Theorems~\ref{thm:alili1} and \ref{thm:alili2} below
completely describe the transience/recurrence behavior and the Law of
Large Numbers behavior of $\{X_n\}$. The key quantities in these
theorems are given first.
Define
\begin{equation} \label{rhodef}
\sigma_i = \sigma_i(\mathbf{u})=\frac{\beta_i(\mathbf{u})}{\alpha_i(\mathbf{u})}, \quad i \in \mathbb Z\;,
\end{equation}
and let
\begin{equation} \label{Sdef}
S = 1 + \sigma_1 + \sigma_1 \,\sigma_2 + \sigma_1 \,\sigma_2 \,\sigma_3 + \cdots
\end{equation}
and
\begin{equation}\label{Fdef}
F = 1 + \frac{1}{\sigma_{-1}} + \frac{1}{\sigma_{-1}\,\sigma_{-2}} +
\frac{1}{\sigma_{-1}\,\sigma_{-2}\, \sigma_{-3}} + \cdots
\end{equation}
\begin{thm}~(Theorem 2.1 in \cite{alili99}) \label{thm:alili1}
\begin{enumerate}
\item If $\mathbb E [\log \sigma_0]< 0$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;.$
\item If $\mathbb E [\log \sigma_0] > 0$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} =-\infty\;.$
\item If $\mathbb E [\log \sigma_0] = 0$, then almost surely $\displaystyle \liminf_{n\rightarrow \infty} {X_n} =-\infty$ and $\displaystyle \limsup_{n\rightarrow \infty} {X_n} =\infty\;.$
\end{enumerate}
\end{thm}
\begin{thm}~(Theorem 4.1 in \cite{alili99}) \label{thm:alili2}
\begin{enumerate}
\item If $\mathbb E[ S] < \infty$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} \frac{X_n}{n} =
\frac{1}{\mathbb E[(1+\sigma_0)S]} = \frac{1}{2\mathbb E[S]-1}\;.$
\item If $\mathbb E[ F] < \infty$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} \frac{X_n}{n} =
\frac{-1}{\mathbb E[(1+\sigma_0^{-1})F]} = \frac{-1}{2\mathbb E[F]-1}\;.$
\item If $\mathbb E[ S] = \infty$ and $\mathbb E[ F] = \infty$, then almost surely
$\displaystyle \lim_{n\rightarrow \infty} \frac{X_n}{n} = 0$.
\end{enumerate}
\end{thm}
Note that we have added the second equalities in statements 1.\ and
2.\ of Theorem~\ref{thm:alili2}. These follow directly from the stationarity of
$\mathbf{U}$. In particular, if $\theta$ denotes the shift operator on $\mathbb Z$,
then
\begin{align*}
\mathbb E[\sigma_0\sigma_1 \cdots \sigma_{n-1}]
&= \mathbb E\left[\frac{\beta_0(\mathbf{U})\beta_1(\mathbf{U})\cdots\beta_{n-1}(\mathbf{U})}{\alpha_0(\mathbf{U})\alpha_1(\mathbf{U})\cdots\alpha_{n-1}(\mathbf{U})} \right]\\
&= \mathbb E\left[\frac{\beta_1(\theta\mathbf{U})\beta_2(\theta\mathbf{U})\cdots\beta_{n}(\theta\mathbf{U})}{\alpha_1(\theta\mathbf{U})\alpha_2(\theta\mathbf{U})\cdots\alpha_{n}(\theta\mathbf{U})} \right]\\
(\mbox{apply } \theta\mathbf{U} \stackrel{d}{=}\mathbf{U})\ &= \mathbb E\left[\frac{\beta_1(\mathbf{U})\beta_2(\mathbf{U})\cdots\beta_{n}(\mathbf{U})}{\alpha_1(\mathbf{U})\alpha_2(\mathbf{U})\cdots\alpha_{n}(\mathbf{U})} \right]\\
&=\mathbb E[\sigma_1\sigma_2 \cdots \sigma_{n}],
\end{align*}
from which it follows that $\mathbb E[(1+\sigma_0)S] = 2\mathbb E[S]-1$.
We will call $\lim_{n\rightarrow \infty} X_n/n$ the {\em drift} of the
process $\{X_n\}$, and denote it by $V$.
Note that, as mentioned in the introduction, it is possible for the chain to be transient with drift 0 (namely when $\mathbb E [\log \sigma_0] \neq 0$, $\mathbb E[ S] = \infty$ and $\mathbb E[ F] = \infty$).
\subsection{Swap model}\label{ssec:swapmodel}
We next focus on what we will call {\em swap} models.
Here, ${\cal U}=\{-1,1\}$; that is, we
assume that all elements $U_i$ of the process $\mathbf{U}$ take value either
$-1$ or $+1$. We assume that the transition probabilities in
state $i$ depend only on $U_i$, and not on the other elements of $\mathbf{U}$, as
follows. When $U_i=-1$, the transition probabilities of $\{X_n\}$ from
state $i$ to states $i+1 $ and $i-1$ are swapped with respect to the
values they have when $U_i=+1$. Thus, for some fixed value $p$ in
$(0,1)$ we let $\alpha_i(\mathbf{u})=p$ (and $\beta_i(\mathbf{u})=1-p$) if $u_i =
1$, and $\alpha_i(\mathbf{u})=1-p$ (and $\beta_i(\mathbf{u})=p$) if $u_i =
-1$. Thus, (\ref{transitions}) becomes
\[
\mathbb P(X_{n+1} = i+1 \,|\, X_n = i, \mathbf{U} = \mathbf{u}) = \begin{cases}
p & \text{ if } u_i = 1 \\
1-p & \text{ if } u_i = -1
\end{cases}
\]
and
\[
\mathbb P(X_{n+1} = i-1 \,|\, X_n = i, \mathbf{U} = \mathbf{u}) = \begin{cases}
1-p & \text{ if } u_i = 1 \\
p & \text{ if } u_i = -1\;.
\end{cases}
\]
Notice that due to our convenient choice of notation for the states in
${\cal U}=\{-1,1\}$ we have
\[
\sigma_i = \frac{p}{1-p} \mathbb{I}(U_i=-1)+\frac{1-p}p \mathbb{I}(U_i=1) = \sigma^{U_i},
\]
where $\sigma =(1-p)/p$. Also, for the quantities in Theorems~\ref{thm:alili1} and~\ref{thm:alili2} we find the following.
\begin{equation} \label{logrho-swap}
\mathbb E[\log \sigma_0]=\mathbb E[U_0 \log\sigma]=\log\sigma\ \mathbb E[U_0],
\end{equation}
the sign of which (and hence the a.s.\ limit of $X_n$) only depends on
whether $p$ is less than or greater than 1/2, and on whether $\mathbb E[U_0]$ is positive or negative, regardless of the dependence structure between the $\{U_i\}$. Furthermore,
\begin{equation} \label{ES-swap}
\mathbb E[S]=
\sum_{n=0}^\infty \mathbb E\left[\sigma^{\sum_{i=1}^n U_{i}}\right]\quad \mbox{ and } \quad
\mathbb E[F]= \sum_{n=0}^\infty \mathbb E\left[\sigma^{-\sum_{i=1}^n U_{-i}}\right]\;.
\end{equation}
In what follows we will focus on $\mathbb E[S]$, since analogous results for
$\mathbb E[F]$ follow by replacing $\sigma$ with $\sigma^{-1}$ and $p$ with
$1-p$. This follows from the stationarity of $\mathbf{U}$, which implies that for any $n$ the product $\sigma_{-1}\sigma_{-2}\cdots \sigma_{-n}$ has the same distribution as $\sigma_1\sigma_2\cdots \sigma_n$ (apply a shift over $n+1$ positions).
Next, we need to choose a dependence structure for $\mathbf{U}$.
The standard case, first studied by
Sinai \cite{sinai83}, simply assumes that the $\{U_i\}$ are iid
(independent and identically distributed):
\medskip
\noindent{\bf Iid environment.}
Let the $\{U_i\}$ be iid with
\[
\mathbb P(U_{i}=1)=\alpha, \qquad \mathbb P(U_{i}=-1)=1-\alpha
\]
for some $0 < \alpha < 1$. In this case the model has two parameters: $\alpha$ and $p$.
\medskip
\noindent
We extend this to a more general model where $\mathbf{U}$ is
generated by a stationary and ergodic Markov chain $\{Y_i, i \in
\mathbb Z\}$ taking values in a finite set $\{1,\ldots,m\}$. In particular, we let
$U_i = g(Y_i)$, where $g:\{1,\ldots,m\} \rightarrow \{-1,1\}$ is a given
function. Despite its simplicity, this formalism covers a number of
interesting dependence structures on $\mathbf{U}$, discussed next.
\medskip
\noindent{\bf Markov environment.} Let $U_i = Y_i$, where $\{Y_i\}$ is
a stationary discrete-time Markov chain on $\{-1,1\}$, with one-step
transition matrix $P$ given by
\[
P=\left[
\begin{array}{cc}
1-a&a\\
b&1-b
\end{array}
\right],
\]
for some $a,b \in (0,1)$. The $\{U_i\}$ form a dependent Markovian
environment, parameterized by $a$ and $b$.
\medskip
\noindent{\bf $k$-dependent environment.} \label{sec:nstep}
Let $k \geq 1$ be a fixed integer. Our goal is to obtain a generalization of the Markovian environment in which
the conditional distribution of $U_i$ given all other variables
is the same as the conditional distribution of $U_i$ given only
$U_{i-k}, \dots, U_{i-1}$ (or, equivalently, given
$U_{i+1},\ldots,U_{i+k}$).
To this end we define a $k$-dimensional Markov chain $\{Y_i,i\in \mathbb Z\}$ on $\{-1,1\}^k$ as follows.
From any state $(u_{i-k}, \ldots, u_{i-1})$ in $\{-1,1\}^k$, $\{Y_i\}$ has two possible one-step transitions, given by
\[
(u_{i-k}, \ldots, u_{i-1})
\rightarrow
({u}_{i-k+1},\ldots, u_{i-1}, u_i), \quad u_i \in \{-1,1\},
\]
with corresponding probabilities $1-a_{({u}_{i-k},\ldots,{u}_{i-2})}$, $a_{({u}_{i-k},\ldots,{u}_{i-2})}$, $b_{({u}_{i-k},\ldots,{u}_{i-2})}$, and $1-b_{({u}_{i-k},\ldots,{u}_{i-2})}$,
for $(u_{i-1},u_i)$ equal to $(-1,-1),$ $(-1,1),$ $(1,-1)$, and $(1,1)$,
respectively. Now let $U_i$ denote the last component of $Y_i$.
Then $\{U_i, i \in \mathbb Z\}$ is a $k$-dependent environment, and $Y_i=(U_{i-k+1}, \ldots, U_{i})$.
Note the correspondence in notation with the (1-dependent) Markov environment: $a$ indicates transition probabilities from $U_{i-1}=-1$ to $U_{i}=+1$, and $b$ from $U_{i-1}=+1$ to $U_{i}=-1$, where in both cases the subindex denotes the dependence on $U_{i-k}, \ldots, U_{i-2}$.
\medskip
\noindent {\bf Moving average environment.} \label{sec:MAS}
Consider a ``moving average'' environment, which is built up in two phases as follows. First, start with an
iid environment $\{\hat U_i\}$ as in the iid case, with $\mathbb P(\hat U_{i}=1)=\alpha$. Let $Y_i =
(\hat U_i,\hat U_{i+1},\hat U_{i+2})$. Hence, $\{Y_i\}$ is a Markov process
with states $1= (-1,-1,-1), 2 = (-1,-1,1), \ldots, 8 = (1,1,1)$
(lexicographical order). The corresponding transition matrix is clearly given by
\begin{equation} \label{P_movav}
P = \begin{bmatrix}
1-\alpha & \alpha & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1-\alpha & \alpha & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-\alpha & \alpha & 0 & 0 \\
0 & 0 & 0 & 0 & 0& 0& 1-\alpha & \alpha \\
1-\alpha & \alpha & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1-\alpha & \alpha & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-\alpha & \alpha & 0 & 0 \\
0 & 0 & 0 & 0 & 0& 0& 1-\alpha & \alpha \\
\end{bmatrix}.
\end{equation}
Now define $U_i = g(Y_i)$, where $g(Y_i)=1$ if at least two of the three random variables $\hat U_i,\hat U_{i+1}$ and
$\hat U_{i+2}$ are 1, and $g(Y_i)=-1$ otherwise. Thus,
\begin{equation} \label{g_movav}
(g(1), \ldots, g(8)) = (-1,-1,-1,1,-1,1,1,1)\;,
\end{equation}
and we see that each $U_i$ is obtained by taking the moving average of
$\hat{U}_i, \hat{U}_{i+1}$ and $\hat{U}_{i+2}$, as illustrated in
Figure~\ref{fig:movav}.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{movav}
\caption{Moving average environment.}
\label{fig:movav}
\end{figure}
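The construction can be checked mechanically. The sketch below (ours, not part of the paper) encodes the shifted-window transition rule behind (\ref{P_movav}) and the majority rule $g$ of (\ref{g_movav}):

```python
from itertools import product

# States Y_i = (u_i, u_{i+1}, u_{i+2}), lexicographic with -1 before +1,
# so state 1 = (-1,-1,-1), ..., state 8 = (1,1,1).
states = list(product([-1, 1], repeat=3))

def transition(alpha, s, t):
    """P(Y_{i+1} = t | Y_i = s): the window shifts one step and a fresh
    iid entry (equal to +1 w.p. alpha) is appended on the right."""
    if t[:2] != s[1:]:
        return 0.0
    return alpha if t[2] == 1 else 1 - alpha

def g(s):
    """Majority rule: +1 iff at least two of the three entries are +1."""
    return 1 if sum(s) > 0 else -1
```

Each row of the resulting $8\times 8$ matrix has exactly two nonzero entries, matching the pattern of (\ref{P_movav}).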
\section{Evaluating the drift}\label{sec:eval}
As a starting point for the analysis, we begin in Section~\ref{ssec:iid} with the solution for the iid
environment, based on first principles. As mentioned earlier,
this case was first studied by
Sinai \cite{sinai83}.
Then, in
Section~\ref{sec:generalsolution} we give the general solution approach for the Markov-based swap model. Building on this, subsequent sections present results on transience/recurrence and on the drift for the random environments of Section~\ref{ssec:swapmodel}: the Markov environment, the 2-dependent environment, and the moving average environment.
\subsection{Iid environment}\label{ssec:iid}
As a warm-up we consider the iid case first, with $\mathbb P(U_{i}=1)=\alpha=1-\mathbb P(U_{i}=-1)$.
Here,
\[
\mathbb E [\log \sigma_0]=\mathbb E [U_0] \log \sigma = (1-2\alpha) \log\frac{1-p}{p}.
\]
Hence, by Theorem~\ref{thm:alili1}, we have the following findings, consistent with intuition.
$X_n\rightarrow +\infty$ a.s.\ if and only if either $\alpha>1/2$ and $p > 1/2$, or $\alpha<1/2$ and $p<1/2$;
$X_n\rightarrow -\infty$ a.s.\ if and only if either $\alpha>1/2$ and $p < 1/2$, or $\alpha<1/2$ and $p>1/2$; and
$\{X_n\}$ is recurrent a.s.\ if and only if either $\alpha=1/2$, or $p = 1/2$, or both.
Moving on to Theorem~\ref{thm:alili2}, we have
\begin{align}
\mathbb E [S]&=\sum_{n=0}^\infty \mathbb E\left[\sigma^{\sum_{i=1}^n U_{i}}\right] =
\sum_{n=0}^\infty \left(\mathbb E[\sigma^{U_1}]\right)^n = \sum_{n=0}^\infty
(\sigma^{-1}(1-\alpha) + \sigma \alpha)^n, \label{iidS}
\end{align}
which is finite if and only if $\sigma^{-1}(1-\alpha) + \sigma \alpha<1$; that is,
$\mathbb E [S]<\infty$ if and only if either $\alpha>1/2$ and $p \in (1/2, \alpha)$, or $\alpha<1/2$ and $p \in (\alpha, 1/2)$.
Similarly (replace $\sigma$ by $\sigma^{-1}$ and $p$ by $1-p$), $\mathbb E [F] = \sum_{n=0}^\infty
(\sigma(1-\alpha) + \sigma^{-1} \alpha)^n <\infty$ if and only if either $\alpha>1/2$ and $p \in (1-\alpha,1/2)$, or $\alpha<1/2$ and $p \in (1/2, 1-\alpha)$.
Clearly, the conditions for $\mathbb E[S]<\infty$ and $\mathbb E[F]<\infty$ do not cover all of the transient cases identified above. E.g., when $\alpha > 1/2$ and $p \in [\alpha,1]$, the process tends to $+\infty$, but the drift is zero. We summarize our findings in the following theorem.
\begin{thm} \label{thm:iid}~
We distinguish between transient cases with and without drift, and the recurrent case as follows.
\begin{enumerate}
\item[1a.] If either $\alpha > 1/2$ and $p\in (1/2, \alpha)$ or $\alpha < 1/2$ and $p
\in (\alpha, 1/2)$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;$ and
\begin{equation} \label{iid}
V = (2p-1)\frac{\alpha-p}{\alpha(1-p)+(1-\alpha)p} > 0\;.
\end{equation}
\item[1b.] If either
$\alpha > 1/2$ and $p \in (1-\alpha,1/2)$ or
$\alpha < 1/2$ and $p \in (1/2, 1-\alpha)$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;$ and
\begin{equation} \label{iid2}
V = - (1-2p)\frac{\alpha-(1-p)}{\alpha p+(1-\alpha)(1-p)} < 0\;.
\end{equation}
\item[2a.] If either $\alpha > 1/2$ and $p\in [\alpha,1]$ or $\alpha < 1/2$ and $p
\in [0, \alpha]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;,$ but $V=0$.
\item[2b.] If either $\alpha > 1/2$ and $p\in [0, 1-\alpha]$ or $\alpha < 1/2$ and $p
\in [1-\alpha, 1]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;,$ but $V=0$.
\item[3.] Otherwise (when $\alpha = 1/2$ or $p=1/2$ or both), $\{X_n\}$ is recurrent and $V = 0$.
\end{enumerate}
\end{thm}
\begin{proof}
Immediate from the above; (\ref{iid}) follows from (\ref{iidS})
by using $\sigma=(1-p)/p$; similarly for \eqref{iid2}.
\end{proof}
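As a numerical sanity check (ours, not part of the paper), the closed form (\ref{iid}) can be compared with the drift obtained directly from the geometric series (\ref{iidS}) via $V = 1/(2\mathbb E[S]-1)$:

```python
def drift_iid(alpha, p):
    """Closed-form drift in case 1a: V = (2p-1)(alpha-p)/(alpha(1-p)+(1-alpha)p)."""
    return (2*p - 1) * (alpha - p) / (alpha*(1 - p) + (1 - alpha)*p)

def drift_iid_via_series(alpha, p):
    """Drift via V = 1/(2 E[S] - 1), summing the geometric series for E[S]."""
    sigma = (1 - p) / p
    r = (1 - alpha) / sigma + alpha * sigma   # E[sigma^{U_1}]
    assert r < 1, "E[S] diverges for these parameters (zero drift)"
    ES = 1.0 / (1.0 - r)
    return 1.0 / (2.0 * ES - 1.0)
```

For example, $\alpha=0.8$, $p=0.6$ gives $\mathbb E[S]=6$ and $V=1/11$ by both routes.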
\medskip
We illustrate the drift as a function of $\alpha$ and $p$ in Figure~\ref{fig:iidcase}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.7\linewidth,clip=]{rwrepic3}
\caption{Graphical representation of Theorem~\ref{thm:iid}. Solid
lines, where the process is recurrent, divide the remaining
parameter space in four quadrants. In quadrants I and III, $\{X_n\}$
moves to the right; in quadrants II and IV, $\{X_n\}$ moves to the left. In gray areas (including dashed boundaries and boundaries at $p=0, 1$), the drift is zero. In white areas (including boundaries at $\alpha=0, 1$), the drift is nonzero.} \label{fig:iidcase}
\end{center}
\end{figure}
\subsection{General solution for swap models} \label{sec:generalsolution}
Consider a RWRE swap model with a random environment generated by a Markov
chain $\{Y_i, i \in \mathbb Z\}$, as specified in
Section~\ref{sec:model}. We already saw that the a.s.\ limit of $X_n$ only depends on whether $p$ is less than or larger than 1/2, and on whether $\mathbb E[U_0]$ is positive or negative, regardless of the dependence structure between the $\{U_i\}$, see (\ref{logrho-swap}). The other key quantity to evaluate is (see
\eqref{ES-swap}):
\[
\mathbb E[S] =
\sum_{n=0}^\infty \mathbb E\left[\sigma^{\sum_{i=1}^n U_{i}}\right] =
\sum_{n=0}^\infty \mathbb E\left[\sigma^{\sum_{i=1}^n g(Y_{i})}\right]\;.
\]
Let
\[
G^{(n)}_{y}(\sigma) = \mathbb E\left[ \sigma^{\sum_{i=1}^n g(Y_i)} \,|\, Y_0 = y\right],
\quad y = 1,\ldots,m\;.
\]
Let $P = (P_{y, y'})$ be the one-step transition matrix of $\{Y_i\}$.
Then, by conditioning on $Y_1$,
\[
\begin{split}
G^{(n+1)}_y(\sigma) & = \mathbb E\left[ \sigma^{\sum_{i=1}^{n+1} g(Y_i)} \,|\, Y_0 =
y\right] = \mathbb E\left[ \sigma^{\sum_{i=2}^{n+1} g(Y_i)}
\sigma^{g(Y_1)} \,|\, Y_0 =
y\right] \\
& = \sum_{y'=1}^m P_{y,y'} \sigma^{g(y')} G_{y'}^{(n)}(\sigma)\;.
\end{split}
\]
In matrix notation, with $\mathbf{G}^{(n)}(\sigma) =(G_{1}^{(n)}(\sigma), \ldots, G_{m}^{(n)}(\sigma))^{\top}$,
we can write this as
\[
\mathbf{G}^{(n+1)}(\sigma) = P D \mathbf{G}^{(n)}(\sigma),
\]
where
\[
D = \mathrm{diag}(\sigma^{g(1)}, \ldots,\sigma^{g(m)})\;.
\]
It follows, also using $G^{(0)}_y(\sigma)=1$, that
\[
\mathbf{G}^{(n)}(\sigma)=(PD)^n \mathbf{G}^{(0)}(\sigma)=(PD)^n\vect{1},
\]
where $\vect{1} = (1,\ldots,1)^{\top}$ denotes the all-ones vector of length $m$,
and hence
\begin{equation*}
\mathbb E[S] =\sum_{n=0}^\infty \vect{\pi} \mathbf{G}^{(n)}(\sigma)\
=\ \vect{\pi}\sum_{n=0}^\infty (PD)^n \vect{1},
\end{equation*}
where $\vect{\pi}$ denotes the stationary distribution vector for $\{Y_i\}$. The matrix series $\sum_{n=0}^\infty (PD)^n$ converges if and only if $\mbox{Sp}(PD)<1$, where $\mbox{Sp}(\cdot)$ denotes the spectral radius, in which case the sum equals $(I-PD)^{-1}$. Thus, we end up with
\begin{equation} \label{series}
\mathbb E[S]=
\begin{cases}
\vect{\pi} (I-PD)^{-1} \vect{1} & \text{ if Sp}(PD)<1\\
\infty & \text{ else. }
\end{cases}
\end{equation}
Based on the above, the following subsections will give results on the
transience/recurrence and on the drift for the random environments
mentioned in Section~\ref{ssec:swapmodel}.
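The above recipe is straightforward to implement. The sketch below (ours, not part of the paper) approximates $\mathbb E[S]$ by truncating the series $\sum_n \vect{\pi}(PD)^n\vect{1}$ at a large $N$ instead of inverting $I-PD$; it applies to any finite-state environment chain with $\mbox{Sp}(PD)<1$:

```python
def expected_S(P, g_vals, pi, sigma, n_terms=5000):
    """Approximate E[S] = sum_n pi (PD)^n 1 by truncation; valid when Sp(PD) < 1.

    P      : m x m transition matrix of {Y_i} (list of lists)
    g_vals : g(1), ..., g(m), each +1 or -1
    pi     : stationary distribution of {Y_i}
    sigma  : (1-p)/p
    """
    m = len(P)
    PD = [[P[i][j] * sigma**g_vals[j] for j in range(m)] for i in range(m)]
    v = [1.0] * m                               # (PD)^0 1
    total = sum(pi[i] * v[i] for i in range(m))
    for _ in range(n_terms):
        v = [sum(PD[i][j] * v[j] for j in range(m)) for i in range(m)]
        total += sum(pi[i] * v[i] for i in range(m))
    return total

def drift(P, g_vals, pi, sigma):
    """V = 1/(2 E[S] - 1), assuming the series converges."""
    return 1.0 / (2.0 * expected_S(P, g_vals, pi, sigma) - 1.0)
```

For the Markov environment treated next, this reproduces the closed-form drift (\ref{Markovdrift}).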
\subsection{Markov environment} \label{sec:markovdependence}
The quantity $\mathbb E[\log \sigma_0]$ in Theorem~\ref{thm:alili1}, which determines whether $X_n$ will diverge to $+\infty$ or $-\infty$, or is recurrent, is given by
\[
\mathbb E[\log \sigma_0]=\frac{b}{a+b}\log \sigma^{-1} + \frac{a}{a+b}\log\sigma =\frac{a-b}{a+b}\log\frac{1-p}{p}.
\]
Hence,
$X_n\rightarrow +\infty$ a.s.\ if and only if either $a > b$ and $p > 1/2$, or $a < b$ and $p<1/2$;
$X_n\rightarrow -\infty$ a.s.\ if and only if either $a > b$ and $p < 1/2$, or $a < b$ and $p>1/2$; and
$\{X_n\}$ is recurrent a.s.\ if and only if either $a = b$, or $p = 1/2$, or both.
Next we study $\mathbb E[S]$ to find the drift. In the context of Section~\ref{sec:generalsolution} the processes $\{U_i\}$ and $\{Y_i\}$ are identical and the function $g$ is the identity on the state space ${\cal U}=\{-1,1\}$.
Thus, the matrix $D$ is given by $D=\mathrm{diag}(\sigma^{-1}, \sigma)$, and since $P$ is as in Section~\ref{ssec:swapmodel}, the matrix $PD$ is given by
\[
PD=\left[
\begin{array}{cc}
(1-a)\sigma^{-1}&{a}\sigma\\
b\sigma^{-1}&(1-b)\sigma
\end{array}
\right],
\]
for which we have the following.
\begin{lemma} \label{lem}
The matrix series $\sum_{n=0}^\infty (PD)^n$ converges to
\begin{equation} \label{seriesanswer}
(I-PD)^{-1}=\frac1{\det(I-PD)}\left[
\begin{array}{cc}
1-(1-b)\sigma&{a}\sigma\\
b\sigma^{-1}&1-{(1-a)}\sigma^{-1}
\end{array}
\right],
\end{equation}
with $\det(I -PD)=2-a-b-\left(\frac{1-a}\sigma+(1-b)\sigma\right)$,
iff $\sigma$ lies between 1 and~$\frac{1-a}{1-b}$.
\end{lemma}
Note that the condition that $\sigma$ lies between 1 and~$\frac{1-a}{1-b}$ can either mean
$1<\sigma<\frac{1-a}{1-b}$ (when $a<b$), or $\frac{1-a}{1-b}<\sigma<1$ (when $a>b$).
\medskip
\begin{proof}
The series $\sum_{n=0}^\infty (PD)^n$ converges if and only if $\mbox{Sp}(PD)<1$, where $\mbox{Sp}(\cdot)$ denotes the spectral radius $\max_i |\lambda_i|$. The eigenvalues $\lambda_1, \lambda_2$ follow from
\[
\det(\lambda I -PD) =\lambda^2-A\lambda+(1-a-b)=0, \qquad \mbox{where} \qquad A=(1-a)\sigma^{-1}+(1-b)\sigma.
\]
The discriminant of this quadratic equation is
\[
A^2-4(1-a)(1-b)+4ab=\left(\frac{1-a}\sigma-(1-b)\sigma\right)^2+4ab>0,
\]
so the spectral radius is given by the largest eigenvalue,
\[
\mbox{Sp}(PD)=\frac{A+\sqrt{A^2-4(1-a-b)}}{2}.
\]
Clearly $\mbox{Sp}(PD)<1$ if and only if $\sqrt{A^2-4(1-a-b)}<2-A$, or equivalently $A<2-a-b$.
Substituting the definition of $A$ and multiplying by $\sigma$ this leads to
\[
(1-b)\sigma^2 -(2-a-b)\sigma +(1-a)<0,
\]
or equivalently,
\[
(\sigma-1)\big((1-b)\sigma-(1-a)\big)<0.
\]
Since the coefficient of $\sigma^2$ in the above is $1-b>0$, the statement of the lemma now follows immediately.
\end{proof}
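The condition in Lemma~\ref{lem} is easily checked numerically; the sketch below (ours, not part of the paper) evaluates $\mbox{Sp}(PD)$ from the quadratic in the proof:

```python
import math

def spectral_radius_PD(a, b, sigma):
    """Largest eigenvalue of PD for the Markov environment, the larger root of
    lambda^2 - A lambda + (1-a-b) = 0 with A = (1-a)/sigma + (1-b)*sigma."""
    A = (1 - a) / sigma + (1 - b) * sigma
    disc = A * A - 4 * (1 - a - b)   # = ((1-a)/sigma - (1-b)sigma)^2 + 4ab > 0
    return (A + math.sqrt(disc)) / 2
```

With $a=0.6$, $b=0.3$ the convergence interval is $\sigma \in (\frac{1-a}{1-b}, 1) = (4/7, 1)$, and the spectral radius indeed crosses 1 at both endpoints.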
This leads to the following theorem.
\begin{thm} \label{thm:general}~
We distinguish between transient cases with and without drift, and the recurrent case as follows.
\begin{enumerate}
\item[1a.] If either $a>b$ and $p\in \big(\frac12, \frac{1-b}{(1-a)+(1-b)}\big)$ or $a<b$ and $p\in \big(\frac{1-b}{(1-a)+(1-b)},\frac12 \big)$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;$ and
\begin{equation} \label{Markovdrift}
V =(2p-1)
\frac{(1-b)(1-p)-(1-a)p}{\left(b+\frac{a-b}{a+b}\right)(1-p) + \left(a-\frac{a-b}{a+b}\right)p} > 0\;.
\end{equation}
\item[1b.] If either
$a>b$ and $p\in \big(\frac{1-a}{(1-a)+(1-b)},\frac12 \big)$ or $a<b$ and $p\in \big(\frac12, \frac{1-a}{(1-a)+(1-b)} \big)$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;$ and
\begin{equation} \label{Markovdrift2}
V = -(1-2p)\frac{(1-b)p-(1-a)(1-p)}{\left(b+\frac{a-b}{a+b}\right)p + \left(a-\frac{a-b}{a+b}\right)(1-p)} < 0\;.
\end{equation}
\item[2a.] If either $a>b$ and $p\in \big[\frac{1-b}{(1-a)+(1-b)},1 \big]$ or $a<b$ and
$p\in \big[0, \frac{1-b}{(1-a)+(1-b)} \big]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;,$ but $V=0$.
\item[2b.] If either $a>b$ and $p\in \big[0, \frac{1-a}{(1-a)+(1-b)} \big]$ or $a<b$ and
$p \in \big[\frac{1-a}{(1-a)+(1-b)},1 \big]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;,$ but $V=0$.
\item[3.] Otherwise (when $a=b$ or $p=1/2$ or both), $\{X_n\}$ is recurrent and $V = 0$.
\end{enumerate}
\end{thm}
\begin{proof}
Substitution of (\ref{seriesanswer}) and $\vect{\pi}=\frac1{a+b}(b,a)$ in (\ref{series}) leads to
\begin{eqnarray*}
V^{-1}&=&2\mathbb E[S]-1\\
&=& \frac2{(a+b)\det(I-PD)}\ (b,a) \left[
\begin{array}{cc}
1-(1-b)\sigma&a\sigma\\
b\sigma^{-1}&1-{(1-a)}\sigma^{-1}
\end{array}
\right]
\left(\!\!\!
\begin{array}{c}
1\\1
\end{array}\!\!\!
\right)-1\\
&=& \frac2{\det(I-PD)} \frac{(a+b)-(1-a-b)(b\sigma+a\sigma^{-1})}{a+b}-1\\
&=& \frac{1+\sigma}{1-\sigma}
\frac{\left(b+\frac{a-b}{a+b}\right)\sigma +
\left(a-\frac{a-b}{a+b}\right)}{(1-b)\sigma-(1-a)}\\
&=& \frac{1}{2p-1}
\frac{\left(b+\frac{a-b}{a+b}\right)(1-p) + \left(a-\frac{a-b}{a+b}\right)p}{(1-b)(1-p)-(1-a)p}.
\end{eqnarray*}
When $\sigma$ lies between $1$ and $\frac{1-a}{1-b}$, i.e., when $p=(1+\sigma)^{-1}$ lies between $1/2$ and $(1-b)/((1-a)+(1-b))$, it follows by Lemma~\ref{lem} that the process has positive drift, given by the reciprocal of the above. This proves (\ref{Markovdrift}). The proof of (\ref{Markovdrift2}) follows by replacing $\sigma$ by $\sigma^{-1}$ and $p$ by $1-p$, and adding a minus sign. The other statements follow immediately.
\end{proof}
When we take $a+b=1$ we obtain the iid case of the previous section, with $\alpha=a/(a+b)$. Indeed the theorem then becomes identical to Theorem~\ref{thm:iid}. In the following subsection we make a comparison between the Markov case and the iid case.
\subsubsection{Comparison with the iid environment}
To study the impact of the (Markovian) dependence, we reformulate the expression for the drift in Theorem~\ref{thm:general}. Note that the role of $\alpha$ in the iid case is played by $P(U_0=1)=a/(a+b)$ in the Markov case. Furthermore, we can show that the correlation coefficient between two consecutive $U_i$'s satisfies
\[
\rho\equiv \rho(U_0, U_1)=\frac{\Cov(U_0, U_1)}{\Var(U_0)}=\frac{\frac{a+b-4ab}{a+b}-\left(\frac{a-b}{a+b}\right)^2}{1-\left(\frac{a-b}{a+b}\right)^2}=1-a-b.
\]
So $\rho$ depends on $a$ and $b$ only through their sum $a+b$, with
extreme values 1 (for $a=b=0$; i.e., $U_i\equiv U_0$) and $-1$ (for
$a=b=1$; that is, $U_{2i}\equiv U_0$ and $U_{2i+1}\equiv -U_0$). The intermediate case $a+b=1$ leads to $\rho=0$ and corresponds to the iid case, as we have seen before.
To express $V$ in terms of $\alpha$ and $\rho$ we solve the system of equations
$\frac{a}{a+b}=\alpha$ and $1-a-b=\rho$, leading to the solution
\begin{align*}
a&=(1-\rho)\alpha\\
b&=(1-\rho)(1-\alpha).
\end{align*}
Substitution in the expression for $V$ (here in case of positive drift only, see (\ref{Markovdrift})) and rewriting yields
\[
V =(2p-1)
\frac{\alpha-p + \rho(1-\alpha -p)}{\big(\alpha(1-p) + (1-\alpha)p\big) (1+\rho) - \rho}.
\]
This enables us not only to immediately recognize the result
(\ref{iid}) for the iid case (take $\rho=0$), but also to study the
dependence of the drift $V$ on $\rho$.
Note that due to the restriction that $a$ and $b$ are probabilities,
it must hold that $\rho > \max\{1 - 1/\alpha, 1 - 1/(1-\alpha)\}$.
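The agreement between this reformulation and (\ref{Markovdrift}) under the substitution $a=(1-\rho)\alpha$, $b=(1-\rho)(1-\alpha)$ can be verified numerically; the check below is ours, not part of the paper:

```python
def drift_markov(a, b, p):
    """Positive-drift case of the Markov environment (the closed form above)."""
    r = (a - b) / (a + b)
    return (2*p - 1) * ((1 - b)*(1 - p) - (1 - a)*p) / (
        (b + r)*(1 - p) + (a - r)*p)

def drift_alpha_rho(alpha, rho, p):
    """The same drift in the (alpha, rho) parameterization."""
    return (2*p - 1) * (alpha - p + rho*(1 - alpha - p)) / (
        (alpha*(1 - p) + (1 - alpha)*p) * (1 + rho) - rho)
```

For instance, $\alpha=0.7$, $\rho=0.2$, $p=0.55$ gives $a=0.56$, $b=0.24$ and both expressions evaluate to the same value.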
Figures~\ref{fig:rwre3Vversusp} and~\ref{fig:rwre3Vversusrho}
illustrate
various aspects of the difference between iid and Markov cases.
Clearly, compared to the iid case (for the same value of $\alpha$),
the Markov case with positive correlation coefficient has lower drift,
but also a lower `cutoff value' of $p$ at which the drift becomes
zero.
For negative correlation coefficients we see a higher cutoff value, but not all values of $\alpha$ are possible (since we should have $a<1$).
Furthermore, for weak correlations the drift (if it exists) tends to be larger than for strong correlations (both positive and negative), depending on $p$ and $\alpha$.
Note that Figure~\ref{fig:rwre3Vversusrho} seems to suggest there are two cutoff values
in terms of the correlation coefficient. However, it should be realized that drift curves corresponding to some $\alpha$ are no longer drawn for negative correlations since the particular value of $\alpha$ cannot be attained. E.g., when $\rho$ is close to $-1$, then $a$ and $b$ are both close to 1, hence $\alpha$ can only be close to 1/2.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{rwre3_incl_rho_negative}
\caption{Drift for $\rho=0$ (blue, dashed), $\rho=0.3$ (red, solid), and $\rho=-0.3$ (green, dot-dashed) as a function
of $p$. From highest to lowest curves for $\alpha = 1, 0.95, \ldots, 0.55$ (for $\rho=0$ and $\rho=0.3$), and for $\alpha = 0.75, 0.70, \ldots, 0.55$ (for $\rho=-0.3$).}
\label{fig:rwre3Vversusp}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{rwre3matlabpic3}
\caption{Drift for $p=0.7$ (blue, dashed) and $p=0.9$ (red, solid) as a function of the correlation
coefficient $\rho$, for $\alpha = 1, 0.95, \ldots,0.55$ (from
highest to lowest curves). The values at $\rho = 0$ give the drift
for the independent case. Note that $\rho$ must be greater than
$1-1/\alpha$.}
\label{fig:rwre3Vversusrho}
\end{figure}
\subsection{2-dependent environment} \label{sec:2dependence}
In this section we treat the $k$-dependent environment for $k=2$. For this case we have the transition probabilities
\[
P_{u_{i-2}u_{i-1},u_i}=\mathbb P(U_{i}=u_i \,|\, U_{i-2}=u_{i-2}, U_{i-1}=u_{i-1}),\qquad u_j \in\{-1,1\},
\]
so that the one-step transition matrix of the Markov chain $\{Y_i, i \in \mathbb Z\}$ with $Y_i=(U_{i-1}, U_i)$ is given by
\[
P=\left[
\begin{array}{cccc}
P_{-1-1,-1}&P_{-1-1,+1}&0&0\\
0&0&P_{-1+1,-1}&P_{-1+1,+1}\\
P_{+1-1,-1}&P_{+1-1,+1}&0&0\\
0&0&P_{+1+1,-1}&P_{+1+1,+1}
\end{array}
\right]
=
\left[
\begin{array}{cccc}
1-a_-&a_-&0&0\\
0&0&b_-&1-b_-\\
1-a_+&a_+&0&0\\
0&0&b_+&1-b_+
\end{array}
\right].
\]
Thus, the model has five parameters, $a_-, a_+, b_-, b_+,$ and $p$.
Also note that the special case $a_-=a_+(=a)$ and $b_-=b_+(=b)$ corresponds to the (1-dependent) Markovian case in Section~\ref{sec:markovdependence}.
We first note that the stationary distribution (row) vector $\vect{\pi}$ is given by
\begin{equation} \label{pi2dependent}
\vect{\pi}=\left(2+\frac{1-a_+}{a_-} + \frac{1-b_-}{b_+}\right)^{-1} \left(\frac{1-a_+}{a_-}, 1, 1, \frac{1-b_-}{b_+}\right),
\end{equation}
so assuming stationarity we have $\mathbb P(U_0=1)={\pi}_{-1,1}+{\pi}_{1,1}$
and $\mathbb P(U_0=-1)=\pi_{-1,-1}+\pi_{1,-1}$. It follows that
$\mathbb P(U_0=1)>\mathbb P(U_0=-1)$ if and only if
$\frac{a_-}{1-a_+}>\frac{b_+}{1-b_-}$. This is important to determine
the sign of $\mathbb E[\log \sigma_0]$, which satisfies (with $\sigma=\frac{1-p}p$ as before),
\[
\mathbb E[\log \sigma_0] = \big(2\mathbb P(U_0=1)-1\big)\ \log \sigma.
\]
Hence,
$X_n\rightarrow +\infty$ a.s.\ if and only if either $\frac{a_-}{1-a_+}>\frac{b_+}{1-b_-}$ and $p> 1/2$, or $\frac{a_-}{1-a_+}<\frac{b_+}{1-b_-}$ and $p < 1/2$;
$X_n\rightarrow -\infty$ a.s.\ if and only if either $\frac{a_-}{1-a_+}>\frac{b_+}{1-b_-}$ and $p < 1/2$, or
$\frac{a_-}{1-a_+}<\frac{b_+}{1-b_-}$ and $p > 1/2$; and
$\{X_n\}$ is recurrent a.s.\ if and only if either $\frac{a_-}{1-a_+}=\frac{b_+}{1-b_-}$, or $p=1/2$, or both.
Next we consider the drift. As before, when $\mathbb E[S]<\infty$ we have $V^{-1} =2\mathbb E[S]-1$. In view of (\ref{series}) we thus need to consider the matrix $PD$, where $D=\mathrm{diag}(\sigma^{-1}, \sigma, \sigma^{-1}, \sigma)$:
\[
PD=
\left[
\begin{array}{cccc}
(1-a_-)\sigma^{-1}&a_-\sigma&0&0\\
0&0&b_-\sigma^{-1}&(1-b_-)\sigma\\
(1-a_+)\sigma^{-1}&a_+\sigma&0&0\\
0&0&b_+\sigma^{-1}&(1-b_+)\sigma
\end{array}
\right]
\]
and hence
\begin{eqnarray*}
V^{-1}&=& 2 \vect{\pi} \left(\sum_{n=0}^\infty (PD)^n\right) \ \vect{1}\ -1\\
&=& 2 \vect{\pi} (I-PD)^{-1} \ \vect{1}\ -1
\end{eqnarray*}
if Sp$(PD)<1$. Unfortunately, the eigenvalues of $PD$ are now the
roots of a degree-4 polynomial, which are hard to find
explicitly. However, using Perron--Frobenius theory and the implicit
function theorem it is possible to prove the following lemma, which has the same structure as in the Markovian case.
\begin{lemma} \label{lem2dep}
The matrix series $\sum_{n=0}^\infty (PD)^n$ converges to
$(I-PD)^{-1}$, which is
{\tiny
\[
\begingroup
\renewcommand{\arraystretch}{2.5}
\begin{bmatrix}
1 - a_+ b_- - \sigma + \sigma B & a_- \sigma ((b_+ -1) \sigma+1) & a_- (-\sigma b_-+b_-+b_+
\sigma) & -a_- (b_--1) \sigma^2 \\
\frac{(a_+ -1) (b_- (\sigma-1)-b_+ \sigma)}{\sigma^2} & \frac{(a_- +\sigma-1) ((b_+ -1) \sigma+1)}{\sigma} & -\frac{(a_- +\sigma-1)
(b_- (\sigma-1)-b_+ \sigma)}{\sigma^2} & -(b_--1) (a_- +\sigma-1) \\
-\frac{(a_+ -1) ((b_+ -1) \sigma+1)}{\sigma} & (a_- +a_+ (\sigma-1)) ((b_+ -1) \sigma+1) & \frac{(a_- +\sigma-1) ((b_+ -1)
\sigma+1)}{\sigma} & -(b_--1) (a_- +a_+ (\sigma-1)) \sigma \\
\frac{b_+ -a_+ b_+ }{\sigma^2} & \frac{b_+ (a_- +a_+ (\sigma-1))}{\sigma} & \frac{b_+ (a_- +\sigma-1)}{\sigma^2} &
\frac{1 - A + \sigma - a_+ b_- \sigma}{\sigma} \\
\end{bmatrix}
\endgroup
\]
}
\!\!divided by $\det(I -PD)=-\sigma^{-1}(\sigma-1)\big((1-B)\sigma-(1-A)\big)$,
iff $\sigma$ lies between 1 and~$\frac{1-A}{1-B}$. Here, $A=a_-+a_+b_--a_-b_-$ and $B=b_++a_+b_--a_+b_+$.
\end{lemma}
\begin{proof}
To find out for which values of $\sigma$ we have Sp$(PD)<1$, first we denote the (possibly complex) eigenvalues of $PD$ by $\lambda_i(\sigma), i=0, 1, 2, 3, $ as continuous functions of $\sigma$.
Since $PD$ is a nonnegative irreducible matrix for any $\sigma>0$, we can apply Perron--Frobenius to claim that there is always a unique eigenvalue with largest absolute value (the other $|\lambda_i|$ being strictly smaller), and that this eigenvalue is real and positive (so in fact it always equals Sp$(PD)$). When $\sigma=1$ the matrix is stochastic and we know this eigenvalue to be 1, and denote it by $\lambda_0(1)$.
Now, moving $\sigma$ from 1 to any other positive value, $\lambda_0(\sigma)$ {\em must} continue to play the role of the Perron--Frobenius eigenvalue; i.e., none of the other $\lambda_i(\sigma)$ can at some point take over this role. If this were not true, then the continuity of the $\lambda_i(\sigma)$ would imply that one value $\hat \sigma$ exists where (say) $\lambda_1$ `overtakes' $\lambda_0$, meaning that $|\lambda_1(\hat \sigma)|=|\lambda_0(\hat \sigma)|$, which is in contradiction with the earlier Perron--Frobenius statement.
Thus, it remains to find out when $\lambda_0(\sigma)<1$, which can be established using the implicit function theorem, since
$\lambda_0$ is implicitly defined as a function of $\sigma$ by $f(\sigma, \lambda_0)=0$, with $f(\sigma, \lambda)=\det(\lambda I-PD)$ together with $\lambda_0(1)=1$. Using $\det(D)=1$, we find that
\begin{align*}
f(\sigma, \lambda)=&\det((\lambda D^{-1}-P)D) =\det(\lambda D^{-1}-P)=\\
=&\ \sigma[\lambda(a_+b_--a_+b_+)+\lambda^3(b_+-1)]\\
&+\sigma^{-1}[\lambda(a_+b_--a_-b_-)+\lambda^3(a_--1)]\\
&+\lambda^4+(1-a_--b_++a_-b_+-a_+b_-)\lambda^2+a_-b_--a_-b_+-a_+b_-+a_+b_+.
\end{align*}
Setting $\lambda=1$ in this expression gives $\det(I-PD)$ as given in the lemma, with two roots for $\sigma$. Thus, $PD$ has an eigenvalue equal to 1 only when $\sigma=1$, which we already called $\lambda_0(1)$, or when $\sigma=\frac{1-A}{1-B}$. In the latter case this must be $\lambda_0(\frac{1-A}{1-B})$, i.e., it cannot be $\lambda_i(\frac{1-A}{1-B})$ for some $i\neq0$, again due to continuity. As a result, $\lambda_0(\sigma)$ remains on one side of 1 (either $\lambda_0(\sigma)>1$ throughout, or $\lambda_0(\sigma)<1$ throughout) when $\sigma$ lies between 1 and~$\frac{1-A}{1-B}$. Whether $\frac{1-A}{1-B}<1$ or $\frac{1-A}{1-B}>1$ depends on the parameters:
\begin{equation} \label{sigmalocation}
\frac{1-A}{1-B}>1\quad\Leftrightarrow\quad\frac{a_-}{1-a_+}<\frac{b_+}{1-b_-},
\end{equation}
where we used that $1-B=1-b_+-a_+b_-+a_+b_+>(1-b_+)(1-a_+)>0$. Now we
apply the implicit function theorem:
\begin{align}
\frac{\mathrm{d}\lambda_0(\sigma)}{\mathrm{d}\sigma}\Big|_{\sigma=1}
&=
-\left.
\frac{\ \
\frac{\partial f(\sigma,\lambda_0)}{\partial \sigma}\ \
}{
\frac{\partial f(\sigma,\lambda_0)}{\partial \lambda_0}
}\right|_{\sigma=1, \lambda_0=1}
\\
&=
-\frac{b_+(1-a_+)-a_-(1-b_-)}{a_-(1-b_-+b_+)+b_+(1-a_++a_-)}\\
&=
\frac{\frac{a_-}{1-a_+}-\frac{b_+}{1-b_-}}{\frac{a_-}{1-a_+}\left(1+\frac{b_+}{1-b_-}\right) +\frac{b_+}{1-b_-}\left(1+\frac{a_-}{1-a_+}\right) },
\end{align}
which due to (\ref{sigmalocation}) is $<0$ iff $\frac{1-A}{1-B}>1$ and
is $>0$ iff $\frac{1-A}{1-B}<1$, so that indeed
Sp$(PD)=\lambda_0(\sigma)<1$ if and only if $\sigma$ lies between 1
and $\frac{1-A}{1-B}$.
\end{proof}
Note that in the case $\frac{a_-}{1-a_+}=\frac{b_+}{1-b_-}$ the
series never converges and there is no drift, since then
$\mathbb P(U_0=1)=\mathbb P(U_0=-1)$. This corresponds to $a=b$ in the Markovian case and $\alpha=1/2$ in the iid case.
We conclude that if $\sigma$ lies between 1 and $\frac{1-A}{1-B}$, or equivalently, if $p$ lies between 1/2 and $\frac{1-B}{1-A+1-B}$, the drift is given by $V=(2 \vect{\pi} (I-PD)^{-1} \ \vect{1}\ -1)^{-1}$, where $\vect{\pi}$ is given in (\ref{pi2dependent}) and $(I-PD)^{-1}$ follows from Lemma~\ref{lem2dep}. Using computer algebra, this can be shown to equal
\begin{equation} \label{2depdrift}
V=(2 p-1)\, \frac{d\, p(1-p)\big((1-B)(1-p)-(1-A)p\big) }
{\sum_{i=0}^3 c_i \, p^i}
\end{equation}
where
\[
\begin{split}
d &= a_- (b_- - b_+ -1)+ b_+(a_+ -a_- -1) \\
c_0 &= 2 a_- b_+ (b_- - b_+) \\
c_1 & =-c_0( 2+ a_+ +a_-)+(B-A)(1-B)\\
c_2 & =-c_0-c_1-c_3\\
c_3 & = (B-A) (2-A-B).
\end{split}
\]
Including the transience/recurrence result from the first part of this section, and including the cases with negative drift, we obtain the following analogue of Theorems~\ref{thm:iid} and~\ref{thm:general}.
\begin{thm} \label{thm:2-dependent}
We distinguish between transient cases with and without drift, and the recurrent case in the same way as for the Markov environment in Theorem~\ref{thm:general}. In particular, all statements (1a.), \ldots, (3) in Theorem~\ref{thm:general} also hold for the 2-dependent environment if we replace $a$ and $b$ by $A$ and $B$ respectively, (\ref{Markovdrift}) by (\ref{2depdrift}), and (\ref{Markovdrift2}) by minus the same expression (\ref{2depdrift}) but with $p$ replaced by $1-p$.
\end{thm}
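As a consistency check (ours, not part of the paper), the drift can also be computed directly from $V=(2 \vect{\pi} (I-PD)^{-1} \vect{1}-1)^{-1}$ by truncating the matrix series; taking $a_-=a_+$ and $b_-=b_+$ must then reproduce the Markov drift (\ref{Markovdrift}):

```python
def drift_2dep(am, ap, bm, bp, p, n_terms=5000):
    """Drift of the 2-dependent swap model via the truncated series for E[S].
    am, ap, bm, bp stand for a_-, a_+, b_-, b_+; requires Sp(PD) < 1."""
    sigma = (1 - p) / p
    # States (U_{i-1}, U_i) ordered (-1,-1), (-1,1), (1,-1), (1,1).
    PD = [[(1 - am)/sigma, am*sigma, 0.0, 0.0],
          [0.0, 0.0, bm/sigma, (1 - bm)*sigma],
          [(1 - ap)/sigma, ap*sigma, 0.0, 0.0],
          [0.0, 0.0, bp/sigma, (1 - bp)*sigma]]
    w = [(1 - ap)/am, 1.0, 1.0, (1 - bm)/bp]    # unnormalized stationary vector
    pi = [x / sum(w) for x in w]
    v = [1.0] * 4                               # (PD)^0 1
    ES = sum(pi[i] * v[i] for i in range(4))
    for _ in range(n_terms):
        v = [sum(PD[i][j] * v[j] for j in range(4)) for i in range(4)]
        ES += sum(pi[i] * v[i] for i in range(4))
    return 1.0 / (2.0 * ES - 1.0)
```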
\subsubsection{Comparison with the Markov environment}
To facilitate a comparison between the drifts for the
2-dependent and Markov
environments, it is convenient to write the
probability distribution vector of
$(U_0,U_1,U_2)$ as $\vect{\pi} R$, where $\vect{\pi}$ is the distribution vector of $(U_0,U_1)$, see (\ref{pi2dependent}), and
\[
R =
\left[
\begin{array}{cccccccc}
1-a_-&a_-&0&0& 0 & 0 & 0 & 0 \\
0&0&b_-&1-b_- & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1-a_+&a_+&0&0\\
0 & 0 & 0 & 0& 0&0&b_+&1-b_+
\end{array}
\right].
\]
Thus, $\vect{\pi} R=c \left(\frac{(1-a_+)(1-a_-)}{a_-}, 1-a_+, b_-,1-b_-,1-a_+, a_+, 1-b_-, \frac{(1-b_-)(1-b_+)}{b_+}\right),$ where $c=\left(2+\frac{1-a_+}{a_-} + \frac{1-b_-}{b_+}\right)^{-1}$. If we also define
\[
M_0 =
\begin{bmatrix}
1 & 0 \\
1 & 0 \\
1 & 0 \\
1 & 0 \\
0 & 1 \\
0 & 1 \\
0 & 1 \\
0 & 1 \\
\end{bmatrix},
M_{01} =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 0 & 1\\
\end{bmatrix},
M_{02} =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{bmatrix},
\]
then the probability distribution vector of $U_0$, $(U_0,U_1)$, and
$(U_0,U_2)$ are respectively given by
{\footnotesize
\[
\begin{split}
\vect{\pi} R M_0&=c \left(\frac{1-a_+}{a_-}+1, \frac{1-b_-}{b_+}+1\right),\\
\vect{\pi} R M_{01}=\vect{\pi}&=c \left(\frac{1-a_+}{a_-}, 1, 1, \frac{1-b_-}{b_+}\right),\\
\vect{\pi} R M_{02}&=c \left(\frac{(1-a_+)(1-a_-)}{a_-}+b_-,2-a_+- b_-,2-a_+- b_-, a_+ + \frac{(1-b_-)(1-b_+)}{b_+}\right).
\end{split}
\]
}
\!\!\!Various characteristics of the distribution of $(U_0,U_1,U_2)$ are now
easily found. In particular,
the probability $\mathbb P(U_0=1)$ is
\[
\alpha = \frac{a_- (1-b_- +b_+ )}
{a_- (1-b_- +b_+ ) + b_+(1-a_+ +a_-)},
\]
the correlation coefficient between $U_0$ and $U_1$ is
\[
\rho_{01} = 1-\frac{a_- }{a_- +1-a_+}-\frac{b_+}{ b_++ 1-b_-},
\]
the correlation coefficient between $U_0$ and $U_2$ is
\[
\begin{split}
\rho_{02} &= 1-(2-a_+-b_-)\left(
\frac{a_- }{a_- +1-a_+}+\frac{b_+}{ b_+ +1-b_-}
\right)\\
&=1-(2-a_+-b_-)(1-\rho_{01}),
\end{split}
\]
and $\mathbb E[U_0 U_1 U_2]$ is
\[
e_{012} = \frac{4 a_- b_+(b_- -a_+) +
a_- (1-b_- +b_+ ) - b_+(1-a_+ +a_-)\ }
{\phantom{4 a_- b_+(b_- -a_+) + }\ a_- (1-b_- +b_+ ) + b_+(1-a_+ +a_-)\ }\;.
\]
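These expressions can be verified mechanically. The following Python sketch (ours, purely illustrative) builds the vector $\vect{\pi} R$ and checks the stated formulas for $\alpha$, $\rho_{01}$, $\rho_{02}$, and $e_{012}$ against moments computed directly from the law of $(U_0,U_1,U_2)$.

```python
import numpy as np

def joint_012(am, ap, bm, bp):
    """Probability vector pi R of (U_0,U_1,U_2) over the 8 sign patterns
    in lexicographic order (-1 before 1)."""
    c = 1.0 / (2 + (1 - ap) / am + (1 - bm) / bp)
    pi = c * np.array([(1 - ap) / am, 1.0, 1.0, (1 - bm) / bp])
    R = np.array([[1-am, am, 0, 0, 0, 0, 0, 0],
                  [0, 0, bm, 1-bm, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1-ap, ap, 0, 0],
                  [0, 0, 0, 0, 0, 0, bp, 1-bp]], dtype=float)
    return pi @ R

def moments_numeric(am, ap, bm, bp):
    """(alpha, rho01, rho02, e012) computed directly from pi R."""
    w = joint_012(am, ap, bm, bp)
    u = np.array([[1 if k & m else -1 for m in (4, 2, 1)] for k in range(8)])
    EU = w @ u[:, 0]                     # = E[U_1] = E[U_2] by stationarity
    var = 1 - EU ** 2
    r01 = (w @ (u[:, 0] * u[:, 1]) - EU ** 2) / var
    r02 = (w @ (u[:, 0] * u[:, 2]) - EU ** 2) / var
    return w[4:].sum(), r01, r02, w @ (u[:, 0] * u[:, 1] * u[:, 2])
```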
The original parameters can be expressed in terms of $\alpha,
\rho_{01}, \rho_{02}$, and $e_{012}$ as follows:
\[
\begin{split}
a_- & = -\frac{2 \alpha (2 \alpha (\rho_{02}-1)-2
\rho_{02}+1)+e_{012}+1}{8 (\alpha-1) (\alpha
(\rho_{01}-1)+1)}\\
b_- & = \frac{2 \alpha (\alpha (4
\rho_{01}-2 (\rho_{02}+1))-4 \rho_{01}+2 \rho_{02}+1)+e_{012}+1}{8
(\alpha-1) \alpha (\rho_{01}-1)}\\
a_+ & = -\frac{2
\alpha (2 \alpha (-2 \rho_{01}+\rho_{02}+1)+4 \rho_{01}-2
\rho_{02}-3)+e_{012}+1}{8 (\alpha-1) \alpha
(\rho_{01}-1)}\\
b_+ & = \frac{2 \alpha (-2 \alpha
(\rho_{02}-1)+2 \rho_{02}-3)+e_{012}+1}{8 \alpha (\alpha
(\rho_{01}-1)-\rho_{01})}\;.
\end{split}
\]
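As a consistency check (ours), one can repeat the forward computation of the moments from the law of $(U_0,U_1,U_2)$ and verify that the displayed inverse formulas recover $a_-$, $b_-$, $a_+$, $b_+$:

```python
import numpy as np

def moments(am, bm, ap, bp):
    """(alpha, rho01, rho02, e012) from the law of (U_0,U_1,U_2)."""
    c = 1.0 / (2 + (1 - ap) / am + (1 - bm) / bp)
    pi = c * np.array([(1 - ap) / am, 1.0, 1.0, (1 - bm) / bp])
    R = np.array([[1-am, am, 0, 0, 0, 0, 0, 0],
                  [0, 0, bm, 1-bm, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1-ap, ap, 0, 0],
                  [0, 0, 0, 0, 0, 0, bp, 1-bp]], dtype=float)
    w = pi @ R
    u = np.array([[1 if k & m else -1 for m in (4, 2, 1)] for k in range(8)])
    EU = w @ u[:, 0]
    var = 1 - EU ** 2
    r01 = (w @ (u[:, 0] * u[:, 1]) - EU ** 2) / var
    r02 = (w @ (u[:, 0] * u[:, 2]) - EU ** 2) / var
    return w[4:].sum(), r01, r02, w @ (u[:, 0] * u[:, 1] * u[:, 2])

def params(alpha, r01, r02, e):
    """Inverse map (alpha, rho01, rho02, e012) -> (a_-, b_-, a_+, b_+)."""
    am = -(2*alpha*(2*alpha*(r02 - 1) - 2*r02 + 1) + e + 1) \
        / (8*(alpha - 1)*(alpha*(r01 - 1) + 1))
    bm = (2*alpha*(alpha*(4*r01 - 2*(r02 + 1)) - 4*r01 + 2*r02 + 1) + e + 1) \
        / (8*(alpha - 1)*alpha*(r01 - 1))
    ap = -(2*alpha*(2*alpha*(-2*r01 + r02 + 1) + 4*r01 - 2*r02 - 3) + e + 1) \
        / (8*(alpha - 1)*alpha*(r01 - 1))
    bp = (2*alpha*(-2*alpha*(r02 - 1) + 2*r02 - 3) + e + 1) \
        / (8*alpha*(alpha*(r01 - 1) - r01))
    return am, bm, ap, bp
```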
Note that due to the restriction that $a_-, a_+, b_-$, and $b_+$ are
probabilities, $(\alpha, \rho_{01},$ $\rho_{02},e_{012})$ can only take
values in a strict subset of $[0,1]\times [-1,1]^3$.
An illustration of the different behavior that can be achieved for
two-dependent environments (as opposed to Markovian environments) is
given in Figure~\ref{fig:twodepvsMarkov}. Here, $\alpha = 0.95$ and
$\rho_1 = 0.3$. The drift for the corresponding Markovian case is
indicated in the figure.
The cutoff value is here approximately 0.75. By varying $\rho_2$ and
$e_{012}$ one can achieve a considerable increase in the drift. It is not
difficult to verify that the smallest
possible value for $\rho_2$ is here $(\alpha - 1)/\alpha = -1/19$,
in which case $e_{012}$ can only take the value
$ 3 + 2 \alpha (-5 - 4 \alpha(-1 + \rho_1) + 4 \rho_1) = 417/500.$
This gives a maximal cutoff value of 1. The corresponding drift curve
is indicated by the ``maximal'' label in Figure~\ref{fig:twodepvsMarkov}.
For $\rho_2 = 0$, the parameter $e_{012}$ can at most vary from
$-1 + 2 \alpha(-1 + \alpha(2 - 4 \rho_1) + 4 \rho_1) = 103/125 =
0.824$ to $7 + 2 \alpha (-9 + \alpha (6 - 4 \rho_1) + 4 \rho_1) =
211/250= 0.844$. The solid red curves show the evolution of the drift
between these extremes. The dashed blue curve corresponds to the drift for the
independent case with $\alpha = 0.95$.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{twodepvsmarkov3}
\caption{Drift for $\alpha = 0.95$ and $\rho_1 = 0.3$ for various
$\rho_2$ and $e_{012}$. The solid red curves show the drift for $\rho_2=0$
and $e_{012}$ varying from 0.824 to 0.844. The smallest dashed blue curve
corresponds to the Markov case. The ``maximal'' dotdashed orange curve corresponds to the case $\rho_2
= - 1/19$ and $e_{012} = 417/500$. The middle dashed blue line gives the
independent case.}
\label{fig:twodepvsMarkov}
\end{figure}
\subsection{Moving average environment}
Recall that the environment is given by $U_i = g(Y_i)$ where the
Markov process $\{Y_i\}$ is given by $Y_i=(\hat U_{i}, \hat
U_{i+1},\hat U_{i+2})$. The sequence $\{\hat U_i\}$ is iid with
$\mathbb P(\hat U_i=1)=\alpha=1-\mathbb P(\hat U_i=-1)$. Thus, $\{Y_i\}$ has states
$1= (-1,-1,-1), 2 = (-1,-1,1), \ldots, 8 = (1,1,1)$ (in lexicographical
order) and transition matrix $P$ given by (\ref{P_movav}). The
deterministic function $g$ is given by (\ref{g_movav}); see also
Figure~\ref{fig:movav}.
The almost sure behavior of $\{X_n\}$ again depends only on $\mathbb E[U_0]$ which equals $-4\alpha^3+6\alpha^2-1=(2\alpha-1)(-2\alpha^2+2\alpha+1)$. Since $-2\alpha^2+2\alpha+1>0$ for $0\leq \alpha \leq 1$, the sign of $\mathbb E[U_0]$ is the same as the sign of $\mathbb E[\hat U_0]=2\alpha-1$, so the almost sure behavior is precisely the same as in the iid case; we will not repeat it here (but see Theorem~\ref{thm:movav}).
To study the drift, we need the stationary vector of $\{Y_i\}$, which is given by
\begin{equation} \label{pi_movav}
\begin{split}
\vect{\pi} & =
\big\{(1-\alpha)^3,(1-\alpha)^2 \alpha,(1-\alpha)^2
\alpha,(1-\alpha) \alpha^2, \\
& \hspace{2cm} (1- \alpha)^2 \alpha,(1-\alpha) \alpha^2,(1-\alpha)
\alpha^2,\alpha^3 \big\},
\end{split}
\end{equation}
and the convergence behavior of $\sum (PD)^n$, with
$D=\mathrm{diag}(\sigma^{-1}, \sigma^{-1}, \sigma^{-1}, \sigma, \sigma^{-1},$ $ \sigma, \sigma, \sigma)$. This is given in the following lemma.
\begin{lemma} \label{lem_movav}
The matrix series $\sum_{n=0}^\infty (PD)^n$ converges to
$(I-PD)^{-1}$ iff $\sigma$ lies between 1 and~$\sigma_{\mathrm{cutoff}}$, which is the unique root $\neq1$ of
\begin{equation}
\begin{split}
\det(I-PD)=&-\frac{\alpha(1-\alpha)^2}{\sigma^3}+\frac{\alpha^2(1-\alpha)^2}{\sigma^2}-\frac{(1-\alpha)(1-\alpha+\alpha^2)}{\sigma}+1
\\ & -2\alpha^2(1-\alpha)^2
-\alpha^2(1-\alpha)\sigma^3+\alpha^2(1-\alpha)^2\sigma^2-\alpha(1-\alpha+\alpha^2)\sigma. \label{Detmovav}
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lem2dep}; we only give an outline, leaving details for the reader to verify. Again, denote the possibly complex eigenvalues of $PD$ by $\lambda_i(\sigma), i=0, \ldots, 7$ and use Perron-Frobenius theory to conclude that for any $\sigma>0$ we have Sp$(PD)=\lambda_0(\sigma)$, say, with $\lambda_0(1)=1$.
To find out when $\lambda_0(\sigma)<1$ we again use the implicit function theorem on $f(\sigma, \lambda_0)=0$, with $f(\sigma, \lambda)=\det(\lambda I-PD)$.
Setting $\lambda=1$ gives (\ref{Detmovav}). It can be shown that $f(\sigma,1)$ is zero at $\sigma=1$, that $f(\sigma,1)\rightarrow -\infty$ for $\sigma\downarrow 0$, and that $(\partial^2/\partial \sigma^2) f(\sigma,1)<0$ for all $\sigma>0$ (for the latter, consider $0<\sigma<1$ and $\sigma\geq1$ separately). Thus we can conclude that
$f(\sigma,1)$ has precisely two roots for $\sigma>0$, at $\sigma=1$ and at $\sigma=\sigma_{\mathrm{cutoff}}$.
As a result we have either $\lambda_0(\sigma)>1$ or $\lambda_0(\sigma)<1$ when $\sigma$ lies between 1 and~$\sigma_{\mathrm{cutoff}}$. For the location of $\sigma_{\mathrm{cutoff}}$ it is helpful to know that
$(\partial/\partial \sigma) f(\sigma,1)\big|_{\sigma=1}=(2\alpha-1)(2\alpha^2-2\alpha-1)$, which is positive for $0<\alpha<1/2$ and negative for $1/2<\alpha<1$. Thus we have $\sigma_{\mathrm{cutoff}}>1$ iff $\alpha<1/2$.
Also $(\partial/\partial \lambda) f(1,1)=1$ so that the implicit function theorem gives $(d/d\sigma) \lambda_0(\sigma)\big|_{\sigma=1}=-(2\alpha-1)(2\alpha^2-2\alpha-1)$, so that indeed $\lambda_0(\sigma)<1$
iff $\sigma$ lies between 1 and~$\sigma_{\mathrm{cutoff}}$.
\end{proof}
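The content of the lemma is easily checked numerically. The Python sketch below (ours; $\alpha=0.7$ is an arbitrary illustrative choice) constructs $PD$ for the moving average chain, verifies that the polynomial in (\ref{Detmovav}) agrees with a direct evaluation of $\det(I-PD)$, and locates $\sigma_{\mathrm{cutoff}}$ by bisection.

```python
import numpy as np

def PD(sigma, alpha):
    """P D for the moving average environment; states (hatU_i, hatU_{i+1}, hatU_{i+2})
    in lexicographic order, with d_j = sigma for a +1-majority state, 1/sigma otherwise."""
    P = np.zeros((8, 8))
    for k in range(8):
        P[k, (k & 3) << 1] = 1 - alpha       # next hat U = -1
        P[k, ((k & 3) << 1) | 1] = alpha     # next hat U = +1
    d = [sigma if bin(k).count("1") >= 2 else 1 / sigma for k in range(8)]
    return P @ np.diag(d)

def det_formula(sigma, alpha):
    """Right-hand side of the determinant equation stated in the lemma."""
    a, s = alpha, sigma
    return (-a * (1 - a) ** 2 / s ** 3 + a ** 2 * (1 - a) ** 2 / s ** 2
            - (1 - a) * (1 - a + a ** 2) / s + 1 - 2 * a ** 2 * (1 - a) ** 2
            - a ** 2 * (1 - a) * s ** 3 + a ** 2 * (1 - a) ** 2 * s ** 2
            - a * (1 - a + a ** 2) * s)

def sigma_cutoff(alpha, lo=0.05, hi=0.95):
    """Bisection for the root != 1 of det(I - PD); bracket chosen for
    alpha > 1/2, where the root lies below 1."""
    assert det_formula(lo, alpha) < 0 < det_formula(hi, alpha)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if det_formula(mid, alpha) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```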
\medskip
\noindent
The cutoff value for $p$ is now easily found as $(1+\sigma_{\mathrm{cutoff}})^{-1}$, which can be numerically evaluated. The values are plotted in Figure~\ref{fig:movavcutoff}.
\begin{figure}[H]
\centering
\includegraphics[width=0.75\linewidth]{movav_fig6}
\caption{Relation between cutoff value for $p$, and $\alpha$. The solid red curve is for the moving average process. For comparison, the dashed blue line is the iid case (see also
Figure~\ref{fig:iidcase}).}
\label{fig:movavcutoff}
\end{figure}
When $p$ lies between 1/2 and $p_{\mathrm{cutoff}}$, the drift is
given by $V=(2 \vect{\pi} (I-PD)^{-1} \ \vect{1}\ -1)^{-1}$, where
$\vect{\pi}$ is given in (\ref{pi_movav}) and $(I-PD)^{-1}$ follows
from Lemma~\ref{lem_movav}. Using computer algebra we can find a rather unattractive, but explicit expression for the value of the drift; it is given by
the quotient of
\[
\begin{split}
& \alpha ^4 \left(-(1-2 p)^2\right) (p-1) p+\alpha ^3 (1-2 p ((p-2) p (p
(2 p-5)+6)+4)) \\
& +\alpha ^2 (2 p-1) (p (3 p ((p-2)
p+3)-5)+1)-\alpha (1-2 p)^2 p^2-(p-1)^2 p^3 (2 p-1)
\end{split}
\]
and
\[
\begin{split}
& -2 \alpha ^5 (2 p-1)^3-\alpha ^4 (1-2 p)^2 ((p-11) p+6)+\alpha ^3 (2
p-1) (2 p (p^3-9 p+10) \\
& -5)-\alpha ^2 (p+1)
(2 p-1) (p (p (3 p-7)+6)-1)+\alpha p^2 (2 p-1)+(p-1)^2 p^3\;.
\end{split}
\]
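As a check (our illustration), the quotient above can be compared with a direct numerical evaluation of $V=(2 \vect{\pi} (I-PD)^{-1} \vect{1}-1)^{-1}$. At $\alpha\in\{0,1\}$ the quotient reduces to $\pm(2p-1)$, the drift of a homogeneous walk, and it vanishes at $p=1/2$, as it should.

```python
import numpy as np

def drift_matrix(alpha, p):
    """V = (2 pi (I - PD)^{-1} 1 - 1)^{-1} for the moving average environment."""
    P = np.zeros((8, 8))
    for k in range(8):
        P[k, (k & 3) << 1] = 1 - alpha
        P[k, ((k & 3) << 1) | 1] = alpha
    ones = [bin(k).count("1") for k in range(8)]
    pi = np.array([alpha ** o * (1 - alpha) ** (3 - o) for o in ones])
    sigma = (1 - p) / p
    D = np.diag([sigma if o >= 2 else 1 / sigma for o in ones])
    return 1.0 / (2.0 * pi @ np.linalg.inv(np.eye(8) - P @ D) @ np.ones(8) - 1.0)

def drift_closed_form(a, p):
    """The quotient displayed above (obtained by computer algebra)."""
    num = (a**4 * (-(1 - 2*p)**2) * (p - 1) * p
           + a**3 * (1 - 2*p*((p - 2)*p*(p*(2*p - 5) + 6) + 4))
           + a**2 * (2*p - 1) * (p*(3*p*((p - 2)*p + 3) - 5) + 1)
           - a * (1 - 2*p)**2 * p**2
           - (p - 1)**2 * p**3 * (2*p - 1))
    den = (-2 * a**5 * (2*p - 1)**3
           - a**4 * (1 - 2*p)**2 * ((p - 11)*p + 6)
           + a**3 * (2*p - 1) * (2*p*(p**3 - 9*p + 10) - 5)
           - a**2 * (p + 1) * (2*p - 1) * (p*(p*(3*p - 7) + 6) - 1)
           + a * p**2 * (2*p - 1)
           + (p - 1)**2 * p**3)
    return num / den
```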
\begin{thm} \label{thm:movav}
Let $p_{\mathrm{cutoff}}=(1+\sigma_{\mathrm{cutoff}})^{-1}$, where
$\sigma_{\mathrm{cutoff}}$ follows from Lemma \ref{lem_movav}. Then $p_{\mathrm{cutoff}}>1/2$ iff $\alpha>1/2$.
We distinguish between transient cases with and without drift, and the recurrent case as follows.
\begin{enumerate}
\item[1a.] If either $\alpha > 1/2$ and $p\in (1/2, p_{\mathrm{cutoff}})$ or $\alpha < 1/2$ and $p
\in (p_{\mathrm{cutoff}}, 1/2)$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;$ and the drift $V>0$ is given as above.
\item[1b.] If either
$\alpha > 1/2$ and $p \in (1-p_{\mathrm{cutoff}},1/2)$ or
$\alpha < 1/2$ and $p \in (1/2, 1-p_{\mathrm{cutoff}})$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;$ and the drift $V<0$ is given as minus the same expression as above but with $p$ replaced by $1-p$.
\item[2a.] If either $\alpha > 1/2$ and $p\in [p_{\mathrm{cutoff}},1]$ or $\alpha < 1/2$ and $p
\in [0, p_{\mathrm{cutoff}}]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = \infty\;,$ but $V=0$.
\item[2b.] If either $\alpha > 1/2$ and $p\in [0, 1-p_{\mathrm{cutoff}}]$ or $\alpha < 1/2$ and $p
\in [1-p_{\mathrm{cutoff}}, 1]$, then almost surely $\displaystyle \lim_{n\rightarrow \infty} {X_n} = -\infty\;,$ but $V=0$.
\item[3.] Otherwise (when $\alpha = 1/2$ or $p=1/2$ or both), $\{X_n\}$ is recurrent and $V = 0$.
\end{enumerate}
\end{thm}
Figure~\ref{fig:movav_vs_indep} compares the drifts for the moving
average and independent environments.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{movav_vs_indep2}
\caption{Red: Drift for the moving average environment as a function of $p$
for $\alpha = 1, 0.95, \ldots,0.55$ (from
highest to lowest curves). Blue: comparison with the independent
case. }
\label{fig:movav_vs_indep}
\end{figure}
It is interesting to note that the cutoff points
(where $V$ becomes 0) are significantly lower in the moving average
case than the iid case, using the same $\alpha$, while
at the same time
the maximal drift that can be achieved is {\em higher} for the moving
average case than for the iid case. This is different behavior from
the Markovian case; see also Figure~\ref{fig:rwre3Vversusp}.
\section{Conclusions}\label{sec:concl}
Random walks in random environments can exhibit interesting and unusual
behavior due to the trapping phenomenon. The dependency structure of
the random environment can significantly affect the drift of the
process. We showed how to conveniently construct dependent environment
processes, including $k$-dependent and moving average environments,
by using an auxiliary Markov chain. For the well-known swap RWRE
model,
this approach allows for easy
computation of drift, as well as explicit
conditions under which the drift is
positive, negative, or zero. The cutoff values where the drift
becomes zero are determined via Perron--Frobenius theory.
Various generalizations of the above environments can be considered in the same (swap model) framework, and can be analyzed along the same lines, e.g., replacing iid by Markovian $\{\hat{U}_i\}$ in the
moving average model, or taking moving averages of more than 3
neighboring states.
\enlargethispage{\baselineskip}
Other possible directions for future research are (a) extending the
two-state dependent random environment to a $k$-state dependent random
environment; (b) replacing the transition probabilities for swap model
with the more general rules in Eq.\eqref{transitions}; and (c) generalizing
the single-state random walk process to a multi-state discrete-time
{\em quasi birth and death process}
(see, e.g., \cite{bean1997}). By using an infinite ``phase space'' for
such processes, it might be possible to bridge the gap between the
theory for one- and multi-dimensional RWREs.
\section*{Acknowledgements}
This work was supported by the Australian Research
Council {\em Centre of Excellence for Mathematical and Statistical
Frontiers} (ACEMS)
under grant number CE140100049. Part of this work was done while the
first author was an Ethel Raybould Visiting Fellow at The University
of Queensland. We thank Prof.\ Frank den Hollander for his useful
comments.
\bibliographystyle{plain}
The search for universal correlations between physical variables,
such as transition temperature, magnetic field penetration depth,
electrical conductivity, energy gap, Fermi energy {\it etc.} may
provide hints towards a unique classification of different
superconductors. In particular, establishing such correlations
may help to understand the phenomenon of superconductivity, which is
now observed in quite different systems, such as simple metals and
alloys, fullerenes, molecular metals, cuprates, cobaltites, borides
{\it etc}. Among others, there is a correlation between the
transition temperature ($T_c$) and the inverse squared
zero-temperature magnetic field penetration depth
($\lambda_0^{-2}$), that generally relates to the zero-temperature
superfluid density ($\rho_s$) in terms of
$\rho_s\propto\lambda_0^{-2}$. In various families of underdoped
high-temperature cuprate superconductors (HTS)'s there is the
empirical relation $T_c\propto\rho_s\propto\lambda_0^{-2}$, first
identified by Uemura {\it et al.}\cite{Uemura89,Uemura91} It was
recently shown, however, that for HTS's with highly reduced $T_c$'s
the direct proportionality between $T_c$ and $\lambda_0^{-2}$ is
changed to a power law kind of dependence with
$(T_c)^n\propto\lambda_0^{-2}$ ($n$ is the power law exponent). In
experiments on highly underdoped YBa$_2$Cu$_3$O$_{7-\delta}$ Zuev
{\it et al.}\cite{Zuev05} obtained $n = 2.3$, Liang and coworkers
reported $n = 1.6$,\cite{Liang05} while Sonier {\it et
al.}\cite{Sonier07} found $n=2.6-3.1$. In molecular superconductors
Pratt and Blundell obtained $n = 2/3$.\cite{Pratt05} From the
theoretical point of view, it was shown that in systems obeying 2D
or 3D quantum superconductor to insulator transition, $n\equiv 1$ or
$n\equiv 2$,
respectively.\cite{Kim91,Schneider00,Schneider04,Schneider07}
It should be emphasized, however, that the relation between $T_c$
and $\lambda_0^{-2}$ has not yet been established for BCS superconductors
and remains to be explored.
A good candidate to search for such a relation could be NbB$_{2+x}$.
The superconductivity in NbB$_{2+x}$, similar to MgB$_2$, is most
likely mediated by phonons. This is confirmed by nuclear magnetic
resonance (NMR)\cite{Kotegawa02} and tunnelling
experiments,\cite{Takasaki04,Ekino04} as well as by recent
calculations of the elastic properties.\cite{Regalado07} Moreover,
muon-spin rotation ($\mu$SR) experiments suggest that the
superconducting gap is isotropic.\cite{Takagiwa04} As is shown in
Refs.~\onlinecite{Escamilla04} and \onlinecite{Yamamoto01} the
superconductivity in NbB$_2$ can be induced by either increasing
boron or decreasing niobium content, while the parent NbB$_2$
compound is not superconducting at least down to 2~K. The transition
temperature was found to reach the maximum value of $9.2$~K and
$9.8$~K for Nb$_{0.9}$B$_2$ and NbB$_{2.34}$,
respectively.\cite{Escamilla04,Yamamoto01} This offers a possibility
to study the relation between $T_c$ and $\lambda_0^{-2}$ as a
function of boron and/or niobium content. In the present study the
temperature dependence of the magnetic field penetration depth was
measured for two NbB$_{2+x}$ samples with $x = 0,2$ and 0.34 by
means of transverse-field muon-spin rotation technique. It was found
that in both samples the distribution of the superconducting volume
fractions with different $T_c$'s can be well approximated by a
Gaussian distribution. The mean values of the superconducting
transition temperature ($T^m_c$) and the width of the distribution
($\Delta T_c$) were found to be $T^m_c= 6.02(3)$~K, $\Delta
T_c=0.96(2)$~K for NbB$_{2.34}$ at $\mu_0H = 0.1$~T, and
$T^m_c=3.40(4)$~K, $\Delta T_c = 1.06(2)$~K for NbB$_{2.2}$ at
$\mu_0H = 0.05$~T. Within the model, developed for a granular
superconductor of moderate quality, we reconstruct the dependence of
the zero-temperature superfluid density
$\rho_s\propto\lambda_0^{-2}$ on the transition temperature $T_c$.
It was found that in the range of $1.5$~K$\lesssim T_c \lesssim
8.0$~K $\lambda_0^{-2}$ follows a power-law dependence, rather than
the linear dependence reported by Takagiwa {\it et
al.},\cite{Takagiwa04} with $\lambda_0^{-2}\propto T_c^{3.1(1)}$.
The value of the power law exponent 3.1(1) agrees rather well with
$n=2.6-3.1$ reported by Sonier {\it et al.}\cite{Sonier07} for
underdoped HTS's YBa$_2$Cu$_3$O$_{7-\delta}$.
The paper is organized as follows. In Sec.~\ref{subsec:Theoretical
background-calculations} we describe the model used to obtain the
temperature dependence of the magnetic field penetration depth for a
granular superconductor having a certain distribution of the
superconducting volume fractions. The distributions of the local
magnetic fields, calculated within the framework of this model, are
presented in Sec.~\ref{subsec:Theoretical background-simulations}.
In Sec.~\ref{sec:experimental} we describe the sample preparation
procedure and details of the muon-spin rotation and magnetization
experiments. Sec.~\ref{sec:results_and_discussions} comprises
studies of the magnetic penetration depth in NbB$_{2.2}$ and
NbB$_{2.34}$ superconductors. The conclusions follow in
Section~\ref{sec:conclusions}.
\section{Theoretical background}\label{sec:theoretical
background}
\subsection{Magnetic penetration depth in a granular superconductor of
moderate quality} \label{subsec:Theoretical
background-calculations}
In this section we describe the model applied to calculate the
temperature dependence of the magnetic field penetration depth
$\lambda$ in a granular superconductor of moderate quality from the
$\mu$SR data. It is based on the general assumption that $\lambda$
can be obtained from the second moment of the local magnetic field
distribution [$P(B)$] inside the superconducting sample in the mixed
state, which is measured directly in $\mu$SR experiments.
The model uses the following assumptions: (i) The superconducting grains
are decoupled from each other. (ii) Each $i-$th grain is a
superconductor with a certain value of the transition temperature
($T^i_c$) and the zero-temperature magnetic penetration depth
($\lambda^i_0$). (iii) The zero-temperature superconducting gap
($\Delta_0$) scales with $T_c$ in agreement with the well-known
relation $\Delta_0/k_BT_c = const$,\cite{Tinkham75} implying that
the ratio $R = \Delta_0^i/k_BT_c^i$ is the same for all the grains.
Note that a linear decrease of both superconducting energy gaps
($\Delta^\sigma$ and $\Delta^\pi$) with decreasing $T_c$ was
observed recently by Gonelli {\it et al.}\cite{Gonnelli06} in Mn
doped MgB$_2$. The similar linear $\Delta_0$ {\it vs.} $T_c$ scaling
was reported by Khasanov {\it et al.}\cite{Khasanov04} for
RbOs$_2$O$_6$ BCS superconductor.
Let us first define variables and functions used within the model.
Function $\omega(t)$ describes the distribution of the
superconducting volume fractions with different transition
temperatures $T_c$'s. It is defined so that the volume fraction of
the sample, having transition temperatures in the range between
$T_c^i$ and $T_c^j$, is obtained as $\int_{T_c^i}^{T_c^j}
\omega(t)dt$. %
The distribution of the local magnetic fields in the $i-$th grain is
described by the function $P^i(B)$, which for ideal superconductor
has rather asymmetric shape [see Fig.~\ref{fig:simulations}~(a)].
The function $f(t)$ describes the dependence of the inverse squared magnetic
penetration depth $\lambda^{-2}$ at $T = 0$ on $T_c$
[$(\lambda_0^i)^{-2} = f(T_c^i)$]. The dependence of $\lambda^i$ on
temperature was assumed to follow the standard equation for a weakly
coupled BCS superconductor [see, {\it e.g.}, Eq.~(2-111) in
Ref.~\onlinecite{Tinkham75}]:
\begin{equation}
[\lambda^i(T)]^{-2} = (\lambda^i_0)^{-2}s(T,\Delta_0^i)=
f(T_c^i)s(T,k_BR\cdot T_c^i),
\label{eq:BCS-weak-coupled}
\end{equation}
with the temperature dependent part
\begin{equation}
s(T,\Delta_0^i)= 1+ 2\int_{\Delta^i(T)}^{\infty}\left(\frac{\partial
F}{\partial E}\right)\frac{E}{\sqrt{E^2-\Delta^i(T)^2}}\ dE.
\nonumber
\end{equation}
Here, $F=[1+\exp(E/k_BT)]^{-1}$ is the Fermi function,
$\Delta^i(T)=\Delta_0^i \tilde{\Delta}(T/T_c^i)$ represents the
temperature dependence of the energy gap, and
$\Delta^i_0=Rk_BT_c^i$ denotes the zero-temperature value of the
superconducting gap. For the normalized gap
$\tilde{\Delta}(T/T_c)$ the values tabulated in
Ref.~\onlinecite{Muhlschlegel59} were used.
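For concreteness, the temperature-dependent factor $s(T,\Delta_0)$ can be evaluated numerically. The Python sketch below (ours, purely illustrative) uses the substitution $u=\sqrt{E^2-\Delta(T)^2}$, which removes the integrable singularity, and the common analytic interpolation $\Delta(T)=\Delta_0\tanh\{1.82[1.018(T_c/T-1)]^{0.51}\}$ in place of the tabulated values of Ref.~\onlinecite{Muhlschlegel59}; units are chosen such that $k_B=T_c=1$ with $R=\Delta_0/k_BT_c=1.76$.

```python
import math

def bcs_superfluid_fraction(t, R=1.76, steps=4000):
    """s(T, Delta_0) at reduced temperature t = T/Tc (k_B = Tc = 1).

    Substituting u = sqrt(E^2 - Delta^2) turns the gap integral into
    s = 1 - (2T)^{-1} * integral of cosh^{-2}(sqrt(Delta^2 + u^2)/(2T)) du.
    """
    if t <= 0:
        return 1.0
    if t >= 1:
        return 0.0
    T = t
    d = R * math.tanh(1.82 * (1.018 * (1.0 / t - 1.0)) ** 0.51)
    umax = 40.0 * T + d                  # integrand is negligible beyond this
    h = umax / steps
    tot = 0.0
    for k in range(steps + 1):
        u = k * h
        x = math.sqrt(d * d + u * u) / (2.0 * T)
        if x < 300.0:                    # avoid cosh overflow at low T
            tot += (0.5 if k in (0, steps) else 1.0) / math.cosh(x) ** 2
    return 1.0 - tot * h / (2.0 * T)
```

The function interpolates smoothly between $s=1$ at $T=0$ and $s=0$ at $T=T_c$, as required by Eq.~(\ref{eq:BCS-weak-coupled}).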
Finally, the temperature dependence of the total second moment of
the local magnetic field distribution in the superconducting sample
can be obtained as:
\begin{equation}
\langle \Delta B^{2}\rangle^{tot} =\frac{\sigma^2}{\gamma_\mu^2}
=\int_T^\infty \int_0^\infty \omega(t) P(B',t)(B'-B^{m})^2dtdB'.
\label{eq:B_tot-full}
\end{equation}
Here, $\gamma_\mu = 2\pi\times135.5342$~MHz/T is the muon
gyromagnetic ratio, $\sigma^{2}$ is the second moment of the
$\mu$SR line and $B^m$ is the mean internal field inside the
superconducting sample.
As will be shown later, in transverse-field $\mu$SR experiments
one obtains directly: (i) the distribution of the superconducting
volume fractions with different $T_c$'s [$\omega(T_c)$], (ii) the
temperature dependence of the mean internal field $B^m(T)$, and
(iii) the temperature dependence of the second moment of the
$\mu$SR line $\sigma^2(T)$. By substituting $\omega(T_c)$ and
$B^m(T)$ into Eq.~(\ref{eq:B_tot-full}) and then fitting it to
the experimental $\sigma(T)$ data, one should be able to obtain
the distribution of the zero-temperature superfluid density
$\rho_s\propto\lambda_0^{-2}$ as a function of the transition
temperature $T_c$ and the ratio $R=\Delta_0/k_BT_c$. In order to
do that one needs, however, to calculate $P^i(B)$ distributions
which depend on the applied field ($B_{ex}$), $\lambda^i$, and the
coherence length ($\xi^i$) in a nontrivial way. This makes fitting
Eq.~(\ref{eq:B_tot-full}) to the experimental data extremely
difficult.
The analysis can be simplified by assuming that each $P^i(B)$
follows a Gaussian distribution and is therefore determined by
only two parameters: the second moment of the Gaussian line
\begin{equation}
\langle \Delta B^{2}\rangle =
(\sigma/\gamma_\mu)^2=G^2(b)\cdot(\lambda)^{-4},
\nonumber
\end{equation}
and the internal field of the $i-$th grain ($B^i$). Here, $b=
B/B_{c2}$ is the reduced magnetic field ($B_{c2}$ is the second
critical field) and $G(b)$ is the proportionality coefficient
between $\sigma/\gamma_\mu=\sqrt{\langle \Delta B^{2}\rangle}$ and
$\lambda^{-2}$ which can be obtained by means of Eq.~(13) from
Ref.~\onlinecite{Brandt03} as:
\begin{equation}
G(b)=0.172\frac{\Phi_0}{2\pi}(1-b)[1+1.21(1-\sqrt{b})^3],
\label{eq:G(b)}
\end{equation}
($\Phi_0$ is the magnetic flux quantum). Here we also take into
account that within Ginzburg-Landau theory $\xi=\sqrt{\Phi_0/2\pi
B_{c2}}$. The internal field of the $i-$th grain was calculated
within the London approximation modified by Brandt:\cite{Brandt03}
\begin{eqnarray}
B^i&=&B_{ex}-(1-D^i)\frac{\Phi_0}{8\pi\cdot
(\lambda^i)^2}\ln[g(b^i)]
\nonumber \\
&\simeq&B_{ex}-(1-D^m)\frac{\Phi_0}{8\pi}f(T_c^i)s(T,k_BRT_c^i)\ln[g(b^m)]
\label{eq:Bi}
\end{eqnarray}
where
\begin{equation}
g(b) = 1 + \frac{1-b}{b}(0.357+2.890b-1.581b^2),
\nonumber
\end{equation}
$b^i = B^i/B^i_{c2}$, and $b^m = B^m/B^m_{c2}$. $B^i_{c2}$ and $D^i$
are the second critical field and the demagnetization factor of the
$i-$th grain, and $D^m$ and $B^m_{c2}$ are the mean values of the
demagnetization factor and the second critical field, respectively.
In the last part of Eq.~(\ref{eq:Bi}), we also replaced $D^i$ and
$b^i$ by their average values $D^m$ and $b^m$ (it is clear that in
the intermediate range of fields and for temperatures not very close
to $T_c$, $b^i\simeq b^m$).
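The numbers implied by Eqs.~(\ref{eq:G(b)}) and (\ref{eq:Bi}) are easily evaluated. The Python sketch below (ours; the parameter values are illustrative) computes the field width $\sqrt{\langle \Delta B^{2}\rangle}=G(b)\lambda^{-2}$ and the diamagnetic shift of the internal field. Note that $G(b\rightarrow 0)=0.172\times 2.21\,\Phi_0/2\pi\approx 0.0605\,\Phi_0$, close to the familiar low-field result $\sqrt{\langle \Delta B^{2}\rangle}\simeq 0.0609\,\Phi_0/\lambda^2$.

```python
import math

PHI0 = 2.067833848e-15               # magnetic flux quantum [T m^2]
GAMMA_MU = 2 * math.pi * 135.5342e6  # muon gyromagnetic ratio [rad s^-1 T^-1]

def G(b):
    """Proportionality factor G(b) [T m^2], b = B/Bc2."""
    return 0.172 * PHI0 / (2 * math.pi) * (1 - b) * (1 + 1.21 * (1 - math.sqrt(b)) ** 3)

def field_width(lam, b):
    """sqrt(<Delta B^2>) [T] for penetration depth lam [m] at reduced field b."""
    return G(b) / lam ** 2

def musr_rate(lam, b):
    """Corresponding Gaussian muSR relaxation rate sigma [s^-1]."""
    return GAMMA_MU * field_width(lam, b)

def internal_field(Bex, lam, b, D=1.0 / 3.0):
    """Mean internal field of a grain [T], London approximation with Brandt's g(b)."""
    g = 1 + (1 - b) / b * (0.357 + 2.890 * b - 1.581 * b ** 2)
    return Bex - (1 - D) * PHI0 / (8 * math.pi * lam ** 2) * math.log(g)
```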
The advantage to use Gaussian $P^i(B)$ is that the contribution of
the $i-$th line to the total second moment can be obtained
analytically as:\cite{Weber93,Khasanov05-RbOs}
\begin{eqnarray}
\langle \Delta
B^{2}\rangle^{i}&=&(\sigma^i/\gamma_\mu)^2+(B^i-B^m)^2\simeq
\nonumber \\
&&G^2(b^m)[\lambda^i(T)]^{-4}+(B^i-B^m)^2.
\label{eq:Bi-Gauss}
\end{eqnarray}
By substituting it in Eq.~(\ref{eq:B_tot-full}) one gets:
\begin{equation}
\langle \Delta B^{2}\rangle^{tot} =\frac{\sigma^2}{\gamma_\mu^2}
=\int_T^\infty \omega(t) \langle \Delta B^{2}\rangle dt.
\label{eq:B_tot-Gauss}
\end{equation}
We will use this equation later to fit the
experimental $\mu$SR data. Recall that the total second
moment $\langle \Delta B^{2}\rangle^{tot}$ obtained by means of
Eq.~(\ref{eq:B_tot-Gauss}) is determined by: (i) the function $f(T_c)$
that describes the distribution of $\lambda_0^{-2}$ as a function of
$T_c$ [see Eq.~(\ref{eq:BCS-weak-coupled})]. (ii) The function
$\omega(T_c)$ describing the distribution of the superconducting
fractions with different $T_c$'s. (iii) The gap to $T_c$ ratio
$R=\Delta_0/k_BT_c$ [see Eq.~(\ref{eq:BCS-weak-coupled})]. (iv) The
mean values of the demagnetization factor $D^m$ and the reduced
magnetic field $b^m$ [see Eq.~(\ref{eq:Bi}) and
(\ref{eq:Bi-Gauss})].
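To make the fitting procedure concrete, the following Python sketch (ours, with arbitrary illustrative parameter values) assembles Eqs.~(\ref{eq:BCS-weak-coupled}), (\ref{eq:Bi}), (\ref{eq:Bi-Gauss}), and (\ref{eq:B_tot-Gauss}), assuming a Gaussian $\omega(T_c)$ and a power law $\lambda_0^{-2}(T_c)=(KT_c)^n$ (both anticipated from Sec.~\ref{subsec:Theoretical background-simulations}); a tanh interpolation replaces the tabulated gap.

```python
import math

PHI0 = 2.067833848e-15               # flux quantum [T m^2]
GAMMA_MU = 2 * math.pi * 135.5342e6  # muon gyromagnetic ratio [rad s^-1 T^-1]

def s_bcs(T, Tc, R=1.76, steps=400):
    """Temperature-dependent part of Eq. (1); tanh gap interpolation, k_B = 1."""
    if T >= Tc:
        return 0.0
    d = R * Tc * math.tanh(1.82 * (1.018 * (Tc / T - 1.0)) ** 0.51)
    h = (40.0 * T + d) / steps
    tot = 0.0
    for k in range(steps + 1):
        u = k * h
        x = math.sqrt(d * d + u * u) / (2.0 * T)
        if x < 300.0:
            tot += (0.5 if k in (0, steps) else 1.0) / math.cosh(x) ** 2
    return 1.0 - tot * h / (2.0 * T)

def G(b):
    return 0.172 * PHI0 / (2 * math.pi) * (1 - b) * (1 + 1.21 * (1 - math.sqrt(b)) ** 3)

def ln_g(b):
    return math.log(1 + (1 - b) / b * (0.357 + 2.890 * b - 1.581 * b ** 2))

def sigma_model(T, Tcm=6.0, dTc=1.0, n=2.0, lam0=60e-9,
                Bex=0.1, bm=0.2, Dm=1.0 / 3.0, N=41):
    """sigma(T) [s^-1] and mean field B^m [T] for a Gaussian omega(Tc) and
    lambda_0^{-2} = (K Tc)^n normalized so that lambda_0(Tcm) = lam0."""
    K = lam0 ** (-2.0 / n) / Tcm
    ts = [Tcm + dTc * (6.0 * k / (N - 1) - 3.0) for k in range(N)]
    ws = [math.exp(-(t - Tcm) ** 2 / (2 * dTc ** 2)) for t in ts]
    norm = sum(ws)
    frac, li2, Bi = [], [], []
    for t, w in zip(ts, ws):
        if t <= T:
            continue                 # this volume fraction is normal at T
        l2 = (K * t) ** n * s_bcs(T, t)
        frac.append(w / norm)
        li2.append(l2)
        Bi.append(Bex - (1 - Dm) * PHI0 / (8 * math.pi) * l2 * ln_g(bm))
    if not frac:
        return 0.0, Bex
    Bm = sum(w * b for w, b in zip(frac, Bi)) / sum(frac)
    var = sum(w * (G(bm) ** 2 * l ** 2 + (b - Bm) ** 2)
              for w, l, b in zip(frac, li2, Bi))
    return GAMMA_MU * math.sqrt(var), Bm
```

As expected, $\sigma(T)$ decreases with increasing temperature and vanishes once all grains are normal, while the mean internal field stays below $B_{ex}$.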
Note that in real experiments on polycrystalline samples with a sharp
transition to the superconducting state, $P(B)$ often follows a Gaussian
distribution (see, {\it e.g.}, Refs.~\onlinecite{Weber93} and
\onlinecite{Pumpin90}). The reason is that the vortex lattice within
the small superconducting grain does not remain regular. It is
distorted due to effects of pinning inside the grain, as well as
near the edges of the grain due to the closeness of vortices to the
surface. As is shown by Brandt,\cite{Brandt88} even small pinning
leads to substantial smearing of the characteristic features of the
ideal $P(B)$ line and, in the case of intermediate pinning, to the
Gaussian shape of $P(B)$. We should emphasize, however, that even in
the case of a distorted vortex lattice, the second moment of $P(B)$ is
still a good measure of $\lambda$.\cite{Brandt88}
\subsection{Magnetic field distribution in a granular
superconductor}\label{subsec:Theoretical background-simulations}
In this section we simulate the internal magnetic field
distribution $P(B)$ within the model described above. As a first
step it is necessary to choose a theoretical model describing the
spatial variation of the local internal magnetic fields in the
vortex lattice [$B({\bf r})$], from which the distribution of the
fields in the $i-$th grain can be obtained as:
\begin{equation}
P^i(B)=\frac{\int\delta(B-B')\ dA(B')}{\int dA(B')}.
\end{equation}
Here, $dA(B')$ is an elemental piece of the vortex lattice unit
cell where the magnetic field is equal to $B'$ and the unit cell
has a total area of $\int dA(B')$. $B({\bf r})$ was calculated by
using an iterative method for solving the Ginzburg--Landau
equations developed by Brandt.\cite{Brandt03} This method allows
one to accurately determine $B({\bf r})$ for arbitrary $b$,
$\kappa=\lambda/\xi$ and the vortex lattice symmetry (see
Refs.~\onlinecite{Brandt03}, \onlinecite{Brandt97}, and
\onlinecite{Laulajainen06} for details).
\begin{figure}[htb]
\includegraphics[width=0.65\linewidth]{Figure1}
\caption{(Color online) $P(B)$ distributions calculated by means of
Eq.~(\ref{eq:P(B)-simulations}) at $T_c^m=6$~K,
$\lambda_0(T_c^m)=60$~nm, $\kappa=1.67$, and $B_{ex}=0.1$~T for the
following values of $\Delta T_c$: 0.0~K (a), 0.2~K (b), 1.0~K (c),
and 2~K (d). The dashed green, solid blue, and dotted red lines in
(b), (c), and (d) represent the $\omega(T_c^i)P^i(B)$ term for
$T_c^i=T_c^m-\Delta T_c$, $T_c^i=T_c^m$, and $T_c^i=T_c^m+\Delta
T_c$, respectively. See text for details.}
\label{fig:simulations}
\end{figure}
It was also assumed that the distribution of the superconducting
volume fractions with different $T_c$'s is described by a Gaussian
distribution:
\begin{equation}
\omega(T_c^i)=\frac{1}{\Delta T_c \sqrt{2\pi}}\exp \left(
-\frac{(T_c^i-T_c^m)^2}{2\Delta T_c^2} \right)
\label{eq:Tc-gauss}
\end{equation}
($T^m_c$ and $\Delta T_c$ are the mean value and the width of the
distribution, respectively), $\lambda_0^{-2}$ follows the power
law:
\begin{equation}
\lambda_0^{-2}(T_c^i) = (K\cdot T_c^i)^n
\label{eq:power-law}
\end{equation}
with the exponent $n = 2$, and
$\kappa^i=\lambda^i/\xi^i=const$.\cite{comment} As is mentioned
already in the introduction, the power law dependence of
$\lambda_0^{-2}$ on $T_c$ is observed for various
HTS\cite{Uemura89,Uemura91,Zuev05,Liang05,Sonier07} and molecular
superconductors,\cite{Pratt05} as well as obtained theoretically for
materials having 2D or 3D quantum superconductor to insulator
transition.\cite{Kim91,Schneider00,Schneider04,Schneider07}
As initial parameters for calculations we took $T_c^m=6$~K,
$B_{ex}=0.1$~T, $D^m=1/3$, $\lambda_0(T_c^m)=60$~nm and
$\kappa=1.67$. The resulting field distribution $P(B)$ was
obtained as:
\begin{equation}
P(B)=\sum_i^N\omega(T_c^i)P^i(B).
\label{eq:P(B)-simulations}
\end{equation}
Calculations were done for $N = 60$ in the region $\pm 3\Delta T_c$
around $T_c^m$. Figure~\ref{fig:simulations} shows $P(B)$
distributions for $\Delta T_c = 0.0$~K, 0.2~K, 1.0~K and 2.0~K
obtained by means of Eq.~(\ref{eq:P(B)-simulations}). The lines in
(b), (c), and (d) represent the $\omega(T_c^i)P^i(B)$ term for
$T_c^i=T_c^m-\Delta T_c$ (dashed green line), $T_c^i=T_c^m$ (solid
blue line), and $T_c^i=T_c^m+\Delta T_c$ (dotted red line). It is
seen that the shape of $P(B)$ changes dramatically with increasing
width of the $\omega(T_c)$ distribution. For small $\Delta T_c$
(when the transition to the superconducting state is very sharp) $P(B)$
is asymmetric with the highest weight around the point corresponding
to the so-called ``saddle point'' field $B_{sad}$ [see
Figs.~\ref{fig:simulations} (a) and (b)]. It is also seen that all
the characteristic features of $P(B)$ at minimum ($B_{min}$),
maximum ($B_{max}$), and ``saddle point'' fields are smeared out
[see Fig.~\ref{fig:simulations}~(b)]. Note that the simulated $P(B)$
presented in Fig.~\ref{fig:simulations}~(b) looks very similar to
what is observed in $\mu$SR experiments on high-quality single
crystals.\cite{Herlach90,Sonier00} With a further increase of
$\Delta T_c$ the $P(B)$ distribution becomes rather symmetric and,
finally, asymmetric again, but now with the maximum weight around
the external field $B_{ex}=0.1$~T and a very long tail at lower
fields. It is interesting to note that the parts of the sample with
the lowest $T_c$'s are responsible for the peak appearing slightly
below the external field, as is seen in
Fig.~\ref{fig:simulations}~(c) and (d). The reason is the decrease
of the $P^i(B)$ width and the shift of $B^i$ towards $B_{ex}$ with
increasing $\lambda^i$.
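The superposition in Eq.~(\ref{eq:P(B)-simulations}) is straightforward to sketch numerically. In the following Python sketch each $P^i(B)$ is approximated by a single Gaussian whose center shifts towards $B_{ex}$ and whose width shrinks as $T_c^i$ decreases, mimicking the behavior described above; the numerical shifts and widths are illustrative assumptions, not outputs of the vortex-lattice model of Sec.~\ref{sec:theoretical background}.

```python
import numpy as np

# Weighted superposition P(B) = sum_i omega(T_c^i) P^i(B), with Gaussian
# weights of width dTc around Tc_m, sampled at N = 60 points in +-3*dTc.
Tc_m, dTc, B_ex = 6.0, 1.0, 0.1           # K, K, T
N = 60
Tc = np.linspace(Tc_m - 3 * dTc, Tc_m + 3 * dTc, N)
w = np.exp(-(Tc - Tc_m) ** 2 / (2 * dTc ** 2))
w /= w.sum()                               # normalized weights omega(T_c^i)

B = np.linspace(0.090, 0.101, 4000)        # field grid (T)
# Toy per-fraction parameters (our assumption): the diamagnetic shift
# below B_ex and the width of P^i(B) both grow with T_c^i, so that low-T_c
# fractions (large lambda^i) give narrow lines close to B_ex.
shift = 2e-3 * Tc / Tc_m                   # hypothetical shifts (T)
width = 5e-4 * Tc / Tc_m                   # hypothetical widths (T)
P = sum(wi * np.exp(-(B - (B_ex - si)) ** 2 / (2 * gi ** 2)) / gi
        for wi, si, gi in zip(w, shift, width))
dB = B[1] - B[0]
P /= P.sum() * dB                          # normalize the total P(B)
```

Plotting $P$ against $B$ for increasing values of \texttt{dTc} should reproduce the qualitative trend of Fig.~\ref{fig:simulations}: a sharp asymmetric line for small $\Delta T_c$, then a smeared, nearly symmetric profile, and finally a peak just below $B_{ex}$ with a long low-field tail.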
To summarize, in Sec.~\ref{sec:theoretical background} we
described a model that allows one to obtain the distribution of the
internal magnetic fields in a granular superconducting sample of
moderate quality and calculated the second moment of this
distribution. Within the framework of this model we also simulated
$P(B)$ profiles for materials with different widths of the
superconducting transition $\Delta T_c$. It is remarkable that
already a small $\Delta T_c$ leads to smearing of the characteristic
features of the $P(B)$ distribution near the $B_{min}$, $B_{sad}$, and
$B_{max}$ fields. Even though this result is quite
predictable, it has an important impact, since previously the smearing
of $P(B)$ was ascribed entirely to the effects of pinning (see,
{\it e.g.}, Ref.~\onlinecite{Sonier00}).
\section{Experimental details}\label{sec:experimental}
\begin{figure}[htb]
\includegraphics[width=0.7\linewidth]{Figure2}
\caption{(Color online) Temperature dependences of the field-cooled
magnetization ($M_{FC}$) for NbB$_{2.2}$ and NbB$_{2.34}$ samples
measured at $\mu_0H = 0.5$~mT. The solid lines are the best linear
fits to the steepest parts of the $M_{FC}(T)$ curves, extrapolated to $M = 0$.}
\label{fig:Magnetization}
\end{figure}
Details of the sample preparation for NbB$_{2+x}$ can be found
elsewhere.\cite{Escamilla04} Both NbB$_{2.2}$ and NbB$_{2.34}$
samples studied in the present work were fine powders with an
average grain size of the order of a few microns. The field-cooled
0.5~mT magnetization ($M_{FC}$) measurements for NbB$_{2.2}$ and
NbB$_{2.34}$ samples were performed by using a SQUID magnetometer.
The corresponding $M_{FC}(T)$ curves are shown in
Fig.~\ref{fig:Magnetization}. It is seen that the superconducting
transitions are rather broad, indicating that both samples are not
particularly uniform. This also implies that the superconducting
critical temperatures may be evaluated only approximately. The
middle-points of transitions correspond to $\simeq9.16$~K and
$\simeq3.60$~K, while linear extrapolations of $M_{FC}(T)$ curves in
the vicinity of $T_c$ to $M = 0$ result in 9.76~K and 4.79~K for
NbB$_{2.2}$ and NbB$_{2.34}$, respectively (see
Fig.~\ref{fig:Magnetization}).
\begin{figure}[htb]
\includegraphics[width=0.8\linewidth]{Figure3}
\caption{(Color online) The muon-time spectra (a), difference
between the two-Gaussian fit and experimental data (b), and internal
field distributions (c) for NbB$_{2.34}$ sample at $T=1.7$~K after
field cooling in a magnetic field of 0.1~T. The lines in (a) and (c)
represent the best fit with the Gaussian line-shapes. See text for
details.}
\label{fig:Fourier}
\end{figure}
The transverse-field $\mu$SR experiments were performed at the
$\pi$M3 beam line at the Paul Scherrer Institute (Villigen,
Switzerland). The powders of NbB$_{2.2}$ and NbB$_{2.34}$ were
cold pressed into pellets (12~mm diameter and 2~mm thick). The
samples were field cooled in $\mu_0H=0.05$~T (NbB$_{2.2}$) and
0.1~T (NbB$_{2.34}$), applied perpendicular to the flat surface of
the pellet, from above $T_c$ down to 1.7~K. The fields 0.05~T and
0.1~T were chosen in order to perform measurements at the same
reduced magnetic field $B/B_{c2}\simeq0.4$ ($B_{c2}$ is the upper
critical field).\cite{Khasanov06_unp} The $\mu$SR signal was
observed in the usual time-differential way by counting positrons
from decaying muons as a function of time in positron telescopes.
The time dependence of the positron rate is given by the
expression:\cite{msr}
\begin{equation}
\frac{{\rm d}N(t)}{{\rm d}t} = N_0 {1\over\tau_\mu} e^{-t/\tau_\mu}
\left[ 1 + A P(t) \right] + bg \; ,
\label{eq:N_t}
\end{equation}
where $N_0$ is the normalization constant, $bg$ denotes the
time-independent background, $\tau_\mu = 2.19703(4) \times
10^{-6}$~s is the muon lifetime, $A$ is the maximum decay asymmetry
for the particular detector telescope ($A\simeq 0.18-0.19$ in our
case), and $P(t)$ is the polarization of the muon ensemble:
\begin{equation}
P(t)=\int P(B)\cos(\gamma_{\mu}Bt+\phi)dB \; .
\label{eq:P_t}
\end{equation}
Here, $\phi$ is the angle between the initial muon polarization
and the effective symmetry axis of a positron detector. $P(t)$ can
be linked to the internal field distribution $P(B)$ by using the
algorithm of Fourier transform.\cite{msr} The $P(t)$ and $P(B)$
distributions inside the NbB$_{2.34}$ at $T=1.7$~K after
field-cooling in a magnetic field of 0.1~T are shown in
Fig.~\ref{fig:Fourier}~(a) and (c). The $P(B)$ distributions were
obtained from measured $P(t)$ by using the fast Fourier transform
procedure based on the maximum entropy algorithm.\cite{Rainford94}
In order to account for the field distribution seen in
Fig.~\ref{fig:Fourier}~(c), the $\mu$SR time-spectra were fitted
by two Gaussian lines:
\begin{eqnarray}
P(t)=\sum_{i=1}^2A_i \exp(-\sigma_i^2t^2/2) \cos(\gamma_{\mu}B_i
t+\phi) \;,
\label{eq:gauss}
\end{eqnarray}
where $A_i$, $\sigma_i$, and $B_i$ are the asymmetry, the Gaussian
relaxation rate and the first moment of the $i$-th line. At
$T\gtrsim8$~K for NbB$_{2.34}$ and $T\gtrsim4.5$~K for NbB$_{2.2}$
the analysis simplifies to a single line with
$\sigma_{nm} \simeq 0.3$~$\mu$s$^{-1}$ resulting from the nuclear
moments of the sample. Eq.~(\ref{eq:gauss}) is equivalent to the
field distribution:\cite{Khasanov05-RbOs}
\begin{equation}
P(B)=\gamma_{\mu}\sum_{i=1}^2{A_i \over \sigma_i}
\exp\left(-{\gamma_{\mu}^2(B-B_i)^2 \over 2\sigma_i^2}\right) \; .
\label{eq:P_B}
\end{equation}
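The equivalence between Eq.~(\ref{eq:gauss}) and Eq.~(\ref{eq:P_B}) can be verified numerically: up to normalization conventions, a Gaussian field distribution transforms under Eq.~(\ref{eq:P_t}) into a Gaussian-damped cosine. The parameter values in this Python check are illustrative, not experimental.

```python
import numpy as np

# Illustrative values (arbitrary consistent units): gamma_mu * B gives the
# precession frequency, sigma the Gaussian relaxation rate, phi the phase.
gamma_mu, B0, sigma, phi = 851.6, 0.1, 1.0, 0.3
sB = sigma / gamma_mu                      # Gaussian width in the field domain
B = np.linspace(B0 - 8 * sB, B0 + 8 * sB, 20001)
dB = B[1] - B[0]
# Normalized Gaussian P(B), cf. Eq. (P_B)
pB = np.exp(-(B - B0) ** 2 / (2 * sB ** 2)) / (sB * np.sqrt(2 * np.pi))
t = np.linspace(0.0, 3.0, 31)
# Numerical transform of Eq. (P_t) ...
Pt_num = np.array([(pB * np.cos(gamma_mu * B * tt + phi)).sum() * dB
                   for tt in t])
# ... against the closed form of Eq. (gauss) for a single line
Pt_ana = np.exp(-sigma ** 2 * t ** 2 / 2) * np.cos(gamma_mu * B0 * t + phi)
```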
The solid line in Fig.~\ref{fig:Fourier}~(a) represents the best fit
with the two-Gaussian lines to the muon-time spectra. The
corresponding $P(B)$ is shown in Fig.~\ref{fig:Fourier}~(c). It
should be mentioned that the two-Gaussian fit describes the
experimental data satisfactorily. For both samples and in the whole
range of temperatures the normalized $\chi^2_{\rm norm}$'s were
found to be close to unity, implying a good quality of the fits.
\section{Results and discussions} \label{sec:results_and_discussions}
\begin{figure*}[htb]
\includegraphics[width=0.85\linewidth]{Figure4}
\caption{(Color online) The temperature dependences of asymmetries
($A_{sc}$, $A_n$) (a)/(d), internal fields ($B_{sc}$, $B_n$)
(b)/(e), and the muon-spin depolarization rates ($\sigma_{sc}$,
$\sigma_n$) (c)/(f), obtained from the fit of $\mu$SR time spectra
by means of Eq.~(\ref{eq:gauss}) for NbB$_{2.34}$/NbB$_{2.2}$ after
field-cooling in a magnetic field of 0.1~T/0.05~T. The indices
``sc'' and ``n'' denote the superconducting and the normal state
components. The dotted lines in (a)/(d) and (b)/(e) represent the
total muon asymmetry ($A_{sc}+A_n$) and the external field value
($B_{ex}$), respectively. In the upper panels of (a) and (d) the
$P(B)$ distributions at temperatures marked by the arrows are shown.
The red and the blue dotted lines represent the superconducting
(broad) and the normal state (narrow) components obtained from the
fit by means of Eq.~(\ref{eq:gauss}). The solid green line
corresponds to the sum of these two components.}
\label{fig:Fourier_Analysis}
\end{figure*}
Results for NbB$_{2.2}$ and NbB$_{2.34}$, obtained from the fit of
experimental data by means of Eq.~(\ref{eq:gauss}), are summarized
in Fig.~\ref{fig:Fourier_Analysis}. Distributions of the local
fields, at temperatures marked by the arrows, are shown in the upper
panels of Figs.~\ref{fig:Fourier_Analysis}~(a) and (d). The broad
lines reflect contributions of the superconducting parts of the samples.
Indeed, their asymmetries $A_{sc}$
[Figs.~\ref{fig:Fourier_Analysis}~(a) and (d)] and relaxation rates
$\sigma_{sc}$ [Figs.~\ref{fig:Fourier_Analysis}~(c) and (f)]
increase while first moments $B_{sc}$ [Figs.~\ref{fig:Fourier_Analysis}~(b) and (e)] decrease
with decreasing temperature, as expected for a superconductor in a
mixed state (see, {\it e.g.}, Ref.~\onlinecite{Zimmermann95}). The
narrow lines describe contributions from parts of the samples being
in the normal state. The slight shift of these lines to higher
fields [Figs.~\ref{fig:Fourier_Analysis}~(b) and (e)] and the small
increase of relaxations [Figs.~\ref{fig:Fourier_Analysis}~(c) and
(f)] are associated with the diamagnetism of the superconducting
grains, leading to an increase of the local fields in the
nonsuperconducting parts of the sample. Such behavior is often
observed in samples with less than 100\% superconducting volume
fraction (see, {\it e.g.}, Refs.~\onlinecite{Khasanov05-RbOs} and
\onlinecite{Koda04}).
In Fig.~\ref{fig:Asymmetry-Relaxation} we plot temperature
dependences of the superconducting components $A_{sc}$ and
$\sigma_{sc}$. The $\sigma_{sc}$ values presented in
Fig.~\ref{fig:Asymmetry-Relaxation} are corrected for the nuclear
moment contribution $\sigma_{nm}$, which was subtracted in quadrature
(see {\it e.g.} Ref.~\onlinecite{Khasanov05-RbOs}). It is seen
[Fig.~\ref{fig:Asymmetry-Relaxation}~(a)] that superconductivity
does not disappear abruptly. The superconducting fraction decreases
continuously from its maximum value to zero with temperature
rising from 4~K to 8~K for NbB$_{2.34}$ and from 1.5~K to 5.5~K for
NbB$_{2.2}$. The solid lines in
Fig.~\ref{fig:Asymmetry-Relaxation}~(a) correspond to fits of
$A_{sc}(T)$, assuming that the distribution of the superconducting
volume fractions with different $T_c$'s [$\omega(T_c)$] follows a
Gaussian distribution [see Eq.~(\ref{eq:Tc-gauss})] and,
consequently,
\begin{equation}
A_{sc}(T)=A^m_{sc}\int_T^\infty \omega(t) dt \ .
\label{eq:SCfraction-gauss}
\end{equation}
Here, $A^m_{sc}$ is the mean value of the superconducting asymmetry
at low temperatures. The fits yield $T^m_{c}$=6.02(3)~K, $\Delta
T_c$=0.96(2)~K, and $A_{sc}^m$=0.170(5) for NbB$_{2.34}$, and
$T^m_{c}$=3.40(4)~K, $\Delta T_c$=1.06(2)~K, and $A^m_{sc}$=0.128(7)
for NbB$_{2.2}$. By taking into account that the total asymmetries
($A_{sc}+A_{n}$) were found to be $\simeq 0.19$ for NbB$_{2.34}$ and
$\simeq0.18$ for NbB$_{2.2}$ [dotted lines in
Figs.~\ref{fig:Fourier_Analysis}(a) and (d)], the superconducting
volume fractions were estimated to be $\simeq$85~\% and
$\simeq$70~\% for NbB$_{2.34}$ and NbB$_{2.2}$, respectively.
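Since $\omega(T_c)$ is a normalized Gaussian [Eq.~(\ref{eq:Tc-gauss})], the integral in Eq.~(\ref{eq:SCfraction-gauss}) reduces to a complementary error function, which makes the quoted fit results easy to check. A minimal Python sketch with the fitted NbB$_{2.34}$ parameters:

```python
from math import erfc, sqrt

def A_sc(T, A_m, Tc_m, dTc):
    # Eq. (SCfraction-gauss) with a normalized Gaussian omega(T_c):
    # the tail integral reduces to a complementary error function.
    return 0.5 * A_m * erfc((T - Tc_m) / (sqrt(2.0) * dTc))

# Fitted NbB_{2.34} values quoted in the text
A_m, Tc_m, dTc = 0.170, 6.02, 0.96
```

Evaluating $A_{sc}$ at 4~K and 8~K reproduces the near-saturation and near-vanishing of the superconducting fraction described above for NbB$_{2.34}$.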
\begin{figure}[htb]
\includegraphics[width=0.7\linewidth]{Figure5}
\caption{(Color online) (a) Temperature dependences of the
superconducting asymmetries ($A_{sc}$) for NbB$_{2.34}$ and
NbB$_{2.2}$. The solid lines are the fits of $A_{sc}(T)$ data by
means of Eq.~(\ref{eq:SCfraction-gauss}). The dashed lines
represent the Gaussian distributions of the superconducting volume
fractions with different $T_c$'s [$\omega(T_c)$]. (b) Temperature
dependences of $\sigma_{sc}$ corrected for the nuclear moment
contribution $\sigma_{nm}$ for NbB$_{2.34}$ and NbB$_{2.2}$. The
open and filled small black circles are the fit of $\sigma_{sc}(T)$
by means of Eq.~(\ref{eq:B_tot-Gauss}). See text for details. }
\label{fig:Asymmetry-Relaxation}
\end{figure}
The samples used in our $\mu$SR experiments were cold pressed
powders. In this case the individual grains are expected to be only
weakly coupled. This implies that the model, described in
Sec.~\ref{subsec:Theoretical background-calculations}, can be
applied to the particular NbB$_{2.2}$ and NbB$_{2.34}$ samples
studied in the present work. All the variables and functions needed
for such calculations as, {\it e.g.}, the distribution of the
superconducting volume fractions with different $T_c$'s
[$\omega(T_c)$ -- dashed red and blue lines in
Fig.~\ref{fig:Asymmetry-Relaxation}~(a)], the temperature
dependences of the square root of the second moment
[$\sigma_{sc}(T)=\gamma_\mu \sqrt{\langle \Delta B^{2}\rangle(T)}$,
see Figs.~\ref{fig:Fourier_Analysis}~(c) and (f)] and the mean field
[$B_{sc}(T)= B^m(T)$, see Figs.~\ref{fig:Fourier_Analysis}~(b) and
(e)] are directly obtained in the experiments. The only unknown
parameter is the demagnetization factor $D^m$, which enters
Eq.~(\ref{eq:Bi}). We should mention, however, that it is not
correct to assume that $D^m$ is equal to some average
demagnetization factor for all the grains as, {\it e.g.}, $D^m =
1/3$ for grains of spherical symmetry. The reason for that is the
following. The individual grains have different internal magnetic
fields and, therefore, will show a different diamagnetism. As a
consequence, the internal field in the particular grain is
determined by the diamagnetism of the grain itself, by the
demagnetization field of the whole sample and by the fields from the
other superconducting grains surrounding it. This problem was
already discussed by Weber {\it et al.} in
Ref.~\onlinecite{Weber93}. According to their calculations for the
grains of spherical symmetry the factor $1-D^m$ in Eq.~(\ref{eq:Bi})
should be replaced with [see Eq.~(36) in Ref.~\onlinecite{Weber93}]:
\begin{equation}
1-D^m\simeq2/3-(D^p-1/3)\eta_p/\eta_G.
\label{eq:1-Dm}
\end{equation}
Here, $D^p$ is the demagnetization factor of the whole sample, and
$\eta_p$ and $\eta_G$ are the effective volume density and the
x-ray density of the sample, respectively. For the experimental
geometry (thin disk in perpendicular magnetic field) $D^p\simeq1$.
Assuming now that the density of the pellet is half the
x-ray density of the material ($\eta_G/\eta_p= 2$) we get
$1-D^m\simeq0.3$. At this stage we are not going to estimate the
value of $1-D^m$ more precisely. As is shown below, it can be
obtained self-consistently from the fit of
Eq.~(\ref{eq:B_tot-Gauss}) to the experimental data.
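The rough estimate of $1-D^m$ can be reproduced in a few lines, assuming as above a thin disk in a perpendicular field ($D^p\simeq1$) and a pellet density of half the x-ray density ($\eta_p/\eta_G=1/2$):

```python
# Quick check of the estimate following Eq. (1-Dm):
# 1 - D^m ~ 2/3 - (D^p - 1/3) * eta_p/eta_G = 1/3 ~ 0.3
Dp = 1.0                     # thin disk in a perpendicular field
ratio = 0.5                  # eta_p / eta_G
one_minus_Dm = 2.0 / 3.0 - (Dp - 1.0 / 3.0) * ratio
```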
The demagnetization factor $D^m$ enters Eq.~(\ref{eq:B_tot-Gauss})
via the term $\langle \Delta B^{2}\rangle^i$ that, in its turn, is a
sum of $G^2(b)[\lambda^i(T)]^{-4}$ and $(B^i-B^m)^2$ [see
Eq.~(\ref{eq:Bi-Gauss})]. The term $(B^i-B^m)^2$, which depends on
$D^m$, is responsible for the correction to $\langle \Delta
B^{2}\rangle^i$ appearing due to the shift of the internal field of
the $i-$th grain from the mean internal field $B^m$ [see
Eq.~(\ref{eq:Bi-Gauss})]. It is clear that during the fit of
Eq.~(\ref{eq:B_tot-Gauss}) to the experimental data, the contribution of
the $(B^i-B^m)^2$ term to $\langle \Delta B^{2}\rangle^{tot}$ needs to
be minimized ($B^m$ should become the first moment of the resulting
$P(B)$ distribution). We used, therefore, an iterative approach.
During the fit, both $\sigma_{sc}(T)$ data sets for NbB$_{2.2}$ and
NbB$_{2.34}$ samples were fitted simultaneously. The dependence of
$\lambda_0$ on the transition temperature was assumed to be
described by the power law $\lambda_0^{-2}=(K\cdot T_c)^n$ [see
Eq.~(\ref{eq:power-law}) and
Refs.~\onlinecite{Zuev05,Liang05,Sonier07,Pratt05,Kim91,
Schneider00,Schneider04,Schneider07}]. In a first step, $1-D^m$ was
assumed to be equal to 0.3 (see above) and
Eq.~(\ref{eq:B_tot-Gauss}) was fitted to the data with the
proportionality factor $K$, the power law exponent $n$, and the gap
to $T_c$ ratio $R=\Delta_0/k_BT_c$ as free parameters. In a second
step, the values of $K$, $n$, and $R$, obtained in step one,
were substituted back into Eq.~(\ref{eq:B_tot-Gauss}) and the second
term $(B^i-B^m)^2$ entering $\langle \Delta B^{2}\rangle$ in
Eq.~(\ref{eq:B_tot-Gauss}) was minimized with the only free
parameter $D^m$. Then the whole cycle was repeated by using the newly
obtained $D^m$ value as the initial parameter. After 3 iterations
the fit converged. The fit yields $K=
1.24(3)\cdot10^{-2}$~nm$^{-2/n}$K$^{-1}$, $n=3.1(1)$, $R=1.68(3)$
and $1-D^m=0.24(2)$. The open and the filled black circles in
Fig.~\ref{fig:Asymmetry-Relaxation}~(b) represent the result of the
fit of Eq.~(\ref{eq:B_tot-Gauss}) to the data.
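As a simple consistency check (ours, not part of the original analysis), the fitted power law can be evaluated at the mid-transition temperatures of the two samples; at $T_c^m\simeq6$~K it gives $\lambda_0\approx56$~nm, roughly consistent with the 60~nm value used as input for the $P(B)$ simulations above.

```python
# Fitted power law lambda_0^{-2} = (K * T_c)^n, K in nm^{-2/n} K^{-1}
K, n = 1.24e-2, 3.1

def lam0(Tc):
    # zero-temperature penetration depth in nm
    return (K * Tc) ** (-n / 2.0)

lam_234 = lam0(6.02)   # at T_c^m of NbB_{2.34}
lam_22 = lam0(3.40)    # at T_c^m of NbB_{2.2}
```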
Three important points emerge:\\
(i) The value of $R=\Delta_0/k_BT_c=1.68(3)$ obtained from the fit
is very close to the weak-coupling BCS value 1.76 and to
$\Delta_0/k_BT_c=1.55$ obtained by Kotegawa {\it et
al.}\cite{Kotegawa02} in $^{11}$B NMR experiments. It is, however,
much smaller than $\Delta_0/k_BT_c=2.15-2.25$ reported by Ekino
{\it et al.}\cite{Ekino04} in tunneling experiments.\\
(ii) The value of $1-D^m=0.24(2)$ was found to be rather close to
0.3 roughly estimated from Eq.~(\ref{eq:1-Dm}).\\
(iii) For each particular data set the $\lambda_0^{-2}$ {\it vs.}
$T_c$ dependence can be reconstructed {\it only} for $T_c$'s in the
range $T_c^l<T_c<T_c^h$ ($T_c^l$ and $T_c^h$ denote the temperatures
at which the superconducting fraction achieves the maximum value and
vanishes, respectively). Figure~\ref{fig:Asymmetry-Relaxation}~(a)
reveals that the corresponding regions are from $\simeq4$~K to
$\simeq8$~K and from $\simeq1.5$~K to $\simeq5.5$~K for NbB$_{2.34}$
and NbB$_{2.2}$, respectively. Bearing in mind that fit by means of
Eq.~(\ref{eq:B_tot-Gauss}) was performed for both NbB$_{2.34}$ and
NbB$_{2.2}$ data sets simultaneously, we can thus conclude that the
relation $\lambda_0^{-2}\propto T_c^{3.1(1)}$ is valid at least for
temperatures in the region from $\simeq1.5$~K to $\simeq8$~K.
\begin{figure}[htb]
\includegraphics[width=0.7\linewidth]{Figure6}
\caption{ (Color online) Dependence of the transition temperature
($T_c$) on the inverse squared zero-temperature magnetic penetration
depth ($\lambda_0^{-2}$) for NbB$_{2+x}$. The grey area represents
the region where $\lambda_0^{-2}=[(1.24\pm0.03)\cdot10^{-2}\cdot
T_c]^{3.1\pm0.1}$. The dotted line corresponds to
$\lambda_0^{-2}=(1.24\cdot10^{-2}\cdot T_c)^{3.1}$. The dashed blue
and the dash-dotted red lines represent the range of transition
temperatures where $T_c$ {\it vs.} $\lambda^{-2}_0$ dependences were
reconstructed for NbB$_{2.2}$ ($1.5$~K$\lesssim T_c\lesssim5.5$~K)
and NbB$_{2.34}$ ($4$~K$\lesssim T_c\lesssim8$~K). The lines are
shifted by 0.05~K above and below the central
$(1.24\cdot10^{-2}\cdot T_c)^{3.1}$ line for clarity. The star is
the $T_c$ {\it vs.} $\lambda_0^{-2}$ point obtained from the fit of
$\sigma_{sc}(T)$ data for NbB$_{2.2}$ by means of
Eq.~(\ref{eq:BCS-weak-coupled}) when all the measured points are
equally included in the fit. The open circles are the data points
from Ref.~\onlinecite{Takagiwa04}. Values of $\lambda_0^{-2}$ were
obtained from $\sigma_{sc}(0)$ measured in a field of 0.1~T and
$H_{c2}(0)$ (see Ref.~\onlinecite{Takagiwa04}) by using Eq.~(13)
from Ref.~\onlinecite{Brandt03}.}
\label{fig:Uemura_NbB}
\end{figure}
In Fig.~\ref{fig:Uemura_NbB} we plot the $T_c$ {\it vs.}
$\lambda_0^{-2}$ dependence obtained from the fit of NbB$_{2.34}$
and NbB$_{2.2}$ data. The grey area represents the region where
$\lambda_0^{-2}=[(1.24\pm0.03)\cdot10^{-2}\cdot T_c]^{3.1\pm0.1}$.
The dotted line corresponds to
$\lambda_0^{-2}=[1.24\cdot10^{-2}\cdot T_c]^{3.1}$. The dashed blue
and the dot-dashed red lines represent the range of the transition
temperatures where $T_c$ {\it vs.} $\lambda_0^{-2}$ dependences were
reconstructed for NbB$_{2.2}$ and NbB$_{2.34}$, respectively. For
clarity, these lines are shifted by 0.05~K above and below the
central $(1.24\cdot10^{-2}\cdot T_c)^{3.1}$ line. We also include in
this graph data points for NbB$_{2+x}$ ($x=0.0$, 0.01, and 0.1) from
Ref.~\onlinecite{Takagiwa04}. It is seen that these samples have
transition temperatures approximately twice as high as one would
expect from the obtained $T_c$ {\it vs.} $\lambda_0^{-2}$
dependence. On the basis of our results, we can argue that the samples
studied in Ref.~\onlinecite{Takagiwa04} also have distributions of
the superconducting volume fractions, similar to what is observed in
the present study. Without taking into account these distributions, a
fit of $\sigma_{sc}(T)$ data can lead to a substantial overestimate
of the superconducting transition temperature $T_c$. As an example,
the star in Fig.~\ref{fig:Uemura_NbB} represents the result of the fit
of Eq.~(\ref{eq:BCS-weak-coupled}) to the $\sigma_{sc}(T)$ NbB$_{2.2}$
data when all measured points are taken equally into account.
Now we are going to comment briefly on the observed $T_c$ {\it vs.}
$\lambda_0^{-2}$ dependence. Recently it was shown that the famous
``Uemura'' relation, establishing the linear proportionality between
$T_c$ and $\lambda^{-2}_0$,\cite{Uemura89,Uemura91} does not hold
for highly underdoped HTS's.\cite{Zuev05,Liang05,Sonier07} Indeed,
for YBa$_2$Cu$_3$O$_{7-\delta}$ Zuev {\it et al.} \cite{Zuev05}
observed $\lambda^{-2}_0\propto T_c ^{2.3(4)}$ and found that this
power law is in fairly good agreement with
YBa$_2$Cu$_3$O$_{7-\delta}$ data in the whole doping range. For
similar compounds Sonier {\it et al.} \cite{Sonier07} obtained
$n=2.6-3.1$. Those values are very close to $3.1(1)$ obtained in the
present study for the NbB$_{2+x}$ superconductor. The observation of a
power-law type of relation between the transition temperature $T_c$
and the superfluid density in the BCS superconductor NbB$_{2+x}$ and
its good correspondence to what is observed in HTS's points to a
close similarity between these materials.
This probably comes from the fact that the increase of $T_c$ in
NbB$_{2+x}$, similar to HTS's, is determined by an increasing charge
carrier concentration.\cite{Takagiwa04,Escamilla06} Indeed, the
x-ray photoelectron spectroscopy experiments of Escamilla and Huerta
\cite{Escamilla06} show that the increase of boron content leads to
a decrease in the contribution of the Nb $4d$ states and an increase in
the contribution of the B $2p_\pi$ states to the density of states
at the Fermi level.
\begin{figure}[htb]
\includegraphics[width=0.7\linewidth]{Figure7}
\caption{ (Color online) Dependence of the transition temperature
normalized to the maximum transition temperature of the
superconducting family ($T_c/T_c^{max}$) on the normalized inverse
squared zero-temperature magnetic penetration depth
[$\lambda_0^{-2}/(\lambda^{max}_0)^{-2}]$. Solid line corresponds to
$\lambda_0^{-2}\propto T_c^{3.1}$ as obtained in the present study.
Blue diamonds are the Mg$_{1-x}$Al$_x$B$_2$ data from
Ref.~\onlinecite{Serventi04}. Red stars are the data points for
YBa$_2$Cu$_3$O$_{7-\delta}$ from Ref.~\onlinecite{Sonier07}. }
\label{fig:Uemura_NbB_MgB2_Y123}
\end{figure}
It is important to emphasize that the observation of a correlation between
$T_c$ and $\lambda^{-2}_0$ in BCS superconductors is not restricted
to the particular NbB$_{2+x}$ system studied here. As an example, in
Fig.~\ref{fig:Uemura_NbB_MgB2_Y123} we plot $T_c/T_c^{max}$ as a
function of $\lambda_0^{-2}/(\lambda^{max}_0)^{-2}$ for Al-doped
MgB$_2$ from Ref.~\onlinecite{Serventi04}. Here $T_c^{max}$ is the
maximum $T_c$ of a certain superconducting family and
$\lambda^{max}_0$ is the corresponding zero temperature penetration
depth. We also include in this graph points for
YBa$_2$Cu$_3$O$_{7-\delta}$ from Ref.~\onlinecite{Sonier07}. It is
seen that all these superconductors exhibit very similar
relations. The scaling relation for the BCS superconductors,
reported here, and its agreement with what was observed in
high-temperature cuprate superconductors
\cite{Zuev05,Liang05,Sonier07} strongly suggests that there are some
features of their electronic properties that are {\it common},
despite these materials having quite different dimensionality, Fermi
surface topology, symmetry of the superconducting order parameter,
{\it etc}.
\section{Conclusions} \label{sec:conclusions}
Muon-spin rotation studies were performed on the BCS superconductors
NbB$_{2+x}$ ($x=0.2$, 0.34). As a real-space microscopic probe,
$\mu$SR allows one to distinguish between the superconducting and
nonsuperconducting parts of the samples and to determine the
distributions of the superconducting volume fractions with different
$T_c$'s. By using the model developed for a granular
superconductor of moderate quality, the dependence of the
zero-temperature superfluid density $\rho_s\propto \lambda^{-2}_0$
on the transition temperature $T_c$ was reconstructed in a broad
range of temperatures ($1.5$~K$\lesssim T_c\lesssim8.0$~K) revealing
$\rho_s\propto \lambda^{-2}_0\propto T_c^{3.1(1)}$. This dependence
appears to be common at least for some families of BCS
superconductors, {\it e.g.}, Al-doped MgB$_2$, and for
high-temperature cuprate superconductors, {\it e.g.},
YBa$_2$Cu$_3$O$_{7-\delta}$.
\section{Acknowledgments}
This work was partly performed at the Swiss Muon Source (S$\mu$S),
Paul Scherrer Institute (PSI, Switzerland). The authors are
grateful to T.~Schneider for stimulating discussions, and A.~Amato
and D.~Herlach for assistance during the $\mu$SR measurements.
This work was supported by the Swiss National Science Foundation,
by the K.~Alex~M\"uller Foundation, and in part by the SCOPES
grant No. IB7420-110784 and the EU Project CoMePhS.
\chapter{Results and Preliminaries}
\section{Introduction}
\subsection{Background}
L\'evy matrices are symmetric random matrices whose entry distributions lie in the domain of attraction of an $\alpha$-stable law; for $\alpha<2$, their entries have infinite variance.
They were introduced by Bouchaud and Cizeau in 1994, motivated by various questions about heavy-tailed phenomena in physics and mathematical finance \cite{cizeau1994theory}. The ubiquity of heavy-tailed random variables has made L\'{e}vy matrices widely applicable. Indeed, their study has since yielded insights into spin glasses with power-law interactions \cite{cizeau1993mean}, portfolio optimization \cite{galluccio1998rational,bouchaud1998taming}, option pricing \cite{bouchaud1997option}, neural networks \cite{martin2021predicting}, and a variety of statistical questions \cite{sornette2006critical,heiny2019eigenstructure, bun2017cleaning, laloux1999noise,laloux2000random,bouchaud2009financial}.
The original work \cite{cizeau1994theory} made several predictions about the spectra of L\'evy matrices. The first was that the empirical spectral distribution of a L\'evy matrix of large dimension should converge to a deterministic, heavy-tailed measure $\mu_\alpha$ (depending only on the parameter $\alpha$ and not the precise laws of the matrix entries). This was proved in \cite{arous2008spectrum}; subsequent works showed that $\mu_\alpha$ admits a density with respect to Lebesgue measure on $\mathbb{R}$, which is symmetric and scales as $(\alpha/2) |x|^{-\alpha - 1}$, as $|x|$ tends to $\infty$ \cite{arous2008spectrum,belinschi2009spectral,bordenave2011spectrum}. One already observes from this a sharp contrast between matrices with finite variance entries, such as the Gaussian Orthogonal Ensemble\footnote{Recall that the Gaussian Orthogonal Ensemble is a $N\times N$ symmetric matrix whose upper triangular entries $\{g_{ij}\}_{i,j=1}^N$ are mutually independent Gaussian random variables with mean zero and variances $(1 + \mathbbm{1}_{i=j})N^{-1}$.} (GOE), and L\'{e}vy matrices; the global law of the former is compactly supported, while that of the latter is not.
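A schematic construction of such a matrix is easy to write down: take i.i.d. symmetrized Pareto entries with tail index $\alpha$ (these lie in the domain of attraction of an $\alpha$-stable law) and apply the normalization $N^{-1/\alpha}$. The Python sketch below is an illustration of the ensemble only, not the exact construction used in the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 300, 0.8
# Symmetrized Pareto(alpha) entries: tail ~ x^{-alpha}, random sign
X = (rng.pareto(alpha, size=(N, N)) + 1.0) * rng.choice([-1.0, 1.0],
                                                        size=(N, N))
H = np.triu(X) + np.triu(X, 1).T          # symmetric Levy-type matrix
H *= N ** (-1.0 / alpha)                  # natural scaling for alpha < 2
eig = np.linalg.eigvalsh(H)               # real eigenvalues, ascending
```

A histogram of \texttt{eig} should display a heavy-tailed empirical spectral distribution, with a few very large eigenvalues produced by the largest matrix entries, in contrast with the compactly supported global law of the GOE.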
These differences become more pronounced when one probes eigenvector and local eigenvalue statistics. The authors of \cite{cizeau1994theory} presented detailed predictions for these phenomena with respect to L\'{e}vy matrices, which were later corrected and completed in the more recent work \cite{tarquini2016level} using the replica method. We present only the latter predictions here.
For all $\alpha \in [1,2)$, the eigenvectors should be completely delocalized,\footnote{A unit vector $\boldsymbol{u} \in \mathbb{R}^N$ is \emph{completely delocalized} if, for any $\varepsilon > 0$, we have $\| \boldsymbol{u} \|_\infty \le N^{-1/2+\varepsilon}$ with high probability for large enough $N$. It is \emph{completely localized} if, for any $\varepsilon > 0$, there exists a constant $m = m(\varepsilon) \ge 1$ such that at least $1-\varepsilon$ of its $\ell^2$-mass is supported on at most $m$ of its entries.} and the local statistics of the eigenvalues should converge to those of the Gaussian Orthogonal Ensemble (GOE) throughout the bulk of the spectrum (around any finite real number). Moreover, for all $\alpha \in (0,1)$, there should exist a sharp phase transition at some real number $E_{\mathrm{mob}} = E_{\mathrm{mob}}(\alpha)$, such that the following holds. First, in the interval $(-E_{\mathrm{mob}}, E_{\mathrm{mob}})$, all eigenvectors should be completely delocalized and the eigenvalues should display GOE statistics. Second, outside of this interval (that is, on $(-\infty, -E_{\mathrm{mob}}) \cup (E_{\mathrm{mob}}, \infty)$), all eigenvectors should be completely localized and the eigenvalues should be distributed as a Poisson point process. The work \cite{tarquini2016level} also predicted a deterministic formula for $E_{\mathrm{mob}}$, given by the (presumably unique) real number $E > 0$ such that $\lambda(E,\alpha) = 1$, where $\lambda = \lambda(E,\alpha)$ is the largest solution of the quadratic equation
\begin{flalign}
\label{mobilityintroduction}
\lambda^2 - 2t_{\alpha} K_{\alpha} \Re \ell(E) \cdot \lambda + K_{\alpha}^2 (t_{\alpha}^2 - 1) |\ell (E)|^2 = 0.
\end{flalign}
\noindent Here, $t_{\alpha}$, $K_{\alpha}$, and $\ell (E)$ are explicit (though $\ell(E)$ is a bit intricate) functions given by \Cref{lambdaEalpha} below. An equivalent, but more complicated, characterization was predicted earlier in \cite{cizeau1994theory}.
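Given values of $t_\alpha$, $K_\alpha$, and $\ell(E)$, the largest solution $\lambda(E,\alpha)$ of the quadratic above follows from the standard root formula. The inputs in the following Python sketch are placeholders chosen only so that the discriminant is positive; the actual functions are those of \Cref{lambdaEalpha}.

```python
import numpy as np

def largest_root(t, K, ell):
    # Larger solution of
    #   lambda^2 - 2 t K Re(ell) * lambda + K^2 (t^2 - 1) |ell|^2 = 0
    b = -2.0 * t * K * ell.real
    c = K ** 2 * (t ** 2 - 1.0) * abs(ell) ** 2
    return (-b + np.sqrt(b ** 2 - 4.0 * c)) / 2.0

lam = largest_root(t=1.3, K=0.7, ell=0.9 - 0.4j)   # hypothetical inputs
```

Locating the mobility edge then amounts to solving $\lambda(E,\alpha)=1$ in $E$, with $t_\alpha$, $K_\alpha$, and $\ell(E)$ supplied by their defining formulas.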
For $\alpha \in (0,1)$, the predicted coexistence of localization and delocalization differs strongly from the behavior of random matrices whose entries have finite variance. Indeed, the latter display complete eigenvector delocalization and GOE local eigenvalue statistics throughout the entire bulk of the spectrum. This fact, known as the \emph{Wigner--Dyson--Mehta conjecture}, was originally shown in \cite{erdos2010bulk,TV11} for certain classes of complex Hermitian matrices. Over the past decade, its conclusion was vastly generalized through the three-step strategy of \cite{localrelax2} and was proven to hold for wide classes of finite-variance random matrix ensembles; we refer to the surveys \cite{erdos2011survey,survey,erdos2017dynamical} for more information and references on this topic.
The phase transition indicated above is an example of \emph{Anderson localization}, a term describing localization--delocalization transitions triggered by sufficiently high disorder. The real number (or ``energy'') at which this transition occurs is called a \emph{mobility edge}, denoted by $E_{\mathrm{mob}}$ above.
Originally proposed in the 1950s to explain puzzling experimental results on doped semiconductors, Anderson localization is now recognized as a general feature of wave transport in disordered media and is one of the most influential ideas in modern condensed matter physics \cite{lee1985disordered,evers2008anderson,lagendijk2009fifty, abrahams201050,anderson1958absence}.
However, our theoretical understanding of Anderson localization remains severely incomplete.
This phenomenon has traditionally been studied through the tight-binding model \cite{abou1973selfconsistent, abou1974self, anderson1978local, aizenman2015random}, defined on the lattice $\mathbb Z^d$ by the random self-adjoint operator $H_\lambda = T + \lambda V$ on $\ell^2( \mathbb Z^d)$. Here, $T$ is the graph Laplacian of the lattice; $V$, called the \emph{potential}, is a diagonal operator with independent, random entries; and $\lambda \ge 0$ governs the strength of the disorder. In dimensions $d \ge 3$, a mobility edge is expected to occur for small values of $\lambda$, separating a delocalized regime from a localized one \cite{aizenman2015random}. The localized regime has been thoroughly investigated by this point, and it has been established that eigenfunctions are completely localized and Poisson local eigenvalue statistics arise whenever the energy is sufficiently far from zero (and also when $\lambda$ is sufficiently large) \cite{frohlich1983absence,minami1996local,aizenman1993localization,spencer1988localization,gol1977pure,ding2018localization,li2019anderson,carmona1987anderson}. However, for small $\lambda$, it remains open to establish the existence of a delocalized phase with GOE statistics, as well as a sharp transition between the localized and delocalized phases.
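For illustration only, a finite one-dimensional analogue of $H_\lambda = T + \lambda V$ can be assembled directly; the mobility-edge prediction concerns $d \ge 3$, and the chain length and disorder strength below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
L, lam = 200, 2.0                          # chain length, disorder strength
A = np.zeros((L, L))
idx = np.arange(L - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0    # nearest-neighbour edges of a path
T = np.diag(A.sum(axis=1)) - A             # graph Laplacian of the path
V = np.diag(rng.uniform(-1.0, 1.0, size=L))  # i.i.d. random potential
H = T + lam * V                            # H_lambda = T + lambda V
eig = np.linalg.eigvalsh(H)
```

Here $T$ is the graph Laplacian of a path of length $L$ (open boundary) and $V$ a diagonal matrix of i.i.d. uniform potentials; in one dimension all eigenfunctions are expected to localize for any $\lambda > 0$, so this finite model only illustrates the construction of $H_\lambda$.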
The tight-binding model can also be defined on an infinite regular tree, which allows for technical simplifications. In this context, the existence of delocalized states at small disorder was shown in \cite{klein1998extended,froese2007absolutely,aizenman2006stability}. More generally, \cite{aizenman2013resonant} proved a sufficient condition for the appearance of absolutely continuous spectrum (a signature of delocalization) on the regular tree, which the authors used to show the entire spectrum of the model is absolutely continuous if the disorder is sufficiently small (and if the potential has unbounded support and bounded density). This criterion is given in terms of a fractional moment bound for the model's resolvent, which is almost complementary to the Simon--Wolff criterion for pure point spectrum (a signature of localization) \cite{SCSRP,aizenman1993localization}. This is strongly suggestive of a sharp transition separating the localized and delocalized phases but is too inexplicit to pin it down. In this direction, \cite{bapst2014large} used this criterion to provide upper and lower bounds on the endpoints for intervals of localization and delocalization; these bounds converge towards each other as the degree $K$ of the graph tends to $\infty$ but do not coincide for any finite $K$. In fact, no exact form for the mobility edge seems to be known for any tight-binding model, even in the most solvable cases, such as the Anderson model on the regular tree with Cauchy disorder \cite{miller1994weak}.
There are three other prominent random matrix ensembles whose spectra are believed to exhibit a coexistence of localized and delocalized regimes; they are band matrices \cite{bourgade2018survey}, adjacency matrices of sparse graphs \cite{RMM}, and L\'{e}vy matrices \cite{HTRM}. Analogously to the tight-binding model, band matrices are believed to exhibit a mobility edge in dimensions $d \ge 3$ when the band width is sufficiently large (but constant) \cite{RBSM,DSMRM}. Proving this prediction remains open, though recent years have seen progress towards the analysis of band matrices in dimensions $d \ge 7$ with a band width diverging in the dimension of the matrix \cite{DQDRBMHD,DQDRBM,BUQURBM}, and in dimension $d = 1$ \cite{bourgade2018random,bourgade2019random,yang2018random,URBMSM,URBM,STM}; in both cases, no mobility edge appears, as the entire spectrum is either all localized or all delocalized (depending on the band width). Random graphs should exhibit such coexistence when the average vertex degree is constant\footnote{When the average degree diverges at least logarithmically in the number of vertices, the behavior of the eigenvectors was determined in \cite{alt2021completely,alt2021delocalization}.} \cite{bauer2001random, bauer2001exactly,bauer2001core}, though this coexistence is of a different nature from that in the other models of Anderson localization mentioned above. Indeed, in this regime, the localized and delocalized phases are not separated by a sharp transition but instead overlap, in that intervals of the spectral support admit some eigenvectors that are delocalized \cite{bordenave2011rank,coste2021emergence,bordenave2017mean,arras2021existence} and others that are localized \cite{chayes1986density,salez2015every}. The endpoints of these intervals should serve as analogs of the mobility edges for this model.
In both of these models, there is neither a mathematical proof for the existence of these sharp phase transitions (mobility edges), nor an explicit prediction for their locations.
L\'evy matrices are one of a small number of models predicted to exhibit a mobility edge that can be computed exactly. Partly for this reason, the predictions of \cite{tarquini2016level,cizeau1994theory} have generated substantial interest in L\'evy matrices among both physicists and mathematicians \cite{auffinger2009poisson,arous2008spectrum,benaych2014central,belinschi2009spectral,biroli2007top,bordenave2011spectrum,bordenave2017delocalization,bordenave2013localization,burda2006random,soshnikov2004poisson,tarquini2016level,biroli2007extreme,aggarwal2021goe}. The analogy between Anderson localization in heavy-tailed random matrices and the tight-binding model becomes especially transparent upon viewing the tail parameter $\alpha$ as parallel to the inverse disorder strength $\lambda^{-1}$, an apt comparison, as reducing $\alpha$ makes the L\'{e}vy matrix ``more noisy.'' The equation \eqref{mobilityintroduction} for the mobility edge predicts for small $\alpha$ that the delocalized phase should be short (shrinking to a point as $\alpha$ tends to $0$), while for large $\alpha$ (close to $1$) that it should be long (extending across the real axis as $\alpha$ tends to $1$). This is similar to the prediction for the tight-binding model on the lattice that, for small $\lambda$, the delocalized region should nearly saturate the spectrum \cite{aizenman2015random}, while for large $\lambda$ it is empty.
We next review rigorous results on the predictions of \cite{cizeau1994theory,tarquini2016level} regarding localization and delocalization. For any $\alpha \in (1, 2)$, the predictions of complete eigenvector delocalization and GOE local statistics throughout the spectrum were proven in \cite{aggarwal2021goe}. In the case $\alpha \in (0, 1)$, results are less complete. For every $\alpha \in \big( 0, \frac{2}{3} \big)$, \cite{bordenave2013localization} showed the existence of a regime consisting of sufficiently large energies where eigenvectors are partially localized. For almost every $\alpha \in (0, 1)$, \cite{bordenave2017delocalization} established the existence of a region consisting of sufficiently small energies where eigenvectors are partially delocalized. Subsequently, \cite{aggarwal2021goe} showed in this region that eigenvectors are completely delocalized and that local eigenvalue statistics converge to those of the GOE. Building on \cite{aggarwal2021goe}, the article \cite{aggarwal2021eigenvector} demonstrated that the statistics of eigenvector entries in this small-energy region are non-Gaussian, further distinguishing them from those in the finite-variance case (which are known to be typically Gaussian \cite{bourgade2013eigenvector,marcinek2020high}). As all of these results on L\'evy matrices only apply to sufficiently small or large energies, no work thus far has touched on the existence of the predicted sharp localization--delocalization transition, nor its exact characterization given by \eqref{mobilityintroduction}.
The purpose of the present paper is to address this point.
\subsection{Results and Proof Ideas}\label{s:ideas}
We now proceed to explain our results and the ideas of their proofs. We will be informal here, both with definitions and explanations, referring to \Cref{Results} below for a more detailed exposition.
Let $\bm{H} = \{ h_{ij} \} $ denote an $N \times N$ L\'{e}vy matrix (see \Cref{matrixh} below) and, for any $z = E + \mathrm{i} \eta \in \mathbb{H}$ in the upper half plane, let $\bm{G} = \bm{G}(z) = \{ G_{ij} \} = (\bm{H} - z)^{-1}$ denote its resolvent. As is common in the context of random operators \cite{aizenman2015random}, we use $\Imaginary G_{jj} (E + \mathrm{i}\eta)$ (whose law is independent of $j \in [1, N]$, by symmetry) to distinguish localization from delocalization at a point $E \in \mathbb{R}$. More specifically, we equate\footnote{This equivalence can be made precise through the introduction of a certain inverse participation ratio $Q_I$ (\Cref{jwi}) expressible in terms of resolvent entries (\Cref{l:loccriteria}) that is indicative of eigenvector delocalization or localization if it remains bounded or becomes unbounded, respectively (\Cref{westimate}).} eigenvector localization and delocalization around $E$ with the statements that $\lim_{\eta \rightarrow 0} \Imaginary G_{jj} (E + \mathrm{i} \eta) = 0$ and $\lim_{\eta \rightarrow 0} \Imaginary G_{jj} (E + \mathrm{i} \eta) > 0$ in probability, respectively.\footnote{Here, we view $N$ as tending to $\infty$ before $\eta$ tends to $0$. It would be both natural and useful to analyze the effect of having $\eta$ decay at some effective rate with respect to $N$, but for brevity we do not pursue this here.}
Recalling $\lambda = \lambda (E, \alpha)$ from \eqref{mobilityintroduction}, we prove the following results on the large L\'evy matrix $\bm{H}$.
\begin{enumerate}
\item \emph{Localization}: Let $E_{\mathrm{loc}}$ be the largest real number such that $\lambda(E_{\mathrm{loc}},\alpha) = 1$. For any $E \in \mathbb{R}$ with $|E| > E_{\mathrm{loc}}$, eigenvectors of $\bm{H}$ with eigenvalue around $E$ are localized.
\item \emph{Delocalization}: For any $E \in \mathbb{R}$ such that $\lambda (E,\alpha) > 1$, the eigenvectors of $\bm{H}$ with eigenvalue around $E$ are delocalized. Consequently, letting $E_{\mathrm{deloc}}$ be the smallest positive real number such that $\lambda(E_{\mathrm{deloc}},\alpha) = 1$, the eigenvectors of $\bm{H}$ are delocalized around any $E$ with $|E| < E_{\mathrm{deloc}}$.
\end{enumerate}
The previous two results reduce proving the existence of a unique mobility edge to the question of whether $E_{\mathrm{deloc}} = E_{\mathrm{loc}}$, that is, whether there is a unique positive solution $E = E_{\mathrm{mob}}$ to $\lambda(E,\alpha) = 1$. Although this is a question about the purely deterministic equation \eqref{mobilityintroduction}, it is complicated by the intricacy of the function $\ell(E)$. Nonetheless, we prove such a statement if $\alpha$ is sufficiently close to $0$ or $1$, giving the following result.
\begin{enumerate}\item[(3)] \emph{Sharp Anderson transition}: There exists a constant $c>0$ such that $E_{\mathrm{deloc}} = E_{\mathrm{loc}}$ if $\alpha \in (0,c)\cup (1-c, 1)$. Denoting this common value by $E_{\mathrm{mob}}$, we have $E_{\mathrm{mob}} \sim |\log \alpha|^{-2/\alpha}$ for $\alpha$ near $0$, and $E_{\mathrm{mob}} \sim (1-\alpha)^{-1}$ for $\alpha$ near $1$.
\end{enumerate}
We elaborate more on these statements and their relation to previous works in \Cref{s:relationprevious} below.
Our proofs are based on analyzing a certain infinite-dimensional operator $\bm{T}$ that is associated with a randomly weighted rooted tree (which is a variant of the Poisson Weighted Infinite Tree introduced in \cite{aldous1992asymptotics,aldous2004objective}). Each vertex of the tree has infinitely many children, its vertex set is given by $\mathbb{V} = \{0 \} \cup \bigcup_{k = 1}^{\infty} \mathbb{Z}_{\ge 1}^k$, and its root is chosen to be $0$. For each $v \in \mathbb{V}$, the set of edge weights connecting $v$ to its children is given by the entries of a Poisson point process on $\mathbb{R}$ with intensity measure formally given by $\frac{\alpha}{2} |x|^{-\alpha - 1 }\, dx$. Then $\bm{T}$ is set to be the adjacency matrix of this tree, and it can be interpreted as the local limit of the matrix $\bm{H}$ as $N$ tends to $\infty$ (upon viewing $\bm{H}$ as the adjacency matrix for a weighted complete graph). As such, information about $\bm{T}$ can be transferred to $\bm{H}$. Indeed, letting $\bm{R} = \bm{R} (z) = \{ R_{uv} \} = (\bm{T} - z)^{-1}$ denote the resolvent of $\bm{T}$ and $ R_{\star} (z) = R_{00}$ denote its diagonal entry at the root, it was shown in \cite{bordenave2011spectrum} that $G_{jj}$ converges to $R_\star$ in law. Hence, to show the above (de)localization statements for $\bm{H}$, it suffices to show them for $\bm{T}$.
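Concretely, the weight process at each vertex can be sampled exactly from the arrival times of a rate-one Poisson process: for the intensity measure $\frac{\alpha}{2}|x|^{-\alpha-1}\, dx$, the weight magnitudes ranked in decreasing order are $\Gamma_k^{-1/\alpha}$, where $\Gamma_k$ denotes the $k$-th arrival time, and each weight carries an independent symmetric sign. The following sketch samples a truncated version of the tree; the truncation parameters (number of retained children $K$ and depth) are arbitrary illustrative choices, not taken from the paper.

```python
import random

def child_weights(alpha, K, rng):
    """Top-K edge weights (by magnitude) at one vertex of the weighted tree.

    For a Poisson process with intensity (alpha/2)|x|^{-alpha-1} dx, the
    magnitudes ranked in decreasing order are Gamma_k^{-1/alpha}, where
    Gamma_k are arrival times of a rate-1 Poisson process; each weight
    carries an independent symmetric sign.
    """
    gamma, weights = 0.0, []
    for _ in range(K):
        gamma += rng.expovariate(1.0)              # next Poisson arrival time
        w = gamma ** (-1.0 / alpha)                # k-th largest magnitude
        weights.append(w if rng.random() < 0.5 else -w)
    return weights

def sample_tree(alpha, depth, K, rng):
    """Nested dict mapping each edge weight to the subtree it leads to."""
    if depth == 0:
        return {}
    return {w: sample_tree(alpha, depth - 1, K, rng)
            for w in child_weights(alpha, K, rng)}

rng = random.Random(0)
tree = sample_tree(alpha=0.5, depth=3, K=10, rng=rng)
mags = [abs(w) for w in tree]                      # insertion order = decreasing
print(len(tree), mags[0], mags[-1])
```

Truncating to the $K$ largest-magnitude weights is natural here because the remaining weights are the smallest ones; how much they contribute to spectral quantities is exactly the kind of question the fractional moment analysis below must control.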
This provides two advantages,\footnote{Analogs of both can be realized for the original $\bm{H}$, at the expense of having to track an (effective in $N$) error.} which are related. The first is that, since $\bm{T}$ is the adjacency matrix of a tree, off-diagonal entries of $\bm{R}$ admit a simplified product form. Indeed, for any vertex $v \in \mathbb{V}$, we have \cite{aizenman2015random}
\begin{flalign}
\label{r0v0}
R_{0v} = - R_{00} T_{00_+} R_{0_+ v}^{(0)}, \qquad \text{where $0_+$ is the child of $0$ on the path between $0$ and $v$},
\end{flalign}
\noindent where $R_{0_+ v}^{(0)}$ is the $(0_+, v)$ entry of $\big(\bm{T}^{(0)} - z \big)^{-1}$ (and $\bm{T}^{(0)}$ is obtained from $\bm{T}$ by removing the row and column indexed by the root $0$). The second is that this tree admits a metric that can be (and has been in analogous contexts, such as the tight-binding model) useful for quantifying (de)localization phenomena.
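Numerically, the diagonal resolvent entry at the root of a truncated version of this tree can be computed bottom-up from the standard tree recursion $R_{vv} = -\big(z + \sum_u T_{vu}^2 R_{uu}^{(v)}\big)^{-1}$, with the sum over the children $u$ of $v$; the quantity $\Imaginary R_\star(E + \mathrm{i}\eta)$ then plays the role of the localization/delocalization diagnostic described above for $\bm H$. The following sketch is purely illustrative: the truncation depth, the number of retained children, and the leaf boundary condition $R = -1/z$ are ad hoc choices, not the paper's method.

```python
import random

def r_root(z, alpha, depth, K, rng):
    """Diagonal resolvent entry at the root of a truncated weighted tree.

    Iterates R = -(z + sum_u T_u^2 R_u)^{-1} over the K largest-magnitude
    child edge weights; the recursion stops at the given depth, where the
    leaf boundary condition R = -1/z is used.  Only T_u^2 enters, so the
    signs of the weights are immaterial here.
    """
    if depth == 0:
        return -1.0 / z
    acc, gamma = 0.0 + 0.0j, 0.0
    for _ in range(K):
        gamma += rng.expovariate(1.0)
        t = gamma ** (-1.0 / alpha)                # k-th largest |weight|
        acc += t * t * r_root(z, alpha, depth - 1, K, rng)
    return -1.0 / (z + acc)

rng = random.Random(1)
z = 0.2 + 0.05j                                    # E + i*eta with small eta
samples = [r_root(z, alpha=0.5, depth=6, K=4, rng=rng) for _ in range(40)]
avg_im = sum(r.imag for r in samples) / len(samples)
print(avg_im)
```

One structural sanity check is built in: since $\Imaginary z > 0$ and the leaf values have positive imaginary part, the recursion preserves $\Imaginary R > 0$ at every vertex, as it must for the resolvent of a self-adjoint operator.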
Given this, the proofs of our main results are composed of three main parts.
\begin{enumerate}
\item \emph{Fractional moment criterion}: We establish a criterion for delocalization (and localization) through the growth (and decay, respectively) of certain fractional moment sums of the resolvent entries $R_{0v}$. This reduces the original question to an analysis of fractional moments for these resolvent entries, though the latter are not always explicit.
\item \emph{Explicit formula for the mobility edge}: We show that this fractional moment criterion is equivalent to a more explicit one, dependent on whether the quantity $\lambda(E,\alpha)$ is larger or smaller than $1$. This reduces us to an analysis of the deterministic equation \eqref{mobilityintroduction}.
\item \emph{Small and large $\alpha$ behavior}: We analyze this equation \eqref{mobilityintroduction} for $\alpha$ near $0$ and $1$. We first show $E_{\mathrm{mob}} \sim |\log \alpha|^{-2/\alpha}$ for $\alpha$ near $0$ and $E_{\mathrm{mob}} \sim (1 - \alpha)^{-1}$ for $\alpha$ near $1$; we then show uniqueness of a solution in $E$ to $\lambda(E,\alpha) = 1$ in these regimes. Together, these statements show the existence of a unique mobility edge for $\alpha$ near $0$ and $1$.
\end{enumerate}
\noindent We finally remark that, to the best of our knowledge, the random operator $\boldsymbol{T}$ is the first for which a non-trivial mobility edge can be computed and rigorously demonstrated.
The first, second, and third parts constitute \Cref{MomentFractional}, \Cref{EdgeExplicit}, and \Cref{Scaling} of this paper, respectively. We next elaborate on these parts in more detail.
\subsubsection{Fractional Moment Criterion}
\label{MomentFractionals}
For any integer $L \ge 1$, let $\mathbb{V} (L) \subset \mathbb{V}$ denote the set of vertices in $\mathbb{V}$ of distance $L$ from the root $0$, and for any fixed real number $s\in (\alpha, 1)$ and complex number $z \in \mathbb{H} = \{ z \in \mathbb{C} : \Im (z) > 0 \}$, define (assuming they exist)
\begin{flalign}
\label{sz0}
\Phi_{L} (s; z) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} (z) \big|^s \Bigg]; \qquad \varphi (s; z) = \lim_{L\rightarrow \infty} \frac{1}{L} \log \Phi_{L} (s; z).
\end{flalign}
\noindent Thus, $\Phi_L (s; z)$ describes a sum of fractional moments of resolvent entries associated with the $L$-th level of $\mathbb{V}$, and $\varphi$ denotes its exponential growth (or decay) rate. For any real number $E \in \mathbb{R}$, also define the limits of the above quantities as $s$ tends to $1$ and $z$ tends to $E$, given by
\begin{flalign*}
\varphi (z) = \displaystyle\lim_{s \rightarrow 1} \varphi (s; z); \qquad \varphi(E) = \displaystyle\lim_{\eta \rightarrow 0} \varphi (E + \mathrm{i} \eta).
\end{flalign*}
The fractional moment criterion we show (see \Cref{rimaginary0} and \Cref{p:imvanish} below) is then that $\bm{T}$ is delocalized around some real number $E \in \mathbb{R}$ if $\varphi (E) > 0$, and that it is localized around $E$ if $\varphi (E) < 0$. The latter statement on localization is a quick consequence of the Ward identity \eqref{sumrvweta}, so here we focus on the former statement about delocalization.
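As an illustration, $\Phi_L(s;z)$ can be crudely estimated by Monte Carlo on a truncated tree, using the product structure of off-diagonal resolvent entries along the path from the root: $|R_{0v}| = |R_{00}| \prod_{k=1}^{L} |T_{v_{k-1} v_k}| \big| R^{(v_{k-1})}_{v_k v_k} \big|$, where $R^{(v_{k-1})}_{v_k v_k}$ is the diagonal resolvent entry of the subtree rooted at $v_k$. The sketch below is qualitative only: heavy tails make such estimates very noisy, and the truncation parameters are arbitrary assumptions.

```python
import math, random

def node(z, alpha, levels, extra, K, s, rng):
    """One subtree of the truncated weighted tree.

    Returns (R, P): R is the diagonal resolvent entry at the subtree's
    root (computed levels + extra generations down, with R = -1/z at the
    leaves), and P is the sum over vertices v exactly `levels` below of
    the path products prod_e |T_e * R_e|^s, so that at the root the
    level-L sum satisfies sum_v |R_{0v}|^s = |R_{00}|^s * P.
    """
    if levels == 0 and extra == 0:
        return -1.0 / z, 1.0
    acc, P, gamma = 0.0 + 0.0j, 0.0, 0.0
    for _ in range(K):
        gamma += rng.expovariate(1.0)
        t = gamma ** (-1.0 / alpha)              # k-th largest |weight|
        if levels > 0:
            Ru, Pu = node(z, alpha, levels - 1, extra, K, s, rng)
            P += (t * abs(Ru)) ** s * Pu
        else:                                    # beyond level L: only R needed
            Ru, _ = node(z, alpha, 0, extra - 1, K, s, rng)
        acc += t * t * Ru
    return -1.0 / (z + acc), (P if levels > 0 else 1.0)

rng = random.Random(0)
z, alpha, s, L = 0.2 + 0.1j, 0.5, 0.75, 4
trials = [node(z, alpha, L, 2, 4, s, rng) for _ in range(20)]
phi_L = sum(abs(R) ** s * P for R, P in trials) / len(trials)
phi_hat = math.log(phi_L) / L                    # crude estimate of phi(s; z)
print(phi_hat)
```

The sign of such an estimate is the quantity of interest for the criterion, but because of the truncation and the heavy tails this sketch should not be read as a reliable way to locate the transition.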
The origin of this delocalization criterion goes back to the work \cite{aizenman2013resonant}, where an analogous statement was shown for the tight-binding model on the regular tree. While the operator $\bm{T}$ seems quite distant from the tight-binding Hamiltonian, our starting observation is that a formulaic analogy between them arises upon expressing their diagonal resolvent entries through the Schur complement identity \eqref{qvv}. For $\bm{T}$, this gives for any $w \sim 0$ (writing $u \sim v$ if $u, v \in \mathbb{V}$ are adjacent) that
\begin{flalign}
\label{r00zt1}
R_{00} = -\big( z + T_{0w}^2 R_{ww}^{(0)} + K_{0,w} \big)^{-1}, \qquad \text{where $K_{0,w} = \displaystyle\sum_{\substack{u \sim 0 \\ u \ne w}} T_{0u}^2 R_{uu}^{(0)}$},
\end{flalign}
\noindent while the analogous form for the tight-binding model on the regular tree would be
\begin{flalign}
\label{r0020}
R_{00} = -\bigg( z + \sum_{u \sim 0} R_{uu}^{(0)} + K_0 \bigg)^{-1}, \qquad \text{for a random potential $K_0$ independent from $\bm{R}$}.
\end{flalign}
\noindent Viewing $K_{0,w}$ as parallel to the potential $K_0$ makes the two formulas appear similar (though the $\{ K_v \}$ are independent in the latter, while the $\{ K_{v,w} \}$ are strongly correlated in the former).
With this in mind, we next verify the existence of the limit \eqref{sz0} defining $\varphi (s; z)$, by showing that $\Phi_L (s; z)$ is approximately submultiplicative and supermultiplicative in $L$. This proceeds by first (repeatedly) using the resolvent expansion \eqref{r0v0}; applying \eqref{r00zt1}; and taking the expectation over $\{ T_{0u} \}$, conditional on the tree edge weights not incident to $0$. Central to implementing this in \cite{aizenman2013resonant} was the assumption that the potential $K_0$ has bounded density, which directly guarantees the uniform boundedness\footnote{Indeed, if $K_0$ has bounded density, then $\mathbb{E} \big[ |R_{00}|^s \big] = \mathbb{E} \big[ |A+K_0|^{-s} \big] < \infty$, upon setting $A = z + \sum_{u \sim 0} R_{uu}^{(0)}$.} of $\mathbb{E} \big[ |R_{00}|^s \big]$, conditional on the $R_{uu}^{(0)}$. Fortunately, in the L\'{e}vy setup, we can essentially identify $\Real K_{0,w}$ with an $\frac{\alpha}{2}$-stable law, which does have bounded density.
However, a complication that arises in the L\'{e}vy setup is that the edge weights $T_{0u}$ appearing in \eqref{r0v0} are heavy-tailed. Since $s > \alpha$, this makes $\mathbb{E} \big[ |T_{00_+}|^s \big]$ infinite; what guarantees finiteness of $\mathbb{E}\big[ |R_{0v}|^s \big]$ is the presence of $R_{00}$ in \eqref{r0v0}. Indeed, by \eqref{r00zt1}, we have
\begin{flalign}
\label{r00v001}
\mathbb{E} \big[ |R_{0v}|^s \big] = \mathbb{E} \Big[ |R_{00}|^s \cdot |T_{00_+}|^s \cdot \big| R_{0_+ v}^{(0)} \big|^s \Big] = \mathbb{E} \Bigg[ \displaystyle\frac{|T_{00_+}|^s}{\big| z + T_{00_+}^2 R_{0_+ 0_+}^{(0)} + K_{0,0_+} \big|^s}\cdot \big| R_{0_+ v}^{(0)} \big|^s\Bigg],
\end{flalign}
\noindent and so when its numerator is large then its denominator is as well.\footnote{As stated in \Cref{salpha} below, $\Phi_L (s; z)$ diverges if $s < \alpha$; this is due to the excess of many small $\{ T_{0v} \}$ (instead of the presence of a few large ones).} However, unlike in \cite{aizenman2013resonant}, this bound is not uniform conditional on the $R_{uu}^{(0)}$; it diverges as $R_{0_+ 0_+}^{(0)}$ decreases. Confronting this requires a precise understanding of the expectations on the right side of \eqref{r00v001} (accessed through various integral estimates in \Cref{G0vMoment}), and a recursive analysis of modified (see \Cref{xil} below) fractional moments weighted by an additional factor of $|R_{00}|^{\chi-s}$ (done in \Cref{EstimateMomentss}).
Next, we seek to show delocalization at $E$ if $\varphi(E) > 0$. In their setup, \cite{aizenman2013resonant} did this through two claims (whose statements we simplify considerably here). First, delocalization occurs if for each $L \ge 1$ there exists at least one $v \in \mathbb{V} (L)$ such that $|R_{0v}|$ is large; such vertices are called ``resonant.'' Second, if $\Imaginary R_{\star}$ is too small (delocalization does not occur), then the exponential divergence of $\Phi_L (s; z)$ for $(s, z) \approx (1, E)$ (equivalently, $\varphi (E) > 0$) can guarantee the existence of at least one resonant $v \in \mathbb{V} (L)$. The former claim is an eventual consequence of a deterministic, elementary linear algebraic inequality (\Cref{rsum} below). The proof of the latter is more involved and proceeds by first using a large deviations argument to show that the divergence of $\Phi_L (s; z)$ for $(s, z) \approx (1, E)$ implies that the number of resonant vertices is large in expectation. It then lower bounds the probability that there exists at least one resonant vertex using a second moment method.
The proof of the first claim quickly adapts to the L\'{e}vy context, but that of the second does not. A large deviations argument still shows that the number of resonant vertices is large in expectation, and we implement this in \Cref{EstimateN}. However, the second moment method used to show this in probability is obstructed by the infinite and heavy-tailed nature of our tree.
To overcome this, we introduce a more robust amplification scheme in \Cref{EstimateR}, described as follows. Using the expectation bound, we show for any $M \ge 1$ and $v_0 \in \mathbb{V}$ that there exists some $\mu \ge 0$ such that the following holds if $\varphi (E) > 0$. With probability at least $e^{-\mu M}$, there are at least $e^{(\mu + \delta) M}$ vertices $v \in \mathbb{V}$ lying $M$ levels below $v_0$ such that $|R_{v_0 v}|$ is large; moreover, these events are essentially independent as $v_0$ ranges over some fixed level of the tree. Although this probability is exponentially small, it produces sufficiently many opportunities to ensure that the number of length-$M$ resonances likely grows (``amplifies'') as one descends the tree. We show that ``chaining together'' $\frac{L}{M}$ of these length-$M$ resonances typically yields one of length $L$, which produces delocalization.
\subsubsection{Explicit Formula for the Mobility Edge}
\label{LambdaE}
We would next like to convert the inexplicit fractional moment criterion into an explicit one involving the quantity $\lambda(E,\alpha)$ from \eqref{mobilityintroduction}, to which end we will try evaluating the quantity $\Phi_L (s; z)$. Making use of another product expansion for off-diagonal resolvent entries of $\bm{R}$, we obtain
\begin{equation}\label{productexpand}
R_{0v} = R_{00}^{(0_+)} T_{00_+} R_{0_+ v}^{(0)},
\end{equation}
\noindent where again $0_+ \sim 0$ lies on the path between $0$ and $v$. Similarly to \cite{bapst2014large} (which addressed the tight-binding model on the regular tree), we repeatedly expand the fractional moments of $R_{0v}$ in $\Phi_L (s;z)$ using \eqref{productexpand}; apply \eqref{r00zt1} on each diagonal resolvent entry $R_{ww}^{(w_+)}$ in the expansion; and integrate out the randomness $\{ T_{u_- u} \}$ one level at a time. In \Cref{s:transfer}, we find that this expresses $\Phi_L(s;z)$ through the $L$-fold iteration of a certain integral transfer operator $T = T_{s,\alpha,z}$, defined by
\begin{flalign}
\label{t11}
Tf(x)
=
\frac{\alpha}{2} \int_{\mathbb{C}} \displaystyle\int_{\mathbb{R}}
f(y)
\left| \frac{1}{z + y} \right|^s
|h|^{s-\alpha -1 } p_z\left(x + h^2 (z+y)^{-1}\right)
\, d h \, dy,
\end{flalign}
where $p_z$ is the density of $-z - R_\star(z)^{-1}$. Thus, the quantity $e^{\varphi (s;z)}$ defining the exponential growth rate of $\Phi_L (s;z)$ becomes the Perron--Frobenius eigenvalue of $T$. Unfortunately, its computation is impeded by the fact that $T$ is defined through the density of the complex random variable $R_{\star}$, which is in general inexplicit; its real and imaginary parts are correlated in an unknown way.
A simplification arises if we assume that, in probability,
\begin{flalign}
\label{eta0rlimit}
\displaystyle\lim_{\eta \rightarrow 0} \Imaginary R_{\star} (E + \mathrm{i} \eta) = 0, \qquad \text{namely, localization occurs at $E$}.
\end{flalign}
\noindent In this case, we show (\Cref{l:boundarybasics} below) that the limit of $R_{\star} (E + \mathrm{i} \eta)$ as $\eta$ tends to $0$ is an explicit real random variable, denoted by $R_{\mathrm{loc}} = R_{\mathrm{loc}} (E)$, given by the inverse of a stable law with specific parameters. Inserting this into \eqref{t11} and taking the associated limit as $z \in \mathbb{H}$ tends to $E \in \mathbb{R}$ yields an operator $T_{s,\alpha,E}$ that is now fully explicit. This operator appeared earlier in \cite{tarquini2016level}, where its Perron--Frobenius eigenvalue $\lambda (E, s, \alpha)$ was found exactly through a Fourier transform computation; we carefully reimplement this in \Cref{TEigenvalue}. We have under this notation that $\lambda (E, \alpha) = \lim_{s \rightarrow 1} \lambda(E, s, \alpha)$; so, assuming \eqref{eta0rlimit}, this would indicate (if one can interchange several limits) that $e^{\varphi (E)} = \lim_{s \rightarrow 1} \lambda (E, s, \alpha) = \lambda (E, \alpha)$.
This already implies that, if $\lambda (E,\alpha) > 1$, then delocalization holds at $E$. Indeed, otherwise \eqref{eta0rlimit} would hold (at least along some sequence $\{ \eta_j \}$ tending to $0$, which suffices), indicating by the above that $\varphi (E) = \log \lambda (E, \alpha) > 0$. By the fractional moment criterion, this implies that delocalization holds at $E$, which is a contradiction.
An analogous route does not show localization, to which end we introduce a bootstrap argument involving two additional statements: (i) an estimate showing localization at $E$ whenever $|E|$ is sufficiently large, and (ii) a bound showing uniform continuity of $\varphi (s; E) = \lim_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta)$ on the domain $\big\{ E : \varphi (s; E) < -\delta \big\}$, for any fixed $\delta > 0$. The first implicitly follows from the estimates discussed in \Cref{MomentFractionals} (this can be seen on a high level by observing from \eqref{r00zt1} that $|z|$ large indicates $R_{00}$ should be small, which by repeated use of \eqref{productexpand} suggests that $R_{0v}$ is small). The second is proven in \Cref{s:lyapunovcontinuity} through a series of restricted moment estimates and resolvent bounds; we refer to the beginning of that section for a more detailed heuristic underlying its proof.
Given these, we proceed as follows. Fixing $E > E_{\mathrm{loc}}$ (the case $E < -E_{\mathrm{loc}}$ is similar), we seek to show localization at $E$. To do this, observe by (i) that there exists $E_0 > E_{\mathrm{loc}}$ sufficiently large so that localization \eqref{eta0rlimit} holds at $E_0$. Hence, $\varphi (E_0) = \log \lambda (E_0, \alpha)$; since $\lambda (x, \alpha) < 1$ for $x > E_{\mathrm{loc}}$, it follows from (ii) that there exists a real number $E_1 \in (E, E_0)$ (uniformly bounded away from $E_0$) such that $\varphi (E_1) < 0$. By the fractional moment criterion, this verifies \eqref{eta0rlimit} at $E_1$, which yields $\varphi (E_1) = \log \lambda (E_1, \alpha)$. Repeating this procedure (uniformly decreasing the $E_j$, and applying (i), (ii), and the fractional moment criterion) shows that \eqref{eta0rlimit} holds at $E$, yielding localization there.
\subsubsection{Small and Large $\alpha$ Behavior}
The main difficulty in studying the equation \eqref{mobilityintroduction} lies in the analysis of the $\ell(E)$ term, which is defined (see \eqref{tlrk}) as a certain oscillatory integral. So, we first interpret $\ell(E)$ probabilistically (see \Cref{realimaginaryl}), by expressing
\begin{flalign}
\label{ralphae}
\Real \ell (E) = \pi^{-1} \cdot \Gamma (\alpha) \cdot \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big],
\end{flalign}
\noindent where we recall from \Cref{LambdaE} that $R_{\mathrm{loc}} (E)$ is an explicit, real random variable given by the inverse of a stable law; a similar expression holds for $\Im \ell (E)$. More specifically, we may write \begin{flalign}
\label{re0}
R_{\mathrm{loc}} (E) = - \big( E + a(E)^{2/\alpha} \cdot S - b(E)^{2/\alpha} \cdot T \big)^{-1},
\end{flalign}
\noindent where $S$ and $T$ are independent, normalized, positive $\frac{\alpha}{2}$-stable laws, and $a = a(E)$ and $b = b(E)$ are certain real numbers satisfying the pair of coupled fixed-point equations (here, $x_+ = \max \{ x, 0 \}$ and $x_- = \max \{ -x, 0 \}$)
\begin{equation}
\label{ab0}
a = \mathbb{E} \big[ ( E + a^{2/\alpha} S - b^{2/\alpha} T )_-^{-\alpha/2} \big]; \qquad
b = \mathbb{E} \big[ ( E + a^{2/\alpha} S - b^{2/\alpha} T )_+^{-\alpha/2} \big].
\end{equation}
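A numerical sketch of the system \eqref{ab0}: positive $\frac{\alpha}{2}$-stable variables can be sampled via Kanter's representation, and $(a,b)$ can then be approximated by a damped Monte Carlo fixed-point iteration. Several assumptions are made here that are not fixed by the text: the stable laws are normalized to have Laplace transform $e^{-t^{\alpha/2}}$, the moments $(x)_\pm^{-\alpha/2}$ are computed on the events $\{\pm x > 0\}$ (and contribute zero otherwise), and the damping and all numerical parameters are ad hoc.

```python
import math, random

def pos_stable(beta, rng):
    """Kanter's representation of a positive beta-stable variable with
    Laplace transform exp(-t^beta), for 0 < beta < 1."""
    u = rng.uniform(1e-9, math.pi - 1e-9)
    w = rng.expovariate(1.0)
    a = (math.sin((1 - beta) * u)
         * math.sin(beta * u) ** (beta / (1 - beta))
         / math.sin(u) ** (1.0 / (1 - beta)))
    return (a / w) ** ((1 - beta) / beta)

def iterate_ab(E, alpha, n_samples=10000, n_iter=30, seed=0):
    """Damped Monte Carlo iteration of the fixed-point system for (a, b).

    The moments (x)_-^{-alpha/2} and (x)_+^{-alpha/2} are taken on the
    events {x < 0} and {x > 0}, respectively (zero otherwise)."""
    rng = random.Random(seed)
    S = [pos_stable(alpha / 2, rng) for _ in range(n_samples)]
    T = [pos_stable(alpha / 2, rng) for _ in range(n_samples)]
    a = b = 1.0
    for _ in range(n_iter):
        xs = [E + a ** (2 / alpha) * s - b ** (2 / alpha) * t
              for s, t in zip(S, T)]
        a_new = sum((-x) ** (-alpha / 2) for x in xs if x < 0) / n_samples
        b_new = sum(x ** (-alpha / 2) for x in xs if x > 0) / n_samples
        a, b = 0.5 * (a + a_new), 0.5 * (b + b_new)    # damping for stability
    return a, b

a, b = iterate_ab(E=2.0, alpha=0.5)
print(a, b)
```

Whether such a naive iteration converges to the correct fixed point is not guaranteed; the sketch only illustrates how intricate the system \eqref{ab0} is to evaluate, which is precisely why its analysis is restricted below to $\alpha$ near $0$ or $1$.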
The problem of studying solutions to $\lambda(E,\alpha) = 1$ then effectively reduces to understanding the system \eqref{ab0}, which in general seems intricate.
However, assuming that $\alpha$ is sufficiently close to $0$ or $1$ allows for a fairly complete analysis.
If $\alpha$ is close to $1$, then we can restrict to $E$ large. Indeed, from \eqref{mobilityintroduction} (and the exact expressions for $K_{\alpha}$, $t_{\alpha}$, and $t_1$ from \eqref{tlrk}), it is quickly seen that $\lambda (E, \alpha) \sim 1$ imposes\footnote{More broadly, it imposes an asymmetric scaling between the real and imaginary parts of $\ell (E)$, namely, $\Re \ell (E) \sim (1-\alpha)^2$ and $\Imaginary \ell (E) \sim 1-\alpha$, which eventually must be taken into account when implementing the analysis in full.} $\Real \ell(E) \sim (1-\alpha)^2$. Together with \eqref{ralphae}, this indicates that $\mathbb{E} \big[ |R_{\mathrm{loc}} (E)|^{\alpha} \big] \sim 1-\alpha$; in particular, it must be small, so \eqref{re0} suggests that $E \gg 1$. If $E$ is large, the equations \eqref{ab0} simplify substantially. For example, we will approximate $E + a^{2/\alpha} S - b^{2/\alpha} T \approx E$, which directly from \eqref{ab0} implies $b \approx E^{-\alpha/2}$. In general, we obtain precise asymptotics for how $a(E)$ and $b(E)$ scale with $E$, which we use to show that any solution $E$ to $\lambda(E,\alpha)=1$ scales as $E \sim (1-\alpha)^{-1}$; this is done in \Cref{s:alphanear1}.
The proof of uniqueness for such a solution proceeds by showing that $\lambda(E,\alpha)$ is strictly decreasing in the region $E \sim (1-\alpha)^{-1}$. This is done in \Cref{s:alphanear1uniqueness}, by pinpointing the scaling behavior (and thus determining the signs) of the derivatives of all functions involved in the computation of $\lambda$. These include $a(E)$, $b(E)$, expectations of the form $\mathbb{E} \big[ (E+x S - yT)^{\gamma} \big]$, and $\lambda(E, \alpha)$.
If $\alpha$ is close to $0$, then from \eqref{ralphae} and \eqref{mobilityintroduction}, it can be seen that $\lambda (E, \alpha) \sim 1$ imposes $\mathbb{E} \big[ |R_{\mathrm{loc}}(E)|^{\alpha} \big] \approx 2 - c_{\star} \alpha$, for some explicit constant $c_{\star} > 0$ (\Cref{lambdaalpha0} below). To analyze the equations \eqref{ab0} we then make use of (and quantify; see \Cref{w0walphavariation} below) the fact \cite{SDCS} that, for small $\alpha$, the random variables $W = \frac{\alpha}{2} \log S$ and $V = \frac{\alpha}{2} \log T$ are approximately Gumbel laws. It thus becomes convenient to parameterize $E = u^{2/\alpha}$, in which case \eqref{ab0} becomes
\begin{flalign}
\label{ab02}
a = \mathbb{E} \big[ (u^{2/\alpha} + (a e^W)^{2/\alpha} - (be^V)^{2/\alpha})_-^{-\alpha/2} \big]; \qquad b = \mathbb{E} \big[ (u^{2/\alpha} + (ae^W)^{2/\alpha} - (be^V)^{2/\alpha})_+^{-\alpha/2} \big].
\end{flalign}
\noindent Approximating $(W, V)$ by independent Gumbel random variables, the expectations in \eqref{ab02} become explicit, especially upon further approximating $(w^{2/\alpha} + x^{2/\alpha} - y^{2/\alpha})^{\alpha/2} \approx \pm \max \{ w, x, y \}$.
This enables one to nearly solve the equations \eqref{ab02} for $(a, b)$ in terms of $u$, and thus to show that any solution $u$ to $\lambda (u^{2/\alpha}, \alpha) = 1$ (or $\mathbb{E}\big[ |R_{\mathrm{loc}} (u^{2/\alpha})|^{\alpha} \big] \approx 2 - c_{\star} \alpha$) must scale as $u \sim |\log \alpha|^{-1}$; we implement this in \Cref{Alpha0Scaling}. The proof of uniqueness for such a solution proceeds by showing that $\lambda (u^{2/\alpha},\alpha)$ is strictly decreasing in the region where $u \sim |\log \alpha|^{-1}$. This is done in \Cref{E0Unique}, again by analyzing how the derivatives of all functions involved in the computation of $\lambda (E, \alpha)$ scale.
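The Gumbel approximation invoked above can also be checked numerically: if $S$ is a positive $\beta$-stable variable with Laplace transform $e^{-t^{\beta}}$ (an assumed normalization), then $\beta \log S$ is close to a standard Gumbel law for small $\beta$. The sketch below, with $\beta = \frac{\alpha}{2}$, compares the empirical distribution of $\frac{\alpha}{2} \log S$ to the Gumbel distribution function $e^{-e^{-x}}$; the sample size and the choice of $\alpha$ are arbitrary.

```python
import math, random

def pos_stable(beta, rng):
    """Kanter's representation of a positive beta-stable variable with
    Laplace transform exp(-t^beta), for 0 < beta < 1."""
    u = rng.uniform(1e-9, math.pi - 1e-9)
    w = rng.expovariate(1.0)
    a = (math.sin((1 - beta) * u)
         * math.sin(beta * u) ** (beta / (1 - beta))
         / math.sin(u) ** (1.0 / (1 - beta)))
    return (a / w) ** ((1 - beta) / beta)

rng = random.Random(0)
alpha = 0.05                       # small alpha, so beta = alpha/2 is small
beta = alpha / 2
n = 20000
samples = sorted(beta * math.log(pos_stable(beta, rng)) for _ in range(n))

# Kolmogorov distance between the empirical law of (alpha/2) log S and
# the standard Gumbel distribution function exp(-exp(-x))
ks = max(abs((i + 1) / n - math.exp(-math.exp(-x)))
         for i, x in enumerate(samples))
print(ks)
```

Heuristically, the approximation works because $S^{\beta} = (a(U)/W)^{1-\beta}$ in Kanter's representation, and $a(U) \to 1$ as $\beta \to 0$, so $\beta \log S \approx -\log W$ with $W$ exponential, which is exactly a standard Gumbel variable.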
\subsection{Notation}\label{s:notation}
We let $\mathbb{H} = \{ z \in \mathbb{C} : \Imaginary z > 0 \}$ and $\mathbb{R}_+ = \{ r \in \mathbb{R} : r \ge 0 \}$. The notation $\unn{a}{b}$ for $a,b \in \mathbb{Z}$ denotes the set $\{ k \in \mathbb{Z} : a \le k \le b\}$. We define the function $z \mapsto z^a$ for $a \in (0,1)$ and $z\in \mathbb{C} \setminus (-\infty, 0)$ by $z^a = r^a \exp( \mathrm{i} a \theta)$ if $z = r \exp(\mathrm{i} \theta)$ for $\theta \in ( -\pi, \pi]$. The function $\log$ always denotes the natural logarithm. Given $x \in \mathbb{R}$, we set $(x)_+ = \max(x,0)$ and $(x)_- = \max(-x,0)$. For any two functions $f, g \colon \mathbb{R} \rightarrow \mathbb{C}$, we let $f * g : \mathbb{R} \rightarrow \mathbb{C}$ denote their convolution, given by $(f \ast g) (x) = \int_{-\infty}^{\infty} f(y) g(x-y) \, dy$. Moreover, for a Banach space $X$; subset $S \subseteq X$; and complex number $z \in \mathbb{C}$, we let $z \cdot S = \{ zs : s \in S \}$. Given an event $\mathscr E$ in some probability space, we let $\mathscr E^c$ denote its complement.
We will frequently define constants that depend on some number of parameters. These may be introduced either as $C = C(x_1, \dots, x_n)$ or simply as $C(x_1, \dots, x_n)$, for parameters $x_1, \dots, x_n$, and subsequently referred to as $C$ (suppressing the dependence on the parameters in the notation). Such constants may change line to line without being renamed; in all such cases the new constant depends on the same set of parameters as the previous one. We usually write $C>1$ for large constants, and $c>0$ for small constants.
We will often fix constants $\alpha \in (0,1)$ and $s \in (\alpha, 1)$, and sometimes omit notating the dependence on these parameters from our definitions of certain constants for brevity. Similarly, we will often fix $\varepsilon$ and $B$ such that $\varepsilon \in (0,1)$ and $B > 1$ and make claims for all points $z\in \mathbb{H}$ such that $\varepsilon \le |\Re z| \le B$ and $\Im z \in (0,1)$ (which may also be subject to further conditions). We may similarly omit $\varepsilon$ and $B$ from our notation. In statements about any of the parameters $\alpha$, $s$, $\varepsilon$, and $B$, all constants are understood to depend on these parameters, unless explicitly stated otherwise.
\subsection{Acknowledgements}
The authors are immensely grateful to Alice Guionnet for extremely valuable and extensive discussions related to this work. They also thank Giulio Biroli, Marco Tarzia, and Horng-Tzer Yau for helpful conversations on this topic. Amol Aggarwal was partially supported by a Clay Research Fellowship, by the NSF grant DMS-1926686, and by the IAS School of Mathematics.
Patrick Lopatto was partially supported by the NSF postdoctoral fellowship DMS-2202891. Amol Aggarwal and Patrick Lopatto also wish to acknowledge the NSF grant DMS-1928930, which supported their participation in the Fall 2021 semester program at MSRI in Berkeley, California titled, ``Universality and Integrability in Random Matrix Theory and Interacting Particle Systems.''
\section{Results}
\label{Results}
\subsection{L\'{e}vy Matrices and Their Local Limits}
\label{Stable}
In this section we recall L\'{e}vy matrices and a heavy tailed operator on an infinite tree that can be viewed as their local limit.
We begin by recalling a class of heavy-tailed probability distributions defined in \cite[Section 1]{bordenave2011spectrum}.
It contains symmetric power laws and distributions close to such laws.
\begin{definition}\label{LaDef}
For every $\alpha \in (0,1)$, we define $\mathbb{L}_{\alpha}$ as the set of probability measures $\mu$ on $\mathbb{R}$ that are absolutely continuous with respect to Lebesgue measure and satisfy the following conditions.
\begin{enumerate}
\item
The function $L: \mathbb{R}_+ \rightarrow \mathbb{R}$ given by
\begin{equation*}
L(t) = t^{\alpha}\mu\big( (-\infty, -t) \cup (t, \infty)\big)
\end{equation*}
has slow variation at $\infty$; this means that for any $x > 0$,
\begin{equation*}
\lim_{t \rightarrow \infty} \frac{L(xt)}{L(t)} = 1.
\end{equation*}
\item We have
\begin{equation*}
\lim_{t \rightarrow \infty} \frac{\mu\big( (t, \infty) \big) }{\mu\big( (-\infty, -t) \cup (t, \infty) \big)} = \frac{1}{2}.
\end{equation*}
\end{enumerate}
\end{definition}
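As a concrete illustration (added here for the reader's convenience, and not part of the original text), the symmetric pure power law lies in $\mathbb{L}_{\alpha}$:

```latex
% Example: let mu have density f(x) = (alpha/2) |x|^{-alpha-1} for |x| >= 1,
% and f(x) = 0 otherwise. Then for every t >= 1,
%   mu( (-infty, -t) \cup (t, infty) ) = 2 \int_t^\infty (alpha/2) x^{-alpha-1} dx = t^{-alpha},
% so
\[
L(t) = t^{\alpha} \, \mu \big( (-\infty, -t) \cup (t, \infty) \big) = 1
\qquad \text{for all } t \ge 1,
\]
% which is constant (hence slowly varying at infinity), and the symmetry of f
% gives the tail-balance condition with limit 1/2.
```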
We next define the class of random matrices considered in this work.
\begin{definition}
\label{matrixh}
Fix an integer $N \ge 1$, and let $\{ h_{ij} \}_{1 \le i \le j \le N}$ denote mutually independent random variables that each have law $N^{-1 / \alpha} X$, where $X$ is a random variable whose law is contained in $\mathbb{L}_\alpha$. Set $h_{ij} = h_{ji}$ for each $1 \le j < i \le N$, and define the $N \times N$ random, real symmetric matrix $\boldsymbol{H} = \{ h_{ij} \}_{1 \le i, j \le N}$, which we call a \emph{L\'{e}vy matrix} (with parameter $\alpha$).
\end{definition}
It will further be useful to introduce notation for the resolvent of a L\'{e}vy matrix.
\begin{definition}
\label{g}
Adopt the notation of \Cref{matrixh}, and fix a complex number $z \in \mathbb{H}$. The \emph{resolvent} of $\boldsymbol{H}$, denoted by $\boldsymbol{G} = \boldsymbol{G} (z) = \{ G_{ij} \}_{1 \le i, j \le N} = \big\{ G_{ij} (z) \big\}_{i, j}$, is defined by $\boldsymbol{G} = (\boldsymbol{H} - z)^{-1}$.
\end{definition}
We next recall from \cite[Section 2.3]{bordenave2011spectrum} a description of the local limit of a L\'{e}vy matrix $\boldsymbol{H}$ as a heavy-tailed operator on an infinite tree $\mathbb T$ (which may be viewed as a variant of the Poisson Weighted Infinite Tree introduced in \cite{aldous1992asymptotics,aldous2004objective}). This tree has vertex set $\mathbb{V} = \bigcup_{k = 0}^{\infty} \mathbb{Z}_{\ge 1}^k$, with root $\mathbb{Z}^0$ denoted by $0$, so that the children of any $v \in \mathbb{Z}_{\ge 1}^k \subset \mathbb{V}$ are $(v, 1), (v, 2), \ldots \in \mathbb{Z}_{\ge 1}^{k + 1}$.
For each vertex $v \in \mathbb{V}$, let $\boldsymbol{\xi}_{v} = (\xi_{(v, 1)}, \xi_{(v, 2)}, \ldots )$ denote a Poisson point process on $\mathbb{R}$ with intensity measure $\frac{1}{2} dx$ (where $dx$ is the Lebesgue measure on $\mathbb{R}$), ordered so that $|\xi_{(v, 1)}| \ge |\xi_{(v, 2)}| \ge \cdots $. Let $\mathcal{F} \subset L^2 (\mathbb{V})$ denote the (dense) set of vectors with finite support. For any vertex $v \in \mathbb{V}$, let $\delta_{v} \in \mathcal{F}$ denote the unit vector supported on $v$. Then define the operator $\boldsymbol{T} : \mathcal{F} \rightarrow L^2 (\mathbb{V})$ by setting
\begin{flalign}
\label{t}
\begin{aligned}
\langle \delta_{v}, \boldsymbol{T} \delta_{w} \rangle & = \sgn \xi_{w} \cdot |\xi_{w}|^{-1 / \alpha}, \qquad \text{if $w = (v, j)$ for some $j \in \mathbb{Z}_{\ge 1}$}; \\
\langle \delta_{v}, \boldsymbol{T} \delta_{w} \rangle& = \sgn \xi_{v} \cdot |\xi_{v}|^{-1 / \alpha}, \qquad \text{if $v = (w, j)$, for some $j \in \mathbb{Z}_{\ge 1}$}; \\
\langle \delta_{v}, \boldsymbol{T} \delta_{w} \rangle & = 0, \qquad \qquad \qquad \qquad \text{otherwise},
\end{aligned}
\end{flalign}
\noindent for any vertices $v, w \in \mathbb{V}$, and extending it by linearity to all of $\mathcal{F}$. We abbreviate the matrix entry $T_{v w} = \langle \delta_{v}, \boldsymbol{T} \delta_{w} \rangle$ for any vertices $v, w \in \mathbb{V}$.
We further identify $\boldsymbol{T}$ with its closure, which is self-adjoint by \cite[Proposition A.2]{bordenave2011spectrum}.
\begin{rem}
\label{tuvalpha}
For any $v \in \mathbb{V}$, let $\mathbb{D} (v)$ denote the set of children of $v$. Then, the sets of random variables $\big\{ |T_{vw}| \big\}_{w \in \mathbb{D}(v)}$ and $\big\{ |T_{vw}|^2 \big\}_{w \in \mathbb{D} (v)}$ form Poisson point processes on $\mathbb{R}_{> 0}$ with intensity measures $\alpha x^{-\alpha-1} dx$ and $\frac{\alpha}{2} x^{-\alpha/2 - 1} dx$, respectively.
\end{rem}
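These intensities follow from the mapping theorem for Poisson processes applied to the weights in \eqref{t}; the following short change-of-variables computation (a sketch included here, not part of the original argument) records why.

```latex
% The points { xi_{(v,j)} } form a Poisson process on R with intensity (1/2) dx,
% so { |xi_{(v,j)}| } is a Poisson process on R_{>0} with intensity dx.
% Under y = x^{-1/alpha}, i.e. x = y^{-alpha}, the intensity transforms as
\[
dx = \Big| \frac{d}{dy} y^{-\alpha} \Big| \, dy = \alpha y^{-\alpha - 1} \, dy,
\]
% giving the stated intensity for |T_{vw}| = |xi_w|^{-1/alpha}. Similarly, under
% y = x^{-2/alpha}, i.e. x = y^{-alpha/2},
\[
dx = \Big| \frac{d}{dy} y^{-\alpha/2} \Big| \, dy = \frac{\alpha}{2} y^{-\alpha/2 - 1} \, dy,
\]
% which yields the stated intensity for |T_{vw}|^2.
```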
The following definition provides notation for the resolvent
of $\boldsymbol{T}$.
\begin{definition}
\label{r}
For any complex number $z \in \mathbb{H}$, the \emph{resolvent} of $\boldsymbol{T}$, denoted by $\boldsymbol{R}: L^2 (\mathbb{V}) \rightarrow L^2 (\mathbb{V})$, is defined by $\boldsymbol{R} = \boldsymbol{R} (z) = (\boldsymbol{T} - z)^{-1}$. For any vertices $v, w \in \mathbb{V}$, we denote the $(v, w)$-entry of $\boldsymbol{R}$ by $R_{v w} = R_{v w} (z) = \langle \delta_{v}, \boldsymbol{R} \delta_{w} \rangle$. We further abbreviate the entry at the root by $R_{\star} = R_{\star} (z) = R_{00} (z)$.
\end{definition}
The following lemma, which follows from results in \cite{bordenave2011spectrum}, implies that the (diagonal) entries of $\boldsymbol{G}$ converge to those of $\boldsymbol{R}$ as $N$ tends to $\infty$.
\begin{lem}[{\cite[Theorem 2.2, Theorem 2.3]{bordenave2011spectrum}}]
\label{gr}
Adopt the notation of \Cref{g} and \Cref{r}. For any integer $i \in \unn{1}{N}$, we have the convergence in law $\lim_{N \rightarrow \infty} G_{ii} (z) = R_{\star} (z)$.
\end{lem}
We close this section with some notation on stable laws. Fix real numbers $\alpha \in (0,1)$, $\sigma \ge 0$, and $\beta \in [-1, 1]$. We say that a random variable $X$ is an \emph{$\alpha$-stable law} with \emph{scaling parameter $\sigma$} and \emph{skewness parameter $\beta$}, or an \emph{$(\alpha; \sigma; \beta)$-stable law}, if for any $t \in \mathbb{R}$ we have
\begin{flalign}
\label{xtsigma}
\mathbb{E} \big[ e^{\mathrm{i} tX} \big] = \exp \bigg(- \displaystyle\frac{\pi}{2 \sin (\pi \alpha / 2) \Gamma (\alpha)} \cdot \sigma^{\alpha} |t|^{\alpha} \big( 1 - \mathrm{i} \beta \sgn (t) u_{\alpha} \big) \bigg), \quad \text{where} \quad u_{\alpha} = \tan ( \pi \alpha/2 ).
\end{flalign}
\noindent The constant $\pi \big(2 \sin (\pi \alpha / 2) \Gamma (\alpha)\big)^{-1}$ in \eqref{xtsigma} is chosen so that when $\sigma = 1$ and $\beta=0$, the associated law is in the class $\mathbb{L}_\alpha$ defined in \Cref{LaDef}. See \cite[Theorem 1.2]{nolan2020univariate} for details.
We further say that $X$ is \emph{centered} if $\beta = 0$, and that it is \emph{nonnegative} if $\beta = 1$. If $X$ is a nonnegative $\alpha$-stable law with scaling parameter $\sigma$, then $X \ge 0$ almost surely, and for any $t \ge 0$ we have
\begin{equation}
\label{ytsigma}
\mathbb{E} [e^{-tX}] = \exp \big( - \Gamma (1 - \alpha) \cdot \sigma^{\alpha} |t|^{\alpha} \big).
\end{equation}
We adopt the convention that any reference to a nonnegative $\alpha$-stable law without further qualification always specifies one with scaling parameter $\sigma =1$.
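As a consistency check (added here; the computation is standard), the Laplace transform \eqref{ytsigma} can be recovered from \eqref{xtsigma} with $\beta = 1$, using analytic continuation and the reflection formula $\Gamma(\alpha) \Gamma(1 - \alpha) = \pi / \sin (\pi \alpha)$.

```latex
% For t > 0 and beta = 1, write
\[
|t|^{\alpha} \big( 1 - \mathrm{i} \tan(\pi \alpha / 2) \big)
= \frac{t^{\alpha} e^{-\mathrm{i} \pi \alpha / 2}}{\cos(\pi \alpha / 2)}
= \frac{(-\mathrm{i} t)^{\alpha}}{\cos(\pi \alpha / 2)},
\]
% so the exponent in (xtsigma) becomes
\[
- \frac{\pi}{2 \sin(\pi \alpha / 2) \cos(\pi \alpha / 2) \Gamma(\alpha)} \cdot \sigma^{\alpha} (-\mathrm{i} t)^{\alpha}
= - \frac{\pi}{\sin(\pi \alpha) \Gamma(\alpha)} \cdot \sigma^{\alpha} (-\mathrm{i} t)^{\alpha}
= - \Gamma(1 - \alpha) \, \sigma^{\alpha} (-\mathrm{i} t)^{\alpha}.
\]
% Substituting t = i s with s > 0 (justified by analytic continuation, since X >= 0)
% gives E[ e^{-sX} ] = exp( - Gamma(1 - alpha) sigma^alpha s^alpha ), which is (ytsigma).
```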
\begin{rem}
\label{sumalpha}
Any $\alpha$-stable law $X$ is the difference between two nonnegative stable laws. Indeed, if the scaling and skewness parameters of $X$ are $\sigma = \sigma (X) \ge 0$ and $\beta = \beta(X) \in [-1, 1]$, respectively, then by \eqref{xtsigma} we have $X = Y - Z$, where $Y$ and $Z$ are nonnegative $\alpha$-stable laws with scaling parameters
\begin{flalign*}
\sigma (Y) = \sigma \cdot \bigg( \displaystyle\frac{\beta + 1}{2} \bigg)^{1/\alpha}, \quad \text{and} \quad \sigma (Z) = \sigma \cdot \bigg( \displaystyle\frac{1 - \beta}{2} \bigg)^{1 / \alpha}, \quad \text{respectively}.
\end{flalign*}
\end{rem}
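The scaling parameters in \Cref{sumalpha} can be verified directly from \eqref{xtsigma}; we include the (elementary) check here for convenience.

```latex
% With beta(Y) = beta(Z) = 1, multiplying the characteristic functions of Y and -Z
% gives, up to the common prefactor -pi |t|^alpha / (2 sin(pi alpha/2) Gamma(alpha)),
% the exponent
\[
\sigma(Y)^{\alpha} \big( 1 - \mathrm{i} \sgn(t) u_{\alpha} \big)
+ \sigma(Z)^{\alpha} \big( 1 + \mathrm{i} \sgn(t) u_{\alpha} \big)
= \sigma^{\alpha} \big( 1 - \mathrm{i} \beta \sgn(t) u_{\alpha} \big),
\]
% where we used sigma(Y)^alpha = sigma^alpha (beta + 1)/2 and
% sigma(Z)^alpha = sigma^alpha (1 - beta)/2, so that
% sigma(Y)^alpha + sigma(Z)^alpha = sigma^alpha and
% sigma(Y)^alpha - sigma(Z)^alpha = beta sigma^alpha.
% Hence Y - Z is indeed an (alpha; sigma; beta)-stable law.
```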
\subsection{Results for the Poisson Weighted Infinite Tree} \label{s:PWITresults}
We begin with some preliminary definitions.
\begin{definition}
Fix $\alpha \in (0,1)$ and recall $R_\star$ from \Cref{r}.
We define $y(z) = y_\alpha(z)$ for $z\in \mathbb{H}$ by
\begin{equation}\label{yrepresentation}
y(z) = \mathbb{E} \left[ \left(- \mathrm{i} R_\star(z) \right)^{\alpha/2} \right].
\end{equation}
\end{definition}
\begin{rem}
Alternatively, $y(z)$ can be characterized through a deterministic equation that does not reference the operator $\boldsymbol{R}$. Set $\mathbb{K} = \{ z \in \mathbb{C}: \Re z > 0\}$. For any $z\in \mathbb{H}$, we define $\phi_{\alpha, z} \colon \mathbb{K} \rightarrow \mathbb{C}$ by
\begin{equation*}
\phi_{\alpha,z}(x) = \frac{1}{\Gamma(\alpha/2)}
\int_0^\infty t^{\alpha/2 - 1} \exp(\mathrm{i} tz) \exp\left(
-\Gamma(1 - \alpha/2) t^{\alpha/2} x
\right)\, dt.
\end{equation*}
By \cite[Theorem 1.4]{arous2008spectrum} and \cite[Proposition 3.1]{bordenave2013localization},
$y(z)$ is the unique solution to the equation
$ y(z) = \phi_{\alpha,z}( y(z))$ for all $z \in \mathbb{H}$.
\end{rem}
It follows from the proof of \cite[Theorem 1.4]{belinschi2009spectral} that $y(z)$ extends continuously to a function on $\overline{\mathbb{H}}$, so that $y(E)$ is defined for every $E\in \mathbb{R}$. For the convenience of the reader, we provide a proof of the previous statement in \Cref{s:boundaryvalues}, where it is stated as \Cref{l:boundaryvalues}.
\begin{definition}
Because the complex numbers $\mathrm{i}^{\alpha/2}$ and $(-\mathrm{i})^{\alpha/2}$ are linearly independent over $\mathbb{R}$, for every $E\in \mathbb{R}$ there exists a unique pair $\big(a(E), b(E)\big) \in \mathbb{R}^2$ such that
\begin{equation}\label{opaque}
y(E) = (-\mathrm{i})^{\alpha/2} a(E) + (\mathrm{i})^{\alpha/2} b(E).
\end{equation}
Further, we have
\begin{equation*}
\big( -\mathrm{i} R_\star(z)\big)^{\alpha/2} \in \big\{ \mathrm{i}^{\alpha/2} \cdot a + (-\mathrm{i})^{\alpha/2} \cdot b : a, b \ge 0 \big\}
\end{equation*}
because $R_\star(z) \in \mathbb{H}$, so $a(E), b(E) \ge0$. (Recall the definition of $z \mapsto z^{\alpha/2}$ given in \Cref{s:notation}.)
\end{definition}
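Since $\mathrm{i}^{\alpha/2} = e^{\mathrm{i} \pi \alpha / 4}$ and $(-\mathrm{i})^{\alpha/2} = e^{-\mathrm{i} \pi \alpha / 4}$ under the branch convention of \Cref{s:notation}, the pair $\big( a(E), b(E) \big)$ in \eqref{opaque} admits an explicit formula; we record this elementary inversion here for the reader (it is not needed verbatim below).

```latex
% Writing y(E) = a(E) e^{-i pi alpha/4} + b(E) e^{i pi alpha/4} and separating real
% and imaginary parts gives the linear system
%   Re y(E) = ( a(E) + b(E) ) cos( pi alpha / 4 ),
%   Im y(E) = ( b(E) - a(E) ) sin( pi alpha / 4 ),
% whose solution is
\[
a(E) = \frac{\Real y(E)}{2 \cos(\pi \alpha / 4)} - \frac{\Imaginary y(E)}{2 \sin(\pi \alpha / 4)},
\qquad
b(E) = \frac{\Real y(E)}{2 \cos(\pi \alpha / 4)} + \frac{\Imaginary y(E)}{2 \sin(\pi \alpha / 4)}.
\]
```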
\begin{definition}
\label{pkappae}
Let $S_1$ and $S_2$ be independent, nonnegative $\alpha/2$-stable random variables. Set
\begin{equation*}
\varkappa_{\mathrm{loc}}(E) = a(E)^{2/\alpha} S_1 - b(E)^{2/\alpha} S_2.
\end{equation*}
Then $\varkappa_\mathrm{loc}(E)$ is an $\alpha/2$-stable law whose scaling and skewness parameters are determined through \Cref{sumalpha}.
We denote the density of $\varkappa_\mathrm{loc}(E)$ with respect to the Lebesgue measure on $\mathbb{R}$ by $p_E(x)$. We let $\hat p_E(\xi)$ denote the Fourier transform of the density $p_E(x)$.\footnote{
We define the Fourier transform so that $\hat p_E(\xi)$ is equal to the characteristic function of $\varkappa_{\mathrm{loc}}(E)$. See \Cref{s:fouriertransform} below for details.}
\end{definition}
While these quantities depend on $\alpha$, we suppress this dependence in our notation.
\begin{rem}\label{r:abreason}
The definition of $a(E)$ and $b(E)$ through \eqref{opaque} can be understood in the following way.
Suppose that $R_\star(E) = \lim_{\eta \rightarrow 0} R_\star(E + \mathrm{i} \eta)$ exists and satisfies $\Im R_\star(E) = 0$.
Then we have
\begin{equation*}
a(E) = \mathbb{E}\left[ \big( R_\star(E) \big)^{\alpha/2}_+ \right], \qquad b(E) = \mathbb{E}\left[ \big( R_\star(E) \big)^{\alpha/2}_- \right].
\end{equation*}
The assumption $\Im R_\star(E) = 0$ essentially implies localization, as noted below in \Cref{l:loccriteria}. So one may think of the definitions of $a(E)$ and $b(E)$ as being the boundary values of fractional moments at $E$, assuming localization at $E$.
The definition of $\varkappa_{\mathrm{loc}}$ arises in a related way. Given $z\in \mathbb{H}$, let $(\xi_j)_{j \ge 1}$ be a Poisson point process on $\mathbb{R}_{> 0}$ with intensity measure $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$, and let $(R_j(z))_{j \ge 1}$ be mutually independent random variables with law $R_{\star} (z)$. The quantity
\begin{flalign*}
\displaystyle\sum_{j = 1}^{\infty} \xi_j R_j(z),
\end{flalign*}
known as the \emph{self energy} at $z$, plays a key role both in our work and in the physics literature \cite{tarquini2016level,cizeau1994theory}. If we consider the boundary value of the self energy at a point $E \in \mathbb{R}$, then the localization assumption $\Im R_\star(E) = 0$ and the Poisson thinning property (see \Cref{sumaxi} below) imply that the previous sum is equal in distribution to
\begin{equation*}
\mathbb{E}\left[ \big( R_\star(E) \big)^{\alpha/2}_+ \right]^{2/\alpha} S_1 -
\mathbb{E}\left[ \big( R_\star(E) \big)^{\alpha/2}_- \right]^{2/\alpha} S_2,
\end{equation*}
with $S_1$ and $S_2$ as above, yielding the definition of $\varkappa_{\mathrm{loc}}$.
\end{rem}
We next define a quantity which plays a key role in our main results.
\begin{definition} \label{lambdaEalpha}
For any $E, x \in \mathbb{R}$ and $y \in (0, 1)$, define
\begin{flalign}
\label{tlrk}
\ell (x) = \displaystyle\frac{1}{\pi} \displaystyle\int_0^{\infty} e^{\mathrm{i} x \xi} |\xi|^{\alpha-1} \widehat{p}_E (\xi)\, d \xi; \quad K_{\alpha} = \displaystyle\frac{\alpha}{2} \cdot \Gamma \bigg( \displaystyle\frac{1-\alpha}{2} \bigg)^2 ;
\quad t_y = \sin \bigg( \displaystyle\frac{\pi y}{2} \bigg).
\end{flalign}
\noindent For any $E\in \mathbb{R}$ and $\alpha \in (0,1)$, we let $\lambda(E, \alpha)$ denote the unique positive solution to the quadratic equation
\begin{flalign}\label{mobilityquadratic}
\lambda(E, \alpha)^2 - 2 t_{\alpha} K_{\alpha} \Real \ell (E) \cdot \lambda(E, \alpha) + K_{\alpha}^2 (t_{\alpha}^2 - 1) \big| \ell (E) \big|^2 = 0,
\end{flalign}
\noindent which admits only one positive solution since its constant term is negative (as $t_{\alpha} < 1$ for $\alpha < 1$).
\end{definition}
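The quadratic \eqref{mobilityquadratic} can be solved explicitly; the following closed form (a direct application of the quadratic formula, added here as an aid to the reader) exhibits the unique positive root, assuming $\ell(E) \neq 0$.

```latex
% The two roots of (mobilityquadratic) are
\[
\lambda = t_{\alpha} K_{\alpha} \Real \ell(E)
\pm K_{\alpha} \sqrt{ t_{\alpha}^2 \big( \Real \ell(E) \big)^2
+ (1 - t_{\alpha}^2) \big| \ell(E) \big|^2 }.
\]
% Since t_alpha < 1 and l(E) != 0, the square root strictly dominates
% t_alpha |Re l(E)|, so the two roots have opposite signs (equivalently, their
% product K_alpha^2 (t_alpha^2 - 1) |l(E)|^2 is negative), and lambda(E, alpha)
% is the root with the plus sign.
```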
We now state a proposition that provides important context for our main theorems. This proposition and the theorem that follows it are proved in \Cref{s:proofmain1}, assuming several preliminary lemmas that are proved in the remainder of the paper.
\begin{prop}\label{p:213}
Fix $\alpha \in (0,1)$.
\begin{enumerate}
\item There exists a constant $c(\alpha) > 0$ such that $\lambda(E, \alpha) > 1$ for $|E| < c$.
\item There exists a constant $C(\alpha) > 1$ such that $\lambda(E, \alpha) < 1$ for $|E| >C$.
\end{enumerate}
\end{prop}
We now state our first main theorem. It connects $\lambda(E,\alpha)$ to the behavior of $R_\star(E + \mathrm{i} \eta)$ as $\eta$ tends to zero. The conclusions of the first and second parts should be understood as signatures of delocalization and localization, respectively; this connection will be made precise below in \Cref{l:loccriteria}. The theorem essentially states that one has delocalization for energies $E$ such that $\lambda > 1$, and localization for $E$ in the connected components of $\lambda < 1$ containing $\infty$ and $-\infty$. We start with a definition of strict lower boundedness in probability.
\begin{definition}\label{def:lbprob}
Consider a family of real random variables $(X_\eta)_{\eta \geq 0}$ and a constant $c \in \mathbb{R}$. We write
\emph{$
\liminf_{\eta \to 0} X_\eta > c
$ in probability}
if for all $\omega \in (0,1)$, there exists $\delta > 0$ such that
\begin{flalign} \label{2141}
\liminf_{\eta \to 0} \mathbb{P} ( X_\eta > c + \delta ) \geq 1 - \omega.
\end{flalign}
\end{definition}
We now state the theorem.
\begin{thm}\label{t:main1} Fix $\alpha \in(0,1)$.
\begin{enumerate}
\item For all $E \in \mathbb{R}$ such that $\lambda(E, \alpha) > 1$, $
\liminf_{\eta \rightarrow 0} \Imaginary R_{\star} (E + \mathrm{i} \eta) > 0$ in probability.
\item Set
\begin{equation}\label{eloc}
E_{\mathrm{loc}}(\alpha) = \sup \{ E \in \mathbb{R}_+ : \lambda(E,\alpha) =1 \}.
\end{equation}
For all $E \in \mathbb{R}$ such that $|E| > E_{\mathrm{loc}}$, we have $\lambda(E, \alpha) < 1$ and $\lim_{\eta \rightarrow 0 }\Im R_\star(E + \mathrm{i} \eta) = 0$ in probability.
\end{enumerate}
\end{thm}
We remark that \Cref{t:main1} makes no claim about
localization or delocalization in any connected component of $\lambda < 1$ avoiding $\infty$ and $-\infty$. Conjecturally, such regions do not exist, and the following theorem asserts this for $\alpha$ sufficiently close to $1$ or to $0$, respectively. For such $\alpha$, it implies that delocalization holds for all $E\in \mathbb{R}$ such that $|E| < E_\mathrm{loc}$, and establishes the existence of exactly two sharp phase transitions between regions of delocalization and localization, at $\pm E_\mathrm{loc}$. Further, it pinpoints the scalings of these transition points (as $(1-\alpha)^{-1}$ for $\alpha$ near $1$, and as $|\log \alpha|^{-2/\alpha}$ for $\alpha$ near $0$). The first part of the theorem below is proved in \Cref{s:proveuniqueness}, and the second is proved in \Cref{Alpha0Unique}.
\begin{thm}\label{t:main2}
There exists a constant $c>0$ such that the following statements hold.
\begin{enumerate}
\item For every $\alpha \in (1-c, 1)$, the set \begin{equation}\label{zalpha}
\mathcal Z_\alpha = \{E\in \mathbb{R} : E \ge 0, \, \lambda(E, \alpha) = 1\}
\end{equation} consists of a single point $E_{\mathrm{mob}} = E_\mathrm{mob}(\alpha)$. It satisfies
\begin{equation*}
c(1-\alpha)^{-1} < E_\mathrm{mob} < c^{-1}(1-\alpha)^{-1}.
\end{equation*}
\item For every $\alpha \in (0, c)$, the set $\mathcal{Z}_{\alpha}$ consists of a single point $E_{\mathrm{mob}} = E_{\mathrm{mob}} (\alpha)$. It satisfies
\begin{flalign}
\label{ealpha0}
\bigg( \displaystyle\frac{1}{|\log \alpha|} - \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2} \bigg)^{2/\alpha} \le E_{\mathrm{mob}} \le \bigg( \displaystyle\frac{1}{|\log \alpha|} + \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2} \bigg)^{2/\alpha}.
\end{flalign}
\end{enumerate}
\end{thm}
\subsection{Results for L\'evy Matrices}
For any $N \times N$ symmetric matrix $\boldsymbol{M}$, let $\Lambda = \Lambda (\boldsymbol{M}) = (\lambda_1, \lambda_2, \ldots , \lambda_N)$ denote the set of eigenvalues of $\boldsymbol{M}$, and let $\boldsymbol{u}_i = \big( u_i (1), u_i (2), \ldots , u_i (N) \big) \in \mathbb{R}^N$ denote the eigenvector of $\boldsymbol{M}$ with eigenvalue $\lambda_i$ and $\ell^2$ norm equal to one. For any interval $I \subset \mathbb{R}$, let $\Lambda_I = \Lambda_I (\boldsymbol{M}) = \Lambda \cap I$. Following \cite{bordenave2013localization}, we introduce the following measure of eigenvector localization.
\begin{definition}[{\cite[Section 1]{bordenave2013localization}}]
\label{jwi}
For any interval $I \subset \mathbb{R}$ and index $j \in \unn{1}{N}$, we define $P_I (j) = P_I (j; \boldsymbol{M})$ and $Q_I = Q_I (\boldsymbol{M})$ by
\begin{flalign*}
P_I (j) = \displaystyle\frac{N}{|\Lambda_I|} \displaystyle\sum_{\lambda_i \in \Lambda_I} \big| u_i (j) \big|^2; \qquad Q_I = \displaystyle\frac{1}{N} \displaystyle\sum_{j = 1}^N P_I (j)^2.
\end{flalign*}
\end{definition}
\begin{rem}
\label{westimate}
Let us describe the asymptotic behavior of $Q_I$ in two regimes. In the ``maximally delocalized regime,'' we have $u_i (j) = N^{-1/2}$ for each $i, j \in \unn{1}{ N}$, in which case $P_I (j) = 1$ for all $j$, and $Q_I = 1$. In the ``maximally localized regime,'' we have $u_i (j) = \mathbbm{1}_{i = j}$. In this case, $P_I (j) = 0$ if $\lambda_j \notin I$ and $P_I (j) = N |\Lambda_I|^{-1}$ otherwise, so $Q_I = N |\Lambda_I|^{-1} $. In the latter case, for a fixed interval $I\subset \mathbb{R}$, we have $|\Lambda_I| < C |I| N$ for some $C > 0$ (independent of $I$) by \cite[Theorem 1.1]{arous2008spectrum}.\footnote{We let $|I|$ denote the length of $I$.} Then $Q_{I} > C^{-1} | I|^{-1}$. Thus we may distinguish delocalization from localization, for eigenvectors corresponding to eigenvalues in $I$, by examining whether $Q_I$ remains bounded or grows as $|I|$ tends to zero, respectively.
\end{rem}
We now introduce the notions of eigenvector localization and delocalization that we adopt in this paper.
\begin{definition}\label{d:loc}
Fix $E \in \mathbb{R}$ and a L\'evy matrix $\boldsymbol{H} = \boldsymbol{H}_N$. We say that \emph{delocalization holds at $E$} for $\boldsymbol{H}$ if there exist $C(E)>1$ and $c(E) >0$ such that for all $\varepsilon \in (0, c)$,
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \big[ Q_{I(\varepsilon)} \left( \boldsymbol{H} \right) \le C \big] = 1
\end{flalign*}
for the interval $I(\varepsilon) = [E- \varepsilon, E + \varepsilon]$. We say that \emph{localization holds at $E$} for $\boldsymbol{H}$ if for every $D > 0$, there exists $c(D)>0$ such that for all
$\varepsilon \in (0, c)$,
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \big[ Q_{I(\varepsilon)} \left( \boldsymbol{H} \right) \ge D \big] = 1.
\end{flalign*}
\end{definition}
The next lemma asserts that the behavior of $\Imaginary R_{\star} (E + \mathrm{i} \eta)$ as $\eta$ tends to $0$ controls the behavior of the eigenvectors of $\boldsymbol{H}$ with eigenvalues near $E$. Indeed, $\Imaginary R_{\star} (z)$ tending to $0$ in probability will imply localization for these eigenvectors, in the sense of \Cref{d:loc}. Similarly, $\Imaginary R_{\star} (z)$ remaining positive with positive probability as $\eta$ tends to zero implies delocalization for these eigenvectors. We now state these two claims as a lemma; it is proved in
\Cref{a:conclusion} below.
\begin{lem}\label{l:loccriteria}
Fix $\alpha \in (0,1)$ and $E \in \mathbb{R}$, and let $\boldsymbol{H}$ be a L\'evy matrix with parameter $\alpha$.
\begin{enumerate}
\item If $\lim_{\eta \rightarrow 0 }\Im R_\star(E + \mathrm{i} \eta) = 0$ in probability, then localization holds at $E$ for $\boldsymbol{H}$.
\item If there exists a real number $c > 0$ such that $\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c$, then delocalization holds at $E$ for $\boldsymbol{H}$.
\end{enumerate}
\end{lem}
The following corollary asserts that the results of the previous section, which concerned the operator $\boldsymbol{R}$ on $\mathbb{T}$, also hold for L\'evy matrices. It is a straightforward consequence of \Cref{t:main1}, \Cref{t:main2}, and \Cref{l:loccriteria}. We prove it in \Cref{s:proofmain1}.
\begin{cor}\label{c:main1}
Fix $\alpha \in(0,1)$, and let $\boldsymbol{H}$ be a L\'evy matrix with parameter $\alpha$.
\begin{enumerate}
\item For all $E \in \mathbb{R}$ such that $\lambda(E, \alpha) > 1$, delocalization holds at $E$ for $\boldsymbol{H}$.
\item Recall the quantity $E_\mathrm{loc}$ defined in \eqref{eloc}.
For all $E\in \mathbb{R}$ such that $|E| > E_{\mathrm{loc}}$,
localization holds at $E$ for $\boldsymbol{H}$.
\item There exists $c>0$ such that if $\alpha\in(0,c)$ or $\alpha \in (1-c, 1)$, delocalization holds at $E$ when $|E| < E_\mathrm{loc}$, and localization holds at $E$ when $|E| > E_\mathrm{loc}$.
\end{enumerate}
\end{cor}
\subsection{Relation to Previous Work} \label{s:relationprevious}
We now discuss the relation between our results and previous work on L\'evy matrices in the physics and mathematics literature. We first review the physics predictions from \cite{tarquini2016level, cizeau1994theory}.
These works posited that the set $\mathcal Z_\alpha$ (from \Cref{t:main2}) consists of a single point for each $\alpha \in (0, 1)$, that is, $\mathcal Z_\alpha = \{E_{\mathrm{mob}}\}$ for some $E_{\mathrm{mob}} > 0$. Further, \cite{cizeau1994theory,tarquini2016level} predicted that eigenvector delocalization holds for $|E| < E_{\mathrm{mob}}$ and eigenvector localization holds for $|E| > E_{\mathrm{mob}}$. Regarding eigenvalues, they predicted that the local eigenvalue statistics are distributed as those of the GOE for energies $|E| < E_{\mathrm{mob}}$, and that these statistics are Poisson for $|E| > E_{\mathrm{mob}}$. The authors of \cite{tarquini2016level} also predicted that $E_\mathrm{mob}$ scales as $(1- \alpha)^{-1}$ for $\alpha$ near $1$. They gave no analytical prediction on how $E_{\mathrm{mob}}$ scales for $\alpha$ near $0$, but the plot \cite[Figure 1]{tarquini2016level} indicates that $E_{\mathrm{mob}}$ is very small for $\alpha < \frac{1}{4}$ (which is in some sense confirmed by the super-exponential decay in $1/\alpha$ of $E_\mathrm{mob}$ in \Cref{t:main2}).
The predictions of \cite{tarquini2016level, cizeau1994theory} were subsequently investigated in the mathematically rigorous works \cite{bordenave2013localization, bordenave2017delocalization, aggarwal2021goe}.
For all $\alpha \in (0, 2/3)$, \cite{bordenave2013localization} showed that there exists a constant $C(\alpha) > 1$ such that if $\boldsymbol{H}$ is an $N\times N$ L\'evy matrix with $\alpha$-stable entries and resolvent $\boldsymbol{G}$, then the fractional moment bound
\begin{equation*}
\mathbb{E} \left[ \big( \Im G_{11}(E + \mathrm{i} \eta) \big)^{\alpha/2} \right] = O(\eta^{\alpha/2 - \delta})
\end{equation*}
holds for some $\delta > 0$ whenever $|E| > C(\alpha)$ and $\eta \gg N^{-(2+\alpha)/(4\alpha + 12)}$. Using this bound, \cite{bordenave2017delocalization} showed the following localization result for the same range of $\alpha$. For every $\varepsilon \in (0, \alpha/2)$, there exists $C(\alpha, \varepsilon)>1 $ such that for any compact interval $K \subset [-C,C]^c$, there exists a constant $ c_1(K) > 0$ such that the following bound holds. For any interval $I\subset K$ such that $|I| \ge N^{-\frac{\alpha}{2 + 3 \alpha}} (\log N)^2$, with high probability we have $Q_I \ge c_1 | I |^{ - \frac{2\varepsilon}{2 - \alpha}}$. Further, for almost all $\alpha \in(0,1)$, \cite{bordenave2017delocalization} proved that if $\boldsymbol{H}$ is a L\'evy matrix with $\alpha$-stable entries, there exist constants $C(\alpha)>1$ and $c(\alpha) > 0$ such that for every interval $I \subset [-c, c]$ satisfying $|I| \ge N^{-\alpha/(4+\alpha)} (\log N)^{C}$, with high probability we have $Q_I \le C$. Finally, \cite{aggarwal2021goe} proved that eigenvectors of $\boldsymbol{H}$ with eigenvalues $\lambda_k \in [ -c,c]$ are completely delocalized (for a slightly more general class of matrices), and also confirmed the prediction of GOE eigenvalue statistics around energies $E \in [-c,c]$.
Our results pinpoint the exact mobility edge serving as the sharp transition between localization and delocalization, at the expense of not pursuing the effective or optimal (de)localization bounds showcased in the previous works. Indeed, \Cref{c:main1} verifies the (de)localization (in terms of the inverse participation ratio $Q_I$) aspects of the phase diagram predicted in \cite{tarquini2016level,cizeau1994theory}, conditional on the purely deterministic statement that $\mathcal{Z}_{\alpha}$ consists of a unique point (equivalently, that \eqref{mobilityquadratic} admits a unique solution in $E$ for fixed $\alpha\in(0,1)$). The third part of \Cref{c:main1} shows this uniqueness if $\alpha \in (0,c) \cup (1-c, 1)$ for some sufficiently small constant $c > 0$. Further, our \Cref{t:main2} shows that the unique point $E_{\mathrm{loc}} (\alpha) \in \mathcal{Z}_{\alpha}$ scales as $(1-\alpha)^{-1}$ for $\alpha$ close to $1$, as predicted in \cite{tarquini2016level}. For $\alpha$ near $0$, we show that $E_{\mathrm{loc}} (\alpha)$ scales about as $|\log \alpha|^{-2/\alpha}$, which had not been predicted earlier.
While we are unable to show that $\mathcal{Z}_{\alpha}$ reduces to a single point for $\alpha \in (c, 1-c)$, our results still improve on several aspects of previous works in this regime. First, the set of energies $E$ not covered by the first two parts of \Cref{c:main1} presumably reduces to the two mobility edges $\{ -E_{\mathrm{loc}} (\alpha), E_{\mathrm{loc}} (\alpha) \}$ (this is evident numerically \cite{tarquini2016level} and equivalent to the prediction that $\mathcal{Z}_{\alpha}$ consists of a single point). In contrast, there is a considerable gap between the localized phase shown in \cite{bordenave2013localization, bordenave2017delocalization} and the delocalized phase shown in \cite{aggarwal2021goe,bordenave2017delocalization}. Next, the second part of \Cref{c:main1} provides an explicit localized phase for all $\alpha \in (0, 1)$, while the previous localization results of \cite{bordenave2013localization, bordenave2017delocalization} apply for $\alpha \in (0, 2/3)$ only; the first part of \Cref{c:main1} also provides an explicit delocalized phase for all $\alpha \in (0, 1)$, slightly improving the range of $\alpha$ in the results of \cite{aggarwal2021goe,bordenave2017delocalization} (which missed a countable set of $\alpha$ parameters). These hold for all matrices satisfying \Cref{matrixh}, which significantly expands the class of matrices considered in \cite{bordenave2013localization, bordenave2017delocalization, aggarwal2021goe}.
\label{Estimates00}
\section{Proof Outline}\label{s:outline}
In \Cref{Tn}, we begin by characterizing the behavior of $R_\star(z)$ as $\eta = \Imaginary z$ tends to zero in terms of the fractional moments of resolvent entries. In \Cref{s:bootstrap}, we relate these fractional moments to $\lambda(E,\alpha)$, and state certain continuity properties for these quantities. We conclude in \Cref{s:proofmain1} with the proof of \Cref{t:main1}.
As stated in our discussion of notation in \Cref{s:notation}, we will omit writing the dependence of all constants on $\alpha \in (0,1)$.
\subsection{Localization and Delocalization Criteria}
\label{Tn}
We first characterize the behavior of $\Imaginary R_{\star} (z)$ as $\eta$ tends to zero in terms of the growth of fractional moments of the off-diagonal resolvent entries (from \Cref{Stable}); these moments are given in the definition below. Throughout this section, we fix $\alpha \in (0,1)$.
\begin{definition}
\label{moment1}
Recalling that $0$ is the root of $\mathbb{T}$, for any integer $L \ge 1$ define
\begin{flalign}
\label{sumsz1}
\Phi_{L} (s; z) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} (z) \big|^s \Bigg]; \qquad \varphi_L (s; z) = L^{-1} \log \Phi_L (s; z),
\end{flalign}
\noindent where $\mathbb{V} (L) = \mathbb{Z}_{\geq 1}^L$ is the set of vertices of $\mathbb T$ at distance $L$ from the root $0$. When they exist, we also define the limits
\begin{flalign}
\label{szl}
\varphi (s; z) = \displaystyle\lim_{L \rightarrow \infty} \varphi_L (s; z); \qquad \varphi (s; E) = \displaystyle\lim_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta).
\end{flalign}
\end{definition}
The following theorem, whose statement resembles that of \cite[Theorem 3.2]{aizenman2013resonant}, indicates that the first limit in \eqref{szl} exists and analyzes its properties. We will establish it in \Cref{MultiplicativeR} below.
\begin{thm}
\label{limitr0j}
Fix $s \in (\alpha,1)$, $\varepsilon \in (0,1)$, $B\ge 1$, and $z = E + \mathrm{i} \eta \in \mathbb{H}$ such that $\varepsilon \le |E| \le B$ and $\eta \in (0,1)$.
The following statements hold.
\begin{enumerate}
\item The limit $\varphi (s; z) = \lim_{L \rightarrow \infty} \varphi_L (s; z)$ exists and is finite. Moreover, there exists a constant $C( s,\varepsilon,B) > 1$ such that
\begin{flalign}
\label{limitr0j2}
\big| \varphi_L (s; z) - \varphi (s; z) \big| \le \displaystyle\frac{C}{L}.
\end{flalign}
\item The function $\varphi (s; z)$ is (weakly) convex and nonincreasing in $s \in (\alpha, 1)$.
\item Fix $E\in \mathbb{R}$. If the limit $\varphi_L (s; E) = \lim_{\eta \rightarrow 0} \varphi_L (s; E + \mathrm{i} \eta)$ exists, then the limit $\varphi (s; E) = \lim_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta)$ does also. In this case, the limits in $L$ and $\eta$ in the second equality of \eqref{szl} commute, namely, $\varphi (s; E) = \lim_{L\rightarrow\infty} \varphi_L (s; E )$.
\item There exist constants $c = c(\varepsilon, B)> 0$ and $C = C(\varepsilon, B) > 1$ (independent of $s$) such that $\varphi (s; z) > -C - c \log (s-\alpha)$. In particular, $\lim_{s \rightarrow \alpha} \varphi (s; z) = \infty$.
\item There exist constants $C = C ( s) > 1$ and $c = c ( s) > 1$ (independent of $\varepsilon$ and $B$) such that for $|E| \ge 2$ we have $\varphi (s; z) < C - c \log |E|$. In particular, $\lim_{|E| \rightarrow \infty} \varphi (s; z) = -\infty$.
\end{enumerate}
\end{thm}
\begin{rem}
\label{salpha}
We remark that, unlike in \cite{aizenman2013resonant}, it is necessary to take $s \in (\alpha, 1)$ in \Cref{limitr0j}, as for $s \in (0, \alpha)$ it can be shown that $\varphi_L (s; z)$ diverges (which is suggested by the fourth part of \Cref{limitr0j}).
\end{rem}
\begin{definition}
We set
\begin{equation*}
\varphi (1; z) = \lim_{s \rightarrow 1} \varphi (s; z).
\end{equation*}
The existence of the limit follows from the second part of \Cref{limitr0j}.
\end{definition}
Now we can state the following theorem, which resembles \cite[Theorem 2.5]{aizenman2013resonant} and essentially states delocalization for $\boldsymbol{T}$ around $E \in \mathbb{R}$ (allowing $E=0$) such that $\varphi (1; E) > 0$. It is proved in \Cref{RLarge}.
\begin{thm}
\label{rimaginary0}
Fix $E \in \mathbb{R}$ and a decreasing sequence $\{\eta_j\}_{j=1}^\infty$ such that $\lim_{j\rightarrow\infty} \eta_j =0$. Suppose that
\begin{equation*}
\displaystyle\lim_{s \rightarrow 1} \bigg( \displaystyle\liminf_{j \rightarrow \infty} \varphi (s; E + \mathrm{i} \eta_j) \bigg) > 0.
\end{equation*}
Then there exists a real number $c > 0$
such that
\begin{equation*}
\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c.
\end{equation*}
\end{thm}
In the complementary case, where $\varphi (1; E) < 0$, it can quickly be proven that $\Imaginary R_{\star} (E + \mathrm{i} \eta)$ converges to $0$ in probability as $\eta$ tends to $0$. The following proposition is proved in \Cref{s:continuitypreliminary}.
\begin{prop} \label{p:imvanish}
Fix $E \in \mathbb{R}$ and $s\in(\alpha, 1)$.
\begin{enumerate}
\item
Let $\{\eta_j\}_{j=1}^\infty$ be a decreasing sequence such that $\lim_{j\rightarrow\infty} \eta_j =0$, and suppose that \begin{equation*}\limsup_{j \rightarrow \infty} \varphi (s; E + \mathrm{i} \eta_j) < 0.\end{equation*}
Then $\lim_{j\rightarrow \infty} \Imaginary R_{\star} (E + \mathrm{i} \eta_j) = 0$ in probability.
\item
Suppose that $\limsup_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta) < 0$. Then $\lim_{\eta \rightarrow 0} \Imaginary R_{\star} (E + \mathrm{i} \eta) = 0$ in probability.
\end{enumerate}
\end{prop}
Next, we state a compactness lemma. It is proved in \Cref{ResolventDensity}.
\begin{lem}\label{l:compactness}
Fix $E\in \mathbb{R}$. Then the closure of the set of laws of the random variables $\big\{ R_\star(E + \mathrm{i} \eta) \big\}_{\eta \in (0,1)}$, with respect to the topology of weak convergence, is sequentially compact.
\end{lem}
Finally, we state a lemma that upgrades the delocalization statement in \Cref{rimaginary0} to the one in \Cref{t:main1}. It is proved in \Cref{ResolventDensity}.
\begin{lem}\label{l:upgrade} Fix $E \in \mathbb{R}$,
and suppose there exists $c> 0$ such that
\begin{equation*}
\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c.
\end{equation*}
Then $
\liminf_{\eta \rightarrow 0} \Imaginary R_{\star} (E + \mathrm{i} \eta) > 0$ in probability.
\end{lem}
\subsection{Bootstrap for Large Energies}\label{s:bootstrap}
Our proof of the second part of \Cref{t:main1} proceeds through a bootstrap argument. Localization is first established for sufficiently large $|E|$ (using the fifth part of \Cref{limitr0j} and \Cref{p:imvanish}), and then inductively shown to hold for smaller values of $|E|$, until $E_\mathrm{loc}$ is reached. We now collect some lemmas that facilitate the inductive argument. For technical reasons, we cannot work with $\varphi(1; z)$ directly during this process; we instead argue using $\varphi(s; z)$ and then take $s$ to $1$ at the end. We first state a continuity lemma for $ \varphi (s; z)$. It is proved in \Cref{s:continuityproof}.
\begin{lem}\label{l:phicontinuity}
Fix $\alpha \in (0,1)$; $s \in (\alpha,1)$;
$\omega, \kappa >0$; and a compact interval $I \subset \mathbb{R} \setminus \{0\}$. There exists a constant $\delta(s,\omega,\kappa, I) >0$ such that the following holds. For any $E_1, E_2 \in I$ such that $|E_1 - E_2 | \le \delta$ and
\begin{equation*}
\limsup_{\eta \rightarrow 0} \varphi (s; E_1 + \mathrm{i} \eta) < - \kappa,
\end{equation*}
we have
\begin{equation*}
\limsup_{\eta \rightarrow 0} \varphi (s; E_2 + \mathrm{i} \eta)
<
-\kappa + \omega.
\end{equation*}
\end{lem}
We next introduce a quantity $\lambda(E,s,\alpha)$, which we will see is closely connected to $\varphi(s;z)$.
\begin{definition} \label{lambdaEsalpha}
Fix $\alpha \in (0,1)$ and $s \in (\alpha,1)$. Recall the quantities $\ell(x)$ and $t_y$ defined in \Cref{s:notation}, and set
\begin{flalign}
K_{\alpha, s} = \displaystyle\frac{\alpha}{2} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot \Gamma \bigg( 1 - \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign}
\noindent Then, let $\lambda(E, s, \alpha)$ denote the unique positive solution to the quadratic equation
\begin{flalign*}
\lambda(E, s, \alpha)^2 - 2 t_{\alpha} K_{\alpha,s} \Real \ell (E) \cdot \lambda(E, s, \alpha) + K_{\alpha,s}^2 (t_{\alpha}^2 - t_s^2) \big| \ell (E) \big|^2 = 0,
\end{flalign*}
\noindent which admits only one positive solution since its constant term is negative (as $t_{\alpha} < t_s$ for $\alpha < s < 1$).
\end{definition}
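\begin{rem}
For the reader's convenience, we record the explicit formula obtained by solving the quadratic equation in \Cref{lambdaEsalpha}:
\begin{flalign*}
\lambda(E, s, \alpha) = t_{\alpha} K_{\alpha, s} \Real \ell (E) + \Big( t_{\alpha}^2 K_{\alpha, s}^2 \big( \Real \ell (E) \big)^2 + K_{\alpha, s}^2 (t_s^2 - t_{\alpha}^2) \big| \ell (E) \big|^2 \Big)^{1/2}.
\end{flalign*}
\noindent Since $t_s > t_{\alpha}$, the square root is strictly larger than $\big| t_{\alpha} K_{\alpha, s} \Real \ell (E) \big|$ whenever $\ell (E) \neq 0$, so this expression is indeed the unique positive root.
\end{rem}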
The next lemma states basic properties of $\lambda(E,\alpha)$ and $\lambda(E,s,\alpha)$. It is proved in \Cref{s:boundaryvalues}.
\begin{lem}\label{l:lambdalemma}
For all $\alpha \in (0,1)$, $s\in (\alpha, 1)$, and $E \in \mathbb{R}$, the following claims hold.
\begin{enumerate}
\item We have $\lambda(E,s,\alpha) > 0$ and $\lambda(E,\alpha) >0$.
\item We have $\lambda(E, \alpha) = \lim_{s \rightarrow 1} \lambda(E,s,\alpha)$.
\item The functions $(E,s) \mapsto \lambda(E, s, \alpha)$ and $E \mapsto \lambda(E, \alpha)$
are continuous.
\item We have $\lambda(0, \alpha) > 1$.
\end{enumerate}
\end{lem}
Finally, we relate $\lambda(E,s,\alpha)$ to $\varphi(s;z)$. The following lemma is proved in \Cref{s:provebootstrap}.
\begin{lem}\label{l:bootstrap}
Fix $\alpha \in (0,1)$, $s\in (\alpha,1)$, and $E\in \mathbb{R} \setminus \{ 0\}$.
Let $\{\eta_j\}_{j=1}^\infty$ be a decreasing sequence such that $\lim_{j\rightarrow\infty} \eta_j =0$. Suppose
that
$\lim_{j \rightarrow\infty} \Im R_\star(E + \mathrm{i} \eta_j) = 0$ in probability.
Then
\begin{equation*}
\lim_{j \rightarrow \infty} \varphi(s; E+ \mathrm{i} \eta_j) = \log \lambda(E,s,\alpha).
\end{equation*}
\end{lem}
\subsection{Conclusion} \label{s:proofmain1}
We now prove \Cref{p:213}, \Cref{t:main1}, and \Cref{c:main1}, assuming the previously stated preliminary results. For clarity, we separate the proofs of the first and second parts of \Cref{t:main1}.
\begin{proof}[Proof of \Cref{p:213}]
The first part is an immediate consequence of the third and fourth parts of \Cref{l:lambdalemma}. For the second part, fix $s_0 = (\alpha + 1)/2$. By the fifth part of \Cref{limitr0j}, there exists $E_0 = E_0(\alpha)> 1 $ such that
\begin{equation}\label{pt1}
\varphi(s_0; E + \mathrm{i} \eta) < -1\quad\text{for}\quad |E| \ge E_0,\,\eta \in (0,1).
\end{equation}
Since $\varphi(s;z)$ is nonincreasing in $s$ by the second part of \Cref{limitr0j}, the previous line implies that
\begin{equation}\label{pt2}
\varphi(s; E + \mathrm{i} \eta) < -1\quad\text{for}\quad |E| \ge E_0,\,\eta \in (0,1),\, s\in (s_0, 1).
\end{equation}
By the second part of \Cref{p:imvanish}, the bound \eqref{pt1} implies that for any $E\in \mathbb{R}$ such that $|E| \ge E_0$, we have
\begin{equation}\label{pt3}
\lim_{\eta \rightarrow 0} \Im R_\star (E + \mathrm{i} \eta) = 0
\end{equation}
in probability.
Now fix $E$ such that $|E| \ge E_0$ and any $s \in (s_0 ,1)$. By \Cref{l:bootstrap} and \eqref{pt3},
\begin{equation}\label{pt4}
\lim_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta) = \log \lambda (E, s, \alpha).
\end{equation}
Then by \eqref{pt2} and \eqref{pt4}, it follows for any $s \in (s_0, 1)$ and $|E| \ge E_0$ that
\begin{equation}\label{usemelater}
\log \lambda (E, s, \alpha) \le -1.
\end{equation}
Using the second part of \Cref{l:lambdalemma} and \eqref{usemelater}, we have $\log \lambda(E,\alpha) \le - 1$, and hence $\lambda(E, \alpha) \le e^{-1} < 1$, for $|E| \ge E_0$. This completes the proof of the second part of the proposition.
\end{proof}
\begin{proof}[Proof of \Cref{t:main1}(1)]
We may suppose $E\neq 0$, since for $E = 0$ the result is immediate from \cite[Lemma 4.3(b)]{bordenave2011spectrum}. Suppose, for the sake of contradiction, that there exists some $E_0\in \mathbb{R}$ such that $\lambda(E_0, \alpha) > 1$ and
\begin{flalign*}
\liminf_{\eta \rightarrow 0} \Imaginary R_{\star} (E_0 + \mathrm{i} \eta) = 0
\end{flalign*}
in probability.
Then there exists a decreasing sequence $\{\eta_j\}_{j=1}^\infty$ such that $\lim_{j\rightarrow\infty} \eta_j =0$ and
\begin{equation}\label{contradict2}
\lim_{j \rightarrow\infty} \Im R_\star(E_0 + \mathrm{i} \eta_j) = 0
\end{equation}
in probability.
Then using \Cref{l:bootstrap}, we have
\begin{equation*}
\lim_{j \rightarrow \infty} \varphi(s; E_0+ \mathrm{i} \eta_j) = \log \lambda(E_0,s,\alpha)
\end{equation*}
for all $s \in (\alpha, 1)$. By the second and third parts of \Cref{l:lambdalemma}, and the assumption that $\lambda(E_0, \alpha) > 1$, there exists $\nu>0$ such that
\begin{equation*}
\lim_{s \rightarrow 1}\left( \lim_{j \rightarrow\infty} \varphi(s; E_0+ \mathrm{i} \eta_j)\right) \ge \nu.
\end{equation*}
Then by \Cref{rimaginary0}, the previous inequality implies that there exists $c>0$ such that
\begin{equation*}
\liminf_{j \rightarrow \infty} \mathbb{P} \big[ \Imaginary R_{\star} (E_0 + \mathrm{i} \eta_j) > c \big] > c.
\end{equation*}
This contradicts the assumption that
\eqref{contradict2} holds. We conclude that there exists $c>0$ such that
\begin{equation*}
\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c.
\end{equation*}
Using \Cref{l:upgrade}, the previous line implies the first part of \Cref{t:main1}.
\end{proof}
\begin{proof}[Proof of \Cref{t:main1}(2)]
We consider only positive $E$, since negative $E$ can be handled by symmetric reasoning. For any $s \in (\alpha ,1)$, set
\begin{equation*}
E_{\mathrm{loc},s } = \inf \{ E \in \mathbb{R}_+ : \lambda(x,s,\alpha) < 1 \text{ for all } x \ge E\}.
\end{equation*}
First observe that $E_{\mathrm{loc}, s}$ is finite. Indeed, by \eqref{usemelater}, there exists $E_0 = E_0(\alpha) > 1$
such that for any $s\in (\alpha ,1)$ and any $E\in \mathbb{R}$ such that $|E| \ge E_0$ we have $\lambda (E ,s , \alpha) < 1$, which gives $E_{\mathrm{loc},s } \le E_0$. Further, for $|E| \ge E_0$, the second part of \Cref{t:main1} follows from \eqref{pt3}, so it remains to prove the claim for $E < E_0$.
To this end, fix $s \in (\alpha , 1)$ and $E' \in ( E_{\mathrm{loc},s}, E_0)$.
By the definition of $E_{\mathrm{loc},s}$, we have $\lambda(E', s, \alpha) < 1$.
Set
\begin{equation*}
\kappa' = \kappa'(E',s) = \inf_{E \in [E', E_0]} \big( 1 - \lambda(E,s, \alpha) \big),
\qquad \kappa = \kappa(E',s) = \frac{\min ( \kappa', 1 )}{2}.
\end{equation*}
Since $E \mapsto \lambda(E,s, \alpha)$ is continuous (by \Cref{l:lambdalemma}) and strictly less than $1$ on the compact interval $[E', E_0]$, we have $\kappa' > 0$.
With $I = [E', E_0]$, with $\omega$ equal to $\kappa$, and with the parameter $\kappa$ in \Cref{l:phicontinuity} equal to $2\kappa$, let $\delta$ be the constant given by \Cref{l:phicontinuity}.
Let $\{E_i\}_{i=0}^M$ be a set of real numbers such that $E_0$ is the threshold fixed above, $E_M = E'$, and
\begin{equation*}
E_{i+1} < E_i, \qquad | E_{i+1} - E_i | < \delta
\end{equation*}
for all $i \in \unn{0}{M-1}$.
We claim that for every $i\in \unn{0}{M}$, we have
\begin{equation}\label{bob}
\limsup_{\eta \rightarrow 0} \exp\big(\varphi(s; E_i + \mathrm{i} \eta)\big) \le \lambda(E_i, s , \alpha).
\end{equation}
We proceed by induction on $i$. The base case $i=0$, at the energy $E_0$, follows from \eqref{pt3} and \Cref{l:bootstrap}. Next, assuming the induction hypothesis holds for some $i\in \unn{0}{M-1}$, we will show it holds for $i+1$. Using the induction hypothesis at $i$, and the definition of $\kappa$, we have
\begin{equation*}
\limsup_{\eta \rightarrow 0} \exp\big(\varphi(s; E_i + \mathrm{i} \eta)\big) \le \lambda(E_i, s , \alpha) \le 1 - 2\kappa.
\end{equation*}
Equivalently, $\limsup_{\eta \rightarrow 0} \varphi (s; E_i + \mathrm{i} \eta) \le \log (1 - 2\kappa) < -2\kappa$. By \Cref{l:phicontinuity} (applied with its parameter $\kappa$ equal to $2\kappa$ and with $\omega$ equal to $\kappa$) and the definition of $\delta$,
\begin{align}\label{p19}
\limsup_{\eta \rightarrow 0} \varphi (s; E_{i+1} + \mathrm{i} \eta)
< -2\kappa + \kappa = -\kappa < 0.
\end{align}
Using the second part of \Cref{p:imvanish}, the previous equation implies that
\begin{equation}\label{jcvg}
\lim_{\eta \rightarrow 0} \Im R_\star(E_{i+1} + \mathrm{i} \eta) = 0
\end{equation}
in probability.
Let $\{\eta_j\}_{j=1}^\infty$ be a decreasing sequence with $\lim_{j \rightarrow \infty} \eta_j = 0$ such that
\begin{equation}\label{otter}
\lim_{j \rightarrow \infty}
\exp \left( \varphi(s; E_{i+1} + \mathrm{i} \eta_j) \right)= \limsup_{\eta\rightarrow 0} \exp \left( \varphi(s; E_{i+1} + \mathrm{i} \eta) \right).
\end{equation}
Then using \eqref{jcvg} and \Cref{l:bootstrap}, equation \eqref{otter} implies that
\begin{equation*}
\limsup_{\eta \rightarrow 0} \exp \big( \varphi(s; E_{i+1} + \mathrm{i} \eta) \big) = \lim_{j \rightarrow \infty}
\exp \big( \varphi(s; E_{i+1} + \mathrm{i} \eta_j) \big) = \lambda(E_{i+1}, s, \alpha).
\end{equation*}
This completes the induction step and shows that \eqref{bob} holds for all $i\in \unn{0}{M}$. In particular, taking $i=M$, we have
\begin{equation*}
\limsup_{\eta \rightarrow 0} \exp\big(\varphi(s; E' + \mathrm{i} \eta)\big) \le \lambda(E', s , \alpha) < 1.
\end{equation*}
Since $\lambda(E', s, \alpha) < 1$, the second part of \Cref{p:imvanish} then implies $ \lim_{\eta \rightarrow 0} \Im R_\star(E' + \mathrm{i} \eta) = 0$ in probability.
Since $s \in (\alpha ,1)$ and $E'\in (E_{\mathrm{loc},s}, E_0)$ were arbitrary, we conclude that for every $s\in (\alpha ,1)$, the localization statement
\begin{equation}\label{myloc}
\lim_{\eta \rightarrow 0} \Im R_\star(E + \mathrm{i} \eta) = 0
\end{equation}
holds in probability for every $E$ such that $|E| > E_{\mathrm{loc},s}$.
By the second part of \Cref{p:213} and the continuity of $E\mapsto \lambda(E ,\alpha)$ (given by \Cref{l:lambdalemma}), we have
\begin{equation*}
E_{\mathrm{loc} } = \inf \{ E \in \mathbb{R}_+ : \lambda(x,\alpha) < 1 \text{ for all } x \ge E\}.
\end{equation*}
Then the continuity of $\lambda(E, s, \alpha)$ and the equation $\lambda(E,\alpha) = \lim_{s\rightarrow 1} \lambda(E,s,\alpha)$ (provided by the second part of \Cref{l:lambdalemma}) together show that $
\lim_{s\rightarrow 1} E_{\mathrm{loc},s } = E_{\mathrm{loc} }.$
Hence, for any $E$ such that $|E| > E_{\mathrm{loc} }$, there exists some $s\in (\alpha,1)$ such that $|E| > E_{\mathrm{loc},s }$, and by what was shown previously the localization claim \eqref{myloc} holds at $E$. This completes the proof of the second part of \Cref{t:main1}.
\end{proof}
\begin{proof}[Proof of \Cref{c:main1}]
The first part of the corollary follows from the first part of \Cref{t:main1} and the second part of \Cref{l:loccriteria}. The second part of the corollary follows from the second part of \Cref{t:main1} and the first part of \Cref{l:loccriteria}.
The final part of the corollary follows from \Cref{t:main2}, the first two parts of the corollary, \Cref{p:213}, and the continuity of $\lambda(E,\alpha)$ in $E$ provided by the third part of \Cref{l:lambdalemma}.
\end{proof}
\section{Miscellaneous Preliminaries}
\label{Estimates0}
In this section we collect various miscellaneous results that will be used in what follows. We begin in \Cref{s:fouriertransform} with our conventions for the Fourier transform. In \Cref{ProofSum} we state properties of stable laws and Poisson point processes, and then provide properties of random trees in \Cref{EstimateTreeVertex}. In \Cref{EquationsResolvent} we recall various resolvent identities, and in \Cref{ResolventDensity} and \Cref{ProofSigma} we establish properties of the diagonal resolvent entry $R_{\star} (z)$ from \Cref{r}. Finally, in \Cref{s:rebdd} we show that any real boundary value of $R_\star(z)$ must be equal to $R_\mathrm{loc}(E)$.
\subsection{Fourier Transform}\label{s:fouriertransform}
Our convention for the Fourier transform of a function $f\in L^1(\mathbb{R})$ is
\begin{equation}\label{fourier}
\hat f(w) = \int_{-\infty}^\infty \exp(\mathrm{i} wx) f(x) \, dx.
\end{equation}
This definition is chosen to match the definition of the characteristic function of a random variable $X$ as $\mathbb{E} \left[ \exp( \mathrm{i} t X) \right]$. The inversion formula then reads
\begin{equation}
f(x) =\frac{1}{2\pi} \int_{-\infty}^\infty
\exp(-\mathrm{i} w x ) \hat f(w)\, dw,
\end{equation}
which holds whenever $f\in L^1(\mathbb{R})$ and $\hat f\in L^1(\mathbb{R})$.
For notational convenience, we sometimes write $\mathcal F [f](w)$ in place of $\hat f(w)$, and $\mathcal F^{-1} [f](x)$ for the inverse transform $(2\pi)^{-1}\int_{\mathbb{R}} \exp(-\mathrm{i} w x) f(w) \, dw$.
Recall that the convolution of two functions was defined in \Cref{s:notation}. For $f,g \in L^2(\mathbb{R})$, we have under our convention the formulas
\begin{equation*}
\mathcal F [f \ast g] = \mathcal F [f] \cdot \mathcal F [g], \qquad \mathcal F [fg] = \frac{1}{2 \pi} \big( \mathcal F[f] \ast \mathcal F[g] \big).
\end{equation*}
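\begin{rem}
As a basic illustration of the convention \eqref{fourier}, the Cauchy density $f(x) = \pi^{-1} \eta (x^2 + \eta^2)^{-1}$ with parameter $\eta > 0$ satisfies $\hat{f} (w) = e^{-\eta |w|}$; in particular, $\hat{f} (0) = 1$, as must hold for any probability density.
\end{rem}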
\subsection{Properties of Stable Laws and Poisson Point Processes}
\label{ProofSum}
In this section we provide several facts about stable laws and Poisson point processes. We begin with the following lemma stating that a stable law can be represented as a random linear combination of entries sampled from a Poisson point process with the heavy-tailed intensity $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$.
\begin{lem}
\label{sumaxi}
Let $\boldsymbol{a} = (a_j)_{j \ge 1}$ denote a family of mutually independent, identically distributed, real random variables with law $a$. For each $j \ge 1$, set $b_j = \max \{ a_j, 0 \}$ and $c_j = \min \{ a_j, 0 \}$. Set
\begin{flalign}
\label{bc}
B = \mathbb{E} \big[ (b_1)^{\alpha / 2} \big]; \qquad C = \mathbb{E} \big[ (-c_1)^{\alpha / 2} \big].
\end{flalign}
\noindent If $\boldsymbol{\zeta} = \{ \zeta_j \}_{j \ge 1}$ is a Poisson point process with intensity measure $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$, then the random variable $Z = \sum_{j = 1}^{\infty} a_j \zeta_j$ is an $\big( \frac{\alpha}{2}; \sigma; \beta \big)$-stable law, where
\begin{flalign*}
\sigma = (B + C)^{2 / \alpha} = \mathbb{E} \big[ |a|^{\alpha / 2} \big]^{2 / \alpha}; \qquad \beta = \displaystyle\frac{B - C}{B + C}.
\end{flalign*}
\end{lem}
\begin{proof}
Let $\mathcal{J} = \big\{ j \in \mathbb{Z}_{\ge 1} : a_j = b_j \big\}$. Then $Z = X - Y$, where $X = \sum_{j \in \mathcal{J}} b_j \zeta_j$ and $Y = -\sum_{k \notin \mathcal{J}} c_k \zeta_k$. Observe that $X$ and $Y$ are independent nonnegative random variables, as $(\zeta_j)_{j \in \mathcal{J}}$ is independent from $(\zeta_{j'})_{j' \notin \mathcal{J}}$ by the Poisson thinning property. Moreover, by the L\'{e}vy--Khintchine representation, we have for any $t \in \mathbb{R}_{\ge 0}$ that
\begin{flalign*}
\mathbb{E} [e^{-tX}] = \mathbb{E} \Bigg[ \exp \bigg( -t \displaystyle\sum_{j \in \mathcal{J}} a_j \zeta_j \bigg) \Bigg] & = \exp \Bigg( \frac{\alpha}{2} \displaystyle\int_0^{\infty} \mathbb{E} [e^{-xtb_1} - 1] \cdot x^{-\alpha/2-1} dx \Bigg) \\
& = \exp \bigg( - \Gamma \Big(1 - \displaystyle\frac{\alpha}{2} \Big) \cdot B |t|^{\alpha / 2} \bigg),
\end{flalign*}
\noindent and similarly
\begin{flalign*}
& \mathbb{E} [e^{-tY}] = \mathbb{E} \Bigg[ \exp \bigg( t \displaystyle\sum_{j \notin \mathcal{J}} a_j \zeta_j \bigg) \Bigg] = \exp \bigg( - \Gamma \Big(1 - \displaystyle\frac{\alpha}{2} \Big) \cdot C |t|^{\alpha / 2} \bigg).
\end{flalign*}
\noindent Thus, $X$ and $Y$ are independent, nonnegative $\frac{\alpha}{2}$-stable laws with scaling parameters $B^{2 / \alpha}$ and $C^{2 / \alpha}$, respectively. Since $Z = X - Y$, we deduce the lemma from \Cref{sumalpha}.
\end{proof}
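\begin{rem}
The integral evaluation in the above proof follows from the standard identity
\begin{flalign*}
\displaystyle\int_0^{\infty} (1 - e^{-rx}) \cdot x^{-s-1} dx = \displaystyle\frac{\Gamma(1 - s)}{s} \cdot r^s, \qquad \text{valid for any $s \in (0, 1)$ and $r \ge 0$,}
\end{flalign*}
\noindent which is obtained by integrating by parts and using $\int_0^{\infty} e^{-rx} x^{-s} dx = \Gamma (1 - s) \cdot r^{s-1}$; above, it is applied with $s = \frac{\alpha}{2}$.
\end{rem}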
The next lemma bounds the density of a random linear combination of entries sampled from the restriction of the Poisson point process with intensity $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$ to the complement of some interval $[u, v]$.
\begin{lem}
\label{zuv}
For any positive real numbers $u < v$ and $\kappa \in (0, 1/4)$, there exists a constant $c = c(u, v, \kappa) \in (0, 1)$ such that the following holds. Let $\boldsymbol{a} = (a_1, a_2, \ldots )$ denote a family of mutually independent, identically distributed, real random variables with law $a$. Defining $B$ and $C$ as in \eqref{bc}, assume that $B, C \in (\kappa, \kappa^{-1})$. Let $\boldsymbol{\zeta} = (\zeta_1, \zeta_2, \ldots )$ denote a Poisson point process on $\mathbb{R}_{> 0}$ with intensity proportional to $\frac{\alpha}{2} x^{-\alpha / 2 - 1} \cdot \mathbbm{1}_{x \notin [u, v]} \cdot dx$, and denote $Z = \sum_{j = 1}^{\infty} a_j \zeta_j$. The following two statements hold for any interval $J \subset \mathbb{R}$.
\begin{enumerate}
\item We have
\begin{flalign}
\label{estimatezj1}
\mathbb{P} [Z \in J] \le c^{-1} \displaystyle\int_J \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha/2 + 1}}.
\end{flalign}
\item Further assume for any interval $I \subset \mathbb{R}$ that
\begin{flalign}
\label{ainterval}
\mathbb{P} [a \in I] \ge \kappa \displaystyle\int_I \displaystyle\frac{dx}{ \big( |x| + 1 \big)^{\alpha/2 +1}}.
\end{flalign}
\noindent Then, we have
\begin{flalign}
\label{estimatezj2}
\mathbb{P} [Z \in J] \ge c \displaystyle\int_J \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha/2 + 1}}.
\end{flalign}
\end{enumerate}
\end{lem}
\begin{proof}
Set $J = [x_0, y_0]$. Let $\boldsymbol{\zeta}' = (\zeta_1', \zeta_2', \ldots )$ denote a Poisson point process on $\mathbb{R}_{> 0}$ with intensity $ \frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$, and set $Z' = \sum_{j = 1}^{\infty} a_j \zeta_j'$. Then, there exists a constant $c_1 = c_1 (u, v) > 0$ such that none of the $\zeta_i'$ lie in $[u, v]$ with probability at least $c_1$. In particular, there exists a coupling of $\boldsymbol{\zeta}$ and $\boldsymbol{\zeta}'$ such that $\mathbb{P} [\boldsymbol{\zeta} = \boldsymbol{\zeta}'] \ge c_1$. Under this coupling, the law of $Z$ is that of $Z'$ conditional on the event $\{ \boldsymbol{\zeta} = \boldsymbol{\zeta}' \}$, so $\mathbb{P} [Z \in J] \le c_1^{-1} \cdot \mathbb{P} [Z' \in J]$. Together with \Cref{sumaxi}, which implies the existence of a constant $c_2 = c_2 (\kappa) \in (0, 1)$ such that
\begin{flalign*}
\mathbb{P} [Z' \in J] \le c_2^{-1} \displaystyle\int_J \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha/2 + 1}},
\end{flalign*}
\noindent this yields \eqref{estimatezj1}.
To establish \eqref{estimatezj2}, observe there exists a constant $c_3 = c_3 (u) > 0$ such that
\begin{flalign*}
\mathbb{P} [\mathscr{E}] \ge c_3, \qquad \text{where} \qquad \mathscr{E} = \bigg\{ \displaystyle\frac{u}{2} \le \zeta_1 \le u \bigg\} \cap \bigg\{ \zeta_2 < \displaystyle\frac{u}{2} \bigg\}.
\end{flalign*}
\noindent Denoting $Z_0 = \sum_{j = 2}^{\infty} a_j \zeta_j$, we have that $a_1 \zeta_1$ and $Z_0$ are independent conditional on the event $\mathscr{E}$. Moreover, on $\mathscr{E}$, the random variable $\zeta_1$ has law proportional to $ x^{-\alpha / 2 - 1} \cdot \mathbbm{1}_{x \in [u/2, u]} \cdot dx$. Now, by the upper bound \eqref{estimatezj1}, there exists a constant $C_1 = C_1 (u, v, \kappa) > 1$ such that $\mathbb{P} \big[ |Z_0| \le C_1 \big| \mathscr{E} \big] \ge \frac{1}{2}$. This, together with the law of $\zeta_1$ and the conditional independence of $a_1 \zeta_1$ and $Z_0$, implies
\begin{flalign*}
\mathbb{P} [Z \in J] & \ge \mathbb{P} [\mathscr{E}] \cdot \mathbb{P} \big[ a_1 \zeta_1 + Z_0 \in [x_0, y_0] \big| \mathscr{E} \big] \\
& \ge c_3 \cdot \mathbb{P} \big[ a_1 \zeta_1 \in [x_0 - Z_0, y_0 - Z_0] \big| \mathscr{E} \big] \\
& \ge \displaystyle\frac{c_3}{4} \cdot \mathbb{P} \Big[ a_1 \zeta_1 \in [x_0 - Z_0, y_0 - Z_0] \Big| \mathscr{E} \cap \big\{ |Z_0| \le C_1 \big\} \cap \big\{ |a_1| \ge \kappa \big\} \Big] \ge c_4 \displaystyle\int_J\displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha/2 + 1}}
\end{flalign*}
\noindent for some constant $c_4 = c_4 (u, v, \kappa) > 0$; here, to deduce the last inequality, we applied \eqref{ainterval}. This confirms the lower bound \eqref{estimatezj2}.
\end{proof}
We next state a version of Campbell's theorem for Poisson point processes. To that end, let $\mathcal{M} (\mathbb{R}_{> 0})$ denote the space of locally finite measures on $\mathbb{R}_{> 0}$, and (following \cite{daley2008introduction}) let $\mathcal N_{\mathbb{R}_{> 0}}^{\#*}$ denote the set of all simple counting measures; these are boundedly finite integer-valued measures $N$ on $\mathbb{R}_{> 0}$ such that $N(\{x\}) =0$ or $1$ for each $x \in \mathbb{R}_{> 0}$. The latter set is endowed with the $\sigma$-algebra $\mathcal B (\mathcal N_{\mathbb{R}_{> 0}}^{\#*})$, the smallest $\sigma$-algebra such that for $A \in \mathcal B(\mathbb{R}_{> 0})$, the map $\mathcal N_{\mathbb{R}_{> 0}}^{\#*} \rightarrow \mathbb{R}$ given by $N \mapsto N(A)$ is measurable. Here $\mathcal B(\mathbb{R}_{> 0})$ is the Borel $\sigma$-algebra on $\mathbb{R}_{> 0}$.
For any locally finite sequence of points $\Xi = (\xi_1, \xi_2, \ldots ) \subset \mathbb{R}_{> 0}$, we frequently also use $\Xi$ to denote the measure $\sum_{\xi \in \Xi} \delta_{\xi} \in \mathcal N_{\mathbb{R}_{> 0}}^{\#*}$. For any $x \in \mathbb{R}$, we further set $\Xi - \delta_x \in \mathcal N_{\mathbb{R}_{> 0}}^{\#*}$ equal to $\sum_{\xi \in \Xi \setminus x} \delta_{\xi}$ if $x \in \Xi$ and $\sum_{\xi \in \Xi} \delta_{\xi}$ if $x \notin \Xi$.
\begin{lem}[{\cite[Lemma 13.1.II, Definition 13.1.I (b), and Exercise 13.1.1 (a)]{daley2008introduction}}]
\label{fidentityxi}
Let $\Xi$ denote a Poisson point process on $\mathbb{R}_{> 0}$ with intensity measure $\mu \in \mathcal{M} (\mathbb{R}_{> 0})$. For any nonnegative, measurable function $f : \mathbb{R}_{> 0} \times \mathcal N_{\mathbb{R}_{> 0}}^{\#*} \rightarrow \mathbb{R}_{\ge 0}$, we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{j = 1}^{\infty} f(\xi_j, \Xi \setminus \xi_j) \Bigg] = \displaystyle\int_0^{\infty} \mathbb{E} \big[ f(x, \Xi - \delta_x) \big] d \mu (x).
\end{flalign*}
\noindent In particular, if $\mu(dx) = c x^{-\alpha-1} \cdot \mathbbm{1}_{x \in I} \cdot dx$ for some constant $c > 0$ and interval $I \subset \mathbb{R}$, then
\begin{flalign}
\label{fxi}
\mathbb{E} \Bigg[ \displaystyle\sum_{j = 1}^{\infty} f(\xi_j, \Xi \setminus \xi_j) \Bigg] = c \displaystyle\int_I \mathbb{E} \big[ f(x, \Xi - \delta_x) \big] \cdot x^{-\alpha-1} dx.
\end{flalign}
\end{lem}
\begin{rem}
We will often apply \Cref{fidentityxi} for functions $f$ containing an infinite sum built from the random variables $\xi_j$. We define such $f$ to be $0$ on the event where the sum does not converge (observing the fact that this event is measurable).
\end{rem}
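\begin{rem}
In the special case where $f(x, N) = g(x)$ depends only on its first argument, \eqref{fxi} reduces to the classical Campbell formula
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{j = 1}^{\infty} g(\xi_j) \Bigg] = c \displaystyle\int_I g(x) \cdot x^{-\alpha - 1} dx.
\end{flalign*}
\end{rem}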
\subsection{Properties of Random Trees}
\label{EstimateTreeVertex}
In this section we provide properties of various random trees. We begin by recalling the notion of unimodularity introduced in \cite{benjamini2001recurrence,aldous2004objective,aldous2007processes}.
We consider edge-weighted graphs; these are graphs together with a map from their edge sets to $\mathbb{R}$. A (possibly infinite) graph $G$ is \emph{locally finite} if, for any compact $K \subset \mathbb{R}$, each of its vertices has a finite number of adjacent edges with weight in $K$; we let $V = V(G)$ denote the vertex set of $G$. Let $\mathcal{G}_*$ denote the set of isomorphism classes $(G, u_0)$ of \emph{rooted weighted graphs}, namely of a (connected) locally finite weighted graph $G$ with a distinguished vertex $u_0 \in V(G)$. Further let $\mathcal{G}_{**}$ denote the set of isomorphism classes $(G, u, v)$ of \emph{doubly rooted graphs}, that is, of a (connected) locally finite weighted graph $G$ with an ordered pair of distinguished vertices $(u, v)$ of $G$. The spaces $\mathcal{G}_*$ and $\mathcal{G}_{**}$ both admit a complete, separable metric; we refer to \cite[Section 2]{benjamini2015unimodular} for definitions.
\begin{definition}
\label{definitionunimodular}
A measure $\mu$ on $\mathcal{G}_{*}$ is \emph{unimodular} if, for any Borel measurable function $f : \mathcal{G}_{**} \rightarrow \mathbb{R}_{\ge 0} \cup \{ \infty \}$, we have
\begin{flalign}
\label{sumfuv}
\mathbb{E}_{\mu (G, u_0)} \Bigg[ \displaystyle\sum_{v \in V(G)} f(G, u_0, v) \Bigg] = \mathbb{E}_{\mu (G, u_0)} \Bigg[ \displaystyle\sum_{v \in V(G)} f(G, v, u_0) \Bigg].
\end{flalign}
\noindent The equality \eqref{sumfuv} is sometimes referred to as the \emph{Mass Transport Principle}.
\end{definition}
\begin{rem}
\label{tunimodular}
As observed in \cite[Section 3.2]{benjamini2001recurrence}, the local weak limit of any measure on finite trees is unimodular. Thus, the random tree $\mathbb{T}$ is unimodular.
\end{rem}
Next, we provide an estimate for the number of vertices on the $k$-th level of a certain random tree (namely, a Galton--Watson tree). Given a real number $\lambda > 0$, we recall that the \emph{Galton--Watson tree} with parameter $\lambda$ is a random rooted tree $\mathcal{T}$ with vertex set $\mathcal{V} = \bigcup_{k = 0}^{\infty} \mathcal{V} (k)$, where the $\mathcal{V} (k)$ are described inductively as follows. The set $\mathcal{V} (0)$ consists of a single vertex $\{ 0 \}$, given by the root of $\mathcal{T}$. For each integer $k \ge 0$ and vertex $v \in \mathcal{V} (k)$, let $n_v \ge 0$ denote a Poisson random variable with parameter $\lambda$, with the $n_v$ mutually independent over all vertices $v$. Then, $v$ has $n_v$ children $\mathcal{D} (v) = \big\{ (v, 1), (v, 2), \ldots , (v, n_v) \big\}$, and we set $\mathcal{V} (k+1) = \bigcup_{v \in \mathcal{V} (k)} \mathcal{D} (v)$. We refer to $\mathcal{V} (k)$ as the \emph{$k$-th level} of $\mathcal{T}$.
The following lemma bounds the number of vertices in the $k$-th level of a Galton--Watson tree.
\begin{lem}
\label{treenumber}
We have $\mathbb{E} \big[ \big| \mathcal{V} (k) \big| \big] = \lambda^k$. Moreover, if $\lambda \ge 2$ then for any real number $B \ge 1$, we have
\begin{flalign}
\label{vnk}
\mathbb{P} \Big[ \big| \mathcal{V} (k) \big| \ge B \cdot (2\lambda)^k \Big] \le 3 e^{- B/2}.
\end{flalign}
\end{lem}
\begin{proof}
Observe for any integer $\ell \ge 1$ that
\begin{flalign}
\label{vum}
\big| \mathcal{V} (\ell) \big| = \displaystyle\sum_{w \in \mathcal{V} (\ell - 1)} n_w,
\end{flalign}
\noindent where $n_w$ denotes the number of children of any vertex $w \in \mathcal{V}$. Since the $n_w$ for $w \in \mathcal{V} (k - 1)$ are Poisson random variables with mean $\lambda$, independent of $\big| \mathcal{V} (k-1) \big|$, this together with \eqref{vum} and Wald's identity implies that $\mathbb{E} \big[ \big| \mathcal{V} (k) \big| \big] = \lambda \cdot \mathbb{E} \big[ \big| \mathcal{V} (k-1) \big| \big]$. Hence, $\mathbb{E} \big[ \big| \mathcal{V} (k) \big| \big] = \lambda^k$ by induction on $k$.
To show \eqref{vnk}, define $f\colon \mathbb{R}_{> 0} \rightarrow \mathbb{R}$ by setting $f(x) = f_{\lambda} (x) = e^{\lambda (x - 1)}$ for each $x > 0$. Then, since $n_w$ is a Poisson random variable with mean $\lambda$, we have $\mathbb{E} [s^{n_w}] = e^{\lambda(s-1)} = f(s)$, for each $s > 0$. Denoting $g_{\ell} (s) = \mathbb{E} [s^{|\mathcal{V} (\ell)|}]$, \eqref{vum} then implies (upon conditioning on $\mathcal{V} (\ell - 1)$ and using the mutual independence of the $n_w$) for each integer $\ell \ge 1$ and real number $s > 0$ that
\begin{flalign}
\label{vn1}
g_{\ell} (s) = \mathbb{E} \Bigg[ \displaystyle\prod_{w \in \mathcal{V} (\ell - 1)} s^{n_w} \Bigg] = \mathbb{E} \Bigg[ \displaystyle\prod_{w \in \mathcal{V} (\ell - 1)} \mathbb{E} [s^{n_w}] \Bigg] = \mathbb{E} \big[ (e^{\lambda (s - 1)})^{|\mathcal{V} (\ell - 1)|} \big] = g_{\ell - 1} \big( f(s) \big),
\end{flalign}
\noindent and so $g_{\ell} (s) = f^{(\ell)} (s)$, where $f^{(\ell)}$ denotes the $\ell$-fold composition of $f$.
Since $f(x + 1) - 1 \le \lambda x + (\lambda x)^2$ for any $0 \le x \le \lambda^{-1}$, we have by induction on $\ell$ that $f^{(\ell)} (1 + s) - 1 \le \lambda^{\ell} s + 2^{\ell} (\lambda^{\ell} s)^2$, if $0 \le s \le (2^{\ell + 1} \lambda^{\ell})^{-1}$, as then it is quickly verified that
\begin{flalign*}
\lambda \cdot (\lambda^{\ell} s + 2^{\ell} \lambda^{2 \ell} s^2) + \lambda^2 \cdot (\lambda^{\ell} s + 2^{\ell} \lambda^{2 \ell} s^2)^2 \le \lambda^{\ell + 1} s + 2^{\ell + 1} (\lambda^{\ell + 1} s)^2.
\end{flalign*}
\noindent Applying this for $\ell = k - 1$ and $s = 2^{-k} \lambda^{-k}$ yields
\begin{flalign*}
\mathbb{E} \big[ ( 1 + 2^{-k} \lambda^{-k})^{|\mathcal{V} (k)|} \big] \le f(1 + \lambda^{k-1} s + 2^{k-1} \lambda^{2k-2} s^2) \le f \bigg( 1 + \displaystyle\frac{3}{4 \lambda} \bigg) < 3.
\end{flalign*}
\noindent Hence, \eqref{vnk} follows from a Markov estimate.
\end{proof}
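As an illustration (purely numerical, and not used in the proofs), the identity $\mathbb{E} \big[ |\mathcal{V} (k)| \big] = \lambda^k$ from \Cref{treenumber} can be checked by Monte Carlo simulation, tracking only the level sizes via the recursion \eqref{vum}. The Python sketch below samples the Poisson offspring counts with Knuth's method; the parameters and trial count are arbitrary choices.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method for sampling a Poisson(lam) random variable.
    L = math.exp(-lam)
    n, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def gw_level_size(lam, k, rng):
    """Sample |V(k)| for a Galton--Watson tree with Poisson(lam) offspring.

    Only the level sizes are tracked: |V(l)| is a sum of |V(l-1)| i.i.d.
    Poisson(lam) offspring counts, matching the recursion in the proof above.
    """
    size = 1  # |V(0)| = 1, the root
    for _ in range(k):
        size = sum(poisson(lam, rng) for _ in range(size))
    return size

rng = random.Random(0)
lam, k, trials = 2.0, 3, 20000
mean = sum(gw_level_size(lam, k, rng) for _ in range(trials)) / trials
assert abs(mean - lam ** k) < 0.5  # E|V(3)| = 2^3 = 8, up to Monte Carlo error
```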
Finally, we introduce the following notation for the tree $\mathbb{T}$ (defined in \Cref{Stable}). The \emph{length} of any vertex $v \in \mathbb{Z}_{\ge 1}^k \subset \mathbb{V}$ is $\ell (v) = k$, and we let $\mathbb{V} (\ell) = \mathbb{Z}_{\ge 1}^\ell$ denote the set of vertices of length $\ell$. We write $v \sim w$ if there is an edge between $v, w \in \mathbb{V}$. The \emph{parent} of any vertex $v \in \mathbb{V}$ that is not the root\footnote{If $v = 0$, then its parent $0_-$ is defined to be the empty set.} (namely, $v \ne 0$) is the unique vertex $v_- \in \mathbb{V}$ such that $v_- \sim v$ and $\ell (v_-) = \ell (v) - 1$. A child of $v$ is any vertex $w \in \mathbb{V}$ whose parent is $v$. We let $\mathbb{D} (v) \subset \mathbb{V}$ denote the set of children of $v$.
A \emph{path} on $\mathbb{T}$ is a sequence of vertices $\mathfrak{p} = (v_0, v_1, \ldots , v_k)$ such that $v_i$ is the parent of $v_{i + 1}$ for each $i \in [0, k - 1]$. The \emph{length} of this path is $\ell (\mathfrak{p}) = k$, and its \emph{boundary vertices} are $v_0$ and $v_k$, with $v_0$ and $v_k$ being the \emph{starting} and \emph{ending} vertices of $\mathfrak{p}$, respectively. If $\mathfrak{p} \subset \mathbb{V}$ is a path in $\mathbb{T}$, and $v \in \mathfrak{p}$ is not the ending vertex, then let $v_+ = v_+ (\mathfrak{p}) \in \mathfrak{p}$ denote the child of $v$ in $\mathfrak{p}$. We write $v \preceq w$ (equivalently, $w \succeq v$) if there exists a path (possibly of length $0$) with starting and ending vertices $v$ and $w$, respectively; we further write $v \prec w$ (equivalently, $w \succ v$) if $v \preceq w$ and $v \ne w$. For any integer $k \ge 0$ and vertex $v \in \mathbb{V}$ with $\ell (v) = \ell$, we additionally let $\mathbb{D}_k (v) = \big\{ u \in \mathbb{V} (\ell + k) : v \preceq u \big\}$. Observe in particular that $\mathbb{D}_1 (v) = \mathbb{D} (v)$ and $\mathbb D_\ell(0) = \mathbb{V}(\ell)$.
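The notation above translates directly into code. In the following Python sketch (an illustration of the conventions only), a vertex of $\mathbb{V} = \bigcup_k \mathbb{Z}_{\ge 1}^k$ is encoded as a tuple of positive integers, with the root $0$ encoded as the empty tuple; the length, parent, and ancestry relation $v \preceq w$ then become prefix operations.

```python
# Vertices of the index tree are tuples of positive integers; the root 0 is ().
def length(v):
    # The length l(v) of a vertex is the number of coordinates.
    return len(v)

def parent(v):
    # The parent v_- of a non-root vertex drops the last coordinate.
    assert v != (), "the root has no parent"
    return v[:-1]

def precedes(v, w):
    # v <= w (v precedes w) iff v is a prefix of w, i.e. w lies below v.
    return w[:length(v)] == v

root = ()
v = (2, 5)        # a vertex of length 2
w = (2, 5, 1, 3)  # a descendant of v, of length 4

assert length(v) == 2 and parent(v) == (2,)
assert precedes(root, w) and precedes(v, w) and not precedes(w, v)
```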
We will sometimes identify $\mathbb{T}$ with the edge-weighted graph whose weights are given by the operator $\boldsymbol{T}$, as in \eqref{t}.
For any vertex $w \in \mathbb{V}$, let $\mathbb{T}_- (w)$ denote the set of all edge weights and vertices in the connected component containing the root $0$ of $\mathbb{T} \setminus \{ w \}$. That is, it constitutes the subgraph of $\mathbb{T}$ that lies ``above'' $w$, together with the weight on the edge $(w_-, w)$.
Let $\mathbb{P}_{\mathbb{T}_- (w)}$ and $\mathbb{E}_{\mathbb{T}_- (w)}$ denote the probability measure and expectation conditional on $\mathbb{T}_- (w)$, respectively.
We will often abbreviate $\mathbb{P}_w = \mathbb{P}_{\mathbb{T}_- (w)}$ and $\mathbb{E}_w = \mathbb{E}_{\mathbb{T}_- (w)}$.
\subsection{Identities for Resolvent Entries}
\label{EquationsResolvent}
In this section we collect several known identities for entries of the resolvents of adjacency operators on trees. We begin with some notation; in what follows, we recall the notation from \Cref{Stable}.
For any subset of vertices $\mathcal{U} \subseteq \mathbb{V}$, we let $\boldsymbol{T}^{(\mathcal{U})}$ denote the operator obtained from $\boldsymbol{T}$ by setting all entries associated with at least one vertex in $\mathcal{U}$ to $0$. More specifically, we let $\boldsymbol{T}^{(\mathcal{U})} : L^2 (\mathbb{V}) \rightarrow L^2 (\mathbb{V})$ denote the self-adjoint operator defined by first setting, for any vertices $v, w \in \mathbb{V}$,
\begin{flalign*}
T_{vw}^{(\mathcal{U})} = \big\langle \delta_v, \boldsymbol{T}^{(\mathcal{U})} \delta_w \big\rangle = T_{vw}, \quad \text{if $v, w \notin \mathcal{U}$}; \qquad T_{vw}^{(\mathcal{U})} = \big\langle \delta_v, \boldsymbol{T}^{(\mathcal{U})} \delta_w \big\rangle = 0, \qquad \text{otherwise};
\end{flalign*}
\noindent then extending to $\mathcal{F}$ (finitely supported vectors of $L^2 (\mathbb V)$) by linearity; and next by extending to $L^2 (\mathbb{V})$ by density. For any complex number $z \in \mathbb{H}$, we further denote the associated resolvent operator $\boldsymbol{R}^{(\mathcal{U})} : L^2 (\mathbb{V}) \rightarrow L^2 (\mathbb{V})$ and its entries by
\begin{flalign*}
\boldsymbol{R}^{(\mathcal{U})} = \boldsymbol{R}^{(\mathcal{U})} (z) = \big( \boldsymbol{T}^{(\mathcal{U})} - z \big)^{-1}; \qquad R_{vw}^{(\mathcal{U})} = R_{vw}^{(\mathcal{U})} (z) = \big\langle \delta_v, \boldsymbol{R}^{(\mathcal{U})} \delta_w \big\rangle, \qquad \text{for any $v, w \in \mathbb{V}$}.
\end{flalign*}
The following lemma provides identities and estimates on the entries of $\boldsymbol{R}^{(\mathcal{U})}$. The first statement \eqref{qvv} below is given by \cite[Proposition 2.1]{klein1998extended}; the second follows from the first; and the third is given by \cite[Equation (3.37)]{aizenman2013resonant}. Alternatively, in the context of finite-dimensional operators, the first, second, and third statements below are given by \cite[Equation (8.8)]{erdos2017dynamical}, \cite[Equation (8.34)]{erdos2017dynamical}, and \cite[Equation (8.3)]{erdos2017dynamical}, respectively.
\begin{lem}[{\cite{klein1998extended,aizenman2013resonant}}]
\label{q12}
Fix $z = E + \mathrm{i} \eta \in \mathbb{H}$ and $\mathcal{U} \subset \mathbb{V}$.
\begin{enumerate}
\item For any vertex $v \in \mathbb{V}$, we have the \emph{Schur complement identity}
\begin{flalign}
\label{qvv}
R_{vv}^{(\mathcal{U})} = R_{vv}^{(\mathcal{U})} (z) = - \Bigg( z + \displaystyle\sum_{w \sim v} \big( T_{vw}^{(\mathcal{U})} \big)^2 \cdot R_{ww}^{(\mathcal{U}, v)} \Bigg)^{-1}.
\end{flalign}
\noindent Moreover, $R_{ww}^{(v)} (z)$ has the same law as $R_{00} (z)$, for any $w \in \mathbb{D}(v)$.
\item For any vertices $v, w \in \mathbb{V}$, we deterministically have
\begin{flalign*}
\big| R_{vw}^{(\mathcal{U})} (z) \big| \le \eta^{-1}.
\end{flalign*}
\item For any vertex $v \in \mathbb{V}$, we have the \emph{Ward identity}
\begin{flalign}
\label{sumrvweta}
\displaystyle\sum_{w \in \mathbb{V} \setminus \mathcal{U}} \big| R_{vw}^{(\mathcal{U})} \big|^2 = \eta^{-1} \cdot \Imaginary R_{vv}^{(\mathcal{U})}.
\end{flalign}
\end{enumerate}
\end{lem}
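Although \Cref{q12} is stated for the operator $\boldsymbol{T}$ on the infinite tree, the second and third statements already hold for any finite real symmetric matrix, as in the finite-dimensional references cited above. The following Python sketch (using \texttt{numpy}, on an arbitrary small symmetric matrix) verifies the Ward identity \eqref{sumrvweta} (with $\mathcal{U} = \emptyset$) and the deterministic bound $|R_{vw}| \le \eta^{-1}$ numerically.

```python
import numpy as np

# A small symmetric matrix with arbitrary real weights, standing in for T.
T = np.array([[0.0, 1.3, 0.0],
              [1.3, 0.0, -0.7],
              [0.0, -0.7, 0.0]])
z = 0.4 + 0.05j  # spectral parameter E + i*eta, with eta = 0.05
eta = z.imag

R = np.linalg.inv(T - z * np.eye(3))  # resolvent (T - z)^{-1}

# Ward identity: sum_w |R_{vw}|^2 = Im(R_{vv}) / eta, for every vertex v.
for v in range(3):
    assert abs(np.sum(np.abs(R[v]) ** 2) - R[v, v].imag / eta) < 1e-10

# Deterministic bound: |R_{vw}| <= 1/eta for all entries.
assert np.all(np.abs(R) <= 1 / eta + 1e-12)
```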
A quick consequence of the Schur complement identity (and \Cref{tuvalpha}) is the following result from \cite{bordenave2011spectrum}, which states that the entries of $\boldsymbol{R}$ satisfy a recursion in law, sometimes referred to as a \emph{recursive distributional equation}.
\begin{prop}[{\cite[Theorem 4.1]{bordenave2011spectrum}}]
\label{rdistribution}
Fix $z \in \mathbb{H}$, and let $R_1, R_2, \ldots $ denote mutually independent random variables with law $R_{\star} (z)$. Further let $(\xi_1, \xi_2, \ldots )$ denote a Poisson point process on $\mathbb{R}_{> 0}$ with intensity measure $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$. Then, $R_{\star} (z)$ has the same law as
\begin{flalign*}
-\Bigg( z + \displaystyle\sum_{j = 1}^{\infty} \xi_j R_j \Bigg)^{-1}.
\end{flalign*}
\end{prop}
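One step of this recursive distributional equation is straightforward to sample: the points of the Poisson process with intensity $\frac{\alpha}{2} x^{-\alpha/2 - 1} \, dx$ may be realized as $\xi_j = \Gamma_j^{-2/\alpha}$, where the $\Gamma_j$ denote the arrival times of a rate-one Poisson process on $\mathbb{R}_{>0}$. In the Python sketch below (illustrative only: the series is truncated after finitely many points, and the $R_j$ are replaced by i.i.d.\ placeholder draws from the closed upper half-plane), one iteration is performed and the output is checked to again lie in $\mathbb{H}$, as it must whenever $\Imaginary z > 0$.

```python
import random

def rde_step(z, alpha, n_points, rng):
    """One (truncated) step of the recursive distributional equation.

    Points xi_j of the Poisson process with intensity (alpha/2) x^{-alpha/2-1} dx
    are realized as Gamma_j^{-2/alpha}, with Gamma_j the arrival times of a
    rate-one Poisson process; the R_j are stand-in i.i.d. draws from the closed
    upper half-plane (placeholders for independent copies of R_star).
    """
    gamma = 0.0
    total = 0.0 + 0.0j
    for _ in range(n_points):
        gamma += rng.expovariate(1.0)                        # arrival time Gamma_j
        xi = gamma ** (-2.0 / alpha)                         # point of the process
        r = complex(rng.gauss(0, 1), abs(rng.gauss(0, 1)))   # placeholder R_j
        total += xi * r
    return -1.0 / (z + total)

rng = random.Random(1)
z = 0.5 + 0.01j
r_new = rde_step(z, alpha=0.7, n_points=200, rng=rng)
# Im(z) > 0 and Im(R_j) >= 0 force the output into the upper half-plane.
assert r_new.imag > 0
```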
Next we have the following lemma, which expresses $R_{uv}$ as a product of diagonal resolvent entries. It essentially appears as \cite[Equation (3.4)]{aizenman2013resonant} (where it is assumed that all nonzero entries of the adjacency matrix $\boldsymbol{T}$ are equal to $1$).
\begin{lem}[{\cite[Equation (3.4)]{aizenman2013resonant}}]
\label{rproduct}
Fix $v, w \in \mathbb{V}$ with $v \preceq w$ that are connected by a path $\mathfrak{p}$ (with $v$ as the starting vertex and $w$ as the ending vertex) consisting of $m+1$ vertices. For any subset $\mathcal{U} \subset \mathbb{V}$, we have
\begin{flalign*}
R_{vw}^{(\mathcal{U})} = (-1)^m \cdot R_{vv}^{(\mathcal{U})} \cdot \displaystyle\prod_{v \prec u \preceq w} T_{u u_-} R_{uu}^{(\mathcal{U}, u_-)} = (-1)^m \cdot R_{ww}^{(\mathcal{U})} \cdot \displaystyle\prod_{v \preceq u \prec w} T_{u u_+} R_{uu}^{(\mathcal{U}, u_+)}.
\end{flalign*}
\end{lem}
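For a finite weighted path, the identity in \Cref{rproduct} can be checked against a direct matrix inversion. In the Python sketch below (using \texttt{numpy}, with arbitrary weights and spectral parameter), the vertices $0 \prec 1 \prec 2$ form a path, the minor operators $\boldsymbol{T}^{(\mathcal{U})}$ are obtained by zeroing the rows and columns indexed by $\mathcal{U}$, and the first equality of \Cref{rproduct} (with $\mathcal{U} = \emptyset$ and $m = 2$) is verified numerically.

```python
import numpy as np

def resolvent(T, z, removed=()):
    """Resolvent of T with all rows and columns in `removed` zeroed out."""
    TU = T.copy()
    for u in removed:
        TU[u, :] = 0.0
        TU[:, u] = 0.0
    return np.linalg.inv(TU - z * np.eye(len(T)))

# Weighted path 0 -- 1 -- 2 with arbitrary edge weights.
t1, t2 = 1.7, -0.4
T = np.array([[0.0, t1, 0.0],
              [t1, 0.0, t2],
              [0.0, t2, 0.0]])
z = 0.3 + 0.2j

R = resolvent(T, z)
R0 = resolvent(T, z, removed=(0,))   # minor with vertex 0 removed
R1 = resolvent(T, z, removed=(1,))   # minor with vertex 1 removed

# R_{02} = (-1)^2 * R_{00} * (T_{10} R_{11}^{(0)}) * (T_{21} R_{22}^{(1)}).
product = R[0, 0] * (t1 * R0[1, 1]) * (t2 * R1[2, 2])
assert abs(R[0, 2] - product) < 1e-12
```

Here $R_{22}^{(1)} = -1/z$, since removing vertex $1$ isolates vertex $2$.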
The following corollary of \Cref{q12} and \Cref{rproduct}, which lower bounds $\Imaginary R_{00}$ in terms of other entries of $\boldsymbol{R}$, appears in \cite{aizenman2013resonant}. We include its quick proof.
\begin{cor}[{\cite[Equation (4.3)]{aizenman2013resonant}}]
\label{rsum}
For any integer $k \ge 0$, we have
\begin{flalign}
\label{r00sum}
\Imaginary R_{00} \ge \displaystyle\sum_{v \in \mathbb{V} (k)} |R_{0v}|^2 \displaystyle\sum_{u \in \mathbb{D} (v)} |T_{vu}|^2 \Imaginary R_{uu}^{(v)}.
\end{flalign}
\end{cor}
\begin{proof}
By \eqref{qvv}, we have
\begin{flalign}
\label{r00imaginary}
\begin{aligned}
\Imaginary R_{00} & = \Bigg( \eta + \displaystyle\sum_{v \sim 0} |T_{0v}|^2 \Imaginary R_{vv}^{(0)} \Bigg) \cdot \Bigg| z + \displaystyle\sum_{v \sim 0} |T_{0v}|^2 R_{vv}^{(0)} \Bigg|^{-2} \ge \Bigg( \displaystyle\sum_{v \sim 0} |T_{0v}|^2 \Imaginary R_{vv}^{(0)} \Bigg) \cdot |R_{00}|^2.
\end{aligned}
\end{flalign}
\noindent Applying \eqref{r00imaginary} $k+1$ times, we find
\begin{flalign*}
\Imaginary R_{00} \ge \displaystyle\sum_{u \in \mathbb{V} (k+1)} \Bigg( |T_{u_- u}|^2 \Imaginary R_{uu}^{(u_-)} \cdot \displaystyle\prod_{0 \preceq w \prec u} |T_{ww_-}|^2 \big| R_{ww}^{(w_-)} \big|^2 \Bigg),
\end{flalign*}
\noindent where we have set $T_{ww_-} = 1$ for $w = 0$. Together with \Cref{rproduct}, this gives
\begin{flalign*}
\Imaginary R_{00} \ge \displaystyle\sum_{u \in \mathbb{V} (k+1)} |R_{0u_-}|^2 \cdot |T_{u_- u}|^2 \cdot \Imaginary R_{uu}^{(u_-)},
\end{flalign*}
\noindent which yields the corollary upon setting $v = u_-$.
\end{proof}
\subsection{Laws of the Resolvent Entries}
\label{ResolventDensity}
In this section we analyze various properties of the laws of the diagonal entries of $\boldsymbol{R}$. Throughout this section, we fix a complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$. We assume that $\eta \in (0, 1)$ and that $\varepsilon \le |E| \le B$ for some fixed real numbers $\varepsilon \in (0, 1)$ and $B \ge 1$. In what follows, various constants may implicitly depend on $\varepsilon$ and $B$, even when not stated. Denote
\begin{flalign}
\label{kappatheta1}
\varkappa_0 = \varkappa_0 (z) = - E - \Real \bigg( \displaystyle\frac{1}{R_{00} (z)} \bigg); \qquad \vartheta_0 = \vartheta_0 (z) = -\eta - \Imaginary \bigg( \displaystyle\frac{1}{R_{00} (z)} \bigg),
\end{flalign}
\noindent so that by \eqref{qvv} we have
\begin{flalign}
\label{kappa0theta0sum}
\varkappa_0 = \displaystyle\sum_{v \sim 0} T_{0v}^2 \Real R_{vv}^{(0)}; \qquad \vartheta_0 = \displaystyle\sum_{v \sim 0} T_{0v}^2 \Imaginary R_{vv}^{(0)}.
\end{flalign}
The following lemma then states that $(\varkappa_0, \vartheta_0)$ are (possibly correlated) $\frac{\alpha}{2}$-stable laws; we recall the notation on such variables from \Cref{Stable}.
\begin{lem}
\label{rrealimaginary}
For any complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$, the following two statements hold.
\begin{enumerate}
\item The random variable $\varkappa_0$ is an $\frac{\alpha}{2}$-stable law with scaling parameter
\begin{flalign*}
\sigma (\varkappa_0) = \mathbb{E} \big[ |\Real R_{\star}|^{\alpha / 2} \big]^{2 / \alpha},
\end{flalign*}
\noindent and skewness parameter
\begin{flalign*}
\beta (\varkappa_0) = \displaystyle\frac{\mathbb{E} \big[ \max \{ \Real R_{\star}, 0 \}^{\alpha / 2} - \max \{ -\Real R_{\star}, 0 \}^{\alpha / 2} \big]}{\mathbb{E} \big[ |\Real R_{\star}|^{\alpha / 2} \big]}.
\end{flalign*}
\item The random variable $\vartheta_0$ is a nonnegative $\frac{\alpha}{2}$-stable law with scaling parameter $\sigma (\vartheta_0) = \mathbb{E} \big[ (\Imaginary R_{\star})^{\alpha / 2} \big]^{2 / \alpha}$.
\end{enumerate}
\end{lem}
\begin{proof}
By \Cref{rdistribution} and \eqref{kappatheta1}, we have that
\begin{flalign*}
\varkappa_0 + \mathrm{i} \vartheta_0 = - z - \displaystyle\frac{1}{R_{\star}} = \displaystyle\sum_{j = 1}^{\infty} \xi_j R_j,
\end{flalign*}
\noindent where $(\xi_j)_{j \ge 1}$ is a Poisson point process on $\mathbb{R}_{> 0}$ with intensity measure $\frac{\alpha}{2} x^{-\alpha / 2 - 1} dx$, and $(R_j)_{j \ge 1}$ are mutually independent random variables with law $R_{\star} (z)$. In particular,
\begin{flalign*}
\varkappa_0 = \Real \Bigg( \displaystyle\sum_{j = 1}^{\infty} \xi_j R_j \Bigg) = \displaystyle\sum_{j = 1}^{\infty} \xi_j \Real R_j; \qquad \vartheta_0 = \Imaginary \Bigg( \displaystyle\sum_{j = 1}^{\infty} \xi_j R_j \Bigg) = \displaystyle\sum_{j = 1}^{\infty} \xi_j \Imaginary R_j,
\end{flalign*}
\noindent and so both statements of the lemma follow from \Cref{sumaxi}.
\end{proof}
The following lemma estimates the densities of $\varkappa_0 (z)$ and $\vartheta_0 (z)$. Its first part bounds the density of the former from above and below on any interval (with constants dependent on $\varepsilon$ and $B$); its second part bounds both densities from above on intervals disjoint from $[-1, 1]$ (with constants independent of $\varepsilon$ and $B$). We establish it in \Cref{ProofSigma} below.
\begin{lem}
\label{ar}
There exist constants $C_1 = C_1 (\varepsilon, B) > 1$; $C_2 > 1$ (independent of $\varepsilon$ and $B$); and $c_1 = c_1 (\varepsilon, B) > 0$ such that the following two statements hold.
\begin{enumerate}
\item For any interval $I \subset \mathbb{R}$, we have
\begin{flalign}
\label{q00delta1}
c_1 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}} \le \mathbb{P} \big[ \varkappa_0 (z) \in I \big] \le C_1 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}}.
\end{flalign}
\item For any interval $I \subset \mathbb{R}$ disjoint from $[-1, 1]$, we have
\begin{flalign}
\label{q00delta2}
\mathbb{P} \big[ \varkappa_0 (z) \in I \big] \le C_2 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}}; \qquad \mathbb{P} \big[ \vartheta_0 (z) \in I \big] \le C_2 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}}.
\end{flalign}
\end{enumerate}
\end{lem}
\noindent The reason we stipulate that $|E| \ge \varepsilon$ is that the first part of \Cref{ar} becomes false if $E = 0$. Indeed, in this case, it is known from \cite[Lemma 4.3(b)]{bordenave2011spectrum} that $R_{\star} (\mathrm{i} \eta)$ is almost surely purely imaginary, which implies that \eqref{q00delta1} does not hold.
The following corollary estimates fractional moments of the diagonal resolvent entries $R_{00}$.
\begin{cor}
\label{expectationqchi}
Adopt the notation and assumptions of \Cref{ar}, and let $\delta, \chi \in (0, 1)$ be real numbers. There exists a constant $C = C(\chi, \varepsilon, B) > 1$ such that
\begin{flalign}
\label{qchi}
\mathbb{E} \big[ |R_{00}|^{\chi} \big] \le C; \qquad \mathbb{E} \big[ \mathbbm{1}_{|R_{00}| > \delta^{-1}} \cdot |R_{00}|^{\chi} \big] < C \delta^{1-\chi}.
\end{flalign}
\end{cor}
\begin{proof}
Observe, by \eqref{kappatheta1} and \Cref{ar}, that there exists a constant $C_1 > 1$ such that for any $\delta > 0$ we have
\begin{flalign*}
\mathbb{E} \big[ \mathbbm{1}_{|R_{00}| > \delta^{-1}} \cdot |R_{00}|^{\chi} \big] \le \mathbb{E} \big[ \mathbbm{1}_{|E + \varkappa_0| < \delta} \cdot |E + \varkappa_0|^{-\chi} \big] \le C_1 \displaystyle\int_{-E - \delta}^{-E + \delta} \displaystyle\frac{dx}{|x + E|^{\chi} \big( |x| + 1 \big)^{\alpha/2 + 1}}.
\end{flalign*}
\noindent Thus, the first and second bounds in \eqref{qchi} follow from the fact that there exists a constant $C_2 = C_2 (\chi) > 1$ such that
\begin{flalign*}
& \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{dx}{|x + E|^{\chi} \big( |x| + 1 \big)^{\alpha/2 + 1}} \le C_2 \qquad \text{and} \qquad \displaystyle\int_{-E - \delta}^{-E + \delta} \displaystyle\frac{dx}{|x + E|^{\chi} \big( |x| + 1\big)^{\alpha/2 + 1}} \le C_2 \delta^{1-\chi},
\end{flalign*}
\noindent respectively.
\end{proof}
\begin{proof}[Proof of \Cref{l:compactness}]
We may suppose that $E\neq 0$, since the $E=0$ case follows from \cite[Lemma 4.3(b)]{bordenave2011spectrum}.
By \eqref{qchi} and Markov's inequality, the family of laws of $\big\{ R_\star(E + \mathrm{i} \eta) \big\}_{\eta \in (0,1)}$ is tight. The conclusion then follows from Prokhorov's theorem.
\end{proof}
\begin{proof}[Proof of \Cref{l:upgrade}]
The assumption implies that $\liminf_{\eta \rightarrow 0} \sigma\big(\vartheta_0(E + \mathrm{i} \eta)\big) > 0$, which (together with \Cref{rrealimaginary}) implies the conclusion.
\end{proof}
\subsection{Proof of \Cref{ar}}
\label{ProofSigma}
In this section we establish \Cref{ar}. As in the statement of that lemma, here we assume that $z = E + \mathrm{i} \eta \in \mathbb{H}$ with $\varepsilon \le |E| \le B$ and $\eta \in (0, 1)$; various constants might or might not depend on $\varepsilon$ and $B$, and we will specify which do. Throughout, we will make use of the equalities (from \eqref{qvv})
\begin{flalign}
\label{q00equation}
\displaystyle\frac{\Imaginary R_{00}}{|R_{00}|^2} = \eta + \displaystyle\sum_{v \sim 0} T_{0v}^2 \Imaginary R_{vv}^{(0)} = \eta + \vartheta_0; \qquad \displaystyle\frac{\Real R_{00}}{|R_{00}|^2} = - E - \displaystyle\sum_{v \sim 0} T_{0v}^2 \Real R_{vv}^{(0)} = -E - \varkappa_0.
\end{flalign}
We begin with the following lemma, which bounds from above the quantities $\sigma (\varkappa_0)$ and $\sigma (\vartheta_0)$ from \Cref{rrealimaginary}, and shows that $|R_{00}|$ is unlikely to be very small.
\begin{lem}
\label{q0estimate1}
There exists a constant $C > 1$ such that the following two statements hold, for any real number $t \ge 1$ and complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$ with $\eta \in (0, 1)$.
\begin{enumerate}
\item Adopting the notation of \Cref{ar}, we have $\sigma (\vartheta_0) < C$ and $\sigma (\varkappa_0) < C$.
\item We have
\begin{flalign}
\label{r00estimate}
\mathbb{P} \bigg[ \big| R_{00} (z) \big| \ge \displaystyle\frac{1}{|E| + t} \bigg] \ge 1 - Ct^{-\alpha / 2}.
\end{flalign}
\end{enumerate}
\end{lem}
\begin{proof}
We begin with the proof of the first statement. To that end, observe from \eqref{q00equation} that
\begin{flalign*}
\vartheta_0 = \displaystyle\sum_{v \sim 0} T_{0v}^2 \Imaginary R_{vv}^{(0)} \le \displaystyle\frac{\Imaginary R_{00}}{|R_{00}|^2} \le \displaystyle\frac{1}{\Imaginary R_{00}},
\end{flalign*}
\noindent and so
\begin{flalign}
\label{theta0estimate}
\sigma (\vartheta_0) = \mathbb{E} \big[ (\Imaginary R_{00})^{\alpha / 2} \big]^{2 / \alpha} \le \mathbb{E} \big[ (\vartheta_0)^{-\alpha / 2} \big]^{2 / \alpha},
\end{flalign}
where the equality is justified by the second part of \Cref{rrealimaginary}. Recalling from \Cref{rrealimaginary} that $\vartheta_0$ is a nonnegative $\frac{\alpha}{2}$-stable law with scaling parameter $\sigma (\vartheta_0)$, the bound \eqref{theta0estimate} implies the existence of a constant $C_1 > 1$ such that $\sigma (\vartheta_0) \le C_1$ (indeed, recall that a nonnegative $\frac{\alpha}{2}$-stable law has all of its negative moments finite). The proof of the upper bound on $\sigma (\varkappa_0)$ is entirely analogous (using the second equality in \eqref{q00equation}) and is therefore omitted.
To establish the second statement of the lemma, we may assume that $t > 2$, for otherwise $1 - C t^{-\alpha / 2} < 0$ for any $C > 2$. Then,
\begin{flalign*}
\mathbb{P} \bigg[ \big| R_{00} (z) \big| \ge \displaystyle\frac{1}{|E| + t} \bigg] = \mathbb{P} \big[ |z + \varkappa_0 + \mathrm{i} \vartheta_0| \le |E| + t \big] \ge \mathbb{P} \bigg[ |\varkappa_0| + |\vartheta_0| \le \displaystyle\frac{t}{2} \bigg],
\end{flalign*}
\noindent where the equality follows from \eqref{kappatheta1}, and the inequality from the facts that $t \ge 2$ and $\eta \le 1$. This, together with the first statement of the lemma, implies \eqref{r00estimate}.
\end{proof}
We next require the following lemma bounding from below $\sigma (\varkappa_0)$, and estimating $\beta (\varkappa_0)$; we recall these quantities were defined in \Cref{rrealimaginary}.
\begin{lem}
\label{sigmabetaestimate}
There exists a constant $c = c (\varepsilon, B) > 0$ such that
\begin{flalign}
\label{sigmathetakappa}
\sigma (\varkappa_0) \ge c; \qquad c - 1 \le \beta (\varkappa_0) \le 1 - c.
\end{flalign}
\end{lem}
\begin{proof}
We first establish the lower bound on $\sigma (\varkappa_0)$. To that end, observe that the upper bound on $\sigma (\varkappa_0)$ from \Cref{q0estimate1} yields a constant $c_1 > 0$ such that
\begin{flalign}
\label{kappa0estimate}
\mathbb{P} \bigg[ |\varkappa_0| \le \displaystyle\frac{\varepsilon}{2} \bigg] \ge c_1 \varepsilon.
\end{flalign}
\noindent Moreover, \eqref{r00estimate} yields a constant $c_2 = c_2 (\varepsilon, B) > 0$ such that for any $\varepsilon \in (0, 1)$ we have
\begin{flalign}
\label{q00estimate}
\mathbb{P} \bigg[ |R_{00}| \ge \displaystyle\frac{c_2 \varepsilon^{2 / \alpha}}{|E| + 1} \bigg] \ge 1 - \displaystyle\frac{c_1 \varepsilon}{2}.
\end{flalign}
\noindent Combining the second equality in \eqref{q00equation}, \eqref{kappa0estimate}, \eqref{q00estimate}, and the fact that $\varepsilon \le |E| \le B$ yields
\begin{flalign}
\mathbb{P} \Bigg[ |\Real R_{00}| \ge \displaystyle\frac{\varepsilon}{2} \cdot \bigg( \displaystyle\frac{c_2 \varepsilon^{2 / \alpha}}{B + 1} \bigg)^2 \Bigg] \ge \displaystyle\frac{c_1 \varepsilon}{2}.
\end{flalign}
\noindent It follows that there exists a constant $c_3 = c_3 (\varepsilon, B) > 0$ such that $\mathbb{E} \big[ |\Real R_{00}|^{\alpha / 2} \big] > c_3$, which verifies the lower bound on $\sigma (\varkappa_0) = \mathbb{E} \big[ |\Real R_{00}|^{\alpha / 2} \big]^{2 / \alpha}$ from \eqref{sigmathetakappa}.
It remains to bound $\beta (\varkappa_0)$; let us assume in what follows that $E > 0$, for the proof when $E < 0$ is entirely analogous. To that end, observe that the estimates $\sigma (\vartheta_0) \le C$ and $c \le \sigma (\varkappa_0) \le C$ from \Cref{q0estimate1} and \eqref{sigmathetakappa} together imply the existence of a constant $c_4 = c_4 (\varepsilon, B) > 0$ such that $\mathbb{P} [\Real R_{00} < -c_4] > c_4$. Since $\sigma (\varkappa_0) < C$ and
\begin{flalign}
\label{betakappa01}
\beta (\varkappa_0) = 1 - \displaystyle\frac{2}{\sigma (\varkappa_0)^{\alpha / 2}} \cdot \mathbb{E} \big[ \max \{ -\Real R_{00}, 0 \}^{\alpha / 2} \big],
\end{flalign}
\noindent this yields a constant $c_5 = c_5 (\varepsilon, B) > 0$ such that
\begin{flalign}
\label{beta1}
\beta (\varkappa_0) \le 1 - c_5,
\end{flalign}
\noindent verifying the upper bound on $\beta (\varkappa_0)$ from \eqref{sigmathetakappa}.
Next, by \eqref{beta1}, there exists a constant $c_6 = c_6 (\varepsilon, B) > 0$ such that $\mathbb{P} [\varkappa_0 < -2B] \ge c_6$. This, with the second equality of \eqref{q00equation}, \Cref{q0estimate1}, and the fact that $|E| \le B$, gives a constant $c_7 = c_7 (\varepsilon, B) > 0$ such that $\mathbb{P} [\Real R_{00} > c_7] > c_7$. Again using \eqref{betakappa01} and the bound $\sigma (\varkappa_0) \le C$, this yields a constant $c_8 = c_8 (\varepsilon, B) > 0$ so that $\beta (\varkappa_0) \ge c_8 - 1$, confirming the last bound in \eqref{sigmathetakappa}.
\end{proof}
Now we can establish \Cref{ar}.
\begin{proof}[Proof of \Cref{ar}]
By \Cref{rrealimaginary}, $\varkappa_0 (z)$ is an $\frac{\alpha}{2}$-stable law with scaling parameter $\sigma (\varkappa_0)$ and skewness parameter $\beta (\varkappa_0)$, and $\vartheta_0 (z)$ is a nonnegative $\frac{\alpha}{2}$-stable law with scaling parameter $\sigma (\vartheta_0)$. By \Cref{q0estimate1} and \Cref{sigmabetaestimate}, there exist constants $C > 1$ and $c = c (\varepsilon, B) > 0$ such that $\sigma (\vartheta_0) \le C$, $c \le \sigma (\varkappa_0) \le C$, and $c - 1 \le \beta (\varkappa_0) \le 1 - c$. The upper bounds $\sigma (\varkappa_0) \le C$ and $\sigma (\vartheta_0) \le C$ imply the second statement \eqref{q00delta2} of the lemma. The upper bound in the first statement \eqref{q00delta1} follows from the bounds $c \le \sigma (\varkappa_0) \le C$; the lower bound there follows from this together with the estimates $c - 1 \le \beta (\varkappa_0) \le 1 - c$.
\end{proof}
\subsection{Real Boundary Values of the Resolvent}\label{s:rebdd}
We begin with the following definition.
\begin{definition}
\label{locre}
Fix $\alpha \in (0,1)$ and $E \in \mathbb{R}$, and let $R_\mathrm{loc}(E)$ be the real random variable given by
\begin{equation*}
R_\mathrm{loc}(E) = - \frac{1}{E + \varkappa_\mathrm{loc}(E)},
\end{equation*}
where we recall that $\varkappa_\mathrm{loc}$ was defined in \Cref{pkappae}.
\end{definition}
The following lemma asserts that any real boundary value of $R_\star(z)$ is equal to $R_\mathrm{loc}(E)$. We recall the quantities $a(E)$ and $b(E)$ from \eqref{opaque}.
\begin{lem}\label{l:boundarybasics}
Fix $E \in \mathbb{R}$, and let $R(E)$ be any limit point (under the weak topology) of the sequence $\big\{ R_\star(E + \mathrm{i} \eta) \big\}_{\eta > 0}$ as $\eta$ tends to $0$. If $\Im R(E) = 0$, then
\begin{equation}\label{abrepresentation}
a(E) = \mathbb{E}\left[ \big( R(E) \big)^{\alpha/2}_+ \right], \qquad b(E) = \mathbb{E}\left[ \big( R(E) \big)^{\alpha/2}_- \right],
\end{equation}
and, abbreviating $a = a(E)$ and $b = b(E)$,
\begin{equation}\label{fixedpoint}
a = \mathbb{E} \left[ \left( E + a^{2/\alpha} S_1 - b^{2/\alpha} S_2 \right)_-^{-\alpha/2} \right],\qquad
b = \mathbb{E} \left[ \left( E + a^{2/\alpha} S_1 - b^{2/\alpha} S_2 \right)_+^{-\alpha/2} \right],
\end{equation}
where $S_1$ and $S_2$ are independent, nonnegative $\alpha/2$-stable laws and we have used the shorthand notation $(x)_{\pm}^{-\alpha /2} := ((x^{-1} )_{\pm} )^{\alpha /2}$. Further, we have $R(E) = R_\mathrm{loc}(E)$ in distribution.
\end{lem}
\begin{proof}
We prove \eqref{abrepresentation} first.
We may suppose that $E \neq 0$, since otherwise $\Im R(E)$ is almost surely positive and the claim is vacuously true; see \cite[Lemma 4.3(b)]{bordenave2011spectrum}.
Let $\{\eta_k\}_{k=1}^\infty$ be a decreasing sequence such that \begin{equation*}
\lim_{k\rightarrow \infty} R_\star(E + \mathrm{i} \eta_k) = R(E).
\end{equation*} Set $z_k = E + \mathrm{i} \eta_k$.
To show \eqref{abrepresentation}, note that
\begin{equation}\label{y1}
y(E) = \lim_{k \rightarrow \infty} \mathbb{E} \left[ \big(- \mathrm{i} R_\star(z_k) \big)^{\alpha/2} \right],
\end{equation}
since $y(z)$ extends continuously to $\overline{\mathbb{H}}$.
We have
\begin{equation*}
R_\star(z_k) = - \frac{1}{z_k + \varkappa_0(z_k) + \mathrm{i} \vartheta_0(z_k)},
\end{equation*}
and using \Cref{ar}, we find that the sequence $\big( \Re R_\star(z_k) \big)_+^{\alpha/2}$ is uniformly integrable.
Then
\begin{equation*}
\lim_{k \rightarrow \infty} \mathbb{E}\left[ \big( \Re R_\star(z_k) \big)_+^{\alpha/2} \right]^{2/\alpha}
= \mathbb{E}\left[ \big( \Re R(E) \big)_+^{\alpha/2} \right]^{2/\alpha},
\end{equation*}
and similarly for the limits of the $\alpha/2$-moments of $\big( \Re R_\star(z_k) \big)_-$, $\big( \Im R_\star(z_k) \big)_+$, and $\big( \Im R_\star(z_k) \big)_-$. By assumption, the latter two limits vanish. Using \eqref{y1}, this gives
\begin{equation*}
y(E) =
(-\mathrm{i})^{\alpha/2} \mathbb{E}\left[ \big( \Re R(E) \big)_+^{\alpha/2} \right] + (\mathrm{i})^{\alpha/2} \mathbb{E}\left[ \big( \Re R(E) \big)_-^{\alpha/2} \right].
\end{equation*}
Because the complex numbers $\mathrm{i}^{\alpha/2}$ and $(-\mathrm{i})^{\alpha/2}$ are linearly independent over $\mathbb{R}$, the previous equation implies that \eqref{abrepresentation} holds by the definitions of $a(E)$ and $b(E)$ in \eqref{opaque}.
We recall from \Cref{rdistribution}
that, in distribution,
\begin{equation}\label{rde2}
R_\star(z_k) =
- \left( z_k + \sum_{j=1}^\infty \xi_j R_j(z_k) \right)^{-1}
\end{equation}
for $k \ge 1$.
By the Poisson thinning property (\Cref{sumaxi}),
\begin{align}\label{pthinning}
\sum_{j=1}^\infty \xi_j R_j(z_k)
&=
\mathbb{E}\left[ \big( \Re R_\star(z_k) \big)_+^{\alpha/2} \right]^{2/\alpha} S_1
- \mathbb{E}\left[ \big( \Re R_\star(z_k) \big)_-^{\alpha/2} \right]^{2/\alpha} S_2\\
&+ \mathrm{i} \cdot \mathbb{E}\left[ \big( \Im R_\star(z_k) \big)_+^{\alpha/2} \right]^{2/\alpha} S_3
- \mathrm{i} \cdot \mathbb{E}\left[ \big( \Im R_\star(z_k) \big)_-^{\alpha/2} \right]^{2/\alpha} S_4\notag
\end{align}
for $k \ge 1$, where $\{S_i\}_{i=1}^4$ are nonnegative $\alpha/2$-stable laws such that $S_1$ and $S_2$ are mutually independent, as are $S_3$ and $S_4$.
Since $\Im R(E) = 0$, we have
\begin{align}
\lim_{k\rightarrow \infty} \sum_{j=1}^\infty \xi_j R_j(z_k)
&=
\mathbb{E}\left[ \big( \Re R(E) \big)_+^{\alpha/2} \right]^{2/\alpha} S_1
- \mathbb{E}\left[ \big( \Re R(E) \big)_-^{\alpha/2} \right]^{2/\alpha} S_2.
\end{align}
Then taking the limit as $k$ tends to infinity in \eqref{rde2}, we obtain
\begin{align}\label{fixedpenultimate}
R(E) &=
-\left(
E
+
\mathbb{E}\left[ \big( \Re R(E) \big)_+^{\alpha/2} \right]^{2/\alpha} S_1
- \mathbb{E}\left[ \big( \Re R(E) \big)_-^{\alpha/2} \right]^{2/\alpha} S_2
\right)^{-1}\\
&= -\big(
E
+
a(E)^{2/\alpha} S_1
- b(E)^{2/\alpha} S_2
\big)^{-1}.\notag
\end{align}
We then obtain \eqref{fixedpoint} from \eqref{fixedpenultimate} after separating into positive and negative parts and taking $\alpha/2$-moments. Since \eqref{fixedpenultimate} also holds for $R_\mathrm{loc}$, it further shows that $R(E) = R_\mathrm{loc}(E)$ in distribution. This completes the proof.
\end{proof}
\newpage
\chapter{Fractional Moment Criterion}
\label{MomentFractional}
\section{Integral and Poisson Point Process Estimates}
\label{G0vMoment}
In this section we provide various estimates for heavy-tailed integrals and functionals of Poisson point processes that will be used in the proof of \Cref{limitr0j} in \Cref{EstimateMomentss} below. To briefly indicate how such quantities arise, let us explain how one might estimate $\Phi_L (s; z)$ at $L = 1$, namely, $\Phi_1 (s; z) = \mathbb{E} \big[ \sum_{v \sim 0} |R_{0v}|^s \big]$ (recall \Cref{moment1}). To that end, we first apply \Cref{rproduct} to express the off-diagonal resolvent entry $R_{0v}$ as a product of diagonal ones. For $v \sim 0$, this yields
\begin{flalign*}
|R_{0v}|^s = |R_{00}|^s \cdot |T_{0v}|^s \cdot \big|R_{vv}^{(0)} \big|^s = \displaystyle\frac{|T_{0v}|^s}{\big|z + T_{0v}^2 R_{vv}^{(0)} + K_v \big|^s} \cdot \big| R_{vv}^{(0)} \big|^s, \quad \text{where $K_v = \displaystyle\sum_{\substack{w \sim 0 \\ w \ne v}} T_{0w}^2 R_{ww}^{(0)}$},
\end{flalign*}
\noindent where in the last equality we applied the Schur complement identity \eqref{qvv}. Observe that the $K_v$ all have the same law $K$, which by \Cref{tuvalpha} and \Cref{sumaxi} is $\frac{\alpha}{2}$-stable. Hence,
\begin{flalign*}
\Phi_1 (s; z) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \sim 0} \displaystyle\frac{|T_{0v}|^s}{\big| z + T_{0v}^2 R_{vv}^{(0)} + K_v \big|^s} \cdot \big| R_{vv}^{(0)} \big|^s\Bigg].
\end{flalign*}
\noindent So, to bound $\Phi_1 (s; z)$ (and more generally $\Phi_L (s; z)$) we estimate more general sums of the form
\begin{flalign}
\label{expectationtjrjkj}
\mathbb{E} \Bigg[ \displaystyle\sum_{j=1}^{\infty} \displaystyle\frac{|T_j|^s}{|z + T_j^2 R_j + K_j|^{\chi}} \cdot |R_j|^s \Bigg],
\end{flalign}
\noindent where the $(T_j)_{j \ge 1}$ form a Poisson point process with intensity measure $\alpha x^{-\alpha-1} dx$, and the $R_j$ and $K_j$ are random variables. By the Campbell theorem (\Cref{fidentityxi}), the above expectation is given by an explicit heavy-tailed integral. We begin by bounding this integral in \Cref{EstimateIntegral}. We then use these integral estimates to bound quantities of the type \eqref{expectationtjrjkj} in \Cref{EstimateProcess}, conditional on an estimate for inverse moments of the $K_j$, which we prove in \Cref{ProofV}.
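To indicate the source of the factors $(s - \alpha)^{-1}$ appearing throughout this section, observe that applying \Cref{fidentityxi} to the function $t \mapsto t^s \cdot \mathbbm{1}_{t \le 1}$ gives
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{j=1}^{\infty} |T_j|^s \cdot \mathbbm{1}_{|T_j| \le 1} \Bigg] = \alpha \displaystyle\int_0^1 t^{s - \alpha - 1}\, dt = \displaystyle\frac{\alpha}{s - \alpha},
\end{flalign*}
\noindent which diverges as $s$ decreases to $\alpha$; the denominators in \eqref{expectationtjrjkj} temper, but do not remove, this divergence.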
\subsection{Integral Estimates}
\label{EstimateIntegral}
In this section we estimate certain integrals that arise from applying the Campbell theorem, \Cref{fidentityxi}, to evaluate expectations of the form \eqref{expectationtjrjkj}. Throughout this section, we fix complex numbers $z, R \in \mathbb{C}$ and real numbers $s \in (\alpha, 1)$ and $\chi \in \big( \frac{s-\alpha}{2}, 1 \big)$. We assume that there exists a constant $\varpi > 0$ such that
\begin{flalign}
\label{chisalphaomega}
\displaystyle\frac{s-\alpha}{2} + \varpi \le \chi \le 1 - \varpi.
\end{flalign}
\noindent In what follows, various constants may depend on $\varpi$ (and $\alpha$), even when not written explicitly, but not on $z$, $R$, $s$, or $\chi$. Denoting
\begin{flalign}
\label{zgamma}
\begin{aligned}
\gamma = |R|^{-1/2} & \cdot \big( |z| + 1 \big)^{1/2}; \qquad \mathfrak{Y} = \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot |R|^{(\alpha-s)/2}; \\
& \mathfrak{z} (t) = \displaystyle\frac{t^{s-\alpha-1}}{\big( |z + t^2 R| + 1 \big)^{\chi}}; \qquad \mathfrak{Z} = \displaystyle\int_0^{\infty} \mathfrak{z} (t)\, dt,
\end{aligned}
\end{flalign}
\noindent we have the following estimate for $\mathfrak{Z}$.
\begin{lem}
\label{integral3}
Under \eqref{chisalphaomega} and \eqref{zgamma}, there exists a constant $C > 1$ such that
\begin{flalign*}
C^{-1} (s-\alpha)^{-1} \cdot \mathfrak{Y} \le \mathfrak{Z} \le C (s-\alpha)^{-1} \cdot \mathfrak{Y}.
\end{flalign*}
\end{lem}
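Before turning to the proof, let us record a consistency check. At $z = 0$ and $R = 1$ we have $\mathfrak{Y} = 1$, and the substitution $u = t^2$ evaluates $\mathfrak{Z}$ exactly as a Beta integral:
\begin{flalign*}
\mathfrak{Z} = \displaystyle\int_0^{\infty} \displaystyle\frac{t^{s-\alpha-1}\, dt}{(t^2 + 1)^{\chi}} = \displaystyle\frac{1}{2} \cdot \mathrm{B} \bigg( \displaystyle\frac{s-\alpha}{2}, \chi - \displaystyle\frac{s-\alpha}{2} \bigg) = \displaystyle\frac{\Gamma \big( \frac{s-\alpha}{2} \big) \cdot \Gamma \big( \chi - \frac{s-\alpha}{2} \big)}{2 \cdot \Gamma (\chi)},
\end{flalign*}
\noindent which is of order $(s - \alpha)^{-1}$ as $s$ decreases to $\alpha$ (since $\Gamma (a) \sim a^{-1}$ as $a$ tends to $0$), in agreement with \Cref{integral3}.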
To show the above lemma, for any $\delta > 0$, we define the quantities
\begin{flalign*}
\mathfrak{Z}_- (\delta) = \displaystyle\int_0^{\delta \gamma} \mathfrak{z} (t)\, dt; \qquad \mathfrak{Z}_+ (\delta) = \displaystyle\int_{\gamma / \delta}^{\infty} \mathfrak{z} (t)\, dt.
\end{flalign*}
\noindent Then, \Cref{integral3} will quickly follow from the following two lemmas bounding $\mathfrak{Z}_- (\delta)$ and $\mathfrak{Z}_+ (\delta)$.
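Both lemmas reflect the fact that $\gamma$ is the natural scale for the integral $\mathfrak{Z}$: substituting $t = \gamma v$ yields
\begin{flalign*}
\mathfrak{Z} = \gamma^{s-\alpha} \displaystyle\int_0^{\infty} \displaystyle\frac{v^{s-\alpha-1}\, dv}{\Big( \big| z + (|z| + 1) v^2 R / |R| \big| + 1 \Big)^{\chi}},
\end{flalign*}
\noindent whose denominator is of order at most $\big( |z| + 1 \big)^{\chi}$ for $v \le 1$ and typically of order $\big( (|z| + 1) v^2 \big)^{\chi}$ for $v \ge 1$; in either regime the prefactor $\gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} = \mathfrak{Y}$ emerges.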
\begin{lem}
\label{integral}
Under \eqref{chisalphaomega} and \eqref{zgamma}, there exists a constant $C > 1$ such that
\begin{flalign}
\label{yz}
\mathfrak{Z}_- (\delta) \le C (s-\alpha)^{-1} \cdot \delta^{s-\alpha} \cdot \mathfrak{Y}
\end{flalign}
for all $\delta >0$.
Further, if $\delta \in (0,1]$, we have
\begin{flalign}
\label{yzlower}
C^{-1} (s-\alpha)^{-1} \cdot \delta^{s-\alpha} \cdot \mathfrak{Y} \le \mathfrak{Z}_- (\delta).
\end{flalign}
\end{lem}
\begin{lem}
\label{integral2}
Under \eqref{chisalphaomega} and \eqref{zgamma}, there exists a constant $C > 1$ such that, for all $\delta \in (0,1]$, we have
\begin{flalign}
\label{yz2}
C^{-1} \delta^{2\chi - s + \alpha} \cdot \mathfrak{Y} \le \mathfrak{Z}_+ (\delta) \le C \delta^{2\chi - s + \alpha} \cdot \mathfrak{Y}.
\end{flalign}
\end{lem}
\begin{proof}[Proof of \Cref{integral3}]
This follows from writing $\mathfrak{Z} = \mathfrak{Z}_- (1) + \mathfrak{Z}_+ (1)$ and summing the bounds of \Cref{integral} and \Cref{integral2} at $\delta = 1$, using also that $(s - \alpha)^{-1} \ge 1$.
\end{proof}
Now we show \Cref{integral} and \Cref{integral2}.
\begin{proof}[Proof of \Cref{integral}]
Begin by supposing that $\delta \le 1$. We first establish the lower bound in \eqref{yzlower}. Changing variables $v = \gamma^{-1} t$, observe that
\begin{flalign*}
\mathfrak{Z}_- (\delta) \ge \displaystyle\int_0^{\delta \gamma} \displaystyle\frac{t^{s-\alpha-1} dt}{\big( |z| + t^2 |R| + 1 \big)^{\chi}} & = \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \displaystyle\int_0^{\delta} \displaystyle\frac{v^{s-\alpha-1} dv}{(v^2+1)^{\chi}} \\
& \ge \displaystyle\frac{1}{2} \cdot |R|^{(\alpha-s)/2} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \displaystyle\int_0^{\delta} v^{s-\alpha-1} dv = \displaystyle\frac{\delta^{s-\alpha}}{2 (s-\alpha)} \cdot \mathfrak{Y},
\end{flalign*}
\noindent where in the third statement we used the definition of $\gamma$ and the fact that $v^2 + 1 \le 2$ for $v \le \delta \le 1$. This confirms the lower bound \eqref{yzlower}.
To establish the upper bound, observe by again changing variables $v = \gamma^{-1} t$ that
\begin{flalign*}
\mathfrak{Z}_- (\delta) & = \gamma^{s-\alpha} \displaystyle\int_{\delta/2}^{\delta} \displaystyle\frac{v^{s-\alpha-1} dv}{\Big( \big| z + (|z| + 1) v^2 \big| + 1 \Big)^{\chi}} + \gamma^{s-\alpha} \displaystyle\int_0^{\delta/2} \displaystyle\frac{v^{s-\alpha-1} dv}{\Big( \big| z + (|z|+1) v^2 \big| + 1 \Big)^{\chi}} \\
& \le 2 \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \cdot \delta^{s-\alpha-1} \displaystyle\int_{\delta/2}^{\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z|+1}+ v^2 \bigg| + \displaystyle\frac{1}{|z|+1} \Bigg)^{-\chi} dv \\ &
\qquad + 4 \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \displaystyle\int_0^{\delta/2} v^{s-\alpha-1} dv,
\end{flalign*}
\noindent where the last statement follows from the fact that $\big| z + (|z|+1)v^2 \big| + 1 \ge (|z|+1)/4$ for $v \le 1/2$. Thus,
\begin{flalign}
\label{zdelta1}
\begin{aligned}
\mathfrak{Z}_- (\delta) & \le 2 |R|^{(\alpha-s)/2} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2-\chi} \cdot \delta^{s-\alpha-1} \displaystyle\int_{\delta/2}^{\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z|+1}+ v^2 \bigg| + \displaystyle\frac{1}{|z|+1} \Bigg)^{-\chi} dv \\
& \qquad + \displaystyle\frac{4\delta^{s-\alpha}}{s-\alpha} \cdot |R|^{(\alpha-s)/2} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2-\chi} \\
& \le \displaystyle\frac{4 \delta^{s-\alpha}}{s-\alpha} \cdot \mathfrak{Y} \cdot \Bigg( 1 + \delta^{-1} \displaystyle\int_{\delta/2}^{\delta} \bigg( \Big| \displaystyle\frac{z}{|z|+1} + v^2 \Big| + \displaystyle\frac{1}{|z|+1} \bigg)^{-\chi} dv \Bigg).
\end{aligned}
\end{flalign}
\noindent If $|z| \le 2$, then
\begin{flalign*}
\delta^{-1} \displaystyle\int_{\delta/2}^{\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z|+1}+ v^2 \bigg| + \displaystyle\frac{1}{|z|+1} \Bigg)^{-\chi} dv \le \big( |z| + 1 \big)^{\chi} \le 3,
\end{flalign*}
\noindent and inserting this into \eqref{zdelta1} yields the upper bound in \eqref{yz}. Thus, assume instead that $|z| \ge 2$. Denoting $\kappa = z (|z|+1)^{-1}$, we have $|\kappa| \ge \frac{2}{3}$, in which case
\begin{flalign*}
\delta^{-1} \displaystyle\int_{\delta/2}^{\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z|+1}+ v^2 \bigg| + \displaystyle\frac{1}{|z|+1} \Bigg)^{-\chi} dv \le \delta^{-1} \displaystyle\int_{\delta/2}^{\delta} \displaystyle\frac{dv}{|v^2 + \kappa|^{\chi}} \le \delta^{-1} \displaystyle\int_{\delta/2}^{\delta} \big| v^2 - |\kappa| \big|^{-\chi} dv \le C_1,
\end{flalign*}
\noindent for some constant $C_1 > 1$, where the last bound holds uniformly over $|\kappa| \in \big[ \frac{2}{3}, 1 \big]$ and $\delta \in (0, 1]$ (recall from \eqref{chisalphaomega} that $\chi \le 1 - \varpi$). Inserting this into \eqref{zdelta1} yields the upper bound in \eqref{yz}.
We now suppose that $\delta > 1$. In light of the previous calculations, it suffices to bound the integral
\begin{align*}
\gamma^{s-\alpha} \displaystyle\int_{1}^{\delta} \displaystyle\frac{v^{s-\alpha-1} dv}{\Big( \big| z + (|z| + 1) v^2 \big| + 1 \Big)^{\chi}}&
\le \gamma^{s-\alpha} \big( |z| + 1 \big)^{-\chi} \displaystyle\int_1^{\delta} v^{s-\alpha-1} (v^2 - 1)^{-\chi} \, dv\\
&\le \gamma^{s-\alpha} \big( |z| + 1 \big)^{-\chi} \bigg( \displaystyle\int_1^2 (v-1)^{-\chi} \, dv + 2 \displaystyle\int_2^{\infty} v^{s-\alpha-2\chi-1} \, dv \bigg)\\
& \le C \cdot \mathfrak{Y} \le C (s-\alpha)^{-1} \cdot \delta^{s-\alpha} \cdot \mathfrak{Y},
\end{align*}
where in the first bound we used the fact that $\big| z + (|z| + 1) v^2 \big| + 1 \ge \big( |z| + 1 \big) (v^2 - 1)$ for $v \ge 1$; in the second we used the facts that $v^{s-\alpha-1} \le 1$ for $v \ge 1$ and that $(v^2 - 1)^{-\chi} \le 2 v^{-2\chi}$ for $v \ge 2$; and in the third we used \eqref{chisalphaomega} (under which both integrals in the parentheses are at most $\varpi^{-1}$) and the identity $\gamma^{s-\alpha} \big( |z| + 1 \big)^{-\chi} = \mathfrak{Y}$. Since $(s-\alpha)^{-1} \delta^{s-\alpha} \ge 1$ for $\delta > 1$, this completes the proof.
\end{proof}
\begin{proof}[Proof of \Cref{integral2}]
We first establish the lower bound in \eqref{yz2}. As in the proof of \Cref{integral}, we change variables $v = \gamma^{-1} t$ to deduce
\begin{flalign*}
\mathfrak{Z}_+ (\delta) \ge \displaystyle\int_{\gamma / \delta}^{\infty} \displaystyle\frac{t^{s-\alpha-1} dt}{\big( |z| + t^2 |R| + 1 \big)^{\chi}} & = \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \displaystyle\int_{1/\delta}^{\infty} \displaystyle\frac{v^{s-\alpha-1} dv}{(v^2 + 1)^{\chi}} \\
& \ge \displaystyle\frac{1}{2} \cdot |R|^{(\alpha-s)/2} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \displaystyle\int_{1/\delta}^{\infty} v^{s-\alpha-2\chi-1} dv \\
& = \displaystyle\frac{\delta^{2\chi - s + \alpha}}{2 (2\chi - s + \alpha)} \cdot \mathfrak{Y},
\end{flalign*}
\noindent which verifies the lower bound in \eqref{yz2} (upon recalling from \eqref{chisalphaomega} that $\chi \ge \frac{s-\alpha}{2} + \varpi$).
To establish the upper bound, observe (again setting $v = \gamma^{-1} t$) that
\begin{flalign*}
\mathfrak{Z}_+ (\delta) & = \gamma^{s-\alpha} \displaystyle\int_{1/\delta}^{2/\delta} \displaystyle\frac{v^{s-\alpha-1} dv}{\Big( \big| z + (|z| + 1) v^2 \big| + 1 \Big)^{\chi}} + \gamma^{s-\alpha} \displaystyle\int_{2/\delta}^{\infty} \displaystyle\frac{v^{s-\alpha-1} dv}{\Big( \big| z + (|z| + 1) v^2 \big| + 1 \Big)^{\chi}} \\
& \le \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \cdot \delta^{\alpha-s+1} \displaystyle\int_{1/\delta}^{2/\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z| + 1} + v^2 \bigg| + \displaystyle\frac{1}{|z|+1} \Bigg)^{-\chi} dv \\
& \qquad + 2 \gamma^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi}\displaystyle\int_{2/\delta}^{\infty} v^{s-\alpha-2\chi-1} dv,
\end{flalign*}
\noindent where in the second statement we used the fact that $\big| z + (|z| + 1) v^2 \big| \ge (|z|+1) v^2/2$ for $v \ge 2$. Thus, again recalling from \eqref{chisalphaomega} that $\chi \ge \frac{s-\alpha}{2} + \varpi$, there exists a constant $C_1 > 1$ such that
\begin{flalign}
\label{zdelta2}
\begin{aligned}
\mathfrak{Z}_+ (\delta) \le C_1 \delta^{2\chi - s + \alpha} \cdot \mathfrak{Y} \cdot \Bigg( 1 + \delta^{1-2\chi} \displaystyle\int_{1/\delta}^{2/\delta} \bigg( \Big| \displaystyle\frac{z}{|z| + 1} + v^2 \Big| + \displaystyle\frac{1}{|z| + 1} \bigg)^{-\chi} dv \Bigg).
\end{aligned}
\end{flalign}
\noindent Now, observe that
\begin{flalign*}
\delta^{1-2\chi} \displaystyle\int_{1/\delta}^{2/\delta} \Bigg( \bigg| \displaystyle\frac{z}{|z| + 1} + v^2 \bigg| + \displaystyle\frac{1}{|z| + 1} \Bigg)^{-\chi} dv \le \delta^{1-2\chi} \displaystyle\int_{1/\delta}^{2/\delta} (v^2 - 1)^{-\chi} dv \le \displaystyle\frac{1}{1-\chi},
\end{flalign*}
\noindent and inserting this into \eqref{zdelta2} yields the upper bound in \eqref{yz2}.
\end{proof}
\subsection{Poisson Point Process Estimates}
\label{EstimateProcess}
In this section we establish estimates for quantities of the form \eqref{expectationtjrjkj}. We begin by imposing the following restrictions on the random variables $R_j$ and $K_j$ there. Throughout, $z = E + \mathrm{i} \eta \in \mathbb{H}$ is a complex number, and $s \in (\alpha, 1)$ and $\chi \in \big( \frac{s-\alpha}{2}, 1 \big)$ are real numbers satisfying \eqref{chisalphaomega} for some $\varpi > 0$.
\begin{assumption}
\label{sqk}
Let $\mathcal{T} = (T_1, T_2, \ldots )$ be a Poisson point process with intensity $\alpha x^{-\alpha-1} dx$, and let $\mathcal{R} = (R_1, R_2, \ldots )$ and $\mathcal{S} = (S_1, S_2, \ldots )$ denote sequences of random variables independent from $\mathcal{T}$, with the $R_j \in \overline{\mathbb{H}}$ and the $S_j \in \mathbb{R}_{\ge 0}$. Assume that the pairs $(R_1, S_1), (R_2, S_2), \ldots $ are identically distributed, each with some law $(R, S) \in \overline{\mathbb{H}} \times \mathbb{R}_{\ge 0}$. In particular, $\mathcal P = \sum_i \delta_{T_i,R_i,S_i}$ is a Poisson point process on $\mathbb{R}_{\ge 0} \times \overline{\mathbb{H}} \times \mathbb{R}_{\ge 0}$. Further let $\mathcal{K} = (K_1, K_2, \ldots )$ denote a sequence of random variables with all $K_j \in \overline{\mathbb{H}}$ and $K_j = f (\mathcal P_j, R_j, S_j, U_j)$ where $\mathcal{P}_j = \sum_{i \ne j } \delta_{T_i,R_i,S_i}$, $(U_1, U_2, \ldots)$ are iid uniform $[0,1]$ random variables and $f$ is a measurable function.
Assume there exist constants $A_1, A_2 > 1$ satisfying the two conditions below; conditional on $(T_j, R_j, S_j)$, they essentially state that the law of $\Real K_j$ has a density comparable to the profile $\big( |x| + 1 \big)^{-\alpha/2 - 1}$ and that $|K_j|$ has an $\frac{\alpha}{2}$-heavy tail. Here, $\mathbb{P}^j$ denotes the probability measure for $K_j$ conditional on $(T_j, R_j, S_j)$.
\begin{enumerate}
\item \label{k11} For any interval $I \subset \mathbb{R}$, we have
\begin{flalign*}
A_1^{-1} \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}} \le \mathbb{P}^j [ \Real K_j \in I] \le A_1 \displaystyle\int_I \displaystyle\frac{dx}{ \big( |x| + 1 \big)^{\alpha / 2 + 1}}.
\end{flalign*}
\item \label{k12} For any interval $I \subset \mathbb{R} \setminus [-1, 1]$, we have
\begin{flalign*}
\mathbb{P}^j [\Real K_j \in I] \le A_2 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}}; \qquad \mathbb{P}^j [\Imaginary K_j \in I] \le A_2 \displaystyle\int_I \displaystyle\frac{dx}{\big( |x| + 1 \big)^{\alpha / 2 + 1}}.
\end{flalign*}
\end{enumerate}
\end{assumption}
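For instance, if conditionally on $(T_j, R_j, S_j)$ the real part $\Real K_j$ has density exactly $\frac{\alpha}{4} \cdot \big( |x| + 1 \big)^{-\alpha/2 - 1}$ (which is a probability density, since $2 \int_0^{\infty} (x+1)^{-\alpha/2-1}\, dx = 4 \alpha^{-1}$), then the first condition of \Cref{sqk} holds with $A_1 = 4 \alpha^{-1}$. In \Cref{EstimateMomentss}, both conditions will be verified for the $\frac{\alpha}{2}$-stable random variables $K_u$ appearing in \eqref{sestimate} below, using \Cref{q0estimate1}.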
Under the above notation, define for any $\delta > 0 $ the random sums
\begin{flalign*}
& \mathfrak{I} = \displaystyle\sum_{j=1}^{\infty} \displaystyle\frac{|T_j|^s S_j}{|z + T_j^2 R_j + K_j|^{\chi}}; \qquad
\mathfrak{I} (\delta) = \displaystyle\sum_{j=1}^{\infty} \displaystyle\frac{|T_j|^s S_j}{|z + T_j^2 R_j + K_j|^{\chi}} \cdot \mathbbm{1}_{|T_j| < \delta}.
\end{flalign*}
\noindent The following lemma provides upper and lower bounds for the quantities $\mathfrak{I}$ and $\mathfrak{I} (\delta)$. Below, constants might implicitly depend on $\varpi$ and $A_2$ (and $\alpha$), even when not explicitly stated, but we will clarify when they depend on $A_1$ or $s$. In particular, observe that the third statement below removes the dependence on $A_1$ of the constant $C$ from the first part, assuming that $(R_j, S_j, K_j)$ are identically distributed and that $|\Re z| \ge 2$.
\begin{lem}
\label{expectationsum1}
The following three statements hold.
\begin{enumerate}
\item There exist constants $c > 0$ and $C (A_1) > 1$ such that
\begin{flalign*}
c (s-\alpha)^{-1} & \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot \mathbb{E} \big[ |R|^{(\alpha-s)/2} S \big] \\
& \le \mathbb{E} [\mathfrak{I}] \le C (s-\alpha)^{-1} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot \mathbb{E} \big[ |R|^{(\alpha-s)/2} S \big].
\end{flalign*}
\item There exists a constant $C > 1$ such that
\begin{flalign*}
\mathbb{E} \big[ \mathfrak{I} (\delta) \big] \le C (s-\alpha)^{-1} \cdot \delta^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi} \cdot \mathbb{E} [S].
\end{flalign*}
\item Assume that $|\Real z| \ge 2$ and that the $(R_j, S_j, K_j)$ are all identically distributed and independent from $\mathcal{T}$. Then there exists some constant $C > 1$ (independent of $A_1$) such that
\begin{flalign*}
\mathbb{E} [\mathfrak{I}] \le C (s-\alpha)^{-1} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot \mathbb{E} \big[ |R|^{(\alpha-s)/2} S \big].
\end{flalign*}
\end{enumerate}
\end{lem}
To establish \Cref{expectationsum1}, we require the following lemma bounding negative moments of continuous random variables, which will be established in \Cref{ProofV} below.
\begin{lem}
\label{zxrkestimate}
There exist constants $c > 0$; $C_1 = C_1 (A_1) > 1$; and $C_2 > 1$ such that the following holds. Fix real numbers $x > 0$ and $\delta \in (0,1]$, a complex number $Q \in \mathbb{H}$ with $|Q| = 1$, and a complex random variable $K$ satisfying the two assumptions in \Cref{sqk}. Setting $V = V(K) = |x Q + K|$, the below statements hold.
\begin{enumerate}
\item We have $c (x + 1)^{-\chi} \le \mathbb{E} [ V^{-\chi} ] \le C_1 (x + 1)^{- \chi}$.
\item If $x |\Real Q| \ge 2$, then $\mathbb{E} [ V^{-\chi} ] \le C_2 (x+1)^{-\chi}$.
\item We have $\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{V \le \delta} \big] \le C_1 \delta^{1 - \chi} \cdot (x + 1)^{-\alpha / 2 - 1}$.
\item For any finite union of intervals $J \subset \mathbb{R}$ with total length at most $\delta$, we have
\begin{flalign*}
\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{\Real K \in J} \big] \le C_1 \delta^{1 - \chi} \cdot (x + 1)^{-\alpha/2 - 1} + C_1 \delta \cdot (x + 1)^{-\chi}.
\end{flalign*}
\end{enumerate}
\end{lem}
\begin{proof}[Proof of \Cref{expectationsum1}]
Throughout this proof, let $\mathbb{E}^j$ denote the expectation for $K_j$, conditional on $(T_j, R_j, S_j)$. In view of the first part of \Cref{zxrkestimate}, there exist constants $c_1 > 0$ and $C_1 = C_1 (A_1) > 1$ such that
\begin{flalign}
\label{ztjrj1}
c_1 \cdot \displaystyle\frac{1}{\big( |z + T_j^2 R_j| + 1 \big)^{\chi}} \le \mathbb{E}^j \Bigg[ \displaystyle\frac{1}{|z + T_j^2 R_j + K_j|^{\chi}} \Bigg] \le C_1 \cdot \displaystyle\frac{1}{\big( |z + T_j^2 R_j| + 1 \big)^{\chi}}.
\end{flalign}
\noindent Further observe from \Cref{fidentityxi} that
\begin{flalign}
\label{ztjrj2}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{j=1}^{\infty} \displaystyle\frac{|T_j|^s S_j}{\big( |z + T_j^2 R_j| + 1 \big)^{\chi}} \Bigg] & = \alpha \displaystyle\int_0^{\infty} \mathbb{E} \Bigg[ \displaystyle\frac{t^s S}{\big( |z + t^2 R| + 1 \big)^{\chi}} \Bigg] \cdot t^{-\alpha-1} dt; \\
\mathbb{E} \Bigg[ \displaystyle\sum_{j=1}^{\infty} \displaystyle\frac{|T_j|^s S_j}{\big( |z+T_j^2 R_j| + 1 \big)^{\chi}} \cdot \mathbbm{1}_{|T_j| \le \delta} \Bigg] & = \alpha \displaystyle\int_0^{\delta} \mathbb{E} \Bigg[ \displaystyle\frac{t^s S}{\big( |z + t^2 R| + 1 \big)^{\chi}}\Bigg] \cdot t^{-\alpha-1} dt.
\end{aligned}
\end{flalign}
\noindent Moreover, for any deterministic $R_0 \in \mathbb{C}$, there exists a constant $C_2 > 1$ such that
\begin{flalign}
\label{ztjrj3}
\begin{aligned}
C_2^{-1} & (s-\alpha)^{-1} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot |R_0|^{(\alpha-s)/2} \\
& \le \alpha \displaystyle\int_0^{\infty} \displaystyle\frac{t^{s-\alpha-1} dt}{\big( |z + t^2 R_0| + 1 \big)^{\chi}} \le C_2 (s-\alpha)^{-1} \cdot \big( |z| + 1 \big)^{(s-\alpha)/2 - \chi} \cdot |R_0|^{(\alpha-s)/2},
\end{aligned}
\end{flalign}
\noindent and
\begin{flalign}
\label{ztjrj4}
\alpha \displaystyle\int_0^{\delta} \displaystyle\frac{t^{s-\alpha-1} dt}{\big( |z + t^2 R_0| + 1 \big)^{\chi}} \le C_2 (s-\alpha)^{-1} \cdot \delta^{s-\alpha} \cdot \big( |z| + 1 \big)^{-\chi},
\end{flalign}
\noindent where the first bound follows from \Cref{integral3} and the second from \Cref{integral} (using $\delta \gamma^{-1}$ there instead of $\delta$). Combining \eqref{ztjrj1}, \eqref{ztjrj2}, \eqref{ztjrj3}, and \eqref{ztjrj4} yields the first two statements of the lemma.
To establish the third, let the joint law of each $(R_j, S_j, K_j)$ be $(R, S, K)$. Then, applying \Cref{fidentityxi} yields
\begin{flalign}
\label{estimate2i}
\mathbb{E} [\mathfrak{I}] = \alpha \displaystyle\int_0^{\infty} \mathbb{E} \Bigg[ \displaystyle\frac{t^s S}{|z + t^2 R + K|^{\chi}} \Bigg] \cdot t^{-\alpha-1} dt.
\end{flalign}
\noindent By changing variables $t = |R|^{-1/2} \cdot |z+K|^{1/2} \cdot v$, and using the bound $|z + t^2 R + K| \ge |z + K| \cdot |v^2 - 1|$, we deduce that
\begin{flalign*}
\displaystyle\int_0^{\infty} \displaystyle\frac{t^{s-\alpha-1} dt}{|z + t^2 R + K|^{\chi}} & \le |R|^{(\alpha-s)/2} \cdot |z + K|^{(s-\alpha)/2 - \chi} \displaystyle\int_0^{\infty} \displaystyle\frac{v^{s-\alpha-1} dv}{|v^2 - 1|^{\chi}} \\
& \le C_3 (s-\alpha)^{-1} \cdot |R|^{(\alpha-s)/2} \cdot |z+K|^{(s-\alpha)/2 - \chi},
\end{flalign*}
\noindent for some constant $C_3 > 1$. Inserting this into \eqref{estimate2i} gives
\begin{flalign*}
\mathbb{E} [\mathfrak{I}] \le \alpha C_3 (s-\alpha)^{-1} \cdot \mathbb{E} \Big[ |R|^{(\alpha-s)/2} S \cdot \mathbb{E}^1 \big[ |z + K|^{(s-\alpha)/2 - \chi} \big] \Big],
\end{flalign*}
\noindent which together with the second statement of \Cref{zxrkestimate} (applied with $\chi - \frac{s-\alpha}{2}$ in place of $\chi$, $x = |z|$, and $Q = z |z|^{-1}$, so that $x |\Real Q| = |\Real z| \ge 2$) yields the third statement of the lemma.
\end{proof}
\subsection{Proof of \Cref{zxrkestimate}}
\label{ProofV}
In this section we establish \Cref{zxrkestimate}.
\begin{proof}[Proof of \Cref{zxrkestimate}]
Observe from the second part of \Cref{sqk} that there exists some constant $C_1 > 1$ such that $\mathbb{P} \big[ |K| \le C_1 \big] \ge \frac{1}{2}$. Restricting to this event, we find
\begin{flalign*}
\mathbb{E} \big[ |xQ + K|^{-\chi} \big] \ge \displaystyle\frac{1}{2} \cdot (x + C_1)^{-\chi} \ge \displaystyle\frac{1}{2C_1} \cdot (x + 1)^{-\chi},
\end{flalign*}
\noindent thereby establishing the lower bound in the first statement of the lemma.
The remaining parts of the lemma constitute upper bounds. Since $|Q| = 1$ and $Q, K \in \overline{\mathbb{H}}$, we have $|xQ + K| \ge |xQ + \Real K|$, and so \Cref{sqk} implies for any interval $I \subseteq \mathbb{R}_{\ge 0}$ that
\begin{flalign}
\label{vichi}
\begin{aligned}
\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{V \in I} \big] & \le A_1 \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{V (w) \in I}}{\big( |w| + 1 \big)^{\alpha / 2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}}; \\
\mathbb{E} [V^{-\chi}] & \le A_2 \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{|w| \ge 1}}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} + \displaystyle\max_{w \in [-1, 1]} |xQ + w|^{-\chi}.
\end{aligned}
\end{flalign}
Now let us establish the first part of the lemma. We first take $I = [0, \infty)$ in the first statement of \eqref{vichi}; then, there exists a constant $C_2 = C_2 (A_1) > 1$ such that
\begin{flalign}
\label{vchi1}
\begin{aligned}
\mathbb{E} [V^{-\chi}] & \le 2 A_1 \Bigg( \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{|w| < 1}}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} + \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{1 \le |w| < x_0/2}}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} \\
& \qquad \quad + \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{x_0/2 \le |w| < 2x_0}}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} + \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{\mathbbm{1}_{|w| \ge 2x_0}}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} \Bigg) \\
& \le C_2 \Bigg( \displaystyle\int_{-1}^1 \displaystyle\frac{dw}{|xQ+w|^{\chi}} + (x + 1)^{-\chi} \displaystyle\int_1^{x_0/2} \displaystyle\frac{dw}{(w + 1)^{\alpha/2 + 1}} \\
& \qquad \quad + (x + 1)^{-\alpha/2 - 1} \displaystyle\int_{-\infty}^{\infty} \mathbbm{1}_{x_0/2 \le |w| < 2x_0} \cdot \displaystyle\frac{dw}{|xQ+w|^{\chi}} + \displaystyle\int_{2x_0}^{\infty} \displaystyle\frac{dw}{w^{\alpha/2 + \chi + 1}}\Bigg),
\end{aligned}
\end{flalign}
\noindent where $x_0 = x + \frac{1}{2}$. Here, for the first integral we used the fact that $|w| + 1 \ge 1$; for the second we used the fact that $|xQ + w| \ge \frac{x_0}{4} \ge \frac{x+1}{8}$ whenever $1 \le |w| \le \frac{x_0}{2}$; for the third we used the fact that $|w| + 1 \ge \frac{x_0}{2}$ on its support; and for the fourth we used the fact that $|w| \ge 2x_0 \ge 2x$ implies $|xQ + w| \ge \frac{|w|}{2}$. Since there exists a constant $C_3 > 1$ such that
\begin{flalign*}
& \displaystyle\int_{-1}^1 \displaystyle\frac{dw}{|xQ + w|^{\chi}} \le C_3 (x + 1)^{-\chi}; \qquad \qquad \qquad \qquad \quad \displaystyle\int_1^{x_0/2} \displaystyle\frac{dw}{(w + 1)^{\alpha/2 + 1}} \le C_3; \\
& \displaystyle\int_{-\infty}^{\infty} \mathbbm{1}_{x_0/2 \le |w| < 2x_0} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}} \le C_3 (x + 1)^{1 - \chi}; \qquad \displaystyle\int_{2x_0}^{\infty} w^{-\alpha/2 - \chi - 1} dw \le C_3 (x + 1)^{-\alpha/2 - \chi},
\end{flalign*}
\noindent inserting these bounds into \eqref{vchi1} yields the first statement of the lemma. The proof of the second statement is entirely analogous, using the second part of \eqref{vichi} and the fact that $|xQ + w| \ge \frac{1}{8} \cdot \big( |x| + 1 \big)$ for $|x \Real Q| \ge 2$ and $|w| \le 1$ (to bound the second term on its right side).
We next establish the third statement of the lemma, to which end we take $I = [0, \delta]$. Then, since $|Q| = 1$ and $\delta \le 1$, we have that $|w| + 1 \ge \frac{1}{2} (x + 1)$ whenever $V(w) = |xQ + w| \le \delta$. Hence,
\begin{flalign*}
\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{V \le \delta} \big] & \le 4 A_1 \cdot (x + 1)^{-\alpha / 2 - 1} \displaystyle\int_{-\infty}^{\infty} \mathbbm{1}_{V(w) \le \delta} \cdot \displaystyle\frac{dw}{|x \Real Q + w|^{\chi}} \\
& \le 8 A_1 \cdot (x + 1)^{-\alpha / 2 - 1} \displaystyle\int_0^{\delta} u^{-\chi} du = \displaystyle\frac{8 A_1}{1 - \chi} \cdot (x + 1)^{-\alpha / 2 - 1} \cdot \delta^{1 - \chi},
\end{flalign*}
\noindent where to deduce the second inequality we changed variables $u = x \Real Q + w$. This establishes the third bound in the lemma.
To establish the fourth, observe from \eqref{vichi} that
\begin{flalign*}
\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{\Real K \in J} \big] \le A_1 \displaystyle\int_J \displaystyle\frac{1}{\big( |w| + 1 \big)^{\alpha/2 + 1}} \cdot \displaystyle\frac{dw}{|xQ + w|^{\chi}}.
\end{flalign*}
\noindent Since $|Q| = 1$, we either have $|xQ + w| \ge \frac{1}{2} (x + 1)$ or $|w| + 1 \ge \frac{1}{2} (x + 1)$. It follows for some constant $C_4 = C_4 (A_1) > 1$ that
\begin{flalign*}
\mathbb{E} \big[ V^{-\chi} \cdot \mathbbm{1}_{\Real K \in J} \big] & \le 2 A_1 \int_{-\delta}^{\delta} \displaystyle\frac{dw}{(x + 1)^{\chi}} + 2 A_1 \cdot (x+1)^{-\alpha/2 - 1} \displaystyle\int_{-\delta}^{\delta} |w|^{-\chi} dw \\
& \le C_4 \delta \cdot (x + 1)^{-\chi} + C_4 \delta^{1-\chi} \cdot (x + 1)^{-\alpha/2-1},
\end{flalign*}
\noindent where we used the fact that $J$ has length at most $\delta$ (enabling us to replace it by $[-\delta, \delta]$ to upper bound the integrals). This verifies the fourth statement of the lemma.
\end{proof}
\section{Fractional Moment Estimates}
\label{EstimateMomentss}
In this section we establish \Cref{limitr0j}. We begin in \Cref{EstimateR0} by establishing recursive estimates for a variant of the moment $\Phi_{\ell} (s; z)$ (from \Cref{moment1}), which we denote by $\Xi_{\ell} (s; z)$ (see \Cref{xil} below). In \Cref{MultiplicativeR} we establish \Cref{limitr0j} by showing that the $\Phi_{\ell} (s; z)$ are approximately multiplicative in $\ell$. Throughout this section, we fix a complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$ with $\eta \in (0, 1)$ and $\varepsilon \le |E| \le B$; we will frequently abbreviate $\Phi_{\ell} (s) = \Phi_{\ell} (s; z)$. We further fix $s \in (\alpha, 1)$ and $\chi \in \big( \frac{s-\alpha}{2}, 1 \big)$ satisfying \eqref{chisalphaomega} for some $\varpi > 0$. In what follows, constants might implicitly depend on $\varpi$ (and $\alpha$), and we will mention how they depend on $\varepsilon$, $B$, and $s$.
\subsection{Resolvent Estimates}
\label{EstimateR0}
In this section we establish estimates for the following variant $\Xi_{\ell}$ of the fractional moment $\Phi_{\ell}$.
\begin{definition}
\label{xil}
For any integer $\ell > 0$, define $\Xi_\ell (\chi) = \Xi_\ell (\chi; s; z)$ and $\Xi_{-\ell} (\chi) = \Xi_{-\ell} (\chi; s; z)$ by
\begin{flalign*}
\Xi_\ell (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V}(\ell)} |R_{00}|^{\chi} \cdot |T_{00_+}|^s \cdot \big| R_{0_+ v}^{(0)} \big|^s \Bigg]; \qquad \Xi_{-\ell} (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (\ell)} |R_{vv}|^{\chi} \cdot |T_{v_- v}|^s \cdot \big| R_{0v_-}^{(v)} \big|^s \Bigg],
\end{flalign*}
\noindent where $0_+$ is the child of $0$, and $v_-$ is the parent of $v$, on the path $\mathfrak{p} (0, v) \subset \mathbb{V}$ from $0$ to $v$. At $\ell = 0$, we set $\Xi_0 (\chi) = \Xi_0 (\chi; s; z) = \mathbb{E} \big[ |R_{00}|^{\chi} \big]$.
\end{definition}
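For instance, at $\ell = 1$ we have $0_+ = v$ for each $v \in \mathbb{V} (1)$, so
\begin{flalign*}
\Xi_1 (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (1)} |R_{00}|^{\chi} \cdot |T_{0v}|^s \cdot \big| R_{vv}^{(0)} \big|^s \Bigg],
\end{flalign*}
\noindent which at $\chi = s$ reduces to the moment $\Phi_1 (s; z) = \mathbb{E} \big[ \sum_{v \sim 0} |R_{0v}|^s \big]$ recalled at the beginning of \Cref{G0vMoment}.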
\begin{rem}
\label{xir}
By \Cref{rproduct}, we have that
\begin{flalign}
\label{rproduct2}
| R_{00} | \cdot |T_{00_+}| \cdot \big| R_{0_+ v}^{(0)} \big| = | R_{0v}| = |R_{vv}| \cdot |T_{v_- v}| \cdot \big| R_{0v_-}^{(v)} \big|,
\end{flalign}
\noindent so $\Xi_{\ell} (s) = \Xi_{-\ell} (s) = \Phi_{\ell} (s)$.
\end{rem}
The following lemma shows that $\Xi_{\ell} (\chi) = \Xi_{-\ell} (\chi)$ holds for general $\chi$.
\begin{lem}
\label{xichi}
For any integer $\ell \ge 0$, we have $\Xi_{\ell} (\chi) = \Xi_{-\ell} (\chi)$.
\end{lem}
\begin{proof}
We may assume that $\ell \ge 1$. For any $u, v \in \mathbb{V}$, set
\begin{flalign*}
f(u, v) = |R_{uu}|^{\chi} \cdot |T_{uw}|^s \cdot \big| R_{w v}^{(u)} \big|^s \cdot \mathbbm{1}_{d(v, u) = \ell},
\end{flalign*}
\noindent where $d(u, v)$ denotes the distance between $u$ and $v$ in $\mathbb{T}$, and $w \sim u$ is the vertex adjacent to $u$ in the path $\mathfrak{p} (u, v) \subset \mathbb{V}$ between $u$ and $v$. Then, the unimodularity of $\mathbb{T}$ (recall \Cref{tunimodular}) gives
\begin{flalign*}
\Xi_{\ell} (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V}} f(0, v) \Bigg] = \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V}} f(v, 0) \Bigg] = \Xi_{-\ell} (\chi),
\end{flalign*}
\noindent which implies the lemma.
\end{proof}
Next, we have the following two lemmas showing that $\Xi_{\ell} (\chi)$ and $\Xi_{\ell} \big( \frac{\alpha+s}{2} \big)$ are comparable.
\begin{lem}
\label{chilchil1}
There exists a constant $C_1 = C_1 (\varepsilon, B) > 1$ such that for any integer $\ell \ge 1$ we have
\begin{flalign*}
C_1^{-1} (s-\alpha)^{-1} \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \le \Xi_\ell (\chi) \le C_1 (s-\alpha)^{-1} \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign*}
\noindent Additionally, if $|E| \ge 2$, then there exists a constant $C_2 > 1$ (independent of $\varepsilon$ and $B$) such that for any integer $\ell \ge 1$ we have
\begin{flalign*}
\Xi_{\ell} (\chi) \le C_2 (s-\alpha)^{-1} \cdot |E|^{(s-\alpha)/2 - \chi} \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign*}
\end{lem}
\begin{proof}
Applying the Schur complement identity \eqref{qvv} (and setting $u = 0_+$), we deduce
\begin{flalign}
\label{sestimate}
\Xi_\ell (\chi) & = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (1)} \displaystyle\frac{|T_{0u}|^s}{\big| z + T_{0u}^2 R_{uu}^{(0)} + K_u \big|^{\chi}} \cdot \displaystyle\sum_{v \in \mathbb{D}_{\ell - 1} (u)} \big| R_{uv}^{(0)} \big|^s \Bigg], \quad \text{where} \quad K_u = \displaystyle\sum_{\substack{w \in \mathbb{V} (1) \\ w \ne u}} T_{0w}^2 R_{ww}^{(0)}.
\end{flalign}
\noindent Observe from \Cref{tuvalpha} and \Cref{sumaxi} that $K_u$ is a stable random variable with some law $K$, and that it is independent from $T_{0u}$, $R_{uu}^{(0)}$, and $R_{uv}^{(0)}$. Applying the first part of \Cref{expectationsum1}, with the $R_j$ there equal to $R_{uu}^{(0)}$ here and the $S_j$ there equal to the inner sum $\sum_{v \in \mathbb{D}_{\ell-1} (u)} \big| R_{uv}^{(0)} \big|^s$ on the right side of \eqref{sestimate} (and using \Cref{tuvalpha}, \Cref{sumaxi}, the first statement of \Cref{q0estimate1}, and \eqref{sigmathetakappa} to verify \Cref{sqk} for $K_u$) yields a constant $C = C (\varepsilon, B) > 1$ such that
\begin{flalign*}
C^{-1} \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) & = C^{-1} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (u)} \big| R_{uu}^{(0)} \big|^{(\alpha+s)/2} \cdot |T_{uu_+}|^s \cdot \big| R_{u_+ v}^{(u)} \big|^s \Bigg] \\
& = C^{-1} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (u)} \big| R_{uu}^{(0)} \big|^{(\alpha-s)/2} \cdot \big| R_{uv}^{(0)} \big|^s \Bigg] \\
& \le (s-\alpha) \cdot \Xi_\ell (\chi) \\
& \le C \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{D}_{\ell - 1} (u)} \big| R_{uu}^{(0)} \big|^{(\alpha-s)/2} \cdot \big| R_{uv}^{(0)} \big|^s \Bigg] \\
& = C \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (u)} \big| R_{uu}^{(0)} \big|^{(\alpha+s)/2} \cdot |T_{uu_+}|^s \cdot \big| R_{u_+ v}^{(u)} \big|^s \Bigg] = C \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign*}
\noindent Here, to obtain the first and sixth statements we used the fact that $\big( R_{u_+ v}^{(u)}, T_{uu_+} \big)$ and $(R_{0w}, T_{00_+})$ have the same law, for $u \in \mathbb{V} (1)$, $v \in \mathbb{D}_{\ell-1} (u)$, and $w \in \mathbb{V} (\ell-1)$ the suffix of $v$ (that is, $v = (u, w)$); to obtain the second and fifth, we used \Cref{rproduct} (see also \eqref{rproduct2}). This establishes the first statement of the lemma.
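For concreteness, the factorization from \Cref{rproduct} underlying the second and fifth equalities above reads, in the present indices,
\begin{flalign*}
\big| R_{uv}^{(0)} \big| = \big| R_{uu}^{(0)} \big| \cdot |T_{uu_+}| \cdot \big| R_{u_+ v}^{(u)} \big|, \qquad \text{so that} \qquad \big| R_{uu}^{(0)} \big|^{(\alpha-s)/2} \cdot \big| R_{uv}^{(0)} \big|^s = \big| R_{uu}^{(0)} \big|^{(\alpha+s)/2} \cdot |T_{uu_+}|^s \cdot \big| R_{u_+ v}^{(u)} \big|^s,
\end{flalign*}
\noindent where $u_+$ denotes the child of $u$ on the path from $u$ to $v$.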
The proof of the second is entirely analogous and thus omitted, obtained by replacing the use of the first part of \Cref{expectationsum1} to estimate \eqref{sestimate} with the third part of \Cref{expectationsum1} (using \Cref{tuvalpha}, \Cref{sumaxi}, and the first statement of \Cref{q0estimate1} to verify \Cref{sqk} for $K_u$, with a constant $A_2$ that is independent of $\varepsilon$ and $B$).
\end{proof}
\begin{lem}
\label{xilchi2}
There exists a constant $C = C(s, \varepsilon, B) > 1$ such that for any integer $\ell > 0$ we have
\begin{flalign*}
C^{-1} \cdot \Xi_{-\ell} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \le \Xi_{-\ell - 1} (\chi) \le C \cdot \Xi_{-\ell} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign*}
\end{lem}
\begin{proof}
This follows from \Cref{xichi} and \Cref{chilchil1}, but let us also provide an alternative proof, similar to that of \Cref{chilchil1}. Again applying the Schur complement identity \eqref{qvv} yields
\begin{flalign*}
\Xi_{-\ell} (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - 1)} \displaystyle\sum_{v \in \mathbb{D} (u)} \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{uv}^2 R_{uu}^{(v)} + K_v \big|^{\chi}} \cdot \big| R_{0u}^{(v)} \big|^s \Bigg], \quad \text{where} \quad K_v = \displaystyle\sum_{w \in \mathbb{D} (v)} T_{vw}^2 R_{ww}^{(v)}.
\end{flalign*}
\noindent In particular, by \Cref{tuvalpha} and \Cref{sumaxi}, $K_v$ is an $\frac{\alpha}{2}$-stable random variable with some law $K$. Since $K_v$ is also independent from $T_{uv}$ and $R_{uu}^{(v)}$, we have
\begin{flalign}
\label{xi1}
\Xi_{-\ell} (\chi) = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - 1)} |T_{u_- u}|^s \cdot \big| R_{0 u_-}^{(u)} \big|^s \cdot \mathbb{E}_u \bigg[ \displaystyle\sum_{v \in \mathbb{D}(u)} \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{uv}^2 R_{uu}^{(v)} + K \big|^{\chi}} \cdot \big| R_{uu}^{(v)} \big|^s \bigg] \Bigg],
\end{flalign}
\noindent where we used \Cref{rproduct} and recalled that $\mathbb{E}_u$ denotes the expectation conditional on $\mathbb{T}_- (u)$. Also observe by \eqref{qvv} that
\begin{flalign*}
R_{uu}^{(v)} = - \big( z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + g_v \big)^{-1}, \quad \text{where} \quad g_v = \displaystyle\sum_{\substack{w \in \mathbb{D} (u) \\ w \ne v}} T_{uw}^2 R_{ww}^{(u)}.
\end{flalign*}
Now let $(R_w)_{w \sim u}$ denote a family of independent, identically distributed random variables, each with law $R_{00}$. Then the first part of \Cref{q12} implies, for any $u \in \mathbb{V} (\ell - 1)$, the law of the sequence of random variables $\big( R_{ww}^{(u)} \big)$ is the same as that of $(R_w)$. Hence, exchanging the expectation with the sum in \eqref{xi1} twice, we obtain
\begin{flalign}
\label{xil2}
\begin{aligned}
\Xi_{-\ell} (\chi) & = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} |T_{u_- u}|^s \cdot \big| R_{0u_-}^{(u)} \big|^s \cdot \displaystyle\sum_{v \in \mathbb{D} (u)} \mathbb{E}_u \bigg[ \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{uv}^2 R_{uu}^{(v)} + K \big|^{\chi} } \cdot \big| R_{uu}^{(v)} \big|^s \bigg] \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} |T_{u_- u}|^s \cdot \big| R_{0u_-}^{(u)} \big|^s \cdot \displaystyle\sum_{v \in \mathbb{D} (u)} \mathbb{E}_u \bigg[ \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{uv}^2 \widetilde{R}_{uu}^{(v)} + K \big|^{\chi} } \cdot \big| \widetilde{R}_{uu}^{(v)} \big|^s \bigg] \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} |T_{u_- u}|^s \cdot \big| R_{0u_-}^{(u)} \big|^s \cdot \mathbb{E}_u \bigg[ \displaystyle\sum_{v \in \mathbb{D} (u)} \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{uv}^2 \widetilde{R}_{uu}^{(v)} + K \big|^{\chi} } \cdot \big| \widetilde{R}_{uu}^{(v)} \big|^s \bigg] \Bigg],
\end{aligned}
\end{flalign}
\noindent where
\begin{flalign*}
\widetilde{R}_{uu}^{(v)} = - \big( z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + \widetilde{g}_v \big)^{-1}, \quad \text{where} \quad \widetilde{g}_v = \displaystyle\sum_{\substack{w \in \mathbb{D} (u) \\ w \ne v}} T_{uw}^2 R_w.
\end{flalign*}
\noindent We next apply \eqref{fxi} with
\begin{flalign*}
f(x, \nu) = \displaystyle\frac{|x|^s}{\big|z + x^2 \widetilde{R} (\nu) + K \big|^{\chi}} \cdot \big| \widetilde{R} (\nu) \big|^s,
\end{flalign*}
\noindent where $\nu = (\nu_1, \nu_2, \ldots )$ is a point process on $\mathbb{R}_{> 0}$, and
\begin{flalign*}
\widetilde{R} (\nu) = - \big( z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + \widetilde{g} (\nu) \big)^{-1}, \quad \text{and} \quad \widetilde{g} (\nu) = \displaystyle\sum_{j = 1}^{\infty} \nu_j^2 R_j,
\end{flalign*}
\noindent where $R_1, R_2, \ldots $ are independent, identically distributed random variables with law $R_{00}$. Observe in particular from the Schur complement identity \eqref{qvv} that $\widetilde{R} (\nu)$ has the same law as $R_{uu}$, conditional on $(T_{uw})_{w \in \mathbb{D}(u)} = (\nu_1, \nu_2, \ldots)$. Inserting \eqref{fxi} into \eqref{xil2}, we obtain
\begin{flalign}
\label{xil3}
\begin{aligned}
\Xi_{-\ell} (\chi) & = \alpha \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - 1)} |T_{u_- u}|^s \cdot \big| R_{0u_-}^{(u)} \big|^s \cdot \displaystyle\int_0^{\infty} \mathbb{E}_u \bigg[ \displaystyle\frac{x^s}{\big| z + x^2 \widetilde{R} (\nu) + K \big|^{\chi}} \cdot \big| \widetilde{R} (\nu) \big|^s \bigg] \cdot x^{-\alpha-1} dx \Bigg] \\
& = \alpha \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - 1)} |T_{u_- u}|^s \cdot \big|R_{0u_-}^{(u)} \big|^s \cdot \displaystyle\int_0^{\infty} \mathbb{E}_u \Bigg[ \displaystyle\frac{x^s}{|z + x^2 R_{uu} + K|^{\chi}} \cdot |R_{uu}|^s \Bigg] \cdot x^{-\alpha-1} dx \Bigg],
\end{aligned}
\end{flalign}
\noindent where in the first expectation $\nu$ is a Poisson point process with intensity measure $\alpha x^{-\alpha-1} dx$. By \Cref{expectationsum1}, there exists a constant $C (s, \varepsilon, B) > 1$ for which
\begin{flalign*}
C^{-1} \cdot \mathbb{E}_u \big[ |R_{uu}|^{(\alpha+s)/2} \big] & \le \displaystyle\int_0^{\infty} \mathbb{E}_u \Bigg[ \displaystyle\frac{x^s}{ |z + x^2 R_{uu} + K|^{\chi}} \cdot |R_{uu}|^s \Bigg] \cdot x^{-\alpha - 1} dx \le C \cdot \mathbb{E}_u \big[ |R_{uu}|^{(\alpha+s)/2} \big].
\end{flalign*}
\noindent This, with \eqref{xil3}, then yields the lemma.
\end{proof}
\begin{cor}
\label{estimatemoment1}
There exist constants $C_1 = C_1 (s, \varepsilon, B) > 1$; $C_2 = C_2 (s) > 1$ (independent of $\varepsilon$ and $B$); $c_1 = c_1 (\varepsilon, B) > 0$ (independent of $s$); and $c_2 > 0$ such that the following hold for any integer $\ell \ge 2$.
\begin{enumerate}
\item We have $C_1^{-1} \cdot \Phi_{\ell-1} (s) \le \Phi_{\ell} (s) \le C_1 \cdot \Phi_{\ell-1} (s)$.
\item We have $C_1^{-1} c_1^{\ell} \cdot (s-\alpha)^{-\ell} \le \Phi_{\ell} (s) \le C_1 c_1^{-\ell} \cdot (s-\alpha)^{-\ell}$.
\item If $|E| \ge 2$, then $\Phi_{\ell} (s) \le C_2 c_2^{-\ell} \cdot (s- \alpha)^{-\ell} \left(|E|^{(s-\alpha)/2 - \chi}\right)^{\ell}$.
\end{enumerate}
\end{cor}
\begin{proof}
By \Cref{xir} and three applications of the first statement of \Cref{chilchil1} (twice at $\chi = s$ and once at $\chi = \frac{\alpha+s}{2}$), we deduce the existence of a constant $C_1 = C_1 (s, \varepsilon, B) > 1$ such that
\begin{flalign*}
\Phi_{\ell} (s) =\Xi_{\ell} (s) \le C_1 \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \le C_1^2 \cdot \Xi_{\ell-2} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \le C_1^3 \cdot \Xi_{\ell-1} (s) = C_1^3 \cdot \Phi_{\ell-1} (s),
\end{flalign*}
\noindent and so $\Phi_{\ell} (s) \le C_1^3 \cdot \Phi_{\ell-1} (s)$; entirely analogous reasoning yields $\Phi_{\ell} (s) \ge C_1^{-3} \cdot \Phi_{\ell-1} (s)$. This establishes the first statement of the corollary. To establish the second, observe from $\ell$ applications of the first part of \Cref{chilchil1} that there exists a constant $c = c(\varepsilon, B) > 0$ such that
\begin{flalign*}
c^{\ell} \cdot (s-\alpha)^{-\ell} \cdot \Xi_0 \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \le \Phi_{\ell} (s) \le c^{-\ell} \cdot (s-\alpha)^{-\ell} \cdot \Xi_0 \bigg( \displaystyle\frac{\alpha+s}{2}\bigg).
\end{flalign*}
\noindent This, together with the fact that $\Xi_0 (\chi)$ is bounded for any $\chi \in (0, 1)$ (by \Cref{expectationqchi}), yields the second statement of the corollary. The proof of the third is very similar, obtained by repeatedly applying the second part of \Cref{chilchil1} (instead of the first).
\end{proof}
\subsection{Approximate Multiplicativity of Resolvent Moments}
\label{MultiplicativeR}
In this section we establish \Cref{limitr0j}, to which end we will first show that the resolvent moments $\Phi_{\ell} (s)$ (from \Cref{moment1}) are approximately multiplicative through the following proposition. Throughout this section, all constants may implicitly depend on $s$, $\varepsilon$, and $B$, even when not explicitly stated.
\begin{prop}
\label{smultiplicative}
There is a constant $C = C(s, \varepsilon, B) > 1$ such that, for any integers $\ell, m \ge 1$, we have
\begin{flalign*}
C^{-1} \cdot \Phi_{\ell-1} (s) \cdot \Phi_{m-1} (s) \le \Phi_{\ell + m} (s) \le C \cdot \Phi_{\ell-1} (s) \cdot \Phi_{m-1} (s).
\end{flalign*}
\end{prop}
The above proposition will quickly follow from the following two lemmas.
\begin{lem}
\label{lm1}
There is a constant $C = C(s, \varepsilon, B) > 1$ such that, for any integers $\ell, m \ge 1$, we have
\begin{flalign*}
C^{-1} & \cdot \Phi_{\ell-1} (s) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \\
& \le \Phi_{\ell + m} (s) \le C \cdot \Phi_{\ell-1} (s) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg].
\end{flalign*}
\end{lem}
\begin{lem}
\label{m2}
There is a constant $C = C(s, \varepsilon, B) > 1$ such that, for any integer $m \ge 1$, we have
\begin{flalign*}
C^{-1} \cdot \Phi_{m-1} (s) \le \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \le C \cdot \Phi_{m-1} (s).
\end{flalign*}
\end{lem}
\begin{proof}[Proof of \Cref{smultiplicative}]
This follows from \Cref{lm1} and \Cref{m2}.
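Explicitly, writing $M_m$ for the expectation appearing on both sides of \Cref{lm1} and in \Cref{m2} (a shorthand introduced only for this proof), the upper bounds of the two lemmas chain as
\begin{flalign*}
\Phi_{\ell + m} (s) \le C \cdot \Phi_{\ell-1} (s) \cdot M_m \le C^2 \cdot \Phi_{\ell-1} (s) \cdot \Phi_{m-1} (s),
\end{flalign*}
\noindent and the lower bounds chain analogously, yielding the proposition with $C^2$ in place of $C$ (where $C$ denotes the larger of the two lemmas' constants).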
\end{proof}
Now let us establish \Cref{lm1} and \Cref{m2}.
\begin{proof}[Proof of \Cref{lm1}]
We only establish the upper bound, as the proof of the lower bound is entirely analogous. To that end, by \Cref{rproduct}, we have
\begin{flalign*}
\Phi_{\ell + m} (s) & = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \displaystyle\sum_{v \in \mathbb{D}_{\ell} (u)} |R_{0v}|^s \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V}(m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot | R_{uu}|^s \displaystyle\sum_{w \in \mathbb{D} (u)} |T_{uw}|^s \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (w)} \big| R_{wv}^{(u)} \big|^s \Bigg],
\end{flalign*}
\noindent from which it follows (by the Schur complement identity \eqref{qvv}) that
\begin{flalign}
\label{slm}
\begin{aligned}
\Phi_{\ell+m} (s) = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \mathbb{E}_u \bigg[ & \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uw}^2 R_{ww}^{(u)} + K_{u, w} \big|^{s}} \\
& \times \displaystyle\sum_{v \in \mathbb{D}_{\ell - 1} (w)} \big| R_{wv}^{(u)} \big|^s \bigg] \Bigg],
\end{aligned}
\end{flalign}
\noindent where we recall that $\mathbb{E}_u$ denotes the expectation conditional on $\mathbb{T}_- (u)$ and we have denoted
\begin{flalign*}
K_{u, w} = \displaystyle\sum_{\substack{v' \in \mathbb{D} (u) \\ v' \ne w, u_- }} T_{uv'}^2 R_{v' v'}^{(u)}.
\end{flalign*}
\noindent To estimate the inner expectation in \eqref{slm}, we proceed as in the proof of \Cref{xilchi2}. Specifically, observe that $K_{u, w}$ is an $\frac{\alpha}{2}$-stable random variable (by \Cref{tuvalpha} and \Cref{sumaxi}) with some law $K$ that is independent from $T_{u_- u}$, $T_{uw}$, $R_{u_- u_-}^{(u)}$, $R_{ww}^{(u)}$, and $R_{wv}^{(u)}$. We then apply \Cref{expectationsum1}, with the $z$ there equal to $z + T_{u_- u}^2 R_{u_- u_-}^{(u)}$ here; the $S_j$ there equal to $\sum_{v \in \mathbb{D}_{\ell-1} (w)} \big| R_{wv}^{(u)} \big|^s$; and the $K_j$ there equal to $K$ here (whose conditions are verified by \Cref{tuvalpha}, \Cref{zuv}, and \Cref{ar}). This yields a constant $C_1 > 1$ such that
\begin{flalign}
\label{expectationtu}
\begin{aligned}
\mathbb{E}_u & \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uw}^2 R_{ww}^{(u)} + K_{u, w}\big|^s} \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (w)} \big| R_{wv}^{(u)} \big|^s \Bigg] \\
& \le \displaystyle\frac{C_1}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \cdot \mathbb{E}_u \Bigg[ \big| R_{u_+ u_+}^{(u)} \big|^{(\alpha-s)/2} \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (u_+)} \big| R_{u_+ v}^{(u)} \big|^s \Bigg] \\
& = \displaystyle\frac{C_1}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \cdot \mathbb{E} \Bigg[ | R_{00}|^{(\alpha-s)/2} \displaystyle\sum_{v \in \mathbb{V} (\ell-1)} | R_{0v}|^s \Bigg],
\end{aligned}
\end{flalign}
\noindent where in the second statement we used the fact that $\big( R_{u_+ u_+}^{(u)}, R_{u_+ v}^{(u)} \big)$ for $u_+ \in \mathbb{D} (u)$ and $v \in \mathbb{D}_{\ell-1} (u_+)$ has the same law as $(R_{00}, R_{0v})$ for $v \in \mathbb{V} (\ell-1)$. Hence,
\begin{flalign}
\label{expectationtu2}
\begin{aligned}
\mathbb{E}_u & \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uw}^2 R_{ww}^{(u)} + K_{u, w} \big|^s} \displaystyle\sum_{v \in \mathbb{D}_{\ell-1} (w)} \big| R_{wv}^{(u)} \big|^s \Bigg] \\
& \le \displaystyle\frac{C_1}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (\ell-1)}| R_{00}|^{(\alpha+s)/2} \cdot |T_{00_+}|^s \cdot \big| R_{0_+ v}^{(0)} \big|^s \Bigg] \\
& = \displaystyle\frac{C_1}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg),
\end{aligned}
\end{flalign}
\noindent where in the first statement we applied \Cref{rproduct}. Inserting \eqref{expectationtu} and \eqref{expectationtu2} into \eqref{slm} yields
\begin{flalign*}
\Phi_{\ell+m} (s) \le C_1 \cdot \Xi_{\ell-1} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg],
\end{flalign*}
\noindent and so the upper bound in the lemma follows from \Cref{chilchil1} and \Cref{estimatemoment1}; as mentioned previously, the proof of the lower bound is entirely analogous and is thus omitted.
\end{proof}
\begin{proof}[Proof of \Cref{m2}]
By \Cref{rproduct}, we have
\begin{flalign}
\label{expectationsumvm}
\begin{aligned}
\mathbb{E} & \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (m)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (m-1)} \displaystyle\sum_{u \in \mathbb{D} (w)} \big| R_{0w}^{(u)} \big|^s \cdot |T_{w u}|^s \cdot \Big( \big| z + T_{w u}^2 R_{ww}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (m-1)} \big| R_{0w_-}^{(w)} \big|^s \cdot |T_{w_- w}|^s \cdot \mathbb{E}_w \bigg[ \displaystyle\sum_{u \in \mathbb{D} (w)} \displaystyle\frac{|T_{wu}|^s}{\Big( \big| z + T_{wu}^2 R_{ww}^{(u)} \big| + 1 \Big)^s} \cdot \big| R_{ww}^{(u)} \big|^s\bigg] \Bigg].
\end{aligned}
\end{flalign}
Now, observe from \eqref{qvv} that
\begin{flalign*}
R_{ww}^{(u)} = - \big( z + T_{w_- w}^2 R_{w_- w_-}^{(w)} + g_u \big)^{-1}, \quad \text{where} \quad g_u = \displaystyle\sum_{\substack{v \in \mathbb{D} (w) \\ v \ne u}} T_{wv}^2 R_{vv}^{(w)},
\end{flalign*}
\noindent and let $(R_v)_{v \sim w}$ denote a family of independent, identically distributed random variables, each with law $R_{00}$. Then, for any $w \in \mathbb{V} (m-1)$, the law of the $\big( R_{vv}^{(w)} \big)$ is the same as that of the $(R_v)$ (by the first part of \Cref{q12}). So,
\begin{flalign}
\label{expectationtw1}
\begin{aligned}
\mathbb{E}_w & \Bigg[ \displaystyle\sum_{u \in \mathbb{D}(w)} \displaystyle\frac{|T_{wu}|^s}{\Big( \big| z + T_{wu}^2 R_{ww}^{(u)} \big| + 1 \Big)^s} \cdot \big| R_{ww}^{(u)} \big|^s \Bigg] = \mathbb{E}_w \Bigg[ \displaystyle\sum_{u \in \mathbb{D}(w)} \displaystyle\frac{|T_{wu}|^s}{\Big( \big| z + T_{wu}^2 \widetilde{R}_{ww}^{(u)} \big| + 1 \Big)^s} \cdot \big| \widetilde{R}_{ww}^{(u)} \big|^s \Bigg],
\end{aligned}
\end{flalign}
\noindent where
\begin{flalign*}
\widetilde{R}_{ww}^{(u)} = - \big( z + T_{w_- w}^2 R_{w_- w_-}^{(w)} + \widetilde{g}_u \big)^{-1}, \quad \text{and} \quad \widetilde{g}_u = \displaystyle\sum_{\substack{v \in \mathbb{D} (w) \\ v \ne u}} T_{wv}^2 R_v.
\end{flalign*}
\noindent Again as in the proof of \Cref{xilchi2}, for any point process $\nu = (\nu_1, \nu_2, \ldots )$ on $\mathbb{R}_{> 0}$, let
\begin{flalign*}
\widetilde{R} (\nu) = - \big( z + T_{w_- w}^2 R_{w_- w_-}^{(w)} + \widetilde{g} (\nu) \big)^{-1}, \quad \text{where} \quad \widetilde{g} (\nu) = \sum_{j = 1}^{\infty} \nu_j^2 R_j,
\end{flalign*}
\noindent and the $(R_j)$ are mutually independent random variables with law $R_{00}$. Then, it follows from applying \eqref{fxi} that
\begin{flalign}
\label{expectationtw}
\begin{aligned}
\mathbb{E}_w \Bigg[ \displaystyle\sum_{u \in \mathbb{D}(w)} \displaystyle\frac{|T_{wu}|^s}{\Big( \big| z + T_{wu}^2 \widetilde{R}_{ww}^{(u)} \big| + 1 \Big)^s} \cdot \big| \widetilde{R}_{ww}^{(u)} \big|^s \Bigg] & = \alpha \displaystyle\int_0^{\infty} \mathbb{E}_w \Bigg[ \displaystyle\frac{x^s}{\Big( \big| z + x^2 \widetilde{R} (\nu) \big| + 1 \Big)^s} \cdot \big| \widetilde{R} (\nu) \big|^s \Bigg] \cdot x^{-\alpha-1} dx \\
& = \alpha \displaystyle\int_0^{\infty} \mathbb{E}_w \Bigg[ \displaystyle\frac{x^s}{\big( | z + x^2 R_{ww}| + 1 \big)^s} \cdot |R_{ww}|^s \Bigg] \cdot x^{-\alpha-1} dx,
\end{aligned}
\end{flalign}
\noindent where in the expectation $\nu$ is a Poisson point process with intensity $\alpha x^{-\alpha-1} dx$, and we have used the fact that $\widetilde{R} (\nu)$ has the same law as $R_{ww}$.
Applying \Cref{integral} in \eqref{expectationtw}, using \Cref{rproduct}, and inserting into \eqref{expectationtw1} and \eqref{expectationsumvm} yields a constant $C > 1$ such that
\begin{flalign*}
& C^{-1} \cdot \Xi_{1-m} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \\
& \qquad = C^{-1} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (m-1)} \big| R_{0w_-}^{(w)} \big|^s \cdot |T_{w_- w}|^s \cdot |R_{ww}|^{(\alpha+s)/2} \Bigg] \\
& \qquad \le \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V}(m-1)} \big| R_{0w_-}^{(w)} \big|^s \cdot |T_{w_- w}|^s \cdot \mathbb{E}_w \bigg[ \displaystyle\sum_{u \in \mathbb{D} (w)} \displaystyle\frac{|T_{wu}|^s}{\Big( \big| z + T_{wu}^2 R_{ww}^{(u)} \big| + 1 \Big)^s} \cdot \big| R_{ww}^{(u)} \big|^s \bigg] \Bigg] \\
& \qquad \le C \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (m-1)} \big| R_{0w_-}^{(w)} \big|^s \cdot |T_{w_- w}|^s \cdot |R_{ww}|^{(\alpha+s)/2} \Bigg] = C \cdot \Xi_{1-m} \bigg( \displaystyle\frac{\alpha+s}{2} \bigg).
\end{flalign*}
\noindent This, together with \Cref{xilchi2}, \Cref{xichi}, and \Cref{estimatemoment1}, yields the lemma.
\end{proof}
Now we can establish \Cref{limitr0j}.
\begin{proof}[Proof of \Cref{limitr0j}]
The first statement of the theorem follows from \Cref{estimatemoment1}, \Cref{smultiplicative}, and the P\'{o}lya--Szeg\"{o} lemma for sequences that are both approximately subadditive and superadditive \cite[Lemma 1.9.1]{steele1997probability}.
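In more detail (a brief sketch of the subadditivity argument): set $a_L = \log \big( C \cdot \Phi_{L-2} (s) \big)$ for integers $L \ge 3$, where $C$ is the constant from \Cref{smultiplicative}. Then, for any $\ell, m \ge 2$, \Cref{smultiplicative} (applied with $\ell - 1$ and $m - 1$ there) gives
\begin{flalign*}
a_{\ell + m} = \log C + \log \Phi_{\ell + m - 2} (s) \le 2 \log C + \log \Phi_{\ell - 2} (s) + \log \Phi_{m - 2} (s) = a_{\ell} + a_m,
\end{flalign*}
\noindent so $(a_L)$ is subadditive; the symmetric quantity $a_L' = \log \big( C^{-1} \cdot \Phi_{L-2} (s) \big)$ is likewise superadditive. Fekete's lemma then shows that $\lim_{L \rightarrow \infty} L^{-1} \log \Phi_L (s)$ exists, and the second part of \Cref{estimatemoment1} ensures that this limit is finite.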
To establish the second, observe for any integer $L \ge 1$ that the function $\varphi_L (s; z)$ is convex in $s$, since $\mathbb{E} \big[ |R_{0v}|^s \big]$ is convex in $s$ for any $v \in \mathbb{V} (L)$ (by Young's inequality for products). Thus, $\varphi (s; z)$ is convex in $s$ as well, as it is a limit of convex functions. Further, for $\alpha < s < s' < 1$ we have $\mathbb{E} \big[ |R_{0v}|^{s'} \big] \le \eta^{s - s'} \cdot \mathbb{E} \big[ |R_{0v}|^s \big]$ for each $v \in \mathbb{V} (L)$ (by the second part of \Cref{q12}). Taking logarithms, dividing by $L$, and letting $L$ tend to $\infty$, we deduce that $\varphi (s; z)$ is nonincreasing in $s$; this verifies the second statement of the theorem.
The third follows from the uniformity of the constant $C$ from \Cref{smultiplicative} in $\Imaginary z$.
The fourth and fifth parts of the theorem follow from the second and third parts of \Cref{estimatemoment1}, respectively.
\end{proof}
\section{Restricted Moment Estimates}
\label{MomentEvent}
In this section we estimate fractional moments $|R_{0v}|^s$ of the resolvent, upon restricting to certain events. In \Cref{EventR} we define these events and establish the ``restricted fractional moment bounds,'' conditional on several estimates that will be shown in \Cref{ProofB}, \Cref{ProofG}, and \Cref{ProofRY}. Throughout this section, we fix a real number $s \in (\alpha, 1)$ and a complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$ with $\eta \in (0, 1)$ and $\varepsilon \le |E| \le B$. Below, constants may implicitly depend on $s$, $\varepsilon$, and $B$, even when not explicitly stated.
\subsection{Events and Restricted Moments}
\label{EventR}
In this section we define events on which certain tree weights and resolvent entries are bounded, and we analyze fractional moments of off-diagonal resolvent entries restricted to such events. We begin by defining these events. The first concerns the event on which exactly one child $w$ of a vertex $v$ has $|T_{vw}| \in [1, 2]$; this will later be useful in specifying paths with large resolvent entries. The second defines events on which the tree weights $|T_{uu_+}|$ are bounded from below, and on which the diagonal resolvent entries are bounded from above; in view of \Cref{rproduct} and \eqref{qvv}, this will also later be useful in locating large off-diagonal resolvent entries.
\begin{definition}
\label{eventg}
For any vertex $v \in \mathbb{V}$, we define the vertex set $\mathcal{Y} (v) \subset \mathbb{V}$ and the event $\mathscr{G}_0 (v)$ by
\begin{flalign*}
\mathcal{Y} (v) = \big\{ w \in \mathbb{D} (v) : |T_{vw}| \in [1, 2] \big\}; \qquad \mathscr{G}_0 (v) = \Big\{ \big| \mathcal{Y} (v) \big| = 1 \Big\}.
\end{flalign*}
\noindent On the event $\mathscr{G}_0 (v)$, let $\mathfrak{c} = \mathfrak{c} (v)$ denote the unique child of $v$ such that $|T_{v\mathfrak{c}}| \in [1, 2]$.
\end{definition}
\begin{definition}
\label{br0vw}
Fix $z \in \mathbb{H}$; let $v, w \in \mathbb{V}$ be vertices with $v \prec w$; and let $\omega \in (0,1)$ and $\Omega > 1$ be real numbers. For any vertex $u \in \mathbb{V}$ with $v \preceq u \prec w$, define the events $\mathscr{D} (u; v, w) = \mathscr{D} (u; v, w; \omega)$ and $\mathscr{D} (v, w) = \mathscr{D} (v, w; \omega)$ by
\begin{flalign*}
\mathscr{D} (u; v, w) = \big\{ |T_{uu_+}| \ge \omega \big\}; \qquad \mathscr{D} (v, w) = \bigcap_{v \preceq u \prec w} \mathscr{D} (u; v, w).
\end{flalign*}
\noindent Next, if $\mathscr{G}_0 (w)$ holds, then denote $w_+ = \mathfrak{c} (w)$. Then, define the events $\mathscr{B}_0 (v, w) = \mathscr{B}_0 (v, w; \omega; \Omega) = \mathscr{B}_0 (v, w; \omega; \Omega; z)$; $\mathscr{B}_1 (v, w) = \mathscr{B}_1 (v, w; \omega; \Omega) = \mathscr{B}_1 (v, w; \omega; \Omega; z)$; and $\mathscr{B} (v, w) = \mathscr{B} (v, w; \omega; \Omega) = \mathscr{B} (v, w; \omega; \Omega; z)$ by
\begin{flalign*}
& \mathscr{B}_0 (v, w) = \mathscr{G}_0 (w) \cap \Big\{ \big| R_{ww}^{(w_+)} (z) \big| \le \Omega \Big\} \cap \mathscr{D} (v, w; \omega); \\
& \mathscr{B}_1 (v, w) = \mathscr{G}_0 (w) \cap \Big\{ \big| R_{vv}^{(v_-, w_+)} (z) \big| \le \Omega \Big\} \cap \mathscr{D} (v, w; \omega),
\end{flalign*}
\noindent and $\mathscr{B} (v, w) = \mathscr{B}_0 (v, w) \cap \mathscr{B}_1 (v, w)$. Here we used the convention that $0_- = \emptyset$, so that $R_{00}^{(0_-, w_+)} (z) = R_{00}^{(w_+)} (z)$, for example.
\end{definition}
We next have the following proposition, which indicates that restricting to the events $\mathscr{B} (v, w)$ does not substantially decrease the sum of fractional moments $|R_{vw}|^s$, if $\omega$ and $\Omega$ are sufficiently small and large, respectively.
\begin{prop}
\label{expectationr0vsd}
For any real number $\delta \in (0, 1)$, there exist constants $c > 0$ (independent of $\delta$), $\omega = \omega (\delta) \in (0, 1)$, and $\Omega = \Omega (\delta) > 1$ such that the following holds. For any integer $L \ge 1$ and vertex $v \in \mathbb{V}$, we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_L (v)} \big| R_{vw}^{(v_-, w_+)} (z) \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)} \Bigg] \ge c \cdot \exp \Big( L \cdot \big( \varphi (s; z) - \delta \big) \Big).
\end{flalign*}
\end{prop}
We will deduce \Cref{expectationr0vsd} as a consequence of the following two propositions. The former will be established in \Cref{ProofB} below and the latter in \Cref{ProofG} below.
\begin{prop}
\label{expectationbr}
For any real number $\delta > 0$, there exists a constant $\Omega = \Omega (\delta) > 1$ such that, for any $\omega \in \big( 0, \frac{1}{2} \big)$, we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w)} \Bigg] \ge (1 - \delta) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg].
\end{flalign*}
\end{prop}
\begin{prop}
\label{dexpectationg}
For any real number $\delta > 0$, there exist constants $c > 0$ and $ \omega = \omega (\delta) \in (0, 1)$ such that
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v,w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \ge c (1 - \delta)^{\ell} \cdot \Phi_{\ell} (s).
\end{flalign*}
\end{prop}
\begin{proof}[Proof of \Cref{expectationr0vsd}]
This follows from \Cref{expectationbr}, \Cref{dexpectationg}, and \eqref{limitr0j2}.
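In more detail (a brief sketch of how the three inputs combine): applying \Cref{expectationbr} and \Cref{dexpectationg} with $\frac{\delta}{3}$ in place of $\delta$ yields
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_L (v)} \big| R_{vw}^{(v_-, w_+)} (z) \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)} \Bigg] \ge c \bigg( 1 - \displaystyle\frac{\delta}{3} \bigg)^{L + 1} \cdot \Phi_L (s),
\end{flalign*}
\noindent and \eqref{limitr0j2} provides a constant $c' > 0$ with $\Phi_L (s) \ge c' \cdot \exp \big( L \big( \varphi (s; z) - \frac{\delta}{3} \big) \big)$. Since $\big( 1 - \frac{\delta}{3} \big)^{L+1} \ge \frac{2}{3} \cdot \exp \big( - \frac{L \delta}{2} \big)$ for $\delta \in (0, 1)$, combining the two bounds gives the claimed estimate, with exponent $\varphi (s; z) - \frac{5 \delta}{6} \ge \varphi (s; z) - \delta$, after decreasing $c$.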
\end{proof}
\subsection{Proof of \Cref{expectationbr}}
\label{ProofB}
In this section we establish \Cref{expectationbr}, which will follow from the following two lemmas.
\begin{lem}
\label{ruw}
For any real number $\delta > 0$, there exists a constant $\Omega = \Omega (\delta) > 1$ such that the following two statements hold whenever $\omega \in \big(0, \frac{1}{2} \big)$.
\begin{enumerate}
\item We have
\begin{flalign}
\label{rvw10}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_0 (v, w; \omega; \Omega)} \Bigg] \ge (1 - \delta) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D}(v,w; \omega)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg].
\end{flalign}
\item We have
\begin{flalign}
\label{rvw2}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-)} \big|^s \cdot \big| & R_{ww}^{(v_-)} \big|^{(\alpha-s)/2} \cdot \mathbbm{1}_{|R_{ww}^{(v_-)}| \le 1/\Omega} \cdot \mathbbm{1}_{\mathscr{D} (v, w; \omega)} \Bigg] \\
& \ge (1 - \delta) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-)} \big|^s \cdot \big| R_{ww}^{(v_-)} \big|^{(\alpha-s)/2} \cdot \mathbbm{1}_{\mathscr{D} (v, w; \omega)} \Bigg].
\end{aligned}
\end{flalign}
\end{enumerate}
\end{lem}
\begin{lem}
\label{expectationrvwdelta}
For any $\delta > 0$, there exists a constant $\Omega = \Omega (\delta) > 1$ for which the following two statements hold.
\begin{enumerate}
\item We have
\begin{flalign}
\label{rv3}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_1 (v, w; \omega; \Omega)} \Bigg] \ge (1 - \delta) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w; \omega)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg].
\end{flalign}
\item We have
\begin{flalign}
\label{rv4}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w; \omega)} \cdot \mathbbm{1}_{|R_{00}| \le \Omega} \Bigg] \ge (1 - \delta) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w; \omega)} \Bigg].
\end{flalign}
\end{enumerate}
\end{lem}
\begin{proof}[Proof of \Cref{expectationbr}]
For any events $\mathscr A, \mathscr B, \mathscr C$ with $\mathscr B \subseteq \mathscr A$ and $\mathscr C \subseteq \mathscr A$, we have the pointwise inequality $\mathbbm{1}_{\mathscr A} - \mathbbm{1}_{\mathscr B \cap \mathscr C } \leq (\mathbbm{1}_{\mathscr A} - \mathbbm{1}_{\mathscr B}) + (\mathbbm{1}_{\mathscr A} - \mathbbm{1}_{\mathscr C})$: off $\mathscr A$ both sides vanish, while on $\mathscr A$, if both $\mathscr B$ and $\mathscr C$ hold then both sides vanish, and otherwise the left side is at most $1$ and the right side is at least $1$. Since $\mathscr{B} (v, w) = \mathscr{B}_0 (v, w) \cap \mathscr{B}_1 (v, w)$, and both $\mathscr{B}_0 (v, w)$ and $\mathscr{B}_1 (v, w)$ are contained in $\mathscr{D} (v, w; \omega) \cap \mathscr{G}_0 (w)$, we have
\begin{align*}
\mathbb{E}& \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot
(
\mathbbm{1}_{\mathscr{D} (v, w; \omega)\cap \mathscr{G}_0 (w)}- \mathbbm{1}_{ \mathscr{B} (v, w)}
) \Bigg] \\
&\leq \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot (\mathbbm{1}_{\mathscr{D} (v, w; \omega)\cap \mathscr{G}_0 (w)} -\mathbbm{1}_{ \mathscr{B}_0 (v, w)}) \Bigg]\\
&+ \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot (\mathbbm{1}_{\mathscr{D} (v, w; \omega)\cap \mathscr{G}_0 (w)} -\mathbbm{1}_{ \mathscr{B}_1 (v, w)}) \Bigg].
\end{align*}The proposition then follows from \eqref{rvw10} and \eqref{rv3}.
\end{proof}
Now we establish \Cref{ruw} and \Cref{expectationrvwdelta}.
\begin{proof}[Proof of \Cref{ruw}]
The proofs of \eqref{rvw10} and \eqref{rvw2} are very similar, so we only establish the first. Applying \Cref{rproduct} and \eqref{qvv} yields
\begin{flalign}
\label{rvwb0}
\begin{aligned}
\mathbb{E} & \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_0 (v, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}_{\ell-1} (v)} \displaystyle\sum_{w \in \mathbb{D}(u)} \big| R_{vu}^{(v_-, w)} \big|^s \cdot |T_{uw}|^s \cdot \big| R_{ww}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_0 (v, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}_{\ell-1} (v)} \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\big| z + T_{uw}^2 R_{uu}^{(v_-, w)} + K_u \big|^s } \cdot \big| R_{vu}^{(v_-, w)} \big|^s\cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \cdot \mathbbm{1}_{|R_{ww}^{(w_+)}| \le \Omega} \Bigg],
\end{aligned}
\end{flalign}
\noindent where $K_u = K_{u; v, w}$ is defined by
\begin{flalign*}
K_u = \displaystyle\sum_{\substack{u' \in \mathbb{D} (w) \\ u' \ne \mathfrak{c} (w)}} T_{wu'}^2 R_{u'u'}^{(w)}.
\end{flalign*}
\noindent Observe that $K_{u; v, w}$ is independent of $T_{uw}$, $R_{uu}^{(v_-, w)}$, and $R_{vu}^{(v_-, w)}$, as well as of the event $\mathscr{D} (v, w)$. Moreover, conditional on $\mathscr{G}_0 (w)$, the random variable $K_{u; v, w}$ satisfies \Cref{sqk} (by \Cref{tuvalpha}, \Cref{zuv}, and \Cref{ar}). Additionally, again by \eqref{qvv} we have the equivalence of events
\begin{flalign*}
\Big\{ \big| R_{ww}^{(w_+)} \big| \ge \Omega \Big\} = \big\{ |Q_u + K_u| \le \Omega^{-1} \big\}, \qquad \text{where} \qquad Q_u = z + T_{uw}^2 R_{uu}^{(w)}.
\end{flalign*}
Thus, applying the lower bound in the first part of \Cref{zxrkestimate}, and the fourth part of the same lemma (with the $K$ there equal to $K_u$ here and the $J$ there equal to $[-\Omega^{-1} - \Real Q_u, \Omega^{-1} - \Real Q_u]$), and using the independence between $K_u$ and $Q_u$, yields a constant $C > 1$ such that
\begin{flalign*}
\mathbb{E}_w \Big[ \big| z + T_{uw}^2 & R_{uu}^{(v_-, w)} + K_u \big|^{-s} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \cdot \mathbbm{1}_{|R_{ww}^{(w_+)}| > \Omega} \Big] \\
& \le C \Omega^{s-1} \cdot \mathbb{E}_w \Big[ \big| z + T_{uw}^2 R_{uu}^{(v_-, w_+)} + K_u \big|^{-s} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Big],
\end{flalign*}
which yields
\begin{flalign*}
\mathbb{E}_w \Big[ \big| z + T_{uw}^2 & R_{uu}^{(v_-, w)} + K_u \big|^{-s} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \cdot \mathbbm{1}_{|R_{ww}^{(w_+)}| \le \Omega} \Big] \\
& \ge (1 - C \Omega^{s-1}) \cdot \mathbb{E}_w \Big[ \big| z + T_{uw}^2 R_{uu}^{(v_-, w_+)} + K_u \big|^{-s} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Big].
\end{flalign*}
\noindent Inserting this into \eqref{rvwb0} and then applying the same reasoning as in \eqref{rvwb0} yields
\begin{flalign*}
\mathbb{E} \Bigg[ & \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_0 (v, w)}\Bigg] \\
& \ge (1 - C \Omega^{s-1}) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}_{\ell-1} (v)} \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\big| z + T_{uw}^2 R_{uu}^{(v_-, w)} + K_u \big|^s } \cdot \big| R_{vu}^{(v_-, w)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \\
& = (1 - C \Omega^{s-1}) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg],
\end{flalign*}
\noindent from which we deduce \eqref{rvw10} by taking $\Omega = \Omega (\delta)$ sufficiently large so that $C \Omega^{s-1} \le \delta$.
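Explicitly, since $s < 1$ (so that $\Omega^{s-1}$ is decreasing in $\Omega$), one may for instance take
\begin{flalign*}
\Omega (\delta) = \Big( \displaystyle\frac{C}{\delta} \Big)^{1/(1-s)},
\end{flalign*}
\noindent for which $C \Omega^{s-1} = \delta$.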
As mentioned previously, the proof of \eqref{rvw2} is very similar (by observing that the random variable $K_{u; v, w}' = \sum_{u' \in \mathbb{D}(w)} T_{wu'}^2 R_{u' u'}^{(w)}$ also satisfies \Cref{sqk}, and then applying the first and third parts of \Cref{zxrkestimate}).
\end{proof}
\begin{proof}[Proof of \Cref{expectationrvwdelta}]
The proofs of \eqref{rv3} and \eqref{rv4} are very similar, and so we only establish the former. To that end, we have by \Cref{rproduct} and \eqref{qvv} that
\begin{flalign}
\label{b1sum2}
\begin{aligned}
\mathbb{E} & \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_1 (v, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D} (v)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-1} (u)} \big| R_{vv}^{(v_-, w_+)} \big|^s \cdot |T_{vu}|^s \cdot \big| R_{uw}^{(v, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B}_1 (v, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}(v)} \displaystyle\frac{|T_{vu}|^s}{\big| z + T_{vu}^2 R_{uu}^{(v, w_+)} + K_{u; v, w} \big|^s} \cdot \mathbbm{1}_{|R_{vv}^{(v_-, w_+)}| \le \Omega} \displaystyle\sum_{w \in \mathbb{D}_{\ell-1} (u)} \big| R_{uw}^{(v, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg],
\end{aligned}
\end{flalign}
\noindent where
\begin{flalign*}
K_{u; v, w} = \displaystyle\sum_{\substack{v' \in \mathbb{D} (v) \\ v' \ne u}} T_{vv'}^2 R_{v'v'}^{(v)}.
\end{flalign*}
\noindent Observe that $K_{u; v, w}$ is independent of $T_{vu}$, $R_{uu}^{(v, w_+)}$, $R_{uw}^{(v, w_+)}$, $\mathscr{D} (v, w)$, and $\mathscr{G}_0 (w)$. Therefore, applying the first and third parts of \Cref{zxrkestimate} (with the $K$ there equal to $K_{u; v, w}$ here) yields a constant $C > 1$ such that
\begin{flalign*}
\mathbb{E} & \Bigg[ \displaystyle\sum_{u \in \mathbb{D} (v)} \displaystyle\frac{|T_{vu}|^s}{\big| z + T_{vu}^2 R_{uu}^{(v, w_+)} + K_{u; v, w} \big|^s} \cdot \mathbbm{1}_{|R_{vv}^{(v_-, w_+)}| \le \Omega} \displaystyle\sum_{w \in \mathbb{D}_{\ell-1} (u)} \big| R_{uw}^{(v, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \\
& \ge (1 - C \Omega^{s - 1}) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D} (v)} \displaystyle\frac{|T_{vu}|^s}{\big| z + T_{vu}^2 R_{uu}^{(v, w_+)} + K_{u; v, w} \big|^s} \displaystyle\sum_{w \in \mathbb{D}_{\ell-1} (u)} \big| R_{uw}^{(v, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \\
& = (1 - C \Omega^{s -1}) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg],
\end{flalign*}
\noindent where the last equality is obtained through the same reasoning as in \eqref{b1sum2}. Then \eqref{rv3} follows from selecting $\Omega$ sufficiently large so that $C \Omega^{s-1} < \delta$.
\end{proof}
\subsection{Proof of \Cref{dexpectationg}}
\label{ProofG}
In this section we establish \Cref{dexpectationg}, which will be a consequence of the following two lemmas. The first will be established in this section, and the second in \Cref{ProofRY} below.
\begin{lem}
The following two claims hold.
\label{ryestimate}
\begin{enumerate}
\item
There exists a constant $c > 0$ such that for any sufficiently small $\omega \in \big( 0, \frac{1}{2} \big)$ we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w; \omega)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \ge c \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}_{\ell-1} (v)} \big| R_{vu}^{(v_-)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, u; \omega)} \Bigg].
\end{flalign*}
\item There exists a constant $C>0$ such that
\begin{equation}
\mathbb{E} \left[\sum_{w \in \mathbb D_\ell (v)} |R_{vw}^{(v_-, w_+)}|^s \cdot \mathbbm{1}_{\mathscr G_0 (w)}\right] \le C \cdot \mathbb{E}\left[ \sum_{u \in \mathbb D_{\ell-1} (v)} | R_{vu}^{(v_-)}|^s \right].
\end{equation}
\end{enumerate}
\end{lem}
\begin{lem}
\label{ry0estimate}
For any $\delta > 0$, there exists a constant $\omega = \omega (\delta) > 0$ such that
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w; \omega)} \Bigg] \ge (1 - \delta)^{\ell} \cdot \Phi_{\ell} (s).
\end{flalign*}
Moreover, there exist constants $C > 1$ and $c \in (0, 1)$ such that $\omega \ge c \cdot \delta^C$.
\end{lem}
\begin{proof}[Proof of \Cref{dexpectationg}]
This follows from \Cref{ryestimate} and \Cref{ry0estimate}.
\end{proof}
Now let us establish \Cref{ryestimate}.
\begin{proof}[Proof of \Cref{ryestimate} (Outline)]
Since the joint law of $\big( R_{vw}^{(v_-, w_+)}, \mathscr{D} (v, w), \mathscr{G}_0 (w) \big)$ for $w \in \mathbb{D}_{\ell} (v)$ is the same as that of $\big( R_{0w}^{(w_+)}, \mathscr{D} (0, w), \mathscr{G}_0 (w) \big)$ for $w \in \mathbb{V} (\ell)$, it suffices to address the case when $v = 0$. This proof will proceed similarly to that of \Cref{xilchi2}, so we only outline it. Further, we only prove the first part of the lemma, since the proof of the second part is similar.
Following \eqref{xi1} and \eqref{xil2} there, we find that
\begin{flalign}
\label{expectationwv1}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell)} \big| R_{0w}^{(w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] & = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - 1)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, u)} \\
& \quad \times \mathbb{E}_w \bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s \cdot \big| \widetilde{R}_{uu}^{(w)} \big|^s}{\big| z + T_{uw}^2 \widetilde{R}_{uu}^{(w)} + K \big|^s} \cdot \mathbbm{1}_{|T_{uw}| \ge \omega} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \bigg] \Bigg],
\end{aligned}
\end{flalign}
\noindent where
\begin{flalign*}
\widetilde{R}_{uu}^{(w)} = - \big( z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + \widetilde{g}_w \big)^{-1}; \qquad K = \displaystyle\sum_{\substack{v \in \mathbb{D} (w) \\ v \ne \mathfrak{c} (w)}} T_{wv}^2 R_v; \qquad \widetilde{g}_w = \displaystyle\sum_{\substack{v \in \mathbb{D} (u) \\ v \ne w}} T_{uv}^2 R_v',
\end{flalign*}
\noindent and $(R_v)_{v \in \mathbb{D} (w)}$ and $(R_v')_{v \in \mathbb{D} (u)}$ are mutually independent, identically distributed random variables with law $R_{00}$. Observe in particular that $\widetilde{R}_{uu}^{(w)}$ has the same law as $R_{uu}$ (conditional on $\mathbb{T}_- (w)$).
Now, observe (from \Cref{tuvalpha}) that there exists a constant $c_1 > 0$ for which $\mathbb{P}\big[ \mathscr{G}_0 (w) \big] \ge c_1$. Since, conditional on $\mathscr{G}_0 (w)$, the random variable $K$ satisfies \Cref{sqk} by \Cref{zuv} (and \Cref{tuvalpha} and \Cref{ar}), it follows from the first statement of \Cref{zxrkestimate} that there exists a constant $c_2 > 0$ such that
\begin{flalign}
\label{sumtw}
\begin{aligned}
\mathbb{E}_w & \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s \cdot \big| \widetilde{R}_{uu}^{(w)} \big|^s}{\big| z + T_{uw}^2 \widetilde{R}_{uu}^{(w)} + K \big|^s} \cdot \mathbbm{1}_{|T_{uw}| \ge \omega} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \\
& \ge c_2 \cdot \mathbb{E}_w \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} \displaystyle\frac{|T_{uw}|^s}{\Big( \big| z + T_{uw}^2 \widetilde{R}_{uu}^{(w)} \big| + 1 \Big)^s } \cdot \big| \widetilde{R}_{uu}^{(w)} \big|^s \cdot \mathbbm{1}_{|T_{uw}| \ge \omega} \Bigg].
\end{aligned}
\end{flalign}
\noindent Since $\widetilde{R}_{uu}^{(w)}$ does not depend on $T_{uw}$, we may proceed as in \eqref{xil3} (through applying \eqref{fxi}) to deduce
\begin{flalign}
\label{integralsumomega1}
\begin{aligned}
\mathbb{E}_w \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (u)} & \displaystyle\frac{|T_{uw}|^s}{\Big( \big| z + T_{uw}^2 \widetilde{R}_{uu}^{(w)} \big| + 1 \Big)^s } \cdot \big| \widetilde{R}_{uu}^{(w)} \big|^s \cdot \mathbbm{1}_{|T_{uw}| \ge \omega} \Bigg] \\
& = \alpha \displaystyle\int_{\omega}^{\infty} \mathbb{E}_w \Bigg[ \displaystyle\frac{x^s}{\big( |z + x^2 R_{uu}| + 1 \big)^s} \cdot |R_{uu}|^s \Bigg] \cdot x^{-\alpha-1} dx.
\end{aligned}
\end{flalign}
Next, let $\varsigma \in (0, 1)$ be a real number (to be determined later). By \Cref{integral} and \Cref{integral3}, there exists a constant $C > 1$ such that
\begin{flalign}
\label{integralomega1}
\begin{aligned}
\displaystyle\int_{\omega}^{\infty} \mathbb{E}_w & \Bigg[ \displaystyle\frac{x^s}{|z + x^2 R_{uu}|^s} \cdot |R_{uu}|^s \Bigg] \cdot x^{-\alpha-1} dx \ge \mathbbm{1}_{|R_{uu}| \le 1/\varsigma} \cdot \big(1 - C (\omega \varsigma^{-1})^{s-\alpha} \big) \cdot |R_{uu}|^{(\alpha+s)/2}.
\end{aligned}
\end{flalign}
\noindent By \eqref{expectationwv1}, \eqref{integralomega1}, \eqref{integralsumomega1}, \eqref{sumtw}, \Cref{rproduct}, and \eqref{rvw2}, we deduce
\begin{flalign*}
\mathbb{E} & \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell)} \big| R_{0w}^{(w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \cdot \mathbbm{1}_{\mathscr{G}_0 (w)} \Bigg] \\
& \ge c_2 \big( 1 - 2C (\omega \varsigma^{-1})^{s-\alpha} \big) \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot |R_{uu}|^{(\alpha+s)/2} \cdot \mathbbm{1}_{\mathscr{D} (0, u)} \cdot \mathbbm{1}_{|R_{uu}| \le 1/\varsigma} \Bigg] \\
& \ge \displaystyle\frac{c_2}{2} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} |R_{0u}|^s \cdot |R_{uu}|^{(\alpha-s)/2} \cdot \mathbbm{1}_{\mathscr{D} (0,u)} \cdot \mathbbm{1}_{|R_{uu}| \le 1/\varsigma}\Bigg] \\
& \ge \displaystyle\frac{c_2}{2} \cdot \varsigma^{(s-\alpha)/2} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-1)} |R_{0u}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, u)} \Bigg],
\end{flalign*}
\noindent where the second inequality holds by fixing $\omega$ sufficiently small with respect to $\varsigma$, and the third holds by fixing $\varsigma$ sufficiently small (using \eqref{rvw2}). This yields the lemma.
\end{proof}
\subsection{Proof of \Cref{ry0estimate}}
\label{ProofRY}
In this section we establish \Cref{ry0estimate}, which will be a quick consequence of the following two lemmas.
\begin{lem}
\label{ry0estimate2}
For any real number $\delta > 0$, there exist constants $c > 0$ (independent of $\delta$) and $\omega = \omega (\delta) > 0$ such that for any integer $k \in \unn{0}{\ell}$ we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (u,w)} \Bigg] \ge c \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k-1)} | R_{0u}|^s \Bigg] \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell-k-1)} |R_{0u}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, u)}\Bigg].
\end{flalign*}
\end{lem}
\begin{lem}
\label{ry0estimate3}
For any real number $\delta > 0$, there exist constants $C > 1$ (independent of $\delta$) and $\omega = \omega (\delta) > 0$ such that for any integer $k \in \unn{0}{\ell}$ we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} & \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+,w)} \Bigg] \\
& \le C \delta \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k-1)} | R_{0u}|^s \Bigg] \cdot \mathbb{E}\Bigg[ \displaystyle\sum_{u \in \mathbb{V} (\ell - k- 1)} | R_{0u}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, u)} \Bigg],
\end{flalign*}
\noindent where $u_+ \in \mathbb{D} (u)$ denotes the unique child of $u$ in the path $\mathfrak{p} (u, w)$ between $u$ and $w$. Moreover, there exists a constant $c \in (0, 1)$ such that $\omega \ge c \cdot \delta^C$.
\end{lem}
\begin{proof}[Proof of \Cref{ry0estimate}]
Let $\delta >0$ be a parameter. Combining \Cref{ry0estimate2} and \Cref{ry0estimate3},
we deduce that there exists a constant $\omega = \omega (\delta) > 0$ such that for any $k \in \unn{0}{\ell}$ we have
\begin{equation*}
\mathbb{E} \left[ \sum_{u \in \mathbb{V}(k)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)} \right]
\le c^{-1} C \delta \cdot \mathbb{E}\left[ \sum_{u \in \mathbb{V}(k)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D}(u,w)} \right],
\end{equation*}
where $c$ and $C$ are from \Cref{ry0estimate2} and \Cref{ry0estimate3}, respectively. Since $\big( R_{vw}^{(v_-)} \big)_{w \in \mathbb{D}_m (v)}$ has the same law as $(R_{0w})_{w \in \mathbb{V} (m)}$ for any integer $m \ge 0$, this is equivalent to
\begin{flalign*}
\mathbb{E} \Bigg[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} & | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)} \Bigg] \\
& \le c^{-1} C \delta \cdot \mathbb{E}\left[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D}(u,w)} \right],
\end{flalign*}
for any $v \in \mathbb{V}$.
Observe from the definition of $\mathscr D(u, w)$ that $\mathscr{D} (u, w) = \mathscr{D} (u_+, w) \cap \big\{ |T_{uu_+}| \ge \omega \big\}$, and hence
\begin{equation*}
\mathbbm{1}_{\mathscr{D} (u_+, w)} - \mathbbm{1}_{\mathscr{D}(u, w)} = \mathbbm{1}_{\mathscr{D}(u_+, w)} \cdot \mathbbm{1}_{|T_{uu_+}| < \omega}.\end{equation*}
\noindent Thus, adding
\begin{equation*}
\mathbb{E}\left[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in \mathbb D_{\ell-k} (u)} | R_{vw}^{(v_-)}|^s \cdot \mathbbm{1}_{\mathscr{D}(u, w)} \right]
\end{equation*}
to both sides of the previous bound gives
\begin{equation*}
\mathbb{E} \left[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in \mathbb D_{\ell-k} (u)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)} \right]
\le (1 + c^{-1} C\cdot\delta) \cdot \mathbb{E}\left[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D}(u,w)} \right].\end{equation*}
\noindent This inequality implies that
\begin{align*}
\mathbb{E} & \left[ \sum_{u \in \mathbb{D}_k (v)} \sum_{w \in {\mathbb D}_{\ell-k} (u)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D}(u,w)} \right] \\
&\ge (1 + c^{-1} C\cdot\delta)^{-1} \cdot \mathbb{E}\left[ \sum_{u_+ \in {\mathbb D}_{k+1} (v)} \sum_{w \in {\mathbb D}_{\ell-k-1} (u_+)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D}(u_+, w)} \right]\\
&\ge (1 - c^{-1} C\cdot\delta) \cdot \mathbb{E}\left[ \sum_{u \in {\mathbb D}_{k+1} (v)} \sum_{w \in {\mathbb D}_{\ell-k-1} (u)} | R_{vw}^{(v_-)} |^s \cdot \mathbbm{1}_{\mathscr{D}(u, w)} \right],\end{align*}
\noindent where in the first inequality we observed that the sum over $u \in \mathbb D_k (v)$ is equivalent to the sum over $u_+ \in \mathbb D_{k+1} (v)$, and in the second inequality we used the fact that $(1+a)^{-1} \ge 1-a$, and we also replaced $u_+$ by $u$. Hence,
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{D}_k (v)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} \big| R_{vw}^{(v_-)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (u,w)} \Bigg] \ge (1 - c^{-1} C \cdot \delta) \cdot \mathbb{E}\Bigg[ \displaystyle\sum_{u \in \mathbb{D}_{k+1} (v)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k-1} (u)} \big| R_{vw}^{(v_-)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (u,w)} \Bigg].
\end{flalign*}
\noindent Applying this bound $\ell$ times (to increase the parameter $k$ from $0$ to $\ell$), we deduce that
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \Bigg] \ge (1 - c^{-1} C \cdot \delta)^{\ell} \cdot \mathbb{E}\Bigg[ \displaystyle\sum_{w \in \mathbb{D}_{\ell} (v)} \big| R_{vw}^{(v_-)} \big|^s \Bigg] = (1 - c^{-1} C \cdot \delta)^{\ell} \cdot \Phi_{\ell} (s),
\end{flalign*}
\noindent where for the last equality we again used the fact that $R_{vw}^{(v_-)}$ for $w \in \mathbb{D}_{\ell} (v)$ and $R_{0w}$ for $w \in \mathbb{V} (\ell)$ have the same law; this establishes the lemma.
\end{proof}
Next we establish \Cref{ry0estimate2} and \Cref{ry0estimate3}.
\begin{proof}[Proof of \Cref{ry0estimate2}]
The proof of this lemma is similar to that of \Cref{smultiplicative}. In particular, observe from \Cref{rproduct} and \eqref{qvv} that
\begin{flalign}
\label{expectationwt0}
\begin{aligned}
\mathbb{E} \Bigg[ & \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (u, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V}(k)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot | R_{uu}|^s \displaystyle\sum_{v \in \mathbb{D} (u)} |T_{uv}|^s \displaystyle\sum_{w \in \mathbb{D}_{\ell-k-1} (v)} \big| R_{vw}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (u, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \mathbb{E}_u \bigg[ \displaystyle\sum_{v \in \mathbb{D} (u)} \displaystyle\frac{|T_{uv}|^s \cdot \mathbbm{1}_{|T_{uv}| \ge \omega}}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uv}^2 R_{vv}^{(u)} + K_{u, v} \big|^s} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \displaystyle\sum_{w \in \mathbb{D}_{\ell - k-1} (v)} \big| R_{vw}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \bigg] \Bigg],
\end{aligned}
\end{flalign}
\noindent where
\begin{flalign}
\label{kuv1}
K_{u, v} = \displaystyle\sum_{\substack{v' \in \mathbb{D} (u) \\ v' \ne v, u_-}} T_{uv'}^2 R_{v' v'}^{(u)}.
\end{flalign}
\noindent To estimate the inner expectation in \eqref{expectationwt0}, fix a real number $\varsigma \in (0, 1)$ (to be determined later), and observe from \Cref{sumaxi} (and \Cref{tuvalpha} and \Cref{ar}) that $K_{u, v}$ has an $\frac{\alpha}{2}$-stable law satisfying \Cref{sqk}. Moreover, it is independent of the random variables $T_{u_- u}$, $T_{uv}$, $R_{u_- u_-}^{(u)}$, $R_{vv}^{(u)}$, and $R_{vw}^{(u)}$, as well as of the event $\mathscr{D} (v, w)$. Thus, (the $\chi = s$ case of) the first and second parts of \Cref{expectationsum1} together yield a constant $C_1 > 1$ such that
\begin{flalign*}
& \mathbb{E}_u \Bigg[ \displaystyle\sum_{v \in \mathbb{D} (u)} \displaystyle\frac{|T_{uv}|^s}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uv}^2 R_{vv}^{(u)} + K_{u, v}\big|^s} \cdot \mathbbm{1}_{|T_{uv}| \ge \omega} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k-1} (v)} \big| R_{vw}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \Bigg] \\
& \quad \ge \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \cdot \mathbb{E}_u \Bigg[ \Big( \big|R_{u_+ u_+}^{(u)} \big|^{(\alpha-s)/2} - C_1 \omega^{s-\alpha} \Big) \cdot \mathbbm{1}_{|R_{u_+ u_+}^{(u)}| \le 1/\varsigma} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \displaystyle\sum_{w \in \mathbb{D}_{\ell-k-1} (u_+)} \big| R_{u_+ w}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D}(u_+, w)} \Bigg],
\end{flalign*}
\noindent for any child $u_+$ of $u$. Inserting this into \eqref{expectationwt0}; using the fact that $\big( R_{u_+ u_+}^{(u)}, R_{u_+ w}^{(u)} \big)$ for $w \in \mathbb{D}_{\ell-k-1} (u_+)$ has the same law as $(R_{00}, R_{0w})$ for $w \in \mathbb{V} (\ell-k-1)$; and taking $\omega$ sufficiently small so that $\varsigma^{(s-\alpha)/2} > 2 C_1 \omega^{s-\alpha}$ yields
\begin{flalign}
\label{sumlk}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (u, w)} \Bigg] & \ge \displaystyle\frac{1}{2} \cdot \varsigma^{(s-\alpha)/2} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\frac{\big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \Bigg] \\
& \qquad \times \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell - k-1)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \cdot \mathbbm{1}_{|R_{00}| \le 1/\varsigma} \Bigg].
\end{aligned}
\end{flalign}
Next, by \eqref{rv4}, for sufficiently small $\varsigma$ we have
\begin{flalign}
\label{sumwvlk1}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell-k-1)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \cdot \mathbbm{1}_{|R_{00}| \le 1/\varsigma} \Bigg] \ge \displaystyle\frac{1}{2} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell-k-1)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \Bigg].
\end{flalign}
\noindent Additionally, by \Cref{m2}, there exists a constant $c_1 > 0$ such that
\begin{flalign}
\label{sumuvk1}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\frac{\big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s}{\Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^s} \Bigg] \ge c_1 \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k-1)} |R_{0u}|^s \Bigg].
\end{flalign}
\noindent Combining \eqref{sumlk}, \eqref{sumwvlk1}, and \eqref{sumuvk1} then yields the lemma.
\end{proof}
\begin{proof}[Proof of \Cref{ry0estimate3}]
The proof of this lemma will be similar to that of \Cref{ry0estimate2}. Indeed, following \eqref{expectationwt0}, we obtain
\begin{flalign}
\label{expectationwt02}
\begin{aligned}
\mathbb{E} \Bigg[ & \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)} \Bigg] \\
& = \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \mathbb{E}_u \bigg[ \displaystyle\sum_{v \in \mathbb{D} (u)} \displaystyle\frac{|T_{uv}|^s \cdot \mathbbm{1}_{|T_{uv}| < \omega}}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uv}^2 R_{vv}^{(u)} + K_{u, v} \big|^s} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \displaystyle\sum_{w \in \mathbb{D}_{\ell - k-1} (v)} \big| R_{vw}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \bigg] \Bigg],
\end{aligned}
\end{flalign}
\noindent where $K_{u, v}$ is as in \eqref{kuv1}. Using the fact that $K_{u, v}$ is independent of $T_{u_- u}$, $R_{u_- u_-}^{(u)}$, $T_{uv}$, $R_{vv}^{(u)}$, $R_{vw}^{(u)}$, and $\mathscr{D} (v, w)$, we obtain from (the $\chi = s$ case of) the second part of \Cref{expectationsum1} that there exists a constant $C_1 > 1$ such that
\begin{flalign*}
\mathbb{E}_u \Bigg[ \displaystyle\sum_{v \in \mathbb{D} (u)} & \displaystyle\frac{|T_{uv}|^s \cdot \mathbbm{1}_{|T_{uv}| < \omega}}{\big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} + T_{uv}^2 R_{vv}^{(u)} + K_{u, v} \big|^s} \cdot \displaystyle\sum_{w \in \mathbb{D}_{\ell - k-1} (v)} \big| R_{vw}^{(u)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v, w)} \Bigg] \\
& \le C_1 \omega^{s-\alpha} \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell - k-1)} | R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \Bigg],
\end{flalign*}
\noindent where we have used the fact that $R_{vw}^{(u)}$ for $w \in \mathbb{D}_{\ell-k-1} (u)$ has the same law as $R_{0w}$ for $w \in \mathbb{V} (\ell-k-1)$. Inserting this into \eqref{expectationwt02} yields
\begin{flalign}
\label{sumuw1}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} & |R_{0w}|^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)}\Bigg] \\
& < C_1 \omega^{s-\alpha} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \\
& \qquad \times \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell-k-1)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \Bigg].
\end{aligned}
\end{flalign}
Applying \Cref{m2} yields a constant $C_2 > 1$ such that
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} \big| R_{0u_-}^{(u)} \big|^s \cdot |T_{u_- u}|^s \cdot \Big( \big| z + T_{u_- u}^2 R_{u_- u_-}^{(u)} \big| + 1 \Big)^{-s} \Bigg] \le C_2 \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k-1)} | R_{0u}|^s \Bigg],
\end{flalign*}
\noindent which together with \eqref{sumuw1} yields
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k)} & \displaystyle\sum_{w \in \mathbb{D}_{\ell-k} (u)} |R_{0w}|^s \cdot \mathbbm{1}_{|T_{uu_+}| < \omega} \cdot \mathbbm{1}_{\mathscr{D} (u_+, w)}\Bigg] \\
& < C_1 C_2 \omega^{s-\alpha} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{u \in \mathbb{V} (k-1)} | R_{0u}|^s \Bigg] \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{V} (\ell-k-1)} |R_{0w}|^s \cdot \mathbbm{1}_{\mathscr{D} (0, w)} \Bigg].
\end{flalign*}
\noindent Now the lemma follows from taking $\omega$ sufficiently small so that $C_1 C_2 \omega^{s-\alpha} < \delta$.
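Explicitly, one may for instance take
\begin{flalign*}
\omega = \Big( \displaystyle\frac{\delta}{2 C_1 C_2} \Big)^{1/(s-\alpha)},
\end{flalign*}
\noindent for which $C_1 C_2 \omega^{s-\alpha} = \frac{\delta}{2} < \delta$. Since $s - \alpha \in (0, 1)$ here, this choice also verifies the final claim of the lemma that $\omega \ge c \cdot \delta^C$, with $c = (2 C_1 C_2)^{-1/(s-\alpha)}$ and $C = (s-\alpha)^{-1} > 1$.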
\end{proof}
\section{Expected Number of Large Resolvent Entries}
\label{EstimateN}
In this section we estimate the expected number of vertices $v \in \mathbb{V} (L)$ for which $|R_{0v}|$ lies above some given threshold $e^{t L}$. We begin in \Cref{EstimateNL} by using the restricted fractional moment estimate \Cref{expectationr0vsd} to bound the expected number of large resolvent entries. Then, in \Cref{EstimateNLkappa}, we provide a result (used in \Cref{ProbabilityNmu0} below) lower bounding the number of moderately sized resolvent entries, of order $e^{\kappa L}$ for some possibly negative (but bounded below) $\kappa$. Throughout this section, we recall the notation and conventions from \Cref{EventR}.
\subsection{Estimates for the Expected Number of Large $R_{vw}$}
\label{EstimateNL}
In this section we provide lower bounds on the expected number of large off-diagonal resolvent entries. We begin with the following definition, which provides notation for the set and number of vertices for which the off-diagonal resolvent entry $|R_{vw}|$ is bounded below by some threshold $e^{tL}$ (and on which the event $\mathscr{B} (v, w)$ from \Cref{br0vw} holds), where $L$ denotes the distance between $v$ and $w$.
\begin{definition}
\label{ns}
Fix a vertex $v \in \mathbb{V}$; an integer $\ell \ge 1$; and real numbers $t \in \mathbb{R}$, $\omega \in (0, 1)$, and $\Omega > 1$. Define the vertex set $\mathcal{S}_{\ell} (t) = \mathcal{S}_{\ell} (t; v) = \mathcal{S}_{\ell} (t; v; \omega; \Omega) = \mathcal{S}_{\ell} (t; v; \omega; \Omega; z)$ and integer $\mathcal{N}_{\ell} (t) = \mathcal{N}_{\ell} (t; v) = \mathcal{N}_{\ell} (t; v; \omega; \Omega) = \mathcal{N}_{\ell} (t; v; \omega; \Omega; z)$ by
\begin{flalign*}
\mathcal{S}_{\ell} (t) = \Big\{ w \in \mathbb{D}_{\ell} (v) : \big| R_{vw}^{(v_-, w_+)} \big| \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)} \ge e^{t \ell} \Big\}; \qquad \mathcal{N}_{\ell} (t) = \big| \mathcal{S}_{\ell} (t) \big|.
\end{flalign*}
\end{definition}
Before proceeding, it will be useful to have the following tail estimate on $\mathcal{N}_{\ell} (t)$; it is essentially a consequence of \Cref{treenumber} below, which is a corresponding tail estimate for the number of vertices in a Galton--Watson tree.
\begin{lem}
\label{nlestimate}
For any real number $\omega \in (0, 1)$, there exists a constant $C = C (\omega) > 1$ such that the following holds. For any integer $\ell \ge 1$, vertex $v \in \mathbb{V}$, and real numbers $t \in \mathbb{R}$ and $A, \Omega \ge 1$, we have
\begin{flalign*}
\mathbb{P} \big[ \mathcal{N}_{\ell} (t) \ge A \cdot C^{\ell} \big] \le 3e^{-A/2}.
\end{flalign*}
\end{lem}
\begin{proof}
Let $\mathcal{V} (k) = \big\{ w \in \mathbb{D}_k (v) : \text{$\mathscr{D} (v, w)$ holds} \big\}$, and set $\mathcal{V} = \bigcup_{k=0}^{\infty} \mathcal{V} (k)$. Then, $\mathcal{S}_{\ell} (t) \subseteq \mathcal{V} (\ell)$, as $\mathscr{B} (v, w) \subseteq \mathscr{D} (v, w)$ (by \eqref{br0vw}); hence, $\big| \mathcal{V} (\ell) \big| \ge \mathcal{N}_{\ell} (t)$. Moreover, $\mathcal{V}$ is a Galton--Watson tree (see \Cref{EstimateTreeVertex}) with parameter $\lambda = \alpha \cdot \int_{\omega}^{\infty} x^{-\alpha-1} dx = \omega^{-\alpha}$. Thus, it follows from \Cref{treenumber} that
\begin{flalign*}
\mathbb{P} \Big[ \big| \mathcal{V} (\ell) \big| \ge A \cdot (2\lambda)^{\ell} \Big] \le 3 e^{-A/2},
\end{flalign*}
\noindent and the lemma follows by taking $C = 2 \lambda$ and using the fact that $\mathcal{N}_{\ell} (t) \le \big| \mathcal{V} (\ell) \big|$.
\end{proof}
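\begin{rem}
Tracing the constants in the above proof, one may take $C = 2 \lambda = 2 \omega^{-\alpha}$, since
\begin{flalign*}
\lambda = \alpha \displaystyle\int_{\omega}^{\infty} x^{-\alpha - 1} \, dx = \alpha \cdot \displaystyle\frac{\omega^{-\alpha}}{\alpha} = \omega^{-\alpha}.
\end{flalign*}
\noindent In particular, $C$ depends only on $\alpha$ and $\omega$, uniformly in $\ell$, $v$, $t$, $A$, and $\Omega$, as claimed in \Cref{nlestimate}.
\end{rem}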
We next state the following proposition, which provides a lower bound on $\mathbb{E} \big[ \mathcal{N}_L (t) \big]$.
\begin{prop}
\label{nestimate}
For any real number $\delta \in (0, 1)$, there exist constants $C = C (\alpha ; s) > 1$ (independent of $\delta$), $L_0 = L_0 (\alpha ; s ; \delta) > 1$, $\omega = \omega (\alpha ; s ; \delta) \in (0, 1)$; and $\Omega = \Omega (\alpha ; s ; \delta) > 1$ such that the following holds. Fix an integer $L \ge L_0$, and a vertex $v \in \mathbb{V}$; set $\ell = \ell (v)$. Then, there exists a real number $t_0 = t_0 (\alpha; s; z; \ell; L) \in [-C,0]$ such that
\begin{flalign}
\label{expectationm}
L^{-1} \log \mathbb{E} \big[ \mathcal{N}_L (t_0; v; \omega; \Omega; z) \big] \ge \varphi (s; z) - s t_0 - \delta.
\end{flalign}
\end{prop}
\begin{rem}
\label{t0}
The real number $t_0$ from \Cref{nestimate} can in principle depend (fairly discontinuously) on the parameters $(z, \ell, L)$, but the fact that $t_0 \in [-C, 0]$ indicates that it is bounded independently of them.
\end{rem}
To establish \Cref{nestimate}, we require the following tail bound, which states that the contribution to the truncated expectation considered in \Cref{expectationr0vsd} does not arise from vertices $w$ for which $\big| R_{vw}^{(v_-, w_+)} \big|$ is either too large or too small.
\begin{lem}
\label{sumlargesmallr}
Adopting the notation of \Cref{nestimate}, we have
\begin{flalign}
\label{wslcc}
\begin{aligned}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathcal{S}_L (C)} \big| R_{vw}^{(v_-, w_+)} \big|^s \Bigg] & \le e^{-L} \cdot \exp \big( L \cdot \varphi (s; z) \big); \\
\mathbb{E} \Bigg[ \displaystyle\sum_{w \notin \mathcal{S}_L (-C)} \big| R_{vw}^{(v_-, w_+)} \big|^s \Bigg] & \le e^{-L} \cdot \exp \big( L \cdot \varphi (s; z) \big).
\end{aligned}
\end{flalign}
\end{lem}
\begin{proof}
The first and second bounds in \eqref{wslcc} will follow from comparing the $s$-th moments on their left-hand sides to an $s'$-th moment for some $s' > s$ and to an $s''$-th moment for some $s'' < s$, respectively. Let us begin by showing the first bound in \eqref{wslcc}. Fix $s' = \frac{s+1}{2} > s$; by \eqref{limitr0j2} and the second part of \Cref{ryestimate}, there exists a constant $C_1 > 0$ such that
\begin{align*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathcal{S}_L (C)} \big| R_{vw}^{(v_-, w_+)} \big|^{s'} \Bigg] &\le \mathbb{E} \left[\sum_{w \in \mathbb D_L (v)} |R_{vw}^{(v_-, w_+)}|^{s'} \cdot \mathbbm{1}_{\mathscr G_0 (w)}\right] \\
&\le C_1 \cdot \mathbb{E}\left[ \sum_{u \in \mathbb D_{L-1} (v)} | R_{vu}^{(v_-)}|^{s'} \right]\\
& \le C_1 \cdot \exp \big( (L-1) \cdot \varphi (s'; z) \big) \\
&\le C_1 \cdot \exp \big( L \cdot \varphi (s'; z) \big) \le C_1 \cdot \exp \big( L \cdot \varphi (s; z) \big),
\end{align*}
\noindent where in the last line we used the fact that $\varphi$ is nonincreasing in $s$ (by the second part of \Cref{limitr0j}). We also used \Cref{estimatemoment1} to bound $|\varphi (s'; z)| < C_1$, which permits replacing $L-1$ by $L$ in the exponent (after enlarging $C_1$). Since $\big| R_{vw}^{(v_-, w_+)} \big| \ge e^{CL}$ for each $w \in \mathcal{S}_L (C)$, it follows that
\begin{flalign*}
e^{C (s' - s) L} \cdot \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathcal{S}_L (C)} \big| R_{vw}^{(v_-, w_+)} \big|^s \Bigg] \le C_1 \cdot \exp \big( L \cdot \varphi (s; z) \big),
\end{flalign*}
\noindent which yields the first statement of \eqref{wslcc}, after selecting $C = C(s, \varepsilon, B) > \frac{2}{s' - s}$.
Recall from the second part of \Cref{limitr0j} that $\varphi (s; z)$ is (weakly) convex in $s$; in particular, it is differentiable almost everywhere. Moreover, the fourth part of \Cref{limitr0j} implies that $\lim_{s \rightarrow \alpha} \varphi (s; z) = \infty$. Thus, there exists $s'' \in \big( \alpha, \frac{s+\alpha}{2} \big)$ sufficiently close to $\alpha$ such that the following two properties hold. First, $\varphi (s; z)$ is differentiable at $s''$; set $t = \partial_s \varphi (s''; z) \le 0$, where the nonpositivity of $t$ follows from the second part of \Cref{limitr0j}. Second, $\varphi (s''; z) + t (s - s'') \le \varphi (s; z) - 2$.
Then, again by \eqref{limitr0j2} we have
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \notin \mathcal{S}_L (t)} \big| R_{vw}^{(v_-, w_+)} \big|^{s''} \cdot \mathbbm{1}_{\mathscr{B} (v, w)} \Bigg] \le C_1 \cdot \exp \big( L \cdot \varphi (s''; z) \big).
\end{flalign*}
\noindent Together with the bound $\big| R_{vw}^{(v_-, w_+)} \big|^s \le e^{(s - s'') tL} \cdot \big| R_{vw}^{(v_-, w_+)} \big|^{s''}$ (as $t \le 0$), we deduce
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{w \notin \mathcal{S}_L (t)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w)} \Bigg] & \le C_1 \cdot \exp \Big( L \cdot \big( \varphi (s''; z) + t (s-s'') \big) \Big) \\
& \le C_1 e^{-2L} \cdot \exp \big( L \cdot \varphi (s; z) \big),
\end{flalign*}
\noindent where in the last bound we used the fact that $\varphi (s''; z) + t (s - s'') \le \varphi (s; z) - 2$. Taking $L$ sufficiently large (and $C > |t|$), this yields the second bound in \eqref{wslcc}.
\end{proof}
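The proof above uses that a convex function lies above each of its tangent (supporting) lines. As a standalone numerical illustration of this step, with a hypothetical convex stand-in for $\varphi (\cdot; z)$ (not the actual function from this paper), one can check the supporting-line inequality $\varphi (s''; z) + \partial_s \varphi (s''; z) \, (s - s'') \le \varphi (s; z)$:

```python
# Supporting-line inequality for a convex function: the tangent at s''
# lies below the graph at every other point s.  `phi` is an arbitrary
# convex stand-in for phi(.; z); it is NOT the function from the paper.
phi = lambda x: 1.0 / (1.0 - x) + x * x
dphi = lambda x: 1.0 / (1.0 - x) ** 2 + 2.0 * x   # derivative of phi

for s_pp in [0.1, 0.3, 0.5]:          # tangency points s''
    for s in [0.2, 0.6, 0.9]:         # evaluation points s
        assert phi(s_pp) + dphi(s_pp) * (s - s_pp) <= phi(s) + 1e-12
```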
Now we can establish \Cref{nestimate}.
\begin{proof}[Proof of \Cref{nestimate}]
Observe that $\big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w)} \le e^{s (t + \delta/2) L}$ for any $w \in \mathcal{S} (t) \setminus \mathcal{S} \big( t + \frac{\delta}{2} \big)$. Thus, by \Cref{expectationr0vsd} and \Cref{sumlargesmallr}, there exist constants $C_1 > 1$, $c_1 > 0$, $\omega = \omega (\delta) \in (0, 1)$, and $\Omega = \Omega (\delta) > 1$ such that
\begin{flalign*}
\displaystyle\sum_{t \in \mathcal{A}} e^{s (t+\delta/2) L} \cdot \mathbb{E} \bigg[ \mathcal{N}_L (t) - \mathcal{N}_L \Big( t + \displaystyle\frac{\delta}{2} \Big) \bigg] & \ge \displaystyle\sum_{t \in \mathcal{A}} \mathbb{E} \Bigg[ \displaystyle\sum_{\substack{w \in \mathcal{S} (t) \\ w \notin \mathcal{S} (t + \delta / 2)}} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)} \Bigg] \\
& \ge \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathcal{S} (-C) \setminus \mathcal{S} (C)} \big| R_{vw}^{(v_-, w_+)} \big|^s \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)} \Bigg] \\
& \ge c_1 \cdot \exp \Bigg( L \cdot \bigg( \varphi (s; z) - \displaystyle\frac{\delta}{4} \bigg) \Bigg),
\end{flalign*}
\noindent where
\begin{flalign*}
\mathcal{A} = \bigg\{ -C, \displaystyle\frac{\delta}{2} - C, \delta - C, \ldots , \displaystyle\frac{D \delta}{2} - C \bigg\}, \qquad \text{and} \qquad D = \bigg\lceil \displaystyle\frac{4C}{\delta} \bigg\rceil.
\end{flalign*}
\noindent Hence, there exists some $t_0 = t_0 (\alpha; s; z; \ell; L) \in \mathcal{A}$ such that
\begin{flalign*}
\mathbb{E} \big[ \mathcal{N}_L (t_0) \big] \ge \displaystyle\frac{c_1}{D} \cdot \exp \Bigg( L \cdot \bigg( \varphi (s; z) - s t_0 - \displaystyle\frac{3 \delta}{4} \bigg) \Bigg).
\end{flalign*}
\noindent (Observe that it only depends on $v$ through $\ell$, since the random variables $R_{vw}^{(v_-, w_+)} \cdot \mathbbm{1}_{\mathscr{B} (v, w; \omega; \Omega)}$ are identically distributed over $v \in \mathbb{V} (\ell)$.) Taking $L$ sufficiently large, this yields \eqref{expectationm}, and hence the proposition, up to the condition that $t_0 \le 0$, which we now verify.
To this end, we first prove that, up to increasing the constants $C$ and $L_0$, we have $t_0 \le 4 \delta / (1- s)$. Indeed, if this were not the case, there would exist $s < s' < 1$ such that $t_0 > 2 \delta / (s'-s)$. Then, if $L \ge C / \delta$, where $C$ is the constant from Theorem \ref{limitr0j2}(1), we would have
$$
\varphi(s) \ge \varphi(s')
\ge \Phi_L (s'; z) - C L^{-1}
\ge s' t_0 + L^{-1} \log \mathbb{E}[\mathcal N_L (t_0)] - C L^{-1}.$$
Here, the first inequality follows from Theorem \ref{limitr0j2}(1), the second from Theorem \ref{limitr0j2}(2), and the third follows from only considering $v \in \mathcal{S}_L (t_0)$ in the sum defining $\Phi_L (s'; z)$. Next, using Equation \eqref{expectationm}, we would get
$$
\varphi(s) \ge (s' - s) t_0 + \varphi (s) - \delta - C /L > \varphi (s).
$$
where the last inequality comes from $t_0 > 2(s'-s)^{-1} \delta$ and $L \geq C / \delta$; this is a contradiction.
We thus have proved that $t_0 \in [-C, 4\delta / (1-s)]$. If $t_0 \leq 0$, we are done. If $0 < t_0 \leq 4\delta / (1-s)$, then since $\mathcal{N}_L (0) \ge \mathcal{N}_L (t_0)$ (as $\mathcal{S}_L (t_0) \subseteq \mathcal{S}_L (0)$ for $t_0 > 0$), we get from \eqref{expectationm} that
$$
L^{-1} \log \mathbb{E} \big[ \mathcal{N}_L (0; v; \omega; \Omega; z) \big] \ge L^{-1} \log \mathbb{E} \big[ \mathcal{N}_L (t_0; v; \omega; \Omega; z) \big] \geq \varphi (s; z) - s t_0 - \delta \geq \varphi (s; z) - 5 \delta / (1- s),
$$
where in the last bound we used $t_0 \le 4 \delta / (1-s)$, $s < 1$, and $\delta \le \delta / (1-s)$.
Hence, at the cost of changing $\delta$ into $(1-s)\delta / 5$ (and replacing $t_0$ by $0$), this concludes the proof.
\end{proof}
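The final step of the proof relies on the elementary inequality $s t_0 + \delta \le 5 \delta / (1-s)$, valid whenever $0 < t_0 \le 4\delta / (1-s)$ and $s \in (0, 1)$; a quick numerical check of this inequality on a grid (illustrative only):

```python
# Check s*t0 + delta <= 5*delta/(1 - s) at the extreme value
# t0 = 4*delta/(1 - s); a smaller t0 only decreases the left side.
for i in range(1, 100):
    s = i / 100.0
    for delta in (0.01, 0.1, 0.4):
        t0 = 4.0 * delta / (1.0 - s)
        assert s * t0 + delta <= 5.0 * delta / (1.0 - s) + 1e-12
```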
We next have the following corollary, which lower bounds (though by a quantity that is exponentially small) the probability that $\mathcal{N}_L (t_0)$ is large. In \Cref{ProbabilityNmu0} below (see \Cref{mprobability}), we will explain a way of ``amplifying'' this corollary to instead provide a high-probability bound that $\mathcal{N}_L (t_0)$ is large.
\begin{cor}
\label{mudeltak}
Adopt the notation of \Cref{nestimate}. There is a constant $\mu_0 = \mu_0 (\alpha; s; z; \ell; L) \in [-C, C]$ so that
\begin{flalign*}
\mathbb{P} \big[ L^{-1} \log \mathcal{N}_L (t_0; v; \omega; \Omega; z) \ge \varphi (s; z) - st_0 + \mu_0 \big] \ge C^{-1} \cdot e^{-(\mu_0 + \delta) L}.
\end{flalign*}
\end{cor}
As in \Cref{t0}, we mention that the real number $\mu_0$ from \Cref{mudeltak} can in principle depend quite discontinuously on $(z, \ell, L)$, but that it is bounded independently of them. The proof of \Cref{mudeltak} will follow from \Cref{nestimate}, together with the tail estimate \Cref{nlestimate} for $\mathcal{N}_L$.
\begin{proof}[Proof of \Cref{mudeltak}]
Let $C_1 > 1$ be some constant to be determined later, and assume to the contrary that
\begin{flalign}
\label{nlc}
\mathbb{P} \big[ L^{-1} \log \mathcal{N}_L (t_0; v; \omega; \Omega; z) \ge \varphi (s; z) - st_0 + \mu \big] \le C_1 \cdot e^{-(\mu + \delta) L}, \quad \text{for each $\mu \in [-2C_1, 2C_1]$},
\end{flalign}
\noindent which we will show contradicts \Cref{nestimate}. To that end, abbreviate $\mathcal{N} = \mathcal{N}_L (t_0; v; \omega; \Omega; z)$ and denote $\psi = \varphi (s; z) - st_0$. Further define the set
\begin{flalign*}
\mathcal{M} = \bigg\{ \mu = \displaystyle\frac{k \delta}{2} : k \in \mathbb{Z}, \psi + \mu \ge 0 \bigg\}.
\end{flalign*}
\noindent Then, observe that
\begin{flalign*}
\mathbb{E} [ \mathcal{N}] \le \displaystyle\sum_{\mu \in \mathcal{M}} \mathbb{E} \big[ \mathcal{N} \cdot \mathbbm{1}_{L^{-1} \log \mathcal{N} \in [\psi + \mu - \delta/2, \psi + \mu]} \big] \le \displaystyle\sum_{\mu \in \mathcal{M}} e^{(\psi + \mu) L} \cdot \mathbb{P} \bigg[ L^{-1} \log \mathcal{N} \ge \psi + \mu - \displaystyle\frac{\delta}{2} \bigg].
\end{flalign*}
Applying \eqref{nlc}, we obtain for sufficiently large $L$ that
\begin{flalign}
\label{psimu0}
\begin{aligned}
\displaystyle\sum_{\mu \in \mathcal{M}} e^{(\psi + \mu) L} \cdot \mathbb{P} \bigg[ L^{-1} \log \mathcal{N} \ge \psi + \mu - \displaystyle\frac{\delta}{2} \bigg] \cdot \mathbbm{1}_{|\mu| \le C_1} & \le \displaystyle\sum_{\mu \in \mathcal{M}} C_1 \cdot e^{(\psi + \mu) L} \cdot e^{- (\mu + \delta/2) L} \cdot \mathbbm{1}_{|\mu| \le C_1} \\
& \le 5 C_1^2 \delta^{-1} \cdot e^{(\psi - \delta/2) L} \le e^{(\psi - \delta/3) L}.
\end{aligned}
\end{flalign}
\noindent Additionally, applying \Cref{nlestimate} (with the $A$ there equal to $e^{\mu L/2}$ here), we obtain, for $C_1$ sufficiently large (so that $\psi + \frac{\mu}{2} - 1 \ge C$ holds for all $\mu > C_1$, where $C$ is the constant from \Cref{nlestimate}), that
\begin{flalign}
\label{psimu1}
\begin{aligned}
\displaystyle\sum_{\mu \in \mathcal{M}} e^{(\psi + \mu) L} \cdot \mathbb{P} \bigg[ L^{-1} \log \mathcal{N} \ge \psi + \mu - \displaystyle\frac{\delta}{2} \bigg] \cdot \mathbbm{1}_{\mu > C_1} & \le 3 \displaystyle\sum_{\mu \in \mathcal{M}} e^{(\psi + \mu)L} \cdot \exp \bigg( -\displaystyle\frac{e^{\mu L/2}}{2} \bigg) \cdot \mathbbm{1}_{\mu > C_1} \\
& \le e^{(\psi - \delta/3) L},
\end{aligned}
\end{flalign}
\noindent where the last inequality again follows from taking $C_1$ and $L$ sufficiently large.
Combining \eqref{psimu0} and \eqref{psimu1} (together with the fact that $\mathcal{M}$ cannot contain any $\mu < -C_1$ for $C_1$ sufficiently large) gives
\begin{flalign*}
\mathbb{E} [\mathcal{N}] \le e^{(\psi - \delta/4) L},
\end{flalign*}
\noindent for sufficiently large $L$, which contradicts \Cref{nestimate} and establishes the corollary.
\end{proof}
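The bound \eqref{psimu0} reduces to the observation that each summand there equals $e^{(\psi - \delta/2) L}$ up to constants, so the sum is at most the number of grid points times $e^{(\psi - \delta/2) L}$, which falls below $e^{(\psi - \delta/3) L}$ once $L$ is large. A small numerical check of this step (with arbitrary sample parameters; the common factor $e^{\psi L}$ is cancelled, and exponents are combined before exponentiating to avoid overflow):

```python
import math

def grid_sum(L, delta, C1):
    # Sum over mu = j*delta/2 with |mu| <= C1 of
    # exp(mu*L - (mu + delta/2)*L); every term equals exp(-delta*L/2),
    # and combining the exponents avoids forming the huge exp(mu*L).
    j_max = int(2 * C1 / delta)
    total = 0.0
    for j in range(-j_max, j_max + 1):
        mu = j * delta / 2.0
        total += math.exp(mu * L - (mu + delta / 2.0) * L)
    return total

delta, C1 = 0.2, 5.0
for L in (200, 400):
    # (number of grid points) * exp(-delta*L/2) <= exp(-delta*L/3)
    assert grid_sum(L, delta, C1) <= math.exp(-delta * L / 3.0)
```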
\subsection{Lower Bound on Smaller Resolvent Entries}
\label{EstimateNLkappa}
In this section we provide a lower bound on the number of resolvent entries that are not too small. We begin with the following definition that provides a minor modification of the quantity $\mathcal{N}_{\ell} (t)$ from \Cref{ns} (essentially omitting the event $\mathscr{D} (v, w)$ from \Cref{br0vw} that constrains the tree weights $T_{vw}$).
\begin{definition}
\label{ml}
Fix a complex number $z \in \mathbb{H}$; an integer $\ell \ge 1$; and real numbers $\kappa \in \mathbb{R}_{> 0}$ and $\Omega > 1$. Define the vertex set $\mathcal{S}_{\ell}' (\kappa) = \mathcal{S}_{\ell}' (\kappa; \Omega) = \mathcal{S}_{\ell}' (\kappa; \Omega; z)$ and integer $\mathcal{N}_{\ell}' (\kappa) = \mathcal{N}_{\ell}' (\kappa; \Omega; z)$ by
\begin{flalign*}
\mathcal{S}_{\ell}' (\kappa) = \Big\{ v \in \mathbb{V}(\ell) : \big| R_{0v}^{(v_+)} \big| \cdot \mathbbm{1}_{\mathscr{G}_0 (v)} \ge \kappa^{\ell}, \big| R_{vv}^{(v_+)} \big| \le \Omega \Big\}; \qquad \mathcal{N}_{\ell}' (\kappa) = \big| \mathcal{S}_{\ell}' (\kappa) \big|.
\end{flalign*}
\end{definition}
In this section we establish the following lemma, which states that $\mathcal{N}_{\ell}' (\kappa)$ can be made arbitrarily large by taking $\kappa$ sufficiently small. It will be used in the proof of \Cref{mprobability} below.
\begin{lem}
\label{vertexk}
For any real numbers $\Delta > 1$ and $\varepsilon \in (0, 1)$, there exist constants $\Omega = \Omega (\Delta, \varepsilon) > 1$ and $\kappa = \kappa (\Delta, \varepsilon) > 0$ such that, for any integer $\ell \ge 2$, we have
\begin{flalign*}
\mathbb{P} \big[ \mathcal{N}_{\ell}' (\kappa) \ge \Delta^{\ell} \big] \ge 1 - \varepsilon.
\end{flalign*}
\end{lem}
To establish this lemma, let $\varpi = \varpi (\Delta, \varepsilon) \in (0, 2^{-2/\alpha})$ and $\Theta = \Theta (\Delta, \varepsilon, \varpi) > 1$ be real numbers to be fixed later. For any vertex $v \in \mathbb{V}$, define the vertex set $\mathcal{A}_v = \mathcal{A}_v (\varpi) \subset \mathbb{V}$ by
\begin{flalign}
\label{av0}
& \mathcal{A}_v = \mathcal{A}_v (\varpi) = \big\{ w \in \mathbb{D} (v) : |T_{vw}| \in [\varpi, 1] \big\},
\end{flalign}
\noindent the set of children of $v$ whose weight to $v$ is (in magnitude) between $\varpi$ and $1$. We further inductively define the vertex sets $\mathcal{U} (k) \subset \mathbb{V}$ for each integer $k \ge 0$ by setting
\begin{flalign}
\label{u0k}
\mathcal{U} (0) = \{ v \}; \qquad \mathcal{U} (k+1) = \bigcup_{w \in \mathcal{U}(k)} \mathcal{A}_w; \qquad \mathcal{U} = \bigcup_{k = 0}^{\infty} \mathcal{U} (k),
\end{flalign}
\noindent so that in particular all edge weights connecting vertices in $\mathcal{U}$ are (in magnitude) at least $\varpi$. For any vertex $v \in \mathcal{U}$, define the complex numbers $K_v$ and $Q_v$ by
\begin{flalign*}
K_v = \displaystyle\sum_{w \in \mathbb{D} (v) \setminus \mathcal{A}_v} T_{vw}^2 R_{ww}^{(v)}; \qquad Q_v = z + T_{v_- v}^2 R_{v_- v_-}^{(v)} + \displaystyle\sum_{\substack{w \in \mathcal{A}_v \\ w \ne v_+}} T_{vw}^2 R_{ww}^{(v)}.
\end{flalign*}
\noindent Then define events bounding $R_{v_- v_-}^{(v)}$, $K_v$, and $Q_v$, given by
\begin{flalign}
\label{p0}
\begin{aligned}
& \mathscr{P}_0 (v) = \big\{ \Theta^{-1} \le \big| R_{v_- v_-}^{(v)} \big| \le \varpi^{-1} \big\}; \qquad \mathscr{P}_1 (v) = \big\{ |K_v| + |Q_v| \le \Theta \big\}; \\
& \mathscr{P}_2 (v) = \big\{ |K_v + Q_v| \ge \varpi \big\}; \qquad \qquad \quad \mathscr{P} (v) = \mathscr{P}_0 (v) \cap \mathscr{P}_1 (v) \cap \mathscr{P}_2 (v).
\end{aligned}
\end{flalign}
Recall that $\mathbb{P}_v$ denotes the conditional probability with respect to the subtree $\mathbb{T}_- (v)$. Then observe that, for sufficiently small $\varpi$ and sufficiently large $\Theta$, we have
\begin{flalign}
\label{p02}
\mathscr{P}_0 (v) \subseteq \mathscr{P}_2 (v_-); \qquad \mathbb{P}_v \big[ \mathscr{P}_1 (v) \cap \mathscr{P}_2 (v) \big| \mathscr{P}_0 (v) \big] \ge 1 - \displaystyle\frac{\varepsilon}{16},
\end{flalign}
\noindent where the first statement holds since $R_{v_- v_-}^{(v)} = (K_{v_-} + Q_{v_-})^{-1}$ (by \eqref{qvv}) and the second holds from \eqref{zuv} (and \Cref{tuvalpha} and \Cref{ar}). We then define the vertex sets $\mathcal{V} (k)$, for $k \ge 0$, and $\mathcal{V}_0 (\ell)$ by
\begin{flalign}
\label{vk}
\mathcal{V} (k) = \bigg\{ w \in \mathcal{U} (k) : \text{$\bigcap_{j = 0}^k \mathscr{P} \big( w_-^{(j)} \big)$ holds} \bigg\}; \qquad \mathcal{V}_0 (\ell) = \big\{ w \in \mathcal{V} (\ell) : \text{$\mathscr{G}_0 (w)$ holds} \big\},
\end{flalign}
\noindent where the $w_-^{(j)}$ are defined inductively, by setting $w_-^{(0)} = w$ and $w_-^{(j+1)} = \big( w_-^{(j)} \big)_-$ for each $j \ge 0$ (in words, $w_-^{(j)}$ is the $j$-th ancestor of $w$).
The following lemma indicates that, to lower bound $\mathcal{N}_{\ell}'$, it suffices to lower bound $\big| \mathcal{V}_0 (\ell) \big|$.
\begin{lem}
\label{sv}
Under the above notation, we have $ \mathcal{V}_0 (\ell)\subseteq \mathcal{S}_{\ell}' (\varpi \Theta^{-2}; \varpi^{-1}; z)$.
\end{lem}
\begin{proof}
Observe for any $v \in \mathcal{V}_0 (\ell)$ that
\begin{flalign}
\label{r0vomega}
\big| R_{0v}^{(v_+)} \big| = \displaystyle\prod_{0 \preceq u \preceq v} \big| R_{uu}^{(u_+)} \big| \cdot \displaystyle\prod_{0 \preceq u \prec v} | T_{uu_+}| \ge \Theta^{-\ell - 1} \varpi^{\ell} \ge (\Theta^{-2} \varpi)^{\ell},
\end{flalign}
\noindent where the first statement holds by \Cref{rproduct}; the second holds since $|T_{uu_+}| \ge \varpi$ (by \eqref{av0}, \eqref{u0k}, and \eqref{vk}) and $\big| R_{uu}^{(u_+)} \big| \ge \Theta^{-1}$ (by the event $\mathscr{P}_1$ from \eqref{p0}, \eqref{vk}, and the identity $R_{uu}^{(u_+)} = (K_u + Q_u)^{-1}$); and the third holds since $\Theta > 1$ and $\ell \ge 1$. Further observe (from the event $\mathscr{P}_2$ in \eqref{p0}, the same identity, and \eqref{vk}) that for any $v \in \mathcal{V}_0 (\ell)$ we have
\begin{flalign*}
\big| R_{vv}^{(v_+)} \big| \le \varpi^{-1}, \quad \text{and $\mathscr{G}_0 (v)$ holds}.
\end{flalign*}
\noindent This, together with \eqref{r0vomega} implies the lemma.
\end{proof}
Before lower bounding $\big| \mathcal{V}_0 (\ell) \big|$, we lower bound $\big| \mathcal{V} (m) \big|$.
\begin{lem}
\label{vestimate}
For any real numbers $\Delta > 1$ and $\varepsilon \in (0, 1)$, there exist constants $\varpi_0 = \varpi_0 (\Delta, \varepsilon) > 0$ and $\Theta_0 = \Theta_0 (\Delta, \varepsilon, \varpi) > 1$ such that, for any $\varpi \in (0, \varpi_0)$ and $\Theta > \Theta_0$, we have for any integer $m \ge 1$ that
\begin{flalign*}
\mathbb{P} \Big[ \big| \mathcal{V} (m) \big| \ge \Delta^m \Big] \ge 1 - \displaystyle\frac{\varepsilon}{2}.
\end{flalign*}
\end{lem}
\begin{proof}
Recall that $\mathbb{T} (k)$ denotes the part of $\mathbb{T}$ above its $k$-th level $\mathbb{V} (k)$, and that $\mathbb{P}_k$ denotes the probability conditional on $\mathbb{T} (k)$. We claim that there exists a constant $c > 0$ such that, for sufficiently small $\varpi$ and large $\Theta$ we have
\begin{flalign}
\label{v0vk}
\begin{aligned}
& \mathbb{P} \big[ \mathcal{V} (0) = \{ 0 \} \big] \ge 1 - \displaystyle\frac{\varepsilon}{8}; \qquad \mathbb{P} \bigg[ \big| \mathcal{V} (1) \big| \ge \displaystyle\frac{1}{8 \varpi^{\alpha}} \bigg] \ge 1 - \displaystyle\frac{\varepsilon}{4}; \\
& \mathbb{P}_k \bigg[ \big| \mathcal{V} (k+1) \big| \ge \displaystyle\frac{1}{8 \varpi^{\alpha}} \cdot \big| \mathcal{V} (k) \big| \bigg] \ge 1 - \exp \big( - c \cdot |\mathcal{V} (k)| \big).
\end{aligned}
\end{flalign}
\noindent This would imply the lemma, as from \eqref{v0vk} we would deduce that
\begin{flalign*}
\mathbb{P} \Big[ \big| & \mathcal{V} (m) \big| \ge 8^{-m} \varpi^{-\alpha m} \Big] \\
& \ge \mathbb{P} \bigg[ \big| \mathcal{V} (1) \big| \ge \displaystyle\frac{1}{8 \varpi^{\alpha}} \bigg] \cdot \displaystyle\prod_{k = 1}^{m - 1} \mathbb{P}_k \bigg[ \big| \mathcal{V} (k+1) \big| \ge \displaystyle\frac{1}{8 \varpi^{\alpha}} \cdot \big| \mathcal{V} (k) \big| \bigg| \big| \mathcal{V} (k) \big| \ge 8^{-k} \varpi^{-\alpha k} \bigg] \\
& \ge \bigg( 1 - \displaystyle\frac{\varepsilon}{4} \bigg) \cdot \displaystyle\prod_{k = 1}^{m - 1} \Bigg( 1 - \exp \bigg( -c \cdot \Big( \displaystyle\frac{1}{8 \varpi^{\alpha}} \Big)^k \bigg) \Bigg) \ge 1 - \displaystyle\frac{\varepsilon}{2},
\end{flalign*}
\noindent where the last inequality holds by taking $\varpi$ sufficiently small. The lemma then follows from having $8^{-m} \varpi^{-\alpha m} \ge \Delta^m$, again by taking $\varpi$ sufficiently small.
Thus, it suffices to establish \eqref{v0vk}; its first bound follows from the second statement of \eqref{p02}. To establish its second and third bounds, for any vertex $v \in \mathcal{V}$ let $n_v = \big| \mathcal{V} \cap \mathbb{D} (v) \big|$ denote the number of children of $v$ in $\mathcal{V}$. Observe that for any vertex $v \in \mathcal{V} (k)$, the event $\mathscr{P}_2 (v)$ holds by \eqref{vk}. Hence, for any child $w \in \mathcal{A}_v$ (recall \eqref{av0}), the event $\mathscr{P}_0 (w)$ holds by the first statement of \eqref{p02}. Thus, for any $v \in \mathcal{V} (k)$, a child $w \in \mathbb{D} (v)$ of $v$ lies in $\mathcal{V} (k+1)$ if and only if $w \in \mathcal{A}_v$ and $\mathscr{P}_1 (w) \cap \mathscr{P}_2 (w)$ both hold.
Now, we claim (conditional, as above, on $\mathbb{T} (k)$) that
\begin{flalign}
\label{nvomega}
\mathbb{P}_k \bigg[ n_v \ge \frac{1}{4} \cdot \varpi^{-\alpha} \bigg] \ge 1 - \frac{\varepsilon}{8}, \qquad \text{for any $v \in \mathcal{V} (k)$}.
\end{flalign}
\noindent This would imply the second bound of \eqref{v0vk} (through a union bound with the first bound there). It would also imply the third, as follows. By a Chernoff estimate, there exists a constant $c_2 > 0$ such that the following holds for any integer $k \ge 1$. With probability at least $1 - e^{-c_2 |\mathcal{V}(k)|}$, there exist at least $\frac{3}{4} \cdot \big| \mathcal{V} (k) \big|$ vertices $v \in \mathcal{V} (k)$ for which $n_v \ge \frac{1}{4} \cdot \varpi^{-\alpha}$. Summing over all such vertices $v$, it would follow on this event that $\big| \mathcal{V} (k+1) \big| \ge \frac{3}{16} \cdot \varpi^{-\alpha} \cdot \big| \mathcal{V} (k) \big| \ge \frac{1}{8} \cdot \varpi^{-\alpha} \cdot \big| \mathcal{V} (k) \big|$, hence showing the third bound of \eqref{v0vk}.
To show \eqref{nvomega}, first observe that $|\mathcal{A}_v|$ (recall \eqref{av0}) is a Poisson random variable with parameter $\alpha \cdot \int_{\varpi}^1 x^{-\alpha-1} dx = \varpi^{-\alpha} - 1 \ge \frac{1}{2} \cdot \varpi^{-\alpha}$, where the last bound holds since $\varpi < 2^{-2/\alpha}$. Thus, for sufficiently small $\varpi$ we have
\begin{flalign}
\label{1av}
\mathbb{P} \bigg[ |\mathcal{A}_v| \ge \frac{1}{2} \cdot \varpi^{-\alpha} \bigg] \ge 1 - \frac{\varepsilon}{16}.
\end{flalign}
\noindent Additionally, by the second statement of \eqref{p02}, with probability at least $1 - \frac{\varepsilon}{16}$ at least half of the vertices $w \in \mathcal{A}_v$ satisfy $\mathscr{P}_1 (w) \cap \mathscr{P}_2 (w)$; namely,
\begin{flalign}
\label{2av}
\mathbb{P} \Bigg[ \bigg| \Big\{ w \in \mathcal{A}_v : \text{$\mathscr{P}_1 (w) \cap \mathscr{P}_2 (w)$ holds} \Big\} \bigg| \ge \displaystyle\frac{1}{2} \cdot |\mathcal{A}_v| \Bigg] \ge 1 - \displaystyle\frac{\varepsilon}{16}.
\end{flalign}
\noindent Together, \eqref{1av} and \eqref{2av} (with the previously mentioned fact that a vertex $w \in \mathcal{A}_v$ satisfying $\mathscr{P}_1 (w) \cap \mathscr{P}_2 (w)$ lies in $\mathcal{V} (k + 1)$) imply \eqref{nvomega}, and thus the lemma.
\end{proof}
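The parameter computation used above, $\alpha \int_{\varpi}^1 x^{-\alpha - 1} \, dx = \varpi^{-\alpha} - 1$, together with the lower bound $\varpi^{-\alpha} - 1 \ge \frac{1}{2} \varpi^{-\alpha}$ for $\varpi < 2^{-2/\alpha}$, can be checked numerically; the following script is purely illustrative, and its grid size is an arbitrary choice.

```python
def mass(alpha, varpi, n=200_000):
    # Midpoint rule for alpha * x^(-alpha - 1) over [varpi, 1].
    h = (1.0 - varpi) / n
    return sum(alpha * (varpi + (i + 0.5) * h) ** (-alpha - 1.0) * h
               for i in range(n))

for alpha in (0.4, 0.8):
    for varpi in (0.05, 0.1):
        lam = varpi ** -alpha - 1.0          # the Poisson parameter
        assert abs(mass(alpha, varpi) - lam) < 1e-3
        if varpi < 2.0 ** (-2.0 / alpha):    # regime from the lemma
            assert lam >= 0.5 * varpi ** -alpha
```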
Now we can establish \Cref{vertexk}.
\begin{proof}[Proof of \Cref{vertexk}]
We adopt the notation from the proof of \Cref{vestimate} and set the $m$ there equal to $\ell - 1$ here. First observe that there exists a constant $c_1 > 0$ such that
\begin{flalign}
\label{g0estimate}
\mathbb{P} \big[ \mathscr{G}_0 (v) \big] \ge c_1, \qquad \text{for any $v \in \mathbb{V}$}.
\end{flalign}
\noindent Now, define the event $\mathscr{U}$ on which $\big| \mathcal{V} (\ell - 1) \big| \ge \Delta^{\ell - 1}$; by \Cref{vestimate}, we have $\mathbb{P} [\mathscr{U}] \ge 1 - \frac{\varepsilon}{2}$. Additionally, by \eqref{nvomega}, we have $\mathbb{P} \big[ n_v \ge \frac{1}{4} \cdot \varpi^{-\alpha} \big] \ge 1 - \frac{\varepsilon}{8}$ for any $v \in \mathcal{V} (\ell - 1)$. We may assume that $\varepsilon < c_1$, so together with \eqref{g0estimate} this implies from a union bound that
\begin{flalign*}
\mathbb{P} \Big[ \big\{ n_v \ge \displaystyle\frac{1}{4 \varpi^{\alpha}} \big\} \cap \mathscr{G}_0 (v) \Big] \ge \displaystyle\frac{c_1}{2}.
\end{flalign*}
Now we may proceed as in the proof of \Cref{vestimate}. Specifically, by a Chernoff estimate, we deduce the existence of a constant $c_2 > 0$ such that the following holds with probability at least $1 - e^{-c_2 |\mathcal{V} (\ell-1)|}$. There exist at least $\frac{c_1}{4} \cdot \big| \mathcal{V} (\ell - 1) \big|$ vertices $v \in \mathcal{V} (\ell-1)$ such that $n_v \ge \frac{1}{4} \cdot \varpi^{-\alpha}$ and $\mathscr{G}_0 (v)$ holds. Combining this with the event $\mathscr{U}$, and observing that the statement of the lemma for larger $\Delta$ implies it for smaller $\Delta$ (so that we may assume $e^{-c_2 \Delta} \le \frac{\varepsilon}{2}$), it follows that
\begin{flalign*}
\mathbb{P} \bigg[ \big| \mathcal{V}_0 (\ell) \big| \ge \displaystyle\frac{c_1}{16} \cdot \varpi^{-\alpha} \cdot \Delta^{\ell - 1} \bigg] \ge \mathbb{P} [\mathscr{U}] - \exp (-c_2 \Delta^{\ell-1}) \ge 1 - \varepsilon.
\end{flalign*}
\noindent Since $\frac{c_1}{16} \cdot \varpi^{-\alpha} \ge \Delta$ upon taking $\varpi$ sufficiently small, this bound together with \Cref{sv} (applied with $\kappa = \varpi \Theta^{-2}$ and $\Omega = \varpi^{-1}$) gives the lemma.
\end{proof}
\section{Delocalization}
\label{EstimateR}
In this section we establish the delocalization result \Cref{rimaginary0}. We begin in \Cref{ProbabilityNmu0} by first lower bounding the quantity $\mathcal{N}_L (t)$ from \Cref{ns} (counting large off-diagonal resolvent entries) with high probability. Then, in \Cref{RLarge} we use this estimate to show \Cref{rimaginary0}, assuming \Cref{estimater} below, which we will establish in \Cref{ProofEstimateR}.
\subsection{Lower Bound on $\mathcal{N}_L$}
\label{ProbabilityNmu0}
In this section we establish the following result that lower bounds $\mathcal{N}_L (t)$ with high probability; it may be viewed as a variant of \Cref{mudeltak} with $\mu_0 = 0$ (and an improvement to the probability there, increasing it from $e^{-\delta L}$ to $1 - o(1)$).
\begin{prop}
\label{mprobability}
For any real numbers $\varsigma, \delta \in (0, 1/2)$, there exist constants $C > 1$ (independent of $\varsigma$ and $\delta$), $\omega = \omega (\varsigma, \delta) \in (0, 1)$, $\Omega = \Omega (\varsigma, \delta) > 1$, and $L_0 = L_0 (\varsigma, \delta) > 1$ such that the following holds. Let $E, \eta \in \mathbb{R}$ be such that $\varepsilon \le |E| \le B$ and $\eta \in (0, 1)$, and set $z = E + \mathrm{i} \eta$; also fix an integer $L \ge L_0$. There exists a real number $t_0 = t_0 (\alpha, s, z, \omega, \Omega, L) \in [-C, 0]$ such that if $\varphi(s;z) \geq \delta$ we have
\begin{flalign*}
\mathbb{P} \big[ L^{-1} \log \mathcal{N}_L (t_0 - \delta; 0; \omega; \Omega; z) \ge (1- \delta) \varphi (s; z) - st_0 - \delta \big] \ge 1 - \varsigma.
\end{flalign*}
\end{prop}
Variants of \Cref{mprobability} were shown in \cite{aizenman2013resonant} through a second moment method. However, we are unaware of how to implement this route in our setting, since the tree $\mathbb{T}$ is infinite (and any finite truncation of it that preserves independence would involve a random number of leaves). Instead, to establish \Cref{mprobability}, we ``amplify'' the low-probability estimate \Cref{mudeltak} as follows.
First, we identify a subset of vertices $\mathcal{Z}_0 \subset \mathbb{V} (M)$ (for some $M \approx \delta^2 L$) on which $\big| R_{0v}^{(v_+)} \big|$ is likely not too small. The events on which $\mathcal{N}_M (t_0; v; \Omega; z) \ge e^{(\varphi (s; z) - st + \mu) M}$ are independent over $v \in \mathcal{Z}_0$. So, from \Cref{mudeltak} (denoting the $\mu_0$ there by $\mu$ here), if $|\mathcal{Z}_0| \gg e^{\mu M}$ then one will likely find many (about $e^{-\mu M} \cdot e^{(\varphi (s; z) - st + \mu) M} \cdot |\mathcal{Z}_0| = e^{(\varphi (s; z) - st) M} \cdot |\mathcal{Z}_0|$) vertices $v$ for which $\mathcal{N}_M (t_0; v; \Omega; z) \ge e^{(\varphi (s; z) - st + \mu) M}$. Considering the vertices in $\mathcal{S}_M (t_0; v; \Omega; z)$ yields a set $\mathcal{Z}_1$ of vertices $w$ with $\ell (w) \approx 2M$ for which $\big| R_{vw}^{(v_-, w_+)} \big| \ge e^{tM}$ and we likely have $|\mathcal{Z}_1| \ge e^{(\varphi (s; z) - st + \mu) M} \cdot |\mathcal{Z}_0|$. Repeating this procedure $\frac{L}{M}$ times, and using \Cref{rproduct}, will then yield with high probability about $e^{(\varphi (s; z) - st) L}$ vertices $v \in \mathbb{V} (L)$ for which $\big| R_{0v}^{(v_-)} \big| \ge e^{tL}$.
Now let us implement this in more detail. We let $M = \lfloor \delta^2 L \rfloor$. For each integer $j \ge 0$, we set $M_j = (M+2) j$, and for notational simplicity we assume that there exists an integer $\Theta \in [1, 2 \delta^{-2}]$ for which $L = M_{\Theta + 1} - 2 = M \cdot \Theta + M + 2 \Theta$.\footnote{We use this assumption in \Cref{zidefinition} when defining the sets $\mathcal Z_k$ there to be $M$ levels apart. To treat a general $L$, one would consider a set defined analogously to the sets $\mathcal F_k$ specifically for vertices at level $L$, and use this set in all remaining arguments in this section.} For each integer $j \ge 1$, let $t_j$ denote the constant $t_0 (\alpha; s; z; M_j; M)$ from \Cref{nestimate}, and let $\mu_j$ denote the constant $\mu_0 (\alpha; s; z; M_j; M)$ from \Cref{mudeltak}. We further let $C_0 > 1$ denote a constant larger than the constants $C$ from \Cref{nestimate} and \Cref{mudeltak}.
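The identity $L = M_{\Theta + 1} - 2 = M \Theta + M + 2 \Theta$ invoked above is elementary arithmetic; a throwaway check over a small grid of values (illustrative only):

```python
# Verify M_{Theta+1} - 2 = (M+2)*(Theta+1) - 2 equals M*Theta + M + 2*Theta.
for M in range(1, 60):
    for Theta in range(1, 60):
        assert (M + 2) * (Theta + 1) - 2 == M * Theta + M + 2 * Theta
```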
We begin with the following two definitions for certain vertex sets on which diagonal resolvent entries are either not too small or more specifically bounded below by the threshold $e^{t_j M}$.
\begin{definition}
\label{x0}
For any constant $c > 0$, define the vertex set $\mathcal{X}_0 (c) \subset \mathbb{V} (M)$ by setting
\begin{flalign*}
\mathcal{X}_0 (c) = \big\{ v \in \mathbb{V} (M) : \big| R_{0v}^{(v_+)} \big| \cdot \mathbbm{1}_{\mathscr{B} (0, v)} \ge c^M \big\}.
\end{flalign*}
\noindent By \Cref{vertexk}, we may select $\kappa \in (0, 1)$ sufficiently small so that
\begin{flalign}
\label{x0nu}
\mathbb{P} \Big[ \big| \mathcal{X}_0 (\kappa) \big| \ge e^{5 (C_0 + 1) M} \Big] \ge 1 - \displaystyle\frac{\varsigma}{2},
\end{flalign}
where $\varsigma \in (0, 1/2)$ was fixed in \Cref{mprobability}.
\end{definition}
\begin{definition}
\label{xvw}
Recall the notation from \Cref{br0vw}, and fix an integer $j \ge 0$. For any vertex $v \in \mathbb{V} (M_j)$, define the vertex set $\mathcal{X}_1 (v) \subset \mathbb{V}$ by
\begin{flalign*}
\mathcal{X}_1 (v) = \big\{ w \in \mathbb{D}_M (v) : \big| R_{vw}^{(v_-, w_+)} \big| \cdot \mathbbm{1}_{\mathscr{B} (v, w)} \ge e^{t_j M} \big\}.
\end{flalign*}
\end{definition}
Next we require the following events.
\begin{definition}
\label{eventg1}
Recalling the notation of \Cref{eventg}, abbreviate $w = \mathfrak{c} (v)$ and set
\begin{flalign*}
\mathscr{G}_1 (v) = \mathscr{G}_0 (v) \cap \mathscr{G}_0 (w); \qquad \mathscr{G} (v) = \mathscr{G}_1 (v) \cap \Bigg\{ \bigg| \displaystyle\sum_{\substack{u \in \mathbb{D} (w) \\ u \ne \mathfrak{c} (w)}} R_{uu}^{(w)} T_{wu}^2 \bigg| < 5 \Bigg\}.
\end{flalign*}
\end{definition}
\noindent By the definition \eqref{t} of the $\{ T_{vw} \}$, and \Cref{ar}, there exists a constant $c_1 = c_1 (\alpha) > 0$ with
\begin{flalign}
\label{c1e}
\mathbb{P} \big[ \mathscr{G}_0 (v) \big] \ge c_1; \quad \mathbb{P} \big[ \mathscr{G}_1 (v) \big| \mathscr{G}_0 (v) \big] \ge c_1; \quad \mathbb{P} \big[ \mathscr{G} (v) \big| \mathscr{G}_1 (v) \big] \ge c_1, \qquad \text{so that $\mathbb{P} \big[ \mathscr{G} (v) \big] \ge c_1^3$}.
\end{flalign}
We next define the vertex sets $\mathcal{Z}_k$ briefly mentioned above.
\begin{definition}
\label{zidefinition}
For each integer $k \ge 0$, define the vertex set $\mathcal{Z}_k$ inductively as follows. Set
\begin{flalign*}
\mathcal{Z}_0 = \big\{ v \in \mathcal{X}_0 (\kappa) : \text{$\mathscr{G} (v)$ holds} \big\}.
\end{flalign*}
\noindent For $k \ge 1$, define $\mathcal{Z}_k \subset \mathbb{V}$ to be the subset of vertices $v \in \mathbb{V} ( M_k - 2)$ satisfying the following four properties, where here we let $v^{(i)}$ denote the $i$-th parent of $v$, that is, $v^{(j)} = (v^{(j-1)})_-$ for each $j \ge 1$, with $v^{(0)} = v$. First, the event $\mathscr{G} (v)$ holds. Second, we have $v^{(M+2)} \in \mathcal{Z}_{k-1}$. Third, we have $v^{(M)} = \mathfrak{c} \big( \mathfrak{c} (v^{(M+2)}) \big)$. Fourth, we have $v \in \mathcal{X}_1 (v^{(M)})$.
For $k \ge 1$, we then define the events
\begin{flalign*}
\mathscr{F}_0 = \big\{ |\mathcal{Z}_0| \ge e^{4 (C_0 + 1) M} \big\}; \quad \mathscr{F}_k = \Bigg\{ |\mathcal{Z}_k| \ge \exp \bigg( kM \cdot \Big( \varphi (s; z) - \displaystyle\frac{s}{k} \displaystyle\sum_{j=1}^k t_j \Big) + M (2C_0 + 1 - 2 k \delta^2) \bigg) \Bigg\}.
\end{flalign*}
\end{definition}
We then have the following two lemmas. The first lower bounds $\mathbb{P} [\mathscr{F}_k]$, and the second lower bounds $|R_{0v}^{(v_+)}|$ for $v \in \mathcal{Z}_k$. We recall that $\kappa$ was chosen in \eqref{x0nu}.
\begin{lem}
\label{estimatekf}
If $\varphi(s;z) \geq \delta$ and $\delta \in (0,1/2)$, there exists a constant $c > 0$ (independent of $\delta$) such that we have $\mathbb{P} [\mathscr{F}_0] \ge 1 - \frac{3\varsigma}{4}$ and $\mathbb{P} [ \mathscr{F}_k] \ge \mathbb{P}[\mathscr{F}_{k-1}] - c^{-1} e^{-c \delta^2 M}$.
\end{lem}
\begin{lem}
\label{estimatenlf}
There is a constant $C > 1$ such that for any integer $k \ge 0$ and vertex $v \in \mathcal{Z}_k$ we have
\begin{flalign*}
\big| R_{0v}^{(v_+)} \big| \ge \exp \Bigg( \bigg( \displaystyle\sum_{j=1}^k t_j + C (\log \kappa - \delta^4 k - C_0 - 1) \bigg) \cdot M \Bigg).
\end{flalign*}
\end{lem}
Given \Cref{estimatekf} and \Cref{estimatenlf}, we can quickly establish \Cref{mprobability}.
\begin{proof}[Proof of \Cref{mprobability}]
Set $M_k' = M_k - 2$. By \Cref{estimatenlf} (and the fact that $\mathcal{Z}_k \subset \mathbb{V} (M_k')$), there exist constants $C_1 > 1$ and $C_2 > 1$ such that on $\mathscr{F}_k$ we have
\begin{flalign*}
M_k'^{-1} \log \mathcal{N}_{M_k'} &\Bigg( \displaystyle\frac{1}{k} \displaystyle\sum_{j = 1}^k t_j + \frac{C_1}{k} (\log \kappa - \delta^4 k - C_0 - 1) \Bigg) \\
&\ge M_k'^{-1} \log \mathcal{N}_{M_k'} \Bigg( \displaystyle\frac{M}{M_k'} \cdot \bigg( \displaystyle\sum_{j = 1}^k t_j + C_2 (\log \kappa - \delta^4 k - C_0 - 1) \bigg)\Bigg) \\
& \ge M_k'^{-1} \log |\mathcal{Z}_k| \ge \Bigg( \varphi (s; z) - \displaystyle\frac{s}{k} \displaystyle\sum_{j=1}^k t_j - 2 \delta^2 \Bigg) \cdot \displaystyle\frac{kM}{M_k'}.
\end{flalign*}
\noindent Taking $k = \Theta \le 2 \delta^{-2}$; applying \Cref{estimatekf}; and using the facts that $L = M_{\Theta}'$, $\mathcal{N}_L(x)$ is decreasing in $x$, and $-2 \delta^2 > -\delta$, we deduce the existence of constants $t_0 = \Theta^{-1} \sum_{j=1}^{\Theta} t_j$ and $c_1 > 0$ such that
\begin{flalign*}
\mathbb{P} \big[ L^{-1} \log \mathcal{N}_L (t_0 - \delta) \ge ( 1- \delta) \varphi (s;z) - st_0 - \delta \big] \ge \mathbb{P} [ \mathscr{F}_{\Theta} \big] \ge 1 - \displaystyle\frac{3 \varsigma}{4} - c_1^{-1} \Theta \cdot e^{-c_1 \delta^2 M} \ge 1 - \varsigma,
\end{flalign*}
\noindent for sufficiently large $L$. The proposition follows.
\end{proof}
Now let us establish \Cref{estimatekf} and \Cref{estimatenlf}.
\begin{proof}[Proof of \Cref{estimatekf}]
\noindent We induct on $k$; let us first verify the case $k = 0$. By \eqref{x0nu}, \eqref{c1e}, the independence of $\big\{ \mathscr{G} (v) \big\}$ over $v \in \mathbb{V} (M)$, and a Chernoff bound, we deduce the existence of a constant $c_2 \in (0, \kappa)$ with
\begin{flalign*}
\mathbb{P} [\mathscr{F}_0] \ge 1 - \displaystyle\frac{\varsigma}{2} - e^{-c_2 M} \ge 1 - \displaystyle\frac{3 \varsigma}{4}.
\end{flalign*}
\noindent This establishes the lemma when $k = 0$. To establish it when $k \ge 1$, first observe that, since $t_j \ge 0$ and $\varphi (s; z) \ge \delta \ge 2 \delta^2$, if $\mathscr{F}_{k-1}$ holds then $|\mathcal{Z}_{k-1}| \ge \exp \big( M (2 C_0 + 1) \big)$. Hence, from \eqref{c1e} and a Chernoff bound, there exists a new constant $c_2 > 0$ such that
\begin{flalign*}
\mathbb{P} \bigg[ \Big| \big\{ v \in \mathcal{Z}_{k-1} : \text{$\mathscr{G} (v)$ holds} \big\} \Big| \ge \displaystyle\frac{c_1^3}{2} \cdot |\mathcal{Z}_{k-1}| \cdot \mathbbm{1}_{\mathscr{F}_{k-1}} \bigg] \ge 1 - e^{- c_2 M}.
\end{flalign*}
\noindent Additionally, observe from \Cref{mudeltak} that, for any $u \in \mathbb{V} (M_k)$, we have
\begin{flalign}
\label{upsi}
\mathbb{P} \big[ \mathscr{H} (u) \big] \ge 1 - e^{-(\delta^2 + \mu_k) M}, \quad \text{where} \quad \mathscr{H} (u) = \bigg\{ \big| \mathcal{X}_1 (u) \big| \ge \exp \Big( \big( \varphi (s; z) - st_k + \mu_k \big) \cdot M \Big) \bigg\}.
\end{flalign}
\noindent In particular, applying \eqref{upsi} for any $u = \mathfrak{c} \big( \mathfrak{c} (v) \big)$, where $v \in \mathcal{Z}_{k - 1}$ is such that $\mathscr{G} (v)$ holds, we deduce from a Chernoff bound that there exists a constant $c_3 > 0$ such that
\begin{flalign}
\label{zk1u}
\mathbb{P} \Bigg[ \bigg| \Big\{ u \in \mathcal{Z}_{k-1} : \text{$\mathscr{G}(u) \cap \mathscr{H} \big( \mathfrak{c} ( \mathfrak{c} (u)) \big)$ holds} \Big\} \bigg| \ge e^{-(2 \delta^2 + \mu_k) M} \cdot |\mathcal{Z}_{k-1}| \cdot \mathbbm{1}_{\mathscr{F}_{k-1}} \Bigg] \ge 1 - e^{-c_3 \delta^2 M}.
\end{flalign}
\noindent By \Cref{zidefinition}, any $v \in \mathcal{X}_1 \big( \mathfrak{c} (\mathfrak{c} (u)) \big)$ for which the event $\{ u \in \mathcal{Z}_{k - 1} \} \cap \mathscr{G} (u)$ holds satisfies $v \in \mathcal{Z}_k$. Thus, \eqref{zk1u} together with the bound
\begin{flalign*}
e^{-(2 \delta^2 + \mu_k) M} \cdot |\mathcal{Z}_{k-1}| \cdot \mathbbm{1}_{\mathscr{F}_{k-1}} & \cdot \exp \Big( \big( \varphi (s; z) - st_k + \mu_k \big) \cdot M \Big) \\
& \qquad \quad \ge \exp \Bigg( kM \cdot \bigg( \varphi (s; z) - \displaystyle\frac{s}{k} \displaystyle\sum_{j=1}^k t_j \bigg) + M (2C_0 +1 - 2 k \delta^2) \Bigg) \cdot \mathbbm{1}_{\mathscr{F}_{k-1}},
\end{flalign*}
\noindent yields the lemma.
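For the reader's convenience, we also record the exponent count behind the above display. For $k \ge 2$, on $\mathscr{F}_{k-1}$ the inductive bound on $|\mathcal{Z}_{k-1}|$ gives
\begin{flalign*}
e^{-(2 \delta^2 + \mu_k) M} \cdot |\mathcal{Z}_{k-1}| \cdot \exp \Big( \big( \varphi (s; z) - st_k + \mu_k \big) \cdot M \Big) \ge \exp \Bigg( kM \varphi (s; z) - sM \displaystyle\sum_{j=1}^{k} t_j + M (2C_0 + 1 - 2 k \delta^2) \Bigg),
\end{flalign*}
\noindent since the factors $e^{\mu_k M}$ and $e^{-\mu_k M}$ cancel and the terms $-2 \delta^2 M$ accumulate to $-2 k \delta^2 M$; the exponent on the right side is exactly the one appearing in the definition of $\mathscr{F}_k$. The case $k = 1$ is similar, using instead the bound $|\mathcal{Z}_0| \ge e^{4 (C_0 + 1) M} \ge e^{(2 C_0 + 1) M}$ on $\mathscr{F}_0$.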
\end{proof}
\begin{proof}[Proof of \Cref{estimatenlf}]
We induct on $k$; the case $k = 0$ follows from \Cref{x0}, as it indicates that $\big| R_{0v}^{(v_+)} \big| \ge \kappa^M$, for any $v \in \mathcal{Z}_0$. To establish it for $k \ge 1$, fix $v \in \mathcal{Z}_k$, and let $w = v^{(M+1)}$ and $u = v^{(M)} = \mathfrak{c} (w)$ (where we recall the notation above \Cref{zidefinition}). We have $u_- = w$. Then, \Cref{rproduct} implies
\begin{flalign}
\label{rz1}
\big| R_{0v}^{(v_+)} \big| = \big| R_{0w_-}^{(w)} \big| \cdot |T_{w_- w}| \cdot \big| R_{ww}^{(v_+)} \big| \cdot |T_{wu}| \cdot \big| R_{uv}^{(u_-, v_+)} \big|.
\end{flalign}
\noindent Since $v \in \mathcal{Z}_k$ and $w_- = v^{(M+2)}$, we have $w_- \in \mathcal{Z}_{k-1}$ and that $\mathscr{G} (w_-)$ holds. The former implies
\begin{flalign}
\label{rz2}
\big| R_{0w_-}^{(w)} \big| \ge \exp \Bigg( \bigg( \displaystyle\sum_{j=1}^{k-1} t_j + C ( \log \kappa - (k-1) \delta^4 - C_0 - 1) \bigg) \cdot M \Bigg),
\end{flalign}
\noindent and the latter (as $\mathscr{G} (w_-) \subset \mathscr{G}_0 (w_-) \cap \mathscr{G}_0 (w)$, since $w = \mathfrak{c} (w_-)$) gives
\begin{flalign}
\label{rz3}
|T_{w_- w}| \ge 1; \qquad |T_{wu}| \ge 1.
\end{flalign}
\noindent Additionally, from the fact that $v \in \mathcal{X}_1 (u)$ (as $v \in \mathcal{Z}_k$), we obtain
\begin{flalign}
\label{rz4}
\big| R_{uv}^{(u_-, v_+)} \big| \cdot \mathbbm{1}_{\mathscr{B} (u, v; \varsigma)} \ge e^{t_k M}.
\end{flalign}
Next, we have
\begin{flalign}
\label{rz5}
\big| R_{ww}^{(v_+)} \big| = \Bigg| z + T_{w_- w}^2 R_{w_- w_-}^{(w)} + T_{wu}^2 R_{uu}^{(u_-, v_+)} + \displaystyle\sum_{\substack{u' \in \mathbb{D} (w) \\ u' \ne \mathfrak{c} (w)}} R_{u'u'}^{(w)} T_{u' w}^2 \Bigg|^{-1} \ge \big| B + 8 \Omega + 6 \big|^{-1},
\end{flalign}
\noindent where we have used the bounds
\begin{flalign*}
& |z| \le |E| + \eta \le B + 1; \qquad |T_{w_- w}| \le 2; \qquad |T_{wu}| \le 2, \\
& \big| R_{w_- w_-}^{(w)} \big| \le \Omega; \qquad \big| R_{uu}^{(u_-, v_+)} \big| \le \Omega; \qquad \Bigg| \displaystyle\sum_{\substack{u' \in \mathbb{D} (w) \\ u' \ne \mathfrak{c} (w)}} R_{u'u'}^{(w)} T_{u' w}^2 \Bigg| \le 5.
\end{flalign*}
\noindent Here, the second and third bounds follow from the fact that $\mathscr{G}_0 (w_-) \cap \mathscr{G}_0 (w)$ holds; the fourth and fifth follow from the fact that $\mathscr{B} (0, w_-; \varsigma) \cap \mathscr{B} (u, v; \varsigma)$ holds; and the sixth follows from the fact that $\mathscr{G} (w_-)$ holds. Thus, the lemma follows from combining \eqref{rz1}, \eqref{rz2}, \eqref{rz3}, \eqref{rz4}, and \eqref{rz5}, and then taking $M$ (equivalently, $L$) sufficiently large.
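Concretely, inserting \eqref{rz2}, \eqref{rz3}, \eqref{rz4}, and \eqref{rz5} into the product \eqref{rz1} yields
\begin{flalign*}
\big| R_{0v}^{(v_+)} \big| \ge \exp \Bigg( \bigg( \displaystyle\sum_{j=1}^{k-1} t_j + C ( \log \kappa - (k-1) \delta^4 - C_0 - 1) \bigg) \cdot M \Bigg) \cdot e^{t_k M} \cdot \big( B + 8 \Omega + 6 \big)^{-1},
\end{flalign*}
\noindent with \eqref{rz4} supplying the factor $e^{t_k M}$; since $(B + 8 \Omega + 6)^{-1} \ge e^{-C \delta^4 M}$ once $M$ is sufficiently large, this produces the claimed bound with $k \delta^4$ in place of $(k-1) \delta^4$.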
\end{proof}
\subsection{Delocalization Through Large Resolvent Entries}
\label{RLarge}
In this section we establish \Cref{rimaginary0}, which is a delocalization result for $\mathbb{T}$, stating that $\Imaginary R_{\star} (E + \mathrm{i} \eta)$ remains bounded below if $\varphi (1; E) > 0$. Given \Cref{mprobability}, its proof will follow the ideas of \cite{aizenman2013resonant}. In particular, we begin with the following definition from \cite{aizenman2013resonant}, which provides notation for the quantiles of $\Imaginary R_{\star} (z)$ (recall \Cref{r}).
\begin{definition}[{\cite[Definition 4.2]{aizenman2013resonant}}]
\label{axi}
For any complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$ and positive real number $a > 0$, let $\xi (a; z)$ be the largest value of $\xi \ge 0$ so that
\begin{flalign*}
\mathbb{P} \big[ \Imaginary R_{\star} (z) \ge \xi \big] \ge a.
\end{flalign*}
\end{definition}
\begin{rem}
\label{axi0}
Observe that $\xi (a; z)$ is bounded for any fixed $z \in \mathbb{H}$ and $a \in \mathbb{R}_{> 0}$, due to the deterministic estimate $\Imaginary R_{00} (z) \le \big| R_{00} (z) \big| \le \eta^{-1}$ that holds by the second part of \Cref{q12}.
\end{rem}
Next, we require the following events on which $|R_{0v}|$ is large and on which a vertex $v$ has some child $w$ on which $|T_{vw}|^2 \cdot \Imaginary R_{ww}^{(v)}$ is not too small.
\begin{definition}
\label{eventa}
In what follows, for any real number $\delta > 0$, integer $L \ge 0$, and vertices $v \in \mathbb{V} (L)$ and $w \in \mathbb{V} (L + 1)$ with $v \prec w$, we define the events
\begin{flalign}
\label{av}
\mathscr{A}_1 (\delta; v)= \mathscr{A}_1 (\delta; v; z) = \big\{ |R_{0v}(z)| \ge e^{\delta L} \big\}
\end{flalign}
and
\begin{flalign}
\mathscr{A}_2 (v; w)= \mathscr{A}_2 (v; w ;z) = \Bigg\{ |T_{vw}|^2 \Imaginary R_{ww}^{(v)}(z) \ge \xi \bigg( \displaystyle\frac{1}{2}; z \bigg) \Bigg\}; \qquad \mathscr{A}_2 (v)= \mathscr{A}_2 (v;z) = \bigcup_{v \prec w} \mathscr{A}_2 (v; w).
\end{flalign}
\end{definition}
The following proposition is similar to \cite[Theorem 4.6]{aizenman2013resonant}, though the probability bound in \eqref{r0jestimate} is $\frac{3}{5}$ (which could be replaced with any number less than and bounded away from $1$, without substantially affecting the proof), instead of an unspecified constant $c$ as in \cite{aizenman2013resonant}. Its proof, which makes use of \Cref{mprobability}, is given in \Cref{ProofEstimateR} below.
\begin{prop}
\label{estimater}
Let $s \in (\alpha, 1)$, $\delta \in (0, 1/8)$, $\varepsilon> 0$, $B > 1$, $\nu \in (8 \delta, 1)$ be real numbers. Let $E \in \mathbb{R}$ be a real number such that $\varepsilon \le |E| \le B$, and let $\{ \eta_j\}_{j=1}^\infty$ be a decreasing sequence of positive reals such that $\lim_{j} \eta_j = 0$. Assume
\begin{flalign}
\label{estimatespectrum}
\displaystyle\lim_{s \rightarrow 1} \bigg( \displaystyle\liminf_{j \rightarrow \infty} \varphi_s (E + \mathrm{i} \eta_j) \bigg) > \nu \text{ and } \displaystyle\liminf_{j \rightarrow \infty} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta_j) > u \big] = 0, \quad \text{for any $u > 0$}.
\end{flalign}
\noindent Then, for any sufficiently large positive integer $L \ge L_0 (\nu) > 1$, there exists $J(L)$ such that for all $j \ge J$, we have \begin{flalign}
\label{r0jestimate}
\mathbb{P} \Bigg[ \bigcup_{v \in \mathbb{V} (L)} \big( \mathscr{A}_1 (\delta; v; E + \mathrm{i} \eta_j) \cap \mathscr{A}_2 (v;E + \mathrm{i} \eta_j) \big) \Bigg] \ge \displaystyle\frac{3}{5}.
\end{flalign}
\end{prop}
Given \Cref{estimater}, we can prove \Cref{rimaginary0}.
\begin{proof} [Proof of \Cref{rimaginary0}]
We may assume that $E \ne 0$, for otherwise the result follows from \cite[Lemma 4.3(b)]{bordenave2011spectrum}. We first prove that there exists $c >0$ such that
\begin{flalign}
\label{dedeXX}
\liminf_{j \rightarrow \infty} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta_j) > c \big] > c.
\end{flalign}
Assume to the contrary that $\liminf_{j \rightarrow \infty} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta_j) > c \big] = 0$ for every $c > 0$. By assumption, there exists $\nu > 0$ such that
\begin{equation}
\displaystyle\lim_{s \rightarrow 1} \bigg( \displaystyle\liminf_{j \rightarrow \infty} \varphi_s (E + \mathrm{i} \eta_j) \bigg) > \nu.
\end{equation}
\noindent Also, by \eqref{r00sum}, we have
\begin{flalign}
\label{r00vimaginary}
\Imaginary R_{00} \ge \displaystyle\sum_{v \in \mathbb{V} (k)} |R_{0v}|^2 \displaystyle\sum_{v \prec u} |T_{vu}|^2 \Imaginary R_{uu}^{(v)}.
\end{flalign}
\noindent By \Cref{estimater}, there exist constants $\delta = \delta (\alpha, \nu, E) > 0$ and $J, L \in \mathbb{Z}_{\ge 1}$ such that
\begin{flalign}
\label{probabilitya}
\mathbb{P} \big[ \mathscr{A} (\delta) \big] \ge \displaystyle\frac{3}{5}, \qquad \text{where} \quad \mathscr{A} (\delta) = \bigcup_{v \in \mathbb{V} (L)} \big( \mathscr{A}_1 (\delta; v; E + \mathrm{i} \eta_j) \cap \mathscr{A}_2 (v; E + \mathrm{i} \eta_j) \big)
\end{flalign}
for $j \ge J$. Inserting this into \eqref{r00vimaginary}, and using the definitions \eqref{av} of the events $\mathscr{A}_1 (\delta; v)$ and $\mathscr{A}_2 (v)$, we find
\begin{flalign*}
\mathbbm{1}_{\mathscr{A}(\delta)} \cdot \Imaginary R_{00}(E + \mathrm{i} \eta_j) \ge \mathbbm{1}_{\mathscr{A}(\delta)} \cdot e^{\delta L} \cdot \xi \bigg( \displaystyle\frac{1}{2}; E + \mathrm{i} \eta_j \bigg).
\end{flalign*}
\noindent This, together with \eqref{probabilitya} and \Cref{axi}, implies that for all $j \ge J$ we have
\begin{flalign*}
e^{\delta L} \cdot \xi \bigg( \displaystyle\frac{1}{2} ; E + \mathrm{i} \eta_j \bigg) \le \xi \bigg( \displaystyle\frac{3}{5}; E + \mathrm{i} \eta_j \bigg) \le \xi \bigg( \displaystyle\frac{1}{2}; E + \mathrm{i} \eta_j \bigg),
\end{flalign*}
\noindent where in the last inequality we used the fact that $\xi (a; z)$ is nonincreasing in $a$ (which follows from \Cref{axi}). Since $\Imaginary R_{\star} (E + \mathrm{i} \eta_j) > 0$ almost surely (as $\eta_j > 0$), we have $\xi \big( \frac{1}{2}; E + \mathrm{i} \eta_j \big) > 0$, so the above display forces $e^{\delta L} \le 1$. For $L$ large enough, this is a contradiction and thus establishes \eqref{dedeXX}.
We may now check that $\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c$. Set $z_j = E + \mathrm{i} \eta_j$. By \eqref{dedeXX}, there exists a constant $c > 0$ such that $\mathbb{E} \big[ \big( \Imaginary R_{\star} (z_j) \big)^{\alpha/2} \big] > c$ for sufficiently large $j$. Recalling the notation from \eqref{kappatheta1} and \Cref{rrealimaginary}, it follows that $\sigma \big(\vartheta_0 (z_j) \big) \ge c^{2/\alpha}$, and so for any $\theta \in (0, 1)$ there exists a real number $\delta = \delta (\theta) > 0$ such that $\mathbb{P} \big[ \vartheta (z_j) > \delta \big] > \theta$ for all sufficiently large $j$. In particular, any limit point of $\vartheta (z_j)$ as $j$ tends to $\infty$ is almost surely positive, which by \eqref{kappatheta1} (and \eqref{q00delta1}) implies the same for $\Imaginary R_{\star} (z_j)$.
\end{proof}
\begin{rem}
\label{rnu}
Although not explicitly stated in \Cref{rimaginary0}, it is quickly verified that its proof shows the following more uniform variant of it. For any real numbers $ \nu, \theta \in (0, 1)$, there exists a constant $\delta = \delta (\varepsilon, B, \nu, \theta) > 0$ such that, if\footnote{Observe in this statement that the uniformity of the lower bound on $\delta$ is lost as $E$ tends to $0$; this seems to be an artifact of our proof method. Indeed, at $E = 0$, \cite[Lemma 4.3(b)]{bordenave2011spectrum} explicitly identifies the law of $R_{\star} (\mathrm{i} \eta)$ for any $\eta > 0$ and observes it to be almost surely positive as $\eta$ tends to $0$.} $\varepsilon \le |E| \le B$ and
\begin{equation}
\displaystyle\lim_{s \rightarrow 1} \bigg( \displaystyle\liminf_{j \rightarrow \infty} \varphi_s (E + \mathrm{i} \eta_j) \bigg) > \nu,
\end{equation}
then $\mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta_j) > \delta \big] \ge \theta$ for large enough $j$.
\end{rem}
\subsection{Proof of \Cref{estimater}}
\label{ProofEstimateR}
In this section we establish \Cref{estimater}. Let $\delta \in (0, 1/8)$ be a parameter, as in the statement of that proposition. Perhaps the ``most unlikely aspect'' of the event described in \eqref{r0jestimate} is the exhibition of some vertex $v \in \mathbb{V} (L)$ for which $\mathscr{A}_1 (\delta; v)$ holds, that is, for which $|R_{0v}| \ge e^{\delta L}$. The heuristic underlying the existence of such a vertex is similar to the one described in \cite[Section 4.3]{aizenman2013resonant} under the name of \emph{resonant delocalization}.
To explain it, first observe that \Cref{mprobability} with $s$ close to $1$ yields about $e^{ (\varphi(s; z) - t_0)L}$ vertices $w \in \mathbb{V} (L-1)$ for which
\begin{flalign}
\label{r0w}
\big|R_{0w}^{(w_+)} \big| \ge e^{t_0 L}.
\end{flalign}
\noindent Since $t_0$ could be negative, this does not immediately give an off-diagonal resolvent entry of size $e^{\delta L}$. So, we use the identity (from \Cref{rproduct})
\begin{flalign}
\label{r0vw}
|R_{0v}| = \big| R_{0w}^{(w_+)} \big| \cdot |T_{wv}| \cdot |R_{vv}|
\end{flalign}
\noindent for such $w$, where $v = w_+$. We will take $v$ so that $|T_{wv}| \ge 1$ (such a child $v$ of $w$ exists with positive probability), and so $|R_{0v}| \ge e^{t_0 L} \cdot |R_{vv}|$.
Next, since $\lim_{j \rightarrow \infty} \Imaginary R_{\star} (E + \mathrm{i} \eta_j) = 0 $, \Cref{rrealimaginary} shows that $R_{\star}$ becomes a real random variable whose density is bounded below; in particular, $\mathbb{P} \big[ |R_{vv}| \ge e^{(\delta - t_0) L} \big] \sim e^{(t_0 - \delta) L}$. Since there are about $e^{(\varphi (s; z) - t_0) L}$ vertices $w$ satisfying \eqref{r0w}, and since $e^{(t_0 - \delta) L} \cdot e^{(\varphi (s; z) - t_0) L} = e^{(\varphi (s; z) - \delta)L} \gg 1$ (as $\varphi (s; z) \ge \nu > \delta$), there is likely at least one such vertex $v$ satisfying $|R_{vv}| \ge e^{(\delta - t_0) L}$. Combined with \eqref{r0w} and \eqref{r0vw}, this yields a vertex $v \in \mathbb{V}(L)$ for which $|R_{0v}| \ge e^{\delta L}$.
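Schematically, the expected number of such vertices $v$ is at least
\begin{flalign*}
\underbrace{e^{(\varphi (s; z) - t_0) L}}_{\text{vertices $w$ satisfying \eqref{r0w}}} \cdot \underbrace{e^{(t_0 - \delta) L}}_{\mathbb{P} [ |R_{vv}| \ge e^{(\delta - t_0) L} ]} = e^{(\varphi (s; z) - \delta) L} \gg 1,
\end{flalign*}
\noindent though this count is only heuristic; the remainder of this section makes it precise.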
Now let us implement this in detail. Let $C_0 > 1$ denote a sufficiently large constant (so that in particular $C_0 > 4C$, where $C$ is from \Cref{mprobability}). Let $L \in \mathbb{Z}_{\ge 1}$ be a parameter. By \eqref{estimatespectrum}, we may fix $s \in (\alpha, 1)$ sufficiently close to $1$ and $J(L)\in \mathbb{Z}_{\ge 1}$ so that
\begin{flalign}
\label{r0}
\varphi_s (E + \mathrm{i} \eta_j) \ge \nu; \qquad \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta_j) > e^{-2C_0 L} \big] < e^{-C_0 L}; \qquad \eta_j < e^{-2C_0 L}
\end{flalign}
for all $j\in \mathbb{Z}_{\ge 1}$ such that $j \ge J$. Set $z = E + \mathrm{i} \eta$, $z_j = E + \mathrm{i} \eta_j$, and for each integer $k \ge 1$ let $R_k = R_k (z)$ denote mutually independent random variables with law $R_{\star} (z)$. Letting $\boldsymbol{\zeta} = (\zeta_1, \zeta_2, \ldots )$ denote a Poisson point process on $\mathbb{R}_{> 0}$ with intensity $\alpha x^{-\alpha/2-1} dx$, it follows from \eqref{r0}, \Cref{sumaxi}, \Cref{rrealimaginary}, and \Cref{ar} that
\begin{flalign}
\label{zetajrj}
\mathbb{P} \Bigg[ \eta_j + \displaystyle\sum_{k = 1}^{\infty} \zeta_k \Imaginary R_k(z_j) > e^{-C_0 L} \Bigg] < e^{-C_0 L}
\end{flalign}
for sufficiently large $j$.
Next, recall the real number $t_0 = t_0 (\alpha, s, z, \omega, \Omega, L-1) \in \big[ -\frac{C_0}{4}, 0 \big]$ from \Cref{mprobability}, and the vertex set $\mathcal{S}_L (t) = \mathcal{S}_L (t; 0; \omega; \Omega;z)$ and its size $\mathcal{N}_L (t) = \mathcal{N}_L (t; 0; \omega; \Omega;z)$ from \Cref{ns}. Here we take $L-1$ instead of $L$ in the statement of \Cref{mprobability}, and restrict to $L \ge L_0$, where $L_0$ is given by the same proposition.
By \Cref{mprobability} and the first bound of \eqref{r0} (and by taking $s \ge 1 - C_0^{-1} \delta$, so that $\big| (s - 1) t_0 \big| \le \frac{\delta}{2}$), we may fix $\omega = \omega (\delta) \in (0, 1)$ and $\Omega = \Omega (\delta) > 1$ such that
\begin{flalign}
\label{9nl110}
\mathbb{P} \big[ L^{-1} \log \mathcal{N}_{L-1} (t_0 - \delta) \ge \nu - t_0 - \delta \big] \ge \frac{9}{10}.
\end{flalign}
We next require the following two vertex sets.
\begin{definition}
\label{0r1}
Further define the vertex set
\begin{flalign*}
\mathcal{R}_1 = \mathcal{R}_1(z) = \Big\{ w \in \mathcal{S}_{L-1} (t_0 - \delta) : \text{$\mathscr{G}_0 \big( \mathfrak{c} (w) \big)$ holds} \Big\},
\end{flalign*}
\noindent where we recall the event $\mathscr{G}_0$ from \Cref{eventg}; the fact that any $w \in \mathcal{S}_{L-1} (t_0 - \delta)$ satisfies $\mathscr{G}_0 (w)$ (by \Cref{br0vw} and \Cref{ns}); and the child $\mathfrak{c} = \mathfrak{c} (w)$ of $w$ from \Cref{eventg}.
\end{definition}
\begin{definition}
\label{r02}
Additionally define the vertex set $\mathcal{R}_2 = \mathcal{R}_2(z)$ consisting of those vertices $v \in \mathbb{V} (L)$ such that the following five statements hold.
\begin{enumerate}
\item There exists a vertex $w \in \mathcal{R}_1$ for which $v = \mathfrak{c} (w)$.
\item The event $\mathscr{G}_0 (v)$ holds.
\item Denoting $\mathfrak{c} = \mathfrak{c} (v)$, we have the bounds $\Imaginary R_{\mathfrak{c} \mathfrak{c}}^{(v)} \ge \xi \big( \frac{1}{2}; z \big)$ and $\big| R_{\mathfrak{c} \mathfrak{c}}^{(v)} \big| \le C_0$.
\item We have
\begin{flalign*}
\Imaginary K_v \le e^{-C_0 L}; \quad \Imaginary Q_v \le e^{-C_0 L}; \quad \Real K_v \in \big[ - \Real Q_v - e^{(2\delta + t_0 - \nu)L}, e^{(2\delta + t_0 - \nu)L} - \Real Q_v \big],
\end{flalign*}
\noindent where
\begin{flalign}
\label{kvqv}
K_v = \displaystyle\sum_{\substack{u \in \mathbb{D} (v) \\ u \ne \mathfrak{c}}} T_{vu}^2 R_{uu}^{(v)}; \qquad Q_v = z + T_{w v}^2 R_{ww}^{(v)} + T_{v \mathfrak{c}}^2 R_{\mathfrak{c} \mathfrak{c}}^{(v)}.
\end{flalign}
\end{enumerate}
\noindent We refer to the second, third, and fourth events above as $\mathscr{E}_1 (w)$, $\mathscr{E}_2 (w)$, and $\mathscr{E}_3 (w)$, respectively (indexing these events by $w = v_-$ instead of $v$); we also set $\mathscr{E} (w) = \mathscr{E}_1 (w) \cap \mathscr{E}_2 (w) \cap \mathscr{E}_3 (w)$. These events all depend on the choice of $z \in \mathbb{H}$; however, we omit this dependence from the notation.
\end{definition}
The following lemma indicates that the event required in \eqref{r0jestimate} holds if $\mathcal{R}_2$ is nonempty.
\begin{lem}
\label{va}
For any $v \in \mathcal{R}_2$, the event $\mathscr{A}_1 (\delta; v) \cap \mathscr{A}_2 \big( v; \mathfrak{c} (v) \big)$ holds.
\end{lem}
\begin{proof}
Abbreviating $\mathfrak{c} = \mathfrak{c}(v)$, we have $|T_{v\mathfrak{c}}| \ge 1$ (since $\mathscr{G}_0 (v)$ holds) and $\Imaginary R_{\mathfrak{c} \mathfrak{c}}^{(v)} \ge \xi \big( \frac{1}{2}; z \big)$ (by the third statement of \Cref{r02}). Together, these imply that $\mathscr{A}_2 \big( v; \mathfrak{c} (v) \big)$ holds, so it remains to verify that $\mathscr{A}_1 (\delta; v)$ does as well.
To that end, recalling that $v = \mathfrak{c} (w)$ for some $w \in \mathcal{R}_1$, we have
\begin{flalign*}
|R_{0v}| = \big| R_{0w}^{(v)} \big| \cdot |T_{wv}| \cdot |R_{vv}| \ge e^{(t_0 - \delta) (L-1)} \cdot \Bigg| z + T_{wv}^2 R_{ww}^{(v)} + T_{v\mathfrak{c}}^2 R_{\mathfrak{c}\mathfrak{c}}^{(v)} + \displaystyle\sum_{\substack{u \in \mathbb{D}(v) \\ u \ne \mathfrak{c}}} T_{vu}^2 R_{uu}^{(v)} \Bigg|^{-1},
\end{flalign*}
\noindent where the equality follows from \Cref{rproduct}, and the inequality follows from the fact that $\big| R_{0w}^{(v)} \big| \ge e^{(t_0 - \delta) (L-1)}$ (since $w \in \mathcal{S}_{L-1} (t_0 - \delta)$); the fact that $|T_{wv}| \ge 1$ (since $\mathscr{G}_0 (w)$ holds); and the Schur complement identity \eqref{qvv} for $R_{vv}$. Combining this with the fourth statement of \Cref{r02} (and taking $C_0$ sufficiently large), it follows that $|R_{0v}| \ge \frac{1}{2} \cdot e^{-t_0} \cdot e^{(\nu - 3 \delta)L} \ge e^{\delta L}$, where the last inequality holds for sufficiently large $L$, since $\nu > 8 \delta$ and $|t_0| \le C_0$. Thus, $\mathscr{A}_1 (\delta; v)$ holds, confirming the lemma.
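To make the penultimate bound explicit, note that the quantity inside the absolute value above is exactly $Q_v + K_v$ from \eqref{kvqv}, so the fourth statement of \Cref{r02} gives
\begin{flalign*}
|Q_v + K_v| \le |\Real Q_v + \Real K_v| + \Imaginary Q_v + \Imaginary K_v \le e^{(2 \delta + t_0 - \nu) L} + 2 e^{-C_0 L} \le 2 e^{(2 \delta + t_0 - \nu) L},
\end{flalign*}
\noindent for $C_0$ sufficiently large; hence $|R_{vv}| = |Q_v + K_v|^{-1} \ge \frac{1}{2} \cdot e^{(\nu - 2 \delta - t_0) L}$.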
\end{proof}
Now we can establish \Cref{estimater}.
\begin{proof}[Proof of \Cref{estimater}]
By \Cref{va}, it suffices to show that $\mathcal{R}_2$ is not empty. We begin by considering an arbitrary point $z = E + \mathrm{i} \eta$ for $\eta \in (0,1)$.
Let us first lower bound $|\mathcal{R}_1|$. To that end, recall from \eqref{c1e} that there exists a constant $c_1 > 0$ such that $\mathbb{P} \big[ \mathscr{G}_0 \big( \mathfrak{c} (w) \big) \big] \ge c_1$. Moreover, the events $\big\{ \mathscr{G}_0 \big( \mathfrak{c} (w) \big) \big\}$ over $w \in \mathcal{S}_{L-1} (t_0 - \delta)$ are mutually independent. Together with \eqref{9nl110}, a Chernoff estimate, and a union bound, this yields a constant $c_2 = c_2 (\nu) > 0$ such that for sufficiently large $L$ we have
\begin{flalign}
\label{r1estimate}
\mathbb{P} \bigg[ |\mathcal{R}_1| \ge \displaystyle\frac{c_1}{2} \cdot e^{(\nu - t_0 - \delta) L} \bigg] \ge \displaystyle\frac{9}{10} - e^{-c_2 L} \ge \displaystyle\frac{4}{5}.
\end{flalign}
Next, let us use \eqref{r1estimate} to lower bound the probability that $\mathcal{R}_2$ is nonempty, to which end we lower bound $\mathbb{P} \big[ \mathscr{E} (w) \big]$ from \Cref{r02}. To do this, we condition on $\mathbb{T}_- (w)$ and again use the fact from \eqref{c1e} that $\mathbb{P} \big[ \mathscr{G}_0 (v) \big] \ge c_1$. Next, we condition on those $\{ T_{vu} \}$ for which $|T_{vu}| \in [1, 2]$, and on the event that $\mathscr{G}_0 (v)$ holds. Under this conditioning, we have for sufficiently large $C_0$ that
\begin{flalign}
\label{e1e2}
\mathbb{P} \Bigg[ \Imaginary R_{\mathfrak{c} \mathfrak{c}}^{(v)} \ge \xi \bigg( \displaystyle\frac{1}{2}; z \bigg) \Bigg] \ge \displaystyle\frac{1}{2}, \quad \text{and} \quad \mathbb{P} \Big[ \big| R_{\mathfrak{c} \mathfrak{c}}^{(v)} \big| \le C_0 \Big] \ge \displaystyle\frac{3}{4}, \quad \text{so} \quad \mathbb{P} \big[ \mathscr{E}_2 (w) \big| \mathscr{E}_1 (w) \big] \ge \displaystyle\frac{1}{4}.
\end{flalign}
\noindent Indeed, since $R_{\mathfrak{c} \mathfrak{c}}^{(v)}$ is independent of the $\{ T_{vu} \}$, the first bound follows from \Cref{axi} for $\xi$ and the second follows from \eqref{q00delta2}; the third then follows from a union bound.
Next, let us additionally condition on $R_{\mathfrak{c} \mathfrak{c}}^{(v)}$, and on the event that $\mathscr{E}_1 (w) \cap \mathscr{E}_2 (w)$ holds. Then the quantity $Q_v$ from \eqref{kvqv} becomes deterministic and satisfies $|Q_v| \le B + 4 \Omega + 4 C_0 + 1$, since $|z| \le |E| + |\eta| \le B + 1$; since $|T_{wv}|, |T_{v \mathfrak{c}}| \le 2$ (as $\mathscr{G}_0 (w) \cap \mathscr{G}_0 (v)$ holds); since $\big| R_{ww}^{(v)} \big| \le \Omega$ (as $\mathscr{B} (0, w)$ holds, because $w \in \mathcal{S}_{L-1} (t_0 - \delta)$); and since $\big| R_{\mathfrak{c} \mathfrak{c}}^{(v)} \big| \le C_0$ (as $\mathscr{E}_2 (w)$ holds).
We now specialize to $\eta = \eta_j$ for some $j\in \mathbb{Z}_{\ge 1}$. By \eqref{zetajrj}, \eqref{zuv} (with \Cref{tuvalpha}, \Cref{q0estimate1}, and \Cref{sigmabetaestimate}), and a union bound it follows that there exists a constant $c_3 > 0$ such that
\begin{flalign} \label{b27}
\mathbb{P} \big[ \mathscr{E}_3 (w) \big| \mathscr{E}_1 (w) \cap \mathscr{E}_2 (w) \big] \ge c_3 \cdot e^{(2\delta + t_0 - \nu) L} - 2 e^{-C_0 L} \ge \displaystyle\frac{c_3}{2} \cdot e^{(2 \delta + t_0 - \nu) L}
\end{flalign}
if $j$ is sufficiently large. Together with \eqref{e1e2} and the previously mentioned bound $\mathbb{P} \big[ \mathscr{E}_1 (w) \big] = \mathbb{P} \big[ \mathscr{G}_0 (v) \big] \ge c_1$, this gives
\begin{flalign*}
\mathbb{P} \big[ \mathscr{E} (w) \big] \ge \displaystyle\frac{c_1 c_3}{8} \cdot e^{(2 \delta + t_0 - \nu)L}
\end{flalign*}
for $z = E + \mathrm{i} \eta_j$ when $j$ is sufficiently large.
Now observe that, conditional on the subtree $\mathbb{T}_- (L)$ above $\mathbb{V} (L)$, the events $\big\{ \mathscr{E} (w) \big\}$ are mutually independent over $w \in \mathcal{R}_1$. Hence, since by \Cref{r02} the set $\mathcal{R}_2$ consists of those vertices $\mathfrak{c} (w)$, over $w \in \mathcal{R}_1$, for which $\mathscr{E} (w)$ holds, it follows from \eqref{r1estimate}, a Chernoff estimate, and a union bound that there exists a constant $c_4 = c_4 (\nu) > 0$ such that
\begin{flalign*}
\mathbb{P} \big[ |\mathcal{R}_2| \ge 1 \big] \ge \mathbb{P} \bigg[ |\mathcal{R}_2| \ge \displaystyle\frac{c_1 c_3}{8} \cdot e^{(2 \delta + t_0 - \nu) L} \cdot e^{(\nu - t_0 - \delta) L} \bigg] \ge \displaystyle\frac{4}{5} - e^{-c_4 L} \ge \displaystyle\frac{3}{5},
\end{flalign*}
\noindent for sufficiently large $L$. We choose $L_0$ so that \eqref{r1estimate}, \eqref{b27}, and the previous inequality all hold for $L \ge L_0$. Together with \Cref{va}, this implies the proposition.
\end{proof}
\newpage
\chapter{Explicit Formula for the Mobility Edge}
\label{EdgeExplicit}
\section{Transfer Operator}\label{s:transfer}
This section is devoted to the proof of \Cref{l:bootstrap}. We begin in \Cref{s:productexpansion} by deriving a useful product expansion for $\Phi_L (s; z)$ (recall \Cref{moment1}). We take its limit as $\Imaginary z$ tends to $0$ in \Cref{EtaSmall}, which will imply that $\Phi_L$ may be calculated by iterating a certain integral operator $T$. In \Cref{s:transferoperator}, we estimate the operator norm of powers of $T$ in terms of the largest positive eigenvalue of $T$ (which will be evaluated explicitly in \Cref{s:pfeigenvector}). Finally, in \Cref{s:provebootstrap}, we prove \Cref{l:bootstrap}.
\subsection{Product Expansion}\label{s:productexpansion}
Throughout this section, we fix a complex number $z = E + \mathrm{i} \eta \in \mathbb{H}$ and an integer $L \ge 1$; we frequently abbreviate $R_{vw} = R_{vw} (z)$ for $v, w \in \mathbb{V}$ and $R_{\star} = R_{\star} (z)$ (from \Cref{gr}). Recalling the random variables $\varkappa_0 = \varkappa_0 (z)$ and $\vartheta_0 = \vartheta_0 (z)$ from \eqref{kappatheta1}, we let $V = V(z)$ denote a complex random variable with law
\begin{flalign*}
V \sim \varkappa_0 + \mathrm{i} \vartheta_0 \sim -z - R_{\star}^{-1}.
\end{flalign*}
\noindent For each integer $j \in \unn{1}{L}$, let $V_j = V_j (z)$ denote a complex random variable with law $V$, with the $\{ V_j \}$ mutually independent.
Next, we fix some infinite path $\mathfrak{p} = (v_0, v_1, \ldots )$ in $\mathbb{V}$ with $v_0 = 0$, as well as a sequence of real numbers $\bm{t} = (t_1, t_2, \ldots , t_L)$. The following definition recursively introduces two sets of random variables $\{ R_i(z ; L; \bm{t}; \mathfrak{p} )\}_{i=0}^L$ and $\{ S_i(z ; L; \bm{t}; \mathfrak{p}) \}_{i=0}^L$. We often abbreviate $R_i(z ; L) = R_i(z ; L; \bm{t}) = R_i (z; L; \bm{t}; \mathfrak{p})$, and $S_i(z;L) = S_i(z ; L; \bm{t}) = S_i (z; L; \bm{t}; \mathfrak{p})$.
\begin{definition}
\label{srz}
For $i \in \unn{0}{L}$, we define the random variables $S_i (z; L)$ recursively, by setting
\begin{flalign}
\label{sirecursion}
S_L(z;L) = - \big( R_{v_L v_L}^{(v_{L-1})} \big)^{-1} -z, \quad \text{and} \quad S_i(z;L) = V_{i+1}(z) - \frac{t_{i+1}^2}{z + S_{i+1}(z;L)}, \quad \text{for $i \in \unn{0}{L-1}$}.
\end{flalign}
\noindent Moreover, for $i \in\unn{0}{L}$, we define
\begin{equation}\label{ridef}
R_i(z;L) = - \big(z + S_i(z;L) \big)^{-1}.
\end{equation}
\end{definition}
Observe in particular that
\begin{flalign*}
R_L (z; L) = R_{v_L v_L}^{(v_{L-1})}, \quad \text{and} \quad R_i(z;L) = - \big(z + V_{i+1}(z) + t^2_{i+1} R_{i+1}(z ; L) \big)^{-1}, \quad \text{for $i \in \unn{0}{L-1}$},
\end{flalign*}
\noindent and that the latter equality is analogous in form to the Schur complement identity \eqref{qvv}, with one term (being $t_{i+1}^2 R_{i+1} (z; L)$ here) separated from the sum on the right side (with the remaining part of the sum being $V_{i+1} (z)$ here).
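For illustration, unrolling the recursion \eqref{sirecursion} for $L = 2$ expresses $R_0$ as a finite continued fraction:
\begin{flalign*}
R_0 (z; 2) = - \Bigg( z + V_1 - \displaystyle\frac{t_1^2}{z + V_2 - \displaystyle\dfrac{t_2^2}{z + S_2 (z; 2)}} \Bigg)^{-1},
\end{flalign*}
\noindent so that iterating \eqref{sirecursion} transports the boundary datum $S_L (z; L)$ along the path $\mathfrak{p}$ up to the root.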
We next require a truncation of the fractional moment sum $\Phi_L$ (from \Cref{moment1}). For any real numbers $s \in (\alpha, 1)$ and $\omega > 0$, define
\begin{equation}
\Phi_L^\circ(\omega)
= \Phi_L^\circ(s;z;\omega)=
\mathbb{E} \Bigg[ \displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (0, v; \omega)} \Bigg],
\end{equation}
\noindent recalling the event $\mathscr{D} (0, v; \omega)$ from \Cref{br0vw}. Observe in particular that $\Phi^\circ_L(s;z;0) = \Phi_L(s;z)$.
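Observe also that, since the events $\mathscr{D} (0, v; \omega)$ from \Cref{br0vw} are decreasing in $\omega$, the quantity $\Phi_L^{\circ} (s; z; \omega)$ is nonincreasing in $\omega$; in particular,
\begin{equation*}
\Phi_L^{\circ} (s; z; \omega) \le \Phi_L (s; z), \qquad \text{for all $\omega \ge 0$}.
\end{equation*}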
The following lemma provides a product expansion for $\Phi^\circ_L (\omega)$.
\begin{lem}\label{l:expansionz}
For any real numbers $s \in (\alpha,1)$ and $\omega > 0$, we have
\begin{equation}\label{productN}
\Phi^\circ_{L}(s;z;\omega)
=
\int_{\mathbb{R}^L}
\mathbb{E} \Bigg[
\big| R_L (z; L; \bm{t}) \big|^s \cdot
\prod_{j=1}^{L}
|t_j|^s \big| R_{j-1}(z; L; \bm{t}) \big|^s \Bigg] \cdot
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 } \mathbbm{1}_{t_j \ge \omega} \, dt_j
.
\end{equation}
\end{lem}
\begin{proof}
For each $i \in \unn{1}{L}$, set $\bm{t}^{(i)} = (t_1, t_2, \ldots , t_i)$. Set $\Phi_{L, 0}^{\circ} = \Phi_L^{\circ}$ and, for any $i \in \unn{1}{L}$, set
\begin{flalign}
\label{lsumi}
\begin{aligned}
\Phi^\circ_{L,i}(s;z)
=
\int_{\mathbb{R}^i} &
\mathbb{E} \Bigg[
\prod_{j=1}^{i}
|t_j|^s \Big| R_{j-1} \big(z; i; \bm{t}^{(i)} \big) \Big|^s
\cdot
\sum_{ v\in \mathbb{D}_{L-i} (v_i)}
\big| R_{v_i v}^{(v_{i-1})} \big|^s \cdot \mathbbm{1}_{\mathscr{D}(v_i,v;\omega)}
\Bigg] \\
& \qquad \times
\prod_{j=1}^{i}
\alpha t_j^{-\alpha -1 } \mathbbm{1}_{t_j \ge \omega} \, dt_j.
\end{aligned}
\end{flalign}
\noindent We will show that for every $i \in \unn{0}{ L-1}$, if $\Phi^\circ_{L, i }= \Phi^\circ_{L} $, then $\Phi^\circ_{L, i +1} = \Phi^\circ_{L} $.
Since the desired conclusion \eqref{productN} is $\Phi^\circ_{L, L} = \Phi^\circ_{L}$ (as $R_L (z; L) = R_{v_L v_L}^{(v_{L-1})}$), this will prove the lemma by induction.
By the product expansion \Cref{rproduct}, for any $w \in \mathbb{D} (v_i)$ and $v \in \mathbb{D}_{L-i-1} (w) \subseteq \mathbb{V}(L)$, we have
\begin{equation}\label{tildeproduct}
\big| R^{(v_{i-1})}_{v_i v } \big| = \big| R^{(v_{i-1})}_{v_i v_i} \big| \cdot |T_{v_i w }| \cdot \big| R^{(v_i)}_{wv} \big|.
\end{equation}
\noindent Further, we have by the Schur complement identity \eqref{qvv} that
\begin{equation}\label{schur}
R^{(v_{i-1})}_{v_i v_i} = -\big( z + T^2_{v_i w }R^{(v_i)}_{ww} + K_{w} \big)^{-1},\quad \text{where}\quad
K_{w} = \sum_{\substack{u \in \mathbb{D} (v_i) \\ u \neq w}} T^2_{v_i u} R^{(v_i)}_{uu}.
\end{equation}
\noindent Observe that $K_w$ has the same law as $V$, as (by \Cref{sumaxi} and \eqref{kappa0theta0sum}) both are complex $\frac{\alpha}{2}$-stable laws with the same parameters. Inserting these into \eqref{lsumi} gives
\begin{flalign}
\label{momenti1}
\begin{aligned}
\Phi^\circ_{L,i}(s;z) & = \int_{\mathbb{R}^i}
\mathbb{E} \Bigg[ \bigg( \prod_{j=1}^{i}
|t_j|^s \big|R_{j-1}(z; i) \big|^s \bigg)
\cdot
\displaystyle\sum_{w \in \mathbb{D} (v_i)} \bigg|
\frac{T_{v_i w}}{z + T^2_{v_i w }R^{(v_i)}_{ww} + K_{w}}
\bigg|^s \\
& \qquad \qquad \times
\sum_{\substack{ v\in \mathbb{V}(L)\\ w \preceq v}}
\big| R_{w v}^{(v_i)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (w, v; \omega)}
\Bigg] \cdot
\prod_{j=1}^{i}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j.
\end{aligned}
\end{flalign}
\noindent Recall from \Cref{srz} and \eqref{schur} that $R_{i-1}(z;i) = - \big(z + S_{i-1}(z;i) \big)^{-1}$ and
\begin{equation*} S_{i-1}(z;i) = V_i + t_i^2 \cdot R_i (z; i) = V_{i} + t_i^2 \cdot R_{v_i v_i}^{(v_{i-1})}= V_i - t_i^2 \cdot \big(z + T^2_{v_i w }R^{(v_i)}_{ww} + K_{w} \big)^{-1}.
\end{equation*}
\noindent Further set
\begin{flalign*}
& \tilde S_{i-1}(z;i; t_{i+1}) =
\tilde S_{i-1}(z; i ; \bm{t})
= V_i - t_i^2 \cdot \big( z + t_{i+1}^2 R^{(v_i)}_{v_{i+1},v_{i+1}} + V_{i+1} \big)^{-1}; \\
& \tilde R_{i-1}(z;i; t_{i+1}) =
\tilde R_{i-1}(z; i; \bm{t}) = - \big( z + \tilde S_{i-1}(z;i; t_{i+1}) \big)^{-1},
\end{flalign*}
and define $\tilde R_j(z;i; t_{i+1}) = \tilde R_j(z;i; \bm{t})$ and $\tilde S_j(z;i;t_{i+1}) = \tilde S_j(z;i; \bm{t}) $ for $j \le i-2$ recursively using the analogues of \eqref{sirecursion} and \eqref{ridef} (replacing $S$ and $R$ there with $\widetilde{S}$ and $\widetilde{R}$ here, respectively). Observe that, since $\big(K_w, R_{ww}^{(v_i)} \big)$ has the same law as $\big( V_{i+1}, R_{v_{i+1}, v_{i+1}}^{(v_i)} \big)$, the random variable $\widetilde{S}_{i-1} (z; i; t_{i+1})$ has the same law as $S_{i-1} (z; i)$ if $t_{i+1} = T_{v_i w}$. Thus, for $t_{i+1} = T_{v_i w}$, we have that $\big( \widetilde{R}_{i-1} (z; i; t_{i+1}), \widetilde{S}_{i-1} (z; i; t_{i+1}) \big)$ has the same law as $\big( R_{i-1} (z; i), S_{i-1} (z; i) \big)$. It follows for $t_{i+1} = T_{v_i w}$ that the random variables $\big( \widetilde{R}_j (z; i; t_{i+1}), \widetilde{S}_j (z; i; t_{i+1}) \big)_j$ have the same law as $\big( R_j (z; i), S_j (z; i) \big)_j$, as they satisfy the same recursion. Hence, \eqref{momenti1} implies
\begin{flalign}
\label{momenti2}
\begin{aligned}
\Phi^\circ_{L,i}(s;z) & = \int_{\mathbb{R}^i}
\mathbb{E} \Bigg[ \bigg( \prod_{j=1}^{i}
|t_j|^s \big| \widetilde{R}_{j-1}(z; i) \big|^s \bigg)
\cdot
\displaystyle\sum_{w \in \mathbb{D} (v_i)} \bigg|
\frac{T_{v_i w}}{z + T^2_{v_i w }R^{(v_i)}_{ww} + K_{w}}
\bigg|^s \\
& \qquad \qquad \times
\sum_{\substack{ v\in \mathbb{V}(L)\\ w \preceq v}}
\big| R_{w v}^{(v_i)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (w, v; \omega)}
\Bigg] \cdot
\prod_{j=1}^{i}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j.
\end{aligned}
\end{flalign}
We evaluate the right side of \eqref{momenti2} through \eqref{fxi}, to which end we must introduce a suitable function $f \colon \mathbb{R}_{> 0} \times \mathcal N_{\mathbb{R}_{> 0}}^{\#*} \rightarrow \mathbb{R}_{\ge 0}$ to which \Cref{fidentityxi} would apply. Suppose that $\nu \in \mathcal N_{\mathbb{R}_{> 0}}^{\#*}$ is supported on a finite interval $(0,B)$ for some $B > 0$. Then we can write $\nu = \sum_{k=1}^\infty \delta_{\nu_k}$, where the sequence $\nu_1, \nu_2, \dots$ is decreasing. Suppose also that the sum
\begin{equation}\label{Knu}
K(\nu) = \sum_{k=1}^\infty \nu^2_k \cdot R_{k}(z),
\end{equation}
converges, where $\{R_{k}(z)\}_{k=1}^\infty$ is a collection of independent, identically distributed random variables with law $R_\star(z)$.
We set
\begin{flalign*}
& \tilde S_{i-1}(z;i; t_{i+1};\nu) = \tilde S_{i-1}(z; i ; \bm{t};\nu)= V_i - t_i^2 \cdot \big(z + t_{i+1}^2 R^{(v_i)}_{v_{i+1},v_{i+1}} + K(\nu) \big)^{-1}; \\
& \tilde R_{i-1}(z;i; t_{i+1}, \nu) = \tilde R_{i-1}(z; i; \bm{t};\nu) = - \big( z + \tilde S_{i-1}(z;i; t_{i+1};\nu)\big)^{-1},
\end{flalign*}
\noindent and define $\tilde R_{j}(z;i; t_{i+1};\nu) = \tilde R_{j}(z;i; \bm{t};\nu)$ and $\tilde S_{j}(z;i;t_{i+1};\nu) = \tilde S_{j}(z;i; \bm{t};\nu) $ for $j \le i-2$ recursively using the analogues of \eqref{sirecursion} and \eqref{ridef} (replacing the $S$ and $R$ there with $\widetilde{S}$ and $\widetilde{R}$ here, respectively). Observe that, if $\nu$ is sampled according to a Poisson point process with intensity measure $\alpha x^{-\alpha-1} dx$, then \Cref{tuvalpha} implies that $\big( \widetilde{S}_j (z; i; t_{i+1}; \nu), \widetilde{R}_j (z; i; t_{i+1}; \nu) \big)_j$ has the same law as $\big( \widetilde{S}_j (z; i; t_{i+1}), \widetilde{R}_j (z; i; t_{i+1}) \big)_j$. Now, if \eqref{Knu} diverges we set $f = 0$, and when it converges we define
$f \colon \mathbb{R}_{> 0} \times \mathcal N_{\mathbb{R}_{> 0}}^{\#*} \rightarrow \mathbb{R}_{\ge 0}$ by
\begin{flalign*}
f(x, \nu) & =
\int_{\mathbb{R}^i}
\mathbb{E} \Bigg[
\bigg(
\prod_{j=1}^{i} |t_j|^s \big| \tilde R_{j-1}(z; i; x; \nu) \big|^s
\bigg) \cdot
\bigg|
\frac{x}{z + x^2 R^{(v_i)}_{v_{i+1},v_{i+1}} + K(\nu)}
\bigg|^s \\
& \qquad\qquad \times \sum_{ v\in \mathbb{D}_{L-i-1} (v_{i+1})}
\big| R_{v_{i+1}, v}^{(v_i)} \big|^s \cdot \mathbbm{1}_{\mathscr{D} (v_{i+1}, v; \omega)}
\Bigg] \cdot \prod_{j=1}^{i}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega} \, dt_j.
\end{flalign*}
We can then apply \eqref{fxi} to the process $\Xi = \big\{ |T_{v_i u}| \big\}_{u \in \mathbb{D}( v_i)}$ to obtain from \eqref{momenti2} that
\begin{flalign}
\label{cutsum12}
\begin{aligned}
\Phi^\circ_{L,i}(s;z) & = \mathbb{E} \Bigg[ \displaystyle\sum_{w \in \mathbb{D} (v_i)} f (T_{v_i w}, \Xi \setminus T_{v_i w}) \Bigg] \\
& = \int_{\mathbb{R}^{i+1}}
\mathbb{E} \Bigg[
\bigg(
\prod_{j=1}^{i}
|t_j|^s \big|\tilde R_{j-1}(z; i; t_{i+1}) \big|^s
\bigg) \cdot
\bigg|
\frac{t_{i+1}}{z + t_{i+1}^2 R^{(v_i)}_{v_{i+1},v_{i+1}} + V_{i+1}(z)}
\bigg|^s \\
& \qquad \qquad \qquad \times \sum_{ v\in \mathbb{D}_{L-i-1} (v_{i+1})} \left| R_{v_{i+1}, v}^{(v_i)}\right|^s \cdot \mathbbm{1}_{\mathscr{D} (v_{i+1}, v; \omega)}
\Bigg] \cdot \alpha t_{i+1}^{-\alpha-1} \mathbbm{1}_{t_{i+1} \ge \omega} \, dt_{i+1} \\
& \qquad \qquad \qquad \times \Bigg( \prod_{j=1}^{i} \alpha t_j^{-\alpha -1 } \mathbbm{1}_{t_j \ge \omega} dt_j \Bigg),
\end{aligned}
\end{flalign}
\noindent where the second equality follows from \eqref{fxi}, since $\Xi$ is a Poisson point process with intensity $\alpha t^{-\alpha-1} dt$ by \Cref{tuvalpha} (and we used the fact that we can exchange the sum and the integrals because all terms are positive).
Recalling that $V_{i+1}$ has the same law as $K_w$ (by \Cref{tuvalpha}), we find that $\big( \widetilde{S}_j (z; i; t_{i+1}), \widetilde{R}_j (z; i; t_{i+1}) \big)_j$ has the same law as $\big( S_j (z; i+1), R_j (z; i+1) \big)_j$. Thus, the right side of \eqref{cutsum12} is $\Phi^\circ_{L,i+1}(s ; z)$, by \eqref{momenti1}. Hence, \eqref{cutsum12} yields $\Phi^\circ_{L,i}(s ; z) = \Phi^\circ_{L,i+1}(s ; z)$. Using the induction hypothesis that $\Phi^\circ_{L,i}(s ; z) = \Phi^\circ_L(s;z)$, this gives the desired equality $\Phi^\circ_{L,i+1}(s ; z) = \Phi^\circ_L(s;z)$.
\end{proof}
\subsection{The Small $\eta$ Limit of the Product Expansion}
\label{EtaSmall}
In this section we analyze the limit of $\Phi_L (s; z)$ (from \Cref{moment1}) as $\Imaginary z$ tends to $0$. To that end, we adopt the notation of \Cref{s:productexpansion}; we further fix a complex number $u_L \in \overline{\mathbb{H}}$ and a sequence of complex numbers $\bm{r} = (r_0, r_1, \ldots , r_{L-1}) \in \overline{\mathbb{H}}^{L}$. We define quantities $u_i \in \overline{\mathbb{H}}$ recursively for $i\in \unn{0}{L-1}$ by
\begin{equation}\label{ui}
u_i = r_i - \frac{t_{i+1}^2}{z+u_{i+1}}.
\end{equation}
\noindent Observe that $(u_i, r_i)$ serve as the analogs of $\big( S_i (z; L), V_{i+1} \big)$ in \Cref{srz}. Also observe that $u_i$ is a function of the parameters $\{t_{i+1},\dots, t_L \} \cup \{ r_i, \dots, r_{L-1} \} \cup \{ u_L \}$ only. We additionally set
\begin{equation*}
F(z; \bm{t}, \bm{r}, u_L)= \frac{1}{z + u_0} \cdot \prod_{i=1}^L
\frac{t_i}{z + u_i} .
\end{equation*}
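For instance, when $L = 1$ we have $u_0 = r_0 - t_1^2 (z + u_1)^{-1}$, so that
\begin{equation*}
F(z; \bm{t}, \bm{r}, u_1) = \frac{t_1}{z + u_1} \cdot \Big( z + r_0 - \frac{t_1^2}{z + u_1} \Big)^{-1};
\end{equation*}
\noindent in general, $F$ is a product of $L + 1$ such nested resolvent factors.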
\noindent Also let $p_z\colon \mathbb{C} \rightarrow \mathbb{R}$ denote the density of the complex random variable $\varkappa_0(z) + \mathrm{i} \vartheta_0(z)$, and let $\mu_z$ denote the measure on $\mathbb{C}$ given by $p_z(y) \, dy$.
Under this notation, \Cref{srz} implies that, if $u_L$ is sampled under the measure $\mu_z$ (which is the law of $-z-R_{\star}^{-1}$, or equivalently of $-z - \big( R_{v_L v_L}^{(v_{L-1})} \big)^{-1}$), then $u_L$ has the same law as $S_L (z; L)$. If the $r_i$ are further sampled independently under $\mu_z$, then $r_i$ has the same law as $V_{i+1}$, and so \eqref{ui} implies that $(u_0, u_1, \ldots , u_L)$ has the same law as $\big( S_0 (z; L), S_1 (z; L), \ldots, S_L (z; L) \big)$. Together with \eqref{productN} and the definition of $R_i (z; L)$ from \Cref{srz}, this implies that
\begin{multline}\label{productN3}
\Phi^\circ_{L}(\omega)
=
\int_{\mathbb{R}^L}
\mathbb{E} \Bigg[
\big| R_L (z; L; \bm{t}) \big|^s \cdot \prod_{j=1}^{L}
|t_j|^s \big|R_{j-1}(z; L) \big|^s \Bigg] \cdot
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega} \, dt_j \\
=
\int_{\mathbb{C}^{L+1} \times \mathbb{R}^L}
\big|F(z; \bm{t}, \bm{r}, u_L)\big|^s
d\mu_z(u_L) \prod_{i=0}^{L-1} d\mu_z(r_i )
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega} \, dt_j.
\end{multline}
We next introduce an analog of the right side of \eqref{productN3}, corresponding to the regime in which, as $\eta$ tends to $0$, the random variable $R_{\star} (E + \mathrm{i} \eta)$ becomes real. To that end, recall from \Cref{pkappae} that $p_E \colon \mathbb{R} \rightarrow \mathbb{R}$ denotes the density of the real random variable $\varkappa_\mathrm{loc}(E)$.
\begin{definition}
\label{amoment2}
For any real numbers $s \in (\alpha, 1)$; $E \in \mathbb{R}$; and $\omega > 0$, define
\begin{flalign}\label{productN2}
\Phi_L^{\mathrm{loc}} (s;E;\omega) =\int_{\mathbb{R}^{2L+1}}
\big|F(E; \bm{t}, \bm{r}, u_L)\big|^s
\cdot p_E (u_L) d u_L \cdot \prod_{i=0}^{L-1} p_E (r_i) dr_i \cdot
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j,
\end{flalign}
and set $\Phi_L^{\mathrm{loc}} (s;E) = \Phi_L^{\mathrm{loc}} (s;E;0)$.
\end{definition}
The next lemma considers the behavior of $ \Phi_L^{\circ}(s;z;\omega)$ as $\eta$ tends to zero.
\begin{lem}\label{l:PhiEA}
Fix an integer $L \ge 1$ and real numbers $s \in (\alpha,1)$; $E \in \mathbb{R}$; and $\omega>0$.
Suppose there exists a decreasing sequence of positive real numbers $(\eta_1, \eta_2, \ldots ) \subset (0, 1)$ tending to $0$ and an almost surely real random variable $R_\star (E)$, such that $\lim_{k \rightarrow \infty} R_\star(E + \mathrm{i} \eta_k) = R_\star(E)$ weakly. Then,
\begin{equation*}
\lim_{k \rightarrow \infty} \Phi^\circ_L(s;E + \mathrm{i} \eta_k; \omega ) =\Phi_L^{\mathrm{loc}} (s ; E ;\omega ).
\end{equation*}
\end{lem}
\begin{proof}
We may assume that $E \ne 0$, for otherwise such a sequence $(\eta_1, \eta_2, \ldots )$ does not exist by \cite[Lemma 4.3(b)]{bordenave2011spectrum}. Throughout this proof, we let $\mu_E$ denote the probability measure on $\mathbb{C}$, which is supported on $\mathbb{R}$ and whose restriction to $\mathbb{R}$ is the measure $p_E(x)\, dx$.
By \Cref{l:boundarybasics}, we have $R_\star(E) = R_\mathrm{loc}(E)$ in law. Setting $z_k = E + \mathrm{i} \eta_k$, this implies $\lim_{k \rightarrow \infty} \mu_{z_k} = \mu_E$, which in turn implies the weak convergence
\begin{align}\label{replace!}
\lim_{k \rightarrow \infty} &d\mu_{z_k}(u_L) \cdot \prod_{i=0}^{L-1} d\mu_{z_k}(r_i ) \cdot
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega} \, dt_j = d\mu_E(u_L) \cdot \prod_{i=0}^{L-1} d\mu_E(r_i ) \cdot
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j.
\end{align}
We observe that, up to a normalization (dependent on $\omega$), the measure $\alpha t_j^{-\alpha -1 } \cdot \mathbbm{1}_{t_j \ge \omega}\, dt_j$ prescribes the law of a real random variable. Then (up to this normalization), \eqref{productN3} may be viewed as the expectation of the random variable $|F(z)|^s$, where the law of $F(z)$ is induced by the $L + 1$ mutually independent random variables $(\bm{r}, u_L)$, each with law $\mu_z$, and the $L$ mutually independent random variables $\bm{t}$, each with law proportional to $\alpha t_j^{-\alpha -1 } \cdot \mathbbm{1}_{t_j \ge \omega}\, dt_j$. Further observe that the sequence $\{\Phi_{L}(s_0;z_k)\}_{k=1}^\infty$ is uniformly bounded for any fixed $s_0 \in (s, 1)$ by \Cref{limitr0j}. Hence, we have $ \mathbb{E} \big[ |F(E + \mathrm{i} \eta)|^{s_0} \big] < C$ for some constant $C = C(L) > 1$ independent of $\eta \in (0,1)$.
It follows that the sequence of random variables $\{ |F(z_k)|^s\}_{k=1}^\infty$ is uniformly integrable which, together with the weak convergence \eqref{replace!}, justifies the limit
\begin{align*}
\lim_{k \rightarrow \infty} \Phi^\circ_{L} (s;z_k;\omega)
&= \lim_{k \rightarrow \infty} \int_{\mathbb{C}^{L+1} \times \mathbb{R}^L}
\big|F(z_k; \bm{t}, \bm{r}, u_L)\big|^s
\, d\mu_{z_k}(u_L) \prod_{i=0}^{L-1} d\mu_{z_k}(r_i )
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j
\\
&= \int_{\mathbb{C}^{L+1} \times \mathbb{R}^L}
\big|F(E; \bm{t}, \bm{r}, u_L)\big|^s
\, d\mu_E(u_L) \prod_{i=0}^{L-1} d\mu_E(r_i )
\prod_{j=1}^{L}
\alpha t_j^{-\alpha -1 }\mathbbm{1}_{t_j \ge \omega}\, dt_j,
\end{align*}
\noindent where the first equality is \eqref{productN3}.
The lemma follows after recalling the definition \eqref{productN2} and that of $\mu_E$ in terms of $p_E$.
\end{proof}
The next lemma removes the cutoff $\omega$ from the previous lemma, and computes the boundary value of $\Phi_L(s;z)$ under the same hypotheses as in \Cref{l:PhiEA}.
\begin{lem}\label{l:PhiE}
Adopting the notation and assumptions of \Cref{l:PhiEA}, we have
\begin{equation*}
\lim_{k \rightarrow \infty} \Phi_L(s;E + \mathrm{i} \eta_k ) =\Phi_L^{\mathrm{loc}} (s ; E ).
\end{equation*}
\end{lem}
\begin{proof}
Set $z_k = E + \mathrm{i} \eta_k$; as in the proof of \Cref{l:PhiEA}, we may assume that $E \ne 0$. For any $\omega > 0$, we have $\Phi_L(s ; z_k) \ge \Phi^\circ_L(s ; z_k ;\omega )$, so we conclude from \Cref{l:PhiEA} that
\begin{equation*}
\liminf_{k \rightarrow \infty}
\Phi_L(s ; z_k) \ge \liminf_{k \rightarrow \infty} \Phi_L^{\circ} (s; z_k; \omega) = \Phi_L^{\mathrm{loc}} (s ; E ;\omega ).
\end{equation*}
\noindent By the monotone convergence theorem, we may take $\omega$ to zero to conclude that
\begin{equation}\label{liminf}
\liminf_{k \rightarrow \infty}
\Phi_L(s ; z_k) \ge \Phi_L^{\mathrm{loc}} (s ; E ).
\end{equation}
Next, by \Cref{ry0estimate}, there exists for any $\delta >0$ a constant $\omega = \omega(\delta) > 0$ such that
\begin{equation*}
\Phi^\circ_L (s; z_k ; \omega) \ge ( 1 - \delta) \cdot \Phi_L(s; z_k).
\end{equation*}
Holding $\delta$ fixed, taking $k$ to infinity, and using \Cref{l:PhiEA} gives
\begin{equation}\label{limsup}
\Phi_L^{\mathrm{loc}} (s; E) \ge \Phi_L^{\mathrm{loc}} (s ; E ;\omega) = \displaystyle\lim_{k\rightarrow \infty} \Phi_L^{\circ} (s; z_k; \omega) \ge (1 - \delta) \cdot
\limsup_{k\rightarrow\infty} \Phi_L(s; z_k).
\end{equation}
Taking $\delta$ to zero and combining \eqref{liminf} and \eqref{limsup} completes the proof.
\end{proof}
We next show that $\Phi_L^{\mathrm{loc}} (s; E)$ can be computed by iterating a certain integral operator.
We define a sequence of functions $\{g_i(x;E)\}_{i=0}^L$ in the following way. We set $g_L(x ; E) = p_E(x)$, and for $i \in \unn{0}{L-1}$ we set
\begin{equation}\label{gidef}
g_i(x;E) = \int_{\mathbb{R}^2} g_{i+1}(y;E)
\left| \frac{t}{E + y} \right|^s \cdot p_E\left( x + \frac{t^2}{E + y } \right) \, dy \cdot \alpha t^{-\alpha - 1 } \mathbbm{1}_{t > 0}\, dt .
\end{equation}
\begin{lem}\label{l:Erecursion}
Fix an integer $L \ge 1$ and real numbers $s \in (\alpha,1)$ and $E \in \mathbb{R}$. We have
\begin{flalign}\label{lsbig}
\Phi_L^{\mathrm{loc}} (s; E) = \int_{-\infty}^{\infty} \left| \frac{1 }{E +x } \right|^s g_0(x ; E) \, dx.
\end{flalign}
\end{lem}
\begin{proof}
Iterating \eqref{gidef}, we obtain that the right side of \eqref{lsbig} equals
\begin{equation}
\int_{\mathbb{R}^{2L+1}}
\left| \frac{1}{E + y_0} \prod_{i=1}^L
\frac{t_i}{E + y_i} \right|^s \cdot
p_E(y_L) \, dy_L \cdot
\prod_{i=0}^{L-1} p_E\left( y_i + \frac{t_{i+1}^2}{E + y_{i+1}} \right)\, dy_i \cdot \prod_{j=1}^L \alpha t_j^{-\alpha - 1} \mathbbm{1}_{t_j > 0}\, dt_j.
\end{equation}
We now make the change of variables
$r_i = y_i + \frac{t_{i+1}^2}{E + y_{i+1}}$ for $i\in\unn{0}{L-1}$ to obtain that the previous line equals
\begin{equation}
\int_{\mathbb{R}^{2L+1}}
\left| \frac{1}{E + y_0} \prod_{i=1}^L
\frac{t_i}{E + y_i} \right|^s
\cdot p_E(y_L) \, dy_L
\cdot \prod_{i=0}^{L-1} p_E(r_i) \, dr_i \cdot \prod_{j=1}^L \alpha t_j^{-\alpha - 1}\mathbbm{1}_{t_j > 0}\, dt_j,
\end{equation}
where we consider the $y_i$ for $i \in \unn{0}{ L -1}$ as functions of $(r_0, \dots , r_{L-1}, y_L)$. Further, these $y_i$ satisfy the relations \eqref{ui}, so (setting $y_L = u_L$) the previous integral equals
\begin{equation}
\int_{\mathbb{R}^{2L+1}}
\big| F(E; \bm{t}, \bm{r}, u_L)\big|^s \cdot
p_E(u_L) \, du_L \cdot
\prod_{i=0}^{L-1} p_E(r_i) \, dr_i \cdot \prod_{j=1}^L \alpha t_j^{-\alpha - 1} \mathbbm{1}_{t_j > 0}\, dt_j,
\end{equation}
\noindent which by the definition of $\Phi_L^{\mathrm{loc}} (s; E)$ (from \Cref{amoment2}) yields the lemma.
\end{proof}
\subsection{Integral Operator}\label{s:transferoperator}
Given $\alpha \in (0,1)$ and $s \in (\alpha, 1)$,
let $\| \cdot \|=\| \cdot \|_{\alpha,s}$ be the norm on functions $f\colon \mathbb{R} \rightarrow \mathbb{R}$ defined by
\begin{flalign}
\label{xf}
\| f \| = \Big\| f(x) \cdot \big(1 + |x|^{(\alpha-s)/2 + 1} \big) \Big\|_\infty,
\end{flalign}
where $\| \cdot \|_\infty$ is the $L^\infty$ norm. Let $\mathcal X$ be the Banach space of functions $f \colon \mathbb{R} \rightarrow \mathbb{R}$ such that $\|f \| < \infty$ (more precisely, we consider equivalence classes of functions that differ only on sets of measure zero). The following definition introduces a transfer operator that will be used to analyze $\Phi_L$; in the below, we recall the density $p_E$ from \Cref{pkappae}.
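In particular, $p_E \in \mathcal{X}$: by \Cref{ar} we have $p_E (x) \le C \cdot \min \big\{ 1, |x|^{-1-\alpha/2} \big\}$, and since $s > 0$ the weight in \eqref{xf} satisfies
\begin{equation*}
1 + |x|^{(\alpha - s)/2 + 1} \le 2 \big( 1 + |x|^{\alpha/2 + 1} \big),
\end{equation*}
\noindent so that $\| p_E \| < \infty$.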
\begin{definition}
\label{toperator}
We define the operator $T= T_{s,\alpha,E}$ on functions $f \in \mathcal X$ by
\begin{equation*}
Tf(x)
=
\frac{\alpha}{2} \int_{\mathbb{R}^2}
f(y)
\left| \frac{1}{E + y} \right|^s
|h|^{s-\alpha -1 } p_E\left(x + h^2 (E+y)^{-1}\right)
\, d h \, dy.
\end{equation*}
\end{definition}
\begin{rem}
\label{tlgl}
Using this notation, we may write \eqref{gidef} as
\begin{equation}\label{Erecursionrewrite}
g_i(x ; E) = \big(T g_{i+1} ( \,\cdot \,; E) \big)(x).
\end{equation}
\noindent Through $L$ applications of \eqref{Erecursionrewrite}, it follows that $g_0 = T^L g_L = T^L p_E$.
\end{rem}
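Combining \Cref{l:Erecursion} with \Cref{tlgl} yields the representation
\begin{equation*}
\Phi_L^{\mathrm{loc}} (s; E) = \int_{-\infty}^{\infty} \left| \frac{1}{E + x} \right|^s \big( T^L p_E \big) (x) \, dx,
\end{equation*}
\noindent so the exponential rate of growth (or decay) of $\Phi_L^{\mathrm{loc}} (s; E)$ in $L$ is governed by the Perron--Frobenius eigenvalue of $T$, studied in \Cref{l:pfeigenvector} below.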
Before analyzing $T$, we state the following useful integral bound.
\begin{lem}\label{l:Tintegral}
Fix $\alpha \in (0,1)$ and $s \in (\alpha , 1)$.
There exists a constant $c(s,\alpha) > 0$ such that, for all $x \in \mathbb{R}$,
\begin{equation*}
\int_{0}^\infty \frac{ r^{(s- \alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
\le
c^{-1} \min(1, |x|^{(s - \alpha)/2 -1}).
\end{equation*}
Additionally, if $x \ge 0$, we have
\begin{equation}\label{Tintegrallower}
c \min(1,x^{(s-\alpha)/2 -1})
\le
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} },
\end{equation}
and if $x \le 0$,
\begin{equation}\label{Tintegrallowerweak}
c \min(1,|x|^{s/2 - \alpha -1})
\le
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }.
\end{equation}
\end{lem}
\begin{proof}
First, suppose that $x>2$.
We write
\begin{align*}
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
&=
\int_{0}^{x-1} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} } +\int_{x-1}^{x+1} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} } +\int_{x+1}^{\infty} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }.
\end{align*}
For the first integral, we bound
\begin{flalign*}
\int_{0}^{x-1} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
\le
\int_{0}^{x-1} \frac{ r^{(s-\alpha)/2 -1} dr}{ | x -r |^{ 1 + \alpha/2} }.
\end{flalign*}
We now substitute $r = tx$ to obtain
\begin{equation*}
\int_{0}^{x-1} \frac{ r^{(s-\alpha)/2 -1} dr}{ | x -r |^{ 1 + \alpha/2} }
=
x^{s/2 - \alpha -1 } \int_{0}^{1- x^{-1}} \frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 -t|^{ 1 + \alpha/2} } .
\end{equation*}
The latter integral is straightforwardly bounded by splitting it into two pieces at $t=1/2$. We have
\begin{equation*}
\int_{0}^{1/2 } \frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 -t|^{ 1 + \alpha/2} }
\le C,
\end{equation*}
and direct integration gives
\begin{equation*}
\int_{1/2}^{1- x^{-1}} \frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 - t|^{ 1 + \alpha/2} }
\le C \int_{1/2}^{1- x^{-1}} \frac{ dt}{ | 1 - t|^{ 1 + \alpha/2} }
\le
C ( x^{\alpha/2} + 1).
\end{equation*}
\noindent Summing these two estimates, we conclude that
\begin{flalign}
\label{integralr}
\displaystyle\int_0^{x-1} \displaystyle\frac{r^{(s-\alpha)/2 - 1} dr}{1 + |x-r|^{1+\alpha/2}} \le C x^{s/2 - \alpha -1 } \cdot x^{\alpha/2} = C x^{(s-\alpha)/2 -1},
\end{flalign}
\noindent for $x > 2$.
Similar reasoning gives a matching lower bound of $c x^{(s-\alpha)/2 -1}$ for the integral on the left side of \eqref{integralr} if $x > 2$.
We next bound
\begin{flalign}
\label{integralr1}
\int_{x-1}^{x+1} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
\le \int_{x-1}^{x+1} r^{(s-\alpha)/2 -1} dr \le C x^{(s-\alpha)/2 -1},
\end{flalign}
and a matching lower bound follows similarly. Finally, again changing variables $r = tx$,
\begin{align}
\label{integralr2}
\begin{aligned}
\int_{x+1}^{\infty} \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
& \le x^{s/2 -\alpha -1 } \int_{1 + x^{-1}}^\infty
\frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 -t|^{ 1 + \alpha/2} }\\
&\le
C x^{s/2 -\alpha -1 }
\left(
\int_{1 + x^{-1}}^2
\frac{ dt}{ | 1 -t|^{ 1 + \alpha/2} }
+
\int_{2}^\infty
\frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 -t|^{ 1 + \alpha/2} }
\right) \le C x^{(s-\alpha)/2 -1},
\end{aligned}
\end{align}
\noindent and a matching lower bound follows similarly. Summing \eqref{integralr}, \eqref{integralr1}, and \eqref{integralr2} proves both the upper and lower bounds in the lemma for $x>2$.
When $x\in [-2,2]$, the continuity and positivity of
\begin{equation*}
x \mapsto \int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} },
\end{equation*}
\noindent imply that it is bounded above and below by a uniform constant on the compact interval $x \in [-2, 2]$; this verifies the upper and lower bounds in the lemma in this case.
For $x< - 2$, we have
\begin{equation*}
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} }
\le
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | |x| -r |^{ 1 + \alpha/2} },
\end{equation*}
which proves the upper bound, since the $x>2$ case was shown above. For the lower bound, we use the change of variables $r = t|x|$ (together with the fact that $|x - r| \ge 1$ for $r \ge 0 > -2 \ge x$, so that $1 + |x-r|^{1+\alpha/2} \le 2 |x-r|^{1+\alpha/2}$) to deduce that
\begin{equation*}
\int_{0}^\infty \frac{ r^{(s-\alpha)/2 -1} dr}{1 + | x -r |^{ 1 + \alpha/2} } \ge c \displaystyle\int_0^{\infty} \displaystyle\frac{r^{(s-\alpha)/2 - 1} dr}{ |x-r|^{1+\alpha/2}}
= c |x|^{s/2 -\alpha -1 } \int_{0}^\infty
\frac{ t^{(s-\alpha)/2 -1} dt}{ | 1 + t|^{ 1 + \alpha/2} }
\ge c
|x|^{s/2 -\alpha -1 },
\end{equation*}
\noindent which establishes the lemma.
\end{proof}
We now use the previous lemma to estimate the norm of $T$. We also provide a lower bound on $T^2f$ for nonnegative $f \in \mathcal X$ that are nonzero on a set of positive measure.
\begin{lem}\label{l:Tbounds}
For any $s \in (\alpha, 1)$, there exists a constant $C = C(s) > 1$ such that, for all $f \in \mathcal X$,
\begin{equation*}
\| Tf \| \le C \| f\|.
\end{equation*}
Further, if $f(x) \ge 0$ for all $x\in \mathbb{R}$ and $\int_{-\infty}^{\infty} \big| f(x) \big| dx > 0$, then there exists a constant $c_f = c_f (s) > 0$ such that
\begin{equation}\label{Tlower}
T^2 f(x) \ge \frac{c_f}{1 + |x|^{1 + (\alpha-s)/2} }.
\end{equation}
\end{lem}
\begin{proof}
Using the fact (from \Cref{ar}) that $|p_E(x)| \le C \cdot \min \big\{ 1, |x|^{-1-\alpha/2} \big\}$, we have
\begin{equation*}
\big|Tf(x)\big| \le C \int_{\mathbb{R}^2}
\big| f(y) \big|
\left| \frac{1}{E + y} \right|^s
|h|^{s-\alpha -1 } \frac{1}{ 1 + \big|x + h^2 (E+y)^{-1} \big|^{1 + \alpha/2} }
\, d h \, dy.
\end{equation*}
Set $t = h^2$, so that $ t^{-1/2} \, dt = 2 \, dh$. Then it follows that
\begin{equation*}
\big| Tf (x) \big| \le C \int_{-\infty}^\infty \int_{0}^\infty
\big| f(y) \big|
\left| \frac{1}{E + y} \right|^s
|t|^{(s-\alpha)/2 -1 } \frac{1}{ 1 + \big| x + t (E + y)^{-1} \big|^{1 + \alpha/2} }
\, d t \, dy.
\end{equation*}
\noindent Setting $r = t |E + y|^{-1}$ (implying that $|E + y|\, dr = dt$), we find that
\begin{flalign}
\label{tfx0}
\big| Tf (x) \big| \le C \int_{-\infty}^\infty \int_{0}^\infty
\big| f(y) \big|
|E+ y|^{- (s+\alpha)/2 } \frac{|r|^{(s-\alpha)/2 -1 }}{ 1 + \left|x\sgn(E+y) + r \right|^{1 + \alpha/2} }
\, d r \, dy.
\end{flalign}
\noindent Since we also have (from \Cref{ar}) that $p_E (x) \ge c \cdot \min \big\{ 1, |x|^{-1-\alpha/2} \big\}$, the same reasoning yields, for nonnegative $f \in \mathcal{X}$,
\begin{flalign}
\label{tfx2}
Tf(x) \ge c \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_0^{\infty} f(y) |E + y|^{-(s+\alpha)/2} \displaystyle\frac{|r|^{(s-\alpha)/2-1}}{1 + \big| x \sgn (E+y) + r \big|^{1+\alpha/2}} dr dy.
\end{flalign}
Applying \Cref{l:Tintegral} in \eqref{tfx0}, it follows that
\begin{flalign}
\label{tfx1}
\big| Tf(x) \big| \le \frac{C}{1 + |x|^{1 + (\alpha-s)/2}} \int_{-\infty}^\infty
\big| f(y) \big|
|E+y|^{- (s+\alpha)/2 }
\, dy.
\end{flalign}
Using the definition of the norm $\| f \|$, we have
\begin{equation*}
\int_{-\infty}^{\infty}
\big| f(y) \big|
|E+y|^{-(s+\alpha)/2} \, dy \le
C \|f \| \int_{-\infty}^{\infty}
\frac{1}{1 + |y|^{1 + (\alpha-s)/2}}
|E+y|^{-(s+\alpha)/2} \, dy
\le C \cdot \| f\|,
\end{equation*}
\noindent where to deduce the last inequality we used the facts that $\frac{s+\alpha}{2} \in (0, 1)$ (to bound the integral around $y = -E$) and $\frac{\alpha-s}{2} + 1 + \frac{s + \alpha}{2} > 1$ (to bound the integral around $y = \infty$). This, together with \eqref{tfx1}, concludes the proof of the upper bound in the lemma.
Now let us establish the lower bound of the lemma; in what follows, we assume that $E \ge 0$, as the proof when $E \le 0$ is entirely analogous. Inserting \eqref{Tintegrallower} and \eqref{Tintegrallowerweak} in \eqref{tfx2}, we deduce
\begin{flalign*}
Tf(x) \ge\frac{c}{1 + |x|^{1 + \alpha -s/2}} \int_{-\infty}^\infty
f(y)
|E+y|^{- (s+\alpha)/2 } \, dy
=
\frac{c_f}{1 + |x|^{1 + \alpha -s/2}}, \qquad \text{for any $x \in \mathbb{R}$},
\end{flalign*}
where $c_f = c_f (s) > 0$ is a constant depending on the function $f$; it is positive, since $f$ is nonnegative and nonzero on a set of positive measure.
This implies that $Tf (x) \ge c \cdot c_f$, uniformly for $x \in [-2E, 2E]$. Applying \eqref{tfx2} to the nonnegative function $Tf$, and using \eqref{Tintegrallower}, we obtain
\begin{flalign*}
T^2 f(x) & \ge c \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_0^{\infty} Tf(y) |E+y|^{-(s+\alpha)/2} \displaystyle\frac{|r|^{(s-\alpha)/2 - 1}}{1 + \big| x \sgn (E+y) + r \big|^{1 + \alpha/2}} dr dy \\
& \ge \displaystyle\frac{c}{1 + |x|^{1 + (\alpha-s)/2}} \Bigg( \mathbbm{1}_{x \ge 0} \cdot \displaystyle\int_{-2E}^{-E} Tf(y) |E + y|^{-(s+\alpha)/2} dy \\
& \qquad \qquad \qquad \qquad \qquad + \mathbbm{1}_{x < 0} \cdot \displaystyle\int_{-E}^{2E} Tf(y) |E+y|^{-(s+\alpha)/2} dy \Bigg) \ge \displaystyle\frac{ c \cdot c_f}{1 + |x|^{1 + (\alpha-s)/2}},
\end{flalign*}
\noindent where to obtain the second bound we observe that $\big| x \sgn (E + y) + r \big| = \big| r - |x| \big|$ both if $y < -E$ and $x \ge 0$, and if $y > -E$ and $x < 0$ (so that \eqref{Tintegrallower} applies with the nonnegative argument $|x|$), and for the third we use that $Tf(y) \ge c \cdot c_f$ for $y \in [-2E, 2E]$. This establishes the lower bound \eqref{Tlower}.
\end{proof}
The following lemma shows that $T$ is compact.
\begin{lem}\label{l:compact}
For all $s\in (\alpha,1)$, the operator $T\colon \mathcal X \rightarrow \mathcal X$ is compact.
\end{lem}
\begin{proof}
Changing integration variables $u = h^2 (E + y)^{-1}$ in \Cref{toperator} (so that $u$ ranges over $(0, \infty)$ when $y > -E$, and over $(-\infty, 0)$ when $y < -E$), we deduce
\begin{align*}
Tf(x)
& =
\frac{\alpha}{2} \int_{-E}^\infty \int_0^\infty
f(y)
\left|E + y \right|^{-(s+\alpha)/2}
\cdot u^{(s-\alpha)/2 -1 } p_E\left(x + u\right)
\, d u \, dy \\ & \qquad +
\frac{\alpha}{2} \int_{-\infty}^{-E} \int_{-\infty}^0
f(y)
\left|E + y \right|^{-(s+\alpha)/2}
\cdot |u|^{(s-\alpha)/2 -1 } p_E\left(x + u\right)
\, d u \, dy.
\end{align*}
We observe that this separates the $u$ and $y$ variables, so that we have
\begin{equation}\label{compactnessrepresentation}
Tf(x) = I_1 \cdot F_1(x) + I_2 \cdot F_2(x).
\end{equation}
\noindent Here, $I_1 = I_1 (f)$ and $I_2 = I_2 (f)$ are constants given by
\begin{flalign*}
I_1 = \frac{\alpha}{2} \int_{-E}^\infty
f(y)
\left|E + y \right|^{-(s+\alpha) / 2} dy; \qquad I_2 = \displaystyle\frac{\alpha}{2} \displaystyle\int_{-\infty}^{-E} f(y) |E + y|^{-(s+\alpha)/2} dy,
\end{flalign*}
\noindent whose convergence is guaranteed by the fact that $f \in \mathcal{X}$, with
\begin{flalign}
\label{i1i2cf}
|I_1| + |I_2| \le C \cdot \| f\|.
\end{flalign}
\noindent Moreover, $F_1$ and $F_2$ are functions of $x$ given by
\begin{flalign*}
F_1 (x) = \displaystyle\int_0^{\infty} u^{(s-\alpha)/2 - 1} p_E (x + u) du; \qquad F_2 (x) = \displaystyle\int_{-\infty}^0 |u|^{(s-\alpha)/2 - 1} p_E (x + u) du.
\end{flalign*}
\noindent We have $F_1, F_2 \in \mathcal{X}$ with $\| F_1 \| + \| F_2 \| \le C$, since $\big| p_E (x) \big| \le C \cdot \min \{ 1, |x|^{-\alpha/2-1} \}$ by \Cref{ar}.
Now, we claim that the image of the unit ball $\mathcal B_1 \subset \mathcal X$ under $T$ is relatively compact, which implies the conclusion of the lemma. Let $\{f_n\}_{n=1}^\infty$ be an infinite sequence of functions with $f_n \in \mathcal B_1$ for all $n\in \mathbb{N}$. It suffices to exhibit a convergent subsequence of $\{T f_n\}_{n=1}^\infty $.
We observe that, since each $f_n \in \mathcal B_1$, the bound \eqref{i1i2cf} shows that the sequences $\{I_1(f_n)\}_{n=1}^\infty$ and $\{I_2(f_n)\}_{n=1}^\infty$ are bounded; hence, by the Bolzano--Weierstrass theorem, there exists a subsequence $\{g_n\}_{n=1}^\infty$ of $\{f_n\}_{n=1}^\infty$ such that the sequences $\{I_1(g_n)\}_{n=1}^\infty$ and $\{I_2(g_n)\}_{n=1}^\infty$ both converge. The representation \eqref{compactnessrepresentation} then shows that $\{T g_n \}_{n=1}^\infty$ converges in $\mathcal X$, establishing the lemma.
\end{proof}
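The representation \eqref{compactnessrepresentation} in fact shows that $T$ has rank at most two, which already implies compactness. As a sanity check of this structural point, the following sketch (with hypothetical random vectors standing in for discretizations of $F_1$ and $F_2$) verifies numerically that any three images of a rank-two map are linearly dependent:

```python
import random

random.seed(2)
n = 50
# hypothetical discretizations of the fixed functions F1 and F2
F1 = [random.gauss(0, 1) for _ in range(n)]
F2 = [random.gauss(0, 1) for _ in range(n)]

def image(i1, i2):
    # every image of T lies in span{F1, F2}, by the rank-two representation
    return [i1 * a + i2 * b for a, b in zip(F1, F2)]

imgs = [image(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]

# the Gram determinant of three vectors in a two-dimensional subspace vanishes
G = [[sum(x * y for x, y in zip(u, v)) for v in imgs] for u in imgs]
det = (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
     - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
     + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))
assert abs(det) <= 1e-9 * (G[0][0] * G[1][1] * G[2][2])
```

The functionals $I_1, I_2$ play the role of the coefficients $(i_1, i_2)$ above; compactness of any finite-rank bounded operator is classical.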
\begin{comment}
We next define the following function, which we will show to be the Perron--Frobenius eigenfunction of $T$.
\begin{definition}
\label{fxe}
For any $s \in (\alpha, 1)$ and $E \in \mathbb{R}$, define the function $f_E : \mathbb{R} \rightarrow \mathbb{R}$ by
\begin{equation}\label{fEdef}
f_E(x) =f_E(x; \alpha, s)= \left( |x|^{(s-\alpha)/2 -1} \ast p_E \right)(x).
\end{equation}
\end{definition}
The following lemma, which states that $f$ is an eigenfunction of $T$, is proved in \Cref{s:pfeigenvector} below. In what follows, we recall that $\lambda(E, s, \alpha)$ was given by \Cref{lambdaEsalpha}.
\begin{prop}
For all $s \in (\alpha,1)$ and $E \in \mathbb{R}$, the function $f_E(x)$
satisfies $f_E \in \mathcal X$ and
\begin{equation*}
Tf_E (x) = \lambda(E, s, \alpha) \cdot f_E(x).
\end{equation*}
\end{prop}
\end{comment}
We have the following proposition stating that the Perron--Frobenius eigenvalue of $T$ (corresponding to a nonnegative eigenvector) is $\lambda (E,s, \alpha)$. Its proof is similar to the one given in \cite{tarquini2016level} and appears in \Cref{s:pfeigenvector} below.
\begin{prop}
\label{l:pfeigenvector}
Fix $s \in (\alpha, 1)$ and recall the notation of \Cref{lambdaEsalpha}. There exists a unique positive function $f_E = f_{E, s} \in \mathcal{X}$ such that
\begin{flalign*}
Tf_E (x) = \lambda (E, s, \alpha) \cdot f_E (x) \qquad \text{for all $x \in \mathbb{R}$}.
\end{flalign*}
\end{prop}
We next state a consequence of the Krein--Rutman theorem, which will be useful for studying $T$. We first recall some functional analytic terminology. Let $X$ be a Banach space. A subset $K \subset X$ is called a \emph{cone} if it is a closed, convex set such that $c \cdot K \subseteq K$ for all $c > 0$ and $K \cap (-K) = \{ 0 \}$. If the cone $K$ has nonempty interior $K^0$, then it is called a \emph{solid cone}. An operator $S : X \rightarrow X$ is said to be \emph{positive} if $Sx \in K$ for every $x \in K$. It is further said to be \emph{strongly positive} if $Sx \in K^0$ for each $x \in K \setminus \{ 0 \}$. Finally, let $r(S)$ denote the spectral radius of $S$.
\begin{thm}[{\cite[Theorem 19.3]{deimling2010nonlinear}}]\label{t:kr}
Let $X$ be a Banach space, $K \subset X$ be a solid cone, and $S : X \rightarrow X$ be a compact, positive, linear operator with spectral radius $r = r(S)$. Then there exists an eigenvector $v \in K \setminus \{ 0 \}$ of $S$ with eigenvalue $r$, that is, $Sv = r \cdot v$. Moreover, if $S$ is strongly positive, then $r$ is a simple eigenvalue of $S$, and $S$ has no other positive eigenvalue.
\end{thm}
With these preparations, we can prove the following corollaries.
\begin{lem}
\label{t2positivity}
Let $\mathcal{K} = \{ f \in \mathcal{X}: f \ge 0 \}\subset \mathcal{X}$ denote the cone of nonnegative functions in $\mathcal{X}$. Then, $T$ is positive with respect to $\mathcal{K}$, and $T^2$ is strongly positive.
\end{lem}
\begin{proof}
Consider the function $g \in \mathcal{K}$ defined by
\begin{equation*}
g(x) = \frac{1}{1 + |x|^{(\alpha-s)/2 +1} }.
\end{equation*}
The open ball $\big\{ f \in \mathcal X : \| g -f \| < \frac{1}{2} \big\}$ is contained in $\mathcal{K}$: indeed, if $\| g - f \| < \frac{1}{2}$, then $f(x) \ge g(x) - \frac{1}{2} \big( 1 + |x|^{(\alpha-s)/2+1} \big)^{-1} = \frac{1}{2} \big( 1 + |x|^{(\alpha-s)/2+1} \big)^{-1} > 0$ for every $x \in \mathbb{R}$. Hence, the interior $\mathcal{K}^0$ of $\mathcal{K}$ is nonempty, meaning that $\mathcal{K}$ is a solid cone.
Given $f \in \mathcal K$, it is immediate from the definition of $T$ that $Tf \in \mathcal K$, so $T$ is positive. Further, given $f \in \mathcal{K}$, \eqref{Tlower} implies that $T^2f \in \mathcal{K}^0$, so $T^2$ is strongly positive.
\end{proof}
\begin{cor}\label{l:pfcompute}
We have
\begin{equation*}
\lim_{n\rightarrow \infty} \| T^n \|^{1/n} = \lambda (E, s, \alpha),
\end{equation*}
where $\| T^n \|$ denotes the norm of the operator $T^n \colon \mathcal X \rightarrow \mathcal X$.
\end{cor}
\begin{proof}
The operator $T^2$ is compact by \Cref{l:compact}, and strongly positive by \Cref{t2positivity}.
Then $T^2$ satisfies the hypotheses of \Cref{t:kr}, and we conclude that $r(T^2) = \lambda(E,s, \alpha)^2$,
where $\lambda(E,s, \alpha)$ is the eigenvalue associated to the positive eigenfunction $f_E$ given by \Cref{l:pfeigenvector}. Since Gelfand's spectral radius formula implies that
\begin{equation*}
\lim_{n\rightarrow \infty} \| T^{2n} \|^{1/n} = r(T^2) = \lambda(E, s, \alpha)^2,
\end{equation*}
\noindent this shows that
\begin{equation*}
r(T) = \lim_{n\rightarrow \infty} \| T^{n} \|^{1/n} = \lambda(E, s, \alpha),
\end{equation*}
\noindent establishing the corollary.
\end{proof}
\subsection{Proof of \Cref{l:bootstrap}}\label{s:provebootstrap}
We are now ready to complete the proof of \Cref{l:bootstrap}.
\begin{proof}[Proof of \Cref{l:bootstrap}]
We first define the linear functional $S = S_{E,\alpha,s}$ on $\mathcal X$ by
\begin{equation*}
S(f) = \int_{-\infty}^\infty \left| \frac{1 }{E +y } \right|^s f(y) \, dy.
\end{equation*}
By the definition of the norm \eqref{xf} on $\mathcal X$, we have
\begin{equation*}
\big|f(y) \big|
\le \| f \| \cdot \big(1 + |y|^{(\alpha-s) /2 + 1}\big)^{-1}.
\end{equation*}
It follows that
\begin{align}\label{Sfupper}
\big| S(f) \big|
&\le \| f \| \int_{-\infty}^\infty \frac{1}{|E + y|^s }
\frac{1}{1 + |y|^{(\alpha-s)/2 + 1}} \,dy = C \cdot \| f \|.
\end{align}
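The constant $C$ in \eqref{Sfupper} is finite because the integrand has an integrable singularity at $y = -E$ (as $s < 1$) and decays like $|y|^{-(\alpha+s)/2 - 1}$ at infinity. A quick numerical sanity check of this integrability, for hypothetical sample values of $\alpha$, $s$, and $E$ (assuming the \texttt{mpmath} library is available):

```python
import mpmath as mp

# hypothetical sample parameters with 0 < alpha < s < 1
alpha, s, E = mp.mpf('0.5'), mp.mpf('0.8'), mp.mpf('1.0')

# integrand appearing in the bound: |E+y|^{-s} * (1 + |y|^{(alpha-s)/2+1})^{-1}
f = lambda y: abs(E + y) ** (-s) / (1 + abs(y) ** ((alpha - s) / 2 + 1))

# split the contour at the singularity y = -E and at 0; the integral is finite
val = mp.quad(f, [-mp.inf, -E, 0, mp.inf])
assert mp.isfinite(val) and 0 < val < 50
```

The split points hand the integrable singularity to the quadrature routine as interval endpoints, where tanh--sinh quadrature handles it accurately.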
We now claim that the sequence $R_{\star} (E + \mathrm{i} \eta_j)$ converges in law to $R_{\mathrm{loc}}(E)$. It suffices to show that any weak limit point $R(E)$ of the sequence $R_{\star} (E + \mathrm{i} \eta_j)$ equals $R_{\mathrm{loc}}(E)$. To that end, fix a subsequence $\{ \eta'_j\}_{j=1}^\infty$ of $\{ \eta_j\}_{j=1}^\infty$ such that $R_{\star} (E + \mathrm{i} \eta'_j)$ converges to $R(E)$. Then the hypothesis $\lim_{j\rightarrow\infty} \eta_j =0$ and \Cref{l:boundarybasics} together imply that $R(E) = R_{\mathrm{loc}}(E)$, showing that $R_{\mathrm{loc}} (E) = \lim_{j \rightarrow \infty} R_{\star} (E + \mathrm{i} \eta_j)$. Using \Cref{l:PhiE}, we conclude that
$\lim_{j \rightarrow \infty} \Phi_L(s;E + \mathrm{i} \eta_j ) =
\Phi_L^{\mathrm{loc}} (s ; E )$ and therefore that
\begin{equation}\label{phiint1}
\lim_{j \rightarrow \infty} L^{-1} \log \Phi_L(s;E + \mathrm{i} \eta_j ) =
L^{-1} \log \Phi_L^{\mathrm{loc}} (s ; E ) .
\end{equation}
We now examine the limit $\lim_{L \rightarrow \infty} L^{-1} \log \Phi_L^{\mathrm{loc}} (s ; E) = \lim_{L \rightarrow \infty} (2L)^{-1} \log \Phi_{2L}^{\mathrm{loc}} (s; E)$. Let $g(x) = p_E (x)$, which satisfies $c \big( 1 + |x|^{(\alpha-s)/2 + 1} \big)^{-1} \le g(x) \le C \big( 1 + |x|^{(\alpha-s)/2 + 1} \big)^{-1}$, for any $x \in \mathbb{R}$, by \Cref{ar}. From \Cref{tlgl}, \Cref{l:Erecursion}, and \eqref{productN2}, we have
\begin{flalign}
\label{smomentt}
\Phi_{2L}^{\mathrm{loc}} (s ; E) = S \left( T^{2L} \left( g \right) \right).
\end{flalign}
\noindent Further, using \eqref{Sfupper}, we obtain
\begin{equation*}
\Phi_{2L}^{\mathrm{loc}} (s ; E) \le C \cdot \left \| T^{2L} \left( g \right) \right\| \le C \cdot \big\| T^{2L} \big\| \cdot \| g \|,
\end{equation*}
\noindent where $\big\| T^{2L} \big\|$ denotes the norm of the operator $T^{2L} \colon \mathcal X \rightarrow \mathcal X$. Then,
\begin{align}\label{loglambdaupper}
\lim_{L \rightarrow \infty} (2L)^{-1} \log \Phi_{2L}^{\mathrm{loc}} (s ; E) &\le
\lim_{L \rightarrow \infty} (2L)^{-1}
\left( \log C + \log \big\| T^{2L} \big\|
+ \log \| g\| \right) \le \log \lambda(E, s, \alpha),
\end{align}
where we used \Cref{l:pfcompute} in the last inequality.
For the lower bound, we note that \eqref{Tlower} and the fact that $f_E \in \mathcal{X}$ (by the first statement of \Cref{l:pfeigenvector}) imply that $T^2 g (x) \ge c \cdot f_E(x)$, for each $x \in \mathbb{R}$. Then, by \eqref{smomentt} and \Cref{l:pfeigenvector},
\begin{equation*}
\Phi_L^{\mathrm{loc}} (s ; E) = S (T^L g)\ge c \cdot
S ( T^{L-2} f_E) \ge
c \cdot \lambda(E,s,\alpha)^{L-2} \cdot S(f_E) \ge c \cdot \lambda (E, s, \alpha)^{L-2},
\end{equation*}
and we conclude
\begin{equation*}
\lim_{L \rightarrow \infty} (2L)^{-1} \log \Phi_{2L}^{\mathrm{loc}} (s ; E)\ge
\log \lambda(E, s, \alpha).
\end{equation*}
Combining the previous equation with \eqref{loglambdaupper} gives
\begin{equation}\label{phiLlimit}
\displaystyle\lim_{L \rightarrow \infty} L^{-1} \log \Phi_L^{\mathrm{loc}} (s; E) = \lim_{L \rightarrow \infty} (2L)^{-1} \log \Phi_{2L}^{\mathrm{loc}} (s ; E)
= \log \lambda(E,s, \alpha).
\end{equation}
From \eqref{limitr0j2}, we have
\begin{equation}\label{finiteLbdd}
\left| L^{-1} \log \Phi_L(s ; E +\mathrm{i} \eta_j )
- \phi(s; E + \mathrm{i} \eta_j)
\right| \le \frac{C}{L}
\end{equation}
for all $j, L \in \mathbb{Z}_{\ge 1}$. Then using \eqref{phiint1} and \eqref{finiteLbdd}, we have
\begin{equation*}
L^{-1} \log \Phi_L^{\mathrm{loc}} (s ; E ) - \frac{C}{L} =
\lim_{j \rightarrow \infty} L^{-1} \log \Phi_L(s ; E +\mathrm{i} \eta_j ) - \frac{C}{L}
\le \liminf_{j \rightarrow \infty} \phi(s; E + \mathrm{i} \eta_j),
\end{equation*}
and
\begin{equation*}
\limsup_{j \rightarrow \infty} \phi(s; E + \mathrm{i} \eta_j)
\le
\lim_{j \rightarrow \infty} L^{-1} \log \Phi_L(s ; E +\mathrm{i} \eta_j ) + \frac{C}{L}
=
L^{-1} \log \Phi_L^{\mathrm{loc}} (s ; E ) + \frac{C}{L}.
\end{equation*}
Taking the limit as $L$ goes to infinity and using \eqref{phiLlimit}, we conclude that
\begin{equation*}
\lim_{j \rightarrow \infty} \phi(s; E + \mathrm{i} \eta_j) = \log \lambda(E,s,\alpha),
\end{equation*}
\noindent verifying the lemma.
\end{proof}
\section{The Perron--Frobenius Eigenvalue of $T$}\label{s:pfeigenvector}
\label{TEigenvalue}
In this section we establish \Cref{l:pfeigenvector}. First observe, by the Krein--Rutman theorem (\Cref{t:kr}) and the strong positivity of $T^2$ (from \Cref{t2positivity}), that $T$ admits a nonnegative eigenfunction $f = f_E = f_{E, s} \in \mathcal{X}$ with a positive eigenvalue $\lambda_0 = \lambda_0 (s, E) > 0$; further, $\lambda_0$ is the unique positive eigenvalue of $T$ admitting a nonnegative eigenfunction. In \Cref{Transformf}, we analyze the Fourier transform $\hat f$, and in \Cref{Lambda0Lambda} we use this analysis to complete the proof of \Cref{l:pfeigenvector}. Finally, in \Cref{LambdaOther}, we provide an alternative expression for $\lambda(E,s,\alpha)$, which will be used in \Cref{s:alphanear1} and \Cref{s:alphanear1uniqueness} below.
We note that the definition \eqref{toperator} of $T$ gives, for any $x \in \mathbb{R}$, that
\begin{flalign}
\label{lambda0f}
\lambda_0 \cdot f (x) = Tf (x) = \displaystyle\frac{\alpha}{2} \displaystyle\int_{\mathbb{R}^2} f (y) \bigg| \displaystyle\frac{1}{E+y} \bigg|^s |h|^{s-\alpha-1} p_E \bigg( x + \displaystyle\frac{h^2}{E+y} \bigg) \, dh\, dy.
\end{flalign}
\noindent Observe since $f_E \in \mathcal{X}$ that $f_E \in L^2 (\mathbb{R})$ (by the definition of the norm \eqref{xf} on $\mathcal{X}$, together with the fact that $\frac{\alpha-s}{2} + 1 > \frac{1}{2}$). In particular, the Fourier transform of $f_E$ satisfies $\widehat{f}_E \in L^2 (\mathbb{R})$.
\subsection{Analysis of $\widehat{f}$}
\label{Transformf}
We recall our convention for the Fourier transform from \Cref{s:fouriertransform}. We then have the following lemma, whose statement and proof are similar to those of \cite[Equation (15)]{tarquini2016level}. In what follows, we will repeatedly use the fact that, for any $r \in (0, 1)$ and $\xi \in \mathbb{R}$, we have
\begin{flalign}
\label{xrexi}
\displaystyle\int_0^{\infty} e^{\mathrm{i} \xi x} x^{-r} dx = \Gamma (1 - r) \cdot \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{2} (1-r) \sgn (\xi) \bigg) \cdot |\xi|^{r-1}.
\end{flalign}
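The identity \eqref{xrexi} can be checked numerically through its exponentially regularized form $\int_0^{\infty} e^{(\mathrm{i} \xi - \epsilon) x} x^{-r} \, dx = \Gamma(1-r) \cdot (\epsilon - \mathrm{i} \xi)^{r-1}$, whose $\epsilon \rightarrow 0^+$ limit recovers \eqref{xrexi}. A sketch with hypothetical sample values, assuming the \texttt{mpmath} library:

```python
import mpmath as mp

# hypothetical sample values with 0 < r < 1
r, xi, eps = mp.mpf('0.4'), mp.mpf('1.3'), mp.mpf('1.0')

# regularized integral: converges absolutely for eps > 0
f = lambda x: mp.exp((1j * xi - eps) * x) * x ** (-r)
num = mp.quad(f, [0, 1, mp.inf])

# closed form Gamma(1-r) * (eps - i*xi)^(r-1), principal branch
exact = mp.gamma(1 - r) * (eps - 1j * xi) ** (r - 1)
assert abs(num - exact) < 1e-8

# as eps -> 0+, (eps - i*xi)^(r-1) -> |xi|^(r-1) * exp(i*pi*(1-r)*sgn(xi)/2),
# which is the right side of the identity in the text (here sgn(xi) = 1)
limit = mp.gamma(1 - r) * abs(xi) ** (r - 1) * mp.exp(1j * mp.pi * (1 - r) / 2)
approx = mp.gamma(1 - r) * (mp.mpf('1e-7') - 1j * xi) ** (r - 1)
assert abs(approx - limit) < 1e-5
```

The split point at $x = 1$ separates the integrable singularity at the origin from the exponentially decaying tail.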
\begin{lem}
\label{flambda1}
For any $\xi \in \mathbb{R}$, we have
\begin{flalign}
\label{lambda0fxi}
\begin{aligned}
\lambda_0 \cdot \widehat{f} (\xi) & = \displaystyle\frac{\alpha}{2} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \\
& \qquad \times \displaystyle\int_{-\infty}^{\infty} \displaystyle\frac{f(y)}{|E+y|^{(s+\alpha)/2}} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn \big( \xi (E + y) \big) \bigg) \, dy.
\end{aligned}
\end{flalign}
\end{lem}
\begin{proof}
It will be useful to begin by implementing a cutoff in the integral (over $h$) on the right side of \eqref{lambda0f} since, as written, it does not define a function in $L^1 (\mathbb{R})$. So, for any integer $N \ge 1$, define the function $g_N \in \mathcal{X}$ through
\begin{flalign*}
\lambda_0 \cdot g_N (x) = \displaystyle\frac{\alpha}{2} \displaystyle\int_{-\infty}^{\infty} f(y) \bigg| \displaystyle\frac{1}{E+y}\bigg|^s \displaystyle\int_{-N}^N |h|^{s-\alpha-1} p_E \bigg( x + \displaystyle\frac{h^2}{E+y} \bigg) \, dh \, dy,
\end{flalign*}
\noindent observing by \eqref{lambda0f} that $\lim_{N \rightarrow \infty} g_N = f$. Taking the Fourier transform yields
\begin{flalign*}
\lambda_0 \cdot \widehat{g}_N (\xi) = \displaystyle\frac{\alpha}{2} \displaystyle\int_{-\infty}^{\infty} e^{\mathrm{i} x \xi} \displaystyle\int_{-\infty}^{\infty} f (y) \bigg| \displaystyle\frac{1}{E+y} \bigg|^s \displaystyle\int_{-N}^{N} |h|^{s-\alpha-1} p_E \bigg( x + \displaystyle\frac{h^2}{E+y} \bigg) \, dh\, dy\, dx,
\end{flalign*}
\noindent where the integral absolutely converges since $f \in \mathcal{X}$ and $p_E (w) \le C \big( |w| + 1 \big)^{-\alpha/2 - 1}$ for any $w \in \mathbb{R}$ (by \Cref{ar}). Changing variables $t = |E+y|^{-1} \cdot h^2$, and then replacing $x$ by $x - t \sgn (E+y)$ yields
\begin{flalign}
\label{lambda0gn}
\begin{aligned}
\lambda_0 \cdot \widehat{g}_N (\xi) & = \displaystyle\frac{\alpha}{2} \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_0^{N^2} e^{\mathrm{i} x \xi} f(y) \bigg| \displaystyle\frac{1}{E+y} \bigg|^{(\alpha+s)/2} t^{(s-\alpha)/2-1} p_E \big( x + t \sgn (E+y) \big) \, dt \, dy \, dx \\
& = \displaystyle\frac{\alpha}{2} \displaystyle\int_{-\infty}^{\infty} e^{\mathrm{i} x \xi} p_E (x) dx \cdot \displaystyle\int_{-\infty}^{\infty} f(y) \bigg| \displaystyle\frac{1}{E+y} \bigg|^{(\alpha+s)/2} \displaystyle\int_0^{N^2} e^{-\mathrm{i} t \xi \sgn (E+y)} t^{(s-\alpha)/2-1} \, dt \, dy \\
& = \displaystyle\frac{\alpha}{2} \cdot \widehat{p}_E (\xi) \displaystyle\int_{-\infty}^{\infty} f(y) \bigg| \displaystyle\frac{1}{E+y} \bigg|^{(\alpha+s)/2} \displaystyle\int_0^{N^2} e^{-\mathrm{i} t \xi \sgn (E+y)} t^{(s-\alpha)/2-1} \, dt \, dy.
\end{aligned}
\end{flalign}
\noindent By \eqref{xrexi}, we have
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \displaystyle\int_0^{N^2} e^{-\mathrm{i} t \xi \sgn (E+y)} t^{(s-\alpha)/2-1} dt = \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn \big( \xi (E + y) \big) \bigg) \cdot |\xi|^{(\alpha-s)/2}.
\end{flalign*}
\noindent Inserting this into \eqref{lambda0gn} and using the fact that $\lim_{N \rightarrow \infty} g_N = f$ then yields the lemma (observing that the resulting integral converges absolutely, since $f \in \mathcal{X}$).
\end{proof}
The next lemma provides a different form for $\widehat{f} (\xi)$.
\begin{lem}
\label{fxie1}
For any $\xi \in \mathbb{R}$, we have
\begin{flalign*}
\lambda_0 \cdot \widehat{f}_E (\xi) = \displaystyle\frac{1}{\pi} \cdot K_{\alpha, s} \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \displaystyle\int_{-\infty}^{\infty} e^{\mathrm{i} E \zeta} |\zeta|^{(\alpha+s)/2 - 1} \cdot \big( t_{\alpha} \cdot \mathbbm{1}_{\xi \zeta > 0} + t_s \cdot \mathbbm{1}_{\xi \zeta < 0} \big) \widehat{f} (\zeta)\, d \zeta.
\end{flalign*}
\end{lem}
\begin{proof}
First observe from \Cref{flambda1}, the explicit form \eqref{xtsigma} of $\widehat{p}_E$, and the convergence of the integral in \eqref{lambda0fxi} (by the fact that $f \in \mathcal{X}$) that
\begin{flalign}
\label{fxiintegralc}
\big| \widehat{f} (\xi) \big| \le C \exp \big(-c |\xi|^{\alpha/2} \big), \qquad \text{and so $\widehat{f} \in L^1 (\mathbb{R})$}.
\end{flalign}
\noindent Next, by \eqref{lambda0fxi} and the fact that $f \in \mathcal{X}$, we obtain
\begin{flalign}
\label{lambda0fxi2}
\begin{aligned}
\lambda_0 \cdot \widehat{f} (\xi) & = \displaystyle\frac{\alpha}{2} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \\
& \qquad \times \displaystyle\lim_{N \rightarrow \infty} \displaystyle\int_{-N}^N \displaystyle\frac{f(y)}{|E+y|^{(\alpha+s)/2}} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn \big( \xi (E+y) \big)\bigg) \, dy.
\end{aligned}
\end{flalign}
\noindent Applying the Fourier inversion identity $f(y) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} e^{-\mathrm{i} y \zeta} \widehat{f} (\zeta)\, d \zeta$, we obtain
\begin{flalign}
\label{flambda0xi2n}
\begin{aligned}
\lambda_0 \cdot \widehat{f} (\xi) & = \displaystyle\frac{\alpha}{4 \pi} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \\
& \qquad \times \displaystyle\lim_{N \rightarrow \infty} \displaystyle\int_{-\infty}^{\infty} e^{-\mathrm{i} y \zeta }\displaystyle\int_{-N}^N \displaystyle\frac{\widehat{f} (\zeta)}{|E+y|^{(\alpha+s)/2}} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn \big( \xi (E + y) \big) \bigg) \, dy\, d \zeta \\
& = \displaystyle\frac{\alpha}{4\pi} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \\
& \qquad \times \displaystyle\lim_{N\rightarrow \infty} \displaystyle\int_{-\infty}^{\infty} e^{\mathrm{i} E \zeta} \widehat{f} (\zeta) \displaystyle\int_{E-N}^{E+N} e^{-\mathrm{i} y \zeta} |y|^{-(\alpha+s)/2} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn (\xi y) \bigg) \, dy \, d \zeta,
\end{aligned}
\end{flalign}
\noindent where in the last equality we changed variables, replacing $y$ with $y - E$. Next, observe from \eqref{xrexi} that
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} & \displaystyle\int_{E-N}^0 e^{-\mathrm{i} y \zeta} |y|^{-(\alpha+s)/2} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \sgn (\xi y) \bigg) dy \\
& = \Gamma \bigg( 1 - \displaystyle\frac{\alpha+s}{2} \bigg) \cdot \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{2} \Big( 1 - \displaystyle\frac{\alpha+s}{2} \Big) \cdot \sgn (\zeta) \bigg) \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (s-\alpha)}{4} \cdot \sgn (\xi) \bigg) \cdot |\zeta|^{(\alpha+s)/2 - 1}; \\
\displaystyle\lim_{N \rightarrow \infty} & \displaystyle\int_0^{E+N} e^{-\mathrm{i} y \zeta} |y|^{-(\alpha+s)/2} \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \sgn (\xi y) \bigg) dy \\
& = \Gamma \bigg( 1 - \displaystyle\frac{\alpha+s}{2} \bigg) \cdot \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{2} \Big( \displaystyle\frac{\alpha+s}{2} - 1 \Big) \cdot \sgn (\zeta) \bigg) \exp \bigg( \displaystyle\frac{\pi \mathrm{i} (\alpha - s)}{4} \cdot \sgn (\xi) \bigg) \cdot |\zeta|^{(\alpha+s)/2 - 1}.
\end{flalign*}
\noindent Inserting these into \eqref{flambda0xi2n} (and using \eqref{fxiintegralc}), we find
\begin{flalign}
\label{flambda03}
\begin{aligned}
\lambda_0 \cdot \widehat{f} (\xi) & = \displaystyle\frac{\alpha}{2\pi} \cdot \Gamma \bigg( \displaystyle\frac{s-\alpha}{2} \bigg) \cdot \Gamma \bigg( 1 - \displaystyle\frac{\alpha+s}{2} \bigg) \cdot |\xi|^{(\alpha-s)/2} \cdot \widehat{p}_E (\xi) \\
& \qquad \times \displaystyle\int_{-\infty}^{\infty} e^{\mathrm{i} E \zeta} |\zeta|^{(\alpha+s)/2 - 1} \cdot \big( t_{\alpha} \cdot \mathbbm{1}_{\xi \zeta > 0} + t_s \cdot \mathbbm{1}_{\xi \zeta < 0} \big) \cdot \widehat{f}(\zeta) \, d \zeta,
\end{aligned}
\end{flalign}
\noindent where we used the facts that
\begin{flalign*}
\exp \bigg( & \displaystyle\frac{\pi \mathrm{i}}{2} \Big( 1 - \displaystyle\frac{\alpha+s}{2} \Big) \cdot \sgn (\zeta) + \displaystyle\frac{\pi \mathrm{i} (s-\alpha)}{4} \cdot \sgn (\xi) \bigg) \\
& \qquad + \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{2} \Big( \displaystyle\frac{\alpha+s}{2} - 1 \Big) \cdot \sgn (\zeta) + \displaystyle\frac{\pi \mathrm{i} (\alpha-s)}{4} \cdot \sgn (\xi) \bigg) \\
& \qquad \qquad \qquad \qquad = 2 \bigg( \sin \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \mathbbm{1}_{\xi \zeta > 0} + \sin \Big( \displaystyle\frac{\pi s}{2} \Big) \cdot \mathbbm{1}_{\xi \zeta < 0} \bigg),
\end{flalign*}
\noindent and recalled the definition of $t_r$ from \eqref{tlrk}. The lemma then follows from combining \eqref{flambda03} with the definition \eqref{tlrk} of $K_{\alpha, s}$.
\end{proof}
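The trigonometric identity invoked at the end of the above proof can be verified numerically over all four sign cases of $\big( \sgn \xi, \sgn \zeta \big)$; a quick stand-alone check with hypothetical values of $\alpha$ and $s$:

```python
import cmath
import math

# hypothetical parameters with 0 < alpha < s < 1
alpha, s = 0.7, 0.9

for sgn_zeta in (1, -1):
    for sgn_xi in (1, -1):
        lhs = (cmath.exp(1j * math.pi / 2 * (1 - (alpha + s) / 2) * sgn_zeta
                         + 1j * math.pi * (s - alpha) / 4 * sgn_xi)
               + cmath.exp(1j * math.pi / 2 * ((alpha + s) / 2 - 1) * sgn_zeta
                           + 1j * math.pi * (alpha - s) / 4 * sgn_xi))
        # the sum collapses to 2 sin(pi*alpha/2) or 2 sin(pi*s/2),
        # according to the sign of xi * zeta
        rhs = 2 * (math.sin(math.pi * alpha / 2) if sgn_xi * sgn_zeta > 0
                   else math.sin(math.pi * s / 2))
        assert abs(lhs - rhs) < 1e-12
```

Each pair of conjugate phases combines into a real cosine, which reduces to the indicated sine by the complementary-angle identity.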
\subsection{Solution for $\lambda_0$}
\label{Lambda0Lambda}
In this section we solve for the eigenvalue $\lambda_0$ of $T$ with eigenfunction $f$, continuing to follow \cite{tarquini2016level}. To that end, we denote
\begin{flalign}
\label{llii}
\begin{aligned}
& I_- = I_- (E) = \displaystyle\frac{1}{\pi} \displaystyle\int_{-\infty}^0 e^{\mathrm{i} E \xi} |\xi|^{(\alpha+s)/2 - 1} \widehat{f} (\xi)\, d \xi; \qquad I_+ = I_+ (E) = \displaystyle\frac{1}{\pi} \displaystyle\int_0^{\infty} e^{\mathrm{i} E \xi} |\xi|^{(\alpha+s)/2-1} \widehat{f} (\xi) \, d \xi; \\
& \ell_- = \ell_- (E) = \displaystyle\frac{1}{\pi} \displaystyle\int_{-\infty}^0 e^{\mathrm{i} E \xi} |\xi|^{\alpha-1} \widehat{p}_E (\xi)\, d\xi; \qquad \qquad \ell_+ = \ell_+(E) = \displaystyle\frac{1}{\pi} \displaystyle\int_0^{\infty} e^{\mathrm{i} E \xi} |\xi|^{\alpha-1} \widehat{p}_E (\xi) \, d \xi.
\end{aligned}
\end{flalign}
The following corollary, which follows from \Cref{fxie1}, indicates that $(I_+, I_-)$ satisfy a certain linear relation.
\begin{cor}
\label{isumi}
For any $E \in \mathbb{R}$, we have
\begin{flalign*}
\lambda_0 \cdot I_+ = K_{\alpha, s} \ell_+ (t_{\alpha} I_+ + t_s I_-); \qquad \lambda_0 \cdot I_- = K_{\alpha, s} \ell_- (t_s I_+ + t_{\alpha} I_-).
\end{flalign*}
\end{cor}
\begin{proof}
By \Cref{fxie1} and \eqref{llii}, we have
\begin{flalign*}
\lambda_0 \cdot |\xi|^{(\alpha+s)/2 - 1} \widehat{f} (\xi) \cdot \mathbbm{1}_{\xi > 0} & = K_{\alpha, s} \cdot |\xi|^{\alpha-1} \widehat{p}_E (\xi) \cdot (t_{\alpha} I_+ + t_s I_-) \cdot \mathbbm{1}_{\xi > 0}; \\
\lambda_0 \cdot |\xi|^{(\alpha+s)/2 - 1} \widehat{f} (\xi) \cdot \mathbbm{1}_{\xi < 0} & = K_{\alpha, s} \cdot |\xi|^{\alpha-1} \widehat{p}_E (\xi) \cdot (t_s I_+ + t_{\alpha} I_-) \cdot \mathbbm{1}_{\xi < 0}.
\end{flalign*}
\noindent Multiplying both sides by $\pi^{-1} e^{\mathrm{i} E \xi}$, integrating over $\xi$, and using \eqref{llii} again then yields the corollary.
\end{proof}
The following lemma, which makes use of the nonnegativity of $f$, indicates that $(I_+, I_-) \ne (0, 0)$ (meaning that the linear relation between $(I_+, I_-)$ given by \Cref{isumi} is nondegenerate).
\begin{lem}
\label{iireal0}
For any $E \in \mathbb{R}$, we have that $(I_+, I_-) \ne (0, 0)$.
\end{lem}
\begin{proof}
It suffices to show that $\Real I_+ \ne 0$. To that end, observe that
\begin{flalign}
\label{isum2}
\pi \cdot I_+ = \mathcal{F} \big( \mathbbm{1}_{\xi > 0} \cdot |\xi|^{(\alpha+s)/2 - 1} \cdot \widehat{f} (\xi) \big) (E) = (2 \pi)^{-1} \cdot \big( \mathcal{F} \big( \mathbbm{1}_{\xi > 0} \cdot |\xi|^{(\alpha+s)/2 - 1} \big) \ast \mathcal{F}^2 f \big) (E).
\end{flalign}
\noindent Defining $\widetilde{f} \in \mathcal{X}$ by $\widetilde{f}(x) = f(-x)$, observe by Fourier inversion and \eqref{xrexi} that
\begin{flalign*}
\mathcal{F}^2 f = 2\pi \cdot \widetilde{f}; \qquad \mathcal{F} \big( \mathbbm{1}_{\xi > 0} \cdot |\xi|^{(\alpha+s)/2-1} \big) = \Gamma \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \cdot \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{4} (s + \alpha) \sgn (x) \bigg) \cdot |x|^{-(\alpha+s)/2}.
\end{flalign*}
\noindent Inserting this, together with the facts that $\widetilde{f} (x) = f(-x) \in \mathbb{R}_{\ge 0}$ and
\begin{flalign*}
\Real \exp \bigg( \displaystyle\frac{\pi \mathrm{i}}{4} (\alpha+s) \sgn(x) \bigg) = \cos \bigg( \displaystyle\frac{\pi}{4} (\alpha+s) \bigg) \in (0, 1),
\end{flalign*}
\noindent into \eqref{isum2}, it follows that
\begin{flalign*}
\pi \cdot \Real I_+ = \Gamma \bigg( \displaystyle\frac{\alpha+s}{2} \bigg) \cos \bigg( \displaystyle\frac{\pi}{4} (\alpha+s) \bigg) \cdot \big( \widetilde{f} \ast |x|^{-(\alpha+s)/2} \big) (E) > 0,
\end{flalign*}
\noindent where in the last inequality we further used the fact that $\widetilde{f}(x) = f(-x) \in \mathbb{R}_{\ge 0}$ for all $x \in \mathbb{R}$, and that $\widetilde{f} \ne 0$. In particular, this shows that $\Real I_+ \ne 0$, which establishes the lemma.
\end{proof}
Given \Cref{isumi} and \Cref{iireal0}, we can quickly establish \Cref{l:pfeigenvector}.
\begin{proof}[Proof of \Cref{l:pfeigenvector}]
Observe that \Cref{isumi} implies the equation
\begin{flalign*}
\boldsymbol{A} \cdot \boldsymbol{v} = \boldsymbol{0}, \qquad \text{where} \qquad \boldsymbol{A} = \left[ \begin{array}{cc} t_{\alpha} K_{\alpha, s} \ell_+ - \lambda_0 & t_s K_{\alpha,s} \ell_+ \\ t_s K_{\alpha, s} \ell_- & t_{\alpha} K_{\alpha, s} \ell_- - \lambda_0 \end{array}\right], \qquad \text{and} \qquad \boldsymbol{v} = \left[ \begin{array}{c} I_+ \\ I_- \end{array} \right].
\end{flalign*}
\noindent By \Cref{iireal0}, this implies that $\det \boldsymbol{A} = 0$, meaning that $\lambda_0$ is the positive (as $\lambda_0$ is the Perron--Frobenius eigenvalue of $T$) solution to
\begin{flalign*}
\lambda_0^2 - t_{\alpha} K_{\alpha, s} (\ell_+ + \ell_-)\cdot \lambda_0 + (t_{\alpha}^2 - t_s^2) K_{\alpha,s}^2 \ell_+ \ell_- = 0.
\end{flalign*}
\noindent This, together with \Cref{lambdaEsalpha} and the facts that $\ell_+ = \ell (E)$ and that $\ell_+ = \overline{\ell_-}$ (since $\widehat{p}_E (x)$ is the conjugate of $\widehat{p}_E (-x)$ for each $x \in \mathbb{R}$, by \eqref{xtsigma}) yields the proposition.
\end{proof}
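The passage from $\det \boldsymbol{A} = 0$ to the displayed quadratic can be verified numerically; the sketch below uses hypothetical values for $K_{\alpha, s}$, $t_{\alpha}$, $t_s$, and $\ell_+$ (taking $\ell_- = \overline{\ell_+}$, as in the proof) and checks that both roots of the quadratic annihilate $\det \boldsymbol{A}$:

```python
import cmath

# hypothetical numerical values; ell_- is the conjugate of ell_+
K, ta, ts = 1.7, 0.4, 0.9
lp = complex(0.6, -0.3)
lm = lp.conjugate()

# coefficients of the quadratic lambda^2 + b*lambda + c = 0 from expanding det(A)
b = -ta * K * (lp + lm)                      # = -2 t_alpha K Re(ell)
c = (ta ** 2 - ts ** 2) * K ** 2 * lp * lm   # = (t_alpha^2 - t_s^2) K^2 |ell|^2

disc = cmath.sqrt(b * b - 4 * c)
for lam in ((-b + disc) / 2, (-b - disc) / 2):
    A = [[ta * K * lp - lam, ts * K * lp],
         [ts * K * lm, ta * K * lm - lam]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert abs(det) < 1e-12
```

Since $\ell_+ + \ell_-$ and $\ell_+ \ell_-$ are real, both coefficients of the quadratic are real, consistent with $\lambda_0 > 0$.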
\subsection{Alternative Expression for $\lambda (E, s, \alpha)$}
\label{LambdaOther}
In this section we provide a different expression for the eigenvalue $\lambda (E, s, \alpha)$ from \Cref{lambdaEsalpha}, in terms of the random variable $R_{\mathrm{loc}}$ from \Cref{locre}.
\begin{lem}
\label{realimaginaryl}
For every $E \in \mathbb{R}$, we have
\begin{flalign*}
\Real \ell (E) & = \pi^{-1} \cdot \Gamma (\alpha) \cdot \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \mathbb{E} \Big[ |R_{\mathrm{loc}} (E)|^{\alpha} \Big]; \\
\Imaginary \ell (E) & = \pi^{-1} \cdot \Gamma (\alpha) \cdot \sin \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big(- R_{\mathrm{loc}} (E) \big) \Big].
\end{flalign*}
\end{lem}
\begin{proof}
By \eqref{tlrk}, \eqref{xrexi}, and Fourier inversion, we have
\begin{flalign*}
\ell (E) = \pi^{-1} \cdot \mathcal{F} \big( \mathbbm{1}_{\xi>0} \cdot |\xi|^{\alpha-1} \cdot \widehat{p}_E \big) (E) & = \displaystyle\frac{1}{2\pi^2} \cdot \Big( \mathcal{F} (\mathbbm{1}_{\xi > 0} \cdot |\xi|^{\alpha-1} \big) \ast \mathcal{F}^2 (p_E) \Big) (E) \\
& = \pi^{-1} \cdot \Gamma (\alpha) \cdot \Bigg( \bigg( \exp \Big( \displaystyle\frac{\pi \mathrm{i} \alpha}{2} \cdot \sgn (x) \Big) \cdot |x|^{-\alpha} \bigg) \ast \widetilde{p}_E \Bigg) (E),
\end{flalign*}
\noindent where we have defined $\widetilde{p}_E : \mathbb{R} \rightarrow \mathbb{R}$ by setting $\widetilde{p}_E (x) = p_E (-x)$ for all $x \in \mathbb{R}$. Thus,
\begin{flalign*}
\Real \ell (E) & = \pi^{-1} \cdot \Gamma (\alpha) \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \big( |x|^{-\alpha} \ast \widetilde{p}_E \big) (E) \\
& = \pi^{-1} \cdot \Gamma (\alpha) \cdot \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot\displaystyle\int_{-\infty}^{\infty} p_E (x-E) \cdot |x|^{-\alpha} dx \\
& = \pi^{-1} \cdot \Gamma (\alpha) \cdot \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \displaystyle\int_{-\infty}^{\infty} p_E (x) \cdot |x+E|^{-\alpha} dx \\
& = \pi^{-1} \cdot \Gamma (\alpha) \cdot \cos \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big],
\end{flalign*}
\noindent where in the third equality we changed variables (mapping $x$ to $x + E$) and in the fourth we recalled (from \Cref{pkappae}) the definition of $p_E$ as the density of $\varkappa_{\mathrm{loc}} (E)$ and (from \Cref{locre}) that $R_{\mathrm{loc}} (E) = - \big(\varkappa_{\mathrm{loc}} (E) + E \big)^{-1}$. This establishes the first statement of the lemma; the proof of the latter is entirely analogous and is thus omitted.
\end{proof}
\begin{lem}
\label{2lambdaEsalpha}
For any real numbers $s \in (\alpha, 1)$ and $E \in \mathbb{R}$, we have
\begin{flalign*}
\lambda (E, s, \alpha) & = \pi^{-1} \cdot K_{\alpha, s} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big] \\
& \qquad + \sqrt{t_s^2 (1 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big]^2 + t_{\alpha}^2 (t_s^2 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big( - R_{\mathrm{loc}} (E) \big) \Big]^2} \bigg).
\end{flalign*}
\end{lem}
\begin{proof}
Denoting $\lambda = \lambda (E, s, \alpha)$, we have from \Cref{lambdaEsalpha} that
\begin{flalign*}
\lambda = K_{\alpha, s} \Big( t_{\alpha} \Real \ell (E) + \sqrt{t_s^2 \cdot \big( \Real \ell (E) \big)^2 + (t_s^2 - t_{\alpha}^2) \cdot \big( \Imaginary \ell (E) \big)^2 } \Big).
\end{flalign*}
\noindent This, together with \Cref{realimaginaryl}, yields the lemma.
\end{proof}
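The equivalence between the expression above and the one in \Cref{lambdaEsalpha} (through \Cref{realimaginaryl}) can be checked numerically; the sketch below assumes, consistently with the trigonometric factors appearing earlier, that $t_r = \sin (\pi r / 2)$ (so that $\cos (\pi \alpha / 2) = \sqrt{1 - t_{\alpha}^2}$), and uses hypothetical values for the two moments of $R_{\mathrm{loc}} (E)$:

```python
import math
import random

random.seed(0)
for _ in range(100):
    # hypothetical parameters 0 < alpha < s < 1 and moments m1, m2 with |m2| <= m1
    alpha = random.uniform(0.05, 0.9)
    s = random.uniform(alpha + 0.01, 0.99)
    m1 = random.uniform(0.1, 5.0)    # stands for E[|R_loc|^alpha]
    m2 = random.uniform(-m1, m1)     # stands for E[|R_loc|^alpha * sgn(-R_loc)]

    ta, ts = math.sin(math.pi * alpha / 2), math.sin(math.pi * s / 2)
    G = math.gamma(alpha)
    re_l = G * math.cos(math.pi * alpha / 2) * m1 / math.pi
    im_l = G * math.sin(math.pi * alpha / 2) * m2 / math.pi

    # expression for lambda in terms of Re(ell) and Im(ell), divided by K
    lhs = ta * re_l + math.sqrt(ts ** 2 * re_l ** 2 + (ts ** 2 - ta ** 2) * im_l ** 2)
    # expression from the lemma above, divided by K
    rhs = (G / math.pi) * (ta * math.sqrt(1 - ta ** 2) * m1
           + math.sqrt(ts ** 2 * (1 - ta ** 2) * m1 ** 2
                       + ta ** 2 * (ts ** 2 - ta ** 2) * m2 ** 2))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

Both square roots have nonnegative arguments since $t_s > t_{\alpha}$ for $s > \alpha$ in $(0,1)$.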
\section{Continuity of the Lyapunov Exponent}\label{s:lyapunovcontinuity}
In this section we show \Cref{l:phicontinuity}, which essentially states that $\varphi(s; E) = \limsup_{\eta \rightarrow 0} \varphi (s; E + \mathrm{i} \eta)$ is continuous in $E$ on the region where $\varphi (s; E) < 0$. To that end, it suffices to show two statements. First, $\varphi (s; E + \mathrm{i} \eta)$ is uniformly continuous in $\eta > 0$ for any fixed $E = E_0$ with $\varphi (s; E_0) < 0$. Second, $\varphi (s; E + \mathrm{i} \eta)$ is uniformly continuous in $E$, uniformly over $\eta$ bounded away from $0$. Let us first provide a heuristic for the former, which is more involved.
Recalling the definition \eqref{szl} of $\phi$, it suffices to understand the continuity of $\Phi_L(s; E + \mathrm{i} \eta)$ in $\eta$, which was defined in \eqref{sumsz1} as the sum of the $s$-th powers of off-diagonal resolvent entries. For the purposes of this heuristic, we take $L = 1$ and indicate the continuity of $\Phi_{1} (s; z) = \mathbb{E} \big[ \sum_{v \sim 0} | R_{0v} (z)|^s \big]$. Fixing $\eta_1 > \eta_2 > 0$ and letting $(z_1, z_2) = (E + \mathrm{i} \eta_1, E + \mathrm{i} \eta_2)$, we have since $s < 1$ that
\begin{align}\label{thediff}
\left|
\displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} (z_1) \big|^s
-
\displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} (z_2) \big|^s
\right|
&\le
\displaystyle\sum_{v \in \mathbb{V} (L)} \big| R_{0v} (z_1)
- R_{0v} (z_2) \big|^s.
\end{align}
\noindent In view of the product expansion \Cref{rproduct}, we have that $R_{0v} (z) = -R_{00} (z) T_{0v} R_{vv}^{(0)} (z)$. Thus, to show that $R_{0v} (z_1) \approx R_{0v} (z_2)$, it plausibly suffices to show that $R_{00} (z_1) \approx R_{00} (z_2)$ with high probability (which implies $R_{vv}^{(0)} (z_1) \approx R_{vv}^{(0)} (z_2)$ by symmetry). To that end, we apply the resolvent identity $\bm{A}^{-1} - \bm{B}^{-1} = \bm{A}^{-1} (\bm{B} - \bm{A}) \bm{B}^{-1}$ to write
\begin{flalign}
\label{r00z1r00z2}
\begin{aligned}
\big| R_{00} (z_1) - R_{00} (z_2) \big| & \le |z_2-z_1| \displaystyle\sum_{w \in \mathbb{V} } \big| R_{0w} (z_1) R_{w0} (z_2) \big| \\
& \le |z_1 - z_2| \Bigg( \displaystyle\sum_{w \in \mathbb{V}} \big| R_{0w} (z_1) \big|^2 \Bigg)^{1/2} \Bigg( \displaystyle\sum_{w \in \mathbb{V}} \big| R_{0w} (z_2) \big|^2 \Bigg)^{1/2}.
\end{aligned}
\end{flalign}
\noindent Applying the Ward identity \eqref{sumrvweta} would give the bound
\begin{align}
\label{r002}
\big| R_{00}(z_1) - R_{00}(z_2) \big| \le |z_1 - z_2| (\eta_1\eta_2) ^{-1/2}
\big( \Im R_{00}(z_1) \big)^{1/2}
\big( \Im R_{00}(z_2) \big)^{1/2}.
\end{align}
\noindent Assuming for $i \in \{ 1, 2 \}$ that $\Imaginary R_{00} (z_i) \le C$ (that is, it is bounded), this would suggest that $\big| R_{00} (z_1) - R_{00} (z_2) \big| \le C$, which does not quite give a continuity bound; we thus require a minor improvement on the right side of \eqref{r002} (by a factor of $\eta_1^c$, for example).
Hence, we instead use the assumption $\varphi (s; z_1) < 0$ to deduce that, with high probability,
\begin{flalign}
\label{r0wz1}
\displaystyle\sum_{w \in \mathbb{V}} \big| R_{0w} (z_1) \big|^2 \le \Bigg( \displaystyle\sum_{w \in \mathbb{V}} \big| R_{0w} (z_1) \big|^s \Bigg)^{2/s} = \Bigg( \displaystyle\sum_{k=0}^{\infty} \displaystyle\sum_{w \in \mathbb{V} (k)} \big| R_{0w} (z_1) \big|^s \Bigg)^{2/s} \le C.
\end{flalign}
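\noindent The first inequality in \eqref{r0wz1} is the monotonicity of $\ell^p$ norms: since $2/s > 1$, any nonnegative sequence $(a_w)$ satisfies
\begin{flalign*}
\displaystyle\sum_{w} a_w^{2/s} \le \Bigg( \displaystyle\sum_{w} a_w \Bigg)^{2/s},
\end{flalign*}
\noindent which we apply with $a_w = \big| R_{0w} (z_1) \big|^s$.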
\noindent Inserting this into \eqref{r00z1r00z2} and again applying the Ward identity \eqref{sumrvweta} gives
\begin{flalign*}
\big| R_{00} (z_1) - R_{00} (z_2) \big| \le C |z_1 - z_2| \eta_2^{-1/2} \big( \Imaginary R_{00} (z_2) \big)^{1/2},
\end{flalign*}
\noindent which upon again assuming that $\Imaginary R_{00} (z_2) \le C$ gives the required improvement on the right side of \eqref{r002}. Indeed, it implies that $\big| R_{00} (z_1) - R_{00} (z_2) \big| \le C \eta_2^{-1/2} |\eta_1 - \eta_2| \le C \eta_1^{1/4}$ if $\eta_2 \ge \eta_1^{3/2}$, and so repeatedly applying this bound along a sequence such as $\eta_j = e^{-(3/2)^j}$ tending to $0$ yields the uniform continuity of $R_{00} (E + \mathrm{i} \eta)$ in $\eta$.
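\noindent To verify the final claim, note that the sequence $\eta_j = e^{-(3/2)^j}$ satisfies $\eta_{j+1} = \eta_j^{3/2}$, so the above bound applied to consecutive terms gives
\begin{flalign*}
\big| R_{00} (E + \mathrm{i} \eta_j) - R_{00} (E + \mathrm{i} \eta_{j+1}) \big| \le C \eta_j^{1/4} = C e^{-(3/2)^j/4},
\end{flalign*}
\noindent which is summable in $j$. Hence the sequence $R_{00} (E + \mathrm{i} \eta_j)$ is Cauchy, and any intermediate $\eta \in (\eta_{j+1}, \eta_j)$ is handled by one further application of the bound, since $\eta \ge \eta_{j+1} = \eta_j^{3/2}$.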
The argument outlined above is not entirely valid in (at least) two ways. First, it assumes that $R_{00} (z_1) \approx R_{00} (z_2)$ implies $R_{0v} (z_1) \approx R_{0v} (z_2)$, but by the product expansion \Cref{rproduct} one should only expect this to be true if $T_{0v}$ is not too large. We circumvent this issue by first truncating all large $T_{0v}$ and showing that omitting them from the fractional moment sums yields a negligible error; we will in fact do the same for all small $T_{0v}$ in order to reduce the number of entries in $\mathbb{V}(L)$, which is summed over in the definition \eqref{sumsz1} of $\Phi_L$. Second, the above heuristic conflates the high probability estimate \eqref{r0wz1} with an expectation bound. We confront this issue by slightly varying the moment parameter $s$ in the above bounds, namely, by comparing $s$-th moments to the $s(1+\kappa)$-th ones and showing that they are not too distant. These are done in \Cref{s:continuitypreliminary}, with \Cref{l:phicutoff} and \Cref{momentkappa} addressing the former and latter issues, respectively. In \Cref{Estimate1} we then use these bounds to show the continuity of $\varphi (s; E + \mathrm{i} \eta)$ in $E$ (\Cref{l:goingsideways}) and $\eta$ (\Cref{l:goingdown} and \Cref{l:goingup}), which are used in \Cref{s:continuityproof} to complete the proof of \Cref{l:phicontinuity}.
Throughout this section, we fix $\alpha \in (0,1)$, and constants $\varepsilon > 0$ and $B > 1$. All complex numbers $z \in \mathbb{H}$ discussed here will satisfy $\varepsilon \le | \Re z | \le B$ and $\Im z \in (0,1)$ (including those labeled $z_1$ and $z_2$). All constants below depend on $\alpha$, $\varepsilon$, and $B$, even when not stated explicitly.
\subsection{Preliminary Estimates}
\label{s:continuitypreliminary}
We begin with a fractional moment bound for the resolvent.
\begin{lem}\label{l:qsexpectation}
Fix $s \in (\alpha,1)$ and $\delta \in (0,1)$. There exists a constant $C=C(\delta,s) > 1$ such that the following holds for any $z = E + \mathrm{i} \eta \in \mathbb{H}$ such that $\varepsilon \le |E| \le B$ and $\eta \in (0,1)$. If $\varphi (s; z) < -\delta$, then
\begin{equation*}
\mathbb{E} \left[ \big( \Im R_{00}( z ) \big)^{s/2} \right] \le C \eta^{s/2}.
\end{equation*}
\end{lem}
\begin{proof}
We abbreviate $R_{0v} = R_{0v} (z)$ for any vertex $v \in \mathbb{V}$. By the Ward identity \eqref{sumrvweta}, we have
\begin{equation*}
\Im R_{00}( z ) = \eta \sum_{v \in \mathbb{V}} |R_{0v}|^2,
\end{equation*}
so it suffices to bound the sum in the previous equation.
We have
\begin{flalign*}
\Bigg( \displaystyle\sum_{v \in \mathbb{V}} |R_{0v}|^2 \Bigg)^{s/2} = \Bigg( \displaystyle\sum_{L = 0}^{\infty} \displaystyle\sum_{v \in \mathbb{V} (L)} |R_{0v}|^2 \Bigg)^{s/2} \le \displaystyle\sum_{L = 0}^{\infty} \displaystyle\sum_{v \in \mathbb{V} (L)} |R_{0v}|^{s},
\end{flalign*}
\noindent where the last bound follows from $s/2 < 1$. Taking expectations, applying \Cref{moment1} (and \eqref{limitr0j2}), and using the fact that $\varphi (s; z) < -\delta$, we deduce the existence of a constant $C > 1$ such that
\begin{flalign*}
\mathbb{E} \Bigg[ \displaystyle\sum_{L = 0}^{\infty} \displaystyle\sum_{v \in \mathbb{V} (L)} |R_{0v}|^{s} \Bigg] = \displaystyle\sum_{L = 0}^{\infty} \Phi_L (s; z) \le C \displaystyle\sum_{L = 0}^{\infty} e^{-\delta L} \le \delta^{-1} C ,
\end{flalign*}
which completes the proof.
\end{proof}
The previous lemma quickly implies \Cref{p:imvanish}.
\begin{proof}[Proof of \Cref{p:imvanish}]
We only prove the first part, since the second is similar.
By \Cref{l:qsexpectation}, we have $\mathbb{E} \big[ (\Imaginary R_{\star} (E + \mathrm{i}\eta_j))^{s/2} \big] \le C \eta_j^{s/2}$ for all $j\in\mathbb{Z}_{\ge 1}$. Then $(\Imaginary R_{\star} (E + \mathrm{i} \eta_j))^{s/2}$ converges to $0$ in expectation as $j$ tends to infinity, which implies that $\Imaginary R_{\star} (E + \mathrm{i} \eta_j)$ converges to zero in probability.
\end{proof}
For the next lemma, we recall the definition of the event $\mathscr{D} (v, w ; \omega)$ from \Cref{br0vw}, and for brevity set $\mathscr{D}(v) = \mathscr{D}(v; \omega) = \mathscr{D} (0, v; \omega)$. We also make the following definition.
\begin{definition}
\label{br0vw2}
Fix $z \in \mathbb{H}$; let $v, w \in \mathbb{V}$ be vertices with $v \prec w$; and let $\omega \in (0,1)$ and $\Omega > 1$ be real numbers. For any vertex $u \in \mathbb{V}$ with $v \preceq u \prec w$, define the events $\mathscr{T} (u; v, w) = \mathscr{T} (u; v, w; \omega, \Omega) = \mathscr{T} (u; v, w; \omega, \Omega; z)$ and $\mathscr{T} (v, w) = \mathscr{T} (v, w; \omega, \Omega) = \mathscr{T} (v, w; \omega, \Omega; z)$ by
\begin{flalign*}
\mathscr{T} (u; v, w) = \big\{ \omega \le |T_{uu_+}| \le \Omega \big\}; \qquad \mathscr{T} (v, w) = \bigcap_{v \preceq u \prec w} \mathscr{T} (u; v, w).
\end{flalign*}
We also set $\mathscr{T}(v) = \mathscr{T}(v; \omega, \Omega) = \mathscr{T} (0, v; \omega, \Omega)$.
\end{definition}
The next lemma shows that the vertices $v\in \mathbb{V}$ such that $\mathbbm{1}_{\mathscr{D}(v)} = 0$ may be neglected in the sum defining $\Phi_L$.
\begin{lem}\label{l:phicutofflower}
Fix $s \in (\alpha,1)$, $ \theta \in (0,1/2)$, and $L \in \mathbb{Z}_{\ge 1}$. There exist constants $C(s) > 1$ and $c( s) > 0$
such that for any $z = E + \mathrm{i} \eta \in \mathbb{H}$ such that $\varepsilon \le |E| \le B$ and $\eta \in (0,1)$, and any $\omega \in [0, c \theta^C]$, we have
\begin{equation*}
\left|
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]
\right|
\le \theta L C^{1+L} .
\end{equation*}
\end{lem}
\begin{proof}
By \Cref{ry0estimate} and the second part of \Cref{estimatemoment1},
\begin{align*}
&\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]\\
&
\le
\big(1 - (1 -\theta)^L \big)\cdot \mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s
\right]\le \theta L C^{1+ L},
\end{align*}
where in the last inequality we used the bound $1 - (1-\theta)^L \le \theta L$, which follows from $\theta < 1/2$ and a Taylor expansion. This completes the proof.
\end{proof}
We next state some useful moment bounds on the sum defining $\Phi_L$, which estimate the $(1+\kappa)$-th moment of this sum for suitable $\kappa > 0$.
\begin{lem}
\label{momentkappa}
Fix $s\in (\alpha,1)$, $ \omega \in (0,1 )$, $L \in \mathbb{Z}_{\ge 1}$, and $ \Omega\ge 1$. Fix $\kappa,\chi> 0$ such that
\begin{equation}\label{chisconditions}
s(1+2\kappa)/(1-2\kappa) < 1, \quad \chi > (s-\alpha)/2, \quad \chi (1+2\kappa) / (1 - 2\kappa) < 1.\end{equation}
There exist constants $C(\chi,s) > 1$ and $c( \chi, s) > 0$ such that the following holds.
Fix $E_1, E_2 \in \mathbb{R}$ such that $|E_1|, |E_2|\in [\varepsilon, B]$, and fix $\eta_1, \eta_2 \in (0,1)$. Set $z_i = E_i + \mathrm{i} \eta_i$ for $i=1,2$.
Then we have
\begin{align}\label{fracmom1}
\mathbb{E}\left[
\left(\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_1)\right|^\chi \left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{D}(v)} \right)^{1+\kappa}
\right]\le C^{1+L} \omega^{-2 \kappa L }
\end{align}
and
\begin{align}\label{fracmom2}
\mathbb{E}\left[
\left(
\sum_{w\in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^\chi \left|
T_{0w }
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)} \right)^{1+\kappa}
\right]\le \Omega \cdot C^{1+L} \omega^{-2 \kappa L }.
\end{align}
\end{lem}
\begin{proof}
We begin with the proof of \eqref{fracmom2}.
We let $V$ denote the random variable equal to the number of nonzero $\mathbbm{1}_{\mathscr{T}(v)}$ at level $L$ of the Poisson weighted infinite tree:
\begin{equation*}
V = \left| \{ v \in \mathbb{V}(L) : \mathbbm{1}_{\mathscr{T}(v)} =1 \}\right|.
\end{equation*}
Using H\"older's inequality with conjugate exponents
\begin{equation*}
p^{-1} = (1-\kappa)(1 + \kappa)^{-1}, \qquad q^{-1} = 2\kappa(1 + \kappa)^{-1},
\end{equation*}
we get
\begin{align*}
&\left(\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2)\right|^\chi \left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)} \right)^{1+\kappa}\\
&\le V^{2\kappa}
\left(\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^{\chi(1+\kappa)/(1-\kappa) }\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\mathscr{T}(v)} \right)^{1-\kappa}.\end{align*}
Further using H\"older's inequality with conjugate exponents
\begin{equation*}
p^{-1} = (1 - \kappa) , \qquad q^{-1} = \kappa,
\end{equation*}
we have
\begin{align}
&\mathbb{E}\left[ V^{2\kappa}
\left(\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^{\chi(1+\kappa)/(1-\kappa) }\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\mathscr{T}(v)} \right)^{1-\kappa} \right]\notag \\
&\le \mathbb{E} \left[ V^2 \right]^{\kappa}\notag \\
&\quad \times
\mathbb{E}\left[
\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^{\chi(1+\kappa)/(1-\kappa) }\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\mathscr{T}(v)} \right]^{1-\kappa}.\label{banana3}
\end{align}
We observe that $V$ is equal to the number of leaves at level $L$ of a Galton--Watson tree with parameter
\begin{equation*}
\lambda = \int_{\omega}^\infty \alpha x^{-\alpha - 1}\, dx = \omega^{-\alpha}.
\end{equation*}
By \Cref{treenumber}, we have for any $t \ge 0$ that
\begin{flalign*}
\mathbb{P} [ V \ge t] \le 3 \exp(- t \cdot 2^{-L-1} \omega^{\alpha L}).
\end{flalign*}
Then
\begin{align}
\mathbb{E} [ V^2 ]
\le 2 \cdot 3 \int_0^\infty t \exp(- t \cdot 2^{-L-1} \omega^{\alpha L})\, dt
\le 24 \cdot 2^{2L} \omega^{-2 \alpha L}.\label{banana1}
\end{align}
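\noindent Here the integral is evaluated via $\int_0^\infty t e^{-at} \, dt = a^{-2}$ with $a = 2^{-L-1} \omega^{\alpha L}$, so that $6 a^{-2} = 6 \cdot 2^{2L+2} \omega^{-2 \alpha L} = 24 \cdot 2^{2L} \omega^{-2 \alpha L}$. Consequently, in \eqref{banana3} the first factor satisfies
\begin{flalign*}
\mathbb{E} [V^2]^{\kappa} \le 24^{\kappa} \cdot 2^{2 \kappa L} \omega^{-2 \alpha \kappa L} \le C^{1+L} \omega^{-2 \kappa L},
\end{flalign*}
\noindent where we used $\alpha < 1$ and $\omega < 1$ in the last bound.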
Further, by using the Schur complement formula \eqref{qvv}, we may write
\begin{align}\label{cherry}
& \sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^{\chi(1+\kappa)/(1-\kappa) }\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\mathscr{T}(v)}\\
& = \sum_{w\in \mathbb{V}(1)} \frac{|T_{0w}|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\omega \le |T_{0w}| \le \Omega } S_w}{| z_2 + T_{0w}^2 R^{(0)}_{w w} + K_w |^{\chi(1+\kappa)/(1-\kappa)} }
\le \sum_{w\in \mathbb{V}(1)} \frac{|T_{0w}|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{|T_{0w}| \le \Omega } S_w}{| z_2 + T_{0w}^2 R^{(0)}_{w w} + K_w |^{\chi(1+\kappa)/(1-\kappa)} }\notag,
\end{align}
where we set
\begin{equation*}
S_w = \sum_{v \in \mathbb{D}_{L-1}(w) } \left| R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)}\mathbbm{1}_{\mathscr{T}(w, v)}, \qquad
K_w = \sum_{\substack{ u \in \mathbb{V}(1)\\ u \neq w }}
T_{0u}^2 R^{(0)}_{uu}(z_2).
\end{equation*}
We observe that the sequence $\{ (K_w, R^{(0)}_{ww},S_w,T_{0w}) \}_{w \in \mathbb{V}(1)}$ satisfies \Cref{sqk}.
Then by \eqref{cherry}, the second part of \Cref{expectationsum1} and the second part of \Cref{estimatemoment1},
\begin{align}\label{banana2}
&\mathbb{E}\left[
\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^{\chi(1+\kappa)/(1-\kappa) }\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)} \mathbbm{1}_{\mathscr{T}(v)} \right] \\
&\le
C \Omega^{s(1+\kappa)/(1-\kappa) - \alpha} \cdot
\mathbb{E}\left[
\sum_{v \in \mathbb{V}(L-1) } \left| R_{0 v}(z_1)
\right|^{s(1+\kappa)/(1-\kappa)}\right]
\le
\Omega C^{L+1}\notag
\end{align}
for some $C> 1$.
Inserting \eqref{banana1} and \eqref{banana2} into \eqref{banana3} concludes the proof of \eqref{fracmom2}.
The proof of \eqref{fracmom1} is similar to that of \eqref{fracmom2}, so we omit it. However, let us explain why the right side of \eqref{fracmom1} differs from \eqref{fracmom2}, and how this changes the proof. We note that because \eqref{fracmom1} involves only resolvents evaluated at $z_1$, and no resolvent evaluated at $z_2$, the first part of \Cref{expectationsum1} may be used in place of the second part of \Cref{expectationsum1} above (in \eqref{banana2}) when proving \eqref{fracmom1}. This improves the right side of \eqref{banana2} by a factor of $\Omega$, which yields the same improvement of \eqref{fracmom1} over \eqref{fracmom2}.
\end{proof}
\begin{lem}\label{holdershort}
Retain the notation and assumptions of \Cref{momentkappa}. For any event $\mathscr A$, we have
\begin{equation*}
\mathbb{E}\left[
\mathbbm{1}_{\mathscr A}\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^\chi
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]
\le \P(\mathscr A)^{\kappa /( 1 + \kappa)}
\cdot \Omega \cdot C^{1+L} \omega^{- 2 \kappa L}.
\end{equation*}
\end{lem}
\begin{proof}
By H\"older's inequality and \eqref{fracmom2},
\begin{align*}
&\mathbb{E}\left[
\mathbbm{1}_{\mathscr A}\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^\chi
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\\
&\le
\P(\mathscr A)^{\kappa /( 1 + \kappa)}
\mathbb{E}\left[
\left(\sum_{w\in \mathbb{V}(1)} \sum_{v \in \mathbb{D}_{L-1}(w) }
\left|R_{00}(z_2) \right|^\chi
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}\right)^{1+\kappa}
\right]^{1/(1+\kappa)}\\
&\le
\P(\mathscr A)^{\kappa / ( 1 + \kappa)}
\cdot \Omega \cdot C^{1+L} \omega^{- 2 \kappa L}.
\end{align*}
This completes the proof.
\end{proof}
We now show that vertices $v\in \mathbb{V}$ such that $\mathbbm{1}_{\mathscr{T}(v)} = 0$ may be neglected in the sum defining $\Phi_L$, further improving \Cref{l:phicutofflower}.
\begin{lem}\label{l:phicutoff}
Fix $s \in (\alpha,1)$, $ \omega \in (0,1)$, $L \in \mathbb{Z}_{\ge 1}$, and $\Omega\ge 1$. There exist constants $C=C(s) > 1$ and $c(s) >0$ such that for any $z = E + \mathrm{i} \eta \in \mathbb{H}$ such that $\varepsilon \le |E| \le B$ and $\eta \in (0,1)$, we have
\begin{equation*}
\left|
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]
\right|
\le L C^{L+1} \Omega^{- c} \omega^{- L }.
\end{equation*}
\end{lem}
\begin{proof}
For this proof only, we define $\tilde T_{vw}$ for $v,w \in \mathbb{V}$ by
$\tilde T_{vw} = \mathbbm{1}_{ |T_{vw}| > \Omega } T_{vw}$.
It suffices to show that for each of the $L$ terms
\begin{align}\label{DeltaDef}
\Delta_k =
\mathbb{E}
\left[
\sum_{w \in \mathbb{V}(k) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) }
\left| R_{0w}(z) \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]
\end{align}
defined for $k \in \unn{0}{L-1}$, we have
\begin{equation}\label{delt}
\Delta_k \le
C^{L+1} \Omega^{-c} \omega^{- L}.\end{equation}
This is because \eqref{delt} implies
\begin{align}\label{pepper}
&\left|
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z) \right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]
\right| \\ &\le
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L) }
\left| R_{0v}(z)\mathbbm{1}_{\mathscr{D}(v)}
- R_{0v}(z)\mathbbm{1}_{\mathscr{T}(v)}
\right|^s
\right] \le \sum_{k=0}^{L-1} \Delta_k \le L C^{L+1} \Omega^{- c} \omega^{- L }\notag
\end{align}
after using the elementary inequality $\big| |x|^s - |y|^s \big| \le | x -y|^s$, which is the desired conclusion. Hence, we now turn to proving \eqref{delt}.
We first essentially reduce to the case when $k = 0$; this will closely follow the proof of \Cref{chilchil1}. Specifically, using \Cref{rproduct} to expand the $R_{0w}(z)$ term appearing in \eqref{DeltaDef} and the Schur complement formula \eqref{qvv}, we obtain
\begin{align}\label{pineapple}
&
\mathbb{E}
\left[
\sum_{w \in \mathbb{V}(k) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) }
\left| R_{0w}(z) \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]\\
&= \mathbb{E}
\left[
\sum_{r \in \mathbb{V}(1)}
\sum_{w \in \mathbb{D}_{k-1}(r) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) }
\left| R_{00}(z) T_{0r} R^{(0)}_{rw} \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]\label{pineapple3} \\
&=
\mathbb{E}
\left[
\sum_{r \in \mathbb{V}(1)}
\frac{|T_{0r}|^s S_r }{| z + T^2_{0r} R^{(0)}_{rr} + K_r|^s}
\right],\label{pineapple4}
\end{align}
where in the last equality we set
\begin{equation*}
S_r = \sum_{w \in \mathbb{D}_{k-1}(r) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) } \left| R^{(0)}_{rw} \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(r,v)},\qquad
K_r = \sum_{\substack{u \in \mathbb{V}(1)\\ u \neq r}}
T_{0u}^2 R^{(0)}_{uu}.
\end{equation*}
We observe that the sequence $\{ (K_r, R^{(0)}_{rr},S_r,T_{0r}) \}_{r \in \mathbb{V}(1)}$ satisfies \Cref{sqk}. This permits use of the first part of \Cref{expectationsum1} in \eqref{pineapple4}, which yields
\begin{align}\notag
&\mathbb{E}
\left[
\sum_{r \in \mathbb{V}(1)}
\frac{|T_{0r}|^s S_r }{| z + T^2_{0r} R^{(0)}_{rr} + K_r|^s}
\right]\\
&\le \label{rterm}
C \cdot \mathbb{E}\left[
\sum_{w \in \mathbb{D}_{k-1}(r) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) } \big|R^{(0)}_{rr}(z)\big|^{(\alpha -s)/2} \left| R^{(0)}_{rw} \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(r,v)} \right]\\
& \le C \cdot \mathbb{E}
\left[
\sum_{w \in \mathbb{V}(k-1) }
\sum_{u \in \mathbb{D}(w) }
\sum_{v \in \mathbb{D}_{L-k-1}(u) }
\big|R_{00}(z)\big|^{(\alpha -s)/2} \left| R_{0w}(z) \tilde T_{wu } R^{(w)}_{uv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right],\label{pineapple2}
\end{align}
where in \eqref{rterm}, $r \in \mathbb{V}(1)$ is an arbitrarily selected vertex (and the choice of $r$ does not change the expectation, as all $S_r$ are identically distributed).
We now observe that the term \eqref{pineapple2} is essentially the same as \eqref{pineapple}, except with the sum $w\in \mathbb{V}(k)$ replaced by $w \in \mathbb{V}(k-1)$, and an additional factor of $\big|R_{00}(z)\big|^{(\alpha -s)/2}$.
Therefore, considering the sum \eqref{pineapple2}, we can repeat the computations from \eqref{pineapple} through \eqref{pineapple2} (with the change that \eqref{pineapple3} has a factor of $\big|R_{00}(z)\big|^{(\alpha + s)/2}$ in place of the $\big|R_{00}(z)\big|^{s}$ there, which must be carried through the calculation). Iterating these computations $k-1$ times and putting the resulting bound on \eqref{pineapple} in \eqref{DeltaDef} gives
\begin{align}
\Delta_k
&\le
C^k \mathbb{E}
\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L -k -1}(w) }
\left| R_{00}(z)\right|^{(\alpha-s)/2} \left| R_{00}(z) \tilde T_{0w} R^{(0)}_{wv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]\notag \\
&=
C^k
\mathbb{E}
\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L -k -1}(w) }
\left| R_{00}(z)\right|^{(\alpha+s)/2} \left| \tilde T_{0w} R^{(0)}_{wv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right]\label{thesum}.
\end{align}
Using the Schur complement formula \eqref{qvv} to expand $R_{00}(z)$, we may write the previous line as
\begin{equation}\label{campbellsetup}
\Delta_k \le C^k \, \mathbb{E} \left[ \sum_{w \in \mathbb{V}(1)} \frac{|T_{0w}|^s \mathbbm{1}_{|T_{0w}| > \Omega } S_w}{| z + T_{0w}^2 R^{(0)}_{w w} + K_w |^{(\alpha + s)/2} } \right],
\end{equation}
where
\begin{equation*}
S_w = \sum_{v\in \mathbb{D}_{L-k-1}(w)} \left| R^{(0)}_{w v}\right|^s \mathbbm{1}_{\mathscr{D}(w,v)},
\qquad K_w = \sum_{\substack{ u \in \mathbb{V}(1)\\ u \neq w }}
T_{0u}^2 R^{(0)}_{uu}.
\end{equation*}
Then \Cref{fidentityxi} shows that
\begin{align}\label{jalapeno1}
& \mathbb{E} \left[ \sum_{w \in \mathbb{V}(1)} \frac{|T_{0w}|^s \mathbbm{1}_{|T_{0w}| > \Omega } S_w}{| z + T_{0w}^2 R^{(0)}_{w w} + K_w |^{(\alpha + s)/2} } \right] = \int_\Omega^\infty
\mathbb{E} \left[
\frac{t^s S }{\big | z + t^2 R + K \big|^{(\alpha+s)/2}}
\right]
\cdot \alpha t^{-\alpha -1 } \,dt \\
&\le
\left( \int^\infty_\Omega \alpha t^{-\alpha -1 } \,dt \right)^{\kappa/(1+\kappa)}
\left(
\int_0^\infty
\mathbb{E} \left[
\frac{t^{s(1+\kappa)} S^{1+\kappa} }{\big ( | z + t^2 R| +1 \big)^{(\alpha+s)(1+\kappa)/2}}
\right]
\cdot \alpha t^{-\alpha -1 } \,dt
\right)^{(1+\kappa)^{-1}}
,\notag
\end{align}
where we used H\"older's inequality in the last line, and set $\kappa = (1/2)(1 - s)$ so that $s (1 + \kappa ) < 1$. Here the random vector $(K,S,R)$ is defined to have the same joint law as any $(K_w, S_w, R^{(0)}_{ww})$.
We have
\begin{equation}\label{jalapeno2}
\alpha \int^\infty_\Omega t^{-\alpha -1 } \,dt
= \Omega^{-\alpha}.
\end{equation}
Also, by the first part of \Cref{expectationsum1},
\begin{align}
&\int_0^\infty
\mathbb{E} \left[
\frac{t^{s(1+\kappa)} S^{1+\kappa} }{\big ( | z + t^2 R| +1 \big)^{(\alpha+s)(1+\kappa)/2}}
\right]
\cdot \alpha t^{-\alpha -1 } \,dt\notag \\
&\le C \cdot \mathbb{E} \left[ | R |^{(\alpha-s)/2} S^{1+\kappa} \right]\notag \\ &= C \cdot
\mathbb{E}
\left[
\left| R_{00}(z)\right|^{(\alpha - s)/2}
\left(
\sum_{v\in \mathbb{V}(L-k-1)} \left| R_{0 v}(z)\right|^s \mathbbm{1}_{\mathscr{D}(v)}
\right)^{1+\kappa}
\right]\notag\\
&= C \cdot
\mathbb{E}
\left[
\left(
\sum_{w\in \mathbb{V}(1)}
\sum_{v\in \mathbb{D}_{L-k-2}(w)}
\left| R_{00}(z)\right|^{(\alpha - s)(1+\kappa)^{-1}/2}
\left| R_{00}(z) T_{0w} R^{(0)}_{wv} (z)
\right|^s \mathbbm{1}_{\mathscr{D}(v)} \right)^{1+\kappa}
\right]\notag\\
&\le C^{L-k + 1} \omega^{-2 \kappa L},\label{jalapeno3}
\end{align}
where the last inequality follows from \eqref{fracmom1}.
Set $c(s) = \kappa /(1 + \kappa)$. Using \eqref{campbellsetup}, \eqref{jalapeno1}, \eqref{jalapeno2}, and \eqref{jalapeno3}, we obtain $\Delta _k \le C^{L+1} \Omega^{- c} \omega^{- 2\kappa L}$, proving \eqref{delt}. This completes the proof after recalling \eqref{pepper} and using $\kappa < 1/2$ to show that $\omega^{-2 \kappa L} < \omega^{-L}$.
\end{proof}
\subsection{Continuity Estimates}
\label{Estimate1}
\label{s:continuityestimates}
We now prove a continuity estimate for the truncated version of $\Phi_L$ considered in \Cref{l:phicutoff}, involving only the vertices $v \in \mathbb{V}$ such that $\mathbbm{1}_{\mathscr{T}(v)} \neq 0$. It will be our primary technical tool for the remainder of this section.
\begin{lem}\label{l:cutoffclose}
Fix $s \in (\alpha,1)$, $\delta, \omega \in (0,1)$, $L \in \mathbb{Z}_{\ge 1}$, and $\Omega \ge 1$. There exist constants $C(\delta, s) > 1$ and $c(s) > 0$
such that the following holds. For $i=1,2$, fix $z_i = E_i + \mathrm{i} \eta_i \in \mathbb{H}$ such that $\varepsilon \le |E_i| \le B$ and $\eta_i \in (0,1)$. Suppose that $\phi(s;z_2) < - \delta$. Then
\begin{equation*}
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1) - R_{0v}(z_2)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]
\le L \Omega^2 C^{L+1}|z_1 - z_2|^s (\eta_1 \eta_2)^{-s/2}
\cdot \eta_2^{cs } \omega^{-2 L }.
\end{equation*}
\end{lem}
\begin{proof}
For $k \in \unn{0}{L-2}$, we define
\begin{align}\label{PsiDef}
&\Psi_{L, k} (s; \omega, \Omega; z_1, z_2)\notag \\
&=
\mathbb{E}\left[
\sum_{w \in \mathbb{V}(k+1)}
\sum_{v \in \mathbb{D}_{L-k-1}(w)}
\left|
R_{0 w_-}(z_2) T_{w_- w} R^{(w_-)}_{w v }(z_1)
- R_{0 w}(z_2) T_{w w_+} R^{(w)}_{w_+ v } (z_1)
\right|^s\mathbbm{1}_{\mathscr{T}(v)}
\right].
\end{align}
We also set
\begin{equation*}
\Psi_{L, -1} (s; \omega, \Omega; z_1, z_2)=
\mathbb{E}\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-1} (w)}
\left|
R_{0 v}(z_1)
- R_{0 0 }(z_2) T_{0 w} R^{(0)}_{w v}(z_1 )
\right|^s\mathbbm{1}_{\mathscr{T}(v)}
\right]
\end{equation*}
and
\begin{equation*}
\Psi_{L, L-1} (s; \omega, \Omega; z_1, z_2)=
\mathbb{E}\left[
\sum_{w \in \mathbb{V}(L-1)}
\sum_{v \in \mathbb{D}(w)}
\left|
R_{0 w }(z_2) T_{w v} R^{(w)}_{vv}(z_1)
-
R_{0 v}(z_2)
\right|^s\mathbbm{1}_{\mathscr{T}(v)}
\right].
\end{equation*}
Then using \Cref{rproduct}, we have
\begin{equation}\label{thephisum}
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1) - R_{0v}(z_2)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right] \le
\sum_{j = -1}^{L-1} \Psi_{L, j} (s; \omega, \Omega ; z_1, z_2),
\end{equation}
so to complete the proof, it suffices to bound the sum on the right side of \eqref{thephisum}.
Considering the terms in the sum defining \eqref{PsiDef}, we have using \Cref{rproduct} that
\begin{align*}
& R_{0 w_-}(z_2) T_{w_- w} R^{(w_-)}_{w v }(z_1)
- R_{0 w}(z_2) T_{w w_+} R^{(w)}_{w_+ v } (z_1)\\
& = R_{0 w_-}(z_2) T_{w_- w} R^{(w_-)}_{w w}(z_1)
T_{w w_+}
R^{(w)}_{w_+ v} (z_1)\\
&\quad - R_{0 w_-}(z_2) T_{w_- w} R^{(w_-)}_{w w}(z_2)
T_{w w_+}
R^{(w)}_{w_+ v} (z_1)\\
&= R_{0 w_-}(z_2) T_{w_- w} \big(R^{(w_-)}_{w w}(z_1) - R^{(w_-)}_{w w}(z_2) \big)
T_{w w_+}
R^{(w)}_{w_+ v} (z_1).
\end{align*}
Then using the previous line in the definition of $\Psi_{L, k}$ in \eqref{PsiDef} gives
\begin{align*}
&\Psi_{L, k} (s; \omega, \Omega; z_1, z_2)\notag\\
&=
\mathbb{E}\left[
\sum_{w \in \mathbb{V}(k+1)}
\sum_{v \in \mathbb{D}_{L-k-1}(w)}
\left|
R_{0 w_-}(z_2) T_{w_- w} \big(R^{(w_-)}_{w w}(z_1) - R^{(w_-)}_{w w}(z_2) \big)
T_{w w_+}
R^{(w)}_{w_+ v} (z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right].
\end{align*}
Using the product expansion from \Cref{rproduct}, the Schur complement formula \eqref{qvv} together with the first part of \Cref{expectationsum1} a total of $k$ times, and the second part of \Cref{expectationsum1} once, we obtain that
(reasoning as in the calculations leading to \eqref{thesum})
\begin{align}
\Psi_{L, k}& (s; \omega, \Omega; z_1, z_2)\notag \\
&\le
\Omega C^{k+1} \mathbb{E}\left[
\sum_{w \in \mathbb{V}(k+1)}
\sum_{v \in \mathbb{D}_{L-k-1}(w)}
\left|\big(R^{(w_-)}_{ww}(z_1) - R^{(w_-)}_{ww}(z_2) \big)
T_{ww_+}
R^{(w)}_{w_+ v} (z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\notag\\
&\le
\Omega C^{k+1} \mathbb{E}\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\left|\big(R_{00}(z_1) - R_{00}(z_2) \big)
T_{0w}
R^{(0)}_{w v} (z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\label{phiprev}
\end{align}
for some $C > 1$.
Using the resolvent identity $\bm{A}^{-1} - \bm{B}^{-1} = \bm{A}^{-1} (\bm{B} - \bm{A}) \bm{B}^{-1}$, we write
\begin{equation*}
R_{00}(z_1) - R_{00}(z_2)
= (z_2 - z_1) \sum_{v \in \mathbb{V}} R_{0v}(z_1) R_{v0}(z_2).
\end{equation*}
H\"older's inequality and the Ward identity \eqref{sumrvweta} together give the bound
\begin{align}
\left| R_{00}(z_1) - R_{00}(z_2) \right|
&\le |z_1 - z_2|
\left(\sum_{v \in \mathbb{V}} |R_{0v}(z_1)|^2 \right)^{1/2}
\left(\sum_{v \in \mathbb{V}} |R_{0v}(z_2)|^2 \right)^{1/2}\notag
\\
& = |z_1 - z_2| (\eta_1\eta_2) ^{-1/2}
\big( \Im R_{00}(z_1) \big)^{1/2}
\big( \Im R_{00}(z_2) \big)^{1/2}.\label{holder2}
\end{align}
Inserting \eqref{holder2} into \eqref{phiprev} gives
\begin{align}\notag
&\Psi_{L, k} (s; \omega, \Omega ; z_1, z_2)\le
\Omega C^{k+1}|z_1 - z_2|^s (\eta_1 \eta_2)^{-s/2}\\
&\times \mathbb{E}\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big( \Im R_{00}(z_1) \big)^{s/2}
\big( \Im R_{00}(z_2) \big)^{s/2}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)} \label{intPhi}
\right].
\end{align}
Let $\kappa > 0$ be a parameter, to be fixed later. By Markov's inequality, the assumption $\phi(s;z_2) < - \delta$, and \Cref{l:qsexpectation}, we have
\begin{equation}\label{simplemarkov}
\P\left( \big( \Im R_{00}(z_2) \big)^{s/2} > \eta_2^\kappa \right) \le C \eta_2^{s/2 - \kappa}.
\end{equation}
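\noindent In more detail, \eqref{simplemarkov} is Markov's inequality applied to the nonnegative random variable $\big( \Im R_{00} (z_2) \big)^{s/2}$: by \Cref{l:qsexpectation},
\begin{flalign*}
\P \left( \big( \Im R_{00} (z_2) \big)^{s/2} > \eta_2^{\kappa} \right) \le \eta_2^{-\kappa} \, \mathbb{E} \left[ \big( \Im R_{00} (z_2) \big)^{s/2} \right] \le C \eta_2^{s/2 - \kappa},
\end{flalign*}
\noindent which is a nontrivial bound provided $\kappa < s/2$.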
Define the event
\begin{equation*}
\mathscr A = \left\{ \big( \Im R_{00}(z_2) \big)^{s/2} \le \eta_2^\kappa \right\}.
\end{equation*}
Then by the definition of $\mathscr A$, we have
\begin{align}\label{ongoodevent}
&\mathbb{E}\left[\mathbbm{1}_{\mathscr A}
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big( \Im R_{00}(z_1) \big)^{s/2}
\big( \Im R_{00}(z_2) \big)^{s/2}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s
\right]
\\
&\le \eta^{\kappa}_2\cdot \mathbb{E}\left[
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big| R_{00}(z_1) \big|^{s/2}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s
\right]
\le
\eta^{\kappa}_2
C^{L-k},\notag
\end{align}
where the last inequality follows from iterating \Cref{chilchil1} with $\chi$ there equal to $s/2$, and using \Cref{expectationqchi} (as in the proof of \Cref{estimatemoment1}).
We next note that the elementary inequality $ab \le a^2 + b^2$ gives
\begin{equation*}
\big( \Im R_{00}(z_1) \big)^{s/2} \big( \Im R_{00}(z_2) \big)^{s/2} \le\big( \Im R_{00}(z_1) \big)^{s}
+
\big( \Im R_{00}(z_2) \big)^{s} .
\end{equation*}
Fix $\kappa>0$ so that \eqref{chisconditions} holds, with $\chi$ there equal to $s/2$. Then the previous line and \Cref{holdershort} give
\begin{align}
& \mathbb{E}\left[
\mathbbm{1}_{\mathscr A^c}
\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big( \Im R_{00}(z_1) \big)^{s/2}
\big( \Im R_{00}(z_2) \big)^{s/2}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\notag \\
&\le \mathbb{E}\left[
\mathbbm{1}_{\mathscr A^c} \sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big( \Im R_{00}(z_1) \big)^{s}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\notag \\
&+
\mathbb{E}\left[
\mathbbm{1}_{\mathscr A^c}\sum_{w \in \mathbb{V}(1)}
\sum_{v \in \mathbb{D}_{L-k-2}(w)}
\big( \Im R_{00}(z_2) \big)^{s}
\left|
T_{0w}
R^{(0)}_{w v}(z_1)
\right|^s \mathbbm{1}_{\mathscr{T}(v)}
\right]\notag \\
&\le \P(\mathscr A^c)^{c_1} \cdot \Omega \cdot C^{L-k} \omega^{-2 L }
\le \eta_2^{c_1s }
\cdot \Omega \cdot C^{L-k} \omega^{-2 L }
\label{onbadevent}
\end{align}
for some $c_1(s) > 0$, where we used \eqref{simplemarkov} in the last line. The claimed bound follows after inserting \eqref{ongoodevent} and \eqref{onbadevent} into \eqref{intPhi}, and using \eqref{thephisum}.
\end{proof}
The next lemma states a continuity estimate for $\phi(s; E + \mathrm{i} \eta)$ as $\eta$ is held fixed and $E$ varies.
\begin{lem}\label{l:goingsideways}
Fix $s \in (\alpha,1)$ and $ \delta, \eta \in (0,1)$. Then for every $\mathfrak b >0$, there exists $\mathfrak a (\delta, \eta, s, \mathfrak b) > 0$ such that the following holds. For every $E_1, E_2 \in \mathbb{R}$ such that $|E_1|, |E_2| \in [\varepsilon , B]$, $|E_1 - E_2| < \mathfrak a$, and $\phi(s;E_2 + \mathrm{i} \eta) < - \delta$,
we have
\begin{equation*}
\big|
\phi(s;E_1 + \mathrm{i} \eta ) - \phi(s;E_2 + \mathrm{i} \eta)
\big| < \mathfrak b.
\end{equation*}
\end{lem}
\begin{proof}
Set $z_1 = E_1 + \mathrm{i} \eta$ and $z_2 = E_2 + \mathrm{i} \eta$. We may assume that $|E_1 - E_2| < 1/10$. By the first part of \Cref{limitr0j}, there exists $C_1 > 1$ such that for any $L \in \mathbb{Z}_{\ge 1}$,
\begin{equation}\label{combineme2}
\left|
\big(
\phi_L(s;z_1) - \phi_L(s;z_2)
\big)
-
\big(
\phi(s;z_1) - \phi(s;z_2)
\big)
\right|
\le \frac{C_1}{L}.
\end{equation}
We set $L = \lceil 2 C_1 \mathfrak b^{-1} \rceil$.\footnote{In what follows, we drop the ceiling functions when doing computations with $L$, since it makes no essential difference.} Then it remains to bound the difference $\big|
\phi_L(s;z_1) - \phi_L(s;z_2)
\big|$.
Recalling the definition \eqref{sumsz1}, we have
\begin{align}\label{mango1}
\phi_L(s;z_1)
-
\phi_L(s;z_2)
=
\log \left(
\frac{\Phi_L(s; z_1)}{\Phi_L(s; z_2)}
\right).
\end{align}
We write
\begin{align}\notag
\left|\log \left(
\frac{\Phi_L(s; z_1)}{\Phi_L(s; z_2)}
\right)\right|
&=
\left|\log \left(
\frac{ \Phi_L(s; z_2) + \big(\Phi_L(s; z_1) - \Phi_L(s; z_2) \big)}{\Phi_L(s; z_2)}
\right)\right|\\
&\le
\frac{ 2 \left|\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]\right|}{\Phi_L(s; z_2)},\label{mango2}
\end{align}
where the inequality is valid under the assumption that
\begin{equation}
\left|\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]\right|\le \frac{1}{2}\Phi_L(s; z_2).\label{mango3}
\end{equation}
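The inequality in \eqref{mango2} follows from the elementary bound
\begin{equation*}
\big| \log (1 + x) \big| \le 2 |x| \qquad \text{for } |x| \le \frac{1}{2},
\end{equation*}
applied with $x = \big( \Phi_L(s; z_1) - \Phi_L(s; z_2) \big) / \Phi_L(s; z_2)$, whose absolute value is at most $1/2$ under the assumption \eqref{mango3}.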
From the second part of \Cref{estimatemoment1}, there exists $c_1 \in (0,1)$ such that
\begin{equation}\notag
\Phi_L(s; z_2) \ge c_1^{L+1}.
\end{equation}
Let $\theta \in(0,1/2)$ and $\Omega > 1$ be real parameters, and let $\omega = c_2 \theta^{C_2}$, where $C_2>1$ and $c_2>0$ are the constants given by \Cref{l:phicutofflower}.
Using \Cref{l:phicutofflower}, \Cref{l:phicutoff}, \Cref{l:cutoffclose}, and the elementary inequality $\big| |x|^s - |y|^s \big| \le | x -y|^s$, there exist constants $C_3>1$ and $c_3>0$ such that
\begin{align}
&\left|\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]\right|
\\
&\le \theta L C_3^{1+L} + L C_3^{1+L} \Omega^{-c_3} \omega^{-C_3L} + L C_3^{L+1} \Omega^2 |E_1 - E_2 |^s \eta^{s(c_3-1)} \omega^{-2L}.
\label{crudebound}
\end{align}
We set
\begin{equation}\label{blueberry1}
\theta = \frac{1}{16} \cdot L^{-1} C_3^{-1 - L} c_1^{L+1} \mathfrak b
\end{equation}
and fix $\Omega = \Omega(\theta, \mathfrak b)>1$ large enough so that
\begin{equation}\label{blueberry2}
L C_3^{1+L} \Omega^{-c_3} \omega^{-C_3L}
\le \frac{1}{16} \cdot c_1^{L+1} \mathfrak b.
\end{equation}
Finally, we fix $\mathfrak a(\theta,\Omega, \eta, \mathfrak b) > 0$ such that
\begin{equation}\label{blueberry3}
L C_3^{L+1} \Omega^2 |E_1 - E_2 |^s \eta^{s(c_3-1)} \omega^{-2L}
\le \frac{1}{16} \cdot c_1^{L+1} \mathfrak b
\end{equation}
when $|E_1 - E_2 | < \mathfrak{a}$.
With the choices \eqref{blueberry1}, \eqref{blueberry2}, and \eqref{blueberry3}, equation \eqref{crudebound} gives
\begin{align}\notag
\left|
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]
\right|
\le \frac{3}{16} \cdot c_1^{L+1} \mathfrak{b}.
\end{align}
Combining the previous line with \eqref{mango1}, \eqref{mango2}, and \eqref{mango3},
we get
\begin{equation}\notag
\big| \phi_L(s;z_1)
-
\phi_L(s;z_2) \big| < \frac{\mathfrak{b}}{2}.
\end{equation}
Combining the previous line with \eqref{combineme2} and the choice of $L$ below \eqref{combineme2} completes the proof.
\end{proof}
The next lemma provides a continuity estimate for $\phi(s; E + \mathrm{i} \eta)$ when $E$ is fixed and $\eta$ varies. Given the initial estimate $\phi(s; E + \mathrm{i} \eta_0) < - \delta$ for some $\delta > 0$ and sufficiently small $\eta_0$, the lemma states that $\limsup_{\eta \rightarrow 0}
\phi(s; E + \mathrm{i} \eta)$ is negative. In other words, the negativity of $\phi(s; E + \mathrm{i} \eta_0)$ propagates toward the real axis.
\begin{lem}\label{l:goingdown}
Fix $s \in (\alpha,1)$ and $ \delta \in (0,1)$. Then for every $\mathfrak b \in (0, \delta)$, there exists $\mathfrak a (\delta, \varepsilon, B, s, \mathfrak b) \in (0,1)$ such that the following holds. For every $E\in \mathbb{R}$ such that $|E| \in [\varepsilon, B]$, and every $\eta_0 \in (0, \mathfrak a)$ such that ${\phi(s;E + \mathrm{i} \eta_0)} < - \delta$,
we have
\begin{equation*}
\limsup_{\eta \rightarrow 0}
\phi(s; E + \mathrm{i} \eta) < - \mathfrak b.
\end{equation*}
\end{lem}
\begin{proof}
From the second part of \Cref{estimatemoment1}, there exists $c_1 \in (0,1)$ such that
\begin{equation}\label{upcondition}
\Phi_L(s; E + \mathrm{i} \eta) \ge c_1^{L+1},
\end{equation}
for all $E\in \mathbb{R}$ such that $|E| \in [\varepsilon, B]$ and $\eta \in (0,1)$. Let $C_2>1$ and $c_2 > 0$ be the two constants given by \Cref{l:cutoffclose}.
We begin by considering two points $z_1 = E + \mathrm{i} \eta_1$ and $z_2 = E + \mathrm{i} \eta_2$ with $\eta_1, \eta_2 \in (0,1)$ such that $\eta_1 \in [\eta_2^{(1 - c_2s/7)^{-1}}, \eta_2]$, so that $\eta_1 \le \eta_2$. We also suppose that $\phi(s;E + \mathrm{i} \eta_2) < - \delta/2$.
We make the choice of parameters
\begin{equation}\label{choices}
\omega = \exp\left( - ( -\log \eta_1)^{1/4} \right),
\quad
\Omega = \exp\left( (- \log \eta_1)^{1/2} \right),
\quad
L = ( - \log \eta_1)^{1/5}.
\end{equation}
We also set $\theta = (c_2^{-1} \omega)^{C_2^{-1}}$.
Using \Cref{l:phicutofflower}, \Cref{l:phicutoff}, \Cref{l:cutoffclose}, and the elementary inequality $\big| |x|^s - |y|^s \big| \le | x -y|^s$, we find
\begin{align}\notag
&\left|\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]\right|
\\
&\le \theta L C_3^{1+L} + L C_3^{1+L} \Omega^{-c_3} \omega^{-C_3L} + L C_3^{L+1} \Omega^2 |\eta_1 - \eta_2 |^s (\eta_1 \eta_2)^{-s/2}
\cdot \eta_2^{c_2s } \omega^{-2 L },
\label{crudebound12}
\end{align}
where $C_3 >1$ and $c_3 >0$ are constants.
Then there exists $\eta_0(s, c_1, c_2, C_2, c_3, C_3)>0$ such that, if $\eta_2 \in (0 , \eta_0)$, then
\begin{equation}\label{crudebound2}
\left|\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_1)
\right|^s
\right]
-
\mathbb{E}
\left[
\sum_{v \in \mathbb{V}(L)} \left|
R_{0v}(z_2)
\right|^s
\right]\right|
\le \frac{1}{4} \cdot c_1^{2(L+1)},
\end{equation}
since with the choices \eqref{choices}, each of the three terms on the right side of \eqref{crudebound12} decays faster than any fixed-base exponential in $L = (-\log \eta_1)^{1/5}$.
Using \eqref{upcondition}, \eqref{combineme2}, \eqref{mango1}, and \eqref{mango2}, we find that there exists $C_4 > 1$ such that
\begin{equation} \label{phigoingdown}
\big| \phi(s;z_1) - \phi(s;z_2) \big|
\le \frac{C_4}{L} + \frac{1}{2} \cdot c_1^{L+1}
< C_4 ( - \log \eta_2)^{-1/5},
\end{equation}
where we increased the value of $C_4$ in the second inequality.
Now consider a decreasing sequence $\{\nu_k\}_{k=1}^\infty$ of reals such that $\nu_1 < \eta_0$, $\phi(s; E + \mathrm{i} \nu_1) < -\delta$, and $\nu_{k+1} = \nu_k^{(1 - c_2s/7)^{-1}}$ for all $k \in \mathbb{Z}_{\ge 1}$. Set $w_k = E + \mathrm{i} \nu_k$, and for brevity set $c_5 = 1 - c_2s/7$. We may suppose that $\mathfrak b > \delta /2$, since the conclusion for a larger value of $\mathfrak b$ implies it for a smaller one.
With $\mathfrak{b}$ chosen as in the statement of this lemma, we claim that there exists $\mathfrak a(\mathfrak b)>0$ such that if $\nu_1 < \mathfrak a$, then $\phi(s; w_j) < - \mathfrak b$ for all $j \in \mathbb{Z}_{\ge 1}$; we prove this claim by induction.
For the induction hypothesis at step $n \in \mathbb{Z}_{\ge 1}$, suppose that $\phi(s; w_j) < - \mathfrak b$ holds for all $j \le n$; we will prove that the same estimate holds for $j = n+1$.
The bound \eqref{phigoingdown} gives, for all $k\le n$, that
\begin{align*}
\left|
\phi(s;w_k) - \phi(s;w_{k+1})
\right| &\le C_4 ( - \log \nu_k)^{-1/5}\\
&= C_4 \left( - \log\left( \nu_1^{c_5^{-(k-1)}} \right)\right)^{-1/5}
= C_4 c_5^{(k-1)/5} ( - \log \nu_1)^{-1/5}.
\end{align*}
This previous line implies that
\begin{align}\notag
\left|
\phi(s;w_1) - \phi(s;w_{n+1})
\right|
&\le
\sum_{k=1}^{n}
\left|
\phi(s;w_k) - \phi(s;w_{k+1})
\right| \\
&\le C_4 ( - \log \nu_1)^{-1/5} \sum_{k=1}^\infty c_5^{(k-1)/5}\le C_6 ( - \log \nu_1)^{-1/5}\label{returnto}
\end{align}
for some $C_6(C_4, c_5) >1$. We choose $\mathfrak a$ so that
\begin{equation*}
C_6 ( - \log \mathfrak a)^{-1/5} < \delta - \mathfrak b.
\end{equation*}
Then \eqref{returnto} and the assumption that $\nu_1 < \mathfrak{a}$ imply that
\begin{equation}\label{returnto2}
\phi(s;w_{n+1})
\le \phi(s;w_1) + C_6 ( - \log \nu_1)^{-1/5} < - \delta + (\delta - \mathfrak b) = -\mathfrak b.
\end{equation}
This completes the induction step, and shows that $\phi(s; w_j) < - \mathfrak b$ for all $j \in \mathbb{Z}_{\ge 1}$.
We now claim that for any $\tilde w = E + \mathrm{i} \tilde \nu$ with $\tilde \nu \in (0, \nu_1)$, we have $
\phi(s; \tilde w) < - \mathfrak b$. To see this, observe that there is a unique index $k \in \mathbb{Z}_{\ge 1}$ such that $\nu_k > \tilde \nu \ge \nu_{k+1}$. Consider the sequence $\{\tilde w_j\}_{j=1}^{k+1}$ defined by $\tilde w_j = w_j$ for $j \neq k+1$, and $\tilde w_{k+1} = E+ \mathrm{i} \tilde \nu$. Then the same induction argument that gave \eqref{returnto2} also gives
\begin{equation}\label{returnto3}
\phi(s; \tilde w_{k+1})
< -\mathfrak b.
\end{equation}
This completes the proof.
\end{proof}
The following lemma is in some sense the reverse of the previous one. It shows that if
\begin{equation*}\limsup_{\eta \rightarrow 0}
{\phi(s; E + \mathrm{i} \eta)} < 0,\end{equation*}
then $\phi(s, E + \mathrm{i} \eta_0)<0$ if $\eta_0$ is chosen sufficiently small, in a way that is independent of the energy $E$ (assuming $|E| \in [\varepsilon, B]$). In other words, the negativity of $\limsup_{\eta \rightarrow 0}\phi(s; E + \mathrm{i} \eta) $ propagates away from the real axis in a uniform way.
\begin{lem}\label{l:goingup}
Fix $s \in (\alpha,1)$ and $ \delta \in (0,1)$. Then for every $\mathfrak b \in (0, \delta)$, there exists $\mathfrak a (\delta, s, \mathfrak b) \in (0,1)$ such that the following holds. For every $E\in \mathbb{R}$ such that $|E| \in [\varepsilon, B]$ and
\begin{equation}\label{limsuphypo}
\limsup_{\eta \rightarrow 0}
\phi(s; E + \mathrm{i} \eta) < - \delta,
\end{equation}
we have
\begin{equation*}
\sup_{\eta \in (0, \mathfrak{a}]} \phi (s; E + \mathrm{i} \eta) < -\mathfrak{b}.
\end{equation*}
\end{lem}
\begin{proof}
From the second part of \Cref{estimatemoment1}, there exists $c_1 \in (0,1)$ such that
\begin{equation*}
\Phi_L(s; E + \mathrm{i} \eta) \ge c_1^{L+1},
\end{equation*}
for all $E\in \mathbb{R}$ such that $|E| \in [\varepsilon, B]$ and $\eta \in (0,1)$. Let $C_2>1$ and $c_2 > 0$ be the two constants given by \Cref{l:cutoffclose}.
We begin by considering two points $z_1 = E + \mathrm{i} \eta_1$ and $z_2 = E + \mathrm{i} \eta_2$ with $\eta_1, \eta_2 \in (0,1)$ such that $\eta_1 \in [\eta_2, \eta_2^{(1 - c_2s/7)}]$, so that $\eta_1 \ge \eta_2$. We also suppose that $\phi(s;E + \mathrm{i} \eta_2) < - \delta/2$.
Then repeating the calculations in \eqref{crudebound12}, \eqref{crudebound2}, and \eqref{phigoingdown} (with the same parameter choices \eqref{choices}) shows that there exists $C_3 > 0$ such that
\begin{equation} \label{phigoingup}
\big| \phi(s;z_1) - \phi(s;z_2) \big|
< C_3 ( - \log \eta_2)^{-1/5}
\end{equation}
for $\eta_2 < C_3^{-1}$.
We define a sequence $\{\tilde \nu_k \}_{k=1}^\infty$ in the following way. We let $\tilde \nu_1$ be any positive real number such that
\begin{equation}\label{initialcondition}
\phi(s; E + \mathrm{i} x) < - \delta + \frac{\delta - \mathfrak b}{2}
\end{equation}
for all $x \in (0, \tilde \nu_1]$.
The existence of such a $\tilde \nu_1$ is guaranteed by the assumption \eqref{limsuphypo}.
For brevity, we set $c_5 = 1 - c_2 s/7$.
We define $\tilde \nu_k$ for $k \ge 1$ recursively by $\tilde \nu_{k+1} = \tilde \nu_k^{c_5}$.
Let $\mathfrak a$ be a parameter to be determined later, and let $M \in \mathbb{Z}_{\ge 1}$ be the smallest integer such that $\tilde \nu_M \ge \mathfrak a$. We define the sequence $\{\nu_k \}_{k=1}^M$ by $\nu_k = \tilde \nu_k$ for $k < M$ and $\nu_M = \mathfrak a$, and set $w_k = E + \mathrm{i} \nu_k$.
We now show by induction that $\phi(s;w_k) < - \mathfrak b$ for all $k \in \unn{1}{M}$. The base case $k=1$ holds by the definition $\nu_1 = \tilde \nu_1$ and \eqref{initialcondition}. For the induction step, suppose that $\phi(s;w_k) < - \mathfrak b$ holds for all $k\le n$, where $n \le M-1$. We may suppose that $\mathfrak b > \delta /2$. The bound \eqref{phigoingup} gives, for all $k\le n$, that
\begin{align*}
\left|
\phi(s;w_k) - \phi(s;w_{k+1})
\right| &\le C_3 ( - \log \nu_k)^{-1/5}\\
&\le C_3 \left( - \log\left( \nu_1^{c_5^k} \right)\right)^{-1/5}
\le C_3 c_5^{-k/5} ( - \log \nu_1)^{-1/5}.
\end{align*}
This previous line implies that
\begin{align}\notag
\left|
\phi(s;w_1) - \phi(s;w_{n+1})
\right|
&\le
\sum_{k=1}^{n}
\left|
\phi(s;w_k) - \phi(s;w_{k+1})
\right| \\
&\le C_3 ( - \log \nu_1)^{-1/5} \sum_{k=1}^M c_5^{-k/5}
\le C_4 c_5^{-(M+1)/5} ( - \log \nu_1)^{-1/5},\label{apple}
\end{align}
for some $C_4(C_3, c_5) > 1$, where we summed the geometric series.
Since $\nu_{M-1} = \tilde \nu_{M-1} < \mathfrak a$ and $\tilde \nu_{M-1} = \nu_1^{c_5^{M-2}}$, we have
\begin{equation*}
\nu_1^{c_5^{M-2}} < \mathfrak a.
\end{equation*}
Taking logarithms and rearranging the previous line gives
\begin{equation*}
c_5^{-(M+ 1)/5}\le c_5^{-3/5} \left( \frac{ - \log \nu_1}{ - \log \mathfrak a} \right)^{1/5}.
\end{equation*}
Inserting this bound into \eqref{apple} gives
\begin{align*}
\left|
\phi(s;w_1) - \phi(s;w_{n+1})
\right|
\le C_4 c_5^{-3/5}
( - \log \mathfrak a)^{-1/5}.
\end{align*}
Choosing $\mathfrak a(C_3, c_5, \delta, \mathfrak b)> 0 $ sufficiently small in the previous inequality gives
\begin{equation}\label{achoice1}
\left|
\phi(s;w_1) - \phi(s;w_{n+1})
\right| \le \frac{\delta - \mathfrak b}{2}.
\end{equation}
Combining the previous line with \eqref{initialcondition} gives
\begin{equation*}
\phi(s;w_{n+1}) < - \mathfrak b,
\end{equation*}
as desired. This completes the induction step.
We conclude that
\begin{equation}\label{inductionconclude}
\phi(s; E + \mathrm{i} \mathfrak a )
= \phi(s; E + \mathrm{i} \nu_M ) < - \mathfrak b.
\end{equation}
We now show that $\phi(s; E + \mathrm{i} \nu ) < - \mathfrak{b}$ for any $\nu \in (0, \mathfrak{a})$. Recall that $\tilde \nu_1$ was defined through \eqref{initialcondition}; if $\nu \le \tilde \nu_1$, then we are done by this inequality. Otherwise, let $m\in \mathbb{Z}_{\ge 1}$ be the unique index such that
$\nu_m < \nu \le \nu_{m+1}$, and define the sequence $\{\hat \nu_k \}_{k=1}^{m+1}$ by $\hat \nu_k = \nu_k$ for $k \le m$, and $\hat \nu_{m+1} = \nu$. Then the same argument that gave \eqref{inductionconclude} applied to the sequence $\{\hat \nu_k \}_{k=1}^{m+1}$ (with the index $M$ replaced by $m+1$) and our choice of $\mathfrak {a}$ before \eqref{achoice1} together give
\begin{equation*}
\phi(s; E + \mathrm{i} \hat \nu_{m+1}) = \phi(s; E + \mathrm{i} \nu) < - \mathfrak{b}.
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Proof of \Cref{l:phicontinuity}}\label{s:continuityproof}
We can now establish \Cref{l:phicontinuity} by combining \Cref{l:goingsideways}, \Cref{l:goingdown}, and \Cref{l:goingup}.
\begin{proof}[Proof of \Cref{l:phicontinuity}]
By \Cref{l:goingdown} and \Cref{l:goingup}, there exists $\delta_1(\kappa, \omega) \in (0, 1) $ such that the following two claims hold for all $E\in I$.
First, if
\begin{equation*}
\limsup_{\eta\rightarrow 0}
\phi(s; E + \mathrm{i} \eta) < -\kappa,
\end{equation*}
then for all $\eta \in (0, \delta_1]$,
\begin{equation} \label{down}
\phi(s; E + \mathrm{i} \eta) < -\kappa + \frac{\omega}{3}.
\end{equation}
Second, if $\eta \in (0, \delta_1]$ and
\begin{equation*}
\phi(s; E + \mathrm{i} \eta) < -\kappa + \frac{2 \omega}{3},
\end{equation*}
then
\begin{equation}\label{up}
\limsup_{\eta\rightarrow 0}
\phi(s; E + \mathrm{i} \eta) < -\kappa + \varepsilon.
\end{equation}
Next, by \Cref{l:goingsideways}, there exists $\delta_2(\delta_1, \kappa, \omega)>0$ such that, if
\begin{equation*}
\phi(s; E_1 + \mathrm{i} \delta_1/2) < -\kappa + \frac{\omega}{3},
\end{equation*}
then
\begin{equation}\label{sideways}
\big|
\phi(s; E_2 + \mathrm{i} \delta_1/2) - \phi(s; E_1 + \mathrm{i} \delta_1/2)
\big|
< \omega/3
\end{equation}
for all $E_1, E_2 \in I$ such that $|E_1 - E_2| < \delta_2$.
Under the assumption that $|E_1 - E_2| < \delta_2$, using \eqref{down}, \eqref{up}, and \eqref{sideways} with the choice $\eta = \delta_1/2$ gives
\begin{equation*}
\limsup_{\eta\rightarrow 0}
\phi(s; E_2 + \mathrm{i} \eta) < -\kappa + \varepsilon,
\end{equation*}
as desired.
\end{proof}
\newpage
\chapter{The Mobility Edge for Large and Small $\alpha$}
\label{Scaling}
\section{Scaling Near One}\label{s:alphanear1}
The goal of this section is to prove \Cref{l:crudeasymptotic}, which states that any solution in $E$ to $\lambda(E,\alpha)=1$ (recall \eqref{lambdaEalpha}) must scale as $(1-\alpha)^{-1}$, as $\alpha$ tends to $1$. In \Cref{s:aEbEpreliminaries} we state some preliminary estimates on $\alpha$-stable laws. In \Cref{s:aEbEbounds} we estimate certain functionals of $a(E)$ and $b(E)$ (recall \eqref{opaque}), and in \Cref{s:asymptoticconclusion} we use these bounds to prove \Cref{l:crudeasymptotic}. Throughout this section, constants $C > 1$ and $c > 0$ will be independent of $\alpha$.
\subsection{Preliminaries}\label{s:aEbEpreliminaries}
We require the following tail bounds for nonnegative stable laws.
\begin{lem}\label{l:stabletailbounds}
There exists $C>1$ such that the following holds for all $\alpha \in [1/2, 1]$. Let $g_\alpha(x)$ be the density of a nonnegative $\alpha/2$-stable law. Then
\begin{equation}\label{stabletailbounds}
g_\alpha(x) \le C \min(1 , x^{-1 -\alpha/2} ), \qquad
\big| g_\alpha'(x) \big| \le C \min(1 , x^{-2 -\alpha/2} ).
\end{equation}
Further, for $x > C$, we have
\begin{equation}\label{gprimenegative}
g_\alpha'(x) \le - C^{-1} x^{-2 - \alpha/2},
\end{equation}
and
\begin{equation}\label{gprimenegative2}
\big( x g_\alpha(x) \big)' \le - C^{-1} x^{-1 - \alpha/2}.
\end{equation}
\end{lem}
\begin{proof}
The uniform bounds
\begin{equation*}
g_\alpha(x) \le C, \qquad
\big| g_\alpha'(x) \big| \le C .
\end{equation*}
are easily obtained from the Fourier inversion formula, using the explicit representation of $\hat g_\alpha(k)$ given by \eqref{xtsigma} and recalling that the Fourier transform of $g'(x)$ is $\mathrm{i} k \hat g(k)$.
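Concretely, one obtains from \eqref{xtsigma} a bound of the form $\big| \hat g_\alpha(k) \big| \le \mathrm{e}^{-c |k|^{\alpha/2}}$ with $c > 0$ uniform in $\alpha \in [1/2, 1]$, so that Fourier inversion yields
\begin{equation*}
g_\alpha(x) = \frac{1}{2 \pi} \int_{\mathbb{R}} \mathrm{e}^{- \mathrm{i} k x} \hat g_\alpha(k) \, dk
\le \frac{1}{2 \pi} \int_{\mathbb{R}} \big| \hat g_\alpha(k) \big| \, dk
\le \frac{1}{2 \pi} \int_{\mathbb{R}} \mathrm{e}^{-c |k|^{\alpha/2}} \, dk \le C,
\end{equation*}
and the bound on $g_\alpha'$ follows in the same way after inserting the additional factor $|k|$ in the integrand.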
We recall the following series expansion from \cite[(4)]{pollard1946representation} (see also \cite[(7)]{penson2010exact}). The representation
\begin{equation*}
g_\alpha(x) = \frac{1}{\pi} \sum_{j=1}^\infty
\frac{(-1)^{j+1}}{j! x^{1+\alpha j/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2)
\end{equation*}
is valid for all $\alpha \in (0,1)$ and $x>0$. For $x>2$, it gives
\begin{equation*}
\left| g_\alpha(x)\right|
\le \sum_{j=1}^\infty
\frac{1}{j! x^{1+\alpha j/2}} \Gamma( 1 + j)
\le C x^{-1 - \alpha/2},
\end{equation*}
for some $C>1$.
Differentiating term by term for $x>2$ gives
\begin{equation*}
g'_\alpha(x) = \frac{1}{\pi x^{2+\alpha/2 }} \sum_{j=1}^\infty
\frac{(-1)^{j}(1+\alpha j/2)}{j! x^{\alpha (j-1)/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2).
\end{equation*}
A similar bound gives $\big| g_\alpha'(x) \big| \le C x^{-2 -\alpha/2}$. We next write the summation as
\begin{equation*}
- (1 + \alpha/2)\Gamma(1+\alpha/2) \sin(\pi \alpha/2 ) +
\sum_{j=2}^\infty
\frac{(-1)^{j}(1+\alpha j/2)}{j! x^{\alpha (j-1)/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2),
\end{equation*}
and bound, using $\alpha \ge 1/2$,
\begin{equation*}
\left|
\sum_{j=2}^\infty
\frac{(-1)^{j}(1+\alpha j/2)}{j! x^{\alpha (j-1)/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2)
\right|
\le
\sum_{j=2}^\infty
\frac{(1+ j/2)}{j! x^{(j-1)/4}} \Gamma( 1 + j/2).
\end{equation*}
This bound is smaller than
\begin{equation*}
\frac{1}{2}(1 + 1/4)\Gamma(1+1/4) \sin(\pi/4 ) \le \frac{1}{2} (1 + \alpha/2)\Gamma(1+\alpha/2) \sin(\pi \alpha/2 )
\end{equation*}
for $x > C$ if $C>1$ is chosen large enough and $\alpha \in [1/2,1]$. This proves \eqref{gprimenegative}.
Next, we have
\begin{equation*}
x g_\alpha(x) = \frac{1}{\pi} \sum_{j=1}^\infty
\frac{(-1)^{j+1}}{j! x^{\alpha j/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2),
\end{equation*}
and differentiating term by term for $x > 2$ gives
\begin{equation*}
\big(x g_\alpha(x)\big)' = \frac{1}{\pi} \sum_{j=1}^\infty
\frac{\alpha j}{2} \frac{(-1)^{j}}{j! x^{1+\alpha j/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2).
\end{equation*}
We write this as
\begin{equation}\label{prv}
- \frac{\alpha}{2\pi} \frac{1}{x^{1 + \alpha/2}} \Gamma(1 + \alpha/2) \sin(\pi \alpha/2)
+
\frac{1}{\pi x^{1 + \alpha/2}} \sum_{j=2}^\infty
\frac{\alpha j}{2} \frac{(-1)^{j}}{j! x^{\alpha (j-1)/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2).
\end{equation}
For every $\varepsilon > 0$, there exists $C(\varepsilon)>1$ such that
\begin{equation*}
\left|\sum_{j=2}^\infty
\frac{\alpha j}{2} \frac{(-1)^{j}}{j! x^{\alpha (j-1)/2}} \Gamma( 1 + \alpha j/2) \sin(\pi \alpha j/2)\right|\le \varepsilon
\end{equation*}
for $x > C$ uniformly for $\alpha \in [1/2, 1]$. Together with \eqref{prv}, this proves the final bound \eqref{gprimenegative2}.
\end{proof}
\subsection{Bounds on $a(E)$ and $b(E)$}\label{s:aEbEbounds}
In this section, we develop large $E$ asymptotics for $a(E)$ and $b(E)$ (recall \eqref{opaque}). We begin with a definition.
\begin{definition}
Fix $\gamma \in (0,1)$.
We
define the functions $F_\gamma\colon\mathbb{R}\times \mathbb{R}_+ \times \mathbb{R}_+ \rightarrow \mathbb{R}$ and
$G_\gamma\colon{\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_+} \rightarrow \mathbb{R}$ by
\begin{flalign}\label{FandG}
F_\gamma (E,x,y) &= \mathbb{E} \left[ \left( E + x^{2/\alpha} S - y^{2/\alpha} T \right)_-^{-\gamma} \right]; \qquad G_\gamma(E,x,y) = \mathbb{E} \left[ \left( E + x^{2/\alpha} S - y^{2/\alpha} T \right)_+^{-\gamma} \right],
\end{flalign}
where $S$ and $T$ are independent, nonnegative $\alpha/2$-stable random variables.
\end{definition}
\begin{rem}\label{r:abfixedpointeqn}
We observe that the fixed point equations \eqref{fixedpoint} can be written as
\begin{equation*}
a(E) = F_{\alpha/2}(E , a , b), \qquad b(E) = G_{\alpha/2}(E, a, b).
\end{equation*}
\end{rem}
\begin{lem}\label{l:abasymptotic}
Suppose $a(E)$ and $b(E)$ solve \eqref{fixedpoint}.
Then there exists $c>0$ such that for all $\alpha \in (1-c, 1)$, $\gamma\in(0,1)$, and $E > c^{-1}$,
\begin{equation}\label{Fgammabound}
\left| F_{\gamma}\big(E , a(E) , b(E)\big) \right| \le \frac{c^{-1} E^{-\alpha - \gamma}}{1 - \gamma} ,\qquad \left| G_\gamma\big(E, a(E) ,b(E)\big) - E^{ - \gamma} \right| \le \frac{c^{-1} E^{-\alpha - \gamma}}{1-\gamma}.
\end{equation}
\end{lem}
\begin{proof}
Throughout this proof, we abbreviate $a = a(E)$ and $b = b(E)$. Set $Y = a^{2/\alpha} S - b^{2/\alpha} T$, where $S$ and $T$ are independent, nonnegative $\alpha/2$-stable random variables, and let $g_\alpha(x)$ be the density of a nonnegative $\alpha/2$-stable law. Then the density of $a^{2/\alpha}S$ is $a^{-2/\alpha} g_\alpha(x a^{-2/\alpha})$, and the density of $-b^{2/\alpha}T$ is $b^{-2/\alpha} g_\alpha( - x b^{-2/\alpha})$. The density $f(y)$ of $Y$ is given by their convolution. For $y > 0$, we bound
\begin{align}
\begin{aligned}
f(y) &= \int_{\mathbb{R}} a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \cdot b^{-2/\alpha} g_\alpha\big( - (y-x) b^{-2/\alpha}\big)\, dx \\
&= \int_{y}^\infty a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \cdot b^{-2/\alpha} g_\alpha\big( (x-y) b^{-2/\alpha}\big)\, dx\\ & \le \sup_{x >y} a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \le C a y^{-1 - \alpha/2 },
\end{aligned}
\label{yuppertail}
\end{align}
\noindent for some constant $C>1$, where for the last inequality we used \Cref{l:stabletailbounds}.
Similarly,
\begin{equation}\label{ylowertail}
f(y) \le C b |y|^{-1 - \alpha/2 }, \qquad \text{for $y < 0$}.
\end{equation}
We write the first equation in \eqref{FandG} as
\begin{align}
F_{\gamma} (E , a , b) &= \int_{-\infty}^{ -E } |E + y|^{-\gamma} f(y) \, dy =
\int_{- 3E/2 }^{-E}
|E + y|^{-\gamma} f(y) \, dy
+
\int_{-\infty}^{ - 3E/2 }
|E + y|^{-\gamma} f(y) \, dy.\label{Ffirstpiece}
\end{align}
Using \eqref{ylowertail} and substituting $y = Ex$ in the first term of \eqref{Ffirstpiece}, we get
\begin{align*}
\int_{- 3E/2 }^{-E}
|E + y|^{-\gamma} f(y) \, dy
&\le C b E^{- 1 - \alpha/2} \int_{- 3E/2 }^{-E} |E + y|^{-\gamma} \, dy \\
&\le Cb E^{- 1 - \alpha/2} \int_{-3/2}^{-1} E^{-\gamma} | 1 + x|^{-\gamma} E \, dx \le C(1- \gamma)^{-1}b E^{-\alpha/2 - \gamma}.
\end{align*}
For the second,
\begin{align*}
\int_{-\infty}^{ - 3E/2 }
|E + y|^{-\gamma} f(y) \, dy &=
\int_{-\infty}^{ - 3/2 }
|E + Ex |^{-\gamma} f(Ex) \, E \, dx \\
&= E^{1 - \gamma}
\int_{-\infty}^{ - 3/2 }
|1 + x |^{-\gamma} f(Ex) \, dx \\
&\le C b E^{1 - \gamma}
\int_{-\infty}^{ - 3/2 }
|1 + x |^{-\gamma} E^{-1 - \alpha/2} |x|^{-1 -\alpha/2} \, dx \le C b E^{-\alpha/2 - \gamma}.
\end{align*}
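For the convergence of the last integral, observe that $|1 + x| \ge |x|/3$ for $x \le -3/2$, so that
\begin{equation*}
\int_{-\infty}^{ - 3/2 }
|1 + x |^{-\gamma} |x|^{-1 -\alpha/2} \, dx
\le 3^{\gamma} \int_{3/2}^{\infty} u^{-1 - \alpha/2 - \gamma} \, du
\le \frac{3 \cdot (3/2)^{-\alpha/2 - \gamma}}{\alpha/2 + \gamma} \le C,
\end{equation*}
uniformly for $\gamma \in (0,1)$ and $\alpha \in [1/2, 1]$.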
\noindent Summing these estimates, we obtain
\begin{equation}\label{Fgammaestimate}
\big| F_{\gamma} (E , a , b) \big| \le C(1- \gamma)^{-1}b E^{-\alpha/2 - \gamma}.
\end{equation}
Next, we write the second equation in \eqref{FandG} as
\begin{equation*}
G_{\gamma}(E , a , b) = \int_{-E}^{\infty}
|E + y|^{-\gamma} f(y) \, dy.
\end{equation*}
We break the interval of integration into the three intervals $[-E, -E/2]$, $[-E/2, E/2]$, and $[E/2, \infty)$. For the first, using \eqref{ylowertail} and substituting $y =Ex$, we find
\begin{align*}
\int_{-E }^{ - E/2 }
|E + y|^{-\gamma} f(y) \, dy
&\le Cb E^{-1 - \alpha/2} \int_{-E}^{-E/2} |E + y |^{-\gamma} \, dy\\
&= C b E^{- 1 - \alpha/2} \int_{-1}^{-1/2} E^{-\gamma} | 1 + x|^{-\gamma} E \, dx \le Cb ( 1- \gamma)^{-1} E^{-\alpha/2 - \gamma}.
\end{align*}
For the third, using \eqref{yuppertail} and substituting $y =Ex$, we find
\begin{align*}
\int_{E /2}^{ \infty }
|E + y|^{-\gamma} f(y) \, dy
&\le C a \int_{E/2}^{\infty} |E + y |^{-\gamma} y^{-1 - \alpha/2} \, dy\\
&= C a E^{- 1 - \alpha/2} \int_{1/2}^{\infty} E^{-\gamma} | 1 + x|^{-\gamma} x^{-1 - \alpha/2} E \, dx \le Ca E^{-\alpha/2 - \gamma}.
\end{align*}
It remains to evaluate
\begin{equation*}
\int_{-E/2 }^{ E/2 }
|E + y|^{-\gamma} f(y) \, dy
=E^{-\gamma}\int_{-E/2 }^{ E/2 }
|1 + y/E|^{-\gamma} f(y) \, dy.
\end{equation*}
We have the Taylor series expansion
\begin{equation*}
( 1 + x)^{-\gamma} = 1 + \gamma \cdot O \big( |x| \big)
\end{equation*}
for $|x| < 1/2$, where the implicit constant is uniform in $\gamma\in (0,1)$.
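Indeed, by the mean value theorem, for $|x| < 1/2$ there exists $\xi$ between $0$ and $x$ such that
\begin{equation*}
\big| (1 + x)^{-\gamma} - 1 \big| = \gamma (1 + \xi)^{-\gamma - 1} |x| \le \gamma \, 2^{\gamma + 1} |x| \le 4 \gamma |x|.
\end{equation*}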
Inserting this expansion into the previous integral, the first term yields
\begin{equation*}
E^{-\gamma}\int_{-E/2 }^{ E/2 } f(y) \, dy = E^{-\gamma} \left ( 1 + (a+b) \cdot O(E^{-\alpha/2}) \right),
\end{equation*}
where we used \eqref{yuppertail} and \eqref{ylowertail}.
The error term is bounded by
\begin{align}
C \gamma E^{-1 - \gamma}\int_{-E/2 }^{ E/2 } |y| f(y) \, dy
& \le
C \gamma E^{-1 - \gamma}
\left(
\int_{-1 }^{ 1 } |y| f(y) \, dy + a \int_{1 }^{ E/2 } y^{-\alpha/2} dy + b \int_{-E/2 }^{-1 } |y|^{-\alpha/2} \, dy
\right)\notag
\\
&\le
C \gamma E^{-1 - \gamma}
\left(
1 + (a+b) E^{1 - \alpha/2}
\right).
\end{align}
\noindent Summing these estimates, we obtain
\begin{align}\label{Ggammaestimate}
\begin{aligned}
\big| G_{\gamma} (E , a , b) - E^{-\gamma} \big| &\le
C(a+b) ( 1- \gamma)^{-1} E^{-\alpha/2 - \gamma}\\
& \qquad + C E^{-\gamma} (a+b) E^{-\alpha/2} + C \gamma E^{-1 - \gamma}
\left(
1 + (a+b) E^{1 - \alpha/2}
\right).
\end{aligned}
\end{align}
Setting $\gamma = \alpha/2$ and using that $a = a(E)$ and $b = b(E)$ satisfy \eqref{fixedpoint} (recall \Cref{r:abfixedpointeqn}), we obtain from \eqref{Fgammaestimate} and \eqref{Ggammaestimate} that
\begin{equation}\label{Fgammaestimate2}
a \le Cb E^{-\alpha},
\end{equation}
and
\begin{align}\label{Ggammaestimate2}
| b - E^{-\alpha/2} | &\le
C(a+b) E^{-\alpha}+ E^{-\alpha/2} \cdot C (a+b) E^{-\alpha/2} + C E^{-1 - \alpha/2}
\left(
1 + (a+b) E^{1 - \alpha/2}
\right).
\end{align}
After substituting \eqref{Fgammaestimate2} into \eqref{Ggammaestimate2}, we see that there exists $C > 0$ such that $b < C E^{-\alpha/2}$, since otherwise \eqref{Ggammaestimate2} would yield a contradiction for large $E$.
Putting this bound into \eqref{Fgammaestimate} yields the first claim in \eqref{Fgammabound}. We also note that $b < C E^{-\alpha/2}$ and \eqref{Fgammaestimate2} together yield
\begin{equation}\label{aplusb}
|a + b| \le C E^{-\alpha/2}.
\end{equation}
The second claim of \eqref{Fgammabound} then follows from inserting \eqref{aplusb} into \eqref{Ggammaestimate}. This completes the proof.
\end{proof}
\begin{lem}\label{l:ablower}
Fix a constant $A > 1$. Then there exists $c(A) > 0$ such that for all $\alpha \in (1/2,1)$ and all $E \in \mathbb{R}$ with $|E| < A$, we have $a(E) > c$ or $b(E) > c$.
\end{lem}
\begin{proof}
By the symmetry of $a(E)$ and $b(E)$, it suffices to consider the case $E>0$.
Throughout this proof, we abbreviate $a = a(E)$ and $b = b(E)$, and set $Y = a^{2/\alpha} S - b^{2/\alpha} T$, where $S$ and $T$ are independent, nonnegative $\alpha/2$-stable random variables.
For $t >0$, we have
\begin{equation*}
\P[ Y > t] \le \P[ a^{2/\alpha} S > t] \le C_0 (t a^{-2/\alpha})^{-\alpha/2 } = C_0 t^{-\alpha/2 } a
\end{equation*}
for some $C_0>1$, by \eqref{stabletailbounds}.
Similarly,
\begin{equation*}
\P[ Y < - t] \le C_0 t^{-\alpha/2 } b.
\end{equation*}
We conclude that
\begin{equation}\label{garlic}
\P\big[ Y \in ( -t, t) \big] \ge 1 - C_0 t^{-\alpha/2} (a + b).
\end{equation}
Together with \eqref{FandG} and \Cref{r:abfixedpointeqn}, the previous line yields \begin{align*}
b(E) = \mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha/2} \right]
&\ge
\mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha/2} \mathbbm{1}_{\{Y \in ( -t, t) \}} \right] \\ &\ge (E+t)^{-\alpha/2} \big( 1 - C_0 t^{-\alpha/2} (a + b)\big).
\end{align*}
Setting $t=1$ and using $E <A$, the previous line gives
\begin{equation}\label{e:prev1}
(A+1)^{\alpha/2} b + C_0 (a+ b ) \ge 1.
\end{equation}
This proves the lemma after choosing $c$ sufficiently small, depending on $C_0$ and $A$. Indeed, if $a \le c$ and $b \le c$, then the left side of \eqref{e:prev1} is at most $\big( (A+1)^{1/2} + 2 C_0 \big) c$, which is less than $1$ once $c < \big( (A+1)^{1/2} + 2 C_0 \big)^{-1}$, contradicting \eqref{e:prev1}.
\end{proof}
\begin{comment}
\begin{align}
\begin{aligned}
f(y) &= \int_{\mathbb{R}} a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \cdot b^{-2/\alpha} g_\alpha\big( - (y-x) b^{-2/\alpha}\big)\, dx \\
&= \int_{y}^\infty a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \cdot b^{-2/\alpha} g_\alpha\big( (x-y) b^{-2/\alpha}\big)\, dx\\ &
\ge
\int_{1}^{2} a^{-2/\alpha} g_\alpha(x a^{-2/\alpha}) \cdot b^{-2/\alpha} g_\alpha\big( (x-y) b^{-2/\alpha}\big)\, dx
\\ &
\ge
c \int_{1}^{2} a^{-2/\alpha} (x a^{-2/\alpha}))^{-1-\alpha/2} \cdot b^{-2/\alpha} g_\alpha\big( (x-y) b^{-2/\alpha}\big)\, dx
\\ & \ge c \cdot a(E) \int_{1}^{2} b^{-2/\alpha} g_\alpha\big( (x-y) b^{-2/\alpha}\big)\, dx\\
& = c \cdot a(E) \int_{1-y}^{2-y} b^{-2/\alpha} g_\alpha( x b^{-2/\alpha})\, dx\\
&= c \cdot a(E) \int_{b^{-2/\alpha}(1-y)}^{b^{-2/\alpha}(2-y)} g_\alpha( x )\, dx
\end{aligned}
\label{yuppertail}
\end{align}
\begin{proof}
From the definition \eqref{FandG}, we have
\begin{equation*}
G_\gamma(E,a,b) = \int \int_{a^{2/\alpha}s - b^{2/\alpha} t > -E }
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\gamma} g(s) g(t) \, ds\, dt
\end{equation*}
where $g$ is the density of the one-sided $\alpha/2$-stable law.
We now rewrite the limits of integration. For the outer integral, we must have $s >0$. Then in the interior integral, we need
\begin{equation*}
t < b^{-2/\alpha} ( E + a^{2/\alpha} s ).
\end{equation*}
This shows that
\begin{equation}\label{Gintegral}
G_\gamma(E, a, b)=
\int_{0}^\infty
\int_0^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\gamma} g(s) g(t) \, dt\, ds.
\end{equation}
By \Cref{Fgammabound}, $a \le C E^{-3\alpha/2}$, so $a^{2/\alpha} \le C E^{-3}$. This motivates splitting the integral in $s$ at the point $s = E^3$, so that in the region $s \le E^3$, the term $a^{2/\alpha}s$ in the integrand is negligible compared to $E$.
The $s \ge E^3$ piece is
\begin{equation}\label{e3first}
\int_{E^3}^\infty
\int_0^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\alpha/2} g(s) g(t) \, dt\, ds.
\end{equation}
Using $g(s) \le C s^{-1 - \alpha/2}$, we obtain the bound
\begin{equation*}
C\int_{E^3}^\infty
\int_0^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\gamma} s^{-1 - \alpha/2} g(t) \, dt\, ds.
\end{equation*}
We now split the $t$ integral at $t=1/2$.
In this regime, we have from the upper bound on $b(E)$ in \Cref{l:abasymptotic} that there exists $C>0$ such that
\begin{equation*}
E - b^{2/\alpha}t > E/3,
\end{equation*}
for $E > C$, where the constant $C$ is uniform in $\alpha$.
Then we obtain the bound
\begin{align*}
\int_{E^3}^\infty
\int_0^{1/2}
(E/3 + a^{2/\alpha} s )^{-\gamma} s^{-1 - \alpha/2} g(t) \, dt\, ds
&\le
\int_{E^3}^\infty
(E/3 + a^{2/\alpha} s )^{-\gamma} s^{-1 - \alpha/2} \, ds\\
&\le C E^{-\gamma} \int_{E^3}^\infty
s^{-1 - \alpha/2} \, ds\\
&\le C E^{-3\alpha/2 - \gamma}.
\end{align*}
For the $t\ge 1/2$ piece, we use $g(t) \le C t^{-1 - \alpha/2}$ to get the bound
\begin{equation*}
\int_{E^3}^\infty
\int_{1/2}^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\gamma} s^{-1 - \alpha/2} t^{-1-\alpha/2} \, dt\, ds.
\end{equation*}
We make the substitution $w = b^{2/\alpha}(E + a^{2/\alpha} s)^{-1} t$. This gives
\begin{multline*}
b^{-2/\alpha}(E + a^{2/\alpha} s) \int_{E^3}^\infty
\int_{b^{2/\alpha}(E + a^{2/\alpha} s)^{-1} /2}^{1}
\big(E + a^{2/\alpha} s - (E + a^{2/\alpha} s) w \big)^{-\gamma} s^{-1 - \alpha/2} \\ \times \big(w b^{-2/\alpha}( E + a^{2/\alpha} s ) \big)^{-1-\alpha/2} \, dw\, ds,
\end{multline*}
which simplifies to
\begin{equation*}
b\int_{E^3}^\infty (E + a^{2/\alpha} s)^{-\alpha/2 - \gamma} s^{-1 - \alpha/2}
\int_{b^{2/\alpha}(E + a^{2/\alpha} s)^{-1} /2}^{1}
(1- w )^{-\gamma} w^{-1-\alpha/2} \, dw\, ds.
\end{equation*}
We split the integral in $w$ at $w=1/2$. The integral with $w \in [1/2, 1]$ is bounded by
\begin{multline*}
b\int_{E^3}^\infty (E + a^{2/\alpha} s)^{-\alpha/2 - \gamma} s^{-1 - \alpha/2}
\int_{1/2}^{1}
(1- w )^{-\gamma} w^{-1-\alpha/2} \, dw\, ds
\\
\le C (1 - \gamma)^{-1} b E ^{-\alpha/2 - \gamma } E^{ - 3 \alpha/2} \le C (1-\gamma)^{-1} E^{-2\alpha - \gamma}.
\end{multline*}
It remains to bound the contribution from $w < 1/2$, which is
\begin{align*}
b &\int_{E^3}^\infty(E + a^{2/\alpha} s)^{-\alpha/2 - \gamma} s^{-1 - \alpha/2}
\int_{b^{2/\alpha}(E + a^{2/\alpha} s)^{-1} /2}^{1/2}
(1- w )^{-\gamma} w^{-1-\alpha/2} \, dw\, ds
\\ &\le
C b \int_{E^3}^\infty (E + a^{2/\alpha} s)^{-\alpha/2 - \gamma} s^{-1 - \alpha/2}
\int_{b^{2/\alpha}(E + a^{2/\alpha} s)^{-1} /2}^{1/2}
w^{-1-\alpha/2} \, dw\, ds
\\
&\le
C b \int_{E^3}^\infty (E + a^{2/\alpha} s)^{-\alpha/2 - \gamma} s^{-1 - \alpha/2}
\big(b^{2/\alpha} (E + a^{2/\alpha} s )^{-1} \big)^{-\alpha/2}\, ds
\\
&\le
C E^{-\gamma} \int_{E^3}^\infty s^{-1 - \alpha/2}
\, ds
\\
&\le
C E^{-3\alpha/2 - \gamma} .
\end{align*}
This concludes the bound of \eqref{e3first}.
We now consider the integral over $s \in [ 0 , E^3]$ in \eqref{Gintegral}, which is
\begin{multline*}
\int_0^{E^3}
\int_0^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
(E + a^{2/\alpha} s - b^{2/\alpha} t)^{-\gamma} g(s) g(t) \, dt\, ds\\
=
E^{-\gamma} \int_0^{E^3}
\int_0^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
\big(1 + E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)\big)^{-\gamma} g(s) g(t) \, dt\, ds.
\end{multline*}
We divide the integral into two regions, depending on whether $E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)$ is large or small.
The inequality
\begin{equation*}
| E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)| < 1/2
\end{equation*}
holds for
\begin{equation*}
t \in \left[ (a^{2/\alpha} s - E/2 )b^{-2/\alpha}, (a^{2/\alpha} s + E/2 )b^{-2/\alpha}\right].
\end{equation*}
When $a \le E^3$, we have $a^{2/\alpha} s \le E/2$, so using the condition that $t$ is positive, this interval becomes
\begin{equation}\label{thetinterval}
t \in \left[0, (a^{2/\alpha} s + E/2 )b^{-2/\alpha}\right].
\end{equation}
The contribution from $t$ values that do not satisfy \eqref{thetinterval} is
\begin{equation*}
E^{-\gamma} \int_0^{E^3}
\int_{b^{-2/\alpha} (E/2 + a^{2/\alpha} s )}^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
\big(1 + E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)\big)^{-\gamma} g(s) g(t) \, dt\, ds.
\end{equation*}
Using the bound $g(t) \le C t^{-1 - \alpha/2}$ from \Cref{l:stabletailbounds}, this integral is bounded by
\begin{equation*}
b^{1 + 2/\alpha} E^{-\alpha/2 - \gamma - 1 } \int_0^{E^3}
\int_{b^{-2/\alpha} (E/2 + a^{2/\alpha} s )}^{b^{-2/\alpha} ( E + a^{2/\alpha} s )}
\big(1 + E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)\big)^{-\gamma} g(s) \, dt\, ds.
\end{equation*}
We now change variables in $t$ and set $ w = b^{2/\alpha}t - a^{2/\alpha} s$ to get
\begin{equation*}
b E^{-\alpha/2 - \gamma - 1 } \int_0^{E^3}
\int_{E/2}^{E}
(1 - w/E )^{-\gamma} g(s) \, dw\, ds
\le b E^{-\alpha/2 - \gamma - 1 }
\int_{E/2}^{E}
(1 - w/E )^{-\gamma} \, dw
.
\end{equation*}
Now set $ v = w/E$. This gives
\begin{equation*}
b E^{-\alpha/2 - \gamma }
\int_{1/2}^{1}
(1 - v )^{-\gamma} \, dv
\le \frac{ C E^{-\alpha - \gamma}}{1 - \gamma}.
\end{equation*}
Finally, the remaining term is
\begin{equation*}
E^{-\gamma} \int_0^{E^3}
\int_{0}^{b^{-2/\alpha} ( a^{2/\alpha} s + E/2)}
\big(1 + E^{-1}( a^{2/\alpha} s - b ^{2/\alpha} t)\big)^{-\alpha/2} g(s) g(t) \, dt\, ds.
\end{equation*}
We have the power series expansion
\begin{equation*}
( 1 + x)^{-\alpha/2} = 1 + \alpha\cdot O(x).
\end{equation*}
The constant term in this expansion gives
\begin{equation*}
E^{-\gamma} \int_0^{E^3}
\int_{0}^{b^{-2/\alpha} ( a^{2/\alpha} s + E/2)}
g(s) g(t) \, dt\, ds = E^{-\gamma} + O(E^{-\gamma - \alpha}),
\end{equation*}
since the upper limits of the integrals are lower bounded by $E^2/2$ when $E >C$.
It remains to bound
\begin{equation*}
E^{-1 - \gamma }\int_0^{E^3}
\int_{b^{-2/\alpha}(a^{2/\alpha} s - E/2 )}^{b^{-2/\alpha} ( a^{2/\alpha} s + E/2)}
| a^{2/\alpha} s - b ^{2/\alpha} t| g(s) g(t) \, dt\, ds.
\end{equation*}
Write $f(y)$ for the density of $a^{2/\alpha} S - b^{2/\alpha}T$. The previous integral is bounded by
\begin{equation}\label{ybb}
E^{-1 -\gamma} \int_{ -E/2}^{E/2} |y| f(y) \, dy,
\end{equation}
We first note that
\begin{equation*}
\int_{-1 }^{ 1 } |y| f(y) \, dy \le 1,
\end{equation*}
since $f(y)$ is a probability measure.
Using the bounds \eqref{yuppertail} and \eqref{ylowertail}, we conclude that
\begin{equation*}
\int_{1}^{E/2} |y| f(y) \, dy
\le C E^{-\alpha/2} \int_1^{E/2} y^{-\alpha/2} \, dy \le C E^{1 - \alpha}.
\end{equation*}
Then \eqref{ybb} is bounded by $CE^{-\gamma - \alpha}$. This completes the proof.
\end{proof}
\end{comment}
The proof of the following lemma is similar to the proof of \Cref{l:boundarybasics}, so we omit it.
\begin{lem}\label{l:boundarybasics2}
Fix $E \in \mathbb{R}$.
Then
\begin{equation*}
F_{\alpha}\big(E, a(E), b(E)\big) = \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_+ \right], \qquad G_{\alpha}\big(E, a(E), b(E)\big) = \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_- \right].
\end{equation*}
\end{lem}
The next lemma provides a useful lower bound on $\mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha} \right]$.
\begin{comment}
\ber
\begin{proof}
\begin{equation}
G_\gamma (E,a,b) \ge
\int_{-E/2 }^{ E/2 }
|E + y|^{-\gamma} f(y) \, dy
=E^{-\gamma}\int_{-E/2 }^{ E/2 }
|1 + y/E|^{-\gamma} f(y) \, dy
\end{equation}
Lowered bounded by $A^{-\gamma} (3/2) \int_{[-E/2, E/2]} f(y)$. So we do in fact need upper bounds on $a,b$ for $E$ near zero.
\end{proof}
\eer
\end{comment}
\begin{lem}\label{l:Rlower}
Fix a constant $A > 1$. Then there exists $c(A) > 0$ such that $\mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha} \right] \ge c$ for all $|E| < A$ and all $\alpha \in (1/2,1)$.
\end{lem}
\begin{proof}
We abbreviate $a = a(E)$ and $b = b(E)$, and set $Y = a^{2/\alpha} S - b^{2/\alpha} T$, where $S$ and $T$ are independent, nonnegative $\alpha/2$-stable random variables.
By \Cref{l:boundarybasics}, \Cref{l:boundarybasics2}, and H\"older's inequality,
\begin{align*}
b(E) = \mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha/2} \right]
\le \mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha} \right]^{1/2}
=\mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_- \right]^{1/2}.
\end{align*}
Similarly,
\begin{align*}
a(E)\le \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_+ \right]^{1/2}.
\end{align*}
The previous two lines and \Cref{l:ablower} together yield
\begin{align*}
c \le a(E)^{2} + b(E)^{2} \le \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha} \right]
\end{align*}
for $|E| < A$. This completes the proof.
\begin{comment}By \Cref{l:ablower} and \eqref{garlic}, we have for all $t>0$ and $|E| < A$ that
\begin{equation*}
\P\big[ Y \in ( -t, t) \big] \ge 1 - c_0 t^{-\alpha/2}.
\end{equation*}
for some $c_0 >0$. Then by \Cref{l:boundarybasics}, we have
\begin{align*}
\mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_- \right]
= \mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha} \right]
&\ge \mathbb{E} \left[ \left( E + Y \right)_+^{-\alpha} \mathbbm{1}_{\{Y \in ( -t, t) \}} \right]\\ &
\ge (E+t)^{-\alpha} ( 1 - c_0 t^{-\alpha/2 -1}).
\end{align*}
This completes the proof after choosing $t$ so that $c_0 t^{-\alpha/2 -1} = 1/2$.
\end{comment}
\end{proof}
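The Cauchy--Schwarz step used above, $\mathbb{E}[X^{-\alpha/2}] \le \mathbb{E}[X^{-\alpha}]^{1/2}$ for a positive random variable $X$, admits a quick numerical sanity check. The following Python sketch is purely illustrative and not part of the proof; the uniform distribution on $[1,2]$ is an arbitrary choice for which both sides are available in closed form.

```python
import math

def neg_moment_uniform_1_2(p):
    """E[X^{-p}] for X uniform on [1, 2]: integral_1^2 x^{-p} dx, for p != 1."""
    return (2 ** (1 - p) - 1) / (1 - p)

# Check E[X^{-alpha/2}] <= E[X^{-alpha}]^{1/2}, the Hoelder (Cauchy-Schwarz) step.
for alpha in [0.6, 0.75, 0.9, 0.99]:
    lhs = neg_moment_uniform_1_2(alpha / 2)
    rhs = math.sqrt(neg_moment_uniform_1_2(alpha))
    assert lhs <= rhs
```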
\subsection{Conclusion}\label{s:asymptoticconclusion}
We require the following lemma, which follows from routine Taylor series approximations.
\begin{lem}\label{l:mobeqasymptotics}
As $\alpha$ tends to $1$, we have
\begin{equation*}
K_\alpha = \frac{2 \alpha}{ (1-\alpha)^2} + O\big((1 - \alpha)^{-1} \big)
, \qquad t_1^2 - t_{\alpha}^2 = \frac{\pi^2}{4} (\alpha - 1)^{2} + O\big( (1-\alpha)^3 \big).
\end{equation*}
\end{lem}
\begin{proof}
The function $\Gamma(1/2 - \alpha/2)$ has a simple pole at $\alpha = 1$, with the expansion
\begin{equation*}
\Gamma(1/2 - \alpha/2) = \frac{2}{1-\alpha} + O(1).
\end{equation*}
Using the definition of $K_\alpha$ in \eqref{tlrk}, we find
\begin{equation}\label{Kalpha}
K_\alpha = \frac{\alpha}{2} \cdot \Gamma(1/2 - \alpha/2)^2 = \frac{2 \alpha}{ (1-\alpha)^2} + O\big((1 - \alpha)^{-1} \big).
\end{equation}
This proves the first claim.
Next, we note that
\begin{equation*}
t_\alpha^2 - t_{1}^2 = (t_\alpha - t_1)(t_\alpha + t_1)
\end{equation*}
has a double root at $\alpha = 1$, since $t_\alpha = \sin(\alpha \pi/2)$ satisfies $\partial_\alpha t_\alpha = (\pi/2) \cos(\alpha \pi /2) = 0$ at $\alpha = 1$. Expanding the power series of $t_\alpha^2$ around $\alpha = 1$, we find
\begin{equation*}
t_\alpha^2 - t_{1}^2 = -\frac{\pi^2}{4} (\alpha - 1)^{2} + O\big( (1-\alpha)^3 \big),
\end{equation*}
which completes the proof.
\end{proof}
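The two expansions in the lemma above are easy to check numerically. The following Python sketch is illustrative only; the error constants ($10$ in each tolerance) are arbitrary choices consistent with the stated $O(\cdot)$ terms, not quantities from the proof.

```python
import math

def K(alpha):
    # K_alpha = (alpha / 2) * Gamma(1/2 - alpha/2)^2, as in the definition of K_alpha.
    return (alpha / 2) * math.gamma(0.5 - alpha / 2) ** 2

def t(alpha):
    # t_alpha = sin(alpha * pi / 2), so t_1 = 1.
    return math.sin(alpha * math.pi / 2)

for eps in [1e-2, 1e-3, 1e-4]:
    alpha = 1 - eps
    # K_alpha = 2 alpha / (1 - alpha)^2 + O((1 - alpha)^{-1})
    assert abs(K(alpha) - 2 * alpha / eps ** 2) <= 10 / eps
    # t_1^2 - t_alpha^2 = (pi^2 / 4) (alpha - 1)^2 + O((1 - alpha)^3)
    assert abs((t(1.0) ** 2 - t(alpha) ** 2) - (math.pi ** 2 / 4) * eps ** 2) <= 10 * eps ** 3
```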
We are now ready to prove the main result of this section.
\begin{lem}\label{l:crudeasymptotic}
There exist $c, c_0>0$ such that for all $\alpha \in (1-c ,1)$ the following holds.
Any $E_{\mathrm{mob}} \in \mathbb{R}$ such that $\lambda(E_{\mathrm{mob}}, \alpha) = 1$ satisfies
\begin{equation*}
c_0 ( 1- \alpha)^{-1} \le E_{\mathrm{mob}} \le c_0^{-1} (1- \alpha)^{-1}.
\end{equation*}
Further, we have $\lambda(E, \alpha) > 1$ for $E \in \big(0, c_0 (1 - \alpha)^{-1} \big)$ and $\lambda(E, \alpha) < 1$ for
$E \in \big( c_0^{-1} (1 - \alpha)^{-1}, \infty \big)$.
\end{lem}
\begin{proof}
First, note that by \Cref{2lambdaEsalpha} and the second part of \Cref{l:lambdalemma}, we have (using $t_1 = 1$) that
\begin{flalign}
\lambda(E,\alpha) &= \lim_{s\rightarrow 1} \lambda (E, s, \alpha)\notag \\ & = \pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big] \label{kiwi} \\
& \qquad + \sqrt{ (1 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big]^2 + t_{\alpha}^2 (1 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big( - R_{\mathrm{loc}} (E) \big) \Big]^2} \bigg).\notag
\end{flalign}
For brevity, in this proof we write $F_\alpha = F_\alpha(E) = F_\alpha(E,a(E), b(E))$ and $G_\alpha = G_\alpha(E) = G_{\alpha} (E,a(E), b(E))$. Recalling \Cref{l:boundarybasics2}, we have
\begin{equation*}
F_{\alpha} = \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_+ \right], \qquad G_{\alpha} = \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha}_- \right].
\end{equation*}
Putting the previous line into \eqref{kiwi} gives
\begin{flalign}\notag
\lambda(E,\alpha) & = \pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot (F_\alpha + G_\alpha) \\
& \qquad + \sqrt{ (1 - t_{\alpha}^2) \cdot (F_\alpha + G_\alpha)^2 + t_{\alpha}^2 (1 - t_{\alpha}^2) \cdot (- F_\alpha + G_\alpha)^2} \bigg)\notag\\
& = \pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot (F_\alpha + G_\alpha) \label{kiwi3} \\
& \qquad + \sqrt{1 - t^2_\alpha} \cdot \sqrt{ 2 t_\alpha^2 (F_\alpha^2 + G_\alpha^2) + (1 - t_{\alpha}^2) ( F_\alpha + G_\alpha)^2} \bigg).\notag
\end{flalign}
Let $c_1>0$ be the constant given by \Cref{l:abasymptotic} (so that \eqref{Fgammabound} holds if $E > c_1^{-1}$). By \Cref{l:Rlower}, there exists a constant $c_2>0$ such that
\begin{equation} F_\alpha + G_\alpha = \mathbb{E}\left[ \big( R_\mathrm{loc}(E) \big)^{\alpha} \right] \ge c_2\label{kiwi2}\end{equation} for $|E| < c_1^{-1}$. Using \eqref{kiwi3}, \eqref{kiwi2}, and \Cref{l:mobeqasymptotics}, we find that there exists a constant $c_3 >0$ such that $\lambda(E,\alpha) > 2$ for $|E| < c_1^{-1}$ and $\alpha \in (1- c_3, 1)$. For the rest of the proof, we suppose that $|E| > c_1^{-1}$ and $\alpha \in (1- c_3, 1)$, so that we can apply the conclusions of \Cref{l:abasymptotic}.
From \eqref{Fgammabound}, we have
\begin{equation}\label{somebounds}
0\le F_\alpha \le C (1-\alpha) E^{-2\alpha} ,
\qquad |G_\alpha - E^{-\alpha} | \le C (1-\alpha)^{-1} E^{-2\alpha}.
\end{equation}
The conclusion of the lemma will now follow from combining the previous equation with \eqref{kiwi3}. We give only the details of the proof of the upper bound on $E_{\mathrm{mob}}$, since the lower bound is similar. Inserting \eqref{somebounds} into \eqref{kiwi3} and using the second estimate in \Cref{l:mobeqasymptotics} gives
\begin{comment}
\begin{align}
\lambda(E,\alpha)&\ge
\sqrt{1 - t^2_\alpha} \cdot \big(E^{-\alpha} -C (1-\alpha) E^{-2\alpha} \big) \ge c \big((1-\alpha)E^{-\alpha} -C (1-\alpha)^2 E^{-2\alpha} \big). \notag
\end{align}
The previous equation shows that there exists $C_0$ such that $\lambda(E,\alpha) > 0$ if $E> C_0 (1-\alpha)^{-1}$. This proves the claimed lower bound on $E_{\mathrm{mob}}$ and completes the proof.
\end{comment}
\begin{align}
\lambda(E,\alpha)&\ge
\pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot \big(E^{-\alpha} -C (1-\alpha)^{-1} E^{-2\alpha} \big). \label{kiwi4}
\end{align}
From \Cref{l:mobeqasymptotics}, we have
\begin{equation*}
K_{\alpha} \cdot \Gamma (\alpha) \cdot t_{\alpha} \sqrt{1 - t_{\alpha}^2} = \frac{\pi \alpha}{(1-\alpha)} + O(1)
\end{equation*}
as $\alpha$ tends to $1$. Then from the previous equation and \eqref{kiwi4}, there exists a constant $c>0$ such that
\begin{equation}
\lambda(E,\alpha) \ge c (1-\alpha)^{-1} E^{-\alpha} - C (1-\alpha)^{-2} E^{-2\alpha}.
\end{equation}
The previous equation shows that there exists $C_0>1$ such that $\lambda(E,\alpha) > 0$ if $E> C_0 (1-\alpha)^{-1/\alpha}$. This proves the claimed upper bound on $E_{\mathrm{mob}}$ and completes the proof, after noticing that
\begin{equation*}
(1-\alpha)^{-1/\alpha} > (1-\alpha)^{-1}
\end{equation*}
for $\alpha$ sufficiently close to $1$ (and decreasing $c_3$ if necessary).
\end{proof}
\section{Uniqueness Near One}
\label{s:alphanear1uniqueness}
In this section we collect some derivative bounds, which we then use to prove the uniqueness of the mobility edge for $\alpha$ near $1$. In \Cref{s:dxy} and \Cref{DerivativesG} we consider the derivatives of $F_\gamma$ and $G_\gamma$ in $x$ and $y$, and in \Cref{s:dE} we consider the derivative in $E$. \Cref{s:dAB} and \Cref{s:dtildeAB} consider derivatives of certain fractional moments of the resolvent $R_\star$, and \Cref{s:proveuniqueness} concludes with the proof of \Cref{t:main2}. Throughout this section, constants $c > 0$ and $C > 1$ will be independent of $\alpha$.
\subsection{Derivatives in $x$ and $y$ of $F$}\label{s:dxy}
We begin with the following integral estimate that will be used to bound derivatives of $F_{\gamma}$ and $G_{\gamma}$.
\begin{lem}
There exists a constant $C>1$ such that the following holds for all $\alpha \in (0,1)$.
Let $g(s)$ denote the density of a nonnegative $\alpha/2$-stable law.
For $x, E>0$ and $\gamma \in (0,1)$, we have
\begin{equation}\label{already!}
\int_0^\infty s (E + x^{2/\alpha} s)^{-\gamma-1}g(s)\, ds
\le Cx^{-2/\alpha + 1}E^{-\alpha/2 - \gamma}.
\end{equation}
\end{lem}
\begin{proof}
We have
\begin{equation*}
(E + x^{2/\alpha} s)^{-\gamma-1}
\le \max(E, x^{2/\alpha} s)^{-\gamma-1}.
\end{equation*}
We split the integral on the left side of \eqref{already!} at $s = x^{-2/\alpha} E$ and use the bound $s g(s) \le C s^{-\alpha/2}$ from \Cref{l:stabletailbounds} to obtain
\begin{align*}
\int_0^{x^{-2/\alpha} E} s (E + x^{2/\alpha} s)^{-\gamma-1}g(s)\, ds
&\le C E^{-1 - \gamma} \int_0^{ x^{-2/\alpha} E} s^{-\alpha/2} \,ds\\
&\le
C E^{-1 - \gamma} \cdot x^{ - 2/\alpha + 1} E^{ 1 - \alpha/2} = C x^{-2/\alpha + 1} E^{-\alpha/2 - \gamma}.
\end{align*}
The second piece is
\begin{align*}
\int_{x^{-2/\alpha} E}^\infty s (E + x^{2/\alpha} s)^{-\gamma-1}g(s)\, ds &\le
\int_{x^{-2/\alpha} E}^\infty s (x^{2/\alpha} s)^{-\gamma-1}g(s)\, ds
\\ & \le x^{-(2/\alpha)(\gamma + 1)}\int_{x^{-2/\alpha} E}^\infty s^{-\gamma }g(s)\, ds\\
& \le C x^{-(2/\alpha)(\gamma + 1)}
\int_{x^{-2/\alpha} E}^\infty s^{-1 - \alpha/2 -\gamma} \, ds\\
& \le C x^{-(2/\alpha)(\gamma + 1)} (x^{-2/\alpha} E)^{-\alpha/2 - \gamma} \le C x^{-2/\alpha + 1} E^{-\alpha/2 - \gamma} .
\end{align*}
This completes the proof.
\end{proof}
The following two lemmas then bound the derivatives of $F$ with respect to $x$ and $y$.
\begin{lem}\label{l:Fx}
There exists a constant $c \in (0, 1)$ such that for all $\alpha \in (1-c, 1)$, $\gamma \in (0,1)$ and $E > c^{-1}$, we have
\begin{equation*}
|\partial_x F_\gamma (E, x,y) | \le \frac{ y E^{-1 - \alpha }}{c(1- \gamma)}, \qquad \text{for all $x \le 1$ and $y \le 1$}.
\end{equation*}
\end{lem}
\begin{proof}
By the definition of $F_\gamma(E,x,y)$, we have
\begin{equation*}
F_\gamma(E, x,y) = \int \int_{x^{2/\alpha}s - y^{2/\alpha} t < -E }
|E + x^{2/\alpha} s - y^{2/\alpha} t|^{-\gamma} g(t) g(s) \, dt\, ds,
\end{equation*}
where $g$ is the density of the one-sided $\alpha/2$-stable law. We now rewrite the limits of integration. For the outer integral, we must have $s >0$ for $g(s)$ to be nonzero. In the interior integral, the range of $t$ on which the integrand is supported is
\begin{flalign*}
\{ t \in \mathbb{R}: -y^{2/\alpha} t < - E - x^{2/\alpha} s \} = \big\{ t \in \mathbb{R}: t > y^{-2/\alpha} ( E + x^{2/\alpha} s ) \big\}.
\end{flalign*}
\noindent Hence,
\begin{equation}\label{Flimitsrewrite}
F_\gamma(E, x,y) = \int_{0}^\infty g(s)
\int_{y^{-2/\alpha} ( E + x^{2/\alpha} s )}^\infty
(- E - x^{2/\alpha} s + y^{2/\alpha} t)^{-\gamma} g(t) \, dt\, ds.
\end{equation}
\noindent Changing variables $u = y^{2/\alpha} t - E - x^{2/\alpha} s$, we obtain
\begin{equation}\label{Fu}
F_\gamma(E,x,y) = y^{-2/\alpha} \int_0^\infty
\int_{ 0 }^\infty
u^{-\gamma}
g\big(y^{-2/\alpha} (u +E + x^{2/\alpha} s )\big) g(s) \, du \, ds.
\end{equation}
We differentiate \eqref{Fu} in $x$ to obtain
\begin{equation*}
\partial_x F_\gamma (E, x,y) = (2/ \alpha) y^{-4/\alpha} x^{2/\alpha-1 }\int_0^\infty s \int_{ 0 }^\infty
u^{-\gamma}
g'(y^{-2/\alpha} (u +E + x^{2/\alpha} s )) g(s) \, du \,ds.
\end{equation*}
Since $y \le 1$ and $E > c^{-1} \ge 1$, we have $y^{-2/\alpha} (u +E + x^{2/\alpha} s ) \ge 1$. Then using the estimate $|g'(w)| \le C w^{-\alpha/2 -2 }$ (which holds by \Cref{l:stabletailbounds}), the interior integral is bounded in absolute value by
\begin{align*}
C y^{-4/\alpha} x^{2/\alpha-1 } & \int_{ 0 }^\infty
u^{-\gamma}
\big(y^{-2/\alpha} (u +E + x^{2/\alpha} s )\big)^{-\alpha/2 -2} \, du
\\ &= C y x^{2/\alpha-1 } \int_{ 0 }^\infty
u^{-\gamma}
(u +E + x^{2/\alpha} s )^{-\alpha/2 -2} \, du.
\end{align*}
Therefore, it suffices to bound
\begin{equation}\label{Fusplit}
y x^{2/\alpha-1 }\int_0^\infty s g(s) \int_{ 0 }^\infty
u^{-\gamma}
(u +E + x^{2/\alpha} s )^{-\alpha/2 -2} \, du \, ds
\end{equation}
to complete the proof.
We now split the interior integral in \eqref{Fusplit} at $u=1$. For $u \in (0,1)$, we can bound the integrand using
\begin{equation*}
(u +E + x^{2/\alpha} s )^{-\alpha/2 -2} \le
(E + x^{2/\alpha} s ) ^{-\alpha/2 - 2} \le
\max (E, x^{2/\alpha} s)^{-\alpha/2 - 2},
\end{equation*}
which leads to
\begin{align}\label{parts1}
y x^{2/\alpha-1 } & \int_0^\infty s g(s) \int_{ 0 }^1
u^{-\gamma}
(u +E + x^{2/\alpha} s )^{-\alpha/2 -2} \, du \, ds\\
&\le C ( 1- \gamma)^{-1}y x^{2/\alpha-1 }
\int_0^\infty \max(E, x^{2/\alpha} s)^{-\alpha/2 - 2} s g(s) \, ds\notag
\end{align}
after integrating in $u$.
We split the integral in $s$ at $s = x^{-2/\alpha} E$. The contribution from $s < x^{-2/\alpha} E$ is at most
\begin{equation}\label{slowerregime}
C ( 1- \gamma)^{-1} y x^{2/\alpha-1 } E^{-\alpha/2 - 2 } \int_0^{x^{-2/\alpha} E} s g(s) ds.
\end{equation}
\noindent Again applying \Cref{l:stabletailbounds},
\begin{equation*}
\int_0^{x^{-2/\alpha} E} s g(s) ds \le C + C \int_1^{x^{-2/\alpha} E}
s^{-\alpha/2} ds \le C( 1 + x^{1 - 2/\alpha} E^{-\alpha/2 + 1}).
\end{equation*}
The contribution from \eqref{slowerregime} is then bounded by
\begin{align}
C(1-\gamma)^{-1} y x^{2/\alpha-1 } E^{-\alpha/2 - 2 } (1 + x^{1 - 2/\alpha} E^{-\alpha/2 + 1} )&\le C(1-\gamma)^{-1} y (x^{2/\alpha-1 } E^{-\alpha/2 - 2 }
+ E^{-1 - \alpha})\notag \\
&\le C(1-\gamma)^{-1} y E^{-1 - \alpha},\label{lemon1}
\end{align}
where we used the assumptions that $x\le 1$ and $E > c^{-1} \ge 1$. The contribution from $s > x^{-2/\alpha} E$ in \eqref{parts1} is at most
\begin{align}
C(1-\gamma)^{-1} y x^{2/\alpha-1 } \int_{x^{-2/\alpha} E}^\infty (x^{2/\alpha} s )^{-\alpha/2 -2} s^{-\alpha/2} ds\notag
&=
C(1-\gamma)^{-1} y x^{-2/\alpha-2 } \int_{x^{-2/\alpha} E}^\infty
s^{-\alpha -2} ds\notag \\
& = C(1-\gamma)^{-1} y x^{-2/\alpha-2 } (x^{-2/\alpha} E )^{-\alpha -1}\notag \\
&= C(1-\gamma)^{-1} y E^{-\alpha -1}.\label{lemon2}
\end{align}
We next consider the $u\ge 1$ part of \eqref{Fusplit}. This can be bounded by
\begin{align*}
C y x^{2/\alpha-1 } \int_0^\infty s g(s) \int_{1}^\infty
(u +E + x^{2/\alpha} s )^{-\alpha/2 -2} \, du \, ds
&\le
C y x^{2/\alpha-1 }\int_0^\infty s g(s)
( E + x^{2/\alpha} s )^{-\alpha/2 -1} \, ds.
\end{align*}
Using \eqref{already!}, we bound this by $Cy E^{-\alpha}$. Combining the estimates \eqref{lemon1} and \eqref{lemon2} (for the $u\le 1$ part of \eqref{Fusplit}) and the previous line (for the $u\ge 1$ part) bounds \eqref{Fusplit} and completes the proof.
\end{proof}
\begin{lem}\label{l:Fy}
There exists a constant $c \in (0, 1)$ such that for all $\alpha \in (1-c, 1)$, $\gamma\in(0,1)$, and $E > c^{-1}$, we have
\begin{equation*}
|\partial_y F_\gamma(E, x,y)| \le
c^{-1} (1 - \gamma)^{-1} E^{-\alpha/2 - \gamma} \end{equation*}
and
\begin{equation}\label{Fyispositive}
\partial_y F_\gamma(E, x,y) \ge 0
\end{equation}
for $y \le 1$.
\end{lem}
\begin{proof}
Recall from \eqref{FandG} that
\begin{equation*}
F_\gamma(E, x,y) = \int \int_{x^{2/\alpha}s - y^{2/\alpha} t < -E }
|E + x^{2/\alpha} s - y^{2/\alpha} t|^{-\gamma} g(s) g(t) \, ds\, dt .
\end{equation*}
Setting $v = y^{2/\alpha} t$ gives
\begin{equation*}
F_\gamma(E, x,y) = \int \int_{x^{2/\alpha} s - v < -E }
|E + x^{2/\alpha} s - v |^{-\gamma} g(s) g(y^{-2/\alpha } v) y^{-2/\alpha} \, ds \, dv.
\end{equation*}
\noindent Since
\begin{equation*}
\frac{d}{dy} \big( g(y^{-2/\alpha } v) y^{-2/\alpha} \big) = - \frac{2}{\alpha}g'( y^{-2/\alpha} v) \cdot y^{-1 - 4/\alpha} v - \frac{2}{\alpha} y^{-1 -2/\alpha} \cdot g(y^{-2/\alpha } v),
\end{equation*}
\noindent we have
\begin{flalign*}
\partial_y F_{\gamma} (E, x, y) = - \displaystyle\frac{2}{\alpha} & \displaystyle\int \displaystyle\int_{x^{2/\alpha} s -v < -E} |E + x^{2/\alpha} s- v|^{-\gamma} g(s) \\
& \qquad \qquad \times y^{-1-2/\alpha} \big( g' (y^{-2/\alpha} v) v y^{-2/\alpha} + g (y^{-2/\alpha} v) \big) ds dv.
\end{flalign*}
\noindent Recalling $v = y^{2/\alpha} t$ and combining the two terms, we find
\begin{equation*}
\partial_y F_{\gamma} (E, x, y) = - \frac{2}{\alpha y } \int \int_{x^{2/\alpha}s - y^{2/\alpha} t < -E }
|E + x^{2/\alpha} s - y^{2/\alpha} t|^{-\gamma} g(s)
\big( t g( t ) \big)'
\, ds \, dt.
\end{equation*}
Rewriting the limits of integration as in \eqref{Flimitsrewrite} gives
\begin{equation}\label{Flimitsrewrite2}
\partial_y F_\gamma(E,x,y) = - \frac{2}{\alpha y } \int_{0}^\infty
\int_{y^{-2/\alpha} ( E + x^{2/\alpha} s )}^\infty
|E + x^{2/\alpha} s - y^{2/\alpha} t|^{-\gamma} (tg(t))'
g(s)
\, ds \, dt.
\end{equation}
We observe that $y^{-2/\alpha}(E + x^{2/\alpha} s ) \ge E$ for $y \le 1$ and $x \ge 0$. We also note that there exists $C>1$ such that $\big(t g(t)\big)' < 0$ for $t > C$, where $C$ is uniform in $\alpha \in (1/2, 1)$, by \Cref{l:stabletailbounds}. Hence the previous expression is positive, showing \eqref{Fyispositive}.
Using \Cref{l:stabletailbounds} to bound $\big|\big( t g(t) \big)' \big| \le g(t) + t \big| g'(t) \big| \le Ct^{-1-\alpha/2}$, we find from \eqref{Flimitsrewrite2} that
\begin{equation*}
\big| \partial_y F_{\gamma} (E, x, y) \big| \le Cy^{-1} \int_{0}^\infty
\int_{y^{-2/\alpha} ( E + x^{2/\alpha} s )}^\infty
|E + x^{2/\alpha} s - y^{2/\alpha} t|^{-\gamma} t^{-1 - \alpha/2}
g(s)
\, ds \, dt.
\end{equation*}
\noindent Changing variables $w =y^{2/\alpha}(E + x^{2/\alpha}s )^{-1} t$, it follows that
\begin{align}
\big| \partial_y F_{\gamma} (E, x, y) \big| \le C y^{-1} &\int_{0}^\infty
\int_{1}^\infty
\big|E + x^{2/\alpha} s - (E + x^{2/\alpha} s) w \big|^{-\gamma} \big(w y^{-2/\alpha} (E + x^{2/\alpha} s) \big)^{-1 - \alpha/2} \notag \\
& \qquad \qquad \times y^{-2/\alpha} (E + x^{2/\alpha} s) g(s) ds dw \notag \\
&= C \int_{0}^\infty
(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma} g(s)
\int_{1}^\infty
w^{-1 - \alpha/2} (w -1) ^{-\gamma}
\, dw \, ds .\label{129}
\end{align}
We note that
\begin{equation}\label{wintegral!}
\int_{1}^\infty
w^{-1 - \alpha/2} (w -1) ^{-\gamma}
\, dw \le C ( 1 - \gamma)^{-1}.
\end{equation}
Then we use the bound $(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma} \le E^{-\alpha/2 - \gamma}$, \eqref{wintegral!}, and the fact that $g$ is a probability density
in \eqref{129} to obtain the desired bound $\big| \partial_y F_{\gamma} (E, x, y) \big| \le C (1 - \gamma)^{-1} E^{-\alpha/2 - \gamma}$.
\end{proof}
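As an aside, the $w$-integral in \eqref{wintegral!} can be evaluated in closed form: substituting $w = 1/u$ turns it into the Beta integral $B(\alpha/2 + \gamma, 1 - \gamma)$, whose factor $\Gamma(1-\gamma)$ produces exactly the $(1-\gamma)^{-1}$ blow-up. The following Python sketch checks this numerically; the parameter values and the constant $5$ standing in for $C$ are arbitrary illustrative choices.

```python
import math

def w_integral(alpha, gamma, n=400000):
    """Midpoint rule for int_1^inf w^(-1 - alpha/2) (w - 1)^(-gamma) dw.

    The substitution w = 1/u maps it to int_0^1 u^(alpha/2 + gamma - 1) (1 - u)^(-gamma) du,
    a Beta integral whose endpoint singularities are integrable for gamma in (0, 1).
    """
    h = 1.0 / n
    return h * sum(
        ((i + 0.5) * h) ** (alpha / 2 + gamma - 1)
        * (1 - (i + 0.5) * h) ** (-gamma)
        for i in range(n)
    )

def beta(p, q):
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

alpha, gamma = 0.9, 0.5
numeric = w_integral(alpha, gamma)
exact = beta(alpha / 2 + gamma, 1 - gamma)
assert abs(numeric - exact) / exact < 0.01   # quadrature matches the Beta formula
assert exact <= 5 / (1 - gamma)              # consistent with the C (1 - gamma)^{-1} bound
```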
\subsection{Derivatives in $x$ and $y$ of $G$}
\label{DerivativesG}
In this section we establish the following two lemmas, which bound the derivatives of $G$ with respect to $x$ and $y$ and are parallel to \Cref{l:Fx} and \Cref{l:Fy}.
\begin{lem}\label{l:Gx}
There exist constants $c \in (0, 1)$ and $C > 1$ such that for all $\alpha \in (1-c, 1)$, $\gamma \in (0,1)$, and $E > c^{-1}$, we have
\begin{align}\label{gxfinal}
|\partial_x G_\gamma(E,x,y) | &\le
C (1 - \gamma)^{-1} E^{-\alpha/2 - \gamma}
+ C ( 1- \gamma)^{-2} E^{-\alpha/2 -2 \gamma} y \notag \\
&\qquad + C E^{-1 - \alpha/2 - \gamma} y^{-2/\alpha}
+ C E^{-\gamma} y.
\end{align}
\end{lem}
\begin{proof}
From the definition \eqref{FandG}, we have
\begin{equation*}
G_\gamma(E,x,y) = \int \int_{x^{2/\alpha}s - y^{2/\alpha} t > -E }
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} g(s) g(t) \, dt \, ds
\end{equation*}
where $g$ is the density of the nonnegative $\alpha/2$-stable law.
We now rewrite the limits of integration. The integrand of the outer integral is supported on $\{ s >0 \}$. For the interior integral, the integrand is supported on $\big\{ t \in \mathbb{R}_{> 0} : t < y^{-2/\alpha} ( E + x^{2/\alpha} s ) \big\}$. This shows that
\begin{equation}\label{Gintegralx}
G_\gamma(E,x,y) = \int_{0}^\infty
\int_0^{y^{-2/\alpha} ( E + x^{2/\alpha} s )}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} g(s) g(t) \, dt\, ds.
\end{equation}
In the interior integral, we set $ v = (E + x^{2/\alpha} s)^{-1} y^{2/\alpha} t$ to obtain
\begin{flalign}
G_{\gamma} (E, x, y) &= y^{-2/\alpha}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{1-\gamma} \int_0^1\label{diffme2}
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) \, dv \,ds.
\end{flalign}
We differentiate \eqref{diffme2} in $x$ and obtain two terms. The first is
\begin{equation}\label{diffresult1}
\displaystyle\frac{2}{\alpha} (1 - \gamma) y^{-2/\alpha} x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-\gamma} s \int_0^1
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) \, dv\,ds,
\end{equation}
and the second is
\begin{equation}\label{put}
\frac{2}{\alpha}\cdot
y^{-4/\alpha} x^{2/\alpha-1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{1-\gamma}s \int_0^1
(1 - v )^{-\gamma} v g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) \, dv \, ds.
\end{equation}
We begin by analyzing \eqref{diffresult1}.
We split the integral in $v$ at $v = (E (E + x^{2/\alpha} s ))^{-1}$.
For $v < (E (E + x^{2/\alpha} s ))^{-1}$, we obtain
\begin{align*}
\displaystyle\frac{2}{\alpha} (1 - \gamma) y^{-2/\alpha}& x^{2/\alpha -1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-\gamma} s \int_0^{(E (E + x^{2/\alpha} s ))^{-1}}
(1 - v )^{-\gamma}
g(y^{-2/\alpha} (E + x^{2/\alpha} s) v) \, dv\,ds
\\ \le& C
y^{-2/\alpha} x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-\gamma} s \big( E (E + x^{2/\alpha} s ) \big)^{-1} ds
\\ \le & C E^{-1} y^{-2/\alpha}
x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1-\gamma} s \,ds,
\end{align*}
\noindent where in the second bound we used the fact that $g(w) \le C$ for $w = y^{-2/\alpha}(E + x^{2/\alpha} s)v$ (from \eqref{stabletailbounds}) and that $E > c^{-1} > 2$ (by imposing that $c < \frac{1}{2}$). To bound the integral in $s$, we use \eqref{already!} to find
\begin{align}
E^{-1} & y^{-2/\alpha}
x^{2/\alpha -1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1-\gamma} s\, ds \notag
\\
&\le C E^{-1} y^{-2/\alpha}
x^{2/\alpha -1} x^{-2/\alpha + 1}E^{-\alpha/2 - \gamma} \le
C E^{-1-\alpha/2 - \gamma} y^{-2/\alpha}.\label{2ndtrm}
\end{align}
This yields the third term in the claimed bound \eqref{gxfinal}.
We now consider the case $v > (E (E + x^{2/\alpha} s ))^{-1}$ in \eqref{diffresult1}, which gives
\begin{align*}
&(1 - \gamma)y^{-2/\alpha} x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-\gamma}s \int_{(E (E + x^{2/\alpha} s ))^{-1}}^1
(1 - v )^{-\gamma}
g(y^{-2/\alpha}(E + x^{2/\alpha} s) v) \, dv\, ds \notag
\\ &\le
C(1 - \gamma) y^{-2/\alpha} x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-\gamma}s \int_{(E (E + x^{2/\alpha} s ))^{-1}}^1
(1 - v )^{-\gamma}
(y^{-2/\alpha} (E + x^{2/\alpha} s) v)^{-1 - \alpha/2} \, dv\, ds \notag
\\
&\le
C (1 - \gamma) y x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma }s \int_{(E (E + x^{2/\alpha} s ))^{-1}}^1
(1 - v )^{-\gamma}
v^{-1 - \alpha/2} \, dv\, ds
\\& \le
Cy x^{2/\alpha -1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma}s
(E (E + x^{2/\alpha} s ))^{\alpha/2} \, ds
\\ &\le Cy x^{2/\alpha -1} E^{\alpha/2}
\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 -\gamma}s \, ds \\
& \le C y E^{ - \gamma} .
\end{align*}
In the last inequality, we used \eqref{already!}. This gives the last term in \eqref{gxfinal}.
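Several estimates in this argument use the same elementary bound on the integral in $v$; for the reader's convenience, we record it here. For any $A \in \big(0, \frac{1}{2}\big)$, splitting at $v = \frac{1}{2}$ gives
\begin{align*}
\int_A^1 (1 - v )^{-\gamma} v^{-1-\alpha/2} \, dv
&\le 2^{1+\alpha/2} \int_{1/2}^1 (1 - v)^{-\gamma} \, dv + 2^{\gamma} \int_A^{1/2} v^{-1-\alpha/2} \, dv \\
&\le C (1-\gamma)^{-1} + C A^{-\alpha/2},
\end{align*}
since $v^{-1-\alpha/2} \le 2^{1+\alpha/2}$ on $\big[\frac{1}{2}, 1\big]$ and $(1-v)^{-\gamma} \le 2^{\gamma} \le 2$ on $\big[A, \frac{1}{2}\big]$. Taking $A = \big(E (E + x^{2/\alpha} s )\big)^{-1}$ recovers the bound used above.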
We now consider \eqref{put}. We split the integral where the argument of $g'$ is $1$, which leads to the condition
\begin{equation*}
1 = y^{-2/\alpha} (E + x^{2/\alpha} s) v.
\end{equation*}
This gives
\begin{equation*}
v = y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}.
\end{equation*}
Using $|g'| \le C$ from \Cref{l:stabletailbounds}, we find
\begin{align}\label{vcontrib1}
\left| \int_0^{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}
(1 - v )^{-\gamma} v g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) \, dv \right|
&\le C \int_0^{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}
v \, dv\\ &\le C y^{4/\alpha} (E + x^{2/\alpha} s)^{-2} \notag.
\end{align}
Inserting this into \eqref{put} and using \eqref{already!} shows that the contribution to \eqref{put} from $v < y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}$, estimated in
\eqref{vcontrib1}, is bounded by
\begin{equation*}
x^{2/\alpha -1} \int_0^\infty g(s)\, s (E + x^{2/\alpha} s)^{-1 - \gamma} \, ds \le C E^{-\alpha/2 - \gamma}.
\end{equation*}
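For clarity, we note the power counting behind the previous display: inserting the bound \eqref{vcontrib1} into \eqref{put}, the powers of $y$ cancel and the powers of $E + x^{2/\alpha} s$ combine as
\begin{equation*}
y^{-4/\alpha} \cdot y^{4/\alpha} = 1, \qquad (E + x^{2/\alpha} s)^{1-\gamma} \cdot (E + x^{2/\alpha} s)^{-2} = (E + x^{2/\alpha} s)^{-1-\gamma},
\end{equation*}
after which \eqref{already!} applies to the integral in $s$.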
We are left with the contribution from $v > y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}$, which is
\begin{equation*}
y^{-4/\alpha} x^{2/\alpha-1}
\int_0^\infty g(s) (E + x^{2/\alpha} s)^{1-\gamma}s
\int_{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}^1
(1 - v )^{-\gamma} v g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) \, dv \, ds.
\end{equation*}
On this interval we use $|g'(x)| \le C x^{-2 - \alpha/2}$ from \Cref{l:stabletailbounds}. Taking absolute values, we obtain the bound
\begin{align}\notag
&y^{-4/\alpha} x^{2/\alpha-1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{1-\gamma} s \int_{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}^1
(1 - v )^{-\gamma} v \big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)^{-2 - \alpha/2} \, dv\,ds\\
&= y x^{2/\alpha-1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} s \int_{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}^1
(1 - v )^{-\gamma} v^{-1-\alpha/2} \, dv\, ds.\label{vpieces1}
\end{align}
We first consider the piece of the integral in $v$ with $v \in (1/2 ,1)$.
Using
\begin{equation*}
\int_{1/2}^1
(1 - v )^{-\gamma} v^{-1-\alpha/2} \, dv \le C( 1- \gamma)^{-1},
\end{equation*}
we obtain
\begin{align*}
&y x^{2/\alpha-1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} s \int_{1/2}^1
(1 - v )^{-\gamma} v^{-1-\alpha/2} \, dv\,ds \\
&\le C E^{-\gamma}( 1- \gamma)^{-1} y x^{2/\alpha -1} \int_0^\infty s (E + x^{2/\alpha} s)^{-1 - \alpha/2}g(s)\, ds\\
&\le C (1 - \gamma)^{-2} y E^{-\alpha/2 - 2 \gamma},
\end{align*}
where the last inequality follows from \eqref{already!}.
This corresponds to the second term in \eqref{gxfinal}.
The contribution from $v < 1/2$ in \eqref{vpieces1} is bounded by
\begin{align*}
& C y x^{2/\alpha-1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma}s \int_{y^{2/\alpha} (E + x^{2/\alpha} s)^{-1}}^{1/2}
v^{-1-\alpha/2} \, dv \, ds\\
&\le C y x^{2/\alpha-1} \int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} s
\big( y^{2/\alpha} (E + x^{2/\alpha} s)^{-1} \big) ^{-\alpha/2}\, ds\\
& = C x^{2/\alpha-1}\int_0^\infty g(s) (E + x^{2/\alpha} s)^{-1 - \gamma}s \, ds\\
&\le C( 1 - \gamma)^{-1} E^{-\alpha/2 - \gamma}.
\end{align*}
This corresponds to the first term in \eqref{gxfinal}, and completes the proof.
\end{proof}
\begin{lem}\label{l:Gy}
There exists $c \in \big( 0, \frac{1}{2} \big)$ such that for all $\alpha \in (1-c, 1)$, $\gamma\in(0,1)$, $x, y \le 1$, and $E > c^{-1}$, we have
\begin{equation}\label{Gy1}
|\partial_y G(E,x,y)| \le c^{-1} \left( (1- \gamma)^{-1}E^{-\alpha/2 - \gamma} + E^{-\gamma} \right)
\end{equation}
and
\begin{equation*}
\partial_y G(E,x,y) \ge c (1 - \gamma)^{-1} E^{-\alpha/2 - \gamma} + O(E^{-\gamma} ).
\end{equation*}
\end{lem}
\begin{proof}
Analogously to \eqref{Flimitsrewrite2}, we have
\begin{equation}\label{firsttt}
\partial_y G_\gamma(E,x,y) = - \frac{2}{\alpha y } \int_{0}^\infty
\int_0^{y^{-2/\alpha} ( E + x^{2/\alpha} s )}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} \big(tg(t)\big)'
g(s)
\, dt \, ds.
\end{equation}
We split the integral in $t$ into two parts at $t= y^{-2/\alpha}$.
We claim that the contribution from $t< y^{-2/\alpha}$ in \eqref{firsttt} satisfies
\begin{equation}\label{firstt}
\Bigg| \frac{2}{\alpha y } \int_{0}^\infty
\int_0^{y^{-2/\alpha}}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} \big(tg(t)\big)'
g(s)
\, dt \, ds \Bigg| \le C E^{-\gamma}.
\end{equation}
We integrate by parts with respect to $t$ in the integral on the left side; the boundary term at $t=0$ vanishes, and, using \Cref{l:stabletailbounds} to bound $g(y^{-2/\alpha}) \le Cy^{2/\alpha + 1}$, the boundary term at $t=y^{-2/\alpha}$ is in absolute value at most
\begin{align*}
Cy^{-1} \int_{0}^\infty
(E + x^{2/\alpha} s - 1)^{-\gamma} y^{-2/\alpha} g(y^{-2/\alpha}) g(s)
\, ds
&\le C \int_{0}^\infty
(E + x^{2/\alpha} s - 1)^{-\gamma} g(s)
\, ds\\
&\le C \int_{0}^\infty
(E- 1 )^{-\gamma} g(s)
\, ds \le C E^{-\gamma},
\end{align*}
\noindent where in the last two bounds we used the facts that $E - 1 \ge \frac{E}{2}$ (as $E > c^{-1} \ge 2$) and that $g$ is a probability density function. From \Cref{l:stabletailbounds}, we have $tg(t) \le C t^{-\alpha/2}$. Then the main term from integrating by parts satisfies the absolute value bound
\begin{align*}
\displaystyle\frac{4 \gamma}{\alpha^2} \Bigg| y^{2/\alpha-1} & \int_{0}^\infty
\int_0^{y^{-2/\alpha}}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma-1} tg(t)
g(s)
\, dt \, ds \Bigg|\\
&\le C y^{2/\alpha-1} \int_{0}^\infty
\int_0^{y^{-2/\alpha}}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma-1} t^{-\alpha/2}
g(s)
\, dt \, ds \\
& \le C y^{2/\alpha-1} y^{-2/\alpha}
\int_{0}^\infty ( E + x^{2/\alpha}s)
\int_0^{ ( E + x^{2/\alpha}s)^{-1}}
(E + x^{2/\alpha} s - ( E + x^{2/\alpha}s) w)^{-\gamma-1}\\
& \qquad \qquad \qquad \qquad \qquad \times (y^{-2/\alpha} ( E + x^{2/\alpha}s) w)^{-\alpha/2}
g(s)
\, dw \, ds,
\end{align*}
\noindent where in the second bound we changed variables $t = y^{-2/\alpha} ( E + x^{2/\alpha}s) w$. Hence,
\begin{align*}
\displaystyle\frac{4 \gamma}{\alpha^2} \Bigg| & y^{2/\alpha-1} \int_{0}^\infty
\int_0^{y^{-2/\alpha}}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma-1} tg(t)
g(s)
\, dt \, ds \Bigg| \\
& \le
C \int_{0}^\infty ( E + x^{2/\alpha}s)^{-\alpha/2 - \gamma}
\int_0^{( E + x^{2/\alpha}s)^{-1}}
(1- w)^{-\gamma-1} w^{-\alpha/2}
g(s)
\, dw \, ds
\\ & \le
C E^{-\alpha/2 - \gamma}
\int_0^{ E^{-1} }
(1- w)^{-\gamma-1} w^{-\alpha/2}
\, dw \le
C E^{-\alpha/2 - \gamma} \int_0^{E^{-1}}
w^{-\alpha/2}
\, dw \le C E^{-1 - \gamma},
\end{align*}
\noindent where in the second bound we used the fact that $g$ is a probability density. This confirms \eqref{firstt}.
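For reference, the bookkeeping in the change of variables $t = y^{-2/\alpha} ( E + x^{2/\alpha}s) w$ used above is
\begin{equation*}
dt = y^{-2/\alpha} ( E + x^{2/\alpha}s) \, dw, \qquad t^{-\alpha/2} = y \, ( E + x^{2/\alpha}s)^{-\alpha/2} w^{-\alpha/2},
\end{equation*}
so the prefactor $y^{2/\alpha - 1}$ combines with these factors to give $y^{2/\alpha - 1} \cdot y^{-2/\alpha} \cdot y = 1$, which is why no power of $y$ survives in the resulting bound.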
The contribution from $t> y^{-2/\alpha}$ in \eqref{firsttt} is
\begin{equation}\label{2t}
- \frac{2}{\alpha y } \int_{0}^\infty
\int_{y^{-2/\alpha}}^{y^{-2/\alpha} ( E + x^{2/\alpha} s )}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} \big(tg(t)\big)'
g(s)
\, dt \, ds.
\end{equation}
We bound this in absolute value using \Cref{l:stabletailbounds} by
\begin{equation*}
\frac{C}{ y } \int_{0}^\infty
\int_{y^{-2/\alpha}}^{y^{-2/\alpha} ( E + x^{2/\alpha} s )}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma} t^{-1-\alpha/2}
g(s)
\, dt
\, ds.
\end{equation*}
Setting $v = y^{2/\alpha} (E + x^{2/\alpha} s)^{-1} t$, the above integral is bounded by
\begin{align}
C \int_0^\infty &
(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma}
\int_{(E + x^{2/\alpha} s)^{-1}}^{ 1 }
(1- v)^{-\gamma} v^{-1 - \alpha/2}
g(s)
\, dv
\, ds \notag
\\
&\le
C\left((1 - \gamma)^{-1} \int_{0}^\infty
(E + x^{2/\alpha} s)^{ -\alpha/2 - \gamma}
g(s)
\, ds
+
\int_{0}^\infty
(E + x^{2/\alpha} s)^{ - \gamma}
g(s)
\, ds \right)\label{intsplit}
\\
&\le
C \left( (1- \gamma)^{-1}E^{-\alpha/2 - \gamma} + E^{-\gamma} \right).\notag
\end{align}
\noindent To deduce the first bound of \eqref{intsplit}, we split the integral in $v$ at the point $v=1/2$ and bounded each piece separately; to deduce the second, we used the fact that $g$ is a probability density function. Together with \eqref{firstt}, this completes the proof of \eqref{Gy1}.
Next, we will find a lower bound on \eqref{2t}. Observe for $t \ge C$ that by \Cref{l:stabletailbounds} we have $\big( t g(t) \big)' \le - c t^{-1 - \alpha/2 }$. Then \eqref{2t} is lower bounded by
\begin{equation*}
\frac{2c}{\alpha y } \int_{0}^\infty
\int_{y^{-2/\alpha}}^{y^{-2/\alpha} ( E + x^{2/\alpha} s )}
(E + x^{2/\alpha} s - y^{2/\alpha} t)^{-\gamma}
t^{-1 - \alpha/2}
g(s)
\, dt \, ds
\end{equation*}
for some $c > 0$. Again substituting $v =y^{2/\alpha} (E + x^{2/\alpha} s)^{-1} t$, the previous line becomes
\begin{equation}\label{lowerboundme}
\frac{2c}{\alpha} \int_0^\infty
(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma}
\int_{(E + x^{2/\alpha} s)^{-1}}^{ 1 }
(1- v)^{-\gamma} v^{-1 - \alpha/2}
g(s)
\, dv
\, ds.
\end{equation}
\noindent Since $E > c^{-1} \ge 2$, we have the lower bound
\begin{align*}
\int_{(E + x^{2/\alpha} s)^{-1}}^{ 1 }
(1- v)^{-\gamma} v^{-1 - \alpha/2}
\, dv
&\ge
\int_{1/2}^{ 1 }
(1- v)^{-\gamma} v^{-1 - \alpha/2}
\, dv \ge c (1 - \gamma)^{-1} .
\end{align*}
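Here the last inequality follows from the explicit computation (using $v^{-1-\alpha/2} \ge 1$ for $v \le 1$ and $(1/2)^{1-\gamma} \ge \frac{1}{2}$)
\begin{equation*}
\int_{1/2}^{ 1 } (1- v)^{-\gamma} v^{-1 - \alpha/2} \, dv \ge \int_{1/2}^{1} (1 - v)^{-\gamma} \, dv = \frac{(1/2)^{1-\gamma}}{1 - \gamma} \ge \frac{1}{2 (1 - \gamma)}.
\end{equation*}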
Then we see that \eqref{lowerboundme} is lower bounded by
\begin{align*}
\frac{c}{\alpha(1- \gamma)} \int_0^\infty
(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma}
g(s)
\, ds
&\ge
\frac{c}{\alpha(1- \gamma)} \int_0^1
(E + x^{2/\alpha} s)^{-\alpha/2 - \gamma}
\, ds \ge
\frac{c}{\alpha(1- \gamma)} E^{ - \alpha/2 - \gamma},
\end{align*}
\noindent where in the last two bounds we used the fact that $g(s)$ is uniformly bounded below on the compact interval $[0, 1]$ (as $g(x) > 0$ there), and that $E + x^{2/\alpha} s \le E + 1 \le 2E$ for $x \le 1$ and $s \le 1$. Together with \eqref{firstt}, this completes the proof of the lemma.
\end{proof}
\subsection{Derivatives in $E$}\label{s:dE}
In this section we establish the following three lemmas. The first bounds the derivatives in $E$ of the function $F_{\gamma}$, and the second and third bound those in $E$ of $G_{\gamma}$.
\begin{lem}\label{l:FE}
There exists a constant $c \in (0, 1)$ such that for all $\alpha \in (1-c, 1)$, $\gamma \in (0,1)$, $E > c^{-1}$, and $x, y \le 1$, we have
\begin{equation}\label{fefirst}
| \partial_E F_\gamma (E,x,y)| \le c^{-1} ( 1- \gamma)^{-1} y E^{-1 - \alpha/2 - \gamma}\end{equation}
and
\begin{equation}\label{fesecond}
\partial_E F_\gamma (E,x,y) \le - c ( 1- \gamma)^{-1} y E^{-1 - \alpha/2 - \gamma}
+ c^{-1} y E^{-1 - \alpha/2 - \gamma}.
\end{equation}
\end{lem}
\begin{proof}
We recall from \eqref{FandG} that
\begin{equation*}
F_\gamma(E, x,y) =
\int_{0}^\infty
\int_{y^{-2/\alpha} ( E + x^{2/\alpha} s )}^\infty
(- E - x^{2/\alpha} s + y^{2/\alpha} t)^{-\gamma} g(s) g(t) \, dt\, ds.
\end{equation*}
We set $w = y^{2/\alpha} (E + x^{2/\alpha} s)^{-1} t$. Then
\begin{align*}
F(E, x,y) &=
y^{-2/\alpha} \int_{0}^\infty
(E + x^{2/\alpha} s)^{1 - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} g(s) g\big( w y^{-2/\alpha} (E+ x^{2/\alpha} s) \big) \, dw\, ds.
\end{align*}
When we differentiate in $E$, there are two contributions to $\partial_E F(E,x,y)$:
\begin{equation}\label{FE1}
(1 - \gamma) y^{-2/\alpha} \int_{0}^\infty
(E + x^{2/\alpha} s)^{ - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} g(s) g\big( w y^{-2/\alpha} (E+ x^{2/\alpha} s) \big) \, dw\, ds,
\end{equation}
and
\begin{equation}\label{FE2}
y^{-2/\alpha} \int_{0}^\infty
(E + x^{2/\alpha} s)^{1 - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} g(s) g'\big( w y^{-2/\alpha} (E+ x^{2/\alpha} s) \big) w y^{-2/\alpha} \, dw\, ds.
\end{equation}
We first bound \eqref{FE1}. Using the facts that $ w y^{-2/\alpha} (E+ x^{2/\alpha} s) > 1$ for $w \ge 1$ (which holds by the assumptions that $y\le 1$ and $E > c^{-1} \ge 1$) and that $g(x) \le C x^{-1 - \alpha/2}$ for $x \ge 1$ (by \Cref{l:stabletailbounds}), we see that \eqref{FE1} is bounded in absolute value by
\begin{align}
\label{estimatege0}
\begin{aligned}
&C(1-\gamma) y^{-2/\alpha} \int_{0}^\infty
(E + x^{2/\alpha} s)^{ - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} g(s) \big( w y^{-2/\alpha} (E+ x^{2/\alpha} s) \big)^{-1 - \alpha/2} \, dw\, ds\\
&\le C(1-\gamma) y \int_{0}^\infty
(E + x^{2/\alpha} s)^{ -1 - \alpha/2 - \gamma}
g(s)
\int_1^\infty
(w- 1 )^{-\gamma} w^{-1 - \alpha/2} \, dw\, ds\\
&\le Cy \int_{0}^\infty
(E + x^{2/\alpha} s)^{ -1 - \alpha/2 - \gamma}
g(s)\, ds\le C y E^{-1 - \alpha/2 - \gamma},
\end{aligned}
\end{align}
\noindent where in the last estimate we used the facts that $g$ is a probability density function and that $(E + x^{2/\alpha} s)^{-1-\alpha/2-\gamma} \le E^{-1-\alpha/2-\gamma}$.
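Here and in \eqref{estimatege2} below, the integral in $w$ is controlled by splitting at $w = 2$:
\begin{equation*}
\int_1^\infty (w - 1)^{-\gamma} w^{-1-\alpha/2} \, dw \le \int_1^2 (w - 1)^{-\gamma} \, dw + \int_2^\infty w^{-1-\alpha/2} \, dw = \frac{1}{1-\gamma} + \frac{2^{1 - \alpha/2}}{\alpha} \le C (1 - \gamma)^{-1},
\end{equation*}
where the final inequality uses $(1-\gamma)^{-1} \ge 1$.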
Next, we bound \eqref{FE2} in absolute value using the fact (from \Cref{l:stabletailbounds}) that $|g'(x)| \le C x^{-2 - \alpha/2}$ for $x \ge 1$. This yields that \eqref{FE2} is bounded by
\begin{align}
\label{estimatege2}
\begin{aligned}
C y^{-4 /\alpha} \int_{0}^\infty &(E + x^{2/\alpha} s)^{1 - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} w g(s) ( w y^{-2/\alpha} (E+ x^{2/\alpha} s) )^{-2 - \alpha/2} \, dw\, ds\\
& =
C y \int_{0}^\infty
(E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} g(s)
\int_1^\infty
(w- 1 )^{-\gamma} w^{-1 - \alpha/2} \, dw\, ds\\
& \le C (1 - \gamma)^{-1} y\int_{0}^\infty
(E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s) \,ds \le C(1 - \gamma)^{-1} y E^{-1 - \alpha/2 - \gamma},
\end{aligned}
\end{align}
\noindent where the last estimates again follow from the facts that $g$ is a probability density function and that $(E + x^{2/\alpha} s)^{-1-\alpha/2-\gamma} \le E^{-1-\alpha/2-\gamma}$.
Summing \eqref{estimatege0} and \eqref{estimatege2} verifies \eqref{fefirst}.
To prove \eqref{fesecond}, we first recall (from \Cref{l:stabletailbounds}) that $g'(x) \le -c x^{-2 - \alpha/2}$ for $x \ge C$.
We then see that \eqref{FE2} is negative and upper bounded by
\begin{align*}
-c & y^{-4 /\alpha} \int_{0}^\infty
(E + x^{2/\alpha} s)^{1 - \gamma}
\int_1^\infty
(w- 1 )^{-\gamma} w g(s) ( w y^{-2/\alpha} (E+ x^{2/\alpha} s) )^{-2 - \alpha/2} \, dw\, ds\\
&=
- c y \int_{0}^\infty
(E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} g(s)
\int_1^\infty
(w- 1 )^{-\gamma} w^{-1 - \alpha/2} \, dw\, ds\\
&\le
- c(1- \gamma)^{-1} y \int_{0}^\infty
(E + x^{2/\alpha} s)^{-1 - \alpha/2 - \gamma} g(s)
\, ds \le - c (1 - \gamma)^{-1} y E^{-1 - \alpha/2 - \gamma},
\end{align*}
\noindent following the same reasoning as in \eqref{estimatege2}. Summing with our estimate \eqref{estimatege0} for \eqref{FE1} completes the proof.
\end{proof}
\begin{lem}\label{l:GE}
There exists a constant $c \in \big(0, \frac{1}{2} \big)$ such that for all $\alpha \in (1-c, 1)$, $\gamma\in(0,1)$, $E > c^{-1}$, and $y \le 1$, we have
\begin{equation*}
\left| \partial_E G(E,x,y) \right| \le c^{-1} E^{-1 - \gamma}\big(1 + (1-\gamma)^{-1} y E^{-\alpha/2}\big) .
\end{equation*}
\end{lem}
\begin{proof}
From \eqref{diffme2}, we have
\begin{equation*}
G(E,x,y) =
y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^1
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) g(s) \, dv \, ds.
\end{equation*}
There are two contributions from differentiating in $E$. The first is
\begin{equation}\label{GE1}
( 1- \gamma) y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^1
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)g(s) \, dv\, ds.
\end{equation}
The second is
\begin{equation}\label{GE2}
y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^1
(1 - v )^{-\gamma}
g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) v g(s) \, dv\, ds.
\end{equation}
We begin by bounding \eqref{GE1} in absolute value. We divide the interval of integration at $v = (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$. This results in two integrals. The contribution from $v < (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$ is
\begin{flalign}
\label{Gprev1}
\begin{aligned}
( 1- \gamma) & y^{-2/\alpha} \int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma} g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) g(s) \, dv\, ds\\
&\le C ( 1- \gamma) y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma} g(s) \, dv\, ds \\
&\le C( 1- \gamma) \int_0^\infty (E + x^{2/\alpha} s)^{-1 -\gamma} g(s) \, ds \le C( 1- \gamma) E^{ -1 - \gamma}.
\end{aligned}
\end{flalign}
\noindent To deduce the first inequality, we used \Cref{l:stabletailbounds} to bound $g \big( y^{-2/\alpha} v (E + x^{2/\alpha} s)\big) \le C$; to deduce the second, we used the fact that $y^{2/\alpha} (E+x^{2/\alpha} s)^{-1} \le E^{-1} \le \frac{1}{2}$ (as $y \le 1$ and $E \ge c^{-1} \ge 2$); and to deduce the third we used the facts that $(E + x^{2/\alpha} s)^{-1-\gamma} \le E^{-1-\gamma}$ and that $g$ is a probability density function.
By similar reasoning, the contribution from $v > (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$ in \eqref{GE1} is
\begin{align*}
( 1- &\gamma)y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma} g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) g(s) \, dv\, ds \\
&\le
C( 1- \gamma) y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} g(s) \\
& \qquad \qquad \qquad \qquad \times \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma} \big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)^{-1 - \alpha/2} \, dv\, ds \\
&\le
C( 1- \gamma) y\int_0^\infty (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma} v^{-1 - \alpha/2} \, dv\, ds,
\end{align*}
\noindent where in the first inequality we used \Cref{l:stabletailbounds} to now bound $g (w) \le C w^{-1-\alpha/2}$ at $w = y^{-2/\alpha} v (E + x^{2/\alpha} s)$. Restricting the integral to $v > 1/2$ gives
\begin{align}
\label{integraly2}
\begin{aligned}
C ( 1- \gamma) & y\int_0^\infty (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s) \int_{1/2}^1
(1 - v )^{-\gamma} v^{-1 - \alpha/2} \, dv\, ds\\
&\le
C( 1- \gamma) y\int_0^\infty E ^{-1 -\alpha/2 - \gamma} g(s) (1 - \gamma)^{-1} \, ds =Cy E^{-1 - \alpha/2 - \gamma}.
\end{aligned}
\end{align}
\noindent Integrating over the complementary region $v < 1/2$ yields
\begin{align}
\label{integraly3}
\begin{aligned}
C (1 & - \gamma) y\int_0^\infty (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^{1/2}
(1 - v )^{-\gamma} v^{-1 - \alpha/2} \, dv\, ds\\
&\le
C y\int_0^\infty (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^{1/2}v^{-1 - \alpha/2} \, dv\, ds\\
&\le
C y\int_0^\infty (E + x^{2/\alpha} s)^{-1 -\alpha/2 - \gamma} g(s)
\big( y^{2/\alpha} (E + x^{2/\alpha} s)^{-1} \big)^{-\alpha/2} ds \le
C \int_0^\infty E^{-1 - \gamma} g(s) ds = C E^{ - 1 - \gamma}.
\end{aligned}
\end{align}
\noindent Summing \eqref{Gprev1}, \eqref{integraly2}, and \eqref{integraly3} yields
\begin{flalign*}
\Bigg| (1-\gamma) y^{-2/\alpha} \displaystyle\int_0^{\infty} (E + x^{2/\alpha} s)^{-\gamma} \displaystyle\int_0^1 & (1 - v)^{-\gamma} g \big( y^{-2/\alpha} (E + x^{2/\alpha} s) v \big) g(s) dv ds \Bigg| \\
& \le C E^{-1-\gamma} + Cy E^{-1-\alpha/2-\gamma},
\end{flalign*}
which gives a bound for \eqref{GE1}.
We next examine \eqref{GE2} by again splitting the interval of integration at $v = (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$.
The first piece is
\begin{equation}\label{gefirst}
y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma}
g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) v g(s) \, dv\, ds.
\end{equation}
Using $| g'(x) | \le C$ from \Cref{l:stabletailbounds}, and $(E + x^{2/\alpha} s)^{-1} y^{2/\alpha} \le 1/2$, we bound this in absolute value by
\begin{align*}
&C y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma}
g(s) \int_0^{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma} v \, dv\, ds\\
&\le C y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma}
g(s) \int_0^{ (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}} v \, dv\, ds\\
&\le C y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma}
g(s) ( (E + x^{2/\alpha} s)^{-1} y^{2/\alpha})^2 \, ds\\
&\le C \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma}
g(s) \, ds\\
&\le C E^{-1 - \gamma}.
\end{align*}
The second piece of \eqref{GE2} satisfies
\begin{align*}
&\left| y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma}
g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) v g(s) \, dv\, ds \right| \\
&\le Cy^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma}
\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)^{-2 - \alpha/2} v g(s) \, dv\, ds\notag \\
&\le
Cy \int_0^\infty (E + x^{2/\alpha} s)^{- 1-\alpha/2 - \gamma}g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^1
(1 - v )^{-\gamma} v^{-1 - \alpha/2} \, dv\, ds.\notag
\end{align*}
The contribution from $v \in [1/2 ,1 ]$ is bounded by $C y (1- \gamma)^{-1}E^{-1- \alpha/2 - \gamma}$, after integrating in $s$. Further, for $v < 1/2$, we have
\begin{align*}
&y \int_0^\infty (E + x^{2/\alpha} s)^{- 1-\alpha/2 - \gamma}g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^{1/2}
(1 - v )^{-\gamma} v^{-1 - \alpha/2} \, dv\, ds\\
&\le
C y \int_0^\infty (E + x^{2/\alpha} s)^{- 1-\alpha/2 - \gamma}g(s) \int_{(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}^{1/2} v^{-1 - \alpha/2} \, dv\, ds\\
&\le
C y \int_0^\infty (E + x^{2/\alpha} s)^{- 1-\alpha/2 - \gamma}g(s) \big((E + x^{2/\alpha} s)^{-1} y^{2/\alpha} \big)^{-\alpha/2} \, ds\\
&\le
C \int_0^\infty (E + x^{2/\alpha} s)^{- 1-\gamma }g(s)\, ds\\
&\le C E^{-1 - \gamma}.
\end{align*}
This finishes the bound for \eqref{GE2} and completes the proof.
\end{proof}
\begin{lem}\label{l:GE2}
There exists $c>0$ such that for all $\alpha \in (1-c, 1)$, $E > c^{-1}$, $\gamma\in(0,1)$, and $x, y \le 1/2$, we have
\begin{equation*}
\partial_E G_\gamma(E,x,y) \le - c \gamma E^{-1-\gamma} + c^{-1} E^{-1 - \gamma - \alpha/4} .\end{equation*}
\end{lem}
\begin{proof}
From \eqref{diffme2}, we have
\begin{equation*}
G(E,x,y) =
y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^1
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) g(s) \, dv \, ds.
\end{equation*}
There are two contributions from differentiating in $E$. The first is
\begin{equation}\label{GE11}
( 1- \gamma) y^{-2/\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^1
(1 - v )^{-\gamma}
g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)g(s) \, dv\, ds.
\end{equation}
The second is
\begin{equation}\label{GE21}
y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^1
(1 - v )^{-\gamma}
g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) v g(s) \, dv\, ds.
\end{equation}
Making the change of variables $t = y^{-2/\alpha} (E + x^{2/\alpha} s ) v$ and summing \eqref{GE11} and \eqref{GE21}, we obtain
\begin{align}
\partial_E G(E,x,y) =&
( 1- \gamma) \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
g(t)g(s) \, dt\, ds\notag
\\
&+
\int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1}t \big)^{-\gamma}
g'(t) t g(s) \, dt\, ds\notag \\
=&
- \gamma \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
g(t)g(s) \, dt\, ds \label{iron1} \\
& + \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
\big(g(t)t \big)' g(s) \, dt\, ds.\label{iron2}
\end{align}
Considering \eqref{iron1}, we have from \Cref{l:stabletailbounds} that
\begin{align}
- &\gamma \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}g(t)g(s) \, dt\, ds\notag \\
&\le
- \gamma \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
g(t)g(s) \, dt\, ds\notag \\
&\le - c \gamma E^{-1-\gamma} \int_0^{y^{-2/\alpha} E}
g(t) \, dt \le - \frac{c \gamma }{2} E^{-1-\gamma},\label{mercury4}
\end{align}
if $E >C_0$, for some constants $C_0 > 1$ and $c>0$.
For the term \eqref{iron2}, we divide the integral in $t$ into the sum of integrals over $[0,M]$ and $\big[M, y^{-2/\alpha} (E + x^{2/\alpha} s )\big]$, where $M$ is a parameter to be chosen later. Suppose that $M \in (C_1, E)$, where $C_1$ is the constant given by \Cref{l:stabletailbounds}. For the integral over $[0,M]$, we set
\begin{equation*}
u = \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}, \qquad w = g(t) t,
\end{equation*}
and use integration by parts in the form
\begin{equation*}
\int u\, dw = uw - \int w\, du.
\end{equation*}
Using the previous line in \eqref{iron2}, and noting that the boundary term at $0$ vanishes, we get
\begin{align}\label{mercury3}
\int_0^{M}&
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
\big(g(t)t \big)' \, dt\\
=& \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} M \big)^{-\gamma} \cdot g(M) M\notag \\
& - \gamma \int_0^{M} \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-1-\gamma} y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} g(t) t \, dt\notag\\
\le& \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} M \big)^{-\gamma} \cdot g(M) M.\notag
\end{align}
Under the assumptions on $M$ and $y$, and using \Cref{l:stabletailbounds}, there exists $C>0$ such that
\begin{equation}\label{mercury1}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} M \big)^{-\gamma} \cdot g(M) M \le C M^{-\alpha/2}.
\end{equation}
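Indeed, since $y \le \frac{1}{2}$ and $\alpha < 1$ give $y^{2/\alpha} \le \frac{1}{4}$, while $M \le E \le E + x^{2/\alpha} s$, we have
\begin{equation*}
y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} M \le \frac{1}{4}, \qquad \text{so that} \qquad \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} M \big)^{-\gamma} \le \Big(\frac{3}{4}\Big)^{-\gamma} \le \frac{4}{3},
\end{equation*}
and $g(M) M \le C M^{-\alpha/2}$ by \Cref{l:stabletailbounds} (as $M > C_1$).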
\begin{comment}
and
\begin{align}
- &\int_0^{M} \big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-1-\gamma} y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} g(t) t \, dt \\
&\le
- y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} \int_0^{M} g(t) t \, dt \le - c y^{2/\alpha} (E + x^{2/\alpha} s )^{-1}.
\label{mercury2}
\end{align}
\end{comment}
Then inserting \eqref{mercury1} into \eqref{mercury3}, we see that
\begin{align}\notag
&\int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} \int_0^{M}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
\big(g(t)t \big)' g(s) \, dt\, ds\\
&\le C M^{-\alpha/2} \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma} g(s)\, ds
\le C M^{-\alpha/2} E^{-1 - \gamma}.\label{mercury5}
\end{align}
Next, we consider the integral in $t$ over $\big[M, y^{-2/\alpha} (E + x^{2/\alpha} s )\big]$ in \eqref{iron2}. Again using \Cref{l:stabletailbounds} to bound $\big( g(t) t \big)'$, and setting $t = y^{-2/\alpha} (E + x^{2/\alpha} s ) v$, we obtain
\begin{align*}
&\left|\int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma}\int_M^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
\big(g(t)t \big)' g(s) \, dt\, ds\right|\\
& \le
C \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma}\int_M^{y^{-2/\alpha} (E + x^{2/\alpha} s )}
\big(1 - y^{2/\alpha} (E + x^{2/\alpha} s )^{-1} t \big)^{-\gamma}
t^{-1 - \alpha/2} g(s) \, dt\, ds\\
&=
C y \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma - \alpha/2}\int_{My^{2/\alpha} (E + x^{2/\alpha} s )^{-1}}^{1}
(1 - v )^{-\gamma}
v^{-1 - \alpha/2} g(s) \, dv\, ds\\
&\le
C y \int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma - \alpha/2} \big(M^{-\alpha/2} y^{-1} (E + x^{2/\alpha} s )^{\alpha/2} \big)g(s)\, ds
\le C M^{-\alpha/2} E^{-1 - \gamma}.
\end{align*}
We complete the proof by taking $M= E^{1/2}$, and combining the previous line with \eqref{mercury4} and \eqref{mercury5}.
\end{proof}
\begin{comment}
\begin{lem}\label{l:GE2}
There exists $c>0$ such that for all $\alpha \in (1-c, 1)$, $E > c^{-1}$, $\gamma\in(0,1)$, and $x, y \le 1$, we have
\begin{equation*}
\partial_E G_\gamma(E,x,y) \le - c E^{-1 - \gamma} + c^{-1} \left( E^{-1 - \gamma - \alpha/8 } + ( 1- \gamma) E^{-1 - \gamma} + y E^{-1 - \alpha/2 - \gamma} \right) .\end{equation*}
\end{lem}
\begin{proof}
In the previous proof, we showed that \eqref{GE1} is $ ( 1- \gamma) O( E^{-1 - \gamma} ) + O(y E^{-1 - \alpha/2 - \gamma})$. It remains to analyze \eqref{GE2}.
We examine \eqref{GE2} by again splitting the interval of integration at $v = M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$, with $M=E^{1/4}$.
The contribution from $v < M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$ is
\begin{align
&y^{-4 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{1-\gamma} \int_0^{M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma}
g'\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big) v g(s) \, dv\, ds \\
&=
y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
v (1 - v )^{-\gamma}
\big[g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)\big]' g(s) \, dv\, ds.\notag
\end{align}
We integrate by parts in $v$ with
\begin{equation*}
u_1 = g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big), \qquad u_2 = (1-v)^{-\gamma} v,
\end{equation*}
so that the integral in $v$ is $\int u_2 \, du_1$.
The boundary term $u_1 u_2$ vanishes at $0$, and the boundary term evaluated at the upper limit $M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$ is bounded by
\begin{align*}
C My^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma}
y^{2/\alpha}
g(M) g(s) \, ds
&\le
C M\int_0^\infty (E + x^{2/\alpha} s)^{-1-\gamma}
g(M) g(s) \, ds\\ &\le C M^{ - \alpha/2} E^{-1 - \gamma} = C E^{ -1 - \gamma - \alpha/8}.
\end{align*}
The main term resulting from integration by parts, $-\int u_1 \, du_2$, is
\begin{equation}\label{e:twoterms}
- y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
\big(v (1 - v )^{-\gamma}\big)'
\big[g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)\big] g(s) \, dv\, ds.
\end{equation}
Expanding $\big(v (1 - v )^{-\gamma}\big)'$ in the previous line using the product rule gives two terms. The first is
\begin{equation*}
- y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
(1 - v )^{-\gamma}
\big[g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)\big] g(s) \, dv\, ds,
\end{equation*}
which is negative.
It is bounded above by
\begin{equation*}
- \frac{y^{-2 /\alpha}}{2}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
\big[g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)\big] g(s) \, dv\, ds,
\end{equation*}
We set $w = y^{-2/\alpha} (E + x^{2/\alpha} s) v$ and use $M \ge 1$ to reach
\begin{align*}
- \frac{1}{2} \int_0^\infty (E + x^{2/\alpha} s)^{-1 -\gamma} \int_0^{M}
g(w) g(s) \, dw \, ds
& \le - c \int_0^\infty (E + x^{2/\alpha} s)^{-1 -\gamma} g(s) \, ds\\
&\le - c \int_0^1 (E + x^{2/\alpha} s)^{-1 -\gamma} \,ds\\
&\le -c (E + x^{2/\alpha})^{-1 - \gamma}\\
&\le - c E^{-1-\gamma}.
\end{align*}
The second term from \eqref{e:twoterms} is, after using $g(x) \le 1$,
\begin{align*}\notag
&\gamma y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} \int_0^{M (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
v (1 - v )^{-\gamma- 1}
\big[g\big(y^{-2/\alpha} (E + x^{2/\alpha} s) v\big)\big] g(s) \, dv\, ds\\
&\le C y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma}g(s) \int_0^{M (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}}
v
\, dv\, ds,\\
&\le C y^{-2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-\gamma} g(s) \big(M (E + x^{2/\alpha} s)^{-1} y^{2/\alpha}\big)^2
\, ds\\
&\le C M^2 y^{2 /\alpha}\int_0^\infty (E + x^{2/\alpha} s)^{-2-\gamma} g(s) \, ds\\
&\le C M^2 y^{2 /\alpha} E^{-2 - \gamma}\\
&\le C y E^{-3/2 - \gamma }.
\end{align*}
To get the last term in the claimed bound, we use $E^{-3/2 - \gamma } \le E^{ -1 - \alpha/2 - \gamma}$.
Finally, the contribution from $v > M(E + x^{2/\alpha} s)^{-1} y^{2/\alpha}$ in $\eqref{GE2}$ will be negative if $E>C$, since $g'$ will be negative in this regime, by \eqref{gprimenegative}. This completes the proof.
\end{proof}
\end{comment}
\subsection{Derivatives of $a(E)$ and $b(E)$}\label{s:dAB}
\begin{lem}\label{uniqueness}
There exists a constant $c \in (0, 1)$ such that, for all $\alpha \in (1-c, 1)$ and $E > c^{-1}$, the function $E \mapsto \big(a(E), b(E)\big)$ is differentiable.
\end{lem}
\begin{proof}
Consider the function $H(E,x, y)$ defined by
\begin{equation*}
H(E,x,y) = \left( F_{\alpha/2}(E,x,y) - x, G_{\alpha/2}(E,x,y) - y \right).
\end{equation*}
By \Cref{r:abfixedpointeqn}, we have $H\big(E, a(E), b(E)\big) = 0$ for all $E > 0$.
The Jacobian matrix of $H$ is given by
\begin{align*}
&\begin{bmatrix}
\partial_E (F(E,x,y) - x) & \partial_x (F(E,x,y) - x) & \partial_y (F(E,x,y) - x) \\
\partial_E (G(E,x,y) - y) &\partial_x (G(E,x,y) - y) & \partial_y (G(E,x,y) - y)
\end{bmatrix}\\
&=\begin{bmatrix}
\partial_E F(E,x,y) & \partial_x F(E,x,y) - 1 & \partial_y F(E,x,y) \\
\partial_E G(E,x,y) &\partial_x G(E,x,y) & \partial_y G(E,x,y) - 1
\end{bmatrix}.
\end{align*}
Using \Cref{l:abasymptotic}, \Cref{l:Fx}, \Cref{l:Fy}, \Cref{l:Gx}, and \Cref{l:Gy}, there exists $c>0$ such that the right $2\times 2$ submatrix evaluated at $(E, x, y) = (E, a(E), b(E))$ can be written as
\begin{equation*}
\begin{bmatrix}
O(E^{-1/9}) - 1 & O(E^{-1/9}) \\
O(E^{-1/9}) & O(E^{-1/9}) - 1
\end{bmatrix}
\end{equation*}
for $\alpha \in (1-c, 1)$.
Hence this matrix is invertible for $E \ge c^{-1}$, after possibly decreasing $c$. The conclusion now follows from the implicit function theorem.
\end{proof}
\begin{lem}\label{l:abE}
There exists $c>0$ such that for all $\alpha \in (1-c, 1)$ and $E > c^{-1}$,
\begin{equation}\label{Ederiv1}
\big|a'(E)\big| \le c^{-1} E^{- 1 - 3\alpha/2},\qquad \big|b'(E)\big| \le c^{-1} E^{- 1 - \alpha/2}.
\end{equation}
\end{lem}
\begin{proof}
Using $a(E) = F_{\alpha/2}\big(E, a(E), b(E)\big)$, we have
\begin{align*}
a'(E) &= \partial_E F_{\alpha/2} \big(E,a(E), b(E)\big)
\\ &+ \partial_x F_{\alpha/2}\big(E, a(E), b(E)\big) a'(E)
\\ &+ \partial_y F_{\alpha/2}\big(E, a(E), b(E)\big) b'(E).
\end{align*}
Using \Cref{l:abasymptotic}, \Cref{l:Fx}, \Cref{l:Fy}, and \Cref{l:FE}, the previous line yields
\begin{equation}\label{peach1}
\left(1 + O(E^{- 1 - 3\alpha/2}) \right) a'(E)
+ O( E^{-\alpha}) b'(E)
= O(E^{- 1 - 3\alpha/2}).
\end{equation}
Similarly expanding $b'(E) = \partial_E \Big(G_{\alpha/2} \big(E,a(E), b(E)\big) \Big) $ using the chain rule and applying \Cref{l:abasymptotic}, \Cref{l:Gx}, \Cref{l:Gy}, and \Cref{l:GE}, we obtain
\begin{equation}\label{peach2}
O(E^{-\alpha}) a'(E) + \left(1 + O(E^{-\alpha/2}) \right) b'(E) = O (E^{-1 - \alpha/2}).
\end{equation}
The equations \eqref{peach1} and \eqref{peach2} can be written as the system
\begin{equation*}
\begin{bmatrix}
1 + O(E^{- 1 - 3\alpha/2}) & O( E^{-\alpha})\\
O(E^{-\alpha}) & 1 + O(E^{-\alpha/2})
\end{bmatrix}
\begin{bmatrix}
a'(E) \\
b'(E)
\end{bmatrix}
=
\begin{bmatrix}
O(E^{- 1 - 3\alpha/2}) \\
O (E^{-1 - \alpha/2})
\end{bmatrix}.
\end{equation*}
We invert this system to obtain
\begin{align*}
\begin{bmatrix}
a'(E) \\
b'(E)
\end{bmatrix}
&=
\frac{1}{ 1 + O(E^{-\alpha/2})}
\begin{bmatrix}
1 + O(E^{-\alpha/2}) & O( E^{-\alpha})\\
O(E^{-\alpha}) & 1 + O(E^{- 1 - 3\alpha/2})
\end{bmatrix}
\begin{bmatrix}
O(E^{- 1 - 3\alpha/2}) \\
O (E^{-1 - \alpha/2})
\end{bmatrix}\\
&=\begin{bmatrix}
O(E^{- 1 - 3\alpha/2}) \\
O (E^{-1 - \alpha/2})
\end{bmatrix}.
\end{align*}
This shows \eqref{Ederiv1}.
\end{proof}
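As a routine check of the inversion above (not needed for the argument, but recorded for convenience), the determinant of the coefficient matrix is

```latex
\det \begin{bmatrix}
1 + O(E^{- 1 - 3\alpha/2}) & O( E^{-\alpha})\\
O(E^{-\alpha}) & 1 + O(E^{-\alpha/2})
\end{bmatrix}
= 1 + O(E^{-\alpha/2}) - O(E^{-2\alpha})
= 1 + O(E^{-\alpha/2}),
```

so Cramer's rule produces the displayed inverse, with the adjugate swapping the diagonal entries and negating the off-diagonal ones (the signs being absorbed into the $O$ notation). Multiplying out then gives $a'(E) = O(E^{-1-3\alpha/2}) + O(E^{-\alpha}) \cdot O(E^{-1-\alpha/2}) = O(E^{-1-3\alpha/2})$, and similarly for $b'(E)$.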
We next show that $b'(E)$ is negative for sufficiently large $E$.
\begin{lem}\label{l:bprimenegative}
There exists $c>0$ such that for $\alpha \in (1-c, 1)$,
the following holds. For every $D>1$, there exists $C(D) >1$ such that
\begin{equation*}
b'(E) \le 0
\end{equation*}
for all $E$ such that $D^{-1}(1 - \alpha)^{-1} \le E \le D(1-\alpha)^{-1}$ and $E \ge C(D)$.
\end{lem}
\begin{proof}
Differentiating $b(E) = G_{\alpha/2}\big(E, a(E), b(E)\big)$, where the equality follows from \Cref{r:abfixedpointeqn}, we have
\begin{align}
b'(E) &= \partial_E G_{\alpha/2} \big(E,a(E), b(E)\big) \label{dassum}
\\ &+ \partial_x G_{\alpha/2}\big(E, a(E), b(E)\big) a'(E)\notag\\
&+ \partial_y G_{\alpha/2}\big(E, a(E), b(E)\big) b'(E).\notag
\end{align}
Using \Cref{l:GE2}, \eqref{Fgammabound} to bound $b(E)= G_{\alpha/2}\big(E, a(E), b(E)\big)$, and the assumption on $E$, we get
\begin{equation}\label{ledt}
\partial_E G_{\alpha/2} \big(E,a(E), b(E)\big)
\le - c E^{ -1 - \alpha/2}.
\end{equation}
Further using
\eqref{Fgammabound}, \Cref{l:Gx}, and \Cref{l:Gy}, we obtain
\begin{align*}
\left| \partial_x G_{\alpha/2}\big(E, a(E), b(E)\big) a'(E)\right|
\le C E^{-\alpha} \cdot E^{-1 - 3\alpha/2},
\end{align*}
and
\begin{equation*}
\left|\partial_y G_{\alpha/2}\big(E, a(E), b(E)\big) b'(E)\right|
\le C E^{-\alpha/2} \cdot E^{-1 - \alpha/2}.
\end{equation*}
Combining these bounds completes the proof after choosing $C(D)$ sufficiently large and using $E > C(D)$, since \eqref{ledt} represents the leading order term in the sum \eqref{dassum} for $\alpha > 3/4$.
\end{proof}
\subsection{Derivatives of Fractional Powers of the Resolvent}\label{s:dtildeAB}
We begin by defining functions $\tilde a (E)$ and $\tilde b (E)$.
\begin{definition}
Let $S$ and $T$ be nonnegative $\alpha/2$-stable random variables.
We define
\begin{align}\label{abtildedef}
\tilde a(E) &= \mathbb{E} \left[ \left( E + a(E)^{2/\alpha} S - b(E)^{2/\alpha} T \right)_-^{-\alpha} \right],\\
\tilde b(E) &= \mathbb{E} \left[ \left( E + a(E)^{2/\alpha} S - b(E)^{2/\alpha} T \right)_+^{-\alpha} \right].\notag
\end{align}
\end{definition}
We remark that the definitions of $F_\gamma$ and $G_\gamma$ in \eqref{FandG} imply that
\begin{equation}\label{tildedefs}
\tilde a(E) = F_{\alpha}(E, a(E), b(E)), \qquad \tilde b(E) = G_{\alpha}(E, a(E), b(E)).
\end{equation}
\noindent We now derive bounds on the derivatives of $\tilde a(E)$ and $\tilde b(E)$.
\begin{lem}\label{l:tildeaE}
There exists $c>0$ such that for all $\alpha \in (1-c, 1)$,
the following holds.
For every $D>1$, there exists $c_1(D) >0$ such that
\begin{equation*}
\tilde a'(E) \le - c_1 E^{-2\alpha}
\end{equation*}
for all $E$ such that $D^{-1}(1 - \alpha)^{-1} \le E \le D(1-\alpha)^{-1}$ and $E \ge c_1^{-1}$.
\end{lem}
\begin{proof}
We have
$\tilde a(E) = F_{\alpha}(E, a(E), b(E))
$ from \eqref{tildedefs}. Then from the chain rule, we obtain
\begin{align}
\tilde a'(E) &= \partial_E F_{\alpha} (E,a(E), b(E)) \label{onion1}
\\ &+ \partial_x F_{\alpha}(E, a(E), b(E)) a'(E) \notag
\\ &+ \partial_y F_{\alpha}(E, a(E), b(E)) b'(E).\notag
\end{align}
Using \eqref{fesecond}, $D (1 - \alpha)^{-1} \ge E $, and \eqref{Fgammabound} to bound $b(E)= G_{\alpha/2}\big(E, a(E), b(E)\big)$, we have
\begin{align}
\partial_E F_\alpha \big(E, a(E), b(E)\big) &\le -c ( 1- \alpha)^{-1} b(E) E^{-1 -3 \alpha/2} + C b(E) E^{-1 - 3 \alpha/2} \notag \\
&\le - c E^{-2 \alpha} + C E^{-1 - 2\alpha}\label{onion2}
\end{align}
for some constants $c(D) > 0$ and $C(D)>1$.
Further, using \Cref{l:Fx} and \Cref{l:abE}, we have
\begin{equation}\label{onion3}
|\partial_x F_{\alpha}(E, a(E), b(E)) a'(E) |
\le C (1 - \alpha)^{-1} E^{-1 - 3\alpha/2} \cdot E^{-1 - 3\alpha/2}.
\end{equation}
Finally, we observe that
\begin{equation}\label{onion4}
\partial_y F_{\alpha}(E, a(E), b(E)) b'(E) \le 0
\end{equation}
as a consequence of \eqref{Fyispositive} and \Cref{l:bprimenegative}.
Combining \eqref{onion1}, \eqref{onion2}, \eqref{onion3}, and \eqref{onion4} completes the proof.
\end{proof}
\begin{lem}\label{l:tildebE}
There exists $c>0$ such that for all $\alpha \in (1-c, 1)$,
the following holds.
For every $D>1$, there exists $c_1(D) >0$ such that
\begin{equation*}
\tilde b'(E) \le - c_1 E^{-3/2 - \alpha/2}
\end{equation*}
for all $E$ such that $D^{-1}(1 - \alpha)^{-1} \le E \le D(1-\alpha)^{-1}$ and $E \ge c_1^{-1}$.
\end{lem}
\begin{proof}
Using \eqref{tildedefs}, we have
$
\tilde b(E) = G_{\alpha}(E, a(E), b(E))$.
Then the chain rule gives
\begin{align}
\tilde b'(E) &= \partial_E G_{\alpha} (E,a(E), b(E)) \label{starfruit3}
\\ &+ \partial_x G_{\alpha}(E, a(E), b(E)) a'(E) \notag
\\ &+ \partial_y G_{\alpha}(E, a(E), b(E)) b'(E).\notag
\end{align}
Using the assumption on $E$ and
\eqref{Fgammabound} to bound $b(E)= G_{\alpha/2}\big(E, a(E), b(E)\big)$, and \Cref{l:GE2}, we have
\begin{align}
\partial_E G_{\alpha}\big(E, a(E),b(E)\big) &\le - c E^{-1 - \alpha} + C E^{-1 - \alpha - \alpha/4 }, \label{starfruit1}
\end{align}
and from \Cref{l:Gx} and \Cref{l:abE}, we have
\begin{align}
|\partial_x G_{\alpha}(E, a(E), b(E)) a'(E) |
&\le C \left( E^{1 - 3\alpha/2} + E^{2 - 3\alpha} + E^{-3\alpha/2} \right) \cdot E^{-1 - 3\alpha/2}\le CE^{-3 \alpha}.\label{starfruit2}
\end{align}
Finally, from \Cref{l:Gy}, we have
\begin{equation*}
\partial_y G_{\alpha}(E,a(E), b(E)) \ge c ( 1 - \alpha)^{-1} E^{-\alpha -1/2 } + O(E^{ -1/2 - \alpha/2}) \ge 0,
\end{equation*}
and from \Cref{l:bprimenegative} we have $b'(E) \le 0$, which implies $\partial_y G_{\alpha}(E, a(E), b(E)) b'(E) \le 0$. Combining this inequality with \eqref{starfruit3}, \eqref{starfruit1}, and \eqref{starfruit2} completes the proof.
\end{proof}
\subsection{Uniqueness of the Mobility Edge}\label{s:proveuniqueness}
We are now ready to prove the first part of \Cref{t:main2}.
\begin{proof}[Proof of \Cref{t:main2}(1)]
By \Cref{l:crudeasymptotic}, there exists $c>0$ such that any solution to $\lambda(E,\alpha) = 1$ satisfies
\begin{equation}\label{therange}
c ( 1- \alpha)^{-1} \le E \le c^{-1} (1- \alpha)^{-1}.
\end{equation}
Further, we have $\lambda(E,\alpha) > 1$ for $0 < E < c(1 - \alpha)^{-1}$ and $\lambda(E,\alpha) < 1$ for $E > c^{-1} (1-\alpha)^{-1}$, so there is at least one $E$ in the range \eqref{therange} such that $\lambda(E,\alpha) = 1$ (using the continuity of $E \mapsto \lambda(E,\alpha)$ provided by the third part of \Cref{l:lambdalemma}). Therefore, it suffices to show that $E\mapsto \lambda(E,\alpha)$ is strictly decreasing in \eqref{therange}, since this implies there is exactly one solution to $\lambda(E,\alpha) = 1$.
For brevity, in this proof we write $F_\alpha = F_\alpha(E) = F_\alpha(E,a(E), b(E))$ and $G_\alpha = G_\alpha(E) = G_{\alpha} (E,a(E), b(E))$. We recall from \eqref{kiwi3} that
\begin{flalign}\label{qkiwi}
\lambda(E,\alpha)
& = \pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot (F_\alpha + G_\alpha) \\
& \qquad + \sqrt{1 - t^2_\alpha} \cdot \sqrt{ 2 t_\alpha^2 (F_\alpha + G_\alpha)^2 + (1 - t_{\alpha}^2) ( F_\alpha^2 + G_\alpha^2)} \bigg).
\end{flalign}
Recalling that $\tilde a(E) = F_\alpha(E)$ and $\tilde b(E) = G_\alpha(E)$ (see \eqref{tildedefs}), and using \Cref{l:tildeaE} and \Cref{l:tildebE}, we find that $F_\alpha(E)$ and $G_\alpha(E)$ are decreasing for $E$ in the region \eqref{therange}. Hence $(F_\alpha + G_\alpha)^2$ and $F_\alpha^2 + G_\alpha^2$ are also decreasing in this range, and we conclude from \eqref{qkiwi} that $\lambda(E,\alpha)$ is decreasing for such $E$ (since all coefficients in \eqref{qkiwi} are positive). We noted previously that this claim is enough to establish the theorem, so the proof is complete.
\end{proof}
\section{Scaling Near Zero}
\label{Alpha0Scaling}
In this section we analyze how any solution to the equation $\lambda (E, \alpha) = 1$ (recall \Cref{lambdaEalpha}) scales when $\alpha$ is small. Throughout, we assume that $\alpha \in \big( 0, \frac{1}{20} \big)$, even when not explicitly stated. We recall from \eqref{FandG} that, for any real numbers $\gamma \in (0, 1)$ and $E, x, y > 0$, we define
\begin{flalign}
\label{fggamma}
F_{\gamma} (E, x, y) = \mathbb{E} \Big[ (E + x^{2/\alpha} S - y^{2/\alpha} T)_-^{-\gamma} \Big]; \qquad G_{\gamma} (E, x, y) = \mathbb{E} \Big[ (E+x^{2/\alpha} S - y^{2/\alpha} T)_+^{-\gamma} \Big],
\end{flalign}
\noindent where $S$ and $T$ are positive $\frac{\alpha}{2}$-stable laws. We further recall from \Cref{r:abfixedpointeqn} that, under this notation, $(a, b) = \big( a(E), b(E) \big)$ (recall \eqref{opaque}) solves the system
\begin{flalign}
\label{fba}
a = F_{\alpha/2} (E, a, b); \qquad b = G_{\alpha/2} (E, a, b).
\end{flalign}
\noindent It will be convenient to parameterize $E = u^{2/\alpha}$ and view $a$, $b$, $F$, and $G$ as functions of $u$; we will do this throughout, often abbreviating $a = a(u^{2/\alpha})$ and $b = b(u^{2/\alpha})$ without comment. We begin in \Cref{H0Halpha} by explaining how, for $\alpha$ small, $\frac{\alpha}{2} \log S$ behaves as a Gumbel random variable \cite{SDCS}. We then provide some initial estimates on $a$ and $b$ in \Cref{InitialabAlpha0}. In \Cref{FFGG} we state an estimate for the error in replacing $(F, G)$ with more explicit quantities (in terms of Gumbel random variables), which is proven in \Cref{Replaceh0halpha20} and \Cref{FFGGalpha0Estimate0}. We then establish the scaling for the mobility edge in \Cref{Alpha0E0Scale}. Throughout this section, constants $c > 0$ and $C > 1$ will be independent of $\alpha$.
\subsection{Approximation by Gumbel Random Variables}
\label{H0Halpha}
In this section, we quantify the sense in which the logarithm of a positive $\frac{\alpha}{2}$-stable law can be approximated by a Gumbel random variable. To that end, we begin with the following definition, where $\Upsilon_{\alpha/2}$ below measures this error.
\begin{definition}
\label{halpha}
Fix $\alpha \in (0, 1)$. Let $S$ be a positive $\frac{\alpha}{2}$-stable law; let $W_{\alpha/2} = \frac{\alpha}{2} \log S$; and let $h_{\alpha/2} (x)$ denote the probability density function of $W_{\alpha/2}$. Further let $W_0$ denote a Gumbel random variable, and let $h_0 (x) = e^{-x - e^{-x}}$ denote its probability density function. Define $\Upsilon_{\alpha/2} (x) : \mathbb{R} \rightarrow \mathbb{R}$ so that $h_{\alpha/2} (x) = h_0 (x) + \alpha^2 \cdot \Upsilon_{\alpha/2} (x)$.
\end{definition}
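For later reference, we note the change-of-variables formula implicit in this definition: if $f_S$ denotes the probability density function of $S$, then

```latex
h_{\alpha/2} (x) = \frac{2}{\alpha} \, e^{2x/\alpha} f_S \big( e^{2x/\alpha} \big),
```

since $W_{\alpha/2} \le x$ if and only if $S \le e^{2x/\alpha}$.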
We begin with the following lemma providing the characteristic functions of $W_{\alpha/2}$ and $W_0$.
\begin{lem}
\label{eitw0walpha2}
For any $\alpha \in (0, 1)$ and $t \in \mathbb{R}$, we have
\begin{flalign*}
\mathbb{E} [e^{\mathrm{i} t W_{\alpha/2}}] = \displaystyle\frac{\Gamma (1-\mathrm{i} t)}{\Gamma \big( 1 - \frac{\mathrm{i} t \alpha}{2} \big)} \cdot \Gamma \Big(1 - \displaystyle\frac{\alpha}{2} \Big)^{\mathrm{i} t}; \qquad \mathbb{E} [ e^{\mathrm{i} t W_0}] = \Gamma (1 - \mathrm{i}t).
\end{flalign*}
\end{lem}
\begin{proof}
The expression for $\mathbb{E} [ e^{\mathrm{i} t W_0}]$ follows from direct integration, so we omit the proof. For the characteristic function of $W_{\alpha/2}$, we recall the identity
$$
x^{-k} = \frac{1}{\Gamma(k)}\int_0^\infty e^{- t x} t^{k-1} dt,
$$
which holds for all $x >0$ and $k >0$.
Using \eqref{ytsigma}, we deduce that for real $k > 0$,
\begin{eqnarray*}
\mathbb{E} [e^{- k W_{\alpha/2} }] = \mathbb{E} [S^{-k\alpha/2}] &= &\frac{1}{\Gamma(k\alpha/2)}\int_0^\infty \mathbb{E} [e^{- t S}]\, t^{k\alpha/2 - 1}\, dt \\
& = & \frac{1}{\Gamma(k\alpha/2)}\int_0^\infty e^{- \Gamma(1-\alpha/2) t^{\alpha/2}} t^{k\alpha/2-1} dt \\
& = & \frac{2}{\alpha\Gamma(k\alpha/2)\Gamma(1-\alpha/2)^k}\int_0^\infty e^{- t} t^{k -1} dt \\
& = &\frac{2\cdot \Gamma(k)}{\alpha\Gamma(k\alpha/2)\Gamma(1-\alpha/2)^k}.
\end{eqnarray*}
Recall that $z \Gamma(z) = \Gamma(z+1)$. Then the previous display gives
\begin{equation}\label{eq:negmomS}
\mathbb{E} [e^{-k W_{\alpha/2}}] = \mathbb{E} [S^{-k\alpha/2}] = \frac{\Gamma(1+k)}{\Gamma(1+k\alpha/2)\Gamma(1-\alpha/2)^k}.
\end{equation}
For all real $t$, \eqref{eq:negmomS} implies by analytic continuation that
$$
\mathbb{E} \left[ e^{\mathrm{i} t W_{\alpha/2}} \right]
=
\frac{\Gamma (1-\mathrm{i} t) \cdot \Gamma(1-\alpha/2)^{\mathrm{i} t}}{\Gamma(1-\mathrm{i} t \alpha/2)},
$$
which completes the proof.
\end{proof}
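For completeness, the direct integration giving $\mathbb{E}[e^{\mathrm{i} t W_0}]$ is the substitution $v = e^{-x}$ in the Gumbel density from \Cref{halpha}:

```latex
\mathbb{E} \big[ e^{\mathrm{i} t W_0} \big]
= \int_{-\infty}^{\infty} e^{\mathrm{i} t x} e^{-x - e^{-x}} \, dx
= \int_0^{\infty} v^{-\mathrm{i} t} e^{-v} \, dv
= \Gamma (1 - \mathrm{i} t).
```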
Lemma \ref{eitw0walpha2} indicates that, as $\alpha$ tends to $0$, the characteristic function of $W_{\alpha/2}$ converges to that of $W_0$; in particular, $W_{\alpha/2}$ converges weakly to a Gumbel distribution.\footnote{In fact, it suggests that $W_{\alpha/2}$ can be written exactly (for any $\alpha \in (0, 1)$) as the sum of a Gumbel distribution and another independent random variable; this is indeed shown to be the case in \cite[Corollary 4.1]{SDCS}.} The following lemma quantifies this convergence by bounding (derivatives of) $\Upsilon_{\alpha/2}$.
\begin{lem}
\label{w0walpha}
For any integers $p, q \ge 0$, there exists a constant $C = C(p, q) > 1$ such that
\begin{flalign}
\label{h0halpha21}
\displaystyle\sup_{x \in \mathbb{R}} \big| \partial_x^q \Upsilon_{\alpha/2} (x) \big| \le C; \qquad \displaystyle\int_{-\infty}^{\infty} |x|^p \cdot \big|\partial_x^q \Upsilon_{\alpha/2} (x) \big| dx < C.
\end{flalign}
\end{lem}
\begin{proof}
\noindent Recall for any $x \in \mathbb{R}$ that
\begin{flalign*}
\big|\Gamma (1 - \mathrm{i} x) \big|^2 = \displaystyle\frac{\pi x}{\sinh(\pi x)},
\end{flalign*}
\noindent which implies that
\begin{flalign}
\label{estimate1it0}
\Bigg| \displaystyle\frac{\Gamma (1 - \mathrm{i} t)}{\Gamma \big( 1 - \frac{\mathrm{i} t \alpha}{2} \big)} \Bigg| = \bigg( \displaystyle\frac{2}{\alpha} \cdot \displaystyle\frac{\sinh \big (\frac{\pi \alpha t}{2}\big)}{\sinh (\pi t)} \bigg)^{1/2}.
\end{flalign}
\noindent These, together with a Taylor expansion, yield
\begin{flalign*}
\Bigg| \Gamma \bigg(1 - \frac{\alpha}{2} \bigg)^{\mathrm{i}t} - \Gamma \bigg( 1 - \frac{\mathrm{i} t \alpha}{2} \bigg) \Bigg| \le C \alpha^2 \big( |t| + |t|^2 \big),
\end{flalign*}
\noindent uniformly for $t \in \mathbb{R}$. Thus, \Cref{eitw0walpha2} and Fourier inversion gives
\begin{flalign*}
\alpha^2 \cdot \displaystyle\sup_{x \in \mathbb{R}} \big| \partial_x^q \Upsilon_{\alpha/2} (x) \big| = \displaystyle\sup_{x \in \mathbb{R}} \big| \partial_x^q h_{\alpha/2} (x) - \partial_x^q h_0 (x) \big| \le \displaystyle\int_{-\infty}^{\infty} |t|^q \big| \mathbb{E} [ e^{\mathrm{i} t W_0}] - \mathbb{E} [ e^{\mathrm{i} t W_{\alpha/2}}] \big| dt \le C \alpha^2,
\end{flalign*}
\noindent which verifies the first bound in \eqref{h0halpha21}.
To establish the latter, denote by $\psi_k (x) = \partial_x^{k+1} \log \Gamma(x)$ the polygamma functions, and recall from \cite[Section 1.1]{TSF} that for any integer $k \in [0, p+q]$ we have the estimates
\begin{flalign*}
\big| \psi_0 (1 + \mathrm{i} x) - \log (1 + \mathrm{i} x) \big| \le \displaystyle\frac{C}{|x| + 1}; \qquad \bigg| \psi_k (1 + \mathrm{i} x) + (-1)^k \displaystyle\frac{k!}{(1 + \mathrm{i} x)^k} \bigg| \le \displaystyle\frac{C}{\big( |x| + 1 \big)^k},
\end{flalign*}
\noindent uniformly in $x \in \mathbb{R}$. Together with \eqref{estimate1it0}, these yield
\begin{flalign}
\label{estimate1it1}
\Bigg| \partial_t^k \bigg( \displaystyle\frac{\Gamma (1 - \mathrm{i} t)}{\Gamma \big( 1 - \frac{\mathrm{i}t\alpha}{2} \big)} \bigg) \Bigg| \le C \log \big( |t| + 3 \big)^k \cdot \bigg( \displaystyle\frac{2}{\alpha} \cdot \displaystyle\frac{\sinh \big( \frac{\pi \alpha t}{2} \big)}{\sinh (\pi t)} \bigg)^{1/2}.
\end{flalign}
\noindent From a Taylor expansion (and again \eqref{estimate1it0}), we also have
\begin{flalign}
\label{estimate1it2}
\Bigg| \partial_t^k \bigg( \Gamma \Big( 1 - \displaystyle\frac{\alpha}{2} \Big)^{\mathrm{i} t} - \Gamma \Big( 1 - \displaystyle\frac{\mathrm{i} t \alpha}{2} \Big) \bigg) \Bigg| \le C \alpha^2 \big( |t|^k + |t| \big).
\end{flalign}
\noindent Together \Cref{eitw0walpha2}, Fourier inversion, \eqref{estimate1it1}, and \eqref{estimate1it2} yield
\begin{flalign*}
\alpha^2 \displaystyle\int_{-\infty}^{\infty} |x|^p \cdot \big| \partial_x^q \Upsilon_{\alpha/2} (x) \big| dx & = \displaystyle\int_{-\infty}^{\infty} |x|^p \cdot \big| \partial_x^q h_{\alpha/2} (x) - \partial_x^q h_0 (x) \big| dx \\
& \le C \displaystyle\sum_{0 \le k+l \le p+q} \displaystyle\int_{-\infty}^{\infty} |t|^l \cdot \Big| \partial_t^k \big( \mathbb{E} [e^{\mathrm{i} t W_0}] - \mathbb{E} [e^{\mathrm{i} t W_{\alpha/2}}] \big) \Big| dt \le C \alpha^2,
\end{flalign*}
\noindent from which we deduce the second statement of \eqref{h0halpha21}.
\end{proof}
Recall that the total variation distance between two real random variables $(X, Y)$, with probability density functions $(f, g)$ respectively, is defined by
\begin{flalign*}
d_{\mathrm{TV}} (X, Y) = \displaystyle\int_{-\infty}^{\infty} \big( f(x) - g(x) \big)_+ dx.
\end{flalign*}
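Since $f$ and $g$ each integrate to $1$, the positive and negative parts of $f - g$ carry equal mass, so this definition agrees with the symmetric form

```latex
d_{\mathrm{TV}} (X, Y) = \frac{1}{2} \int_{-\infty}^{\infty} \big| f(x) - g(x) \big| \, dx,
```

which is the version used below.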
We then have the following two corollaries. The first bounds the total variation distance between $W_{\alpha/2}$ and $W_0$; the second bounds integrals of various densities (and their derivatives and differences) against a certain fractional moment.
\begin{cor}
\label{w0walphavariation}
There exists a constant $C > 1$ such that $d_{\mathrm{TV}} (W_{\alpha/2}, W_0) \le C \alpha^2$.
\end{cor}
\begin{proof}
This follows from the fact that
\begin{flalign*}
d_{\mathrm{TV}} (W_{\alpha/2}, W_0) = \displaystyle\frac{1}{2} \displaystyle\int_{-\infty}^{\infty} \big| h_{\alpha/2} (x) - h_0 (x) \big| dx = \displaystyle\frac{\alpha^2}{2} \displaystyle\int_{-\infty}^{\infty} \big| \Upsilon_{\alpha/2} (x) \big| dx < C \alpha^2,
\end{flalign*}
\noindent where in the last bound we used the second estimate in \Cref{h0halpha21}.
\end{proof}
\begin{cor}
\label{integralk}
There exists a constant $C > 1$ such that the following holds for any $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$. For any function $f \in \{ h_{\alpha/2}, h_0, \Upsilon_{\alpha/2} \}$, we have
\begin{flalign*}
\displaystyle\sup_{K \in \mathbb{R}} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \big| f (t+K) \big| dt < C; \qquad \displaystyle\sup_{K \in \mathbb{R}} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \big| f' (t+K) \big| dt \le C.
\end{flalign*}
\end{cor}
\begin{proof}
We only establish the first bound, since the proof of the latter is very similar. To that end, observe from \Cref{w0walpha} that
\begin{flalign*}
\sup_{x \in \mathbb{R}} \big| f (x) \big| \le C; \qquad \displaystyle\sup_{x \in \mathbb{R}} \big| f' (x) \big| \le C,
\end{flalign*}
\noindent for $f = \Upsilon_{\alpha/2}$. Since the same bounds hold for $f = h_0$, they also hold for $f = h_{\alpha/2}$ (upon replacing $C$ with $2C$ if necessary). Hence,
\begin{flalign*}
\displaystyle\sup_{K \in \mathbb{R}} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \big| f (t + K) \big| dt \le C \displaystyle\int_{-1}^1 (e^{2t/\alpha} - 1)^{-\gamma} dt + \displaystyle\sup_{K \in \mathbb{R}} \displaystyle\int_{|t| \ge 1} \big| f (t+K) \big| dt \le C,
\end{flalign*}
\noindent where in the last bound we again used \Cref{w0walpha}; this establishes the corollary.
\end{proof}
We conclude with the following two tail bounds on $h_{\alpha/2} (x)$ and $h_0 (x)$ for $x$ negative.
\begin{lem}
\label{halpha2h0xsmall}
There exists a constant $C > 1$ such that, for any $x \ge 0$, we have
\begin{flalign*}
\displaystyle\int_{-\infty}^{-x} \big( h_{\alpha/2} (y) + h_0 (y) \big) dy \le 4 e^{-e^x}.
\end{flalign*}
\end{lem}
\begin{proof}
We have $\int_{-\infty}^{-x} h_0 (y) dy = e^{- e^x}$ by \Cref{halpha} and integration. So, it suffices to show that
\begin{flalign}
\label{walpha2x}
\mathbb{P} [W_{\alpha/2} \le -x] = \int_{-\infty}^{-x} h_{\alpha/2} (y) dy \le 3 e^{-e^x},
\end{flalign}
\noindent which follows from the bound
\begin{flalign*}
\mathbb{P} [W_{\alpha/2} \le -x] = \mathbb{P} [S \le e^{-2x/\alpha}] \le 3 \cdot \mathbb{E} \big[ \exp (-e^{2x/\alpha} S) \big] = 3 \cdot \exp \big( - \Gamma (1 - \alpha/2) \cdot e^x \big) \le 3 e^{-e^x}.
\end{flalign*}
\noindent Here, to deduce the first statement we applied \Cref{halpha}; to deduce the second we applied a Markov estimate; to deduce the third we applied \eqref{ytsigma}; and to deduce the fourth we used the fact that $\Gamma (1 - \alpha/2) \ge 1$.
\end{proof}
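The Markov step above is the standard exponential form of Markov's inequality: for any $a > 0$,

```latex
\mathbb{P} [S \le a]
= \mathbb{P} \big[ e^{-S/a} \ge e^{-1} \big]
\le e \cdot \mathbb{E} \big[ e^{-S/a} \big]
\le 3 \cdot \mathbb{E} \big[ e^{-S/a} \big],
```

applied with $a = e^{-2x/\alpha}$.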
\begin{cor}
\label{exponentialhh}
There exists a constant $C > 1$ such that, for any $\kappa \in [0, 2]$, we have
\begin{flalign*}
\displaystyle\int_{-\infty}^{\infty} e^{-\kappa s} \Upsilon_{\alpha/2} (s) ds \le C |\log \alpha|^2.
\end{flalign*}
\end{cor}
\begin{proof}
By \Cref{w0walpha} and \Cref{halpha2h0xsmall}, we have for any real number $A > 0$ that
\begin{flalign*}
\displaystyle\int_{-\infty}^{\infty} e^{-\kappa s} \Upsilon_{\alpha/2} (s) ds & \le \alpha^{-2} \displaystyle\int_{-\infty}^{-A} e^{-\kappa s} \big( h_{\alpha/2} (s) + h_0 (s) \big) ds + \displaystyle\int_{-A}^0 e^{-\kappa s} \Upsilon_{\alpha/2} (s) ds + C \\
& \le C \alpha^{-2} \displaystyle\int_{-\infty}^{-A} e^{-\kappa s -e^{-s}} ds + C e^{\kappa A} \le C \alpha^{-2} \displaystyle\int_{e^A}^{\infty} v^{\kappa} e^{-v} dv + C e^{\kappa A},
\end{flalign*}
\noindent where in the last step we changed variables $v = e^{-s}$ and used $v^{\kappa - 1} \le v^{\kappa}$ for $v \ge 1$. Since for $B = e^A$ we have
\begin{flalign*}
\displaystyle\int_B^{\infty} v^{\kappa} e^{-v} dv \le e^{-B/2} \displaystyle\int_0^{\infty} v^{\kappa} e^{-v/2} dv = 2^{\kappa + 1} e^{-B/2} \cdot \Gamma (\kappa + 1) \le C e^{-B/2},
\end{flalign*}
\noindent it follows that, for $e^A = 10 |\log \alpha|$,
\begin{flalign*}
\displaystyle\int_{-\infty}^{\infty} e^{-\kappa s} \Upsilon_{\alpha/2} (s) ds \le \alpha^{-2} C e^{-e^A/2} + C e^{\kappa A} \le C \alpha^3 + C |\log \alpha|^2,
\end{flalign*}
\noindent which verifies the corollary.
\end{proof}
\subsection{Initial Bounds on $a$ and $b$}
\label{InitialabAlpha0}
In this section we show that $a(E_0)$ and $b(E_0)$ (throughout this section, we recall their definitions from \eqref{opaque}) are bounded above and below if $\lambda (E_0, \alpha) = 1$. We begin with the following proposition, approximating the $\alpha$-th moment of $R_{\star} (E_0)$ if $\lambda (E_0, \alpha) = 1$.
\begin{prop}
\label{lambdaalpha0}
There exists a constant $C > 1$ such that the following holds. Fix a real number $E_0 > 0$ such that $\lambda (E_0, \alpha) = 1$. Denoting $c_{\star} = 4 \log 2 + \pi$, we have
\begin{flalign*}
\bigg| \mathbb{E} \Big[ \big| R_{\star} (E_0) \big|^{\alpha} \Big] - (2 - c_{\star} \alpha) \bigg| \le C \alpha^2.
\end{flalign*}
\end{prop}
\begin{proof}
Throughout this proof, we abbreviate $R_{\star} = R_{\star} (E_0)$ and set
\begin{flalign}
\label{alpharab}
A = \mathbb{E}\big[ |R_{\star}|^{\alpha} \big]; \qquad B = \mathbb{E}\big[ |R_{\star}|^{\alpha} \sgn (-R_{\star}) \big].
\end{flalign}
\noindent By \Cref{2lambdaEsalpha}, we have
\begin{flalign*}
\pi^{-1} \cdot K_{\alpha} \cdot \Gamma (\alpha) \cdot (1 - t_{\alpha}^2)^{1/2} \cdot \big( t_{\alpha} A + \sqrt{A^2 + t_{\alpha}^2 B^2} \big) = \lambda(E_0, \alpha) = 1.
\end{flalign*}
\noindent Using the definitions \eqref{tlrk} of $K_{\alpha}$ and $t_{\alpha}$, and applying the Taylor expansion for $t_{\alpha} = \frac{\pi \alpha}{2} + O(\alpha^3)$, it follows that
\begin{flalign*}
\displaystyle\frac{\alpha}{2 \pi} \cdot \Gamma \bigg( \displaystyle\frac{1-\alpha}{2} \bigg)^2 \cdot \Gamma (\alpha) \cdot \big( 1 + O(\alpha^2) \big) \cdot \bigg( \Big( \displaystyle\frac{\pi \alpha}{2} + O (\alpha^3) \Big) \cdot A + \sqrt{A^2 + \big(\alpha^2 + O(\alpha^4) \big) \cdot B^2} \bigg) = 1.
\end{flalign*}
\noindent Further using the fact that $\alpha \Gamma (\alpha) = \Gamma (1 + \alpha)$; applying Taylor expansions
\begin{flalign*}
\Gamma (1 + \alpha) = 1 + \alpha \Gamma' (1) + O(\alpha^2); \qquad \Gamma \bigg( \frac{1-\alpha}{2} \bigg) = \Gamma \bigg( \frac{1}{2} \bigg) - \frac{\alpha}{2} \cdot \Gamma' \bigg( \frac{1}{2} \bigg) + O(\alpha^2);
\end{flalign*}
\noindent and using the fact that $\Gamma \big( \frac{1}{2} \big) = \pi^{1/2}$, we deduce
\begin{flalign*}
\pi^{-1} \cdot \big(1 + \alpha \cdot \Gamma' (1) \big) \cdot \bigg( \pi^{1/2} - \frac{\alpha}{2} \cdot \Gamma' (1/2) \bigg)^2 \cdot \bigg( \displaystyle\frac{\pi \alpha}{2} \cdot A + \sqrt{A^2 + \alpha^2 B^2} \bigg) = 2 + O (\alpha^2),
\end{flalign*}
\noindent and thus
\begin{flalign*}
\displaystyle\frac{\pi \alpha}{2} \cdot A + \sqrt{A^2 + \alpha^2 B^2} = 2 + 2 \pi^{-1/2} \alpha \cdot \Gamma' \Big( \displaystyle\frac{1}{2} \Big) - 2 \alpha \cdot \Gamma' (1) + O(\alpha^2) = 2 - 4 \alpha \log 2 + O(\alpha^2).
\end{flalign*}
\noindent Since $|B| \le |A|$ from the definition \eqref{alpharab} of $A$ and $B$, it follows that
\begin{flalign*}
\bigg( 1 + \displaystyle\frac{\pi \alpha}{2} \bigg) \cdot A = 2 - 4 \alpha \log 2 + O(\alpha^2),
\end{flalign*}
\noindent which yields the proposition.
\end{proof}
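The numerical constant in the last two displays can be traced explicitly using $\Gamma'(1) = -\gamma_{\mathrm{E}}$ and $\Gamma' \big( \frac{1}{2} \big) = -\sqrt{\pi} \, (\gamma_{\mathrm{E}} + 2 \log 2)$, where $\gamma_{\mathrm{E}}$ denotes the Euler--Mascheroni constant:

```latex
2 \pi^{-1/2} \cdot \Gamma' \Big( \frac{1}{2} \Big) - 2 \Gamma' (1)
= -2 (\gamma_{\mathrm{E}} + 2 \log 2) + 2 \gamma_{\mathrm{E}}
= - 4 \log 2,
```

and dividing $2 - 4 \alpha \log 2 + O(\alpha^2)$ by $1 + \frac{\pi \alpha}{2}$ yields $2 - (4 \log 2 + \pi) \alpha + O(\alpha^2) = 2 - c_{\star} \alpha + O(\alpha^2)$, matching the definition of $c_{\star}$.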
The next lemma states that $E_0 \le 1$ if $\lambda (E_0, \alpha) = 1$ (and $\alpha$ is sufficiently small).
\begin{lem}
\label{lambdaalpha01}
There exists a constant $c>0$ such that the following holds for $\alpha \in (0,c)$. Any real number $E_0 > 0$ such that $\lambda (E_0, \alpha) = 1$ satisfies $E_0 \le 1$.
\end{lem}
\begin{proof}
Suppose for the sake of contradiction that there exists some $E_0 > 1$ satisfying $\lambda (E_0, \alpha) = 1$. By the elementary inequality $x \le 1 + x^2$, we have by \eqref{fba} and \Cref{lambdaalpha0} that, for $\alpha \in (0,c)$,
\begin{flalign}
\label{abap}
a + b = \mathbb{E} \Big[ \big| R_{\star} (E_0) \big|^{\alpha/2} \Big]
\le \mathbb{E} \Big[ \big| R_{\star} (E_0) \big|^{\alpha} \Big] + 1
\le 4.
\end{flalign}
\noindent Further, recalling the variables $S$ and $T$ from \eqref{fggamma}, and setting $Y = a^{2/\alpha} S - b^{2/\alpha} T$, we have
\begin{flalign}
\label{cow}
\mathbb{E} \big[ |E_0 + Y|^{-\alpha} \big] = \mathbb{E} \Big[ \big| R_{\star} (E_0) \big|^{\alpha} \Big] \ge \displaystyle\frac{7}{4}
\end{flalign}
\noindent where in the last inequality we applied \Cref{lambdaalpha0}. Since $|E_0 + Y| \ge \alpha$ unless $Y \in [-E_0 - \alpha, -E_0 + \alpha]$, we may write
\begin{align}
\mathbb{E} \big[ |E_0 + Y|^{-\alpha} \big]\notag
&= \mathbb{E} \big[ \mathbbm{1}_{Y \notin [-E_0 - \alpha, -E_0 + \alpha]} \cdot |E_0 + Y|^{-\alpha} \big] + \mathbb{E} \big[ \mathbbm{1}_{Y \in [-E_0 - \alpha, -E_0 + \alpha]} \cdot |E_0 + Y|^{-\alpha} \big]\\
&\le \alpha^{-\alpha} + \mathbb{E} \big[ \mathbbm{1}_{Y \in [-E_0 - \alpha, -E_0 + \alpha]} \cdot |E_0 + Y|^{-\alpha} \big].
\label{goat}
\end{align}
\noindent Letting $g$ denote the probability density function of $Y$, we have $\sup_{|s| \ge 1/2} g(s) \le C$ by \eqref{abap}. Hence, since $E_0 > 1$,
\begin{flalign*}
\mathbb{E} \big[ \mathbbm{1}_{Y \in [-E_0 - \alpha, -E_0 + \alpha]} \cdot |E_0 + Y|^{-\alpha} \big] \le C \displaystyle\int_{-\alpha}^{\alpha} |u|^{-\alpha} du \le C \alpha^{1 - \alpha} \le C \alpha.
\end{flalign*}
\noindent With \eqref{goat}, this contradicts \eqref{cow} after taking $\alpha$ sufficiently small, establishing the lemma.
\end{proof}
We further require the following lemma bounding inverse moments of stable laws, which will be useful for obtaining an upper bound on $a + b$.
\begin{lem}
\label{avestimatek}
There exists a constant $C > 1$ such that the following holds for any $k \in \mathbb{R}$ such that $0< k \le 20 \le \frac{1}{\alpha}$. For any real numbers $A, v \ge 0$, we have
\begin{flalign}\label{fjt}
\mathbb{E} \big[ |A^{2/\alpha} - v^{2/\alpha} S|^{-k\alpha/2} \big] + \mathbb{E} \big[ | A^{2/\alpha} + v^{2/\alpha} S|^{-k\alpha/2} \big] \le C \cdot \max ( A, v )^{-k}.
\end{flalign}
\end{lem}
\begin{proof}
We bound each term on the left side of \eqref{fjt} separately. The claimed bound on the second term follows directly from \eqref{eq:negmomS} after bounding
\begin{flalign*}
| A^{2/\alpha} + v^{2/\alpha} S|^{-k\alpha/2} \le \max\big(A^{2/\alpha},
|v^{2/\alpha} S| \big)^{-k\alpha/2}.
\end{flalign*}
We now turn to bounding the first term on the left side of \eqref{fjt}. We can assume without loss of generality that $v = 1$ and $A>0$. We write
$$
\mathbb{E} \big[ | A^{2/\alpha} - S |^{-k\alpha/2}\big] = \int_0 ^\infty \mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big)\, dt.
$$
We decompose this integral over $(0,2^{\alpha k/2}A^{-k})$ and $(2^{\alpha k/2}A^{-k},\infty)$. For the first term, we find
\begin{eqnarray}
\int_{0}^{2^{\alpha k/2} A^{-k} } \mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big)\, dt
& \leq &\int_{0}^{2^{\alpha k/2} A^{-k}} \mathbb{P}\big( S \leq A^{2/\alpha} + t^{-2/(k\alpha)} \big)\, dt\notag \\
& \leq & \int_{0}^{\infty} \mathbb{P}\big( S \leq (2 +1) t^{-2/(k\alpha)} \big)\, dt \notag \\
& = & 3^{k \alpha/2} \cdot \mathbb{E} [S^{-k\alpha/2}] \le C,\label{minpr}
\end{eqnarray}
where in the last inequality we used \eqref{eq:negmomS} and the assumed bound on $k$.
We also have the trivial bound $\mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big) \leq 1$. We deduce using this estimate, the assumed bound on $k$, and \eqref{minpr} that
$$
\int_{0}^{2^{\alpha k/2}A^{-k}} \mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big)\, dt \leq \min (C , 2^{\alpha k/2} A^{-k}) \leq C \max(A,1)^{-k}.
$$
Recall that we denote the density of $\frac{\alpha}{2} \log S$ by $h_{\alpha/2}$ (see \Cref{halpha}).
For the integral over $(2^{\alpha k/2}A^{-k},\infty)$, we write
\begin{eqnarray}
\int_{2^{\alpha k/2} A^{-k}}^\infty \mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big) dt
& = & \int_{2^{\alpha k/2} A^{-k}}^\infty \int_{\frac{\alpha}{2} \log ( A^{2/\alpha} - t^{-2/(k\alpha)})}^{\frac{\alpha}{2} \log ( A^{2/\alpha} + t^{-2/(k\alpha)})} h_{\alpha/2} (x) \, dx \, dt \nonumber \\
& \leq & \int_{2^{\alpha k/2} A^{-k}}^\infty \int_{\log A - \delta }^{\log A + \delta }h_{\alpha/2} (x) \, dx \, dt, \label{eq:2ndint}
\end{eqnarray}
where we set $\delta = 2\alpha (At^{1/k})^{-2/\alpha}$ and we have used that $\log (1 + x) \leq x$ and $\log (1- x)\geq -x /(1-x_0)$ if $0 \leq x \leq x_0 < 1$, applied with $x = (At^{1/k})^{-2/\alpha}$ and $x_0 = 1/2$.
By \Cref{w0walpha}, we have $h_{\alpha/2}(x) \leq C$ for all $\alpha \in (0, \frac{1}{20})$. We deduce that
$$
\int_{2^{\alpha k/2} A^{-k}}^\infty \int_{\log A - \delta }^{\log A + \delta }h_{\alpha/2} (x) \, dx \, dt \leq C \alpha \int_{2^{\alpha k/2} A^{-k}}^\infty (At^{1/k})^{-2/\alpha} \, dt = C \cdot \frac{2^{ \alpha k} \alpha }{\frac{2}{\alpha k} - 1}\cdot \frac{1}{A^k} \leq \frac{C}{A^k}.
$$
The proof of the lemma will thus be complete if we prove that the integral \eqref{eq:2ndint} is also bounded uniformly in $A$ and $\alpha$ by a constant, which yields the claimed bound when $A \le 1$.
For this, we need an upper bound on $h_{\alpha/2}(x)$ that depends on $x$ but remains uniform in $\alpha$. To that end, note that by Markov's inequality and \eqref{eq:negmomS}, we have for all $r,x >0 $ and $0 < \alpha < 1/2$ that
\begin{flalign}\label{gcdf}
\mathbb{P}(W_{\alpha/2} \leq x) = \mathbb{P}( e^{-r W_{\alpha/2}} \geq e^{-rx}) \leq e^{rx }\cdot \mathbb{E} [e^{-r W_{\alpha/2}}] \leq C_r e^{rx},
\end{flalign}
where $C_r>1$ is a constant that depends only on $r$, and we used \Cref{eitw0walpha2} in the last inequality.
By \Cref{w0walpha}, there exists $C>1$ such that for any $x,y>0$,
\begin{flalign}\label{lss}
h_{\alpha/2} (x) \leq h_{\alpha/2} (y) + C |x-y|.
\end{flalign}
Then by \eqref{gcdf}, we have for any $\theta >0$ that
$$
C_r e^{rx} \geq \int_{-\infty}^{x} h_{\alpha/2}(y) \, dy \geq \int_{x - \theta}^{x} h_{\alpha/2}(y) \, dy \geq \theta h_{\alpha/2}(x) - \frac{C \theta^2}{2},
$$
where we used \eqref{lss} in the last inequality.
Choosing $\theta = e^{rx/2}$, and then replacing $r$ by $2r$, we deduce that for any $r >0$, there exists a constant $C'_r>1$ such that
\begin{flalign}
h_{\alpha/2}(x) \leq C'_r e^{rx}.
\end{flalign}
Putting this last estimate in \eqref{eq:2ndint} and setting $r=k$, we find
\begin{align}
\int_{2^{\alpha k/2}A^{-k}}^\infty \mathbb{P}\big( | A^{2/\alpha} - S | \leq t^{-2/(k\alpha)} \big)\, dt \notag
&\le C'_k \int_{2^{\alpha k/2}A^{-k}}^\infty \int_{\log A - \delta }^{\log A + \delta } e^{kx} \, dx \, dt \\ &= \frac{C'_k A^k }{k} \notag \int_{2^{\alpha k/2}A^{-k}} ^\infty (e^{k\delta} - e^{-k\delta}) \, dt\\
&\le \frac{C'_k}{k} \frac{2^{\alpha k/2} e \alpha }{\frac{2}{\alpha k} - 1},\label{lastintegral}
\end{align}
where we increased the value of $C'_k$ in the last line.
In the last inequality we used the definition of $\delta$, together with the bound $(At^{1/k})^{-2/\alpha} \le \frac{1}{2}$ valid for $t \ge 2^{\alpha k/2} A^{-k}$, to see that
\begin{flalign}k \delta = 2k \alpha (At^{1/k})^{-2/\alpha} \leq k \alpha \leq 1, \text{ which implies } e^{k\delta} - e^{-k\delta} \leq 2 e \delta k.
\end{flalign}
%
Using \eqref{lastintegral}, we conclude that the integral \eqref{eq:2ndint} is uniformly bounded in $A$ for all $\alpha \in (0, \frac{1}{20})$ and $k\le \alpha^{-1}$. This completes the proof of the lemma.
\end{proof}
We can now establish the following lemma bounding $a(E_0)$ and $b(E_0)$ from above and below.
\begin{lem}
\label{abestimatealpha0}
There exist constants $c > 0$ and $C > 1$ such that, if $\alpha \in (0, c)$ and $0 < E_0 \le 1$, then
\begin{flalign*}
C^{-1} \le a(E_0) \le C; \qquad C^{-1} \le b(E_0) \le C.
\end{flalign*}
\end{lem}
\begin{proof}
Throughout this proof, we abbreviate $a = a(E_0)$ and $b = b (E_0)$, and we also set $E_0 = u^{2/\alpha}$. We first show the upper bounds on $a$ and $b$. To that end, observe from \eqref{fggamma} that
\begin{flalign}
\label{abu}
a + b = \mathbb{E} \big[ |u^{2/\alpha} + a^{2/\alpha} S - b^{2/\alpha} T|^{-\alpha/2} \big].
\end{flalign}
\noindent Moreover, from \Cref{avestimatek} (applied after conditioning on $S$ or on $T$), we have that
\begin{flalign*}
\mathbb{E} \big[ |u^{2/\alpha} + x^{2/\alpha} S - y^{2/\alpha} T|^{-\alpha/2} \big] \le C \min \{ x^{-1}, y^{-1} \}, \qquad \text{for any $x, y \ge 0$},
\end{flalign*}
\noindent which together with \eqref{abu} implies that $a + b \le C \min \{ a^{-1}, b^{-1} \}$, meaning that $a + b \le C$. This verifies the upper bounds on $a$ and $b$.
Hence, it remains to show the lower bounds. We fix a constant $C_+ \geq 1$ such that $a + b \leq C_+$ holds for all $0 < \alpha < 1/20$.
Assume first that $a \geq b$, and set $\Omega_1 = \{ T \leq S \}$. Using symmetry, we deduce that $\mathbb{P}(\Omega_1) = 1/2$.
We have that
\begin{align*}
b &=
\mathbb{E} \Big[ (E+a^{2/\alpha} S - b^{2/\alpha} T)_+^{-\alpha/2} \Big]
\\ &\ge
\mathbb{E} \Big[ \mathbbm{1}_{\Omega_1} \cdot (E+a^{2/\alpha} S - b^{2/\alpha} T)^{-\alpha/2} \Big]
\ge \frac{1}{2}\cdot \mathbb{E} \Big[ (E+a^{2/\alpha} S)^{-\alpha/2} \Big]
\ge c a^{-1} \ge c C_{+}^{-1}
\end{align*}
for some $c>0$. This completes the analysis of the case $a \ge b$.
Next, suppose that $ b \geq a$ and $b \geq u$. We consider the event
\begin{flalign*}
\Omega_2 = \{ 2 \leq T \leq 4, S \leq 1\}.
\end{flalign*}
By Corollary \ref{w0walphavariation}, there exists $c>0$ such that
$\mathbb{P}(\Omega_2) > c$. Then
\begin{align*}
a &=
\mathbb{E} \Big[ (u^{2/\alpha}+a^{2/\alpha} S - b^{2/\alpha} T)_-^{-\alpha/2} \Big]\\
&\ge
\mathbb{E} \Big[\mathbbm{1}_{\Omega_2} \cdot ( u^{2/\alpha} +a^{2/\alpha} S - b^{2/\alpha} T)_-^{-\alpha/2} \Big]\\
&=
\mathbb{E} \Big[\mathbbm{1}_{\Omega_2} \cdot ( b^{2/\alpha} T - u^{2/\alpha} - a^{2/\alpha} S )^{-\alpha/2} \Big]\\
&\ge \mathbb{E} \Big[\mathbbm{1}_{\Omega_2} \cdot ( b^{2/\alpha} T - u^{2/\alpha} )^{-\alpha/2} \Big]
\ge c ( 4 b^{2/\alpha} - u^{2/\alpha})^{-\alpha/2} \ge c b^{-1} \ge c C_{+}^{-1}.
\end{align*}
This completes the analysis of the case where $b\ge a$ and $b \ge u$.
Finally, suppose that $ u \geq b \geq a$. We first consider the event $\Omega_3 = \{ S \leq 1, T \leq 1\}$.
By Corollary \ref{w0walphavariation}, there exists $c>0$ such that
$\mathbb{P}(\Omega_3 ) > c$.
Since $E = u^{2/\alpha}$ and $a \le b \le u$, on $\Omega_3$ we have $0 \le E + a^{2/\alpha} S - b^{2/\alpha} T \le 2u^{2/\alpha}$. We deduce that
\begin{align}
b &=
\mathbb{E} \Big[ (E+a^{2/\alpha} S - b^{2/\alpha} T)_+^{-\alpha/2} \Big]
\ge
\mathbb{E} \Big[\mathbbm{1}_{\Omega_3} \cdot (E+a^{2/\alpha} S - b^{2/\alpha} T)^{-\alpha/2} \Big] \ge c (2u^{2/\alpha})^{-\alpha/2} \ge c u^{-1} \ge c,\label{lowerbest}
\end{align}
where we used $u\in(0,1]$ in the last inequality.
Next, we consider the event
\begin{flalign*}
\Omega_4 = \{ S \leq 1/2, \; 2 u^{2/\alpha} \leq b^{2/\alpha} T \leq 3 u^{2/\alpha} \}.
\end{flalign*}
By Corollary \ref{w0walphavariation}, \eqref{lowerbest}, and the assumption that $u\le 1$, there exists $c>0$ such that
$\mathbb{P}(\Omega_4 ) > c$.
Since $a \le u$, on $\Omega_4$ we have $\frac{1}{2} u^{2/\alpha} \le b^{2/\alpha} T - u^{2/\alpha} - a^{2/\alpha} S \le 2 u^{2/\alpha}$. We then have
\begin{align*}
a =
\mathbb{E} \Big[ (u^{2/\alpha}+a^{2/\alpha} S - b^{2/\alpha} T)_-^{-\alpha/2} \Big] & \ge
\mathbb{E} \Big[\mathbbm{1}_{\Omega_4} \cdot ( u^{2/\alpha} +a^{2/\alpha} S - b^{2/\alpha} T)_-^{-\alpha/2} \Big] \\
& \ge \mathbb{E} \big[ \mathbbm{1}_{\Omega_4} \cdot (2u^{2/\alpha})^{-\alpha/2} \big] \ge c u^{-1} \ge c.
\end{align*}
This completes the analysis of the case $ u \geq b \geq a$, and therefore completes the proof.
\end{proof}
\subsection{Replacement by Gumbel Random Variables}
\label{FFGG}
In this section we approximate the functions $F_{\gamma}$ and $G_{\gamma}$ from \eqref{fggamma} by explicit quantities that can be viewed as functionals of Gumbel random variables. In particular, observe from \eqref{fggamma} and \Cref{halpha} that
\begin{flalign}
\label{fggammaintegralalpha0}
\begin{aligned}
F_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{(\alpha/2) \log (y^{-2/\alpha} u^{2/\alpha} + x^{2/\alpha} y^{-2/\alpha} e^{2s/\alpha})}^{\infty} |u^{2/\alpha} + x^{2/\alpha} e^{2s/\alpha} - y^{2/\alpha} e^{2t/\alpha}|^{-\gamma} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times h_{\alpha/2} (s) h_{\alpha/2} (t) dt ds \\
G_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{(\alpha/2) \log (y^{-2/\alpha} u^{2/\alpha} + x^{2/\alpha} y^{-2/\alpha} e^{2s/\alpha})} |u^{2/\alpha} + x^{2/\alpha} e^{2s/\alpha} - y^{2/\alpha} e^{2t/\alpha}|^{-\gamma} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times h_{\alpha/2} (s) h_{\alpha/2} (t) dt ds.
\end{aligned}
\end{flalign}
The following definition essentially formally replaces $\alpha$ in \eqref{fggammaintegralalpha0} with $0$.
\begin{definition}
\label{fuxyguxy2}
For any real numbers $\gamma \in (0, 1)$ and $x, y, u \ge 0$, set
\begin{flalign*}
\widetilde{F}_{\gamma} (u, x, y) = y^{-2\gamma / \alpha} \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{\log \max \{ u/y, x e^s/y \}}^{\infty} e^{-2\gamma t/\alpha} h_0 (s) h_0 (t) dt ds; \\
\widetilde{G}_{\gamma} (u, x, y) = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\log \max \{ u/y, x e^s / y \}} \max \{ u, x e^s \}^{-2 \gamma / \alpha} h_0 (s) h_0 (t) dt ds.
\end{flalign*}
\end{definition}
We establish the following proposition in \Cref{FFGGalpha0Estimate0} below, indicating that one may replace $(F, G)$ with $(\widetilde{F}, \widetilde{G})$ (the latter of which are explicit by \Cref{fgu}).
\begin{prop}
\label{gammaffgg0}
There exists a constant $C > 1$ such that, for any real number $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$ and index $\mathfrak{Y} \in \{ F, G \}$, we have
\begin{flalign*}
\big| \mathfrak{Y}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|^2}{x^{2\gamma/\alpha}}.
\end{flalign*}
\end{prop}
The following two lemmas explicitly evaluate $\widetilde{F}_{\alpha/2}$, $\widetilde{G}_{\alpha/2}$, and $\widetilde{F}_{\alpha} + \widetilde{G}_{\alpha}$.
\begin{lem}
\label{fgu}
For any real numbers $u, x, y \ge 0$, we have
\begin{flalign*}
\widetilde{F}_{\alpha/2} (u, x, y) & = \displaystyle\frac{y}{(x+y)^2} - e^{-(x+y)/u} \bigg( \displaystyle\frac{y}{(x+y)^2} + \displaystyle\frac{y}{(x+y) u} \bigg); \\
\widetilde{G}_{\alpha/2} (u, x, y) & = \displaystyle\frac{x}{(x+y)^2} + e^{-(x+y)/u} \bigg( \displaystyle\frac{y}{u(x+y)} - \displaystyle\frac{x}{(x+y)^2} \bigg).
\end{flalign*}
\end{lem}
\begin{proof}
Changing variables from $(s, t)$ to $(e^{-s}, e^{-t})$ in \Cref{fuxyguxy2} yields
\begin{flalign*}
\widetilde{F}_{\alpha/2} (u, x, y) &= \displaystyle\frac{1}{y} \displaystyle\int_0^{\infty} \displaystyle\int_0^{\min \{ y/u, ys/x \}} t e^{-s-t} dt ds \\
& = \displaystyle\frac{1}{y} \displaystyle\int_{x/u}^{\infty} \displaystyle\int_0^{y/u} t e^{-s-t} dt ds + \displaystyle\frac{1}{y} \displaystyle\int_0^{x/u} \displaystyle\int_0^{ys/x} t e^{-s-t} dt ds \\
& = \displaystyle\frac{1}{y} \displaystyle\int_{x/u}^{\infty} e^{-s} \big( 1 - (yu^{-1} + 1) e^{-y/u} \big) ds + \displaystyle\frac{1}{y} \displaystyle\int_0^{x/u} e^{-s} \big( 1 - (yx^{-1} s + 1) e^{-ys/x} \big) ds \\
& = \displaystyle\frac{e^{-x/u}}{y} \big( 1 - (yu^{-1} + 1) e^{-y/u} \big) + \displaystyle\frac{1}{y} (1 - e^{-x/u}) \\
& \qquad - \displaystyle\frac{x}{y(x+y)} \bigg( 1 - e^{-(x+y)/u} + \displaystyle\frac{y}{x+y} - \displaystyle\frac{y(u+x+y)}{u(x+y)} e^{-(x+y)/u} \bigg),
\end{flalign*}
\noindent which gives the first statement of the lemma. Again changing variables from $(s, t)$ to $(e^{-s}, e^{-t})$ in \Cref{fuxyguxy2} gives
\begin{flalign*}
\widetilde{G}_{\alpha/2} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{\min \{ y/u, ys/x \}}^{\infty} \max \{ u, s^{-1} x \}^{-1} e^{-s-t} dt ds \\
& = \displaystyle\int_0^{x/u} \displaystyle\int_{ys/x}^{\infty} sx^{-1} e^{-s-t} dt ds + \displaystyle\int_{x/u}^{\infty} \displaystyle\int_{y/u}^{\infty} u^{-1} e^{-s-t} dt ds \\
& = \displaystyle\frac{1}{x} \displaystyle\int_0^{x/u} se^{-s-ys/x} ds + \displaystyle\frac{e^{-y/u}}{u} \displaystyle\int_{x/u}^{\infty} e^{-s} ds \\
& = \displaystyle\frac{x}{(x+y)^2} \bigg( 1 - \displaystyle\frac{x+y+u}{u} e^{-(x+y)/u} \bigg) + \displaystyle\frac{1}{u} e^{-(x+y)/u},
\end{flalign*}
\noindent which gives the second statement of the lemma.
\end{proof}
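The closed forms in the lemma above can be checked numerically against the integral representations obtained after the change of variables. The following sketch is our own verification script (not part of the paper; all function names are ours): it evaluates the outer $s$-integrals by composite Simpson quadrature, using the inner identity $\int_0^M t e^{-t}\, dt = 1 - (1+M)e^{-M}$, and compares the result to the stated closed forms.

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule on [a, b] with an even number n of subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

def F_tilde_numeric(u, x, y):
    # (1/y) * \int_0^infty e^{-s} [\int_0^{M(s)} t e^{-t} dt] ds with
    # M(s) = min(y/u, y*s/x); the inner integral equals 1 - (1+M) e^{-M}.
    def g(s):
        M = min(y / u, y * s / x)
        return math.exp(-s) * (1.0 - (1.0 + M) * math.exp(-M))
    return simpson(g, 0.0, 60.0) / y

def F_tilde_closed(u, x, y):
    # Closed form stated in the lemma.
    return y / (x + y) ** 2 - math.exp(-(x + y) / u) * (
        y / (x + y) ** 2 + y / ((x + y) * u))

def G_tilde_numeric(u, x, y):
    # Region s <= x/u evaluated numerically; the region s > x/u equals
    # e^{-(x+y)/u} / u in closed form.
    def g(s):
        return (s / x) * math.exp(-s - y * s / x)
    return simpson(g, 0.0, x / u) + math.exp(-(x + y) / u) / u

def G_tilde_closed(u, x, y):
    # Closed form stated in the lemma.
    return x / (x + y) ** 2 + math.exp(-(x + y) / u) * (
        y / (u * (x + y)) - x / (x + y) ** 2)
```

The two evaluations agree to quadrature accuracy for generic positive $(u, x, y)$.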
\begin{lem}
\label{falphagalpha0}
For any real numbers $u, x, y \ge 0$, we have
\begin{flalign*}
\widetilde{F}_{\alpha} (u, x, y) + \widetilde{G}_{\alpha} (u, x, y) = \displaystyle\frac{2}{(x+y)^2} - 2 \bigg( \displaystyle\frac{1}{u(x+y)} + \displaystyle\frac{1}{(x+y)^2} \bigg) e^{-(x+y)/u}.
\end{flalign*}
\end{lem}
\begin{proof}
Changing variables from $(s, t)$ to $(e^{-s}, e^{-t})$ in \Cref{fuxyguxy2}, we obtain
\begin{flalign*}
\widetilde{F}_{\alpha} (u, x, y) + \widetilde{G}_{\alpha} (u, x, y) & = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\infty} \max \{ u, xe^s, ye^t \}^{-2} h_0 (s) h_0 (t) ds dt \\
& = \displaystyle\int_0^{\infty} \displaystyle\int_0^{\infty} \max \bigg\{ u, \displaystyle\frac{x}{s}, \displaystyle\frac{y}{t} \bigg\}^{-2} e^{-s-t} ds dt = \mathbb{E} \Big[ \min \{ u^{-1}, x^{-1} S', y^{-1} T' \}^2 \Big],
\end{flalign*}
\noindent where $(S', T')$ are independent exponential random variables, each with probability density $e^{-x}$ on $[0, \infty)$. Letting $V = \min \{ x^{-1} S', y^{-1} T' \}$, we find that $V$ is an exponential random variable with rate $x+y$, since $\mathbb{P} [V \ge t] = \mathbb{P} [S' \ge xt] \cdot \mathbb{P} [T' \ge yt] = e^{-(x+y) t}$. Hence,
\begin{flalign*}
\widetilde{F}_{\alpha} (u, x, y) + \widetilde{G}_{\alpha} (u, x, y) & = \mathbb{E} \big[ \min \{ u^{-1}, V \}^2 \big] \\
& = \displaystyle\int_0^{1/u} (x+y) v^2 e^{-(x+y) v} dv + u^{-2} \displaystyle\int_{1/u}^{\infty} (x+y) e^{-(x+y) v} dv \\
& = \displaystyle\frac{1}{(x+y)^2} \displaystyle\int_0^{(x+y)/u} v^2 e^{-v} dv + u^{-2} e^{-(x+y)/u} \\
& = \displaystyle\frac{1}{(x+y)^2} \bigg( 2 - \bigg( \displaystyle\frac{(x+y)^2}{u^2} + \displaystyle\frac{2(x+y)}{u} + 2 \bigg) e^{-(x+y)/u} \bigg) \\
& \qquad + u^{-2} e^{-(x+y)/u},
\end{flalign*}
\noindent from which we deduce the lemma.
\end{proof}
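The identity just proved can likewise be sanity-checked numerically. The script below (ours, not from the paper) compares a direct quadrature of $\mathbb{E}\big[\min\{u^{-1}, V\}^2\big]$, for $V$ exponential with rate $x+y$, against the stated closed form.

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule on [a, b] with an even number n of subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

def lhs_numeric(u, x, y):
    # E[min(1/u, V)^2] with V exponential of rate x + y, by quadrature.
    r = x + y
    f = lambda v: min(1.0 / u, v) ** 2 * r * math.exp(-r * v)
    return simpson(f, 0.0, 80.0 / r)

def rhs_closed(u, x, y):
    # Closed form stated in the lemma for F_tilde_alpha + G_tilde_alpha.
    r = x + y
    return 2.0 / r ** 2 - 2.0 * (1.0 / (u * r) + 1.0 / r ** 2) * math.exp(-r / u)
```

Both sides agree to quadrature accuracy, matching the truncated-gamma computation in the proof.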
\subsection{Replacement of $h_{\alpha/2}$ With $h_0$}
\label{Replaceh0halpha20}
The following definition replaces the densities $h_{\alpha/2}$ appearing in \eqref{fggammaintegralalpha0} with $h_0$.
\begin{definition}
\label{hhfg}
For any real numbers $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$ and $x, y, u \ge 0$ satisfying \eqref{xyualpha0}, set
\begin{flalign*}
\breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{(\alpha/2) \log (y^{-2/\alpha} u^{2/\alpha} + x^{2/\alpha} y^{-2/\alpha} e^{2s/\alpha})}^{\infty} |u^{2/\alpha} + x^{2/\alpha} e^{2s/\alpha} - y^{2/\alpha} e^{2t/\alpha}|^{-\gamma} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times h_0 (s) h_0 (t) dt ds \\
\breve{G}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{(\alpha/2) \log (y^{-2/\alpha} u^{2/\alpha} + x^{2/\alpha} y^{-2/\alpha} e^{2s/\alpha})} |u^{2/\alpha} + x^{2/\alpha} e^{2s/\alpha} - y^{2/\alpha} e^{2t/\alpha}|^{-\gamma} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times h_0 (s) h_0 (t) dt ds.
\end{flalign*}
\end{definition}
In this section we establish the below lemma showing that $(F_{\gamma}, G_{\gamma}) \approx (\breve{F}_{\gamma}, \breve{G}_{\gamma})$.
\begin{lem}
\label{ffgg0}
There exists a constant $C > 1$ such that, for any real number $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$ and any index $\mathfrak{Y} \in \{ F, G \}$, we have
\begin{flalign*}
\big| \mathfrak{Y}_{\gamma} (u^{2/\alpha}, x, y) - \breve{\mathfrak{Y}}_{\gamma} (u^{2/\alpha}, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|^2}{x^{2\gamma / \alpha}}.
\end{flalign*}
\end{lem}
\begin{proof}
We only analyze the case $\mathfrak{Y} = F$, as the proof is entirely analogous if $\mathfrak{Y} = G$. By changing variables from $(s, t)$ to $\big( s - \log x, t - \log y + \frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \big)$ in \eqref{fggammaintegralalpha0}, we obtain
\begin{flalign}
\label{fgammaintegral2alpha00}
\begin{aligned}
F_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} (e^{2t/\alpha} - 1)^{-\gamma} h_{\alpha/2} (s - \log x) \\
& \qquad \qquad \quad \times h_{\alpha/2} \bigg( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) ds dt,
\end{aligned}
\end{flalign}
\noindent and, by similar reasoning,
\begin{flalign*}
\breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} (e^{2t/\alpha} - 1)^{-\gamma} h_0 (s - \log x) \\
& \qquad \qquad \qquad \quad \times h_0 \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) ds dt.
\end{flalign*}
\noindent Subtracting; using the bound $(u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} \le e^{-2s\gamma / \alpha}$; and recalling $\Upsilon_{\alpha/2}$ from \Cref{halpha} yields
\begin{flalign*}
\big| & F_{\gamma} (u^{2/\alpha}, x, y) - \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| \\
& \le \alpha^2 \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2s\gamma / \alpha} (e^{2t/\alpha} - 1)^{-\gamma} h_0 (s - \log x) \Bigg| \Upsilon_{\alpha/2} \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg| ds dt \\
& \quad + \alpha^2 \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2s\gamma / \alpha} (e^{2t/\alpha} - 1)^{-\gamma} \big| \Upsilon_{\alpha/2} (s-\log x) \big| h_0 \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) ds dt \\
& \quad + \alpha^4 \displaystyle\int_{-\infty}^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2s \gamma / \alpha} (e^{2t/\alpha} - 1)^{-\gamma} \\
& \qquad \qquad \qquad \times \Bigg| \Upsilon_{\alpha/2} (s - \log x) \Upsilon_{\alpha/2} \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg| ds dt.
\end{flalign*}
\noindent Using \Cref{integralk} to integrate in $t$, it follows that
\begin{flalign*}
\big| F_{\gamma} (u^{2/\alpha}, x, y) - \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| & \le C \alpha^2 \displaystyle\int_{-\infty}^{\infty} e^{-2s \gamma / \alpha} \Big( \big| h_0 (s - \log x) \big| + \big| \Upsilon_{\alpha/2} (s - \log x) \big| \Big) ds \\
& \le \displaystyle\frac{C \alpha^2}{x^{2 \gamma / \alpha}} \displaystyle\int_{-\infty}^{\infty} e^{-2s \gamma / \alpha} \Big( h_0 (s) + \big| \Upsilon_{\alpha/2} (s) \big| \Big) ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|^2}{x^{2\gamma/\alpha}},
\end{flalign*}
\noindent where in the second bound we changed variables from $s$ to $s + \log x$, and in the third we used \eqref{exponentialhh}. This establishes the lemma.
\end{proof}
\subsection{Replacement of $(\breve{F}, \breve{G})$ With $(\widetilde{F}, \widetilde{G})$}
\label{FFGGalpha0Estimate0}
In this section we establish \Cref{gammaffgg0}. To that end, we first require the following definition.
\begin{definition}
\label{psi}
For any real numbers $u, s \ge 0$, set
\begin{flalign}
\label{psius}
\psi_{\alpha/2} (u, s) = (u^{2/\alpha} + e^{2s/\alpha})^{-\alpha/2}; \qquad \psi_0 (u, s) = \max \{ u, e^s \}^{-1}.
\end{flalign}
\end{definition}
\noindent We then have the following two lemmas approximating powers of sums of exponentials by exponentials.
\begin{lem}
\label{talpha0}
There exists a constant $C > 1$ such that, for any $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$, we have
\begin{flalign}
\label{talphae}
\begin{aligned}
\big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| & \le Ct^{-\gamma} \cdot \mathbbm{1}_{t \le \alpha^3} + C \alpha |\log \alpha| \cdot \mathbbm{1}_{t \le \alpha} + C \alpha e^{-2t / \alpha} \cdot \mathbbm{1}_{t > \alpha}, \qquad \quad \text{if $t \ge 0$}; \\
\big| (1 - e^{2t/\alpha})^{-\gamma} - 1 \big| & \le C |t|^{-\gamma} \cdot \mathbbm{1}_{t \ge -\alpha^3} + C \alpha |\log \alpha| \cdot \mathbbm{1}_{t \ge -\alpha} + C \alpha e^{2t/\alpha} \cdot \mathbbm{1}_{t \le -\alpha}, \qquad \text{if $t \le 0$}.
\end{aligned}
\end{flalign}
\end{lem}
\begin{proof}
Observe that the second statement of the lemma implies the first, since
\begin{flalign*}
\big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma/\alpha} \big| = e^{-2t\gamma / \alpha} \cdot \big| (1 - e^{-2t/\alpha})^{-\gamma} - 1 \big| \le \big| (1 - e^{-2t/\alpha})^{-\gamma} - 1 \big|,
\end{flalign*}
\noindent so it suffices to establish the second. To that end, first assume that $t \le -\alpha$. Then, a Taylor expansion implies that $(1 - e^{2t/\alpha})^{-\gamma} = 1 + \gamma e^{2t/\alpha} + O(\gamma^2)$, which since $\gamma \le \alpha$ implies the second estimate in \eqref{talphae}. Next, assume that $-\alpha \le t \le -\alpha^3$. Then, a Taylor expansion yields
\begin{flalign*}
(1 - e^{2t/\alpha})^{-\gamma} = \bigg( \frac{2|t|}{\alpha} + O \Big( \frac{t^2}{\alpha^2} \Big) \bigg)^{-\gamma} = 1 + O \big( \alpha |\log \alpha| \big),
\end{flalign*}
\noindent from which we again deduce the second estimate in \eqref{talphae}. If instead $-\alpha^3 \le t \le 0$, then a Taylor expansion gives $(1 - e^{2t/\alpha})^{-\gamma} = O (|t|^{-\gamma} \cdot \alpha^{\gamma}) = O (|t|^{-\gamma})$, which establishes \eqref{talphae} in this last case.
\end{proof}
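The three-regime estimate \eqref{talphae} can be spot-checked numerically. The script below is our own illustration (the constant $C = 5$ and the sample points are our illustrative choices, not the lemma's constant); it verifies the second estimate at $\alpha = 0.01$ and $\gamma = \alpha$.

```python
import math

def lemma_lhs(t, alpha):
    # Left side of the second estimate, at gamma = alpha and t <= 0.
    gamma = alpha
    return abs((1.0 - math.exp(2.0 * t / alpha)) ** (-gamma) - 1.0)

def lemma_rhs(t, alpha, C=5.0):
    # Right side of the second estimate, with an illustrative constant C.
    gamma = alpha
    rhs = 0.0
    if t >= -alpha ** 3:
        rhs += C * abs(t) ** (-gamma)
    if t >= -alpha:
        rhs += C * alpha * abs(math.log(alpha))
    if t <= -alpha:
        rhs += C * alpha * math.exp(2.0 * t / alpha)
    return rhs

def check(alpha=0.01):
    # Sample points in each regime: t <= -alpha, -alpha <= t <= -alpha^3,
    # and -alpha^3 <= t < 0.
    samples = [-1.0, -0.5, -0.05, -alpha, -alpha / 2, -1e-5, -alpha ** 3, -1e-7]
    return all(lemma_lhs(t, alpha) <= lemma_rhs(t, alpha) for t in samples)
```

Such a check is of course no substitute for the Taylor-expansion argument above; it merely confirms the shape of the bound at a fixed $\alpha$.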
\begin{lem}
\label{exponentialsu}
There exists a constant $C > 1$ such that the following holds for any $s \ge 0$. Denoting
\begin{flalign}
\label{su0}
\varpi = \varpi (s, u) = \frac{e^s}{u} \cdot \mathbbm{1}_{u \ge e^s} + \frac{u}{e^s} \cdot \mathbbm{1}_{u < e^s} \le 1,
\end{flalign}
\noindent we have for any real number $\gamma \in \big[ \frac{\alpha}{2}, \alpha \big]$ that
\begin{flalign}
\label{psialpha2psi0}
\begin{aligned}
& \big| \log \psi_{\alpha/2} (u,s) - \log \psi_0 (u, s) \big|\le C \alpha \varpi^{2/\alpha}; \\
& \big| \psi_{\alpha/2} (u, s)^{2\gamma/\alpha} - \psi_0 (u, s)^{2\gamma / \alpha} \big| \le C\alpha \varpi^{2/\alpha} \cdot \psi_0 (u, s)^{2 \gamma / \alpha}; \\
& \big| \partial_u \psi_{\alpha/2} (u, s)^{2\gamma / \alpha} - \partial_u \psi_0 (u, s)^{2\gamma/\alpha} \big| \le C u^{-2\gamma / \alpha - 1} \varpi^{2/\alpha}.
\end{aligned}
\end{flalign}
\end{lem}
\begin{proof}
Since $\varpi \le 1$, the first statement follows from the second (at $\gamma = \frac{\alpha}{2}$). To verify the second, we estimate
\begin{flalign*}
\big| \psi_{\alpha/2} (u, s)^{2\gamma/\alpha} - \psi_0 (u, s)^{2 \gamma / \alpha} \big| = \psi_0 (u,s)^{2 \gamma / \alpha} \cdot \big| (1 + \varpi^{2/\alpha})^{-\gamma} - 1 \big| \le C \alpha \varpi^{2/\alpha} \cdot \psi_0 (u, s)^{2 \gamma / \alpha}.
\end{flalign*}
\noindent So, it remains to verify the third, to which end observe that
\begin{flalign*}
\partial_u \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} & = -\displaystyle\frac{2 \gamma}{\alpha} u^{-2 \gamma /\alpha-1} \cdot (1 + u^{-2/\alpha} e^{2s/\alpha})^{-\gamma-1}; \\
\partial_u \psi_0 (u, s)^{2 \gamma / \alpha} & = - \displaystyle\frac{2\gamma}{\alpha} u^{-2\gamma / \alpha - 1} \cdot \mathbbm{1}_{u > e^s}.
\end{flalign*}
\noindent This, together with the bounds $\gamma \le \alpha$ and
\begin{flalign*}
\big| (1 + u^{-2/\alpha} e^{2s/\alpha})^{-\gamma - 1} - \mathbbm{1}_{u > e^s} \big| \le C \varpi^{2/\alpha}
\end{flalign*}
\noindent yields the third statement of the lemma.
\end{proof}
Now we can establish \Cref{gammaffgg0}.
\begin{proof}[Proof of \Cref{gammaffgg0}]
In view of \Cref{ffgg0}, it suffices to show that
\begin{flalign}
\label{hzk00}
\big| \breve{\mathfrak{Y}}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|^2}{x^{2 \gamma / \alpha}}.
\end{flalign}
\noindent As in the proof of \Cref{ffgg0}, we assume $\mathfrak{Y} = F$, as the proof when $\mathfrak{Y} = G$ is entirely analogous (using the second bound in \eqref{talphae} instead of the first below). Following \eqref{fgammaintegral2alpha00} and using \eqref{psius}, we find
\begin{flalign}
\label{fgammafgamma0}
\begin{aligned}
\breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} h_0 (s - \log x) \\
& \qquad \qquad \qquad \times h_0 \big( t - \log y - \log \psi_{\alpha/2} (u,s) \big) ds dt; \\
\widetilde{F}_{\gamma} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t\gamma / \alpha} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_0 (u, s) \big) ds dt.
\end{aligned}
\end{flalign}
\noindent Further setting
\begin{flalign*}
\widetilde{F}_{\gamma, 1} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma/\alpha} \psi_{\alpha/2} (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt; \\
\widetilde{F}_{\gamma, 2} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt,
\end{flalign*}
\noindent we have
\begin{flalign}
\label{ff1f20}
\begin{aligned}
\big| \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{F}_{\gamma} (u, x, y) \big| & \le \big| \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{F}_{\gamma, 1} (u, x, y) \big| + \big| \widetilde{F}_{\gamma, 1} (u, x, y) - \widetilde{F}_{\gamma, 2} (u, x, y) \big| \\
& \qquad + \big| \widetilde{F}_{\gamma, 2} (u, x, y) - \widetilde{F}_{\gamma} (u, x, y) \big|.
\end{aligned}
\end{flalign}
To bound the first term on the right side of \eqref{ff1f20}, observe since $\psi_{\alpha/2} (u, s) \le e^{-s}$ that
\begin{flalign}
\label{fgammafgamma10}
\begin{aligned}
\big| \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{F}_{\gamma, 1} (u, x, y) \big| & \le \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2s\gamma / \alpha} h_0 (s - \log x) \cdot \big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \\
& \qquad \qquad \qquad \qquad \times h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt.
\end{aligned}
\end{flalign}
\noindent By \Cref{talpha0} and the fact that $\sup_{w \in \mathbb{R}} h_0 (w) \le C$, we have
\begin{flalign*}
\begin{aligned}
\displaystyle\int_0^{\infty} & \big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) dt \\
& \le C \displaystyle\int_0^{\infty} \big( |t|^{-\gamma} \cdot \mathbbm{1}_{t \le \alpha^3} + \alpha |\log \alpha| \cdot \mathbbm{1}_{t \le \alpha} + \alpha e^{-2t / \alpha} \cdot \mathbbm{1}_{t > \alpha} \big) dt \le C \alpha^2 |\log \alpha|,
\end{aligned}
\end{flalign*}
\noindent which, together with \eqref{fgammafgamma10} and the change of variables (sending $s$ to $s + \log x$), implies
\begin{flalign}
\label{ffgamma10}
\big| \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \widetilde{F}_{\gamma, 1} (u, x, y) \big| \le C \alpha^2 |\log \alpha| \displaystyle\int_{-\infty}^{\infty} e^{-2s\gamma / \alpha} h_0 (s - \log x) ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{x^{2 \gamma / \alpha}}.
\end{flalign}
To bound the second term in \eqref{ff1f20}, observe from \Cref{integralk} that
\begin{flalign*}
\big| \widetilde{F}_{\gamma, 1} (u, x, y) - \widetilde{F}_{\gamma, 2} (u, x, y) \big| & \le \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \cdot h_0 \big(t - \log y - \log \psi_{\alpha/2} (u, s) \big) \\
& \qquad \qquad \quad \times h_0 (s - \log x) \cdot \big| \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} - \psi_0 (u, s)^{2\gamma / \alpha} \big| ds dt \\
& \qquad \le C \displaystyle\int_{-\infty}^{\infty} h_0 (s-\log x) \cdot \big| \psi_{\alpha/2} (u, s)^{2\gamma / \alpha} - \psi_0 (u, s)^{2 \gamma / \alpha} \big| ds.
\end{flalign*}
\noindent This; \Cref{exponentialsu}; the bound $\psi_0 (u, s) \le e^{-s}$; the fact that $\varpi^{2/\alpha} \le C \alpha^2$ (recall \eqref{su0}) when $|s - \log u| \ge C \alpha |\log \alpha|$ (as then $\varpi \le 1 - C \alpha |\log \alpha| + O \big( \alpha^2 |\log \alpha| \big)$); the estimate $\sup_{w \in \mathbb{R}} h_0 (w) \le C$ then together yield
\begin{flalign}
\label{fgamma1gamma0}
\begin{aligned}
\big| & \widetilde{F}_{\gamma, 1} (u, x, y) - \widetilde{F}_{\gamma, 2} (u, x, y) \big| \\
& \le C \alpha \displaystyle\int_{|s - \log u| \le C \alpha |\log \alpha|} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) ds + C \alpha^2 \displaystyle\int_{-\infty}^{\infty} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) ds \\
& \le C \alpha \displaystyle\int_{|s - \log u| \le C \alpha |\log \alpha|} e^{-2 s \gamma / \alpha} h_0 (s - \log x) ds + C \alpha^2 \displaystyle\int_{-\infty}^{\infty} e^{-2 s \gamma / \alpha} h_0 (s - \log x) ds \\
& \le \displaystyle\frac{C \alpha}{x^{2 \gamma / \alpha}} \displaystyle\int_{|s - \log u + \log x| \le C \alpha |\log \alpha|} e^{-2s \gamma / \alpha} h_0 (s) ds + \displaystyle\frac{C \alpha^2}{x^{2 \gamma / \alpha}} \displaystyle\int_{-\infty}^{\infty} e^{-2s \gamma / \alpha} h_0 (s) ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{x^{2 \gamma / \alpha}}.
\end{aligned}
\end{flalign}
To bound the third term in \eqref{ff1f20}, we first use the fact that $h_0$ is $1$-Lipschitz to find that
\begin{flalign*}
\big| \widetilde{F}_{\gamma, 2} & (u, x, y) - \widetilde{F}_{\gamma} (u, x, y) \big| \\
& \le \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \cdot \psi_0 (u, s)^{2 \gamma / \alpha} \cdot h_0 (s - \log x) \cdot \big| \log \psi_{\alpha/2} (u, s) - \log \psi_0 (u, s) \big| ds dt \\
& = \displaystyle\frac{\alpha}{2 \gamma} \displaystyle\int_{-\infty}^{\infty} \psi_0 (u, s)^{2\gamma / \alpha} \cdot h_0 (s - \log x) \cdot \big| \log \psi_{\alpha/2} (u, s) - \log \psi_0 (u, s) \big|ds.
\end{flalign*}
\noindent By \Cref{exponentialsu}, the bound $\varpi^{2/\alpha} \le C \alpha^2$ (recall \eqref{su0}) when $|s - \log u| \ge C \alpha |\log \alpha|$ (by the above), and the fact that $\psi_0 (u, s) \le e^{-s}$, it follows analogously to \eqref{fgamma1gamma0} that
\begin{flalign}
\label{fgamma2gamma0}
\big| \widetilde{F}_{\gamma, 2} (u, x, y) - \widetilde{F}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha}{\gamma} \displaystyle\int_{-\infty}^{\infty} e^{-2s \gamma / \alpha} h_0 (s - \log x) \cdot \varpi^{2/\alpha} ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{x^{2 \gamma / \alpha}}.
\end{flalign}
\noindent The proposition then follows from combining \eqref{ff1f20}, \eqref{ffgamma10}, \eqref{fgamma1gamma0}, and \eqref{fgamma2gamma0}.
\end{proof}
\subsection{Scaling of the Mobility Edge}
\label{Alpha0E0Scale}
In this section we establish the scaling for a mobility edge $E_0$ if $\alpha$ is small; in what follows, we recall $a(E)$ and $b(E)$ from \Cref{opaque}.
\begin{thm}
\label{e0alpha0u}
There exist constants $c > 0$ and $C > 1$ such that the following holds for $\alpha \in (0, c)$. Letting $E_0 > 0$ be some real number such that $\lambda (E_0, \alpha) = 1$, we have
\begin{flalign*}
\bigg( \displaystyle\frac{1}{|\log \alpha|} - \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2} \bigg)^{2/\alpha} \le E_0 \le \bigg( \displaystyle\frac{1}{|\log \alpha|} + \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2} \bigg)^{2/\alpha}.
\end{flalign*}
\end{thm}
\begin{proof}[Proof of \Cref{e0alpha0u}]
By \eqref{fba}, \Cref{gammaffgg0}, \Cref{fgu}, \Cref{lambdaalpha01}, and \Cref{abestimatealpha0}, we have
\begin{flalign}
\label{abalpha0}
\begin{aligned}
a & = \displaystyle\frac{b}{(a+b)^2} - e^{-(a+b)/u} \bigg( \displaystyle\frac{b}{(a+b)^2} + \displaystyle\frac{b}{u(a+b)} \bigg) + O \big( \alpha^2 |\log \alpha|^2 \big); \\
b & = \displaystyle\frac{a}{(a+b)^2} + e^{-(a+b)/u} \bigg( \displaystyle\frac{b}{u(a+b)} - \displaystyle\frac{a}{(a+b)^2} \bigg) + O \big( \alpha^2 |\log \alpha|^2 \big),
\end{aligned}
\end{flalign}
\noindent where we have set $a = a(E_0)$, $b = b(E_0)$, and $E_0 = u^{2/\alpha}$. Summing these two equations, and setting $d = d(E_0) = a + b$, we find
\begin{flalign}
\label{d1alpha0}
d = d^{-1} ( 1 - e^{-d/u}) + O \big( \alpha^2 |\log \alpha|^2 \big).
\end{flalign}
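\noindent Indeed, upon summing the two equations in \eqref{abalpha0}, the terms $e^{-(a+b)/u} \cdot b / \big( u(a+b) \big)$ appear with opposite signs and cancel, so that
\begin{flalign*}
a + b = \displaystyle\frac{a+b}{(a+b)^2} - e^{-(a+b)/u} \cdot \displaystyle\frac{a+b}{(a+b)^2} + O \big( \alpha^2 |\log \alpha|^2 \big) = \displaystyle\frac{1 - e^{-d/u}}{d} + O \big( \alpha^2 |\log \alpha|^2 \big),
\end{flalign*}
\noindent which is \eqref{d1alpha0}.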
\noindent Moreover, applying \Cref{lambdaalpha0}, \eqref{fggamma}, \Cref{gammaffgg0}, and \Cref{falphagalpha0} yields
\begin{flalign}
\label{d2alpha0}
\begin{aligned}
2 - c_{\star} \alpha + O (\alpha^2) = \mathbb{E} \Big[ \big| R(E_0) \big|^{\alpha} \Big] & = F_{\alpha} (E_0, a, b) + G_{\alpha} (E_0, a, b) \\
& = 2d^{-2} - 2 e^{-d/u} ( u^{-1} d^{-1} + d^{-2}) + O \big( \alpha^2 |\log \alpha|^2 \big),
\end{aligned}
\end{flalign}
\noindent where $c_{\star} = 4 \log 2 + \pi$. Inserting \eqref{d1alpha0} into \eqref{d2alpha0}, and using the fact that $c \le d \le C$ (by \Cref{abestimatealpha0}) yields
\begin{flalign}
\label{d3alpha0}
e^{-d/u} u^{-1} = c_{\star} \alpha d + O \big( \alpha^2 |\log \alpha|^2 \big).
\end{flalign}
\noindent Again since $c \le d \le C$ (by \Cref{abestimatealpha0}), this implies that $c |\log \alpha|^{-1} \le u \le C |\log \alpha|^{-1}$. Inserting this bound into \eqref{d1alpha0}, it follows that $|d-1| \le C \alpha^c$, which by \eqref{d3alpha0} yields
\begin{flalign*}
\displaystyle\frac{1}{|\log \alpha|} - \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2} \le u \le \displaystyle\frac{1}{|\log \alpha|} + \displaystyle\frac{C \log |\log \alpha|}{|\log \alpha|^2},
\end{flalign*}
\noindent which establishes the theorem.
\end{proof}
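\noindent Let us also record the elementary asymptotics behind the final step of the above proof. Taking logarithms in \eqref{d3alpha0}, and using the bounds $|d - 1| \le C \alpha^c$ and $c |\log \alpha|^{-1} \le u \le C |\log \alpha|^{-1}$, we find
\begin{flalign*}
\displaystyle\frac{d}{u} = \log (u^{-1}) + |\log \alpha| - \log (c_{\star} d) + O \big( \alpha |\log \alpha|^2 \big) = |\log \alpha| + \log |\log \alpha| + O(1).
\end{flalign*}
\noindent Hence $u^{-1} = |\log \alpha| + \log |\log \alpha| + O(1)$, and inverting gives
\begin{flalign*}
u = \displaystyle\frac{1}{|\log \alpha|} - \displaystyle\frac{\log |\log \alpha|}{|\log \alpha|^2} + O \big( |\log \alpha|^{-2} \big),
\end{flalign*}
\noindent in accordance with the statement of \Cref{e0alpha0u}.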
We further include the following lemma, which will be useful in \Cref{E0Unique} below.
\begin{lem}
\label{abdalpha0}
There exist constants $c > 0$ and $C > 1$ such that the following holds for $\alpha \in (0, c)$. Let $E_0 > 0$ be a real number; abbreviate $a = a(E_0)$ and $b = b(E_0)$; and set $u = E_0^{\alpha/2}$. If
\begin{flalign}
\label{ualpha0}
\displaystyle\frac{9}{10} \cdot |\log \alpha|^{-1} \le u \le \displaystyle\frac{11}{10} \cdot |\log \alpha|^{-1},
\end{flalign}
\noindent then
\begin{flalign*}
\bigg| a - \displaystyle\frac{1}{2} \bigg| \le C \alpha^{8/9}; \qquad \bigg|b - \displaystyle\frac{1}{2} \bigg| \le C \alpha^{8/9}.
\end{flalign*}
\end{lem}
\begin{proof}
Throughout this proof, we once again set $d = a + b$. Applying \eqref{fba}, \Cref{gammaffgg0}, \Cref{fgu}, \eqref{ualpha0}, and \Cref{abestimatealpha0} yields \eqref{abalpha0}. This yields \eqref{d1alpha0}, which, together with \Cref{abestimatealpha0} (and \eqref{ualpha0}), gives $|d-1| \le C \alpha^c$; inserting this again into \eqref{d1alpha0} and applying \eqref{ualpha0}, we obtain $|a + b - 1| \le C \alpha^{8/9}$. Applying these bounds in \eqref{abalpha0} yields
\begin{flalign*}
a & = b \big( 1 + O (\alpha^{8/9}) \big) + O \big( \alpha^{8/9} \big); \\
b \big( 1 - O ( \alpha^{8/9}) \big) & = a \big( 1 + O(\alpha^{8/9}) \big) + O \big( \alpha^{8/9} \big).
\end{flalign*}
\noindent This, together with the fact that $|a + b - 1| \le C \alpha^{8/9}$, implies the lemma.
\end{proof}
\section{Uniqueness Near Zero}
\label{E0Unique}
In this section we establish the second part of \Cref{t:main2}, showing for $\alpha$ small that there exists a unique real number $E_{\mathrm{mob}} > 0$ such that $\lambda (E_{\mathrm{mob}}, \alpha) = 1$. As in \Cref{Alpha0Scaling}, this will proceed by approximating the functions $F$ and $G$ from \eqref{fggamma} by more explicit quantities, though now we must also approximate their derivatives. Throughout this section, we adopt the notation and conventions described at the beginning of \Cref{Alpha0Scaling}. We further assume (which will be justified by \Cref{e0alpha0u} and \Cref{abdalpha0}) in the below that
\begin{flalign}
\label{xyualpha0}
\gamma \in \bigg[ \frac{\alpha}{2}, \alpha \bigg]; \qquad x, y \in \bigg[ \displaystyle\frac{1}{4}, \displaystyle\frac{3}{4} \bigg]; \qquad \displaystyle\frac{99}{100} \cdot |\log \alpha|^{-1} \le u \le \displaystyle\frac{101}{100} \cdot |\log \alpha|^{-1},
\end{flalign}
\noindent which will be in effect even when not stated explicitly. In \Cref{Replaceh0halpha2} we estimate the error in replacing the derivatives of $(F, G)$ with those of $(\breve{F}, \breve{G})$ (recall \Cref{hhfg}); in \Cref{FFGGalpha0Estimate} we estimate the error in replacing the derivatives of $(\breve{F}, \breve{G})$ by $(\widetilde{F}, \widetilde{G})$ (recall \Cref{fuxyguxy2}). We then bound the derivatives of $a$ and $b$ (recall \Cref{opaque}) in \Cref{EstimateAlpha0ab} and establish the second part of \Cref{t:main2} in \Cref{Alpha0Unique}. Throughout this section, constants $c > 0$ and $C > 1$ will be independent of $\alpha$.
\subsection{Replacement of $(F, G)$ With $(\breve{F}, \breve{G})$}
\label{Replaceh0halpha2}
In this section we establish the below lemma stating that the first derivatives of $(F_{\gamma}, G_{\gamma})$ can be approximated by those of $(\breve{F}_{\gamma}, \breve{G}_{\gamma})$, where we recall the definition of the latter from \Cref{hhfg}.
\begin{lem}
\label{ffgg}
There exists a constant $C > 1$ such that, for any indices $\mathfrak{z} \in \{ u, x, y \}$ and $\mathfrak{Y} \in \{ F, G \}$, we have
\begin{flalign*}
\big| \partial_{\mathfrak{z}} \ \mathfrak{Y}_{\gamma} (u^{2/\alpha}, x, y) - \partial_{\mathfrak{z}} \breve{\mathfrak{Y}}_{\gamma} (u^{2/\alpha}, x, y) \big| \le \displaystyle\frac{C \alpha^2}{\mathfrak{z} u^2}.
\end{flalign*}
\end{lem}
\begin{proof}
We only analyze the case $\mathfrak{Y} = F$, as the proof is entirely analogous if $\mathfrak{Y} = G$. Let us first address the case when $\mathfrak{z} \in \{ x, y \}$; these situations are very similar, so we only consider $\mathfrak{z} = x$. By \eqref{fgammaintegral2alpha00}, we have
\begin{flalign*}
\partial_x F_{\gamma} (u^{2/\alpha}, x, y) & = - \displaystyle\frac{1}{x} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} (e^{2t/\alpha} - 1)^{-\gamma} h_{\alpha/2}' (s - \log x) \\
& \qquad \qquad \qquad \quad \times h_{\alpha/2} \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) dt ds,
\end{flalign*}
\noindent and, by similar reasoning,
\begin{flalign*}
\partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = - \displaystyle\frac{1}{x} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} (e^{2t/\alpha} - 1)^{-\gamma} h_0' (s - \log x) \\
& \qquad \qquad \qquad \quad \times h_0 \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) ds dt.
\end{flalign*}
\noindent Subtracting; using the bound $(u^{2/\alpha} + e^{2s/\alpha})^{-\gamma} \le u^{-2\gamma / \alpha} \le u^{-2}$ (as $\gamma \le \alpha$); and recalling $\Upsilon_{\alpha/2}$ from \Cref{halpha} yields
\begin{flalign*}
\big| & \partial_x F_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| \\
& \le \displaystyle\frac{\alpha^2}{xu^2} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \Bigg| h_0' (s - \log x) \Upsilon_{\alpha/2} \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg| ds dt \\
& \quad + \displaystyle\frac{\alpha^2}{xu^2} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \big| \Upsilon_{\alpha/2}' (s-\log x) \big| h_0 \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) ds dt \\
& \quad + \displaystyle\frac{\alpha^4}{xu^2} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \Bigg| \Upsilon_{\alpha/2}' (s - \log x) \Upsilon_{\alpha/2} \bigg(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg| ds dt.
\end{flalign*}
\noindent Using \Cref{integralk} to integrate in $t$, it follows that
\begin{flalign*}
\big| \partial_x & F_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| \\
& \le \displaystyle\frac{C \alpha^2}{xu^2} \displaystyle\int_{-\infty}^{\infty} \Big( \big| h_0' (s - \log x) \big| + \big| \Upsilon_{\alpha/2} (s - \log x) \big| + \big| \Upsilon_{\alpha/2}' (s - \log x) \big| \Big) ds \le \displaystyle\frac{C \alpha^2}{xu^2},
\end{flalign*}
\noindent where in the last bound we applied \Cref{w0walpha}. This establishes the $\mathfrak{z} = x$ case of the lemma.
Next, we address the case $\mathfrak{z} = u$. Differentiating \eqref{fgammaintegral2alpha00} in $u$, we obtain
\begin{flalign*}
\partial_u F_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} u^{2/\alpha-1} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma-1} (e^{2t/\alpha} - 1)^{-\gamma} h_{\alpha/2} (s - \log x) \\
& \qquad \qquad \quad \times \Bigg( h_{\alpha/2}' \bigg( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \\
& \qquad \qquad \qquad \qquad - \displaystyle\frac{2\gamma}{\alpha} \cdot h_{\alpha/2} \bigg( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg) ds dt.
\end{flalign*}
\noindent By similar reasoning,
\begin{flalign*}
\partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} u^{2/\alpha - 1} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma-1} (e^{2t/\alpha} - 1)^{-\gamma} h_0 (s - \log x) \\
& \qquad \qquad \quad \times \Bigg( h_0' \bigg( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \\
& \qquad \qquad \qquad \qquad - \displaystyle\frac{2\gamma}{\alpha} \cdot h_0 \bigg( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \bigg) \Bigg) ds dt.
\end{flalign*}
\noindent Subtracting, and using the bound
\begin{flalign*}
u^{2/\alpha-1} (u^{2/\alpha} + e^{2s/\alpha})^{-\gamma-1} \le u^{-2\gamma/\alpha-1} \le u^{-3},
\end{flalign*}
\noindent it follows, since $\gamma \le \alpha$, that
\begin{flalign*}
\big| & \partial_u F_{\gamma} (u^{2/\alpha}, x, y) - \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| \\
& \le \displaystyle\frac{\alpha^2}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} \bigg| \Upsilon_{\alpha/2} (s-\log x) h_{\alpha/2}' \Big(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) \bigg| ds dt \\
& \quad + \displaystyle\frac{\alpha^2}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} h_0 (s-\log x) \bigg| \Upsilon_{\alpha/2}' \Big(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) \bigg| ds dt \\
& \quad + \displaystyle\frac{2 \alpha^2}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} \big| \Upsilon_{\alpha/2} (s - \log x) \big| h_0 \Big( t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) ds dt \\
& \quad + \displaystyle\frac{2 \alpha^2}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} h_0 (s-\log x) \bigg| \Upsilon_{\alpha/2} \Big(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) \bigg| ds dt \\
& \quad + \displaystyle\frac{\alpha^4}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} \bigg| \Upsilon_{\alpha/2} (s-\log x) \Upsilon_{\alpha/2}' \Big(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) \bigg| ds dt \\
& \quad + \displaystyle\frac{2 \alpha^4}{u^3} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha}-1)^{-\gamma} \bigg| \Upsilon_{\alpha/2} (s-\log x) \Upsilon_{\alpha/2} \Big(t - \log y + \displaystyle\frac{\alpha}{2} \log (u^{2/\alpha} + e^{2s/\alpha}) \Big) \bigg| ds dt.
\end{flalign*}
\noindent Again using \Cref{integralk} to first integrate with respect to $t$, and then \Cref{w0walpha} to integrate with respect to $s$, we obtain
\begin{flalign*}
\big| \partial_u F_{\gamma} (u^{2/\alpha}, x, y) - \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) \big| & \le \displaystyle\frac{C \alpha^2}{u^3} \displaystyle\int_{-\infty}^{\infty} \Big( h_0 (s - \log x) + \big| \Upsilon_{\alpha/2} (s - \log x) \big| \Big) ds \le \displaystyle\frac{C \alpha^2}{u^3}.
\end{flalign*}
\noindent This establishes the lemma.
\end{proof}
\subsection{Replacement of $(\breve{F}, \breve{G})$ With $(\widetilde{F}, \widetilde{G})$}
\label{FFGGalpha0Estimate}
In this section we establish the following two propositions, indicating that one may replace derivatives of $(F, G)$ with those of $(\widetilde{F}, \widetilde{G})$ (whose definitions we recall from \Cref{fuxyguxy2}).
\begin{prop}
\label{gammaffgg}
There exists a constant $C > 1$ such that, for any indices $\mathfrak{z} \in \{ x, y \}$ and $\mathfrak{Y} \in \{ F, G \}$, we have
\begin{flalign*}
\big| \partial_{\mathfrak{z}} \mathfrak{Y}_{\gamma} (u^{2/\alpha}, x, y) - \partial_{\mathfrak{z}} \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{\mathfrak{z} u^2}.
\end{flalign*}
\end{prop}
\begin{prop}
\label{gammaffggu}
There exists a constant $C > 1$ such that, for any index $\mathfrak{Y} \in \{ F, G \}$, we have
\begin{flalign*}
\big| \partial_u \mathfrak{Y}_{\gamma} (u^{2/\alpha}, x, y) - \partial_u \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le C \alpha^{7/6}.
\end{flalign*}
\end{prop}
\begin{proof}[Proof of \Cref{gammaffgg}]
In view of \Cref{ffgg}, it suffices to show that
\begin{flalign}
\label{hzk0}
\big| \partial_{\mathfrak{z}} \breve{\mathfrak{Y}}_{\gamma} (u^{2/\alpha}, x, y) - \partial_{\mathfrak{z}} \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{\mathfrak{z} u^2}.
\end{flalign}
\noindent We again assume $\mathfrak{Y} = F$ and only address the case $\mathfrak{z} = x$, as the proofs in the remaining situations are entirely analogous (for the proof when $\mathfrak{Y} = G$, one uses the second bound in \eqref{talphae} instead of the first below). Following \eqref{fgammaintegral2alpha00} and using \eqref{psius}, we find
\begin{flalign}
\label{fgammafgamma}
\begin{aligned}
\breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} h_0 (s - \log x) \\
& \qquad \qquad \qquad \times h_0 \big( t - \log y - \log \psi_{\alpha/2} (u,s) \big) ds dt; \\
\widetilde{F}_{\gamma} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t\gamma / \alpha} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_0 (u, s) \big) ds dt.
\end{aligned}
\end{flalign}
\noindent Further setting
\begin{flalign*}
\widetilde{F}_{\gamma, 1} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma/\alpha} \psi_{\alpha/2} (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt; \\
\widetilde{F}_{\gamma, 2} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \psi_0 (u, s)^{2\gamma / \alpha} h_0 (s - \log x) h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt,
\end{flalign*}
\noindent we have
\begin{flalign}
\label{ff1f2}
\begin{aligned}
\big| \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \widetilde{F}_{\gamma} (u, x, y) \big| & \le \big| \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) \big| \\
& \qquad + \big| \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) - \partial_x \widetilde{F}_{\gamma, 2} (u, x, y) \big| \\
& \qquad + \big| \partial_x \widetilde{F}_{\gamma, 2} (u, x, y) - \partial_x \widetilde{F}_{\gamma} (u, x, y) \big|.
\end{aligned}
\end{flalign}
To bound the first term on the right side of \eqref{ff1f2}, observe from the bounds $\psi_{\alpha/2} (u, s) \le u^{-1}$ and $\gamma \le \alpha$ that
\begin{flalign}
\label{fgammafgamma1}
\begin{aligned}
\big| \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) \big| & \le \displaystyle\frac{1}{xu^2} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} \big| h_0' (s - \log x) \big| \cdot \big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \\
& \qquad \qquad \qquad \qquad \times h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) ds dt.
\end{aligned}
\end{flalign}
\noindent By \Cref{talpha0} and the fact that $\sup_{w \in \mathbb{R}} h_0 (w) \le C$, we have
\begin{flalign}
\label{t2alpha1t2alpha}
\begin{aligned}
\displaystyle\int_0^{\infty} & \big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) dt \\
& \le C \displaystyle\int_0^{\infty} \big( t^{-\gamma} \cdot \mathbbm{1}_{t \le \alpha^3} + \alpha |\log \alpha| \cdot \mathbbm{1}_{t \le \alpha} + \alpha e^{-2t / \alpha} \cdot \mathbbm{1}_{t > \alpha} \big) dt \le C \alpha^2 |\log \alpha|,
\end{aligned}
\end{flalign}
\noindent which together with \Cref{integralk} and \eqref{fgammafgamma1} shows that
\begin{flalign}
\label{ffgamma1}
\big| \partial_x \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) \big| \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{x u^2}.
\end{flalign}
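\noindent For concreteness, the three terms in the final integral of \eqref{t2alpha1t2alpha} may be evaluated directly, namely
\begin{flalign*}
\displaystyle\int_0^{\alpha^3} t^{-\gamma} dt = \displaystyle\frac{\alpha^{3(1-\gamma)}}{1 - \gamma} \le C \alpha^2; \qquad \displaystyle\int_0^{\alpha} \alpha |\log \alpha| dt = \alpha^2 |\log \alpha|; \qquad \displaystyle\int_{\alpha}^{\infty} \alpha e^{-2t/\alpha} dt = \displaystyle\frac{\alpha^2}{2 e^2},
\end{flalign*}
\noindent each of which is at most $C \alpha^2 |\log \alpha|$.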
To bound the second term in \eqref{ff1f2}, observe from \Cref{integralk} that
\begin{flalign*}
& \big| \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) - \partial_x \widetilde{F}_{\gamma, 2} (u, x, y) \big| \\
& \qquad \le \displaystyle\frac{1}{x} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \cdot h_0 \big(t - \log y - \log \psi_{\alpha/2} (u, s) \big) \\
& \qquad \qquad \qquad \qquad \times \big| h_0' (s - \log x) \big| \cdot \big| \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} - \psi_0 (u, s)^{2\gamma / \alpha} \big| ds dt \\
& \qquad \le \displaystyle\frac{C}{x} \displaystyle\int_{-\infty}^{\infty} \big| h_0' (s-\log x) \big| \cdot \big| \psi_{\alpha/2} (u, s)^{2\gamma / \alpha} - \psi_0 (u, s)^{2 \gamma / \alpha} \big| ds.
\end{flalign*}
\noindent This; \Cref{exponentialsu}; the bound $\psi_0 (u, s) \le u^{-1}$; the facts that $\gamma \le \alpha$ and $\varpi^{2/\alpha} \le C \alpha^2$ (recall \eqref{su0}) when $|s - \log u| \ge C \alpha |\log \alpha|$ (as then $\varpi \le 1 - C \alpha |\log \alpha| + O \big(\alpha^2 |\log \alpha| \big)$); and the estimate $\sup_{w \in \mathbb{R}} \big| h_0' (w) \big| \le C$ then together yield
\begin{flalign}
\label{fgamma1gamma}
\begin{aligned}
\big| \partial_x \widetilde{F}_{\gamma, 1} (u, x, y) - \partial_x \widetilde{F}_{\gamma, 2} (u, x, y) \big| & \le \displaystyle\frac{C \alpha}{x} \displaystyle\int_{|s - \log u| \le C \alpha |\log \alpha|} \psi_0 (u, s)^{2\gamma / \alpha} \big| h_0' (s - \log x) \big| ds \\
& \qquad + \displaystyle\frac{C \alpha^2}{x} \displaystyle\int_{-\infty}^{\infty} \psi_0 (u, s)^{2\gamma / \alpha} \big| h_0' (s - \log x) \big| ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{x u^2}.
\end{aligned}
\end{flalign}
To bound the third term in \eqref{ff1f2}, we first use the fact that $h_0$ is $1$-Lipschitz to find that
\begin{flalign*}
\big| \partial_x \widetilde{F}_{\gamma, 2} & (u, x, y) - \partial_x \widetilde{F}_{\gamma} (u, x, y) \big| \\
& \le \displaystyle\frac{1}{x} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma / \alpha} \cdot \psi_0 (u, s)^{2 \gamma / \alpha} \cdot \big| h_0' (s - \log x) \big| \\
& \qquad \qquad \qquad \times \big| \log \psi_{\alpha/2} (u, s) - \log \psi_0 (u, s) \big| ds dt \\
& = \displaystyle\frac{\alpha}{2 \gamma x} \displaystyle\int_{-\infty}^{\infty} \psi_0 (u, s)^{2\gamma / \alpha} \cdot \big| h_0' (s - \log x) \big| \cdot \big| \log \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha} - \log \psi_0 (u, s)^{2 \gamma / \alpha} \big|ds.
\end{flalign*}
\noindent By \Cref{exponentialsu}, the bound $\varpi^{2/\alpha} \le C \alpha^2$ (recall \eqref{su0}) when $|s - \log u| \ge C \alpha |\log \alpha|$ (by the above), and the fact that $\psi_0 (u, s) \le u^{-1}$, it follows analogously to \eqref{fgamma1gamma} that
\begin{flalign}
\label{fgamma2gamma}
\big| \partial_x \widetilde{F}_{\gamma, 2} (u, x, y) - \partial_x \widetilde{F}_{\gamma} (u, x, y) \big| \le \displaystyle\frac{C \alpha}{u^{2 \gamma / \alpha} x} \displaystyle\int_{-\infty}^{\infty} \big| h_0' (s - \log x) \big| \varpi^{2/\alpha} ds \le \displaystyle\frac{C \alpha^2 |\log \alpha|}{u^2 x}.
\end{flalign}
\noindent The proposition then follows from combining \eqref{ff1f2}, \eqref{ffgamma1}, \eqref{fgamma1gamma}, and \eqref{fgamma2gamma}.
\end{proof}
\begin{proof}[Proof of \Cref{gammaffggu}]
The proof of this proposition is similar to that of \Cref{gammaffgg}; the differences can be attributed to the fact that the third bound in \eqref{psialpha2psi0} lacks a power of $\alpha$ when compared to the first and second bounds. To address this, we will use the decay of $h_0$ and the assumption \eqref{xyualpha0}. First observe by \Cref{ffgg} (and \eqref{xyualpha0}) that it suffices to show for any index $\mathfrak{Y} \in \{ F, G \}$ that
\begin{flalign*}
\big| \partial_u \breve{\mathfrak{Y}}_{\gamma} (u^{2/\alpha}, x, y) - \partial_u \widetilde{\mathfrak{Y}}_{\gamma} (u, x, y) \big| \le C \alpha^{7/6}.
\end{flalign*}
\noindent We will only consider the case $\mathfrak{Y} = F$, as the case $\mathfrak{Y} = G$ is entirely analogous. By \eqref{fgammafgamma}, we have
\begin{flalign*}
\partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} (e^{2t/\alpha} - 1)^{-\gamma} \cdot h_0 (s - \log x) \cdot \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha - 1} \cdot \partial_u \psi_{\alpha/2} (u, s) \\
& \qquad \times \bigg( \displaystyle\frac{2 \gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) - h_0' \big(t - \log y - \log \psi_{\alpha/2} (u, s) \big) \bigg) ds dt;\\
\partial_u \widetilde{F}_{\gamma} (u, x, y) & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma/\alpha} \cdot h_0 (s - \log x) \cdot \psi_0 (u, s)^{2 \gamma / \alpha - 1} \cdot \partial_u \psi_0 (u, s) \\
& \qquad \quad \times \bigg( \displaystyle\frac{2\gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_0 (u, s) \big) - h_0' \big( t - \log y - \log \psi_0 (u, s) \big) \bigg) ds dt.
\end{flalign*}
\noindent Further define
\begin{flalign*}
\mathfrak{X}_1 & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t \gamma /\alpha} \cdot h_0 (s - \log x) \cdot \psi_{\alpha/2} (u, s)^{2 \gamma / \alpha - 1} \cdot \partial_u \psi_{\alpha/2} (u, s) \\
& \qquad \times \bigg( \displaystyle\frac{2 \gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) - h_0' \big(t - \log y - \log \psi_{\alpha/2} (u, s) \big) \bigg) ds dt; \\
\mathfrak{X}_2 & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2 t \gamma/\alpha} \cdot h_0 (s - \log x) \cdot \psi_0 (u, s)^{2 \gamma / \alpha - 1} \cdot \partial_u \psi_{\alpha/2} (u, s) \\
& \qquad \times \bigg( \displaystyle\frac{2 \gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) - h_0' \big(t - \log y - \log \psi_{\alpha/2} (u, s) \big) \bigg) ds dt;\\
\mathfrak{X}_3 & = \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2 t \gamma/\alpha} \cdot h_0 (s - \log x) \cdot \psi_0 (u, s)^{2 \gamma / \alpha - 1} \cdot \partial_u \psi_{\alpha/2} (u, s) \\
& \qquad \times \bigg( \displaystyle\frac{2 \gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_0 (u, s) \big) - h_0' \big(t - \log y - \log \psi_0 (u, s) \big) \bigg) ds dt,
\end{flalign*}
\noindent and we have
\begin{flalign}
\label{fx1x2x3f}
\begin{aligned}
\big| \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big| & \le \big| \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \mathfrak{X}_1 \big| + |\mathfrak{X}_1 - \mathfrak{X}_2| + |\mathfrak{X}_2 - \mathfrak{X}_3| \\
& \qquad + \big| \mathfrak{X}_3 - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big|.
\end{aligned}
\end{flalign}
We will show that
\begin{flalign}
\label{fx1x2x3}
\begin{aligned}
\big| \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \mathfrak{X}_1 \big| \le C u^{-5} \alpha^2; & \qquad |\mathfrak{X}_1 - \mathfrak{X}_2| \le C u^{-5} \alpha^2; \qquad |\mathfrak{X}_2 - \mathfrak{X}_3| \le C u^{-5} \alpha^2; \\
& \big| \mathfrak{X}_3 - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big| \le C \alpha^{7/6}.
\end{aligned}
\end{flalign}
\noindent The proofs of the first, second, and third bounds in \eqref{fx1x2x3} are similar to those of \eqref{ffgamma1}, \eqref{fgamma1gamma}, and \eqref{fgamma2gamma}. So, let us only detail the verification of the first and fourth bounds in \eqref{fx1x2x3}. To that end, observe that
\begin{flalign}
\label{fugammaxyx1}
\begin{aligned}
\big| \partial_u \breve{F}_{\gamma} (u^{2/\alpha}, x, y) - \mathfrak{X}_1 \big| & \le \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} \big| (e^{2t / \alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \cdot h_0 (s - \log x) \psi_{\alpha/2} (u, s)^{2\gamma / \alpha- 1} \big| \partial_u \psi_{\alpha/2} (u, s) \big| \\
& \qquad \times \bigg( \Big| h_0' \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) \Big| + \displaystyle\frac{2 \gamma}{\alpha} \cdot h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) \bigg) ds dt \\
& \le \displaystyle\frac{C}{u^4} \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} \big| ( e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t \gamma / \alpha} \big| \cdot h_0 (s - \log x) \\
&\qquad \qquad \quad \times \bigg( \Big| h_0' \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) \Big| + h_0 \big( t - \log y - \log \psi_{\alpha/2} (u, s) \big) \bigg) ds dt,
\end{aligned}
\end{flalign}
\noindent where in the second inequality we used the facts that
\begin{flalign}
\label{psialpha2}
\big| \psi_{\alpha/2} (u) \big| + \big| \psi_0 (u) \big| \le 2u^{-1}; \qquad \big| \partial_u \psi_{\alpha/2} (u) \big| + \big| \partial_u \psi_0 (u) \big| \le \frac{4 \gamma}{\alpha} \cdot u^{-2\gamma - 1} \le 4 u^{-3}.
\end{flalign}
\noindent As in \eqref{t2alpha1t2alpha}, since $\sup_{w \in \mathbb{R}} \big( \big| h_0 (w) \big| + \big| h_0' (w) \big| \big) \le C$, we have
\begin{flalign*}
\displaystyle\sup_{K \in \mathbb{R}} \displaystyle\int_0^{\infty} & \big| (e^{2t/\alpha} - 1)^{-\gamma} - e^{-2t\gamma / \alpha} \big| \cdot \Big( h_0 (t - K) + \big| h_0' (t - K) \big| \Big) dt \\
& \le C \displaystyle\int_0^{\infty} \big( t^{-\gamma} \cdot \mathbbm{1}_{t \le \alpha^3} + \alpha |\log \alpha| \cdot \mathbbm{1}_{t \le \alpha} + \alpha e^{-2t/\alpha} \cdot \mathbbm{1}_{t > \alpha} \big) dt \le C \alpha^2 |\log \alpha|,
\end{flalign*}
\noindent which together with \eqref{fugammaxyx1}, \Cref{integralk}, and \eqref{xyualpha0} implies the first bound in \eqref{fx1x2x3}. As mentioned previously, the proofs of the second and third bounds are very similar to those of \eqref{fgamma1gamma} and \eqref{fgamma2gamma}, respectively, and are thus omitted.
To establish the fourth, observe that
\begin{flalign*}
\big| \mathfrak{X}_3 - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big| & \le 2 \displaystyle\int_0^{\infty} \displaystyle\int_{-\infty}^{\infty} e^{-2t\gamma / \alpha} \cdot h_0 (s - \log x) \cdot \psi_0 (u, s)^{2 \gamma / \alpha} \cdot \big| \partial_u \psi_{\alpha/2} (u, s) - \partial_u \psi_0 (u, s) \big| \\
& \qquad \times \bigg( \Big| h_0' \big( t - \log y - \log \psi_0 (u, s) \big) \Big| + h_0 \big( t - \log y - \log \psi_0 (u, s) \big) \bigg) ds dt.
\end{flalign*}
\noindent Using \Cref{integralk} to bound the integral with respect to $t$, it follows that
\begin{flalign*}
\big| \mathfrak{X}_3 - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big| & \le C \displaystyle\int_{-\infty}^{\infty} h_0 (s - \log x) \cdot \psi_0 (u, s)^{2\gamma / \alpha} \cdot \big| \partial_u \psi_{\alpha/2} (u, s) - \partial_u \psi_0 (u, s) \big| ds \\
& \le C u^{-2} \displaystyle\int_{-\infty}^{\infty} e^{-s - e^{-s}/4} \cdot \big| \partial_u \psi_{\alpha/2} (u, s) - \partial_u \psi_0 (u, s) \big| ds,
\end{flalign*}
\noindent where in the last inequality we used \eqref{psialpha2}; the fact that $\gamma \le \alpha$; the explicit form for $h_0 (w) = e^{-w - e^{-w}}$; and the fact that $x \ge \frac{1}{4}$ (from \eqref{xyualpha0}). Applying \Cref{exponentialsu} and the fact that $\varpi^{2/\alpha} \le \alpha^2$ (recall \eqref{su0}) if $|s - \log u| > C \alpha |\log \alpha|$ (as then $\varpi \le 1 - C \alpha |\log \alpha| + O \big( \alpha^2 |\log \alpha| \big)$), it follows for sufficiently small $\alpha$ (the bound holds by compactness for $\alpha$ bounded away from $0$) that
\begin{flalign*}
\big| \mathfrak{X}_3 - \partial_u \widetilde{F}_{\gamma} (u, x, y) \big| & \le C u^{-2 \gamma / \alpha - 3} \bigg( \displaystyle\int_{|s - \log u| \le C \alpha |\log \alpha|} e^{-s - e^{-s} / 4} ds + \alpha^2 \displaystyle\int_{-\infty}^{\infty} e^{-s - e^{-s}/4} ds \bigg) \\
& \le C u^{-5} \Big( \alpha |\log \alpha| \cdot \displaystyle\max_{|s - \log u| < 1/100} e^{-s - e^{-s}/4} + \alpha^2 \Big) \\
& \le C u^{-5} \Big( \alpha |\log \alpha| \cdot e^{-1/5u -\log u} + \alpha^2 \Big) \le C \alpha^{7/6},
\end{flalign*}
\noindent where in the last two estimates we applied the bound on $u$ from \eqref{xyualpha0}. This verifies \eqref{fx1x2x3}, which together with \eqref{xyualpha0} and \eqref{fx1x2x3f} yields the proposition.
\end{proof}
\subsection{Derivative Estimates for $a$ and $b$}
\label{EstimateAlpha0ab}
In this section we use \eqref{fba}, \Cref{gammaffgg}, and \Cref{gammaffggu} to establish the following bounds on the derivatives of $a (u^{2/\alpha})$ and $b (u^{2/\alpha})$ (recall \Cref{opaque}).
\begin{prop}
\label{abderivativeu0}
There exist constants $c > 0$ and $C > 1$ such that for $\alpha \in (0, c)$, we have
\begin{flalign*}
\bigg| \partial_u a (u^{2/\alpha}) + \displaystyle\frac{e^{-1/u}}{4u^3} \bigg| \le C \alpha^{7/6} ; \qquad \bigg| \partial_u b (u^{2/\alpha}) - \displaystyle\frac{e^{-1/u}}{4u^3} (1 - 2u) \bigg| \le C \alpha^{7/6}.
\end{flalign*}
\end{prop}
\begin{proof}
Throughout this proof, we abbreviate $a = a(u^{2/\alpha})$ and $b = b(u^{2/\alpha})$; we also set $d = d(u) = a + b$. Then, \Cref{e0alpha0u} and \Cref{abdalpha0} together imply that $(a, b, u)$ satisfy the constraints \eqref{xyualpha0} on $(x, y, u)$. Next, by \eqref{fba}, \Cref{gammaffgg} (with \eqref{xyualpha0}), \Cref{gammaffggu}, and \Cref{fgu}, there exist functions $\mathfrak{f} (u, x, y)$ and $\mathfrak{g} (u, x, y)$ such that
\begin{flalign}
\label{fwgwhw}
\displaystyle\sup_{w \in \mathbb{R}^3} \Big( \big| \mathfrak{f} (w) \big| + \big| \mathfrak{g} (w) \big| \Big) \le C; \qquad \displaystyle\sup_{w \in \mathbb{R}^3} \displaystyle\max_{\mathfrak{z} \in \{ u, x, y \}} \Big( \big| \partial_{\mathfrak{z}} \mathfrak{f} (w) \big| + \big| \partial_{\mathfrak{z}} \mathfrak{g} (w) \big|\Big) \le C,
\end{flalign}
\noindent and
\begin{flalign*}
a = F_{\alpha/2} (u^{2/\alpha}, a, b) & = \widetilde{F}_{\alpha/2} (u, a, b) + \alpha^{7/6} \cdot \mathfrak{f} (u, a, b) \\
& = \displaystyle\frac{b}{(a+b)^2} - e^{-(a+b)/u} \bigg( \displaystyle\frac{b}{(a+b)^2} + \displaystyle\frac{b}{(a+b) u} \bigg) + \alpha^{7/6} \cdot \mathfrak{f} (u, a, b),
\end{flalign*}
\noindent and
\begin{flalign*}
b = G_{\alpha/2} (u^{2/\alpha}, a, b) & = \widetilde{G}_{\alpha/2} (u, a, b) + \alpha^{7/6} \cdot \mathfrak{g} (u, a, b) \\
& = \displaystyle\frac{a}{(a+b)^2} + e^{-(a+b)/u} \bigg( \displaystyle\frac{b}{(a+b) u} - \displaystyle\frac{a}{(a+b)^2}\bigg) + \alpha^{7/6} \cdot \mathfrak{g} (u, a, b).
\end{flalign*}
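\noindent We remark that these two equations are consistent with \Cref{abdalpha0}. Indeed, under \eqref{xyualpha0} and the bound $|a + b - 1| \le C \alpha^{8/9}$, we have $e^{-(a+b)/u} \cdot u^{-1} \le C \alpha^{9/10}$, so the first equation reads
\begin{flalign*}
a = \displaystyle\frac{b}{(a+b)^2} + O \big( \alpha^{9/10} \big),
\end{flalign*}
\noindent and similarly for the second; both are satisfied to this precision at $a = b = \frac{1}{2}$.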
Setting $a' = \partial_u a (u^{2/\alpha})$, $b' = \partial_u b (u^{2/\alpha})$, and $d' = d' (u)$, we obtain, upon differentiating the two equations above with respect to $u$, that
\begin{flalign}
\label{a01}
\begin{aligned}
a' & = \displaystyle\frac{b'}{d^2} - \displaystyle\frac{2b (a'+b')}{d^3} - e^{-d / u} \bigg( \displaystyle\frac{d}{u^2} - \displaystyle\frac{d'}{u} \bigg) \bigg( \displaystyle\frac{b}{d^2} + \displaystyle\frac{b}{du} \bigg) - e^{-d/u} \bigg( \displaystyle\frac{b'}{d^2} - \displaystyle\frac{2b d'}{d^3} + \displaystyle\frac{b'}{du} - \displaystyle\frac{bd'}{d^2 u} - \displaystyle\frac{b}{du^2} \bigg) \\
& \qquad + \alpha^{7/6} \cdot \big( \partial_u \mathfrak{f} (u, a, b) + a' \cdot \partial_a \mathfrak{f} (u, a, b) + b' \cdot \partial_b \mathfrak{f} (u, a, b) \big),
\end{aligned}
\end{flalign}
\noindent and
\begin{flalign}
\label{b01}
\begin{aligned}
b' & = \displaystyle\frac{a'}{d^2} - \displaystyle\frac{2a(a'+b')}{d^3} + e^{-d/u} \bigg( \displaystyle\frac{d}{u^2} - \displaystyle\frac{d'}{u} \bigg) \bigg( \displaystyle\frac{b}{du} - \displaystyle\frac{a}{d^2} \bigg) + e^{-d/u} \bigg( \displaystyle\frac{b'}{du} - \displaystyle\frac{bd'}{d^2 u} - \displaystyle\frac{b}{du^2} - \displaystyle\frac{a'}{d^2} + \displaystyle\frac{2ad'}{d^3} \bigg) \\
& \qquad + \alpha^{7/6} \cdot \big( \partial_u \mathfrak{g} (u, a, b) + a' \cdot \partial_a \mathfrak{g} (u, a, b) + b' \cdot \partial_b \mathfrak{g} (u, a, b) \big).
\end{aligned}
\end{flalign}
Using the bounds $\big| a - \frac{1}{2} \big| + \big| b - \frac{1}{2} \big| \le C \alpha^2 |\log \alpha|$ from \eqref{abdalpha0}, together with the estimate on $u$ from \eqref{xyualpha0}, it follows from \eqref{a01} that
\begin{flalign*}
a' \cdot \big( 2 + O (u^{-2} e^{-d/u}) + O (\alpha^{7/6}) \big) + b' \cdot \big( & O(u^{-2} e^{-d/u}) + O (\alpha^{7/6}) \big) \\
& = -e^{-d/u} bu^{-3} + O(\alpha^{7/6}) = -\frac{e^{-d/u}}{2u^3} + O(\alpha^{7/6}),
\end{flalign*}
\noindent and from \eqref{b01} that
\begin{flalign*}
a' \cdot \big( O(u^{-2} e^{-d/u}) + O (\alpha^{7/6}) & \big) + b' \cdot \big( 2 + O(u^{-2} e^{-d/u}) + O (\alpha^{7/6}) \big) \\
& = e^{-d/u} u^{-3} \big( b - 2au d^{-1} \big) + O(\alpha^{7/6}) = \displaystyle\frac{e^{-d/u}}{2u^3} ( 1 - 2u) + O(\alpha^{7/6}).
\end{flalign*}
\noindent The proposition then follows from solving this linear system for $(a', b')$ (and using \eqref{xyualpha0}).
\end{proof}
\subsection{Uniqueness of the Mobility Edge}
\label{Alpha0Unique}
In this section we establish the second part of \Cref{t:main2}. Throughout, we abbreviate $a = a(u^{2/\alpha})$ and $b = b(u^{2/\alpha})$ and denote
\begin{flalign*}
H(u) = F_{\alpha} (u^{2/\alpha}, a, b) + G_{\alpha} (u^{2/\alpha}, a, b); \qquad \widetilde{H} (u) = \widetilde{F}_{\alpha} (u, a, b) + \widetilde{G}_{\alpha} (u, a, b).
\end{flalign*}
We begin with the following lemma that bounds the derivative of $\widetilde{H}$ (which is explicit from \Cref{falphagalpha0}).
\begin{lem}
\label{derivativeh0}
There exist constants $c > 0$ and $C > 1$ such that, for any $\alpha \in (0, c)$, we have $\partial_u \widetilde{H} (u) \le (Cu^{-2} - cu^{-3}) e^{-1/u}$.
\end{lem}
\begin{proof}
First observe from \eqref{xyualpha0} and \Cref{abdalpha0} that $(a, b)$ satisfy the constraints \eqref{xyualpha0} on $(x, y)$. Then, denoting $d = a+b$, we have from \Cref{falphagalpha0} that
\begin{flalign*}
\partial_u \widetilde{H} (u) & = 2 \bigg( \displaystyle\frac{1}{ud} + \displaystyle\frac{1}{d^2} \bigg) e^{-d/u} \cdot \partial_u d - \displaystyle\frac{2d}{u^2} \bigg( \displaystyle\frac{1}{ud} + \displaystyle\frac{1}{d^2} \bigg) e^{-d/u} \\
& \qquad + 2 \bigg( \displaystyle\frac{1}{du^2} + \displaystyle\frac{\partial_u d}{d^2 u} + \displaystyle\frac{2 \partial_u d}{d^3} \bigg) e^{-d/u} - \displaystyle\frac{4}{d^3} \partial_u d.
\end{flalign*}
\noindent This, together with \Cref{abderivativeu0}, \Cref{abdalpha0}, and \eqref{xyualpha0}, yields the lemma.
\end{proof}
The next corollary estimates derivatives of $F_{\alpha/2}$, $G_{\alpha/2}$, and $H$.
\begin{cor}
\label{fguestimate}
There exist constants $c > 0$ and $C > 1$ such that the following holds for any $\alpha \in (0, c)$. Setting $F(u) = F_{\alpha/2} (u^{2/\alpha}, a, b)$ and $G(u) = G_{\alpha/2} (u^{2/\alpha}, a, b)$, we have
\begin{flalign}
\label{ugufhu}
\partial_u H(u) \le (Cu^{-2} - cu^{-3}) e^{-1/u} + C \alpha^{7/6}; \qquad \big| \partial_u F (u) \big| \le C \alpha^{1/2}; \qquad \big| \partial_u G(u) \big| \le C \alpha^{1/2}.
\end{flalign}
\end{cor}
\begin{proof}
As in the proof of \Cref{derivativeh0}, observe from \eqref{xyualpha0} and \Cref{abdalpha0} that $(a, b)$ satisfy the constraints \eqref{xyualpha0} on $(x, y)$. Letting $a' = \partial_u a(u^{2/\alpha})$, $b' = \partial_u b(u^{2/\alpha})$, and $d' = a' + b'$, we find
\begin{flalign*}
\big| \partial_u H(u) - \partial_u \widetilde{H}_{\alpha} (u, a, b) \big| & \le \big| \partial_u H_{\alpha} (u, a, b) - \partial_u \widetilde{H}_{\alpha} (u, a, b) \big| + |a'| \cdot \big| \partial_a H_{\alpha} (u, a, b) \big| \\
& \qquad + |b'| \cdot \big| \partial_b H_{\alpha} (u, a, b) \big| \\
& \le O(\alpha^{7/6}) + \big( |a'| + |b'| \big) \cdot \Big( \big| \partial_a H_{\alpha} (u, a, b) \big| + \big| \partial_b H_{\alpha} (u, a, b) \big|\Big) \\
& \le O (\alpha^{7/6}) + O (\alpha^{9/10}) \cdot \Big( \big| \partial_a H_{\alpha} (u, a, b) \big| + \big| \partial_b H_{\alpha} (u, a, b) \big| \Big)\\
& \le O(\alpha^{7/6}) + O (\alpha^{9/10}) \cdot \Big( u^{-3} e^{-(a+b)/u} \cdot \big( |a'| + |b'| + 1 \big) \Big) = O(\alpha^{7/6}),
\end{flalign*}
\noindent where in the first bound we applied \Cref{gammaffggu}; in the second we applied \Cref{abderivativeu0} and \eqref{xyualpha0}; and in the third we applied \Cref{falphagalpha0}, \Cref{abderivativeu0}, and \eqref{xyualpha0}. Together with \Cref{derivativeh0}, this yields the first statement of \eqref{ugufhu}. The proofs of the second and third are very similar (following from \Cref{fgu}, \Cref{gammaffgg}, \Cref{gammaffggu}, and \Cref{abderivativeu0}) and are thus omitted.
\end{proof}
We can now establish the second part of \Cref{t:main2}.
\begin{proof}[Proof of \Cref{t:main2}(2)]
The scaling \eqref{ealpha0} of $E_{\mathrm{mob}}$ was verified by \Cref{e0alpha0u}, so it suffices to show uniqueness of $E_{\mathrm{mob}}$. To that end, again by \Cref{e0alpha0u}, it suffices to show that $\lambda (u^{2/\alpha}, \alpha)$ is decreasing in $u$ on the interval
\begin{flalign}
\label{ualphainterval}
u \in \bigg[ \frac{99}{100} \cdot |\log \alpha|^{-1}, \frac{101}{100} \cdot |\log \alpha|^{-1} \bigg],
\end{flalign}
\noindent to which end we must show that
\begin{flalign*}
\mathfrak{L} (u) = \sin \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F+G) + \sqrt{(F+G)^2 + \sin^2 \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F-G)^2},
\end{flalign*}
\noindent is decreasing in $u$; here, we have abbreviated $F = F(u) = F_{\alpha} (u^{2/\alpha}, a, b)$ and $G = G(u) = G_{\alpha} (u^{2/\alpha}, a, b)$ (upon setting $a = a(u^{2/\alpha})$ and $b = b(u^{2/\alpha})$). Observe that
\begin{flalign*}
\mathfrak{L}' (u) & = \sin \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F' + G') + (F+G)(F'+G') \cdot \bigg( (F+G)^2 + \sin^2 \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F-G)^2 \bigg)^{-1/2} \\
& \qquad + \sin^2 \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F-G)(F'-G') \bigg( (F+G)^2 + \sin^2 \Big( \displaystyle\frac{\pi \alpha}{2} \Big) \cdot (F-G)^2 \bigg)^{-1/2}.
\end{flalign*}
\noindent Using the fact that $|F-G| \le |F+G|$ and $F+G = 2 + O(\alpha)$ by \Cref{lambdaalpha0}, it follows that
\begin{flalign*}
\mathfrak{L}' (u) \le F'+G' + C \alpha \big( |F'| + |G'| \big) + C \alpha^2 |F' - G'| \le F'+G' + C \alpha \big( |F'| + |G'| \big).
\end{flalign*}
\noindent By \Cref{fguestimate}, it follows for sufficiently small $\alpha$ that
\begin{flalign*}
\mathfrak{L}' (u) \le (Cu^{-2} - cu^{-3}) e^{-1/u} + C \alpha^{7/6} < 0,
\end{flalign*}
\noindent where in the last bound we used \eqref{ualphainterval}. This shows that $\mathfrak{L} (u)$ is decreasing, thereby establishing the theorem.
\end{proof}
\chapter{Appendices}
\section{Eigenvector Localization and Delocalization Criteria}
\label{a:ELarge}
This appendix is devoted to the proof of \Cref{l:loccriteria}. \Cref{a:localization} and \Cref{a:delocalization} prove the criteria for eigenvector localization and delocalization, respectively, and the proof is concluded in \Cref{a:conclusion}.
Given $s \in \mathbb{R}_+$ and an $N\times N$ symmetric matrix $\boldsymbol{M}$, we recall the quantity $P_I(j)$ from \Cref{jwi} and define $Q_I(s) = Q_I(s; \boldsymbol{M})$ by
\begin{equation}
Q_I(s) = \frac{1}{N} \sum_{j=1}^N P_I(j)^s.
\end{equation}
Note that under \Cref{jwi}, we have the shorthand $Q_I = Q_I(2)$.
\subsection{Localization}\label{a:localization}
The proofs in this section are similar to those of \cite[Lemma 5.9]{bordenave2013localization} and \cite[Theorem 1.1]{bordenave2017delocalization}, so we only outline them.
\begin{lem} \label{huw2}
Fix $\alpha, s, \delta \in (0,1)$ and $E \in \mathbb{R}\setminus\{0\}$ such that $\lim_{\eta \rightarrow 0} \Im R_\star(E + \mathrm{i} \eta) = 0$ in probability. Let $\boldsymbol{H} = \boldsymbol{H}_N$ denote an $N \times N$ L\'{e}vy matrix with parameter $\alpha$, and define the interval $I(\eta) = [E - \eta, E + \eta]$. There exists $\eta_0(\alpha, s, \delta, E) > 0$ such that if $\eta \in(0,\eta_0)$, then
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \Bigg[ Q_{I(\eta)} \bigg(\displaystyle\frac{s}{2}; \boldsymbol{H} \bigg) \le \delta \Bigg] = 1.
\end{flalign*}
\end{lem}
\begin{proof}
Set $z = E + \mathrm{i} \eta$, and denote $\boldsymbol{G} = \boldsymbol{G} (z) = (\boldsymbol{H} - z)^{-1}$, so that
\begin{flalign}
\label{giizu}
G_{ij} (z) = \displaystyle\sum_{k = 1}^N \displaystyle\frac{u_k (i) u_k (j)}{\lambda_k - z}
\end{flalign}
\noindent by the spectral theorem, where $G_{ij} = G_{ij} (z)$ denotes the $(i, j)$-entry of $\boldsymbol{G}$. Then observe for any index $j \in \unn{1}{N}$ that
\begin{flalign*}
\displaystyle\frac{|\Lambda_I|}{2 N \eta} \cdot P_I (j) = \displaystyle\frac{1}{2 \eta} \displaystyle\sum_{\lambda_k \in \Lambda_I} \big| u_k (j) \big|^2 \le \displaystyle\sum_{k = 1}^N \displaystyle\frac{\eta \big| u_k (j) \big|^2}{|z - \lambda_k|^2} = \Imaginary G_{jj}.
\end{flalign*}
\noindent In the second statement we used the fact that
\begin{equation*}
|z - \lambda_k| = \big((\Imaginary z)^2 + |E - \lambda_k|^2 \big)^{1/2} \le 2^{1/2} \eta
\end{equation*}
for $\lambda_k \in I = [E - \eta, E + \eta]$, and in the third we used \eqref{giizu}. Thus,
\begin{flalign}
\label{sumwijs}
Q_I \bigg( \displaystyle\frac{s}{2} \bigg) = \displaystyle\frac{1}{N} \displaystyle\sum_{j = 1}^N P_I (j)^{s/2} \le \displaystyle\frac{1}{N} \bigg( \displaystyle\frac{2N \eta}{|\Lambda_I|} \bigg)^{s/2} \displaystyle\sum_{j = 1}^N (\Imaginary G_{jj})^{s/2}.
\end{flalign}
Now we apply a concentration estimate for $N^{-1} \sum_{j = 1}^N (\Imaginary G_{jj})^{s/2}$. Specifically, \cite[Lemma C.4]{bordenave2013localization} implies the existence of a constant $C_1 = C_1 (s) > 1$ such that for $\eta \ge (\log N)^{-1}$ we have
\begin{flalign}
\label{giis}
\mathbb{P} \Bigg[ \bigg| \displaystyle\frac{1}{N} \displaystyle\sum_{j = 1}^N (\Imaginary G_{jj})^{s/2} - \mathbb{E} \big[ (\Imaginary G_{11})^{s/2} \big] \bigg| \le \eta \Bigg] \ge 1 - C_1 N^{-100}.
\end{flalign}
\noindent Moreover, it follows from \cite[Theorem 1.1, Theorem 1.6]{arous2008spectrum} that there exists a constant $c_1 = c_1 (E) \in (0, 1)$ such that
\begin{flalign}
\label{lambdaic1}
\lim_{N \rightarrow \infty} \mathbb{P} \big[ |\Lambda_I| \ge c_1 N \eta \big] = 1.
\end{flalign}
\noindent By \eqref{sumwijs}, \eqref{giis}, and \eqref{lambdaic1}, we obtain
\begin{flalign}
\label{wijsum}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \Bigg[ Q_I \bigg( \displaystyle\frac{s}{2} \bigg) \le 2c_1^{-1} \cdot \mathbb{E} \big[ (\Imaginary G_{11})^{s/2} \big] + \eta \Bigg] = 1.
\end{flalign}
From \cite[Theorem 2.2, Proposition 2.6]{bordenave2011spectrum}, the random variable $G_{11} = G_{11} (z)$ converges to $R_{\star} = R_{\star} (z)$ in probability when $z$ is fixed. This, together with the deterministic bound $|G_{11}| \le \eta^{-1}$ (recall the second part of \Cref{q12}), yields
\begin{flalign}\label{appstarcvg}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{E} \big[ (\Imaginary G_{11})^{s/2} \big] = \mathbb{E} \big[ (\Imaginary R_{\star})^{s/2} \big]
\end{flalign}
for any $z \in \mathbb{H}$.
Observe that by \Cref{expectationqchi}, the sequence $\big\{ (\Imaginary R_{\star}(E + \mathrm{i} \eta))^{s/2} \big\}_{\eta \in (0,1)}$ is uniformly integrable, since its $(1+\varepsilon)$-moments are uniformly bounded for some $\varepsilon >0$.
Then the hypothesis that $\lim_{\eta \rightarrow 0} \Im R_\star(E + \mathrm{i} \eta) = 0$ yields a constant $c_2 = c_2 (s, \delta) > 0$ such that for $\eta \in (0, c_2)$ we have $\mathbb{E} \big[ (\Imaginary R_{\star})^{s/2} \big] < \frac{c_1 \delta}{8}$.
This implies for sufficiently large $N$ that
\begin{flalign*}
\mathbb{E} \big[ (\Imaginary G_{11})^{s/2} \big] \le \displaystyle\frac{c_1 \delta}{4}.
\end{flalign*}
\noindent Combining this with \eqref{wijsum} and taking $\eta < \frac{\delta}{2}$ (by imposing $\eta_0 < \frac{\delta}{2}$) yields the lemma.
\end{proof}
\begin{prop}\label{l:localizationmatrix}
Retain the notation and hypotheses of \Cref{huw2}. For every $D>0$, there exists $\eta_0(\alpha, D, E) > 0$ such that if $\eta \in(0,\eta_0)$, then
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \Big[ Q_{I(\eta)} \left( \boldsymbol{H} \right) \ge D \Big] = 1.
\end{flalign*}
\end{prop}
\begin{proof}
Let $\varepsilon \in (0,1)$ and $p, q \in \mathbb{R}_+$ with $p^{-1} + q^{-1}=1$ be parameters. By H\"older's inequality,
\begin{align}\label{duality1}
N = \sum_{k=1}^N P_I(k) &=
\frac{1}{N} \sum_{k=1}^N \left( N P_I(k) \right)^\varepsilon
\left( N P_I(k) \right)^{1-\varepsilon}\\
&\le
\left(N^{\varepsilon p -1} \sum_{k=1}^N P_I (k)^{\varepsilon p} \right)^{1/p}
\left(
N^{(1-\varepsilon)q - 1}
\sum_{k=1}^N P_I (k)^{(1-\varepsilon) q } \right)^{1/q}\notag \\
& =
\big(N^{\varepsilon p} Q_I(\varepsilon p) \big)^{1/p}
\left(
N^{(1-\varepsilon)q }
Q_I\big( (1-\varepsilon) q \big) \right)^{1/q} \notag.
\end{align}
With $s \in(0,1)$ a parameter, we set
\begin{equation*}
\varepsilon = \frac{s}{4-s}, \qquad p = 2 - \frac{s}{2}.
\end{equation*}
These choices imply
\begin{equation*}
\varepsilon p = \frac{s}{2}, \quad (1-\varepsilon) q = 2, \quad \frac{p}{q} = 1 - \frac{s}{2}.\end{equation*}
Then taking $p$-th powers in \eqref{duality1} gives
\begin{equation*}
1 \le \big( Q_I(s/2) \big)
\big( Q_I(2) \big)^{1 - s/2}.
\end{equation*}
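Here the powers of $N$ cancel: raising \eqref{duality1} to the $p$-th power gives
\begin{equation*}
N^p \le N^{\varepsilon p} Q_I(\varepsilon p) \cdot \Big( N^{(1-\varepsilon) q} Q_I\big( (1-\varepsilon) q \big) \Big)^{p/q}
= N^{\varepsilon p + (1-\varepsilon)p} \cdot Q_I(s/2) \cdot \big( Q_I(2) \big)^{1-s/2},
\end{equation*}
and dividing both sides by $N^{\varepsilon p + (1-\varepsilon) p} = N^p$ yields the displayed inequality.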
Using the conclusion of \Cref{huw2} for an arbitrary value of $s \in (0,1)$, with $\delta$ chosen small enough that $\delta^{-(1-s/2)^{-1}} \ge D$, we obtain the desired result.
\end{proof}
\subsection{Delocalization}\label{a:delocalization}
\begin{prop}
\label{l:delocalizationmatrix}
Fix $\alpha, s \in (0,1)$ and $E \in \mathbb{R}$ such that $\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c$ for some $c>0$. Let $\boldsymbol{H} = \boldsymbol{H}_N$ denote an $N \times N$ L\'{e}vy matrix with parameter $\alpha$, and define the interval $I(\eta) = [E - \eta, E + \eta]$. There exist constants $\eta_0(\alpha, s, E) > 0$ and $C(\alpha,s,E)>1$ such that for all $\eta \in(0,\eta_0)$, we have
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \big[ Q_{I(\eta)} ( \boldsymbol{H} ) \le C \big] = 1.
\end{flalign*}
\end{prop}
\begin{proof}
We compute
\begin{align*}
P_I(j)
=
\frac{N}{|\Lambda_I|} \sum_{\lambda_i \in \Lambda_I}
\big | u_i(j) \big|^2
\le
\frac{N}{|\Lambda_I|} \sum_{i=1}^N
\frac{ 2 \eta^2 \big | u_i(j) \big|^2 }
{\eta^2 + (\lambda_i - E)^2}
=\frac{2 N \eta }{|\Lambda_I|}\Im G_{jj}(E + \mathrm{i} \eta).
\end{align*}
Then
\begin{equation}\label{aboveA9}
Q_I = \displaystyle\frac{1}{N} \displaystyle\sum_{j = 1}^N P_I (j)^2
\le
\frac{ 4 N \eta^2}{|\Lambda_I|^2} \sum_{j=1}^N \big( \Im G_{jj}(z)\big)^2.
\end{equation}
Now \cite[Lemma C.3]{bordenave2013localization} implies the existence of a constant $C_1 = C_1 (s) > 1$ such that for $\eta \ge (\log N)^{-1}$ we have
\begin{flalign*}
\mathbb{P} \Bigg[ \bigg| \displaystyle\frac{1}{N} \displaystyle\sum_{j = 1}^N (\Imaginary G_{jj})^{2} - \mathbb{E} \big[ (\Imaginary G_{11})^{2} \big] \bigg| \le \eta \Bigg] \ge 1 - C_1 N^{-100}.
\end{flalign*}
Together with \eqref{lambdaic1} and \eqref{aboveA9}, we obtain
\begin{flalign}
\label{wijsum2}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{P} \left[ Q_I \le 4c_1^{-2} \cdot \mathbb{E} \big[ (\Imaginary G_{11})^{2} \big] + \eta \right] = 1.
\end{flalign}
From \cite[Theorem 2.2, Proposition 2.6]{bordenave2011spectrum}, the random variable $G_{11} = G_{11} (z)$ converges to $R_{\star} = R_{\star} (z)$ in probability when $z$ is fixed. Together with the deterministic bound $|G_{11}| \le \eta^{-1}$ (recall the second part of \Cref{q12}), this yields
\begin{flalign*}
\displaystyle\lim_{N \rightarrow \infty} \mathbb{E} \big[ (\Imaginary G_{11})^{2} \big] = \mathbb{E} \big[ (\Imaginary R_{\star})^{2} \big].
\end{flalign*}
By \Cref{rrealimaginary}, the hypothesis that $\liminf_{\eta \rightarrow 0} \mathbb{P} \big[ \Imaginary R_{\star} (E + \mathrm{i} \eta) > c \big] > c$ implies the existence of a constant $c_2 > 0$ such that $\sigma\big(\vartheta_0(z)\big) > c_2$ for $\eta \in (0, c_2)$. Using \eqref{kappatheta1}, we write
\begin{equation}
\big( \Im R_\star (z) \big)^2 = \frac{\big( \eta + \vartheta_0(z) \big)^2}{|z + \varkappa_0(z) + \mathrm{i} \vartheta_0(z)|^4} \le \frac{1}{\vartheta_0(z)^2},
\end{equation}
from which we conclude the existence of a constant $C_2$ such that
\begin{equation}\label{IMRUPPER}
\mathbb{E}\left[ \big( \Im R_\star (E + \mathrm{i} \eta) \big)^2 \right] \le C_2
\end{equation}
for $\eta \in (0, c_2)$. In making the previous assertion, we used $\sigma\big(\vartheta_0(z)\big) > c_2$ and that the density of a one-sided $\alpha/2$-stable law decays faster than any polynomial near $0$ (which can be deduced from the explicit form \eqref{xtsigma} of the characteristic function; see also \cite[Lemma B.3]{bordenave2013localization}). Then using the above convergence $\lim_{N \rightarrow \infty} \mathbb{E} \big[ (\Imaginary G_{11})^{2} \big] = \mathbb{E} \big[ (\Imaginary R_{\star})^{2} \big]$ and \eqref{IMRUPPER}, we deduce the existence of constants $C_3, N_0 > 1$ such that
\begin{equation*}
4c_1^{-2} \cdot \mathbb{E} \big[ (\Imaginary G_{11})^{2} \big] + \eta \le C_3
\end{equation*}
if $\eta \in (0, c_2)$ and $N \ge N_0$. Inserting this bound into \eqref{wijsum2} completes the proof.
\end{proof}
\subsection{Conclusion}\label{a:conclusion}
We can now give the proof of \Cref{l:loccriteria}.
\begin{proof}[Proof of \Cref{l:loccriteria}]
This follows immediately from \Cref{l:localizationmatrix} and \Cref{l:delocalizationmatrix}.
\end{proof}
\section{Boundary Values of $y(z)$}\label{b:bvals}
This appendix states some important facts about the boundary values of the quantity $y(z)$. In \Cref{s:boundaryvalues} we show the existence of boundary values of $y(z)$, and in \Cref{b:pfll} we prove \Cref{l:lambdalemma}.
\subsection{Boundary Values}\label{s:boundaryvalues}
Recall our convention for the function $z \mapsto z^a$ from \Cref{s:notation}, and recall from \Cref{s:PWITresults} the definitions $\mathbb{K} = \{ z \in \mathbb{C}: \Re z > 0\}$,
\begin{equation*}
y(z) = \mathbb{E} \left[ \left(- \mathrm{i} R_\star(z) \right)^{\alpha/2} \right],
\qquad
\varphi_{\alpha,z}(x) = \frac{1}{\Gamma(\alpha/2)} \int_0^\infty t^{\alpha/2 -1} e^{\mathrm{i} t z} \exp \left( -\Gamma(1-\alpha/2) t^{\alpha/2} x\right)\, dt,
\end{equation*}
and the equation
\begin{equation}\label{fpb1}
y(z) = \varphi_{\alpha,z}\big( y(z) \big).
\end{equation}
In \cite{belinschi2009spectral}, this relation is formulated in a slightly different way. The authors introduce the entire function
\begin{equation*}
g_\alpha(x) = \int_0^\infty t^{\alpha/2 -1 } e^{-t} \exp( - t^{\alpha/2} x ) \, dt
\end{equation*}
and the function $Y\colon \mathbb{H} \rightarrow \mathbb{C}$ is defined as the unique analytic solution (on $\mathbb{H}$) to
\begin{equation}\label{fpb2}
z^{\alpha} Y(z) = \frac{\mathrm{i}^\alpha \Gamma\left( 1 - \frac{\alpha}{2} \right)}{\Gamma\left( \frac{\alpha}{2} \right)} \cdot g_\alpha\big( Y(z) \big).
\end{equation}
Comparing \eqref{fpb1} to \eqref{fpb2}, we deduce the relation
\begin{equation}\label{darelation}
y(z) = \frac{ z^{\alpha/2} Y(z)}{\mathrm{i}^{\alpha/2} \Gamma\left( 1 - \frac{\alpha}{2} \right)}.
\end{equation}
We now show that the boundary values of $y(z)$ exist.
\begin{lem}\label{l:boundaryvalues}
For every $\alpha \in (0,1)$, the function $y\colon \mathbb{H} \rightarrow \mathbb{C}$ has a continuous extension $y\colon \overline{\mathbb{H}} \rightarrow \mathbb{C}$.
\end{lem}
\begin{proof}
For points $E \in \mathbb{R}$ such that $E\neq 0$, this is a direct consequence of \eqref{darelation} and \cite[Proposition 1.1]{belinschi2009spectral}. It remains to consider the point $E=0$. Define $h_\alpha(x) = 1 - \frac{\alpha}{2} x g_\alpha(x)$. Then by \eqref{fpb2},
\begin{equation*}
h_\alpha\big( Y(z) \big) = 1 - \frac{\alpha \Gamma \left( \frac{\alpha}{2} \right)}{2 \mathrm{i}^\alpha \Gamma\left( 1 - \frac{\alpha}{2} \right) } z^\alpha Y(z)^2
= 1 - \frac{\alpha }{2 } \Gamma \left( \frac{\alpha}{2} \right) \Gamma\left( 1 - \frac{\alpha}{2} \right) y(z)^2,
\end{equation*}
where the last equality follows from \eqref{darelation}.
In the proof of \cite[Proposition 1.1]{belinschi2009spectral}, it is shown that
\begin{equation*}
\lim_{z \rightarrow 0} h_\alpha\big( Y(z) \big) = 0,
\end{equation*}
where the limit is taken over $z \in \mathbb{H}$; hence $y(z)^2$ converges as $z \rightarrow 0$. This completes the proof.
\end{proof}
\begin{rem}
The previous proof further shows that
\begin{equation}\label{y0}
y(0) = \bigg[ \frac{\alpha }{2 } \Gamma \left( \frac{\alpha}{2} \right) \Gamma\left( 1 - \frac{\alpha}{2} \right) \bigg]^{-1/2}
= \left[ \frac{\alpha }{2 } \frac{\pi}{ \sin \left( \pi \alpha/2 \right)} \right]^{-1/2} = \sqrt{ \frac{2 \sin \left( \pi \alpha/2 \right)}{\pi \alpha } },
\end{equation}
where the second equality uses Euler's reflection formula for the Gamma function.\footnote{We remark that $y(0)$ may alternatively be computed using \cite[Lemma 4.3(b)]{bordenave2011spectrum} and equation (4.9) of that reference.}
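Observe in particular that, since $\sin ( \pi \alpha / 2 ) = \frac{\pi \alpha}{2} + O(\alpha^3)$, the expression \eqref{y0} satisfies
\begin{equation*}
y(0) = \sqrt{ \frac{2 \sin \left( \pi \alpha/2 \right)}{\pi \alpha } } = 1 + O(\alpha^2),
\end{equation*}
so that $y(0)$ tends to $1$ as $\alpha \rightarrow 0^+$.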
\end{rem}
\subsection{Proof of \Cref{l:lambdalemma}}\label{b:pfll}
We recall that the quantity $\lambda(E,\alpha)$ was defined in \Cref{lambdaEalpha}.
\begin{lem}\label{l:lambda>1}
For all $\alpha \in (0,1)$, we have $\lambda(0, \alpha) > 1$.
\end{lem}
\begin{proof}
We have
\begin{equation*}
y(0) = \frac{y(0)}{ (\mathrm{i})^{\alpha/2} + (-\mathrm{i})^{\alpha/2} }
\big(
(\mathrm{i})^{\alpha/2} + (-\mathrm{i})^{\alpha/2}
\big)
= \frac{y(0)}{
2\cos\left( \alpha \pi /4 \right)
}
\big(
(\mathrm{i})^{\alpha/2} + (-\mathrm{i})^{\alpha/2}
\big).
\end{equation*}
Then from \eqref{opaque}, we have
\begin{equation*}
a(0) = b(0) = \frac{y(0)}{
2\cos\left( \alpha \pi /4 \right)
},
\end{equation*}
so by the definition of $\varkappa_\mathrm{loc}$ and \eqref{sumalpha},
\begin{equation*}
\varkappa_\mathrm{loc}(0) = a(0)^{2/\alpha} \left(S_1 - S_2 \right)
= a(0)^{2/\alpha} S_0,
\end{equation*}
where $S_0$ is a symmetric $\alpha/2$-stable law with scaling parameter $\sigma(S_0)=2^{2/\alpha}$.
Recalling \eqref{xtsigma}, we have
\begin{equation*}
\hat p_0(k)
=
\exp
\left(
- \frac{\pi a(0)}{\sin(\pi \alpha/4) \Gamma(\alpha/2) }
|k|^{\alpha/2}
\right).
\end{equation*}
Recalling the definition of $\ell(E)$ from \eqref{tlrk}, we have
\begin{equation}\label{ell0}
\ell(0)
= \frac{1}{\pi} \int_0^\infty
k^{\alpha-1} \hat p_0(k)\, dk =
\frac{1}{\pi} \int_0^\infty
k^{\alpha-1}
\exp
\left(
- \frac{\pi a(0) }{\sin(\pi \alpha/4) \Gamma(\alpha/2) }
\cdot k^{\alpha /2}
\right)
\, dk.
\end{equation}
We compute using \eqref{y0} that
\begin{align}
\frac{\pi a(0) }{\sin(\pi \alpha/4) \Gamma(\alpha/2) }
&=
\frac{y(0)}{
2\cos\left( \alpha \pi /4 \right)
}\cdot
\frac{\pi }{\sin(\pi \alpha/4) \Gamma(\alpha/2) } \notag
\\ &=
\left(
\frac{2 \sin \left( \pi \alpha/2 \right)}{\pi \alpha }
\right)^{1/2}
\frac{1}{
2\cos\left( \alpha \pi /4 \right)
}
\frac{\pi }{\sin(\pi \alpha/4) \Gamma(\alpha/2) }.\label{ell0compute}
\end{align}
For any parameter $A > 0$, we have
\begin{align*}\int_0^\infty
k^{\alpha-1}
\exp
\left(
- A k^{\alpha /2}
\right)
\, dk = \frac{2}{\alpha} \int_0^\infty m \exp\left( - A m \right)\, dm = \frac{2}{ \alpha A^2},
\end{align*}
where we used the change of variables $m = k^{\alpha/2}$ in the first equality, so that $dm = (\alpha/2) k^{\alpha/2 -1 } \, dk$.
We conclude from the previous line, \eqref{ell0}, and \eqref{ell0compute} that
\begin{equation*}
\ell(0)
= \frac{2}{\alpha \pi} \left[
\left(
\frac{2 \sin \left( \pi \alpha/2 \right)}{\pi \alpha }
\right)^{1/2}
\frac{1}{
2\cos\left( \alpha \pi /4 \right)
}
\frac{\pi }{\sin(\pi \alpha/4) \Gamma(\alpha/2) }
\right]^{-2}.
\end{equation*}
We recall that $\lambda(0,\alpha)$ was defined in \eqref{mobilityquadratic} as the largest solution of
\begin{equation}
\label{mobilityquadratic3}
K_\alpha^2 ( t_\alpha^2 - t_{1}^2) \ell(0)^2 - 2 t_\alpha K_\alpha \ell(0) \lambda(0,\alpha)+ \lambda(0,\alpha)^2=0,
\end{equation}
with $
t_\alpha = \sin(\alpha \pi/2)$ and $
K_\alpha = \frac{\alpha}{2} \Gamma(1/2 - \alpha/2)^2$.
The quadratic formula applied to \eqref{mobilityquadratic3} yields
\begin{equation*}
\lambda(0,\alpha) = \ell(0) K_\alpha ( t_\alpha + 1).
\end{equation*}
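Explicitly, the discriminant of \eqref{mobilityquadratic3} equals
\begin{equation*}
4 t_\alpha^2 K_\alpha^2 \ell(0)^2 - 4 K_\alpha^2 ( t_\alpha^2 - t_1^2 ) \ell(0)^2 = 4 K_\alpha^2 t_1^2 \ell(0)^2,
\end{equation*}
so the two roots are $\lambda = K_\alpha \ell(0) ( t_\alpha \pm t_1 )$; since $t_1 = \sin(\pi/2) = 1$, the larger root is $K_\alpha \ell(0) ( t_\alpha + 1)$, as claimed.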
We write this as
\begin{align}\label{lambda0exp}
\lambda(0,\alpha) &=
\frac{\alpha }{2 \pi^2} \Gamma\left( \frac{1}{2} - \frac{\alpha}{2}\right)^2 \Gamma\left(\frac{\alpha}{2} \right)^{2}
\big( \sin(\pi \alpha/2) + 1 \big)
\sin(\pi \alpha/2),
\end{align}
where we simplified the previous expression using $2 \sin(x) \cos(x) = \sin(2x)$. Using Euler's reflection formula for the gamma function gives
\begin{equation*}
\Gamma\left( \frac{\alpha}{2} \right) = \frac{\pi}{ \Gamma\left( 1 - \frac{\alpha}{2} \right) \sin(\pi \alpha/2)}.
\end{equation*}
Putting this identity in \eqref{lambda0exp} gives
\begin{equation}\label{lambda0exp2}
\lambda(0,\alpha) =
\frac{\alpha }{2 \sin(\pi \alpha/2)} \Gamma\left( \frac{1}{2} - \frac{\alpha}{2}\right)^2 \Gamma\left( 1 - \frac{\alpha}{2}\right)^{-2}
\big( \sin(\pi \alpha/2) + 1 \big).
\end{equation}
It remains to argue that $\lambda(0,\alpha) > 1$. We use the following facts, which may be proved through elementary calculus. For $x\in (0,1)$, we have
\begin{equation*}
\sin(\pi x/2) + 1 > 1,\end{equation*}
\begin{equation*}
\frac{x }{2 \sin(\pi x/2)} > \frac{1}{\pi},
\end{equation*}
\begin{equation*}
\Gamma\left( \frac{1}{2} - \frac{x}{2}\right) \ge \sqrt{\pi} \cdot \Gamma\left( 1 - \frac{x}{2}\right),
\end{equation*}
where the last inequality follows from the convexity of $\Gamma$ and the fact that $\Gamma(1/2)/\Gamma(1) = \sqrt{\pi}$.
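For instance, the second bound follows from the elementary inequality
\begin{equation*}
\sin\Big( \frac{\pi x}{2} \Big) < \frac{\pi x}{2} \qquad \text{for } x \in (0,1),
\end{equation*}
which gives $\frac{x}{2 \sin(\pi x/2)} > \frac{x}{2 \cdot \pi x / 2} = \frac{1}{\pi}$.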
Inserting these bounds into \eqref{lambda0exp2} gives $\lambda(0,\alpha) > 1$ for $\alpha \in (0,1)$, as desired.
\end{proof}
We can now prove \Cref{l:lambdalemma}.
\begin{proof}[Proof of \Cref{l:lambdalemma}]
The first and second parts of the lemma are immediate from the definitions of $\lambda(E,s,\alpha)$ and $\lambda(E,\alpha)$, the quadratic formula, and the continuity of the coefficients of \eqref{mobilityquadratic} in $s$.
For the third part of the lemma, the continuity in $s$ is again clear from the definition of $\lambda(E,s,\alpha)$. For the continuity in $E$ of $\lambda(E,s,\alpha)$, we recall
from \Cref{2lambdaEsalpha} that
\begin{flalign*}
\lambda (E, s, \alpha) & = \pi^{-1} \cdot K_{\alpha, s} \cdot \Gamma (\alpha) \cdot \bigg( t_{\alpha} \sqrt{1 - t_{\alpha}^2} \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big] \\
& \qquad + \sqrt{t_s^2 (1 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big]^2 + t_{\alpha}^2 (t_s^2 - t_{\alpha}^2) \cdot \mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big( - R_{\mathrm{loc}} (E) \big) \Big]^2} \bigg).
\end{flalign*}
Then to prove the continuity of $\lambda (E, s, \alpha)$ in $E$, it suffices to establish the continuity of the two quantities
\begin{equation}\label{twotwo}
\mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \Big], \qquad
\mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big( - R_{\mathrm{loc}} (E) \big) \Big].
\end{equation}
From \Cref{l:boundarybasics2}, we have
\begin{align}\label{appc1}
\mathbb{E} \big[ |R_\mathrm{loc}(E)|^{\alpha} \big] &=\mathbb{E} \left[ \big(R_\mathrm{loc}(E)\big)_+^{\alpha} \right]
+ \mathbb{E} \left[ \big(R_\mathrm{loc}(E)\big)_-^{\alpha} \right] \\
&=
F_{\alpha}\big(E, a(E), b(E)\big)\notag
+ G_{\alpha}\big(E, a(E), b(E)\big),
\end{align}
and
\begin{align}\label{appc2}
\mathbb{E} \Big[ \big| R_{\mathrm{loc}} (E) \big|^{\alpha} \cdot \sgn \big( - R_{\mathrm{loc}} (E) \big) \Big]
&=
- \mathbb{E} \left[ \big(R_\mathrm{loc}(E)\big)_+^{\alpha} \right]
+ \mathbb{E} \left[ \big(R_\mathrm{loc}(E)\big)_-^{\alpha} \right]\\
&= - F_{\alpha}\big(E, a(E), b(E)\big)
+ G_{\alpha}\big(E, a(E), b(E)\big),\notag
\end{align}
where we recall the functions $F_\gamma$ and $G_\gamma$ defined in \eqref{FandG}. Recall also that $a(E)$ and $b(E)$ are continuous; this follows from their definitions in \eqref{opaque} and the continuity of $y(E)$ on $\mathbb{R}$ given by \Cref{l:boundaryvalues}. We observe that $F_\alpha$ and $G_\alpha$ are continuous in each of their arguments; in fact, they are differentiable in each argument, as shown in \Cref{l:Fx}, \Cref{l:Gx}, \Cref{l:Gy}, \Cref{l:FE}, and \Cref{l:GE}. We conclude from this, the representations \eqref{appc1} and \eqref{appc2}, and the continuity of $a(E)$ and $b(E)$, that the quantities in \eqref{twotwo} are continuous in $E$, and hence that $\lambda(E,s,\alpha)$ is continuous in $E$.
The proof of the continuity of $\lambda(E,\alpha)$ in $E$ is similar, and this completes the proof of the third claim.
The fourth part of the lemma follows directly from \Cref{l:lambda>1}.
\end{proof}
\printbibliography
\end{document}
\section{Introduction} \label{Intro}
An integral quadratic form is a homogeneous polynomial $q$ of degree two with integer coefficients, usually given as $q(x)=x^{\tr}\widecheck{G}_qx$ for a unique upper triangular matrix $\widecheck{G}_q$. In case $\widecheck{G}_q$ has only $1$'s as diagonal entries, $q$ is simply called a unit form, and the symmetric matrix $G_q=\widecheck{G}_q+\widecheck{G}_q^{\tr}$ is a \emph{generalized Cartan matrix} (see for instance~\cite{BGZ06} or~\cite{dS20}). Two unit forms $q'$ and $q$ are said to be weakly (resp. strongly) Gram congruent if there is an integer matrix $B$ with $\det(B)=\pm 1$ such that $G_{q'}=B^{\tr}G_qB$ (resp. $\widecheck{G}_{q'}=B^{\tr}\widecheck{G}_qB$). Clearly, strong congruence implies weak congruence.
The classification of connected positive unit forms up to weak Gram congruence is well known (see Ovsienko~\cite{saO78}, Kosakowska~\cite{jK12} and Simson~\cite{dS13}). Corresponding generalizations to the non-negative case are also known (see Barot and de la Pe\~na~\cite{BP99,BP06}, and Simson et al.~\cite{dS16a,SZ17}). A strong Gram classification of non-negative unit forms is far from complete (see~\cite[Problem~2.1$(a)$]{dS18} and~\cite[Problem~1.12]{dS20} for a specific formulation of these problems in terms of Coxeter spectra):
\medskip\noindent
\textbf{Problem A.}\
Classify all (connected) non-negative unit forms up to strong Gram congruence.
\medskip
The following problem was posed by Simson in~\cite[Problems~1.10 and~1.11]{dS13b} (see also~\cite[Problem~2.1$(b)$]{dS18}):
\medskip\smallskip\noindent
\textbf{Problem B.}\
Construct algorithms that compute an integer matrix $B$ with $\det(B)=\pm 1$ that defines the strong Gram congruence $\widecheck{G}_{q'}=B^{\tr}\widecheck{G}_{q}B$, in case the quadratic unit forms $q$ and $q'$ are strongly Gram congruent.
\medskip\smallskip
There have been many advances towards the strong classification of positive quadratic forms, both from computational and geometrical points of view. For instance, a classification for small cases ($n \leq 9$), including the exceptional cases $\E_6$, $\E_7$ and $\E_8$, as well as all non-simply laced cases, was presented by Simson et al., cf.~\cite{dS20}. The general case of Dynkin type $\D_n$ was announced in~\cite{dS18} (see also~\cite[\S 4]{dS20}), and is solved by Simson in~\cite{dS21a}.
There is a well known graphical description of quadratic (unit) forms by means of (loop-less) signed multi-graphs (or bigraphs, as they are called in this paper following Simson~\cite{dS13}). It is easy to verify that if $q(x)=\frac{1}{2}x^{\tr}(2\Id+A)x$, where $A$ is the symmetric adjacency matrix of a loop-less bigraph (where $\Id$ denotes the identity matrix of appropriate size), then $q$ is non-negative if and only if the least eigenvalue of $A$ is greater than or equal to $-2$.
Loop-less bigraphs whose symmetric adjacency matrix has least eigenvalue greater than or equal to $-2$ have attracted the attention of graph theorists since the 1960's~\cite{HN61,mA67,GH68,CGSS}, mainly focusing on their graphical characterization and Laplacian (or Kirchhoff) spectral properties. The connection of these bigraphs with the classical root systems $\A\D\E$ was established in the seminal work~\cite{CGSS} by Cameron, Goethals, Seidel and Shult in 1976 (see also~\cite{tZ81}).
The theory of (loop-less) bigraphs with least eigenvalue $-2$, and the theory of quadratic (unit) forms, have followed fairly independent roads (see for instance~\cite{dS20} and references therein), perhaps due to their seemingly different graphical and algebraic goals. The author translated in~\cite{jaJ2018} the inflation techniques into the combinatorial context of (a version of) line digraphs, using the so-called \emph{incidence quadratic form} of a quiver (directed multi-graph). Some of the basic concepts in~\cite{jaJ2018} had already been introduced by Zaslavsky in~\cite[\S 3]{tZ08}, see also~\cite{BS16,tZ84}. Here we further explore this connection. More on the development of the theory of line graphs can be found in~\cite{CDD21}.
Motivated by results of Barot~\cite{B99} and von H{\"o}hne~\cite{vH88}, the author associated to any quiver $Q$ a bigraph $\Inc(Q)$, called \emph{incidence graph} of $Q$~\cite{jaJ2018}. This construction has many similarities to the so-called \emph{line digraph} (introduced by Harary and Norman~\cite{HN61} in 1961), and is used as an auxiliary construction for \emph{signed line graphs} (see~\cite{tZ08,BS16}). The theory of flations for non-negative unit forms of Dynkin type $\A_n$, in this combinatorial context, was worked out in~\cite{jaJ2018}. Here we propose a modified theory of flations that preserves strong Gram congruence of unit forms (Section~\ref{S(C)}), giving a solution of Problem~A for positive unit forms of Dynkin type $\A_n$:
\medskip\smallskip\noindent
\textbf{Theorem A.}
Any two connected positive unit forms of Dynkin type $\A_n$ are strongly Gram congruent.
\medskip\smallskip
An essentially combinatorial proof of Theorem~A is given below (Theorem~\ref{T:20}), after some technical preparations. An equivalent result was obtained recently by Simson in~\cite{dS21b}, in the context of edge-bipartite graphs and morsifications of quadratic forms. Recall that a unit form is positive precisely when it is non-negative and has corank zero (see Lemma~\ref{L:01}$(c)$ below). A connected non-negative unit form of corank one is called principal. In Section~\ref{S(A)} we present a combinatorial formula for the Coxeter matrix in terms of quiver inverses, whose similarity invariants (for instance, its characteristic polynomial, called Coxeter polynomial of the unit form) serve as discriminant in the analogue of Theorem~A for principal unit forms (Theorem~\ref{T:34}):
\medskip\smallskip\noindent
\textbf{Theorem B.}
Two connected principal unit forms of Dynkin type $\A_n$ are strongly Gram congruent if and only if they have the same Coxeter polynomial.
\medskip\smallskip
A list of the corresponding Coxeter polynomials is given in Remark~\ref{R:29medio}. Quiver inverses are used in Corollary~\ref{C:28} to give bounds for the coefficients of the Coxeter matrix associated to non-negative unit forms $q$ of Dynkin type $\A_n$. In Section~\ref{Basic} we collect concepts and general results needed throughout the paper.
\section{Basic notions} \label{Basic}
The set of integers is denoted by $\Z$, and the canonical basis of $\Z^n$ by $\bas_1,\ldots,\bas_n$. All matrices have integer coefficients, and for an $n \times m$ matrix $A$ and an $n \times m'$ matrix $B$, the $n \times (m + m')$ matrix whose columns are those of $A$ followed by those of $B$ is denoted by $[A|B]$. In particular, if $A_1,\ldots,A_m$ are the columns of $A$, we write $A=[A_1|\cdots |A_m]$. The $n \times n$ identity matrix is denoted by $\Id_n$, or simply by $\Id$ when the size is clear from the context. The transpose of a matrix $A$ is denoted by $A^{\tr}$, and if $A$ is an invertible square matrix, then $A^{-\tr}$ denotes $(A^{-1})^{\tr}$. By a total (or linear) order we mean a partial order in which any two elements are comparable, and a totally (or linearly) ordered set is a set equipped with such an order.
\subsection{Integral quadratic forms} \label{S(I):def}
Let $q=q(x_1,\ldots,x_n)$ be an \textbf{integral quadratic form} on $n \geq 1$ variables, that is, $q$ is a homogeneous polynomial of degree two,
\[
q(x_1,\ldots,x_n)=\sum_{1 \leq i\leq j \leq n}q_{ij}x_ix_j.
\]
In case $q_{ii}=1$ for $i=1,\ldots,n$ we say that $q$ is a \textbf{unit form}. The \textbf{(upper) triangular Gram matrix} associated to an integral quadratic form $q$ is the $n \times n$ matrix given by $\widecheck{G}_q=(g_{ij})$ where $g_{ij}=q_{ij}$ for $1 \leq i \leq j \leq n$ and $g_{ij}=0$ for $1 \leq j < i \leq n$. The \textbf{symmetric Gram matrix} associated to $q$ is given by $G_q=\widecheck{G}_q+\widecheck{G}_q^{\tr}$.
\medskip
As usual, the form $q$ can be seen as a function $q:\Z^n \to \Z$ given by evaluation in the vector of variables $(x_1,\ldots,x_n)$. Notice that for $x=(x_1,\ldots,x_n)^{\tr} \in \Z^n$ we have
\[
q(x)=x^{\tr}\widecheck{G}_qx=\frac{1}{2}x^{\tr}G_qx.
\]
For an endomorphism $T:\Z^n \to \Z^n$, the integral quadratic form $qT$ is given by $qT(x)=q(T(x))$. Observe that $G_{qT}=T^{\tr}G_qT$ (where here as in the rest of the text we identify an endomorphism $T$ with its square matrix under the ordered canonical basis $\bas_1,\ldots,\bas_n$ of $\Z^n$). We say that two unit forms $q$ and $q'$ are \textbf{weakly congruent} if there is an automorphism $T$ of $\Z^n$ such that $q'=qT$ (written $q' \sim q$ or $q' \sim^T q$).
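The identity $G_{qT}=T^{\tr}G_qT$ can be checked mechanically. A minimal Python sketch follows (the unit form, the automorphism $T$ and the helper names are ad-hoc choices for illustration, not taken from the text); a form $q$ is stored through its symmetric Gram matrix, so that $q(x)=\frac{1}{2}x^{\tr}G_qx$:

```python
# Sketch: verify q(Tx) = (qT)(x), where qT has symmetric Gram matrix T^T G_q T.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def q_of(G, x):
    # q(x) = (1/2) x^T G x; the value 2 q(x) is always an even integer here
    n = len(G)
    return sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) // 2

# Ad-hoc unit form on 3 variables: q(x) = x1^2 + x2^2 + x3^2 - x1 x2 + x2 x3.
G_q = [[2, -1, 0],
       [-1, 2, 1],
       [0, 1, 2]]
# An automorphism of Z^3 (integer matrix with determinant 1).
T = [[1, 1, 0],
     [0, 1, 0],
     [0, 1, 1]]

G_qT = mat_mul(mat_mul(transpose(T), G_q), T)
x = [3, -2, 5]
Tx = [sum(T[i][j] * x[j] for j in range(3)) for i in range(3)]
assert q_of(G_q, Tx) == q_of(G_qT, x)   # q(Tx) = (qT)(x)
```

The assertion holds for any choice of $x$ and invertible $T$, since $q(Tx)=\frac{1}{2}x^{\tr}T^{\tr}G_qTx$.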
\medskip
The \textbf{direct sum} $q \oplus q': \Z^{n+n'} \to \Z$ of integral quadratic forms $q:\Z^n\to \Z$ and $q':\Z^{n'} \to \Z$ is given by
\[
(q \oplus q')(x_1,\ldots,x_{n+n'})=q(x_1,\ldots,x_n)+q'(x_{n+1},\ldots,x_{n+n'}).
\]
Observe that $\widecheck{G}_{q\oplus q'}=\begin{pmatrix} \widecheck{G}_{q}&0\\0&\widecheck{G}_{q'} \end{pmatrix}$, which we denote by $\widecheck{G}_q\oplus \widecheck{G}_{q'}$. The \textbf{symmetric bilinear form} associated to an integral quadratic form $q:\Z^n \to \Z$, denoted by $q(-|-):\Z^n\times \Z^n \to \Z$, is given by
\[
q(x|y)=q(x+y)-q(x)-q(y),
\]
(notice that $q(x|y)=x^{\tr}G_qy$ for any $x,y \in \Z^n$). The \textbf{radical} $\rad(q)$ of a unit form $q$ is the set of vectors $x$ in $\Z^n$ such that $q(x|-)\equiv 0$ (called \textbf{radical vectors} of $q$). Clearly, $\rad(q)$ is a subgroup of $\Z^n$, whose rank $\CRnk(q)$ is called \textbf{corank} of $q$. Alternatively, $\CRnk(q)=n-\Rnk(q)$, where $\Rnk(q)=\Rnk(G_q)$ is called \textbf{rank} of $q$. By \textbf{root} of $q$ we mean a vector $x \in \Z^n$ such that $q(x)=1$.
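The polarization identity $q(x|y)=q(x+y)-q(x)-q(y)=x^{\tr}G_qy$ admits a direct numerical check. A small sketch with the same ad-hoc unit form as before (helper names are mine):

```python
# Sketch: the symmetric bilinear form q(x|y) = q(x+y) - q(x) - q(y)
# coincides with x^T G_q y.

def q_of(G, x):
    n = len(G)
    return sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) // 2

def bilinear(G, x, y):
    n = len(G)
    return sum(x[i] * G[i][j] * y[j] for i in range(n) for j in range(n))

# Ad-hoc unit form: q(x) = x1^2 + x2^2 + x3^2 - x1 x2 + x2 x3.
G_q = [[2, -1, 0],
       [-1, 2, 1],
       [0, 1, 2]]
x, y = [1, 0, -2], [2, 3, 1]
lhs = q_of(G_q, [x[i] + y[i] for i in range(3)]) - q_of(G_q, x) - q_of(G_q, y)
assert lhs == bilinear(G_q, x, y)   # polarization identity
```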
\medskip
For convenience, throughout the text we use the notation $q_{ji}=q_{ij}$ for $i<j$. A unit form $q$ is said to be \textbf{connected} if for any indices $i \neq j$ there is a sequence of indices $i_0,\ldots,i_r$ with $r \geq 1$ such that $i=i_0$, $j=i_r$ and $q_{i_{t-1}i_t}\neq 0$ for $t=1,\ldots,r$. Recall that a unit form $q:\Z^n \to \Z$ is called \textbf{non-negative} (resp. \textbf{positive}) if $q(x)\geq 0$ for any vector $x$ in $\Z^n$ (resp. $q(x)>0$ for any non-zero vector $x$ in $\Z^n$). A unit form $q$ is called \textbf{principal} if $q$ is non-negative and has corank one, see Simson~\cite{dS11} and Kosakowska~\cite{jK12}. The following important observation is well known (see for instance~\cite{BP99}); for convenience, we include a short proof.
\begin{lemma}\label{L:01}
For a unit form $q$ the following assertions hold.
\begin{itemize}
\itemsep=0.9pt
\item[a)] If $q$ is not connected, then $q \sim q^1\oplus q^2$ for unit forms $q^1$ and $q^2$.
\item[b)] If $q$ is non-negative and $q(x)=0$, then $x$ is a radical vector of $q$.
\item[c)] The unit form $q$ is positive if and only if $q$ is non-negative and $\CRnk(q)=0$.
\end{itemize}
Assume now that $q$ is a non-negative unit form, and that $q'$ is a unit form weakly congruent to $q$.
\begin{itemize}
\itemsep=0.9pt
\item[d)] Then $q'$ is non-negative and $\CRnk(q')=\CRnk(q)$.
\item[e)] If $q$ is connected, then so is $q'$.
\end{itemize}
\end{lemma}
\begin{proof}
A more specific version of $(a)$ will be given later in Lemma~\ref{L:05}. For $(b)$, take a canonical basis vector $\bas_i$ and any integer $m$. If $q(x)=0$ for a vector $x$, then
\[
0 \leq q(mx+\bas_i)=mq(x|\bas_i)+1.
\]
Since $m$ and $i$ are arbitrary, then $q(x|\bas_i)=0$ and $x$ is a radical vector of $q$.
\eject
For $(c)$, if $q$ is positive then clearly $q$ is non-negative and $\rad(q)=0$. Conversely, if $q$ is non-negative and $\CRnk(q)=0$, then $\rad(q)=0$. By $(b)$, for any non-zero vector $x$ we have $q(x)>0$. To prove $(d)$ consider an automorphism $T$ such that $q'=qT$. Then $q'(x)=q(T(x))\geq 0$, that is, $q'$ is non-negative. Since
\[
G_{q'}=T^{\tr}G_qT,
\]
it is well known that $\Rnk(G_{q'})=\Rnk(G_q)$ (cf.~\cite[\S~4.5]{cdM00}), and therefore $\CRnk(q')=\CRnk(q)$.
\medskip
Finally, to show $(e)$ assume that $q'=qT$ is not connected. By $(a)$ we may assume that $q'=q^1 \oplus q^2$ for unit forms $q^1:\Z^{n_1} \to \Z$ and $q^2:\Z^{n_2} \to \Z$ (with $n=n_1+n_2$). Let $y_1,\ldots,y_n$ be the columns of $T^{-1}$. Since $q$ is a unit form, $y_i$ is a root of $q'$ for $i=1,\ldots,n$. Moreover, if $y_i^1$ (resp. $y_i^2$) is the projection of $y_i$ into its first $n_1$ entries (resp. its last $n_2$ entries), then
\[
1=q'(y_i)=q^1(y^1_i)+q^2(y^2_i).
\]
By $(d)$, the unit forms $q^1$ and $q^2$ are non-negative, therefore either $q^1(y^1_i)=1$ and $q^2(y^2_i)=0$, or $q^1(y^1_i)=0$ and $q^2(y^2_i)=1$. Consider the following partition of the set $\{1,\ldots,n\}$,
\[
X=\{1 \leq i \leq n \mid q^1(y^1_i)=1\} \quad \text{and} \quad Y=\{1 \leq j \leq n \mid q^2(y^2_j)=1\}.
\]
By $(b)$, observe that if $i \in X$ and $j \in Y$, then $y_j^1$ is a radical vector of $q^1$ and $y_i^2$ is a radical vector of $q^2$, and therefore
\[
q(\bas_i+\bas_j)=q'(y_i+y_j)=q^1(y^1_i+y^1_j)+q^2(y^2_i+y^2_j)=q^1(y^1_i)+q^2(y^2_j)=2.
\]
Then $q(\bas_i+\bas_j)=q(\bas_i)+q(\bas_j)$, which implies that $q_{ij}=0$ for arbitrary $i \in X$ and $j \in Y$ (for $q_{ij}=q(\bas_i|\bas_j)$). We need to show that both $X$ and $Y$ are non-empty sets.
\medskip
We may write
\[
(T^{-1})^{\tr}G_{q'}T^{-1}=\begin{pmatrix} B^{\tr}&C^{\tr} \end{pmatrix}\begin{pmatrix} G_{q^1}&0\\0&G_{q^2} \end{pmatrix}\begin{pmatrix} B\\C \end{pmatrix}=B^{\tr}G_{q^1}B+C^{\tr}G_{q^2}C,
\]
where $B$ and $C$ are respectively $n_1 \times n$ and $n_2 \times n$ matrices. If $Y$ is an empty set, then the columns of $C$ are radical vectors of $q^2$, and therefore
\[
G_q=(T^{-1})^{\tr}G_{q'}T^{-1}=B^{\tr}G_{q^1}B.
\]
In particular, $\Rnk(q)\leq \Rnk(q^1)$ (cf.~\cite[\S 4.5]{cdM00}). This is impossible, since by $(d)$ we have $\Rnk(q)=\Rnk(q')=\Rnk(q^1)+\Rnk(q^2)>\Rnk(q^1)$ (a similar contradiction can be found assuming that $X$ is an empty set). This completes the proof, since $X \neq \emptyset$ and $Y \neq \emptyset$ imply that $q$ is non-connected.
\end{proof}
The example following Remark~\ref{R:02} below shows that the non-negativity assumption is necessary for part $(e)$ in Lemma~\ref{L:01}.
\subsection{Bigraphs and associated unit forms} \label{S(I):bigraph}
Let $\Gamma=(\Gamma_0,\Gamma_1,\sigma)$ be a \textbf{bigraph}, that is, a multi-graph $(\Gamma_0,\Gamma_1)$ together with a \textbf{sign function} $\sigma:\Gamma_1 \to \{\pm 1\}$ such that all parallel edges have the same sign (see~\cite{dS13}). As usual, bigraphs are graphically depicted in the following way: for vertices $i$ and $j$, an edge $a$ joining $i$ and $j$ with $\sigma(a)=1$ will be denoted by $\xymatrix{{\bullet_i} \ar@{-}[r] & {\bullet_j}}$, and by $\xymatrix{{\bullet_i} \ar@{.}[r] & {\bullet_j}}$ if $\sigma(a)=-1$ (this convention is used in~\cite{BP99} and~\cite{jaJ2018}, and is opposite to the one used in~\cite{dS13}). We assume that the set of vertices $\Gamma_0$ is totally ordered. If $\Gamma$ has no loop and $|\Gamma_0|=n$, the \textbf{(upper) triangular adjacency matrix} $\Adj_{\Gamma}$ of $\Gamma$ is the $n \times n$ matrix given by
\[
\Adj_{\Gamma}=\begin{pmatrix} 0 & d_{1,2} & d_{1,3} & \cdots & d_{1,n-1} & d_{1,n} \\
0 & 0 & d_{2,3} & \cdots & d_{2,n-1} & d_{2,n} \\ 0 & 0 & 0 & \cdots & d_{3,n-1} & d_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & d_{n-1,n} \\
0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix},
\]
where $|d_{ij}|$ is the number of edges between vertices $i$ and $j$ (for $i<j$), and $\sigma(a)d_{ij}=|d_{ij}|$ for any such edge $a$. Throughout the text we assume that all bigraphs are \textbf{loop-less}, that is, have no loop.
\medskip
The \textbf{(upper) triangular Gram matrix} $\widecheck{G}_{\Gamma}$ of $\Gamma$ is given by $\widecheck{G}_{\Gamma}=\Id-\Adj_{\Gamma}$ (following the convention in~\cite{dS13} one gets $\widecheck{G}_{\Gamma}=\Id+\Adj_{\Gamma}$). The \textbf{quadratic form associated to a bigraph $\Gamma$} is given by
\[
q_{\Gamma}(x)=x^{\tr}\widecheck{G}_{\Gamma}x, \quad \text{for any $x \in \Z^n$,}
\]
that is, $\widecheck{G}_{q_{\Gamma}}=\widecheck{G}_{\Gamma}$. There is a well known bijection between unit forms and loop-less bigraphs (see for instance~\cite{dS13} and~\cite{jaJ2018}), given by the corresponding triangular (Gram and adjacency) matrices.
\begin{remark}\label{R:02}
For a loop-less bigraph $\Gamma$, the unit form $q_{\Gamma}$ is connected if and only if $\Gamma$ is a connected bigraph.
\end{remark}
\begin{proof}
Take $q_{\Gamma}(x)=\sum \limits_{1 \leq i \leq j \leq n}q_{ij}x_ix_j$, and observe that for any sequence of indices $i_0,\ldots,i_r$ such that $q_{i_{t-1}i_t}\neq 0$ for $t=1,\ldots,r$, there is a sequence of edges $a_1,\ldots,a_r$ in $\Gamma$ such that $a_t$ contains vertices $i_{t-1}$ and $i_t$ (that is, $(i_0,a_1,i_1,\ldots,i_{r-1},a_r,i_r)$ is a walk in $\Gamma$). Hence the claim follows.
\end{proof}
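Remark~\ref{R:02} reduces connectedness of a unit form to ordinary graph connectivity, so it can be decided by a standard traversal. A minimal sketch (the example forms are ad hoc, indices are 0-based in code, and the function name is mine):

```python
# Sketch: a unit form is connected iff the graph on its indices with an
# edge {i,j} whenever q_ij != 0 is connected; we test this with a
# breadth-first search on the symmetric Gram matrix.

from collections import deque

def is_connected(G):
    n = len(G)
    seen, todo = {0}, deque([0])
    while todo:
        i = todo.popleft()
        for j in range(n):
            if i != j and G[i][j] != 0 and j not in seen:
                seen.add(j)
                todo.append(j)
    return len(seen) == n

# q(x) = x1^2 + x2^2 + x3^2 + x4^2 - x1 x2 - x3 x4: two components.
G_disconnected = [[2, -1, 0, 0],
                  [-1, 2, 0, 0],
                  [0, 0, 2, -1],
                  [0, 0, -1, 2]]
# Adding the coefficient q_23 = -1 joins the two components.
G_connected = [row[:] for row in G_disconnected]
G_connected[1][2] = G_connected[2][1] = -1

assert not is_connected(G_disconnected)
assert is_connected(G_connected)
```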
Consider the following connected bigraph $\Gamma$ with twelve edges, and non-connected bigraph $\Gamma'$ with six edges,
\[
\Gamma= \xymatrix@R=1.3pc@C=1.3pc{\mathmiddlescript{\bullet_1} \ar@{-}@<-.5ex>[dd] \ar@{-}[dd] \ar@{-}@<-.5ex>[rrdd] \ar@{-}[rrdd] \ar@{-}[rr] \ar@{-}@<.5ex>[rr] \ar@{-}@<-.5ex>[rr] \ar@{-}@<1ex>[rr] \ar@{-}@<-1ex>[rr] & & \mathmiddlescript{\bullet_2} \\ \\
\mathmiddlescript{\bullet_3} \ar@{-}[rr] \ar@{-}@<-.5ex>[rr] \ar@{-}@<.5ex>[rr] & & \mathmiddlescript{\bullet_4}}
\qquad \qquad \Gamma'= \xymatrix@R=1.3pc@C=1.3pc{\mathmiddlescript{\bullet_1} \ar@{-}[rr] \ar@{-}@<.5ex>[rr] \ar@{-}@<-.5ex>[rr] & & \mathmiddlescript{\bullet_2} \\ \\
\mathmiddlescript{\bullet_3} \ar@{-}[rr] \ar@{-}@<-.5ex>[rr] \ar@{-}@<.5ex>[rr] & & \mathmiddlescript{\bullet_4}}
\]
Neither of the unit forms $q_{\Gamma}$ and $q_{\Gamma'}$ is non-negative, and $q_{\Gamma'} \sim q_{\Gamma}$. Indeed, we have
\[
G_{\Gamma}=\left(\begin{smallmatrix} 2&\mi{5}&\mi{2}&\mi{2}\\\mi{5}&2&0&0\\\mi{2}&0&2&\mi{3}\\\mi{2}&0&\mi{3}&2 \end{smallmatrix}\right) \quad \text{and} \quad G_{\Gamma'}=\left(\begin{smallmatrix}2&\mi{3}&0&0\\\mi{3}&2&0&0\\0&0&2&\mi{3}\\0&0&\mi{3}&2 \end{smallmatrix}\right),
\]
where for an integer $a$ we take $\mi{a}=-a$. A direct calculation shows that if $B=\left(\begin{smallmatrix} 1&0&0&0\\1&1&0&0\\\mi{2}&0&1&0\\\mi{2}&0&0&1 \end{smallmatrix}\right)$, then $G_{\Gamma'}=B^{\tr}G_{\Gamma}B$.
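This direct calculation can be reproduced mechanically; a short Python check of the displayed matrices (entries copied verbatim, 0-based indexing, helper names are mine):

```python
# Check of the weak congruence G_{Gamma'} = B^T G_Gamma B for the matrices
# displayed above; we also evaluate both forms at (1,1,0,0) to confirm that
# neither is non-negative.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def q_of(G, x):
    n = len(G)
    return sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) // 2

G_Gamma = [[2, -5, -2, -2],
           [-5, 2, 0, 0],
           [-2, 0, 2, -3],
           [-2, 0, -3, 2]]
G_Gamma_prime = [[2, -3, 0, 0],
                 [-3, 2, 0, 0],
                 [0, 0, 2, -3],
                 [0, 0, -3, 2]]
B = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [-2, 0, 1, 0],
     [-2, 0, 0, 1]]

assert mat_mul(mat_mul(transpose(B), G_Gamma), B) == G_Gamma_prime
assert q_of(G_Gamma, [1, 1, 0, 0]) < 0         # q_Gamma is not non-negative
assert q_of(G_Gamma_prime, [1, 1, 0, 0]) < 0   # neither is q_Gamma'
```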
\subsection{Elementary transformations for unit forms} \label{S(I):trans}
We consider the following transformations of a unit form $q:\Z^n \to \Z$.
\begin{enumerate}
\item \textit{Point inversion.} Take a subset of indices $C \subseteq \{1,\ldots,n\}$ and define the automorphism $V_C:\Z^n \to \Z^n$ given by $V_C(\bas_k)=-\bas_k$ if $k \in C$, and $V_C(\bas_k)=\bas_k$ otherwise. The transformation $V_C$ is known as \textbf{point inversion} (or sign change) for $q$, and the unit form $qV_C$ is usually referred to as \textbf{point inversion} of $q$.
\item \textit{Swapping.} Given two indices $i \neq j$, consider the transformation $S_{ij}:\Z^n \to \Z^n$ given by $S_{ij}(\bas_i)=\bas_j$, $S_{ij}(\bas_j)=\bas_i$ and $S_{ij}(\bas_k)=\bas_k$ for $k \neq i,j$ (clearly, $S_{ij}=S_{ji}$). We say that the unit form $qS_{ij}$ is obtained from $q$ by \textbf{swapping} indices $i$ and $j$.
\item \textit{Flation.} For two indices $i \neq j$, consider the sign $\epsilon=\sgn(q_{ij})\in \{+1,0,-1\}$ of $q_{ij}$. Take the linear transformation $T^{\epsilon}_{ij}:\Z^n \to \Z^n$ given by $T^{\epsilon}_{ij}(x)=x-\epsilon x_i\bas_j$, for a (column) vector $x=(x_1,\ldots,x_n)^{\tr}$ in $\Z^n$. The transformation $T^{\epsilon}_{ij}$ will be referred to as \textbf{flation} for $q$.
\item \textit{$FS$-transformation.} For our arguments we consider the composition $\FSq^{\epsilon}_{ij}=T^{\epsilon}_{ij}S_{ij}$, and call it an \textbf{$FS$-(linear) transformation} for $q$ if $\epsilon=\sgn(q_{ij})$.
\end{enumerate}
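Each of these transformations is given by an integer matrix acting on column vectors, and the composition $\FSq^{\epsilon}_{ij}=T^{\epsilon}_{ij}S_{ij}$ becomes a matrix product. A small sketch building the matrices (0-based indices; the function names are ad hoc):

```python
# Sketch: elementary transformations as integer matrices x |-> M x.

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def point_inversion(n, C):          # V_C: negates coordinates in C
    V = identity(n)
    for k in C:
        V[k][k] = -1
    return V

def swap(n, i, j):                  # S_{ij}: exchanges coordinates i and j
    S = identity(n)
    S[i][i] = S[j][j] = 0
    S[i][j] = S[j][i] = 1
    return S

def flation(n, i, j, eps):          # T_{ij}^eps: x |-> x - eps * x_i * e_j
    T = identity(n)
    T[j][i] = -eps
    return T

n, i, j, eps = 3, 0, 1, 1
# FS_{ij}^eps = T_{ij}^eps S_{ij}, read as the matrix product T * S.
FS = mat_mul(flation(n, i, j, eps), swap(n, i, j))
# A point inversion is an involution, and flations of opposite signs cancel:
assert mat_mul(point_inversion(n, [2]), point_inversion(n, [2])) == identity(n)
assert mat_mul(flation(n, i, j, eps), flation(n, i, j, -eps)) == identity(n)
```

The last assertion is the matrix counterpart of the cancellation $T_{ij}^{\epsilon}T_{ij}^{-\epsilon}=\Id$ used below.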
\begin{remark}\label{R:03}
Let $q$ be a unit form, with indices $i \neq j$.
\begin{itemize}
\item[a)] Let $T^{\epsilon}_{ij}$ be a flation for $q$. Then $q'=qT^{\epsilon}_{ij}$ is a unit form if and only if $|q_{ij}|\leq 1$, and in that case $q'T^{-\epsilon}_{ij}=q$.
\item[b)] Let $\FSq^{\epsilon}_{ij}$ be a $FS$-transformation for $q$. Then $q'=q\FSq^{\epsilon}_{ij}$ is a unit form if and only if $|q_{ij}|\leq 1$, and in that case $q'\FSq^{-\epsilon}_{ji}=q$.
\end{itemize}
\end{remark}
\begin{proof}
To show $(a)$, observe that $q'(\bas_k)=q(\bas_k)=1$ if $k \neq i$, and
\[
q'(\bas_i)=q(T^{\epsilon}_{ij}(\bas_i))=q(\bas_i-\epsilon\bas_j)=1+\epsilon^2-\epsilon q_{ij}=1+(\epsilon^2-|q_{ij}|),
\]
since $\epsilon=\sgn(q_{ij})$ implies $\epsilon q_{ij}=|q_{ij}|$. Hence $q'(\bas_i)=1$ if and only if $|q_{ij}|=\epsilon^2$, that is, if and only if $|q_{ij}|\leq 1$. That $T_{ij}^{\epsilon}T_{ij}^{-\epsilon}=\Id$ is clear, since
\[
T_{ij}^{\epsilon}T_{ij}^{-\epsilon}(x)=T_{ij}^{\epsilon}(x+\epsilon x_i\bas_j)=T_{ij}^{\epsilon}(x)+\epsilon x_iT_{ij}^{\epsilon}(\bas_j)=x-\epsilon x_i\bas_j+\epsilon x_i\bas_j=x,
\]
for any $x$ in $\Z^n$. Claim $(b)$ follows from $(a)$.
\end{proof}
A composition $T=T_{i_1j_1}^{\epsilon_1}\cdots T_{i_rj_r}^{\epsilon_r}$ is called an \textbf{iterated flation} for a unit form $q$ if, taking $q^0=q$, the transformation $T_{i_tj_t}^{\epsilon_t}$ is a flation for $q^{t-1}$ and $q^t=q^{t-1}T_{i_tj_t}^{\epsilon_t}$ is a unit form, for $t=1,\ldots,r$. Analogously, a composition $\FSq=\FSq_{i_1j_1}^{\epsilon_1}\cdots \FSq_{i_rj_r}^{\epsilon_r}$ is called an \textbf{iterated $FS$-transformation} for $q$.
\subsection{Weak classification and strong congruence} \label{S(I):cong}
The classification of positive unit forms up to weak congruence is classical: for $n \geq 1$, the weak equivalence classes of connected positive unit forms in $n$ variables are in correspondence with the set of (simply laced) Dynkin types $\mathcal{D}_n$, where
\begin{equation*}
\mathcal{D}_n = \left\{ \begin{array}{l l} \{\A_n\}, & \text{if $n=1,2,3$},\\ \{\A_n,\D_n\}, & \text{if $n=4,5$ or $n \geq 9$},\\ \{\A_n,\D_n,\E_n\}, & \text{if $n=6,7,8$},\end{array} \right.
\end{equation*}
(called \textbf{Dynkin graphs} on $n$ vertices, see Table~\ref{T:dynkin}). The following weak classification of connected non-negative unit forms can be found in~\cite{SZ17} (see also~\cite{BP99}). If $J \subset \{1,\ldots,n\}$ is a subset of indices, denote by $\tau:\Z^J \to \Z^n$ the canonical inclusion. If $q:\Z^n \to \Z$ is a unit form, then the composition $q'=q\tau$ is a unit form called \textbf{restriction} of $q$, and the unit form $q$ is called \textbf{extension} of $q'$. Simson fixed in~\cite{dS16a} (see also~\cite[Algorithm~3.18]{SZ17}) canonical extensions of connected positive unit forms, via corresponding connected bigraphs $\widehat{\Delta}^{(c)}$ for $c \geq 0$ and $\Delta$ a Dynkin graph, which serve as representatives of weak congruence classes of connected positive unit forms.
\begin{table}[!h]
\begin{center}
\caption{(Simply laced) Dynkin graphs with ordered set of vertices.} \label{T:dynkin}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c l}
\hline Notation & \multicolumn{1}{c}{Graph} \\
\hline \\ [-5pt]
$\A_n \; (n \geq 1)$ & $\xymatrix@C=1pc@R=1pc{\mathmiddlescript{\bullet_1} \ar@{-}[r] & \mathmiddlescript{\bullet_2} \ar@{-}[r] & \mathmiddlescript{\bullet_3} \ldots \mathmiddlescript{\bullet_{n-2}} \ar@{-}[r] & \mathmiddlescript{\bullet_{n-1}} \ar@{-}[r] & \mathmiddlescript{\bullet_n}} $ \\\raisebox{-1ex}{$\D_n \; (n \geq 4)$} & $\xymatrix@C=1pc@R=.2pc{\mathmiddlescript{\bullet_1} \ar@{-}[rd] \\ & \mathmiddlescript{\bullet_3} \ar@{-}[r] & \mathmiddlescript{\bullet_4} \ldots \mathmiddlescript{\bullet_{n-2}} \ar@{-}[r] & \mathmiddlescript{\bullet_{n-1}} \ar@{-}[r] & \mathmiddlescript{\bullet_n} \\ \mathmiddlescript{\bullet_2} \ar@{-}[ru] } $ \\ \raisebox{-2ex}{$\E_6$} & $\xymatrix@C=1pc@R=1pc{ & & \mathmiddlescript{\bullet_6} \ar@{-}[d] \\ \mathmiddlescript{\bullet_1} \ar@{-}[r] & \mathmiddlescript{\bullet_2} \ar@{-}[r] & \mathmiddlescript{\bullet_3} \ar@{-}[r] & \mathmiddlescript{\bullet_4} \ar@{-}[r] & \mathmiddlescript{\bullet_5}} $ \\ \raisebox{-2ex}{$\E_7$} & $\xymatrix@C=1pc@R=1pc{ & & \mathmiddlescript{\bullet_4} \ar@{-}[d] \\ \mathmiddlescript{\bullet_1} \ar@{-}[r] & \mathmiddlescript{\bullet_2} \ar@{-}[r] & \mathmiddlescript{\bullet_3} \ar@{-}[r] & \mathmiddlescript{\bullet_5} \ar@{-}[r] & \mathmiddlescript{\bullet_6} \ar@{-}[r] & \mathmiddlescript{\bullet_7}} $ \\ \raisebox{-2ex}{$\E_8$} & $\xymatrix@C=1pc@R=1pc{ & & \mathmiddlescript{\bullet_4} \ar@{-}[d] \\ \mathmiddlescript{\bullet_1} \ar@{-}[r] & \mathmiddlescript{\bullet_2} \ar@{-}[r] & \mathmiddlescript{\bullet_3} \ar@{-}[r] & \mathmiddlescript{\bullet_5} \ar@{-}[r] & \mathmiddlescript{\bullet_6} \ar@{-}[r] & \mathmiddlescript{\bullet_7} \ar@{-}[r] & \mathmiddlescript{\bullet_8}} $ \\
\multicolumn{1}{c}{}\\[-5pt]
\hline
\end{tabular}
\end{center}
\end{table}
\begin{theorem}[Simson-Zaj\k{a}c, 2017]\label{T:04}
Let $q:\Z^n \to \Z$ be a non-negative connected unit form of corank $\CRnk(q)=c$. Then there exists an iterated flation $B:\Z^n \to \Z^n$ and a unique Dynkin graph $\Delta \in \mathcal{D}_{n-c}$, denoted by $\Dyn(q)=\Delta$ and called \textbf{Dynkin type} of $q$, such that the unit form $qB$ is the canonical $c$-extension of $q_{\Delta}$. In particular, two non-negative unit forms $q$ and $q'$ are weakly congruent if and only if they are of the same corank and of the same Dynkin type.
\end{theorem}
Two unit forms $q$ and $q'$ are said to be \textbf{strongly (Gram) congruent}, written $q' \approx q$ or $q' \approx^B q$, if there is an automorphism $B$ such that
\[
\widecheck{G}_{q'}=B^{\tr}\widecheck{G}_qB.
\]
A complete classification of strong congruence classes for the exceptional Dynkin types $\E_6$, $\E_7$ and $\E_8$ is given in~\cite{dS20,dS18}, as well as for the non-simply laced Dynkin types $\B_n$, $\C_n$, $\F_4$ and $\G_2$. Similar results for the Dynkin type $\D_n$ were announced in~\cite{dS18} and proved in~\cite{dS21a}.
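For the matrices of the example in Section~\ref{S(I):bigraph}, the matrix $B$ given there realizes a weak congruence, but it does not itself satisfy the strong condition on the triangular Gram matrices; a quick check (which, of course, does not decide whether some other matrix realizes a strong congruence between these two forms):

```python
# The matrix B from the earlier example satisfies G_{Gamma'} = B^T G_Gamma B,
# yet B^T \check{G}_Gamma B differs from \check{G}_{Gamma'}: this particular
# B gives a weak congruence that is not a strong one.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

check_G_Gamma = [[1, -5, -2, -2],       # triangular Gram matrix of Gamma
                 [0, 1, 0, 0],
                 [0, 0, 1, -3],
                 [0, 0, 0, 1]]
check_G_Gamma_prime = [[1, -3, 0, 0],   # triangular Gram matrix of Gamma'
                       [0, 1, 0, 0],
                       [0, 0, 1, -3],
                       [0, 0, 0, 1]]
B = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [-2, 0, 1, 0],
     [-2, 0, 0, 1]]

A = mat_mul(mat_mul(transpose(B), check_G_Gamma), B)
assert A != check_G_Gamma_prime   # e.g. the (2,1)-entry of A is non-zero
```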
\begin{lemma}\label{L:05}
Let $q:\Z^n \to \Z$ be a unit form. If $q$ is not connected, then $q \approx q^1 \oplus q^2$ for unit forms $q^1$ and $q^2$, and $q^2 \oplus q^1 \approx q^1 \oplus q^2$. In particular, strong congruence preserves connectedness of non-negative unit forms.
\end{lemma}
\begin{proof}
Assume that $q$ is not connected, and take a partition $\{1,\ldots,n\}=X\cup Y$ into non-empty sets such that $q_{ij}=0$ whenever $i \in X$ and $j \in Y$. There is a permutation $\rho$ of the set $\{1,\ldots,n\}$ satisfying
\begin{itemize}
\item If $i<j$ and $i,j \in X$ or $i,j \in Y$, then $\rho(i)<\rho(j)$.
\item If $i \in X$ and $j \in Y$, then $\rho(i)<\rho(j)$.
\end{itemize}
Let $P=[\bas_{\rho^{-1}(1)}|\cdots |\bas_{\rho^{-1}(n)}]$ be the permutation matrix associated to the inverse permutation $\rho^{-1}$, and consider the product
\[
A=(a_{ij})_{i,j=1}^n=P^{\tr}\widecheck{G}_qP.
\]
Then $a_{\rho(i)\rho(j)}=q_{ij}$, and by the conditions on the permutation $\rho$, the matrix $A$ has the following block-diagonal shape,
\[
A=\begin{pmatrix} A_1&0\\0&A_2 \end{pmatrix}.
\]
Moreover, $A_1$ and $A_2$ are upper triangular matrices with ones on their main diagonals. Taking $q^i$ such that $\widecheck{G}_{q^i}=A_i$ (for $i=1,2$), we get
\[
P^{\tr}\widecheck{G}_qP=\widecheck{G}_{q^1}\oplus \widecheck{G}_{q^2}=\widecheck{G}_{q^1 \oplus q^2},
\]
as wanted. Swapping the sets $X$ and $Y$ we get $q \approx q^2 \oplus q^1$.
\medskip
The last claim follows by Lemma~\ref{L:01}$(e)$, since strong congruence implies weak congruence.
\end{proof}
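The permutation step in the proof can be carried out concretely. A sketch for a small disconnected unit form with $q_{13}=q_{24}=-1$ and all other coefficients zero, so that $X=\{1,3\}$ and $Y=\{2,4\}$ in the notation of the proof (indices are 0-based in the code, and the helper names are mine):

```python
# Sketch of the proof's permutation step: conjugating the triangular Gram
# matrix of a disconnected unit form by the permutation matrix P yields a
# block-diagonal matrix with unit upper triangular blocks.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def perm_matrix(rho_inv):   # columns e_{rho^{-1}(1)}, ..., e_{rho^{-1}(n)}
    n = len(rho_inv)
    return [[1 if rho_inv[j] == i else 0 for j in range(n)] for i in range(n)]

check_G = [[1, 0, -1, 0],   # q_13 = -1
           [0, 1, 0, -1],   # q_24 = -1
           [0, 0, 1, 0],
           [0, 0, 0, 1]]
# rho sends 1,3 |-> 1,2 and 2,4 |-> 3,4; its inverse is 1,3,2,4 (0-based below).
rho_inv = [0, 2, 1, 3]
P = perm_matrix(rho_inv)
A = mat_mul(mat_mul(transpose(P), check_G), P)
assert A == [[1, -1, 0, 0],     # block \check{G}_{q^1}
             [0, 1, 0, 0],
             [0, 0, 1, -1],     # block \check{G}_{q^2}
             [0, 0, 0, 1]]
```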
\subsection{Elementary quiver transformations} \label{S(I):Quiv}
A \textbf{quiver} $Q=(Q_0,Q_1,\sou,\tar)$ consists of (finite) sets $Q_0$ and $Q_1$, whose elements are called \textbf{vertices} and \textbf{arrows} of $Q$ respectively, and functions $\sou,\tar:Q_1 \to Q_0$. We say that the vertices $v$ and $w$ are \textbf{source} and \textbf{target} of an arrow $i$ respectively, if $\sou(i)=v$ and $\tar(i)=w$, and display $i$ graphically as $\xymatrix{v \ar[r]^-{i} & w}$ if $v \neq w$. For convenience we consider the set $\MiNu(i)=\{\sou(i),\tar(i)\}$ of vertices \textbf{incident} to $i$. An arrow $i$ with $\sou(i)=\tar(i)$ is called a \textbf{loop} of $Q$, and we say that $Q$ is a \textbf{loop-less quiver} if it contains no loop. Two arrows $i$ and $j$ are said to be \textbf{parallel} if $\MiNu(i) = \MiNu(j)$, and \textbf{adjacent} if $\MiNu(i)\cap \MiNu(j)\neq \emptyset$. The \textbf{degree} of a vertex $v$ in $Q$ is the number of arrows $i$ in $Q$ such that $v \in \MiNu(i)$. A vertex in $Q$ with degree one is called a \textbf{leaf}, and any arrow $i$ such that $\MiNu(i)$ contains a leaf is called a \textbf{pendant arrow} of $Q$. Observe that $\overline{Q}=(Q_0,\{\MiNu(i)\}_{i \in Q_1})$ is a multi-graph, referred to as \textbf{underlying graph} of $Q$. We say that $Q$ is \textbf{simple}, \textbf{connected} or a \textbf{tree} if so is $\overline{Q}$, and walks of $\overline{Q}$ are also called walks of $Q$. To be precise and fix some notation, by \textbf{walk} of $Q$ we mean an alternating sequence of vertices and arrows in $Q$ of the form,
\[
\alpha=(v_0,i_1,v_1,\ldots,v_{\ell-1},i_{\ell},v_{\ell}),
\]
such that $\MiNu(i_t)=\{v_{t-1},v_t\}$ for $t=1,\ldots,\ell$. The notation $\alpha=i_1^{\epsilon_1}\cdots i_{\ell}^{\epsilon_{\ell}}$ for $\epsilon_t=\pm 1$ is also used, where the symbol $i_t^{+1}$ stands for $\sou(i_t)=v_{t-1}$ and $\tar(i_t)=v_t$, while $i_t^{-1}$ stands for $\sou(i_t)=v_{t}$ and $\tar(i_t)=v_{t-1}$ (that is, the sign $\epsilon_t$ denotes the direction in which the arrow $i_t$ is found along the walk $\alpha$). The non-negative integer $\ell$ is called \textbf{length} of the walk $\alpha$, and we call vertex $v_0$ (resp. vertex $v_{\ell}$) the \textbf{starting vertex} (resp. the \textbf{ending} vertex) of $\alpha$. A walk with length zero is called a \textbf{trivial walk}. As usual we omit the exponent $+1$ in our notation of walks, and abusing notation we set $\sou(\alpha)=v_0$ and $\tar(\alpha)=v_{\ell}$. Observe that the reversed sequence
\[
(v_{\ell},i_{\ell},v_{\ell-1},\ldots,v_1,i_1,v_0),
\]
is also a walk in $Q$, referred to as \textbf{reverse walk} of $\alpha$ and denoted by $\alpha^{-1}$. With our notation we have
\[
(i_1^{\epsilon_1}\cdots i_{\ell}^{\epsilon_{\ell}})^{-1}=i_{\ell}^{-\epsilon_{\ell}}\cdots i_1^{-\epsilon_1},
\]
(in particular $\sou(i^{-1})=\tar(i)$ and $\tar(i^{-1})=\sou(i)$).
\medskip
Let $Q=(Q_0,Q_1)$ be a quiver (we will usually exclude the source and target functions $\sou$ and $\tar$ from the notation of $Q$). For a vertex $v \in Q_0$, we denote by $Q^{(v)}$ the quiver obtained from $Q$ by removing the vertex $v$, as well as all arrows containing it (that is, all arrows $i$ with $v \in \MiNu(i)$). Similarly, if $i \in Q_1$ is an arrow of $Q$, we denote by $Q^{(i)}$ the quiver obtained from $Q$ by removing the arrow $i$.
\medskip
We will need the following transformations of a loop-less quiver $Q=(Q_0,Q_1)$. Throughout the text we assume that both sets $Q_0$ and $Q_1$ are totally ordered.
\begin{enumerate}
\item \label{LbOne} \textit{Arrow inversion.} Let $C$ be a set of arrows in $Q$, and take $Q\mathcal{V}_C$ to be the quiver obtained from $Q$ by inverting the direction of all arrows in $C$.
\item \textit{Swapping.} Given two arrows $i \neq j$ in $Q$, define the new quiver $Q\mathcal{S}_{ij}$ as the quiver obtained from $Q$ by \textbf{swapping} arrows $i$ and $j$ (therefore, swapping their positions in the total ordering of $Q_1$).
\item \textit{(Quiver) Flation.} Let $i$ and $j$ be adjacent arrows in $Q$, and choose signs $\epsilon_i$ and $\epsilon_j$ such that $\alpha=i^{\epsilon_i}j^{\epsilon_j}$ is a walk in $Q$. Consider the quiver $Q'$ obtained from $Q$ by replacing $i$ by a new arrow $i'$ having $\sou(i')=\sou(\alpha)$ and $\tar(i')=\tar(\alpha)$ if $\epsilon_i=1$, and $\sou(i')=\tar(\alpha)$ and $\tar(i')=\sou(\alpha)$ if $\epsilon_i=-1$. The new arrow $i'$ takes the place of the deleted arrow $i$ in the ordering of $Q_1$. We will use the notation $Q'=Q\mathcal{T}_{ij}^{\epsilon}$ where $\epsilon=-\epsilon_i\epsilon_j$, and say that $\mathcal{T}_{ij}^{\epsilon}$ is a \textbf{flation} for the quiver $Q$. If $i$ and $j$ are non-adjacent arrows, we take $Q\mathcal{T}_{ij}^{0}=Q$.
\item \textit{$FS$-transformation.} By \textbf{$FS$-transformation} of a quiver $Q$ with respect to the ordered pair of distinct arrows $(i,j)$, we mean the new quiver $Q\FS^{\epsilon}_{ij}$ given by
\[
Q\FS^{\epsilon}_{ij}=(Q\mathcal{T}^{\epsilon}_{ij})\mathcal{S}_{ij}.
\]
\end{enumerate}
The analogue of Remark~\ref{R:03} can be stated as follows.
\begin{remark}\label{R:06}
Let $Q$ be a loop-less quiver.
\begin{itemize}
\item[a)] Let $\mathcal{T}^{\epsilon}_{ij}$ be a flation for $Q$. Then $Q'=Q\mathcal{T}^{\epsilon}_{ij}$ is a loop-less quiver if and only if $i$ and $j$ are non-parallel arrows, and in that case $Q'\mathcal{T}^{-\epsilon}_{ij}=Q$.
\item[b)] Let $\FS^{\epsilon}_{ij}$ be a $FS$-transformation for $Q$. Then $Q'=Q\FS^{\epsilon}_{ij}$ is a loop-less quiver if and only if $i$ and $j$ are non-parallel arrows, and in that case $Q'\FS^{-\epsilon}_{ji}=Q$.
\end{itemize}
Moreover, the four types of transformations described above preserve the number of vertices and arrows of a quiver, as well as its connectedness if $i$ and $j$ are non-parallel arrows.
\end{remark}
\begin{proof}
Let $i$ and $j$ be non-parallel arrows in $Q$, and notice that by construction the corresponding arrows $i'$ and $j'$ in $Q'=Q\mathcal{T}^{\epsilon}_{ij}$ are non-parallel (for $i'$ takes the place of the walk $i^{\epsilon_i}j^{\epsilon_j}$ of $Q$, and $j'$ remains unchanged). Moreover, $i$ and $j$ are adjacent in $Q$ if and only if $i'$ and $j'$ are adjacent in $Q'$. Noticing that now $(i')^{\epsilon_i}(j')^{-\epsilon_j}$ is a walk in $Q'$, then $\mathcal{T}_{i'j'}^{-\epsilon}$ is a flation for $Q'$, and a second application of the construction above shows that $(Q')\mathcal{T}^{-\epsilon}_{i'j'}=Q$ (subsequently labels $i'$ and $j'$ are replaced by $i$ and $j$). In particular,
\[
(Q\FS^{\epsilon}_{ij})\FS^{-\epsilon}_{ji}=([Q'\mathcal{S}_{ij}]\mathcal{T}^{-\epsilon}_{ji})\mathcal{S}_{ij}=([Q'\mathcal{T}^{-\epsilon}_{ij}]\mathcal{S}_{ij})\mathcal{S}_{ij}=Q,
\]
since clearly $[Q'\mathcal{S}_{ij}]\mathcal{T}^{-\epsilon}_{ji}=[Q'\mathcal{T}^{-\epsilon}_{ij}]\mathcal{S}_{ij}$. This shows $(a)$ and $(b)$.
\medskip
Now, if $Q'$ is the quiver obtained from $Q$ after applying any of the four types of transformations above, then $Q'_0=Q_0$, and $|Q'_1|=|Q_1|$. The claim on connectedness is clear for arrow inversions and swappings, and follows for $\mathcal{T}^{\epsilon}_{ij}$ and $\FS^{\epsilon}_{ij}$ using the arguments above.
\end{proof}
\section{Quivers and their incidence bigraphs} \label{S(C)}
The techniques presented in this section were introduced in a slightly wider context in~\cite{jaJ2018}, following ideas from Barot~\cite{B99} and von H{\"o}hne~\cite{vH88}.
\subsection{Incidence quadratic forms} \label{S(C):Iqf}
Consider an arbitrary quiver $Q$ with $|Q_0|=m$ vertices and $|Q_1|=n$ arrows, both sets $Q_0$ and $Q_1$ with fixed total orderings. The \textbf{(vertex-arrow) incidence matrix of $Q$} is the $m \times n$ matrix $I(Q)$ with columns $I_i=\bas_{\sou(i)}-\bas_{\tar(i)}$ for $i \in Q_1$ (observe that $I_i=0$ if and only if $i$ is a loop in $Q$), cf.~\cite{tZ08}. For a loop-less quiver $Q$, it will be useful to consider the \textbf{incidence function} $\sigma_Q:Q_0\times Q_1 \to \{+1,0,-1\}$ given by
\begin{equation*}
\sigma_Q(v,i) = \left\{
\begin{array}{l l}
+1, & \text{if $v$ is the source of arrow $i$},\\
-1, & \text{if $v$ is the target of arrow $i$},\\
0, & \text{otherwise}.
\end{array} \right.
\end{equation*}
Clearly, $I(Q)=[\sigma_Q(v,i)]_{v \in Q_0}^{i \in Q_1}$. For a non-trivial walk $\alpha=i_1^{\epsilon_1}\cdots i_{\ell}^{\epsilon_{\ell}}$ in $Q$, we use the notation $I_{\alpha}=\sum_{t=1}^{\ell}\epsilon_tI_{i_t}$. Observe that $I_{\alpha}$ is the telescopic sum
\[
I_{\alpha}=\epsilon_1(\bas_{\sou(i_1)}-\bas_{\tar(i_1)})+\ldots+\epsilon_{\ell}(\bas_{\sou(i_{\ell})}-\bas_{\tar(i_{\ell})})=\bas_{\sou(\alpha)}-\bas_{\tar(\alpha)}.
\]
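The incidence matrix and the telescopic identity $I_{\alpha}=\bas_{\sou(\alpha)}-\bas_{\tar(\alpha)}$ are easy to check numerically. The following Python sketch is illustrative only (the linear quiver and the $0$-indexed vertex labels are choices made here, not taken from the text):

```python
import numpy as np

def incidence_matrix(num_vertices, arrows):
    """Vertex-arrow incidence matrix I(Q); arrows are (source, target) pairs.
    A loop (s == t) contributes a zero column."""
    I = np.zeros((num_vertices, len(arrows)), dtype=int)
    for col, (s, t) in enumerate(arrows):
        I[s, col] += 1   # source contributes +1
        I[t, col] -= 1   # target contributes -1
    return I

# Linear quiver v0 -> v1 -> v2 -> v3 with arrows 0, 1, 2
I = incidence_matrix(4, [(0, 1), (1, 2), (2, 3)])

# Walk alpha = 0^{+1} 1^{+1} 2^{+1} from v0 to v3: the sum telescopes
I_alpha = I[:, 0] + I[:, 1] + I[:, 2]
```

Here `I_alpha` equals $\bas_{v_0}-\bas_{v_3}$, as the telescopic sum predicts.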
\begin{definition}\label{D:new}
Let $Q$ be a quiver with $m \geq 1$ vertices and $n \geq 0$ arrows.
\begin{itemize}
\item[a)] The square matrix $G_Q=I(Q)^{\tr}I(Q)$ is defined to be the \textbf{symmetric Gram matrix of $Q$}.
\item[b)] Let $\widecheck{G}_Q$ be the (unique) upper triangular matrix such that $G_Q=\widecheck{G}_Q+\widecheck{G}_Q^{\tr}$, called \textbf{(upper) triangular Gram matrix} of $Q$.
\item[c)] The quadratic form $q_Q:\Z^n \to \Z$ given by
\[
q_{Q}(x)=\frac{1}{2}x^{\tr}I(Q)^{\tr}I(Q)x=\frac{1}{2}||I(Q)x||^2,
\]
is defined to be the \textbf{quadratic form associated to $Q$}.
\end{itemize}
\end{definition}
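All three objects of the definition can be computed directly from $I(Q)$. A small numerical sketch (the triangle quiver used here is an illustrative choice):

```python
import numpy as np

# Triangle quiver: arrows 0: v0 -> v1, 1: v1 -> v2, 2: v2 -> v0
I = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])

G = I.T @ I                                     # symmetric Gram matrix G_Q
G_check = np.triu(G, 1) + np.eye(3, dtype=int)  # upper triangular Gram matrix

def q(x):
    """q_Q(x) = (1/2)||I(Q) x||^2; integer-valued since the diagonal of G_Q is 2."""
    y = I @ np.asarray(x)
    return int(y @ y) // 2
```

One checks that $G_Q=\widecheck{G}_Q+\widecheck{G}_Q^{\tr}$, that $q_Q(\bas_i)=1$ (a unit form), and that $q_Q(1,1,1)=0$, reflecting corank $1$ for this quiver.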
Notice that the matrix $\widecheck{G}_Q$ is the standard matrix morsification of $q_Q$ in the sense of Simson~\cite{dS13a}.
\begin{remark}\label{R:new}
Observe that $G_Q$ and $\widecheck{G}_Q$ do not depend on the order given to the set of vertices $Q_0$. Indeed, if $Q'$ is a copy of $Q$ with different ordering of its vertices, there is a permutation $m \times m$ matrix $P$ such that $I(Q')=PI(Q)$, and therefore $I(Q')^{\tr}I(Q')=I(Q)^{\tr}P^{\tr}PI(Q)=I(Q)^{\tr}I(Q)$.
\end{remark}
To present a graphical description of $q_Q$, we need the following definition.
\begin{definition}\label{D:07}
Let $Q$ be a loop-less quiver with $m \geq 1$ vertices and $n \geq 0$ arrows. The \textbf{incidence (bi)graph $\Inc(Q)$} of $Q$ is defined as follows. The set of vertices $\Inc(Q)_0$ of $\Inc(Q)$ is the set of arrows of $Q$ (that is, $\Inc(Q)_0=Q_1$). The signed edges of $\Inc(Q)$ are determined by the following rules:
\begin{itemize}
\itemsep=0.95pt
\item[a)] Let $i$ and $j$ be adjacent non-parallel arrows in $Q$. Then the vertices $i$ and $j$ are connected by exactly one edge $a$ in $\Inc(Q)$, with sign $\sigma(a)$ given by $\sigma(a)=(-1)\sigma_Q(v,i)\sigma_Q(v,j)$ where $v$ is the (unique) vertex in $Q$ with $\{v\}=\MiNu(i)\cap \MiNu(j)$.
\item[b)] Let $i$ and $j$ be parallel arrows in $Q$. Then the vertices $i$ and $j$ are connected by exactly two edges $a$ and $b$ in $\Inc(Q)$, with signs given by
\[
\sigma(a)=(-1)\sigma_Q(v,i)\sigma_Q(v,j) \quad \text{and} \quad \sigma(b)=(-1)\sigma_Q(v',i)\sigma_Q(v',j),
\]
where $v$ and $v'$ are the vertices in $Q$ such that $\MiNu(i)=\{v,v'\}=\MiNu(j)$. Notice that $\sigma(a)=\sigma(b)$.
\item[c)] If $i$ and $j$ are non-adjacent arrows in $Q$, then the vertices $i$ and $j$ are not adjacent in $\Inc(Q)$.
\end{itemize}
\end{definition}
See~\cite{jaJ2018} for an alternative and more general construction of $\Inc(Q)$ (compare also with the signed line graph construction in~\cite{tZ08,BS16}). The following lemma contains a graphical description of the quadratic form $q_Q$.
\begin{lemma}\label{L:08}
For any loop-less quiver $Q$ with $n$ arrows, the quadratic form $q_Q:\Z^n \to \Z$ of $Q$ is a unit form, and
\begin{itemize}
\itemsep=0.95pt
\item[a)] $\widecheck{G}_{Q}=\Id-\Adj(\Inc(Q))$.
\item[b)] $q_Q=q_{\Inc(Q)}$.
\item[c)] $q_Q$ is non-negative.
\item[d)] $q_Q$ is connected if and only if $Q$ is a connected quiver.
\end{itemize}
\end{lemma}
\begin{proof}
The non-negativity of $q_Q$ follows directly from the definition $G_{q_Q}=I(Q)^{\tr}I(Q)$. The diagonal entries of $I(Q)^{\tr}I(Q)$ are the squared norms of the columns $I_i$ of $I(Q)$. Since $Q$ has no loop, each column $I_i=\bas_{\sou(i)}-\bas_{\tar(i)}$ has squared norm two, which shows that $q_{Q}$ is a unit form.
\medskip
Observe that, by definition of $\Inc(Q)$, we have
\begin{equation*}
\Adj(\Inc(Q))_{ij}= \left\{
\begin{array}{l l}
-I_i^{\tr}I_j, & \text{if $i < j$},\\
0, & \text{if $i \geq j$}.
\end{array} \right.
\end{equation*}
In particular, since $Q$ and $\Inc(Q)$ have no loop, we have
\[
2\Id-[\Adj(\Inc(Q))+\Adj(\Inc(Q))^{\tr}]=I(Q)^{\tr}I(Q),
\]
that is, $\widecheck{G}_Q=\Id-\Adj(\Inc(Q))$. Therefore $q_Q=q_{\Inc(Q)}$.
\medskip
By construction, the incidence bigraph $\Inc(Q)$ is connected if and only if $Q$ is connected. Then the last claim follows from Remark~\ref{R:02}.
\end{proof}
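The identity $\widecheck{G}_{Q}=\Id-\Adj(\Inc(Q))$ can be verified numerically. The sketch below uses the Kronecker quiver (two parallel arrows) as an illustrative example and builds the upper triangular signed adjacency matrix from the entry formula $-I_i^{\tr}I_j$:

```python
import numpy as np

# Kronecker quiver: two parallel arrows v0 -> v1
I = np.array([[ 1,  1],
              [-1, -1]])
n = I.shape[1]

# Upper triangular signed adjacency matrix of Inc(Q): for i < j, the entry is
# the sum of the signs of the edges between arrows i and j, i.e. -<I_i, I_j>;
# parallel arrows are joined by two edges of the same sign, giving entry -2 here
Adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        Adj[i, j] = -int(I[:, i] @ I[:, j])

G_check = np.eye(n, dtype=int) - Adj            # Lemma: Gcheck_Q = Id - Adj(Inc(Q))
```

The check $G_{Q}=\widecheck{G}_Q+\widecheck{G}_Q^{\tr}=I(Q)^{\tr}I(Q)$ then succeeds directly.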
\subsection{The rank of an incidence matrix} \label{S(C):rk}
The following result is well known (cf.~\cite{tZ08}), and easy to prove.
\begin{lemma}\label{L:09}
For a connected loop-less quiver $Q$ we have $\Rnk(I(Q))=|Q_0|-1$.
\end{lemma}
\begin{proof}
Assume first that $Q$ is a tree (that is, that $|Q_1|=|Q_0|-1$), and proceed by induction on $m=|Q_0|$ (the cases $m \leq 2$ are clear). Assume that $m>2$ and take a vertex $v$ in $Q$ with degree one. Then there is a unique arrow $i$ in $Q$ containing $v$, and the restriction $Q^{(v)}$ is a tree. By induction hypothesis, the set of columns $\{I_j \mid j \in Q_1-\{i\}\}$ is linearly independent. Notice that $v$ does not belong to the support of $I_j$ for $j \neq i$, which implies that the whole set $\{I_j\}_{j \in Q_1}$ is linearly independent, which completes the induction step.
\medskip
Assume now that $Q$ is an arbitrary connected quiver with $n$ arrows, and choose a spanning tree $Q'$ of $Q$. By the first part of the proof, the set of columns $\{I_j\}$ with $j \in Q'_1$ is linearly independent. In particular $\Rnk(I(Q))\geq m-1$. Take now an arrow $i \in Q_1-Q'_1$, and let $v$ and $w$ be respectively the source and target of $i$. Then there is a (unique) walk $\alpha=i_1^{\epsilon_1}\cdots i_{\ell}^{\epsilon_{\ell}}$ in $Q'$ with starting vertex $v$ and ending vertex $w$. We have
\[
I_{\alpha}=\sum_{t=1}^{\ell}\epsilon_tI_{i_t}=\bas_{\sou(\alpha)}-\bas_{\tar(\alpha)}=\bas_v-\bas_w=I_i,
\]
which shows that $\Rnk(I(Q))\leq m-1$, hence the result.
\end{proof}
\begin{corollary}\label{C:10}
For a connected loop-less quiver $Q$ we have $\CRnk(q_{Q})=|Q_1|-|Q_0|+1$.
\end{corollary}
\begin{proof}
Since $q_Q$ is a non-negative unit form in $|Q_1|$ variables and $\Rnk(G_{q_Q})=\Rnk(I(Q))$ (see for instance~\cite{fZ99}), by the lemma above we have the result (for $\CRnk(q)=n-\Rnk(G_q)$).
\end{proof}
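Both counts are easy to confirm numerically; the quiver below (four vertices, five arrows, two independent cycles) is an illustrative choice:

```python
import numpy as np

# Connected loop-less quiver on 4 vertices with 5 arrows (two independent cycles)
arrows = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
m, n = 4, len(arrows)
I = np.zeros((m, n), dtype=int)
for col, (s, t) in enumerate(arrows):
    I[s, col] += 1
    I[t, col] -= 1

rank = np.linalg.matrix_rank(I)                 # Lemma: rank I(Q) = |Q_0| - 1
corank = n - np.linalg.matrix_rank(I.T @ I)     # crk(q_Q) = n - rank(G_{q_Q})
```

For this quiver the rank is $3=|Q_0|-1$ and the corank is $2=|Q_1|-|Q_0|+1$.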
\subsection{Connection between the elementary transformations of $Q$ and $q_Q$} \label{S(C):trans}
\begin{lemma}\label{L:11}
Let $Q$ be a loop-less quiver with arrows $i \neq j$. Then, for $\epsilon \in \{+1,0,-1\}$, the linear transformation $T^{\epsilon}_{ij}$ is a flation for $q_Q$ if and only if $\mathcal{T}^{\epsilon}_{ij}$ is a quiver flation for $Q$.
\end{lemma}
\begin{proof}
Take $q=q_Q$. Recall that $T^{\epsilon}_{ij}$ is a flation for $q$ if $\epsilon=\sgn(q_{ij})$. On the other hand, $\mathcal{T}^{\epsilon}_{ij}$ is a flation for $Q$ if $\epsilon=-\epsilon_i\epsilon_j$, where $i^{\epsilon_i}j^{\epsilon_j}$ is a walk of $Q$, when the arrows $i$ and $j$ are adjacent, and $\epsilon=0$ otherwise. Using Lemma~\ref{L:08}, we simply compute $q_{ij}$ for each of the cases in Definition~\ref{D:07}:
\begin{itemize}
\item[a)] Assume $\MiNu(i) \cap \MiNu(j)=\{v\}$. Then $q_{ij}=\sigma_Q(v,i)\sigma_Q(v,j)$, and $i^{-\sigma_Q(v,i)}j^{\sigma_Q(v,j)}$ is a walk of $Q$. Therefore $\sgn(q_{ij})=\sigma_Q(v,i)\sigma_Q(v,j)=-\epsilon_i\epsilon_j$.
\item[b)] Assume $\MiNu(i) \cap \MiNu(j)=\{v,w\}$. Then $|q_{ij}|=2$ and $\sgn(q_{ij})=\sigma_Q(v,i)\sigma_Q(v,j)=\sigma_Q(w,i)\sigma_Q(w,j)$. As before, $i^{-\sigma_Q(v,i)}j^{\sigma_Q(v,j)}$ is a walk of $Q$, and $\sgn(q_{ij})=\sigma_Q(v,i)\sigma_Q(v,j)=-\epsilon_i\epsilon_j$.
\item[c)] If $i$ and $j$ are non-adjacent, then $q_{ij}=0$ and $\epsilon=0$.
\end{itemize}
This completes the proof.
\end{proof}
\begin{proposition}\label{P:12}
Let $Q$ be a loop-less quiver, and consider an elementary transformation $\mathcal{A} \in \{\mathcal{V}_C,\mathcal{S}_{ij}$, $\mathcal{T}^{\epsilon}_{ij},\FS^{\epsilon}_{ij}\}$ for $Q$ (with $C$ a subset of arrows, and arrows $i \neq j$) with corresponding elementary linear transformation $A \in \{ V_C,S_{ij},T^{\epsilon}_{ij},\FSq^{\epsilon}_{ij}\}$. Then
\[
I(Q\mathcal{A})=I(Q)A \quad \text{and} \quad q_{Q\mathcal{A}}=q_QA.
\]
\end{proposition}
\begin{proof}
Notice that the claims for $\mathcal{A}=\mathcal{V}_C$ or $\mathcal{A}=\mathcal{S}_{ij}$ follow directly from the definitions, thus we only need to show the claims for $\mathcal{A}=\mathcal{T}^{\epsilon}_{ij}$.
\medskip
If $i$ and $j$ are non-adjacent arrows, there is nothing to prove. Assume that $i$ and $j$ are adjacent arrows, say $\MiNu(i)=\{v,w\}$ and $\MiNu(j)=\{v,w'\}$. Let $I'_1,I'_2,\ldots$ be the columns of the (vertex-arrow) incidence matrix $I(Q')$ where $Q'=Q\mathcal{T}^{\epsilon}_{ij}$. By definition (see~\ref{S(I):Quiv}), the new arrow $i'$ in $Q'$ satisfies $\MiNu(i')=\{w,w'\}$, and $I'_k=I_k$ for any arrow $k \in Q_1-\{i\}$. Moreover, observe that
\[
\sigma_Q(v,i)I'_i=\sigma_Q(v,i)I_i-\sigma_Q(v,j)I_j, \quad \text{that is,} \quad I'_i=I_i-\sigma_Q(v,i)\sigma_Q(v,j)I_j.
\]
On the other hand, $T^{\epsilon}_{ij}$ is a flation for $q_Q$ if $\epsilon=\sgn((q_Q)_{ij})$. Hence, using Lemma~\ref{L:08} we get $\epsilon=\sigma_Q(v,i)\sigma_Q(v,j)$. Take $I''=I(Q)T^{\epsilon}_{ij}$ with columns $I''_1,I''_2,\ldots$ Then, for $k \in Q_1$ we have
\begin{equation*}
I''_k =I(Q)T^{\epsilon}_{ij}\bas_k = \left\{
\begin{array}{l l}
I(Q)\bas_k=I_k, & \text{if $k \neq i$},\\
I(Q)(\bas_i-\epsilon\bas_j)=I_i-\epsilon I_j, & \text{if $k=i$}.
\end{array} \right.
\end{equation*}
Therefore $I''_k=I'_k$ for all $k$, which completes the proof (cf. also~\cite[Proposition~5.2$(b)$ and Remark~5.2]{jaJ2018}).
\medskip
For the claims on quadratic forms, taking $q=q_Q$ and $q'=q_{Q\mathcal{A}}$, we have
\[
G_{q'}=I(Q\mathcal{A})^{\tr}I(Q\mathcal{A})=A^{\tr}I(Q)^{\tr}I(Q)A=A^{\tr}G_{q}A,
\]
that is, $q'=qA$.
\end{proof}
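The compatibility $I(Q\mathcal{T}^{\epsilon}_{ij})=I(Q)T^{\epsilon}_{ij}$ can be illustrated on a small example; the two-arrow path used below is an illustrative choice:

```python
import numpy as np

# Arrows 0: v0 -> v1 and 1: v1 -> v2 meet at the vertex v1
I = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]])

eps = int(np.sign(I[:, 0] @ I[:, 1]))   # eps = sgn(q_01) = -1 for this pair
T = np.eye(2, dtype=int)
T[1, 0] = -eps                          # linear flation T^eps_{01}: e_0 -> e_0 - eps*e_1

I_new = I @ T                           # Proposition: I(Q T^eps_{01}) = I(Q) T^eps_{01}
# Column 0 is now e_{v0} - e_{v2}: arrow 0 was replaced by the composite arrow v0 -> v2
```

The new column $I'_0=I_0-\epsilon I_1$ is exactly the incidence column of the composite arrow, while the column of arrow $1$ is unchanged.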
\begin{remark}\label{R:13}
For a subset $C$ of arrows in $Q$ we have
\[
q_{Q\mathcal{V}_C} \approx^{V_C} q_{Q},
\]
where $V_C$ is the point inversion of~\ref{S(I):trans} over the indices determined by $C$.
\end{remark}
\begin{proof}
Take $q=q_Q$, $q'=q_{Q\mathcal{V}_C}$ and $V=V_C$. Notice that $V^{\tr}TV$ is an upper (or lower) triangular matrix if and only if so is $T$. This observation and the equality $G_{q'}=V^{\tr}G_{q}V$ show that $\widecheck{G}_{q'}=V\widecheck{G}_{q}V$, that is, $q' \approx^{V} q$.
\end{proof}
\subsection{Admissibility} \label{S(C):adm}
The following observation justifies our considerations on $FS$-transformations.
\begin{lemma}\label{L:14}
Let $q(x)=\sum \limits_{1 \leq i \leq j \leq n}q_{ij}x_ix_j$ be a unit form, and take indices $i \neq j$ such that $|q_{ij}|\leq 1$. Consider the $FS$-transformation $\FSq=\FSq^{\epsilon}_{ij}$ for $q$, and take $q'=q\FSq$. Then $q' \approx^{\FSq} q$ if and only if $q_{ik}=0=q_{kj}$ for any integer $k$ such that $i<k<j$ or $j<k<i$.
\end{lemma}
\begin{proof}
Assume that $i<j$. Consider the matrix $\widecheck{G}_q$ partitioned in the following way, where $q_{i,\bullet}^{\tr}=(q_{i,i+1},\ldots,q_{i,j-1})$ and $q_{\bullet,j}^{\tr}=(q_{i+1,j},\ldots,q_{j-1,j})$,
\[
\widecheck{G}_q=\begin{pmatrix}
G_1 & y_1 & A & y_2 & B \\
0 & 1 & q_{i,\bullet}^{\tr} & q_{ij} & z^{\tr}_1 \\
0 & 0 & G_2 & q_{\bullet,j} & C \\
0 & 0 & 0 & 1 & z^{\tr}_2 \\
0 & 0 & 0 & 0 & G_3
\end{pmatrix},
\]
and where $G_1$, $G_2$ and $G_3$ are upper triangular (square) matrices with all diagonal entries equal to~$1$, with matrices $A, B, C$ and vectors $y_1,y_2,z_1,z_2$ of appropriate size. Observe that the composition $(T^{\epsilon}_{ij})^{\tr}\widecheck{G}_qT^{\epsilon}_{ij}$ has the following shape,
\[
(T^{\epsilon}_{ij})^{\tr}\widecheck{G}_qT^{\epsilon}_{ij}=
\begin{pmatrix}
G_1 & y_1-y_2 & A & y_2 & B \\
0 & 2-\epsilon q_{ij} & q_{i,\bullet}^{\tr} & q_{ij}-\epsilon & z^{\tr}_1-z^{\tr}_2 \\
0 & -q_{\bullet,j} & G_2 & q_{\bullet,j} & C \\
0 & -\epsilon & 0 & 1 & z^{\tr}_2 \\
0 & 0 & 0 & 0 & G_3
\end{pmatrix}.
\]
Swapping the $i$-th and $j$-th columns and rows of the matrix above, we get
\[
(\FSq^{\epsilon}_{ij})^{\tr}\widecheck{G}_q\FSq^{\epsilon}_{ij}=
\begin{pmatrix}
G_1 & y_2 & A & y_1-y_2 & B \\
0 & 1 & 0 & -\epsilon & z^{\tr}_2 \\
0 & q_{\bullet,j} & G_2 & -q_{\bullet,j} & C \\
0 & q_{ij}-\epsilon & q_{i,\bullet}^{\tr} & 2-\epsilon q_{ij} & z^{\tr}_1-z^{\tr}_2 \\
0 & 0 & 0 & 0 & G_3
\end{pmatrix}.
\]
Now, since $\epsilon=\sgn(q_{ij})$ and $|q_{ij}|\leq 1$, then $q_{ij}-\epsilon=0$ (and $2-\epsilon q_{ij}=1$, as already argued in Remark~\ref{R:03}$(a)$). We conclude by observing that $\FSq^{\tr}\widecheck{G}_q\FSq$ is an upper triangular matrix if and only if $q_{i,\bullet}=0$ and $q_{\bullet,j}=0$. The case $j<i$ can be shown in a similar way.
\end{proof}
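The admissibility criterion of the lemma is straightforward to test numerically. In the sketch below (the path quiver and the helper `fs_linear` are illustrative assumptions), the $FS$-linear transformation is realized as the flation matrix followed by a column swap, exactly as in the proof:

```python
import numpy as np

def fs_linear(n, i, j, eps):
    """FS-linear transformation: flation T^eps_{ij} followed by the swap S_{ij}."""
    T = np.eye(n, dtype=int)
    T[j, i] = -eps                      # e_i -> e_i - eps*e_j
    S = np.eye(n, dtype=int)
    S[:, [i, j]] = S[:, [j, i]]         # permutation swapping indices i and j
    return T @ S

# q from the path quiver v0 -> v1 -> v2, so Gcheck_q = [[1, -1], [0, 1]], q_01 = -1
G_check = np.array([[1, -1],
                    [0,  1]])
FS = fs_linear(2, 0, 1, -1)             # eps = sgn(q_01) = -1
M = FS.T @ G_check @ FS

# No index lies strictly between 0 and 1, so FS is q-admissible: the transformed
# triangular Gram matrix is again upper triangular with unit diagonal
```

Here $M$ is again an upper triangular Gram matrix, so $q\FSq \approx^{\FSq} q$ for this example.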
A $FS$-transformation $\FSq^{\epsilon}_{ij}$ for $q$ such that $q\FSq^{\epsilon}_{ij} \approx^{\FSq^{\epsilon}_{ij}} q$ will be called \textbf{$q$-admissible}.
\medskip
For an arrow $i \in Q_1$ in a loop-less quiver $Q$, consider the set $Q_1(i)$ of all arrows in $Q$ adjacent to arrow $i$, that is, $Q_1(i)=\{j \in Q_1 \mid \text{$j \neq i$ and $\MiNu(i) \cap \MiNu(j) \neq \emptyset$}\}$. We say that two arrows $i \neq j$ in $Q$ are \textbf{$Q$-admissible} (or that $\FS^{\epsilon}_{ij}$ is \textbf{$Q$-admissible}) if we have
\[
k \notin Q_1(i) \cup Q_1(j), \qquad \text{for any arrow $k$ with $i<k<j$ or $j<k<i$}.
\]
\begin{corollary}\label{C:15}
Let $Q$ be a loop-less quiver with arrows $i \neq j$, and consider the flation $\FS=\FS^{\epsilon}_{ij}$ for $Q$, and the corresponding $FS$-transformation $\FSq=\FSq_{ij}^{\epsilon}$ for $q_Q$. Then $\FSq$ is $q_{Q}$-admissible if and only if $\FS$ is $Q$-admissible, and in that case
\[
q_{Q\FS} \approx^{\FSq} q_{Q}.
\]
\end{corollary}
\begin{proof}
Take $q_Q(x)=\sum \limits_{1 \leq i \leq j \leq n}q_{ij}x_ix_j$. By Lemma~\ref{L:14}, the transformation $\FSq$ is $q_Q$-admissible if and only if $q_{ik}=0=q_{kj}$ for all $k$ such that $i<k<j$ or $j<k<i$. Correspondingly, using Lemma~\ref{L:08} and the definition of incidence graph $\Inc(Q)$, this means that
\[
k \notin Q_1(i) \cup Q_1(j), \qquad \text{for any arrow $k$ with $i<k<j$ or $j<k<i$},
\]
that is, $\FSq$ is $q_Q$-admissible if and only if $\FS$ is $Q$-admissible.
\end{proof}
\subsection{Iterated transformations} \label{S(C):ite}
Recall that by an iterated $FS$-(linear) transformation $\FSq$ for $q$ we mean a composition of the form $\FSq=\FSq^{\epsilon_1}_{i_1j_1}\cdots \FSq^{\epsilon_r}_{i_rj_r}$ such that $\FSq^{\epsilon_t}_{i_tj_t}$ is a $FS$-transformation for $q^{t-1}$, where $q^0=q$ and we take recursively $q^{t}=q^{t-1}\FSq^{\epsilon_t}_{i_tj_t}$, and such that each $q^t$ is a unit form for $t=1,\ldots,r$. If each $\FSq_{i_tj_t}^{\epsilon_t}$ is $q^{t-1}$-admissible, we say that $\FSq$ is a \textbf{$q$-admissible iterated $FS$-(linear) transformation}. In this case we have $q^r \approx^{\FSq} q$.
\medskip
An \textbf{iterated $FS$-transformation} $\FS$ of a loop-less quiver $Q$ is a concatenation $\FS=\FS_{i_1j_1}^{\epsilon_1}\cdots \FS^{\epsilon_r}_{i_rj_r}$ of $FS$-transformations such that if we take inductively $Q^{0}=Q$ and $Q^{t}=Q^{t-1}\FS^{\epsilon_t}_{i_tj_t}$, then the arrows $i_t$ and $j_t$ are not parallel in $Q^{t-1}$ for $t=1,\ldots,r$. The expression $Q\FS$ denotes the final (loop-less) quiver $Q^r$. If in each step $t$, the $FS$-transformation $\FS^{\epsilon_t}_{i_tj_t}$ is $Q^{t-1}$-admissible, then the iterated $FS$-transformation $\FS$ is called \textbf{$Q$-admissible}. By definition and Corollary~\ref{C:15}, $\FS$ is $Q$-admissible if and only if the corresponding iterated $FS$-linear transformation $\FSq$ is $q_Q$-admissible, and in that case
\[
q_{Q\FS} \approx^{\FSq}q_Q.
\]
Notice that if $\FS$ is a $Q$-admissible iterated $FS$-transformation, and $\FS'$ is a $Q\FS$-admissible iterated $FS$-transformation, then the concatenation $\FS\FS'$ is a $Q$-admissible iterated $FS$-transformation.
\medskip
Recall that a \textbf{maximal (quiver) star} $\Star_n$ with $n$ arrows is a tree quiver with $n+1$ vertices (and arbitrary direction of its arrows), such that there exists a vertex $v_0$ incident to all arrows of the quiver (called \textbf{center of the star}). One of our main combinatorial problems is to show that any tree quiver can be brought to a maximal star via an admissible iterated $FS$-transformation. We need the following key preliminary observation.
\begin{lemma}\label{L:16}
For any maximal (quiver) star $\Star$ and any vertex $v \in \Star_0$, there exists a $\Star$-admissible iterated $FS$-transformation $\FS$ such that $\Star\FS$ is a maximal star having $v$ as its center.
\end{lemma}
\begin{proof}
Let $1,\ldots,n$ be the arrows of the maximal star $\Star$, all of them incident to the center $v_0$ of the star, and assume $n \geq 2$. The remaining vertices $v_1,\ldots,v_n$ of $\Star$ are enumerated so that $v_t$ is incident to the arrow $t$ for $t=1,\ldots,n$. We show that the following is a $\Star$-admissible iterated $FS$-transformation,
\[
\mathcal{W}=\FS_{2,1}\FS_{3,2}\cdots \FS_{n,n-1}.
\]
For $t=1,\ldots,n$, take $Q^{t}$ to be the tree quiver with the same set of vertices as $\Star$, given by
\[
\xymatrix@R=1pc{
& v_2 \ar@{-}[rd]^-{1} & & & v_{t+1} \ar@{-}[ld]_-{t+1} \\
Q^{t}= & \vdots & v_1 \ar@{-}[r]_-{t} & v_0 & \vdots \\
& v_{t} \ar@{-}[ru]_-{t-1} & & & v_n \ar@{-}[lu]^-{n} }
\]
(the direction of arrows, not shown in the diagram, is irrelevant for the proof). Then $Q^{1}=\Star$, and clearly $\FS_{t+1,t}$ is a $Q^{t}$-admissible $FS$-transformation (admissibility holds since $t+1$ and $t$ are consecutive arrows). Observe that $Q^{t+1}=Q^{t}\FS_{t+1,t}$, and thus by definition $Q^{n}=\Star \mathcal{W}$. Notice also that $Q^{n}$ is a maximal star with center the vertex $v_1$, that $\mathcal{W}$ is a $Q^{n}$-admissible iterated $FS$-transformation, and that the first arrow $1$ in $Q^{n}$ joins the vertices $v_1$ and $v_2$. In particular, the quiver $(\Star\mathcal{W})\mathcal{W}$ is a maximal star with center $v_2$, whose first arrow joins vertices $v_2$ and $v_3$.
Now, if $v=v_0$ is the original center of the star, there is nothing to do. If $v=v_t$ for some $1 \leq t \leq n$, then we may repeat the above construction $t$ times to get a maximal star $\Star \mathcal{W}^{t}$ with center the vertex $v_t$, as wanted, where $\mathcal{W}^t$ denotes the concatenation of $t$ copies of $\mathcal{W}$.
\end{proof}
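The recentering procedure of the proof can be simulated combinatorially. In the sketch below, each arrow is represented by the set of its endpoints (which suffices here, since arrow directions are irrelevant for the proof); the helper `fs` and the star with four arrows are illustrative assumptions:

```python
def fs(arrows, i, j):
    """Quiver FS-transformation FS_{ij} on undirected arrow supports:
    the flation replaces arrow i by the symmetric difference of the endpoint
    sets of i and j (the composite walk); then the labels i and j are swapped."""
    a = dict(arrows)
    a[i], a[j] = a[j], a[i] ^ a[j]      # after the swap, i carries j's old support
    return a

# Maximal star with center v0: arrow t joins v0 and v_t, for t = 1..4
Q = {t: frozenset({0, t}) for t in range(1, 5)}
for t in range(2, 5):                   # W = FS_{2,1} FS_{3,2} FS_{4,3}
    Q = fs(Q, t, t - 1)
# Q is now a maximal star with center v1; its largest arrow joins v1 and v0
```

After applying $\mathcal{W}$, every arrow is incident to $v_1$, matching the quiver $Q^{n}$ in the proof.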
\begin{proposition}\label{P:17}
For any tree quiver $Q$ with selected vertex $v$, there exists a $Q$-admissible iterated $FS$-transformation $\FS$ such that $Q\FS$ is a maximal star with center the vertex $v$.
\end{proposition}
\begin{proof}
We proceed by induction on the number of arrows $n=|Q_1|$ of the tree $Q$. For $n=1$ there is nothing to show. For $n=2$ the tree $Q$ is a star, and we may change the position of its center as in the Lemma above. Hence, we may assume that $n \geq 3$ and that the claim holds for all trees with less than $n$ arrows.
Let $n$ be the maximal arrow in $Q$ (relative to the total order $\leq$ on $Q_1$) and take $\MiNu(n)=\{v,w\}$. Let $Q'$ be the quiver obtained from $Q$ by deleting the arrow $n$. Then $Q'$ is the disjoint union of exactly two tree quivers, one containing the vertex $v$ and denoted by $Q^v$, and one containing the vertex $w$ and denoted by $Q^w$. The sets $Q^v_1$ and $Q^w_1$ inherit the total order from $Q_1$.
Observe that, by the maximality of $n$, any $Q^v$-admissible iterated $FS$-transformation is also $Q$-admissible, and similarly for $Q^w$. We distinguish two cases:
\medskip
\noindent \textbf{Case 1.} Assume first that $|Q^w_0|=1$ (that is, that $w$ is a leaf in $Q$). By induction hypothesis, we may assume that $Q^v$ is a maximal star with center $v$. Then $Q$ is a maximal star. We proceed analogously if $|Q^v_0|=1$.
\medskip
\noindent \textbf{Case 2.} Assume that $|Q^w_0|>1$ and $|Q^v_0|>1$, and that the second largest arrow $n-1$ in $Q$ belongs to $Q^v$ (the case where it belongs to $Q^w$ is symmetric). By induction hypothesis, we may assume that $Q^v$ is a maximal star with center $v$. In particular, $n-1$ and $n$ are adjacent arrows and $n-1$ is a pendant arrow in $Q$. Then $\FS_{n-1,n}$ is a $Q$-admissible $FS$-transformation, and the maximal arrow $n$ in $Q'=Q\FS_{n-1,n}$ is a pendant arrow in $Q'$. Then apply Case~1 to the tree quiver $Q'$.
\medskip
To complete the proof, use the Lemma above to change the center of the resulting maximal star, as desired.
\end{proof}
\begin{remark}\label{R:18}
For a loop-less quiver $Q$, all $Q$-admissible iterated $FS$-transformations are reversible. To be precise, if $\FS=\FS^{\epsilon_1}_{i_1j_1}\cdots\FS^{\epsilon_r}_{i_rj_r}$ is a $Q$-admissible iterated $FS$-transformation and $Q'=Q\FS$, then
\[
\FS^{-1}:=\FS^{-\epsilon_r}_{j_ri_r}\cdots \FS^{-\epsilon_1}_{j_1i_1},
\]
is a $Q'$-admissible iterated $FS$-transformation, and $Q=Q'\FS^{-1}$.
\end{remark}
\begin{proof}
The claim follows inductively from Remark~\ref{R:06}$(b)$.
\end{proof}
\subsection{Strong congruence among positive unit forms of Dynkin type $\A_n$} \label{S(C):proof}
The following proposition is one of the goals in~\cite{jaJ2018} (see~\cite[Theorem~5.5$(c)$ and Corollary~6.6]{jaJ2018}). Here we give a short proof. For $c \geq 0$, define the \textbf{canonical $c$-extension linear quiver} $\overrightarrow{\A}_n^{(c)}$ as a quiver obtained from the linear quiver $\overrightarrow{\A}_n=\overrightarrow{\A}_n^{(0)}=\xymatrix{v_1 \ar[r]^-{1} & v_2 \ldots v_n \ar[r]^-{n} & v_{n+1}}$ by adding $c$ arrows from $v_{n+1}$ to $v_1$:
\[
\overrightarrow{\A}_n^{(c)}=\xymatrix{v_1 \ar[r]^-{1} & v_2 \ar[r]^-{2} & v_3 \ldots v_{n-1} \ar[r]^-{n-1} & v_n \ar[r]^-{n} & v_{n+1} \ar@<2.5ex>@/^20pt/[llll]^-{n+c} \ar@<.5ex>@/^20pt/[llll]_-{n+1}^-{\cdots} \\ {} }
\]
Observe that the incidence graph of $\overrightarrow{\A}_n^{(c)}$ is the canonical $c$-vertex extension $\widehat{\A}_n^{(c)}$ of Dynkin type $\A_{n}$, as defined by Simson in~\cite[Definition~2.2]{dS16a},
\[
\Inc(\overrightarrow{\A}_n^{(c)})=\widehat{\A}_n^{(c)}.
\]
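The corank behavior of this construction (that $q_{\widehat{\A}_n^{(c)}}$ has corank $c$) can be confirmed numerically; the $0$-indexed vertex labels in the sketch below are an illustrative choice:

```python
import numpy as np

n, c = 4, 2
# Canonical c-extension linear quiver: path v0 -> v1 -> ... -> vn,
# plus c extra parallel arrows vn -> v0
arrows = [(k, k + 1) for k in range(n)] + [(n, 0)] * c
I = np.zeros((n + 1, n + c), dtype=int)
for col, (s, t) in enumerate(arrows):
    I[s, col] += 1
    I[t, col] -= 1

corank = (n + c) - np.linalg.matrix_rank(I)     # crk = |Q_1| - |Q_0| + 1 = c
```

Since the quiver is connected with $n+1$ vertices and $n+c$ arrows, the incidence matrix has rank $n$ and the corank is exactly $c$.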
\begin{proposition}\label{P:19}
Let $q$ be a connected non-negative unit form of Dynkin type $\A_n$. Then there is a connected loop-less quiver $Q$ such that $q=q_Q$.
\end{proposition}
\begin{proof}
By Theorem~\ref{T:04}, there is an iterated flation $T=T^{\epsilon_1}_{i_1j_1}\cdots T^{\epsilon_r}_{i_rj_r}$ for $q$ such that $qT=q_{\widehat{\A}_n^{(c)}}$ is the canonical $c$-extension of $q_{\A_n}$ (for $c \geq 0$ the corank of $q$, see~\cite{dS16a,SZ17}).
\medskip
Take $Q$ as the unique quiver satisfying
\[
I(Q)=I(\overrightarrow{\A}_n^{(c)})T^{-1},
\]
and notice that for any $x \in \Z^n$ we have
\begin{eqnarray*}
q_Q(x) & = & \frac{1}{2} x^{\tr}I(Q)^{\tr}I(Q)x =\frac{1}{2}
x^{\tr}T^{-\tr}I(\overrightarrow{\A}_n^{(c)})^{\tr}I(\overrightarrow{\A}_n^{(c)})T^{-1}x \\
& = & q_{\widehat{\A}_n^{(c)}}(T^{-1}x)=q(x),
\end{eqnarray*}
that is, $q=q_Q$.
\end{proof}
Now we are ready to prove one of the main Gram classification results of the paper.
\begin{theorem}\label{T:20}
Let $q$ be a connected positive unit form of Dynkin type $\A_n$. If $q'$ is a unit form with $q' \sim q$, then there exists a composition of $q$-admissible iterated $FS$-linear transformations and sign changes $B$, such that
\[
q' \approx^B q.
\]
\end{theorem}
\begin{proof}
Let $Q$ be a tree quiver such that $q=q_Q$ as in Proposition~\ref{P:19}. By Proposition~\ref{P:17}, there is a $Q$-admissible iterated $FS$-transformation $\FS$ such that $Q\FS$ is a maximal star. Take an arrow inversion $\mathcal{V}$ such that $\Star=Q\FS \mathcal{V}$ has all arrows pointing away from the star center. Denote by $T$ the iterated $FS$-linear transformation for $q$ corresponding to $\FS$, and by $V$ the point inversion corresponding to $\mathcal{V}$, so that $qTV=q_{\Star}$, by Proposition~\ref{P:12}.
\medskip
Now, if $q' \sim q$ then $q'$ is a connected positive unit form (by Lemma~\ref{L:01}). As before, there is a composition of $q'$-admissible iterated $FS$-linear transformations $T'$, and a point inversion $V'$, such that $q'T'V'=q_{\Star}=qTV$. Considering the linear transformation $B=TV(V')^{-1}(T')^{-1}$, by Corollary~\ref{C:15} and Remark~\ref{R:13} we have $q' \approx^B q$, which completes the proof.
\end{proof}
\section{Combinatorial Coxeter analysis of unit forms of Dynkin type $\A_n$} \label{S(A)}
We begin this section by giving some combinatorial definitions, in particular the notion of \emph{inverse of a quiver}, which will be used for the Coxeter analysis of non-negative unit forms of Dynkin type $\A_n$. Let $Q=(Q_0,Q_1,\sou,\tar)$ be a loop-less quiver, with a fixed total ordering $\leq$ of its arrows, and consider the sets
\[
Q_1^<(v,i)=\{j \in Q_1 \mid \text{$j<i$ and $v \in \MiNu(i) \cap \MiNu(j)$}\},
\]
for each vertex $v$ and each arrow $i$ of $Q$.
\begin{definition}\label{D:22}
Let $\alpha=i_0^{\epsilon_0}i_1^{\epsilon_1}\cdots i_{\ell}^{\epsilon_{\ell}}$ be a non-trivial walk of $Q$.
\begin{itemize}
\item[a)] We say that the walk $\alpha$ is \textbf{minimally decreasing} if
\[
i_{t+1}=\max Q_1^{<}(v_t,i_t), \quad \text{for $t=0,\ldots,\ell-1$}.\vspace*{-1.8mm}
\]
\item[b)] If $\alpha$ is minimally decreasing, we say that $\alpha$ is \textbf{left complete} if whenever $\beta \alpha$ is minimally decreasing for some walk $\beta$, then $\beta$ is a trivial walk. Similarly, $\alpha$ is \textbf{right complete} if whenever $\alpha \beta$ is minimally decreasing for some walk $\beta$, then $\beta$ is a trivial walk. A left and right complete minimally decreasing walk will be called \textbf{structural (decreasing) walk}.
\end{itemize}
\end{definition}
We will mainly consider the following particular minimally decreasing walks. For an arrow $i$, there are exactly two right complete minimally decreasing walks starting with the arrow $i$: one starting at the vertex $\sou(i)$, denoted by $\alpha^-_Q(i^{+1})$, and one starting at the vertex $\tar(i)$, denoted by $\alpha^-_Q(i^{-1})$. To be precise, if
\[
\alpha_Q^-(i^{+1})=(v_{-1},i_0,v_{0},i_1,v_1,\ldots,v_{\ell-1},i_{\ell},v_{\ell}),
\]
then $v_{-1}=\sou(i)$, $i_0=i$, $i_{t+1}=\max Q_1^{<}(v_t,i_t)$ for $t=0,\ldots,\ell-1$, and $Q_1^<(v_{\ell},i_{\ell})=\emptyset$.
\medskip
Similarly, if
\[
\alpha_Q^-(i^{-1})=(v_{-1},i_0,v_{0},i_1,v_1,\ldots,v_{\ell-1},i_{\ell},v_{\ell}),
\]
then $v_{-1}=\tar(i)$, $i_0=i$, $i_{t+1}=\max Q_1^{<}(v_t,i_t)$ for $t=0,\ldots,\ell-1$, and $Q_1^<(v_{\ell},i_{\ell})=\emptyset$. For a vertex $v$ we denote by $\alpha_Q^-(v)$ the unique structural decreasing walk starting at $v$.
\medskip
Consider dually the minimally increasing walks $\alpha_Q^+(i^{\pm 1})$ and $\alpha_Q^+(v)$.
\begin{definition}\label{D:inv} Define a new quiver $Q^*=(Q^*_0,Q^*_1,\sou^*,\tar^*)$ having the same set of vertices $Q_0^*=Q_0$ as $Q$, and the same number of arrows $|Q_1^*|=|Q_1|$, and such that each arrow $i$ in $Q$ corresponds to an arrow $i^*$ in $Q^*$, given by
\[
\sou^*(i^*)=\tar(\alpha_Q^-(i^{-1})), \qquad \text{and} \qquad \tar^*(i^*)=\tar(\alpha_Q^-(i^{+1})).
\]
The order of the set of arrows of $Q_1^*$ corresponds to the order in $Q_1$ (that is, $i^* \leq j^*$ in $Q^*_1$ if and only if $i \leq j$ in $Q_1$).
\end{definition}
The quiver $Q^*$ defined above will be referred to as \textbf{inverse quiver} of $Q$, and we will use the notation $Q^{-1}=Q^*$ (we will drop the asterisk $*$ on arrows of $Q^{-1}$ when the context allows it). Proposition~\ref{P:24} below, for which we need the following technical result, justifies our definitions.
\subsection{A technical lemma} \label{S(A):Inv}
Recall that the columns of the (vertex-arrow) incidence matrix $I(Q)$ of $Q$ are denoted by $I_i=\bas_{\sou(i)}-\bas_{\tar(i)} \in \Z^{|Q_0|}$ for an arrow $i$ of $Q$. The columns of $I(Q^{-1})$ will be denoted by $I_i^{-1}$ for an arrow $i$ of the inverse quiver $Q^{-1}$. For convenience, we consider the function $\langle-,- \rangle:Q_1 \times Q_1 \to \Z$ given by $\langle i,j \rangle=I^{\tr}_iI_j$ for arrows $i$ and $j$ in $Q$.
\begin{lemma}\label{L:23}
Let $Q$ be a loop-less quiver. Define an auxiliary function $\Xi:Q_0 \times Q_1 \to \Z^{|Q_0|}$, given for a vertex $v$ and an arrow $k$ of $Q$ by,
\[
\Xi(v,k)=\sum_{i \in Q_1^<(v,k)}I^{-1}_i\frac{\langle i,k \rangle}{|\langle i,k \rangle|},
\]
where as usual the sum is zero when the set of indices is empty. Then the following assertions hold:
\begin{itemize}
\itemsep=0.98pt
\item[a)] If $j=\max Q_1^<(v,k)$ and $\MiNu(j)=\{v,w\}$, then $\Xi(v,k)=\frac{\langle j,k \rangle}{|\langle j,k \rangle|}\left[I_j-\Xi(w,j) \right]$.
\item[b)] For any arrow $k$ with $\MiNu(k)=\{v,w\}$ we have $\Xi(v,k)+\Xi(w,k)=I_k-I_k^{-1}$.
\end{itemize}
In particular, for any arrow $k$ in $Q$ the following recursive formula for $I_k^{-1}$ holds,
\[
I_k^{-1}=I_k-\sum_{i<k}I_i^{-1}\langle i,k \rangle.
\]
\end{lemma}
\begin{proof}
We proceed by induction on the totally ordered arrows $Q_1$. Observe that if $k$ is minimal in $Q_1$, then $\Xi(\sou(k),k)=0=\Xi(\tar(k),k)$ and $I^{-1}_k=I_k$, therefore all claims hold in this case. For simplicity, for adjacent arrows $j$ and $k$ we take
\[
\sigma(j,k)=\frac{\langle j,k \rangle}{|\langle j,k \rangle|} \in \{\pm 1\}.
\]
To verify $(a)$ for an arrow $k$, assume that $(b)$ is satisfied for all arrows smaller than $k$. Then, since $j=\max Q_1^<(v,k)$, we have $Q_1^<(v,k)=\{j\} \cup Q_1^<(v,j)$, and therefore
\begin{eqnarray}
\Xi(v,k) & = & I^{-1}_{j}\sigma(j,k)+\sum_{i \in Q_1^<(v,j)}I^{-1}_i\sigma(i,k) \nonumber \\
& = & I^{-1}_{j}\sigma(j,k)+\sum_{i \in Q_1^<(v,j)}I^{-1}_i\sigma(i,j)\sigma(j,k) \label{EqOne} \\
& = & \sigma(j,k)\left[ I_{j}^{-1}+\Xi(v,j) \right] \nonumber \\
& = & \sigma(j,k)\left[ I_{j}^{-1}+I_{j}-I_{j}^{-1}-\Xi(w,j) \right] \label{EqTwo} \\
& = & \sigma(j,k)\left[ I_{j}-\Xi(w,j) \right], \nonumber
\end{eqnarray}
where the equality~(\ref{EqOne}) holds since $\sigma(i,j)\sigma(j,k)=\sigma(i,k)$ whenever $\MiNu(i)\cap \MiNu(j) \cap \MiNu(k) \neq \emptyset$, and the equality~(\ref{EqTwo}) holds applying $(b)$ to the arrow $j$.
\medskip
To show $(b)$, assume the claim holds for all arrows smaller than $k$, and that $(a)$ holds for all arrows smaller than or equal to $k$. Consider the minimally decreasing walk $\alpha=\alpha^-_Q(k^{-1})$ starting with the arrow $i_0=k$ as in the definition above,
\[
\alpha=i_0^{\epsilon_0}\cdots i_r^{\epsilon_r},
\]
(in particular, $\epsilon_0=-1$ since $\tar(k)=v_{-1}$). For simplicity, for such a walk and for $t=1,\ldots,r$ define
\[
\sigma(i_t,i_{t-1},\ldots,i_1,i_0)=(-1)^{t+1}\frac{\langle i_{t},i_{t-1} \rangle \; \langle i_{t-1},i_{t-2} \rangle \cdots \langle i_{1},i_{0} \rangle}{|\langle i_{t},i_{t-1} \rangle \; \langle i_{t-1},i_{t-2} \rangle \cdots \langle i_{1},i_{0} \rangle|} \in \{\pm 1\}.
\]
Applying recursively $(a)$ we get $\Xi(v_0,i_0)=\sum_{t=1}^rI_{i_t}\sigma(i_t,\ldots,i_0)$. However, since $\sou(i_0)=v_0$, we have $\sigma(i_t,\ldots,i_0)=-\sigma(v_t,i_t)=\epsilon_t$, and therefore the above expression for $\Xi(v_0,i_0)$ is a telescopic sum, that is,
\[
\Xi(v_0,i_0)=I_{i_1}\epsilon_1+\ldots+I_{i_r}\epsilon_r=\bas_{v_0}-\bas_{\xi(v_0,i_0)}.
\]
Proceeding similarly for $\Xi(w_0,i_0)$, where $w_0=\tar(i_0)$, we find that
\[
\Xi(w_0,i_0)=\bas_{\xi(w_0,i_0)}-\bas_{w_0}.
\]
Since our definition of $I^{-1}_{i_0}$ is $I^{-1}_{i_0}=\bas_{\xi(v_0,i_0)}-\bas_{\xi(w_0,i_0)}$, these equations yield the result,
\[
\Xi(v_0,i_0)+\Xi(w_0,i_0)=\bas_{v_0}-\bas_{\xi(v_0,i_0)}+\bas_{\xi(w_0,i_0)}-\bas_{w_0}=I_{i_0}-I_{i_0}^{-1}.
\]
Now, to verify the last claim, for an arrow $k$ with $\MiNu(k)=\{v,w\}$ consider the sets
\[
X=Q_1^<(v,k)-Q_1^<(w,k), \quad Y=Q_1^<(w,k)-Q_1^<(v,k) \quad \text{and} \quad Z=Q_1^<(v,k)\cap Q_1^<(w,k).
\]
Then, applying $(b)$, we get
\begin{eqnarray}
I_k^{-1} & = & I_k-\sum_{i \in Q_1^<(v,k)}I^{-1}_i\sigma(i,k)-\sum_{i \in Q_1^<(w,k)}I^{-1}_i\sigma(i,k) \nonumber \\
& = &I_{k}-\left(\sum_{i \in X\cup Y}I^{-1}_i\frac{\langle i,k \rangle}{|\langle i,k \rangle|}+2\sum_{i \in Z}I^{-1}_i\frac{\langle i,k \rangle}{|\langle i,k \rangle|} \right) \nonumber \\
& = & I_k-\sum_{i<k}I_i^{-1}\langle i,k \rangle, \nonumber
\end{eqnarray}
where the last equality holds since $|\langle i,k \rangle|=1$ if $i \in X \cup Y$, and $|\langle i,k \rangle|=2$ if $i \in Z$ (observe that $Z$ is the set of arrows smaller than $k$ that are parallel to $k$), and $\langle i,k \rangle=0$ if $i \notin X \cup Y \cup Z$. This completes the proof.
\end{proof}
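The recursive formula $I^{-1}_k=I_k-\sum_{i<k}I^{-1}_i\langle i,k\rangle$ lends itself to a direct computation. The following Python sketch (an illustration, not part of the paper: it assumes NumPy, encodes the incidence column of an arrow $s \to t$ as $\bas_s-\bas_t$, and reads $\langle i,k \rangle$ as the inner product $I_i^{\tr}I_k$) computes the columns $I^{-1}_k$ by forward substitution for a small hypothetical quiver and checks that each of them is again an incidence column.

```python
import numpy as np

def incidence(n_vert, arrows):
    """Vertex-arrow incidence matrix: the column of an arrow (s, t) is e_s - e_t."""
    I = np.zeros((n_vert, len(arrows)), dtype=int)
    for col, (s, t) in enumerate(arrows):
        I[s, col] += 1
        I[t, col] -= 1
    return I

# hypothetical loop-less quiver on 4 vertices: a triangle with a pendant arrow
I = incidence(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
m = I.shape[1]

# recursion of the lemma, reading <i,k> as the inner product I_i^T I_k
I_inv = np.zeros_like(I)
for k in range(m):
    col = I[:, k].copy()
    for i in range(k):
        col -= (I[:, i] @ I[:, k]) * I_inv[:, i]
    I_inv[:, k] = col

# every column of I(Q^{-1}) is again of the form e_a - e_b
assert all(sorted(c[c != 0].tolist()) == [-1, 1] for c in I_inv.T)
```

With this convention the recursion is exactly forward substitution for $I(Q^{-1})\widecheck{G}_Q=I(Q)$, which is the content of Proposition~\ref{P:24} below.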
\subsection{Gram matrices of a quiver and its inverse} \label{S(A):InvDos}
We recall that the inverse quiver of a loop-less quiver $Q$ is given in Definition~\ref{D:inv}, and is denoted by $Q^{-1}$.
\begin{proposition}\label{P:24}
Let $Q$ be a loop-less quiver with inverse quiver $Q^{-1}$ and upper triangular Gram matrices $\widecheck{G}_Q$ and $\widecheck{G}_{Q^{-1}}$. Then
\[
I(Q^{-1})=I(Q)\widecheck{G}^{-1}_Q \qquad \text{and} \qquad \widecheck{G}_Q\widecheck{G}_{Q^{-1}}=\Id.
\]
\end{proposition}
\begin{proof}
Take $G=\widecheck{G}_Q$ and let $A$ be the (upper) triangular adjacency matrix $\Adj(\Inc(Q))$ of the incidence bigraph $\Inc(Q)$ of $Q$ (hence $G+G^{\tr}=I(Q)^{\tr}I(Q)=2\Id-(A+A^{\tr})$, cf.~\ref{S(C):Iqf}). We proceed by induction on the number of arrows $m=|Q_1|$ of $Q$ (the claim is trivial for $m=1$). Let $1,\ldots,m$ be the ordered arrows of $Q$, and denote by $\widetilde{Q}$ the quiver obtained from $Q$ by removing the maximal arrow~$m$. Notice that, by maximality, the inverse $\widetilde{Q}^{-1}$ is obtained from $Q^{-1}$ by removing its maximal arrow. Then
\[
I(\widetilde{Q})=[I_{1}|\cdots |I_{{m-1}}], \quad \text{and} \quad G=\begin{pmatrix} \widetilde{G}&-V\\0&1 \end{pmatrix},
\]
where $\widetilde{G}=\widecheck{G}_{\widetilde{Q}}$ and $V$ is the column vector of size $m-1$ given by $V=(\langle {t},m \rangle)_{t=1}^{m-1}$ (since $G=\Id-A$, cf. Lemma~\ref{L:08}). Observe that
\[
G^{-1}=\begin{pmatrix} \widetilde{G}^{-1}&\widetilde{G}^{-1}V\\0&1 \end{pmatrix},
\]
therefore, by induction hypothesis,
\begin{eqnarray}
I(Q)G^{-1} & = & [I(\widetilde{Q})|I_{m}]\begin{pmatrix} \widetilde{G}^{-1}&\widetilde{G}^{-1}V\\0&1 \end{pmatrix} \nonumber \\
& = & [I(\widetilde{Q})\widetilde{G}^{-1}|I(\widetilde{Q})\widetilde{G}^{-1}V+I_{m}] \nonumber \\[4pt]
& = & [I(\widetilde{Q}^{-1}) | I_{m}+I(\widetilde{Q}^{-1})V]. \nonumber
\end{eqnarray}
We have shown in Lemma~\ref{L:23} that
\[
I_{m}+I(\widetilde{Q}^{-1})V=I_{m}-\sum_{t=1}^{m-1}I^{-1}_{t}\langle {t},m \rangle=I_{m}^{-1},
\]
that is, $I(Q)G^{-1}=[I(\widetilde{Q}^{-1})|I^{-1}_{m}]=I(Q^{-1})$, as wanted.
\medskip
Now we show that $\widecheck{G}_Q\widecheck{G}_{Q^{-1}}=\Id$. By the first claim, observe that
\begin{eqnarray}
\widecheck{G}_{Q^{-1}}+\widecheck{G}_{Q^{-1}}^{\tr}&=&G_{Q^{-1}}=I(Q^{-1})^{\tr}I(Q^{-1}) \nonumber \\
&=&\widecheck{G}_{Q}^{-\tr}I(Q)^{\tr}I(Q)\widecheck{G}_{Q}^{-1}=\widecheck{G}_{Q}^{-\tr}G_Q\widecheck{G}_{Q}^{-1} \nonumber \\[3pt]
& = & \widecheck{G}_{Q}^{-\tr}(\widecheck{G}_{Q}+\widecheck{G}_{Q}^{\tr})\widecheck{G}_{Q}^{-1} = \widecheck{G}_{Q}^{-\tr}+\widecheck{G}_{Q}^{-1}. \nonumber
\end{eqnarray}
This completes the proof since both $\widecheck{G}_{Q^{-1}}$ and $\widecheck{G}_{Q}^{-1}$ are upper triangular matrices.
\end{proof}
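Proposition~\ref{P:24} can be checked numerically. The sketch below (illustrative only; it assumes NumPy, the incidence convention $\bas_{\sou(i)}-\bas_{\tar(i)}$ for the column of an arrow $i$, and takes $\widecheck{G}_Q$ to be the unitriangular matrix with $\widecheck{G}_Q+\widecheck{G}_Q^{\tr}=I(Q)^{\tr}I(Q)$) builds $I(Q^{-1})=I(Q)\widecheck{G}_Q^{-1}$ for a small $1$-tree quiver with a pair of parallel arrows and verifies both claims.

```python
import numpy as np

# hypothetical 1-tree quiver on 4 vertices; arrows 0 and 3 are parallel (0 -> 1)
arrows = [(0, 1), (1, 2), (2, 3), (0, 1)]
n, m = 4, len(arrows)
I = np.zeros((n, m), dtype=int)
for col, (s, t) in enumerate(arrows):
    I[s, col] += 1
    I[t, col] -= 1

G = I.T @ I                                         # symmetric Gram matrix G_Q
Gc = np.triu(G, 1) + np.eye(m, dtype=int)           # upper triangular Gram matrix
Gc_inv = np.round(np.linalg.inv(Gc)).astype(int)    # integer, since Gc is unitriangular

I_inv = I @ Gc_inv                                  # first claim: incidence matrix of Q^{-1}

# each column of I_inv is of the form e_a - e_b, so Q^{-1} is a loop-less quiver
assert all(sorted(c[c != 0].tolist()) == [-1, 1] for c in I_inv.T)

# second claim: the upper triangular Gram matrices are mutually inverse
Gc_of_inverse = np.triu(I_inv.T @ I_inv, 1) + np.eye(m, dtype=int)
assert np.array_equal(Gc @ Gc_of_inverse, np.eye(m, dtype=int))
```

In particular $\widecheck{G}_{Q^{-1}}$ coincides with $\widecheck{G}_Q^{-1}$, in accordance with the last paragraph of the proof.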
As a consequence, we have the following important observation.
\begin{corollary}\label{C:25}
For a loop-less quiver $Q$ the following assertions hold.
\begin{itemize}
\item[a)] The inverse quiver $Q^{-1}$ has no loop, and
\[
Q=(Q^{-1})^{-1}.
\]
\item[b)] The quiver $Q$ is connected if and only if $Q^{-1}$ is connected.
\item[c)] Inverses commute with arrow inversions, that is, if $C$ is a set of arrows in $Q$, and $C^*$ is the set of corresponding arrows in $Q^{-1}$, then $(Q\mathcal{V}_C)^{-1}=Q^{-1}\mathcal{V}_{C^*}$.
\end{itemize}
\end{corollary}
\begin{proof}
For $(a)$, if $Q^{-1}$ has a loop, then its (vertex-arrow) incidence matrix $I(Q^{-1})$ has a zero column, say $I_i=0$. Then the $i$-th diagonal entry of $G_{Q^{-1}}=I(Q^{-1})^{\tr}I(Q^{-1})$ is zero. But $G_{Q^{-1}}=\widecheck{G}_Q^{-1}+\widecheck{G}_Q^{-\tr}$, which means that the $i$-th diagonal entry of $\widecheck{G}_Q^{-1}$ is zero. This is impossible since $\widecheck{G}_Q^{-1}$ is invertible. It follows that the inverse quiver $Q^{-1}$ has no loop. Now, applying Proposition~\ref{P:24}, we obtain
\[
I((Q^{-1})^{-1})=I(Q^{-1})\widecheck{G}_{Q^{-1}}^{-1}=[I(Q)\widecheck{G}_Q^{-1}]\widecheck{G}_Q=I(Q),
\]
that is, $Q=(Q^{-1})^{-1}$. This completes the proof of $(a)$.
\medskip
For $(b)$, notice that the construction of the inverse $Q^{-1}$ involves only vertices and arrows inside the connected components of $Q$, that is, if $Q=Q^1 \cup Q^2$, then
\[
Q^{-1}=(Q^1)^{-1} \cup (Q^2)^{-1}.
\]
Using $(a)$, this shows that $Q$ is connected if and only if $Q^{-1}$ is.
\medskip
For $(c)$, by applying Propositions~\ref{P:12} and~\ref{P:24} we obtain
\begin{eqnarray}
I((Q\mathcal{V}_C)^{-1})&=&I(Q\mathcal{V}_C)\widecheck{G}^{-1}_{Q\mathcal{V}_C}=I(Q)V_C(V_C^{\tr}\widecheck{G}_QV_C)^{-1} \nonumber \\
&=&I(Q)\widecheck{G}_Q^{-1}V_C^{-\tr}=I(Q^{-1})V_C=I(Q^{-1}\mathcal{V}_{C^*}). \nonumber
\end{eqnarray}
Here we use the equalities $\widecheck{G}_{Q\mathcal{V}_C}=V_C^{\tr}\widecheck{G}_QV_C$ (cf.~Remark~\ref{R:13}) and $V_C^{-\tr}=V_C$. Clearly $I((Q\mathcal{V}_C)^{-1})=I(Q^{-1}\mathcal{V}_{C^*})$ implies $(Q\mathcal{V}_C)^{-1}=Q^{-1}\mathcal{V}_{C^*}$. This finishes the proof of $(c)$.
\end{proof}
\subsection{A combinatorial formula for the Coxeter matrix} \label{S(A):Cox}
The \textbf{Coxeter matrix} associated to a unit form $q$ on $n$ variables is the $n \times n$ matrix given by
\[
\Cox_q=-\widecheck{G}_q^{\tr}\widecheck{G}_q^{-1}.
\]
The characteristic polynomial of $\Cox_q$, given by $\cox_q(\va)=\det(\Id\va-\Cox_q)$, is called the \textbf{Coxeter polynomial} of $q$. The \textbf{Coxeter number} $\coxN_q$ of $q$ is the minimal natural number $m$ such that $\Cox_q^m=\Id$, if such a number exists, and $\coxN_q=\infty$ otherwise (cf.~\cite{dS20} for these and related definitions). The following result may be found in~\cite[Proposition~4.2]{dS20} in a wider context.
\begin{lemma}\label{L:26}
Let $q$ and $q'$ be unit forms. If $q' \approx^Bq$, then
\[
\Cox_{q'}=B^{\tr}\Cox_qB^{-\tr}.
\]
In particular, $\cox_{q'}(\va)=\cox_q(\va)$ and $\coxN_{q'}=\coxN_q$.
\end{lemma}
\begin{proof}
By hypothesis we have $\widecheck{G}_{q'}=B^{\tr}\widecheck{G}_qB$, therefore $\widecheck{G}_{q'}^{-1}=B^{-1}\widecheck{G}^{-1}_qB^{-\tr}$, and
\[
\Cox_{q'}=-\widecheck{G}_{q'}^{\tr}\widecheck{G}_{q'}^{-1}=-(B^{\tr}\widecheck{G}_q^{\tr}B)(B^{-1}\widecheck{G}_q^{-1}B^{-\tr})=B^{\tr}\Cox_qB^{-\tr}.
\]
Finally, both the characteristic polynomial and the order of a square matrix are similarity invariants, therefore the remaining two equalities hold.
\end{proof}
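The conjugation formula is easy to test numerically. In the sketch below (illustrative; it assumes NumPy and simply treats $B^{\tr}\widecheck{G}_qB$ formally as the Gram matrix of $q'$, since only the algebraic relation $\widecheck{G}_{q'}=B^{\tr}\widecheck{G}_qB$ enters the proof), a random unimodular integer matrix $B$ is applied to the standard Gram matrix of the linear $\A_3$ form.

```python
import numpy as np

rng = np.random.default_rng(0)

# upper triangular Gram matrix of the linear A_3 unit form
Gq = np.array([[1, -1,  0],
               [0,  1, -1],
               [0,  0,  1]])

# random unimodular integer matrix B: a product of elementary transvections
B = np.eye(3, dtype=int)
for _ in range(5):
    i, j = rng.choice(3, size=2, replace=False)
    E = np.eye(3, dtype=int)
    E[i, j] = rng.integers(-2, 3)
    B = B @ E

Gq2 = B.T @ Gq @ B                       # Gram matrix of q' with q' ~^B q

def coxeter(G):                          # Cox_q = -G^T G^{-1}
    return -G.T @ np.linalg.inv(G)

C, C2 = coxeter(Gq), coxeter(Gq2)
assert np.allclose(C2, B.T @ C @ np.linalg.inv(B).T)   # conjugation formula
assert np.allclose(np.poly(C2), np.poly(C))            # equal Coxeter polynomials
```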
We now give a combinatorial expression for the Coxeter matrix of some unit forms.
\begin{theorem}\label{T:27}
For a loop-less quiver $Q$, the following formula for the Coxeter matrix $\Cox_{q_Q}$ of the unit form $q_Q$ holds,
\[
\Cox_{q_Q}=\Id-I(Q)^{\tr}I(Q^{-1}).
\]
\end{theorem}
\begin{proof}
By Proposition~\ref{P:24} we have
\begin{eqnarray}
\Id-I(Q)^{\tr}I(Q^{-1})&=&\Id-I(Q)^{\tr}I(Q)\widecheck{G}_Q^{-1} = \Id-G_Q\widecheck{G}_Q^{-1} \nonumber \\ &=&\Id-(\widecheck{G}_Q+\widecheck{G}_Q^{\tr})\widecheck{G}_Q^{-1} \nonumber \\
& = & \Id-\Id-\widecheck{G}_Q^{\tr}\widecheck{G}_Q^{-1} \nonumber \\
&=& -\widecheck{G}_{q_Q}^{\tr}\widecheck{G}_{q_Q}^{-1} = \Cox_{q_Q}, \nonumber \vspace*{-3mm}
\end{eqnarray}
since $\widecheck{G}_Q=\widecheck{G}_{q_Q}$.
\end{proof}
We define the \textbf{Coxeter matrix} $\Cox_Q$ of a loop-less quiver $Q$ as
\[
\Cox_Q=\Id-I(Q)^{\tr}I(Q^{-1}).
\]
The Coxeter polynomial of $q_Q$, denoted by $\cox_Q(\va)$, is also referred to as the \textbf{Coxeter polynomial} of $Q$.
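For a concrete illustration of the formula (an illustrative sketch only, assuming NumPy and the conventions of the previous subsections: incidence columns $\bas_{\sou(i)}-\bas_{\tar(i)}$, and $\widecheck{G}_Q$ unitriangular with $\widecheck{G}_Q+\widecheck{G}_Q^{\tr}=I(Q)^{\tr}I(Q)$), take the linear quiver with five arrows, whose unit form is the positive form of Dynkin type $\A_5$; its Coxeter matrix then has finite order equal to the Coxeter number $6$.

```python
import numpy as np

def incidence(n_vert, arrows):
    I = np.zeros((n_vert, len(arrows)), dtype=int)
    for col, (s, t) in enumerate(arrows):
        I[s, col] += 1
        I[t, col] -= 1
    return I

# linear quiver 0 -> 1 -> ... -> 5: a tree, so q_Q is positive of type A_5
arrows = [(i, i + 1) for i in range(5)]
I = incidence(6, arrows)
Gc = np.triu(I.T @ I, 1) + np.eye(5, dtype=int)
I_inv = I @ np.round(np.linalg.inv(Gc)).astype(int)   # incidence matrix of Q^{-1}

Cox = np.eye(5, dtype=int) - I.T @ I_inv              # combinatorial formula
assert np.array_equal(Cox, np.round(-Gc.T @ np.linalg.inv(Gc)).astype(int))

# Cox^6 = Id: the order of Cox is the Coxeter number 6 of the A_5 form
assert np.array_equal(np.linalg.matrix_power(Cox, 6), np.eye(5, dtype=int))
```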
\begin{corollary}\label{C:28}
Let $q:\Z^n \to \Z$ be a connected non-negative unit form of Dynkin type $\A_{n-c}$ (with $c$ the corank of $q$). Then the entries $c_{ij}$ of the Coxeter matrix $\Cox_q=(c_{ij})_{i,j=1}^n$ of $q$ are bounded as follows.
\[
|c_{ij}-\delta_{ij}|\leq 2, \quad \text{for $i,j=1,\ldots,n$},
\]
where $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise. Moreover,
\begin{itemize}
\item[a)] If $q$ is principal (that is, of corank one) and $|q_{ij}|\leq 1$ for all $i,j=1,\ldots,n$, then $|c_{ij}|\leq 2$ for $i,j=1,\ldots,n$.
\item[b)] If $q$ is positive, then $|c_{ij}|\leq 1$ for $i,j=1,\ldots,n$.
\end{itemize}
\end{corollary}
\begin{proof}
By Proposition~\ref{P:19}, there is a connected loop-less quiver $Q$ such that $q=q_Q$. If $I_i$ and $I_i^{-1}$ denote respectively the columns of the (vertex-arrow) incidence matrices $I(Q)$ and $I(Q^{-1})$, then the $(i,j)$-th entry $d_{ij}$ of the matrix $\Cox_Q-\Id$ is $d_{ij}=-I_i^{\tr}I_j^{-1}$. In particular,
\[
|d_{ij}|\leq 2 \quad \text{and}\quad -2\leq d_{ii} \leq 0, \quad \text{for $i,j=1,\ldots,n$}.
\]
Hence the general claim follows since $d_{ij}+\delta_{ij}=c_{ij}$, by Theorem~\ref{T:27}.
\medskip
Let $q$ be a principal unit form. By Corollary~\ref{C:10}, we see that the connected quiver $Q$ satisfies $|Q_0|=|Q_1|$ (that is, $Q$ is a $1$-tree quiver). Assume, to the contrary, that there is an arrow $i$ such that $I_i^{\tr}I^{-1}_i=-2$, that is, $i$ and its corresponding arrow $i^*$ in the inverse quiver $Q^{-1}$ are parallel arrows with opposite directions, say
\[
\sou(i)=v=\tar^*(i^*) \quad \text{and} \quad \tar(i)=w=\sou^*(i^*),
\]
where $Q=(Q_0,Q_1,\sou,\tar)$ and $Q^{-1}=(Q_0,Q^*_1,\sou^*,\tar^*)$. Let $\alpha=\alpha_Q^-(i^{-1})$ and $\beta=\alpha_Q^-(i^{+1})$ be the minimally descending walks starting with arrow $i$, as given after Definition~\ref{D:22}. Then
\[
\sou(\alpha)=\tar(i)=\sou^*(i^*)=\tar(\alpha) \quad \text{and}\quad \sou(\beta)=\sou(i)=\tar^*(i^*)=\tar(\beta),
\]
that is, both $\alpha$ and $\beta$ are closed walks. Since $Q$ is a $1$-tree and $\alpha \neq \beta$ (for $\sou(\alpha) \neq \sou(\beta)$), and both $\alpha$ and $\beta$ are minimally descending walks, we conclude that $\alpha=i_0^{\epsilon_0}i_1^{\epsilon_1}$ and $\beta=i_0^{-\epsilon_0}i_1^{-\epsilon_1}$. In particular $|q_{i_0i_1}|=2$, which proves the claim~$(a)$.
\medskip
Assume now that $q$ is a positive unit form. Again by Corollary~\ref{C:10}, we see that the connected quiver $Q$ is a tree. Assume that $i$ and $j$ are arrows in $Q$ such that $\MiNu^*(i^*)=\MiNu(j)$. Let $\alpha=i_0^{\epsilon_0}i_1^{\epsilon_1}\cdots i_r^{\epsilon_r}$ and $\beta=j_0^{\eta_0}j_1^{\eta_1}\cdots j_s^{\eta_s}$ be as before, where $i=i_0=j_0$. In this situation, observe that there are signs $\epsilon, \eta \in \{ \pm 1\}$ such that the following is a non-trivial closed walk in $Q$,
\[
j_1^{\eta_1}\cdots j_s^{\eta_s}j^{\eta}i_r^{-\epsilon_r}\cdots i_1^{-\epsilon_1}i_0^{\epsilon},
\]
which is impossible since $Q$ is a tree. This shows that $|c_{ij}| \leq 1$ for $i \neq j$.
\medskip
Finally, for any arrow $i$ in $Q$ with corresponding arrow $i^*$ in $Q^{-1}$, we may argue as above to show that if $\MiNu(i)\cap \MiNu^*(i^*) \neq \emptyset$, then either $\sou(i)=\sou^*(i^*)$ or $\tar(i)=\tar^*(i^*)$. This shows that $d_{ii}=-I_i^{\tr}I_{i}^{-1} \in \{-2,-1,0\}$, and in particular $c_{ii}=d_{ii}+1\in \{-1,0,1\}$, which completes the proof of $(b)$.
\end{proof}
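The general bound of Corollary~\ref{C:28} follows from the column argument alone and therefore holds for any loop-less quiver, connected or not. The following randomized check (illustrative, assuming NumPy and the incidence conventions above) tests it on two hundred random loop-less quivers, with parallel arrows allowed.

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(200):
    n, m = 5, int(rng.integers(4, 9))
    # random loop-less quiver: arrows with distinct endpoints, parallels allowed
    arrows = []
    while len(arrows) < m:
        s, t = rng.integers(0, n, size=2)
        if s != t:
            arrows.append((int(s), int(t)))
    I = np.zeros((n, m), dtype=int)
    for col, (s, t) in enumerate(arrows):
        I[s, col] += 1
        I[t, col] -= 1
    Gc = np.triu(I.T @ I, 1) + np.eye(m, dtype=int)
    I_inv = I @ np.round(np.linalg.inv(Gc)).astype(int)
    Cox = np.eye(m, dtype=int) - I.T @ I_inv
    # entries of Cox - Id are inner products -I_i^T I_j^{-1} of incidence columns
    assert np.abs(Cox - np.eye(m, dtype=int)).max() <= 2
```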
\subsection{Extended maximal stars} \label{S(A):Star}
Recall that a \textbf{$1$-tree quiver} is a connected quiver $Q$ with $|Q_0|=|Q_1|$. By a \textbf{maximal $1$-star} we mean a $1$-tree quiver $\widetilde{\Star}$ such that there is a vertex $v \in \widetilde{\Star}_0$ (called \textbf{center of the star}) incident to all arrows of $\widetilde{\Star}$. Notice that there is exactly one pair of parallel arrows in $\widetilde{\Star}$, say $\ell < m$. In that case, if the maximal $1$-star quiver $\widetilde{\Star}$ has $n+1$ arrows and $1 \leq \ell<m \leq n+1$, we use the notation
\[
\widetilde{\Star}=\widetilde{\Star}^{\ell,m}_n.
\]
For instance, the cases $\widetilde{\Star}_4^{\ell,5}$ for $1 \leq \ell <5$, with corresponding inverses shown underneath, have the following shapes,\vspace*{-1mm}
\[
\xymatrix{{} & {} \ar@{<-}@<-.6ex>[d]_5 \ar@{<-}[d]^-1 & {} \\ {} \ar@{<-}[r]^-2 & {} & {} \ar@{<-}@<.1ex>[l]_-4 \\ {} & {} \ar@{<-}[u]_-3 & {} } \quad \xymatrix{{} & {} \ar@{<-}[d]^-1 & {} \\ {} \ar@{<-}@<-.4ex>[r]_-5 \ar@{<-}@<.4ex>[r]^-2 & {} & {} \ar@{<-}[l]_-4 \\ {} & {} \ar@{<-}[u]_-3 & {} } \quad \xymatrix{{} & {}\ar@{<-}[d]^-1 & {} \\ {} \ar@{<-}[r]^-2 & {} & {} \ar@{<-}[l]_4 \\ {} & {} \ar@{<-}@<-.4ex>[u]_-3 \ar@{<-}@<.4ex>[u]^-5 & {} } \quad \xymatrix{{} & {} \ar@{<-}[d]^-1 & {} \\ {} \ar@{<-}[r]^-2 & {} & {} \ar@{<-}@<.4ex>[l]^-5 \ar@{<-}@<-.4ex>[l]_-4 \\ {} & {} \ar@{<-}[u]_-3 & {} }
\]
\[
\xymatrix{{} & {} \ar@{<-}@<.6ex>[d]^-1 & {} \\ {} \ar@{<-}[ru]^-2 & {} \ar@{<-}[r]_-5 & {} \ar@{<-}[ld]^-4 \\ {} & {} \ar@{<-}[lu]^-3 & {} } \quad \xymatrix{{} & {} \ar@{<-}@<.4ex>[d]^-1 \ar@{<-}[rd]^-5 & {} \\ {} \ar@{<-}[ru]^-2 & {} & {} \ar@{<-}[ld]^-4 \\ {} & {} \ar@{<-}[lu]^-3 & {} } \quad \xymatrix{{} & {}\ar@{<-}@<.4ex>[d]^-1 & {} \\ {} \ar@{<-}[ru]^-2 \ar@{<-}@/_4pt/[rr]_-5 & {} & {} \ar@{<-}[ld]^-4 \\ {} & {} \ar@{<-}[lu]^-3 & {} } \quad \xymatrix{{} & {} \ar@{<-}@<.4ex>[d]^-1 & {} \\ {} \ar@{<-}[ru]^-2 & {} & {} \ar[ld]_-5 \\ {} & {} \ar@{<-}[lu]^-3 \ar@<-.8ex>[ru]_-4 & {} }
\]
\eject
We now generalize Lemma~\ref{L:16} and Proposition~\ref{P:17} to maximal $1$-stars.
\begin{lemma}\label{L:29}
Let $\widetilde{\Star}$ be a maximal $1$-star quiver.
\begin{itemize}
\item[a)] For an arbitrary vertex $v$ of $\widetilde{\Star}$, there is a $\widetilde{\Star}$-admissible iterated $FS$-transformation $\FS$ such that $\widetilde{\Star}\FS$ is a maximal $1$-star with center $v$.
\item[b)] If $\widetilde{\Star}=\widetilde{\Star}^{\ell,m}_n$ and $\widetilde{\Star}'=\widetilde{\Star}_n^{\ell',m'}$, then $q_{\widetilde{\Star}'} \approx q_{\widetilde{\Star}}$ if and only if
\[
(m'-\ell')=(m-\ell) \qquad \text{or} \qquad (m'-\ell')+(m-\ell)=n+1.
\]
\end{itemize}
\end{lemma}
\begin{proof}
Let $\widetilde{\Star}_1=\{1,\ldots,{n},{n+1}\}$ be the arrows of the maximal $1$-star $\widetilde{\Star}$, all of them having in common the center of the star $v_0$, and assume $n > 3$. Take $\widetilde{\Star}=\widetilde{\Star}^{\ell,m}_n$ for arrows $1 \leq \ell < m \leq n+1$, and enumerate the non-central vertices of $\widetilde{\Star}$ so that $v_t$ is incident to arrow $t$ for $t=1,\ldots,m-1$, and to arrow ${t+1}$ for $t=m,\ldots,n$.
To prove $(a)$ it is enough to show, as in Lemma~\ref{L:16}, that there is a $\widetilde{\Star}$-admissible iterated $FS$ transformation $\FS$ such that $\widetilde{\Star}\FS$ is a maximal $1$-star with center $v_1$, and such that the vertex $v_2$ is incident to the minimal arrow of $\widetilde{\Star}\FS$. We distinguish three cases:
\medskip
\noindent \textbf{Case 1.} Assume first that $\ell>1$. We use the following iterated $FS$-transformation,
\[
\mathcal{W}=\FS_{2,1}\FS_{3,2}\cdots \FS_{n+1,n}.
\]
As in Lemma~\ref{L:16}, a direct computation shows
\[
\widetilde{\Star}^{\ell,m}_n\mathcal{W}=\widetilde{\Star}^{\ell-1,m-1}_n,
\]
where now $v_1$ is the star center of $\widetilde{\Star}\mathcal{W}$, and the first arrow of $\widetilde{\Star}\mathcal{W}$ is joining vertices $v_1$ and $v_2$.
\medskip
\noindent \textbf{Case 2.} Assume now that $\ell=1$ and $m>2$, and consider the following iterated $FS$ transformation,
\[
\mathcal{W}_m=\FS_{2,1}\FS_{3,2}\cdots\FS_{m-1,m-2}\FS_{m+1,m}\cdots \FS_{n+1,n},
\]
obtained from $\mathcal{W}$ by omitting the transformation $\FS_{m,m-1}$.
Notice similarly that $\widetilde{\Star}^{1,m}_n\mathcal{W}_m=\widetilde{\Star}^{m-1,n+1}_n$. Indeed, the quiver $Q:=\widetilde{\Star}\FS_{2,1}\FS_{3,2}\cdots\FS_{m-1,m-2}$ has the following shape,
\[
\xymatrix@C=3pc@R=1pc{
{v_2} \ar@{-}[rd]^-{1} & & & {v_m} \ar@{-}[ld]_-{m+1} \\
\vdots & {v_1} \ar@{-}@<-.5ex>[r]_-{m-1} \ar@{-}@<.5ex>[r]^-{m} & {v_0} & \vdots \\
{v_{m-1}} \ar@{-}[ru]_-{m-2} & & & {v_n} \ar@{-}[lu]^-{n+1} }
\]
The arrows $m$ and $m-1$ are parallel in $Q$, therefore we omit $\FS_{m,m-1}$ to avoid loops (see Remark~\ref{R:06}). However, $\FS_{m+1,m}\cdots \FS_{n+1,n}$ is a $Q$-admissible iterated $FS$-transformation, and one can directly compute
\[
\widetilde{\Star}^{1,m}_n\mathcal{W}_m=Q\FS_{m+1,m}\cdots \FS_{n+1,n}=\widetilde{\Star}^{m-1,n+1}_n.
\]
Moreover, since $m>2$, both vertices $v_1$ and $v_2$ are incident to the minimal arrow of $\widetilde{\Star}\mathcal{W}_m$.
\medskip
\noindent \textbf{Case 3.} Assume now that $\ell=1$ and $m=2$. Take
\[
\mathcal{M}=\FS_{n,n+1}\FS_{n-1,n}\cdots \FS_{1,2} \quad \text{and} \quad\mathcal{M}_{n}=\FS_{n-1,n}\cdots\FS_{1,2},
\]
and observe that $Q'=\widetilde{\Star}\mathcal{M}^{n-1}$ has the following shape,
\[
Q'=\xymatrix@C=3pc@R=1.2pc{
{v_2} \ar@{-}[d]^-1 \ar@{-}@<-.5ex>[rd]_-n \ar@{-}@<.5ex>[rd]^(.7){n+1} \ar@{-}@/^10pt/@<1ex>[rrd]^-{n-1} \ar@{-}@/_8pt/[dd]_(.7){2} \ar@{-}@<-1ex>@/_18pt/[ddd]_-{n-2} \\ {v_3} & {v_1} & {v_0} \\
{v_4} \ar@{}[d]|-{\vdots} \\ v_{n} } \quad Q'\mathcal{M}_n= \xymatrix@C=3pc@R=1.2pc{
{v_2} \ar@{-}@<-.5ex>[rd]_(.3){1} \ar@{-}@<.5ex>[rd]^(.7){n+1} \\ {v_3} \ar@{-}[r]_-2 & {v_1} \ar@{-}[r]_-{n} & {v_0} \\
{v_4} \ar@{-}[ru]_-3 \ar@{}[d]|-{\vdots} \\ v_{n} \ar@{-}[ruu]_-{n-1} }
\]
Notice also that $Q'\mathcal{M}_n=\widetilde{\Star}^{1,n+1}_n$ is a maximal $1$-star with center $v_1$, and such that $v_2$ is incident to the minimal arrow of $Q'\mathcal{M}_n$.
\medskip
Take now $\FS$ to be the transformation $\mathcal{W}$, $\mathcal{W}_m$ or $\mathcal{M}^{n-1}\mathcal{M}_n$ in cases~1, 2 and~3 respectively, and observe that we have
\begin{equation} \label{EqThree}
\widetilde{\Star}^{\ell,m}_n\FS= \left\{ \begin{array}{l l} \widetilde{\Star}^{\ell-1,m-1}_n, & \text{if $\ell>1$},\\ \widetilde{\Star}^{m-1,n+1}_n, & \text{if $\ell=1$}.\end{array} \right.
\end{equation}
By the above, $\widetilde{\Star}\FS$ is a maximal $1$-star with center $v_1$ and such that $v_2$ is incident to the minimal arrow of $\widetilde{\Star}\FS$, which shows the inductive step to complete the proof of $(a)$.
\medskip
To show $(b)$, if
\[
(m'-\ell')=(m-\ell) \qquad \text{or} \qquad (m'-\ell')+(m-\ell)=n+1,
\]
then by Corollary~\ref{C:15} and equation~(\ref{EqThree}) we have $q_{\widetilde{\Star}'} \approx q_{\widetilde{\Star}}$. For the converse, assume that $q_{\widetilde{\Star}'} \approx q_{\widetilde{\Star}}$. Using equation~(\ref{EqThree}), we may also assume that $m=n+1=m'$, and that $1 \leq \ell,\ell' \leq \frac{n+1}{2}$. In this case, comparing the shapes of the Coxeter polynomials of $\widetilde{\Star}$ and $\widetilde{\Star}'$ (see the remark below), we must have $\ell=\ell'$. Hence the result.
\end{proof}
\begin{remark}\label{R:29medio}
A direct calculation using the description of Coxeter matrices in Theorem~\ref{T:27} yields the following Coxeter polynomial for a maximal $1$-star $\widetilde{\Star}=\widetilde{\Star}^{\ell,m}_n$,
\[
\varphi_{\widetilde{\Star}}(\va)=(\va^{m-\ell}-1)(\va^{(n+1)-(m-\ell)}-1).
\]
Therefore, Lemma~\ref{L:29}$(b)$ may be reinterpreted as follows:
\begin{itemize}
\item[b')] Two maximal $1$-star quivers are strongly Gram congruent if and only if they have the same Coxeter polynomial.
\end{itemize}
Results of this kind for posets may be found in~\cite{GSZ14}.
\end{remark}
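The polynomial of Remark~\ref{R:29medio} may be confirmed by machine (an illustrative NumPy sketch; it uses the observation that, orienting all arrows away from the center, any two arrows of $\widetilde{\Star}^{\ell,m}_n$ share the center, so that $\widecheck{G}$ is the unitriangular matrix with all entries $1$ above the diagonal except for the entry $2$ at the parallel pair $(\ell,m)$).

```python
import numpy as np

def star_upper_gram(n, l, m):
    """Upper triangular Gram matrix of the maximal 1-star S^{l,m}_n, with all
    n + 1 arrows oriented away from the center: two distinct arrows have inner
    product 1, except for the parallel pair (l, m), whose inner product is 2."""
    N = n + 1
    Gc = np.triu(np.ones((N, N), dtype=int))   # ones on and above the diagonal
    Gc[l - 1, m - 1] = 2
    return Gc

n, l, m = 6, 2, 5
Gc = star_upper_gram(n, l, m)
Cox = -Gc.T @ np.linalg.inv(Gc)
char_poly = np.round(np.poly(Cox)).astype(int)      # highest degree first

d = m - l                                           # here (x^3 - 1)(x^4 - 1)
expected = np.polymul(np.r_[1, np.zeros(d - 1, dtype=int), -1],
                      np.r_[1, np.zeros(n - d, dtype=int), -1])
assert np.array_equal(char_poly, expected)
```

Running the same check over all $1 \leq \ell < m \leq n+1$ for small $n$ confirms that the polynomial depends only on $m-\ell$, as used in the proof of Lemma~\ref{L:29}$(b)$.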
\begin{proposition}\label{P:30}
Let $Q$ be a $1$-tree quiver. Then for any vertex $v$ in $Q$ there is a $Q$-admissible iterated $FS$-transformation $\FS$ such that $Q\FS$ is a maximal $1$-star with center the vertex $v$.
\end{proposition}
\begin{proof}
We proceed as in Proposition~\ref{P:17}, that is, by induction on the number $n=|Q_1|$ of arrows in $Q$. For $n=1,2$ the tree $Q$ is a $1$-star, and we may change the position of its center as in the Lemma above. Hence, we may assume that $n \geq 3$ and that the claim holds for all $1$-trees with less than $n$ arrows.
Let $n$ be the maximal arrow in $Q$ (relative to the total order $\leq$ in $Q_1$) and take $\MiNu(n)=\{v,w\}$. Let $Q'$ be the quiver obtained from $Q$ by deleting the arrow $n$. The set $Q'_1$ inherits a total order from $Q_1$. Observe that, by the maximality of $n$, any $Q'$-admissible iterated $FS$-transformation is also $Q$-admissible. We distinguish two cases:
\medskip
\noindent \textbf{Case 1.} Assume first that $Q'$ is a connected quiver. Then $Q'$ is a ($0-$)tree, and we may use Proposition~\ref{P:17} to assume that $Q'$ is a maximal star with center $v$. Since $\MiNu(n)=\{v,w\}$, then $Q$ is a maximal $1$-star.
\medskip
\noindent \textbf{Case 2.} Assume now that $Q'$ is not connected. Then $Q'$ is the disjoint union of exactly two connected quivers, one containing vertex $v$ and denoted by $Q^v$, and one containing vertex $w$ and denoted by $Q^w$.
\medskip
\textbf{Subcase 2.1.} Assume first that $|Q^w_0|=1$. Then $Q^v$ is a $1$-tree, and by induction hypothesis we may assume that $Q^v$ is a maximal $1$-star with center $v$. Hence $Q$ is a maximal star, and we proceed analogously if $|Q^v_0|=1$.
\medskip
\textbf{Subcase 2.2.} Assume now that $|Q^w_0|>1$ and $|Q^v_0|>1$, and that the second largest arrow $n-1$ in $Q$ belongs to $Q^v$. Since $Q$ is a $1$-tree, either $Q^v$ is a $0$-tree or a $1$-tree with less than $n$ arrows. Thus, by Proposition~\ref{P:17} or the induction hypothesis, respectively, we may assume that $Q^v$ is a maximal $c$-star with center $v$ for $c=0$ or $c=1$. First, if $n-1$ is a pendant arrow in $Q^v$, then $n$ is a pendant arrow in $Q'=Q\FS^{\epsilon}_{n-1,n}$, and we may apply Case~1 above to the $1$-tree quiver $Q'$. Second, if $n-1$ has a parallel arrow $j$ in $Q^{v}$, then the arrows $j$, $n-1$ and $n$ form a cycle in $Q'$, and we may apply Case~1 above to the $1$-tree quiver $Q'$.
\medskip
\noindent To complete the proof, use Lemma~\ref{L:29}$(a)$ to change the center of the resulting $1$-star as desired.
\end{proof}
We end this section with the second main classification result of the paper.
\begin{theorem}\label{T:34}
Two (connected) principal unit forms of Dynkin type $\A_n$ are strongly Gram congruent if and only if they have the same Coxeter polynomial.
\end{theorem}
\begin{proof}
Since Coxeter polynomials are strong Gram invariants (Lemma~\ref{L:26}), we only need to show sufficiency: assume $q$ and $q'$ are principal unit forms in $n+1$ variables, both of Dynkin type $\A_n$ and with the same Coxeter polynomial. By Proposition~\ref{P:19}, there are connected $1$-tree quivers $Q$ and $Q'$ such that $q=q_Q$ and $q'=q_{Q'}$. By Proposition~\ref{P:30}, there is a $Q$-admissible iterated $FS$-transformation $\FS$ and a maximal $1$-star $\widetilde{\Star}_n^{\ell,m}$ (for some $n \geq 1$ and $1 \leq \ell < m \leq n+1$), such that $Q\FS=\widetilde{\Star}_n^{\ell,m}$. By Lemma~\ref{L:29}$(b)$ and Remark~\ref{R:29medio}, we may assume that $m=n+1$ and $1 \leq \ell \leq \frac{n+1}{2}$ is such that
\[
\varphi_{Q}(\va)=\varphi_{\widetilde{\Star}_n^{\ell,n+1}}(\va)=(\va^{\ell}-1)(\va^{n+1-\ell}-1).
\]
Proceeding similarly for $q_{Q'}$, since $\varphi_{Q'}=\varphi_{Q}$, we find a $Q'$-admissible iterated $FS$-transformation $\FS'$ with $Q'\FS'=\widetilde{\Star}_n^{\ell,n+1}=Q\FS$, hence $q \approx q'$ by Corollary~\ref{C:15}.
\end{proof}
\section*{Concluding remarks and future work}
We stress that the proofs of all preparatory results towards the main theorems (the elementary quiver transformations, Lemmas~\ref{L:16},~\ref{L:29}, Propositions~\ref{P:17},~\ref{P:30}, and most importantly, Proposition~\ref{P:19}) are completely constructive, and can be easily implemented in any programming language of general use. In particular, one may follow the proofs of Theorems~\ref{T:20} and~\ref{T:34} to find algorithmic solutions to Problem~B in terms of iterated $FS$-transformations, for the case of non-negative unit forms of Dynkin type $\A_n$ of corank zero or one.
\medskip
It seems to be a good idea to consider quadratic forms $q:\Z^n \to \Z$ having symmetric Gram matrix $G_q$ factorized as
\[
G_q=I^{\tr}I,
\]
for an $m \times n$ matrix $I$ with ``special'' properties, for instance, one having columns in the root systems as given in~\cite[Definitions~3.1 and~3.2]{CGSS}. Any such quadratic form is clearly non-negative. Here we consider the root system $A_n$ given in~\cite[Definition~3.1]{CGSS}, that is, matrices $I$ such that for each column $I\bas_i$ (for $i=1,\ldots,n$) there are signs $S,T \in \{\pm 1\}$ and indices $s,t \in \{1,\ldots,m\}$ satisfying
\begin{itemize}
\itemsep=0.9pt
\item[i)] $I\bas_i=S\bas_s+T\bas_t$.
\item[ii)] $S \neq T$.
\item[iii)] $s \neq t$.
\end{itemize}
Indeed, matrices with these three conditions are precisely the (vertex-arrow) incidence matrices of loop-less quivers, and Proposition~\ref{P:19} asserts that the corresponding quadratic forms are precisely the non-negative unit forms with all components of Dynkin type $\A_r$.
\medskip
Assume, additionally, that $A$ is a $\Z$-invertible matrix morsification of $q$ with integer coefficients (in the sense of Simson~\cite{dS13a}). As in the proof of Theorem~\ref{T:27}, the Coxeter matrix $\Cox_A=-A^{\tr}A^{-1}$ of $A$ admits the following expression,
\[
\Cox_A=\Id_n-I^{\tr}IA^{-1}.
\]
In an upcoming paper~\cite{jaJ2020b}, we show that the similarity invariants of $\Cox_A$ correspond to the orthogonal invariants of the matrix
\[
\Lambda_A=\Id_m-IA^{-1}I^{\tr},
\]
which turns out to be an orthogonal matrix, producing in this way many important strong Gram invariants of $A$. The particular case when $IA^{-1}$ satisfies again conditions $(i-iii)$ above has special combinatorial features, as illustrated in Section~\ref{S(A):InvDos} when $A$ is the standard morsification of $q$. In this case, $IA^{-1}$ is precisely the incidence matrix of the inverse of the quiver with incidence matrix $I$. Although successful for coranks zero and one, we do not know whether the technique of $FS$-transformations used in Lemmas~\ref{L:16} and~\ref{L:29} can be generalized to higher coranks (even corank two). As in the proofs of Propositions~\ref{P:17} and~\ref{P:30}, and Theorems~\ref{T:20} and~\ref{T:34}, such a generalization would imply the Coxeter spectral determination of strong Gram classes of non-negative unit forms of Dynkin type $\A_n$, as in the positive and principal case. In an upcoming work, we approach such strong classification with a matricial method.
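As a quick sanity check of the orthogonality of $\Lambda_A$ in the quiver case (an illustrative NumPy sketch; it takes for $A$ the standard morsification $\widecheck{G}_Q$, which is $\Z$-invertible since it is unitriangular, and uses the incidence conventions of Section~\ref{S(A):InvDos}):

```python
import numpy as np

rng = np.random.default_rng(2)

# random loop-less quiver on 5 vertices with 7 arrows (parallels allowed)
n_vert, n_arr = 5, 7
arrows = []
while len(arrows) < n_arr:
    s, t = rng.integers(0, n_vert, size=2)
    if s != t:
        arrows.append((int(s), int(t)))
I = np.zeros((n_vert, n_arr), dtype=int)
for col, (s, t) in enumerate(arrows):
    I[s, col] += 1
    I[t, col] -= 1

# standard morsification: A unitriangular with A + A^T = I^T I
A = np.triu(I.T @ I, 1) + np.eye(n_arr, dtype=int)
Lam = np.eye(n_vert) - I @ np.linalg.inv(A) @ I.T

assert np.allclose(Lam @ Lam.T, np.eye(n_vert))    # Lambda_A is orthogonal
```

The check succeeds for any morsification with $A+A^{\tr}=I^{\tr}I$, since then $\Lambda_A^{\tr}\Lambda_A=\Id-IA^{-1}I^{\tr}-IA^{-\tr}I^{\tr}+IA^{-\tr}(A+A^{\tr})A^{-1}I^{\tr}=\Id$.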
\medskip
A different direction is to omit conditions $(ii)$ and $(iii)$ on $I$, that is, to consider simply ``incidence matrices'' as in~\cite{tZ08}. In a future work we show that the corresponding quadratic forms include not only non-negative semi-unit forms of Dynkin type $\A_n$, but also those of Dynkin type $\D_n$, as well as the Euler form of important classes of algebras (for instance, gentle algebras of finite global dimension). Moreover, the results of~\ref{S(A):Cox} and~\cite{jaJ2020b} can be extended to unimodular morsifications of quadratic forms with Gram matrix factorized by ``incidence matrices'', potentially facilitating their Coxeter spectral analysis.
In some sense, the matrix $I$ \emph{exposes} internal properties of $I^{\tr}I$. The straightforward constructions and considerations of the paper work in support of this claim.
\bibliographystyle{fundam}
\section{introduction}
Atomically abrupt (``digital'') interfaces (IFs) between oxides
with strongly differing electronic properties
(superconducting-ferromagnetic; ferroelectric-ferromagnetic) have
attracted interest\cite{ivan,ijiri}
due to the new behavior that may arise, and for
likely device applications.
Hwang and collaborators~\cite{ohtomo2002,hwang2006} have reported coherent
superlattices containing a controllable number of Mott insulator [LaTiO$_3$~(LTO)]
and band insulator [SrTiO$_3$~(STO)] layers
using pulsed laser deposition, with analysis suggesting atomically sharp
interfaces comparable to those produced by molecular beam epitaxy~\cite{ivan}.
The most provocative result was that the IFs of these insulators showed
metallic conductivity and high mobility.
Electron energy loss spectra (EELS) for Ti
suggested a superposition of Ti$^{3+}$~and Ti$^{4+}$~ions in the interface region.
Incorporating doping with magnetic ions, these same materials are being
explored for spin-dependent transport applications~\cite{herranz}.
Effects of structural imperfections are being
studied~\cite{hwang2004,shibuya,muller2004,nakagawa}, but the ideal IFs
need to be understood first.
Single- and three-band Hubbard models with screened intersite Coulomb
interaction have been applied to this IF. Both the Hartree-Fock approximation and dynamical mean field theory with a semiclassical
treatment of correlation~\cite{SO_AJM} result in a ferromagnetic (FM)
metallic IF over a substantial parameter range.
{\it Ab initio} studies reported so far
have focused on charge profiles~\cite{satpathy,hamann} while neglecting
correlation effects beyond the local density approximation (LDA) that
we address below. Very recent {\it ab initio} calculations of the
effects of lattice relaxation~\cite{hamann,spaldin06}
at the IF have provided additional input
into the Hubbard-modeling of these IFs, allowing the investigation
of the interplay of correlation effects and relaxation in these models.
\begin{figure}[b]
\begin{center}
\rotatebox{0.}{
\includegraphics[scale=0.48]{fig1.eps}}
\end{center}
\caption{\label{structure} Segment of the (1,1) LaTiO$_3$-SrTiO$_3$
multilayer, illustrating the cubic perovskite structure
(unlabeled white spheres denote oxygen). An LaO layer lies
in the center, bordered by two TiO$_2$ layers, with a SrO layer at top
and bottom. The lateral size of this figure corresponds to the
$p(2 \times 2)$ cell discussed in the text.
}
\end{figure}
The material-specific insight into correlated behavior that can
be obtained from first-principles-based approaches is still lacking.
In LTO/STO superlattices, the transition metal ions on the perovskite
B-sublattice are identical (Ti) and only the charge-controlling A-sublattice
cations (Sr, La) change across the interface ({\sl cf.} Fig.~\ref{structure}).
This leaves at each IF a TiO$_2$ layer whose local
environment is midway between that
in LTO and STO.
In this paper we study
mechanisms of charge compensation at the LTO/STO-IF
based on density-functional theory calculations
[within the generalized gradient approximation
(GGA)~\cite{pbe96}] employing the
all-electron FP-LAPW method within the WIEN2k implementation~\cite{wien}
including a Hubbard-type on-site Coulomb repulsion (LDA+U)~\cite{anisimov93}.
We focus on (1) the local
charge imbalance at the IF and its dependence on neighboring layers, (2)
the breaking of three-fold degeneracy of the Ti $t_{2g}$ orbitals
which will be at most
singly occupied, (3) magnetic ordering and its effect on gap formation, and
(4) how rapidly the insulators heal (both in charge and in magnetic order)
to their bulk condition away from the IF.
To explore the formation of possible charge disproportionated, magnetically
ordered,
and orbitally selective phases at the IF and to probe the relaxation
length towards bulk behavior we have
investigated a variety of ($n,m$) heterostructures with $n$ LTO and $m$
STO layers ($1 \leq n,m \leq 9$), and lateral cells of $c(2\times 2)$
or $p(2\times 2)$~\cite{details}.
The $\hat z$ direction is taken perpendicular to the IF.
Lattice parameters of the systems have been set to the experimental
lattice constant of STO, 3.92~\AA, therefore modeling
coherent IFs on an STO substrate.
Bulk STO is a
semiconductor with a GGA-band gap~\cite{pbe96} of 2.0 eV
(experimental value 3.2~eV), separating
filled O $2p$ bands from empty Ti $3d$ bands.
Currently LTO ($a$=3.97 \AA)
and other $3d^1$ perovskites are intensively
studied because their structure is crucial in determining their electronic
and magnetic behavior~\cite{pavarini}. Bulk LTO is an AFM insulator of
G-type (rocksalt spin arrangement) with a
gadolinium orthoferrite (20 atom) structure; however, lattice imaging
indicates that only a few layers of LTO assume the cubic structure that
we use in our superlattices.
Using the LDA+U method, an AFM insulator is obtained for
$U \geq 6$ eV, with a magnetic moment
$M_{Ti} \approx 0.75 \mu_B$ due to
occupation of one of the $t_{2g}$ orbitals (orbital ordering arising from
spontaneous symmetry breaking).
FM alignment of spins is 50 meV/Ti less favorable.
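Under an Ising-type parameterization $H=J\sum_{\langle ij\rangle}\sigma_i\sigma_j$
($\sigma_i=\pm1$, six Ti neighbors on the simple-cubic sublattice; a convention we
adopt here for orientation only) this energy difference corresponds to
\begin{equation}
E_{\rm FM}-E_{\rm AFM} = 6J = 50\, meV \ \text{per Ti}
\;\Longrightarrow\; |J| \approx 8\, meV \;.
\end{equation}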
\begin{figure}[b]
\hspace{-3mm}
\includegraphics[scale=0.78]{fig2a.eps}
\rotatebox{90.}{\includegraphics[scale=0.75]{fig2b.eps}}
\caption{\label{MofU} a) Phase diagram of the Ti moments for
the (1,1) superlattice in a
transverse $c(2\times2)$ cell, versus the on-site Coulomb repulsion strength
$U$ on the Ti $3d$ orbitals. b) Density of states
(spin direction indicated by arrows)
of the (1,1) superlattice for different $U$ values.
Disproportionation occurs in a weak first-order manner around
$U\approx$5.5 eV.
HM FM indicates a region of half
metallic ferromagnetism before the Mott gap appears around $U \approx$ 6.5 eV.
}
\end{figure}
We discuss first the (1,1) multilayer (1LTO/1STO layer)
pictured in Fig. \ref{structure},
and then consider systems with thicker LTO and/or STO slabs to analyze
the relaxation towards bulk behavior.
{\it The (1,1) superlattice}
is modeled in a transverse $c(2\times 2)$ cell
(not considered in earlier work~\cite{spaldin06}) with two
inequivalent Ti ions, which allows disproportionation within a
single Ti layer and is consistent with
the AFM G-type order in bulk LTO. The on-site repulsion strength
$U$ on Ti was varied from 0 to 8~eV to assess both weak and strong
interaction limits.
The Ti moment versus U and the evolution of the density of states as a
function of U are shown in Fig. \ref{MofU}a) and b), respectively.
Within GGA ($U=0$) nonmagnetic metallic character is obtained, consistent
with earlier reports~\cite{satpathy,hamann}. For
$U\leq5$ eV
the system is a ferromagnetic metal with equivalent Ti ions, {\it i.e.}
it is qualitatively like earlier results on multiband Hubbard
models~\cite{SO_AJM}. At $U\approx$ 5.5 eV
disproportionation occurs on the Ti
ions, apparently weakly first-order as has been found to occur
in the Na$_x$CoO$_2$
system~\cite{kwlee}.
Around $U \approx$ 6 eV there is
a half metallic ferromagnet region, but beyond $U \approx$ 6.5 eV a gap
opens, separating the lower Hubbard band from the unoccupied $d$ states and
resulting in a correlated insulator phase.
In the following we model the Mott insulating gap (0.5 eV) with $U$=8 eV.
The arrangement of disproportionated ions, which is
charge-ordered (CO) rocksalt, retains
inversion symmetry and, more importantly, the more highly
charged $d^0$ ions avoid being nearest neighbors.
The spatial distribution of the occupied $d$-orbitals
in the IF TiO$_2$ layer displayed in Fig.~\ref{CDNCOOO} reveals that
besides the CO for $U>7$~eV this state is orbitally ordered (OO)
with a filled $d_{xy}$ orbital at the Ti$^{3+}$~sites, the non-degenerate member
of the cubic $t_{2g}$ triplet after the intrinsic symmetry-lowering effect
of the IF. The Fermi level lies in a
small Mott gap separating the occupied narrow $d_{xy}$ band
(`lower Hubbard band') from the rest of
the unoccupied $d$-orbitals. For ferromagnetic alignment of the spins
($M_{\rm Ti^{3+}}=0.72\mu_{\rm B}$) the gap ensures an integer moment
($2.0\mu_{\rm B}$).
The system at this level of treatment is a realization of a
quarter-filled extended Hubbard model (EHM). The Hubbard model itself is metallic at quarter-filling; when
intersite repulsion is included~\cite{onari,zhang} it becomes CO and insulating.
The intersite repulsion is included correctly in first-principles methods,
and combined with the on-site repulsion ($U$) it gives charge ordering.
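For orientation, the EHM referred to here has the generic single-band form
(a schematic parameterization; the hopping $t$ and intersite repulsion $V$ are
not computed in the present work)
\begin{equation}
H_{\rm EHM} = -t\sum_{\langle ij\rangle\sigma}
\big( c_{i\sigma}^{\dag} c_{j\sigma}^{{\phantom{\dag}}} + h.c. \big)
+ U\sum_{i} n_{i\uparrow} n_{i\downarrow}
+ V\sum_{\langle ij\rangle} n_{i} n_{j} \;,
\end{equation}
where at quarter filling (one electron per two sites) a sufficiently large $V$
stabilizes exactly the checkerboard charge order found here, with one electron
on every second site.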
\begin{figure}[t]
\vspace{-5mm}
\begin{center}
\includegraphics[scale=0.43]{fig3.eps}
\end{center}
\vspace{-10mm}
\caption{\label{CDNCOOO} 45$^{\circ}$ checkerboard
charge density distribution of the occupied $3d$ states in the
charge-ordered TiO$_2$ layer in the FM (1,1) multilayer. Orbital-ordering
due to $d_{xy}$ orbital occupation is apparent.
The positions of O-, Ti$^{3+}$~and Ti$^{4+}$-ions are marked
by white, dark blue (black) and light blue (grey) circles, respectively.
}
\end{figure}
The calculation was extended to a larger
$p(2\times2)$-cell
to allow antiferromagnetic
alignment of the Ti$^{3+}$~spins. We obtain the same
CO/OO state
with an occupied $d_{xy}$-orbital on every second IF Ti ion, giving
a checkerboard ordering of Ti$^{3+}$~and ~Ti$^{4+}$, regardless of whether
the spins are aligned or antialigned.
AFM coupling is preferred by 80~meV per
$p(2\times2)$-cell for the (1,1) superlattice (a spin-spin exchange
coupling of $|J|$=10 meV). For heterostructures
containing a thicker LTO slab, however, AFM coupled spins on the 50\% diluted
p(2$\times$2) mesh in the IF layer will not match the AFM G-type order on
the LTO side of the slab, where spins in the IF-1 layer couple antiparallel
with a c(2$\times$2)-periodicity. Due to this frustration, AFM alignment
within the IF layer may become less favorable.
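The quoted value of $|J|$ follows from a simple counting argument (assuming an
Ising-type parameterization $H=J\sum_{\langle ij\rangle}\sigma_i\sigma_j$ with
$\sigma_i=\pm1$ on the diluted Ti$^{3+}$~mesh): each $p(2\times2)$ cell contains
two Ti$^{3+}$~ions and hence four Ti$^{3+}$-Ti$^{3+}$~bonds, so that
\begin{equation}
E_{\rm FM}-E_{\rm AFM} = 8J = 80\, meV
\;\Longrightarrow\; |J| = 10\, meV \;.
\end{equation}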
\begin{table}
\caption{\label{tab:MU_nm} Layer resolved magnetic moments (in
$\mu_B$) of the Ti ions in ($n,m$) superlattices.
Due to the $c(2\times 2)$-lateral unit cell there are two inequivalent
Ti-ions in each layer. $(n,m)$ denotes a multilayer containing $n$ LTO and $m$
STO layers. The IF moments are nearly bulk-like and become fully so in the layer
next to the IF layer. (1,5)$^*$ denotes a configuration where the interlayer
distances were relaxed according to Ref.\cite{spaldin06}.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
System & \multicolumn{2}{c}{LTO} & IF & \multicolumn{2}{c}{STO} \\
$(n,m)$ &IF-2 & IF-1 & IF & IF+1 & IF+2 \\
\hline
(1,1)& - & - & 0.72/0.05 & - & - \\
(1,5) & - & - & 0.71/0.05 & 0.0/0.0 & 0.0/0.0 \\
(1,5)$^*$ & - & - & 0.50/0.08 & 0.0/0.0 & 0.01/0.01 \\
(5,1)& 0.73/-0.73 & -0.73/0.73 & 0.70/0.05 & - & - \\
(5,5)& 0.73/-0.73 & -0.73/0.73 & 0.70/0.06 & 0.0/0.0 & 0.0/0.0 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[b]
\begin{minipage}{4. cm}
\scalebox{0.85}{\includegraphics{fig4a.eps}}
\end{minipage}
\begin{minipage}{3.5 cm}
\hspace*{-.5cm}
\scalebox{0.84}{\includegraphics{fig4b.eps}}
\end{minipage}
\caption{\label{DOS55} Layer-resolved density of states of the
a) structurally ideal and b) relaxed (1,5) multilayer.
The two topmost panels show Ti$^{3+}$~ and Ti$^{4+}$~ at the IF, the succeeding panels
show the behavior of the Ti-ion in deeper layers of the STO part of the slab.
While for the ideal geometry rapid relaxation of the electronic structure to
bulk form versus distance from the interface (IF) takes place, in the relaxed
structure the electronic relaxations involve deeper lying layers.
}
\end{figure}
{\it The (n,m) superlattice.}
To examine charge- and spin-order relaxation towards bulk behavior,
and observe charge accommodation at more isolated IFs, we have studied several
thicker slabs containing $(n,m)$ layers of LTO and STO, respectively, with
$1\leq n,m \leq 9$. Following the experiment of Ohtomo
{\it et al.}~\cite{ohtomo2002} we present results specifically for
the (1,5) and (5,1) as
well as the (5,5) superlattices, all of
which we find to be disproportionated, CO and OO, and insulating in the strong
interaction regime.
As is clear both from the layer-resolved magnetic moments
presented in Table~\ref{tab:MU_nm} and the layer resolved projected
DOS for the (1,5) superlattice in Fig.~\ref{DOS55}a),
the IF TiO$_2$ layer, and only this layer, is CO/OO with Ti$^{3+}$~and
Ti$^{4+}$~distributed in a checkerboard manner.
At every second Ti-site the $t_{2g}$ states
split according to the IF-imposed symmetry lowering,
and the $d_{xy}$ orbital becomes occupied.
The $t_{2g}$ states on the Ti$^{4+}$~ions remain essentially
degenerate, and there is only
a tiny induced moment $M_{\rm Ti^{4+}}= 0.06\mu_{\rm B}$.
Ti ions in neighboring or deeper
layers in the STO part of the slab have the configuration
$3d^0$ and are nonmagnetic,
while those on the LTO side of the slab
have the configuration $3d^1$ and are AFM G-type ordered.
Thus the charge mismatch is localized at the interface
layer, with bulk LTO and STO character quickly re-emerging on
neighboring layers. Consequently, these results indicate a relaxation length
much less than the 1-2~nm value
estimated from the EELS data~\cite{ohtomo2002}.
The same CO/OO results have been obtained on a variety of $(n,m)$
LTO-STO superlattices;
these repeatedly emerging insulating ordered IF phases are very robust.
However, the systems discussed so far are structurally
perfect with ideal positions of the atoms in the perovskite lattice.
In the following we discuss the influence of lattice relaxations
on the electronic
properties of the system. Recently, two DFT studies using GGA~\cite{hamann}
and the LDA+U approach~\cite{spaldin06} investigated structural
relaxations in LaTiO$_3$/SrTiO$_3$~superlattices, finding that Ti-ions at the IF
are displaced by 0.15\AA\ with respect to the oxygen ions leading to a
longer Ti-Ti distance through
the LaO layer than through the SrO-layer. This ``ferroelectric''-like
distortion decays quickly in deeper lying layers. Using the relaxations
reported in Ref.\cite{spaldin06}, we repeated the calculations for the
(1,5)-heterostructure. The resulting layer-resolved projected DOS at the
Ti-ions is displayed in Fig.~\ref{DOS55}b). The most prominent feature is that
for the relaxed structure the $d_{xy}$-band (the lower Hubbard band) has
been shifted up by 0.4 eV,
leaving it incompletely ($\sim$70\%) occupied.
The charge is distributed in the minority spin channel at the Ti$^{3+}$-sites
(hybridization with O$2p$ bands) reducing the magnetic moment from
$0.71\mu_{\rm B}$ in the ideal structure to $0.50\mu_{\rm B}$. Additionally
there is a small contribution to the conductivity from Ti$^{4+}$~ions in deeper-lying layers
of the SrTiO$_3$-host, whose $d$-bands now slightly overlap the Fermi level.
Hence it is the lattice relaxations that result in a metallic heterostructure
and a longer
healing length towards bulk behavior, in agreement with the experimental
observations~\cite{ohtomo2002,hwang2006} in spite of a majority of the charge
being tied up at the IF.
Still, the CO/OO arrangement remains;
it is robust with respect to relaxation and tetragonal distortion.
Now we summarize.
The many superlattices that we have studied produce robust and readily
understood results for the IFs; these, however, have a charge- and
orbital-ordered character that was unanticipated from the original reports
on these heterostructures.
If the interaction strength $U$ within the Ti $3d$ states is large enough to
reproduce the AFM insulating state in cubic LTO, then it is more
advantageous for the local charge
imbalance to be accommodated within the IF layer itself, which can be
accomplished by disproportionation, followed by charge order with
Ti$^{3+}$~and Ti$^{4+}$~distributed in a checkerboard manner. The interface
layer is orbitally-ordered, with an occupied $d_{xy}$-orbital; this
symmetry breaking is due to the intrinsic IF symmetry and is
unaffected by atomic relaxation. Indeed both disproportionation and
orbital ordering are insensitive to relaxation.
For the ideal structure, the CO/OO state is a very narrow gap insulator.
In agreement with previous studies,~\cite{hamann,spaldin06} coupling to the
lattice is however found to be important in some respects.
Most notably,
atomic relaxation at the IF shifts the Ti$^{3+}$~lower Hubbard band
upward just enough to lead to conducting behavior, which also implies
a longer healing length towards bulk behavior, consistent with the
experimental indications.
We acknowledge discussions with
J. Rustad and J. Kune\v{s}, and communication with A. J. Millis and N. A.
Spaldin. R.P. was supported through DOE grant
DE-FG02-04ER15498. W.E.P. was supported by DOE grant DE-FG03-01ER45876 and
the DOE Computational Materials Science Network. W.E.P also acknowledges
support from the Alexander von Humboldt Foundation, and hospitality of
the Max Planck Institute Stuttgart and IFW Dresden, during the latter stages
of this work.
\section{Introduction}
Charge exchange processes during atom-surface or molecule-surface collisions have been the subject
of intense scientific research during the last decades~\cite{Winter2007,Rabalais1994}. This type
of surface reaction is of fundamental interest. It represents a quantum-impurity problem where a
finite many-body system with discrete quantum states couples to an extended
system with a continuum of states which essentially acts as a reservoir for electrons. Under
appropriate conditions~\cite{He2010,Merino1998Roomtemperature,Shao1994Manybody} such an arrangement
gives rise, for instance, to the Kondo effect~\cite{KondoEffect1993}, originally found in metals containing
magnetic impurities, or to Coulomb blockades as discussed in nanostructures~\cite{CoulombBlockade1992}.
Besides of being a particular realization of a quantum impurity problem, atom/molecule-surface
collisions are also of technological interest, especially in the field of bounded low-temperature
plasmas, where this type of surface collision is the main supplier of secondary electrons which in turn
strongly affect the overall charge balance of the discharge~\cite{Lieberman2005}. In dielectric barrier
discharges, for instance, secondary electron emission determines whether the discharge operates in a
filamentary or a diffuse mode~\cite{Brandenburg2005,Massines2003}. Only the latter mode is useful for
surface modification. Controlling the yield with which secondary electrons are produced is thus of great
practical interest. This applies even more so to micro-discharges~\cite{Becker2006} where the continuing
miniaturization gives charge-transferring surface reactions more and more influence on the properties of
the discharge.
Depending on the projectile and the target, secondary electron emission usually occurs either in the
course of a resonant tunneling process or an Auger transition. In some situations, however, both
transitions may be energetically allowed and hence contribute to the yield with which electrons are
released. The interplay of the two reaction channels has therefore been studied in the
past~\cite{Snowdon1987,Zimny1991,Onufriev1996Memory,Keller1998,Lorente1998Unified,Alvarez1998Auger,Goldberg1999,Pepino2002,Wang2001,Garcia2003Interference}.
Starting with the work of \citeauthor{Alvarez1998Auger}~\cite{Alvarez1998Auger} a detailed theoretical
analysis of the interference of Auger and resonant tunneling processes has been given by Goldberg and
coworkers~\cite{Goldberg1999,Wang2001,Garcia2003Interference}. Their results for $\mathrm{H}^+$ and $\mathrm{He}^+$
indicate the Auger channel to be active only close to the surface whereas the resonant channel is already
efficient at rather large projectile-surface distances. When both channels are coupled together the dynamics
of the system is hence controlled by the resonant channel as it destroys the initial species before the Auger
channel can become operative. The Auger channel is therefore strongly suppressed in the coupled system although
the individual efficiencies of the two reaction channels are comparable. Onufriev and Marston~\cite{Onufriev1996Memory}
also investigated the interplay of tunneling and Auger processes for the particular case of $\mathrm{Li}$(2p)
atoms de-exciting on a metallic surface. Using a sophisticated many-body theoretical description of the
scattering process they concluded that depending on the model parameters the two de-excitation channels
interfere either constructively or destructively.
Whereas the previous studies focused on atomic projectiles we will investigate in the following the interplay
of Auger and resonant tunneling processes for a molecular projectile. More precisely, we will analyze
how these two processes affect secondary electron emission due to de-excitation of metastable
molecules. Neutralization of molecular ions~\cite{Heiland94,Imke1986Resonant,Imke1986Theory} will not be
discussed. A particularly interesting case is the de-excitation of metastable \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\
because this molecule de-excites via two primary reaction channels~\cite{Stracke1998Formation,Lorente1999N2}.
On the one hand, there is the two-step resonant charge transfer (RCT) reaction
\begin{equation}
e_{\vec{k}}^- + \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)} \xrightarrow{\text{RCT}} \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}
\xrightarrow{\text{RCT}} \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)} + e_{\vec{q}}^- \;,
\label{eq react rct}
\end{equation}
where~$e_{\vec{k}}^-$ and~$e_{\vec{q}}^-$ denote an electron within the surface and a free electron, respectively. In
this process the metastable~\ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecule first resonantly captures an electron from the
surface to form the intermediate negative ion shape resonance \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}\ which then decays into
the ground state~\ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)}\ by resonantly emitting an electron. The decay of the negative ion can be either
due to the ion's natural lifetime or due to the ion-surface interaction. In addition to the RCT channel, there exists
an Auger de-excitation reaction also known as Penning de-excitation
\begin{equation}
e_{\vec{k}}^- + \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)} \xrightarrow{\text{Auger}} \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)} + e_{\vec{q}}^- \;.
\label{eq react auger}
\end{equation}
Here the molecule non-resonantly captures a surface electron and simultaneously releases another electron. Depending
on the surface band structure both processes may be possible at the same time. This is for instance the case
for diamond as it possesses a rather wide valence band.
In our previous work~\cite{Marbach2011Auger,Marbach2012Resonant} we investigated the two reaction channels
separately by a quantum-kinetic approach and a rate equation technique. The molecule was in both cases
treated as a semi-empirical two-level system corresponding to the $2\pi_u$ and $2\pi_g$ molecular
orbitals, which are the two molecular orbitals whose occupancies change during the de-excitation
process. Spectator electrons not involved in the processes were neglected.
Depending on the process the two levels denoted the upper and lower ionization levels of either the
metastable molecule (Auger de-excitation) or the negative ion (resonant charge-transfer).
The advantage of a semi-empirical model is that it is based on a few parameters which are relatively
easily accessible, either experimentally or theoretically. The difference between the two parameter sets,
which arises from intra-molecular Coulomb correlations not included in the model, is not a problem as
long as the two processes are treated separately. A simultaneous treatment of them requires
however a way to implement both parameterizations within a single Hamiltonian.
A way to overcome the problem would be of course to set up a more general projectile Hamiltonian,
including active and spectator electrons and their mutual Coulomb interactions. Treating
these intra-molecular Coulomb correlations explicitly is however extremely demanding. It embraces
a full quantum-chemical description of the approaching molecule which cannot easily be adapted from
one projectile to another. Since we are primarily interested in developing models and tools to
be used for the description of secondary electron emission from plasma walls, easy adaptation
from one projectile-target combination to another is however an important criterion for us. We stay
therefore within the limits of a semi-empirical two-level system and use instead projection operators
and two auxiliary bosons to assign and control the level energies. With these
constructs it is possible to formulate a Hamiltonian containing both channels, \eqref{eq react rct}
and~\eqref{eq react auger}, without introducing intra-molecular interactions. The projection
operators allow us to assign different parameterizations to the two levels, depending on the occupancy
of the system, whereas the auxiliary bosons enable us to mimic the intra-molecular Coulomb correlations
which need to kick in to make the two tunneling processes involved in \eqref{eq react rct} resonant.
Expressing the projection operators in terms of pseudo-particle operators with boson or fermion
statistics opens then the door for employing non-equilibrium Green functions to derive from the Hamiltonian
quantum-kinetic equations for the probabilities with which the molecular states can be found in the
course of the collision.
The strength and flexibility of the pseudo-particle or slave field approach, originally developed by
Coleman~\cite{Coleman1984New} in the context of the infinite-$U$ Anderson Hamiltonian, has been demonstrated
many times for Anderson-type and Anderson-Newns-type
models~\cite{Wingreen1994Anderson,Langreth1991Derivation,Shao1994Manybody,Aguado2003Kondo,Dutta2001Simple}.
We apply this method to a generalized Anderson-Newns model describing the coupling of different molecular
configurations to a solid. It is of the same type as, but not identical to, the model introduced by
Marston and coworkers~\cite{Onufriev1996Memory,Marston1993Manybody}.
The derivation of the quantum-kinetic equations for the pseudo-particle propagators with the subsequent
reduction to the rate equations for the occupancies of the molecular pseudo-particle states follows the
work of Langreth and coworkers~\cite{Langreth1991Derivation,Shao1994Manybody}. As
a result we obtain rate equations describing the de-excitation of \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\
in situations where the RCT and the Auger channel are simultaneously open.
In the absence of the Auger channel the rate equations reduce
to the ones we derived intuitively before for the isolated RCT channel~\cite{Marbach2012Resonant}.
Applying the model to the particular case of a diamond surface shows the resonant process dominating
the Auger process. The overall secondary electron emission coefficient due to de-excitation of
\ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ at a diamond surface is on the order of $10^{-1}$.
The outline of the rest of the paper is as follows. In Sec.~\ref{sec model} we describe the semi-empirical
model on which our investigation of the de-excitation process is based. Thereafter, we explain in
Sec.~\ref{sec slave boson} the pseudo-particle representation. Afterwards, we conduct in Sec.~\ref{sec quant kin}
a second order quantum-kinetic calculation on top of the pseudo-particle model. In Sec.~\ref{sec semiclassical}
we introduce a physically motivated semi-classical approximation that allows us to reduce the set of
Dyson equations to a set of rate equations. Finally, we present in Sec.~\ref{sec results} the results
for the diamond surface and conclude in Sec.~\ref{sec conclusion} with a brief summary of the main points of
the work. Appendix~\ref{app langreth} lists the Langreth-Wilkins rules~\cite{Langreth1972Theory} as used in our
calculation and Appendix~\ref{DysonEq} collects the second order Dyson equations for the molecular Green
functions.
\section{Model\label{sec model}}
The interacting molecule-surface system is characterized by three different types of electronic states:
bound and unbound molecular states and states within the solid surface. In the spirit of our
previous work~\cite{Marbach2011Auger,Marbach2012Resonant} we restrict the attention to those states
whose occupancies change during the molecule-surface collision. For these states and the coupling
between them we construct a semi-empirical model. Its matrix elements can be obtained either from
quantum-mechanical calculations based on particular assumptions about the electron wave functions
or from experimentally measured ionization energies, electron affinities, surface response functions,
and electron tunneling rates. Since we are primarily interested in the quantum-kinetic handling of
the semi-empirical model we pursue for simplicity the former route. A more realistic parameterization
of the model is however in principle possible.
\begin{figure}
\centerline{\includegraphics[scale=1]{two-level-system.pdf}}
\caption{Correspondence between the occupation of the two level system and the molecular states.
Here~$\varepsilon_0$ and~$\varepsilon_1$ denote the energies of the levels~$0$ and $1$, respectively.}
\label{fig two level system}
\end{figure}
We treat the relevant bound states of the nitrogen molecule in terms of a two-level system consisting
of a ground state level ``$0$'' and an excited level ``$1$''. Within a linear combination of atomic orbitals
(LCAO) description of the molecule these two levels represent the nitrogen molecule's $2\pi_u$ and~$2\pi_g$
orbitals. Each of the two orbitals can carry four electrons. We neglect however the three electrons
in the $2\pi_u$ orbital and the three holes in the $2\pi_g$ orbital which are not directly involved
in the de-excitation process. They act only as frozen-in spectators. For the same reason we neglect
the electron spin and treat the magnetic quantum number~${m=\pm1}$ as an initial parameter. Hence, both
levels of our model carry at most one electron.
The two-level system represents any of the molecular states depicted in Fig.~\ref{fig two level system}:
the positive ion \ensuremath{\mathrm{N_2^+}(\NitrogenPositiveIonTermSymbol)}, the ground state \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)}, the metastable
state \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}, and the negative ion \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}.
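In an occupation-number notation $|n_0 n_1\rangle$ this correspondence reads
(our labeling convention)
\begin{equation}
|00\rangle \leftrightarrow \ensuremath{\mathrm{N_2^+}(\NitrogenPositiveIonTermSymbol)} \,,\;\;
|10\rangle \leftrightarrow \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)} \,,\;\;
|01\rangle \leftrightarrow \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)} \,,\;\;
|11\rangle \leftrightarrow \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)} \;.
\end{equation}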
Depending on the particular occupation the energies $\varepsilon_0$ and~$\varepsilon_1$ correspond therefore to the
ionization energies of different molecular states. Due to intra-molecular Coulomb interactions
these ionization energies are in general different~\cite{Kaldor1984Generalmodelspace,Honigmann2006Complex}.
We have to allow therefore $\varepsilon_0$ and~$\varepsilon_1$ to depend on the
occupancy of the two-level system, that is, on the molecular state it is supposed to represent. In
addition, the ionization energies are also subject to the surface's image potential $V_i(z)$ and
thus vary with time as the molecule moves with respect to the surface. Using the analysis presented in
Ref.~\onlinecite{Marbach2012Resonant} we find
\begin{subequations}
\begin{align}
\varepsilon_{0g}(z) & = \varepsilon_{0g}^\infty - V_i(z) \;,\\
\varepsilon_{1*}(z) & = \varepsilon_{1*}^\infty - V_i(z) \;,\\
\varepsilon_{0-}(z) & = \varepsilon_{0-}^\infty + V_i(z) \;, \\
\varepsilon_{1-}(z) & = \varepsilon_{1-}^\infty + V_i(z) \;,
\end{align}\label{eq energy shifts}%
\end{subequations}
where the subscripts~$g$, $*$ and $-$ signal the dependence of the energy levels on the molecular
state denoting, respectively, the ground state molecule, the metastable molecule, and the negative
ion. The unperturbed molecular energies are given by~\cite{Kaldor1984Generalmodelspace,Honigmann2006Complex}
\begin{equation}
\begin{split}
\varepsilon_{0g}^\infty & = -17.25\, eV\;, \phantom{\varepsilon_{1-}^\infty} \varepsilon_{1*}^\infty = -9.67\, eV \;,\\
\varepsilon_{0-}^\infty & = -14.49\, eV\;, \phantom{\varepsilon_{1*}^\infty} \varepsilon_{1-}^\infty = 1.18\, eV\;.
\end{split}\label{eq inf mol energies}%
\end{equation}
The overall energy scheme of the coupled molecule-surface system is sketched in Fig.~\ref{fig energy scheme}
for the particular case of a diamond surface. As can be seen the positive ion is neither involved in the
RCT nor the Auger process.
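Two elementary consequences of the parameterization~\eqref{eq inf mol energies}
are worth noting. First, the state dependence of the levels encodes intra-molecular
Coulomb shifts of
\begin{equation}
\varepsilon_{0-}^\infty - \varepsilon_{0g}^\infty = 2.76\, eV \;, \qquad
\varepsilon_{1-}^\infty - \varepsilon_{1*}^\infty = 10.85\, eV \;,
\end{equation}
which is precisely the information the projection operators introduced below have
to keep track of. Second, energy conservation in the Auger channel~\eqref{eq react auger}
fixes the asymptotic energy of the emitted electron,
\begin{equation}
\varepsilon_{\vec{q}}^\infty
= \varepsilon_{\vec{k}} + \varepsilon_{1*}^\infty - \varepsilon_{0g}^\infty
= \varepsilon_{\vec{k}} + 7.58\, eV \;,
\end{equation}
so that surface electrons from sufficiently shallow states are released above the
vacuum level.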
\begin{figure}
\centerline{\includegraphics[scale=1]{energy-scheme-diamond.pdf}}
\caption{Energy scheme for the case of a metastable \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecule interacting
with a diamond surface. The Auger de-excitation channel (a) is depicted on the left hand side whereas
the RCT electron capture (b) and the subsequent RCT electron release (c) are shown on the right hand side.
The latter transition is drawn with a dashed line because the emission is resonant: it occurs at the same $z$~position,
and hence the same energy, so that the emitted electron reaches the location shown only in the course of time.
The drawing is to scale in terms of energy units. \label{fig energy scheme}}
\end{figure}
For the image potential we employ for simplicity the classical expression,
\begin{equation}
V_i(z) \approx - \frac{\varepsilon_r^b - 1}{\varepsilon_r^b + 1} \frac{e^2}{16 \pi \varepsilon_0} \frac{1}{z} \;,
\end{equation}
with $\varepsilon_r^b$ standing for the surface's static bulk dielectric constant. Close to the surface the
image potential is however truncated according to $V_i(z_c)=V_0$ where $V_0$ is the depth of the
potential barrier confining the electrons of the solid participating in the de-excitation process. As
in our previous investigations~\cite{Marbach2011Auger,Marbach2012Resonant}, we describe the solid by
a step potential of depth $V_0$. For a metallic surface~\cite{Marbach2011Auger} the step depth is the
width of the conduction band $\Delta\varepsilon_C$, that is, the sum of the work function $\Phi_W$ and the Fermi
energy $\varepsilon_F$ whereas for a dielectric surface~\cite{Marbach2012Resonant} it is the sum of the width
of the valence band $\Delta\varepsilon_V$, the energy gap $\varepsilon_g$, and the electron affinity~$\varepsilon_\alpha$
which can be positive or negative.
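In compact form (merely restating the above),
\begin{equation}
V_0 =
\begin{cases}
\Delta\varepsilon_C = \Phi_W + \varepsilon_F & \text{(metal)} \;, \\[2pt]
\Delta\varepsilon_V + \varepsilon_g + \varepsilon_\alpha & \text{(dielectric)} \;.
\end{cases}
\end{equation}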
In both the RCT and the Auger channel the emitted electron stems from the molecule and, thus, the emission proceeds
into the molecular continuum states. We model the latter as free electron states moving along with the molecule and
label them with $\vec{q}$. Electrons residing in those states are also affected by the image potential and their
energy is thus given by
\begin{equation}
\varepsilon_{\vec{q}}(z) = \varepsilon_{\vec{q}}^\infty + V_i(z) = \frac{\hbar^2 q^2}{2 m_e} + V_i(z) \;.
\label{eq eps q}
\end{equation}
The wave function of the emitted electron is a two-center Coulomb wave for the Auger channel~\cite{Marbach2011Auger}
and a plane wave for the RCT channel~\cite{Marbach2012Resonant}. The latter is a special case of the former which
holds for zero effective nucleus charge. Since in the RCT channel the emitted electron leaves a neutral molecule
behind the plane wave is the suitable choice for this channel. In the Auger channel however the emitted electron
feels the residual two-center Coulomb attraction of the ion core. The two-center Coulomb wave takes this effect
into account. To complete the description of the model we note that we use LCAO molecular wave functions for
the two-level system and eigenstates of a step potential for the single-electron wave functions of the solid.
Explicit expressions for the wave functions and the details of the calculation of the Auger and tunneling matrix
elements are given in Refs.~\onlinecite{Marbach2011Auger,Marbach2012Resonant}.
Clearly, the model just described is based on a simple surface potential. Modeling the surface potential
as a superposition of a step potential (which somewhat underestimates the exponential tail of the metal
electron's wave functions and hence the absolute value of the matrix elements) and a classical image potential
(which is less critical because the turning point turns out to be far in front of the surface) nevertheless
allows us to write down analytic expressions for the matrix elements. For the quantum kinetics itself the
matrix elements are only parameters. Our approach can thus also be furnished with improved matrix elements
obtained from more realistic potentials. Based on the work of \citeauthor{Kurpick1996Basic}~\cite{Kurpick1996Basic}
we would, however, not expect dramatic differences.
We now cast the semi-empirical model just described into a mathematical form. Introducing projection operators
\begin{equation}
P^{n_0n_1}=|n_0n_1\rangle\langle n_0n_1|
\label{projectors}
\end{equation}
projecting onto states of the two-level system with $n_0=0,1$ electrons in the lower and $n_1=0,1$ electrons in
the upper state, the transitions shown in Fig.~\ref{fig energy scheme} give rise to a generalized Anderson-Newns
model~\cite{Marston1993Manybody},
\begin{equation}
\begin{split}
H(t) & = \sum_{\vec k} \varepsilon_{\vec k}^{{\phantom{\dag}}} \, c_{\vec{k}}^{\dag} \, c_{\vec{k}}^{{\phantom{\dag}}}
+ \sum_{\vec q} \varepsilon_{\vec q}^{{\phantom{\dag}}}(t) \, c_{\vec{q}}^{\dag} \, c_{\vec{q}}^{{\phantom{\dag}}} \\
& + \omega_0 b_0^{\dag} b_0 + \omega_1 b_1^{\dag}b_1\\
& + \sum_{n_0,n_1} P^{n_0n_1} \left(\varepsilon^{n_0n_1}_{0}(t) \, c_{0}^{\dag} \, c_{0}^{{\phantom{\dag}}}
+ \varepsilon^{n_0n_1}_{1}(t) \, c_{1}^{\dag} \, c_{1}^{{\phantom{\dag}}} \right) \\
& + \sum_{\vec k} \left(\big[P^{01}+P^{11}\big]V_{\vec{k}}^{{\phantom{\conj}}\!}(t) \, c_{\vec{k}}^{\dag}\, b_0^{\dag}\, c_{0}^{{\phantom{\dag}}}
+ h.c. \right) \\
& + \sum_{\vec q} \left(\big[P^{10}+P^{11}\big]V_{\vec{q}}^{{\phantom{\conj}}\!}(t) \, c_{\vec{q}}^{\dag}\, b_1^{\dag}\, c_{1}^{{\phantom{\dag}}}
+ h.c. \right) \\
& + \sum_{\vec{k} \vec{q}} \left(\big[P^{10}+P^{01}\big] V_{\vec{k}\vec{q}}(t) \,
c_0^\dag \, c_{\vec{k}}^{\phantom{\dag}} \, c_{\vec{q}}^\dag \, c_1^{\phantom{\dag}} + h.c. \right) \;,
\end{split}\label{eq hamiltonian}%
\end{equation}
where $\varepsilon^{10}_{0}=\varepsilon_{0g}$, $\varepsilon^{01}_{1}=\varepsilon_{1*}$, $\varepsilon^{11}_{0}=\varepsilon_{0-}$, and
$\varepsilon^{11}_{1}=\varepsilon_{1-}$. The remaining energy levels need not be specified further. They drop
out in the course of the pseudo-particle representation presented in the next section. The two auxiliary
bosons $b_0^{(\dag)}$ and $b_1^{(\dag)}$ mimic
the intra-molecular Coulomb correlations which make the two steps of the RCT channel \eqref{eq react rct}
resonant. This can be accomplished by setting $\omega_0=\varepsilon_1^{11}-\varepsilon_1^{01}$ and $\omega_1=\varepsilon_0^{11}-\varepsilon_0^{10}$.
The initial energy of the tunneling electron is then resonant
with the lower and the upper level of the negative ion, respectively. The rest of the Hamiltonian is
written in the notation we used in our previous work~\cite{Marbach2011Auger,Marbach2012Resonant}.
The time dependence of the Hamiltonian arises from the trajectory of the molecule's center of mass~${\vec{R}(t)}$.
For simplicity we assume normal incidence described by the trajectory
$\vec{R}(t) = \bigl( v_0 \abs{t} + z_0 \bigr) \vec{e}_z$,
where~$v_0$ is a constant velocity and~$z_0$ is the molecule's turning point which can be calculated from a Morse type
interaction potential~\cite{Marbach2011Auger}. Employing the trajectory the~$z$~dependence
of the molecular energies~\eqref{eq energy shifts} and the energy of the emitted electron~\eqref{eq eps q}
transforms into a time dependence. Similarly, the Auger matrix element $V_{\vec{k}\vec{q}}$ and the two resonant tunneling
matrix elements $V_{\vec{k}}$ and $V_{\vec{q}}$ also acquire a time dependence.
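To make this time dependence concrete, the following Python sketch evaluates the trajectory and the emitted-electron energy~\eqref{eq eps q} along it. It is purely illustrative: the parameter values are invented, and a classical image shift of the form $V_i(z)=-1/(4z)$ in atomic units is assumed for definiteness as a stand-in for the actual potential entering Eqs.~\eqref{eq energy shifts} and~\eqref{eq eps q}.

```python
import numpy as np

def z_of_t(t, v0=5.0e-4, z0=4.0):
    """Normal-incidence trajectory z(t) = v0*|t| + z0 (atomic units).
    v0 and z0 are illustrative values, not the paper's parameters."""
    return v0 * np.abs(t) + z0

def image_shift(z):
    """Classical image potential V_i(z) = -1/(4z) in atomic units
    (assumed form; the actual surface potential may differ in detail)."""
    return -1.0 / (4.0 * z)

def eps_q(t, eps_q_inf=0.05):
    """Time-dependent emitted-electron energy along the trajectory,
    eps_q(t) = eps_q^inf + V_i(z(t))."""
    return eps_q_inf + image_shift(z_of_t(t))
```

At the turning point $t=0$ the molecule is closest to the surface, so the downward image shift of $\varepsilon_{\vec q}$ is largest there and diminishes as the molecule recedes.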
The quantum-kinetic calculation presented below treats the matrix elements of the Hamiltonian as
parameters. We are thus not restricted to the specific approximations to the matrix elements derived in
our previous work~\cite{Marbach2011Auger,Marbach2012Resonant}. We could as well use matrix elements
from ab-initio calculations or experimental measurements. The natural decay of the negative ion,
described by the rate $\Gamma_n=1/\tau_n$ with $\tau_n$ the natural lifetime of the
negative ion, is not included in the Hamiltonian~\eqref{eq hamiltonian}. It will be inserted at the end
into the final set of rate equations for the molecular occupancies.
\section{Pseudo-particle representation\label{sec slave boson}}
The projection operators \eqref{projectors} permit us to describe the
transitions~\eqref{eq react rct} and~\eqref{eq react auger} by a single Hamiltonian. Depending on the
process and thus the occupancy of the molecular levels different matrix elements can be assigned to the
Hamiltonian. The projectors also guarantee that the~\ensuremath{\mathrm{N_2^+}(\NitrogenPositiveIonTermSymbol)}\ state never occurs. In
other words, they ensure that the occupancies of the two molecular levels never vanish simultaneously.
For instance, an electron residing in the upper level can only be released when an electron has been
captured in the lower level.
A drawback of the projection operators is that they are not suitable for a diagrammatic treatment which
on the other hand is a powerful tool to set up quantum-kinetic equations. To
remedy this drawback we now employ a pseudo-particle approach to express the Hamiltonian \eqref{eq hamiltonian}
in terms of slave
fields~\cite{Coleman1984New,Wingreen1994Anderson,Langreth1991Derivation,Shao1994Manybody,Aguado2003Kondo,Dutta2001Simple}.
The starting point for this procedure is the completeness condition,
\begin{equation}
|0 0 \rangle \langle 0 0| + |1 0 \rangle \langle 1 0| + |0 1 \rangle \langle 0 1| + |1 1 \rangle \langle 1 1| = 1\;,
\label{eq-completeness}
\end{equation}
which expresses the fact that the molecule can only be in one of the configurations depicted in
Fig.~\ref{fig two level system}. Introducing pseudo-particle operators $c_+^\dag$, $c_g^\dag$, $c_*^\dag$,
and $c_-^\dag$ that create the positive ion, the ground state molecule, the metastable molecule, and the
negative ion from an abstract vacuum state,
\begin{subequations}
\begin{align}
|0 0 \rangle & = c_+^\dag | \text{vac} \rangle, \; |1 0 \rangle = c_g^{\dag\,\,} | \text{vac} \rangle \;, \\
|0 1 \rangle & = c_*^{\dag\,\,} | \text{vac} \rangle, \; |1 1 \rangle = c_-^\dag | \text{vac} \rangle \;,
\end{align}\label{eq-basis-states}%
\end{subequations}
the completeness condition~\eqref{eq-completeness} becomes
\begin{equation}
c_+^\dag \, c_+^{\phantom{\dag}} + c_g^\dag \, c_g^{\phantom{\dag}} + c_*^\dag \, c_*^{\phantom{\dag}} + c_-^\dag \, c_-^{\phantom{\dag}} = 1 \;.
\label{eq-constraint-full}
\end{equation}
Using~\eqref{eq-completeness} and~\eqref{eq-basis-states} the operators~$c_{0/1}^{(\dag)}$ creating and destroying
an electron in the two states of the two-level system can then be written as
\begin{subequations}
\begin{align}
c_0^{\phantom{\dag}} & = c_0^{\phantom{\dag}} \cdot 1 = |0 0 \rangle \langle 1 0| - |0 1 \rangle \langle 1 1| = c_+^\dag \, c_g^{\phantom{\dag}} - c_*^\dag \, c_-^{\phantom{\dag}} \;, \\
c_0^\dag & = c_0^\dag \cdot 1 = |1 0 \rangle \langle 0 0| - |1 1 \rangle \langle 0 1| = c_g^\dag \, c_+^{\phantom{\dag}} - c_-^\dag \, c_*^{\phantom{\dag}} \;, \\
c_1^{\phantom{\dag}} & = c_1^{\phantom{\dag}} \cdot 1 = |0 0 \rangle \langle 0 1| + |1 0 \rangle \langle 1 1| = c_+^\dag \, c_*^{\phantom{\dag}} + c_g^\dag \, c_-^{\phantom{\dag}} \;, \\
c_1^\dag & = c_1^\dag \cdot 1 = |0 1 \rangle \langle 0 0| + |1 1 \rangle \langle 1 0| = c_*^\dag \, c_+^{\phantom{\dag}} + c_-^\dag \, c_g^{\phantom{\dag}} \;.
\end{align}\label{eq-c01-decomposition}%
\end{subequations}
In order to satisfy the anti-commutation relations of the~$c_{0/1}^{(\dag)}$ one has to define~${c_0^{\phantom{\dag}} \,
|1 1 \rangle = - |0 1 \rangle}$ and~${c_0^\dag \, |0 1 \rangle = - |1 1 \rangle}$ (see also
Ref.~\onlinecite{Carmona2002Slave}). The anti-commutator relations then reproduce the completeness
condition~\eqref{eq-constraint-full} when either the~$c_{g/*}$ are bosonic and the~$c_{-/+}$ are fermionic or
the~$c_{g/*}$ are fermionic and the~$c_{-/+}$ are bosonic. Without loss of generality we choose $c_g$ and $c_*$
to be bosonic and declare the labeling conventions
\begin{subequations}
\begin{align}
c_g^{(\dag)} \longrightarrow b_g^{(\dag)} \;, \phantom{c_+^{(\dag)} \longrightarrow f_+^{(\dag)}} & \hspace{-1.8cm}\phantom{c_-^{(\dag)}} c_*^{(\dag)} \longrightarrow b_*^{(\dag)} \;,\\
c_+^{(\dag)} \longrightarrow f_+^{(\dag)} \;, \phantom{c_g^{(\dag)} \longrightarrow b_g^{(\dag)}} & \hspace{-1.8cm}\phantom{c_*^{(\dag)}} c_-^{(\dag)} \longrightarrow f_-^{(\dag)} \;.
\end{align}
\label{convention}
\end{subequations}
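As a consistency check, the decomposition~\eqref{eq-c01-decomposition} together with the sign convention ${c_0^{\phantom{\dag}}\,|11\rangle = -|01\rangle}$ can be verified numerically in the four-dimensional configuration space. The following Python sketch, purely illustrative and not part of the actual calculation, builds the $4\times 4$ matrix representations in the basis $\{|00\rangle,|10\rangle,|01\rangle,|11\rangle\}$ and checks the canonical anti-commutation relations:

```python
import numpy as np

# basis ordering: 0 -> |00>, 1 -> |10>, 2 -> |01>, 3 -> |11>
def ketbra(i, j):
    """Matrix representation of |i><j| in the 4-dim configuration space."""
    m = np.zeros((4, 4))
    m[i, j] = 1.0
    return m

# decomposition (c01-decomposition): c0 = |00><10| - |01><11|,
#                                    c1 = |00><01| + |10><11|
c0 = ketbra(0, 1) - ketbra(2, 3)
c1 = ketbra(0, 2) + ketbra(1, 3)

def anti(a, b):
    """Anti-commutator {a, b}."""
    return a @ b + b @ a

I = np.eye(4)
assert np.allclose(anti(c0, c0.T), I)   # {c0, c0+} = 1
assert np.allclose(anti(c1, c1.T), I)   # {c1, c1+} = 1
assert np.allclose(anti(c0, c1), 0.0)   # {c0, c1}  = 0
assert np.allclose(anti(c0, c1.T), 0.0) # {c0, c1+} = 0
```

Dropping the minus sign in $c_0$ would violate the mixed anti-commutators, which is precisely why the convention ${c_0^{\phantom{\dag}}\,|11\rangle = -|01\rangle}$ is required.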
The constraint~\eqref{eq-constraint-full} then reduces to
\begin{equation}
Q = b_g^\dag \, b_g^{\phantom{\dag}} + b_*^\dag \, b_*^{\phantom{\dag}} + f_-^\dag \, f_-^{\phantom{\dag}} + f_+^\dag \, f_+^{\phantom{\dag}} = 1 \;,
\label{eq-projected-constraint}
\end{equation}
where we have introduced the usual pseudo-particle number operator~$Q$ to encapsulate the constraint.
Formally, the auxiliary fermion and boson operators are pseudo-particle operators creating and annihilating
molecular configurations. The constraint ensures that at any time only one of the four possible molecular
configurations is present in the system. The occupancy of a molecular pseudo-particle
state is thus at most unity. Hence, it represents the probability with which the molecular
configuration it describes appears in the course of the scattering event.
Inserting the decomposition~\eqref{eq-c01-decomposition} into the Hamiltonian \eqref{eq hamiltonian},
making the identifications \eqref{convention}, and collecting only terms which are
in accordance with \eqref{eq-projected-constraint} is straightforward. The result is
\begin{equation}
\begin{split}
H(t) = & \sum_{\vec k} \varepsilon_{\vec k}^{{\phantom{\dag}}} \, c_{\vec{k}}^{\dag} \, c_{\vec{k}}^{{\phantom{\dag}}} + \sum_{\vec q} \varepsilon_{\vec q}^{{\phantom{\dag}}}(t) \, c_{\vec{q}}^{\dag} \, c_{\vec{q}}^{{\phantom{\dag}}} \\
& + \omega_0 b_0^{\dag} b_0 + \omega_1 b_1^{\dag}b_1\\
& + \varepsilon_g^{{\phantom{\dag}}}(t) \, b_{g}^{\dag} \, b_{g}^{{\phantom{\dag}}} + \varepsilon_*^{{\phantom{\dag}}}(t) \, b_{*}^{\dag} \, b_{*}^{{\phantom{\dag}}} + \varepsilon_-^{{\phantom{\dag}}}(t) \, f_{-}^{\dag} \, f_{-}^{{\phantom{\dag}}} \\[1.5ex]
& - \sum_{\vec k} \Bigl( V_{\vec{k}}^{{\phantom{\conj}}\!}(t) \, c_{\vec{k}}^{\dag} \, b_0^{\dag}\, b_{*}^{\dag} \, f_{-}^{{\phantom{\dag}}} + h.c. \Bigr) \\
& + \sum_{\vec q} \Bigl( V_{\vec{q}}^{{\phantom{\conj}}\!}(t) \, c_{\vec{q}}^{\dag} \, b_1^{\dag}\, b_{g}^{\dag} \, f_{-}^{{\phantom{\dag}}} + h.c. \Bigr) \\
& + \sum_{\vec{k} \vec{q}} \left( V_{\vec{k}\vec{q}}(t) \, c_{\vec{k}}^{\phantom{\dag}} \, c_{\vec{q}}^\dag \, b_g^\dag \, b_*^{\phantom{\dag}} + h.c. \right) \;,
\end{split}\label{eq-projected-hamiltonian}%
\end{equation}
where we introduced the abbreviations
\begin{equation}
\varepsilon_g = \varepsilon_0^{10} \;,\quad \varepsilon_* = \varepsilon_1^{01} \;,\quad \varepsilon_- = \varepsilon_0^{11} + \varepsilon_1^{11} \;.
\label{assigment}
\end{equation}
Notice that no term in the Hamiltonian contains the operator $f_+$ or its adjoint. This is by construction
because the positive ion \ensuremath{\mathrm{N_2^+}(\NitrogenPositiveIonTermSymbol)}\ is not involved in the two transitions the Hamiltonian
is supposed to model. The physical meaning of the various terms in the Hamiltonian is particularly
transparent. Consider for instance the last term describing the Auger de-excitation. A metastable
molecule and an electron from the surface disappear while a ground state molecule and an Auger
electron are created.
The operators $f_-$ and $b_{*/g}$ obey the standard commutation and anti-commutation relations.
It is thus possible to conduct a non-equilibrium diagrammatic expansion of the interaction terms
in~\eqref{eq-projected-hamiltonian}. The Hamiltonian~\eqref{eq-projected-hamiltonian} conserves
the pseudo-particle number~$Q$. The quantum-kinetic equations, however, may contain terms which violate
the constraint. The projection onto the physical subspace with~${Q=1}$ therefore needs to be carried out
explicitly. For this purpose we employ in the next section the Langreth-Nordlander projection
technique~\cite{Langreth1991Derivation}.
\section{Quantum-kinetics\label{sec quant kin}}
To start the quantum-kinetic calculation we first define a contour-ordered fermion Green function $G_-$
for the negative ion and contour-ordered boson Green functions $B_{*/g/0/1}$ for the metastable molecule,
the molecular ground state, and the two auxiliary bosons, respectively. Using the notation
of \citeauthor{Langreth1991Derivation}~\cite{Langreth1991Derivation} we write
\begin{subequations}
\begin{align}
i G_-(t,t^\prime) & = \bigl\langle T_\mathcal{C} \, f_-^{\phantom{\dag}}(t) \, f_-^\dag(t^\prime) \bigr\rangle \;, \\
i B_{l}(t,t^\prime) & = \bigl\langle T_\mathcal{C} \, b_{l}^{\phantom{\dag}}(t) \, b_{l}^\dag(t^\prime) \bigr\rangle \;,
\end{align}
\label{contourGF}
\end{subequations}
where $l=*,g,0,1$, and define the analytic pieces~$G_-^{\genfrac{}{}{0pt}{}{>}{<}}$ and
$B_{l}^{\genfrac{}{}{0pt}{}{>}{<}}$ by
\begin{subequations}
\begin{align}
\begin{split}
i G_-(t,t^\prime) & = \Theta_\mathcal{C}(t-t^\prime) \, G_-^>(t,t^\prime) \\
& \phantom{=} - \Theta_\mathcal{C}(t^\prime-t) \, G_-^<(t,t^\prime) \;,
\end{split} \\
\begin{split}
i B_{l}(t,t^\prime) & = \Theta_\mathcal{C}(t-t^\prime) \, B_{l}^>(t,t^\prime) \\
& \phantom{=} + \Theta_\mathcal{C}(t^\prime-t) \, B_{l}^<(t,t^\prime) \;.
\end{split}
\end{align}%
\end{subequations}
The time-ordering operator~$T_\mathcal{C}$ and the~$\Theta_\mathcal{C}$~function are defined on a complex time contour.
The associated retarded Green functions~$G_-^R$ and~$B_{l}^R$ are given by
\begin{subequations}
\begin{align}
i G_-^R(t,t^\prime) & = \Theta(t-t^\prime) \bigl( G_-^>(t,t^\prime) + G_-^<(t,t^\prime) \bigr) \;, \\
i B_{l}^R(t,t^\prime) & = \Theta(t-t^\prime) \bigl( B_{l}^>(t,t^\prime) - B_{l}^<(t,t^\prime) \bigr)\;,
\end{align}\label{eq-retarded-definition}%
\end{subequations}
whereas the advanced functions~$G_-^A$ and~$B_{l}^A$ may be constructed from~\eqref{eq-retarded-definition}
using the relations
\begin{subequations}
\begin{align}
G_-^A(t,t^\prime) & = \bigl[ G_-^R(t^\prime,t) \bigr]^{*} \;, \\
B_{l}^A(t,t^\prime) & = \bigl[ B_{l}^R(t^\prime,t) \bigr]^{*} \;.
\end{align}
\label{SymmetryRelations}
\end{subequations}
In contrast to our previous work~\cite{Marbach2011Auger,Marbach2012Resonant}, we no longer use Keldysh's
matrix notation. Langreth's notation is more convenient. It enables us to use the powerful Langreth-Wilkins
rules~\cite{Langreth1972Theory} which lead more directly to the quantum-kinetic equations for the
molecular occupancies, that is, the probabilities with which the molecular configurations involved in the
de-excitation process appear in the course of the scattering event.
\begin{figure}
\centerline{\includegraphics[scale=1]{self-energy-rct-fermion.pdf}}
\centerline{\includegraphics{self-energy-rct-bosons.pdf}}
\centerline{\includegraphics{self-energy-auger-meta.pdf}}
\centerline{\includegraphics{self-energy-auger-ground.pdf}}
\caption{Second order self-energy diagrams in self-consistent non-crossing approximation. Depicted are from top to
bottom the self-energy of the fermionic level~$-i\Sigma_-(t_1,t_2)$, the RCT component of the self-energies of the
bosonic levels~$-i\Pi_{*}(t_1,t_2)$ and~$-i\Pi_{g}(t_1,t_2)$, the Auger component of the metastable boson
self-energy~$-i\Pi_*(t_1,t_2)$, and the Auger component of the ground state boson self-energy~$-i\Pi_g(t_1,t_2)$.
In the uppermost diagram the $+$ sign in the indices means that there are two separate diagrams, one with
the~$\vec{k}$ and~$*$~indices and one with the~$\vec{q}$ and~$g$~indices, both of which contribute additively
and thus have to be summed. Furthermore, in the diagram below the uppermost one the subscripts~${\vec{k}}$
and $0$ hold for~$\Pi_*$ and the subscripts~${\vec{q}}$ and $1$ hold for~$\Pi_g$. In all diagrams a double line
indicates a full propagator whereas a single line denotes an unperturbed propagator.
\label{fig self energies}}
\end{figure}
In order to calculate the self-energies we truncate the diagrammatic expansion beyond the second order and
employ the self-consistent non-crossing approximation~\cite{Langreth1991Derivation}.
The diagrams for the fermionic self-energy~$\Sigma_-$ and the
bosonic self-energies~$\Pi_{*}$ and~$\Pi_{g}$ are shown in Fig.~\ref{fig self energies}.
Mathematically they read
\begin{subequations}
\begin{align}
\begin{split}
\Sigma_-(t_1,t_2) & = i\, \sigma_{\vec{k}}(t_1,t_2) \, B_*(t_1,t_2) \\
& \phantom{=} \,\, +i \, \sigma_{\vec{q}}(t_1,t_2) \, B_g(t_1,t_2) \;,
\end{split} \\
\begin{split}
\Pi_{*}(t_1,t_2) & = -i\, \sigma_{\vec{k}}(t_2,t_1) \, G_-(t_1,t_2) \\
& \phantom{=} \,\, + \sigma_{\vec{k}\vec{q}}(t_1,t_2) \, B_g(t_1,t_2) \;,
\end{split} \\
\begin{split}
\Pi_{g}(t_1,t_2) & = -i\, \sigma_{\vec{q}}(t_2,t_1) \, G_-(t_1,t_2) \\
& \phantom{=} \,\, + \sigma_{\vec{k}\vec{q}}(t_2,t_1) \, B_*(t_1,t_2) \;
\end{split}
\end{align}\label{eq unprojected self energies}
\end{subequations}
with
\begin{subequations}
\begin {align}
\begin{split}
\sigma_{\vec{k}/\vec{q}}(t_1,t_2) & = \frac{i}{\hbar^2} \sum_{\vec{k}/\vec{q}} V_{\vec{k}/\vec{q}}^{*}(t_1) \, V_{\vec{k}/\vec{q}}^{\phantom{\conj}}(t_2) \\
& \phantom{=} \times G_{\vec{k}/\vec{q}}^{(0)}(t_1,t_2) B^{(0)}_{0/1}(t_1,t_2)\;,
\end{split} \\
\begin{split}
\sigma_{\vec{k}\vec{q}}(t_1,t_2) & = - \frac{1}{\hbar^2} \sum_{\vec{k}\vec{q}} V_{\vec{k}\vec{q}}^{*}(t_1) \, V_{\vec{k}\vec{q}}^{\phantom{\conj}}(t_2) \\
& \phantom{=} \times G_{\vec{k}}^{(0)}(t_2,t_1) \, G_{\vec{q}}^{(0)}(t_1,t_2) \;,
\end{split}
\end{align}\label{eq sigma definitions}%
\end{subequations}
and $G_{\vec{q}/\vec{k}}^{(0)}$ and $B_{0/1}^{(0)}$ denoting, respectively, the contour-ordered Green functions
for a free/valence band electron and an auxiliary boson.
In the self-energies~\eqref{eq unprojected self energies} the two reaction channels~\eqref{eq react rct}
and~\eqref{eq react auger} are separated. Every term involving~$\sigma_{\vec{k}}$ or~$\sigma_{\vec{q}}$
refers to the RCT channel and every term containing~$\sigma_{\vec{k}\vec{q}}$ pertains to the Auger channel.
Due to the dressed Green functions the two channels are however coupled. The Green functions of the
auxiliary bosons contained in~$\sigma_{\vec{k}}$ and $\sigma_{\vec{q}}$ ensure that the two tunneling
processes contained in~\eqref{eq react rct} are resonant. Physically, the auxiliary bosons simulate
the action of the intra-molecular correlations which kick in when an electron hops on and off the molecule.
Using the Langreth-Wilkins rules for analytic continuation (see Refs.~\onlinecite{Langreth1972Theory,Langreth1991Derivation}
and Appendix~\ref{app langreth}) we obtain from the self-energies~\eqref{eq unprojected self energies} the set of Dyson
equations given in Appendix~\ref{DysonEq}. The components of these equations arising from the RCT terms are equivalent to
the ones in Ref.~\onlinecite{Langreth1991Derivation} but with two bosonic pseudo-particles instead of one and an energy
shift caused by the auxiliary bosons. The set of Dyson
equations~\eqref{eq-plain-dyson} contains terms which violate the constraint~\eqref{eq-projected-constraint}. Before
physically meaningful information can be extracted the Dyson equations have to be projected onto the physical subspace
defined by the constraint~\eqref{eq-projected-constraint}.
The procedure to achieve this is originally due to Langreth and Nordlander and has been outlined several
times~\cite{Langreth1991Derivation,Wingreen1994Anderson,Aguado2003Kondo}.
It is based on inspecting the order of the Green functions in the conserved pseudo-particle number $Q$. The retarded
functions $G_-^R$ and $B_{*/g}^R$ are proportional to $Q^0$ while the lesser Green functions~$G_-^<$ and~$B_{*/g}^<$ are
proportional to~$Q^1$. Thus, we have to omit any terms of higher order than~$Q^0$ from the retarded self-energies and any
terms of higher order than~$Q^1$ from the lesser self-energies. This approach is not an additional approximation but an
exact projection enforced by the constraint~\cite{Langreth1991Derivation}.
Before carrying out the projection we split off the Green functions' oscillating factors by means of the
decompositions~\cite{Shao1994Manybody}
\begin{subequations}
\begin{align}
G_-^{</R/A}(t,t^\prime) & = \widetilde{G}_-^{</R/A}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, \varepsilon_-(t_1)} \;, \\
B_*^{</R/A}(t,t^\prime) & = \widetilde{B}_*^{</R/A}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, \varepsilon_*(t_1)} \;, \\
B_g^{</R/A}(t,t^\prime) & = \widetilde{B}_g^{</R/A}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, \varepsilon_g(t_1)} \;,
\end{align}\label{eq-g<-decomposition}%
\end{subequations}
and
\begin{subequations}
\begin{align}
\widetilde{G}_-^R(t,t^\prime) & = -i \, \Theta(t-t^\prime) \, g_-(t,t^\prime) \;, \\
\widetilde{G}_-^A(t,t^\prime) & = i \, \Theta(t^\prime - t) \, g_-(t,t^\prime) \;, \\
\widetilde{B}_{*/g}^R(t,t^\prime) & = -i \, \Theta(t-t^\prime) \, b_{*/g}(t,t^\prime) \;, \\
\widetilde{B}_{*/g}^A(t,t^\prime) & = i \, \Theta(t^\prime - t) \, b_{*/g}(t,t^\prime) \;.
\end{align}\label{eq-gr-decomposition}%
\end{subequations}
Using the definition of the retarded and advanced Green functions we find the following relations~\cite{Shao1994Manybody}
\begin{subequations}
\begin{align}
g_-(t,t) & = b_{*/g}(t,t) = 1 \;, \\
g_-(t,t^\prime) & = \bigl[ g_-(t^\prime,t) \bigr]^{*} \;, \\
b_{*/g}(t,t^\prime) & = \bigl[ b_{*/g}(t^\prime,t) \bigr]^{*} \;.
\end{align}
\end{subequations}
Within the Dyson equations the oscillating terms emerging from Eqs.~\eqref{eq-g<-decomposition} will be absorbed in the functions~$\widetilde{\sigma}_{\vec{k}}^<$, $\widetilde{\sigma}_{\vec{q}}^>$ and~$\widetilde{\sigma}_{\vec{k}\vec{q}}^>$ which are defined by
\begin{subequations}
\begin{align}
\sigma_{\vec{k}}^{<}(t,t^\prime) & = \widetilde{\sigma}_{\vec{k}}^{<}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, [ \varepsilon_-(t_1) - \varepsilon_*(t_1) ]} \;, \\
\sigma_{\vec{q}}^{>}(t,t^\prime) & = \widetilde{\sigma}_{\vec{q}}^{>}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, [ \varepsilon_-(t_1) - \varepsilon_g(t_1) ]} \;, \\
\sigma_{\vec{k}\vec{q}}^{>}(t,t^\prime) & = \widetilde{\sigma}_{\vec{k}\vec{q}}^{>}(t,t^\prime) \, e^{-\frac{i}{\hbar} \int_{t^\prime}^t \mathrm{d}t_1 \, [ \varepsilon_*(t_1) - \varepsilon_g(t_1) ]} \;.
\end{align}\label{eq-sigma-decomposition}%
\end{subequations}
The terms~$\sigma_{\vec{k}}^>$, $\sigma_{\vec{q}}^<$, and~$\sigma_{\vec{k}\vec{q}}^<$ vanish identically due to the
initial conditions $n_0(t_0)=n_{\vec{k}}(t_0)=1$ and $n_1(t_0)=n_{\vec{q}}(t_0)=0$ since
\begin{subequations}
\begin{align}
\sigma_{\vec{k}}^>(t,t^\prime) & \sim \bigl( 1 - n_{\vec{k}}(t_0) \bigr) \bigl( 1 - n_0(t_0) \bigl) = 0 \;, \\
\sigma_{\vec{q}}^<(t,t^\prime) & \sim n_{\vec{q}}(t_0) n_1(t_0) = 0 \;, \\
\sigma_{\vec{k}\vec{q}}^<(t,t^\prime) & \sim n_{\vec{q}}(t_0) = 0 \;.
\end{align}\label{eq-sigma-initial}%
\end{subequations}
Employing the Langreth-Nordlander projection together with the relations~\eqref{eq-g<-decomposition}, \eqref{eq-gr-decomposition}, \eqref{eq-sigma-decomposition} and~\eqref{eq-sigma-initial} the set of Dyson equations~\eqref{eq-plain-dyson} takes the following form
\begin{widetext}
\begin{subequations}
\begin{align}
\begin{split}
\frac{\partial}{\partial t} \widetilde{G}_-^<(t,t^\prime) & = - \int_{-\infty}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t,t_1) \, b_g(t,t_1) \, \widetilde{G}_-^<(t_1,t^\prime) + \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t,t_1) \, \widetilde{B}_*^<(t,t_1) \, g_-(t_1,t^\prime) \;,
\end{split} \label{eq-projected-dyson-first} \tag{\theequation a} \\[1ex]
\begin{split}
\frac{\partial}{\partial t} \widetilde{B}_*^<(t,t^\prime) & = - \int_{-\infty}^t\mathrm{d}t_1 \; \Bigl[ \widetilde{\sigma}_{\vec{k}}^<(t_1,t) \, g_-(t,t_1) \, \widetilde{B}_*^<(t_1,t^\prime) + i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t,t_1) \, b_g(t,t_1) \, \widetilde{B}_*^<(t_1,t^\prime) \Bigr] \;,
\end{split} \tag{\theequation b} \\[1ex]
\begin{split}
\frac{\partial}{\partial t} \widetilde{B}_g^<(t,t^\prime) & = \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \Bigl[ \widetilde{\sigma}_{\vec{q}}^>(t_1,t) \, \widetilde{G}_-^<(t,t_1) \, b_{g}(t_1,t^\prime) + i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t_1,t) \, \widetilde{B}_*^<(t,t_1) \, b_g(t_1,t^\prime) \Bigr] \;,
\end{split} \tag{\theequation c} \\[1ex]
\begin{split}
\Theta(t-t^\prime) \frac{\partial}{\partial t} g_-(t, t^\prime) & = - \int_{t^\prime}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t,t_1) \, b_g(t,t_1) \, g_-(t_1,t^\prime) \;,
\end{split} \tag{\theequation d} \\[1ex]
\begin{split}
\Theta(t-t^\prime) \frac{\partial}{\partial t} b_*(t, t^\prime) & = - \int_{t^\prime}^t\mathrm{d}t_1 \; \Bigl[ \widetilde{\sigma}_{\vec{k}}^<(t_1,t) \, g_-(t,t_1) \, b_{*}(t_1,t^\prime) + i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t,t_1) \, b_g(t,t_1) \, b_*(t_1,t^\prime) \Bigr] \;,
\end{split} \tag{\theequation e} \\[1ex]
\begin{split}
\Theta(t-t^\prime) \frac{\partial}{\partial t} b_g(t, t^\prime) & = 0 \;.
\end{split} \label{eq-projected-dyson-last} \tag{\theequation f}
\end{align}\label{eq-projected-dyson}%
\end{subequations}
\end{widetext}
The time evolution of the molecular occupation numbers~$n_-$, $n_*$ and~$n_g$ can be calculated from the following
relations~\cite{Langreth1991Derivation}
\begin{subequations}
\begin{align}
\frac{\mathrm{d}n_-(t)}{\mathrm{d}t} & = \frac {\partial \widetilde{G}_-^<(t,t^\prime)}{\partial t} \biggr|_{t=t^\prime} + \frac {\partial \widetilde{G}_-^<(t,t^\prime)}{\partial t^\prime} \biggr|_{t=t^\prime} \;, \\[1ex]
\frac{\mathrm{d}n_{*}(t)}{\mathrm{d}t} & = \frac {\partial \widetilde{B}_{*}^<(t,t^\prime)}{\partial t} \biggr|_{t=t^\prime} + \frac {\partial \widetilde{B}_{*}^<(t,t^\prime)}{\partial t^\prime} \biggr|_{t=t^\prime} \;, \\[1ex]
\frac{\mathrm{d}n_{g}(t)}{\mathrm{d}t} & = \frac {\partial \widetilde{B}_{g}^<(t,t^\prime)}{\partial t} \biggr|_{t=t^\prime} + \frac {\partial \widetilde{B}_{g}^<(t,t^\prime)}{\partial t^\prime} \biggr|_{t=t^\prime} \;.
\end{align}\label{eq-occupancies}%
\end{subequations}
Hence, we also need the adjoint Dyson equations of the lesser functions which can be calculated in the same manner. The result reads
\begin{subequations}
\begin{align}
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{G}_-^<(t,t^\prime) & = - \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{G}_-^<(t,t_1) \, \widetilde{\sigma}_{\vec{q}}^>(t_1,t^\prime) \, b_g(t_1,t^\prime)\\
& + \int_{-\infty}^t\mathrm{d}t_1 \; g_-(t,t_1) \, \widetilde{\sigma}_{\vec{k}}^<(t_1,t^\prime) \, \widetilde{B}_*^<(t_1,t^\prime) \;,
\end{split} \label{eq proj adj dyson G-} \\[1ex]
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{B}_*^<(t,t^\prime) & = - \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{B}_*^<(t,t_1) \, \widetilde{\sigma}_{\vec{k}}^<(t^\prime,t_1) \, g_-(t_1,t^\prime) \\
& - \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{B}_*^<(t,t_1) \, i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t_1,t^\prime) \, b_g(t_1,t^\prime) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{B}_g^<(t,t^\prime) & = \int_{-\infty}^t\mathrm{d}t_1 \; b_g(t,t_1) \, \widetilde{\sigma}_{\vec{q}}^>(t^\prime, t_1) \, \widetilde{G}_-^<(t_1,t^\prime)\\
& + \int_{-\infty}^t\mathrm{d}t_1 \; b_g(t,t_1) \, i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t^\prime,t_1) \, \widetilde{B}_*^<(t_1,t^\prime) \;.
\end{split}
\end{align}\label{eq-projected-adjoint-dyson}%
\end{subequations}
Equations~\eqref{eq-projected-dyson} and~\eqref{eq-projected-adjoint-dyson} constitute the final projected set of Dyson equations
that determines the dynamics of the system within the subspace~$Q=1$. The rate-equation-like structure of these equations is
already evident. The Dyson equation for the lesser Green function of the negative ion, Eq.~\eqref{eq proj adj dyson G-},
for instance, contains a production term proportional to~$\widetilde{\sigma}_{\vec{k}}$ and~$\widetilde{B}_*^<$ and a loss term
proportional to~$\widetilde{\sigma}_{\vec{q}}$ and $\widetilde{G}_-^<$. These terms relate to the production and loss of
negative ions by the RCT electron capture and release reaction, respectively.
Since the self-energies are known in terms of the Green functions, Eqs.~\eqref{eq-projected-dyson}
and~\eqref{eq-projected-adjoint-dyson} constitute a closed set of equations. A numerical solution along
the lines pioneered by \citeauthor{Shao1994Manybody}~\cite{Shao1994Manybody} could thus be attempted. The
rather involved numerics of double-time Green functions is however not required for moderate projectile
velocities~\cite{Shao1994Manybody}. In that case the semi-classical approximation described in the next
section can be employed to reduce Eqs.~\eqref{eq-projected-dyson} and~\eqref{eq-projected-adjoint-dyson}
to a set of rate equations. As far as possible applications of our approach to plasma walls are concerned,
we have to keep in mind, however, that plasma walls are usually negatively charged with respect to the bulk
plasma. Charged projectiles might thus acquire kinetic energies for which the semi-classical approximation,
and with it the rate equations, are no longer valid. The metastable molecules we are concerned with in the
present work, however, approach the surface with thermal energies, making the rate equations an excellent
approximation to the full two-time equations.
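The structure of the rate equations obtained in the next section, Eqs.~\eqref{eq-rate-lowest}, can be illustrated with a minimal forward-Euler integration in Python. The constant rates used below are purely illustrative stand-ins; the actual rates are time dependent and follow from the self-energies.

```python
import numpy as np

def integrate_rates(gamma0, gamma1, gamma_a, t_grid):
    """Forward-Euler integration of the three-level rate system
        dn-/dt = -G1 n- + G0 n*,
        dn*/dt = -(G0 + GA) n*,
        dng/dt =  G1 n- + GA n*.
    gamma0/1/a are callables of t; initial state: metastable molecule."""
    n_m, n_s, n_g = 0.0, 1.0, 0.0
    for i in range(len(t_grid) - 1):
        dt = t_grid[i + 1] - t_grid[i]
        g0, g1, ga = gamma0(t_grid[i]), gamma1(t_grid[i]), gamma_a(t_grid[i])
        dn_m = -g1 * n_m + g0 * n_s
        dn_s = -(g0 + ga) * n_s
        dn_g = g1 * n_m + ga * n_s
        n_m += dt * dn_m
        n_s += dt * dn_s
        n_g += dt * dn_g
    return n_m, n_s, n_g

# illustrative constant rates (not physical values)
t = np.linspace(0.0, 10.0, 10001)
n_m, n_s, n_g = integrate_rates(lambda t: 0.3, lambda t: 0.5, lambda t: 0.2, t)
```

The right-hand sides sum to zero, so $n_- + n_* + n_g$ is conserved, reflecting the constraint ${Q=1}$ with the positive-ion channel remaining empty.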
\section{Semi-classical approximation\label{sec semiclassical}}
The strongly oscillating factors of the projected set of Dyson equations are contained in the
functions~$\widetilde{\sigma}_{\vec{k}}^<$, $\widetilde{\sigma}_{\vec{q}}^>$, and $\widetilde{\sigma}_{\vec{k}\vec{q}}^>$.
If these functions are strongly peaked along the time diagonal~$t=t^\prime$, we can apply a saddle point approximation
to the integrals in~\eqref{eq-projected-dyson} and~\eqref{eq-projected-adjoint-dyson}. For instance,
\begin{equation}
\begin{split}
& \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t,t_1) \, \widetilde{B}_*^<(t,t_1) \, g_-(t_1,t^\prime) \\
& \quad \approx \widetilde{B}_*^<(t,t) \, g_-(t,t^\prime) \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t,t_1) \;.
\end{split}\label{eq semiclassical example}%
\end{equation}
The validity of this approximation, which is also known as the semi-classical
approximation~\cite{Langreth1991Derivation}, will be demonstrated in Sec.~\ref{sec results}.
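The essence of the approximation can also be illustrated with a short numerical experiment, using a Gaussian stand-in for the peaked function and time scales chosen purely for illustration: pulling the slowly varying factor out of the integral at the peak reproduces the exact integral to high accuracy whenever the kernel width is small compared to the variation scale of the slow factor.

```python
import numpy as np

tau, t = 0.02, 0.3                       # kernel width << variation scale of g
t1 = np.linspace(-5.0, t, 200001)
dt = t1[1] - t1[0]

kernel = np.exp(-((t - t1) / tau) ** 2)  # stands in for the peaked sigma~(t, t1)
g = np.cos(t1)                           # slowly varying factor g(t1)

exact = np.sum(kernel * g) * dt
saddle = g[-1] * np.sum(kernel) * dt     # g pulled out at the peak t1 = t

rel_err = abs(exact - saddle) / abs(exact)
assert rel_err < 1e-2
```

In the physical problem the width of the $\widetilde{\sigma}$ functions is set essentially by the inverse band width, while the Green functions vary on the much slower time scale of the molecular motion, which is what makes the approximation work at thermal velocities.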
Within the saddle-point approximation the projected Dyson equations for the lesser Green functions become
\begin{widetext}
\begin{subequations}
\begin{align}
\begin{split}
\frac{\partial}{\partial t} \widetilde{G}_-^<(t,t^\prime) & \approx - \underbrace{b_g(t,t)}_{1} \widetilde{G}_-^<(t,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t,t_1) + \widetilde{B}_*^<(t,t) \, g_-(t,t^\prime) \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t,t_1) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{G}_-^<(t,t^\prime) & \approx - \widetilde{G}_-^<(t,t^\prime) \underbrace{b_g(t^\prime,t^\prime)}_{1} \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t_1,t^\prime) + g_-(t,t^\prime) \, \widetilde{B}_*^<(t^\prime,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t_1,t^\prime) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t} \widetilde{B}_*^<(t,t^\prime) & \approx - \underbrace{g_-(t,t)}_{1} \widetilde{B}_*^<(t,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t_1,t) - \underbrace{b_g(t,t)}_{1} \widetilde{B}_*^<(t,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t,t_1) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{B}_*^<(t,t^\prime) & \approx - \widetilde{B}_*^<(t,t^\prime) \underbrace{g_-(t^\prime,t^\prime)}_{1} \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{k}}^<(t^\prime,t_1) - \widetilde{B}_*^<(t,t^\prime) \underbrace{b_g(t^\prime,t^\prime)}_{1} \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t_1,t^\prime) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t} \widetilde{B}_g^<(t,t^\prime) & \approx \widetilde{G}_-^<(t,t) \, b_{g}(t,t^\prime) \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t_1,t) + \widetilde{B}_*^<(t,t) \, b_{g}(t,t^\prime) \int_{-\infty}^{t^\prime}\mathrm{d}t_1 \; i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t_1,t) \;,
\end{split} \\[1ex]
\begin{split}
\frac{\partial}{\partial t^\prime} \widetilde{B}_g^<(t,t^\prime) & \approx b_g(t,t^\prime) \, \widetilde{G}_-^<(t^\prime,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; \widetilde{\sigma}_{\vec{q}}^>(t^\prime, t_1) + b_g(t,t^\prime) \, \widetilde{B}_*^<(t^\prime,t^\prime) \int_{-\infty}^t\mathrm{d}t_1 \; i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t^\prime, t_1) \;.
\end{split}
\end{align}
\end{subequations}
\end{widetext}
Using Eqs.~\eqref{eq-occupancies} we then arrive at a set of rate equations for the occupancies of the molecular
pseudo-particle states,
\begin{subequations}
\begin{align}
\frac{\mathrm{d}n_-(t)}{\mathrm{d}t} & \approx - \Gamma_1(t) \, n_-(t) + \Gamma_0(t) \, n_*(t) \;, \label{eq ode n-} \\
\frac{\mathrm{d}n_*(t)}{\mathrm{d}t} & \approx - \Gamma_0(t) \, n_*(t) - \Gamma_A(t) \, n_*(t) \;, \label{eq ode n*} \\
\frac{\mathrm{d}n_g(t)}{\mathrm{d}t} & \approx \Gamma_1(t) \, n_-(t) + \Gamma_A(t) \, n_*(t) \;, \label{eq ode ng}
\end{align}\label{eq-rate-lowest}%
\end{subequations}
where the rates are given by
\begin{subequations}
\begin{align}
\Gamma_0(t) & = \int_{-\infty}^{t}\mathrm{d}t_1 \; 2 \Re \bigl\{ \widetilde{\sigma}_{\vec{k}}^<(t,t_1) \bigr\} \;, \label{eq semi rate 0}\\
\Gamma_1(t) & = \int_{-\infty}^t\mathrm{d}t_1 \; 2 \Re \bigl\{ \widetilde{\sigma}_{\vec{q}}^>(t,t_1) \bigr\} + \Gamma_n \;, \label{eq semi rate 1} \\
\Gamma_A(t) & = \int_{-\infty}^t\mathrm{d}t_1 \; 2 \Re \bigl\{ i \widetilde{\sigma}_{\vec{k}\vec{q}}^>(t,t_1) \bigr\} \;. \label{eq semi rate auger}
\end{align}\label{eq semi-classical rates}%
\end{subequations}
Note that in Eq.~\eqref{eq semi rate 1} we have incorporated the natural decay of the negative ion by adding the natural
decay rate~$\Gamma_n=1/\tau_n$ on the right-hand side.
Similar to what Langreth and coworkers did in the context of the neutralization of atomic
ions~\cite{Langreth1991Derivation,Shao1994Manybody}, we have thus reduced a complicated set of Dyson
equations, Eqs.~\eqref{eq-projected-dyson} and~\eqref{eq-projected-adjoint-dyson}, describing the de-excitation
of a metastable molecule via the simultaneous action of the RCT channel \eqref{eq react rct} and the Auger
channel \eqref{eq react auger}, to an easy-to-handle system of rate equations~\eqref{eq-rate-lowest}. The
reaction rates~\eqref{eq semi-classical rates} entering the rate equations are linked to quantum-kinetic quantities
and thus related to the semi-empirical model.
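For illustration, the rate equations~\eqref{eq-rate-lowest} can be integrated numerically. The sketch below uses hypothetical Gaussian model rates, chosen only to mimic the exponential distance dependence of the physical rates, and verifies the particle-conservation property $n_-(t)+n_*(t)+n_g(t)=1$ that the scheme obeys by construction:

```python
import numpy as np

def integrate_rates(Gamma0, Gamma1, GammaA, t, n0=(0.0, 1.0, 0.0)):
    """Explicit Euler integration of the rate equations for
    (n_minus, n_star, n_g); the rates are callables of time."""
    nm, ns, ng = n0
    out = [(nm, ns, ng)]
    for t1, t2 in zip(t[:-1], t[1:]):
        dt = t2 - t1
        g0, g1, ga = Gamma0(t1), Gamma1(t1), GammaA(t1)
        dnm = -g1 * nm + g0 * ns      # rate equation for n_-
        dns = -(g0 + ga) * ns         # rate equation for n_*
        dng = g1 * nm + ga * ns       # rate equation for n_g
        nm, ns, ng = nm + dt * dnm, ns + dt * dns, ng + dt * dng
        out.append((nm, ns, ng))
    return np.array(out)

# Hypothetical model rates, peaked at the turning point t = 0:
g0 = lambda t: 5.0 * np.exp(-t**2)    # resonant capture, Gamma_0
g1 = lambda t: 0.1                    # Gamma_1 = Gamma_n (natural decay only)
ga = lambda t: 0.05 * np.exp(-t**2)   # Auger de-excitation, Gamma_A

t = np.linspace(-5.0, 5.0, 20001)
n = integrate_rates(g0, g1, ga, t)    # columns: n_-, n_*, n_g
```

Since the right-hand sides of the three rate equations sum to zero, any consistent integrator preserves the total occupancy up to floating-point error.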
The rates~$\Gamma_0$ and~$\Gamma_1$ defined in \eqref{eq semi rate 0} and \eqref{eq semi rate 1} are equal
to the rates employed in Ref.~\onlinecite{Marbach2012Resonant}. Mathematical expressions for these two
rates, as obtained from the semi-empirical model, can thus be looked up in our previous
work~\cite{Marbach2012Resonant}. The Auger rate~$\Gamma_A$ introduced in \eqref{eq semi rate auger}, however,
has not been calculated before. Within the semi-empirical model it is given by
\begin{equation}
\begin{split}
& \Gamma_A(t) = \int_0^\infty \mathrm{d}q_r \, 2 \Re \Biggl\{ \frac{\bigl[ \Gamma_{\vec{k}\vec{q}}(t) \bigr]^*}{\hbar^2 (\Delta q)^3 (\Delta k)^3} \int_{t_0}^t \mathrm{d}t_1 \\
& \qquad \times \int_0^{\frac{\pi}{2}} \mathrm{d}q_\vartheta \int_0^{2\pi} \mathrm{d}q_\varphi \int_0^{k_F} \mathrm{d}k_r \int_0^\pi \mathrm{d}k_\vartheta \int_0^{2\pi} \mathrm{d}k_\varphi \\
& \qquad \times q_r^2 \sin(q_\vartheta) \, k_r^2 \sin(k_\vartheta) \Gamma_{\vec{k}\vec{q}}(t_1) \Biggr\} \;
\end{split}\label{eq gamma auger spherical}
\end{equation}
with
\begin{equation}
\Gamma_{\vec{k}\vec{q}}(t) = V_{\vec{k}\vec{q}}(t) \, e^{\frac{i}{\hbar} \int_0^t \mathrm{d}t_1 [ \varepsilon_0(t_1) + \varepsilon_{\vec{q}}(t_1) - \varepsilon_1(t_1) - \varepsilon_{\vec{k}} ]} \;.
\end{equation}
For a given Auger matrix element $V_{\vec{k}\vec{q}}(t)$ the multi-dimensional integral in~\eqref{eq gamma auger spherical}
can be calculated efficiently using the techniques and approximations outlined in Ref.~\onlinecite{Marbach2011Auger}.
The Auger matrix element, originating from the Coulomb interaction between the (active) projectile electron and
an electron from the solid, is in general subject to the dynamical response of the target electrons. For metallic
surfaces this is an important issue, as discussed for instance by \citeauthor{Alvarez1998Auger}~\cite{Alvarez1998Auger}.
It leads to the screening of the Coulomb interaction and should at least be accounted for by a statically
screened Coulomb potential. For the dielectric surfaces we are primarily interested in, however, screening is suppressed
by the energy gap. We therefore calculate $V_{\vec{k}\vec{q}}(t)$ from the bare Coulomb interaction. Thereby we somewhat
overestimate the strength of the Auger matrix element.
We will now seek an analytic solution to the coupled rate equations~\eqref{eq-rate-lowest}. As a starting point we first
take a step back and consider the isolated decay channels of resonant electron capture, resonant electron emission, and Auger
de-excitation. Singling out the individual reactions in~\eqref{eq-rate-lowest} we obtain
\begin{subequations}
\begin{align}
\frac{\mathrm{d}n_*^{(0)}(t)}{\mathrm{d}t} & = -\Gamma_0(t) \, n_*^{(0)}(t) \;, \label{eq rate * isolated}\\
\frac{\mathrm{d}n_-^{(1)}(t)}{\mathrm{d}t} & = -\Gamma_1(t) \, n_-^{(1)}(t) \;, \\
\frac{\mathrm{d}n_*^{(A)}(t)}{\mathrm{d}t} & = -\Gamma_A(t) \, n_*^{(A)}(t) \;.
\end{align}\label{eq isolated system}%
\end{subequations}
The superscripts~${(0)}$, ${(1)}$, and~${(A)}$ identify the isolated resonant electron capture, resonant electron emission,
and Auger de-excitation, respectively. Since the channels are isolated, each of the decay equations
\eqref{eq isolated system} comes with an analogous
equation for the species that is produced. For instance, accompanying~\eqref{eq rate * isolated} is the equation
\begin{equation}
\frac{\mathrm{d}n_-^{(0)}(t)}{\mathrm{d}t} = \Gamma_0(t) \, n_*^{(0)}(t) \;.
\label{eq rate - isolated}
\end{equation}
The time derivatives of~$n_*^{(0)}$ and~$n_-^{(0)}$ differ, however, only in sign. Hence,~$n_-^{(0)}$ is given through the
conservation of particles as~${n_-^{(0)} = 1 - n_*^{(0)}}$ (valid when the channels are isolated). Consequently, the
additional equations of type~\eqref{eq rate - isolated} do not contain any additional information and can be omitted. Using
the initial condition~${n_*^{(0)}(t_0)=n_*^{(A)}(t_0)=1}$ the system~\eqref{eq isolated system} can be solved
straightforwardly. The result is
\begin{subequations}
\begin{align}
n_*^{(0)}(t) & = e^{-\int_{t_0}^t \mathrm{d}t_1 \, \Gamma_0(t_1)} \;, \\
n_-^{(1)}(t) & = n_-^{(1)}(t^\prime) \, e^{-\int_{t^\prime}^t \mathrm{d}t_1 \, \Gamma_1(t_1)} \;, \\
n_*^{(A)}(t) & = e^{-\int_{t_0}^t \mathrm{d}t_1 \, \Gamma_A(t_1)} \;.
\end{align}\label{eq isolated solutions}%
\end{subequations}
Now we are in a position to use these occupancies to calculate the solution of the full, coupled system
of rate equations~\eqref{eq-rate-lowest}. First we consider the equation for $n_*$. Using the initial
condition ${n_*(t_0)=1}$, Eq.~\eqref{eq ode n*} can be solved by separation of variables and yields
\begin{equation}
n_*(t) = e^{-\int_{t_0}^t \mathrm{d}t_1 \bigl( \Gamma_0(t_1)
+ \Gamma_A(t_1) \bigr)} = n_*^{(0)}(t) \, n_*^{(A)}(t) \;. \label{eq n* solution}
\end{equation}
To solve~\eqref{eq ode n-} for the occupancy of the negative ion state we first multiply this equation by a
factor ${\exp\bigl(\int_{t_0}^t \mathrm{d}t_2 \, \Gamma_1(t_2)\bigr)}$ and rearrange the terms to obtain
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \biggl( n_-(t) \, e^{\int_{t_0}^t \mathrm{d}t_2 \, \Gamma_1(t_2)} \biggr) = \Gamma_0(t) \, n_*(t) \, e^{\int_{t_0}^t \mathrm{d}t_2 \, \Gamma_1(t_2)} \;.
\end{equation}
Relabeling then $t$ as $t_1$ and integrating the equation from ${t_1=t_0}$ to ${t_1=t}$ while minding the initial
condition ${n_-(t_0)=0}$ yields after a further rearrangement
\begin{equation}
\begin{split}
n_-(t) & = \int_{t_0}^t \mathrm{d}t_1 \, \Gamma_0(t_1) \, n_*(t_1) \, e^{-\int_{t_1}^t \mathrm{d}t_2 \, \Gamma_1(t_2)} \\
& = \int_{t_0}^t \mathrm{d}t_1 \biggl[ - \frac{\mathrm{d}n_*^{(0)}(t_1)}{\mathrm{d}t_1} \biggr] n_*^{(A)}(t_1) \, \frac{n_-^{(1)}(t)}{n_-^{(1)}(t_1)} \;.
\end{split}\label{eq n- solution}%
\end{equation}
Finally, the occupancy of the molecular ground state $n_g$, that is, the solution of Eq.~\eqref{eq ode ng}, is
given through the particle conservation property of the full system~\eqref{eq-rate-lowest},
\begin{equation}
n_g(t) = 1- n_*(t) - n_-(t) \;. \label{eq ng solution}
\end{equation}
Note that the molecular occupancies satisfying the combined rate equation scheme Eqs.~\eqref{eq n* solution},
\eqref{eq n- solution}, and \eqref{eq ng solution} are completely determined by the occupancies
$n_*^{(0)}$, $n_-^{(1)}$ and $n_*^{(A)}$. Moreover, when the Auger channel is disabled by
setting~${\Gamma_A(t) \equiv 0}$, Eqs.~\eqref{eq n* solution}, \eqref{eq n- solution} and \eqref{eq ng solution}
reduce to the rate equations derived by intuitive means for the isolated RCT channel~\cite{Marbach2012Resonant}.
Hence, the quantum-kinetic treatment justifies a posteriori the intuitive approach taken
by us in Ref.~\onlinecite{Marbach2012Resonant}.
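As a numerical consistency check (not part of the original derivation), the closed-form occupancies~\eqref{eq n* solution} and~\eqref{eq n- solution} can be compared against a direct integration of the coupled rate equations. The Gaussian toy rates below are hypothetical and serve only to exercise the formulas:

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 40001)
dt = t[1] - t[0]
G0 = 5.0 * np.exp(-t**2)        # toy resonant-capture rate Gamma_0
G1 = np.full_like(t, 0.1)       # Gamma_1 = Gamma_n (natural decay only)
GA = 0.05 * np.exp(-t**2)       # toy Auger rate Gamma_A

def cumint(f):
    """Cumulative trapezoidal integral of f from t_0 = t[0]."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))

# Isolated-channel occupancies:
n_star_0 = np.exp(-cumint(G0))
n_star_A = np.exp(-cumint(GA))
I1 = cumint(G1)

# Analytic solutions of the coupled system (product form for n_*,
# integral formula for n_-):
n_star = n_star_0 * n_star_A
n_minus = np.exp(-I1) * cumint(G0 * n_star * np.exp(I1))

# Direct Euler integration of the coupled rate equations for comparison:
nm = np.zeros_like(t)
ns = np.ones_like(t)
for i in range(len(t) - 1):
    nm[i + 1] = nm[i] + dt * (-G1[i] * nm[i] + G0[i] * ns[i])
    ns[i + 1] = ns[i] + dt * (-(G0[i] + GA[i]) * ns[i])
# The two routes agree to the accuracy of the integrators.
```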
We now turn to the spectrum of the emitted electron. While the evolution of the~$\vec{q}$ states has not been
considered explicitly in our quantum-kinetic calculation, the occupancy of these states can nevertheless be
extracted from the solution of~\eqref{eq-rate-lowest}.
From the reactions~\eqref{eq react rct} and~\eqref{eq react auger} it is obvious that the probability for
emitting an electron $n_{e}(t)$ is equal to the occupancy of the ground state $n_g(t)$ as every ground state
molecule must have resulted from the reaction chain and, hence, must be accompanied by an emitted electron.
Consequently, the evolution of~$n_{e}(t)$ is governed by Eq.~\eqref{eq ode ng}. Due to the image potential, however,
not every emitted electron can escape the surface. In particular, the escape is only possible when the emitted
electron's perpendicular kinetic energy~${\varepsilon_{q_z}^\infty=\varepsilon_{\vec{q}}^\infty \cos^2(q_\vartheta)}$
is higher than the absolute value of the image potential~$V_i$ at the position of emission. The latter can be
approximated by the position of the molecule's center of mass at the time of emission.
To incorporate the image potential effect, we adopt a two-step strategy. As a start we introduce the spectral
rates $\varrho_1(\varepsilon_{\vec{q}}^\infty,t)$ and $\varrho_A(\varepsilon_{\vec{q}}^\infty,t)$ which are not
restricted by the image potential effect by writing
\begin{equation}
\Gamma_{1/A}(t) = \int_0^\infty \mathrm{d}\varepsilon_{\vec{q}}^\infty \, \varrho_{1/A}(\varepsilon_{\vec{q}}^\infty,t) \;, \label{eq spectral rate defs}
\end{equation}
and afterwards we let~$\varrho_{1/A}\rightarrow\bar{\varrho}_{1/A}$ with
\begin{equation}
\begin{split}
& \bar{\varrho}_{1/A}(\varepsilon_{\vec{q}}^\infty,t) = \int_0^{\frac{\pi}{2}} \mathrm{d}q_\vartheta \int_0^{2\pi} \mathrm{d}q_\varphi \\
& \qquad \times \Theta\bigl( V_i(z_R(t)) + \varepsilon_{q_z}^\infty \bigr) \, \frac{\mathrm{d}^2 \varrho_{1/A}(\varepsilon_{\vec{q}}^\infty,t)}{\mathrm{d}q_\vartheta \mathrm{d}q_\varphi} \;.
\end{split}\label{eq spectral cut-off}%
\end{equation}
An explicit expression for the spectral RCT emission rate~$\varrho_1$ has been given in Ref.~\onlinecite{Marbach2012Resonant}.
The spectral Auger rate~$\varrho_A$ may be calculated from Eq.~\eqref{eq gamma auger spherical} by stripping out the
$q_r$~integral and multiplying the result by~$m_e/(\hbar^2 q_r)$. Introducing the spectral decomposition of the rates
in Eq.~\eqref{eq ode ng} and identifying $n_g$ with $\bar{n}_{e}$ we obtain
\begin{equation}
\begin{split}
\frac{\mathrm{d}\bar{n}_{e}(t)}{\mathrm{d}t} & = \int_0^\infty \mathrm{d}\varepsilon_{\vec{q}}^\infty \, \bar{\varrho}_1(\varepsilon_{\vec{q}}^\infty,t) \, n_-(t) \\
& \phantom{=} + \int_0^\infty \mathrm{d}\varepsilon_{\vec{q}}^\infty \, \bar{\varrho}_A(\varepsilon_{\vec{q}}^\infty,t) \, n_*(t) \;,
\end{split}
\end{equation}
where $\bar{n}_{e}$ denotes the probability for emitting an electron that can escape from the surface. Integrating over
the time argument with the initial condition ${\bar{n}_{e}(t_0)=0}$ and taking the derivative with respect
to~$\varepsilon_{\vec{q}}^\infty$ we find for the spectrum of the emitted electron at time~$t$
\begin{equation}
\begin{split}
\frac{\mathrm{d}\bar{n}_{e}}{\mathrm{d}\varepsilon_{\vec{q}}^\infty} \biggr|_t & = \int_{t_0}^t \mathrm{d}t_1 \, \bar{\varrho}_1(\varepsilon_{\vec{q}}^\infty,t_1) \, n_-(t_1) \\
& \phantom{=} + \int_{t_0}^t \mathrm{d}t_1 \, \bar{\varrho}_A(\varepsilon_{\vec{q}}^\infty,t_1) \, n_*(t_1) \;.
\end{split}\label{eq spectrum}%
\end{equation}
The secondary electron emission coefficient~$\gamma_e$, that is, the probability for having emitted an electron
after the collision is completed, is given by
\begin{equation}
\gamma_e = \bar{n}_{e}(\infty) \;. \label{eq gamma}
\end{equation}
The occupancies of the molecular pseudo-particle states \eqref{eq n* solution}, \eqref{eq n- solution},
\eqref{eq ng solution} and the spectrum of the emitted electron \eqref{eq spectrum} are the main result
of this work. The occupancies fully characterize the temporal evolution of the de-excitation of a
metastable \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecule at a surface when both the RCT and the
Auger channel are open. The ingredients required as an input, the occupancies arising from the isolated
processes~$n_*^{(0)}$, $n_-^{(1)}$, $n_*^{(A)}$ and the image potential adjusted spectral
rates~$\bar{\varrho}_1$, $\bar{\varrho}_A$, can be obtained from the quantum-kinetic calculation
and thus from the semi-empirical model.
Assuming the parameters of the model Hamiltonian to be a priori fixed, either by
experiment or by quantum-chemical calculations, there is no free parameter in the kinetic equations
which can be a posteriori adjusted to experimental data concerning the surface collision itself. This is
in contrast to Hagstrum's theory of secondary electron emission~\cite{Hagstrum1954Auger,Hagstrum1961Theory}
where the matrix elements (more precisely the combined density of states) are directly fitted to the outcome
of the surface scattering experiment.
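To illustrate how the spectrum~\eqref{eq spectrum} and the emission coefficient~\eqref{eq gamma} follow from the occupancies, the sketch below uses hypothetical separable spectral rates and omits the image-potential cut-off \eqref{eq spectral cut-off} (i.e.\ $\Theta \equiv 1$). Without the cut-off the area under the spectrum must reproduce the final ground-state occupancy $n_g(\infty)$, since every ground-state molecule is then accompanied by an escaping electron:

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 4001); dt = t[1] - t[0]
eps = np.linspace(0.0, 6.0, 601); de = eps[1] - eps[0]

def trap(y, dx, axis=0):
    """Trapezoidal rule along the given axis of a uniform grid."""
    return (y.sum(axis=axis)
            - 0.5 * (y.take(0, axis=axis) + y.take(-1, axis=axis))) * dx

# Hypothetical separable spectral rates rho(eps, t) = Gamma(t) * f(eps),
# with f normalized so the eps-integral reproduces the total rate.
f1 = np.exp(-(eps - 1.5)**2);        f1 = f1 / trap(f1, de)
fA = np.exp(-(eps - 2.8)**2 / 0.2);  fA = fA / trap(fA, de)
G1 = np.full_like(t, 0.1)            # Gamma_1 (natural decay only)
GA = 0.05 * np.exp(-t**2)            # toy Auger rate
rho1 = np.outer(G1, f1)              # rho_1 on the (t, eps) grid
rhoA = np.outer(GA, fA)

# Occupancies from a direct Euler integration of the rate equations:
G0 = 5.0 * np.exp(-t**2)
nm = np.zeros_like(t); ns = np.ones_like(t)
for i in range(len(t) - 1):
    nm[i + 1] = nm[i] + dt * (-G1[i] * nm[i] + G0[i] * ns[i])
    ns[i + 1] = ns[i] + dt * (-(G0[i] + GA[i]) * ns[i])

# Spectrum: time integral of rho_1 n_- + rho_A n_*; gamma_e is its area.
spectrum = trap(rho1 * nm[:, None] + rhoA * ns[:, None], dt, axis=0)
gamma_e = trap(spectrum, de)
n_g_inf = 1.0 - nm[-1] - ns[-1]      # ground-state occupancy after collision
```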
\section{Numerical results\label{sec results}}
\begin{figure}
\centerline{\includegraphics[scale=1]{sigma-k-kq.pdf}}
\caption{Variation of the real (solid line) and imaginary (dashed line) part of~${\widetilde{\sigma}_{\vec{k}}(t_1,t_2)}$
(upper panel) and~${\widetilde{\sigma}_{\vec{k}\vec{q}}(t_1,t_2)}$ (lower panel) as calculated from~\eqref{eq sigma definitions}
along the anti-diagonal~${t_1=-t_2=t}$. The molecule's axis was aligned perpendicular to the surface and the molecule's kinetic
energy was fixed to~${50\,meV}$. The behavior for negative time arguments~$t$ is omitted since the real (imaginary) part of both
functions is symmetric (anti-symmetric) with respect to the time diagonal. Note that the time~$t$ is dimensionless. It relates
to the physical time~$t_{\rm phys}$ via~$t_{\rm phys}=a_B t /(2 v_0)$.\label{fig sigma k kq}}
\end{figure}
In this section we present numerical results based on the semi-classical equations of Sec.~\ref{sec semiclassical}.
We consider the particular case of a diamond surface and restrict our investigations to normal incidence with a
molecular kinetic energy of~${50\,meV}$. The turning point of the molecule's trajectory is then $4.4$ Bohr
radii~\cite{Katz1995Temperature}. As in our previous work~\cite{Marbach2011Auger,Marbach2012Resonant} we treat
only the two principal orientations of the metastable \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecule: molecular
axis perpendicular to the surface and molecular axis parallel to the surface. Furthermore, we omit the surface
induced decay channel by setting~${\Gamma_1(t)=\Gamma_n}$. As our previous investigations showed, this is an
excellent approximation~\cite{Marbach2012Resonant}.
\begin{figure}
\centerline{\includegraphics[scale=1]{n2.pdf}}
\caption{Time dependence of the occupancy of the ground state molecule~$n_g$, the metastable molecule~$n_*$, and
the negative ion~$n_-$ in the parallel (solid line) and perpendicular (dashed line) orientation. The molecule's
kinetic energy was fixed to~${50\,meV}$. The dotted line represents half filling of the respective level.
The $y$-axis is logarithmic and the time is denoted in the dimensionless units of Fig.~\ref{fig sigma k kq}.
The curves were calculated from Eqs.~\eqref{eq n* solution}, \eqref{eq n- solution} and~\eqref{eq ng solution},
respectively. The incoming and outgoing branches of the trajectory are indicated at the top of the
diagram.\label{fig n2}}
\end{figure}
The numerics necessary to calculate the Auger matrix element $V_{\vec{k}\vec{q}}$ within the semi-empirical model has been
described in Ref.~\onlinecite{Marbach2011Auger}. Utilizing the fact that, for the low collision energies we
are interested in, the turning point of the molecule is far outside the solid, as well as the fact that the LCAO
wave functions are strongly localized on the molecule whereas the wave functions of the solid and the free electron
are bounded in the mathematical sense, we split off the time dependence of the matrix element and integrate the
rest by an interpolative grid-based Monte Carlo scheme. The tunneling matrix elements $V_{\vec{k}}$ and
$V_{\vec{q}}$ can be calculated within the semi-empirical model partly analytically and partly numerically.
The required tools are given in Ref.~\onlinecite{Marbach2012Resonant}.
First, we investigate the validity of the semi-classical approximation. As outlined above, approximations of the
form~\eqref{eq semiclassical example} are only acceptable if the functions~${\widetilde{\sigma}_{\vec{k}}(t_1,t_2)}$
and~${\widetilde{\sigma}_{\vec{k}\vec{q}}(t_1,t_2)}$ are sufficiently peaked on the time diagonal~${t_1=t_2}$. In
order to validate this assumption for our case, we plot in Fig.~\ref{fig sigma k kq} for the perpendicular
orientation the variation of these two functions along the anti-diagonal~${t_1=-t_2}$. The plots can be generated
directly from the definitions~\eqref{eq sigma definitions} and represent profiles with respect to the time diagonal.
For both functions ${\widetilde{\sigma}_{\vec{k}}(t_1,t_2)}$ and ${\widetilde{\sigma}_{\vec{k}\vec{q}}(t_1,t_2)}$
the real part has its maximum on the time diagonal whereas the imaginary part vanishes on the diagonal
itself but exhibits the highest value very close to it. When the time separation from the diagonal is enlarged, both
functions decrease in an oscillating way. For~${\widetilde{\sigma}_{\vec{k}}}$ the amplitude decreases to about~$10\%$
over a dimensionless time interval of~${\Delta t \approx 0.05}$. This relates to a physical time
span of~$\Delta t_{\rm phys} = a_B \Delta t / (2 v_0) \approx 2.25\,fs$ and to the motion of the molecule along
a distance of~${0.025\,a_B}$. For ${\widetilde{\sigma}_{\vec{k}\vec{q}}}$ the fall-off is even more drastic. The behavior
for shifted anti-diagonals as well as for parallel orientation is very similar and hence not shown here. Altogether,
we can conclude that with respect to the macroscopic motion of the molecule the functions~${\widetilde{\sigma}_{\vec{k}}}$
and~${\widetilde{\sigma}_{\vec{k}\vec{q}}}$ are indeed sufficiently peaked on the time diagonal. Thus, the
semi-classical approximation is valid in our case.
We now turn to the occupancies of the molecular pseudo-particle states, that is, the probabilities with which the
molecular configurations involved in the de-excitation process appear in the course of the scattering event. The
time dependent occupancy of the ground state~$n_g$, the metastable state~$n_*$, and negative ion state~$n_-$
can be calculated from Eqs.~\eqref{eq n* solution}, \eqref{eq n- solution} and~\eqref{eq ng solution},
respectively. The results are depicted in a semi-logarithmic plot in Fig.~\ref{fig n2}.
Inspection of the curves in Fig.~\ref{fig n2} reveals that even close to the surface the occupancy of the negative
ion state is rather low. Hence, the metastable projectile is almost immediately converted into a ground state
molecule and thus stays mostly neutral during the whole collision.
In Fig.~\ref{fig n2} this fact is recognizable at the crossing point of the~$n_*$ and~$n_g$ curves which occurs
at approximately half filling of both levels. The low occupancy of the negative ion state is caused by the
high efficiency of the natural decay channel~\cite{Marbach2012Resonant} and not by the Auger channel destroying
the metastable molecule, which is the generating species of the negative ion. In fact, it is the other way
around, and in order to substantiate this claim, we investigate below the relative efficiency of the RCT and
Auger channel by considering the respective reaction rates. Before we do that, let us note, however, that due
to the neutrality of the projectile along most of its path it would not gain much kinetic energy in front of a
charged surface. We therefore expect the semi-classical approximation, and hence the rate equations, to be
valid also in case the de-excitation occurred in front of a negatively charged plasma wall.
\begin{figure}
\centerline{\includegraphics[scale=1]{gammas.pdf}}
\caption{Variation of the rates of resonant electron capture~$\Gamma_0$ (upper panel) and Auger de-excitation~$\Gamma_A$
(lower panel) for the parallel (solid line) and perpendicular (dashed line) orientation at a kinetic energy of~${50\,meV}$.
The time is denoted in the dimensionless units of Fig.~\ref{fig sigma k kq}.\label{fig gamma 0 a}}
\end{figure}
Figure~\ref{fig gamma 0 a} shows the rates of resonant electron capture~$\Gamma_0$ and Auger de-excitation~$\Gamma_A$.
For both channels the rates are highest at the molecule's turning point (approximately $4.4~a_B$) which is the point
of smallest molecule-surface separation and strongest molecule-surface interaction.
When the molecule-surface distance is increased, the rates decrease
exponentially. The RCT channel's rate is about two orders of magnitude higher than the Auger channel's rate. Consequently,
the RCT channel captures surface electrons much more efficiently than the Auger channel. In fact, the RCT channel is so
effective in capturing electrons that it undercuts the Auger channel by destroying its starting basis, the metastable
state. As a result, in the combined two-channel system the Auger channel's performance is significantly diminished as
compared to the isolated Auger reaction.
This conclusion may be verified by considering the term in the rate equations which is responsible for the production
of the ground state molecule by an Auger de-excitation. It is given by
(see Eqs.~\eqref{eq ode ng} and~\eqref{eq n* solution})
\begin{equation}
\Gamma_A(t) \, n_*(t) = \Gamma_A(t) \, n_*^{(A)}(t) \, n_*^{(0)}(t) \;. \label{eq ng auger term}
\end{equation}
Here the factor~${n_*^{(0)}(t)}$ is only present in the combined two-channel system but not in the isolated Auger
system. Without explicit proof but based on numerical observations we note that the term~${n_*^{(0)}(t)}$ is almost
identical to the combined
occupancy~${n_*(t)}$ depicted in Fig.~\ref{fig n2}. Hence, in the combined system the Auger channel's ground state
production term~\eqref{eq ng auger term} is strongly suppressed already in the incoming branch of the trajectory.
\begin{figure}
\centerline{\includegraphics[scale=1]{spectrum.pdf}}
\caption{Energy spectrum of the emitted electron in parallel (upper panel) and perpendicular (lower panel) orientation
calculated from~\eqref{eq spectrum} at~${t=\infty}$ for a kinetic energy of~$50\,meV$. In both panels the dotted line
specifies the isolated RCT spectrum (obtained by setting~${\bar{\varrho}_A \equiv 0}$), the dashed line denotes the isolated
Auger spectrum (obtained by setting~${\bar{\varrho}_1 \equiv 0}$), and the solid line represents the combined
two-channel spectrum.\label{fig spectrum}}
\end{figure}
Finally, we turn to the energy spectrum of the emitted electron. Figure~\ref{fig spectrum} depicts the
emission spectrum at $t=\infty$ for the combined two-channel reaction as well as for the isolated reaction
channels. The latter can be obtained by setting in~\eqref{eq spectrum} ${\bar{\varrho}_A \equiv 0}$ or
${\bar{\varrho}_1 \equiv 0}$. As can be seen, the isolated RCT spectra exhibit a strong peak at about~$1.5\,eV$ and
slowly drop off for higher energies. The isolated Auger spectra, on the other hand, increase monotonically until
approximately~$2.8\,eV$ and then immediately fall off. The low energy cut-off of all curves is due to the
trapping of the emitted electron in the image potential close to the surface when its perpendicular energy
is too low. The combined spectra almost equal the respective isolated RCT spectra. Only in the range from~${1.5\,eV}$
to~${2.5\,eV}$ are the combined spectra slightly increased with respect to the RCT curves. This minor enlargement
is due to the Auger channel and supports our previous finding that the RCT channel dominates the Auger channel.
The combined spectra in Fig.~\ref{fig spectrum} are different from the simple addition of the isolated spectra.
This behavior is caused by the unified treatment of the RCT and Auger reaction channels. The effect would be
even more pronounced for molecular species forming stable negative ions. Here the resonant electron emission
would be almost completely blocked as the surface induced decay is always very weak. The resonant electron
capture, however, would be still very efficient in destroying the initial species. Consequently, the spectrum
of the emitted electron would resemble the Auger spectrum in shape but would be strongly decreased in magnitude.
The secondary electron emission coefficients are given by the area beneath the curves in
Fig.~\ref{fig spectrum} and are summarized in Table~\ref{tab gamma}.
In accordance with our previous observations the emission coefficients are not changed significantly by the inclusion
of the Auger channel. A similar result was found by \citeauthor{Stracke1998Formation}~\cite{Stracke1998Formation} for
\ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ de-exciting at a tungsten surface. Their experimental measurements imply that only
about 10\% of the secondary electron emission coefficient is made up by the Auger channel.
\section{Conclusions\label{sec conclusion}}
We constructed in this work a semi-empirical generalized Anderson-Newns model for secondary electron emission
due to de-excitation of metastable~\ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecules at dielectric surfaces. The
model treats Auger de-excitation and the two-step resonant charge transfer
process, where the \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}\ ion acts as a relay state, on an equal footing.
It reduces the molecular projectile to a two-level system representing the molecular
orbitals which change their occupancies during the reaction and treats the surface as a simple step
potential confining the electrons of the solid.
By construction, the semi-empirical model is not restricted to a particular projectile-target combination.
Having applications of the model to charge-transferring processes at plasma walls in mind, where
a great variety of different projectile-target combinations occurs, we consider this as a real advantage.
Another advantage is that the semi-empirical model separates the many-body theoretical description of the
non-interacting projectile and target from the quantum-kinetic treatment of the scattering process. The
former is simply encapsulated in the parameters of the model Hamiltonian and the latter is performed by
Green functions. This is particularly advantageous in cases where the surface scattering event is studied
primarily because of its connection to the physics of quantum impurities.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{c|d|d}
& \text{$\gamma_e^\parallel$} & \text{$\gamma_e^\perp$} \\\hline
RCT & 0.16685 & 0.15873 \\
Auger & 0.02760 & 0.04921 \\
RCT \& Auger & 0.16754 & 0.16335
\end{tabular}
\end{ruledtabular}
\caption{Secondary electron emission coefficients in parallel ($\gamma_e^\parallel$) and perpendicular ($\gamma_e^\perp$)
orientation at a kinetic energy of~${50\,meV}$.\label{tab gamma}}
\end{table}
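The smallness of the Auger contribution can be read off directly from Table~\ref{tab gamma}. The following snippet computes the fractional Auger contribution to the combined emission coefficient from the tabulated values:

```python
# Values taken from Table (tab gamma) at a kinetic energy of 50 meV:
rct  = {"par": 0.16685, "perp": 0.15873}   # isolated RCT channel
both = {"par": 0.16754, "perp": 0.16335}   # combined RCT & Auger

# Fraction of the combined coefficient contributed by the Auger channel:
auger_frac = {k: (both[k] - rct[k]) / both[k] for k in rct}
# parallel orientation: ~0.4%; perpendicular orientation: ~2.8%
```

Both fractions lie in the "few percent" range quoted in the conclusions.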
For the semi-empirical model to work, a method was required to assign and control the energies of the
two-level system in accordance with the reaction channels, that is, to make the two-level system describe
all three molecular configurations involved in the de-excitation process: the metastable
\ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}\ molecule, the negative ion \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}, and the
molecular ground state \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)}. We showed how this can be done with projection
operators and auxiliary bosons. As a result, both the resonant tunneling and the Auger channel could
be cast into a single model Hamiltonian which, with the help of pseudo-particle operators, could then
be made amenable to a diagrammatic quantum-kinetic calculation. Using the self-consistent
non-crossing approximation for the self-energies and a saddle-point approximation for the time integrals
in the self-energies we finally derived from the Dyson equations for the propagators of the molecular
pseudo-particles a set of rate equations for the probabilities with which the molecular configurations
contributing to the de-excitation process can be found in the course of the scattering event. Without
the Auger channel, the system of rate equations reduces to the one postulated by us before on intuitive
grounds for the RCT channel alone~\cite{Marbach2012Resonant}. The present work justifies therefore this
reasoning a posteriori.
For the particular case of a diamond surface we verified the validity of the semi-classical approximation and
investigated for a collision energy of $50\, meV$ the interplay of the resonant tunneling and the Auger channel.
In particular, we analyzed the temporal evolution of the probabilities with which the projectile is to be
found in the \ensuremath{\mathrm{N_2}(\NitrogenDominantMetastableStateTermSymbol)}, the \ensuremath{\mathrm{N_2^-}(\NitrogenNegativeIonResonanceTermSymbol)}, or the \ensuremath{\mathrm{N_2}(\NitrogenGroundStateTermSymbol)}\
state and explicitly calculated the rates for electron capture due respectively to tunneling and Auger
de-excitation.
We also obtained the spectrum of the emitted electron and the secondary electron emission coefficient $\gamma_e$,
which are the two quantities of main importance for the modeling of gas discharges. Our results indicate
that, for a diamond surface and a kinetic energy of $50\,meV$, the resonant tunneling channel clearly dominates the
Auger channel. The contribution of the Auger channel to the secondary electron emission coefficient lies only in
the range of a few percent. The overall $\gamma_e$ coefficient is on the order of $10^{-1}$, in agreement with
what has to be typically assumed to make kinetic simulations of dielectric barrier discharges reproduce the
properties of the discharge.
With minor modifications the semi-empirical model and its quantum-kinetic handling, leading to the easy-to-use
set of rate equations, can be adapted to other plasma-relevant charge-transferring surface collisions
as well. At least for low-energy collisions, where the projectile velocities are low enough to allow for a
reduction of the full double-time kinetic equations to a set of simple rate equations, we can thus hope
to replace the rules of thumb which are often needed to characterize secondary electron emission
due to neutral and charged heavy plasma species hitting the plasma wall by plausible quantitative estimates.
\begin{acknowledgments}
Johannes Marbach was funded by the federal state of Mecklenburg-Western Pomerania through a postgraduate scholarship.
In addition this work was supported by the Deutsche Forschungsgemeinschaft through the Transregional Collaborative
Research Center SFB/TRR24.
\end{acknowledgments}
\section{Introduction}
Quantum gauge theories can be described using the holonomies along the
edges of a regular lattice as basic configuration observables.
This idea was introduced by Wilson \cite{wilson} in the '70s and is now
the basis of the modern lattice gauge theory.
In diffeomorphism invariant gauge theories
(like gravity using Ashtekar variables \cite{l,ashtekar}
or Yang-Mills coupled to gravity), the use of
Wilson loops as primary observables of the theory led to the
discovery of an interesting relation between quantum gauge theories
and knot theory \cite{lee-carloLOOP}.
Twenty years after the early works, the notion of Wilson loops was
extended and serves as a rigorous foundation of quantum gauge field
theory \cite{2dYMSU2}. The modern approach rests on the following idea:
Begin by considering ``the family of all the possible lattice gauge
theories'' defined on graphs whose edges are embedded in the base space.
Then use a projective structure to organize the repeated information
from graphs that share edges.
For a manageable theory, the precise definition
of ``the family of all the possible lattice gauge theories'' had to
avoid situations where two different edges intersect each other an infinite
number of times. The first solution to this problem \cite{analytic}
led to the framework referred to in this article as the
analytic category; by restricting the set of
allowed graphs $\Gamma_\o $ to contain only graphs with
piecewise analytic edges,
one acquires a controllable theory. In the analytic
category the diffeomorphisms are restricted to be analytic accordingly.
After a subtle analysis,
it was possible to sacrifice part of the simplicity of the results of the
analytic case and extend the theory to the smooth category \cite{smooth}.
While the foundations were solidifying, the theory also produced its first
kinematical results for quantum gravity (the canonical
quantization of gravity expressed in terms of Ashtekar variables).
Regularized expressions for operators measuring
the area of surfaces and volume of regions were developed \cite{geoperators}.
These operators
were also diagonalized and their eigenvectors were found to be labeled by
spin networks (one-dimensional objects). In other words, a picture of
polymer-like geometry arises from quantum gravity \cite{polymer}.
A polymer-like geometry is predicted by a theory whose foundations
require space to be an analytic manifold. This peculiar
situation
was the main motivation for the work presented in this article.
In this article we present two quantum models:
the combinatorial and the piecewise linear (PL) categories.
The intention is to keep a simple framework that minimizes background
structure and is suited to a polymer-like geometry, but
that can still recover the classical macroscopic theory.
Both models are based on the projective techniques used
for the analytic and smooth categories; again, the difference lies in the
family of graphs $\Gamma$ considered and the corresponding ``diffeomorphisms.''
In the piecewise linear category we fix a piecewise linear
structure in the space manifold to specify the elements of the family
of graphs $\Gamma_{\rm PL}$ that define the Hilbert space. A piecewise
linear structure on a manifold $\S$ can be specified by a division of the
manifold into cells with a fixed affine structure (flat connection). Also
it can be specified by a triangulation, that is, a fixed
homeomorphism $\varphi : \S \to \S_0$ where $\S_0 \subset R^{2n+1}$
is an $n$-dimensional polyhedron with a fixed decomposition into simplices.
An element of $\Gamma_{\rm PL}$ is a graph
whose edges are piecewise linear according to the fixed PL structure.
This seems to be far from a background-free situation,
but a PL structure is much weaker than an analytic structure;
the same PL structure can be specified by any refinement of the
original triangulation.
Furthermore, we will prove that
in three (or less) dimensions different choices of PL
structures yield unitarily equivalent representations of the algebra
of physical observables.
This result is of particular interest for $3+1$ ($2+1$ or $1+1$)
quantum models of pure gravity or of gravity coupled to Yang-Mills fields.
To avoid confusion, we stress that the piecewise linear
spaces used in this approach are not directly related to the ones used in
Regge calculus. In simplified theories of gravity, like $2+1$ gravity and
$BF$ theory, the lattice dual to the one induced by one of our piecewise
linear spaces can be successfully related to a Regge lattice \cite{2+1-BF}.
On the other hand, our approach contains a treatment based on cubic lattices
as a particular case; the difference with the usual lattice gauge theory is
that the continuum limit is taken by considering every lattice instead of
just one.
The manifestly combinatorial model has two main ingredients:
simplicial complexes that describe geometry in combinatorial fashion,
and a refinement mechanism that makes it capable
of describing field theories.
If we use a simplicial complex as the starting point of our combinatorial
approach, the resulting model would
be appropriate for describing topological field theories, but we want to generate
a model for gauge theories with local degrees of freedom. A way to achieve
this goal is to replace physical space (the base space) with a sequence of
simplicial complexes $K_0, K_1, \ldots $ that are finer and finer. Our
combinatorial model for quantum gauge theory is based on the family of
graphs defined using our combinatorial representation of space.
Even though the PL and the combinatorial categories
are closely related, the resulting kinematical Hilbert spaces
${\cal H}_{\rm kin_{PL}}$ and ${\cal H}_{\rm kin_C}$ are dramatically
different. While the combinatorial Hilbert space ${\cal H}_{\rm kin_C}$
is separable
(admits a countable basis), ${\cal H}_{\rm kin_{PL}}$ (like the Hilbert
space constructed from the analytic category) is much bigger.
Physically, what we need is a Hilbert space to represent
physical (gauge and ``diffeomorphism'' invariant) observables; such a
Hilbert space can be constructed by ``averaging'' the states of the
kinematic Hilbert space to produce physical states.
An encouraging result is that
the two models produce unitarily equivalent representations of the algebra
of physical observables in the
naturally isomorphic separable Hilbert spaces
${\cal H}_{\rm diff_{PL}} , {\cal H}_{\rm diff_C}$.
Separability in the combinatorial case is no
surprise, and that both spaces of physical states (PL and combinatorial)
are isomorphic follows from the fact that
every knot-class of piecewise linear graphs has a representative that
fits in our combinatorial representation of space.
Two aspects of the loop approach to gauge theory are enhanced in its
combinatorial version.
On the mathematical-physics side, other approaches to quantum gravity
coming from topological quantum field theory \cite{crane-lee}
are much closer to the combinatorial category than they are to
the analytic or smooth categories.
On the practical side, the loop approach to quantum gauge theory is at least
as attractive; a powerful computational technique comes built into
this approach. Given any state in the Hilbert space of the continuum we can
express it, to any desired accuracy, as a finite linear combination of
states that come from the Hilbert space of a lattice gauge theory.
Therefore, the matrix elements of every bounded operator can be computed,
to any desired accuracy, in the Hilbert space of a lattice gauge theory.
In this respect, the combinatorial picture presented in this article is
favored because it is best suited for a computer implementation.
We organize this article as follows.
Section~\ref{review} reviews the general procedure to construct the
kinematical Hilbert space in the continuum starting
from a family of lattice gauge
theories. Then, in section~\ref{p-lspaces},
we carry out the procedure in the combinatorial and PL frameworks.
In section~\ref{homeos}, we construct the physical Hilbert space.
We treat separately the PL and combinatorial categories. Then we prove
that the combinatorial and PL frameworks provide unitarily equivalent
representations of the algebra of physical observables.
We also prove that the mentioned algebra of physical observables
is independent of the background PL structure when the dimension of the space
manifold is three or less. A summary, an analysis of some
problems from the combinatorial perspective and a comparison with the analytic
category are the subjects of the concluding section.
\section{From Quantum Gauge Theory in the Lattice to the Continuum
Via the Projective Limit: A Review} \label{review}
A connection on a principal bundle is
characterized by the group element that it assigns to
every possible path in the base space.
Historically, this simple observation
led to treating the set of holonomies for all
the loops of the base space as the basic configuration
observables to be promoted to operators.
Now we start the
construction of a kinematical Hilbert space for quantum gauge theories.
To avoid extra complications,
we only treat cases with a compact base space $\S$ and
we restrict our attention to trivializable
bundles over $\S$. For convenience, we start with a fixed trivialization.
In the modern approach (Baez, Ashtekar et al \cite{analytic})
the concept of paths or loops has been extended to that of
graphs $\c \subset \S$ whose edges, in contrast with their
predecessors, are allowed to intersect.
A {\em graph}
$\c$ is, by definition, a {\em finite} set $E_\c$ of oriented edges
and a set $V_\c$ of vertices satisfying the following conditions:
\begin{itemize}
\item $e \in E_\c$ implies $e^{-1} \in E_\c$.
\item The vertex set is the set of boundary points of the edges.
\item The intersection set of two different edges
$e_1, e_2 \in E_\c$ ($e_1 \neq e_2 , e_1 \neq e_2^{-1}$) is a subset of the
vertex set.
\end{itemize}
Generally an edge $e \in E_\c$ is considered to be an equivalence class of
non-self-intersecting curves under orientation-preserving
reparametrizations. Formally,
$e:=[ e^\prime (I) \subset \S ]$ such that $e^\prime (I )\approx I$,
where we denoted the unit interval by $I=[0,1]$.
Composition of edges $e, f$ is
defined if they intersect only at the final point of the first edge and the
initial point of the second edge,
$e^\prime (I)\cap f^\prime (I) = \{ e^\prime (1)\} = \{ f^\prime (0)\}$.
Then the composition is defined by
$f\circ e:=[f^\prime \circ e^\prime (I)]$; and
given an edge $e:=[ e^\prime ]$ the edge defined by paths with the opposite
orientation is denoted by $e^{-1}:=[ e^{\prime -1} ]$.
The idea of considering ``every possible path'' in the base space to
construct the space of generalized connections has to be made precise.
Different choices in the class of edges that form the
family of graphs considered lead to the different
categories --analytic, smooth, PL and combinatoric--
of this general approach to
diffeomorphism invariant quantum gauge theories. We denote a generic family
of graphs by $\Gamma$, and the analytic, smooth, PL and combinatoric families by
$\Gamma_\o ,\Gamma_\infty$, $\Gamma_{\rm PL}$ and $\Gamma_C$.
A connection on a graph
assigns a group element to each of the graph's $2N_1$ oriented edges. Therefore,
we can identify the space of connections ${\cal A}_\c$ of graph $\c$
with $G^{N_1}$. An element $A \in {\cal A}_\c$ is represented by
$(A(e_1),A(e_1^{-1})=A(e_1)^{-1},\ldots,A(e_{N_1}),
A(e_{N_1}^{-1})=A(e_{N_1})^{-1})$, where $A(e_i) \in G$.
The collection of the spaces ${\cal A}_\c$ for every graph
$\c \in \Gamma$ gives an {\em over-complete} description of
the space of generalized connections in the category specified by $\Gamma$.
For example, $\Gamma_\o$ determines the analytic category and
$\Gamma_C$ specifies the combinatoric category.
It is possible to organize all the repeated information by means of a
projective structure.
We say that graph $\c$ is a refinement of graph $\c^\prime$
($\c\geq \c^\prime$) if the edges of $\c^\prime$ are ``contained'' in
edges of $\c$; more precisely, if $e\in \c^\prime$ then either
$e=e_1$ or $e=e_1\circ \ldots \circ e_n$ for some
$e_1, \ldots , e_n \in \c$.
Given any two graphs
related by refinement $\c\geq \c^\prime$ there is a projection
$p_{\c^\prime\,\c}:{\cal A}_{\c}\rightarrow{\cal A}_{\c^\prime}$
\be
(A(e_1),A(e_2),\ldots,A(e_{N_1}))
\stackrel{p_{\c^\prime \c}}{\longrightarrow}
(A^\prime(e)=A(e_2)A(e_1),A^\prime(e_3),\ldots,A^\prime(e_{N_1}))
\ee
where $e=e_1\circ e_2$, $e\in \c^\prime$, $e_1,e_2 \in \c$.
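To make the projection concrete, the following sketch (Python with NumPy; the function names, the dictionary encoding of a connection, and the choice of plane rotations as stand-in group elements are illustrative assumptions, not part of the formalism) composes the holonomies assigned to the fine edges into the holonomy of the coarse edge, using the product order $A(e_2)A(e_1)$ of the display above:

```python
import numpy as np
from functools import reduce

def rot(theta):
    """A sample group element: a rotation in SO(2) (an illustrative choice
    of structure group; nothing below depends on it)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def project(A_fine, splits):
    """Project a connection on a fine graph to a coarser graph.

    A_fine: dict mapping fine-edge names to group elements.
    splits: dict mapping each coarse edge to the list of fine edges it is
            composed of, in the order they are traversed; the coarse
            holonomy is the reversed product A(e_n) ... A(e_2) A(e_1).
    """
    return {e: reduce(lambda acc, f: A_fine[f] @ acc, fine, np.eye(2))
            for e, fine in splits.items()}

# Fine graph: the coarse edge e is split into e1 followed by e2.
A_fine = {"e1": rot(0.3), "e2": rot(0.5)}
A_coarse = project(A_fine, {"e": ["e1", "e2"]})

# Holonomies compose: A(e) = A(e2) A(e1); for plane rotations the angles add.
assert np.allclose(A_coarse["e"], rot(0.8))
```

For plane rotations the product simply adds the angles, which makes the consistency condition $p_{\c^\prime\,\c}A_\c=A_{\c^\prime}$ easy to verify by inspection.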
The projection map and the refinement relation have two properties
that will allow us to define $\overline{\cal A}$ as ``the space of
connections of the finest lattice.''
First, we can easily check that $p_{\c\,\c^\prime}\circ p_{\c^\prime\,
\c^{\prime\prime}}=p_{\c\,\c^{\prime\prime}}$.
Second, equipped with the refinement relation ``$\geq$'', the set
$\Gamma$ is a partially ordered, directed set; i.e. for all $\c$,
$\c^\prime$ and $\c^{\prime\prime}$ in $\Gamma$ we have:
\be
\c\geq \c\;\;\;;\;\;\;\;\c\geq \c^\prime
\;\;\;{\rm and}\;\;\;\c^\prime\geq \c
\Rightarrow \c=\c^\prime\;;\;\;\;\;\c\geq
\c^\prime\;\;\;{\rm and}\;\;\;
\c^\prime\geq \c^{\prime\prime}\Rightarrow \c\geq
\c^{\prime\prime}\;;
\ee
and, given any $\c^\prime,\c^{\prime\prime}\in \Gamma$, there exists
$\c\in \Gamma$ such that
\be
\c\geq \c^\prime\;\;\;\;{\rm and}\;\;\;\;\c\geq \c^{\prime\prime}.
\ee
This last property, that $\Gamma$ is directed, is the only non-trivial
property; it will be proved for the PL and the
combinatoric categories in the next section.
The {\em projective limit} of the
spaces of connections of all graphs yields the space of
{\em generalized connections} $\overline{\cal A}$
\be
\overline{\cal A}:=\left\{(A_\c)_{\c\in \Gamma}\in
\prod_{\c\in \Gamma}{\cal A}_\c\;\;:\;\;\c^\prime\geq \c
\Rightarrow p_{\c\,\c^\prime}A_{\c^\prime}=A_{\c}\right\}.
\ee
That is, the projective limit is contained in the Cartesian product of
the spaces of connections of all graphs in $\Gamma$, subject to the
consistency conditions stated above. There is a canonical projection $p_\c$
from the space $\overline{\cal A}$ to the spaces ${\cal A}_\c$ given by,
\be
p_\c\;\;:\;\overline{\cal A}
\rightarrow
{\cal A}_\c,\;\;\;p_\c((A_{\c^\prime}) _{\c^\prime\in \Gamma})
:=A_\c.
\ee
With this projection, functions $f_\c$
defined on the space ${\cal A}_\c$ can be pulled-back to
${\rm Fun}(\overline{\cal A})$.
Such functions are called {\em cylindrical functions}. The sup norm
\be
|| f || _\infty = \sup_{A\in {\cal A}_\c} | f(A) |
\ee
can be used to complete the space of cylindrical functions. As a result we
get the Abelian $C^*$ algebra usually denoted by
${\rm Cyl}(\overline{\cal A})$;
to simplify the notation, in the rest of the article we will denote this
algebra by ${\rm Cyl}_\Box$, where
$\Box =\o ,\infty ,{\rm PL}, C$
labels the family of graphs defining the space of cylindrical functions
considered.
The uniform generalized measure $\m_0: {\rm Cyl}_\Box \to C$,
sometimes called the Ashtekar-Lewandowski measure, is
induced on $\overline{\cal A}$ by the uniform (Haar) measure on the spaces
${\cal A}_\c= G^{N_1}$. Other gauge invariant measures are available;
when they are diffeomorphism invariant they induce ``generalized
knot invariants'' (see \cite{genknot}).
Finally, we define the kinematical Hilbert space to
be the completion of ${\rm Cyl}(\overline{\cal A})$ in the norm induced by
the (strictly positive) generalized measure $\m_0$
\be
{\cal H}_{\rm kin}:=L^2(\overline{\cal A},d\m_0).
\ee
This construction yields a cyclic representation of the algebra of cylindrical
functions, the so called {\em connection representation}.
Given a function defined on a lattice $\c$, for example the
trace of the holonomy $T_\a$ along a loop $\a$ contained in $\c$,
the corresponding operator $\hat{T}_\a$
will act by multiplication on states $\Psi_\c \in {\cal H}_{\rm kin}$:
\be
(\hat{T}_\a\cdot\Psi_\c)(\bar{A}):=T_\a(\bar{A})\Psi_\c(\bar{A}).
\ee
A complete set of Hermitian momentum operators on the Hilbert space
$L^2(G_e,d\m_{\rm Haar})$ of a graph with a single edge $e$ comes from
the left-invariant $L_e(f)$ and right-invariant $R_e(f)$ vector fields on $G_e$,
labeled by $f \in {\rm Lie}(G_e)$. These momentum operators are
compatible with the projective structure \cite{dgeoAL}; thus, the set of
momentum operators
\be
X_{\a ,e}(f) =
\left\{ \begin{array}
{r@{\quad }l}
L_e(f) & \hbox{if edge }e \hbox{ goes out of vertex }\a \\
-R_e(f) & \hbox{if edge }e \hbox{ comes into vertex }\a
\end{array} \right. \label{x}
\ee
is a complete set of Hermitian momentum operators on
${\cal H}_{\rm kin}$ when we use the generalized measure $\m_0$.
In regularized expressions of operators involving the triad,
the place of the triad is taken by the vector fields $X$; therefore, the
measure $\m_0$ incorporates the physical reality conditions.
Our main goal is to construct a Hilbert space where we can represent the
algebra of physical (gauge and diffeomorphism invariant) observables.
As is customary, we will proceed in steps; in this section we
deal with the issue of gauge invariance
and in the next with that of diffeomorphism invariance.
If we had chosen to generate the space of states invariant under both
symmetries simultaneously we would arrive at the same result.
A finite gauge transformation takes the
holonomy $A_{e_1}$ to $g(\a)A_{e_1}g(\b)^{-1}$ (where edge $e_1$ goes from
vertex $\a$ to vertex $\b$). Then a quantum gauge transformation
is given by the unitary transformation
\be
G(g) \Psi_\c(A_{e_1}, \ldots A_{e_n}) :=
\Psi_\c(g(\a)A_{e_1}g(\b)^{-1}, \ldots g(\m)A_{e_n}g(\n)^{-1})
\ee
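The gauge invariance of Wilson loops under this transformation can be checked numerically. The sketch below (Python with NumPy; random invertible $2\times 2$ matrices stand in for group elements, and all names are our own illustrative choices) applies $G(g)$ to the edge holonomies of a triangular loop and verifies that the trace of the loop holonomy is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_gl2():
    """A random invertible 2x2 matrix standing in for a group element."""
    while True:
        m = rng.standard_normal((2, 2))
        if abs(np.linalg.det(m)) > 0.1:
            return m

def loop_trace(A):
    """Trace of the holonomy of the loop v0 -> v1 -> v2 -> v0.

    The composition order is chosen so that the transformation law
    g(start) A g(end)^{-1} makes the product telescope."""
    return np.trace(A[0] @ A[1] @ A[2])

def gauge(A, g):
    """Apply G(g): the edge from vertex a to b becomes g(a) A g(b)^{-1}."""
    n = len(A)
    return [g[i] @ A[i] @ np.linalg.inv(g[(i + 1) % n]) for i in range(n)]

A = [rand_gl2() for _ in range(3)]   # edge holonomies of a triangle
g = [rand_gl2() for _ in range(3)]   # a gauge transformation at each vertex

# The g's cancel pairwise along the loop; the leftover conjugation
# by g(v0) drops out under the trace, so T_alpha is gauge invariant.
assert np.isclose(loop_trace(A), loop_trace(gauge(A, g)))
```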
Gauge transformations are just generalizations of right and left translations
in the group. This implies that they are generated by left and right invariant
vector fields. Given a graph $\c$, $C_\a(f)$ generates
gauge transformations at vertex $\a$. Therefore gauge invariance of
$\Psi_\c=\Psi_\c(A_{e_1}, \ldots A_{e_n})$ at vertex $\a$ means
that it lies in the kernel of the Gauss constraint
\be
C_\a(f) \cdot \Psi_\c :=\sum_{e\to \a}X_{\a ,e}(f)\cdot \Psi_\c=0 \quad ,
\ee
where the sum is taken over all the edges $e$ that start at vertex $\a$.
Because it is a real linear combination of the
momentum operators (\ref{x}), the Gauss constraint is essentially
self-adjoint on ${\cal H}_{\rm kin}$.
We could construct the space of connections modulo gauge
transformations of a graph ${\cal A}_\c / {\cal G}_\c$.
Then, using the same projective machinery, we could construct the Hilbert
space $L^2(\overline{\cal A/G},d\n_0)$. It is easy to see that the space of
gauge invariant functions of $L^2(\overline{\cal A},d\m_0)$ is naturally
isomorphic to ${\cal H}^{\prime}_{\rm kin}=L^2(\overline{\cal A/G},d\n_0)$
if the measure $\n_0$ is the one induced by $\m_0$. The space
${\cal H}^{\prime}_{\rm kin}$ of gauge
invariant functions is spanned by spin network states. Spin network
states are cylindrical functions
$S_{\vec{\c} , j(e), c(v)}(A)$ labeled by an {\em oriented graph}
(a graph $\c$ plus a choice of either $e\in E_\c$ or
$e^{-1} \in E_\c$, for every edge in $\c$, to belong to the
oriented graph $\vec{\c}$) whose edges and vertices are colored. The
``colors''
$j(e)$ on the edges $e\in E_\c$ assign a non trivial irreducible
representation of the gauge group to the edges.
And the ``colors'' $c(v)$ on the vertices
$v\in V_\c$ assign to each vertex a
gauge invariant contractor (intertwining operator)
that has indices in the representations determined by the colored
edges that meet at the vertex. A spin network state is defined by
\be \label{S}
S_{\vec{\c} , j(e), c(v)}(A) =
\prod_{e\in E_{\vec{\c}}} \p _{j(e)}[A(e)] \cdot
\prod_{ v\in V_{ \vec{\c} } } c(v) \quad ,
\ee
where `$\cdot$' stands for contraction of all the indices of the
matrices attached to the edges with the indices of the intertwiners
attached to the vertices. In the inner product that the uniform measure
$\m_0$ induces in ${\cal H}^{\prime}_{\rm kin}$ two spin network states are
orthogonal if they are not labeled by the same (unoriented) graph or
if their edge colors are different.
For calculational purposes it is convenient to choose an orthonormal basis for
${\cal H}^{\prime}_{\rm kin}$ formed by normalized spin network states with
special labels for the intertwining operators assigned to the vertices; see
\cite{rovelli-depietri}.
\section{PL and combinatoric categories}
\label{p-lspaces}
In this section we construct two quantum models using the general framework
outlined above. First the family of piecewise linear (PL) graphs is
introduced. Then we prove that it is a partially ordered, directed set.
As a result, the algebra of functions of the connection defined by the
PL graphs has a cyclic representation in the Hilbert space
${\cal H}_{\rm kin_{PL}}$. The second subsection briefly reviews some
elements of combinatoric topology while constructing the
family of combinatoric graphs. In this case, the resulting algebra of
functions is represented in the separable Hilbert space
${\cal H}_{\rm kin_C}$. While at this level the two quantum models
yield completely different Hilbert spaces, in the section~\ref{homeos}
we will
prove that the corresponding spaces of ``diffeomorphism'' invariant states
are naturally isomorphic.
\subsection{The PL category}
To specify the elements of the family
of graphs $\Gamma_{\rm PL}$ that define the Hilbert space of the PL
category we need a fixed piecewise linear structure on space $\S$.
A piecewise
linear structure on a manifold $\S$ can be specified by a division of the
manifold into cells with a fixed affine structure (flat connection). Also
it can be specified by a triangulation, that is, a fixed
homeomorphism $\varphi : \S \to \S_0$ where $\S_0$ is an $n$-dimensional
polyhedron with a fixed decomposition into simplices.
To be more explicit, we can use the fact that
every n-dimensional polyhedron can be embedded in $R^{2n+1}$ and consider
from the beginning $\S_0\subset R^{2n+1}$.
Then $\S_0$ can be decomposed into a collection of convex cells
(geometrical simplices).
A {\em geometric simplex} in $R^{2n+1}$ is
simply the convex region defined by its set of vertices
$\{ {\bf s}_0, \ldots , {\bf s}_k \}$, ${\bf s}_i \in R^{2n+1}$
\be
\D(\{ {\bf s}_0, \ldots , {\bf s}_k \} )= \{ {\bf s} =
\sum_{i=0}^k t_i {\bf s}_i \}
\ee
\ni
where $t_i \in [ 0,1]$ and $\sum_{i=0}^k t_i =1$.
The triangulation of $\S_0$ fixes an affine structure in its cells,
namely, a PL structure.
Using the local affine coordinate systems $t_i$, we can decide which curves
are straight lines inside any cell.
Then a piecewise linear curve in $\S_0$ is a curve that is straight inside
every cell except for a {\em finite} set of points; at these points and
at the points where it crosses the boundaries of the cells the curve may bend,
but it remains continuous.
A piecewise linear graph $\c \in \Gamma_{\rm PL}$ is a graph (according to
the definition given in the previous section) such that every edge
$e \in E_\c$ is piecewise linear.
In the previous section we gave a natural partial
order (``refinement relation'', $\geq$) for any family of graphs.
Our task is now to prove that the partially ordered set $\Gamma_{\rm PL}$ is
a projective family; once we prove this property, the general procedure
outlined in the previous section gives us the Hilbert space of the
PL category.
The only non-trivial property to prove is that the family
of graphs $\Gamma_{\rm PL}$ is directed. For instance, according to the
definition of a graph given in the last section, the family of all graphs
with piecewise smooth edges is not directed. In this case,
two edges of different graphs can intersect an infinite number of
times; two such graphs would only admit a common refinement with an
infinite number of edges, which according to our definition is not a graph.
We will construct a graph $\c_3$ that refines two given
graphs $\c_1$ and $\c_2$.
A trivial property of PL edges lies at the heart of our construction;
due to its importance, it is stated as a lemma.
\begin{lemma} \label{finiteness}
Given two edges of different graphs
$e_1 \in \c_1$ and $e_2 \in \c_2$, the intersection $e_1 \cap e_2$ has
{\em finitely many} connected components.
These connected components are either isolated points or piecewise linear
segments.
\end{lemma}
Now we start our construction.
First we note that every graph $\c$ is refined by a graph
$\c^\prime$ constructed from $\c$ simply by adding a finite number
of vertices $v\in V^\prime$
in the interior of its edges (and by splitting the edges at the points where
a new vertex sits).
Because of lemma~\ref{finiteness}, we know that
given two graphs $\c_1,\c_2\in C_{\rm PL}$ we can refine each of them
trivially by adding {\em finitely many} new vertices to form the graphs
$\c_1^\prime \geq \c_1, \c_2^\prime \geq \c_2$ that satisfy the following
property. Every edge $e_1\in E_{\c_1^\prime}$ falls into one of the three
categories given below:
\begin{itemize}
\item $e_1$ does not intersect any edge of $\c_2^\prime$. \label{1}
\item $e_1$ is also an edge of $\c_2^\prime$;
$e_1\in E_{\c_2^\prime}$. \label{2}
\item $e_1$ intersects an edge $e_2$ of $\c_2^\prime$ at vertices (one or two)
of both graphs $e_1 \cap e_2 \subset V_{\c_1^\prime}$,
$e_1 \cap e_2 \subset V_{\c_2^\prime}$. \label{3}
\end{itemize}
A direct consequence of these properties is the following:
\begin{lemma} \label{pl-projective}
The graph $\c_3$ defined by
$E_{\c_3}=E_{\c_1^\prime}\cup E_{\c_2^\prime}$ and
$V_{\c_3}=V_{\c_1^\prime}\cup V_{\c_2^\prime}$ is a refinement of
$\c_1^\prime$ and $\c_2^\prime$. By the properties of the
partial ordering relation it follows that $\c_3$ is also a refinement of the
original graphs $\c_3 \geq \c_1 , \c_3 \geq \c_2$; thus the family of
piecewise linear graphs $\Gamma_{\rm PL}$ is a projective family.
\end{lemma}
In the light of lemma~\ref{pl-projective},
the rest of the construction is a simple application
of the general framework described in the previous section.
There is a canonical projection $p_\c$ from the space of generalized
connections $\overline{\cal A}_{\rm PL}$ to the spaces of connections
${\cal A}_\c$ on graphs $\c \in \Gamma_{\rm PL}$ given by,
\be
p_\c\;\;:\;\overline{\cal A}_{\rm PL}
\rightarrow
{\cal A}_\c,\;\;\;p_\c((A_{\c^\prime}) _{\c^\prime\in \Gamma_{\rm PL}})
:=A_\c.
\ee
This projective structure is the main ingredient that yields the Hilbert
space of the connection representation in the PL category.
Below we state our result concisely.
\begin{theorem} \label{pl-h}
The completion (in the sup norm) of the family of
functions $p_\c^* f_\c(\bar{A})$, defined by graphs $\c \in \Gamma_{\rm PL}$,
is an Abelian $C^*$ algebra $Cyl_{\rm PL}$.
A cyclic representation of $Cyl_{\rm PL}$ is provided by the Hilbert space
\be
{\cal H}_{\rm kin_{PL}}:=L^2(\overline{\cal A}_{\rm PL},d\m_0)
\ee
that results after completing $Cyl_{\rm PL}$ in the norm provided by the
Ashtekar-Lewandowski measure $\m_0$.
\end{theorem}
In the manner described in the previous section we can also consider the
space of gauge invariant states and obtain
${\cal H}^{\prime}_{\rm kin_{PL}}$, which is spanned by spin network states
labeled by piecewise linear graphs.
\subsection{The combinatoric category}
In this subsection we introduce the family of combinatoric graphs that
leads to a manifestly combinatoric approach to quantum gauge theory.
The construction of combinatoric graphs uses as a cornerstone
the same notion that serves as the combinatoric foundation of topology.
Thus, our construction provides a quantum/combinatoric
model for physical space, the space where physical processes take place.
Simplicial complexes appear first as the combinatoric means of
capturing the topological information of a topological space $X$.
By definition, a {\em simplicial complex} $K$ is a set of finite
sets closed under formation of subsets, formally:
\be
x\in K \, \hbox{and} \, y \subset x
\; \Rightarrow \; y\in K \quad .
\ee
A member of a simplicial complex $x\in K$ is called an $n$-simplex if
it has $n+1$ elements; $n$ is the dimension of $x$. Generically, the set
of which all simplices are subsets is called the vertex set and denoted
by $\L$. Some examples of simplicial complexes are given in figure 1.
\hskip 1.7in\epsfbox{Cf2.eps}
\bigskip
\noindent
{\small {\bf Fig. 1}
a) Geometrical representations of a zero dimensional simplex $x=\{ p\}$
and a one dimensional simplex $x=\{ p,q\}$.
{\em The simplices are the sets}; in the figures, what we draw
are the geometric realizations $\D_x$ of the abstract simplices $x$.
b) A two dimensional complex is a set of simplices of dimension
smaller or equal to two. In this case the complex
$K=\{ \{ p\},\{ q\},\{ r\},\{ s\},
\{ p, q\},\{ q, r\},$ \\
$\{ r, p\},
\{ s, p\},\{ s, q\},\{ s, r\},
\{ p, q, r\},\{ p, q, s\},\{ q, r, s\},
\{ r, p, s\} \}$
represents a sphere $S^2$.
Figure (1b) is the geometric realization
$\| K^1\|$ of the one dimensional subcomplex of $K$ given by
$K^1=\{ \{ p\},\{ q\},\{ r\},\{ s\}, \{ p, q\},\{ q, r\},\{ r, p\},
\{ s, p\},\{ s, q\},\{ s, r\} \}$.
c) The vertices of the barycentric subdivision $Sd(K)$ are the simplices of
$K$. For example if $K=\{ \{ p\},\{ q\},\{ p, q\} \}$ then
$Sd(K)=\{ \{ \{ p\} \}, \{ \{ p, q\} \}, \{ \{ q\} \},
\{ \{ p\} , \{ p, q\} \}, \{ \{ p, q\}, \{ q\} \}\}$.
}
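The closure condition that defines a simplicial complex is easy to test mechanically. In the sketch below (Python; a hedged illustration in which a complex is encoded as a set of frozensets, with all function names our own) we build the boundary-of-a-tetrahedron triangulation of the sphere $S^2$ sketched in figure 1b and check that it is closed under formation of nonempty subsets:

```python
from itertools import combinations

def is_complex(K):
    """Closure under nonempty subsets: x in K and y subset of x implies y in K."""
    return all(frozenset(y) in K
               for x in K
               for r in range(1, len(x))
               for y in combinations(x, r))

def simplices(vertices, top_dim):
    """All simplices of dimension <= top_dim on the given vertex set."""
    return {frozenset(c)
            for r in range(1, top_dim + 2)
            for c in combinations(vertices, r)}

# The boundary of a tetrahedron: all proper faces of {p, q, r, s}.  Its
# geometric realization is the sphere S^2 of figure 1b.
K = simplices("pqrs", 2)
assert is_complex(K)
assert len(K) == 14                  # 4 vertices + 6 edges + 4 triangles
assert frozenset("pqrs") not in K    # the solid 3-simplex is excluded
```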
Given an open cover ${\cal U}(\L)= \{ U_\l : \l \in \L \}$
of a topological space
$X$, the information about the relative position of the open sets
$U_{\l_1}, U_{\l_2}, \ldots \in {\cal U}(\L)$ is the combinatoric information
that the {\em nerve} $K(\L)$ of ${\cal U}(\L)$ encodes.
The simplicial complex $K(\L)$
is the set of all finite subsets $x$ of $\L$ such that
\be
\bigcap_{\l \in x} U_\l \neq \emptyset \quad .
\ee
Using the information encoded in $K(\L)$ one can often
recover the topological space $X$. More precisely, every open cover
${\cal U}(\L)$ of $X$ admits a refinement ${\cal U}^\prime (\L^\prime )$
such that the geometric realization (to be defined below)
of its nerve is homeomorphic to $X$, $|K(\L^\prime )| \approx X$.
This is the sense in which simplicial complexes constitute a combinatoric
foundation of topology.
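As an illustration of the nerve construction (Python; the encoding of open sets as finite point samples, and all names, are our own simplifying assumptions), covering a discretized circle by three overlapping arcs with empty triple intersection yields the boundary of a triangle, i.e. a combinatorial circle:

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a cover: the finite subsets x of the index set whose
    corresponding sets have a nonempty common intersection."""
    labels = sorted(cover)
    K = set()
    for r in range(1, len(labels) + 1):
        for x in combinations(labels, r):
            if set.intersection(*(set(cover[l]) for l in x)):
                K.add(frozenset(x))
    return K

# A circle sampled at six points, covered by three overlapping arcs.
cover = {"U0": {0, 1, 2}, "U1": {2, 3, 4}, "U2": {4, 5, 0}}
K = nerve(cover)

# Pairwise overlaps are nonempty but the triple intersection is empty,
# so the nerve is the boundary of a triangle: a combinatorial circle.
assert frozenset({"U0", "U1", "U2"}) not in K
assert len(K) == 6   # three vertices and three edges
```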
A simplicial complex stores
topological information combinatorically, but the same
information can be encoded in a geometric fashion (see
\cite{fritsch-piccinini}).
The {\em geometric realization} $|K|$ of a simplicial complex
$K= K(\L)$ is the subset of $R^{\L}$ given by
$|K|:= \bigcup_{x \in K} \D_x$ where $\D_x$ is a geometrical simplex
represented as a segment of a plane of codimension one, embedded in
$R^{x}$; more precisely,
\be
\D_x:= \left\{ \mathbf{s}:=(s_\l : \l \in x)\in I^x :
\sum_{\l \in x} s_\l = 1 \right\}
\ee
where $I=[0,1]$ is the unit interval. The topology of $|K|$ is determined
by declaring all its geometrical simplices $\D_x$ to be closed sets.
Our main purpose is to find a combinatoric analog of a generalized
connection. We need to find the appropriate concept of the space of all
combinatoric graphs; then a generalized connection will be an assignment of
group elements to the edges of the graphs. We could fix a simplicial complex
$K$ to represent the base space and consider that a combinatoric graph is a
one-dimensional subcomplex $\c \subset K$. The resulting model would
properly describe topological field theories, but we want to generate
a model for gauge theories with local degrees of freedom. To achieve
our goal, we replace physical space (the base space) with a sequence of
simplicial complexes $K_0, K_1, \ldots $ that are finer and finer.
The concept of barycentric subdivision gives us a way of generating
finer and finer simplicial complexes. Given a simplicial complex $K$ its
{\em barycentric subdivision} $Sd(K)$ is defined as the simplicial complex
constructed by assigning a vertex to every simplex of $K$, $\L= K$.
Then, the simplices of $Sd(K)$ are the finite subsets $X\subset \L$
that satisfy
\be
x,y \in X \Rightarrow x\subset y \; \hbox{or} \; y\subset x
\ee
A geometric representation of the operation of barycentric subdivision $Sd$ is
given in figure 1.
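The definition of $Sd$ translates directly into a finite computation: the vertices of $Sd(K)$ are the simplices of $K$, and the simplices of $Sd(K)$ are the chains under inclusion. A minimal Python sketch (our own illustration; function names are ours):

```python
from itertools import combinations

def is_chain(X):
    """X spans a simplex of Sd(K) iff it is totally ordered by
    inclusion: x subset y or y subset x for every pair."""
    return all(x <= y or y <= x for x, y in combinations(X, 2))

def barycentric_subdivision(K):
    """Sd(K): one vertex per simplex of K; simplices are the chains."""
    simplices = list(K)
    Sd = set()
    for r in range(1, len(simplices) + 1):
        for X in combinations(simplices, r):
            if is_chain(X):
                Sd.add(frozenset(X))
    return Sd

# K = one edge {p, q} together with its two vertices.
p, q, pq = frozenset({"p"}), frozenset({"q"}), frozenset({"p", "q"})
SdK = barycentric_subdivision({p, q, pq})
# Sd(K) has 3 vertices and 2 edges, reproducing the example in
# remark (c) above: its edges are {{p},{p,q}} and {{p,q},{q}}.
```

Iterating this function produces the sequence $K, Sd(K), Sd^2(K), \ldots$ used to represent finer and finer spaces.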
Our approach to quantum gauge theory replaces the base space $\S$
with a sequence of simplicial complexes
$\{K, Sd(K), \ldots , Sd^n(K), \ldots \}$ such that $|K| \approx \S_0$,
where $\S_0$ is a compact Hausdorff three dimensional manifold. This concept
of space leads to the definition of combinatoric graphs.
A {\em combinatoric graph} $\c \in \Gamma_C$ is simply a graph, according to
the definition given in the previous section, where the set of vertices
$V_\c$ and the set of edges $E_\c$ are restricted to be subsets of the set
of points $V(K)$ and the set of oriented paths $E(K)$.
In the combinatoric representation of space, a point $p\in V(K)$ is
represented by an equivalence class of sequences of the kind
$\{ p_n, p_{n+1}=Sd(p_n), p_{n+2}=Sd^2(p_n), \ldots \}$ of
zero-dimensional simplices $p_n \in Sd^n(K)$, $p_{n+1} \in Sd^{n+1}(K)$, etc.
Notably, one single element of the sequence determines the
whole sequence.
Two sequences $\{ p_n, p_{n+1}= Sd(p_n), \ldots \}$
and $\{ q_m, q_{m+1}= Sd(q_m), \ldots \}$
are equivalent if all their elements coincide, $p_s=q_s \in Sd^s(K)$
for all $s\geq \max(n,m)$.
The definition of oriented paths follows the same idea, but is a little more
involved. First we will define paths, then oriented paths, and composition
of oriented paths.
A path $e\in P(K)$ is an equivalence class of sequences
$\{ e_n, e_{n+1}=Sd(e_n), \ldots \}$ of one dimensional subcomplexes
$e_n \subset Sd^n(K)$ such that the geometric realizations of its elements are
homeomorphic to the unit interval $|e_n| \approx I$. Again, two sequences
$\{ e_n, e_{n+1}= Sd(e_n), \ldots \}, \{ f_m, \ldots \}$ are equivalent if all
their elements coincide $e_s=f_s \in Sd^s(K)$ for all $s\geq \max(n,m)$.
An oriented path
$e \in E(K)$ is a path $e^\prime \in P(K)$ and a sequence of relations that
order the vertices%
\footnote{
Here the term vertex refers to a zero-dimensional simplex in one
of the one-dimensional subcomplexes $e_n$ in the path $e$. It should not be
confused with a vertex $v\in V_\c$ of a combinatoric graph.
}
of each of the one-dimensional subcomplexes $e_n^\prime$ in the path.
We denote the initial point of a path by $e(0)\in V(K)$;
it is defined by the class of the sequence of initial vertices
$e(0)=[ \{ e_n(0), e_{n+1}(0)=Sd(e_n(0)), \ldots \} ]\in V(K)$;
the final point of a combinatoric path is denoted by $e(1)\in V(K)$.
Composition of two oriented paths
$e, f \in E(K)$ is possible if they intersect only at the final point of
the initial path and the initial point of the final path
$[ \{ e_n\cap f_n, e_{n+1}\cap f_{n+1}, \ldots \} ] = e(1) = f(0)$;
it is denoted by $f\circ e \in E(K)$ and is defined by
\be
f \circ e=[ \{ (f\circ e)_n =f_n \cup e_n,
(f\circ e)_{n+1}=Sd((f\circ e)_n), \ldots \} ]
\ee
and the obvious sequence of ordering relations.
Given an oriented path $e\in E(K)$ its inverse $e^{-1} \in E(K)$ is defined by
the same path $e^\prime \in P(K)$ and the opposite orientation. Notice that
the composition relation is not defined for $e$ and $e^{-1}$; it is possible to
define combinatoric curves that behave like usual curves, but it is not necessary
for the purpose of this article.
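At one fixed subdivision level the composition rule can be made concrete. The sketch below is our own illustration, with oriented paths modeled as ordered vertex tuples at a single level $n$ rather than as the equivalence classes of sequences used in the text; it enforces both conditions, $f(0)=e(1)$ and that the paths meet only at the junction, so that $e$ and $e^{-1}$ cannot be composed:

```python
def compose(e, f):
    """Compose oriented paths, each modeled as an ordered tuple of
    vertices at one fixed subdivision level.  Defined only when the
    underlying subcomplexes meet exactly at e(1) = f(0)."""
    if e[-1] != f[0]:
        raise ValueError("f(0) must equal e(1)")
    if set(e) & set(f) != {e[-1]}:
        raise ValueError("paths must intersect only at the junction")
    return e + f[1:]  # the union of the two subcomplexes, reordered

e, f = ("a", "b", "c"), ("c", "d")
fe = compose(e, f)  # ("a", "b", "c", "d")
# compose(("a", "b"), ("b", "a")) raises: e and its inverse share
# every vertex, so f o e is undefined, just as in the text.
```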
Once the set of edges $E$ is endowed with the composition operation,
the rest of our construction is almost a simple application of
the general framework reviewed in the previous section.
The only gap to be filled is proving that the family of combinatoric
graphs $\Gamma_C$ is directed.
To prove the directedness in the PL case we used the finiteness property
stated in lemma~\ref{finiteness}; an adapted statement of
this same property holds trivially in the combinatoric case.
\begin{lemma} \label{C-finiteness}
The intersection of
two one dimensional subcomplexes $e_n, f_n \subset Sd^n(K)$, defining
the paths $e, f \in P(K)$ respectively, has
{\em finitely many} connected components.
These connected components are either isolated zero-dimensional simplices
or one-dimensional subcomplexes homeomorphic to the unit interval.
That is,
\be
e_n\cap f_n = (\bigcup_{i=1}^N p(i)_n ) \bigcup (\bigcup_{j=1}^M g(j)_n )
\ee
where $p(i)_n\subset Sd^n(K)$ is a zero-dimensional simplex and
$I \approx g(j)_n\subset Sd^n(K)$. In addition,
$p(i)_n \cap p(j)_n = p(i)_n \cap g(j)_n = g(i)_n \cap g(j)_n =
\emptyset$ for all $i\neq j$.
By defining the appropriate notion of union and intersection of classes of
sequences we can state the result as
\be
e\cap f = (\bigcup_{i=1}^N p(i) ) \bigcup (\bigcup_{j=1}^M g(j) )
\ee
where $p(i)\in V(K)$, $g(j)\in P(K)$, and
$p(i) \cap p(j) = p(i) \cap g(j) = g(i) \cap g(j) = \emptyset$ for all
$i\neq j$.
\end{lemma}
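The decomposition asserted by the lemma is easy to exhibit on toy subcomplexes. The following Python sketch (illustrative only; the name \texttt{components} is ours) computes the connected components of the intersection of two one-dimensional complexes with a union-find pass:

```python
def components(vertices, edges):
    """Connected components of a one-dimensional complex (union-find)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

# Two paths 1-2-3-4-5 and 2-3-6-5: their intersection consists of
# the segment {2, 3} and the isolated vertex {5}.
e_v, e_e = {1, 2, 3, 4, 5}, {(1, 2), (2, 3), (3, 4), (4, 5)}
f_v, f_e = {2, 3, 6, 5}, {(2, 3), (3, 6), (6, 5)}
parts = components(e_v & f_v, e_e & f_e)
```

The two components found, a segment and an isolated point, are exactly the two kinds allowed by the lemma.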
Therefore, the
construction of a graph $\c_3\in \Gamma_C$ that refines two given
graphs $\c_1, \c_2 \in \Gamma_C$ is just an adaptation of the construction
given for the piecewise linear case.
Using lemma~\ref{C-finiteness} it is easy to prove that
given two graphs $\c_1,\c_2\in \Gamma_C$ we can refine each of them
trivially by adding {\em finitely many} new vertices; forming graphs
$\c_1^\prime \geq \c_1, \c_2^\prime \geq \c_2$ such that every
edge $e_1\in E_{\c_1^\prime}$ falls in one of the three categories
(\ref{1}), (\ref{2}), (\ref{3}) itemized in the previous subsection.
From the previous construction the following lemma is evident.
\begin{lemma} \label{C-projective}
Let $\c_3$ be the graph defined by \\
$V_{\c_3}:=V_{\c_1^\prime}\cup V_{\c_2^\prime}\subset V(K)$ and
$E_{\c_3}:=E_{\c_1^\prime}\cup E_{\c_2^\prime}\subset E(K)$.
$\c_3$ is a refinement of
$\c_1^\prime$ and $\c_2^\prime$. By the properties of the
partial ordering relation it follows that $\c_3$ is also a refinement of the
original graphs $\c_3 \geq \c_1 , \c_3 \geq \c_2$; thus the family of
combinatoric graphs $\Gamma_C$ is directed.
\end{lemma}
Following the general framework described in the previous section
we will complete the construction of our combinatoric/quantum model
for gauge theory.
There is a canonical projection $p_\c$ from the space of generalized
connections $\overline{\cal A}_C$ to the spaces of connections
${\cal A}_\c$ on graphs $\c \in \Gamma_C$ given by,
\be
p_\c\;\;:\;\overline{\cal A}_C
\rightarrow
{\cal A}_\c,\;\;\;p_\c((A_{\c^\prime}) _{\c^\prime\in \Gamma_C})
:=A_\c.
\ee
These projections are the key ingredient that yields the Hilbert
space of the connection representation in the combinatoric category.
Below we state our result concisely.
\begin{theorem} \label{C-h}
The completion (in the sup norm) of the family of
functions $p_\c^* f_\c(\bar{A})$, defined by graphs $\c \in \Gamma_C$,
is an Abelian $C^*$ algebra $Cyl_C$.
A cyclic representation of $Cyl_C$ is provided by the Hilbert space
\be
{\cal H}_{\rm kin_C}:=L^2(\overline{\cal A}_C,d\m_0)
\ee
that results after completing $Cyl_C$ in the norm provided by the
Ashtekar-Lewandowski measure $\m_0$.
\end{theorem}
As described in the previous section we can consider the
space of gauge invariant states and get ${\cal H}^{\prime}_{\rm kin_C}$,
which is spanned by spin network states labeled by combinatoric graphs.
The constructions, given in this and the previous subsection, of the Hilbert
spaces for the piecewise linear and the combinatoric categories were
similar. Despite the parallelism, the resulting Hilbert spaces
are completely different. A property that marks the difference is the size
of these Hilbert spaces.
\begin{theorem} \label{C-separable}
The Hilbert space
${\cal H}^{\prime}_{\rm kin_C}$ is separable.
\end{theorem}
Proof -- We will prove that the spin network basis
is countable in the combinatoric case.
We did not describe precisely the spin network basis, but we stated that
two spin network states $S^1_{\vec{\c} , j(e), c(v)}(A)$,
$S^2_{\vec{\d} , j(e), c(v)}(A)$ are
orthogonal if $\c \neq \d$ or if their edge's colors are different.
Let $L_{\c ,j(e)}$ be the space spanned by all the spin network states
with labels $\vec{\c}, j(e)$. Our task is to bound $n=\dim(L_{\c ,j(e)})$.
We know that $n$ is less than the number of labels that we would get
by assigning not one integer but three integers to the graph's edges.
The first integer $j(e)$ labels the irreducible representation
assigned to $e$, and the other two $m_L(e), m_R(e)$ determine basis
vectors in the vector space selected by $j(e)$. With these basis vectors
sitting at both ends of every edge we can label any set of
(generally non-gauge-invariant) contractors for the vertices.
Thus, the spin network basis is countable if the set of
finite subsets of
\be
E(K) \times \mathbf{N}
\ee
is countable.
Then to prove the theorem we just have to show that the set $E(K)$
is countable, which in turn reduces to proving that the set of paths $P(K)$
is countable.
A path $e\in P(K)$ is determined by a sequence of one-dimensional subcomplexes
that are all related by barycentric subdivision. Therefore, a path $e\in P(K)$
can be specified by just one one-dimensional subcomplex of an appropriate
$Sd^n(K)$. A particular one-dimensional subcomplex can be described by
specifying which of the one-dimensional and zero-dimensional simplices
belong to it. We can use the set $\{ 0, 1 \}$ to specify whether a given simplex
belongs or does not belong to a particular subcomplex.
Therefore, there is an onto map
\be
M: \bigcup_{n=1}^\infty \{ 0, 1 \}^{Sd^n(K)} \to P(K)
\ee
Since a countable union of finite sets is countable and each set
$\{ 0, 1 \}^{Sd^n(K)}$ is finite, we have proved that $P(K)$ is countable. $\Box$
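The counting argument can be mimicked computationally: at each level $n$ the face-closed subsets of the $0$- and $1$-simplices of $Sd^n(K)$ are picked out by finitely many bit strings. A hypothetical Python sketch (our own names and encoding):

```python
from itertools import product

def one_dim_subcomplexes(simplices):
    """All face-closed subsets of the 0- and 1-simplices: each bit
    string over the simplex list picks a candidate, and a candidate
    is a subcomplex iff every vertex of a chosen edge is chosen."""
    low = [s for s in simplices if len(s) <= 2]
    out = []
    for bits in product((0, 1), repeat=len(low)):
        chosen = {s for s, b in zip(low, bits) if b}
        if all(frozenset({v}) in chosen for s in chosen for v in s):
            out.append(frozenset(chosen))
    return out

K = [frozenset({"p"}), frozenset({"q"}), frozenset({"p", "q"})]
subs = one_dim_subcomplexes(K)
# Of the 2^3 bit strings, 5 are face-closed (including the empty
# subcomplex); finiteness at every level n is the heart of the
# countability argument.
```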
\section{Physical observables and physical states}\label{homeos}
In this section we construct the Hilbert
space of physical states of our model for quantum gauge theory,
on which we represent the algebra of physical (gauge and
``diffeomorphism'' invariant) observables. Our
quantization procedure follows the same steps as in the analytic
category; that is, it follows
(a refined version of) the algebraic quantization
program \cite{l,alg-quant}. When we deal
with theories with extra constraints, like gravity, we need to solve these
extra constraints to find the space of physical states.
Since the issue of ``diffeomorphism'' invariance acquires quite
different faces
in the PL and combinatoric categories, we tackle it first for the PL category.
Then we find the space of physical states of the combinatoric category and
prove that it is separable and isomorphic to the space of physical states
of the PL category.
\subsection{``Diffeomorphism'' invariance in the PL category}
Any operator can be defined by specifying its action on the space of cylindrical
functions $Cyl$ and then using continuity to extend it to the whole Hilbert
space ${\cal H}_{\rm kin}$. This is what we did to define the unitary
operators induced by the gauge symmetry and it is what we will do in this
section to define quantum ``diffeomorphisms.''
Our piecewise linear framework is based on the family of graphs
$\Gamma_{\rm PL}$ selected by a fixed piecewise linear structure in $\S$.
Therefore, the role of ``diffeomorphisms'' is played by {\em piecewise linear
homeomorphisms}. It is important to note that the space of such maps
can be defined as
\be
{\rm Hom}_{\rm PL}(\S):= \left\{ h\in {\rm Hom}(\S) :
h(\Gamma_{\rm PL})= \Gamma_{\rm PL} \right\} \quad . \label{plh}
\ee
The unitary operator
$\hat{U}_h :{\cal H}_{\rm kin_{PL}} \to {\cal H}_{\rm kin_{PL}}$
induced by a piecewise linear homeomorphism $h$ is determined by its
action on cylindrical functions
\be \label{u}
\hat{U}_h\cdot\Psi_\c(\bar{A}):=\Psi_{h^{-1}(\c)}(\bar{A}).
\ee
In contrast with our treatment of gauge invariance, the space of
diffeomorphism invariant states is not the kernel of any Hermitian operator;
the reason is that the one-dimensional subgroups of
the diffeomorphism group induce one-parameter families of unitary
transformations that are not strongly continuous in our Hilbert space
\cite{analytic}. Another important difference is that the space of
``diffeomorphism'' invariant states cannot be made a subspace of the Hilbert
space ${\cal H}_{\rm kin_{PL}}$,
the solutions are true distributions, i.e., they lie in
a subspace of the topological dual of $Cyl_{\rm PL}$.
A distribution $\bar\phi \in Cyl_{\rm PL}^*$ is ``diffeomorphism''
invariant if
\be
\bar\phi[\hat{U}_h\circ\psi]=\bar\phi[\psi]\;\;\;\forall\;\; h\in
{\rm Hom}_{\rm PL}(\Sigma)
\;\;\;\;{\rm and}\;\;\;\;\psi\in {\rm Cyl}_{\rm PL} .
\ee
We can construct such distributions by ``averaging'' over the
group ${\rm Hom}_{\rm PL}(\S)$. The infinite
size of ${\rm Hom}_{\rm PL}(\S)$ makes a precise definition of the
group average procedure very subtle. Here we follow
the procedure used for the analytic category \cite{analytic}.
An inner product for the space of solutions is given by the
same formula that defines the group averaging; therefore,
a summation over all the elements of ${\rm Hom}_{\rm PL}(\S)$
would yield states with infinite
norm. In this sense, prescribing an adequate definition for the
averaging over the group ${\rm Hom}_{\rm PL}(\S)$ involves
``renormalization.'' The issue is resolved once the following two
observations have been made:
First, the inner product between two states based on graphs
$\c , \d\in \Gamma_{\rm PL}$ must be zero unless there is a homeomorphism
$h_0 \in{\rm Hom}_{\rm PL}(\S)$ such that $\c =h_0 \d$.
Second, our construction of generalized connections assigns group elements
to unparametrized edges. Therefore, two homeomorphisms that restricted to
a graph $\c$ are equal except for a reparametrization of the edges of
$\c$ should be counted only once in our construction of group averaging
of states based on graph $\c$.
Thus, we define a map
$\e:Cyl_{\rm PL} \to Cyl_{\rm PL}^*$
that transforms any given gauge invariant
cylindrical function into a ``diffeomorphism'' invariant
distribution. We define
$\e$ acting on spin network states, then by {\em anti}linearity
we can extend its action to any gauge invariant cylindrical function.
Averaging a spin network state
$S_{\vec{\c} , j(e), c(v)}$ produces a s-knot state
$s_{[\vec{\c}],j(e),c(v)}=\e(S_{\vec{\c},j(e),c(v)})\in Cyl_{\rm PL}^*$
defined by
\be
s_{[\vec{\c}] , j(e), c(v)} [S_{\vec{\d}^\prime , j(e), c(v)}]
:= \d_{[\c] [\d]} a ([\c]) \sum_{[h]\in {\rm GS}(\c)}\langle
S_{\vec{U_{h\cdot h_0}\c} , j(e), c(v)} |
S_{\vec{\d}^\prime , j(e), c(v)} \rangle \label{d}
\ee
where $\d_{[\c] [\d]}$ is non-vanishing only if there is
a homeomorphism $h_0 \in {\rm Hom}_{\rm PL}(\S)$
that maps $\c$ to $\d$, $a([\c])$ is a normalization parameter,
and $h\in {\rm Hom}_{\rm PL}(\S)$ is any element in the class of
$[h]\in {\rm GS}(\c)$. The discrete group ${\rm GS}(\c)$ is the group of
symmetries of $\c$; i.e. elements of ${\rm GS}(\c)$ are maps between the
edges of $\c$. The group can be constructed from subgroups of
${\rm Hom}_{\rm PL}(\S)$ as follows:
${\rm GS}(\c)= {\rm Iso}(\c)/{\rm TA}(\c)$ where
${\rm Iso}(\c)$ is the subgroup of
${\rm Hom}_{\rm PL}(\S)$
that maps $\c$ to itself, and the elements of
${\rm TA}(\c)$ are the ones that preserve all the edges of $\c$ separately.
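The quotient ${\rm GS}(\c)= {\rm Iso}(\c)/{\rm TA}(\c)$ can be illustrated on small graphs: one enumerates vertex bijections preserving adjacency and keeps only their distinct induced actions on the edge set. A toy Python sketch (our own illustration, not part of the construction):

```python
from itertools import permutations

def edge_symmetries(vertices, edges):
    """Distinct actions on the edge set induced by vertex bijections
    that preserve adjacency.  Bijections moving no edge are collapsed
    to a single class, echoing GS(c) = Iso(c)/TA(c)."""
    E = [frozenset(e) for e in edges]
    actions = set()
    for p in permutations(vertices):
        m = dict(zip(vertices, p))
        img = [frozenset({m[a], m[b]}) for a, b in edges]
        if set(img) == set(E):           # p preserves the graph
            actions.add(tuple(img))      # record how each edge moves
    return actions

# Triangle: all 6 vertex bijections act differently on the edges.
tri = edge_symmetries((1, 2, 3), [(1, 2), (2, 3), (3, 1)])
# Single edge: swapping its endpoints preserves the edge but acts
# trivially on the edge set, so Iso has 2 elements and GS only 1.
seg = edge_symmetries((1, 2), [(1, 2)])
```

The single-edge example shows why maps that differ only by a reparametrization of edges are counted once in the group averaging.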
{\em The Hilbert space of physical states} ${\cal H}_{\rm diff_{PL}}$
is obtained after completing the space spanned by the s-knot states
$\e(Cyl_{\rm PL})$
in the norm provided by the inner product defined by
\be
(F,G)=(\e(f),\e(g)):=G[f] \quad .
\ee
Define the algebra ${\cal A}_{\rm diff_{PL}}^\prime$ to be the algebra
of operators on ${\cal H}_{\rm kin_{PL}}$ satisfying the following
two properties: First, for
$O \in {\cal A}_{\rm diff_{PL}}^\prime$, both $O$ and $O^\dagger$
are defined on $Cyl_{\rm PL}$ and map $Cyl_{\rm PL}$ to itself.
Second, both $O$ and $O^\dagger$
are representable in ${\cal H}_{\rm diff_{PL}}$ by means of
\be \label{obs}
r_{PL}(\hat{O}) F = r_{PL}(\hat{O}) \e(f) :=
\e( \hat{O} f) \quad .
\ee
${\cal A}_{\rm diff_{PL}}^\prime$
is the analog of the algebra of weak ``observables.''
Different weak observables can be weakly equivalent; in the same way,
many operators
of ${\cal A}_{\rm diff_{PL}}^\prime$ are represented by the same
operator in ${\cal H}_{\rm diff_{PL}}$. For example,
$r_{PL}(\hat{U}_h)=r_{PL}(1) =1$. We can define the algebra of
classes of operators of ${\cal A}_{\rm diff_{PL}}^\prime$
that are represented
by the same operator in ${\cal H}_{\rm diff_{PL}}$; this algebra is
faithfully represented in ${\cal H}_{\rm diff_{PL}}$ and is called
the algebra of physical operators ${\cal A}_{\rm diff_{PL}}$
\cite{alg-quant}. Moreover, it is easy to prove that every operator
on ${\cal H}_{\rm diff_{PL}}$ is in the image of
$r_{PL}({\cal A}_{\rm diff_{PL}})$.
The algebra of strong observables (Hermitian operators
invariant under
gauge transformations and ``diffeomorphisms'') sits inside of
${\cal A}_{\rm diff_{PL}}$ (with the commutator as product);
then it is representable in
${\cal H}_{\rm diff_{PL}}$ faithfully.
Since (\ref{obs}) maps any observable to a Hermitian
operator in ${\cal H}_{\rm diff_{PL}}$,
this representation implements the reality conditions. In
particular (when the space manifold is three dimensional and the
gauge group is $SU(2)$), the construction
provides a ``quantum Husain-Kucha\v{r} model'' \cite{husain-kuchar},
that has local degrees of freedom \cite{analytic}.
An interesting feature of the quantum
Husain-Kucha\v{r} model (and of any other diffeomorphism invariant
quantum gauge theory defined over a compact manifold
$\S$ with $\dim(\S)= 1,2,3$ following our general framework)
is that the choice of background structure is
not reflected in the resulting quantum theory. To be precise, fix a
piecewise linear structure $PL_0$ on $\S$ and construct the algebra of
physical operators ${\cal A}_{\rm diff_{PL_0}}$
(acting on ${\cal H}_{\rm diff_{PL_0}}$) that it induces.
Given another piecewise linear structure $PL_1$ on $\S$ and a piecewise
linear homeomorphism connecting both PL structures
$h_1: \S_{PL_0}\to \S_{PL_1}$, we get a representation of
${\cal A}_{\rm diff_{PL_0}}$ in ${\cal H}_{\rm diff_{PL_1}}$ by
$r_{PL_1} (O)= \hat{U}_{h_1}^{-1} O \hat{U}_{h_1}$. In fact,
$r_{PL_1}: {\cal A}_{\rm diff_{PL_0}} \to {\cal A}_{\rm diff_{PL_1}}$ is
onto and it is independent of $h_1$. Thus we can label the operators of
${\cal A}_{\rm diff_{PL_1}}$ by the elements of
${\cal A}_{\rm diff_{PL_0}}$. Using ${\cal A}_{\rm diff_{PL_0}}$ as a
fiducial abstract algebra, the independence of the background PL structure
on $\S$ may be stated as follows.
\begin{theorem} \label{pl-equiv}
Any piecewise linear structure $PL_1$ on a fixed manifold $\S$
of dimension $\dim(\S)= 1,2,3$ defines a representation
$r_{PL_1}({\cal A}_{\rm diff_{PL_0}})$ of
${\cal A}_{\rm diff_{PL_0}}$. This representation is independent
of the piecewise linear structure, in the sense that, given any two
piecewise linear structures $PL_1$ and $PL_2$ on $\S$, the
representations $r_{PL_1}({\cal A}_{\rm diff_{PL_0}})$ and
$r_{PL_2}({\cal A}_{\rm diff_{PL_0}})$ are unitarily equivalent.
\end{theorem}
Proof -- In dimensions $\dim(\S)= 1,2,3$
it is known \cite{PLstructures} that
any two PL structures $PL_i$ and $PL_0$
are related by a piecewise linear homeomorphism
$h_i:\S_{PL_0} \to \S_{PL_i}$. This implies that
$r_{PL_i}({\cal A}_{\rm diff_{PL_0}})$ defined above is a representation
of ${\cal A}_{\rm diff_{PL_0}}$. That the representations induced by $PL_1$
and $PL_2$ are equivalent is trivial:
$U_{h_2^{-1}\circ h_1}: {\cal H}_{\rm diff_{PL_1}} \to
{\cal H}_{\rm diff_{PL_2}}$ is the required
unitary map, and it induces an algebra isomorphism. $\Box$
\subsection{Physical observables and physical states
in the combinatorial category} \label{Cdinv}
Now our task is to find the analog of knot-classes of combinatoric
graphs.
In section~\ref{p-lspaces} we reviewed how a simplicial complex
$K$ encodes topological information combinatorially, and how this
information can be displayed in its geometric realization $|K|$.
Then, to decide whether
or not two combinatoric graphs $\c, \d \in \Gamma_C$ belong to the same
knot-class we are going to display them in the same space and compare them.
To this end, we fix the sequence of piecewise linear maps
\be
M_n : |Sd^n(K)| \to |K|
\ee
defined by successive application of
the canonical map $M_1: |Sd(K)| \to |K|$ that maps the vertices of
$|Sd(K)|$ to the barycenter of the corresponding simplex in $|K|$.
Then, we map every representative
$\{ \c_n, \c_{n+1}=Sd(\c_n), \ldots \}$
of the combinatoric graph $\c$ in to a sequence
\be
\{ M_n(|\c_n|), M_{n+1}(|\c_{n+1}|)=M_n(|\c_n|), \ldots \}
\ee
that assigns the same geometric graph $|\c|:= M_n(|\c_n|)$ to every integer.
Using these maps we declare that the combinatoric graphs
$\c, \d \in \Gamma_C$ are ``diffeomorphic'' if their corresponding
geometrical graphs $|\c|, |\d|$ are related by a piecewise linear
homeomorphism.
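The canonical map $M_1$ is concrete in barycentric coordinates: the $Sd(K)$-vertex corresponding to a simplex $x$ of $K$ is sent to the barycenter of $|x|$, the point of $\D_x$ with equal coordinates. A small illustrative sketch (names are ours):

```python
from fractions import Fraction

def barycenter(simplex, vertex_order):
    """Image under M_1 of the Sd(K)-vertex labeled by `simplex`:
    the barycenter of the geometric simplex, written as a coordinate
    tuple in R^Lambda (exact rational arithmetic)."""
    w = Fraction(1, len(simplex))
    return tuple(w if v in simplex else Fraction(0) for v in vertex_order)

order = ("p", "q", "r")
b_p = barycenter({"p"}, order)              # (1, 0, 0): vertices are fixed
b_pq = barycenter({"p", "q"}, order)        # (1/2, 1/2, 0): edge midpoint
b_pqr = barycenter({"p", "q", "r"}, order)  # (1/3, 1/3, 1/3)
# Each image lies in the geometric simplex: the coordinates sum to 1.
```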
One way to implement the above idea is to use the sequence of
maps $M_n$ to induce a map that links the kinematical Hilbert spaces of
the combinatoric and PL categories. The map
$M: Cyl_C \to Cyl_{\rm PL}$ is defined by
\be
M (f_\c ):= f_{M_n(|\c_n|)} = f_{|\c|} \quad .
\ee
Now the map $\e:Cyl_{\rm PL}(\overline{\cal A/G}) \to Cyl_{\rm PL}^*(\overline{\cal A/G})$
induces a new map
$\e_C:Cyl_C(\overline{\cal A/G}) \to Cyl_C^*(\overline{\cal A/G})$
\be
\e_C := M^* \circ \e \circ M : Cyl_C \to Cyl_C^*
\ee
that produces ``diffeomorphism'' invariant distributions in the combinatoric
category.
Again, we characterize the averaging map by the s-knot states
$s_{[\vec{\c}]_C , j(e), c(v)} \in Cyl_C^*$ induced by the
combinatoric spin network states $S_{\vec{\c} , j(e), c(v)}$
\be \label{s_C}
s_{[\vec{\c}]_C , j(e), c(v)} [S_{\vec{\d}^\prime , j(e), c(v)}] =
\e_C (S_{\vec{\c} , j(e), c(v)}) [S_{\vec{\d}^\prime , j(e), c(v)}] :=
s_{[\vec{| \c |}] , j(e), c(v)} [S_{\vec{| \d |}^\prime , j(e), c(v)}]
\ee
As follows from the above formula, the label $[\vec{\c}]_C$ of the s-knot
states is an equivalence class of oriented combinatoric graphs, where
$\vec{\c}$ and $\vec{\d}$ are considered equivalent if there is
$h \in {\rm Hom}_{\rm PL}(|K|)$ such that $h(\vec{| \c |})=\vec{| \d |}$.
Just as in the PL case, {\em the Hilbert space of physical states}
${\cal H}_{\rm diff_C}$
is obtained after completing the space spanned by the s-knot states
$\e_C(Cyl_C(\overline{\cal A/G}))$
in the norm provided by the inner product defined by
\be
(F,G)=(\e_C(f),\e_C(g)):=G[f] \quad .
\ee
It may seem odd that we are constructing the space of ``diffeomorphism''
invariant states without a family of unitary maps called ``diffeomorphisms.''
The reason for this
peculiarity lies at the very beginning of our construction. We chose to
represent space combinatorially with a sequence generated by the simplicial
complex $K$, and we did not consider the sequence generated by another complex,
say $L$, even if it carried the same topological information $|K| \approx |L|$.
If we had done that, we would have ended up with a kinematical Hilbert space
made of two copies of the one that we defined here, and these two
copies would be linked by ``diffeomorphisms.'' What we did was
to construct everything on top of the minimal kinematical Hilbert space. A
relevant question is whether by shrinking the kinematical Hilbert space we also
shrank the space of physical states. Below, we will prove that this is not
the case.
Now we state two important characteristics of the spaces of physical states
of the combinatoric and PL models.
First, we constructed the space
${\cal H}_{\rm diff_C}$ using the map $\e_C$; the same
map can be restricted to give an onto map from the spin network basis of
${\cal H}^{\prime}_{\rm kin_C}$
to the basis of ${\cal H}_{\rm diff_C}$. Since the
kinematical Hilbert space is separable, we have the following physically
interesting result.
\begin{theorem} \label{d-separable}
The Hilbert space ${\cal H}_{\rm diff_C}$ is separable.
\end{theorem}
Second, the map $M^*: Cyl_{\rm PL}^* \to Cyl_C^*$
can be extended by continuity to link the spaces of
physical states of the PL and combinatoric categories.
Using this map we can compare these two spaces.
\begin{theorem} \label{iso}
The spaces of physical states in the PL and combinatoric categories
are naturally isomorphic,
${\cal H}_{\rm diff_{PL}}\approx {\cal H}_{\rm diff_C}$.
\end{theorem}
Proof -- If $\vec{\c_{\rm PL}} = \vec{| \c |}$
then $M^*$ identifies the s-knot
states that they generate by averaging, in other words,
$ M^* ( s_{[\vec{| \c_{\rm PL} |}] , j(e), c(v)}) =
s_{[\vec{\c}]_C , j(e), c(v)} $. From the definition of the inner products
and the definition of the combinatoric s-knot states it follows immediately
that $M^*$ is an isometry.
Since the spaces of physical states were
constructed by completing the vector spaces spanned by the s-knot states,
the theorem is a consequence of the following lemma, which will be proved in
the appendix.
\begin{lemma} \label{Cknots}
In any knot-class of PL oriented graphs $[\vec{\c_{PL}}]$ there is at
least one representative that comes from the geometric representation of a
combinatoric oriented graph $\vec{| \c |} \in [\vec{\c_{PL}}]$.
\end{lemma}
$\Box$
Now we proceed to construct a representation of the algebra of physical
operators in the combinatoric category.
As in the PL category, we
define the algebra ${\cal A}_{\rm diff_C}^\prime$ to be the algebra
of operators on ${\cal H}_{\rm kin_C}$ that satisfy the following
two conditions: First, for
$O \in {\cal A}_{\rm diff_C}^\prime$, both $O$ and $O^\dagger$
are defined on $Cyl_{\rm C}$ and map $Cyl_{\rm C}$ to itself.
Second, both $O$ and $O^\dagger$
are representable in ${\cal H}_{\rm diff_C}$ by means of
\be \label{obsC}
r_C(\hat{O}) F = r_C(\hat{O}) \e_C(f) :=
\e_C( \hat{O} f) \quad .
\ee
We are interested in the algebra of
classes of operators of ${\cal A}_{\rm diff_C}^\prime$
that are represented
by the same operator in ${\cal H}_{\rm diff_C}$; this algebra is
faithfully represented in ${\cal H}_{\rm diff_C}$ and is called
the algebra of physical operators ${\cal A}_{\rm diff_C}$
\cite{alg-quant}.
In contrast with the PL case, in the combinatoric framework the
``diffeomorphism group'' does not have a natural action; for this
reason the notion of strong observables can not be intrinsically
defined. However, it is easy to prove that in the PL case
the subset of ${\cal A}_{\rm diff_{PL}}$ consisting of Hermitian
operators is, in fact, the algebra of strong observables
(with the commutator as product). Therefore, in the combinatoric
category we can regard the algebra of Hermitian operators
in ${\cal A}_{\rm diff_C}$ as the algebra of strong observables;
this algebra is naturally represented in ${\cal H}_{\rm diff_C}$.
Since (\ref{obsC}) maps any observable to a Hermitian
operator in ${\cal H}_{\rm diff_C}$,
this representation implements the reality conditions. In
particular (when the space manifold is three dimensional and the
gauge group is $SU(2)$),
the construction provides another ``quantum
Husain-Kucha\v{r} model'' \cite{husain-kuchar}. A natural
question is whether the PL and combinatoric models are physically
equivalent or not. We saw that the
algebra ${\cal A}_{\rm diff_{C(K)}}$ is represented in
${\cal H}_{\rm diff_C(K)}$ by $r_{C(K)}$; it is also natural to
give the representation $d_K({\cal A}_{\rm diff_{C(K)}})$ on
${\cal H}_{\rm diff_{PL(|K|)}}$ by
$d_K(\hat{O}) F_{PL} = d_K(\hat{O}) (\e \circ M f_C) :=
\e \circ M (\hat{O} f_C)$. These two representations are identified by
the isomorphism exhibited in theorem~\ref{iso}; more precisely:
\begin{theorem} \label{equiv}
The representations $r_{C(K)}({\cal A}_{\rm diff_{C(K)}})$
on ${\cal H}_{\rm diff_C(K)}$ and
$d_K({\cal A}_{\rm diff_{C(K)}})$ on ${\cal H}_{\rm diff_{PL(|K|)}}$
of the algebra ${\cal A}_{\rm diff_C(K)}$ are unitarily equivalent.
In addition if $\dim(\S)= 1,2,3$
this algebra does not depend on $K$ but only on the topology
of $|K|\approx \S$;
the combinatoric and PL frameworks (based on
the choice of the Ashtekar-Lewandowski measure
$\m_0$ on ${\cal H}_{\rm kin}$)
provide unitarily equivalent
representations of the abstract algebra ${\cal A}_{\rm diff_{\S}}$.
\end{theorem}
Proof -- The unitary equivalence of
$r_{C(K)}({\cal A}_{\rm diff_{C(K)}})$ and
$d_K({\cal A}_{\rm diff_{C(K)}})$ is given by the unitary map
$M^*: {\cal H}_{\rm diff_{PL(|K|)}} \to {\cal H}_{\rm diff_C(K)}$.
$d_K({\cal A}_{\rm diff_{C(K)}})$ maps ${\cal A}_{\rm diff_C(K)}$
onto the algebra of operators on ${\cal H}_{\rm diff_{PL(|K|)}}$ and
the representation is faithful; the same thing happens for
the combinatoric model based on a different simplicial complex $L$.
From theorem~\ref{pl-equiv} we know that if $\dim(\S)= 1,2,3$
for any two simplicial
complexes $K,L$ such that $|K|\approx |L|\approx \S$ the Hilbert spaces
${\cal H}_{\rm diff_{PL(|K|)}}$, ${\cal H}_{\rm diff_{PL(|L|)}}$ and
the algebras of operators on them
are {\em identified} (unambiguously) by a unitary map. Since
$d_K({\cal A}_{\rm diff_{C(K)}})$,
$d_L({\cal A}_{\rm diff_{C(L)}})$,
$d_K({\cal A}_{\rm diff_{PL(|K|)}})$ and
$d_L({\cal A}_{\rm diff_{PL(|L|)}})$
label the operators on
${\cal H}_{\rm diff_{PL(\S)}}$, there is an unambiguous
invertible map {\em identifying} these algebras. Thus the family of
all these equivalent algebras may be regarded as the abstract algebra
${\cal A}_{\rm diff_{\S}}$ and the combinatoric and PL frameworks are
procedures that yield unitarily equivalent
representations of this abstract algebra. $\Box$
From the theorems it follows that the PL and combinatoric frameworks are
physically equivalent. They yield representations of the algebra of
physical observables in separable Hilbert spaces; hence, maintaining the
usual interpretation of quantum field theory \cite{wightman}.
\section{Discussion and Comparison} \label{compari}
In this paper we have presented two models for quantum gauge field theory.
We proved that the two models represent the algebra of physical observables
in separable Hilbert spaces ${\cal H}_{\rm diff_{PL}}$ and
${\cal H}_{\rm diff_C}$; furthermore, we proved that the two models
are physically equivalent in the sense that they
give rise to unitarily equivalent representations of the algebra of
physical observables.
The equivalence of the two models is a good feature, but we may still
ask if by choosing a different background structure (like a different
PL structure for our base space manifold) we could have arrived at a
physically different model.
In contrast to the analytic case, this problem has been thoroughly studied
(see for example \cite{PLstructures}).
For example, in dimensions $\dim(\S) =1,2,3$ any two
PL structures, like any two differential structures,
of a fixed topological manifold $\S$ are known to be equivalent
in the sense that they are related by a PL homeomorphism (diffeomorphism).
Then, if the base space is three dimensional (like in canonical
quantum gravity) all the different choices of background structure would
yield unitarily equivalent representations of the algebra of physical
(gauge and diffeomorphism invariant)
observables (the unitary map given by a quantum ``diffeomorphism'').
Our quantum models are not equivalent to the ones created in the analytic
category \cite{analytic}; for instance, in the analytic category the
physical Hilbert space is not separable. The reason for this size difference
is not that the family of piecewise analytic graphs is too big; the kinematic
Hilbert space of the PL category is also not separable. In a separate
paper \cite{CSfromLQG} we show that the appropriate concept of knot-classes
in the piecewise
analytic category is the one defined with respect to the group of maps given
by
\be
{\rm Pdiff}_\o (\S):= \left\{ h\in {\rm Hom}(\S) :
h(\Gamma_\o)= \Gamma_\o \right\} \quad . \label{pw}
\ee
In the appendix we show how to adapt the proof of lemma~\ref{Cknots}
to show that every (modified) knot-class of piecewise analytic graphs
has a representative induced by a combinatorial graph. Then,
theorem~\ref{iso} and theorem~\ref{equiv}
have analogs proving that the Hilbert space of physical states of the
piecewise analytic category is also separable and that
the representation of the algebra of physical observables
given by the piecewise analytic
category is unitarily equivalent to the one provided by the combinatorial
framework.
We can expect (the author does)
that a more satisfactory understanding of field theory
may arise from this combinatorial picture of quantum geometry.
The bridge between three-dimensional quantum geometry and a smooth
macroscopic space-time is the missing ingredient to complete this picture
of quantum field theory. Three unsolved problems prevent us from building
this bridge. Dynamics in quantum gravity is only partially understood
\cite{thomas-lee-rueman}. The emergence of a four-dimensional picture
from solutions to the constraints has just begun to be explored
\cite{carlo-mike}. And the statistical mechanics needed to find the
semi-classical/macroscopic behavior of the theory of
quantum geometry is also at its developing stage \cite{weeves-kiril}.
\section*{Acknowledgments}
This paper greatly benefited from the wonderful
course on QG given by Abhay Ashtekar in the fall of 1995, as well as from
many discussions with Abhay Ashtekar and Alejandro Corichi.
I also need to acknowledge illuminating conversations, suggestions and
encouragement from John Backer, John Baez,
Chris Beetle, Louis Crane, Kiril Krasnov, Seth Major,
Guillermo Mena Marug\'an,
Jorge Pullin, Carlo Rovelli, Lee Smolin, Madhavan Varadarajan, Thomas
Thiemann, and the editorial help of Mary-Ann Hall.
Support was provided by Universidad Nacional Aut\'onoma de M\'exico (DGAPA),
and grants NSF-PHY-9423950, NSF-PHY-9396246, research funds of the Pennsylvania
State University, the Eberly Family research fund at PSU and the Alfred P.
Sloan foundation.
\section*{Appendix}
First we will prove lemma~\ref{Cknots}, and then indicate how the proof
can be extended to link our models and the refined version of the analytic
category that was mentioned in section~\ref{compari}.
Given an oriented PL graph $\vec{\c}_{PL} \subset |K|$ we will construct
an oriented combinatoric graph $\vec{\c}$ and a piecewise
linear homeomorphism
(PL map) $h: |K| \to |K|$ such that $h(|\vec{\c}|) = \vec{\c}_{PL}$.
The construction proceeds in the following steps.
\begin{enumerate}
\item \label{0}
Let $\vec{\c}_{PL}^\prime$ be a refinement of $\vec{\c}_{PL}$ such that
for every $\D(x_n) \in |K|$ and every $e \in E_{\c_{PL}^\prime}$,
the intersection $e \cap \D(x_n)$ is either empty or
linear according to the affine coordinates given by $\D(x_n)$.
\item \label{11}
Find $n$
such that $M_n(|Sd^n(K)|)$ separates the vertices of
$\c_{PL}^\prime$, making them
lie in different geometric simplices $M_n(\D(x_n))$, where
$\D(x_n) \in |Sd^n(K)|$. Namely, we choose $n$ as large as necessary to
obtain a refinement of $|K|$ fine enough that no two
different vertices $v_1, v_2 \in V_{\c_{PL}^\prime}$ of the PL graph
lie in the same simplex $M_{n}(\D(x_n))$.
\item \label{12}
Let $h_1:|K| \to |K|$ be the PL map that fixes the vertices of
$M_n(|Sd^n(K)|)$ and sends the new vertices $M_{n+1}(v(\D(x_n)))$
of $M_{n+1}(|Sd^{n+1}(K)|)$ to
\begin{enumerate}
\item $v\in V_{\c_{PL}^\prime}$ if $v$ lies in the
interior of $M_n(\D(x_n))$;
symbolically, $v \in (M_n(\D(x_n)))^{\circ}$.
\item the barycenter of $M_n(\D(x_n))$ if
there is no $v\in V_{\c_{PL}^\prime}$ such that
$v \in (M_n(\D(x_n)))^{\circ}$.
\end{enumerate}
\item \label{13}
Find $m$ such that
$h_1(M_{n+m}(|Sd^{n+m}(K)|))$ separates the edges of $\c_{PL}$.
Stated formally, find $m\geq 1$ such that
$\c_{PL}\cap h_1(M_{n+m}(\D(x_{n+m})))^{\circ}$
has one connected component or it is empty.
\item \label{14}
Let $h=h_2\circ h_1:|K| \to |K|$, where $h_2$ is
the PL map that fixes the vertices of
$h_1(M_{n+m}(|Sd^{n+m}(K)|))$ and sends the new vertices
$h_1(M_{n+m+1}(v(\D(x_{n+m}))))$
of $h_1(M_{n+m+1}(|Sd^{n+m+1}(K)|))$ to
\begin{enumerate}
\item the barycenter of
$\c_{PL}\cap h_1(M_{n+m}(\D(x_{n+m})))$ if
$\c_{PL}\cap (h_1(M_{n+m}(\D(x_{n+m}))))^{\circ}\neq \emptyset$.
\item the barycenter of
$h_1(M_{n+m}(\D(x_{n+m})))$ if
$\c_{PL}\cap (h_1(M_{n+m}(\D(x_{n+m}))))^{\circ}= \emptyset$.
\end{enumerate}
\end{enumerate}
From the construction of $h\circ M_{n+m} : |Sd^{n+m}(K)| \to |K|$
it is immediate that $(h\circ M_{n+m})^{-1} (\c_{PL}) = |\c_{n+m}|$ if
$\c_{n+m}\subset Sd^{n+m}(K)$ is defined by
\begin{itemize}
\item The zero-dimensional simplex $p \in Sd^{n+m}(K)$ belongs to
$\c_{n+m}$ if $(h\circ M_{n+m})^{-1} (\c_{PL}) \cap |p| \neq \emptyset$.
\item The one-dimensional simplex $e \in Sd^{n+m}(K)$ belongs to
$\c_{n+m}$ if
$(h\circ M_{n+m})^{-1}(\c_{PL})\cap |e|^{\circ} \neq \emptyset$.
\end{itemize}
Then the obvious orientation of $\c_{n+m}$ defines the oriented
combinatoric graph $\vec{\c}$ and the pair
$h$, $\vec{\c}$ satisfies
\be h(|\vec{\c}|) = \vec{\c}_{PL}
\ee
$\Box$
To link the combinatoric and the analytic categories we need to fix a
map $N_0: |K| \to \S_{P\o}$ that assigns a piecewise
analytic curve in $\S_{P\o}$ to every PL curve of $|K|$.
Then the map $N: Cyl_C \to Cyl_\o$ defined by
\be
N (f_\c ):= f_{N_0 \circ M_n(|\c_n|)} = f_{N_0(|\c|)}
\ee
links the kinematical Hilbert spaces, and
the map $N^*: Cyl_\o^* \to Cyl_C^*$ links the spaces of
physical states of the analytic and combinatoric categories.
As argued in section~\ref{homeos}, $N^*$ is an isometry between
${\cal H}_{\rm diff_\o}$ and ${\cal H}_{\rm diff_C}$, which means that
the two Hilbert spaces are isomorphic if every knot-class of piecewise
analytic graphs $[\c_\o]$ has at least one representative that
comes from a combinatoric graph $N_0(| \c |) \in [\c_\o]$.
An extension of the lemma proved in this appendix solves the issue.
Given a piecewise analytic
graph $\c_\o \subset \S_{P\o}$ we can construct
a combinatoric graph $\c$ and a piecewise analytic map $\f$ such that
$\f \circ N_0 (|\c|) = \c_\o$. First find a refinement
$\c_\o^{\prime}$ of $\c_\o$ such that its edges are analytic according to
the domains of analyticity of $\S_{P\o}$. Then,
define a graph in $|K|$ by
$\a = N_0^{-1}(\c_\o^{\prime})$ and do steps (\ref{11}), (\ref{12})
and (\ref{13}) using $\a$ instead of $\c_{PL}$.
At this point $N_0 \circ h_1 \circ M_{n+m}(|Sd^{n+m}(K)|)$ separates
the edges of $\c_\o^{\prime}$; we only need to find a replacement for
step (\ref{14}). Our strategy is to find a map of the form
$\f= \f_2 \circ N_0 \circ h_1$ to solve the problem.
This would be achieved if the
piecewise analytic diffeomorphism $\f_2$ fixes the mesh given by
$N_0 \circ h_1 \circ M_{n+m}(|Sd^{n+m}(K)|)$ and at the same time
matches the mesh given by
$N_0 \circ h_1 \circ M_{n+m+1}(|Sd^{n+m+1}(K)|)$ and the graph
$\c_\o^{\prime}$. The map $\f_2$ needs to send every cell
$N_0 \circ h_1 \circ M_{n+m}(\D(x_{n+m}))$ to itself and
match the graph with analytic edges. An explicit construction
would be cumbersome, but the existence of such a
{\em piecewise} analytic map is clear. After this is completed, the
construction of the combinatoric graph follows the instructions
given above to link the combinatoric and PL categories.
\section{INTRODUCTION}
Recently, utilizing molecules to build electronic devices at the single-molecule level has received much attention and is considered a promising direction for future nanoelectronics.\cite{thiele2014electrically,schedin2007detection,nozaki2010engineering,joachim1997electromechanical,selzer2006single}
Owing to developments in self-assembly and nanofabrication techniques, it has become possible to fabricate practical molecular devices.\cite{tien1997microfabrication,xu1997nanometer}
To date, various functional molecular devices, as well as interesting electronic features, have been proposed, such as diodes, logic operators, and negative differential resistance.\cite{chen2020conductance,wang2020theoretical,thiele2014electrically,dias2021investigation,niu2015phonon,min2021multifunctional,gu2018recent,li2021designing,song2021first,antonova2017negative,kobashi2017negative,rahighi2021all}
On the other hand, spintronics, which exploits the spin degree of freedom of electrons, is also a rapidly emerging field for future devices.\cite{hao2018spin,yang2019spin,peng2019electrically}
Thus, combining molecular electronics and spintronics would no doubt offer more possibilities for device design.
Among all kinds of molecular structures, carbon-based ones, owing to their peculiar atomic orbitals and bonding types, exhibit various electronic behaviors and have attracted increasing attention as competitive candidates for next-generation electronic devices.\cite{stankovich2007synthesis,wang2010large,baughman2002carbon,howard1991fullerenes,zhang2013spin}
It has been reported that, with the tools of chemical synthesis, the ultimate goal of miniaturization in nanostructure design can be reached: Chanteau et al.\cite{chanteau2003synthesis} successfully demonstrated the synthesis of 2-nm-tall carbon-based anthropomorphic molecules in monomeric, dimeric, and polymeric forms.
Interestingly, these anthropomorphic molecules can be synthesized in a variety of desired geometries, e.g., with different heads (chef, jester, baker, etc.) and postures (dancing, raising hands, holding hands, etc.).
This means that molecular synthesis can reach the atomic limit, providing exciting opportunities for device design.
Both in theory and in experiment, it has been found that conformational changes in molecules may alter their electronic behavior.
In particular, the twisting of carbon-based units can modify the p-p coupling.\cite{venkataraman2006dependence,larsson1981electron,woitellier1989possibility,guo2013conformational,ma2010low} In experiment, such rotation can be realized in many ways.\cite{tierney2011experimental,leoni2011controlling}
Based on these findings, various functional devices have been proposed, such as switches, amplifiers, and logic operators.
However, in these devices the rotating parts are mainly located in the transport branch and serve as the key bridge for electronic transmission. Such a geometry usually limits the application scenarios of the device, as rotating the main part of the system may not be allowed or convenient.
Previous studies showed that side-contacted molecules can also effectively influence the electronic transport of a nanosystem.\cite{chowdhury2011graphene}
Accordingly, together with the progress in atomic-level nanofabrication, it should be possible to modulate electronic transport by rotating side-bonded functional groups in a single-molecule device, much like a valve in a pipeline. Studies of this scheme are still lacking, especially regarding spin-related features.
Such a device setup does not disturb the main structure of the configuration, offering advantages for manipulating electronic transport.
In the present work, we investigate the spin-dependent electronic transport of graphene nanoflakes with side-bonded functional groups, using density functional theory (DFT) combined with the nonequilibrium Green's function (NEGF) method.
{\color{black}{{ The two-probe systems are constructed by contacting the nanoflakes with atomic carbon chain electrodes. }}}
It is found that the transmission at the Fermi level can be switched between completely polarized and unpolarized states by rotating the functional group. Moreover, the transition between spin-up and spin-down polarized states can also be achieved, so that the device operates as a dual-spin filter. Further analysis shows that the rotation shifts the density of states, which shifts the transmission peaks and in turn changes the spin polarization. This is an intrinsic feature of the system, robust against the nanoflake size and the electrode material, indicating great application potential.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{Figure1.eps}
\caption{\label{structure}
(Color online) (a1)-(a4) The functional groups of NanoJester, NanoChef, NanoChefNO, and NanoChefBN, respectively. (a5) The illustration of combining a graphene nanoflake and a functional group to form C32J. (b1)-(b6) The geometries of C32J, C13J, C32J2, C32C, C32CNO, and C32CBN, respectively. (c1)-(c6) The geometries of C24AJ, C40AJ, C40SJ, C48J, C32C2, and C32CNO2, respectively. (d1)-(d7) The illustration of rotating the functional group in C32J. (e1)-(e2) The two-probe systems of C32J contacted with atomic carbon chain and Au nanowire electrodes, respectively. {\color{black}{{ The magnetic moments are mainly located on six carbon atoms, denoted by the dashed circles in (e1), with the corresponding magnetic moments given alongside (the first and second values correspond to the 0$^\circ$ and 90$^\circ$ rotation-angle cases, respectively; the unit of $\mu_{\text{B}}$ is omitted). }}}
}
\end{figure*}
\section{COMPUTATIONAL METHOD}
The calculations in the present work are carried out by combining DFT and NEGF, as implemented in the Atomistix Toolkit (ATK) package.\cite{taylor2001ab,brandbyge2002density,datta2000nanoscale,cohen2008insights} The electron exchange-correlation functional is treated within the generalized gradient approximation (GGA) in the form of Perdew, Burke, and Ernzerhof (PBE).\cite{perdew1996generalized,perdew1992atoms,tao2003climbing} The medium basis set with PseudoDojo pseudopotentials is used.\cite{van2018pseudodojo}
Sufficient vacuum (more than 10 {\AA}) is included in the supercell to eliminate interactions between adjacent images.
The geometries are fully optimized until all forces are less than 0.02 eV/{\AA}. A mesh cut-off energy of 150 Ry and a 1$\times$1$\times$100 \emph{k}-point mesh in the Monkhorst-Pack scheme are employed.
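For completeness, we recall that within the NEGF framework the spin-resolved transmission is obtained from the standard Landauer-type expression (a textbook relation, quoted here as a reminder rather than as a detail specific to the ATK implementation):
\begin{equation}
T_{\sigma}(E)=\mathrm{Tr}\left[\Gamma_{L,\sigma}(E)\,G^{r}_{\sigma}(E)\,\Gamma_{R,\sigma}(E)\,G^{a}_{\sigma}(E)\right],
\end{equation}
where $G^{r}_{\sigma}$ ($G^{a}_{\sigma}$) is the retarded (advanced) Green's function of the central region for spin $\sigma$, and $\Gamma_{L,\sigma}$ and $\Gamma_{R,\sigma}$ are the broadening matrices describing the coupling to the left and right electrodes.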
\section{RESULTS AND DISCUSSIONS}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure2.eps}
\caption{\label{C32J-trans}
(Color online) The transport spectra for two-probe systems in which the molecules are contacted with atomic carbon chain electrodes. (a) for graphene nanoflake C32 (H is omitted in the configuration name). (b)-(h) for C32J with different rotation angles of the NanoJester functional group; the angle is denoted in each inset.
For the sake of simplicity, the electrodes are omitted in the figure, so as the following figures.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure3.eps}
\caption{\label{SP-angle}
(Color online) The spin polarization at $E_F$ varies with the rotation angle for (a) C32J, (b) C32J2, (c) C32J2, (d) C40AJ, (e) C32J (with Au electrodes), and (f) C48J systems, respectively. For C32J2 in (b), the two functional groups are rotated in the same direction, and for C32J2 in (c), they are rotated in the opposite directions.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure4.eps}
\caption{\label{DDOS-C32J}
(Color online) Device density of states for two-probe systems. (a) for graphene nanoflake C32. (b)-(h) for C32J with different rotation angles of NanoJester.
}
\end{figure}
To date, various carbon-based nanostructures have been synthesized, e.g., graphene, nanotubes, and fullerenes. Among them, graphene is considered one of the most promising candidates for future nanodevices. However, it exhibits no spontaneous magnetism. Both theoretical and experimental investigations show that cutting graphene to create edges, especially zigzag ones, is a feasible way to acquire a spontaneous magnetic moment. With the continuing miniaturization of electronics, devices at the single-molecule level are urgently needed. Therefore, in the present work, we cut graphene into nanoflakes with zigzag edges to explore the modulation of spin transport in them. To realize effective modulation while disturbing the nanoflake as little as possible, carbon-based functional groups are bonded to the side of the flake.
In recent years, considering the fabrication and operation of devices, pure-carbon configurations have attracted increasing attention, as they possess good stability.
Thus, we choose atomic carbon chains as electrodes to contact the nanoflakes.
The atomic carbon chain is a perfect kind of electrode with good conductivity,\cite{csahin2008first} and it forms a good contact with the graphene nanoflake. This helps exclude other electronic effects and facilitates revealing the intrinsic features of the system.
Following these considerations, the nanoflake denoted C32 is constructed, shown in the left panel of Fig.~\ref{structure}(a5).
It can be seen as being cut from a zigzag graphene nanoribbon.
To mimic the fabrication process in experiment, the top and bottom edges of the nanoflake are hydrogenated. To mimic the cutting process, the left and right edges are left unhydrogenated, which also facilitates the contact with the electrodes.
{\color{black}{{
According to the experimental studies,\cite{chanteau2003synthesis} we here choose two kinds of functional groups, i.e., NanoJester and NanoChef, shown in Fig.~\ref{structure}(a1) and (a2) respectively.
Figure~\ref{structure}(a5) illustrates the combination of graphene nanoflake and NanoJester, and for clarity, the whole structure is denoted as C32J, shown in Fig.~\ref{structure}(b1).
The final two-probe setup of C32J is shown in Fig.~\ref{structure}(e1).
And Fig.~\ref{structure}(d1)-(d7) show how to rotate the functional group. In each figure, there is also a side view of the configuration.
}}}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure5.eps}
\caption{\label{Fitting}
(Color online) (a)-(b) Fitting functions for the energy split-angle relation in C32J and C13J systems, respectively. The energy split refers to the energy difference between spin-up and spin-down transmission peaks around $E_F$, see Fig.~\ref{C32J-trans} and Fig.~\ref{C13J-trans}.
The fitting function and parameters are presented in the figure, and both curves follow the sin$^2$($\theta$) relation.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure6.eps}
\caption{\label{LDDOS-C32J}
(Color online) Local density of states of the spin-down component at the Fermi level for two-probe systems. (a) for graphene nanoflake C32. (b)-(d) for C32J with rotation angles of 15$^\circ$, 60$^\circ$ and 90$^\circ$, respectively.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure7.eps}
\caption{\label{C13J-trans}
(Color online) The transport spectra for molecules contacted with atomic carbon chain electrodes. (a) for graphene nanoflake C13. (b)-(h) for C13J with different rotation angles of NanoJester.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure8.eps}
\caption{\label{DDOS-C13J}
(Color online) Device density of states for two-probe systems. (a) for graphene nanoflake C13. (b)-(h) for C13J with different rotation angles of NanoJester.
}
\end{figure}
To figure out the influence of the rotation on the electronic transport, we calculate the spin-dependent transmission spectra for the two-probe system of C32J at different rotation angles, shown in Fig.~\ref{C32J-trans}(b)-(h). For comparison with the bare nanoflake, the transmission spectrum of C32 without the functional group is also calculated, shown in Fig.~\ref{C32J-trans}(a).
For the bare nanoflake, the transport is finite and spin-unpolarized at $E_F$, see Fig.~\ref{C32J-trans}(a), and only slightly spin-polarized above $E_F$. However, when the NanoJester is contacted with the nanoflake, the transport becomes quite different, see Fig.~\ref{C32J-trans}(b). Around $E_F$, the transmission is enhanced overall, and two transmission peaks emerge in this region, belonging to opposite spin components. More importantly, these two peaks are split in energy, resulting in a large spin polarization at $E_F$, where the spin-down transmission reaches its maximum while the spin-up transmission lies near the bottom of its valley.
Next, we rotate the functional group of C32J, with the rotation angle varying from 0$^\circ$ to 90$^\circ$. The corresponding transmission spectra are shown in Fig.~\ref{C32J-trans}(b)-(h). Interestingly, as the rotation angle increases, the spin-up transmission peak gradually shifts to the left while the spin-down one shifts to the right.
Note that at $\theta=0^\circ$ the spin-up transmission peak lies farther from $E_F$ than the spin-down one. As a result, at $\theta=45^\circ$ the spin-up peak has shifted completely away from $E_F$, while the spin-down one remains there. The transmission thus becomes completely spin-polarized, and the system operates as a spin filter. Moreover, the completely spin-polarized transmission extends over a wide energy range around $E_F$, which is quite beneficial for practical applications.
When $\theta>45^\circ$, the two peaks continue to shift in opposite directions, but at the same time both decrease gradually, so that at $\theta=90^\circ$ the spin-down transmission at $E_F$ drops to a small value. Surprisingly, the spin-up transmission at $E_F$ increases for $\theta>45^\circ$ compared with that at $\theta=45^\circ$, and reaches almost the same value as the spin-down one at $\theta=90^\circ$. Thus, the transmission becomes spin-unpolarized again at $E_F$, just like the non-NanoJester case in Fig.~\ref{C32J-trans}(a).
{\color{black}{{In a wide energy range around $E_F$, the transmission spectra of the non-NanoJester and $\theta=90^\circ$ cases are similar, see Fig.~\ref{C32J-trans}(a) and (h).
For energies farther above and below $E_F$, however, the transmission differs between the $0^\circ$ and $90^\circ$ cases.
Note that for $\theta=90^\circ$ the planes of the nanoflake and the NanoJester are perpendicular to each other. Such a $\theta=90^\circ$ effect may therefore result from the destruction of the $\pi$-conjugation between the nanoflake and the NanoJester,\cite{venkataraman2006dependence,guo2013conformational} as well as from the distribution of the magnetic moments (discussed below) and the spatial symmetry of the whole system.
}}}
{\color{black}{{
To examine the magnetism of the system, we perform a Mulliken population analysis for rotation angles of 0$^\circ$ and 90$^\circ$. It is found that the magnetic moments are mainly located on six carbon atoms, while those on the other atoms are quite small. These six atoms are marked in Fig.~\ref{structure}(e1) with their magnetic moments given alongside (the first and second values correspond to the 0$^\circ$ and 90$^\circ$ cases, respectively, with the unit of $\mu_{\text{B}}$ omitted). The magnetic moments of the four corner atoms of the nanoflake are around 0.94 $\mu_{\text{B}}$; these are in fact the origin of the spin-dependent transport in the two-probe system, and they change very little from 0$^\circ$ to 90$^\circ$. In contrast, the magnetic moments of the two carbon atoms in the functional group change considerably from 0$^\circ$ to 90$^\circ$, suggesting a modulation effect on the spin-related behaviors. For instance, the magnetic moment of the top carbon atom changes from 0.116 to 0.353 $\mu_{\text{B}}$ as the rotation angle varies from 0$^\circ$ to 90$^\circ$.
}}}
During the rotation, the spin-polarization state of the transmission changes. To observe this more clearly, we plot the spin polarization as a function of the rotation angle in Fig.~\ref{SP-angle}(a).
The spin polarization is defined as SP$=(T_{up}-T_{down})/(T_{up}+T_{down})$, where SP$=0$ means spin-unpolarized and SP$=\pm100\%$ means completely spin-polarized. For C32J in Fig.~\ref{SP-angle}(a), SP varies from almost 0 to $-100\%$. That is, the spin polarization of a single-molecule system can be modulated simply by rotation. Not only can the transition between completely polarized and unpolarized states be achieved, but any intermediate SP value is also expected to be accessible by finely tuning the rotation angle.
These findings may enable many other novel devices.
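For illustration, the SP measure defined above can be evaluated with a minimal Python sketch; the function name and the sample transmission values are hypothetical, not data from this work.

```python
def spin_polarization(t_up, t_down):
    """SP = (T_up - T_down) / (T_up + T_down), in percent.

    Returns 0 for a spin-degenerate channel and +/-100 for a
    completely polarized one."""
    total = t_up + t_down
    if total == 0.0:
        return 0.0  # no transmission at all; report 0 rather than divide by zero
    return 100.0 * (t_up - t_down) / total

# Hypothetical transmissions at E_F (illustrative values only):
print(spin_polarization(0.0, 0.8))   # completely spin-down polarized -> -100.0
print(spin_polarization(0.5, 0.5))   # unpolarized -> 0.0
```

With this sign convention, the negative SP reported for C32J corresponds to spin-down-dominated transmission.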
{\color{black}{{
Apparently, it is the rotation-induced shift of the transmission peaks that causes the variation of spin polarization. }}}
To trace the origin of the shift, we calculate and plot the device density of states (DDOS) of the two-probe C32J system, shown in Fig.~\ref{DDOS-C32J}, where panels (a)-(h) correspond to Fig.~\ref{C32J-trans}(a)-(h), respectively.
From Fig.~\ref{DDOS-C32J}(a), for the non-NanoJester case, the DDOS is spin-unpolarized at $E_F$, the same as the transmission in Fig.~\ref{C32J-trans}(a). When the NanoJester is bonded, however, two DDOS peaks emerge around $E_F$, see Fig.~\ref{DDOS-C32J}(b), at the same energies as the transmission peaks in Fig.~\ref{C32J-trans}(b).
Upon rotating the NanoJester, the two DDOS peaks shift in opposite directions, just like the transmission peaks, see Fig.~\ref{DDOS-C32J}(b)-(h). Evidently, it is these peak states in the DDOS that contribute the transmission peaks around $E_F$, and hence also their shift.
Although for some angles, e.g., 45$^\circ$-90$^\circ$, small peaks arise around $E=-0.3$ eV, these states are not delocalized enough to contribute to the electronic transmission and do not induce transmission peaks there.
As mentioned above, when the rotation angle increases, the two transmission peaks with opposite spins split and shift in opposite directions. To see this more clearly, we plot the energy split between the two peaks as a function of the angle $\theta$ in Fig.~\ref{Fitting}(a). The cases in the range $\theta\in[-\pi/2, 0]$ are also calculated and plotted. Over the whole range $\theta\in[-\pi/2, \pi/2]$, the energy split does not change monotonically: it first decreases and then increases with increasing $\theta$, reaching its minimum at $\theta=0^\circ$.
Previous studies showed that, in a p-conjugated system, when the rotation angle varies, the p-p coupling strength changes as the square of a trigonometric function, e.g., cos$^2(\theta)$.\cite{woitellier1989possibility,larsson1981electron,guo2013conformational}
In our systems, although the variation of the p-p coupling occurs at the side of the nanoflake, regular relationships between the angle and the transmission behavior may still exist.
Here, we fit the data points with the square of a sine function, as shown in Fig.~\ref{Fitting}(a).
{\color{black}{{
Obviously, the curve follows the sin$^2(\theta)$ relation quite well.
This indicates that, in our systems, the rotation effectively modulates the p-p coupling, which results in a shift of the orbitals. Such a rotation-triggered orbital shift induced by a functional group has been observed experimentally in carbon-based systems.\cite{senge2000molecular} In our systems, the molecule is
contacted by two semi-infinite electrodes, so the molecular orbitals are broadened and contribute to the DDOS of the two-probe system. Consequently, besides shifting the molecular states, the rotation also shifts the DDOS, as characterized by the movement of the peaks.
For such a transport system, the electron transmission is mainly determined by the energy states of the middle molecule, which acts like a bridge and provides the transmission channels.
Thus, the distribution of the states, i.e., the DDOS, determines the transport behavior, and the DDOS and transmission spectra generally show a one-to-one correspondence. As a result, the rotation finally results in a shift of the transmission spectra.
In our system, this shift is spin-dependent: the spin-up and spin-down energy states may not shift synchronously, and may even shift in opposite directions.
This asynchronous shift results in a split between the spin-up and spin-down DDOS peaks, as well as between the corresponding transmission peaks.
As is well known, a split of the transmission peaks between different spins triggers spin-polarized transport, which can be utilized to build spin devices, especially given the large spin polarization at the Fermi level.}}} Moreover, these findings may also shed light on the development of nanomechanics-related spintronics.
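The sin$^2(\theta)$ fit of the energy split can be reproduced numerically; since the model $A\sin^2\theta+B$ is linear in $\sin^2\theta$, an ordinary least-squares fit suffices. The following Python sketch uses synthetic sample values, not the actual data of the figure.

```python
import numpy as np

# Energy split between spin-up and spin-down transmission peaks,
# modeled as  split(theta) = A*sin^2(theta) + B.
# The model is linear in x = sin^2(theta), so a degree-1 polyfit suffices.

def fit_sin2(theta, split):
    x = np.sin(theta) ** 2
    a, b = np.polyfit(x, split, 1)  # split ~ a*x + b
    return a, b

# Hypothetical (angle, split) samples, NOT data from this work:
theta = np.linspace(-np.pi / 2, np.pi / 2, 13)
split = 0.20 * np.sin(theta) ** 2 + 0.05  # synthetic ground truth

a, b = fit_sin2(theta, split)
print(f"A = {a:.3f} eV, B = {b:.3f} eV")  # recovers A = 0.200, B = 0.050
```

In practice, one would feed in the computed split values at each rotation angle and judge the quality of the fit, as done in the figure.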
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure9.eps}
\caption{\label{trans-C32J2}
(Color online) The transport spectra for molecules contacted with atomic carbon chain electrodes. (a) for graphene nanoflake C32. (b)-(h) for C32J2 with different rotation angles of NanoJesters. The two NanoJesters are rotated in the same direction.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure10.eps}
\caption{\label{trans-C32J2-45opposite}
(Color online) The transport spectra for molecules contacted with atomic carbon chain electrodes. (a) for graphene nanoflake C32. (b)-(d) for C32J2 with different rotation angles of NanoJesters, where the two NanoJesters are rotated in the same direction in (b) and in the opposite directions in (c).
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure11.eps}
\caption{\label{C32CBN-trans}
(Color online) (a)-(g) The transport spectra for C32CBN systems with different rotation angles of the NanoChefBN functional group, contacted by atomic carbon chain electrodes.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure12.eps}
\caption{\label{C32CBN-SP}
(Color online) (a)-(g) The spin polarization for C32CBN systems with different rotation angles of the NanoChefBN functional group, contacted by atomic carbon chain electrodes.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure13.eps}
\caption{\label{C40AJ-trans}
(Color online) (a)-(m) The transport spectra for C40AJ systems with different rotation angles of the NanoJester, contacted by atomic carbon chain electrodes.
}
\end{figure}
As discussed in the above, it is the shift of the DDOS peaks that results in the variation of DDOS at $E_F$, which causes the variation of the transmission at $E_F$. To confirm and to see the variation process more clearly, we take the spin-down component as an example and calculate the local density of the states (LDOS) of the system at $E_F$ for different rotation angles, shown in Fig.~\ref{LDDOS-C32J}. Note that, for the sake of comparison, the isovalues are the same for all the cases in that figure.
One finds, for all the configurations, the LDOS is continuous on both left and right electrodes, suggesting good conductivity of the electrodes. The difference happens at the middle molecular region.
When there is no functional group, there is only a quite small distribution of LDOS, corresponding to the contributed small transmission at $E_F$ in Fig.~\ref{C32J-trans}(a).
When the NanoJester is bonded with $\theta=15^\circ$, the LDOS on the molecule increases a lot and becomes more continuous along the molecule, see Fig.~\ref{LDDOS-C32J}(b). Thus, a large spin-down transmission emerges at $E_F$ in Fig.~\ref{C32J-trans}(c). When $\theta=60^\circ$, the LDOS decays considerably compared with that of $\theta=15^\circ$, and the spin-down transmission also decreases, see Fig.~\ref{LDDOS-C32J}(c) and Fig.~\ref{C32J-trans}(f).
However, when $\theta=90^\circ$, the LDOS on the molecule decays even more severely, and consequently the transmission decreases more heavily, see Fig.~\ref{LDDOS-C32J}(d) and Fig.~\ref{C32J-trans}(h).
In brief, the spin-down transmission at $E_F$ decreases gradually with the rotation of the NanoJester; the functional group here operates like a valve in a pipeline.
Next, we change the molecule in the two-probe system to see whether the above transport features persist.
Figure~\ref{structure}(b2) shows the C13J configuration, which is constructed by combining a smaller graphene nanoflake with a NanoJester. Contacting it with atomic carbon chain electrodes, we calculate the transport spectra, shown in Fig.~\ref{C13J-trans}. In the figure, one can see that bonding the functional group also opens an energy split between the spin-up and spin-down transmission peaks, located a little above the Fermi level.
Moreover, rotating the NanoJester increases the split, the same as in the C32J system.
To trace the origin of this modulation, the DDOS is calculated and plotted in Fig.~\ref{DDOS-C13J}. As in the C32J case, one finds that the DDOS peaks around $E_F$ shift in opposite directions for the two spins.
Obviously, it is this rotation-induced DDOS shift that results in the shift of the transmission peaks.
These results suggest that such a rotation-modulated variation of the energy split is an intrinsic feature of this kind of system.
From the morphology of the peaks, traces of Fano resonance can be found, which is common in such systems.\cite{csahin2008first,guo2010spin}
To check the effect of the p-p coupling mechanism in this configuration, we also fit the relation between the rotation angle and the energy split in the range $\theta\in[-\pi/2,\, \pi/2]$, shown in Fig.~\ref{Fitting}(b). Surprisingly, one finds that the curve again follows the $\sin^{2}(\theta)$ relation quite well.
Such systems thus show good robustness of these transport features, which is quite beneficial for practical applications.
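As an aside, a $\sin^{2}(\theta)$ fit of this kind is straightforward to reproduce. In the sketch below, the amplitude and the sampled splits are placeholders for illustration, not the computed values of this work; since the model is linear in the amplitude, ordinary least squares reduces to a dot-product ratio.

```python
import numpy as np

# Illustrative least-squares fit of an energy split to A * sin^2(theta).
# A_true and the synthetic "split" data are placeholders, not our DFT results.
theta = np.deg2rad(np.arange(-90, 91, 15))   # the fitting range [-pi/2, pi/2]
A_true = 0.12                                # hypothetical amplitude
split = A_true * np.sin(theta) ** 2          # stand-in for the computed splits

s = np.sin(theta) ** 2                          # the model A*s is linear in A, so the
A_fit = float(np.dot(split, s) / np.dot(s, s))  # least-squares solution is a ratio
assert abs(A_fit - A_true) < 1e-12
```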
{\color{black}{{
Note that the transmission spectra of C13 and C13J with $\theta=90^\circ$ differ, even around the Fermi level, see Fig.~\ref{C13J-trans}(a) and (h) respectively. This is unlike some other configurations, e.g., C32J, where the corresponding transmission spectra are similar, at least in the energy region around $E_F$. This indicates that, for such a C13 system, the width of the configuration might be an important factor. Indeed, from a structural point of view, C13 is more like a molecule than a nanoflake, and for such a small molecule every dimension, as well as every atom, plays an important role. In contrast, C32 and the other configurations preserve the main geometric characters of graphene nanoribbons, e.g., two zigzag or armchair edges; to some extent, they can be seen as finite graphene nanoribbons. They are therefore expected to inherit the electronic behaviors of graphene nanoribbons, where the edge morphology largely dominates the electronic structure and transport, especially for zigzag graphene nanoribbons. A more detailed and systematic investigation is needed to figure out the influence of the width on different nanoflakes, which we hope to carry out in the future.
}}}
Next, we investigate the influence of the number of functional groups on the transport, and construct the configuration of C32J2, shown in Fig.~\ref{structure}(b3), where there are two NanoJesters bonded on the graphene nanoflake (top and bottom ones).
When rotating the two NanoJesters (in the same direction with the same angle), the transmission spectra are calculated and plotted in Fig.~\ref{trans-C32J2}.
This time, the energy split of the transmission peaks around $E_F$ also emerges.
Interestingly, when $\theta\leq45^\circ$, a spin-up transmission peak dominates the transport at $E_F$, and the transmission is spin-up polarized (even nearly 100\% polarized in some cases). However, when $60^\circ\leq\theta\leq75^\circ$, a spin-down peak dominates the transport there, and the transmission changes to spin-down polarized. When $\theta=90^\circ$, the transport becomes almost spin-unpolarized. In brief, rotation can switch the transport at $E_F$ among completely spin-up polarized, completely spin-down polarized, and unpolarized states.
This feature would be very helpful in device design.
To see the variation more clearly, we plot the spin polarization at $E_F$ as a function of $\theta$, shown in Fig.~\ref{SP-angle}(b). From the figure, one can easily conclude that spin polarizations other than the above three cases (0, 100\%, and $-100\%$) can also be achieved by finely tuning the rotation angle, e.g., SP=80\%.
So, a tunable dual-spin filter can be realized with our system.
This would provide many possibilities for device fabrication.
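To illustrate how a target polarization could be dialed in, the sketch below inverts a sampled SP($\theta$) curve by linear interpolation on a monotone stretch. The sampled values are placeholders, not the computed SP of C32J2.

```python
import numpy as np

# Hypothetical SP(theta) samples on a monotone stretch between the
# spin-down-polarized and unpolarized regimes (placeholders, not real data).
theta = np.array([60.0, 75.0, 90.0])    # degrees
sp = np.array([-100.0, -40.0, 0.0])     # percent, increasing -> np.interp is valid

target = -80.0                          # e.g., aim for SP = -80%
theta_target = float(np.interp(target, sp, theta))
assert 60.0 < theta_target < 75.0       # the required angle lies inside the stretch
```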
It should be noted that there are two NanoJesters in the configuration, and their rotations could also be in opposite directions. As a demonstration, we calculate the transmission spectra for $\theta=45^\circ$ with the two groups rotated in opposite directions, shown in Fig.~\ref{trans-C32J2-45opposite}(c). For comparison, the transmission spectra of the configurations with $\theta=0^\circ$, $45^\circ$ (in the same direction), and $90^\circ$ are also presented, shown in Fig.~\ref{trans-C32J2-45opposite}(a), (b) and (d), respectively.
Apparently, the transmission spectra for the two kinds of $\theta=45^\circ$ cases are almost the same, especially around $E_F$. So, at least for those cases, the rotating directions of the two functional groups have little effect on the transport.
Besides the nanoflake, the functional group itself may also influence the electronic structure. So, we construct the functional groups NanoChef, NanoChefNO and NanoChefBN, shown in Fig.~\ref{structure}(a2)-(a4) respectively, and also bond them to the C32 nanoflake, shown in Fig.~\ref{structure}(b4)-(b6) respectively.
For the latter two functional groups, we change the two O atoms of (O, O) to (N, O) and (B, N) atoms, respectively. Different atoms could trigger electric polarization in the molecule, which would facilitate the rotation of the functional group by an electric field.
Their transmission spectra under rotation are calculated and plotted.
However, for C32C and C32CNO, the transmission spectra are (almost) spin-unpolarized at $E_F$ and do not change with the rotation angle, see Supporting Information. For C32CBN, the transmission spectra are completely spin-polarized over a large energy range around $E_F$, and they are also insensitive to the rotation, see Fig.~\ref{C32CBN-trans}.
Such a large energy range with complete spin polarization is quite useful for spintronic devices. To see it more clearly, we plot the spin polarization for all the angles, shown in Fig.~\ref{C32CBN-SP}.
One finds that, when $\theta<75^\circ$, the transmission exhibits $-100\%$ spin polarization over a wide energy window around $E_F$, regardless of how $\theta$ changes. This robustness makes the feature well suited for practical applications.
To explore the effect of the number of NanoChefs on the transport properties, we construct the C32C2 and C32CNO2 configurations, shown in Fig.~\ref{structure}(c5) and (c6), respectively. The transmission spectra are shown in Supporting Information. For these configurations, there is no rotation-induced shift effect, and the transport is quite insensitive to the rotation. There is also no large spin polarization around $E_F$; the transport is even completely spin-unpolarized in some cases.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure14.eps}
\caption{\label{C48J-trans}
(Color online) (a)-(g) The transport spectra for C48J systems with different rotation angles of NanoJesters, contacted by atomic carbon chain electrodes.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Figure15.eps}
\caption{\label{C32J-Au-trans}
(Color online) (a)-(g) The transport spectra for C32J systems with different rotation angles of NanoJesters, contacted by Au nanowire electrodes.
}
\end{figure}
Next, we investigate the influence of the {\color{black}{{length}}} of the nanoflake on the electronic properties, and we construct C24AJ, C40AJ, C40SJ and C48J configurations, shown in Fig.~\ref{structure}(c1)-(c4), respectively, where S stands for symmetric and A stands for asymmetric.
For all these cases, the rotation-induced shift effect still exists.
For the C24AJ and C40SJ configurations, the transmission spectra are almost spin-unpolarized or only slightly spin-polarized at $E_F$, shown in Supporting Information. For the C40AJ and C48J cases, the transmission spectra are shown in Fig.~\ref{C40AJ-trans} and Fig.~\ref{C48J-trans}, respectively. One finds that, for both cases, the spin polarization at $E_F$ can vary from almost 0 to $-100\%$ upon rotation. To see it more clearly, the SP as a function of $\theta$ is shown in Fig.~\ref{SP-angle}(d) and (e), respectively.
Obviously, the SP can be modulated by rotation, and for C40AJ there are two symmetric negative-SP regions, although the configuration itself is not symmetric, see Fig.~\ref{SP-angle}(d) and Fig.~\ref{structure}(c2).
For nanodevices in applications, metal electrodes are also widely used. Among metals, Au possesses excellent conductivity. To check the influence of metal electrodes on the transport, we take C32J as an example and contact it with Au nanowire electrodes, shown in Fig.~\ref{structure}(e2). The transport spectra are shown in Fig.~\ref{C32J-Au-trans}. Surprisingly, the rotation-modulated shift effect remains, suggesting it is an intrinsic feature of these molecular systems, robust to the electrode material.
More importantly, the transmission at $E_F$ is completely spin-polarized when $\theta=0^\circ$, and due to the shift mechanism the SP can reach $-100\%$ when rotating the NanoJester.
The variation of the SP under rotation is shown in Fig.~\ref{SP-angle}(f), and one can see that SP values spanning the whole range $[0,\, -100\%]$ can be achieved. By projecting the DDOS onto the orbitals of different parts of the system, it is found that the peak around $E_F$ is mainly contributed by p orbitals, especially those on the middle molecule, i.e., C32J, see Supporting Information.
The realization of modulating SP in metal electrode systems could largely expand the application potential of the device.
{\color{black}{{
For practical applications, the stability of the structure is quite crucial. For graphene flakes, plenty of theoretical and experimental investigations have been carried out on their stability.\cite{kuc2010structural,ricca2012infrared,wohner2014energetic,silva2010graphene,barnard2008thermal,ci2009graphene} From a geometric point of view, the configurations of C32, as well as C24, C40 and C48, can be seen as circular flakes, and C13 can be seen as a triangular flake. It is found that these two kinds of graphene flakes exhibit good stability, especially after hydrogenation.\cite{kuc2010structural,ricca2012infrared,wohner2014energetic,silva2010graphene} The functional groups we adopt have been synthesized in experiment and proved to be stable.\cite{chanteau2003synthesis} Moreover, edge functionalization of graphene flakes by such functional groups has also been reported to be stable, both in theory and in experiment.\cite{xiang2016edge,shao2020molecular,bellunato2016chemistry,sun2010soluble,dai2013functionalization}
}}}
\section{CONCLUSION}
In summary, through first-principles calculations, we investigate the spin-dependent electronic transport of
graphene nanoflakes with side-bonded functional groups, contacted by atomic carbon chain electrodes. It is found that, by rotating the functional groups, the spin polarization of the transmission at $E_F$ can be switched between completely spin-polarized and completely spin-unpolarized states. The transition between spin-up and spin-down polarized states can also be realized, operating as a dual-spin filter. Moreover, by tuning the rotation angle, any other partially spin-polarized state can be achieved. Further analysis shows that it is the spin-dependent shift of the DDOS peaks, caused by the rotation, that triggers the shift of the transmission peaks and then results in the variation of the spin polarization at $E_F$. This feature is found to be robust to the {\color{black}{{length}}} of the nanoflake and to the electrode material, showing great application potential.
These findings may shed light on the fabrication of nanoelectronic devices.
\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China (11705097, 11504178 and 11804158), the Natural Science Foundation of Jiangsu Province (BK20170895), and the Funding of Jiangsu Innovation Program for Graduate Education (KYCX21$\_$0709).
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
In 1961, Vickrey \cite{V61}, a Nobel laureate in Economics,
initiated auction theory. At the center of his work was the solution concept of Nash equilibrium \cite{N50} for auctions as non-cooperative games. This game-theoretical approach has shaped modern auction theory and also has a tremendous influence on other areas of mathematical economics \cite{H04}. In particular, Vickrey systematically investigated the first-price auction (or its equivalent, the Dutch auction),
the most common auction format in real business.
In the first-price auction, the auctioneer (seller) sells an indivisible item to $n$ potential bidders (buyers).
The rule is as simple as it sounds: All bidders simultaneously submit bids to the auctioneer (each of which is unknown to the other bidders); the highest bidder wins the item, paying his/her own bid.
Simple as the rule is, the bidders' optimal bidding strategies can be sophisticated.
From one bidder's own perspective, a higher bid means a higher payment on winning but also a better chance of winning.
Accordingly, this bidder's optimal bidding strategy depends on the competitive environment, which in turn is determined by the other bidders' bidding strategies. This situation is precisely a non-cooperative game, and the standard solution concept is Bayesian Nash equilibrium.
To get a better sense, let us show a warm-up example from Vickrey's original work.
\begin{example}[{\cite{V61}}]
\label{exp:intro:1}
Consider two bidders: Alice and Bob have (independent) uniform random values $v_{1},\, v_{2} \sim U[0,\, 1]$ and respectively bid $b_{1} = \frac{v_{1}}{2}$ and $b_{2} = \frac{v_{2}}{2}$. The value distributions and bidding strategies determine the (independent) bid distributions of bidders. In this example, they are (independent) uniform random bids $b_{1},\, b_{2} \sim U[0,\, \frac{1}{2}]$, whose CDFs are $B_{1}(b) = B_{2}(b) = \min(2b,\, 1)$.
By bidding $b \geq 0$, Alice wins with probability $\min(2b,\, 1)$ and gains a utility $(v_{1} - b)$ conditioned on winning. Her expected utility is $(v_{1} - b) \cdot \min(2b,\, 1)$, which is maximized when $b = \frac{v_{1}}{2}$.
Thus, her current strategy $b_{1} = \frac{v_{1}}{2}$ is optimal. By symmetry, Bob's current strategy $b_{2} = \frac{v_{2}}{2}$ is also optimal.
In sum, the above strategy profile $(b_{1},\, b_{2}) = (\frac{v_{1}}{2},\, \frac{v_{2}}{2})$ is an equilibrium, in a sense that no bidder can gain a better utility by deviating from his/her current strategy.
\end{example}
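The best-response claim in the example is easy to check numerically: against a uniform opponent bid on $[0,\, \frac{1}{2}]$, Alice's expected utility $(v - b) \cdot \min(2b,\, 1)$ should peak at $b = v/2$. A quick grid-search sanity check (not part of the original argument):

```python
import numpy as np

def alice_utility(v, b):
    # Win probability against a U[0, 1/2] opponent bid is min(2b, 1);
    # the conditional gain on winning is (v - b).
    return (v - b) * np.minimum(2 * b, 1.0)

bids = np.linspace(0.0, 1.0, 100001)
for v in (0.2, 0.5, 0.9):
    best = float(bids[np.argmax(alice_utility(v, bids))])
    assert abs(best - v / 2) < 1e-3   # the optimum sits at b = v/2
```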
For clarity, let us formalize the model. Each bidder $i \in [n]$ independently draws his/her value from a distribution $v_{i} \sim V_{i}$.
Only with this knowledge and depending on his/her own strategy $s_{i}$, bidder $i$ submits a possibly random bid $b_{i} = s_{i}(v_{i})$.
Then over the randomness of the other bidders' values $\bv_{-i} = (v_{k})_{k \neq i}$ and strategies $\bs_{-i} = \{s_{k}\}_{k \neq i}$, bidder $i$ wins with probability $\mathrm{x}_{i}(b_{i}) \in [0,\, 1]$ and gains an expected utility $u_{i}(v_{i},\, b_{i}) = (v_{i} - b_{i}) \cdot \mathrm{x}_{i}(b_{i})$.
\begin{definition}[Equilibria]
\label{def:bne}
A strategy profile $\bs = \{s_{i}\}_{i \in [n]}$ is a {\textsf{Bayesian Nash Equilibrium}} when: For each bidder $i \in [n]$ and any possible value $v \in \supp(V_{i})$, the current strategy $s_{i}(v)$ is optimal, namely $\Ex_{s_{i}} \big[ u_{i}(v,\, s_{i}(v)) \big] \geq u_{i}(v,\, b)$ for any deviation bid $b \geq 0$.
\end{definition}
\Cref{exp:intro:1} is special in that bidders have identically distributed values. This {\em symmetric} setting is well understood: The first-price auction has a {\em unique} Bayesian Nash equilibrium \cite{CH13}, which is {\em fully efficient} -- The bidder with the highest value always wins the item.
Instead, the current trend focuses more on the {\em asymmetric} setting, where bidders' values are distinguished by their distributions.
Again, let us get a better sense through two concrete examples.
\begin{example}
\label{exp:intro:2}
Consider two bidders: Alice has a fixed value $v_{1} \equiv 2$ and always bids $s_{1}(v_{1}) \equiv 1$. Bob has a uniform random value $v_{2} \sim U[0,\, 1]$ and {\em truthfully} bids his value $s_{2}(v_{2}) = v_{2}$, namely the distribution $B_{2}(b) = \min(b,\, 1)$.
By bidding $b \geq 0$, Alice gains an expected utility $(v_{1} - b) \cdot \min(b,\, 1)$, for which her current strategy $s_{1}(v_{1}) \equiv 1$ is optimal.
Bob cannot gain a positive utility because his value $v_{2} \sim U[0,\, 1]$ is at most Alice's bid $s_{1}(v_{1}) \equiv 1$; thus his current strategy $s_{2}(v_{2}) = v_{2}$ is also optimal.
In sum, this strategy profile $(s_{1}, s_{2})$ is an equilibrium.
Unlike the symmetric setting, this auction game has {\em multiple} equilibria.
E.g., it is easy to verify that the same strategy $s_{1}(v_{1}) \equiv 1$ for Alice and a different strategy $\tilde{s}_{2}(v_{2}) = \max(\frac{2v_{2} - 1}{v_{2}},\, 0)$ for Bob (namely the distribution $\tilde{B}_{2}(b) = \frac{1}{2 - b}$ for $b \in [0,\, 1]$) are another equilibrium.
\end{example}
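Both equilibria in the example can be verified numerically. In the first, Alice's utility $(2 - b) \cdot \min(b,\, 1)$ peaks at $b = 1$; in the second, the CDF $\tilde{B}_{2}(b) = \frac{1}{2 - b}$ makes her exactly indifferent on $[0,\, 1]$, so $b = 1$ remains optimal. A quick check:

```python
import numpy as np

bids = np.linspace(0.0, 1.0, 100001)

# First equilibrium: Bob bids truthfully, so B2(b) = min(b, 1) on [0, 1].
u_truthful = (2.0 - bids) * np.minimum(bids, 1.0)
assert abs(float(bids[np.argmax(u_truthful)]) - 1.0) < 1e-3   # optimum at b = 1

# Second equilibrium: Bob's bid CDF is 1 / (2 - b) on [0, 1].
u_alt = (2.0 - bids) * (1.0 / (2.0 - bids))
assert np.allclose(u_alt, 1.0)   # Alice is indifferent, so b = 1 stays optimal
```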
The both equilibria $(s_{1},\, s_{2})$ and $(s_{1},\, \tilde{s}_{2})$ given in \Cref{exp:intro:2} have two features:
(i)~The strategies $s_{1}(v_{1})$, $s_{2}(v_{2})$ and $\tilde{s}_{2}(v_{2})$ are {\em pure strategies} -- Each of them has no randomness, just the function of a value.
(ii)~The both equilibria are {\em fully efficient} akin to the symmetric setting -- Alice always has the highest value $\equiv 2$ and always wins the item.
However, neither feature is guaranteed in the asymmetric setting, as illustrated by the following example, which only slightly modifies \cref{exp:intro:2} by changing Alice's fixed value from $2$ to $1$.
\begin{example}
\label{exp:intro:3}
Consider two bidders: Alice has a fixed value $v_{1} \equiv 1$ and bids $s_{1}(v_{1}) \sim B_{1}$ following the distribution $B_1(b) = \frac{\exp((4b - 3) / (2b - 1))}{2\sqrt{4b^{2} - 4b + 1}}$ for $b \in [\frac{1}{2},\, \frac{3}{4}]$. Bob has a uniform random value $v_{2} \sim U[0,\, 1]$ and bids $s_{2}(v_{2}) = \max(\frac{4v_{2} - 1}{4v_{2}},\, 0)$, namely the distribution $B_2(b) = \frac{1}{4 - 4b}$ for $b \in [0,\, \frac{3}{4}]$.
By bidding $b \geq 0$, Alice gains an expected utility $(v_{1} - b) \cdot B_{2}(b)$, which is maximized $= \frac{1}{4}$ anywhere between $b \in [0,\, \frac{3}{4}]$, so her current strategy $s_{1}(v_{1}) \sim B_{1}$ is optimal. By elementary calculus, we can see that Bob's current strategy $s_{2}(v_{2}) = \max(\frac{4v_{2} - 1}{4v_{2}},\, 0)$ also is optimal. In sum, this strategy profile $(s_{1}, s_{2})$ is an equilibrium.
\end{example}
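These closed forms can be sanity-checked numerically. Note that $2\sqrt{4b^{2} - 4b + 1} = 2(2b - 1)$ on $[\frac{1}{2},\, \frac{3}{4}]$, and that Bob's claimed best response for $v > \frac{1}{2}$ is $b^{*} = 1 - \frac{1}{4v}$ (for $v \leq \frac{1}{2}$, every bid yields utility at most $0$). A grid check:

```python
import numpy as np

# Alice: with B2(b) = 1/(4 - 4b) on [0, 3/4], her utility (1 - b) * B2(b) is flat at 1/4.
b = np.linspace(0.0, 0.75, 10001)
assert np.allclose((1.0 - b) / (4.0 - 4.0 * b), 0.25)

# Bob: with B1(b) = exp((4b - 3)/(2b - 1)) / (2 * (2b - 1)) on (1/2, 3/4],
# his best response for v > 1/2 should be b* = 1 - 1/(4v).
bb = np.linspace(0.5001, 0.75, 200001)
B1 = np.exp((4 * bb - 3) / (2 * bb - 1)) / (2 * (2 * bb - 1))
for v in (0.6, 0.8, 1.0):
    best = float(bb[np.argmax((v - bb) * B1)])
    assert abs(best - (1 - 1 / (4 * v))) < 1e-3
```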
In \Cref{exp:intro:3}: Alice has a {\em mixed strategy} -- A fixed value $v_{1} \equiv 1$ but a random bid $s_{1}(v_{1}) \sim B_{1}$.
Furthermore, the equilibrium $(s_{1},\, s_{2})$ is {\em not} fully efficient. E.g., with a value $v_{2} = \frac{3}{4}$, which is not the highest value ($< v_{1} \equiv 1$), Bob bids $s_{2}(v_{2}) = \frac{2}{3}$ and wins with probability $B_{1}(\frac{2}{3}) = \frac{3}{2e} \approx 0.5518$. Indeed, \Cref{exp:intro:3} has infinitely many equilibria, among which the given one $(s_{1}, s_{2})$ has the relatively ``simplest'' format. But none of those equilibria is a pure equilibrium or is fully efficient, even though \cref{exp:intro:3} is only a minor modification of \cref{exp:intro:2}.
From the above examples, we observe that Bayesian Nash equilibria
can be very complicated and sensitive to the value distributions, despite the intrinsically simple rule of the first-price auction. After an extensive study of more than 60 years, the first-price auction and its equilibria remain the centerpiece of modern auction theory and have spawned a rich literature; see \cite{SZ90,L06,L96,L99,MR00a,MR00b,JSSZ02,MR03,HKMN11,CH13,ST13,S14,HHT14,FLN16,HTW18,WSZ20,FGHLP21} etc. These efforts are justified, since the study of the first-price auction and its equilibria is both theoretically challenging and practically important.
Among various aspects of the first-price auction, {\em efficiency} at equilibria is of primary interest. In economics, efficiency measures to what extent a resource can be allocated to the persons who value it the most, thus maximizing the {\em social welfare}, particularly in a competitive environment.
As shown above (\Cref{exp:intro:3}), the first-price auction generally is not fully efficient at an equilibrium: The winner has the highest {\em bid} but possibly not the highest {\em value}; this crucially depends on both (i)~the instance $\bV$ itself and (ii)~which Bayesian Nash equilibrium $\bs \in \bbBNE(\bV)$ it falls into.
Earlier works in economics focus on (generalizing) the conditions for the value distribution $\bV = \{V_{i}\}_{i \in [n]}$
that guarantee the full efficiency.
\begin{comment}
\vspace{2cm}
\color{blue}
The study of different properties about the BNEs for the first-price auction is an important and active research area both in economics and algorithmic game theory communities\cite{}. These studies are both theoretically challenging and also practically relevant. Among these, the efficiency of the Auction (and the equilibrium) is a central question. Auctions are commonly used to allocate recourse in a competitive environment.
In economics, efficiency means that the recourse is always allocated to the one who value it the most, and thus the social welfare is maximized. In the first-price auction, the item is allocated to the bidder whose bid is the highest. But this does not necessarily mean that the winner’s value is always the highest. It depends on the instance and the equilibrium. In
\Cref{exp:intro:1}, the bidders' strategy mappings are the same monotone function, so the one with higher bid is indeed the one with higher value. Thus, \Cref{exp:intro:1} in indeed efficient. In \Cref{exp:intro:2}, Alice’s value is always higher than Bob and in the equilibrium her bid is also always higher. Thus, it is also efficient. But this is no longer the case for \Cref{exp:intro:3}. In \Cref{exp:intro:3}, Alice’s value is always higher but there is nontrivial probability for Bob to bid higher and get the item. Thus, it is not efficient. Therefore, the first-price auction is not efficient in general. There are quite a few works to give some sufficient conditions under which the first-price auction is efficient. In particular, it was proved that the symmetric settings are always efficient.
However, the property of efficiency should not be a binary one. For example, if the highest value is $1$, to allocate it to one with value $0.99$ or $0.01$ should be very different although neither of them is fully efficient. To carefully capture how efficient (inefficient) of an equilibrium in auctions (or games in general), an important solution concept called the Price of Anarchy (PoA) was proposed~\cite{KP99} in the spirit of approximation ratio in theoretical computer science. PoA of the first-price auction is defined to be the minimum possible ratio between the expected social welfare obtained by an equilibrium ${\sf FPA}(\bV,\, \bs)$ over the expected optimal social welfare ${\sf OPT}(\bV)$ in any possible instance $\bV$ and its equilibrium $\bs$. Formally, we have the following definition.
\color{black}
\end{comment}
However, the quality of (in)efficiency should not be all-or-nothing.
E.g., when the highest value is $1$, allocating the item to a value-$0.99$ bidder versus a value-$0.01$ bidder is very different, although neither allocation is fully efficient. Towards a quantitative analysis,
Koutsoupias and Papadimitriou introduced a new measure on the efficiency degradation under selfish behaviors, the Price of Anarchy \cite{KP99} (which is an analog of the ``approximation ratio'' in theoretical computer science).
For the first-price auction, denote by ${\sf OPT}(\bV)$ the expected optimal social welfare from an instance $\bV$, and by ${\sf FPA}(\bV,\, \bs)$ the expected social welfare at an equilibrium $\bs \in \bbBNE(\bV)$, then the Price of Anarchy is defined to be the minimum possible ratio, as follows.
\begin{definition}[{\textsf{Price of Anarchy}}]
\label{def:poa}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is given by
\[
{\sf PoA} ~\eqdef~ \inf \bigg\{\, \frac{{\sf FPA}(\bV,\, \bs)}{{\sf OPT}(\bV)} \,\biggmid\, \bs \in \mathbb{BNE}(\bV) ~\text{and}~ {\sf OPT}(\bV) < +\infty \,\bigg\}.
\]
\end{definition}
The Price of Anarchy is bounded between $[0,\, 1]$. Namely, a larger ratio means a higher efficiency and the $= 1$ ratio means the full efficiency.
For the first-price auction, Syrgkanis and Tardos first proved that the {{\sf PoA}} is at least $1 - 1 / e \approx 0.6321$ \cite{ST13}.
Later, Hoy, Taggart and Wang derived an improved lower bound of $\approx 0.7430$ \cite{HTW18}.
On the other hand, Hartline, Hoy and Taggart gave a concrete instance of ratio $\approx 0.8689$ \cite{HHT14}, which remains the best known upper bound.
Despite the prevalence of the first-price auction and much effort in studying its efficiency, there has been a persistent gap in the state of the art. In the current paper, we completely resolve this long-standing open problem.
\begin{theorem}[Tight {{\sf PoA}}]
\label{thm:main}
The {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}} is $1 - 1 / e^{2} \approx 0.8647$.
\end{theorem}
Remarkably, neither the best known lower bound $\approx 0.7430$ nor the upper bound $\approx 0.8689$ is tight; we close the gap by improving both of them. Our tight bound $1 - 1 / e^{2} \approx 0.8647$ not only is a mathematically elegant result but has further implications in real business since it is fairly close to $1$. Namely, at any equilibrium, the efficiency degradation in the first-price auction is small, no worse than 13.53\%, which might be acceptable given other merits of the first-price auction.
En route to the tight {{\sf PoA}}, we obtain many new important perspectives, characterizations, and properties of equilibria in the first-price auction. These might be of independent interest and find applications in the future study of other aspects of the first-price auction, e.g., the complexity of computing equilibria. Beyond the first-price auction, our overall approach is general enough and might help determine the Price of Anarchy in other auctions.
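For quick reference, the constants quoted above can be checked with a one-line arithmetic sanity check each:

```python
import math

lb_st = 1 - 1 / math.e          # Syrgkanis--Tardos lower bound, ~0.6321
tight = 1 - 1 / math.e ** 2     # the tight Price of Anarchy proved here
assert abs(lb_st - 0.6321) < 1e-4
assert abs(tight - 0.8647) < 1e-4
assert 0.7430 < tight < 0.8689  # strictly between the prior best known bounds
```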
\section{Lower Bound Analysis}
\label{sec:UB}
Following \Cref{cor:reduction} and Optimization~\eqref{eq:reduction}, we study the space of twin ceiling pseudo instances $H \otimes L_{\lambda}$ under all possible supremum bids $\lambda \in (0,\, 1)$ and aim to find the worst case $H^{*} \otimes L^{*}$. This task is accomplished in three steps:
\begin{itemize}
\item \Cref{subsec:UB_welfare} presents an explicit formula for the expected auction {\textsf{Social Welfare}} ${\sf FPA}(H \otimes L_{\lambda})$. Also, we prove that imposing certain properties (such as {\em differentiability}) on the monopolist's bid distribution $H$ does not change the solution to Optimization~\eqref{eq:reduction}.
\item \Cref{subsec:UB_ODE} resolves Optimization~\eqref{eq:reduction} by leveraging tools from Calculus of Variations (i.e., the imposed properties enable these tools). Roughly speaking, the worst case $H^{*} \otimes L^{*}$ turns out to be specified by the supremum bid $\lambda \in (0,\, 1)$ and one more parameter $\mu > 0$, thus taking the form $H^{*} \otimes L^{*} = H_{\mu^{*}} \otimes L_{\lambda^{*}}$.
\item \Cref{subsec:UB_lambda_mu} captures the worst case $H_{\mu^{*}} \otimes L_{\lambda^{*}}$ by optimizing the parameters $\lambda$ and $\mu$, which immediately implies the lower-bound part of \Cref{thm:main} that the ${\sf PoA} \geq 1 - 1 / e^{2}$.
\end{itemize}
\subsection{The expected auction {\textsf{Social Welfare}}}
\label{subsec:UB_welfare}
We introduce the space of {\em twice continuously differentiable} CDF's $\mathbb{C}_{\lambda}^{2}$ right below (\Cref{def:differentiability}). Afterward, we (i)~get an explicit reformulation for Optimization~\eqref{eq:reduction}, then (ii)~prove that the same solution can be achieved from a smaller search space $\mathbb{H}_{\lambda} \cap \mathbb{C}_{\lambda}^{2} \subsetneq \mathbb{H}_{\lambda}$, and then (iii)~relax/expand the search space to $\mathbb{C}_{\lambda}^{2}$.
\begin{definition}[Differentiability]
\label{def:differentiability}
Denote by $\mathbb{C}_{\lambda}^{0}$ the space of all {\em continuous} CDF's that are supported on $[0,\, \lambda]$. The smaller space $\mathbb{C}_{\lambda}^{1}$ (resp.\ the even smaller space $\mathbb{C}_{\lambda}^{2}$) further requires {\em continuously differentiable} CDF's (resp.\ {\em twice continuously differentiable} CDF's), i.e., the derivatives (resp.\ the second derivatives) of those CDF's are continuous functions.
\end{definition}
\begin{lemma}[{\textsf{Price of Anarchy}}]
\label{lem:reformulation}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} satisfies that
\begin{align}
{\sf PoA}
& ~\geq~ 1 - \sup \Bigg\{\, (1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H(x) \cdot H'(x) \cdot \d x}{H(x) + (1 - x) \cdot H'(x)} \,\Biggmid\, \text{\em $\lambda \in (0,\, 1)$ and $H \in \mathbb{H}_{\lambda}$} \,\Bigg\}
\label{eq:reformulation:1}\tag{I} \\
& ~=~ 1 - \sup \Bigg\{\, (1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H(x) \cdot H'(x) \cdot \d x}{H(x) + (1 - x) \cdot H'(x)} \,\Biggmid\, \text{\em $\lambda \in (0,\, 1)$ and $H \in (\mathbb{H}_{\lambda} \cap \mathbb{C}_{\lambda}^{2})$} \,\Bigg\}
\label{eq:reformulation:2}\tag{II} \\
& ~\geq~ 1 - \sup \Bigg\{\, (1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H(x) \cdot H'(x) \cdot \d x}{H(x) + (1 - x) \cdot H'(x)} \,\Biggmid\, \text{\em $\lambda \in (0,\, 1)$ and $H \in \mathbb{C}_{\lambda}^{2}$} \,\Bigg\}.
\label{eq:reformulation:3}\tag{III}
\end{align}
\end{lemma}
Let us prove the three steps of \Cref{lem:reformulation} one by one.
\begin{proof}[Proof of Optimization~\eqref{eq:reformulation:1}]
This is a reformulation of Optimization~\eqref{eq:reduction}. The monopolist $H$ has a constant bid-to-value mapping $\varphi_{H}(b) = v_{H} = 1$ for $b \in [0,\, \lambda]$ (\Cref{def:twin:restate}). Following the formula in \Cref{lem:pseudo_welfare}, the expected auction {\textsf{Social Welfare}} ${\sf FPA}(H \otimes L_{\lambda})$ is given by
\begin{align*}
{\sf FPA}(H \otimes L_{\lambda})
& ~=~ v_{H} \cdot H(0) \cdot L_{\lambda}(0) + \int_{0}^{\lambda} \bigg(v_{H} \cdot H'(b) \cdot L_{\lambda}(b) + \varphi_{L}(b) \cdot L'_{\lambda}(b) \cdot H(b)\bigg) \cdot \d b \\
& ~=~ v_{H} \cdot H(\lambda) \cdot L_{\lambda}(\lambda) - \int_{0}^{\lambda} \big(v_{H} - \varphi_{L}(b)\big) \cdot L'_{\lambda}(b) \cdot H(b) \cdot \d b \\
& ~=~ 1 - \int_{0}^{\lambda} \frac{(1 - b)^{2}}{H(b) / H'(b) + 1 - b} \cdot L'_{\lambda}(b) \cdot H(b) \cdot \d b \\
& ~=~ 1 - (1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H(b) \cdot \d b}{H(b) / H'(b) + 1 - b}.
\end{align*}
Here the second step uses integration by parts $\int_{0}^{\lambda} \big(H'(b) \cdot L_{\lambda}(b) + L'_{\lambda}(b) \cdot H(b)\big) \cdot \d b = H(b) \cdot L_{\lambda}(b) \bigmid_{b = 0}^{\lambda}$. The third step uses $H(\lambda) = L_{\lambda}(\lambda) = 1$ (because $\lambda$ is the supremum bid) and the $\varphi_{L}(b)$ formula in \Cref{def:twin:restate}. And the last step applies $L'_{\lambda}(b) = \frac{1 - \lambda}{(1 - b)^{2}}$ (\Cref{def:twin:restate}).
The above formula ${\sf FPA}(H \otimes L_{\lambda})$ together with \Cref{cor:reduction} implies Optimization~\eqref{eq:reformulation:1}.
\end{proof}
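As a numerical sanity check (outside the formal argument), the integration-by-parts step can be replayed with an arbitrary smooth sample CDF $H$; here $L_{\lambda}(b) = (1 - \lambda) / (1 - b)$ is recovered from the stated derivative $L'_{\lambda}(b) = \frac{1 - \lambda}{(1 - b)^{2}}$ together with the boundary value $L_{\lambda}(\lambda) = 1$, and the specific $H$ below is an illustrative choice only.

```python
lam = 0.5                                  # a sample supremum bid in (0, 1)
L  = lambda b: (1 - lam) / (1 - b)         # twin CDF: L' = (1-lam)/(1-b)^2, L(lam) = 1
dL = lambda b: (1 - lam) / (1 - b) ** 2
H  = lambda b: 0.3 + 0.7 * (b / lam) ** 2  # arbitrary smooth CDF with H(lam) = 1
dH = lambda b: 1.4 * b / lam ** 2

# midpoint rule for the integral of (H' L + L' H) over [0, lam]
n = 20000
s = sum(dH(b) * L(b) + dL(b) * H(b)
        for i in range(n) for b in [(i + 0.5) * lam / n]) * lam / n

# integration by parts: the integral equals H(b) L(b) evaluated from b = 0 to lam
boundary = H(lam) * L(lam) - H(0) * L(0)
print(s, boundary)
```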
\begin{proof}[Proof of Optimization~\eqref{eq:reformulation:2}]
Given a supremum bid $\lambda \in (0,\, 1)$, we consider a specific CDF $H \in \mathbb{H}_{\lambda}$. The integral $\int_{0}^{\lambda} \frac{H(b) \cdot H'(b) \cdot \d b}{H(b) + (1 - b) \cdot H'(b)}$ is well defined, i.e., {\em Lebesgue integrable}, since the integrand $\frac{H(b) \cdot H'(b)}{H(b) + (1 - b) \cdot H'(b)}$ is nonnegative and the integral is upper bounded by $\int_{0}^{\lambda} \frac{H(b) \cdot H'(b) \cdot \d b}{H(b)} = H(\lambda) - H(0) \leq 1$.
Further, the integrand, if regarded as a function of three variables $b \in (0,\, \lambda)$, $H \in [0,\, 1]$ and $H' \geq 0$, is {\em twice continuously differentiable}. For these reasons, the CDF $H \in \mathbb{H}_{\lambda}$ can be replaced
with another {\em twice continuously differentiable} CDF $\tilde{H} \in (\mathbb{H}_{\lambda} \cap \mathbb{C}_{\lambda}^{2})$ such that the integral {\em stays the same}, namely the replacement incurs an arbitrarily small error whose magnitude is independent of the choice of $\lambda \in (0,\, 1)$.
To conclude, Optimization~\eqref{eq:reformulation:2} has the same solution as Optimization~\eqref{eq:reformulation:1}.
\end{proof}
\begin{proof}[Proof of Optimization~\eqref{eq:reformulation:3}]
This step trivially holds, since it relaxes/enlarges the search space.
\end{proof}
\subsection{The optima are solutions to an ODE}
\label{subsec:UB_ODE}
Following Optimization~\eqref{eq:reformulation:3}, we consider a {\em constant} supremum bid $\lambda \in (0,\, 1)$ for the moment, and study the next optimization (i.e., a functional over all twice continuously differentiable CDF's $H \in \mathbb{C}_{\lambda}^{2}$):
\begin{align}
\label{eq:functional}\tag{IV}
\sup \Bigg\{\, \int_{0}^{\lambda} \frac{H(x) \cdot H'(x) \cdot \d x}{H(x) + (1 - x) \cdot H'(x)} \,\Biggmid\, \text{$H \in \mathbb{C}_{\lambda}^{2}$} \,\Bigg\}.
\end{align}
Below we first (\Cref{lem:ODE}) leverage tools from Calculus of Variations to get a {\em necessary condition}, i.e., an {\em ordinary differential equation} (ODE), for any worst-case CDF $H \in \mathbb{C}_{\lambda}^{2}$ of Optimization~\eqref{eq:functional}, and then (\Cref{lem:implicit_equation}) explicitly resolve this ODE. In this way, any candidate worst-case CDF $H \in \mathbb{C}_{\lambda}^{2}$ will be controlled by just one parameter $\mu > 0$, namely $H = H_{\mu}$ (see \Cref{fig:function_H_mu} for a visual demonstration). The following two lemmas formalize our proof plan.
\begin{figure}[t]
\centering
\includegraphics[width = .7\textwidth]{functionH.png}
\caption{Demonstration for the CDF $H_{\mu}$ given by Implicit Equation~\eqref{eq:implicit_equation}, where a {\em full curve} is a well-defined CDF restricted to $\{(x,\, H_{\mu}) \mid 0 \leq x \leq \lambda ~\text{and}~ (1 + 1 / \mu)^{-2} \leq H_{\mu} \leq 1\}$. \\ [.05in]
In contrast, a {\em dotted curve} is just an {\em analytic continuation}, through the same implicit equation, to all $H_{\mu} \in [0,\, 1]$. Such an analytic continuation violates the above restriction and/or Condition~\eqref{eq:condition}, so it is {\em not} a well-defined CDF supported on $[0,\, \lambda]$. \\ [.05in]
All three CDF's in the figure choose the same parameter $\mu = 1$, so Condition~\eqref{eq:condition} is equivalent to $\lambda \leq 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu} = 1 - 4 / e^{2} \approx 0.4587$.
(i)~The green {\em full/dotted} curves choose a supremum bid $\lambda_{1} = 0.35$ that lies {\em within} the feasible space, by which the $y$-axis crosses the implicit equation twice, hence an {\em intersecting line}.
(ii)~The black {\em full/dotted} curves choose the {\em boundary-case} supremum bid $\lambda_{2} = 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}$, by which the $y$-axis just touches the implicit equation, hence a {\em tangent line}.
(iii)~The red {\em dotted} curve chooses an {\em infeasible} supremum bid $\lambda_{3} = 0.6$, by which the $y$-axis {\em never} crosses the implicit equation, hence a {\em disjoint line}.}
\label{fig:function_H_mu}
\end{figure}
\begin{lemma}[The ODE; a necessary condition]
\label{lem:ODE}
Given a supremum bid $\lambda \in (0,\, 1)$, any extremum CDF $H \in \mathbb{C}_{\lambda}^{2}$ of Optimization~\eqref{eq:functional} satisfies the following ODE for any $x \in (0,\, \lambda)$:
\begin{align}
\label{eq:ODE}
\frac{(1 - x) \cdot (H'(x))^{2}}{(H(x) + (1 - x) \cdot H'(x))^{2}}
~=~
\frac{2 \cdot H(x) \cdot H'(x)}{(H(x) + (1 - x) \cdot H'(x))^{2}} ~-~ \frac{2 \cdot (1 - x) \cdot (H(x))^{2} \cdot H''(x)}{(H(x) + (1 - x) \cdot H'(x))^{3}}.
\end{align}
\end{lemma}
\begin{lemma}[The ODE solutions]
\label{lem:implicit_equation}
Given a supremum bid $\lambda \in (0,\, 1)$, any extremum CDF $H \in \mathbb{C}_{\lambda}^{2}$ of Optimization~\eqref{eq:functional} is determined by one parameter $\mu > 0$ such that
\begin{align}
\label{eq:condition}
1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu} ~\geq~ \lambda,
\end{align}
namely $H = H_{\mu}$. Moreover, this extremum CDF $H_{\mu}$ is defined by the following implicit equation:
\begin{align}
\label{eq:implicit_equation}
\left\{\, (x,\, H_{\mu}) \,\middlemid\,
\begin{aligned}
& x ~=~ 1 - (1 - \lambda) \cdot H_{\mu} \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big) \\
& \text{\em such that $0 \leq x \leq \lambda$ and $(1 + 1 / \mu)^{-2} \leq H_{\mu} \leq 1$} \phantom{\bigg.}
\end{aligned}\,\right\}.
\end{align}
\end{lemma}
From subsequent proof details, it will be clear that whenever the parameter $\mu > 0$ satisfies Condition~\eqref{eq:condition}, Implicit Equation~\eqref{eq:implicit_equation} does specify a well-defined CDF $H_{\mu}$.
We first establish \Cref{lem:ODE}. The proof exploits the {\em Euler-Lagrange equation} (stated below as \Cref{thm:Euler-Lagrange}), which is a standard tool from Calculus of Variations.
\begin{theorem}[The Euler-Lagrange equation; Calculus of Variations {\cite{GS20}}]
\label{thm:Euler-Lagrange}
Assume the following four premises for the functional $\calJ[f] \eqdef \int_{x_{1}}^{x_{2}} \calL\big(x, f(x), f'(x)\big) \cdot \d x$.
\begin{itemize}
\item $x_{1}$ and $x_{2}$ are constants.
\item $f(x)$ is twice continuously differentiable.
\item $f'(x) = \frac{\d}{\d x} f(x)$.
\item $\calL\big(x, f(x), f'(x)\big)$ is twice continuously differentiable with respect to variables $x$, $f$ and $f'$.
\end{itemize}
Then any extremum $\calJ[f]$ satisfies (a necessary condition) the Euler-Lagrange equation
\begin{align*}
\frac{\partial \calL}{\partial f} ~=~ \frac{\d}{\d x} \bigg(\frac{\partial \calL}{\partial f'}\bigg)
\quad \text{for any} \quad x \in (x_{1},\, x_{2}).
\end{align*}
\end{theorem}
\begin{remark}[The Euler-Lagrange equation]
We emphasize that (i)~the {\LHS} of this equation takes the partial derivative of $\calL(x, f, f')$ with respect to $f$; and (ii)~the {\RHS} first regards the partial derivative $\frac{\partial \calL}{\partial f'}$ as a function of {\em one} variable $x$, and then takes the derivative (with respect to $x$).
\end{remark}
Now we use the Euler-Lagrange equation to obtain the desired necessary condition. Regarding Optimization~\eqref{eq:functional}, we will consider the following functional.
\begin{align*}
\calJ[H] ~\eqdef~ \int_{0}^{\lambda} \calL\big(x, H(x), H'(x)\big) \cdot \d x,
\quad \text{where} \quad
\calL\big(x, H(x), H'(x)\big) ~\eqdef~ \frac{H(x) \cdot H'(x)}{H(x) + (1 - x) \cdot H'(x)}.
\end{align*}
Notice that all the four premises of the Euler-Lagrange equation are satisfied:
\begin{itemize}
\item $x_{1} = 0$ and $x_{2} = \lambda$ are constants.
\item We only consider those twice continuously differentiable CDF's $H \in \mathbb{C}_{\lambda}^{2}$.
\item Obviously $H'(x) = \frac{\d}{\d x} H(x)$.
\item It is easy to verify that, with respect to variables $x \in (0,\, \lambda) \subseteq (0,\, 1)$, $H \in [0,\, 1]$ and $H' \geq 0$, the integrand $\calL\big(x, H(x), H'(x)\big)$ is twice continuously differentiable.
\end{itemize}
\begin{proof}[Proof of \Cref{lem:ODE}]
For ease of notation, we would use the shorthands $H = H(x)$, $h = H'(x)$ and $h' = H''(x)$, thus writing the integrand as $\calL(x, H, h) = \frac{H \cdot h}{H + (1 - x) \cdot h}$. Then regarding our functional, the {\LHS} of the Euler-Lagrange equation is given by
\[
\frac{\partial \calL}{\partial H}
~=~ \frac{h \cdot (H + (1 - x) \cdot h) - H \cdot h}{(H + (1 - x) \cdot h)^{2}}
~=~ \frac{(1 - x) \cdot (H'(x))^{2}}{(H(x) + (1 - x) \cdot H'(x))^{2}}
~=~ \LHS \text{ of ODE~\eqref{eq:ODE}}
\]
where the second step combines the like terms and then substitutes back $H = H(x)$ and $h = H'(x)$.
Moreover, the partial derivative of $\calL(x, H, h)$ with respect to variable $h$ is given by
\[
\frac{\partial \calL}{\partial h}
~=~ \frac{H \cdot (H + (1 - x) \cdot h) - (H \cdot h) \cdot (1 - x)}{(H + (1 - x) \cdot h)^{2}}
~=~ \frac{H^{2}}{(H + (1 - x) \cdot h)^{2}},
\]
where the last step combines the like terms. When regarded as a function of {\em one} variable $x$,
the derivative of function $\frac{\partial \calL}{\partial h}$ (i.e., the {\RHS} of the Euler-Lagrange equation) is given by
\begin{align*}
\frac{\d}{\d x} \bigg(\frac{\partial \calL}{\partial h}\bigg)
& ~=~ \frac{\d}{\d x} \big(H^{2}\big) ~\cdot~ \frac{1}{(H + (1 - x) \cdot h)^{2}}
~+~ H^{2} ~\cdot~ \frac{\d}{\d x} \bigg(\frac{1}{(H + (1 - x) \cdot h)^{2}}\bigg) \\
& ~=~ \frac{2 \cdot H \cdot h}{(H + (1 - x) \cdot h)^{2}} \hspace{2.06cm} +~ H^{2} \cdot \frac{-2 \cdot (1 - x) \cdot h'}{(H + (1 - x) \cdot h)^{3}} \phantom{\bigg.} \\
& ~=~ \frac{2 \cdot H(x) \cdot H'(x)}{(H(x) + (1 - x) \cdot H'(x))^{2}}
\hspace{0.78cm} -~ (H(x))^{2} \cdot \frac{2 \cdot (1 - x) \cdot H''(x)}{(H(x) + (1 - x) \cdot H'(x))^{3}} \phantom{\bigg.} \\
& ~=~ \RHS \text{ of ODE~\eqref{eq:ODE}}, \phantom{\bigg.}
\end{align*}
where the third step substitutes back $H = H(x)$, $h = H'(x)$ and $h' = H''(x)$.
Therefore, ODE~\eqref{eq:ODE} is just a reformulation of the Euler-Lagrange equation and we immediately conclude the lemma from \Cref{thm:Euler-Lagrange}.
\end{proof}
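The two partial derivatives computed in this proof can be sanity-checked numerically via central finite differences at an arbitrary sample point (a sketch, outside the proof; the sample values $(x, H, h)$ below are illustrative only).

```python
# integrand calL(x, H, h) = H h / (H + (1 - x) h), with h standing for H'
calL = lambda x, H, h: H * h / (H + (1 - x) * h)
x, H, h = 0.3, 0.6, 1.7          # arbitrary sample point
eps = 1e-6

# the two partial derivatives computed in the proof
dL_dH = (1 - x) * h ** 2 / (H + (1 - x) * h) ** 2
dL_dh = H ** 2 / (H + (1 - x) * h) ** 2

# central finite differences of the integrand
fd_H = (calL(x, H + eps, h) - calL(x, H - eps, h)) / (2 * eps)
fd_h = (calL(x, H, h + eps) - calL(x, H, h - eps)) / (2 * eps)
print(dL_dH - fd_H, dL_dh - fd_h)
```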
In the rest of this subsection, we resolve ODE~\eqref{eq:ODE}. For ease of notation, we often write $H = H(x)$, $H' = H'(x)$ and $H'' = H''(x)$. Following \Cref{lem:ODE}, we can deduce that
\begin{align}
\text{ODE~\eqref{eq:ODE}} \quad \iff \quad \parbox{3.35cm}{\hfill $\dfrac{(1 - x) \cdot (H')^{2}}{(H + (1 - x) \cdot H')^{2}}$}
& ~=~ \frac{2 \cdot H \cdot H'}{(H + (1 - x) \cdot H')^{2}} ~-~ \frac{2 \cdot (1 - x) \cdot H^{2} \cdot H''}{(H + (1 - x) \cdot H')^{3}} \phantom{\bigg.}
\nonumber \\
\iff \quad \parbox{3.35cm}{\hfill $\dfrac{2 \cdot (1 - x) \cdot H^{2} \cdot H''}{(H + (1 - x) \cdot H')^{3}}$}
& ~=~ \frac{2H \cdot H'}{(H + (1 - x) \cdot H')^{2}} ~-~ \frac{(1 - x) \cdot (H')^{2}}{(H + (1 - x) \cdot H')^{2}} \phantom{\bigg.}
\nonumber \\
\iff \quad \parbox{3.35cm}{\hfill $2 \cdot (1 - x) \cdot \dfrac{H''}{H'}$}
& ~=~ \bigg(2H \cdot H' - (1 - x) \cdot (H')^{2}\bigg) \cdot \frac{H + (1 - x) \cdot H'}{H^{2} \cdot H'}
\nonumber \\
\iff \quad \parbox{3.35cm}{\hfill $2 \cdot (1 - x) \cdot \dfrac{H''}{H'}$}
& ~=~ \bigg(2 - (1 - x) \cdot \frac{H'}{H}\bigg) \cdot \bigg(1 + (1 - x) \cdot \frac{H'}{H}\bigg).
\label{eq:solve_ODE:1}
\end{align}
Here the first step restates ODE~\eqref{eq:ODE}. The second step moves the first term to the {\RHS} and the third term to the {\LHS}. The third step first multiplies\footnote{Because $x \in (0,\, \lambda)$ and $\lambda \in (0,\, 1)$, this term $\geq \frac{1}{H^{2} \cdot H'} \cdot \binom{3}{2} \cdot H^{2} \cdot (1 - x) \cdot H' = 3 \cdot (1 - x)$ is {\em strictly positive}.} both sides by the same term $\frac{1}{H^{2} \cdot H'} \cdot (H + (1 - x) \cdot H')^{3}$ and then simplifies the {\LHS}. And the last step rearranges the {\RHS}.
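As a quick numerical confirmation of this chain of equivalences (outside the proof): at arbitrary sample values of $x$, $H$, $H'$ and $H''$, the difference of the two sides of ODE~\eqref{eq:ODE}, rescaled by the positive factor $\frac{1}{H^{2} \cdot H'} \cdot (H + (1 - x) \cdot H')^{3}$, coincides with the difference of the two sides of the final form.

```python
import random

random.seed(0)
max_err = 0.0
for _ in range(100):
    x  = random.uniform(0.0, 0.8)
    H  = random.uniform(0.3, 1.0)
    h  = random.uniform(0.3, 3.0)    # stands for H'(x)
    hp = random.uniform(-3.0, 3.0)   # stands for H''(x)
    D  = H + (1 - x) * h             # common denominator H + (1-x) H'

    # two sides of ODE (eq:ODE)
    lhs1 = (1 - x) * h ** 2 / D ** 2
    rhs1 = 2 * H * h / D ** 2 - 2 * (1 - x) * H ** 2 * hp / D ** 3

    # two sides of the rearranged form (eq:solve_ODE:1)
    lhs2 = 2 * (1 - x) * hp / h
    rhs2 = (2 - (1 - x) * h / H) * (1 + (1 - x) * h / H)

    # (rhs1 - lhs1), rescaled by the positive factor D^3 / (H^2 H'),
    # should coincide with (rhs2 - lhs2)
    scaled = (rhs1 - lhs1) * D ** 3 / (H ** 2 * h)
    max_err = max(max_err, abs(scaled - (rhs2 - lhs2)))
print("max deviation:", max_err)
```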
We define the function $z(x) \eqdef (1 - x) \cdot \frac{H'(x)}{H(x)}$ and, for brevity, may write $z = z(x)$ and $z' = z'(x)$. We have $z(x) \geq 0$ for any $x \in (0,\, \lambda)$, since the supremum bid $\lambda \in (0,\, 1)$ and $H$ is a {\em nonnegative increasing} CDF. Therefore, there exists a {\em nonnegative} subset $\Omega \subseteq [0,\, +\infty]$ such that, over the whole support $x \in [0,\, \lambda]$, the CDF $H$ can be formulated in terms of a {\em parametric equation}
\[
\Big\{\, (x,\, H) = \big(\calX(z),\, \calH(z)\big) \,\Bigmid\, z \in \Omega \,\Big\}.
\]
Below we capture the analytic formulas $x = \calX(z)$ and $H = \calH(z)$ and then (in the proof of \Cref{lem:implicit_equation}) study the condition under which the above parametric equation yields a CDF $H$ that is well defined on the support $x \in [0,\, \lambda]$.
\begin{proof}[Proof of the $x = \calX(z)$ formula]
We can deduce that
\begin{align*}
z \cdot H
& ~=~ (1 - x) \cdot H'
&& \Longleftarrow && \mbox{definition of $z(x)$} \phantom{\dfrac{H''}{H'}} \\
z' \cdot H ~+~ z \cdot H'
& ~=~ -H' ~+~ (1 - x) \cdot H''
&& \Longleftarrow && \mbox{take derivative} \phantom{\dfrac{H''}{H'}} \\
(1 - x) \cdot \frac{H''}{H'}
& ~=~ 1 ~+~ z ~+~ z' \cdot \frac{H}{H'}
&& \Longleftarrow && \mbox{multiply $1 \big/ H'$ and rearrange} \phantom{\dfrac{H''}{H'}} \\
(1 - x) \cdot \frac{H''}{H'}
& ~=~ 1 ~+~ z ~+~ (1 - x) \cdot \frac{z'}{z}
&& \Longleftarrow && \mbox{substitute $H \big/ H' = \frac{1 - x}{z}$ for the {\RHS}} \phantom{\dfrac{H''}{H'}}
\end{align*}
Apply this identity (resp.\ the definition $z = (1 - x) \cdot \frac{H'}{H}$) to the {\LHS} (resp.\ {\RHS}) of ODE~\eqref{eq:solve_ODE:1}:
\begin{align}
\text{ODE~\eqref{eq:solve_ODE:1}} \quad \iff \quad \parbox{4.35cm}{\hfill $2 \cdot \bigg(1 + z + (1 - x) \cdot \dfrac{z'}{z}\bigg)$}
& ~=~ (2 - z) \cdot (1 + z)
\nonumber \\
\iff \quad \parbox{4.35cm}{\hfill $2 \cdot (1 - x) \cdot \dfrac{z'}{z}$}
& ~=~ -z \cdot (1 + z) \phantom{\bigg.}
\nonumber \\
\iff \quad \parbox{4.35cm}{\hfill $\dfrac{2}{z^{2} \cdot (1 + z)} \cdot z'$}
& ~=~ \frac{-1}{1 - x}.
\label{eq:solve_ODE:2}
\end{align}
Here the second step cancels the common term $(2 + 2z)$ on both sides. And the last step multiplies both sides by the same positive term $\frac{1}{(1 - x) \cdot z \cdot (1 + z)}$.
We denote $\mu = z(\lambda)$ for some parameter $\mu > 0$; this serves as the boundary condition for \eqref{eq:solve_ODE:2}, which is a {\em separable} ordinary differential equation. For this equation, the integration of the {\LHS} on any interval between $z = z(x)$ and $\mu = z(\lambda)$ is equal to that of the {\RHS} on the corresponding interval between $x$ and $\lambda$. Namely, we have
\begin{align}
\text{ODE~\eqref{eq:solve_ODE:2}} \quad
\Longrightarrow \quad \parbox{4.75cm}{\hfill $\displaystyle{\int_{\mu}^{z} \frac{2}{y^{2} \cdot (1 + y)} \cdot \d y}$}
& ~=~ \int_{\lambda}^{x} \frac{-1}{1 - y} \cdot \d y
\nonumber \\
\Longrightarrow \quad \parbox{4.75cm}{\hfill $\Big(2\ln(1 + 1 / y) - 2 / y\Big)\Bigmid_{y \,=\, \mu}^{z}$}
& ~=~ \ln(1 - y)\Bigmid_{y \,=\, \lambda}^{x} \phantom{\bigg.}
\nonumber \\
\Longrightarrow \quad \parbox{4.75cm}{\hfill $\displaystyle{2\ln\bigg(\frac{1 + 1 / z}{1 + 1 / \mu}\bigg) - 2 / z + 2 / \mu}$}
& ~=~ \ln(1 - x) - \ln(1 - \lambda)
\nonumber \\
\Longrightarrow \quad \parbox{4.75cm}{\phantom{}}
& \hspace{-4.75cm} x ~=~ \calX(z) ~=~ 1 - \frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}} \cdot (1 + 1 / z)^{2} \cdot e^{-2 / z}, \phantom{\bigg.}
\label{eq:solve_ODE:3}
\end{align}
where the last step rearranges the equation. Notably, at $z = \mu$ we achieve the supremum bid $\calX(\mu) = \lambda$.
Formula~\eqref{eq:solve_ODE:3} is precisely the first part $x = \calX(z)$ of our parametric equation.
\end{proof}
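As a numerical sanity check (outside the proof), the formula $x = \calX(z)$ can be differentiated by central finite differences and compared against the separated form $\frac{\d x}{\d z} = -\frac{2 \cdot (1 - x)}{z^{2} \cdot (1 + z)}$ of \eqref{eq:solve_ODE:2}; the pair $(\lambda,\, \mu) = (0.35,\, 1)$ is an arbitrary feasible sample.

```python
import math

lam, mu = 0.35, 1.0                        # a sample feasible pair
C = (1 + 1 / mu) ** 2 * math.exp(-2 / mu)  # normalizing constant of Formula (solve_ODE:3)
X = lambda z: 1 - (1 - lam) / C * (1 + 1 / z) ** 2 * math.exp(-2 / z)

# boundary condition: the supremum bid is attained at z = mu
gap = abs(X(mu) - lam)

# separated ODE: dx/dz = -2 (1 - x) / (z^2 (1 + z)); check by central differences
eps = 1e-6
err = max(abs((X(z + eps) - X(z - eps)) / (2 * eps)
              + 2 * (1 - X(z)) / (z ** 2 * (1 + z)))
          for z in [1.0, 1.2, 1.5, 1.8, 2.0])
print(gap, err)
```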
\begin{proof}[Proof of the $H = \calH(z)$ formula]
To further obtain this formula, we observe that
\begin{align}
\label{eq:solve_ODE:4}
\frac{1}{H} \cdot \frac{\d H}{\d x} ~=~ \frac{H'}{H}
~=~ \frac{z}{1 - x}
~=~ \frac{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}}{1 - \lambda} \cdot \frac{z}{(1 + 1 / z)^{2} \cdot e^{-2 / z}},
\end{align}
where the second step applies the definition $z = (1 - x) \cdot \frac{H'}{H}$ and the last step applies Formula~\eqref{eq:solve_ODE:3}. Moreover, the derivative of function $\calX(z)$ is given by
\begin{align}
\calX'(z)
~=~ \frac{\d x}{\d z}
& ~=~ -\frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}}
\cdot \frac{\d}{\d z}\bigg((1 + 1 / z)^{2} \cdot e^{-2 / z}\bigg)
\nonumber \\
& ~=~ -\frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}}
\cdot \bigg(2 \cdot (1 + 1 / z) \cdot \frac{-1}{z^{2}} \cdot e^{-2 / z} + (1 + 1 / z)^{2} \cdot e^{-2 / z} \cdot \frac{2}{z^{2}}\bigg)
\nonumber \\
& ~=~ -\frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}}
\cdot \frac{2 \cdot (1 + 1 / z) \cdot e^{-2 / z}}{z^{3}},
\label{eq:solve_ODE:5}
\end{align}
where the first step applies Formula~\eqref{eq:solve_ODE:3} and the last step combines the like terms. Notice that this derivative $\calX'(z)$ is {\em strictly negative}, so $\calX(z)$ is a {\em strictly decreasing} function.
Combining \Cref{eq:solve_ODE:4,eq:solve_ODE:5} together gives
\begin{align}
\big(\ln \calH(z)\big)'
& ~=~ \frac{\d}{\d z}\big(\ln H\big)
~=~ \frac{1}{H} \cdot \frac{\d H}{\d z}
~=~ \bigg(\frac{1}{H} \cdot \frac{\d H}{\d x}\bigg) \cdot \bigg(\frac{\d x}{\d z}\bigg)
\nonumber \\
& ~=~ \bigg(\frac{z}{(1 + 1 / z)^{2} \cdot e^{-2 / z}}\bigg) \cdot \bigg(-\frac{2 \cdot (1 + 1 / z) \cdot e^{-2 / z}}{z^{3}}\bigg)
\nonumber \\
& ~=~ \frac{-2}{z \cdot (z + 1)}.
\label{eq:solve_ODE:6}
\end{align}
This ODE~\eqref{eq:solve_ODE:6} must admit the boundary condition $\calH(\mu) = 1$, given that (as mentioned) at $z = \mu$ we achieve the supremum bid $\calX(\mu) = \lambda$. For this reason, integrating both sides on any interval $[\mu,\, z]$ gives
\begin{align}
\text{ODE~\eqref{eq:solve_ODE:6}} \quad
\Longrightarrow \quad \parbox{3.05cm}{\hfill $\displaystyle{\int_{\mu}^{z} \big(\ln \calH(y)\big)' \cdot \d y}$}
& ~=~ \int_{\mu}^{z} \frac{-2}{y \cdot (y + 1)} \cdot \d y
\nonumber \\
\Longrightarrow \quad \parbox{3.05cm}{\hfill $\ln \calH(z) - \ln \calH(\mu)$}
& ~=~ \Big(2\ln(1 + 1 / y)\Big)\Bigmid_{y \,=\, \mu}^{z} \phantom{\bigg.}
\nonumber \\
\Longrightarrow \quad \parbox{3.05cm}{\hfill $\ln \calH(z)$}
& ~=~ 2\ln\bigg(\frac{1 + 1 / z}{1 + 1 / \mu}\bigg)
\nonumber \\
\Longrightarrow \quad \parbox{3.05cm}{\hfill $H ~=~ \calH(z)$}
& ~=~ \frac{(1 + 1 / z)^{2}}{(1 + 1 / \mu)^{2}},
\label{eq:solve_ODE:7}
\end{align}
where the third step applies the boundary condition $\calH(\mu) = 1$ to the {\LHS}. Notice that $\calH(z)$ is a {\em strictly decreasing} function.
Formula~\eqref{eq:solve_ODE:7} is precisely the second part $H = \calH(z)$ of our parametric equation.
\end{proof}
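Both parametric formulas can be cross-checked at once: along the curve $\big(\calX(z),\, \calH(z)\big)$, the derivative $\d H / \d x$ must reproduce the defining relation $H'(x) = z \cdot H / (1 - x)$ of the function $z(x)$. A numerical sketch (outside the proof), with the arbitrary feasible sample $(\lambda,\, \mu) = (0.35,\, 1)$:

```python
import math

lam, mu = 0.35, 1.0                        # a sample feasible pair
C = (1 + 1 / mu) ** 2 * math.exp(-2 / mu)
X    = lambda z: 1 - (1 - lam) / C * (1 + 1 / z) ** 2 * math.exp(-2 / z)
calH = lambda z: (1 + 1 / z) ** 2 / (1 + 1 / mu) ** 2

# along the parametric curve, dH/dx should equal z * H / (1 - x)
eps = 1e-6
err = 0.0
for z in [1.0, 1.2, 1.5, 1.8, 2.0]:        # keeps x = X(z) inside the support [0, lam]
    dH_dx = (calH(z + eps) - calH(z - eps)) / (X(z + eps) - X(z - eps))
    err = max(err, abs(dH_dx - z * calH(z) / (1 - X(z))))
print("max deviation:", err)
```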
Based on the above discussions, we are able to prove \Cref{lem:implicit_equation}.
\begin{proof}[Proof of \Cref{lem:implicit_equation}]
We first check Condition~\eqref{eq:condition} and then capture Implicit Equation~\eqref{eq:implicit_equation}.
\vspace{.1in}
\noindent
{\bf Condition~\eqref{eq:condition}.}
As mentioned, both of $\calX(z)$ and $\calH(z)$ are {\em strictly decreasing} functions (\Cref{eq:solve_ODE:5,eq:solve_ODE:7}). Moreover, at $z = \mu$ we achieve the supremum bid $\calX(\mu) = \lambda$ and the boundary condition $\calH(\mu) = 1$. By considering all possible parameters $z \in [\mu,\, +\infty]$, our parametric equation $\big(\calX(z),\, \calH(z)\big)$ yields a {\em strictly increasing} function $\bar{H}$ with the domain $x \in [\calX(+\infty),\, \lambda]$. In this domain, function $\bar{H}$ is bounded between $\calH(+\infty) = (1 + 1 / \mu)^{-2} \geq 0$ and $\calH(\mu) = 1$.
This function $\bar{H}$ must be {\em consistent} with the considered CDF $H$. Accordingly, we require that: (i)~the support $x \in [0,\, \lambda]$ of CDF $H$ is a {\em subset} of the domain $x \in [\calX(+\infty),\, \lambda]$ of function $\bar{H}$; and, if so, (ii)~function $\bar{H}$ can be a {\em well-defined} CDF on the support $x \in [0,\, \lambda]$.
Clearly, the first requirement is equivalent to
\[
0 ~\geq~ \calX(+\infty) ~=~ 1 - (1 - \lambda) \cdot (1 + 1 / \mu)^{-2} \cdot e^{2 / \mu},
\]
which after being rearranged gives Condition~\eqref{eq:condition}. Indeed, the first requirement ensures the second one -- function $\bar{H}$ is {\em strictly increasing} and is bounded between $\calH(+\infty) = (1 + 1 / \mu)^{-2} \geq 0$ and $\calH(\mu) = 1$ in the whole domain $x \in [\calX(+\infty),\, \lambda]$, thus (once the first requirement is satisfied) being a well-defined CDF on the support $x \in [0,\, \lambda] \subseteq [\calX(+\infty),\, \lambda]$.
\vspace{.1in}
\noindent
{\bf Implicit Equation~\eqref{eq:implicit_equation}.}
We reformulate our {\em parametric equation} $(x,\, H) = \big(\calX(z),\, \calH(z)\big)$ below, which precisely gives the claimed implicit equation.
\begin{align*}
\text{Formula~\eqref{eq:solve_ODE:3}} ~~ \iff ~~
x & ~=~ 1 - \frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}} \cdot (1 + 1 / z)^{2} \cdot e^{-2 / z} \phantom{\bigg.} \\
& ~=~ 1 - \frac{1 - \lambda}{(1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}} \cdot \Big((1 + 1 / \mu)^{2} \cdot H\Big) \cdot \exp\Big(2 - (1 + 1 / \mu) \cdot 2\sqrt{H}\Big) \phantom{\bigg.} \\
& ~=~ 1 - (1 - \lambda) \cdot H \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H}\big)\Big). \phantom{\bigg.}
\end{align*}
Here the first step restates Formula~\eqref{eq:solve_ODE:3}. The second step (Formula~\eqref{eq:solve_ODE:7}) substitutes $(1 + 1 / z)^{2} = (1 + 1 / \mu)^{2} \cdot H$ and $-1 / z = 1 - (1 + 1 / \mu) \cdot \sqrt{H}$.\footnote{Since the variable $z \in [\mu,\, +\infty]$ is positive, we safely ignore the other (impossible) case $-1 / z = 1 + (1 + 1 / \mu) \cdot \sqrt{H}$.}
And the last step rearranges the equation.
\vspace{.1in}
Combining Condition~\eqref{eq:condition} and Implicit Equation~\eqref{eq:implicit_equation} together finishes the proof.
\end{proof}
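A numerical sketch confirming the substitution (outside the proof): points generated by the parametric pair $\big(\calX(z),\, \calH(z)\big)$ satisfy Implicit Equation~\eqref{eq:implicit_equation}; the pair $(\lambda,\, \mu) = (0.35,\, 1)$ is an arbitrary feasible sample.

```python
import math

lam, mu = 0.35, 1.0
C = (1 + 1 / mu) ** 2 * math.exp(-2 / mu)

err = 0.0
for z in [1.0, 1.2, 1.5, 1.8, 2.0]:        # keeps x inside the support [0, lam]
    x = 1 - (1 - lam) / C * (1 + 1 / z) ** 2 * math.exp(-2 / z)   # x = X(z)
    H = (1 + 1 / z) ** 2 / (1 + 1 / mu) ** 2                      # H = calH(z)
    # the implicit equation (eq:implicit_equation)
    x_implicit = 1 - (1 - lam) * H * math.exp((1 + 1 / mu) * (2 - 2 * math.sqrt(H)))
    err = max(err, abs(x - x_implicit))
print("max deviation:", err)
```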
As an implication of \Cref{lem:implicit_equation}, by setting $x = 0$ in Implicit Equation~\eqref{eq:implicit_equation} we can characterize the pointmass $h_{\mu} \eqdef H_{\mu}(0)$ of a candidate worst-case CDF. This is formalized into \Cref{cor:pointmass}. Also, because \Cref{lem:implicit_equation} holds for any given supremum bid $\lambda \in (0,\, 1)$, combining it with \Cref{lem:reformulation} gives \Cref{cor:reformulation}. These two corollaries are more convenient for our later use.
\begin{corollary}[The ODE solutions]
\label{cor:pointmass}
Given a supremum bid $\lambda \in (0,\, 1)$ and a parameter $\mu > 0$ that meets Condition~\eqref{eq:condition}, the CDF $H_{\mu}$ defined by Implicit Equation~\eqref{eq:implicit_equation} has a pointmass $h_{\mu} \eqdef H_{\mu}(0)$ at $x = 0$, which is the unique solution to the following equation:
\begin{align*}
\left\{\,~
\begin{aligned}
& 1 ~=~ (1 - \lambda) \cdot h_{\mu} \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{h_{\mu}}\big)\Big) \\
& \text{\em such that $(1 + 1 / \mu)^{-2} \leq h_{\mu} < 1$} \phantom{\bigg.}
\end{aligned}
~\,\right\}.
\end{align*}
\end{corollary}
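A numerical sketch of \Cref{cor:pointmass} (outside the proof): for a sample interior pair $(\lambda,\, \mu) = (0.35,\, 1)$, the pointmass $h_{\mu}$ can be located by plain bisection, using that the {\LHS}-minus-one of the equation is positive at $h_{\mu} = (1 + 1 / \mu)^{-2}$, negative as $h_{\mu} \to 1$, and decreasing in between.

```python
import math

lam, mu = 0.35, 1.0                        # a sample feasible pair
f = lambda h: (1 - lam) * h * math.exp((1 + 1 / mu) * (2 - 2 * math.sqrt(h))) - 1

lo, hi = (1 + 1 / mu) ** -2, 1.0           # h ranges over [(1 + 1/mu)^{-2}, 1)
assert f(lo) > 0 > f(hi)                   # f decreases across this interval
for _ in range(80):                        # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
h_mu = (lo + hi) / 2
print("pointmass h_mu =", h_mu)
```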
\begin{corollary}[{\textsf{Price of Anarchy}}]
\label{cor:reformulation}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} satisfies that
\begin{align*}
{\sf PoA} ~\geq~
1 - \sup \left\{\, (1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H_{\mu}(x) \cdot H'_{\mu}(x) \cdot \d x}{H_{\mu}(x) + (1 - x) \cdot H'_{\mu}(x)} \,\middlemid\,
\begin{aligned}
& 0 < \mu < +\infty \\
& 0 < \lambda \leq 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}
\end{aligned}\,\right\}.
\end{align*}
\end{corollary}
\begin{proof}
Applying \Cref{lem:implicit_equation} to Optimization~\eqref{eq:reformulation:3} gives an interim optimization that takes the supremum operation {\em twice}: (i)~the inner operation, according to Condition~\eqref{eq:condition}, works on feasible space $\big\{\, \mu > 0 \,\bigmid\, 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu} \geq \lambda \,\big\}$ given by the $\lambda \in (0,\, 1)$; and (ii)~the outer operation works on feasible space $\lambda \in (0,\, 1)$. Switching the order of both operations gives the corollary.
\end{proof}
\subsection{Optimizing the parameters \texorpdfstring{$(\lambda,\, \mu)$}{}}
\label{subsec:UB_lambda_mu}
We solve the optimization in \Cref{cor:reformulation} in two steps: (i)~\Cref{lem:UB_lambda_mu} transforms the considered {\em integral formula} into an equivalent {\em analytic formula}, Optimization~\eqref{eq:UB_lambda_mu}; and (ii)~\Cref{lem:worst_case} further derives the analytic solution, i.e., finds the (unique) worst case $(\lambda^{*},\, \mu^{*})$ of Optimization~\eqref{eq:UB_lambda_mu}.
\begin{lemma}[{\textsf{Price of Anarchy}}]
\label{lem:UB_lambda_mu}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} satisfies that
\begin{align}
\label{eq:UB_lambda_mu}\tag{V}
{\sf PoA} ~\geq~
1 - \sup \left\{\, (1 - \lambda) \cdot \bigg((1 - h_{\mu}) - \frac{2 - 2\sqrt{h_{\mu}}}{1 + 1 / \mu}\bigg) \,\middlemid\,
\begin{aligned}
& 0 < \mu < +\infty \\
& 0 < \lambda \leq 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu} \\
& \text{\em $h_{\mu}$ defined by \Cref{cor:pointmass}}
\end{aligned}\,\right\}.
\end{align}
\end{lemma}
\begin{lemma}[Worst case]
\label{lem:worst_case}
The solution to Optimization~\eqref{eq:UB_lambda_mu} is $1 - 1 / e^{2} \approx 0.8647$, which can be achieved by the worst case $(\lambda^{*},\, \mu^{*}) = (1 - 4 / e^{2},\, 1)$ and the resulting pointmass $h_{\mu^{*}} = 1 / 4$.
\end{lemma}
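Before the proofs, the claimed worst case can be plugged in directly (a numerical sketch, outside the proofs): the pointmass equation of \Cref{cor:pointmass} holds with equality, Condition~\eqref{eq:condition} is tight, and the objective of Optimization~\eqref{eq:UB_lambda_mu} evaluates to $1 / e^{2}$.

```python
import math

lam, mu = 1 - 4 / math.exp(2), 1.0   # claimed worst case (lambda*, mu*)
h = 0.25                             # claimed pointmass h_{mu*}

# the pointmass equation holds with equality ...
check = (1 - lam) * h * math.exp((1 + 1 / mu) * (2 - 2 * math.sqrt(h)))
# ... and the supremum bid sits exactly on the boundary of Condition (eq:condition)
boundary = 1 - (1 + 1 / mu) ** 2 * math.exp(-2 / mu)

# objective of Optimization (V) and the resulting PoA lower bound
value = (1 - lam) * ((1 - h) - (2 - 2 * math.sqrt(h)) / (1 + 1 / mu))
bound = 1 - value
print(check, boundary, value, bound)   # expect 1, lam, 1/e^2, 1 - 1/e^2 ≈ 0.8647
```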
\begin{proof}[Proof of \Cref{lem:UB_lambda_mu}]
Following \Cref{cor:reformulation}, given any feasible pair $(\lambda,\, \mu)$, we study the integral
\begin{align}
(1 - \lambda) \cdot \int_{0}^{\lambda} \frac{H_{\mu}(x) \cdot H'_{\mu}(x) \cdot \d x}{H_{\mu}(x) + (1 - x) \cdot H'_{\mu}(x)}
& ~=~ (1 - \lambda) \cdot \int_{x \,\in\, [0,\, \lambda]} \frac{H_{\mu} \cdot \big(\d H_{\mu} \big/ \d x\big) \cdot \d x}{H_{\mu} + (1 - x) \cdot \big(\d H_{\mu} \big/ \d x\big)}
\nonumber \\
& ~=~ (1 - \lambda) \cdot \int_{H_{\mu} \,\in\, [h_{\mu},\, 1]} \frac{H_{\mu} \cdot \d H_{\mu}}{H_{\mu} + (1 - x) \big/ (\d x / \d H_{\mu})},
\label{eq:PoA_formula:1}
\end{align}
where the last step changes the variables. We aim to give an analytic formula for the term $\frac{1 - x}{\d x / \d H_{\mu}}$. To this end, let us regard Implicit Equation~\eqref{eq:implicit_equation} as a function $x(H_{\mu})$ of variable $H_{\mu} \in [h_{\mu},\, 1]$ (i.e., the inverse function of the considered CDF); then the derivative is given by
\begin{align*}
\frac{\d x}{\d H_{\mu}}
& ~=~ -(1 - \lambda) \cdot \frac{\d}{\d H_{\mu}} \bigg(H_{\mu} \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big)\bigg) \\
& ~=~ -(1 - \lambda) \cdot \bigg(\exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big)\Bigg. \\
& \phantom{~=~} \hspace{2.5cm} ~+~ \bigg.H_{\mu} \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big) \cdot (1 + 1 / \mu) \cdot \frac{-1}{\sqrt{H_{\mu}}}\bigg) \\
& ~=~ -(1 - \lambda) \cdot \Big(1 - (1 + 1 / \mu) \cdot \sqrt{H_{\mu}}\Big) \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big). \phantom{\bigg.}
\end{align*}
Combining this formula with Implicit Equation~\eqref{eq:implicit_equation} gives
\begin{align*}
\frac{1 - x}{\d x \big/ \d H_{\mu}}
& ~=~ \frac{\phantom{-}(1 - \lambda) \cdot \parbox{3.88cm}{\centering $H_{\mu}$} \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big)}{-(1 - \lambda) \cdot \Big(1 - (1 + 1 / \mu) \cdot \sqrt{H_{\mu}}\Big) \cdot \exp\Big((1 + 1 / \mu) \cdot \big(2 - 2\sqrt{H_{\mu}}\big)\Big)} \\
& ~=~ \frac{H_{\mu}}{(1 + 1 / \mu) \cdot \sqrt{H_{\mu}} - 1}.
\end{align*}
Plugging this analytic formula for the term $\frac{1 - x}{\d x / \d H_{\mu}}$ back to Integral~\eqref{eq:PoA_formula:1} gives
\begin{align*}
\text{Integral~\eqref{eq:PoA_formula:1}}
& ~=~ (1 - \lambda) \cdot \int_{H_{\mu} \,\in\, [h_{\mu},\, 1]} \frac{H_{\mu} \cdot \d H_{\mu}}{H_{\mu} + \frac{H_{\mu}}{(1 + 1 / \mu) \cdot \sqrt{H_{\mu}} - 1}} \\
& ~=~ (1 - \lambda) \cdot \int_{H_{\mu} \in [h_{\mu},\, 1]} \bigg(1 - \frac{1}{(1 + 1 / \mu) \cdot \sqrt{H_{\mu}}}\bigg) \cdot \d H_{\mu} \\
& ~=~ (1 - \lambda) \cdot \bigg(H_{\mu} - \frac{2\sqrt{H_{\mu}}}{1 + 1 / \mu}\bigg) \biggmid_{H_{\mu} \,=\, h_{\mu}}^{1} \\
& ~=~ (1 - \lambda) \cdot \bigg((1 - h_{\mu}) - \frac{2 - 2\sqrt{h_{\mu}}}{1 + 1 / \mu}\bigg),
\end{align*}
which is precisely the formula in Optimization~\eqref{eq:UB_lambda_mu}. This finishes the proof of the lemma.
\end{proof}
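A numerical sketch (outside the proof) that re-evaluates the integral directly and compares it against the analytic formula: the substitution $x = \calX(z)$ together with $H'_{\mu}(x) = z \cdot H_{\mu} / (1 - x)$ turns the integrand into $\frac{z \cdot H_{\mu}}{(1 - x) \cdot (1 + z)}$, and the pair $(\lambda,\, \mu) = (0.35,\, 1)$ is an arbitrary feasible sample.

```python
import math

lam, mu = 0.35, 1.0                        # a sample feasible pair
C  = (1 + 1 / mu) ** 2 * math.exp(-2 / mu)
X  = lambda z: 1 - (1 - lam) / C * (1 + 1 / z) ** 2 * math.exp(-2 / z)
dX = lambda z: -(1 - lam) / C * 2 * (1 + 1 / z) * math.exp(-2 / z) / z ** 3
calH = lambda z: (1 + 1 / z) ** 2 / (1 + 1 / mu) ** 2

# locate z0 with X(z0) = 0 (the left end x = 0 of the support) by bisection
lo, hi = mu, 100.0
assert X(lo) > 0 > X(hi)
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if X(mid) > 0 else (lo, mid)
z0 = (lo + hi) / 2
h_mu = calH(z0)                            # the pointmass H_mu(0)

# integral over x in [0, lam], substituted to the z-parameterization
def integrand(z):
    x, H = X(z), calH(z)
    return z * H / ((1 - x) * (1 + z)) * dX(z)

n = 20000
total = sum(integrand(mu + (i + 0.5) * (z0 - mu) / n) for i in range(n)) * (z0 - mu) / n
numeric = -(1 - lam) * total               # dX < 0 flips the orientation

analytic = (1 - lam) * ((1 - h_mu) - (2 - 2 * math.sqrt(h_mu)) / (1 + 1 / mu))
print(numeric, analytic)
```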
Optimization~\eqref{eq:UB_lambda_mu} involves just three variables and, indeed, one of them $h_{\mu}$ is even determined by the other two $(\lambda,\, \mu)$. \Cref{lem:worst_case} gives the analytic solution to Optimization~\eqref{eq:UB_lambda_mu}.
\begin{proof}[Proof of \Cref{lem:worst_case}]
For the moment, we regard $\mu > 0$ as a given parameter. It follows from \Cref{cor:pointmass} that $\lambda = G(h_{\mu})$, where the function $G$ is defined in the domain $h_{\mu} \in \big[(1 + 1 / \mu)^{-2},\, 1\big)$:
\begin{align*}
G(h_{\mu}) ~\eqdef~ 1 - (1 / h_{\mu}) \cdot \exp\Big(-\big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)\Big).
\end{align*}
This $G(h_{\mu})$ is a continuous function. And for any $h_{\mu} > (1 + 1 / \mu)^{-2}$, its derivative
\begin{align*}
G'(h_{\mu})
& ~=~ (1 / h_{\mu}^{2}) \cdot \exp\Big(-\big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)\Big) \\
& \hspace{1cm} ~-~ (1 / h_{\mu}) \cdot \exp\Big(-\big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)\Big) \cdot \Big((1 + 1 / \mu) \big/ \sqrt{h_{\mu}}\Big) \\
& ~=~ (1 / h_{\mu}^{2}) \cdot \exp\Big(-\big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)\Big) \cdot \Big(1 - (1 + 1 / \mu) \cdot \sqrt{h_{\mu}}\Big)
\end{align*}
is {\em strictly negative}. Thus, this function $G(h_{\mu})$ is {\em strictly decreasing} in the whole {\em left-closed right-open} domain $h_{\mu} \in \big[(1 + 1 / \mu)^{-2},\, 1\big)$. Moreover, the image of function $G(h_{\mu})$,
\[
\Big\{\, G(h_{\mu}) \,\Bigmid\, (1 + 1 / \mu)^{-2} \leq h_{\mu} < 1 \,\Big\} ~=~ \big(0,\, 1 - (1 + 1 / \mu)^{2} \cdot e^{-2 / \mu}\big]
\]
is precisely the feasible space of $\lambda$ in Optimization~\eqref{eq:UB_lambda_mu}, which means the mapping between all feasible $\lambda$ and all $h_{\mu} \in \big[(1 + 1 / \mu)^{-2},\, 1\big)$ is {\em one-to-one}. For these reasons, we can deduce that
\begin{align*}
\text{Optimization~\eqref{eq:UB_lambda_mu}}
& ~=~ 1 - \sup \left\{\, \big(1 - G(h_{\mu})\big) \cdot \bigg((1 - h_{\mu}) - \frac{2 - 2\sqrt{h_{\mu}}}{1 + 1 / \mu}\bigg) \phantom{\frac{}{\Big.}}\middlemid\,
\begin{aligned}
& 0 < \mu < +\infty \\
& (1 + 1 / \mu)^{-2} \leq h_{\mu} < 1
\end{aligned}\,\right\} \\
& ~=~ 1 - \sup \left\{\, \frac{(1 - h_{\mu}) - \big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)^{-1}}{h_{\mu} \cdot \exp\Big(\big(2 - 2\sqrt{h_{\mu}}\big) \cdot (1 + 1 / \mu)\Big)} \,\middlemid\,
\begin{aligned}
& 0 < \mu < +\infty \\
& (1 + 1 / \mu)^{-2} \leq h_{\mu} < 1
\end{aligned}\,\right\} \\
& ~=~ 1 - \sup \left\{\, \calG(\beta,\, \gamma) \eqdef \frac{(1 - \beta^{2}) - (2 - 2\beta) \cdot \gamma}{\beta^{2} \cdot \exp\big((2 - 2\beta) \cdot \gamma^{-1}\big)}\phantom{\frac{}{\Big.}} \middlemid\, \text{$0 < \gamma \leq \beta < 1$} \,\right\},
\end{align*}
where the first step switches the optimization variable from $\lambda$ to $h_{\mu} \in \big[(1 + 1 / \mu)^{-2},\, 1\big)$, the second step plugs in the $G(h_{\mu})$ formula, and the last step substitutes $\beta \eqdef \sqrt{h_{\mu}}$ and $\gamma \eqdef (1 + 1 / \mu)^{-1}$.
Below we reason about the function $\calG(\beta,\, \gamma)$ defined above. The partial derivative of this function with respect to variable $\gamma$ is given by
\begin{align*}
\frac{\partial \calG}{\partial \gamma}
& ~=~ \frac{-(2 - 2\beta)}{\beta^{2} \cdot \exp\big((2 - 2\beta) \cdot \gamma^{-1}\big)}
~+~ \frac{(1 - \beta^{2}) - (2 - 2\beta) \cdot \gamma}{\beta^{2} \cdot \exp\big((2 - 2\beta) \cdot \gamma^{-1}\big)} \cdot (2 - 2\beta) \cdot \gamma^{-2} \\
& ~=~ \frac{(1 - \beta^{2}) - (2 - 2\beta) \cdot \gamma - \gamma^{2}}{\beta^{2} \cdot \exp\big((2 - 2\beta) \cdot \gamma^{-1}\big)} \cdot (2 - 2\beta) \cdot \gamma^{-2},
\end{align*}
where the last step rearranges the formula. Clearly, in the feasible space $\{0 < \gamma \leq \beta < 1\}$, the sign of this partial derivative $\partial \calG \big/ \partial \gamma$ depends on the numerator $(1 - \beta^{2}) - 2\gamma \cdot (1 - \beta) - \gamma^{2}$.
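This partial-derivative formula can be sanity-checked by central finite differences at arbitrary feasible sample points $(\beta,\, \gamma)$ with $\gamma \leq \beta$ (a sketch, outside the proof).

```python
import math

G = lambda b, g: ((1 - b * b) - (2 - 2 * b) * g) / (b * b * math.exp((2 - 2 * b) / g))

eps = 1e-6
err = 0.0
for b, g in [(0.3, 0.2), (0.6, 0.5), (0.8, 0.4), (0.9, 0.7)]:
    # claimed partial derivative with respect to gamma
    claimed = ((1 - b * b) - (2 - 2 * b) * g - g * g) \
              / (b * b * math.exp((2 - 2 * b) / g)) * (2 - 2 * b) / g ** 2
    # central finite difference
    fd = (G(b, g + eps) - G(b, g - eps)) / (2 * eps)
    err = max(err, abs(claimed - fd))
print("max deviation:", err)
```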
\begin{figure}[t]
\centering
\includegraphics[width = .9\textwidth]{functionG.png}
\caption{Demonstration that (red curve) $\calG(\beta,\, \beta)$ is an increasing function of $\beta \in (0,\, 1 / 2]$ and that (blue curve) $\calG(\beta,\, \gamma_{\beta})$ is a decreasing function of $\beta \in (1 / 2,\, 1)$.
\label{fig:functionG}}
\end{figure}
For the moment, we regard $\beta \in (0,\, 1)$ as a given parameter and consider the parabola
\[
P_{\beta}(\gamma) ~\eqdef~ (1 - \beta^{2}) - 2\gamma \cdot (1 - \beta) - \gamma^{2}
\]
in the domain $\gamma \in [0,\, \beta]$. This parabola opens {\em downwards} and has a {\em nonpositive} axis of symmetry $\gamma = -(1 - \beta)$, thus being a {\em decreasing} function over the whole domain $\gamma \in [0,\, \beta]$. Also, the left/right ends are respectively $P_{\beta}(0) = 1 - \beta^{2} > 0$ and $P_{\beta}(\beta) = 1 - 2\beta$. Let us do a case analysis.
\vspace{.1in}
\noindent
{\bf Case~I that $0 < \beta \leq 1 / 2$.}
Now both ends $P_{\beta}(0)$ and $P_{\beta}(\beta)$, as well as the whole parabola, are {\em nonnegative}, which implies that $\partial \calG \big/ \partial \gamma \geq 0$ for any $\gamma \in (0,\, \beta]$. Accordingly, we have
\begin{align*}
\sup \Big\{\, \calG(\beta,\, \gamma) \,\Bigmid\, \text{\bf Case~I} \,\Big\}
& ~=~ \sup \Big\{\, \calG(\beta,\, \beta) \,\Bigmid\, \text{$0 < \beta \leq 1 / 2$} \,\Big\} \\
& ~=~ \sup \Big\{\, (1 / \beta - 1)^{2} \cdot e^{-(2 / \beta - 2)} \,\Bigmid\, \text{$0 < \beta \leq 1 / 2$} \,\Big\} \\
& ~=~ 1 / e^{2}, \phantom{\Big.}
\end{align*}
where the last two steps can be checked through elementary algebra. Particularly, $\calG(\beta,\, \beta)$ is an increasing function of $\beta \in (0,\, 1 / 2]$ (see \Cref{fig:functionG} for a visual demonstration), and the supremum $1 / e^{2} \approx 0.1353$ is achieved when $\beta = \gamma = 1 / 2$.
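Since these last two steps are only asserted to follow from elementary algebra, a small numerical sanity check may be reassuring. The sketch below is our own illustration (the helper names \texttt{calG} and \texttt{calG\_diag} are ours): it confirms that the closed form matches $\calG(\beta,\, \beta)$, that the diagonal values increase over $(0,\, 1/2]$, and that the value at $\beta = 1/2$ equals $1 / e^{2}$.

```python
import math

def calG(beta, gamma):
    """The objective calG(beta, gamma) from the displayed optimization."""
    num = (1 - beta**2) - (2 - 2 * beta) * gamma
    den = beta**2 * math.exp((2 - 2 * beta) / gamma)
    return num / den

def calG_diag(beta):
    """Closed form (1/beta - 1)^2 * e^{-(2/beta - 2)} for calG(beta, beta)."""
    return (1 / beta - 1)**2 * math.exp(-(2 / beta - 2))

# Sample beta = gamma over (0, 1/2]: the closed form matches the general
# formula, the values increase in beta, and the maximum sampled value
# (at beta = 1/2) equals 1/e^2.
betas = [k / 20 for k in range(1, 11)]        # 0.05, 0.10, ..., 0.50
diag_vals = [calG(b, b) for b in betas]
```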
\vspace{.1in}
\noindent
{\bf Case~II that $1 / 2 < \beta < 1$.}
Now the left end $P_{\beta}(0) = 1 - \beta^{2}$ is {\em positive}, the right end $P_{\beta}(\beta) = 1 - 2\beta$ is {\em negative}, and (elementary algebra) the parabola has exactly one root $\gamma_{\beta} \eqdef \sqrt{(2 - 2\beta)} - (1 - \beta)$ in its domain. This implies that $\partial \calG \big/ \partial \gamma \geq 0$ for any $\gamma \in [0,\, \gamma_{\beta}]$ and $\partial \calG \big/ \partial \gamma \leq 0$ for any $\gamma \in [\gamma_{\beta},\, \beta]$. As a consequence, we have
\begin{align*}
\sup \Big\{\, \calG(\beta,\, \gamma) \,\Bigmid\, \text{\bf Case~II} \,\Big\}
& ~=~ \sup \Big\{\, \calG(\beta,\, \gamma_{\beta}) \,\Bigmid\, \text{$1 / 2 < \beta < 1$} \,\Big\} \\
& ~=~ \sup \Big\{\, \Big(\tfrac{\sqrt{2} - \sqrt{1 - \beta}}{\beta / \sqrt{1 - \beta}}\Big)^{2} \cdot \exp\Big(\tfrac{-2\sqrt{1 - \beta}}{\sqrt{2} - \sqrt{1 - \beta}}\Big) \,\Bigmid\, \text{$1 / 2 < \beta < 1$} \,\Big\} \\
& ~=~ 1 / e^{2}, \phantom{\Big.}
\end{align*}
where the last two steps can be checked through elementary algebra. Particularly, $\calG(\beta,\, \gamma_{\beta})$ is a decreasing function of $\beta \in (1 / 2,\, 1)$ (see \Cref{fig:functionG} for a visual demonstration), and the supremum $1 / e^{2} \approx 0.1353$ is approached (but cannot be achieved) when $\gamma = \gamma_{\beta}$ and $\beta \searrow 1 / 2$.
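Case~II can likewise be spot-checked numerically. The following sketch (again our own illustration, with hypothetical helper names) verifies that $\gamma_{\beta}$ is a root of the parabola $P_{\beta}$ lying in $(0,\, \beta)$, that the ridge values $\calG(\beta,\, \gamma_{\beta})$ decrease in $\beta$, and that they stay below and approach $1 / e^{2}$ as $\beta \searrow 1/2$.

```python
import math

def calG(beta, gamma):
    """The objective calG(beta, gamma) from the displayed optimization."""
    num = (1 - beta**2) - (2 - 2 * beta) * gamma
    den = beta**2 * math.exp((2 - 2 * beta) / gamma)
    return num / den

def P(beta, gamma):
    """The parabola P_beta(gamma) = (1 - beta^2) - 2*gamma*(1-beta) - gamma^2."""
    return (1 - beta**2) - 2 * gamma * (1 - beta) - gamma**2

def gamma_root(beta):
    """The root gamma_beta = sqrt(2(1-beta)) - (1-beta) inside [0, beta]."""
    return math.sqrt(2 * (1 - beta)) - (1 - beta)

betas = [0.5 + k / 20 for k in range(1, 10)]   # 0.55, 0.60, ..., 0.95
ridge_vals = [calG(b, gamma_root(b)) for b in betas]
```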
\vspace{.1in}
Combining both cases together gives $\sup \{\calG(\beta,\, \gamma) \mid 0 < \gamma \leq \beta < 1\} = 1 / e^{2}$ and thus
\begin{align*}
\text{Optimization~\eqref{eq:UB_lambda_mu}}
~=~ 1 - \sup \Big\{\, \calG(\beta,\, \gamma) \,\Bigmid\, 0 < \gamma \leq \beta < 1 \,\Big\}
~=~ 1 - 1 / e^{2}.
\end{align*}
Particularly, the worst case can be achieved when $\beta = \gamma = 1 / 2$, or equivalently, when the supremum bid $\lambda = 1 - 4 / e^{2} \approx 0.4587$, the parameter $\mu = 1$ and the pointmass $h_{\mu} = 1 / 4$.
This finishes the proof of \Cref{lem:worst_case}.
\end{proof}
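As a final cross-check of the worst case identified above (our own numerical sketch), the substitution indeed maps the parameters $\mu = 1$ and $h_{\mu} = 1/4$ to $\beta = \gamma = 1/2$, where the objective, written in the original $(\mu,\, h_{\mu})$ coordinates, equals $1 / e^{2}$.

```python
import math

mu, h = 1.0, 0.25                 # the worst-case parameters mu and h_mu
gamma = 1 / (1 + 1 / mu)          # gamma = (1 + 1/mu)^{-1}
beta = math.sqrt(h)               # beta = sqrt(h_mu)

# The second form of the objective, in (mu, h_mu) coordinates
obj = ((1 - h) - (2 - 2 * math.sqrt(h)) / (1 + 1 / mu)) \
      / (h * math.exp((2 - 2 * math.sqrt(h)) * (1 + 1 / mu)))

lam = 1 - 4 / math.exp(2)         # the supremum bid lambda = 1 - 4/e^2
```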
\Cref{lem:UB_lambda_mu,lem:worst_case} together imply the lower-bound part of \Cref{thm:main}, which is restated below.
\begin{restate}[{\Cref{thm:main}}]
The {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}} is $\geq 1 - 1 / e^{2} \approx 0.8647$.
\end{restate}
\section*{Acknowledgements}
We are grateful to Xi Chen for invaluable discussions in multiple stages throughout this work and would like to thank Xiaohui Bei, Hu Fu, Tim Roughgarden, Rocco Servedio, Zhihao Gavin Tang, Zihe Wang, and anonymous reviewers for helpful comments.
Y.J.\ is supported by NSF grants IIS-1838154, CCF-1563155, CCF-1703925, CCF-1814873, CCF-2106429, and CCF-2107187.
P.L.\ is supported by Science and Technology Innovation 2030 – ``New Generation of Artificial Intelligence'' Major Project No.(2018AAA0100903), NSFC grant 61922052 and 61932002, Innovation Program of Shanghai Municipal Education Commission, Program for Innovative Research Team of Shanghai University of Finance and Economics, and the Fundamental Research Funds for the Central Universities.
\newpage
\begin{flushleft}
\bibliographystyle{alpha}
\subsection{Technical overview}
\label{sec:overview}
This subsection sketches our high-level proof ideas. To ease readability, some descriptions are roughly but not perfectly accurate, and most technical details are deferred to \Cref{sec:structure,sec:preprocessing,sec:reduction,sec:UB,sec:LB}.
Our approach is very different from that of the prior works \cite{ST13,HTW18}, which mainly adopt the smoothness framework or its extensions.
More concretely, we employ a first-principles approach that directly characterizes the {\em worst-case} instance and the {\em worst-case} Bayesian Nash equilibrium with respect to the definition of the Price of Anarchy. To this end, we step by step narrow down the search space of the worst case by proving more and more necessary conditions that it must satisfy.
Finally, we capture the exact worst case and thus derive the tight {{\sf PoA}} bound.
\subsection*{Changing the viewpoints (\Cref{sec:structure})}
Regarding the {{\sf PoA}} definition, we need to prove that
the auction social welfare ${\sf FPA}(\bV,\, \bs)$ is within a certain ratio of the optimal social welfare ${\sf OPT}(\bV)$, given any value distribution $\bV = \{V_{i}\}_{i \in [n]}$ and any equilibrium thereof $\bs = \{s_{i}\}_{i \in [n]}$.
Two difficulties emerge immediately: (i)~One value distribution $\bV \in \bbV$ generally has {\em multiple} or even {\em infinite} equilibria. (ii)~One equilibrium $\bs \in \bbBNE(\bV)$ generally has {\em no} analytic
solution; even an efficient algorithm for computing (approximate) equilibria from value distributions is unknown.
To the rescue, instead of the original value-strategy representation $(\bV,\, \bs) \in \bbV \times \bbBNE$ of equilibria,
we use another representation, the {\em bid distributions} $\bB(\bV,\, \bs)$ resulting from equilibria.
These two representations and their relationship are demonstrated in \Cref{fig:mappings}.
Given one equilibrium bid distribution $\bB(\bV,\, \bs) = \{B_{i}\}_{i \in [n]}$: (i)~One equilibrium $(\bV,\, \bs) \in \bbV \times \bbBNE$ is {\em uniquely} determined.
(ii)~The reconstruction of the $(\bV,\, \bs)$ essentially {\em has} an analytic
solution, through the {\em bid-to-value mappings} $\varphi_{i}(b) \eqdef b + (\sum_{k \neq i} B'_{k}(b) / B_{k}(b))^{-1}$ that are almost the inverse functions of the strategies $\bs = \{s_{i}\}_{i \in [n]}$.\footnote{Although the mapping formulas $\varphi_{i}(b) = b + (\sum_{k \neq i} B'_{k}(b) / B_{k}(b))^{-1}$ are previously known, NO prior work takes ANY further step. Even this structure result is unknown: ``One (valid) bid distribution $\bB \in \mathbb{B}_{\sf valid}$ backward uniquely decides the underlying value-strategy tuple $(\bV,\, \bs) \in \bbV \times \bbBNE$''; let alone techniques towards the tight ${\sf PoA} = 1 - 1 / e^{2}$.}
These mitigate the above two difficulties.
In sum, there is a bijection between the space of equilibria $\bbV \times \bbBNE \ni (\bV,\, \bs)$ and the space of equilibrium bid distributions $\{\bB(\bV,\, \bs)\}$.
Further, the new representation $\{\bB(\bV,\, \bs)\}$ is mathematically equivalent but technically easier -- this is {\em one} infinite set instead of the Cartesian product $\bbV \times \bbBNE$ of {\em two} infinite sets -- showing an avenue towards the tight {{\sf PoA}}. (Later in \Cref{sec:structure,sec:preprocessing,sec:reduction,sec:UB,sec:LB}, we will see more advantages.)
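The bid-to-value mapping formula can be sanity-checked on a classical example (our own illustration, not an instance from this paper; the helper names are ours): with two bidders whose values are i.i.d.\ $\mathrm{Uniform}[0,\, 1]$, the well-known equilibrium strategy is $s(v) = v / 2$, so each bid distribution is $B(b) = 2b$ on $[0,\, 1/2]$, and the formula recovers the value $\varphi(b) = 2b$, i.e., the inverse of the strategy.

```python
def phi(b, opp_cdfs, opp_pdfs):
    """Bid-to-value mapping phi_i(b) = b + (sum_{k != i} B_k'(b)/B_k(b))^{-1},
    given the opponents' bid CDFs and densities evaluated at b."""
    score = sum(pdf(b) / cdf(b) for cdf, pdf in zip(opp_cdfs, opp_pdfs))
    return b + 1.0 / score

# Two i.i.d. Uniform[0,1] values with equilibrium strategy s(v) = v/2:
B = lambda b: 2 * b      # the opponent's bid CDF on [0, 1/2]
dB = lambda b: 2.0       # its density
# phi(b) = b + (2/(2b))^{-1} = 2b recovers the bidder's value at every bid
values = [phi(b, [B], [dB]) for b in (0.1, 0.2, 0.3, 0.4)]
```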
\newlength{\FigMappingsHeight}
\setlength{\FigMappingsHeight}{3.1cm}
\begin{figure}[t]
\centering
\subfloat[Forward direction: $\bbV \times \bbBNE \mapsto \mathbb{B}_{\sf valid}$
\label{fig:mappings:1}]{
\resizebox{.38\textwidth}{!}{
\begin{tikzpicture}[thick, smooth, scale = 1]
\draw[fill = red!15] (-1.35, 0) arc(-180: 180: 1.35cm and \FigMappingsHeight);
\draw[fill = green!15] (3.65, 0) arc(-180: 180: 1.35cm and \FigMappingsHeight) -- cycle;
\node[below] at (0, -\FigMappingsHeight) {Space $\bbV$};
\node[below] at (5, -\FigMappingsHeight) {Space $\mathbb{B}_{\sf valid}$};
\draw[fill = black] (0, 0.7) circle (3pt);
\node[left] at (0, 0.7) (V1) {$\bV$};
\draw[fill = black] (5, 2.1) circle (3pt);
\node[right] at (5, 2.1) (B1) {$\bB$};
\draw[fill = black] (5, 0.7) circle (3pt);
\node[right] at (5, 0.7) (B2) {$\bar{\bB}$};
\draw[fill = black] (5, -0.7) circle (3pt);
\node[right] at (5, -0.7) (B3) {$\hat{\bB}$};
\draw[fill = black] (0, -2.1) circle (3pt);
\node[left] at (0, -2.1) (V4) {$\tilde{\bV}$};
\draw[fill = black] (5, -2.1) circle (3pt);
\node[right] at (5, -2.1) (B4) {$\tilde{\bB}$};
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (V1) to node[above, rotate = 15, yshift = -.1cm] {\small $\bs \in \bbBNE(\bV)$} (B1);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (V1) to node[above, yshift = -.1cm] {\small $\bar{\bs} \in \bbBNE(\bV)$} (B2);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (V1) to node[above, rotate = -13, yshift = -.1cm] {\small $\hat{\bs} \in \bbBNE(\bV)$} (B3);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (V4) to node[above, yshift = -.1cm] {\small $\tilde{\bs} \in \bbBNE(\bV)$} (B4);
\end{tikzpicture}}}
\hfill
\subfloat[Inverse direction: $\mathbb{B}_{\sf valid} \mapsto \bbV \times \bbBNE$
\label{fig:mappings:2}]{
\resizebox{.38\textwidth}{!}{
\begin{tikzpicture}[thick, smooth, scale = 1]
\draw[fill = orange!15] (8.65, 0) arc(-180: 180: 1.35cm and \FigMappingsHeight);
\draw[fill = green!15] (3.65, 0) arc(-180: 180: 1.35cm and \FigMappingsHeight) -- cycle;
\node[below] at (10, -\FigMappingsHeight) {Space $\bbV \times \bbBNE$};
\node[below] at (5, -\FigMappingsHeight) {Space $\mathbb{B}_{\sf valid}$};
\draw[fill = black] (10, 2.1) circle (3pt);
\node[right] at (10, 2.1) (V1) {};
\node[below] at (10, 2.1) {$(\bV,\, \bs)$};
\draw[fill = black] (5, 2.1) circle (3pt);
\node[left] at (5, 2.1) (B1) {$\bB$};
\draw[fill = black] (10, 0.7) circle (3pt);
\node[right] at (10, 0.7) (V2) {};
\node[below] at (10, 0.7) {$(\bV,\, \bar{\bs})$};
\draw[fill = black] (5, 0.7) circle (3pt);
\node[left] at (5, 0.7) (B2) {$\bar{\bB}$};
\draw[fill = black] (10, -0.7) circle (3pt);
\node[right] at (10, -0.7) (V3) {};
\node[below] at (10, -0.7) {$(\bV,\, \hat{\bs})$};
\draw[fill = black] (5, -0.7) circle (3pt);
\node[left] at (5, -0.7) (B3) {$\hat{\bB}$};
\draw[fill = black] (10, -2.1) circle (3pt);
\node[right] at (10, -2.1) (V4) {};
\node[below] at (10, -2.1) {$(\tilde{\bV},\, \tilde{\bs})$};
\draw[fill = black] (5, -2.1) circle (3pt);
\node[left] at (5, -2.1) (B4) {$\tilde{\bB}$};
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (B1) to (V1);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (B2) to (V2);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (B3) to (V3);
\draw[shorten <=0.2cm, shorten >=0.2cm, ->, >=triangle 45] (B4) to (V4);
\end{tikzpicture}}}
\caption{Demonstration for the two representations of equilibrium, (i)~the value-strategy representation $(\bV,\, \bs) \in \bbV \times \bbBNE$ versus (ii)~the bid distribution representation $\bB = \bB(\bV,\, \bs) \in \mathbb{B}_{\sf valid}$.
One value distribution $\bV$ has multiple or even infinite equilibria $\bs \in \bbBNE(\bV)$; each equilibrium induces one valid bid distribution $\bB = \bB(\bV,\, \bs) \in \mathbb{B}_{\sf valid}$. Backward, the $\bB$ decides the underlying tuple $(\bV,\, \bs) \in \bbV \times \bbBNE$ via the bid-to-value mappings $\bvarphi$. Hence, there is a bijection between the two spaces $\bbV \times \bbBNE$ and $\mathbb{B}_{\sf valid}$.
\label{fig:mappings}}
\end{figure}
Nonetheless, there is no free lunch. The new representation has two drawbacks. (i)~Unlike that any value distribution $\bV \in \bbV$ must be feasible since the existence of an equilibrium $\bs \in \bbBNE(\bV) $ is promised \cite{L96},\footnote{This existence result requires suitable {\em tie-breaking} rules in the first-price auction; see \Cref{sec:structure} for more details.}
a generic bid distribution $\bB$ does not necessarily induce an equilibrium.
To remedy this issue, we introduce the notion of {\em validity}, which almost refers to {\em monotonicity} of the bid-to-value mappings $\bvarphi = \{\varphi_{i}\}_{i \in [n]}$.
In addition, (ii)~a mapping $\varphi_{i}$ is undefined at a singular point where the equilibrium bid distribution $B_{i}$ of that bidder $i \in [n]$ has a probability mass, which must be handled separately.
Luckily, such probability masses are possible only at the {\em left endpoint} of the distribution's support.
So, we further consider the {\em conditional value distribution} $P$ given one's bid being at the left endpoint.
Indeed, the concept of validity refers to a tuple $(\bB,\, P)$ of equilibrium bid distributions and a conditional value distribution.
More rigorously, our new representation considers all the {\em valid} tuples/instances $(\bB,\, P) \in \mathbb{B}_{\sf valid}$.
Here, the {\em valid} distributions $\bB$ can be replaced with the {\em increasing} bid-to-value mappings $\bvarphi$: One decides the other and vice versa (\Cref{lem:pseudo_distribution}).
More importantly, monotonicity of the mappings $\bvarphi$ gives a strong geometric intuition, making the third representation $(\bvarphi,\, P)$ of equilibrium more convenient for our later use.
Especially, in what follows, the figures of the mappings $\bvarphi = \{\varphi_{i}\}_{i \in [n]}$ \textbf{\em play the role of visual demonstrations}, i.e., \textbf{\em horizontal bid axes} and \textbf{\em vertical value axes}, where each mapping $\varphi_{i}$ denotes one bidder $i \in [n]$.
\subsection*{The worst-case instance (\Cref{sec:LB})}
Our overall approach is to reduce any given instance to the {\em worst-case} instance step by step. Thus, it is helpful to describe the worst case in advance; this explains why we design the reductions as we do, since our target is very specific.
The worst case has $n + 1$ bidders, one bidder $H$ with a {\em fixed} high value $v_{H} \equiv 1$ and {\em sufficiently many} $n \gg 1$ identical low-value bidders $\{\sqrt[n]{L}\}^{\otimes n}$.
Among the low-value bidders, the highest value $v_{L} \sim V_{L}$ is distributed following the parametric equation $V_{L}(1 - t \cdot e^{2 - 2t}) = 4 / t^{2} \cdot e^{2t - 4}$ for $t \in [1,\, 2]$, over the value support $v_{L} \in \supp(V_{L}) = [0,\, 1 - 2 / e^{2}]$.
See \Cref{fig:intro:worst_case} for a visual aid, based on the bid-to-value mappings $\varphi_{H} \otimes \{\varphi_{L}\}^{\otimes n}$.
The equilibrium strategies $s_{H} \otimes \{s_{L}\}^{\otimes n}$ and the bid distributions $H \otimes \{\sqrt[n]{L}\}^{\otimes n}$ are less important for understanding our approach; we defer a description and the proof of the tight {{\sf PoA}} $= 1 - 1 / e^{2}$ to \Cref{sec:LB}.
The parametric equation $V_{L}$ is also less important and is included just for completeness.
In contrast, the point is the underlying structure: The bidder $H$ always contributes the optimal social welfare $\equiv 1$.
The low-value bidders $\{\sqrt[n]{L}\}^{\otimes n}$ {\em individually} have negligible effects (the winning probabilities etc)
but {\em collectively} make the auction game less efficient.
We are inspired by the instance due to Hartline, Hoy and Taggart \cite{HHT14}, which has the same $H \otimes \{\sqrt[n]{L}\}^{\otimes n}$ structure. Our construction differs from theirs in the concrete distributions, hence a slightly worse {{\sf PoA}} of $1 - 1 / e^{2} \approx 0.8647$ in comparison with their ratio of $\approx 0.8689$.\footnote{\cite{HHT14} ``just'' considered a reasonable instance $H(b) = \sqrt{b / \lambda}$ and $L(b) = \frac{1 - \lambda}{1 - b}$ for $\lambda = 0.57$, but neither gave argument/evidence for ``worstness'' of the instance family $H \otimes \{\sqrt[n]{L}\}^{\otimes n}$, nor searched the worst case $(H^{*},\,\lambda^{*})$.
In contrast, our contributions are twofold: (primary) ``worstness'' of this family; (secondary) the nontrivial worst case $b = 1 - {4H^{*}} \cdot {e^{2 - 4\sqrt{H^{*}}}}$ and $L^{*}(b) = \frac{1 - \lambda^{*}}{1 - b}$ for $\lambda^{*} = 1 - 4 / e^{2}$. In this family, the {{\sf PoA}} is less sensitive to different instances, hence two numerically close bounds $0.8689$ versus $0.8647$.}
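The parametric equation can be checked numerically to trace a valid CDF over the stated support. The sketch below is our own verification (the helper names are ours): along $t \in [1,\, 2]$ both the point and the CDF value increase, starting from $V_{L}(0) = 4 / e^{2}$ and ending at $V_{L}(1 - 2 / e^{2}) = 1$.

```python
import math

def point(t):
    """The value point 1 - t * e^{2 - 2t} at parameter t."""
    return 1 - t * math.exp(2 - 2 * t)

def VL(t):
    """The CDF value 4/t^2 * e^{2t - 4} of V_L at point(t)."""
    return 4 / t**2 * math.exp(2 * t - 4)

ts = [1 + k / 100 for k in range(101)]    # the parameter t in [1, 2]
pts = [point(t) for t in ts]
cdf = [VL(t) for t in ts]
```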
\begin{figure}[t]
\centering
\subfloat[\label{fig:intro:worst_case}
The worst-case pseudo instance $H \otimes L$]{
\includegraphics[width = .4\textwidth]
{intro_worst_case.png}}
\hfill
\subfloat[\label{fig:intro:collapse}
A {\em non-collapsed} nearly worst-case instance]{
\includegraphics[width = .4\textwidth]
{intro_collapse.png}}
\caption{Demonstration for the worst-case pseudo instance and the {\text{\tt Collapse}} reduction. \\
\Cref{fig:intro:worst_case}: The bidder $H$ has a fixed high value $\varphi_{H}(b) \equiv 1$ for $b \in [0,\, 1 - 4 / e^{2}]$ and the pseudo bidder $L$ has the (parametric) bid-to-value mapping $\varphi_{L}(1 - t^{2} \cdot e^{2 - 2t}) = 1 - t \cdot e^{2 - 2t}$ for $t \in [1,\, 2]$. \\
\Cref{fig:intro:collapse}: When a specific bidder $i \in [n]$ has a fixed high value $\varphi_{i}(b) \equiv v_{i} \geq \max\{ \varphi_{k}(b): k \neq i \}$, over the bid support $b \in [\gamma,\, \lambda]$, the {\text{\tt Collapse}} reduction transforms this instance $\bB = B_{i} \otimes \bB_{-i}$ to a two-bidder pseudo instance $B_{i} \otimes \tilde{L}$ that yields a worse-or-equal {{\sf PoA}}.
\label{fig:intro:twin}}
\end{figure}
\subsection*{The pseudo instances (\Cref{subsec:pseudo}) and \blackref{alg:collapse} (\Cref{subsec:collapse})}
Rigorously, the worst case $H \otimes \{\sqrt[n]{L}\}^{\otimes n}$ for $n \gg 1$ described above is not a specific instance, but a sequence of instances with the {\em limit} $n \to +\infty$ being the worst case.
This incurs some notational inconvenience.
When such issues occur in real/complex analysis, the standard method is to add the infinite point(s) to the space of the real/complex numbers, making the space more complete.
In the same spirit, we introduce the notion of {\em pseudo bidders} and {\em pseudo instances} (\Cref{def:pseudo}), thus including the above limit to our search space.
In our extended language, the worst case has a succinct format: one high-value bidder $H$ and one pseudo bidder $L$,
namely a two-bidder pseudo instance (\Cref{fig:intro:worst_case}).
Beyond notational brevity, the notion of pseudo bidders/instances also simplifies our proof in several places.
This extension does not hurt our proof, given that pseudo instances are only considered in the {\em lower-bound} analysis. Precisely, we show that even the worst case in the extended search space has the {{\sf PoA}} $\geq 1 - 1 / e^{2}$, implying a lower bound of $1 - 1 / e^{2}$ for the original problem.
In addition, the {\em upper-bound} analysis leverages the above instance sequence $H \otimes \{\sqrt[n]{L}\}^{\otimes n}$.
Precisely, we show that when the $n \gg 1$ is sufficiently large, the {{\sf PoA}} can be arbitrarily close to $1 - 1 / e^{2}$. As a combination, the tight {{\sf PoA}} in \Cref{thm:main} gets established.
Another related ingredient is the \blackref{alg:collapse} reduction. As \Cref{fig:intro:collapse} shows,
when a specific bidder $i \in [n]$ has a fixed high value $\varphi_{i}(b) \equiv v_{i} \geq \max\{ \varphi_{k}(b): k \neq i \}$ over the bid support $b \in [\gamma,\, \lambda]$,
the \blackref{alg:collapse} reduction
can replace all the other bidders $\{B_{k}\}_{k \neq i}$ by one pseudo bidder $\tilde{L}$, resulting in a two-bidder pseudo instance $B_{i} \otimes \tilde{L}$ that has a worse-or-equal {{\sf PoA}}.
Such a pseudo instance $B_{i} \otimes \tilde{L}$ {\em will} be easy to handle since it has the same shape as the worst case $H \otimes L$.
In sum, the remaining task is to transform every valid instance $\in \mathbb{B}_{\sf valid}$ into a specific instance that the {\text{\tt Collapse}} reduction can work on, i.e., an instance that has one {\em fixed-high-value} bidder.
\subsection*{\blackref{alg:layer} (\Cref{subsec:layer}): Hierarchizing the bidders}
\begin{figure}[t]
\centering
\subfloat[\label{fig:intro:layer:old}
The input {\em increasing} mappings $\bvarphi$]{
\includegraphics[width = .4\textwidth]
{layer_old.png}}
\hfill
\subfloat[\label{fig:intro:layer:new}
The output {\em layered} mappings $\tilde{\bvarphi}$]{
\includegraphics[width = .4\textwidth]
{layer_new.png}}
\caption{The {\text{\tt Layer}} reduction. Each color corresponds to a bidder. After the reduction, the bidders are ordered. For example, the red bidder is the highest bidder. \label{fig:intro:layer}}
\end{figure}
To transform a given instance into the worst case, we shall identify one special high-value bidder $H \in [n]$ and ``derandomize'' his/her value, i.e., making the bid-to-value mapping a constant function $\varphi_{H}(b) \equiv v_{H}$. (If so, upon scaling, we can normalize this fixed value $v_{H} = 1$.)
But it is even unclear which bidder $i \in [n]$ should be the candidate: The highest-value bidder $\argmax\{ \varphi_{i}(b): i \in [n]\}$ at different bids $b \in [\gamma,\, \lambda]$ can be different.
More concretely, consider the example in \Cref{fig:intro:layer:old}:
the highest-value bidder is initially the red bidder $B_{1}$ for small bids,
but changes to the blue bidder $B_{2}$ for large bids.
We can segment the bid-to-value mappings into pieces and rearrange them, in a sense of \Cref{fig:intro:layer:new}, making them {\em ordered point-wise} $\tilde{\varphi}_{1}(b) \geq \dots \geq \tilde{\varphi}_{n}(b)$.
Then, the candidate high-value bidder clearly is the index-$1$ bidder $\tilde{B}_{1}$ (i.e., the red one in \Cref{fig:intro:layer:new}).
We call this reduction \blackref{alg:layer}.
Under the \blackref{alg:layer} reduction, the auction/optimal social welfares {{\sf FPA}} and {{\sf OPT}} each remain the same, so the {{\sf PoA}} is invariant.
This proof is not that technically involved,
once we formalize the reduction.
However, without switching our viewpoint to the bid-to-value mappings $\bvarphi = \{\varphi_{i}\}_{i \in [n]}$, even describing this reduction seems difficult.
In sum, we narrow down the search space to the ``layered'' instances and the remaining task to address the candidate high-value bidder $B_{1}$, especially derandomizing his/her value $\varphi_{1}(b) \equiv v_{1}$.
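The rearrangement can be sketched as follows (our own toy illustration, not the paper's formal construction): sorting the mapping values pointwise at every bid, in descending order, yields layered mappings $\tilde{\varphi}_{1}(b) \geq \dots \geq \tilde{\varphi}_{n}(b)$.

```python
def layer(mappings, bids):
    """Pointwise-sort the bid-to-value mappings in descending order at each
    bid, so the rearranged mappings are ordered pointwise."""
    profiles = [sorted((m(b) for m in mappings), reverse=True) for b in bids]
    return [[prof[i] for prof in profiles] for i in range(len(mappings))]

# Two crossing step mappings, as in the red/blue example above:
red = lambda b: 0.8 if b < 0.5 else 0.9    # highest value for small bids
blue = lambda b: 0.4 if b < 0.5 else 1.0   # highest value for large bids
bids = [k / 10 for k in range(10)]
top, bottom = layer([red, blue], bids)
```

In this example the layered top mapping is the pointwise maximum (and stays increasing), which is the candidate high-value bidder.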
\subsection*{\blackref{alg:discretize} and \blackref{alg:translate} (\Cref{subsec:discretize,subsec:translate}): Two simplifications}
\begin{figure}[t]
\centering
\subfloat[\label{fig:infro:discretize:old}
The input {\em increasing} mappings $\bvarphi$]{
\includegraphics[width = .4\textwidth]
{intro_discretize_input.png}}
\hfill
\subfloat[\label{fig:infro:discretize:new}
The output {\em discretized}/{\em translated} mappings $\tilde{\bvarphi}$]{
\includegraphics[width = .4\textwidth]
{intro_discretize_output.png}}
\caption{The {\text{\tt Discretize}} and {\text{\tt Translate}} reductions
\label{fig:intro:discretize}}
\end{figure}
Our main reduction is an {\em iterative} algorithm; thus it is convenient to work with {\em bounded discrete} instances, for which the main reduction will terminate in {\em finite} iterations.
Such an idea appears in many prior works \cite{CGL14,AHNPY19,JLTX20,JLQTX19a,JJLZ21}, and the proof plan is twofold: (i)~For any bounded discrete instance, the {{\sf PoA}} is at least $1 - 1 / e^{2}$. (ii)~For any valid instance, there exists a bounded discrete instance approximating the {{\sf PoA}}, up to an {\em arbitrary but fixed} error $\epsilon > 0$.
As a combination, the {{\sf PoA}} for any valid instance is at least $1 - 1 / e^{2}$.
However, our discretization scheme is subtle. First, we cannot discretize the valid bid distributions $\bB$ given that they must be continuous everywhere except the left endpoint of the bid support.
Second, we cannot discretize the value distributions and the strategies $(\bV,\, \bs)$.
Although it is trivial to construct bounded discrete value distributions $\tilde{\bV}$ that well approximate the given ones $\bV$, it is unclear how to obtain desirable strategies $\tilde{\bs} \in \bbBNE(\tilde{\bV})$:
No algorithm for computing equilibria from value distributions is available.
Also, with modified value distributions, even the existence of $\tilde{\bs} \in \bbBNE(\tilde{\bV})$ that is ``close enough'' to the given equilibrium $\bs \in \bbBNE(\bV)$ is doubtful.
We circumvent these issues by discretizing the bid-to-value mappings $\bvarphi$ instead (\Cref{fig:intro:discretize}), i.e., approximating them
through {\em piecewise constant} functions $\tilde{\bvarphi}$.\footnote{In real analysis, for various purposes, piecewise constant functions (a.k.a.\ simple functions) are widely used as approximations to general functions.}
The benefits are threefold:
(i)~The valid bid distributions $\tilde{\bB}$, the value distributions $\tilde{\bV}$, and the equilibrium strategies $\tilde{\bs} \in \bbBNE(\tilde{\bV})$ can be reconstructed from the new mappings $\tilde{\bvarphi}$, through analytic expressions.
(ii)~The piecewise constancy of the new mappings $\tilde{\bvarphi}$ makes the value distribution $\tilde{\bV}$ {\em bounded} and {\em discrete}.
(iii)~More importantly, we are able to obtain a {\em sequence} of piecewise constant mappings $\{\tilde{\bvarphi}\}$ that {\em pointwise converge} to the given ones $\bvarphi$.
The corresponding $\{ \tilde{\bB} \}$, $\{ \tilde{\bV}\}$, and $\{ \tilde{\bs} \}$ can be arbitrarily close to the given $\bB$,\, $\bV$, and $\bs$, approximating the {{\sf PoA}} up to any fixed error $\epsilon > 0$.
This exactly implements the second part of our proof plan.
Afterward, the \blackref{alg:translate} reduction further makes the lowest bid vanish, by shifting both the value space and the bid space by the same distance (\Cref{fig:intro:discretize}) -- again, a worse-or-equal {{\sf PoA}}.
In sum, we can focus on {\em piecewise constant} and {\em increasing} mappings $\bvarphi$ over a {\em bounded} support $b \in [0,\, \lambda]$, together with the conditional value $P$ given the nil bid $b = 0$ (i.e., the left endpoint).
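The approximation idea behind the discretization can be sketched as follows (our own toy version, not the paper's exact scheme): approximate an increasing mapping from below by piecewise constant functions on ever finer bid grids, so that the approximations converge pointwise.

```python
import math

def discretize(phi, lam, k):
    """Approximate an increasing mapping phi on [0, lam] from below by a
    piecewise constant function with k equal bid pieces, taking the
    left-endpoint value of phi on each piece."""
    width = lam / k
    def phi_k(b):
        j = min(int(b / width), k - 1)   # index of the piece containing b
        return phi(j * width)
    return phi_k

phi = lambda b: math.sqrt(b)             # a continuous increasing mapping
grid = [i / 997 for i in range(997)]     # a bid grid on [0, 1)
errors = [max(phi(b) - discretize(phi, 1.0, k)(b) for b in grid)
          for k in (10, 100, 1000)]
```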
\subsection*{\blackref{alg:polarize} and \blackref{alg:slice} (\Cref{subsec:polarize,subsec:slice}): Dealing with the probability masses}
Now we address the probability masses and the conditional value $P$. (This is the only information that cannot be reconstructed from the bid-to-value mappings $\bvarphi$ and must be handled separately.)
Our main observation is (\Cref{lem:monopolist}) that {\em at most one} bidder can have a nontrivial conditional value $P$, who will be called the monopolist $H$.
(After the \blackref{alg:layer} reduction, this monopolist $H$ will be the {\em high-value} bidder; cf.\ \Cref{fig:intro:layer}.)
That is why we can use a single conditional value $P$, instead of $n$ values akin to the distributions $\bV$ or $\bB$. Moreover, following the monotonicity, this value $P$ is supported on $[0,\varphi_{H}(0)]$.
The \blackref{alg:polarize} reduction derandomizes the conditional value by moving all the probabilities to either $P \equiv 0$ or $P \equiv \varphi_{H}(0)$.
Namely, this is a {\em win-win} reduction, in a sense that at least one of the two possibilities $P \equiv 0$ and $P \equiv \varphi_{H}(0)$ induces a worse-or-equal {{\sf PoA}} than the given instance. Later, such win-win reductions also appear in several other places.
Further, the \blackref{alg:slice} reduction pins down the conditional value $P \equiv \varphi_{H}(0)$ by eliminating the other possibility.
This reduction modifies both the conditional value $P$ and the mappings $\bvarphi$ and is thus far more complicated than the \blackref{alg:polarize} reduction, which only modifies the $P$.\footnote{For very technical reasons, we {\em should not} unify \blackref{alg:polarize} and \blackref{alg:slice} into a single reduction (cf.\ \Cref{sec:preprocessing,sec:reduction}).}
In conclusion, we can drop the notation $P$ hereafter: Just the {\em piecewise constant} and {\em increasing} mappings $\bvarphi$ are enough to describe the entire instance and the reductions on it.
\subsection*{\blackref{alg:halve} and \blackref{alg:AD} (\Cref{subsec:halve,subsec:AD}): The main reductions}
\afterpage{
\begin{figure}
\centering
\begin{minipage}{.49\textwidth}
\centering
\subfloat[\label{fig:intro:halve:input}
{The input with a {\bf pseudo jump}}]{\includegraphics[width = .79\linewidth]{intro_halve_input.png}} \\
\subfloat[\label{fig:intro:halve:left}
The {\em left} candidate output]{\includegraphics[width = .79\linewidth]{intro_halve_left.png}} \\
\subfloat[\label{fig:intro:halve:right}
The {\em right} candidate output]{\includegraphics[width = .79\linewidth]{intro_halve_right.png}}
\captionof{figure}{The {\text{\tt Halve}} reduction \\
(A baby version of \Cref{fig:halve})}
\label{fig:intro:halve}
\end{minipage}
\hfill
\begin{minipage}{.49\textwidth}
\centering
\subfloat[\label{fig:intro:AD:input}
{The input with a {\bf real jump}}]{\includegraphics[width = .79\linewidth]{intro_AD_input.png}} \\
\subfloat[\label{fig:intro:AD:ascend}
The {\em ascended} candidate output]{\includegraphics[width = .79\linewidth]{intro_AD_ascend.png}} \\
\subfloat[\label{fig:intro:AD:descend}
The {\em descended} candidate output]{\includegraphics[width = .79\linewidth]{intro_AD_descend.png}}
\captionof{figure}{The {\text{\tt Ascend-Descend}} reduction \\
(A baby version of \Cref{fig:AD})}
\label{fig:intro:AD}
\end{minipage}
\end{figure}
\clearpage}
Our main reduction aims to reduce the number of pieces of the high-value bidder $H$'s mapping $\varphi_{H}$.
(After the \blackref{alg:discretize} reduction, this mapping has {\em finitely} many pieces. Especially, the one in the worst-case instance has one {\em single} piece (\Cref{fig:intro:worst_case}), namely a constant mapping.)
To this end, we look at the first {\em jump point} of this mapping $\varphi_{H}$.
Furthermore, we distinguish two types of jumps, {\bf pseudo jumps} versus {\bf real jumps}, and adopt two different reductions respectively.
For a \blackref{halve_jump} (\Cref{fig:intro:halve:input}),
the after-jump parts of all mappings $\bvarphi$ are {\em universally} higher than the before-jump parts, so the entire instance naturally divides into the {\em left-lower} part versus the {\em right-upper} part (\Cref{fig:intro:halve:left,fig:intro:halve:right}).
We prove that at least one part induces a worse-or-equal {{\sf PoA}} than the given instance.\footnote{Especially, for the right-upper part, we reapply the \blackref{alg:translate} reduction to make the bid space start from $0$.}
This is another win-win reduction and will be called \blackref{alg:halve}.
For a \blackref{AD_jump} (\Cref{fig:intro:AD:input}), the after-jump parts (of some mappings) still interlace with the before-jump parts.
We would modify the first two pieces of the high-value mapping $\varphi_{H}$ to reduce the number of pieces by one.
I.e., we either {\em ascend} the first piece to the value of the second piece (\Cref{fig:intro:AD:ascend}) or {\em descend} the second piece to the value of the first piece (\Cref{fig:intro:AD:descend}).
We show that at least one modification gives a worse-or-equal {{\sf PoA}} than the given instance. This is again a win-win reduction and will be called \blackref{alg:AD}.
To conclude, whether the first jump is a \blackref{halve_jump} or a \blackref{AD_jump}, we obtain a new instance such that (i)~it has a worse-or-equal {{\sf PoA}} and (ii)~its high-value mapping $\varphi_{H}$ has one fewer piece.
Clearly, we can {\em iterate} the entire process until a constant mapping $\varphi_{H}(b) \equiv v_{H}$, i.e., one {\em single} piece.
Then, the \blackref{alg:collapse} reduction can replace all the other bidders $\{B_{k}\}_{k \neq H}$ by one pseudo bidder $L$, hence a two-bidder pseudo instance $H \otimes L$ that has the same shape as the worst case (\Cref{fig:intro:twin}), as desired.
We stress again that the above description is greatly simplified and thus only roughly accurate: the high-level idea is conveyed, but most technical details are hidden.
\subsection*{Solving the functional optimization (\Cref{sec:UB})}
After the above reductions, we get a nice-looking instance: (i)~one bidder $H$ with a fixed high value $\varphi_{H}(b) \equiv v_{H}$ and (ii)~infinitely many ($n \to +\infty$) identical low-value bidders $\{\sqrt[n]{L}\}^{\otimes n}$, i.e., one pseudo bidder $L$ in our extended language.
Towards the worst case, it remains to decide the pseudo mapping $\varphi_{L}(b)$, or essentially, the high-value bidder $H$'s bid distribution.
This is accomplished by standard tools from the calculus of variations.
Via the Euler-Lagrange equation \cite{GS20}, we formulate the {\em worst-case} bid distribution of $H$ as the solution to an ordinary differential equation (ODE).
Luckily, this ODE admits a closed-form solution, and ``coincidentally'' the tight bound $1 - 1 / e^{2}$ is pretty nice-looking.
\subsection*{Comparison with previous techniques}
On the Price of Anarchy in auctions, the canonical approach is the {\em smoothness framework} proposed by Roughgarden \cite{R15} and developed by Syrgkanis and Tardos \cite{ST13}.
The past decade has seen an abundance of its applications and extensions (cf.\ the survey \cite{RST17}).
However, the smoothness framework has an intrinsic restriction -- It focuses on the structure of an auction game BUT ignores the {\em independence}, i.e., both the independence of value distributions $\bV = \{V_{i}\}_{i \in [n]}$ and the independence of strategies $\bs = \{s_{i}\}_{i \in [n]}$.
Thus, although it offers an arsenal of tools for proving {\em lower bounds}, this approach seems hard-pressed to reach {\em tight bounds} for some auctions. In particular, for the first-price auction, the smoothness-based bound of $1 - 1 / e \approx 0.6321$ by Syrgkanis and Tardos \cite{ST13} is tight when {\em correlated} distributions are allowed; therefore, this bound cannot be improved without taking the independence into account.
The work by Hoy, Taggart, and Wang \cite{HTW18} is one of the very few follow-ups that transcend the smoothness framework.\footnote{To the best of our knowledge, \cite{HTW18} is the only work for Bayesian {\em single-item} auctions that transcends the smoothness framework. Other such works for {\em multi-item} auctions are thoroughly discussed in \cite[Section~8.3]{RST17}.}
For the first-price auction, they show an improved lower bound of $\approx 0.7430$, but still not tight.
Technically, the improvement stems from an extension of the smoothness framework that leverages the independence.
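For quick reference, the three guarantees discussed so far can be lined up numerically. This is a sanity check only, not part of any proof; the value $0.7430$ is the approximate bound quoted from \cite{HTW18}.

```python
import math

# Smoothness-based bound of Syrgkanis and Tardos (tight under correlation).
smoothness = 1 - 1 / math.e        # ~0.6321
# Improved (non-tight) bound of Hoy, Taggart, and Wang, as quoted in the text.
htw = 0.7430
# The tight bound for independent distributions established in this paper.
tight = 1 - 1 / math.e ** 2        # ~0.8647

print(f"{smoothness:.4f} < {htw:.4f} < {tight:.4f} < 1")
```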
In sum, towards {\em tight bounds} for the first-price auction and beyond, the primary question is:
What is the consequence of the {\em independence} of values $\bv = (v_{i})_{i \in [n]} \sim \bV$ and strategies $\bs$?
Our answer is, the bids $\bs(\bv) = (s_{i}(v_{i}))_{i \in [n]}$ are also independent and follow a product distribution $\bB = \{B_{i}\}_{i \in [n]}$. This is the starting point of our overall approach.
In the literature, ALL prior works (within/beyond the smoothness framework) follow the proof paradigm: ``searching over value distributions $\bV \in \bbV$ and equilibrium strategies $\bs \in \bbBNE(\bV)$ plus deviation strategies thereof $\bs' \neq \bs$'' \cite[Section~8.3]{RST17}. Instead, we present the FIRST different proof paradigm: ``searching over (valid) bid distributions $\bB = \bB(\bV,\, \bs) \in \mathbb{B}_{\sf valid}$ that result from certain value-strategy tuples $(\bV,\, \bs) \in \bbV \times \bbBNE$''.
\begin{comment}
\blue{
\begin{itemize}[leftmargin = *]
\item {\bf Q:}
What is the consequence of the {\em independence} of values $\bv = (v_{i})_{i \in [n]} \sim \bV$ and strategies $\bs$?
{\bf A:}
The random bids
$\bs(\bv) = (s_{i}(v_{i}))_{i \in [n]} \sim \bB = \{B_{i}\}_{i \in [n]}$ are also independent.
\item {\bf Q:}
What the role of an {\em equilibrium} strategy profile $\bs \in \bbBNE(\bV)$?
{\bf A:}
In a sense of the {\em Fourier transform}, transform the value distribution $\bV$ into the bid distribution $\bB$ in a particular way, so as to satisfy the equilibrium conditions (\Cref{def:bne}).
\item {\bf Q:}
Can we reconstruct the value distribution $\bV$ from the equilibrium bid distribution $\bB$, i.e., our version of the {\em inverse Fourier transform}?
{\bf A:}
Yes and even more; see \Cref{fig:mappings} for a visual aid.
(Forward) There can be {\em multiple} equilibria $\bs \in \bbBNE(\bV)$, hence {\em multiple} equilibrium bid distributions $\bB^{(\bs)}$.
(Inverse) Every equilibrium bid distribution $\bB^{(\bs)}$ {\em uniquely} determines this equilibrium $\bs \in \bbBNE(\bV)$, i.e., the value distribution $\bV$ plus the strategy profile $\bs$.
So, there exists a {\em bijection} between the space of equilibria $\{(\bV,\, \bs): \bs \in \bbBNE(\bV)\}$ and the space of equilibrium bid distributions $\{\bB^{(\bs)}: \bs \in \bbBNE(\bV)\}$.
\item {\bf Q:}
Why do we change our viewpoint to the space of bid distributions $\{\bB^{(\bs)}\}$?
{\bf A:}
Towards the tight {{\sf PoA}}, it is mathematically equivalent and technically easier -- We consider {\em one} infinite set $\{\bB^{(\bs)}\}$ instead of the Cartesian product of {\em two} infinite sets $\{\bV\} \times \{\bs \in \bbBNE(\bV)\}$.
\end{itemize}}
\end{comment}
In our main proof, we narrow down the search space of the worst-case instance step by step, by showing stronger and stronger conditions it must satisfy, until we completely capture the worst-case instance.
En route, an abundance of tools is developed, which may find applications in the future.
It is conceivable that our approach can be adapted to other auctions.
So, we hope that this would develop into a complement to the smoothness framework, especially in the setting of independent distributions.
\section{Preprocessing Valid Instances}
\label{sec:preprocessing}
This section presents several preparatory reductions towards the potential worst-case instances.
In \Cref{subsec:pseudo}, we introduce the concept of valid {\em pseudo} instances, which generalizes valid instances and eases the presentation.
In \Cref{subsec:discretize}, we {\em discretize} an instance up to any {{\sf PoA}}-error $\epsilon > 0$, simplifying the bid-to-value mappings $\bvarphi$ to a (finite-size) bid-to-value table $\bPhi$.
In \Cref{subsec:translate}, we {\em translate} the instances, making the infimum bids zero $\gamma = 0$.
In \Cref{subsec:layer}, we
{\em layer} the bid-to-value mappings $\bvarphi$, making the table $\bPhi$ decreasing bidder-by-bidder.
In \Cref{subsec:polarize}, we
{\em derandomize} the conditional value $P$ to either the {\em floor} value $P^{\downarrow} \equiv 0$ or the {\em ceiling} value $P^{\uparrow} \equiv \varphi_1(0)$.
To conclude (\Cref{subsec:preprocess}),
we restrict the space of the worst cases to the {\em floor}/{\em ceiling} pseudo instances, which can be represented just by the table $\bPhi$.
These materials serve as the basis for the more complicated reductions later in \Cref{sec:reduction}.
\begin{comment}
Roughly speaking, towards a lower bound on the {\textsf{Price of Anarchy}}, we can switch the search space from valid real instances to {\em valid pseudo instances}.
\begin{itemize}
\item \Cref{subsec:pseudo} the concept of pseudo instances
\item \Cref{subsec:discretize} {\blackref{alg:discretize}}
\item \Cref{subsec:translate} {\blackref{alg:translate}}
\item \Cref{subsec:layer} {\blackref{alg:layer}}
\item \Cref{subsec:polarize} {\blackref{alg:polarize}}
\item \Cref{subsec:preprocess} falls in the subspace $(\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ of {\em floor}/{\em ceiling} pseudo instances.
\end{itemize}
\begin{reminder*}
\blue{{\bf Yaonan:} For \Cref{sec:preprocessing}, we are left with
\begin{itemize}
\item \Cref{fact:discretize:function_property} in the proof of \Cref{lem:discretize}
\item \Cref{lem:discretize}; {\bf The General Case}.
\end{itemize}
}
\end{reminder*}
\newpage
\end{comment}
\subsection{The concept of valid pseudo instances}
\label{subsec:pseudo}
This subsection introduces the concept of valid {\em pseudo} instances $(P,\, \bB \otimes L)$, a natural extension of valid {\em real} instances $(P,\, \bB)$ from \Cref{def:valid}.
Given a valid real instance $(P,\, \bB)$, we can always (\Cref{lem:pseudo_instance}) construct a valid pseudo instance $(P,\, \bB \otimes L)$ that yields the same auction/optimal {\textsf{Social Welfares}}. Thus towards a lower bound on the {\textsf{Price of Anarchy}}, we can consider valid pseudo instances instead (\Cref{cor:pseudo_instance}).
Roughly speaking, a pseudo instance $(P,\, \bB \otimes L)$ includes one more {\em pseudo bidder} $L$ to a real instance $(P,\, \bB)$. To clarify this difference, we often use the letter $i \in [n]$ and its variants for real bidders, and the Greek letter $\sigma \in [n] \cup \{L\}$ and its variants for real or pseudo bidders. Without ambiguity, we redenote by $\mathbb{B}_{\sf valid}$ the space of valid pseudo instances. The formal definition of {\em valid} pseudo instances is given below; cf.\ \Cref{def:valid} for comparison.
\begin{definition}[Valid pseudo instances]
\label{def:pseudo}
A pseudo instance $(P,\, \bB \otimes L)$ has two parts.
\begin{itemize}
\item $P$ denotes a conditional value distribution (\Cref{def:conditional_value}), {\em independent of} the $\bB \otimes L$.
\item $\bB \otimes L = \{B_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ denotes bid distributions. Again, the infimum bid $\max(\supp(B_{\sigma})) = \gamma \in [0,\, +\infty)$ and the supremum bid $\max(\supp(B_{\sigma})) \leq \lambda \in [\gamma,\, +\infty]$; with at least one equality $\max(\supp(B_{\sigma})) = \lambda$. The pseudo bidder $L$ differs from real bidders $\bB$ in that:
\begin{itemize}
\item Each real bidder $B_{i}$ for $i \in [n]$ competes with OTHER bidders $\bB_{-i} \otimes L$.
\item The pseudo bidder $L$ competes with ALL bidders $\bB \otimes L$, {\em including him/herself $L$}.
\end{itemize}
In a uniform way (cf.\ \Cref{def:mapping}), each bidder $\sigma \in [n] \cup \{L\}$ has the bid-to-value mapping
$\varphi_{\sigma}(b)
\eqdef b + \big(\sum_{k \in [n] \setminus \{\sigma\}} \frac{B'_{k}(b)}{B_{k}(b)} + \frac{L'(b)}{L(b)}\big)^{-1}
= b + \big(\frac{\calB'(b)}{\calB(b)} - \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \indicator(\sigma \neq L)\big)^{-1}$ over $b \in [\gamma,\, \lambda]$,
where the first-order bid distribution $\calB(b) \eqdef \prod_{\sigma \in [n] \cup \{L\}} B_{\sigma}(b)$.
\end{itemize}
This pseudo instance is {\em valid} $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$ when it satisfies \blackref{monotonicity} and \blackref{boundedness}.
\begin{itemize}
\item \term[{\bf monotonicity}]{monotonicity}: The bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ are increasing over the bid support $b \in [\gamma,\, \lambda]$.
\item \term[{\bf boundedness}]{boundedness}: The monopolist $B_{1}$'s value $v_{1} = s_{1}^{-1}(b_{1})$, conditioned on the infimum bid $\{b_{1} = \gamma\}$, exactly follows the distribution $P$; this induces the restriction $\supp(P) \subseteq [\gamma,\, \varphi_{1}(\gamma)]$.
Moreover, each other bidder $\sigma \in [2: n] \cup \{L\}$, conditioned on the infimum bid $\{b_{\sigma} = \gamma\}$, is truthful $v_{\sigma} = s_{\sigma}^{-1}(b_{\sigma}) = \gamma$.
\end{itemize}
\end{definition}
\begin{remark}[Pseudo instances]
\label{rem:pseudo_instance}
The concept of pseudo instances greatly eases the presentation and similar ideas also appear in the previous works \cite{AHNPY19,JLTX20,JLQTX19a,JJLZ21}.
Essentially, a pseudo instance $\bB \otimes L$ can be viewed as the limit of a sequence of real instances. I.e., the pseudo bidder $L$ can be viewed as, in the limit $k \to +\infty$, a collection of {\em low-impact} bidders $L_{1}(b) = \dots = L_{k}(b) \eqdef (L(b))^{1 / k}$.
All these low-impact bidders have the {\em common} competing bid distribution $\mathfrak{B}(b) = \prod_{i \in [n]} B_{i}(b) \cdot (L(b))^{1 - 1 / k}$, so the highest bid among them leads to the highest value and we just need to keep track of the collective information $\prod_{j \in [k]} L_{j}(b) = L(b)$.
On the other hand, in the limit $k \to +\infty$, the common competing bid distribution $\mathfrak{B}(b)$ pointwise converges to the first-order bid distribution $\calB(b) = \prod_{i \in [n]} B_{i}(b) \cdot L(b)$.
This accounts for the pseudo mapping $\varphi_{L}(b) = b + \calB(b) / \calB'(b)$.
\end{remark}
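The limit described in the remark can be checked numerically. Below is a minimal sketch with a hypothetical pseudo-bidder CDF $L(b) = b$ on $[0,\, 1]$: the $k$ low-impact copies $L_{j}(b) = (L(b))^{1/k}$ multiply back to $L(b)$, and their common competing factor $(L(b))^{1 - 1/k}$ converges pointwise to $L(b)$ as $k \to +\infty$.

```python
import math

def L(b):
    # Hypothetical pseudo-bidder CDF supported on [0, 1].
    return b

b = 0.5
for k in (1, 10, 100, 1000):
    copy = L(b) ** (1 / k)        # each low-impact bidder L_j(b) = L(b)^(1/k)
    product = copy ** k           # the k copies multiply back to L(b)
    common = L(b) ** (1 - 1 / k)  # common competing factor, -> L(b) as k grows
    print(k, product, common)
```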
Because valid pseudo instances $(P,\, \bB \otimes L)$ differ from the valid real instances ONLY in the definition of the pseudo mapping $\varphi_{L}(b)$, most results in \Cref{sec:structure} still hold. Especially,
\Cref{lem:pseudo_welfare} formulates their {\textsf{Social Welfares}} (which restates \Cref{lem:auction_welfare,lem:optimal_welfare} basically).
\begin{lemma}[{\textsf{Social Welfares}}]
\label{lem:pseudo_welfare}
For a valid pseudo instance $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$, the expected auction/ optimal {\textsf{Social Welfares}} ${\sf FPA}(P,\, \bB \otimes L)$ and ${\sf OPT}(P,\, \bB \otimes L)$ are given by
\begin{align*}
{\sf FPA}(P,\, \bB \otimes L)
& ~=~ \Ex[P] \cdot \calB(\gamma)
~+~ \sum_{\sigma \in [n] \cup \{L\}} \Big(\int_{\gamma}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big), \\
{\sf OPT}(P,\, \bB \otimes L)
& ~=~ \gamma ~+~ \int_{\gamma}^{+\infty} \Big(1 - P(v) \cdot \calV(v)\Big) \cdot \d v,
\end{align*}
where the first-order bid distribution $\calB(b) \eqdef \prod_{\sigma \in [n] \cup \{L\}} B_{\sigma}(b)$ and the quasi first-order value distribution $\calV(v) \eqdef \prod_{\sigma \in [n] \cup \{L\}} \Prx_{b_{\sigma} \sim B_{\sigma}}\big[\, (b_{\sigma} \leq \gamma) \vee (\varphi_{\sigma}(b_{\sigma}) \leq v) \,\big]$ can be reconstructed from $\bB \otimes L$.
\end{lemma}
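The first-order bid distribution $\calB(b) = \prod_{\sigma} B_{\sigma}(b)$ used above is simply the CDF of the highest bid, which is exactly where the independence of bids enters. A minimal check with hypothetical discrete bid distributions (not taken from the paper):

```python
import itertools

# Hypothetical independent discrete bid distributions (bid -> probability).
B1 = {0.0: 0.5, 1.0: 0.5}
B2 = {0.0: 0.2, 0.5: 0.3, 1.0: 0.5}

def cdf(B, b):
    return sum(p for x, p in B.items() if x <= b)

def max_cdf_by_enumeration(b):
    # Pr[max(b1, b2) <= b], computed by enumerating the product distribution.
    return sum(p1 * p2
               for (x1, p1), (x2, p2) in itertools.product(B1.items(), B2.items())
               if max(x1, x2) <= b)

# The product of the marginal CDFs equals the CDF of the maximum bid.
for b in (0.0, 0.5, 1.0):
    assert abs(cdf(B1, b) * cdf(B2, b) - max_cdf_by_enumeration(b)) < 1e-12
```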
\Cref{lem:pseudo_mapping} shows that given a pseudo instance $(P,\, \bB \otimes L)$, the pseudo bidder $L$ always has a {\em dominated} bid-to-value mapping $\varphi_{L}(b) \leq \varphi_{i}(b)$, compared with other real bidders $i \in [n]$.
\begin{lemma}[Bid-to-value mappings]
\label{lem:pseudo_mapping}
For a valid pseudo instance $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$, the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ satisfy the following:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:pseudo_mapping:dominance}
$\min(\bvarphi(b^{\otimes n + 1})) = \varphi_{L}(b) > b$ for $b \in (\gamma,\, \lambda]$. Further, $\min(\bvarphi(\gamma^{\otimes n + 1})) = \varphi_{L}(\gamma) \geq \gamma$.
\item\label{lem:pseudo_mapping:inequality}
$\sum_{i \in [n]} \frac{\varphi_{L}(b) - b}{\varphi_{i}(b) - b} \geq n - 1$ for $b \in (\gamma,\, \lambda]$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf \Cref{lem:pseudo_mapping:dominance}} is an immediate consequence of (\Cref{def:pseudo}) the bid-to-value mappings
$\varphi_{\sigma}(b) = b + \big(\frac{\calB'(b)}{\calB(b)} - \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \indicator(\sigma \neq L)\big)^{-1}$ for $\sigma \in [n] \cup \{L\}$.
Particularly, in the proof of \Cref{lem:high_bid} (Fact~\ref{fact:pseudo}), we already show that the pseudo mapping $\varphi_{L}(b) > b$ for $b \in (\gamma,\, \lambda]$ and $\varphi_{L}(\gamma) \geq \gamma$. Moreover, since $\frac{L'(b)}{L(b)} \geq 0$, we have
$\sum_{i \in [n]} \big(\varphi_{i}(b) - b\big)^{-1}
= n \cdot \frac{\calB'(b)}{\calB(b)} - \sum_{i \in [n]} \frac{B'_{i}(b)}{B_{i}(b)}
= (n - 1) \cdot \frac{\calB'(b)}{\calB(b)} + \frac{L'(b)}{L(b)}
\geq (n - 1) \cdot \big(\varphi_{L}(b) - b\big)^{-1}$.
Rearranging this inequality gives {\bf \Cref{lem:pseudo_mapping:inequality}}.
\end{proof}
\begin{comment}
\[
\sum_{i \in [n]} \big(\varphi_{i}(b) - b\big)^{-1}
~=~ n \cdot \frac{\calB'(b)}{\calB(b)} - \sum_{i \in [n]} \frac{B'_{i}(b)}{B_{i}(b)}
~\geq~ (n - 1) \cdot \frac{\calB'(b)}{\calB(b)} + \frac{L'(b)}{L(b)}
~=~ n \cdot \big(\varphi_{L}(b) - b\big)^{-1}.
\]
Further, we can infer {\bf \Cref{lem:pseudo_mapping:inequality}} from
$\sum_{i \in [n]} \big(\varphi_{i}(b) - b\big)^{-1} = n \cdot \frac{\calB'(b)}{\calB(b)} - \sum_{i \in [n]} \frac{B'_{i}(b)}{B_{i}(b)}
\geq n \cdot \frac{\calB'(b)}{\calB(b)}
= n \cdot \big(\varphi_{L}(b) - b\big)^{-1}$.
\end{comment}
\Cref{lem:pseudo_distribution} shows that the bid distributions $\bB \otimes L = \{B_{i}\}_{i \in [n]} \otimes L$ can be reconstructed from the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.
This lemma is important in that many subsequent reductions will construct new bid distributions $\tilde{\bB} \otimes \tilde{L}$ in terms of the mappings $\tilde{\bvarphi}$.
Therefore, the correctness of such construction reduces to checking \Cref{lem:pseudo_mapping:dominance,lem:pseudo_mapping:inequality} of \Cref{lem:pseudo_mapping} and monotonicity of the mappings $\tilde{\bvarphi}$.
\begin{lemma}[Bid distributions]
\label{lem:pseudo_distribution}
For $(n + 1)$ functions $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ that are increasing on $b \in [\gamma,\, \lambda]$ and satisfy \Cref{lem:pseudo_mapping:dominance,lem:pseudo_mapping:inequality} of \Cref{lem:pseudo_mapping},
the functions $\bB \otimes L$ given below are well-defined $[\gamma,\, \lambda]$-supported bid distributions and their bid-to-value mappings are exactly $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.
\begin{align*}
B_{i}(b) & ~\equiv~ \exp\Big(-\int_{b}^{\lambda}
\Big(\big(\varphi_{L}(x) - x\big)^{-1} - \big(\varphi_{i}(x) - x\big)^{-1}\Big) \cdot \d x\Big) \cdot \indicator(b \geq \gamma),
\qquad\qquad \forall i \in [n]. \\
L(b) & ~\equiv~ \exp\Big(-\int_{b}^{\lambda}
\Big(\sum_{i \in [n]} \big(\varphi_{i}(x) - x\big)^{-1} - (n - 1) \cdot \big(\varphi_{L}(x) - x\big)^{-1}\Big) \cdot \d x\Big) \cdot \indicator(b \geq \gamma).
\end{align*}
\end{lemma}
\begin{proof}
The construction of $\bB \otimes L$ intrinsically ensures that $B_{\sigma}(b) = 0$ below the infimum bid $b < \gamma$ and $B_{\sigma}(\lambda) = 1$ at the supremum bid $\lambda$, for $\sigma \in [n] \cup \{L\}$.
Moreover, \Cref{lem:pseudo_mapping:dominance,lem:pseudo_mapping:inequality} of \Cref{lem:pseudo_mapping} together make each $B_{\sigma}(b)$ an increasing function on $b \in [\gamma,\, \lambda]$, hence a well-defined $[\gamma,\, \lambda]$-supported bid distribution. It remains to verify $\varphi_{\sigma}(b) = b + \big(\sum_{k \in [n] \setminus \{\sigma\}} \frac{B'_{k}(b)}{B_{k}(b)} + \frac{L'(b)}{L(b)}\big)^{-1}$ over the bid support $b \in [\gamma,\, \lambda]$, for $\sigma \in [n] \cup \{L\}$, namely the functions $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ are exactly (\Cref{def:pseudo}) the bid-to-value mappings stemmed from the $\bB \otimes L$.
These identities for $\sigma \in [n] \cup \{L\}$ can be easily seen via elementary algebra; we omit the details for brevity.
\Cref{lem:pseudo_distribution} follows then.
\end{proof}
\begin{comment}
For a valid pseudo instance $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$, the bid distributions $\bB \otimes L = \{B_{i}\}_{i \in [n]} \otimes L$ can be reconstructed from the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$:
\begin{align*}
& B_{i}(b) ~=~ \exp\bigg(-\int_{b}^{\lambda}
\Big(\big(\varphi_{L}(b) - b\big)^{-1} - \big(\varphi_{i}(b) - b\big)^{-1}\Big) \cdot \d b\bigg),
\qquad\qquad \forall i \in [n]. \\
& L(b) ~=~ \exp\bigg(-\int_{b}^{\lambda}
\Big(\sum_{\sigma \in [n] \cup \{L\}} \big(\varphi_{\sigma}(b) - b\big)^{-1} - n \cdot \big(\varphi_{L}(b) - b\big)^{-1}\Big) \cdot \d b\bigg).
\end{align*}
\blue{Moreover, for any given $n+1$ monotone functions $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$, satisfying the three conditions stated in \Cref{lem:pseudo_mapping}. The $n+1$ bid distributions constructed as above are well-defined.}
\end{comment}
\begin{comment}
that under such $\bB \otimes L$, the resulting bid-to-value mappings
$\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ r exactly leads to the underlying bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$. This can be easily checked
(\Cref{def:pseudo})
(\Cref{def:pseudo}) Recall the bid-to-value mappings $\varphi_{\sigma}(b) = b + \big(\sum_{k \in [n] \setminus \{\sigma\}} \frac{B'_{k}(b)}{B_{k}(b)} + \frac{L'(b)}{L(b)}\big)^{-1}$ for $\sigma \in [n] \cup \{L\}$. For real bidders and the pseudo bidder separately, we can deduce that
\begin{align*}
& \big(\varphi_{i}(b) - b\big)^{-1}
~=~ \sum_{k \in [n] \setminus \{i\}} B'_{k}(b) \big/ B_{k}(b)
~+~ L'(b) \big/ L(b),
\qquad\qquad \forall i \in [n]. \\
& \big(\varphi_{L}(b) - b\big)^{-1}
~=~ \sum_{k \in [n]} B'_{\sigma}(b) \big/ B_{\sigma}(b) ~+~ L'(b) \big/ L(b).
\end{align*}
and thus that
\begin{align*}
& \frac{\d}{\d b}\Big(\ln B_{i}(b)\Big)
~=~ B'_{i}(b) \big/ B_{i}(b)
~=~ \big(\varphi_{L}(b) - b\big)^{-1} - \big(\varphi_{i}(b) - b\big)^{-1},
\qquad\qquad \forall i \in [n], \\
& \frac{\d}{\d b}\Big(\ln L(b)\Big)
~=~ L'(b) \big/ L(b)
~=~ \big(\varphi_{L}(b) - b\big)^{-1} - \sum_{i \in [n]} \Big(\big(\varphi_{L}(b) - b\big)^{-1} - \big(\varphi_{i}(b) - b\big)^{-1}\Big).
\end{align*}
Resolve these ODE's under the boundary conditions $B_{\sigma}(\lambda) = 1$ for $\sigma \in [n] \cup \{L\}$ at the supremum bid $b = \lambda$, then the CDF formulas $\{B_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ claimed in \Cref{lem:pseudo_distribution} follow directly.
As long as $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ satisfies the three conditions stated in \Cref{lem:pseudo_mapping}, it is easy to verify that $\{B_{i}\}_{i \in [n]}$ and $L$ defined above are non-negative increasing functions with value $1$ at $\lambda$. Therefore, they are well-defined CDF functions supported on $[\gamma,\, \lambda]$.
\end{comment}
\Cref{lem:pseudo_instance} suggests that any valid {\em real} instance $(P,\, \bB)$ can be reinterpreted as a valid {\em pseudo} instance $(P,\, \bB \otimes L_{\gamma})$, by employing a specific pseudo bidder $L_{\gamma}$.
As a consequence (\Cref{cor:pseudo_instance}): Towards a lower bound on the {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}}, we can consider valid {\em pseudo} instances instead.
(Cf.\ \Cref{cor:poa_identity} for comparison).
\begin{lemma}[Pseudo instances]
\label{lem:pseudo_instance}
For a valid instance $(P,\, \bB) \in \mathbb{B}_{\sf valid}$, the following hold:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:pseudo_instance:property}
The pseudo bidder $L_{\gamma}(b) \equiv \indicator(b \geq \gamma)$ induces a valid pseudo instance $(P,\, \bB \otimes L_{\gamma}) \in \mathbb{B}_{\sf valid}$.
\item\label{lem:pseudo_instance:poa}
${\sf OPT}(P,\, \bB \otimes L_{\gamma}) = {\sf OPT}(P,\, \bB)$ and ${\sf FPA}(P,\, \bB \otimes L_{\gamma}) = {\sf FPA}(P,\, \bB)$.
\end{enumerate}
\end{lemma}
\begin{proof}
As mentioned in \Cref{rem:pseudo_instance}, to see the validity $(P,\, \bB \otimes L_{\gamma}) \in \mathbb{B}_{\sf valid}$, we only need to verify \blackref{monotonicity} of mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ and \blackref{boundedness} of the conditional value $P$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:pseudo_instance:property}.}
We have $L_{\gamma}(b) = 1$ and $L'_{\gamma}(b) \big/ L_{\gamma}(b) = 0$ for $b \geq \gamma$. Thus, each real bidder $i \in [n]$ keeps the {\em same} mapping $\varphi_{i}(b)$ after including the pseudo bidder $L_{\gamma}$ (\Cref{def:mapping,def:pseudo}), preserving \blackref{monotonicity}.
Also, the pseudo mapping can be written as $\varphi_{L_{\gamma}}(b) = b + \calB(b) \big/ \calB'(b)$ for $b \in [\gamma,\, \lambda]$, based on the first-order bid distribution $\calB(b) = \prod_{i \in [n]} B_{i}(b) \cdot L_{\gamma}(b) = \prod_{i \in [n]} B_{i}(b)$.
In the proof of \Cref{lem:high_bid} (Fact~\ref{fact:pseudo}), we already verify \blackref{monotonicity} of this mapping.
Moreover, the {\em unmodified} conditional value $P$ must preserve \blackref{boundedness}.
\vspace{.1in}
\noindent
{\bf \Cref{lem:pseudo_instance:poa}.}
This can be easily inferred from (\Cref{lem:auction_welfare,lem:optimal_welfare} versus \Cref{lem:pseudo_welfare}) the {\textsf{Social Welfare}} formulas for valid {\em real} versus {\em pseudo} instances; thus we omit details for brevity. Roughly speaking, the point is that the pseudo bidder $L_{\gamma}(b) = 1$ for $b \geq \gamma$ has no effect.
\end{proof}
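The point that $L_{\gamma}$ has no effect can be made concrete: since $L_{\gamma}(b) = 1$ for $b \geq \gamma$, multiplying it in changes neither the product $\calB(b)$ nor any log-derivative. A minimal numeric sketch with hypothetical exponential-form CDFs (not taken from the paper):

```python
import math

gamma, lam = 0.0, 1.0
# Hypothetical real-bidder CDFs supported on [gamma, lam].
Bs = [lambda b: math.exp(b - lam), lambda b: math.exp(2 * (b - lam))]

def L_gamma(b):
    # The pseudo bidder of Lemma lem:pseudo_instance: a point mass at gamma.
    return 1.0 if b >= gamma else 0.0

for b in (0.0, 0.3, 1.0):
    without = math.prod(B(b) for B in Bs)
    with_pseudo = without * L_gamma(b)
    assert with_pseudo == without    # the first-order distribution is unchanged
```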
Due to the above lemma, the {{\sf PoA}} over pseudo instances is a lower bound on the real {{\sf PoA}}. On the other hand, a pseudo instance can be viewed as a limit of real instances (\Cref{rem:pseudo_instance}), so any {{\sf PoA}} ratio attained by a pseudo instance can be approximated arbitrarily well by real instances. Thus, we have the following corollary.
\begin{corollary}[Lower bound]
\label{cor:pseudo_instance}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is
\begin{align*}
{\sf PoA} ~=~ \inf \bigg\{\, \frac{{\sf FPA}(P,\, \bB \otimes L)}{{\sf OPT}(P,\, \bB \otimes L)} \,\biggmid\, (P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid} ~\text{\em and}~ {\sf OPT}(P,\, \bB \otimes L) < +\infty \,\bigg\}.
\end{align*}
\end{corollary}
\subsection{{\text{\tt Discretize}}: Towards (approximate) discrete pseudo instances}
\label{subsec:discretize}
In this subsection, we introduce the concept of {\em discretized} pseudo instances $(P,\, \bB \otimes L)$; see \Cref{def:discretize,fig:discretize} for its formal definition and a visual demonstration.
\begin{definition}[Discretized pseudo instances]
\label{def:discretize}
A {\em valid} pseudo instance $(P,\, \bB \otimes L)$ from \Cref{def:pseudo} is further called {\em discretized} when it satisfies \blackref{finiteness} and \blackref{piecewise}.
\begin{itemize}
\item \term[{\bf finiteness}]{finiteness}{\bf :}
The supremum bid/value are {\em bounded away} from infinity: $\lambda \leq \max(\bvarphi(\blambda)) < +\infty$.
\item \term[{\bf piecewise constancy}]{piecewise}{\bf :}
Given some {\em bounded} integer $0 \leq m < +\infty$ and some $(m + 1)$-piece partition of the bid support $\bLambda \eqdef [\gamma = \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} = \lambda]$, each bid-to-value mapping $\varphi_{\sigma}(b)$ for $\sigma \in [n] \cup \{L\}$ is a {\em piecewise constant} function under this partition.
\end{itemize}
Notice that the partition $\bLambda$ depends only on the bid distributions $\bB \otimes L$ and is irrelevant to the monopolist $B_{1}$'s conditional value $P$.
\end{definition}
\begin{remark}[Discretized pseudo instances]
\label{rem:discretize}
The value distributions $V_{\sigma}(v)$ for $\sigma \in [n] \cup \{L\}$ reconstructed from a {\em discretized} pseudo instance $(P,\, \bB \otimes L)$ are almost discrete, except for the conditional value $P$ of the monopolist $B_{1}$ (which is supported on $\supp(P) \subseteq [\gamma,\, \varphi_{1}(\gamma)]$ but is otherwise arbitrary).
Namely, each index $j \in [0: m]$ piece of a bid-to-value mapping $\varphi_{\sigma}(b)$ induces a probability mass $\big(B_{\sigma}(\lambda_{j + 1}) - B_{\sigma}(\lambda_{j})\big)$ of the value distribution $V_{\sigma}(v)$.
\end{remark}
\Cref{lem:discretize} shows that the {{\sf PoA}} bound of a (generic) valid pseudo instance $(P,\, \bB \otimes L)$ can be approximated {\em arbitrarily close} by a {\em discretized} pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$.
\begin{intuition*}
For the {\textsf{Price of Anarchy}} problem, we only consider (\Cref{cor:pseudo_instance}) the pseudo instances that have {\em bounded} {\textsf{Social Welfares}} ${\sf FPA}(P,\, \bB \otimes L) \leq {\sf OPT}(P,\, \bB \otimes L) < +\infty$.
In a sense of Lebesgue integrability, we can carefully truncate\footnote{We will use a special truncation scheme. The usual truncation scheme often induces probability masses at the truncation threshold, but we require that the bid CDF's $\bB \otimes L$ are {\em continuous} over the bid support $b \in [\gamma,\, \lambda]$.} the bid distributions $\bB \otimes L$ to meet the \blackref{finiteness} condition, by which the auction/optimal {\textsf{Social Welfares}} change by negligible amounts.
Then, we replace the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ by their {\em interpolation functions} $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ (\blackref{piecewise}).
The mappings $\bvarphi$ determine the bid distributions $\bB \otimes L$ (\Cref{lem:pseudo_distribution}) and the auction/optimal {\textsf{Social Welfares}} (\Cref{lem:pseudo_welfare}) through {\em integral formulas}, which are insensitive to (local) changes of the interpolation functions $\tilde{\bvarphi}$. Thus, as usual, when the interpolation scheme is accurate enough, everything (especially the {{\sf PoA}} bound) can be approximated arbitrarily well.
To make the arguments rigorous, we use the continuous mapping theorem \cite{B13} in measure theory.
\end{intuition*}
\begin{theorem}[{The continuous mapping theorem \cite{B13}}]
\label{thm:continuous_mapping}
Let $X$ and $\{X_{m}\}_{m \geq 0}$ be random elements defined on a metric space $\mathbb{S}$. Regarding a function $g: \mathbb{S} \mapsto \mathbb{S}'$ (where $\mathbb{S}'$ is another metric space), suppose its set of discontinuity points $\mathbb{D}_{g}$ has zero measure under $X$, i.e., $\Pr[X \in \mathbb{D}_{g}] = 0$; then this function $g$ preserves convergence in distribution. Formally,
\[
\big\{ X_{m} \big\} ~\overset{{\tt dist}}{\longrightarrow}~ X
\qquad\Longrightarrow\qquad
\big\{ g(X_{m}) \big\} ~\overset{{\tt dist}}{\longrightarrow}~ g(X).
\]
\end{theorem}
\begin{lemma}[Discretization]
\label{lem:discretize}
Given a valid pseudo instance $(P,\, \bB \otimes L)$ that has bounded expected auction/optimal {\textsf{Social Welfares}} ${\sf FPA}(P,\, \bB \otimes L) \leq {\sf OPT}(P,\, \bB \otimes L) < +\infty$, for any error $\epsilon \in (0,\, 1)$, there is a discretized pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ such that $|{\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) - {\sf PoA}(P,\, \bB \otimes L)| \leq \epsilon$.
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Approximate Reduction $\term[\text{\tt Discretize}]{alg:discretize}(P,\, \bB \otimes L,\, \bLambda)$
\begin{flushleft}
{\bf Input:} A (generic) valid pseudo instance $(P,\, \bB \otimes L)$;
\hfill \Cref{def:pseudo} \\
\white{\bf Input:} A partition $\bLambda = [\gamma = \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} = \lambda]$.
\hfill \Cref{def:discretize}
\vspace{.05in}
{\bf Output:} A {\em discretized} pseudo instance $(P,\, \tilde{\bB} \otimes \tilde{L})$.
\begin{enumerate}
\item\label{alg:discretize:mapping}
Define $\tilde{\varphi}_{\sigma}(b) \eqdef \sum_{j \in [0:\, m]} \varphi_{\sigma}(\lambda_{j + 1}) \cdot \indicator(\lambda_{j} \leq b < \lambda_{j + 1})$ on $b \in [\gamma,\, \lambda)$, for $\sigma \in [n] \cup \{L\}$.
\item[] \OliveGreen{$\triangleright$ Define $\tilde{\varphi}_{\sigma}(\lambda_{m + 1}) \eqdef \varphi_{\sigma}(\lambda_{m + 1})$ to make a closed domain.}
\item\label{alg:discretize:distribution}
{\bf Return} $(P,\, \tilde{\bB} \otimes \tilde{L})$, namely only the bid distributions $\tilde{\bB} \otimes \tilde{L} = \{\tilde{B}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ are modified and (\Cref{lem:pseudo_distribution}) are reconstructed from the mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Discretize}} approximate reduction.
\label{fig:alg:discretize}}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:discretize:old}
The input {\em increasing} mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.]{
\includegraphics[width = .49\textwidth]
{discretize_old.png}}
\hfill
\subfloat[\label{fig:discretize:new}
The output {\em discretized} mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.]{
\includegraphics[width = .49\textwidth]
{discretize_new.png}}
\caption{Demonstration for the {\text{\tt Discretize}} approximate reduction.
\label{fig:discretize}}
\end{figure}
\clearpage}
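As an informal numerical companion to the interpolation step of the reduction (illustrative only; the mapping $\varphi(b) = b^2$ and all parameters are hypothetical), the right-endpoint step interpolation can be sanity-checked in a few lines: it dominates the input mapping, inherits monotonicity, and its pointwise gap shrinks as $m$ grows.

```python
# Illustrative sketch (not part of the formal proof): the right-endpoint
# step interpolation from the Discretize reduction, for a hypothetical
# increasing mapping phi on [gamma, lam].

def uniform_partition(gamma, lam, m):
    """Partition points lambda_j = gamma + j / (m + 1) * (lam - gamma)."""
    return [gamma + j * (lam - gamma) / (m + 1) for j in range(m + 2)]

def discretize(phi, gamma, lam, m):
    """phi_tilde(b) = phi(lambda_{j+1}) on each piece [lambda_j, lambda_{j+1})."""
    pts = uniform_partition(gamma, lam, m)

    def phi_tilde(b):
        if b == lam:                       # closed right endpoint
            return phi(lam)
        for j in range(m + 1):
            if pts[j] <= b < pts[j + 1]:
                return phi(pts[j + 1])
        raise ValueError("b outside [gamma, lam]")

    return phi_tilde

if __name__ == "__main__":
    gamma, lam = 1.0, 2.0
    phi = lambda b: b * b                  # hypothetical increasing mapping
    grid = [gamma + k * (lam - gamma) / 200 for k in range(201)]
    gaps = []
    for m in (0, 3, 15):
        pt = discretize(phi, gamma, lam, m)
        vals = [pt(b) for b in grid]
        assert all(v >= phi(b) for v, b in zip(vals, grid))   # dominance
        assert all(x <= y for x, y in zip(vals, vals[1:]))    # monotone
        gaps.append(max(v - phi(b) for v, b in zip(vals, grid)))
    assert gaps[2] < gaps[1] < gaps[0]     # pointwise gap shrinks with m
    print("ok")
```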
\begin{proof}
For ease of presentation, we first assume that the given pseudo instance $(P,\, \bB \otimes L)$ itself satisfies \blackref{finiteness} $\lambda \leq \max(\bvarphi(\blambda)) < +\infty$; this assumption will be removed later.
Given an integer $m \geq 0$, we divide the {\em bounded} bid support into $(m + 1)$ uniform pieces $\bLambda = [\gamma = \lambda_{0},\, \lambda_{1}) \cup [\lambda_{1},\, \lambda_{2}) \cup \dots \cup [\lambda_{m},\, \lambda_{m + 1} = \lambda]$, via the partition points
$\lambda_{j} \eqdef \gamma + \frac{j}{m + 1} \cdot (\lambda - \gamma)$ for $j \in [0:\, m + 1]$.
Regarding this partition $\bLambda$, the {\blackref{alg:discretize}} reduction (see \Cref{fig:alg:discretize,fig:discretize} for its description and a visual aid) transforms the given pseudo instance $(P,\, \bB \otimes L)$ into another {\em discretized} pseudo instance $(P,\, \tilde{\bB} \otimes \tilde{L})$. This is formalized into {\bf \Cref{fact:discretize:output}}.
\setcounter{fact}{0}
\begin{fact}
\label{fact:discretize:output}
Under reduction $(P,\, \tilde{\bB} \otimes \tilde{L}) \gets \text{\tt Discretize}(P,\, \bB \otimes L,\, \bLambda)$, the output $(P,\, \tilde{\bB} \otimes \tilde{L})$ is a discretized pseudo instance; the conditional value $P$ is unmodified.
\end{fact}
\begin{proof}
The {\text{\tt Discretize}} reduction (Line~\ref{alg:discretize:distribution}) reconstructs the output bid distributions $\tilde{\bB} \otimes \tilde{L}$ from the output mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$; (Line~\ref{alg:discretize:mapping}) each output mapping $\tilde{\varphi}_{\sigma}(b)$ is the $(m + 1)$-piece {\em interpolation} of the input mapping $\varphi_{\sigma}(b)$ under the partition $\bLambda$.
Particularly (see \Cref{fig:discretize} for a visual aid), the output mappings $\tilde{\varphi}_{\sigma}(b) = \varphi_{\sigma}(\lambda_{j + 1}) \geq \varphi_{\sigma}(b)$ interpolate the right endpoint of each index-$j$ piece $b \in [\lambda_{j},\, \lambda_{j + 1})$ of the partition $\bLambda$.
Obviously, each interpolation $\tilde{\varphi}_{\sigma}(b)$ is a piecewise constant function on the same bounded domain $b \in [\gamma,\, \lambda]$ (\blackref{finiteness} and \blackref{piecewise}) and is an increasing function \`{a} la the input mapping $\varphi_{\sigma}(b)$ (\blackref{monotonicity}).
Also, the extended range $[\gamma,\, \tilde{\varphi}_{1}(\gamma)] = [\gamma,\, \varphi_{1}(\lambda_{1})] \supseteq [\gamma,\, \varphi_{1}(\gamma)] \supseteq \supp(P)$ still contains the support of the unmodified conditional value $P$ (\blackref{boundedness}).
It remains to show that the reconstructed bid distributions $\tilde{\bB} \otimes \tilde{L}$ are well defined, namely that
(\Cref{lem:pseudo_distribution}) the interpolations $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ satisfy the conditions in \Cref{lem:pseudo_mapping}.
The input mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ themselves satisfy those conditions:
(\Cref{lem:pseudo_mapping:dominance} of \Cref{lem:pseudo_mapping}) that $\min(\bvarphi(b^{\otimes n + 1})) = \varphi_{L}(b) > b$ for $b \in (\gamma,\, \lambda]$ and $\min(\bvarphi(\gamma^{\otimes n + 1})) = \varphi_{L}(\gamma) \geq \gamma$; (\Cref{lem:pseudo_mapping:inequality} of \Cref{lem:pseudo_mapping}) that $\sum_{i \in [n]} \frac{\varphi_{L}(b) - b}{\varphi_{i}(b) - b} \geq n - 1$ for $b \in (\gamma,\, \lambda]$.
By interpolating the right endpoint of each index-$j$ piece $b \in [\lambda_{j},\, \lambda_{j + 1})$ of the partition $\bLambda$,
the output mappings satisfy $\tilde{\varphi}_{\sigma}(b) = \varphi_{\sigma}(\lambda_{j + 1}) \geq \tilde{\varphi}_{L}(b) = \varphi_{L}(\lambda_{j + 1}) > \lambda_{j + 1} > b$ (\Cref{lem:pseudo_mapping:dominance} of \Cref{lem:pseudo_mapping})
and thus $\sum_{i \in [n]} \frac{\tilde{\varphi}_{L}(b) - b}{\tilde{\varphi}_{i}(b) - b} = \sum_{i \in [n]} \frac{\varphi_{L}(\lambda_{j + 1}) - b}{\varphi_{i}(\lambda_{j + 1}) - b} \geq \sum_{i \in [n]} \frac{\varphi_{L}(\lambda_{j + 1}) - \lambda_{j + 1}}{\varphi_{i}(\lambda_{j + 1}) - \lambda_{j + 1}} \geq n - 1$ (\Cref{lem:pseudo_mapping:inequality} of \Cref{lem:pseudo_mapping}).
This finishes the proof of {\bf \Cref{fact:discretize:output}}.
\end{proof}
\begin{comment}
The reconstruction (Line~\ref{alg:discretize:distribution} and \Cref{lem:pseudo_distribution}) of the bid distributions $\tilde{\bB} \otimes \tilde{L} = \{\tilde{B}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ inherently ensures continuity, nonnegativity $\tilde{B}_{\sigma}(b) \geq 0$, and boundary conditions $\tilde{B}_{\sigma}(\lambda) = 1$.
To make them well-defined, it remains to show that $\tilde{B}_{\sigma}(b)$'s are increasing on every piece $b \in [\lambda_{j},\, \lambda_{j + 1})$ for $j \in [0: m]$, or equivalently,
that the derivatives $\frac{\d}{\d b}\big(\ln \tilde{B}_{i}(b)\big) \geq 0$ for $i \in [n]$ and $\frac{\d}{\d b}\big(\ln \tilde{L}(b)\big) \geq 0$. By construction (cf.\ \Cref{lem:pseudo_distribution}), we deduce that
\begin{align*}
\frac{\d}{\d b}\Big(\ln \tilde{B}_{i}(b)\Big)
& ~=~ \big(\tilde{\varphi}_{L}(b) - b\big)^{-1} - \big(\tilde{\varphi}_{i}(b) - b\big)^{-1}
&& \text{Line~\ref{alg:discretize:distribution}} \\
& ~=~ \big(\varphi_{L}(\lambda_{j + 1}) - b\big)^{-1} - \big(\varphi_{i}(\lambda_{j + 1}) - b\big)^{-1},
&& \text{Line~\ref{alg:discretize:mapping}}
\phantom{\Big.} \\
\frac{\d}{\d b}\Big(\ln \tilde{L}(b)\Big) \hspace{4.2pt}
& ~=~ \sum_{i \in [n]} \big(\tilde{\varphi}_{i}(b) - b\big)^{-1} - (n - 1) \cdot \big(\tilde{\varphi}_{L}(b) - b\big)^{-1}
&& \text{Line~\ref{alg:discretize:distribution}} \\
& ~=~ \sum_{i \in [n]} \big(\varphi_{i}(\lambda_{j + 1}) - b\big)^{-1} - (n - 1) \cdot \big(\varphi_{L}(\lambda_{j + 1}) - b\big)^{-1}.
&& \text{Line~\ref{alg:discretize:mapping}}
\end{align*}
The derivatives $\frac{\d}{\d b}\big(\ln \tilde{B}_{i}(b)\big)$ {\em are} nonnegative, provided that $\varphi_{i}(\lambda_{j + 1}) \geq \varphi_{L}(\lambda_{j + 1}) > \lambda_{j + 1} > b$ (\Cref{lem:pseudo_mapping}). The derivative $\frac{\d}{\d b}\big(\ln \tilde{L}(b)\big)$'s nonnegativity reduces to $\sum_{i \in [n]} \frac{\varphi_{L}(\lambda_{j + 1}) - b}{\varphi_{i}(\lambda_{j + 1}) - b} \geq n - 1$, which can be inferred as follows:
\[
\sum_{i \in [n]} \frac{\varphi_{L}(\lambda_{j + 1}) - b}{\varphi_{i}(\lambda_{j + 1}) - b}
~\geq~ \sum_{i \in [n]} \frac{\varphi_{L}(\lambda_{j + 1}) - \lambda_{j + 1}}{\varphi_{i}(\lambda_{j + 1}) - \lambda_{j + 1}}
~\geq~ n - 1.
\]
The first inequality holds: Each formula $\frac{\varphi_{L}(\lambda_{j + 1}) - b}{\varphi_{i}(\lambda_{j + 1}) - b}$ for $i \in [n]$ is decreasing on $b \in [\lambda_{j},\, \lambda_{j + 1}]$, given that (\Cref{lem:pseudo_mapping}) $\varphi_{i}(\lambda_{j + 1}) \geq \varphi_{L}(\lambda_{j + 1}) > \lambda_{j + 1} > b$. \\
The second inequality holds: The input $(P,\, \bB \otimes L)$ as a {\em well-defined} valid pseudo instance, inherently satisfies $\frac{\d}{\d b}\big(\ln L(b)\big) \bigmid_{b = \lambda_{j + 1}} \geq 0$, implying the second inequality.
To conclude, this is a well-defined {\em discretized} pseudo instance $(P,\, \tilde{\bB} \otimes \tilde{L})$. Indeed, regarding all integers $m \geq 0$, we get a {\em sequence} of such pseudo instances $(P,\, \bB^{(m)} \otimes L^{(m)})$.
Below we investigate the bound ${\sf PoA}(P,\, \bB^{(m)} \otimes L^{(m)})$ for each of them.
\end{comment}
We thus obtain a sequence of {\em discretized} pseudo instances $(P,\, \bB^{(m)} \otimes L^{(m)})$, for $m \geq 0$.
Below we study the bound ${\sf PoA}(P,\, \bB^{(m)} \otimes L^{(m)})$ for each of them.
To this end, consider a {\em specific} profile $(p,\, \bb,\, \ell) \in
[\gamma,\, \varphi_{1}(\gamma)] \times [\gamma,\, \lambda]^{n + 1}$.
Following \Cref{def:pseudo} and \Cref{lem:high_bid}, depending on whether each bid among $(\bb,\, \ell) = (b_{\sigma})_{\sigma \in [n] \cup \{L\}}$ takes the infimum value $= \gamma$ or exceeds it $> \gamma$:
\begin{itemize}
\item The monopolist $B_{1}$ takes a value $v_{1}^{(m)} = p \cdot \indicator(b_{1} = \gamma) + \varphi_{1}^{(m)}(b_{1}) \cdot \indicator(b_{1} > \gamma)$.
\item Each other bidder $\sigma \in [2:\, n] \cup \{L\}$ takes a value $v_{\sigma}^{(m)} = \gamma \cdot \indicator(b_{1} = \gamma) + \varphi_{\sigma}^{(m)}(b_{\sigma}) \cdot \indicator(b_{\sigma} > \gamma)$.
\end{itemize}
Over the randomness of the allocation rule $\alloc(\bb,\, \ell)$, this index-$m$ pseudo instance $(P,\, \bB^{(m)} \otimes L^{(m)})$ yields an interim auction {\textsf{Social Welfare}} $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell) \eqdef \Ex_{\alloc} \big[ v_{\alloc(\bb,\, \ell)}^{(m)} \big]$.
Likewise, under the {\em same} profile $(p,\, \bb,\, \ell)$, the input pseudo instance $(P,\, \bB \otimes L)$ yields a counterpart formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell) \eqdef \Ex_{\alloc} \big[ v_{\alloc(\bb,\, \ell)} \big]$.
{\bf \Cref{fact:discretize:function_property,fact:discretize:function_convergence}} show certain properties of the input/index-$m$ mappings $\bvarphi$ and $\bvarphi^{(m)}$ and the input/index-$m$ formulas $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ and $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$.
\begin{fact}
\label{fact:discretize:function_property}
On the domain $b \in [\gamma,\, \lambda]$, the mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ and $\bvarphi^{(m)} = \{\varphi_{\sigma}^{(m)}\}_{\sigma \in [n] \cup \{L\}}$ are bounded everywhere, and continuous almost everywhere except a zero-measure set $\mathbb{D} \subseteq [\gamma,\, \lambda]$.
Also, on the domain $(p,\, \bb,\, \ell) \in [\gamma,\, \varphi_{1}(\gamma)] \times [\gamma,\, \lambda]^{n + 1}$, the formulas $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ and $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ are bounded everywhere, and continuous almost everywhere except a zero-measure set $\mathbb{D}_{{\sf FPA}}$.
\end{fact}
\begin{proof}
Each input mapping $\varphi_{\sigma}(b)$ as an increasing function (\blackref{monotonicity}) is continuous almost everywhere, except countably many discontinuities that in total have a zero measure. Also, each input mapping $\varphi_{\sigma}(b)$ is bounded $\varphi_{\sigma}(b) \leq \varphi_{\sigma}(\lambda) \leq \max(\bvarphi(\blambda)) < +\infty$ (\blackref{finiteness}).
Such continuity and boundedness extend to the input formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$, since tiebreaks $\alloc(\bb,\, \ell) \in \argmax(\bb,\, \ell)$ occur with a zero probability.\footnote{That is, conditioned on the {\em all-infimum} bid profile $\big\{ (\bb,\, \ell) = \bgamma \big\}$, the monopolist {\em always} gets allocated $\alloc(\bgamma) = B_{1}$ (\Cref{lem:monopolist}). Otherwise $\big\{ (\bb,\, \ell) \neq \bgamma \big\}$, tiebreaks occur with a zero probability because the underlying bid CDF's $\bB \otimes L$ are continuous functions (\Cref{lem:bid_distribution}).}
For the same reasons, the index-$m$ mappings $\varphi_{\sigma}^{(m)}(b)$ and the index-$m$ formula $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ are also bounded and continuous almost everywhere.
{\bf \Cref{fact:discretize:function_property}} follows then.
\end{proof}
\begin{fact}
\label{fact:discretize:function_convergence}
The sequence $\big\{ \varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell) \big\}_{m \geq 0}$ pointwise converges to $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ almost everywhere.
\end{fact}
\begin{proof}
By construction, each index-$m$ {{\sf FPA}} formula $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ for $m \geq 0$ is the $(m + 1)^{n + 1}$-piece uniform interpolation of the input {{\sf FPA}} formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$. Because $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ itself is bounded and continuous almost everywhere ({\bf \Cref{fact:discretize:function_property}}), letting $m \to +\infty$ makes the sequence $\big\{ \varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell) \big\}_{m \geq 0}$ converge to $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ almost everywhere.
{\bf \Cref{fact:discretize:function_convergence}} follows then.
\end{proof}
Now we consider a {\em random} profile from the input pseudo instance $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$
rather than a specific profile $\in [\gamma,\, \varphi_{1}(\gamma)] \times [\gamma,\, \lambda]^{n + 1}$.
{\bf \Cref{fact:discretize:measure_input,fact:discretize:measure_sequence}} show that the formulas $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ and $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ preserve the almost-everywhere continuity in the probability space $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$.
\begin{fact}
\label{fact:discretize:measure_input}
The measure $\Prx_{p,\, \bb,\, \ell} \big[\, (p,\, \bb,\, \ell) \in \mathbb{D}_{{\sf FPA}} \,\big] = 0$, where $\mathbb{D}_{{\sf FPA}}$ is the set of discontinuity points of the input {{\sf FPA}} formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$.
\end{fact}
\begin{proof}
We claim that, with the $(\bb,\, \ell) \in [\gamma,\, \lambda]^{n + 1}$ held constant, the formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ is continuous with respect to $p \in [\gamma,\, \varphi_{1}(\gamma)]$, the monopolist $B_{1}$'s conditional value.
{\bf IF} the bid profile is {\em all-infimum} $\big\{ (\bb,\, \ell) = \bgamma \big\}$, the monopolist $B_{1}$ takes the conditional value $v_{1} = p$ (\Cref{def:pseudo}) and {\em will} be allocated $\alloc(\bgamma) = B_{1}$ (\Cref{{lem:monopolist}}) -- the {\em identity} formula $\varphi_{{\sf FPA}}(p,\, \bgamma) = \Ex_{\alloc} [v_{\alloc(\bgamma)}] = p$.
{\bf ELSE} $\big\{ (\bb,\, \ell) \neq \bgamma \big\}$, each first-order bidder $\sigma \in \argmax(\bb,\, \ell)$ takes a {\em non-infimum} bid $b_{\sigma} = \max(\bb,\, \ell) > \gamma$ (\Cref{def:pseudo}) and thus a {\em $p$-irrelevant} value $v_{\sigma} = \varphi_{\sigma}(b_{\sigma})$; the allocated bidder must be one of them $\alloc(\bb,\, \ell) \in \argmax(\bb,\, \ell)$ -- the {\em constant} formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell) = \Ex_{\alloc} [v_{\alloc(\bb,\, \ell)}]$.
Combining both cases {\bf IF}/{\bf ELSE} gives our claim.
As a consequence, the formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ can have only two types of discontinuity points.
{\bf FIRST}, those by {\em switches} of the allocated bidder $\alloc(\bb,\, \ell) \in \argmax(\bb,\, \ell)$. Those have a zero measure, because (\Cref{lem:bid_distribution}) bid distributions $B_{\sigma}(b)$ for $\sigma \in [n] \cup \{L\}$ are continuous except the infimum bid $b = \gamma$ AND (\Cref{lem:monopolist}) conditioned on the {\em all-infimum} bid profile $\big\{ (\bb,\, \ell) = \bgamma \big\}$, the monopolist {\em always} gets allocated $\alloc(\bgamma) = B_{1}$.
{\bf SECOND}, those by the (unswitched) allocated bidder's mapping $\varphi_{\alloc(\bb,\, \ell)}(b_{\alloc(\bb,\, \ell)})$ at a {\em non-infimum} first-order bid $b_{\alloc(\bb,\, \ell)} \in (\gamma,\, \lambda]$. Those again have a zero measure, because ({\bf \Cref{fact:discretize:function_property}}) each mapping $\varphi_{\sigma}(b)$ is bounded and continuous almost everywhere AND (\Cref{lem:bid_distribution}) without the infimum bid $b \in (\gamma,\, \lambda]$, each bid distribution $B_{\sigma}(b)$ is continuous.
The both types {\bf FIRST}/{\bf SECOND} together have a zero measure. {\bf \Cref{fact:discretize:measure_input}} follows then.
\end{proof}
\begin{fact}
\label{fact:discretize:measure_sequence}
The measure $\Prx_{p,\, \bb,\, \ell} \big[\, (p,\, \bb,\, \ell) \in \mathbb{D}_{{\sf FPA}}^{(m)} \,\big] = 0$, where $\mathbb{D}_{{\sf FPA}}^{(m)}$ is the set of discontinuity points of the index-$m$ {{\sf FPA}} formula $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ for $m \geq 0$.
\end{fact}
\begin{proof}
Indeed, {\bf \Cref{fact:discretize:measure_input}} uses two ``input-specific'' arguments.
(i)~The formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ has only two kinds of discontinuity points; so does the index-$m$ formula $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$, by the same analysis.
(ii)~The input mappings $\varphi_{\sigma}(b)$ are bounded and continuous almost everywhere; so are the index-$m$ mappings $\varphi_{\sigma}^{(m)}(b)$, by {\bf \Cref{fact:discretize:function_property}}.
Thus, repeating the proof of {\bf \Cref{fact:discretize:measure_input}} implies {\bf \Cref{fact:discretize:measure_sequence}}.
\end{proof}
Now we further draw {\em random} profiles $(p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \sim (P,\, \bB^{(m)} \otimes L^{(m)})$ from the {\em discretized} pseudo instances, one for each $m \geq 0$. {\bf \Cref{fact:discretize:distribution_convergence}} shows that the sequence of these profiles {\em converges in distribution} to the input {\em random} profile $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$.
\begin{fact}
\label{fact:discretize:distribution_convergence}
The sequence of random profiles $\big\{ (p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \sim (P,\, \bB^{(m)} \otimes L^{(m)}) \big\}_{m \geq 0}$ converges in distribution ($\overset{{\tt dist}}{\longrightarrow}$) to the random input profile $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$.
\end{fact}
\begin{proof}
The bid distributions $\bB \otimes L$ and/or $\bB^{(m)} \otimes L^{(m)}$ (\Cref{lem:pseudo_distribution}) can be reconstructed from the mappings $\bvarphi$ and/or $\bvarphi^{(m)}$ in terms of integral formulas. All of them are {\em Riemann integrable}, since ({\bf \Cref{fact:discretize:function_property}}) all mappings $\bvarphi$ and $\bvarphi^{(m)}$ are bounded and continuous almost everywhere on $b \in [\gamma,\, \lambda]$.
Particularly, the {\em discretized} mappings $\bvarphi^{(m)}$, as the $(m + 1)$-piece uniform interpolations of the input mappings $\bvarphi$, pointwise converge to $\bvarphi$ almost everywhere when $m \to +\infty$.
Thus, the sequence $\big\{ \bB^{(m)} \otimes L^{(m)} \big\}_{m \geq 0}$ pointwise converges to the input bid distributions $\bB \otimes L$. This gives {\bf \Cref{fact:discretize:distribution_convergence}}.
\end{proof}
Based on {\bf \Cref{fact:discretize:distribution_convergence,fact:discretize:function_convergence,fact:discretize:measure_input,fact:discretize:measure_sequence}}, we can deduce that\footnote{The continuous mapping theorem (\Cref{thm:continuous_mapping}) considers an unmodified function $g$, rather than a modified one $\varphi_{{\sf FPA}}^{(m)}$ versus $\varphi_{{\sf FPA}}$ in our case. Thus we need to take the limit $\lim_{m \to +\infty}$ and $\lim_{t \to +\infty}$ twice.}
\begin{align}
\lim_{\substack{m \to +\infty \\ t \to +\infty}}\, \Ex_{p^{(t)},\, \bb^{(t)},\, \ell^{(t)}} \big[\, \varphi_{{\sf FPA}}^{(m)}(p^{(t)},\, \bb^{(t)},\, \ell^{(t)}) \,\big]
& ~=~ \lim_{m \to +\infty}\, \Ex_{p,\, \bb,\, \ell} \big[\, \varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell) \,\big]
\label{eq:discretize:1}\tag{D1} \\
& ~=~ \Ex_{p,\, \bb,\, \ell} \big[\, \varphi_{{\sf FPA}}(p,\, \bb,\, \ell) \,\big]
\label{eq:discretize:2}\tag{D2} \\
& ~=~ {\sf FPA}(\text{\em input}) < +\infty. \phantom{\Big.}
\label{eq:discretize:3}\tag{D3}
\end{align}
\eqref{eq:discretize:1}: Apply the continuous mapping theorem (\Cref{thm:continuous_mapping}), as the sequence $\big\{ (p^{(t)},\, \bb^{(t)},\, \ell^{(t)}) \big\}_{t \geq 0}$ converges in distribution to the $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$ ({\bf \Cref{fact:discretize:distribution_convergence}}) and each set of discontinuity points $\mathbb{D}_{{\sf FPA}}^{(m)}$ for $m \geq 0$ has a zero measure $\Prx_{p,\, \bb,\, \ell} \big[\, (p,\, \bb,\, \ell) \in \mathbb{D}_{{\sf FPA}}^{(m)} \,\big] = 0$ ({\bf \Cref{fact:discretize:measure_sequence}}). \\
\eqref{eq:discretize:2}: {\bf \Cref{fact:discretize:function_convergence}} shows that the sequence of {\em discretized} {{\sf FPA}} formulas $\big\{ \varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell) \big\}_{m \geq 0}$ pointwise converges to the input {{\sf FPA}} formula $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ almost everywhere.\footnote{Rigorously, the expectations in \Cref{eq:discretize:1} for $m \geq 0$ and in \Cref{eq:discretize:2} are all Riemann integrable, because ({\bf \Cref{fact:discretize:function_property}}) the formulas $\varphi_{{\sf FPA}}^{(m)}(p,\, \bb,\, \ell)$ for $m \geq 0$ and $\varphi_{{\sf FPA}}(p,\, \bb,\, \ell)$ each are bounded and continuous almost everywhere AND ({\bf \Cref{fact:discretize:measure_input,fact:discretize:measure_sequence}}) the sets of discontinuity points $\mathbb{D}_{{\sf FPA}}^{(m)}$ for $m \geq 0$ and $\mathbb{D}_{{\sf FPA}}$ each have a zero measure on $(p,\, \bb,\, \ell) \sim (P,\, \bB \otimes L)$.} \\
\eqref{eq:discretize:3}: ${\sf FPA}(\text{\em input}) \leq {\sf OPT}(\text{\em input}) < +\infty$, as the statement of \Cref{lem:discretize} promises.
\vspace{.1in}
It follows that $\lim_{m \to +\infty}\, \Ex_{p^{(m)},\, \bb^{(m)},\, \ell^{(m)}} \big[\, \varphi_{{\sf FPA}}^{(m)}(p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \,\big] = {\sf FPA}(\text{\em input})$. Given this and because
the error $\epsilon \in (0,\, 1)$ is a constant, for some {\em bounded} threshold $M_{{\sf FPA}} = M_{{\sf FPA}}(\epsilon) < +\infty$, every {\em discretized} pseudo instance with index $m \geq M_{{\sf FPA}}$ yields a close enough auction {\textsf{Social Welfare}}
\begin{align*}
{\sf FPA}(P,\, \bB^{(m)} \otimes L^{(m)})
~=~ \Ex_{p^{(m)},\, \bb^{(m)},\, \ell^{(m)}} \Big[\, \varphi_{{\sf FPA}}^{(m)}(p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \,\Big]
~=~ {\sf FPA}(\text{\em input}) \cdot e^{\pm \epsilon / 3}.
\end{align*}
Likewise,\footnote{Namely, the counterpart {{\sf OPT}} formulas $\varphi_{{\sf OPT}}^{(m)}(p,\, \bb,\, \ell)$ and $\varphi_{{\sf OPT}}(p,\, \bb,\, \ell)$ for profiles $(p,\, \bb,\, \ell) \in [\gamma,\, \varphi_{1}(\gamma)] \times [\gamma,\, \lambda]^{n + 1}$ satisfy the analogs of {\bf \Cref{fact:discretize:function_property,fact:discretize:measure_input,fact:discretize:function_convergence,fact:discretize:measure_sequence}}. These analogs together with {\bf \Cref{fact:discretize:distribution_convergence}} result in the {{\sf OPT}} version of the claim ``$\lim_{m \to +\infty}\, \Ex_{p^{(m)},\, \bb^{(m)},\, \ell^{(m)}} \big[\, \varphi_{{\sf FPA}}^{(m)}(p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \,\big] = {\sf FPA}(\text{\em input})$''.}
for some $M_{{\sf OPT}} = M_{{\sf OPT}}(\epsilon) < +\infty$, every {\em discretized} pseudo instance with index $m \geq M_{{\sf OPT}}$ yields a close enough optimal {\textsf{Social Welfare}}
\begin{align*}
{\sf OPT}(P,\, \bB^{(m)} \otimes L^{(m)})
~=~ \Ex_{p^{(m)},\, \bb^{(m)},\, \ell^{(m)}} \Big[\, \varphi_{{\sf OPT}}^{(m)}(p^{(m)},\, \bb^{(m)},\, \ell^{(m)}) \,\Big]
~=~ {\sf OPT}(\text{\em input}) \cdot e^{\pm \epsilon / 3}.
\end{align*}
So the {\em discretized} pseudo instance with index $M \eqdef \max(M_{{\sf FPA}},\, M_{{\sf OPT}})$ gives a close enough bound
\begin{align*}
{\sf PoA}(P,\, \bB^{(M)} \otimes L^{(M)})
~=~ {\sf PoA}(\text{\em input}) \cdot e^{\pm 2 \cdot (\epsilon / 3)}
~=~ {\sf PoA}(\text{\em input}) \pm \epsilon.
\hspace{2.05cm}
\end{align*}
This finishes the proof of \Cref{lem:discretize} assuming that the input pseudo instance $(P,\, \bB \otimes L)$ admits \blackref{finiteness} that $\lambda \leq \max(\bvarphi(\blambda)) < +\infty$. Below we move on to the general case.
\vspace{.1in}
\noindent
{\bf The General Case.}
When the input $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$ violates \blackref{finiteness}, we instead consider the {\em truncated} bid distributions $\bB^{(t)} \otimes L^{(t)} = \{B_{\sigma}^{(t)}\}_{\sigma \in [n] \cup \{L\}}$ given by $B_{\sigma}^{(t)}(b) \equiv \min(\frac{B_{\sigma}(b)}{B_{\sigma}(t)},\, 1)$, where the parameter $t \in (\gamma,\, \lambda)$; see \Cref{fig:finiteness} for a visual aid.\footnote{The proof works for any supremum bid $\lambda \leq +\infty$, including the infinite one $\lambda = +\infty$. However, we demonstrate a bounded one $\lambda < +\infty$ in \Cref{fig:finiteness} for convenience.}
By construction, (\Cref{def:pseudo}) the mappings keep the same $\varphi_{\sigma}^{(t)}(b) = \varphi_{\sigma}(b)$ on the {\em truncated} bid support $b \in [\gamma,\, t]$, which means $\sup(\supp(\bB^{(t)} \otimes L^{(t)})) = t \leq \max(\bvarphi^{(t)}(\bt)) = \max(\bvarphi(\bt)) < +\infty$.
More precisely, together with the unmodified conditional value $P$, this {\em truncated} pseudo instance is valid $(P,\, \bB^{(t)} \otimes L^{(t)}) \in \mathbb{B}_{\sf valid}$ and satisfies \blackref{finiteness}.
We claim that when the parameter $t \in (\gamma,\, \lambda)$ is close enough to the supremum bid $\lambda \leq +\infty$, the truncation has an arbitrarily small effect on the {\textsf{Social Welfares}} ${\sf FPA}(\text{\em input}) \leq {\sf OPT}(\text{\em input}) < +\infty$.
Following \Cref{lem:pseudo_welfare}, the input auction {\textsf{Social Welfare}} ${\sf FPA}(\text{\em input}) = \Ex[P] \cdot \calB(\gamma) + \int_{\gamma}^{\lambda} f(b) \cdot \d b$, for the integrand $f(b) \eqdef \sum_{\sigma} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b)$.
Since this formula is finite ($< +\infty$), the integrand $f$ is Lebesgue integrable, hence $\lim_{t \nearrow \lambda} \int_{t}^{\lambda} f(b) \cdot \d b = 0$.
Likewise, the {\em truncated} auction {\textsf{Social Welfare}}
\begin{align*}
{\sf FPA}(P,\, \bB^{(t)} \otimes L^{(t)})
& ~=~ \Ex[P] \cdot \calB^{(t)}(\gamma)
+ \int_{\gamma}^{t} \Big(\sum_{\sigma} \varphi_{\sigma}^{(t)}(b) \cdot \tfrac{{B_{\sigma}^{(t)}}'(b)}{B_{\sigma}^{(t)}(b)} \cdot \calB^{(t)}(b)\Big) \cdot \d b ~~~~ \\
& ~=~ \frac{1}{\calB(t)} \cdot \Ex[P] \cdot \calB(\gamma)
+ \frac{1}{\calB(t)} \cdot \int_{\gamma}^{t} f(b) \cdot \d b \\
& ~=~ \frac{1}{\calB(t)} \cdot \Big({\sf FPA}(\text{\em input}) - \int_{t}^{\lambda} f(b) \cdot \d b\Big).
\end{align*}
We have $\lim_{t \nearrow \lambda} {\sf FPA}(P,\, \bB^{(t)} \otimes L^{(t)}) = {\sf FPA}(\text{\em input})$, given continuity $\lim_{t \nearrow \lambda} \calB(t) = 1$ (\Cref{lem:bid_distribution}).
Accordingly, for some threshold $T_{{\sf FPA}} = T_{{\sf FPA}}(\epsilon) \in (\gamma,\, \lambda)$, any parameter $t \in [T_{{\sf FPA}},\, \lambda)$ yields a close enough auction {\textsf{Social Welfare}} ${\sf FPA}(P,\, \bB^{(t)} \otimes L^{(t)}) = {\sf FPA}(\text{\em input}) \cdot e^{\pm \epsilon / 3}$.
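The truncation $B_{\sigma}^{(t)}(b) \equiv \min(B_{\sigma}(b) / B_{\sigma}(t),\, 1)$ can likewise be illustrated numerically (with a hypothetical continuous CDF; illustrative only): the truncated CDF caps all mass at $t$, rescales below $t$, and approaches the input CDF pointwise as $t \nearrow \lambda$.

```python
# Illustrative sketch (hypothetical continuous bid CDF B; not the
# paper's formal objects): the truncation B_t(b) = min(B(b) / B(t), 1)
# used to enforce finiteness in the general case.

def truncate(B, t):
    """Condition the bid distribution on the event {bid <= t}."""
    mass = B(t)
    return lambda b: min(B(b) / mass, 1.0)

if __name__ == "__main__":
    B = lambda b: b * b                     # hypothetical CDF on [0, 1]
    grid = [k / 100 for k in range(101)]
    errs = []
    for t in (0.9, 0.99, 0.999):
        Bt = truncate(B, t)
        assert Bt(t) == 1.0 and Bt(1.0) == 1.0          # mass capped at t
        assert abs(Bt(0.5) - B(0.5) / B(t)) < 1e-12     # rescaled below t
        errs.append(max(abs(Bt(b) - B(b)) for b in grid))
    assert errs[2] < errs[1] < errs[0]      # truncation effect vanishes
    print("ok")
```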
\begin{figure}[t]
\centering
\includegraphics[width = .9\textwidth]{finiteness.png}
\caption{Demonstration for the transformation from $(P,\, \bB)$ into $(P,\, \tilde{\bB})$, making a valid pseudo instance further satisfy {\bf finiteness} (\Cref{def:discretize}). For convenience, the demonstrated supremum bid is bounded $\lambda < +\infty$. But the transformation works for an arbitrary one $\lambda \leq +\infty$.}
\label{fig:finiteness}
\end{figure}
Due to \Cref{lem:pseudo_welfare}, the input optimal {\textsf{Social Welfare}} ${\sf OPT}(\text{\em input}) = \gamma + \int_{\gamma}^{+\infty} (1 - \prod_{\sigma} V_{\sigma}(v)) \cdot \d v$.
Since this formula is finite ($< +\infty$) and Lebesgue integrable, we have $\int_{\gamma}^{+\infty} (1 - V_{\sigma}(v)) \cdot \d v < +\infty$.
Since the bid-to-value mappings are invariant $\varphi_{\sigma}^{(t)}(b) = \varphi_{\sigma}(b)$ over the {\em truncated} support $b \in [\gamma,\, t]$,
(\Cref{lem:value_dist}) the bid distributions $B_{\sigma}^{(t)}(b) \equiv \min(\frac{B_{\sigma}(b)}{B_{\sigma}(t)},\, 1)$ induce the value distributions $V_{\sigma}^{(t)}(v) \equiv \min(\frac{V_{\sigma}(v)}{B_{\sigma}(t)},\, 1)$.
Given continuity $\lim_{t \nearrow \lambda} B_{\sigma}(t) = 1$ (\Cref{lem:bid_distribution}), we deduce that $\lim_{t \nearrow \lambda} \int_{\gamma}^{+\infty} \big|V_{\sigma}^{(t)}(v) - V_{\sigma}(v)\big| \cdot \d v = 0$.
Based on \Cref{lem:value_dist}, we can bound the {\em truncated} optimal {\textsf{Social Welfare}}:
\begin{align*}
{\sf OPT}(P,\, \bB^{(t)} \otimes L^{(t)})
& ~=~ {\sf OPT}(\text{\em input})
- \int_{\gamma}^{+\infty} P(v) \cdot \Big(\prod_{\sigma} V_{\sigma}^{(t)}(v) - \prod_{\sigma} V_{\sigma}(v)\Big) \cdot \d v \\
& ~=~ {\sf OPT}(\text{\em input})
- \sum_{\sigma} \int_{\gamma}^{+\infty} \Big(P(v) \cdot \big(V_{\sigma}^{(t)}(v) - V_{\sigma}(v)\big) \cdot \\
& \phantom{~=~ {\sf OPT}(\text{\em input})
- \int_{\gamma}^{+\infty} \sum_{\sigma} \Big(} \prod_{k < \sigma} V_{k}^{(t)}(v) \cdot \prod_{k > \sigma} V_{k}(v)\Big) \cdot \d v \\
& ~=~ {\sf OPT}(\text{\em input})
\pm \sum_{\sigma} \int_{\gamma}^{+\infty} \big| V_{\sigma}^{(t)}(v) - V_{\sigma}(v) \big| \cdot \d v.
\end{align*}
Here the last step uses the relaxation $|P(v)|,\, |V_{\sigma}^{(t)}(v)|,\, |V_{\sigma}(v)| \leq 1$.
Now it is clear that $\lim_{t \nearrow \lambda} {\sf OPT}(P,\, \bB^{(t)} \otimes L^{(t)}) = {\sf OPT}(\text{\em input})$.
Accordingly, for some threshold $T_{{\sf OPT}} = T_{{\sf OPT}}(\epsilon) \in (\gamma,\, \lambda)$, any parameter $t \in [T_{{\sf OPT}},\, \lambda)$ yields a close enough optimal {\textsf{Social Welfare}} ${\sf OPT}(P,\, \bB^{(t)} \otimes L^{(t)}) = {\sf OPT}(\text{\em input}) \cdot e^{\pm \epsilon / 3}$.
Combining everything together, the {\em truncated} pseudo instance with parameter $T \eqdef \max(T_{{\sf FPA}},\, T_{{\sf OPT}})$ ensures a close enough bound ${\sf PoA}(P,\, \bB^{(T)} \otimes L^{(T)}) = {\sf PoA}(\text{\em input}) \cdot e^{\pm 2 \cdot (\epsilon / 3)} = {\sf PoA}(\text{\em input}) \pm \epsilon$ and satisfies \blackref{finiteness}.
Through the \blackref{alg:discretize} reduction, we can transform this {\em truncated} pseudo instance $(P,\, \bB^{(T)} \otimes L^{(T)})$ into a {\em discretized} pseudo instance $(P,\, \tilde{\bB} \otimes \tilde{L})$, as desired.
Particularly, the resulting bound ${\sf PoA}(P,\, \tilde{\bB} \otimes \tilde{L}) = {\sf PoA}(P,\, \bB^{(T)} \otimes L^{(T)}) \pm \epsilon = {\sf PoA}(\text{\em input}) \pm 2\epsilon$.
Reducing the error $\epsilon \in (0,\, 1)$ by a factor of $2$ finishes the proof of \Cref{lem:discretize}.
\end{proof}
\begin{comment}
\[
{\sf PoA}(P,\, \tilde{\bB} \otimes \tilde{L})
~=~ {\sf PoA}(P,\, \bB^{(T)} \otimes L^{(T)}) \pm \epsilon
~=~ {\sf PoA}(\text{\em input}) \pm 2\epsilon.
\]
pseudo instance $(P,\, \bB^{(t)} \otimes L^{(t)})$ for $t \in [T_{{\sf OPT}},\, \lambda)$ yields a close enough auction {\textsf{Social Welfare}}
\red{When the supremum bid is unbounded $\lambda = +\infty$,
we can adjust our partition to $\lambda_{j} = \gamma + j / 2^{m}$ for each $j \in [0:\, 4^{m}]$, and then reuse the {\blackref{alg:discretize}} reduction.}
Lebesgue integral
\begin{align*}
{\sf FPA}(P,\, \bB^{(t)} \otimes L^{(t)}) ~=~ \Ex_{p,\, \bb,\, \ell} \Big[\, \varphi_{{\sf FPA}}(p,\, \bb,\, \ell) \cdot \indicator((\bb,\, \ell) \leq t^{\otimes n + 1}) \,\Big] \cdot \frac{1}{\calB(t)}.
\end{align*}
\[
{\sf OPT}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) ~\geq~ \Ex_{p,\, \bb,\, \ell} \Big[\, \varphi_{{\sf OPT}}(p,\, \bb,\, \ell) \cdot \indicator((\bb,\, \ell) \leq t^{\otimes n + 1}) \,\Big].
\]
\[
{\sf OPT}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) ~\leq~ \Ex_{p,\, \bb,\, \ell} \Big[\, \varphi_{{\sf OPT}}(p,\, \bb,\, \ell) \,\Big].
\]
\end{comment}
\subsection{{\text{\tt Translate}}: Nullifying the minimum winning bid}
\label{subsec:translate}
This subsection presents the {\blackref{alg:translate}} reduction (see \Cref{fig:alg:translate,fig:translate} for its description and a visual demonstration), which transforms a {\em discretized} pseudo instance (\Cref{def:discretize}) into a more special {\em translated} pseudo instance (\Cref{def:translate}).
\begin{definition}[Translated pseudo instances]
\label{def:translate}
A {\em discretized} pseudo instance $(P,\, \bB \otimes L)$ from \Cref{def:discretize} is further called {\em translated} when (\term[{\bf nil infimum bid}]{nil}) the infimum bid is nil $\gamma = 0$.
\end{definition}
\Cref{lem:translate} presents performance guarantees of the {\blackref{alg:translate}} reduction.
\begin{lemma}[{\text{\tt Translate}}; \Cref{fig:alg:translate}]
\label{lem:translate}
Under reduction $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) \gets \text{\tt Translate}(P,\, \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:translate:property}
The output $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ is a translated pseudo instance.
\item\label{lem:translate:poa}
A (weakly) worse bound is yielded ${\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(P,\, \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Translate}]{alg:translate}(P,\, \bB \otimes L)$
\begin{flushleft}
{\bf Input:} A (generic) {\em discretized} pseudo instance $(P,\, \bB \otimes L)$
\hfill \Cref{def:discretize}
\vspace{.05in}
{\bf Output:} A {\em translated} pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$.
\hfill \Cref{def:translate}
\begin{enumerate}
\item Define $\tilde{P}(v) \equiv P(v + \gamma)$ and $\tilde{B}_{\sigma}(b) \equiv B_{\sigma}(b + \gamma)$ for each bidder $\sigma \in [n] \cup \{L\}$.
\item {\bf Return} $(\tilde{P}, \tilde{\bB} \otimes \tilde{L})$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Translate}} reduction.
\label{fig:alg:translate}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = .65\textwidth]{translate.png}
\caption{Demonstration for the {\text{\tt Translate}} reduction (\Cref{fig:alg:translate}), which indeed works for all VALID pseudo instances $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$ (\Cref{def:pseudo}), not only the {\em discretized} ones (\Cref{def:discretize}).
On the whole shifted bid support $b \in [0,\, \lambda - \gamma]$, each bidder $\sigma \in [n] \cup \{L\}$ has a shifted bid-to-value mapping $\tilde{\varphi}_{\sigma}(b) \equiv \varphi_{\sigma}(b + \gamma) - \gamma$.
\label{fig:translate}}
\end{figure}
\clearpage}
\begin{proof}
See \Cref{fig:translate} for a visual aid.
Let us verify {\bf \Cref{lem:translate:property,lem:translate:poa}} one by one.
\vspace{.1in}
\noindent
{\bf \Cref{lem:translate:property}.}
The {\blackref{alg:translate}} reduction shifts the bid/value spaces each by a distance of $-\gamma$, inducing a nil infimum bid $\gamma - \gamma = 0$ (\blackref{nil}). Clearly, each shifted mapping $\tilde{\varphi}_{\sigma}(b) = \varphi_{\sigma}(b + \gamma) - \gamma$ for $\sigma \in [n] \cup \{L\}$ (akin to the input mapping $\varphi_{\sigma}$) is increasing on the shifted support $b \in [0,\, \lambda - \gamma]$ (\blackref{monotonicity}), and the shifted conditional value $\tilde{P} = P - \gamma$ (akin to the input one $\supp(P) \subseteq [\gamma,\, \varphi_{1}(\gamma)]$) ranges between $\supp(\tilde{P}) \subseteq [\gamma - \gamma,\, \varphi_{1}(\gamma) - \gamma] = [0,\, \tilde{\varphi}_{1}(0)]$ (\blackref{boundedness}).
Moreover, the shifted pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ must preserve (\Cref{def:discretize}) \blackref{piecewise} of the mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ and \blackref{finiteness}, namely $\lambda - \gamma \leq \max(\tilde{\bvarphi}(\blambda - \gamma)) = \max(\bvarphi(\blambda)) - \gamma < +\infty$. To conclude, {\bf \Cref{lem:translate:property}} holds.
\vspace{.1in}
\noindent
{\bf \Cref{lem:translate:poa}.}
Because the bid/value spaces each are shifted by a distance of $-\gamma$, the auction/optimal {\textsf{Social Welfares}} (\Cref{lem:pseudo_welfare}) each drop by an amount of $\gamma$. We thus deduce that
\[
{\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})
~=~ \frac{{\sf FPA}(P,\, \bB \otimes L)\; - \gamma}{{\sf OPT}(P,\, \bB \otimes L) - \gamma}
~\leq~ \frac{{\sf FPA}(P,\, \bB \otimes L)\;}{{\sf OPT}(P,\, \bB \otimes L)}
~=~ {\sf PoA}(P,\, \bB \otimes L).
\]
Here the inequality holds because the {\textsf{Social Welfares}} ${\sf OPT}(P,\, \bB \otimes L) \geq {\sf FPA}(P,\, \bB \otimes L)$ are at least the infimum bid $\gamma$ (cf.\ \Cref{lem:pseudo_welfare}). {\bf \Cref{lem:translate:poa}} and \Cref{lem:translate} follow then.
\end{proof}
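As a quick numerical sanity check (not part of the formal argument), the Python sketch below verifies, on a few hypothetical welfare triples, that subtracting the same shift $\gamma$ from both {\textsf{Social Welfares}} can only (weakly) decrease the ratio, provided ${\sf OPT} \geq {\sf FPA} \geq \gamma$:

```python
# Sanity check for the Translate reduction (hypothetical numbers):
# shifting both Social Welfares down by the infimum bid gamma can only
# (weakly) decrease the ratio FPA/OPT, as long as OPT >= FPA >= gamma.

def poa(fpa: float, opt: float) -> float:
    """The PoA-style ratio FPA / OPT."""
    return fpa / opt

def translated_poa(fpa: float, opt: float, gamma: float) -> float:
    """The ratio after Translate shifts both welfares by -gamma."""
    return (fpa - gamma) / (opt - gamma)

# Hypothetical (FPA, OPT, gamma) triples with OPT >= FPA >= gamma > 0.
for fpa, opt, gamma in [(0.8, 1.0, 0.3), (2.5, 4.0, 1.0), (5.0, 5.0, 2.0)]:
    assert translated_poa(fpa, opt, gamma) <= poa(fpa, opt) + 1e-12
```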
\Cref{lem:translate_welfare} presents alternative {\textsf{Social Welfare}} formulas for {\em translated} pseudo instances, which supplement \Cref{lem:pseudo_welfare} and will be more convenient in several places.
\begin{lemma}[{\textsf{Social Welfares}}]
\label{lem:translate_welfare}
For a translated pseudo instance $(P,\, \bB \otimes L)$, the expected auction {\textsf{Social Welfare}} ${\sf FPA}(P,\, \bB \otimes L)$ is given by
\begin{align*}
{\sf FPA}(P,\, \bB \otimes L)\;
& ~=~ \Ex[P] \cdot \calB(0) ~+~ \int_{0}^{\lambda} \sum_{i \in [n]} \Big(\frac{\varphi_{i}(b)}{\varphi_{L}(b) - b} - \frac{\varphi_{i}(b) - \varphi_{L}(b)}{\varphi_{i}(b) - b}\Big) \cdot \calB(b) \cdot \d b~~~~~\;\; \\
& \phantom{~=~ \Ex[P] \cdot \calB(0)} ~-~ \int_{0}^{\lambda} (n - 1) \cdot \frac{\varphi_{L}(b)}{\varphi_{L}(b) - b} \cdot \calB(b) \cdot \d b.
\end{align*}
Regarding the underlying partition $\bLambda = [0 = \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} = \lambda]$,
represent the piecewise constant bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ as a bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$ for $(\sigma,\, j) \in ([n] \cup \{L\}) \times [0: m]$, i.e., $\varphi_{\sigma}(b) \equiv \phi_{\sigma,\, j}$ on each index-$j$ piece $b \in [\lambda_{j},\, \lambda_{j + 1})$.
Then the expected optimal {\textsf{Social Welfare}} ${\sf OPT}(P,\, \bB \otimes L)$ is given by
\begin{align*}
{\sf OPT}(P,\, \bB \otimes L)
~=~ \int_{0}^{+\infty} \Big(1 - P(v) \cdot \prod_{(\sigma,\, j) \,\in\, \bPhi} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v,
\hspace{2.27cm}
\end{align*}
where the probabilities $\omega_{\sigma,\, j} \eqdef 1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}$ can be reconstructed from the partition-table tuple $(\bLambda,\, \bPhi)$ following \Cref{lem:pseudo_distribution}.
\end{lemma}
\begin{proof}
We deduce from \Cref{lem:pseudo_distribution} that $\frac{B'_{i}(b)}{B_{i}(b)}
= \frac{\d}{\d b}\big(\ln B_{i}(b)\big)
= \big(\varphi_{L}(b) - b\big)^{-1} - \big(\varphi_{i}(b) - b\big)^{-1}$ for $i \in [n]$ and
$\frac{L'(b)}{L(b)}
= \frac{\d}{\d b}\big(\ln L(b)\big)
= \sum_{i \in [n]} \big(\varphi_{i}(b) - b\big)^{-1} - (n - 1) \cdot \big(\varphi_{L}(b) - b\big)^{-1}$.
Thus we can rewrite the auction {\textsf{Social Welfare}} formula from \Cref{lem:pseudo_welfare} (with \blackref{nil} $\gamma = 0$) as follows:
\begin{align*}
{\sf FPA}(P,\, \bB \otimes L)
& ~=~ \Ex[P] \cdot \calB(0) ~+~ \sum_{\sigma \in [n] \cup \{L\}} \Big(\int_{0}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big) \\
& ~=~ \Ex[P] \cdot \calB(0) ~+~ \sum_{i \in [n]} \Big(\int_{0}^{\lambda} \Big(\frac{\varphi_{i}(b)}{\varphi_{L}(b) - b} - \frac{\varphi_{i}(b)}{\varphi_{i}(b) - b}\Big) \cdot \calB(b) \cdot \d b\Big) \\
& \phantom{~=~ \Ex[P] \cdot \calB(0)} ~+~ \int_{0}^{\lambda} \Big(\sum_{i \in [n]} \frac{\varphi_{L}(b)}{\varphi_{i}(b) - b} - (n - 1) \cdot \frac{\varphi_{L}(b)}{\varphi_{L}(b) - b}\Big) \cdot \calB(b) \cdot \d b,
\end{align*}
which after being rearranged gives the formula in the statement of \Cref{lem:translate_welfare}.
Regarding the partition $\bLambda = [0 \equiv \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} \equiv \lambda]$ and the bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$, namely $\varphi_{\sigma}(b) \equiv \phi_{\sigma,\, j}$ on each piece $b \in [\lambda_{j},\, \lambda_{j + 1})$, for $(\sigma,\, j) \in ([n] \cup \{L\}) \times [0: m]$,
we can formulate the conditional probabilities $\omega_{\sigma,\, j} = 1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}$ through \Cref{lem:pseudo_distribution} as follows:
\begin{align*}
& \omega_{i,\, j}
~=~ 1 - \exp\Big(-\int_{\lambda_{j}}^{\lambda_{j + 1}}
\Big(\big(\phi_{L,\, j} - b\big)^{-1} - \big(\phi_{i,\, j} - b\big)^{-1}\Big) \cdot \d b\Big),
\qquad \forall i \in [n]. \\
& \omega_{L,\, j}
~=~ 1 - \exp\Big(-\int_{\lambda_{j}}^{\lambda_{j + 1}}
\Big(\sum_{i \in [n]} \big(\phi_{i,\, j} - b\big)^{-1} - (n - 1) \cdot \big(\phi_{L,\, j} - b\big)^{-1}\Big) \cdot \d b\Big). \hspace{1.92cm}
\end{align*}
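As a numerical aside (not needed for the proof), these exponent integrals admit closed forms via $\int_{a}^{c} (\phi - b)^{-1} \cdot \d b = \ln\frac{\phi - a}{\phi - c}$ (for $\phi > c$); the Python sketch below cross-checks the resulting $\omega_{i,\, j}$ against a midpoint quadrature on a hypothetical piece. The helper names are illustrative only:

```python
# The piece integrals inside omega_{i,j} admit a closed form, since
#   int_a^c (phi - b)^{-1} db = ln((phi - a)/(phi - c))   for phi > c.
# Below, a midpoint quadrature is compared against this closed form
# (hypothetical piece [a, c] and piece values phi_L <= phi_i).
import math

def piece_integral(phi: float, a: float, c: float) -> float:
    """Closed form of int_a^c (phi - b)^{-1} db, assuming phi > c."""
    return math.log((phi - a) / (phi - c))

def omega_real(phi_i: float, phi_L: float, a: float, c: float) -> float:
    """omega_{i,j} = 1 - exp(-(integral difference)), in closed form."""
    return 1.0 - math.exp(-(piece_integral(phi_L, a, c)
                            - piece_integral(phi_i, a, c)))

# Midpoint-rule cross-check on a hypothetical piece:
a, c, phi_i, phi_L = 0.0, 0.5, 3.0, 2.0
steps = 100_000
dx = (c - a) / steps
riemann = sum(
    (1.0 / (phi_L - b) - 1.0 / (phi_i - b)) * dx
    for b in (a + (k + 0.5) * dx for k in range(steps))
)
assert abs((1.0 - math.exp(-riemann)) - omega_real(phi_i, phi_L, a, c)) < 1e-8
```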
Following \Cref{lem:pseudo_welfare} (with \blackref{nil} $\gamma = \lambda_{0} \equiv 0$), the optimal {\textsf{Social Welfare}} is given by
${\sf OPT}(P,\, \bB \otimes L) = \int_{0}^{+\infty} \big(1 - P(v) \cdot \prod_{\sigma \in [n] \cup \{L\}} \Prx_{b_{\sigma} \sim B_{\sigma}}\big[\, (b_{\sigma} \leq \lambda_{0}) \vee (\varphi_{\sigma}(b_{\sigma}) \leq v) \,\big]\big) \cdot \d v$.
Because we are considering piecewise constant mappings, namely $\varphi_{\sigma}(b) \equiv \phi_{\sigma,\, j}$ on each piece $b \in [\lambda_{j},\, \lambda_{j + 1})$, we can deduce that for $\sigma \in [n] \cup \{L\}$ and any nonnegative value $v \geq 0$,
\begin{align*}
\Prx_{b_{\sigma} \sim B_{\sigma}}\big[\, (b_{\sigma} \leq \lambda_{0}) \vee (\varphi_{\sigma}(b_{\sigma}) \leq v) \,\big]
& ~=~ B_{\sigma}(\lambda_{0}) + \sum_{j \in [0: m]} (B_{\sigma}(\lambda_{j + 1}) - B_{\sigma}(\lambda_{j})) \cdot \indicator(\phi_{\sigma,\, j} \leq v)~\, \\
& ~=~ 1 - \sum_{j \in [0: m]} (B_{\sigma}(\lambda_{j + 1}) - B_{\sigma}(\lambda_{j})) \cdot \indicator(v < \phi_{\sigma,\, j}) \\
& ~=~ \prod_{j \in [0: m]} \Big(1 - \big(1 - \tfrac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\Big).
Here the first/third steps hold because (\blackref{monotonicity}) the mappings $\varphi_{\sigma}(b)$ are increasing over the bid support $b \in [0,\, \lambda]$, namely $\phi_{\sigma,\, 0} \leq \dots \leq \phi_{\sigma,\, j} \leq \dots \leq \phi_{\sigma,\, m}$. And the second step uses the boundary condition $B_{\sigma}(\lambda_{m + 1}) = 1$ at the supremum bid $\lambda_{m + 1} \equiv \lambda$.
Applying the above identities (together with $\omega_{\sigma,\, j} = 1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}$) to the optimal {\textsf{Social Welfare}} formula gives the alternative formula claimed in \Cref{lem:translate_welfare}.
This finishes the proof.
\end{proof}
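As a numerical aside, the sum-to-product rewriting in the last display can be sanity-checked on a toy grid. The sketch below uses hypothetical CDF values $B_{\sigma}(\lambda_{j})$ and increasing piece values; it is an illustration of the telescoping identity, not part of the proof:

```python
# Numerical check of the sum-vs-product identity used for OPT:
#   1 - sum_j (B[j+1] - B[j]) * [v < phi[j]]
#     = prod_j (1 - (1 - B[j]/B[j+1]) * [v < phi[j]]),
# valid because phi is increasing (the indicators hold on a suffix of j's).

B = [0.2, 0.35, 0.5, 0.8, 1.0]      # B[j] = B_sigma(lambda_j); B[-1] = 1
phi = [1.0, 1.5, 2.5, 4.0]          # phi[j] = phi_{sigma,j}, increasing

def via_sum(v: float) -> float:
    return 1.0 - sum(B[j + 1] - B[j] for j in range(len(phi)) if v < phi[j])

def via_product(v: float) -> float:
    out = 1.0
    for j in range(len(phi)):
        if v < phi[j]:
            # factor 1 - (1 - B[j]/B[j+1]) = B[j]/B[j+1]; telescopes
            out *= B[j] / B[j + 1]
    return out

for v in [0.0, 1.2, 2.0, 3.0, 5.0]:
    assert abs(via_sum(v) - via_product(v)) < 1e-12
```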
\subsection{{\text{\tt Layer}}: Rearranging the bid-to-value mappings}
\label{subsec:layer}
This subsection shows the {\blackref{alg:layer}} reduction (see \Cref{fig:alg:layer,fig:layer} for its description and a visual demonstration), which transforms a {\em translated} pseudo instance (\Cref{def:translate}) into a more special {\em layered} pseudo instance (\Cref{def:layer}). Recall from \Cref{lem:pseudo_mapping} that over the bid support $b \in [0,\, \lambda]$, the pseudo mapping is dominated $\varphi_{L}(b) \leq \varphi_{i}(b)$ by the real mappings $i \in [n]$.
\begin{definition}[Layered pseudo instances]
\label{def:layer}
A {\em translated} pseudo instance $(P,\, \bB \otimes L)$ from \Cref{def:translate} is further called {\em layered} when (\term[{\bf layeredness}]{layeredness}) the {\em real} bid-to-value mappings $\{\varphi_{i}\}_{i \in [n]}$ are ordered $\varphi_{1}(b) \geq \dots \geq \varphi_{i}(b) \geq \dots \geq \varphi_{n}(b) \geq \varphi_{L}(b)$ over the bid support $b \in [0,\, \lambda]$.
\end{definition}
\Cref{lem:layer} shows performance guarantees of the {\blackref{alg:layer}} reduction.
\begin{lemma}[{\text{\tt Layer}}; \Cref{fig:alg:layer}]
\label{lem:layer}
Under reduction $(P,\, \tilde{\bB} \otimes \tilde{L}) \gets \text{\tt Layer}(P,\, \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:layer:property}
The output $(P,\, \tilde{\bB} \otimes \tilde{L})$ is a layered pseudo instance; the conditional value $P$ is unmodified.
\item\label{lem:layer:poa}
The bound stays the same: ${\sf PoA}(P,\, \tilde{\bB} \otimes \tilde{L}) = {\sf PoA}(P,\, \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Layer}]{alg:layer}(P,\, \bB \otimes L)$
\begin{flushleft}
{\bf Input:}
A (generic) {\em translated} pseudo instance $(P,\, \bB \otimes L)$.
\hfill \Cref{def:translate}
\vspace{.05in}
{\bf Output:}
A {\em layered} pseudo instance $(P,\, \tilde{\bB} \otimes \tilde{L})$.
\hfill \Cref{def:layer}
\begin{enumerate}
\item\label{alg:layer:real_mapping}
Define the real mappings $\tilde{\varphi}_{i} \equiv \varphi_{(i)}$ as the pointwise {\em ordering} of input real mappings $\{\varphi_{i}\}_{i \in [n]}$, namely $\varphi_{(1)}(b) \geq \dots \geq \varphi_{(i)}(b) \geq \dots \geq \varphi_{(n)}(b)$, for $b \in [0,\, \lambda]$.
\item\label{alg:layer:pseudo_mapping}
Reuse the same pseudo mapping $\tilde{\varphi}_{L} \equiv \varphi_{L}$. (Given Line~\ref{alg:layer:real_mapping} and \Cref{lem:pseudo_mapping}, this remains the {\em dominated} mapping $\min(\tilde{\bvarphi}) \equiv \min(\tilde{\bvarphi}_{-L},\, \tilde{\varphi}_{L}) \equiv \min(\bvarphi_{-L},\, \varphi_{L}) \equiv \varphi_{L} \equiv \tilde{\varphi}_{L}$.)
\item\label{alg:layer:distribution}
{\bf Return} $(P,\, \tilde{\bB} \otimes \tilde{L})$, namely only the bid distributions $\tilde{\bB} \otimes \tilde{L} = \{\tilde{B}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ are modified and (\Cref{lem:pseudo_distribution}) are reconstructed from the {\em layered} mappings $\tilde{\bvarphi}$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Layer}} reduction.}
\label{fig:alg:layer}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:layer:old}
The input {\em increasing} mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.]{
\includegraphics[width = .49\textwidth]
{layer_old.png}}
\hfill
\subfloat[\label{fig:layer:new}
The output {\em layered} mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$.]{
\includegraphics[width = .49\textwidth]
{layer_new.png}}
\caption{Demonstration for the {\text{\tt Layer}} reduction (\Cref{fig:alg:layer}), which indeed works for all VALID pseudo instances $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$ (\Cref{def:pseudo}), not only the {\em translated} ones (\Cref{def:translate}).
\label{fig:layer}}
\end{figure}
\clearpage}
\begin{proof}
Let us verify {\bf \Cref{lem:layer:property,lem:layer:poa}} one by one; see \Cref{fig:layer} for a visual aid.
\vspace{.1in}
\noindent
{\bf \Cref{lem:layer:property}.}
The {\blackref{alg:layer}} reduction (Line~\ref{alg:layer:real_mapping}) pointwise/piecewise reorders $\{\tilde{\varphi}_{i} \equiv \varphi_{(i)}\}_{i \in [n]}$ all of the {\em real} mappings $\{\varphi_{i}\}_{i \in [n]}$ over the bid support $b \in [0,\, \lambda]$, namely $\varphi_{(1)}(b) \geq \dots \geq \varphi_{(i)}(b) \geq \dots \geq \varphi_{(n)}(b)$,
and (Line~\ref{alg:layer:pseudo_mapping}) preserves the {\em pseudo} mapping $\tilde{\varphi}_{L} \equiv \varphi_{L}$, which keeps being the {\em dominated} mapping $\min(\tilde{\bvarphi}) \equiv \min(\tilde{\bvarphi}_{-L},\, \tilde{\varphi}_{L}) \equiv \min(\bvarphi_{-L},\, \varphi_{L}) \equiv \varphi_{L} \equiv \tilde{\varphi}_{L}$ provided Line~\ref{alg:layer:real_mapping} and \Cref{lem:pseudo_mapping}.
Clearly, such reordered mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ satisfy \blackref{layeredness}, \blackref{nil}, \blackref{finiteness}, and \blackref{piecewise} (\Cref{def:layer,def:translate,def:discretize})
and are increasing over the bid support $b \in [0,\, \lambda]$ (\blackref{monotonicity}; \Cref{def:pseudo})
\`{a} la the input mappings $\bvarphi$, which can be easily seen with the help of \Cref{fig:layer}.
The unmodified conditional value $P$ stays supported within the extended range $[0,\, \tilde{\varphi}_{1}(0)] = [0,\, \varphi_{(1)}(0)] = [0,\, \max(\bvarphi(\zeros))] \supseteq [0,\, \varphi_{1}(0)] \supseteq \supp(P)$ (\blackref{boundedness}).
It remains to show that (Line~\ref{alg:layer:distribution}) the reconstructed bid distributions $\tilde{\bB} \otimes \tilde{L}$ are well defined, i.e.,
(\Cref{lem:pseudo_distribution}) the reordered mappings $\tilde{\bvarphi} = \{\tilde{\varphi}_{\sigma}\}_{\sigma \in [n] \cup \{L\}}$ satisfy the conditions in \Cref{lem:pseudo_mapping}.
This is obvious in that (i)~those conditions are symmetric about the real mappings $\{\tilde{\varphi}_{i} \equiv \varphi_{(i)}\}_{i \in [n]}$ and (ii)~the pseudo mapping is unmodified $\tilde{\varphi}_{L} \equiv \varphi_{L}$.
This finishes the proof of {\bf \Cref{lem:layer:property}}.
\vspace{.1in}
\noindent
{\bf \Cref{lem:layer:poa}.}
Consider the auction/optimal {\textsf{Social Welfare}} formulas from \Cref{lem:translate_welfare}:
\begin{align*}
& {\sf FPA}(P,\, \bB \otimes L)\;
~=~ \Ex[P] \cdot \calB(0) ~+~ \int_{0}^{\lambda} \sum_{i \in [n]} \Big(\frac{\varphi_{i}(b)}{\varphi_{L}(b) - b} - \frac{\varphi_{i}(b) - \varphi_{L}(b)}{\varphi_{i}(b) - b}\Big) \cdot \calB(b) \cdot \d b \\
& \phantom{{\sf FPA}(P,\, \bB \otimes L)\; ~=~ \Ex[P] \cdot \calB(0)} ~-~ \int_{0}^{\lambda} (n - 1) \cdot \frac{\varphi_{L}(b)}{\varphi_{L}(b) - b} \cdot \calB(b) \cdot \d b, \\
& {\sf OPT}(P,\, \bB \otimes L)
~=~ \int_{0}^{+\infty} \Big(1 - P(v) \cdot \prod_{(\sigma,\, j) \,\in\, \bPhi} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v.
\end{align*}
The probabilities $\omega_{\sigma,\, j}$ are given by the partition $\bLambda = [0 = \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} = \lambda]$ and the bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$ for $(\sigma,\, j) \in ([n] \cup \{L\}) \times [0: m]$.
The auction {\textsf{Social Welfare}} stays the same ${\sf FPA}(P,\, \tilde{\bB} \otimes \tilde{L}) = {\sf FPA}(P,\, \bB \otimes L)$. Concretely, (i)~the pseudo mapping $\varphi_{L}$, the first-order bid distribution (\Cref{lem:pseudo_distribution}) $\calB(b) = \exp\big(-\int_{b}^{\lambda} (\varphi_{L}(t) - t)^{-1} \cdot \d t\big)$, and the conditional value $P$ are invariant, while (ii)~the auction {\textsf{Social Welfare}} formula is {\em symmetric} about the reordered {\em real} mappings $\{\tilde{\varphi}_{i} \equiv \varphi_{(i)}\}_{i \in [n]}$.
The optimal {\textsf{Social Welfare}} likewise stays the same ${\sf OPT}(P,\, \tilde{\bB} \otimes \tilde{L}) = {\sf OPT}(P,\, \bB \otimes L)$, following the same arguments (see the proof of \Cref{lem:translate_welfare} for more details). Basically, (i)~the {\em magnitudes} of the values $\phi_{\sigma,\, j}$ and the probabilities $\omega_{\sigma,\, j}$ for $(\sigma,\, j) \in ([n] \cup \{L\}) \times [0: m]$ are preserved; the reduction merely reorders the {\em real} rows {\em column-wise} ($j \in [0: m]$) such that $\tilde{\phi}_{1,\, j} \geq \dots \geq \tilde{\phi}_{i,\, j} \geq \dots \geq \tilde{\phi}_{n,\, j}$, while
(ii)~the optimal {\textsf{Social Welfare}} formula accounts for the {\em overall} effect of all entries $(\sigma,\, j) \in ([n] \cup \{L\}) \times [0: m]$, irrelevant to the reordering.
To conclude, the {{\sf PoA}}-bound stays the same and {\bf \Cref{lem:layer:poa}} follows. This finishes the proof.
\end{proof}
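The symmetry argument at the heart of {\text{\tt Layer}} can be illustrated numerically: the per-bid integrand of the auction {\textsf{Social Welfare}} formula is a symmetric function of the real mappings, hence invariant under their pointwise reordering. A minimal Python sketch with hypothetical values (the residual $-(n - 1) \cdot \varphi_{L}(b) / (\varphi_{L}(b) - b)$ term is omitted, since it does not involve the real mappings):

```python
# The auction-welfare integrand from the lemma is symmetric in the real
# mappings phi_1, ..., phi_n, so the pointwise reordering performed by
# the Layer reduction leaves it unchanged.  Hypothetical values at one
# fixed bid b, with the pseudo mapping dominated: phi_L <= phi_i.

def fpa_integrand(phis, phi_L, b):
    """sum_i ( phi_i/(phi_L - b) - (phi_i - phi_L)/(phi_i - b) )."""
    return sum(p / (phi_L - b) - (p - phi_L) / (p - b) for p in phis)

b, phi_L = 0.5, 1.0
phis = [3.0, 1.5, 2.0]               # unordered real mappings at bid b
layered = sorted(phis, reverse=True) # what the Layer reduction produces

assert abs(fpa_integrand(phis, phi_L, b)
           - fpa_integrand(layered, phi_L, b)) < 1e-12
```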
\subsection{{\text{\tt Polarize}}: Derandomizing the conditional values under zero bids}
\label{subsec:polarize}
This subsection shows the {\blackref{alg:polarize}} reduction (see \Cref{fig:alg:polarize,fig:polarize} for its description and a visual demonstration), which transforms a {\em layered} pseudo instance (\Cref{def:layer}) into a more special pseudo instance (\Cref{def:ceiling_floor}): EITHER a {\em floor} pseudo instance OR a {\em ceiling} pseudo instance. Recall that (\blackref{nil} and \blackref{boundedness}) the monopolist $B_{1}$'s conditional value $P$ ranges between $\supp(P) \subseteq [0,\, \varphi_{1}(0)]$.
\begin{definition}[Floor/Ceiling pseudo instances]
\label{def:ceiling_floor}
A {\em layered} pseudo instance $(P,\, \bB \otimes L)$ from \Cref{def:layer} is further called {\em floor}/{\em ceiling} when it satisfies either \blackref{floorness} or \blackref{ceilingness}. Namely, in either case, the conditional value $P$ is decided by the bid distributions $\bB \otimes L$.
\begin{itemize}
\item \term[{\bf floorness}]{floorness}{\bf :}
The monopolist $B_{1}$'s conditional value always takes the {\em nil value} $P \equiv 0$. \\
Therefore, this pseudo instance can be redenoted as $H^{\downarrow} \otimes \bB_{-1} \otimes L = (P^{\downarrow},\, \bB \otimes L)$, with the monopolist $H^{\downarrow} = (P^{\downarrow} \equiv 0,\, B_{1})$. Denote by $\mathbb{B}_{\sf valid}^{\downarrow}$ the space of such pseudo instances.
\item \term[{\bf ceilingness}]{ceilingness}{\bf :}
The monopolist $B_{1}$'s conditional value always takes the {\em ceiling value} $P \equiv \varphi_{1}(0)$. \\
Therefore, this pseudo instance can be redenoted as $H^{\uparrow} \otimes \bB_{-1} \otimes L = (P^{\uparrow},\, \bB \otimes L)$, with the monopolist $H^{\uparrow} = (P^{\uparrow} \equiv \varphi_{1}(0),\, B_{1})$. Denote by $\mathbb{B}_{\sf valid}^{\uparrow}$ the space of such pseudo instances.
\end{itemize}
\end{definition}
\Cref{lem:polarize} presents performance guarantees of the {\blackref{alg:polarize}} reduction.
\begin{lemma}[{\text{\tt Polarize}}; \Cref{fig:alg:polarize}]
\label{lem:polarize}
Under reduction $H \otimes \bB_{-1} \otimes L \gets \text{\tt Polarize}(P,\, \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:polarize:property}
The output $H \otimes \bB_{-1} \otimes L$ is EITHER a floor pseudo instance OR a ceiling pseudo instance; the bid distributions $\bB \otimes L$ are unmodified.
\item\label{lem:polarize:poa}
A (weakly) worse bound is yielded ${\sf PoA}(H \otimes \bB_{-1} \otimes L) \leq {\sf PoA}(P,\, \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Polarize}]{alg:polarize}(P,\, \bB \otimes L)$
\begin{flushleft}
{\bf Input:}
A (generic) {\em layered} pseudo instance $(P,\, \bB \otimes L)$.
\hfill \Cref{def:layer}
\vspace{.05in}
{\bf Output:}
A {\em floor}/{\em ceiling} pseudo instance $H \otimes \bB_{-1} \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$.
\hfill \Cref{def:ceiling_floor}
\begin{enumerate}
\item\label{alg:polarize:floor}
Define the \term[\text{\em floor}]{floor} candidate $H^{\downarrow} \otimes \bB_{-1} \otimes L \eqdef (P^{\downarrow} \equiv 0,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}^{\downarrow}$.
\item\label{alg:polarize:ceiling}
Define the \term[\text{\em ceiling}]{ceiling} candidate $H^{\uparrow} \otimes \bB_{-1} \otimes L \eqdef (P^{\uparrow} \equiv \varphi_{1}(0),\, \bB \otimes L) \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\item\label{alg:polarize:return}
{\bf Return} the {{\sf PoA}}-worse candidate $\argmin \big\{{\sf PoA}(\blackref{floor}),~ {\sf PoA}(\blackref{ceiling})\big\}$; \\
\white{\bf Return} breaking ties in favor of the {\em ceiling} candidate $H^{\uparrow} \otimes \bB_{-1} \otimes L$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Polarize}} reduction.}
\label{fig:alg:polarize}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\label{fig:polarize:input}
The {\em input} monopolist $(P,\, B_{1})$]{
\includegraphics[width = .49\textwidth]
{polarize_input.png}} \\
\vspace{1cm}
\subfloat[\label{fig:polarize:floor}
The {\em floor} monopolist $H^{\downarrow} = (P^{\downarrow} \equiv 0,\, B_{1})$]{
\includegraphics[width = .49\textwidth]
{polarize_floor.png}}
\hfill
\subfloat[\label{fig:polarize:ceiling}
The {\em ceiling} monopolist $H^{\uparrow} = (P^{\uparrow} \equiv \varphi_{1}(0),\, B_{1})$]{
\includegraphics[width = .49\textwidth]
{polarize_ceiling.png}}
\caption{Demonstration for the {\text{\tt Polarize}} reduction (\Cref{fig:alg:polarize}), where blue/red curves denote bid/value CDF's, respectively.
\label{fig:polarize}}
\end{figure}
\clearpage}
\begin{proof}
Let us verify {\bf \Cref{lem:polarize:property,lem:polarize:poa}} one by one.
\vspace{.1in}
\noindent
{\bf \Cref{lem:polarize:property}.}
By construction (Lines~\ref{alg:polarize:floor} to \ref{alg:polarize:return}): The output $H \otimes \bB_{-1} \otimes L$ is the {{\sf PoA}}-worse one of the two candidates, the \blackref{floor} candidate $H^{\downarrow} \otimes \bB_{-1} \otimes L$ versus the \blackref{ceiling} candidate $H^{\uparrow} \otimes \bB_{-1} \otimes L$. Neither candidate modifies the bid distributions $\bB \otimes L$ or the mappings $\bvarphi$, thus preserving \blackref{layeredness}, \blackref{nil}, \blackref{finiteness}, \blackref{piecewise}, and \blackref{monotonicity} (\Cref{def:layer,def:translate,def:discretize,def:pseudo}).
Further, the \blackref{floor} candidate $H^{\downarrow} \otimes \bB_{-1} \otimes L$ promises an {\em invariant} nil conditional value $P^{\downarrow} \equiv 0 \in [0,\, \varphi_{1}(0)]$ (\blackref{floorness}/\blackref{boundedness}) and the \blackref{ceiling} candidate $H^{\uparrow} \otimes \bB_{-1} \otimes L$ promises an {\em invariant} ceiling value $P^{\uparrow} \equiv \varphi_{1}(0) \in [0,\, \varphi_{1}(0)]$ (\blackref{ceilingness}/\blackref{boundedness}). So these two candidates are \blackref{floor}/\blackref{ceiling} pseudo instances, respectively. {\bf \Cref{lem:polarize:property}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:polarize:poa}.}
We claim that either or both of the two candidates $H^{\downarrow} \otimes \bB_{-1} \otimes L$ and $H^{\uparrow} \otimes \bB_{-1} \otimes L$ has a (weakly) worse {{\sf PoA}} bound than the input $(P,\, \bB \otimes L)$.
Notice that each candidate only modifies the conditional value distribution $P$, to EITHER the \blackref{floor} value distribution $P^{\downarrow}(v) \equiv \indicator(v \geq 0)$
OR the \blackref{ceiling} value distribution $P^{\uparrow}(v) \equiv \indicator(v \geq \varphi_{1}(0))$.
\vspace{.1in}
\noindent
{\bf Auction {\textsf{Social Welfares}}.}
Following \Cref{lem:pseudo_welfare} (with \blackref{nil} $\gamma = 0$), the input auction {\textsf{Social Welfare}}
${\sf FPA}(\text{\em input}) \equiv {\sf FPA}(P,\, \bB \otimes L) = \Ex[P] \cdot \calB(0) + \sum_{\sigma \in [n] \cup \{L\}} \big(\int_{0}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\big)$,
where $\calB(b) = \prod_{\sigma \in [n] \cup \{L\}} B_{\sigma}(b)$.
The summation term is {\em invariant}, since the bid distributions $\bB \otimes L$ and the mappings $\bvarphi$ are unmodified.
Therefore, we can formulate the \blackref{floor} counterpart as ${\sf FPA}(\blackref{floor}) = {\sf FPA}(\text{\em input}) - \Delta_{{\sf FPA}}^{\downarrow}$, using\footnote{Recall that the expectation of a {\em nonnegative} distribution $F$ can be written as $\E[F] = \int_{0}^{+\infty} \big(1 - F(v)\big) \cdot \d v$.}
\begin{align*}
\Delta_{{\sf FPA}}^{\downarrow}
= \big(\Ex[P] - \Ex[P^{\downarrow}]\big) \cdot \calB(0)
= \calB(0) \cdot \int_{0}^{\varphi_{1}(0)} \big(1 - P(v)\big) \cdot \d v,
\hspace{1.72cm}
&& \parbox{2.45cm}{\blackref{boundedness} \\ $0 \leq P \leq \varphi_{1}(0)$}
\end{align*}
and formulate the \blackref{ceiling} counterpart as ${\sf FPA}(\blackref{ceiling}) = {\sf FPA}(\text{\em input}) + \Delta_{{\sf FPA}}^{\uparrow}$, using
\begin{align*}
\Delta_{{\sf FPA}}^{\uparrow}
= \big(\Ex[P^{\uparrow}] - \Ex[P]\big) \cdot \calB(0)
= \calB(0) \cdot \int_{0}^{\varphi_{1}(0)} P(v) \cdot \d v.
\hspace{2.72cm}
&& \parbox{2.45cm}{\blackref{boundedness}}
\end{align*}
\noindent
{\bf Optimal {\textsf{Social Welfares}}.}
Following \Cref{lem:translate_welfare}, the input optimal {\textsf{Social Welfare}}
${\sf OPT}(\text{\em input}) \equiv {\sf OPT}(P,\, \bB \otimes L) = \int_{0}^{+\infty} \big(1 - P(v) \cdot \calV(v)\big) \cdot \d v$,
where $\calV(v)$ is determined by the unmodified bid distributions $\bB \otimes L$ (\Cref{lem:pseudo_welfare}) and thus is {\em invariant}.
Accordingly, we can formulate the \blackref{floor} counterpart as ${\sf OPT}(\blackref{floor}) = {\sf OPT}(\text{\em input}) - \Delta_{{\sf OPT}}^{\downarrow}$, using
\begin{align*}
\Delta_{{\sf OPT}}^{\downarrow}
= \int_{0}^{+\infty} \big(P^{\downarrow}(v) - P(v)\big) \cdot \calV(v) \cdot \d v
= \int_{0}^{\varphi_{1}(0)} \big(1 - P(v)\big) \cdot \calV(v) \cdot \d v,
&& \parbox{2.45cm}{\blackref{boundedness}}
\end{align*}
and formulate the \blackref{ceiling} counterpart as ${\sf OPT}(\blackref{ceiling}) = {\sf OPT}(\text{\em input}) + \Delta_{{\sf OPT}}^{\uparrow}$, using
\begin{align*}
\Delta_{{\sf OPT}}^{\uparrow}
= \int_{0}^{+\infty} \big(P(v) - P^{\uparrow}(v)\big) \cdot \calV(v) \cdot \d v
= \int_{0}^{\varphi_{1}(0)} P(v) \cdot \calV(v) \cdot \d v.
\hspace{1cm}
&& \parbox{2.45cm}{\blackref{boundedness}}
\end{align*}
Notice that all four terms $\Delta_{{\sf FPA}}^{\downarrow}$, $\Delta_{{\sf FPA}}^{\uparrow}$, $\Delta_{{\sf OPT}}^{\downarrow}$, $\Delta_{{\sf OPT}}^{\uparrow}$ are nonnegative. The remaining proof relies on {\bf \Cref{fact:polarize}}. (For brevity, we ignore the ``$0 / 0$'' issue, which can happen only if EITHER $P \equiv P^{\downarrow}$ OR $P \equiv P^{\uparrow}$, making {\bf \Cref{lem:polarize:poa}} vacuously true.)
\setcounter{fact}{0}
\begin{fact}
\label{fact:polarize}
$\Delta_{{\sf FPA}}^{\downarrow} / \Delta_{{\sf OPT}}^{\downarrow} \geq \Delta_{{\sf FPA}}^{\uparrow} / \Delta_{{\sf OPT}}^{\uparrow}$ or equivalently, $\Delta_{{\sf OPT}}^{\downarrow} / \Delta_{{\sf FPA}}^{\downarrow} \leq \Delta_{{\sf OPT}}^{\uparrow} / \Delta_{{\sf FPA}}^{\uparrow}$.
\end{fact}
\begin{proof}
It is more convenient to verify the second inequality: Using the (normalized) antiderivatives
$\bar{\mathfrak{P}}(v) \eqdef (1 / \Delta_{{\sf FPA}}^{\downarrow}) \cdot \calB(0) \cdot \int_{0}^{v} (1 - P(t)) \cdot \d t$ AND
$\mathfrak{P}(v) \eqdef (1 / \Delta_{{\sf FPA}}^{\uparrow}) \cdot \calB(0) \cdot \int_{0}^{v} P(t) \cdot \d t$, we can rewrite
$\LHS \equiv \calB(0) \cdot \Delta_{{\sf OPT}}^{\downarrow} \big/ \Delta_{{\sf FPA}}^{\downarrow}
= \int_{0}^{\varphi_{1}(0)} \bar{\mathfrak{P}}'(v) \cdot \calV(v) \cdot \d v$ AND
$\RHS \equiv \calB(0) \cdot \Delta_{{\sf OPT}}^{\uparrow} / \Delta_{{\sf FPA}}^{\uparrow} = \int_{0}^{\varphi_{1}(0)} \mathfrak{P}'(v) \cdot \calV(v) \cdot \d v$; the common positive factor $\calB(0)$ does not affect the comparison.
Using integration by parts, we can deduce {\bf \Cref{fact:polarize}} as follows:
\begin{align}
\LHS
& ~=~ \Big(\bar{\mathfrak{P}}(v) \cdot \calV(v)\Big) \Bigmid_{v = 0}^{\varphi_{1}(0)}
~-~ \int_{0}^{\varphi_{1}(0)} \bar{\mathfrak{P}}(v) \cdot \calV'(v) \cdot \d v
\nonumber \\
& ~=~ \Big(\mathfrak{P}(v) \cdot \calV(v)\Big) \Bigmid_{v = 0}^{\varphi_{1}(0)}
~-~ \int_{0}^{\varphi_{1}(0)} \bar{\mathfrak{P}}(v) \cdot \calV'(v) \cdot \d v
\label{eq:polarize:2}\tag{P1} \\
& ~\leq~ \Big(\mathfrak{P}(v) \cdot \calV(v)\Big) \Bigmid_{v = 0}^{\varphi_{1}(0)}
~-~ \int_{0}^{\varphi_{1}(0)} \mathfrak{P}(v) \cdot \calV'(v) \cdot \d v
~=~ \RHS.
\label{eq:polarize:3}\tag{P2}
\end{align}
\eqref{eq:polarize:2}: For the first term, by construction $\bar{\mathfrak{P}}(0) = \mathfrak{P}(0) = 0$ and $\bar{\mathfrak{P}}(\varphi_{1}(0)) = \mathfrak{P}(\varphi_{1}(0)) = 1$. \\
\eqref{eq:polarize:3}: For the second term, by construction $\bar{\mathfrak{P}}(v) \geq \mathfrak{P}(v) \geq 0$ and the PDF $\calV'(v) \geq 0$.
Particularly, the \blackref{floor} antiderivative $\bar{\mathfrak{P}}(v)$ is concave $\bar{\mathfrak{P}}''(v) = -(1 / \Delta_{{\sf FPA}}^{\downarrow}) \cdot \calB(0) \cdot P'(v) \leq 0$, whereas the \blackref{ceiling} antiderivative $\mathfrak{P}(v)$ is convex $\mathfrak{P}''(v) = (1 / \Delta_{{\sf FPA}}^{\uparrow}) \cdot \calB(0) \cdot P'(v) \geq 0$.
\end{proof}
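The endpoint-matching argument in \eqref{eq:polarize:3} boils down to a standard fact: a concave function dominates a convex one whenever the two agree at both endpoints of an interval. A toy Python illustration (the particular functions are hypothetical stand-ins, not the actual antiderivatives):

```python
# A concave function and a convex function that agree at both endpoints
# of [0, 1] satisfy concave >= convex pointwise on the whole interval,
# mirroring why the floor antiderivative dominates the ceiling one.
import math

def concave(v: float) -> float:     # stand-in for the floor antiderivative
    return math.sqrt(v)             # concave; 0 at v = 0, 1 at v = 1

def convex(v: float) -> float:      # stand-in for the ceiling antiderivative
    return v * v                    # convex; 0 at v = 0, 1 at v = 1

for k in range(101):
    v = k / 100
    assert concave(v) >= convex(v)
```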
Assume the opposite to {\bf \Cref{lem:polarize:poa}}: The \blackref{floor}/\blackref{ceiling} candidates $H^{\downarrow} \otimes \bB_{-1} \otimes L$ and $H^{\uparrow} \otimes \bB_{-1} \otimes L$ EACH yield a strictly larger bound than the input $(P,\, \bB \otimes L)$. That is,
\begin{align*}
& {\sf PoA}(\blackref{floor}) ~~\,\;>~ {\sf PoA}(\text{\em input})
&& \iff &&
\frac{{\sf FPA}(\text{\em input}) - \Delta_{{\sf FPA}}^{\downarrow}}{{\sf OPT}(\text{\em input}) - \Delta_{{\sf OPT}}^{\downarrow}}
~>~ \frac{{\sf FPA}(\text{\em input})}{{\sf OPT}(\text{\em input})}, \\
& {\sf PoA}(\blackref{ceiling}) ~>~ {\sf PoA}(\text{\em input})
&& \iff &&
\frac{{\sf FPA}(\text{\em input}) + \Delta_{{\sf FPA}}^{\uparrow}}{{\sf OPT}(\text{\em input}) + \Delta_{{\sf OPT}}^{\uparrow}}
~>~ \frac{{\sf FPA}(\text{\em input})}{{\sf OPT}(\text{\em input})}.
\end{align*}
Rearranging both inequalities gives $\Delta_{{\sf FPA}}^{\downarrow} / \Delta_{{\sf OPT}}^{\downarrow}
< {\sf PoA}(\text{\em input})
< \Delta_{{\sf FPA}}^{\uparrow} / \Delta_{{\sf OPT}}^{\uparrow}$, which contradicts {\bf \Cref{fact:polarize}}.
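To make the rearrangement explicit for, say, the \blackref{floor} candidate (the \blackref{ceiling} candidate is symmetric), note that both denominators are positive in the nontrivial case, so cross-multiplication preserves the strict inequality:
\begin{align*}
\frac{{\sf FPA}(\text{\em input}) - \Delta_{{\sf FPA}}^{\downarrow}}{{\sf OPT}(\text{\em input}) - \Delta_{{\sf OPT}}^{\downarrow}}
~>~ \frac{{\sf FPA}(\text{\em input})}{{\sf OPT}(\text{\em input})}
& ~\iff~
\Delta_{{\sf FPA}}^{\downarrow} \cdot {\sf OPT}(\text{\em input})
~<~ \Delta_{{\sf OPT}}^{\downarrow} \cdot {\sf FPA}(\text{\em input}) \\
& ~\iff~
\Delta_{{\sf FPA}}^{\downarrow} \big/ \Delta_{{\sf OPT}}^{\downarrow}
~<~ {\sf PoA}(\text{\em input}).
\end{align*}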
This refutes our assumption: At least one of the two candidates $H^{\downarrow} \otimes \bB_{-1} \otimes L$ and $H^{\uparrow} \otimes \bB_{-1} \otimes L$ yields a weakly worse bound. {\bf \Cref{lem:polarize:poa}} and then \Cref{lem:polarize} follow.
\end{proof}
\subsection{The floor/ceiling pseudo instances}
\label{subsec:preprocess}
Below we summarize our discussions throughout \Cref{sec:preprocessing} on how to preprocess valid pseudo instances (\Cref{def:pseudo}).
For ease of reference, we rephrase \Cref{cor:pseudo_instance,lem:discretize,lem:translate,lem:layer,lem:polarize} (with minor modifications).
\begin{restate}[\Cref{cor:pseudo_instance}]
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is at least
\begin{align*}
{\sf PoA} ~\geq~ \inf \bigg\{\, \frac{{\sf FPA}(P,\, \bB \otimes L)}{{\sf OPT}(P,\, \bB \otimes L)} \,\biggmid\, (P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid} ~\text{\em and}~ {\sf OPT}(P,\, \bB \otimes L) < +\infty \,\bigg\}.
\end{align*}
\end{restate}
\begin{restate}[\Cref{lem:discretize}]
Given a valid pseudo instance $(P,\, \bB \otimes L)$ that has bounded expected auction/optimal {\textsf{Social Welfares}} ${\sf FPA}(P,\, \bB \otimes L) \leq {\sf OPT}(P,\, \bB \otimes L) < +\infty$, for any error $\epsilon \in (0,\, 1)$, there is a discretized pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ such that $|{\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) - {\sf PoA}(P,\, \bB \otimes L)| \leq \epsilon$.
\end{restate}
\begin{restate}[\Cref{lem:translate}]
Given a discretized pseudo instance $(P,\, \bB \otimes L)$, there is a translated pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ such that ${\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(P,\, \bB \otimes L)$.
\end{restate}
\begin{restate}[\Cref{lem:layer}]
Given a translated pseudo instance $(P,\, \bB \otimes L)$, there is a layered pseudo instance $(\tilde{P},\, \tilde{\bB} \otimes \tilde{L})$ such that ${\sf PoA}(\tilde{P},\, \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(P,\, \bB \otimes L)$.
\end{restate}
\begin{restate}[\Cref{lem:polarize}]
Given a layered pseudo instance $(P,\, \bB \otimes L)$, there is a floor/ceiling pseudo instance $\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ such that ${\sf PoA}(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(P,\, \bB \otimes L)$.
\end{restate}
\Cref{cor:preprocess} is a refinement of \Cref{cor:pseudo_instance}: Towards a lower bound on the {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}}, we can restrict our attention to the subspace $(\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ of {\em floor}/{\em ceiling} pseudo instances (\Cref{def:ceiling_floor}).
\begin{corollary}[Lower bound]
\label{cor:preprocess}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is at least
\begin{align*}
{\sf PoA} ~\geq~ \inf \bigg\{\, \frac{{\sf FPA}(H \otimes \bB \otimes L)}{{\sf OPT}(H \otimes \bB \otimes L)} \,\biggmid\, H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow}) ~\text{\em and}~ {\sf OPT}(H \otimes \bB \otimes L) < +\infty \,\bigg\}.
\end{align*}
\end{corollary}
\begin{proof}
For a valid pseudo instance $(P,\, \bB \otimes L) \in \mathbb{B}_{\sf valid}$ considered in \Cref{cor:pseudo_instance}, the {\textsf{Social Welfares}} are bounded ${\sf FPA}(P,\, \bB \otimes L) \leq {\sf OPT}(P,\, \bB \otimes L) < +\infty$.
For any error $\epsilon \in (0,\, 1)$, (\Cref{lem:discretize,lem:translate,lem:layer,lem:polarize}) there is a {\em floor}/{\em ceiling} pseudo instance $\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ such that
\[
{\sf PoA}(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L})
~\leq~ {\sf PoA}(P,\, \bB \otimes L) + \epsilon.
\]
Since the error can be arbitrarily small $\epsilon \to 0^{+}$, it has no effect on the {\em infimum} {{\sf PoA}} bound.
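Spelled out, the limit argument runs as follows. Write ${\sf PoA}^{*}$ for the {\em infimum} in \Cref{cor:preprocess}; then the above inequality gives
\[
{\sf PoA}^{*}
~\leq~ {\sf PoA}(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L})
~\leq~ {\sf PoA}(P,\, \bB \otimes L) + \epsilon,
\qquad \forall \epsilon \in (0,\, 1).
\]
Letting $\epsilon \to 0^{+}$ yields ${\sf PoA}^{*} \leq {\sf PoA}(P,\, \bB \otimes L)$ for every considered valid pseudo instance, and \Cref{cor:pseudo_instance} then gives ${\sf PoA} \geq {\sf PoA}^{*}$.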
\end{proof}
\section{Towards the Worst Case Pseudo Instances}
\label{sec:reduction}
Following \Cref{cor:preprocess}, towards a lower bound on the {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}}, we can concentrate on {\em floor}/{\em ceiling} pseudo instances $\in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$.
This section will characterize the worst cases in this search space. More concretely:
\begin{itemize}
\item \Cref{subsec:tech_prelim} presents some prerequisite notions, especially the concept of {\em twin ceiling} pseudo instances $\mathbb{B}_{\sf twin}^{\uparrow}$ (\Cref{def:twin}), which form a subset of the space of {\em ceiling} pseudo instances $\mathbb{B}_{\sf twin}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$ and turn out to be the worst cases.
\item Given an undesirable pseudo instance $\in (\mathbb{B}_{\sf valid}^{\downarrow} \cup (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow}))$, we will see that there are four types of possible modifications; see the \blackref{ref:reduction:sketch} part in \Cref{subsec:tech_prelim} for details. We give four reductions in \Cref{subsec:slice,subsec:collapse,subsec:halve,subsec:AD}, each dealing with one of these four types.
\item \Cref{subsec:main} presents the \blackref{alg:main} procedure, which {\em iteratively} invokes the mentioned four reductions, transforming a given {\em floor}/{\em ceiling} pseudo instance $\in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ to a {\em twin ceiling} pseudo instance $\in \mathbb{B}_{\sf twin}^{\uparrow}$, as desired. Overall, we will use the {\em potential method} to upper bound the number of invocations.
\end{itemize}
\subsection{Twin ceiling, strong ceiling, jumps, and potentials}
\label{subsec:tech_prelim}
This subsection introduces several prerequisite concepts.
But before that, for ease of reference, let us recap the definitions of {\em floor}/{\em ceiling} pseudo instances (\Cref{def:pseudo,def:discretize,def:translate,def:layer,def:ceiling_floor}).
\begin{definition}[Floor/Ceiling pseudo instances]
\label{def:ceiling_floor:restate}
For a {\em floor}/{\em ceiling} pseudo instance $H \otimes \bB \otimes L$:
\begin{itemize}
\item The {\em monopolist} $H$ is a real bidder, who competes with OTHER bidders $\bB \otimes L$.
\item The {\em non-monopoly} bidders $\bB = \{B_{i}\}_{i \in [n]}$ (if any; $n \geq 0$) are real bidders, each of whom competes with OTHER bidders $H \otimes \bB_{-i} \otimes L$.
\item The pseudo bidder $L$ competes with ALL bidders $H \otimes \bB \otimes L$, {\em including him/herself $L$}.
\end{itemize}
Using the first-order bid distribution $\calB(b) \eqdef \prod_{\sigma \in \{H\} \cup [n] \cup \{L\}} B_{\sigma}(b)$, each bidder $\sigma \in \{H\} \cup [n] \cup \{L\}$ has the bid-to-value mapping
\[
\varphi_{\sigma}(b)
~\eqdef~
b + \bigg(\frac{\calB'(b)}{\calB(b)} - \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \indicator(\sigma \neq L)\bigg)^{-1}.
\]
This pseudo instance, whether {\em floor} $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ or {\em ceiling} $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$, satisfies \blackref{re:discretization}, \blackref{re:monotonicity}, and \blackref{re:layeredness}.
\begin{itemize}
\item \term[\textbf{discretization}]{re:discretization}{\bf :}
This pseudo instance has a {\em bounded} bid support $b \in [0 \equiv \gamma,\, \lambda] \subseteq [0,\, +\infty)$. Regarding some $(m + 1)$-piece partition $\bLambda \eqdef [0 \equiv \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} \equiv \lambda]$, for $0 \leq m < +\infty$, the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in \{H\} \cup [n] \cup \{L\}}$ are {\em piecewise constant} functions and thus, can be represented as an $(n + 2)$-to-$(m + 1)$ bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$ for $\sigma \in \{H\} \cup [n] \cup \{L\}$ and $j \in [0:\, m]$:
\begin{align*}
\bPhi ~\eqdef~
\Big[\, \text{$\varphi_{\sigma}(b) \equiv \phi_{\sigma,\, j}$ on every piece $b \in [\lambda_{j},\, \lambda_{j + 1})$} \,\Big].
\end{align*}
\item \term[\textbf{monotonicity}]{re:monotonicity}{\bf :}
The mappings $\bvarphi$ are increasing over the bid support $b \in [0,\, \lambda]$; thus the table $\bPhi = [\phi_{\sigma,\, j}]$ is increasing $\phi_{\sigma,\, 0} \leq \dots \leq \phi_{\sigma,\, j} \leq \dots \leq \phi_{\sigma,\, m}$ in each row $\sigma \in \{H\} \cup [n] \cup \{L\}$.
\item \term[\textbf{layeredness}]{re:layeredness}{\bf :}
The mappings $\bvarphi$ are ordered $\varphi_{H}(b) \geq \varphi_{1}(b) \geq \dots \geq \varphi_{n}(b) \geq \varphi_{L}(b)$ over the bid support $b \in [0,\, \lambda]$; thus the table $\bPhi = [\phi_{\sigma,\, j}]$ is decreasing $\phi_{H,\, j} \geq \phi_{1,\, j} \geq \dots \geq \phi_{n,\, j} \geq \phi_{L,\, j}$ in each column $j \in [0: m]$.
\end{itemize}
Given these, a pair of {\em floor}/{\em ceiling} pseudo instances (\Cref{def:ceiling_floor}) is uniquely determined. Namely, the {\em floor} one $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ further satisfies
(\term[\textbf{floorness}]{re:floorness}) that the monopolist $H^{\downarrow}$'s conditional value always takes the {\em nil value} $P^{\downarrow} \equiv 0$,
while the {\em ceiling} one $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ further satisfies (\term[\textbf{ceilingness}]{re:ceilingness}) that the monopolist $H^{\uparrow}$'s conditional value always takes the {\em ceiling value} $P^{\uparrow} \equiv \phi_{H,\, 0}$.
\end{definition}
We notice that possibly there are NO non-monopoly bidders $\bB = \{B_{i}\}_{i \in [n]} = \emptyset$, or possibly all non-monopoly bidders are ``dummies'' $B_{i}(b) \equiv 1$ on $b \in [0,\, \lambda]$. In either case, only the monopolist $H$ and the pseudo bidder $L$ have any effect; then, without ambiguity, we can write $H \otimes \bB \otimes L \cong H \otimes L$.
Indeed, the {\em twin ceiling} pseudo instances $H^{\uparrow} \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ given below (\Cref{def:twin}; a visual aid is deferred to \Cref{fig:twin}) have this structure and turn out to be the WORST CASES.
\begin{definition}[Twin ceiling pseudo instances]
\label{def:twin}
A {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ is further called {\em twin ceiling} when it satisfies \blackref{re:twin_collapse} and \blackref{re:twin_ceiling}.
\begin{itemize}
\item \term[\textbf{absolute ceilingness}]{re:twin_ceiling}{\bf :}
The monopolist $H^{\uparrow}$ takes a {\em constant} bid-to-value mapping $\varphi_{H}(b) \equiv \varphi_{H}(0)$ on $b \in [0,\, \lambda]$; thus the row-$H$ entries in the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$ are the same $\phi_{H,\, 0} = \dots = \phi_{H,\, j} = \dots = \phi_{H,\, m}$. (As before, the {\em ceiling} conditional value $P^{\uparrow} \equiv \phi_{H,\, 0}$.)
\item \term[\textbf{non-monopoly collapse}]{re:twin_collapse}{\bf :}
All non-monopoly bidders $i \in [n]$ (if any) exactly take the pseudo bid-to-value mapping $\varphi_{i}(b) \equiv \varphi_{L}(b)$ on $b \in [0,\, \lambda]$; thus in each column $j \in [0: m]$ of the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$, all the non-monopoly entries are the same as the pseudo entry $\phi_{1,\, j} = \dots = \phi_{n,\, j} = \phi_{L,\, j}$.
That is,\footnote{Following \Cref{def:ceiling_floor:restate}, $\varphi_{i}(b) \equiv \varphi_{L}(b)$ means $B'_{i}(b) \big/ B_{i}(b) \equiv 0$, which together with the boundary condition at the supremum bid $B_{i}(\lambda) = 1$ implies that $B_{i}(b) \equiv 1$ on $b \in [0,\, \lambda]$.}
each non-monopoly bidder $i \in [n]$ has no effect $B_{i}(b) \equiv 1$ over the bid support $b \in [0,\, \lambda]$.
\end{itemize}
Since all non-monopoly bidders have no effect, this {\em twin ceiling} pseudo instance can be written as $H^{\uparrow} \otimes \bB \otimes L \cong H^{\uparrow} \otimes L$.
Denote by $\mathbb{B}_{\sf twin}^{\uparrow}$ the space of such pseudo instances, a subset of the space of {\em ceiling} pseudo instances $\mathbb{B}_{\sf twin}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$.
\end{definition}
To convert a {\em floor}/{\em ceiling} pseudo instance $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ into a {\em twin ceiling} pseudo instance, a first question is how to measure its distance ``$\mathrm{dist}(H \otimes \bB \otimes L,\, \mathbb{B}_{\sf twin}^{\uparrow})$'' to the target space $\mathbb{B}_{\sf twin}^{\uparrow}$. Also, a {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ should have a larger distance than the paired {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$, but only a little larger, since they differ only in the monopolist's conditional value, $P^{\downarrow} \equiv 0$ versus $P^{\uparrow} \equiv \phi_{H,\, 0}$.
The above intuition guides us to introduce the potential $\Psi(H \otimes \bB \otimes L)$ of a {\em floor}/{\em ceiling} pseudo instance (\Cref{def:potential,fig:potential:table}). As the name ``potential'' suggests, we will transform such a pseudo instance iteratively until obtaining a {\em twin ceiling} pseudo instance $\tilde{H} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$, and will use the potential method to bound the number of iterations.
\begin{definition}[Potentials]
\label{def:potential}
Given a pair of {\em floor}/{\em ceiling} pseudo instances $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ and $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$, consider their {\em common} $(n + 2)$-to-$(m + 1)$ bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$ for $\sigma \in \{H\} \cup [n] \cup \{L\}$ and $j \in [0:\, m]$ (\Cref{def:ceiling_floor:restate}):
\begin{align*}
\bPhi ~=~
\Big[\, \text{$\varphi_{\sigma}(b) \equiv \phi_{\sigma,\, j}$ on every piece $b \in [\lambda_{j},\, \lambda_{j + 1})$} \,\Big].
\end{align*}
An entry $\phi_{\sigma,\, j}$ is called {\em ultra-ceiling} when it {\em strictly} exceeds the monopolist $H$'s ceiling value, $\phi_{\sigma,\, j} > \phi_{H,\, 0} \equiv \varphi_{H}(0)$. Then the {\em ceiling} pseudo instance's potential counts the ultra-ceiling entries
\[
\Psi(H^{\uparrow} \otimes \bB \otimes L) ~\eqdef~ \big|\big\{(\sigma,\, j) \in \bPhi: \phi_{\sigma,\, j} > \phi_{H,\, 0}\big\}\big|. \hspace{0.35cm}
\]
In contrast, the {\em floor} pseudo instance's potential adds an extra ONE (which accounts for the nil conditional value $P^{\downarrow} \equiv 0$ rather than $P^{\uparrow} \equiv \phi_{H,\, 0}$):
\[
\Psi(H^{\downarrow} \otimes \bB \otimes L) ~\eqdef~ \Psi(H^{\uparrow} \otimes \bB \otimes L) ~+~ 1.~~ \hspace{1.22cm}
\]
\end{definition}
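As a toy illustration of \Cref{def:potential} (with hypothetical numbers, not taken from \Cref{fig:potential}), let $n = 1$ and $m = 2$, and consider the bid-to-value table
\[
\bPhi ~=~
\begin{bmatrix}
3 & 3 & 5 \\
2 & 3 & 5 \\
1 & 2 & 4
\end{bmatrix}
\qquad \text{(rows $H$, $1$, $L$; columns $j = 0, 1, 2$),}
\]
which is increasing in each row (\blackref{re:monotonicity}) and decreasing in each column (\blackref{re:layeredness}). The ceiling value is $\phi_{H,\, 0} = 3$; exactly the three entries $\phi_{H,\, 2} = \phi_{1,\, 2} = 5$ and $\phi_{L,\, 2} = 4$ are ultra-ceiling, so $\Psi(H^{\uparrow} \otimes \bB \otimes L) = 3$ and $\Psi(H^{\downarrow} \otimes \bB \otimes L) = 3 + 1 = 4$.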
\afterpage{
\begin{figure}
\centering
\subfloat[{The bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$; ultra-ceiling entries $\phi_{\sigma,\, j} > \phi_{H,\, 0}$ marked in blue.}
\label{fig:potential:table}]{
\parbox[c]{14cm}{
{\centering
\includegraphics[
height = 10cm]
{potential_table.png}
\par}}} \\
\vspace{1cm}
\subfloat[{A {\bf pseudo jump} $\sigma^{*} = L$}
\label{fig:potential:pseudo}]{
\includegraphics[width = .49\textwidth]
{jump_pseudo.png}}
\hfill
\subfloat[{A {\bf real jump} $\sigma^{*} \in \{H\} \cup [n]$}
\label{fig:potential:real}]{
\includegraphics[width = .49\textwidth]
{jump_real.png}}
\caption{Demonstration for potentials and jumps of a {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ (\Cref{def:potential,def:jump}).
}
\label{fig:potential}
\end{figure}
\clearpage}
\Cref{lem:potential} summarizes several basic properties about potentials.
\begin{lemma}[Potentials]
\label{lem:potential}
The following hold:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:potential:bound}
A floor/ceiling pseudo instance $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ has a bounded potential \\
$0 \leq \Psi(H \otimes \bB \otimes L) \leq |\bPhi| < +\infty$.
\item\label{lem:potential:floor}
A floor pseudo instance $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ has a nonzero potential $\Psi(H^{\downarrow} \otimes \bB \otimes L) \geq 1$.
\item\label{lem:potential:ceiling}
A ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ has a zero potential $\Psi(H^{\uparrow} \otimes \bB \otimes L) = 0$ iff it satisfies \blackref{re:twin_ceiling}.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf \Cref{lem:potential:bound}} follows since $\Psi(H \otimes \bB \otimes L) \leq \big(|\bPhi| - 1\big) + 1 = |\bPhi|$. Namely, the potential cannot count the ceiling value $\phi_{H,\, 0}$ itself, BUT may add an extra ONE for the nil conditional value $P^{\downarrow} \equiv 0$.
{\bf \Cref{lem:potential:floor}} follows directly from \Cref{def:potential}.
{\bf \Cref{lem:potential:ceiling}} holds since (\blackref{re:monotonicity}/\blackref{re:layeredness}) the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$ is row-wise increasing and column-wise decreasing, so the potential $\Psi(H^{\uparrow} \otimes \bB \otimes L) = \big|\big\{(\sigma,\, j) \in \bPhi: \phi_{\sigma,\, j} > \phi_{H,\, 0}\big\}\big|$ is zero iff
(\blackref{re:twin_ceiling}) all the row-$H$ entries are the same $\phi_{H,\, 0} = \dots = \phi_{H,\, j} = \dots = \phi_{H,\, m}$.
\end{proof}
Because the table $\bPhi = [\phi_{\sigma,\, j}]$ is row-wise increasing and column-wise decreasing,
all ultra-ceiling entries $\phi_{\sigma,\, j} > \phi_{H,\, 0}$ (if any) together form an upper-right subregion (\Cref{fig:potential}). Among those entries, the {\em leftmost-then-lowest} one $\phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, 0}$ will be of particular interest; we call it the {\em jump entry} (\Cref{def:jump}). In the sense of \Cref{fig:potential:pseudo,fig:potential:real}, the name ``jump'' describes the behavior of the bid-to-value mappings $\bvarphi = \{\varphi_{\sigma}\}_{\sigma \in \{H\} \cup [n] \cup \{L\}}$ at that entry.
\begin{definition}[Jumps]
\label{def:jump}
For a {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$, define the \term[\textbf{jump entry}]{jump_entry} $(\sigma^{*},\, j^{*})$ as the {\em leftmost-then-lowest} ultra-ceiling entry $\phi_{\sigma,\, j} > \phi_{H,\, 0}$ (if it exists; \Cref{def:potential}) of the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$:
\begin{itemize}
\item $j^{*} \eqdef \min \big\{ j \in [0: m]: \phi_{H,\, j} > \phi_{H,\, 0} \big\}$ denotes the {\em jump piece} of the underlying partition $\bLambda$.
\item $\sigma^{*} \eqdef \max \big\{ \sigma \in \{H < 1 < \dots < n < L\}: \phi_{\sigma,\, j^{*}} > \phi_{H,\, 0} \big\}$ denotes the {\em jump bidder}.
\end{itemize}
Technically,\footnote{That is, we will handle a \blackref{pseudo_jump} (\Cref{subsec:halve}) versus a \blackref{real_jump} (\Cref{subsec:AD}) via different approaches. But especially for a \blackref{real_jump} $\sigma^{*} \in \{H\} \cup [n]$, technically, there is no difference between a jump monopolist $\sigma^{*} = H$ and a jump non-monopoly bidder $\sigma^{*} \in [n]$.}
there are two types of jumps (if well-defined; see \Cref{lem:jump}):
\begin{itemize}
\item \term[\textbf{pseudo jump}]{pseudo_jump}{\bf :}
The jump bidder is the pseudo bidder $\sigma^{*} = L$ (\Cref{fig:potential:pseudo}). \\
At such a jump, the bid-to-value spectrum $\big\{\, \varphi_{L}(b) \leq v \leq \varphi_{H}(b): b \in [0,\, \lambda] \,\big\}$ decomposes into two {\em disconnected} parts.
\item \term[\textbf{real jump}]{real_jump}{\bf :}
The jump bidder is one of the real bidders $\sigma^{*} \in \{H\} \cup [n]$ (\Cref{fig:potential:real}). \\
At such a jump, the bid-to-value spectrum $\big\{\, \varphi_{L}(b) \leq v \leq \varphi_{H}(b): b \in [0,\, \lambda] \,\big\}$ is still {\em connected}.
\end{itemize}
Moreover, denote by $\lambda^{*} \eqdef \lambda_{j^{*}}$ the {\em jump bid} and by $\phi^{*} \eqdef \phi_{\sigma^{*},\, j^{*}}$ the {\em jump value}.
\end{definition}
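For a toy illustration of \Cref{def:jump} (again with hypothetical numbers), take $n = 1$, $m = 2$, and the bid-to-value table
\[
\bPhi ~=~
\begin{bmatrix}
3 & 5 & 5 \\
2 & 4 & 5 \\
1 & 2 & 2
\end{bmatrix}
\qquad \text{(rows $H$, $1$, $L$; columns $j = 0, 1, 2$; ceiling value $\phi_{H,\, 0} = 3$).}
\]
The leftmost column containing an ultra-ceiling entry is $j^{*} = 1$, and the lowest ultra-ceiling entry therein (under the order $H < 1 < L$) is $\phi_{1,\, 1} = 4 > 3 \geq \phi_{L,\, 1}$. So the jump bidder is $\sigma^{*} = 1$, giving a \blackref{real_jump} with jump bid $\lambda^{*} = \lambda_{1}$ and jump value $\phi^{*} = 4$. Had the $L$-row instead been $(1,\, 4,\, 4)$, the jump bidder would be the pseudo bidder $\sigma^{*} = L$, giving a \blackref{pseudo_jump}.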
\Cref{lem:jump} follows directly from \Cref{def:potential,def:jump}, yet we include it for completeness.
\begin{lemma}[Jumps]
\label{lem:jump}
When a ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ has a nonzero potential $\Psi(H^{\uparrow} \otimes \bB \otimes L) > 0$, i.e., (\Cref{lem:potential:ceiling} of \Cref{lem:potential}) when it violates \blackref{re:twin_ceiling}:
\begin{enumerate}[font = {\em\bfseries}]
\item The \blackref{jump_entry} $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n] \cup \{L\}) \times [m]$ exists and cannot be in column $0$.
\item The jump bid $\lambda^{*} \equiv \lambda_{j^{*}} \in (0,\, \lambda)$, namely the partition bid indexed by $j^{*} \in [m]$, can be neither the nil bid $\lambda_{0} \equiv 0$ nor the supremum bid $\lambda_{m + 1} \equiv \lambda$.
\end{enumerate}
\end{lemma}
\begin{remark}[Jumps]
\label{rem:jump}
Without ambiguity, we can slightly generalize the concept of a jump. Concretely, when a ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ has a zero potential $\Psi(H^{\uparrow} \otimes \bB \otimes L) = 0$, i.e., (\Cref{lem:potential}) when it satisfies \blackref{re:twin_ceiling}, we can reinterpret the otherwise {\em undefined} jump bid as the supremum bid $\lambda^{*} = \lambda$.
\end{remark}
The above discussions address the first condition, \blackref{re:twin_ceiling}, for a {\em twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$ (\Cref{def:twin}).
We shall further consider the second condition, \blackref{re:twin_collapse}.
Indeed, by imposing this condition only before the jump bid $b \in [0,\, \lambda^{*})$ (instead of over the whole bid support $b \in [0,\, \lambda]$), we arrive at the concept of {\em strong ceiling} pseudo instances (\Cref{def:strong}).
\begin{definition}[Strong ceiling pseudo instances]
\label{def:strong}
A {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ from \Cref{def:ceiling_floor:restate} is further called {\em strong ceiling} when (\term[\textbf{non-monopoly collapse}]{re:collapse}) before the jump bid $b \in [0,\, \lambda^{*})$, each non-monopoly bidder $i \in [n]$ (if any) exactly takes the pseudo bid-to-value mapping $\varphi_{i}(b) \equiv \varphi_{L}(b)$; therefore in each before-jump column $j \in [0: j^{*} - 1]$ of the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$, all of the non-monopoly entries are the same as the pseudo entry $\phi_{1,\, j} = \dots = \phi_{n,\, j} = \phi_{L,\, j}$.
That is, before the jump bid $b \in [0,\, \lambda^{*})$, each non-monopoly bidder $i \in [n]$ has a {\em constant} bid distribution $B_{i}(b) = B_{i}(\lambda^{*})$ and thus has no effect.\footnote{Following \Cref{def:ceiling_floor:restate}, $\varphi_{i}(b) = \varphi_{L}(b)$ means $B'_{i}(b) \big/ B_{i}(b) = 0$, which together with the boundary condition at the jump bid $\lambda^{*} \equiv \lambda_{j^{*}}$ implies that $B_{i}(b) = B_{i}(\lambda^{*})$ on $b \in [0,\, \lambda^{*}]$.}
Denote by $\mathbb{B}_{\sf strong}^{\uparrow}$ the space of such pseudo instances, an intermediate class between the {\em ceiling} class and the {\em twin ceiling} class $\mathbb{B}_{\sf valid}^{\uparrow} \supsetneq \mathbb{B}_{\sf strong}^{\uparrow} \supsetneq \mathbb{B}_{\sf twin}^{\uparrow}$ (\Cref{def:ceiling_floor:restate,def:twin}).
\end{definition}
\Cref{lem:strong} follows directly from \Cref{def:twin,def:strong}, \Cref{lem:potential} (\Cref{lem:potential:ceiling}) and \Cref{rem:jump}.
\begin{lemma}[Twin ceiling pseudo instances]
\label{lem:strong}
A strong ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf strong}^{\uparrow}$ is further a twin ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \cong H^{\uparrow} \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$ iff any one of the following three equivalent conditions holds: (i)~It satisfies \blackref{re:twin_ceiling}. (ii)~Its potential is zero $\Psi(H^{\uparrow} \otimes \bB \otimes L) = 0$. (iii)~Its jump bid takes the supremum bid $\lambda^{*} = \lambda$.
\end{lemma}
\afterpage{
\begin{figure}
\centering
\includegraphics[width = .45\textwidth, height = 6.375cm]
{reduction_twin.png}
\caption{Demonstration for {\em twin ceiling} pseudo instances.
\label{fig:twin}}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:reduction:floor}
{\em floor}, thus {\em non-ceiling}]{
\includegraphics[width = .45\textwidth, height = 6.375cm]
{reduction_floor.png}}
\hfill
\subfloat[\label{fig:reduction:non_strong}
{\em ceiling} but {\em non strong ceiling}]{
\includegraphics[width = .45\textwidth, height = 6.375cm]
{reduction_nonstrong.png}} \\
\subfloat[\label{fig:reduction:pseudo}
{\em strong} but {\em non twin ceiling}; {\bf pseudo jump}]{
\includegraphics[width = .45\textwidth, height = 6.375cm]
{reduction_pseudo.png}}
\hfill
\subfloat[\label{fig:reduction:real}
{\em strong} but {\em non twin ceiling}; {\bf real jump}]{
\includegraphics[width = .45\textwidth, height = 6.375cm]
{reduction_real.png}}
\caption{Demonstration for case analysis in \Cref{sec:reduction}.
\label{fig:reduction}}
\end{figure}
\clearpage}
\noindent
\term[\textbf{Sketch}]{ref:reduction:sketch}{\bf .}
Based on the above materials, we are now ready to describe how to transform {\em floor}/{\em ceiling} pseudo instances $(\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ into {\em twin ceiling} pseudo instances $\mathbb{B}_{\sf twin}^{\uparrow}$, as claimed. Indeed, for an {\em undesirable} pseudo instance $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow}))$, there are four types of possible modifications, which are illustrated one-to-one in \Cref{fig:reduction:floor,fig:reduction:non_strong,fig:reduction:pseudo,fig:reduction:real}. (Cf.\ \Cref{fig:twin} for comparison with {\em twin ceiling} pseudo instances $\in \mathbb{B}_{\sf twin}^{\uparrow}$. Recall that $\mathbb{B}_{\sf twin}^{\uparrow} \subsetneq \mathbb{B}_{\sf strong}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$.)
\begin{flushleft}
\begin{itemize}
\item {\bf $H \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$:}
A {\em floor} pseudo instance (\Cref{fig:reduction:floor}){\bf .} \\
We will deal with this case in \Cref{subsec:slice}, using the \blackref{alg:slice} reduction.
\item {\bf $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf strong}^{\uparrow})$:}
A {\em ceiling} but {\em non strong ceiling} pseudo instance (\Cref{fig:reduction:non_strong}){\bf .} \\
We will deal with this case in \Cref{subsec:collapse}, using the \blackref{alg:collapse} reduction.
\item {\bf $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$; \blackref{pseudo_jump} $\sigma^{*} = L$:}
A {\em strong ceiling} but {\em non twin ceiling} pseudo instance, and the jump bidder is the pseudo bidder $\sigma^{*} = L$ (\Cref{fig:reduction:pseudo}){\bf .} \\
We will deal with this case in \Cref{subsec:halve}, using the \blackref{alg:halve} reduction.
\item {\bf $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$; \blackref{real_jump} $\sigma^{*} \neq L$:}
A {\em strong ceiling} but {\em non twin ceiling} pseudo instance, and the jump bidder is one of the real bidders $\sigma^{*} \in \{H\} \cup [n]$ (\Cref{fig:reduction:real}){\bf .} \\
We will deal with this case in \Cref{subsec:AD}, using the \blackref{alg:AD} reduction.
\end{itemize}
\end{flushleft}
As mentioned, in \Cref{subsec:main} we will leverage all the four reductions to build the \blackref{alg:main} procedure, and upper bound its running time through the {\em potential method}.
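As a schematic preview (our own paraphrase of the case analysis above; the formal \blackref{alg:main} procedure appears in \Cref{subsec:main}), the dispatch can be read as a loop:
\begin{quote}
{\em While} the current pseudo instance is not {\em twin ceiling} ($\notin \mathbb{B}_{\sf twin}^{\uparrow}$):
\begin{itemize}
\item if it is {\em floor} ($\in \mathbb{B}_{\sf valid}^{\downarrow}$), apply \blackref{alg:slice};
\item else if it is {\em non strong ceiling} ($\in \mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf strong}^{\uparrow}$), apply \blackref{alg:collapse};
\item else if its jump is a \blackref{pseudo_jump}, apply \blackref{alg:halve};
\item else (its jump is a \blackref{real_jump}), apply \blackref{alg:AD}.
\end{itemize}
\end{quote}
Each reduction yields a weakly worse {{\sf PoA}} bound (cf.\ \Cref{lem:slice}), and the potential $\Psi$ controls the number of iterations.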
\subsection{{\text{\tt Slice}}: From floor to ceiling}
\label{subsec:slice}
This subsection presents the {\blackref{alg:slice}} reduction (see \Cref{fig:alg:slice,fig:slice} for its description and a visual demonstration), which transforms (\Cref{def:ceiling_floor:restate}) a {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ into a {\em ceiling} pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\Cref{lem:slice} summarizes the performance guarantees of the \blackref{alg:slice} reduction. (The intuition behind this reduction may seem obscure at first; in brief, it generalizes the ideas behind the \blackref{alg:translate} reduction and the \blackref{alg:polarize} reduction from \Cref{sec:preprocessing}.)
\begin{lemma}[{\text{\tt Slice}}; \Cref{fig:slice}]
\label{lem:slice}
Under reduction $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \gets \text{\tt Slice}(H^{\downarrow} \otimes \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:slice:property}
The output is a ceiling pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\item\label{lem:slice:potential}
The potential strictly decreases $\Psi(\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) \leq \Psi(H^{\downarrow} \otimes \bB \otimes L) - 1$.
\item\label{lem:slice:poa}
A (weakly) worse bound is yielded ${\sf PoA}(\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(H^{\downarrow} \otimes \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Slice}]{alg:slice}(H^{\downarrow} \otimes \bB \otimes L)$
\begin{flushleft}
{\bf Input:}
A {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$.\white{\term[\text{\em input}]{slice_input}}
\hfill
\Cref{def:ceiling_floor}
\vspace{.05in}
{\bf Output:}
A {\em ceiling} pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$.\white{\term[\text{\em output}]{slice_output}}
\hfill
\Cref{def:ceiling_floor}
\begin{enumerate}
\item\label{alg:slice:spectrum}
Define a spectrum of {\em floor} pseudo instances $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$, with $0 \leq t < \lambda$, \\
given by $B_{\sigma}^{(t)}(b) \equiv B_{\sigma}(b + t) \cdot \indicator(b \geq 0)$ for $\sigma \in \{H\} \cup [n] \cup \{L\}$.
\white{\term[\text{\em interim}]{slice_interim}}
\item\label{alg:slice:minimum}
Define the \term[\text{\em minimizer}]{slice_minimizer} $t^{*} \eqdef \argmin \big\{\, {\sf PoA}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}): 0 \leq t < \lambda \,\big\}$; \\
breaking ties in favor of the smallest one among all alternatives.
\item\label{alg:slice:output}
{\bf Return} $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$, the particular {\em ceiling} pseudo instance given by this $t^{*}$. \\
\OliveGreen{$\triangleright$ This is paired with the minimizer {\em floor} pseudo instance $H^{(t^{*})\downarrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$.}
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Slice}} reduction
\label{fig:alg:slice}}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:slice:input}
The {\em floor} input $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$]{
\includegraphics[width = .49\textwidth]
{slice_old.png}}
\hfill
\subfloat[\label{fig:slice:output}
The {\em ceiling} output $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})} \in \mathbb{B}_{\sf valid}^{\uparrow}$]{
\includegraphics[width = .49\textwidth]
{slice_new.png}}
\caption{Demonstration for the {\text{\tt Slice}} reduction (\Cref{fig:alg:slice}), which transforms (\Cref{def:ceiling_floor:restate}) a {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ into a {\em ceiling} pseudo instance $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})} \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\label{fig:slice}}
\end{figure}
\clearpage}
\newcommand{\mathfrak{F}}{\mathfrak{F}}
\newcommand{\mathfrak{O}}{\mathfrak{O}}
\newcommand{\mathfrak{P}}{\mathfrak{P}}
\begin{proof}
See \Cref{fig:slice} for a visual aid.
Without loss of generality, we assume that the input bound is strictly less than one ${\sf PoA}(H^{\downarrow} \otimes \bB \otimes L) < 1$; otherwise, the input {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L$ cannot be (one of) the worst cases.
For brevity, let $[N] \equiv \{H\} \cup [n] \cup \{L\}$.
The \blackref{alg:slice} reduction
(Line~\ref{alg:slice:spectrum}) considers a spectrum $\big\{\, H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}: 0 \leq t < \lambda \,\big\}$ of \blackref{slice_interim} pseudo instances,
then (Line~\ref{alg:slice:minimum}) takes the {{\sf PoA}}-minimizer $t^{*} \in [0,\, \lambda)$,
and then (Line~\ref{alg:slice:output}) outputs the {\em ceiling} pseudo instance $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$ given by the $t^{*}$.
Our analysis relies on {\bf \Cref{fact:slice:minimizer}}, whose proof we defer to the analysis of {\bf \Cref{lem:slice:poa}} for readability.
\setcounter{fact}{0}
\begin{fact}
\label{fact:slice:minimizer}
The minimizer $t^{*} = \argmin \big\{\, {\sf PoA}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}): 0 \leq t < \lambda \,\big\}$, breaking ties in favor of the smallest one among all the alternatives (Line~\ref{alg:slice:minimum}), is well defined.
\end{fact}
\noindent
{\bf \Cref{lem:slice:property}.}
Every \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ preserves \blackref{re:discretization}, \blackref{re:monotonicity} and \blackref{re:layeredness}:
it merely shifts the input {\em floor} pseudo instance $H^{\downarrow} \otimes \bB \otimes L \equiv H^{(0)\downarrow} \otimes \bB^{(0)} \otimes L^{(0)} \in \mathbb{B}_{\sf valid}^{\downarrow}$ (\Cref{def:ceiling_floor:restate}) by a distance of $-t$ and restricts the support to the nonnegative bids $b \in [0,\, \lambda - t]$.
Every \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ further satisfies \blackref{re:floorness} and is thus a {\em floor} pseudo instance;
in particular, so is the \blackref{slice_minimizer} $H^{(t^{*})\downarrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})} \in \mathbb{B}_{\sf valid}^{\downarrow}$, since the $t^{*} \in [0,\, \lambda)$ is well defined ({\bf \Cref{fact:slice:minimizer}}).
Hence, the paired \blackref{slice_output} $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$ satisfies all the required conditions (\blackref{re:ceilingness} etc.) and is a {\em ceiling} pseudo instance.
{\bf \Cref{lem:slice:property}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:slice:potential}.}
Recall (\Cref{def:potential}) that the potential of a {\em ceiling} pseudo instance $\hat{H}^{\uparrow} \otimes \hat{\bB} \otimes \hat{L}$ counts the ultra-ceiling entries $\Psi(\text{\em ceiling}) = \big|\big\{(\sigma,\, j) \in \hat{\bPhi}: \hat{\phi}_{\sigma,\, j} > \hat{\phi}_{H,\, 0}\big\}\big|$ in the bid-to-value table $\hat{\bPhi} = \big[\hat{\phi}_{\sigma,\, j}\big]$, whereas the paired {\em floor} pseudo instance $\hat{H}^{\downarrow} \otimes \hat{\bB} \otimes \hat{L}$ has potential $\Psi(\text{\em floor}) = \Psi(\text{\em ceiling}) + 1$.
Following {\bf \Cref{lem:slice:property}}, it suffices to prove that every \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ for $t \in [0,\, \lambda)$ has a potential $\Psi(\blackref{slice_interim}) \leq \Psi(\blackref{slice_input})$ no larger than that of the \blackref{slice_input} $H^{\downarrow} \otimes \bB \otimes L \equiv H^{(0)\downarrow} \otimes \bB^{(0)} \otimes L^{(0)}$; in particular, this covers the \blackref{slice_minimizer} $H^{(t^{*})\downarrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$.
Without loss of generality, consider a specific $t \in [\lambda_{k},\, \lambda_{k + 1})$ that lies in the index-$k$ piece, $k \in [0: m]$, of the underlying partition $\bLambda = [0 \equiv \lambda_{0},\, \lambda_{1}) \cup
[\lambda_{1},\, \lambda_{2}) \cup \dots \cup
[\lambda_{m},\, \lambda_{m + 1} \equiv \lambda]$.
As \Cref{fig:slice:output} suggests, the \blackref{slice_interim} table $\bPhi^{(t)}$ entrywise shifts the \blackref{slice_input} table $\bPhi = \big[\phi_{\sigma,\, j}\big]$ for $(\sigma,\, j) \in [N] \times [0: m]$ by a distance of $-t$, and then discards the columns $j \in [0: k - 1]$. That is,
\begin{align*}
\bPhi^{(t)} ~=~
\Big[\, \text{$\phi_{\sigma,\, j}^{(t)} = \phi_{\sigma,\, j} - t$ for $(\sigma,\, j) \in [N] \times [k: m]$} \,\Big].
\end{align*}
Particularly, the ceiling value changes into the {\em shifted} row-$H$ column-$k$ entry $\phi_{H,\, k}^{(t)} = \phi_{H,\, k} - t$.
Since the \blackref{slice_input} $H^{\downarrow} \otimes \bB \otimes L \equiv H^{(0)\downarrow} \otimes \bB^{(0)} \otimes L^{(0)}$ and the \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ are two {\em floor} pseudo instances, we can deduce that
\begin{align*}
\Psi(\blackref{slice_interim})
& ~=~ 1 + \big|\big\{(\sigma,\, j) \in \bPhi^{(t)}: \phi_{\sigma,\, j}^{(t)} > \phi_{H,\, k}^{(t)}\big\}\big| \phantom{\Big.} \\
& ~=~ 1 + \big|\big\{(\sigma,\, j) \in \bPhi^{(t)}: \phi_{\sigma,\, j} > \phi_{H,\, k}\big\}\big|
&& \text{add back the shifts} \phantom{\Big.} \\
& ~\leq~ 1 + \big|\big\{(\sigma,\, j) \in \bPhi^{(t)}: \phi_{\sigma,\, j} > \phi_{H,\, 0}\big\}\big|
&& \text{\blackref{re:monotonicity} $\phi_{H,\, 0} \leq \phi_{H,\, k}$} \phantom{\Big.} \\
& ~\leq~ 1 + \big|\big\{(\sigma,\, j) \in \bPhi: \phi_{\sigma,\, j} > \phi_{H,\, 0}\big\}\big|
~=~ \Psi(\blackref{slice_input}).
&& \text{account for the discarded entries} \phantom{\Big.}
\end{align*}
In particular, this inequality holds for ({\bf \Cref{fact:slice:minimizer}}) the \blackref{slice_minimizer} $t^{*} \in [0,\, \lambda)$. The paired \blackref{slice_output} $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})}$ thus has a potential $\Psi(\blackref{slice_output}) = \Psi(\blackref{slice_minimizer}) - 1 \leq \Psi(\blackref{slice_input}) - 1$. {\bf \Cref{lem:slice:potential}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:slice:poa}.}
Regarding every \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$,
let us consider the following formulas.
\begin{align*}
& \mathfrak{F}(t)\,
~\eqdef~ {\sf FPA}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}), \phantom{\big.} \\
& \mathfrak{O}(t)
~\eqdef~ {\sf OPT}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}), \phantom{\big.} \\
& \mathfrak{P}(t)
~\eqdef~ {\sf PoA}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}). \phantom{\big.}
\end{align*}
We decompose {\bf \Cref{lem:slice:poa}} into a sequence of facts.
\begin{restate}[{\Cref{fact:slice:minimizer}}]
\begin{flushleft}
The minimizer $t^{*} = \argmin \big\{\, \mathfrak{P}(t): 0 \leq t < \lambda \,\big\}$, breaking ties in favor of the smallest one among all the alternatives (Line~\ref{alg:slice:minimum}), is well defined.
\end{flushleft}
\end{restate}
\begin{proof}
As mentioned, we safely assume the \blackref{slice_input} {{\sf PoA}}-bound $\mathfrak{P}(0) \equiv \mathfrak{F}(0) \big/ \mathfrak{O}(0) \equiv {\sf PoA}(\blackref{slice_input}) < 1$.
Moreover, it is easy to verify that the formulas $\mathfrak{F}(t)$ and $\mathfrak{O}(t)$ are continuous functions;\footnote{This is because the \blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ changes ``continuously'' with respect to the $t \in [0,\, \lambda)$. Indeed, the formulas $\mathfrak{F}(t)$ and $\mathfrak{O}(t)$ are even differentiable, and we will give their derivatives $\mathfrak{F}'(t)$ and $\mathfrak{O}'(t)$ in {\bf \Cref{fact:slice:fpa,fact:slice:opt}}.}
then so is the {{\sf PoA}}-formula $\mathfrak{P}(t)$. To make the \blackref{slice_minimizer} $t^{*} = \argmin \big\{\, \mathfrak{P}(t): 0 \leq t < \lambda \,\big\}$ well defined, we just need to show that $\lim_{t \nearrow \lambda} \mathfrak{P}(t) = 1$.
We consider a $t$ close enough to $\lambda$, say $t \in [\lambda_{m},\, \lambda_{m + 1} \equiv \lambda)$, for which (\Cref{fig:slice}) the \blackref{slice_interim} table $\bPhi^{(t)}$ from {\bf \Cref{lem:slice:potential}} only contains the {\em rightmost} column $\phi_{H,\, m}^{(t)} \geq \phi_{1,\, m}^{(t)} \geq \dots \geq \phi_{n,\, m}^{(t)} \geq \phi_{L,\, m}^{(t)}$.
Thus the optimal {\textsf{Social Welfare}} (\Cref{lem:translate_welfare}; with \blackref{re:floorness} $P^{(t)\downarrow} \equiv 0$ and $B_{\sigma}^{(t)}(\lambda - t) = B_{\sigma}(\lambda) = 1$) is given by
\begin{align*}
\mathfrak{O}(t)
& = \int_{0}^{+\infty} \Big(1 - \prod_{\sigma \in [N]} \big(1 - \big(1 - B_{\sigma}^{(t)}(0)
\big) \cdot \indicator(v < \phi_{\sigma,\, m}^{(t)})\big)\Big) \cdot \d v \\
& \leq \int_{0}^{+\infty} \Big(\sum_{\sigma \in [N]} \big(1 - B_{\sigma}^{(t)}(0)
\big) \cdot \indicator(v < \phi_{\sigma,\, m}^{(t)})\Big) \cdot \d v \\
& = \sum_{\sigma \in [N]} \Big(\phi_{\sigma,\, m}^{(t)} \cdot \big(1 - B_{\sigma}^{(t)}(0)
\big)\Big) \\
& = \sum_{\sigma \in [N]} \Big(\int_{0}^{\lambda - t} \phi_{\sigma,\, m}^{(t)} \cdot {B_{\sigma}^{(t)}}'(b) \cdot \d b\Big) \\
& ~\leq~ \tfrac{1}{\calB^{(t)}(0)} \cdot \sum_{\sigma \in [N]} \Big(\int_{0}^{\lambda - t} \phi_{\sigma,\, m}^{(t)} \cdot {B_{\sigma}^{(t)}}'(b) \cdot \tfrac{\calB^{(t)}(b)}{B_{\sigma}^{(t)}(b)} \cdot \d b\Big)
&& \text{$\calB^{(t)}(0) \leq \calB^{(t)}(b) \leq \tfrac{\calB^{(t)}(b)}{B_{\sigma}^{(t)}(b)}$} \\
& = \tfrac{1}{\calB^{(t)}(0)} \cdot {\sf FPA}(H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)})
= \tfrac{1}{\calB(t)} \cdot \mathfrak{F}(t).
&& \text{\Cref{lem:pseudo_welfare}} \phantom{\Big.}
\end{align*}
Thus, the {{\sf PoA}}-bound satisfies $\calB(t) \leq \mathfrak{P}(t) \equiv \mathfrak{F}(t) \big/ \mathfrak{O}(t) \leq 1$ for $t \in [\lambda_{m},\, \lambda)$.
The \blackref{slice_input} first-order bid distribution $\calB(b)$ is continuous on the bid support $b \in [0,\, \lambda]$ (\Cref{lem:bid_distribution}) with $\calB(\lambda) = 1$,
so $\lim_{t \nearrow \lambda} \calB(t) = 1$;
by the squeeze theorem, $\lim_{t \nearrow \lambda} \mathfrak{P}(t) = 1$.
{\bf \Cref{fact:slice:minimizer}} follows then.
\end{proof}
Following {\bf \Cref{fact:slice:minimizer}}, without loss of generality, we can assume that the \blackref{slice_minimizer} is zero, namely $t^{*} = 0$.\footnote{For any two $t,\, \tau \in [0,\, \lambda)$ with $t \leq \tau$, we can regard the $\tau$-\blackref{slice_interim} $H^{(\tau)\downarrow} \otimes \bB^{(\tau)} \otimes L^{(\tau)}$ as the result of (\`{a} la Line~\ref{alg:slice:spectrum}) shifting the $t$-\blackref{slice_interim} $H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}$ by a distance of $-(\tau - t)$.}
{\bf \Cref{fact:slice:fpa,fact:slice:opt}} below help us characterize the optimality condition at this \blackref{slice_minimizer} $t^{*} = 0$.
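The shift-composition property invoked in the footnote above can be checked directly against Line~\ref{alg:slice:spectrum}; we record this routine verification for completeness:
\begin{align*}
B_{\sigma}^{(\tau)}(b)
~=~ B_{\sigma}(b + \tau) \cdot \indicator(b \geq 0)
~=~ B_{\sigma}^{(t)}\big(b + (\tau - t)\big) \cdot \indicator(b \geq 0),
\qquad 0 \leq t \leq \tau < \lambda,
\end{align*}
since $B_{\sigma}^{(t)}\big(b + (\tau - t)\big) = B_{\sigma}(b + \tau) \cdot \indicator\big(b \geq -(\tau - t)\big)$ and the outer indicator $\indicator(b \geq 0)$ subsumes the inner one.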
\begin{fact}
\label{fact:slice:fpa}
\begin{flushleft}
The derivative $\mathfrak{F}'(0) = -\big(1 - \calB(0)\big) - \sum_{\sigma \in [N]} \big(\frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \phi_{\sigma,\, 0} \cdot \calB(0)\big) \leq 0$, where the first-order bid distribution (\Cref{lem:pseudo_welfare}) $\calB(b) = \prod_{\sigma \in [N]} B_{\sigma}(b)$.
\end{flushleft}
\end{fact}
\begin{proof}
We can write the auction {\textsf{Social Welfare}} formula $\mathfrak{F}(t)$ of the \blackref{slice_interim} as follows (\Cref{lem:pseudo_welfare}; \blackref{re:discretization} $\gamma = 0$ and \blackref{re:floorness} $P^{(t)\downarrow} \equiv 0$).
\begin{align*}
\mathfrak{F}(t)
& = \sum_{\sigma \in [N]} \Big(\int_{0}^{\lambda - t} \varphi_{\sigma}^{(t)}(b) \cdot \tfrac{{B_{\sigma}^{(t)}}'(b)}{B_{\sigma}^{(t)}(b)} \cdot \calB^{(t)}(b) \cdot \d b\Big) \\
& = \sum_{\sigma \in [N]} \Big(\int_{t}^{\lambda} \big(\varphi_{\sigma}(b) - t\big) \cdot \tfrac{B_{\sigma}'(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big)
\qquad\qquad \text{Line~\ref{alg:slice:spectrum}}
\hspace{3.75cm}
\end{align*}
Therefore, the derivative $\mathfrak{F}'(0)$ is given by\footnote{\label{footnote:slice:fpa}Recall that a function $F(t) = \int_{t}^{\lambda} f(x,\, t) \cdot \d x$ has the derivative $F'(0) = -f(0,\, 0) + \int_{0}^{\lambda} \big(\frac{\partial f(x,\, t)}{\partial t}\big)\bigmid_{t = 0} \cdot \d x$.}
\begin{align*}
\mathfrak{F}'(0)
& = -\sum_{\sigma \in [N]} \varphi_{\sigma}(0) \cdot \tfrac{B_{\sigma}'(0)}{B_{\sigma}(0)} \cdot \calB(0) - \sum_{\sigma \in [N]} \Big(\int_{0}^{\lambda} \tfrac{B_{\sigma}'(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big) \\
& = -\sum_{\sigma \in [N]} \varphi_{\sigma}(0) \cdot \tfrac{B_{\sigma}'(0)}{B_{\sigma}(0)} \cdot \calB(0) - \int_{0}^{\lambda} \calB'(b) \cdot \d b
\qquad\qquad \mbox{$\calB' = \big(\prod_{\sigma} B_{\sigma}\big)' = \calB \cdot \sum_{\sigma} \frac{B'_{\sigma}}{B_{\sigma}}$} \\
& = -\sum_{\sigma \in [N]} \phi_{\sigma,\, 0} \cdot \tfrac{B_{\sigma}'(0)}{B_{\sigma}(0)} \cdot \calB(0) - \big(1 - \calB(0)\big).
\qedhere
\end{align*}
\end{proof}
\begin{fact}
\label{fact:slice:opt}
\begin{flushleft}
The derivative $\mathfrak{O}'(0) = -\big(1 - \calB(0)\big) - \sum_{\sigma \in [N]} \big(\frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v\big) \leq 0$, where the first-order value distribution (\Cref{lem:translate_welfare}) $\calV(v) = \prod_{(\sigma,\, j) \,\in\, \bPhi} \big(1 - \big(1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big)$.
\end{flushleft}
\end{fact}
\begin{proof}
Given the \blackref{slice_input} partition $\bLambda = [0 \equiv \lambda_{0},\, \lambda_{1}) \cup [\lambda_{1},\, \lambda_{2}) \cup \dots \cup [\lambda_{m},\, \lambda_{m + 1} \equiv \lambda]$, let us consider a small enough $t \in [0,\, \lambda_{1})$.
Namely, the \blackref{slice_input} $H^{\downarrow} \otimes \bB \otimes L$ (Line~\ref{alg:slice:spectrum}) shifts by a distance $-t$ and only the shifted index-$0$ piece $[-t,\, \lambda_{1} - t)$ is restricted to the nonnegative bids $b \in [0,\, \lambda_{1} - t)$.
We consider the two-variable functions $f_{\sigma}(v,\, t) = 1 - \big(1 - \tfrac{B_{\sigma}(t)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator(v < \phi_{\sigma,\, 0})$ for $\sigma \in [N]$ that account for the index-$0$ piece and the function $\calS(v) = \prod_{(\sigma,\, j) \,\in\, [N] \times [m]} \big(1 - \big(1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big)$ that accounts for the other pieces. The optimal \blackref{slice_interim} formula (\Cref{lem:translate_welfare}; \blackref{re:floorness} $P^{(t)\downarrow} \equiv 0$) can be written as $\mathfrak{O}(t) = \int_{t}^{+\infty} \big(1 - \calS(v) \cdot \prod_{\sigma \in [N]} f_{\sigma}(v,\, t)\big) \cdot \d v$.
\begin{comment}
\begin{align*}
\mathfrak{O}(t)
& = \int_{0}^{+\infty} \Big(1 - \prod_{\sigma \in [N]} \big(1 - \big(1 - \tfrac{B_{\sigma}^{(t)}(0)}{B_{\sigma}^{(t)}(\lambda_{1} - t)}\big) \cdot \indicator(v < \phi_{\sigma,\, 0}^{(t)})\big) \cdot \calS(v + t)\Big) \cdot \d v
\hspace{3.68cm} \\
& = \int_{t}^{+\infty} \Big(1 - \prod_{\sigma \in [N]} \big(1 - \big(1 - \tfrac{B_{\sigma}(t)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator(v < \phi_{\sigma,\, 0})\big) \cdot \calS(v)\Big) \cdot \d v.
\qquad\qquad \text{Line~\ref{alg:slice:spectrum}}
\end{align*}
\end{comment}
Notice that $\big(\frac{\partial f_{\sigma}(v,\, t)}{\partial t}\big)\bigmid_{t = 0} = \tfrac{B'_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}$ for $v < \phi_{\sigma,\, 0}$, while $\big(\frac{\partial f_{\sigma}(v,\, t)}{\partial t}\big)\bigmid_{t = 0} = 0$ for $v \geq \phi_{\sigma,\, 0}$. Further, we have $\calS(0) = \prod_{(\sigma,\, j) \,\in\, [N] \times [m]} \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})} = \prod_{\sigma \in [N]} \frac{B_{\sigma}(\lambda_{1})}{B_{\sigma}(\lambda_{m + 1})} = \prod_{\sigma \in [N]} B_{\sigma}(\lambda_{1}) = \calB(\lambda_{1})$.
Accordingly (cf.\ \Cref{footnote:slice:fpa}), the derivative $\mathfrak{O}'(0)$ is given by
\begin{align}
\mathfrak{O}'(0)
& = -\Big(1 - \calS(0) \cdot \prod_{\sigma \in [N]} f_{\sigma}(0,\, 0)\Big)
-\sum_{\sigma \in [N]} \Big(\tfrac{B'_{\sigma}(0)}{B_{\sigma}(\lambda_{1})} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calS(v) \cdot \prod_{k \neq \sigma} f_{k}(v,\, 0) \cdot \d v\Big)
\nonumber \\
& = -\big(1 - \calB(0)\big) - \sum_{\sigma \in [N]} \Big(\tfrac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calS(v) \cdot \prod_{k \in [N]} f_{k}(v,\, 0) \cdot \d v\Big)
\label{eq:slice:S1}\tag{S1} \\
& = -\big(1 - \calB(0)\big) - \sum_{\sigma \in [N]} \Big(\tfrac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v\Big).
\nonumber
\end{align}
\eqref{eq:slice:S1}: The first term follows from $\calS(0) \cdot \prod_{\sigma} f_{\sigma}(0,\, 0) = \calS(0) \cdot \prod_{\sigma \in [N]} \big(\tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big) = \calS(0) \cdot \frac{\calB(0)}{\calB(\lambda_{1})} = \calB(0)$. \\
The second term follows from $f_{\sigma}(v,\, 0) = \tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}$ for $v \in [0,\, \phi_{\sigma,\, 0})$.
\end{proof}
Now instead of the {\em floor} \blackref{slice_minimizer} $H^{(t^{*})\downarrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})} = H^{\downarrow} \otimes \bB \otimes L$ (with $P^{\downarrow} \equiv 0$), consider the paired {\em ceiling} \blackref{slice_output} $H^{(t^{*})\uparrow} \otimes \bB^{(t^{*})} \otimes L^{(t^{*})} = H^{\uparrow} \otimes \bB \otimes L$ (with $P^{\uparrow} \equiv \phi_{H,\, 0}$).
We can write its auction {\textsf{Social Welfare}} ${\sf FPA}(\blackref{slice_output}) = \mathfrak{F}(0) + \Delta_{{\sf FPA}}$ (\Cref{lem:pseudo_welfare}), using $\Delta_{{\sf FPA}} = \phi_{H,\, 0} \cdot \calB(0)$,
and its optimal {\textsf{Social Welfare}} ${\sf OPT}(\blackref{slice_output}) = \mathfrak{O}(0) + \Delta_{{\sf OPT}}$ (\Cref{lem:translate_welfare}), using $\Delta_{{\sf OPT}} = \int_{0}^{\phi_{H,\, 0}} \calV(v) \cdot \d v$.
Furthermore, the \blackref{slice_minimizer} $t^{*} = 0$ induces the (right-derivative) optimality condition $\mathfrak{P}'(0) = \frac{\mathfrak{F}'(0) \cdot \mathfrak{O}(0) - \mathfrak{F}(0) \cdot \mathfrak{O}'(0)}{\mathfrak{O}(0)^{2}} \geq 0$, which rearranges to $\frac{\mathfrak{F}(0)}{\mathfrak{O}(0)} \geq \frac{|\mathfrak{F}'(0)|}{|\mathfrak{O}'(0)|}$ since both derivatives are nonpositive ({\bf \Cref{fact:slice:fpa,fact:slice:opt}}). We then deduce that
\begin{align}
\mathfrak{P}(0)
~=~ \frac{\mathfrak{F}(0)}{\mathfrak{O}(0)}
~\geq~ \frac{|\mathfrak{F}'(0)|}{|\mathfrak{O}'(0)|}
& ~=~ \frac{\big(1 - \calB(0)\big) + \sum_{\sigma \in [N]} B'_{\sigma}(0) \big/ B_{\sigma}(0) \cdot \phi_{\sigma,\, 0} \cdot \calB(0) \hspace{0.665cm}}
{\big(1 - \calB(0)\big) + \sum_{\sigma \in [N]} B'_{\sigma}(0) \big/ B_{\sigma}(0) \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v} \hspace{.7cm}
\label{eq:slice:S2}\tag{S2} \\
& ~\geq~ \frac{\sum_{\sigma \in [N]} B'_{\sigma}(0) \big/ B_{\sigma}(0) \cdot \phi_{\sigma,\, 0} \cdot \calB(0) \hspace{0.665cm}}
{\sum_{\sigma \in [N]} B'_{\sigma}(0) \big/ B_{\sigma}(0) \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v}
\label{eq:slice:S3}\tag{S3} \\
& ~\geq~ \frac{\phi_{H,\, 0} \cdot \calB(0)}
{\int_{0}^{\phi_{H,\, 0}} \calV(v) \cdot \d v}
~=~ \frac{\Delta_{{\sf FPA}}}{\Delta_{{\sf OPT}}}
\label{eq:slice:S4}\tag{S4}
\end{align}
\eqref{eq:slice:S2}: Apply {\bf \Cref{fact:slice:fpa,fact:slice:opt}}. \\
\eqref{eq:slice:S3}: The two dropped terms $\big(1 - \calB(0)\big)$ are equal; since $\RHS \text{ of } \eqref{eq:slice:S2} \leq \mathfrak{P}(0) < 1$, dropping them (weakly) decreases the ratio. \\
\eqref{eq:slice:S4}: By \blackref{re:layeredness} $\phi_{H,\, 0} \geq \phi_{1,\, 0} \geq \dots \geq \phi_{n,\, 0} \geq \phi_{L,\, 0}$, replace every $\phi_{\sigma,\, 0}$ with the highest ceiling value $\phi_{H,\, 0}$:
the ratio $\big(\phi \cdot \calB(0)\big) \big/ \big(\int_{0}^{\phi} \calV(v) \cdot \d v\big)$ is decreasing in $\phi > 0$ because the CDF $\calV(v)$ is increasing, so each per-$\sigma$ ratio is at least the one at $\phi = \phi_{H,\, 0}$, and so is their mediant.
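For completeness, the mediant-type estimate behind \eqref{eq:slice:S4} can be recorded explicitly (a routine verification; the shorthand $w_{\sigma}$ below is introduced only for this display):
\begin{align*}
\frac{\sum_{\sigma \in [N]} w_{\sigma} \cdot \phi_{\sigma,\, 0} \cdot \calB(0)}{\sum_{\sigma \in [N]} w_{\sigma} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v}
~\geq~ \min_{\sigma \in [N]} \frac{\phi_{\sigma,\, 0} \cdot \calB(0)}{\int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v}
~\geq~ \frac{\phi_{H,\, 0} \cdot \calB(0)}{\int_{0}^{\phi_{H,\, 0}} \calV(v) \cdot \d v},
\qquad w_{\sigma} ~\eqdef~ \frac{B'_{\sigma}(0)}{B_{\sigma}(0)} ~\geq~ 0,
\end{align*}
where the first step is the mediant inequality and the second step uses the monotonicity of $\phi \mapsto \big(\phi \cdot \calB(0)\big) \big/ \big(\int_{0}^{\phi} \calV(v) \cdot \d v\big)$ together with $\phi_{\sigma,\, 0} \leq \phi_{H,\, 0}$.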
We thus conclude that ${\sf PoA}(\blackref{slice_output})
= \frac{\mathfrak{F}(0) + \Delta_{{\sf FPA}}}{\mathfrak{O}(0) + \Delta_{{\sf OPT}}}
\leq \frac{\mathfrak{F}(0)}{\mathfrak{O}(0)}
= {\sf PoA}(\blackref{slice_minimizer})
\leq {\sf PoA}(\blackref{slice_input})$.
I.e., the {\em ceiling} \blackref{slice_output} yields a (weakly) worse bound. {\bf \Cref{lem:slice:poa}} follows then.
This finishes the proof.
\end{proof}
\begin{comment}
\vspace{2cm}
\begin{align*}
\mathfrak{O}'(0)
& ~=~ -\Big(1 - \prod_{(\sigma,\, j) \,\in\, [N] \times [m]} \big(\tfrac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}\big) \cdot \prod_{\sigma \in [N]} \big(\tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big)\Big) \\
& \phantom{~=~} - \int_{0}^{+\infty} \calI(v) \\
& \phantom{~=~ \int_{0}^{+\infty} \Big(1~\,} \cdot \prod_{\sigma \in [N]} \big(1 - \big(1 - \tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big) \\
& \cdot \sum_{\sigma \in [N]} \frac{\tfrac{B'_{\sigma}(0)}{B_{\sigma}(\lambda_{1})} \cdot \indicator(v < \phi_{\sigma,\, j})}{1 - \big(1 - \tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})} \cdot \d v \\
& ~=~ -\big(1 - \calB(0)\big) \\
& \phantom{~=~} - \sum_{\sigma \in [N]} \frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\phi_{\sigma,\, j}} \calI(v) \\
& \phantom{~=~ \int_{0}^{+\infty} \Big(1~\,} \cdot \prod_{\sigma \in [N]} \big(1 - \big(1 - \tfrac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big) \cdot \d v \\
\end{align*}
\begin{align*}
\mathfrak{O}'(t)
& ~=~ -\Big(1 - \prod_{(\sigma,\, j) \,\in\, \bPhi^{(t)}} \big(1 - \big(1 - \tfrac{B_{\sigma}(\lambda_{j} + t)}{B_{\sigma}(\lambda_{j + 1} + t)}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v \\
& \phantom{~=~} -\int_{t}^{+\infty} \Big(\prod_{(\sigma,\, j) \,\in\, \bPhi^{(t)}} \big(1 - \big(1 - \tfrac{B_{\sigma}(\lambda_{j} + t)}{B_{\sigma}(\lambda_{j + 1} + t)}\big) \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \sum_{}\frac{\big(\tfrac{B_{\sigma}(\lambda_{j} + t)}{B_{\sigma}(\lambda_{j + 1} + t)}\big)' \cdot \indicator(v < \phi_{\sigma,\, j})}{1 - \big(1 - \tfrac{B_{\sigma}(\lambda_{j} + t)}{B_{\sigma}(\lambda_{j + 1} + t)}\big) \cdot \indicator(v < \phi_{\sigma,\, j})} \cdot \d v
\end{align*}
\vspace{2cm}
\color{blue}
We consider a small enough $\eta \in [0 = \lambda_{0},\, \lambda_{1})$.
we can write the right derivative of $\mathfrak{P}(\eta) = \mathfrak{F}(\eta) \big/ \mathfrak{O}(\eta)$ at the nil $\eta = 0$ as follows:
\begin{align*}
V_{\sigma}^{(\eta)}(v)
& ~=~ \bigg(1 - \Big(1 - \frac{B_{\sigma}(\eta)}{B_{\sigma}(\lambda_{1})}\Big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \eta\big)\bigg) \\
& \phantom{~=~}\qquad \cdot \prod_{j \in [m]} \bigg(1 - \Big(1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}\Big) \cdot \indicator\big(v < \phi_{\sigma,\, j} - \eta\big)\bigg) \\
& ~=~ \frac{1 - \big(1 - \frac{B_{\sigma}(\eta)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \eta\big)}
{1 - \big(1 - \frac{B_{\sigma}(0)}{B_{\sigma}(\lambda_{1})}\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \eta\big)}
\cdot V_{\sigma}(v + \eta) \\
& ~=~ \Big(1 + \frac{B_{\sigma}(\eta) - B_{\sigma}(0)}{B_{\sigma}(0)} \cdot \indicator\big(v + \eta < \phi_{\sigma,\, 0}\big)\Big) \cdot V_{\sigma}(v + \eta)
\end{align*}
\begin{align*}
\mathfrak{O}(0) - \mathfrak{O}(\eta)
& ~=~ \int_{0}^{+\infty} \Big(1 - \calV(v)\Big) \cdot \d v
~-~ \int_{0}^{+\infty} \Big(1 - \calV^{(\eta)}(v)\Big) \cdot \d v \\
& ~=~ \int_{0}^{\eta} \Big(1 - \calV(v)\Big) \cdot \d v
~+~ \int_{\eta}^{+\infty} \bigg(\prod_{\sigma} \Big(1 + \frac{B_{\sigma}(\eta) - B_{\sigma}(0)}{B_{\sigma}(0)} \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)\Big) - 1\bigg) \cdot \calV(v) \cdot \d v
\end{align*}
For a small enough $\eta \in (0 = \lambda_{0},\, \lambda_{1}]$
\begin{align*}
\mathfrak{O}(0) - \mathfrak{O}(\eta)
& ~=~ \Big(1 - \calV(0)\Big) \cdot \eta
~+~ \sum_{\sigma} \int_{0}^{+\infty} \Big(\frac{B_{\sigma}'(0)}{B_{\sigma}(0)} \cdot \eta \pm O(\eta^{2})\Big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big) \cdot \calV(v) \cdot \d v
\end{align*}
\vspace{2cm}
For $\eta \in [0 = \lambda_{0},\, \lambda_{1})$, we have
\begin{align*}
\mathfrak{F}(\eta)
& ~=~ \int_{\eta}^{\lambda} \bigg(\sum_{\sigma} \big(\varphi_{\sigma}(b) - \eta\big) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)}\bigg) \cdot \calB(b) \cdot \d b
\end{align*}
We emphasize that the $\eta$ appears in the bid-to-value mapping $\tilde{\varphi}_{\sigma}(b) = \varphi_{\sigma}(b) - \eta$ and the left endpoint of the interval of integration $b \in [\eta,\, \lambda]$.
\begin{align*}
\mathfrak{F}'(\eta)
& ~=~ -\bigg(\sum_{\sigma} \big(\varphi_{\sigma}(\eta) - \eta\big) \cdot \frac{B'_{\sigma}(\eta)}{B_{\sigma}(\eta)}\bigg) \cdot \calB(\eta)
~-~ \int_{\eta}^{\lambda} \bigg(\sum_{\sigma} \frac{B'_{\sigma}(b)}{B_{\sigma}(b)}\bigg) \cdot \calB(b) \cdot \d b \\
& ~=~ -\bigg(\sum_{\sigma} \big(\varphi_{\sigma}(\eta) - \eta\big) \cdot \frac{B'_{\sigma}(\eta)}{B_{\sigma}(\eta)}\bigg) \cdot \calB(\eta)
~-~ \Big(1 - \calB(\eta)\Big).
\end{align*}
\begin{align*}
\mathfrak{F}'(0)
& ~=~ -\bigg(\sum_{\sigma} \varphi_{\sigma}(0) \cdot \frac{B'_{\sigma}(0)}{B_{\sigma}(0)}\bigg) \cdot \calB(0)
~-~ \Big(1 - \calB(0)\Big).
\end{align*}
\begin{align*}
\mathfrak{O}(0) ~=~ \int_{0}^{+\infty} \bigg(1 - \prod_{(\sigma,\, j) \in ([n] \cup \{L\}) \times [0:\, m]} \Big(1 - \omega_{\sigma,\, j} \cdot \indicator\big(v < \phi_{\sigma,\, j}\big)\Big)\bigg) \cdot \d v
\end{align*}
\begin{align*}
\mathfrak{O}(\eta) ~=~ \int_{0}^{+\infty} \Big(1 - \calV^{(\eta)}(v)\Big) \cdot \d v
\end{align*}
\begin{align*}
\mathfrak{O}(\eta) ~=~ \int_{0}^{+\infty} \Big(1 - \calV^{(\eta)}(v)\Big) \cdot \d v
\end{align*}
index-$0$ interval
\begin{align*}
& 1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big) \\
\Longrightarrow \qquad & 1 - \big(1 - B_{\sigma}(\d\eta) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \d\eta\big) \\
& = 1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \d\eta\big) + \big(B_{\sigma}(\d\eta) - B_{\sigma}(0)\big) \big/ B_{\sigma}(\lambda_{1}) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \d\eta\big)
\end{align*}
each index-$j \in [m]$ interval
\begin{align*}
& 1 - \big(1 - B_{\sigma}(\lambda_{j}) \big/ B_{\sigma}(\lambda_{j + 1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, j}\big) \\
\Longrightarrow \qquad & 1 - \big(1 - B_{\sigma}(\lambda_{j}) \big/ B_{\sigma}(\lambda_{j + 1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, j} - \d\eta\big)
\end{align*}
\begin{align*}
\mathfrak{O}(\d\eta)
& ~=~ \int_{0}^{+\infty} \bigg(1 - \prod_{\sigma} \Big(1 - \big(1 - B_{\sigma}(\d\eta) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0} - \d\eta\big)\Big) \cdot \bigg. \\
& \phantom{~=~ \int_{0}^{+\infty} \bigg(1 - } \bigg. \prod_{(\sigma,\, j) \in ([n] \cup \{L\}) \times [m]} \Big(1 - \big(1 - B_{\sigma}(\lambda_{j}) \big/ B_{\sigma}(\lambda_{j + 1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, j} - \d\eta\big)\Big)\bigg) \cdot \d v \\
& ~=~ \int_{\d\eta}^{+\infty} \bigg(1 - \prod_{\sigma} \Big(1 - \big(1 - B_{\sigma}(\d\eta) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)\Big) \cdot \bigg. \\
& \phantom{~=~ \int_{0}^{+\infty} \bigg(1 - } \bigg. \prod_{(\sigma,\, j) \in ([n] \cup \{L\}) \times [m]} \Big(1 - \big(1 - B_{\sigma}(\lambda_{j}) \big/ B_{\sigma}(\lambda_{j + 1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, j}\big)\Big)\bigg) \cdot \d v \\
& ~=~ \int_{\d\eta}^{+\infty} \bigg(1 - \calV(v) \cdot \prod_{\sigma} \Big(\frac{1 - \big(1 - B_{\sigma}(\d\eta) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}{1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}\Big)\bigg) \cdot \d v \\
& ~=~ \int_{\d\eta}^{+\infty} \bigg(1 - \calV(v) \cdot \prod_{\sigma} \Big(1 + \frac{\big(B_{\sigma}(\d\eta) - B_{\sigma}(0)\big) \big/ B_{\sigma}(\lambda_{1}) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}{1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}\Big)\bigg) \cdot \d v
\end{align*}
\begin{align*}
\mathfrak{O}(0) - \mathfrak{O}(\d\eta)
& ~=~ \int_{0}^{\d\eta} \Big(1 - \calV(v)\Big) \cdot \d v \\
& \phantom{~=~} ~+~ \int_{\d\eta}^{+\infty} \bigg(\prod_{\sigma} \Big(1 + \frac{\big(B_{\sigma}(\d\eta) - B_{\sigma}(0)\big) \big/ B_{\sigma}(\lambda_{1}) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}{1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big) \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)}\Big) - 1\bigg) \cdot \calV(v) \cdot \d v \\
& ~=~ \int_{0}^{\d\eta} \Big(1 - \calV(v)\Big) \cdot \d v \\
& \phantom{~=~} ~+~ \int_{\d\eta}^{+\infty} \bigg(\prod_{\sigma} \Big(1 + \frac{\big(B_{\sigma}(\d\eta) - B_{\sigma}(0)\big) \big/ B_{\sigma}(\lambda_{1})}{1 - \big(1 - B_{\sigma}(0) \big/ B_{\sigma}(\lambda_{1})\big)} \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)\Big) - 1\bigg) \cdot \calV(v) \cdot \d v \\
& ~=~ \int_{0}^{\d\eta} \Big(1 - \calV(v)\Big) \cdot \d v \\
& \phantom{~=~} ~+~ \int_{\d\eta}^{+\infty} \bigg(\prod_{\sigma} \Big(1 + \frac{B_{\sigma}(\d\eta) - B_{\sigma}(0)}{B_{\sigma}(0)} \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)\Big) - 1\bigg) \cdot \calV(v) \cdot \d v \\
& ~=~ \int_{0}^{\d\eta} \Big(1 - \calV(v)\Big) \cdot \d v \\
& \phantom{~=~} ~+~ \int_{\d\eta}^{+\infty} \bigg(\sum_{\sigma} \frac{B_{\sigma}(\d\eta) - B_{\sigma}(0)}{B_{\sigma}(0)} \cdot \indicator\big(v < \phi_{\sigma,\, 0}\big)\bigg) \cdot \calV(v) \cdot \d v ~\pm~ O(\d\eta^{2})
\end{align*}
\begin{align*}
\mathfrak{O}(0) - \mathfrak{O}(\d\eta)
& ~=~ \int_{0}^{\d\eta} \Big(1 - \calV(v)\Big) \cdot \d v \\
& \phantom{~=~} ~+~ \sum_{\sigma} \bigg(\frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \d\eta \cdot \int_{\d\eta}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v\bigg) ~\pm~ O(\d\eta^{2})
\end{align*}
\begin{align*}
\mathfrak{O}'(0)
~=~ -\Big(1 - \calV(0)\Big) ~-~ \sum_{\sigma} \bigg(\frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\phi_{\sigma,\, 0}} \calV(v) \cdot \d v\bigg)
\end{align*}
\begin{align*}
\mathfrak{O}(\eta_{1}) - \mathfrak{O}(\eta_{2})
& ~=~ \int_{0}^{+\infty} \bigg(1 - \prod_{\sigma} V_{\sigma}^{(\eta_{1})}(v)\bigg) \cdot \d v
~-~ \int_{0}^{+\infty} \bigg(1 - \prod_{\sigma} V_{\sigma}^{(\eta_{2})}(v)\bigg) \cdot \d v \\
& ~\leq~ \sum_{\sigma} \int_{0}^{+\infty} \Big(V_{\sigma}^{(\eta_{2})}(v) - V_{\sigma}^{(\eta_{1})}(v)\Big) \cdot \d v \\
& ~\leq~ \sum_{\sigma} \Big(\Ex\big[\, V_{\sigma}^{(\eta_{1})} \,\big] - \Ex\big[\, V_{\sigma}^{(\eta_{2})} \,\big]\Big)
\end{align*}
\vspace{2cm}
\begin{align*}
{\sf FPA}(H \otimes \bB \otimes L)
& ~=~ \E[P] \cdot \calB(0) + \int_{0}^{\lambda} \bigg(\sum_{i \in [n]} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} + \varphi_{L}(b) \cdot \frac{L'(b)}{L(b)}\bigg) \cdot \calB(b) \cdot \d b, \\
{\sf OPT}(H \otimes \bB \otimes L)
& ~=~ \int_{0}^{+\infty} \bigg(1 - \calV(v)\bigg) \cdot \d v
~=~ \int_{0}^{+\infty} \bigg(1 - P(v) \cdot \prod_{\sigma} B_{\sigma}(\varphi_{\sigma}^{-1}(v))\bigg) \cdot \d v
\end{align*}
\begin{align*}
{\sf FPA}^{\downarrow}(0)
~=~ {\sf FPA}(H^{\downarrow} \otimes \bB \otimes L)
~=~ \int_{0}^{\lambda} \bigg(\sum_{i \in [n]} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} + \varphi_{L}(b) \cdot \frac{L'(b)}{L(b)}\bigg) \cdot \calB(b) \cdot \d b.
\end{align*}
\begin{align*}
{\sf FPA}^{\downarrow}(\eta)
& ~=~ \int_{\eta}^{\lambda} \bigg(\sum_{i \in [n]} \big(\varphi_{i}(b) - \eta\big) \cdot \frac{B'_{i}(b)}{B_{i}(b)} + \big(\varphi_{L}(b) - \eta\big) \cdot \frac{L'(b)}{L(b)}\bigg) \cdot \calB(b) \cdot \d b \\
& ~=~ \int_{\eta}^{\lambda} \bigg(\sum_{i \in [n]} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} + \varphi_{L}(b) \cdot \frac{L'(b)}{L(b)}\bigg) \cdot \calB(b) \cdot \d b
~-~ \eta \cdot \int_{\eta}^{\lambda}\calB'(b) \cdot \d b \\
& ~=~ \int_{\eta}^{\lambda} \bigg(\sum_{i \in [n]} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} + \varphi_{L}(b) \cdot \frac{L'(b)}{L(b)}\bigg) \cdot \calB(b) \cdot \d b
~-~ \eta \cdot \big(1 - \calB(\eta)\big)
\end{align*}
\begin{align*}
\frac{\d}{\d \eta} \Big({\sf FPA}^{\downarrow}(\eta)\Big)
~=~ -\bigg(\sum_{i \in [n]} \varphi_{i}(\eta) \cdot \frac{B'_{i}(\eta)}{B_{i}(\eta)} + \varphi_{L}(\eta) \cdot \frac{L'(\eta)}{L(\eta)}\bigg) \cdot \calB(\eta)
~-~ \big(1 - \calB(\eta)\big) + \eta \cdot \calB'(\eta)
\end{align*}
\begin{align*}
\frac{\d}{\d \eta} \Big({\sf FPA}^{\downarrow}(\eta)\Big) \Bigmid_{\eta = 0}
~=~ -\bigg(\sum_{\sigma \in [n] \cup \{L\}} \varphi_{\sigma}(0) \cdot \frac{B'_{\sigma}(0)}{B_{\sigma}(0)}\bigg) \cdot \calB(0)
~-~ \big(1 - \calB(0)\big)
\end{align*}
\begin{align*}
{\sf OPT}^{\downarrow}(\d\eta) - {\sf OPT}^{\downarrow}(0)
& ~=~ -\int_{0}^{\varphi_{1}(\lambda)} \bigg(1 - \calV(v)\bigg) \cdot \d v \\
& ~+~ \int_{0}^{\varphi_{1}(\lambda)} \bigg(1 - \calV(v) \cdot \prod_{\sigma} (1 + \frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \d\eta \cdot \indicator(v < \varphi_{\sigma}(0)))\bigg) \cdot \d v \\
& ~-~ \big(1 - \calV(0)\big) \cdot \d\eta \\
& ~=~ -\sum_{\sigma} \frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \d\eta \cdot \int_{0}^{\varphi_{1}(\lambda)} \calV(v) \cdot \indicator(v < \varphi_{\sigma}(0)) \cdot \d v \\
& ~-~ \big(1 - \calV(0)\big) \cdot \d\eta
\end{align*}
\begin{align*}
\frac{\d}{\d \eta} \Big({\sf OPT}^{\downarrow}(\eta)\Big) \Bigmid_{\eta = 0}
~=~ -\bigg(\sum_{\sigma} \frac{B'_{\sigma}(0)}{B_{\sigma}(0)} \cdot \int_{0}^{\varphi_{\sigma}(0)} \calV(v) \cdot \d v\bigg)
~-~ \big(1 - \calV(0)\big)
\end{align*}
\begin{fact}
If
\[
\frac{{\sf FPA}^{\downarrow}(\eta) - |\d{\sf FPA}^{\downarrow}(\eta)|}{{\sf OPT}^{\downarrow}(\eta) - |\d{\sf OPT}^{\downarrow}(\eta)|}
~\geq~ \frac{{\sf FPA}^{\downarrow}(\eta)}{{\sf OPT}^{\downarrow}(\eta)}
\qquad\iff\qquad
\frac{|\d{\sf FPA}^{\downarrow}(\eta)|}{|\d{\sf OPT}^{\downarrow}(\eta)|}
~\leq~ \frac{{\sf FPA}^{\downarrow}(\eta)}{{\sf OPT}^{\downarrow}(\eta)}
\]
then
\[
\frac{{\sf FPA}^{\uparrow}(\eta)}{{\sf OPT}^{\uparrow}(\eta)} ~\leq~ \frac{{\sf FPA}^{\downarrow}(\eta)}{{\sf OPT}^{\downarrow}(\eta)}
\qquad\iff\qquad
\frac{{\sf FPA}^{\uparrow}(\eta) - {\sf FPA}^{\downarrow}(\eta)}{{\sf OPT}^{\uparrow}(\eta) - {\sf OPT}^{\downarrow}(\eta)} ~\leq~ \frac{{\sf FPA}^{\downarrow}(\eta)}{{\sf OPT}^{\downarrow}(\eta)}
\]
\end{fact}
\[
{\sf FPA}^{\uparrow}(\eta) - {\sf FPA}^{\downarrow}(\eta)
\]
If ${\sf PoA}(P^{\uparrow}) > {\sf PoA}(P^{\downarrow})$, then
\begin{align*}
\frac{{\sf FPA}(P^{\uparrow}) - {\sf FPA}(P^{\downarrow})}{{\sf OPT}(P^{\uparrow}) - {\sf OPT}(P^{\downarrow})}
& ~=~ \frac{\varphi_{1}(0) \cdot \calB(0)}{\int_{0}^{\varphi_{1}(0)} \calV(v) \cdot \d v}
\end{align*}
\color{black}
\newpage
\end{comment}
\subsection{{\text{\tt Collapse}}: From ceiling to strong ceiling}
\label{subsec:collapse}
Following the previous subsection, we are able to handle {\em floor} pseudo instances $\in \mathbb{B}_{\sf valid}^{\downarrow}$.
Therefore,
we can restrict our attention to {\em ceiling} pseudo instances $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$.
Indeed, every {\em ceiling} pseudo instance can be transformed into a {\em strong ceiling} pseudo instance, through the {\blackref{alg:collapse}} reduction (see \Cref{fig:alg:collapse,fig:collapse} for its description and a visual aid). For ease of reference, let us rephrase the definition of {\em strong ceiling} pseudo instances.
\begin{restate}[\Cref{def:strong}]
{\em A {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ from \Cref{def:ceiling_floor:restate} is further called {\em strong ceiling} when (\term[\textbf{non-monopoly collapse}]{restate:pro:collapse}) each non-monopoly bidder $i \in [n]$ (if existential) exactly takes the pseudo bid-to-value mapping $\varphi_{i}(b) \equiv \varphi_{L}(b)$ before the jump bid $b \in [0,\, \lambda^{*})$; therefore in each before-jump column $j \in [0: j^{*} - 1]$ of the bid-to-value table $\bPhi = [\phi_{\sigma,\, j}]$, all of the non-monopoly entries are the same as the pseudo entry $\phi_{1,\, j} = \dots = \phi_{n,\, j} = \phi_{L,\, j}$.
That is, before the jump bid $b \in [0,\, \lambda^{*})$, each non-monopoly bidder $i \in [n]$ has a {\em constant} bid distribution $B_{i}(b) = B_{i}(\lambda^{*})$ and thus has no effect.}
\end{restate}
\begin{intuition*}
Regarding a ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$, in a sense of \Cref{lem:ceiling_welfare}, the optimal {\textsf{Social Welfare}} formula ${\sf OPT}(H^{\uparrow} \otimes \bB \otimes L)$ turns out to be irrelevant to the non-ultra-ceiling entries $\phi_{\sigma,\, j} \leq \phi_{H,\, 0}$ in the bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$ (excluding the ceiling value $\phi_{H,\, 0}$ itself).
Conceivably, we would modify those non-ultra-ceiling entries $\phi_{\sigma,\, j} \leq \phi_{H,\, 0}$ to minimize the auction {\textsf{Social Welfare}} ${\sf FPA}(H^{\uparrow} \otimes \bB \otimes L)$, while preserving \blackref{re:monotonicity} etc.
\end{intuition*}
\begin{lemma}[{\textsf{Social Welfares}}]
\label{lem:ceiling_welfare}
Following \Cref{lem:translate_welfare}, for a ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L$, the expected optimal {\textsf{Social Welfare}} can be written as
\begin{align*}
{\sf OPT}(H^{\uparrow} \otimes \bB \otimes L)
~=~ \int_{0}^{+\infty} \bigg(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \Big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\Big)\bigg) \cdot \d v.
\end{align*}
where the subtable $\bPhi^{*} \subseteq \bPhi$ includes all of the ultra-ceiling entries $\bPhi^{*} \supseteq \big\{(\sigma,\, j) \in \bPhi: \phi_{\sigma,\, j} > \phi_{H,\, 0}\big\}$ but otherwise is arbitrary.
\end{lemma}
\begin{proof}
Following \Cref{lem:translate_welfare} (with \blackref{re:ceilingness} $P^{\uparrow} \equiv \phi_{H,\, 0}$), the optimal {\textsf{Social Welfare}} is given by ${\sf OPT}(H^{\uparrow} \otimes \bB \otimes L)
= \int_{0}^{+\infty} \big(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \prod_{(\sigma,\, j) \,\in\, \bPhi} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\big) \cdot \d v$.
Each non-ultra-ceiling entry $\phi_{\sigma,\, j} \leq \phi_{H,\, 0}$ has no effect because of the leading {\em indicator} term $\indicator(v \geq \phi_{H,\, 0})$.
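Concretely, whenever $\phi_{\sigma,\, j} \leq \phi_{H,\, 0}$, the event $v \geq \phi_{H,\, 0}$ forces $\indicator(v < \phi_{\sigma,\, j}) = 0$ and thus
\begin{align*}
\indicator(v \geq \phi_{H,\, 0}) \cdot \Big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\Big)
~=~ \indicator(v \geq \phi_{H,\, 0}),
\end{align*}
so such a factor can be freely dropped from or inserted into the product, which is exactly the claimed freedom in choosing the subtable $\bPhi^{*}$.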
\end{proof}
The \blackref{alg:collapse} reduction implements the above intuition, and its performance guarantees are summarized in \Cref{lem:collapse}.
\begin{lemma}[{\text{\tt Collapse}}; \Cref{fig:alg:collapse}]
\label{lem:collapse}
Under reduction $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \gets \text{\tt Collapse}(H^{\uparrow} \otimes \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:collapse:property}
The output is a strong ceiling pseudo instance $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$; the monopolist $H^{\uparrow}$ is unmodified.
\item\label{lem:collapse:potential}
The potential keeps the same $\Psi(H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) = \Psi(H^{\uparrow} \otimes \bB \otimes L)$.
\item\label{lem:collapse:poa}
A (weakly) worse bound is yielded ${\sf PoA}(H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(H^{\uparrow} \otimes \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Collapse}]{alg:collapse}(H \otimes \bB \otimes L)$
\begin{flushleft}
{\bf Input:} A {\em ceiling} pseudo instance $H \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$
\white{\term[\text{\em input}]{collapse_input}}
\hfill
\Cref{def:ceiling_floor:restate}
\vspace{.05in}
{\bf Output:} A {\em strong ceiling} pseudo instance $H \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$.
\white{\term[\text{\em output}]{collapse_output}}
\hfill
\Cref{def:strong}
\begin{enumerate}
\item\label{alg:collapse:distribution}
Define $\tilde{B}_{i}(b) \equiv \max\big(B_{i}(b),\, B_{i}(\lambda^{*})\big)$ for $i \in [n]$ and $\tilde{L}(b) \equiv L(b) \cdot \prod_{i \in [n]} \big(B_{i}(b) \big/ \tilde{B}_{i}(b)\big)$.
\item[] \OliveGreen{$\triangleright$ The jump bid is one of the partition bids $\lambda^{*} \in \{\lambda_{1} < \dots < \lambda_{m + 1} \equiv \lambda\} \subseteq (0,\, \lambda]$ \\
except the nil bid $\lambda_{0} \equiv 0$ (\Cref{def:jump,lem:jump,rem:jump}).}
\item\label{alg:collapse:output}
{\bf Return} $H \otimes \tilde{\bB} \otimes \tilde{L}$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Collapse}} reduction
\label{fig:alg:collapse}}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:collapse:input}
{The {\em ceiling} input $H \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$}]{
\includegraphics[width = .49\textwidth]
{collapse_input.png}}
\hfill
\subfloat[\label{fig:collapse:output}
{The {\em strong ceiling} output $H \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow}$}]{
\includegraphics[width = .49\textwidth]
{collapse_output.png}}
\caption{{Demonstration for the {\text{\tt Collapse}} reduction (\Cref{fig:alg:collapse}), which transforms (\Cref{def:ceiling_floor:restate}) a {\em ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ into (\Cref{def:strong}) a {\em strong ceiling} pseudo instance $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow}$.}
\label{fig:collapse}}
\end{figure}
\clearpage}
\begin{proof}
See \Cref{fig:collapse} for a visual aid.
For brevity, let $[N] \equiv \{H\} \cup [n] \cup \{L\}$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:collapse:property}.}
By construction, the \blackref{collapse_output} non-monopoly bidders $\tilde{B}_{i}(b) \equiv \max\big(B_{i}(b),\, B_{i}(\lambda^{*})\big)$ for $i \in [n]$ and the \blackref{collapse_output} pseudo bidder $\tilde{L}(b) \equiv L(b) \cdot \prod_{i \in [n]} \big(B_{i}(b) \big/ \tilde{B}_{i}(b)\big)$ are well defined (Line~\ref{alg:collapse:distribution}). Moreover, the monopolist $H^{\uparrow}$ is invariant.
After the jump bid $b \in (\lambda^{*},\, \lambda]$, everything keeps the same.
And before the jump bid $b \in [0,\, \lambda^{*}]$, the first-order bid distribution $\tilde{\calB}(b) = \prod_{\sigma \in [N]} \tilde{B}_{\sigma}(b) = \prod_{\sigma \in [N]} B_{\sigma}(b) = \calB(b)$ keeps the same;
so the pseudo mapping $\varphi_{L}(b) = b + \frac{\calB(b)}{\calB'(b)}$ and the monopolist $H^{\uparrow}$'s mapping $\varphi_{H}(b) = b + (\frac{\calB'(b)}{\calB(b)} - \frac{H'(b)}{H(b)})^{-1}$ are invariant (cf.\ \Cref{fig:collapse}).
Moreover, each non-monopoly bidder $i \in [n]$ {\em does} have a constant bid distribution $\tilde{B}_{i}(b) = B_{i}(\lambda^{*})$ for $b \in [0,\, \lambda^{*}]$ (\Cref{def:strong}).
Hence, the \blackref{collapse_output} $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}$ satisfies \blackref{re:discretization}, \blackref{re:monotonicity}, \blackref{re:layeredness}, and \blackref{re:ceilingness}
\`{a} la the {\em ceiling} \blackref{collapse_input} $H^{\uparrow} \otimes \bB \otimes L$ (\Cref{def:ceiling_floor:restate}) and additionally satisfies \blackref{restate:pro:collapse} (\Cref{def:strong}), being a {\em strong ceiling} pseudo instance $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow}$.
{\bf \Cref{lem:collapse:property}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:collapse:potential}.}
The potential of a {\em ceiling} pseudo instance $\hat{H}^{\uparrow} \otimes \hat{\bB} \otimes \hat{L}$ counts all the ultra-ceiling entries $\Psi(\text{\em ceiling}) = \big|\big\{(\sigma,\, j) \in \hat{\bPhi}: \hat{\phi}_{\sigma,\, j} > \hat{\phi}_{H,\, 0}\big\}\big|$ in the bid-to-value table $\hat{\bPhi} = \big[\hat{\phi}_{\sigma,\, j}\big]$ (\Cref{def:potential}).
We do consider two {\em ceiling} pseudo instances ({\bf \Cref{lem:collapse:property}}) and need to verify that the \blackref{collapse_output} table $\tilde{\bPhi}$ has the same amount of ultra-ceiling entries as the \blackref{collapse_input} table $\bPhi$.
The \blackref{collapse_output} table $\tilde{\bPhi}$ (Line~\ref{alg:collapse:distribution} and \Cref{fig:collapse}) descends the non-monopoly entries $\phi_{1,\, j} \geq \dots \geq \phi_{n,\, j}$ to the {\em lower} pseudo entries $\phi_{L,\, j}$ (\blackref{restate:pro:collapse}) in before-jump columns $j \in [0: j^{*} - 1]$.
All of these entries $(i,\, j) \in [n] \times [0: j^{*} - 1]$ (\blackref{re:layeredness} and \Cref{def:jump}) are non-ultra-ceiling regardless of the reduction: $[\tilde{\phi}_{i,\, j} = \phi_{L,\, j}] \leq [\phi_{i,\, j}] \leq [\phi_{H,\, j} = \phi_{H,\, 0}]$.
So, the \blackref{collapse_output} $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}$ keeps the same potential $\Psi(\blackref{collapse_output}) = \Psi(\blackref{collapse_input})$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:collapse:poa}.}
We would show that compared to the \blackref{collapse_input} $H^{\uparrow} \otimes \bB \otimes L$, the \blackref{collapse_output} $H^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}$ (weakly) decreases the auction {\textsf{Social Welfare}} ${\sf FPA}(\blackref{collapse_output}) \leq {\sf FPA}(\blackref{collapse_input})$ but keeps the same optimal {\textsf{Social Welfare}} ${\sf OPT}(\blackref{collapse_output}) = {\sf OPT}(\blackref{collapse_input})$.
\vspace{.1in}
\noindent
{\bf Auction {\textsf{Social Welfares}}.}
Following \Cref{lem:pseudo_welfare} (with $\gamma \equiv 0$ and $P^{\uparrow} \equiv \phi_{H,\, 0}$ by \Cref{def:ceiling_floor:restate}), the \blackref{collapse_input} has an auction {\textsf{Social Welfare}}
${\sf FPA}(\blackref{collapse_input}) = \phi_{H,\, 0} \cdot \calB(0) + \sum_{\sigma \in [N]} \big(\int_{0}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\big)$.
As mentioned ({\bf \Cref{lem:collapse:property}}), the following keep the same:
(i)~Everything after the jump bid $b \in (\lambda^{*},\, \lambda]$.
(ii)~Everything about the monopolist $H^{\uparrow}$. (iii)~The first-order bid distribution $\tilde{\calB}(b) \equiv \calB(b)$.
We can formulate the output auction {\textsf{Social Welfare}} ${\sf FPA}(\blackref{collapse_output})$ likewise and, by concentrating on the changed parts (namely $b \in [0,\, \lambda^{*}]$ and $\sigma \neq H$), deduce that
\begin{align}
{\sf FPA}(\blackref{collapse_output})
& ~=~ \sum_{\sigma \neq H} \bigg(\int_{0}^{\lambda^{*}} \tilde{\varphi}_{\sigma}(b) \cdot \frac{\tilde{B}'_{\sigma}(b)}{\tilde{B}_{\sigma}(b)} \cdot \calB(b) \cdot \d b\bigg) \hspace{0.02cm} ~+~ \mathrm{const}
\nonumber \\
& ~=~ \int_{0}^{\lambda^{*}} \tilde{\varphi}_{L}(b) \cdot \frac{\tilde{L}'(b)}{\tilde{L}(b)} \cdot \calB(b) \cdot \d b \hspace{1.35cm} ~+~ \mathrm{const}
\label{eq:collapse:auction:1}\tag{C1} \\
& ~=~ \int_{0}^{\lambda^{*}} \varphi_{L}(b) \cdot \bigg(\sum_{\sigma \neq H} \frac{B'_{\sigma}(b)}{B_{\sigma}(b)}\bigg) \cdot \calB(b) \cdot \d b ~+~ \mathrm{const}
\label{eq:collapse:auction:2}\tag{C2} \\
& ~\leq~ \sum_{\sigma \neq H} \bigg(\int_{0}^{\lambda^{*}} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\bigg) ~+~ \mathrm{const}
~=~ {\sf FPA}(\blackref{collapse_input}).
\label{eq:collapse:auction:3}\tag{C3}
\end{align}
\eqref{eq:collapse:auction:1}:
$\tilde{B}'_{i} \big/ \tilde{B}_{i} = 0$ for each $i \in [n]$ before the jump bid $b \in [0,\, \lambda^{*}]$ (\blackref{restate:pro:collapse}). \\
\eqref{eq:collapse:auction:2}:
$\tilde{\varphi}_{L} = \varphi_{L}$ ({\bf \Cref{lem:collapse:property}}) and $\tilde{L}' \big/ \tilde{L} = \sum_{\sigma \neq H} B'_{\sigma} \big/ B_{\sigma}$ (Line~\ref{alg:collapse:distribution})
before the jump bid $b \in [0,\, \lambda^{*}]$. \\
\eqref{eq:collapse:auction:3}:
The pseudo mapping is the dominated mapping $\varphi_{L}(b) \equiv \min(\bvarphi(b^{\otimes n + 1}))$ (\Cref{lem:pseudo_mapping}).
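In addition, the identity $\tilde{L}' \big/ \tilde{L} = \sum_{\sigma \neq H} B'_{\sigma} \big/ B_{\sigma}$ behind \eqref{eq:collapse:auction:2} can be verified directly from Line~\ref{alg:collapse:distribution}: before the jump bid $b \in [0,\, \lambda^{*}]$, each $\tilde{B}_{i}(b) = B_{i}(\lambda^{*})$ is a constant, hence
\begin{align*}
\frac{\tilde{L}'(b)}{\tilde{L}(b)}
~=~ \frac{\d}{\d b} \bigg(\ln L(b) + \sum_{i \in [n]} \ln B_{i}(b) - \sum_{i \in [n]} \ln \tilde{B}_{i}(b)\bigg)
~=~ \frac{L'(b)}{L(b)} + \sum_{i \in [n]} \frac{B'_{i}(b)}{B_{i}(b)}
~=~ \sum_{\sigma \neq H} \frac{B'_{\sigma}(b)}{B_{\sigma}(b)}.
\end{align*}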
\vspace{.1in}
\noindent
{\bf Optimal {\textsf{Social Welfares}}.}
The optimal {\textsf{Social Welfare}} keeps the same ${\sf OPT}(\blackref{collapse_output}) = {\sf OPT}(\blackref{collapse_input})$, as an implication of \Cref{lem:ceiling_welfare}. Basically, the optimal {\textsf{Social Welfare}} formula is {\em irrelevant} to the non-ultra-ceiling entries $\phi_{\sigma,\, j} \leq \phi_{H,\, 0}$, including all the modified (non-monopoly before-jump) entries $\tilde{\phi}_{i,\, j} = \phi_{L,\, j}$ for $(i,\, j) \in [n] \times [0: j^{*} - 1]$.
{\bf \Cref{lem:collapse:poa}} follows then. This finishes the proof.
\end{proof}
\subsection{{\text{\tt Halve}}: From strong ceiling towards twin ceiling, under pseudo jumps}
\label{subsec:halve}
Following the previous two subsections, we are able to deal with {\em floor}/{\em ceiling} but {\em non strong ceiling} pseudo instances $(\mathbb{B}_{\sf valid}^{\downarrow} \cup (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf strong}^{\uparrow}))$.
As a consequence, it remains to deal with {\em strong ceiling} but {\em non twin ceiling} pseudo instances $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$.
For ease of reference, below we rephrase the observations from \Cref{subsec:tech_prelim} on such pseudo instances.
\begin{restate}[\Cref{lem:potential,lem:jump,lem:strong}]
For a strong ceiling but non twin ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$:
\begin{enumerate}[font = {\em\bfseries}]
\item It violates \blackref{re:twin_ceiling}, i.e., its potential is nonzero $\Psi(H^{\uparrow} \otimes \bB \otimes L) > 0$.
\item The \blackref{jump_entry} $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n] \cup \{L\}) \times [m]$ exists and cannot be in column $0$.
\item The jump bid, i.e., the index-$j^{*} \in [m]$ partition bid $\lambda^{*} \equiv \lambda_{j^{*}} \in (0,\, \lambda)$, can be neither the nil bid $\lambda_{0} \equiv 0$ nor the supremum bid $\lambda_{m + 1} \equiv \lambda$.
\end{enumerate}
\end{restate}
Particularly, the current subsection deals with the case of a \term[\textbf{pseudo jump}]{halve_jump} $\sigma^{*} = L$, using the {\blackref{alg:halve}} reduction (see \Cref{fig:alg:halve,fig:halve} for its description and a visual aid).
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Halve}]{alg:halve}(H^{\uparrow} \otimes \bB \otimes L)$
\begin{flushleft}
{\bf Input:} A {\em strong ceiling} but {\em non twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ \\
\white{\bf Input:} that has a \blackref{halve_jump} $(\sigma^{*},\, j^{*}) \in \{L\} \times [m]$.
\white{\term[\text{\em input}]{halve_input}}
\vspace{.05in}
{\bf Output:} A {\em floor}/{\em ceiling} pseudo instance $\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$.
\white{\term[\text{\em output}]{halve_output}}
{\begin{enumerate}
\item\label{alg:halve:left}
Define the \term[\text{\em left}]{halve_left} candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ as the {\em ceiling} pseudo instance \\
given by $B_{\sigma}^{\sf L}(b) \equiv \min\big(B_{\sigma}(b) \big/ B_{\sigma}(\lambda^{*}),\, 1\big)$ for $\sigma \in \{H\} \cup [n] \cup \{L\}$.
\item\label{alg:halve:right}
Define the \term[\text{\em right}]{halve_right} candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ as the {\em floor} pseudo instance \\
given by $B_{\sigma}^{\sf R}(b) \equiv B_{\sigma}(b + \lambda^{*}) \cdot \indicator(b \geq 0)$ for $\sigma \in \{H\} \cup [n] \cup \{L\}$.
\item\label{alg:halve:output}
{\bf Return} the {{\sf PoA}}-worse candidate $\argmin \big\{\, {\sf PoA}(\text{\em left}),~ {\sf PoA}(\text{\em right}) \,\big\}$; \\
\white{\bf Return} breaking ties in favor of the {\em left} candidate $H^{\sf L \uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$.
\end{enumerate}}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Halve}} reduction
\label{fig:alg:halve}}
\end{figure}
\begin{intuition*}
As \Cref{fig:halve:input} suggests, a \blackref{halve_jump} $(\sigma^{*},\, j^{*}) \in \{L\} \times [m]$ divides the {\em strong ceiling} but {\em non twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ into the {\em left-lower} part and the {\em right-upper} part. It is conceivable that we can accordingly construct two {\em sub pseudo instances}, such that at least one of them yields a (weakly) worse {{\sf PoA}} bound.
\end{intuition*}
The \blackref{alg:halve} reduction implements the above intuition, and its performance guarantees are summarized in \Cref{lem:halve}.
\begin{lemma}[{\text{\tt Halve}}; \Cref{fig:alg:halve}]
\label{lem:halve}
Under reduction $(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L}) \gets \text{\tt Halve}(H^{\uparrow} \otimes \bB \otimes L)$:
{\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:halve:property}
The output is a floor/ceiling pseudo instance $\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$.
\item\label{lem:halve:potential}
The potential strictly decreases $\Psi(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L}) \leq \Psi(H^{\uparrow} \otimes \bB \otimes L) - 1$.
\item\label{lem:halve:poa}
A (weakly) worse bound is yielded ${\sf PoA}(\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(H^{\uparrow} \otimes \bB \otimes L)$.
\end{enumerate}}
\end{lemma}
\afterpage{
\begin{figure}[t]
\centering
\subfloat[{The {\em strong ceiling} but {\em non twin ceiling} input $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ with a \textbf{pseudo jump} $\sigma^{*} = L$}
\label{fig:halve:input}]{
\parbox[c]{.99\textwidth}{
{\centering
\includegraphics[width = .49\textwidth]{halve_input.png}
\par}}} \\
\vspace{1cm}
\subfloat[{The {\em ceiling} left candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L} \in \mathbb{B}_{\sf valid}^{\uparrow}$}
\label{fig:halve:left}]{
\includegraphics[width = .49\textwidth]{halve_left.png}}
\hfill
\subfloat[{The {\em floor} right candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R} \in \mathbb{B}_{\sf valid}^{\downarrow}$}
\label{fig:halve:right}]{
\includegraphics[width = .49\textwidth]{halve_right.png}}
\caption{Demonstration for the {\text{\tt Halve}} reduction (\Cref{fig:alg:halve}), which transforms a {\em strong ceiling} but {\em non twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ that has a \textbf{pseudo jump} $\sigma^{*} = L$ (\Cref{def:twin,def:strong,def:jump}) into EITHER the {\em ceiling} left candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ OR the {\em floor} right candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R} \in \mathbb{B}_{\sf valid}^{\downarrow}$.
\label{fig:halve}}
\end{figure}
\clearpage}
\begin{proof}
See \Cref{fig:halve} for a visual aid. The \blackref{alg:halve} reduction chooses the {{\sf PoA}}-worse pseudo instance (Line~\ref{alg:halve:output}) between the \blackref{halve_left}/\blackref{halve_right} candidates $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ and $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$, breaking ties in favor of the \blackref{halve_left} candidate. For brevity, let $[N] \equiv \{H\} \cup [n] \cup \{L\}$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:halve:property}.}
We shall prove that the \blackref{halve_left} candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ is a {\em ceiling} pseudo instance while the \blackref{halve_right} candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ is a {\em floor} pseudo instance.
The \blackref{halve_left} candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ (Line~\ref{alg:halve:left} and \Cref{fig:halve:left}) shrinks the support to before-jump bids $b \in [0,\, \lambda^{*}]$, taking the form $B_{\sigma}^{\sf L}(b) = B_{\sigma}(b) \big/ B_{\sigma}(\lambda^{*})$ for $\sigma \in [N]$. We can easily check that for $b \in [0,\, \lambda^{*}]$, each \blackref{halve_left} bidder $B_{\sigma}^{\sf L}$ keeps the same mapping $\varphi_{\sigma}^{\sf L}(b) = \varphi_{\sigma}(b)$ as the counterpart \blackref{halve_input} bidder $B_{\sigma}$ (\Cref{def:ceiling_floor:restate}), therefore preserving \blackref{re:discretization}, \blackref{re:monotonicity}, and \blackref{re:layeredness}.
The construction further promises \blackref{re:ceilingness}, making the \blackref{halve_left} candidate $H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ a {\em ceiling} pseudo instance.
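Indeed, the rescaling in Line~\ref{alg:halve:left} leaves every log-derivative untouched: for $b \in [0,\, \lambda^{*}]$ we have $\calB^{\sf L}(b) = \calB(b) \big/ \calB(\lambda^{*})$ and thus
\begin{align*}
\varphi_{L}^{\sf L}(b)
~=~ b + \frac{\calB^{\sf L}(b)}{{\calB^{\sf L}}'(b)}
~=~ b + \frac{\calB(b)}{\calB'(b)}
~=~ \varphi_{L}(b);
\end{align*}
the other mappings $\varphi_{\sigma}^{\sf L}$ for $\sigma \in [N]$ are preserved likewise, as they depend on the bid distributions only through the log-derivatives $B'_{\sigma} \big/ B_{\sigma}$.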
The \blackref{halve_right} candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ (Line~\ref{alg:halve:right} and \Cref{fig:halve:right}) shrinks the support to SHIFTED after-jump bids $b \in [0,\, \lambda - \lambda^{*}]$, taking the form $B_{\sigma}^{\sf R}(b) = B_{\sigma}(b + \lambda^{*})$ for $\sigma \in [N]$.
Exactly the same construction was used in the \blackref{alg:slice} reduction to yield a spectrum of {\em floor} pseudo instances $\big\{\, H^{(t)\downarrow} \otimes \bB^{(t)} \otimes L^{(t)}: 0 \leq t < \lambda \,\big\}$ (see the proof of \Cref{lem:slice:property} of \Cref{lem:slice}); the one given by the jump bid $\lambda^{*} \in (0,\, \lambda)$ is precisely the \blackref{halve_right} candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R} \equiv H^{(\lambda^{*})\downarrow} \otimes \bB^{(\lambda^{*})} \otimes L^{(\lambda^{*})} \in \mathbb{B}_{\sf valid}^{\downarrow}$.
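Concretely, since $\calB^{\sf R}(b) = \calB(b + \lambda^{*})$ for $b \in [0,\, \lambda - \lambda^{*}]$, the \blackref{halve_right} mappings are the shifted counterparts of the \blackref{halve_input} ones:
\begin{align*}
\varphi_{L}^{\sf R}(b)
~=~ b + \frac{\calB^{\sf R}(b)}{{\calB^{\sf R}}'(b)}
~=~ b + \frac{\calB(b + \lambda^{*})}{\calB'(b + \lambda^{*})}
~=~ \varphi_{L}(b + \lambda^{*}) - \lambda^{*},
\end{align*}
and likewise $\varphi_{\sigma}^{\sf R}(b) = \varphi_{\sigma}(b + \lambda^{*}) - \lambda^{*}$ for $\sigma \in [N]$; this is exactly the $-\lambda^{*}$ shift of the bid-to-value table.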
Putting both parts together gives {\bf \Cref{lem:halve:property}}.
\vspace{.1in}
\noindent
{\bf \Cref{lem:halve:potential}.}
The potential of a {\em ceiling} pseudo instance $\hat{H}^{\uparrow} \otimes \hat{\bB} \otimes \hat{L}$ counts all the ultra-ceiling entries $\Psi(\text{\em ceiling}) = \big|\big\{(\sigma,\, j) \in \hat{\bPhi}: \hat{\phi}_{\sigma,\, j} > \hat{\phi}_{H,\, 0}\big\}\big|$ in the bid-to-value table $\hat{\bPhi} = \big[\hat{\phi}_{\sigma,\, j}\big]$ (\Cref{def:potential}), while the paired {\em floor} pseudo instance $\hat{H}^{\downarrow} \otimes \hat{\bB} \otimes \hat{L}$ counts one more: $\Psi(\text{\em floor}) = \Psi(\text{\em ceiling}) + 1$.
The ultra-ceiling entries in the \blackref{halve_input} table $\bPhi$ (\Cref{def:jump,fig:halve:input}) are precisely those in the after-jump columns $\bPhi([N] \times [j^{*}: m])$, because the \blackref{halve_jump} entry $(\sigma^{*},\, j^{*}) \in \{L\} \times [m]$ is the {\em leftmost-then-lowest} ultra-ceiling entry.
The \blackref{halve_left} table $\bPhi^{\sf L} = \bPhi([N] \times [0: j^{*} - 1])$ (Line~\ref{alg:halve:left} and \Cref{fig:halve:left}) discards all those ultra-ceiling entries in the after-jump columns $j \in [j^{*}: m]$. The ceiling value $\phi_{H,\, 0}$ keeps the same given $j^{*} \neq 0$.
Therefore, the \blackref{halve_left} table $\bPhi^{\sf L}$ has NO ultra-ceiling entry, while the \blackref{halve_input} table $\bPhi$ has at least one ultra-ceiling entry, i.e., the \blackref{halve_jump} entry $(\sigma^{*},\, j^{*})$.
The first part of {\bf \Cref{lem:halve:potential}} follows.
The \blackref{halve_right} table $\bPhi^{\sf R} = \bPhi([N] \times [j^{*}: m]) - \blambda^{*}$ (Line~\ref{alg:halve:right} and \Cref{fig:halve:right}) discards all the non-ultra-ceiling entries in the before-jump columns $j \in [0: j^{*} - 1]$ and shifts the remaining ones (namely the ultra-ceiling entries in the \blackref{halve_input} table $\bPhi$) each by a distance $-\lambda^{*}$.
The ceiling value changes to the shifted row-$H$ jump-column entry $\phi_{H,\, j^{*}}^{\sf R} \equiv \phi_{H,\, j^{*}} - \lambda^{*}$.
Therefore, the set of ultra-ceiling entries shrinks by removing all the \blackref{halve_right} entries $\phi_{\sigma,\, j}^{\sf R} \leq \phi_{H,\, j^{*}}^{\sf R}$ for $(\sigma,\, j) \in \bPhi^{\sf R}$,
namely all the \blackref{halve_input} entries $\phi_{\sigma,\, j} \leq \phi_{H,\, j^{*}}$ for $(\sigma,\, j) \in [N] \times [j^{*}: m]$.
Particularly, all the entries $(\sigma,\, j) \in [N] \times \{j^{*}\}$ in the jump column (\blackref{re:layeredness}) are removed from the set of ultra-ceiling entries; there are $|[N] \times \{j^{*}\}| \geq 2$ such entries ($[N] = \{H\} \cup [n] \cup \{L\}$, where $n \geq 0$).
The second part of {\bf \Cref{lem:halve:potential}} follows.
\vspace{.1in}
\noindent
{\bf \Cref{lem:halve:poa}.}
We claim that either the \blackref{halve_left} candidate
$H^{\sf L\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ or the \blackref{halve_right} candidate $H^{\sf R\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ yields a (weakly) worse {{\sf PoA}} bound than the \blackref{halve_input} $H^{\uparrow} \otimes \bB \otimes L$.
\vspace{.1in}
\noindent
{\bf Auction {\textsf{Social Welfares}}.}
The \blackref{halve_left} candidate $H^{{\sf L}\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ yields (\Cref{lem:pseudo_welfare}; with $P^{{\sf L}\uparrow} \equiv \phi_{H,\, 0}^{\sf L}$) the auction {\textsf{Social Welfare}}
\begin{align*}
{\sf FPA}(\blackref{halve_left})\hspace{.34cm}
& ~=~ \phi_{H,\, 0}^{\sf L} \cdot \calB^{\sf L}(0)
+ \sum_{\sigma \in [N]} \int_{0}^{\lambda^{*}} \varphi_{\sigma}^{\sf L}(b) \cdot \frac{{B_{\sigma}^{\sf L}}'(b)}{B_{\sigma}^{\sf L}(b)} \cdot \calB^{\sf L}(b) \cdot \d b \\
& ~=~ \Big(\phi_{H,\, 0} \cdot \calB(0)
+ \sum_{\sigma \in [N]} \int_{0}^{\lambda^{*}} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big) \cdot \frac{1}{\calB(\lambda^{*})}.
&& \parbox{2cm}{Line~\ref{alg:halve:left}}
\end{align*}
The \blackref{halve_right} candidate $H^{{\sf R}\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ yields (\Cref{lem:pseudo_welfare}; with $P^{{\sf R}\downarrow} \equiv 0$) the auction {\textsf{Social Welfare}}
\begin{align*}
{\sf FPA}(\blackref{halve_right})\hspace{.08cm}
& ~=~ \sum_{\sigma \in [N]} \int_{0}^{\lambda - \lambda^{*}} \varphi_{\sigma}^{\sf R}(b) \cdot \frac{{B_{\sigma}^{\sf R}}'(b)}{B_{\sigma}^{\sf R}(b)} \cdot \calB^{\sf R}(b) \cdot \d b \\
& ~=~ \sum_{\sigma \in [N]} \int_{\lambda^{*}}^{\lambda} \big(\varphi_{\sigma}(b) - \lambda^{*}\big) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b
&& \text{Line~\ref{alg:halve:right}} \\
& ~=~ \sum_{\sigma \in [N]} \int_{\lambda^{*}}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b
- \int_{\lambda^{*}}^{\lambda} \lambda^{*} \cdot \calB'(b) \cdot \d b
\hspace{1.cm}
&& \parbox{2cm}{$\calB = \prod_{\sigma} B_{\sigma}$} \\
& ~=~ \sum_{\sigma \in [N]} \int_{\lambda^{*}}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b
- \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big).
&& \text{$\calB(\lambda) = 1$}
\end{align*}
Further, the \blackref{halve_input} $H^{\uparrow} \otimes \bB \otimes L$ yields (\Cref{lem:pseudo_welfare}; with $P^{\uparrow} \equiv \phi_{H,\, 0}$) the auction {\textsf{Social Welfare}}
\begin{align*}
{\sf FPA}(\blackref{halve_input})
& ~=~ \phi_{H,\, 0} \cdot \calB(0) + \sum_{\sigma \in [N]} \Big(\int_{0}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b\Big) \\
& ~=~ {\sf FPA}(\blackref{halve_left}) \cdot \calB(\lambda^{*}) + {\sf FPA}(\blackref{halve_right}) + \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big)\hspace{1.55cm}
&& \parbox{2cm}{substitution} \phantom{\Big.}
\end{align*}
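To double-check the substitution step, one can split each bidder's integral at the jump bid $\lambda^{*}$; a sketch of this bookkeeping (under the same notation) is
\begin{align*}
{\sf FPA}(\text{\em input})
& ~=~ \phi_{H,\, 0} \cdot \calB(0)
+ \sum_{\sigma \in [N]} \int_{0}^{\lambda^{*}} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b \\
& \phantom{~=~} + \sum_{\sigma \in [N]} \int_{\lambda^{*}}^{\lambda} \varphi_{\sigma}(b) \cdot \frac{B'_{\sigma}(b)}{B_{\sigma}(b)} \cdot \calB(b) \cdot \d b \\
& ~=~ {\sf FPA}(\text{\em left}) \cdot \calB(\lambda^{*})
+ \Big({\sf FPA}(\text{\em right}) + \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big)\Big),
\end{align*}
where the first two summands match the formula for ${\sf FPA}(\text{\em left})$ after multiplying by $\calB(\lambda^{*})$, and the bracket restates the formula for ${\sf FPA}(\text{\em right})$.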
\vspace{.1in}
\noindent
{\bf Optimal {\textsf{Social Welfares}}.}
Let $\bPhi^{*} = [N] \times [j^{*}: m]$ be the after-jump subtable.
Following {\bf \Cref{lem:halve:potential}} and \Cref{fig:halve}:
(i)~The \blackref{halve_left} table $\bPhi^{\sf L}$ has NO ultra-ceiling entry, i.e., its set of ultra-ceiling entries is $\emptyset$.
(ii)~The \blackref{halve_right} table $\bPhi^{\sf R}$'s ultra-ceiling entries $\phi_{\sigma,\, j}^{\sf R} > \phi_{H,\, j^{*}}^{\sf R}$ are all included in the subtable $\bPhi^{*}$.
(iii)~The \blackref{halve_input} table $\bPhi$'s ultra-ceiling entries $\phi_{\sigma,\, j} > \phi_{H,\, 0}$ precisely form the subtable $\bPhi^{*}$.
Hence, the \blackref{halve_left} candidate $H^{{\sf L}\uparrow} \otimes \bB^{\sf L} \otimes L^{\sf L}$ yields (\Cref{lem:ceiling_welfare}; with $P^{{\sf L}\uparrow} \equiv \phi_{H,\, 0}^{\sf L}$) the optimal {\textsf{Social Welfare}}
\begin{align*}
{\sf OPT}(\blackref{halve_left})\hspace{.34cm}
& ~=~ \int_{0}^{+\infty} \big(1 - \indicator(v \geq \phi_{H,\, 0}^{\sf L})\big) \cdot \d v
~=~ \phi_{H,\, 0}^{\sf L}
~=~ \phi_{H,\, 0}.\hspace{.99cm}
\text{Line~\ref{alg:halve:left}}
\hspace{3.18cm}
\end{align*}
The \blackref{halve_right} candidate $H^{{\sf R}\downarrow} \otimes \bB^{\sf R} \otimes L^{\sf R}$ yields (\Cref{lem:ceiling_welfare}; with $P^{{\sf R}\downarrow} \equiv 0$) the optimal {\textsf{Social Welfare}}\footnote{The \blackref{halve_right} candidate (Line~\ref{alg:halve:right}) preserves the probabilities $\omega_{\sigma,\, j}^{\sf R} = 1 - \frac{B_{\sigma}^{\sf R}(\lambda_{j})}{B_{\sigma}^{\sf R}(\lambda_{j + 1})} = 1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})} = \omega_{\sigma,\, j}$ for $(\sigma,\, j) \in \bPhi^{*}$.}
\begin{align}
{\sf OPT}(\blackref{halve_right})\hspace{.08cm}
& ~=~ \int_{0}^{+\infty} \Big(1 - \prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j}^{\sf R})\big)\Big) \cdot \d v
\nonumber \\
& ~=~ \int_{\lambda^{*}}^{+\infty} \Big(1 - \prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v
\qquad \text{Line~\ref{alg:halve:right}}
\nonumber \\
& ~=~ \int_{0}^{+\infty} \Big(1 - \prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v
~-~ \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big).
\hspace{.75cm}
\label{eq:halve:H1}\tag{H1}
\end{align}
\eqref{eq:halve:H1}: The integrand $\overset{(\dagger)}{=} 1 - \prod_{\bPhi^{*}} (1 - \omega_{\sigma,\, j})
= 1 - \prod_{\bPhi^{*}} \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}
\overset{(\ddagger)}{=} 1 - \prod_{[N]} B_{\sigma}(\lambda^{*})
= 1 - \calB(\lambda^{*})$; \\
($\dagger$) $\phi_{\sigma,\, j} > \phi_{H,\, 0} = \lim_{b \nearrow \lambda^{*}} \varphi_{H}(b) > \lambda^{*}$ given a \blackref{halve_jump} $(\sigma^{*},\, j^{*}) \in \{L\} \times [m]$ and \Cref{lem:pseudo_mapping}; \\
($\ddagger$) $\lambda_{j^{*}} \equiv \lambda^{*}$ and $\calB(\lambda_{m + 1} \equiv \lambda) = 1$.
\vspace{.1in}
\noindent
Further, the \blackref{halve_input} $H^{\uparrow} \otimes \bB \otimes L$ yields (\Cref{lem:ceiling_welfare}; with $P^{\uparrow} \equiv \phi_{H,\, 0}$) the optimal {\textsf{Social Welfare}}
\begin{align*}
{\sf OPT}(\blackref{halve_input})
& ~=~ \int_{0}^{+\infty} \Big(1 - \indicator(v \geq \phi_{H,\, 0}) \phantom{\Big)} \cdot \prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)\Big) \cdot \d v \\
& ~=~ \int_{0}^{\phi_{H,\, 0}}
\prod_{(\sigma,\, j) \,\in\, \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big) \cdot \d v \\
& \phantom{~=~} + {\sf OPT}(\blackref{halve_right}) + \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big)
\qquad\qquad\qquad~~~~~~~~~\, \text{substitute \eqref{eq:halve:H1}}
\phantom{\Big.} \\
& ~=~ \phi_{H,\, 0} \cdot \calB(\lambda^{*}) + {\sf OPT}(\blackref{halve_right}) + \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big)
\qquad\;~~~~~ \text{reuse arguments for \eqref{eq:halve:H1}}
\phantom{\Big.} \\
& ~=~ {\sf OPT}(\blackref{halve_left}) \cdot \calB(\lambda^{*}) + {\sf OPT}(\blackref{halve_right}) + \lambda^{*} \cdot \big(1 - \calB(\lambda^{*})\big).
\phantom{\Big.}
\end{align*}
Therefore, at least one of the \blackref{halve_left}/\blackref{halve_right} candidates yields a (weakly) worse {{\sf PoA}} bound, as follows. (Recall that a {{\sf PoA}} bound is always at most $1$.)
\begin{align*}
{\sf PoA}(\blackref{halve_input})
\,=\, \frac{{\sf FPA}(\blackref{halve_left}) \cdot \calB(\lambda^{*}) + {\sf FPA}(\blackref{halve_right}) + \lambda^{*} \cdot (1 - \calB(\lambda^{*}))}
{{\sf OPT}(\blackref{halve_left}) \cdot \calB(\lambda^{*}) + {\sf OPT}(\blackref{halve_right}) + \lambda^{*} \cdot (1 - \calB(\lambda^{*}))}
\,\geq\, \min \big\{\, {\sf PoA}(\blackref{halve_left}),~ {\sf PoA}(\blackref{halve_right}) \,\big\}.
\end{align*}
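For completeness, the final inequality combines two elementary facts about ratios (sketched here): the mediant bound and the effect of a common additive term on a ratio at most $1$,
\begin{align*}
\frac{a_{1} + a_{2}}{b_{1} + b_{2}} ~\geq~ \min\Big\{\frac{a_{1}}{b_{1}},\, \frac{a_{2}}{b_{2}}\Big\}
\qquad \text{and} \qquad
\frac{a_{2} + c}{b_{2} + c} ~\geq~ \frac{a_{2}}{b_{2}}
\quad \text{whenever } \frac{a_{2}}{b_{2}} \leq 1,
\end{align*}
for $a_{1},\, a_{2},\, c \geq 0$ and $b_{1},\, b_{2} > 0$; here $a_{1} / b_{1} = {\sf PoA}(\text{\em left})$ (the common factor $\calB(\lambda^{*})$ cancels), $a_{2} / b_{2} = {\sf PoA}(\text{\em right})$, and $c = \lambda^{*} \cdot (1 - \calB(\lambda^{*}))$.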
{\bf \Cref{lem:halve:poa}} then follows.
This finishes the proof.
\end{proof}
\subsection{{\text{\tt Ascend-Descend}}: From strong ceiling towards twin ceiling, under real jumps}
\label{subsec:AD}
This subsection also considers {\em strong ceiling} but {\em non twin ceiling} pseudo instances $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$. Particularly, we deal with the case of a \term[\textbf{real jump}]{AD_jump} $\sigma^{*} \in \{H\} \cup [n]$ through the {\blackref{alg:AD}} reduction (see \Cref{fig:alg:AD,fig:AD} for its description and a visual aid).
For ease of reference, let us rephrase the observations from \Cref{subsec:tech_prelim}.
\begin{restate}[\Cref{lem:potential,lem:jump,lem:strong}]
For a strong ceiling but non twin ceiling pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$:
\begin{enumerate}[font = {\em\bfseries}]
\item It violates \blackref{re:twin_ceiling}, i.e., its potential is nonzero $\Psi(H^{\uparrow} \otimes \bB \otimes L) > 0$.
\item The \blackref{jump_entry} $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n] \cup \{L\}) \times [m]$ exists and cannot be in column $0$.
\item The jump bid, namely the index-$j^{*} \in [m]$ partition bid $\lambda^{*} \equiv \lambda_{j^{*}} \in (0,\, \lambda)$, can be neither the nil bid $\lambda_{0} \equiv 0$ nor the supremum bid $\lambda_{m + 1} \equiv \lambda$.
\end{enumerate}
\end{restate}
\Cref{lem:AD} summarizes performance guarantees of the \blackref{alg:AD} reduction.
(The intuition behind this reduction may be obscure at first glance. Roughly speaking, it adopts a {\em win-win} approach between two relatively natural modifications that both {\em decrease the potential}.)
\begin{figure}[t]
\centering
\begin{mdframed}
Reduction $\term[\text{\tt Ascend-Descend}]{alg:AD}(H^{\uparrow} \otimes \bB \otimes L)$
\begin{flushleft}
{\bf Input:} A {\em strong ceiling} but {\em non twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ \\
\white{\bf Input:} that has a \blackref{AD_jump} $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n]) \times [m]$.
\white{\term[\text{\em input}]{AD_input}}
\vspace{.05in}
{\bf Output:} A {\em ceiling} pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\white{\term[\text{\em output}]{AD_output}}
\begin{enumerate}
\setcounter{enumi}{-1}
\item Consider the underlying partition $\bLambda = [0 \equiv \lambda_{0},\, \lambda_{1}) \cup [\lambda_{1},\, \lambda_{2}) \cup \dots \cup [\lambda_{m},\, \lambda_{m + 1} \equiv \lambda]$ \\
and the {\em input} bid-to-value table $\bPhi = \big[\phi_{\sigma,\, j}\big]$ for $(\sigma,\, j) \in (\{H\} \cup [n] \cup \{L\}) \times [0: m]$.
\item\label{alg:AD:ascend}
Define the {\em ascended} table $\bar{\bPhi} = \big[\bar{\phi}_{\sigma,\, j}\big]$ by ONLY {\em ascending} the row-$H$ before-jump entries, from the ceiling value $\phi_{H,\, 0} = \dots = \phi_{H,\, j^{*} - 1}$ to the jump value $\phi^{*} \equiv \phi_{\sigma^{*},\, j^{*}}$.
Reconstruct the ({\em ceiling}) \term[\text{\em ascended}]{AD_ascend} candidate $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ from the piecewise constant mappings $\bar{\bvarphi} = \{\bar{\varphi}_{\sigma}\}_{\sigma \in \{H\} \cup [n] \cup \{L\}}$ given by $(\bLambda,\, \bar{\bPhi})$ according to \Cref{lem:pseudo_distribution}.
\item\label{alg:AD:descend}
Define the {\em descended} table $\underline{\bPhi} = \big[\underline{\phi}_{\sigma,\, j}\big]$ by ONLY {\em descending} the jump entry $(\sigma^{*},\, j^{*})$, from the jump value $\phi_{\sigma^{*},\, j^{*}} \equiv \phi^{*}$ to the ceiling value $\phi_{H,\, 0}$.
Reconstruct the ({\em ceiling}) \term[\text{\em descended}]{AD_descend} candidate $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ from the piecewise constant mappings $\underline{\bvarphi} = \{\underline{\varphi}_{\sigma}\}_{\sigma \in \{H\} \cup [n] \cup \{L\}}$ given by $(\bLambda,\, \underline{\bPhi})$ according to \Cref{lem:pseudo_distribution}.
\item\label{alg:AD:output}
{\bf Return} the {{\sf PoA}}-worse candidate $\argmin \big\{\, {\sf PoA}(\text{\em ascended}),~ {\sf PoA}(\text{\em descended}) \,\big\}$; \\
\white{\bf Return} breaking ties in favor of the {\em ascended} candidate $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L}$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Ascend-Descend}} reduction
\label{fig:alg:AD}}
\end{figure}
\begin{lemma}[{\text{\tt Ascend-Descend}}]
\label{lem:AD}
Under reduction $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \gets \text{\tt Ascend-Descend}(H^{\uparrow} \otimes \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:AD:property}
The output is a ceiling pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$.
\item\label{lem:AD:potential}
The potential strictly decreases $\Psi(\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) \leq \Psi(H^{\uparrow} \otimes \bB \otimes L) - 1$.
\item\label{lem:AD:poa}
A (weakly) worse bound is yielded ${\sf PoA}(\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L}) \leq {\sf PoA}(H^{\uparrow} \otimes \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\begin{proof}
See \Cref{fig:AD} for a visual aid. The \blackref{alg:AD} reduction chooses the {{\sf PoA}}-worse pseudo instance (Line~\ref{alg:AD:output}) between the \blackref{AD_ascend}/\blackref{AD_descend} candidates $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L}$ and $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L}$, breaking ties in favor of the \blackref{AD_ascend} candidate. For brevity, let $[N] \equiv \{H\} \cup [n] \cup \{L\}$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:AD:property}.}
We shall prove that both candidates $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L}$ and $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L}$ are {\em ceiling} pseudo instances, which are reconstructed (Lines~\ref{alg:AD:ascend} and \ref{alg:AD:descend}) essentially from the \blackref{AD_ascend}/\blackref{AD_descend} tables $\bar{\bPhi} = \big[\bar{\phi}_{\sigma,\, j}\big]$ and $\underline{\bPhi} = \big[\underline{\phi}_{\sigma,\, j}\big]$ according to \Cref{lem:pseudo_distribution}.
The {\em table-based} construction (Lines~\ref{alg:AD:ascend} and \ref{alg:AD:descend}) intrinsically ensures \blackref{re:discretization} and \blackref{re:ceilingness}; we just need to check
\blackref{re:monotonicity},
\blackref{re:layeredness}, and
{\bf the \Cref{lem:pseudo_mapping} conditions}.
The \blackref{AD_input} $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$, being a {\em strong ceiling} but {\em non twin ceiling} pseudo instance, not only satisfies these conditions but also (\Cref{lem:jump,lem:strong}) has a \blackref{AD_jump} and satisfies \blackref{re:collapse}.
\begin{flushleft}
\begin{itemize}
\item {\bf \blackref{re:monotonicity} and \blackref{re:layeredness}:}
The \blackref{AD_input} table $\bPhi = [\phi_{\sigma,\, j}]$ for $(\sigma,\, j) \in [N] \times [0: m]$
is increasing $\phi_{\sigma,\, 0} \leq \dots \leq \phi_{\sigma,\, j} \leq \dots \leq \phi_{\sigma,\, m}$ in each row $\sigma \in [N] = \{H\} \cup [n] \cup \{L\}$ and
is decreasing $\phi_{H,\, j} \geq \phi_{1,\, j} \geq \dots \geq \phi_{n,\, j} \geq \phi_{L,\, j}$ in each column $j \in [0: m]$.
\item {\bf the \Cref{lem:pseudo_mapping} conditions:}\footnote{Only {\bf \Cref{lem:pseudo_mapping:inequality}} is presented. {\bf \Cref{lem:pseudo_mapping:dominance}} that $\min(\bvarphi(b^{\otimes n + 1})) = \varphi_{L}(b) > b$ is ignored; the equality restates \blackref{re:layeredness} and the inequality is vacuous because the pseudo mapping $\varphi_{L}(b)$ is invariant (Lines~\ref{alg:AD:ascend} and \ref{alg:AD:descend}; $L \notin \{H,\, \sigma^{*}\}$).}
$\sum_{i \in \{H\} \cup [n]} (\phi_{i,\, j} - b)^{-1} \geq n \cdot (\phi_{L,\, j} - b)^{-1}$ over the index-$j$ piece
$b \in [\lambda_{j},\, \lambda_{j + 1})$, for each column $j \in [0: m]$.
\item The \blackref{AD_jump} entry $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n]) \times [m]$ as the {\em leftmost-then-lowest}
ultra-ceiling entry $\phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, 0}$, cannot be in the row $L$ and cannot be in the column $0$.
In each before-jump column $j \in [0: j^{*} - 1]$, all the non-monopoly entries (\blackref{re:collapse}) collapse to the pseudo entry $\phi_{1,\, j} = \dots = \phi_{n,\, j} = \phi_{L,\, j}$.
\end{itemize}
\end{flushleft}
The \blackref{AD_ascend} table $\bar{\bPhi}$ (Line~\ref{alg:AD:ascend} and \Cref{fig:AD:ascend}) ascends the row-$H$ before-jump entries, from the ceiling value $\phi_{H,\, 0} = \dots = \phi_{H,\, j^{*} - 1}$ to the {\em higher} jump value $\phi_{\sigma^{*},\, j^{*}}$.
The only modified row $H$ keeps increasing
$[\bar{\phi}_{H,\, 0} = \dots = \bar{\phi}_{H,\, j^{*} - 1} = \phi_{\sigma^{*},\, j^{*}}]
\leq [\phi_{H,\, j^{*}}]
\leq \dots \leq
[\phi_{H,\, m}]$
(\blackref{re:monotonicity}),
where the only nontrivial inequality $\phi_{\sigma^{*},\, j^{*}} \leq \phi_{H,\, j^{*}}$ stems from \blackref{re:layeredness} of the \blackref{AD_input} table $\bPhi$.
Each modified/before-jump column $j \in [0: j^{*} - 1]$ keeps decreasing
$[\bar{\phi}_{H,\, j} = \phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, j}]
\geq [\phi_{1,\, j}]
= \dots = [\phi_{n,\, j}]
= [\phi_{L,\, j}]$ (\blackref{re:layeredness})
and satisfies that \\
$(\bar{\phi}_{H,\, j} - b)^{-1} + \sum_{i \neq H} (\phi_{i,\, j} - b)^{-1}
~=~ (\phi_{\sigma^{*},\, j^{*}} - b)^{-1} + n \cdot (\phi_{L,\, j} - b)^{-1}
~\geq~ n \cdot (\phi_{L,\, j} - b)^{-1}$
({\bf \Cref{lem:pseudo_mapping}}).
\vspace{.1in}
The \blackref{AD_descend} table $\underline{\bPhi}$ (Line~\ref{alg:AD:descend} and \Cref{fig:AD:descend}) descends the \blackref{AD_jump} entry $(\sigma^{*},\, j^{*})$, from the jump value $\phi_{\sigma^{*},\, j^{*}}$ to the {\em lower} ceiling value $\phi_{H,\, 0}$.
This is the {\em leftmost-then-lowest} ultra-ceiling entry $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n]) \times [m]$, which can be in neither row $L$ nor column $0$; thus the {\em lefter}/{\em lower} entries
are both non-ultra-ceiling $[\phi_{\sigma^{*},\, j^{*} - 1}],\, [\phi_{\sigma^{*} + 1,\, j^{*}}] \leq \phi_{H,\, 0}$.
Therefore, the only modified/jump row $\sigma^{*}$ keeps increasing
$[\phi_{\sigma^{*},\, 0}]
\leq \dots
\leq [\phi_{\sigma^{*},\, j^{*} - 1}]
\leq [\underline{\phi}_{\sigma^{*},\, j^{*}} = \phi_{H,\, 0} < \phi_{\sigma^{*},\, j^{*}}]
\leq [\phi_{\sigma^{*},\, j^{*} + 1}]
\leq \dots \leq
[\phi_{\sigma^{*},\, m}]$
(\blackref{re:monotonicity}).
The only modified/jump column $j^{*}$ keeps decreasing
$[\phi_{H,\, j^{*}}]
\geq \dots
\geq [\phi_{\sigma^{*} - 1,\, j^{*}}] \geq [\phi_{\sigma^{*},\, j^{*}} > \underline{\phi}_{\sigma^{*},\, j^{*}} = \phi_{H,\, 0}]
\geq [\phi_{\sigma^{*} + 1,\, j^{*}}]
\geq \dots
\geq [\phi_{L,\, j^{*}}]$ (\blackref{re:layeredness})
and satisfies that \\
$(\underline{\phi}_{\sigma^{*},\, j^{*}} - b)^{-1} + \sum_{i \neq \sigma^{*}} (\phi_{i,\, j^{*}} - b)^{-1}
~\geq~ \sum_{i} (\phi_{i,\, j^{*}} - b)^{-1}
~\geq~ n \cdot (\phi_{L,\, j^{*}} - b)^{-1}$
({\bf \Cref{lem:pseudo_mapping}}; the first inequality holds because the descended value $\underline{\phi}_{\sigma^{*},\, j^{*}} = \phi_{H,\, 0} < \phi_{\sigma^{*},\, j^{*}}$ only increases the corresponding reciprocal).
\vspace{.1in}
To conclude, both candidates are {\em ceiling} pseudo instances. {\bf \Cref{lem:AD:property}} then follows.
\vspace{.1in}
\noindent
{\bf \Cref{lem:AD:potential}.}
The potential of a {\em ceiling} pseudo instance $\hat{H}^{\uparrow} \otimes \hat{\bB} \otimes \hat{L}$ counts all the ultra-ceiling entries $\Psi(\text{\em ceiling}) = \big|\big\{(\sigma,\, j) \in \hat{\bPhi}: \hat{\phi}_{\sigma,\, j} > \hat{\phi}_{H,\, 0}\big\}\big|$ in the bid-to-value table $\hat{\bPhi} = \big[\hat{\phi}_{\sigma,\, j}\big]$ (\Cref{def:potential}).
We thus consider three {\em ceiling} pseudo instances ({\bf \Cref{lem:AD:property}}) and need to show that the \blackref{AD_ascend}/\blackref{AD_descend} tables $\bar{\bPhi}$ and $\underline{\bPhi}$ EACH have strictly fewer ultra-ceiling entries than the \blackref{AD_input} table $\bPhi$.
The \blackref{AD_ascend} table $\bar{\bPhi}$ (Line~\ref{alg:AD:ascend} and \Cref{fig:AD:ascend}) ascends the row-$H$ before-jump entries, from the ceiling value $\phi_{H,\, 0} = \dots = \phi_{H,\, j^{*} - 1}$ to the {\em higher} jump value $\phi_{\sigma^{*},\, j^{*}}$.
Therefore, the \blackref{AD_ascend} table $\bar{\bPhi}$ shrinks the set of ultra-ceiling entries by removing all the unmodified entries $\phi_{\sigma,\, j} \in (\phi_{H,\, 0},\, \phi_{\sigma^{*},\, j^{*}}]$ between the \blackref{AD_input} ceiling value $\phi_{H,\, 0}$ (excluded) and the \blackref{AD_input} jump value $\phi_{\sigma^{*},\, j^{*}}$ (included), especially the \blackref{AD_jump} entry $(\sigma^{*},\, j^{*})$ itself.
This precisely means the \blackref{AD_ascend} table $\bar{\bPhi}$ has strictly fewer ultra-ceiling entries.
The \blackref{AD_descend} table $\underline{\bPhi}$ (Line~\ref{alg:AD:descend} and \Cref{fig:AD:descend}) descends the \blackref{AD_jump} entry $(\sigma^{*},\, j^{*})$, from the jump value $\phi_{\sigma^{*},\, j^{*}}$ to the {\em lower} ceiling value $\phi_{H,\, 0}$.
All the other entries are unmodified, especially the ceiling value $\phi_{H,\, 0}$ (given that $j^{*} \neq 0$).
Thus, the \blackref{AD_descend} table $\underline{\bPhi}$ shrinks the set of ultra-ceiling entries by just removing the \blackref{AD_jump} entry $(\sigma^{*},\, j^{*})$ itself.
Precisely, the \blackref{AD_descend} table $\underline{\bPhi}$ has strictly fewer ultra-ceiling entries.
To conclude, both candidates EACH strictly decrease the potential. {\bf \Cref{lem:AD:potential}} then follows.
\afterpage{
\begin{figure}[t]
\centering
\subfloat[{The {\em strong ceiling} but {\em non twin ceiling} input $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ with a \textbf{real jump} $\sigma^{*} \in \{H\} \cup [n]$}
\label{fig:AD:input}]{
\parbox[c]{.99\textwidth}{
{\centering
\includegraphics[width = .49\textwidth]{AD_input.png}
\par}}} \\
\vspace{1cm}
\subfloat[{The {\em ceiling} ascended candidate $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$}
\label{fig:AD:ascend}]{
\includegraphics[width = .49\textwidth]{AD_ascend.png}}
\hfill
\subfloat[{The {\em ceiling} descended candidate $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$}
\label{fig:AD:descend}]{
\includegraphics[width = .49\textwidth]{AD_descend.png}}
\caption{Demonstration for the {\text{\tt Ascend-Descend}} reduction (\Cref{fig:alg:AD}), which transforms (\Cref{def:twin,def:strong}) a {\em strong ceiling} but {\em non twin ceiling} pseudo instance $H^{\uparrow} \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow})$ that has a \textbf{real jump} $\sigma^{*} \in \{H\} \cup [n]$ to EITHER the {\em ceiling} ascended candidate $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ OR the {\em ceiling} descended candidate $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$. \\
Shorthand: ceiling value (CV) and jump value (JV). \\
\Cref{fig:AD:ascend}: invariant $\bar{H}(b) \cdot \bar{L}(b) \equiv H(b) \cdot L(b)$ and $\bar{\varphi}_{L}(b) \equiv \varphi_{L}(b)$, but modified $\bar{H}(b)$ and $\bar{L}(b)$. \\
\Cref{fig:AD:descend}: invariant $\underline{H}(b) \cdot \underline{L}(b) \equiv H(b) \cdot L(b)$ and $\underline{\varphi}_{L}(b) \equiv \varphi_{L}(b)$, but modified $\underline{H}(b)$ and $\underline{L}(b)$.
\label{fig:AD}}
\end{figure}
\clearpage}
\vspace{.1in}
\noindent
{\bf \Cref{lem:AD:poa}.}
We investigate the auction/optimal {\textsf{Social Welfares}} separately.
\vspace{.1in}
\noindent
{\bf Auction {\textsf{Social Welfares}}.}
Using \Cref{lem:translate_welfare} (with \blackref{re:discretization} and \blackref{re:ceilingness} $P^{\uparrow} \equiv \phi_{H,\, 0}$), the \blackref{AD_input} auction {\textsf{Social Welfare}} ${\sf FPA}(\blackref{AD_input}) \equiv {\sf FPA}(H^{\uparrow} \otimes \bB \otimes L)$ is given by
\begin{align*}
{\sf FPA}(\blackref{AD_input})
& ~=~ \phi_{H,\, 0} \cdot \calB(0) ~+~ \sum_{i \in \{H\} \cup [n]} \sum_{j \in [0: m]} \int_{\lambda_{j}}^{\lambda_{j + 1}} \Big(\frac{\phi_{i,\, j}}{\varphi_{L}(b) - b} - \frac{\phi_{i,\, j} - \varphi_{L}(b)}{\phi_{i,\, j} - b}\Big) \cdot \calB(b) \cdot \d b \\
& \phantom{~=~ \phi_{H,\, 0} \cdot \calB(0)} ~-~ \int_{0}^{\lambda} (n - 1) \cdot \frac{\varphi_{L}(b)}{\varphi_{L}(b) - b} \cdot \calB(b) \cdot \d b.
\end{align*}
The pseudo mapping $\varphi_{L}(b)$ and the first-order bid distribution $\calB(b) = \exp\big(-\int_{b}^{\lambda} (\varphi_{L}(b) - b)^{-1} \cdot \d b\big)$ (\Cref{lem:pseudo_distribution}) are invariant, given that neither of the \blackref{AD_ascend}/\blackref{AD_descend} candidates modifies row $L \notin \{H,\, \sigma^{*}\}$ (Lines~\ref{alg:AD:ascend} and \ref{alg:AD:descend}).
The \blackref{AD_ascend} table $\bar{\bPhi}$ (Line~\ref{alg:AD:ascend} and \Cref{fig:AD:ascend}) ascends the row-$H$ before-jump entries, from the ceiling value $\phi_{H,\, 0} = \dots = \phi_{H,\, j^{*} - 1}$ to the {\em higher} jump value $\phi_{\sigma^{*},\, j^{*}}$.
Considering the modified terms, we can formulate the \blackref{AD_ascend} counterpart as ${\sf FPA}(\blackref{AD_ascend}) = {\sf FPA}(\blackref{AD_input}) + \bar{\Delta}_{{\sf FPA}}$, using
\begin{align*}
\bar{\Delta}_{{\sf FPA}}
& ~=~ (\phi_{\sigma^{*},\, j^{*}} - \phi_{H,\, 0}) \cdot \calB(0)
~+~ \sum_{j = 0}^{j^{*} - 1} \int_{\lambda_{j}}^{\lambda_{j + 1}} \Big(\Big(\frac{\phi_{\sigma^{*},\, j^{*}}}{\varphi_{L}(b) - b} - \frac{\phi_{\sigma^{*},\, j^{*}} - \varphi_{L}(b)}{\phi_{\sigma^{*},\, j^{*}} - b}\Big) \\
& \phantom{~=~ (\phi_{\sigma^{*},\, j^{*}} - \phi_{H,\, 0}) \cdot \calB(0) ~~ \sum_{j = 0}^{j^{*} - 1} \int_{\lambda_{j}}^{\lambda_{j + 1}}\Big(}
- \Big(\frac{\phi_{H,\, 0}}{\varphi_{L}(b) - b} - \frac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b}\Big)\Big) \cdot \calB(b) \cdot \d b.
\hspace{1.1cm}
\end{align*}
The \blackref{AD_descend} table $\underline{\bPhi}$ (Line~\ref{alg:AD:descend} and \Cref{fig:AD:descend}) descends the \blackref{AD_jump} entry $(\sigma^{*},\, j^{*})$, from the jump value $\phi_{\sigma^{*},\, j^{*}}$ to the {\em lower} ceiling value $\phi_{H,\, 0}$.
Considering the only modified term, we can formulate the \blackref{AD_descend} counterpart as ${\sf FPA}(\blackref{AD_descend}) = {\sf FPA}(\blackref{AD_input}) - \underline{\Delta}_{{\sf FPA}}$, using
\begin{align*}
\underline{\Delta}_{{\sf FPA}}
& = \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}} \Big(\Big(\frac{\phi_{\sigma^{*},\, j^{*}}}{\varphi_{L}(b) - b} - \frac{\phi_{\sigma^{*},\, j^{*}} - \varphi_{L}(b)}{\phi_{\sigma^{*},\, j^{*}} - b}\Big) - \Big(\frac{\phi_{H,\, 0}}{\varphi_{L}(b) - b} - \frac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b}\Big)\Big) \cdot \calB(b) \cdot \d b.
\hspace{.75cm}
\end{align*}
\noindent
{\bf Optimal {\textsf{Social Welfares}}.}
Consider the \blackref{AD_input} subtable $\bPhi^{*} \eqdef \bPhi \setminus (\{H\} \times [0: j^{*} - 1] \cup \{(\sigma^{*},\, j^{*})\})$ that is invariant in both $\bar{\bPhi}$ and $\underline{\bPhi}$. Following {\bf \Cref{lem:AD:potential}} and \Cref{fig:AD}, (i)~the \blackref{AD_input} table $\bPhi$ has the ceiling value $\phi_{H,\, 0}$ and its ultra-ceiling entries are all included in $\bPhi^{*} \cup \{(\sigma^{*},\, j^{*})\}$; (ii)~the \blackref{AD_ascend} table $\bar{\bPhi}$ has the higher ceiling value $= \phi_{\sigma^{*},\, j^{*}}$ and its ultra-ceiling entries are all included in $\bPhi^{*}$; and (iii)~the \blackref{AD_descend} table $\underline{\bPhi}$ has the {\em same} ceiling value $= \phi_{H,\, 0}$ and its ultra-ceiling entries are all included in $\bPhi^{*}$.
Therefore, using the invariant function $\calI(v) \eqdef \prod_{(\sigma,\, j) \in \bPhi^{*}} \big(1 - \omega_{\sigma,\, j} \cdot \indicator(v < \phi_{\sigma,\, j})\big)$,\footnote{Precisely, each probability $\omega_{\sigma,\, j} = 1 - \frac{B_{\sigma}(\lambda_{j})}{B_{\sigma}(\lambda_{j + 1})}$ for $(\sigma,\, j) \in \bPhi^{*}$ is also invariant $\omega_{\sigma,\, j} = \bar{\omega}_{\sigma,\, j} = \underline{\omega}_{\sigma,\, j}$ (cf.\ \Cref{lem:pseudo_distribution} and Lines~\ref{alg:AD:ascend} and \ref{alg:AD:descend}); we omit the detailed verification for brevity.}
we deduce from \Cref{lem:ceiling_welfare} that
\begin{align*}
{\sf OPT}(\blackref{AD_input})~~~~~~
& ~=~ \int_{0}^{+\infty} \Big(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \big(1 - \omega_{\sigma^{*},\, j^{*}} \cdot \indicator(v < \phi_{\sigma^{*},\, j^{*}})\big) \cdot \calI(v)\Big) \cdot \d v,
\hspace{1.75cm} \\
{\sf OPT}(\blackref{AD_ascend})\;\,
& ~=~ \int_{0}^{+\infty} \Big(1 - \indicator(v \geq \phi_{\sigma^{*},\, j^{*}}) \cdot \calI(v)\Big) \cdot \d v, \\
{\sf OPT}(\blackref{AD_descend})
& ~=~ \int_{0}^{+\infty} \Big(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \calI(v)\Big) \cdot \d v.
\end{align*}
We have ${\sf OPT}(\blackref{AD_ascend}) \geq {\sf OPT}(\blackref{AD_input}) \geq {\sf OPT}(\blackref{AD_descend})$ given that $\phi_{H,\, 0} < \phi_{\sigma^{*},\, j^{*}}$.
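As a sanity check, this chain holds pointwise in the integrands: the three integrands coincide for $v < \phi_{H,\, 0}$ (all equal $1$) and for $v \geq \phi_{\sigma^{*},\, j^{*}}$ (all equal $1 - \calI(v)$), while on the remaining range,
\begin{align*}
\underbrace{1}_{\text{\em ascended}}
~\geq~ \underbrace{1 - \big(1 - \omega_{\sigma^{*},\, j^{*}}\big) \cdot \calI(v)}_{\text{\em input}}
~\geq~ \underbrace{1 - \calI(v)}_{\text{\em descended}},
\qquad \phi_{H,\, 0} \leq v < \phi_{\sigma^{*},\, j^{*}},
\end{align*}
since $0 \leq (1 - \omega_{\sigma^{*},\, j^{*}}) \cdot \calI(v) \leq \calI(v) \leq 1$.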
As before, let $\bar{\Delta}_{{\sf OPT}} \equiv {\sf OPT}(\blackref{AD_ascend}) - {\sf OPT}(\blackref{AD_input})$ and $\underline{\Delta}_{{\sf OPT}} \equiv {\sf OPT}(\blackref{AD_input}) - {\sf OPT}(\blackref{AD_descend})$ be the absolute changes.
The remaining proof relies on {\bf \Cref{fact:polarize}}.
(For brevity, we ignore the ``$0 / 0$'' issue.)
\setcounter{fact}{0}
\begin{fact}
\label{fact:AD:poa}
$\bar{\Delta}_{{\sf FPA}} \big/ \bar{\Delta}_{{\sf OPT}} \,\leq\, \underline{\Delta}_{{\sf FPA}} \big/ \underline{\Delta}_{{\sf OPT}}$.
\end{fact}
\begin{proof}
The \blackref{AD_ascend} auction {\textsf{Social Welfare}} change $\bar{\Delta}_{{\sf FPA}} \equiv {\sf FPA}(\blackref{AD_ascend}) - {\sf FPA}(\blackref{AD_input})$ is at most
\begin{align}
\bar{\Delta}_{{\sf FPA}}
& ~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(0)
+ \sum_{j = 0}^{j^{*} - 1} \int_{\lambda_{j}}^{\lambda_{j + 1}} \Big(\big(\tfrac{\phi^{*}}{\varphi_{L}(b) - b} - \tfrac{\phi^{*} - \varphi_{L}(b)}{\phi^{*} - b}\big)
\nonumber \\
& \phantom{~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(0) \sum_{j = 0}^{j^{*} - 1} \int_{\lambda_{j}}^{\lambda_{j + 1}} \Big(}
- \big(\tfrac{\phi_{H,\, 0}}{\varphi_{L}(b) - b} - \tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b}\big)\Big) \cdot \calB(b) \cdot \d b
\tag{restate} \\
& ~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(0)
+ \int_{0}^{\lambda_{j^{*}}} \Big(\tfrac{\phi^{*} - \phi_{H,\, 0}}{\varphi_{L}(b) - b} + \big(\tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b} - \tfrac{\phi^{*} - \varphi_{L}(b)}{\phi^{*} - b}\big)\Big) \cdot \calB(b) \cdot \d b
\hspace{1.95cm}~~\;\; \nonumber \\
& ~\leq~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(0)
+ \int_{0}^{\lambda_{j^{*}}} \phantom{\Big(}\tfrac{\phi^{*} - \phi_{H,\, 0}}{\varphi_{L}(b) - b} \cdot \calB(b) \cdot \d b
\label{eq:win_win:FPA:A3}\tag{A1} \\
& ~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(0)
+ \int_{0}^{\lambda_{j^{*}}} (\phi^{*} - \phi_{H,\, 0}) \cdot \calB'(b) \cdot \d b \phantom{\bigg.}
\label{eq:win_win:FPA:A4}\tag{A2} \\
& ~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \calB(\lambda_{j^{*}}). \phantom{\bigg.}
\nonumber
\end{align}
\eqref{eq:win_win:FPA:A3}: The dropped term $\leq 0$ (\Cref{def:jump,lem:pseudo_mapping}; $\phi^{*} \equiv \phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, 0} \geq \varphi_{L}(b) > b$). \\
\eqref{eq:win_win:FPA:A4}: The pseudo mapping $\varphi_{L} = b + \calB \big/ \calB'$ (\Cref{def:pseudo}).
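To see the sign of the dropped term in \eqref{eq:win_win:FPA:A3}, note that the map $x \mapsto \frac{x - \varphi_{L}(b)}{x - b}$ is increasing whenever $\varphi_{L}(b) > b$:
\begin{align*}
\frac{\d}{\d x} \bigg(\frac{x - \varphi_{L}(b)}{x - b}\bigg) ~=~ \frac{\varphi_{L}(b) - b}{(x - b)^{2}} ~>~ 0.
\end{align*}
Hence $\phi_{H,\, 0} < \phi^{*}$ gives $\tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b} - \tfrac{\phi^{*} - \varphi_{L}(b)}{\phi^{*} - b} \leq 0$, as claimed.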
\vspace{.1in}
\noindent
The \blackref{AD_descend} auction {\textsf{Social Welfare}} change $\underline{\Delta}_{{\sf FPA}} \equiv {\sf FPA}(\blackref{AD_input}) - {\sf FPA}(\blackref{AD_descend})$ is at least
\begin{align}
\underline{\Delta}_{{\sf FPA}}
& ~=~ \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}} \Big(\big(\tfrac{\phi^{*}}{\varphi_{L}(b) - b} - \tfrac{\phi^{*} - \varphi_{L}(b)}{\phi^{*} - b}\big) - \big(\tfrac{\phi_{H,\, 0}}{\varphi_{L}(b) - b} - \tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b}\big)\Big) \cdot \calB(b) \cdot \d b
\tag{restate} \\
& ~=~ \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}} \Big(\big(\tfrac{\phi^{*} - \phi_{H,\, 0}}{\varphi_{L}(b) - b} - \tfrac{\phi^{*} - \phi_{H,\, 0}}{\phi^{*} - b}\big) + \big(\tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi_{H,\, 0} - b} - \tfrac{\phi_{H,\, 0} - \varphi_{L}(b)}{\phi^{*} - b}\big)\Big) \cdot \calB(b) \cdot \d b
\hspace{2.76cm}~~ \nonumber \\
& ~\geq~ \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}} \phantom{\Big(} \big(\tfrac{\phi^{*} - \phi_{H,\, 0}}{\varphi_{L}(b) - b} - \tfrac{\phi^{*} - \phi_{H,\, 0}}{\phi^{*} - b}\big) \cdot \calB(b) \cdot \d b
\label{eq:win_win:FPA:D3}\tag{D1} \\
& ~=~ \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}}~~\; (\phi^{*} - \phi_{H,\, 0}) \cdot B'_{\sigma^{*}}(b) \cdot \tfrac{\calB(b)}{B_{\sigma^{*}}(b)} \cdot \d b
\label{eq:win_win:FPA:D4}\tag{D2} \\
& ~\geq~ \int_{\lambda_{j^{*}}}^{\lambda_{j^{*} + 1}}~~\; (\phi^{*} - \phi_{H,\, 0}) \cdot B'_{\sigma^{*}}(b) \cdot \tfrac{\calB(\lambda_{j^{*}})}{B_{\sigma^{*}}(\lambda_{j^{*}})} \cdot \d b
\label{eq:win_win:FPA:D5}\tag{D3} \\
& ~=~ (\phi^{*} - \phi_{H,\, 0}) \cdot \big(\tfrac{B_{\sigma^{*}}(\lambda_{j^{*} + 1})}{B_{\sigma^{*}}(\lambda_{j^{*}})} - 1\big) \cdot \calB(\lambda_{j^{*}})
~\geq~ 0.
\nonumber
\end{align}
\eqref{eq:win_win:FPA:D3}: The dropped term $\geq 0$ (\Cref{def:jump,lem:pseudo_mapping}; $\phi^{*} \equiv \phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, 0} \geq \varphi_{L}(b) > b$). \\
\eqref{eq:win_win:FPA:D4}: $\phi^{*} \equiv \phi_{\sigma^{*},\, j^{*}} = b + \big(\calB' \big/ \calB - B'_{\sigma^{*}} \big/ B_{\sigma^{*}}\big)^{-1}$ (given a \blackref{AD_jump} $\sigma^{*} \neq L$) and $\varphi_{L} = b + \calB \big/ \calB'$. \\
\eqref{eq:win_win:FPA:D5}: $\calB \big/ B_{\sigma^{*}} = \prod_{\sigma \in [N] \setminus \{\sigma^{*}\}} B_{\sigma}$ is an increasing CDF.
Combining the above two bounds and applying the substitution $\omega^{*} \equiv \omega_{\sigma^{*},\, j^{*}} = 1 - \frac{B_{\sigma^{*}}(\lambda_{j^{*}})}{B_{\sigma^{*}}(\lambda_{j^{*} + 1})}$ (\Cref{lem:translate_welfare}), we deduce that $\bar{\Delta}_{{\sf FPA}} \big/ \underline{\Delta}_{{\sf FPA}} \leq (1 - \omega^{*}) / \omega^{*}$.
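In more detail, the substitution yields $\frac{B_{\sigma^{*}}(\lambda_{j^{*} + 1})}{B_{\sigma^{*}}(\lambda_{j^{*}})} - 1 = \frac{\omega^{*}}{1 - \omega^{*}}$, so the above two bounds combine into
\begin{align*}
\frac{\bar{\Delta}_{{\sf FPA}}}{\underline{\Delta}_{{\sf FPA}}}
~\leq~ \frac{(\phi^{*} - \phi_{H,\, 0}) \cdot \calB(\lambda_{j^{*}})}{(\phi^{*} - \phi_{H,\, 0}) \cdot \frac{\omega^{*}}{1 - \omega^{*}} \cdot \calB(\lambda_{j^{*}})}
~=~ \frac{1 - \omega^{*}}{\omega^{*}}.
\end{align*}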
\vspace{.1in}
\noindent
The \blackref{AD_ascend} optimum change $\bar{\Delta}_{{\sf OPT}} \equiv {\sf OPT}(\blackref{AD_ascend}) - {\sf OPT}(\blackref{AD_input})$ is equal to
\begin{align*}
\bar{\Delta}_{{\sf OPT}}
& ~=~ \int_{0}^{+\infty} \Big(\indicator(v \geq \phi_{H,\, 0}) \cdot
\big(1 - \omega^{*} \cdot \indicator(v < \phi^{*})\big) - \indicator(v \geq \phi^{*})\Big) \cdot \calI(v) \cdot \d v
\tag{restate} \\
& ~=~ \int_{0}^{+\infty} \Big(\big(1 - \omega^{*} \cdot \indicator(v < \phi^{*})\big) - \indicator(v \geq \phi^{*})\Big) \cdot \indicator(v \geq \phi_{H,\, 0}) \cdot \calI(v) \cdot \d v
\hspace{3.35cm}~~ \\
& ~=~ \big(1 - \omega^{*}\big) \cdot \int_{0}^{+\infty} \indicator\big(v \geq \phi_{H,\, 0}\big) \cdot
\indicator\big(v < \phi^{*}\big) \cdot \calI(v) \cdot \d v.
\end{align*}
The second step: $\indicator(v \geq \phi^{*}) \equiv \indicator(v \geq \phi_{H,\, 0}) \cdot \indicator(v \geq \phi^{*})$ because the jump value $\phi^{*} \equiv \phi_{\sigma^{*},\, j^{*}} > \phi_{H,\, 0}$. \\
The last step: $\indicator(v \geq \phi^{*}) \equiv 1 - \indicator(v < \phi^{*})$.
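Concretely, combining the two indicator identities,
\begin{align*}
\big(1 - \omega^{*} \cdot \indicator(v < \phi^{*})\big) - \indicator(v \geq \phi^{*})
~=~ \indicator(v < \phi^{*}) - \omega^{*} \cdot \indicator(v < \phi^{*})
~=~ \big(1 - \omega^{*}\big) \cdot \indicator(v < \phi^{*}).
\end{align*}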
\vspace{.1in}
\noindent
The \blackref{AD_descend} optimum change $\underline{\Delta}_{{\sf OPT}} \equiv {\sf OPT}(\blackref{AD_input}) - {\sf OPT}(\blackref{AD_descend})$ is given by
\begin{align*}
\underline{\Delta}_{{\sf OPT}}
& ~=~ \int_{0}^{+\infty} \Big(\Big(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \big(1 - \omega^{*} \cdot \indicator(v < \phi^{*})\big) \cdot \calI(v)\Big) - \Big(1 - \indicator(v \geq \phi_{H,\, 0}) \cdot \calI(v)\Big)\Big) \cdot \d v \\
& ~=~ \omega^{*} \cdot \int_{0}^{+\infty} \indicator(v \geq \phi_{H,\, 0}) \cdot \indicator(v < \phi^{*}) \cdot \calI(v) \cdot \d v.
\end{align*}
We thus conclude that $\bar{\Delta}_{{\sf OPT}} \big/ \underline{\Delta}_{{\sf OPT}} = (1 - \omega^{*}) / \omega^{*} \geq \bar{\Delta}_{{\sf FPA}} \big/ \underline{\Delta}_{{\sf FPA}}$, which implies {\bf \Cref{fact:AD:poa}}.
\end{proof}
Assume the opposite of {\bf \Cref{lem:AD:poa}}: both of the \blackref{AD_ascend}/\blackref{AD_descend} candidates $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L}$ and $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L}$ yield strictly larger bounds than the \blackref{AD_input} $H^{\uparrow} \otimes \bB \otimes L$. That is,
\begin{align*}
& {\sf PoA}(\blackref{AD_ascend}) \;\;\;>~ {\sf PoA}(\blackref{AD_input})
&& \iff &&
\frac{{\sf FPA}(\blackref{AD_input}) + \bar{\Delta}_{{\sf FPA}}}{{\sf OPT}(\blackref{AD_input}) + \bar{\Delta}_{{\sf OPT}}}
~>~ \frac{{\sf FPA}(\blackref{AD_input})}{{\sf OPT}(\blackref{AD_input})}, \\
& {\sf PoA}(\blackref{AD_descend}) ~>~ {\sf PoA}(\blackref{AD_input})
&& \iff &&
\frac{{\sf FPA}(\blackref{AD_input}) - \underline{\Delta}_{{\sf FPA}}}{{\sf OPT}(\blackref{AD_input}) - \underline{\Delta}_{{\sf OPT}}}
~>~ \frac{{\sf FPA}(\blackref{AD_input})}{{\sf OPT}(\blackref{AD_input})}.
\end{align*}
Rearranging both inequalities gives $\bar{\Delta}_{{\sf FPA}} \big/ \bar{\Delta}_{{\sf OPT}} \,>\, {\sf PoA}(\blackref{AD_input}) \,>\, \underline{\Delta}_{{\sf FPA}} \big/ \underline{\Delta}_{{\sf OPT}}$ (notice that the denominators $\bar{\Delta}_{{\sf OPT}},\, \underline{\Delta}_{{\sf OPT}} > 0$), which contradicts {\bf \Cref{fact:AD:poa}}.
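For instance, the first rearrangement works as follows. Cross-multiplying (both denominators are positive) gives
\begin{align*}
\big({\sf FPA}(\blackref{AD_input}) + \bar{\Delta}_{{\sf FPA}}\big) \cdot {\sf OPT}(\blackref{AD_input})
~>~ {\sf FPA}(\blackref{AD_input}) \cdot \big({\sf OPT}(\blackref{AD_input}) + \bar{\Delta}_{{\sf OPT}}\big),
\end{align*}
namely $\bar{\Delta}_{{\sf FPA}} \cdot {\sf OPT}(\blackref{AD_input}) > {\sf FPA}(\blackref{AD_input}) \cdot \bar{\Delta}_{{\sf OPT}}$, i.e., $\bar{\Delta}_{{\sf FPA}} \big/ \bar{\Delta}_{{\sf OPT}} > {\sf PoA}(\blackref{AD_input})$; the second rearrangement is symmetric.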
This refutes our assumption: at least one candidate between $\bar{H}^{\uparrow} \otimes \bar{\bB} \otimes \bar{L}$ and $\underline{H}^{\uparrow} \otimes \underline{\bB} \otimes \underline{L}$ yields a (weakly) worse bound. {\bf \Cref{lem:AD:poa}} follows.
This finishes the proof of \Cref{lem:AD}.
\end{proof}
\subsection{The main algorithm: From floor/ceiling to twin ceiling}
\label{subsec:main}
This subsection presents the \blackref{alg:main} procedure (see \Cref{fig:alg:main} for its description), which transforms a {\em floor}/{\em ceiling} \blackref{main_input} pseudo instance $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$ into a {\em twin ceiling} output pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$ (\Cref{def:twin}).
This \blackref{alg:main} procedure runs iteratively and is built from all the given reductions:
\blackref{alg:slice}, \blackref{alg:collapse}, \blackref{alg:halve}, and \blackref{alg:AD} (\Cref{subsec:slice,subsec:collapse,subsec:halve,subsec:AD}).
Overall, through the potential method (\Cref{def:potential}), we will see that this procedure terminates in {\em finitely many} iterations, returning the {\em twin ceiling} output $\tilde{H}^{\uparrow} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$ afterward.
For ease of reference, we rephrase \Cref{lem:slice,lem:collapse,lem:halve,lem:AD}.
\begin{restate}[{\Cref{lem:slice}}]
\begin{flushleft}
\blackref{alg:slice} transforms a Floor $H^{\downarrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\downarrow}$ to
a \textsf{PoA}-worse Ceiling $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ such that the potential decreases $\tilde{\Psi} \leq \Psi - 1$.
\end{flushleft}
\end{restate}
\begin{restate}[{\Cref{lem:collapse}}]
\begin{flushleft}
\blackref{alg:collapse} transforms a Ceiling $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf valid}^{\uparrow}$ to a \textsf{PoA}-worse Strong Ceiling $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf strong}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$
such that the potential remains the same $\tilde{\Psi} = \Psi$.
\end{flushleft}
\end{restate}
\begin{restate}[{\Cref{lem:halve}}]
\begin{flushleft}
\blackref{alg:halve} transforms a Strong Ceiling $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf strong}^{\uparrow} \subsetneq \mathbb{B}_{\sf valid}^{\uparrow}$ to
a \textsf{PoA}-worse Floor/Ceiling $\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$
such that the potential decreases $\tilde{\Psi} \leq \Psi - 1$.
\end{flushleft}
\end{restate}
\begin{restate}[{\Cref{lem:AD}}]
\blackref{alg:AD} transforms a Strong Ceiling $H^{\uparrow} \otimes \bB \otimes L \in \mathbb{B}_{\sf strong}^{\uparrow}$ to a \textsf{PoA}-worse Ceiling $\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \in \mathbb{B}_{\sf valid}^{\uparrow}$ such that the potential decreases $\tilde{\Psi} \leq \Psi - 1$.
\end{restate}
\Cref{lem:main} summarizes performance guarantees of the \blackref{alg:main} procedure.
\begin{lemma}[{\text{\tt Main}}; \Cref{fig:alg:main}]
\label{lem:main}
Under procedure $\tilde{H}^{\uparrow} \otimes \tilde{L} \gets \text{\tt Main}(H \otimes \bB \otimes L)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:main:time}
It terminates in at most $(1 + 2\Psi^{*})$ iterations, where $\Psi^{*} = \Psi(H \otimes \bB \otimes L) < +\infty$ (\Cref{lem:potential:bound} of \Cref{lem:potential}) is the finite potential of the input floor/ceiling pseudo instance.
\item\label{lem:main:property}
The output is a twin ceiling pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$.
\item\label{lem:main:poa}
The output yields a (weakly) worse bound: ${\sf PoA}(\tilde{H}^{\uparrow} \otimes \tilde{L}) \leq {\sf PoA}(H \otimes \bB \otimes L)$.
\end{enumerate}
\end{lemma}
\begin{figure}[t]
\centering
\begin{mdframed}
Procedure $\term[\text{\tt Main}]{alg:main}(H \otimes \bB \otimes L)$
\begin{flushleft}
{\bf Input:}
A {\em floor}/{\em ceiling} pseudo instance $H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\downarrow} \cup \mathbb{B}_{\sf valid}^{\uparrow})$.
\white{\term[\text{\em input}]{main_input}}
\hfill
\Cref{def:ceiling_floor:restate}
\vspace{.05in}
{\bf Output:}
A {\em twin ceiling} pseudo instance $\tilde{H}^{\uparrow} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$.
\white{\term[\text{\em input}]{main_output}}
\hfill
\Cref{def:twin}
\vspace{.05in}
{\bf Remark:}
``{\text{\tt Rename}}'' denotes the reassignment ``$H \otimes \bB \otimes L \,\gets\, \tilde{H} \otimes \tilde{\bB} \otimes \tilde{L}$''.
\begin{enumerate}
\item\label{alg:main:while_begin}
\term[{\bf While}]{main_while} $\big\{ H \otimes \bB \otimes L \notin \mathbb{B}_{\sf twin}^{\uparrow} \big\}${\bf :}
\item\label{alg:main:slice}
\qquad
{\bf If} $\big\{ H \otimes \bB \otimes L \notin \mathbb{B}_{\sf valid}^{\uparrow} \big\}${\bf :}
\hfill
\OliveGreen{$\triangleright$ $\neg$(\colorref{OliveGreen}{re:ceilingness})}
\item
\qquad\qquad
$\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \,\gets\, \text{\tt Slice}(H^{\downarrow} \otimes \bB \otimes L)$ and {\text{\tt Rename}}.
\item\label{alg:main:collapse}
\qquad
{\bf Else If} $\big\{ H \otimes \bB \otimes L \in (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf strong}^{\uparrow}) \big\}${\bf :}
\hfill
\OliveGreen{$\triangleright$ $\neg$(\colorref{OliveGreen}{re:collapse})}
\item
\qquad\qquad
$\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \,\gets\, \text{\tt Collapse}(H^{\uparrow} \otimes \bB \otimes L)$ and {\text{\tt Rename}}.
\item\label{alg:main:halve}
\qquad
{\bf Else If} $\big\{ H \otimes \bB \otimes L \in (\mathbb{B}_{\sf strong}^{\uparrow} \setminus \mathbb{B}_{\sf twin}^{\uparrow}) \big\}${\bf :}
\hfill
\OliveGreen{$\triangleright$ $\neg$(\colorref{OliveGreen}{re:twin_ceiling})}
\item[]
\qquad
\OliveGreen{$\triangleright$ The jump entry $(\sigma^{*},\, j^{*}) \in (\{H\} \cup [n] \cup \{L\}) \times [m]$ (\Cref{lem:jump}).}
\item
\qquad\qquad
{\bf Case $\{\sigma^{*} = L \}$:}
$\tilde{H} \otimes \tilde{\bB} \otimes \tilde{L} \,\gets\, \text{\tt Halve}(H^{\uparrow} \otimes \bB \otimes L)$ and {\text{\tt Rename}}.
\item\label{alg:main:AD}
\qquad\qquad
{\bf Case $\{\sigma^{*} \neq L \}$:}
$\tilde{H}^{\uparrow} \otimes \tilde{\bB} \otimes \tilde{L} \,\gets\, \text{\tt Ascend-Descend}(H^{\uparrow} \otimes \bB \otimes L)$ and {\text{\tt Rename}}.
\item {\bf Return}
$H^{\uparrow} \otimes \bB \otimes L \cong \tilde{H}^{\uparrow} \otimes \tilde{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$.
\end{enumerate}
\end{flushleft}
\end{mdframed}
\caption{The {\text{\tt Main}} procedure
\label{fig:alg:main}}
\end{figure}
\begin{proof}
It suffices to show {\bf \Cref{lem:main:time}}: given it, {\bf \Cref{lem:main:property,lem:main:poa}} follow directly from the termination condition of the \blackref{main_while} loop (Line~\ref{alg:main:while_begin}) and a combination of \Cref{lem:slice,lem:collapse,lem:halve,lem:AD}.
\vspace{.1in}
\noindent
{\bf \Cref{lem:main:time}.}
We can divide all of the four reductions into two types.
\begin{itemize}
\item \textbf{Type-1: \blackref{alg:slice}, \blackref{alg:halve}, and \blackref{alg:AD}.}
Such a reduction decreases the potential $\tilde{\Psi} \leq \Psi - 1$. Therefore, Type-1 reductions in total can be invoked at most $\Psi^{*}$ times, namely the \blackref{main_input} potential. (Recall from \Cref{lem:potential} that a {\em twin ceiling} pseudo instance $\hat{H}^{\uparrow} \otimes \hat{\bB} \otimes \hat{L} \in \mathbb{B}_{\sf twin}^{\uparrow}$, which triggers the termination condition of the \blackref{main_while} loop, has a zero potential.) Formally, we have $\#\big[\textbf{Type-1}\big] \,\leq\, \Psi^{*}$.
\item \textbf{Type-2: \blackref{alg:collapse}.}
This reduction transforms a {\em ceiling} but {\em non-strong-ceiling} pseudo instance $\in (\mathbb{B}_{\sf valid}^{\uparrow} \setminus \mathbb{B}_{\sf strong}^{\uparrow})$ that violates \blackref{re:collapse} into a {\em strong ceiling} pseudo instance $\in \mathbb{B}_{\sf strong}^{\uparrow}$ that satisfies \blackref{re:collapse}. Therefore, between any two invocations of this reduction (Line~\ref{alg:main:collapse}), at least one \textbf{Type-1} reduction must be invoked. Formally, we have $\#\big[\textbf{Type-2}\big] \,\leq\, \#\big[\textbf{Type-1}\big] + 1 \,\leq\, \Psi^{*} + 1$.
\end{itemize}
Hence, the \blackref{alg:main} procedure terminates in at most $\#\big[\textbf{Type-1}\big] + \#\big[\textbf{Type-2}\big] \leq 2\Psi^{*} + 1$ iterations.
{\bf \Cref{lem:main:time}} follows then.
This finishes the proof.
\end{proof}
\begin{wrapfigure}{r}{0.3\textwidth}
\centering
\includegraphics[width = \linewidth]
{twin_restate.png}
\caption{$H \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$
\label{fig:twin:restate}}
\end{wrapfigure}
Put \Cref{lem:main,cor:preprocess} together: Towards a lower bound on the {\textsf{Price of Anarchy}}, we can focus on {\em twin ceiling} pseudo instances $H \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$ (see \Cref{fig:twin:restate} for a visual aid).
For such a pseudo instance, the monopolist $H$ has a constant bid-to-value mapping $\varphi_{H}(b) = \phi_{H,\, 0}$ over the bid support $b \in [0,\, \lambda]$ and the optimal {\textsf{Social Welfare}} is exactly the ceiling value ${\sf OPT}(H \otimes L) = \phi_{H,\, 0}$ (\Cref{lem:ceiling_welfare}).
Obviously, scaling this pseudo instance to normalize the ceiling value $\phi_{H,\, 0} = 1$ does not change the {{\sf PoA}} bound.
In addition, the \blackref{re:layeredness} condition becomes vacuously true (cf.\ \Cref{lem:pseudo_mapping}), while removing the \blackref{re:discretization} condition will expand the search space, which is fine for our purpose towards a lower bound on the {\textsf{Price of Anarchy}}.
Taking these into account, we redefine the (normalized) {\em twin ceiling} pseudo instances and consider a modified functional optimization (\Cref{cor:reduction}).
\begin{definition}[Twin ceiling pseudo instances]
\label{def:twin:restate}
For a {\em twin ceiling} pseudo instance $H \otimes L$:
\begin{itemize}
\item The (real) {\em monopolist} $H$ competes with the pseudo bidder $L$, having a {\em constant} conditional value $P^{\uparrow} \equiv 1$ and a {\em constant} bid-to-value mapping $\varphi_{H}(b) = b + \frac{L(b)}{L'(b)} = 1$ on $b \in [0,\, \lambda]$. Hence, the supremum bid satisfies $\lambda \in (0,\, 1)$ and the pseudo bidder $L$'s bid distribution is given by $L(b) = L_{\lambda}(b) \eqdef \frac{1 - \lambda}{1 - b}$.
\item The pseudo bidder $L$ competes with both bidders $H \otimes L$, having an {\em increasing} bid-to-value mapping $\varphi_{L}(b) = b + \big(\frac{H'(b)}{H(b)} + \frac{L'(b)}{L(b)}\big)^{-1} = b + \big(\frac{H'(b)}{H(b)} + \frac{1}{1 - b}\big)^{-1} = 1 - \frac{(1 - b)^{2}}{H(b) / H'(b) + 1 - b}$ on $b \in [0,\, \lambda]$.
\end{itemize}
Given a supremum bid $\lambda \in (0,\, 1)$, the pseudo bidder $L = L_{\lambda}$ is determined, so the search space is simply $\mathbb{H}_{\lambda} \eqdef \big\{H: \text{the pseudo mapping $\varphi_{L}(b)$ is increasing on $b \in [0,\, \lambda]$} \big\}$.
\end{definition}
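For completeness, the closed form $L_{\lambda}(b) = \frac{1 - \lambda}{1 - b}$ follows by solving the ODE underlying the constant mapping $\varphi_{H}$: the condition $b + \frac{L(b)}{L'(b)} = 1$ on $b \in [0,\, \lambda]$ rearranges into $\frac{L'(b)}{L(b)} = \frac{1}{1 - b}$, so
\begin{align*}
\ln L(b) ~=~ -\ln(1 - b) + \text{const}
\qquad \Longrightarrow \qquad
L(b) ~=~ \frac{1 - \lambda}{1 - b},
\end{align*}
where the boundary condition $L(\lambda) = 1$ at the supremum bid pins down the constant.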
\begin{corollary}[Lower bound]
\label{cor:reduction}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} satisfies that
\begin{align}
\label{eq:reduction}\tag{$\clubsuit$}
{\sf PoA} ~\geq~ \inf \Big\{\, {\sf FPA}(H \otimes L_{\lambda}) \,\Bigmid\, \lambda \in (0,\, 1) ~\text{\em and}~ H \in \mathbb{H}_{\lambda} \,\Big\}.
\end{align}
\end{corollary}
\begin{comment}
\Cref{cor:main} follows directly from a combination of \Cref{lem:main}, \Cref{cor:preprocess} and the above discussions: Towards a lower bound on the {\textsf{Price of Anarchy}}, we can restrict out attention to the space of {\em twin ceiling} pseudo instances $H^{\uparrow} \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$.
\begin{corollary}[Lower bound]
\label{cor:main}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is at least
\begin{align*}
{\sf PoA} ~\geq~ \inf \bigg\{\, \frac{{\sf FPA}(H^{\uparrow} \otimes L)}{{\sf OPT}(H^{\uparrow} \otimes L)} \,\biggmid\, H^{\uparrow} \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow} ~\text{\em and}~ v_{H} \in (0,\, +\infty) \,\bigg\}.
\end{align*}
\end{corollary}
\newpage
\begin{remark}[Twin ceiling pseudo instances]
For a twin ceiling pseudo instance $H^{\uparrow} \otimes L \in \mathbb{B}_{\sf twin}^{\uparrow}$, the monopolist $H^{\uparrow}$ maps all zero and/or nonzero bids $b \in [0,\, \lambda]$ to the same value $P^{\uparrow} = \varphi_{1}(0) = \varphi_{1}(b)$, and the pseudo bidder $L$ always have lower values $\varphi_{\sigma}(b) \leq P^{\uparrow}$ for $b \in [0,\, \lambda]$ (\Cref{def:potential}). Therefore, the optimal {\textsf{Social Welfare}} is simply ${\sf OPT} = P^{\uparrow}$. Given these, we would rewrite the index-$1$ bidder as $H = (P^{\uparrow},\, B_{1})$ and his {\em deterministic} value as $v_{H} = P^{\uparrow}$.
\end{remark}
\Cref{lem:collapse,cor:main} together imply the next corollary. Here and later in \red{reference}, we would rewrite $L = L_{\lambda}$ to emphasize that (\Cref{lem:twin:L} of \Cref{lem:twin}) a {\em twin ceiling} pseudo bidder determined by the supremum bid $\lambda \in (0,\, 1)$. For convenience, we denote by $\mathbb{H}_{\lambda}$ the space of twin bidders $H$ under a specific $\lambda \in (0,\, 1)$.
Indeed, a twin ceiling pseudo instance satisfies several nice properties, which are summarized below.
\begin{lemma}[Twin ceiling pseudo bidders]
\label{lem:twin}
Given a twin ceiling pseudo instance $H \otimes L$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:twin:lambda}
The supremum bid $\lambda \in (0,\, 1)$.
\item\label{lem:twin:H_mapping}
The monopolist $H$ has a constant bid-to-value mapping $\varphi_{H}(b) = v_{H} = 1$ for $b \in [0,\, \lambda]$.
\item\label{lem:twin:L}
The twin ceiling pseudo bidder $L$ has a bid distribution $L(b) = \frac{1 - \lambda}{1 - b}$ for $b \in [0, \lambda]$.
\item\label{lem:twin:L_mapping}
The pseudo bidder $L$ has a bid-to-value mapping $\varphi_{L}(b) = 1 - \frac{(1 - b)^{2}}{H(b) / H'(b) + 1 - b}$ for $b \in [0, \lambda]$.
\end{enumerate}
\end{lemma}
\begin{proof}
\Cref{lem:twin:H_mapping} directly follows from \Cref{def:twin}.
\vspace{.1in}
\noindent
{\bf \Cref{lem:twin:L}.}
Bidder $H$ competes with the pseudo bidder $L$, thus (\Cref{def:pseudo}) a bid-to-value mapping $\varphi_{H}(b) = b + L(b) \big/ L'(b)$ for $b \in [0,\, \lambda]$. This and \Cref{lem:twin:H_mapping} together imply the ODE $b + L(b) / L'(b) = 1$ for $b \in [0,\, \lambda]$, which must admit a boundary condition $L(\lambda) = 1$ at the supremum bid $\lambda > 0$. Resolving this ODE gives $L(b) = \frac{1 - \lambda}{1 - b}$ for $b \in [0,\, \lambda]$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:twin:lambda}.}
Certainly, the CDF $L(b) = \frac{1 - \lambda}{1 - b}$ must be an increasing function on the support $b \in [0,\, \lambda]$, so we must have $\lambda \in (0,\, 1)$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:twin:L_mapping}.}
The pseudo bidder $L$ competes with the high-value bidder $H$ as well as himself $L$ (\Cref{def:pseudo}), hence a bid-to-value mapping
\begin{align*}
\varphi_{L}(b)
~=~ b + \bigg(\frac{H'(b)}{H(b)} + \frac{L'(b)}{L(b)}\bigg)^{-1}
& ~=~ b + \bigg(\frac{H'(b)}{H(b)} + \frac{(1 - \lambda) / (1 - b)}{(1 - \lambda) / (1 - b)^{2}}\bigg)^{-1}
\nonumber \\
& ~=~ 1 - \frac{(1 - b)^{2}}{H(b) / H'(b) + 1 - b}. \phantom{\bigg.}
\end{align*}
Here the first step applies $L(b) = \frac{1 - \lambda}{1 - b}$ and $L'(b) = \frac{1 - \lambda}{(1 - b)^{2}}$ (\Cref{lem:twin:L}) and the second step rearranges the equation. Now each of \Cref{lem:twin:lambda,lem:twin:H_mapping,lem:twin:L_mapping,lem:twin:L} gets established.
\end{proof}
\end{comment}
\section{Structural Results}
\label{sec:structure}
This section establishes a number of structural results on {\textsf{First Price Auctions}} and {\textsf{Bayesian Nash Equilibria}}. Some of these results, or their analogs, may already appear in the literature \cite{SZ90,L96,MR00a,MR00b,JSSZ02,MR03,HKMN11,CH13}. However, we still formalize and reprove them for completeness.
Provided these structural results, we can reformulate the {\textsf{Price of Anarchy}} problem in terms of bid distributions rather than value distributions.
To sell a single indivisible item to one of $n \geq 1$ bidders, each bidder $i \in [n]$ is asked to submit a non-negative bid $b_{i} \geq 0$. A generic auction $\calA = (\alloc,\, \pays)$ is given by its allocation rule $\alloc(\bb): \R^{n} \to [n]$ and payment rule $\pays(\bb) = (\pay_{i}(\bb))_{i \in [n]}: \R^{n} \to \R^{n}$, both of which can be {\em randomized}.
Concretely, the item is allocated to {\em the} bidder $\alloc(\bb) \in [n]$, and $\pay_{i}(\bb)$'s are the payments of bidders $i \in [n]$.
Rigorously, {\textsf{First Price Auction}} is not a single auction but a family of auctions, each of which has a specific allocation rule that obeys the \blackref{pro:allocation}/\blackref{pro:payment} principles.
\begin{definition}[{\textsf{First Price Auctions}}]
\label{def:fpa}
An $n$-bidder single-item auction $\calA = (\alloc,\, \pays)$ is called a {\textsf{First Price Auction}} when its allocation rule $\alloc(\bb)$ and payment rule $\pays(\bb) = (\pay_{i}(\bb))_{i \in [n]}$ satisfy that:
\begin{itemize}
\item \term[{\bf first price allocation}]{pro:allocation}{\bf:}
Let $X(\bb) \eqdef \argmax \big\{b_{i}: i \in [n]\big\}$ be the set of first-order bidders. When such bidders are not unique $|X(\bb)| \geq 2$, allocate the item to one of them $\alloc(\bb) \in X(\bb)$, through a (randomized) {\em tie-breaking} rule given by the auction $\calA$ itself on this bid profile $\bb$. When such a bidder is unique $|X(\bb)| = 1$, allocate the item to him/her $\alloc(\bb) \equiv X(\bb)$.
\item \term[{\bf first price payment}]{pro:payment}{\bf:} The allocated bidder $\alloc(\bb)$ pays his/her own bid, while non-allocated bidders $[n] \setminus \{\alloc(\bb)\}$ each pay nothing. Formally, $\pay_{i}(\bb) = b_{i} \cdot \indicator\big(i = \alloc(\bb)\big)$ for $i \in [n]$.
\end{itemize}
Without ambiguity, the allocation rule $\alloc(\bb)$ itself will be called the considered {\textsf{First Price Auction}}, since it controls the payment rule $\pays(\bb)$.
Denote by $\bbFPA \ni \alloc$ the space of all {\textsf{First Price Auctions}}.
\end{definition}
The only undetermined part of a {\textsf{First Price Auction}} is the tie-breaking rule, which is subtle and plays an important role in our analysis.
To clarify it, let us rigorously define the interim allocation/utility formulas (\Cref{def:interim_utility}) and {\textsf{Bayesian Nash Equilibrium}} (\Cref{def:bne_formal}).
\begin{definition}[Interim allocations/utilities]
\label{def:interim_utility}
Given a {\textsf{First Price Auction}} $\alloc \in \bbFPA$, value distributions $\bV = \{V_{i}\}_{i \in [n]}$, and a strategy profile $\bs = \{s_{i}\}_{i \in [n]}$:
\begin{itemize}
\item Each interim allocation formula is defined as $\alloc_{i}(b) \eqdef \Pr_{\alloc,\, \bs_{-i}(\bv_{-i})} \big[ i = \alloc\big(b,\, \bs_{-i}(\bv_{-i})\big) \big]$ for $b \geq 0$, where the probability is over the randomness of the others' bids $\bs_{-i}(\bv_{-i})$ and of the considered {\textsf{First Price Auction}} $\alloc \in \bbFPA$ itself.
\item Each interim utility formula $u_{i}(v,\, b) \eqdef (v - b) \cdot \alloc_{i}(b)$ for $v \in \supp(V_{i})$ and $b \geq 0$.
\end{itemize}
\end{definition}
\begin{definition}[{\textsf{Bayesian Nash Equilibria}}]
\label{def:bne_formal}
Following \Cref{def:interim_utility}, the strategy profile $\bs = \{s_{i}\}_{i \in [n]}$ reaches a {\textsf{Bayesian Nash Equilibrium}} for this distribution-auction tuple, namely $\bs \in \bbBNE(\bV,\, \alloc)$, when: For each bidder $i \in [n]$, any possible value $v \in \supp(V_{i})$, and any deviation bid $b \geq 0$,
\begin{align*}
\Ex_{s_{i}}\big[\, u_{i}(v,\, s_{i}(v)) \,\big] ~\geq~ u_{i}(v,\, b).
\end{align*}
\end{definition}
\Cref{def:bne_formal} means that, for any value $v \in \supp(V_{i})$, nearly all the equilibrium bids $b \in \supp(s_{i}(v))$ EACH maximize the interim utility formula $u_{i}(v,\, b)$, except a {\em zero-measure} set.
By excluding these zero-measure sets from the bid supports $\supp(s_{i}(v))$, for $i\in [n]$, the modified strategy profile still satisfies the definition of {\textsf{Bayesian Nash Equilibrium}}. Hereafter, without loss of generality, we assume that EVERY equilibrium bid $b \in \supp(s_{i}(v))$ maximizes the formula $u_{i}(v,\, b)$.
The following existence result can be concluded from \cite{L96}.
\begin{theorem}[{{\textsf{Bayesian Nash Equilibrium}}~\cite{L96}}]
\label{thm:exist_bne}
Given any value distribution $\bV = \{V_{i}\}_{i \in [n]}$, there exists at least one {\textsf{First Price Auction}} $\alloc \in \bbFPA$ that admits an equilibrium $\bbBNE(\bV,\, \alloc) \neq \emptyset$.
\end{theorem}
\begin{comment}
\yj{organization of this section.}
\blue{This existence result is the basics of our later discussions
As mentioned in \red{(overview)}, ???. Particularly, we will leverage to prove our universal approximation result.
En route, we gradually characterize the given exact equilibrium $\bs \in \bbBNE(\bV,\, \alloc)$ and conclude with a {\em bid-based} equivalence condition for {\textsf{Bayesian Nash Equilibrium}} (\Cref{thm:bne}), rather than the original {\em value-/utility-based} statement (\Cref{def:bne_formal}).
Using this equivalent condition, it will be trivial how to derive the universal ``$\epsilon$-approximate'' strategy $\bs^{*} = \{s_{i}^{*}\}_{i \in [n]}$ by perturbing the $\alloc$-exclusive strategy $\bs = \{s_{i}\}_{i \in [n]}$.}
First of all, \Cref{lem:optimal_utility} restates \Cref{def:bne_formal}, but we include it for completeness.
\begin{lemma}[Utility optimality]
\label{lem:optimal_utility}
For $i \in [n]$ and any possible value $v \in \supp(V_{i})$, the equilibrium bid $s_{i}(v)$ optimizes the interim utility formula $u_{i}(v,\, s_{i}(v)) = \sup\big\{ u_{i}(v,\, b): b \geq 0\big\}$, almost surely.
\end{lemma}
\begin{proof}
By the condition (\Cref{def:bne_formal}) for making $\bs \in \bbBNE(\bV,\, \alloc) \neq \emptyset$ an exact equilibrium.
\end{proof}
\end{comment}
\subsection{The forward direction: {\textsf{Bayesian Nash Equilibria}}}
\label{subsec:bne}
This subsection presents a series of lemmas that characterize an equilibrium $\bs \in \bbBNE(\bV,\, \alloc) \neq \emptyset$.
To this end, let us introduce some helpful notations, which will be adopted throughout this paper.
\begin{itemize}
\item $\bB = \{B_{i}\}_{i \in [n]}$ denotes the equilibrium bid distributions $\bs(\bv) = (s_{i}(v_{i}))_{i \in [n]} \sim \bB$.
\item $\calB(b) = \prod_{i \in [n]} B_{i}(b)$ denotes the first-order bid distribution $\max(\bs(\bv)) \sim \calB$.
\item $\calB_{-i}(b) = \prod_{k \in [n] \setminus \{i\}} B_{k}(b)$ denotes the competing bid distribution of each bidder $i \in [n]$.
\item $\gamma \eqdef \inf(\supp(\calB))$ and $\lambda \eqdef \sup(\supp(\calB))$ denote the ``infimum''/``supremum'' first-order bids, respectively.
Without ambiguity, we call $v,\, b < \gamma$ the low values/bids, $v,\, b = \gamma$ the boundary values/bids, and $v,\, b > \gamma$ the normal values/bids. As their names suggest:
(i)~low bids $b < \gamma$ each give a zero winning probability, so these bids are less important;
(ii)~normal bids $b > \gamma$ are the most common ones and we will show that they behave nicely; and
(iii)~boundary bids $b = \gamma$ are tricky and we will deal with them separately.
\item $\calU_{i}(v) \eqdef \max \big\{u_{i}(v,\, b): b \geq 0\big\}$ denotes the optimal utility formula of each bidder $i \in [n]$. This is well defined on the value support $v \in \supp(V_{i})$, because (\Cref{def:bne_formal}) every equilibrium bid $s_{i}(v)$ does optimize the interim utility formula $u_{i}(v,\, s_{i}(v)) = \calU_{i}(v)$.
\end{itemize}
Each bidder's interim allocation $\alloc_{i}(b)$ relies on his/her competing bid distribution $\calB_{-i}(b)$, plus the tie-breaking rule of the considered auction $\alloc \in \bbFPA$.
When $\calB_{-i}(b)$ is a continuous distribution,
they are identical $\alloc_{i}(b) \equiv \calB_{-i}(b)$,
because the probability of being ONE of the first-order bidders $\Prx_{\bb} [X(\bb) \ni i]$ is exactly the probability of being the ONLY one $\Prx_{\bb} [X(\bb) = \{i\}]$.
In general, we can at least obtain (\Cref{lem:allocation}) the following properties for an interim allocation $\alloc_{i}(b)$.
\begin{lemma}[Interim allocations]
\label{lem:allocation}
For each interim allocation formula $\alloc_{i}(b)$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:allocation:1}
$\alloc_{i}(b)$ is weakly increasing for $b \geq 0$.
\item\label{lem:allocation:2}
It is point-wise lower/upper bounded as $\sup \big\{ \calB_{-i}(t): t < b\big\} \leq \alloc_{i}(b) \leq \calB_{-i}(b)$ for $b \geq 0$.
\item\label{lem:allocation:3}
$\alloc_{i}(b) = 0$ for any low bid $b < \gamma$, AND $\alloc_{i}(b) > 0$ for any normal bid $b > \gamma$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf \Cref{lem:allocation:1,lem:allocation:2}} directly follow from the \blackref{pro:allocation} principle (\Cref{def:fpa}). Especially for {\bf \Cref{lem:allocation:2}}, suppose that (with a nonzero probability) bidder $B_{i}$ ties with someone else in $[n] \setminus \{i\}$ at the first-order bid $s_{i}(v_{i}) = \max(\bs_{-i}(\bv_{-i})) = b$; then the upper bound ``$\alloc_{i}(b) = \calB_{-i}(b)$'' is attained when the auction always favors bidder $B_{i}$ in such ties, and the lower bound ``$\alloc_{i}(b) = \sup \big\{ \calB_{-i}(t): t < b\big\}$'' is attained when it always disfavors him/her.
{\bf \Cref{lem:allocation:3}} holds because $\gamma = \inf(\supp(\calB))$ is the infimum first-order bid $\max(\bs(\bv)) \sim \calB$.
\end{proof}
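To illustrate the point-wise bounds in {\bf \Cref{lem:allocation:2}}, here is a minimal sketch (a hypothetical tie-breaking rule, just for intuition): a two-bidder auction that breaks ties by a fair coin, where the competing bid distribution carries a point mass $p \eqdef \calB_{-i}(b) - \sup \big\{ \calB_{-i}(t): t < b \big\} > 0$ at some bid $b$. Then
\[
\alloc_{i}(b)
~=~ \underbrace{\sup \big\{ \calB_{-i}(t): t < b \big\}}_{\text{the opponent bids below $b$}}
~+~ \underbrace{\tfrac{1}{2} \cdot p}_{\text{a tie, won by the coin}},
\]
which lies strictly between the lower/upper bounds $\sup \big\{ \calB_{-i}(t): t < b \big\} \leq \alloc_{i}(b) \leq \calB_{-i}(b)$.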
\Cref{lem:dichotomy} shows that both the value space $v \in \supp(V_{i})$ and the bid space $b \geq 0$ of each bidder $i \in [n]$ are almost separated by the infimum first-order bid $\gamma = \inf(\supp(\calB))$.
\begin{lemma}[Bidding dichotomy]
\label{lem:dichotomy}
For each bidder $i \in [n]$:
\begin{flushleft}
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:dichotomy:1}
For a low/boundary value $v \in \supp_{\leq \gamma}(V_{i})$, the optimal utility is zero $\calU_{i}(v) = 0$ and, almost surely, the equilibrium bid is low/boundary $s_{i}(v) \leq \gamma$.
\item\label{lem:dichotomy:2}
For a normal value $v \in \supp_{> \gamma}(V_{i})$, the optimal utility is nonzero $\calU_{i}(v) > 0$ and, almost surely, the equilibrium bid is boundary/normal $\gamma \leq s_{i}(v) < v$.
\end{enumerate}
\end{flushleft}
\end{lemma}
\begin{proof}
Recall \Cref{def:interim_utility} for the interim utility formula $u_{i}(v,\, b) = (v - b) \cdot \alloc_{i}(b)$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:dichotomy:1}.}
A low/boundary value $v \in \supp_{\leq \gamma}(V_{i})$ induces EITHER a low (under)bid $b < v \leq \gamma$ and a zero interim allocation/utility $\alloc_{i}(b) = u_{i}(v,\, b) = 0$, OR a truthful/over bid $b \geq v$ and a negative interim utility $u_{i}(v,\, b) \leq 0$. Especially, a normal bid $b > \gamma \geq v$ induces a nonzero interim allocation $\alloc_{i}(b) > 0$ and a strictly negative interim utility $u_{i}(v,\, b) < 0$. Therefore, the optimal utility is zero $\calU_{i}(v) = 0$ and the equilibrium bid must be low/boundary $s_{i}(v) \leq \gamma$.
{\bf \Cref{lem:dichotomy:1}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:dichotomy:2}.}
A normal value $v \in \supp_{> \gamma}(V_{i})$ is able to gain a positive utility $u_{i}(v,\, b) > 0$; for example, underbidding $b^{*} = (\gamma + v) / 2 \in (\gamma,\, v)$ gives a nonzero allocation/utility $\alloc_{i}(b^{*}),\, u_{i}(v,\, b^{*}) > 0$.
So, the equilibrium bid $s_{i}(v)$ neither can be truthful/over $b \geq v$ (hence a negative utility $u_{i}(v,\, b) \leq 0$), nor can be low $b < \gamma$ (hence a zero allocation/utility $\alloc_{i}(b) = u_{i}(v,\, b) = 0$). That is, the optimal utility is nonzero $\calU_{i}(v) > 0$ and the equilibrium (under)bid must be boundary/normal $\gamma \leq s_{i}(v) < v$.
{\bf \Cref{lem:dichotomy:2}} follows then.
\end{proof}
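For concreteness, here is a minimal worked example (the standard symmetric setting; it is not used in any later proof): two bidders with i.i.d.\ values $\sim U[0,\, 1]$ and the symmetric equilibrium strategy $s(v) = v / 2$, so that $\gamma = 0$ and every value $v > 0$ is normal. Against the opponent's value $w \sim U[0,\, 1]$,
\[
\alloc_{i}(b) ~=~ \Prx_{w} \big[\, s(w) \leq b \,\big] ~=~ 2 b
\qquad \Longrightarrow \qquad
u_{i}(v,\, b) ~=~ (v - b) \cdot 2 b,
\]
which is maximized at the equilibrium bid $b = v / 2 \in [\gamma,\, v)$, with the nonzero optimal utility $\calU_{i}(v) = v^{2} / 2 > 0$, exactly as {\bf \Cref{lem:dichotomy:2}} describes.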
\Cref{lem:bid_monotonicity} shows that, in the normal value regime $v \in \supp_{> \gamma}(V_{i})$, the optimal utility formula $\calU_{i}(v)$ and the equilibrium strategy $s_{i}(v)$ are monotonic (cf.\ \cite[Lemma~3.9]{CH13} for a similar result).
\begin{lemma}[Bidding monotonicity]
\label{lem:bid_monotonicity}
For any two normal values $v,\, w \in \supp_{> \gamma}(V_{i})$ that $v > w$ of a bidder $i \in [n]$:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:bid_monotonicity:1}
The two optimal utilities are strictly monotonic $\calU_{i}(v) > \calU_{i}(w) > 0$.
\item\label{lem:bid_monotonicity:2}
The two random bids are weakly monotonic $s_{i}(v) \geq s_{i}(w) \geq \gamma$, almost surely.
\end{enumerate}
\end{lemma}
\begin{proof}
\Cref{lem:dichotomy} already shows that the utilities are nonzero $\calU_{i}(v),\, \calU_{i}(w) > 0$ and the equilibrium bids are boundary/normal $s_{i}(v),\, s_{i}(w) \geq \gamma$. It remains to verify the utility/bidding monotonicity.
\vspace{.1in}
\noindent
{\bf \Cref{lem:bid_monotonicity:1}.}
The utility monotonicity $\calU_{i}(v) > \calU_{i}(w)$. We deduce that
\begin{align*}
\calU_{i}(v)
~\geq~ u_{i}(v,\, s_{i}(w))
~=~ \frac{v - s_{i}(w)}{w - s_{i}(w)} \cdot u_{i}(w,\, s_{i}(w))
~=~ \frac{v - s_{i}(w)}{w - s_{i}(w)} \cdot \calU_{i}(w)
~>~ \calU_{i}(w).
\end{align*}
The first step: Under the value $v$, the optimal utility $\calU_{i}(v)$ is at least the interim utility $u_{i}(v,\, s_{i}(w))$ resulting from bidding $s_{i}(w)$, namely the equilibrium bid at the (lower) value $w$. \\
The second step: Apply the interim utility formula to $u_{i}(v,\, s_{i}(w))$ and $u_{i}(w,\, s_{i}(w))$. \\
The third step: The random bid $s_{i}(w)$ optimizes the interim utility formula $u_{i}(w,\, s_{i}(w)) = \calU_{i}(w)$ at the value $w$. \\
The fourth step: (\Cref{lem:dichotomy}) $v > w > s_{i}(w)$ and $\calU_{i}(w) > 0$.
{\bf \Cref{lem:bid_monotonicity:1}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:bid_monotonicity:2}.}
The bidding monotonicity $s_{i}(v) \geq s_{i}(w)$. This result also appears in \cite[Lemma~3.9]{CH13}; we recap their proof for completeness.
Assume to the opposite that $s_{i}(v) < s_{i}(w)$.
Following the interim utility formula (\Cref{def:interim_utility}),
\begin{align*}
u_{i}(v,\, s_{i}(v)) & ~=~ u_{i}(w,\, s_{i}(v)) ~+~ (v - w) \cdot \alloc_{i}(s_{i}(v)), \phantom{\big.} \\
u_{i}(v,\, s_{i}(w)) & ~=~ u_{i}(w,\, s_{i}(w)) ~+~ (v - w) \cdot \alloc_{i}(s_{i}(w)). \phantom{\big.}
\end{align*}
We have \term[{\bf (i)}]{lem:bid_monotonicity:2:i}~$u_{i}(v,\, s_{i}(v)) \geq u_{i}(v,\, s_{i}(w))$, since the equilibrium bid $s_{i}(v)$ optimizes the interim utility formula $u_{i}(v,\, b)$ for this value $v$. Similarly, we have \term[{\bf (ii)}]{lem:bid_monotonicity:2:ii}~$u_{i}(w,\, s_{i}(w)) \geq u_{i}(w,\, s_{i}(v))$.
Also, under the assumption $s_{i}(v) < s_{i}(w)$, we know from \Cref{lem:allocation:1} of \Cref{lem:allocation} that \term[{\bf (iii)}]{lem:bid_monotonicity:2:iii}~$\alloc_{i}(s_{i}(v)) \leq \alloc_{i}(s_{i}(w))$.
Given the above two equations (note that $v > w$), each of \blackref{lem:bid_monotonicity:2:i}, \blackref{lem:bid_monotonicity:2:ii}, and \blackref{lem:bid_monotonicity:2:iii}
must achieve the equality. However, this means\footnote{Particularly, the equality of \blackref{lem:bid_monotonicity:2:i} gives $(v - s_{i}(v)) \cdot \alloc(s_{i}(v)) = (v - s_{i}(w)) \cdot \alloc(s_{i}(w))$; notice that $s_{i}(v) < s_{i}(w)$. And the equality of \blackref{lem:bid_monotonicity:2:iii} gives $\alloc(s_{i}(v)) = \alloc(s_{i}(w))$. For these reasons, we must have $\alloc(s_{i}(v)) = \alloc(s_{i}(w)) = 0$.} all interim allocations/utilities are zero $\alloc_{i}(s_{i}(v)) = \alloc_{i}(s_{i}(w)) = u_{i}(v,\, s_{i}(v)) = u_{i}(w,\, s_{i}(w)) = 0$, which contradicts {\bf \Cref{lem:bid_monotonicity:1}}.
Rejecting our assumption gives {\bf \Cref{lem:bid_monotonicity:2}}.
\end{proof}
\begin{remark}[Quantiles]
\label{quantiles}
Due to \Cref{lem:bid_monotonicity}, the normal value space $v_{i} > \gamma$ and normal/boundary bid space $s_{i}(v_{i}) \geq \gamma$ each identify the other in terms of quantiles. From this perspective, the stochastic process of the auction game runs as follows: Draw a uniform random quantile $q_{i} \sim U[0,1]$, and then realize the value/bid $V_i^{-1}(q_{i})$ and $B_i^{-1}(q_{i})$ accordingly. The two stochastic processes are equivalent conditioned on the realized value being normal $V_i^{-1}(q_{i}) > \gamma$.
\end{remark}
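In formulas (under the extra regularity assumption, for illustration only, that $V_{i}$ and $B_{i}$ are continuous and strictly increasing on the normal value/bid ranges, so the quantile functions are well defined there): the monotonicity from \Cref{lem:bid_monotonicity} gives
\[
B_{i}\big(s_{i}(v)\big) ~=~ V_{i}(v)
\qquad \Longleftrightarrow \qquad
s_{i}(v) ~=~ B_{i}^{-1}\big(V_{i}(v)\big),
\qquad v \in \supp_{> \gamma}(V_{i}).
\]
For instance, for a value $v \sim U[0,\, 1]$ and the symmetric two-bidder equilibrium bid CDF $B_{i}(b) = 2 b$, this coupling yields $s_{i}(v) = B_{i}^{-1}(v) = v / 2$.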
\Cref{lem:bid_distribution} shows that, on the whole interval $b \in [\gamma = \inf(\supp(\calB)),\, \lambda = \sup(\supp(\calB))]$, all of the equilibrium/competing/first-order bid distributions $B_{i}(b)$, $\calB_{-i}(b)$, and $\calB(b)$ are well structured.
The earlier works \cite{MR00a,MR00b,MR03} derive similar results (under additional assumptions).
\begin{lemma}[Bid distributions]
\label{lem:bid_distribution}
Each of the following holds:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:bid_distribution:monotonicity}
The competing/first-order bid distributions $\calB_{-i}(b)$ for $i \in [n]$ and $\calB(b)$ each have probability densities almost everywhere on $b \in (\gamma,\, \lambda]$, therefore being strictly increasing CDF's on the CLOSED interval $b \in [\gamma,\, \lambda]$.
\item\label{lem:bid_distribution:continuity}
The equilibrium/competing/first-order bid distributions $B_{i}(b)$ for $i \in [n]$, $\calB_{-i}(b)$ for $i \in [n]$ and $\calB(b)$, each have no probability mass on $b \in (\gamma,\, \lambda]$, excluding the boundary $\gamma = \inf(\supp(\calB))$, therefore being continuous CDF's on the CLOSED interval $b \in [\gamma,\, \lambda]$.
\end{enumerate}
\end{lemma}
\begin{proof}
We safely assume $n \geq 2$. (Otherwise, \Cref{lem:bid_distribution} is vacuously true since the unique bidder always bids zero $[\gamma,\, \lambda] = \{0\}$ in any {\textsf{First Price Auction}}.)
The proof relies on {\bf \Cref{fact:bid_distribution}}.
\setcounter{fact}{0}
\begin{fact}
\label{fact:bid_distribution}
Any two equilibrium bid distributions $B_{i}$ and $B_{k}$ for $i \neq k \in [n]$ cannot both have probability masses at a normal bid $b > \gamma$.
\end{fact}
\begin{proof}
Assume the opposite. Then both bidders $B_{i}$ and $B_{k}$ (and possibly someone else) tie for this first-order bid $s_{i}(v_{i}) = s_{k}(v_{k}) = \max(\bs(\bv)) = b > \gamma$ with a nonzero probability $> 0$. Conditioned on this, with a nonzero probability $> 0$, at least one bidder between $B_{i}$ and $B_{k}$ cannot be allocated. Hence, either $B_{i}$ or $B_{k}$ or both can gain a nonzero extra allocation by raising his/her (equilibrium) bid $s_{i}(v_{i}) = s_{k}(v_{k}) = b$. More formally, for some nonzero probabilities $\delta_{i},\, \delta_{k} > 0$, we have EITHER $\alloc_{i}(b^{*}) \geq \alloc_{i}(b) + \delta_{i}$ for any $b^{*} > b$, OR $\alloc_{k}(b^{*}) \geq \alloc_{k}(b) + \delta_{k}$ for any $b^{*} > b$, OR both.
Without loss of generality, let us consider the case that $\alloc_{i}(b^{*}) \geq \alloc_{i}(b) + \delta_{i}$ for any $b^{*} > b$. However, this means some deviation bid, such as $b^{*} = b + \frac{\delta_{i} / 2}{\alloc_{i}(b) + \delta_{i}} \cdot (v_{i} - b) $, can benefit. Namely, because this bidder $B_{i}$ has a normal equilibrium bid $s_{i}(v_{i}) = b > \gamma$, (\Cref{lem:dichotomy:2} of \Cref{lem:dichotomy}) he/she must have a higher value $v_{i} > s_{i}(v_{i}) = b$. This implies that (by construction) $b < b^{*} < v_{i}$ and thus, that the deviated utility $u_{i}(v_{i},\, b^{*}) = (v_{i} - b^{*}) \cdot \alloc_{i}(b^{*})$ is lower bounded as
\begin{align*}
u_{i}(v_{i},\, b^{*})
& ~\geq~ (v_{i} - b^{*}) \cdot \big(\alloc_{i}(b) + \delta_{i}\big)
&& \text{the ``WLOG''}
\phantom{\big.} \\
& ~=~ (v_{i} - b) \cdot \big(\alloc_{i}(b) + \delta_{i} / 2\big)
&& \text{construction of $b^{*}$}
\phantom{\big.} \\
& ~>~ (v_{i} - b) \cdot \alloc_{i}(b)
&& \text{$v_{i} > b$ and $\delta_{i} > 0$}
\phantom{\big.} \\
& ~=~ u_{i}(v_{i},\, b)
~=~ u_{i}(v_{i},\, s_{i}(v_{i})).
&& \text{$s_{i}(v_{i}) = b$}
\phantom{\big.}
\end{align*}
To conclude, the deviation bid $b^{*}$ strictly surpasses the equilibrium bid $s_{i}(v_{i})$, which contradicts (\Cref{def:bne_formal}) the optimality of $s_{i}(v_{i})$. Rejecting our assumption gives {\bf \Cref{fact:bid_distribution}}.
\vspace{.1in}
\noindent
\term[{\bf Neighborhood Deviation Arguments}]{pro:deviation}{\bf .}
The concrete construction of the above deviation bid $b^{*}$ is less important.
Instead, the point is that
we are able to find two bids $b^{*} > b = s_{i}(v_{i})$ that can be arbitrarily close $(b^{*} - b) \searrow 0$ BUT
admit a {\em nonzero} interim allocation gap $\alloc_{i}(b^{*})-\alloc_{i}(b) \geq \delta_{i} > 0$.
If so, bidder $i$ strictly benefits $u_{i}(v,\, b^{*}) > u_{i}(v,\, b)$ from the deviation bid $b^{*}$ when it is close enough to the equilibrium bid $b^{*} \searrow b$, hence a contradiction to the optimality of $b = s_{i}(v_{i})$.
Also, suppose there are two bids $b^{*} < b = s_{i}(v_{i})$ that yield two arbitrarily close {\em nonzero} interim allocations $(\alloc_{i}(b^{*}) - \alloc_{i}(b)) \nearrow 0$ BUT themselves are bounded away $b - b^{*} \geq \delta_{i} > 0$,
then bidder $i$ again benefits from the deviation bid $b^{*}$, hence a contradiction to the optimality of $b = s_{i}(v_{i})$.
We will apply such arguments in many places, WITHOUT specifying the deviation bids $b^{*}$ as the explicit constructions are less informative.
For clarity, such arguments will be called the \blackref{pro:deviation}.
\end{proof}
Below let us prove {\bf \Cref{lem:bid_distribution:monotonicity} and \Cref{lem:bid_distribution:continuity}}.
We reason about the first-order bid distribution $\calB(b)$ and the competing bid distributions $\calB_{-i}(b)$ separately.
\vspace{.1in}
\noindent
{\bf \Cref{lem:bid_distribution:monotonicity}: $\calB(b)$.}
Assume the opposite of $\calB(b)$'s strict monotonicity: The first-order bid distribution $\calB(b)$ has no probability density on an OPEN interval $(\alpha,\, \beta) \subseteq [\gamma,\, \lambda]$.
Indeed, because $\gamma = \inf(\supp(\calB))$ and $\lambda = \sup(\supp(\calB))$,
we can find a {\em maximal} interval $(\alpha,\, \beta)$ such that, the $\calB(b)$ has probability densities within both of the left/right neighborhoods $(\alpha - \delta,\, \alpha]$ and $[\beta,\, \beta + \delta)$, for {\em whatever} $\delta > 0$. This gives $\calB(\beta) \geq \calB(\alpha) > 0$.
Let us do case analysis:
\begin{itemize}
\item {\bf Case 1: $\calB(\beta) > \calB(\alpha) > 0$.} Then the $\calB(b)$ must have ONE probability mass at the $\beta$.
({\bf \Cref{fact:bid_distribution}}) Exactly one equilibrium bid distribution, $B_{k}(b)$ for a specific $k \in [n]$, has a probability mass at the $\beta$; his/her competing bid distribution $\calB_{-k}(b)$ has no probability mass there, thus being continuous at the $\beta$. This competing bid distribution $\calB_{-k}(b)$ has no probability density on the open interval $(\alpha,\, \beta)$, i.e., $(\alpha,\, \beta) \cap \supp(\calB_{-k}) = \emptyset$, \`{a} la the first-order bid distribution $\calB(b)$.
Hence, (\Cref{lem:allocation:1} of \Cref{lem:allocation}) bidder $B_{k}(b)$ has a {\em constant} interim allocation $\alloc_{k}(b) = \alloc_{k}(\beta) = \calB_{-k}(\beta) > 0$ on the left-open {\em right-closed} interval $b \in (\alpha,\, \beta]$.
But this means, conditioned on this bidder's equilibrium bid being at the probability mass $\big\{ s_{k}(v_{k}) = \beta \big\}$, any lower deviation bid $b^{*} \in (\alpha,\, \beta)$ yields a better deviated utility $u_{k}(v_{k},\, b^{*}) > u_{k}(v_{k},\, s_{k}(v_{k}))$. This contradicts (\Cref{def:bne_formal}) the optimality of this equilibrium bid $s_{k}(v_{k}) = \beta$.
\item {\bf Case 2: $\calB(\beta) = \calB(\alpha) > 0$.} Then the $\calB(b)$ must have NO probability mass at the $\beta$.
\`{A} la the $\calB(b)$, at least one bidder $B_{k}(b)$ for some $k \in [n]$, has probability densities within the right neighborhood $[\beta,\, \beta + \delta)$, for whatever $\delta > 0$. No matter how close $b \searrow \beta$, we can find an equilibrium bid $b = s_k(v)$, for some $v \in \supp(V_{k})$, that gives a positive utility $u_k(v,\, b) > 0$.
\`{A} la {\bf Case~1}, this bidder has a {\em constant} interim allocation $\alloc_{k}(b^{*}) = \alloc_{k}(\beta) = \calB_{-k}(\beta) > 0$ on the left-open {\em right-closed} interval $b^{*} \in (\alpha,\, \beta]$; let us consider a particular bid $b^{*} = \frac{1}{2}(\alpha + \beta)$.
This deviation bid $b^{*}$ is bounded away from the above equilibrium bid $b - b^{*} \geq \frac{1}{2}(\beta - \alpha) > 0$ BUT, because there is no probability mass at $\beta$, gives an arbitrarily close interim allocation $\big(\alloc_{k}(b) - \alloc_{k}(b^{*})\big) \searrow 0$.
Therefore, we can apply the \blackref{pro:deviation} to get a contradiction.
\end{itemize}
To conclude, we get a contradiction in either case. Rejecting our assumption: The first-order bid distribution $\calB(b)$ is strictly increasing on $b \in [\gamma,\, \lambda]$.
\vspace{.1in}
\noindent
{\bf \Cref{lem:bid_distribution:monotonicity}: $\calB_{-i}(b)$.}
Assume the opposite of $\calB_{-i}(b)$'s strict monotonicity: At least one competing bid distribution $\calB_{-i}(b)$ has no probability density on an OPEN interval $(\alpha,\, \beta) \subseteq [\gamma,\, \lambda]$.
In contrast, the remaining bidder $B_{i}(b)$ has probability densities almost everywhere on $(\alpha,\, \beta)$, since the first-order bid distribution $\calB(b) = B_{i}(b) \cdot \calB_{-i}(b)$ is strictly increasing (as shown above). Consider such an equilibrium bid $s_{i}(v_{i}) \in (\alpha,\, \beta)$. However, any lower deviation bid $b^{*} \in \big(\alpha,\, s_{i}(v_{i})\big)$ results in the same nonzero allocation $\alloc_{i}(b^{*}) = \alloc_{i}(s_{i}(v_{i})) > 0$ and thus a better deviated utility $u_{i}(v_{i},\, b^{*}) = (v_{i} - b^{*}) \cdot \alloc_{i}(b^{*}) > \big(v_{i} - s_{i}(v_{i})\big) \cdot \alloc_{i}(b^{*}) = u_{i}(v_{i},\, s_{i}(v_{i}))$.
This contradicts (\Cref{def:bne_formal}) the optimality of the equilibrium bid $s_{i}(v_{i}) \in (\alpha,\, \beta)$.
Rejecting our assumption: Each competing bid distribution $\calB_{-i}(b)$ is strictly increasing on $b \in [\gamma,\, \lambda]$.
{\bf \Cref{lem:bid_distribution:monotonicity}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:bid_distribution:continuity}.}
We only need to verify continuity of the first-order bid distribution $\calB(b) = B_{i}(b) \cdot \calB_{-i}(b)$, which implies continuity of the equilibrium/competing bid distributions $B_{i}(b)$ and $\calB_{-i}(b)$.
Assume the opposite: $\calB(b)$ has a probability mass at some normal bid $\beta \in (\gamma,\, \lambda]$.
According to {\bf \Cref{fact:bid_distribution}}, exactly one bidder $B_{i}(b)$, for a specific $i \in [n]$, has a probability mass at this $\beta$. Further, following {\bf \Cref{lem:bid_distribution:monotonicity}}, his/her competing bid distribution $\calB_{-i}(b)$ has probability densities within the OPEN left neighborhood $b \in (\beta - \delta,\, \beta)$, for {\em whatever} $\delta > 0$; therefore at least one OTHER bidder $k \in [n] \setminus \{i\}$ has probability densities there.
This other bidder $B_{k}(b)$'s interim allocation formula $\alloc_{k}(b)$ is {\em discontinuous} at the $\beta \in (\gamma,\, \lambda]$,
because of $B_{i}(b)$'s probability mass at the $\beta$. Formally (\Cref{lem:allocation}), there exists a nonzero interim allocation gap $\xi^{*} \eqdef \inf \big\{\, \alloc_{k}(b^{*}) - \alloc_{k}(b): b^{*} > \beta > b \,\big\} > 0$ around the $\beta \in (\gamma,\, \lambda]$.
Hence, for some bids $b \in (\beta - \delta,\, \beta)$ and $b^{*} > \beta$ that are arbitrarily close $(b^{*} - b) \searrow 0$, we can use the \blackref{pro:deviation} to get a contradiction.
Rejecting our assumption gives {\bf \Cref{lem:bid_distribution:continuity}}.
This finishes the proof of \Cref{lem:bid_distribution}.
\end{proof}
An important and direct implication of the continuity is that $\alloc_{i}(b)=\calB_{-i}(b)$ for any $b\in (\gamma, \lambda]$. This is formalized as \Cref{cor:allocation} and will be used in many places in the paper.
\begin{corollary}[Interim allocations]
\label{cor:allocation}
The interim allocation formula $\alloc_{i}(b)$ of each bidder $i \in [n]$ is identical to his/her competing bid distribution $\alloc_{i}(b) = \calB_{-i}(b)$, for any normal bid $b \in (\gamma, \lambda]$.
\end{corollary}
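Indeed, \Cref{cor:allocation} is just the sandwich from \Cref{lem:allocation:2} of \Cref{lem:allocation} pinched by the continuity from \Cref{lem:bid_distribution:continuity} of \Cref{lem:bid_distribution}: for any normal bid $b \in (\gamma,\, \lambda]$,
\[
\sup \big\{ \calB_{-i}(t): t < b \big\} ~\leq~ \alloc_{i}(b) ~\leq~ \calB_{-i}(b),
\]
where the absence of probability masses on $(\gamma,\, \lambda]$ forces $\sup \big\{ \calB_{-i}(t): t < b \big\} = \calB_{-i}(b)$ and, therefore, $\alloc_{i}(b) = \calB_{-i}(b)$.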
According to the Lebesgue differentiation theorem \cite{L04}, a monotonic function $f: (\gamma,\, \lambda) \to \R$ is differentiable almost everywhere, except a set $\mathbb{D}_{f} \subseteq (\gamma,\, \lambda)$ of a zero measure.\footnote{Rigorously, the set $\mathbb{D}_{f}$ has a zero {\em Lebesgue} measure. Yet since the bid distributions $B_{i}(b)$, $\calB_{-i}(b)$, and $\calB(b)$ are continuous on $b \in (\gamma,\, \lambda)$, we need not distinguish the Lebesgue/probabilistic measures.}
We thus conclude \Cref{cor:bid_distribution} directly from \Cref{lem:bid_distribution}.
\begin{corollary}[Bid distributions]
\label{cor:bid_distribution}
Each of the equilibrium/competing/first-order bid distributions, $\{B_{i}\}_{i \in [n]}$, $\{\calB_{-i}\}_{i \in [n]}$, and $\calB$
is differentiable almost everywhere on the OPEN interval $b \in (\gamma,\, \lambda)$, except a set $\mathbb{D} \subseteq (\gamma,\, \lambda)$ to which this bid distribution, $B_{i}(b)$, $\calB_{-i}(b)$, or $\calB(b)$, assigns zero measure.
\end{corollary}
\begin{remark}[Bid distributions]
\label{rem:bid_distribution}
The continuity/monotonicity/differentiability shown in \Cref{lem:bid_distribution,cor:bid_distribution} are the basics of our later discussions; all subsequent bid distributions will satisfy them. So for brevity, we often omit a formal clarification. Also, we often simply write the derivatives $B'_{i}(b)$ etc by omitting (\Cref{cor:bid_distribution}) the zero-measure indifferentiable points $\mathbb{D} \subseteq (\gamma,\, \lambda)$;
we can use standard tools from real analysis to deal with those points $\mathbb{D} \subseteq (\gamma,\, \lambda)$ separately.
\end{remark}
\subsection{The inverse direction: Bid-to-value mappings}
\label{subsec:mapping}
In this subsection, we study the {\em inverse} mappings of strategies $\bs^{-1} = \{s_{i}^{-1}\}_{i \in [n]}$ and try to reconstruct the value distributions $\bV = \{V_{i}\}_{i \in [n]}$ from (equilibrium) bid distributions.
\begin{definition}[Inverses of strategies]
\label{def:inverse}
Consider bid distributions $\bB = \{B_{i}\}_{i \in [n]}$ given by value distributions $\bv = (v_{i})_{i \in [n]} \sim \bV$ and a strategy profile $\bs = \{s_{i}\}_{i \in [n]}$.
Each {\em random} inverse $s_{i}^{-1}(b)$ for $b \in \supp(B_{i})$ is defined as the {\em conditional} distribution $\{ v_i \sim V_i\,\bigmid\, s_{i}(v_{i}) = b \}$.
So, each random inverse $s_{i}^{-1}(b_{i})$ for $b_{i} \sim B_{i}$ is identically distributed as the random value $v_{i} \sim V_{i}$.
\end{definition}
\Cref{lem:inverse} shows two basic properties of the random inverses $\bs^{-1} = \{s_{i}^{-1}\}_{i \in [n]}$.
\begin{lemma}[Inverses of strategies]
\label{lem:inverse}
For each inverse $s_{i}^{-1}$, the following hold (almost surely):
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:inverse:dichotomy}
{\bf dichotomy:}
$s_{i}^{-1}(b) \leq \gamma$ for any low bid $b \in \supp_{< \gamma}(B_{i})$. \\
\white{\bf dichotomy:}
$s_{i}^{-1}(b) > b > \gamma$ for any normal bid $b \in \supp_{> \gamma}(B_{i})$.
\item\label{lem:inverse:monotone}
{\bf monotonicity:}
$s_{i}^{-1}(b) \geq s_{i}^{-1}(t)$ for two boundary/normal bids $b,\, t \in \supp_{\geq \gamma}(B_{i})$ that $b > t$.
\end{enumerate}
\end{lemma}
\begin{proof}
Under our definition of the $s_{i}^{-1}(b)$, {\bf \Cref{lem:inverse:dichotomy}} follows directly from \Cref{lem:dichotomy}, while {\bf \Cref{lem:inverse:monotone}} follows from a combination of \Cref{lem:dichotomy} and \Cref{lem:bid_monotonicity} (\Cref{lem:bid_monotonicity:2}).
\end{proof}
Below, we characterize (\Cref{lem:high_bid}) the inverses of {\em equilibrium} strategies $s_{i}^{-1}(b)$ for {\em normal} bids $b \in (\gamma,\, \lambda]$, using (\Cref{def:mapping}) the concept of {\em bid-to-value mappings} $\varphi_{i}(b)$. We prove that the inverse $s_{i}^{-1}(b)$ at a normal bid $b \in (\gamma,\, \lambda]$ is essentially a {\em deterministic} value. (The inverses at the boundary bid $s_{i}^{-1}(\gamma)$ are tricky and will be studied later.)
\begin{definition}[Bid-to-value mappings]
\label{def:mapping}
Given equilibrium bid distributions $\bB = \{B_{i}\}_{i \in [n]}$ (cf.\ \Cref{lem:bid_distribution,cor:bid_distribution}), define each {\em bid-to-value mapping} $\varphi_{i}(b)$ for $b \in (\gamma,\, \lambda)$ as follows:
\[
\varphi_{i}(b)
~\eqdef~ b + \frac{\calB_{-i}(b)}{\calB'_{-i}(b)}
~=~ b + \Big(\sum_{k \in [n] \setminus \{i\}} \frac{B'_{k}(b)}{B_{k}(b)}\Big)^{-1}
~=~ b + \Big(\frac{\calB'(b)}{\calB(b)} - \frac{B'_{i}(b)}{B_{i}(b)}\Big)^{-1}
\]
It turns out that (\Cref{lem:high_bid:monotone} of \Cref{lem:high_bid}) each $\varphi_{i}(b)$ is an increasing function, so the domain can be extended to include both endpoints $\varphi_{i}(\gamma) \eqdef \lim_{b \searrow \gamma} \varphi_{i}(b)$ and $\varphi_{i}(\lambda) \eqdef \lim_{b \nearrow \lambda} \varphi_{i}(b)$.
\end{definition}
The above bid-to-value mapping is known in the literature (\cite{HHT14} etc). However, it is usually defined only on the support of $B_i$. We define it and study its properties on the entire interval $[\gamma,\, \lambda]$, which is very important to our argument.
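As a quick sanity check (the standard symmetric example; not needed later): with $n$ bidders, i.i.d.\ values $\sim U[0,\, 1]$, and the symmetric equilibrium strategy $s(v) = \frac{n - 1}{n} \cdot v$, each equilibrium bid CDF is $B_{k}(b) = \frac{n}{n - 1} \cdot b$ on $[\gamma,\, \lambda] = [0,\, \frac{n - 1}{n}]$, hence $\calB_{-i}(b) = \big(\frac{n}{n - 1} \cdot b\big)^{n - 1}$ and
\[
\varphi_{i}(b)
~=~ b + \frac{\calB_{-i}(b)}{\calB'_{-i}(b)}
~=~ b + \frac{b}{n - 1}
~=~ \frac{n}{n - 1} \cdot b
~=~ s^{-1}(b),
\]
recovering the underlying value exactly, as \Cref{lem:high_bid} below formalizes.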
\begin{lemma}[Normal bids]
\label{lem:high_bid}
For each bid-to-value mapping $\varphi_{i}(b)$, the following hold:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:high_bid:inverse}
{\bf invertibility:}
$s_{i}^{-1}(b) = \varphi_{i}(b)$ almost surely, for any normal bid $b \in \supp_{> \gamma}(B_{i})$.\footnote{More rigorously, (\Cref{cor:bid_distribution,rem:bid_distribution}) we shall exclude a zero-measure set $\mathbb{D} \subseteq (\gamma,\, \lambda]$, namely the indifferentiable points of the $B_{i}(b)$'s.}
\item\label{lem:high_bid:monotone}
\term[{\bf monotonicity}]{value_monotonicity}{\bf :}
$\varphi_{i}(b)$ is weakly increasing on the closed interval $b \in [\gamma,\, \lambda]$.
\item\label{lem:high_bid:rational}
{\bf rationality:}
$\varphi_{i}(b) > b$ on the left-open right-closed interval $b \in (\gamma,\, \lambda]$.
Further, $\varphi_{i}(\gamma) \geq \gamma$.
\end{enumerate}
\end{lemma}
\begin{proof}
We safely assume $n \geq 2$. (Otherwise, \Cref{lem:high_bid} is vacuously true since the unique bidder always bids zero $[\gamma,\, \lambda] = \{0\}$ in any {\textsf{First Price Auction}}.)
\vspace{.1in}
\noindent
{\bf \Cref{lem:high_bid:inverse}.}
Consider a specific inverse/value $s_{i}^{-1}(b) = v$ resulting from a normal bid $b \in \supp_{> \gamma}(B_{i})$. Regarding a normal bid $b^{*} > \gamma$, we can rewrite (\Cref{lem:allocation} (\Cref{lem:allocation:2}) and \Cref{cor:allocation}) the interim utility formula $u_{i}(v,\, b^{*}) = (v - b^{*}) \cdot \alloc_{i}(b^{*}) = (v - b^{*}) \cdot \calB_{-i}(b^{*})$ and deduce the partial derivative
\[
\tfrac{\partial u_{i}}{\partial b^{*}}
~=~ -\calB_{-i}(b^{*}) + (v - b^{*}) \cdot \calB'_{-i}(b^{*})
~=~ \big(v - \varphi_{i}(b^{*})\big) \cdot \calB'_{-i}(b^{*}).
\]
Hence, to make (\Cref{def:bne_formal}) the equilibrium bid $b \in \supp_{> \gamma}(B_{i})$ optimize the formula $u_{i}(v,\, b^{*})$, we must have $\varphi_{i}(b) = v = s_{i}^{-1}(b)$. Here we use the fact that $\calB'_{-i}(b^{*}) > 0$, since $\calB_{-i}$ is strictly increasing (\Cref{lem:bid_distribution}).
{\bf \Cref{lem:high_bid:inverse}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:high_bid:monotone}.}
For brevity, we consider a {\em twice differentiable}\footnote{In general, following the Lebesgue differentiation theorem \cite{L04}, the $\calB(b)$ is twice differentiable almost everywhere on $(\gamma,\, \lambda)$, except a zero-measure set $\mathbb{D}_{\calB} \subseteq (\gamma,\, \lambda)$ that can be handled by standard tools from real analysis.} first-order bid CDF $\calB(b) = B_{i}(b) \cdot \calB_{-i}(b)$.
Thus for $i \in [n]$, each competing bid CDF $\calB_{-i}(b)$ is also twice differentiable and each bid-to-value mapping $\varphi_{i}(b) = b + \calB_{-i}(b) \big/ \calB'_{-i}(b)$ is continuous on $b \in (\gamma,\, \lambda)$.
Following a combination of {\bf \Cref{lem:high_bid:inverse}} and \Cref{lem:inverse} (\Cref{lem:inverse:monotone}), each mapping $\varphi_{i}(b)$ is increasing on this equilibrium bid distribution $B_{i}$'s normal bid support $\supp_{> \gamma}(B_{i})$, so the task is to extend this monotonicity to the whole interval $b \in (\gamma,\, \lambda)$.
The proof relies on {\bf \Cref{fact:high_bid:convex,fact:pseudo}}.
\setcounter{fact}{0}
\begin{fact}
\label{fact:high_bid:convex}
A mapping $\varphi(b) = b + B(b) \big/ B'(b)$ is increasing iff the reciprocal function $1 \big/ B(b)$ is convex.
\end{fact}
\begin{proof}
Once again, we assume that the underlying CDF $B(b)$ is twice differentiable.
The reciprocal function $1 \big/ B(b)$ has the second derivative $\frac{\d^{2}}{\d b^{2}} \big(\frac{1}{B}\big)
= \frac{\d}{\d b} \big(-\frac{B'}{B^{2}}\big)
= -\frac{B'' \cdot B - 2 \cdot (B')^{2}}{B^{3}}$.
So we can rewrite the first derivative of the mapping $\varphi'(b) = 1 + \frac{(B')^{2} - B \cdot B''}{(B')^{2}} = \frac{B^{3}}{(B')^{2}} \cdot \frac{\d^{2}}{\d b^{2}} \big(\frac{1}{B}\big)$.
Precisely, the mapping is increasing $\varphi'(b) \geq 0$ iff the reciprocal function is convex $\frac{\d^{2}}{\d b^{2}} (\frac{1}{B}) \geq 0$. {\bf \Cref{fact:high_bid:convex}} follows then.
\end{proof}
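As a quick numerical sanity check of {\bf \Cref{fact:high_bid:convex}} (our own illustration, not part of the formal argument), the snippet below verifies the identity $\varphi'(b) = \frac{B^{3}}{(B')^{2}} \cdot \frac{\d^{2}}{\d b^{2}} \big(\frac{1}{B}\big)$ for the arbitrarily chosen CDF $B(b) = b^{2}$ on $(0,\, 1)$, for which $\varphi(b) = 3b/2$ is increasing and $1/B = b^{-2}$ is convex.

```python
import numpy as np

# Illustrative CDF (an arbitrary choice): B(b) = b^2 on (0, 1),
# so B'(b) = 2b, phi(b) = b + B/B' = 3b/2, and 1/B = b^(-2) is convex.
b = np.linspace(0.1, 0.9, 200)
B, Bp, Bpp = b**2, 2 * b, np.full_like(b, 2.0)

phi_prime = 1 + (Bp**2 - B * Bpp) / Bp**2     # derivative of phi(b) = b + B/B'
recip_second = -(Bpp * B - 2 * Bp**2) / B**3  # (1/B)'' in closed form

# Fact 1: phi'(b) = (B^3 / (B')^2) * (1/B)'' , so the signs must agree.
assert np.allclose(phi_prime, (B**3 / Bp**2) * recip_second)
assert (phi_prime >= 0).all() and (recip_second >= 0).all()
```

Here $\varphi'(b) = 3/2$ everywhere, matching the closed form term by term.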
\begin{fact}
\label{fact:pseudo}
The first-order mapping $\varphi_{\calB}(b) \eqdef b + \calB(b) \big/ \calB'(b)$ is increasing on $b \in (\gamma,\, \lambda)$.
\end{fact}
\begin{proof}
Based on {\bf \Cref{fact:high_bid:convex}}, it suffices to show that the first-order reciprocal function $\calR(b) \eqdef 1 \big/ \calB(b)$ is convex.
For brevity, we denote the equilibrium/competing reciprocal functions $R_{i}(b) \eqdef 1 \big/ B_{i}(b)$ and $\calR_{-i}(b) \eqdef 1 \big/ \calB_{-i}(b)$ for $i \in [n]$. Also, we may simply write $R_{i} = R_{i}(b)$ etc.
We first establish {\bf \Cref{fact:pseudo}} under the assumption that all bid distributions $\bB = \{B_{i}\}_{i \in [n]}$ are supported at this bid, namely $b \in \supp_{> \gamma}(B_{i})$ for $i \in [n]$. By elementary algebra, each competing reciprocal function $\calR_{-i}(b) = \prod_{k \in [n] \setminus \{i\}} R_{k}(b)$ has the second derivative
\begin{align*}
\calR''_{-i}
& ~=~ \sum_{k_{1} \neq k_{2} \neq i} \Big(R'_{k_{1}} R'_{k_{2}} \prod_{k_{3} \notin \{i,\, k_{1},\, k_{2}\}} R_{k_{3}}\Big)
+ \sum_{k \neq i} \Big(R''_{k} \prod_{k_{3} \notin \{i,\, k\}} R_{k_{3}}\Big) \\
& ~=~ \calR_{-i} \cdot \Big(\sum_{k_{1} \neq k_{2} \neq i} \tfrac{R'_{k_{1}} R'_{k_{2}}}{R_{k_{1}} R_{k_{2}}} + \sum_{k \neq i} \tfrac{R''_{k}}{R_{k}}\Big).
\qquad \mbox{factor out $\calR_{-i} = \prod_{k \neq i} R_{k}$}
\end{align*}
We thus deduce that
\begin{align*}
\tfrac{1}{n - 1} \sum_{i \in [n]} R_{i} \cdot \calR''_{-i}
& ~=~ \tfrac{\calR}{n - 1} \sum_{i \in [n]} \Big(\sum_{k_{1} \neq k_{2} \neq i} \tfrac{R'_{k_{1}} R'_{k_{2}}}{R_{k_{1}} R_{k_{2}}} + \sum_{k \neq i} \tfrac{R''_{k}}{R_{k}}\Big)
&& \mbox{$R_{i} \cdot \calR_{-i} = \calR$} \\
& ~=~ \tfrac{n - 2}{n - 1} \cdot \calR \cdot \Big(\sum_{k_{1} \neq k_{2}} \tfrac{R'_{k_{1}} R'_{k_{2}}}{R_{k_{1}} R_{k_{2}}}\Big)
~+~ \calR \cdot \Big(\sum_{k \in [n]} \tfrac{R''_{k}}{R_{k}}\Big).
&& \mbox{combine the like terms}
\end{align*}
Moreover, the first-order reciprocal function $\calR(b) = \prod_{k \in [n]} R_{k}(b)$ has the second derivative
\begin{align*}
\calR''
& ~=~ \calR \cdot \Big(\sum_{k_{1} \neq k_{2}} \tfrac{R'_{k_{1}} R'_{k_{2}}}{R_{k_{1}} R_{k_{2}}} + \sum_{k \in [n]} \tfrac{R''_{k}}{R_{k}}\Big)
&& \mbox{\`{a} la the $\calR''_{-i}$ formulas} \\
& ~=~ \tfrac{\calR}{n - 1} \sum_{k_{1} \neq k_{2}} \tfrac{R'_{k_{1}} R'_{k_{2}}}{R_{k_{1}} R_{k_{2}}}
+ \tfrac{1}{n - 1} \sum_{k \in [n]} R_{k} \cdot \calR''_{-k}
~\geq~ 0.
&& \mbox{substitute $\sum_{i \in [n]} \calR''_{-i} \cdot R_{i}$}
\end{align*}
The first summation: $R'_{k_{1}},\, R'_{k_{2}} \leq 0$ for $k_{1} \neq k_{2} \in [n]$, so the product $R'_{k_{1}} R'_{k_{2}} \geq 0$. \\
The second summation: $\calR''_{-k} \geq 0$ for $k \in [n]$. At the considered bid $b \in \supp_{> \gamma}(B_{i})$, each mapping is increasing $\varphi'_{i}(b) \geq 0$ ({\bf \Cref{lem:high_bid:inverse}}) and thus each reciprocal is convex $\calR''_{-i}(b) \geq 0$ ({\bf \Cref{fact:high_bid:convex}}).
Hence, under the assumption that each bid CDF $\bB = \{B_{i}\}_{i \in [n]}$ is strictly increasing at this bid $b \in \supp_{> \gamma}(B_{i})$, the first-order reciprocal $\calR(b)$ is convex here.
\vspace{.1in}
\noindent
{\bf Removing the Assumption.}
If some bidders $i \in [n]$ are not supported here $b \notin \supp_{> \gamma}(B_{i})$, we can modify the proof by treating those $B_{i}(b)$'s as constants and reasoning about the other bidders $\big\{i \in [n]: b \in \supp_{> \gamma}(B_{i})\big\}$.
The above proof relies on the fact that at least TWO bidders $k_{1} \neq k_{2} \in [n]$ are supported at this bid $\supp_{> \gamma}(B_{k_{1}}),\, \supp_{> \gamma}(B_{k_{2}}) \ni b$.
This is true regardless of the assumption, because (\Cref{lem:bid_distribution}) EACH competing bid CDF $\calB_{-i}$ for $i \in [n]$ is strictly increasing on the whole interval $b \in [\gamma,\, \lambda]$.
To conclude, the first-order reciprocal $\calR(b)$ is convex. {\bf \Cref{fact:pseudo}} follows then.
\end{proof}
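A numerical illustration of {\bf \Cref{fact:pseudo}} (again our own sanity check, not part of the proof): for the hypothetical power-law bid CDFs $B_{k}(b) = b^{a_{k}}$ on $(0,\, 1)$, every mapping $\varphi_{k}(b) = b \cdot (1 + 1/a_{k})$ is increasing, and the first-order reciprocal $\calR(b) = b^{-\sum_{k} a_{k}}$ is indeed convex on the bid support.

```python
import numpy as np

# Power-law bid CDFs B_k(b) = b^(a_k) on (0, 1): an illustrative family for
# which every bid-to-value mapping phi_k is increasing. Then
# R(b) = prod_k 1/B_k = b^(-sum a), which Fact 2 predicts to be convex.
a = np.array([0.5, 1.0, 2.0])  # hypothetical exponents
b = np.linspace(0.1, 0.9, 401)
R = b ** (-a.sum())

# Finite-difference second derivative of R (trim the less accurate edges).
R2 = np.gradient(np.gradient(R, b), b)
assert (R2[2:-2] >= 0).all()
```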
{\bf \Cref{lem:high_bid:monotone}} follows directly: On the normal bid support $b \in \supp_{> \gamma}(B_{i})$, the mapping $\varphi_{i}(b) = s_{i}^{-1}(b)$ must be increasing (\Cref{lem:inverse:monotone} of \Cref{lem:inverse}).
Outside the support $b \in (\gamma,\, \lambda) \setminus \supp_{> \gamma}(B_{i})$, we have $\calB_{-i}(b) \big/ \calB'_{-i}(b) = \calB(b) \big/ \calB'(b)$, so the mapping $\varphi_{i}(b) = \varphi_{\calB}(b)$ is still increasing ({\bf \Cref{fact:pseudo}}).
\vspace{.1in}
\noindent
{\bf \Cref{lem:high_bid:rational}.}
Following \Cref{lem:bid_distribution}, each competing bid distribution $\calB_{-i}(b)$ is a continuous and strictly increasing function on $b \in [\gamma,\, \lambda]$.
Hence, by definition $\varphi_{i}(b) = b + \calB_{-i}(b) \big/ \calB'_{-i}(b) \geq b$; the equality is possible only at the boundary bid $b = \gamma$. {\bf \Cref{lem:high_bid:rational}} follows then. This finishes the proof.
\end{proof}
As an implication of \Cref{lem:high_bid}, the value distributions $\bV = \{V_{i}\}_{i \in [n]}$ can partially be reconstructed from the equilibrium bid distributions $\bB = \{B_{i}\}_{i \in [n]}$. (See \Cref{quantiles}.)
\begin{corollary}[Reconstructions]
\label{cor:high_bid}
For each $i \in [n]$, the $v \geq \varphi_{i}(\gamma)$ part of the value distribution $V_{i}(v) = \Prx_{v_{i} \sim V_{i}} \big[v_{i} \leq v\big]$ can be reconstructed from the equilibrium bid distributions $\bB$ as follows.
\[
V_{i}(v) ~=~ \Prx_{b_{i} \sim B_{i}}\big[(b_{i} \le \gamma) \vee (\varphi_{i}(b_{i}) \leq v) \big],
\qquad\qquad \forall v \geq \varphi_{i}(\gamma).
\]
\end{corollary}
In the statement of \Cref{cor:high_bid}, the mapping $\varphi_{i}(b_{i})$ is undefined when $b_{i} < \gamma$, but since the first condition $(b_{i} \leq \gamma)$ already holds in that case, we regard the disjunction $(b_{i} \leq \gamma) \vee (\varphi_{i}(b_{i}) \leq v)$ as satisfied.
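\Cref{cor:high_bid} can be sanity-checked on the textbook symmetric example (our own choice, purely illustrative): two i.i.d.\ $U[0,\, 1]$ values with equilibrium strategy $s(v) = v/2$, hence $B_{i}(b) = 2b$ on $[0,\, 1/2]$, $\gamma = 0$, and $\varphi_{i}(b) = 2b$. The reconstruction should then recover the $U[0,\, 1]$ CDF $V_{i}(v) = v$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric example: s(v) = v/2, so b_i ~ B_i = U[0, 1/2] and phi_i(b) = 2b.
b_samples = rng.uniform(0, 0.5, 10**6)  # b_i ~ B_i
phi = 2 * b_samples                     # phi_i(b_i) = s_i^{-1}(b_i)

# Corollary: V_i(v) = Pr[(b_i <= gamma) or (phi_i(b_i) <= v)]; here gamma = 0,
# so the reconstruction should recover V_i(v) = v on [0, 1].
for v in (0.25, 0.5, 0.75):
    assert abs(np.mean(phi <= v) - v) < 5e-3
```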
\begin{figure}[t]
\centering
\includegraphics[width = .9\textwidth]{monopolist.png}
\caption{Demonstration for the reconstruction of a monopolist $h$'s value distribution $V_{h}$ (\Cref{def:monopolist,lem:value_dist}).
Notice that $V_{i}(\gamma) \leq B_{i}(\gamma) \leq V_{i}(\varphi_{i}(\gamma))$.
By \Cref{lem:dichotomy,lem:bid_monotonicity,lem:high_bid}: \\
(i)~A normal value $v_{h} > \varphi_{h}(\gamma)$ induces a normal bid $s_{h}(v_{h}) > \gamma$. \\
(ii)~The normal value $v_{h} = \varphi_{h}(\gamma)$ induces a normal bid $s_{h}(v_{h}) > \gamma$ or the boundary bid $s_{h}(v_{h}) = \gamma$, both of which are possible especially when there is a probability mass $\Prx_{v_{h} \sim V_{h}} [v_{h} = \varphi_{h}(\gamma)] > 0$. \\
(iii)~A normal value $v_{h} \in (\gamma,\, \varphi_{h}(\gamma))$ induces the boundary bid $s_{h}(v_{h}) = \gamma$. \\
(iv)~The boundary value $v_{h} = \gamma$ induces a low bid $s_{h}(v_{h}) < \gamma$ or the boundary bid $s_{h}(v_{h}) = \gamma$, both of which are possible especially when there is a probability mass $\Prx [v_{h} = \gamma] > 0$. \\
(v)~A low value $v_{h} < \gamma$ induces a bid $s_{h}(v_{h})$ that yields the optimal zero utility $u_{h}(v_{h},\, s_{h}(v_{h})) = 0$ but is otherwise arbitrary; even an overbid $s_{h}(v_{h}) > v_{h}$ is fine.
\label{fig:monopolist}}
\end{figure}
\Cref{cor:high_bid} enables us to reconstruct the $v \geq \varphi_{i}(\gamma)$ part of each value distribution $V_{i}$.
Then how about the $v < \varphi_{i}(\gamma)$ part?
We shall consider two subparts separately.
(i)~The low value $v < \gamma$ subpart of a value distribution $V_{i}$ cannot be reconstructed from the bid distributions $\bB$. That is, an equilibrium strategy $b_{i} = s_{i}(v)$ for a low value $v < \gamma$ is arbitrary as long as it yields the optimal zero utility $u_{i}(v,\, b_{i}) = 0$. Luckily, these low value $v < \gamma$ subparts (\Cref{lem:auction_welfare,lem:optimal_welfare}) turn out to contribute to neither of the auction/optimal {\textsf{Social Welfares}}, so we can ignore them.
(ii)~The $\gamma \leq v < \varphi_{i}(\gamma)$ subpart of a value distribution $V_{i}$ {\em has} contributions to the auction/optimal {\textsf{Social Welfares}} but also cannot be reconstructed.
This subpart always induces the boundary bid $b_{i} = s_{i}(v) = \gamma$, following \blackref{value_monotonicity} (\Cref{lem:dichotomy,lem:high_bid}).
As a remedy, we introduce the concept of {\em monopolists} (\Cref{def:monopolist}); cf.\ \Cref{fig:monopolist} for a visual aid.
Here we notice that $V_{i}(\gamma) \leq B_{i}(\gamma) = \Pr_{b_{i} \sim B_{i}} [b_{i} \leq \gamma] \leq V_{i}(\varphi_{i}(\gamma))$.
Namely, the first inequality holds because (\Cref{lem:dichotomy}) a normal bid $b_{i} > \gamma$ induces a higher normal value $v_{i} = s_{i}^{-1}(b_{i}) = \varphi_{i}(b_{i}) > b_{i} > \gamma$.
The second inequality directly follows from \Cref{cor:high_bid}.
\begin{definition}[Monopolists]
\label{def:monopolist}
A bidder $h \in [n]$ is called a {\em monopolist} when the probabilities of taking normal bids/values are unequal $1 - B_{h}(\gamma) < 1 - V_{h}(\gamma)$ or equivalently, when the probability of taking the boundary bid yet a normal value is nonzero $\Pr_{b_{h},\, s_{h}^{-1}} \big[(b_{h} = \gamma) \wedge (s_{h}^{-1}(b_{h}) > \gamma)\big] > 0$.
\end{definition}
It is easier to understand this definition from the perspective of quantiles (\Cref{quantiles}): A bidder $h \in [n]$ is a monopolist when there are some quantiles $q$ such that $V_h^{-1}(q)>\gamma$ but $B_h^{-1}(q)=\gamma$.
\Cref{lem:monopolist} presents the properties of monopolists.
(From the statement of \Cref{lem:monopolist}, we can infer that there is NO monopolist when the first-order bid $\max(\bb) \sim \calB$ has no probability mass at the boundary bid $\calB(\gamma) = 0$.)
\begin{lemma}[Monopolists]
\label{lem:monopolist}
There exists at most one monopolist $h$. If one exists:
\begin{itemize
\item \term[{\bf monopoly}]{mono_monopoly}{\bf :}
The probability of a boundary first-order bid $\big\{ \max(\bb) = \gamma \big\}$ is nonzero $\calB(\gamma) > 0$.
Conditioned on the tiebreak $\big\{ b_{h} = \max(\bb) = \gamma \big\}$, the monopolist wins $\alloc(\bb) = h$ almost surely.
\item \term[{\bf boundedness}]{mono_boundedness}{\bf :}
A boundary bid $b_{h} = \gamma$ induces a bounded random value $s_{h}^{-1}(\gamma) \in [\gamma,\, \varphi_{h}(\gamma)]$.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose there is a monopolist $h \in [n]$.
We first verify \blackref{mono_monopoly} and \blackref{mono_boundedness}, and then prove that there is no other monopolist.
For some normal value $v_{h} = s_{h}^{-1}(b_{h}) > \gamma$, (\Cref{def:monopolist,lem:dichotomy}) the boundary bid $b_{h} = \gamma$ yields the optimal utility $u_{h}(v_{h},\, \gamma) = (v_{h} - \gamma) \cdot \alloc_{h}(\gamma) = \calU_{h}(v_{h}) > 0$, so the interim allocation is nonzero $\alloc_{h}(\gamma) > 0$.
The boundary first-order bid $\big\{ \max(\bb) = \gamma \big\}$ occurs with probability $\calB(\gamma) = B_{h}(\gamma) \cdot \calB_{-h}(\gamma) \geq \Prx_{b_{h}} [b_{h} = \gamma] \cdot \alloc_{h}(\gamma) > 0$.
Assume for contradiction that this monopolist $h \in [n]$ loses the tiebreak $\big\{ b_{h} = \max(\bb) = \gamma \big\}$ with a nonzero probability $> 0$.
Based on the \blackref{pro:deviation}, some (infinitesimally) higher deviation bid $b_{h}^{*}> b_{h} = \gamma$ gives a strictly better utility $u_{h}(v_{h},\, b_{h}^{*}) > u_{h}(v_{h},\, \gamma) > 0$, which contradicts (\Cref{def:bne_formal}) the optimality of this equilibrium bid $b_{h} = \gamma$.
Reject our assumption: This monopolist $h \in [n]$ always wins the tiebreak $\big\{ b_{h} = \max(\bb) = \gamma \big\}$ (\blackref{mono_monopoly}).
A boundary bid $b_{h} = \gamma$ induces a bounded random value $\gamma \leq s_{h}^{-1}(\gamma) \leq \varphi_{h}(\gamma)$ (\blackref{mono_boundedness}). (i)~This monopolist $h \in [n]$ can never take a low value $s_{h}^{-1}(\gamma) < \gamma$. The tiebreak $\big\{ b_{h} = \max(\bb) = \gamma \big\}$ occurs with a nonzero probability $> 0$ and, suppose so, the monopolist wins $\alloc(\bb) = h$ almost surely.
Hence, a low value $s_{h}^{-1}(\gamma) < \gamma$ together with a boundary bid $b_{h} = \gamma$ yields a negative utility $< 0$, which is impossible.
(ii)~The value is also upper bounded $s_{h}^{-1}(\gamma) \leq \varphi_{h}(\gamma)$, provided \blackref{value_monotonicity} (\Cref{lem:dichotomy,lem:high_bid}).
This monopolist $h \in [n]$ is the unique one. Otherwise, (\Cref{def:monopolist} and \blackref{mono_monopoly}) at least two monopolists $h \neq k \in [n]$ tie for the boundary first-order bid $\big\{ b_{h} = b_{k} = \max(\bb) = \gamma \big\}$ with a nonzero probability $> 0$ and, suppose so, BOTH win $\alloc(\bb) = \{h,\, k\}$ almost surely.
However, this is impossible because we are auctioning ONE item.
\Cref{lem:monopolist} follows then.
\end{proof}
Conceivably, we shall reconstruct the $\gamma \leq v < \varphi_{h}(\gamma)$ subpart just for the value distribution $V_{h}$ of the unique monopolist $h$ (if one exists); we will show this later in \Cref{lem:value_dist}. To reconstruct $V_{h}$, we introduce (\Cref{def:conditional_value}) the concept of {\em conditional value distributions}.
\begin{definition}[Conditional value distributions]
\label{def:conditional_value}
Regarding the monopolist $h$'s truncated random value $\max(s_{h}^{-1}(b_{h}),\, \gamma)$ for $b_{h} \sim B_{h}$, define the {\em conditional value distribution} $P(v)$ as follows.
\[
P(v) ~\eqdef~ \Prx_{b_{h} \sim B_{h},\, s_{h}^{-1}} \big[\, \max(s_{h}^{-1}(b_{h}),\, \gamma) \leq v \,\bigmid\, b_{h} \leq \gamma \,\big],\qquad\qquad \forall v \geq 0.
\]
In the case of no monopolist $h = \emptyset$, we define $P(v) \eqdef \indicator(v \geq \gamma)$; that is, $P$ always takes the boundary value $\gamma$.
\end{definition}
\begin{remark}[Conditional value distributions]
\label{rem:conditional_value}
The random value $s_{h}^{-1}(b_{h})$ for $b_{h} \sim B_{h}$ exactly follows the distribution $V_{h}$.
With the help of \Cref{fig:monopolist}, we can see that the CDF $P(v)$ is given by
\begin{align*}
P(v) ~=~
\begin{cases}
0 & \forall v \in [0,\, \gamma) \\
V_{h}(v) / B_{h}(\gamma) & \forall v \in [\gamma,\, \varphi_{h}(\gamma)) \\
1 & \forall v \in [\varphi_{h}(\gamma),\, +\infty]
\end{cases}.
\end{align*}
\end{remark}
In general, the $P(v)$ can be any $[\gamma,\, \varphi_{h}(\gamma)]$-supported distribution.
Given this extra information, we can reconstruct the value distributions $\bV = \{V_{i}\}_{i \in [n]}$ (\Cref{lem:value_dist}) except the unimportant low value $v < \gamma$ parts.
\begin{lemma}[Reconstructions]
\label{lem:value_dist}
For each $i \in [n]$, the $v \geq \gamma$ part of the value distribution $V_{i}(v) = \Prx_{v_{i} \sim V_{i}} \big[v_{i} \leq v\big]$ can be reconstructed from the equilibrium bid distributions $\bB$ plus the conditional value distribution $P$ as follows.
\begin{itemize}
\item $V_{i}(v) = \Prx_{b_{i} \sim B_{i}} \big[(b_{i} \leq \gamma) \vee (\varphi_{i}(b_{i}) \leq v) \big]$ in the case of a non-monopoly bidder $i \in [n] \setminus \{h\}$.
\item $V_h(v) = P(v) \cdot \Prx_{b_h \sim B_{h}} \big[(b_h \leq \gamma) \vee (\varphi_h(b_h) \leq v) \big]$ in the case of the monopolist $h$.
\end{itemize}
\end{lemma}
\begin{figure}[t]
\centering
\subfloat[\label{fig:value_dist:monopolist}
The monopolist $h$]{
\includegraphics[width = .49\textwidth]
{reconstruct_monopolist.png}}
\hfill
\subfloat[\label{fig:value_dist:non}
{A non-monopoly bidder $i \in [n] \setminus \{h\}$}]{
\includegraphics[width = .49\textwidth]
{reconstruct_non.png}}
\caption{Demonstration for (\Cref{lem:value_dist}) the reconstruction of the value distributions $\bV$.
\label{fig:value_dist}}
\end{figure}
\begin{proof}
See \Cref{fig:value_dist} for a visual aid.
Regarding a non-monopoly bidder $i \in [n] \setminus \{h\}$, (\Cref{lem:dichotomy,def:monopolist}; cf.\ \Cref{fig:value_dist:non}) a normal bid $> \gamma$ induces a normal value $> \gamma$ and vice versa.
By \blackref{value_monotonicity}, such a normal value $v_{i} = \varphi_{i}(b_{i})$ for $b_{i} > \gamma$ is at least $v_{i} \geq \varphi_{i}(\gamma)$. Hence, on the OPEN interval $v \in (\gamma,\, \varphi_{i}(\gamma))$, the value distribution $V_{i}(v)$ has no density and stays constant $V_{i}(v) = B_{i}(\gamma)$. Given these, the reconstruction in \Cref{cor:high_bid} extends directly: For $v \in [\gamma,\, \varphi_{i}(\gamma))$, we have $V_{i}(v) = B_{i}(\gamma) = \Prx_{b_{i} \sim B_{i}}\big[ b_{i} \le \gamma \big] = \Prx_{b_{i} \sim B_{i}}\big[(b_{i} \le \gamma) \vee (\varphi_{i}(b_{i}) \leq v) \big]$.
Further, we can check the claimed ``monopolist'' formula $V_{h}(v)$ for $v \in [\gamma,\, \varphi_{h}(\gamma))$ (cf.\ \Cref{fig:value_dist:monopolist}):
$P(v) \cdot \Prx_{b_h \sim B_{h}} \big[(b_h \leq \gamma) \vee (\varphi_h(b_h) \leq v) \big]
= P(v) \cdot \Prx_{b_h \sim B_{h}} \big[ b_h \leq \gamma \big]
= P(v) \cdot B_{h}(\gamma) = V_{h}(v)$, where the first step again leverages \blackref{value_monotonicity}.
This finishes the proof.
\end{proof}
\Cref{lem:conditional_value} will be useful for formulating the expected auction/optimal {\textsf{Social Welfares}}.
\begin{lemma}[{\textsf{Social Welfares}}]
\label{lem:conditional_value}
Conditioned on the boundary first-order bid $\big\{ \max(\bb) = \gamma \big\}$, each of the following exactly follows the conditional value distribution $P$:
\begin{itemize}
\item The conditional auction {\textsf{Social Welfare}} $\big\{ s_{\alloc(\bb)}^{-1}(\gamma) \bigmid \max(\bb) = \gamma \big\}$.
\item The conditional optimal {\textsf{Social Welfare}} $\big\{ \max(\bs^{-1}(\bb)) \bigmid \max(\bb) = \gamma \big\}$.
\end{itemize}
\end{lemma}
\begin{proof}
The random events $\big\{ \max(\bb) = \gamma \big\}$ and $\big\{ \max(\bb) \leq \gamma \big\}$ are identical, regarding the infimum first-order bid $\gamma = \inf(\supp(\calB))$.
Conditioned on this, (i)~a non-monopoly bidder $i \in [n] \setminus \{h\}$ has a low/boundary bid $b_{i} \leq \gamma$ and (\Cref{lem:dichotomy,def:monopolist}) a low/boundary value $s_{i}^{-1}(b_{i}) \leq \gamma$; while
(ii)~the monopolist $h$ has EITHER a low bid $b_{h} < \gamma$ and a low/boundary value $s_{h}^{-1}(b_{h}) \leq \gamma$, OR the boundary bid $b_{h} = \gamma$ and (\Cref{lem:monopolist}) a boundary/normal value $s_{h}^{-1}(b_{h}) \in [\gamma,\, \varphi_{h}(\gamma)]$.
The allocated bidder $\alloc(\bb) \in [n]$ always takes the boundary bid $b_{\alloc(\bb)} = \gamma$ and, to make the utility nonnegative, a boundary/normal value $s_{\alloc(\bb)}^{-1}(b_{\alloc(\bb)}) = s_{\alloc(\bb)}^{-1}(\gamma) \geq \gamma$; the case of a normal value $> \gamma$ occurs only if the monopolist is allocated $\alloc(\bb) = h$.
Hence, the conditional optimal {\textsf{Social Welfare}} is identically distributed as each of the following:
\begin{align*}
\big\{ \max(\bs^{-1}(\bb)) \bigmid \max(\bb) = \gamma \big\}
& ~\overset{\tt d}{=}~
\big\{ \max(\bs^{-1}(\bb)) \bigmid \max(\bb) \leq \gamma \big\}
&& \gamma = \inf(\supp(\calB)) \\
& ~\overset{\tt d}{=}~
\big\{ \max(\bs^{-1}(\bb),\, \gamma) \bigmid \max(\bb) \leq \gamma \big\}
&& s_{\alloc(\bb)}^{-1}(b_{\alloc(\bb)}) \geq \gamma \\
& ~\overset{\tt d}{=}~
\big\{ \max(s_{h}^{-1}(b_{h}),\, \gamma) \bigmid \max(\bb) \leq \gamma \big\}
&& \mbox{$s_{i}^{-1}(b_{i}) \leq \gamma$ for $i \neq h$} \\
& ~\overset{\tt d}{=}~
\big\{ \max(s_{h}^{-1}(b_{h}),\, \gamma) \bigmid b_{h} \leq \gamma \big\}.
&& \mbox{independence}
\end{align*}
That is, the conditional optimal {\textsf{Social Welfare}} follows (\Cref{def:conditional_value}) the distribution $P$.
Further, since the monopolist $h$ (as the only possible bidder that has a normal value $> \gamma$) always wins the tiebreak $\big\{ b_{h} = \max(\bb) = \gamma \big\}$ (\Cref{lem:monopolist}), the conditional auction/optimal {\textsf{Social Welfares}} are identically distributed $\big\{ s_{\alloc(\bb)}^{-1}(\gamma) \bigmid \max(\bb) = \gamma \big\} \overset{\tt d}{=} \big\{ \max(\bs^{-1}(\bb)) \bigmid \max(\bb) = \gamma \big\}$, which again follow the distribution $P$.
This finishes the proof.
\end{proof}
\subsection{Reformulation for the {\textsf{Price of Anarchy}} problem}
\label{subsec:reformulation}
To address the {\textsf{Price of Anarchy}} problem, we shall formulate the expected auction/optimal {\textsf{Social Welfares}} ${\sf FPA}(\bV,\, \bs,\, \alloc)$ and ${\sf OPT}(\bV,\, \bs,\, \alloc)$.
Indeed, these can be written in terms of the equilibrium bid distributions $\bB$ plus the conditional value distribution $P$ (\Cref{def:conditional_value}).
First, \Cref{lem:auction_welfare} formulates the expected auction {\textsf{Social Welfare}} ${\sf FPA}(\bV,\, \bs,\, \alloc)$.
\begin{lemma}[Auction {\textsf{Social Welfare}}]
\label{lem:auction_welfare}
The expected auction {\textsf{Social Welfare}} ${\sf FPA}(\bV,\, \bs,\, \alloc)$ can be formulated based on the conditional value distribution and the equilibrium bid distributions $(P,\, \bB)$:
\begin{align*}
{\sf FPA}(\bV,\, \bs,\, \alloc)
~=~ {\sf FPA}(P,\, \bB)
~=~ \Ex[ P ] \cdot \calB(\gamma) + \sum_{i \in [n]} \bigg(\int_{\gamma}^{\lambda} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} \cdot \calB(b) \cdot \d b\bigg).
\end{align*}
where the first-order bid distribution $\calB(b) = \prod_{i \in [n]} B_{i}(b)$ and the bid-to-value mappings $\varphi_{i}(b)$ can be computed from the bid distributions $\bB = \{B_{i}\}_{i \in [n]}$ (\Cref{def:mapping}).
\end{lemma}
\begin{proof}
In any realization, the outcome auction {\textsf{Social Welfare}} is the allocated bidder's value $v_{\alloc(\bs(\bv))}$. Over the randomness of the values $\bv = (v_{i})_{i \in [n]} \sim \bV$, the equilibrium strategies $\bs = \{s_{i}\}_{i \in [n]}$, and the allocation rule $\alloc(\bs(\bv))$, the expected auction {\textsf{Social Welfare}} ${\sf FPA}$ is given by
\begin{align}
{\sf FPA}
~=~ \Ex_{\bv,\, \bs,\, \alloc} \big[\, v_{\alloc(\bs(\bv))} \,\big]
& ~=~ \Ex_{\bv,\, \bs,\, \alloc} \big[\, v_{\alloc(\bs(\bv))} \cdot \indicator(\max(\bs(\bv)) = \gamma) \,\big]
\label{eq:fpa_boundary}\tag{B} \\
& \phantom{~=~} + \Ex_{\bv,\, \bs,\, \alloc} \big[\, v_{\alloc(\bs(\bv))} \cdot \indicator(\max(\bs(\bv)) > \gamma) \,\big]. \hspace{0.47cm}
\label{eq:fpa_high}\tag{N}
\end{align}
Here we used the fact that the first-order bid $\max(\bs(\bv)) \sim \calB$ is at least $\gamma = \inf(\supp(\calB))$.
For Term~\eqref{eq:fpa_boundary}, we deduce that
\begin{align*}
\text{Term~\eqref{eq:fpa_boundary}}
& ~=~ \Ex_{\bv,\, \bs,\, \alloc} \big[\, v_{\alloc(\bs(\bv))} \cdot \indicator(\max(\bs(\bv)) = \gamma) \,\big] \\
& ~=~ \Ex_{\bv,\, \bs, \, \alloc} \big[\, v_{\alloc(\bs(\bv))} \,\bigmid\, \max(\bs(\bv)) = \gamma \,\big] \cdot \Prx_{\bv,\, \bs} \big[\, \max(\bs(\bv)) = \gamma \,\big] \\
& ~=~ \Ex[ P ] \cdot \calB(\gamma).
\end{align*}
The last equality uses the fact that conditioned on the boundary first-order bid $\big\{\max(\bs(\bv)) = \gamma\big\}$, the allocated bidder $\alloc(\bs(\bv))$'s value follows the distribution $P$ (\Cref{lem:conditional_value}).
For Term~\eqref{eq:fpa_high}, we deduce that
\begin{align}
\text{Term~\eqref{eq:fpa_high}}
& ~=~ \Ex_{\bv,\, \bs,\, \alloc} \big[\, v_{\alloc(\bs(\bv))} \cdot \indicator(\max(\bs(\bv)) > \gamma) \,\big] \phantom{\bigg.}
\nonumber \\
& ~=~ \Ex_{\bb,\, \bs^{-1},\, \alloc} \big[\, s_{\alloc(\bb)}^{-1}(b_{\alloc(\bb)}) \cdot \indicator(\max(\bb) > \gamma) \,\big] \phantom{\bigg.}
\label{eq:fpa:h1}\tag{N1} \\
& ~=~ \sum_{i \in [n]} \Ex_{\bb,\, s_{i}^{-1},\, \alloc} \big[\, s_{i}^{-1}(b_{i}) \cdot \indicator(\alloc(\bb) = i) \cdot \indicator(b_{i} > \gamma) \,\big] \phantom{\bigg.} \hspace{1.63cm}
\label{eq:fpa:h2}\tag{N2} \\
& ~=~ \sum_{i \in [n]} \Ex_{b_{i}} \big[\, \varphi_{i}(b_{i}) \cdot \alloc_{i}(b_{i}) \cdot \indicator(b_{i} > \gamma) \,\big] \phantom{\bigg.}
\label{eq:fpa:h3}\tag{N3} \\
& ~=~ \sum_{i \in [n]} \Ex_{b_{i}} \big[\, \varphi_{i}(b_{i}) \cdot \calB_{-i}(b_{i}) \cdot \indicator(b_{i} > \gamma) \,\big] \phantom{\bigg.}
\label{eq:fpa:h4}\tag{N4} \\
& ~=~ \sum_{i \in [n]} \bigg(\int_{\gamma}^{\lambda} \varphi_{i}(b) \cdot \calB_{-i}(b) \cdot B'_{i}(b) \cdot \d b\bigg)
\nonumber \\
& ~=~ \sum_{i \in [n]} \bigg(\int_{\gamma}^{\lambda} \varphi_{i}(b) \cdot \frac{B'_{i}(b)}{B_{i}(b)} \cdot \calB(b) \cdot \d b\bigg).
\label{eq:fpa:h5}\tag{N5}
\end{align}
\eqref{eq:fpa:h1}: Redenote $b_{i} = s_{i}(v_{i}) \sim B_{i}$ for $i \in [n]$. Recall \Cref{def:inverse} that each random inverse $s_{i}^{-1}(b_{i})$ for $b_{i} \sim B_{i}$ is identically distributed as the random value $v_{i} \sim V_{i}$. \\
\eqref{eq:fpa:h2}: Divide the event $\big\{\max(\bb) > \gamma\big\}$ into subevents $\big\{\alloc(\bb) = i ~\text{and}~ b_{i} = \max(\bb) > \gamma \big\}$ for $i \in [n]$. \\
\eqref{eq:fpa:h3}: By \Cref{lem:high_bid} (\Cref{lem:high_bid:inverse}), $s_{i}^{-1}(b) = \varphi_{i}(b)$ almost surely for any normal bid $b \in \supp_{> \gamma}(B_{i})$,
AND by \Cref{def:interim_utility}, $\alloc_{i}(b) = \Prx_{\bb_{-i},\, \alloc} \big[ \alloc(\bb_{-i},\, b) = i \big] = \Ex_{\bb_{-i},\, \alloc} \big[ \indicator(\alloc(\bb_{-i},\, b) = i) \big]$. \\
\eqref{eq:fpa:h4}: By \Cref{cor:allocation}, $\alloc_{i}(b) = \calB_{-i}(b)$ for any normal bid $b > \gamma$. \\
\eqref{eq:fpa:h5}: The first-order bid distribution $\calB(b) = B_{i}(b) \cdot \calB_{-i}(b)$.
\vspace{.1in}
Combining Terms~\eqref{eq:fpa_boundary} and \eqref{eq:fpa_high} together finishes the proof of \Cref{lem:auction_welfare}.
\end{proof}
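As an illustrative check of \Cref{lem:auction_welfare} (our own example, not from the text): in the symmetric case of two i.i.d.\ $U[0,\, 1]$ values with $s(v) = v/2$, we have $\gamma = 0$ (so the boundary term $\Ex[P] \cdot \calB(\gamma)$ vanishes), $\lambda = 1/2$, $\varphi_{i}(b) = 2b$, $B'_{i}(b)/B_{i}(b) = 1/b$, and $\calB(b) = (2b)^{2}$, so the formula evaluates to $2 \int_{0}^{1/2} 8 b^{2} \,\d b = 2/3 = \Ex[\max(v_{1},\, v_{2})]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Formula side: integrand phi_i(b) * (B_i'/B_i)(b) * calB(b) = 2b * (1/b) * (2b)^2 = 8 b^2.
b = np.linspace(1e-9, 0.5, 200001)
integrand = 2 * b * (1.0 / b) * (2 * b) ** 2
# Trapezoidal quadrature, summed over the two (symmetric) bidders.
fpa_formula = 2 * float(((integrand[:-1] + integrand[1:]) / 2 * np.diff(b)).sum())

# Monte-Carlo auction welfare: the winner is the higher bidder = higher value.
v = rng.uniform(0, 1, (10**6, 2))
fpa_mc = float(np.max(v, axis=1).mean())  # E[max of two U[0,1]] = 2/3

assert abs(fpa_formula - 2 / 3) < 1e-6
assert abs(fpa_mc - fpa_formula) < 5e-3
```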
Moreover, \Cref{lem:optimal_welfare} formulates the expected optimal {\textsf{Social Welfare}} ${\sf OPT}(\bV,\, \bs,\, \alloc)$.
\begin{lemma}[Optimal {\textsf{Social Welfare}}]
\label{lem:optimal_welfare}
The expected optimal {\textsf{Social Welfare}} ${\sf OPT}(\bV,\, \bs,\, \alloc)$ can be formulated based on the conditional value distribution and the equilibrium bid distributions $(P,\, \bB)$:
\begin{align*}
{\sf OPT}(\bV,\, \bs,\, \alloc)
~=~ {\sf OPT}(P,\, \bB)
~=~ \gamma + \int_{\gamma}^{+\infty} \Big(1 - P(v) \cdot \prod_{i \in [n]} \Prx_{b_{i}}\big[b_{i} \leq \gamma \vee \varphi_{i}(b_{i})\leq v \big]\Big) \cdot \d v.
\end{align*}
\end{lemma}
\begin{proof}
This proof simply follows from \Cref{lem:value_dist}, i.e., we essentially reconstruct the value distributions $\bV = \{V_{i}\}_{i \in [n]}$ from the tuple $(P,\, \bB)$.
In any realization, the outcome optimal {\textsf{Social Welfare}} is the first-order value $\max(\bv)$. Over the randomness of the values $\bv = (v_{i})_{i \in [n]} \sim \bV$, the expected optimal {\textsf{Social Welfare}} ${\sf OPT}$ is given by
\begin{align}
{\sf OPT}
& ~=~ \Ex_{\bv} \big[\, \max(\bv) \,\big]
~=~ \Ex_{\bv} \big[\, \max(\bv,\, \gamma) \,\big] \phantom{\bigg.}
\label{eq:optimal_welfare:1}\tag{O1} \\
& ~=~ \int_{0}^{+\infty} \Big(1 - \prod_{i \in [n]} V_{i}(v) \cdot \indicator(v \geq \gamma)\Big) \cdot \d v
\label{eq:optimal_welfare:2}\tag{O2} \\
& ~=~ \int_{0}^{+\infty} \Big(1 - P(v) \cdot \prod_{i \in [n]} \Prx_{b_{i}}\big[b_{i} \leq \gamma \vee \varphi_{i}(b_{i})\leq v \big] \cdot \indicator(v \geq \gamma) \Big) \cdot \d v
\label{eq:optimal_welfare:3}\tag{O3} \\
& ~=~ \gamma + \int_{\gamma}^{+\infty} \Big(1 - P(v) \cdot \prod_{i \in [n]} \Prx_{b_{i}}\big[b_{i} \leq \gamma \vee \varphi_{i}(b_{i})\leq v \big]\Big) \cdot \d v.
\nonumber
\end{align}
\eqref{eq:optimal_welfare:1}: The first-order value $\max(\bv)$ for $\bv \sim \bV$ is always boundary/normal $\geq \gamma$. \\
\eqref{eq:optimal_welfare:2}: The expectation of a {\em nonnegative} distribution $F$ is given by $\E[F] = \int_{0}^{+\infty} \big(1 - F(v)\big) \cdot \d v$. \\
\eqref{eq:optimal_welfare:3}: Apply \Cref{lem:value_dist}. This also holds when no monopolist exists since, in that case, $P(v) = 1$ for $v \geq \gamma$.
This finishes the proof of \Cref{lem:optimal_welfare}.
\end{proof}
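Similarly, \Cref{lem:optimal_welfare} can be checked on the same illustrative symmetric example: with no monopolist, $P(v) = 1$ for $v \geq \gamma = 0$ and $\Prx_{b_{i}}\big[b_{i} \leq \gamma \vee \varphi_{i}(b_{i}) \leq v\big] = \min(v,\, 1)$, so the formula gives $\int_{0}^{1} (1 - v^{2}) \,\d v = 2/3 = \Ex[\max(v_{1},\, v_{2})]$.

```python
import numpy as np

# Symmetric U[0,1] example: gamma = 0, P(v) = 1, and
# Pr[b_i <= gamma or phi_i(b_i) <= v] = Pr[2 b_i <= v] = min(v, 1) for b_i ~ U[0, 1/2].
v = np.linspace(0.0, 1.0, 100001)
integrand = 1 - np.minimum(v, 1.0) ** 2  # 1 - P(v) * prod_i Pr[...]; zero beyond v = 1
opt_formula = 0.0 + float(((integrand[:-1] + integrand[1:]) / 2 * np.diff(v)).sum())

# Closed form: E[max(v1, v2)] = 2/3 for two i.i.d. U[0,1] values.
assert abs(opt_formula - 2 / 3) < 1e-6
```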
The above $(P,\, \bB)$-based {\textsf{Social Welfare}} formulas ${\sf FPA}(P,\, \bB)$ and ${\sf OPT}(P,\, \bB)$, together with the characterizations of $P$ and $\bB$, are the foundation of the whole paper. From all these discussions, we notice that the below-$\gamma$ parts of bid distributions $\bB$ and value distributions $\bV$ are less important. To stress this observation, we have the key definition of {\em valid instances} $(P,\, \bB) \in \mathbb{B}_{\sf valid}$.
\begin{definition}[Valid instances]
\label{def:valid}
An instance $(P,\, \bB)$, where $\bB = \{B_{i}\}_{i \in [n]}$, is described by $n + 1$ independent distributions and is called {\em valid} $(P,\, \bB) \in \mathbb{B}_{\sf valid}$ when the following hold.
\begin{itemize}
\item There are $\gamma \in [0,\, +\infty)$ and $\lambda \in [\gamma,\, +\infty]$ such that $\gamma = \inf(\supp(B_{i})) \leq \sup(\supp(B_{i})) \leq \lambda$ for each $i \in [n]$, with at least one index attaining the equality $\sup(\supp(B_{i})) = \lambda$. It is important that all these bid distributions $\bB = \{B_{i}\}_{i \in [n]}$ share the same infimum bid $\gamma$.
\item $B_{i}(b)$ for $i \in [n]$ each has no probability mass on $b \in (\gamma,\, \lambda]$, excluding the boundary $\gamma$, thus each being continuous on the CLOSED interval $b \in [\gamma,\, \lambda]$.
\item Let $\calB(b) = \prod_{i \in [n]} B_{i}(b)$ and $\calB_{-i}(b) = \prod_{k \in [n] \setminus \{i\}} B_{k}(b)$ for $i\in[n]$. These first-order/competing bid distributions $\calB(b)$ and $\calB_{-i}(b)$ for $i \in [n]$ each have probability densities almost everywhere on $b \in (\gamma,\, \lambda]$, thus each being strictly increasing on the CLOSED interval $b \in [\gamma,\, \lambda]$.
\item The bid-to-value mappings $\varphi_{i}(b) \eqdef b + \frac{\calB_{-i}(b)}{\calB'_{-i}(b)}$ for $i \in [n]$ each are (weakly) increasing over the bid support $b \in [\gamma,\, \lambda]$.
\item $P$ is a distribution supported on $[\gamma, \varphi_h(\gamma)]$ for some (monopolist) $h\in[n]$. Without loss of generality, we reindex the bidders $i \in [n]$ such that $h = 1$.
\end{itemize}
\end{definition}
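For concreteness (again using our illustrative symmetric instance $B_{1} = B_{2}$ with $B_{i}(b) = 2b$ on $[0,\, 1/2]$, $\gamma = 0$, $\lambda = 1/2$), a quick numerical check of the validity conditions in \Cref{def:valid}:

```python
import numpy as np

# Illustrative instance: B_i(b) = 2b on [0, 1/2], so calB_{-i}(b) = 2b and
# calB'_{-i}(b) = 2, giving the bid-to-value mapping phi_i(b) = b + 2b/2 = 2b.
b = np.linspace(1e-4, 0.5, 5000)
B = 2 * b
phi = b + (2 * b) / 2.0

assert (np.diff(B) > 0).all()    # strictly increasing: no atoms above gamma
assert (np.diff(phi) > 0).all()  # bid-to-value mapping is (weakly) increasing
```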
The valid instances $(P,\, \bB) \in \mathbb{B}_{\sf valid}$ capture nearly all {\textsf{Bayesian Nash Equilibria}} $\big\{ \bs \in \bbBNE(\bV,\, \alloc) \big\}$. Essentially, there is a bijection between these two spaces (up to some less important details about {\textsf{Bayesian Nash Equilibria}}), especially in the sense that the {\textsf{Social Welfares}} are invariant.
This is formalized into \Cref{thm:valid}.
\begin{theorem}[Valid instances]
\label{thm:valid}
(I)~For any valid instance $(P,\, \bB) \in \mathbb{B}_{\sf valid}$, there is some equilibrium $\bs \in \bbBNE(\bV,\, \alloc) \neq \emptyset$, for some value distributions $\bV = \{V_{i}\}_{i \in [n]}$, whose conditional value distribution and bid distributions are exactly the given $(P,\, \bB)$.
(II)~For any value distribution $\bV = \{V_{i}\}_{i \in [n]}$ and any equilibrium $\bs \in \bbBNE(\bV,\, \alloc) \neq \emptyset$,
there is some valid instance $(P,\, \bB) \in \mathbb{B}_{\sf valid}$ that yields the same auction/optimal {\textsf{Social Welfares}} ${\sf FPA}(P,\, \bB) = {\sf FPA}(\bV,\, \bs,\, \alloc)$ and ${\sf OPT}(P,\, \bB) = {\sf OPT}(\bV,\, \bs,\, \alloc)$.
\end{theorem}
\begin{figure}[t]
\centering
\subfloat[\label{fig:valid:monopolist}
The monopolist $h = B_{1}$]{
\includegraphics[width = .49\textwidth]
{quantile_monopolist.png}}
\hfill
\subfloat[\label{fig:valid:non}
{A non-monopoly bidder $B_{i}$ for $i \in [2: n]$}]{
\includegraphics[width = .49\textwidth]
{quantile_non.png}}
\caption{Demonstration for valid instances $(P,\, \bB) \in \mathbb{B}_{\sf valid}$ and \Cref{thm:valid}.
\label{fig:valid}}
\end{figure}
\begin{proof}
See \Cref{fig:valid} for a visual aid. We prove Items~(I) and (II) separately.
(I)~Given a valid instance $(P,\, \bB) \in \mathbb{B}_{\sf valid}$, we can construct
the boundary/normal value $v \geq \gamma$ part of some value distribution $\bV = \{V_{i}\}_{i \in [n]}$, following \Cref{lem:value_dist}.
For the low value $v < \gamma$ part, we simply let $V_{i}(v) \equiv 0$ on $v < \gamma$, namely putting all the undecided probabilities to the boundary value $\gamma$. Then the value distribution $\bV = \{V_{i}\}_{i \in [n]}$ is well defined.
Since we zero-out everything below $\gamma$, the equivalence in \Cref{quantiles} can be extended to the entire quantile space $q \in [0,1]$. Hence, we can construct the equilibrium by identifying the quantiles of the value distributions with those of the bid distributions.
Consider the quantile bid/value functions $\{B_{i}^{-1}\}_{i \in [n]}$ and $\{V_{i}^{-1}\}_{i \in [n]}$.
As \Cref{fig:valid} shows, we construct the strategy profile $\bs = \{s_{i}\}_{i \in [n]}$ as follows: For $i \in [n]$ and $v \in \supp(V_{i})$, let $q \sim U[0,\, 1]$ be a uniform random quantile, then
\[
\Prx_{s_{i}}\big[\, s_{i}(v) = b \,\big] ~=~ \Prx_{q \,\sim\, U[0,\, 1]} \big[\, B_{i}^{-1}(q) = b \,\bigmid\, V_{i}^{-1}(q) = v \,\big],
\qquad\qquad \forall b \in [\gamma,\, \lambda].
\]
We conclude Item~(I) by verifying the equilibrium conditions (\Cref{def:bne_formal}).
\setcounter{fact}{0}
\begin{fact}
\label{fact:valid:equilibrium}
$u_{i}(v,\, s_{i}(v)) \geq u_{i}(v,\, b^{*})$ almost surely, for each bidder $i \in [n]$, any value $v \in \supp(V_{i})$, and any deviation bid $b^{*} \geq 0$.
\end{fact}
\begin{proof}
First, the monopolist $B_{1}$ (\Cref{cor:allocation,def:monopolist,lem:monopolist}) has the allocation $\alloc_{1}(b) = \calB_{-1}(b)$ for a boundary/normal bid $b \geq \gamma$ and $\alloc_{1}(b) = 0$ for a low bid $b \in [0,\, \gamma)$. \\
{\bf (i)}~A Normal Bid $s_{1}(v) \in (\gamma,\, \lambda]${\bf .}
The underlying value $v = \varphi_{1}(s_{1}(v)) > s_{1}(v) > \gamma$, by construction and \Cref{lem:value_dist,lem:high_bid}.
Accordingly, the current allocation/utility are positive $u_{1}(v,\, s_{1}(v)) = (v - s_{1}(v)) \cdot \calB_{-1}(s_{1}(v)) > 0$.
Then a low deviation bid $b^{*} < \gamma$ is suboptimal, because the deviated allocation/utility are zero $u_{1}(v,\, b^{*}) = (v - b^{*}) \cdot \alloc_{1}(b^{*}) = 0$.
Moreover, a boundary/normal deviation bid $b^{*} \geq \gamma$ is suboptimal: the current bid $s_{1}(v) > \gamma$ maximizes the utility formula $u_{1}(v,\, b^{*}) = (v - b^{*}) \cdot \calB_{-1}(b^{*})$, because the partial derivative is $\frac{\partial u_{1}}{\partial b^{*}} = \big(v - \varphi_{1}(b^{*})\big) \cdot \calB'_{-1}(b^{*})$, the underlying value is $v = \varphi_{1}(s_{1}(v))$, and the mapping $\varphi_{1}(b^{*})$ is increasing (\Cref{lem:high_bid:monotone} of \Cref{lem:high_bid}). \\
{\bf (ii)}~A Boundary Bid $s_{1}(v) = \gamma${\bf .}
The underlying value $v = s_{1}^{-1}(\gamma)$ exactly follows the conditional value distribution $P$ and ranges within $\supp(P) \subseteq [\gamma,\, \varphi_{1}(\gamma)]$, by construction and \Cref{def:conditional_value,def:valid}.
Reusing the above arguments (i.e., the utility formula $u_{1}(v,\, b^{*})$ is decreasing for $b^{*} \geq \gamma$),
we can easily see that the current bid $s_{1}(v) = \gamma$ is optimal.
In sum, the monopolist $B_{1}$ meets the equilibrium conditions.
\vspace{.1in}
Each non-monopoly bidder $i \in [2: n]$ (\Cref{cor:allocation,def:monopolist,lem:monopolist}) has the allocation $\alloc_{i}(b) = \calB_{-i}(b)$ for a normal bid $b > \gamma$ and $\alloc_{i}(b) = 0$ for a low/boundary bid $b \in [0,\, \gamma]$. \\
{\bf (i)}~A Normal Bid $s_{i}(v) \in (\gamma,\, \lambda]${\bf .}
Reusing the above arguments for the monopolist $B_{1}$,
we can easily see that the current normal bid $s_{i}(v) \in (\gamma,\, \lambda]$ is optimal. \\
{\bf (ii)}~A Boundary Bid $s_{i}(v) = \gamma${\bf .}
The underlying value must be the boundary value $v = s_{i}^{-1}(\gamma) \equiv \gamma$, by construction and \Cref{def:conditional_value,def:valid}.
Obviously, the current {\em zero} utility $u_{i}(v,\, s_{i}(v)) = 0$ and the current {\em boundary} bid $s_{i}(v) = \gamma$ are optimal.
In sum, each non-monopoly bidder $i \in [2: n]$ meets the equilibrium conditions.
{\bf \Cref{fact:valid:equilibrium}} follows then.
\end{proof}
(II)~Given an equilibrium $\bs \in \bbBNE(\bV,\, \alloc) \neq \emptyset$, as before, consider the bid distribution $\bB^{(\bs)}$, the conditional value distribution $P^{(\bs)}$, and the infimum/supremum first-order bids $\gamma = \inf(\supp(\calB^{(\bs)}))$ and $\lambda = \sup(\supp(\calB^{(\bs)}))$.
Following \Cref{lem:bid_distribution,lem:high_bid,lem:monopolist,def:conditional_value}, this instance $(P^{(\bs)},\, \bB^{(\bs)})$ is almost valid, except that
the distribution $\bB^{(\bs)} = \{B_{i}^{(\bs)}\}_{i \in [n]}$ can be supported on low bids $b < \gamma$. (Recall that the distribution $P^{(\bs)}$ is just supported on boundary/high values $v \geq \gamma$.)
By truncating the bid distribution $B_{i}(b) \equiv B_{i}^{(\bs)}(b) \cdot \indicator(b \geq \gamma)$ for $i \in [n]$ and reusing the conditional value distribution $P \equiv P^{(\bs)}$, we obtain a valid instance $(P,\, \bB) \in \mathbb{B}_{\sf valid}$.
Particularly, the truncation $B_{i}(b) \equiv B_{i}^{(\bs)}(b) \cdot \indicator(b \geq \gamma)$ does not modify the auction/optimal {\textsf{Social Welfares}} (cf.\ \Cref{lem:auction_welfare,lem:optimal_welfare}).
This finishes the proof of Item~(II).
\end{proof}
\begin{comment}
\begin{itemize}
\item {\bf A Normal Bid $s_{1}(v) \in (\gamma,\, \lambda]$.}
the boundary bid $s_{i}(v) = \gamma$.
following the conditional value distribution $v \sim P$.
This high bid $b > \gamma$ yields (\Cref{lem:allocation} (\Cref{lem:allocation:2}) and \Cref{lem:bid_distribution} (\Cref{lem:bid_distribution:continuity})) a {\em nonzero} allocation $\alloc_{i}(b) = \calB_{-i}(b) > 0$ and (\Cref{lem:high_bid}) a {\em higher} value $v = s_{i}^{-1}(b) = \varphi_{i}(b) > b$, hence a {\em positive} utility $u_{i}(v,\, b) = (v - b) \cdot \calB_{-i}(b) > 0$. A deviation bid $b^{*} \geq 0$ has three possibilities.
\begin{itemize}
\item A low deviation bid $\{b^{*} < \gamma\}$.
Again, this yields a zero deviated utility $u_{i}(v,\, b^{*}) = 0$.
\item A high deviation bid $\{b^{*} > \gamma\}$.
We reuse the arguments in the proof of \Cref{lem:high_bid}.
The same as before, we can formulate the deviated utility $u_{i}(v,\, b^{*}) = (v - b^{*}) \cdot \calB_{-i}(b^{*})$ and the partial derivative $\frac{\partial u_{i}}{\partial b^{*}} = \big(v - \varphi_{i}(b^{*})\big) \cdot \calB'_{-i}(b^{*})$, where the considered value/inverse (\Cref{lem:high_bid:inverse} of \Cref{lem:high_bid}) $v = s_{i}^{-1}(b) = \varphi_{i}(b)$.
Moreover, the bid-to-value mapping $\varphi_{i}(b^{*})$ (\Cref{lem:high_bid:monotone} of \Cref{lem:high_bid}) is increasing for $b^{*} \geq \gamma$, {\em including the boundary}.
Given these, the deviated utility $u_{i}(v,\, b^{*})$ is increasing when $b^{*} \in (\gamma,\, b]$ and is decreasing when $b^{*} \geq b$, thus being optimized by underlying bid, $u_{i}(v,\, b) \geq u_{i}(v,\, b^{*})$ for any $b^{*} > \gamma$.
\item A boundary deviation bid $\{b^{*} = \gamma\}$.
For the deviated utility $u_{i}(v,\, b^{*})$, we deduce that
{\centering
$u_{i}(v,\, b^{*})
\,=\, (v - b^{*}) \cdot \alloc_{i}(b^{*})
\,\leq\, \lim_{t \searrow \gamma} \big((v - t) \cdot \alloc_{i}(t)\big)
\,=\, \lim_{t \searrow \gamma} u_{i}(v,\, t)
\,\leq\, u_{i}(v,\, b)$.
\par}
The first inequality holds because $b^{*} = \gamma$, (\Cref{lem:inverse:dichotomy} of \Cref{lem:inverse}) $v = s_{i}^{-1}(b) > b$, and (\Cref{lem:allocation:1} of \Cref{lem:allocation}) the interim allocation formula $\alloc_{i}(t)$ is increasing for $t \geq 0$. \\
The second inequality holds because (as shown above) $u_{i}(v,\, b) \geq u_{i}(v,\, t)$ for any $t > \gamma$.
\end{itemize}
Hence, the underlying high bid $b > \gamma$ is optimal: $u_{i}(v,\, b) \geq u_{i}(v,\, b^{*})$ for any $b^{*} \geq 0$.
\item {\bf Case~\term[II]{thm:bne:case_II}: The Underlying Bid is Boundary $\{b = \gamma\}$.}
The considered bidder $i \in [n]$ has two possibilities.
\begin{itemize}
\item One of the non-monopoly bidders $i \in [n] \setminus \{h\}$.
By construction, this bidder takes the {\em boundary} value $v = s_{i}^{-1}(\gamma) = \gamma$ and never can be allocated $\alloc(\bb) \neq i$.
Obviously, the current {\em zero} utility $u_{i}(v,\, \gamma) = 0$ is optimal.
\item The monopoly bidder $i = h$.
By construction, this bidder has a {\em boundary/high} value $v = s_{h}^{-1}(\gamma) \in [\gamma,\, \varphi_{h}(\gamma)]$ and {\em will} be allocated $\alloc(\bb) = h$ once being the first-order bid $\big\{\max(\bb) = \gamma\big\}$. I.e., the underlying boundary bid $b = \gamma$ yields (\Cref{lem:allocation:2} of \Cref{lem:allocation}) the allocation $\alloc_{i}(\gamma) = \calB_{-i}(\gamma)$ and
the utility $u_{i}(v,\, \gamma) = (v - \gamma) \cdot \calB_{-i}(\gamma) \geq 0$.
This utility $u_{i}(v,\, b) = (v - b) \cdot \calB_{-i}(b) \geq 0$ has the same format as the one in {\bf Case~\ref{thm:bne:case_I}}, so we can reuse the arguments for {\bf Case~\ref{thm:bne:case_I}} to verify its optimality.
\end{itemize}
Hence, the underlying boundary bid $b = \gamma$ is optimal: $u_{i}(v,\, b) \geq u_{i}(v,\, b^{*})$ for any $b^{*} \geq 0$.
\end{itemize}
on either a normal bid $s_{i}(v) > \gamma$ or a boundary bid $s_{i}(v) = \gamma$.
Conditioned on the boundary bid $b = \gamma$ (\Cref{def:monopolist,def:conditional_value,lem:monopolist}), the index-$1$ monopolist $h = B_{1}$'s value exactly follows the conditional value distribution $s_{1}^{-1}(\gamma) \sim P$, whereas each non-monopoly bidder $i \in [2: n]$ always takes the boundary value $s_{i}^{-1}(\gamma) \equiv \gamma$.
Moreover, conditioned on the tiebreak $\big\{ b_{1} = \max(\bb) = \gamma \big\}$, (\Cref{def:valid}) the index-$1$ monopolist is allocated $\alloc(\bb) = B_{1}$ almost surely.
Given these, it is easy to check that this {\em is} an equilibrium $\bs \in \bbBNE(\bV,\, \alloc)$ and the $(P,\, \bB)$ are exactly the conditional value distribution and the bid distribution.
\end{comment}
We conclude this section with (\Cref{cor:poa_identity}) an equivalent definition for {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}}, as an implication of \Cref{thm:valid}.
\begin{corollary}[{\textsf{Price of Anarchy}}]
\label{cor:poa_identity}
Regarding {\textsf{First Price Auctions}}, the {\textsf{Price of Anarchy}} is given by
\begin{align*}
{\sf PoA} ~=~ \inf \bigg\{\, \frac{{\sf FPA}(P,\, \bB)}{{\sf OPT}(P,\, \bB)} \,\biggmid\, (P,\, \bB) \in \mathbb{B}_{\sf valid} ~\text{\em and}~ {\sf OPT}(P,\, \bB) < +\infty \,\bigg\}.
\end{align*}
\end{corollary}
\newpage
\section{Upper Bound Analysis}
\label{sec:LB}
In this section, we first (\Cref{exp:LB}) present our worst case instances in terms of {\em bid distributions}; then (\Cref{lem:LB_validity}) check the validity, i.e., the corresponding {\em value distributions} are well defined; and then (\Cref{lem:LB_poa}) evaluate the auction/optimal {\textsf{Social Welfares}} from this instance. The upper-bound part of \Cref{thm:main} will be a direct consequence of \Cref{lem:LB_validity,lem:LB_poa}.
The idea is to simulate the worst-case pseudo instance $H^{*} \otimes L^{*}$ from \Cref{sec:UB}, {\em using a number of i.i.d.\ low-impact bidders $\{L\}^{\otimes n}$ to approximate the pseudo bidder $L^{*}$} (\Cref{rem:pseudo_instance}): Each individual low-impact bidder $L$ is likely to have a small bid $\approx 0$, but the highest bid from $\{L\}^{\otimes n}$ is (almost) identically distributed as the worst-case pseudo bidder $L^{*}$.
\begin{example}[Worst-case instances]
\label{exp:LB}
Let $\lambda^{*}=1 - 4 / e^{2}$ and $L^{*}(b) = \frac{1 - \lambda^{*}}{1 - b}$ for any $b \in [0,\, \lambda^{*}]$ be a CDF function. Another CDF function $H^{*}(b)$ is given by the implicit equation
\begin{align}
\label{eq:LB:H}
\left\{\, (b,\, H^{*}) \,\middlemid\,
\begin{aligned}
& b ~=~ 1 - 4H^{*} \cdot \exp\Big(2 - 4\sqrt{H^{*}}\Big) \\
& \text{such that $0 \leq b \leq \lambda^{*}$ and $1 / 4 \leq H^{*} \leq 1$} \phantom{\bigg.}
\end{aligned} \,\right\}.
\end{align}
For an arbitrarily small constant $\epsilon \in (0,\, 1)$, consider the following $(n + 1)$-bidder instance $H \otimes \{L\}^{\otimes n}$ for $n \eqdef \lceil 3 / \epsilon \rceil \geq 4$:
\begin{itemize}
\item $H(b) \eqdef H^{*}(b)$ for $b \in [0,\, \lambda^{*}]$ denotes the monopolist.
\item $L(b) \eqdef \big(L^{*}(b)\big)^{\frac{1}{n - 1}}$ for $b \in [0,\, \lambda^{*}]$ denotes the {\em common} bid distribution of $n \geq 4$ many i.i.d.\ {\em low-impact} bidders.
\end{itemize}
\end{example}
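As a quick numerical sanity check (ours, not part of the proof), Implicit Equation~\eqref{eq:LB:H} indeed defines a CDF: the map $H^{*} \mapsto b$ is increasing on $[1/4,\, 1]$ and sends the endpoints $H^{*} = 1/4$ and $H^{*} = 1$ to $b = 0$ and $b = \lambda^{*}$, respectively. The following Python sketch (the helper name \texttt{b\_of\_H} is ours) verifies this:

```python
import math

LAMBDA_STAR = 1 - 4 / math.e**2  # lambda^* = 1 - 4/e^2 ~ 0.4587

def b_of_H(H):
    # Bid level b paired with CDF value H^* by the implicit equation (eq:LB:H).
    return 1 - 4 * H * math.exp(2 - 4 * math.sqrt(H))

# Endpoints: H^* = 1/4 gives b = 0, and H^* = 1 gives b = lambda^*.
assert abs(b_of_H(0.25)) < 1e-12
assert abs(b_of_H(1.0) - LAMBDA_STAR) < 1e-12

# b(H) is strictly increasing on [1/4, 1], so H^*(b) is a well-defined
# increasing function of b, i.e., a CDF supported on [0, lambda^*].
grid = [0.25 + 0.75 * k / 1000 for k in range(1001)]
bids = [b_of_H(H) for H in grid]
assert all(x < y for x, y in zip(bids, bids[1:]))
```

(Monotonicity also follows analytically: $\frac{\d b}{\d H^{*}} = (8\sqrt{H^{*}} - 4) \cdot \exp(2 - 4\sqrt{H^{*}}) \geq 0$ for $H^{*} \geq 1/4$.)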
Here we choose $L(b) \eqdef \big(L^{*}(b)\big)^{\frac{1}{n - 1}}$ rather than $\big(L^{*}(b)\big)^{\frac{1}{n}}$ so that each low-impact bidder $L$ has the competing bid distribution $H(b)\cdot \big(L(b)\big)^{n - 1} = H^{*}(b) \cdot L^{*}(b)$, exactly the same as the pseudo bidder $L^{*}$. But in this way, the monopolist $H$'s competing bid distribution $\big(L(b)\big)^{n}=\big(L^{*}(b)\big)^{\frac{n}{n - 1}}$ slightly differs from $L^{*}(b)$,\footnote{The other choice $L(b) \eqdef\big(L^{*}(b)\big)^{\frac{1}{n}}$ is also fine, but the analysis will be more complicated.}
so his/her value is not a constant $\equiv 1$ but a random variable $\approx 1$ spanning a tiny range.
These are all formalized in \Cref{lem:LB_validity}, which checks the validity of this $(n + 1)$-bidder instance; see \Cref{fig:LB} for a visual aid.
\begin{lemma}[Validity]
\label{lem:LB_validity}
Given an arbitrarily small constant $\epsilon \in (0,\, 1)$, the following hold for the $(n + 1)$-bidder instance $H \otimes \{L\}^{\otimes n}$ in \Cref{exp:LB}:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:LB_validity:1}
The monopolist $H$ has an increasing bid-to-value mapping $\varphi_{H}(b) \eqdef 1 - \frac{1 - b}{n}$ for $b \in [0,\, \lambda^{*}]$ and a well-defined value distribution $V_{H}(v)$ given by the implicit equation
\begin{align*}
\left\{\, (v,\, V_{H}) \,\middlemid\,
\begin{aligned}
& v ~=~ 1 - \tfrac{1}{n} \cdot 4V_{H} \cdot \exp\Big(2 - 4\sqrt{V_{H}}\Big) \\
& \text{\em such that $1 - \tfrac{1}{n} \leq v \leq 1 - \tfrac{4 / e^{2}}{n}$ and $1 / 4 \leq V_{H} \leq 1$} \phantom{\bigg.}
\end{aligned} \,\right\}.
\end{align*}
The strategy $s_{H}$ as the inverse of $\varphi_{H}(b)$ is given by $s_H(v) = 1 - n \cdot (1 - v)$.
\item\label{lem:LB_validity:2}
Each low-impact bidder $L$ has an increasing bid-to-value mapping $\varphi_{L}(b)$ given by the parametric equation
\begin{align*}
\Big\{\, (b,\, \varphi_{L}) = \big(1 - t^{2} \cdot e^{2 - 2t},\quad 1 - t \cdot e^{2 - 2t}\big) \,\Bigmid\,
1 \leq t \leq 2 \,\Big\}
\end{align*}
and a well-defined value distribution $V_{L}(v)$ given by the parametric equation
\begin{align*}
\Big\{\, (v,\, V_{L}) = \big(1 - t \cdot e^{2 - 2t},\quad \big(4 / t^{2} \cdot e^{2t - 4}\big)^{\frac{1}{n - 1}}\big) \,\Bigmid\,
1 \leq t \leq 2 \,\Big\}.
\end{align*}
The strategy $s_{L}$ as the inverse of $\varphi_{L}(b)$ is given by the parametric equation $s_L(1 - t \cdot e^{2 - 2t})=1 - t^{2} \cdot e^{2 - 2t}$ for $1 \leq t \leq 2$.
\item\label{lem:LB_validity:3}
The strategy profile $s_{H} \otimes \{s_{L}\}^{\otimes n}$ forms an equilibrium $s_{H} \otimes \{s_{L}\}^{\otimes n} \in \bbBNE(V_{H} \otimes \{V_{L}\}^{\otimes n})$.
\end{enumerate}
\end{lemma}
\begin{figure}[t]
\centering
\includegraphics[width = .9\textwidth]{Lower_Bound.png}
\caption{Demonstration for the $(n + 1)$-bidder instance $H \otimes \{L\}^{\otimes n}$ in \Cref{exp:LB}. \\
(i)~The monopolist $H$ has a bid CDF $H$ (orange) given by Implicit Equation~\eqref{eq:LB:H} and a value CDF $V_{H}$ (red) given by \Cref{lem:LB_validity:1} of \Cref{lem:LB_validity}. \\
(ii)~Each of the i.i.d.\ low-impact bidders $\{L\}^{\otimes n}$ has a bid CDF $L$ (green) given by $L(b) = (\frac{1 - \lambda^{*}}{1 - b})^{\frac{1}{n - 1}}$ for any $b \in [0,\, \lambda^{*}] = [0,\, 1 - 4 / e^{2}]$ and a value CDF $V_{L}$ (blue) given by \Cref{lem:LB_validity:2} of \Cref{lem:LB_validity}.}
\label{fig:LB}
\end{figure}
\begin{proof}
Let us prove {\bf \Cref{lem:LB_validity:1,lem:LB_validity:2}} one by one. Then {\bf \Cref{lem:LB_validity:3}} will be a direct consequence.
\vspace{.1in}
\noindent
{\bf \Cref{lem:LB_validity:1}.}
The monopolist $H$ has a competing bid distribution $L(b)^{n} = L^{*}(b)^{\frac{n}{n - 1}} = (\frac{1 - \lambda^{*}}{1 - b})^{\frac{n}{n - 1}}$ and thus (by elementary algebra) a bid-to-value mapping
\begin{align*}
\varphi_{H}(b) ~=~ b + L(b)^{n} \big/ \big(L(b)^{n}\big)' ~=~ b + \tfrac{n - 1}{n} \cdot (1 - b) ~=~ 1 - \tfrac{1 - b}{n}.
\end{align*}
This mapping $\varphi_{H}(b)$ is increasing on $b \in [0,\, \lambda^{*}] = [0,\, 1 - 4 / e^{2}]$, with the minimum $\varphi_{H}(0) = 1 - \frac{1}{n}$ and the maximum $\varphi_{H}(\lambda^{*}) = 1 - \frac{1 - \lambda^{*}}{n} = 1 - \frac{4 / e^{2}}{n}$.
According to \Cref{lem:value_dist}, the value distribution $V_{H}(v)$ follows the parametric equation
\begin{align*}
\Big\{\, (v,\, V_{H}) = \big(\varphi_{H}(b),\, H(b)\big) \,\Bigmid\, b \in [0,\, \lambda^{*}] \,\Big\},
\end{align*}
where the formula $V_{H} = H(b) = H^{*}(b)$ is defined by Implicit Equation~\eqref{eq:LB:H}. Based on elementary algebra (i.e., using the substitution $b = 1 - n \cdot (1 - v)$ due to the formula $v = \varphi_{H}(b) = 1 - \frac{1 - b}{n}$), we obtain the claimed implicit equation for $V_{H}(v)$. {\bf \Cref{lem:LB_validity:1}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:LB_validity:2}.}
For notational brevity, below we may simply write $H^{*} = H^{*}(b)$ and $L^{*} = L^{*}(b)$ etc.
We let $t \eqdef 2\sqrt{H^{*}} \in [1,\, 2]$ as a function of $b$. Then
$b = 1 - t^2 \cdot \exp(2 - 2t)$ by \Cref{eq:LB:H} and
\begin{equation}
\tfrac{\d b}{\d t} ~=~ (2t^{2} - 2t) \cdot \exp(2 - 2t).
\end{equation}
The low-impact bidders $\{L\}^{\otimes n}$ share the common competing bid distribution $B(b) = H(b) \cdot \big(L(b)\big)^{n - 1}= H^{*}(b) \cdot L^{*}(b)$. This distribution can be written as
\[
B ~=~ H^{*} \cdot L^{*} ~=~ H^{*} \cdot \tfrac{4 / e^{2}}{1 - b} ~=~ \exp\big(-4 + 2t\big),
\]
where
the last step applies $H^{*} = t^{2} / 4$ and $b = 1 - t^2 \cdot \exp(2 - 2t)$. The above formula maps the range $t\in [1,\, 2]$ onto the range $B \in [1 / e^{2},\, 1]$.
Thus
\begin{align*}
\varphi_{L}
& ~=~ b + B \big/ (\tfrac{\d B}{\d b}) \\
& ~=~ b + \exp\big(-4 + 2t \big) \cdot (\tfrac{\d b}{\d t}) \big/ (\tfrac{\d B}{\d t}) \\
& ~=~ 1 - t^2 \cdot \exp(2 - 2t) + (t^{2} - t) \cdot \exp(2 - 2t) \\
& ~=~ 1 - t \cdot e^{2 - 2t}.
\end{align*}
We conclude with the parametric equation for $ \varphi_{L}(b)$ claimed in the statement of \Cref{lem:LB_validity:2}:
\begin{align*}
\Big\{\, (b,\, \varphi_{L}) = \big(1 - t^{2} \cdot e^{2 - 2t},\quad 1 - t \cdot e^{2 - 2t}\big) \,\Bigmid\,
1 \leq t \leq 2 \,\Big\}.
\end{align*}
With respect to $t \in [1,\, 2]$, both formulas $b = 1 - t^{2} \cdot e^{2 - 2t}$ and $\varphi_{L} = 1 - t \cdot e^{2 - 2t}$ are increasing. Thus, the bid-to-value mapping $\varphi_{L}(b)$ is increasing for $b \in [0,\, 1 - 4 / e^{2}] = [0,\, \lambda^{*}]$, with the minimum $\varphi_{L}(0) = 0$ and the maximum $\varphi_{L}(\lambda^{*}) = 1 - 2 / e^{2}$.
According to \Cref{lem:value_dist}, the value distribution $V_{L}(v)$ follows the parametric equation
\begin{align*}
\Big\{\, (v,\, V_{L}) = \big(\varphi_{L}(b),\, L(b)\big) \,\Bigmid\, b \in [0,\, \lambda^{*}] \,\Big\},
\end{align*}
where the formula $V_{L} = L(b) = (\frac{1 - \lambda^{*}}{1 - b})^{\frac{1}{n - 1}} = (\frac{4 / e^{2}}{1 - b})^{\frac{1}{n - 1}}$. By elementary algebra (i.e., employing the formula $b = 1 - t^{2} \cdot e^{2 - 2t}$ in the defining parametric equation for $\varphi_{L}(b)$), we derive the claimed parametric equation for $V_{L}(v)$. {\bf \Cref{lem:LB_validity:2}} follows then. This finishes the proof.
\end{proof}
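Both bid-to-value mappings computed above can also be checked numerically. The sketch below (ours, with a sample value $n = 5$ and a hypothetical bisection helper \texttt{t\_of\_b}) compares the closed forms of $\varphi_{H}$ and $\varphi_{L}$ against finite-difference evaluations of $b + B / (\frac{\d B}{\d b})$:

```python
import math

E2 = math.e**2
n = 5  # sample number of low-impact bidders (any n >= 4 from the Example works)

def L(b):
    # Bid CDF of one low-impact bidder: L(b) = (L^*(b))^{1/(n-1)}.
    return ((4 / E2) / (1 - b)) ** (1 / (n - 1))

def deriv(f, x, h=1e-7):
    # Central finite difference.
    return (f(x + h) - f(x - h)) / (2 * h)

# Monopolist: phi_H(b) = b + L^n / (L^n)' should equal 1 - (1-b)/n.
Ln = lambda b: L(b) ** n
for b in (0.1, 0.3, 0.4):
    phi_H = b + Ln(b) / deriv(Ln, b)
    assert abs(phi_H - (1 - (1 - b) / n)) < 1e-5

def t_of_b(b):
    # Invert b = 1 - t^2 e^{2-2t} on t in [1, 2] by bisection (b increases in t).
    lo, hi = 1.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 1 - mid**2 * math.exp(2 - 2 * mid) < b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def B(b):
    # Competing bid distribution of a low-impact bidder: B = H^* L^* = e^{2t-4}.
    return math.exp(2 * t_of_b(b) - 4)

# Low-impact bidder: phi_L(b) = b + B/(dB/db) should equal 1 - t e^{2-2t}.
for t in (1.2, 1.5, 1.8):
    b = 1 - t**2 * math.exp(2 - 2 * t)
    phi_L = b + B(b) / deriv(B, b)
    assert abs(phi_L - (1 - t * math.exp(2 - 2 * t))) < 1e-4
```

The assertions pass for every sampled point, matching the algebraic computations in the proof.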
Now we study the auction/optimal {\textsf{Social Welfares}} from our $(n + 1)$-bidder instance $H \otimes \{L\}^{\otimes n}$.
\begin{lemma}[{\textsf{Price of Anarchy}}]
\label{lem:LB_poa}
Given an arbitrarily small constant $\epsilon \in (0,\, 1)$, the following hold for the $(n + 1)$-bidder instance $H \otimes \{L\}^{\otimes n}$ in \Cref{exp:LB}:
\begin{enumerate}[font = {\em\bfseries}]
\item\label{lem:LB_poa:1}
The expected optimal {\textsf{Social Welfare}} ${\sf OPT}(H \otimes \{L\}^{\otimes n}) \geq 1 - \epsilon$; and
\item\label{lem:LB_poa:2}
The expected auction {\textsf{Social Welfare}} ${\sf FPA}(H \otimes \{L\}^{\otimes n}) \leq 1 - 1 / e^{2} $.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us prove {\bf \Cref{lem:LB_poa:1,lem:LB_poa:2}} one by one.
\vspace{.1in}
\noindent
{\bf \Cref{lem:LB_poa:1}.}
The value $v_{H} \sim V_{H}$ of the monopolist $H$, supported on $\supp(V_{H}) = \big[1 - \frac{1}{n},\, 1 - \frac{4 / e^{2}}{n}\big]$, is always higher than the values of the low-impact bidders $\{L\}^{\otimes n}$, which are supported on
$[0,\, 1 - 2 / e^{2}]$; indeed, $1 - \frac{1}{n} \geq \frac{3}{4} > 1 - 2 / e^{2}$ for $n \geq 4$. Thus the optimal {\textsf{Social Welfare}} ${\sf OPT}(H \otimes \{L\}^{\otimes n})$ always stems from the monopolist $H$:
\[
{\sf OPT}(H \otimes \{L\}^{\otimes n}) ~=~ \E_{v_{H}}\big[v_{H}\big] ~\geq~ 1 - 1 \big/ n ~\geq~ 1 - \epsilon,
\hspace{5.95cm}
\]
where the last step uses $n = \lceil 3 / \epsilon \rceil \geq 1 / \epsilon$ (\Cref{exp:LB}). {\bf \Cref{lem:LB_poa:1}} follows then.
\vspace{.1in}
\noindent
{\bf \Cref{lem:LB_poa:2}.}
Recall that the low-impact bidders $\{L\}^{\otimes n}$ have the common bid-to-value mapping $\varphi_{L}(b)$.
Following \Cref{lem:auction_welfare} (with $\calB(b) = H(b) \cdot L(b)^{n}$),
the auction {\textsf{Social Welfare}} ${\sf FPA}(H \otimes \{L\}^{\otimes n})$ from our $(n + 1)$-bidder instance is given by
\begin{align*}
{\sf FPA}(H \otimes \{L\}^{\otimes n})
& ~=~ \varphi_{H}(0) \cdot \calB(0) + \int_{0}^{\lambda^{*}} \Big(\varphi_{H}(b) \cdot \tfrac{H'(b)}{H(b)} \cdot \calB(0) + n \cdot \varphi_{L}(b) \cdot \tfrac{L'(b)}{L(b)} \cdot \calB(b)\Big) \cdot \d b \\
& ~\leq~ \calB(0) + \int_{0}^{\lambda^{*}} \Big(\tfrac{H'(b)}{H(b)} \cdot \calB(0) + n \cdot \varphi_{L}(b) \cdot \tfrac{L'(b)}{L(b)} \cdot \calB(b)\Big) \cdot \d b \\
& ~=~ \calB(0) + \int_{0}^{\lambda^{*}} \calB'(b) \cdot \d b - \int_{0}^{\lambda^{*}} n \cdot (1 - \varphi_{L}(b)) \cdot \tfrac{L'(b)}{L(b)} \cdot \calB(b) \cdot \d b \\
& ~=~ 1 - \int_{0}^{\lambda^{*}} n \cdot (1-\varphi_{L}(b)) \cdot \tfrac{1}{(n - 1) \cdot (1 - b)} \cdot H(b) \cdot \big(\tfrac{1 - \lambda^{*}}{1 - b}\big)^{\frac{n}{n - 1}} \cdot \d b \\
& ~\leq~ 1 - \tfrac{n}{n - 1} \cdot (1 - \lambda^{*})^{\frac{n}{n - 1}} \cdot \int_{0}^{\lambda^{*}} \frac{(1 - \varphi_{L}(b)) \cdot H(b)}{(1 - b)^2} \cdot \d b \\
& ~=~ 1 - \tfrac{n}{n - 1} \cdot (1 - \lambda^{*})^{\frac{n}{n - 1}} \cdot \int_{1}^{2} \frac{(t \cdot e^{2 - 2t}) \cdot (t^{2} / 4)}{(t^{2} \cdot e^{2 - 2t})^{2}} \cdot \Big(\frac{\d b}{\d t}\Big) \cdot \d t \\
& ~=~ 1 - \tfrac{n}{n - 1} \cdot (1 - \lambda^{*})^{\frac{n}{n - 1}} \cdot \int_{1}^{2} \frac{t - 1}{2} \cdot \d t \\
& ~=~ 1 - \tfrac{n}{n - 1} \cdot (4 / e^2)^{\frac{1}{n - 1}} \cdot e^{-2} \phantom{\bigg.} \\
& ~\leq~ 1 - e^{-2},
\end{align*}
where the last inequality uses the fact $\frac{n}{n - 1} \cdot (4/e^2)^{\frac{1}{n - 1}} = \big((1 + \frac{1}{n - 1})^{n - 1}\cdot 4 / e^2\big)^{\frac{1}{n - 1}} \geq (8 / e^2)^{\frac{1}{n - 1}}> 1$. The definition of $t \in [1,\, 2]$ and the expressions of $b$, $H$, $\varphi_{L}$ and $\frac{\d b}{\d t}$ in terms of $t$ are given in the proof of \Cref{lem:LB_validity}.
{\bf \Cref{lem:LB_poa:2}} follows then.
This finishes the proof.
\end{proof}
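As a numerical sanity check of the last derivation (ours, not part of the proof; the helper name \texttt{fpa\_upper} is ours), one can evaluate the closed-form bound $1 - \frac{n}{n - 1} \cdot (4 / e^{2})^{\frac{1}{n - 1}} \cdot e^{-2}$: it stays below $1 - 1 / e^{2}$ for every $n \geq 4$ and approaches it as $n \to \infty$, while ${\sf OPT} \geq 1 - 1/n \to 1$, so the welfare ratio tends to $1 - 1/e^{2} \approx 0.8647$:

```python
import math

E2 = math.e**2

def fpa_upper(n):
    # Closed-form bound from the last display:
    # FPA <= 1 - (n/(n-1)) * (4/e^2)^{1/(n-1)} * e^{-2}.
    return 1 - (n / (n - 1)) * (4 / E2) ** (1 / (n - 1)) / E2

# The bound never exceeds 1 - 1/e^2 ...
assert all(fpa_upper(n) <= 1 - 1 / E2 for n in range(4, 1000))
# ... and converges to 1 - 1/e^2 as n grows, while OPT >= 1 - 1/n -> 1,
# so the welfare ratio FPA/OPT approaches 1 - 1/e^2 ~ 0.8647.
assert abs(fpa_upper(10**6) - (1 - 1 / E2)) < 1e-5
assert abs((1 - 1 / E2) - 0.8647) < 1e-3

# The closed-form integral used above: int_1^2 (t-1)/2 dt = 1/4 (midpoint rule).
N = 10**5
midpoint = sum(((k + 0.5) / N) / 2 for k in range(N)) / N
assert abs(midpoint - 0.25) < 1e-9
```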
We directly infer the upper-bound part of \Cref{thm:main} (restated below) from \Cref{lem:LB_poa}, since the constant $\epsilon \in (0,\, 1)$ chosen in \Cref{exp:LB} can be arbitrarily small.
\begin{restate}[{\Cref{thm:main}}]
The {\textsf{Price of Anarchy}} in {\textsf{First Price Auctions}} is $\leq 1 - 1 / e^{2} \approx 0.8647$.
\end{restate}
\subsection*{Overview}
In this paper we study deformation spaces of marked metric graphs of groups.
Since its first appearance on the scene (\cite{cv}), the celebrated Culler-Vogtmann Outer Space
became a classical subject of research. It turned out to be a very useful tool for
understanding properties of automorphisms of free groups (see for instance~\cite{BestvinaHandel,MR748994, MR721773,MR879856,hatcher,MR1396778,MR2395795}).
A typical object in the Outer Space
of $F_n$ is a marked graph with fundamental group of rank $n$, and locally Euclidean
coordinates are defined by turning graphs into metric graph by an assignment of positive
edge-lengths. Outer Space is not compact and there are basically two ways of going to infinity:
making the marking diverge or collapsing a collection of sub-graphs of a given element $X$ of
Outer Space. The second operation has a local flavour and it is similar to the operation of
pinching a curve of a surface. Attaching these ``collapsed'' points leads one to define the simplicial bordification of the deformation space. If one starts with Culler-Vogtmann space, the result is the free splitting complex, which is related to the free factor complex (see for instance~\cite{BF1,BR,BG,handelmosher,HMII,HH,KR}).
Collapsing comes naturally into play when one is analysing a reducible automorphism of $F_n$ induced
by a simplicial map $f:X\to X$ which exhibits an invariant collection of sub-graphs of $X$.
Once collapsed, $X$ is turned into a graph of groups corresponding to a free splitting
of $F_n$. On the other hand, the collapsed part is not necessarily connected. This phenomenon has
led researchers to investigate more general deformation spaces, namely deformation spaces of
(not necessarily connected) graphs of groups, possibly with marked points or ``hairs'' (see for
instance~\cite{GuirardelLevitt,FM13,FM18I,FM18II,MR3342683,CKV}).
One of the main tools used to study the action of automorphisms on deformation spaces is
the theory of Stallings folds (\cite{Sta}) and the so-called Lipschitz metric
(\cite{FM11,FM12,MR3342683}). In particular, given an automorphism $\phi$, one can study the
displacement function $\lambda_\phi$ defined as $\lambda_\phi(X)=\Lambda(X,\phi X)$ (here
$\Lambda$ denotes the maximal stretching factor from $X$ to $\phi X$, whose logarithm is the
asymmetric Lipschitz metric). Of particular interest is the set $\operatorname{Min}(\phi)$ of minimally
displaced points. When $\phi$ is irreducible, this coincides with the set of points supporting train-track maps
(\cite{FM18I}) and its structure is particularly useful, for example, in building algorithms for
decision problems. It is used in~\cite{FM18II} for a metric approach to the conjugacy
problem for irreducible automorphisms of free groups (solved originally in~\cite{MR1396778})
and the reducibility problem for automorphisms of free groups (solved originally in~\cite{K1,K2}).
\subsection*{Main results of the paper}
If one is interested in effective procedures, one of the main problems is that general
deformation spaces have a simplicial structure that is not locally finite. So if one starts
from a simplex and wishes to enumerate neighbouring simplices, there is no chance to make this
procedure effective.
\medskip
In this paper we prove that the minset $\operatorname{Min}(\phi)$ for irreducible automorphisms of exponential
growth is locally finite; namely given a simplex intersecting $\operatorname{Min}(\phi)$,
one can give a finite list of its neighbours so that any simplex {\em not} in that list does not
intersect $\operatorname{Min}(\phi)$. This is the content of our Theorem~\ref{locfin}. Moreover, it is also uniformly locally finite, Corollary~\ref{unif}.
\begin{thm*}[Theorems~\ref{locfin} and \ref{unif}]
Let $G$ be a group equipped with a free splitting, $\G$. Let $\phi$ be an automorphism of $G$ which preserves the splitting and is irreducible with $\lambda(\phi) > 1$. Then $\operatorname{Min}(\phi)$ - also seen as the points which support train track maps for $\phi$ - is uniformly locally finite both as a subset of the deformation space $\O(\G)$ and its volume 1 subspace, $\O_1(\G)$.
\end{thm*}
\begin{rem}
We note that the number $\lambda (\phi)$ in the Theorem above is the (minimal) displacement of $\phi$ relative to the splitting $\G$.
For instance, if one takes a relative train track representative for an automorphism of $F_n$, then one gets a free splitting of $G=F_n$ by taking the largest invariant subgraph (the union of all the strata except the top one). The resulting automorphism is irreducible in the corresponding relative space, and the number $\lambda(\phi)$ is the Perron-Frobenius eigenvalue of the top stratum.
\end{rem}
\medskip
The application we have in mind for this kind of result is an effective study of the minset for a
reducible automorphism.
In general this minset is empty, but starting with a reducible automorphism $\phi$ of a free group, one can collapse an invariant free factor to obtain a new deformation space on which the automorphism acts. If $\phi$ is relatively irreducible in that space, then its minset is locally finite. Otherwise, one can keep collapsing free factors until it is relatively irreducible.
It is easy to see that the minset for a reducible automorphism is not locally finite in
general. However, for any automorphism and any simplex with a given displacement, there are only
finitely many possible simple folds which produce simplices of strictly smaller displacement - Corollary~\ref{FoldingCandidateRegular1}.
The idea behind Theorem~\ref{locfin} is the following. For a minimally displaced point $X$, it is
known that folding an illegal turn of an optimal map $f:X\to X$ representing $\phi$ produces a
path in $\operatorname{Min}(\phi)$, called folding path. (See for instance~\cite{FM13,FM18I}). But it is also
clear that there are legal turns that can be folded without exiting $\operatorname{Min}(\phi)$, for instance,
this may happen at illegal turns for $\phi^{-1}$. The strategy is to understand which legal turns can be folded, and we are able to produce a finite list such that if a turn
$\tau$ is not in that list, then by folding $\tau$ one exits the minimally displaced set. In our terminology, folding a {\em critical} turn could allow one to remain in the minset, whereas folding a {\em regular} turn forces one to leave it. One then understands arbitrary neighbouring simplices, by looking at which of them can be reached by a (uniformly bounded) number of critical folds - these are the only ones that may be minimally displaced. However, one complication is that it is possible that a critical fold could {\em increase} the displacement, and a subsequent critical fold decrease it so that one re-enters the minimally displaced set. Nevertheless, our result produces a finite list containing all neighbouring simplices that are minimally displaced.
\begin{rem}
We have written the paper for deformation spaces of free splittings of $G$, namely connected
graphs of groups with trivial edge-groups. However, every result
of the paper remains true for deformation spaces of non-connected graphs of groups, as
developed for instance in~\cite{FM18I,FM18II}. This is because connectedness plays no role in our proofs. (In those papers, non-connectedness was crucial since the main argument was an inductive one.)
Nonetheless, we decided to stick to the connected case for the benefit of the reader.
\end{rem}
\subsection*{Structure of the paper}
We have decided to write the paper in reverse order; we start immediately with the core of
the paper, postponing the section of general definitions to the end. This is because the
definitions and terminology we use are quite standard, and a reader familiar with the subject can
start reading directly.
\section{Preliminaries}
We recall some notation here; we refer to Section~\ref{definitions} for more details.
\begin{conv} Deformation spaces - here, of free splittings, even though the concept exists more generally - can be viewed either as spaces of trees or as spaces of graphs. We adopt
here the graphs-viewpoint, but one can easily pass
from one viewpoint to the other by taking universal covers and $G$-quotients (see below for more details).
Throughout the whole paper, $\G$ will denote a fixed free splitting of the group $G$.
That is, we write $G= G_1* \ldots * G_k*F_n$, but this need not be the Grushko decomposition of $G$. In fact, in the examples we have in mind, $G$ is a free group, and the free factors $G_i$ correspond to a collection of invariant free factors under some automorphism of $G$.
$\O(\G)$ will denote the deformation space of the free splitting $\G$ of the group $G$.
The typical object $X\in \O(\G)$ is
therefore a marked metric graph of groups, with trivial edge groups,
and whose valence one or two vertices have non-trivial vertex group. (One can also think of $X$ as a $G$-tree with trivial edge stabilisers, where the vertex stabilisers are precisely the conjugates of the $G_i$. Elements of $G$ which fix some vertex are called elliptic, and the others are hyperbolic.) Note that every $X\in \O(\G)$ has the same elliptic elements (and this characterises the points in the space).
For a vertex $v\in X$ we denote by $G_v$ its vertex group. If $G_v$ is trivial, then $v$ is said to be {\em free}.
These spaces - $ \O(\G)$ - naturally occur in the bordification of
classical Culler-Vogtmann Outer Space, on collapsing invariant subgraphs.
We denote by $\operatorname{Aut}(\G)$ the group of automorphisms of $G$ which preserve the splitting; that is, each $G_i$ in the splitting is sent to a conjugate of another (possibly the same) $G_j$. Equivalently, $\operatorname{Aut}(\G)$ is the group of automorphisms of $G$ which preserve the elliptic elements. Similarly, $\operatorname{Out}(\G) = \operatorname{Aut}(\G)/ \operatorname{Inn}(G)$.
\end{conv}
\subsection{Graph of Groups and $G$-trees}
We recall some basic notions of Bass-Serre theory. The main references for this section are \cite{Bass} and \cite{Serre}.
Given a graph $\Gamma$, we denote by $V(\Gamma)$ the set of vertices of $\Gamma$ and by $E(\Gamma)$ the set of (oriented) edges of $\Gamma$. If $e$ is an (oriented) edge, we denote by $\iota(e)$ the initial vertex of $e$ and by $\tau(e)$ the terminal vertex of $e$.
\begin{defn}[Graph of Groups]
A graph of groups $X$ consists of a connected graph $\Gamma$ together with groups $G_v$ for every vertex $v \in V(\Gamma)$ and edge groups $G_e = G_{\bar{e}}$, and monomorphisms $\alpha_e : G_e \to G_{\tau(e)} $ for every (oriented) edge $e \in E(\Gamma)$.
\end{defn}
In this paper, we work with free products of groups, which, by Bass-Serre theory, arise as the fundamental groups of graphs of groups with trivial edge groups. For the remainder of the section, we suppose that all graphs of groups have trivial edge groups. In this case, we can view a graph of groups $X$ as a pair consisting of a graph $\Gamma$ and a collection of groups $G_v$, one for each vertex $v$ of $\Gamma$; we simply write $X = (\Gamma,\{G_v\} _{v \in V(\Gamma)})$. We refer to $\Gamma$ as the topological space (or the graph) associated to $X$.
Let $X = (\Gamma,\{G_v\} _{v \in V(\Gamma)})$ be a graph of groups.
An \textit{edge-path} $P$ (of combinatorial length $k \geq 0$) in $X$ is a sequence of the form $ (g_1,e_1,g_2,e_2,\dots,g_k,e_k,g_{k+1})$ where the $e_i$'s are oriented edges of $\Gamma$ such that the terminal point of each $e_i$ is the
initial point of $e_{i+1}$, and each $g_i$ is a group element from the vertex group based at the initial point of $e_i$ (with $g_{k+1}$ taken from the vertex group at the terminal point of $e_k$); in this case, we simply write $P = g_1e_1\dots g_k e_k g_{k+1}$. In the present work, we also use ``paths'' which are not edge-paths, as their endpoints may not be vertices. The previous definition is extended from edge-paths to \textit{paths} by allowing the first and the last segments $e_1,e_k$ of $P$ to be partial edges (note that if $\iota(e_1)$ is not a vertex, then we set $g_1 = 1$ and, similarly, if $\tau(e_k)$ is not a vertex then we set $g_{k+1} = 1$).
A path is called a \textit{loop} if the endpoint of its last edge coincides with the initial point of its first edge.
We say that a path $P$, as above, is \textit{reduced} if, whenever $e_i = \bar{e}_{i+1}$ for some $i=1,\dots,k-1$, the intervening group element $g_{i+1}$ is non-trivial. Furthermore, we say that a loop $P = g_1 e_1 g_2 e_2 \dots g_k e_k g_{k+1}$ is \textit{cyclically reduced} if it is reduced and, whenever $e_k = \bar{e}_1$, we have $g_{k+1} g_1 \neq 1$.
We can always represent the conjugacy class of a loop in the form $g_1 e_1 g_2 e_2 \dots g_ke_k$; in this case, being cyclically reduced means that whenever $e_i = \bar{e}_{i+1}$ (subscripts taken modulo $k$), the intervening group element $g_{i+1}$ is non-trivial.
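For instance, if $e$ is an edge from a vertex $v$ to a vertex $w$, $g_1\in G_v$ and $g_2\in G_w$, then the loop $g_1 e g_2 \bar e$ based at $v$ is reduced precisely when the group element $g_2$ separating $e$ from $\bar e$ is non-trivial, and it is cyclically reduced when, in addition, $g_1$ is non-trivial.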
We say that two paths are {\em equivalent} if one can be obtained from the other by a sequence of insertions or deletions of inverse-pairs. We then denote by $\pi(X)$, the set of equivalence classes of edge paths.
Then $\pi(X)$ has the structure of a groupoid, whose operation is concatenation of paths. Note that $\pi(X)$ is not a group in general: for $p,q \in \pi(X)$, the concatenation $p*q = pq$ is defined exactly when the terminal vertex of $p$ is the initial vertex of $q$. In our setting, every path $p$ is equal (in $\pi(X)$) to a unique reduced path (with the same endpoints), which we denote by $[p]$.
(Alternatively, we can take $\pi(X)$ to be the set of reduced edge-paths, under the operation of concatenation followed by reduction.)
If $a,b$ are two points of $\Gamma$ (not necessarily vertices), we can define the set $P[a,b]$ of paths in $X$ from $a$ to $b$. If $a,b$ are vertices, then $P[a,b]$ can be seen as a subset of $\pi(X)$. In the special case where $a=b$, the set $P[a,a]$ of loops based at $a$ forms a subgroup of $\pi(X)$ (as any two loops based at $a$ can be concatenated, yielding a loop based at $a$).
\begin{defn}[Fundamental group]
Let $X = (\Gamma,\{G_v\} _{v \in V(\Gamma)})$ be a graph of groups. If $u$ is a vertex of $\Gamma$, then the fundamental group $\pi_1(X,u) = P[u,u]$ of $X$ is the set of loops in $X$ based at $u$ (as elements of $\pi(X)$). In our case, $$\pi_1(X,u) = (\ast_{v \in V(\Gamma)} G_v) \ast \pi_1 (\Gamma,u).$$
\end{defn}
Note that the isomorphism type of $\pi_1(X,u)$ does not depend on the choice of $u$. If $u,w$ are vertices of $\Gamma$, then $\pi_1(X,u), \pi_1(X,w)$ are conjugate in $\pi(X)$ (and we simply write $\pi_1(X)$, when we ignore the base point).
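To illustrate, if $\Gamma$ consists of two vertices $u,w$ joined by a single edge, with vertex groups $A$ and $B$, then $\pi_1(\Gamma,u)$ is trivial and $\pi_1(X,u)\cong A\ast B$; if instead $\Gamma$ is a single vertex $u$ carrying one edge-loop, with vertex group $A$, then $\pi_1(\Gamma,u)\cong\mathbb{Z}$ and $\pi_1(X,u)\cong A\ast\mathbb{Z}$.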
We can now define the universal cover (or Bass-Serre tree) of a graph of groups.
\begin{defn}[Universal Cover]
Let $X = (\Gamma,\{G_v\} _{v \in V(\Gamma)})$ be a graph of groups. If $v$ is a vertex of $\Gamma$, then we define the tree $T = \widetilde{(X,v)} = \widetilde{X}$, as follows:
\begin{itemize}
\item The set of vertices $V(T)$ consists exactly of the ``cosets'' $pG_w$, $w \in V(\Gamma)$, where $p$ is a reduced edge-path from $v$ to $w$.
\item There is an (oriented) edge from $p_1G_{w_1}$ to $p_2G_{w_2}$ if and only if there are elements $g_i \in G_{w_i}$, $i=1,2$, and an edge $e \in E(\Gamma)$, so that $p_2 = p_1g_{1}eg_2$.
\end{itemize}
\end{defn}
\begin{rem}
\begin{enumerate}[{i)}] \
\item As every element of $G =\pi_1(X,v)$ is a loop based at $v$, there is a natural simplicial (left) action of the fundamental group on the universal cover $T$.
\item If $w$ is a vertex of $\Gamma$ and $p$ is a reduced edge-path from $v$ to $w$, then the stabiliser of the corresponding vertex of $T$ is $Stab_T(pG_w) = pG_wp^{-1}$.
\item The stabilisers of edges are, by construction, trivial.
\item One can define arbitrary points of $T$ in exactly the same way, by allowing the paths $p$ to be general paths (not just edge-paths) from $v$ to $w$, with the convention that $G_w$ is trivial if $w$ is not a vertex.
\end{enumerate}
\end{rem}
Given a graph of groups with trivial edge groups, we described the action of its fundamental group on a tree with trivial edge stabilisers (and vertex stabilisers which coincide with the set of conjugacy classes of the vertex groups).
Conversely, if a group $G$ acts on a tree with trivial edge stabilisers, we can construct a graph of groups with fundamental group $G$.
\begin{defn}[Quotient graph of groups]
Suppose that $G$ acts on a tree $T$ with trivial edge stabilisers. Then we can define the quotient graph of groups $X$ by taking the quotient graph $\Gamma = T/G$, with vertex groups the stabilisers of representatives of the vertex orbits (and trivial edge groups). In this case, $\pi_1(X)$ is (isomorphic to) $G$.
\end{defn}
\begin{rem}
By Bass-Serre theory, we know that these two constructions (graph of groups and universal cover) are equivalent, and we can pass from the level of the tree to the level of the graph of groups and vice versa. (We have made some choices of fundamental domains, but these are inessential, as they always produce isomorphic structures.)
\end{rem}
Now we restrict to graphs of groups in the same relative outer space, as in our Convention. More specifically, we assume that our graphs of groups are $\G$-graphs of groups (see Definition \ref{G-graphs} for more details), for the fixed splitting $\G$ of our group $G$.
As in our description of $\G$-trees (see Section~\ref{definitions}), every $\G$-graph, as a point (simplex) of $\O(\G)$, is equipped with a marking.
\begin{defn}
A {\em $\G$-graph} is dual to a $\G$-tree. Namely, it is a finite connected graph of groups $X$, along with an isomorphism $\Psi_X: G \to \pi_1(X)$ - a marking - such that:
\begin{itemize}
\item $X$ has trivial edge-groups;
\item the fundamental group of $X$ as a topological space is $F_n$;
\item the splitting given by the vertex groups is equivalent, via $\Psi_X$, to $\G$. That is, $\Psi_X$ restricts to a bijection from the conjugacy classes of the $G_i$ to the vertex groups of $X$.
\end{itemize}
\end{defn}
The following definition is not the definition of the morphism that is given by Bass in \cite{Bass}, as he requires the map to be a graph morphism (sending edges to edges and vertices to vertices). However, in our case, his method can be easily adjusted for the more general maps that we define.
\begin{defn}[Maps Between Graphs of Groups]
Let $X = (\Gamma, \{G_v\}), Y = (\Gamma ', \{H_w\})$ be two (marked) $\G$-graphs.
A map $F $ between $X,Y$ consists of:
\begin{enumerate}
\item Two maps $f_V,f_E$:
\begin{enumerate}
\item $f_V : V(\Gamma) \to \Gamma'$ and
\item For each edge $e$ with $\iota(e) = v, \tau(e) = v' $, $f_E(e)$ is a path from $f_V(v)$ to $f_V(v')$.
\end{enumerate}
\item Isomorphisms between the vertex groups:
$\phi_v : G_v \to H_{f_V(v)}$ (by abusing the notation, we set $H_{f_V(v)} = 1$, if $f_V(v)$ is not a vertex).
\end{enumerate}
\end{defn}
Note that, after subdividing some edges of the co-domain, we can suppose that our maps are simplicial. For any two $\G$-graphs $X,Y$, a map $F: X \to Y$ induces a (natural) homomorphism $\Phi_F: \pi(X) \to \pi(Y)$ between the sets of paths, which restricts to a homomorphism $\Phi_F: \pi_1(X) \to \pi_1(Y)$ between the fundamental groups (well defined up to composition with inner automorphisms). The maps that we define in the following definition play the role, at the level of $\G$-graphs, of $G$-equivariant maps between $\G$-trees.
\begin{defn}
We say that a map $F: X \to Y$ is a $\G$-map, if the induced homomorphism $\Phi_F$ is the ``change of markings'', i.e. $\Phi_F \Psi_X = \Psi_Y $ (up to composition with inner automorphisms).
\end{defn}
\begin{rem}
\begin{enumerate}[i)] \
\item By Bass-Serre theory, it is clear that the notion of $\G$-maps at the level of marked $\G$-graphs is equivalent to the notion of $G$-equivariant ``straight'' maps at the level of $\G$-trees - that is, equivariant, surjective, continuous maps that are determined by the images of vertices by extending linearly over edges. In fact, we can move from one to the other by simply considering lifts and projections, using Bass-Serre theory (even if the lifts and the projections are not unique, as they depend on the choice of a fundamental domain, all of them are equivalent).
\item Note that we cannot talk about continuity between graphs of groups, but we could alternatively consider the equivalent notion of graphs of spaces (we simply replace each vertex group $G_i$ with a topological space $\Gamma_i$ such that $\pi_1(\Gamma_i) = G_i$). In that case, the maps between two graphs of spaces would be continuous.
\end{enumerate}
\end{rem}
\subsection{Basic Notation}
\begin{nt} We will use the following standard notation:
\begin{itemize}
\item $\O_1(\G)$ denotes the volume-one subspace of $\O(\G)$.
\item $\Delta$ denotes an open simplex of $\O(\G)$.
\item $\Delta_X$ denotes the simplex with underlying graph of groups $X$.
\item $\bar e$ denotes the inverse of the oriented edge $e$. The same notation applies to paths.
\item $\gamma\cdot\eta$ denotes concatenation of paths.
\item $L_X(\gamma)$ denotes the reduced length in $X$ of a loop $\gamma$, if $X$ is seen as a
$G$-graph.
\item Folding a turn $\{a,b\}$ by an amount $t$ means identifying initial segments of $a$ and
$b$ of length $t$. This is always well-defined for small enough $t$.
\item Given an automorphism $\phi \in \operatorname{Out}(\G)$, $\lambda_\phi:\O(\G)\to\R$ denotes the displacement
function $\lambda_\phi(X)=\Lambda(X,\phi X)$ (this is well defined as the inner automorphisms act trivially). For a simplex $\Delta$ we set
$\lambda_\phi(\Delta)=\inf_{X\in\Delta}\lambda_\phi(X)$; we set
$\lambda(\phi)=\inf_{X \in \O(\G)}\lambda_\phi(X)$.
\item An $\O$-map between elements of $\O(\G)$ is a map that realises the difference of
markings. A straight map between elements of $\O(\G)$ is an $\O$-map with constant speed on
edges. (See the definitions section on page~\pageref{definitions}).
\end{itemize}
\end{nt}
Since we decided to adopt the graphs-viewpoint, some words of explanation are needed about turns.
A turn at a non-free vertex $v$ of $X$
is given by the equivalence class of unoriented pair $\{g_1e_1,g_2e_2\}$, where $e_1,e_2$ are
(germs of) oriented edges with the same initial vertex, $v$; $g_1,g_2$ are elements in the vertex-group $G_v$, and the equivalence
relation is given by the diagonal action of $G_v$. If we think of $X$ as a $G$-tree, a turn at a non-free vertex $v$ is a $G_{v}$-orbit of an unoriented pair of edges $\{g_1e_1,g_2e_2\}$, where $g_1,g_2 \in G_{v}$ and $e_1, e_2$ are oriented edges of $X$ emanating from $v$. In other words, the projection of a turn in a tree is a turn in the quotient graph of groups, and, conversely, the lift of a turn in the graph of groups is a turn in the universal cover.
In any case, we denote the turn given by the class of
$\{a,b \}$ simply by $[a,b]$. (Note that $[a,b]=[b,a]=[ga,gb]$.)
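For instance, if $v$ is a free vertex, a turn at $v$ is just an unordered pair of distinct germs of oriented edges at $v$. If instead $G_v=\langle g \rangle$ is infinite cyclic and $e_1,e_2$ are distinct oriented edges emanating from $v$, then $[e_1,e_2]$ and $[e_1,ge_2]$ are distinct turns at $v$, whereas $[ge_1,ge_2]=[e_1,e_2]$.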
For the convenience of the reader, in this section we give the definitions for both points of view, thinking of $X$ both as a $\G$-graph of groups and as a $\G$-tree. More specifically, everything in this section is written from the graphs-viewpoint, and we translate into the language of trees where needed. In the rest of the paper, we use the two notions interchangeably, as everything can easily be translated between the two points of view.
\begin{defn}
Let $X\in\O(\G)$. A turn $[x,x]$ is said to be {\bf trivial}.
A turn $\tau=[a,gb]$ at a vertex $v$ of $X$ is called {\bf degenerate}, if $a=b$ (resp., if $a,b$ are in the same $G_v$-orbit); it is called non-degenerate otherwise. If $e$ is an edge starting {\em and} ending at $v$ (resp., if both endpoints of $e$ are in the same $G$-orbit, as $v$), it determines a non-degenerate turn at $v$.
\end{defn}
\begin{defn}
Given a straight map $f: X \to Y$, at the graph level, we say that $f$ maps the turn $[a,gb]$ to the turn $[c,hd]$, if the initial paths of (combinatorial) length $1$ in the graph of groups (resp., if the initial edge or germ of edge), of $f(a)$ and $f(gb)$ are $c$ and $hd$ (in some order).
We will sometimes abuse notation and say that $f$ maps the turn $[a,gb]$ to the turn $[f(a), f(gb)]$, even though we really mean this to be the initial sub-paths of combinatorial length $1$ of the (in general) paths given.
We say that $[a,gb]$ is $f$-legal, if $f$ maps $[a,gb]$ to a non-trivial turn. If $f$ maps either of $a$ or $b$ to a vertex, then we say the turn is illegal.
If, moreover, $X=Y$, then we say that $[a,gb]$ is
$\langle\sim_{f^k}\rangle$-legal if $f^k$ maps $[a,gb]$ to a non-trivial turn for all integers $k \geq 1$.
\end{defn}
\begin{lem}
Let $X\in\O(\G)$ and $\tau=[a,gb]$ be a non-degenerate turn at a vertex $v$. Then
(equivariantly) folding $a$ and $gb$ gives a new element in $\O(\G)$.
\end{lem}
\begin{proof}
The proof is straightforward and left to the reader.
\end{proof}
\begin{rem}
Note that if $e$ is an edge emanating from $v$, a non-free vertex of $X\in\O(\G)$, and if $g\in
G_v$ is such that $\langle g\rangle\neq G_v$, then by folding the degenerate turn $[e,ge]$ we obtain a graph of groups with non-trivial
edge groups (resp., a tree with non-trivial edge stabilisers). Namely, the new edge $e$ emanating from $v$ has edge group (resp., stabiliser) $\langle g\rangle$.
Therefore, in practice, a non-degenerate turn is the same as a ``foldable'' turn.
\end{rem}
\begin{rem}
\label{actions}
Suppose that $X \in\O(\G)$ and $f:X\to X$ is a straight $\O$-map. By this we mean that there are two $G$ markings on $X$ (resp., $G$-actions), and $f$ is $\G$-map (resp., $G$-equivariant) with respect to the two different markings (resp., actions) (both of which lie in the same deformation space, and hence have the same elliptic elements).
We will describe the change of action in more detail (at graph level, we have a very similar description, by changing the marking, instead of changing the action).
Let us give these actions names; we will denote the first action by $\cdot$ and the second action by $\star$. Then there is an element $\phi \in \operatorname{Aut}(\G)$ such that
$$
f(g \cdot x) = g \star f(x) = \phi(g) \cdot f(x).
$$
Then, on iterating $f$, we get,
$$
f^r(g \cdot x)= \phi^r(g) \cdot f^r(x).
$$
\end{rem}
\medskip
Note that if $e$ is an edge emanating from a vertex $v$, then for every $\G$-map $f$ (resp., $G$-equivariant map),
the degenerate turn $[e,ge]$ is $f$-legal for any non-trivial $g\in G_v$, as long as $f$ does not map $e$ to a vertex.
\begin{lem}\label{L0}
Let $X,Y\in\O(\G)$ and $f:X\to Y$ be a straight $\O$-map. Let $v$ be a vertex of $X$ and $G_v$ its
stabiliser. Let $\tau=[a,gb]$ be a turn at $v$, such that neither $f(a)$ nor $f(b)$ is a single vertex. If $\tau$ is $f$-illegal, then
for any $g'\neq g\in G_v$ the turn $[a,g'b]$ is $f$-legal.
If $X=Y$ and $\tau$ is $\langle\sim_{f^k}\rangle$-illegal, then
for any $g'\neq g\in G_v$ the turn $[a,g'b]$ is $\langle\sim_{f^k}\rangle$-legal.
\end{lem}
\begin{proof} Let's prove the second claim first.
Since $\tau$ is $\langle\sim_{f^k}\rangle$-illegal, then there is some power $r\geq1$ so that $f^r(a)$
and $\phi^r(g) f^r(b)$ are the same germ - we are using the automorphism $\phi$ as in Remark~\ref{actions}. It follows that $[f^r(a), \phi^r(g') f^r(b)]$ is degenerate and
legal. Since $f$ is an $\O$-map, it follows that $f^{r+l}(\tau)=[f^{r+l}(a), \phi^{r+l}(g')f^{r+l}(b)]$ is degenerate
and legal for any $l\geq 0$. Since $f$-images of illegal turns are $f^n$-illegal for any
$n$, then $[f^m(a), \phi^m(g')f^m(b)]$ is legal also for $m\leq r$.
The first claim now follows by exactly the same argument with $r=1$ and $l=0$.
\end{proof}
\begin{defn}
Let $X\in\O(\G)$ and let $\tau$ be a turn of $X$. For any loop $\gamma$ in $X$ (resp., a path with endpoints which lie in the same $G$-orbit)
we denote by $$\#(\gamma,\tau)$$ the number of times that the {\bf cyclically reduced} representative of
$\gamma$ crosses $\tau$. We recall that $\tau$ is not an oriented object, so we do
not take in account crossing directions.
\end{defn}
The following lemma is almost tautological, but important for our purposes.
\begin{lem}\label{LCR}
Let $X,Y\in \O(\G)$ and let $f:X\to Y$ be any straight map. If $\gamma$ is an $f$-legal path in $X$,
then it is reduced.
\end{lem}
\begin{proof}
If $\gamma$ is not reduced, then it contains a sequence $\bar ee$, hence a turn of the kind
$[x,x]$. That turn cannot be $f$-legal.
\end{proof}
\begin{defn}
Let $X\in\O(\G)$, and $\tau$ be a turn of $X$. We say that $\tau$ is {\bf (non-) free} if it is
based at a (non-) free vertex. We say that $\tau$ is {\bf infinite non-free} if it is based at a
vertex with infinite vertex group (resp., infinite stabiliser). We say that $\tau$ is {\bf finite non-free} if it is non-free
and based at a vertex with finite vertex group (resp., finite stabiliser).
Given an invariant sub-graph $Y\subseteq X$ (resp., $G$- sub-forest), we say that
$\tau$ is in
$Y$ if both germs of edges of $\tau$ belong to $Y$.
\end{defn}
\section{Unfolding projections and local surgeries on paths}\label{s3}
Suppose $\Delta $ is a simplex in $\O(\G)$, with underlying graph of groups $X$.
Let $\tau$ be a non-degenerate turn in $X$. We denote by $\Delta_\tau$ the simplex obtained by
(equivariantly) folding $\tau$. If $\tau$ is free and trivalent then $\Delta_\tau$ trivially
equals $\Delta$. Otherwise, $\Delta$ is a codimension-one face of
$\Delta_\tau$. In the latter case there is a natural
projection $\Delta_\tau\to\Delta$ corresponding to the collapse of the newly created
edge. Rather, we will use the {\bf unfolding projection}, which is defined as follows.
Given $Y\in\Delta_\tau$, we will define lengths of edges of $X$ so that
isometrically folding $\tau$ eventually produces $Y$.
Let $e_1,e_2$ be the edges defining $\tau$ (possibly $e_1=e_2$ if $\tau$ arises at an
edge-loop) and let $e$ be the extra edge added in $\Delta_\tau$ after folding $\tau$.
Firstly, every edge of $X$ different from $e_1,e_2$ will have the same length as its length in
$Y$. Then, for $i=1,2$, we set the length of $e_i$ to be $L_{Y}(e_i)+L_{Y}(e)$ if $e_1\neq e_2$,
and $L_{Y}(e_i)+2L_{Y}(e)$ if $e_1=e_2$.
We denote the resulting metric graph, which is an element of $\Delta$, by $\operatorname{unf}_\tau(Y)$
and we say that it is obtained by {\bf unfolding $\tau$}. The map $$\operatorname{unf}_\tau:\Delta_\tau\to\Delta$$
is our unfolding projection.
\begin{lem}\label{unf}
The map $\operatorname{unf}_\tau$ is surjective. Moreover, folding $\tau$ by an amount
of $L_{Y}(e)$ produces a simplicial segment from $\operatorname{unf}_\tau(Y)$ to $Y$.
\end{lem}
\begin{proof}
The proof immediately follows from the construction.
\end{proof}
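For illustration, suppose $e_1\neq e_2$ and that $Y\in\Delta_\tau$ assigns lengths $L_Y(e_1)=a$, $L_Y(e_2)=b$ and $L_Y(e)=c$. Then $\operatorname{unf}_\tau(Y)$ assigns lengths $a+c$ and $b+c$ to $e_1$ and $e_2$; folding $\tau$ by the amount $c$ identifies the initial segments of length $c$ of $e_1$ and $e_2$ into a single new edge of length $c$, leaving terminal segments of lengths $a$ and $b$, and hence recovers $Y$. In the edge-loop case $e_1=e_2$, both identified segments are carried by the same edge, which is why its length must be increased by $2c$ instead.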
\medskip
We describe now local surgeries on paths. As above, let $\Delta$ be a simplex of $\O(\G)$
with underlying graph $X$. Since we adopt the graphs-viewpoint, we may view $G$ as the fundamental group of the graph of groups given by $X$.
If $ g_1 e_1 g_2 e_2 \dots g_k e_k$ is a loop in the graph of groups, then it crosses $k$ turns (including multiplicity); each sub-path of the form $e_i g_{i+1} e_{i+1}$ determines a turn, $[\bar{e}_i,g_{i+1}e_{i+1}]$, where the indices are taken modulo $k$. (The specific metric on $X$ is not relevant for this discussion, merely the fact that we have a way of representing elements/conjugacy classes as loops in the underlying graph of groups for $X$.)
Thus a path (or loop) is reduced (cyclically reduced) if the turns it crosses (cyclically crosses) are all non-trivial.
With this description, we may modify any given path by replacing one of the $g_i$ with some
other element $g$ in the same vertex group. The turns crossed by this new path are exactly the
same as the original, except for one turn $\tau=[\bar{e}_{i-1}, g_ie_i]$ which is replaced with
$[\bar{e}_{i-1}, ge_i]$. We denote the modified turn and loop respectively $$\tau_g\qquad\text{and}\qquad \gamma_{\tau,g}.$$
Putting everything in formulas we have:
\begin{lem}[Turn-surgery of paths]\label{Surgery}
Let $\Delta$ be a simplex of $\O(\G)$, $\gamma=g_1e_1\dots g_ke_k$ a cyclically reduced loop
realised in
the underlying graph of groups and $\tau=[\bar{e}_{i-1},g_ie_i]$ be a turn crossed by
$\gamma$. Let
$v$ be the initial vertex of $e_i$. Then, if $\tau$ is non-degenerate, for any $g\neq g_i\in
G_v$ the loop $\gamma_{\tau,g}$ is cyclically reduced and satisfies
$$\#(\gamma_{\tau,g},\tau')=
\left\{\begin{array}{ll}
\#(\gamma,\tau') & \text{if }\tau'\neq\tau,\tau_g\\
\#(\gamma,\tau')-1 & \text{if }\tau'=\tau\\
\#(\gamma,\tau')+1 & \text{if }\tau'=\tau_g
\end{array}\right.$$
Moreover, if $\tau$ is degenerate (hence $e_{i-1}=\bar e_i$), then the same is true if in
addition we choose $g\neq id$.
\end{lem}
\begin{proof}
Since $\gamma$ is cyclically reduced and $\tau$ is not degenerate, then $\gamma_{\tau,g}$ is
reduced. The same holds true if $\tau$ is degenerate and $g\neq id$. The claim now easily
follows by counting the number of times that a turn appears along $\gamma_{\tau,g}$.
\end{proof}
We introduce also a second surgery on paths. Let
$\gamma=g_1e_1\dots g_ke_k$ denote a loop as above. Let $e=e_i$ be an oriented edge crossed by $\gamma$ at least twice and let $j$ be the
next index so that $e_j=e$. We can therefore form the loop $g_je_i\dots g_{j-1}e_{j-1}$ (note that the formed loop starts with the group element $g_j$ instead of $g_i$; in this case any turn crossed by this new loop was already a turn crossed by $\gamma$).
We refer to such procedure as {\em edge-surgery}, and denote the resulting loop by $$\gamma_e.$$
Note that every turn (cyclically) crossed by $\gamma_e$ is also a turn crossed by $\gamma$, so if $\gamma$ is cyclically reduced, then
$\gamma_e$ is cyclically reduced as well.
By construction, $\gamma_e$ crosses the oriented edge $e$ only once. Still, it may cross $\bar
e$ and other edges multiple times.
\begin{lem}[Edge-reduction of loops]\label{Surgery3}
Let $\Delta$ be a simplex of $\O(\G)$, $\gamma=g_1e_1\dots g_ke_k$ a cyclically reduced loop realised in
the underlying graph of groups. Then for every
$e_i$ there is a cyclically reduced loop $\gamma'$, obtained by recursive edge-surgeries on $\gamma$, such that
first, $\gamma'$ crosses $e_i$, and second,
$\gamma'$ crosses every oriented edge at most once. (Possibly $\gamma'=\gamma$ if $\gamma$ had those properties).
\end{lem}
\begin{proof}
For a loop $\eta$, set $n(\eta)$ to be the total number of repetitions (counted with
multiplicity) of oriented edges. So $\eta$ crosses any oriented edge at most once if and
only if $n(\eta)=0$. If $n(\gamma)>0$, then there is $g_je_j\dots g_ie_i\dots g_le_l$
a sub-path of $\gamma$ containing $e_i$ and so that $e_j=e_l$
(indices are taken cyclically). The loop $\gamma_{e_j}$ contains $e_i$ and
$n(\gamma_{e_j})\leq n(\gamma)-1$. We conclude by arguing inductively as $n(\gamma)$ is strictly
decreasing under edge-surgeries.
\end{proof}
There is a version of the previous lemma for turns:
\begin{lem}\label{Surgery4}
Let $\Delta$ be a simplex of $\O(\G)$, $\gamma=g_1e_1\dots g_ke_k$ a cyclically reduced loop realised in
the underlying graph of groups.
If $\tau = [e,g e']$ is a non-trivial turn, which is crossed by $\gamma$, then we can find some cyclically reduced loop $\gamma'$, obtained by recursive edge surgeries on $\gamma$, such that first, $\gamma'$ crosses $\tau$ and second, $\gamma'$ crosses every oriented edge at most once.
\end{lem}
\begin{proof}
Let $\gamma=g_1e_1\dots g_ke_k$ be a cyclically reduced loop, as above. Without loss of generality, we can assume that $\gamma$ is of the form $\gamma = ge' \dots \bar{e}$ and there are no other occurrences of $e'$ or $\bar{e}$ in $\gamma$, as otherwise we can perform edge-surgeries to change $\gamma$ to a cyclically reduced loop satisfying this property and which still crosses $\tau$ (cyclically).
Now suppose that there is some oriented edge $E$ which is crossed by $\gamma$ at least twice. In this case, if $e_i$ and $e_j$ are the first and the last occurrences of $E$ in $\gamma$, respectively, then we replace $\gamma$ with the cyclically reduced loop $\gamma_1 = ge' \dots g_{i-1}e_{i-1} g_j e_j \dots g_k e_k$ which still crosses $\tau$ and, in addition, crosses $E$ once. By arguing inductively on the number of repetitions, we can find a $\gamma'$ with the requested properties.
\end{proof}
\section{Critical and Regular turns}
Firstly we explain our strategy.
Given $X\in\O(\G)$ which is
minimally displaced by an automorphism $\phi$, we want to control the number of ways we can fold
a turn of $X$, without exiting $\operatorname{Min}(\phi)$. If a straight map $f:X\to X$ representing $\phi$
sends an edge of a maximally stretched loop $\gamma$ across a turn $\tau$, then by folding $\tau$ we
decrease the length of $f(\gamma)$. ``Morally'', this is the only way we can decrease
stretching factors of loops, and if we fold a turn {\em not} crossed by the image of an edge, we increase the displacement.
``Morally'' does not mean ``literally'', and in fact one has to focus on legal loops in
the tension graph and analyse what happens to the images of turns. Our plan is to select a finite
number of turns that will be enough to control the displacement.
These will be our ``simplex critical turns'', which we introduce at the end of this section. The upshot of this process will be that folding a simplex regular (i.e. non-critical) turn strictly increases
the displacement. We note that our set of critical turns won't be optimal, in the sense that
we may a priori increase the displacement also by folding a critical turn; for instance we include all free turns for convenience.
It would be
interesting to have a nice characterisation of exactly those turns whose folding does not
increase the displacement. The next lemma is the key observation we begin with.
\begin{lem}\label{L2}
Let $[\phi]\in\operatorname{Out}(\G)$. Let $\Delta$ be a simplex of $\O(\G)$ and $f$ be an optimal
map representing $\phi$ on a point $X$ of $\Delta$. Let $\tau$ be a non-degenerate
turn, and let $\Delta_\tau$ be the simplex obtained by
folding $\tau$. Let $X^t$ denote the point of $\Delta_\tau$ obtained from $X$ by folding
$\tau$ by an amount $t$.
If there is an $f$-legal loop $\gamma$ in the tension graph of $f$ (see Definition~\ref{deftg}) such that
\begin{equation}
\label{in1}
\#(f(\gamma),\tau)\leq\lambda_\phi(X)\#(\gamma,\tau)\qquad\text{(resp. with strict inequality)}\tag{$\heartsuit$}
\end{equation}
then $$\lambda_\phi(X^t)\geq\lambda_\phi(X) \qquad\text{(resp. with strict inequality)}.$$
\end{lem}
\begin{proof} For any legal loop $\gamma$ in the tension graph of $X$, in $X^t$ we have $$\lambda_\phi(X^t)=\sup_g\frac{L_{X^t}(f(g))}{L_{X^t}(g)}\geq\frac{L_{X^t}(f(\gamma))}{L_{X^t}(\gamma)}=\frac{\lambda_\phi(X)L_{X}(\gamma)-2t\#(f(\gamma),\tau)}{L_{X}(\gamma)-2t\#(\gamma,\tau)},$$
and $\lambda_\phi(X^t)\geq\lambda_\phi(X)$ is guaranteed (with strict inequality) provided that
$$\lambda_\phi(X)L_{X}(\gamma)-2t\#(f(\gamma),\tau)\geq
\lambda_\phi(X)(L_{X}(\gamma)-2t\#(\gamma,\tau))$$
(resp. with strict inequality), and that last inequality clearly reduces to (\ref{in1}).
\end{proof}
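\begin{rem}
For illustration, suppose $\lambda_\phi(X)=2$, $L_X(\gamma)=10$ and $\#(\gamma,\tau)=1$. If $\#(f(\gamma),\tau)=2$, then (\ref{in1}) holds with equality, and the ratio in the proof equals $\frac{20-4t}{10-2t}=2$ for every admissible $t$, so the displacement does not drop below $\lambda_\phi(X)$ when folding $\tau$. If instead $\#(f(\gamma),\tau)=1$, the inequality is strict and $\frac{20-2t}{10-2t}>2$ for every $t>0$, so the displacement strictly increases.
\end{rem}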
From now on, we will show that, except for finitely many turns, we can guarantee the
existence of a loop $\gamma$ satisfying the hypothesis of Lemma~\ref{L2}.
\bigskip
We now choose a single non-trivial element $h_v\in G_v$, for each non-trivial
$G_v$. Some of our subsequent constructions will depend on this choice, but we will never
need to revise it, so we will not need to refer to the specific elements.
We denote the collection of such chosen elements by $H$: $$H=\{h_v: v\text{ is a non-free vertex}\}.$$
\begin{defn}\label{ADelta}
For any simplex $\Delta$ of $\O(\G)$ define a set of loops, $A_{\Delta}$ as
follows: a cyclically reduced loop $g_1e_1\dots g_ke_k$ in the underlying
graph of $\Delta$ is in
$A_\Delta$ if and only if
\begin{enumerate}
\item it crosses every (un-oriented) edge at most $4$ times, and
\item every non-trivial $g_i$ belongs to $H$.
\end{enumerate}
\end{defn}
\begin{rem}
The reason for constructing $A_{\Delta}$ is that it is finite, and gives us a local coordinate system of loops which will be sufficient for calculating displacements and the Lipschitz metric, locally.
\end{rem}
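\begin{rem*}
Purely as a hypothetical illustration of Definition~\ref{ADelta}: suppose the underlying graph of $\Delta$ is a rose with two petals $a,b$ based at a single non-free vertex $v$ with $G_v$ infinite, so that $H=\{h_v\}$. Then the loops $a$, $h_va$, $ab$ and $a\,h_vb$ all belong to $A_\Delta$, while a loop such as $a\,gb$ with $g\in G_v\setminus\{1,h_v\}$ does not, since $g\notin H$.
\end{rem*}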
\begin{defn}\label{defncc}
Let $[\phi]\in\operatorname{Out}(\G)$ and let $\Delta$ be a simplex. We say that a turn $\tau$ in the
underlying graph of $\Delta$ is {\bf
candidate regular} if it is infinite non-free and $\#(\phi(\gamma),\tau)=0$ for all loops
$\gamma\in A_\Delta$;
a turn is {\bf candidate critical} if it is not candidate regular (so $\tau$ is candidate critical if
it is free, if its vertex group is finite, or if it appears in $\phi(A_\Delta)$). We denote the set of
candidate critical turns of $\Delta$ by $\mathcal{C}_C(\Delta)$. (We remark that even if we
do not refer to $\phi$ in the notation, the set $\mathcal C_C(\Delta)$ depends on $\phi$.)
\end{defn}
\begin{lem}\label{LB1}
Let $X, Y \in \O(\G)$ and $f:X \to Y$ a straight map. Let $\Delta=\Delta_X$.
Suppose $\xi$ is either an edge or a free turn of
$X$, which is crossed by an $f$-legal loop $\gamma_0$. Then $\xi$ is also crossed by an $f$-legal
loop $\gamma\in
A_\Delta$ which additionally crosses any oriented edge at most once. If $\gamma_0$ is
in the tension graph, then so is $\gamma$.
Moreover, under the same hypotheses, if additionally $X=Y$ and $\gamma_0$ is $\langle\sim_{f^k}\rangle$-legal, then $\gamma$ may also be chosen to be $\langle\sim_{f^k}\rangle$-legal.
\end{lem}
\begin{proof}
Let $\gamma_0$ be an $f$-legal loop crossing $\xi$. By Lemma~\ref{LCR}, $\gamma_0$ is cyclically reduced.
By Lemma~\ref{Surgery3} or~\ref{Surgery4}, as appropriate, we can reduce $\gamma_0$, via edge-surgeries, to a loop $\gamma_1$,
still crossing $\xi$, and which crosses
any oriented edge at most once.
In particular,
$\gamma_1$ satisfies condition $(1)$ for belonging to $A_\Delta$.
Since $\gamma_1$ is obtained from $\gamma_0$ by edge-surgeries, the turns (cyclically) crossed by $\gamma_1$ are also crossed by $\gamma_0$. Hence if $\gamma_0$ is $f$-legal (respectively $\langle\sim_{f^k}\rangle$-legal), then so is $\gamma_1$.
We now perform turn surgeries on $\gamma_1$ to produce a loop in $A_{\Delta}$. Condition (1) of Definition~\ref{ADelta} is already satisfied, so we only need to concern ourselves with condition (2), which is about the non-free turns crossed by the loop.
Suppose that $\gamma_1$ contains a sub-path $e_{i-1} g_i e_i$ at a non-free vertex $v$, crossing the corresponding non-free turn $[\bar{e}_{i-1}, g_i e_i]$. Let $h=h_v \in H$ be the chosen element of $G_v$. Then by Lemma~\ref{L0}, at least one of $[\bar{e}_{i-1}, e_i]$ and $[\bar{e}_{i-1}, h e_i]$ is $f$-legal (respectively $\langle\sim_{f^k}\rangle$-legal).
Therefore, by making appropriate choices at each non-free turn crossed by $\gamma_1$, we can perform a sequence of turn surgeries to produce an $f$-legal loop $\gamma$ (respectively $\langle\sim_{f^k}\rangle$-legal) which is in $A_{\Delta}$ and still crosses $\xi$ (since $\xi$ is unaffected by turn surgeries).
Moreover, $\gamma$ contains only edges that were originally
edges of $\gamma_0$, so if
$\gamma_0$ is in the tension graph, so is $\gamma$.
\end{proof}
\begin{rem*}
Note that in the previous result we prove that the loop $\gamma$ crosses each oriented edge at most once, hence each un-oriented edge at most twice, even though the requirement for being in $A_{\Delta}$ is only that it crosses each un-oriented edge at most $4$ times. The reason is that in Lemma~\ref{LB2} below we construct loops by concatenating two loops obtained from Lemma~\ref{LB1}.
\end{rem*}
\begin{lem}\label{LB2}
Let $X, Y \in \O(\G)$ and $f:X \to Y$ a straight map. Let $\Delta=\Delta_X$.
Let $\tau=[a,gb]$ be a non-free $f$-legal turn at a vertex $v$, such that both edges $a,b$ are crossed by $f$-legal loops.
Then there exists an $f$-legal loop $\gamma$ which crosses $\tau$. If the loops for $a$ and $b$ are in
the tension graph, then so is $\gamma$.
Moreover, we may take $\gamma = \gamma'_{\tau, g_1}$, for some $g_1 \in G_v$, where $\gamma' \in A_{\Delta}$.
Finally, the same is true for
$\langle\sim_{f^k}\rangle$-legality in the case where $X=Y$.
\end{lem}
\begin{proof}
We orient $a,b$ so that $v$ is the common starting point. By Lemma~\ref{LB1} there exist legal
loops $\gamma^a$ and $\gamma^b$, crossing $a$ and $b$ respectively, each
crossing any oriented edge at most once, so that $\overline{\gamma^a}$ and $\gamma^b$ are in $A_\Delta$; we choose $\gamma^a, \gamma^b$ to start with $a$ and $b$ respectively. The loop $\gamma=\overline{\gamma^a} g \gamma^b$ crosses
$\tau$ by construction,
and it crosses any un-oriented edge at most $4$ times. Let $\omega$ be the non-free turn determined at
the concatenation of the end of $\gamma^b$ and the beginning of $\overline{\gamma^a}$. By construction $\gamma$ is
legal except possibly at $\omega$. Hence, by Lemma~\ref{L0}, up to possibly replacing $\gamma$ with
$\gamma_{\omega,h_v}$ or $\gamma_{\omega,id}$, we may assume that $\gamma$ is legal. By Lemma~\ref{LCR}, $\gamma$ is cyclically reduced.
Moreover, both $\gamma_{\tau,id},\gamma_{\tau,h_v}$ satisfy condition $(2)$ for
belonging to $A_\Delta$ and at least one of them is legal by Lemma~\ref{L0}.
Clearly if both $\gamma^a$ and $\gamma^b$ are in the tension graph of $f$, then so is $\gamma$.
\end{proof}
\begin{rem}\label{RB3}
If $f:X\to Y$ is a minimal optimal map, then any edge in the tension graph is crossed by an
$f$-legal loop in the tension graph, and so satisfies the
hypothesis of Lemma~\ref{LB1}; and any non-free legal turn in the tension graph satisfies the
hypothesis of Lemma~\ref{LB2}, in the tension graph.
This is just by the definition of {\em minimal} optimal map
(see Definition~\ref{defnmo}). Moreover, we recall also that if $\phi$ is irreducible and
$f:X\to X$ is an optimal map representing $\phi$ on a minimally displaced point, then the
tension graph of $f$ is the whole $X$ (see Lemma~\ref{wasfootnote}).
\end{rem}
\begin{rem}
Note that for Lemma \ref{LB2}, the hypothesis that the turns are non-free is essential, as the lemma fails for free turns.
Let $\phi$ be the automorphism of $F_2 = \langle a,b\rangle$ which sends $a$ to $aba$ and $b$ to $ba$. Then the iwip automorphism $\phi$ admits a natural train track representative (which we also call $\phi$) on the rose $R$, where we identify each petal of $R$ with an element of the free basis $\{a,b\}$. Moreover, the turn $\tau = [a,b]$ is $\langle\sim_{\phi^k}\rangle$-legal, as for every positive integer $k$, $\phi^k(a)$ and $\phi^k(b)$ start with $a$ and $b$, respectively.
However, note that a legal loop cannot contain the cyclic subwords $a b^{-1}$ or $b a^{-1}$. Therefore the only legal loops are either positive or negative words in $a$ and $b$. In particular, the free turn $\tau$ is $\phi$-legal, but it cannot be extended to a $\phi$-legal loop.
\end{rem}
\begin{lem}\label{LB4}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element, let $X\in\operatorname{Min}(\phi)$ and $f:X\to X$ a
train track map representing $\phi$. Then, every edge of $X$ is crossed by a $\langle
\sim_{f^k}\rangle$-legal loop. In particular, Lemma~\ref{LB1} holds true for any edge of
$X$, and Lemma~\ref{LB2} for any non-free $\langle\sim_{f^k}\rangle$-legal turn.
\end{lem}
\begin{proof}
Any train track map is also train track with respect to $\langle
\sim_{f^k}\rangle$-legality; namely, it maps $\langle \sim_{f^k}\rangle$-legal paths to $\langle
\sim_{f^k}\rangle$-legal paths (\cite[Corollary~8.12]{FM13}).
Since $\phi$ is irreducible and $X$ is minimally displaced, the tension graph of $f$ is the whole of $X$,
and any vertex is at least two-gated with respect to $\langle
\sim_{f^k}\rangle$. Therefore, there exists a $\langle \sim_{f^k}\rangle$-legal loop, $\gamma_0$, in $X$. The union of the iterated images $f^n(\gamma_0)$ forms a sub-graph of $X$ which is
$f$-invariant. By irreducibility, that sub-graph must be the whole of $X$. In particular any
edge $e$ is in the loop $f^n(\gamma_0)$ for some $n$, and that loop is $\langle
\sim_{f^k}\rangle$-legal because $f$ is a train-track map.
\end{proof}
The following is just a list of immediate corollaries of previous lemmas.
\begin{lem}\label{images}
Let $[\phi]\in\operatorname{Out}(\G)$ and $\Delta$ a simplex in $\O(\G)$. Then $\mathcal C_C(\Delta)$ contains
(at least) all turns of the following kinds:
\begin{enumerate}[(i)]
\item Free and finite non-free turns;
\item $f$-images of finite non-free turns, where $f$ is any straight $\O$-map landing
on a point of $\Delta$;
\item turns in the $f$-image of an edge crossed by some $f$-legal loop, where $f:X\to
X$ is any straight map representing $\phi$. In particular those include:
\begin{enumerate}[(a)]
\item edges in the tension graph of $f$, when $f:X\to X$ is a minimal optimal map
representing $\phi$;
\item any edge, provided the tension graph of $f$ is the whole $X$, {\em e.g.} if
$\phi$ is
irreducible and $f$ is an optimal map representing $\phi$ on the minimally
displaced point $X$;
\end{enumerate}
\item turns in the $f$-image of a free turn crossed by some $f$-legal loop, where $f:X\to
X$ is any straight map representing $\phi$.
\end{enumerate}
\end{lem}
\begin{proof}
$(i)$ is by definition. For $(ii)$, note that, since $f$ is an $\O$-map, the $f$-image of a
finite non-free vertex is again a finite non-free vertex.
Cases $(iii)$ and $(iv)$ follow immediately from Lemma~\ref{LB1} and Remark~\ref{RB3}. In
particular, cases $(iii)-(a)$ and $(iii)-(b)$ follow from Lemma~\ref{LB1} via
Remark~\ref{RB3}, and case $(iv)$ follows from Lemma~\ref{LB1}.
\end{proof}
\begin{prop}\label{CandLegal0}
Let $[\phi]\in\operatorname{Out}(\G)$, $\Delta$ a simplex in $\O(\G)$,
and $f:X\to X$ be a straight map representing $\phi$ at a point $X\in \Delta$.
Let $\tau_1, \ldots, \tau_k$ be candidate regular turns.
Then for any $j=1,\dots,k$, any $f$-legal loop $\gamma_0$ crossing $\tau_j$ can be
modified via turn-surgeries (at infinite non-free turns) to an $f$-legal loop $\gamma$ so that
\begin{enumerate}[(i)]
\item $\#(\gamma,\tau_j)=1$ and,
\item $\sum_{i=1}^k \#(\gamma,\tau_i) = 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) \leq 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) =0$ unless $f$ maps $\tau_j$ to some $\tau_l$.
\end{enumerate}
Moreover, if in addition
$\gamma_0$ is $\langle \sim_{f^k}\rangle$-legal, then $\gamma$ can be chosen to
be $\langle \sim_{f^k}\rangle$-legal.
\end{prop}
\begin{proof}
Without loss of generality we may assume that $j=1$, that is, that $\gamma_0$ crosses $\tau_1$.
We modify $\gamma_0$ by using turn-surgeries (Lemma~\ref{Surgery}) in order to
get a new loop $\gamma$ that satisfies the extra properties. We will apply surgeries
only on turns at non-free vertices with infinite stabilisers, so there will be infinitely many
choices every time.
Concretely, let $\gamma_0$ be represented as $g_1e_1\dots g_ne_n$ (with cyclic indices modulo
$n$) so that $\tau_1=[\bar{e}_1,g_2e_2]$.
For any infinite non-free turn $\tau=[\bar{e}_i,g_{i+1}e_{i+1}]$ of $\gamma_0$ with $i\neq 1$, we choose an element
$a$ in the corresponding vertex group so that $\tau_a=[\bar{e}_i,ae_{i+1}]$ satisfies
\begin{enumerate}[(1)]
\item $\tau_a$ is not one of the $\tau_i$;
\item $\tau_a$ is $\langle \sim_{f^k}\rangle$-legal;
\item $f(\tau_a)$ is not one of the $\tau_i$;
\end{enumerate}
Such an element exists because $\tau$ is infinite non-free, there are finitely many $\tau_i$, and
by Lemma~\ref{L0} all but at most one choice for $a$ produces a $\langle \sim_{f^k}\rangle$-legal turn. We define $\gamma$ as the
result of the turn-surgeries at all such infinite non-free vertices, by using the chosen
group elements.
Condition $(2)$ ensures that $\gamma$ is legal, and $\langle \sim_{f^k}\rangle$-legal if
$\gamma_0$ was. Since we did not touch $\tau_1$, condition
$(1)$ gives us points $(i)$ and $(ii)$. As for $(iii)$, let us analyse the turns crossed by
$f(\gamma)$. They come in several types:
\begin{enumerate}[(a)]
\item a turn crossed by the $f$-image of an edge of $\gamma$,
\item the $f$-image of a free turn of $\gamma$,
\item the $f$-image of a finite non-free turn of $\gamma$,
\item the $f$-image of an infinite non-free turn of $\gamma$.
\end{enumerate}
By Lemma~\ref{images}, the first three types are all candidate critical, so none of the $\tau_i$
appears in this way (note that in type $(b)$ the free turns that appear are crossed by the $f$-legal loop $\gamma$, so the hypothesis of Lemma~\ref{images}$(iv)$ is satisfied). Crossings of kind $(d)$ are avoided by condition $(3)$, except possibly
if $f(\tau_1)$ equals one of the $\tau_i$'s. Points $(iii)$ and $(iv)$ follow.
\end{proof}
\begin{cor}\label{CandLegal2}
Let $[\phi]\in\operatorname{Out}(\G)$, $\Delta$ a simplex in $\O(\G)$,
and $f:X\to X$ be a minimal optimal map representing $\phi$ on a point $X\in \Delta$.
Suppose that $\tau_1, \ldots, \tau_k$ are candidate regular turns. If there is a turn
$\tau_j$
which is $f$-legal and in the tension graph of $f$, then there exists an
$f$-legal loop, $\gamma$, in the tension graph, and such that:
\begin{enumerate}[(i)]
\item $\#(\gamma,\tau_j) = 1$ and,
\item $\sum_{i=1}^k \#(\gamma,\tau_i) = 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) \leq 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) =0$ unless $f$ maps $\tau_j$ to some $\tau_l$.
\end{enumerate}
\end{cor}
\begin{proof}
By Remark~\ref{RB3}, Lemma~\ref{LB2} applies to the infinite non-free turn $\tau_j$. So there is an
$f$-legal loop $\gamma_0$ in the tension graph crossing $\tau_j$, and
Proposition~\ref{CandLegal0} applies. Since the resulting loop $\gamma$ is obtained from $\gamma_0$ via
turn-surgeries, and since $\gamma_0$ is in the tension graph, $\gamma$ is in the
tension graph as well.
\end{proof}
\begin{cor}\label{candlegal3}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element, $\Delta$ a simplex in $\O(\G)$,
and $f:X\to X$ a train-track map representing $\phi$ on a point $X\in \Delta$.
Suppose that $\tau_1, \ldots, \tau_k$ are candidate regular turns. If there is a turn
$\tau_j$ which is $\langle \sim_{f^k}\rangle$-legal, then there exists a
$\langle \sim_{f^k}\rangle$-legal loop, $\gamma$ such that:
\begin{enumerate}[(i)]
\item $\#(\gamma,\tau_j) = 1$ and,
\item $\sum_{i=1}^k \#(\gamma,\tau_i) = 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) \leq 1$ and,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) =0$ unless $f$ maps $\tau_j$ to some $\tau_l$.
\end{enumerate}
\end{cor}
\begin{proof}
Lemma~\ref{LB4} applied to $\tau_j$ (which is necessarily infinite non-free, as it is candidate regular) guarantees
the existence of a $\langle \sim_{f^k}\rangle$-legal
loop $\gamma_0$ crossing $\tau_j$.
Proposition~\ref{CandLegal0} applies.
\end{proof}
\begin{cor}\label{C2}
Let $[\phi]\in\operatorname{Out}(\G)$. Let $\Delta$ be a simplex of $\O(\G)$ and $f$ be a minimal
optimal map representing $\phi$ on a point $X$ of $\Delta$. Let $\tau$ be a non-degenerate
candidate regular turn, and let $\Delta_\tau$ be the simplex
obtained by
folding $\tau$. If $X^t$ denotes the point of $\Delta_\tau$ obtained from $X$ by folding
$\tau$ by an amount $t$, then $$\lambda_\phi(X^t)\geq\lambda_\phi(X).$$
Moreover, if $\tau$ is $f$-legal and in the tension graph, and if $\lambda(\phi)>1$, then
the inequality is strict.
\end{cor}
\begin{proof}
By Remark~\ref{RB3} and Lemma~\ref{LB1}, there exists an $f$-legal loop $\gamma \in A_{\Delta}$ in the
tension graph, and any turn crossed by $f(\gamma) = \phi (\gamma)$ is
candidate critical, just by definition of candidate critical.
Since $\tau$ is candidate regular,
$$\#(f(\gamma),\tau)=0,$$ so the non-strict version
of hypothesis~(\ref{in1}) of Lemma~\ref{L2} is fulfilled, and the first claim follows.
If in addition $\tau$ is legal and in the tension graph, we invoke Corollary~\ref{CandLegal2} (with $k=1$) to build a legal loop
$\gamma$ in the tension graph so that $\#(\gamma,\tau)>0$ and
$$\#(f(\gamma),\tau)\leq \#(\gamma,\tau).$$ The non-strict version of inequality~(\ref{in1})
follows because $\lambda_\phi(X)\geq 1$. Moreover,
if $\lambda(\phi)>1$, then $\lambda_\phi(X)\geq\lambda(\phi)>1$ and also the strict version is proved.
\end{proof}
\begin{cor}\label{FoldingCandidateRegular1} Let $[\phi]\in \operatorname{Out}(\G)$.
Let $\tau$ be a non-degenerate candidate regular turn with respect to a simplex
$\Delta$ of $\O(\G)$, and $\Delta_{\tau}$ be the simplex obtained by folding
$\tau$. Then $$\lambda_\phi(\Delta_{\tau}) \geq \lambda_\phi(\Delta).$$
\end{cor}
\begin{proof}
Any point $Y\in\Delta_\tau$ is obtained by folding $\operatorname{unf}_\tau(Y)$ (Lemma~\ref{unf}).
Corollary~\ref{C2} tells us $\lambda_\phi(Y)\geq\lambda_\phi(\operatorname{unf}_\tau(Y))$. The claim follows
taking infima.
\end{proof}
Corollary~\ref{FoldingCandidateRegular1} provides the kind of non-strict inequalities we are
searching for. We now focus on turns whose folding guarantees the strict inequality $\lambda_\phi(\Delta_{\tau}) > \lambda_\phi(\Delta)$.
\begin{lem}\label{L3}
Let $[\phi]\in \operatorname{Out}(\G)$. Let $X\in \O(\G)$ and $f:X\to X$ be an optimal map representing
$\phi$. For any $X_0\in\Delta_X$ let $f^0:X_0\to X_0$ denote the map $f$ read in $X_0$.
For any $\e>0$ there is a neighbourhood $U$ of $X$ in $\Delta_X$ such that for any $X_0\in U$
there is a minimal optimal map $f_0:X_0\to X_0$ such that
$$d_\infty(f^0,f_0)<\e$$
and, for any $x,y\in X$, $$|d_X(f(x),f(y))-d_{X_0}(f^0(x),f^0(y))|<\e.$$
\end{lem}
\begin{proof}
The function $\lambda_\phi$ is continuous in $X$ and, tautologically, the metric of
$X$ changes continuously with $X$. Therefore, for any $\e>0$ there is a
neighbourhood $U$ of $X$ in $\Delta_X$ such that
\begin{itemize}
\item $|\lambda_\phi(X_0)-\lambda_\phi(X)|<\e$,
\item $d_\infty(\operatorname{Str}(f^0),f^0)<\e$,
\item $|\operatorname{Lip}(\operatorname{Str}(f^0))-\operatorname{Lip}(f)|<\e$.
\end{itemize}
By~\cite[Theorem~3.15]{FM18I} there exists a weakly optimal map $f_1:X_0\to X_0$ representing
$\phi$ such that $$d_\infty(f_1,\operatorname{Str}(f^0))\leq \operatorname{vol}(X_0)(\operatorname{Lip}(\operatorname{Str}(f^0))-\lambda_\phi(X_0))$$
and, by~\cite[Theorem~3.15 and Theorem~3.24]{FM18I} there exists a minimal optimal map
$f_0:X_0\to X_0$ representing $\phi$ such that
$$d_\infty(f_1,f_0)<2\e.$$
Putting together all such inequalities, and since $\e$ is arbitrary, we get that for any $\e>0$
there is $U$ so that for all $X_0\in U$ we have $$d_\infty(f^0,f_0)<\e.$$
Moreover, it is clear that we can choose $U$ in such a way that for any $x,y\in X$ we have $$|d_X(f(x),f(y))-d_{X_0}(f^0(x),f^0(y))|<\e.$$
\end{proof}
\begin{lem}\label{L4}
Let $[\phi]\in \operatorname{Out}(\G)$. For any $X\in \O(\G)$ and optimal map $f:X\to X$ representing
$\phi$, there is a neighbourhood $U$ of $X$ in $\Delta_X$ such that for any $X_0\in U$ there is a minimal optimal map $f_0:X_0\to X_0$ such that if $\tau$ is a non-free turn in $X$ which is $f$-legal, then $\tau$ is $f_0$-legal.
\end{lem}
\begin{proof}
We apply Lemma~\ref{L3}. By equivariance, if $v$ is the
non-free vertex where $\tau$ is based, then $f(v)=f_0(v)$ is a non-free vertex.
The estimates of Lemma~\ref{L3} now easily imply that if $\tau$ is legal in $X$, it remains legal under small perturbations.
\end{proof}
\begin{lem}\label{Cylinder} Let $[\phi]\in\operatorname{Out}(\G)$, $\Delta$ be a simplex in $\O(\G)$,
and $X\in\Delta$ be a point which is minimally displaced by $\phi$.
Suppose that $\Delta'$ is a simplex with face $\Delta$ and that there is a point $Y \in \Delta'$ which is minimally displaced by $\phi$. Then for any open neighbourhood $U$ of $X$ in $\Delta'$ there is a point $Z$ in $U$ which is minimally displaced by $\phi$.
\end{lem}
\begin{proof}
This is an immediate application of the convexity properties of $\lambda_{\phi}$ (namely, of~\cite[Lemma~6.2]{FM18I}). More specifically, for $X,Y$ as above, the linear segment $\overline{YX}$ eventually enters $U$, by continuity. On the other hand, by the convexity properties of $\lambda_\phi$, any point of the segment $\overline{YX}$ is minimally displaced by $\phi$, which gives the required result.
\end{proof}
\begin{prop}\label{FoldingCandidateRegular2} Let $[\phi]\in\operatorname{Out}(\G)$ be irreducible and with
$\lambda(\phi)>1$.
Let $\Delta$ be a simplex of $\O(\G)$, and let $X \in \Delta$.
Let $f:X\to X$ be a minimal optimal map representing $\phi$. Let $\tau$ be a non-degenerate, candidate
regular turn with respect to $\Delta$, which is also $f$-legal and let $\Delta_{\tau}$ be the
simplex obtained by folding $\tau$\footnote{
Note also that since $\tau$ is candidate regular, it is in particular infinite non-free,
so $\Delta_\tau$ is different from $\Delta$.
}. Then for
all $Y\in\Delta_\tau$ we have $$\lambda_\phi(Y) > \lambda(\phi).$$
\end{prop}
\begin{proof} From Corollary~\ref{FoldingCandidateRegular1} we know
$\lambda_\phi(\Delta_\tau)\geq\lambda_\phi(\Delta)$, and if
$\lambda_\phi(\Delta)>\lambda(\phi)$ the claim follows. Thus we may assume $\lambda_\phi(\Delta)=\lambda(\phi)$.
For any $Z\in\Delta$ denote by $Z^t$ the point of $\Delta_\tau$ obtained from $Z$ by folding
$\tau$ by an amount of $t$ (which is well defined, for any $Z$, for small enough $t$).
We will prove that there is an open neighbourhood $U$ of $X$ in $\Delta$ and
$T>0$ (which depends only on $\phi$ and $X$) so that for all $Z \in U$ and $t<T$, the
point $Z^t$ is not minimally displaced. Then the result follows, as $U^T = \{ Z^t : Z\in U,
t<T \} $ is an open neighbourhood of $X$ in $\Delta_{\tau}$, and by
Lemma~\ref{Cylinder}, if $\Delta_{\tau}$ were to contain a minimally displaced point, we would
be able to find a minimally displaced point in $U^T$, leading to a contradiction. Whence $\lambda_{\phi} (Y) >\lambda(\phi)$ for all $Y \in \Delta_{\tau}$.
We now prove our claim. Since $\tau$ is candidate regular, it is in particular non-free.
By Lemma~\ref{L4} there is a neighbourhood $U$ of $X$ in $\Delta$ so that for any
point $Z \in U$, there is a minimal optimal map $f_Z: Z \to Z$, such that $\tau$ is
$f_Z$-legal. Clearly, for any such $U$ there is $T>0$ so that $Z^t$ is well defined for all $Z\in U$ and $t<T$.
By Corollary~\ref{C2} $\lambda_{\phi} (Z^t) \geq \lambda_{\phi} (Z)$, and if
$Z$ is not minimally displaced, the result follows.
So, suppose that $Z \in \operatorname{Min}(\phi)$. In this case, since $f_Z$ is an optimal map
representing $\phi$, and since $\phi$ is irreducible, then the tension graph of $f_Z$ is the
whole $Z$ (Lemma~\ref{wasfootnote}),
and since $\lambda(\phi)>1$, Corollary~\ref{C2} applies in its strict inequality version.
In any case, $Z^t$ cannot be minimally displaced and our claim follows.
\end{proof}
\begin{rem}
If one is interested in a version of Proposition~\ref{FoldingCandidateRegular2} for reducible
automorphisms, one has just to add the hypothesis that $\tau$ is {\em stably} in the tension
graph, that is to say, that $\tau$ is in the tension graph of any $f_Z$ for $Z$ close enough
to $X$. This will be enough to apply Corollary~\ref{C2} as we did in the proof for
irreducible automorphisms.
\end{rem}
\begin{lem}\label{criticallemma}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element with $\lambda(\phi)>1$.
Suppose that
$X_1,X_2\in\Delta$ are two points minimally displaced by $\phi$ and let $f_1,f_2$ be train track
representatives of $\phi$ on $X_1$ and $X_2$ respectively.
Suppose that $\tau$ is a candidate regular turn. Then $\tau$ is $f_1$-legal in $X_1$ if and only
if it is $f_2$-legal in $X_2$.
\end{lem}
\begin{proof}
Suppose for contradiction that $\tau$ is $f_1$-legal but
$f_2$-illegal (in particular it is non-degenerate). Let $\Delta_\tau$ be the simplex obtained by folding $\tau$.
Since $\tau$ is candidate regular, we may apply
Proposition~\ref{FoldingCandidateRegular2} (we can do so because train track maps are minimal optimal maps,
by Lemma~\ref{wasfootnote}), and we
have that $\lambda_{\phi}(Y) > \lambda(\phi)$ for any $Y \in \Delta_{\tau}$. On the other hand,
$\tau$ is $f_2$-illegal, and $\operatorname{Min}(\phi)$ is invariant under isometrically folding
illegal turns (see \cite[Theorem~8.23]{FM13}), which means that there is a point $Y \in
\Delta_{\tau}$ which is minimally displaced and that leads us to a contradiction. Clearly we
can switch the roles of $X_1$ and $X_2$, and the proof is complete.
\end{proof}
\begin{defn}[Simplex Critical Turns]\label{defnsc}
Let $[\phi]\in\operatorname{Out}(\G)$. For a simplex $\Delta$, we say that a turn
$\tau$ is {\bf simplex critical} if it is either candidate critical (see Definition~\ref{defncc}) or $f$-illegal for
some train track representative of $\phi$ defined on some point of $\Delta$ (if any).
A turn is called {\bf simplex regular} if it is not simplex critical. The set of simplex
critical turns is denoted by $\mathcal C_\Delta$. Sometimes we will use the short
notation {\em $\Delta$-critical} to mean {\em simplex critical in $\Delta$}.
\end{defn}
\begin{thm}
\label{critical} If $[\phi]\in\operatorname{Out}(\G)$ is irreducible with $\lambda (\phi) > 1$, then
the set $\mathcal{C}_{\Delta}$ is finite. In fact, it is sufficient to add to
$\mathcal{C}_C(\Delta)$ the illegal turns for a single train track map (if any) to
obtain the whole of $\mathcal{C}_{\Delta}$.
\end{thm}
\begin{proof}
The set $\mathcal C_C(\Delta)$ is clearly finite by construction (see Definition~\ref{defncc}), and if there is no minimally
displaced point in $\Delta$, we have nothing to prove. Otherwise, choose any train track
representative $f$ of $\phi$ defined on a point of $\Delta$. The $f$-illegal turns are finitely
many (Lemma~\ref{L0}), and
Lemma~\ref{criticallemma}
tells us that
$\mathcal{C}_{\Delta} = \mathcal{C}_C(\Delta) \cup \{\tau : \tau $ is an $f$-illegal turn$\}$; in particular, it is finite.
\end{proof}
\begin{rem}
It is worth mentioning that simplex regular turns can be effectively detected, having a train
track map $f$ in hand. Namely, suppose
that a turn $\tau$ is
\begin{enumerate}
\item neither a free turn nor a turn with finite vertex group; and
\item not the $f$-image of an edge; and
\item not the $f$-image of a free turn; and
\item not the $f$-image of a turn involving group elements in $H$; and
\item not $f$-illegal;
\end{enumerate}
then $\tau$ is simplex regular. In particular, Proposition~\ref{FoldingCandidateRegular2} tells
us that if we have $X\in\operatorname{Min}(\phi)$ and we want to
find all neighbours of $X$ obtained from $X$ by a single turn-fold, and which still are in
$\operatorname{Min}(\phi)$, then we only need to check turns in the finite complement of the above effective list,
namely turns that are either
\begin{enumerate}
\item free or with finite vertex group; or
\item in the $f$-image of an edge; or
\item the $f$-image of a free turn; or
\item the $f$-image of a turn involving group elements in $H$; or
\item $f$-illegal.
\end{enumerate}
\end{rem}
\section{Folding and unfolding collapsed forests}
Here we extend the unfolding construction of Section~\ref{s3} to the general case of two
simplices, one a face of the other. We recall that a
straight map between elements of $\O(\G)$ is always understood to be an $\O$-map (i.e. $G$-equivariant at the level of trees).
A straight map $p:X\to Y$, defines on $X$ a simplicial structure $\sigma_p$, by pulling back
that of $Y$. With respect to $\sigma_p$, the map $p$ is tautologically simplicial.
We define the {\bf simplicial volume} of $p$, $\operatorname{svol}(p)$ as the number of edges of $\sigma_p$.
If in addition $p$ is {\bf locally isometric on edges}, then it defines (some) folding paths $X=X_0,\dots,X_n=Y$
obtained by recursively identifying pairs of edges of $\sigma_p$ having a common vertex and the same
$p$-image. Together with the $X_i$ there are quotient maps $q_i:X_{i-1}\to X_i$ given by the
identification, and maps $p_i:X_i\to Y$ defined recursively by $p_i(x)=p_{i-1}(q_i^{-1}(x))$. (Note that $p_0=p$ and $p_n=id$.)
We refer to any folding path obtained as above as a {\bf folding path directed by $p$}. We
say that $X=X_0,\dots,X_n=Y$ has length $n$.
\begin{lem}
\label{redsimpvol}
Let $X, Y \in \O(\G)$ and $p:X \to Y $ be a straight map which is locally isometric on
edges. Then any folding path directed by $p$ has length at most $\operatorname{svol}(p)$.
\end{lem}
\begin{proof}
At any step the number of edges decreases by one.
\end{proof}
\begin{lem}
\label{unfold}
Let $\Delta,\Delta'$ be simplices of $\O(\G)$ so that
$\Delta$ is a face of $\Delta'$. For any $Y \in \Delta'$ there is a point,
$\operatorname{unf}(Y) \in \Delta$ and a straight map, $p: \operatorname{unf}(Y) \to Y$, such that:
\begin{enumerate}
\item $p$ is a local isometry on edges;
\item If $v$ is a non-free vertex in $Y$, then $p^{-1}(v)$ is a single vertex;
\item $\operatorname{svol}(p)$ is at most $2D(\Delta')^2$, where $D(\Delta')$ is
the number of edges of $\Delta'$.
\end{enumerate}
Moreover, any folding path directed by $p$ produces maps which still satisfy $(1)$
and $(2)$.
\end{lem}
\begin{proof}
The underlying graph $X$ of $\Delta$ is obtained by the collapse in $Y$ of a simplicial forest
$\F=T_0\sqcup\dots\sqcup T_k$, whose trees $T_i$ each contain at most one non-free
vertex. We define $\operatorname{unf}(Y)$ by isometrically unfolding each tree. More precisely, for any
$T_i$ we choose a root-vertex $w_i$ with the requirement that $w_i$ is the unique
non-free vertex of $T_i$, if any. For any leaf $y$ of the forest, say $y$ is a leaf of $T_i$,
there is a unique path $\gamma_y$ connecting $y$ to $w_i$ in $T_i$. For notational convenience
we define $\gamma_y$ to be the constant path for any other vertex of $Y$.
The metric on $X$ defining the point $\operatorname{unf}(Y)$ is given as follows. Any edge $e$ of $X$ has a
preimage in $Y$ which is also an edge. We declare $$L_{\operatorname{unf}(Y)}(e)=L_{Y}(\gamma_a)+L_Y(e)+L_Y(\gamma_b)$$
where $a,b$ are the endpoints of the preimage of $e$ in $Y$.
As an oriented edge, $e$ is therefore the concatenation of three sub-segments $$e=A\cdot
E\cdot B$$ of lengths $L_{Y}(\gamma_a), L_Y(e),$ and $L_Y(\gamma_b)$ respectively. The map
$p$ is now defined by isometrically identifying $A$ with $\gamma_a$, $E$ with the copy of $e$
in $Y$, and $B$ with $\gamma_b$. The union of all $A$-segments and $B$-segments forms a forest
which can be viewed as an isometric unfolding of $\F$, and outside that forest $p$ is basically
the identity by definition. Conditions $(1)$ and $(2)$ immediately follow. As for $(3)$, it
suffices to note that for any $y\in Y$, the cardinality of $p^{-1}(y)$ is bounded by the number
of leaves of $\F$, which is bounded by $2D(\Delta')$. Therefore $\sigma_p$ has at most
$2D(\Delta)D(\Delta')\leq 2D(\Delta')^2$ edges. The last claim is easy to verify and we leave it to the reader.
\end{proof}
\begin{lem}
\label{cancel} Let $X, Y \in \O(\G)$.
Let $p:X \to Y$ be a straight map such that if $v$ is a non-free vertex in $Y$, then $p^{-1}(v)$ is a single vertex in $X$ (condition $(2)$ in Lemma~\ref{unfold}).
Let $\alpha, \beta$ be paths in $X$, starting at the same vertex, and so that $\bar\alpha\beta$
is reduced. If $p(\alpha) = p(\beta)$, then the only non-free vertex
crossed by each of them, if any, is their initial vertex (which is crossed only once).
\end{lem}
\begin{proof}
We argue by contradiction assuming that $\alpha$ crosses a non-free vertex other than its
initial point (we consider multiple crossings of the same vertex as distinct crossings). Up to possibly truncating $\alpha$ and $\beta$, we may assume that the last
vertex $w$ of $\alpha$ is non-free, and that $\alpha$ crosses no other non-free vertex
except possibly its initial point. Then our assumption on $p$ implies that the last vertex
of $\beta$ must be $w$. But in this case $\alpha \bar \beta$ would define a non-trivial group element which is collapsed by $p$, contradicting the fact that $X,Y$ are in the same deformation space.
\end{proof}
\begin{prop} \label{foldable}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element with $\lambda(\phi)>1$.
Let $X, Y \in \O(\G)$, $X \neq Y$, and suppose that there is $p:X \to Y$ a straight map such that
\begin{enumerate}
\item $p$ is a local isometry on edges;
\item if $v$ is a non-free vertex in $Y$, then $p^{-1}(v)$ is a single vertex.
\end{enumerate}
If every $p$-illegal turn is simplex regular (Definition~\ref{defnsc}), then $\lambda_\phi(Y)>\lambda(\phi)$.
\end{prop}
\begin{proof}
By Lemma~\ref{L0}, the set of $p$-illegal turns is a finite set $\{\tau_1,\dots,\tau_k\}$. Since $X \neq Y$, this set is non-empty.
Denote $\Delta$ the simplex of $X$. Let $f:X\to X$ be a minimal
optimal map representing $\phi$. Note that for any element of $G$, seen as a loop
$\gamma$ in $X$, the loop $p(f(\gamma))$ represents $\phi(p(\gamma))$ in $Y$.
Firstly, we deal with the case where $X \notin \operatorname{Min}(\phi)$. By Remark~\ref{RB3} and Lemma~\ref{LB1} there is an $f$-legal loop $\gamma \in A_{\Delta}$ in the tension
graph of $f$. Its image, $f(\gamma)$, crosses only candidate critical turns, by the definition of
candidate critical turns; in particular, it does not cross any of
the $\tau_i$'s. In other words, $f(\gamma)$ is $p$-legal and therefore, as $p$ is a local
isometry on every edge, $L_Y(p(f(\gamma))) = L_X(f(\gamma))$.
On the other hand, again because $p$ is a local isometry, $L_Y(p(\gamma)) \leq L_X(\gamma)$
which means that $$\lambda_{\phi}(Y) \geq \frac{L_Y(p(f(\gamma)))}{L_Y(p(\gamma))} \geq
\frac{L_X(f(\gamma))}{L_X(\gamma)} = \lambda_{\phi} (X) > \lambda(\phi).$$
Suppose now $X\in\operatorname{Min}(\phi)$. In this case we may assume that $f$ is a train track map
representing $\phi$ (in particular $f$ is a minimal optimal map, see Section~\ref{definitions}). All $\tau_i$ are $f$-legal because of simplex-regularity.
Let's first assume that there is some $\tau_j$ which is mapped by $f$ to a turn distinct from any of the $\tau_i$'s. Then, by Corollary~\ref{CandLegal2}, there is an $f$-legal loop $\gamma$ (which is in the tension graph) such that,
\begin{enumerate}
\item $\sum_{i=1}^k \#(\gamma,\tau_i) = 1$,
\item $\sum_{i=1}^k \#(f(\gamma),\tau_i) =0$.
\end{enumerate}
Hence, $L_Y(p(\gamma)) < L_X(\gamma)$ whereas, $L_Y(p(f(\gamma))) = L_X(f(\gamma))$. Hence,
$$
\lambda_{\phi}(Y) \geq \frac{L_Y(p(f(\gamma)))}{L_Y(p(\gamma))} > \frac{L_X(f(\gamma))}{L_X(\gamma)} = \frac{\lambda_\phi(X)L_X(\gamma)}{L_X(\gamma)} = \lambda (\phi).
$$
Otherwise, $f$ must leave the set of the $\tau_i$'s invariant. We will now work with the
$\langle\sim_{f^k}\rangle$-legal structure, in order to ensure that the image of a legal loop
is again legal. Since all $\tau_i$ are legal and the set of the $\tau_i$'s is invariant under $f$, they are also
$\langle\sim_{f^k}\rangle$-legal.
Let $\Sigma$ be the set of $\langle\sim_{f^k}\rangle$-legal loops $\gamma$ in $X$ that satisfy
$$\sum_{i=1}^k\#(\gamma,\tau_i)=1.$$
By Corollary~\ref{candlegal3}, the set $\Sigma$ is not empty.
Let $$C=\sup\{L_X(\gamma)-L_Y(p(\gamma)): \gamma\in \Sigma\} .$$
The Bounded Cancellation Lemma (see for
instance~\cite[Proposition~3.12]{Horbez}) and discreteness show that $C$ is a maximum, which
means that $C$ is realised by some loop $\gamma_C$. Moreover, since the $\tau_i$'s are
$p$-illegal, $C>0$. We claim that $\gamma_C$ can be chosen so that $f(\gamma_C)$ also belongs
to $\Sigma$. This will be enough as, since $\gamma_C$ realises $C$, then $L_Y(p(\gamma_C))= L_X(\gamma_C)-C$, while
$L_Y(p(f(\gamma_C)))\geq L_X(f(\gamma_C))-C$ (because $f(\gamma_C) \in \Sigma$). But then
$$
\lambda_{\phi}(Y) \geq \frac{L_Y(p(f(\gamma_C)))}{L_Y(p(\gamma_C))} \geq \frac{L_X(f(\gamma_C)) -
C}{L_X(\gamma_C) - C} = \frac{\lambda_\phi(X)L_X(\gamma_C) - C}{L_X(\gamma_C)
- C} > \lambda (\phi)
$$
where the strict inequality follows from the fact that $\lambda_\phi(X)= \lambda(\phi) > 1$.
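For completeness: writing $\lambda=\lambda_\phi(X)=\lambda(\phi)>1$ and $L=L_X(\gamma_C)$, and noting that the denominator $L-C=L_Y(p(\gamma_C))$ is positive, the strict inequality is the elementary manipulation

```latex
\frac{\lambda L - C}{L - C} > \lambda
\;\Longleftrightarrow\;
\lambda L - C > \lambda L - \lambda C
\;\Longleftrightarrow\;
\lambda C > C
\;\Longleftrightarrow\;
\lambda > 1,
```

using $C>0$ in the last equivalence.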
We prove now our claim. Consider any $\gamma\in\Sigma$ realising the maximum $C$.
By Proposition~\ref{CandLegal0}, $\gamma$ can be modified via turn surgeries to a
$\langle\sim_{f^k}\rangle$-legal loop $\gamma'$
such that $\sum_i\#(f(\gamma'),\tau_i)=1$. Note that such surgeries occur only at
non-free vertices.
It remains to show that the performed surgeries do not affect the $p$-cancellation of the
original loop $\gamma$. As $\tau_j$ is the unique $p$-illegal turn crossed by $\gamma$, there exist
sub-paths $\alpha, \beta$ of $\gamma$ so that $p(\alpha) = p(\beta)$, the first edge of
$\alpha$ together with the first edge of $\beta$ form the turn $\tau_j$, and $L_X(\alpha) =
L_X(\beta) = C/2$. That is, $\alpha$ and $\beta$ are the sub-paths of $\gamma$ which realise
the $p$-cancellation. By Lemma~\ref{cancel}, both $\alpha$ and $\beta$ cross only free turns,
and so the performed surgeries affected neither $\alpha$ nor $\beta$. As the turn
$\tau_j$ is not affected by the surgeries either, it follows that the $p$-cancellation of
$\gamma'$ is the same as the $p$-cancellation of $\gamma$, that is to say
$$ L_X(\gamma')-L_Y(p(\gamma')) = L_X(\gamma)- L_Y(p(\gamma))=C$$
as we wanted.
\end{proof}
\section{Exploring the Minset}
Proposition~\ref{FoldingCandidateRegular2} tells us that if we want to travel along
$\operatorname{Min}(\phi)$, we have to perform only simplex critical turns. Given two simplices
$\Delta,\Delta^1$, one a face of the other, we can easily go from
$\Delta$ to $\Delta^1$ in a few steps by simple folds. However, even if both simplices intersect
$\operatorname{Min}(\phi)$, such folds need not be
simplex critical. Nonetheless, there may exist an, a priori longer, folding path between them that uses only
simplex critical folds.
\begin{defn}
Let $[\phi]\in\operatorname{Out}(\G)$ and $\Delta$ be a simplex in $\O(\G)$.
We call the {\bf simplex critical neighbourhood} of $\Delta$ of radius 1 the set of all the
simplices of $\O(\G)$ which can be obtained from $\Delta$ via a simplex critical fold,
including $\Delta$ itself.
The simplex critical neighbourhood of $\Delta$ of radius $n+1$ is the union of the simplex critical neighbourhoods of radius 1 of all simplices in the simplex critical neighbourhood of $\Delta$ of radius $n$.
\end{defn}
\begin{rem}
By Theorem~\ref{critical}, if $\phi$ is irreducible with $\lambda(\phi) > 1$, then any simplex critical neighbourhood of finite radius consists of finitely many simplices.
\end{rem}
\begin{defn}
Let $X$ be a point of $\O(\G)$ and denote by $\Delta = \Delta_X$ the
corresponding simplex. The dimension $D(\Delta)$ of $\Delta$ is the number of edges
of $X$.
We denote by $D=D(\G)$ the dimension of $\O(\G)$, i.e. the maximum
number of edges we see in elements of $\O(\G)$.
In $\O_1(\G)$ the dimension of the simplex containing $X$ is one less, as is the dimension of the entire space.
\end{defn}
\begin{thm}
\label{locfin}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible automorphism with $\lambda (\phi) > 1$.
Let $\Delta, \Delta^1$ be open simplices of $\O(\G)$, both intersecting
$\operatorname{Min}(\phi)$, and such that $\Delta$ is a face of $\Delta^1$. Then, $\Delta^1$ is
contained in the simplex critical neighbourhood of $\Delta$ of radius $2 D(\G)^2$.
In particular, $\operatorname{Min}(\phi)$ is locally finite.
We immediately deduce the same statement for $\O_1(\G)$.
\end{thm}
\begin{proof}
We will show there exists a sequence of simplices,
$\Delta_0, \Delta_1, \ldots, \Delta_n$, with $\Delta_0 = \Delta$ and $\Delta_n =
\Delta^1$, where $n \leq 2 (D(\G))^2 $, and such that each $\Delta_{i+1}$ is obtained by
folding a $\Delta_i$-simplex critical turn.
The underlying graph of $\Delta$ is obtained from that of $\Delta^1$ by collapsing a
forest $\F$ each of whose trees contains at most one non-free vertex. We apply
Lemma~\ref{unfold}, to get a straight map, $p:\operatorname{unf}(Y) \to Y$ which is locally isometric
on edges. Subdivide $\operatorname{unf}(Y)$ so that the $p$-image of each subdivided edge is a single edge in $Y$.
Proposition~\ref{foldable} tells us that there must exist a $p$-illegal turn that is also simplex critical. Fold this turn; this is an isometric fold directed by $p$, since $p$ is an isometry on edges, and we get a map $p_1:X_1\to
Y$ which, by Lemma~\ref{unfold}, satisfies conditions $(1)$ and $(2)$ of
Proposition~\ref{foldable}, which therefore applies. This process recursively defines a
folding path from $\operatorname{unf}(Y)$ to $Y$, directed by $p$. The length of such folding path is
bounded by $2D(\Delta^1)^2$ by Lemmas~\ref{redsimpvol} and~\ref{unfold}.
To conclude the proof, note that any simplex of $\O(\G)$ adjacent to $\Delta$ and
lying in $\operatorname{Min}(\phi)$ either has $\Delta$ as a face, or is a face of $\Delta$. The
first case is dealt with above, and in the second case, we know that there are at most
$2^{D(\Delta)} \leq 2 ^{D(\G)}$ faces.
The statement for $\O_1(\G)$ follows since the displacement function is invariant under change of volume.
\end{proof}
We now show that Theorem~\ref{locfin} may be strengthened to show that the minimally displaced
set is {\em uniformly} locally finite. That is, there is a uniform bound (depending only on
$\lambda(\phi)$ and $D(\G)$) on the number of
simplices adjacent to a given simplex in $\operatorname{Min}(\phi)$ which are also in $\operatorname{Min}(\phi)$. In what
follows we do not focus on optimal bounds.
\begin{defn}
Let $\Delta$ be a simplex in $\O(\G)$. Then we define the centre $X_{\Delta} \in
\Delta$ to be the graph where all edges have the same length.
\end{defn}
Since we are interested in the function
$\lambda_{\phi}$, which is scale invariant, we may scale the metric on
$X_{\Delta}$ as we wish; we will therefore decree that all the edges of $X_{\Delta}$
have length $1$.
\begin{lem}\label{Lin} Let $[\phi]\in\operatorname{Out}(\G)$. For any $X,Y\in \O(\G)$ we have
$$\frac{\lambda_\phi(X)}{\lambda_\phi(Y)}\leq\Lambda(X,Y)\Lambda(Y,X).$$
\end{lem}
\begin{proof}
This immediately follows from the non-symmetric {\em triangle inequality}:
$$
\lambda_{\phi}(X)=\Lambda(X,\phi X) \leq \Lambda(X,Y) \Lambda(Y,\phi Y)
\Lambda(\phi Y, \phi X)=\Lambda(X, Y) \lambda_{\phi}(Y) \Lambda( Y, X).
$$
\end{proof}
As above,
$D=D(\G)=\dim(\O(\G))$ is the maximum number of (orbits of) edges we see in elements of $\O(\G)$.
\begin{lem}
\label{numfold}
Let $[\phi] \in \operatorname{Out}(\G)$, and $\Delta, \Delta'$ be simplices in $\O(\G)$ such
that $\Delta$ is a face of $\Delta'$. Then
$$
\frac{1}{2D}\lambda_{\phi}(X_{\Delta}) \leq \lambda_{\phi}(X_{\Delta'}) \leq 2D\lambda_{\phi}(X_{\Delta}).
$$
\end{lem}
\begin{proof}
By Lemma~\ref{Lin}, it is sufficient to prove that
$$
\Lambda(X_{\Delta}, X_{\Delta'}) \Lambda(X_{\Delta'}, X_{\Delta}) \leq 2D.
$$
Since the product on the left is scale invariant (even though each factor is not) we
are free to choose the volumes for each of the points. Specifically, the two centres
have different volumes, as we set every edge to have length $1$.
In particular,
$$
\Lambda(X_{\Delta'}, X_{\Delta}) \leq 1,
$$
as loops become shorter when we collapse a forest (since the length in each case is a count of the number of edges).
To complete the argument, we consider the map $p:\operatorname{unf}(X_{\Delta'})\to
X_{\Delta'}$ given by Lemma~\ref{unfold}, and we read it as a map from $X_\Delta\to X_{\Delta'}$.
It is easy to see that the image of an edge under this map cannot cross the same edge more than twice (usually no more than once, but twice may happen if
$\Delta$ has some edge-loop). It follows that $L_{X_{\Delta'}}(p(\gamma))\leq
2D\,L_{X_\Delta}(\gamma)$. Hence $\Lambda(X_\Delta,X_{\Delta'})\leq 2D$.
\end{proof}
The maximum number of vertices we see in elements of $\O(\G)$ is bounded by $2D$.
Denote by $M=M(\G)$ the maximal cardinality of finite vertex groups. Set $K=K(\G)=D+M+1$.
\begin{lem}
\label{numcrit}
Let $[\phi]\in\operatorname{Out}(\G)$, $\Delta$ be a simplex of $\O(\G)$, and
$\mathcal{C}_{\Delta}$ be the corresponding set of simplex critical
turns. Then
$$
|\mathcal{C}_{\Delta}| \leq (10K)!\,\lambda_{\phi}(X_{\Delta}).
$$
\end{lem}
\begin{proof}
Since all the edges of $X_{\Delta}$ have length $1$, the number of turns crossed by a
loop is then equal to
its length in $X_{\Delta}$. Hence, the number of turns crossed by an element of
$A_\Delta$ is bounded above by
$$
\lambda_{\phi}(X_{\Delta}) 4 D |A_{\Delta}|,
$$
where the term $4 D$ appears as the maximum length of a loop in $A_{\Delta}$, as read in $X_{\Delta}$ (see Definition~\ref{ADelta}). Note that the length in $X_{\Delta}$ is simply the number of edges.
To estimate $|A_{\Delta}|$, we count the number of sequences of edges, where each edge
appears at most $4$ times - ignoring the incidence relations to simplify matters. The
number of sequences of $n$ objects of length $k$, is $n!/(n-k)! \leq n!$, and so the
number of sequences of $n$ objects of length at most $n$ is bounded by $(n+1)!$. For
building a loop in $A_\Delta$ we have $D$ edges, each of which appear at
most $4$ times and, taking in account the group elements, the total number of objects
we can use to write an element of $A_\Delta$ is $8D$. Hence
$$
|A_{\Delta}| \leq (8D+1)!
$$
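The counting bound used above (sequences of $n$ objects of length at most $n$) unwinds as

```latex
\sum_{k=0}^{n} \frac{n!}{(n-k)!}
\;\leq\; \sum_{k=0}^{n} n!
\;=\; (n+1)\, n!
\;=\; (n+1)!,
```

applied here with $n=8D$ objects.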
This only bounds the number of turns crossed by elements of $A_\Delta$.
In order to bound the simplex critical turns, we need to add all turns based at
vertices with finite vertex group, and all
the illegal turns for a putative train track map. The former is bounded by the number
of possible pairs of germs of edges multiplied by the cardinality of finite vertex
group, hence
by $M(\G)D^2$. The latter, because of Lemma~\ref{L0}, is bounded
by the number of pairs of edges; hence by $D^2$. In
total we have
$$\lambda_\phi(X_\Delta)4D(8D+1)!+(M+1)D^2\leq \lambda_\phi(X_\Delta)4K(8K)!+K^3$$
and the result follows.
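Indeed, since $\lambda_\phi(X_\Delta)\geq 1$, the right-hand side is bounded by

```latex
\lambda_\phi(X_\Delta)\,4K(8K)! + K^3
\;\leq\; \lambda_\phi(X_\Delta)\,\bigl(4K(8K)! + K^3\bigr)
\;\leq\; (10K)!\,\lambda_\phi(X_\Delta),
```

where the last step uses the crude estimate $4K(8K)!+K^3\leq (10K)!$, valid for every $K\geq 1$.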
\end{proof}
\begin{lem}
\label{numirred}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element, and suppose that $\Delta$ is an
open simplex of $\O(\G)$ which contains a point of $\operatorname{Min}(\phi)$. Then,
$$
\lambda_{\phi}(X_{\Delta}) \leq 9 \dim\O(\G)\lambda(\phi)^{3D+2}.
$$
\end{lem}
\begin{proof}
Let $X_{\min}$ denote a point of $\Delta$ which is minimally displaced by $\phi$.
We will use the fact that $X_{\min}$ is $\epsilon$-thick, as in
\cite[Proposition~10]{BestvinaBers} (it is proved there in $CV_n$, but the proof is the same in this
context, see also~\cite[Section 8]{FM13}). That is, since $\phi$ is irreducible, there
is a lower bound on $L_X(\gamma)/\operatorname{vol}(X)$ that
depends only on $\lambda(\phi)$ (and not on $X\in\O(\G)$ nor on the non-elliptic
element
$\gamma$). Concretely, this lower bound can be taken to be the reciprocal of
$C(\phi)=3\dim(\O(\G))\lambda(\phi)^{3\dim(\O(\G))+1}$. If we normalise
$X_{\min}$ to have volume $1$ then $\Lambda(X_\Delta,X_{\min})\leq 1$. Moreover,
since the stretching factor
$\Lambda$ is realised by candidate loops of simplicial length at most $3$ (by the Sausage
Lemma~\cite[Theorem~$9.10$]{FM13}),
we can then deduce that,
$$
\Lambda(X_{\min}, X_{\Delta}) \Lambda(X_{\Delta}, X_{\min}) \leq
3C(\phi) \Lambda(X_{\Delta}, X_{\min})\leq 3C(\phi).
$$
The result now follows from Lemma~\ref{Lin}.
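Explicitly, since $\lambda_\phi(X_{\min})=\lambda(\phi)$, Lemma~\ref{Lin} combined with the displayed bound gives

```latex
\lambda_{\phi}(X_{\Delta})
\;\leq\; \Lambda(X_{\Delta}, X_{\min})\,\Lambda(X_{\min}, X_{\Delta})\,\lambda(\phi)
\;\leq\; 3C(\phi)\,\lambda(\phi)
\;=\; 9\dim\O(\G)\,\lambda(\phi)^{3\dim\O(\G)+2}.
```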
\end{proof}
\begin{cor}
\label{unif}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element with $\lambda (\phi) > 1$. Then $\operatorname{Min}(\phi)$ is uniformly locally finite.
\end{cor}
\begin{proof}
By Theorem~\ref{locfin}, it is sufficient to show that the simplex critical
neighbourhood $N$ of radius $2D^2$, of a minimally displaced simplex $\Delta_0$,
contains a uniformly bounded number of simplices (the number of faces of a simplex is
always uniformly bounded).
Hence it suffices to uniformly bound the cardinality of $\mathcal C_\Delta$ for each
simplex we encounter in $N$.
By Lemma~\ref{numfold}, $\lambda_\phi(X_\Delta)\leq
(2D)^{2D^2}\lambda_{\phi}(X_{\Delta_0})$, which is uniformly bounded by Lemma~\ref{numirred}.
Lemma~\ref{numcrit} completes the proof.
\end{proof}
\section{Definitions and basic results used in the paper}
\label{definitions}
Our notation and definitions are quite standard. We briefly recall them here, referring the reader
to~\cite{FM18I} for a detailed discussion.
\begin{defn}
A free splitting $\G$ of a group $G$ is a decomposition of $G$ as a free product $G_1*\cdots
*G_k*F_n$ where $F_n$ is the free group of rank $n$. We admit the {\em trivial} splitting
$G=F_n$. We do not require that the groups $G_i$'s are indecomposable.
\end{defn}
\begin{defn}
A simplicial $G$-tree is a simplicial tree $T$ endowed with a faithful simplicial action of
$G$. $T$ is minimal if it has no proper $G$-invariant sub-tree. A $G$-graph is a graph of
groups whose fundamental group, as a graph of groups, is isomorphic to $G$. The action of $G$
on a $G$-tree is called a marking.
\end{defn}
\begin{defn}\label{G-graphs}
Let $\G$ be a free splitting of $G$. In terms of Bass-Serre theory, a $\G$-tree is
the tree dual to $\G$, and a $\G$-graph is the corresponding graph of groups.
More explicitly, a
$\G$-tree is a simplicial $G$-tree $T$ such that:
\begin{itemize}
\item For every $G_i$ there is exactly one orbit of vertices whose stabilizer is conjugate to
$G_i$. Such vertices are called {\em non-free}. Other vertices are called {\em free}.
\item $T$ has trivial edge stabilisers.
\end{itemize}
A $\G$-graph is the graph of groups dual to a $\G$-tree. Namely, it is a finite connected $G$-graph of groups $X$, along with an isomorphism $\Psi_X: G \to \pi_1(X)$ - a marking - such that:
\begin{itemize}
\item $X$ has trivial edge-groups;
\item the fundamental group of $X$ as a topological space is $F_n$;
\item the splitting given by the vertex groups is equivalent, via $\Psi_X$, to $\G$. That is, $\Psi_X$ restricts to a bijection from the conjugacy classes of the $G_i$ to the vertex groups of $X$.
\end{itemize}
\end{defn}
The universal cover of a $\G$-graph is a $\G$-tree and the $G$-quotient of a $\G$-tree is a $\G$-graph.
\begin{defn}
Let $\G$ be a splitting of a group $G$. The Outer Space of $\G$, also known as deformation space of
$\G$, and denoted $\O(\G)$ is the set of classes of minimal, simplicial, metric $\G$-graphs $X$ with no
redundant vertex. (The equivalence relation is
given by $G$-isometries.)
We denote by $\O_1(\G)$ the volume $1$ subset of $\O(\G)$.
\end{defn}
$\O(\G)$ can also be regarded as a set of $\G$-trees, but in the present paper we adopt the
graph view-point.
Given a graph of groups $X$ with trivial edge groups, we denote by $\O(X)$
the corresponding deformation space $\O(\pi_1(X))$ (we notice that $X\in\O(X)$ if $X$ is a core
graph with no redundant vertex) where $\pi_1(X)$ is endowed with the splitting given by vertex groups.
We refer the reader to~\cite{FM13,FM18I,GuirardelLevitt} for more details
on deformation spaces.
\begin{defn}
Let $X\in\O(\G)$. The simplex $\Delta_X$ is the set of marked metric graphs obtained from
$X$ by just changing edge-lengths. Since edge-lengths are strictly positive, we think of $\Delta_X$
as (a cone over) an open simplex. Given a simplex $\Delta$ one can consider
the closure of $\Delta$ in $\O(\G)$ or its simplicial bordification. Namely, faces of
$\Delta$ come in two flavours: those in $\O(\G)$, called finitary faces,
and those not in $\O(\G)$ (typically in other deformation spaces) called faces at infinity.
We also have a simplex $\Delta_1(X)$ in $\O_1(\G)$ - the intersection of $\Delta(X)$ with
$\O_1(\G)$ - which is a standard open simplex of one dimension less.
\end{defn}
There are two topologies on $\O(\G)$, both of which restrict to the Euclidean topology on each simplex; these are the weak topology and the axes or Gromov-Hausdorff topology. The topology induced by the Lipschitz metric is the latter one.
\begin{defn}
Let $G$ be endowed with the splitting $\G:G=G_1*\dots*G_k*F_n$.
The group of automorphisms of $G$
that preserve the set of conjugacy classes of the $G_i$'s is
denoted by $\operatorname{Aut}(\G)$. We set
$\operatorname{Out}(\G)=\operatorname{Aut}(\G)/\operatorname{Inn}(G)$.
\end{defn}
The group $\operatorname{Aut}(\G)$ naturally acts on $\O(\G)$ by precomposition with the marking, and
$\operatorname{Inn}(G)$ acts trivially, so $\operatorname{Out}(\G)$ acts on $\O(\G)$.
Since the volume is invariant under this action, we also get an action of $\operatorname{Aut}(\G)$ and $\operatorname{Out}(\G)$ on $\O_1(\G)$.
\begin{defn} Given a splitting $\G$ of $G$, and $X,Y\in\O(\G)$, a map $f:X\to Y$ is called an
$\O$-map at the level of trees if it is Lipschitz-continuous and $G$-equivariant (resp., a $\G$-map between graphs of groups is called an $\O$-map if it is the projection of an $\O$-map at the level of the universal covers). The Lipschitz constant of $f$ is denoted by $\operatorname{Lip}(f)$.
\end{defn}
\begin{defn}
Let $X,Y$ be two metric graphs.
A Lipschitz-continuous map $f:X\to Y$ is
{\em straight} if it has constant speed on edges, that is to say, for any edge $e$
of $X$ there is a non-negative number $\lambda_e(f)$ such that edge $e$ is uniformly stretched
by a factor $\lambda_e(f)$.
A straight map between elements of $\O(\G)$ is always supposed to be an $\O$-map.
\end{defn}
\begin{rem} $\O$-maps always exist and the images of non-free vertices are
determined a priori by equivariance
(see for instance~\cite{FM13}).
For any $\O$-map $f$ there is a unique straight map denoted by $\operatorname{Str}(f)$, which is homotopic,
relative to vertices, to $f$. We have $\operatorname{Lip}(\operatorname{Str}(f))\leq \operatorname{Lip}(f)$.
\end{rem}
\begin{defn}\label{deftg}
Let $f:X\to Y$ be a straight map. We set $\lambda_{\max}(f)=\max_{e}\lambda_e(f)=\operatorname{Lip}(f)$ and
define the {\em tension graph} of $f$ as the set
$$\{e \text{ edge of } X : \lambda_e(f)=\lambda_{\max}\}.$$
\end{defn}
\begin{defn}
Given $X,Y\in\O(\G)$ we define $\Lambda(X,Y)$ as the infimum of Lipschitz constants of
$\O$-maps from $X$ to $Y$. That infimum is in fact a minimum and coincides with $\max_\gamma
\frac{L_Y(\gamma)}{L_X(\gamma)}$ where $\gamma$ runs over the set of loops in $X$ (seen as a
graph). (See for instance~\cite{FM11,FM13,FM18I}.)
\end{defn}
\begin{defn}
A gate structure on a graph of groups $X$ is a $G$-equivariant equivalence relation on germs of edges at
vertices of the universal cover, $\widetilde{X}$, of $X$. A \textit{train track structure} on a graph of groups $X$ is a gate structure on $X$ with at least two gates at each vertex.
\end{defn}
\begin{rem}
For a straight map $f:X\to Y$, we consider two different gate structures, which we denote by
$\sim_f$ and $\langle\sim_{f^k}\rangle$, the latter being defined only if $X=Y$. Two germs of $X$ are $\sim_f$-equivalent, if they have the same non-collapsed $f$-image and they are $\langle\sim_{f^k}\rangle$-equivalent, if they have the same non-collapsed $f^k$-image for some positive integer $k$.
This second gate structure - $\langle\sim_{f^k}\rangle$ - only makes sense if $Y$ is the same topological object as $X$, so that we may iterate $f$. However, $Y$ will usually be a different point of $\O(\G)$ since the $G$-action will be different.
We refer to $\sim_f$, as the gate structure which is induced by $f$.
\end{rem}
\begin{defn}
A turn of $X\in\O(\G)$ is (the $G_v$-orbit of) an unoriented pair of germs of edges based
at a vertex $v$ of $X$. A turn is legal if its germs are not in the same gate. A simplicial
path in $X$ is legal if it crosses only legal turns. Legality here depends on the gate structure, which for us will either be the $\sim_f$ or $\langle\sim_{f^k}\rangle$ structure for some $\O$-map $f$ with domain $X$.
\end{defn}
\begin{defn}\label{defnmo}
Let $X,Y\in\O(\G)$. A straight map $f:X\to Y$ is said to be optimal if
$\operatorname{Lip}(f)=\Lambda(X,Y)$ and every vertex of the tension graph is at least two-gated (i.e. the gate structure $\sim_f$ is a train track structure on the tension graph). An optimal map is minimal if every edge of the tension graph extends to a legal loop in the tension graph (not all optimal maps are minimal, but minimal optimal maps always exist (see~\cite{FM18I})).
\end{defn}
\begin{defn}
Given $[\phi]\in\operatorname{Out}(\G)$ and $X\in\O(\G)$ we say that an $\O$-map $f:X\to \phi X$
represents $\phi$.
Note that $X$ and $\phi X$ are the same graph with different markings, so we sometimes abuse notation by saying that $f$ is a map $f: X \to X$ which represents $\phi$.
In this situation we can speak of the $\langle\sim_{f^k}\rangle$ gate structure.
\end{defn}
Any $\phi$ is represented by a minimal optimal map (see~\cite{FM18I}).
\begin{defn}
We call $[\phi] \in \operatorname{Out}(\G)$ {\em reducible} if there exists an $\O$-map $f: X \to X$
representing $\phi$, a lift $\widetilde f:\widetilde X\to \widetilde X$ and a $G$-subforest, $Y \subsetneq \widetilde
X$ which is $\widetilde f$-invariant and contains the axis of a hyperbolic element. Otherwise $[\phi]$ is called irreducible.
\end{defn}
\begin{rem}
An automorphism $\phi$ is called {\em iwip} - irreducible with irreducible powers - if every positive iterate of $\phi$ is irreducible. We mention this for completeness, but we are concerned with the general irreducible class for this paper.
\end{rem}
\begin{defn}
A straight map $f:X\to X$ representing $\phi$ is a train track map, if there is a train track structure on $X$ so that:
\begin{itemize}
\item $f$ maps edges to legal paths
\item If $f(v)$ is a vertex, then $f$ maps inequivalent germs at $v$ to inequivalent
germs at $f(v)$.
\end{itemize}
\end{defn}
\begin{rem} (See~\cite{FM13, FM18I} for more details):
\begin{enumerate}
\item If $f:X\to X$ is a train track map representing $\phi$ (with respect to some gate structure), then it is a train track map with respect to the $\langle\sim_{f^k}\rangle$ gate structure.
\item Irreducible elements of $\operatorname{Out}(\G)$ admit train track representatives.
\end{enumerate}
\end{rem}
\begin{defn}
Given $[\phi]\in\operatorname{Out}(\G)$ the displacement function $$\lambda_\phi:\O(\G)\to\mathbb{R}$$
is defined by $$\lambda_\phi(X)=\Lambda(X,\phi X).$$ If $f:X\to X$ is any $\O$-map
representing $\phi$, then $$\lambda_\phi(X)=\sup_{\gamma}\frac{L_X(\phi\gamma)}{L_X(\gamma)}$$
where the sup is taken over all loops $\gamma$ in $X$ (and it is actually a maximum
by~\cite{FM13,FM18I}) and $L_X(\gamma)$ denotes the reduced length of $\gamma$.
For a simplex $\Delta$ we define $$\lambda_\phi(\Delta)=\inf_{X\in\Delta}\lambda_\phi(X)$$
and $$\lambda(\phi)=\inf_{X\in\O(\G)}\lambda_\phi(X).$$
\end{defn}
\begin{rem}
Note that the displacement function - $\lambda_{\phi}$ - is invariant under change of volume, so one can work interchangeably between $\O(\G)$ and $\O_1(\G)$.
\end{rem}
\begin{thm}[\cite{BestvinaBers,FM13,FM18I}]
Given $[\phi] \in \operatorname{Out}(\G)$ we define,
$$\operatorname{Min}(\phi) = \{ X \in \O(\G) \ : \ \lambda_{\phi}(X) = \lambda(
\phi)\}.$$
(Similarly for $\O_1(\G)$.) Then, if $\phi$ is irreducible, $\operatorname{Min}(\phi)$ is non-empty and coincides with the set of points which admit a train track map representing $\phi$.
\end{thm}
\begin{rem} There is also a generalisation of the previous theorem for reducible automorphisms,
but in that case $\operatorname{Min}(\phi)$ may be empty in $\O(\G)$. In any case $\operatorname{Min}(\phi)$ is never
empty if we add to $\O(\G)$ the simplicial bordification at infinity.
While we don't use the following in this paper, it seems worthwhile mentioning that $\operatorname{Min}(\phi)$
coincides with the set of points supporting partial train-tracks (which reduce to classical
train-tracks in the irreducible case). (See~\cite{FM13,FM18I} for more details.)
\end{rem}
\begin{lem}\label{wasfootnote}
Let $[\phi]\in\operatorname{Out}(\G)$ be an irreducible element and let $X$ be a minimally displaced
point. Let $f:X\to X$ be an optimal map representing $\phi$. Then:
\begin{enumerate}
\item The tension graph of $f$ is the whole $X$.
\item If $f$ is train track then it is a minimal optimal map.
\end{enumerate}
\end{lem}
\begin{proof}
By \cite[Lemma~4.16]{FM18I} (see also~\cite{FM13}), since $f$ is an optimal map representing
$\phi$, its tension graph contains an $f$-invariant sub-graph. By irreducibility, that sub-graph must be the whole of $X$. In~\cite{FM18I} (or also in~\cite{FM13}) it is proved
that if $f$ is a train track map and
it is not minimal, then any neighbourhood of $X$ in $\operatorname{Min}(\phi)$ supports an optimal map
representing $\phi$ whose tension graph is not the whole of $X$, contradicting point $(1)$.
\end{proof}
\section{Introduction}
\label{intro}
The purpose of this paper is to compare the field of
local times of a discrete-time
Markov process with the corresponding field of i.i.d.\
random variables distributed
according to the stationary measure of this process,
in total variation distance.
We mention that local times (also called occupation times)
of Markov processes are a very well studied subject.
It is frequently possible to obtain a complete
characterization of the law of this field
in terms of some Gaussian random field or process,
especially in continuous time (and space) setup.
The reader is probably familiar with
Ray-Knight theorems as well as Dynkin’s and Eisenbaum’s
isomorphism theorems; cf.\ e.g.\ \cite{R,Szn12}.
One should observe, however, that these theorems
usually work in the case when the underlying Markov
process is reversible and/or symmetric in some sense.
To explain what we are doing in this paper,
let us start by considering the following example:
let $(X_j)_{j\geq 1}$ be a Markov chain
on the state space $\Sigma=\{0,1\}$,
with the following transition probabilities:
$\mathbb{P}[X_{n+1}=k\mid X_n=k]=1-\mathbb{P}[X_{n+1}=1-k\mid X_n=k]
=\frac{1}{2}+\varepsilon$ for $k=0,1$,
where~$\varepsilon\in(0,\frac{1}{2})$ is small.
Clearly, by symmetry, $(\frac{1}{2},\frac{1}{2})$ is
the stationary distribution of this Markov chain.
Next, let $(Y_j)_{j\geq 1}$
be a sequence of i.i.d.\ Bernoulli random variables
with success probability~$\frac{1}{2}$.
What can we say about the distance in total variation between
the laws
of $(X_1,\ldots,X_n)$ and $(Y_1,\ldots,Y_n)$? Note that the
``na\"\i{}ve'' way of trying to force
the trajectories to be equal (given $X_1=Y_1$, use the maximal
coupling of $X_2$ and $Y_2$;
if it happened that $X_2=Y_2$, then try to couple $X_3$
and $Y_3$, and so on) works
only up to $n=O(\varepsilon^{-1})$. Even though this method is probably
not optimal, in this case it is easy to obtain that the total
variation distance converges to~$1$
as $n\to\infty$. This is because of the
following: consider the event
\[
\Xi^Z = \Big\{\frac{1}{n}\sum_{j=1}^{n-1}
\mathds{1}_{\{Z_j = Z_{j+1}\}} > \frac{1}{2} + \frac{\varepsilon}{2} \Big\},
\]
where $Z$ is~$X$ or~$Y$.
Clearly, the random variables $\mathds{1}_{\{Z_j = Z_{j+1}\}}$,
$j\in\{1,\ldots,n-1\}$ are i.i.d.\ Bernoulli,
with success probabilities $\frac{1}{2}+\varepsilon$ and $\frac{1}{2}$
for $Z=X$ and~$Z=Y$
correspondingly. Therefore, if $n\gg \varepsilon^{-2}$,
it is elementary to obtain that
$\mathbb{P}[\Xi^X]\approx 1$ and $\mathbb{P}[\Xi^Y]\approx 0$,
and so the total variation distance
between the \emph{trajectories} of~$X$ and~$Y$
is almost~$1$ in this case.
So, even in the case when the Markov chain gets quite close
to the stationary distribution
just in one step, usually it is not possible to couple its trajectory
with an i.i.d.\ sequence,
unless the length of the trajectory is relatively short.
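The dichotomy $\mathbb{P}[\Xi^X]\approx 1$, $\mathbb{P}[\Xi^Y]\approx 0$ is easy to see numerically. The following Python sketch (the parameters $\varepsilon=0.1$, $n=2000\gg\varepsilon^{-2}$ and the number of trials are illustrative, not part of the argument) estimates both probabilities by simulation:

```python
import random

def xi_event(n, eps, markov, rng):
    # One sample of the event Xi^Z: does the fraction of consecutive
    # agreements 1{Z_j = Z_{j+1}}, j = 1, ..., n-1, exceed 1/2 + eps/2?
    z = rng.randrange(2)
    agree = 0
    for _ in range(n - 1):
        if markov:
            # two-state chain: stay at the current state with prob. 1/2 + eps
            nxt = z if rng.random() < 0.5 + eps else 1 - z
        else:
            # i.i.d. fair bits
            nxt = rng.randrange(2)
        agree += int(nxt == z)
        z = nxt
    return agree / (n - 1) > 0.5 + eps / 2

rng = random.Random(1)
n, eps, trials = 2000, 0.1, 200
px = sum(xi_event(n, eps, True, rng) for _ in range(trials)) / trials
py = sum(xi_event(n, eps, False, rng) for _ in range(trials)) / trials
```

With these parameters the estimate `px` comes out close to $1$ and `py` close to $0$, so the event $\Xi$ separates the two laws.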
Assume, however, that we are not interested in the exact
trajectory of~$X$ or~$Y$,
but rather, say, in the number of visits to~$0$ up to time~$n$.
That is, denote
\[
L_n^Z(0) = \sum_{j=1}^n \mathds{1}_{\{Z_j=0\}}
\]
for $Z=X$ or $Y$. Are $L_n^X(0)$ and $L_n^Y(0)$
close in total variation distance
for \emph{all}~$n$?
Well, the random variable $L_n^Y(0)$ has the binomial
distribution with parameters~$n$
and~$\frac{1}{2}$, so it is approximately Normal with
mean~$\frac{n}{2}$
and standard deviation~$\frac{\sqrt{n}}{2}$. As for $L_n^X(0)$,
it is elementary to obtain that it is
approximately Normal with mean~$\frac{n}{2}$ and
standard deviation~$\sqrt{n}\big(\frac{1}{2}+O(\varepsilon)\big)$.
Then, it is also elementary to obtain that the total variation distance between
these two Normals is~$O(\varepsilon)$, \emph{uniformly} in~$n$
(indeed, that total variation distance equals the total variation distance
between the Standard Normal and the centered Normal with variance
$(1+O(\varepsilon))^2$; that distance is easily verified to be of order~$\varepsilon$).
This \emph{suggests} that the total variation distance between~$L_n^X(0)$
and~$L_n^Y(0)$ should be also of order~$\varepsilon$ uniformly in~$n$.
Observe, by the way, that the distribution of the local times
of a two-state Markov chain can be explicitly written
(cf.~\cite{BG}), so one can obtain a rigorous proof
of the last statement in a direct way, after some work.
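The parenthetical claim above (total variation of order~$\varepsilon$ between the Standard Normal and the centered Normal with standard deviation $1+O(\varepsilon)$) is also easy to check numerically; the following Python sketch (ours, purely illustrative) integrates half the absolute difference of the two densities on a grid:

```python
import math

def tv_centered_normals(sigma, grid=20001, lim=10.0):
    """Total variation distance between N(0,1) and N(0, sigma^2),
    via trapezoidal integration of |phi_1(x) - phi_sigma(x)| / 2."""
    h = 2.0 * lim / (grid - 1)
    c = math.sqrt(2.0 * math.pi)
    total = 0.0
    for i in range(grid):
        x = -lim + i * h
        p = math.exp(-x * x / 2.0) / c
        q = math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * c)
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * abs(p - q)
    return 0.5 * total * h
```

Empirically, the ratio of the computed distance to~$\delta$ (for standard deviation $1+\delta$) stabilizes around $0.48$ as $\delta\to0$, consistent with the claimed linear order.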
Let us define the \emph{local time} of a stochastic process~$Z$
at site~$x$ at time~$n$ as the number of visits
to~$x$ up to time~$n$:
\begin{equation*}
L^Z_n(x) = \sum_{j=1}^n \mathds{1}_{\{Z_j=x\}}
\end{equation*}
(sometimes we omit the upper index when it is clear which process
we are considering).
The above example shows that, if one is only interested in the local
times of the
Markov chain (and not the complete trajectory),
then there is hope to obtain a coupling
with the local times of an i.i.d.\ random sequence
(which is much easier to handle).
Observe that there are many quantities of interest that
can be expressed in terms
of local times only
(and do not depend on the order), such as, for instance,
\begin{itemize}
\item hitting time of a site~$x$: $\tau(x) = \min\{n: L_n(x)>0\}$;
\item cover time: $\min\{n: L_n(x)>0 \text{ for all }x\in\Sigma\}$,
where $\Sigma$ is the space where the process lives;
\item blanket time~\cite{DLP}: $\min\{n\geq 1: L_n(x)\geq \delta n \pi(x)\text{ for all }x\in\Sigma\}$,
where $\pi$ is the stationary measure of the process
and $\delta\in (0,1)$ is a parameter;
\item disconnection time~\cite{DSzn,Szn10}:
loosely speaking, it is the time~$n$ when the set $\{x: L_n(x)>0\}$
becomes ``big enough'' to ``disconnect''
the space~$\Sigma$ in some precise sense;
\item the set of favorite (most visited) sites (e.g.~\cite{HS,T}):
$\{x: L_n(x)\geq L_n(y)\text{ for all }y\in\Sigma\}$;
\item and so on.
\end{itemize}
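As a concrete illustration of the first two items, the following Python sketch (ours; the names are invented) computes the hitting and cover times of a finite trajectory, using it only through the order in which the local times become positive:

```python
def hitting_time(trajectory, x):
    """tau(x) = min{n : L_n(x) > 0}: the first time x is visited
    (1-based; None if x is never visited)."""
    for n, z in enumerate(trajectory, start=1):
        if z == x:
            return n
    return None

def cover_time(trajectory, sigma):
    """min{n : L_n(x) > 0 for all x in sigma}: the first time every
    state of sigma has been visited (None if sigma is never covered)."""
    unseen = set(sigma)
    for n, z in enumerate(trajectory, start=1):
        unseen.discard(z)
        if not unseen:
            return n
    return None
```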
Therefore, if it is possible to obtain a coupling as above that
works with high probability, then that coupling may be useful.
Note also that, although
not every Markov chain comes close to
the stationary distribution in just one step, this
can sometimes be circumvented by considering
the process at times $k, 2k, 3k, \ldots$ with a large~$k$.
In particular, we expect that our results may be useful when
dealing with
\emph{excursion processes} (i.e., when~$\Sigma$ is a set
of excursions of a random walk).
One may e.g.~refer to \cite{MS}, cf.\ Lemma~2.2 there
(note that the order of excursions does not matter,
so one would be able to get rid of the factor~$m$
in the right-hand side);
also, we are working now on applications of our results to
the decoupling for random
interlacements~\cite{BGP}.
\section{Notations and results}
\label{notations}
We start describing the assumptions under which we will prove our main result.
Let~$(\Sigma,d)$ be a compact metric space, with~${\mathcal{B}}(\Sigma)$
representing its Borel~$\sigma$-algebra.
\begin{assump}\label{assump_1}
Assume that $(\Sigma,d)$ is of \emph{polynomial class}: there
exist some $\beta\geq0$ and~$\phi\geq 1$ such that
for all $r\in(0,1]$, the number of open balls of radius at most~$r$
needed to cover $\Sigma$ is smaller than or equal to $\phi r^{-\beta}$.
\end{assump}
As an example of metric space of polynomial class, consider first a
finite space~$\Sigma$, endowed with the discrete metric
\[
d(x,y)=\mathds{1}_{\{x\neq y\}}, \text{ for } x,y\in\Sigma.
\]
In this case, we can choose~$\beta=0$ and~$\phi=|\Sigma|$ (where~$|\Sigma|$
represents the cardinality of~$\Sigma$). As a second example, let us
consider~$\Sigma$ to be a compact $k$-dimensional Lipschitz
submanifold of~$\mathbb{R}^m$ with metric induced by the Euclidean norm of~$\mathbb{R}^m$.
In this case we can take~$\beta=k$, but $\phi$ will in general depend
on the precise structure of~$\Sigma$.
It is important to observe that, for a finite~$\Sigma$, it may not
be the best idea to use the above discrete metric; one may be better off
with another one, e.g., the metric inherited from the Euclidean
space where~$\Sigma$ is immersed (see e.g.\ the proof
of Lemma~2.9 of~\cite{CP}).
We consider a Markov chain~$X=(X_i)_{i\geq 1}$ with transition
kernel~$\mathop{\mathfrak{P}}(x,dy)$ on~$(\Sigma,{\mathcal{B}}(\Sigma))$, and we suppose that
the chain has a unique invariant probability measure~$\pi$.
Moreover, we assume the transition kernel to be absolutely continuous
with respect to~$\pi$, $\mathop{\mathfrak{P}}(x,\cdot)\ll\pi(\cdot)$ for all~$x\in\Sigma$.
Let us denote by~$p(x,\cdot)$ the Radon-Nikodym derivative
(i.e., \emph{density}) of~$\mathop{\mathfrak{P}}(x,\cdot)$
with respect to~$\pi$: for~$x\in\Sigma$,
\begin{align*}
\mathop{\mathfrak{P}}(x,A) = \int_{A} p(x,y) \pi(dy), \text{ for all } A\in{\mathcal{B}}(\Sigma).
\end{align*}
We also consider
\begin{assump}\label{assump_2}
Assume that the density $p(x,\cdot)$ is
\emph{uniformly H\"older continuous}, that is, there exist constants $\kappa>0$
and $\gamma\in(0,1]$ such that for all $x,z,z'\in \Sigma$,
\begin{equation*}
|p(x,z)-p(x,z')|\leq \kappa d^{\gamma}(z,z').
\end{equation*}
\end{assump}
In the rest of this paper, we assume that the chain~$X$ starts
with some probability law absolutely continuous with respect to~$\pi$
and we denote by~$\nu$ its density. We also work under
\begin{assump}\label{assump_3}
Let $\varepsilon_0\in (0,1)$. Suppose that there exists $\varepsilon\in (0,\varepsilon_0]$ such that
\begin{equation}
\label{max_eps}
\sup_{x,y\in\Sigma}|p(x,y)-1| \leq \varepsilon,
\end{equation}
and
\begin{equation}
\label{cond_nu}
\sup_{x\in \Sigma}|\nu(x)-1| \leq \varepsilon.
\end{equation}
\end{assump}
\noindent
Observe that~\eqref{cond_nu} is not very restrictive because,
due to \eqref{max_eps}, the chain will anyway
come quite close to stationarity already at step~$2$.
Additionally, let us denote by $Y=(Y_i)_{i\geq 1}$ a sequence
of i.i.d.\ random variables with law~$\pi$.
Before stating our main result, we recall the definition of the total
variation distance between
two probability measures~$\bar{\mu}$ and~$\hat{\mu}$ on some measurable
space $(\Omega, \mathcal{T})$,
\begin{equation*}
\|\bar{\mu}-\hat{\mu}\|_{\text{TV}}
=\sup_{A\in\mathcal{T}}|\bar{\mu}(A)-\hat{\mu}(A)|.
\end{equation*}
When dealing with random elements $U$ and $V$, we will write (with a slight abuse of notation) $\|U-V\|_{\text{TV}}$ to denote the total variation distance between the laws of $U$ and $V$.
Denoting by $L_n^Z:=(L_n^Z(x))_{x\in \Sigma}$ the local time field of the process $Z=X$ or $Y$ at time $n$, we are now ready to state
{\thm \label{Main_Thm} Under Assumptions~\ref{assump_1}--\ref{assump_3},
there exists a positive cons\-tant~$K=K(\varepsilon_0)$
such that, for all $n\geq 1$, it holds that
\begin{align*}
\|L_n^X - L_n^Y\|_{\emph{TV}} \leq K\varepsilon
\displaystyle\sqrt{1 + \ln(\phi 2^{\beta})
+ \frac{\beta}{\gamma}\ln\Big(\frac{\kappa\vee (2\varepsilon)}{\varepsilon}\Big)}.
\end{align*}
}
As an application of our main theorem, consider a
finite state space~$\Sigma$, endowed with the discrete metric.
As we have already mentioned, in this case we can choose~$\beta=0$
and~$\phi=|\Sigma|$. Additionally, observe that, for any
Markov chain~$X$ on $\Sigma$, under Assumption \ref{assump_3}, we can always take~$\kappa=2$ and~$\gamma=1$,
so the H\"older continuity of~$p$
is automatically verified here. Thus, Theorem~\ref{Main_Thm} leads to
\begin{equation*}
\|L_n^X - L_n^Y \|_{\text{TV}}
\leq K\varepsilon \displaystyle\sqrt{1 + \ln|\Sigma|},
\end{equation*}
for all~$n\geq 1$.
Observe that, since~$K$ is unknown, Theorem~\ref{Main_Thm} becomes interesting only when $\varepsilon$ is small enough. Therefore, it is also relevant to check whether
$\|L_n^X - L_n^Y\|_{\text{TV}}$ can be bounded away from~$1$, uniformly in~$n$, for all~$\varepsilon\in(0,1)$.
In this direction, we obtain the following
{\thm \label{Thm2} Under Assumptions~\ref{assump_1}--\ref{assump_3}, there exists a positive cons\-tant~$K'=K'(\beta,\phi,\kappa,\gamma,\varepsilon_0)$
such that, for all $n\geq 1$, it holds that
\begin{equation*}
\|L_n^X - L_n^Y\|_{\emph{TV}} \leq 1-K'.
\end{equation*}
}
Such a result may be useful e.g.\ in the following context:
if we are able to prove that, for the i.i.d.\ sequence,
something happens with probability close to~$1$, then
the same happens for the field of local time of the Markov chain
with at least uniformly positive probability.
Observe that it is not unusual that uniform positivity of
the probability of some event
in fact implies that this probability should be close to~$1$.
The rest of the paper is organized in the following way.
In Section~\ref{Sim}, among other things, we show how the soft local time method
can be applied to the Markov chain~$X$ for constructing its
local time field.
In Section~\ref{coupling} we present the construction of a
coupling between the local time fields of the two processes~$X$ and~$Y$ at time~$n$.
In Section~\ref{TV_binomial} we estimate the total variation
distance between two binomial point processes. This auxiliary result
will be useful to bound from above the probability of the complement
of the coupling event introduced in Section~\ref{coupling}.
In Section~\ref{Premres} we use a concentration inequality due to~\cite{Adam08}
together with the machinery of empirical processes to obtain some
intermediate results. In Section~\ref{Main_Thm_proof}
we give the proof of Theorem~\ref{Main_Thm}. Finally, in Section~\ref{Second_Thm}, we give the proof of Theorem~\ref{Thm2}.
We end this section with considerations on the notation for constants
used in this paper. Throughout the text, in general, we
use capital~$C_1,C_2,\dots$ to denote global constants that
appear in the results.
When these constants depend on some parameter(s),
we will explicitly put (or mention) the dependence, otherwise the constants are
considered universal. Moreover, we will use small~$c_1,c_2,\dots$ to
denote local constants that appear locally in the proofs,
restarting the enumeration at the beginning of each proof.
\section{Constructions using soft local times}
\label{Sim}
We assume that the reader is familiar with the
general idea of using Poisson point processes for constructing
general adapted stochastic processes, also known as
the \emph{soft local time} method. We refer to Section~4
of~\cite{PT15} for the general theory, and also
to Section~2 of~\cite{CGPV13} for a simplified introduction.
In this paper we use a modified version of
this technique to couple the local time fields of
both processes, and we do this precisely in Section~\ref{coupling}.
In this section
we first present two different constructions of the Markov chain~$X$,
and then we present a construction of the local time field of~$X$ only.
Then, in Section~\ref{Sim_IID}, we present a construction
of the i.i.d.\ sequence~$Y$. All these constructions
consist of applying the method of soft local times
in a (relatively) straightforward way.
Let~$\alpha$ be the \emph{regeneration coefficient} of the
chain~$X$ with respect to~$\pi$ (see Definition~4.28 of~\cite{FG}) defined by
\begin{align*}
\alpha := \inf_{x,y\in\Sigma} p(x,y).
\end{align*}
Note that $\alpha\leq 1$ since $p(x,\cdot)$ is a probability density
for all $x\in \Sigma$.
Moreover, \eqref{max_eps} implies
that~$\alpha\geq 1-\varepsilon =:q$.
Hence, we consider the following decomposition
\begin{align*}
p(x,\cdot) = q + (1-q)\mu(x,\cdot), \text{ for all } x\in\Sigma,
\end{align*}
where $\mu(x,\cdot)=1+\frac{p(x,\cdot) -1}{1-q}\geq 0$
is a probability density with respect to~$\pi$.
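On a finite state space, this decomposition translates directly into a two-stage sampling procedure; the following Python sketch (ours, purely illustrative) performs one step of the chain by first flipping the regeneration coin:

```python
import random

def regeneration_step(x, p, pi, q, rng):
    """One step of the chain from state x, using the decomposition
    p(x,.) = q + (1-q) mu(x,.): with probability q regenerate from pi,
    otherwise sample from the residual density mu(x,.).
    Here p[x][y] is the transition density w.r.t. pi on states 0..k-1."""
    k = len(pi)
    if rng.random() < q:
        # regeneration: a fresh draw from the invariant measure pi
        return rng.choices(range(k), weights=pi)[0]
    # residual density mu(x,y) = 1 + (p(x,y) - 1)/(1 - q), w.r.t. pi;
    # the actual sampling weights are mu(x,y) * pi(y)
    weights = [(1 + (p[x][y] - 1) / (1 - q)) * pi[y] for y in range(k)]
    return rng.choices(range(k), weights=weights)[0]
```

Mixing the two cases indeed reproduces the original density: $q\cdot 1+(1-q)\mu(x,\cdot)=p(x,\cdot)$.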
On some probability
spa\-ce~$(\tilde{\Omega},\tilde{\mathcal{T}},\mathbb{P})$, suppose that we are
given the following independent random elements:
\begin{itemize}
\item A sequence~$(I_j)_{j\geq 1}$ with~$I_1=1$ and~$(I_j)_{j\geq 2}$
i.i.d.~Bernoulli($q$) random variables;
\item A Poisson point process~$\eta$ on~$\Sigma\times\mathbb{R}_+$
with intensity measure~$\pi\otimes\lambda_+$, where~$\lambda_+$
is the Lebesgue measure on~$\mathbb{R}_+$ and $\pi$ is the invariant
probability measure of~$X$ (cf.\ Section~\ref{notations}).
\end{itemize}
Then, we define the sequence~$(\rho_j)_{j\geq 0}$ such that
\begin{align*}
\rho_0 &= 1,\\
\rho_{k+1} &= \inf\{ j>\rho_k : I_j=1 \} \text{ for } k\geq 0.
\end{align*}
We interpret the elements of the sequence~$(\rho_j)_{j\geq 1}$ as being
the random regeneration times of the Markov chain~$X$: at each random
time~$\rho_j$, the chain~$X$ starts afresh with law~$\pi$.
In this way, the chain will be viewed as a sequence of
(independent) blocks (called {\it regeneration blocks}) with starting law~$\pi$
and transitions
according to~$\mu(\cdot,\cdot)$. Such blocks thus have lengths given by
the differences of the subsequent elements of~$(\rho_j)_{j\geq 1}$.
\subsection{Construction of the local time field
of the Markov chain~$X$}
\label{Sim_MC}
We first give two ways to construct the Markov chain~$X$ up to time~$n$
using soft local times.
For that,
we consider the Poisson point process described above,
\begin{align*}
\eta = \sum_{\lambda\in\Lambda} \boldsymbol{\delta}_{(z_{\lambda}, t_{\lambda})},
\end{align*}
(where $\Lambda$ is a countable index set),
and we proceed with the soft local time scheme in the classical way first.
Denote by~$(x_i)_{i\geq 1}$ the elements of~$\Sigma$
which we will consecutively construct.
We begin with the construction of~$x_1$ by defining
\begin{align*}
\xi_1 &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\text{ such that } \ell \nu(z_{\lambda}) \geq t_{\lambda}\big\}, \\
G^X_1(x) &= \xi_1 \nu(x), \;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(x_1,t_1)$ to be the unique pair~$(z_{\lambda},t_{\lambda})$
satisfying~$G^X_1(z_{\lambda}) = t_{\lambda}$.
Then, once we have obtained the first state~$x_1$ visited by the chain~$X$,
we proceed to the construction of the other ones. For~$i=2,3,\dots$, let
\begin{align*}
\xi_i &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\notin\{(x_k,t_k)\}_{k=1}^{i-1} \text{ such that } \\
& \hspace{5cm} G^X_{i-1}(z_{\lambda}) + \ell p(x_{i-1},z_{\lambda})
\geq t_{\lambda}\big\}, \\
G^X_i(x) &= G^X_{i-1}(x) + \xi_i p(x_{i-1},x), \;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(x_i,t_i)$ to be the unique pair~$(z_{\lambda},t_{\lambda})$ not belonging
to the set~$\{(x_k,t_k)\}_{k=1}^{i-1}$ and satisfying~$G^X_i(z_{\lambda})
= t_{\lambda}$.
Thus, after performing this iterative scheme for~$n$ iterations,
we obtain the accumulated soft local time of the Markov chain~$X$
at time~$n$, which is given by
\begin{align*}
G^X_n(x) = \xi_1 \nu(x) + \sum_{k=2}^{n} \xi_k p(x_{k-1},x) ,
\end{align*}
for~$x\in\Sigma$.
Next, we present an alternative construction of the same Markov chain,
taking into account the regeneration times of~$X$, using the Poisson point
process $\eta$ and the sequence~$(I_j)_{j\geq 1}$ of Bernoulli random variables introduced at
the beginning of this section. Denote now by~$(\hat{x}_i)_{i\geq 1}$
the elements of~$\Sigma$
which we will consecutively construct in this alternative way.
By construction, the sequence~$(\hat{x}_i)_{i\geq 1}$ will also
have the law of the Markov chain~$X$.
We begin with the construction of~$\hat{x}_1$ by defining
\begin{align*}
\hat{\xi}_1 &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\text{ such that } \ell \nu(z_{\lambda}) \geq t_{\lambda}\big\}, \\
\hat{G}^X_1(x) &= \hat{\xi}_1 \nu(x), \;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(\hat{x}_1,\hat{t}_1)$ to be the unique pair~$(z_{\lambda},t_{\lambda})$
satisfying~$\hat{G}^X_1(z_{\lambda}) = t_{\lambda}$.
Then, once we have obtained the first state~$\hat{x}_1$
visited by the chain~$X$, we proceed to the construction
of the other ones. For~$i=2,3,\dots$, define
\begin{align*}
g_i(\hat{x}_{i-1},\cdot) &=
\begin{cases}
\mu(\hat{x}_{i-1},\cdot) , & \text{ if } I_i=0, \\
1 , & \text{ if } I_i=1,
\end{cases}
\end{align*}
and then
\begin{align*}
\hat{\xi}_i &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\notin\{(\hat{x}_k,\hat{t}_k)\}_{k=1}^{i-1} \text{ such that } \\
& \hspace{5cm} \hat{G}^X_{i-1}(z_{\lambda})
+ \ell g_i(\hat{x}_{i-1},z_{\lambda}) \geq t_{\lambda}\big\}, \\
\hat{G}^X_i(x) &= \hat{G}^X_{i-1}(x)
+ \hat{\xi}_i g_i(\hat{x}_{i-1},x),\;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(\hat{x}_i,\hat{t}_i)$ to be the unique
pair~$(z_{\lambda},t_{\lambda})$ not belonging to the
set~$\{(\hat{x}_k,\hat{t}_k)\}_{k=1}^{i-1}$
and satisfying~$\hat{G}^X_i(z_{\lambda}) = t_{\lambda}$.
Thus, after performing this iterative scheme for~$n$ iterations,
we obtain the accumulated soft local time at
time~$n$
\begin{align*}
\hat{G}^X_n(x) = \hat{\xi}_1 \nu(x) + \sum_{k=2}^{n} \hat{\xi}_k
\big(I_k +(1-I_k)\mu(\hat{x}_{k-1},x) \big),
\end{align*}
for~$x\in\Sigma$. Observe that, under~$\mathbb{P}$, $\hat{G}^X_n$
and $G^X_n$ have the same law.
Since in this paper we are interested in the random
field of local times of the chain until time~$n$,
the order of appearance of the states of~$X$ is not relevant
for us and we will use the soft local time scheme in a slightly
different way from that described above.
Specifically, we use the random variables $I_1,\dots, I_n$ as in
the previous construction but now we first construct all the regeneration
blocks of size strictly greater than one and then the regeneration blocks
of size one.
We proceed by considering the Poisson point process~$\eta$ and
the random variables~$I_1, I_2, \dots, I_n$.
Then, we define the random set~${\mathfrak{H}}\subset\{1,2,\dots,n\}$ as
\begin{align}
{\mathfrak{H}}= \big\{j\in\{2,3,\dots,n-1\} : I_j I_{j+1}=1 \big\}
\cup \big\{j\in\{n\} : I_j=1\big\},
\label{set_H}
\end{align}
and the random permutation~${\mathfrak{S}}:\{1,2,\dots,n\}
\rightarrow\{1,2,\dots,n\}$ in the following way:
\begin{itemize}
\item for~$j\in {\mathfrak{H}}^c$, ${\mathfrak{S}}(j) = j-\sum_{i=2}^{j-1}
I_i I_{i+1}$,
\item for~$j\in {\mathfrak{H}}$, ${\mathfrak{S}}(j)
= |{\mathfrak{H}}^c| + |\{i\in{\mathfrak{H}} : i\leq j\}|$,
\end{itemize}
with the convention that~$\sum_{i=2}^{k} =0$ if~$k<2$.
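To help parse these definitions, the following Python sketch (ours, purely illustrative) computes ${\mathfrak{H}}$ and ${\mathfrak{S}}$ from a realization of $(I_1,\dots,I_n)$; observe that ${\mathfrak{S}}$ maps ${\mathfrak{H}}^c$ order-preservingly onto $\{1,\dots,|{\mathfrak{H}}^c|\}$ and ${\mathfrak{H}}$ onto the remaining indices:

```python
def size_one_blocks(I):
    """The set H of (set_H): indices j with I_j = I_{j+1} = 1 for
    2 <= j <= n-1, plus n itself when I_n = 1.  I is the 0/1 list
    (I[0] stands for I_1, which is always 1)."""
    n = len(I)
    H = {j for j in range(2, n) if I[j - 1] * I[j] == 1}
    if I and I[-1] == 1:
        H.add(n)
    return H

def permutation_S(I):
    """The permutation S: H^c is mapped order-preservingly onto
    {1,...,|H^c|} and H onto {|H^c|+1,...,n}."""
    n = len(I)
    H = size_one_blocks(I)
    hc = n - len(H)
    return {j: (hc + sum(1 for i in H if i <= j)) if j in H
               else (j - sum(1 for i in H if i < j))
            for j in range(1, n + 1)}
```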
Now, define
\begin{align*}
\tilde{\xi}_1 &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\text{ such that } \ell \nu(z_{\lambda}) \geq t_{\lambda}\big\}, \\
\tilde{G}^X_1(x) &= \tilde{\xi}_1 \nu(x),\;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(\tilde{x}_1,\tilde{t}_1)$ to be the unique
pair~$(z_{\lambda},t_{\lambda})$
satisfying~$\tilde{G}^X_1(z_{\lambda}) = t_{\lambda}$.
Next, for~$i=2,3,\dots,n$, define
\begin{align*}
\tilde{\xi}_i &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\notin\{(\tilde{x}_k,\tilde{t}_k)\}_{k=1}^{i-1} \text{ such that } \\
& \hspace{2cm} \tilde{G}^X_{i-1}(z_{\lambda})
+ \ell (I_{{\mathfrak{S}}^{-1}(i)} + (1-I_{{\mathfrak{S}}^{-1}(i)})
\mu(\tilde{x}_{i-1},z_{\lambda})) \geq t_{\lambda}\big\}, \\
\tilde{G}^X_i(x) &= \tilde{G}^X_{i-1}(x) + \tilde{\xi}_i
(I_{{\mathfrak{S}}^{-1}(i)} + (1-I_{{\mathfrak{S}}^{-1}(i)})
\mu(\tilde{x}_{i-1},x)),\;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(\tilde{x}_i,\tilde{t}_i)$ to be the unique
pair~$(z_{\lambda},t_{\lambda})$ not belonging to the
set~$\{(\tilde{x}_k,\tilde{t}_k)\}_{k=1}^{i-1}$
and satisfying~$\tilde{G}^X_i(z_{\lambda}) = t_{\lambda}$.
At the end of this procedure, we obtain the accumulated soft
local time until time~$n$,
\begin{align*}
\tilde{G}^X_n(x) = \tilde{\xi}_1\nu(x) +\sum_{i=2}^{n} \tilde{\xi}_i
(I_{{\mathfrak{S}}^{-1}(i)} + (1-I_{{\mathfrak{S}}^{-1}(i)})
\mu(\tilde{x}_{i-1},x)),
\end{align*}
for~$x\in\Sigma$, observing that~$\tilde{G}^X_n$ has the same
law as~$G^X_n$ and~$\hat{G}^X_n$, under~$\mathbb{P}$. Also, observe
that when proceeding in this way, we obtain the decomposition
\begin{align*}
\tilde{G}^X_n(x) = \tilde{G}^X_{|{\mathfrak{H}}^c|}(x)
+ (\tilde{G}^X_n(x)-\tilde{G}^X_{|{\mathfrak{H}}^c|}(x)),
\end{align*}
where~$\tilde{G}^X_{|{\mathfrak{H}}^c|}(x)$ is the accumulated soft local time
corresponding to the construction of the first block plus the
regeneration blocks of size strictly greater than one, until time~$n$.
By implementing this last scheme, one produces a sequence~$(\tilde{x}_i)_{i}$
with~$n$ elements, the local time field of which has the law of the local time
field of~$X$ at time~$n$, just as we wanted. Also, we recall the property
(of the soft local times) that the elements in the family~$(\tilde{\xi}_i)_i$
are all i.i.d.~Exponential($1$) random variables, independent
of all other random elements (cf.~\cite{PT15}).
\subsection{Construction of the i.i.d.~sequence~$Y$}
\label{Sim_IID}
Now, we describe how we can construct the i.i.d.~sequence~$Y_1,\ldots,Y_n$
using the soft local time technique.
We denote by~$(y_i)_{i\geq 1}$ the elements of~$\Sigma$ which we
will consecutively construct.
Considering the same Poisson point process~$\eta$ described
above, we begin with the construction of~$y_1$ by defining
\begin{align*}
\xi'_1 &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\text{ such that } \ell \geq t_{\lambda}\big\}, \\
G^Y_1(x) &= \xi'_1,\;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(y_1,t_1)$ to be the unique pair~$(z_{\lambda},t_{\lambda})$
satisfying~$G^Y_1(z_{\lambda}) = t_{\lambda}$.
Then, we proceed to the construction of~$y_2,y_3,\dots,y_n$:
for~$i=2,3,\dots,n$, define
\begin{align*}
\xi'_i &= \inf\big\{\ell\geq 0: \exists (z_{\lambda},t_{\lambda})
\notin\{(y_k,t_k)\}_{k=1}^{i-1} \text{ such that }G^Y_{i-1}(z_{\lambda})
+ \ell \geq t_{\lambda}\big\}, \\
G^Y_i(x) &= G^Y_{i-1}(x) + \xi'_i, \;\text{for all}\; x\in \Sigma,
\end{align*}
and~$(y_i,t_i)$ to be the unique pair~$(z_{\lambda},t_{\lambda})$
not belonging to the set~$\{(y_k,t_k)\}_{k=1}^{i-1}$ and satisfying~$G^Y_i(z_{\lambda}) = t_{\lambda}$.
At the end of this iterative scheme,
we obtain the first~$n$ elements of the sequence~$Y$.
As before, the elements in the family~$(\xi'_i)_i$ are all
i.i.d.\ Exponential($1$) random variables, and independent of
all the other quantities.
The use of the soft local times to construct the sequence~$Y$,
as described above, produces the accumulated soft local time until time~$n$,
\begin{align*}
G_n^Y(x) = \sum_{k=1}^{n} \xi'_k, \text{ for } x\in\Sigma.
\end{align*}
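For a finite state space, each step of the above schemes amounts to a minimization over the unused Poisson marks; the following Python sketch (ours, purely illustrative) implements one such step and, when run with the constant density $g\equiv 1$, reproduces the construction of the i.i.d.\ sequence~$Y$:

```python
import math
import random

def soft_local_time_step(G, g, marks, used):
    """One step of the soft local time scheme on a finite state space.
    G: dict state -> current accumulated soft local time;
    g: dict state -> density (w.r.t. pi) used at this step;
    marks: list of (state, height) Poisson marks; used: set of indices
    of consumed marks.  Returns (xi, chosen_state)."""
    best_xi, best_j = math.inf, None
    for j, (z, t) in enumerate(marks):
        if j in used or g[z] <= 0:
            continue
        xi = (t - G[z]) / g[z]  # smallest l with G(z) + l*g(z) >= t
        if xi < best_xi:
            best_xi, best_j = xi, j
    used.add(best_j)
    for x in G:                 # raise the whole curve by xi * g
        G[x] += best_xi * g[x]
    return best_xi, marks[best_j][0]

def poisson_marks(pi, height, rng):
    """Poisson point process on states x R_+ with intensity pi x Lebesgue:
    for each state z, the heights form a rate-pi(z) Poisson process."""
    marks = []
    for z, w in pi.items():
        t = 0.0
        while True:
            t += rng.expovariate(w)  # Exp(rate = pi(z)) inter-arrivals
            if t > height:
                break
            marks.append((z, t))
    return marks
```

In that case successive steps pick the lowest unused mark, so the sampled states are i.i.d.\ with law~$\pi$ and the increments are i.i.d.\ Exponential($1$), as recalled above.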
\section{The coupling}
\label{coupling}
Before starting the construction of our coupling we need to introduce some notation. For this, consider the following independent
random elements: a sequence $(e_n)_{n\geq 1}$ of Exponential(1)
independent random variables, a stationary version of the Markov chain~$(X_n)_{n\geq 1}$, which we call~$(M_n)_{n\geq 1}$
(that is, $(M_n)_{n\geq 1}$ has transition density~$p$ and initial law~$\pi$), and a Geometric($q$) random variable~$T$. Then,
we define the random function
\begin{equation}
\label{widehatW}
\widehat{W}(x)=\sum_{k=1}^{T} e_k(1-p(M_{k-1},x)),
\;\text{for all}\; x\in \Sigma,
\end{equation}
and consider a sequence of i.i.d.\ random functions
$(\widehat{W}_n)_{n\geq 1}$ with the same law as $\widehat{W}$.
We will show in Section~\ref{Est_F} that
\begin{equation}
\label{EPE}
\sup_{n\in\mathbb{N}}\frac{1}{\sqrt{n}}E\Big[\sup_{x\in \Sigma}\Big|\sum_{k=1}^{n}\widehat{W}_k(x)\Big|
\Big]\leq F
\end{equation}
where
\begin{align}
F := C_4\varepsilon \displaystyle\sqrt{1 + \ln(\phi 2^{\beta}) + \frac{\beta}{\gamma}\ln\Big(\frac{\kappa\vee (2\varepsilon)}{\varepsilon}\Big)}
\label{F_exp}
\end{align}
and $C_4=C_4(\varepsilon_0)$ is a positive constant depending on~$\varepsilon_0$.
Now, we present the construction of a coupling between
the local time fields of the Markov
chain~$X$ and the i.i.d.\ sequence~$Y$
at time~$n$. We will use the random element
${\mathcal{W}}:=((I_1,\dots,I_n),\eta)$ (introduced in Section~\ref{Sim})
and the auxiliary random elements~$V$, $V'$, $V''$ and~$\eta'$
(that we define later in this section),
to construct a coupling
between two copies
of~$\eta$ which we call~$\eta_X$ and~$\eta_Y$.
These copies will be such that
the third construction of Section~\ref{Sim_MC}
applied to~$\eta_X$ and the
construction of Section~\ref{Sim_IID} applied to~$\eta_Y$ will
give high
probability of successful coupling of
the local time fields of~$X$ and~$Y$, for~$\varepsilon$ sufficiently small.
It is important to stress that the ``na\"\i{}ve''
coupling (that is, using the same realization of the
Poisson marks for constructing both the Markov chain
and the i.i.d.\ sequence)
does not work, because it is unlikely that both
constructions will pick \emph{exactly} the same marks
(look at the two marks at the upper
right part of the top pictures on Figure~\ref{f_resampling}).
To circumvent this, we proceed as shown on Figure~\ref{f_resampling}:
we first remove all the
marks which are above the ``dependent'' part (that is, the marks strictly above the curve~$\tilde{G}^{X}_{|{\mathfrak{H}}^c|}$), and then resample
them using the maximal coupling of the ``projections''.
In the following, we describe this construction
in a rigorous way.
In order to construct the coupling we are looking for, we assume that,
in addition to the Bernoulli sequence
$(I_j)_{j\geq 1}$ and the Poisson point
process~$\eta$, the triple $(\tilde{\Omega}, \tilde{\mathcal{T}}, \mathbb{P})$
(from Section~\ref{Sim}) also supports an independent copy of~$\eta$,
which we call~$\eta'$.
We will also need other random elements on $(\tilde{\Omega}, \tilde{\mathcal{T}}, \mathbb{P})$
to be defined later. We assume that $(\tilde{\Omega}, \tilde{\mathcal{T}}, \mathbb{P})$
is large enough to support all these random elements.
\begin{figure}
\begin{center}
\includegraphics{resampling_2}
\caption{Resampling of the ``independent parts''}
\label{f_resampling}
\end{center}
\end{figure}
We start with the construction of~$\eta_X$.
As explained in Section~\ref{Sim_MC},
we first use~${\mathcal{W}}$ to construct the local time field of
the Markov chain~$X$
up to time~$n$. In this way,
we obtain the soft local time curves
$\tilde{G}^X_i$, for $1\leq i\leq n$,
together with the sequences $\tilde{\xi}_1,\dots, \tilde{\xi}_n$
and $\tilde{x}_1,\dots, \tilde{x}_n$.
Then, we define the random function
\begin{align}
\Psi(\cdot) = \frac{\displaystyle\sum_{i=1}^n
\tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(\cdot)}
{\displaystyle\sum_{i=|{\mathfrak{H}}^c|+1}^{n}\tilde{\xi}_i}
\label{fct_psi}
\end{align}
and for all~$i\in\mathbb{N}$ the events
\begin{equation}
{\mathsf{A}}_i=\Big\{\sup_{x\in \Sigma}|\Psi(x)-1|\leq \frac{(1+i)F}{\sqrt{n}}\Big\},
\label{event_Av}
\end{equation}
where~$F$ is defined in~\eqref{F_exp}.
Now, we partition $\tilde{\Omega}$ using the events ${\mathsf{B}}_1:={\mathsf{A}}_1$ and ${\mathsf{B}}_{i+1}={\mathsf{A}}_{i+1}\setminus{\mathsf{A}}_i$ for $i\geq 1$, and define
\[
\mathsf{G}=\bigcup_{i\in \mathbb{N}:(1+i)F\leq 1}{\mathsf{B}}_i.
\]
Observe that, on $\mathsf{G}$, $\Psi$ is actually a (random) probability density with respect to~$\pi$.
Note that,
under~$\mathbb{P}$, $G'_n:=\sum_{i=1}^n\tilde{\xi}_i$ has the same law
as~$G^Y_n$, the soft local time of $Y$ at time~$n$
(cf.\ Section~\ref{Sim_IID} and the middle right picture on Figure~\ref{f_resampling}). Anticipating on what is coming, on~$\mathsf{G}$, the law
$\Psi \text{d}\pi$ will serve as the ``compensating'' law to reconstruct
the~$|{\mathfrak{H}}|$ marks of~$\eta_Y$ between $\tilde{G}^{X}_{|{\mathfrak{H}}^c|}$
and~$G'_n$.
Now, going back to the construction of~$\eta_X$,
on~$\mathsf{G}$, we adopt a resampling scheme:
we first ``erase''
all the marks of the point process~$\eta$ in the space~$\Sigma\times\mathbb{R}_+$ that are on the curves
$\tilde{G}^X_{|{\mathfrak{H}}^c|+1},\dots,\tilde{G}^X_n$,
then we reconstruct the marks as follows.
We introduce the random vector $V:=(V_1,\dots,V_{|{\mathfrak{H}}|})$
such that under $\mathbb{P}[\;\cdot \mid {\mathcal{W}}]$, its coordinates are
independent and distributed according to the invariant measure~$\pi$.
We use the random vector~$V$ to place the (new) marks
\[
\Big(V_1,\tilde{G}^X_{|{\mathfrak{H}}^c|+1}(V_1)\Big),
\Big(V_2,\tilde{G}^X_{|{\mathfrak{H}}^c|+2}(V_2)\Big),\dots,
\Big(V_{|{\mathfrak{H}}|},\tilde{G}^X_{n}(V_{|{\mathfrak{H}}|})\Big),
\]
on the curves $\tilde{G}^X_{|{\mathfrak{H}}^c|+1},\dots,\tilde{G}^X_n$ (see Figure~\ref{f_resampling}, bottom left picture).
On~$\mathsf{G}^c$, we keep the original marks. Hence, $\eta_X$
is the point process obtained using this resampling
procedure.
We continue with the construction of $\eta_Y$. We will
construct the marks of $\eta_Y$ below $G'_n$ and then
``glue''~$\eta'$ above~$G'_n$ to complete the marks of~$\eta_Y$.
We now need the following two random vectors.
First, consider the random vector
$V':=(V'_1,\dots,V'_{|{\mathfrak{H}}|})$ such that
under $\mathbb{P}[\;\cdot \mid {\mathcal{W}}=w]$:\\
For $w\in \mathsf{G}$,
\begin{itemize}
\item $V'$ has independent coordinates distributed
according to $\Psi \text{d}\pi$;
\item the elements $\big(\sum_{i=1}^{|{\mathfrak{H}}|}
\mathds{1}_{\{x\}}(V'_i)\big)_{x\in \Sigma }$
and
$\big(\sum_{i=1}^{|{\mathfrak{H}}|}\mathds{1}_{\{x\}}(V_i)\big)_{x\in \Sigma }$
are maximally coupled;
\end{itemize}
and for $w\in \mathsf{G}^c$,
\begin{itemize}
\item $V'$ has independent coordinates distributed
according to~$\pi$;
\item the vectors $(V'_1,\dots,V'_{|{\mathfrak{H}}|})$
and $(V_1,\dots,V_{|{\mathfrak{H}}|})$ are independent.
\end{itemize}
We also introduce the random vector
$V'':=(V''_1,\dots,V''_{|{\mathfrak{H}}^c|})$,
such that under $\mathbb{P}[\;\cdot \mid {\mathcal{W}}]$, $V''$ has
independent coordinates distributed according to~$\pi$
and is independent of the pair $(V,V')$.
On $\mathsf{G}$, we construct the marks of the point
process~$\eta_Y$ below $G'_n$ in the following way:
we keep the marks obtained below $\tilde{G}^{X}_{|{\mathfrak{H}}^c|}$
and we use the law $\Psi\text{d}\pi$ to complete the process
until~$G'_n$.
For this, we adopt a resampling scheme just as before.
We first erase all the marks of the point process~$\eta$
that are (strictly) above $\tilde{G}^{X}_{|{\mathfrak{H}}^c|}$,
then we resample the part of the process $\eta$ up to~$G'_n$,
using the marks:
\begin{align*}
\Big(V'_1,\tilde{G}^X_{|{\mathfrak{H}}^c|}(V'_1)+\Psi(V'_1)\tilde{\xi}_{|{\mathfrak{H}}^c|+1}\Big),&\dots, \Big(V'_j,\tilde{G}^X_{|{\mathfrak{H}}^c|}(V'_j)+\Psi(V'_j)\sum_{i=1}^j\tilde{\xi}_{|{\mathfrak{H}}^c|+i}\Big),\dots\nonumber\\
&\dots,\Big(V'_{|{\mathfrak{H}}|},\tilde{G}^X_{|{\mathfrak{H}}^c|}(V'_{|{\mathfrak{H}}|})+\Psi(V'_{|{\mathfrak{H}}|})\sum_{i=1}^{|{\mathfrak{H}}|}\tilde{\xi}_{|{\mathfrak{H}}^c|+i}\Big)
\end{align*}
(see Figure~\ref{f_resampling}, bottom right picture). On $\mathsf{G}^c$, we construct the points below~$G'_n$
as follows. We consider the decomposition
$G'_n=\sum_{i=1}^{|{\mathfrak{H}}^c|}\tilde{\xi}_i
+\sum_{i=|{\mathfrak{H}}^c|+1}^n\tilde{\xi}_i$.
First, we use the random vector~$V''$ to sample the marks
$$\Big(V''_1,\tilde{\xi}_1\Big),\Big(V''_2,\sum_{i=1}^{2}\tilde{\xi}_i\Big),\dots, \Big(V''_{|{\mathfrak{H}}^c|},\sum_{i=1}^{|{\mathfrak{H}}^c|}\tilde{\xi}_i\Big),$$
on the first $|{\mathfrak{H}}^c|$ curves.
Then, on the second part, we use the random vector~$V'$
to sample the marks
$$\Big(V'_1,\sum_{i=1}^{|{\mathfrak{H}}^c|+1}\tilde{\xi}_i\Big),\Big(V'_2,\sum_{i=1}^{|{\mathfrak{H}}^c|+2}\tilde{\xi}_i\Big),\dots, \Big(V'_{|{\mathfrak{H}}|},\sum_{i=1}^{n}\tilde{\xi}_i\Big),$$
on the $n-|{\mathfrak{H}}^c|$ last curves.
For the sake of brevity, let us denote by $m^X_1,\dots, m^X_n$ and $m^Y_1,\dots, m^Y_n$, the first coordinates ($\in \Sigma$) of the marks of $\eta_X$ and $\eta_Y$ below the curves
$\tilde{G}^X_n$ and~$G'_n$ respectively. Let~$\tilde{L}_n$ and~$L'_n$ be the fields of
local times associated to these first coordinates, that is, for all $x\in \Sigma$,
\begin{equation*}
\tilde{L}_n(x)=\sum_{i=1}^n\mathds{1}_{\{x\}}(m^X_i)\phantom{**} \text{and}\phantom{**} L'_n(x)=\sum_{i=1}^n\mathds{1}_{\{x\}}(m^Y_i).
\end{equation*}
By construction, we have the following
{\prop
We have that~$\eta_X\stackrel{\text{\tiny law}}{=}\eta_Y\stackrel{\text{\tiny law}}{=}\eta$.
Furthermore, it holds that
$\tilde{L}_n\stackrel{\text{\tiny law}}{=} L^X_n$ and
$L'_n\stackrel{\text{\tiny law}}{=} L^Y_n$
(where $\stackrel{\text{\tiny law}}{=} $ stands for equality in law).
}
\medskip
\noindent
Consequently, we obtain a coupling between~$L^X_n$ and~$L^Y_n$.
We will denote by~$\Upsilon$ the coupling event associated to
this coupling (that is, $\Upsilon = \{\tilde{L}_n=L'_n\}$).
In Section~\ref{Main_Thm_proof},
we will obtain an upper bound for $\mathbb{P}[\Upsilon^c]$.
\section{Total variation distance between binomial point processes} \label{TV_binomial}
In this section, we estimate the total variation distance
between two binomial point processes on some measurable
space $(\Omega, \mathcal{T})$ with laws~${{\bf P}}_n$ and~${{\bf Q}}_n$
of respective parameters $({\bf p}_n,n)$ and $({\bf q}_n,n)$,
where $n\in \mathbb{N}$ and ${\bf p}_n$, ${\bf q}_n$ are two probability
laws on $(\Omega, \mathcal{T})$. We also assume
that ${\bf q}_n\ll {\bf p}_n$ and that~${\bf p}_n$ and~${\bf q}_n$
are close in a certain sense to be defined below.
For two probability measures~$\bar{\mu}$ and~$\hat{\mu}$
on $(\Omega, \mathcal{T})$, we recall that
if $\bar{\mu}\ll \hat{\mu}$,
\begin{equation}
\label{TVdiscrete}
\|\bar{\mu}-\hat{\mu}\|_{\text{TV}}
=\frac{1}{2}\int_{\Omega}
\Big|\frac{\text{d}\bar{\mu}}{\text{d}\hat{\mu}}-1\Big|d\hat{\mu}.
\end{equation}
We will prove the following result, which is in fact
slightly stronger than what we need in this paper.
{\prop
\label{Propmulti}
Let $\delta_0\in (0,1]$ and $\delta \in [0,\delta_0)$
be such that, for all $n\in \mathbb{N}$ and all~$x\in \Omega$,
$|\frac{\emph{d}{\bf q}_n}{\emph{d}{\bf p}_n}(x)-1|\leq\delta n^{-1/2}$.
Then, for
$C_1(\delta_0)=\exp(\delta_0^2)
\frac{\sinh(\delta_0^2)}{\delta_0}+\sqrt{2\pi}\exp(\frac{5}{2}\delta_0^2)$
we have, for all $n\in \mathbb{N}$,
\begin{equation*}
\|{\bf P}_n-{\bf Q}_n\|_{\emph{TV}}\leq C_1(\delta_0)\delta.
\end{equation*}
}
\begin{proof}
In this proof, when we want to emphasize the probability
law under which we take the expectation we will indicate the
law as a subscript. For example, the expectation under some
probability law~$\bar{\mu}$ will be denoted by~$E_{\bar{\mu}}$.
To begin, let us suppose that $n\geq 2$. We first observe
that~${{\bf P}}_n$ and~${{\bf Q}}_n$ can be seen as probability measures
on the space of $n$-point measures ${\mathcal{M}}_n=\{m:m=\sum_{i=1}^{n}\boldsymbol{\delta}_{x_i},
x_i\in \Omega, 1\leq i\leq n\}$ endowed with
the $\sigma$-algebra generated by the
mappings $\Phi_B:{\mathcal{M}}_n\to \mathbb{Z}_+$ defined by $\Phi_B(m)=m(B)=\sum_{i=1}^n\boldsymbol{\delta}_{x_i}(B)$,
for all $B\in \mathcal{T}$. Observe that the law of~${\bf P}_n$
(respectively,~${\bf Q}_n$) is completely characterized by its values
on the sets of the form $\{m\in{\mathcal{M}}_n:m(B_1)=n_1,\dots, m(B_J)=n_J\}$,
where $J\in \mathbb{Z}_+$, $B_1,\dots, B_J$ are disjoint sets
in~$\mathcal{T}$ and $n_1,\dots,n_J$ are non-negative integers
such that $n_1+\dots+n_J=n$.
With this observation it is easy to deduce that ${\bf Q}_n\ll {\bf P}_n$
and check that its Radon-Nikodym derivative with respect
to~${\bf P}_n$ is given by
\[
\frac{\text{d}{\bf Q}_n}{\text{d}{\bf P}_n}(m)
=\prod_{i=1}^n\frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x_i)
\]
where $m=\sum_{i=1}^{n}\boldsymbol{\delta}_{x_i}$.\\
By (\ref{TVdiscrete}) we obtain that
\begin{align*}
\|{\bf P}_n-{\bf Q}_n\|_{\text{TV}}&=\frac{1}{2}\int_{{\mathcal{M}}_n}\Big|
\frac{\text{d}{\bf Q}_n}{\text{d}{\bf P}_n}(m)-1\Big|\text{d}{\bf P}_n(m)\nonumber\\
&=\frac{1}{2}\int_{{\mathcal{M}}_n}\Big|\prod_{i=1}^n
\frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x_i)-1\Big|\text{d}{\bf P}_n(m).
\end{align*}
Now, for all $n\in \mathbb{N}$, we define the function
$f_n:\Omega\to \mathbb{R}$ such that, for~$x\in \Omega$,
we have $f_n(x)=\frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x)-1$.
Observe that $E_{{\bf p}_n}[f_n]=0$ and that
$\|f_n\|_{\infty}\leq\delta n^{-1/2}$ for all $n\geq 2$. We have that
\begin{align}
\label{TV1}
\|{\bf P}_n-{\bf Q}_n\|_{\text{TV}}&=\frac{1}{2}\int_{{\mathcal{M}}_n}\Big|
\prod_{i=1}^n(1+f_n(x_i))-1\Big|\text{d}{\bf P}_n(m)\nonumber\\
&=\frac{1}{2}\int_{{\mathcal{M}}_n}\Big|
\exp\Big(\sum_{i=1}^n\ln(1+f_n(x_i))\Big)-1\Big|
\text{d}{\bf P}_n(m)\nonumber\\
&=\frac{1}{2}E_{{\bf P}_n}|\exp\{m(g_n)\}-1|
\end{align}
where, for all $n\geq 2$, $g_n$ is the function $\Omega\to \mathbb{R}$
defined by $g_n=\ln(1+f_n)$
(by using $|\frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x)-1|\leq
\delta n^{-1/2}$ for all~$x\in \Omega$,
we can observe that $g_n$ is well defined)
and $m(g_n):=\int g_n dm$. Using the fact that $|\ln(1+x)|\leq 2|x|$
for $x\in (-1/\sqrt{2},1/\sqrt{2})$, we deduce that
$\|g_n\|_{\infty}\leq 2\|f_n\|_{\infty}\leq 2\delta n^{-1/2}$,
for all $n\geq 2$.
Now observe that under ${\bf P}_n$, $m(g_n)$ has the same law as
the sum $g_n(X_1)+\dots+ g_n(X_n)$, where the random
variables $X_1,\dots,X_n$ are i.i.d.\ with law~${\bf p}_n$.
We deduce that
\begin{align*}
E_{{\bf P}_n}|\exp\{m(g_n)\}-1|
=E|\exp\{g_n(X_1)+\dots+g_n(X_n)\}-1|.
\end{align*}
Observe that $|E[g_n(X_1)]|=|E[(g_n-f_n)(X_1)]|$
since $E[f_n(X_1)]=E_{{\bf p}_n}[f_n]=0$.
Now we use the fact that, for all $x\in \mathbb{R}$ such
that $|x|\leq 1/\sqrt{2}$, we have that
\[
|\ln(1+x)-x|\leq 2x^2.
\]
Since $\|f_n\|_{\infty} \leq \delta n^{-1/2}$ we obtain that
$\|g_n-f_n\|_{\infty}\leq 2\delta^2 n^{-1}$. We deduce that
\begin{align}
\label{EST1}
|E[g_n(X_1)]|\leq \|g_n-f_n\|_{\infty}\leq 2\delta^2 n^{-1}.
\end{align}
Let $Z_n:=\sum_{k=1}^{n}f_n(X_k)$. Using the fact that
$|\exp(x) -1|\leq \exp(|x|)-1$ for all $x\in \mathbb{R}$ and~(\ref{EST1})
we obtain that
\begin{align}
\label{TV2}
E_{{\bf P}_n}|\exp\{m(g_n)\}-1| &\leq E\Big[\exp\Big(|Z_n + \sum_{k=1}^{n}(g_n-f_n)(X_k)|\Big)-1\Big]\nonumber\\
&\leq \exp(2\delta^2) E\Big[\exp(|Z_n|)\Big]-1.
\end{align}
Now let us obtain an upper bound for the expectation
on the right-hand side of~(\ref{TV2}). Using integration by
parts, we have
\begin{align}
\label{Parts}
E\Big[\exp(|Z_n|)\Big]=1+\int_{0}^{\infty}e^tP[|Z_n|
\geq t]dt.
\end{align}
By Hoeffding's inequality, we have, for all $n\geq 2$ and
for all $t\geq 0$,
\begin{equation*}
P[|Z_n|\geq t]\leq 2\exp\Big(-\frac{t^2}{2\delta^2}\Big).
\end{equation*}
Using this last inequality in~(\ref{Parts}), we obtain that
\begin{align*}
E\Big[\exp(|Z_n|)\Big]&\leq 1+2\int_{0}^{\infty}
\exp\Big(t-\frac{t^2}{2\delta^2}\Big)dt\nonumber\\
&\leq 1+2\delta \exp\Big(\frac{\delta^2}{2}\Big)\sqrt{2\pi}.
\end{align*}
Therefore, going back to~(\ref{TV2}) and using the fact that
$\delta< \delta_0$, we deduce that for all $n\geq 2$,
\begin{align}
\label{EST3}
E_{{\bf P}_n}|\exp\{m(g_n)\}-1| &\leq \exp(2\delta^2)
\Big[1+2\delta \exp\Big(\frac{\delta^2}{2}\Big)\sqrt{2\pi}\Big]-1\nonumber\\
&\leq \delta\Big[2\exp(\delta^2)
\frac{\sinh(\delta^2)}{\delta}+2\sqrt{2\pi}\exp\Big(\frac{5}{2}\delta^2\Big)\Big]\nonumber\\
&\leq c_1(\delta_0) \delta,
\end{align}
where $c_1(\delta_0):=2\exp(\delta_0^2)
\frac{\sinh(\delta_0^2)}{\delta_0}+2\sqrt{2\pi}\exp(\frac{5}{2}\delta_0^2)$.
Gathering~(\ref{TV1}), (\ref{EST3}) and considering the fact that
$\|{\bf P}_1-{\bf Q}_1\|_{\text{TV}}\leq \frac{\delta}{2}$,
we conclude the proof of Proposition~\ref{Propmulti}
by taking $C_1= c_1/2$.
\end{proof}
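The elementary inequalities and the Gaussian-type integral bound used in this proof are easy to confirm numerically. The following sketch (plain Python, with function names of our own choosing; an illustration, not part of the argument) checks that $|\ln(1+x)|\leq 2|x|$ and $|\ln(1+x)-x|\leq 2x^2$ on $(-1/\sqrt{2},1/\sqrt{2})$, and that $\int_0^\infty e^{t-t^2/(2\delta^2)}\,dt\leq \delta e^{\delta^2/2}\sqrt{2\pi}$.

```python
import math

# Check |ln(1+x)| <= 2|x| and |ln(1+x) - x| <= 2 x^2
# on a fine grid of the interval (-1/sqrt(2), 1/sqrt(2)).
def check_log_inequalities(n_points=100000):
    a = 1 / math.sqrt(2)
    ok = True
    for k in range(1, n_points):
        x = -a + 2 * a * k / n_points  # interior grid point
        ok &= abs(math.log(1 + x)) <= 2 * abs(x) + 1e-12
        ok &= abs(math.log(1 + x) - x) <= 2 * x * x + 1e-12
    return ok

# Check int_0^infty exp(t - t^2/(2 delta^2)) dt <= delta e^{delta^2/2} sqrt(2 pi):
# completing the square, the integral equals
# delta sqrt(2 pi) e^{delta^2/2} P[N(0,1) > -delta], and P[...] <= 1.
def check_integral_bound(delta, t_max=50.0, n_steps=200000):
    h = t_max / n_steps
    integral = sum(
        math.exp(t - t * t / (2 * delta * delta)) * h
        for t in (h * (i + 0.5) for i in range(n_steps))  # midpoint rule
    )
    bound = delta * math.exp(delta * delta / 2) * math.sqrt(2 * math.pi)
    return integral <= bound

print(check_log_inequalities())                               # True
print(all(check_integral_bound(d) for d in (0.1, 0.5, 1.0)))  # True
```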
\section{Controlling the ``dependent part'' of the soft
local time}
\label{Premres}
We recall that, for $i\in\mathbb{N}$, we defined in Section~\ref{coupling}
the events
\begin{equation*}
{\mathsf{A}}_i=\Big\{\sup_{x\in \Sigma}\big|\Psi(x)-1\big|
\leq \frac{(1+i)F}{\sqrt{n}} \Big\}.
\end{equation*}
The goal of this section is to prove the following
{\prop \label{goodenv} There exist a positive constant~$C_2=C_2(\varepsilon_0)$ and $n_0=n_0(\varepsilon_0)\in\mathbb{N}$
such that, for every integer~$n\geq n_0$ and
$i\in\mathbb{N}$, it holds that
\begin{align*}
\mathbb{P}\Big[{\mathsf{A}}_i^c\;\Big|\; |{\mathfrak{H}}|> \frac{q_0^2}{6}n \Big] &\leq \frac{C_2}{(1+i)^3}
\end{align*}
where $q_0:=1-\varepsilon_0$.
}
We postpone the proof of this proposition to
Section~\ref{Proof_Av}. Before that, in Section~\ref{Est_F} we show~\eqref{EPE} and in Section~\ref{concentr}
we use a concentration inequality to obtain a tail estimate on the numerator of $\Psi-1$ (see~\eqref{Psidecomp}).
\subsection{Proof of inequality~\eqref{EPE}}
\label{Est_F}
In this section, we present a standard method based on bracketing numbers to prove~(\ref{EPE}).
Without loss of generality, we assume
in this section that~$\kappa$ from
Assumption~\ref{assump_2} is greater than or equal to~$2\varepsilon$.
We start by introducing
the space~${\mathcal{S}}=\mathbb{R}^{\Sigma}$ and the class~${\mathfrak{F}}=(f_x)_{x\in\Sigma}$ of
functions~$f_x:{\mathcal{S}}\rightarrow \mathbb{R}$ such that~$f_x(\omega)=\omega(x)$,
for~$\omega\in{\mathcal{S}}$.
In this setting, for~$s>0$ and~$\mathfrak{E}:{\mathcal{S}}\rightarrow\mathbb{R}$
an envelope function of the class~${\mathfrak{F}}$ (that is, a function such that~$\mathfrak{E}\geq |f_x|$ for all~$f_x\in{\mathfrak{F}}$), we need to estimate the bracketing number
\begin{align*}
N_{[\,]}\Big(s\|\mathfrak{E}\|_{2},{\mathfrak{F}},L_2\Big),
\end{align*}
which is defined to be the minimum number of brackets
\begin{align*}
[f_1,f_2] := \big\{f:{\mathcal{S}}\rightarrow \mathbb{R}; f_1\leq f\leq f_2\big\},
\end{align*}
satisfying~$\|f_2-f_1\|_{2} < s\|\mathfrak{E}\|_{2}$, that are
needed to cover the class~${\mathfrak{F}}$, where the given
functions~$f_1$ and~$f_2$ have finite $L_2$-norms
(see Definition~2.1.6 of~\cite{VW96}).
For that, we consider an (initially arbitrary) exhaustive and
finite collection of subsets of the space~$\Sigma$, that is,
a finite collection~$\{{\mathsf{D}}_i\}_i$ such that~${\mathsf{D}}_i\subset\Sigma$
for each~$i$ and~$\bigcup_i{\mathsf{D}}_i=\Sigma$, and for each such set~${\mathsf{D}}_i$
we define two functions~$f_{{\mathsf{D}}_i},\hat{f}_{{\mathsf{D}}_i}:{\mathcal{S}}\rightarrow\mathbb{R}$,
\begin{align*}
f_{{\mathsf{D}}_i}(\omega) = \inf_{z_0\in{\mathsf{D}}_i} \omega(z_0) ~\mbox{ and }~ \hat{f}_{{\mathsf{D}}_i}(\omega) = \sup_{z_0\in{\mathsf{D}}_i} \omega(z_0),
\end{align*}
so that, if~$x\in{\mathsf{D}}_i$ then~$f_x\in[f_{{\mathsf{D}}_i},\hat{f}_{{\mathsf{D}}_i}]$.
Thus, to each set in the family~$\{{\mathsf{D}}_i\}_i$ we associate
a bracket, and so any particular finite exhaustive family of subsets
of~$\Sigma$ induces a finite collection of
brackets~$\{[f_{{\mathsf{D}}_i},\hat{f}_{{\mathsf{D}}_i}]\}_i$ which cover the class~${\mathfrak{F}}$.
So, in order to properly estimate the bracketing number,
the task is to determine a suitable collection~$\{{\mathsf{D}}_i\}_i$
of subsets of~$\Sigma$ in such a way that the induced brackets have their
sizes all smaller than~$s\|\mathfrak{E}\|_{2}$. The number of sets in that
collection will serve as an upper bound for~$N_{[\,]}$.
The following lemma characterizes the size
(in~$L_2$) of the brackets induced in this way.
{\lem Under Assumption~\ref{assump_2}, it holds that, for any
set~${\mathsf{D}}\subset\Sigma$,
\begin{align*}
\|\hat{f}_{{\mathsf{D}}}-f_{{\mathsf{D}}}\|_{2} \leq \frac{\sqrt{2}\kappa}
{q}\displaystyle\max_{z,z'\in{\mathsf{D}}}d^{\gamma}(z,z').
\end{align*}
\label{bracket_size}
}
\begin{proof}
Recalling the notation introduced at the beginning of Section~\ref{coupling},
we want to bound the~$L_2$-norm
\begin{align*}
\Big\|\hat{f}_{{\mathsf{D}}}\Big(\widehat{W}_1(\cdot)\Big)-f_{{\mathsf{D}}}\Big(\widehat{W}_1(\cdot)
\Big)\Big\|_{2}
= \Big\|\sup_{z\in{\mathsf{D}}}\widehat{W}_1(z)-\inf_{z\in{\mathsf{D}}}\widehat{W}_1(z)\Big\|_{2}
\end{align*}
which is
{\allowdisplaybreaks
\begin{align*}
\Big\| \sup_{z\in{\mathsf{D}}} \sum_{k=1}^{T} e_k (1- &p(M_{k-1},z))
- \inf_{z\in{\mathsf{D}}} \sum_{k=1}^{T} e_k (1 - p(M_{k-1},z))\Big\|_{2}\\
&= \Big\| \sup_{z\in{\mathsf{D}}} \sum_{k=1}^{T} e_k p(M_{k-1},z) -
\inf_{z\in{\mathsf{D}}} \sum_{k=1}^{T} e_k p(M_{k-1},z)\Big\|_{2} \\
&\leq \Big\| \sum_{k=1}^{T} e_k \Big(\sup_{z\in{\mathsf{D}}} p(M_{k-1},z)
- \inf_{z\in{\mathsf{D}}} p(M_{k-1},z)\Big) \Big\|_{2} \\
&\leq \kappa\Big\| \sum_{k=1}^{T} e_k \Big\|_{2} \displaystyle\max_{z,z'\in{\mathsf{D}}}d^{\gamma}(z,z'),
\end{align*}
where we used Assumption~\ref{assump_2} to establish
the second inequality.
}
Thus, we conclude the proof by using the fact
that~$\sum_{k=1}^{T} e_k$ is exponentially distributed
with parameter~$q$, so that
\begin{align*}
\Big\| \sum_{k=1}^{T} e_k \Big\|_{2} = \frac{\sqrt{2}}{q}.
\end{align*}
\end{proof}
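The $L_2$-norm computation above can be sanity-checked numerically: conditionally on $T=n$, the sum $\sum_{k=1}^T e_k$ is Gamma$(n,1)$, whose second moment is $n(n+1)$, so averaging over the Geometric($q$) law of~$T$ should give $2/q^2$, i.e.\ $L_2$-norm $\sqrt{2}/q$. The short Python sketch below (illustrative only, not part of the proof) verifies this series identity.

```python
import math

# Conditionally on T = n, sum_{k=1}^T e_k is Gamma(n, 1),
# whose second moment is n(n+1).  Averaging over the
# Geometric(q) law of T (P[T = n] = q (1-q)^{n-1}, n >= 1)
# should give E[(sum e_k)^2] = 2 / q^2.
def second_moment(q, n_terms=10000):
    return sum(q * (1 - q) ** (n - 1) * n * (n + 1)
               for n in range(1, n_terms + 1))

for q in (0.2, 0.5, 0.9):
    assert abs(second_moment(q) - 2 / q ** 2) < 1e-6
print("L2-norm identity verified")
```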
In view of the above result,
we must require the sets~$\{{\mathsf{D}}_i\}_i$ that we are
constructing to satisfy
\begin{align}
\max_{z,z'\in{\mathsf{D}}_i}d(z,z') < \Big(s\|\mathfrak{E}\|_{2}\frac{q}{\sqrt{2}\kappa}\Big)^{1/\gamma}, \mbox{ for each } i,
\label{condition_max}
\end{align}
in order to obtain, from Lemma~\ref{bracket_size}, that
\begin{align}
\|\hat{f}_{{\mathsf{D}}_i}-f_{{\mathsf{D}}_i}\|_{2} < s\|\mathfrak{E}\|_{2}, \mbox{ for each } i.
\label{condition_max_2}
\end{align}
Then, we prove
{\prop Under Assumptions~\ref{assump_1} and~\ref{assump_2},
there exists a universal positive constant~$C_3$ such that, if
\begin{align}
\frac{\gamma}{\beta} \Big[1 + \ln(\phi 2^{\beta})
+ \frac{\beta}{\gamma}\ln\Big(\frac{\sqrt{2}\kappa}
{q\|\mathfrak{E}\|_{2}}\Big)\Big] \geq \frac{1}{2},
\label{cond_geq1}
\end{align}
then it holds that, for all~$n\in\mathbb{N}$,
\begin{align*}
E\Big[\sup_{x\in \Sigma}\Big|\sum_{k=1}^{n}\widehat{W}_k(x)\Big|
\Big] \leq C_3 \displaystyle\sqrt{1
+ \ln(\phi 2^{\beta}) + \frac{\beta}{\gamma}
\ln\Big(\frac{\sqrt{2}\kappa}{q\|\mathfrak{E}\|_{2}}\Big)} \|\mathfrak{E}\|_{2}\sqrt{n}.
\end{align*}
\label{expect_R_tilde}
}
\begin{proof}
For~$s>0$, we just define the sets~$\{{\mathsf{D}}_i\}_i$ to
be open balls of radius at most
\begin{align*}
\frac{1}{2}\Big(s\|\mathfrak{E}\|_{2}\frac{q}{\sqrt{2}\kappa}\Big)^{1/\gamma},
\end{align*}
so that~\eqref{condition_max} is verified (and
consequently~\eqref{condition_max_2} too,
by Lemma~\ref{bracket_size}).
Then, under Assumption~\ref{assump_1}, we can say that
the number of such balls that are needed to cover~$\Sigma$
is at most
\begin{align*}
\phi 2^{\beta}\Big[\frac{\sqrt{2}\kappa}
{qs\|\mathfrak{E}\|_{2}}\Big]^{\beta/\gamma},
\end{align*}
and therefore, for any~$s>0$,
\begin{align*}
N_{[\,]}\Big(s\|\mathfrak{E}\|_{2},{\mathfrak{F}},L_2\Big)
\leq \phi 2^{\beta}\Big[\frac{\sqrt{2}\kappa}
{qs\|\mathfrak{E}\|_{2}}\Big]^{\beta/\gamma}.
\end{align*}
Taking this bound on the bracketing number into account,
we can estimate the bracketing entropy integral (of the class~${\mathfrak{F}}$)
\begin{align*}
J_{[\,]}\Big(1,{\mathfrak{F}},L_2\Big) := \int_{0}^{1}
\sqrt{1+\ln N_{[\,]}\Big(s\|\mathfrak{E}\|_{2},{\mathfrak{F}},L_2\Big)} ds,
\end{align*}
(see its definition e.g.\ in Section~2.14.1
of~\cite{VW96}, page 240). We do that by just bounding it above by
\begin{align*}
\int_{0}^{1} \sqrt{1+\ln \Big(\phi 2^{\beta}
\Big[\frac{\sqrt{2}\kappa}{qs\|\mathfrak{E}\|_{2}}\Big]^{\beta/\gamma}\Big)} ds,
\end{align*}
which, after some changes of variables, can be shown to be equal to
\begin{align*}
\Big(\frac{\beta}{\gamma}\Big)^{1/2}
(\phi 2^{\beta}e)^{\gamma/\beta}
\Big(\frac{\sqrt{2}\kappa}{q\|\mathfrak{E}\|_{2}}\Big)
\int_{\frac{\gamma}{\beta} [1
+ \ln (\phi 2^{\beta}[\frac{\sqrt{2}\kappa}
{q\|\mathfrak{E}\|_{2}}]^{\beta/\gamma})]}^{\infty}
\sqrt{x}e^{-x} dx.
\end{align*}
Now, consider the upper incomplete Gamma function
\begin{align*}
\Gamma\Big(\frac{3}{2},y\Big) = \int_{y}^{\infty} \sqrt{x}e^{-x} dx.
\end{align*}
Using its asymptotic behaviour as~$y\rightarrow\infty$, it is elementary
to see that there exists a universal positive constant~$c_1$ such
that~$\Gamma(\frac{3}{2},y)\leq c_1\sqrt{y}e^{-y}$ for all~$y\geq 1/2$.
Thus, since we are assuming~\eqref{cond_geq1}, we have
\begin{align*}
J_{[\,]}\Big(1,{\mathfrak{F}},L_2\Big) \leq c_1 \sqrt{1 + \ln(\phi 2^{\beta})
+ \frac{\beta}{\gamma}\ln\Big(\frac{\sqrt{2}\kappa}{q\|\mathfrak{E}\|_{2}}\Big)}.
\end{align*}
Finally, we use Theorem 2.14.2 of~\cite{VW96} to obtain that,
if~\eqref{cond_geq1} holds, then (for a universal positive
constant~$c_2$)
\begin{align*}
E\Big[\sup_{x\in \Sigma}\Big|\sum_{k=1}^{n}\widehat{W}_k(x)\Big|
\Big] &\leq c_2 J_{[\,]}\Big(1,{\mathfrak{F}},L_2\Big)
\|\mathfrak{E}\|_{2} \sqrt{n} \\
&\leq c_2 c_1 \sqrt{1 + \ln(\phi 2^{\beta})
+ \frac{\beta}{\gamma}\ln\Big(\frac{\sqrt{2}\kappa}
{q\|\mathfrak{E}\|_{2}}\Big)} \|\mathfrak{E}\|_{2} \sqrt{n},
\end{align*}
which completes the proof with~$C_3=c_2 c_1$.
\end{proof}
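The change of variables behind the incomplete-Gamma expression in this proof can be verified numerically. The Python sketch below (illustrative only; the values $A=3$ and $\beta/\gamma=2$ are hypothetical, and any positive values would do) compares both sides by crude midpoint quadrature, with $A$ standing for $\phi 2^{\beta}[\sqrt{2}\kappa/(q\|\mathfrak{E}\|_{2})]^{\beta/\gamma}$.

```python
import math

# Verify numerically that
#   int_0^1 sqrt(1 + ln(A) - r ln(s)) ds          (r = beta/gamma)
# equals
#   sqrt(r) * (e*A)^{1/r} * int_{x0}^infty sqrt(x) e^{-x} dx,
# with x0 = (1 + ln A)/r, as claimed by the change of variables.
def lhs(A, r, n=200000):
    h = 1.0 / n
    return sum(math.sqrt(1 + math.log(A) - r * math.log(h * (i + 0.5))) * h
               for i in range(n))  # midpoint rule on (0, 1)

def rhs(A, r, x_max=60.0, n=200000):
    x0 = (1 + math.log(A)) / r
    h = (x_max - x0) / n
    tail = sum(math.sqrt(x0 + h * (i + 0.5)) *
               math.exp(-(x0 + h * (i + 0.5))) * h for i in range(n))
    return math.sqrt(r) * (math.e * A) ** (1 / r) * tail

A, r = 3.0, 2.0  # hypothetical values of A and beta/gamma
assert abs(lhs(A, r) - rhs(A, r)) < 1e-3
print("change of variables verified")
```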
We now investigate the~$L_2$-norm of the envelope~$\mathfrak{E}$. If we take, for example, an envelope of~${\mathfrak{F}}$ given by
\begin{align}
\mathfrak{E}(\omega) = \varepsilon\sum_{k=1}^{T} e_k ,
\mbox{ for any } \omega \in {\mathcal{S}}
\label{envelope}
\end{align}
(so that~$\mathfrak{E}\geq |f_x|$ for all~$f_x\in{\mathfrak{F}}$), then we have
{\prop If~$\mathfrak{E}$ is given by~\eqref{envelope}, then it holds that
\begin{align*}
\|\mathfrak{E}\|_{2} = \frac{\sqrt{2}}{q}\varepsilon.
\end{align*}
\label{envelope_norm}
}
\begin{proof}
Use the fact that~$\sum_{i=1}^{T}e_i$ is
exponentially distributed with parameter~$q$.
\end{proof}
Thus, using Propositions~\ref{expect_R_tilde} and~\ref{envelope_norm} together with the fact that~$q\geq 1-\varepsilon_0$, we obtain, under Assumptions~\ref{assump_1} and~\ref{assump_2}, that for all~$n\in\mathbb{N}$,
\begin{align*}
E\Big[\sup_{x\in \Sigma}\Big|\sum_{k=1}^{n}\widehat{W}_k(x)\Big|
\Big] \leq F\sqrt{n}
\end{align*}
where
\begin{align*}
F := C_4\varepsilon \displaystyle\sqrt{1 + \ln(\phi 2^{\beta}) + \frac{\beta}{\gamma}\ln\Big(\frac{\kappa\vee (2\varepsilon)}{\varepsilon}\Big)},
\end{align*}
and~$C_4=C_4(\varepsilon_0)$ is a positive constant depending on~$\varepsilon_0$.
\subsection{A tail bound involving~$\Psi$}
\label{concentr}
Recall the definition of~$\Psi$ from~\eqref{fct_psi}:
for~$x\in\Sigma$,
\begin{equation}
\label{Psidecomp}
\Psi(x) = \frac{\displaystyle\sum_{i=1}^n
\tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(x)}
{\displaystyle\sum_{i=|{\mathfrak{H}}^c|+1}^{n}\tilde{\xi}_i}
=1 + \frac{\displaystyle\sum_{i=1}^{|{\mathfrak{H}}^c|}
\tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(x)}
{\displaystyle\sum_{i=|{\mathfrak{H}}^c|+1}^{n}\tilde{\xi}_i}.
\end{equation}
In this section, we will use a concentration inequality from~\cite{Adam08}
to estimate the tail of the numerator in the last expression above.
Recalling the notation of Section \ref{Sim_MC}, if we define for~$x\in\Sigma$
\begin{align*}
Z_n(x)=\sum_{i=1}^{n}\xi_i-G^X_n(x),
\end{align*}
we observe that $Z_n(\cdot)\stackrel{\text{\tiny law}}{=} \sum_{i=1}^{|{\mathfrak{H}}^c|} \tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(\cdot)$.
Now we use the fact that the Markov chain~$X$ has
its regenerations at times~$(\rho_j)_{j\geq 1}$
and we approximate~$Z_n$ by a sum of independent random elements.
Specifically, we define
\begin{align*}
W_j(\cdot) = Z_{\rho_{j+1}-1}(\cdot) - Z_{\rho_j-1}(\cdot), \text{ for } j\geq 0,
\end{align*}
with the convention that~$Z_0(\cdot)=0$, and we intend
to approximate~$Z_n(\cdot)$ by a suitably chosen sum
of $W_j(\cdot)$'s.
Write $T_0=\rho_1-\rho_0$ for the length of the first block,
and
\begin{align*}
T_j = \rho_{j+1} - \rho_j, \qquad j\geq 1,
\end{align*}
for the lengths of the subsequent regeneration blocks.
The random variables~$(T_j)_{j\geq 0}$
are i.i.d.\ Geometric($q$). The random
elements~$(W_j(\cdot))_{j\geq 0}$ are independent and,
additionally, the elements~$(W_j(\cdot))_{j\geq 1}$
are identically distributed. Also, for any~$x\in\Sigma$, we have
\begin{align*}
W_0(x) = \xi_1(1 -\nu(x)) + \sum_{k=2}^{T_0}
\xi_k (1 - p(x_{k-1},x) ),
\end{align*}
and, recalling~\eqref{widehatW},
\begin{align}
W_1(x) \stackrel{\text{\tiny law}}{=} \widehat{W}(x) = \sum_{k=1}^{T} e_k (1 - p(M_{k-1},x) ).
\label{W1_law}
\end{align}
Moreover, observe that~$\mathbb{E}[W_1(x)]=0$ for any~$x\in\Sigma$.
Let us denote, for~$n,m\in\mathbb{N}$,
\begin{align}
\label{Rn}
R_n &= \sup_{x\in\Sigma}\Big|Z_n(x)\Big|, \\
\tilde{R}_m &= \sup_{x\in\Sigma}\Big|\sum_{j=1}^m W_j(x)\Big|, \nonumber
\end{align}
with the convention that~$\tilde{R}_0=0$.
Observe that in Section~\ref{Est_F} we actually proved that
\begin{align}
\mathbb{E}[\tilde{R}_m] \leq F\sqrt{m},
\label{EPE_F_exp}
\end{align}
where~$F$ is defined in~\eqref{F_exp}.
We now obtain
{\prop For all~$\theta>0$, it holds that
\begin{align*}
\mathbb{P}\Big[R_n \geq 8 F \sqrt{n} + 7\theta n\Big] & \leq 2\exp\Big\{-\frac{3}{8}\frac{q^2\theta^2 n}{\varepsilon^2}\Big\} + 6\exp\Big\{-3C_5\frac{q\theta n}{\varepsilon\ln(3n+1)}\Big\}\\
&\phantom{***} + 2\exp\Big\{-\frac{q\theta n}{2\varepsilon}\Big\},
\end{align*}
where $C_5$ is a universal positive constant.
\label{Prop_Rn}
}
\begin{proof}
We will use an argument analogous to the one used in the proof of Lemma 2.9 in \cite{CP}. To begin, let us assume that there exists a positive constant~$C_5$ such that, for all~$\theta>0$,
\begin{align}
\mathbb{P}\Big[\tilde{R}_m \geq \frac{3}{2} F \sqrt{m} + \theta m\Big] \leq \exp\Big\{-\frac{q^2\theta^2 m}{8\varepsilon^2}\Big\}
+ 3\exp\Big\{-C_5\frac{q\theta m}{\varepsilon\ln(m+1)}\Big\}.
\label{assump}
\end{align}
(This statement will be proved later, in Proposition~\ref{Prop_Rn_tilde}).
Using Markov's inequality and~\eqref{EPE_F_exp}, it is elementary to show that, for~$c_1= 4/3$,
\begin{align*}
\mathbb{P}\Big[\tilde{R}_m \geq \frac{3}{2} c_1 F \sqrt{m} + \theta m\Big] \leq \frac{1}{2},
\end{align*}
so that
\begin{align}
\mathbb{P}\Big[\tilde{R}_m \geq& 2 F \sqrt{m} + \theta m\Big] \nonumber \\
&\leq \frac{1}{2} \wedge \Big(\exp\Big\{-\frac{q^2\theta^2 m}{8\varepsilon^2}\Big\}
+ 3\exp\Big\{-C_5\frac{q\theta m}{\varepsilon\ln(m+1)}\Big\}\Big).
\label{EP2}
\end{align}
Now, let us define
\begin{align*}
\tilde{M} = \min\Big\{ i\in[0,3m] : \tilde{R}_i \geq 8 F \sqrt{m} + 6\theta m\Big\},
\end{align*}
with the convention that $\min\emptyset=\infty$, so that
\begin{align*}
\mathbb{P}\Big[\tilde{M}\in[0,3m]\Big] = \mathbb{P}\Big[\max_{i\in[0,3m]}\tilde{R}_i \geq& 8 F \sqrt{m} + 6\theta m\Big].
\end{align*}
Then, using~\eqref{EP2} one gets
\begin{align*}
\exp\Big\{&-\frac{3}{8}\frac{q^2\theta^2 m}{\varepsilon^2}\Big\} + 3\exp\Big\{-3C_5\frac{q\theta m}{\varepsilon\ln(3m+1)}\Big\} \\
&\geq \mathbb{P}\big[\tilde{R}_{3m} \geq 2 F \sqrt{3m} + 3\theta m\big] \\
&\geq \sum_{j=0}^{3m} \mathbb{P}[\tilde{M}=j] \mathbb{P}\Big[\tilde{R}_{3m-j} < 2 F \sqrt{3m} + 3\theta m\Big] \\
&\geq \mathbb{P}\big[\tilde{M}\in[0,3m]\big] \min_{j\in[0,3m]} \mathbb{P}\Big[\tilde{R}_{3m-j} < 2 F \sqrt{3m-j} + \theta (3m-j)\Big] \\
&\geq \frac{1}{2} \mathbb{P}\big[\tilde{M}\in[0,3m]\big],
\end{align*}
which in turn proves that
\begin{align}
\mathbb{P}\Big[\max_{i\in[0,3m]} & \tilde{R}_i \geq 8 F \sqrt{m} + 6\theta m\Big] \nonumber \\
&\leq 2\exp\Big\{-\frac{3}{8}\frac{q^2\theta^2 m}{\varepsilon^2}\Big\} + 6\exp\Big\{-3C_5\frac{q\theta m}{\varepsilon\ln(3m+1)}\Big\}.
\label{Max_inqty}
\end{align}
Now observe that, if we define
\begin{align*}
\sigma_n = \min\{j\geq 1 : \rho_j > n\},
\end{align*}
then~$\sigma_n - 1$ is a Binomial($n-1,q$) random variable, and, under Assumption~\ref{assump_3},
\begin{align}
R_n \leq \tilde{R}_{\sigma_n-1} + \varepsilon\sum_{i=n+1}^{\rho_{\sigma_n}} \xi_i + \varepsilon\sum_{i=1}^{\rho_1-1}\xi_i.
\label{Rn_inqty}
\end{align}
Note that~$\rho_{\sigma_n}-n$ and~$\rho_1-1$ are both Geometric($q$) distributed random variables. Consequently, $\sum_{i=n+1}^{\rho_{\sigma_n}} \xi_i$ and~$\sum_{i=1}^{\rho_1-1} \xi_i$ are both Exponential($q$) distributed and we deduce that
\begin{align}
\mathbb{P}\Big[ \varepsilon\sum_{i=n+1}^{\rho_{\sigma_n}} \xi_i + \varepsilon\sum_{i=1}^{\rho_1-1} \xi_i \geq \theta n\Big]
&\leq \mathbb{P}\Big[ \varepsilon\sum_{i=n+1}^{\rho_{\sigma_n}}\xi_i \geq \frac{\theta n}{2}\Big] + \mathbb{P}\Big[\varepsilon\sum_{i=1}^{\rho_1-1} \xi_i \geq \frac{\theta n}{2}\Big] \nonumber\\
&= 2\exp\Big\{-\frac{q\theta n}{2\varepsilon}\Big\}.
\label{Exp_inqty}
\end{align}
Finally, using~\eqref{Rn_inqty} together
with~\eqref{Max_inqty} and~\eqref{Exp_inqty}, we obtain that
\begin{align*}
\mathbb{P}\big[& R_n \geq 8 F \sqrt{n} + 7\theta n\big] \\
& \leq \mathbb{P}\Big[ \max_{i\in[0,3n]}\tilde{R}_i \geq 8 F \sqrt{n}
+ 6\theta n\Big] + \mathbb{P}\Big[ \sum_{i=n+1}^{\rho_{\sigma_n}} \xi_i
+ \sum_{i=1}^{\rho_1-1} \xi_i \geq \frac{\theta n}{\varepsilon}\Big] \\
&\leq 2\exp\Big\{-\frac{3}{8}\frac{q^2\theta^2 n}{\varepsilon^2}\Big\}
+ 6\exp\Big\{-3C_5\frac{q\theta n}{\varepsilon\ln(3n+1)}\Big\}
+ 2\exp\Big\{-\frac{q\theta n}{2\varepsilon}\Big\} .
\end{align*}
\end{proof}
Then, we deduce the following
{\cor For all $i,n\in\mathbb{N}$, it holds that
\begin{align*}
\mathbb{P}\big[R_n \geq 8 F \sqrt{n} (1+i)\big] &
\leq 2\exp\Big\{-C_6\frac{F^2 i^2 }{\varepsilon^2}\Big\}
+ 8\exp\Big\{-C_7\frac{Fi}{\varepsilon}\frac{\sqrt{n}}{\ln(3n+1)}\Big\},
\end{align*}
where $C_6$ and~$C_7$ are positive constants that depend on $\varepsilon_0$.
\label{Cor_Rn}
}
\begin{proof}
Take~$\theta=\frac{8}{7}\frac{Fi}{\sqrt{n}}$ in
Proposition~\ref{Prop_Rn}, and use the fact that~$q\geq 1-\varepsilon_0$.
\end{proof}
Before proving assertion~\eqref{assump},
which was assumed to be true in the beginning of the proof
of Proposition~\ref{Prop_Rn}, we must prove some
preliminary results.
{\lem It holds that
\begin{align*}
\sigma^2 := \sup_{x\in\Sigma} \sum_{j=1}^m \mathbb{E}\big[W_j(x)^2\big]
\leq \frac{2}{q^2}\varepsilon^2m.
\end{align*}
\label{lem_sigma2}
}
\begin{proof}
Since, for any~$x\in\Sigma$, the elements of~$(W_j(x))_{j\geq 1}$
are i.i.d., we only have to prove that, for any~$x\in\Sigma$,
\begin{align*}
\mathbb{E}\big[W_1(x)^2\big] \leq \frac{2}{q^2}\varepsilon^2.
\end{align*}
We use~\eqref{W1_law} and~\eqref{max_eps} to write, for any~$x\in\Sigma$,
\begin{align*}
\mathbb{E}\big[W_1(x)^2\big] \leq \varepsilon^2
\mathbb{E}\Big[\Big(\sum_{k=1}^{T} e_k \Big)^2 \Big].
\end{align*}
Since~$T$ is Geometric($q$), we use the fact that~$\sum_{i=1}^{T}e_i$ is exponentially
distributed with parameter~$q$ to see that
\begin{align*}
\mathbb{E}\big[W_1(x)^2\big] \leq \varepsilon^2 \frac{2}{q^2},
\end{align*}
and this completes the proof.
\end{proof}
In order to formulate the next lemma,
we define the so-called $\psi_1$-Orlicz norm of a
random variable~$X$, in the following way:
\begin{align*}
\|X\|_{\psi_{1}} = \inf\big\{t>0 : \mathbb{E} e^{|X|/t} \leq 2\big\},
\end{align*}
see Definition 1 of~\cite{Adam08}.
{\lem It holds that
\begin{align*}
\Big\| \max_{1\leq j\leq m} \sup_{x\in\Sigma}|W_j(x)|
\Big\|_{\psi_1} \leq C_8 \frac{\varepsilon}{q} \ln(m+1),
\end{align*}
where~$C_8$ is a universal positive constant.
\label{lem_orlicz}
}
\begin{proof}
First, Lemma 2.2.2 of~\cite{VW96} provides the inequality
\begin{align*}
\Big\|\max_{1\leq j \leq m} \sup_{x\in\Sigma} |W_j(x)|
\Big\|_{\psi_{1}} \leq c_1 \max_{1\leq j \leq m} \Big\|\sup_{x\in\Sigma}
|W_j(x)| \Big\|_{\psi_{1}} \ln (m+1),
\end{align*}
for a universal positive constant~$c_1$.
But, due to~\eqref{max_eps},
\begin{align*}
\sup_{x\in\Sigma}\Big|\sum_{k=1}^{T} e_k
(1 - p(M_{k-1},x) ) \Big| \leq \varepsilon \sum_{k=1}^{T} e_k,
\end{align*}
so that (recall~\eqref{W1_law})
\begin{align*}
\Big\|\sup_{x\in\Sigma} |W_1(x)| \Big\|_{\psi_1} \leq \varepsilon
\Big\|\sum_{k=1}^{T} e_k \Big\|_{\psi_1}.
\end{align*}
Then, since~$\sum_{k=1}^{T} e_k$ is exponentially
distributed with mean~$1/q$ and the~$\psi_1$-Orlicz norm of an
exponential random variable equals twice its mean, we
obtain the result (recalling that the~$W_j$'s, for $j\geq 1$,
are independent and identically distributed).
\end{proof}
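The claim that the $\psi_1$-Orlicz norm of an exponential random variable equals twice its mean can be confirmed numerically: for $X\sim$ Exp($q$) one has $E e^{X/t}=q/(q-1/t)$ whenever $t>1/q$, which equals~$2$ exactly at $t=2/q$. The Python sketch below (illustrative only, with a hypothetical value of~$q$) checks this by numerical integration.

```python
import math

# For X ~ Exp(q), E[e^{X/t}] = q / (q - 1/t) whenever t > 1/q;
# at t = 2/q this equals exactly 2, so the psi_1-Orlicz norm
# inf{t > 0 : E e^{X/t} <= 2} equals 2/q, twice the mean 1/q.
def mgf_exp(q, t, x_max=200.0, n=200000):
    # numeric check of E[e^{X/t}] = int_0^infty e^{x/t} q e^{-q x} dx
    h = x_max / n
    return sum(math.exp(x / t) * q * math.exp(-q * x) * h
               for x in (h * (i + 0.5) for i in range(n)))

q = 0.7  # hypothetical parameter
assert abs(mgf_exp(q, 2 / q) - 2.0) < 1e-3
print("psi_1 norm of Exp(q) is 2/q")
```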
Now, in order to address the problem of estimating the probability
involving~$\tilde{R}_m$ (whose bound was postulated in~\eqref{assump})
from the viewpoint of the theory of empirical processes, we recall
the space~${\mathcal{S}}=\mathbb{R}^{\Sigma}$ and the class~${\mathfrak{F}}=(f_x)_{x\in\Sigma}$ of
functions~$f_x:{\mathcal{S}}\rightarrow \mathbb{R}$ such that~$f_x(\omega)=\omega(x)$,
for~$\omega\in{\mathcal{S}}$, so that the above mentioned probability can be
rewritten as
\begin{align}
\mathbb{P}\Big[ \sup_{f_x\in{\mathfrak{F}}} \Big|\sum_{j=1}^{m} f_x(W_j(\cdot))\Big|
\geq \frac{3}{2} F \sqrt{m} + \theta m \Big],
\label{prob_int}
\end{align}
with~$W_j(\cdot)$ being interpreted as a vector in~${\mathcal{S}}$ whose components
are~$W_j(x)$ for each~$x\in\Sigma$. In this setting, we are able to apply
Theorem~4 of~\cite{Adam08} to prove~\eqref{assump},
and this is done in the next proposition.
{\prop There exists a universal positive constant~$C_5$ such that,
for all~$\theta>0$,
\begin{align*}
\mathbb{P}\Big[\tilde{R}_m \geq \frac{3}{2} F \sqrt{m} + \theta m\Big]
\leq \exp\Big\{-\frac{q^2\theta^2 m}{8\varepsilon^2}\Big\}
+ 3\exp\Big\{-C_5\frac{q\theta m}{\varepsilon\ln(m+1)}\Big\}.
\end{align*}
\label{Prop_Rn_tilde}
}
\begin{proof}
We use~\eqref{EPE_F_exp} and just apply Theorem 4 of~\cite{Adam08}
(with~$\delta=1$, $\eta=1/2$ and~$\alpha=1$ there)
together with Lemmas~\ref{lem_sigma2} and~\ref{lem_orlicz},
to see that there exist universal positive constants~$C$ and~$C_8$
($C$ is from Theorem 4 of~\cite{Adam08} and~$C_8$ is
from Lemma~\ref{lem_orlicz}) such that, for all~$t>0$,
\begin{align*}
\mathbb{P}\Big[\tilde{R}_m \geq \frac{3}{2} F \sqrt{m} + t \Big]
\leq \exp\Big\{-\frac{q^2t^2}{8\varepsilon^2 m}\Big\}
+ 3\exp\Big\{-\frac{qt}{C_8C\varepsilon\ln(m+1)}\Big\}.
\end{align*}
We conclude the proof by setting~$t=\theta m$,
for~$\theta>0$, and~$C_5=(C_8C)^{-1}$.
\end{proof}
\subsection{Proof of Proposition \ref{goodenv}} \label{Proof_Av}
We begin this section by obtaining a tail estimate for the cardinality of the random set~${\mathfrak{H}}$ introduced in~\eqref{set_H}, which satisfies
\begin{align}
\label{Xidef}
|{\mathfrak{H}}| =
\sum_{j=2}^{n-1} (I_j I_{j+1}) + I_n.
\end{align}
{\prop \label{Prop_H_comp} There exist a positive
constant~$C_9=C_9(\varepsilon_0)$ and $n_2=n_2(\varepsilon_0)\in \mathbb{N}$, such that, for all $\varepsilon\in (0,\varepsilon_0]$ and~$n\geq n_2$,
it holds that
\begin{align*}
\mathbb{P}\Big[|{\mathfrak{H}}|\leq \frac{q_0^2}{6}n\Big] \leq \min\Big\{C_9 \varepsilon,
\frac{1}{2}\Big\}
\end{align*}
where $q_0:=1-\varepsilon_0$.
}
\begin{proof}
First, observe that, for $n\geq 4$, we have
\begin{align*}
\mathbb{P}\Big[|{\mathfrak{H}}| \leq \frac{q_0^2}{6}n \Big]
&\leq \mathbb{P}\Big[\sum_{j=2}^{n-1} (I_j I_{j+1})
\leq \frac{q_0^2}{6}n \Big]
\leq \mathbb{P}\Big[\sum_{k=1}^{\lfloor\frac{n-2}{2}\rfloor}
(I_{2k} I_{2k+1}) \leq \frac{q_0^2}{6}n \Big].
\end{align*}
Moreover, observe that the random variables~$(I_{2k} I_{2k+1})$,
$k=1,2,\dots,\lfloor\frac{n-2}{2}\rfloor$, are independent
Bernoulli($q^2$). Now, we recall the standard lower tail bound
for the binomial law: for~$\mathcal{X}\sim$ Binomial($m,p$)
and~$\delta\geq 0$, we have that
\begin{equation*}
\mathbb{P}\big[\mathcal{X}\leq (1-\delta) m p\big] \leq \exp\{-m I(p,\delta)\},
\end{equation*}
where
\begin{equation*}
I(p,\delta):=p(1-\delta)\ln(1-\delta)
+p\Big(\frac{1-p}{p}+\delta\Big)
\ln\Big(1+\frac{\delta p}{1-p}\Big).
\end{equation*}
Applying the above formula to the random variable
\begin{align*}
\sum_{k=1}^{\lfloor\frac{n-2}{2}\rfloor} (I_{2k} I_{2k+1})
\end{align*}
with~$1-\delta=q_0^2/(2q^2)$, and using the fact that~$q\geq q_0$,
we obtain that
\begin{align*}
\mathbb{P}\Big[\sum_{k=1}^{\lfloor\frac{n-2}{2}\rfloor} (I_{2k} I_{2k+1})
\leq \frac{q_0^2}{6}n \Big] \leq \exp\Big\{-\frac{n}{3}I\Big(q^2,1-\frac{q_0^2}{2q^2}\Big)\Big\}\leq \frac{1}{2},
\end{align*}
for~$n\geq 12\vee n'_1$, where $n'_1:=\Big\lceil \frac{3\ln 2}{I(q_0^2, 1/2)}\Big\rceil$. Finally, recall that $q=1-\varepsilon$ to conclude the proof.
\end{proof}
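The binomial lower-tail bound quoted in the proof above can be compared against the exact binomial CDF. The Python sketch below (with hypothetical parameter choices, not tied to the constants of the paper) verifies the inequality $\mathbb{P}[\mathcal{X}\leq(1-\delta)mp]\leq\exp\{-mI(p,\delta)\}$ for a few values of $(m,p,\delta)$.

```python
import math

# Check the binomial lower-tail bound
#   P[X <= (1-delta) m p] <= exp(-m I(p, delta)),  X ~ Binomial(m, p),
# against the exact CDF, for a few (hypothetical) parameter choices.
def rate(p, delta):
    # the rate function I(p, delta) from the text
    return (p * (1 - delta) * math.log(1 - delta)
            + p * ((1 - p) / p + delta) * math.log(1 + delta * p / (1 - p)))

def binom_cdf(m, p, k):
    return sum(math.comb(m, j) * p ** j * (1 - p) ** (m - j)
               for j in range(0, k + 1))

for m, p, delta in [(50, 0.5, 0.4), (200, 0.81, 0.5), (100, 0.3, 0.25)]:
    k = math.floor((1 - delta) * m * p)
    assert binom_cdf(m, p, k) <= math.exp(-m * rate(p, delta))
print("binomial lower-tail bound verified")
```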
Next, we obtain an upper bound for
\[
\mathbb{E}\Big[\sup_{x\in \Sigma}|\Psi(x)-1|^3 \; \Big|\; |{\mathfrak{H}}|> \frac{q_0^2}{6}n\Big].
\]
Then, the upper bound for $\mathbb{P}[{\mathsf{A}}_i^c\mid |{\mathfrak{H}}|> \frac{q_0^2}{6}n]$
will be a direct application of Markov's inequality.
Applying Proposition~\ref{Prop_H_comp} and recalling~\eqref{Rn}, we have
\begin{align}
\label{EXPPSI}
\mathbb{E}\Big[\sup_{x\in \Sigma}|\Psi(x)-1|^3\; \Big|\; |{\mathfrak{H}}|> \frac{q_0^2}{6}n\Big]
&\leq 2\mathbb{E}\Big[\sup_{x\in\Sigma}\Big|
\frac{\sum_{i=1}^{|{\mathfrak{H}}^c|}
\tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(x)}
{\sum_{i=|{\mathfrak{H}}^c|+1}^{n}\tilde{\xi}_i}\Big|^3
\mathds{1}_{\{|{\mathfrak{H}}|> \frac{q_0^2}{6}n\}}\Big]\nonumber\\
&\leq 2\mathbb{E}\Big[\sup_{x\in\Sigma}\Big|
\sum_{i=1}^{|{\mathfrak{H}}^c|}
\tilde{\xi}_i-\tilde{G}^{X}_{|{\mathfrak{H}}^c|}(x)\Big|^3\Big]
\mathbb{E}\Big[\Big(\sum_{i=1}^{\lfloor
\frac{q_0^2}{6}n\rfloor}\tilde{\xi}_i\Big)^{-3}\Big]\nonumber\\
&=2\mathbb{E}[R_n^3]\mathbb{E}\Big[\Big(
\sum_{i=1}^{\lfloor \frac{q_0^2}{6}n\rfloor}\tilde{\xi}_i\Big)^{-3}\Big],
\end{align}
where, in the second step, we used the fact that, conditionally
on $\sigma(I_j,j\geq 1)$, the numerator and the denominator of
the ratio in the first line are independent, and the event
$\{|{\mathfrak{H}}|> \frac{q_0^2}{6}n\}$ is measurable with respect to
$\sigma(I_j,j\geq 1)$. Then, using an integration by parts,
Corollary~\ref{Cor_Rn}, and the fact that the square root
in~(\ref{F_exp}) is greater than one, we obtain that
\begin{align*}
\mathbb{E}\Big[\Big(\frac{R_n}{8F\sqrt{n}}\Big)^3\Big]&=3\int_0^{\infty}t^2
\mathbb{P}\Big[\frac{R_n}{8F\sqrt{n}}>t\Big]dt\nonumber\\
&\leq 1+3\int_1^{\infty}t^2
\mathbb{P}\Big[\frac{R_n}{8F\sqrt{n}}>t\Big]dt\nonumber\\
&\leq c_1,
\end{align*}
where $c_1$ is a universal positive constant. Therefore, we have
\begin{equation}
\label{RICO}
\mathbb{E}[R_n^3]\leq c_2F^3n^{3/2},
\end{equation}
where $c_2$ is a universal positive constant.
On the other hand, since
$\big(\sum_{i=1}^{\lfloor \frac{q_0^2}{6}n\rfloor}\tilde{\xi}_i\big)^{-1}$
is an Inverse Gamma random variable with parameters
$(\lfloor \frac{q_0^2}{6}n\rfloor,1)$, we obtain that
\begin{equation}
\label{InvGam}
\mathbb{E}\Big[\Big(\sum_{i=1}^{\lfloor \frac{q_0^2}{6}n\rfloor}\tilde{\xi}_i\Big)^{-3}\Big]= \frac{1}{(\lfloor \frac{q_0^2}{6}n\rfloor-1)(\lfloor \frac{q_0^2}{6}n\rfloor-2)(\lfloor \frac{q_0^2}{6}n\rfloor-3)}\leq c_3 n^{-3}
\end{equation}
for $n\geq 24/q_0^2$, where $c_3$ is a positive constant depending on $\varepsilon_0$.
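Indeed, the moment identity in~(\ref{InvGam}) is a direct computation with the Gamma law: writing $m:=\lfloor \frac{q_0^2}{6}n\rfloor$, the sum $S_m:=\sum_{i=1}^{m}\tilde{\xi}_i$ has the Gamma$(m,1)$ law, so that, for $m\geq 4$,
\begin{align*}
\mathbb{E}[S_m^{-3}]=\int_0^{\infty}s^{-3}\,\frac{s^{m-1}e^{-s}}{\Gamma(m)}\,\mathrm{d}s
=\frac{\Gamma(m-3)}{\Gamma(m)}
=\frac{1}{(m-1)(m-2)(m-3)}.
\end{align*}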
Gathering (\ref{EXPPSI}), (\ref{RICO}), and~(\ref{InvGam})
and applying Markov's inequality we finally obtain
\begin{equation*}
\mathbb{P}\Big[{\mathsf{A}}_i^c \; \Big|\; |{\mathfrak{H}}|> \frac{q_0^2}{6}n\Big]\leq \frac{C_2}{(1+i)^3}
\end{equation*}
for some positive constant $C_2=C_2(\varepsilon_0)$.
\qed
\section{Proof of Theorem~\ref{Main_Thm}}
\label{Main_Thm_proof}
For $n\geq n_0$ (where $n_0$ is from Proposition~\ref{goodenv}), we estimate $\mathbb{P}[\Upsilon^c]$ from above
(recall that $\Upsilon$ is the coupling event from
Section~\ref{coupling}) to obtain an upper bound on the total
variation distance between $L_n^X$ and $L_n^Y$.
At this point, we mention that we will use the notation from Section~\ref{coupling}.
By definition of the total variation distance, we have that
\begin{equation}
\label{Theoproof1}
\|L_n^X-L_n^Y\|_{\text{TV}}\leq \mathbb{P}[\Upsilon^c].
\end{equation}
First, let us decompose $\Upsilon$ according to
$\mathsf{C}:=\{|{\mathfrak{H}}|> \frac{q_0^2}{6}n\}$ (recall~\eqref{set_H})
and its complement:
\begin{equation}
\label{Theoproof2}
\mathbb{P}[\Upsilon^c]=\mathbb{P}[\Upsilon^c, \mathsf{C}]+\mathbb{P}[\Upsilon^c, \mathsf{C}^c]\leq \mathbb{P}[\Upsilon^c,\mathsf{C}]+\mathbb{P}[\mathsf{C}^c].
\end{equation}
Now we use the partition $\mathsf{B}_i$, $i\geq 1$, defined in Section \ref{coupling}, to write
\begin{equation*}
\mathbb{P}[\Upsilon^c,\mathsf{C}]=\sum_{i=1}^{\infty}\mathbb{P}[\Upsilon^c,\mathsf{B}_i,\mathsf{C}].
\end{equation*}
Since $\mathsf{B}_i\cap\mathsf{C}$ is $\sigma({\mathcal{W}})$-measurable (we recall that ${\mathcal{W}}$ was introduced in
Section~\ref{coupling}), we have that
\begin{equation*}
\mathbb{P}[\Upsilon^c, \mathsf{B}_i,\mathsf{C}]=\mathbb{E}[{\bf 1}_{\mathsf{B}_i\cap\mathsf{C}}\mathbb{P}[\Upsilon^c\mid {\mathcal{W}}]].
\end{equation*}
Then, observe that from the coupling construction
of Section~\ref{coupling}, we have on the set $\mathsf{G}\cap\mathsf{C}$,
\begin{equation*}
\mathbb{P}[\Upsilon^c\mid {\mathcal{W}}]\leq \big\|\mathbb{P}[{V}\in \cdot\mid {\mathcal{W}}]-\mathbb{P}[{V'}\in \cdot\mid {\mathcal{W}}]\big\|_{\text{TV}}.
\end{equation*}
Hence, applying Proposition \ref{Propmulti} to the term in the
right-hand side with $\delta_0=1$ we obtain, on the sets
$\mathsf{B}_i\cap\mathsf{C}$ such that
$\mathsf{B}_i\subset\mathsf{G}$,
\begin{equation*}
\mathbb{P}[\Upsilon^c\mid {\mathcal{W}}]\leq C_1(1)(1+i)F.
\end{equation*}
Since $C_1(1)>1$, this last upper bound is still valid on the sets $\mathsf{B}_i\cap\mathsf{C}$ such that $\mathsf{B}_i\subset\mathsf{G}^c$.
We deduce that
\begin{equation*}
\mathbb{P}[\Upsilon^c, \mathsf{C}]\leq C_1(1)F\sum_{i=1}^{\infty}(i+1)\mathbb{P}[\mathsf{B}_i,\mathsf{C}]\leq C_1(1)F\Big(2+\sum_{i=2}^{\infty}(i+1)\mathbb{P}[\mathsf{A}^c_{i-1}\mid \mathsf{C}]\Big),
\end{equation*}
which by Proposition \ref{goodenv} implies
\begin{equation}
\label{Theoproof3}
\mathbb{P}[\Upsilon^c, \mathsf{C}]\leq c_1 F
\end{equation}
for some positive constant~$c_1=c_1(\varepsilon_0)$.
Finally, combining~(\ref{Theoproof1}),
(\ref{Theoproof2}),
(\ref{Theoproof3}), Proposition~\ref{Prop_H_comp}
and~(\ref{F_exp}),
we obtain Theorem~\ref{Main_Thm} for $n\geq n_0$.
For $n<n_0$, we simply perform a step-by-step coupling between
the Markov chain~$X$ and the sequence~$Y$
(as described in the introduction)
to obtain $\|L_n^X-L_n^Y\|_{\text{TV}}\leq n_0 \varepsilon$
and thus prove Theorem~\ref{Main_Thm}.
\qed
\section{Proof of Theorem~\ref{Thm2}}
\label{Second_Thm}
First, we prove a preliminary lemma. As in Section~\ref{TV_binomial},
consider again two binomial point processes on some measurable space
$(\Omega, \mathcal{T})$ with laws~${{\bf P}}_n$ and~${{\bf Q}}_n$
of respective parameters $({{\bf p}}_n,n)$ and $({\bf q}_n,n)$,
where $n\in \mathbb{N}$ and~${\bf p}_n$, ${\bf q}_n$ are
two probability laws on $(\Omega, \mathcal{T})$ such that
${\bf q}_n\ll {\bf p}_n$. Then, we have
{\lem
\label{Propmulti_2}
Let~$\delta>0$ be such that, for all $n\in \mathbb{N}$,
$|\frac{\mathrm{d}{\bf q}_n}{\mathrm{d}{\bf p}_n}(x)-1|\leq\delta n^{-1/2}$
for all~$x\in \Omega$. Then
\begin{equation*}
\sup_{n\geq 1}\|{\bf P}_n-{\bf Q}_n\|_{\emph{TV}}\leq 1 - C_{10}(\delta),
\end{equation*}
where~$C_{10}(\delta)$ is a positive constant depending on $\delta$.
}
\begin{proof}
As pointed out in the proof of Proposition~\ref{Propmulti},
${\bf P}_n$ and~${\bf Q}_n$ can be seen as probability measures
on the space of $n$-point measures
\[
{\mathcal{M}}_n=\big\{m:m=\sum_{i=1}^{n}\boldsymbol{\delta}_{x_i},
x_i\in \Omega, 1\leq i\leq n\big\}
\]
endowed with the $\sigma$-algebra generated by the mappings
$\Phi_B:{\mathcal{M}}_n\to \mathbb{Z}_+$ defined by $\Phi_B(m)=m(B)
=\sum_{i=1}^n\boldsymbol{\delta}_{x_i}(B)$,
for all $B\in \mathcal{T}$. Also, recall that ${\bf Q}_n\ll {\bf P}_n$
and its Radon-Nikodym derivative with respect to ${\bf P}_n$ is given by
\[
\frac{\text{d}{\bf Q}_n}{\text{d}{\bf P}_n}(m)
=\prod_{i=1}^n\frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x_i)
\]
where $m=\sum_{i=1}^{n}\boldsymbol{\delta}_{x_i}$.
Moreover, for~$n\in \mathbb{N}$, recall
the functions~$f_n,g_n:\Omega\to \mathbb{R}$ given by
\begin{align*}
f_n(x) = \frac{\text{d}{\bf q}_n}{\text{d}{\bf p}_n}(x)-1 \ \ \mbox{ and } \ \ g_n(x) = \ln(f_n(x)+1), \ \ \mbox{ for } x\in \Omega.
\end{align*}
We start by proving the lemma for all large enough~$n$.
It is convenient to introduce now two new distinct elements~$\textbf{0}_1$ and~$\textbf{0}_2$ in order to define a new space~$\hat{\Omega}=\Omega\cup\{\textbf{0}_1,\textbf{0}_2\}$ (we assume that
$\textbf{0}_1, \textbf{0}_2\notin \Omega$), endowed with the
$\sigma$-algebra~$\hat{\mathcal{T}}:=\sigma(\mathcal{T},\{\textbf{0}_1\})$.
Then, on~$(\hat{\Omega},\hat{\mathcal{T}})$ we consider a
new binomial point process with law~${\hat{{\bf P}}}_{n,k}$ of parameters
$(\hat{{\bf p}}_n,k)$, where $k\in \mathbb{N}$ and~$\hat{{\bf p}}_n$
is the probability law on $(\hat{\Omega}, \hat{\mathcal{T}})$ given by
\begin{align*}
\hat{{\bf p}}_n(A) =
\begin{cases}
\frac{{\bf p}_n(A)}{2}, &\mbox{ for } A\in\mathcal{T}, \\
\frac{1}{4}, &\mbox{ for } A\in \{\{\textbf{0}_1\},\{\textbf{0}_2\}\}.
\end{cases}
\end{align*}
Additionally, for $n>\delta^2 $, consider another binomial point
process on~$(\hat{\Omega},\hat{\mathcal{T}})$ with
law~${\hat{{\bf Q}}}_{n,k}$ and parameters~$(\hat{{\bf q}}_n,k)$,
where~$\hat{{\bf q}}_n$ is the probability law on $(\hat{\Omega}, \hat{\mathcal{T}})$ such that~$\hat{{\bf q}}_n(A)=\frac{{\bf q}_n(A)}{2}$
for all~$A\in\mathcal{T}$ and
\begin{align*}
\hat{{\bf q}}_n(\{\textbf{0}_1\})=\frac{1}{4}\Big(1+\frac{\delta}{\sqrt{n}}\Big),\; \hat{{\bf q}}_n(\{\textbf{0}_2\}) = \frac{1}{4}\Big(1-\frac{\delta}{\sqrt{n}}\Big)
\end{align*}
so that $\hat{{\bf q}}_n(\{\textbf{0}_1\})+\hat{{\bf q}}_n(\{\textbf{0}_2\}) = 1/2$.
Thus, $\hat{{\bf P}}_{n,k}$ and~$\hat{{\bf Q}}_{n,k}$ can be seen as probability
measures on the space
\[
\hat{{\mathcal{M}}}_k=\big\{\hat{m}:\hat{m}=\sum_{i=1}^{k}\boldsymbol{\delta}_{x_i},
x_i\in \hat{\Omega}, 1\leq i\leq k\big\}.
\]
We need to introduce the corresponding functions~$\hat{f}_n,\hat{g}_n:\hat{\Omega}\to \mathbb{R}$ given by
\begin{align*}
\hat{f}_n(x) = \frac{\text{d}\hat{{\bf q}}_n}{\text{d}\hat{{\bf p}}_n}(x)-1 \ \
\mbox{ and } \ \ \hat{g}_n(x) = \ln(\hat{f}_n(x)+1).
\end{align*}
Also, define~$\hat{h}_n:\hat{\Omega}\to \mathbb{R}$ as~$\hat{h}_n = \delta^{-1}n^{1/2}\hat{f}_n$ and the set~$\hat{{\mathcal{M}}}^{\hat{{\bf Q}}}_k
=\big\{\hat{m}\in\hat{{\mathcal{M}}}_k:\frac{\text{d}\hat{{\bf Q}}_{n,k}}
{\text{d}\hat{{\bf P}}_{n,k}}(\hat{m}) \geq 1\big\}$.
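Explicitly, from the definitions of~$\hat{{\bf p}}_n$ and~$\hat{{\bf q}}_n$ we have
\begin{align*}
\hat{h}_n(x) =
\begin{cases}
\delta^{-1}n^{1/2}f_n(x), &\mbox{ for } x\in\Omega, \\
1, &\mbox{ for } x=\textbf{0}_1, \\
-1, &\mbox{ for } x=\textbf{0}_2,
\end{cases}
\end{align*}
so that $|\hat{h}_n|\leq 1$ on~$\hat{\Omega}$, $\int\hat{h}_n\,\mathrm{d}\hat{{\bf p}}_n=0$ and $\int\hat{h}^2_n\,\mathrm{d}\hat{{\bf p}}_n\geq \hat{{\bf p}}_n(\{\textbf{0}_1\})+\hat{{\bf p}}_n(\{\textbf{0}_2\})=\frac{1}{2}$; these facts will be used below.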
Since
\begin{align*}
\frac{\text{d}\hat{{\bf Q}}_{n,k}}{\text{d}\hat{{\bf P}}_{n,k}}(\hat{m})
\geq 1 \Leftrightarrow \sum_{i=1}^k
\ln\Big(\frac{\text{d}\hat{{\bf q}}_n}{\text{d}\hat{{\bf p}}_n}(x_i)\Big)
\geq 0,
\end{align*}
for~$\hat{m}\in\hat{{\mathcal{M}}}_k$, we have for any~$n,k\in\mathbb{N}$
\begin{align*}
\|\hat{{\bf P}}_{n,k}-\hat{{\bf Q}}_{n,k}\|_{\text{TV}}
&=\int_{\hat{{\mathcal{M}}}_k}\Big(\frac{\text{d}\hat{{\bf Q}}_{n,k}}
{\text{d}\hat{{\bf P}}_{n,k}}(\hat{m})-1\Big)^+\text{d}\hat{{\bf P}}_{n,k}(\hat{m})\nonumber\\
&=\int_{\hat{{\mathcal{M}}}^{\hat{{\bf Q}}}_k}\Big(\frac{\text{d}\hat{{\bf Q}}_{n,k}}
{\text{d}\hat{{\bf P}}_{n,k}}(\hat{m})-1\Big)\text{d}\hat{{\bf P}}_{n,k}(\hat{m})\nonumber\\
&\leq 1 - \hat{{\bf P}}_{n,k}\Big[\hat{m}(\hat{g}_n) \geq 0\Big].
\end{align*}
Next, we will bound~$\hat{{\bf P}}_{n,2n}[\hat{m}(\hat{g}_n) \geq 0]$ from below.
If we define~$n_2=n_2(\delta)=\lceil 3\delta^2\rceil$ then using the
fact that~$\ln(1+x)\geq x-x^2$ for~$x\in(-1/\sqrt{3},1/\sqrt{3})$, we
have that, for~$n\geq n_2$,
\begin{align*}
\hat{{\bf P}}_{n,2n}\Big[\hat{m}(\hat{g}_n) \geq 0\Big] \geq \hat{{\bf P}}_{n,2n}\Big[\hat{m}(\hat{f}_n) \geq \hat{m}(\hat{f}^2_n)\Big] = \hat{{\bf P}}_{n,2n}\Bigg[\frac{\hat{m}(\hat{h}_n)}{\sqrt{n}} \geq \delta\frac{\hat{m}(\hat{h}^2_n)}{n}\Bigg].
\end{align*}
On the other hand,
observe that, under~$\hat{{\bf P}}_{n,2n}$,
the random variables $\hat{m}(\hat{h}_n)$
and~$\hat{m}(\hat{h}^2_n)$ have the same law
as~$\hat{h}_n(\hat{X}_1)+\dots+ \hat{h}_n(\hat{X}_{2n})$ and~$\hat{h}^2_n(\hat{X}_1)+\dots+ \hat{h}^2_n(\hat{X}_{2n})$, respectively, where the random variables $\hat{X}_1,\dots,\hat{X}_{2n}$ are i.i.d.~with law~$\hat{{\bf p}}_n$. Moreover, we have
that~$|\hat{h}_n(\hat{X}_1)|\leq 1$, $\hat{{\bf p}}_n$-a.s., $E_{\hat{{\bf p}}_n}[\hat{h}_n(\hat{X}_1)]=0$
and~$\sigma^2:=E_{\hat{{\bf p}}_n}[\hat{h}^2_n(\hat{X}_1)]\geq 1/2$. If we denote the standard Normal distribution function by~$\Phi$,
and take~$n_3=n_3(\delta)=4\lceil\frac{1}{(1-\Phi(\delta))^2}\rceil\vee n_2$, then,
by using the Berry-Esseen theorem (with~$1/2$ as an upper bound
for the Berry-Esseen constant, see for example~\cite{Tyurin}),
we obtain that, for~$n\geq n_3$,
\begin{align*}
\hat{{\bf P}}_{n,2n}\Bigg[\frac{\hat{m}(\hat{h}_n)}{\sqrt{2n}} \geq \delta\frac{\hat{m}(\hat{h}^2_n)}{n\sqrt{2}}\Bigg] \geq \hat{{\bf P}}_{n,2n}\Bigg[\frac{\hat{m}(\hat{h}_n)}{\sigma\sqrt{2n}} \geq \frac{\delta}{\sigma\sqrt{2}}\Bigg] \geq c_1,
\end{align*}
where
\begin{align*}
c_1=c_1(\delta) = \frac{1}{2}\Big(1-\Phi(\delta)\Big).
\end{align*}
Then observe that the above implies that
\begin{align}
\|\hat{{\bf P}}_{n,2n}-\hat{{\bf Q}}_{n,2n}\|_{\text{TV}} \leq 1-c_1,
\label{TVD_2n}
\end{align}
for all~$n\geq n_3$.
Now, denote by~$\hat{\mu}_{n,k}$ the maximal coupling of~$\hat{{\bf P}}_{n,k}$ and~$\hat{{\bf Q}}_{n,k}$,
and by $(\hat{m}_1, \hat{m}_2)$ the elements of
$\hat{{\mathcal{M}}}_{k}\times \hat{{\mathcal{M}}}_{k}$.
Let~$\mathsf{K}$ be the coupling event (that is, $\mathsf{K}=\{(\hat{m}_1, \hat{m}_2): \hat{m}_1 = \hat{m}_2\}$), $\mathsf{K}_1$
the coupling event of~$(\hat{m}_i(\{\textbf{0}_1\}), \hat{m}_i(\{\textbf{0}_2\}))$,
for $i=1,2$, and $\mathsf{K}_2$ the coupling event of~$(\hat{m}_i(A))_{A\in\mathcal{T}}$, for $i=1,2$,
and also observe that~$\mathsf{K}=\mathsf{K}_1\cap\mathsf{K}_2$.
Thus, we deduce that, for all~$n,k\in\mathbb{N}$,
\begin{align*}
\|\hat{{\bf P}}_{n,k}-\hat{{\bf Q}}_{n,k}\|_{\text{TV}} &= 1- \hat{\mu}_{n,k}[\mathsf{K}] \\
&= 1- \sum_{\ell=0}^{k}\hat{\mu}_{n,k}[\mathsf{K}_1 \cap \mathsf{K}_2, \hat{m}_1(\{\textbf{0}_1,\textbf{0}_2\})= \hat{m}_2(\{\textbf{0}_1,\textbf{0}_2\})=\ell ] \\
&\geq \sum_{\ell=0}^{k} \Big(1-\hat{\mu}_{n,k}[\mathsf{K}_2 \mid \hat{m}_1(\{\textbf{0}_1,\textbf{0}_2\})= \hat{m}_2(\{\textbf{0}_1, \textbf{0}_2\})=\ell] \Big) \mathfrak{p}_{\ell}^{k} \\
&= \sum_{\ell=0}^{k} \hat{\mu}_{n,k}[\mathsf{K}_2^c \mid \hat{m}_1(\{\textbf{0}_1,\textbf{0}_2\})= \hat{m}_2(\{\textbf{0}_1,\textbf{0}_2\})=\ell] \mathfrak{p}_{\ell}^{k} \\
&\geq \sum_{\ell=0}^{k} \mathfrak{p}_{\ell}^{k} \|{\bf P}_{n,\ell}-{\bf Q}_{n,\ell}\|_{\text{TV}},
\end{align*}
where~$\mathfrak{p}_{\ell}^{k}$ is the probability mass function of a Binomial($k,1/2$) random variable at~$\ell$ and ${\bf P}_{n,\ell}$ (respectively,~${\bf Q}_{n,\ell}$) is a binomial process with parameters $({\bf p}_n,\ell)$ (respectively,~$({\bf q}_n,\ell)$).
Using~\eqref{TVD_2n} and the fact
that~$\|\hat{{\bf P}}_{n,k}-\hat{{\bf Q}}_{n,k}\|_{\text{TV}}$ is non-decreasing
in~$k$ (this follows from the fact
that~$\|\hat{{\bf P}}_{n,k}-\hat{{\bf Q}}_{n,k}\|_{\text{TV}}
= \frac{1}{2}E[|1-\mathcal{L}_k(X_1, \dots, X_k)|]$,
where~$(X_i)_{i\geq 1}$ are i.i.d.\ random variables with
law~${\bf p}_n$ and~$\mathcal{L}_k(X_1, \dots, X_k)
:=\prod_{i=1}^k\frac{\mathrm{d}{\bf q}_n}{\mathrm{d}{\bf p}_n}(X_i)$
is a martingale under the canonical filtration), we obtain that,
for all~$n\geq n_3$ and~$i \leq n$,
\begin{align}
\sum_{k=0}^{2i} \mathfrak{p}_{k}^{2i} \|{\bf P}_{n,k}-{\bf Q}_{n,k}\|_{\text{TV}}
\leq 1 - c_1.
\label{bound_sum}
\end{align}
Using again the Berry-Esseen theorem
(once again with~$1/2$ as an upper bound for the Berry-Esseen constant),
we can deduce that there exist $n_4=n_4(\delta)
=\lceil(\frac{12}{1-\Phi(\delta)})^2\rceil$ and $c_2=c_2(\delta)=-\frac{1}{2}\Phi^{-1}(\frac{1-\Phi(\delta)}{24})\geq1$,
such that for all $i\geq n_4$, we have
\begin{align*}
\sum_{k\in \mathcal{I}_i(\delta)} \mathfrak{p}_{k}^{2i}
\geq 1 - \frac{c_1}{3},
\end{align*}
where~$\mathcal{I}_i(\delta) := [i-c_2\sqrt{i}, i+c_2\sqrt{i}]$.
On the other hand, if $i\geq n_3$, by~\eqref{bound_sum} it follows that
\begin{align*}
\sum_{k\in \mathcal{I}_i(\delta)}
\mathfrak{p}_{k}^{2i} \|{\bf P}_{n,k}-{\bf Q}_{n,k}\|_{\text{TV}}
\leq 1 - c_1(\delta),
\end{align*}
so that, if $i\geq n_3\vee n_4$, there exists~$i_0\in\mathcal{I}_i(\delta)$ such that
\begin{align*}
\|{\bf P}_{n,i_0}-{\bf Q}_{n,i_0}\|_{\text{TV}}
\leq \frac{1 - c_1}{1 - \frac{c_1}{3}} \leq 1 - \frac{2}{3}c_1.
\end{align*}
To conclude the proof of the lemma, observe that for any~$n$ large enough there exists~$i\geq n_3\vee n_4\vee c_2^2$ such that~$n-(i+\lfloor c_2\sqrt{i} \rfloor)\in[0,3]$;
the above argument then yields~$i_0\in\mathbb{N}$ such that~$n-5c_2\sqrt{n} \leq i_0 \leq n$ and
\begin{align}
\|{\bf P}_{n,i_0}-{\bf Q}_{n,i_0}\|_{\text{TV}} \leq 1 - \frac{2}{3}c_1.
\label{i0}
\end{align}
Now, if $n-i_0\leq (n_3\vee n_4\vee c_2^2)+5\sqrt{5} c_2^{3/2}n^{1/4}$,
using (\ref{i0}) and making a point-by-point coupling between
the $n-i_0$ remaining points of the binomial processes~${\bf P}_{n}$
and~${\bf Q}_{n}$, we obtain that
\begin{align}
\|{\bf P}_{n}-{\bf Q}_{n}\|_{\text{TV}} \leq 1-\frac{2}{3}c_1(\delta)\Bigg(1-\frac{[(n_3\vee n_4\vee c_2^2)+5\sqrt{5} c_2^{3/2}n^{1/4}]\delta}{2\sqrt{n}}\Bigg)^+.
\label{Case1}
\end{align}
On the other hand, if
$n-i_0> (n_3\vee n_4\vee c_2^2)+5\sqrt{5} c_2^{3/2}n^{1/4}$,
we first consider $j\in \mathbb{N}$ such that
\begin{equation*}
n-i_0 - ( j-i_0+ \lfloor c_2\sqrt{j-i_0}\rfloor) \in [0,3].
\end{equation*}
Observe that in this case, $j-i_0> n_3\vee n_4\vee c_2^2$,
and thus by the preceding analysis we obtain that there exists $j_0>i_0$
such that~$n-5\sqrt{5}c_2^{3/2}n^{1/4} \leq j_0 \leq n$ and
\begin{align}
\|{\bf P}_{n,j_0-i_0}-{\bf Q}_{n,j_0-i_0}\|_{\text{TV}} \leq 1 - \frac{2}{3}c_1.
\label{j0}
\end{align}
Using (\ref{i0}), (\ref{j0}) and performing a point-by-point
coupling between the $n-j_0$ remaining points of the binomial
processes~${\bf P}_{n}$ and~${\bf Q}_{n}$, we obtain that
\begin{align}
\|{\bf P}_{n}-{\bf Q}_{n}\|_{\text{TV}} \leq 1-\Big(\frac{2}{3}c_1\Big)^2\Bigg(1-\frac{5\sqrt{5} c_2^{3/2}n^{1/4}\delta}{2\sqrt{n}}\Bigg)^+.
\label{Case2}
\end{align}
Finally, using~(\ref{Case1}) and~(\ref{Case2}),
we obtain that there exists $n_5=n_5(\delta)$ such that,
for $n\geq n_5$ we have that
\begin{align}
\label{OPIO}
\|{\bf P}_{n}-{\bf Q}_{n}\|_{\text{TV}} \leq 1-c_3,
\end{align}
where $c_3=c_3(\delta)$ is a positive constant depending on~$\delta$.
To finish the proof of Lemma~\ref{Propmulti_2},
we consider the case $n<n_5$.
Since ${\bf q}_n\ll{\bf p}_n$ and
$|\frac{\mathrm{d}{\bf q}_n}{\mathrm{d}{\bf p}_n}(x)-1|\leq\delta$
for all~$x\in \Omega$, we first observe that
\begin{equation*}
\max_{1\leq n< n_5}\|{\bf p}_n-{\bf q}_n\|_{\text{TV}}\leq 1-\frac{1}{1+\delta}.
\end{equation*}
Then, from this last fact we obtain that
\begin{equation*}
\max_{1\leq n< n_5} \|{\bf P}_{n}-{\bf Q}_{n}\|_{\text{TV}}
\leq 1-\Big(\frac{1}{1+\delta}\Big)^{n_5}.
\end{equation*}
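Indeed, coupling the $n$ points of the two binomial processes independently, each pair via a maximal coupling of~${\bf p}_n$ and~${\bf q}_n$, all $n$ pairs coincide simultaneously with probability at least $(1-\|{\bf p}_n-{\bf q}_n\|_{\text{TV}})^{n}$, so that, for $1\leq n<n_5$,
\begin{align*}
\|{\bf P}_{n}-{\bf Q}_{n}\|_{\text{TV}}
\leq 1-\big(1-\|{\bf p}_n-{\bf q}_n\|_{\text{TV}}\big)^{n}
\leq 1-\Big(\frac{1}{1+\delta}\Big)^{n_5}.
\end{align*}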
Together with~(\ref{OPIO}), this concludes the proof of
Lemma~\ref{Propmulti_2}.
\end{proof}
We now prove Theorem~\ref{Thm2}.
For this proof, we consider~$\varepsilon_0=\varepsilon$ in Assumption~\ref{assump_3}.
We start by taking $i=i_1:=\lceil (2C_2)^{1/3}\rceil-1$
in Proposition~\ref{goodenv}, so that
\begin{equation*}
\mathbb{P}\Big[{\mathsf{A}}_{i_1} \;\Big|\; |{\mathfrak{H}}|> \frac{q_0^2}{6}n\Big]
\geq \frac{1}{2},
\end{equation*}
for all~$n\geq n_0$ (where~$n_0$ is from
Proposition~\ref{goodenv}). Then, using Proposition~\ref{Prop_H_comp},
we obtain that $\mathbb{P}[{\mathsf{A}}_{i_1}]\geq 1/4$, for $n\geq n_0$.
Now, observe that, by Lemma~\ref{Propmulti_2}, for all
$n\geq n'_0 = n'_0(\beta,\varphi, \kappa, \gamma, \varepsilon_0)
:=n_0 \vee [F(1+i_1)]^2$ (where $F$ is given in~(\ref{F_exp})
with~$\varepsilon=\varepsilon_0$) we obtain that
\begin{equation}
\label{TIRIO}
\|L_n^X - L_n^Y\|_{\text{TV}}\leq 1-\mathbb{P}[\Upsilon]
\leq 1-\mathbb{P}[\Upsilon,{\mathsf{A}}_{i_1}]\leq 1-\frac{1}{4}C_{10}((1+i_1)F).
\end{equation}
Then, to complete the proof we just observe that
\begin{equation}
\label{TYREL}
\max_{1\leq n< n'_0}\|L_n^X - L_n^Y\|_{\text{TV}}
\leq 1-\Big(\frac{1}{1+\varepsilon_0}\Big)^{n'_0}.
\end{equation}
Finally, using~(\ref{TIRIO}) and~(\ref{TYREL}), we conclude
the proof of Theorem~\ref{Thm2}.
\qed
\section*{Acknowledgements}
Diego F.~de Bernardini thanks São Paulo Research Foundation (FAPESP) (grant \#2016/13646-4) and Fundo de Apoio ao Ensino,
\`a Pesquisa e \`a Extensão (FAEPEX) (grant \#2866/16). Christophe Gallesco was partially supported by CNPq (grant 313496/2014-5).
Serguei Popov was partially supported by CNPq (grant 300886/2008--0).
\section{Introduction}\label{sec:intro}
An \emph{ordinary line} of a set of points in the plane is a line passing through exactly two points of the set.
The classical Sylvester--Gallai theorem states that every finite non-collinear point set in the plane spans at least one ordinary line.
In fact, for sufficiently large $n$, an $n$-point non-collinear set in the plane spans at least $n/2$ ordinary lines, and this bound is tight if $n$ is even.
This was shown by Green and Tao \cite{GT13} via a structure theorem characterising all finite point sets with few ordinary lines.
It is then natural to consider higher dimensional analogues.
Motzkin \cite{M51} noted that there are finite non-coplanar point sets in $3$-space that span no plane containing exactly three points of the set.
He proposed considering instead hyperplanes $\Pi$ in $d$-space such that all but one of the points of the set contained in $\Pi$ lie in a $(d-2)$-dimensional flat of $\Pi$.
The existence of such hyperplanes was shown by Motzkin \cite{M51} for $3$-space and by Hansen \cite{H65} in higher dimensions.
Purdy and Smith \cite{PS10} considered instead finite non-coplanar point sets in $3$-space with no three points collinear,
and provided a lower bound on the number of planes containing exactly three points of the set.
Referring to such a plane as an \emph{ordinary plane}, Ball \cite{B18} proved a $3$-dimensional analogue of Green and Tao's \cite{GT13} structure theorem, and found the exact minimum number of ordinary planes spanned by sufficiently large non-coplanar point sets in real projective $3$-space with no three points collinear.
Using an alternative method, we \cite{LS18} were able to prove a more detailed structure theorem but with a stronger condition; see Theorem~\ref{thm:plane} in Section~\ref{sec:proof}.
Ball and Monserrat \cite{BM17} made the following definition, generalising ordinary planes to higher dimensions.
\begin{definition}
An \emph{ordinary hyperplane} of a set of points in real projective $d$-space, where every $d$ points span a hyperplane, is a hyperplane passing through exactly $d$ points of the set.
\end{definition}
They \cite{BM17} also proved bounds on the minimum number of ordinary hyperplanes spanned by such sets (see also \cite{M15}).
Our first main result is a structure theorem for sets with few ordinary hyperplanes.
The elliptic normal curves and rational acnodal curves mentioned in the theorem and their group structure will be described in Section~\ref{sec:curves}.
Our methods extend those in our earlier paper \cite{LS18}, and we detail them in Section~\ref{sec:tools}.
\begin{theorem}\label{thm:main1}
Let $d \geqslant 4$, $K > 0$, and suppose $n \geqslant C\max\{(dK)^8, d^3 2^dK\}$ for some sufficiently large absolute constant $C > 0$.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^d$ where every $d$ points span a hyperplane.
If $P$ spans at most $K\binom{n-1}{d-1}$ ordinary hyperplanes, then $P$ differs in at most $O(d2^dK)$ points from a configuration of one of the following types:
\begin{enumerate}[label=\rom]
\item A subset of a hyperplane;\label{case:hyperplane}
\item A coset $H \oplus x$ of a subgroup $H$ of an elliptic normal curve or the smooth points of a rational acnodal curve of degree $d+1$, for some $x$ such that $(d+1)x \in H$.\label{case:curve}
\end{enumerate}
\end{theorem}
Conversely, it is easy to show that a set of $n$ points where every $d$ points span a hyperplane, and which differs from \ref{case:hyperplane} or \ref{case:curve} in $O(K)$ points, spans
$O(K\binom{n-1}{d-1})$ ordinary hyperplanes.
By \cite{BM17}*{Theorem 2.4}, if a set of $n$ points where every $d$ points span a hyperplane itself spans $K\binom{n-1}{d-1}$ ordinary hyperplanes, and is not contained in a hyperplane, then $K = \Omega(1/d)$.
Theorem~\ref{thm:main2} below implies that $K\geqslant 1$ for sufficiently large $n$ depending on $d$.
For a similar structure theorem in dimension $4$ but with $K = o(n^{1/7})$, see Ball and Jimenez \cite{BJ18}, who show that $P$ lies on the intersection of five quadrics.
Theorem~\ref{thm:main1} proves \cite{BJ18}*{Conjecture~12},
noting that elliptic normal curves and rational acnodal curves lie on $\binom{d}{2} - 1$ linearly independent quadrics \citelist{\cite{Klein}*{p.~365}\cite{Fi08}*{Proposition~5.3}}.
We also mention that Monserrat \cite{M15}*{Theorem 2.10} proved a structure theorem stating that almost all points of the set lie on the intersection of $d-1$ hypersurfaces of degree at most~$3$.
Our second main result is a tight bound on the minimum number of ordinary hyperplanes, proving \cite{BM17}*{Conjecture 3}.
Note that our result holds only for sufficiently large $n$; see \citelist{\cite{BM17}\cite{M15}\cite{J18}} for estimates when $d$ is small or $n$ is not much larger than $d$.
\begin{theorem}\label{thm:main2}
Let $d \geqslant 4$ and let $n\geqslant Cd^3 2^d$ for some sufficiently large absolute constant $C > 0$.
The minimum number of ordinary hyperplanes spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^d$, not contained in a hyperplane and where every $d$ points span a hyperplane, is
\[ \binom{n-1}{d-1} - O\left(d2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right).\]
This minimum is attained by a coset of a subgroup of an elliptic normal curve or the smooth points of a rational acnodal curve of degree $d+1$, and when $d+1$ and $n$ are coprime, by $n-1$ points in a hyperplane together with a point not in the hyperplane.
\end{theorem}
Green and Tao \cite{GT13} also used their structure theorem to solve the classical orchard problem of finding the maximum number of
$3$-point lines
spanned by a set of $n$ points in the plane, for $n$ sufficiently large.
We solved the $3$-dimensional analogue in \cite{LS18}.
Our third main result is the $d$-dimensional analogue.
We define a \emph{$(d+1)$-point hyperplane} to be a hyperplane through exactly $d+1$ points of a given set.
\begin{theorem}\label{thm:main3}
Let $d \geqslant 4$ and let $n\geqslant Cd^3 2^d$ for some sufficiently large absolute constant $C > 0$.
The maximum number of $(d+1)$-point hyperplanes spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^d$ where every $d$ points span a hyperplane is
\[ \frac{1}{d+1} \binom{n-1}{d} + O\left(2^{-d/2}\binom{n}{\lfloor\frac{d-1}{2}\rfloor}\right). \]
This maximum is attained by a coset of a subgroup of an elliptic normal curve or the smooth points of a rational acnodal curve of degree $d+1$.
\end{theorem}
While the bounds in Theorems~\ref{thm:main2} and~\ref{thm:main3} are asymptotic, we provide a recursive method (as part of our proofs) to calculate the exact extremal values for a given $d$ and $n$ sufficiently large in Section~\ref{sec:extremal}.
In principle, the exact values can be calculated for any given $d$; they turn out to be quasi-polynomials in $n$ with period $d+1$.
We present the values for $d = 4, 5, 6$ at the end of Section~\ref{sec:extremal}.
\subsection*{Relation to previous work}
The main idea in our proof of Theorem~\ref{thm:main1}
is to induct on the dimension $d$, with the base case $d=3$ being our earlier structure theorem for sets defining few ordinary planes \cite{LS18},
which in turn is based on Green and Tao's Intermediate Structure Theorem for sets defining few ordinary lines \cite{GT13}*{Proposition~5.3}.
Roughly, the structure theorem in $3$-space states that if a finite set of points is in general position (no three points collinear) and spans few ordinary planes, then most of the points must lie on a plane, two disjoint conics, or an elliptic or acnodal space quartic curve.
In fact, we can define a group structure on these curves encoding when four points are coplanar, in which case our point set must be very close to a coset of the curve.
(See Theorem~\ref{thm:plane}
for a more precise statement.)
As originally observed by Ball \cite{B18} in $3$-space, the general position condition allows the use of projection to leverage Green and Tao's Intermediate Structure Theorem \cite{GT13}*{Proposition~5.3}.
This avoids having to apply their Full Structure Theorem \cite{GT13}*{Theorem~1.5}, which has a much worse lower bound on $n$, as it avoids the technical Section~6 of \cite{GT13}, dealing with the case in the plane when most of the points lie on a large, though bounded, number of lines.
On the other hand, to get to the precise coset structure, we used additive-combinatorial results from \cite{GT13}*{Section~7}, specifically \cite{GT13}*{Proposition~A.5, Lemmas 7.2, 7.4, 7.7, and Corollary 7.6}.
In this paper, the only result of Green and Tao \cite{GT13} we explicitly use is \cite{GT13}*{Proposition~A.5}, which we extend in Proposition~\ref{cor:a5}, while all other results are subsumed in the structure theorem in $3$-space.
In dimensions $d>3$, the general position condition also allows the use of projections from a point to a hyperplane (see also Ball and Monserrat \cite{BM17}).
In Section~\ref{ssec:projections} we detail various technical results about the behaviour of curves under such projections, which are extensions of $3$-dimensional results in \cite{LS18}.
While the group structure on elliptic or singular space quartic curves is well studied (see for instance \cite{Muntingh}), we could not find references to the group structure on singular rational curves in higher dimensions.
This is our main focus in Section~\ref{sec:curves}, which in a way extends \cite{LS18}*{Section 3}.
In particular, we look at Sylvester's theorem on when a binary form can be written as a sum of perfect powers, which has its roots in classical invariant theory.
In extending the results of \cite{LS18}*{Section 3}, we have to consider how to generalise the catalecticant (of a binary quartic form), which leads us to the secant variety of the rational normal curve as a determinantal variety.
Green and Tao's Intermediate Structure Theorem in $2$-space has a slightly different flavour to their Full Structure Theorem, the structure theorem in $3$-space, and Theorem~\ref{thm:main1}.
However, this is not the only reason why we start our induction at $d=3$.
A more substantial reason is that there are no smooth rational cubic curves in $2$-space; as is well known, all rational planar cubic curves are singular.
Thus, both smooth and singular rational quartics in $3$-space project onto rational cubics, and we need some way to tell them apart.
In higher dimensions, we have Lemma~\ref{lemma:singular_projection}
to help us, but since this is false when $d=3$, the induction from the plane to $3$-space \cite{LS18} is more technical.
This is despite the superficial similarity between the $2$- and $3$-dimensional situations, where there are two almost-extremal cases, while there is essentially only one case when $d>3$.
Proving Theorem~\ref{thm:main1}, which covers the $d > 3$ cases, is thus in some sense less complicated, since not only are we leveraging a more detailed structure theorem (Theorems~\ref{thm:main1}
and~\ref{thm:plane}
as opposed to \cite{GT13}*{Proposition~5.3}), we also lose a case.
However, there are complications that arise in how to generalise and extend results from $2$- and $3$-space to higher dimensions.
\section{Notation and tools}\label{sec:tools}
By $A = O(B)$, we mean there exists an absolute constant $C > 0$ such that $0\leqslant A \leqslant CB$.
Thus, $A=-O(B)$ means there exists an absolute constant $C>0$ such that $-CB\leqslant A\leqslant 0$.
We also write $A=\Omega(B)$ for $B=O(A)$.
None of the $O(\cdot)$ and $\Omega(\cdot)$ statements in this paper have implicit dependence on the dimension~$d$.
We write $A\mathbin{\triangle} B$ for the symmetric difference of the sets $A$ and $B$.
Let $\mathbb{F}$ denote the field of real or complex numbers, let $\mathbb{F}^* = \mathbb{F}\setminus{\{0\}}$, and let $\mathbb{F}\mathbb{P}^d$ denote the $d$-dimensional projective space over $\mathbb{F}$.
We denote the homogeneous coordinates of a point in $d$-dimensional projective space by a $(d+1)$-dimensional vector $[x_0, x_1, \dots, x_d]$.
We call a linear subspace of dimension $k$ in $\mathbb{F}\mathbb{P}^d$ a \emph{$k$-flat}; thus a point is a $0$-flat, a line is a $1$-flat, a plane is a $2$-flat, and a hyperplane is a $(d-1)$-flat.
We denote by $Z_{\mathbb{F}}(f)$ the set of $\mathbb{F}$-points of the algebraic hypersurface defined by the vanishing of a homogeneous polynomial $f\in\mathbb{F}[x_0, x_1, \dots, x_d]$.
More generally, we consider a (closed, projective) \emph{variety} to be any intersection of algebraic hypersurfaces.
We say that a variety is pure-dimensional if each of its irreducible components has the same dimension.
We consider a \emph{curve} of degree $e$ in $\mathbb{C}\mathbb{P}^d$ to be a variety $\delta$ of pure dimension $1$ such that a generic hyperplane in $\mathbb{C}\mathbb{P}^d$ intersects $\delta$ in $e$ distinct points.
More generally, the degree of a variety $X \subset \mathbb{C}\mathbb{P}^d$ of dimension $r$ is
\[ \deg(X) := \max \setbuilder{|\Pi \cap X|}{\text{$\Pi$ is a $(d-r)$-flat such that $\Pi \cap X$ is finite}}. \]
We say that a curve is \emph{non-degenerate} if it is not contained in a hyperplane, and \emph{non-planar} if it is not contained in a $2$-flat.
We call a curve \emph{real} if each of its irreducible components contains infinitely many points of $\mathbb{R}\mathbb{P}^d$.
Whenever we consider a curve in $\mathbb{R}\mathbb{P}^d$, we implicitly assume that its Zariski closure is a real curve.
We denote the Zariski closure of a set $S \subseteq \mathbb{C}\mathbb{P}^d$ by $\overline{S}$.
We will use the \emph{secant variety $\Sec_{\mathbb{C}}(\delta)$} of a curve $\delta$, which is the Zariski closure of the set of points in $\mathbb{C}\mathbb{P}^d$ that lie on a line through some two points of $\delta$.
\subsection{B\'ezout's theorem}\label{ssec:bezout}
B\'ezout's theorem gives the degree of an intersection of varieties.
While it is often formulated as an equality, in this paper we only need the weaker form that ignores multiplicity and gives an upper bound.
The (set-theoretical) intersection $X\cap Y$ of two varieties is just the variety defined by $P_X\cup P_Y$, where $X$ and $Y$ are defined by the collections of homogeneous polynomials $P_X$ and $P_Y$ respectively.
\begin{theorem}[B\'ezout \cite{Fu84}*{Section~2.3}]\label{thm:bezout}
Let $X$ and $Y$ be varieties in $\mathbb{C}\mathbb{P}^d$ with no common irreducible component.
Then $\deg(X \cap Y) \leqslant \deg(X) \deg(Y)$.
\end{theorem}
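As a quick illustration (ours, not from the paper), consider the conic $x^2+y^2=1$ and the cubic $y=x^3$ in the plane: B\'ezout bounds their intersection by $2\cdot 3=6$ points, and substituting $y=x^3$ gives $x^6+x^2-1$, which has exactly two real roots. The following sketch counts the real intersections by sign changes on a grid.

```python
# Illustrative sanity check of Bezout's bound (not from the paper):
# the conic x^2 + y^2 = 1 and the cubic y = x^3 meet in at most
# 2 * 3 = 6 points.  Substituting y = x^3 gives f(x) = x^6 + x^2 - 1.
def f(x):
    return x**6 + x**2 - 1

# Count sign changes of f on a fine grid; f has no rational roots, so
# no grid point evaluates to exactly zero.
xs = [i / 1000 for i in range(-3000, 3001)]
vals = [f(x) for x in xs]
real_roots = sum(1 for a, b in zip(vals, vals[1:])
                 if (a < 0 <= b) or (b < 0 <= a))

bezout_bound = 2 * 3
print(real_roots, bezout_bound)  # 2 real intersections, bound 6
assert real_roots <= bezout_bound
```

Of course, B\'ezout counts complex intersections with multiplicity; the real count here is just an easy lower-level check that the bound is respected.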
\subsection{Projections}\label{ssec:projections}
Given $p\in\mathbb{F}\mathbb{P}^d$, the \emph{projection from $p$}, $\pi_p\colon \mathbb{F}\mathbb{P}^d\setminus\{p\}\to \mathbb{F}\mathbb{P}^{d-1}$, is defined by identifying $\mathbb{F}\mathbb{P}^{d-1}$ with any hyperplane $\Pi$ of $\mathbb{F}\mathbb{P}^d$ not passing through $p$, and then letting $\pi_p(x)$ be the point where the line $px$ intersects $\Pi$ \cite{H92}*{Example~3.4}.
Equivalently, $\pi_p$ is induced by a surjective linear transformation $\mathbb{F}^{d+1}\to\mathbb{F}^d$ where the kernel is spanned by the vector~$p$.
As in our previous paper \cite{LS18}, we have to consider projections of curves where we do not have complete freedom in choosing a generic projection point $p$.
Let $\delta \subset \mathbb{C}\mathbb{P}^d$ be an irreducible non-planar curve of degree $e$, and let $p$ be a point in $\mathbb{C}\mathbb{P}^d$.
We call $\pi_p$ \emph{generically one-to-one on $\delta$} if there is a finite subset $S$ of $\delta$ such that $\pi_p$ restricted to $\delta\setminus S$ is one-to-one.
(This is equivalent to the birationality of $\pi_p$ restricted to $\delta\setminus\{p\}$ \cite{H92}*{p.~77}.)
If $\pi_p$ is generically one-to-one, the degree of the curve $\overline{\pi_p(\delta \setminus \{p\})}$ is $e-1$ if $p$ is a smooth point on $\delta$, and is $e$ if $p$ does not lie on~$\delta$;
if $\pi_p$ is not generically one-to-one, then the degree of $\overline{\pi_p(\delta \setminus \{p\})}$ is at most $(e-1)/2$ if $p$ lies on $\delta$, and is at most $e/2$ if $p$ does not lie on~$\delta$
\cite{H92}*{Example 18.16}, \cite{Kollar}*{Section 1.15}.
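For a concrete instance of the degree drop (ours, not from the paper): project the twisted cubic $\nu_3[x,y]=[y^3,-xy^2,x^2y,-x^3]\subset\mathbb{C}\mathbb{P}^3$ from the smooth point $p=\nu_3[0,1]=[1,0,0,0]$. The projection forgets the first coordinate, and the image lies on the conic $z_0z_2=z_1^2$, a curve of degree $e-1=2$.

```python
# Concrete check (not from the paper): projecting the twisted cubic
# nu_3[x, y] = [y^3, -x y^2, x^2 y, -x^3] in CP^3 from the smooth point
# p = nu_3[0, 1] = [1, 0, 0, 0] drops the degree from 3 to 2.
# The projection pi_p forgets the first coordinate, and the image lies
# on the conic z0 * z2 = z1^2.
def project_from_p(x, y):
    # pi_p induced by the linear map with kernel spanned by [1, 0, 0, 0]
    return (-x * y**2, x**2 * y, -x**3)

on_conic = all(
    z0 * z2 == z1**2
    for x in range(-10, 11)
    for y in range(-10, 11)
    if (x, y) != (0, 0)
    for (z0, z1, z2) in [project_from_p(x, y)]
)
print(on_conic)  # True: the image is a conic, of degree e - 1 = 2
assert on_conic
```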
The following three lemmas on projections are proved in \cite{LS18} in the case $d=3$.
They all state that most projections behave well and can be considered to be quantitative versions of the trisecant lemma \cite{KKT08}.
The proofs of Lemmas~\ref{lem:projection2} and \ref{lem:projection3} are almost word-for-word the same as the proofs of the $3$-dimensional cases in \cite{LS18}.
All three lemmas can also be proved by induction on the dimension $d \geqslant 3$ from the $3$-dimensional case.
We illustrate this by proving Lemma~\ref{lem:projection1}.
\begin{lemma}\label{lem:projection1}
Let $\delta$ be an irreducible non-planar curve of degree $e$ in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 3$.
Then there are at most $O(e^4)$ points $p$ on $\delta$ such that $\pi_p$ restricted to $\delta\setminus\{p\}$ is not generically one-to-one.
\end{lemma}
\begin{proof}
The case $d=3$ was shown in \cite{LS18}, based on the work of Furukawa \cite{Fu11}.
We next assume that $d \geqslant 4$ and that the lemma holds in dimension $d-1$.
Since $d>3$ and the dimension of $\Sec_\mathbb{C}(\delta)$ is at most $3$ \cite{H92}*{Proposition~11.24}, there exists a point $p \in \mathbb{C}\mathbb{P}^d$ such that all lines through $p$ have intersection multiplicity at most $1$ with $\delta$.
It follows that the projection $\delta':=\overline{\pi_p(\delta)}$ of $\delta$ is a non-planar curve of degree $e$ in $\mathbb{C}\mathbb{P}^{d-1}$.
Consider any line $\ell$ not through $p$ that intersects $\delta$ in at least three distinct points $p_1,p_2,p_3$.
Then $\pi_p(\ell)$ is a line in $\mathbb{C}\mathbb{P}^{d-1}$ that intersects $\delta'$ in three points $\pi_p(p_1), \pi_p(p_2), \pi_p(p_3)$.
It follows that if $x\in\delta$ is a point such that for all but finitely many points $y\in\delta$, the line $xy$ intersects $\delta$ in a point other than $x$ or $y$, then $x':=\pi_p(x)$ is a point such that for all but finitely many points $y':=\pi_p(y)\in\delta'$, the line $x'y'$ intersects $\delta'$ in a third point.
That is, if $\pi_x$ restricted to $\delta\setminus\{x\}$ is not generically one-to-one, then the projection map $\pi_{x'}$ in $\mathbb{C}\mathbb{P}^{d-1}$ restricted to $\delta'\setminus\{x'\}$ is not generically one-to-one.
By the induction hypothesis, there are at most $O(e^4)$ such points and we are done.
\end{proof}
\begin{lemma}\label{lem:projection2}
Let $\delta$ be an irreducible non-planar curve of degree $e$ in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 3$.
Then there are at most $O(e^3)$ points $x \in \mathbb{C}\mathbb{P}^d \setminus \delta$ such that $\pi_x$ restricted to $\delta$ is not generically one-to-one.
\end{lemma}
\begin{lemma}\label{lem:projection3}
Let $\delta_1$ and $\delta_2$ be two irreducible non-planar curves in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 3$, of degree $e_1$ and $e_2$ respectively.
Then there are at most $O(e_1e_2)$ points $p$ on $\delta_1$ such that $\overline{\pi_p(\delta_1\setminus\{p\})}$ and $\overline{\pi_p(\delta_2\setminus\{p\})}$ coincide.
\end{lemma}
\section{Curves of degree \texorpdfstring{$d+1$}{d+1}}\label{sec:curves}
In this paper, irreducible non-degenerate curves of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ play a fundamental role.
Indeed, the elliptic normal curve and rational acnodal curve mentioned in Theorem~\ref{thm:main1} are both such curves.
In this section, we describe their properties that we need.
These properties are all classical, but we did not find a reference for the group structure on singular rational curves of degree $d+1$, and therefore consider this in detail.
It is well known that, in the plane, there is a group structure on any smooth cubic curve, as well as on the set of smooth points of a singular cubic.
This group has the property that three points sum to the identity if and only if they are collinear.
Over the complex numbers, the group on a smooth cubic is isomorphic to the torus $(\mathbb{R}/\mathbb{Z})^2$, and the group on the smooth points of a singular cubic is isomorphic to $(\mathbb{C},+)$ or $(\mathbb{C}^*,\cdot)$ depending on whether the singularity is a cusp or a node.
Over the real numbers, the group on a smooth cubic is isomorphic to $\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_2$ depending on whether the real curve has one or two semi-algebraically connected components, and the group on the smooth points of a singular cubic is isomorphic to $(\mathbb{R},+)$, $(\mathbb{R},+)\times\mathbb{Z}_2$, or $\mathbb{R}/\mathbb{Z}$ depending on whether the singularity is a cusp, a crunode, or an acnode.
See for instance \cite{GT13} for a more detailed description.
In higher dimensions, it turns out that an irreducible non-degenerate curve of degree $d+1$ does not necessarily have a natural group structure, but when it does, the behaviour is similar to the planar case.
For instance, in $\mathbb{C}\mathbb{P}^3$, an irreducible non-degenerate quartic curve is either an elliptic quartic, with a group isomorphic to an elliptic curve such that four points on the curve are coplanar if and only if they sum to the identity, or a rational curve.
There are two types, or species, of rational quartics.
The rational quartic curves of the first species are intersections of two quadrics (as are elliptic quartics); they are always singular, and there is a group on the smooth points such that four points on the curve are coplanar if and only if they sum to the identity.
Those of the second species lie on a unique quadric, are smooth, and carry no natural group structure analogous to the other cases.
See \cite{LS18} for a more detailed account.
The picture is similar in higher dimensions.
\begin{definition}[Clifford \cite{Clifford}, Klein \cite{Klein}]
An \emph{elliptic normal curve} is an irreducible non-degenerate smooth curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ isomorphic to an elliptic curve in the plane.
\end{definition}
\begin{prop}[\cite{S09}*{Exercise 3.11 and Corollary 5.1.1}, \cite{S94}*{Corollary 2.3.1}]\label{prop:elliptic_group}
An elliptic normal curve $\delta$ in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 2$, has a natural group structure such that $d+1$ points in $\delta$ lie on a hyperplane if and only if they sum to the identity.
This group is isomorphic to $(\mathbb{R}/\mathbb{Z})^2$.
If the curve is real, then the group is isomorphic to $\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z}\times\mathbb{Z}_2$ depending on whether the real curve has one or two semi-algebraically connected components.
\end{prop}
A similar result holds for singular rational curves of degree $d+1$.
Since we need to work with such curves and a description of their group structure is not easily found in the literature, we give a detailed discussion of their properties in the remainder of this section.
A \emph{rational curve} $\delta$ in $\mathbb{F}\mathbb{P}^d$ of degree $e$ is a curve that can be parametrised by the projective line,
\[ \delta\colon \mathbb{F}\mathbb{P}^1 \to \mathbb{F}\mathbb{P}^d, \quad [x,y] \mapsto [q_0(x,y),\dots,q_{d}(x,y)], \]
where each $q_i$ is a homogeneous polynomial of degree $e$ in the variables $x$ and $y$.
The following lemma is well known (see for example \cite{SR85}*{p.~38, Theorem~VIII}), and can be proved by induction from the planar case using projection.
\begin{prop}\label{prop:curves}
An irreducible non-degenerate curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$, $d \geqslant 2$, is either an elliptic normal curve or rational.
\end{prop}
We next describe when an irreducible non-degenerate rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ has a natural group structure.
It turns out that this happens if and only if the curve is singular.
We write $\nu_{d+1}$ for the \emph{rational normal curve} in $\mathbb{C}\mathbb{P}^{d+1}$ \cite{H92}*{Example~1.14}, which we parametrise as
\[ \nu_{d+1}:[x,y] \mapsto [y^{d+1},-xy^d,x^2y^{d-1},\dotsc,(-x)^{d-1}y^2,(-x)^dy,(-x)^{d+1}]. \]
Any irreducible non-degenerate rational curve $\delta$ of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ is the projection of the rational normal curve, and we have
\[ \delta[x,y] = [y^{d+1},-xy^d,x^2y^{d-1},\dotsc,(-x)^{d-1}y^2,(-x)^dy,(-x)^{d+1}] A, \]
where $A$ is a $(d+2)\times(d+1)$ matrix of rank $d+1$ (since $\delta$ is non-degenerate) with entries derived from the coefficients of the polynomials $q_i$ of degree $d+1$ in the parametrisation of the curve (with suitable alternating signs).
Thus $\delta \subset \mathbb{C}\mathbb{P}^d$ is the image of $\nu_{d+1}$ under the projection map $\pi_p$ defined by $A$.
In particular, the point of projection $p=[p_0,p_1,\dots,p_{d+1}]\in\mathbb{C}\mathbb{P}^{d+1}$ is the ($1$-dimensional) kernel of $A$.
If we project $\nu_{d+1}$ from a point $p\in \nu_{d+1}$, then we obtain a rational normal curve in $\mathbb{C}\mathbb{P}^d$.
However, since $\delta$ is of degree $d+1$, necessarily $p\notin \nu_{d+1}$.
Conversely, it can easily be checked that for any $p\notin \nu_{d+1}$, the projection of $\nu_{d+1}$ from $p$ is a rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$.
We will use the notation $\delta_p$ for this curve.
We summarise the above discussion in the following proposition that will be implicitly used in the remainder of the paper.
\begin{prop}
An irreducible non-degenerate rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ is projectively equivalent to $\delta_p$ for some $p\in\mathbb{C}\mathbb{P}^{d+1}\setminus\nu_{d+1}$.
\end{prop}
We use the projection point $p$ to define a binary form and a multilinear form associated to $\delta_p$.
The \emph{fundamental binary form} associated to $\delta_p$ is the homogeneous polynomial of degree $d+1$ in two variables $f_p(x,y) := \sum_{i=0}^{d+1}p_i\binom{d+1}{i}x^{d+1-i}y^i$.
Its \emph{polarisation} is the multilinear form $F_p\colon(\mathbb{F}^2)^{d+1}\to\mathbb{F}$ \cite{D03}*{Section~1.2} defined by
\[ F_p(x_0,y_0,x_1,y_1,\dots,x_d,y_d) := \frac{1}{(d+1)!}\sum_{I\subseteq\{0,1,\dots,d\}} (-1)^{d+1-|I|} f_p\left(\sum_{i\in I} x_i,\sum_{i\in I} y_i\right). \]
Consider the multilinear form $G_p(x_0,y_0,\dots,x_d,y_d) = \sum_{i=0}^{d+1} p_i P_i$,
where
\begin{equation}\label{eq:P_i}
P_i(x_0,y_0,x_1,y_1,\dots,x_d,y_d) := \sum_{I\in\binom{\{0,1,\dots,d\}}{i}}\prod_{j\in\overline{I}} x_j\prod_{j\in I} y_j
\end{equation}
for each $i=0,\dots,d+1$.
Here the sum is taken over all subsets $I$ of $\{0,1,\dots,d\}$ of size $i$, and $\overline{I}$ denotes the complement of $I$ in $\{0,1,\dots,d\}$.
It is
easy to see that the binary form $f_p$ is the \emph{restitution} of $G_p$, namely \cite{D03}*{Section~1.2}
\[ f_p(x,y) = G_p(x,y,x,y,\dots,x,y). \]
Since the polarisation of the restitution of a multilinear form is itself \cite{D03}*{Section~1.2},
we must thus have $F_p = G_p$.
(This can also be checked directly.)
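The identity $F_p=G_p$ can indeed be checked directly; the following sketch (ours, not part of the paper) does so numerically for $d=2$, i.e.\ for binary forms of degree $3$, using exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial
from random import randint, seed

# Sanity check (not part of the paper) that the polarisation F_p of the
# fundamental binary form f_p coincides with the multilinear form G_p,
# here for d = 2, i.e. for a binary form of degree d + 1 = 3.
d = 2
n = d + 1  # degree of f_p

def f_p(p, x, y):
    # fundamental binary form: sum_i p_i * C(n, i) * x^(n-i) * y^i
    return sum(p[i] * comb(n, i) * x**(n - i) * y**i for i in range(n + 1))

def F_p(p, pts):
    # polarisation of f_p via inclusion-exclusion over subsets of {0,...,d}
    total = Fraction(0)
    for k in range(n + 1):
        for I in combinations(range(n), k):
            sx = sum(pts[i][0] for i in I)
            sy = sum(pts[i][1] for i in I)
            total += (-1) ** (n - k) * f_p(p, sx, sy)
    return total / factorial(n)

def G_p(p, pts):
    # G_p = sum_i p_i P_i, with P_i summing over i-subsets I of {0,...,d}
    total = 0
    for i in range(n + 1):
        for I in combinations(range(n), i):
            term = 1
            for j in range(n):
                term *= pts[j][1] if j in I else pts[j][0]
            total += p[i] * term
    return total

seed(0)
for _ in range(50):
    p = [randint(-5, 5) for _ in range(n + 1)]
    pts = [(randint(-5, 5), randint(-5, 5)) for _ in range(n)]
    assert F_p(p, pts) == G_p(p, pts)            # polarisation identity
    x, y = randint(-5, 5), randint(-5, 5)
    assert G_p(p, [(x, y)] * n) == f_p(p, x, y)  # restitution identity
print("F_p = G_p verified for d = 2")
```

Since both sides are polynomials of low degree, agreement on sufficiently many random integer inputs is convincing evidence of the identity (which the general polarisation theory of course proves in full).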
\begin{lemma}\label{lem:cohyperplane}
Let $\delta_p$ be an irreducible non-degenerate rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 2$, where $p\in\mathbb{C}\mathbb{P}^{d+1}\setminus \nu_{d+1}$.
A hyperplane intersects $\delta_p$ in $d+1$ points $\delta_p[x_i,y_i]$, $i=0,\dots,d$, counting multiplicity, if and only if $F_p(x_0,y_0,x_1,y_1,\dots,x_d,y_d)=0$.
\end{lemma}
\begin{proof}
We first prove the statement for distinct points $[x_i,y_i]\in\mathbb{C}\mathbb{P}^1$.
Then the points $\delta_p[x_i,y_i]$ are all on a hyperplane if and only if the hyperplane in $\mathbb{C}\mathbb{P}^{d+1}$ through the points $\nu_{d+1}[x_i,y_i]$ passes through $p$.
It will be sufficient to prove the identity
\begin{equation}\label{identity}
D:= \det\begin{pmatrix} \nu_{d+1}[x_0,y_0] \\ \vdots \\ \nu_{d+1}[x_d,y_d] \\ p \end{pmatrix}
= F_p(x_0,y_0,x_1,y_1,\dots,x_d,y_d)
\prod_{0 \leqslant j<k \leqslant d} \begin{vmatrix} x_j & x_k\\ y_j & y_k \end{vmatrix},
\end{equation}
since the second factor on the right-hand side does not vanish because the points $[x_i, y_i]$ are distinct.
We first note that
\begin{align}
D &=
\begin{vmatrix}
y_0^{d+1} & -x_0y_0^d & x_0^2y_0^{d-1} & \dotsc & (-x_0)^dy_0 & (-x_0)^{d+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
y_d^{d+1} & -x_d y_d ^d & x_d ^2y_d ^{d-1} & \dotsc & (-x_d )^dy_d & (-x_d )^{d+1} \\
p_0 & p_1 & p_2 & \dotsc & p_d & p_{d+1}
\end{vmatrix} \notag \\
& = (-1)^{\left \lfloor \frac{d+2}{2} \right \rfloor}
\begin{vmatrix}
y_0^{d+1} & x_0y_0^d & x_0^2y_0^{d-1} & \dotsc & x_0^d y_0 & x_0^{d+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
y_d^{d+1} & x_d y_d ^d & x_d ^2y_d ^{d-1} & \dotsc & x_d^dy_d & x_d^{d+1} \\[3pt]
p_0 & -p_1 & p_2 & \dotsc & (-1)^d p_d & (-1)^{d+1} p_{d+1}
\end{vmatrix}. \label{det}
\end{align}
We next replace $(-1)^i p_i$ by $x^i y^{d+1-i}$ for each $i=0,\dots,d+1$ in the last row of the determinant in \eqref{det} and obtain the Vandermonde determinant
\begin{align*}
\phantom{D} &\mathrel{\phantom{=}}
(-1)^{\left \lfloor \frac{d+2}{2} \right \rfloor}
\begin{vmatrix}
y_0^{d+1} & x_0y_0^d & x_0^2y_0^{d-1} & \dotsc & x_0^d y_0 & x_0^{d+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
y_d^{d+1} & x_d y_d^d & x_d^2 y_d^{d-1} & \dotsc & x_d^d y_d & x_d^{d+1} \\[3pt]
y^{d+1} & x y^d & x^2 y^{d-1} & \dotsc & x^d y & x^{d+1}
\end{vmatrix} \\
&= (-1)^{\left \lfloor \frac{d+2}{2} \right \rfloor}
\prod_{0\leqslant j<k\leqslant d}
\begin{vmatrix}
y_j & y_k \\
x_j & x_k
\end{vmatrix}
\prod_{0\leqslant j\leqslant d}
\begin{vmatrix}
y_j & y \\
x_j & x
\end{vmatrix} \\
&= (-1)^{\left \lfloor \frac{d+2}{2} \right \rfloor} (-1)^{\binom{d+2}{2}}
\prod_{0\leqslant j<k\leqslant d}
\begin{vmatrix}
x_j & x_k \\
y_j & y_k
\end{vmatrix} \prod_{0\leqslant j\leqslant d}
\begin{vmatrix}
x_j & x \\
y_j & y
\end{vmatrix}.
\end{align*}
Finally, note that $(-1)^{\lfloor(d+2)/2\rfloor} (-1)^{\binom{d+2}{2}} = 1$ and that the coefficient of $x^i y^{d+1-i}$ in $\prod_{0\leqslant j\leqslant d} \begin{vmatrix} x_j & x \\ y_j & y \end{vmatrix}$ is
\[ \sum_{I\in\binom{\{0,\dots,d\}}{i}}\prod_{j\in I}(-y_j)\prod_{j\in\overline{I}} x_j = (-1)^iP_i, \]
where $P_i$ is as defined in \eqref{eq:P_i}.
It follows that the coefficient of $p_i$ in \eqref{det} is $P_i$, and \eqref{identity} follows.
We next complete the argument for the case when the points $[x_i,y_i]$ are not all distinct.
First suppose that a hyperplane $\Pi$ intersects $\delta_p$ in $\delta_p[x_i,y_i]$, $i=0,\dots,d$.
By Bertini's theorem \cite{H77}*{Theorem~II.8.18 and Remark~II.8.18.1}, there is an arbitrarily close perturbation $\Pi'$ of $\Pi$ that intersects $\delta_p$ in distinct points $\delta_p[x_i',y_i']$.
By what has already been proved, $F_p(x_0',y_0',\dots,x_d',y_d')=0$.
Since $\Pi'$ is arbitrarily close and $F_p$ is continuous, $F_p(x_0,y_0,\dots,x_d,y_d)=0$.
Conversely, suppose that $F_p(x_0,y_0,\dots,x_d,y_d)=0$ where the $[x_i,y_i]$ are not all distinct.
Perturb the points $[x_0,y_0],\dots,[x_{d-1},y_{d-1}]$ by an arbitrarily small amount to $[x_0',y_0'],\dots,[x_{d-1}',y_{d-1}']$ respectively, so as to make $\delta_p[x_0',y_0'],\dots,\delta_p[x_{d-1}',y_{d-1}']$ span a hyperplane $\Pi'$ that intersects $\delta_p$ again in $\delta_p[x_d',y_d']$, say, and so that $[x_0',y_0'],\dots,[x_{d}',y_{d}']$ are all distinct.
If we take the limit as $[x_i',y_i']\to[x_i,y_i]$ for each $i=0,\dots,d-1$, we obtain a hyperplane $\Pi$ intersecting $\delta_p$ in $\delta_p[x_0,y_0],\dots,\delta_p[x_{d-1},y_{d-1}],\delta_p[x_d'',y_d'']$, say.
Then $F_p(x_0,y_0,\dots,x_{d-1},y_{d-1},x_d'',y_d'')=0$.
Since the multilinear form $F_p$ is non-trivial, it follows that $[x_d,y_d]=[x_d'',y_d'']$.
Therefore, $\Pi$ is a hyperplane that intersects $\delta_p$ in $\delta_p[x_i,y_i]$, $i=0,\dots,d$.
\end{proof}
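The determinant identity \eqref{identity} can also be verified numerically; the following sketch (ours, not part of the paper) does so for $d=2$, where $\nu_3[x,y]=[y^3,-xy^2,x^2y,-x^3]$ is the twisted cubic in $\mathbb{C}\mathbb{P}^3$.

```python
from itertools import combinations
from random import randint, seed

# Numerical sanity check (not from the paper) of the determinant identity
# in the proof of Lemma lem:cohyperplane, for d = 2: the twisted cubic
# nu_3[x, y] = [y^3, -x y^2, x^2 y, -x^3] in CP^3.
def nu3(x, y):
    return [y**3, -x * y**2, x**2 * y, -x**3]

def det4(m):
    # cofactor expansion of a 4x4 integer determinant along the first row
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    total = 0
    for j in range(4):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det3(minor)
    return total

def F_p(p, pts):
    # the multilinear form F_p = sum_i p_i P_i for d = 2
    total = 0
    for i in range(4):
        for I in combinations(range(3), i):
            term = 1
            for j in range(3):
                term *= pts[j][1] if j in I else pts[j][0]
            total += p[i] * term
    return total

seed(1)
for _ in range(50):
    p = [randint(-4, 4) for _ in range(4)]
    pts = [(randint(-4, 4), randint(-4, 4)) for _ in range(3)]
    lhs = det4([nu3(x, y) for x, y in pts] + [p])
    vdm = 1  # product of the 2x2 determinants |x_j x_k; y_j y_k|
    for j, k in combinations(range(3), 2):
        vdm *= pts[j][0] * pts[k][1] - pts[j][1] * pts[k][0]
    assert lhs == F_p(p, pts) * vdm
print("identity verified for d = 2")
```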
The secant variety $\Sec_{\mathbb{C}}(\nu_{d+1})$ of the rational normal curve $\nu_{d+1}$ in $\mathbb{C}\mathbb{P}^{d+1}$ is equal to the set of points that lie on a proper secant or tangent line of $\nu_{d+1}$, that is, on a line with intersection multiplicity at least $2$ with $\nu_{d+1}$.
We also define the real secant variety of $\nu_{d+1}$ to be the set $\Sec_{\mathbb{R}}(\nu_{d+1})$ of points in $\mathbb{R}\mathbb{P}^{d+1}$ that lie on a line that either intersects $\nu_{d+1}$ in two distinct real points or is a tangent line of $\nu_{d+1}$.
The \emph{tangent variety} $\Tan_{\mathbb{F}}(\nu_{d+1})$ of $\nu_{d+1}$ is defined to be the set of points in $\mathbb{F}\mathbb{P}^{d+1}$ that lie on a tangent line of $\nu_{d+1}$.
We note that although $\Tan_{\mathbb{R}}(\nu_{d+1}) = \Tan_{\mathbb{C}}(\nu_{d+1})\cap\mathbb{R}\mathbb{P}^{d+1}$, we only have a proper inclusion $\Sec_{\mathbb{R}}(\nu_{d+1}) \subset \Sec_{\mathbb{C}}(\nu_{d+1})\cap\mathbb{R}\mathbb{P}^{d+1}$ for $d\geqslant 2$.
We will need a concrete description of $\Sec_{\mathbb{C}}(\nu_{d+1})$ and its relation to the smoothness of the curves $\delta_p$.
For any $p\in\mathbb{F}\mathbb{P}^{d+1}$ and $k=2,\dots,d-1$, define the $(k+1)\times(d-k+2)$ matrix
\begin{equation*}
M_{k}(p) := \begin{pmatrix}
p_0 & p_1 & p_2 & \dots & p_{d-k+1} \\
p_1 & p_2 & p_3 & \dots & p_{d-k+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p_k & p_{k+1} & p_{k+2} & \dots & p_{d+1}
\end{pmatrix}.
\end{equation*}
Suppose that $\delta_p$ has a double point, say $\delta_p[x_0,y_0]=\delta_p[x_1,y_1]$.
This is equivalent to $p$, $\nu_{d+1}[x_0,y_0]$, and $\nu_{d+1}[x_1,y_1]$ being collinear, which is equivalent to $p$ being on the secant variety of $\nu_{d+1}$.
(In the degenerate case where $[x_0,y_0]=[x_1,y_1]$, we have that $p\in\Tan_\mathbb{F}(\nu_{d+1})$.)
Then $\delta_p[x_0,y_0]$, $\delta_p[x_1,y_1]$, $\delta_p[x_2,y_2]$,\dots, $\delta_p[x_d,y_d]$ are on a hyperplane in $\mathbb{F}\mathbb{P}^d$ for all $[x_2,y_2],\dots,[x_d,y_d]\in\mathbb{F}\mathbb{P}^1$.
It follows that the coefficients of $F_p(x_0,y_0,x_1,y_1,x_2,y_2,\dots,x_d,y_d)$ as a polynomial in $x_2,y_2,\dots,x_d,y_d$ all vanish, that is,
\[ p_ix_0x_1 + p_{i+1}(x_0y_1+y_0x_1) + p_{i+2}y_0y_1 = 0\]
for all $i=0,\dots,d-1$.
This can be written as
$[x_0x_1, x_0y_1 + y_0x_1, y_0y_1] M_{2}(p) = 0$.
Conversely, suppose that $M_{2}(p)$ has rank $2$ with, say,
$[c_0, 2c_1, c_2] M_{2}(p) = 0$.
Factorising the binary quadratic form $c_0x^2+2c_1xy+c_2y^2 = (x_0x+y_0y)(x_1x+y_1y)$, we obtain a non-trivial solution of the linear system with $c_0 = x_0x_1$, $2c_1 = x_0y_1 + y_0x_1$, and $c_2 = y_0y_1$.
In the degenerate case where $[x_0,y_0]=[x_1,y_1]$, the quadratic form has a repeated root.
It follows that $M_{2}(p)$ has rank at most $2$ if and only if $p\in\Sec_{\mathbb{C}}(\nu_{d+1})$ (also note that $M_{2}(p)$ has rank $1$ if and only if $p\in\nu_{d+1}$).
We note for later use that since the null space of $M_{2}(p)$ is $1$-dimensional if it has rank $2$, it follows that each $p\in\Sec_{\mathbb{C}}(\nu_{d+1})$ lies on a unique secant (which might degenerate to a tangent).
This implies that $\delta_p$ has a unique singularity when $p\in\Sec_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$, which is a node if $p\in\Sec_{\mathbb{C}}(\nu_{d+1})\setminus\Tan_{\mathbb{C}}(\nu_{d+1})$ and a cusp if $p\in\Tan_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$.
In the real case there are two types of nodes.
If $p\in\Sec_{\mathbb{R}}(\nu_{d+1})\setminus\nu_{d+1}$, then the roots $[x_0,y_0],[x_1,y_1]$ are real, and $\delta_p$ has either a cusp when $p\in\Tan_{\mathbb{R}}(\nu_{d+1})\setminus\nu_{d+1}$ and $[x_0,y_0]=[x_1,y_1]$, or a crunode when $p\in\Sec_{\mathbb{R}}(\nu_{d+1})\setminus\Tan_{\mathbb{R}}(\nu_{d+1})$ and $[x_0,y_0]$ and $[x_1,y_1]$ are distinct roots of the real binary quadratic form $c_0x^2+2c_1xy+c_2y^2$.
If $p\in\left(\Sec_{\mathbb{C}}(\nu_{d+1})\setminus\Sec_{\mathbb{R}}(\nu_{d+1})\right)\cap\mathbb{R}\mathbb{P}^{d+1}$, then the quadratic form has conjugate roots $[x_0,y_0]=[\overline{x_1},\overline{y_1}]$ and $\delta_p$ has an acnode.
If $p\notin\Sec_{\mathbb{C}}(\nu_{d+1})$, then $\delta_p$ is a smooth curve of degree $d+1$.
It follows that $\delta_p$ is singular if and only if $p\in\Sec_{\mathbb{C}}(\nu_{d+1})\setminus{\nu_{d+1}}$.
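The rank criterion can be illustrated concretely. The following sketch (ours, not from the paper) takes $d=4$ and checks, over the rationals, that points on secants of $\nu_5$ give $\rank M_2(p)\leqslant 2$, while an explicit point off the secant variety gives rank $3$.

```python
from fractions import Fraction
from random import randint, seed

# Sanity check (not from the paper), for d = 4: a point p on a secant of
# the rational normal curve nu_5 in CP^5 gives rank M_2(p) <= 2, while a
# point off the secant variety gives rank 3.
def nu(x, y, e=5):
    # nu_5[x, y] = [y^5, -x y^4, x^2 y^3, ..., -x^5]
    return [(-x) ** i * y ** (e - i) for i in range(e + 1)]

def M2(p):
    # the 3 x 4 Hankel matrix M_2(p) for d = 4
    return [[p[i + j] for j in range(4)] for i in range(3)]

def rank(m):
    # Gaussian elimination over the rationals
    m = [[Fraction(v) for v in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

seed(2)
for _ in range(20):
    # two distinct points of nu_5 (parameter ratios of opposite sign)
    a = (randint(1, 5), randint(1, 5))
    b = (randint(-5, -1), randint(1, 5))
    mu1, mu2 = randint(1, 5), randint(1, 5)
    p = [mu1 * u + mu2 * v for u, v in zip(nu(*a), nu(*b))]
    assert rank(M2(p)) <= 2      # p lies on a secant of nu_5

p_generic = [1, 0, 0, 1, 0, 0]   # an explicit point off the secant variety
assert rank(M2(p_generic)) == 3
print("rank criterion verified for d = 4")
```

The inequality $\rank M_2(p)\leqslant 2$ for secant points is transparent from this viewpoint: each column of $M_2(p)$ is a combination of two geometric sequences coming from the two points of $\nu_5$ spanning the secant.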
For the purposes of this paper, we make the following definitions.
\begin{definition}
A \emph{rational singular curve} is an irreducible non-degenerate singular rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$.
In the real case, a \emph{rational cuspidal curve}, \emph{rational crunodal curve}, or \emph{rational acnodal curve} is a rational singular curve isomorphic to a singular planar cubic with a cusp, crunode, or acnode respectively.
\end{definition}
In particular, we have shown the case $k=2$ of the following well-known result.
\begin{prop}[\cite{H92}*{Proposition~9.7}]\label{prop:secant}
Let $d\geqslant 3$.
For any $k=2,\dots,d-1$, the secant variety of $\nu_{d+1}$ is equal to the locus of all $[p_0, p_1,\dots,p_{d+1}]$ such that $M_{k}(p)$ has rank at most~$2$.
\end{prop}
\begin{corollary}\label{lem:first_species}
Let $d\geqslant 3$.
For any $k=2,\dots,d-1$ and $p\in\mathbb{C}\mathbb{P}^{d+1}\setminus\nu_{d+1}$, the curve $\delta_p$ of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ is singular if and only if $\rank M_{k}(p) \leqslant 2$.
\end{corollary}
We next use Corollary~\ref{lem:first_species} to show that the projection of a smooth rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$ from a generic point on the curve is again smooth when $d\geqslant 4$.
This is not true for $d=3$, as there is a trisecant through each point of a quartic curve of the second species in $3$-space.
(The union of the trisecants forms the unique quadric on which the curve lies \cite{H92}*{Exercise 8.13}.)
\begin{lemma}\label{lemma:singular_projection}
Let $\delta_p$ be a smooth rational curve of degree $d+1$ in $\mathbb{C}\mathbb{P}^d$, $d\geqslant 4$.
Then for all but at most three points $q\in\delta_p$, the projection $\overline{\pi_q(\delta_p\setminus\{q\})}$ is a smooth rational curve of degree $d$ in $\mathbb{C}\mathbb{P}^{d-1}$.
\end{lemma}
\begin{proof}
Let $q=\delta_p[x_0,y_0]$.
Suppose that $\overline{\pi_q(\delta_p\setminus\{q\})}$ is singular.
Then there exist $[x_1,y_1]$ and $[x_2,y_2]$ such that $\pi_q(\delta_p[x_1,y_1])=\pi_q(\delta_p[x_2,y_2])$
and the points $\delta_p[x_0,y_0]$, $\delta_p[x_1,y_1]$, and $\delta_p[x_2,y_2]$ are collinear.
Then for arbitrary $[x_3,y_3],\dots,[x_d,y_d]\in\mathbb{C}\mathbb{P}^1$, the points $\delta_p[x_i,y_i]$, $i=0,\dots,d$ are on a hyperplane, so by Lemma~\ref{lem:cohyperplane}, $F_p(x_0,y_0,\dots,x_d,y_d)$ is identically $0$ as a polynomial in $x_3,y_3,\dots,x_d,y_d$.
The coefficients of this polynomial are of the form
\[ p_i x_0x_1x_2 + p_{i+1}(x_0x_1y_2+x_0y_1x_2+y_0x_1x_2) + p_{i+2}(x_0y_1y_2+y_0x_1y_2+y_0y_1x_2) + p_{i+3} y_0y_1y_2 \]
for $i=0,\dots,d-2$.
This means that the linear system
$[c_0, 3c_1, 3c_2, c_3] M_3(p) = 0$
has a non-trivial solution $c_0=x_0x_1x_2$, $3c_1=x_0x_1y_2+x_0y_1x_2+y_0x_1x_2$, $3c_2=x_0y_1y_2+y_0x_1y_2+y_0y_1x_2$, $c_3=y_0y_1y_2$.
The binary cubic form $c_0 x^3 + 3c_1 x^2y + 3c_2 xy^2 + c_3 y^3$ then has the factorisation $(x_0x+y_0y)(x_1x+y_1y)(x_2x+y_2y)$,
hence its roots give the collinear points on $\delta_p$.
Since $\delta_p$ is smooth, $M_3(p)$ has rank at least $3$ by Corollary~\ref{lem:first_species}, and so the cubic form is unique up to scalar multiples.
It follows that there are at most three points $q$ such that the projection $\overline{\pi_q(\delta_p\setminus\{q\})}$ is not smooth.
\end{proof}
To determine the natural group structure on rational singular curves, we need the following theorem on the fundamental binary form $f_p$, which is essentially due to Sylvester \cite{S51}.
Reznick \cite{Rez2013} gives an elementary proof of the generic case where $p$ does not lie on the tangent variety.
(See also Kanev \cite{K99}*{Lemma~3.1} and Iarrobino and Kanev \cite{IK99}*{Section~1.3}.)
We provide a very elementary proof that includes the non-generic case.
\begin{theorem}[Sylvester \cite{S51}]\label{thm:sylvester}
Let $d\geqslant 2$.
\begin{enumerate}[label=\rom]
\item \label{sylvester1}
If $p\in\Tan_{\mathbb{C}}(\nu_{d+1})$, then there exist binary linear forms $L_1,L_2$ such that $f_p(x,y)=L_1(x,y)^dL_2(x,y)$.
Moreover, if $p \notin \nu_{d+1}$ then $L_1$ and $L_2$ are linearly independent,
and if $p\in\mathbb{R}\mathbb{P}^{d+1}$ then $L_1$ and $L_2$ are both real.
\item\label{sylvester2}
If $p\in\Sec_{\mathbb{C}}(\nu_{d+1})\setminus\Tan_{\mathbb{C}}(\nu_{d+1})$, then there exist linearly independent binary linear forms $L_1, L_2$ such that $f_p(x,y) = L_1(x,y)^{d+1} - L_2(x,y)^{d+1}$.
Moreover, if $p\in\mathbb{R}\mathbb{P}^{d+1}\setminus\Sec_{\mathbb{R}}(\nu_{d+1})$ then $L_1$ and $L_2$ are complex conjugates, while if $p\in\Sec_{\mathbb{R}}(\nu_{d+1})$ then there exist linearly independent real binary linear forms $L_1, L_2$ such that $f_p(x,y) = L_1(x,y)^{d+1} \pm L_2(x,y)^{d+1}$, where we can always choose the lower sign when $d$ is even, while for odd $d$ the sign depends on $p$.
\end{enumerate}
\end{theorem}
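Before the proof, here is a quick numerical sanity check (ours, not part of the paper) of the generic case of part (ii) for $d=4$: for a point $p$ on the secant through $\nu_5[\alpha_1,\alpha_2]$ and $\nu_5[\beta_1,\beta_2]$, the form $f_p$ decomposes into two fifth powers of linear forms, exactly as in the computation below.

```python
from math import comb
from random import randint, seed

# Numerical check (not from the paper) of Sylvester's decomposition for
# d = 4: if p = mu1 * nu_5[a1, a2] + mu2 * nu_5[b1, b2], then the
# fundamental binary form factors as
#     f_p = mu1 * (a2 x - a1 y)^5 + mu2 * (b2 x - b1 y)^5.
e = 5  # = d + 1

def nu(x, y):
    # nu_5[x, y] = [y^5, -x y^4, x^2 y^3, ..., -x^5]
    return [(-x) ** i * y ** (e - i) for i in range(e + 1)]

def f_p(p, x, y):
    return sum(p[i] * comb(e, i) * x ** (e - i) * y ** i for i in range(e + 1))

seed(3)
for _ in range(50):
    a1, a2, b1, b2 = (randint(-4, 4) for _ in range(4))
    mu1, mu2 = randint(-4, 4), randint(-4, 4)
    p = [mu1 * u + mu2 * v for u, v in zip(nu(a1, a2), nu(b1, b2))]
    x, y = randint(-4, 4), randint(-4, 4)
    lhs = f_p(p, x, y)
    rhs = mu1 * (a2 * x - a1 * y) ** e + mu2 * (b2 * x - b1 * y) ** e
    assert lhs == rhs  # binomial theorem, term by term
print("Sylvester decomposition verified for d = 4")
```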
\begin{proof}
\ref{sylvester1}:
We work over $\mathbb{F}\in\{\mathbb{R},\mathbb{C}\}$.
Let $p = [p_0,p_1,\dots,p_{d+1}]\in \Tan_\mathbb{F}(\nu_{d+1})$.
Let $p_*=\nu_{d+1}[\alpha_1,\alpha_2]$ be the point on $\nu_{d+1}$ such that the line $pp_*$ is tangent to $\nu_{d+1}$ (if $p\in\nu_{d+1}$, we let $p_*=p$).
We will show that
\begin{equation}\label{tangent}
f_p(x,y) = \sum_{i=0}^{d+1} p_i\binom{d+1}{i}x^{d+1-i}y^i = (\alpha_2 x - \alpha_1 y)^d(\beta_2 x-\beta_1 y)
\end{equation}
for some $[\beta_1,\beta_2]\in\mathbb{F}\mathbb{P}^1$.
First consider the special case $\alpha_1=0$.
Then $p_*=[1,0,\dots,0]$ and the tangent to $\nu_{d+1}$ at $p_*$ is the line $x_2=x_3=\dots=x_{d+1}=0$.
It follows that $f_p(x,y)= p_0x^{d+1} + p_1(d+1)x^dy = (1x-0y)^d(p_0x+p_1(d+1)y)$.
If $p_1 = 0$, then $p = p_* \in \nu_{d+1}$.
Thus, if $p \notin \nu_{d+1}$, then $p_1 \ne 0$, and $x$ and $p_0x+p_1(d+1)y$ are linearly independent.
We next consider the general case $\alpha_1\neq 0$.
Equating coefficients in \eqref{tangent}, we see that we need to find $[\beta_1,\beta_2]$ such that
\[ p_i\binom{d+1}{i} = \binom{d}{i}\alpha_2^{d-i}(-\alpha_1)^i\beta_2-\binom{d}{i-1}\alpha_2^{d-i+1}(-\alpha_1)^{i-1}\beta_1 \]
for each $i=0,\dots,d+1$, where we use the convention $\binom{d}{-1}=\binom{d}{d+1}=0$.
This can be simplified to
\begin{equation}\label{tangent2}
p_i = \left(1-\frac{i}{d+1}\right)\alpha_2^{d-i}(-\alpha_1)^i\beta_2 - \frac{i}{d+1}\alpha_2^{d-i+1}(-\alpha_1)^{i-1}\beta_1.
\end{equation}
Since we are working projectively, we can fix the value of $\beta_1$ from the instance $i=d+1$ of~\eqref{tangent2} to get
\begin{equation}\label{tangent3}
p_{d+1} = -(-\alpha_1)^d\beta_1.
\end{equation}
If $p_{d+1}\neq 0$, we can divide \eqref{tangent2} by \eqref{tangent3}.
After setting $\alpha=\alpha_2/\alpha_1$, $\beta=\beta_2/\beta_1$, and $a_i=p_i/p_{d+1}$, we then have to show that for some $\beta\in\mathbb{F}$,
\begin{equation}\label{eqn:*}
a_i = -\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i}\beta+\frac{i}{d+1}(-\alpha)^{d-i+1}
\end{equation}
for each $i=0,\dots,d$.
We next calculate in the affine chart $x_{d+1}=1$ where the rational normal curve becomes $\nu_{d+1}(t) = ((-t)^{d+1},(-t)^d,\dots,-t)$, $p=(a_0,\dots,a_d)$, and $p_*=\nu_{d+1}(\alpha)$.
The tangency condition means that $p_*-p$ is a scalar multiple of
\[\nu_{d+1}'(\alpha) = ((d+1)(-\alpha)^d,d(-\alpha)^{d-1},\dots,2\alpha,-1), \]
that is, we have for some $\lambda\in\mathbb{F}$ that
$(-\alpha)^{d+1-i}-a_i = \lambda(d+1-i)(-\alpha)^{d-i}$ for all $i=0,\dots,d$.
Set $\beta=\alpha+\lambda(d+1)$.
Then $(-\alpha)^{d+1-i}-a_i = (\beta-\alpha)(1-\frac{i}{d+1})(-\alpha)^{d-i}$, and we have
\begin{align*}
a_i &= (-\alpha)^{d+1-i}-(\beta-\alpha)\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i} \\
&= -\left(1-\frac{i}{d+1}\right)(-\alpha)^{d-i}\beta+\frac{i}{d+1}(-\alpha)^{d-i+1},
\end{align*}
giving \eqref{eqn:*} as required.
If $\alpha = \beta$, then $\lambda = 0$ and $p = p_* \in \nu_{d+1}$.
Thus, if $p \notin \nu_{d+1}$, then $\alpha \ne \beta$, and $\alpha_2 x - \alpha_1 y$ and $\beta_2 x - \beta_1 y$ are linearly independent.
We still have to consider the case $p_{d+1}=0$.
Then $\beta_1=0$ and we need to find $\beta_2$ such that
\begin{equation}\label{eqn:**}
p_i = \left(1-\frac{i}{d+1}\right)\alpha_2^{d-i}(-\alpha_1)^i\beta_2
\end{equation}
for all $i=0,\dots,d$.
Since $p_{d+1}=0$, we have that $\nu_{d+1}'(\alpha)$ is parallel to $(p_0,\dots,p_d)$, that is,
\[ p_i = \lambda(d+1-i)(-\alpha)^{d-i}\]
for some $\lambda\in\mathbb{F}^*$.
Set $\beta_2 = \lambda(d+1)/(-\alpha_1)^d$.
Then
\begin{equation*}
p_i = \frac{(-\alpha_1)^d\beta_2}{d+1}(d+1-i)\left(\frac{\alpha_2}{-\alpha_1}\right)^{d-i}
= \left(1-\frac{i}{d+1}\right)\alpha_2^{d-i}(-\alpha_1)^i\beta_2,
\end{equation*}
again giving \eqref{eqn:**} as required.
Note that since $\alpha_1 \ne 0$ but $\beta_1 = 0$, $\alpha_2 x - \alpha_1 y$ and $\beta_2 x - \beta_1 y$ are linearly independent.
Note also that since $\lambda \ne 0$, we have $\beta_2 \ne 0$ and $p \ne [1,0,\dotsc,0]$, hence $p \notin \nu_{d+1}$.
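The case $p_{d+1}=0$ can likewise be checked symbolically; the following sketch (not part of the paper, assuming sympy) verifies \eqref{eqn:**} for $d=4$ with the choice $\beta_2 = \lambda(d+1)/(-\alpha_1)^d$:

```python
# Verify that p_i = lambda*(d+1-i)*(-alpha)^{d-i} coincides with the
# right-hand side of (**) once beta_2 is chosen as above, for d = 4.
import sympy as sp

a1, a2, lam = sp.symbols('alpha_1 alpha_2 lambda', nonzero=True)
d = 4
alpha = a2/a1
b2 = lam*(d + 1)/(-a1)**d                  # the choice of beta_2 made above
for i in range(d + 1):
    p_i = lam*(d + 1 - i)*(-alpha)**(d - i)                      # tangent-direction coordinates
    rhs = (1 - sp.Rational(i, d + 1))*a2**(d - i)*(-a1)**i*b2    # right-hand side of (**)
    assert sp.simplify(p_i - rhs) == 0
```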
\ref{sylvester2}:
Let $p=[p_0,\dots,p_{d+1}] \in \Sec_{\mathbb{C}}(\nu_{d+1})\setminus\Tan_{\mathbb{C}}(\nu_{d+1})$, and suppose that $p$ lies on the secant line through the distinct points $q_1 := \nu_{d+1}[\alpha_1, \alpha_2]$ and $q_2 := \nu_{d+1}[\beta_1, \beta_2]$; we write $q_1, q_2$ rather than $p_1, p_2$, since the latter denote coordinates of $p$.
Since $p, q_1, q_2$ are distinct and collinear, there exist $\mu_1, \mu_2\in\mathbb{C}^*$ such that $p = \mu_1 q_1 + \mu_2 q_2$ as vectors.
This means that for $i = 0, \dotsc, d+1$, we have
\[ p_i = \mu_1(-\alpha_1)^i\alpha_2^{d+1-i} + \mu_2(-\beta_1)^i\beta_2^{d+1-i}. \]
Then
\begin{align*}
f_p(x,y) &= \sum_{i=0}^{d+1} p_i \binom{d+1}{i} x^{d+1-i} y^{i} \\
&= \mu_1\sum_{i=0}^{d+1}\binom{d+1}{i}(\alpha_2 x)^{d+1-i}(-\alpha_1 y)^i + \mu_2 \sum_{i=0}^{d+1}\binom{d+1}{i}(\beta_2 x)^{d+1-i}(-\beta_1 y)^i \\
&= \mu_1(\alpha_2 x-\alpha_1 y)^{d+1} + \mu_2(\beta_2 x-\beta_1 y)^{d+1}\\
&= L_1(x,y)^{d+1}- L_2(x,y)^{d+1}
\end{align*}
where the linear forms $L_1,L_2$ are linearly independent.
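The binomial-theorem step in this computation is easy to confirm symbolically; a quick check (not part of the paper, assuming sympy) for $d=4$:

```python
# With p_i = mu_1*(-a1)^i*a2^{d+1-i} + mu_2*(-b1)^i*b2^{d+1-i}, verify that
# f_p(x, y) = mu_1*(a2*x - a1*y)^{d+1} + mu_2*(b2*x - b1*y)^{d+1} for d = 4.
import sympy as sp

x, y, a1, a2, b1, b2, m1, m2 = sp.symbols('x y alpha_1 alpha_2 beta_1 beta_2 mu_1 mu_2')
d = 4
# coordinates of p on the secant line through the two points of the curve
p = [m1*(-a1)**i*a2**(d + 1 - i) + m2*(-b1)**i*b2**(d + 1 - i) for i in range(d + 2)]
f_p = sum(p[i]*sp.binomial(d + 1, i)*x**(d + 1 - i)*y**i for i in range(d + 2))
target = m1*(a2*x - a1*y)**(d + 1) + m2*(b2*x - b1*y)**(d + 1)
assert sp.expand(f_p - target) == 0
```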
If $p\in\mathbb{R}\mathbb{P}^{d+1}\setminus{\Sec_{\mathbb{R}}(\nu_{d+1})}$, then $f_p$ is real and the points $\nu_{d+1}[\alpha_1,\alpha_2]$ and $\nu_{d+1}[\beta_1,\beta_2]$ are non-real.
Taking conjugates, we have
\[ p = \overline{\mu_1}\nu_{d+1}[\overline{\alpha_1},\overline{\alpha_2}]+\overline{\mu_2}\nu_{d+1}[\overline{\beta_1},\overline{\beta_2}] \]
as vectors, and because of the uniqueness of the secant of the rational normal curve through a given point, we obtain $\overline{\mu_1}=\mu_2$ and $\nu_{d+1}[\overline{\alpha_1},\overline{\alpha_2}] = \nu_{d+1}[\beta_1,\beta_2]$, hence, after rescaling the homogeneous coordinates, $\overline{\alpha_1}=\beta_1$ and $\overline{\alpha_2}=\beta_2$.
It follows that $\overline{L_1(x,y)} = L_2(\overline{x},\overline{y})$.
If $p\in\Sec_{\mathbb{R}}(\nu_{d+1})$, then $\nu_{d+1}[\alpha_1,\alpha_2]$ and $\nu_{d+1}[\beta_1,\beta_2]$ are real, so $[\mu_1,\mu_2],[\alpha_1,\alpha_2],[\beta_1,\beta_2]\in\mathbb{R}\mathbb{P}^1$, and we obtain $f_p(x,y)=L_1^{d+1}\pm L_2^{d+1}$ for some linearly independent $L_1,L_2$ over $\mathbb{R}$, where the choice of sign depends on~$p$.
\end{proof}
We are now in a position to describe the group laws on rational singular curves.
We first note the effect of a change of coordinates on the parametrisation of $\delta_p$.
Let $\varphi\colon\mathbb{F}\mathbb{P}^1\to\mathbb{F}\mathbb{P}^1$ be a projective transformation.
Then $\nu_{d+1}\circ\varphi$ is a reparametrisation of the rational normal curve.
It is not difficult to see that there exists a projective transformation $\psi\colon\mathbb{F}\mathbb{P}^{d+1}\to\mathbb{F}\mathbb{P}^{d+1}$ such that $\nu_{d+1}\circ\varphi = \psi\circ\nu_{d+1}$.
It follows that if we reparametrise $\delta_p$ using $\varphi$, we obtain
\[ \delta_p\circ\varphi = \pi_p\circ\nu_{d+1}\circ\varphi = \pi_p\circ\psi\circ\nu_{d+1} = \psi'\circ\pi_{\psi^{-1}(p)}\circ\nu_{d+1}\cong\delta_{\psi^{-1}(p)}, \]
where $\psi'\colon\mathbb{F}\mathbb{P}^d\to\mathbb{F}\mathbb{P}^d$ is an appropriate projective transformation such that first transforming $\mathbb{F}\mathbb{P}^{d+1}$ with $\psi$ and then projecting from $p$ is the same as projecting from $\psi^{-1}(p)$ and then transforming $\mathbb{F}\mathbb{P}^d$ with $\psi'$.
So by reparametrising $\delta_p$, we obtain $\delta_{p'}$ for some other point $p'$ that is in the orbit of $p$ under the action of projective transformations that fix $\nu_{d+1}$.
Since $\delta_p\circ\varphi[x_0,y_0],\dots,\delta_p\circ\varphi[x_d,y_d]$ lie on a hyperplane if and only if the points $\delta_{\psi^{-1}(p)}[x_i,y_i]$ do, it follows from Lemma~\ref{lem:cohyperplane} that $F_p(\varphi(x_0,y_0),\dots,\varphi(x_d,y_d))$ is a scalar multiple of $F_{\psi^{-1}(p)}(x_0,y_0,\dots,x_d,y_d)$, hence $f_p\circ\varphi = f_{\psi^{-1}(p)}$ up to a scalar multiple.
Thus reparametrising $\delta_p$ induces the corresponding reparametrisation of the fundamental binary form $f_p$.
\begin{prop}\label{prop:rational_group}
A rational singular curve $\delta_p$ in $\mathbb{C}\mathbb{P}^d$ has a natural group structure on its subset of smooth points $\delta_p^*$ such that $d+1$ points in $\delta_p^*$ lie on a hyperplane if and only if they sum to the identity.
This group is isomorphic to $(\mathbb{C},+)$ if the singularity of $\delta_p$ is a cusp and isomorphic to $(\mathbb{C}^*,\cdot)$ if the singularity is a node.
If the curve is real and cuspidal or acnodal, then it has a group isomorphic to $(\mathbb{R},+)$ or $\mathbb{R}/\mathbb{Z}$ depending on whether the singularity is a cusp or an acnode, such that $d+1$ points in $\delta_p^*$ lie on a hyperplane if and only if they sum to the identity.
If the curve is real and the singularity is a crunode, then the group is isomorphic to $(\mathbb{R},+) \times \mathbb{Z}_2$, but $d+1$ points in $\delta_p^*$ lie on a hyperplane if and only if they sum to $(0,0)$ or $(0,1)$, depending on $p$.
\end{prop}
\begin{proof}
First suppose $\delta_p$ is cuspidal and $\mathbb{F}\in\{\mathbb{R},\mathbb{C}\}$, so that $p \in \Tan_{\mathbb{F}}(\nu_{d+1}) \setminus{\nu_{d+1}}$.
By Theorem~\ref{thm:sylvester}, $f_p=L_1^dL_2$ for some linearly independent linear forms $L_1$ and $L_2$.
By choosing $\varphi$ appropriately, we may assume without loss of generality that $L_1(x,y)=x$ and $L_2(x,y)=(d+1)y$, so that $f_p(x,y)=(d+1)x^dy$ and $p=[0,1,0,\dots,0]$, with the cusp of $\delta_p$ at $\delta_p[0,1]$.
It follows that the polarisation of $f_p$ is $F_p(x_0,y_0,\dotsc,x_d,y_d)= P_1 = x_0x_1\dotsb x_d\sum_{i=0}^d y_i/x_i$.
For $[x_i,y_i]\neq[0,1]$, $i=0,\dots,d$, the points $\delta_p[x_i,y_i]$ are on a hyperplane if and only if $\sum_{i=0}^d y_i/x_i=0$.
Thus we identify $\delta_p[x,y]\in\delta_p^*$ with $y/x\in\mathbb{F}$, and the group is $(\mathbb{F},+)$.
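The cuspidal hyperplane criterion can also be seen through a determinant: points of $\delta_p = \pi_p\circ\nu_{d+1}$ lie on a hyperplane in $\mathbb{F}\mathbb{P}^d$ exactly when $p$ together with the corresponding points of $\nu_{d+1}$ lies on a hyperplane in $\mathbb{F}\mathbb{P}^{d+1}$. A sketch (not part of the paper, assuming sympy) confirms for $d=3$ that this determinant is a constant multiple of $\bigl(\sum_j t_j\bigr)\prod_{i<j}(t_i-t_j)$ in the affine parameters $t_j = y_j/x_j$:

```python
# For the cusp, p = [0,1,0,...,0]: the (d+2)x(d+2) determinant below vanishes
# exactly when two parameters coincide or the slopes t_j sum to zero.
import sympy as sp

d = 3
ts = sp.symbols('t1:5')                    # affine parameters t_j = y_j / x_j
nu = lambda t: [(-1)**i * t**(d + 1 - i) for i in range(d + 2)]

p = [0, 1] + [0]*d                         # centre of projection
M = sp.Matrix([p] + [nu(t) for t in ts])
det = M.det()

slope_sum = sum(ts)
vdm = sp.prod([ts[i] - ts[j] for i in range(d + 1) for j in range(i + 1, d + 1)])
ratio = sp.cancel(det / (slope_sum * vdm))
assert ratio.free_symbols == set() and ratio != 0   # det = const * (sum t_j) * Vandermonde
```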
Next suppose $\delta_p$ is nodal, so that $p \in \Sec_{\mathbb{C}}(\nu_{d+1})\setminus\Tan_{\mathbb{C}}(\nu_{d+1})$.
By Theorem~\ref{thm:sylvester}, $f_p=L_1^{d+1}-L_2^{d+1}$ for some linearly independent linear forms $L_1$ and $L_2$.
Again by choosing $\varphi$ appropriately, we may assume without loss of generality that $L_1(x,y)=x$ and $L_2(x,y)=y$, so that $f_p(x,y)=x^{d+1}-y^{d+1}$ and $p=[1,0,\dots,0,-1]$, with the node of $\delta_p$ at $\delta_p[0,1]=\delta_p[1,0]$.
The polarisation of $f_p$ is $F_p(x_0,y_0,\dots,x_d,y_d)= P_0-P_{d+1} = x_0x_1\dotsb x_d - y_0y_1\dotsb y_d$.
Therefore, $\delta_p[x_i,y_i]$, $i=0,\dotsc,d$, are on a hyperplane if and only if $\prod_{i=0}^d y_i/x_i = 1$.
Thus we identify $\delta_p[x,y]\in\delta_p^*$ with $y/x\in\mathbb{C}^*$, and the group is $(\mathbb{C}^*,\cdot)$.
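The nodal criterion admits the same determinant check as in the cuspidal case (a sketch, not part of the paper, assuming sympy): with $p=[1,0,\dots,0,-1]$, the determinant factors as a constant times $\bigl(\prod_j t_j - 1\bigr)\prod_{i<j}(t_i-t_j)$, matching the condition $\prod_j y_j/x_j = 1$:

```python
# For the node, p = [1,0,...,0,-1]: the projected points lie on a hyperplane
# iff the (d+2)x(d+2) determinant below vanishes.
import sympy as sp

d = 3
ts = sp.symbols('t1:5')                    # affine parameters t_j = y_j / x_j
nu = lambda t: [(-1)**i * t**(d + 1 - i) for i in range(d + 2)]

p = [1] + [0]*d + [-1]                     # centre of projection
M = sp.Matrix([p] + [nu(t) for t in ts])
det = M.det()

prod_t = sp.prod(list(ts))
vdm = sp.prod([ts[i] - ts[j] for i in range(d + 1) for j in range(i + 1, d + 1)])
ratio = sp.cancel(det / ((prod_t - 1) * vdm))
assert ratio.free_symbols == set() and ratio != 0   # det = const * (prod t_j - 1) * Vandermonde
```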
Now suppose $\delta_p$ is real and the node is an acnode.
Then the linearly independent linear forms $L_1$ and $L_2$ given by Theorem~\ref{thm:sylvester} are $L_1(x,y) = \alpha x+ \beta y$ and $L_2(x,y)=\overline{\alpha} x + \overline{\beta} y$ for some $\alpha,\beta\in\mathbb{C}\setminus\mathbb{R}$.
There exists $\varphi\colon\mathbb{R}\mathbb{P}^1\to\mathbb{R}\mathbb{P}^1$ such that $L_1\circ\varphi = x + iy$ and $L_2\circ\varphi = x-iy$, hence we may assume after such a reparametrisation that $f_p(x,y) = (x+iy)^{d+1} - (x-iy)^{d+1}$ and that the node is at $\delta_p[i,1]=\delta_p[-i,1]$.
The polarisation of $f_p$ is $F_p(x_0,y_0,\dots,x_d,y_d) = \prod_{j=0}^d(x_j+iy_j) - \prod_{j=0}^d(x_j-iy_j)$, and it follows that $\delta_p[x_0,y_0], \dotsc, \delta_p[x_d,y_d]$ lie on a hyperplane if and only if $\prod_{j=0}^d\frac{x_j+iy_j}{x_j-iy_j} = 1$.
We now identify $\mathbb{R}\mathbb{P}^1$ with the circle $\mathbb{R}/\mathbb{Z} \cong \setbuilder{z \in \mathbb{C}}{|z|=1}$ using the M\"obius transformation $[x,y]\to \frac{x+iy}{x-iy}$.
Under this identification, $d+1$ points of $\delta_p^*$ lie on a hyperplane if and only if they sum to the identity, and the group is $\mathbb{R}/\mathbb{Z}$.
It remains to consider the crunodal case.
Then, similar to the complex nodal case, we obtain after a reparametrisation that $\delta_p[x_i,y_i]$, $i = 0, \dotsc, d$, are on a hyperplane if and only if $\prod_{i=0}^d y_i/x_i = \pm 1$, where the sign depends on $p$.
Thus we identify $\delta_p[x,y]\in\delta_p^*$ with $y/x\in\mathbb{R}^*$, and the group is $(\mathbb{R}^*,\cdot)\cong\mathbb{R}\times\mathbb{Z}_2$, where $\pm 1\in\mathbb{R}^*$ corresponds to $(0,0),(0,1)\in\mathbb{R}\times\mathbb{Z}_2$ respectively.
\end{proof}
The group on an elliptic normal curve or a rational singular curve of degree $d+1$ as described in Propositions~\ref{prop:elliptic_group} and \ref{prop:rational_group} is not uniquely determined by the property that $d+1$ points lie on a hyperplane if and only if they sum to some fixed element $c$.
Indeed, for any $t\in(\delta^*,\oplus)$, $x\boxplus y:= x\oplus y\oplus t$ defines another abelian group on $\delta^*$ with the property that $d+1$ points lie on a hyperplane if and only if they sum to $c\oplus dt$.
However, these two groups are isomorphic in a natural way with an isomorphism given by the translation map $x\mapsto x\ominus t$.
The next proposition shows that we always get uniqueness up to some translation.
It will be used in Section~\ref{sec:extremal}.
\begin{prop}\label{prop:unique}
Let $(G,\oplus,0)$ and $(G,\boxplus,0')$ be abelian groups on the same ground set, such that for some $d\geqslant 2$ and some $c,c'\in G$,
\[ x_1\oplus\dotsb\oplus x_{d+1} =c \iff x_1\boxplus\dotsb\boxplus x_{d+1}=c'\quad\text{for all }x_1,\dots,x_{d+1}\in G.\]
Then $(G,\oplus,0) \to (G,\boxplus,0'), x\mapsto x\boxminus 0 = x \oplus 0'$ is an isomorphism, and
\[ c'=c\boxplus \underbrace{0 \boxplus \dotsb \boxplus 0}_\text{$d$ times} = c \ominus (\underbrace{0' \oplus \dotsb \oplus 0'}_\text{$d$ times}). \]
\end{prop}
\begin{proof}
It is clear that the cases $d\geqslant 3$ follow from the case $d=2$, which we now show.
First note that for any $x,y\in G$, $x\boxplus y\boxplus(c \ominus x \ominus y) = c'$ and $(x \oplus y)\boxplus 0\boxplus(c \ominus x \ominus y) = c'$, since $x \oplus y \oplus (c \ominus x \ominus y)=(x \oplus y) \oplus 0 \oplus (c \ominus x \ominus y)=c$.
Thus we have $x\boxplus y = (x \oplus y)\boxplus 0$, hence
$(x \oplus y) \boxminus 0 = x \boxplus y \boxminus 0 \boxminus 0 = (x \boxminus 0) \boxplus (y \boxminus 0)$.
Similarly we have $x \oplus y=(x\boxplus y) \oplus 0'$, hence $x \boxplus y = x \oplus y \ominus 0'$,
so in particular
$0' = 0 \boxminus 0 = 0 \oplus (\boxminus 0) \ominus 0'$, and $\boxminus 0 = 0' \oplus 0'$.
So we also have $x\boxminus 0 = x \oplus (\boxminus 0) \ominus 0' = x \oplus 0'$,
and $(G,\oplus,0) \to (G,\boxplus,0'), x\mapsto x\boxminus 0 = x \oplus 0'$ is an isomorphism.
\end{proof}
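Proposition~\ref{prop:unique} can be illustrated by brute force in a small cyclic group; the sketch below (not part of the paper, assuming Python) takes $G=\mathbb{Z}_{12}$, $d=2$, and a second group law that is a translate of the first:

```python
# Two group laws on Z_12 with the same sum-to-a-constant triples, as in the
# proposition; the translation x -> x + 0' is an isomorphism between them.
from itertools import product

n, s = 12, 5                                   # G = Z_12; the second law is a translate
d, c = 2, 7
oplus   = lambda x, y: (x + y) % n             # (G, +), identity 0
boxplus = lambda x, y: (x + y - s) % n         # second law: x [+] y = x + y - 0', identity 0' = s

# x1 + x2 + x3 = c  iff  x1 [+] x2 [+] x3 = c', with c' = c - d*0' as in the proposition
c_prime = (c - d*s) % n
for x1, x2, x3 in product(range(n), repeat=3):
    assert ((x1 + x2 + x3) % n == c) == (boxplus(boxplus(x1, x2), x3) == c_prime)

# the translation x -> x + 0' is an isomorphism (G, +, 0) -> (G, [+], 0')
phi = lambda x: oplus(x, s)
for x, y in product(range(n), repeat=2):
    assert phi(oplus(x, y)) == boxplus(phi(x), phi(y))
assert phi(0) == s                             # identity maps to identity
```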
\section{Structure theorem}\label{sec:proof}
We prove Theorem~\ref{thm:main1} in this section.
The main idea is to induct on the dimension $d$ via projection.
We start with the case $d=3$, which is slightly different and is given by \cite{LS18}*{Theorem~1.1}.
Note that it contains one more type that does not occur when $d\geqslant 4$.
\begin{theorem}\label{thm:plane}
Let $K > 0$ and suppose $n \geqslant C\max\{K^8,1\}$ for some sufficiently large absolute constant $C > 0$.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^3$ with no $3$ points collinear.
If $P$ spans at most $Kn^2$ ordinary planes,
then up to projective transformations, $P$ differs in at most $O(K)$ points from a configuration of one of the following types:
\begin{enumerate}[label=\rom]
\item A subset of a plane;
\item A subset of two disjoint conics lying on the same quadric with $\frac{n}{2}\pm O(K)$ points of $P$ on each of the two conics;
\item A coset of a subgroup of the smooth points of an elliptic or acnodal space quartic curve.
\end{enumerate}
\end{theorem}
We first prove the following weaker lemma using results from Section~\ref{sec:tools}.
\begin{lemma}\label{lem:intermediate}
Let $d \geqslant 4$, $K > 0$, and suppose $n \geqslant C\max\{d^3 2^dK, (dK)^8\}$ for some sufficiently large absolute constant $C > 0$.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^d$ where every $d$ points span a hyperplane.
If $P$ spans at most $K\binom{n-1}{d-1}$ ordinary hyperplanes, then all but at most $O(d2^dK)$ points of $P$ are contained in a hyperplane or an irreducible non-degenerate curve of degree $d+1$ that is either elliptic or rational and singular.
\end{lemma}
\begin{proof}
We use induction on $d \geqslant 4$ to show that for all $K>0$ and all $n \geqslant f(d,K)$,
for all sets $P$ of $n$ points in $\mathbb{R}\mathbb{P}^d$ with any $d$ points spanning a hyperplane,
if $P$ has at most $K\binom{n-1}{d-1}$ ordinary hyperplanes,
then all but at most $g(d,K)$ points of $P$ are contained in a hyperplane or an irreducible non-degenerate curve of degree $d+1$, and that if the curve is rational then it has to be singular, where
\[ g(d,K) := \sum_{k=0}^d k^3 2^{d-k} + C_12^d(d-1)K \]
and
\[ f(d,K) := d^2(g(d,K) + C_2 d^{10}) + C(d-1)^8K^8 \]
for appropriate $C_1,C_2>0$ to be determined later and $C$ from Theorem~\ref{thm:plane}.
We assume that this holds in $\mathbb{R}\mathbb{P}^{d-1}$ if $d \geqslant 5$, while Theorem~\ref{thm:plane} takes the place of the induction hypothesis when $d=4$.
Let $P'$ denote the set of points $p \in P$ such that there are at most $\frac{d-1}{d-2}K\binom{n-2}{d-2}$ ordinary hyperplanes through $p$.
By counting incident point-ordinary-hyperplane pairs, we obtain
\[dK\binom{n-1}{d-1} > (n-|P'|)\frac{d-1}{d-2}K\binom{n-2}{d-2},\]
which gives $|P'| > n/(d-1)^2$.
For any $p \in P'$, the projected set $\pi_p(P \setminus \{p\})$ has $n-1$ points and spans at most $\frac{d-1}{d-2}K\binom{n-2}{d-2}$ ordinary $(d-2)$-flats in $\mathbb{R}\mathbb{P}^{d-1}$, and any $d-1$ points of $\pi_p(P \setminus \{p\})$ span a $(d-2)$-flat.
To apply the induction hypothesis, we need
\[f(d,K) \geqslant 1+ f(d-1,\tfrac{d-1}{d-2}K),\]
as well as $f(3,K)\geqslant C\max\{K^8,1\}$, both of which easily follow from the definition of $f(d,K)$.
Then all except $g(d-1,\frac{d-1}{d-2}K)$
points of $\pi_p(P \setminus \{p\})$ are contained in a $(d-2)$-flat or a non-degenerate curve $\gamma_p$ of degree $d$ in $\mathbb{R}\mathbb{P}^{d-1}$, which is either irreducible or possibly two conics with $\frac{n}{2} \pm O(K)$ points on each when $d=4$.
If there exists a $p \in P'$ such that all but at most $g(d-1,\frac{d-1}{d-2}K)$
points of $\pi_p(P \setminus \{p\})$ are contained in a $(d-2)$-flat, then we are done, since
$g(d,K) > g(d-1,\frac{d-1}{d-2}K)$.
Thus we may assume without loss of generality that for all $p\in P'$ we obtain a curve $\gamma_p$.
Let $p$ and $p'$ be two distinct points of $P'$.
Then all but at most $2g(d-1,\frac{d-1}{d-2}K)$ points of $P$ lie on the intersection $\delta$ of the two cones $\overline{\pi^{-1}_p(\gamma_p)}$ and $\overline{\pi^{-1}_{p'}(\gamma_{p'})}$.
Since the curves $\gamma_p$ and $\gamma_{p'}$ are $1$-dimensional, the two cones are $2$-dimensional.
Since their vertices $p$ and $p'$ are distinct, the cones do not have a common irreducible component, so their intersection is a variety of dimension at most~$1$.
By B\'ezout's theorem (Theorem~\ref{thm:bezout}),
$\delta$ has total degree at most $d^2$, so has to have at least one $1$-dimensional irreducible component.
Let $\delta_1, \dotsc, \delta_k$ be the $1$-dimensional components of $\delta$, where $1\leqslant k \leqslant d^2$.
Let $\delta_1$ be the component with the most points of $P'$ amongst all the $\delta_i$, so that
\[|P' \cap \delta_1| \geqslant \frac{|P'|-2g(d-1,\frac{d-1}{d-2}K)}{d^2}.\]
Choose a $q \in P' \cap \delta_1$ such that $\pi_q$ is generically one-to-one on $\delta_1$.
By Lemma~\ref{lem:projection1} there are at most $O(\deg(\delta_1)^4)=O(d^8)$ exceptional points, so we need
\begin{equation}\label{constraint1}
|P'\cap\delta_1|> C_2 d^8.
\end{equation}
Since $|P'| > n/(d-1)^2$, we need
\[ \frac{\frac{n}{(d-1)^2}-2g(d-1,\frac{d-1}{d-2}K)}{d^2} > C_2 d^8,\]
or equivalently, $n > (d-1)^2(2g(d-1,\frac{d-1}{d-2}K)+C_2 d^{10})$.
However, this follows from the definition of $f(d,K)$.
If $\pi_q$ does not map $\delta_1 \setminus \{q\}$ into $\gamma_q$, then by B\'ezout's theorem (Theorem~\ref{thm:bezout}), $n-1-g(d-1,\frac{d-1}{d-2}K)\leqslant d^3$.
However, this does not occur since $f(d,K) > g(d-1,\frac{d-1}{d-2}K) + d^3 + 1$.
Thus, $\pi_q$ maps $\delta_1\setminus\{q\}$ into $\gamma_q$, hence $\delta_1$ is an irreducible curve of degree $d+1$ (or, when $d=4$, possibly a twisted cubic containing at most $n/2+O(K)$ points of $P$).
We first consider the case where $\delta_1$ has degree $d+1$.
We apply Lemma~\ref{lem:projection3} to $\delta_1$ and each $\delta_i$, $i=2,\dots,k$, and for this we need $|P'\cap\delta_1| > C'' d^4$ for some absolute constant $C''>0$, since $\deg(\delta_1)\leqslant d^2$ and $\sum_{i=2}^k\deg(\delta_i)\leqslant d^2$.
However, this condition is implied by \eqref{constraint1}.
Thus we find a $q' \in P' \cap \delta_1$ such that $\overline{\pi_{q'}(\delta_1\setminus\{q'\})}=\gamma_{q'}$ as before, and in addition, the cone $\overline{\pi_{q'}^{-1}(\gamma_{q'})}$ does not contain any other $\delta_i$, $i=2,\dots,k$.
Since all points of $P$ except $2g(d-1,\frac{d-1}{d-2}K)+d^2$ lie on $\delta_1\cup\dots\cup\delta_k$, we obtain by B\'ezout's theorem (Theorem~\ref{thm:bezout}) that
\[|P\setminus{\delta_1}| \leqslant d(d^2-d-1) + d^2 + 2g(d-1,\tfrac{d-1}{d-2}K) < g(d,K).\]
We next dismiss the case where $d=4$ and $\delta_1$ is a twisted cubic.
We redefine $P'$ to be the set of points $p\in P$ such that there are at most $12Kn^2$ ordinary hyperplanes through $p$.
Then $|P'|\geqslant 2n/3$.
Since we have $|P\cap\delta_1|\leqslant n/2 +O(K)$, by Lemma~\ref{lem:projection2} there exists $q'\in P'\setminus\delta_1$ such that the projection from $q'$ maps $\delta_1$ onto a twisted cubic in $\mathbb{R}\mathbb{P}^3$.
However, by B\'ezout's theorem (Theorem~\ref{thm:bezout}) and Theorem~\ref{thm:plane}, $\pi_{q'}(\delta_1\setminus\{q'\})$ would have to be a conic, which gives a contradiction.
Note that $g(d,K)=O(d 2^d K)$ since $K = \Omega(1/d)$ by \cite{BM17}*{Theorem 2.4}.
We have shown that all but $O(d2^dK)$ points of $P$ are contained in a hyperplane or an irreducible non-degenerate curve $\delta$ of degree $d+1$.
By Proposition~\ref{prop:curves}, this curve is either elliptic or rational.
It remains to show that if $\delta$ is rational, then it has to be singular.
Similar to what was shown above, we can find more than $3$ points $p\in\delta$ for which the projection $\overline{\pi_p(\delta\setminus\{p\})}$ is a rational curve of degree $d$ that is singular by the induction hypothesis.
Lemma~\ref{lemma:singular_projection} now implies that $\delta$ is singular.
\end{proof}
To get the coset structure on the curves as stated in Theorem~\ref{thm:main1}, we use a simple generalisation of an additive combinatorial result used by Green and Tao \cite{GT13}*{Proposition A.5}.
This captures the principle that if a finite subset of a group is almost closed, then it is close to a subgroup.
The case $d=3$ was shown in~\cite{LMMSSZ18}.
\begin{lemma}\label{cor:a5}
Let $d \geqslant 2$.
Let $A_1, A_2, \dotsc, A_{d+1}$ be $d+1$ subsets of some abelian group $(G,\oplus)$, all of size within $K$ of $n$, where $K \leqslant cn/d^2$ for some sufficiently small absolute constant $c > 0$.
Suppose there are at most $Kn^{d-1}$ $d$-tuples $(a_1,a_2,\dotsc,a_d) \in A_1 \times A_2 \times \dotsb \times A_d$ for which $a_1 \oplus a_2 \oplus \dotsb \oplus a_d \notin A_{d+1}$. Then there is a subgroup $H$ of $G$ and cosets $H \oplus x_i$ for $i = 1, \dotsc, d$ such that
\begin{equation*}
|A_i \mathbin{\triangle} (H \oplus x_i)|, \left| A_{d+1} \mathbin{\triangle} \left( H \oplus \bigoplus_{i=1}^d x_i \right) \right| = O(K).
\end{equation*}
\end{lemma}
\begin{proof}
We use induction on $d \geqslant 2$ to show that the symmetric differences in the conclusion of the lemma have size at most $C \prod_{i=1}^d (1+\frac{1}{i^2})K$ for some sufficiently large absolute constant $C > 0$.
The base case $d=2$ is \cite{GT13}*{Proposition A.5}.
Fix a $d \geqslant 3$.
By the pigeonhole principle, there exists $b_1 \in A_1$ such that there are at most
\[ \frac{1}{n-K}Kn^{d-1} \leqslant \frac{1}{1-\frac{c}{d^2}} Kn^{d-2} \]
$(d-1)$-tuples $(a_2, \dotsc, a_d) \in A_2 \times \dotsb \times A_d$ for which $b_1 \oplus a_2 \oplus \dotsb \oplus a_d \notin A_{d+1}$, or equivalently $a_2 \oplus \dotsb \oplus a_d \notin A_{d+1} \ominus b_1$.
Since
\[ \frac{1}{1-\frac{c}{d^2}} K \leqslant \frac{c}{d^2-c}n \leqslant \frac{c}{(d-1)^2}n, \]
we can use induction to get a subgroup $H$ of $G$ and $x_2, \dotsc, x_d\in G$ such that for $j = 2, \dotsc, d$ we have
\[ |A_j \mathbin{\triangle} (H \oplus x_j)|, \left|(A_{d+1} \ominus b_1) \mathbin{\triangle} \left(H \oplus \bigoplus_{j=2}^d x_j \right) \right| \leqslant C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^2}\right)\frac{1}{1-\frac{c}{d^2}}K. \]
Since $|A_d \cap (H \oplus x_d)| \geqslant n - K - C\prod_{i=1}^{d-1}(1+\frac{1}{i^2})\frac{1}{1-\frac{c}{d^2}}K$,
we repeat the same pigeonhole argument on $A_d\cap (H \oplus x_d)$ to find a $b_d \in A_d \cap (H \oplus x_d)$ such that there are at most
\begin{align*}
\frac{1}{n - K - C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^2}\right)\frac{1}{1-\frac{c}{d^2}}K} Kn^{d-1} &\leqslant \frac{1}{1-\frac{c}{d^2}-C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^2}\right) \frac{c}{d^2-c}} Kn^{d-2}\\
&\leqslant \frac{1}{1-C_1\frac{c}{d^2-c}}Kn^{d-2} \\
&\leqslant \left(1 + \frac{C_2c}{d^2-c}\right)Kn^{d-2}\\
&\leqslant \left(1 + \frac{1}{d^2}\right)Kn^{d-2}
\end{align*}
$(d-1)$-tuples $(a_1, \dotsc, a_{d-1}) \in A_1 \times \dotsb \times A_{d-1}$ with $a_1 \oplus \dotsb \oplus a_{d-1} \oplus b_d \notin A_{d+1}$,
for some absolute constants $C_1,C_2>0$ depending on $C$, by making $c$ sufficiently small.
Now $(1+\frac{1}{d^2})K \leqslant cn/(d-1)^2$, so by induction again, there exist a subgroup $H'$ of $G$ and elements $x_1, x_2', \dotsc, x_{d-1}' \in G$ such that for $k = 2, \dotsc, d-1$ we have
\begin{equation*}
|A_1 \mathbin{\triangle} (H' \oplus x_1)|, |A_k \mathbin{\triangle} (H' \oplus x_k')|, \left|(A_{d+1} \ominus b_d) \mathbin{\triangle} \left(H' \oplus x_1 \oplus \bigoplus_{k=2}^{d-1} x_k' \right) \right|
\leqslant C\prod_{i=1}^{d-1}\left(1+\frac{1}{i^2}\right) \left(1 + \frac{1}{d^2}\right) K.
\end{equation*}
From this, it follows that $|(H \oplus x_k) \cap (H' \oplus x_k')| \geqslant n - K - 2C\prod_{i=1}^d (1 + \frac{1}{i^2}) K = n - O(K)$.
Since $(H \oplus x_k) \cap (H' \oplus x_k')$ is non-empty, it has to be a coset of $H' \cap H$.
If $H' \neq H$, then $|H' \cap H| \leqslant n/2 + O(K)$, a contradiction since $c$ is sufficiently small.
Therefore, $H=H'$, and $H \oplus x_k = H' \oplus x_k'$.
So we have
\[ |A_i \mathbin{\triangle} (H \oplus x_i)|, \left|A_{d+1} \mathbin{\triangle} \left(H \oplus \bigoplus_{\ell=1}^{d-1} x_\ell \oplus b_d \right) \right| \leqslant C\prod_{i=1}^d\left(1+\frac{1}{i^2}\right)K. \]
Since $b_d \in H \oplus x_d$, we also obtain
\[ \left|A_{d+1} \mathbin{\triangle} \left(H \oplus \bigoplus_{i=1}^d x_i \right) \right| \leqslant C\prod_{i=1}^d\left(1+\frac{1}{i^2}\right)K. \qedhere \]
\end{proof}
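The quantities in Lemma~\ref{cor:a5} can be illustrated in a small cyclic group: sets that are exactly cosets produce no bad tuples, while each corrupted element contributes on the order of $n^{d-1}$ bad tuples. A toy check (not part of the paper, assuming Python) with $G=\mathbb{Z}_{12}$ and $d=2$:

```python
# Bad-pair counts for exact and slightly corrupted cosets of H <= Z_12.
from itertools import product

n = 12
H = {0, 3, 6, 9}                               # subgroup of Z_12 of order 4
coset = lambda x: {(h + x) % n for h in H}

A1, A2, A3 = coset(1), coset(2), coset(3)      # exact cosets with offsets 1 + 2 = 3
bad_exact = sum((a1 + a2) % n not in A3 for a1, a2 in product(A1, A2))
assert bad_exact == 0                          # cosets are exactly closed

A3_corrupt = (A3 - {3}) | {1}                  # move one element off the coset
bad = sum((a1 + a2) % n not in A3_corrupt for a1, a2 in product(A1, A2))
assert bad == 4                                # the missing sum value 3 is hit |H| times
```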
To apply Lemma~\ref{cor:a5}, we first need to know that removing $K$ points from a set does not change the number of ordinary hyperplanes it spans by too much.
\begin{lemma}\label{lem:stability}
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 2$, where every $d$ points span a hyperplane.
Let $P'$ be a subset that is obtained from $P$ by removing at most $K$ points.
If $P$ spans $m$ ordinary hyperplanes, then $P'$ spans at most $m + \frac{1}{d}K\binom{n-1}{d-1}$
ordinary hyperplanes.
\end{lemma}
\begin{proof}
Fix a point $p\in P$.
Since every $d$ points span a hyperplane, there are at most $\binom{n-1}{d-1}$ sets of $d$ points from $P$ containing $p$ that span a hyperplane through $p$.
Thus, the number of $(d+1)$-point hyperplanes through $p$ is at most $\frac{1}{d}\binom{n-1}{d-1}$, since a set of $d+1$ points that contains $p$ has $d$ subsets of size $d$ that contain $p$.
If we remove points of $P$ one-by-one to obtain $P'$, we thus create at most $\frac{1}{d}K\binom{n-1}{d-1}$ ordinary hyperplanes.
\end{proof}
The following lemma then translates the additive combinatorial Lemma~\ref{cor:a5} to our geometric setting.
\begin{lemma}\label{lem:curve}
Let $d \geqslant 4$, $K>0$, and suppose $n \geqslant C(d^3K + d^4)$ for some sufficiently large absolute constant $C>0$.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^d$ where every $d$ points span a hyperplane.
Suppose $P$ spans at most $K\binom{n-1}{d-1}$ ordinary hyperplanes,
and all but at most $dK$ points of $P$ lie on an elliptic normal curve or a rational singular curve $\delta$.
Then $P$ differs in at most $O(dK+d^2)$ points from a coset $H \oplus x$ of a subgroup $H$ of $\delta^*$, the smooth points of $\delta$, for some $x$ such that $(d+1)x \in H$.
In particular, $\delta$ is either an elliptic normal curve or a rational acnodal curve.
\end{lemma}
\begin{proof}
Let $P' = P \cap \delta^*$.
Since $P'$ is obtained from $P$ by removing at most $dK$ points, Lemma~\ref{lem:stability} implies that $P'$ spans at most $K\binom{n-1}{d-1}+\frac{1}{d}\cdot dK\binom{n-1}{d-1} = 2K\binom{n-1}{d-1}$ ordinary hyperplanes.
First suppose $\delta$ is an elliptic normal curve or a rational cuspidal or acnodal curve.
If $a_1, \dotsc, a_d \in \delta^*$ are distinct, then by Propositions~\ref{prop:elliptic_group} and~\ref{prop:rational_group}, the hyperplane through $a_1, \dotsc, a_d$ meets $\delta$ again in the unique point $a_{d+1} = \ominus(a_1 \oplus \dotsb \oplus a_d)$.
This implies that $a_{d+1} \in P'$ for all but at most $O(d!\,K\binom{n-1}{d-1})$ $d$-tuples $(a_1, \dotsc, a_d) \in (P')^d$ with all $a_i$ distinct.
There are also at most $\binom{d}{2}n^{d-1}$ $d$-tuples $(a_1, \dotsc, a_d) \in (P')^d$ for which the $a_i$ are not all distinct.
Thus, $a_1 \oplus \dotsb \oplus a_d \in \ominus P'$ for all but at most $O((dK+d^2)n^{d-1})$ $d$-tuples $(a_1, \dotsc, a_d) \in (P')^d$.
Applying Lemma~\ref{cor:a5} with $A_1 = \dotsb = A_d = P'$ and $A_{d+1} = \ominus P'$, we obtain a finite subgroup $H$ of $\delta^*$ and a coset $H \oplus x$ such that $|P' \mathbin{\triangle} (H \oplus x)| = O(dK+d^2)$ and
$|\ominus P' \mathbin{\triangle} (H \oplus dx)| = O(dK+d^2)$, the latter being equivalent to $|P' \mathbin{\triangle} (H \ominus dx)| = O(dK+d^2)$.
Thus we have $|(H \oplus x) \mathbin{\triangle} (H \ominus dx)| = O(dK+d^2)$, which implies $(d+1)x \in H$.
Also, $\delta$ cannot be cuspidal, otherwise by Proposition~\ref{prop:rational_group} we have $\delta^* \cong (\mathbb{R}, +)$, which has no finite subgroup of order greater than~$1$.
Now suppose $\delta$ is a rational crunodal curve.
By Proposition~\ref{prop:rational_group}, there is a bijective map $\varphi: (\mathbb{R},+) \times \mathbb{Z}_2 \rightarrow \delta^*$ such that $d+1$ points in $\delta^*$ lie on a hyperplane if and only if they sum to $h$, where $h=\varphi(0,0)$ or $\varphi(0,1)$ depending on the curve $\delta$.
If $h=\varphi(0,0)$ then the above argument carries through, and we obtain a contradiction as we have by Proposition~\ref{prop:rational_group} that $\delta^* \cong (\mathbb{R},+) \times \mathbb{Z}_2$, which has no finite subgroup of order greater than~$2$.
Otherwise, the hyperplane through distinct $a_1, \dotsc, a_d \in \delta^*$ meets $\delta$ again in the unique point $a_{d+1} = \varphi(0,1) \ominus(a_1 \oplus \dotsb \oplus a_d)$.
As before, this implies that $a_{d+1} \in P'$ for all but at most $O((dK+d^2)n^{d-1})$ $d$-tuples $(a_1, \dotsc, a_d) \in (P')^d$,
or equivalently $a_1 \oplus \dotsb \oplus a_d \in \varphi(0,1) \ominus P'$.
Applying Lemma~\ref{cor:a5} with $A_1 = \dotsb = A_d = P'$ and $A_{d+1} = \varphi(0,1) \ominus P'$, we obtain a finite subgroup $H$ of $\delta^*$, giving a contradiction as before.
\end{proof}
We can now prove Theorem~\ref{thm:main1}.
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
By Lemma~\ref{lem:intermediate}, all but at most $O(d2^dK)$ points of $P$ are contained in a hyperplane or an irreducible curve $\delta$ of degree $d+1$ that is either elliptic or rational and singular.
In the former case, we get Case~\ref{case:hyperplane} of the theorem, so suppose we are in the latter case.
We then apply Lemma~\ref{lem:curve} to obtain Case~\ref{case:curve} of the theorem, completing the proof.
\end{proof}
\section{Extremal configurations}\label{sec:extremal}
We prove Theorems~\ref{thm:main2} and~\ref{thm:main3} in this section.
It will turn out that minimising the number of ordinary hyperplanes spanned by a set is equivalent to maximising the number of $(d+1)$-point hyperplanes,
thus we can apply Theorem~\ref{thm:main1} in both theorems.
Then we only have two cases to consider, where most of our point set is contained either in a hyperplane or a coset of a subgroup of an elliptic normal curve or the smooth points of a rational acnodal curve.
The first case is easy, and we get the following lower bound.
\begin{lemma}\label{lem:hyperplane}
Let $d \geqslant 4$, $K \geqslant 1$, and let $n \geqslant 2dK$.
Let $P$ be a set of $n$ points in $\mathbb{R}\mathbb{P}^d$ where every $d$ points span a hyperplane.
If all but $K$ points of $P$ lie on a hyperplane, then $P$ spans at least $\binom{n-1}{d-1}$ ordinary hyperplanes, with equality if and only if $K = 1$.
\end{lemma}
\begin{proof}
Let $\Pi$ be a hyperplane with $|P \cap \Pi| = n - K$.
Since $n-K>d$, any ordinary hyperplane spanned by $P$ must contain at least one point not in $\Pi$.
Let $m_i$ be the number of hyperplanes containing exactly $d-1$ points of $P\cap\Pi$ and exactly $i$ points of $P\setminus\Pi$, $i=1,\dots,K$.
Then the number of unordered $d$-tuples of elements from $P$ with exactly $d-1$ elements in $\Pi$ is
\[ K\binom{n-K}{d-1} = m_1 + 2m_2 + 3m_3 + \dots + Km_K.\]
Now consider the number of unordered $d$-tuples of elements from $P$ with exactly $d-2$ elements in $\Pi$, which equals $\binom{K}{2}\binom{n-K}{d-2}$.
One way to generate such a $d$-tuple is to take one of the $m_i$ hyperplanes containing $i$ points of $P\setminus\Pi$ and $d-1$ points of $P\cap\Pi$, choose two of the $i$ points, and remove one of the $d-1$ points.
Since any $d$ points span a hyperplane, there is no overcounting.
This gives
\begin{align*}
\binom{K}{2}\binom{n-K}{d-2} &\geqslant (d-1)\left(\binom{2}{2}m_2+\binom{3}{2}m_3+\binom{4}{2}m_4 + \dotsb\right)\\
&\geqslant \frac{d-1}{2} (2m_2+3m_3+4m_4+\dotsb).
\end{align*}
Hence the number of ordinary hyperplanes is at least
\[ m_1\geqslant K\binom{n-K}{d-1}-\frac{K(K-1)}{d-1}\binom{n-K}{d-2} = K\binom{n-K}{d-1}\frac{n-2K-d+3}{n-K-d+2}.\]
We next show that for all $K\geqslant 2$, if $n \geqslant 2dK$ then
\[ K\binom{n-K}{d-1}\frac{n-2K-d+3}{n-K-d+2} > \binom{n-1}{d-1}.\]
This is equivalent to
\begin{equation}\label{ineq1}
K > \frac{n-K+1}{n-2K-d+3}\prod_{i=1}^{K-2}\frac{n-i}{n-d-i+1}.
\end{equation}
Note that
\begin{equation}\label{ineq2}
\frac{n-K+1}{n-2K-d+3} < 2
\end{equation}
if $n > 3K+2d-5$ and
\begin{equation}\label{ineq3}
\frac{n-i}{n-d-i+1} <\frac{i+2}{i+1}
\end{equation}
if $n\geqslant (i+2)d$ for each $i=1,\dots,K-2$.
However, since $2dK > (i+2)d$ and also $2dK > 3K+2d-5$, the inequality \eqref{ineq1} now follows from \eqref{ineq2} and \eqref{ineq3}.
\end{proof}
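The inequalities at the end of this proof can be double-checked numerically; the sketch below (not part of the paper, assuming Python) verifies the equality case $K=1$ and the strict inequality for $K\geqslant 2$, $n\geqslant 2dK$, over a small range of parameters, using exact rational arithmetic:

```python
# Check the lower bound m_1 >= K*C(n-K, d-1)*(n-2K-d+3)/(n-K-d+2) against C(n-1, d-1).
from fractions import Fraction
from math import comb

def ordinary_lower_bound(n, d, K):
    # the bound on the number of ordinary hyperplanes derived in the proof
    return K*comb(n - K, d - 1)*Fraction(n - 2*K - d + 3, n - K - d + 2)

# equality case K = 1: the bound equals C(n-1, d-1) exactly
for d in range(4, 9):
    for n in range(2*d, 2*d + 30):
        assert ordinary_lower_bound(n, d, 1) == comb(n - 1, d - 1)

# strict inequality (ineq1) for K >= 2 once n >= 2dK
for d in range(4, 8):
    for K in range(2, 8):
        for n in range(2*d*K, 2*d*K + 20):
            assert ordinary_lower_bound(n, d, K) > comb(n - 1, d - 1)
```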
The second case needs more work.
We first consider the number of ordinary hyperplanes spanned by a coset of a subgroup of the smooth points $\delta^*$ of an elliptic normal curve or a rational acnodal curve.
By Propositions~\ref{prop:elliptic_group} and~\ref{prop:rational_group}, we can consider $\delta^*$ as a group isomorphic to either $\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z} \times \mathbb{Z}_2$.
Let $H \oplus x$ be a coset of a subgroup $H$ of $\delta^*$ of order $n$ where $(d+1)x = \ominus c \in H$.
Since $H$ is a subgroup of order $n$ of $\mathbb{R}/\mathbb{Z}$ or $\mathbb{R}/\mathbb{Z} \times \mathbb{Z}_2$, either $H$ is cyclic, or $H\cong\mathbb{Z}_{n/2}\times\mathbb{Z}_2$, the latter only when $n$ is divisible by $4$.
The exact group will matter only when we make exact calculations.
Note that it follows from the group property that any $d$ points on $\delta^*$ span a hyperplane.
Also, since any hyperplane intersects $\delta^*$ in $d+1$ points, counting multiplicity, it follows that an ordinary hyperplane of $H\oplus x$ intersects $\delta^*$ in $d$ points, of which exactly one has multiplicity $2$ and the others multiplicity $1$.
Denote the number of ordered $k$-tuples $(a_1, \dotsc, a_k)$ with distinct $a_i \in H$ that satisfy $m_1 a_1 \oplus \dotsb \oplus m_k a_k = c$ by $[m_1, \dotsc, m_k; c]$.
Then the number of ordinary hyperplanes spanned by $H \oplus x$ is
\begin{equation}\label{expr1} \frac{1}{(d-1)!} [2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!; c].
\end{equation}
We show that we can always find a value of $c$ for which \eqref{expr1} is at most $\binom{n-1}{d-1}$.
\begin{lemma}\label{lem:coset1}
Let $\delta^*$ be an elliptic normal curve or the smooth points of a rational acnodal curve in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 2$.
Then any finite subgroup $H$ of $\delta^*$ of order $n$ has a coset $H \oplus x$ with $(d+1)x \in H$, that spans at most $\binom{n-1}{d-1}$ ordinary hyperplanes.
Furthermore, if $d+1$ and $n$ are coprime, then any such coset spans exactly $\binom{n-1}{d-1}$ ordinary hyperplanes.
\end{lemma}
\begin{proof}
It suffices to show that there exists $c \in H$ such that the number of solutions $(a_1, \dotsc, a_d)\in H^d$ of the equation $2a_1 \oplus a_2 \oplus \dotsb \oplus a_d = c$, where $c = \ominus(d+1)x$, is at most $(d-1)!\binom{n-1}{d-1}$.
Fix $a_1$ and consider the substitution $b_i = a_i \ominus a_1$ for $i = 2, \dotsc, d$.
Note that $2a_1 \oplus a_2 \oplus \dotsb \oplus a_d = c$ and $a_1,\dots,a_d$ are distinct if and only if $b_2 \oplus \dotsb \oplus b_d = c \ominus (d+1)a_1$ and $b_2,\dots,b_d$ are distinct and non-zero.
Let
\[ A_{c,j} = \setbuilder{(j, a_2, \dotsc, a_d)}{2j \oplus a_2 \oplus \dotsb \oplus a_d = c, \text{$a_2, \dotsc, a_d \in H \setminus \{j\}$ distinct}}, \]
and let
\[ B_k = \setbuilder{(b_2, \dotsc, b_d)}{b_2 \oplus \dotsb \oplus b_d = k, \text{$b_2, \dotsc, b_d \in H \setminus \{0\}$ distinct}}. \]
Then $|A_{c,j}| = |B_{c \ominus (d+1)j}|$, and the number of ordinary hyperplanes spanned by $H \oplus x$ is
\[ \frac{1}{(d-1)!} \sum_{j\in H} |A_{c,j}|. \]
If $d+1$ is coprime to $n$, then $c \ominus (d+1)j$ runs through all elements of $H$ as $j$ varies.
So we have $\sum_j |B_{c \ominus (d+1)j}| = (n-1)\dotsb (n-d+1)$, hence for all $c$,
\[ \frac{1}{(d-1)!} \sum_{j\in H} |A_{c,j}| = \binom{n-1}{d-1}. \]
If $d+1$ is not coprime to $n$, then $c \ominus (d+1)j$ runs through a coset of a subgroup of $H$ of size $n/\gcd(d+1,n)$ as $j$ varies.
We now have
\[ \sum_{j \in H} |B_{c \ominus (d+1)j}| = \gcd(d+1,n) \sum_{k \in c \ominus (d+1)H} |B_k|. \]
Summing over $c$ gives
\begin{align*}
\sum_{c \in H} \sum_{j \in H} |A_{c,j}| &= \gcd(d+1,n) \sum_{c \in H} \sum_{k \in c \ominus (d+1)H} |B_k|\\
&= \gcd(d+1,n) \frac{n}{\gcd(d+1,n)} (n-1)\dotsb (n-d+1)\\
& = n (n-1)\dotsb (n-d+1).
\end{align*}
By the pigeonhole principle, there must then exist a $c$ such that
\[ \frac{1}{(d-1)!} \sum_{j\in H} |A_{c,j}| \leqslant \binom{n-1}{d-1}. \qedhere \]
\end{proof}
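As a sanity check of Lemma~\ref{lem:coset1} in the cyclic case $H \cong \mathbb{Z}_n$, the quantity $\frac{1}{(d-1)!}[2,1,\dotsc,1;c]$ can be brute-forced for small $n$ and $d$. The following Python sketch (illustrative only) confirms the exact count $\binom{n-1}{d-1}$ when $\gcd(d+1,n)=1$, and the pigeonhole bound otherwise.

```python
from itertools import permutations
from math import comb, factorial, gcd

def ordinary_count(n, d, c):
    # (1/(d-1)!) * [2,1,...,1; c] over H = Z_n: ordered d-tuples of
    # distinct a_i with 2*a_1 + a_2 + ... + a_d = c (mod n).
    total = sum(1 for t in permutations(range(n), d)
                if (2 * t[0] + sum(t[1:])) % n == c)
    # positions 2,...,d are symmetric, so total is divisible by (d-1)!
    return total // factorial(d - 1)

# Coprime case: every c gives exactly C(n-1, d-1).
for n, d in [(7, 3), (7, 4), (9, 4)]:
    assert gcd(d + 1, n) == 1
    assert all(ordinary_count(n, d, c) == comb(n - 1, d - 1)
               for c in range(n))

# Non-coprime case: some c gives at most C(n-1, d-1).
for n, d in [(8, 3), (10, 4)]:
    assert min(ordinary_count(n, d, c) for c in range(n)) <= comb(n - 1, d - 1)
```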
We next want to show that $[2, \!\!\overbrace{1, \dotsc, 1}^\text{$d-1$ times}\!\!; c]$ is always very close to $(d-1)!\binom{n-1}{d-1}$, independent of $c$ or the group $H$.
Before that, we prove two simple properties of $[m_1, \dotsc, m_k; c]$.
\begin{lemma}\label{lem:upper}
$[m_1,\dots,m_k;c]\leqslant 2m_k(k-1)!\binom{n}{k-1}$.
\end{lemma}
\begin{proof}
Consider a solution $(a_1,\dotsc,a_k)$ of $m_1a_1 \oplus \dotsb \oplus m_ka_k=c$ where all the $a_i$ are distinct.
We can choose $a_1,\dotsc,a_{k-1}$ arbitrarily in $(k-1)!\binom{n}{k-1}$ ways, and $a_k$ satisfies the equation $m_ka_k = c \ominus m_1a_1 \ominus \dotsb \ominus m_{k-1}a_{k-1}$, which has at most $m_k$ solutions if $H=\mathbb{Z}_n$ and at most $2m_k$ solutions if $H=\mathbb{Z}_2\times\mathbb{Z}_{n/2}$.
\end{proof}
\begin{lemma}\label{lem:recurrence}
We have the recurrence relation
\begin{align}
[m_1,\dots,m_{k-1},1;c] = (k-1)!\binom{n}{k-1} &- [m_1 +1,m_2,\dots,m_{k-1};c]\notag\\
&- [m_1,m_2 +1,m_3,\dots,m_{k-1};c]\notag\\
&- \dotsb\notag\\
&- [m_1,\dots,m_{k-2},m_{k-1}+1;c]. \notag
\end{align}
\end{lemma}
\begin{proof}
We can arbitrarily choose distinct values from $H$ for $a_1,\dots,a_{k-1}$, which determines $a_k$, and then we have to subtract the number of $k$-tuples where $a_k$ is equal to one of the other $a_i$, $i=1,\dots,k-1$.
\end{proof}
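Both Lemma~\ref{lem:upper} and Lemma~\ref{lem:recurrence} are purely combinatorial statements about $[m_1,\dotsc,m_k;c]$ and can be verified by brute force over a cyclic group; the Python sketch below (illustrative only) checks them for $H = \mathbb{Z}_8$ and a few coefficient patterns.

```python
from itertools import permutations
from math import comb, factorial

def bracket(ms, c, n):
    # [m_1,...,m_k; c] over Z_n: ordered tuples of distinct a_i with
    # m_1*a_1 + ... + m_k*a_k = c (mod n).
    return sum(1 for t in permutations(range(n), len(ms))
               if sum(m * a for m, a in zip(ms, t)) % n == c)

n = 8
for c in range(n):
    for ms in [(2,), (3,), (2, 2), (2, 1)]:
        k = len(ms) + 1
        lhs = bracket(ms + (1,), c, n)
        # the recurrence with m_k = 1
        rhs = factorial(k - 1) * comb(n, k - 1) - sum(
            bracket(ms[:i] + (ms[i] + 1,) + ms[i + 1:], c, n)
            for i in range(len(ms)))
        assert lhs == rhs, (ms, c)
        # the upper bound with m_k = 1
        assert lhs <= 2 * factorial(k - 1) * comb(n, k - 1)
```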
\begin{lemma}\label{lem:2111}
\[ [2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c] = (d-1)! \left( \binom{n-1}{d-1} + \varepsilon(d,n)\right),\]
where
\[ |\varepsilon(d,n)| = \begin{cases}
O\left(2^{-d/2}\binom{n}{(d-1)/2}+\binom{n}{(d-3)/2}\right) & \text{if $d$ is odd,}\\
O\left(d 2^{-d/2}\binom{n}{d/2-1}+\binom{n}{d/2-2}\right) & \text{if $d$ is even.}
\end{cases} \]
\end{lemma}
\begin{proof}
Applying Lemma~\ref{lem:recurrence} once, we obtain
\[ [2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c] = (d-1)!\binom{n}{d-1} - [3, \!\!\underbrace{1, \dotsc, 1}_\text{$d-2$ times}\!\!; c] - (d-2)[2, 2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-3$ times}\!\!; c]. \]
Note that at each stage of the recurrence in Lemma~\ref{lem:recurrence} (as long as it applies),
there are $(d-1)(d-2)\dotsb(d-k)$ terms of length $d-k$, where we define the \emph{length} of $[m_1, \dotsc, m_k; c]$ to be $k$.
If $d$ is odd, we can continue this recurrence until we reach
\begin{align*}
[2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c] &= (d-1)! \left( \binom{n}{d-1} - \binom{n}{d-2} + \dotsb + (-1)^{(d+1)/2} \binom{n}{(d+1)/2} \right)\\
&\qquad + (-1)^{(d-1)/2}R,
\end{align*}
where $R$ is the sum of $(d-1)(d-2)\dotsb(d-(d-1)/2)$ terms of length $(d+1)/2$.
Among these there are \[\frac{\binom{d-1}{2}\binom{d-3}{2}\dotsb\binom{2}{2}}{(\frac{d-1}{2})!} = (d-2)(d-4)\dotsb 3 \cdot1\] terms of the form $[2, \dotsc, 2; c]$.
We now write $R=A+B$, where $A$ is the same sum as $R$, except that we replace each occurrence of $[2,\dots,2;c]$ by $[1,\dots,1;c]$, and
\[ B := (d-2)(d-4)\dotsb 3\cdot 1 ([\underbrace{2,\dotsc,2}_\text{$\frac{d+1}{2}$ times};c] - [\!\underbrace{1,\dotsc,1}_\text{$\frac{d+1}{2}$ times}\!;c]).\]
We next bound $A$ and $B$.
We apply Lemma~\ref{lem:recurrence} to each term in $A$, after which we obtain $(d-1)(d-2)\dotsb(d-(d+1)/2)$ terms of length $(d-1)/2$.
Then using the bound in Lemma~\ref{lem:upper}, we obtain
\begin{align*}
A &= (d-1)!\binom{n}{(d-1)/2} - O\left((d-1)(d-2)\dotsb(d-(d+1)/2)\left(\tfrac{d-3}{2}\right)!\binom{n}{(d-3)/2}\right)\\
&= (d-1)!\left(\binom{n}{(d-1)/2} - O\left(\binom{n}{(d-3)/2}\right) \right).
\end{align*}
For $B$, we again use Lemma~\ref{lem:upper} to get
\begin{align*}
|B| &= O\left((d-2)(d-4)\dotsb 3\cdot 1 \left(\frac{d-1}{2}\right)! \binom{n}{(d-1)/2} \right)\\
&= O\left((d-2)(d-4)\dotsb 3\cdot 1 \cdot 2^{-\frac{d-1}{2}}(d-1)(d-3)\dotsb 4 \cdot 2 \binom{n}{(d-1)/2} \right)\\
&= O\left((d-1)!2^{-\frac{d-1}{2}}\binom{n}{(d-1)/2}\right).
\end{align*}
Thus we obtain
\begin{multline*}
[2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c] = (d-1)! \left( \binom{n}{d-1} - \binom{n}{d-2} + \dotsb +(-1)^{\frac{d+1}{2}} \binom{n}{(d+1)/2} \right)\\
+ (-1)^{\frac{d-1}{2}}(d-1)!\left(\binom{n}{(d-1)/2} - O\left(\binom{n}{(d-3)/2}\right) \right) + (-1)^{\frac{d-1}{2}}B\\
= (d-1)!\left( \binom{n-1}{d-1} + (-1)^{\frac{d+1}{2}} O\left(\binom{n}{(d-3)/2}\right)\pm O\left(2^{-\frac{d-1}{2}}\binom{n}{(d-1)/2}\right)\right),
\end{multline*}
which finishes the proof for odd $d$.
If $d$ is even, we obtain
\begin{equation*}
[2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c] = (d-1)! \left( \binom{n}{d-1} - \binom{n}{d-2} + \dotsb + (-1)^{\frac{d}{2}+1} \binom{n}{d/2} \right)
+ (-1)^{d/2}R,
\end{equation*}
where $R$ now is the sum of $(d-1)(d-2)\dotsb(d-d/2)$ terms of length $d/2$.
Among these there are
\[ \frac{(d-1)\binom{d-2}{2}\binom{d-4}{2}\dotsb\binom{2}{2}}{(\frac{d-2}{2})!} + \frac{2\binom{d-1}{3}\binom{d-4}{2}\dotsb\binom{2}{2}}{(\frac{d-4}{2})!} = (d+1)(d-1)\dotsb7\cdot5\]
terms of the form $[3,2,\dots,2;c]$.
Again we write $R=A+B$, where $A$ is the same sum as $R$, except that each occurrence of $[3,2,\dots,2;c]$ is replaced by $[1,\dots,1;c]$, and
\[ B := (d+1)(d-1)\dotsb7\cdot5([3,\!\!\underbrace{2,\dotsc,2}_\text{$\frac{d}{2}-1$ times}\!\!;c] - [\underbrace{1,\dotsc,1}_\text{$\frac{d}{2}$ times};c]).\]
Similar to the previous case, we obtain
\[ A = (d-1)!\left(\binom{n}{d/2-1}-O\left(\binom{n}{d/2-2}\right)\right)\]
and \[ |B| = O\left((d+1)(d-1)\dotsb7\cdot5(\tfrac{d}{2}-1)!\binom{n}{d/2-1}\right) = O\left(2^{-d/2}d!\binom{n}{d/2-1}\right),\]
which finishes the proof for even $d$.
\end{proof}
Computing $[2, \dotsc, 2; c]$ and $[3, 2, \dotsc, 2; c]$ exactly is more subtle and depends on $c$ and the group $H$.
We do not need this for the asymptotic Theorems~\ref{thm:main2} and~\ref{thm:main3}, and will only need to do so when computing exact extremal values.
To show that a coset is indeed extremal, we first consider the effect of adding a single point.
The case where the point is on the curve is done in Lemma~\ref{lem:7.7-}, while Lemma~\ref{lem:7.7} covers the case where the point is off the curve.
We then obtain a more general lower bound in Lemma~\ref{lem:coset2}.
\begin{lemma}\label{lem:7.7-}
Let $\delta^*$ be an elliptic normal curve or the smooth points of a rational acnodal curve in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 2$.
Suppose $H \oplus x$ is a coset of a finite subgroup $H$ of $\delta^*$ of order $n$, with $(d+1)x \in H$.
Let $p\in\delta^*\setminus (H\oplus x)$.
Then there are at least $\binom{n}{d-1}$ hyperplanes through $p$ that meet $H \oplus x$ in exactly $d-1$ points.
\end{lemma}
\begin{proof}
Take any $d-1$ points $p_1, \dotsc, p_{d-1} \in H \oplus x$.
Suppose that the (unique) hyperplane through $p,p_1,\dots,p_{d-1}$ contains another point $p'\in H\oplus x$.
Since $p\oplus p_1\oplus \dots \oplus p_{d-1}\oplus p' = 0$ by Propositions~\ref{prop:elliptic_group} and~\ref{prop:rational_group}, we obtain that $p\in H\ominus dx$.
Since $(d+1)x\in H$, we obtain $p\in H\oplus x$, a contradiction.
Therefore, the hyperplane through $p,p_1,\dots,p_{d-1}$ does not contain any other point of $H\oplus x$.
It remains to show that if $\{p_1,\dots,p_{d-1}\}\neq\{p_1',\dots,p_{d-1}'\}$ where also $p_1',\dots,p_{d-1}'\in H\oplus x$, then the two sets span different hyperplanes with $p$.
Suppose they span the same hyperplane.
Then $\ominus(p \oplus p_1 \oplus \dotsb \oplus p_{d-1})$ also lies on this hyperplane, but not in $H\oplus x$, as shown above.
Also, $p_i'\notin\{p_1,\dots,p_{d-1}\}$ for some $i$, and then $p_1,\dots,p_{d-1},p_i'$, and $\ominus(p \oplus p_1 \oplus \dotsb \oplus p_{d-1})$ are $d+1$ distinct points on a hyperplane, so their sum is $0$, which implies $p=p_i'$, a contradiction.
So there are $\binom{n}{d-1}$ hyperplanes through $p$ meeting $H \oplus x$ in exactly $d-1$ points.
\end{proof}
The following lemma generalises \cite{GT13}*{Lemma~7.7}, which states that if $\delta^*$ is an elliptic curve or the smooth points of an acnodal cubic curve in the plane, $H\oplus x$ is a coset of a finite subgroup of order $n>10^4$, and if $p\notin\delta^*$, then there are at least $n/1000$ lines through $p$ that pass through exactly one element of $H\oplus x$.
A naive generalisation to dimension $3$ would state that if $\delta^*$ is an elliptic or acnodal space quartic curve with a finite subgroup $H$ of sufficiently large order $n$, and $x\in\delta^*$ and $p\notin\delta^*$, then there are $\Omega(n^2)$ planes through $p$ and exactly two elements of $H\oplus x$.
This statement is false, even if we assume that $4x\in H$ (the analogous assumption $3x\in H$ is not made in \cite{GT13}), as can be seen from the following example.
Let $\delta$ be an elliptic quartic curve obtained from the intersection of a circular cylinder in $\mathbb{R}^3$ with a sphere which has centre $c$ on the axis $\ell$ of the cylinder.
Then $\delta$ is symmetric in the plane through $c$ perpendicular to $\ell$, and we can find a finite subgroup $H$ of any even order $n$ such that the line through any element of $H$ parallel to $\ell$ intersects $H$ in two points.
If we now choose $p$ to be the point at infinity on $\ell$, then we obtain that any plane spanned by $p$ and two points of $H$ not collinear with $p$, intersects $H$ in two more points.
Note that the projection $\pi_p$ maps $\delta$ to a conic, so is not generically one-to-one.
The number of such $p$ is bounded by the trisecant lemma (Lemma~\ref{lem:projection2}).
However, as the next lemma shows, a generalisation of \cite{GT13}*{Lemma~7.7} holds except that in dimension~3 we have to exclude such points $p$.
\begin{lemma}\label{lem:7.7}
Let $\delta$ be an elliptic normal curve or a rational acnodal curve in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 2$, and let $\delta^*$ be its set of smooth points.
Let $H$ be a finite subgroup of $\delta^*$ of order $n$, where $n\geqslant Cd^4$ for some sufficiently large absolute constant $C>0$.
Let $x\in\delta^*$ satisfy $(d+1)x\in H$.
Let $p\in\mathbb{R}\mathbb{P}^d\setminus\delta^*$.
If $d=3$, assume furthermore that $\delta$ is not contained in a quadric cone with vertex $p$.
Then there are at least $c\binom{n}{d-1}$ hyperplanes through $p$ that meet the coset $H \oplus x$ in exactly $d-1$ points, for some sufficiently small absolute constant $c>0$.
\end{lemma}
\begin{proof}
We prove by induction on $d$ that under the given hypotheses there are at least $c'\prod_{i=2}^d(1-\frac{1}{i^2})\binom{n}{d-1}$ such hyperplanes for some sufficiently small absolute constant $c'>0$.
The base case $d = 2$ is given by \cite{GT13}*{Lemma~7.7}.
Next assume that $d\geqslant 3$, and that the statement holds for $d-1$.
Fix a $q\in H\oplus x$, and consider the projection $\pi_q$.
Since $q$ is a smooth point of $\delta$, $\overline{\pi_q(\delta\setminus\{q\})}$ is a non-degenerate curve of degree $d$ in $\mathbb{R}\mathbb{P}^{d-1}$ (otherwise its degree would be at most $d/2$, but a non-degenerate curve has degree at least $d-1$).
The projection $\pi_q$ can be naturally extended to have a value at $q$, by setting $\pi_q(q)$ to be the point where the tangent line of $\delta$ at $q$ intersects the hyperplane onto which $\delta$ is projected.
(This point is the single point in $\overline{\pi_q(\delta\setminus\{q\})}\setminus\pi_q(\delta\setminus\{q\})$.)
The curve $\pi_q(\delta)$ has degree $d$ and is either elliptic or rational and acnodal, hence it has a group operation $\boxplus$ such that $d$ points are on a hyperplane in $\mathbb{R}\mathbb{P}^{d-1}$ if and only if they sum to the identity.
Observe that any $d$ points $\pi_q(p_1),\dots,\pi_q(p_d)\in\pi_q(\delta^*)$ lie on a hyperplane in $\mathbb{R}\mathbb{P}^{d-1}$ if and only if $p_1\oplus\dots\oplus p_d \oplus q = 0$.
By Proposition~\ref{prop:unique} it follows that the group on $\pi_q(\delta^*)$ obtained by transferring the group $(\delta^*,\oplus)$ by $\pi_q$ is a translation of $(\pi_q(\delta^*),\boxplus)$.
In particular, $\pi_q(H\oplus x) = H'\boxplus x'$ for some subgroup $H'$ of $(\pi_q(\delta^*),\boxplus)$ of order $n$, and $(d+1)x'\in H'$.
We would like to apply the induction hypothesis, but we can only do that if $\pi_q(p)\notin\pi_q(\delta^*)$, and when $d=4$, if $\pi_q(p)$ is not the vertex of a quadric cone containing $\pi_q(\delta)$.
We next show that there are only $O(d^2)$ exceptional points $q$ to which we cannot apply induction.
Note that $\pi_q(p)\in\pi_q(\delta^*)$ if and only if the line $pq$ intersects $\delta$ with multiplicity $2$, which means we have to bound the number of these lines through $p$.
To this end, we consider the projection of $\delta$ from the point $p$.
Suppose that $\pi_p$ does not project $\delta$ generically one-to-one to a degree $d+1$ curve in $\mathbb{R}\mathbb{P}^{d-1}$.
Then $\pi_p(\delta)$ has degree at most $(d+1)/2$.
However, its degree is at least $d-1$ because it is non-degenerate.
It follows that $d=3$, and that $\pi_p(\delta)$ has degree $2$ and is irreducible, so $\delta$ is contained in a quadric cone with vertex $p$, which we ruled out by assumption.
Therefore, $\pi_p$ projects $\delta$ generically one-to-one onto the curve $\pi_p(\delta)$, which has degree $d+1$ and has at most $\binom{d}{2}$ double points (this follows from the Pl\"ucker formulas after projecting to the plane \cite{W78}*{Chapter~III, Theorem~4.4}).
We thus have that an arbitrary point $p \in \mathbb{R}\mathbb{P}^d \setminus \delta$ lies on at most $O(d^2)$ secants or tangents of $\delta$ (or lines through two points of $\delta^*$ if $p$ is the acnode of $\delta$).
If $d=4$, we also have to avoid $q$ such that $\pi_q(p)$ is the vertex of a cone on which $\pi_q(\delta)$ lies.
Such $q$ have the property that if we first project $\delta$ from $q$ and then $\pi_q(\delta)$ from $\pi_q(p)$, then the composition of these two projections is not generically one-to-one.
Another way to perform these two successive projections is to first project $\delta$ from $p$ and then $\pi_p(\delta)$ from $\pi_p(q)$.
Thus, we have that $\pi_p(q)$ is a point on the quintic $\pi_p(\delta)$ in $\mathbb{R}\mathbb{P}^3$ such that the projection of $\pi_p(\delta)$ from $\pi_p(q)$ onto $\mathbb{R}\mathbb{P}^2$ is not generically one-to-one.
However, there are only $O(1)$ such points by Lemma~\ref{lem:projection2}.
Thus there are at most $Cd^2$ points $q\in H\oplus x$ to which we cannot apply the induction hypothesis.
For all remaining $q\in H\oplus x$, we obtain by the induction hypothesis that there are at least $c'\prod_{i=2}^{d-1}(1-\frac{1}{i^2})\binom{n}{d-2}$ hyperplanes $\Pi$ in $\mathbb{R}\mathbb{P}^{d-1}$ through $\pi_q(p)$ and exactly $d-2$ points of $H'\boxplus x'$.
If none of these $d-2$ points equal $\pi_q(q)$, then $\pi_q^{-1}(\Pi)$ is a hyperplane in $\mathbb{R}\mathbb{P}^d$ through $p$ and $d-1$ points of $H\oplus x$, one of which is $q$.
There are at most $\binom{n-1}{d-3}$ such hyperplanes in $\mathbb{R}\mathbb{P}^{d-1}$ through $\pi_q(q)$.
Therefore, there are at least $c'\prod_{i=2}^{d-1}(1-\frac{1}{i^2})\binom{n}{d-2} - \binom{n-1}{d-3}$ hyperplanes in $\mathbb{R}\mathbb{P}^d$ that pass through $p$ and exactly $d-1$ points of $H\oplus x$, one of them being $q$.
If we sum over all $n-Cd^2$ points $q$, we count each hyperplane $d-1$ times, and we obtain that the total number of such hyperplanes is at least
\begin{equation}\label{eq1}
\frac{n-Cd^2}{d-1}\left(c'\prod_{i=2}^{d-1}\left(1-\frac{1}{i^2}\right)\binom{n}{d-2} - \binom{n-1}{d-3}\right).
\end{equation}
It can easily be checked that
\begin{equation}\label{eq2}
\frac{n-Cd^2}{d-1}\binom{n}{d-2} \geqslant \left(1 - \frac{1}{2d^2}\right)\binom{n}{d-1}
\end{equation}
if $n >2Cd^4$, and that
\begin{equation}\label{eq3}
c'\prod_{i=2}^{d-1}\left(1-\frac{1}{i^2}\right)\frac{1}{2d^2}\binom{n}{d-1} \geqslant \frac{n-Cd^2}{d-1}\binom{n-1}{d-3}
\end{equation}
if $n > 4d^3/c'$.
It now follows from \eqref{eq2} and \eqref{eq3} that the expression \eqref{eq1} is at least
\[ c'\prod_{i=2}^{d}\left(1-\frac{1}{i^2}\right)\binom{n}{d-1},\]
which finishes the induction.
\end{proof}
\begin{lemma}\label{lem:coset2}
Let $\delta^*$ be an elliptic normal curve or the smooth points of a rational acnodal curve in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 4$, and let $H \oplus x$ be a coset of a finite subgroup $H$ of $\delta^*$, with $(d+1)x\in H$.
Let $A\subseteq H\oplus x$ and $B\subset\mathbb{R}\mathbb{P}^d\setminus(H\oplus x)$ with $|A|=a$ and $|B|=b$.
Let $P = (H \oplus x \setminus A) \cup B$ with $|P|=n$ be such that every $d$ points of $P$ span a hyperplane.
If $A$ and $B$ are not both empty and $n\geqslant C(a+b+d^2)d$ for some sufficiently large absolute constant $C>0$, then $P$ spans at least $(1+c)\binom{n-1}{d-1}$ ordinary hyperplanes for some sufficiently small absolute constant $c>0$.
\end{lemma}
\begin{proof}
We first bound from below the number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that do not pass through a point of $B$.
The number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that are disjoint from $A$ is
\[ \frac{1}{(d-1)!}\left|\setbuilder{(a_1,\dots,a_d)\in(H\setminus(A\ominus x))^d}{\begin{array}{c}2a_1\oplus a_2 \oplus\dotsb \oplus a_d=\ominus (d+1)x,\\ \text{$a_1,\dots,a_d$ are distinct}\end{array}}\right|.\]
If we denote by $[m_1, \dotsc, m_k]'$ the number of ordered $k$-tuples $(a_1, \dotsc, a_k)$ with distinct $a_i \in H\setminus(A\ominus x)$ that satisfy $m_1 a_1 \oplus \dotsb \oplus m_k a_k = \ominus (d+1)x$, then we obtain, similar to the proofs of Lemmas~\ref{lem:upper} and \ref{lem:recurrence}, that
\begin{align*}
[2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!]' &= (d-1)!\binom{n-b}{d-1} - [3, \!\!\underbrace{1, \dotsc, 1}_\text{$d-2$ times}\!]' - (d-2)[2, 2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-3$ times}\!]'\\
&\geqslant (d-1)!\binom{n-b}{d-1} - 2(d-2)!\binom{n-b}{d-2}-2(d-2)(d-2)!\binom{n-b}{d-2}\\
&= (d-1)!\binom{n-b}{d-1} - 2(d-1)!\binom{n-b}{d-2},
\end{align*}
and it follows that the number of ordinary hyperplanes of $(H\oplus x)\setminus A$ disjoint from $A$ is at least $\binom{n-b}{d-1}-2\binom{n-b}{d-2}$.
Next, we obtain an upper bound on the number of these hyperplanes that pass through a point $q\in B$.
Let the ordinary hyperplane $\Pi$ pass through $p_1,p_2,\dots,p_d\in (H\oplus x)\setminus A$,
with $p_1$ being the double point.
Since $q\in\Pi$ and any $d$ points determine a hyperplane, $\Pi$ is still spanned by $q,p_1,\dots,p_{d-1}$, after a relabelling of $p_2,\dots,p_d$.
Let $S$ be a minimal subset of $\{p_2,\dots,p_{d-1}\}$ such that the tangent line $\ell$ of $\delta$ at $p_1$ lies in the flat spanned by $S\cup\{q,p_1\}$.
If $S$ is empty, then $\ell$ is a tangent from $q$ to $\delta$, of which there are at most $d(d+1)$ (this follows again from projection and the Pl\"ucker formulas \citelist{\cite{W78}*{Chapter~IV, p.~117}\cite{NZ}*{Corollary~2.5}}).
Therefore, the number of ordinary hyperplanes through $p_1,p_2,\dots,p_d\in (H\oplus x)\setminus A$ with the tangent of $\delta$ at $p_1$ passing through $q$ is at most $d(d+1)\binom{n-b}{d-2}$.
If on the other hand $S$ is non-empty, then there is some $p_i$, say $p_{d-1}$, such that $q,p_1,\dots,p_{d-2}$ together with $\ell$ generate $\Pi$.
Therefore, $\Pi$ is determined by $p_1$, the tangent through $p_1$, and some $d-3$ more points $p_i$.
There are at most $(n-b)\binom{n-b-1}{d-3} = (d-2)\binom{n-b}{d-2}$ ordinary hyperplanes through $q$ in this case.
The number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that contain a point from $A$ is at least
\[a\left(\binom{n-b}{d-1}-a\binom{n-b}{d-2}-(n-b)\binom{n-b-1}{d-3}\right) = a\binom{n-b}{d-1}-(a^2+a(d-2))\binom{n-b}{d-2},\]
since we can find such a hyperplane by choosing a point $p\in A$ and $d-1$ points $p_1,\dots,p_{d-1}\in(H\oplus x)\setminus A$, and then the remaining point $\ominus(p\oplus p_1\oplus\dots\oplus p_{d-1})$ might not be a new point in $(H\oplus x)\setminus A$ by either being in $A$ (possibly equal to $p$) or being equal to one of the $p_i$.
The number of these hyperplanes that also pass through some point of $B$ is at most $ab\binom{n-b}{d-2}$.
Therefore, the number of ordinary hyperplanes of $(H\oplus x)\setminus A$ that miss $B$ is at least
\begin{equation}\label{lower1}
(1+a)\binom{n-b}{d-1}-\left(2 + b(d(d+1)+d-2) + a^2+a(d-2) + ab\right)\binom{n-b}{d-2}.
\end{equation}
Next, assuming that $B\neq\emptyset$, we find a lower bound on the number of ordinary hyperplanes through exactly one point of $B$ and exactly $d-1$ points of $(H\oplus x)\setminus A$.
The number of hyperplanes through at least one point of $B$ and exactly $d-1$ points of $(H\oplus x)\setminus A$ is at least $bc'\binom{n-b}{d-1}-ab\binom{n-b}{d-2}$ by Lemmas~\ref{lem:7.7-} and~\ref{lem:7.7} for some sufficiently small absolute constant $c'>0$.
The number of hyperplanes through at least two points of $B$ and exactly $d-1$ points of $(H\oplus x)\setminus A$ is at most $\binom{b}{2}\binom{n-b}{d-2}$.
It follows that there are at least $bc'\binom{n-b}{d-1}-\bigl(ab+\binom{b}{2}\bigr)\binom{n-b}{d-2}$ ordinary hyperplanes passing through a point of~$B$.
Combining this with \eqref{lower1}, $P$ spans at least
\begin{equation*} (1+a+bc')\binom{n-b}{d-1}-\left(2 + b(d(d+1)+d-2) + a^2+a(d-2) + 2ab + \binom{b}{2}\right)\binom{n-b}{d-2} =: f(a,b)
\end{equation*}
ordinary hyperplanes.
Since
\[f(a+1,b) - f(a,b) = \binom{n-b}{d-1}-(2a+2b+d-1)\binom{n-b}{d-2}\]
is easily seen to be positive for all $a\geqslant 0$ as long as $n > (2a+2b+d-1)(d-1)+b+d-2$, we have without loss of generality that $a=0$ in the case that $b\geqslant 1$.
Then
$f(0,b+1)-f(0,b)$ is easily seen to be at least
\[ c'\binom{n-b-1}{d-1} - (d^2+d-2+b)\binom{n-b-1}{d-2},\]
which is positive for all $b\geqslant 1$ if $n \geqslant C(b+d^2)d$ for $C$ sufficiently large.
Also, $f(0,1) = (1+c')\binom{n-1}{d-1} - (d^2+2d)\binom{n-1}{d-2} \geqslant (1+c)\binom{n-1}{d-1}$ if $n\geqslant Cd^3$.
This completes the proof in the case where $B$ is non-empty.
If $B$ is empty, then we can bound the number of ordinary hyperplanes from below by setting $b=0$ in \eqref{lower1}, and checking that the resulting expression
\[ (1+a)\binom{n}{d-1} - \left(d+a^2+a(d-2)\right)\binom{n}{d-2}\] is increasing in $a$ if $n > (2a+d-1)(d-1)+d-2$, and larger than $\frac32\binom{n-1}{d-1}$ if $n > Cd^3$.
\end{proof}
We are now ready to prove Theorems~\ref{thm:main2} and~\ref{thm:main3}.
\begin{proof}[Proof of Theorem~\ref{thm:main2}]
Let $P$ be the set of $n$ points.
By Lemma~\ref{lem:coset1}, we may assume that $P$ has at most $\binom{n-1}{d-1}$ ordinary hyperplanes.
Since $n \geqslant C d^3 2^d$, we may apply Theorem~\ref{thm:main1} to obtain that up to $O(d2^d)$ points, $P$ lies in a hyperplane or is a coset of a subgroup of an elliptic normal curve or the smooth points of a rational acnodal curve.
In the first case, by Lemma~\ref{lem:hyperplane}, since $n \geqslant C d^3 2^d$, the minimum number of ordinary hyperplanes is attained when all points but one are contained in a hyperplane, and we get exactly $\binom{n-1}{d-1}$ ordinary hyperplanes.
In the second case, by Lemma~\ref{lem:coset2}, again since $n \geqslant C d^3 2^d$, the minimum number of ordinary hyperplanes is attained by a coset of an elliptic normal curve or the smooth points of a rational acnodal curve.
Lemmas~\ref{lem:coset1} and \ref{lem:2111} then complete the proof.
Note that the second term in the error term of Lemma~\ref{lem:2111} is dominated by the first term because of the lower bound on $n$, and that the error term here is negative by Lemma~\ref{lem:coset1}.
\end{proof}
Note that if we want to find the exact minimum number of ordinary hyperplanes spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 4$, not contained in a hyperplane and where every $d$ points span a hyperplane, we can continue with the calculation of $[2, 1, \dotsc, 1; c]$ in the proof of Lemma~\ref{lem:2111}.
As seen in the proof of Lemma~\ref{lem:coset1}, this depends on $\gcd(d+1, n)$.
We also have to minimise over different values of $c \in H$, and if $n \equiv 0 \pmod{4}$, consider both cases $H \cong \mathbb{Z}_n$ and $H \cong \mathbb{Z}_{n/2} \times \mathbb{Z}_2$.
For example, it can be shown that if $d=4$, the minimum number is
\[
\begin{cases}
\binom{n-1}{3} - 4 & \text{if } n \equiv 0 \pmod{5},\\
\binom{n-1}{3} & \text{otherwise},
\end{cases}
\]
if $d=5$, the minimum number is
\[
\begin{cases}
\binom{n-1}{4} - \frac{1}{8}n^2 + \frac{1}{12} n - 1 & \text{if } n \equiv 0 \pmod{6},\\
\binom{n-1}{4} & \text{if } n \equiv 1, 5 \pmod{6},\\
\binom{n-1}{4} -\frac{1}{8}n^2 + \frac{3}{4}n - 1 & \text{if } n \equiv 2, 4 \pmod{6},\\
\binom{n-1}{4} - \frac{2}{3}n + 2 & \text{if } n \equiv 3 \pmod{6},
\end{cases}
\]
and if $d=6$, the minimum number is
\[
\begin{cases}
\binom{n-1}{5} - 6 & \text{if } n \equiv 0 \pmod{7},\\
\binom{n-1}{5} & \text{otherwise.}
\end{cases}
\]
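These closed forms can be compared against brute-force coset counts for small $n$: the coset computation $\min_c \frac{1}{(d-1)!}[2,1,\dotsc,1;c]$ is exact for every $n$, even though the extremal theorems require $n$ large. The Python sketch below (illustrative only, and restricted to cyclic groups $\mathbb{Z}_n$) confirms the $d=4$ and $d=6$ values in a few small cases.

```python
from itertools import permutations
from math import comb, factorial

def min_ordinary(n, d):
    # min over c in Z_n of (1/(d-1)!) * [2,1,...,1; c], brute-forced
    counts = [0] * n
    for t in permutations(range(n), d):
        counts[(2 * t[0] + sum(t[1:])) % n] += 1
    return min(counts) // factorial(d - 1)

# d = 4: C(n-1,3) - 4 if 5 | n, and C(n-1,3) otherwise.
assert min_ordinary(10, 4) == comb(9, 3) - 4   # n = 10
assert min_ordinary(8, 4) == comb(7, 3)        # n = 8, gcd(5, 8) = 1
# d = 6, n = 7: C(6,5) - 6 = 0.
assert min_ordinary(7, 6) == comb(6, 5) - 6
```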
\begin{proof}[Proof of Theorem~\ref{thm:main3}]
We first show that there exist sets of $n$ points, with every $d$ points spanning a hyperplane, spanning at least $\frac{1}{d+1} \binom{n-1}{d} + O\left(2^{-d/2}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right)$ $(d+1)$-point hyperplanes.
Let $\delta^*$ be an elliptic normal curve or the smooth points of a rational acnodal curve.
By Propositions~\ref{prop:elliptic_group} and~\ref{prop:rational_group}, the number of $(d+1)$-point hyperplanes spanned by a coset $H \oplus x$ of $\delta^*$ is
\[ \frac{1}{(d+1)!}[\!\underbrace{1, \dotsc, 1}_\text{$d+1$ times}\!; c] \]
for some $c \in \delta^*$.
Note that
\[ [\!\underbrace{1, \dotsc, 1}_\text{$d+1$ times}\!; c] = d!\binom{n}{d} - d[2, \!\!\underbrace{1, \dotsc, 1}_\text{$d-1$ times}\!\!; c], \]
so if we take $H \oplus x$ to be a coset minimising the number of ordinary hyperplanes, then by Theorem~\ref{thm:main2}, there are
\begin{align}
&\mathrel{\phantom{=}} \frac{1}{d+1} \left( \binom{n}{d} - \binom{n-1}{d-1} \right) + O\left(2^{-\frac{d}{2}}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right) \notag \\
&= \frac{1}{d+1} \binom{n-1}{d} + O\left(2^{-\frac{d}{2}}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right)\label{eqn:d+1}
\end{align}
$(d+1)$-point hyperplanes.
Next let $P$ be an arbitrary set of $n$ points in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 4$, where every $d$ points span a hyperplane.
Suppose $P$ spans the maximum number of $(d+1)$-point hyperplanes.
Without loss of generality, we can thus assume $P$ spans at least $\frac{1}{d+1}\binom{n-1}{d} + O\left(2^{-d/2}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right)$ $(d+1)$-point hyperplanes.
Let $m_i$ denote the number of $i$-point hyperplanes spanned by $P$.
Counting the number of unordered $d$-tuples, we get
\[ \binom{n}{d} = \sum_{i \geqslant d} \binom{i}{d} m_i \geqslant m_d + (d+1)m_{d+1}, \]
hence we have
\[ m_d \leqslant \binom{n}{d} - \binom{n-1}{d} - O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right) = O\left(\binom{n-1}{d-1}\right), \]
and we can apply Theorem~\ref{thm:main1}.
In the case where all but $O(d2^d)$ points of $P$ are contained in a hyperplane, it is easy to see that $P$ spans $O(d2^d\binom{n}{d-1})$ $(d+1)$-point hyperplanes, contradicting the assumption.
So all but $O(d2^d)$ points of $P$ are contained in a coset $H \oplus x$ of a subgroup $H$ of $\delta^*$.
Consider the identity
\[ (d+1)m_{d+1} = \binom{n}{d} - m_d - \sum_{i \geqslant d+2} \binom{i}{d} m_i. \]
By Theorem~\ref{thm:main2} and Lemma~\ref{lem:coset2}, we know that $m_d \geqslant \binom{n-1}{d-1} - O\left(d2^{-d/2}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right)$ and any deviation of $P$ from the coset $H \oplus x$ adds at least $c\binom{n-1}{d-1}$ ordinary hyperplanes for some sufficiently small absolute constant $c>0$.
Since we also have
\begin{align*}
\sum_{i \geqslant d+2} \binom{i}{d} m_i &= \binom{n}{d} - m_d - (d+1)m_{d+1}\\
&= \binom{n}{d} - \binom{n-1}{d-1} - \binom{n-1}{d} + O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right)\\
&= O\left(d2^{-\frac{d}{2}}\binom{n}{\lfloor \frac{d-1}{2} \rfloor}\right),
\end{align*}
we can conclude that $m_{d+1}$ is maximised when $P$ is exactly a coset of a subgroup of $\delta^*$, in which case \eqref{eqn:d+1} completes the proof.
\end{proof}
Knowing the exact minimum number of ordinary hyperplanes spanned by a set of $n$ points in $\mathbb{R}\mathbb{P}^d$, $d \geqslant 4$, not contained in a hyperplane and where every $d$ points span a hyperplane then also gives the exact maximum number of $(d+1)$-point hyperplanes.
Continuing the above examples, for $d = 4$, the maximum number is
\[
\begin{cases}
\frac15\binom{n-1}{4} +\frac45 & \text{if } n \equiv 0 \pmod{5},\\
\frac15\binom{n-1}{4} & \text{otherwise},
\end{cases}
\]
for $d=5$, the maximum number is
\[
\begin{cases}
\frac{1}{6}\binom{n-1}{5} + \frac{1}{48}n^2 - \frac{1}{72}n + \frac16 & \text{if } n \equiv 0 \pmod{6},\\
\frac{1}{6}\binom{n-1}{5} & \text{if } n \equiv 1, 5 \pmod{6},\\
\frac{1}{6}\binom{n-1}{5} + \frac{1}{48}n^2 - \frac{1}{8}n +\frac16 & \text{if } n \equiv 2, 4 \pmod{6},\\
\frac{1}{6}\binom{n-1}{5} +\frac{1}{9}n -\frac{1}{3} & \text{if } n \equiv 3 \pmod{6},
\end{cases}
\]
and for $d=6$, the maximum number is
\[
\begin{cases}
\frac17\binom{n-1}{6} +\frac67 & \text{if } n \equiv 0 \pmod{7},\\
\frac17\binom{n-1}{6} & \text{otherwise}.
\end{cases}
\]
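As a quick arithmetic sanity check (separate from the proof, and purely illustrative), the piecewise expressions above must evaluate to non-negative integers for every admissible $n$; the following standalone Python sketch verifies this for the $d=4$ and $d=6$ cases over a range of small $n$.

```python
from math import comb
from fractions import Fraction

def max_d4(n):
    # Maximum number of 5-point hyperplanes in RP^4 (piecewise formula above).
    base = Fraction(comb(n - 1, 4), 5)
    return base + (Fraction(4, 5) if n % 5 == 0 else 0)

def max_d6(n):
    # Maximum number of 7-point hyperplanes in RP^6 (piecewise formula above).
    base = Fraction(comb(n - 1, 6), 7)
    return base + (Fraction(6, 7) if n % 7 == 0 else 0)

# A counting formula must return a non-negative integer for every n.
for n in range(8, 60):
    for f in (max_d4, max_d6):
        value = f(n)
        assert value.denominator == 1 and value >= 0, (f.__name__, n, value)
print("all values integral and non-negative")
```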
\section*{Acknowledgments}
We thank Peter Allen, Alex Fink, Misha Rudnev, and an anonymous referee for helpful remarks and for pointing out errors in a previous version.
\begin{bibdiv}
\begin{biblist}
\bib{B18}{article}{
author={Ball, Simeon},
title={On sets defining few ordinary planes},
journal={Discrete Comput.\ Geom.},
volume={60},
date={2018},
number={1},
pages={220--253},
}
\bib{BJ18}{article}{
author={Ball, Simeon},
author={Jimenez, Enrique},
title={On sets defining few ordinary solids},
note={arXiv:1808.06388},
}
\bib{BM17}{article}{
author={Ball, Simeon},
author={Monserrat, Joaquim},
title={A generalisation of {S}ylvester's problem to higher dimensions},
journal={J. Geom.},
volume={108},
date={2017},
number={2},
pages={529--543},
}
\bib{Clifford}{article}{
author={Clifford, William Kingdon},
title={On the classification of loci},
journal={Philosophical Transactions of the Royal Society of London},
volume={169},
date={1878},
pages={663--681},
}
\bib{D03}{book}{
author={Dolgachev, Igor},
title={Lectures on Invariant Theory},
publisher={Cambridge University Press},
date={2003},
}
\bib{Fi08}{article}{
author={Fisher, Tom},
title={The invariants of a genus one curve},
journal={Proc. Lond. Math. Soc.},
volume={97},
date={2008},
number={3},
pages={753--782},
}
\bib{Fu84}{book}{
author={Fulton, William},
title={Introduction to intersection theory in algebraic geometry},
series={CBMS Regional Conference Series in Mathematics},
volume={54},
publisher={American Mathematical Society},
date={1984},
}
\bib{Fu11}{article}{
author={Furukawa, Katsuhisa},
title={Defining ideal of the Segre locus in arbitrary characteristic},
journal={J. Algebra},
volume={336},
date={2011},
number={1},
pages={84--98},
}
\bib{GT13}{article}{
author={Green, Ben},
author={Tao, Terence},
title={On sets defining few ordinary lines},
journal={Discrete Comput.\ Geom.},
volume={50},
date={2013},
number={2},
pages={409--468},
}
\bib{H65}{article}{
author={Hansen, Sten},
title={A generalization of a theorem of Sylvester on the lines determined by a finite point set},
journal={Math. Scand.},
volume={16},
date={1965},
pages={175--180},
}
\bib{H92}{book}{
author={Harris, J.},
title={Algebraic Geometry: A First Course},
publisher={Springer},
date={1992},
}
\bib{H77}{book}{
author={Hartshorne, R.},
title={Algebraic Geometry},
publisher={Springer},
date={1977},
}
\bib{IK99}{book}{
author={Iarrobino, Anthony},
author={Kanev, Vassil},
title={Power Sums, Gorenstein Algebras, and Determinantal Loci},
series={Lecture Notes in Mathematics},
volume={1721},
publisher={Springer},
date={1999},
}
\bib{J18}{thesis}{
author={Jimenez Izquierdo, Enrique},
title={On sets of points with few ordinary hyperplanes},
type={Master's Thesis},
address={Universitat Polit\`ecnica de Catalunya},
date={2018},
}
\bib{KKT08}{article}{
author={Kaminski, J. Y.},
author={Kanel-Belov, A.},
author={Teicher, M.},
title={Trisecant lemma for nonequidimensional varieties},
journal={J. Math. Sci.},
volume={149},
number={2},
year={2008},
pages={1087--1097},
}
\bib{K99}{article}{
author={Kanev, Vassil},
title={Chordal varieties of {V}eronese varieties and catalecticant matrices},
journal={J. Math. Sci.},
volume={94},
date={1999},
number={1},
pages={1114--1125},
}
\bib{Klein}{article}{
author={Klein, Felix},
title={\"Uber die elliptischen Normalcurven der $N$ten Ordnung und zugeh\"orige Modulfunctionen der $N$ten Stufe},
journal={Abhandlungen der mathematisch-physischen Classe der K\"oniglich S\"achsischen Gesellschaft der Wissenschaften},
volume={13},
date={1885},
number={4},
pages={337--399},
}
\bib{Kollar}{book}{
author={Koll\'ar, J\'anos},
title={Lectures on Resolution of Singularities},
series={Annals of Mathematics Studies},
volume={166},
publisher={Princeton University Press},
date={2007},
}
\bib{LMMSSZ18}{article}{
author={Lin, Aaron},
author={Makhul, Mehdi},
author={Nassajian Mojarrad, Hossein},
author={Schicho, Josef},
author={Swanepoel, Konrad},
author={de Zeeuw, Frank},
title={On sets defining few ordinary circles},
journal={Discrete Comput. Geom.},
volume={59},
date={2018},
number={1},
pages={59--87},
}
\bib{LS18}{article}{
author={Lin, Aaron},
author={Swanepoel, Konrad},
title={Ordinary planes, coplanar quadruples, and space quartics},
journal={J. Lond. Math. Soc. (2)},
volume={100},
date={2019},
pages={937--956},
}
\bib{M15}{thesis}{
author={Monserrat, Joaquim},
title={Generalisation of {S}ylvester's problem},
type={Bachelor's Degree Thesis},
address={Universitat Polit\`ecnica de Catalunya},
date={2015},
}
\bib{M51}{article}{
author={Motzkin, T.},
title={The lines and planes connecting the points of a finite set},
journal={Trans. Amer. Math. Soc.},
volume={70},
date={1951},
number={3},
pages={451--464},
}
\bib{Muntingh}{thesis}{
author={Muntingh, Georg},
title={Topics in polynomial interpolation theory},
type={Ph.D. Dissertation},
address={University of Oslo},
date={2010},
}
\bib{NZ}{unpublished}{
author={{Nassajian Mojarrad}, Hossein},
author={de~Zeeuw, Frank},
title={On the number of ordinary circles},
note={arXiv:1412.8314},
}
\bib{PS10}{article}{
author={Purdy, George B.},
author={Smith, Justin W.},
title={Lines, circles, planes and spheres},
journal={Discrete Comput. Geom.},
volume={44},
date={2010},
number={4},
pages={860--882},
}
\bib{Rez2013}{article}{
author={Reznick, Bruce},
title={On the length of binary forms},
conference={
title={Quadratic and higher degree forms},
},
book={
series={Dev. Math.},
volume={31},
publisher={Springer},
},
date={2013},
pages={207--232},
}
\bib{SR85}{book}{
author={Semple, J. G.},
author={Roth, L.},
title={Introduction to Algebraic Geometry},
publisher={The Clarendon Press},
date={1985},
note={Reprint of the 1949 original},
}
\bib{S09}{book}{
author={Silverman, J.~H.},
title={The Arithmetic of Elliptic Curves},
edition={Second Edition},
publisher={Springer},
date={2009},
}
\bib{S94}{book}{
author={Silverman, J.~H.},
title={Advanced Topics in the Arithmetic of Elliptic Curves},
publisher={Springer},
date={1994},
}
\bib{S51}{article}{
author={Sylvester, J. J.},
title={On a remarkable discovery in the theory of canonical forms and of hyperdeterminants},
journal={Philosophical Magazine},
volume={2},
date={1851},
pages={391--410},
note={Paper 41 in The Collected Mathematical Papers of James Joseph Sylvester, Cambridge University Press, 1904},
}
\bib{W78}{book}{
author={Walker, R. J.},
title={Algebraic Curves},
publisher={Springer},
date={1978},
}
\end{biblist}
\end{bibdiv}
\newpage
\begin{dajauthors}
\begin{authorinfo}[AL]
Aaron Lin\\
Department of Mathematics\\
London School of Economics and Political Science\\
United Kingdom\\
aaronlinhk\imageat{}gmail\imagedot{}com
\end{authorinfo}
\begin{authorinfo}[KS]
Konrad Swanepoel\\
Department of Mathematics\\
London School of Economics and Political Science\\
United Kingdom\\
k\imagedot{}swanepoel\imageat{}lse\imagedot{}ac\imagedot{}uk \\
\url{http://personal.lse.ac.uk/swanepoe/}
\end{authorinfo}
\end{dajauthors}
\end{document}
Thermal phase transitions in classical systems connected to a heat bath, as well as quantum phase transitions in closed quantum systems, are well understood \cite{KersonHuang_2014,sachdev}. Very recently, strong interest has emerged in the critical behavior of open quantum systems \cite{breuer}, for example in a driven-dissipative context, fueled by the realization that such systems may gain importance as a workhorse in quantum technologies \cite{Verstraete2009}.
The central object in such systems is not the free energy as in thermal phase transitions, or the Hamiltonian as in quantum phase transitions, but the Liouvillian superoperator \cite{CarmichaelBOOK,liouvillianSpectral}. This object can exhibit a rich phenomenology \cite{Nigro2019}, as both the quantum and the thermal phase transition are, in principle, limiting cases, corresponding to a vanishing dissipation rate or to the coupling with a thermal bath, respectively. It appears nonetheless that thermal behavior may emerge in this setting more often, as long as sufficiently strong dissipation is present. Some recent results on this emergence of classical critical behavior were obtained regarding the phase of free exciton-polariton condensates \cite{Kulczykowski2017,Comarondynamical2018, Sieberer2013,FF2017bistability} and the Dicke model \cite{Klinder2015,Lang2016,Maghrebi2019}, and include also non-Markovian extensions \cite{nonmarkovianthermalization}.
Nevertheless, a number of genuinely non-equilibrium phase transitions are also known to be possible in this context \cite{Marcuzzi_2015,Diehl2016,Strack2015,Young2019,Gelhausen2018}.
One of the promising platforms in the study of dissipative phase transitions are arrays of quadratically (two-photon) driven Kerr cavities, objects originally suggested in the quest for noise-resilient quantum codes \cite{exacsolutionSciRep,Goto16,Goto2016,Nigg17,Puri2017} and for which building blocks have currently been experimentally realized with superconducting circuits \cite{Leghtas15,Wang16}. In this system, it has been explicitly demonstrated numerically that its symmetry-breaking phase transition \cite{VincenzoMF} exhibits a crossover from quantum- to classical critical behavior regarding the steady state properties for increasing dissipation in the form of single-photon losses. In particular, the system was found to belong to the universality class of the quantum or thermal Ising model respectively \cite{PRLspinmodel,twophotonGTA}. The crossover between these two spin models has also been subject to more general recent studies on the dynamics \cite{Foss_Feig_2013a,*Foss_Feig_2013b} and thermalization mechanism \cite{Jaschke_2019}.
Here, we are concerned with extending the conclusions in the large-dissipation limit to the dynamical aspects of the phase transition. In particular, we are interested in the dynamical critical exponent $z$ that relates the correlation time, and in closed quantum systems the Hamiltonian gap, to the control parameter and the correlation length. Near a critical point, the correlation time diverges, leading to the occurrence of critical slowing down. Some current works have addressed the latter effect in a dissipative context for optical bistability (a \emph{first-order} dissipative phase transition) \cite{Casteels16KZ,Vincentini18,Fink2018}, fermionic lattices \cite{electronslowingdown}, optical lattice clocks \cite{Changlatticeclock} and miscellaneous spin lattices \cite{RiccardoXYZ,Cai2013}.
For our study of the quadratically driven Kerr lattice, we will use two separate approaches. First, we address the occurrence of the Kibble-Zurek (KZ) effect. Because of the critical slowing down, domains are formed when the control parameter (photon driving in our case) is tuned through the transition at finite speed. According to a dynamical scaling hypothesis based on the KZ effect \cite{kibble_2007,scalingPolk}, we are able to extract $z$ from square lattices subject to a linear quench and obtain $1.9<z<2.3$.
Secondly, we study the slow relaxation dynamics to the steady state. This latter timescale is known to correspond directly to the inverse Liouvillian gap \cite{liouvillianSpectral}.
We here find scaling with $1.8<z<2.3$. The fact that both values are consistent implies that the Liouvillian gap exhibits the same scaling relations as the Hamiltonian gap in closed quantum systems.
The two-dimensional classical Ising model is a paradigmatic example of a second-order phase transition, both for its historical importance and for its pedagogical value \cite{Ising1D,Ising2D,KersonHuang_2014}. In numerical simulations, it is typically implemented with a somewhat heuristic update rule, which is not uniquely defined by the steady-state properties. Moreover, different update rules applied to the same model result in different values of $z$. The most well-known of these rules is the single-site update rule or Metropolis algorithm, which is consistent with experimental observations of the critical dynamics in an iron film \cite{Dunlavy2005}. The same value of $z$ is found in the Hohenberg-Halperin model A dynamics \cite{HohenbergHalperin} as well as in $\phi^4$-theory \cite{Zhong2018}.
We find here that the value of $z$ extracted from quadratically-driven Kerr resonators is indeed consistent with the value $z_m\approx 2.18$ from the Metropolis algorithm and model A dynamics. This correspondence between our nonequilibrium critical dynamics and model A dynamics is in line with Ref. \cite{FF2017bistability} on critical dynamics of a driven-dissipative Bose-Hubbard model.
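For readers unfamiliar with it, the single-site Metropolis rule referred to above can be sketched as follows. This minimal Python implementation of a 2D Ising sweep with periodic boundaries is purely illustrative; lattice size, temperature and seed are arbitrary choices, not values from this work.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep of single-site Metropolis updates on a periodic 2D Ising lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change of flipping spin (i, j), with coupling J = 1.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        # Accept with probability min(1, exp(-beta * dE)).
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(50):
    metropolis_sweep(spins, beta=1.0, rng=rng)
m = abs(spins.mean())
print(f"|m| after 50 sweeps at beta=1.0: {m:.2f}")
```

At low temperature ($\beta$ well above the critical value) the magnetization typically grows toward $\pm1$, though metastable domain configurations can persist, which is precisely the slow ($z_m\approx2.18$) relaxation discussed here.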
\section{Quadratically driven photonic lattices}
The Hamiltonian of a $d$-dimensional lattice of nonlinear bosonic resonators with nearest-neighbour hopping and coherent two-photon driving, rotating at half the driving frequency, is
\begin{align}\label{eq:Hamiltonian}
\hat{H}=&\sum_{i}\left(-\Delta{\aop^\dagger}_i\hat{a}_i+\frac{U}{2}\hat{a}_i^{\dagger 2}\hat{a}_i^2+\frac{G}{2}{\hat{a}_i}^{\dagger 2}+\frac{G^*}{2}\hat{a}_i^2\right) \nonumber\\
&-\sum_{\langle i j\rangle}\frac{J}{2d}({\aop^\dagger}_i\hat{a}_j+{\aop^\dagger}_j\hat{a}_i),
\end{align}
where $\hat{a}_i~({\aop^\dagger}_i)$ annihilates (creates) a photon at site $i$ and the last summation runs over nearest-neighbour pairs \cite{VincenzoMF}. $G$ is the two-photon driving amplitude, $U$ the Kerr nonlinearity, $J$ the hopping strength and $\Delta$ the detuning between half the driving frequency and the cavity frequency.
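As an illustration, the Hamiltonian \eqref{eq:Hamiltonian} can be built explicitly for a minimal case. The following hedged NumPy sketch constructs it for two coupled sites in a truncated Fock space and checks that it is Hermitian; the Fock cutoff and parameter values are illustrative only.

```python
import numpy as np

def annihilation(n_max):
    # Truncated bosonic annihilation operator in the Fock basis: a|n> = sqrt(n)|n-1>.
    return np.diag(np.sqrt(np.arange(1, n_max)), k=1)

n_max = 6                                  # Fock cutoff (illustrative)
Delta, U, G, J, d = -1.0, 1.0, 0.5, 1.0, 1  # d = 1 for a two-site chain
a = annihilation(n_max)
I = np.eye(n_max)
a1, a2 = np.kron(a, I), np.kron(I, a)      # site operators on the two-site space

def site_terms(ai):
    ad = ai.conj().T
    return (-Delta * ad @ ai
            + 0.5 * U * ad @ ad @ ai @ ai
            + 0.5 * G * ad @ ad
            + 0.5 * np.conj(G) * ai @ ai)

H = site_terms(a1) + site_terms(a2) \
    - (J / (2 * d)) * (a1.conj().T @ a2 + a2.conj().T @ a1)

assert np.allclose(H, H.conj().T)          # the Hamiltonian must be Hermitian
print("H is Hermitian, shape", H.shape)
```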
Furthermore, dissipation in the form of single-photon losses can, under the Born-Markov approximation, be described with a jump operator $\sqrt{\gamma}\hat{a}$, leading to the Lindblad master equation
\begin{equation}
\label{eq:Liouvillian}
\pdv{\hat{\rho}}{t}=\mathcal{L}\hat{\rho}=-i\comm{\hat{H}}{\hat{\rho}}+\gamma\sum_j\left(\hat{a}_j\hat{\rho}{\aop^\dagger}_j-\frac{1}{2}\acomm{{\aop^\dagger}_j\hat{a}_j}{\hat{\rho}}\right).
\end{equation}
Generally, two-photon losses can also be present in this system, but as suggested by analytic \cite{PRLspinmodel} and numeric \cite{twophotonGTA} arguments, these have little qualitative influence on the critical behavior \footnote{at least regarding the steady-state properties} while being computationally constraining, so we do not include this process explicitly in this work.
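To make the Liouvillian concrete as a linear operator, one can vectorize Eq. \eqref{eq:Liouvillian}. The hedged single-site NumPy sketch below builds $\mathcal{L}$ in the column-stacking convention and checks that the trace of $\mathcal{L}\hat{\rho}$ vanishes for any $\hat{\rho}$ (trace preservation); cutoff and parameters are illustrative.

```python
import numpy as np

# Single lossy site with jump operator sqrt(gamma)*a, as in Eq. (2).
n_max, gamma = 5, 1.0
Delta, U = -1.0, 1.0                       # illustrative; drive G set to 0
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
ad = a.conj().T
H = -Delta * ad @ a + 0.5 * U * ad @ ad @ a @ a
I = np.eye(n_max)
num = ad @ a

# Column-stacking convention: vec(A X B) = (B^T kron A) vec(X).
L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
     + gamma * (np.kron(a.conj(), a)
                - 0.5 * np.kron(I, num) - 0.5 * np.kron(num.T, I)))

# Trace preservation: Tr(L rho) = 0 for any rho.
rho = np.random.default_rng(1).random((n_max, n_max))
rho = 0.5 * (rho + rho.T)
Lrho = (L @ rho.flatten(order="F")).reshape((n_max, n_max), order="F")
print("trace of L*rho:", abs(np.trace(Lrho)))
assert abs(np.trace(Lrho)) < 1e-12
```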
A first study on system \eqref{eq:Liouvillian} was performed on the mean-field level in \cite{VincenzoMF} and predicted the occurrence of spontaneous breaking of the $\mathbb{Z}_2$-symmetry when increasing $G$. Recently, this prediction has been confirmed in a similar parameter regime with the additional finding that the transition belongs to the universality class of the thermal Ising model \cite{twophotonGTA}. In the weak-loss limit, by contrast, the transition belongs to the quantum-Ising universality class and the system can be explicitly mapped on an $XY$-spin model \cite{PRLspinmodel}. For negative $J$, antiferromagnetic behavior emerges \cite{antiferromagnet}. A brief look at the single-site problem \cite{exacsolutionSciRep} provides a simple picture to understand these behaviors: in each individual site, there are two metastable coherent-state solutions with displacement $\pm\alpha_0$ (with a purely imaginary value in the case considered here, in the absence of two-photon losses), which can readily be interpreted as $\ket{\uparrow}$ and $\ket{\downarrow}$ spin states. Alignment of these coherent states in neighbouring sites then corresponds to ferromagnetism, a random distribution of $+\alpha_0$ and $-\alpha_0$ corresponds to classical paramagnetic behavior, and maximal anti-alignment corresponds to antiferromagnetism. In the quantum paramagnet, the sites are highly entangled into superpositions of the coherent states.
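The degree to which the two "spin" states $\pm\alpha_0$ behave as distinguishable classical states is controlled by their overlap, $|\langle\alpha_0|{-\alpha_0}\rangle| = e^{-2|\alpha_0|^2}$. The short Python sketch below checks this textbook identity numerically in a truncated Fock space; the displacement value is arbitrary.

```python
import numpy as np
from math import factorial, exp

def coherent(alpha, n_max):
    """Truncated coherent-state amplitudes c_n = e^{-|a|^2/2} a^n / sqrt(n!)."""
    amps = [alpha**k / np.sqrt(float(factorial(k))) for k in range(n_max)]
    return np.exp(-abs(alpha)**2 / 2) * np.array(amps, dtype=complex)

alpha0 = 1.2j                              # illustrative purely imaginary displacement
up, down = coherent(alpha0, 40), coherent(-alpha0, 40)
overlap = abs(np.vdot(up, down))           # |<alpha0|-alpha0>|
print(overlap, exp(-2 * abs(alpha0)**2))
assert abs(overlap - exp(-2 * abs(alpha0)**2)) < 1e-10
```

For displacements of order one or larger the overlap is already small, which is why the coherent states can play the role of classical spin states.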
The former studies were all restricted to steady-state properties of the transition. Here, we extend the results of Ref. \cite{twophotonGTA} obtained with the Gaussian Trajectory Approach (GTA) to the dynamical properties of the phase transition.
According to the quantum trajectory framework \cite{breuer,CarmichaelBOOK}, an open quantum system can be described as a stochastic average over quantum trajectories $\{\ket{\psi}_s\}$, corresponding to single-shot experimental realizations. Crucially, even though the full evolution \eqref{eq:Liouvillian} does not break the symmetry, individual trajectories $\{\ket{\psi}_s\}$ can. The GTA \cite{gaussianmethod,photoncondensate} adds a Gaussian ansatz to this formalism. This means that every trajectory $\ket{\psi}_s$ of an $N$-mode system is characterized only by the coherent displacements $\{\alpha_{s,i}\}$ and the anomalous $\{u_{s,ij}\}$ and normal $\{v_{s,ij}\}$ quantum correlations, where $1\leq i,j\leq N$. Explicitly, these coefficients are defined as
\begin{align}
\alpha_{s,i}&=\bra{\psi_s}\hat{a}_i\ket{\psi_s}\nonumber\\
u_{s,ij}&=\bra{\psi_s}\hat{a}_i\hat{a}_j\ket{\psi_s}-\alpha_{s,i}\alpha_{s,j}\nonumber\\
v_{s,ij}&=\bra{\psi_s}{\aop^\dagger}_i\hat{a}_j\ket{\psi_s}-\alpha_{s,i}^*\alpha_{s,j}.
\end{align}
The Gaussian ansatz thus reduces the complexity of each trajectory to quadratic as function of system size. In ref. \cite{twophotonGTA}, the corresponding GTA equations were derived explicitly for the quadratically driven photonic lattice.
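As a minimal consistency check of these definitions (illustrative, not taken from the paper), for a single-mode coherent state the Gaussian parameters reduce to $u_s=v_s=0$ while $\alpha_s$ equals the displacement. This can be verified numerically in a truncated Fock space:

```python
import numpy as np
from math import factorial

n_max = 40
alpha = 0.8 + 0.3j                          # illustrative displacement
psi = np.exp(-abs(alpha)**2 / 2) * np.array(
    [alpha**k / np.sqrt(float(factorial(k))) for k in range(n_max)],
    dtype=complex)
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # truncated annihilation operator

alpha_s = np.vdot(psi, a @ psi)                           # <a>
u_s = np.vdot(psi, a @ a @ psi) - alpha_s**2              # <aa> - <a>^2
v_s = np.vdot(psi, a.conj().T @ a @ psi) - abs(alpha_s)**2  # <n> - |<a>|^2

# For a coherent state the Gaussian fluctuation parameters vanish.
print(alpha_s, abs(u_s), abs(v_s))
assert abs(alpha_s - alpha) < 1e-10
assert abs(u_s) < 1e-10 and abs(v_s) < 1e-10
```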
By evolving these equations to the steady state, the emergence of an ordered phase was first witnessed by a macroscopic occupation of the $k=0$ mode. Furthermore, the quantity $\ol{\alpha} = \Im \frac{1}{N}\sum_i\alpha_i$ becomes a suitable real-valued order parameter akin to a magnetization, of which the distribution (sampled by values $\ol{\alpha}_s$) changes from monomodal in the paramagnetic (disordered) phase to bimodal in the ferromagnetic (ordered) phase. Using finite-size scaling of the Binder cumulant, which quantifies this behavior \cite{Binder1981}, the critical exponent $\nu=1$ was extracted, indicating that the transition belongs to the universality class of the classical Ising model, as well as the critical value $G_c\approx0.86$ for the parameters $U=\gamma=J=1,\Delta=-1$.
For the dynamical numerical studies in this work, we are able to evolve these GTA equations with the same timestep as in the static case \cite{twophotonGTA} ($h=10^{-4}$ for the Euler-Maruyama method, which here coincides with Milstein's method \cite{milstein_tretyakov_2010}), because our interest is focused on timescales that are slower than the ones corresponding to the individual Hamiltonian or dissipation terms.
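To illustrate the integration scheme (a generic sketch, not the actual GTA equations), here is an Euler-Maruyama integrator applied to an Ornstein-Uhlenbeck toy process. For additive, state-independent noise such as this, Euler-Maruyama indeed coincides with Milstein's scheme, since the Milstein correction term vanishes; all parameters are illustrative.

```python
import numpy as np

def euler_maruyama(x0, drift, sigma, h, n_steps, n_traj, rng):
    """Integrate dx = drift(x) dt + sigma dW for an ensemble of trajectories.
    For constant sigma this coincides with Milstein's scheme."""
    x = np.full(n_traj, x0, dtype=float)
    for _ in range(n_steps):
        x += drift(x) * h + sigma * np.sqrt(h) * rng.standard_normal(n_traj)
    return x

rng = np.random.default_rng(2)
kappa, sigma = 1.0, 0.3                    # illustrative toy parameters
x = euler_maruyama(1.0, lambda y: -kappa * y, sigma,
                   h=1e-3, n_steps=2000, n_traj=10_000, rng=rng)
# The ensemble mean should decay as e^{-kappa t}; here t = 2.
print(x.mean(), np.exp(-kappa * 2.0))
assert abs(x.mean() - np.exp(-kappa * 2.0)) < 0.02
```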
\section{Kibble-Zurek scaling at a linear quench}
When a parameter in a thermodynamic system is slowly varied, the adiabatic theorem ensures that the system remains at all times in an equilibrium state at constant entropy. The minimal ramp time for equilibrium to be preserved is set by the relaxation time. When a parameter is quenched through a second-order phase transition, the relaxation time diverges, adiabaticity always breaks down, and domains with different values of the symmetry-breaking order parameter are formed.
This mechanism is known as the Kibble-Zurek (KZ) mechanism \cite{kibble_2007}. Originally introduced in a cosmological context \cite{Kibble1980}, it has been developed further mainly in condensed matter systems, with first applications to liquid helium \cite{Zurek1985} and rotor models, where the defects are vortices \cite{kibble_2007}. More recently, the KZ mechanism has been applied to more generic systems where the defects take the form of domain walls \cite{Chandran2012,Sabbatini_2012}. Extensions to quantum phase transitions have also been developed \cite{Dziarmaga2010,rmpPolkovnikov,scalingPolk,Jaschke_2017,Silvi2016}.
More specifically, the KZ mechanism works as follows (Fig. \ref{fig:KZ}). We envision a continuous quench where, starting from the steady-state solution at some value $G_0$ in the paramagnetic regime, $G$ is increased linearly up to $G_c$ in a total time $T$:
\begin{equation}
G(t)=G_0+vt\qquad 0\leq t \leq T,
\end{equation}
where $v=\frac{G_c-G_0}{T}$.
As $G_c$ is approached, the correlation time diverges as $\tau\sim (G-G_c)^{-z\nu}$, while the correlation length scales as $\xi\sim (G-G_c)^{-\nu}$. From some time $T-\hat{t}$ during the quench onward, $\tau(G)>\hat{t}$: the dynamics freezes, as the time remaining before the critical point is reached drops below the correlation time. The correlation length is then unable to increase beyond $\xi_c\sim (G(T-\hat{t})-G_c)^{-\nu}$, which sets the final domain size. Crucially, the lower $v$, the larger $G(T-\hat{t})$ and thus the larger the domains. The former arguments assume universality in an infinite lattice. An example of such domain formation after quenching with different $v$ from a given initial state is given in Fig. \ref{fig:islands}.
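The freeze-out time $\hat{t}$ follows from equating the remaining ramp time with the diverging correlation time, $\tau(G(T-\hat{t}))=\hat{t}$; with $\tau=\tau_0|G-G_c|^{-z\nu}$ this gives $\hat{t}\propto v^{-z\nu/(1+z\nu)}$. The following Python sketch (with an illustrative $\tau_0$ and the exponent values used later in this work) solves the freeze-out condition numerically by bisection and recovers the predicted power law.

```python
import numpy as np

z_nu = 2.18        # z*nu for the 2D Ising / Metropolis class (nu = 1)
tau0 = 1.0         # microscopic time scale (illustrative)

def t_hat(v):
    """Solve the freeze-out condition tau0*(v*t)**(-z_nu) = t by bisection."""
    f = lambda t: tau0 * (v * t) ** (-z_nu) - t   # strictly decreasing in t
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

speeds = np.logspace(-4, -1, 20)
ts = np.array([t_hat(v) for v in speeds])
# Kibble-Zurek prediction: t_hat ~ v**(-z_nu / (1 + z_nu)).
slope = np.polyfit(np.log(speeds), np.log(ts), 1)[0]
print(slope, -z_nu / (1 + z_nu))
assert abs(slope + z_nu / (1 + z_nu)) < 1e-6
```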
Focusing on the steady-state properties of a more realistic finite lattice system, there are three distinct length scales present: the lattice spacing $a=1$, the correlation length $\xi$ and the system size $L$. Close to criticality, one has $\xi\gtrsim a$, leading to the scaling hypothesis that all quantities can be expressed as functions of the dimensionless ratio $\xi/L$ \cite{cardyfinitesize}. Such an approach was also used to study system \eqref{eq:Liouvillian} in previous work \cite{twophotonGTA}.
For dynamic scaling, we now follow the scaling approach introduced in ref. \cite{scalingPolk}.
Three different timescales are present: $\tau_a$, related to the local individual Hamiltonian processes and the loss rate; $\tau$, the correlation time; and $\tau_{KZ}$, the time scale for the adiabatic evolution of the finite system. In general, universal behavior for $\xi$ and $\tau$, and hence standard Kibble-Zurek scaling, can be expected for $\tau_a<\hat{t}<\tau_{KZ}$ (Fig. \ref{fig:KZ}).
Each of these timescales is associated with a velocity scale \cite{scalingPolk}.
On the order of the lattice spacing \footnote{More precisely, $v_a$ relates to the energy scales of individual Hamiltonian processes and dissipation rate, which we all take of order one.},
$v_a\sim J^2a^{-(z+1/\nu)}=J^2$ marks the transition speed between possibly microscopic dynamics and long-range universal behavior. The second scale is the quench speed $v$. Third, there is the Kibble-Zurek speed $v_{KZ}(L)\sim J^2 L^{-(z+1/\nu)}$, marking the speed at which the domain size becomes comparable to the lattice size (below $v_{KZ}(L)$ the dynamics remains adiabatic).
Now, two separate scaling functions appear for the second moment of the order parameter after the quench $(t=T)$ in different regimes \cite{scalingPolk}. If the quench is sufficiently slow for the microscopic processes to be unimportant $(v\lesssim v_a)$, one expects universal scaling as a function of $v/v_{KZ}$:
\begin{equation}
\ev{\ol{\alpha}^2}=L^{-2\beta/\nu}f_1(vL^{z+1/\nu}),
\end{equation}
where $\ev{\cdot}$ denotes a statistical expectation value over trajectories. On the other hand, if the quench is sufficiently fast for the finite size to be unimportant ($v\gtrsim v_{KZ}(L)$), universal scaling as a function of $v/v_a$ is expected, leading to
\begin{equation}
\ev{\ol{\alpha}^2}=L^{-d}f_2(v^{-1}).
\end{equation}
In the overlapping region $v_{KZ}\lesssim v \lesssim v_a$, both scaling functions overlap with a power-law dependence $\ev{\ol{\alpha}^2}\propto v^{-x}$ with exponent
\begin{equation}\label{eq:powerlaw}
x=\frac{d-2\beta/\nu}{z+1/\nu}.
\end{equation}
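Plugging in the static 2D Ising exponents and the Metropolis value of $z$ used below gives a concrete number for this slope, and also sets the velocity scale $v_{KZ}(L)$ for the lattice sizes simulated here. This is a back-of-the-envelope sketch; the order-one prefactors are ignored.

```python
d, beta, nu, z = 2, 0.125, 1.0, 2.18   # 2D Ising statics + Metropolis dynamics
x = (d - 2 * beta / nu) / (z + 1 / nu)
print(f"intermediate-regime exponent x = {x:.3f}")

# Kibble-Zurek velocity scale v_KZ(L) ~ J^2 * L^-(z+1/nu), prefactor of order one.
for L in (6, 8, 10, 12, 16):
    print(f"L = {L:2d}:  v_KZ/J^2 ~ {L ** -(z + 1 / nu):.1e}")
assert 0.54 < x < 0.56
```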
The above arguments have been extensively verified for thermal systems, but their generic nature suggests that they should also hold out of equilibrium. In order to verify the KZ mechanism in the two-photon driven dissipative Bose-Hubbard model, we have simulated quenches in the amplitude of the two-photon drive from $G_0=0.7J$ to $G_c=0.86J$.
In Fig. \ref{fig:scaling} the extracted slow and fast scaling functions $f_1$ and $f_2$ are shown for different square lattice sizes, and collapse is observed for both when using exponents $\beta=0.125,~\nu=1,~z\approx2.18$. The former two numbers are static critical exponents corresponding to the 2D thermal Ising model \cite{twophotonGTA}. The value of $z$, the dynamical critical exponent, corresponds to Metropolis dynamics in this model \footnote{The same classical model can have different simulation algorithms (update rules) with other values of $z$ \cite{scalingPolk}}. In the intermediate regime for $v$, the predicted power law \eqref{eq:powerlaw} is further observed, consistent with the aforementioned values of the critical exponents.
We thus have obtained strong evidence for the fact that not only the static, but also the dynamical properties of the two-photon driven Bose-Hubbard model are in the Ising universality class, more precisely of Metropolis dynamics.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{KibbleZurek_improved.eps}
\caption{The Kibble-Zurek effect: when $G$ is linearly increased, the time until $G_c$ is reached (red, dashed line) becomes less than the diverging correlation time $\tau$ (blue line) at $\hat{t}$. From $\hat{t}$ onwards, the dynamics freezes (blue region). In a finite system, the true value of $\tau$ (full line) only follows the universal behavior for intermediate velocities $v$ for which the crossing occurs at $\tau_a<\tau<\tau_{KZ}$.}
\label{fig:KZ}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{islands_improved.eps}
\caption{During the quench protocol, $G$ is increased linearly with time from $G_0=0.7J$ to $G_c=0.86J$, the critical point \cite{twophotonGTA}. The upper left panel shows a Monte Carlo sampling of the steady state at $G=G_0$. The other panels show a sample of the final state that was evolved with different quench speeds $v$. For $v\gtrsim 1$, almost no evolution has been able to take place. For decreasing $v$, the correlations are able to spread further, until for $v\lesssim v_{KZ}(L)$ the whole 10x10 lattice is correlated. (color online)}
\label{fig:islands}
\end{figure}
\begin{figure*}
\includegraphics[width=0.9\linewidth]{fscombined_marker12.eps}
\caption{Dynamic scaling functions for different system sizes showing collapse in the adiabatic ($f_1$, (a)) and diabatic limit ($f_2$, (b)), for linear quenches from $G_0=0.7J$ to $G_c=0.86J$. In the intermediate regime, there is a universal power-law scaling (black line) with exponent $x=\frac{d-2\beta/\nu}{z+1/\nu}$, where the dimension $d=2$; $\beta=0.125,~\nu=1$ are critical exponents of the universality class of the 2D classical Ising model; and $z$ is the dynamical critical exponent. Results are consistent with $z=z_m\approx2.18$, the dynamical critical exponent characterizing Metropolis dynamics in the 2D Ising model; this value was also used for the finite-size scaling itself. Parameters: $U=\gamma=J=1,\Delta=-1$. For the fastest quenches ($v/J^2\geq10^{-2}$), $10^3$ trajectories were used, and $10^2$ trajectories for the slower quenches. The yellow vertical lines denote approximate values for $v_{KZ}$ and $v_a$, marking the edges of the regime where power-law scaling is valid. We find that the prefactor in the definition of $v_{KZ}$ is of order one, whereas $v_a\approx 10 J^2$.}
\label{fig:scaling}
\end{figure*}
\section{The Liouvillian gap scales with the same exponent}
In Hamiltonian systems, the dynamical exponent $z$ is further known to govern the scaling of the gap $\Delta_H$ \cite{sachdev}:
\begin{equation}\label{eq:hamgapscaling}
\Delta_H\sim\xi^{-z} \quad\text{and}\quad \Delta_H\sim\abs{g-g_c}^{z\nu},
\end{equation}
where $g$ is the control parameter. Because system \eqref{eq:Liouvillian} studied here belongs to a classical universality class due to its driven-dissipative nature, the fate of relations \eqref{eq:hamgapscaling} is not \emph{a priori} clear.
In an open system, the dynamics are governed by the Liouvillian superoperator $\mathcal{L}$ \cite{liouvillianSpectral}. Likewise, the timescale of slowest relaxation to the steady state is determined by the Liouvillian gap $\lambda$, defined as (minus) the real part of the first nonzero eigenvalue of $\mathcal{L}$. One can thus ask if $\Delta_H$ can be replaced by $\lambda$ in relations \eqref{eq:hamgapscaling}.
If this replacement in the first relation of \eqref{eq:hamgapscaling} is valid, then one must have in a finite system \cite{cardyfinitesize}
\begin{equation}\label{eq:gapscaling}
\lambda=\xi^{-z}\tilde{f}(\xi/L)=L^{-z}f(L^{1/\nu}(G-G_c)),
\end{equation}
where $\tilde{f},f$ are unknown scaling functions.
In order to obtain values of $\lambda$ for different values of $G$ and $L$ numerically, we perform the following procedure in each case. After starting from a fully polarized state ($\alpha_j=i,~\forall j$), the system is left to evolve freely. An exponential $\sim e^{-\lambda t}$ is then fitted to the slow relaxation process towards the steady state, as illustrated for a $10\times10$ lattice in the left panel of Fig. \ref{fig:gaps}.
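The fitting step can be sketched as follows on synthetic data (illustrative gap value, fit window and noise level; the actual analysis uses the GTA trajectories): on the slow tail, fitting a straight line to the logarithm of the decaying order parameter yields $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true = 0.05                       # illustrative Liouvillian gap (units of J)
t = np.linspace(20.0, 120.0, 60)      # fit window after fast transients
signal = np.exp(-lam_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Fit log(signal) = log(A) - lambda * t on the slow tail.
slope, intercept = np.polyfit(t, np.log(signal), 1)
lam_fit = -slope
print(lam_fit)
assert abs(lam_fit - lam_true) / lam_true < 0.05
```

A log-linear least-squares fit is chosen here for simplicity; a nonlinear exponential fit gives equivalent results for weak noise.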
Validity of Eq. \eqref{eq:gapscaling} implies a collapse of curves for different system sizes when plotting $L^z\lambda$ as function of $L^{1/\nu}(G-G_c)$. In the right panel of Fig. \ref{fig:gaps}, we see that this is indeed the case for $\nu=1,z=z_m$, the same values as in the previous section.
Furthermore, relation \eqref{eq:gapscaling} also directly relates $\lambda$ to the control parameter $G$. In the limit of large system sizes, we indeed observe (black line in Fig.~\ref{fig:gaps})
\begin{equation}\label{eq:gapscaling2}
\lambda\sim\abs{G-G_c}^{z\nu},
\end{equation}
where the agreement is especially good in the regime $\xi<L$.
This means that both relations \eqref{eq:hamgapscaling} are valid for the considered Liouvillian dynamics.
\begin{figure*}
\includegraphics[width=0.9\linewidth]{gaptogether_improved8x8_marker12.eps}
\caption{(a): Error bars show the decay of $\ol{\alpha}$ to its steady-state value (0) in a $10\times10$ lattice after initialization in a polarized state $\alpha_j=i,~\forall j$, averaged over trajectories. From bottom to top, $G$ increases from $0.74J$ to $0.86J$ in steps of $0.02J$. Full lines in corresponding colors: fits $\propto e^{-\lambda t}$, with fitting from $tJ=20$ ($6\times6$ and $10\times10$), $tJ=30$ ($12\times12$) or $tJ=40$ ($16\times16$) onwards ($tJ=10$--$50$ for the $8\times8$ case). (b): extracted values of $\lambda$, rescaled assuming $z=z_m$, as a function of the rescaled $G$. Especially in the $\xi < L$ regime (where the function argument is less than $-1$), the collapse is very clear and in agreement with the power-law behavior \eqref{eq:gapscaling2}. Closer to the critical point, the data collapse becomes worse. It should be noted that in this regime the time scales become very long and the extracted decay rate may be less accurate. Moreover, the slow dynamics is more sensitive to rare events, which may not be sufficiently sampled. $10^3$ trajectories were used in the simulations up to $12\times12$ lattices and $10^2$ for the $16\times16$ case.}
\label{fig:gaps}
\end{figure*}
\section{Conclusions}
In a dissipative Bose-Hubbard model with two-photon driving, the classical Ising model can be simulated, where the role of magnetization is taken by polarization of the optical phase. Not only does the transition in this system belong to the universality class of the Ising model at equilibrium, but also the dynamical properties match as witnessed by the value of critical exponent $z$.
We have also shown that the Liouvillian gap $\lambda$ scales with the same exponent $z$, with relations very reminiscent of the scaling of Hamiltonian gaps at quantum phase transitions in closed systems. To what extent this scaling behavior is generic for open quantum systems is an interesting open problem.
It is further interesting to note that there are alternatives to Metropolis in classical Ising simulations, with different values of $z$, which converge to the steady state faster (Swendsen-Wang, Wolff) or slower (East). One may wonder whether suitable reservoir engineering would allow simulation of these as well. This could be useful to speed up or slow down relaxation to the same steady state, numerically or experimentally. A possibility would be exploiting the difference between one common bath and independent baths, which has been shown to affect at least the decoherence time of an open quantum system \cite{Jaschke_2019}.
It would also be interesting to see how the dynamical criticality behaves in the quantum regime, where the Kibble-Zurek effect can exhibit richer behavior \cite{Silvi2016}, as has also been suggested for experimental study with trapped ions \cite{Puebla2019}.
Recently, anti-Kibble-Zurek behavior was also found under certain circumstances in open quantum systems \cite{Dutta2016,Puebla2019anti}.
Even within a classical regime, optical simulation of the Ising model has been proposed to solve NP-hard tasks \cite{Barahona_1982}, including through degenerate parametric oscillators \cite{Marandi2014,Inagaki603,McMahon614}, a system with a symmetry breaking analogous to ours. To this purpose, our results also point out that, especially in the absence of an all-to-all connected setup, care must be taken to drive through the transition with $v<v_{KZ}$ in order to find the true steady state (Ising ground state) and not a metastable state with Kibble-Zurek domains.
Recently also implementations of the Ising model in a Kerr resonator have been proposed \cite{Kyriienko2019} where optical bistability in the single-photon driven case is used to map the two spin states. Unlike this situation, the $\mathbb{Z}_2$ symmetry is exact in Kerr resonators with two-photon driving studied here, which might benefit the accuracy of the results of such optimization algorithms.
\acknowledgments
We acknowledge stimulating discussions and comments on the manuscript from R. Rota and F. Minganti. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI. Financial support from the project FWO-39532 is acknowledged.
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction}
Let $\Sigma_k$ denote the alphabet $\{0,1,\ldots, k-1\}$. Let $u$ and $v$ be two words of equal length. The \emph{Hamming distance} $\ham(u,v)$ between $u$ and $v$ is defined to be the number of positions where $u$ and $v$ differ~\cite{Hamming:1950}. For example, $\ham({\tt four}, {\tt five}) = 3.$
A word $w$ is said to be a \emph{power} if it can be written as $w=z^i$ for some word $z$ where $i\geq 2$. Otherwise $w$ is said to be \emph{primitive}. For example, ${\tt hotshots} = ({\tt hots})^2$ is a power, but ${\tt hots}$ is primitive. The words $u$ and $v$ are said to be \emph{conjugates} (or $v$ is a \emph{conjugate} of $u$) if there exist words $x$, $y$ such that $u = xy$ and $v = yx$. If $\ham(u,v)=\ham(xy,yx)=0$, then $x$ and $y$ are said to \emph{commute}. If $x$ and $y$ are both non-empty, then $v$ is said to be a \emph{non-trivial} conjugate of $u$. Let $\sigma$ be the left-shift map, so that $\sigma^i(u)=yx$, where $u=xy$, $|x|=i$, and $i$ is an integer with $0\leq i \leq |u|$. For example, any two of the words ${\tt eat}$, ${\tt tea}$, and ${\tt ate}$ are conjugates because ${\tt eat} =\sigma({\tt tea}) = \sigma^2({\tt ate})$.
Lyndon and Sch\"{u}tzenberger~\cite{Lyndon&Schutzenberger:1962} characterized all words $x$, $y$ that commute. Alternatively, they characterized all words $u$ that have a non-trivial conjugate $v$ such that $\ham(u,v)=0$.
\begin{theorem}[Lyndon-Sch\"{u}tzenberger~\cite{Lyndon&Schutzenberger:1962}]\label{theorem:commute}
Let $u$ be a non-empty word. Then $u=xy$ has a non-trivial conjugate $v=yx$ such that $\ham(xy,yx)=0$ if and only if there exists a word $z$, and integers $i,j\geq 1$ such that $x=z^i$, $y=z^j$, and $u=v=z^{i+j}$.
\end{theorem}
Later, Fine and Wilf~\cite{Fine&Wilf:1965} showed that one can achieve the forward implication of Theorem~\ref{theorem:commute} with a weaker hypothesis. Namely, that $xy$ and $yx$ need not be equal, but only agree on the first $|x|+|y|-\gcd(|x|,|y|)$ terms.
\begin{theorem}[Fine-Wilf~\cite{Fine&Wilf:1965}]\label{theorem:finewilf}
Let $x$ and $y$ be non-empty words. If $xy$ and $yx$ agree on a prefix of length at least $|x|+|y|-\gcd(|x|,|y|)$, then there exists a word $z$, and integers $i,j\geq 1$ such that $x=z^i$, $y=z^j$, and $xy=yx=z^{i+j}$.
\end{theorem}
Fine and Wilf also showed that the bound of $|x|+|y|-\gcd(|x|,|y|)$ is optimal, in the sense that if $xy$ and $yx$ agree only on the first $|x|+|y|-\gcd(|x|,|y|)-1$ terms, then $xy$ need not equal $yx$. They demonstrated this by constructing words $x$, $y$ of any length such that $xy$ and $yx$ agree on the first $|x|+|y|-\gcd(|x|,|y|)-1$ terms and differ at position $|x|+|y|-\gcd(|x|,|y|)$. We call pairs of words $x$, $y$ of this form \emph{Fine-Wilf pairs}.
These words have been shown to have a close relationship with the well-known \emph{finite Sturmian words}~\cite{deLuca&Mignosi:1994}.
\begin{example} We give some examples of words that display the optimality of the Fine-Wilf result.
\noindent
Let $x=000000010000$ and $y=00000001$. Then $|x|=12$, $|y|=8$, and $\gcd(|x|,|y|)=4$.
\begin{align}
xy &= 000000010000000{\color{red}0}0001\nonumber \\
yx &= 000000010000000{\color{red}1}0000\nonumber
\end{align}
Let $x=010100101010$ and $y=0101001$. Then $|x|=12$, $|y|=7$, and $\gcd(|x|,|y|)=1$.
\begin{align}
xy &= 01010010101001010{\color{red}0}1 \nonumber \\
yx &= 01010010101001010{\color{red}1}0\nonumber
\end{align}
\end{example}
One remarkable property of these words is that they ``almost" commute, in the sense that $xy$ and $yx$ agree for as long a prefix as possible and differ in as few positions as possible. See Lemma~\ref{lemma:finewilf2} for a proof of this property.
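Both example pairs above can be checked mechanically. The following Python sketch (ours, not part of the paper; the helper names are illustrative) verifies the agreement length and the Hamming distance claimed for a Fine-Wilf pair:

```python
from math import gcd

def ham(u, v):
    # Hamming distance of two equal-length words
    return sum(a != b for a, b in zip(u, v))

def lcp(u, v):
    # length of the longest common prefix of u and v
    n = 0
    for a, b in zip(u, v):
        if a != b:
            break
        n += 1
    return n

def check_fine_wilf_pair(x, y):
    # xy and yx agree on exactly |x|+|y|-gcd(|x|,|y|)-1 symbols,
    # and (as claimed above) ham(xy, yx) = 2
    bound = len(x) + len(y) - gcd(len(x), len(y))
    return lcp(x + y, y + x) == bound - 1 and ham(x + y, y + x) == 2
```

Running `check_fine_wilf_pair` on the two pairs in the example above returns `True` for both.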
One might na\"{i}vely think that the smallest possible Hamming distance between $xy$ and $yx$ after $0$ is $1$, but this is incorrect. Shallit~\cite{Shallit:2009} showed that $\ham(xy,yx)\neq 1$ for any words $x$ and $y$; see Lemma~\ref{lemma:hammingNon1}. Thus, after $0$, the smallest possible Hamming distance between $xy$ and $yx$ is $2$. If $\ham(xy,yx)=2$, then we say $x$ and $y$ \emph{almost commute}.
\begin{lemma}[Shallit~\cite{Shallit:2009}]
\label{lemma:hammingNon1}
Let $x$ and $y$ be words. Then $\ham(xy,yx)\neq 1$.
\end{lemma}
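The lemma is easy to confirm exhaustively for short binary words; a brute-force sketch (ours; the function name is illustrative):

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def no_distance_one(max_len):
    # Verify ham(xy, yx) != 1 for every binary word w = xy with
    # 2 <= |w| <= max_len and every non-trivial split point i.
    for n in range(2, max_len + 1):
        for w in map("".join, product("01", repeat=n)):
            for i in range(1, n):
                if ham(w, w[i:] + w[:i]) == 1:
                    return False
    return True
```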
A similar concept, called the \emph{$2$-error border}, was introduced in a paper by Klav\v{z}ar and Shpectorov~\cite{Sanda&Sergey:2012}. A word $w$ is said to have a \emph{$2$-error border} of length $i$ if there exists a length-$i$ prefix $u$ of $w$, and a length-$i$ suffix $u'$ of $w$ such that $w = ux = yu'$ and $\ham(u,u')=2$ for some $x$, $y$. The $2$-error border was originally introduced in an attempt to construct graphs that keep similar properties to $n$-dimensional hypercubes. The $n$-dimensional hypercube is a graph that models Hamming distance between length-$n$ binary words. See~\cite{Wei:2017, Wei&Yang&Zhu:2019, Beal&Crochemore:2021} for more on $2$-error borders.
In this paper, we characterize and count all words $u$ that have a conjugate $v$ such that $\ham(u,v)=2$. As a result, we also characterize and count all pairs of words $x$, $y$ that almost commute.
Let $n$ and $i$ be integers such that $n>i\geq 1$. Let $H(n)$ denote the set of length-$n$ words $u$ over $\Sigma_k$ that have a conjugate $v$ such that $\ham(u,v) =2$. Let $h(n)=|H(n)|$. Let $H(n,i)$ denote the set of length-$n$ words $u$ over $\Sigma_k$ such that $\ham(u,\sigma^i(u))=2$. Let $h(n,i) = |H(n,i)|$.
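For small $n$ and $k$ these quantities can be computed by exhaustive search, which is convenient for sanity-checking the closed forms derived later in the paper. A Python sketch (ours; the function names are illustrative):

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def words(n, k):
    # all length-n words over the alphabet {0, ..., k-1}
    alphabet = [str(a) for a in range(k)]
    return map("".join, product(alphabet, repeat=n))

def h_ni(n, i, k=2):
    # |H(n, i)|: words w with ham(w, sigma^i(w)) = 2
    return sum(ham(w, w[i:] + w[:i]) == 2 for w in words(n, k))

def h_n(n, k=2):
    # |H(n)|: words with at least one conjugate at Hamming distance 2
    return sum(
        any(ham(w, w[i:] + w[:i]) == 2 for i in range(1, n))
        for w in words(n, k)
    )
```

For instance, `h_n(4)` returns $12$: the twelve binary words of length $4$ that have a conjugate at Hamming distance $2$.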
The rest of the paper is structured as follows. In Section~\ref{section:finewilf} we prove that Fine-Wilf pairs almost commute. In Section~\ref{section:countH} we characterize the words in $H(n,i)$ and present a formula to calculate $h(n,i)$. In Section~\ref{section:interlude} we prove some properties of $H(n,i)$ and $H(n)$ that we make use of in later sections. In Section~\ref{section:finalcount} we present a formula to calculate $h(n)$. In Section~\ref{section:exactly} we count the number of length-$n$ words $u$ with \emph{exactly} one conjugate such that $\ham(u,v)=2$. In Section~\ref{section:lyndon} we count the number of Lyndon words in $H(n)$. Finally, in Section~\ref{section:asymptotics} we show that there is no one easily-expressible bound on the growth of $h(n)$.
\section{Fine-Wilf pairs almost commute}\label{section:finewilf}
In this section we prove that Fine-Wilf pairs almost commute. This result appears without proof in~\cite{Shallit:2015}.
\begin{lemma}\label{lemma:finewilf2}
Let $x$ and $y$ be non-empty words. Suppose $xy$ and $yx$ agree on a prefix of length $|x|+|y|-\gcd(|x|,|y|)-1$ but disagree at position $|x|+|y|-\gcd(|x|,|y|)$. Then $\ham(xy,yx)=2$.
\end{lemma}
\begin{proof}
The proof is by induction on $|x|+|y|$. Suppose $xy$ and $yx$ agree on a prefix of length $|x|+|y|-\gcd(|x|,|y|)-1$ but disagree at position $|x|+|y|-\gcd(|x|,|y|)$. Without loss of generality, let $|x|\leq |y|$.
First, we take care of the case when $|x|=|y|$, which also takes care of the base case $|x|+|y|=2$. Since $|x|=|y|$, we have that $\gcd(|x|,|y|)=|x|=|y|$. Therefore, $x$ and $y$ share a prefix of length $|x|+|y|-\gcd(|x|,|y|)-1=|x|-1$ but disagree at position $|x|$. This implies that $\ham(x,y)=1$. Thus $\ham(xy,yx)=2\ham(x,y)=2$.
Suppose $|x|<|y|$. Then $\gcd(|x|,|y|) \leq |x|$. So $|x|+|y|-\gcd(|x|,|y|)-1\geq |y|-1$. Thus $xy$ and $yx$ must share a prefix of length $\geq |y|-1$. However, since $|x|<|y|$, we have that $x$ must then be a proper prefix of $y$. So write $y=xt$ for some non-empty word $t$. Then $\ham(xy,yx)=\ham(xxt,xtx)=\ham(xt,tx)$. Since $xt$, $tx$ are suffixes of $xy$, $yx$ we have that $xt$ and $tx$ agree on the first $|y|-\gcd(|x|,|y|)-1$ terms and disagree at position $|y|-\gcd(|x|,|y|)$. Clearly $\gcd(|x|,|y|)=\gcd(|x|,|xt|) =\gcd(|x|,|x|+|t|)=\gcd(|x|,|t|)$, and $|y|-\gcd(|x|,|y|) = |x|+|t|-\gcd(|x|,|t|)$. Therefore $xt$ and $tx$ share a prefix of length $|x|+|t|-\gcd(|x|,|t|)-1$ and differ at position $|x|+|t|-\gcd(|x|,|t|)$. By induction $\ham(xt,tx)=2$, and thus $\ham(xy,yx)=2$.
\end{proof}
\section{Counting $H(n,i)$}\label{section:countH}
In this section we characterize the words in $H(n,i)$ and use this characterization to provide an explicit formula for $H(n,i)$.
\begin{lemma}\label{lemma:bijection}
Let $n$, $i$ be positive integers such that $n>i$. Let $g=\gcd(n,i)$. Let $w$ be a length-$n$ word. Let $w=x_0x_1\cdots x_{n/g-1}$ where $|x_j|=g$ for all $j$, $0\leq j \leq n/g-1$. Then $w\in H(n,i)$ iff there exist two distinct integers $j_1$, $j_2$, $0\leq j_1 < j_2\leq n/g-1$ such that $\ham(x_{j_1},x_{j_2})=1$ and $x_{j}= x_{(j+i/g)\bmod{n/g}}$ for all $j\neq j_1,j_2$, $0\leq j \leq n/g-1$.
\end{lemma}
\begin{proof} We write $w= x_0x_1\cdots x_{n/g-1}$ where $|x_j|=g$ for all $j$, $0\leq j \leq n/g-1$. Since $g$ divides $i$, we have that $\sigma^i(w) = x_{i/g}\cdots x_{n/g-1}x_0\cdots x_{i/g-1}$.
$\Longrightarrow:$ Suppose $w\in H(n,i)$. Then
\begin{align}
\ham(w,\sigma^i(w)) &= \ham(x_0x_1\cdots x_{n/g-1},x_{i/g}\cdots x_{n/g-1}x_0\cdots x_{i/g-1})\nonumber \\
&= \sum_{j=0}^{n/g-1}\ham(x_j,x_{(j+i/g)\bmod{n/g}})\nonumber \\
&=2.\nonumber
\end{align}
In order for the Hamming distance between $w$ and $\sigma^i(w)$ to be $2$, we must have that either $\ham(x_j,x_{(j+i/g)\bmod {n/g}})=2$ for exactly one $j$, $0 \leq j \leq n/g-1$, or $\ham(x_{j_1}, x_{(j_1+i/g)\bmod{n/g}})=1$ and $\ham(x_{j_2}, x_{(j_2+i/g)\bmod{n/g}})=1$ for two distinct integers $j_1,j_2$, $0\leq j_1<j_2\leq n/g-1$.
Suppose $\ham(x_j,x_{(j+i/g)\bmod {n/g}})=2$ for some $j$, $0 \leq j \leq n/g-1$. Then it follows that $x_p = x_{(p+i/g)\bmod {n/g}}$ for all $p\neq j$, $0\leq p \leq n/g-1$. Since $g=\gcd(n,i)$, we have that $\gcd(n/g,i/g)=1$. The additive order of $i/g$ modulo $n/g$ is $\frac{n/g}{\gcd(n/g,i/g)}=n/g$. Therefore, we have that $x_{(j+i/g)\bmod {n/g}} = x_{(j+2i/g)\bmod {n/g}}=\cdots =x_{(j+(n/g-1)i/g)\bmod {n/g}}=x_j$ and $\ham(x_j,x_{(j+i/g)\bmod {n/g}})=2$, a contradiction.
Suppose $\ham(x_{j_1}, x_{(j_1+i/g)\bmod{n/g}})=1$ and $\ham(x_{j_2}, x_{(j_2+i/g)\bmod{n/g}})=1$ for two distinct integers $j_1,j_2$, $0\leq j_1<j_2\leq n/g-1$. Then it follows that $x_j = x_{(j+i/g)\bmod{n/g}}$ for all $j \neq j_1,j_2$, $0\leq j \leq n/g-1$. Since the additive order of $i/g$ modulo $n/g$ is $n/g$, we have that if we start at $j_1$ and successively add $i/g$ and $\bmod{n/g}$ the result, then we will reach every integer between $0$ and $n/g-1$. Therefore, we will reach $j_2$ before we reach $j_1$ again. Thus, since $x_j=x_{(j+i/g)\bmod{n/g}}$ for all $j\neq j_1,j_2$, $0\leq j \leq n/g-1$, we have that $x_{(j_1+i/g)\bmod{n/g}} = x_{(j_1+2i/g)\bmod{n/g}} = \cdots = x_{j_2}$. But now we have $\ham(x_{j_1}, x_{(j_1+i/g)\bmod{n/g}})=1$ and $x_{(j_1+i/g)\bmod{n/g}} =x_{j_2}$, which implies $\ham(x_{j_1},x_{j_2})=1$.
$\Longleftarrow:$ Suppose there exist two distinct integers $j_1$, $j_2$, $0\leq j_1<j_2\leq n/g-1$ such that $\ham(x_{j_1},x_{j_2}) = 1$ and $x_j=x_{(j+i/g)\bmod{n/g}}$ for all $j\neq j_1,j_2$, $0\leq j \leq n/g-1$. Since the additive order of $i/g$ modulo $n/g$ is $n/g$, we have that if we start at $j_1$ and successively add $i/g$ modulo $n/g$, then we will reach every integer between $0$ and $n/g-1$. But this means that we will reach $j_2$ before we get to $j_1$ again. Thus, we have that $x_{(j_1+i/g)\bmod{n/g}} = x_{(j_1+2i/g)\bmod{n/g}} = \cdots = x_{j_2}$. Similarly, if we start at $j_2$ and successively add $i/g$ modulo $n/g$ we will reach $j_1$ before looping back to $j_2$. So $x_{(j_2+i/g)\bmod{n/g}} = x_{(j_2+2i/g)\bmod{n/g}} = \cdots = x_{j_1}$. Therefore, we have that $w\in H(n,i)$ since
\begin{align}
\ham(w,\sigma^{i}(w)) &= \ham(x_0x_1\cdots x_{n/g-1},x_{i/g}\cdots x_{n/g-1}x_0\cdots x_{i/g-1})\nonumber \\
&=\sum_{j=0}^{n/g-1}\ham(x_j,x_{(j+i/g)\bmod{n/g}})\nonumber \\
&= \ham(x_{j_1}, x_{(j_1+i/g)\bmod{n/g}}) +\ham(x_{j_2}, x_{(j_2+i/g)\bmod{n/g}})\nonumber\\
&= \ham(x_{j_1}, x_{j_2}) +\ham(x_{j_2}, x_{j_1})\nonumber\\
&=2.\nonumber
\end{align}
\end{proof}
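The characterization can be cross-checked against the definition of $H(n,i)$ by brute force over short binary words; a sketch (ours; names are illustrative):

```python
from itertools import product
from math import gcd

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def in_H(w, i):
    # membership by definition: ham(w, sigma^i(w)) = 2
    return ham(w, w[i:] + w[:i]) == 2

def characterization(w, i):
    # membership per the lemma: two length-g blocks at Hamming
    # distance 1, all other blocks equal to their shift by i/g blocks
    n = len(w)
    g = gcd(n, i)
    m = n // g
    x = [w[j * g:(j + 1) * g] for j in range(m)]
    return any(
        ham(x[j1], x[j2]) == 1
        and all(x[j] == x[(j + i // g) % m]
                for j in range(m) if j not in (j1, j2))
        for j1 in range(m) for j2 in range(j1 + 1, m)
    )

def lemma_holds(max_len):
    return all(
        in_H(w, i) == characterization(w, i)
        for n in range(2, max_len + 1)
        for i in range(1, n)
        for w in map("".join, product("01", repeat=n))
    )
```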
\begin{lemma}\label{lemma:formula} Let $n,i$ be positive integers such that $n>i.$ Then
\[h(n,i) = \frac{1}{2}k^{\gcd(n,i)}(k-1)n\bigg(\frac{n}{\gcd(n,i)}-1\bigg).\]
\end{lemma}
\begin{proof}
Let $w$ be a length-$n$ word. Let $g=\gcd(n,i)$. We split up $w$ into length-$g$ blocks. We write $w=x_0x_1\cdots x_{n/g-1}$ where $|x_j|=g$ for all $j$, $0\leq j \leq n/g-1$. Lemma~\ref{lemma:bijection} gives a complete characterization of $H(n,i)$. Namely, the word $w$ is in $H(n,i)$ if and only if there exist two distinct integers $j_1,j_2$, $0\leq j_1<j_2\leq n/g-1$ such that $\ham(x_{j_1},x_{j_2})=1$ and $x_j=x_{(j+i/g)\bmod{n/g}}$ for all $j\neq j_1,j_2$, $0\leq j\leq n/g-1$. Given $j_1$, $j_2$, $x_{j_1}$, and $x_{j_2}$, all $x_j$ for $j\neq j_1,j_2$, $0\leq j \leq n/g-1$ are already determined.
There are \[\sum_{j_2=1}^{n/g-1}\sum_{j_1=0}^{j_2-1} 1 = \frac{1}{2}\frac{n}{g}\bigg(\frac{n}{g}-1\bigg)\] choices for $j_1$ and $j_2$. There are $k^g$ options for $x_{j_1}$. Considering that $x_{j_1}$ and $x_{j_2}$ differ in exactly one position, there are $g(k-1)$ choices for $x_{j_2}$ given $x_{j_1}$. Putting everything together we have that
\begin{align}
h(n,i) &=
\overbrace{\frac{1}{2}\frac{n}{g}\bigg(\frac{n}{g} -1\bigg)}^{\text{choices for }j_1\text{ and } j_2}\overbrace{k^g}^{\text{choices for }x_{j_1}}\overbrace{g(k-1)}^{\text{choices for }x_{j_2}\text{ given }x_{j_1}}\nonumber\\
&= \frac{1}{2}k^{\gcd(n,i)}(k-1)n\bigg(\frac{n}{\gcd(n,i)}-1\bigg). \nonumber
\end{align}
\end{proof}
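The closed form agrees with exhaustive enumeration for small parameters; a quick cross-check (ours; names are illustrative):

```python
from itertools import product
from math import gcd

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def h_ni_formula(n, i, k):
    # (1/2) k^g (k-1) n (n/g - 1) with g = gcd(n, i);
    # the product is always even, so integer division is exact
    g = gcd(n, i)
    return k ** g * (k - 1) * n * (n // g - 1) // 2

def h_ni_brute(n, i, k):
    alphabet = [str(a) for a in range(k)]
    return sum(
        ham(w, w[i:] + w[:i]) == 2
        for w in map("".join, product(alphabet, repeat=n))
    )
```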
\begin{corollary}
Let $m,n\geq 1$ be integers. Then there are exactly \[h(n+m,m) = \frac{1}{2}k^{\gcd(n+m,m)}(k-1)(n+m)\bigg(\frac{n+m}{\gcd(n+m,m)}-1\bigg)\] pairs of words $(x,y)$ of lengths $m$ and $n$, respectively, such that $\ham(xy,yx)=2$.
\end{corollary}
\section{Some useful properties}\label{section:interlude}
In this section we prove some properties of $H(n,i)$ and $H(n)$ that we use in later sections.
\begin{lemma}\label{lemma:half}
Let $u$ be a length-$n$ word. Let $i$ be an integer with $1\leq i \leq n-1$. If $u\in H(n,i)$ then $u\in H(n,n-i).$
\end{lemma}
\begin{proof}
Suppose $i\leq n/2$. Then we can write $u=xtz$ for some words $x,t,z$ where $|x|=|z|=i$ and $|t|=n-2i$. We have that $\ham(xtz,tzx) = \ham(xt,tz) + \ham(z,x) = 2$. Consider the word $zxt$. Clearly $v=zxt$ is a conjugate of $u=xtz$ such that $\ham(xtz,zxt) = \ham(x,z) +\ham(tz,xt) = 2$ where $u = (xt)z$ and $v = z(xt)$ with $|xt| = n-i$. Therefore $u\in H(n,n-i).$
Suppose $i>n/2$. Then we can write $u=zty$ for some words $z,t,y$ where $|z|=|y|=n-i$ and $|t|=2i-n$. We have that $\ham(zty,yzt) = \ham(z,y) + \ham(ty,zt) = 2$. Consider the word $tyz.$ Clearly $v=tyz$ is a conjugate of $u=zty$ such that $\ham(zty,tyz) = \ham(zt,ty) +\ham(y,z) = 2$ where $u = z(ty)$ and $v = (ty)z$ with $|z| =n- i$. Therefore $u\in H(n,n-i).$
\end{proof}
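The symmetry between $H(n,i)$ and $H(n,n-i)$ can also be confirmed exhaustively for short binary words (a sketch; ours):

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def shift_symmetric(n):
    # check: ham(w, sigma^i(w)) = 2  iff  ham(w, sigma^(n-i)(w)) = 2
    for w in map("".join, product("01", repeat=n)):
        for i in range(1, n):
            left = ham(w, w[i:] + w[:i]) == 2
            right = ham(w, w[n - i:] + w[:n - i]) == 2
            if left != right:
                return False
    return True
```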
\begin{lemma}
Let $u$ be a length-$n$ word. If $u\in H(n)$, then $\ham(u,v)>0$ for any non-trivial conjugate $v$ of $u$.
\end{lemma}
\begin{proof}
We prove the contrapositive of the lemma statement. Namely, we prove that if there exists a non-trivial conjugate $v$ of $u$ such that $\ham(u,v)=0$ then $u\not \in H(n)$.
Suppose $u=xy$ and $v=yx$ for some non-empty words $x$, $y$. Then by Theorem~\ref{theorem:commute} we have that there exists a word $z$, and an integer $i\geq 2$ such that $u = v = z^i$. Let $w$ be a conjugate of $u$. Then $w = (ts)^i$ where $z=st$. So $\ham(u,w) = \ham((st)^i,(ts)^i) = i\ham(st,ts)$. If $st=ts$, then $\ham(u,w)=0$. If $st\neq ts$, then $\ham(st,ts)\geq 2$ (Lemma~\ref{lemma:hammingNon1}). Since $\ham(st,ts)\geq 2$ and $i\geq 2$, we have $\ham(u,w)\geq 4$. Thus $u\not\in H(n)$.
\end{proof}
\begin{corollary}\label{corollary:notH}
Let $u$ be a length-$n$ word. If $u$ is a power, then $u\not\in H(n)$.
\end{corollary}
\begin{corollary}\label{corollary:primitive}
All words in $H(n)$ are primitive.
\end{corollary}
\section{Counting $H(n)$}\label{section:finalcount}
\begin{comment}
To obtain a formula for $h(n)$ one would hope that, aside from some minor issues with double-counting, one would only need to find a formula for $h(n,i)$ and then sum over all $i\in \{1,2,\ldots, n-1\}$. However, Lemma~\ref{lemma:half} shows an equivalence between $H(n,i)$ and $H(n,n-i)$, so this strategy would double-count. This points to summing $h(n,i)$ over all $i \leq n-i$, i.e., $i\in \{1,2,\ldots, \lfloor n/2\rfloor\}$. This seems more promising, but it does not work either, since we could be double-counting words that are in $H(n,i)$ for multiple $i\in \{1,2,\ldots, \lfloor n/2\rfloor\}$. For example, the word $1010101110$ is in both $H(10,2)$ and $H(10,4)$. We address this problem later, in Section~\ref{section:finalcount}.
\end{comment}
Lemma~\ref{lemma:half} shows that $H(n,i) = H(n,n-i)$, which in turn implies that $h(n) \leq \sum_{i=1}^{\lfloor n/2\rfloor} h(n,i)$. To make this inequality an equality we need to be able to account for those words that are double-counted in the sum $\sum_{i=1}^{\lfloor n/2\rfloor} h(n,i)$. In this section we resolve this problem and give an exact formula for $h(n)$. More specifically, we show that all words $w$ that are in both $H(n,i)$ and $H(n,j)$, for $i\neq j$, must exhibit a certain regular structure that we can explicitly describe. Then we use this structure result, in addition to the results from Section~\ref{section:countH} and Section~\ref{section:interlude}, to give an exact formula for $h(n)$.
\begin{lemma}\label{lemma:twoPeriods}
Let $n,i,j$ be positive integers such that $n\geq 2i>2j$. Let $g=\gcd(n,i,j)$. Let $w$ be a length-$n$ word. Then $w\in H(n,i)$ and $w\in H(n,j)$ if and only if there exists a word $u$ of length $g$, a word $v$ of length $g$ with $\ham(u,v)=1$, and a non-negative integer $p<n/g$ such that $w=u^pvu^{n/g-p-1}$.
\end{lemma}
\begin{proof}~\\
\noindent $\Longrightarrow:$ The proof is by induction on $|w|=n$. Suppose $w\in H(n,i)$ and $w\in H(n,j)$. Since $n\geq 2i>2j$, we have that $\gcd(n,i,j)\leq n/4$.
First, we take care of the case when $4\mid n$ and $\gcd(n,i,j)=n/4$, which also includes the base case $n=4$, $i=2$, $j=1$. Since $\gcd(n,i,j)=n/4$, we must have $i=n/2$ and $j=n/4$. So we can write $w=x_1x_2x_3x_4$ where $|x_1|=|x_2|=|x_3|=|x_4| = n/4$. Thus $\ham(w,\sigma^{i}(w))=\ham(x_1x_2x_3x_4, x_3x_4x_1x_2)=2\ham(x_1x_2,x_3x_4)=2$. Therefore, $x_1x_2$ and $x_3x_4$ differ in only one position. This implies that either $x_1=x_3$ or $x_2=x_4$.
Suppose $x_1=x_3$ and $\ham(x_2,x_4)=1$. Let $u=x_1$. Since $w\in H(n,j)$, we have $\ham(x_1x_2x_3x_4,x_2x_3x_4x_1)=\ham(ux_2ux_4,x_2ux_4u)=2\ham(x_2,u)+2\ham(x_4,u)=2$. But this is only possible if $\ham(x_2,u)=0$ or $\ham(x_4,u)=0$. If $\ham(x_2,u)=0$, then $\ham(x_4,u)=1$. So we can write $w=uuuv$ where $v=x_4$. If $\ham(x_4,u)=0$, then we can write $w=uvuu$ where $v=x_2$ and $\ham(u,v)=1$.
Suppose $x_2=x_4$ and $\ham(x_1,x_3)=1$. Let $u=x_2$. Then for similar reasons to the case when $x_1=x_3$ we have that $w=uuvu$ or $w=vuuu$ where $v=x_3$ or $v=x_1$ with $\ham(u,v)=1$.
Now, we take care of the case when $\gcd(n,i,j)<n/4$. Write $w=xyx'y'z$ for words $x,y,x',y',z$ where $|xy|=|x'y'|=i$, and $|x|=|x'|=j$. Since $w\in H(n,i)$ and $w\in H(n,j)$, we have that $\ham(xyx'y'z, x'y'zxy)=\ham(xyx'y'z, yx'y'zx)=2$. There are three cases to consider: $1) \ham(xy,x'y')=0$, $2) \ham(xy,x'y')=1$, and $3) \ham(xy,x'y')=2$.
\begin{enumerate}[{}1{)}]
\item
Suppose $\ham(xy,x'y')=0$. Then $\ham(xyxyz,xyzxy)=\ham(xyxyz,yxyzx)=2$. Clearly $\ham(xyxyz,xyzxy)=\ham(xyz,zxy)=2$, so $xyz \in H(n-i,i)$. Now, either $xy=yx$ or $xy\neq yx$. If $xy=yx$, then we clearly have $\ham(xyxyz,yxyzx) = \ham(xyz,yzx)=2$. Therefore, we have $xyz\in H(n-i,j)$. Let $g = \gcd(n-i,i,j)$. We have that $g=\gcd(n-i,i,j) = \gcd(\gcd(n-i,i),j) = \gcd(\gcd(n,i),j) =\gcd(n,i,j)$. If $n-i \geq 2i\geq 2j$, then we can apply induction to $xyz$ directly. If $n-i < 2i$ and $n-i \geq 2j$, then $n-i > 2(n-2i)$. If $n-i < 2j < 2i$, then $n-i > 2(n-i-j)$, and $n-i > 2(n-2i)$. By Lemma~\ref{lemma:half}, we have that if $xyz \in H(n-i,i)$ and $xyz\in H(n-i,j)$, then $xyz\in H(n-i,n-2i)$ and $xyz\in H(n-i,n-i-j)$. Additionally, in the case when $n-i < 2i$ and $n-i \geq 2j$, we have that $\gcd(n-i,n-2i,j) = \gcd(n,i,j)=g$. On the other hand, when $n-i < 2j <2i$, we have that $\gcd(n-i,n-2i,n-i-j) = \gcd(n,i,j)=g$.
Thus, by induction there exists a word $u$ of length $g$, a word $v$ of length $g$ with $\ham(u,v)=1$, and a non-negative integer $p' < (n-i)/g$ such that $xyz=u^{p'}vu^{(n-i)/g-p'-1}$. Since $xy=yx$ and $g\mid \gcd(i,j)$, it is clear that $xy = u^{i/g}$. Then $w = xyxyz = u^{p'+i/g} v u^{(n-i)/g-p'-1}$. Letting $p=p'+i/g$, we have $w = u^{p} v u^{n/g-p-1}$.
If $xy\neq yx$, then we must have $\ham(xy,yx)=2$. But since $\ham(xyxyz,yxyzx) = 2$, we must have $\ham(xyz,yzx) = 0$. This means that $xyz$ is a power, but we have already demonstrated that $xyz \in H(n-i,i)$. By Corollary~\ref{corollary:notH}, this is a contradiction.
\item
Suppose $\ham(xy,x'y')=1$. Either $\ham(x,x')=1$ or $\ham(y,y')=1$. Assume, without loss of generality, that $\ham(x,x')=1$. Then $y=y'$ and $w=xyx'yz$. Since $w\in H(n,i)$ and $w\in H(n,j)$, we have that $\ham(xyx'yz, x'yzxy)=\ham(xyx'yz, yx'yzx)=2$. Since $\ham(xy,x'y)=1$, we must have $\ham(x'yz,zxy)=1$. Additionally, since $\ham(xy,yx)$ is $0$ or $2$, we have that $\ham(xy,yx')=1$ and $\ham(x'yz,yzx)=1$. Consider $\ham(xyz,zxy)$ and $\ham(xyz,yzx)$. Since $\ham(x'yz,zxy)=1$ and $\ham(x'yz,yzx)=1$, we have that $\ham(xyz,zxy)=1\pm 1$ and $\ham(xyz,yzx)=1\pm 1$. If exactly one of $\ham(xyz,zxy)$ and $\ham(xyz,yzx)$ is $0$, then $xyz$ must be a power and also in $H(n)$, a contradiction by Corollary~\ref{corollary:notH}. So $\ham(xyz,zxy)=\ham(xyz,yzx)=2$ or $\ham(xyz,zxy)=\ham(xyz,yzx)=0$.
Suppose $\ham(xyz,zxy)=\ham(xyz,yzx)=2$. Then $xyz \in H(n-i,i)$ and $xyz \in H(n-i,j)$. Let $g=\gcd(n-i,i,j)=\gcd(n,i,j)$. As in the case when $\ham(xy,x'y')=0$, we might not be able to apply induction directly to $xyz$ since we may have $n-i < 2i$ or $n-i < 2j$. But the same reasoning applied to the case when $\ham(xy,x'y')=0$ works for this case. When $n-i < 2i$, we have that $xyz \in H(n-i,n-2i)$, $n-i > 2(n-2i)$, and $g=\gcd(n,i,j) = \gcd(n-i,n-2i,j)$. When $n-i < 2j < 2i$, we have that $xyz \in H(n-i,n-i-j)$, $n-i > 2(n-i-j)$, and $g=\gcd(n,i,j) = \gcd(n-i,n-2i,n-i-j)$.
Therefore, by induction there exists a word $u$ of length $g$, a word $v$ of length $g$ with $\ham(u,v)=1$, and a non-negative integer $p'<(n-i)/g$ such that $xyz = u^{p'}vu^{(n-i)/g - p'-1}$. Since $\ham(x'yz,zxy)=1$ and $\ham(x'yz,yzx)=1$, we must have $x'yz = u^{(n-i)/g}$. Therefore $x=u^{p}v u^{i/g-p-1}$ for some $p < i/g < n/g$. Then $w=xyx'yz = u^{p}v u^{i/g-p-1} u^{(n-i)/g} = u^p v u^{n/g-p-1}$.
Suppose $\ham(xyz,zxy)=\ham(xyz,yzx)=0$. Then, by Theorem~\ref{theorem:commute}, there exist two words $s$, $t$, and four integers $i_1,i_2,j_1,j_2\geq 1$ such that $xy = s^{i_1}$, $z=s^{j_1}$, $x=t^{i_2}$, and $yz = t^{j_2}$. Since $xyz = s^{i_1+j_1}=t^{i_2+j_2}$ for $i_1+j_1,i_2+j_2\geq 2$, we have that $|s|,|t| \leq (n-i)/2$. Therefore, we must have that both $st$ and $ts$ are a prefix of $xyz$. By Theorem~\ref{theorem:commute}, there exists a word $r$, and two integers $i_3,j_3\geq 1$ such that $s=r^{i_3}$ and $t=r^{j_3}$.
Now, since $|s|$ divides both $|xy|=i$ and $|xyz|=n-i$, we have that $|s|$ divides $\gcd(n-i,i)=\gcd(n,i)$. Similarly, we have that $|t|$ divides $\gcd(n-i,j)$. Since $|r|$ divides both $|s|$ and $|t|$, it also must divide $\gcd(\gcd(n,i),\gcd(n-i,j))=\gcd(n,i,j)$. Therefore, there exists a length-$g$ word $u$ such that $x= u^{j/g}$, $xy = u^{i/g}$, and $xyz = u^{(n-i)/g}$. Since $\ham(x,x')=1$, we can write $x' = u^{p'}vu^{j/g-p'-1}$ for some integer $p'$, $0\leq p' < j/g$, and some length-$g$ word $v$ with $\ham(u,v)=1$. But then, letting $p=i/g+p'$, we have that $w=xyx'yz = u^{i/g}u^{p'}vu^{j/g-p'-1}u^{(i-j)/g}u^{(n-2i)/g} = u^pvu^{n/g-p-1}$.
\item
Suppose $\ham(xy,x'y')=2$. Then $\ham(x'y'z, zxy)=0$. Therefore, $xy$ must be a suffix of $w$. Rewrite $w$ as $w=xytxy$ for some word $t$. Clearly $\ham(xytxy, txyxy)=2$ iff $\ham(xyxyt,xytxy)=2$ and $\ham(xytxy, ytxyx)=2$ iff $\ham(xyxyt,yxytx)=2$. Therefore, this case is taken care of by case $1)$ ($\ham(xy,x'y')=0$).
\end{enumerate}
\bigskip
\noindent $\Longleftarrow:$ Let $g=\gcd(n,i,j)$. Suppose we can write $w = u^{p}vu^{n/g-p-1}$ where $|u|=|v|=g$, and $\ham(u,v)=1$. Since $g\mid i$, we can write
\[\ham(w,\sigma^i(w)) = \ham(u^{p}vu^{n/g-p-1}, u^{p-i/g}vu^{n/g+i/g-p-1}) = 2\ham(u,v)=2\]
if $p\geq i/g$, and
\[\ham(w,\sigma^i(w)) = \ham(u^{p}vu^{n/g-p-1}, u^{n/g-i/g+p}vu^{i/g-p-1}) = 2\ham(u,v)=2\]
if $p< i/g$.
Since $g$ divides $j$ as well, a similar argument works to show $\ham(w,\sigma^{j}(w))=2$ as well. Therefore, $w\in H(n,i)$ and $w\in H(n,j)$.
\end{proof}
Lemma~\ref{lemma:twoPeriods} shows that any word $w$ that is in $H(n,i)$ and $H(n,j)$ for $j<i\leq n/2$ is of Hamming distance $1$ away from a power. Therefore, to count the number of such words, we need a formula for the number of powers.
Clearly a word is a power if and only if it is not primitive. Let $p_k(n)$ denote the number of length-$n$ powers over a $k$-letter alphabet, and let $\psi_k(n)$ denote the number of length-$n$ primitive words over a $k$-letter alphabet. This implies that $p_k(n)=k^n-\psi_k(n)$. From Lothaire's 1983 book~\cite[p.~9]{Lothaire:1983} we also have that
\[\psi_k(n) = \sum_{d\mid n}\mu(d)k^{n/d}\] where $\mu$ is the M\"{o}bius function.
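Both counts are easy to compute; the sketch below (ours; names are illustrative) evaluates $\psi_k(n)$ via the M\"{o}bius sum and checks $p_k(n)$ against the fact that a word is a power if and only if it equals one of its non-trivial rotations:

```python
from itertools import product

def mobius(n):
    # Moebius function by trial division
    if n == 1:
        return 1
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def psi(n, k):
    # number of length-n primitive words over a k-letter alphabet
    return sum(mobius(d) * k ** (n // d)
               for d in range(1, n + 1) if n % d == 0)

def p_count(n, k):
    # p_k(n): number of length-n powers
    return k ** n - psi(n, k)

def p_count_brute(n, k):
    # a power is exactly a word equal to one of its non-trivial rotations
    alphabet = [str(a) for a in range(k)]
    return sum(
        any(w == w[i:] + w[:i] for i in range(1, n))
        for w in map("".join, product(alphabet, repeat=n))
    )
```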
Let $H'(n,i)$ denote the set of words $w\in H(n,i)$ that are also in $H(n,j)$ for some $j<i$. Let $h'(n,i) = |H'(n,i)|$.
\begin{corollary}
Let $n,i$ be positive integers such that $n\geq 2i$. Then
\[h'(n,i) = \begin{cases}
n(k-1)p_k(i), & \text{if $i\mid n$;} \\
n(k-1)k^{\gcd(n,i)}, & \text{otherwise.}
\end{cases}\]
\end{corollary}
Let $H''(n,i)$ denote the set of words $w\in H(n,i)$ such that $w\not\in H(n,j)$ for all $j<i$. Let $h''(n,i) = |H''(n,i)|$.
\begin{lemma}\label{lemma:shortest}
Let $n,i$ be positive integers such that $n >i$. Then
\[h''(n,i) = \begin{cases}
\frac{1}{2}n(k-1)\big(k^{\gcd(n,i)}\big(\frac{n}{\gcd(n,i)}-1\big)-2p_k(i)\big), & \text{if $i\mid n$;} \\
\frac{1}{2}k^{\gcd(n,i)}(k-1)n\big(\frac{n}{\gcd(n,i)}-3\big), & \text{otherwise.}
\end{cases}\]
\end{lemma}
\begin{proof}
Let $w$ be a length-$n$ word. The word $w$ is in $H''(n,i)$ precisely if it is in $H(n,i)$ but not in any $H(n,j)$ for $j<i$. So computing $h''(n,i)$ reduces to computing the number of length-$n$ words that are in $H(n,i)$ and $H(n,j)$ for some $j<i$ (i.e., $h'(n,i)$) and then subtracting it from the number of words in $H(n,i)$ (i.e., $h(n,i)$). Therefore
\[h''(n,i) =h(n,i)-h'(n,i)= \begin{cases}
\frac{1}{2}n(k-1)\big(k^{\gcd(n,i)}\big(\frac{n}{\gcd(n,i)}-1\big)-2p_k(i)\big), & \text{if $i\mid n$;} \\
\frac{1}{2}k^{\gcd(n,i)}(k-1)n\big(\frac{n}{\gcd(n,i)}-3\big), & \text{otherwise.}
\end{cases}\]
\end{proof}
\begin{theorem}
Let $n$ be an integer $\geq 2$. Then \[h(n) = \sum_{i=1}^{\lfloor n/2\rfloor}h''(n,i).\]
\end{theorem}
\begin{proof}
Every word that is in $H(n)$ must also be in $H(n,i)$ for some integer $i$ in the range $1\leq i \leq n-1$. By Lemma~\ref{lemma:half} we have that every word that is in $H(n,i)$ is also in $H(n,n-i)$. Therefore we only need to consider words in $H(n,i)$ where $i$ is an integer with $i\leq n-i \implies i \leq n/2.$ Consider the quantity $S= \sum\limits_{i=1}^{\lfloor n/2 \rfloor} h(n,i).$ Since any member of $H(n)$ must also be a member of $H(n,i)$ for some $i\leq \lfloor n/2\rfloor$, we have that $h(n) \leq S.$ But any member of $H(n,i)$ may also be a member of $H(n,j)$ for some $j<i.$ These words are accounted for multiple times in the sum $S$. To avoid double-counting we must count the number of words $w$ that are in $H(n,i)$ but not in $H(n,j)$ for any $j<i$. This quantity is exactly $h''(n,i)$. Therefore \[h(n) = \sum_{i=1}^{\lfloor n/2\rfloor} h''(n,i).\] \end{proof}
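The full formula can be validated against direct enumeration; the sketch below (ours; names are illustrative) computes $p_k$ by a rotation test rather than the M\"{o}bius sum, purely for brevity:

```python
from itertools import product
from math import gcd

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def p_count(n, k):
    # p_k(n): powers are exactly the words equal to a non-trivial rotation
    alphabet = [str(a) for a in range(k)]
    return sum(
        any(w == w[i:] + w[:i] for i in range(1, n))
        for w in map("".join, product(alphabet, repeat=n))
    )

def h_pp(n, i, k):
    # h''(n, i) = h(n, i) - h'(n, i)
    g = gcd(n, i)
    h_ni = k ** g * (k - 1) * n * (n // g - 1) // 2
    overcount = n * (k - 1) * (p_count(i, k) if n % i == 0 else k ** g)
    return h_ni - overcount

def h_closed(n, k):
    # the theorem: h(n) is the sum of h''(n, i) over i <= n/2
    return sum(h_pp(n, i, k) for i in range(1, n // 2 + 1))

def h_brute(n, k):
    alphabet = [str(a) for a in range(k)]
    return sum(
        any(ham(w, w[i:] + w[:i]) == 2 for i in range(1, n))
        for w in map("".join, product(alphabet, repeat=n))
    )
```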
\section{Exactly one conjugate}\label{section:exactly}
So far we have been interested in length-$n$ words $u$ that have at least one conjugate of Hamming distance $2$ away from $u$. But what about length-$n$ words $u$ that have exactly one conjugate of Hamming distance $2$ away from $u$? In this section we provide a formula for the number $h'''(n)$ of length-$n$ words $u$ with exactly one conjugate $v$ such that $\ham(u,v)=2$.
Let $n$ and $i$ be positive integers such that $n > i$. Let $H'''(n)$ denote the set of length-$n$ words $u$ over $\Sigma_k$ that have exactly one conjugate $v$ with $\ham(u,v)=2$. Let $h'''(n) = |H'''(n)|$. Let $H'''(n,i)$ denote the set of length-$n$ words $w$ such that $w$ is in $H(n,i)$ but is not in $H(n,j)$ for any $j\neq i.$ Let $h'''(n,i) = |H'''(n,i)|$.
Suppose $w\in H'''(n,i)$. Then by definition we have that $w\in H(n,i)$ and $w\not\in H(n,j)$ for any $j\neq i$. But by Lemma~\ref{lemma:half} we have that if $w$ is in $H(n,i)$ then it must also be in $H(n,n-i)$. So if $i \neq n-i$, then $w$ has at least two distinct conjugates of Hamming distance $2$ away from it, namely $\sigma^i(w)$ and $\sigma^{n-i}(w)$. Therefore we have $i=n-i$. This implies that $n$ must be even, so $H'''(2m+1)=\{\}$ for all $m\geq 1.$ Since $i=n-i\implies i=n/2$, we have that $w\in H(n,n/2)$. However $w$ cannot be in $H(n,j)$ for any $j\neq n/2$. Since any word in $H(n,j)$ is also in $H(n,n-j)$, the condition of $w\not \in H(n,j)$ for any $j\neq n/2$ is equivalent to $w\not\in H(n,j)$ for any $j$ with $1\leq j < n/2.$ But this is just the definition of $H''(n,n/2)$. From this we get the following theorem.
\begin{theorem}
Let $n\geq 1$ be an integer. Then \[h'''(n) = \begin{cases}
\frac{1}{2}n(k-1)(k^{n/2}-2p_k(n/2)), & \text{if $n$ is even;} \\
0, & \text{otherwise.}
\end{cases}\]
\end{theorem}
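A brute-force cross-check of $h'''(n)$ (ours; names are illustrative). Since every word in $H(n)$ is primitive, counting the shift amounts $i$ with $\ham(w,\sigma^i(w))=2$ is the same as counting the conjugates at distance $2$:

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def p_count(n, k):
    # p_k(n) via the rotation test (a power equals a non-trivial rotation)
    alphabet = [str(a) for a in range(k)]
    return sum(
        any(w == w[i:] + w[:i] for i in range(1, n))
        for w in map("".join, product(alphabet, repeat=n))
    )

def h_ppp_closed(n, k):
    # the theorem's formula; 0 for odd n
    if n % 2 == 1:
        return 0
    return n * (k - 1) * (k ** (n // 2) - 2 * p_count(n // 2, k)) // 2

def h_ppp_brute(n, k):
    # words with exactly one shift amount at Hamming distance 2
    alphabet = [str(a) for a in range(k)]
    return sum(
        sum(ham(w, w[i:] + w[:i]) == 2 for i in range(1, n)) == 1
        for w in map("".join, product(alphabet, repeat=n))
    )
```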
\section{Lyndon conjugates}\label{section:lyndon}
A \emph{Lyndon word} is a word that is lexicographically smaller than any of its non-trivial conjugates. In this section we count the number of Lyndon words in $H(n)$.
\begin{lemma}\label{lemma:anyconjugate}
Let $u$ be a length-$n$ word. If $u\in H(n)$, then any conjugate $v$ of $u$ is also in $H(n)$.
\end{lemma}
\begin{proof}
Suppose $u\in H(n)$. Then $\ham(u,\sigma^i(u))=2$ for some $i\geq 0$. If we shift both $u$ and $\sigma^i(u)$ by the same amount, then the symbols that are being compared to each other do not change. Thus $\ham(\sigma^j(u), \sigma^{i+j}(u)) = 2$ for all $j\geq 0$. So any conjugate $v=\sigma^j(u)$ must also be in $H(n)$.
\end{proof}
\begin{theorem}\label{theorem:lyndon}
There are $\frac{h(n)}{n}$ Lyndon words in $H(n)$.
\end{theorem}
\begin{proof}
Corollary~\ref{corollary:primitive} says that all members of $H(n)$ are primitive and Lemma~\ref{lemma:anyconjugate} says that if a word is in $H(n)$, then any conjugate of it is also in $H(n)$. It is easy to verify that every primitive word has exactly one Lyndon conjugate. Therefore exactly $\frac{h(n)}{n}$ words in $H(n)$ are Lyndon words.
\end{proof}
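The count in Theorem~\ref{theorem:lyndon} can likewise be confirmed by brute force for small parameters (a sketch of ours, not from the paper): we list $H(n)$ exhaustively and count its Lyndon members.

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def in_H(w):
    # w has some non-trivial conjugate at Hamming distance exactly 2
    return any(ham(w, w[i:] + w[:i]) == 2 for i in range(1, len(w)))

def is_lyndon(w):
    # strictly lexicographically smaller than every non-trivial conjugate
    return all(w < w[i:] + w[:i] for i in range(1, len(w)))

def h_and_lyndon(n, k):
    # returns (|H(n)|, number of Lyndon words in H(n))
    alphabet = "abcdefghij"[:k]
    words = ("".join(t) for t in product(alphabet, repeat=n))
    H = [w for w in words if in_H(w)]
    return len(H), sum(1 for w in H if is_lyndon(w))
```

For instance, `h_and_lyndon(4, 2)` returns `(12, 3)`, and indeed $3 = 12/4$ as the theorem predicts.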
\section{Asymptotic behaviour of $h(n)$}\label{section:asymptotics}
In this section we show that there is no one easily-expressible bound on the growth of $h(n)$. We do this by demonstrating that $h(n)$ is a polynomial for prime $n$, and that $h(n)$ is bounded below by an exponential for even $n$.
\begin{lemma}
Let $n$ be an odd prime number. Then \[h(n)=\frac{1}{4}k(k-1)n(n^2-4n+7).\]
\end{lemma}
\begin{proof}
Let $n$ be an odd prime number. Since $n$ is prime, we have that $\gcd(n,i)=1$ for all integers $i$ with $1<i<n$. Then
\begin{align} h(n) &= \sum_{i=1}^{(n-1)/2}h''(n,i) \nonumber\\
&=\frac{1}{2}k(k-1)n(n-1)+\sum_{i=2}^{(n-1)/2}\frac{1}{2}k^{\gcd(n,i)}(k-1)n\bigg(\frac{n}{\gcd(n,i)}-3\bigg)\nonumber \\
&= \frac{1}{2}k(k-1)n(n-1)+\sum_{i=2}^{(n-1)/2}\frac{1}{2}k(k-1)n(n-3) \nonumber \\
&= \frac{1}{2}k(k-1)n(n-1)+\bigg(\frac{n-3}{2}\bigg)\frac{1}{2}k(k-1)n(n-3) \nonumber \\
&= \frac{1}{4}k(k-1)n(n^2-4n+7).\nonumber
\end{align}
\end{proof}
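The closed form can be cross-checked against brute force for small odd primes (our sketch; the brute force simply tests every word for a conjugate at Hamming distance exactly $2$).

```python
from itertools import product

def ham(u, v):
    return sum(a != b for a, b in zip(u, v))

def h_brute(n, k):
    # |H(n)| by exhaustive search over all k^n words
    alphabet = "abcdefghij"[:k]
    return sum(
        any(ham(w, w[i:] + w[:i]) == 2 for i in range(1, n))
        for w in ("".join(t) for t in product(alphabet, repeat=n))
    )

def h_closed_form(n, k):
    # the lemma's formula, valid for odd prime n
    return k * (k - 1) * n * (n * n - 4 * n + 7) // 4
```

For example, `h_closed_form(5, 2)` gives `30`, matching `h_brute(5, 2)`.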
\begin{lemma}
Let $n> 1$ be an integer. Then $h(2n) \geq \frac{1}{2}nk^{n}$.
\end{lemma}
\begin{proof}
Since any word in $H(2n,n)$ must also be in $H(2n)$, we have that $h(2n) \geq h(2n,n)$. From Lemma~\ref{lemma:formula} we see that $h(2n,n) = \frac{1}{2}k^{\gcd(2n,n)}(k-1)n\big(\frac{2n}{\gcd(2n,n)}-1\big) = \frac{1}{2}k^n (k-1)n.$ Since $k\geq 2$, we have that $k-1\geq 1$. Therefore $h(2n) \geq \frac{1}{2}k^n (k-1)n \geq \frac{1}{2} nk^n$ for all $n>1$.
\end{proof}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec-intro}
\vspace{-0.1in}
{\em Clustering} is one of the most fundamental problems in data analysis~\cite{jain2010data}. Given a set of elements, the goal of clustering is to partition the set into several groups based on their similarities or dissimilarities.
Several clustering models have been extensively studied, such as $k$-center, $k$-median, and $k$-means clustering~\cite{awasthi2014center}.
In reality, datasets are often noisy and contain outliers. Moreover, outliers can seriously affect the final results of data analysis~\cite{tan2006introduction,chandola2009anomaly}. Clustering with outliers can be viewed as a generalization of ordinary clustering problems; however, the existence of outliers makes the problems much more challenging.
We focus on the problem of {\em $k$-center clustering with outliers} in this paper. Given a metric space with $n$ vertices and a pre-specified number of outliers $z<n$, the problem is to find $k$ balls to cover at least $n-z$ vertices and minimize the maximum radius of the balls. The problem also can be defined in Euclidean space so that the cluster centers can be any points in the space (i.e., not restricted to be selected from the input points). The $2$-approximation algorithms for ordinary $k$-center clustering (without outliers) were given in \cite{gonzalez1985clustering,hochbaum1985best}, and it was proved that any approximation ratio lower than $2$ implies $P=NP$.
A $3$-approximation algorithm for $k$-center clustering with outliers in arbitrary metrics was proposed by Charikar et al.~\cite{charikar2001algorithms}; for the problem in Euclidean space, their approximation ratio becomes $4$. A following streaming $(4+\epsilon)$-approximation algorithm was proposed by McCutchen and Khuller~\cite{mccutchen2008streaming}.
Recently, Chakrabarty et al.~\cite{DBLP:conf/icalp/ChakrabartyGK16} proposed a $2$-approximation algorithm for metric $k$-center clustering with outliers (though the resulting approximation ratio for the problem in Euclidean space is unclear).
Existing algorithms often have high time complexities. For example, the complexities of the algorithms in~\cite{charikar2001algorithms,mccutchen2008streaming} are $O(k n^2\log n)$ and $O\big(\frac{1}{\epsilon}(kzn+(kz)^2\log \Phi)\big)$ respectively, where $\Phi$ is the ratio of the optimal radius to the smallest pairwise distance among the vertices; the algorithm in~\cite{DBLP:conf/icalp/ChakrabartyGK16} needs to solve a complicated model of linear programming and the exact time complexity is not provided.
The coreset based idea of Badoiu et al.~\cite{BHI} needs to enumerate a large number of possible cases and also yields a high complexity. Several distributed algorithms for $k$-center clustering with outliers were proposed recently~\cite{malkomes2015fast,guha2017distributed,DBLP:journals/corr/abs-1802-09205,li2018distributed}; most of these distributed algorithms, to our best knowledge, rely on the sequential algorithm~\cite{charikar2001algorithms}.
In this paper, we aim to design a quality-guaranteed algorithm with low complexity for the problem of $k$-center clustering with outliers.
Our idea is inspired by the greedy method from Gonzalez~\cite{gonzalez1985clustering} for solving ordinary $k$-center clustering. Based on some novel insights, we show that this greedy method also works for the problem with outliers (Section~\ref{sec-center}). Our approach can achieve the approximation ratio $2$ with respect to the clustering cost (i.e., the radius); moreover, the time complexity is linear in the input size.
Charikar et al.~\cite{charikar2003better} showed that if more than $z$ outliers are allowed to be removed, the random sampling technique can be applied to reduce the data size for metric $k$-center clustering with outliers. Recently, Huang et al.~\cite{huang2018epsilon} showed a similar result for instances in Euclidean space (and they name the sample a ``robust coreset''). In Section~\ref{sec-core1}, we prove that the sample size of \cite{huang2018epsilon} can be further reduced.
We also consider the problem in doubling metrics, motivated by the fact that many real-world datasets often manifest low intrinsic dimensions~\cite{belkin2003problems}.
For example, image sets usually can be represented on a low-dimensional manifold even though the Euclidean dimension of the image vectors can be very high. ``Doubling dimension'' is widely used for measuring the intrinsic dimensions of datasets~\cite{talwar2004bypassing} (the formal definition is given in Section~\ref{sec-pre}). Rather than assuming that the whole $(X,d)$ has a low doubling dimension, we only assume that \textbf{the inliers of the given data have a low doubling dimension $\rho>0$.} We do not have any assumption on the outliers; namely, the outliers can scatter arbitrarily in the space. We believe that this assumption captures a large range of high-dimensional instances in reality.
With the assumption, we show that our approach can further improve the clustering quality. In particular, the greedy approach is able to construct a coreset for the problem of $k$-center clustering with outliers; as a consequence, the time complexity can be significantly reduced if running existing algorithms on the coreset (Section~\ref{sec-doubling}). {\em Coreset} construction is a technique for reducing data size so as to speed up many optimization problems; we refer the reader to the surveys~\cite{DBLP:journals/corr/Phillips16,bachem2017practical} for more details. The size of our coreset is $2z+O\big((2/\mu)^\rho k\big)$, where $\mu$ is a small parameter measuring the quality of the coreset; the construction time is $O((\frac{2}{\mu})^\rho kn)$. Note that $z$ and $k$ are often much smaller than $n$ in practice; the coefficient $2$ of $z$ actually can be further reduced to be arbitrarily close to $1$, by increasing the coefficient of the second term $(2/\mu)^\rho k$. Moreover, our coreset is a natural ``composable coreset''~\cite{DBLP:conf/pods/IndykMMM14} which could potentially be applied to distributed clustering with outliers. Very recently, Ceccarello et al.~\cite{DBLP:journals/corr/abs-1802-09205} also provided a coreset for $k$-center clustering with $z$ outliers in doubling metrics, where their size is $T=O((k+z)(\frac{24}{\mu})^\rho)$ with $O(nT)$ construction time. Thus our result is a significant improvement in terms of coreset size and construction time.
Huang et al.~\cite{huang2018epsilon} considered coreset construction for $k$-median/means clustering with outliers in doubling metrics; however, their method cannot be extended to the case of $k$-center. Aghamolaei and Ghodsi~\cite{DBLP:conf/cccg/AghamolaeiG18} considered coreset construction for ordinary $k$-center clustering without outliers.
Our proposed algorithms are easy to implement in practice. To study the performance of our algorithms, we test them on both synthetic and real datasets in Section~\ref{sec-exp}. The experimental results suggest that our method outperforms existing methods in terms of clustering quality and running time. Also, the running time can be significantly reduced via building coreset where the clustering quality can be well preserved simultaneously.
\vspace{-0.1in}
\subsection{Preliminaries}
\label{sec-pre}
\vspace{-0.05in}
We consider the problem of $k$-center with outliers in arbitrary metrics and Euclidean space $\mathbb{R}^D$. Let $(X, d)$ be a metric, where $X$ contains $n$ vertices and $d(\cdot, \cdot)$ is the distance function; with a slight abuse of notation, we also use the function $d$ to denote the shortest distance between two subsets $X_1, X_2\subseteq X$, i.e., $d(X_1, X_2)=\min_{p\in X_1, q\in X_2}d(p, q)$. We assume that the distance between any pair of vertices in $X$ is given in advance; for the problem in Euclidean space, it takes $O(D)$ time to compute the distance between any pair of points.
Below, we introduce several important definitions that are used throughout the paper.
\vspace{-0.05in}
\begin{definition}[$k$-Center Clustering with Outliers]
\label{def-outlier}
Given a metric $(X, d)$ with two positive integers $k$ and $z<n$, $k$-center clustering with outliers is to find a subset $X'\subseteq X$, where $|X'|\geq n-z$, and $k$ centers $\{c_1, \cdots, c_k\}\subseteq X$, such that $\max_{p\in X'}\min_{1\leq j\leq k}d(p, c_j)$ is minimized. If given a set $P$ of $n$ points in $\mathbb{R}^D$, the problem is to find a subset $P'\subseteq P$, where $|P'|\geq n-z$, and $k$ centers $\{c_1, \cdots, c_k\}\subseteq\mathbb{R}^D$, such that $\max_{p\in P'}\min_{1\leq j\leq k}||p-c_j||$ is minimized.
\end{definition}
\textbf{Note.} For the sake of convenience, we describe the following definitions only in terms of metric space. In fact, the definitions can be easily modified for the problem in Euclidean space.
In this paper, we always use $X_{opt}$, a subset of $X$ with size $n-z$, to denote the subset yielding the optimal solution. Also, let $\{C_1, \cdots, C_k\}$ be the $k$ clusters forming $X_{opt}$, and the resulting clustering cost be $r_{opt}$; that is, each $C_j$ is covered by an individual ball with radius $r_{opt}$.
Usually, optimization problems with outliers are challenging to solve. Thus we often relax our goal and allow slightly more than $z$ outliers to be removed in practice. Actually, the same relaxation idea has been adopted by a number of previous works on clustering with outliers~\cite{charikar2003better,huang2018epsilon,alon2003testing,li2018distributed}.
\vspace{-0.05in}
\begin{definition}[$(k,z)_{\epsilon}$-Center Clustering]
\label{def-relax}
Let $(X,d)$ be an instance of $k$-center clustering with $z$ outliers, and $\epsilon\geq 0$. $(k,z)_{\epsilon}$-center clustering is to find a subset $X'$ of $X$, where $|X'|\geq n-(1+\epsilon)z$, such that the corresponding clustering cost of Definition~\ref{def-outlier} on $X'$ is minimized.
\textbf{(\rmnum{1})} Given a set $A$ of cluster centers ($|A|$ could be larger than $k$), the resulting clustering cost,
\begin{eqnarray}
\min\big\{\max_{p\in X'}\min_{c\in A}d(p, c)\mid X'\subseteq X, |X'|\geq n-(1+\epsilon)z\big\}
\end{eqnarray}
is denoted by $\phi_{\epsilon}(X, A)$.
\textbf{(\rmnum{2})} If $|A|=k$ and $\phi_{\epsilon}(X, A)\leq\alpha r_{opt}$ with $\alpha>0$\footnote{Since we remove more than $z$ outliers, it is possible to have an approximation ratio $\alpha<1$, i.e., $\phi_{\epsilon}(X, A)< r_{opt}$.},
it is called an $\alpha$-approximation. Moreover, if $|A|=\beta k$ with $\beta> 1$, it is called an $(\alpha, \beta)$-approximation.
\end{definition}
Obviously, the problem in Definition~\ref{def-outlier} is a special case of $(k,z)_{\epsilon}$-center clustering with $\epsilon=0$.
Further, Definitions~\ref{def-outlier} and \ref{def-relax} can be naturally extended to the \textbf{weighted case}: each vertex $p$ has a non-negative weight $w_p$ and the total weight of outliers should be equal to $z$; the distance $d(p, c_j)$ in the objective function is replaced by $w_p\cdot d(p, c_j)$.
Then we have the following definition of coreset.
\vspace{-0.03in}
\begin{definition}[Coreset]
\label{def-coreset}
Given a small parameter $\mu\in(0,1)$ and an instance $(X,d)$ of $k$-center clustering with $z$ outliers, a set $S\subseteq X$ is called a $\mu$-coreset of $X$, if each vertex of $S$ is assigned a non-negative weight and $\phi_{0}(S, H)\in (1\pm\mu)\phi_{0}(X, H)$ for any set $H\subseteq X$ of $k$ vertices.
\end{definition}
\vspace{-0.03in}
Given a large-scale instance $(X, d)$, we can run an existing algorithm on its coreset $S$ to compute an approximate solution for $X$; if $|S|\ll n$, the resulting running time can be significantly reduced. Formally, we have the following claim (see the proof in Section~\ref{sec-proof-c1}).
\begin{claim}
\label{pro-core}
If the set $H$ yields an $\alpha$-approximation of the $\mu$-coreset $S$, it yields an $\alpha\times\frac{1+\mu}{1-\mu}$-approximation of $X$.
\end{claim}
As mentioned before, we also consider the case with low doubling dimension. Roughly speaking, doubling dimension describes the expansion rate of the metric.
For any $p\in X$ and $r\geq 0$, we use $Ball(p, r)$ to denote the ball centered at $p$ with radius $r$.
\vspace{-0.03in}
\begin{definition}[Doubling Dimension]
\label{def-dd}
The doubling dimension of a metric $(X,d)$ is the smallest number $\rho>0$, such that for any $p\in X$ and $r\geq 0$, $X\cap Ball(p, 2r)$ is always covered by the union of at most $2^\rho$ balls with radius $r$.
\end{definition}
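As an illustration (ours), the covering number in Definition~\ref{def-dd} can be estimated greedily: scan $X\cap Ball(p,2r)$ and open a new radius-$r$ ball whenever a point is not yet covered; the chosen centers are pairwise more than $r$ apart, so the greedy simultaneously produces a cover and an $r$-separated set.

```python
def greedy_ball_cover(points, p, r, dist):
    """Cover {q : dist(p, q) <= 2r} greedily by radius-r balls centered at
    data points; returns the chosen centers (an r-separated set)."""
    target = [q for q in points if dist(p, q) <= 2 * r]
    centers = []
    for q in target:
        # open a new ball only if q is not already within r of some center
        if all(dist(q, c) > r for c in centers):
            centers.append(q)
    return centers

# On a 1-dimensional grid (doubling dimension 1) only O(1) balls are needed:
line = list(range(21))
d1 = lambda a, b: abs(a - b)
centers = greedy_ball_cover(line, p=10, r=2, dist=d1)
```

Here two optimally placed radius-$2$ balls already cover the interval $[6,14]$; the greedy's three centers stay within a constant factor of that, as the doubling property demands.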
\vspace{-0.05in}
\section{Algorithms for $(k,z)_{\epsilon}$-Center Clustering}
\label{sec-center}
\vspace{-0.05in}
For the sake of completeness, let us briefly introduce the algorithm of \cite{gonzalez1985clustering} for ordinary $k$-center clustering first. Initially, it arbitrarily selects a vertex from $X$, and iteratively selects the following $k-1$ vertices, where the $j$-th step ($2\leq j\leq k$) chooses the vertex having the largest minimum distance to the already selected $j-1$ vertices; finally, each input vertex is assigned to its nearest neighbor among these selected $k$ vertices. It can be proved that this greedy strategy results in a $2$-approximation of $k$-center clustering; the algorithm also works for the problem in Euclidean space and yields the same approximation ratio. In this section, we show that a modified version of Gonzalez's algorithm yields approximate solutions for $(k,z)_{\epsilon}$-center clustering.
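For concreteness, here is a brief sketch (ours) of Gonzalez's farthest-point greedy for the outlier-free problem in Euclidean space; the parameter `first` stands in for the arbitrary initial choice.

```python
import math

def gonzalez(points, k, first=0):
    """Farthest-point greedy for ordinary k-center (2-approximation).
    Returns the k chosen centers and the resulting clustering radius."""
    centers = [points[first]]
    # d[j] = distance from points[j] to its nearest chosen center so far
    d = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        far = max(range(len(points)), key=d.__getitem__)
        centers.append(points[far])
        d = [min(dj, math.dist(p, points[far])) for dj, p in zip(d, points)]
    return centers, max(d)
```

On three well-separated pairs on a line, the returned radius is at most twice the optimal radius, as guaranteed.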
In Section~\ref{sec-center-bi} and \ref{sec-center-single}, we present our results for metric $k$-center with outliers. Actually, it is easy to see that Algorithms~\ref{alg-bi} and \ref{alg-single} yield the same approximation ratios if the input instance is a set of points in Euclidean space (the analysis is almost identical, and we omit the details due to the space limit); only the running times are different, since it takes $O(D)$ time to compute the distance between two points in $\mathbb{R}^D$.
\vspace{-0.03in}
\subsection{$(2, O(\frac{1}{\epsilon}))$-Approximation}
\label{sec-center-bi}
\vspace{-0.03in}
Here, we consider bi-criteria approximation that returns more than $k$ cluster centers. The main challenge for implementing Gonzalez's algorithm is that the outliers and inliers are mixed in $X$; for example, the selected vertex, which has the largest minimum distance to the already selected vertices, is very likely to be an outlier, and therefore the clustering quality could be arbitrarily bad. Instead, our strategy is to take a small sample from the farthest subset. We implement our idea in Algorithm~\ref{alg-bi}. For simplicity, let $\gamma$ denote $z/n$ in the algorithm; usually we can assume that $\gamma$ is a value much smaller than $1$. We prove the correctness of Algorithm~\ref{alg-bi} below.
\begin{algorithm}[tb]
\caption{Bi-criteria Approximation Algorithm}
\label{alg-bi}
\begin{algorithmic}
\STATE {\bfseries Input:} An instance $(X, d)$ of metric $k$-center clustering with $z$ outliers, and $|X|=n$; parameters $\epsilon>0$, $\eta\in (0,1)$, and $t\in\mathbb{Z}^+$.
\STATE
\begin{enumerate}
\item Let $\gamma=z/n$ and initialize a set $E=\emptyset$.
\item Initially, $j=1$; randomly select $\frac{1}{1-\gamma}\log\frac{1}{\eta}$ vertices from $X$ and add them to $E$.
\item Run the following steps until $j= t$:
\begin{enumerate}
\item $j=j+1$ and let $Q_j$ be the farthest $(1+\epsilon)z$ vertices of $X$ to $E$ (for each vertex $p\in X$, its distance to $E$ is $\min_{q\in E}d(p, q)$).
\item Randomly select $\frac{1+\epsilon}{\epsilon}\log\frac{1}{\eta}$ vertices from $Q_j$ and add them to $E$.
\end{enumerate}
\end{enumerate}
\STATE {\bfseries Output} $E$.
\end{algorithmic}
\end{algorithm}
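The following sketch (ours; the pseudocode above is the authoritative description) implements Algorithm~\ref{alg-bi} for points in the plane. All variable names are ours, the sample sizes follow Steps 2 and 3(b) with natural logarithms, and the number of rounds is the one used in the analysis.

```python
import math
import random

def bicriteria_centers(points, k, z, eps=1.0, eta=0.1, seed=0):
    """Sketch of Algorithm 1: each round samples several vertices from the
    (1+eps)z farthest ones instead of taking the single farthest vertex,
    so that a few outliers cannot hijack the greedy choice."""
    rng = random.Random(seed)
    n = len(points)
    gamma = z / n
    m0 = math.ceil(math.log(1 / eta) / (1 - gamma))      # Step 2 sample size
    m = math.ceil((1 + eps) / eps * math.log(1 / eta))   # Step 3(b) sample size
    t = math.ceil((k + math.sqrt(k)) / (1 - eta))        # rounds used in the analysis
    q = math.ceil((1 + eps) * z)                         # |Q_j|
    E = rng.sample(points, min(m0, n))
    d = [min(math.dist(p, e) for e in E) for p in points]
    for _ in range(t - 1):
        far = sorted(range(n), key=d.__getitem__, reverse=True)[:q]
        for e in rng.sample([points[j] for j in far], min(m, q)):
            E.append(e)
            d = [min(dj, math.dist(p, e)) for dj, p in zip(d, points)]
    # phi_eps(X, E): the radius after discarding the (1+eps)z farthest vertices
    return E, sorted(d)[n - q - 1]
```

On a toy input the per-round sample size exceeds $|Q_j|$, so every farthest vertex gets added and the residual radius collapses to $0$; in general one only gets the guarantee $\phi_{\epsilon}(X,E)\leq 2r_{opt}$ with the probability stated in Theorem~\ref{the-biapprox}.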
\vspace{-0.05in}
\begin{lemma}
\label{lem-select1}
With probability at least $1-\eta$, the set $E$ in Step 2 of Algorithm~\ref{alg-bi} contains at least one point from $X_{opt}$.
\end{lemma}
Since $|X_{opt}|/|X|= 1-\gamma$, Lemma~\ref{lem-select1} can be easily obtained by the following folklore claim (we show the proof in Section~\ref{sec-pro-sample}).
\vspace{-0.05in}
\begin{claim}
\label{pro-sample}
Let $U$ be a set of elements and $V\subseteq U$ with $\frac{|V|}{|U|}=\tau>0$. Given $\eta\in(0,1)$, if one randomly samples $\frac{1}{\tau}\log\frac{1}{\eta}$ elements from $U$, with probability at least $1-\eta$, the sample contains at least one element from $V$.
\end{claim}
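The claim follows from $(1-\tau)^{s}\leq e^{-\tau s}\leq\eta$ when $s\geq\frac{1}{\tau}\log\frac{1}{\eta}$; a small numeric check (assuming natural logarithms and independent uniform draws, which only upper-bounds the miss probability of sampling without replacement):

```python
import math

def miss_probability(tau, eta):
    """Probability that ceil((1/tau) * ln(1/eta)) i.i.d. uniform draws from U
    all miss a subset V of density tau."""
    s = math.ceil(math.log(1 / eta) / tau)
    return (1 - tau) ** s
```

For every tested pair $(\tau,\eta)$ the miss probability indeed stays below $\eta$.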
\vspace{-0.05in}
Recall that $\{C_1, C_2, \cdots, C_k\}$ are the $k$ clusters forming $X_{opt}$. Denote by $\lambda_j(E)$ the number of clusters that have non-empty intersection with $E$ at the beginning of the $j$-th round in Step~3 of Algorithm~\ref{alg-bi}. For example, initially $\lambda_1(E)\geq 1$ by Lemma~\ref{lem-select1}. Obviously, if $\lambda_j(E)=k$, i.e., $C_l\cap E\neq\emptyset$ for any $1\leq l\leq k$, then $E$ yields a $2$-approximation for $k$-center clustering with outliers by the triangle inequality.
\vspace{-0.02in}
\begin{claim}
\label{cla-e2}
If $\lambda_j(E)=k$, then $\phi_0(X, E)\leq 2 r_{opt}$.
\end{claim}
\vspace{-0.05in}
\begin{lemma}
\label{lem-select2}
In each round of Step 3 of Algorithm~\ref{alg-bi}, with probability at least $1-\eta$, either (1) $d(Q_j, E)\leq 2 r_{opt}$ or (2) $\lambda_j(E)\geq\lambda_{j-1}(E)+1$.
\end{lemma}
\begin{proof}
Suppose that (1) is not true, i.e., $d(Q_j, E)> 2 r_{opt}$, and we prove that (2) is true. Let $\mathcal{J}$ include all the indices $l\in\{1, 2, \cdots, k\}$ with $ E\cap C_l\neq\emptyset$. We claim that $Q_j\cap C_l=\emptyset$ for each $l\in \mathcal{J}$. Otherwise, let $p\in Q_j\cap C_l$ and $p'\in E\cap C_l$; due to the triangle inequality, we know that $d(p,p')\leq 2 r_{opt}$ which is in contradiction to the assumption $d(Q_j, E)> 2 r_{opt}$. Thus, $Q_j\cap X_{opt}$ only contains the vertices from $C_l$ with $l\notin \mathcal{J}$. Moreover, since the number of outliers is $z$, we know that $\frac{|Q_j\cap X_{opt}|}{|Q_j|}\geq \frac{\epsilon}{1+\epsilon}$. By Claim~\ref{pro-sample}, if randomly selecting $\frac{1+\epsilon}{\epsilon}\log\frac{1}{\eta}$ vertices from $Q_j$, with probability at least $1-\eta$, the sample contains at least one vertex from $Q_j\cap X_{opt}$; also, the vertex must come from $\cup_{l\notin \mathcal{J}}C_l$. That is, (2) $\lambda_j(E)\geq\lambda_{j-1}(E)+1$ happens.
\qed\end{proof}
If (1) of Lemma~\ref{lem-select2} happens, i.e., $d(Q_j, E)\leq 2 r_{opt}$, then it implies that $\max_{p\in X\setminus Q_j}d(p, E)\leq 2 r_{opt}$;
moreover, since $|Q_j|=(1+\epsilon)z$, we have $\phi_{\epsilon}(X,E)\leq 2 r_{opt}$.
Next, we assume that (1) in Lemma~\ref{lem-select2} never happens, and prove that $\lambda_j(E)=k$ with constant probability when $j=\Theta(k)$. The following idea actually has been used by Aggarwal et al.~\cite{aggarwal2009adaptive} for achieving a bi-criteria approximation for $k$-means clustering. Define a random variable $x_j$: $x_j=1$ if $\lambda_j(E)=\lambda_{j-1}(E)$, or $0$ if $\lambda_j(E)\geq\lambda_{j-1}(E)+1$, for $j=1, 2, \cdots$. So $\mathbb{E}[x_j]\leq\eta$ by Lemma~\ref{lem-select2} and
\begin{eqnarray}
\sum_{1\leq s\leq j}(1-x_s)\leq\lambda_j(E). \label{for-azuma2}
\end{eqnarray}
Also, let $J_j=\sum_{1\leq s\leq j}(x_s-\eta)$ and $J_0=0$. Then, $\{J_0, J_1, J_2, \cdots\}$ is a super-martingale with $J_{j+1}-J_j< 1$ (more details are shown in Section~\ref{sec-proof-the}). Through {\em Azuma-Hoeffding inequality}~\cite{alon2004probabilistic}, we have
$Pr(J_t\geq J_0+\delta)\leq e^{-\frac{\delta^2}{2t}}$
for any $t\in\mathbb{Z}^+$ and $\delta>0$. Let $t=\frac{k+\sqrt{k}}{1-\eta}$ and $\delta=\sqrt{k}$, the inequality implies that
\begin{eqnarray}
Pr(\sum_{1\leq s\leq t}(1-x_s)\geq k)\geq 1-e^{-\frac{1-\eta}{4}}. \label{for-azuma}
\end{eqnarray}
Combining (\ref{for-azuma2}) and (\ref{for-azuma}), we know that $\lambda_t(E)=k$ with probability at least $1-e^{-\frac{1-\eta}{4}}$. Moreover, $\lambda_t(E)=k$ directly implies that $E$ is a $2$-approximate solution by Claim~\ref{cla-e2}. Together with Lemma~\ref{lem-select1}, we have the following theorem.
\begin{theorem}
\label{the-biapprox}
Let $\epsilon>0$. If we set $t=\frac{k+\sqrt{k}}{1-\eta}$ for Algorithm~\ref{alg-bi}, with probability at least $(1-\eta)(1-e^{-\frac{1-\eta}{4}})$, $\phi_{\epsilon}(X,E)\leq 2 r_{opt}$.
\end{theorem}
\textbf{Quality and Running time.} If $\frac{1}{\eta}$ and $\frac{1}{1-\gamma}$ are constant numbers, Theorem~\ref{the-biapprox} implies that $E$ is a $\big(2, O(\frac{1}{\epsilon})\big)$-approximation for $(k,z)_{\epsilon}$-center clustering of $X$ with constant probability. In each round of Step~3, there are $O(\frac{1}{\epsilon})$ new vertices added to $E$, thus it takes $O(\frac{1}{\epsilon}n)$ time to update the distances from the vertices of $X$ to $E$; to select the set $Q_j$, we can apply the linear time selection algorithm~\cite{blum1973time}. Overall, the running time of Algorithm~\ref{alg-bi} is $O(\frac{k}{\epsilon}n)$. If the given instance is in $\mathbb{R}^D$, the running time will be $O(\frac{k}{\epsilon}n D)$.
Further, we consider the instances under some practical assumption, and provide new analysis of Algorithm \ref{alg-bi}.
In reality, the clusters are usually not too small compared with the number of outliers. For example, it is rare to have a cluster $C_l$ with $|C_l|\ll z$.
\begin{theorem}
\label{the-biapprox2}
If each optimal cluster $C_l$ has size at least $\epsilon z$ for $1\leq l\leq k$, the set $E$ of Algorithm~\ref{alg-bi} is a $\big(4, O(\frac{1}{\epsilon})\big)$-approximation for the problem of $(k,z)_{0}$-center clustering with constant probability.
\end{theorem}
Compared with Theorem~\ref{the-biapprox}, Theorem~\ref{the-biapprox2} shows that we can exactly exclude $z$ outliers (rather than $(1+\epsilon)z$), though the approximation ratio with respect to the radius becomes $4$.
\begin{proof}[Proof of Theorem~\ref{the-biapprox2}]
We take a closer look at the proof of Lemma~\ref{lem-select2}. If (1) never happens, eventually $\lambda_j(E)$ will reach $k$ and thus $\phi_0(X, E)\leq 2 r_{opt}$ (Claim~\ref{cla-e2}). So we focus on the case that (1) happens before $\lambda_j(E)$ reaches $k$.
Suppose that at the $j$-th round, $d(Q_j, E)\leq 2 r_{opt}$ but $\lambda_j(E)< k$.
We consider two cases: \textbf{(\rmnum{1})} there exists some $l_0\notin \mathcal{J}$ such that $C_{l_0}\subseteq Q_j$, and \textbf{(\rmnum{2})} otherwise.
For \textbf{(\rmnum{1})}, we have $C_{l_0}\subseteq Q_j$ for some $l_0\notin \mathcal{J}$. Note that we assume $|C_{l_0}|\geq \epsilon z$, i.e., $\frac{|C_{l_0}|}{|Q_j|}\geq \frac{\epsilon}{1+\epsilon}$. Using the same manner in the proof of Lemma~\ref{lem-select2}, we know that (2) $\lambda_j(E)\geq\lambda_{j-1}(E)+1$ happens with probability $1-\eta$. Thus, if \textbf{(\rmnum{1})} is always true, we can continue Step 3 and eventually $\lambda_j(E)$ will reach $k$, that is, a $\big(2, O(\frac{1}{\epsilon})\big)$-approximation of $(k,z)_{0}$-center clustering is obtained with constant probability.
For \textbf{(\rmnum{2})}, we have $C_{l}\setminus Q_j\neq\emptyset$ for all $l\notin\mathcal{J}$. Together with the assumption $d(Q_j, E)\leq 2 r_{opt}$, we know that there exists $q_l\in C_{l}\setminus Q_j$ (for each $l\notin\mathcal{J}$) such that $d(q_l, E)\leq d(Q_j, E)\leq 2 r_{opt}$. Consequently, we have that $\forall q\in C_{l}$,
\begin{eqnarray}
d(q, E)&\leq& d(q, q_l)+d(q_l, E)\leq 4 r_{opt} \text{ (see the left of Figure~\ref{fig-4app})}.
\end{eqnarray}
Note that for any $l\in \mathcal{J}$, $d(E, C_l)\leq 2 r_{opt}$ by the triangle inequality. Thus,
\begin{eqnarray}
\phi_0(X, E)&\leq& \max_{q\in \cup^k_{l=1}C_l} d(q, E)\leq 4 r_{opt}. \label{for-biapprox21}
\end{eqnarray}
So a $\big(4, O(\frac{1}{\epsilon})\big)$-approximation of $(k,z)_{0}$-center clustering is obtained.
\qed\end{proof}
\begin{algorithm}[tb]
\caption{$2$-Approximation Algorithm}
\label{alg-single}
\begin{algorithmic}
\STATE {\bfseries Input:} An instance $(X,d)$ of metric $k$-center clustering with $z$ outliers, and $|X|=n$; a parameter $\epsilon>0$.
\STATE
\begin{enumerate}
\item Initialize a set $E=\emptyset$.
\item Let $j=1$; randomly select one vertex from $X$ and add it to $E$.
\item Run the following steps until $j= k$:
\begin{enumerate}
\item $j=j+1$ and let $Q_j$ be the farthest $(1+\epsilon)z$ vertices to $E$.
\item Randomly select one vertex from $Q_j$ and add it to $E$.
\end{enumerate}
\end{enumerate}
\STATE {\bfseries Output} $E$.
\end{algorithmic}
\end{algorithm}
\subsection{$2$-Approximation}
\label{sec-center-single}
If $k$ is a constant, we show that a single-criterion $2$-approximation can be achieved. We use the same strategy as in Section~\ref{sec-center-bi}, but run only $k$ rounds, sampling a single vertex in each round. See Algorithm~\ref{alg-single}.
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-25pt}
\begin{center}
\includegraphics[width=0.45\textwidth]{fig1-new}
\end{center}
\vspace{-22pt}
\caption{Left: $p_e$ is a point of $E$ having distance $\leq 2 r_{opt}$ to $p_l$; right: $p_e$ is any point of $E$, $o_e$ and $o_p$ are the centers taking charge of $p_e$ and $p$.}
\label{fig-4app}
\vspace{-15pt}
\end{wrapfigure}
Denote by $\{v_1, \cdots, v_k\}$ the $k$ sampled vertices of $E$. Actually, the proof of Theorem~\ref{the-kcenter} is similar to the analysis in Section~\ref{sec-center-bi}. The only difference is that the probability that (2) $\lambda_j(E)\geq\lambda_{j-1}(E)+1$ happens is at least $\frac{\epsilon}{1+\epsilon}$. Also note that $v_1\in X_{opt}$ with probability $1-\gamma$ ($\gamma=z/n$). If all of these events happen, either we obtain a $2$-approximation before $k$ steps (i.e., $d(E, X\setminus Q_j)\leq 2 r_{opt}$ for some $j<k$), or $\{v_1, \cdots, v_k\}$ fall into the $k$ optimal clusters $C_1, C_2, \cdots, C_k$ separately (i.e., $\lambda_k(E)=k$). No matter which case happens, we always obtain a $2$-approximation with respect to $(k,z)_{\epsilon}$-center clustering. So we have Theorem~\ref{the-kcenter}.
\begin{theorem}
\label{the-kcenter}
With probability at least $(1-\gamma)(\frac{\epsilon}{1+\epsilon})^{k-1}$, Algorithm~\ref{alg-single} returns a $2$-approximation for the problem of $(k,z)_{\epsilon}$-center clustering on $X$. The running time is $O(kn)$. If the given instance is in $\mathbb{R}^D$, the running time will be $O(kn D)$.
\end{theorem}
To boost the probability of Theorem~\ref{the-kcenter}, we just need to repeatedly run the algorithm; the success probability is easy to calculate by taking the union bound.
\begin{corollary}
\label{the-kcenter2}
If we run Algorithm~\ref{alg-single} $O\big(\frac{1}{1-\gamma}(\frac{1+\epsilon}{\epsilon})^{k-1}\big)$ times, then with constant probability, at least one run returns a $2$-approximation for the problem of $(k,z)_\epsilon$-center clustering.
\end{corollary}
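Algorithm~\ref{alg-single} together with the repetition scheme of Corollary~\ref{the-kcenter2} can be sketched as follows (our sketch for Euclidean inputs; any single run succeeds only with the stated probability, so we keep the best run):

```python
import math
import random

def single_run(points, k, z, eps=1.0, rng=random):
    """One run of Algorithm 2: k rounds, each adding one random vertex drawn
    from the (1+eps)z vertices currently farthest from E."""
    n = len(points)
    q = min(n, math.ceil((1 + eps) * z))
    E = [points[rng.randrange(n)]]
    d = [math.dist(p, E[0]) for p in points]
    for _ in range(k - 1):
        far = sorted(range(n), key=d.__getitem__, reverse=True)[:q]
        c = points[rng.choice(far)]
        E.append(c)
        d = [min(dj, math.dist(p, c)) for dj, p in zip(d, points)]
    return E, d

def best_of_runs(points, k, z, runs, eps=1.0, seed=0):
    """Repetition as in the corollary: keep the run whose cost (radius after
    discarding the (1+eps)z farthest vertices) is smallest."""
    rng = random.Random(seed)
    q = min(len(points), math.ceil((1 + eps) * z))
    best = None
    for _ in range(runs):
        E, d = single_run(points, k, z, eps, rng)
        cost = sorted(d)[len(points) - q - 1]
        if best is None or cost < best[1]:
            best = (E, cost)
    return best
```

On three well-separated clusters plus two distant outliers, some run hits all three clusters, and the best cost then drops to the small in-cluster radius.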
Similar to Theorem~\ref{the-biapprox2}, we consider the practical instances.
We show that the quality of Theorem~\ref{the-kcenter} can be preserved even when exactly $z$ outliers are excluded, provided the optimal clusters are ``well separated''. This property was also studied for other clustering problems in practice~\cite{DBLP:journals/pami/KanungoMNPSW02,DBLP:journals/corr/abs-1205-4891}. Let $\{o_1, \cdots, o_k\}$ be the $k$ cluster centers of the optimal clusters $\{C_1, \cdots, C_k\}$.
\begin{theorem}
\label{the-kcenter3}
Suppose that each optimal cluster $C_l$ has size at least $\epsilon z$ and $d(o_l, o_{l'})>4 r_{opt}$ for $1\leq l\neq l'\leq k$. Then with probability at least $(1-\gamma)(\frac{\epsilon}{1+\epsilon})^{k-1}$, Algorithm~\ref{alg-single} returns a $2$-approximation for the problem of $(k,z)_0$-center clustering.
\end{theorem}
\begin{proof}
Initially, we know that $\lambda_1(E)=1$ with probability $1-\gamma$. Suppose that at the beginning of the $j$-th round of Algorithm~\ref{alg-single} with $2\leq j\leq k$, $E$ already has $j-1$ vertices separately falling in $j-1$ optimal clusters; also, we still let $\mathcal{J}$ be the set of the indices of these $j-1$ clusters. Then we have the following claim.
\begin{claim}
\label{cl-kcenter3}
$|Q_j\cap (\cup_{l\notin \mathcal{J}}C_l)|\geq \epsilon z$.
\end{claim}
\begin{proof}
For any $p\in \cup_{l\notin \mathcal{J}}C_l$, we have
\begin{eqnarray}
d(p, E)> 4 r_{opt}- r_{opt}-r_{opt}=2 r_{opt} \label{for-cl1}
\end{eqnarray}
from the triangle inequality and the assumption $d(o_l, o_{l'})>4 r_{opt}$ for $1\leq l\neq l'\leq k$ (see the right of Figure~\ref{fig-4app}). In addition, for any $p\in \cup_{l\in \mathcal{J}}C_l$, we have
\begin{eqnarray}
d(p, E)\leq 2 r_{opt}. \label{for-cl2}
\end{eqnarray}
We consider two cases. If $d(Q_j, E)\leq 2 r_{opt}$ at the current round, then (\ref{for-cl1}) directly implies that $\cup_{l\notin \mathcal{J}}C_l\subseteq Q_j$ (recall $Q_j$ is the set of farthest vertices to $E$); thus $|Q_j\cap (\cup_{l\notin \mathcal{J}}C_l)|=|\cup_{l\notin \mathcal{J}}C_l|\geq \epsilon z$ by the assumption that any $|C_l|\geq \epsilon z$.
Otherwise, $d(Q_j, E)> 2 r_{opt}$. Then $Q_j\cap (\cup_{l\in \mathcal{J}}C_l)=\emptyset$ by (\ref{for-cl2}). Moreover, since there are only $z$ outliers and $|Q_j|=(1+\epsilon)z$, we know that $|Q_j\cap (\cup_{l\notin \mathcal{J}}C_l)|\geq \epsilon z$.
\qed\end{proof}
Claim~\ref{cl-kcenter3} reveals that with probability at least $\frac{\epsilon}{1+\epsilon}$, the new added vertex falls in $\cup_{l\notin \mathcal{J}}C_l$, i.e., $\lambda_j(E)=\lambda_{j-1}(E)+1$. Overall, we know that $\lambda_k(E)=k$, i.e., $E$ is a $2$-approximation of $(k, z)_0$-center clustering (by Claim~\ref{cla-e2}), with probability at least $(1-\gamma)(\frac{\epsilon}{1+\epsilon})^{k-1}$.
\qed\end{proof}
\subsection{Reducing Data Size via Random Sampling}
\label{sec-core1}
Given a metric $(X,d)$, Charikar et al.~\cite{charikar2003better} showed that we can use a random sample $S$ to replace $X$. Recall $\gamma=z/n$. Let $|S|=O(\frac{k}{\epsilon^2 \gamma}\ln n)$ and $E$ be an $\alpha$-approximate solution of $(k, z)_\epsilon$-center clustering on $(S,d)$, then $E$ is an $\alpha$-approximate solution of $(k, z)_{O(\epsilon)}$-center clustering on $(X,d)$ with constant probability. In $D$-dimensional Euclidean space, Huang et al.~\cite{huang2018epsilon} showed a similar result, where the sample size $|S|=\tilde{O}(\frac{1}{\epsilon^2\gamma^2}kD)$\footnote{The asymptotic notation $\tilde{O}(f)=O\big(f\cdot polylog(\frac{kD}{\epsilon\gamma})\big)$.} (to be consistent with our paper, we change the notations in their theorem). In this section, we show that the sample size of \cite{huang2018epsilon} can be further improved to be $\tilde{O}(\frac{1}{\epsilon^2\gamma}kD)$, which can be a significant improvement if $\frac{1}{\gamma}=\frac{n}{z}$ is large.
Let $P$ be a set of $n$ points in $\mathbb{R}^D$. Consider the range space $\Sigma=(P, \Pi)$ where each range $\pi\in \Pi$ is the complement of the union of $k$ balls in $\mathbb{R}^D$. We know that the VC dimension of balls is $O(D)$~\cite{alon2004probabilistic}, and therefore the VC dimension of the union of $k$ balls is $O(kD \log k)$~\cite{blumer1989learnability}. That is, the VC dimension of the range space $\Sigma$ is $O(kD \log k)$.
Let $\epsilon\in(0,1)$. An ``$\epsilon$-sample'' $S$ of $P$ is defined as follows: $\forall \pi\in\Pi$, $\big|\frac{|\pi\cap P|}{|P|}-\frac{|\pi\cap S|}{|S|}\big|\leq \epsilon$; roughly speaking, $S$ is an approximation of $P$ with an additive error inside each range $\pi$.
Given a range space with VC dimension $m$, an $\epsilon$-sample can be easily obtained via uniform sampling~\cite{alon2004probabilistic}, where the success probability is $1-\lambda$ and the sample size is $O\big(\frac{1}{\epsilon^2}(m\log\frac{m}{\epsilon}+\log\frac{1}{\lambda})\big)$ for any $0<\lambda<1$.
For our problem, we need to replace the ``$\epsilon$'' of the ``$\epsilon$-sample'' by $\epsilon\gamma$ to guarantee that the number of uncovered points is bounded by $\big(1+O(\epsilon)\big)\gamma n$ (we show the details below); the resulting sample size will be $\tilde{O}(\frac{1}{\epsilon^2\gamma^2}kD)$, which is the same as the sample size of \cite{huang2018epsilon} (we assume that the term $\log\frac{1}{\lambda}$ is a constant for convenience).
Actually, the leading factor $\frac{1}{\epsilon^2\gamma^2}$ of the sample size can be further reduced to $\frac{1}{\epsilon^2\gamma}$ by a more careful analysis. We observe that there is no need to guarantee the additive error for each range $\pi$ (as in the definition of an $\epsilon$-sample). Instead, a multiplicative error for the ranges covering at least $\gamma n$ points suffices. Note that when a range covers more points, the multiplicative error is weaker than the additive error, and thus the sample size is reduced. For this purpose, we use {\em relative approximation}~\cite{har2011relative,li2001improved}: let $S\subseteq P$ be a subset of size $\tilde{O}(\frac{1}{\epsilon^2\gamma}kD)$ chosen uniformly at random, then with constant probability,
\begin{eqnarray}
\forall \pi\in\Pi,\ \Big|\frac{|\pi\cap P|}{|P|}-\frac{|\pi\cap S|}{|S|}\Big|\leq \epsilon\times\max\Big\{\frac{|\pi\cap P|}{|P|}, \gamma\Big\}. \label{for-relativesample}
\end{eqnarray}
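As a quick sanity check on the relative-approximation inequality (\ref{for-relativesample}), the following Python sketch (our own illustration, not part of the paper's experiments) verifies it empirically for a uniform sample; for simplicity, one-dimensional threshold ranges stand in for complements of unions of $k$ balls.

```python
import random

def relative_error_holds(P, S, ranges, eps, gamma):
    """Check the relative-approximation inequality: for every range,
    the sample frequency is within eps * max(true frequency, gamma)
    of the true frequency."""
    for contains in ranges:
        f_P = sum(contains(p) for p in P) / len(P)
        f_S = sum(contains(p) for p in S) / len(S)
        if abs(f_P - f_S) > eps * max(f_P, gamma):
            return False
    return True

random.seed(0)
P = [random.random() for _ in range(20000)]
S = random.sample(P, 2000)              # uniform sample, 10% of P
# 1-D stand-in ranges: "points above a threshold" plays the role of
# the complement of a union of balls
ranges = [(lambda p, t=t: p > t) for t in (i / 100 for i in range(100))]
print(relative_error_holds(P, S, ranges, eps=0.5, gamma=0.05))
```

With these margins (a $10\%$ sample, $\epsilon=0.5$, $\gamma=0.05$), the deviation bound is several standard deviations wide for every range, so the check passes with overwhelming probability.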
We formally state our result below.
\begin{theorem}
\label{the-samplereduce}
Let $P$ be an instance for the problem of $k$-center clustering with outliers in $\mathbb{R}^{D}$ as described in Definition~\ref{def-outlier}, and $S\subseteq P$ be a subset of size $\tilde{O}(\frac{1}{\epsilon^2\gamma}kD)$ chosen uniformly at random. Suppose $\epsilon\leq 0.5$. Let $S$ be a new instance for the problem of $k$-center clustering with outliers where the number of outliers is set to be $z'=(1+\epsilon)\gamma |S|$. If $E$ is an $\alpha$-approximate solution of $(k, z')_{\epsilon}$-center clustering on $S$, then $E$ is an $\alpha$-approximate solution of $(k, z)_{O(\epsilon)}$-center clustering on $P$, with constant probability.
\end{theorem}
\begin{proof}
We assume that $S$ is a relative approximation of $P$ and (\ref{for-relativesample}) holds (this happens with constant probability). Let $\mathbb{B}_{opt}$ be the set of $k$ balls covering $(1-\gamma)n$ points induced by the optimal solution for $P$, and $\mathbb{B}_{S}$ be the set of $k$ balls induced by an $\alpha$-approximate solution of $(k, z')_{\epsilon}$-center clustering on $S$. Suppose the radius of each ball in $\mathbb{B}_{opt}$ (resp., $\mathbb{B}_{S}$) is $r_{opt}$ (resp., $r_S$). We denote the complements of $\mathbb{B}_{opt}$ and $\mathbb{B}_{S}$ as $\pi_{opt}$ and $\pi_{S}$, respectively.
First, since $\mathbb{B}_{opt}$ covers $(1-\gamma)n$ points of $P$ and $S$ is a relative approximation of $P$, we have
\begin{eqnarray}
\frac{\big|\pi_{opt}\cap S\big| }{|S|}\leq \frac{\big|\pi_{opt}\cap P\big| }{|P|}+ \epsilon\times\max\Big\{\frac{|\pi_{opt}\cap P|}{|P|}, \gamma\Big\}= (1+\epsilon)\gamma \label{for-samplereduce4}
\end{eqnarray}
by (\ref{for-relativesample}). That is, the set of balls $\mathbb{B}_{opt}$ covers at least $\big(1-(1+\epsilon)\gamma\big) |S|$ points of $S$, and therefore it is a feasible solution for the instance $S$ with respect to the problem of $k$-center clustering with $z'$ outliers.
Since $\mathbb{B}_{S}$ is an $\alpha$-approximate solution of $(k, z')_{\epsilon}$-center clustering on $S$, we have
\begin{eqnarray}
r_S \leq \alpha r_{\textrm{opt}}; \hspace{0.2in}
|\pi_{S}\cap S| \leq (1+\epsilon)z'=(1+\epsilon)^2\gamma|S|. \label{for-samplereduce2}
\end{eqnarray}
Now, we claim that
\vspace{-0.2in}
\begin{eqnarray}
\big|\pi_{S}\cap P\big|\leq \frac{(1+\epsilon)^2}{1-\epsilon}\gamma |P|. \label{for-samplereduce3}
\end{eqnarray}
Assume that (\ref{for-samplereduce3}) is not true, then (\ref{for-relativesample}) implies
$\Big|\frac{|\pi_{S}\cap P|}{|P|}-\frac{|\pi_{S}\cap S|}{|S|}\Big|\leq \epsilon\times \max\Big\{\frac{|\pi_{S}\cap P|}{|P|},\gamma\Big\}=\epsilon \frac{|\pi_{S}\cap P|}{|P|}$.
So $\frac{|\pi_{S}\cap S|}{|S|}\geq (1-\epsilon)\frac{|\pi_{S}\cap P|}{|P|}>(1+\epsilon)^2\gamma$, which contradicts the second inequality of (\ref{for-samplereduce2}); thus (\ref{for-samplereduce3}) is true. Since $\epsilon\leq 0.5$, we have $\frac{1}{1-\epsilon}\leq 1+2\epsilon$ and $\frac{(1+\epsilon)^2}{1-\epsilon}=1+O(\epsilon)$. Consequently, (\ref{for-samplereduce3}) and the first inequality of (\ref{for-samplereduce2}) together imply that $\mathbb{B}_{S}$ is an $\alpha$-approximate solution of $(k, z)_{O(\epsilon)}$-center clustering on $P$.
\qed\end{proof}
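Theorem~\ref{the-samplereduce} suggests a simple pipeline: draw a uniform sample, rescale the outlier budget to the sample, and solve on the sample. A hedged Python sketch of this pipeline (the `solver` callback is a placeholder for any $(k,z')$-center-with-outliers subroutine; the centers it returns are then reused on the full point set):

```python
import random

def solve_on_sample(points, z, eps, solver, sample_size, seed=0):
    """Pipeline sketch of the sampling theorem: solve k-center with
    outliers on a uniform sample, with the outlier budget rescaled to
    z' = (1 + eps) * gamma * |S|.  `solver(S, z_prime)` is a
    placeholder for any concrete algorithm."""
    rng = random.Random(seed)
    gamma = z / len(points)             # outlier fraction of the input
    S = rng.sample(points, sample_size) # uniform sample without replacement
    z_prime = int((1 + eps) * gamma * len(S))
    return solver(S, z_prime)           # centers found on S, reused on points
```

For instance, with $n=1000$, $z=100$, $\epsilon=0.5$, and a sample of $200$ points, the rescaled budget is $z'=\lfloor 1.5\cdot 0.1\cdot 200\rfloor = 30$.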
\vspace{-0.15in}
\section{Coreset Construction in Doubling Metrics}
\label{sec-doubling}
\vspace{-0.1in}
In this section, we always assume the following is true by default:
\vspace{0.05in}
{\em Given an instance $(X,d)$ of $k$-center clustering with outliers, the metric $(X_{opt},d)$, i.e., the metric formed by the set of inliers, has a constant doubling dimension $\rho>0$.}
\vspace{0.05in}
We do not place any restriction on the outliers $X\setminus X_{opt}$. Thus the above assumption is weaker, and more practical, than assuming that the whole $(X,d)$ has a constant doubling dimension.
From Definition~\ref{def-dd}, we directly know that each optimal cluster $C_l$ of $X_{opt}$ can be covered by $2^\rho$ balls with radius $r_{opt}/2$ (see the left figure in Figure~\ref{fig-dd}). Imagine that the instance $(X, d)$ has $2^\rho k$ clusters, where the optimal radius is at most $r_{opt}/2$. Therefore, we can simply replace $k$ by $2^\rho k$ when running Algorithm~\ref{alg-bi}, so as to reduce the approximation ratio (i.e., the ratio of the resulting radius to $r_{opt}$) from $2$ to $1$.
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-30pt}
\begin{center}
\includegraphics[width=0.37\textwidth]{doubling2}
\end{center}
\vspace{-22pt}
\caption{Illustrations for Theorem~\ref{the-double-biapprox} and \ref{the-coreset}.}
\label{fig-dd}
\vspace{-18pt}
\end{wrapfigure}
\vspace{-0.1in}
\begin{theorem}
\label{the-double-biapprox}
If we set $t=\frac{2^\rho k+2^{\rho/2}\sqrt{k}}{1-\eta}$ for Algorithm~\ref{alg-bi}, with probability $(1-\eta)(1-e^{-\frac{1-\eta}{4}})$, $\phi_{\epsilon}(X,E)\leq r_{opt}$. So the set $E$ is a $\big(1, O(\frac{2^\rho}{\epsilon})\big)$-approximation for the problem of $(k,z)_{\epsilon}$-center clustering, and the running time is $O(2^\rho\frac{k}{\epsilon}n)$.
\end{theorem}
\vspace{-0.1in}
If considering the problem in Euclidean space $\mathbb{R}^D$ where the doubling dimension of the inliers is $\rho$, the running time becomes $O(2^\rho\frac{k}{\epsilon}nD)$.
Inspired by Theorem~\ref{the-double-biapprox}, we can further construct a coreset for $k$-center clustering with outliers (see Definition~\ref{def-coreset}). Let $\mu\in (0,1)$. Applying Definition~\ref{def-dd} recursively, we know that each $C_l$ is covered by $2^{\rho\log \frac{2}{\mu}}=(\frac{2}{\mu})^\rho$ balls with radius $\frac{\mu}{2} r_{opt}$, and $X_{opt}$ is covered by $(\frac{2}{\mu})^\rho k$ such balls in total. See the right figure in Figure~\ref{fig-dd}. Based on this observation, we have Algorithm~\ref{alg-coreset} for constructing a $\mu$-coreset.
\vspace{-0.05in}
\begin{theorem}
\label{the-coreset}
With constant probability, Algorithm~\ref{alg-coreset} outputs a $\mu$-coreset $E$ of $k$-center clustering with $z$ outliers. The size of $E$ is at most $2z+O\big((\frac{2}{\mu})^\rho k\big)$, and the construction time is $O((\frac{2}{\mu})^\rho kn)$.
\end{theorem}
\textbf{Remark.} \textbf{(1)} The previous ideas based on uniform sampling~\cite{charikar2003better,huang2018epsilon} (including our idea in Section~\ref{sec-core1}) cannot avoid violating the bound on the number of outliers; the sample sizes would become infinite if we were not allowed to remove more than $z$ outliers. Our coreset in Theorem~\ref{the-coreset} works when removing exactly $z$ outliers. Consequently, our coreset can be used with existing algorithms for $k$-center clustering with outliers, such as \cite{charikar2001algorithms}, to reduce their complexities. \textbf{(2)} Another feature is that our coreset is naturally composable. If $X$ (or the point set $P$) is partitioned into $L$ parts, we can run Algorithm~\ref{alg-coreset} on each part, and obtain a coreset of size $\Big(2z+O\big((\frac{2}{\mu})^\rho k\big)\Big) L$ in total (the proof is almost identical to the proof of Theorem~\ref{the-coreset} below). So our coreset construction can potentially be applied to distributed clustering with outliers. \textbf{(3)} The coefficient $2$ of $z$ can actually be further reduced by modifying the value of $\epsilon$ in Step 2 of Algorithm~\ref{alg-coreset} (we set $\epsilon=1$ for simplicity). In general, the size of $E$ is $(1+\epsilon)z+O\big(\frac{1}{\epsilon}(\frac{2}{\mu})^\rho k\big)$ and the construction time is $O(\frac{1}{\epsilon}(\frac{2}{\mu})^\rho kn)$ (or $O(\frac{1}{\epsilon}(\frac{2}{\mu})^\rho knD)$ in $\mathbb{R}^D$).
\begin{algorithm}[tb]
\caption{The Coreset Construction}
\label{alg-coreset}
\begin{algorithmic}
\STATE {\bfseries Input:} An instance $(X,d)$ of metric $k$-center clustering with $z$ outliers, and $|X|=n$; parameters $\eta$ and $\mu\in (0,1)$.
\STATE
\begin{enumerate}
\item Let $l=(\frac{2}{\mu})^\rho k$.
\item Set $\epsilon=1$ and run Algorithm~\ref{alg-bi} $t=\frac{l+\sqrt{l}}{1-\eta}$ rounds.
Denote by $\tilde{r}$ the maximum distance between $E$ and $X$ by excluding the farthest $2z$ vertices, after the final round of Algorithm~\ref{alg-bi}.
\item Let
$X_{\tilde{r}}=\{p\mid p\in X \text{ and } d(p, E)\leq \tilde{r}\}$.
\item For each vertex $p\in X_{\tilde{r}}$, assign it to its nearest neighbor in $E$; for each vertex $q\in E$, let its weight be the number of vertices assigned to it.
\item Add $X\setminus X_{\tilde{r}}$ to $E$; each vertex of $X\setminus X_{\tilde{r}}$ has weight $1$.
\end{enumerate}
\STATE {\bfseries Output} $E$ as the coreset.
\end{algorithmic}
\end{algorithm}
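A plain-Python sketch of the construction (our own illustration, not the paper's Matlab code): with $\epsilon=1$, each round samples the new center uniformly from the $2z$ farthest points, and `l` plays the role of $(\frac{2}{\mu})^\rho k$; for brevity we run $l$ rounds rather than the $\frac{l+\sqrt{l}}{1-\eta}$ rounds of Step 2.

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_coreset(points, z, l, seed=0):
    """Sketch of the coreset construction: greedy centers (eps = 1),
    then compress the covered points into weighted centers and keep
    the uncovered points individually with unit weight."""
    rng = random.Random(seed)
    E = [rng.choice(points)]
    d_to_E = [dist(p, E[0]) for p in points]
    for _ in range(l - 1):
        # Q_j = the (1 + eps)z = 2z farthest points from E
        order = sorted(range(len(points)), key=lambda i: -d_to_E[i])
        q = rng.choice(order[:2 * z])
        E.append(points[q])
        d_to_E = [min(d_to_E[i], dist(points[i], points[q]))
                  for i in range(len(points))]
    # r_tilde: max distance after excluding the farthest 2z points
    r_tilde = sorted(d_to_E)[len(points) - 2 * z - 1]
    weights = [0] * len(E)
    extras = []
    for i, p in enumerate(points):
        if d_to_E[i] <= r_tilde:
            j = min(range(len(E)), key=lambda j: dist(p, E[j]))
            weights[j] += 1
        else:
            extras.append((p, 1))
    return [(c, w) for c, w in zip(E, weights) if w > 0] + extras
```

The returned list of (point, weight) pairs has total weight $n$ and at most $2z + l$ entries, matching the size bound of Theorem~\ref{the-coreset}.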
\begin{proof}[Proof of Theorem~\ref{the-coreset}]
As in Theorem~\ref{the-double-biapprox}, we know that $|X_{\tilde{r}}|= n-2z$ and $\tilde{r}\leq 2\times \frac{\mu}{2} r_{opt}=\mu r_{opt}$ with constant probability in Algorithm~\ref{alg-coreset}. Thus, the size of $E$ is $|X\setminus X_{\tilde{r}}|+O\big((\frac{2}{\mu})^\rho k\big)= 2z+O\big((\frac{2}{\mu})^\rho k\big)$. Moreover, it is easy to see that the running time of Algorithm~\ref{alg-coreset} is $O((\frac{2}{\mu})^\rho kn)$.
Next, we show that $E$ is a $\mu$-coreset of $X$. For each vertex $q\in E$, denote by $w(q)$ the weight of $q$; for the sake of convenience in our proof, we view each $q$ as a set of $w(q)$ overlapping unit weight vertices. Thus, from the construction of $E$, we can see that there is a bijective mapping $f$ between $X$ and $E$, where
\begin{eqnarray}
||p-f(p)||\leq\tilde{r}\leq\mu r_{opt}, \hspace{0.2in} \forall p\in X. \label{for-map}
\end{eqnarray}
Let $H=\{c_1, c_2, \cdots, c_k\}$ be any $k$ vertices of $X$. Suppose that $H$ induces $k$ clusters $\{A_1, A_2, \cdots, A_k\}$ (resp., $\{B_1, B_2, \cdots, B_k\}$) with respect to the problem of $k$-center clustering with $z$ outliers on $E$ (resp., $X$), where each $A_j$ (resp., $B_j$) has the cluster center $c_j$ for $1\leq j\leq k$. Let $r_E=\phi_0(E, H)$ and $r_X=\phi_0(X, H)$, respectively. Also, let $r'_E$ (resp., $r'_X$) be the smallest value $r$, such that for any $1\leq j\leq k$, $f(B_j)\subseteq Ball(c_j, r)$ (resp., $f^{-1}(A_j)\subseteq Ball(c_j, r)$). We need the following claim.
\begin{claim}
\label{cla-core}
$|r'_E-r_X|\leq \mu r_{opt}$ and $|r'_X-r_E|\leq \mu r_{opt}$ (see the proof in Section~\ref{sec-proof-cla-core}).
\end{claim}
In addition, since $\{f(B_1), \cdots, f(B_k)\}$ also form $k$ clusters for the instance $E$ with the fixed $k$ cluster centers of $H$, we know that
$r'_E\geq \phi_0(E, H)=r_E$.
Similarly, we have
$r'_X\geq r_X$.
Combining these inequalities with Claim~\ref{cla-core}, we have
\begin{eqnarray}
r_X-\mu r_{opt}\leq \underbrace{r'_X-\mu r_{opt}\leq r_E}_{\text{by Claim~\ref{cla-core}}}\leq \underbrace{r'_E\leq r_X+\mu r_{opt}}_{\text{by Claim~\ref{cla-core}}}.
\end{eqnarray}
So $|r_X-r_E|\leq \mu r_{opt}$, i.e., $\phi_0(E, H)\in \phi_0(X, H)\pm \mu r_{opt}\subseteq(1\pm \mu) \phi_0(X, H)$. Therefore $E$ is a $\mu$-coreset of $(X,d)$.
\qed\end{proof}
\section{Experiments}
\label{sec-exp}
\vspace{-0.1in}
Our experimental results were obtained on a Windows workstation with 2.8GHz Intel(R) Core(TM) i5-840 and 8GB main memory; the algorithms were implemented in Matlab R2018a. We test our algorithms on both synthetic and real datasets. For Algorithm~\ref{alg-single}, we take two well known algorithms of $k$-center clustering with outliers, $Base1$ of~\cite{charikar2001algorithms} and $Base2$ of~\cite{mccutchen2008streaming}, as the baselines. For Algorithm~\ref{alg-coreset}, we compare our coreset construction with uniform random sampling.
To generate the synthetic datasets, we set $n=10^{5}$ and $D=10^{3}$, and vary the values of $z$ and $k$.
First, we randomly generate $k$ clusters inside a hypercube of side length $200$, where each cluster is a random sample from a Gaussian distribution with variance $10$; each cluster contains a random number of points, and we keep the total number of points equal to $n-z$; we compute the minimum enclosing balls of these $k$ clusters (using the algorithm of \cite{badoiu2003smaller}) and randomly generate $z$ outliers outside the balls. The maximum radius of the balls is used as $r_{opt}$.
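The synthetic-data recipe can be sketched as follows (a hedged stand-in for our generator: rather than testing membership in the exact minimum enclosing balls, this sketch shifts each outlier far away along one coordinate, which guarantees it lies outside every cluster's ball):

```python
import random

def make_synthetic(n, D, k, z, side=200.0, sigma=10.0, seed=0):
    """Sketch of the synthetic-data recipe: k Gaussian clusters with
    random centers inside a hypercube, n - z inliers in total, plus
    z far-away outliers."""
    rng = random.Random(seed)
    centers = [[rng.uniform(0, side) for _ in range(D)] for _ in range(k)]
    # split the n - z inliers randomly among the k clusters
    sizes = [1] * k
    for _ in range(n - z - k):
        sizes[rng.randrange(k)] += 1
    inliers = [[rng.gauss(c_j, sigma) for c_j in c]
               for c, s in zip(centers, sizes) for _ in range(s)]
    # crude stand-in for "outside the minimum enclosing balls":
    # push each outlier far away along a random coordinate
    outliers = []
    for _ in range(z):
        p = [rng.uniform(0, side) for _ in range(D)]
        p[rng.randrange(D)] += 10 * side
        outliers.append(p)
    return inliers, outliers
```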
We also use three real datasets. MNIST dataset~\cite{lecun1998gradient} contains $n=60,000$ handwritten digit images from $0$ to $9$, where each image is represented by a $784$-dimensional vector. The $10$ digits form $k=10$ clusters.
Caltech-256 dataset~\cite{fei2007learning} contains $30,607$ colored images with 256 categories, where each image is represented by a $4096$-dimensional vector. We choose $n=2,232$ images of 20 categories to form $k=20$ clusters.
CIFAR-10 training dataset~\cite{krizhevsky2009learning} contains $n=50,000$ colored images in 10 classes, forming $k=10$ clusters, where each image is represented by a $4096$-dimensional vector. For each real dataset, we use the minimum enclosing ball algorithm of \cite{badoiu2003smaller} to compute $r_{opt}$, and randomly generate $z=5\% n$ outliers outside the corresponding balls.
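The minimum-enclosing-ball subroutine of \cite{badoiu2003smaller} admits a very short implementation: repeatedly step the center toward the current farthest point; after $\lceil 1/\epsilon^2\rceil$ iterations the covering radius is at most $(1+\epsilon)$ times optimal. A Python sketch (our illustration):

```python
def meb_radius(points, iters=1000):
    """(1 + eps)-approximate minimum-enclosing-ball radius via the
    iterative scheme of Badoiu and Clarkson (eps ~ 1/sqrt(iters)):
    repeatedly move the center a 1/(i+1) step toward the farthest point."""
    c = list(points[0])
    for i in range(1, iters + 1):
        far = max(points,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(c, p)))
        c = [a + (b - a) / (i + 1) for a, b in zip(c, far)]
    return max(sum((a - b) ** 2 for a, b in zip(c, p)) ** 0.5
               for p in points)
```

On the four corners of a square of side $2$, for example, the returned radius converges to the true value $\sqrt{2}$.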
\textbf{Results and analysis.} Note that we exclude exactly $z$ outliers (rather than $(1+\epsilon)z$ as stated in Theorems~\ref{the-biapprox} and \ref{the-kcenter}) in our experiments, and calculate the approximation ratio $\phi_0(X, E)/r_{opt}$ for each instance, where $E$ is the set of returned cluster centers.
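Concretely, $\phi_0(X, E)$ assigns each point to its nearest center and discards the $z$ largest distances; a short sketch (our illustration) of this evaluation step:

```python
def phi_0(points, centers, z):
    """phi_0(X, E): clustering radius after discarding exactly the z
    points farthest from their nearest center."""
    d = sorted(min(sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
                   for c in centers)
               for p in points)
    return d[len(points) - z - 1]
```

The reported approximation ratio is then `phi_0(points, centers, z) / r_opt`.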
We first run our Algorithm~\ref{alg-bi} on synthetic and real datasets. For synthetic datasets, we set $k=2$-$20$ and $\beta=|E|/k=8$ by modifying the values of $\epsilon$ and $\eta$ appropriately (i.e., we output $8k$ cluster centers); typically, we set $\eta=0.1$ and $\epsilon\approx 0.7$. We try the instances with $z=\{2\%n, 4\%n, 6\%n, 8\%n, 10\%n\}$, and report the average results in Figures~\ref{fig-a1kr} and \ref{fig-a1kt}; the approximation ratios are within $1.3$-$1.4$ and the running times are less than $30$s. The performance is quite stable across different values of $z$ in our experiments: the standard deviations of the approximation ratios and running times are less than $0.03$ and $0.12$, respectively. We also vary the value of $\beta$ from $4$ to $28$ with $k=10$. Figure~\ref{fig-a1er} shows that the approximation ratio slightly decreases as $\beta$ increases. The running times are all around $14$s and do not reveal a clear increasing trend as $\beta$ increases. We suspect the reason is that we use the simple $O(n\log n)$ sorting algorithm, rather than the linear-time selection algorithm~\cite{blum1973time}, to compute $Q_j$ in practice (see Step 3(a) of Algorithm~\ref{alg-bi}); thus the running time does not depend linearly on $|E|$. The results for real datasets are shown in Section~\ref{sec-exp-alg1}; the approximation ratios are all below $1.3$ and the running times are less than $35$s even for the largest CIFAR-10 dataset.
\begin{figure}[h]
\vspace{-.6cm}
\centering
\subfloat[]{\label{fig-a1kr}\includegraphics[height=1in]{a1_k_ratio}}
\subfloat[]{\label{fig-a1kt}\includegraphics[height=1in]{a1_k_time}}
\subfloat[]{\label{fig-a1er}\includegraphics[height=1in]{a1_e_ratio}}
\vspace{-0.1in}
\caption{The experimental results of Algorithm~\ref{alg-bi} on synthetic datasets.}
\vspace{-0.2in}
\end{figure}
We also test our Algorithm~\ref{alg-single} on synthetic and real datasets. We set $\epsilon=1$ so as to avoid repeating Algorithm~\ref{alg-single} too many times (see Corollary~\ref{the-kcenter2}), but we still exclude exactly $z$ outliers when calculating the approximation ratio, as mentioned before. Our results are shown in Table~\ref{tab-alg2}. The synthetic and real datasets are too large for the baseline algorithms $Base1$ and $Base2$; e.g., they run too slowly or even run out of memory on our workstation when $n$, $z$, and $D$ are large (they have complexities $\Omega(n^2 D)$ or $\Omega(kznD)$)\footnote{We are aware of several distributed algorithms for $k$-center clustering with outliers~\cite{malkomes2015fast,guha2017distributed,DBLP:journals/corr/abs-1802-09205,li2018distributed}, but we only consider the single-machine setting in this paper.}. To make a fair comparison, we run $Base1$, $Base2$, and Algorithm~\ref{alg-single} on smaller synthetic datasets with $(n=2000, D=10)$ and $(n=2000, D=100)$; we also set $z=\{2\%n, 4\%n, 6\%n, 8\%n, 10\%n\}$ as before and report the average results. When $D=10$, $Base1$ and Algorithm~\ref{alg-single} generally achieve approximation ratios $<1.5$ (Figure~\ref{fig-a210r}); moreover, $Base2$ and Algorithm~\ref{alg-single} run much faster than $Base1$ (Figure~\ref{fig-a210t}). However, when $D=100$, $Base1$ and $Base2$ yield much worse approximation ratios than Algorithm~\ref{alg-single} (Figures~\ref{fig-a2100r} and \ref{fig-a2100t}). Our experiments reveal that Algorithm~\ref{alg-single} achieves a more stable performance as the dimensionality increases.
\begin{figure}[h]
\vspace{-.8cm}
\centering
\subfloat[]{\label{fig-a210r}\includegraphics[height=0.89in]{a2_10r}}
\subfloat[]{\label{fig-a210t}\includegraphics[height=0.89in]{a2_10t}}
\subfloat[]{\label{fig-a2100r}\includegraphics[height=0.89in]{a2_100r}}
\subfloat[]{\label{fig-a2100t}\includegraphics[height=0.89in]{a2_100t}}
\vspace{-0.1in}
\caption{Comparison of Base1, Base2, and Algorithm~\ref{alg-single} on smaller synthetic datasets ((a) and (b) for $D=10$; (c) and (d) for $D=100$).}
\vspace{-0.25in}
\end{figure}
\begin{table}[h]
\vspace{-0.35in}
\centering
\caption{The results of Algorithm~\ref{alg-single} on synthetic and real datasets}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}[4]{*}{} & \multicolumn{4}{c|}{Synthetic datasets} & \multicolumn{3}{c|}{Real datasets} \\
\cline{2-8} & k=2 & k=4 & k=6 & k=8 & MNIST & CALTECH256 & CIFAR10 \\
\hline
Approx. ratio& 1.410 & 1.403 & 1.406 & 1.423 & 1.277 & 1.368 & 1.249 \\
\hline
Running time(s) & 8.097 & 63.636 & 374.057 & 1939.004 & 2644.709 & 3381.421 & 13295.306 \\
\hline
\end{tabular}%
\label{tab-alg2}%
\vspace{-0.2in}
\end{table}%
Finally, we compare the performance of our coreset method (Algorithm~\ref{alg-coreset}) and uniform random sampling in terms of reducing data sizes. Though real-world image datasets are often believed to have low intrinsic dimensions~\cite{belkin2003problems}, it is difficult to compute them (e.g., the doubling dimension) accurately. In practice, we can directly set an appropriate value for $l$ in Step 1 of Algorithm~\ref{alg-coreset} (without knowing the value of the doubling dimension $\rho$). For example, the size of the coreset is $2z+O\big((\frac{2}{\mu})^\rho k\big)=2z+O(l)$ according to Theorem~\ref{the-coreset}, so we keep the sizes of our coresets at $\{15\%n, 20\%n, 25\%n\}$ by modifying the value of $l$ in our experiments. Correspondingly, we also set the sizes of the random samples to $\{15\%n, 20\%n, 25\%n\}$. We run Algorithm~\ref{alg-single} on the corresponding random samples and coresets, and report the results in Table~\ref{tab-core}. Running Algorithm~\ref{alg-single} on the coresets yields approximation ratios close to those obtained by running the algorithm directly on the original datasets; the results also remain stable when the level decreases from $25\%$ to $15\%$. More importantly, our coresets significantly reduce the running times (e.g., the $15\%$-level coreset needs only $15\%$-$35\%$ of the time). Compared with the random samples, our coresets achieve significantly lower approximation ratios, especially at the $15\%$ level. Note that the coreset-based approach takes more time than uniform random sampling, because we count the time spent on coreset construction.
\begin{table}[h]
\vspace{-0.3in}
\centering
\caption{The results of Algorithm~\ref{alg-single} on random samples, coresets, and original datasets}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{random sampling} & \multicolumn{3}{c|}{coreset} & \multirow{2}[4]{*}{100\%} \\
\cline{3-8} \multicolumn{2}{|c|}{} & 15\% & 20\% & 25\% & 15\% & 20\% & 25\% & \\
\hline
MNIST & Appro. Ratio & 1.591 & 1.597 & 1.566 & 1.275 & 1.261 & 1.261 & 1.277 \\
\cline{2-9} & running time(s) & 624.612 & 769.517 & 958.549 & 936.393 & 1071.926 & 1262.996 & 2644.709 \\
\hline
CALTECH256 & Appro. Ratio & 2.903 & 2.826 & 1.935 & 1.713 & 1.722 & 1.701 & 1.368 \\
\cline{2-9} & running time(s) & 486.647 & 502.605 & 598.059 & 489.625 & 505.537 & 600.900 & 3381.421 \\
\hline
CIFAR10 & Appro. Ratio & 1.538 & 1.383 & 1.446 & 1.248 & 1.256 & 1.249 & 1.249 \\
\cline{2-9} & running time(s) & 2420.943 & 2170.416 & 2938.773 & 3526.752 & 3264.858 & 4033.862 & 13295.306 \\
\hline
\end{tabular}%
\label{tab-core}%
\vspace{-0.3in}
\end{table}%
\vspace{-0.05in}
\section{Future Work}
\vspace{-0.1in}
Following our work, several interesting problems deserve to be studied in the future. For example,
can the coreset construction time of Algorithm~\ref{alg-coreset} be improved, e.g., via the fast net construction method proposed by Har-Peled and Mendel~\cite{har2006fast} in doubling metrics? It is also interesting to study other problems involving outliers by using the greedy strategy.
\newpage
\bibliographystyle{abbrv}
\section*{\refname}}
\usepackage{graphicx}
\usepackage{epstopdf}
\newcommand{\caphead}[1]{{\bf #1}}
\newcommand{\figco}{}
\renewcommand{\thesection}{\Roman{section}}
\renewcommand{\thesubsection}{\Roman{section} \Alph{subsection}}
\renewcommand{\thesubsubsection}{\Roman{section} \Alph{subsection} \arabic{subsubsection}}
\makeatletter
\def\p@subsection{}
\makeatother
\makeatletter
\def\p@subsubsection{}
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\newtheorem{property}{Property}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\makeatletter
\newcommand\footnoteref[1]{\protected@xdef\@thefnmark{\ref{#1}}\@footnotemark}
\makeatother
\usepackage{soul}
\usepackage{color}
\usepackage[dvipsnames]{xcolor}
\usepackage[normalem]{ulem}
\newcommand{\delete}[1]{{\color{Purple}#1}}
\newcommand{\nicole}[1]{{\color{Green}#1}}
\newcommand{\q}[1]{{\color{Red}#1}}
\newcommand{\vec{D}}{\vec{D}}
\newcommand{{\rm KL}}{{\rm KL}}
\newcommand{\Min}{ {\rm min} }
\newcommand{\Max}{ {\rm max} }
\newcommand{\inter}{ {\rm int} }
\newcommand{ {\rm h.c.} }{ {\rm h.c.} }
\newcommand{ {\rm c.c.} }{ {\rm c.c.} }
\newcommand{ {\rm tot} }{ {\rm tot} }
\newcommand{\rms}{ {\rm rms} }
\newcommand{\Sh}{{\rm Sh}}
\newcommand{\vN}{{\rm vN}}
\def\const{ {\rm const.} }
\def\coeff{ {\rm coeff.} }
\def\dbar{{\mathchar'26\mkern-12mu d}}
\newcommand{\Tr}{{\rm Tr}}
\def\id{\mathbbm{1}}
\newcommand{\kB}{k_\mathrm{B}}
\newcommand{\fwd}{\mathrm{fwd}}
\newcommand{\rev}{\mathrm{rev}}
\newcommand{\mathrm{diss}}{\mathrm{diss}}
\newcommand{\mathrm{forfeit}}{\mathrm{forfeit}}
\newcommand{\mathrm{worst}}{\mathrm{worst}}
\newcommand{\mathrm{cost}}{\mathrm{cost}}
\DeclareMathOperator{\supp}{supp}
\newcommand{\;\mathrm{K}}{\;\mathrm{K}}
\newcommand{{\rm span}}{{\rm span}}
\newcommand{\Hil}{\mathcal{H}}
\newcommand{\Basis}{\mathcal{B}}
\newcommand{\Sys}{\mathcal{S}}
\newcommand{\Sites}{N}
\newcommand{\Dim}{d}
\newcommand{ {(0)} }{ {(0)} }
\newcommand{ {(1)} }{ {(1)} }
\newcommand{ {(2)} }{ {(2)} }
\newcommand{ {(3)} }{ {(3)} }
\newcommand{ {(j)} }{ {(j)} }
\newcommand{ \bm{(} }{ \bm{(} }
\newcommand{ \bm{)} }{ \bm{)} }
\newcommand*{\Parens}[1]{\left( #1 \right)}
\newcommand*{\Brackets}[1]{\left[ #1 \right]}
\newcommand*{\Set}[1]{\left\{ #1 \right\}}
\newcommand*{\Verts}[1]{\left\lvert #1 \right\rvert}
\let\oldth\th
\renewcommand\th{ {\rm th} }
\newcommand*{\bra}[1]{\langle #1\rvert}
\newcommand*{\ket}[1]{\lvert #1 \rangle}
\newcommand*{\braket}[2]{\langle #1 \lvert #2 \rangle}
\newcommand*{\ketbra}[2]{\lvert #1 \rangle\!\langle #2 \rvert}
\newcommand*{\expval}[1]{\left\langle #1 \right\rangle}
\begin{document}
\title{Learning about learning by many-body systems}
\author{Weishun Zhong}
\email{[email protected]. The first two coauthors contributed equally.}
\affiliation{Physics of Living Systems, Department of Physics, Massachusetts Institute of Technology, 400 Tech Square, Cambridge, MA 02139, USA}
\author{Jacob M. Gold}
\email{[email protected]}
\affiliation{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA}
\author{Sarah Marzen}
\email{[email protected]}
\affiliation{Physics of Living Systems, Department of Physics, Massachusetts Institute of Technology, 400 Tech Square, Cambridge, MA 02139, USA}
\affiliation{W. M. Keck Science Department,
Pitzer, Scripps, and Claremont McKenna Colleges,
Claremont, CA 91711, USA}
\author{Jeremy L. England}
\email{[email protected]}
\affiliation{Physics of Living Systems, Department of Physics, Massachusetts Institute of Technology, 400 Tech Square, Cambridge, MA 02139, USA}
\affiliation{GlaxoSmithKline AI/ML, 200 Cambridgepark Drive, Cambridge MA, 02140, USA}
\author{Nicole Yunger Halpern}
\email{[email protected]}
\affiliation{ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA}
\affiliation{Department of Physics, Harvard University, Cambridge, MA 02138, USA}
\affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA}
\date{\today}
\begin{abstract}
Many-body systems from soap bubbles to suspensions to polymers learn the drives that push them far from equilibrium. This learning has been detected with thermodynamic properties, such as work absorption and strain. We progress beyond these macroscopic properties that were first defined for equilibrium contexts: We quantify statistical mechanical learning with representation learning, a machine-learning model in which information squeezes through a bottleneck. We identify a structural parallel between representation learning and far-from-equilibrium statistical mechanics. Applying this parallel, we measure four facets of many-body systems' learning: classification ability, memory capacity, discrimination ability, and novelty detection. Numerical simulations of a classical spin glass illustrate our technique. This toolkit exposes self-organization that eludes detection by thermodynamic measures. Our toolkit more reliably and more precisely detects and quantifies learning by matter.
\end{abstract}
{\let\newpage\relax\maketitle}
Many-body systems can learn and remember patterns of drives
that propel them far from equilibrium.
Such behaviors have been predicted and observed in many settings,
from charge-density waves~\cite{Coppersmith_97_Self,Povinelli_99_Noise}
to non-Brownian suspensions~\cite{Keim_11_Generic,Keim_13_Multiple,Paulsen_14_Multiple},
polymer networks~\cite{Majumdar_18_Mechanical},
soap-bubble rafts~\cite{Mukherji_19_Strength},
and macromolecules~\cite{Zhong_17_Associative}.
Such learning holds promise for engineering materials
capable of memory and computation.
This potential for applications, together with experimental accessibility and ubiquity,
has earned these classical nonequilibrium many-body systems much attention recently~\cite{Keim_19_Memory}.
We present a machine-learning toolkit for
measuring the learning of drive patterns by many-body systems.
Our toolkit detects and quantifies many-body learning
more thoroughly and precisely than thermodynamic tools used to date.
A classical, randomly interacting spin glass
exemplifies learning driven matter.
Consider sequentially applying fields from a set $\{ \vec{A}, \vec{B}, \vec{C} \}$,
which we call a \emph{drive}.
The spins flip, absorbing work.
In a certain parameter regime, the power absorbed shrinks adaptively:
The spins migrate toward a corner of configuration space
where their configuration approximately withstands the drive's insults.
Consider then imposing fields absent from the original drive.
Subsequent spin flips absorb more work
than if the field belonged to $\{ \vec{A}, \vec{B}, \vec{C} \}$.
A simple, low-dimensional property of the material---absorbed power---distinguishes
drive inputs that fit a pattern from drive inputs that do not.
This property reflects a structural change in the spin glass's configuration.
The change is long-lived and not easily erased by a new drive.
For these reasons, we say that the material has learned the drive.
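As a toy illustration of this bookkeeping (our own minimal stand-in, not the model of Ref.~\cite{Gold_19_Self}: the couplings, fields, and greedy single-spin-flip relaxation rule below are all illustrative assumptions, and adaptive work reduction is not guaranteed at this tiny scale), one can drive a random spin system with a repeating field sequence and record the energy dissipated after each field switch:

```python
import random

def energy(s, J, h):
    """E = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    n = len(s)
    pair = -sum(J[i][j] * s[i] * s[j]
                for i in range(n) for j in range(i + 1, n))
    return pair - sum(h[i] * s[i] for i in range(n))

def relax(s, J, h):
    """Greedy single-spin flips until no flip lowers the energy."""
    n = len(s)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # energy change from flipping spin i (J symmetric, J_ii = 0)
            dE = 2 * s[i] * (sum(J[i][j] * s[j]
                                 for j in range(n) if j != i) + h[i])
            if dE < 0:
                s[i] = -s[i]
                improved = True
    return s

random.seed(0)
n = 12
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.gauss(0, 1) / n ** 0.5
fields = {t: [random.choice((-1.0, 1.0)) for _ in range(n)] for t in "ABC"}
s = [random.choice((-1, 1)) for _ in range(n)]
dissipated = []
for t in "ABCABCABC":               # the drive: repeat fields A, B, C
    h = fields[t]
    e0 = energy(s, J, h)
    s = relax(s, J, h)              # spins flip in the new field
    dissipated.append(e0 - energy(s, J, h))
```

Each entry of `dissipated` is nonnegative, since greedy relaxation only lowers the energy; whether the later entries shrink (the adaptive signature discussed above) depends on the parameter regime.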
Many-body learning has been quantified with
properties commonplace in thermodynamics.
Examples include power, as explained above,
and strain in polymers that learn stress amplitudes.
Such thermodynamic diagnoses have provided insights
but suffer from two shortcomings.
First, the thermodynamic properties vary from system to system.
For example, work absorption characterizes the spin glass's learning;
strain characterizes non-Brownian suspensions'.
A more general approach would facilitate comparisons and standardize analyses.
Second, thermodynamic properties were defined for macroscopic equilibrium states.
Such properties do not necessarily describe
far-from-equilibrium systems' learning optimally.
Separately from many-body systems' learning,
machine learning has flourished over the past decade~\cite{Nielsen_15_Neural,Goodfellow_16_Deep}.
Machine learning has enhanced our understanding
of how natural and artificial systems learn.
We apply machine learning to measure learning by many-body systems.
We use the machine-learning model known as \emph{representation learning}~\cite{Bengio_12_Representation}
[Fig.~\ref{fig_VAE_SM_Parallel}(a)].
A representation-learning neural network receives a high-dimensional variable $X$,
such as a sentence missing a word, e.g.,
``The \underline{\hspace{.5cm}} is shining.''
The neural network compresses relevant information
into a low-dimensional \emph{latent variable} $Z$,
e.g., word types and relationships.
The neural network decompresses $Z$ into a prediction $\hat{Y}$
of a high-dimensional variable $Y$.
$Y$ can be the word missing from the sentence;
$\hat{Y}$ can be ``sun.''
The size of the bottleneck $Z$ controls a tradeoff
between the memory consumed and the prediction's accuracy.
We call the neural networks that perform representation learning
\emph{bottleneck neural networks}.
\begin{figure}[hbt]
\centering
\includegraphics[width=.2\textwidth, clip=true]{VAE_SM_Parallel.pdf}
\caption{\caphead{Parallel between two structures:}
(a) Structure of a bottleneck neural network, which performs representation learning.
(b) Structure of a far-from-equilibrium-statistical-mechanics problem.}
\label{fig_VAE_SM_Parallel}
\end{figure}
Representation learning, we argue, shares its structure with
problems in which a strong drive forces a many-body system
[Fig.~\ref{fig_VAE_SM_Parallel}(b)].
The system's microstate, like $X$, occupies a high-dimensional space.
A macrostate synopsizes the microstate in a few numbers,
such as particle number and magnetization.
This synopsis parallels $Z$.
If the system has learned the drive, the macrostate encodes the drive.
One may reconstruct the drive from the macrostate,
as a bottleneck neural network reconstructs $Y$ from $Z$.\footnote{
See~\cite{Alemi_18_TherML} for a formal parallel
between representation learning and equilibrium thermodynamics.}
Applying this analogy, we use representation learning
to measure how effectively a far-from-equilibrium many-body system
learns a drive.
We illustrate with numerical simulations of the spin glass,
whose learning has been detected with work absorption~\cite{Gold_19_Self}.
However, our methods generalize to other platforms.
Our measurement scheme offers three advantages:
\begin{enumerate}
%
\item
Bottleneck neural networks register learning behaviors
more thoroughly and precisely
than work absorption.
\item
Our framework applies to a wide class of
strongly driven many-body systems.
The framework does not rely on any particular thermodynamic property
tailored to, e.g., spins.
%
\item
Our approach unites a machine-learning sense of learning
with the statistical mechanical sense.
This union is conceptually satisfying.
\end{enumerate}
We apply representation learning to measure
classification, memory capacity, discrimination, and novelty detection.
Our techniques can be extended to other facets of learning.
Our measurement protocols share the following structure:
The many-body system is trained with
a drive (e.g., fields $\vec{A}$, $\vec{B}$, and $\vec{C}$).
Then, the system is tested (e.g., with a field $\vec{D}$).
Training and testing are repeated in many trials.
Configurations realized by the many-body system
are used to train a bottleneck neural network via unsupervised learning.
The neural network may then receive configurations from
the testing of the many-body system.
Finally, we analyze the neural network's bottleneck.
The rest of this paper is organized as follows:
We introduce our bottleneck neural network,
then the spin glass with which we illustrate.
We then prescribe how to quantify, using representation learning,
the learning of a drive by a many-body system.
Finally, we
detail opportunities engendered by this study.
The feasibility of applying our toolkit is supported in~\cite[Sec.~III B]{Zhong_20_Learning}.
\emph{Bottleneck neural network:}
The introduction identified a parallel between
thermodynamic problems and bottleneck neural networks
(Fig.~\ref{fig_VAE_SM_Parallel}).
In the thermodynamic problem, $Y \neq X$ represents the drive.
We could design a bottleneck neural network
that predicts drives from configurations $X$.
But the neural network would undergo supervised learning,
by today's standards.
Supervised learning gives the neural network tuples
(configuration of the many-body system, label of drive that generated the configuration).
The drive labels are not directly available to the many-body system.
The neural network's predictions would not necessarily reflect
only learning by the many-body system.
Hence we design a bottleneck neural network that performs unsupervised learning,
receiving only configurations.
This neural network is a \emph{variational autoencoder}~\cite{Kingma_13_Auto,JR_14_Stochastic,Doersch_16_Tutorial},
a generative model:
It receives samples $x$ from a distribution over
the possible values of $X$,
learns about the distribution, and generates new samples.
The neural network approximates the distribution
via Bayesian variational inference~\cite[App.~A]{Zhong_20_Learning}.
Network parameters are optimized during training via backpropagation.
Our variational autoencoder has five fully connected hidden layers,
with neuron numbers 200-200-(number of $Z$ neurons)-200-200.
We usually restrict the latent variable $Z$ to 2-4 neurons.
This choice facilitates the visualization of the latent space
and suffices to quantify our spin glass's learning.
Growing the number of degrees of freedom,
and the number of drives, may require more dimensions.
But our study suggests that the number of dimensions needed
is much smaller than the system size.
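As an illustration, the layer structure just described can be sketched as a plain feedforward pass. The sizes (a 256-spin input, a two-neuron bottleneck $Z$) follow the text; the tanh activations, the random untrained weights, and the omission of the variational (stochastic-latent) machinery and of training are simplifying assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """One fully connected layer with random (untrained) weights."""
    return rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out)

N_SPINS, N_LATENT = 256, 2            # 256-spin input, 2-neuron bottleneck Z
sizes = [N_SPINS, 200, 200, N_LATENT, 200, 200, N_SPINS]
layers = [dense(m, n) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Compress a configuration through the bottleneck and decompress it."""
    h, z = x, None
    for i, (W, b) in enumerate(layers):
        h = np.tanh(h @ W + b)
        if sizes[i + 1] == N_LATENT:  # the bottleneck layer: record Z
            z = h
    return z, h

config = rng.choice([-1.0, 1.0], N_SPINS)   # a random spin configuration
z, reconstruction = forward(config)
print(z.shape, reconstruction.shape)        # (2,) (256,)
```

The bottleneck forces all information about the 256-spin configuration through two real numbers, which is what makes the latent space a candidate low-dimensional synopsis.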
Figure~\ref{fig_Latent_Space} depicts the latent space $Z$.
Each neuron corresponds to one axis
and represents a continuous-valued real number.
The neural network maps each inputted configuration
to one latent-space dot.
Close-together dots correspond to
configurations produced by the same field,
if the spin glass and neural network learn well.
We illustrate this clustering in Fig.~\ref{fig_Latent_Space}
by coloring each dot according to the field that produced it.
\begin{figure}[hbt]
\centering
\includegraphics[width=.35\textwidth, clip=true]{Latent_Space}
\caption{\caphead{Visualization of latent space, $Z$:}
$Z$ consists of neurons $Z_1$ and $Z_2$.
A variational autoencoder formed $Z$
while training on configurations assumed by a 256-spin glass
during repeated exposure to three fields, $A$, $B$, and $C$.
The neural network mapped each configuration to a dot in latent space.
We color each dot in accordance with
the field that produced the configuration.
Same-color dots cluster together:
The neural network identified
which configurations resulted from the same field.}
\label{fig_Latent_Space}
\end{figure}
\emph{Spin glass:}
A spin glass exemplifies the statistical mechanical learner~\cite{Gold_19_Self}.
Simulations are of $\Sites = 256$ classical spins.
The $j^\th$ spin occupies one of two possible states:
$s_j = \pm 1$.
The spins couple together and experience an external magnetic field:
Spin $j$ evolves under a Hamiltonian
\begin{align}
\label{eq_Hamiltonian_j}
H_j(t)
= \sum_{k \neq j} J_{jk} s_j s_k
+ A_j(t) s_j ,
\end{align}
and the spin glass evolves under
$H(t) = \frac{1}{2} \sum_{j = 1}^\Sites H_j(t)$,
at time $t$.
We call the first term in Eq.~\eqref{eq_Hamiltonian_j} the \emph{interaction energy}
and the second term the \emph{field energy}.
The couplings $J_{j k} = J_{kj}$ are defined in terms of
an Erd\H{o}s--R\'enyi random network:
Spins $j$ and $k$ have some probability $p$
of interacting, for all $j$ and $k \neq j$.
Each spin couples to eight other spins, on average.
The nonzero couplings $J_{j k}$ are selected according to
a normal distribution of standard deviation 1.
$A_j(t)$ denotes the magnitude and sign
of the external field experienced by spin $j$ at time $t$.
The field always points along the same direction, the $z$-axis,
so we omit the arrow from $\vec{A}_j(t)$.
We will simplify the notation for the field from $\{ A_j(t) \}_j$ to $A$
(or $B$, etc.).
Each $A_j$ is selected according to
a normal distribution of standard deviation 3.
The field changes every 100 seconds.
To train the spin glass, we construct a drive
by forming a set $\{A, B, \ldots \}$ of random fields.
We randomly select a field from the set, then apply the field for 100 s.
This selection-and-application process is performed 300 times.
The spin glass exchanges heat with
a bath at a temperature $T = 1 / \beta$.
We set Boltzmann's constant to $\kB = 1$.
Energies are measured in Kelvins (K).
To flip, a spin must overcome an energy barrier of height $B$ (a fixed constant, distinct from the field $B$).
Spin $j$ tends to flip at a rate
$\omega_j
= e^{\beta [ H_j(t) - B]}
/ (1 \text{ second}) \, .$
This rate has the form of Arrhenius's law
and obeys detailed balance.
The average spin flips once per $10^7$ s.
We model the evolution with discrete 100-s time intervals,
using the Gillespie algorithm.
The spins absorb work when the field changes,
as from $\{ A_j(t) \}$ to $\{ A'_j(t) \}$.
The change in the spin glass's energy equals
the work absorbed by the spin glass:
$W := \sum_{j = 1}^\Sites
\left[ A'_j(t) - A_j(t) \right] s_j .$
Absorbed power is defined as $W / ( \text{100 s} )$.
The spin glass dissipates heat by
losing energy as spins flip.
The spin glass is initialized in a uniformly random configuration $C$.
Then, the spins relax in the absence of any field for 100,000 seconds.
The spin glass navigates to near a local energy minimum.
If a protocol is repeated in multiple trials,
all the trials begin with the same $C$.
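The setup above can be sketched end to end. This is a minimal illustration, not the authors' code: it uses 32 spins and a 10-second window (rather than 256 spins and 100 s) to keep it fast, and a step cap guards against rate blow-ups; the coupling statistics, Arrhenius flip rates, and work bookkeeping follow the formulas quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, BETA, BARRIER = 32, 3.0, 4.5      # fewer spins than the paper's 256, for speed

# Erdos-Renyi couplings: each pair interacts with probability p = 8/(N-1),
# so each spin couples to ~8 others; nonzero J_jk ~ normal(0, 1), J symmetric.
p = 8 / (N - 1)
mask = np.triu(rng.random((N, N)) < p, k=1)
J = np.where(mask, rng.normal(0.0, 1.0, (N, N)), 0.0)
J = J + J.T

def local_energy(s, A):
    """H_j = sum_{k != j} J_jk s_j s_k + A_j s_j, for every spin j at once."""
    return s * (J @ s) + A * s

def gillespie(s, A, t_max, max_steps=200_000):
    """Evolve the spins under field A for t_max seconds (Arrhenius rates)."""
    t = 0.0
    for _ in range(max_steps):        # step cap guards against rate blow-ups
        rates = np.exp(BETA * (local_energy(s, A) - BARRIER))
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_max:
            break
        j = rng.choice(N, p=rates / total)
        s[j] *= -1                    # flip the chosen spin
    return s

def absorbed_work(s, A_old, A_new):
    """W = sum_j (A'_j - A_j) s_j, the work absorbed at a field switch."""
    return float(np.sum((A_new - A_old) * s))

s = rng.choice([-1.0, 1.0], N)        # uniformly random initial configuration
A, B_field = rng.normal(0.0, 3.0, N), rng.normal(0.0, 3.0, N)
s = gillespie(s, A, t_max=10.0)       # one (shortened) field application
W = absorbed_work(s, A, B_field)      # work absorbed when the field switches
```

Repeating the `gillespie`/`absorbed_work` cycle over a drive's fields would produce the power traces discussed in the text.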
In a certain parameter regime, the spin glass learns its drive effectively,
even according to the absorbed power~\cite{Gold_19_Self}.
Consider training the spin glass on a drive $\{ A, B, C \}$.
The spin glass absorbs much work initially.
If the spin glass learns the drive, the absorbed power declines.
If a dissimilar field $D$ is then applied, the absorbed power spikes.
The spin glass learns effectively in the Goldilocks regime
$\beta = 3$ K$^{-1}$ and $B = 4.5$ K~\cite{Gold_19_Self}:
The temperature is high enough,
and the barriers are low enough,
that the spin glass can explore phase space.
But $T$ is low enough, and the barriers are high enough,
that the spin glass is not hopelessly peripatetic.
We distinguish robust learning from superficially similar behaviors
in~\cite[App.~B]{Zhong_20_Learning}.
\emph{How to detect and quantify a many-body system's learning of a drive,
using representation learning:}
Learning has many facets;
we detect and quantify four:
classification ability, memory capacity,
the discrimination of similar fields, and novelty detection.
We illustrate with classification here
and detail the rest in~\cite[Sec.~II]{Zhong_20_Learning}.
Other facets of learning may be quantified similarly.
Our representation-learning approach detects and measures learning
more reliably and precisely than absorbed power does.
The code used is accessible at~\cite{Github_repo}.
A system \emph{classifies} a drive when identifying the drive as
one of many possibilities.
A variational autoencoder, we find, reflects more of a spin glass's classification ability
than absorbed power does:
We generated random fields $A$, $B$, $C$, $D$, and $E$.
From 4 of the fields, we formed the drive $\mathcal{D}_1 := \{A, B, C, D\}$.
On the drive, we trained the spin glass in each of 1,000 trials.
In each of 1,000 other trials, we trained a fresh spin glass on
a drive $\mathcal{D}_2 := \{A, B, C, E\}$.
We repeated this process for each of the 5 possible 4-field drives.
Ninety percent of the trials were randomly selected for training our neural network.
The rest were used for testing.
Using the variational autoencoder, we measured
the spin glass's ability to classify drives:
We identified the configurations
occupied by the spin glass at a time $t$ in the training trials.
On these configurations, we trained the neural network.
The neural network populated the latent space with dots
(as in Fig.~\ref{fig_Latent_Space})
whose density formed a probability distribution.
We inputted into the neural network
a time-$t$ configuration from a test trial.
The neural network compressed the configuration into a latent-space point.
We calculated which drive most likely,
according to the probability density,
generated the latent-space point.
The calculation was maximum-likelihood estimation
(see~\cite{Bishop_06_Pattern} and~\cite[App.~C]{Zhong_20_Learning}).
We performed this testing and estimation for each trial in the test data.
The fraction of trials in which the estimation succeeded
constitutes the \emph{score}.
The score is plotted against $t$ in Fig.~\ref{fig_Classification}
(blue, upper curve).
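The estimation step can be sketched with per-drive Gaussian fits as the latent-space density model; the density model actually paired with the autoencoder may differ, and hand-placed synthetic clusters stand in here for latent-space dots.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_gaussians(z_train, labels):
    """One (mean, covariance) pair per drive, from training latent points."""
    params = {}
    for d in np.unique(labels):
        pts = z_train[labels == d]
        params[d] = (pts.mean(axis=0),
                     np.cov(pts.T) + 1e-6 * np.eye(pts.shape[1]))
    return params

def log_likelihood(z, mean, cov):
    diff = z - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

def classify(z, params):
    """Maximum-likelihood estimate of the drive behind latent point z."""
    return max(params, key=lambda d: log_likelihood(z, *params[d]))

# Hand-placed clusters stand in for latent-space dots from 5 drives.
centers = np.array([[0, 0], [6, 0], [0, 6], [6, 6], [3, 12]], dtype=float)
labels = np.repeat(np.arange(5), 200)
z_all = centers[labels] + rng.normal(0.0, 0.3, (1000, 2))

split = rng.random(1000) < 0.9                 # 90/10 train/test split
params = fit_gaussians(z_all[split], labels[split])
guesses = np.array([classify(z, params) for z in z_all[~split]])
score = float(np.mean(guesses == labels[~split]))   # fraction guessed right
```

The `score` computed this way plays the role of the blue curve in the figure: the better the latent-space clusters separate, the closer it sits to 1.00.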
\begin{figure}[hbt]
\centering
\includegraphics[width=.35\textwidth, clip=true]{Classification}
\caption{\caphead{Quantification of a many-body system's classification ability:}
A spin glass classified a drive as one of five possibilities.
We define the system's classification ability as
the score of maximum-likelihood estimation
performed with a variational autoencoder (blue, upper curve).
We compare with the score of maximum-likelihood estimation performed with
absorbed power (orange, lower curve).
The variational-autoencoder score rises to near the maximum, 1.00.
The thermodynamic score only slightly exceeds the random-guessing score, $1/5$.
The neural network detects more of the spins' classification ability.
}
\label{fig_Classification}
\end{figure}
We compare with the classification ability
attributed to the spin glass by the absorbed power:
For each drive and each time $t$,
we histogrammed the power absorbed
while that drive was applied at $t$
in a neural-network-training trial.
Then, we took a trial from the test set
and identified the power absorbed at $t$.
We inferred which drive most likely,
according to the histograms, produced that power.
The guess's score appears as the orange, lower curve
in Fig.~\ref{fig_Classification}.
A score maximizes at 1.00 if the drive is always guessed accurately.
The score is lower-bounded by the random-guessing value
$1 / (\text{number of drives}) = 1/5$.
In Fig.~\ref{fig_Classification}, each score grows
over tens of field switches.
The absorbed-power score begins at\footnote{
\label{foot_Why_Not_0.2}
The neural network's score begins a short distance from 0.20.
The distance, we surmise, comes from stochasticity of three types:
the spin glass's initial configuration, the maximum-likelihood estimation,
and stochastic gradient descent.
Stochasticity of only the first two types affects the absorbed-power score.
}
0.20 and comes to fluctuate around 0.25.
The neural network's score comes to fluctuate slightly below 1.00.
Hence the neural network detects more of the spin glass's classification ability
than the absorbed power does,
in addition to suggesting a means of quantifying the classification ability rigorously.
\emph{Discussion:}
We have detected and quantified a many-body system's learning of its drive,
using representation learning,
with greater sensitivity than absorbed power affords.
We illustrated by quantifying a many-body system's ability to classify drives,
with the score of maximum-likelihood estimates
calculated from a variational autoencoder's latent space.
Our toolkit extends to quantifying memory capacity,
discrimination, and novelty detection.
The scheme relies on a parallel that we identified
between statistical mechanical problems and neural networks.
Uniting statistical mechanical learning with machine learning,
the definition is conceptually satisfying.
The definition also has wide applicability,
not depending on whether
the system exhibits magnetization or strain or another thermodynamic response.
Furthermore, our representation-learning toolkit signals many-body learning
more sensitively than does
the seemingly best-suited thermodynamic tool.
This work engenders several opportunities.
We detail three below and four more in~\cite[Sec.~III]{Zhong_20_Learning}.
(i) \emph{Decoding latent space:}
Thermodynamicists parameterize macrostates with
volume, energy, magnetization, etc.
Thermodynamic macrostates parallel latent space
(Fig.~\ref{fig_VAE_SM_Parallel}).
Which variables parameterize the neural network's latent space?
Latent space could suggest definitions of new thermodynamic variables,
or hidden relationships amongst known thermodynamic variables.
We illustrate with part of the protocol
for quantifying classification:
Train the spin glass with a drive $\{ A, B, C \}$ in each of many trials.
On the end-of-trial configurations, train the neural network.
Figure~\ref{fig_Latent_Space_Visualize} reveals physical significances
of two latent-space directions:
The absorbed power grows along the diagonal from the bottom righthand corner
to the upper lefthand corner (Fig.~\ref{fig_Latent_Space_Dissipation}).
The magnetization grows radially (Fig.~\ref{fig_Latent_Space_Magnetization}).
The directions are nonorthogonal, suggesting
a nonlinear relationship between the thermodynamic variables.
Convention biases thermodynamicists toward measuring
volume, magnetization, heat, work, etc.
The neural network might identify new macroscopic variables
better-suited to far-from-equilibrium statistical mechanics,
or hidden nonlinear relationships amongst thermodynamic variables.
A bottleneck neural network could uncover new theoretical physics,
as discussed in, e.g.,~\cite{Carleo_19_Machine,Wu_19_Toward,Iten_20_Discovering}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=1\textwidth]{Latent_Space_Dissipation}
\caption{\caphead{Correspondence of absorbed power to a diagonal}}
\label{fig_Latent_Space_Dissipation}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=1\textwidth]{Latent_Space_Magnetization}
\caption{\caphead{Correspondence of magnetization to the radial direction}}
\label{fig_Latent_Space_Magnetization}
\end{subfigure}
\caption{\caphead{Correspondence of latent-space directions
to thermodynamic quantities:}
A variational autoencoder trained on the configurations assumed by a spin glass
during its training with fields $A$, $B$, and $C$.
We have color-coded each latent-space plot,
highlighting how a thermodynamic property changes
along some direction.
In Fig.~\ref{fig_Latent_Space_Dissipation},
the absorbed power grows
from the bottom righthand corner to the upper lefthand corner.
In Fig.~\ref{fig_Latent_Space_Magnetization},
the magnetization grows radially.}
\label{fig_Latent_Space_Visualize}
\end{figure}
\emph{(ii) Resolving open problems in statistical mechanical learning:}
Our toolkit is well-suited to answering open problems about many-body learners;
we expect to report an experimental application in a followup.
An example problem concerns the soap-bubble raft in~\cite{Mukherji_19_Strength}.
Experimentalists trained a raft of soap bubbles with
an amplitude-$\gamma_{\rm t}$ strain.
The soap bubbles' positions were tracked,
and variances in positions were calculated.
No such measures distinguished trained rafts
from untrained rafts;
only stressing the raft and reading out the strain could~\cite{Mukherji_19_Strength,Miller_19_Raft}.
Bottleneck neural networks may reveal what microscopic properties distinguish
trained from untrained rafts.
\emph{(iii) Extensions to quantum systems:}
Far-from-equilibrium many-body systems have been realized with
many quantum platforms, including ultracold atoms~\cite{Langen_15_Ultracold},
trapped ions~\cite{Friis_18_Observation,Smith_16_Many},
and nitrogen vacancy centers~\cite{Kucsko_18_Critical}.
Applications to memories have been proposed~\cite{Abanin_19_Colloquium,Turner_18_Weak}.
Yet the focus has been on quantum memories that remember
\emph{particular coherent states}.
The learning \emph{of strong drives} by quantum many-body systems
calls for exploration,
as the learning of strong drives by many-body systems
has proved productive in classical statistical mechanics.
Our framework can guide this exploration.
\emph{(iv) Learning about representation learning:}
We identified a parallel between representation learning and
statistical mechanics.
The parallel enabled us to use representation learning
to gain insight into statistical mechanics.
Recent developments in information-theoretic far-from-equilibrium statistical mechanics
(e.g.,~\cite{Still_12_Thermodynamics,Parrondo_15_Thermodynamics,Crutchfield_17_Origins,Kolchinsky_17_Dependence})
might, in turn, shed new light on representation learning.
\begin{acknowledgments}
The authors thank Alexander Alemi, Isaac Chuang, Emine Kucukbenli, Nick Litombe, Seth Lloyd, Julia Steinberg, Tailin Wu, and Susanne Yelin for useful discussions.
WZ is supported by ARO Grant W911NF-18-1-0101;
the Gordon and Betty Moore Foundation Grant, under No. GBMF4343;
and the Henry W. Kendall (1955) Fellowship Fund.
JMG is funded by the AFOSR, under Grant FA9550-17-1-0136.
SM was supported partially by the Moore Foundation,
via the Physics of Living Systems Fellowship.
This material is based upon work supported by, or in part by, the Air Force
Office of Scientific Research, under award number FA9550-19-1-0411.
JLE has been funded by the Air Force Office of Scientific Research grant FA9550-17-1-0136 and by the James S. McDonnell Foundation Scholar Grant 220020476.
NYH is grateful for an NSF grant for the Institute for Theoretical Atomic, Molecular, and Optical Physics at Harvard University and the Smithsonian Astrophysical Observatory.
NYH also thanks CQIQC at the University of Toronto, the Fields Institute, and Caltech's Institute for Quantum Information and Matter (NSF Grant PHY-1733907) for their hospitality during the development of this paper.
\end{acknowledgments}
\bibliographystyle{h-physrev}
\section{Introduction}
By the doubly-punctured plane we refer to the compact surface
with boundary (familiarly known as the ``pair of pants'')
obtained by removing, from a closed two-dimensional disc,
two disjoint open discs.
This work extends to that surface
the research
reported in \cite{torus} for the punctured torus.
Like the punctured torus, the doubly-punctured plane
has the homotopy type of a figure-eight.
Its fundamental group is free on two generators: once
these are chosen, say $a, b$,
a free homotopy class of curves on the surface can
be uniquely
represented as a reduced cyclic word in the symbols
$a, b, A, B$ (where $A$ stands for $a^{-1}$ and $B$ for $b^{-1}$).
A {\em cyclic word } $w$ is an equivalence class of words
related by a cyclic permutation of their letters; we will write
$w=\langle r_1 r_2 \dots r_n \rangle$ where the $r_i$ are
the letters of the word, and $\langle r_1 r_2 \dots r_n \rangle =
\langle r_2 \dots r_n r_1 \rangle$, etc. {\em Reduced}
means that the cyclic word contains no
juxtapositions of $a$ with $A$, or $b$ with $B$.
The {\em length} (with respect to the generating set $(a,b)$) of
a free homotopy class of curves is the number of letters occurring in
the corresponding reduced cyclic word.
This work studies the relation between length and
the {\em self-intersection
number} of a free homotopy class of curves: the
smallest number of
self-intersections among all general-position curves in the class.
(General position in this context means as usual that there are
no tangencies or multiple intersections).
The self-intersection number is a property of the free homotopy class
and hence
of the corresponding reduced cyclic word $w$;
we denote it by $\mathop\mathrm{SI} (w)$.
Note that a word and its inverse
have the same self-intersection number.
\start{theo}{upper bound even}
\begin{numlist}
\item The self-intersection
number for a reduced cyclic word of
length $L$ on the doubly-punctured plane is bounded above by
$L^2/4 + L/2 -1$.
\item If $L$ is even, this bound is sharp: for
$L \ge 4$ and even,
the cyclic words realizing the maximal self-intersection number are
(see \figc{pantsgrids}) $(aB)^{L/2}$ and $(Ab)^{L/2}$.
For $L=2$, they are $aa, AA, bb, BB, aB$ and $Ab$.
\item If $L$ is odd, the maximal self-intersection number of words
of length $L$ is
at least
$(L^2-1)/4$.
\end{numlist}
\end{theo}
\begin{figure}[htp]
\centering \includegraphics[width=3in]{even-length.eps} \includegraphics[width=3in]{odd-length.eps}
\caption{Left: curves of the form $\langle aBaBaB\rangle$
have maximum self-intersection number $L^2/4 + L/2 -1$
for their length (\theoc{upper bound even}). Right:
curves of the form $\langle aaBaBaB\rangle$
have self-intersection number $(L^2 -1)/4$. We
conjecture (\conjc{upper bound odd}) this is maximal, and
prove this conjecture in certain cases (\theoc{theo-odd}). }
\label{pantsgrids}
\end{figure}
\start{conj}{upper bound odd} The maximal self-intersection
number for a reduced cyclic word of
odd length $L = 2k+1$ on the doubly-punctured plane is
$(L^2-1)/4$; the words realizing the maximum have
one of the four forms $\langle (aB)^kB\rangle, \langle a(aB)^k\rangle,
\langle (Ab)^kb\rangle, \langle A(Ab)^k\rangle.$
\end{conj}
\start{defi}{blocks} Any reduced cyclic word is either a pure power
or may be written in the form
$\langle\alpha_1^{a_1}\beta_1^{b_1}\dots\alpha_n^{a_n}\beta_n^{b_n}\rangle$,
where
$\alpha_i \in \{a, A\}$, $\beta_i \in \{b, B\}$, all $a_i$ and $b_i$
are positive, and
$\sum_1^n(a_i + b_i) = L$, the length of the word. We will
refer to each $\alpha_i^{a_i}\beta_i^{b_i}$ as an {\em $\alpha\beta$-block},
and to $n$ as the word's {\em number of $\alpha\beta$-blocks}.
\end{defi}
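The block decomposition can be computed mechanically. A sketch, treating the cyclic word as a Python string and assuming (per the definition above) that it is not a pure power:

```python
def ab_blocks(word):
    """Decompose a reduced cyclic word over {a, A, b, B} into
    alpha-beta blocks, returned as (alpha, a_i, beta, b_i) tuples."""
    is_alpha = lambda c: c in "aA"
    n = len(word)
    # Rotate so position 0 starts an alpha-run
    # (i.e. the cyclically preceding letter is a beta-letter).
    start = next(i for i in range(n)
                 if is_alpha(word[i]) and not is_alpha(word[i - 1]))
    w = word[start:] + word[:start]
    blocks, i = [], 0
    while i < n:
        alpha, a_i = w[i], 0
        while i < n and w[i] == alpha:   # reduced: a run is a single letter's power
            i, a_i = i + 1, a_i + 1
        beta, b_i = w[i], 0
        while i < n and w[i] == beta:
            i, b_i = i + 1, b_i + 1
        blocks.append((alpha, a_i, beta, b_i))
    return blocks

print(ab_blocks("aaBaB"))   # [('a', 2, 'B', 1), ('a', 1, 'B', 1)]
```

So $\langle aaBaB \rangle$ has $n = 2$ blocks and total length $5$, matching $\sum_1^n(a_i + b_i) = L$.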
\start{theo}{theo-odd} On the doubly-punctured plane, consider
a reduced cyclic word $w$ of odd length $L $ with $n$
$\alpha\beta$-blocks. If $L>3n$, or $n$ is prime, or $n$ is a power of $2$, then
the self-intersection number of $w$
satisfies $\mathop\mathrm{SI}(w)\leq \frac{L^2 -1}{4}$. This bound is sharp.
\end{theo}
The doubly punctured plane has the property that self-intersection
numbers of words are bounded {\it below} by an increasing function of the length.
\start{theo}{lower-bound} On the doubly punctured plane, curves in
the free homotopy class represented by
a reduced cyclic word of length $L$ have at least $L/2-1$ self-intersections if $L$ is even and $(L-1)/2$ self-intersections if $L$ is odd. These bounds are achieved by $(ab)^{\frac{L}{2}}$ and $(AB)^{\frac{L}{2}}$ if $L$ is even and by the four words $a(ab)^{\frac{L-1}{2}}$, etc. when $L$ is odd.
\end{theo}
\start{cory}{finite} A curve with minimal self-intersection number $k$ has
combinatorial length at most $2k+2$. There are therefore only
finitely many free homotopy classes with
minimal self-intersection number $k$.
\end{cory}
\start{rem}{unique} A surface of negative Euler characteristic which is not
the doubly punctured plane has infinitely many homotopy classes of simple
closed curves \cite{mm}. Since the $(k+1)$st power of a simple closed curve
has self-intersection number $k$, it follows that for any $k$ there
are infinitely many distinct homotopy classes of curves with self-intersection
number $k$. (A more elaborate argument using the
mapping class group constructs, for any $k$, infinitely many
distinct {\em primitive}
classes (not a proper power of another class) with self-intersection
number $k$). So the doubly punctured plane is the unique surface of
negative Euler characteristic satisfying \coryc{finite}.
\end{rem}
The authors have benefited from discussions with Dennis Sullivan, and
are very grateful to Igor Rivin who contributed an essential
element to the
proof of \theoc{lower-bound}. Additionally, they have profited from
use of Chris Arettines' Java program, which draws minimally self-intersecting
representatives of free homotopy classes of curves in surfaces.
The program is currently available at
http://www.math.sunysb.edu/$\sim$moira/applets/chrisApplet.html
\subsection{Questions and related results}\label{rr}
The doubly punctured plane admits a
hyperbolic metric making its boundary geodesic. An elementary argument
shows that for curves on that surface, hyperbolic and combinatorial
lengths are quasi-isometric. Some of our combinatorial results can
be related in this way to statements about intersection numbers and
hyperbolic length.
A free
homotopy class of combinatorial length $L$ in a surface with boundary can be represented by $L$ chords in a fundamental polygon. Hence,
the maximal self-intersection number of a cyclic reduced word of length $L$ is bounded above by $\frac{L(L-1)}{2}$.
We prove in \cite{torus} that for the punctured
torus the maximal self-intersection number $\mathop\mathrm{SI}_{\max}(L)$ of a free homotopy
class of combinatorial length $L$ is equal to $(L^2-1)/4$ if $L$ is
even and to $(L-1)(L-3)/4$ if $L$ is odd.
This implies that the limit of $\mathop\mathrm{SI}_{\max}(L)/L^{2}$ is $\frac{1}{4}$ as $L$
approaches infinity. (Compare \cite{lalley}). The same limit holds for the doubly punctured plane (\theoc{upper bound even}).
On the other hand, according to our (limited) experiments, there are no analogous polynomials for more
general surfaces; but it seems reasonable to ask:
\start{ques}{xxx}
Consider closed curves on a surface $S$ with boundary.
Let $\mathop\mathrm{SI}_{\max}(L)$ be the maximum self-intersection number for
all curves of combinatorial
length $L$. Does $\mathop\mathrm{SI}_{\max}(L)/L^{2}$ converge? And if so, to what limit?
Does this limit approach $\frac{1}{2}$ as the genus of the surface
approaches infinity? \end{ques}
\start{ques}{xxy}
Consider closed curves on a hyperbolic surface $S$ (possibly closed).
Let $\mathop\mathrm{SI}_{\max}(\ell)$ be the maximum self-intersection number for any
curve of
{\em hyperbolic}
length at most $\ell$. Does $\mathop\mathrm{SI}_{\max}(\ell)/\ell^{2}$ converge? And if so, to what limit?
\end{ques}
Basmajian \cite{basmajian} proved for a closed, hyperbolic surface $S$
that there exists an increasing sequence $M_k$ (for $k = 1, 2, 3, ...$)
going to
infinity so that if $w$ is a closed geodesic with self-intersection number
$k$, then its geometric length is larger than $M_k$ . Thus the length of a closed geodesic gets arbitrarily large
as its self-intersection gets large. For the
doubly punctured plane, in terms of the combinatorial length, we calculate $M_{k}=\sqrt{5+4k}-1$.
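The value of $M_{k}$ can be read off from \theoc{upper bound even}: since every class of combinatorial length $L$ has at most $L^{2}/4 + L/2 - 1$ self-intersections, a class with self-intersection number $k$ must satisfy
\[
k \;\le\; \frac{L^{2}}{4}+\frac{L}{2}-1,
\qquad\text{i.e.}\qquad
L^{2}+2L-(4k+4)\;\ge\;0,
\]
and the positive root of the quadratic gives $L \ge \sqrt{5+4k}-1 = M_{k}$.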
\section{A linear model}\label{linear}
In this section we will need to distinguish between
a cyclically reduced linear word $\mathsf w$ in the generators and their inverses,
and the associated reduced cyclic word $w$.
We
introduce an algorithm for constructing
from $\mathsf w$
a representative curve for $w$.
An upper bound for the
self-intersection numbers of these representatives may be
easily estimated; taking the minimum of this bound
over cyclic permutations of $\alpha\beta$-blocks
will yield
a useful upper bound for $\mathop\mathrm{SI}(w)$.
\subsection{Skeleton words}\label{skel}
Given a
cyclically reduced word
$w = \langle\alpha_1^{a_1}\beta_1^{b_1}\dots\alpha_n^{a_n}\beta_n^{b_n}\rangle$,
where $\alpha_i = a \mbox{~or~} A$, $\beta_i = b \mbox{~or~} B$
and all $a_i, b_i > 0$,
the corresponding {\em skeleton word} is
$w_S = \langle\alpha_1\beta_1\dots\alpha_n\beta_n \rangle$, a word of length $2n$.
We now describe a systematic way for drawing a representative curve
for $w_S$ starting from one of its linear forms $\mathsf w_S$,
and for {\em thickening} this curve to a representative for $w$.
\begin{proclama not emphasized}{The
skeleton-construction algorithm:} (See Figures \ref{skeleton1} and \ref{skeleton-ababab}) Start by marking off $n$ points along each of the edges of the fundamental
domain; corresponding points on the $a,A$ sides are numbered
$1, 3, 5, \dots, 2n-1$ starting from their common corner; and
similarly corresponding points on the $b,B$ sides are
numbered $2n, \dots, 6, 4, 2$, the numbers decreasing away from
the common corner.
If the first letter in
$\mathsf w_S$ is $a$, draw a curve segment entering
the $a$-side at 1, and one exiting the $A$-side at 1 (vice-versa
if the first letter is $A$). That segment is
then extended to enter the $b$-side at 2 and exit the $B$-side at 2
if the next letter in $\mathsf w_S$ is $b$; vice-versa if it is $B$. And so
forth until the curve segment exiting the $b$ (or $B$)-side at $2n$
joins up with the initial curve segment drawn.
\end{proclama not emphasized}
We will refer to a segment of type $ab, ba, AB, BA$ as a {\em corner
segment}, and one of type $aB, Ab, bA, Ba$ as a {\em transversal}.
Note that (as above) a skeleton word has even length $2n$ and
therefore has $2n$ segments (counting the {\em bridging segment}
made up of the last letter and the first).
The number of transversals must also be even, since if they are
counted consecutively they go from lower-case to upper-case or
vice-versa, and the sequence (upper, lower, ... ) must end up
where it starts. It follows that the number of corners is also
even.
\begin{figure}[htb]
\centering \includegraphics[width =3in]{AbabAb-2.eps}
\caption{The skeleton curve $AbabAb$. }
\label{skeleton1}
\end{figure}
\start{prop}{Abn} The self-intersection number of the representative of
$(Ab)^{n}$ or $(aB)^n$ given by the curve-construction algorithm
equals $n^2 + n - 1$.
\end{prop}
\begin{proof} Consider $(Ab)^{n}$; see \figc{skeleton-ababab}, left.
This curve has only transversals.
There are $n$ parallel segments of type $Ab$;
they join $1, 3, \dots, (2n-1)$ on the $a$-side
to $2, 4, \dots, 2n$ on the $b$-side. There are
$n-1$ parallel segments of type $bA$, which
join $2, 4, \dots, 2n-2$ on the $B$-side to
$3, 5, \dots, 2n-1$ on the $A$-side. Each of these
intersects all $n$ of the $Ab$ segments. Finally
the bridging $bA$ segment joins
$2n$ on the $B$-side to $1$ on the $A$-side.
This segment begins to the left of all the
other segments and ends up on their right:
it intersects all $2n-1$ of them. The total
number of intersections is $n(n-1) + 2n -1 =
n^2 + n - 1$. A symmetrical argument handles $(aB)^n$.
\end{proof}
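For example, when $n = 2$ the representative of $(Ab)^2$ has one interior
$bA$ segment crossing both $Ab$ segments, and a bridging segment crossing
all three other segments, so the count in the proof reads
$$n(n-1) + (2n-1) = 2 + 3 = 5 = n^2 + n - 1.$$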
\start{prop}{abn} The self-intersection number of the representative of $(ab)^{n}$ given by the curve-construction algorithm equals $(n-1)^{2}$.\end{prop}
\begin{proof} (See \figc{skeleton-ababab}, right)
This curve has only corners. There are
$n$ segments of type $ab$, joining $1, 3, \dots, 2n-1$
on the $A$-side to $2, 4, \dots, 2n$ on the $b$-side.
Since their endpoints interleave, each of these curves
intersects all the others. There are $n-1$ segments of
type $ba$, joining $2, 4, \dots, 2n-2$ on the $B$-side
to $3, 5, \dots, 2n-1$ on the $a$-side. Again, each of
these curves intersects all the others. Finally the
bridging $ba$ segment joining $2n$ to $1$ spans both endpoints
of all the others and so intersects none of them.
The total number of intersections is $\frac{1}{2}n(n-1)
+ \frac{1}{2}(n-1)(n-2) = (n-1)^2$.
\end{proof}
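For example, when $n = 3$ (the curve $ababab$ of \figc{skeleton-ababab},
right) the count in the proof reads
$$\frac{1}{2}n(n-1) + \frac{1}{2}(n-1)(n-2) = 3 + 1 = 4 = (n-1)^2.$$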
\begin{figure}[htp]
\centering
\includegraphics[width=3in]{AbAbAb.eps}
\includegraphics[width=3in]{ababab-3.eps}
\caption{The skeleton curves $AbAbAb$ and $ababab$. }
\label{skeleton-ababab}
\end{figure}
\start{prop}{prop-2c} Let
$w$
be a skeleton word of length $2n$.
The number of corner segments
in $w$ is even, as remarked above;
write it as $2c$.
Then the self-intersection
number of
$w$
is bounded above by $n^2 + n - 1 - 2c$.
\end{prop}
\begin{proof}
Using Propositions \ref{Abn} and \refc{abn} we can assume that
$w$ has both corner-segments and transversals.
We may then choose a linear representative $\mathsf w$
with the property
that the
bridging segment between the end of the word and the
beginning is a transversal.
Of the $2c$ corners, $c$ will be on top (those of type $AB$ or $ba$)
and $c$ on the bottom (types $ab$ and $BA$). An $ab$ or $AB$ corner segment
joins a point numbered $2j-1$ to a point numbered $2j$ on the same
side, top or bottom, as $2j-1$.
It encloses segment endpoints $2j+1, 2j+3, \dots, 2n-1, 2, 4,
\dots, 2j-2$, a total of $n-1$ endpoints;
similarly, a $ba$ or $BA$ segment
encloses $n-2$ endpoints. So there are at most $2c(n-1) -c(c-1)$ intersections
involving corners, correcting for same-side corners
having been counted twice. The $2n-2c$ transversals intersect each
other just as in the pure-transversal case, producing $(n-c)^2 + (n-c) -1$
intersections. The total number of intersections is therefore
bounded by $n^2 + n -1 -2c$. \figc{skeleton1} shows
the curve $AbabAb$ (here $n=3, c=1$) with 8 self-intersections.
\end{proof}
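Explicitly, the two counts in the proof combine as
$$\bigl[2c(n-1) - c(c-1)\bigr] + \bigl[(n-c)^2 + (n-c) - 1\bigr]
= n^2 + n - 1 - 2c,$$
since the terms $2cn$ and $c^2$ cancel.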
\subsection{Thickening a skeleton; proof of \theoc{upper bound even} (1), (2)}
Once the skeleton curve corresponding to $\mathsf w_S$ is constructed,
it may be {\em thickened} to
produce a representative curve for $w$. The algorithm
runs as follows.
\begin{proclama not emphasized}{The skeleton-thickening algorithm.} (See \figc{thickening}) Suppose for explicitness that $w$ starts with $A^{a_1}$. The extra $a_1-1$
copies of $A$, inserted after the first one, correspond to segments
entering the $a$-side (the first one at 1) and exiting the $A$-side
(the last one at a point opposite the displaced entrance point
of the first skeleton segment); the new segments are parallel.
Similarly the extra $b_1-1$ segments appear as parallel segments
originating and ending near the 2 marks on the $b$ and $B$-sides;
so there are no intersections between these segments and those in
the first band. Proceeding in this manner we introduce $n$
non-intersecting bands
of $a_1-1, b_1-1, a_2-1, ..., b_n-1$ parallel segments. New intersections
occur between these bands and segments of the skeleton curve. The
two outmost bands (corresponding to $a_1$ and $b_n$) are each
intersected by one of the skeleton segments; the next inner bands
($a_2$ and $b_{n-1}$) each intersect three of the skeleton
segments; \dots; the two innermost bands ($a_n$ and $b_1$)
each intersect $(2n-1)$
of the skeleton segments.
\end{proclama not emphasized}
\begin{figure}[htp]
\centering
\includegraphics[width=5.5in]{thickening.eps}
\caption{The skeleton curve $AbabAb$ thickened to
represent the linear word $A^{a_1}b^{b_1}a^{a_2}b^{b_2}A^{a_3}b^{b_3}$.
The grey bands represent the curve segments corresponding to the
extra letters: $a_1-1$ copies of $A$, etc. Notice that the segments
from the skeleton curve intersect the $a_1$ and $b_3$ bands once,
the $a_2$ and $b_2$ bands three times, and the $a_3$ and $b_1$
bands five times.}\label{thickening}
\end{figure}
Adding these intersections to the
bound on the self-intersections of the skeleton curve itself yields
$$\mathop\mathrm{SI}(w) \leq (a_1 + b_n - 2) + 3(a_2 + b_{n-1} - 2) + \cdots +
(2n-1)(a_n + b_1 - 2) + n^2 + n - 1.$$
Since $1 + 3 + \cdots + (2n-1) = n^2$ we may repackage this
expression as
$$\mathop\mathrm{SI}(w) \leq f(a_1, \dots, a_n, b_1, \dots, b_n) - n^2 + n - 1,$$
where we define $f$ by
$$f(a_1, \dots, a_n, b_1, \dots, b_n) = (a_1 + b_n) + 3(a_2 + b_{n-1})
+ \cdots + (2n-1)(a_n + b_1).$$
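Indeed, since each of the $n$ coefficients $1, 3, \dots, 2n-1$ multiplies
a term diminished by $2$, the sum
$(a_1 + b_n - 2) + 3(a_2 + b_{n-1} - 2) + \cdots + (2n-1)(a_n + b_1 - 2)$
equals
$$f(a_1, \dots, a_n, b_1, \dots, b_n) - 2(1 + 3 + \cdots + (2n-1))
= f(a_1, \dots, a_n, b_1, \dots, b_n) - 2n^2,$$
and adding $n^2 + n - 1$ gives the stated bound.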
\vspace{.1in}
Applying the skeleton-thickening algorithm to
the cyclic permutation $\alpha_1^{a_1}\beta_1^{b_1}\dots\alpha_n^{a_n}\beta_n^{b_n} \rightarrow
\alpha_2^{a_2}\beta_2^{b_2}
\dots\alpha_n^{a_n}\beta_n^{b_n}\alpha_1^{a_1}\beta_1^{b_1}$ yields another
curve representing the same word. There are $n$ such permutations, leading to
\begin{equation}\label{equation(*)}
\mathop\mathrm{SI}(w) \leq [\min_{i=0,\dots,n-1}
f\circ r^i(a_1, \dots, a_n, b_1, \dots, b_n)] - n^2 + n - 1,
\end{equation}
where $r$ is the coordinate permutation
$(a_1, \dots, a_n, b_1, \dots, b_n) \rightarrow
(a_2, \dots, a_n, a_1, b_2, \dots, b_n, b_1).$
\start{prop}{nL} Set $L = a_1 + \cdots + a_n + b_1 + \cdots + b_n$.
Then ${\displaystyle \min_{i=0,\dots,n-1}
f\circ r^i(a_1, \dots, a_n, b_1, \dots, b_n) \leq nL}.$
\end{prop}
\begin{proof}
We write
$$f(a_1, \dots, b_n) = (a_1 + b_n) + 3(a_2 + b_{n-1})+ \cdots + (2n-1)(a_n + b_1)$$
$$f\circ r(a_1, \dots, b_n) = (a_2 + b_1) + 3(a_3 + b_n)+ \cdots + (2n-1)(a_1 + b_2)$$
$$\vdots$$
$$f\circ r^{n-1}(a_1, \dots, b_n) = (a_n + b_{n-1}) + 3(a_1 + b_{n-2})+ \cdots + (2n-1)(a_{n-1} + b_n).$$
The average of these $n$ functions is
$\frac{1}{n}(L + 3L + \cdots + (2n-1)L) = nL.$
Since the minimum of these $n$ values is at most their average,
the proposition follows.
\end{proof}
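As an illustration, take $n = 2$ and $(a_1, a_2, b_1, b_2) = (2, 1, 1, 1)$,
so $L = 5$. Then
$$f = (a_1 + b_2) + 3(a_2 + b_1) = 3 + 6 = 9, \qquad
f\circ r = (a_2 + b_1) + 3(a_1 + b_2) = 2 + 9 = 11;$$
their average is $10 = nL$, and their minimum is $9 \leq nL$.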
\begin{prooftext}{Proof of \theoc{upper bound even}, (1) and (2)}
We work with
$w =
\langle\alpha_1^{a_1}\beta_1^{b_1}\dots\alpha_n^{a_n}\beta_n^{b_n}\rangle$.
We have established that
$$\mathop\mathrm{SI}(w) \leq \min_{i=0,\dots,n-1}
f\circ r^i(a_1, \dots, a_n, b_1, \dots, b_n) - n^2 + n - 1.$$
Using Proposition \ref{nL},
$$\mathop\mathrm{SI}(w) \leq nL - n^2 + n - 1 = -n^2 + n(L+1) -1.$$
For a given $L$, this function has its real maximum at $n = (L+1)/2$.
Since each
$\alpha\beta$-block contains at least 2 letters, $n$ must be less than or
equal to $L/2$. So a
bound on $\mathop\mathrm{SI}(w)$ is the value at $n = L/2$ ($L$ even) or
$n = (L - 1)/2$ ($L$ odd):
$$\mathop\mathrm{SI}(w) \leq \left \{
\begin{array}{ll}
L^2/4 + L/2 -1 & (L \mbox{ even})\\
L^2/4 + L/2 -7/4 & (L \mbox{ odd}).
\end{array} \right . $$
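These two values are obtained by substituting $n = L/2$ and $n = (L-1)/2$
into $-n^2 + n(L+1) - 1$:
$$-\frac{L^2}{4} + \frac{L}{2}(L+1) - 1 = \frac{L^2}{4} + \frac{L}{2} - 1,
\qquad
-\frac{(L-1)^2}{4} + \frac{L-1}{2}(L+1) - 1
= \frac{L^2}{4} + \frac{L}{2} - \frac{7}{4}.$$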
For $L$ even, note (\propc{Abn})
that the skeleton words $w=(aB)^n$ and $w=(Ab)^n$
satisfy $\mathop\mathrm{SI}(w) = n^2 + n - 1 = L^2/4 + L/2 -1$; so the
bound for this
case is sharp; furthermore since words with $n=L/2$ must be
skeleton words, it follows from \propc{prop-2c} that
these are the only words attaining the bound.
\end{prooftext}
\start{rem}{odd}
For $L$ odd, our numerical experiments (which go up to $L=20$) and the special cases we
prove below have $\mathop\mathrm{SI}(w) \leq (L^2-1)/4 $, so the function
constructed here does not give a sharp bound.
\end{rem}
\section{Odd length words}\label{odd-length}
\subsection{A lower bound for the maximal self-intersection number;
proof of \theoc{upper bound even} (3)}
\begin{prooftext}{Proof of \theoc{upper bound even}, (3)
(The maximum self-intersection number
for words of odd length $L$ is at least $(L^2-1)/4$).}
We will show that the words of the form $a(aB)^{\frac{L-1}{2}}$ have self-intersection equal to $(L^{2}-1)/4$. Consider a
representative of $w$ as in \figc{pantsgrid2},
where $n = \frac{L-1}{2}$. There is
an $n \times n$ grid of intersection points in the center,
plus the $n$ additional intersections $p_2, p_4, \dots, p_{2n}$, a total
of $n^2 + n = (L^{2}-1)/4$. We need to check that none of these
intersections spans a bigon (this is the only way
\cite{hs} that an intersection can be deformed away).
\begin{figure}[htp]
\centering
\includegraphics[width=2.5in]{aaBaBaB.eps}
\caption{The curve $a(aB)^n$
represented in the fundamental domain for the doubly punctured
disc.}\label{pantsgrid2}
\end{figure}
With notation from \figc{pantsgrid2}, the only vertices that could be
part of a bigon are those from which two segments exit along the same
edge, i.e. $p_2, p_4, \dots, p_{2n}$. If we follow the segments from
$p_2$ through edge $A$ they lead to 1 on edge $A$ and $2n+1$ on edge
$b$, so no bigon there; the segments from $p_4$ through edge $A$
lead to $3, 2n+1$ on edge $b$, to $2, 2n$ on edge $A$ and then to
1 on edge $A$ and $2n-1$ on edge $b$, so no bigon; etc. Finally
the segments from $p_{2n}$ through edge $A$ lead to $2n-1, 2n+1$
on edge $b$ and eventually to 1 on edge $A$ and 3 on edge $b$:
no bigon.
\end{prooftext}
\subsection{Preliminaries for upper-bound calculation}\label{odd-prelim}
In the analysis of self-intersections of odd length
words the exact relation between $L$ (the length
of a word) and $n$ (its number of $\alpha\beta$-blocks)
becomes more important.
\start{prop}{3n bound} If a word $w$ has length $L$ and
$n$ $\alpha\beta$-blocks, with $L \geq 3n$, then
$\mathop\mathrm{SI}(w) \leq \frac{1}{4}(L^2 -1).$
Note that by \theoc{upper bound even} (3),
this estimate is sharp.
\end{prop}
\begin{proof}
As established in the previous section (equation \ref{equation(*)})
$\mathop\mathrm{SI}(w) \leq nL - n^2 + n -1.$
The inequality $nL - n^2 + n -1 \leq \frac{1}{4}(L^2 -1)$ is
equivalent to $L^2 -4nL + 4n^2 - 4n +3 \geq 0$. As a function
of $L$ this expression has two roots: $ 2n \pm \sqrt{4n-3}$;
as soon as $L$ is past the positive root, the inequality is
satisfied.
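In more detail, the quadratic formula applied to
$L^2 - 4nL + 4n^2 - 4n + 3 = 0$ gives
$$L = \frac{4n \pm \sqrt{16n^2 - 4(4n^2 - 4n + 3)}}{2}
= 2n \pm \sqrt{4n-3}.$$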
If $n \geq 3$, then $ L\geq 3n$
implies $L \geq 2n + \sqrt{4n-3}$.
If $n = 2$ our inequality
$\mathop\mathrm{SI}(w) \leq nL-n^2 + n -1$ translates to $\mathop\mathrm{SI}(w) \leq 2L-3$,
which is at most $\frac{1}{4}(L^2 -1)$ whenever $L \geq 7$; and for odd $L$
the hypothesis $L \geq 3n = 6$ forces $L \geq 7$.
If $n=1$ our inequality becomes
$\mathop\mathrm{SI}(w) \leq L-1$, which is less than $\frac{1}{4}(L^2 -1)$
as soon as $L \geq 3$. The only other possibility is $L=2$, an
even length.
\end{proof}
\subsection{The cases: $n$ prime or $n$ a power of 2; proof of
\theoc{theo-odd} }
Other results for odd-length words require a more detailed
analysis of the functions $f\circ r^i(a_1, \dots, a_n, b_1, \dots, b_n)$,
keeping the notation of the previous section.
The proof of the following results is straightforward.
\start{lem}{t_i}
For a fixed $(a_1, \dots, a_n, b_1, \dots, b_n)$, set
$$s_a = a_1 + \cdots + a_n,$$
$$ s_b = b_1 + \cdots + b_n,$$
$$t_i = f\circ r^i(a_1, \dots, a_n, b_1, \dots, b_n).$$
Then
\begin{romlist}
\item $t_{i+1}-t_i = 2n(a_i-b_i) - 2(s_a-s_b)$
\item $t_0 - t_{n-1} = 2n(a_n-b_n) - 2(s_a-s_b)$.
\item $t_{i+j} - t_i = 2n(a_i+\cdots +a_{i+j-1} -
b_i-\cdots -b_{i+j-1})-
2j(s_a-s_b).$
\end{romlist}
In particular, if $t_i = t_{i+r}$, for some $r>0$, then
$$n(a_i-b_i+a_{i+1}-b_{i+1}+\cdots +a_{i+r-1}-b_{i+r-1}) = r (s_a - s_b).$$
\end{lem}
\start{lem}{diff ti} If $n$ is prime and $L < 3n$, then all
the numbers $t_0, \dots, t_{n-1}$ are different.
\end{lem}
\begin{proof} By \lemc{t_i}, if $t_i = t_{i+r}$, for some $r>0$,
then $n$ must divide $r$ or $s_a - s_b$. We will show each is
impossible. The first cannot happen because $r < n$. As for the
second, observe that
$s_a \geq n$ and $s_b \geq n$, and that their sum is $L < 3n$;
so $s_a - s_b = s_a + s_b - 2s_b < 3n - 2n = n$. So $n$ cannot divide
$s_a - s_b$ either.
\end{proof}
\start{lem}{diff ti-2} If $n$ is a power of $2$ and $L$ is odd,
then all
the numbers $t_0, \dots, t_{n-1}$ are different.
\end{lem}
\begin{proof} Arguing as in \lemc{diff ti}:
in this case, since $r < n$ it cannot be a multiple
of $n$, so $s_a - s_b$ must be even. But $s_a - s_b$ is congruent
mod 2 to $s_a + s_b = L$, which is odd.
\end{proof}
\start{prop}{n} If a word $w$ of odd length $L$ has a number
of $\alpha\beta$-blocks which is prime or a power of two then $\mathop\mathrm{SI}(w)\le (L^{2}-1)/4$.
\end{prop}
\begin{proof}
Let $n$ be the number of $\alpha\beta$-blocks in $w$.
By Lemmas \ref{diff ti} and \refc{diff ti-2} the numbers $t_0, \dots, t_{n-1}$ are all different; in
fact (\lemc{t_i}) their differences are all even, so any two of
them must be at least 2 units apart. It follows that
$$\sum_{i=0}^{n-1}t_i \geq \min t_i + (\min t_i + 2) + \cdots
+ (\min t_i + 2n-2) = n\min t_i + n(n-1)$$
so their average, which we calculated
in the proof of \propc{nL} to be $nL$, is greater than or
equal to $\min t_i + n -1$, and so
(using equation \ref{equation(*)})
$$\mathop\mathrm{SI}(w) \leq \min t_i -n^2 + n - 1 \leq nL -n^2 = n(L-n) \leq L^2/4;$$
since $L$ is odd and $\mathop\mathrm{SI}(w)$ is an integer, this means
$$\mathop\mathrm{SI}(w) \leq (L^2-1)/4.$$
\end{proof}
Propositions \ref{3n bound} and \ref{n} prove \theoc{theo-odd}.
\section{Lower bounds; proof of \theoc{lower-bound}}\label{lower}
\start{defi}{positive} A word in the generators of a surface group and their inverses is \emph{positive} if no generator occurs along with its inverse.
Note that a positive word is automatically cyclically reduced.
\end{defi}
\start{nota}{alpha1} If $w$ is a word in the alphabet $\{a, A, b, B\}$,
we denote by $\alpha(w)$ (resp. $\beta(w)$) the total number of occurrences
of $a$ and $A$ (resp. $b$ and $B$).
\end{nota}
\start{prop}{lower-case} For any reduced cyclic word $w$ in the
alphabet $\{a, A, b, B\}$ there is a positive
cyclic word $w'$ of the same length
with $\alpha(w') = \alpha(w), ~\beta(w') = \beta(w)$
and $\mathop\mathrm{SI}(w') \leq \mathop\mathrm{SI}(w)$.
\end{prop}
\begin{proof}
We show how to change $w$ into a word written with
only $a$ and $b$
while controlling the self-intersection number.
If all the letters in $w$ are capitals, take $w' = w^{-1}$.
Otherwise, look in $w$ for a maximal (cyclically)
connected string of (one or more) capital letters.
The letters at the ends of this string must be one of the pairs
$(A,A), (A,B), (B,A), (B,B)$. In the case $(B,B)$
(the other three cases admit a similar analysis), focus on that
string and write
$$w=\langle xa^{a_1}B^{b_1}A^{a_2}B^{b_2}\dots A^{a_{i}}B^{b_{i}}a^{a_{i+1}}
\rangle$$
where $x$ stands for the rest of the word.
Consider a representative of $w$ with minimal self-intersection.
In this representative consider the arcs corresponding to the segments
$aB$ (joining the last $a$ of the $a^{a_{1}}$-block to the first
$B$ of $B^{b_{1}}$) and $Ba$ (joining the last $B$ in $B^{b_{i}}$ to the first $a$ in $a^{a_{i+1}}$). These two arcs intersect in a point $p$. Perform surgery around $p$ in the following way: remove these two segments, and replace them with an $ab$ and a $ba$ respectively, using the same endpoints. This
surgery links the arc $a^{a_{i+1}}xa^{a_{1}}$ to the arc
$B^{b_1}A^{a_2}B^{b_2}\dots A^{a_{i}}B^{b_{i}}$ traversed in the
opposite direction, i.e. gives
a curve corresponding to the word
$$ w' = \langle a^{a_{i+1}}xa^{a_{1}}
(B^{b_1}A^{a_2}B^{b_2}\dots A^{a_{i}}B^{b_{i}})^{-1}\rangle. $$
This word has the same $\alpha$ and $\beta$
values as $w$, has lost at least one self-intersection, and has
strictly fewer upper-case letters than $w$. The process may be
repeated until all upper-case letters have been eliminated.
\end{proof}
\start{prop}{cut-the-word}
In any surface $S$ with boundary, let $w$ be a cyclically reduced word in
the generators of $\pi_1S$ which does not admit a simple representative curve.
Then a linear word $\mathsf w$
representing $w$ (notation from \secc{linear})
can be written
as the concatenation $\mathsf w=\mathsf u\cdot \mathsf v$
of two linear words, in such a way that the associated cyclic words
satisfy
$\mathop\mathrm{SI}(u)+\mathop\mathrm{SI}(v)+1 \leq \mathop\mathrm{SI}(w)$. (Note that $u$ and $v$ are not necessarily cyclically reduced).
\end{prop}
\begin{proof}
\begin{figure}[htp]
\centering
\includegraphics[width=5.5in]{w=uv1.eps}
\caption{Splitting $\mathsf w$ as $\mathsf u\cdot \mathsf v$ does not add any new intersections, while
the intersection corresponding to $p$ is lost. This figure shows
$\mathsf w=Babba$ (I) yielding
$\mathsf u = aB$ and $\mathsf v=bba$ (II).}\label{cut-the-curve}
\end{figure}
Consider a minimal representative of $w$
drawn
in the fundamental domain. It must have self-intersections;
let $p$ be one of them. Let $\mathsf w = x_1x_2\dots x_L$ (where $x_i \in \{a, A, b, B\}$) be a linear representative for $w$, and
suppose that $x_ix_{i+1}$ and $x_jx_{j+1}$, with $i<j$,
are the two segments intersecting at $p$, (see \figc{cut-the-curve}, where
$x_ix_{i+1} = Ba$ and $x_jx_{j+1} = ba$).
Set $\mathsf u =x_{j+1}\dots x_L x_1x_2\dots x_i$ and
$\mathsf v = x_{i+1}\dots x_j$. (In case $i+1=j$,
$\mathsf v$ is a single-letter word). The cyclic words $u$ and $v$
together contain all the segments of $w$, except that $x_ix_{i+1}$ and $x_jx_{j+1}$ have
been replaced by $x_ix_{j+1}$ and $x_jx_{i+1}$.
Furthermore, there is a one-to-one correspondence between the intersection points on $x_ix_{j+1}\cup x_jx_{i+1}$ and some subset of the intersection points on $x_ix_{i+1} \cup x_jx_{j+1}$. In fact, labeling the
endpoints of the segment corresponding to $x_ix_{i+1}$ (resp. $x_jx_{j+1}$) as $Q_i$ and $q_{i+1}$
(resp. $Q_j$ and $q_{j+1}$), as in \figc{cut-the-curve},
observe that the segment corresponding to $x_ix_{j+1}$ and the broken arc $Q_i p q_{j+1}$ have the same endpoints, so any segment intersecting the first must intersect the second and therefore intersect part of $x_ix_{i+1} \cup x_jx_{j+1}$; similarly for $x_jx_{i+1}$ and
$Q_j p q_{i+1}$ (compare \figc{cut-the-curve}).
Therefore the change from $w$ to $u \cup v$ does not add any new intersections, while
the intersection corresponding to $p$ is lost. Hence
$\mathop\mathrm{SI}(u)+\mathop\mathrm{SI}(v)+1\leq \mathop\mathrm{SI}(w).$ \end{proof}
The next lemma is needed in the proof of \propc{alpha}.
\start{lem}{six simple curves} In the doubly punctured plane $P$,
if a reduced, non-empty word has a simple representative curve, then that
curve is parallel to a boundary component. Thus with
the notation of \figc{pantsgrids} the only such
words are ${a,b,ab,A,B \mbox{~and~} AB}$.
\end{lem}
\begin{proof} Let $\gamma$ be a simple, essential curve in $P$.
Since $P$ is planar, $P \setminus \gamma$ has two connected components,
$P_{1}$ and $P_{2}$. Since $\gamma$ is essential, neither $P_{1}$ nor $P_{2}$
is contractible, hence their Euler characteristics satisfy
$\chi(P_{1}) \leq 0$ and $\chi(P_{2}) \leq 0$; since $\chi(P) = -1$
and $\chi(P)=\chi(P_{1})+\chi(P_{2})$ it follows that either
$\chi(P_{1})=0$ or $\chi(P_{2})=0$. Hence, one of the two connected
components is an annulus, which implies that $\gamma$ is parallel to a boundary component, as desired.
\end{proof}
\start{prop}{alpha} If $w$ is a positive
cyclic
word representing a free homotopy class in the doubly punctured plane then $\mathop\mathrm{SI}(w) \geq \alpha(w)-1$ and $\mathop\mathrm{SI}(w) \geq \beta(w)-1.$
\end{prop}
\begin{proof}
By \lemc{six simple curves} the only words corresponding to simple curves
are $a, b, ab$ and their inverses; for these, the statement
holds. In particular it holds for all words of length one.
Suppose $w$ is any other positive word; it has length $L$ strictly
greater than 1. We may suppose by induction that the statement
holds for all words of length less than $L$.
By \propc{cut-the-word}, since the curve associated to $w$ is non-simple,
the word $w$ has
a linear representative $\mathsf w$ which can be split as $\mathsf u\cdot \mathsf v$
so that the associated cyclic words satisfy $\mathop\mathrm{SI}(w)\geq \mathop\mathrm{SI}(u)+\mathop\mathrm{SI}(v)+1$.
Note that $u$ and $v$ have length strictly less than $L$;
furthermore since $w$ is positive, so are $u$ and $v$.
Therefore by the induction hypothesis
$\mathop\mathrm{SI}(u)+\mathop\mathrm{SI}(v)+1\geq \alpha(u) -1 +\alpha(v) -1 +1$, and
so $\mathop\mathrm{SI}(w) \geq \alpha(u) +\alpha(v) -1 = \alpha(w)-1.$
The $\beta$ inequality is proved in the same way.
\end{proof}
\begin{prooftext}{Proof of \theoc{lower-bound}} By \propc{lower-case}
there is a positive word $w'$ of length $L$ such that
$\alpha(w') = \alpha(w), ~\beta(w') = \beta(w)$ and
$\mathop\mathrm{SI}(w) \ge \mathop\mathrm{SI}(w')$. Then
\propc{alpha} yields $\mathop\mathrm{SI}(w') \geq \max \{\alpha(w),\beta(w)\}-1.$
Since $\alpha(w)+\beta(w)=L$ it follows that $\mathop\mathrm{SI}(w) \geq L/2-1$ if $L$ is even and $\mathop\mathrm{SI}(w) \geq (L+1)/2-1=(L-1)/2$ if $L$ is odd.
\end{prooftext}
\bibliographystyle{amsalpha}
\section{Introduction}
A social network is an interconnected structure formed by a group of agents for social interaction \cite{liu2011social}. Nowadays, social networks play an important role in spreading information, opinions, ideas, innovations, rumors, etc. \cite{centola2010spread} \cite{nekovee2007theory}. This spreading process has huge practical importance in viral marketing \cite{leskovec2007dynamics} \cite{chen2010scalable}, personalized recommendation \cite{song2006personalized}, feed ranking \cite{ienco2010meme}, target advertisement \cite{li2015real}, selecting influential Twitter users \cite{weng2010twitterrank} \cite{bakshy2011everyone}, selecting informative blogs \cite{leskovec2007cost}, etc. Hence, recent years have witnessed significant attention in the study of \textit{influence propagation} in online social networks. Consider the case of viral marketing by a commercial house, where the goal is to attract users to purchase a particular product. The best way to do this is to select a set of highly influential users and distribute free samples to them. If they like the product, they will share the information with their neighbors. Due to their high influence, many of the neighbors will try the product and share the information with their own neighbors. This cascading process continues and ultimately a large fraction of the users will try the product. Naturally, the number of free samples will be limited for economic reasons. Hence, this process is fruitful only if the free samples are distributed among highly influential users, and the problem boils down to selecting influential users from the network. This problem is known as the \textit{Social Influence Maximization Problem} \footnote{Now onwards, we will use Target Set Selection and Social Influence Maximization interchangeably}.
\par Social influence occurs due to the diffusion of information in the network. This phenomenon in a networked system is well studied \cite{cowan2004network} \cite{kasprzak2012diffusion}. Specifically, there are two popularly adopted models to study the diffusion process, namely the \textit{Independent Cascade Model} (abbreviated as \textit{IC Model}), which captures the independent behavior of the agents, and the \textit{Linear Threshold Model} (abbreviated as \textit{LT Model}), which captures the collective behavior of the agents (a detailed discussion is deferred till Section \ref{BID}) \cite{shakarian2015independent}. In both models, information is diffused in discrete time steps from some initially identified nodes and continues for several rounds. In the SIM Problem, our goal is to maximize influence by selecting appropriate seed nodes.
\par To study the SIM Problem, a social network is abstracted as a \textit{graph} with the users as the \textit{vertex set} and \textit{social ties} among the users as the edge set. It is also assumed that the \textit{diffusion threshold} (a numerical measure of how hard it is to influence the user; the larger the value, the harder the user is to influence) is given as the \textit{vertex weight}, and the influence probability between two users as the \textit{edge weight}. In this setting, the SIM Problem is stated as follows: for a given size $k$ ($k \in \mathbb{Z}^{+}$), choose a set $\mathcal{S}$ of $k$ nodes such that $\vert \sigma(\mathcal{S}) \vert$ is maximized \cite{sun2011survey}. Here $\sigma(.)$ is the \textit{social influence function}. For any given seed set $\mathcal{S}$, $\sigma(\mathcal{S})$ returns the set of influenced nodes when the diffusion process is over.
\subsection{Focus and Goal of the Survey}
In this survey, we have mainly focused on three aspects of the problem, as mentioned below.
\begin{itemize}
\item Variants of this problem studied in the literature,
\item Hardness results of this problem in both traditional as well as parameterized complexity framework,
\item Different solution approaches proposed in the literature.
\end{itemize}
The overview of this survey is shown in Figure \ref{fig:1}. There are several other aspects of the problem, such as \textit{SIM in the presence of adversaries}, \textit{in a time\mbox{-}varying social network}, \textit{in competitive scenario} etc., which we have not considered in this survey.
\begin{figure}
\centering
\includegraphics[scale=0.25]{Overview.png}
\caption{Overview of this survey}
\label{fig:1}
\end{figure}
The main goal of this survey is threefold:
\begin{itemize}
\item to provide comprehensive understanding about the SIM Problem and its different variants studied in the literature,
\item to develop a taxonomy for classifying the existing solution methodologies and present them in a concise manner,
\item to present an overview of the current research trend and future research directions regarding this problem.
\end{itemize}
We set the following two criteria for the studies to be included in this survey:
\begin{itemize}
\item Research work presented in the publication should produce theoretically or empirically better results than some of the previously published ones.
\item The presented solution methodology should be generic, i.e., it should work for a network of any topology.
\end{itemize}
\subsection{Organization of the Survey}
\par Rest of the paper is organized as follows: Section \ref{Sec:Bac} describes some background material required to understand the subsequent sections of this paper. Section \ref{Sec:VTSSP} formally introduces the SIM Problem and its variants studied in the literature. Section \ref{Sec:Hard} describes hardness results of this problem in both traditional as well as parameterized complexity theory framework. Section \ref{Sec:MRC} describes some major research challenges in and around this problem. Section \ref{Sec:SolTSS} describes the proposed taxonomy for classifying the existing solution methodologies in different categories and discuss them. Section \ref{Sec:SRD} presents the summary of the survey and gives some future research directions. Finally, Section \ref{CR} presents concluding remarks regarding this survey.
\section{Background} \label{Sec:Bac}
In this section, we describe relevant background topics up to the required depth, such as \textit{basic graph theory}, the relation between SIM and existing graph theoretic problems, \textit{approximation algorithms}, \textit{parameterized complexity theory} and \textit{information diffusion models} in social networks. The symbols and notations that are used in the subsequent sections of this paper are given in Table \ref{Tab : 1}.
\begin{table}
\centering
\caption{Symbols and Notations}
\label{Tab : 1}
\begin{tabular}{|c|c|}
\hline
{\ \textbf{Symbols}} & {\ \textbf{Interpretation}}\\
\hline
$G(V, E, \theta, \mathcal{P})$ & Directed, vertex and edge weighted social network\\
$V(G)$ & Set of vertices of network $G$\\
$E(G)$ & Set of edges of network $G$\\
$U$ & Set of users of the network, i.e., $U=V(G)$\\
$n$ & Number of users of the network, i.e., $n=\vert V(G) \vert$\\
$m$ & Number of Edges of the network, i.e., $m=\vert E(G) \vert$\\
$\theta$ & Vertex weight function of $G$, i.e., $\theta:V(G) \longrightarrow [0,1]$\\
$\theta_i$ & Weight of vertex $u_i$, i.e., $\theta_i=\theta(u_i)$\\
$\mathcal{P}$ & Edge weight function, i.e., $\mathcal{P}:E(G) \longrightarrow (0,1]$\\
$p_{ij}$ & Edge weight of the edge $(u_iu_j)$\\
$\mathcal{N}(u_i)$ & Open neighborhood of vertex $u_i$\\
$\mathcal{N}[u_i]$ & Closed neighborhood of vertex $u_i$\\
$[n]$ & Set $\left\{1, 2,\dots\ , n \right\}$\\
$\mathcal{N}^{in}(u_i)$ & Incoming neighbors of vertex $u_i$ \\
$\mathcal{N}^{out}(u_i)$ & Outgoing neighbors of vertex $u_i$\\
$deg^{in}(u_i)$ & Indegree of vertex $u_i$\\
$deg^{out}(u_i)$ & Outdegree of vertex $u_i$\\
$dist(u,v)$ & Number of edges in the shortest path between $u$ and $v$.\\
$\mathcal{S}$ & Seed set for diffusion, i.e., $\mathcal{S} \subset V(G)$\\
$k$ & Maximum allowable cardinality for the seed set, i.e., $\vert \mathcal{S} \vert \leq k$\\
$r$ & Maximum allowable round for diffusion \\
\hline
\end{tabular}
\end{table}
\subsection{Basic Graph Theory} \label{BGT}
Graphs are popularly used to represent most of the real world networked systems, including social networks \cite{campbell2013social} \cite{wang2011understanding}. Here, we report some preliminary concepts of \textit{basic graph theory} from \cite{diestel2005graph}. A graph is denoted by $G(V, E)$ where $V(G)$ and $E(G)$ are the \textit{vertex set} and \textit{edge set} of $G$, respectively. For any arbitrary vertex, $u_i \in V(G)$, its \textit{open neighborhood} is defined as $\mathcal{N}(u_i)=\left\{u_j \vert (u_iu_j) \in E(G)\right\}$. The \textit{closed neighborhood} of $u_i$ is $\mathcal{N}[u_i]=u_i \cup \mathcal{N}(u_i)$. The \textit{degree} of a vertex is defined as the \textit{cardinality} of its open neighborhood, i.e., $deg(u_i)=\vert \mathcal{N}(u_i) \vert$. For any $S \subset V(G)$, its open neighborhood and closed neighborhood are $\mathcal{N}(S)=\underset{u_i \in S}{\cup} \mathcal{N}(u_i)$ and $\mathcal{N}[S]=S \cup \mathcal{N}(S)$, respectively. Two vertices $u_i$ and $u_j$ are said to be \textit{true twins} if $\mathcal{N}[u_i]=\mathcal{N}[u_j]$, and \textit{false twins} if $\mathcal{N}(u_i)=\mathcal{N}(u_j)$. A graph is \textit{weighted} if a real number is associated with its vertices, its edges, or both. A graph is \textit{directed} if its edges have directions. Edges that join the same pair of vertices are known as parallel edges, and an edge whose two endpoints are the same is known as a \textit{self\mbox{-}loop}. A graph is \textit{simple} if it is free from self\mbox{-}loops and parallel edges.
\par The information diffusion process in a \textit{social network} is represented by a \textit{simple}, \textit{directed}, \textit{vertex\mbox{-} and edge\mbox{-}weighted graph} $G(V, E, \theta, \mathcal{P})$. Here, $V(G)= \left\{u_1, u_2,\dots\ , u_n\right\}$ is the set of users of the network and $E(G)= \left\{e_1, e_2,\dots\ , e_m\right\}$ is the set of social ties among the users. $\theta$ and
$\mathcal{P}$ are the \textit{vertex} and \textit{edge weight} functions, which assign to each vertex and edge a numerical weight between $0$ and $1$, i.e., $\theta:V(G) \longrightarrow [0,1]$ and $\mathcal{P}:E(G) \longrightarrow (0,1]$. In \textit{information diffusion}, vertex and edge weights are called node thresholds and diffusion probabilities, respectively \cite{gruhl2004information}. The larger the value of $\theta_i$, the harder it is to influence user $u_i$; the larger the value of $p_{ij}$, the more likely it is that $u_i$ can influence $u_j$. For any user $u_i \in V(G)$, its sets of \textit{incoming neighbors} and \textit{outgoing neighbors} are defined as $\mathcal{N}^{in}(u_i) = \left\{ u_j \vert (u_ju_i) \in E(G) \right\}$ and $\mathcal{N}^{out}(u_i) = \left\{ u_j \vert (u_iu_j) \in E(G) \right\}$, respectively, and its \textit{indegree} and \textit{outdegree} are defined as $deg^{in}(u_i)= \vert \mathcal{N}^{in}(u_i) \vert$ and $deg^{out}(u_i)= \vert \mathcal{N}^{out}(u_i) \vert$, respectively. A \textit{path} in a directed graph is a sequence of distinct vertices such that every pair of consecutive vertices is joined by an \textit{edge}. Two users are connected in the graph $G$ if there exists a directed path between them. A directed graph is said to be connected if there exists a path between every pair of users.
\subsection{Relation between Target Set Selection and Other Graph Theoretic Problems}
\par The TSS Problem is a generalization of many standard graph theoretic problems discussed in the literature, such as \textit{dominating set with threshold} \cite{harant1999dominating}, \textit{vector domination problem} \cite{raman2008parameterized}, and \textit{k\mbox{-}tuple dominating set} \cite{klasing2004hardness} (in these problems, the diffusion runs for a single round instead of multiple rounds); \textit{vertex cover} \cite{chen2009approximability} (where the vertex threshold equals the number of neighbors of the node); \textit{irreversible k\mbox{-}conversion problem} \cite{dreyer2009irreversible} and \textit{r\mbox{-}neighbor bootstrap percolation problem} \cite{balogh2010bootstrap} (where the threshold of each vertex is $k$ or $r$, respectively); and \textit{dynamic monopolies} \cite{peleg2002local} (where the threshold is half of the neighbors of the node).
\subsection{Approximation Algorithm} Most optimization problems arising in real life are NP\mbox{-}Hard \cite{garey2002computers}. Hence, we cannot expect to solve them exactly by any deterministic algorithm in polynomial time. The goal, therefore, is to obtain an approximate solution within affordable time. Approximation algorithms serve this purpose and also provide a worst\mbox{-}case guarantee on solution quality. For a maximization problem $\mathcal{P}$, let $\mathcal{A}$ be an algorithm that solves it, and let $\mathcal{I}$ be the set of all possible input instances of $\mathcal{P}$. For an input instance $I$ of $\mathcal{P}$, let $\mathcal{A}^{*}(I)$ be the optimal solution value and $\mathcal{A}(I)$ the value of the solution generated by $\mathcal{A}$. Then $\mathcal{A}$ is called an $\alpha$\mbox{-}factor \textit{absolute approximation algorithm} if $\forall I \in \mathcal{I}$, $\vert \mathcal{A}^{*}(I)- \mathcal{A}(I) \vert \leq \alpha$, and an $\alpha$\mbox{-}factor \textit{relative approximation algorithm} if $\forall I \in \mathcal{I}$, $max\{\frac{\mathcal{A}^{*}(I)}{\mathcal{A}(I)}, \frac{\mathcal{A}(I)}{\mathcal{A}^{*}(I)} \} \leq \alpha$ ($\mathcal{A}(I),\mathcal{A}^{*}(I) \neq 0$) \cite{williamson2011design}. Section \ref{Sec:AAPG} of this paper describes relative approximation algorithms for solving the SIM Problem.
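As a small numerical illustration of the relative approximation factor (our own sketch, not from \cite{williamson2011design}), the quantity $max\{\frac{\mathcal{A}^{*}(I)}{\mathcal{A}(I)}, \frac{\mathcal{A}(I)}{\mathcal{A}^{*}(I)}\}$ can be checked directly:

```python
def relative_ratio(opt_value, alg_value):
    """max(OPT/ALG, ALG/OPT) for a single instance; both values must be nonzero."""
    assert opt_value > 0 and alg_value > 0
    return max(opt_value / alg_value, alg_value / opt_value)

def is_alpha_approximation(opt_value, alg_value, alpha):
    """True if the algorithm's value is within factor alpha of the optimum."""
    return relative_ratio(opt_value, alg_value) <= alpha
```

For example, an algorithm returning spread 8 where the optimum is 10 is a 1.25-factor relative approximation on that instance.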
\subsection{Parameterized Complexity Theory}
Parameterized complexity theory is another way of dealing with NP\mbox{-}Hard optimization problems. It classifies computational problems based on their inherent difficulty with respect to parameters of the problem. There are several \textit{complexity classes} in parameterized complexity theory. The class FPT (\textit{Fixed Parameter Tractable}) contains the problems whose instances $(x,k) \in \mathcal{I}$, where $x$ is the input, $k$ is the parameter, and $\mathcal{I}$ is the set of instances, can be solved in time $\mathcal{O}(f(k) \vert x \vert ^{\mathcal{O}(1)})$, where $f(k)$ is a function depending only on $k$ and $\vert x \vert$ denotes the length of the input. The $W$ hierarchy is a collection of complexity classes with the properties $W[0]=FPT$ and $W[i] \subseteq W[j]$ $\forall i \leq j$ \cite{downey1998parameterized}. Many natural computational problems occupy the lower levels of the hierarchy, i.e., $W[1]$ and $W[2]$. In Section \ref{Sec:Hard}, we describe hardness results for the TSS Problem in the parameterized complexity theoretic setting.
\subsection{Information Diffusion in a Social Network} \label{BID}
Diffusion phenomena in networked systems have received attention from different disciplines, such as \textit{epidemiology} (how do diseases spread in a human contact network?) \cite{salathe2010high}, \textit{social network analysis} (how does information propagate in a social network?) \cite{xu2010information}, and \textit{computer networks} (how do computer viruses propagate in an e\mbox{-}mail network?) \cite{zou2007modeling}. \textit{Information diffusion} in an on\mbox{-}line social network is the phenomenon by which the word-of-mouth effect occurs electronically; its mechanism has therefore been studied extensively \cite{kimura2006tractable} \cite{valente1995network}. Several models of the diffusion process exist in the literature \cite{heidari2016modeling}, ranging in nature from \textit{deterministic} to \textit{probabilistic}. Below, we describe some well-studied \textit{information diffusion models} from the literature.
\begin{itemize}
\item \textit{Independent Cascade Model} (IC Model) \cite{shakarian2015independent}: This is one of the most widely studied probabilistic diffusion models, used by Kempe et al. \cite{kempe2003maximizing} in their seminal work on \textit{social influence maximization}. In this model, a node is either in the active state (i.e., influenced) or in the inactive state (i.e., not influenced). Initially (i.e., at $t=0$), all the nodes except the seeds are inactive. Every node $u_i$ activated at time stamp $t$ gets a single chance to activate each of its currently \textit{inactive} out\mbox{-}neighbors ($u_j \in \mathcal{N}^{out}(u_i)$ with $u_j$ inactive), succeeding with probability equal to the corresponding edge weight. If $u_i$ succeeds, then $u_j$ becomes active at time stamp $t+1$. A node can change its state from inactive to active, but not from active to inactive. This cascading process continues until no new node is activated in a time stamp. Suppose the diffusion process starts at $t=0$ and continues till $t=\mathcal{T}$, and let $\mathcal{A}_{t}$ denote the set of active nodes up to time stamp $t$, where $t \in [0, \mathcal{T}]$; then
\begin{center}
$\mathcal{A}_{0} \subseteq \mathcal{A}_{1} \subseteq \dots \subseteq \mathcal{A}_{t} \subseteq \mathcal{A}_{t+1} \subseteq \dots \subseteq \mathcal{A}_{\mathcal{T}} \subseteq V(\mathcal{G})$.
\end{center}
Node $u_i$ is said to be newly activated at time stamp $t$ if $u_i \in \mathcal{A}_{t} \setminus \mathcal{A}_{t-1}$.
\item \textit{Linear Threshold Model} (\textit{LT Model}) \cite{shakarian2015independent}: This is another probabilistic diffusion model proposed by Kempe et al. \cite{kempe2003maximizing}. In this model, all the already active incoming neighbors of a node jointly try to activate it. The activation succeeds if the sum of the probabilities on the edges from active incoming neighbors reaches the node's threshold, i.e., if $\sum_{\forall u_j \in \mathcal{N}^{in}(u_i); u_j \in \mathcal{A}_{t}} p_{ji} \geq \theta_i$, then $u_i$ becomes active at time stamp $t+1$. This process continues until no further activation is possible. Unlike the IC Model, this model can also capture negative influence. Later, several extensions of these two fundamental models were proposed \cite{yang2010modeling}.\\
Both the IC and LT Models assume that the diffusion probability between two users is known. Several subsequent studies address how to compute diffusion probabilities \cite{saito2011learning} \cite{saito2008prediction} \cite{goyal2010learning} \cite{saito2010selecting} \cite{kimura2009finding}.
\item \textit{Shortest Path Model} (\textit{SP Model}): This is a special case of the IC Model proposed by Kimura et al. \cite{kimura2006tractable}. In this model, an inactive node gets a chance to become active only along the shortest paths from the initially active nodes, i.e., at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)$. A slightly different variation proposed by the same authors is the \textit{SP1 Model}, in which an inactive node gets a chance of activation at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)$ and at $t=\underset{u \in \mathcal{A}_{0},v \in V(G)\setminus \mathcal{A}_{0}} {min} dist(u,v)+1$.
\item \textit{Majority Threshold Model} (\textit{MT Model}): This is a deterministic threshold model proposed by Valente \cite{valente1996social}. In this model, the vertex threshold is defined as $\theta_i=\ceil*{\frac{deg(u_i)}{2}}$, meaning that a node becomes active when at least half of its neighbors are already active.
\item \textit{Constant Threshold Model} (\textit{CT Model}): This is another deterministic diffusion model, in which the vertex threshold can be any value from 1 to the vertex's degree, i.e., $\theta_i \in [deg(u_i)]$.
\item \textit{Unanimous Threshold Model} (\textit{UT model}) \cite{chen2009approximability}: This is the most influence\mbox{-}resistant model of diffusion. In this model, the threshold of each node in the network is set to its degree, i.e., $\forall u_i \in V(G)$, $\theta_i=deg(u_i)$.
\end{itemize}
There are many other diffusion models, such as the \textit{weighted cascade model}, where the edge weight is the reciprocal of the degree of the node, and the \textit{trivalency model}, where the edge weights are taken uniformly from the set $\{0.1, 0.01, 0.001 \}$. Readers requiring a detailed and exhaustive treatment of information diffusion models may refer to \cite{zhang2014recent}.
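To make the cascade dynamics concrete, the IC Model described above can be sketched as a Monte\mbox{-}Carlo simulation in Python. This is an illustrative sketch of ours, not an implementation from any of the cited works; the toy network and edge probabilities are hypothetical.

```python
import random

def ic_spread_once(out_adj, prob, seeds, rng):
    """One realization of the Independent Cascade process: every newly
    activated node u gets a single chance to activate each currently
    inactive out-neighbor v, succeeding with probability prob[(u, v)]."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:  # stop when no new activation occurs in a time stamp
        next_frontier = []
        for u in frontier:
            for v in out_adj.get(u, ()):
                if v not in active and rng.random() < prob[(u, v)]:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(active)

def estimate_sigma(out_adj, prob, seeds, runs=1000, seed=0):
    """Monte-Carlo estimate of the expected spread sigma(S)."""
    rng = random.Random(seed)
    return sum(ic_spread_once(out_adj, prob, seeds, rng)
               for _ in range(runs)) / runs

# Hypothetical toy network: 1 -> 2 -> 3 with deterministic (p = 1) edges.
out_adj = {1: [2], 2: [3], 3: []}
prob = {(1, 2): 1.0, (2, 3): 1.0}
```

With all edge probabilities equal to 1, seeding node 1 always activates the whole chain, so the Monte-Carlo estimate of $\sigma(\{1\})$ is exactly 3.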
\section{SIM Problem and its Variants} \label{Sec:VTSSP}
The SIM problem has been studied in the literature since the early 2000s. It was introduced by Domingos and Richardson in the context of viral marketing \cite{domingos2001mining}. Owing to its substantial practical importance across multiple domains, several variants of the problem have been introduced. In this section, we describe them one by one.
\paragraph{Basic SIM Problem \cite{ackerman2010combinatorial}:} In the basic version of the problem, along with a \textit{directed social network} $G(V, E, \theta, \mathcal{P})$, we are given two integers $k$ and $\lambda$, and asked to find a subset of at most $k$ nodes such that, after the diffusion process is over, at least $\lambda$ nodes are activated. Mathematically, this problem can be stated as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, $\lambda \in [n]$ and $k \in \mathbb{Z}^{+}$.}\\
\textit{\textbf{Problem:} Basic TSS Problem [Find a $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq k$ and $\vert \sigma(\mathcal{S}) \vert \geq \lambda$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq k$.}\\
\end{center}
\end{mdframed}
\paragraph{Top k-node Problem / Social Influence Maximization Problem (SIM Problem) \cite{narayanam2011shapley}:} This is the most well-studied variant of the problem. For a given social network $G(V, E, \theta, \mathcal{P})$, it asks to choose a set $\mathcal{S}$ of $k$ nodes (i.e., $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert=k$) such that the number of influenced nodes at the end of the diffusion process, $\sigma(\mathcal{S})$, is maximized. Most of the algorithms presented in Section \ref{Sec:SolTSS} are developed solely for this problem. Mathematically, the \textit{Problem of Top k-node Selection} is as follows:
\pagebreak
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $k \in \mathbb{Z}^{+}$.}\\
\textit{\textbf{Problem:} Top k-node Problem [Find a $\mathcal{S} \subset V(G)$ with $\vert \mathcal{S} \vert=k$ such that for any other $\mathcal{S}^{'} \subset V(G)$ with $\vert \mathcal{S}^{'} \vert=k$, $\sigma(\mathcal{S}) \geq \sigma(\mathcal{S}^{'})$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert = k$.}\\
\end{center}
\end{mdframed}
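The Top k-node Problem invites a greedy hill-climbing strategy: repeatedly add the node with the largest marginal gain in spread. The sketch below is illustrative only; it replaces the stochastic spread function $\sigma$ with a deterministic reachability count (which coincides with the IC Model when every edge probability is 1), and the toy network is hypothetical.

```python
def reachable(out_adj, seeds):
    """Stand-in for sigma(S): number of nodes reachable from S, i.e.,
    the IC spread when every edge succeeds with probability 1."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in out_adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def greedy_top_k(out_adj, k, sigma=reachable):
    """Greedy hill-climbing: grow S by the node with the largest
    marginal gain sigma(S + {u}) - sigma(S) until |S| = k."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1
        for u in out_adj:
            if u in seeds:
                continue
            gain = sigma(out_adj, seeds | {u}) - sigma(out_adj, seeds)
            if gain > best_gain:
                best, best_gain = u, gain
        seeds.add(best)
    return seeds

# Hypothetical toy network: two chains 1 -> 2 -> 3 and 4 -> 5.
out_adj = {1: [2], 2: [3], 3: [], 4: [5], 5: []}
```

On this toy instance the greedy rule first picks node 1 (spread 3) and then node 4 (marginal gain 2), covering the whole network with $k=2$ seeds.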
\paragraph{Influence Spectrum Problem \cite{nguyen2017social}:} In this problem, along with the social network $G(V, E, \theta, \mathcal{P})$, we are given two integers $k_{lower}$ and $k_{upper}$ with $k_{upper} > k_{lower}$. The goal is to choose, for each $k \in [k_{lower}, k_{upper}]$, a set $\mathcal{S}$ of $k$ nodes such that the social influence in the network ($\sigma(\mathcal{S})$) is maximum in each case. Intuitively, solving one instance of this problem is equivalent to solving $(k_{upper} - k_{lower} + 1)$ instances of the SIM problem. As viral marketing is typically carried out in phases, with seed sets of different cardinalities used in each phase, the influence spectrum problem arises naturally. Mathematically, it can be written as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $k_{lower},k_{upper} \in \mathbb{Z}^{+}$ with $k_{upper} > k_{lower}$.}\\
\textit{\textbf{Problem:} Influence Spectrum Problem [For each $k \in [k_{lower}, k_{upper}]$, find a $\mathcal{S} \subset V(G)$ with $\vert \mathcal{S} \vert=k$ such that for any other $\mathcal{S}^{'} \subset V(G)$ with $\vert \mathcal{S}^{'} \vert=k$, $\sigma(\mathcal{S}) \geq \sigma(\mathcal{S}^{'})$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert = k$ for each $k \in [k_{lower}, k_{upper}]$.}\\
\end{center}
\end{mdframed}
\paragraph{$\lambda$ Coverage Problem \cite{narayanam2011shapley}:} This is another variant of the SIM Problem, which prescribes the minimum number of influenced nodes required at the end of diffusion. For a given social network $G(V, E, \theta, \mathcal{P})$ and a constant $\lambda \in [n]$, this problem asks to find a minimum cardinality subset $\mathcal{S}$ of nodes such that at least $\lambda$ nodes are influenced at the end of the diffusion process. Mathematically, this problem can be described in the following way:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $\lambda \in [n]$.}\\
\textit{\textbf{Problem:} $\lambda$ Coverage Problem [Find a minimum cardinality subset $\mathcal{S} \subset V(G)$ such that $\vert \sigma(\mathcal{S}) \vert \geq \lambda$].}\\
\textit{\textbf{Output:} The minimum cardinality seed set $\mathcal{S}$ for diffusion.}\\
\end{center}
\end{mdframed}
\paragraph{Weighted Target Set Selection Problem (WTSS Problem) \cite{raghavan2015weighted}:} This is a weighted variant of the SIM Problem. Along with a social network $G(V, E, \theta, \mathcal{P})$, we are given a \textit{vertex weight function} $\phi :V(G) \rightarrow \mathbb{N}_0$ signifying the cost associated with each vertex. This problem asks to find a subset $\mathcal{S}$ that \textit{minimizes} the total \textit{selection cost} while ensuring that all nodes are influenced at the end of diffusion. Mathematically, this problem can be stated as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, vertex cost function $\phi :V(G) \rightarrow \mathbb{N}_0$.}\\
\textit{\textbf{Problem:} Weighted TSS Problem [Find a subset $\mathcal{S} \subset V(G)$ such that $\phi(\mathcal{S})$ is minimized and $\vert \sigma(\mathcal{S}) \vert = n$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ with minimum $\phi(\mathcal{S})$ value.}\\
\end{center}
\end{mdframed}
\paragraph{r\mbox{-}round min\mbox{-}TSS Problem \cite{charikar2016approximating}:} This variant of the SIM Problem constrains the number of rounds the diffusion process may take. Along with a \textit{directed graph} $G(V, E, \theta, \mathcal{P})$, we are given the maximum number of allowable rounds $r \in \mathbb{Z}^{+}$, and asked to find a minimum cardinality seed set $\mathcal{S}$ that activates all the nodes of the network within $r$ rounds. Mathematically, this problem can be described as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$ and $r \in \mathbb{Z}^{+}$.}\\
\textit{\textbf{Problem:} r\mbox{-}round min\mbox{-}TSS Problem [Find a minimum cardinality subset $\mathcal{S}$ such that $\cup_{i=1}^{r}\sigma_i(\mathcal{S})=V(G)$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$.}\\
\end{center}
\end{mdframed}
Here, $\sigma_i(\mathcal{S})$ denotes the set of influenced nodes from the seed set $\mathcal{S}$ at the $i$\mbox{-}th round of diffusion.
\paragraph{Budgeted Influence Maximization Problem (BIM Problem) \cite{nguyen2013budgeted}:} This is another variant of the SIM Problem that has recently been gaining popularity. Along with a \textit{directed graph} $G(V, E, \theta, \mathcal{P})$, we are given a cost function $\mathcal{C}: V(G) \longrightarrow \mathbb{Z}^{+}$ and a fixed budget $\mathcal{B} \in \mathbb{Z}^{+}$. The cost function $\mathcal{C}$ assigns a nonuniform selection cost to every vertex of the network, namely the incentive that must be paid if that vertex is selected as a seed node. The problem asks for a seed set within the budget that maximizes the spread of influence in the network.
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, a cost function $\mathcal{C}: V(G) \longrightarrow \mathbb{Z}^{+}$ and affordable budget $\mathcal{B} \in \mathbb{Z}^{+}$.}\\
\textit{\textbf{Problem:} Budgeted Influence Maximization Problem [Find a seed set $\mathcal{S}$ such that $\underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}$ and for any other seed set $\mathcal{S}^{'}$ with $\underset{v \in \mathcal{S}^{'}}{\sum} \mathcal{C}(v) \leq \mathcal{B}$, $\vert \sigma(\mathcal{S}) \vert \geq \vert \sigma(\mathcal{S}^{'}) \vert$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ with $\underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}$.}\\
\end{center}
\end{mdframed}
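For the budgeted variant, a natural heuristic (a sketch under our own simplifying assumptions, not the algorithm of \cite{nguyen2013budgeted}) is a cost-aware greedy rule that picks the affordable node with the best marginal-gain-to-cost ratio. As before, a deterministic reachability count stands in for $\sigma$, and the toy network and costs are hypothetical.

```python
def reachable(out_adj, seeds):
    """Stand-in for sigma(S): nodes reachable from S (IC spread with p = 1)."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in out_adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def budgeted_greedy(out_adj, cost, budget):
    """Cost-aware greedy: repeatedly add the affordable node with the
    highest (marginal gain) / (cost) ratio until no affordable node
    improves the spread."""
    seeds, spent = set(), 0
    while True:
        base = reachable(out_adj, seeds)
        best, best_ratio = None, 0.0
        for u in out_adj:
            if u in seeds or spent + cost[u] > budget:
                continue
            ratio = (reachable(out_adj, seeds | {u}) - base) / cost[u]
            if ratio > best_ratio:
                best, best_ratio = u, ratio
        if best is None:
            return seeds
        seeds.add(best)
        spent += cost[best]

# Hypothetical toy instance: a star 1 -> {2, 3} and a chain 4 -> 5.
out_adj = {1: [2, 3], 2: [], 3: [], 4: [5], 5: []}
cost = {1: 2, 2: 1, 3: 1, 4: 1, 5: 1}
```

With budget 3, the ratio rule first picks node 4 (gain 2 at cost 1) and then node 1 (gain 3 at cost 2), spending the full budget.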
\paragraph{$(\lambda,\beta, \alpha)$ TSS Problem \cite{cicalese2014latency}:} This variant of the TSS Problem jointly considers the maximum cardinality of the seed set ($\beta$), the maximum number of allowable diffusion rounds ($\lambda$), and the number of influenced nodes at the end of the diffusion process ($\alpha$). Along with the input graph $G(V, E, \theta, \mathcal{P})$, we are given the parameters $\lambda, \beta$, and $\alpha$. Mathematically, this problem can be stated as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, three parameters $\lambda, \beta \in \mathbb{N}$ and $\alpha \in [n]$.}\\
\textit{\textbf{Problem:} $(\lambda,\beta, \alpha)$ TSS Problem [Find a subset $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq \beta$ and $ \vert \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S}) \vert \geq \alpha$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq \beta$.}\\
\end{center}
\end{mdframed}
\paragraph{$(\lambda,\beta, A)$ TSS Problem \cite{cicalese2014latency}:} This is slightly different from the $(\lambda,\beta, \alpha)$ TSS problem: instead of prescribing the number of influenced nodes after the diffusion process, it explicitly specifies which nodes should be influenced. Along with the input social network $G(V, E, \theta, \mathcal{P})$, we are given the maximum number of allowable rounds ($\lambda$), the maximum cardinality of the seed set ($\beta$), and a set of nodes $A \subseteq V(G)$ to be influenced at the end of the diffusion process. This problem asks for a seed set of at most $\beta$ elements that influences all the nodes in $A$ within $\lambda$ rounds of diffusion. Mathematically, the problem can be stated as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, $A \subseteq V(G)$ and two parameters $\lambda, \beta \in \mathbb{N}$.}\\
\textit{\textbf{Problem:} $(\lambda,\beta, A)$ TSS Problem [Find a subset $\mathcal{S} \subset V(G)$ such that $\vert \mathcal{S} \vert \leq \beta$ and $A \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S})$].}\\
\textit{\textbf{Output:} The Seed Set for Diffusion $\mathcal{S} \subset V(G)$ and $\vert \mathcal{S} \vert \leq \beta$.}\\
\end{center}
\end{mdframed}
\paragraph{$(\lambda, A)$ TSS Problem \cite{cicalese2014latency}:} This is slightly different from the $(\lambda,\beta, A)$ TSS Problem. Here, we are interested in finding a minimum cardinality seed set such that, within a fixed number of diffusion rounds ($\lambda$), a given subset of the nodes ($A$) is influenced. Mathematically, the problem can be stated as follows:
\begin{mdframed}[style=MyFrame]
\begin{center}
\textit{\textbf{Instance:} A Directed Graph $G(V, E, \theta, \mathcal{P})$, $A \subset V(G)$ and $\lambda \in \mathbb{N}$.}\\
\textit{\textbf{Problem:} $(\lambda, A)$ TSS Problem [Find a subset $\mathcal{S}$ such that $A \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S})$, and for any other $\mathcal{S}^{'}$ with $\vert \mathcal{S}^{'} \vert < \vert \mathcal{S} \vert$, $A \not \subseteq \cup_{i=1}^{\lambda}\sigma_i(\mathcal{S}^{'})$].}\\
\textit{\textbf{Output:} Minimum cardinality Seed Set for Diffusion $\mathcal{S} \subset V(G)$.}\\
\end{center}
\end{mdframed}
\par We have described the different variants of the TSS Problem in social networks available in the literature. It is surprising that only the Top\mbox{-}k node Problem has been studied in depth.
\section{Hardness Results of TSS Problem} \label{Sec:Hard}
In this section, we describe hardness results for the SIM Problem from both the traditional and the parameterized complexity theoretic perspectives. The problem of social influence maximization was originally posed by Domingos and Richardson \cite{domingos2001mining} \cite{richardson2002mining} in the context of viral marketing. However, Kempe et al. \cite{kempe2003maximizing} were the first to investigate the computational issues of the problem. They showed that the SIM Problem under the IC and LT Models includes the \textit{Set Cover Problem} and the \textit{Vertex Cover Problem}, respectively, as special cases. Both the set cover and vertex cover problems are well-known \textit{NP\mbox{-}Hard} problems \cite{garey2002computers}. The conclusion is presented as Theorem \ref{Th:1}.
\begin{mythm} \label{Th:1} \cite{kempe2003maximizing}
The Social Influence Maximization Problem is NP-Hard under both the IC and LT models, and it is also NP-Hard to approximate within a factor of $n^{(1-\epsilon)}$ for all $\epsilon>0$.
\end{mythm}
Chen \cite{chen2009approximability} studied a variant of the SIM Problem, namely the \textit{$\lambda$ Coverage Problem}. His study differs from that of Kempe et al. \cite{kempe2003maximizing} in two ways. First, Kempe et al. \cite{kempe2003maximizing} investigated the Top\mbox{-}$k$ node problem, whereas Chen \cite{chen2009approximability} studied the $\lambda$\mbox{-}coverage problem. Second, Kempe et al. \cite{kempe2003maximizing} studied the diffusion process under the IC and LT Models, which are probabilistic in nature, whereas Chen \cite{chen2009approximability} considered the \textit{deterministic diffusion models}, namely the \textit{majority threshold model}, the \textit{constant threshold model}, and the \textit{unanimous threshold model}. For the general $\lambda$ Coverage Problem, Chen \cite{chen2009approximability} proved the seminal result presented in Theorem \ref{Th:2}.
\begin{mythm} \cite{chen2009approximability} \label{Th:2}
The TSS Problem cannot be approximated within a factor of $\mathcal{O}(2^{\log^{(1-\epsilon)}n})$ unless $NP \subset DTIME(n^{polylog (n)})$, for any fixed constant $\epsilon > 0$.
\end{mythm}
\par This theorem can be proved by a reduction from the \textit{Minimum Representative Problem} given in \cite{kortsarz2001hardness}. Chen further showed that under the \textit{majority threshold model}, the \textit{$\lambda$\mbox{-}coverage problem} obeys a bound similar to that of Theorem \ref{Th:2}. However, when $\theta(u)=1$ for all $u \in V(G)$, the TSS Problem can be solved trivially, since targeting one node in each component results in the activation of all the nodes of the network. Surprisingly, the problem becomes hard as soon as vertex thresholds of at most 2 are allowed, i.e., $\theta(u) \leq 2$ for all $u \in V(G)$. Chen proved the following result in this regard.
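The trivial case $\theta(u)=1$ for all $u \in V(G)$ can be made concrete: any single active node activates its whole connected component, so one seed per component suffices. The following Python sketch (ours, with a hypothetical two-component undirected graph) illustrates this:

```python
def one_seed_per_component(adj):
    """When every threshold is 1, one seed per connected component
    activates the whole network; return one representative per component."""
    seeds, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        seeds.append(start)       # new component found; pick its first node
        stack = [start]
        while stack:              # DFS marks the rest of the component
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return seeds

# Hypothetical graph with two components: {1, 2, 3} and {4, 5}.
adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
```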
\begin{mythm} \cite{chen2009approximability}
The TSS Problem is NP\mbox{-}Hard, when thresholds are at most 2, even for bounded bipartite graphs.
\end{mythm}
\par This theorem can be proved by a reduction from a variant of the 3\mbox{-}SAT Problem presented in \cite{tovey1984simplified}. Moreover, Chen \cite{chen2009approximability} showed that under the \textit{unanimous threshold model}, the \textit{TSS Problem} is equivalent to the \textit{vertex cover problem}, a well-known NP\mbox{-}Complete problem.
\begin{mythm} \cite{chen2009approximability}
If all the vertex thresholds of the graph are unanimous (i.e., $\theta(u)=deg(u)$ for all $u \in V(G)$), then the TSS Problem is identical to the vertex cover problem.
\end{mythm}
\par Chen \cite{chen2009approximability} also showed that if the underlying graph is a tree, the TSS Problem can be solved in polynomial time, and he gave the \textit{ALG\mbox{-}Tree} Algorithm, which performs this computation. To the best of the authors' knowledge, there is no other literature focusing on the hardness of the TSS Problem from the traditional complexity theoretic perspective. We summarize the results in Table \ref{Tab:TC}.
\par Now, we describe hardness results from the parameterized complexity theoretic perspective. For basic notions of \textit{parameterized complexity}, readers may refer to \cite{downey2013fundamentals}. Bazgan et al. \cite{bazgan2014parameterized} showed that the SIM Problem under the constant threshold model (CTM) does not admit a parameterized approximation algorithm with respect to the parameter \textit{seed set size}. Chopin et al. \cite{chopin2014constant}, \cite{Chopin2012} studied the TSS Problem in the parameterized setting with respect to parameters related to network cohesiveness, such as the \textit{clique cover number} (the number of cliques required to cover all the vertices of the network \cite{karp1972reducibility}), \textit{distance to clique} (the number of vertices whose deletion yields a clique), and \textit{cluster vertex deletion number} (the number of vertices whose deletion yields a collection of disjoint cliques); parameters related to network density, such as \textit{distance to cograph} and \textit{distance to interval graph}; and parameters related to sparsity of the network, namely the \textit{vertex cover number} (the number of vertices whose removal yields an edgeless graph), the \textit{feedback edge set number and feedback vertex set number} (the number of edges or vertices whose removal yields a forest), \textit{pathwidth}, and \textit{bandwidth}. Interestingly, computing each of these parameters except the feedback edge set number is itself an NP\mbox{-}Hard problem. The version of the TSS Problem they worked with is the $\lambda$\mbox{-}coverage problem with $\lambda=n$. They obtained the following two important results related to the sparsity parameters of the network:
\begin{mythm} \cite{chopin2014constant}
TSS Problem with majority threshold model is W[1] hard even
with respect to the combined parameter feedback vertex set, distance to co-graph, distance to interval graph, and path width.
\end{mythm}
\begin{mythm} \cite{chopin2014constant}
TSS Problem is fixed-parameter tractable with respect to the parameter bandwidth.
\end{mythm}
To prove the above two theorems, the authors used reduction rules from \cite{nichterlein2013tractable} and \cite{nichterlein2010tractable}. Results related to dense-structure parameters of the network are given in Theorems \ref{Th:7} through \ref{Th:9}.
\begin{mythm} \label{Th:7}
TSS Problem is W[1]\mbox{-}Hard with parameter cluster vertex deletion number.
\end{mythm}
\begin{mythm}
TSS Problem is NP\mbox{-}Hard and W[2] Hard with respect to the parameter target set size ($k$), even on graphs with clique cover number of two.
\end{mythm}
\begin{mythm} \label{Th:9}
TSS Problem is fixed parameter tractable with respect to the parameter `distance $l$ to clique', provided the threshold function satisfies the following property: $\theta(u)>g(l) \Rightarrow \theta(u)=f(\Gamma(u))$ for all $u \in V(G)$, where $f:P(V(G)) \longrightarrow \mathbb{N}$ and $g: \mathbb{N} \longrightarrow \mathbb{N}$.
\end{mythm}
For detailed proofs of Theorems \ref{Th:7} through \ref{Th:9}, readers may refer to \cite{chopin2014constant}. All the results related to parameterized complexity theory are summarized in Table \ref{Tab:PC}.
\begin{table}
\begin{center}
\begin{tabular}{ | p{3 cm} | p{2.2 cm} | p{6.8 cm} |}
\hline
\textbf{Name of the Problem} & \textbf{Diffusion Model} & \textbf{Major Findings} \\
\hline
\multirow{4}{*}{SIM} & IC Model & Generalizes the set cover problem and hence NP\mbox{-}Hard. \\
\cline{2-3}
& LT Model & Generalizes the vertex cover problem and hence NP\mbox{-}Hard.\\
\hline
\multirow{13}{*}{$\lambda$\mbox{-}Coverage Problem} & MT Model & NP\mbox{-}Hard; moreover, it cannot be approximated within a factor of $\mathcal{O}(2^{\log^{(1-\epsilon)}n})$ unless $NP \subset DTIME(n^{polylog (n)})$.\\
\cline{2-3}
& CT Model with $\theta(u)=1$, $\forall u \in V(G)$ & Can be solved trivially by selecting a vertex from each component of the network.\\
\cline{2-3}
& CT Model with $\theta(u) \leq 2$, $\forall u \in V(G)$ & NP\mbox{-}Hard even for bounded bipartite graphs.\\
\cline{2-3}
& UT Model & Identical to the vertex cover problem and hence NP\mbox{-}Hard.\\
\hline
\end{tabular}
\end{center}
\caption{Hardness results of TSS Problem and its variants in traditional complexity theory perspective.}
\label{Tab:TC}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ | p{2.2 cm} | p{2.5 cm} | p{3 cm} | p{4cm} |}
\hline
\textbf{Name of the Problem} & \textbf{Diffusion Model} & \textbf{Parameter} & \textbf{Major Findings} \\
\hline
SIM & CT Model with $\theta(u) \in [deg(u)]$ & Seed Set Size & Does not have any parameterized approximation algorithm. \\
\hline
$\lambda$\mbox{-}coverage Problem with $\lambda=n$ & MT Model & Feedback vertex set number, Pathwidth, Distance to cograph, Distance to interval graph & The problem is $W[1]$\mbox{-}Hard. \\
\hline
$\lambda$\mbox{-}coverage Problem with $\lambda=n$ & GT Model & Cluster vertex deletion number & The problem is $W[1]$\mbox{-}Hard \\
\hline
$\lambda$\mbox{-}coverage Problem with $\lambda=n$ & CT Model & Cluster vertex deletion number & The problem is fixed parameter tractable. \\
\hline
$\lambda$\mbox{-}coverage Problem with $\lambda=n$ & GT Model & Seed set size & The problem is $W[2]$\mbox{-}Hard \\
\hline
$\lambda$\mbox{-}coverage Problem with $\lambda=n$ & MT Model, CT Model & distance to clique & The problem is fixed parameter tractable.\\
\hline
\end{tabular}
\end{center}
\caption{Hardness results of TSS Problem and its variants in parameterized complexity theory perspective.}
\label{Tab:PC}
\end{table}
\section{Major Research Challenges} \label{Sec:MRC}
Before entering into the critical review of the existing solution methodologies, in this section we briefly discuss the major research challenges concerning the SIM Problem. This will help the reader understand which category of solution methodology can handle which challenge.
\begin{itemize}
\item \textbf{Trade\mbox{-}off Between Accuracy and Computational Time:} From the discussion in Section \ref{Sec:Hard}, it is well understood that the SIM Problem is, in general, computationally hard from both the traditional and the parameterized complexity theoretic perspectives. Hence, for a given $k \in \mathbb{Z}^{+}$, obtaining the $k$ most influential nodes within feasible time is not possible. In this scenario, the intuitive approach is to use some heuristic method for selecting the seed nodes. This reduces the time for seed set generation; however, the number of nodes influenced by the selected seed set could also be arbitrarily small. It is therefore an important issue to design algorithms that run in affordable time while keeping the gap between the optimal spread and the spread due to the selected seed set as small as possible.
\item \textbf{Breaking the Barrier of Submodularity:} In general, the social influence function $\sigma(.)$ is submodular (discussed in Section \ref{Sec:AAPG}). However, in many practical situations, such as \textit{opinion and topic specific influence maximization}, the social influence function may not be submodular \cite{li2013influence} \cite{gionis2013opinion}. This happens because a node can switch its state from a positive opinion to a negative one and vice\mbox{-}versa. In this scenario, solving the SIM Problem is more challenging due to the absence of the submodularity property of the social influence function.
\item \textbf{Practicality of the Problem:} In general, the SIM Problem makes several assumptions, such as that every selected seed will perform up to expectation in the spreading process, that influencing each node of the network is equally important, etc. These assumptions may be unrealistic in some situations. Consider the case of \textit{target advertisement}, where instead of all the nodes, a set of target nodes is chosen and the aim is to maximize the influence within the target nodes \cite{epasto2017real} \cite{ke2018finding}. Moreover, due to the probabilistic nature of diffusion, a seed node may not perform up to expectation in the influence spreading process. Solving the SIM Problem and its variants becomes more challenging if we relax these assumptions.
\item \textbf{Scalability:} Real\mbox{-}life social networks have millions of nodes and billions of edges. So, when solving the SIM and related problems on real\mbox{-}life social networks, scalability is an important issue for any solution methodology.
\item \textbf{Theoretical Challenges:} For a computational problem, any solution methodology is concerned with two aspects. The first is the \textit{computational time}, measured as the execution time when the methodology is implemented on real\mbox{-}life problem instances. The second is the \textit{computational complexity}, measured as the \textit{asymptotic bound} of the methodology. Theoretical research on any computational problem is always concerned with the second aspect. Hence, the theoretical challenge for the SIM Problem is to design algorithms with good asymptotic bounds.
\end{itemize}
\section{Solution Methodologies} \label{Sec:SolTSS}
Due to the inherent hardness of the SIM Problem, over the years researchers have developed algorithms that find seed sets yielding near\mbox{-}optimal influence spread. In this section, the solution methodologies available in the literature are described. First, we describe our proposed taxonomy for classifying the solution methodologies. Figure \ref{Fig:2} gives a diagrammatic representation of the proposed taxonomy, and we describe the categories below.
\begin{figure}
\centering
\includegraphics[height=8 cm, width= 15 cm]{Taxonomy.png}
\caption{Proposed taxonomy for classifying the solution methodologies.}
\label{Fig:2}
\end{figure}
\begin{itemize}
\item \textbf{Approximation algorithms with provable guarantee}: Algorithms in this category give a worst\mbox{-}case bound on influence spread. However, most of them suffer from scalability issues, meaning that the running time grows heavily with the size of the network. Many of the algorithms in this category have near\mbox{-}optimal asymptotic bounds.
\item \textbf{Heuristic solutions}: Algorithms in this category do not give any worst\mbox{-}case bound on influence spread. However, most of them have better scalability and running time compared to the algorithms of the previous category.
\item \textbf{Meta\mbox{-}heuristic solutions}: Methodologies in this category are metaheuristic optimization algorithms, many of which are developed based on evolutionary computation techniques. These algorithms also do not give any worst\mbox{-}case bound on influence spread.
\item \textbf{Community\mbox{-}Based Solutions}: Algorithms in this category use community detection on the underlying social network as an intermediate step to bring the problem down to the community level, which improves scalability. Most of the algorithms in this category are heuristics and hence do not provide any worst\mbox{-}case bound on influence spread.
\item \textbf{Miscellaneous}: Algorithms in this category do not share any particular property, and hence we put them under this heading.
\end{itemize}
\subsection{Approximation Algorithms with Provable Guarantee} \label{Sec:AAPG}
Kempe et al. \cite{kempe2003maximizing} \cite{kempe2005influential} \cite{kempe2015maximizing} were the first to study the problem of social influence maximization as a \textit{combinatorial optimization} problem and investigated its computational issues under two diffusion models, namely the LT and IC Models. In their studies, they assumed that the \textit{social influence function} $\sigma(.)$ is \textit{submodular} and \textit{monotone}. The function $\sigma:2^{V(G)} \rightarrow \mathbb{R}^{+}$ is submodular if it follows the \textit{diminishing return property}, which means $\forall \ \mathcal{S} \subset \mathcal{T} \subset V(G)$ and $u_i \in V(G) \setminus \mathcal{T}$, $\sigma(\mathcal{S} \cup u_i)-\sigma(\mathcal{S}) \geq \sigma(\mathcal{T} \cup u_i)-\sigma(\mathcal{T})$; and $\sigma$ is monotone if for any $\mathcal{S} \subset V(G)$ and $\forall u_i \in V(G)\setminus \mathcal{S}$, $\sigma(\mathcal{S} \cup u_i) \geq \sigma(\mathcal{S})$. They proposed a greedy strategy for selecting the seed set, presented in Algorithm \ref{Brufo}.
\begin{algorithm}[H]
\KwData{Given Social Network $G(V, E, \theta, \mathcal{P})$ and some $k \in \mathbb{Z}^{+}$.}
\KwResult{Seed Set for diffusion $\mathcal{S} \subset V(G)$.}
$\mathcal{S} \leftarrow \phi$\;
\For{$i=1 $ to $k$}{
$u=\underset{u_i \in V(G)\setminus \mathcal{S}}{argmax} \quad \sigma(\mathcal{S} \cup u_i)-\sigma(\mathcal{S})$\;
$\mathcal{S} \gets \mathcal{S} \cup u$
}
$return \quad \mathcal{S}$
\caption{Kempe et al.'s \cite{kempe2003maximizing} Greedy Algorithm for \textit{Seed Set Selection}. (\textbf{Basic Greedy})}
\label{Brufo}
\end{algorithm}
Starting with the empty seed set ($\mathcal{S}$), Algorithm \ref{Brufo} iteratively selects the node currently not in $\mathcal{S}$ whose inclusion in $\mathcal{S}$ causes the maximum marginal increment in $\sigma(.)$. Let $\mathcal{S}_{i}$ denote the seed set at the $i$\mbox{-}th iteration of the \textit{`for' loop} in Algorithm \ref{Brufo}. In the $(i+1)$\mbox{-}th iteration, $\mathcal{S}_{i+1}=\mathcal{S}_{i} \cup \{ u \}$ if the value $\sigma(\mathcal{S}_{i} \cup u)-\sigma(\mathcal{S}_{i})$ is the maximum among all $u \in V(G) \setminus \mathcal{S}_{i}$. This iterative process continues until the allowed cardinality of $\mathcal{S}$ is reached. Kempe et al. \cite{kempe2003maximizing} showed that Algorithm \ref{Brufo} provides a $(1-\frac{1}{e}-\epsilon)$ (for any $\epsilon>0$) approximation bound on influence spread, as stated in Theorem \ref{Th:10}.
\begin{mythm} \label{Th:10}
Algorithm \ref{Brufo} provides a $(1-\frac{1}{e}-\epsilon)$ (with $\epsilon>0$) factor approximation bound for the SIM Problem; i.e., if $\mathcal{S}^{*}$ is the optimal seed set of $k$ elements, then $\sigma(\mathcal{S}) \geq (1-\frac{1}{e}-\epsilon).\sigma(\mathcal{S}^{*})$, where $e=\sum_{x=0}^{\infty} \frac{1}{x!}$ is the base of the natural logarithm.
\end{mythm}
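The submodularity and monotonicity properties defined above can be checked mechanically on a toy set function. The following sketch (our own illustration, not taken from the cited papers) verifies both properties by brute-force enumeration for a small coverage function, a classic example of a monotone submodular function:

```python
from itertools import combinations

# A small coverage function: sigma(S) = number of items covered by S.
# Coverage functions are classic monotone submodular functions.
covers = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1, 6}}

def sigma(S):
    covered = set()
    for u in S:
        covered |= covers[u]
    return len(covered)

def subsets(nodes):
    for r in range(len(nodes) + 1):
        for S in combinations(nodes, r):
            yield set(S)

def is_monotone_submodular(ground, f):
    nodes = sorted(ground)
    for S in subsets(nodes):
        for T in subsets(nodes):
            if not S <= T:
                continue
            for u in ground - T:
                # monotonicity: adding a node never decreases f
                if f(T | {u}) < f(T):
                    return False
                # diminishing returns: the marginal gain of u w.r.t. the
                # smaller set S is at least its gain w.r.t. the larger set T
                if f(S | {u}) - f(S) < f(T | {u}) - f(T):
                    return False
    return True

print(is_monotone_submodular(set(covers), sigma))  # True
```

Brute-force enumeration is of course only feasible on toy ground sets; its purpose here is to make the two defining inequalities concrete.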
Though Algorithm \ref{Brufo} gives a good approximation bound on influence spread, it suffers from two major shortcomings. First, for any given seed set $\mathcal{S}$, exact computation of the influence spread (i.e., $\sigma(\mathcal{S})$) is $\#P\mbox{-}Complete$. Hence, the influence spread is approximated by running a large number of \textit{Monte Carlo Simulations} (MCS), counting the total number of influenced nodes across all simulation runs, and averaging over the number of runs. (Recently, Maehara et al. \cite{maehara2017exact} developed the first procedure for exact computation of influence spread, using \textit{binary decision diagrams}.) Secondly, the number of times the influence function $\sigma(.)$ needs to be evaluated is huge: selecting a seed set of size $k$ with $\mathcal{R}$ MCS runs in a social network having $n$ nodes and $m$ edges requires $\mathcal{O}(kmn\mathcal{R})$ influence function evaluations. Hence, applying this algorithm even to medium\mbox{-}sized networks (consisting of only 15000 nodes, though real\mbox{-}life networks are much larger) is unrealistic \cite{chen2009efficient}; in other words, the algorithm is not scalable enough.
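To make the interaction between the greedy loop of Algorithm \ref{Brufo} and Monte Carlo spread estimation concrete, the following sketch implements both under the IC model. It is our own illustrative implementation under assumed conventions (adjacency-list graph, a uniform edge probability $p$, a fixed number of runs), not the original authors' code:

```python
import random

def ic_spread_once(graph, seeds, p, rng):
    """One cascade under the IC model: every newly activated node gets a
    single chance to activate each inactive out-neighbour with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def estimate_sigma(graph, seeds, p, runs, rng):
    # Influence spread approximated by averaging over Monte Carlo runs,
    # as in the evaluation scheme described above.
    return sum(ic_spread_once(graph, seeds, p, rng) for _ in range(runs)) / runs

def basic_greedy(graph, k, p=0.1, runs=200, seed=0):
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
    S = set()
    for _ in range(k):
        best, best_gain = None, float('-inf')
        base = estimate_sigma(graph, S, p, runs, rng)
        for u in sorted(nodes - S):
            gain = estimate_sigma(graph, S | {u}, p, runs, rng) - base
            if gain > best_gain:
                best, best_gain = u, gain
        S.add(best)
    return S

# toy directed graph as adjacency lists (our own example)
toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [], 4: [0]}
print(basic_greedy(toy, 2, p=0.2, runs=100))
```

The $\mathcal{O}(kmn\mathcal{R})$ cost discussed above is visible here directly: the inner loop re-estimates the spread for every candidate node in every iteration, each estimate costing $\mathcal{R}$ cascade simulations.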
\par In spite of these drawbacks, Kempe et al.'s \cite{kempe2003maximizing} study is considered the foundational work on the SIM Problem, and it has triggered a vast amount of research in this direction. In most cases, the main focus has been on reducing the scalability problem incurred by the Basic Greedy Algorithm of Kempe et al.'s work. Some of these efforts resulted in heuristics, in which the obtained solution could be far from the optimum; in a few other studies, the scalability problem was reduced significantly without losing the approximation ratio. Here, we list the algorithms that provide an approximation guarantee, whereas in Section \ref{Sec:HS} we describe the heuristic methods.
\begin{itemize}
\item \textbf{CELF}: To address the scalability problem, Leskovec et al. \cite{leskovec2007cost} proposed a \textit{Cost Effective Lazy Forward} (CELF) scheme that exploits the submodularity property of the social influence function. The key idea in their study is that, for any node, its marginal gain in influence spread in the current iteration cannot be more than its marginal gain in previous iterations. Using this idea, they drastically reduced the number of evaluations of the influence estimation function $\sigma(.)$, which leads to a significant improvement in running time, though the asymptotic complexity remains the same as that of the \textit{Basic Greedy Algorithm} (i.e., $\mathcal{O}(kmn\mathcal{R})$). Results reported in their paper show that CELF can speed up the computation by up to 700 times compared to the Basic Greedy Algorithm on benchmark datasets. This algorithm is also applicable in many other contexts, such as finding informative blogs in a \textit{web blog network}, optimal placement of sensors in a water distribution network for detecting outbreaks, etc.
\item \textbf{CELF++}: Goyal et al. \cite{goyal2011celf++} proposed an optimized version of CELF, named CELF++, again exploiting the submodularity property of the social influence function. For each node $u$ of the network, CELF++ maintains a tuple of the form $<u.mg1, u.prev\_ best, u.mg2, u.flag>$, where $u.mg1$ is the marginal gain in $\sigma(.)$ with respect to the current $\mathcal{S}$; $u.prev\_ best$ is the node with the maximum marginal gain among the users scanned so far in the current iteration; $u.mg2$ is the marginal gain in $\sigma(.)$ for $u$ with respect to $\mathcal{S} \cup \{u.prev\_ best \}$; and $u.flag$ is the iteration number when $u.mg1$ was last updated. The key idea in CELF++ is that, if $u.prev\_ best$ is included in the seed set in the current iteration, then the marginal gain of $u$ with respect to $\mathcal{S} \cup \{u.prev\_ best \}$ need not be recomputed in the next iteration. Reported results show that CELF++ is 35--55\% faster than CELF, though the asymptotic complexity remains the same.
\item \textbf{Static Greedy}: Cheng et al. \cite{cheng2013staticgreedy} developed this algorithm for solving the SIM Problem; it provides both guaranteed accuracy and high scalability. The algorithm works in two stages. In the first stage, $\mathcal{R}$ Monte Carlo snapshots of the social network are taken, where each edge $(uv)$ is kept with its associated diffusion probability $p_{uv}$. In the second stage, starting from the empty seed set, the node having the maximum average marginal gain in influence spread over all sampled snapshots is selected as a seed node; this process continues until $k$ nodes are selected. The algorithm has a running time of $\mathcal{O}(\mathcal{R}m + k\mathcal{R}m^{'}n)$ and a space requirement of $\mathcal{O}(\mathcal{R}m^{'})$, where $\mathcal{R}$ and $m^{'}$ are the number of Monte Carlo samples and the average number of active edges in the snapshots, respectively. Reported results show that Static Greedy reduces the computational time by two orders of magnitude, while achieving better influence spread compared to the Degree Discount Heuristic (DDH), Maximum Degree Heuristic (MDH), and Prefix excluding Maximum Influence Arborescence (PMIA) algorithms (discussed in Section \ref{Sec:HS}).
\item \textbf{Borgs et al.'s Method:} Borgs et al. \cite{borgs2014maximizing} proposed a completely different approach for solving the SIM Problem under the IC Model, using the \textit{reverse reachable sampling technique}. Other than MCS runs, this is a new approach for estimating the influence spread. Their algorithm is randomized, succeeds with probability $\frac{3}{5}$, and has a running time of $\mathcal{O}((m+n)\epsilon^{-3}\log n)$, which improves on the previously best known algorithm having complexity $\mathcal{O}(mnk \cdot POLY(\epsilon^{-1}))$. The algorithm proposed by Borgs et al. is near\mbox{-}optimal, since the lower bound is $\Omega(m+n)$. It works in two phases. In the first phase, a hypergraph ($\mathcal{H}$) is stochastically generated from the input social network. The second phase is concerned with seed set selection: the node with the maximum degree in $\mathcal{H}$ is repeatedly chosen and deleted, along with its incident edges, from $\mathcal{H}$. The $k$\mbox{-}element set obtained in this way is the seed set for diffusion. This work is mostly theoretical and lacks practical experimentation.
\item \textbf{Zhu et al.'s Method}: Zhu et al. \cite{zhu2015better} improved the approximation bound from $(1-\frac{1}{e})$ (which is approximately 0.63) to 0.857. They designed two approximation algorithms: the first works for the problem where the cardinality of the seed set ($\mathcal{S}$) is not restricted, and the second works when there is a restricted upper bound on the cardinality of the seed set. They formulated the influence maximization problem as the optimization problem given below.
\begin{equation}
\underset{\mathcal{S} \subset V(G)}{max} \quad \underset{u \in \mathcal{S}, v \in V(G) \setminus \mathcal{S}}{\sum} p_{uv},
\end{equation}
where $p_{uv}$ is the \textit{influence probability} between the users $u$ and $v$. They converted this optimization problem into a \textit{quadratic integer programming problem} and solved it using the concept of \textit{semidefinite programming} \cite{feige1995approximating}.
\item \textbf{SKIM}: Cohen et al. \cite{cohen2014sketch} proposed a \textit{Sketch\mbox{-}Based Influence Maximization} (SKIM) algorithm, which improves the Basic Greedy Algorithm by ensuring that in every iteration, with sufficiently high probability or in expectation, the node chosen for the seed set has a marginal gain close to the maximum one. The running time of this algorithm is $\mathcal{O}(nl+ \sum_{i=1} \vert E^{i} \vert + m \epsilon^{-2} \log^{2}n)$, where $l$ is the number of snapshots of $G$ and $E^{i}$ is the edge set of $G^{i}$. Reported results show that SKIM has much higher scalability than Basic Greedy, Two\mbox{-}phase Influence Maximization (TIM), Influence Ranking and Influence Estimation (IRIE), etc., without compromising influence spread.
\item \textbf{TIM}: Tang et al. \cite{tang2014influence} developed a \textit{Two\mbox{-}phase Influence Maximization} (TIM) algorithm, which has an expected running time of $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ and succeeds with probability at least $(1-n^{-l})$ for given $k$, $\epsilon$, and $l$. As its name suggests, the algorithm has two phases. In the first phase, TIM computes a lower bound on the maximum expected influence spread among all $k$\mbox{-}sized sets and uses this lower bound to estimate a parameter $\phi$. In the second phase, $\phi$ reverse reachability (RR) set samples are picked from the social network; TIM then derives a $k$\mbox{-}sized seed set that covers the maximum number of RR sets and returns it as the final result. Reported results show that TIM is two times faster than CELF++ and Borgs et al.'s \cite{borgs2014maximizing} method, while achieving the same influence spread. To improve the running time of TIM, Tang et al. \cite{tang2014influence} proposed a heuristic that takes as input all the RR sets generated in an intermediate step of the second phase of TIM, and then uses a greedy approach for the maximum coverage problem to select the seed set. This modified version of TIM is named $\text{TIM}^{+}$. Reported results show that $\text{TIM}^{+}$ is two times faster than TIM.
\item \textbf{IMM}: Tang et al. \cite{tang2015influence} proposed \textit{Influence Maximization via Martingales} (IMM) (a martingale being a stochastic process in which, given the current and preceding values, the conditional expectation of the next value is the current value itself), which achieves an $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ expected running time and returns a $(1-\frac{1}{e} - \epsilon)$ factor approximate solution with probability $(1-n^{-l})$. The IMM Algorithm also has two phases, like TIM and $\text{TIM}^{+}$: the first phase is concerned with sampling RR sets from the given social network, and the second with seed set selection. Unlike in TIM and $\text{TIM}^{+}$, the RR sets generated in the first phase are dependent, because the $(i+1)$\mbox{-}th RR set is generated based on whether the first $i$ RR sets satisfy a stopping criterion. In IMM, the RR sets generated in the sampling phase are reused in the node selection phase, which is not the case in TIM or $\text{TIM}^{+}$. In this way, IMM eliminates a lot of unnecessary computation, which leads to a significant improvement in running time, though the asymptotic complexity remains the same as that of TIM. Reported results conclude that IMM outperforms TIM, $\text{TIM}^{+}$, and IRIE (described in Section \ref{Sec:HS}) in running time while achieving comparable influence spread.
\item \textbf{Stop-and-Stare}: Nguyen et al. \cite{nguyen2016stop} developed the Stop-and-Stare Algorithm (SSA) and its dynamic version DSSA for the \textit{Topic\mbox{-}aware Viral Marketing} (TVM) problem. We do not discuss this problem, as it comes under topic\mbox{-}aware influence maximization; however, this solution methodology can be used for solving the SIM Problem with minor modification. They showed that the number of RR set samples used by their algorithms is asymptotically minimum; as a result, Stop-and-Stare is 1200 times faster than the state\mbox{-}of\mbox{-}the\mbox{-}art IMM algorithm. We do not discuss the results, as they are for the TVM problem and outside the scope of this survey.
\item \textbf{BCT}: Recently, Nguyen et al. \cite{nguyen2017billion} proposed the \textit{Billion-scale Cost-aware Targeted} (BCT) algorithm for solving the \textit{cost-aware targeted viral marketing} (CTVM) problem introduced by them. We do not discuss this problem, as it comes under topic\mbox{-}aware influence maximization; however, this solution methodology can also be adopted for solving the SIM Problem under both the IC and LT Models, with running times of $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ and $\mathcal{O}((k+l)n\log n/ \epsilon^2)$, respectively. We do not discuss the results, as they are for the CTVM Problem and outside the scope of this survey.
\item \textbf{Nguyen et al.'s Method}: Nguyen et al. \cite{nguyen2013budgeted} studied the \textit{Budgeted Influence Maximization} (BIM) Problem described in Section \ref{Sec:VTSSP}. They formulated the following optimization problem in the context of Budgeted Influence Maximization:\\
\begin{eqnarray}
max \quad \sigma(\mathcal{S}) \\
\text{subject to} \quad \underset{u \in \mathcal{S}}{\sum} \mathcal{C}(u) \leq \mathcal{B}
\end{eqnarray}
Now, if $\mathcal{C}(u)=1$ $\forall u \in V(G)$, then it becomes the SIM Problem. To solve this problem, they proposed two algorithms: the first is a modification of the basic greedy algorithm proposed by Kempe et al. \cite{kempe2003maximizing} (Algorithm \ref{Brufo}), and the second is adopted from \cite{khuller1999budgeted}. In the first algorithm, $\forall u \in V(G)\setminus \mathcal{S}$, they computed the increment of influence per unit cost as follows:
\begin{equation}
\delta(u)=\frac{\sigma(\mathcal{S} \cup u)- \sigma(\mathcal{S})} {\mathcal{C}(u)}
\end{equation}
The algorithm then chooses the node $u$ that maximizes this quantity, subject to $\mathcal{C}(\mathcal{S}_{i} \cup u) \leq \mathcal{B}$, for inclusion in the seed set ($\mathcal{S}$). This iterative process continues until no more nodes can be added within the budget. However, this algorithm does not give any constant approximation ratio. It can be modified to obtain a constant approximation ratio, as given in Algorithm \ref{Algo:2}.
\begin{algorithm}[H]
\KwData{Given Social Network $G(V, E, \theta, \mathcal{P})$, cost function $\mathcal{C}: V(G) \longrightarrow \mathbb{Z}^{+}$ some $\mathcal{B} \in \mathbb{Z}^{+}$.}
\KwResult{Seed Set for diffusion $\mathcal{S} \subset V(G)$.}
$S_{1}= \text{result of Naive Greedy}$\;
$S_{max}= \underset{u \in V(G)}{argmax} \quad \sigma(u)$\;
$\mathcal{S}=argmax(\sigma(S_{1}), \sigma(S_{max}))$\;
$return \quad \mathcal{S}$
\caption{Nguyen et al.'s \cite{nguyen2013budgeted} Greedy Algorithm for BIM Problem.}
\label{Algo:2}
\end{algorithm}
\begin{mythm}
Algorithm \ref{Algo:2} guarantees $(1-\frac{1}{\sqrt{e}})$ approximate solution for \textit{BIM Problem}.
\end{mythm}
For the detailed proof of the approximation bound of Algorithm \ref{Algo:2}, readers are referred to the appendix of \cite{nguyen2012budgeted}.
\end{itemize}
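The lazy-forward idea shared by CELF and CELF++ above can be sketched independently of the diffusion model: by submodularity, a marginal gain computed in an earlier iteration upper-bounds the current one, so stale gains can sit in a max-heap and only the top entry ever needs re-evaluation. The sketch below is our own illustration in the spirit of CELF (the `marginal_gain` callback and the toy coverage objective are assumptions, not part of the cited papers):

```python
import heapq

def celf(nodes, marginal_gain, k):
    """Lazy-forward greedy selection in the spirit of CELF.
    marginal_gain(u, S) must return the gain of adding u to seed set S.
    By submodularity, a gain computed earlier upper-bounds the current one,
    so only the heap top needs re-evaluation."""
    S = set()
    # max-heap entries: (negated gain, node, iteration at which gain was computed)
    heap = [(-marginal_gain(u, S), u, 0) for u in nodes]
    heapq.heapify(heap)
    it = 0
    while len(S) < k and heap:
        neg_gain, u, stamp = heapq.heappop(heap)
        if stamp == it:
            # the gain is fresh for this iteration, so u is the true maximiser
            S.add(u)
            it += 1
        else:
            # stale entry: recompute against the current seed set and push back
            heapq.heappush(heap, (-marginal_gain(u, S), u, it))
    return S

# toy submodular objective: a coverage function (our own example)
covers = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1, 6}}

def coverage_gain(u, S):
    covered = set().union(*(covers[v] for v in S)) if S else set()
    return len(covered | covers[u]) - len(covered)

print(celf(list(covers), coverage_gain, 2))  # selects {'a', 'c'}
```

The speed-up over the plain greedy loop comes from the `else` branch firing far less often than a full re-scan of all candidates, while the selected seed set is identical.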
\par We now summarize the presented algorithms. The main bottleneck in Kempe et al.'s \cite{kempe2003maximizing} Basic Greedy Algorithm is the evaluation of the influence spread estimation function over a large number of MCS runs (say, 10000). If we reduce the number of MCS runs directly, the accuracy in computing the influence spread may be compromised. So, the key scope for improvement is to reduce the number of evaluations of the influence estimation function in each MCS run. Both CELF and CELF++ exploit the submodularity property to achieve this goal and hence are found to be faster than the Basic Greedy Algorithm. On the other hand, the Static Greedy algorithm uses all the randomly generated snapshots of the social network from the MCS runs simultaneously; hence, with fewer MCS runs (say, 100) it is possible to achieve equivalent accuracy in spread. These four algorithms can be ordered from maximum to minimum running time as follows: $\text{Basic Greedy} \succ \text{CELF} \succ \text{CELF++} \succ \text{Static Greedy}$.
\par Another scope for improvement over Kempe et al.'s \cite{kempe2003maximizing} work was to estimate the influence spread by some method other than the heavily time\mbox{-}consuming MCS runs. Borgs et al. \cite{borgs2014maximizing} explored this scope by proposing a drastically different approach for spread estimation, namely the reverse reachable sampling technique. The algorithms using this method (such as TIM, $\text{TIM}^{+}$, and IMM) are much faster than CELF++ and also achieve competitive influence spread. Among TIM, $\text{TIM}^{+}$, and IMM, IMM is the fastest both theoretically (in terms of computational complexity) and empirically (in terms of computational time from experimentation), due to the reuse of the RR sets in the node selection phase. To the best of the authors' knowledge, IMM is the fastest algorithm proposed solely for solving the SIM Problem. However, the BCT Algorithm proposed by Nguyen et al. \cite{nguyen2017billion}, originally for solving the CTVM problem, is the fastest solution methodology available in the literature that can be adopted for solving the SIM Problem.
\par From this discussion, it is important to note that the scalability problem incurred by the Basic Greedy Algorithm has been reduced by subsequent research. However, as social network datasets have become gigantic, the development of algorithms with high scalability remains a thrust area. The solution methodologies described so far are summarized in Table \ref{Tab:4}. For algorithms whose complexity analysis was not done by the author(s), we leave that column of the table blank.
\begin{table}
\centering
\begin{tabular}{ | p{2.2 cm} | p{1.8 cm} | p{2.9cm} | p{2cm}| p{1 cm} |}
\hline
\textbf{Name of the Algorithm} & \textbf{Proposed By} & \textbf{Complexity} & \textbf{Applicable For} & \textbf{Model} \\ \hline
\textbf{Basic Greedy} & Kempe et al. \cite{kempe2003maximizing} & $\mathcal{O}(kmn\mathcal{R})$ & \textbf{SIM} & IC \& LT\\
\hline
\textbf{CELF} & Leskovec et al. \cite{leskovec2007cost} & $\mathcal{O}(kmn\mathcal{R})$ & \textbf{SIM} & IC \& LT\\
\hline
\textbf{CELF++} & Goyal et al. \cite{goyal2011celf++} & $\mathcal{O}(kmn\mathcal{R})$ & \textbf{SIM} & IC \& LT \\
\hline
\textbf{Static Greedy} & Cheng et al. \cite{cheng2013staticgreedy} & $\mathcal{O}(\mathcal{R}m + k\mathcal{R}m^{'}n)$ & \textbf{SIM} & IC \& LT \\
\hline
\textbf{Borgs et al.'s Method} & Borgs et al. \cite{borgs2014maximizing} &
$\mathcal{O}(kl^2(m+n)\log^2n/\epsilon^3)$ & \textbf{SIM} & IC \& LT \\
\hline
\textbf{Zhu et al.'s Method} & Zhu et al. \cite{zhu2015better} & - & \textbf{SIM} & IC \& LT \\
\hline
\textbf{SKIM} & Cohen et al. \cite{cohen2014sketch} & $\mathcal{O}(nl+ \sum_{i=1} \vert E^{i} \vert + m \epsilon^{-2} \log^{2}n)$ & \textbf{SIM} & IC \& LT \\
\hline
\textbf{TIM+}, \textbf{IMM} & Tang et al. \cite{tang2014influence}, \cite{tang2015influence} & $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$ & \textbf{SIM} & IC \& LT \\
\hline
\textbf{Stop-and-Stare} & Nguyen et al. \cite{nguyen2016stop} & - & \textbf{TVM} & IC \& LT \\
\hline
\textbf{Nguyen et al.'s Method} & Nguyen et al. \cite{nguyen2013budgeted} & $\mathcal{O}(n^2 (\log n+d) + kn(1+d))$ &\textbf{BIM} & IC \& LT \\
\hline
\textbf{BCT} & Nguyen et al. \cite{nguyen2017billion} & $\mathcal{O}((k+l)(n+m)\log n/ \epsilon^2)$& \textbf{SIM}, \textbf{BIM}, \textbf{CTVM} & IC \\
\hline
\textbf{BCT} & Nguyen et al. \cite{nguyen2017billion} & $\mathcal{O}((k+l)n\log n/ \epsilon^2)$& \textbf{SIM}, \textbf{BIM}, \textbf{CTVM} & LT \\
\hline
\end{tabular}
\caption{Approximation algorithms for the SIM Problem and its variants.}
\label{Tab:4}
\end{table}
\subsection{Heuristic Solutions} \label{Sec:HS}
Algorithms in this category do not provide any approximation bound on influence spread, but have better running time and scalability. Here, we describe the heuristic solution methodologies from the literature.
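Several of the heuristics listed below rank nodes by simple local statistics. As a concrete illustration, the following sketch (our own, with an assumed adjacency-list graph format) implements degree-based selection with the single-discount refinement described under the Degree Discount Heuristic below:

```python
def single_discount(graph, k):
    """Degree-based seed selection with the single-discount idea:
    repeatedly pick the node of highest remaining degree, then discount
    each neighbour's degree by one, since an edge into the seed set no
    longer contributes new influence."""
    # build an undirected adjacency view from the directed adjacency lists
    adj = {}
    for u, nbrs in graph.items():
        adj.setdefault(u, set())
        for v in nbrs:
            adj[u].add(v)
            adj.setdefault(v, set()).add(u)
    degree = {u: len(ns) for u, ns in adj.items()}
    seeds = set()
    for _ in range(min(k, len(adj))):
        u = max((v for v in adj if v not in seeds), key=lambda v: degree[v])
        seeds.add(u)
        for v in adj[u]:
            if v not in seeds:
                degree[v] -= 1
    return seeds

# a small star-plus-path example (our own toy graph)
toy = {0: [1, 2, 3], 3: [4], 4: [5]}
print(single_discount(toy, 2))  # picks {0, 4}: node 3 is discounted after 0
```

Without the discount step, plain maximum-degree selection would pick node 3 (degree 2, tied with 4 but adjacent to the first seed); the discount steers the second pick toward node 4, whose neighbourhood does not overlap the existing seed.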
\begin{itemize}
\item \textbf{Random Heuristic}: This method picks $k$ nodes of the network uniformly at random and returns them as the seed set. In Kempe et al.'s \cite{kempe2003maximizing} experiments, this method was used as a baseline.
\item \textbf{Centrality\mbox{-}Based Heuristics}: Centrality is a well\mbox{-}known measure in network analysis, which signifies how much importance a node has in the network \cite{freeman1978centrality} \cite{landherr2010critical}. Many centrality\mbox{-}based heuristics have been proposed in the literature for the SIM Problem, such as the \textit{Maximum Degree Heuristic} (MDH) (select the $k$ highest degree nodes as seed nodes), the High Clustering Coefficient Heuristic (HCH) (select the $k$ nodes with the highest clustering coefficient value) \cite{wilson2009user} \cite{tabak2014directed}, the High Page Rank Heuristic \cite{Brin98The} (select the $k$ nodes with the highest page rank value), etc.
\item \textbf{Degree Discount Heuristic} (DDH): This is a modified version of MDH, proposed by Chen et al. \cite{chen2009efficient}. The key idea behind this method is the following: for any two nodes $u,v \in V(G)$ with $(uv) \in E(G)$, if $u$ has been selected as a seed node by MDH, then, while counting the degree of $v$, the edge $(uv)$ should not be considered. Hence, due to the presence of $u$ in the seed set, the degree of $v$ is discounted by 1. This method is also named the \textit{Single Discount Heuristic} (SDH). Experimental results of \cite{chen2009efficient} show that DDH achieves better influence spread than MDH.
\item \textbf{SIMPATH}: This heuristic was proposed by Goyal et al. \cite{goyal2011simpath} for solving the SIM Problem under the LT Model. SIMPATH works on the principle of CELF (discussed in Section \ref{Sec:AAPG}); however, instead of using computationally expensive Monte Carlo Simulations for estimating influence spread, SIMPATH uses path enumeration techniques for this purpose. The algorithm has a parameter ($\eta$) for controlling the trade\mbox{-}off between influence spread and running time. Reported results conclude that SIMPATH outperforms other heuristics, such as MDH, Page Rank, and LDAG, with respect to influence spread.
\item \textbf{SPIN}: Narayanam et al. \cite{narayanam2011shapley} studied the SIM Problem and the $\lambda$\mbox{-}Coverage Problem as a co\mbox{-}operative game and proposed the \textit{Shapley Value\mbox{-}Based Discovery of Influential Nodes} (SPIN) Algorithm, which has a running time of $\mathcal{O}(t(n+m)\mathcal{R} + n \log n+ kn+ k\mathcal{R}m)$, where $t$ is the cardinality of the sample collision set considered for the computation of the Shapley value. The algorithm has two main steps: first, generate a ranked list of the nodes based on their Shapley values; then choose the top\mbox{-}$k$ of them and return them as the seed set. Reported results show that SPIN consistently outperforms MDH and HCH.
\item \textbf{MIA} and \textbf{PMIA}: Chen et al. \cite{chen2010scalable} and Wang et al. \cite{wang2012scalable} proposed the \textit{maximum influence arborescence} (MIA) and \textit{prefix excluding MIA} (PMIA) models of influence propagation. They computed the propagation probability from a seed node to a non\mbox{-}seed node by multiplying the influence probabilities of the edges present in the shortest path. The \textit{maximum influence path} is the one having the maximum propagation probability, and they assumed that influence spreads through local arborescences (a directed graph in which, for a root vertex $u$ and any other vertex $v$, there is exactly one directed path from $u$ to $v$) only; hence, the model is called MIA. In the PMIA model, for any seed $s_i$, its maximum influence path to other nodes should avoid all seeds placed before $s_i$. They proposed greedy algorithms for selecting the seed set based on these two diffusion models. Reported results show that both MIA and PMIA can achieve a high level of scalability.
\item \textbf{LDAG}: Chen et al. \cite{chen2010scalable2} developed this heuristic for solving SIM Problem under the LT Model. Influence spread in a \textit{Directed Acyclic Graph} (DAG) is easy to compute. Hence, for computing the influence spread in general social networks, they introduced a \textit{Local Directed Acyclic Graph} (LDAG) based influence model, which computes local DAGs for each node to approximate influence spread. After constructing the DAGs, the basic greedy algorithm proposed by Kempe et al. \cite{kempe2003maximizing} can be used to select the seed nodes. Reported results show that LDAG consistently outperforms the DDH and Page Rank heuristics.
\item \textbf{IRIE}: Jung et al. \cite{jung2012irie} proposed this heuristic based on influence ranking (IR) and influence estimation (IE) for solving SIM Problem under the IC Model and its extension, the IC\mbox{-}N (independent cascade with negative opinion) Model. They developed a global influence ranking method similar to the belief propagation approach. If the top\mbox{-}$k$ ranked nodes are simply selected, there will be an overlap in the influence spread by each node. To avoid this shortcoming, they integrated a simple \textit{influence estimation} technique to predict the additional influence impact of a seed on the other nodes of the network. Reported results show that IRIE can achieve better influence spread than MDH, Page Rank, PMIA etc., while requiring less running time and memory.
\item \textbf{ASIM}: Galhotra et al. \cite{galhotra2015asim} designed this highly scalable heuristic for the SIM Problem. This algorithm assigns to each node $u \in V(G)$ a score value (the weighted sum of the number of simple paths of length at most $d$ starting from that node). ASIM has a running time of $\mathcal{O}(kd(m+n))$, and its idea is quite similar to the SIMPATH Algorithm proposed by Goyal et al. \cite{goyal2011simpath}. Results show that ASIM takes less computational time and consumes less memory than CELF++ and TIM, while achieving comparable influence spread.
\item \textbf{EaSyIm}: Galhotra et al. \cite{galhotra2016holistic} proposed the \textit{opinion cum interaction} (OCI) model, which considers negative opinion as well. Based on the OCI Model, they formulated the \textit{maximizing effective opinion} problem and proposed two fast and scalable heuristics, namely Opinion Spread Influence Maximization (OSIM) and EaSyIm, having a running time of $\mathcal{O}(k \mathcal{D}(m+n))$ for this problem, where $\mathcal{D}$ is the diameter of the graph. Both algorithms work in two phases. In the first phase, each node is assigned a score based on its contribution to the influence spread over all the paths starting at that node. The second phase is the node selection step: the nodes with the maximum score values are selected as seed nodes. Reported empirical results show that OSIM and EaSyIm can achieve better influence spread than $\text{TIM}^{+}$ and CELF++ with less running time.
\item \textbf{Cordasco et al.'s \cite{cordasco2015fast} \cite{cordasco2016active} Method}: Later, Cordasco et al. proposed a fast and effective heuristic method for selecting the target set in an undirected social network \cite{cordasco2015fast} \cite{cordasco2016active}. This heuristic produces an optimal solution for \textit{trees}, \textit{cycles} and \textit{complete graphs}. For real\mbox{-}life social networks, this heuristic performs much better than the other methods available in the literature. They extended this work for directed social networks as well \cite{cordasco2015influence}.
\end{itemize}
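To make the degree\mbox{-}based heuristics above concrete, the following Python sketch implements MDH and the single\mbox{-}discount variant on a toy adjacency dictionary; the graph, variable names and tie\mbox{-}breaking rules are illustrative assumptions, not taken from the cited papers.

```python
# Illustrative sketch of the Maximum Degree Heuristic (MDH) and the
# Single Discount Heuristic (SDH) on a toy undirected graph stored
# as an adjacency dict {node: list-of-neighbours}.

def max_degree_seeds(adj, k):
    """MDH: pick the k highest-degree nodes."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]

def single_discount_seeds(adj, k):
    """SDH: after picking a seed u, discount the degree of every
    neighbour v of u by 1, since the edge (u, v) no longer counts."""
    degree = {v: len(adj[v]) for v in adj}
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=degree.get)
        seeds.append(u)
        for v in adj[u]:
            if v not in seeds:
                degree[v] -= 1  # edge (u, v) is discounted
    return seeds

# Toy network: node 0 is a hub, nodes 1-2 cluster around it.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [3, 5], 5: [4]}
print(max_degree_seeds(adj, 2))       # plain top-2 degrees
print(single_discount_seeds(adj, 2))  # second pick avoids the hub's cluster
```

On this toy graph the discount step steers the second seed away from the hub's own cluster, which is exactly the effect the heuristic is designed to produce.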
There are several other studies as well that focused on developing heuristics. Nguyen et al. \cite{nguyen2013budgeted} proposed an efficient heuristic for solving the BIM Problem. Wu et al. \cite{wu2017two} developed a two\mbox{-}stage stochastic programming approach for solving the SIM Problem; in this study, instead of choosing a seed set of size exactly $k$, the problem is to choose a seed set of size less than or equal to $k$.
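Several of the heuristics above are benchmarked against Kempe et al.'s Basic Greedy with Monte Carlo spread estimation. A minimal sketch of that baseline under the IC model is given below; the toy graph, the uniform edge probability $p$ and the number of simulation runs are illustrative assumptions.

```python
# Sketch of Monte Carlo spread estimation under the Independent Cascade
# (IC) model, plus the hill-climbing seed selection of Basic Greedy.
import random

def ic_spread(adj, seeds, p, runs=2000, rng=random.Random(7)):
    """Average number of activated nodes over `runs` IC simulations."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                # each edge fires independently with probability p
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def basic_greedy(adj, k, p):
    """Add, k times, the node with the largest marginal spread gain."""
    seeds = []
    for _ in range(k):
        best = max((v for v in adj if v not in seeds),
                   key=lambda v: ic_spread(adj, seeds + [v], p))
        seeds.append(best)
    return seeds

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(basic_greedy(adj, 2, p=0.2))
```

The nested Monte Carlo loop is precisely the computational bottleneck that SIMPATH, MIA/PMIA and the other heuristics above try to avoid.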
\par Now, the studies related to heuristic methods are summarized here. Centrality\mbox{-}based heuristics (CBHs) consider the topology of the network only and hence, the obtained influence spread is in most cases considerably less than that of other state\mbox{-}of\mbox{-}the\mbox{-}art methods. However, DDH performs slightly better than other CBHs, as it puts a small restriction on the selection of two adjacent nodes. The application of SIMPATH for seed selection is advantageous, as it has a user\mbox{-}controlled parameter $\eta$ to balance the trade\mbox{-}off between accuracy and running time. SPIN has the advantage that it can be used for solving both the Top\mbox{-}$k$ Node Problem and the $\lambda$\mbox{-}Coverage Problem. MIA and PMIA have better scalability than Basic Greedy. As LDAG works based on the principle of computing influence spread in DAGs, it is faster. As the various heuristics are experimented with on different benchmark data sets, drawing a general conclusion about their performance is difficult. Here, we have summarized some of the important algorithms for solving SIM and related problems in Table \ref{Tab:HS}. For algorithms whose complexity analysis has not been done in the corresponding paper, we have left that column empty.
\begin{table} [h]
\begin{center}
\begin{tabular}{ | p{2cm} | p{2.5 cm} | p{2.7cm} | p{1 cm} |}
\hline
\textbf{Name of the Algorithm} & \textbf{Proposed By} & \textbf{Complexity} & \textbf{Model} \\ \hline
\textbf{SIMPATH} & Goyal et al. \cite{goyal2011simpath} & $\mathcal{O}(kmn\mathcal{R})$ & LT \\
\hline
\textbf{SPIN} & Narayanam et al. \cite{narayanam2011shapley} & $\mathcal{O}(t(n+m)\mathcal{R} + n \log n+ kn+ k\mathcal{R}m)$ & IC \& LT \\
\hline
\textbf{MIA},\textbf{PMIA} & Chen et al. \cite{chen2010scalable}, Wang et al. \cite{wang2012scalable} & - & MIA, PMIA\\
\hline
\textbf{LDAG} & Chen et al. \cite{chen2010scalable2} & $\mathcal{O}(n^2 + k n^2 \log n)$ & LT \\
\hline
\textbf{IRIE} & Jung et al. \cite{jung2012irie} & - & IC \& IC\mbox{-}N \\
\hline
\textbf{ASIM} & Galhotra et al. \cite{galhotra2015asim} & $\mathcal{O}(kd(m+n))$ & IC \\
\hline
\textbf{EaSyIm} & Galhotra et al. \cite{galhotra2016holistic} & $\mathcal{O}(k\mathcal{D}(m+n))$ & OI \\
\hline
\end{tabular}
\end{center}
\caption{Heuristic solutions for SIM Problem}
\label{Tab:HS}
\end{table}
\subsection{Metaheuristic Solution Approaches} \label{Sec:Meta}
Since the early seventies, metaheuristic algorithms have been used successfully to solve optimization problems arising in the broad domain of science and engineering \cite{yi2013three} \cite{yang2014computational}. There is no exception for solving the SIM Problem as well.
\begin{itemize}
\item Bucur et al. \cite{bucur2016influence} solved the SIM Problem using a \textit{genetic algorithm}. They demonstrated that with simple genetic operators, it is possible to find approximate solutions for influence spread within feasible run time. In most cases, the influence spread obtained by their method was comparable with that of the Basic Greedy Algorithm proposed by Kempe et al. \cite{kempe2003maximizing}.
\item Jiang et al. \cite{jiang2011simulated} proposed a \textit{simulated annealing}\mbox{-}based algorithm for solving the SIM Problem under the IC Model. Reported results indicate that their proposed methodology runs 2-3 times faster than the existing heuristic methods in the literature.
\item Tsai et al. \cite{tsai2015genetic} developed the \textit{Genetic New Greedy Algorithm} (\textbf{GNA}) for solving the SIM Problem under the IC Model by combining a genetic algorithm with the new greedy algorithm proposed by Chen et al. \cite{chen2009efficient}. Their reported results conclude that GNA can give 10\% more influence spread than the genetic algorithm alone.
\item Gong et al. \cite{gong2016influence} proposed a \textit{discrete particle swarm optimization algorithm} for solving the SIM Problem. They used the degree discount heuristic proposed by Chen et al. \cite{chen2009efficient} to initialize the seed set and a \textit{local influence estimation (LIE) function} to approximate the two-hop influence. They also introduced a \textit{network-specific local search} strategy for fast convergence of their proposed algorithm. Reported results conclude that this methodology outperforms the state\mbox{-}of\mbox{-}the\mbox{-}art CELF++ with less computational time.
\end{itemize}
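A generic genetic\mbox{-}algorithm skeleton for seed selection, in the spirit of the studies above, can be sketched as follows. The operators, population size and fitness parameters are illustrative assumptions and are not taken from any of the cited papers; fitness is the Monte Carlo IC spread of the candidate seed set.

```python
# Genetic-algorithm sketch for seed selection: candidate solutions are
# k-node seed sets, fitness is Monte Carlo IC spread (toy parameters).
import random

rng = random.Random(1)

def ic_spread(adj, seeds, p=0.2, runs=300):
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def ga_seeds(adj, k, pop_size=10, generations=15):
    nodes = list(adj)

    def repair(cand):
        """Keep candidates at exactly k distinct nodes."""
        cand = list(dict.fromkeys(cand))[:k]
        while len(cand) < k:
            v = rng.choice(nodes)
            if v not in cand:
                cand.append(v)
        return cand

    pop = [rng.sample(nodes, k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: ic_spread(adj, s), reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = repair(a[:k // 2] + b[k // 2:])   # one-point crossover
            if rng.random() < 0.3:                    # point mutation
                child[rng.randrange(k)] = rng.choice(nodes)
            children.append(repair(child))
        pop = survivors + children
    return max(pop, key=lambda s: ic_spread(adj, s))

adj = {i: [(i + 1) % 8, (i + 2) % 8] for i in range(8)}
print(ga_seeds(adj, 2))
```

The `repair` step keeps every chromosome a valid $k$\mbox{-}node seed set after crossover and mutation, which is the main bookkeeping issue when encoding seed sets for evolutionary search.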
After that, several other studies were carried out in this direction \cite{sankar2016learning}, \cite{wang2017discrete}, \cite{liu2017effective} \cite{zhang2017maximizing}. Though there is a large number of metaheuristic algorithms \cite{yang2010nature}, only a few have been used for solving the SIM Problem. Hence, the use of metaheuristic algorithms for solving the SIM Problem and its variants has been largely ignored. Next, we describe the community\mbox{-}based solution methodologies for the SIM Problem.
\subsection{Community\mbox{-}Based Solution Approaches} Most of the real\mbox{-}life social networks exhibit a community structure within it \cite{clauset2004finding}. A community is basically a subset of nodes, which are densely connected among themselves and sparsely connected with the other nodes of the network. In recent years, \textit{community\mbox{-}based solution framework} (\textbf{CBSF}) has been developed for solving SIM Problem.
\begin{itemize}
\item Wang et al. \cite{wang2010community} proposed the \textit{community\mbox{-}based greedy algorithm} for solving the SIM Problem. This method consists of two steps, namely detecting communities based on information propagation and selecting communities for finding influential nodes. This algorithm could outperform the degree discount and random heuristics.
\item Chen et al. \cite{chen2012exploring} \cite{chen2014cim} developed a CBSF for solving the SIM Problem and named it \textbf{CIM}. By exploiting the community structure, they selected a candidate seed set for each community, and from the candidate seed sets they selected the final seed set for diffusion. CIM could achieve better influence spread than some state\mbox{-}of\mbox{-}the\mbox{-}art heuristic methods, such as CDH-Kcut, CDH-SHRINK and maximum degree. \item Rahimkhan et al. \cite{rahimkhani2015fast} proposed a CBSF for solving the SIM Problem under the LT Model and named it \textbf{ComPath}. They used the Speaker\mbox{-}listener Label Propagation Algorithm (SLPA) proposed by Xie et al. \cite{xie2011slpa} for detecting communities and then identified the most influential communities and candidate seed nodes. From the candidate seed set, they selected the final seed set based on the distance among the nodes of the candidate seed set. ComPath could outperform CELF, CELF++, the maximum degree heuristic, the maximum pagerank heuristic and LDAG.
\item Bozorgi et al. \cite{bozorgi2016incim} developed a CBSF for solving the SIM Problem under the LT Model and named it \textbf{INCIM}. Like ComPath, INCIM also uses the SLPA Algorithm for detecting communities. They proposed a seed selection algorithm which computes the influence spread using the algorithm developed by Goyal et al. \cite{goyal2011simpath}. INCIM could outperform some state\mbox{-}of\mbox{-}the\mbox{-}art methodologies like LDAG, SIMPATH, IPA (a parallel algorithm for the SIM Problem proposed by \cite{kim2013scalable}), and the high pagerank and high degree heuristics.
\item Shang et al. \cite{shang2017cofim} proposed a CBSF for solving the SIM Problem and named it \textbf{CoFIM}. In this study, they introduced a diffusion model which works in two phases. In the first phase, the seed set $\mathcal{S}$ is expanded to the neighbor nodes of $\mathcal{S}$, which are usually allocated to different communities. In the second phase, influence propagation within the communities is computed. Based on this diffusion model, they developed an incremental greedy algorithm for seed set selection, which is analogous to the algorithm proposed by Kempe et al. \cite{kempe2003maximizing}. CoFIM could achieve better influence spread than IPA, TIM+, MDH and IMM.
\item Recently, Li et al. \cite{li2018community} proposed a community\mbox{-}based approach for solving the SIM Problem, where the users have a specific geographical location. They developed a social influence\mbox{-}based community detection algorithm using spectral clustering technique and a seed selection methodology by considering community\mbox{-}based influence index. Reported results show that this methodology is more efficient than many state\mbox{-}of\mbox{-}the\mbox{-}art methodologies, while achieving almost the same influence spread.
\end{itemize}
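The common two\mbox{-}phase pattern behind these community\mbox{-}based methods (detect communities, then pick seeds per community) can be sketched as follows. Here the community partition is assumed to be given in advance (e.g., by SLPA or spectral clustering), and the per\mbox{-}community selection rule (highest degree) is a simplified stand\mbox{-}in, not the rule of any specific paper above.

```python
# Phase 2 of a community-based framework: given a community partition,
# pick one seed (the highest-degree node) from each of the k largest
# communities. Phase 1 (community detection) is assumed done elsewhere.

def community_seeds(adj, communities, k):
    largest = sorted(communities, key=len, reverse=True)[:k]
    return [max(c, key=lambda v: len(adj[v])) for c in largest]

# Two triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
communities = [[0, 1, 2], [3, 4, 5]]   # e.g. output of a detection step
print(community_seeds(adj, communities, 2))
```

The appeal of this decomposition is that seed selection is scaled down to community level: each per\mbox{-}community choice looks at only a fraction of the network.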
\par It is important to note that, except for the methodology proposed by Wang et al. \cite{wang2010community}, all these methods are basically heuristics. However, these methods use community detection of the underlying social network as an intermediate step to scale the SIM Problem down to the community level. There are a large number of algorithms available in the literature for detecting communities \cite{fortunato2010community}, \cite{chakraborty2017metrics}. Among them, which one should be used for solving the SIM Problem? How are the quality of community detection and the influence spread related? These questions are largely ignored in the literature.
\subsection{Miscellaneous} In this section, we describe some solution methodologies for the SIM Problem which are very different from the methodologies discussed so far, as well as from one another. It is reported in the literature that in any information diffusion process less than 10\% of nodes are influenced beyond hop count $2$ \cite{goel2012structure}. Based on this phenomenon, Tang et al. \cite{tang2017influence} \cite{tang2018efficient} recently developed a hop\mbox{-}based approach for the SIM Problem. Their methodology also gives a theoretical guarantee on influence spread. Ma et al. \cite{ma2008mining} proposed an algorithm for the SIM Problem which works based on the heat diffusion process. It could produce better influence spread than the Basic Greedy Algorithm. Goyal et al. \cite{goyal2011data} developed a data\mbox{-}based approach for solving the SIM Problem. They introduced the \textit{credit distribution (CD) model}, which uses propagation traces to learn the influence flow pattern for approximating the influence spread. They showed that the SIM Problem under the CD Model is NP\mbox{-}Hard, and reported results show that this model can achieve even better influence spread than the IC and LT Models with less running time. Lee et al. \cite{lee2015query} introduced a query\mbox{-}based approach for solving the SIM Problem under the IC Model. Here, the query is: for activating all the users of a given set $\mathcal{T}$, what should be the seed set? This methodology is intended for maximizing the influence over a particular group of users, which is the case in \textit{target\mbox{-}aware viral marketing}. Zhu et al. \cite{zhu2014maximizing} introduced the \textbf{CTMC\mbox{-}ICM} diffusion model, which is basically a blending of the IC Model with a \textit{Continuous Time Markov Chain}. They studied the SIM Problem under this model and came up with a new centrality metric, \textit{Spread Rank}.
Their reported results show that seed nodes selected based on spread rank centrality can achieve better influence spread than the traditional distance\mbox{-}based centrality measures, such as \textit{degree}, \textit{closeness} and \textit{betweenness}. Wang et al. \cite{wang2017maximizing} proposed the methodology \textbf{Fluidspread}, which works based on the fluid dynamic principle and can reveal the dynamics of the diffusion process. Kang et al. \cite{kang2016diffusion} introduced the notion of diffusion centrality for selecting influential nodes in a social network.
\section{Summary of the Survey and Future Research Directions} \label{Sec:SRD}
Based on the survey of the existing literature presented in Sections \ref{Sec:VTSSP} through \ref{Sec:SolTSS}, we summarize in this section the current research trends and give future directions.
\subsection{Current Research Trends}
\begin{itemize}
\item \textbf{Practicality of the Problem}: Most of the current studies focus on the practical issues of the SIM Problem. One of the major applications of social influence maximization is viral marketing. In this context, influencing a user is beneficial only if that user is able to influence a reasonable number of other users of the network. Recent studies, such as \cite{nguyen2016cost} \cite{nguyen2017billion}, consider \textit{benefit} as another component of the SIM Problem, along with the node selection cost.
\item \textbf{Scalability}: Starting from Kempe et al.'s \cite{kempe2003maximizing} seminal work, scalability has remained an important issue in this area. To reduce the scalability problem, instead of using Monte Carlo simulation\mbox{-}based spread estimation, Borgs et al. \cite{borgs2014maximizing} recently introduced reverse reachable set\mbox{-}based spread estimation. After this work, all the popular algorithms for the SIM Problem, such as TIM, IMM, TIM+ etc., use this concept as an influence spread estimation technique for improving scalability.
\item \textbf{Diffusion Probability Computation}: The TSS Problem assumes that the influence probability between any pair of users is known. However, this is a very unrealistic assumption. Building on some earlier studies in this direction, recent work tries to predict influence probabilities using machine learning techniques \cite{varshney2017predicting}.
\end{itemize}
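The reverse reachable (RR) set idea mentioned above can be sketched compactly: sample many random RR sets under the IC model, then greedily pick the $k$ nodes that cover the most sets. The graph, edge probability and sample count below are illustrative, and the theoretical sample\mbox{-}size bounds of TIM/IMM are omitted.

```python
# Sketch of reverse-reachable (RR) set sampling in the spirit of Borgs
# et al.: an RR set contains the nodes that would reach a uniformly
# random target in a random live-edge graph.
import random

rng = random.Random(3)

def random_rr_set(radj, p):
    """radj maps v -> list of in-neighbours u (i.e. edges u -> v)."""
    target = rng.choice(list(radj))
    rr, frontier = {target}, [target]
    while frontier:
        v = frontier.pop()
        for u in radj[v]:                 # walk edges backwards
            if u not in rr and rng.random() < p:
                rr.add(u)
                frontier.append(u)
    return rr

def ris_seeds(radj, k, p=0.2, samples=5000):
    rr_sets = [random_rr_set(radj, p) for _ in range(samples)]
    seeds, covered = [], set()
    for _ in range(k):
        gain = {}
        for i, rr in enumerate(rr_sets):
            if i in covered:
                continue
            for u in rr:
                gain[u] = gain.get(u, 0) + 1
        if not gain:                      # every RR set already covered
            break
        best = max(gain, key=gain.get)
        seeds.append(best)
        covered |= {i for i, rr in enumerate(rr_sets) if best in rr}
    return seeds

# Reverse adjacency of a small DAG in which node 0 feeds everyone.
radj = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}
print(ris_seeds(radj, 1))
```

The key point is that each RR set is cheap to sample, and the expensive Monte Carlo forward simulations of Basic Greedy are replaced by a single maximum\mbox{-}coverage computation over the samples.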
Though over the last one and a half decades the \textit{TSS Problem} has been studied extensively in both theoretical and applied contexts, to the best of our knowledge some corners of this problem are still only partially investigated or not investigated at all. Here, we list some future research directions from both the problem specification and the solution methodology points of view.
\subsection{Future Directions}
Further research may be carried out in future in and around the TSS Problem of social networks in the following directions:
\subsubsection{Problem Specific}
\begin{itemize}
\item As on\mbox{-}line social networks are formed by rational agents, incentivization is required if a node is selected as a seed node. For practical applications, it is also important to consider what benefit will be obtained (e.g., how many other non\mbox{-}seed nodes become influenced through that node) by activating that node. At the same time, for influence propagation of time\mbox{-}sensitive events (where influencing a person after the event does not make any sense, such as a political campaign before an election, or viral marketing for a seasonal product), consideration of diffusion time is also important. To the best of our knowledge, there is no reported study on the TSS Problem considering all three issues: \textit{cost, benefit, and time}.
\item Most of the studies on the SIM Problem and its variants are under either the IC or the LT diffusion model. However, some other diffusion models have recently been developed, such as the Independent Cascade Model with Negative Opinion (IC\mbox{-}N) \cite{chen2011influence}, the Opinion cum Interaction Model (OI) \cite{galhotra2016holistic}, the Opinion\mbox{-}based Cascading Model (OC) \cite{zhang2013maximizing} etc., which consider negative opinion. The SIM Problem and its different variants can also be studied under these newly developed diffusion models.
\item Most of the studies on the SIM Problem consider the underlying social network, including its influence probabilities, to be static. However, this is not a practical assumption, as most social networks are time varying. Recent studies on the SIM Problem have started considering the temporal nature of the social network \cite{tong2017adaptive}, \cite{zhuang2013influence}. As this has just started, there is a lot of scope to work on the TSS Problem in time\mbox{-}varying social networks.
\item In real\mbox{-}world social networks, users have specific topics of choice. So, one user will be influenced by another user if both of them have similar choices. Taking `topic' into consideration, the spread of influence can be increased, which is known as \textit{topic\mbox{-}aware influence maximization}. Recent studies on influence maximization consider this phenomenon \cite{chen2015online} \cite{li2015real}. The SIM Problem and its variants can be studied in this setting as well.
\end{itemize}
\subsubsection{Solution Methodology Specific}
\begin{itemize}
\item Among all the variants of the TSS Problem in social networks described in Section \ref{Sec:VTSSP}, it is surprising to see that only the SIM Problem is well studied. Hence, solution methodologies developed for the SIM Problem can be modified so that they can be adopted for solving the other variants as well.
\item One of the major issues in the solution methodologies for the SIM Problem is scalability. It is important to observe that the social network used in Kempe et al.'s \cite{kempe2003maximizing} experiments had 10748 nodes and 53000 edges, whereas the recent study of Nguyen et al. \cite{nguyen2017billion} used a social network with $41.7 \times 10^{6}$ nodes and $1.5 \times 10^{9}$ edges. From this example, it is clear that the size of social network data sets is increasing day by day. Hence, developing more scalable algorithms is extremely important to handle large data sets.
\item From the discussion in Section \ref{Sec:Meta}, it is understood that though there are many evolutionary algorithms, only the genetic algorithm, artificial bee colony optimization and discrete particle swarm optimization have been used to date for solving the SIM Problem. Hence, other metaheuristics, such as \textit{ant colony optimization}, \textit{differential evolution} etc., can also be used for this purpose.
\item There are many solution methodologies proposed in the literature. However, which one should be chosen in which situation and for what kind of network structure? To answer this question, a strong experimental evaluation of all the proposed methodologies on benchmark data sets is required. Recently, Arora et al. \cite{arora2017debunking} have done a benchmarking study with the 11 most popular algorithms from the literature, and they have found some contradictions between their own experimental results and the ones reported in the literature. More such benchmarking studies are required to investigate these issues.
\item Most of the algorithms presented in the literature are serial in nature. The issue of scalability in the SIM Problem can be tackled by developing distributed and parallel algorithms. To the best of the authors' knowledge, except for \textbf{dIRIEr} developed by Zong et al. \cite{zong2014dirier}, there is no distributed algorithm existing in the literature. Recently, a few parallel algorithms have been developed for the SIM Problem \cite{kim2013scalable} \cite{wu2016parallel}. So, this is an open area to study the SIM Problem and its variants under parallel and distributed settings.
\item Most of the solution methodologies are concerned with selecting the seeds in one go, before the diffusion starts. In this case, if any one of the selected seeds does not perform up to expectation, then the number of influenced nodes will be less than expected. Considering this case, the framework of multiphase diffusion has recently been developed \cite{dhamal2016information}, \cite{han2018efficient}. Different variants of this problem can be studied in this framework.
\end{itemize}
\section{Concluding Remarks} \label{CR}
In this survey, we first discussed the SIM Problem and its different variants studied in the literature. Next, we reported the hardness results of the problem. After that, we described the major research challenges concerned with the SIM Problem and its variants. Subsequently, based on the approach, we classified the proposed solution methodologies and discussed the algorithms of each category. At the end, we discussed the current research trends and gave future directions. From this survey, we can conclude that the SIM Problem is well studied, though its variants are not, and there is a continuous thirst for developing more scalable algorithms for these problems. We hope that presenting three dimensions of the problem (variants, hardness results and solution methodologies) together will help researchers and practitioners to gain a better understanding of the problem and better exposure to this field.
\section*{Acknowledgement}
The authors want to thank the Ministry of Human Resource Development (MHRD), Government of India, for sponsoring the project: E-business Center of Excellence under the scheme of Center for Training and Research in Frontier Areas of Science and Technology (FAST), Grant No. F.No.5-5/2014-TS.VII.
Surface plasmon polaritons (SPPs) are electromagnetic waves that travel along the interface of a metal and a dielectric. They are bound to the interface due to the mixing of the electromagnetic field oscillation with the oscillation of free electrons in the metal\cite{maier:plasmonics}. Their unique properties draw interest from a variety of fields\cite{stockman:39}, such as nanophotonics\cite{ozbay:189}, solar cells\cite{atwater:205}, biological imaging and sensing\cite{homola:3,chung:10907}. The fundamental property of plasmonic excitations is the scaling of the plasma resonance frequency with the square root of the electron density: $\omega_p\propto \sqrt{n}$. To observe plasma resonances at Terahertz (THz) frequencies, the electron density typically needs to be lowered down to the $n\sim 10^{15}-10^{16}$ cm$^{-3}$ range, which is achieved in undoped or lightly doped semiconductors. For example, THz-frequency SPPs in doped Si were found responsible for extraordinary transmission of subwavelength hole arrays\cite{gomezrivas:201306,azad:141102}. Alternatively, THz-frequency SPPs can exist on microscopically structured or corrugated metal surfaces\cite{williams:175,zhu:6216,fernandez:233104,maier:176805,ng:1059}.
Indium antimonide (InSb) is a semiconductor well suited for the study of THz SPPs. It combines a temperature-tunable electron density with very high electron mobility, which is another requirement for the existence of well-defined SPPs. Because of the low bandgap of InSb ($E_g=170$ meV), its intrinsic electron density at room temperature is about $10^{16}$ cm$^{-3}$, which corresponds to the bulk plasma frequency ($\omega_p^2=ne^2/\epsilon_0\epsilon_{\infty}m^*$) of $\omega_p/2\pi\sim1.8$ THz. Propagating and localized surface plasmons on InSb surfaces and microstructures have been studied actively in recent years\cite{berg:55,gomezrivas:847,isaac:113411,zhu:3129,hanham:226,jung:1007,deng:128}, and the potential for THz sensing and spectroscopy has been pointed out\cite{isaac:241115}.
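The quoted bulk plasma frequency can be checked with a back\mbox{-}of\mbox{-}the\mbox{-}envelope Drude calculation. The values $\epsilon_\infty=15.7$ and $m^*=0.014\,m_e$ used below are standard room\mbox{-}temperature InSb literature values assumed for illustration, not parameters quoted in this paper.

```python
# Check of the InSb bulk plasma frequency from the Drude expression
# omega_p^2 = n e^2 / (eps0 * eps_inf * m*), for n = 1e16 cm^-3.
import math

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e  = 9.1093837015e-31    # electron mass, kg

n       = 1e16 * 1e6       # 1e16 cm^-3 converted to m^-3
eps_inf = 15.7             # assumed InSb high-frequency permittivity
m_eff   = 0.014 * m_e      # assumed InSb effective mass

omega_p = math.sqrt(n * e**2 / (eps0 * eps_inf * m_eff))
f_p = omega_p / (2 * math.pi)
print(f"plasma frequency ~ {f_p/1e12:.2f} THz")
```

With these assumed material constants the result lands close to the $\sim1.8$ THz figure quoted above, consistent with the $\omega_p\propto\sqrt{n}$ scaling.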
In this work, we show that a microscopically thin layer of InSb ($2-8\:\mu\mathrm{m}$ thick) combined with a metallic surface grating, Fig.~\ref{fig:geom}, can serve as a sensitive THz SPP sensor. The micrometer-scale thickness allows the structure to operate in normal-incidence transmission mode, while the surface grating couples the incident THz radiation to the standing SPP waves at the InSb interface. This transmission-mode operation sets this structure apart from the recent THz sensing schemes based on wave propagation along the interface\cite{isaac:241115,ng:1059} or in a waveguide\cite{nagel:s601}. Our computational modeling of the THz optical properties of the grating/InSb structure shows the presence of two strong SPP resonances in the transmission spectrum. This THz plasmonic response is highly sensitive to the dielectric environment at the InSb interface, and we propose a sensing modality that takes advantage of the inherent tunability of InSb SPP properties by temperature and/or doping.
\begin{figure}[ht!]
\centering\includegraphics[width=7cm]{figure1}
\caption{\label{fig:geom}The gold grating and InSb layer structure is taken to stretch indefinitely in the $x$ and $y$ directions. The THz wave impinges straight down along the negative $z$ direction and is polarized along $x$.}
\end{figure}
Figure~\ref{fig:geom} shows the basic plasmonic structure consisting of a gold grating on the surface of a 5$\:\mu\mathrm{m}$ thick InSb sheet. We take the grating, with a period $d=60\:\mu\mathrm{m}$, to be periodic and to continue indefinitely in the $x$ direction, and we take the whole structure to be infinitely long in the $y$ direction. The THz wave is incident straight down along the negative $z$ direction. The gold grating strips are 200 nm thick and $30\:\mu\mathrm{m}$ wide with $30\:\mu\mathrm{m}$ gaps between them. These dimensions place the grating comfortably within the range of conventional photolithographic fabrication methods. As the grating couples the incident THz wave to the SPP modes at the InSb interfaces, its periodicity also determines the fundamental wavevector $\beta_0$ of the standing SPP modes: $\beta_0=2\pi/d$. Thus, we can model the dispersion of the observed SPP modes by varying the grating period $d$; we keep the strip widths and the strip gaps equal to each other in all simulations.
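For orientation, the single\mbox{-}interface (air/InSb half\mbox{-}space) SPP dispersion can be evaluated against the grating wavevector $\beta_0=2\pi/d$. The lossless Drude numbers below are assumed for illustration, and this flat\mbox{-}interface picture is only a rough guide to the full grating/thin\mbox{-}film structure simulated here.

```python
# Single-interface SPP dispersion for an air/InSb boundary (lossless
# Drude sketch): beta(f) = (2*pi*f/c) * sqrt(eps_m/(eps_m + 1)),
# valid below the surface-plasmon asymptote f_sp.
import math

c       = 2.998e8          # speed of light, m/s
eps_inf = 15.7             # assumed InSb high-frequency permittivity
f_p     = 1.8e12           # bulk plasma frequency, Hz

def eps_m(f):
    """Lossless Drude permittivity of InSb."""
    return eps_inf * (1 - (f_p / f)**2)

def beta(f):
    """SPP wavevector (rad/m); use only below the asymptote."""
    em = eps_m(f)
    return (2 * math.pi * f / c) * math.sqrt(em / (em + 1))

f_sp  = f_p * math.sqrt(eps_inf / (eps_inf + 1))  # asymptote
d     = 60e-6
beta0 = 2 * math.pi / d                           # grating wavevector
for f in (0.5e12, 1.0e12, 1.5e12, 1.7e12):
    print(f"{f/1e12:.1f} THz: beta/beta0 = {beta(f)/beta0:.2f}")
print(f"surface-plasmon asymptote f_sp ~ {f_sp/1e12:.2f} THz")
```

In this simplified picture $\beta$ approaches $\beta_0$ only close to $f_{sp}$; the thin\mbox{-}film resonances reported below deviate from this flat half\mbox{-}space estimate, as discussed in the text.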
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure2}
\caption{\label{fig:trans1}(a) Transmission of the $5\:\mu\mathrm{m}$-thick InSb layer with (thick solid lines) and without (dashed lines) the gold grating as a function of the bulk plasma frequency, which is indicated by the labels above each spectrum. The light-gray line shows the transmission of the structure with $\omega_p=0$. (b) Transmission of the $2\:\mu\mathrm{m}$-thick (thick solid lines) and $8\:\mu\mathrm{m}$-thick (dashed lines) InSb layer with gold grating as a function of the bulk plasma frequency. The light-gray line shows the transmission of a free-standing grating. The grating period is $d=60\:\mu\mathrm{m}$ for all curves in (a) and (b). }
\end{figure}
\section{Results and discussion}
\subsection{Plasmonic response of the gold grating/InSb structure}
THz optical properties of the grating/InSb structure were modeled using commercial software packages COMSOL Multiphysics and CST Microwave Studio. Both packages returned consistent simulation results. The simulation model was built in 2D to maximize the computation speed, by taking the structure to be infinite and translationally invariant in the $y$ direction, Fig.~\ref{fig:geom}. The simulations were performed by the COMSOL RF module's stationary solver in the frequency domain. We achieved identical results with either the periodic boundary conditions or the perfect electric conductor (PEC) boundary conditions. The material above and below the structure is air (or vacuum). The incident THz electric field is polarized in the $x$ direction. The complex dielectric permittivity of both InSb and gold is approximated by the Drude model, Eq. (\ref{eq:1}). The gold Drude parameters are $\epsilon_\infty=9.1$, $\omega_p/2\pi=728$ THz, and $\gamma=1.1\times10^{14}$ rad/s.
Figure~\ref{fig:trans1}(a) shows the transmission of the $5\:\mu\mathrm{m}$ InSb layer with and without the gold grating. The overall transmission exhibits a broad peak in both cases, with the peak position roughly following the bulk plasma frequency. This band-pass structure of the overall transmission in this frequency range results from the Fabry-Perot effect between the top and bottom surfaces of the InSb layer and from the Drude response of the conduction electrons. The low transmission on the zero-frequency side of the peak is due to the reflection and absorption of the THz wave according to the Drude model. The drop-off in transmission on the high-frequency side of the peak results from the onset of the first Fabry-Perot minimum. The peak position is roughly determined by the plasma frequency, as it sets the width of the low-transmission frequency window of the electron Drude response. The transmission peaks are narrower for thicker InSb layers, because the low-transmission window on the zero-frequency side is sharper and deeper, while the separation between Fabry-Perot fringes is smaller, resulting in a faster transmission drop-off on the high-frequency side.
In addition to the broad peak, two clear resonance dips appear in the transmission spectrum in Fig.~\ref{fig:trans1}(a), as compared to the bare $5\:\mu\mathrm{m}$-thick InSb layer without the gold grating. The position of the resonant dips strongly depends on the bulk plasma frequency $\omega_p$, which gives the first indication that these resonances are associated with standing SPPs at the InSb/air interfaces. Figure~\ref{fig:trans1}(b) shows the dependence of the observed SPP resonances on the InSb thickness. As the thickness gets smaller, we observe that: (i) The resonances get stronger. (ii) The separation between the resonances increases. (iii) The lower-frequency resonance shifts to a lower frequency. As we show below, this phenomenology is consistent with the theoretical properties of the SPPs in the model of air/InSb/air trilayer without the metal grating, while also exhibiting significant differences with this simplified model.
In the air/InSb/air trilayer theory, the propagating SPP modes exist at each of the two InSb/air interfaces. For sufficiently thick InSb, these SPPs are completely independent and identical. They are bound to the interface, with the decay lengths $z_1$ and $z_2$ in InSb and air perpendicular to the interface. The inverse quantities $k_i=1/z_i$ ($i=1,2$) are the imaginary wavevectors in the directions perpendicular to the interface. As the InSb thickness gets small, the SPPs at the two interfaces interact and split into even and odd modes based on the parity of the $E_x(z)$ field. In the interaction regime, the SPP dispersion relations are given by\cite{maier:plasmonics}
\begin{equation}
k_i^2=\beta^2-k_0^2\epsilon_i
\label{eq:beta}
\end{equation}
and by
\begin{eqnarray}
\label{eq:odd}
\tanh k_1a = -\frac{k_2\epsilon_1}{k_1\epsilon_2},\\
\label{eq:even}
\tanh k_1a = -\frac{k_1\epsilon_2}{k_2\epsilon_1},
\end{eqnarray}
where $k_0=\omega/c$, $\beta$ is the SPP wavevector along the interface, $\epsilon_1$ and $\epsilon_2$ are the dielectric permittivities of InSb and air, and $a$ is the thickness of the InSb layer. The complex dielectric permittivity of InSb is approximated by the Drude model
\begin{equation}
\epsilon(\omega)=\epsilon_{\infty}(1-\frac{\omega_p^2}{\omega^2+i\omega\gamma}),
\label{eq:1}
\end{equation}
where $\omega_p$ is the bulk plasma frequency, $\gamma$ is the electron scattering rate, and $\epsilon_{\infty}$ is the background dielectric constant. We use the low-temperature InSb scattering rate of $\gamma=0.3\times 10^{12}$ rad/s, which was confirmed to be relatively independent of the plasma frequency in the range of interest. The InSb background dielectric constant is $\epsilon_{\infty}=15.6$.
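The Drude permittivity of Eq.~(\ref{eq:1}) is straightforward to evaluate numerically. The sketch below is our own illustration, not code from the original simulations; it assumes the quoted plasma frequencies are $\omega_p/2\pi$, so they are converted to angular frequency before use.

```python
import math

def drude_eps(omega, omega_p, gamma=0.3e12, eps_inf=15.6):
    """Drude permittivity:
    eps(omega) = eps_inf * (1 - omega_p**2 / (omega**2 + 1j*omega*gamma)).
    All frequencies are angular (rad/s); the defaults are the
    low-temperature InSb values quoted in the text."""
    return eps_inf * (1.0 - omega_p**2 / (omega**2 + 1j * omega * gamma))

# InSb with a bulk plasma frequency of 1.44 THz (assumed to be omega_p / 2pi)
wp = 2 * math.pi * 1.44e12
eps_below = drude_eps(0.9 * wp, wp)   # metallic regime: Re(eps) < 0
eps_above = drude_eps(2.0 * wp, wp)   # dielectric regime: Re(eps) > 0
```

Below the plasma frequency the real part of $\epsilon$ is negative, which is the metallic regime that supports the SPP modes discussed above.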
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure3}
\caption{\label{fig:disp}Open circles - computational frequency vs. wavevector $\omega(\beta)$ dispersion of the resonances on the grating/InSb structure for different InSb thicknesses. Solid lines - thickness-dependent theoretical dispersion of SPP modes in the air/InSb/air trilayer structure. The single interface line shows the theoretical SPP dispersion on a single InSb/air interface without a grating, where both air and InSb assume infinite thickness. The bulk plasma frequency of InSb is $\omega_p=1.44$ THz.}
\end{figure}
Equations (\ref{eq:odd}) and (\ref{eq:even}) describe the SPP modes of odd and even parity, respectively\cite{maier:plasmonics}. Figure~\ref{fig:disp} shows the theoretical dispersion curves calculated from Eqs. (\ref{eq:beta}-\ref{eq:even}) for the plasma frequency $\omega_p=1.44$ THz and several thicknesses of the InSb layer. The dispersion of even modes was calculated by solving the system of Eqs. (\ref{eq:beta}) and (\ref{eq:even}). The odd mode dispersion was calculated by setting $\gamma=0$ in InSb and assuming that the wavevector $\beta\gg\omega_p/c$. In this case, we can take $k_1\simeq k_2 \simeq \beta$ and the odd mode dispersion for large wavevectors can be calculated from
\begin{equation}
\omega=\frac{\omega_p}{\left[ 1+(\epsilon_2/\epsilon_\infty)\tanh(a\beta)\right]^{1/2} },
\label{eq:oddlargeb}
\end{equation}
where $a$ is the thickness of the InSb slab. In Fig.~\ref{fig:disp}, we used the effective thickness $a$ equal to twice the actual InSb thickness. This is necessitated by the spatial distribution of the observed electric field $E_z$ in the higher SPP resonance, as we will discuss below.
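Equation~(\ref{eq:oddlargeb}) is easy to evaluate directly. The following numerical sketch is our own (with $\epsilon_2=1$ for air and the InSb $\epsilon_\infty$ from the text); it makes explicit that the mode frequency falls from $\omega_p$ at small $\beta$ toward the asymptote $\omega_p/\sqrt{1+\epsilon_2/\epsilon_\infty}$ at large $\beta$.

```python
import math

def odd_mode_omega(beta, a, omega_p, eps_inf=15.6, eps2=1.0):
    """Large-wavevector odd-mode dispersion:
    omega = omega_p / sqrt(1 + (eps2/eps_inf) * tanh(a * beta))."""
    return omega_p / math.sqrt(1.0 + (eps2 / eps_inf) * math.tanh(a * beta))

# limits of the dispersion curve (omega_p normalized to 1)
w_small = odd_mode_omega(1e-6, 1.0, 1.0)   # beta -> 0:   omega -> omega_p
w_large = odd_mode_omega(1e3, 1.0, 1.0)    # beta -> inf: omega -> asymptote
```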
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure4}
\caption{\label{fig:field}(a),(b) Spatial distribution of the electric field amplitude for the lower (1.36 THz) and higher (1.42 THz) SPP modes in the $5\:\mu\mathrm{m}-$thick InSb with grating period $d=60\:\mu\mathrm{m}$. The bulk plasma frequency is $\omega_p=1.44$ THz. The color bar indicates the relative amplitude scale using arbitrary units. The InSb layer fills the vertical ($-5\:\mu\mathrm{m}$,$0\:\mu\mathrm{m}$) interval. The gold strips cover the horizontal ($0\:\mu\mathrm{m}$,$15\:\mu\mathrm{m}$) and ($45\:\mu\mathrm{m}$,$60\:\mu\mathrm{m}$) intervals at the top surface of InSb. (c),(d) The spatial distribution of the $E_x$ component of the electric field for the lower and higher SPP modes. (e),(f) The spatial distribution of the $E_z$ component of the electric field for the lower and higher SPP modes.}
\end{figure}
We compare the theoretical dispersion curves with the resonance frequencies found in our numerical simulation, where the SPP wavevector is determined by the grating period $d$. By computing the transmission of our structure with different grating periods $d$, we can determine the dispersion $\omega(\beta)$ of the observed SPP resonances. The examination of the electric field distribution at the two resonance frequencies shows that the field's spatial dependence can be represented as a Fourier series with the fundamental wavevector $\beta_0=2\pi/d$. The largest-amplitude term in the series for the lower-frequency SPP resonance is the second harmonic $2\beta_0$, which we use to plot the computationally determined dispersion in Fig.~\ref{fig:disp}. The largest-amplitude term in the series for the high-frequency resonance is the fundamental wavevector $\beta_0$, and it is used to plot the computational dispersion of this resonance. Figure~\ref{fig:disp} shows a good agreement between the theoretical dispersion (solid lines) and the computationally observed one (symbols), indicating that the trilayer theory correctly describes the main phenomenology of the observed SPP resonances. Therefore, we assign the two strong absorption resonances in the spectra of Fig.~\ref{fig:trans1} as the even and odd SPP modes that exist on the surfaces of the air/InSb/air trilayer.
The separation between the even and odd SPP modes decreases with the bulk plasma frequency, as is evident in Fig.~\ref{fig:trans1}. As the plasma frequency is decreased, the two modes merge into one resonant line. At $\omega_p=0.62$ THz, only a single line appears in the spectra because the even and odd modes have merged into one. When the separation between the even and odd modes is large and the resonances are strong, a weak resonant dip appears in the spectra between the main even and odd SPP modes, Fig.~\ref{fig:trans1}. It can be interpreted as an overtone of the lower even SPP mode.
Figures~\ref{fig:field}(a,b) show the electric field amplitude distribution at both SPP frequencies in the structure with $\omega_p=1.44$ THz, while Figs.~\ref{fig:field}(c,d) and Figs.~\ref{fig:field}(e,f) show the corresponding spatial distributions of the $x$-component ($E_x$) and the $z$-component ($E_z$) of the electric field. In the lower-frequency (1.36 THz) SPP mode, the electric field is concentrated in the gap between the gold strips of the grating. The $x$-component shows the same sign, Fig.~\ref{fig:field}(c), and the $z$-component shows the opposite sign, Fig.~\ref{fig:field}(e), at the upper and lower InSb interface. Such behavior corresponds to the even mode in the simple trilayer theory and further justifies our assignment of the lower SPP resonance as the even mode. The electric field of the higher-frequency (1.42 THz) SPP mode is strongest under the gold strips inside InSb, which is most evident for the $z$-component $E_z$ in Fig.~\ref{fig:field}(f). The spatial distribution of the field $E_z$ under the gold strips is consistent with the field of the odd mode in an air/InSb/air structure that is twice as thick. That is, if we ``reflect'' the field $E_z$ underneath the gold strip about the gold/InSb interface and into the upper half-space, we will obtain the correct field distribution of the odd mode of the air/InSb/air trilayer with twice the thickness. For this reason, we used the effective thickness of 2$a$ in Eq. (\ref{eq:oddlargeb}) to determine the theoretical odd mode dispersion.
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure5}
\caption{\label{fig:cut} (a),(b) The electric field components $E_x(x)$ and $E_z(x)$ along the lower InSb surface in the $5\:\mu\mathrm{m}$-thick InSb structure at the lower (1.36 THz) and higher (1.42 THz) SPP modes. The grating period is $d=60\:\mu\mathrm{m}$ ($\beta_0=2\pi/d$) and bulk plasma frequency is $\omega_p=1.44$ THz. The horizontal black lines indicate the position of the grating gold strips on the upper surface of the InSb wafer. (c),(d) Amplitudes of the Fourier series expansion for the electric fields $E_x(x)$ and $E_z(x)$ shown in (a) and (b). In (c), only the cosine terms are shown, and the sine terms are all zero. In (d), only the sine terms are shown, and the cosine terms are all zero.}
\end{figure}
To gain further insight into the nature of the SPP modes in our structure, we plot the variation of the fields $E_x$ and $E_z$ along the $x$ direction in Figs.~\ref{fig:cut}(a,b), where $\omega_p=1.44$ THz. Since the grating is periodic in this direction, both fields can be written as Fourier series in terms of the fundamental wavevector $\beta_0=2\pi/d$, where $d$ is the grating period. At the edge of the unit cell $E_z=0$; therefore, $E_z(x)$ only contains sine terms in the Fourier series. Similarly, $E_x(x)$ reaches a local maximum (or minimum) value at the edge, so it only contains cosine terms. Figures~\ref{fig:cut}(c,d) show the relative amplitudes of the main terms in the Fourier series expansions of $E_x(x)$ and $E_z(x)$. The largest-amplitude term in the lower SPP mode is the second harmonic with the wavevector $2\beta_0$, with terms up to the third order being significant in both cases. We used the second harmonic wavevector to plot the even mode dispersion in Fig.~\ref{fig:disp}. The opposite sign of the first and second harmonic amplitudes of the lower SPP mode (red line in Fig.~\ref{fig:cut}(c)) indicates that the first and second harmonic cosine terms add constructively in the gap between the gold strips, while they add destructively underneath the strips. Similarly, in Fig.~\ref{fig:cut}(d), the first and second harmonic sine terms of the lower SPP mode (red line) add constructively in the gap and destructively underneath the strips. At off-resonance frequencies, the electric field $E_z(x)$ becomes negligible. The simulation shows that the absorption resonances appear in the transmission spectrum only when the incident field excites a high-amplitude $E_z(x)$ component, which is another clear indication of the SPP origin of the absorption.
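The harmonic amplitudes plotted in Figs.~\ref{fig:cut}(c,d) follow from projecting the sampled fields onto cosines and sines of the grating harmonics. Below is our own minimal sketch of such a decomposition, applied to a synthetic test profile rather than the actual simulated fields:

```python
import numpy as np

def grating_harmonics(field, d, n_max=4):
    """Cosine and sine amplitudes of a d-periodic field sampled uniformly
    over one grating period; the fundamental wavevector is beta0 = 2*pi/d."""
    n_pts = len(field)
    x = np.arange(n_pts) * d / n_pts          # one period, endpoint excluded
    beta0 = 2.0 * np.pi / d
    cos_amp = [2.0 / n_pts * np.sum(field * np.cos(n * beta0 * x))
               for n in range(1, n_max + 1)]
    sin_amp = [2.0 / n_pts * np.sum(field * np.sin(n * beta0 * x))
               for n in range(1, n_max + 1)]
    return cos_amp, sin_amp

# Synthetic profile with first and second harmonics of opposite sign,
# mimicking the E_x(x) behavior described in the text
d = 60.0                                      # grating period, micrometers
x = np.arange(1024) * d / 1024
ex = -0.4 * np.cos(2 * np.pi * x / d) + 1.0 * np.cos(4 * np.pi * x / d)
cos_amp, sin_amp = grating_harmonics(ex, d)
```

On a uniform grid, this rectangle-rule projection recovers the harmonic amplitudes exactly (up to floating-point error) for harmonics below the Nyquist limit.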
\subsection{Potential for sensor applications}
We explored the potential of our grating/InSb structure for sensing applications. Typical sensors are based on the modulation of the resonant plasmonic response by the dielectric environment\cite{chung:10907}. We computed the THz transmission of the $2\:\mu\mathrm{m}$ grating/InSb structure in the presence of a thin $1\:\mu\mathrm{m}$ dielectric layer with THz refractive index $n=2$ at the lower InSb/air interface, and found a sharp difference in the resonant transmission response, Fig.~\ref{fig:6}. The structure with the dielectric layer exhibits the same two SPP resonances. However, the lower SPP resonance exhibits a clear and significant shift to a lower frequency. The higher SPP resonance does not change its frequency, but becomes dramatically stronger, with the area under the resonance peak increasing by a factor of 4. The on-resonance transmitted THz intensity is about 15\% lower for the structure with the dielectric layer. Our results demonstrate that the grating/InSb plasmonic structure can act as a sensitive probe of THz-frequency dielectric environment with sensitivity down to sub-$\:\mu\mathrm{m}$ dielectric layers.
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure6}
\caption{\label{fig:6}Transmission of a $2\:\mu\mathrm{m}$ grating/InSb structure with (red) and without (black) a $1\:\mu\mathrm{m}$ dielectric layer at the bottom InSb surface. The THz refractive index of the dielectric layer is $n=2$. InSb bulk plasma frequency is $\omega_p=1.86$ THz. The grating period is $d=60\:\mu\mathrm{m}$.}
\end{figure}
The plasmonic sensor sensitivity is usually quantified as $\eta=\Delta\lambda/\Delta n$ (in nm per refractive index unit, nm/RIU) or as $\eta=\Delta f/\Delta n$ (in Hz/RIU). For the lower SPP resonance in our structure, we get $\eta \approx7200$ nm/RIU (or $\eta\approx0.06$ THz/RIU). This sensitivity is lower than that of the sensors based on guided surface plasmon waves, such as those propagating on spoof THz plasmon surfaces\cite{ng:2195} (0.5 THz/RIU). The higher sensitivity of the sensors based on the propagating SPPs\cite{ng:2195,ng:1059} results from an increased interaction length of the SPP wave and the analyte along the guiding structure. That sensitivity is not matched in our sensor, as it is based on normal-incidence transmission geometry. However, the simpler optical architecture that does not require the coupling of free-space radiation to the guided SPP modes and the fabrication that relies on conventional photolithography may make our sensor competitive in many application areas.
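The two quoted sensitivity figures are related through the free-space relation $\lambda=c/f$. The following back-of-the-envelope conversion is ours; the $\sim 1.6$ THz center frequency is an assumed value for the lower SPP resonance, not one stated explicitly above:

```python
def freq_shift_to_wavelength_shift(delta_f, f0, c=2.998e8):
    """Free-space wavelength shift (m) corresponding to a resonance
    frequency shift delta_f (Hz) at center frequency f0 (Hz):
    delta_lambda = c/(f0 - delta_f) - c/f0 = c*delta_f / (f0*(f0 - delta_f))."""
    return c * delta_f / (f0 * (f0 - delta_f))

# Delta f = 0.06 THz per refractive index unit, near an assumed ~1.6 THz resonance
eta_nm_per_riu = freq_shift_to_wavelength_shift(0.06e12, 1.6e12) * 1e9
```

With these assumptions the conversion lands within a few percent of the quoted $\eta\approx7200$ nm/RIU.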
While the intimate link between the dielectric environment and the surface plasmon resonance enables the delicate sensitivity of plasmonic sensors, their selectivity often needs to be engineered by chemically functionalizing the plasmonic structures to be receptive only to a specific analyte\cite{homola:3,chung:10907,stockman:39}. By contrast, THz plasmonic sensors may allow us to implement both sensitivity and selectivity using the inherent properties of THz SPPs, because many molecular materials exhibit very specific spectroscopic fingerprints at THz frequencies, mostly of vibrational origin\cite{melinger:a79,ng:1059,brown:061908,walther:261107}. Molecular detection and marker-free biosensing schemes based on such THz fingerprints have been proposed\cite{melinger:a79,ng:1059,walther:261107,nagel:s601}.
\begin{figure}[ht!]
\centering\includegraphics[width=10cm]{figure7}
\caption{\label{fig:7}(a) Interaction of the lower-frequency SPP mode with the lactose vibrational resonance at 1.37 THz (the arrow). The lower SPP mode is shown for the bulk plasma frequencies of $\omega_p=1.65$ THz, $1.79$ THz, and $1.91$ THz. The red lines show the transmission without the vibrational resonance (resonance strength set to zero). The blue lines show the transmission with the vibrational resonance. The InSb thickness is $2\:\mu\mathrm{m}$ and the grating period is $d=120\:\mu\mathrm{m}$. (b) Transmission of the grating/InSb structure with the $1\:\mu\mathrm{m}$ lactose layer with (blue line) and without (red line) the vibrational resonance when the SPP modes are detuned far away from 1.37 THz. The arrows indicate the lower and higher SPP modes and the lactose resonance at 1.37 THz.}
\end{figure}
We examine the potential of our grating/InSb structure for THz molecular selectivity. We use lactose ($\alpha$-lactose monohydrate) as a model molecular material, as it exhibits a strong vibrational resonance at 1.37 THz\cite{ng:1059,brown:061908,walther:261107}. Figure~\ref{fig:7} shows the computed transmission of the grating/InSb structure with a $1\:\mu\mathrm{m}$ thick lactose layer on the bottom surface. We model the lactose optical properties in the Lorentz model using the background dielectric constant $\epsilon_{\infty}=4$ (background refractive index $n=2$) and a realistic Lorentz oscillator strength\cite{brown:061908}. We use the Lorentz model damping rate of $0.1\times10^{12}$ rad/s, which is exhibited by thin polycrystalline molecular films at low temperature\cite{melinger:a79}. For comparison, Fig.~\ref{fig:7} also shows the transmission of the structure when the Lorentz oscillator strength is set to zero. When the SPP resonance is tuned away from 1.37 THz vibrational frequency, the vibrational resonance results in a 6\% drop in transmitted intensity, Fig.~\ref{fig:7}(b). When the lower SPP frequency is tuned in resonance with the vibrational frequency, the SPP absorption line splits into two absorption lines, with a significant increase in transmitted intensity, Fig.~\ref{fig:7}(a). On resonance, the transmitted intensity increases by almost 50\%, which is significantly higher than the 6\% difference found away from the SPP resonance. The splitting of the lower SPP resonance suggests a coupling between the SPP and vibrational modes of the analyte (lactose). This coupling distinguishes our sensor from other propagating or localized surface plasmon sensors and suggests a new sensing modality, in which the SPP resonance is tuned to sample the frequencies in the vicinity of 1.37 THz. 
By measuring the shape and strength of the lower SPP absorption as the SPP frequency changes, we can distinguish the lactose layer at the bottom surface from other materials that possess a similar background dielectric constant but do not exhibit the sharp resonant absorption at 1.37 THz, Fig.~\ref{fig:7}(a). In a similar fashion, this sensing modality can be employed for marker-free detection of other molecules with specific THz fingerprints.
\section{Conclusions}
We have explored the THz-frequency optical properties of a micrometer-thin InSb slab with a gold grating and found a strong resonant response consisting of two SPP modes. Their dispersion and dependence on InSb thickness are well described by the theory of SPPs propagating in the simple trilayer structure without the gold grating. The optical properties of the grating/InSb structure are highly sensitive to the presence of an analyte at the lower InSb interface, which allows potential applications as a THz plasmonic sensor. We have also found a coupling between the lower SPP mode and the vibrational mode of lactose and proposed a sensing modality that provides marker-free selectivity based on the THz vibrational spectral signatures of the analyte.
\section*{Funding}
Louisiana Board of Regents, the Board of Regents Support Fund (LEQSF(2012-15)-RD-A-23); Alfred P. Sloan Foundation (BR2013-123); KRISS (GP2016-034).
\section{INTRODUCTION}
Automated driving can bring significant benefits to human society. It can reduce accidents on the road, provide mobility to persons with reduced capabilities, automate delivery services, and help the elderly move safely between places, among many other benefits.
Deep Learning has been applied successfully to detect objects on point clouds with great accuracy. Previous work in this area used techniques similar to those developed for Convolutional Neural Networks for object detection on images (e.g., 3D-SSMFCNN~\cite{3DSSMFCNN}), and extended their use to point cloud projections. To name some: F-pointNet~\cite{FpointNet}, VoxelNet~\cite{VxNet}, AVOD~\cite{AVOD}, DoBEM~\cite{DoBEM}, MV3D~\cite{MV3D}, among others. However, none of these is capable of performing in real-time scenarios. These approaches are examined further in Section \ref{Related}.
Knowing that a point cloud can be treated as an image (by projecting it onto a 2D plane), and that we can thus exploit the powerful deep learning techniques from CNNs to detect objects on it, we propose a 3D LiDAR-based multi-class object detection network (LMNet). Our network aims to achieve real-time performance so that it can be applied to automated driving systems. In contrast to other approaches, such as MV3D, which use multiple inputs (top and frontal projections, plus RGB data taken from a camera), ours adopts a single-stage strategy on a single point cloud. LMNet also employs a custom-designed layer for dilated convolutions. This extends the perception regions, conducts pixel-wise segmentation, and fine-tunes the 3D bounding box prediction. Furthermore, combining it with a dropout strategy~\cite{Dropout} and data augmentation, our approach shows significant improvements in runtime and accuracy. The network can perform oriented 3D box regression to predict the location, size, and orientation of objects in 3D space.
\textbf{The main contributions of this work are}:
\begin{itemize}
\item To design a CNN capable of real-time-like 3D multi-class object detection using only point cloud data, even on a CPU.
\item To implement and test our work in a real vehicle.
\item To enable multi-class object detection in the same network (Vehicle, Pedestrian, Bike or Bicycle, as defined in the KITTI dataset).
\item To open-source the pre-trained models and inference code.
\end{itemize}
As for the evaluation of the 3D object detection, we submit our results to the KITTI dataset~\cite{Kitti} evaluation server to get a fair comparison. Experimental results suggest that LMNet can detect multi-class objects with certain accuracy for each category while achieving up to $50$ FPS on GPU, performing better than DoBEM and 3D-SSMFCNN.
The structure of this paper is as follows: Section~\ref{Related} introduces the state-of-the-art architectures. Section~\ref{Proposed} details the input encoding and LMNet. Section~\ref{experiment} describes the network outputs, compares them with other state-of-the-art architectures, and shows the tests on a real vehicle. Finally, we summarize and conclude in Section~\ref{conclusions}.
\section{Related Work}\label{Related}
Previous research has focused primarily on improving the accuracy of object detection, and great progress has been made in this field. However, these approaches leave performance out of scope. In this section, we present our findings on the current literature and analyze how to improve performance.
\begin{table*}[h]
\caption{Implemented dilated layers}
\label{dilated_conv}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Layer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline
Kernel size & $3\times3$ & $3\times3$ & $3\times3$ & $3\times3$ & $3\times3$ & $3\times3$ & $3\times3$ & $1\times1$\\ \hline
Dilation & 1 & 1 & 2 & 4 & 8 & 16 & 32 & N/A \\ \hline
Receptive field & $3\times3$ & $3\times3$ & $7\times7$ & $15\times15$ & $31\times31$ & $63\times63$ & $127\times127$ & $127\times127$\\ \hline
Feature channels & 128 & 128 & 128 & 128 & 128 & 128 & 128 & 64\\ \hline
Activation function & Relu & Relu & Relu & Relu & Relu & Relu & Relu & Relu\\ \hline
\end{tabular}
\end{center}
\vspace{-8mm}
\end{table*}
While surveying current work in this field, we found that object detection on LiDAR data can be classified into three main methods of handling the point cloud: a$)$ direct manipulation of raw 3D coordinates, b$)$ projection of the point cloud and application of Fully Convolutional Networks, and c$)$ augmentation of the perceptive fields of the previous approach using dilated convolutions.
\subsection{Manipulation of 3D coordinates}
Within this class, we can find noteworthy mentions: PointNet~\cite{PointNet}, PointNet++~\cite{PointNet++}, F-PointNet~\cite{FpointNet}, VoxelNet, VeloFCN~\cite{VeloFCN}, and MV3D. PointNet was the pioneer of this group, which can be further classified into three sub-groups.
The first group is composed of PointNet and its variants. These methods extract point-wise features from raw point cloud data. PointNet++ was developed on top of PointNet to handle object scaling. Both PointNet and PointNet++ have shown to work reliably well in indoor environments. More recently, F-PointNet was developed to enable road object detection in urban driving environments. This method relies on an independent image-based object detector to generate high-quality object proposals. The point cloud within the proposal boxes is extracted and fed onto the point-wise object detector, which helps to improve detection results in ideal scenarios. However, F-PointNet uses two different input sources and cannot be trained in an end-to-end manner, requiring the image-based proposal generator to be trained independently. Additionally, this approach showed poor performance when image data from low-light environments were used to generate the proposals, making it difficult to deploy in real application scenarios.
The second group of methods is Voxel based. VoxelNet~\cite{VxNet} divides the point cloud scene into fixed size 3D Voxel grids. A noticeable feature is that VoxelNet directly extracts the features from the raw point cloud in the 3D voxel grid. This method scored remarkably well in the KITTI benchmark.
Finally, the last group of popular approaches is 2D-based methods. VeloFCN was the first one to project the point cloud to an image-plane coordinate system. Exploiting the lessons learned from image-based CNN methods, it trained the network using the well-known VGG16~\cite{vgg16} architecture from Oxford. On top of these feature layers, it added a second branch to the network to regress the bounding box locations. MV3D, an extended version of VeloFCN, introduced multi-view representations of the point cloud by incorporating features from bird and frontal views.
Generally speaking, 2D based methods have shown to be faster than those that work directly on 3D space. With speed in mind, and due to the constraint we previously set of using nothing more than LiDAR data, we decided to implement LMNet as a 2D based method. This not only allows LMNet to perform faster, but it also enables it to work on low-light scenarios, since it is not using RGB data.
\subsection{Fully Convolutional Networks}
FCN have demonstrated state-of-the-art results in several semantic segmentation benchmarks (e.g., PASCAL VOC, MSCOCO, etc.), and object detection benchmarks (e.g., KITTI, etc.). The key idea of many of these methods is to use feature maps from pre-trained networks (e.g., VGGNet) to form a feature extractor. SegNet~\cite{SegNet-unpooling} initially proposed an encoder-decoder architecture for semantic segmentation. During the encoding stage, the feature map is down-sampled and later up-sampled using an unpooling layer~\cite{SegNet-unpooling}. DeepLabV1~\cite{DeepLabNet} increases the receptive field at each layer using dilated convolution filters.
In this work, the 3D point cloud is represented as a plane that extracts the shape and depth information. Using this procedure, an FCN can be trained from scratch using the KITTI dataset or its derivatives (e.g., data augmentation). This also enables us to design a custom network and be more flexible while addressing the segmentation problem at hand. For instance, we can integrate an encoder-decoder technique similar to the one described in SegNet. More specifically, LMNet is designed to have larger perceptive fields, quickly process larger resolution feature maps, and simultaneously validate the segmentation accuracy~\cite{Wu2016}.
\subsection{Dilated convolution}
Traditional convolution filters limit the perceptive field to uniform kernel sizes (e.g., $3 \times 3$, $5 \times 5$, etc.). An efficient approach to expanding the receptive field, while keeping the number of parameters and layers small, is to employ dilated convolutions. This technique enables an exponential expansion of the receptive field while maintaining resolution~\cite{LoDNN,Yu2015} (e.g., the feature map can be as large as the input). For this to work effectively, it is important to restrict the number of layers to reduce the FCN's memory requirements. This is especially true when working with higher resolution feature maps. Dilated convolutions are widely used in semantic segmentation on images. To the best of our knowledge, LMNet is the first network that uses dilated convolution filters on point cloud data to detect objects.
Table~\ref{dilated_conv} shows the dilated layers implemented in LMNet. The receptive field of the last dilated convolution layer is larger than the input feature maps, which have a size of $64\times 512$ pixels. This allows the FCN to access a larger context window when inferring road objects. For each dilated convolution layer, a dropout layer is added between the convolution and ReLU layers to regularize the network and avoid over-fitting.
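The growth of the receptive field in Table~\ref{dilated_conv} can be reproduced with the standard stacking rule for dilated convolutions, where each $k\times k$ layer with dilation $d$ adds $(k-1)\,d$ to the field. A small sketch of this rule (ours), using the doubling dilation schedule that yields the final $127\times127$ field:

```python
def stacked_receptive_field(kernel, dilations):
    """One-sided receptive field of stacked dilated convolutions:
    each layer with dilation d adds (kernel - 1) * d."""
    rf, sizes = 1, []
    for d in dilations:
        rf += (kernel - 1) * d
        sizes.append(rf)
    return sizes

# 3x3 kernels with dilation doubling each layer
fields = stacked_receptive_field(3, [1, 2, 4, 8, 16, 32])  # [3, 7, 15, 31, 63, 127]
```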
\subsection{Comparison}\label{comparison}
\begin{table}[t]
\caption{Summary of network architectures with input Data, Detection type, Source code availability and Inference time.}
\label{basic_info}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Network & Input Data & Class & Code & Inference \\ \hline
F-pointNet~\cite{FpointNet} & Image and Lidar & multi & Yes & 170ms\\ \hline
VoxelNet~\cite{VxNet} & Lidar & multi & N/A & 30ms\\ \hline
AVOD~\cite{AVOD} & Image and Lidar & multi & Yes & 80ms\\ \hline
MV3D~\cite{MV3D_code} & Image and Lidar & car & Yes & 350ms\\ \hline
DoBEM~\cite{DoBEM} & Image and Lidar & car & N/A & 600ms \\ \hline
3D-SSMFCNN~\cite{3DSSMFCNN} & Image & car & Yes & 100ms\\ \hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{LMNetV2.png}
\caption{LMNet architecture}
\label{architecture}
\vspace{-5mm}
\end{figure}
As this work aims to design a fast 3D multi-class object detection network, we mainly compare with other previously published 3D object detection networks: F-pointNet, VoxelNet, AVOD, MV3D, DoBEM, and 3D-SSMFCNN.
To give a basic overview of the previously mentioned networks, Table~\ref{basic_info} lists: (a) Input Data, which data are fed into the network; (b) Class, the detection availability (i.e., multi-class or car only); (c) Code, the source code availability; and (d) Inference, the time taken to forward the input data and generate outputs.
Although the source code of some state-of-the-art networks (viz., F-pointNet, AVOD) is available, the models reported to KITTI are not. We can only judge them by the type of data being fed, the outlined accuracy, and the manually reported inference time. Additionally, none of them is capable of being processed in real-time (more than 30 FPS)~\cite{real-time}. A third party, the Boston Didi team, implemented the MV3D model~\cite{MV3D_code}. This method can only perform single-class detection and would not be suitable for comparison. Furthermore, the reported inference performance is less than 3 FPS. For the previous reasons, we firmly believe that open-sourcing a fast object detection method and its corresponding inference code can greatly contribute to the development of faster and highly accurate detectors in the automated driving community.
\section{Proposed Architecture}\label{Proposed}
The proposed network architecture takes as input five representations of the frontal-view projection of a 3D point cloud. These five input maps help the network to keep 3D information at hand. The architecture outputs an objectness map and 3D bounding box offset values calculated directly from the frontal-view map. The objectness map contains the class confidence values for each of the projected 3D points. The box candidates calculated from the offset values are filtered using a custom euclidean distance based non-maximum suppression procedure. A diagram of the proposed network architecture can be visualized in Fig.~\ref{architecture}.
Although frontal-view representations~\cite{3DVelo} carry less information than bird-view~\cite{LoDNN,MV3D} representations or raw-point~\cite{PointNet,PointNet++} data, we can expect a lower computational cost while retaining reasonable detection accuracy~\cite{Vote3D}.
\subsection{Frontal-view representation\label{FV_representation}}
To obtain a sparse 2D point map, we employ a cylindrical projection~\cite{3DVelo}. Given a 3D point $p = (x, y, z)$, its corresponding coordinates in the frontal-view map, $p_{f} = (r, c)$, can be calculated as follows:
\begin{align}
c &= \left \lfloor atan2(y,x) / \delta \theta \right \rfloor ,\\
r &= \left \lfloor atan2(z, \sqrt[]{x^2 + y^2}) / \delta \phi \right \rfloor .
\end{align}
\noindent where $\delta \theta$ and $\delta \phi$ are the horizontal and vertical angular resolutions used for the map (e.g., $0.32$ and $0.4$ degrees, while the native resolutions of the Velodyne HDL-64E~\cite{HDL64E} are $0.08$ and $0.4$ degrees), respectively.
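The cylindrical projection of Eqs.~(1)--(2) can be sketched in a few lines of NumPy; the function name and array layout below are our own choices, not taken from the paper's code:

```python
import numpy as np

def project_to_frontal_view(points, d_theta, d_phi):
    """Map 3D LiDAR points (N, 3) to frontal-view (row, col) indices
    following Eqs. (1)-(2); d_theta and d_phi are in radians."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = np.floor(np.arctan2(y, x) / d_theta).astype(int)
    row = np.floor(np.arctan2(z, np.sqrt(x**2 + y**2)) / d_phi).astype(int)
    return row, col
```

A point on the sensor's forward axis, e.g. $(1, 0, 0)$, maps to row and column zero, as expected from the equations.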
Using this projection, five feature channels are generated:
\begin{enumerate}
\item Reflection, which is the normalized value of the reflectivity as returned by the laser for each point.
\item Range, which is the distance on the XY plane (or the ground) of the sensor coordinate system. It can be calculated as $\sqrt{x^2 + y^2}$.
\item The distance to the front of the car. Equivalent to the x coordinate on the LiDAR frame.
\item Side, the y coordinate value on the sensor coordinate system. Positive values represent a distance to the left, while negative ones depict points located to the right of the vehicle.
\item Height, as measured from the sensor location. Equal to the z coordinate as shown in Fig.~\ref{FVs}.
\end{enumerate}
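The five channels listed above can be assembled per point as follows. This is a hedged sketch: the channel ordering and the max-based reflectivity normalization are our assumptions, since the text does not specify them:

```python
import numpy as np

def encode_channels(points, reflectivity):
    """Stack the five frontal-view channel values for each LiDAR point.

    points: (N, 3) array of x, y, z in the sensor frame;
    reflectivity: (N,) raw laser returns.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    reflection = reflectivity / max(float(reflectivity.max()), 1e-6)
    rng = np.sqrt(x**2 + y**2)  # range on the ground (XY) plane
    forward = x                 # distance to the front of the car
    side = y                    # left positive, right negative
    height = z                  # measured from the sensor
    return np.stack([reflection, rng, forward, side, height], axis=1)
```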
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{re_134.png}\\
(a) Reflection\\
\includegraphics[width=0.35\textwidth]{ra_134.png}\\
(b) Range\\
\includegraphics[width=0.35\textwidth]{x_134.png}\\
(c) Forward\\
\includegraphics[width=0.35\textwidth]{y_134.png}\\
(d) Side \\
\includegraphics[width=0.35\textwidth]{z_134.png}\\
(e) Height\\
\caption{Encoded input point cloud\label{FVs}}
\vspace{-7mm}
\end{figure}
\subsection{Bounding Box Encoding}
As for the bounding boxes, we follow the encoding approach of~\cite{3DVelo}, which considers the offsets of the corner points and a rotation matrix. There are two reasons for using this approach, as shown in~\cite{3DVelo}: (a) faster CNN convergence, since the reduced offset distribution leads to a smaller regression search space; and (b) rotation invariance. We briefly describe the encoding in this section.
Assuming a LiDAR point $p = (x, y, z) \in P$, an object point and a background point can be represented as $p \in O$ and $p \in O^{c}$, respectively. The box encoding considers the points forming the objects, i.e., $p \in O$. The observation angles (azimuth and elevation) are calculated as follows:
\begin{align}
\theta &= atan2(y,x),\\
\phi &= atan2(z, \sqrt{x^2 + y^2}).
\end{align}
\noindent Therefore, the rotation matrix $R$ can be defined as follows,
\begin{equation}
R = R_{z}(\theta)R_{y}(\phi).
\end{equation}
\noindent where $R_{z}(\theta)$ and $R_{y}(\phi)$ are the rotation functions around the z and y axes, respectively. Then, the $i$-th bounding box corner $c_{p,i} = (x_{c,i}, y_{c,i}, z_{c,i})$ can be encoded as:
\begin{equation}\label{offset}
c'_{p,i} = R^{T}(c_{p,i} - p).
\end{equation}
\noindent Our proposed architecture regresses $c'_{p}$ during training. The eight corners of a bounding box are concatenated into a 24-d vector as,
\begin{equation}
b'_{p} = (c'^{T}_{p,1},c'^{T}_{p,2},c'^{T}_{p,3}, ... , c'^{T}_{p,8})^{T}.
\end{equation}
\noindent Thus, the bounding box output map has 24 channels with the same resolution as the input one.
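A minimal sketch of this encoding (Eqs.~(3)--(7)); the helper names are ours, and the test of rotation invariance below illustrates reason (b) above:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def encode_corners(p, corners):
    """Encode 8 box corners (8, 3) as offsets from point p in the rotated
    frame R = Rz(theta) Ry(phi), i.e. c' = R^T (c - p); returns a 24-d vector."""
    x, y, z = p
    theta = np.arctan2(y, x)
    phi = np.arctan2(z, np.hypot(x, y))
    R = rot_z(theta) @ rot_y(phi)
    return ((corners - p) @ R).reshape(-1)  # row-wise R^T (c - p)
```

Rotating both the point and the box about the z-axis leaves the encoded offsets unchanged, which is the rotation invariance the encoding is designed for.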
\subsection{Proposed Architecture}
The proposed CNN architecture is similar to LoDNN~\cite{LoDNN}. As illustrated in Fig.~\ref{architecture}, the feature map is processed by two $3 \times 3$ convolutions and eight dilated convolutions, followed by $3 \times 3$ and $1 \times 1$ convolution layers. The trunk splits at the max-pooling layer into the objectness classification branch and the bounding-box corner offset regression branch. We use $(d)conv(c_{in},c_{out},k)$ to represent a 2-dimensional convolution/dilated-convolution operator, where $c_{in}$ is the number of input channels, $c_{out}$ is the number of output channels, and $k$ is the kernel size.
A five-channel map of size $64 \times 512 \times 5$, as described in Section~\ref{FV_representation}, is fed into an encoder with two convolution layers, $conv(5,64,3)$ and $conv(64,64,3)$. These are followed by a max-pooling layer applied to the output of the encoder, which helps to reduce the FCN's memory requirements as previously mentioned.
The encoder section is succeeded by seven dilated convolutions~\cite{Yu2015} (see the dilation parameters in Table~\ref{dilated_conv}) and a convolution, i.e., $dconv_{1}(64,128,3)$; $dconv_{2-7}(128,128,3)$; and $conv(128,64,1)$, with a dropout layer and a rectified linear unit (ReLU) activation applied to the pooled feature map, enabling the layer to extract multi-scale contextual information.
Following the context module, the main trunk bifurcates into the objectness and corners branches. Both have a decoder that up-samples the feature map from the dilated convolutions to the same size as the input. This is achieved with the help of a max-unpooling layer~\cite{SegNet-unpooling}, followed by two convolution layers: $conv(64,64,3)$ with ReLU, and $conv(64,4,3)$ with ReLU for the objectness branch, or $conv(64,24,3)$ for the corners branch.
The objectness branch outputs an object confidence map, while the corners branch generates the corner-point offsets. A softmax loss is employed to calculate the objectness loss between the confidence map and the encoded objectness map. Finally, the smooth $l_{1}$ loss~\cite{Girshick2015} is used for the corner offset regression between the predicted and ground truth corner offsets.
To train the multi-task network, a re-weighting approach is employed, as explained in~\cite{3DVelo}, to balance the objectives of learning both the objectness map and the bounding-box offset values. The multi-task
function $L$ can be defined as follows:
\begin{equation}
L = \sum_{p \in P} w_{obj}(p) L_{obj}(p) + \sum_{p \in O} w_{cor}(p) L_{cor}(p),
\end{equation}
\noindent where $L_{obj}$ and $L_{cor}$ denote the softmax loss and regression loss of point $p$, respectively, while $w_{obj}$ and $w_{cor}$ are point-wise weights, e.g., objectness weight and corners weight, calculated by the following equations:
\begin{equation}
w_{cor}(p) = \left\{\begin{matrix}
\bar{s}[\kappa_{p}]/s(p) & p \in O,\\
1 & p \in O^{c},
\end{matrix}\right.
\end{equation}
\begin{equation}
w_{bac}(p) = \left\{\begin{matrix}
m|O|/|O^{c}| & p \in O^{c},\\
1 & p \in O,
\end{matrix}\right.
\end{equation}
\begin{equation}
w_{obj}(p) = w_{bac}(p) w_{cor}(p),
\end{equation}
\noindent where $\bar{s}[x]$ is the average shape size of class $x \in \{\text{car, pedestrian, cyclist}\}$, $\kappa_{p}$ denotes the class of point $p$, $s(p)$ represents the size of the object to which point $p$ belongs, $|O|$ denotes the number of points on all objects, and $|O^{c}|$ denotes the number of points on the background. In a point cloud scene, most points correspond to the background. A constant weight, $m$, is introduced to balance the softmax losses between the foreground objects and the background; empirically, $m$ is set to 4.
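The point-wise weights of Eqs.~(9)--(11) can be computed array-wise as follows; the function signature is our own, and `mean_size` stands in for $\bar{s}[\cdot]$:

```python
import numpy as np

def pointwise_weights(labels, obj_size, mean_size, m=4.0):
    """Corner and objectness weights of Eqs. (9)-(11).

    labels: (N,) ints, 0 for background, >0 for an object class;
    obj_size: (N,) size s(p) of the object a point belongs to (ignored for
    background points); mean_size: dict class -> average shape size.
    """
    fg = labels > 0
    w_cor = np.ones(len(labels))
    w_cor[fg] = np.array([mean_size[k] for k in labels[fg]]) / obj_size[fg]
    w_bac = np.ones(len(labels))
    n_obj, n_bac = int(fg.sum()), int((~fg).sum())
    w_bac[~fg] = m * n_obj / max(n_bac, 1)
    w_obj = w_bac * w_cor
    return w_obj, w_cor
```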
\subsection{Training Phase}
The KITTI dataset only provides annotations for objects in the frontal view of the camera image. Thus, we limit the point cloud range to $[0, 70]\times [-40, 40]\times [-2, 2]$ meters, and ignore points that fall outside the image boundaries after projection (see Section~\ref{FV_representation}). Since KITTI uses the Velodyne HDL-64E, we obtain a $64\times 512$ frontal-view map.
The proposed architecture is implemented using Caffe~\cite{caffe}, adding custom unpooling layer implementations. The network is trained in an end-to-end manner using the stochastic gradient descent (SGD) algorithm with a learning rate of $10^{-6}$ for 200 epochs on our dataset. The batch size is set to 4.
\subsection{Data augmentation}
The number of point cloud samples in the KITTI training dataset is $7481$, which is considerably smaller than in image datasets (e.g., ILSVRC~\cite{ILSVRC15} has $456567$ training images). Therefore, a data augmentation technique is applied to avoid overfitting and to improve the generalization of our model. Each point cloud cluster corresponding to an object class was randomly rotated around the LiDAR z-axis within $[-15^{\circ}, 15^{\circ}]$. Using the proposed data augmentation, the training set increased dramatically to more than 20000 samples.
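The rotation augmentation described above amounts to a random z-axis rotation of each object cluster; a short sketch (the RNG handling is our choice):

```python
import numpy as np

def augment_rotate_z(cluster, max_deg=15.0, rng=None):
    """Rotate an object's point cluster (N, 3) about the LiDAR z-axis by a
    random angle drawn uniformly from [-max_deg, max_deg] degrees."""
    rng = rng if rng is not None else np.random.default_rng()
    a = np.deg2rad(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(a), np.sin(a)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return cluster @ R.T
```

The rotation leaves each point's height and its distance from the z-axis unchanged, so the augmented clusters remain physically plausible.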
\subsection{Testing Phase}
The objectness map includes a non-object class, and the corner map may output many corner offsets. Consider the corresponding corner points $\{c_{p,i} \,|\, i = 1, \ldots, 8\}$ obtained by the inverse transform of Eq.~\ref{offset}. Each bounding box candidate can be denoted by $b_{p} = (c^{T}_{p,1},c^{T}_{p,2},c^{T}_{p,3}, \ldots ,c^{T}_{p,8})^{T}$. The set of all box candidates is $B = \{b_{p} \,|\, p \in obj\}$.
To reduce bounding box redundancy, we apply a modified non-maximum suppression (NMS) algorithm, which selects bounding boxes based on the Euclidean distance between the front-top-left and rear-bottom-right corner points of the box candidates. Each box $b_{p}$ is scored by counting its neighboring bounding boxes in $B$ within distance $\delta_{car}, \delta_{pedestrian}, \delta_{cyclist}$, i.e., $\#\{||c_{pi,1} - c_{pj,1}|| + ||c_{pi,8} - c_{pj,8}|| < \delta_{class}\}$. The bounding boxes in $B$ are then sorted by descending score, and candidates whose score is less than five are discarded as outliers. Picking the box with the highest score in $B$, the Euclidean distances between this box and the rest are calculated, and boxes whose distance is less than a predefined threshold are removed. The empirically obtained thresholds for each class are: (a) $T_{car} = 0.7$m; (b) $T_{pedestrian}=0.3$m; and (c) $T_{cyclist}=0.3$m.
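The procedure can be sketched as follows. One detail the text leaves open is whether a box counts itself as a neighbor when scored; in this sketch we include it, so `min_score` should be interpreted accordingly:

```python
import numpy as np

def distance_nms(boxes, delta, T, min_score=5):
    """Distance-based NMS over box candidates (N, 8, 3).

    Distance between two boxes is ||c_1 - c'_1|| + ||c_8 - c'_8|| using the
    front-top-left (index 0) and rear-bottom-right (index 7) corners.
    Returns the indices of the kept boxes.
    """
    d = (np.linalg.norm(boxes[:, None, 0] - boxes[None, :, 0], axis=-1)
         + np.linalg.norm(boxes[:, None, 7] - boxes[None, :, 7], axis=-1))
    score = (d < delta).sum(axis=1)      # neighbor count (self included)
    kept = []
    suppressed = np.zeros(len(boxes), dtype=bool)
    for i in np.argsort(-score):         # descending score
        if suppressed[i] or score[i] < min_score:
            continue                     # low scores are outliers
        kept.append(int(i))
        suppressed |= d[i] < T           # drop boxes too close to the winner
    return kept
```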
\section{Experiments\label{experiment}}
The evaluation results of LMNet on the challenging KITTI object detection benchmark~\cite{Kitti} are shown in this section. The benchmark provides image and point cloud data: $7481$ sets for training and $7518$ sets for testing. LMNet is evaluated on KITTI's 2D, 3D and bird-view object detection benchmarks using the online evaluation server.
\begin{table}[t]
\caption{Performance board}
\label{performance_board}
\begin{center}
\vspace{-5mm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Network & Accelerator & Inference & Car & Ped. & Cyc.\\ \hline
F-PointNet~\cite{FpointNet} & GTX 1080 & 88ms & \bf{70.39} & \bf{44.89} & \bf{56.77} \\ \hline
VoxelNet (Lidar)~\cite{VxNet} & Titan X & 30ms & 65.11 & 33.69 & 48.36 \\ \hline
AVOD~\cite{AVOD} & Titan Xp & 80ms & 65.78 & 31.51 & 44.90 \\ \hline
MV3D~\cite{MV3D} & Titan X & 360ms & 63.35 & N/A & N/A \\ \hline
MV3D (Lidar)~\cite{MV3D} & Titan X & 240ms & 52.73 & N/A & N/A \\ \hline
DoBEM~\cite{DoBEM} & Titan X & 600ms & 6.95 & N/A & N/A \\ \hline
3D-SSMFCNN~\cite{3DSSMFCNN} & Titan X & 100ms & 2.28 & N/A & N/A \\ \hline
Proposed & GTX 1080 & \bf{20ms} & 15.24 & 11.46 & 3.23 \\ \hline
\end{tabular}
\end{center}
\vspace{-9mm}
\end{table}
\subsection{Inference Time}
One of the major issues for the application of a 3D object detection network is inference time. We compare the inference time of LMNet against some representative state-of-the-art networks. Table~\ref{performance_board} shows the inference times on CUDA-enabled GPUs for LMNet and the other representative methods. LMNet achieved the fastest inference, at $20$ms on a GTX $1080$. Further tests observed inference times of $12$ms on a Titan Xp and $6.8$ms ($147$ FPS) on a Tesla P40. To the best of our knowledge, LMNet is the fastest publicly available network of its kind and enables real-time ($30$ FPS or better)~\cite{real-time} detection.
\subsection{Towards Real-Time Detection on General Purpose CPU}
To test the inference time on CPUs, LMNet uses Intel Caffe~\cite{IntCaffe}, which is optimized for Intel architecture CPUs through Intel's Math Kernel Library (MKL). For the execution measurements, we include not only the inference time but also the time required to pre-process, project and generate the input data from the point cloud, as well as the post-processing time used to generate the objectness map and bounding box coordinates. Table~\ref{runtime_info_cpu} shows that LMNet can achieve more than 10 FPS on an Intel Core i5-6600K (4 cores, $3.5$ GHz) and 20 FPS on a Xeon E5-2698 v4 (20 cores, $2.2$ GHz). Given that the LiDAR scanning frequency is 10 Hz, the 10 FPS achieved by LMNet can be considered real-time. These results are promising for automated driving systems that use only general purpose CPUs, and suggest that LMNet could be deployed to edge devices.
\begin{table}[t]
\caption{Inference time on CPU}
\label{runtime_info_cpu}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Caffe version & i5-6600K & E5-2698 v4 \\ \hline
Caffe with OpenBlas & 351ms & 280ms\\ \hline
Intel Caffe with MKL2017 & 99ms & 47ms\\ \hline
\end{tabular}
\end{center}
\vspace{-4mm}
\end{table}
\subsection{Project Map Segmentation}
Based on our observations, the model trained for $200$ epochs is used for performance evaluation. The segmented map obtained on the validation set and its corresponding ground truth label are shown in Fig.~\ref{segmentation}. The segmented object confidence map is very similar to the ground truth, showing that LMNet can accurately classify points by class.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{gt_134.png}\vspace{1mm} \\
(a) Ground truth label
\includegraphics[width=0.4\textwidth]{pr_134.png} \\
(b) Segmented map
\caption{Ground truth label and segmented map}
\label{segmentation}
\vspace{-6mm}
\end{figure}
\subsection{Object Detection}
The evaluation of LMNet was carried out on the KITTI dataset evaluation server, in particular for multi-class object detection in the 2D, 3D and BV categories. Table~\ref{performance_board} shows the average precision for 3D object detection in the moderate setting. LMNet achieved 3D detection accuracies of $15.24\%$ for car, $11.46\%$ for pedestrian and $3.23\%$ for cyclist. The state-of-the-art networks achieve more than $50\%$ accuracy for car, $20\%$ for pedestrian and $29\%$ for cyclist; MV3D achieved $52\%$ accuracy for car objects. However, those networks are either not real-time, limited to a single class, or not open-sourced, as mentioned in Section~\ref{comparison}. To the best of our knowledge, LMNet is the fastest multi-class object detection network using data only from LiDAR, with models and code open-sourced.
\subsection{Test on real-world data}
We implemented LMNet as an Autoware~\cite{Autoware} module. Autoware is an autonomous driving framework for urban roads, including sensing, localization, fusion and perception modules. We decided to use this framework due to its ease of installation, its compatibility with ROS~\cite{ROS}, and the fact that it is open-source, allowing us to focus only on the implementation of our method. The sensing module allowed us to connect with a Velodyne HDL-64E LiDAR, the same model as the one used in the KITTI dataset. With the help of the ROS, PCL and OpenCV libraries, we projected the sensor point cloud, constructed the five-channel input map as described in Section~\ref{FV_representation}, fed it to our Caffe fork and obtained both output maps. Fig.~\ref{ros_pointcloud_classified} shows the classified 3D point cloud.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{rviz_cnn_lidar.png}
\caption{3D point cloud classified by LMNet from a Velodyne HDL-64E }
\label{ros_pointcloud_classified}
\vspace{-6mm }
\end{figure}
\section{CONCLUSIONS\label{conclusions}}
This paper introduces LMNet, a multi-class, real-time network architecture for 3D object detection on point clouds. Experimental results show that it can achieve real-time performance on consumer-grade CPUs. The predicted bounding boxes were evaluated on the KITTI evaluation server. Although the accuracy is not yet on par with the state-of-the-art architectures, LMNet is significantly faster and able to detect multi-class road objects. The implementation and pre-trained models are open-sourced, so anyone can easily verify our claims; the training code is also planned to be open-sourced. It is important to note that all the evaluations use the bounding box location as defined by the KITTI dataset, which does not directly reflect the classifier accuracy at a point-wise level. As future work, we intend to fine-tune the bounding box non-maximum suppression algorithm to perform better on the KITTI evaluation, and to implement the network on low-power platforms such as the Intel Movidius Myriad~\cite{myriad} and Atom, to test its capability on a wider range of IoT and embedded solutions.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Radio jets ejected from radio loud active galactic nuclei (AGNs)
sometimes show proper motion with apparent velocity exceeding the
speed of light $c$ (\cite{pearson81,hughes91}).
The widely-accepted explanation for
this phenomenon, called superluminal motion, is relativistic jet
flow in a direction along the observer's line-of-sight with
a Lorentz factor greater than 2 (\cite{rees66}).
Such relativistic motion is
thought to originate from a region very close to the putative
supermassive black hole which is thought to power each AGN
(\cite{lindenbell69,rees84}).
On the other hand, the great majority of AGNs are radio quiet and do not
produce powerful relativistic radio jets (\cite{rees84}).
These two classes of
active objects (radio loud and quiet) are also found
in the black hole candidates (BHCs) in our own Galaxy. Objects
with superluminal jets, such as GRS 1915+105 and GRO J1655-40,
belong to the radio loud class (\cite{mirabel94,tingay95}).
Other objects such as Cyg X-1 and GS 1124-68 are
relatively radio quiet and produce little or no jet.
What causes the difference between the two classes? Recent
observations of the BHCs in our Galaxy suggest that the Galactic
superluminal sources contain very rapidly rotating black holes
(normalized angular momentum, $a \equiv J/[GM_{\rm BH}^2/c]
=0.9-0.95$, where $G$ and $M_{\rm BH}$ are the gravitational
constant and black hole mass, respectively),
while the black holes in Cyg X-1 and GS 1124-68 are
spinning much less rapidly ($a = 0.3-0.5$) (\cite{cui98}).
A similar rapidly rotating black hole is also suggested
in the AGN of the Seyfert 1 galaxy MCG-6-30-15 by
the X-ray satellite ASCA (\cite{iwasawa96}). According to
recent (nonrelativistic) studies of magnetically-driven jets
from accretion disks by Kudoh \&\ Shibata (1995, 1997a),
the terminal velocity of the formed jet
is comparable to the rotational velocity of the disk at the foot of
the jet. Further nonrelativistic simulations of jet formation
confirm these results (\cite{kudo97b,ouyed97}),
except for the extremely large magnetic field/high jet-power case
(\cite{meier97,meier99})
in which very fast jets can be produced.
The rotation velocity at the innermost stable orbit of the
Schwarzschild black hole ($r= 3r_{\rm S}$) is $0.5c$, where
$r_{\rm S} =2GM_{\rm BH}/c^2$ is the Schwarzschild radius.
In addition, it appears that the poloidal magnetic field
strength in disks around non-rotating black holes may not be
extremely strong if the magnetic field energy density
is comparable with that of the radiation (\cite{begelman84,rees84}).
Therefore, a jet produced by MHD acceleration from an accretion
disk around a non-rotating black hole should be sub-relativistic
and very weak. In fact, numerical simulations of jet formation in
a Schwarzschild metric show only sub-relativistic jet flow (\cite{koide99}),
except for the case when the initial black hole corona is in
hydrostatic equilibrium rather than free fall
(\cite{koide98}).
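The $0.5c$ figure quoted above can be checked with the standard textbook expression for the circular-orbit speed measured by a local static observer in the Schwarzschild metric, $v/c = \sqrt{r_{\rm S}/2r}\,/\sqrt{1 - r_{\rm S}/r}$; the short script below is our illustrative check, not part of the simulation code:

```python
from math import sqrt

def circular_orbit_speed(r_over_rs):
    """Circular-orbit speed (in units of c) measured by a local static
    observer in the Schwarzschild metric: v/c = sqrt(rS/2r)/sqrt(1 - rS/r)."""
    x = float(r_over_rs)
    return sqrt(1.0 / (2.0 * x)) / sqrt(1.0 - 1.0 / x)

# At the innermost stable orbit, r = 3 rS, the speed is 0.5 c
# (up to floating-point rounding).
v_isco = circular_orbit_speed(3.0)
```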
Several mechanisms for relativistic jet formation from rotating
black holes have been proposed
(\cite{blandford77,takahashi90}).
However, up until now no
one has performed a self-consistent numerical simulation of the
dynamic process of jet formation in a rotating black hole
magnetosphere. To this end, we have developed a Kerr
general relativistic magnetohydrodynamic (KGRMHD) code. In this
paper we report briefly on what we believe are some
of the first calculations of their kind ---
simulation of jet formation in a
rotating black hole magnetosphere.
\section{Numerical Method}
We use a 3 + 1 formalism of the general relativistic conservation
laws of particle number, momentum, and energy and Maxwell
equations with infinite electric conductivity (\cite{thorne86}).
The Kerr metric, which describes the spacetime around a rotating
black hole, is used in the calculation.
When we use Boyer-Lindquist coordinates,
$x^0=ct$, $x^1=r$, $x^2=\theta$, and $x^3=\phi$, the Kerr
metric $g_{\mu \nu}$ is written as follows,
\begin{equation}
ds^2=g_{\mu \nu}dx^{\mu}dx^{\nu}
=-h_0^2(cdt)^2+\sum_{i=1}^{3} h_i^2(dx^i)^2-2h_3\Omega _3 cdt dx^3 .
\end{equation}
\noindent By modifying the lapse function in our Schwarzschild black hole code
($\alpha = \root \of {1-r_{\rm S}/r}$) to be
$\alpha = \root \of {h_0^2+\Omega _3^2}$,
and adding some terms of $\Omega _3$ to the time evolution equations,
we were able to develop a KGRMHD code relatively easily.
(See Appendix C in \cite{koide99} for more details on this procedure
and the meaning of symbols used.)
We use the Zero Angular Momentum Observer (ZAMO) system
for the 3-vector quantities, such as velocity ${\bf v}$, magnetic field
${\bf B}$, and so on.
For scalars, we use the frame comoving with the fluid flow.
The simulation is performed
in the region $0.75r_{\rm S} \leq r \leq
20r_{\rm S}$, $0 \leq \theta \leq \pi /2$ with $210 \times 70$
mesh points, assuming axisymmetry with respect to the $z$-axis
and mirror symmetry with respect to the plane $z=0$.
A free boundary condition is employed
at $r=0.75 r_{\rm S}$ and $r=20r_{\rm S}$.
In the simulations, we use simplified tortoise coordinates,
$x={\rm log}(r/r_{\rm H} -1)$, where $r_{\rm H}$ is the
radius of the black hole horizon.
To avoid numerical oscillations, we use
a simplified TVD method (\cite{davis84,koide96,koide97,koide99}).
We checked the KGRMHD code by computing Kepler motion around
a rotating black hole and comparing with analytic results
(\cite{shapiro83}).
\section{Results}
The simulations were performed for two cases in which the disk co-rotates
and counter-rotates with respect to the black hole rotation.
Figures 1a-c illustrate the time evolution of the
counter-rotating disk case and
Fig. 1d the final state of the co-rotating case.
These figures show the rest mass density
(color), velocity (vectors), and magnetic field (solid lines) in
$0 \leq R \equiv r {\rm sin} \theta
\leq 7 r_{\rm S}$, $0 \leq z \equiv r {\rm cos} \theta \leq 7 r_{\rm S}$.
The black region at the origin shows the inside of the black hole horizon.
The angular momentum parameter of the black hole is $a=0.95$
and the radius is $r_{\rm H} =0.656 r_{\rm S}$.
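The quoted horizon radius follows from the standard Kerr formula $r_{\rm H} = (r_{\rm S}/2)\,(1+\sqrt{1-a^2})$; the snippet below is our illustrative check of the number in the text:

```python
from math import sqrt

def horizon_radius(a):
    """Kerr event-horizon radius in units of r_S = 2 G M / c^2:
    r_H / r_S = (1 + sqrt(1 - a^2)) / 2."""
    return 0.5 * (1.0 + sqrt(1.0 - a * a))

print(horizon_radius(0.95))  # -> 0.6561..., as quoted for a = 0.95
print(horizon_radius(0.0))   # -> 1.0, the Schwarzschild limit r_H = r_S
```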
The initial state in the simulation consists of
a hot corona and a cold accretion disk around the black hole (Fig. 1a).
In the corona, plasma is assumed to be in nearly
stationary infall, with the specific enthalpy
$h/\rho c^2 =1+\Gamma p/[(\Gamma -1)\rho c^2] =1.3$,
where $\rho$ is the rest mass density, $p$ is the pressure, and
$\Gamma$ is the specific heat ratio, which is set to $\Gamma =5/3$.
Far from the hole, the corona matches the stationary transonic solution exactly.
The accretion disk is located at $|{\rm cot} \theta | \leq 0.125$,
$r \geq r_{\rm D} =3r_{\rm S}$ and the initial
velocity of the disk is assumed to be
the velocity of a circular orbit around the Kerr black hole.
The co-rotating disk is stable,
but the counter-rotating disk is unstable
in the region $R \leq 4.4 r_{\rm S}$. Except for the disk rotation
direction, we use the same initial conditions in both cases.
The mass density of the disk is 100 times that of the
corona at the inner edge of the disk.
The mass density profile is given by that of a
hydrostatic equilibrium corona with a scale height
of $r_{\rm c} \sim 3 r_{\rm S}$.
The disk is in pressure balance with the corona, and
the magnetic field lines are perpendicular to the accretion disk.
We use the azimuthal component of the vector potential $A_{\rm \phi}$
of the Wald solution to set the magnetic field,
which provides a uniform magnetic field
far from the Kerr black hole (\cite{wald74}).
Here the magnetic field strength far from the black hole
is $0.3 \root \of {\rho _0 c^2}$, where $\rho _0$ is the
initial corona density at $r=3 r_{\rm S}$.
However, we do not use the time component of the
vector potential $A_t$ from Wald solution; instead,
we use the ideal MHD condition ${\bf E} +{\bf v} \times {\bf B} ={\bf 0}$
to determine the electric field ${\bf E}$.
Here the Alfv\'{e}n velocity and plasma beta value at the disk
($r=3.5r_{\rm S}$) are $v_{\rm A} = 0.03c$ and $\beta \sim 3.4$,
respectively.
Figure 1b shows the state at $t= 30\tau _{\rm S}$,
where $\tau _{\rm S}$ is defined as $\tau _{\rm S} \equiv r_{\rm S}/c$.
By this time the inner edge of the disk has rotated $0.75$ cycles,
{\it if} we assume the edge is at $R = 3r_{\rm S}$.\footnote{To calculate
inner disk rotation cycles, in this paper we
always will assume that the inner edge is located at $R = 3r_{\rm S}$,
regardless of how far inward the edge actually has accreted.}
Actually, the edge falls toward
the black hole and rotates faster at $R=2r_{\rm S}$.
The rapid infall produces a shock at $R = 3.1 r_{\rm S}$, and
the high pressure behind it begins to produce the jet.
This is the same pressure-driven jet formation process seen previously
in the Schwarzschild case (\cite{koide99}).
Figure 1c shows the final state of the counter-rotating disk case
at $t=47 \tau _{\rm S}$ when the inner edge of the disk
rotated 1.2 cycles.
The accretion disk continues to fall rapidly
toward the black hole, with the disk plasma entering
the ergosphere and then crossing the horizon,
as shown by the crowded magnetic field lines near $r=0.75r_{\rm S}$.
The magnetic field lines become radial due to dragging by the disk
infall near the black hole.
The jet is ejected almost along the magnetic field lines.
Its maximum total and poloidal velocities are the same,
$v = v_{\rm p} =0.44c$ at
$R=3.2r_{\rm S}$, $z=1.6r_{\rm S}$.
The mass density plot
(color) shows that the jet consists of
two layers. One is an inner, low density,
fast, magnetically-driven jet and the other is an outer, high density,
slow, gas pressure-driven jet. The latter comes from the disk near the shock
at $R =3.1 r_{\rm S}$ and is, therefore, similar
to the gas pressure-driven jet of Koide, Shibata, \&\ Kudoh (1998).
The former is new and has never been seen in the Schwarzschild
black hole case.
It comes from the disk near the ergosphere, and is
accelerated as follows.
As there is no stable orbit at $R \leq 4.4 r_{\rm S}$,
the disk falls rapidly into the ergosphere.
Inside the static limit , the velocity of frame dragging
exceeds the speed of light ($c \Omega_3 /\alpha > c$),
causing the disk to rotate
in the {\em same} direction of the black hole rotation
(relative to the fixed Boyer-Lindquist frame), even though it was
initially counter-rotating.
The rapid, differential frame dragging greatly enhances the
azimuthal magnetic field, which then accelerates the flow upward
and pinches it into a powerful collimated jet.
Figure 1d shows a snapshot of the co-rotating disk case at
$t=47 \tau _{\rm S}$. The disk stops its
infall near $R = 3 r_{\rm S}$
due to the centrifugal barrier with a shock at
$r =3.4r_{\rm S}$. The high pressure behind the shock causes
a gas pressure-driven jet with total and poloidal velocities of
$v=v_{\rm p} =0.30c$
at $R=3.4r_{\rm S}$, $z=2.4r_{\rm S}$.
A detailed analysis shows that a weak magnetically-driven jet is
formed outside the gas pressure-driven jet with
maximum total and poloidal velocities of $v=0.42c$ and
$v_{\rm p} =0.13c$, respectively. This two-layered shell
structure is similar to that of Schwarzschild black hole
case (\cite{koide98}).
The centrifugal barrier makes the disk take a much longer time
to reach the ergosphere, which causes the difference
between the co-rotating and counter-rotating disk cases.
To more fully illustrate the physics of the jet formation mechanism,
in figure 2 we show the plasma beta, $\beta \equiv p/(B^2/2)$ (color)
and the toroidal component of the magnetic field,
$B_\phi$ (contour) in the counter-rotating and
co-rotating disk cases at $t=47 \tau _{\rm S}$.
The blue color shows the region where
magnetic field dominates the gas pressure;
light red---yellow shows where gas pressure is dominant;
and solid contour line shows negative azimuthal
magnetic field ($B_\phi <0$), while the broken line
the positive value ($B_\phi > 0$).
The toroidal component
of the magnetic field $B_{\phi}$ is negative and
its absolute value is very large above the black hole in both cases.
The field increases to more than 10 times the initial magnetic field.
This amplification is caused by the shear of the plasma flow
in the Boyer-Lindquist frame
due to the frame dragging effect of the rotating black hole
(\cite{yokosawa91,yokosawa93,meier99}).
Under the simplifying assumption that the plasma
is at rest in the ZAMO frame,
the general relativistic Faraday law of induction and ideal MHD
condition yield,
\begin{equation}
\frac{\partial B_\phi}{\partial t} = f_1 B_r + f_2 B_\theta ,
\label{eqdyn}
\end{equation}
where $f_1 = c(h_3/h_1) \partial (\Omega _3/h_3)/\partial r$,
and $f_2 = c(h_3/h_2) \partial (\Omega _3/h_3)/\partial \theta$.
This expression is almost identical to that of the $\omega$-dynamo effect known from terrestrial magnetism. Note that $f_2$ is one order of magnitude smaller than $f_1$ when $a \sim 1$.
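With the poloidal field components and the shear coefficients held fixed, Eq.~(\ref{eqdyn}) predicts linear-in-time growth of $B_\phi$; the trivial forward-Euler sketch below illustrates this (the coefficient values used are arbitrary, for illustration only):

```python
def amplify_b_phi(b_r, b_theta, f1, f2, dt, n_steps):
    """Integrate dB_phi/dt = f1*B_r + f2*B_theta (Eq. 2) with the poloidal
    field and shear terms frozen, as in the plasma-at-rest (ZAMO) assumption
    of the text; B_phi then grows linearly in time."""
    b_phi = 0.0
    for _ in range(n_steps):
        b_phi += dt * (f1 * b_r + f2 * b_theta)
    return b_phi
```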
Where does the magnetic field amplification energy come from?
It does not come from the gravitational or thermal energy
of the disk, because the frame dragging effect occurs even
when the disk and corona plasmas are at rest and cold.
The only other possible energy source is the rotation of the
black hole itself. Indeed, the increase in the azimuthal magnetic field
component (eq. (\ref{eqdyn})) depends on the shear
of the rotational variable $\Omega _3$.
We conclude that the amplification energy of the magnetic
field is supplied by extraction of the rotational energy
of the black hole.
The distribution of the plasma beta ($\beta$)
and the azimuthal component of the magnetic field
($B_\phi$) of the counter-rotating and co-rotating disk cases
are quite different. In the co-rotating disk case,
they are similar to those of the Schwarzschild black hole
case.
In the counter-rotating disk case, the outer part has a positive azimuthal magnetic field component ($B_\phi >0$), caused by the counter-rotating disk, together with a high plasma beta, while the inner part has a negative azimuthal magnetic field ($B_\phi <0$) and a low plasma beta.
Note that the very high plasma beta region (yellow region)
is outside of the jet; at this point it has
almost stopped and eventually will fall into the black hole.
The negative azimuthal magnetic field is caused by the
disk around the ergosphere, where the disk rotates in the same
direction as the black hole in the Boyer-Lindquist
frame.
To confirm the jet acceleration mechanism,
we estimate the power from the electromagnetic field,
$W_{\rm EM}={\bf v} \cdot ({\bf E} + {\bf J} \times {\bf B})$
and the gas pressure, $W_{\rm gp} = - {\bf v} \cdot \nabla p$
along the line, $z=1.1r_{\rm S}$ which crosses the jet foot
(Fig. 3).
At $t=47 \tau _{\rm S}$, the gas pressure is dominant
in the co-rotating disk case (Fig. 3b).
However, in the counter-rotating disk case, the
electromagnetic power is dominant near the black hole even though
the gas pressure power is the same as that
of the co-rotating disk case (Fig. 3a).
The magnetically-driven jet in this latter case
is accelerated by the magnetic field anchored to the
ergospheric disk. The frame dragging effect rapidly
rotates the disk in the same direction
as the black hole rotation, increasing the
azimuthal component of the magnetic field and
the magnetic tension which, in turn, accelerates the plasma by the
magnetic pressure and centrifugal force, respectively.
(A detailed analysis shows that both components of the
magnetic force are comparable.)
This mechanism of jet production, therefore, is a kind of Penrose
process that uses the magnetic field to extract rotational energy of
the black hole and eject a collimated outflow from very near the
horizon.
\section{Discussion}
We have presented general relativistic simulations
of jet formation from both counter-rotating and co-rotating
disks in a Kerr black hole magnetosphere.
We have found that jets are formed in both cases.
At the time when the simulations
were stopped ($t=47 \tau _{\rm S}$ ($53 \tau _{\rm S}$),
after the inner edge of the disk had rotated 1.2 (1.4) cycles
in the counter-rotating (co-rotating) disk case)
the poloidal velocities of the jets were $v \sim 0.4c$ (counter-rotating),
$\sim 0.3c$ (co-rotating), both sub-relativistic.
In the co-rotating disk case, the jet has a two-layered
structure: an inner, gas pressure-driven jet and an outer,
magnetically-driven jet. On the other hand, in the counter-rotating
case, a new magnetically-driven jet has been found
inside the gas pressure-driven jet. The new jet is accelerated
by the magnetic field induced by the frame dragging
effect in the ergosphere. In this case, the existence of a
magnetically-driven jet is not clear outside the gas pressure-driven
jet. A longer term simulation may show a three-layered structure
including the outer, magnetically-driven jet.
Unfortunately, the
counter-rotating (co-rotating) disk case
could not be continued beyond $t=47 \tau _{\rm S}$
($t=53 \tau _{\rm S}$) because of
numerical problems.
We have performed one other case previously --- the infall of a
magnetized non-rotating disk into a rapidly-rotating black hole
(\cite{koide99b}).
The disk falls toward the black hole more rapidly than
in the counter-rotating case.
At later times (after almost two inner
disk turns) it developed a {\em relativistic}
jet with a velocity of $v \sim 0.9c$ (Lorentz factor $\sim 2$).
We believe that, if we had been able to perform longer-term simulations
here, in at least the counter-rotating disk case the
magnetically-driven jet
also would have been accelerated to relativistic velocities (and possibly the
co-rotating case as well).
Despite its low speed, the magnetically-driven jet in
the counter-rotating disk case is nevertheless noteworthy
because it extracts rotational energy from the black hole.
While the process is similar to the Blandford-Znajek mechanism
(\cite{blandford77}), it appears more closely related to the model
of Takahashi {\it et~al.}\ (1990).
In our case, the electromagnetic field energy is transformed
immediately into kinetic energy in the jet.
A more detailed analysis and further calculations will be
reported in our next paper.
Recently, Wardle {\it et~al.}\ (1998) detected circularly
polarized radio emission from the jets of the
archetypal quasar 3C279. They concluded
that electron-positron pairs are
important components of the jet plasma.
Similar detections in three other radio sources have been
made (\cite{homan99}), which suggests that, in general,
extragalactic radio jets may be composed mainly
of an electron-positron pair plasma.
The electron-positron plasma is probably produced very near
the black hole. The mechanism we have investigated here,
magnetically-driven jet powered by
extraction of rotational energy of a black hole,
is a strong candidate for explaining the acceleration of
such electron-positron jets.
\vspace{1.0cm}
S. K. thanks M. Inda-Koide for discussions
and important comments for this study.
We thank K.-I. Nishikawa, M. Takahashi, A. Tomimatsu, P. Hardee,
and J.-I. Sakai
for discussions and encouragement.
We appreciate the support of the National Institute for Fusion Science
and the National Astronomical Observatory
in the use of their super-computers.
Part of this research was carried out at the
Jet Propulsion Laboratory, California Institute of Technology,
under contract with the National Aeronautics and
Space Administration.
\bibliographystyle{alpha}
\section*{}
Engineering applications on complex geometries often involve polyhedral meshes with various element shapes. Our goal is to analyze a class of discretization schemes on such meshes for the model elliptic problem
\eq{ \label{eq:PDE}
- \divm(\tensor{\CondTh} \, \grd p) = s,
}
posed on a bounded polyhedral domain $\Omega\subset {\mathbb{R}}^3$ with source $s\in L^2(\Omega)$. For simplicity, we focus on homogeneous Dirichlet boundary conditions; the extension to all the usual boundary conditions for~\refp{eq:PDE} is straightforward. We introduce the gradient and flux of the exact solution such that
\eq{ \label{eq:g_f}
\vect{g} = \grd p, \qquad \vect{\phi} = - \tensor{\CondTh}\,\vect{g},\qquad \divm\vect{\phi} = s.
}
In what follows, $p$ is termed the potential. The conductivity $\tensor{\CondTh}$ can be tensor-valued, and its eigenvalues are uniformly bounded from above and from below away from zero.
Following the seminal ideas of Tonti~\cite{Tonti75PhysStruct} and
Bossavit~\cite{Bossa:98,Bossa:00}, compatible (or mimetic, or
structure-preserving) schemes aim at preserving basic properties of the
continuous model problem at the discrete level. In such schemes, the localization
of the degrees of freedom results from the physical nature of the
fields: potentials are measured at points, gradients along lines, fluxes
across surfaces, and sources in volumes (in the language of differential
geometry, a potential is a (straight) 0-form, a gradient a (straight)
1-form, a flux a (twisted) 2-form, and a source a (twisted) 3-form). Moreover, compatible schemes operate a
clear distinction between topological relations (such as $\vect{g} = \grd
p$ and $\divm\vect{\phi}=s$ in~\refp{eq:g_f}) and closure relations (such
as $\vect{\phi} = - \tensor{\CondTh}\,\vect{g}$ in~\refp{eq:g_f}). The localization of degrees of freedom makes it possible to
build discrete differential operators preserving the structural
properties of their continuous counterpart. Thus, the only source of
error in the scheme stems from the discretization of the closure
relation. This step relies on a so-called discrete Hodge
operator whose design is the cornerstone of the construction and
analysis of the scheme (see, e.g.\@\xspace, Tarhassari
\etal~\cite{BoKeTa99HodgeOp}, Hiptmair~\cite{Hip01DiscHodge}, and more
recently Gillette~\cite{Gille:11}). For
the numerical analysis of compatible schemes, we refer to the early work
of Dodziuk~\cite{Dodzi:76} (extending ideas of Whitney~\cite{Whitn:57})
and Hyman and Scovel~\cite{HySc88MFD}, and to more recent overview
papers by Mattiussi~\cite{Mattiussi00EFVFDF}, Bochev and
Hyman~\cite{BoHy05MimeticPrinciples}, Arnold
\etal~\cite{ArnFalWin06FEEC}, and Christiansen \etal~\cite{ChrMO:11}.
Although it is not always made explicit, an important notion in
compatible schemes is the concept of orientation; see Bossavit~\cite[no
1]{Bossa:98} and Kreeft \etal~\cite{KrPaG:11}. For instance,
measuring a gradient requires assigning an \emph{inner} orientation to
the line (indicating how to circulate along it), while measuring a flux
requires assigning an \emph{outer} orientation to the surface
(indicating how to cross it). Discrete differential operators act on
entities with the same type of orientation (either inner or outer),
while the discrete Hodge operator links entities with a different type of
orientation. Therefore, in addition to the primal mesh discretizing the
domain $\Omega$, a dual mesh is also introduced to realize a one-to-one
pairing between edges and faces along with the transfer of
orientation. For instance, outer-oriented dual faces are attached to
inner-oriented primal edges, and so on. The primal and dual meshes do not
play symmetric roles. The primal mesh is the one produced by the mesh
generator and is the only mesh that needs to be seen by the end
user. This mesh carries the information on the domain geometry, boundary
conditions, and material properties.
The dual mesh is used only in the intimate construction of the scheme,
and, in general, there are several possibilities to build it.
Section~2 introduces the classical ingredients of the
discrete setting, namely the de Rham (or reduction) maps defining the
degrees of freedom and the discrete differential operators along with their
key structural properties (commuting with de Rham maps, adjunction of gradient and divergence). Then,
following Bossavit~\cite[no~5]{Bossa:00} and Perot and Subramanian~\cite{PeSu07DC},
we introduce two families of schemes depending on the positioning of the
potential degrees of freedom. Choosing a positioning on primal vertices
leads to vertex-based schemes, while choosing a positioning on dual
vertices (which are in a one-to-one pairing with primal cells)
leads to cell-based schemes (the terminology is chosen to emphasize the salient role of the primal mesh). Both the vertex- and cell-based schemes
admit two realizations (yielding, in general, distinct discrete solutions), one
involving a Symmetric Positive Definite (SPD) system and the other a
saddle-point system. Each of these four systems involves a specific
discrete Hodge operator. Except in some particular cases (orthogonal meshes
and isotropic conductivity) where a diagonal discrete Hodge operator can
be built in the spirit of Covolume (Hu and
Nicolaides~\cite{HuNic:92}) and, more recently,
Discrete Exterior Calculus (Desbrun
\etal~\cite{DHLM05NotesDEC}) schemes,
the discrete Hodge operator is, in general, sparse and SPD. A natural
way to build this operator
is through a cellwise assembly of local operators.
In what follows, we focus on the two cases where the assembly is
performed on primal cells, namely the
vertex-based scheme in SPD form and the cell-based scheme in
saddle-point form.
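As a minimal illustration of this structure (a 1D reduction on a uniform mesh with $\tensor{\CondTh}=1$, whereas the paper treats general 3D polyhedral meshes), the vertex-based scheme in SPD form amounts to assembling $\mathsf{G}^T\mathsf{H}\mathsf{G}$, with $\mathsf{G}$ the edge-vertex incidence matrix (discrete gradient), $\mathsf{H}$ a diagonal discrete Hodge operator (the 1D analogue of the orthogonal-mesh case), and the discrete divergence obtained by adjunction:

```python
import numpy as np

n = 10                       # primal edges on (0,1); vertices x_j = j*h
h = 1.0 / n
lam = 1.0                    # scalar conductivity

# Discrete gradient: edge-vertex incidence matrix (circulations along edges).
G = np.zeros((n, n + 1))
for e in range(n):
    G[e, e], G[e, e + 1] = -1.0, 1.0
Gi = G[:, 1:n]               # homogeneous Dirichlet: drop boundary vertices

# Diagonal discrete Hodge operator: maps edge circulations to dual-face
# fluxes, scaling by lam/|edge| (dual faces have unit measure in 1D).
H = (lam / h) * np.eye(n)

# Vertex-based SPD system: the discrete divergence is the adjoint of GRAD.
A = Gi.T @ H @ Gi
b = h * np.ones(n - 1)       # source s = 1 integrated over dual cells

p = np.linalg.solve(A, b)
x = np.linspace(0.0, 1.0, n + 1)[1:n]
p_exact = x * (1.0 - x) / 2.0   # exact solution of -p'' = 1, p(0)=p(1)=0

assert np.max(np.abs(p - p_exact)) < 1e-10
```

In 1D this degenerates to the standard second-order finite-difference scheme (exact here because the solution is quadratic); on polyhedral meshes the discrete Hodge operator is, in general, sparse SPD rather than diagonal, as discussed above.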
Sections~3 and~4 are devoted to the
analysis of the vertex-based scheme in SPD
form. Section~3 develops an algebraic
viewpoint. Firstly, we recall the basic error estimate in terms of the
consistency error related to the lack of commuting property between the
discrete Hodge operator and the de Rham maps. Then, we state the two
design conditions on the local discrete Hodge operators, namely, on each primal cell, a stability
condition and a ${\mathbb{P}}_0$-consistency condition. Then, under
some mesh regularity assumptions, we establish new discrete functional
analysis results, namely the discrete counterpart of the well-known
Sobolev embeddings. While only the discrete Poincar\'e inequality is used
herein (providing stability for the discrete problem), the more general
result is important in view of nonlinear problems. Finally, we complete the
analysis by establishing a first-order error estimate in discrete energy and
complementary energy norms for smooth solutions in Sobolev spaces. A similar error
estimate has been derived recently by Codecasa and
Trevisan~\cite{CodTr:10} under the stronger, piecewise Lipschitz assumption on the exact gradient and flux. We close Section~3 by showing that
the present vertex-based scheme fits into the general framework of nodal
Mimetic Finite Difference (MFD) schemes analyzed by Brezzi \etal~\cite{BreBufLip09NodalMFD}.
Section~4 adopts a more specific viewpoint to design the
discrete Hodge operator using a dual barycentric mesh. The idea is to
introduce local reconstruction functions to reconstruct a gradient on each primal cell. Examples include Whitney forms on tetrahedral meshes using edge finite element functions and the Discrete Geometric Approach of Codecasa
\etal~\cite{CodST:09,CodST:10} using piecewise constant functions on a
simplicial submesh. An important observation is that we allow for
\emph{nonconforming} local gradient reconstructions, while existing
literature has mainly focused on conforming reconstructions (see, e.g.\@\xspace,
Arnold \etal~\cite{ArnFalWin06FEEC}, Back~\cite{Back:11}, Buffa and
Christiansen~\cite{BufChr07DualFE}, Christiansen~\cite{Chris:08},
Gillette~\cite{Gille:11}, and Kreeft \etal~\cite{KrPaG:11}).
We first state the design conditions on the local reconstruction functions,
following the ideas of Codecasa and Trevisan~\cite{CodTr:10}, and show that
the local discrete Hodge operator then satisfies the algebraic
design conditions of Section~3. This yields first-order error estimates on the reconstructed gradient and flux for smooth solutions. We also show that the present
scheme fits into the theoretical framework of Approximate Gradient
Schemes introduced by Eymard \etal~\cite{EymGuiHer10VAG}. Finally, we prove a second-order $L^2$-error estimate for the potential; to our knowledge, this is the first result of this type for vertex-based schemes on polyhedral meshes.
Section~5 deals with the design and analysis of cell-based schemes, first under an algebraic viewpoint and then using local flux reconstruction on a dual barycentric mesh. Our theoretical results are similar to those derived in~\S3 and~\S4 for vertex-based schemes. The cell-based schemes fit the unified analysis framework derived by Droniou \etal~\cite{DEGH10UnifiedApproach}, so that they constitute a specific instance of mixed Finite Volume (see Droniou and Eymard~\cite{DroEy:06}) and cell-based Mimetic Finite Difference (see Brezzi \etal~\cite{BreLipSha05ConvMFD}) schemes.
Finally, we present numerical results in Section~6 and draw
some conclusions in Section~7. Appendices~A
and~B contain the proof of some technical results.
\vspace*{2cm}
\begin{center}
\begin{tabular}{c}
\hline
\\
The detailed material is available from \url{http://hal.archives-ouvertes.fr/hal-00751284} \\
\\
\hline
\end{tabular}
\end{center}
\vspace*{1cm}
\section*{}
In this work, we have analyzed Compatible Discrete Operator schemes for
elliptic problems. We have considered both vertex-based and cell-based
schemes. The cornerstone is the design of the discrete Hodge operator
linking gradients to fluxes, and whose key properties have been stated
first under an algebraic viewpoint and then using the idea of local
(nonconforming) gradient reconstruction on a dual barycentric
mesh. Links between the compatible discrete operator approach and
various existing schemes have been explored. The present approach
distinguishes the primal and dual meshes and handles the discrete
gradient and divergence operators explicitly, without recombining them
with other operators. In the vertex-based (resp., cell-based) setting,
the discrete gradient is attached to the primal (resp., dual) mesh and
the discrete divergence to the dual (resp., primal) mesh. This contrasts
with Mimetic Finite Difference (either nodal or cell-based) and Finite
Volume schemes which handle only one discrete differential operator
explicitly (as already pointed out by Hiptmair~\cite{Hip01DiscHodge}), and also with Discrete Duality Finite Volume schemes (see,
e.g., Domelevo and Omnes~\cite{DomOm:05} and
Andreianov~\etal~\cite{AndBH:07}) which handle simultaneously the
discrete gradient and divergence operators on both primal and dual
meshes, thus leading to larger discrete systems. Additional topics to be
explored regarding elliptic problems include hybridization of the
cell-based scheme (in the spirit of hybrid Finite Volume schemes, see
Eymard \etal~\cite{Eymard2010Discretization}), higher-order extensions
of both schemes (e.g.\@\xspace, by allowing more than constants in the kernel of
the relevant commuting operator), and extensive benchmarking to study the
computational efficiency of the approach.
\section{Introduction}
Solar Cycle 24 was predicted to begin in 2008 March ($\pm$ 6 months), and peak in late 2011 or mid-2012,
with a cycle length of 11.75 years. So, the recent paucity of sunspots and the delay in the expected
start of Solar Cycle 24 were unexpected, even though it is well known that solar cycles are challenging
to forecast \citep{biesecker07,kilciketal09}. Since traditional models based on sunspot data require
information about the starting and rise times, and also the shape and amplitude of the cycle, the fine
details of a given solar cycle can be predicted accurately only after a cycle has begun ({\it e.g.},
\citealt{elling+schwentek92,joselynetal96}). Many of these models analyze a large number of previous
cycles in order to predict the pattern for the new cycle. In contrast, the technique of helioseismology
does not depend on sunspot data and has been used to predict activity two cycles into the future; this
method was used by \citet{dikpatietal06} to predict that sunspots will cover a larger area of the sun
during Cycle 24 than in previous cycles, and that the cycle will reach its peak about 2012, one year
later than forecast by alternative methods based on sunspot data ({\it e.g.}, \citealt{detomaetal04}).
The measurements of the length of the sunspot cycle show that the cycle varies typically between
10 and 12 years. Moreover, these variations in the cycle length have been associated with changes
in the global climate \citep{wilson06,wilsonetal08}. In addition, the Maunder Minimum illustrates
a connection between a paucity of sunspots and cooler than average temperatures on Earth.
The length of the sunspot cycle was first measured by Heinrich Schwabe in 1843 when he
identified a 10-year periodicity in the pattern of sunspots from a 17-year study conducted
between 1826 and 1843 \citep{schwabe}. In 1848, Rudolph Wolf introduced the relative sunspot
number, R, organized a program of daily observations of sunspots, and reanalyzed all
earlier data to find that the average length of a solar cycle was about 11 yrs.
For more than two centuries, solar physicists applied a variety of techniques
to determine the nature of the solar cycle. The earliest methods involved counting
sunspot numbers and determining durations of cyclic activity from sunspot minimum
to minimum using the ``smoothed monthly mean sunspot number'' \citep{waldmeier61, wilson87,
wilson94}. The ``Group sunspot number'' introduced by \citet{hoyt+schatten98} is another
well-documented data set and provides comparable results to those derived from relative sunspot numbers.
In addition, sunspot area measurements since 1874 describe the total surface area of
the solar disk covered by sunspots at a given time.
The analysis of
sunspot numbers or sunspot areas is often referred to as a one-dimensional approach
because there is only one independent variable, namely sunspot numbers or areas
\citep{wilson94}. Recently, \citet{lietal05} introduced a new parameter called the
``sunspot unit area'' in an effort to combine the information about the sunspot
numbers and sunspot areas to derive the length of the cycle. There is also a
two-dimensional approach in which the latitude of an observed sunspot is introduced
as a second independent variable \citep{wilson94}. When sunspots first appear on
the solar surface they tend to originate at latitudes around 40 degrees and migrate
toward the solar equator. When such migrant activity is taken into account it can
be shown that there is an overlap between successive cycles, since a new cycle begins
while its predecessor is still decaying. This overlap became obvious when
\citet{maunder04} published his butterfly diagram and demonstrated the latitude drift
of sunspots throughout the cycles. Maunder's butterfly diagram showed that although
the length of time between sunspot minima is on average 11 years, successive cycles
actually overlap by $\sim$ 1 to 2 years. In addition, \citet{wilson87} found that
there were distinct solar cycles lasting 10 years as well as cycles lasting 12 years. This type
of behavior suggests that there could be a periodic pattern in the length of the
sunspot cycle. A summary of analyses of the sunspot cycle is found in \citet{kuklin76} and
a more recent review of the long-term variability is given by
\citet{usoskin+mursula03}.
Sunspot number data collected prior to the 1700's show epochs in which
almost no sunspots were visible on the solar surface. One such epoch, known as the
Maunder Minimum, occurred between the years 1642 and 1705, during which the number
of sunspots recorded was very low in comparison to later epochs \citep{wilson94}.
Geophysical data and tree-ring radiocarbon data, which contain residual traces of solar
activity \citep{bal85}, were used to examine whether the Maunder period truly had a lower
number of sunspots or whether it was simply a period in which little data had been collected
or large degrees of errors existed. These studies showed that the timing of the Maunder
Minimum was fairly accurate because of the high quality of sunspot data during that period,
including sunspot drawings, and the dates are strongly correlated with geophysical data.
Other epochs of significantly reduced solar activity include the Oort Minimum
from 1010 - 1050, the Wolf Minimum from 1280 - 1340, the Sp\"{o}rer Minimum from
1420 - 1530 \citep{eddy77, stuiver80, siscoe80}, and the Dalton Minimum from
1790 - 1820 \citep{usoskin+mursula03}. These minima have been derived from
historical sunspot records, auroral histories \citep{eddy76}, and physical models which
link the solar cycle to dendrochronologically-dated radiocarbon concentrations
\citep{solankietal04}.
Our interest in predicting flaring activity cycles on cool stars ({\it e.g.}, \citealt{richardsetal03})
led us to investigate the long-term behavior of the solar cycle since solar flares display a typical
average 11-year cycle like sunspots \citep{bala+regan94}. In this paper, the preliminary results of
which were published in \citet{rogers+richards04} and \citet{rogersetal06}, we investigate the
variations in the length of the sunspot number cycle and examine whether the variability can be
explained in terms of a secular pattern. Our analysis can serve as a tutorial. We apply classical
one-dimensional techniques to recalculate the periodicities of solar activity using the sunspot number
and area data to provide internal consistency in our analysis of the long-term
behavior. These results are then used as a basis in the subsequent study of the sun's long-term
behavior. In \S2 we discuss the source of the data; in \S3 we describe the derivation of the
cycle from sunspot numbers and sunspot areas using two independent techniques; in \S4 we
examine the variability in the cycle length based on the times of cycle minima and maxima using
two independent techniques; and in \S5 we discuss the results.
\begin{deluxetable}{lll}
\tablecolumns{3}
\tablewidth{0pc}
\tabletypesize{\normalsize}
\tablecaption{Duration of the Data}
\tablehead{
\colhead{Data Set} & \colhead{ } & \colhead{Duration of Data}
}
\startdata
Spot Number & Daily & 1818 Jan 8 - 2005 Jan 31 \\
& Monthly & 1749 Jan - 2005 Jan \\
& Yearly & 1700 - 2004 \\
\hline
Spot Area & Daily & 1874 May 9 - 2005 Feb 28 \\
& Monthly & 1874 May - 2005 Feb \\
& Yearly & 1874 - 2004 \\
\enddata
\end{deluxetable}
\section{Data Collection}
The sunspot data used in this work were collected from archival sources that catalog sunspot numbers and
sunspot areas, as well as the measured length of the sunspot cycle. The sunspot
number data, covering the years from 1700 - 2005, were archived by the National Geophysical
Data Center (NGDC). These data are listed in individual sets of daily, monthly,
and yearly numbers. The relative sunspot number, R, is defined as R = K (10g + s),
where g is the number of sunspot groups, s is the total number of distinct spots,
and the scale factor K (usually less than unity) depends on the observer and is
``intended to effect the conversion to the scale originated by Wolf'' \citep{ngdc}.
The scale factor was 1 for the original Wolf sunspot number calculation. The spot
number data sets are tabulated in Table 1 and plotted in Figure \ref{f1}.
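For concreteness, the relative sunspot number defined above can be computed as follows (a trivial sketch; the example counts and the illustrative observer scale factor $K=0.5$ are arbitrary choices, with $K=1$ being Wolf's original scale):

```python
def relative_sunspot_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Wolf relative sunspot number R = K(10g + s)."""
    return k * (10 * groups + spots)

# One group containing five distinct spots on Wolf's original scale (K = 1):
assert relative_sunspot_number(1, 5) == 15.0
# An observer with K = 0.5 reports a smaller R for the same counts:
assert relative_sunspot_number(1, 5, k=0.5) == 7.5
```

This makes explicit why a single new group raises R by at least 11: ten for the group plus one for the spot itself.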
\begin{deluxetable}{cccc}
\tablecolumns{4}
\tablewidth{0pc}
\tabletypesize{\footnotesize}
\tablecaption{Length of the Sunspot Cycle}
\tablehead{
\colhead{} & \colhead{} & \colhead{Cycle Length} & \colhead{Cycle Length} \\
\colhead{Year of} & \colhead{Year of } & \colhead{(from minima)} & \colhead{(from maxima)} \\
\colhead{Minimum} & \colhead{Maximum} & \colhead{(yr)} & \colhead{(yr)}
}
\startdata
1610.8 & 1615.5 & ~8.2 & 10.5\\
1619.0 & 1626.0 & 15.0 & 13.5\\
1634.0 & 1639.5 & 11.0 & ~9.5\\
1645.0 & 1649.0 & 10.0 & 11.0\\
1655.0 & 1660.0 & 11.0 & 15.0\\
1666.0 & 1675.0 & 13.5 & 10.0\\
1679.5 & 1685.0 & 10.0 & ~8.0\\
1689.0 & 1693.0 & ~8.5 & 12.5\\
1698.0 & 1705.5 & 14.0 & 12.7\\
1712.0 & 1718.2 & 11.5 & ~9.3\\
1723.5 & 1727.5 & 10.5 & 11.2\\
1734.0 & 1738.7 & 11.0 & 11.6\\
1745.0 & 1750.3 & 10.2 & 11.2\\
1755.2 & 1761.5 & 11.3 & ~8.2\\
1766.5 & 1769.7 & ~9.0 & ~8.7\\
1775.5 & 1778.4 & ~9.2 & ~9.7\\
1784.7 & 1788.1 & 13.6 & 17.1\\
1798.3 & 1805.2 & 12.3 & 11.2\\
1810.6 & 1816.4 & 12.7 & 13.5\\
1823.3 & 1829.9 & 10.6 & ~7.3\\
1833.9 & 1837.2 & ~9.6 & 10.9\\
1843.5 & 1848.1 & 12.5 & 12.0\\
1856.0 & 1860.1 & 11.2 & 10.5\\
1867.2 & 1870.6 & 11.7 & 13.3\\
1878.9 & 1883.9 & 10.7 & 10.2\\
1889.6 & 1894.1 & 12.1 & 12.9\\
1901.7 & 1907.0 & 11.9 & 10.6\\
1913.6 & 1917.6 & 10.0 & 10.8\\
1923.6 & 1928.4 & 10.2 & ~9.0\\
1933.8 & 1937.4 & 10.4 & 10.1\\
1944.2 & 1947.5 & 10.1 & 10.4\\
1954.3 & 1957.9 & 10.6 & 11.0\\
1964.9 & 1968.9 & 11.6 & 11.0\\
1976.5 & 1979.9 & 10.3 & ~9.7\\
1986.8 & 1989.6 & ~9.7 & 10.7\\
1996.5 & 2000.3 & -- & -- \\
\hline
Average & & 11.0$\pm$1.5 & 11.0$\pm$2.0
\enddata
\end{deluxetable}
The sunspot area data, beginning on 1874 May 9, were compiled by the Royal
Greenwich Observatory from a small network of observatories. In
1976, the United States Air Force began compiling its own database from its Solar
Optical Observing Network (SOON) and the work continued with the help of the National
Oceanic and Atmospheric Administration (NOAA) \citep{hath04}. The NASA compilation
of these separate data sets lists sunspot area as the total whole spot area in
millionths of solar hemispheres. We have analyzed the compiled daily sunspot areas
as well as their monthly and yearly sums. The sunspot area data sets were tabulated
in Table 1 and plotted in Figure \ref{f2}. There may be subtle differences between
the two data sets since the sunspot number and area data were collected in different
ways and by different groups, but these differences should reveal themselves when the
data are analyzed.
The sunspot number cycle data from years 1610 to 2000 are shown in Table 2. This table displays
the dates of cycle minima and maxima as well as the cycle lengths calculated from those
minima and maxima. The first three columns of this table were taken from the NGDC \citep{ngdc},
and we calculated the fourth column from the dates of cycle maxima. These sunspot cycle data
are discussed further in \S 4.
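The minimum-to-minimum cycle lengths in Table 2 follow directly from differencing successive dates of minima; the quoted average can be checked from the NGDC dates reproduced in the table:

```python
import numpy as np

# Dates of sunspot-cycle minima from Table 2 (NGDC).
minima = np.array([
    1610.8, 1619.0, 1634.0, 1645.0, 1655.0, 1666.0, 1679.5, 1689.0,
    1698.0, 1712.0, 1723.5, 1734.0, 1745.0, 1755.2, 1766.5, 1775.5,
    1784.7, 1798.3, 1810.6, 1823.3, 1833.9, 1843.5, 1856.0, 1867.2,
    1878.9, 1889.6, 1901.7, 1913.6, 1923.6, 1933.8, 1944.2, 1954.3,
    1964.9, 1976.5, 1986.8, 1996.5,
])

lengths = np.diff(minima)           # 35 minimum-to-minimum cycle lengths
mean, std = lengths.mean(), lengths.std()

# Reproduces the 11.0 +/- 1.5 yr average quoted in Table 2.
assert abs(mean - 11.0) < 0.1
assert abs(std - 1.5) < 0.1
```

(A few tabulated lengths differ from the raw differences by up to 0.5 yr, presumably reflecting rounding of the quoted dates, but the average is unaffected.)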
\section{The Length of the Sunspot Cycle from Sunspot Numbers and Areas}
The sunspot number and sunspot area data were analyzed to provide a basis for the
analysis of the long-term behavior of the Sun. We used the same techniques that
were used by \citet{richardsetal03} in their study of radio flaring cycles of
magnetically active close binary star systems.
\begin{figure}[h]
\figurenum{1}
\epsscale{1.23}
\hspace{-17pt}
\plotone{f1.eps}
\caption{Archival data for (a) daily, (b) monthly, and (c) yearly sunspot numbers
from 1700 to 2005.
\label{f1}}
\end{figure}
\begin{figure}[h]
\figurenum{2}
\epsscale{1.23}
\hspace{-17pt}
\plotone{f2.eps}
\caption{Archival data for (a) daily, (b) monthly, and (c) yearly sums of
whole sunspot areas (0.001 x Solar Hemispheres) from 1874 May to 2005 February.
\label{f2}}
\end{figure}
\subsection{Power Spectrum \& PDM Analyses}
Two independent methods were used to determine the solar activity cycles. In the
first method, we analyzed the power spectrum obtained by calculating the Fast
Fourier transform (FFT) of the data. The Fourier transform of a function $h(t)$ is
described by $H(\nu) = \int h(t) ~e^{2 \pi i \nu t} dt$ for frequency, $\nu$, and
time, $t$. This transform becomes a $\delta$ function at frequencies that
correspond to true periodicities in the data, and subsequently the power spectrum
will have a sharp peak at those frequencies. The Lomb-Scargle periodogram analysis
for unevenly spaced data was used \citep{pressetal92}.
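As a sanity check of the first method on synthetic data (an illustration only, not the archival series; the signal period, sampling cadence, and baseline are arbitrary choices), an FFT power spectrum of a pure sinusoid with an 11-year period, sampled monthly over 300 years, peaks at the frequency bin nearest the true frequency:

```python
import numpy as np

dt = 1.0 / 12.0                        # monthly sampling, in years
t = np.arange(0.0, 300.0, dt)          # 300-yr synthetic baseline
x = np.sin(2.0 * np.pi * t / 11.0)     # 11-yr "activity cycle"

power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)  # cycles per year

peak = np.argmax(power[1:]) + 1        # skip the zero-frequency bin
period = 1.0 / freqs[peak]

# Frequency resolution is 1/300 yr^-1, so the recovered period lands
# within half a bin of the true 11-yr periodicity.
assert abs(period - 11.0) < 0.5
```

For the evenly sampled synthetic series a plain FFT suffices; the Lomb-Scargle periodogram used in the paper handles the gaps present in the real daily records.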
In the second method, called the Phase Dispersion Minimization (PDM) technique
\citep{stellingwerf78}, a test period was chosen and checked to determine if it
corresponded to a true periodicity in the data. The goodness of fit parameter,
$\Theta$, approaches zero when the test period is close to a true periodicity. PDM
produces better results than the FFT in the case of non-sinusoidal data. The
goodness of fit between a test period, $\Pi$, and a true period, $P_{true}$, is given
by the statistic $\Theta = s^2 / \sigma_t^2$, where the data are divided into $M$
groups or samples, $$\sigma_t^2 = {\sum (x_i - \bar x)^2\over(N - 1)}, \hskip50pt
s^2 = {\sum ( n_j - 1)s_j^2\over(\sum n_j - M)},$$ $s^2$ is the pooled variance of the $M$
samples within the data set, $x_i$ is a data element, $\bar x$ is the
mean of the data, $N$ is the total number of data points, $n_j$ is the number of data
points contained in sample $j$, and $s_j^2$ is the variance of sample $j$.
If $\Pi \neq P_{true}$, then $s^2 \approx \sigma_t^2$ and $\Theta \approx 1$. However, if $\Pi
= P_{true}$, then $\Theta$ $\to$ 0 (or a local minimum).
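The $\Theta$ statistic admits a compact implementation (a sketch on synthetic, irregularly sampled data; the bin count, sample size, and test periods are arbitrary choices, not parameters from the paper):

```python
import numpy as np

def pdm_theta(t, x, period, n_bins=10):
    """Stellingwerf theta = s^2 / sigma_t^2 for a single test period."""
    sigma2 = np.var(x, ddof=1)                 # overall variance, sigma_t^2
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    num, den = 0.0, 0
    for j in range(n_bins):                    # pool within-bin variances
        xj = x[bins == j]
        if xj.size > 1:
            num += (xj.size - 1) * np.var(xj, ddof=1)
            den += xj.size
    s2 = num / (den - n_bins)                  # s^2 from the formula above
    return s2 / sigma2

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 300.0, 2000))     # irregular sampling times
x = np.sin(2.0 * np.pi * t / 11.0)             # 11-yr signal

assert pdm_theta(t, x, 11.0) < 0.2             # near a true period: theta -> 0
assert pdm_theta(t, x, 7.0) > 0.7              # off-period: theta ~ 1
```

Folding at the true 11-yr period leaves little scatter within each phase bin, so $\Theta$ is small; at an unrelated test period the phases scramble and $\Theta$ stays near unity.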
All solutions from the two techniques were checked for numerical relationships with
(i) the highest frequency of the data (corresponding to the data sampling
interval); (ii) the lowest frequency of the data, $dt$ (corresponding to the
duration or time interval spanned by the data); (iii) the Nyquist frequency, $N/(2
dt)$; and in the case of PDM solutions (iv) the maximum test period assumed. A
maximum test period of 260 years was chosen for all data sets, except in the case
of the more extensive yearly sunspot number data when a maximum of 350 years was
assumed. We chose the same maximum test period for the sunspot area analysis for
consistency with the sunspot number analysis, even though these test periods are
longer than the duration of the area data.
\begin{figure}
\figurenum{3}
\epsscale{1.22}
\hspace{-17pt}
\plotone{f3.eps}
\caption{Frequencies of solar activity derived from power spectrum (upper frame) and PDM
(lower frame) analyses calculated from (a) daily, (b) monthly, and (c) yearly sunspot
numbers. The labels within the plot show the durations of the derived cycles in units
of years.
\label{f3}}
\end{figure}
\begin{figure}
\figurenum{4}
\epsscale{1.22}
\hspace{-17pt}
\plotone{f4.eps}
\caption{Frequencies of solar activity derived from power spectrum (upper frame) and PDM
(lower frame) analyses calculated from (a) daily, (b) monthly, and (c) yearly sums of
sunspot areas from 1874 May to 2005 February. The labels within the plot show the
durations of the derived cycles in units of years.
\label{f4}}
\end{figure}
\subsection{Results of Power Spectrum and PDM Analyses \label{fftpdmresults}}
The results from the FFT and PDM analyses of sunspot number and sunspot area data
are illustrated in Figures \ref{f3} and \ref{f4}, corresponding to the daily,
monthly, and yearly sunspot numbers and the daily, monthly, and yearly sunspot
areas, respectively. In these figures, the top frame shows the power spectrum
derived from the FFT analysis, while the bottom frame shows the $\Theta$-statistic
obtained from the PDM analysis. We specifically used two independent techniques so
that we could test for consistency and determine the common patterns evident in
the data. The fact that the two techniques produced similar results shows that the
assumptions made in these techniques have minimal influence on the results. As
expected, our results confirmed the work done by earlier studies.
The sunspot cycles derived from these results are summarized in Table 3. The most
significant periodicities corresponding to the 50 highest powers and the 50 lowest
$\Theta$ values suggest that the solar cycle derived from sunspot numbers is 10.95
$\pm$ 0.60 years, while the value derived from sunspot area is 10.65 $\pm$ 0.40
years. The average sunspot cycle from both the number and area data is 10.80 $\pm$
0.50 years. The strongest peaks in Figures \ref{f3} and \ref{f4} correspond to this
dominant average periodicity over a range from $\sim$7 years up to $\sim$12
years. A weaker periodicity was also identified from the PDM analysis with an
average period of 21.90 $\pm$ 0.66 years over a range from $\sim$20 -- 24 years.
\begin{deluxetable}{llcc}
\tablecolumns{4}
\tablewidth{0pc}
\tabletypesize{\normalsize}
\tablecaption{Schwabe Cycle Derived from FFT \& PDM Analyses}
\tablehead{
\colhead{ } & \colhead{ } & \multicolumn{2}{c}{Schwabe Cycle (yrs)} \\
\cline{3-4}
\colhead{Data Set} & \colhead{ } & \colhead{FFT} & \colhead{PDM}
}
\startdata
Sunspot Number & daily & 10.85 $\pm$0.60 & 10.86 $\pm$0.27 \\
& monthly & 11.01 $\pm$0.68 & 11.02 $\pm$0.68 \\
& yearly & 10.95 $\pm$0.72 & 11.01 $\pm$0.64 \\
\cline{1-4}
Average (Number) &&& 10.95 $\pm$0.60 \\
\cline{1-4}
Sunspot Area & daily & 10.67 $\pm$0.44 & 10.67 $\pm$0.42 \\
& monthly & 10.67 $\pm$0.39 & 10.66 $\pm$0.39 \\
& yearly & 10.62 $\pm$0.39 & 10.62 $\pm$0.36 \\
\cline{1-4}
Average (Area) &&& 10.65 $\pm$0.40 \\
\cline{1-4}
Average (All data) & & & 10.80 $\pm$0.50
\enddata
\end{deluxetable}
The errors for the FFT and PDM analyses were derived by measuring the Full Width at
Half Maximum (FWHM) of each dominant peak for each data set. The $1\sigma$ error
is then defined by $\sigma = {\rm FWHM}/2.35$. The three averages given in Table 3 were
determined by averaging the dominant solutions from the FFT and PDM analyses for
each data set. The errors in the averages were determined using standard
techniques \citep{bevington69,topping72}. While the errors for the sunspot area
results are smaller than those for the spot numbers, the area data are actually
less accurate than the sunspot number data because the measurement error in the
areas may be as high as 30\% \citep{hath04}. The higher errors for the area data
are related to the difficulty in determining a precise spot boundary.
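The FWHM-to-$\sigma$ conversion above can be illustrated with a toy peak measurement (a sketch assuming a single, well-sampled, roughly Gaussian peak; this is not the reduction pipeline actually used):

```python
import math

def half_max_width(freqs, power):
    # Crude FWHM estimate: extent of the region where the power exceeds
    # half of the peak value (assumes one dominant, well-sampled peak).
    half = max(power) / 2.0
    above = [f for f, p in zip(freqs, power) if p >= half]
    return max(above) - min(above)

def fwhm_to_sigma(fwhm):
    # 1-sigma error from the FWHM, assuming a roughly Gaussian profile
    return fwhm / 2.35

# A synthetic Gaussian peak at 0.09 cycles/yr with sigma = 0.005
freqs = [i * 1.0e-4 for i in range(2001)]
power = [math.exp(-((f - 0.09) ** 2) / (2.0 * 0.005 ** 2)) for f in freqs]
sigma = fwhm_to_sigma(half_max_width(freqs, power))
```

Applied to the synthetic peak, the recovered $\sigma$ matches the input width of the Gaussian to well under the grid resolution.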
Longer periodicities that could not be eliminated because of relationships with the
duration of the data set or other frequencies related to the data (as described in
\S 3.1) were also identified with durations ranging from $\sim$90 -- 260 years
(Figures \ref{f3} and \ref{f4}). These long-term periodicities are discussed
further in the following section.
\section{Variability in the Length of the Sunspot Cycle from Cycle Minima and Maxima}
The previous analysis of sunspot data provided some evidence of long term cycles in
the data. This secular behavior was studied in greater detail through an analysis of
the dates of sunspot minima and maxima from 1610 to 2000, as shown in Table 2. Since
there have been concerns about the difficulty in deriving the exact times of sunspot
minima, and the even greater complexity in the determination of the maxima, we derived
our results using the cycle minima and maxima separately. The sunspot cycle lengths
were calculated in two ways: (i) from the dates of successive cycle minima provided by
the NGDC, and (ii) from our calculations of cycle lengths derived from the maxima.
These cycle lengths are tabulated in Table 2 and plotted in Figure \ref{f5}. The data
in Figure 5 show substantial variability over time.
The cycle lengths derived from the dates of sunspot minima and maxima were analyzed
to search for periodicities in the cycle length using two techniques: (i) a median
trace analysis and (ii) a power spectrum analysis of the `Observed minus
Calculated' or (O-C) residuals.
\begin{figure}
\figurenum{5}
\epsscale{1.22}
\hspace{-17pt}
\plotone{f5c.eps}
\caption{Sunspot cycle durations derived from successive minima (crosses) and successive maxima (dots) for dates from 1610.8 to 1989.6.
\label{f5}}
\vspace{10pt}
\end{figure}
\subsection{Median Trace Analysis}
Median trace analyses have been used to identify hidden trends in scatter plots which,
at first glance, display no obvious pattern (e.g., \citealt{moore+mccabe05}). These
analyses have also been applied to astronomical data (e.g., \citealt{gottetal01,avelinoetal02,chen+ratra03,chenetal03}).
The method of median trace analysis is applicable to any scatter plot, irrespective of
how measurements were obtained, and is one of a general class of smoothing methods designed
to identify trends in scatter plots.
A median trace is a plot of the median value of the data contained within a bin of
a chosen width, for all bins in the data set \citep{hoaglin83}. A median trace
analysis depends on the choice of an optimal interval width (OIW). These OIWs,
$h_n$, were calculated using three statistical methods applied routinely to estimate
the statistical density function of the data. The first method defines the OIW as
\begin{equation}
h_{n,1} = \frac{3.49\,{\tilde s}}{n^{1/3}}
\end{equation}
\noindent
where $n$ is the number of data points and $\tilde s$, a statistically robust
measure of the standard deviation of the data called the mean absolute deviation from the sample
median, is defined as
\begin{equation} {\tilde s} = \frac{1}{n}~\sum_{i=1}^{n} \vert x_i - M \vert ~, \end{equation}
where $M$ is the sample median. The second method defines the OIW as
\begin{equation}
h_{n,2} = 1.66\,{\tilde s}\left(\log_en\over
n\right)^{1/3}.
\end{equation}
\noindent A third definition of the OIW is given by
\begin{equation}
h_{n,3} = {2 \times IQR\over n^{1/3}} ~ ,
\end{equation}
\noindent
where $IQR$ is the interquartile range of the data set. Optimal bin widths were
determined for three data sets corresponding to the cycle lengths derived from the
(i) cycle minima, (ii) cycle maxima, and (iii) the combined minima and maxima data.
Table 4 lists the solutions for the optimal interval widths ($h_{n,1}, h_{n,2},
h_{n,3}$) for each data set.
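The three OIW estimators can be written out directly. The sketch below follows the formulas above; note that the quartile convention used for the $IQR$ (nearest rank) is one of several and may differ slightly from the one used for Table 4:

```python
import math

def mean_abs_dev_from_median(xs):
    # s-tilde: mean absolute deviation from the sample median
    ys = sorted(xs)
    n = len(ys)
    med = ys[n // 2] if n % 2 else 0.5 * (ys[n // 2 - 1] + ys[n // 2])
    return sum(abs(x - med) for x in xs) / n

def interquartile_range(xs):
    # Nearest-rank quartiles (one of several common conventions)
    ys = sorted(xs)
    n = len(ys)
    return ys[(3 * n) // 4] - ys[n // 4]

def optimal_interval_widths(xs):
    n = len(xs)
    s = mean_abs_dev_from_median(xs)
    h1 = 3.49 * s / n ** (1.0 / 3.0)
    h2 = 1.66 * s * (math.log(n) / n) ** (1.0 / 3.0)
    h3 = 2.0 * interquartile_range(xs) / n ** (1.0 / 3.0)
    return h1, h2, h3
```

For 35 dates spread roughly uniformly over 1610--1990 (mimicking the cycle-minima set), the estimators reproduce the general magnitudes in Table 4: $h_{n,1} \approx 104$, $h_{n,2} \approx 76$, and $h_{n,3}$ somewhat above 110 years.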
Since the values of the optimal bin widths ranged from $\sim$60 -- 120 years, we
tested the impact of different bin widths on our results. This procedure was
limited by the fact that only 35 sunspot number cycles have elapsed since 1610 (see
Table 2). The data set can be increased to 70 points if we analyze the combined
values of the length of the solar cycle derived from both the sunspot minima and
the sunspot maxima. Using our derived OIWs as a basis for our analysis, we
calculated median traces for bin widths of 40, 50, 60, 70, 80, and 90 years. These
are illustrated in Figure \ref{f6}. The lower bin widths were included to make
maximum use of the limited number of data points, and the higher bin widths were
excluded because, once binned, there would be too few data points to make those
analyses meaningful.
\begin{deluxetable}{lccccc}
\tablecolumns{6}
\tablewidth{0pc}
\tabletypesize{\normalsize}
\tablecaption{Optimal Interval Widths}
\tablehead{
\colhead{ } & \colhead{Data} & \colhead{St. Dev.} & \multicolumn{3}{c}{Opt. Bin Width (yrs)}\\
\cline{4-6}
\colhead{Data Set} & \colhead{$n$} & \colhead{$\tilde s$ (yrs)} & \colhead{$h_{n,1}$} & \colhead{$h_{n,2}$} & \colhead{$h_{n,3}$}
}
\startdata
Cycle Minima & 35 & 97.4 & ~103.9 & 75.4 & 116.6\\
Cycle Maxima & 35 & 97.0 & ~103.4 & 75.1 & 115.4\\
Combined & 70 & 97.3 & ~82.4 & 63.5 & 91.8
\enddata
\end{deluxetable}
Figure \ref{f6} shows the binned data (median values) and the
sinusoidal fits to the binned data. The Least Absolute Error Method
\citep{bates88} was used to produce the sinusoidal fits to the median trace in each
frame of the figure. These sinusoidal fits illustrate the long-term cyclic
behavior in the length of the sunspot number cycle. The optimal solution was
determined by identifying the fits that satisfied two criteria: (1) the cycle periods
deduced from the three data sets should be nearly the same, and (2) the cyclic
patterns should be in phase for the three data sets. Table 5 lists the derived
cycle periods for all three data sets: the (a) cycle minima, (b) cycle maxima, and
(c) combined minima and maxima data.
\begin{figure}[h]
\figurenum{6}
\epsscale{1.23}
\hspace{-17pt}
\plotone{f6c.eps}
\caption{Median traces for sunspot minima data (left column) and maxima data (middle column)
derived for bin widths of 40 -- 90 years. A sinusoidal fit to the median trace is shown
for each bin width (minima-dashed line, maxima-dotted line, and combined maxima and
minima-solid line). The average period of each derived sinusoidal fit is given at the
top of each frame. The optimal fits (right column) show that the optimal
bin width is in the range of 50 -- 60 years because it is only in these two cases that
the sinusoidal fits are in phase and the derived periods are approximately equal for
all three data sets.
\label{f6}}
\end{figure}
\subsection{Results of Median Trace Analysis}
The lengths of the sunspot number cycles tabulated by the National Geophysical Data
Center (Table 2 \& Figure \ref{f5}) show that the basic sunspot number cycle is an
average of (11.0 $\pm$ 1.5) years based on the cycle minima and (11.0 $\pm$ 2.0)
years based on the cycle maxima. This Schwabe Cycle varies over a range from 8 to
15 years if the cycle lengths are derived from the time between successive minima,
while the range increases to 7 to 17 years if the cycle lengths are derived from
successive maxima. These variations may be significant even though the data in
Figure \ref{f5} show {\it heteroskedasticity}, i.e., variability in the standard
deviation of the data over time. Although the range in sunspot cycle durations is
large, the cycle length converged to a mean of 11 years, especially after 1818 as
the accuracy of the data became more reliable. In particular, the sunspot number cycle
lengths from 1610 -- 1750, when the data quality was poor, had a high variance, while the
cycle durations since 1818 show a much smaller variance (Figure \ref{f5}).
This variance may be influenced by the difficulty in identifying the dates of cycle minima and
maxima whenever the sunspot activity is relatively low. Even after the data became
more accurate there was still a significant $\pm$ 1.5-year range about the 11-year
mean. The range in the length of this cycle suggests that there may be a hidden
longer-term variability in the Schwabe cycle.
Our median trace analysis of the lengths of the sunspot number cycle uncovered
a long-term cycle with a duration between 146 and 419 years (Table 5), if the
data are binned in groups of 40 to 90 years (see \S 4.1). Since the median
trace analysis is influenced by the bin size of the data, we determined the
optimal bin width based on the goodness of fit between the median trace and
the corresponding sinusoidal fit (see Figure \ref{f6}). Based on the sunspot
minima (the best data set), the cycle length was 185 years for the 50-yr bin
width, 243 years for the 60-yr bin, 222 years for the 70-yr bin, 393 years for
the 80-year bin, and 299 years for the 90-yr bin; so we found no direct
relationship between the bin size and the resulting periodicity. Figure
\ref{f6} also shows the median traces for the data and illustrates that the
optimal bin width is in the range of 50 - 60 years because it is only in these
two cases that the sinusoidal fits are in phase and the derived periods are
approximately equal for all three data sets. The 50-year median trace
predicts a 183-year sunspot number cycle, while the 60-year trace predicts a
243-year cycle. Since the observations span $\sim$385 years, there is greater
confidence in the 183-year cycle than in the longer one because at least two
cycles have elapsed since 1610. Similar long-term cycles ranging from 169 to
189 years have been proposed for several decades \citep{kuklin76}.
\begin{deluxetable}{ccccc}
\tablecolumns{5}
\tablewidth{0pc}
\tabletypesize{\normalsize}
\tablecaption{Long-Term Periodicities Derived from the Median Trace Analysis}
\tablehead{
\colhead{Bin Width} & \multicolumn{4}{c}{Derived Periodicities (yrs)}\\
\cline{2-5}
\colhead{(yrs)} & \colhead{Minima} & \colhead{Maxima} & \colhead{Both} & \colhead{Average}
}
\startdata
40 & 157 & 165 & 146 & 156 $\pm$ 10\\
50 & 185 & 182 & 182 & 183 $\pm$ 2\\
60 & 243 & 243 & 243 & 243 \\
70 & 222 & 273 & 304 & 266 $\pm$ 41\\
80 & 393 & 349 & 419 & 387 $\pm$ 35\\
90 & 299 & 299 & 209 & 269 $\pm$ 52
\enddata
\end{deluxetable}
\subsection{Analysis of the (O-C) Data}
The median trace analysis gives us a rough estimate of the long-term sunspot cycle.
However, an alternative method to derive this secular period is to calculate the
power spectrum of the (O-C) variation of the dates corresponding to the (i) cycle
minima, (ii) cycle maxima, and (iii) the combined minima and maxima.
The following procedure was used to calculate the (O-C) residuals for each of the
data sets given above, based only on the dates of minima and maxima listed in Table
2. First, we defined the cycle number, $\phi$, to be $\phi = (t_i - t_0)/L$, where
$t_i$ are the individual dates of the extrema, and $t_0$ is the start date for each
data set. Here, L is the average cycle length (10.95 years) derived independently
by the FFT and PDM analyses from the sunspot number data (\S\ref{fftpdmresults}).
The (O-C) residuals were defined to be $$(O-C) = (t_i - t_0) - (N_c\times L),$$
where $N_c$ is the integer part of $\phi$ and represents the whole number of
cycles that have elapsed since the start date. The resulting (O-C) pattern was
normalized by subtracting the linear trend in the data. This trend was found by
fitting a least squares line to the (O-C) data. The normalized (O-C) data are
shown in Figure \ref{f7} along with the corresponding power spectra.
\begin{figure}[h]
\figurenum{7}
\epsscale{1.24}
\hspace{-20pt}
\plotone{f7c.eps}
\caption{The cycle length (O-C) residuals (left frames) and the corresponding power
spectrum (right frames) derived from the sunspot cycle minima (top frame),
maxima (middle frame), and combined minima and maxima data (bottom frame). The solid
line through the data represents the long term cycle derived from the power spectrum analysis.
\label{f7}}
\end{figure}
\subsection{Results of (O-C) Data Analysis}
The power spectra of the (O-C) data in Figure \ref{f7} show that the long term
variation in the sunspot number cycle has a dominant period of $188 \pm 38$ years.
The Gleissberg cycle was also identified in this analysis, with a period of $87 \pm
13$ years. The solutions for these analyses are illustrated in Figure \ref{f7} and
tabulated in Table 6. The $1\sigma$ errors were calculated from the FWHM of the
power spectrum peaks, as described in \S3.2. The sinusoidal fit to the (O-C)
data in Figure \ref{f7} corresponds to the dominant periodicity of 188 years
identified in the power spectra. Another cycle with a period of $\sim40$ years
was also found.
\begin{deluxetable}{lcc}
\tablecolumns{3}
\tablewidth{0pc}
\tabletypesize{\normalsize}
\tablecaption{Derived Long Term Solar Cycles}
\tablehead{
\colhead{ } & \colhead{Gleissberg} & \colhead{Secular} \\
\colhead{Data Set} & \colhead{(yrs)} & \colhead{(yrs)}
}
\startdata
Cycle Minima & 86.8 $\pm$ ~8.8 & 188 $\pm$ 40\\
Cycle Maxima & 86.3 $\pm$ 18.1 & 187 $\pm$ 37\\
Combined & 86.8 $\pm$ 10.7 & 188 $\pm$ 38\\
\cline{1-3}
Average & 86.6 $\pm$ 12.5 & 188 $\pm$ 38
\enddata
\end{deluxetable}
\section{Discussion and Conclusions}
Our study of the length of the sunspot cycle suggests that the cycle length should
be taken into consideration when predicting the start of a new solar cycle. The
variability in the length of the sunspot cycle was examined
through a study of archival sunspot data from 1610 -- 2005. In the
preliminary stage of our study, we analyzed archival data of sunspot numbers
from 1700 - 2005 and sunspot areas from 1874 - 2005 using power spectrum
analysis and phase dispersion minimization. This analysis showed that the
Schwabe Cycle has a duration of (10.80 $\pm$ 0.50) years (Table 3) and that
this cycle typically ranges from $\sim$10 -- 12 years even though the entire
range is from $\sim$7 -- 17 years. Based on our results, we have found
evidence to show that (1) the variability in the length of the solar
cycle is statistically significant. In addition, we predict that (2) the
length of successive solar cycles will increase, on average, over the next 75
years; and (3) the strength of the sunspot cycle should eventually reach a
minimum somewhere between Cycle 24 and Cycle 31, although we make no claims about
any specific cycle.
The focus of our study was to investigate whether there is a secular pattern in
the range of values for the Schwabe cycle length. We used our
derived value for the Schwabe cycle from Table 3 to examine the long-term
behavior of the cycle. This analysis was based on NGDC data from 1610--2000, a
period of 386 years (using sunspot minima) or 385 years (using sunspot
maxima). The long-term cycles were identified using median trace analyses of
the length of the cycle and also from power spectrum analyses of the (O-C)
residuals of the dates of sunspot minima and maxima. We used independent
approaches because of the inherent uncertainties in deriving the exact times
of minima and the even greater complexity in the determination of sunspot
maxima. Moreover, we derived our results from both the cycle minima and the
cycle maxima. The fact that we found similar results from the two data sets
suggests that the methods used to determine these cycles (NGDC data) did not
have any significant impact on our results.
\begin{figure*}[h]
\figurenum{8}
\epsscale{1.19}
\plotone{f8c.eps}
\caption{
Sinusoidal fits to the sunspot number cycle corresponding to the derived periods of (a) 183
years, (b) 243 years, and (c) 188 years, compared with (d) the sunspot number data. The fits
to (a) and (b) were produced from binned cycle minima (dashed line), cycle maxima (dotted
line), and a combination of the two data sets (solid line). The bottom frame shows
sunspot numbers from 1700 -- 2004 (modern, solid line), 1610 -- 1715 (early, dotted line), and
950 -- 1950 (ancient, dashed line) reconstructed from radiocarbon data. The 183- and 188-year
periodicities display the best match to the historical minima.
\label{f8}}
\end{figure*}
The median trace analysis of the length of the sunspot number cycle provided
secular periodicities of 183 -- 243 years. This range overlaps with the
long-term cycles of $\sim$90 -- 260 years which were identified directly from
the FFT and PDM analyses of the sunspot number and area data (Figures \ref{f3}
and \ref{f4}). The power spectrum analysis of the (O-C) residuals of the dates
of minima and maxima provided much clearer evidence of dominant cycles with
periods of $188 \pm 38$ years, $87 \pm 13$ years, and $\sim40$ years. These
results are significant because at least two long-term cycles have
transpired over the $\sim$385-year duration of the data set.
The derived long-term cycles were compared in Figure \ref{f8} with documented
epochs of significant declines in sunspot activity, like the Oort, Wolf, Sp\"{o}rer,
Maunder, and Dalton Minima \citep{eddy77, stuiver80, siscoe80}. In
this figure, the modern sunspot number data were combined with earlier data
from 1610-1715 \citep{eddy76} and with reconstructed (ancient) data spanning
the past 11,000 years \citep{solankietal04}. These reconstructed sunspot
numbers were based on dendrochronologically-dated radiocarbon concentrations
which were derived from models connecting the radiocarbon concentration with
sunspot number \citep{solankietal04}. The reconstructed sunspot numbers are
consistent with the occurrences of the historical minima (e.g., Maunder
Minimum). \citet{solankietal04} found that over the past 70 years, the level
of solar activity has been exceptionally strong. Our 188-year periodicity is
similar to the 205-year de Vries-Seuss cycle which has been
identified from studies of the carbon-14 record derived from tree rings ({\it e.g.},
\citealt{wagneretal01,braunetal05}).
Figure \ref{f8} compares the historical and modern sunspot numbers with the derived
secular cycles of length (a) 183 years (\S4.2), (b) 243-years (\S4.2), and (c) 188
years (\S4.4). The first two periodicities were derived from the median trace
analysis, while the third one was derived from the power spectrum analysis of the
sunspot number cycle (O-C) residuals. The fits for the 183-year periodicity all
had the same amplitude, but were moderately out of phase with each other, while the
fits for the 243-year periodicity were in phase for all data sets, albeit
with different amplitudes.
An examination of frames (a) and (c) of Figure \ref{f8} reveals that the cycle lengths
increased during each of the Wolf, Sp\"{o}rer, Maunder, and Dalton Minima for the 183-year
and 188-year cycles. On the other hand, frame (b) shows no similar correspondence between
the cycle length and the times of historic minima for the 243-year cycle. Therefore, the
183- and 188-year cycles appear to be more consistent with the sunspot number data than the
243-year cycle. All four historic minima since 1200 occurred during the rising portion of
the 183- and 188-year cycles when the length of the sunspot cycle was increasing. According
to our analysis, the length of the sunspot cycle was growing during the Maunder Minimum when
almost no sunspots were visible. Given this pattern of behavior, the next historic minimum
should occur during the time when the length of the sunspot cycle is increasing (see Fig. \ref{f8}).
The existence of long-term solar cycles with periods between 90 and 200 years
is not new to the literature but the nature of these cycles is still not fully
understood. Our study of the length of the sunspot cycle shows that there
is a dominant periodicity of 188 years related to the basic Schwabe Cycle
and weaker periodicities of $\sim$40 and 87 years. This 188-year period, determined
over a baseline of 385 years that spans more than two cycles of the long-term periodicity,
should be compared with Schwabe's 10-year period that was derived from
17 years (i.e., less than two cycles) of observations \citep{schwabe}.
Our study also suggests that the length of the sunspot
number cycle should increase gradually, on average, over the next $\sim$75 years,
accompanied by a gradual decrease in the number of sunspots. This information
should be considered in cycle prediction models ({\it e.g.},
\citealt{dikpatietal06}) to provide better estimates of the starting time of a given cycle.
\section*{Acknowledgments}
We thank K. S. Balasubramaniam for his comments on the manuscript, A.
Retter for his comments on the research and for his advice on the (O-C) analysis,
and D. Heckman for advice on the data analysis. The SuperMongo plotting program
\citep{lupton77} was used in this research. This work was partially supported by
National Science Foundation grants AST-0074586 and DMS-0705210.
\section{Introduction}
Differential K-theory is an enhanced version of topological K-theory constructed
by incorporating connections and differential forms. It was developed in order to
refine the Atiyah-Singer index theorem for families
and also to classify the Ramond-Ramond field strengths in string theory
\cite{bun}, \cite{free}. One model of the first group in differential K-theory
$\widehat{K}^0(X)$ for a manifold $X$ is a Grothendieck group of vector
bundles equipped with connections and odd differential forms \cite{kar}. In \cite{SS}, Simons
and Sullivan constructed another model of $\widehat{K}^0(X)$ using just vector
bundles and connections. They got rid of the differential form at the cost of
introducing an equivalence relation among the connections on the vector bundles.
More precisely,
they constructed $\widehat{K}^0(X)$ as the Grothendieck group of ``structured''
vector bundles; the definition of a structured vector bundle is recalled in
Section \ref{sec2}.
In addition to constructing a model of $\widehat{K}^0(X)$, Simons
and Sullivan in \cite{SS} proved an interesting result about
the existence of stable inverses of hermitian structured bundles. (It is Theorem 1.15 in
\cite{SS} which is essentially Theorem \ref{main} below for the unitary group.) However,
their proof works only for the unitary group (and also to the more general
case of compact Lie
groups) because they used the existence of universal connections \emph{\`a la}
Narasimhan-Ramanan \cite{NR}. In \cite{PT} a slightly different proof of the theorem
of Simons and Sullivan was given that did not involve universal connections. In fact,
the proof in \cite{PT} is valid even for connections that are not compatible with
the metric.
All the flat connections considered here will have trivial monodromy representation.
Note that if $\nabla$ is a flat connection with trivial monodromy on a vector bundle
$V$ over $X$, and $x_0$ is a point of $X$, then there is a unique isomorphism $f$ of
$V$ with the trivial vector bundle $X\times V_{x_0}\,\longrightarrow\, X$, equipped with
the trivial connection, such that $f$ is connection preserving and coincides with
the identity map over $x_0$ (here $V_{x_0}$ denotes the fiber of $V$ over $x_0$).
In this paper we prove the following theorem.
\begin{theorem}
Let $G$ be one of the groups ${\rm GL}(N,\mathbb{C})$,~
${\rm SO}(N,\mathbb{C})$,~ ${\rm Sp}(2N, \mathbb{C})$. Given a structured vector bundle
$\mathcal{V} \,=\, [V\, , \{ \nabla \} ]$ on a smooth manifold $X$ such that the holonomy
of some equivalent connection $\nabla$ is in $G$, there exists a structured inverse
$\mathcal{W} \,=\, [W\, , \{ \widetilde{\nabla} \} ]$ with the property that
the holonomy of $\widetilde{\nabla}$ is in the same $G$, satisfying
$$\mathcal{V} \oplus \mathcal {W} \,=\, [X \times \mathbb{C}^M \, ,
\{ \nabla _F \}]$$
where $\nabla_F$ is a flat connection with trivial monodromy on the trivial
vector bundle $X \times \mathbb{C}^M$ over $X$.
\label{main}
\end{theorem}
As an immediate consequence of Theorem \ref{main} we have the following corollary.
\begin{coro}
Let $\widehat{K^0}_G(X)$ be the Grothendieck group of structured vector bundles
$\mathcal{V} \,=\, [V,\{ \nabla \}]$ satisfying the
condition that the connection $\nabla$ has holonomy in $G$
(both $G$ and $X$ are as in Theorem \ref{main}). Let $d$ denote the trivial flat connection on a trivial bundle $X\times \mathbb{C}^k$. Then the following two hold:
\begin{enumerate}
\item Every element of $\widehat{K^0}_G(X)$ is of the form $\mathcal{V} - [k]$ where $[k]
\,=\, [X \times \mathbb{C}^k, \{d\}]$ (so $k\,=\, N$ if $G\,=\, {\rm GL}(N,\mathbb{C})$
or ${\rm SO}(N,\mathbb{C})$ and $k\,=\, 2N$ if $G\,=\, {\rm Sp}(2N, \mathbb{C})$), and
\item $\mathcal{V} \,= \,\mathcal{W}$ in $\widehat{K^0}_G(X)$ if and only if $\mathcal{V} \oplus [\mathcal{N}]
\,=\, \mathcal{W} \oplus [\mathcal{N}]$ as structured bundles for some flat bundle
$[\mathcal{N}]$ with trivial monodromy.
\end{enumerate}
\label{group}
\end{coro}
In \cite{SS}, Simons and Sullivan proved what they called the Venice lemma which
essentially says that every exact form arises out of Chern character forms of trivial
bundles (see also \cite{PT}). Using the same ideas as in the proof of Theorem
\ref{main} we give a proof of the following holonomy version of the Venice lemma.
\begin{prop}\label{holovenice}
Fix $G$ to be one of the groups ${\rm GL}(N,\mathbb{C})$,
${\rm SO}(N,\mathbb{C})$ and ${\rm Sp}(2N, \mathbb{C})$.
If $\eta$ is any odd smooth form on $X$, then there exists a trivial bundle $T\,=\,
X \times \mathbb{C}^k$ ($k$ is as in Corollary \ref{group})
and a connection $\nabla$ on it whose holonomy is in $G$ such that
\begin{equation}\label{veniceeq}
\mathrm{ch}(T, \nabla) - \mathrm{ch}(T, d) \,=\, d\eta\, .
\end{equation}
\end{prop}
\section{Preliminaries}\label{sec2}
As mentioned in the introduction, in order to define structured vector bundles we
need to define an equivalence relation between connections on vector bundles
on a smooth manifold. To do so, we recall the
definition of the Chern-Simons forms. Throughout, $V$ and $W$ are smooth complex
vector bundles on a smooth manifold $X$. For a connection $\nabla$ on a vector
bundle $V$, let $F_{\nabla}\,\in\, C^\infty(X,\, End(V)\otimes \bigwedge^2 T^*X)$
be the curvature of $\nabla$, and let $$\mathrm{ch} (\nabla) \,=\,
\mathrm{Tr} \exp \left(\frac{\sqrt{-1}}{2\pi} F_{\nabla}\right)$$ be the corresponding
Chern character form on $X$.
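For later reference, expanding the exponential yields the familiar homogeneous components of the Chern character form (a standard computation, recorded here for convenience):
$$\mathrm{ch}(\nabla) \,=\, \mathrm{rk}(V) \,+\, \frac{\sqrt{-1}}{2\pi}\,\mathrm{Tr}\, F_{\nabla} \,+\, \frac{1}{2}\left(\frac{\sqrt{-1}}{2\pi}\right)^{2}\mathrm{Tr}\,\bigl(F_{\nabla}\wedge F_{\nabla}\bigr) \,+\, \cdots\, ,$$
so the degree-zero component is the rank of $V$ and the degree-two component represents the first Chern class.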
\begin{defi}
If $\nabla _1$ and $\nabla _2$ are smooth connections on $V$, then the Chern-Simons form
between them is defined as a sum of odd differential forms $\mathrm{CS}(\nabla _1, \nabla _2)$
modulo exact forms satisfying the following two conditions:
\begin{enumerate}
\item (\emph{Transgression})~\ ~$d\mathrm{CS}(\nabla _1\, , \nabla _2) \,=\,
\mathrm{ch}(\nabla _1) - \mathrm{ch}(\nabla_2)$.
\item (\emph{Functoriality})~\ ~ If $f \,:\, Y \,\longrightarrow\, X$ is a smooth map between
smooth manifolds $Y$ and $X$, then $\mathrm{CS}(f^{*} \nabla _1\, , f^{*} \nabla _2) \,=\,
f^{*} \mathrm{CS}(\nabla _1\, , \nabla _2)$ modulo exact forms.
\end{enumerate}
\label{cs}
\end{defi}
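One standard representative satisfying both conditions of Definition \ref{cs} is obtained by transgression along the affine path of connections $\nabla_t \,=\, (1-t)\,\nabla_2 + t\,\nabla_1$ (normalization conventions vary in the literature; we record this only as a sketch):
$$\mathrm{CS}(\nabla_1, \nabla_2) \,=\, \frac{\sqrt{-1}}{2\pi}\int_0^1 \mathrm{Tr}\left(\dot{\nabla}_t\, \exp\left(\frac{\sqrt{-1}}{2\pi} F_{\nabla_t}\right)\right) dt \quad \text{modulo exact forms},$$
where $\dot{\nabla}_t \,=\, \nabla_1 - \nabla_2$ is an $End(V)$-valued one-form; the transgression property follows from the Bianchi identity.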
In \cite{SS} an equivalence relation between connections was defined, which we now
recall.
\begin{defi}
If $\nabla _1$ and $\nabla_2$ are two smooth connections on a vector bundle
$V$ on $X$, then $$\nabla _1 \,\thicksim\, \nabla _2$$ if
$\mathrm{CS} (\nabla _1, \nabla _2)\,=\, 0$ modulo exact forms
on $X$. The equivalence class of $\nabla$ is denoted by $\{\nabla \}$.
\label{eq}
\end{defi}
An isomorphism class $\mathcal{V} \,=\, [V\, , \{ \nabla \}]$ as in Definition
\ref{eq} is called a
\emph{structured vector bundle}. The direct sum of $\mathcal{V}\,=\,[V\, , \{ \nabla
_V \}]$ and $\mathcal{W} \,=\, [W\, , \{ \nabla_W \}]$ is defined as
$$
\mathcal{V}\oplus\mathcal{W}\,=\, [V\oplus W\, , \{ \nabla_V \oplus \nabla _W \}]\, .
$$
A \textit{symplectic} (respectively, \textit{orthogonal}) bundle on $X$ is a pair
$(E \, ,\varphi)$, where $E$ is a $C^\infty$ vector bundle on $X$ and $\varphi$
is a smooth section of $E^* \otimes E^*$, such that
\begin{enumerate}
\item{} The bilinear form on $E$ defined by $\varphi$ is anti-symmetric (respectively, symmetric), and
\item{} the homomorphism
\begin{equation}\label{e1}
E\,\longrightarrow\, E^*
\end{equation}
defined by contraction of $\varphi$ is an isomorphism, equivalently, the form
$\varphi$ is fiber-wise non-degenerate.
\end{enumerate}
A connection on a vector bundle $E$ induces a connection on $E^* \otimes E^*$. A
\textit{symplectic} (respectively, \textit{orthogonal}) connection on a symplectic
(respectively, orthogonal) bundle $(E \, ,\varphi)$ is a $C^\infty$ connection
$\nabla$ on the vector bundle $E$ such that the section $\varphi$ is parallel with
respect to the connection on $E^* \otimes E^*$ induced by $\nabla$.
If $(E \, ,\varphi)$ is a symplectic (respectively, orthogonal) bundle, then the
inverse of the isomorphism in \eqref{e1} produces a symplectic (respectively,
orthogonal) structure $\varphi'$ on $E^*$, because $(E^*)^*\,=\, E$. Let
$\nabla$ be a symplectic (respectively, orthogonal) connection on the
symplectic (respectively, orthogonal) bundle $(E \, ,\varphi)$. Then the
connection $\nabla'$ on $E^*$ induced by $\nabla$ is a symplectic
(respectively, orthogonal) connection on $(E^*, \varphi')$, where $\varphi'$
is defined above. We note that the isomorphism in \eqref{e1} takes $\varphi$ and
$\nabla$ to $\varphi'$ and $\nabla'$ respectively.
We define the holonomy version of differential K-theory next.
\begin{defi}
Let $V$ be a vector bundle with a connection $\nabla _V$ whose holonomy
is in a group $G$. Let $\{ \nabla _V \}_G$ denote the
equivalence class of all such connections and denote the corresponding
structured bundle by $\mathcal{V}$. Also, let $T_G$ denote the free abelian group generated by such structured
bundles. The following group is the holonomy version of differential K-theory on a smooth
manifold $X$:
\begin{gather}
\widehat{K^0}_G (X) = \frac{T_G}{\mathcal{V}+\mathcal{W} - \mathcal{V} \oplus \mathcal{W}}
\end{gather}
\label{holodiffK}
\end{defi}
\begin{rema}
Notice that we require all the equivalent connections in $\{ \nabla_V \}_G$ to have holonomy in $G$ as a part of the definition
of equivalence. In particular, equivalent connections in the sense of \cite{SS} do not necessarily have their holonomy in the same group.
\end{rema}
From now onwards we drop the subscript $G$ whenever it is clear from the context.
\section{Proof of Theorem \ref{main}}
We divide the proof of Theorem \ref{main} into two cases.
\subsection{The case of $\mathbf{G=GL(N,\mathbb{C})}$}\label{s2.1}
\textbf{}\\
This case has already been covered in \cite{PT}. However, here we provide a different proof.
Our approach relies on Lemma \ref{trivsublemma} proved below. We believe that
Lemma \ref{trivsublemma} may be of interest in its own right. The geometrical content
of the above-mentioned lemma is that on $\mathbb{R}^n$ every trivial bundle with
a connection is a subbundle, equipped with the induced connection, of a trivial bundle
equipped with a flat connection.
\begin{lemma}
Let $V$ be a trivial complex vector bundle of rank $r$ on $\mathbb{R}^n$, and let
$A$ be a connection on $V$. Then there
exists an invertible, smooth $(2n+2)r \times (2n+2)r$ complex matrix valued function $g$
such that $A_{ij}\,=\,[dg g^{-1} ] _{ij}$, where $1\,\leq\, i\, , j \,\leq\, r$.
\label{trivsublemma}
\end{lemma}
\begin{proof}
Notice that $A \,=\, \displaystyle \sum _{k=1} ^n A_k dx^k$, where $A_k$ are smooth
$r\times r$ complex matrix valued functions and $x^k$ are coordinates on $\mathbb{R}^n$.
We may write $A_k$ as
$$
A_k \,=\, 2I+A_k ^{\dag} A_k+A_k - (2I+A_k ^{\dag} A_k)\, .
$$
Using this it can be deduced that $A_k$ is a difference of two smooth functions with
values in $r\times r$ positive definite
matrices (an $r\times r$ matrix $B$ is called positive definite if
$v^{\dag} (B+B^{\dag}) v \,>\, 0 \ \forall \ v \neq 0$). Indeed, we have
$$
I+A_k ^{\dag} A_k+\frac{A_k + A_k ^{\dag}}{2} \,=\, \left(I+\frac{A_k}{2}\right)^{\dag}
\left(I+\frac{A_k}{2}\right) +\frac{3}{4} A_k ^{\dag} A_k \,\geq\, 0\, .
$$ Also, $dx^k \,=\, e^{-x^k} d(e^{x^k})$ and
$-dx^k\,=\, e^{x^k} d(e^{-x^k})$. Hence
$$A\,=\, \displaystyle \sum _{k=1} ^{2n} f_k dh_k$$ where the $h_k$ are
positive smooth functions and the $f_k$ are $r\times r$ positive-definite smooth matrix-valued functions.
We may attempt to find $g$ by forcing the first $r\times (2n+2)r$ sub-matrix of $dg$ to be
$$\left [ \begin{array}{ccccc}
dh_1 Id_{r\times r} &\ldots & dh_{2n} Id_{r\times r} & 0 & 0
\end{array} \right ]$$
and the first $(2n+2)r\times r$ sub-matrix of $g^{-1}$ to be $$\left [ \begin{array}{c} f_1 \\ f_2 \\ \vdots \\ f_{2n} \\ -\sum h_k f_k \\ Id_{r\times r} \end{array} \right ].$$ If we manage to find such a $g$, then $A_{ij} \,=\, [dg g^{-1}]_{ij}$. \\
Indeed, we claim that the matrix $g$ defined by
\begin{gather}
g=\left [ \begin{array}{cccccc}
h_1 Id_{r\times r} & h_2 Id_{r\times r} & \ldots & h_{2n}Id_{r\times r} & Id_{r\times r} & Id_{r\times r} \\
Id_{r\times r} & 0 & \ldots & 0 & f_1 \left(\displaystyle \sum_k h_k f_k\right) ^{-1} & 0 \\
0 & Id_{r\times r} & \ldots & 0 & f_2\left(\displaystyle \sum_k h_k f_k \right)^{-1} & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & \ldots & Id_{r\times r} & f_{2n}\left(\displaystyle \sum_k h_k f_k\right)^{-1} & 0 \\
0 & 0 & \ldots & 0 & \left(\displaystyle \sum _k h_k f_k\right)^{-1} & Id_{r\times r} \\
\end{array} \right ]\nonumber
\end{gather}
does the job. (Here $Id_{r\times r}$ is the $r\times r$ identity matrix.) This can be verified by a
straightforward computation. Note that $\displaystyle \sum _k h_k f_k$ is invertible because $h_k>0$ and $f_k + f_k ^{\dag} >0$.
\end{proof}
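As an independent sanity check (not part of the original argument), the two claims about $g$ can be verified symbolically in the smallest case $n = r = 1$, where $g$ is a $4\times 4$ matrix. The SymPy sketch below checks that $(f_1, f_2, -S, 1)^{T}$, with $S = h_1 f_1 + h_2 f_2$, is indeed the first column of $g^{-1}$, and that $[dg\, g^{-1}]_{11} = f_1\, dh_1 + f_2\, dh_2$:

```python
import sympy as sp

x = sp.symbols('x')
h1, h2, f1, f2 = [sp.Function(name)(x) for name in ('h1', 'h2', 'f1', 'f2')]
S = h1*f1 + h2*f2  # invertible, since h_k > 0 and f_k + f_k^dag > 0

# The matrix g of the lemma for n = 1, r = 1 (so (2n+2)r = 4):
g = sp.Matrix([
    [h1, h2, 1,    1],
    [1,  0,  f1/S, 0],
    [0,  1,  f2/S, 0],
    [0,  0,  1/S,  1],
])

# Claimed first column of g^{-1}; g * col = e_1 confirms the claim.
col = sp.Matrix([f1, f2, -S, 1])
assert (g * col).applyfunc(sp.simplify) == sp.Matrix([1, 0, 0, 0])

# Hence [dg g^{-1}]_{11} = (g' col)_1, which should equal A = f1 h1' + f2 h2'.
lhs = sp.expand((g.diff(x) * col)[0])
rhs = sp.expand(f1*h1.diff(x) + f2*h2.diff(x))
assert sp.simplify(lhs - rhs) == 0
```

The same bookkeeping, carried out block-wise with $Id_{r\times r}$ in place of scalar entries, gives the general case.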
Since $\mathbb{R}^n$ is simply connected, any flat bundle on it has trivial monodromy.
Lemma \ref{trivsublemma} implies the following generalization:
\begin{prop}
Let $(V\, ,\nabla \,=\, d+A)$ be a complex rank $r$ vector bundle equipped with a
connection on a smooth manifold $X$ of
dimension $n$. Then there exists a trivial
complex vector bundle $T$ of rank $(4n+8r+2)(n+2r)$ on $X$, and a
smooth flat connection with trivial monodromy
$\widetilde{\nabla}\,=\,d+\widetilde{A}$ on $T$, such that
\begin{itemize}
\item $V\oplus W \,=\, T$ for some
smooth complex vector bundle $W$, and
\item{} the connection $A$ is induced from $\widetilde{A}$.
\end{itemize}
\label{trivsub}
\end{prop}
\begin{proof}
Using the Whitney embedding theorem, there is an embedding of the total space of $V$ in $\mathbb{R}^{2n+4r}$. The zero
section of $V$ is diffeomorphic to $X$ (and hence $X$ also sits in $\mathbb{R}^{2n+4r}$). The tangent
bundle $TV$ of $V$ is a subbundle of $T\mathbb{R}^{2n+4r}\vert_V$.
By endowing $V$ with the metric induced from the Euclidean metric on $\mathbb{R}^{2n+4r}$, we
may find the orthogonal
complement $TV^{\perp}$ of $TV$.
It satisfies $$TV\vert_X\oplus TV^{\perp} \,=\, T\mathbb{R}^{2(n+2r)}\vert_X\, .$$ The vector
bundle $V$ itself may be identified with a subbundle
of $TV$. Using the induced metric we may find the orthogonal complement $V^{\perp}$ of $V$ in $TV$. Therefore, there
exists a vector bundle $U = V^{\perp} \oplus TV^{\perp}$ on $X$ such that $V\oplus U\, =\,X \times
\mathbb{C}^{n+2r}\,=\, Q$.
We may endow $U$ with some arbitrary connection $\nabla _U$. This induces the connection
$\nabla_Q \,=\, \nabla \oplus \nabla _U$ on the bundle $Q$. Using a tubular neighborhood and a
partition of unity we may extend $\nabla_Q$ from $X$ to a connection $\nabla_{\widetilde{Q}}$
on the trivial vector bundle of rank $(n+2r)$
defined on all of $\mathbb{R}^{2n+4r}$. Now we may use Lemma \ref{trivsublemma} to come up with
a vector bundle $\widetilde{T}$ of rank $(4n+8r+2)(n+2r)$
on $\mathbb{R}^{2n+4r}$, equipped with a flat connection $\nabla _{\widetilde{T}}$, such that
$\nabla_{\widetilde{Q}}$ is induced from it. Restricting our attention to $X$ we see that
the vector bundle $T\,=\,\widetilde{T}\vert_X$ equipped with the connection
$\nabla _{\widetilde{T}} \vert_X$ satisfies the conditions in the proposition.
\end{proof}
Proposition \ref{trivsub} may be viewed as a vector bundle version of the Nash
embedding theorem because it states that every connection arises out of a flat
connection with trivial monodromy. We now state a useful lemma \cite[Lemma 1.16]{SS}.
\begin{lemma}[Simons-Sullivan]
Let $V$ and $W$ be smooth vector bundles on a smooth manifold $X$. Let $\nabla$ be a
smooth connection on the direct sum $V\oplus W$ with curvature $R$. Let $\nabla _V$ and
$\nabla _W$ be the connections on $V$ and $W$ respectively constructed from $\nabla$ using
the decomposition of $V\oplus W$. Suppose that $R _{r,s} (V) \,\subseteq\, V$ and $R_{r,s}
(W)\,\subseteq\, W$ for all tangent vectors $r,s$ at any point of $X$. Then $$CS(\nabla _V
\oplus \nabla _W , \nabla ) \,=\, 0 \ \ \ \mathrm{modulo} \ \ \mathrm{exact}\ \
\mathrm{forms}\ \ \mathrm{on}\ \ X\, .$$
\label{sul}
\end{lemma}
Lemma \ref{sul} in conjunction with Proposition \ref{trivsub} implies Theorem \ref{main} in the case $G\,=\,
{\rm GL}(N, \mathbb{C})$. Indeed, given a structured bundle $\mathcal{V} \,=\,
[V, \{ \nabla _V \}]$, Proposition \ref{trivsub} furnishes a flat
bundle $\mathcal{T}\,=\, [T, \{ \nabla _T \}]$ such that $V$ is a subbundle of $T$ with
the connection $\nabla _V$ being induced from $\nabla _T$.
Using the natural metric on $T$ we may find an orthogonal complement $W$ to $V$ so that
$V \oplus W \,=\, T$. Endowing $W$ with the
induced connection $\nabla _W$ from $\nabla _T$ (which is flat), it is straightforward to check
that the conditions of Lemma \ref{sul} are satisfied. Let
$$\mathcal{W} \,=\, [W\, , \{ \nabla _W \}]\, .$$ Using Lemma \ref{sul} we see that
$$[T, \{ \nabla_{T} \}] \,=\, [V\oplus W, \{ \nabla_{T} \}] \,=\, [V\oplus
W, \{ \nabla _V \oplus \nabla _W \}] \,=\, \mathcal{V} \oplus \mathcal{W}\, .$$
\subsection{$\mathbf{G={\mathrm {Sp}}(2N,\mathbb{C})}$ or $\mathbf{G={\mathrm {SO}}(N,\mathbb{C})}$}
\textbf{}\label{se3.2}\\
{}From Section \ref{s2.1}
we know that there is a structured inverse $\mathcal{W} \,=\, [W\, , \{ \widetilde{\nabla} \} ]$
of $(E \, ,\nabla)$. We clarify that $W$ does not necessarily have a $G$--structure. Let
$\widetilde{\nabla}'$ denote the connection on $W^*$ induced by $\widetilde{\nabla}$.
Using the natural pairing of $W$ with $W^*$, the vector bundle $W\oplus W^*$ has a
canonical $G$--structure $\varphi_0$. We note that $\widetilde{\nabla}\oplus\widetilde{\nabla}'$ is a
$G$--connection on $(W\oplus W^*\, ,\varphi_0)$.
Clearly, $(W^*\, ,\widetilde{\nabla}')$ is a structured inverse of $(E^*\, , \nabla')$.
Therefore,
$$
(E^* \oplus W\oplus W^*\, , \nabla'\oplus \widetilde{\nabla}\oplus\widetilde{\nabla}')
$$
is a structured inverse of $(E \, ,\nabla)$. The connection
$\nabla'\oplus \widetilde{\nabla}\oplus\widetilde{\nabla}'$ preserves the $G$--structure
$\varphi'\oplus \varphi_0$ on the vector bundle $E^* \oplus W\oplus W^*$.
\section{Applications}
In this section we prove Corollary \ref{group} and Proposition \ref{holovenice}.
\subsection{Proof of Corollary \ref{group}}
\begin{enumerate}
\item Any element of $\widehat{K^0}_G(X)$ is of the form $[\mathcal{V}] - [\mathcal{W}]$ by definition. Since
there exists an inverse $\mathcal{Q}$ to $\mathcal{W}$ such that $\mathcal{Q} \oplus \mathcal{W}\,=\, [k]$,
where $[k]$ is flat with trivial monodromy, we see that $[\mathcal{V}] - [\mathcal{W}]\,=\, [\mathcal{V} \oplus
\mathcal{Q}] - [k]$.
\item If $[\mathcal{V}] \,=\, [\mathcal{W}]$ in $\widehat{K^0}_G(X)$, then $\mathcal{V} \oplus \mathcal{P} \,=
\,\mathcal{W} \oplus \mathcal{P}$ for some structured
bundle $\mathcal{P}$. Let $\mathcal{E}$ be an inverse of $\mathcal{P}$. Adding $\mathcal{E}$
to both sides we see that $\mathcal{V}
\oplus [N ] \,=\, \mathcal{W} \oplus [N]$ for some flat vector bundle $N$ with trivial monodromy.
\end{enumerate}
\subsection{Proof of Proposition \ref{holovenice}}
Using the Venice lemma in \cite{PT} we see that there exists a trivial bundle
$\widetilde{T}$ with a connection $\nabla_{\widetilde{T}} \,=\, d+A$ such that
$$\frac{d\eta}{2} \,=\,
\mathrm{ch}(\widetilde{T},\nabla _{\widetilde{T}}) - \mathrm{ch} (\widetilde{T},d)\, .$$ It is
not necessarily the case that the holonomy of $\nabla_{\widetilde{T}}$ lies in $G$. However, we
know that $$\frac{d\eta}{2} \,=\,\mathrm{ch}(\widetilde{T}^{*},\nabla _{\widetilde{T}^{*}})
- \mathrm{ch} (\widetilde{T}^{*},d)$$ because $d\eta$ is an even form. Therefore,
$$d\eta \,=\,\mathrm{ch}(\widetilde{T}^{*}\oplus \widetilde{T},\nabla _{\widetilde{T}^{*}}
\oplus \nabla _{\widetilde{T}}) - \mathrm{ch}
(\widetilde{T^{*}}\oplus \widetilde{T},d)\, .$$ Using the same reasoning as in Section
\ref{se3.2} we obtain the desired result.
\section*{Acknowledgements}
We are grateful to the referee for detailed comments to improve the exposition.
The first--named author acknowledges the support of a J. C. Bose Fellowship.
\section{Best Hyperparameters}\label{a:best_hyp}
\label{sec:hyperparams}
See Table~\ref{best_hyperparams} for the best hyperparameters for each model.
\begin{table}
\centering
\begin{tabular}{lccc}
\hline
\textbf{Model} & \textbf{epochs} & \textbf{batch size} & \textbf{learn. rate}\\
\hline
WG-SR & 5 & 16 & 1e-5 \\
BWP & 8 & 16 & 1e-5 \\
CSS & 8 & 16 & 1e-5 \\
MAS & 8 & 8 & 1e-5 \\
\hline
\end{tabular}
\caption{\label{best_hyperparams}
The best hyperparameters for every model.
}
\end{table}
\section{WSC Preprocessing}\label{a:wsc}
\label{sec:wsc_preprocessing}
When evaluating the CSS and the MAS model on the WSC dataset, we noticed a problem with the dataset, which interfered with locating the candidates in the text. The problem is that, in some WSC examples, the given candidate options do not match word-by-word the candidates as they appear in the text. For example,
\begin{quote}\textit{Madonna fired her trainer because \underline{\hspace{0.5cm}} couldn't stand her boyfriend.}
Candidates: Madonna, The trainer.
\end{quote}
In this example, we resolve this problem by manually replacing the candidate option ``the trainer" with ``her trainer", to match exactly the candidate as it appears in the text. By following this procedure, we manually modified all 88 problematic examples in WSC (out of 273 examples in total). Note that this problem does not exist for WinoGrande and DPR. Furthermore, in real-world applications, such a problem does not exist, since the candidates are not provided and have to be extracted automatically from the text. Detected candidates thus match the spans in the text.
We use this modified version of WSC only for the CSS and MAS models, because they require precise candidate localization. For WG-SR and BWP, we use the unmodified WSC version. The edited dataset can be found in the code repository\footnote{\url{https://github.com/YDYordanov/WS-training-objectives}}.
\section{Experiments}
For all four models, we select the best hyperparameters via grid search using 3 seeds, and then train the models with the best hyperparameters on 20 additional seeds. For WinoGrande, we use WG-dev (1,267 examples) for selecting the hyperparameters, and WG-train-XL as our training dataset. Due to the submission limitation (maximum one per week) of the WinoGrande leaderboard,\footnote{\url{https://leaderboard.allenai.org/winogrande/submissions/public}} we are unable to report all 80 trained models on WG-test, and instead we report them on WG-dev. For additional verification, we include results over the hyperparameter space, where WG-dev is a true test set.
We also report all models on the out-of-domain pronoun resolution datasets WSC (273 examples) and DPR (564 examples). The candidates provided in WSC were treated differently for the CSS and MAS models, as these models require precise candidate localization (see Appendix \ref{sec:wsc_preprocessing}).
For all four models, we do a grid search over the learning rate $\{5e-6, 1e-5, 3e-5, 5e-5\}$, the number of training epochs $\{3, 4, 5, 8\}$, and the batch-size $\{8, 16\}$, and we run each model with three different random seeds. This hyperparameter space is selected based on the union of the grid search by the original WG-SR work \cite{WinoGrande} and our observations on the other three models. The best hyperparameters (in Appendix \ref{sec:hyperparams}) are selected based on the maximum WG-dev accuracy across the three seeds.
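Schematically, this search is a loop over the Cartesian product of the three grids and the seeds. In the sketch below, \texttt{train\_and\_eval} is a hypothetical stand-in for a full fine-tuning run returning WG-dev accuracy:

```python
import itertools

# Hyperparameter grid from the paper: 4 x 4 x 2 settings, 3 seeds each.
learning_rates = [5e-6, 1e-5, 3e-5, 5e-5]
epoch_counts = [3, 4, 5, 8]
batch_sizes = [8, 16]
seeds = [0, 1, 2]

def train_and_eval(lr, epochs, batch_size, seed):
    # Placeholder: a real run would fine-tune RoBERTa with these settings
    # and return the resulting WG-dev accuracy.
    return 0.5

# Pick the configuration of the single best run (the paper's criterion:
# maximum WG-dev accuracy across the three seeds).
best = max(
    itertools.product(learning_rates, epoch_counts, batch_sizes, seeds),
    key=lambda cfg: train_and_eval(*cfg),
)
lr, epochs, batch_size, seed = best
```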
For all experiments, we use linear learning rate decay with warm-up over 10\% of the training data, and the AdamW optimizer \cite{Wolf2019HuggingFacesTS}, for which we only alter the learning rate.
\section{Results}
Table~\ref{seeds_table} shows the final seed-wise results for all four objectives. We see that the semantic similarity objective (CSS) outperforms the other three objectives on out-of-domain testing, with 90.2\% average accuracy on WSC and 92.7\% average accuracy on DPR. On the other hand, the sentence ranking objective used by WG-SR clearly outperforms the other three objectives on in-domain testing, with 78.2\% average accuracy on WG-dev. This is confirmed by the contents of Table~\ref{hyperparam_wise_results_table}, where we see that WG-SR has a better mean and max accuracy on WG-dev over the entire hyperparameter space compared to the other three models. For these cases, WG-dev is a true test set, since early stopping was not used, and all tested setups are reported; hence, WG-dev has not influenced the models reported in Table~\ref{hyperparam_wise_results_table}.
In order to verify the statistical significance of our main results, we used the t-test for similar variances and different sample sizes to compare the distributions of accuracy on the converging seeds. Comparing the accuracies of CSS and WG-SR on WG-dev, WSC, and DPR, respectively, we get the following two-tailed $p$-values: $0.008249$, $0.003026$, and $0.017441$. All results are significant with $ p < 0.05 $.
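The significance test above is the standard two-sample Student t-test (equal variances, unequal sample sizes), which with SciPy is a single call. The per-seed accuracies below are synthetic placeholders, not the paper's data:

```python
import numpy as np
from scipy import stats

# Synthetic per-seed accuracies (NOT the paper's data). CSS converged on all
# 20 seeds, WG-SR on 18, so the two samples may have different sizes.
rng = np.random.default_rng(0)
css_acc = 0.902 + 0.01 * rng.standard_normal(20)
wgsr_acc = 0.885 + 0.01 * rng.standard_normal(18)

# Equal-variance two-sample t-test; returns the two-tailed p-value.
t_stat, p_value = stats.ttest_ind(css_acc, wgsr_acc, equal_var=True)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.6f}")
```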
We also observe that, even with the best hyperparameter combination, WG-SR exhibits seed-wise instability, as it fails to converge on 2 out of 20 seeds. This does not happen to the other three models. After considering 10 additional seeds, we obtained that WG-SR fails to converge on 10\% of the seeds (3 out of 30).
Moreover, during the hyperparameter search, we observed that all models were prone to not converge for certain combinations of hyperparameters. We counted a run as non-converging if it reached $\le 60\%$ accuracy on WG-dev; this threshold was chosen based on the performance distribution of all models, which either perform at around $50\%$ accuracy or at $70\%$ accuracy or more on WG-dev, making $60\%$ a good middle-ground threshold. Table~\ref{hyperparam_wise_results_table} shows that
MAS converged most often; however, it also had the highest performance variation with a standard deviation of 2.5. Out of the four models, WG-SR converged least often, for only 49 out of all 96 hyperparameter combinations.
WG-SR likely performs better in-domain than CSS, MAS, and BWP, since those three use existing properties of RoBERTa (such as the possibility to compare contextualized embeddings, the attention structure of the model, and its pre-trained LM prediction head, respectively) for a task that they were not originally designed for (pronoun resolution). WG-SR, on the other hand, only uses the output of RoBERTa at the 0-th token, which is not pre-trained.
We identify two possible reasons why WG-SR performs worse than CSS on out-of-domain examples. The first reason is the one mentioned above, namely, not explicitly exploiting the listed properties of the pre-trained model would lead to a better fit on a specific dataset, but worse ``general knowledge''. This reason is not completely warranted, since WG-SR has similar out-of-domain performance to BWP and MAS.
The second possible reason is that CSS uses an explicit candidate localization and candidate-pronoun matching (by comparing the embedding of the candidate and the pronoun), whereas in WG-SR these are achieved implicitly by feeding a pair of sentences to the model, one with the correct and one with the incorrect substitution. Again, this reason is not completely warranted, since MAS also uses explicit candidate localization and candidate-pronoun matching, but has a similar out-of-domain performance to WG-SR. Further investigation on the reasons why CSS outperforms WG-SR on the out-of-domain examples is left for future work.
\section{Introduction}
Hard cases of pronoun resolution have been a long-standing problem in natural language processing, which has served as a performance benchmark for the research community \cite{levesque2012winograd, wang2018glue, wang2019superglue}.
For example, the WinoGrande dataset \cite{WinoGrande} consists of pronoun resolution schemas that are constructed so that resolving them requires background knowledge and commonsense reasoning. In WinoGrande, the pronoun is obscured by ``\underline{\hspace{0.5cm}}'' to remove gender and number cues. The task is to find the correct candidate for ``\underline{\hspace{0.5cm}}'' out of two given candidates. For example:
\begin{quote}
\textit{John moved the couch from the garage to the backyard to create space. The \underline{\hspace{0.5cm}} is small.}
Candidates: garage, backyard.
\end{quote}
Recently, supervised learning on top of pre-trained language models has been established as the main approach for pronoun resolution \cite{Trick, WikiCREM, WinoGrande}. Under this type of approach, we identify four categories of objectives commonly used for pronoun resolution:
\begin{enumerate}
\item \label{itm:first} comparing the language model probabilities for each candidate \cite{Trick, WikiCREM, HNN},
\item \label{itm:second} using semantic similarity between the pronoun and the candidates \cite{UDSSM, HNN},
\item \label{itm:third} using sequence ranking among the possible substituted sentences \cite{sequence-ranking, WinoGrande}, and
\item \label{itm:fourth} selecting a candidate based on the attentions of the pronoun in a transformer model \cite{attention-not-all}.
\end{enumerate}
We list one representative model from each category.
For \ref{itm:first}, \citet{Trick} use the BERT masked language model \cite{devlin2018bert} to produce the probabilities of the pronoun to be replaced with each of the two candidates. For \ref{itm:second}, the Unsupervised Deep Structured Semantic Model (UDSSM-I) \cite{UDSSM} uses contextualized word embeddings produced by a bidirectional recurrent neural network (BiRNN), and then compares the word embedding of each candidate with the word embedding of the pronoun. For \ref{itm:third}, RoBERTa-WinoGrande \cite{WinoGrande} encodes a pair of sentences (one for each candidate substituted in the input) by using RoBERTa \cite{Roberta} to determine which substitution is the correct one. Finally, the zero-shot Maximum Attention Score (MAS) model \cite{attention-not-all} selects a candidate based on how much the pronoun attends to each candidate internally in BERT.
The problem with all these objectives is that they have not been introduced under the same circumstances. They use different language models and word embeddings (e.g., BERT, RoBERTa, or BiRNN), and have been trained on different data (e.g., DPR \cite{DPR}, WinoGrande, or no additional data). Therefore, it is unclear whether the choice of the objective function is essential for pronoun resolution tasks.
Moreover, the seed-wise stability and the expected performance of these models have usually not been reported. However, seed-wise instability and performance variation are well-known problems when fine-tuning transformer-based models \cite{liu2020understanding, dodge2020finetuning}.
In this work, we compare the performance and seed-wise stability of the four categories of training objectives for pronoun resolution on equal grounds. To do this, for category \ref{itm:fourth}, we adapt the zero-shot MAS model for training. For category~\ref{itm:second}, we also introduce Coreference Semantic Similarity (CSS), which is a simplification and modification of UDSSM-I for transformer encoders.
We select WinoGrande as our training and development dataset due to its large size (40,938 examples) and generalizability to other pronoun resolution tasks \cite{WinoGrande}. We also use for testing the following well-established datasets: the Winograd Schema Challenge dataset (WSC) \cite{levesque2012winograd} and the Definite Pronoun Resolution dataset (DPR) \cite{DPR}.
We choose as language model RoBERTa \cite{Roberta}, as it significantly outperforms BERT on WinoGrande, WSC, and DPR \cite{WinoGrande}.
Finally, our evaluations are done under an unprecedentedly large number of seeds (20).
\input{Models}
\input{Experiments_and_results}
\section{Summary and Outlook}
In this work, we categorized four existing objectives for pronoun resolution, and compared their performance and seed-wise stability on equal grounds. Our experiments showed that, on in-domain testing, the objective of sequence ranking based on the first token in RoBERTa outperforms the other three objectives, but can exhibit convergence problems. On out-of-domain testing, the objective of semantic similarity between the pronoun and each candidate outperforms the other three objectives.
Future work may investigate whether these results translate to other language models besides RoBERTa as well as other training datasets besides WinoGrande. Also, one could analyze the strengths and weaknesses of each objective, and evaluate other variations of these objectives.
\section*{Acknowledgments}
This work was supported by a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, the AXA Research Fund, the ESRC grant ``Unlocking the Potential of AI for Law'', the EPSRC studentship OUCS/EPSRC-NPIF/VK/1123106, and EU Horizon 2020 under the grant 952215.
We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1).
\section{Models}
This section presents the four training objectives and the models\footnote{The code is publicly available at: \url{https://github.com/YDYordanov/WS-training-objectives}.} that represent each of them.
All four models share the RoBERTa\footnote{\textit{roberta-large} from \cite{Wolf2019HuggingFacesTS}} contextualized word embeddings. RoBERTa has an identical transformer architecture to BERT \cite{devlin2018bert}, with the only difference being the training procedure. Hence, RoBERTa is a masked language model that outputs the probability distribution for filling a gap in the text (denoted by a ``\textless mask\textgreater \hspace{0.5pt}'' token).
Additionally, RoBERTa is a text encoder, with one output for each token of the input sentence. Three of the models (\ref{WG-baseline}, \ref{CSS}, and \ref{MAS}) use a multi-layer perceptron (MLP) classification ``head'', which takes some part of the encoder as input.
All four models use binary cross-entropy loss with a pair of probabilities as input, and the following target labels: sentence correctness for \ref{WG-baseline} and candidate correctness for \ref{BWP}, \ref{CSS}, and \ref{MAS}.
\subsection{WinoGrande Sequence Ranking} \label{WG-baseline}
We refer to the RoBERTa-WinoGrande model introduced by \citet{WinoGrande} as WG-SR, since it has a sequence ranking objective.
This model predicts which sentence of a pair of substituted sentences is more plausible. Each of the pair of sentences in the input of WG-SR is split in two before the substituted candidate. For example,
\begin{quote}
\textless s\textgreater \hspace{0.5pt} The city councilmen refused the demonstrators a permit because \textless /s\textgreater \hspace{0.5pt} \textless /s\textgreater \hspace{0.5pt} \underline{\hspace{0.5cm}} feared violence. \textless /s\textgreater,
\end{quote}
where ``\underline{\hspace{0.5cm}}'' is filled with each of the two candidates: ``the city councilmen'' or ``the demonstrators''.
The WG-SR code\footnote{\url{https://github.com/allenai/winogrande}} is based on the RobertaForMultipleChoice model \cite{Wolf2019HuggingFacesTS}, restricted to binary choice. This model consists of the pre-trained RoBERTa encoder and an MLP head based on the \textless s\textgreater \hspace{0.5pt} (first) token of RoBERTa's output. The MLP has one hidden layer with tanh activation, hidden size matching that of the encoder, and one-dimensional output. The pair of input sentences $(\text{S}_1, \text{S}_2)$ thus produces a pair of values, which are then passed through a softmax to obtain the two sentence probabilities $P(\text{S}_1)$ and $P(\text{S}_2)$.
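A minimal NumPy sketch of this scoring head, with random placeholder weights and embeddings standing in for the pre-trained encoder and its fine-tuned head, may look as follows:

```python
import numpy as np

def mlp_head(first_token_emb, W1, b1, w2, b2):
    """Score one substituted sentence from the <s>-token embedding:
    one tanh hidden layer of the encoder's width, then a scalar output."""
    hidden = np.tanh(W1 @ first_token_emb + b1)
    return w2 @ hidden + b2

rng = np.random.default_rng(0)
d = 1024  # hidden size of roberta-large
W1, b1 = rng.standard_normal((d, d)) / np.sqrt(d), np.zeros(d)
w2, b2 = rng.standard_normal(d) / np.sqrt(d), 0.0

# One <s>-token embedding per substituted sentence (placeholders for
# RoBERTa outputs on the two candidate substitutions).
emb_s1, emb_s2 = rng.standard_normal(d), rng.standard_normal(d)
scores = np.array([mlp_head(emb_s1, W1, b1, w2, b2),
                   mlp_head(emb_s2, W1, b1, w2, b2)])

# Softmax over the sentence pair -> (P(S1), P(S2)).
probs = np.exp(scores - scores.max())
probs /= probs.sum()
```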
\subsection{Binary Word Prediction} \label{BWP}
We denote by Binary Word Prediction (BWP) the model suggested by \citet{Roberta} in their code repository\footnote{\url{https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc}} as a modification of the model from \citet{Trick}. Instead of using margin loss, BWP uses binary cross-entropy loss. We select this modified version, because it is claimed to be more robust by its authors, and it also has two fewer hyperparameters.
For a given (unsubstituted) input sentence, the BWP model estimates which of the two candidates is more likely to fill the gap ``\underline{\hspace{0.5cm}}''. The input format is like in the following example, where ``\underline{\hspace{0.5cm}}'' is replaced by the ``\textless mask\textgreater''
token, to serve for the masked language model:
\begin{quote}
\textless s\textgreater \hspace{0.5pt} The city councilmen refused the demonstrators a permit because \textless mask\textgreater \hspace{0.5pt} feared violence. \textless /s\textgreater
\end{quote}
With such an input, the RoBERTa masked language model returns the log-probability predictions at the ``\textless mask\textgreater \hspace{0.5pt}'' token over the vocabulary. Of those predictions, only the ones corresponding to the two word candidates $c_1$ and $c_2$ are selected by BWP: $\text{log}P_{\text{vocab}}(c_1)$ and $\text{log}P_{\text{vocab}}(c_2)$. Here, the log-probability of each candidate is defined by averaging the log-probabilities of its tokens. Then, softmax is computed with inputs $\text{log}P_{\text{vocab}}(c_1)$ and $\text{log}P_{\text{vocab}}(c_2)$, which is how we define the pair of probabilities: $(P(c_1), P(c_2)) := (P_{\text{vocab}}(c_1) / (P_{\text{vocab}}(c_1) + P_{\text{vocab}}(c_2)), P_{\text{vocab}}(c_2) / (P_{\text{vocab}}(c_1) + P_{\text{vocab}}(c_2)))$.
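Assuming the per-token log-probabilities at the ``\textless mask\textgreater'' position have already been extracted from the masked language model, the candidate scoring reduces to a per-candidate average followed by a binary softmax; a sketch with made-up numbers:

```python
import numpy as np

def candidate_log_prob(token_log_probs):
    """Average the masked-LM log-probabilities over a candidate's tokens."""
    return float(np.mean(token_log_probs))

# Made-up per-token log-probs at the <mask> position (a real run would read
# these off RoBERTa's vocabulary distribution for each candidate's tokens).
lp_c1 = candidate_log_prob([-2.1, -3.0])  # multi-token candidate
lp_c2 = candidate_log_prob([-4.5])        # single-token candidate

# Binary softmax over the two averaged log-probabilities; this is exactly
# P(c_i) = P_vocab(c_i) / (P_vocab(c_1) + P_vocab(c_2)).
scores = np.array([lp_c1, lp_c2])
probs = np.exp(scores - scores.max())
probs /= probs.sum()
p_c1, p_c2 = probs
```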
\subsection{Coreference Semantic Similarity} \label{CSS}
We propose Coreference Semantic Similarity (CSS), a modification of the training objective of the Unsupervised Deep Structured Semantic Model (UDSSM-I) \cite{UDSSM}. Like UDSSM-I, the CSS objective works by comparison in the word embedding space, such that the candidate that is more similar to the embedding of the pronoun is selected. Unlike UDSSM-I, the CSS objective is simpler, with no attention weights on the tokens of the candidates. It also uses a transformer encoder instead of a recurrent neural network, which enables it to take advantage of state-of-the-art pre-trained language models.
The input format for this model is the same as for BWP (\ref{BWP}). This input is used by RoBERTa to produce contextualized word embeddings. For each candidate $c$, we define its contextualized word embedding $\text{emb}(c)$ by averaging the contextualized word embeddings of its tokens.
For classification, we compare the similarity scores of the embeddings of the \textless mask\textgreater \hspace{0.5pt} token with each of the two candidates $c_1$ and $c_2$, i.e., we compare $\text{sim}(\text{emb}(c_1), \text{emb}(\text{\textless mask\textgreater)})$ and $\text{sim}(\text{emb}(c_2),$ $\text{emb}(\text{\textless mask\textgreater)})$ and select the candidate with greater similarity.
For the similarity score function, we use \textit{additive alignment} \cite{bahdanau2014neural}, i.e., $\text{sim}(x, y) : = v^\top \text{tanh}(Wx+Uy)$, with the trainable parameters: vector $v$, and matrices $W$ and $U$, with hidden size equal to that of RoBERTa and output size of one.
During training, $\text{sim}(\text{emb}(c_1), \text{emb}(\text{\textless mask\textgreater}))$ and $\text{sim}(\text{emb}(c_2), \text{emb}(\text{\textless mask\textgreater}))$ are fed to a binary softmax function to obtain $P(c_1)$ and $P(c_2)$.
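A NumPy sketch of the CSS scoring step, with randomly initialized parameters $v$, $W$, $U$ and placeholder embeddings in place of the RoBERTa outputs:

```python
import numpy as np

def additive_sim(x, y, v, W, U):
    """Additive alignment score sim(x, y) = v^T tanh(Wx + Uy)."""
    return float(v @ np.tanh(W @ x + U @ y))

rng = np.random.default_rng(0)
d = 1024  # hidden size of roberta-large
v = rng.standard_normal(d) / np.sqrt(d)
W = rng.standard_normal((d, d)) / np.sqrt(d)
U = rng.standard_normal((d, d)) / np.sqrt(d)

# Placeholder contextualized embeddings: token-averaged for each candidate,
# plus the embedding of the <mask> token.
emb_c1, emb_c2, emb_mask = (rng.standard_normal(d) for _ in range(3))

scores = np.array([additive_sim(emb_c1, emb_mask, v, W, U),
                   additive_sim(emb_c2, emb_mask, v, W, U)])

# Binary softmax -> (P(c1), P(c2)).
probs = np.exp(scores - scores.max())
probs /= probs.sum()
```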
\subsection{Maximum Attention Score} \label{MAS}
The Maximum Attention Score (MAS) model was originally developed for zero-shot evaluation of transformer models on pronoun disambiguation \cite{attention-not-all}. It uses the attentions of all layers of a transformer model to produce a maximum attention score for each candidate that summarizes how much the pronoun attends to a candidate. The candidate that is most attended is selected. We adapt this objective to be trainable by replacing the summary of attentions with an MLP over the concatenated masked attention tensors, followed by a binary classifier.
The input of MAS is the same as for BWP (\ref{BWP}). Then, similarly to \citet{attention-not-all}, we extract the two attention tensors $A_{c_1}$ and $A_{c_2}$ given by the multi-layer RoBERTa attentions of the ``\textless mask\textgreater'' token to each of the two candidates $c_1$ and $c_2$, respectively. For each candidate $c$, the attention tensor $A_{c}$ is defined as the average of the attention tensors of all tokens that form $c$. The two corresponding max-masking tensors $M_{c_1}$ and $M_{c_2}$ are then derived as follows: for $i =1, 2$ and for each multi-index $j$ of the tensor $A_{c_i}$, we set $M_{c_i}(j) = 1$, if $A_{c_i}(j) \ge A_{c_{3-i}}(j)$, and $M_{c_i}(j) = 0$, otherwise. We obtain the two corresponding max-masked tensors by the element-wise products: $B_{c_1} = A_{c_1} \circ M_{c_1}$ and $B_{c_2} = A_{c_2}\circ M_{c_2}$.
Unlike \citet{attention-not-all}, we introduce an MLP on top of the concatenated tensor $B = [B_{c_1}, B_{c_2}]$ for binary classification. The MLP has two hidden layers, tanh activation, hidden size the same as its input, and two-dimensional output. It is followed by a binary softmax function to produce the two candidate probabilities $P(c_1)$ and $P(c_2)$.
The uncertainty principle is originally a quantum physics principle stating
that some families of observable quantities cannot be measured simultaneously
with infinite precision. The uncertainty principle can be turned into
quantitative statements thanks to uncertainty inequalities, which provide bounds
on precision of simultaneous measurements of such quantities.
The prototype of uncertainty inequality is the celebrated Heisenberg
inequality, first formulated in~\cite{Heisenberg27uber},
which uses a variance measure as criterion for the measurement
precision. Namely, for all $x\in L^2(\RR )$ with $\|x\|_2=1$,
$$
\int_{-\infty}^\infty t^2 |x(t)|^2\,dt\,.\,\int_{-\infty}^\infty \nu^2 |\hat
x(\nu)|^2\,d\nu\ge\frac1{16\pi^2}\ ,
$$
where the Fourier transform is normalized in such a way that the Fourier
transform of $e^{-\pi t^2}$ equals $e^{-\pi\nu^2}$.
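As a sanity check, the equality case can be verified numerically. The sketch below (a simple Riemann-sum discretization; the grid and tolerance are ad hoc choices) checks that the normalized Gaussian $x(t)=2^{1/4}e^{-\pi t^2}$, which is its own Fourier transform under the normalization above, attains the bound $1/16\pi^2$:

```python
import numpy as np

# Discretize the line; the Gaussian decays fast enough that [-10, 10] suffices.
t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
x = 2**0.25 * np.exp(-np.pi * t**2)      # normalized: the integral of x^2 is 1

var_t = np.sum(t**2 * x**2) * dt         # time dispersion
var_nu = var_t                           # this Gaussian is its own Fourier transform
print(var_t * var_nu, 1 / (16 * np.pi**2))   # both approximately 0.0063326
```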
Originally stated for position and momentum, the Heisenberg
inequality has been extended to more general observable pairs, under the name of
Robertson inequality~\cite{Robertson29uncertainty,Robertson34indeterminacy}.
Particular cases have been analyzed by various authors, see
e.g.~\cite{Dahlke95affine,dahlke08uncertainty,Flandrin01inequalities,Maass10do}
and references therein. The Robertson variance inequality has been criticized in
the physics literature, mainly because the bound in the inequality sometimes
depends explicitly on the left hand side, which has motivated the search for
alternative formulations. Besides, Robertson-type inequalities do not generalize
well to all situations: for example, the notion of variance is not necessarily
easy to define in some contexts, such as for periodic sequences or functions,
functions defined on compact manifolds or graphs, more generally in situations
where the notion of spreading away from a reference point is not
straightforward.
Among the generalizations, entropic inequalities, which use entropy measures to
quantify measurement precision, have enjoyed renewed interest recently. In the
particular case of the
position-momentum situation, the corresponding entropic uncertainty inequality,
called the Hirschman-Beckner inequality~\cite{Hirschman57note}, is intimately
related to the sharp form of the Hausdorff-Young inequality, the so-called
Babenko-Beckner inequality~\cite{Beckner75inequalities}.
In signal processing terms, this uncertainty principle limits the simultaneous
concentration or sparsity of a function and its Fourier transform. The
inequality provides a lower bound on the differential entropies of their
respective square moduli.
\medskip
Uncertainty inequalities have received renewed interest in the context of
sparse approximation and signal processing applications. Often in a
finite-dimensional setting, $\ell^p$ norms (with $p<2$) are used to measure
dispersion of signals. These quantities make it possible to compare the sharpness of different representations ($\ell^2$ vectors) of a signal $x$, or to probe the concentration of information within them. For example, the signal itself and its Fourier transform are two representations of the same mathematical object; more generally, any projection of $x$ onto a basis of the Hilbert space gives a representation of the signal. In this context, uncertainty bounds involving the $\ell^0$ quasi-norm and the $\ell^1$ norm have
been derived. A prototype of such bounds is the Elad-Bruckstein $\ell^0$
inequality: given two orthonormal bases in a
finite-dimensional Hilbert space and any vector $x$ in that space with set of
coefficients $a$ and $b$ with respect to the two bases,
$$
\|a\|_0\,\|b\|_0\ge \frac{1}{{\boldsymbol\mu}^2}\ ,
$$
where ${\boldsymbol\mu}$ is a constant called mutual coherence, that depends on the two
bases (and not on $x$). Such results have important implications for practical problems, as shown
in the pioneering work of Donoho and Stark~\cite{Donoho89uncertainty}.
For instance, $\ell^0$ bounds have been used to prove the
equivalence of $\ell^0$ and $\ell^1$-based sparse recovery algorithms, under
suitable sparsity assumptions~\cite{Donoho01uncertainty,Elad02Generalized}.
Results of similar nature have also been obtained in the context of the Fourier
transform on abelian groups (see for
example~\cite{Tao05uncertainty,Krahmer08uncertainty,Murty12uncertainty}).
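As an illustration of such $\ell^0$ bounds (a numerical sketch, not taken from the cited works), consider the Kronecker and Fourier bases of ${\mathbb C}^N$, for which ${\boldsymbol\mu}=1/\sqrt{N}$, so that the bound reads $\|a\|_0\,\|b\|_0\ge N$; a Dirac comb saturates it:

```python
import numpy as np

N = 16
x = np.zeros(N)
x[::4] = 1.0                             # Dirac comb with spacing 4

a = x                                    # coefficients in the Kronecker basis
b = np.fft.fft(x) / np.sqrt(N)           # coefficients in the unitary Fourier basis

supp = lambda c: int(np.sum(np.abs(c) > 1e-10))
# mutual coherence is 1/sqrt(N), so the bound reads ||a||_0 * ||b||_0 >= N
print(supp(a), supp(b), supp(a) * supp(b))   # 4 4 16
```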
As is well known in information theory, and remarked also
in~\cite{Przebinda03three}, Shannon entropy and $\ell^p$ norms are closely
connected, through R\'enyi entropies. Inequalities involving R\'enyi
entropies~\cite{Maassen90discrete,Dembo91information}
actually imply both Shannon entropy inequalities and $\ell^p$ inequalities.
This study is divided into two parts (Sec.~\ref{sec:lzero} and Sec.~\ref{sec:entropy}). In the first part, we analyze such support ($\ell^0$) inequalities in the context of frame
decompositions, as follows. Given two frames ${\mathcal U}$ and ${\mathcal V}$ in a Hilbert space
${\mathcal H}$, denote by $U$ and $V$ the corresponding analysis operators. $Ux$ and $Vx$ are then the two representations of $x$ with respect to the frames. For any
$x\in{\mathcal H}$, we prove for example a bound of the form
$$
\|Ux\|_0 \|Vx\|_0\ge \frac1{{\boldsymbol\mu}_*^2}\ ,
$$
where ${\boldsymbol\mu}_*$ is a constant that only depends on the two frames.
In the case of orthonormal bases, these inequalities yield refined forms for
support inequalities (${\boldsymbol\mu}_*\le{\boldsymbol\mu}$), for which we can analyze conditions for equality. The refined
inequalities involve cumulated coherence measures, instead of the standard
coherence measures used classically. In the case of frame decompositions, the
inequalities we obtain concern analysis coefficients, while most recent
contributions (in the domain of sparse decompositions and approximation) focus
on inequalities involving synthesis coefficients. Therefore, exact recovery
results such as those derived in~\cite{Elad02Generalized,Ghobber11uncertainty}
do not apply directly to the new results.
However, given the renewed interest in analysis-based sparse
decompositions and co-sparsity (see e.g.~\cite{Nam11cosparse}), we believe that
these new inequalities are of interest, as they can yield bounds for the
performances of cosparse signal recovery methods. Consider for instance the
following signal separation problem: given two frames ${\mathcal U}$ and ${\mathcal V}$, and some
observed signal $u\in{\mathcal H}$, we want to split $u$ as a sum of two components whose
respective analysis coefficients with respect to frames ${\mathcal U}$ and ${\mathcal V}$ are sparse.
In other words, we want to solve
$$
\min_{x,y\in{\mathcal H}} \left[\|Ux\|_0 +\|Vy\|_0\right]
\quad\hbox{under constraint}\quad
u=x+y\ ,
$$
where $U$ and $V$ are the analysis operators of the two frames under consideration.
Given two such decompositions $u=x+y = x'+y'$, the above support inequality
directly leads to
\begin{align*}
\|Ux\|_0 + \|Vy\|_0 + &\|Ux'\|_0 + \|Vy'\|_0 \\
&\ge \|U(x-x')\|_0 + \|V(y'-y)\|_0\ge\frac2{{\boldsymbol\mu}_*}\ .
\end{align*}
Therefore, if one is given a splitting of the form $u=x+y$ such that
$\|Ux\|_0 + \|Vy\|_0 < 1/{\boldsymbol\mu}_*$, this splitting is automatically the solution
of the above optimization problem.
Besides support size estimates, we also obtain entropic inequalities
for analysis coefficients with respect to frames, which explicitly involve
the frame bounds. This is developed in the second part of the study. As a particular case, Shannon entropy bounds are derived, and
it is shown that the latter are only informative for tight frames. In that case,
the entropy inequalities take a fairly simple form; for
example, denoting by $S(a)$ the Shannon entropy of a vector $a$, we show that
given two tight frames ${\mathcal U}$ and ${\mathcal V}$, with respective analysis operators
$U$ and $V$, then
$$
S(Ux) + S(Vx)\ge -2\ln({\boldsymbol\mu}_*)\ .
$$
Such an inequality also turns out to yield the above-mentioned support inequalities
as a by-product. Finally, we also derive new $\ell^p$ inequalities as consequences of
R\'enyi entropic inequalities.
\section{Refined Elad-Bruckstein $\ell^0$ inequalities}\label{sec:lzero}
\subsection{Notations}
We first introduce the general setting we shall be working with. Throughout this
paper, we shall denote by ${\mathcal U}=\{u_k,\,k\in\Lambda\}$ and
${\mathcal V} =\{v_\ell,\,\ell\in\Lambda\}$ two countable frames for the Hilbert
space ${\mathcal H}$ (we refer to~\cite{Christensen03Introduction} for a self-contained
account of frame theory). Here, the index set $\Lambda$ will be finite when ${\mathcal H}$ is
finite-dimensional, and infinite otherwise. We write $\|x\|$ (also denoted $\|x\|_2$) for the norm of $x$ in ${\mathcal H}$. We denote by $A_{\mathcal U},B_{\mathcal U}$ and
$A_{\mathcal V},B_{\mathcal V}$ the corresponding frame bounds, i.e. we have for all $x\in{\mathcal H}$
\begin{align}
A_{\mathcal U}\|x\|^2&\le\sum_k |\langle x,u_k\rangle|^2\le B_{\mathcal U}\|x\|^2\ ,\\
A_{\mathcal V}\|x\|^2&\le\sum_\ell |\langle x,v_\ell\rangle|^2\le B_{\mathcal V}\|x\|^2\ .
\end{align}
Let $U:{\mathcal H}\to\ell^2(\Lambda)$ and $V:{\mathcal H}\to\ell^2(\Lambda)$ be the
corresponding analysis operators, i.e.
\begin{equation}
a_k\stackrel{\Delta}{=}(Ux)_k = \langle x,u_k\rangle\ ,\quad
b_\ell\stackrel{\Delta}{=}(Vx)_\ell = \langle x,v_\ell\rangle\ ,\qquad x\in{\mathcal H}\ ,
\end{equation}
and denote by $T=VU^\dagger: a\to b$ the change of frame operator (with
$U^\dagger$ the Moore-Penrose pseudo-inverse of $U$).
We shall also denote by ${\widetilde{{\mathcal U}}}$ and ${\widetilde{{\mathcal V}}}$ corresponding (generic)
dual frames, among which the canonical dual frames will be denoted
by ${\widetilde{{\mathcal U}}}^\circ = (U^*U){^{-1}}{\mathcal U}$ and ${\widetilde{{\mathcal V}}}^\circ=(V^*V){^{-1}}{\mathcal V}$.
As is well known (see~\cite{Christensen03Introduction}), the corresponding frame
bounds are respectively $A_{{\widetilde{{\mathcal U}}}^\circ}=1/B_{\mathcal U}$, $B_{{\widetilde{{\mathcal U}}}^\circ}=1/A_{\mathcal U}$,
and similarly for ${\widetilde{{\mathcal V}}}$. We recall that in the particular case where ${\mathcal U}$
and/or ${\mathcal V}$ are (Riesz) bases, ${\widetilde{{\mathcal U}}}$ and/or ${\widetilde{{\mathcal V}}}$ are the corresponding
bi-orthogonal bases.
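In finite dimensions, these objects are straightforward to compute. The following sketch (with hypothetical random frames; the rows of the matrices are the frame vectors) checks the frame bound inequalities, with optimal bounds obtained here as extreme eigenvalues of $U^*U$, and the relation $b=Ta$ with $T=VU^\dagger$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 10                              # ambient dimension, number of frame vectors
U = rng.standard_normal((M, N))           # analysis operator: a = U @ x
V = rng.standard_normal((M, N))           # rows are the frame vectors

# Optimal frame bounds of U: extreme eigenvalues of U^* U (positive almost surely)
eig = np.linalg.eigvalsh(U.T @ U)
A_U, B_U = eig[0], eig[-1]

x = rng.standard_normal(N)
a, b = U @ x, V @ x
nx2 = x @ x
assert A_U * nx2 - 1e-9 <= a @ a <= B_U * nx2 + 1e-9   # frame bound inequalities

T = V @ np.linalg.pinv(U)                 # change of frame operator T = V U^+
assert np.allclose(T @ a, b)              # T maps analysis coefficients a to b
```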
In the following, we shall make use of the following quantity:
\begin{definition}
Let $r\in [1,2]$, let $r'$ be conjugate to $r$, i.e. such that $1/r + 1/r'=1$.
The mutual coherence of order $r$ of two frames ${\mathcal U}$ and ${\mathcal V}$ is defined by
\begin{equation}
{\boldsymbol\mu}_r({\mathcal U},{\mathcal V}) \stackrel{\Delta}{=} \sup_\ell \left(\sum_k |\langle
u_k,v_\ell\rangle|^{r'}\right)^{r/r'}\ .
\end{equation}
In the case $r=1$, this corresponds to the standard mutual coherence, simply
denoted by ${\boldsymbol\mu}({\mathcal U},{\mathcal V})$.
\end{definition}
This quantity is clearly well-defined in finite-dimensional settings. Notice
also that in infinite-dimensional situations (i.e. when $\Lambda$ is an
infinite index set), this quantity is well-defined for all $r\in [1,2]$. Indeed,
${\boldsymbol\mu}_2({\mathcal U},{\mathcal V})\le B_{\mathcal U}\,\sup_\ell \|v_\ell\|^2$, and
${\boldsymbol\mu}_r^{r'/r}({\mathcal U},{\mathcal V})\le {\boldsymbol\mu}_2({\mathcal U},{\mathcal V}) \sup_{k,\ell}|\langle
u_k,v_\ell\rangle|^{r'-2}$, which is finite since $r'\ge 2$.
In finite-dimensional situations, the notion of mutually unbiased bases has been
introduced in the physics literature by Schwinger~\cite{Schwinger60unitary}
(see~\cite{Wehner10entropic} for a review).
\begin{definition}
Two orthonormal bases ${\mathcal U}$ and ${\mathcal V}$ in an $N$-dimensional Hilbert space ${\mathcal H}$ are
mutually unbiased (MUB) if
$$
|\langle u_k,v_\ell\rangle| = \frac1{\sqrt{N}}\ ,\qquad\forall k,\ell=0,\dots
N-1\ .
$$
${\mathcal U}$ and ${\mathcal V}$ are blockwise mutually unbiased bases (BMUB) of ${\mathcal H}$ if
${\mathcal U}=\{{\mathcal U}^{(1)},\dots{\mathcal U}^{(K)}\}$, ${\mathcal V}=\{{\mathcal V}^{(1)},\dots{\mathcal V}^{(K)}\}$, where
for all $k=1,\dots K$, ${\mathcal U}^{(k)}$ and ${\mathcal V}^{(k)}$ span the same subspace
${\mathcal H}^{(k)}$, of dimension $N_k$, and are MUBs for ${\mathcal H}^{(k)}$, with
$\bigoplus_k{\mathcal H}^{(k)}={\mathcal H}$.
\end{definition}
Notice that the coherence of a MUB in an $N$-dimensional Hilbert space equals
${\boldsymbol\mu}({\mathcal U},{\mathcal V})=1/\sqrt{N}$, the corresponding $r$-coherence equals
${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})=N^{r/2-1}$, and the $r$-coherence of a BMUB equals
${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})=\max_kN_k^{r/2-1}$.
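These closed forms are easily confirmed numerically. The sketch below (using the Kronecker/Fourier pair, a standard example of MUBs) evaluates ${\boldsymbol\mu}_r$ directly from the definition:

```python
import numpy as np

def mu_r(U, V, r):
    """r-coherence of two frames; the rows of U and V are the frame vectors."""
    G = np.abs(U @ V.conj().T)            # |<u_k, v_l>|
    if r == 1.0:                          # r' = infinity: standard mutual coherence
        return G.max()
    rp = r / (r - 1.0)                    # conjugate exponent r'
    return max(np.sum(G[:, l]**rp)**(r / rp) for l in range(G.shape[1]))

N = 8
U = np.eye(N)                             # Kronecker basis
V = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary Fourier basis (a MUB partner)
for r in (1.0, 1.25, 1.5, 2.0):
    assert np.isclose(mu_r(U, V, r), N**(r/2 - 1))   # mu_r = N^(r/2-1) for a MUB
```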
\subsection{Refined Elad-Bruckstein inequality}
The classical Elad-Bruckstein $\ell^0$ inequality~\cite{Elad02Generalized} (a
strong form of which has been given in~\cite{Ghobber11uncertainty}) gives
a lower bound for the product of support sizes of two orthonormal basis
representations of a single vector. The inequality can be extended to the frame
case and generalized as follows.
\begin{theorem}
\label{th:ref.EB}
Let ${\mathcal U}$ and ${\mathcal V}$ be two frames of the Hilbert space ${\mathcal H}$. For any $x\in{\mathcal H}$,
$x\ne 0$, denote by $a=Ux$ and $b=Vx$ the analysis coefficients of $x$ with
respect to these two frames.
\begin{enumerate}
\item
For all $r\in[1,2]$, coefficients $a$ and $b$ satisfy the uncertainty inequality
\begin{equation}
\|a\|_0.\|b\|_0\ge\frac1{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}\ .
\end{equation}
Therefore, $\|a\|_0.\|b\|_0\ge 1/{{\boldsymbol\mu}_*({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V},{\widetilde{{\mathcal V}}})}^{2}$, where
\begin{equation}
\label{fo:mustar}
{\boldsymbol\mu}_*({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V},{\widetilde{{\mathcal V}}})\stackrel{\Delta}{=}
\inf_{r\in [1,2]}\sqrt{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}\ .
\end{equation}
\item
For all $r\in [1,2]$, the inequality can only be sharp if the following three
properties hold true:
\begin{itemize}
\item[i.] the sequences $|a|$ and $|b|$ are constant on their support,
\item[ii.] for all $k\in\hbox{supp}(a)$ (resp. $\ell\in\hbox{supp}(b)$) the sequence
$\ell\to|\langle{\tilde{u}}_k,v_\ell\rangle|$ (resp.
$k\to|\langle{\tilde{v}}_\ell,u_k\rangle|$) is constant on $\hbox{supp}(b)$ (resp. $\hbox{supp}(a)$).
\item[iii.] for all $k\in\hbox{supp}(a),\ell\in\hbox{supp}(b)$,
$\arg(\langle{\tilde{u}}_k,v_\ell\rangle)=\arg(b_\ell)-\arg(a_k) =
-\arg(\langle{\tilde{v}}_\ell,u_k\rangle)$.
\end{itemize}
\end{enumerate}
\end{theorem}
\underline{\bf Proof:}
\begin{enumerate}
\item
Let $r\in [1,2]$, let $x\in{\mathcal H}$, $x\ne 0$. First remark that
\begin{eqnarray*}
\|b\|_\infty &=& \sup_\ell|\langle x,v_\ell\rangle|\\
&=&\sup_\ell \left|\left\langle\sum_k a_k {\tilde{u}}_k,v_\ell\right\rangle\right|\\
&\le& \sup_\ell \sum_k |a_k|\,|\langle{\tilde{u}}_k,v_\ell\rangle|\ ,
\end{eqnarray*}
and H\"older's inequality yields
\begin{equation}
\label{fo:frame.LrLinf.bound}
\|b\|_\infty\le \|a\|_r\ {\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V})^{1/r}\ .
\end{equation}
Similarly,
\begin{equation}
\label{fo:frame.LrLinf.bound2}
\|a\|_\infty\le \|b\|_r\ {\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})^{1/r}\ .
\end{equation}
Then, notice that
$$
\|a\|_r^r\le \|a\|_0\,\|a\|_\infty^r\le \|a\|_0\,\|b\|_r^r\ {\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\ .
$$
Combining this with the analogous estimate for $\|b\|_r^r$ and simplifying by $\|a\|_r^r\,\|b\|_r^r$ (which is nonzero since $x\ne 0$) proves the first part of the theorem.
\item
Assume first $r\ne 1$. As for the sharpness of the bound, notice first that the inequality
$\|a\|_r^r\le \|a\|_0\,\|a\|_\infty^r$ is sharp if and only if $|a|$ is constant
on its support (similarly, $|b|$ has to be constant on its support). Now in the
first inequality, H\"older's inequality is an equality if and only if the
sequence $k\to |\langle{\tilde{u}}_k,v_\ell\rangle|^{r'}$ is proportional to $|a|^r$,
meaning that the sequence $k\to |\langle{\tilde{u}}_k,v_\ell\rangle|$ is constant on
its support, which coincides with the support of $a$. A similar reasoning is
done for $b$ and the sequence $\ell\to|\langle{\tilde{v}}_\ell,u_k\rangle|$. The last
inequality to be investigated is $|b_\ell| \le \sum_k
|a_k|\,|\langle{\tilde{u}}_k,v_\ell\rangle|$. The latter becomes an equality if and only
if the sum only involves positive numbers, i.e. iff
$\arg(\langle{\tilde{u}}_k,v_\ell\rangle)=\arg(b_\ell)-\arg(a_k)$. A similar reasoning
yields the condition $\arg(\langle{\tilde{v}}_\ell,u_k\rangle) =
\arg(a_k)-\arg(b_\ell)$.
Finally, consider the case $r=1$. The above argument can be reproduced exactly,
except for the tightness argument for H\"older's inequality. The latter can now
be an equality only if the sequence $k\to |\langle{\tilde{u}}_k,v_\ell\rangle|$ is
equal to a constant (namely, ${\boldsymbol\mu}({\widetilde{{\mathcal U}}},{\mathcal V})$) on $\hbox{supp}(a)$, and no larger
outside the support. This does not change the conclusion.
\end{enumerate}
This concludes the proof.{\hfill$\spadesuit$}
\begin{remark}
\begin{enumerate}
\item
Clearly, by the arithmetic-geometric inequality, we also obtain the bound
\begin{equation}
\|a\|_0 + \|b\|_0\ge \frac2{{\boldsymbol\mu}_*({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V},{\widetilde{{\mathcal V}}})}
\end{equation}
\item
Using exactly the same techniques, the uncertainty inequality can be extended to
$K$ frames. Given $K$ frames ${\mathcal U}^{(k)},\ k=1,\dots K$ and denoting by
$a^{(k)}$ the corresponding sequences of analysis coefficients of any $x\in{\mathcal H}$,
we readily obtain the bound
\begin{equation}
\|a^{(1)}\|_0.\|a^{(2)}\|_0\dots\|a^{(K)}\|_0 \ge \left(\prod_{k=1}^{K}\mu_*^{(k)}\right)^{-1}
\end{equation}
where
$$\mu_*^{(k)}=\mu_*({\mathcal U}^{(k)},{\widetilde{{\mathcal U}}}^{(k)},{\mathcal U}^{((k \bmod K)+1)},{\widetilde{{\mathcal U}}}^{((k\bmod K)+1)}),
$$
and again by the arithmetic-geometric inequality,
\begin{equation}
\|a^{(1)}\|_0 +\|a^{(2)}\|_0 +\cdots+\|a^{(K)}\|_0\ge K\
\left(\prod_{k=1}^{K}\mu_*^{(k)}\right)^{-1/K}\ .
\end{equation}
\end{enumerate}
\end{remark}
\begin{remark}
We notice that when ${\mathcal U}$ and ${\mathcal V}$ are orthonormal bases the result
generalizes the Elad-Bruckstein inequality. When ${\mathcal U}$ and ${\mathcal V}$ are
non-orthonormal bases, ${\widetilde{{\mathcal U}}}$ and ${\widetilde{{\mathcal V}}}$ are their respective biorthogonal bases,
and we obtain a straightforward generalization. In the case of frames, let us
point out that the generalization we obtain concerns analysis coefficients
rather than synthesis coefficients.
\end{remark}
\begin{remark}
Notice finally that these bounds involve arbitrary dual frames ${\widetilde{{\mathcal U}}}$ and ${\widetilde{{\mathcal V}}}$
of ${\mathcal U}$ and ${\mathcal V}$, not necessarily the canonical ones. Therefore the bound can
be made more general, in the form
\begin{equation}
\|a\|_0.\|b\|_0\ge 1/{{\boldsymbol\mu}_{**}({\mathcal U},{\mathcal V})}^{2}\ ,
\end{equation}
where
\begin{equation}
{\boldsymbol\mu}_{**}({\mathcal U},{\mathcal V})\stackrel{\Delta}{=}
\inf_{{\widetilde{{\mathcal U}}},{\widetilde{{\mathcal V}}}}\ \inf_{r\in [1,2]}\sqrt{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}\ ,
\end{equation}
the infimum running over the family of dual frames of ${\mathcal U}$ and ${\mathcal V}$. A
characterization of such families can be found
in~\cite{Christensen03Introduction}, Theorem 5.6.5.
\end{remark}
\subsection{Examples and comments: the case of orthonormal bases}
Consider first the case where ${\mathcal U}$ and ${\mathcal V}$ are two orthonormal bases in
finite dimensional Hilbert spaces. First
notice that the case $r=1$ provides an elementary proof of the Elad-Bruckstein
inequality (which involves $1/{\boldsymbol\mu}_1({\mathcal U},{\mathcal V})^2$ as a lower bound), together
with explicit conditions for sharpness. In
the particular case of mutually unbiased bases, i.e. orthonormal bases such that
$|\langle u_k,v_\ell\rangle|$ is constant, ${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})=N^{r/2-1}$ is
monotone and minimal for $r=1$, which yields the usual coherence ${\boldsymbol\mu} =
{\boldsymbol\mu}_1=1/\sqrt{N}$, $N$ being the dimension of the considered Hilbert space. An
example of mutually unbiased bases is provided by the Kronecker and Fourier
bases in ${\mathbb R}^N$, in which case the Elad-Bruckstein inequality coincides with
the inequality derived before by Donoho and Huo~\cite{Donoho01uncertainty}.
For blockwise mutually unbiased bases, we also obtain a monotone function of $r$
for the $r$-coherence ${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})=\max_kN_k^{r/2-1}$, which means that
again the optimal bound is provided by ${\boldsymbol\mu}_1$.
In the case of orthonormal bases, the smallest possible value for the coherence
is provided by the Welch bound: ${\boldsymbol\mu}\ge 1/\sqrt{N}$. Therefore, we obtain
\begin{corollary}
Assume ${\mathcal U}$ and ${\mathcal V}$ are orthonormal bases.
The optimal bound for the refined Elad-Bruckstein uncertainty inequality is
attained in the case of mutually unbiased bases, for $r=1$.
\end{corollary}
Consider now the case where the inequality is an equality, in the case $r\ne 1$.
By Theorem~\ref{th:ref.EB}, the analysis coefficients $a$ and $b$ of the
corresponding optimizer are such that $|a|$ and $|b|$ are constant on their
support. The proof of part 2. of the theorem also implies that for
$k\in\hbox{supp}(a)$, the sequence $\ell\to|\langle u_k,v_\ell\rangle|$
vanishes outside $\hbox{supp}(b)$ and equals a constant on $\hbox{supp}(b)$. The latter
constant equals necessarily $\|b\|_0^{-1/2}$, and
${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})=\|b\|_0^{r/2-1}$. Similarly, ${\boldsymbol\mu}_r({\mathcal V},{\mathcal U})=\|a\|_0^{r/2-1}$.
Assume finally that the inequality is an equality; the latter thus reads
$$
\|a\|_0.\|b\|_0 = \|a\|_0^{1-r/2}.\|b\|_0^{1-r/2}\ ,
$$
which implies (for nonzero signals $x$) $\|a\|_0.\|b\|_0=1$, i.e. the two bases
have at least one common element, and the signal is a multiple of one of these
common elements.
\begin{corollary}
Assume ${\mathcal U}$ and ${\mathcal V}$ are orthonormal bases. For $r\ne 1$, the corresponding
refined Elad-Bruckstein inequality cannot be an equality, unless the two bases
have a common element.
\end{corollary}
\medskip
Notice however that the case $r\in [1,2]$ constitutes a true generalization;
indeed, for general pairs of orthonormal bases, it turns out that
$\sup_{r\in[1,2]}\left[1/\left({\boldsymbol\mu}_r({\mathcal U},{\mathcal V}){\boldsymbol\mu}_r({\mathcal V},{\mathcal U})\right)\right] > 1/{\boldsymbol\mu}_1({\mathcal U},{\mathcal V})^2$.
This is exemplified in Figure~\ref{fi:rcoherence}, which displays the
functions ${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})$, ${\boldsymbol\mu}_r({\mathcal V},{\mathcal U})$ and
$\sqrt{{\boldsymbol\mu}_r({\mathcal U},{\mathcal V}){\boldsymbol\mu}_r({\mathcal V},{\mathcal U})}$ as a function of $r$, in a generic
situation: the two bases ${\mathcal U}$ and ${\mathcal V}$ are random bases, obtained by
diagonalization of random (Gaussian) symmetric matrices. As can be seen in this
picture, the minimum of these three functions is not attained for $r=1$ but for
a larger value. For the sake of comparison, the case of mutually unbiased bases
is also represented and exhibits a power law behavior as a function of $r$
(represented as a straight line in the logarithmic plot). This shows that the
coherence based bounds are not the best possible ones in general. Elementary
infinitesimal calculus yields the following expression for the behavior of the
$r$-coherence near $r=2$ (for orthonormal bases ${\mathcal U}$ and ${\mathcal V}$):
\begin{align*}
{\boldsymbol\mu}_r({\mathcal U},{\mathcal V}) =& 1 \!-\! \frac{2\!-\!r}2 \min_\ell \left(\!\!-\!\sum_k |\langle
u_k,v_\ell\rangle|^2 \ln\left(|\langle u_k,v_\ell\rangle|^2 \right)\!\right)\\
&+ O((2-r)^2)\ ,
\end{align*}
i.e. the slope of the tangent at $r=2$ is given by the entropy-like expression
$$
\hbox{slope}=\frac12\,\min_\ell\left(-\sum_k|\langle u_k,v_\ell\rangle|^2\ln\left(|\langle u_k,v_\ell\rangle|^2
\right)\right)
$$
(see section below); this entropy is known to be maximal (in finite dimensional
situations, see~\cite{Cover91elements} for more details) when the $|\langle
u_k,v_\ell\rangle|^2$ are all equal.
\begin{figure}
\centerline{
\includegraphics[width=7cm]{rcoherence_bw}
}
\centerline{
\includegraphics[width=8cm]{rcoherence_mdct}
}
\caption{Logarithm of $r$-coherence functions as a function of $r$.
${\boldsymbol\mu}_r({\mathcal U},{\mathcal V})$ (dashed), ${\boldsymbol\mu}_r({\mathcal V},{\mathcal U})$ (dash-dotted) and
$\sqrt{{\boldsymbol\mu}_r({\mathcal U},{\mathcal V}){\boldsymbol\mu}_r({\mathcal V},{\mathcal U})}$ (full); straight line: mutually
unbiased bases.
Top: random bases ${\mathcal U}$ and ${\mathcal V}$.
Bottom: MDCT bases with different window sizes.
}
\label{fi:rcoherence}
\end{figure}
More generally, we have the following result on $r$-coherences.
\begin{proposition}\label{prop:mur}
Let $\{u_k\}_k$ and $\{v_l\}_l$ be two frames in the Hilbert space ${\mathcal H}$ of
dimension $N$. For each $l$, let $s_l=\max_k|\langle u_k,v_l\rangle|$ and denote by
$n_l=\#\{k :\ |\langle u_k,v_l\rangle|=s_l\}$ the multiplicity of this maximal
value.
If $\max_l n_ls_l<1$, then there exists $r>1$ such that $\mu_r<\mu_1$.
\end{proposition}
\underline{\bf Proof:}
It is enough to show that the derivative of $\ln\mu_r$ (hence that of $\mu_r$) is negative at $r=1$ under
the stated conditions. Let us introduce the notation
$$
S_{kl}=|\langle u_k,v_l\rangle|\quad \text{ and }\quad L_l(r)=\ln \left(\sum_k
S_{kl}^{r'}\right)^{r/r'},
$$
so that $\ln\mu_r=\sup_l L_l(r)$. If for all $l$ the derivative of $L_l$ is
negative, so is the derivative of $\ln\mu_r$.
Since $\{u_k\}_k$ and $\{v_l\}_l$ are frames, $\sum_k S_{kl}>0$ for all $l$ and
$L_l$ is well-defined as well as its derivative near $r=1$, $r\ge 1$. This
latter reads:
\begin{align*}
L_l'(r)&=\ln (n_ls_l)+\sum_{k\in\Lambda_l}
\alpha_{kl}^{\frac{r}{r-1}}+\frac{\ln s_l}{r-1}\sum_{k\in\Lambda_l}
\alpha_{kl}^{\frac{r}{r-1}}\\
&+{\cal O}\left(\frac{1}{r-1}\left(\sum_{k\in\Lambda_l} \alpha_{kl}^{\frac{r}{r-1}}\right)^2\right),
\end{align*}
where $\alpha_{kl}=|\langle u_k,v_l\rangle|/(n_ls_l)$ and $\Lambda_l$ is the set of
indices $k$ such that $|\langle u_k,v_l\rangle|\neq s_l$.
For $r$ close to one, the dominant term is $\ln (n_ls_l)$. Hence, if
$\max_l n_ls_l<1$, the derivative of $\mu_r$ is negative near $r=1$.{\hfill$\spadesuit$}
\begin{remark}
If two orthonormal bases have a high mutual coherence $\mu_1$, a single term is dominant in each sum; since in this case $s_l\le 1$ and $n_l=1$, Proposition~\ref{prop:mur} applies. However, for frames with high coherence, $s_l$ is large for some $l$, and its multiplicity $n_l$ is likely to be large as well; the conditions of
the proposition may then fail, and $\mu_r$ need not be smaller than
$\mu_1$. Similarly, if the coherence is very low (as for MUBs),
$n_l$ may be large and again $\mu_r$ may not be smaller than $\mu_1$. Finally, for MUBs one has $n_ls_l=\sqrt{N}$, and $\ln(n_ls_l)=\frac12\ln N$ is the slope of the straight line plotted in Fig.~\ref{fi:rcoherence}.
\end{remark}
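The generic behavior displayed in Figure~\ref{fi:rcoherence} can be reproduced with the following numerical sketch (hypothetical random bases obtained, as in the figure, by diagonalizing random Gaussian symmetric matrices; for such bases $\max_l n_ls_l<1$ with overwhelming probability):

```python
import numpy as np

def mu_r(U, V, r):
    """r-coherence from the definition; rows of U and V are the frame vectors."""
    G = np.abs(U @ V.conj().T)
    if r == 1.0:
        return G.max()
    rp = r / (r - 1.0)
    return max(np.sum(G[:, l]**rp)**(r / rp) for l in range(G.shape[1]))

rng = np.random.default_rng(0)
N = 32
A = rng.standard_normal((N, N)); U = np.linalg.eigh(A + A.T)[1].T
B = rng.standard_normal((N, N)); V = np.linalg.eigh(B + B.T)[1].T

rs = np.linspace(1.0, 2.0, 101)
vals = [np.sqrt(mu_r(U, V, r) * mu_r(V, U, r)) for r in rs]
# vals[0] is the standard coherence mu_1; the infimum over r is typically
# strictly smaller, and vals[-1] = mu_2 = 1 exactly for orthonormal bases
print(vals[0], min(vals))
```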
\section{Entropic inequalities}\label{sec:entropy}
The support inequalities described above can also be obtained as particular
limits of entropic inequalities, which have been derived during the last 20
years in the mathematical physics and information theory communities.
\subsection{Entropies}
In information theory, the notion of entropy is often used to measure disorder,
or information content of a random source; entropy measures are basically
related to measures of dispersion of the probability density function of the
random variables under consideration.
In the context of sparse analysis, the coefficients of the decomposition of any
finite-norm vector with respect to a frame can be turned into a probability
distribution, after suitable normalization. With the same notations as before,
denote by $a$ and $b$ the sequences of analysis coefficients of $x\in{\mathcal H}\ (x\ne 0)$ with respect to
the frames ${\mathcal U}$ and ${\mathcal V}$, and set $\tilde a = a/\|a\|_2$, $\tilde b = b/\|b\|_2$. Given
$\alpha\in[0,\infty]$ we introduce the corresponding R\'enyi entropy
\begin{equation}
R_\alpha(a) \stackrel{\Delta}{=} \frac1{1-\alpha}\,\ln\!\left(\|\tilde
a\|_{2\alpha}^{2\alpha}\right)\ .
\end{equation}
R\'enyi entropies fulfill a number of simple properties, among which we will use
the following two: monotonicity and limit to Shannon's entropy. More precisely,
for a given coefficient sequence $a$,
\begin{equation}
\alpha\le\beta\quad\implies\quad R_\alpha(a) \ge R_\beta(a)\ ,
\end{equation}
and
\begin{equation}
\lim_{\alpha\to 1} R_\alpha(a) = -\sum_n |\tilde
a_n|^2\ln\left(|\tilde a_n|^2\right)\stackrel{\Delta}{=} S(a)\ .
\end{equation}
$S(a)$ is the Shannon entropy of the coefficient sequence. Also, notice that
$R_0(a) = \ln \|a\|_0$. This will lead to support inequalities as consequences
of R\'enyi entropy inequalities.
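These properties are easily checked numerically. The sketch below implements $R_\alpha$ directly from the definition, treating $|\tilde a_n|^2$ as a probability distribution (a simple illustration with an arbitrary test vector):

```python
import numpy as np

def renyi(a, alpha):
    """Renyi entropy R_alpha of the distribution p_n = |a_n|^2 / ||a||_2^2."""
    p = np.abs(a)**2 / np.sum(np.abs(a)**2)
    p = p[p > 0]                          # 0 ln 0 = 0 convention
    if alpha == 1.0:
        return -np.sum(p * np.log(p))     # Shannon limit
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

a = np.array([3.0, 1.0, 0.0, 2.0])
assert np.isclose(renyi(a, 0.0), np.log(3))              # R_0 = ln ||a||_0
assert renyi(a, 0.5) >= renyi(a, 1.0) >= renyi(a, 2.0)   # monotone in alpha
assert abs(renyi(a, 0.999) - renyi(a, 1.0)) < 1e-3       # limit to Shannon
```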
Uncertainty inequalities involving entropy measures have been derived in several
different contexts
(see~\cite{Beckner75inequalities,Dembo91information,Maassen88generalized} for
example). We derive below similar inequalities in a more general setting.
\subsection{Entropic uncertainty inequalities for frame expansions}
As above, let us consider two frames ${\mathcal U}$ and ${\mathcal V}$. We use the same notations
as in the previous section, and introduce the following additional constants:
the mixed frame bound ratio ${\boldsymbol\rho}$, the geometric mean of the
frame bound ratios ${\boldsymbol\sigma}$, and the normalized $r$-coherence ${\boldsymbol\nu}_r$, written as
\begin{align}
\label{fo:notations}
&{\boldsymbol\rho}({\mathcal U},{\mathcal V})\stackrel{\Delta}{=}\sqrt{\frac{B_{\mathcal V}}{A_{\mathcal U}}}\ ,\qquad
{\boldsymbol\sigma}({\mathcal U},{\mathcal V}) \stackrel{\Delta}{=} \sqrt{\frac{B_{\mathcal U} B_{\mathcal V}}{A_{\mathcal U} A_{\mathcal V}}}\ge 1\ ,\nonumber\\
&{\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V}) = \frac{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V})}{{\boldsymbol\rho}({\mathcal U},{\mathcal V})^r}\ .
\end{align}
For the sake of simplicity, we shall drop the $r$ index in the case $r=1$, and
set ${\boldsymbol\mu}={\boldsymbol\mu}_1$ and ${\boldsymbol\nu}={\boldsymbol\nu}_1$.
We then have the following theorem, which can be seen as a frame generalization
of the Maassen-Uffink uncertainty
inequality~\cite{Maassen88generalized,Dembo91information}:
\begin{theorem}
\label{th:gen.entropic.ineq}
Let ${\mathcal H}$ be a separable Hilbert space, let ${\mathcal U}$ and ${\mathcal V}$ be two frames of
${\mathcal H}$, and let ${\widetilde{{\mathcal U}}}$ and ${\widetilde{{\mathcal V}}}$ denote corresponding dual frames. Let
$r\in [1,2)$.
For all $\alpha\in [r/2,1]$, let $\beta=\alpha(r-2)/(r-2\alpha)\in [1,\infty]$.
For $x\in{\mathcal H}$, denote by $a$ and $b$ the sequences of analysis coefficients of
$x$ with respect to ${\mathcal U}$ and ${\mathcal V}$. Then the R\'enyi entropies satisfy the
following bound:
\begin{eqnarray*}
(2-r) R_\alpha(a) + rR_\beta(b)
&\ge& -2\ln({\boldsymbol\nu}_r({\mathcal U},\tilde{\mathcal U},{\mathcal V}))\\
&&- \frac{2r\beta}{\beta-1}\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))
\end{eqnarray*}
\end{theorem}
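For two orthonormal bases all frame bounds equal $1$, so that ${\boldsymbol\sigma}=1$ and ${\boldsymbol\nu}_r={\boldsymbol\mu}_r$, and the theorem reduces to a Maassen--Uffink-type inequality. The following numerical sketch checks the case $r=1$ for the Kronecker/Fourier pair (where ${\boldsymbol\mu}_1=1/\sqrt{N}$, so the bound reads $R_\alpha(a)+R_\beta(b)\ge\ln N$) on a random signal:

```python
import numpy as np

def renyi(c, alpha):
    p = np.abs(c)**2 / np.sum(np.abs(c)**2)
    p = p[p > 0]
    if alpha == 1.0:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

rng = np.random.default_rng(3)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
a = x                                     # Kronecker analysis coefficients
b = np.fft.fft(x) / np.sqrt(N)            # unitary Fourier analysis coefficients

r = 1.0
for alpha in (0.6, 0.8, 0.95):
    beta = alpha * (r - 2) / (r - 2 * alpha)          # conjugacy relation of the theorem
    lhs = (2 - r) * renyi(a, alpha) + r * renyi(b, beta)
    assert lhs >= np.log(N) - 1e-9        # bound -2 ln(1/sqrt(N)) = ln N
```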
\underline{\bf Proof:} the proof is both a refinement and a frame
generalization of the proof in~\cite{Maassen88generalized,Dembo91information}.
Let $T: a\mapsto b$ denote the change of frame operator introduced above.
From the frame bounds, we obviously have the inequalities
$$
\|b\|_2\le\sqrt{\frac{B_{\mathcal V}}{A_{\mathcal U}}} \|a\|_2\ ,\qquad
\|a\|_2\le\sqrt{\frac{B_{\mathcal U}}{A_{\mathcal V}}} \|b\|_2\ ,
$$
so that we have the estimate
\begin{equation}
\|T\|_{2\to 2} \le \sqrt{\frac{B_{\mathcal V}}{A_{\mathcal U}}}
={\boldsymbol\rho}({\mathcal U},{\mathcal V})\ .
\end{equation}
A second bound is obtained as in~\eqref{fo:frame.LrLinf.bound}, and yields
\begin{equation}
\|T\|_{r\to\infty} = {\boldsymbol\mu}_r(\tilde{\mathcal U},{\mathcal V})^{1/r}\ .
\end{equation}
Let $p_0=q_0=2$, $p_1=r$,
$q_1=\infty$, and set for $\theta\in [0,1]$
$$
\frac1{p} = \frac{1-\theta}2 + \frac{\theta}r\ ,\qquad
\frac1{q} = \frac{1-\theta}2\ .
$$
Clearly, $1-\theta = 2/q$ and $\theta = 1-2/q = r(1/p-1/q)$, and
the Riesz-Thorin lemma yields the following bound.
\begin{equation}
\|T\|_{p\to q}\le {\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V})^{(1-2/q)/r}{\boldsymbol\rho}({\mathcal U},{\mathcal V})^{2/q}\ .
\end{equation}
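The interpolation step can be spelled out as a routine verification: since $1-\theta=2/q$ and $\theta/r=1/p-1/q$, the Riesz--Thorin lemma applied to the two estimates above gives
$$
\|T\|_{p\to q}\le \|T\|_{2\to 2}^{1-\theta}\,\|T\|_{r\to\infty}^{\theta}
\le {\boldsymbol\rho}({\mathcal U},{\mathcal V})^{2/q}\,{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V})^{(1-2/q)/r}\ ,
$$
which is precisely the stated bound.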
Using the definition of $\tilde a$ and $\tilde b$ and the frame bounds, we deduce
\begin{align}
\|\tilde b\|_q&\le
{\boldsymbol\rho}({\mathcal U},{\mathcal V})^{2/q}{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V})^{1/p -1/q}\sqrt{\frac{B_{\mathcal U}}{A_{\mathcal V}}}
\|\tilde a\|_p\nonumber\\
&\le{\boldsymbol\sigma}({\mathcal U},{\mathcal V}){\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V})^{1/p-1/q}\|\tilde a\|_p\ ,
\end{align}
where we have used the bound $\|a\|_2/\|b\|_2\le{\boldsymbol\rho}({\mathcal V},{\mathcal U})$ and the
definition of ${\boldsymbol\nu}_r$ and ${\boldsymbol\sigma}$ in~\eqref{fo:notations}.
Set now $p=2\alpha$ and $q=2\beta$; taking logarithms, we get
\begin{eqnarray*}
\frac{1-\alpha}{2\alpha} R_\alpha(a)-\frac{1-\beta}{2\beta} R_{\beta}(b)&\ge&
-\left(\frac1{2\alpha}-\frac1{2\beta}\right)\,\ln({\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V}))\\
&&\qquad- \ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ ,
\end{eqnarray*}
Since $(\beta-1)/\beta = 1-2/q = r(1/2\alpha-1/2\beta)$, this implies
\begin{eqnarray*}
\frac{\beta(1-\alpha)}{\alpha(\beta-1)}R_\alpha(a) + R_\beta(b) &\ge&
-\frac2{r}\ln{\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V})\\
&&\qquad - 2\frac{\beta-1}\beta\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ .
\end{eqnarray*}
Finally, explicit calculations give $\alpha=\beta r/(r+2(\beta-1))$, so that
$$
\frac{\beta(1-\alpha)}{\alpha(\beta-1)} = \frac{2-r}r\in [0,1]\ ,
$$
which yields the desired result.{\hfill$\spadesuit$}
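For completeness, the explicit calculations invoked here are a routine check from the definition of $\beta$: since $\beta=\alpha(r-2)/(r-2\alpha)$,
$$
\beta-1 = \frac{\alpha(r-2)-(r-2\alpha)}{r-2\alpha} = \frac{r(\alpha-1)}{r-2\alpha}\ ,
$$
so that
$$
\frac{\beta(1-\alpha)}{\alpha(\beta-1)}
= \frac{\alpha(r-2)}{r-2\alpha}\cdot\frac{(1-\alpha)(r-2\alpha)}{\alpha\,r(\alpha-1)}
= \frac{2-r}r\ .
$$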
Notice that since $(2-r)/r\in [0,1]$, this implies the (generally non sharp)
inequality
$$
R_\alpha(a) + R_\beta(b)\ge
-\frac{2}r\ln({\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V})) -
\frac{2\beta}{\beta-1}\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ .
$$
It is also worth noticing that in general, the limit $\alpha\to 1$ (which yields the
sum of the Shannon entropies as left hand side) is non-informative, since the
right hand side tends to $-\infty$, unless ${\boldsymbol\sigma}=1$, i.e. ${\mathcal U}$ and ${\mathcal V}$ are
tight. In that case the following simplified inequalities hold true:
\begin{corollary}
Assume ${\mathcal U}$ and ${\mathcal V}$ are tight frames, and let $r\in[1,2)$:
\begin{enumerate}
\item
For all $\alpha\in [r/2,1]$, with $\beta=\alpha(r-2)/(r-2\alpha)\in [1,\infty]$
\begin{equation}
(2-r)R_\alpha(a) +r R_\beta(b)\ge -2\ln({\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V}))\ .
\end{equation}
\item
the following inequalities between Shannon entropies hold true:
\begin{equation}
S(a)+S(b) \ge -2\ln\left({\boldsymbol\mu}_*({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V},{\widetilde{{\mathcal V}}})\right)\ ,
\end{equation}
where ${\boldsymbol\mu}_*$ is defined in~\eqref{fo:mustar}.
\end{enumerate}
\end{corollary}
\underline{\bf Proof:} the first item is a direct consequence of the previous theorem in
the case of tight frames.
For the second item, remark that from the monotonicity of the R\'enyi entropy, we obtain
$(2-r)S(a) + rS(b)\ge (2-r)R_\alpha(a) +r R_\beta(b)$. Remark also that for tight
frames,
\begin{eqnarray*}
{\boldsymbol\nu}_r({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\nu}_r({\mathcal V},{\widetilde{{\mathcal V}}},{\mathcal U}) &=&
\frac{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}{{\boldsymbol\sigma}({\mathcal U},{\mathcal V})^r}\\ &=&
{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\ .
\end{eqnarray*}
Symmetrizing the bound on Shannon entropies yields the desired result.
{\hfill$\spadesuit$}
\medskip
Notice that owing to the monotonicity property of R\'enyi entropies,
$R_0(a)=\ln(\|a\|_0)\ge S(a)$, and we recover the generalized Elad Bruckstein
inequality
$$
\|a\|_0.\|b\|_0\ge \frac1{{\boldsymbol\mu}_*({\mathcal U},{\widetilde{{\mathcal U}}},{\mathcal V},{\widetilde{{\mathcal V}}})^2}\ .
$$
\medskip
Similar results in the general case are discussed below.
\subsection{Consequence: $\ell^p$ inequalities for analysis frame coefficients}
Let us start again from the modified entropic inequality in
Theorem~\ref{th:gen.entropic.ineq}, and symmetrize it with respect to $a$ and
$b$. We obtain
\begin{eqnarray*}
&(2-r)(R_\alpha(a)+R_\alpha(b)) + r(R_\beta(a)+R_\beta(b))\ge\\
&\qquad\qquad
-2\ln\left(\frac{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}{{\boldsymbol\sigma}({\mathcal U},{\mathcal V})^r}\right)
-\frac{4r\beta}{\beta-1}\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ .
\end{eqnarray*}
Using the monotonicity of R\'enyi entropies, i.e. $R_\alpha\ge R_\beta$, we then
get for all $\alpha\in [r/2,1]$
\begin{eqnarray*}
R_{\alpha}(a) + R_{\alpha}(b) &\ge&
-\ln\left({\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\right)\\
&&\qquad-r\frac{\beta+1}{\beta-1}\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ ,
\end{eqnarray*}
thus
\begin{eqnarray*}
\ln\left(\|\tilde a\|_{2\alpha}.\|\tilde b\|_{2\alpha}\right)\!\!\! &\ge&
-\frac{1-\alpha}{2\alpha}\ln\left({\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\right)\\
&&\ - r\frac{(\beta+1)(1-\alpha)}{2\alpha(\beta-1)}\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\\
&\ge&\left(\frac1{2}-\frac1{2\alpha}\right)
\ln\left({\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\right)\\
&&\ -\left(1\!-\!\frac{r}2\right)\!
\left(1\!+\!\frac{r\!-\!2\alpha}{\alpha r\!-\!2\alpha}\right)\!\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))
\end{eqnarray*}
finally yields the bound, for $p\in [r,2]$
\begin{eqnarray}
\nonumber
\|a\|_p.\|b\|_p\! \!\!\! &\ge&
\!\!\!\!\left({\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\right)^{\frac1{2}-\frac1{p}}\\
&&\!\!\!\!{\boldsymbol\sigma}({\mathcal U},{\mathcal V})^{-(1-\frac{r}2)\left(1+\frac{r-p}p(1-\frac{r}2)\right)}
\|a\|_2.\|b\|_2\ .\quad
\end{eqnarray}
Also, using the fact that $R_0(a) =\ln(\|a\|_0)\ge R_\alpha(a)$ for all
$\alpha\in [r/2,1]$, and specifying to the sharpest bound $\alpha=r/2$,
we also obtain
$$
\ln\left(\|a\|_0.\|b\|_0\right)\ge
-\ln\left({\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})\right)
-r\ln({\boldsymbol\sigma}({\mathcal U},{\mathcal V}))\ ,
$$
which yields
\begin{equation}
\|a\|_0.\|b\|_0\ge{\boldsymbol\sigma}({\mathcal U},{\mathcal V})^{-r}\frac1{{\boldsymbol\mu}_r({\widetilde{{\mathcal U}}},{\mathcal V}){\boldsymbol\mu}_r({\widetilde{{\mathcal V}}},{\mathcal U})}\ .
\end{equation}
It is worth noticing that this bound is similar to the support inequalities
obtained previously, except for the factor ${\boldsymbol\sigma}({\mathcal U},{\mathcal V})^{-r}$, which makes it weaker.
Thus the bound is equivalent to the previous one if and only if the frames are
tight. Notice also that sharper bounds are obtained, as before, by optimizing
with respect to $r$ and the dual frames ${\widetilde{{\mathcal U}}}$ and ${\widetilde{{\mathcal V}}}$ of ${\mathcal U}$ and ${\mathcal V}$.
\subsection{Remark: necessary conditions for equality in the tight case}
We now examine conditions for the entropic inequalities to be saturated. Our aim is
to make the connection with the {\em constant on support} property we already
met in Theorem~\ref{th:ref.EB} and its proof. Since the entropic bounds we could
prove are not sharp in generic situations, we limit the present discussion to the
particular case of tight frames. Let ${\mathcal U}$ and ${\mathcal V}$ be two tight frames,
denote by $A_{\mathcal U}=B_{\mathcal U}$ and $A_{\mathcal V}=B_{\mathcal V}$ the corresponding frame constants, and
set
\begin{equation}
g_{k\ell} = \langle u_\ell,v_k\rangle\ .
\end{equation}
Straightforward calculations give
$$
\frac{\partial}{\partial\overline{a}_\ell}\,\ln(\|a\|_{2\alpha}^{2\alpha}) =
\frac{\alpha}{\overline{a}_\ell}\,\frac{|a_\ell|^{2\alpha}}{\|a\|_{2\alpha}^{2\alpha}}
\ ,
$$
and
$$
\frac{\partial}{\partial\overline{a}_\ell}\,\ln(\|b\|_{2\beta}^{2\beta}) =
\sum_{k=0}^{N-1} \overline{g}_{k\ell} \frac{\beta}{\overline{b}_k}
\,\frac{|b_k|^{2\beta}}{\|b\|_{2\beta}^{2\beta}}
$$
and therefore the variational equations associated with the optimization of
$(2/r - 1)R_\alpha(a)+R_\beta(b)$ under the constraint $\|x\| = \|a\|/\sqrt{A_{\mathcal U}} = 1$ read
$$
\frac{2-r}r\frac{\alpha}{1-\alpha}\frac1{\overline{a}_\ell}
\frac{|a_\ell|^{2\alpha}}{\|a\|_{2\alpha}^{2\alpha}}
+ \frac{\beta}{1-\beta}\sum_{k=0}^{N-1} \overline{g}_{k\ell} \frac{1}{\overline{b}_k}
\,\frac{|b_k|^{2\beta}}{\|b\|_{2\beta}^{2\beta}} = \lambda
\frac{a_\ell}{A_{\mathcal U}}\ ,
$$
where $\lambda$ is a Lagrange multiplier.
Now remark that $\beta/(1-\beta) = -\alpha(2-r)/r(1-\alpha)$; multiplying both
sides by $\overline{a}_\ell$ and summing over $\ell$, the constraint
$\|x\| = \|a\|/\sqrt{A_{\mathcal U}}=1$ gives $\lambda=0$, so that the variational
equations take the form, for $\alpha\ne 1$
\begin{equation}
\frac{|a_\ell|^{2(\alpha-1)}}{\|a\|_{2\alpha}^{2\alpha}}\,a_\ell =
\frac1{A_{\mathcal U}} \sum_{k=0}^{N-1} \overline{g}_{k\ell}
\,\frac{|b_k|^{2(\beta-1)}}{\|b\|_{2\beta}^{2\beta}}\,b_k\ .
\end{equation}
\begin{remark}
From the above expression, we can observe that $|a|$ is constant on its support
if and only if $|b|$ is, since $\sum_k \overline{g}_{k\ell} b_k=a_\ell$. In this situation, we have $|a_k|=\sqrt{A_{\mathcal U}/\|a\|_0}$
for all $k\in\hbox{supp}(a)$ and $|b_k|=\sqrt{A_{\mathcal V}/\|b\|_0}$
for all $k\in\hbox{supp}(b)$, so that
$$
R_\alpha(a) + R_\beta(b) = \ln\left(\|a\|_0.\|b\|_0\right)\ ,
$$
which therefore saturates the inequalities.
\end{remark}
Similar calculations on the Shannon entropy yield a comparable result.
\section{Conclusions}
We have examined in this paper entropic and $\ell^p$ uncertainty principles in
the framework of frame expansions. Our main results are extensions of support
and entropic uncertainty principles to the case of frames, which turn out to
generalize some known results when specializing to orthonormal bases. We showed
in particular that in general situations, bounds involving the classical mutual
coherence of the frames or bases under consideration are outperformed by the
new bounds involving generalized coherences.
While $\ell^p$ uncertainty principles have been mainly exploited in the
framework of sparse expansion problems, i.e. synthesis based approaches, our
results fit better into the so-called {\em analysis frameworks} (see
e.g.~\cite{Nam11cosparse}), as briefly explained in the introduction. Practical
consequences for co-sparse signal approximation and decomposition approaches are
still to be investigated further. This is ongoing work by the authors of the
present paper.
Let us mention that the finite dimensional case is by now fairly well
understood, and the existence of optimizers for the uncertainty inequalities is
closely connected to coefficient sequences that are constant on their support,
as already remarked by~\cite{Przebinda03three}.
In the infinite-dimensional case, such {\em constant on support} properties do
not make much sense in general situations, and the optimization problem is still
to be investigated much further.
\section*{Acknowledgements}
This work was supported by the UNLocX project of the Future Emerging
Technologies programme of the European Union (FET-Open grant number: 255931). B.
Torr\'esani also acknowledges partial support from the Metason project of the
French Agence Nationale de la Recherche CONTINT programme (ANR ANR-10-CORD-010).
\bibliographystyle{abbrv}
\label{sec:intro}
Competences
have grown in popularity in the western educational world \cite{gordonKeyCompetencesEurope2009,lurieDeconstructingCompetencybasedEducation2017,nunezcortesModeloCompetencialCompetencia2016}, and so has the interest in developing computational models for competences that can be used to support a variety of educational processes, from creating digital catalogues of competences to course design to monitoring competence development by students. Although its meaning varies among organisations, in this paper we will assume a definition of \emph{competence} along the lines of `the capability of someone to act effectively in some kind of situations, which demands the mobilization
of a variety of internal and external resources', which broadly integrates the aspects of external performance and internal composition of competences that emerge in the literature.
Research in this area is important because little information is available regarding what competences the students have developed along their studies, and to what extent, beyond the stated learning objectives of the educational programmes they are enrolled in, and the titles of the courses they have taken and passed. Furthermore, information regarding the development of competences does not accumulate, neither at school nor later in life. For example, transversal competences are developed across many courses in specific contexts (e.g. problem solving in mathematics, geography, or biology), as well as through experiences at work and social interactions, yet there is not much accumulation of evidence regarding their development, particularly in a way suitable to provide automated support for teaching, learning, certifying, applying for a job, or hiring someone.
As e-learning provides facilities for automatic recollection of information regarding the development of competences by students, which are not equally available in face to face education, we have proposed to endow the digital e-learning environment with detailed information regarding competences, their interrelationships, and their relations to course activities, so that evidence of competence development by students can be accumulated, transformed into knowledge, and used to support the educational processes, particularly those related to decision making regarding the development of competences by students \cite{moralesIntelligentEnvironmentDistance2009}. More recently, we have proposed generic mechanisms for creating probabilistic graphical models (Bayesian networks) to trace the development of competences by students on the basis of competences maps \cite{morales-gamboaProbabilisticRelationalLearner2017}.
In this paper we present initial results from using such mechanisms for building a student model as a dynamic Bayesian network, and using it to trace the development of corresponding competences by some hypothetical students exhibiting prototypical performances. The estimates calculated by the network were compared against estimates provided by a sample of teachers. The results strongly suggest correlations between the two sets of estimates, yet the teachers seemed to be more optimistic, and more certain, about the development of competences by students, as if they were assuming, in the absence of evidence, competence improvement rather than decay over time. Furthermore, teachers seemed to value the previous history of evidence much less than the most recent one.
We proceed by first providing a summary of related work in \autoref{sec:related}, followed by an explanation of how we construct our competences maps, in \autoref{sec:maps}. Then, in \autoref{sec:dbn} we describe in some detail how a dynamic Bayesian network is generated from the competences maps, including how the conditional probability distributions are constructed for each type of relationship and, in some cases, for some specific ones. \autoref{sec:models} is devoted to describing the method followed for generating estimates of competence levels developed by prototypical students, both using the dynamic Bayesian network presented in previous sections, and through a questionnaire responded to by a sample of teachers. The results obtained are compared in \autoref{sec:comparison}, and we provide some conclusions and suggestions for future work in \autoref{sec:conclusions}.
\section{Related work}
\label{sec:related}
\noindent There has been a considerable amount of work on computational representations for competences in this century. There is a standard \cite{competency_data_working_group_1484.20.1-2007_2008} and a recommendation \cite{imsgloballearningconsortiumIMSReusableDefinition2002} on how to encode competences for exchange between applications, which emphasise detailed description of competences in terms of their components, but they also include basic facilities for establishing relationships between competences. There has been work on extending the recommendation in order to have better descriptions of relationships between competences, as well as ways for encoding and exchanging levels of competence development \cite{sampsonDevelopingCommonMetadata2007}. Further work exists on providing detailed descriptions of concepts, skills, procedures, principles and other competence elements, as well as including composition and specialization relationships, and complex procedures and conceptual structures, which enables complex descriptions of competences \cite{paquetteLearningDesignBased2006} and computational tools to deal with them \cite{stoofWebbasedSupportConstructing2007}. There is also work on developing rather formal definitions of competences for knowledge management inside an organization \cite{pepiotUECMLUnifiedEnterprise2007}. More recently, there is again a proposal of extending the recommendation discussed above \cite{imsgloballearningconsortiumIMSReusableDefinition2002}, this time to include relationships among competences as part of the model, distinguishing between general and specific competences, in the sense of performances in domains and subdomains, hence proposing a general framework for describing competences maps \cite{elasameCompetencyModelReview2018}.
Regarding the use of (dynamic) Bayesian networks for student modelling, a 2010 study \cite{conatiBayesianStudentModeling2010} suggests it has been motivated (1) by the large amount of uncertainty in estimating the cognitive or affective state of students on the basis of observations of their behaviour, and other measurable information, (2) by their sound foundations on probability theory to carry out inferences, and (3) by their transparency in comparison with other numerical representations such as neural networks. Nevertheless, a key issue in (dynamic) Bayesian student modelling is the simplification of the domain by establishing conditional independence between nodes, as well as the definition of the conditional probability distributions, as numerical representations of the influences of the parents of conditionally dependent nodes in the network, a task that can become daunting as the network grows and usually demands the availability of a large amount of data and the use of machine learning techniques \cite{kaserDynamicBayesianNetworks2017,sucarProbabilisticGraphicalModels2015a}. Recent work \cite{kaserDynamicBayesianNetworks2017} demonstrates a generic way of constructing student models as dynamic Bayesian networks on the basis of ``skill topologies'', defined by prerequisite relationships between skills, and a large collection of data.
So, the main contributions of this work are, on one hand, the integration of two fields of research that have been explored somehow separately: computational representations of competences maps and (dynamic) Bayesian student modelling; on the other hand, the proposal of a method for defining conditional probability distributions on the basis of types of relationships in competences maps of the kind shown in \autoref{sec:maps}, which go beyond the prerequisite type.
\begin{figure*}[!t]
\centering
\includegraphics[width=5.0in]{competence08}
\caption{Competences map for `To participate and to collaborate effectively in diverse teams'. Competences are shown in rounded yellow boxes, whereas attributes are shown in plain white boxes. Generalizacion/specialization relationships are shown as dashed lines, whereas inclusion/part-of relationships are shown as solid lines labelled with the `includes' tag. The relationship between competences and attributes are labelled with the `has' tag.}
\label{fig:competenceMap}
\end{figure*}
\section{Competence maps}
\label{sec:maps}
Our work is based on the notion of \emph{competence} as the capability to carry out a given action in a given context through the mobilization of various cognitive, affective, and conative resources, such as knowledge, skills, attitudes and values \cite{chanCompetencyAnalyserKnowledgebased2010}. This definition allows us to define two kinds of relationships between competences: \emph{generalization/specialization} relationships, generated through the removal or addition of resources (we call them \emph{competence attributes}), respectively, and the \emph{inclusion/part-of} relationship, generated by considering the attributes of some competences being part of a larger one, or by distributing the resources of a given competence along some sub-competences. We call a collection of competences interrelated by relationships of these kinds a \textit{competences map}.
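As an illustration, a competences map of this kind can be encoded with a very small data structure. In the following Python sketch all class, field, and competence names are ours, chosen for readability; they are not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Competence:
    """A node in a competences map; fields mirror the two relationship kinds."""
    description: str
    attributes: list = field(default_factory=list)    # mobilized resources
    specializes: list = field(default_factory=list)   # generalization/specialization
    parts: list = field(default_factory=list)         # inclusion/part-of (sub-competences)

# Fragment of the map in the figure: `to collaborate' includes
# `to propose' and `to contribute', and is then specialized to projects.
collaborate = Competence("to collaborate")
propose = Competence("to propose")
contribute = Competence("to contribute")
collaborate.parts += [propose, contribute]

collaborate_in_projects = Competence(
    "to collaborate in the development of a project",
    specializes=[collaborate],
)
```

Walking such a structure is enough to generate the nodes and arcs of the Bayesian network discussed in the next section.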
In order to evaluate the expressiveness of our formalism, we have applied it to model the set of transversal competences established as a central element of the learning objectives of the National High School System in Mexico \cite{medinafloresMapaCompetenciasGenericas2017,secretariadeeducacionpublicaACUERDONumero4442008}. For example, the eighth competence in the set of transversal competences is described in the official documentation \cite{secretariadeeducacionpublicaACUERDONumero4442008} as follows:
\begin{quotation}
\noindent To participate and to collaborate effectively in diverse teams.
\noindent \textit{Attributes}:
\begin{itemize}
\item To propose ways to solve a problem or develop a team project, defining
a course of action with specific steps.
\item To provide points of view with openness and to consider those of other people in a reflective manner.
\item To assume a constructive attitude, congruent with their knowledge and skills, within different work teams.
\end{itemize}
\end{quotation}
By applying the formalism briefly described above (more details can be found in \cite{morales-gamboaProbabilisticRelationalLearner2017}) we generate the competences map presented in \autoref{fig:competenceMap}, which includes the definition of the large and generic competence `to collaborate' as decomposed into two sub-competences, equally generic, `to propose' and `to contribute'; a generic structure that is then specialized into collaborating in problem solving and collaborating in project execution. The full set of transversal competences for high school includes eleven competences, but the application of our formalism identifies many more, as illustrated in \autoref{fig:competenceMap}; over a hundred competences (nodes), considering both generic competences and more specific ones.
In order to simplify the competences map for the purposes of this study, by eliminating the repetitions included in the map shown in \autoref{fig:competenceMap}, in the rest of the paper we will use the submap shown in \autoref{fig:projectCollaborationMap}, which focuses on collaborative project development.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{collaborationMap}
\caption{Submap of the competences map shown in \autoref{fig:competenceMap} that is used in the study presented in this paper. Generalizacion/specialization relationships are shown as dashed lines, whereas inclusion/part-of relationships are shown as solid lines.}
\label{fig:projectCollaborationMap}
\end{figure}
\section{Dynamic Bayesian networks}
\label{sec:dbn}
\noindent A competence can be observed only through performances in concrete situations, and such performances are considered evidence of the level of competence. In the same way, the development of a more specific competence is evidence of the development of a more general one, as they share some key attributes. In a similar way, development of a super-competence cannot occur independently of the development of its sub-competences. So we could attribute some kind of causality to the generalization/specialization and inclusion/part-of kinds of relationships, at least in the case of competences. We then propose to create overlay student models \cite{holtStateStudentModelling1994,vanlehnStudentModelling1988} to trace the development of competences by students, associating a belief on the degree of development to each competence in the map, and transforming the competences map into a Bayesian network \cite{sucarProbabilisticGraphicalModels2015a}. As student competences evolve over time on the basis of their previous levels and new learning experiences, beliefs about their new levels are dependent both on beliefs about their previous levels and on new evidence, so the Bayesian network should be dynamic. Furthermore, we assume that a new level of any competence is independent of old levels of the other competences in the map given the current level of such competence, and so are beliefs.
\begin{figure*}[!t]
\centering
\includegraphics[width=5.0in]{collaborate_dbn}
\caption{Dynamic Bayesian network corresponding to the competences map in \autoref{fig:projectCollaborationMap}. The nodes on the left side (Init Conditions) are initialized with the previous state of beliefs. The nodes in the central area (Temporal Plate) are firstly influenced by the nodes on the left, so they reproduce the previous state of the network; then they move one step in time, taking the previous state of the network and new evidence into account. Finally, the new beliefs are transferred to the nodes on the right (Term Conditions), from where they can be recovered.}
\label{fig:collaborationDBN}
\end{figure*}
The dynamic Bayesian network (DBN) corresponding to the competences map presented in \autoref{fig:projectCollaborationMap} is shown in \autoref{fig:collaborationDBN}. \emph{Init(ial) Conditions} nodes are set to the beliefs on the previous levels of the competences (e.g. recovered from a database), while nodes in \emph{Term(inal) Conditions} are used to recover the new beliefs, on the current levels of the competences. The nodes in the \emph{Temporal Plate} actually stand for two instances of the (non dynamic) Bayesian network built from the competences map, which stand for two time steps and are linked with temporal relationships (shown as round arrows in the figure) between equivalent nodes; in addition, the second instance includes nodes for evidence, at the bottom. The first instance is linked to the \emph{Init(ial) Conditions}, whereas the second instance is linked to the \emph{Term(inal) Conditions}.
The operation of the proposed dynamic Bayesian student modelling based on competences maps is then as follows:
\begin{enumerate}
\item Given a student, and \emph{any} competences map composed as described in \autoref{sec:maps}, a non dynamic Bayesian network is built from the competences map using the conditional probability distributions described in \autoref{sec:conditionals}. Then flat probability distributions are set on the top nodes, and propagated. The final states of the beliefs in the nodes of the network are then stored somewhere as the initial, and current, level of the dynamic Bayesian network ($t=0$).
\item When new evidence arrives at time $t=k$ ($k$ an integer greater than zero), the following process is iterated $k$ times:
\begin{enumerate}
\item The nodes in the \emph{Init(ial) Conditions} are set to the current beliefs, which correspond to the previous levels of the competences.
\item The beliefs in the \emph{Init(ial) Conditions} are propagated to the first instance of the (non dynamic) Bayesian network.
\item The beliefs in the first instance of the (non dynamic) Bayesian network are propagated to the second instance using the temporal relationships, together with the evidence in the bottom nodes (if $t = k$).
\item The beliefs in the second instance of the (non dynamic) Bayesian network, corresponding to the current levels of the competences, are propagated to the \emph{Term(inal) Conditions}, and recovered from there.
\end{enumerate}
\item The final state of the \emph{Term(inal) Conditions} corresponds to the beliefs on the new levels of the competences, at $t=k$, and is stored in replacement of the beliefs corresponding to $t = 0$.
\end{enumerate}
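To make the two-slice update concrete, the following Python sketch propagates the belief over a single competence through one time step. The transition matrix and the evidence likelihood below are illustrative placeholder numbers, not the distributions actually used in the model:

```python
# Belief over competence levels (Low, Medium, High) at time t.
prior = [1/3, 1/3, 1/3]

# Illustrative temporal CPD: P(level at t+1 | level at t);
# rows index the previous level and each row sums to one.
T = [
    [0.80, 0.15, 0.05],  # from Low
    [0.10, 0.75, 0.15],  # from Medium
    [0.05, 0.15, 0.80],  # from High
]

# Illustrative evidence likelihood: P(observation | level at t+1),
# e.g. a strong performance on a collaborative activity.
likelihood = [0.05, 0.30, 0.65]

def step(belief, T, likelihood=None):
    """One two-slice update: predict through T, then condition on evidence."""
    predicted = [sum(belief[i] * T[i][j] for i in range(3)) for j in range(3)]
    if likelihood is None:
        return predicted  # no evidence at this step: prediction only
    joint = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(joint)
    return [x / z for x in joint]  # normalize to obtain the posterior

new_belief = step(prior, T, likelihood)
```

Iterating `step` $k$ times, with evidence supplied only at the last step, reproduces the loop of point 2 above for one node; in the full network the prediction is of course carried out by belief propagation over all nodes rather than by a single matrix product.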
\subsection{Conditional probability distributions}
\label{sec:conditionals}
\noindent A common approach nowadays is to learn the conditional probability distributions from a large amount of data \cite{murphyDynamicBayesianNetworks2002,kaserDynamicBayesianNetworks2017}, which in this case would be data about competence assessments by teachers. Unfortunately, in our case data is not quite abundant, particularly given the large amount of competences that underlie educational programmes, as well as the difference in perspectives that leads to different definitions, and hence competences maps, even for quite similar competences. Additionally, the competence-based educational model for high school education was introduced fifteen years ago \cite{secretariadeeducacionpublicaACUERDONumero4442008} and, despite the fact that many teachers were trained for competence-based teaching and evaluation, it is still work in progress, so data may be too ``dirty''.
So, instead of learning the conditional probability distributions from data, we decide to construct them from first principles on the basis of the different kinds of relationships in competences maps \cite{morales-gamboaProbabilisticRelationalLearner2017} (\autoref{tab:spegen} and \autoref{tab:subsup}), plus those added in its translations to a DBN (\autoref{tab:compevi} and \autoref{tab:paspre}), and a selection of numerical values for the fuzzy terms (\autoref{tab:numval})---in each table, the top row includes the possible values for the parent node, while the left column includes the possible values for the child node. Also, on the basis of a social constructivist perspective \cite{vygotskyMindSocietyDevelopment1978}, we decided to move away from the typical binary variables and to have three possible values for competence development (\emph{Low}, \emph{Medium}, and \emph{High}) representing no development (the student cannot perform the associated activity, even with scaffolding), partial development (the student can perform the associated activity, but only with scaffolding), and full development (the student can perform the associated activity on their own).
In the case of the inclusion/part-of relationship, in \autoref{tab:subsup}, the reasoning behind its design goes along the following lines: if someone cannot perform the super-competence (level Low), they have to get stuck in at least one sub-competence, which may or may not be a given sub-competence. So, from all possible configurations of competence levels among the sub-competences ($3^n$), we have to discard the cases in which level Low does not appear ($2^n$). Then, we consider all cases in which a given sub-competence has level Low (as the other $n-1$ can have any value, these are $3^{n-1}$), as well as all cases in which the given sub-competence has another level, Medium or High ($3^{n-1} - 2^{n-1}$). Similarly, if someone can perform the super-competence but only with scaffolding, they cannot get stuck in any sub-competence but they would need help in at least one sub-competence. So, from all possible configurations of competence levels among the sub-competences ($2^n -1$, because we need to eliminate the case of all sub-competences having level High), we consider the cases in which a given sub-competence has level Medium ($2^{n-1}$, as the others can have any level other than Low), or High ($2^{n-1} - 1$, because not all the others can have level High). However, we have decided not to have a zero probability in any case, so we have adjusted the formulas in this case to allow some probability for level Low among sub-competences. Finally, if someone can perform the super-competence without any help, then they can neither get stuck nor need help on any sub-competence, so the probability for any sub-competence to have a level other than High should be zero, but we have opted for allowing small, non-zero probabilities for the other levels; more details of the rationale behind the design of all the conditional probability distributions can be found in \cite{morales-gamboaProbabilisticRelationalLearner2017}.
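As a quick sanity check on these formulas, the two fully numeric columns of \autoref{tab:subsup} can be generated and verified to be proper probability distributions with a few lines of Python (the function name is ours; the High column involves fuzzy terms and is omitted here):

```python
from fractions import Fraction

def part_of_cpd_columns(n):
    """P(sub-competence level | super-competence level) for the
    inclusion/part-of relationship with n sub-competences, following
    the formulas of the Low and Medium columns of the table."""
    low = [
        Fraction(3 ** (n - 1), 3 ** n - 2 ** n),                 # P(Low | Low)
        Fraction(3 ** (n - 1) - 2 ** (n - 1), 3 ** n - 2 ** n),  # P(Medium | Low)
        Fraction(3 ** (n - 1) - 2 ** (n - 1), 3 ** n - 2 ** n),  # P(High | Low)
    ]
    medium = [
        Fraction(1, 2 ** n),                                     # P(Low | Medium)
        Fraction(1, 2),                                          # P(Medium | Medium)
        Fraction(2 ** (n - 1) - 1, 2 ** n),                      # P(High | Medium)
    ]
    return low, medium
```

Each column sums exactly to one for any $n\ge 2$; for instance, with $n=2$ the Low column is $(3/5, 1/5, 1/5)$ and the Medium column is $(1/4, 1/2, 1/4)$.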
Concerning the design of the DBN, the final decision was which numerical values to ascribe to the fuzzy terms used to describe the conditional probability distributions (transition probabilities). Given the speed of decay of some probabilities in the conditional probability distribution for the inclusion/part-of relationship, we decided to define them on the basis of the standardized cumulative normal distribution, so the numerical values are those shown in \autoref{tab:numval}.
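Concretely, the values in \autoref{tab:numval} are $\Phi(\sigma)$ for $\sigma = 0, -1, -2, -3, -4$, where $\Phi$ is the standard normal cumulative distribution function. The short sketch below (our own) reproduces them via the error function.

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """Standard normal cumulative distribution, Phi(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Fuzzy terms mapped to Phi(sigma) for sigma = 0, -1, -2, -3, -4
fuzzy = {term: std_normal_cdf(sigma)
         for term, sigma in [("Large", 0), ("Medium", -1), ("Small", -2),
                             ("Very small", -3), ("Tiny", -4)]}
```

Running it yields exactly the numerical values listed in \autoref{tab:numval}.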
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Conditional probability distribution for the specialization/generalization relationship. Source \cite{morales-gamboaProbabilisticRelationalLearner2017}.}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } lccc}
\hline\noalign{\smallskip}
&\textbf{Low}&\textbf{Medium}&\textbf{High}\\
\hline
\noalign{\smallskip}
\textbf{Low}&Large&Medium&Small\\
\textbf{Medium}&Small&Large&Large\\
\textbf{High}&Very small&Small&Medium\\
\hline
\end{tabular*}
\label{tab:spegen}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Conditional probability distribution for the inclusion/part-of relationship. The variable \(n\) stands for the number of sub-competences. Source \cite{morales-gamboaProbabilisticRelationalLearner2017}.}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } lccc}
\hline\noalign{\smallskip}
&\textbf{Low}&\textbf{Medium}&\textbf{High}\\
\hline
\noalign{\smallskip}
\textbf{Low}&$\displaystyle\frac{3^{n-1}}{3^n - 2^n}$&$\displaystyle\frac{1}{2^n}$&Very small\\
\textbf{Medium}&$\displaystyle\frac{3^{n-1} - 2^{n-1}}{3^n - 2^n}$&$\displaystyle\frac{1}{2}$&Small\\
\textbf{High}&$\displaystyle\frac{3^{n-1} - 2^{n-1}}{3^n - 2^n}$&$\displaystyle\frac{2^{n-1}-1}{2^n}$&Large\\
\hline
\end{tabular*}
\label{tab:subsup}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Conditional probability distribution for competence/evidence relationships.}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } lccc}
\hline\noalign{\smallskip}
&\textbf{Low}&\textbf{Medium}&\textbf{High}\\
\hline
\noalign{\smallskip}
\textbf{Low}&Large&Medium&Small\\
\textbf{Medium}&Small&Large&Medium\\
\textbf{High}&Very small&Medium&Large\\
\hline
\end{tabular*}
\label{tab:compevi}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Conditional probability distribution for the past/present relationship.}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } lccc}
\hline\noalign{\smallskip}
&\textbf{Low}&\textbf{Medium}&\textbf{High}\\
\hline
\noalign{\smallskip}
\textbf{Low}&Large&Very small&Tiny\\
\textbf{Medium}&Very small&Large&Very small\\
\textbf{High}&Tiny&Tiny&Large\\
\hline
\end{tabular*}
\label{tab:paspre}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Numerical values associated with the fuzzy terms used to describe the conditional probability distributions.}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } lrl}
\hline\noalign{\smallskip}
\textbf{Term}&$\sigma$&\textbf{Value}\\
\hline
\noalign{\smallskip}
{Large}&0&0.5\\
{Medium}&-1&0.15865525393145707\\
{Small}&-2&0.02275013194817919\\
{Very small}&-3&0.00134989803163009\\
{Tiny}&-4&0.00003167124183311\\
\hline
\end{tabular*}
\label{tab:numval}
\end{table}
\section{Models of prototypical students}
\label{sec:models}
\noindent In order to observe the behaviour of student models created in this way, we simulate a course devoted to the development of the specialized competences included in the map shown in \autoref{fig:projectCollaborationMap}, and students exhibiting three prototypical performances: low to medium performance, medium to high performance with a final failure, and medium to high performance across two terms (the second course is assumed not to be devoted to the development of those competences, although some of its activities make use of them). The courses are assumed to run for fourteen weeks (each one corresponding to a time slice), plus two weeks for revisions and additional examinations, and five weeks of holidays before the start of the next term (\autoref{tab:protstud}).
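In time slices without evidence, the belief over each node is simply propagated through the past/present conditional distribution. The sketch below (our own illustration) shows one such prediction step, under the assumption that each column of fuzzy terms in the tables is normalized into a proper conditional distribution.

```python
# Numerical values of the fuzzy terms (standard normal CDF at 0, -1, ..., -4)
PHI = {"Large": 0.5, "Medium": 0.15865525393145707,
       "Small": 0.02275013194817919, "Very small": 0.00134989803163009,
       "Tiny": 0.00003167124183311}

def normalize_columns(term_rows):
    """Rows index the child level (Low, Medium, High), columns the parent level.
    Each column of fuzzy weights is normalized into a conditional distribution,
    so cpd[j][i] = P(child = i | parent = j)."""
    rows = [[PHI[t] for t in row] for row in term_rows]
    return [[v / sum(col) for v in col] for col in zip(*rows)]

def predict(belief, cpd):
    """One DBN prediction step: P(X_t=i) = sum_j P(X_t=i | X_{t-1}=j) P(X_{t-1}=j)."""
    return [sum(cpd[j][i] * belief[j] for j in range(3)) for i in range(3)]

# past/present relationship (columns: past level Low, Medium, High)
paspre = normalize_columns([["Large", "Very small", "Tiny"],
                            ["Very small", "Large", "Very small"],
                            ["Tiny", "Tiny", "Large"]])
```

Because the diagonal of the past/present table dominates, a confident belief stays almost unchanged after one prediction step, which matches the slow drift of beliefs observed in the simulations.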
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Evidence for the simulated students: low to medium performance (L2M), medium to high performance but with a failed final evaluation (M2H), and two terms of medium to high performance (LT M2H). All competences are related to a project, and numeric values (0, 1, and 2) are assigned to competence levels (\emph{Low}, \emph{Medium}, and \emph{High}, respectively).}
\centering
\begin{tabular*}{3.5in}{@{\extracolsep{\fill} } rcccc}
\hline\noalign{\smallskip}
\textbf{Week}&\textbf{Competence}&\textbf{L2M}&\textbf{M2H}&\textbf{LT M2H}\\
\noalign{\smallskip}
\hline
1&Propose&0&1&1\\
2&Contribute&0&1&1\\
4&Propose&1&2&2\\
5&Contribute&0&1&1\\
7&Collaborate&0&1&1\\
10&Propose&1&2&2\\
11&Contribute&1&2&2\\
14&Collaborate&1&0&2\\
23&Propose&&&2\\
24&Contribute&&&2\\
25&Collaborate&&&2\\
35&Collaborate&&&2\\
\hline
\end{tabular*}
\label{tab:protstud}
\end{table}
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{l2m16-average.pdf}%
\label{fig:l2m16-average}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{l2m16-uncertainty.pdf}%
\label{fig:l2m16-uncertainty}}
\caption{Evolving beliefs on competence levels of the student with low to medium performance. Low, Medium, and High competence levels are translated to numbers (0, 1, and 2, respectively), and then beliefs are represented by both the average (\autoref{eqn:average}) and the uncertainty of their probability distributions, the latter calculated as normalized entropy \cite{sucarProbabilisticGraphicalModels2015a} (\autoref{eqn:uncertainty}).}
\label{fig:l2m16}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{m2h16-average.pdf}%
\label{fig:m2h16-average}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{m2h16-uncertainty.pdf}%
\label{fig:m2h16-uncertainty}}
\caption{Evolving beliefs (average and uncertainty) on competence levels of the student with medium to high performance, who failed on the last evaluation.}
\label{fig:m2h16}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{m2h37-average.pdf}%
\label{fig:m2h37-average}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{m2h37-uncertainty.pdf}%
\label{fig:m2h37-uncertainty}}
\caption{Evolving beliefs (average and uncertainty) on competence levels of the student with medium to high performance along two periods.}
\label{fig:m2h37}
\end{figure*}
\autoref{fig:l2m16-average} and \autoref{fig:l2m16-uncertainty} show the evolving beliefs on the competence levels of the student with low to medium performance. Low, Medium, and High competence levels are translated to numbers ($i = 0$, $1$, and $2$, respectively), and then beliefs are represented by both the average,
\begin{equation}
\sum_{i} p(i)\, i,
\label{eqn:average}
\end{equation}
and the uncertainty of their probability distributions, the latter calculated as normalized entropy \cite{sucarProbabilisticGraphicalModels2015a},
\begin{equation}
\frac{\sum_{i} p(i)\ln p(i)}{\ln\left(\frac{1}{3}\right)}.
\label{eqn:uncertainty}
\end{equation}
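Both statistics are straightforward to compute from a belief vector; the following minimal Python sketch (our own, with hypothetical belief vectors) implements the average and the normalized-entropy uncertainty defined above.

```python
from math import log

def belief_stats(p):
    """Average level and normalized-entropy uncertainty of a belief
    distribution p over the levels Low=0, Medium=1, High=2."""
    average = sum(i * pi for i, pi in enumerate(p))
    # Both numerator and denominator are negative, so the result is in [0, 1].
    uncertainty = sum(pi * log(pi) for pi in p if pi > 0) / log(1 / 3)
    return average, uncertainty
```

A uniform belief yields an uncertainty of 1, while a point-mass belief yields 0, which is the scale used in the figures.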
\autoref{fig:m2h16-average} and \autoref{fig:m2h16-uncertainty} show the case for the student with medium to high performance with final failure, while \autoref{fig:m2h37-average} and \autoref{fig:m2h37-uncertainty} show the case for the student with medium to high performance across two terms.
In all cases, uncertainty increases in time slices with no evidence, and it usually decreases in the presence of evidence. However, in the case of the student exhibiting low to medium performance, after two pieces of evidence for a low level of competence at contributing to a project, evidence of a medium level of competence at time slice 11 increases uncertainty. A similar behaviour can be observed when evidence of low-level performance arrives at time slice 14 for the student with medium to high performance: many beliefs go down, but their uncertainties increase.
The prototype is more reluctant to accept evidence of good performance than of low or medium performance, due to the asymmetry in the conditional probability distribution in \autoref{tab:compevi}. Yet, after accumulating evidence over two terms in the case of a student with high performance, the beliefs show a tendency to slowly go up. It is clear in all cases that beliefs on the development of the more general competences move much more slowly, as there is no direct evidence of their level. Yet those beliefs tend to go up, and their uncertainty decreases, suggesting some positive development is going on after all.
\section{Comparison}
\label{sec:comparison}
\noindent In order to evaluate the results obtained so far, and the overall design of our system, we decided to compare the estimations it produces against estimations provided by teachers. We designed a questionnaire in which we asked teachers to estimate the levels of the more specific competences of our prototypical students (given the evidence in time slices 3, 6, 7, 12, 14, and 35) and to provide a degree of certainty for their estimations. Finally, we asked them to estimate the levels of the more general competences at the end of the course, and their degree of certainty on those as well. We used the questionnaire to run an online poll among colleagues and doctoral students, 20 of whom answered it, out of a population of circa three hundred, so we can claim significance of results only at the level of the sample. Among the participants, 65\% said they are full-time teachers, 25\% said they are subject teachers, 5\% said they are mainly researchers, and another 5\% said they do not teach. 60\% of the participants said they teach mainly at the undergraduate level, 35\% said they are mainly postgraduate teachers, and only 5\% classified themselves mainly as high school teachers.
The teachers provided their estimations of the competence levels using a Likert scale with five values: (0) Low, (0.5) Rather Low, (1) Medium, (1.5) Rather High, and (2) High. They provided their degrees of certainty on a similar scale, which we show here inverted and normalized to describe their degree of uncertainty. We ran the Shapiro-Wilk test of normality on the estimations of competence levels provided by the teachers, and we found that most distributions are far from normal, so we proceeded to analyse the results using non-parametric methods. For each question (58 in total) we calculated the first, second (median), and third quartiles of the estimated competence levels, or of the degree of uncertainty. In order to measure the consistency among the answers using the interquartile range (IQR), we calculated the IQR per question, and then we calculated the same quartiles over these new (meta) data. The results, presented in \autoref{tab:consistency}, show a tendency toward the minimum distance, or less, among the responses provided by the teachers, suggesting a high consistency among them.
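The two-stage IQR computation can be sketched in a few lines of Python (our own illustration, on hypothetical Likert answers; we assume the inclusive quartile method).

```python
import statistics

def iqr_consistency(responses_per_question):
    """Per-question interquartile ranges (IQR), then the median and IQR of
    those IQRs, mirroring the two-stage consistency analysis described above."""
    iqrs = []
    for answers in responses_per_question:
        q1, _, q3 = statistics.quantiles(answers, n=4, method="inclusive")
        iqrs.append(q3 - q1)
    mq1, mmed, mq3 = statistics.quantiles(iqrs, n=4, method="inclusive")
    return iqrs, mmed, mq3 - mq1

# Hypothetical Likert answers (values in 0..2, steps of 0.5) for three questions
sample = [[0, 0, 1, 1], [1, 1, 1, 1], [0, 1, 1, 2]]
```

For the hypothetical sample, the per-question IQRs are 1.0, 0.0, and 0.5, and the meta-level median IQR is 0.5.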
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Non-parametric consistency analysis: the interquartile range (IQR) among responses is measured per question, and then the quartiles and a new IQR are calculated over the per-question IQRs.}
\centering
\begin{tabular*}{2.5in}{@{\extracolsep{\fill} } lrr}
\hline\noalign{\smallskip}
\textbf{Quartile}&\textbf{Average}&\textbf{Uncertainty}\\
\hline
\noalign{\smallskip}
{Maximum}& 1.0 & 0.5\\
{Third} & 0.5 & 0.25\\
{Second (Median)} & 0.5 & 0.25\\
{First} & 0.125 & 0.25\\
{Minimum} & 0.0 & 0.0 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{IQR} & \textbf{0.375} & \textbf{0.0}\\
\noalign{\smallskip}
\hline
\end{tabular*}
\label{tab:consistency}
\end{table}
Then we compared the responses provided by the teachers against the estimations provided by the system, first graphically, as in \autoref{fig:l2m16-comparison}, \autoref{fig:m2h16-comparison}, and \autoref{fig:m2h37-comparison}. We noticed that although the estimations provided by the system are, in general, lower than the median of those provided by the teachers (\autoref{fig:l2m16-average-comparison}, \autoref{fig:m2h16-average-comparison}, and \autoref{fig:m2h37-average-comparison}), there seems to be a correlation between them, so we calculated their Pearson's coefficient of correlation, $r$; the results are shown in \autoref{tab:pearson}. They suggest that the teachers and the system followed the same pattern in estimating competence levels when there was more or less frequent evidence for the competence and the behaviour was unsurprising, particularly in the case of low to medium performance. The differences grow when there are surprises in performance (e.g., the student shows steady improvement but then fails an examination), and when there is less evidence, as is the case for the competence of collaborating in a project. Then we calculated the first and third quartiles among the estimations provided by teachers, and we compared them against the estimations generated by the system. In general, the estimations provided by the system fall outside the range defined by the first and third quartiles. In the case of the low to medium performance student, only 40\% of the system estimations are in the range, whereas in the other two cases the percentages were only 26.7\% (medium to high with final failure) and 22.2\% (medium to high across two terms). These results confirm the significance of the difference between the estimations by the teachers and by the system shown in \autoref{fig:l2m16-comparison}, \autoref{fig:m2h16-comparison}, and \autoref{fig:m2h37-comparison}.
Furthermore, they show that the difference is bigger in the cases with evidence of medium to high performance.
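Pearson's $r$ between the teachers' medians and the system's estimations can be computed directly; a small self-contained helper (our own, shown with toy sequences rather than the study data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly co-moving sequences give $r = 1$, and perfectly opposed ones give $r = -1$, which bounds the values reported in \autoref{tab:pearson}.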
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{l2m16-average-comparison.pdf}%
\label{fig:l2m16-average-comparison}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{l2m16-uncertainty-comparison.pdf}%
\label{fig:l2m16-uncertainty-comparison}}
\caption{Comparison of estimates by teachers and the system of competence levels of the student with low to medium performance.}
\label{fig:l2m16-comparison}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{m2h16-average-comparison.pdf}%
\label{fig:m2h16-average-comparison}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{m2h16-uncertainty-comparison.pdf}%
\label{fig:m2h16-uncertainty-comparison}}
\caption{Comparison of estimates by teachers and the system of competence levels of the student with medium to high performance, who failed on the last evaluation.}
\label{fig:m2h16-comparison}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[Average]{\includegraphics[width=2.5in]{m2h37-average-comparison.pdf}%
\label{fig:m2h37-average-comparison}}
\hfil
\subfloat[Uncertainty]{\includegraphics[width=2.5in]{m2h37-uncertainty-comparison.pdf}%
\label{fig:m2h37-uncertainty-comparison}}
\caption{Comparison of estimates by teachers and the system of competence levels of the student with medium to high performance along two periods.}
\label{fig:m2h37-comparison}
\end{figure*}
Regarding the degree of uncertainty in the estimations, the uncertainty in the beliefs maintained by the system shows relatively high sensitivity to changes in the evidence, whereas the uncertainty in the estimations by the teachers remains almost stable. Finally, the comparison of the estimations of the final state of development of the most generic competences, and the corresponding uncertainties, by the teachers and the system, shown in \autoref{tab:general-competences}, suggests a similar pattern: the teachers are more optimistic, providing higher estimations of competence development together with much lower levels of uncertainty.
We have made some attempts to bring the system estimations closer to those provided by the teachers. In particular, we have tried to relax the conditional probability distribution for the relationship between the previous and current state of a node in the dynamic Bayesian network (\autoref{tab:paspre}), as a way of acknowledging student state changes along their studies. Unfortunately, those adjustments led to quicker increases in uncertainty across the weeks, which seemed less realistic than the results shown in this paper.
\begin{table*}[!t]
\caption{Pearson correlation coefficients between competence level estimations by the teachers and the system for students of low to medium performance (L2M), medium to high performance but failing the final evaluation (M2H), and long-term steady medium to high performance (LT M2H).}
\centering
\begin{tabular*}{5.0in}{@{\extracolsep{\fill} } lccc}
\hline\noalign{\smallskip}
& \multicolumn{3}{c}{\textbf{Competence level}}\\
\hline
\noalign{\smallskip}
&\parbox{3cm}{\centering\textbf{Collaborate in project}}&
\parbox{2cm}{\centering\textbf{Propose on project}}&
\parbox{2cm}{\centering\textbf{Contribute to project}}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{L2M}&0.1026&0.9871&0.9808\\
\textbf{M2H}&0.2385&0.7406&0.6307\\
\textbf{LT M2H}&0.3458&0.8159&0.8512\\
\hline
\end{tabular*}
\label{tab:pearson}
\end{table*}
\begin{table*}[!t]
\caption{Comparison of estimates and uncertainties as provided by the teachers and the system regarding the development of the most generic competences (to collaborate, Col; to propose, Prop; to contribute, Cont) by students of low to medium performance (L2M), medium to high performance with failure in the final evaluation (M2H), and steady medium to high performance along two terms (LT M2H).}
\centering
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|ccc|ccc|ccc}
\hline\noalign{\smallskip}
& \multicolumn{3}{c}{\textbf{L2M}}&\multicolumn{3}{c}{\textbf{M2H}}&
\multicolumn{3}{c}{\textbf{LT M2H}}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
&\parbox{0.7cm}{\centering\textbf{Col}}&
\parbox{0.9cm}{\centering\textbf{Prop}}&
\parbox{0.9cm}{\centering\textbf{Cont}}&
\parbox{0.7cm}{\centering\textbf{Col}}&
\parbox{0.9cm}{\centering\textbf{Prop}}&
\parbox{0.9cm}{\centering\textbf{Cont}}&
\parbox{0.7cm}{\centering\textbf{Col}}&
\parbox{0.9cm}{\centering\textbf{Prop}}&
\parbox{0.9cm}{\centering\textbf{Cont}}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\parbox{2.05cm}{\begin{flushleft}\textbf{Teacher estimation (median)}\end{flushleft}}&1&1&1&1.5&1.5&1.25&2&2&1.75\\
\parbox{2.05cm}{\begin{flushleft}\textbf{Teacher uncertainty (median)}\end{flushleft}}&0.25&0.25&0.25&0.25&0.25&0.25&0.25&0.25&0.25\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\parbox{2.05cm}{\begin{flushleft}\textbf{System estimation (median)}\end{flushleft}}&1.01&1.32&1.25&0.97&1.33&1.32&1.05&1.40&1.39\\
\parbox{2.05cm}{\begin{flushleft}\textbf{System uncertainty}\end{flushleft}}&1&0.93&0.96&1&0.92&0.93&0.99&0.87&0.88\\
\noalign{\smallskip}
\hline
\end{tabular*}
\label{tab:general-competences}
\end{table*}
\section{Conclusions}
\label{sec:conclusions}
We have shown a way of creating overlay student models as dynamic Bayesian networks built on top of competence maps that include generalization/specialization and inclusion/part-of relationships. It works by defining a conditional probability distribution per type of relationship (and per cardinality, in the case of inclusion/part-of relationships), so it can be applied to any map restricted to those common relationships. This approach provides a general way of assigning weights to the relationships between competences, one which does not make use of the fine-grained composition of the competences to construct or evaluate evidence, treating competences very much as holistic entities. In that sense, it is quite different from the fine-grained approaches typical in the field, but it would be much easier to make it work under real-life conditions, provided it delivers reasonable results.
The results obtained from implementing this method on a given competence map, modelling the competence development of some prototypical students, and then comparing the competence levels estimated by the prototype against those estimated by teachers, suggest that the inference carried out by the dynamic Bayesian network goes along with what teachers estimate from evidence of student performance. However, teachers seem to be significantly more optimistic and confident about competence development by students, particularly when they get evidence of good performance. Furthermore, they seem willing to generalize the evidence gathered in relation to more specific competences, so they estimate similar levels of development for the more generic ones, and they do so without losing certainty in their estimations. In general, it seems teachers give more attention to current evidence than our system does, and are more relaxed in considering previous evidence too. Furthermore, they seem to assume the student is engaged in learning, so they expect an improvement in competence levels, and positive evidence works as a confirmation of their expectations. In contrast, our system assumes decay of competence levels unless evidence to the contrary is provided; a closed-world assumption the teachers do not hold.
We have assumed all evidence to be hard evidence, derived from what we devised as prototypical, yet invented, performances, and we have no information regarding the expertise in competence estimation of the participants in our study. Furthermore, the task imposed on the participants is not what they are used to doing: a kind of meta-analysis of evidence produced by others, instead of directly observing the performance of students and generating the evidence themselves (which would be their actual task if the system were implemented for some educational programme). So we acknowledge we are still far from fully evaluating the modelling of the development of competence levels by our system.
Future work includes discussing the implications of this evaluation, and a subsequent fine-tuning of our conditional probability distributions. An explicit modelling of the estimations provided by teachers, by means of adjusting the conditional probability distributions, would also be useful for understanding their reasoning. There is also work to do in extending the development and testing with evidence produced from real (historical) data by expert competence evaluators, and in comparing the inferences performed by the system against their estimations. Beyond that, we expect to incorporate the full map of transversal competences for Mexican high school students, and to perform real-time student modelling, considering not only information provided by teachers, but also information gathered from other sources, inside and outside schools.
\section*{Acknowledgments}
This work has been partially funded by the Common Space for Distance Higher Education (ECOESAD) and the Program for Teachers Professional Development (PRODEP). The core of our implementation is based on the SMILE reasoning engine for graphical probabilistic models, while the images of the models included in this paper were created using the GeNIe Modeler, both available free of charge for academic research and teaching use from \href{http://www.bayesfusion.com/}{BayesFusion, LLC}. We thank the colleagues and postgraduate students who voluntarily participated in our poll.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recently, the intelligent reflecting surface (IRS) has been proposed as a promising candidate technology for the sixth-generation (6G) wireless system, due to its capability of realizing a programmable propagation environment with low power consumption and hardware cost. Specifically, the IRS is a two-dimensional (2D) meta-surface composed of a large number of low-cost reflecting elements, each of which can independently reflect the impinging signals with adjustable phase shifts. The adjustment of phase shifts can be realized by cheap positive intrinsic negative (PIN) diodes or varactor diodes \cite{hardware}. Moreover, the IRS can be flexibly deployed to assist data transmission without equipping any power-consuming transmit radio-frequency
(RF) chains \cite{survey1}. Due to the above advantages, the IRS has attracted considerable research interests.
In particular, a large body of research has adopted the IRS to enhance communication performance. By properly deploying the IRS between the transmitter and the receiver, a strong virtual line-of-sight (VLoS) link between them can be established. Through smartly designing the IRS phase shifts, significant performance improvement can be achieved. For example, the work \cite{qingqing1} has demonstrated that a power gain of $M^2$ can be obtained by applying an IRS with $M$ reflecting elements. Also, the work \cite{9502509} has shown that applying an IRS with a large number of reflecting elements helps reduce the outage probability.
{\color{black}Moreover, the works \cite{BX1,BX6} proposed a double-IRS assisted system, and it was proved that the double-IRS assisted system with cooperative passive beamforming design is superior to the conventional single-IRS system in terms of both the maximum signal-to-noise ratio (SNR) and the multi-user effective channel rank.} Due to its great benefit in improving communication performance, the IRS has been extensively used in various communication scenarios to realize diverse desired goals, such as SNR or capacity maximization\cite{location_hu,BX4}, {\color{black}sum rate maximization\cite{pan1}}, power minimization or energy efficiency maximization\cite{qingqing1,9408385}, and symbol-error-rate minimization \cite{9097454,8928065}. Also, the IRS has been proposed to be integrated with other promising technologies, such as {\color{black}orthogonal frequency division multiplexing (OFDM) \cite{BX2,BX3}}, massive MIMO\cite{9528043,BX5}, millimeter wave (mmWave)\cite{9226616,9410435}, deep learning\cite{9505267,9264659}, cognitive radio\cite{9235486,9146170}, physical layer security\cite{9428001,9446526}, unmanned aerial vehicle (UAV)\cite{9400768,9234511}, and {\color{black}simultaneous wireless information and power transfer (SWIPT)\cite{pan2}}, to improve the communication performance of the considered systems.
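The $M^2$ passive beamforming gain cited above follows from coherently adding $M$ reflected paths. The sketch below (our own illustration under idealized unit-amplitude path assumptions, not code from the cited works) verifies it numerically.

```python
import cmath
import random

def received_power(channel_phases, irs_phases):
    """Power of M superposed unit-amplitude reflected paths; each path carries
    its channel phase plus the phase shift applied by its IRS element."""
    total = sum(cmath.exp(1j * (c + p))
                for c, p in zip(channel_phases, irs_phases))
    return abs(total) ** 2

M = 16
channel = [random.uniform(0, 2 * cmath.pi) for _ in range(M)]
aligned = [-c for c in channel]  # ideal IRS phase shifts cancel channel phases
# With all paths phase-aligned, the received power is M**2 (here 256),
# whereas random phases yield an expected power of only M.
```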
In addition to improving the communication performance, another important promising function of the IRS is to assist location sensing in the wireless communication system.
In general, the mobile user can be localized according to channel parameters, such as the time of arrival (TOA)/time difference of arrival (TDOA) \cite{location_wireless_TOA}, angle of arrival (AOA) \cite{location_mmWave_AOA1}, and received signal strength (RSS) \cite{location_cellular_RSS1}. However, RSS-based localization has poor location accuracy, which is affected by the network topology and propagation environment, e.g., the path loss exponent and shadowing effects.
Although TOA/TDOA-based and AOA-based localization can achieve high location accuracy, they rely heavily on the line-of-sight (LoS) link, which may be blocked, especially in the mmWave case.
Responding to this, the IRS has been proposed to overcome the blockage problem and improve the location accuracy in wireless communication systems, due to its capability of establishing a strong VLoS path between the BS and the mobile user. As an early effort, the work \cite{location_LIS} first explored the potential of using the IRS for wireless localization, deriving the Cramer-Rao lower bound (CRLB) for positioning with an IRS. Later on, J. He, {\em et al.} \cite{location_LIS_mmWave} investigated 2D mmWave positioning with the assistance of an IRS, where the impact of the IRS on the location accuracy was evaluated. Then, IRS-aided 2D mmWave positioning was extended to three-dimensional (3D) positioning \cite{location_RIS_3D}. Furthermore, an AoA-based mmWave positioning algorithm with the assistance of the IRS was proposed in \cite{location_IRS_mmWave2}, which achieves centimeter-level positioning accuracy.
H. Zhang, {\em et al.} \cite{location_RIS_RSS} considered RSS-based localization, where the IRS was used to enlarge the differences between the RSS values of adjacent locations so that a higher positioning accuracy can be achieved. In turn, the user location, provided by the global positioning system (GPS), was used for the design of IRS phase shifts in \cite{location_hu}.
In all the aforementioned works, location sensing and communication systems with the assistance of the IRS are usually designed separately and occupy different spectrum resources. With the wide deployment of the mmWave and massive multiple-input multiple-output (MIMO) technologies, it is possible to realize high-accuracy location sensing using communication signals. As such, it is desirable to jointly design the sensing and communication systems such that they can share the same frequency and time resources to improve the spectrum efficiency. This motivates the IRS-based integrated
sensing and communication (ISAC) system, where the IRS is introduced into the ISAC system to overcome the blockage problem in the mmWave system and maintain/improve both the communication performance and the sensing accuracy. However, there are few works investigating the design of the IRS-based ISAC system.
A recent work related to the concept of IRS-based ISAC was proposed in \cite{IRS_JLC}, where a specific framework for a joint location and communication (JLC) system was designed. Since location sensing and data transmission cannot be conducted at the same time (i.e., they do not share the same time resources), the IRS-aided JLC system cannot be counted as a real ISAC system in the strict sense, which requires sensing and communication to share the same time and frequency resources.
In this paper, we establish an ISAC system realized by a distributed semi-passive IRS, and design a specific framework of its working process, including transmission protocol design, location sensing and beamforming design. To the best of our knowledge, this is the first work investigating the IRS-based ISAC system, where location sensing and data transmission are conducted simultaneously, sharing the same time and frequency resources. Our main contributions are summarized as follows.
\begin{itemize}
\item We construct a 3D ISAC system realized by a novel IRS architecture, i.e., the distributed semi-passive IRS architecture. In the IRS-based ISAC system, location sensing is performed at the IRS by using communication signals, and the obtained location information is used for beamforming design such that the communication performance is improved.
\item A transmission protocol is proposed for the IRS-based ISAC system. Specifically, a coherence block consists of two periods, i.e., the ISAC period and the pure communication (PC) period. During the ISAC period, the mobile user sends information-carrying signals to the BS. The two semi-passive sub-IRSs operate in the sensing mode for localizing the user, while the passive sub-IRS operates in the reflecting mode for assisting data transmission. The ISAC period consists of two time blocks. The location estimated in the first time block is used for beamforming design in the second time block. During the PC period, the two semi-passive sub-IRSs switch into the reflecting mode for data transmission, and the user location estimated in the second time block of the ISAC period is used for beamforming design so that the communication performance can be improved.
\item {\color{black} We propose a location sensing scheme to localize the mobile user at the IRS, by using communication signals, thereby removing the requirement of dedicated positioning reference signals in conventional localization methods. Simulation results demonstrate that a millimeter-level positioning accuracy can be achieved.}
\item We propose two location-based beamforming schemes for the ISAC and PC periods, respectively, where both the BS combining vector and the IRS phase shift beam are designed according to the estimated user location. Simulation results show that although only imperfect location information is available, the proposed beamforming scheme for the ISAC period has almost the same performance as the optimal beamforming scheme with perfect CSI, and the proposed beamforming scheme for the PC period achieves similar performance to the alternating optimization (AO) beamforming scheme
assuming perfect CSI. {\color{black}These observations demonstrate that using sensed location information for beamforming can ensure communication performance.}
\end{itemize}
The remainder of the paper is organized as follows. In Section \ref{s1}, we introduce the IRS-based ISAC system, while in Section \ref{s2}, we propose a location sensing scheme. According to the estimated user location, low-complexity schemes for BS beamforming and IRS beamforming are presented in Section \ref{s3}.
Numerical results and discussions are provided in Section \ref{s4}, and finally Section \ref{s5} concludes the paper.
Notation: Boldface lower case and upper case letters are used for column vectors and matrices, respectively. The superscripts ${\left(\cdot\right)}^{*}$, ${\left(\cdot\right)}^{T}$, ${\left(\cdot\right)}^{H}$, and ${\left(\cdot\right)}^{-1}$ stand for the conjugate, transpose, conjugate-transpose, and matrix inverse, respectively. Also, the Euclidean norm, absolute value, Kronecker product are denoted by $\left\| \cdot \right\|$, $\left|\cdot\right|$ and $\otimes$ respectively. In addition, $\mathbb{E}\left\{\cdot\right\}$ is the expectation operator, and $\text{tr}\left(\cdot\right)$ represents the trace.
For a matrix ${\bf A}$, ${[\bf A]}_{mn}$ denotes its entry in the $m$-th row and $n$-th column, while for a vector ${\bf a}$, ${[\bf a]}_{m}$ denotes the $m$-th entry of it. Besides, $j$ in $e^{j \theta}$ denotes the imaginary unit.
Finally, $z \sim \mathcal{CN}(0,{\sigma}^{2})$ denotes a circularly symmetric complex Gaussian random variable $z$ with zero mean and variance $\sigma^2$.
\section{System Model} \label{s1}
As shown in Fig.~\ref{system_model}, we consider an IRS-aided system operating in the mmWave band, where a distributed semi-passive IRS with $M$ reflecting elements assists the uplink transmission between the BS and a single-antenna user. The distributed semi-passive IRS consists of 3 sub-IRSs. The first sub-IRS is passive with $M_1$ passive reflecting elements, while the $i$-th ($i=2,3$) sub-IRS is semi-passive with $M_i \ll M_1$ semi-passive reflecting elements that are capable of both sensing and reflecting. Herein, the distributed semi-passive IRS architecture is proposed to achieve integrated sensing and communication.
Specifically, the passive sub-IRS assists data transmission by reflecting, and meanwhile the two semi-passive sub-IRSs carry out user positioning by operating in the sensing mode.
The BS has an $N$-element uniform linear array (ULA) along the $y$ axis, while the $i$-th sub-IRS has an $M_{y,i}\times M_{z,i}$ uniform rectangular array (URA) lying on the $y$-$o$-$z$ plane.
Moreover, there is a backhaul link for information exchange between the BS and the IRS. In this paper, the quasi-static block-fading channel is considered for the user-IRS channel, which remains nearly unchanged in each fading block but varies from one block to another. Due to the fixed locations of both the BS and the sub-IRSs, we assume that the channels between the BS and sub-IRSs remain constant over a long period. Furthermore, we assume that the direct link between the BS and the user does not exist due to blockage or unfavorable propagation environment.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{system_model.pdf}
\caption{Illustration of the IRS-based ISAC system.}
\label{system_model}
\end{figure}
\subsection{ISAC Transmission Protocol}
As shown in Fig.~\ref{transmission_protocol}, we consider a coherence block, during which the user-IRS channel remains unchanged. Each coherence
block is composed of two periods: an ISAC period with $T_1$ time slots (symbol durations) and a PC period with $T_2$ time slots. The ISAC period is divided into two time blocks with $\tau_1$ and $\tau_2$ time slots, respectively.
During each time block of the ISAC period, the user sends communication signals to the BS. The passive sub-IRS assists data transmission by reflection, and meanwhile the remaining two semi-passive sub-IRSs operate in the sensing mode. {\color{black}By using the signals sensed at the two semi-passive sub-IRSs, the IRS carries out the location estimation task and obtains the user's estimated location, which is then sent to the BS via the backhaul link\footnote{\color{black}In the considered IRS-based ISAC system, sensed location information and IRS phase shifts are exchanged between the BS and the IRS via the backhaul link. The sensed location information is exchanged once per time block of the ISAC period, while the IRS phase shifts are exchanged twice per coherence block.}.
In the first time block, the phase shift beam of the passive sub-IRS is randomly generated, due to the unavailability of any CSI knowledge. In the second time block, by using the user's location estimated in the first time block, the BS carries out the beamforming optimization task and obtains a better phase shift beam, which is then sent to the IRS via the backhaul link.
During the PC period, the two semi-passive sub-IRSs switch to the reflecting mode to enhance the uplink data transmission. The BS uses more accurate location information acquired in the second time block of the ISAC period for beamforming optimization and then shares the optimized phase shifts with the IRS via the backhaul link.}
\begin{remark}
In the first time block, we roughly estimate the user location in a short time so that the communication performance in the second time block can be quickly improved by using the estimated location information for beamforming. In general, the length of the second time block is longer than that of the first time block so that high-accuracy user location can be acquired for more effective beamforming design in the longest PC period.
\end{remark}
\begin{figure}[!ht]
\centering
\includegraphics[width=6in]{transmission_protocol.pdf}
\caption{ISAC transmission protocol.}
\label{transmission_protocol}
\end{figure}
\subsubsection{ISAC Period} During the $n$-th time block of the ISAC period, the user sends $\sqrt{\rho}s(t)$, satisfying {\color{black} $s(t) \sim \mathcal{CN}(0,1)$}, to the BS at time slot $t\in \mathcal{N}_n=\{(n-1)\tau_1+1 ,\cdots, \tau_1+(n-1)\tau_2 \}$, where $\rho$ is the transmit power. The received signal at the BS via the passive sub-IRS is
\begin{align} \label{E7}
{ y}(t)=\sqrt{\rho} [ {\bf w}^{(n)}]^H {\bf H}_{\text{I2B},1}{\bm \Theta}_{1}^{(n)} {\bf h}_{\text{U2I},1}s(t) + [ {\bf w}^{(n)}]^H {\bf n}_\text{BS}(t), t\in \mathcal{N}_n,n=1,2,
\end{align}
where ${\bf w}^{(n)}$, satisfying $\| {\bf w}^{(n)} \|=1$, is the BS combining vector in the $n$-th time block, ${\bf H}_{\text{I2B},i} \in \mathbb{C}^{N \times M_i}$ and ${\bf h}_{\text{U2I},i} \in \mathbb{C}^{M_i \times 1} $ are the channels from the $i$-th sub-IRS to the BS and from the user to the $i$-th sub-IRS, respectively. The phase shift matrix of the first sub-IRS in the $n$-th time block is given by ${\bm \Theta}_{1}^{(n)}=\text{diag}({\bm \xi}_1^{(n)})$, with the phase shift beam being ${\bm \xi}_1^{(n)}={[ e^{j\vartheta_{1,1}^{(n)} },..., e^{j\vartheta_{1,m}^{(n)}},..., e^{j\vartheta_{1,M_1}^{(n)} }]}^T$. In addition, ${\bf n}_\text{BS}(t) $ is the additive white Gaussian noise (AWGN) at the BS, whose elements follow the complex Gaussian distribution $ \mathcal{CN} \left({ 0},\sigma_0^2 \right)$.
The two semi-passive sub-IRSs operate in the sensing mode, and the received signal at the $i$-th sub-IRS ($i=2,3$) is given by
\begin{align} \label{E1}
{\bf x}_i(t)=\sqrt{\rho} {\bf h}_{\text{U2I},i} s(t)+
\sqrt{\rho} {\bf H}_{\text{I2I},i}{\bm \Theta}_1^{(n)} {\bf h}_{\text{U2I},1} s(t) +{\bf n}_i(t), i=2,3, t\in \mathcal{N}_n,n=1,2,
\end{align}
where ${\bf H}_{\text{I2I},i}$ is the channel from the passive sub-IRS (i.e., the first sub-IRS) to the semi-passive sub-IRS (i.e., sub-IRS $i=2,3$). In addition, ${\bf n}_i$ is the AWGN at the $i$-th sub-IRS, whose elements follow the complex Gaussian distribution $\mathcal{CN} \left({ 0},\sigma_0^2 \right)$.
The instantaneous achievable rate\footnote{\color{black}
In the first time block of the ISAC period, non-coherent detection is considered, due to the unavailability of any CSI knowledge. The rate achieved by non-coherent detection can be approximated by the rate expression (\ref{Erate}) with perfect CSI \cite{non-coherent}, since the length of the coherence time (in symbols) is much larger than the number of transmit antennas.} during the ISAC period is given by
\begin{align} \label{Erate}
R(t)=\log_2\left( 1+ \frac{ \rho \left| [ {\bf w}^{(n)}]^H {\bf H}_{\text{I2B},1}{\bm \Theta}_{1}^{(n)} {\bf h}_{\text{U2I},1} \right|^2 }{\sigma_0^2}\right) , t\in \mathcal{N}_n,n=1,2.
\end{align}
\subsubsection{PC Period} During the PC period, the user sends $\sqrt{\rho} s(t)$ to the BS at time slot $t\in \mathcal{T}_2 \triangleq \{T_1+1,\cdots,T_1+T_2\}$. The received signal at the BS via the whole distributed IRS is
\begin{align} \label{E10}
y(t)=\sqrt{\rho} {\bf w}^H (t) {\bf H}_{\text{I2B}}{\bm \Theta}(t) {\bf h}_{\text{U2I}}s(t) +{\bf w}^H (t) {\bf n}_\text{BS}(t), t\in \mathcal{T}_2,
\end{align}
where ${\bf H}_{\text{I2B}} \triangleq [{\bf H}_{\text{I2B},1},{\bf H}_{\text{I2B},2},{\bf H}_{\text{I2B},3}] \in \mathbb{C}^{N \times M}$ and ${\bf h}_{\text{U2I}} \triangleq [{\bf h}_{\text{U2I},1}^T,{\bf h}_{\text{U2I},2}^T,{\bf h}_{\text{U2I},3}^T]^T\in \mathbb{C}^{M \times 1} $ are the channels from the IRS to the BS and from the user to the IRS, respectively. The phase shift matrix at time slot $t$ is ${\bm \Theta}(t)=\text{diag}({\bm \xi}(t))$, where, dropping the time index for brevity, the phase shift beam is given by
${\bm \xi}\triangleq [{\bm \xi}_1^T,{\bm \xi}_2^T, {\bm \xi}_3^T]^T$ with the phase shift beam of the $i$-th sub-IRS being ${\bm \xi}_i={[ e^{j\vartheta_{i,1}},..., e^{j\vartheta_{i,m}},..., e^{j\vartheta_{i,M_i} }]}^T$.
The instantaneous achievable rate during the PC period is given by
\begin{align}
R(t)=\log_2\left( 1+ \frac{ \rho \left| {\bf w}^H (t) {\bf H}_{\text{I2B}}{\bm \Theta}(t) {\bf h}_{\text{U2I}} \right|^2 }{\sigma_0^2}\right) , t\in \mathcal{T}_2.
\end{align}
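As a quick numerical sanity check of the rate expression above, the following minimal Python sketch evaluates $R(t)$ for toy dimensions, with randomly drawn channels standing in for the structured mmWave channels introduced below; all sizes and values are illustrative assumptions, not system parameters from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 16             # BS antennas, total IRS elements (toy sizes)
rho, sigma2 = 1.0, 0.1   # transmit power and noise variance (toy values)

# Randomly drawn stand-ins for H_{I2B} and h_{U2I}
H_i2b = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_u2i = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

xi = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # unit-modulus phase-shift beam
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)                            # combining vector with ||w|| = 1

# Effective scalar channel and instantaneous achievable rate
g = w.conj() @ H_i2b @ np.diag(xi) @ h_u2i
rate = np.log2(1 + rho * np.abs(g) ** 2 / sigma2)
```

The unit-modulus constraint on each entry of ${\bm \xi}$ and the unit-norm constraint on ${\bf w}$ mirror the constraints used in the beamforming problems of Section \ref{s3}.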
\subsection{Channel Model}
In general, the IRS is deployed with line-of-sight (LoS) paths to both the BS and the user. In addition, since the non-line-of-sight (NLoS) paths are much weaker than the LoS path for mmWave communications\footnote{For mmWave signals, channel measurement campaigns reveal that the signal power of the LoS component is about 13 dB higher than the sum of the powers of the NLoS components{\color{black}\cite{muhi2010modelling}}.}, we only consider the dominant LoS paths. Hence, the channel from the $i$-th sub-IRS to the BS is modelled as
\begin{align} \label{E8}
& {\bf H}_{\text{I2B},i}=\alpha_{\text{I2B},i} {\bf a}\left(u_{\text{I2B},i}^\text{A} \right)
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right), i=1,2,3,
\end{align}
where $\alpha_{\text{I2B},i}$ is the complex channel gain, ${\bf a}$ and ${\bf b}_i$ are array response vectors for the BS and the $i$-th sub-IRS, respectively.
The two effective angles of departure (AoDs) at the $i$-th sub-IRS are defined as
\begin{align}
& u_{\text{I2B},i}^\text{D} =2 \pi \frac{d_\text{IRS} }{\lambda} \cos (\gamma_{\text{I2B},i}^\text{D}) \sin (\varphi_{\text{I2B},i}^\text{D}),\\
& v_{\text{I2B},i}^\text{D}=2 \pi \frac{d_\text{IRS} }{\lambda} \sin (\gamma_{\text{I2B},i}^\text{D}),
\end{align}
where $d_\text{IRS}$ is the distance between two adjacent reflecting elements, $\lambda$ is the carrier wavelength, $ \gamma_{\text{I2B},i}^\text{D} $ and $\varphi_{\text{I2B},i}^\text{D}$ are the elevation and azimuth AoDs for the link from the $i$-th sub-IRS to the BS, respectively.
The effective angle of arrival (AoA) at the BS is defined as
\begin{align}
u_{\text{I2B},i}^\text{A}=2 \pi \frac{d_\text{BS} }{\lambda} \sin (\theta_{\text{I2B},i}^\text{A}),
\end{align}
where $d_\text{BS}$ is the distance between two adjacent antennas, and $\theta_{\text{I2B},i}^\text{A}$
is the AoA at the BS.
Similarly, the channel from the user to the $i$-th sub-IRS is modelled as
\begin{align} \label{E2}
{\bf h}_{\text{U2I},i}= \alpha_{\text{U2I},i} {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right), i=1,2,3,
\end{align}
where $\alpha_{\text{U2I},i}$ is the complex channel gain, $u_{\text{U2I},i}^\text{A}$ and $v_{\text{U2I},i}^\text{A}$
are two effective angles of arrival (AoAs) from the user to the $i$-th sub-IRS, which are defined as
\begin{align}
& u_{\text{U2I},i}^\text{A} =2 \pi \frac{d_\text{IRS} }{\lambda} \cos (\gamma_{\text{U2I},i}^\text{A}) \sin (\varphi_{\text{U2I},i}^\text{A}),\\
& v_{\text{U2I},i}^\text{A}=2\pi \frac{d_\text{IRS} }{\lambda} \sin (\gamma_{\text{U2I},i}^\text{A}),
\end{align}
where $ \gamma_{\text{U2I},i}^\text{A} $ and $\varphi_{\text{U2I},i}^\text{A}$ are the elevation and azimuth AoAs for the link from the user to the $i$-th sub-IRS, respectively. Furthermore, we assume that $d_\text{BS}=d_\text{IRS}=\frac{\lambda}{2}$. Then, the array response vectors for the BS and the $i$-th sub-IRS are given by
\begin{align}
& {\bf a} (u)=\left[1,\cdots, e^{j(n-1)u},\cdots, e^{j(N-1)u} \right]^T, \\
& {\bf b}_i (u,v)=\left[1,\cdots, e^{j(m_y-1)u},\cdots, e^{j(M_{y,i}-1)u} \right]^T \otimes \left[1,\cdots, e^{j(m_z-1)v},\cdots, e^{j(M_{z,i}-1)v} \right]^T.
\end{align}
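The array response vectors above can be formed directly from the effective angles. The sketch below builds the ULA and URA responses with half-wavelength spacing (so $2\pi d/\lambda = \pi$), using the Kronecker structure of the URA response; the dimensions and angle values are toy assumptions.

```python
import numpy as np

def ula_response(u, n):
    """ULA response a(u) = [1, e^{ju}, ..., e^{j(n-1)u}]^T."""
    return np.exp(1j * u * np.arange(n))

def ura_response(u, v, My, Mz):
    """URA response b(u, v) as the Kronecker product of the y- and z-axis factors."""
    return np.kron(ula_response(u, My), ula_response(v, Mz))

# Effective angles from toy physical elevation/azimuth angles (radians), d = lambda/2
gamma, phi = 0.3, 0.7
u = np.pi * np.cos(gamma) * np.sin(phi)
v = np.pi * np.sin(gamma)

My, Mz = 4, 8
b = ura_response(u, v, My, Mz)   # length My*Mz response vector
```

The Kronecker index ordering places the $z$-axis factor in the fast-running index, i.e., element $(m_y, m_z)$ sits at position $(m_y-1)M_{z,i} + m_z$.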
Also, the channel from the passive sub-IRS to the semi-passive sub-IRS (i.e., sub-IRS $i=2,3$) is modelled as
\begin{align} \label{E3}
{\bf H}_{\text{I2I},i}=\alpha_{\text{I2I},i} {\bf b}_i\left(u_{\text{I2I},i}^\text{A},v_{\text{I2I},i}^\text{A} \right)
{\bf b}_1^H\left(u_{\text{I2I},i}^\text{D},v_{\text{I2I},i}^\text{D} \right), i=2,3,
\end{align}
where $\alpha_{\text{I2I},i}$ is the complex channel gain for the link from the first sub-IRS to the $i$-th sub-IRS, and the two effective AoAs at the $i$-th sub-IRS are defined as
\begin{align}
& u_{\text{I2I},i}^\text{A}=2 \pi \frac{d_\text{IRS} }{\lambda} \cos (\gamma_{\text{I2I},i}^\text{A}) \sin (\varphi_{\text{I2I},i}^\text{A}),\\
& v_{\text{I2I},i}^\text{A}= 2 \pi \frac{d_\text{IRS} }{\lambda} \sin (\gamma_{\text{I2I},i}^\text{A}),
\end{align}
where $ \gamma_{\text{I2I},i}^\text{A} $ and $\varphi_{\text{I2I},i}^\text{A}$ are the elevation and azimuth AoAs from the first sub-IRS to the $i$-th sub-IRS, respectively. The two effective AoDs at the first sub-IRS are defined as
\begin{align}
& u_{\text{I2I},i}^\text{D}=2 \pi \frac{d_\text{IRS} }{\lambda} \cos (\gamma_{\text{I2I},i}^\text{D}) \sin (\varphi_{\text{I2I},i}^\text{D}),\\
& v_{\text{I2I},i}^\text{D}= 2 \pi \frac{d_\text{IRS} }{\lambda} \sin (\gamma_{\text{I2I},i}^\text{D}),
\end{align}
where $ \gamma_{\text{I2I},i}^\text{D} $ and $\varphi_{\text{I2I},i}^\text{D}$ are the elevation and azimuth AoDs from the first sub-IRS to the $i$-th sub-IRS, respectively.
\section{Location Sensing} \label{s2}
During the ISAC period, the two semi-passive sub-IRSs operate in the sensing mode.
By substituting (\ref{E2}) and (\ref{E3}) into (\ref{E1}), the received signal at the $i$-th sub-IRS during the $n$-th time block can be rewritten as
\begin{align}
{\bf x}_i(t)&=\sqrt{\rho} \alpha_{\text{U2I},i} {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) s(t) \\
&+ \sqrt{\rho}\alpha_{\text{U2I},1}
\alpha_{\text{I2I},i} {\bf b}_i\left(u_{\text{I2I},i}^\text{A},v_{\text{I2I},i}^\text{A} \right)
{\bf b}_1^H\left(u_{\text{I2I},i}^\text{D},v_{\text{I2I},i}^\text{D} \right)
{\bm \Theta}_1^{(n)}
{\bf b}_1 \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) s(t) +{\bf n}_i(t), t\in \mathcal{N}_n, \nonumber
\end{align}
where the first term is the signal from the user, which involves the user location information, and the second term is the interference from the passive sub-IRS.
The above equation can be expressed in a more compact form
\begin{align}
{\bf x}_i(t)=\sqrt{\rho}{\bf B}_i {\bm \beta}_i^{(n)} s(t)+{\bf n}_i(t), t\in \mathcal{N}_n,
\end{align}
where
\begin{align}
& {\bf B}_i \triangleq \left[{\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right), {\bf b}_i\left(u_{\text{I2I},i}^\text{A},v_{\text{I2I},i}^\text{A} \right) \right] \in \mathbb{C}^{M_i \times 2}, \\
&{\bm \beta}_i^{(n)}\triangleq \left[\alpha_{\text{U2I},i}, \
\alpha_{\text{U2I},1}
\alpha_{\text{I2I},i}{\bf b}_1^H\left(u_{\text{I2I},i}^\text{D},v_{\text{I2I},i}^\text{D} \right)
{\bm \Theta}_1^{(n)}
{\bf b}_1 \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) \right]^T \in \mathbb{C}^{2 \times 1} .
\end{align}
Next, from the sequence of received signals $ {\bf x}_i(t), t \in \mathcal{N}_n$, we estimate the two pairs of effective AoAs for the links from the user to the $i$-th sub-IRS and from the passive sub-IRS to the $i$-th sub-IRS, and then identify which of the two pairs corresponds to $\left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right)$.
Finally, according to the effective AoAs from the user to the two semi-passive sub-IRSs (i.e., $\left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right),i=2,3$), the location of the user is determined.
\subsection{Estimate Effective AoAs} \label{III.A}
By using the total least squares (TLS) estimation of signal parameters via rotational
invariance techniques (ESPRIT) method, we separately estimate the effective AoAs corresponding to the $y$ axis (i.e., $u_{\text{U2I},i}^\text{A}$ and $u_{\text{I2I},i}^\text{A}$) and the $z$ axis (i.e., $v_{\text{U2I},i}^\text{A}$ and $v_{\text{I2I},i}^\text{A}$). Then, invoking the multiple signal classification (MUSIC) method, we pair the effective AoAs corresponding to the $y$ axis with those corresponding to the $z$ axis.
Without loss of generality, we focus on the estimation of the effective AoAs at the $i$-th sub-IRS in the $n$-th time block ($i \in \{2,3\}$ and $n \in \{1,2\}$). To remove the coherency of the received signals, we use forward-backward spatial smoothing (FBSS){\color{black} \cite{van2004optimum}} to preprocess the signals received at the semi-passive sub-IRS.
Specifically, for the $i$-th sub-IRS, we construct a set of $N_{\text{micro},i}$ micro-surfaces, each with $L_{\text{micro},i}= Q_{y,i} \times Q_{z,i}$ semi-passive elements. Each micro-surface is shifted by one row along the $z$ direction or one column along the $y$ direction from the preceding micro-surface. An example is shown in Fig.~\ref{subsurface}, where we construct a set of 4 micro-surfaces, each with $3 \times 3$ semi-passive elements.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{subsurface.pdf}
\caption{An example of the micro-surface.}
\label{subsurface}
\end{figure}
The received signals at the $m$-th micro-surface of the $i$-th sub-IRS during the $n$-th time block are denoted by $ {\bf x}_{i,m} (t) \in {{\mathbb{C}}^{L_{\text{micro},i} \times 1}}, t \in \mathcal{N}_n$.
Following the FBSS technique, the auto-correlation matrix of ${\bf x}_{i,m} (t), t \in \mathcal{N}_n$, i.e.,
${\bf R}_{i}^{(n)} \triangleq \mathbb{E} \{{\bf x}_{i,m} (t) [{\bf x}_{i,m} (t)]^H \}$, can be estimated as
\begin{align}
\hat{\bf R}_{i}^{(n)} =\frac{1}{2 \tau_n N_{\text{micro},i} }
\sum\limits_{t \in \mathcal{N}_n } \sum\limits_{m=1}^{N_{\text{micro},i}}
\left\{ {\bf x}_{i,m} (t) [{\bf x}_{i,m} (t)]^H
+ {\bf J} [ {\bf x}_{i,m} (t) ]^* [ {\bf x}_{i,m}(t) ]^T {\bf J} \right\},
\end{align}
where ${\bf J}$ is the exchange matrix, with ones on its counterdiagonal and zeros elsewhere.
Then, we perform the
eigenvalue decomposition of ${{\hat{\bf R}}}_{i}^{(n)}$
\begin{align}
\hat{\bf R}_{i}^{(n)}= {\bf U}_i^{(n)}\text{diag}\left(\lambda_{i,1}^{(n)},\dots,\lambda_{i,L_{ \text{micro},i }}^{(n)}\right) [{\bf U}_i^{(n)}]^H,
\end{align}
where $ {\bf U}_i^{(n)} \triangleq [{\bf u}_{i,1}^{(n)}, \dots, {\bf u}_{i,L_{\text{micro},i}}^{(n)}]$ and the eigenvalues $\lambda_{i,1}^{(n)},\dots,\lambda_{i,L_{\text{micro},i}}^{(n)}$ are sorted in descending order.
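To illustrate the forward-backward averaging step, the sketch below forms the sample covariance of coherent snapshots on a single $3\times 3$ micro-surface (vectorized to $L=9$ elements) and applies the exchange matrix ${\bf J}$. For brevity, the additional averaging over the $N_{\text{micro},i}$ shifted micro-surfaces is omitted, and the steering vectors and noise level are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 9, 300                      # elements per micro-surface (3x3), snapshots

# Two coherent arrivals (same symbol stream), as in the sensing model
b1 = np.exp(1j * 0.7 * np.arange(L))
b2 = np.exp(1j * -0.4 * np.arange(L))
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
X = np.outer(b1 + 0.8 * b2, s)     # coherent mixture: rank-1 without smoothing
X += 0.05 * (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T)))

J = np.fliplr(np.eye(L))           # exchange matrix: ones on the counterdiagonal
A = X @ X.conj().T / T             # forward sample covariance
R = (A + J @ A.conj() @ J) / 2     # forward-backward averaged estimate

vals, vecs = np.linalg.eigh(R)
vals, vecs = vals[::-1], vecs[:, ::-1]   # sort eigenpairs in descending order
```

The forward-backward average is Hermitian and persymmetric (${\bf J}{\bf R}^*{\bf J}={\bf R}$), which is the property the subsequent subspace steps rely on.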
\subsubsection{ Estimate $u_{\text{U2I},i}^\text{A}$ and $u_{\text{I2I},i}^\text{A}$ by using the TLS ESPRIT algorithm} For the $i$-th sub-IRS, we construct two
auxiliary sub-surfaces of its first micro-surface, each of size $L_{\text{aux},i}=(Q_{y,i}-1)\times Q_{z,i}$, as illustrated in Fig.~\ref{subsurface_uy}.
The signal sub-space corresponding to the two auxiliary sub-surfaces is\footnote{\color{black}There are two pairs of AoAs seen at each semi-passive sub-IRS, one corresponding to the link from the user to the semi-passive sub-IRS, and the other corresponding to the link from the passive sub-IRS to the semi-passive sub-IRS. Hence, the matrix ${\bf U}_{\text{S},ik}^{(n)}$ has two columns.}
\begin{align}
{\bf U}_{\text{S},ik}^{(n)} \triangleq {\bf J}_{k} {\bf U}_{\text{S},i}^{(n)}, k=1,2,
\end{align}
where ${\bf U}_{\text{S},i}^{(n)} \triangleq [{\bf u}_{i,1}^{(n)}, {\bf u}_{i,2}^{(n)} ] \in \mathbb{C}^{L_{\text{micro},i} \times 2} $ and ${\bf J}_{k} \in \mathbb{R}^{L_{\text{aux},i} \times L_{\text{micro},i}}$ ($k\in\{1,2\}$) is a selecting matrix, whose elements are either 1 or 0. If the $j$-th reflecting element of the first micro-surface is selected as the $p$-th element of auxiliary sub-surface $k\in\{1,2\}$, then we set $[{\bf J}_{ k}]_{pj}=1$. Otherwise, we set $[{\bf J}_{k}]_{pj}=0$.
\begin{figure}[htbp]
\begin{minipage}[t]{1 \linewidth}
\centering
\subfigure[]{ \label{subsurface_uy}
\includegraphics[width=4in]{subsurface_uy.pdf}}
\end{minipage}
\begin{minipage}[t]{1 \linewidth}
\centering
\subfigure[]{ \label{subsurface_uz}
\includegraphics[width=4in]{subsurface_uz.pdf}}
\end{minipage}
\caption{An example of the auxiliary sub-surface.}
\label{simulation1}
\end{figure}
Calculate
\begin{align}
{\bm \Phi}_{\text{TLS},i}^{(n)}=-{\bf V}_{i,12}^{(n)} [{\bf V}_{i,22}^{(n)}]^{-1},
\end{align}
where ${\bf V}_{i,12}^{(n)}$ and ${\bf V}_{i,22}^{(n)}$ are $2 \times 2$ matrices defined by the eigendecomposition of the $4 \times 4$ matrix
\begin{align}
& {\bf C}_i^{(n)} \triangleq \left[ {\bf U}_{\text{S},i1}^{(n)}, {\bf U}_{\text{S},i2}^{(n)} \right]^H \left[ {\bf U}_{\text{S},i1}^{(n)}, {\bf U}_{\text{S},i2}^{(n)}\right]
\\
&=
\begin{bmatrix}
{\bf V}_{i,11}^{(n)} & {\bf V}_{i,12}^{(n)}\\
{\bf V}_{i,21}^{(n)} & {\bf V}_{i,22}^{(n)}
\end{bmatrix}
{\bm \Lambda}_{C,i}^{(n)}\begin{bmatrix}
{\bf V}_{i,11}^{(n)} & {\bf V}_{i,12}^{(n)}\\
{\bf V}_{i,21}^{(n)} & {\bf V}_{i,22}^{(n)}
\end{bmatrix}^H, \nonumber
\end{align}
where ${\bm \Lambda}_{C,i}^{(n)} \triangleq \text{diag}\left( \lambda_{\text{C},i1}^{(n)},\dots, \lambda_{\text{C},i4}^{(n)}\right) $ with the eigenvalues sorted in descending order.
Perform eigendecomposition of $ {\bm \Phi}_{\text{TLS},i}^{(n)}$ and obtain its eigenvalues $\lambda_{\text{TLS},il}^{(n)},l=1,2$.
Then, we have the two effective AoAs at the $i$-th sub-IRS corresponding to the $y$ axis estimated as
\begin{align}
\check{u}_{il}^{(n)}=\text{angle}(\lambda_{\text{TLS},il}^{(n)}),l=1,2.
\end{align}
It is worth noting that the estimators of $u_{\text{U2I},i}^\text{A}$ and $u_{\text{I2I},i}^\text{A}$ in the $n$-th time block belong to $\mathcal{U}_i^{(n)}\triangleq \{\check{u}_{il}^{(n)},l=1,2 \}$, i.e., $\hat{u}_{\text{U2I},i}^{\text{A},(n)},\hat{u}_{\text{I2I},i}^{\text{A},(n)} \in \mathcal{U}_i^{(n)} $.
\subsubsection{Estimate $v_{\text{U2I},i}^\text{A}$ and $v_{\text{I2I},i}^\text{A}$ by using the TLS ESPRIT algorithm} For the $i$-th sub-IRS, we construct two
auxiliary sub-surfaces of its first micro-surface, each of size $\tilde{L}_{\text{aux},i}=Q_{y,i} \times (Q_{z,i}-1)$, as illustrated in Fig.~\ref{subsurface_uz}. Following a similar process to that used for estimating $u_{\text{U2I},i}^\text{A}$ and $u_{\text{I2I},i}^\text{A}$, we estimate the two effective AoAs at the $i$-th sub-IRS corresponding to the $z$ axis as $\check{v}_{il}^{(n)},l=1,2$. The estimators of $v_{\text{U2I},i}^\text{A}$ and $v_{\text{I2I},i}^\text{A}$ in the $n$-th time block belong to $\mathcal{V}_i^{(n)} \triangleq \{\check{v}_{il}^{(n)},l=1,2 \}$, i.e., $\hat{v}_{\text{U2I},i}^{\text{A},(n)},\hat{v}_{\text{I2I},i}^{\text{A},(n)} \in \mathcal{V}_i^{(n)} $.
\subsubsection{Pair $\check{u}_{il}^{(n)}$ and $\check{v}_{il}^{(n)}$ by using the MUSIC algorithm}
Let
\begin{align}
\check{f}_{i,ls}^{(n)}\triangleq {\bf b}_{ \text{micro},i }^H ( \check{u}_{il}^{(n)}, \check{v}_{is}^{(n)})
{\bf U}_{\text{N},i}^{(n)},
\end{align}
where ${\bf U}_{\text{N},i}^{(n)} \triangleq [{\bf u}_{i,3}^{(n)}, \dots, {\bf u}_{i, L_{\text{micro},i} }^{(n)} ] \in \mathbb{C}^{L_{\text{micro},i} \times (L_{\text{micro},i}-2)} $, and ${\bf b}_{ \text{micro},i }$ is the array response vector of the micro-surface on the $i$-th sub-IRS. Then compute
\begin{align}
f\left( \check{u}_{il}^{(n)}, \check{v}_{is}^{(n)}\right)=
\check{f}_{i,ls}^{(n)} [\check{f}_{i,ls}^{(n)}]^H, l,s=1,2,
\end{align}
and choose the two minima
$f\left( \hat{u}_{il}^{(n)}, \hat{v}_{il}^{(n)}\right), l=1,2$, where $\hat{u}_{il}^{(n)} \in \mathcal{U}_i^{(n)}$ and $\hat{v}_{il}^{(n)} \in \mathcal{V}_i^{(n)}$. As such, we obtain two pairs of effective AoAs $\left( \hat{u}_{il}^{(n)}, \hat{v}_{il}^{(n)}\right), l=1,2$.
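The pairing step can be reproduced on a small URA. In the sketch below (toy angles, two incoherent sources), the MUSIC cost is evaluated over all four candidate $(u,v)$ combinations, and the two smallest values recover the correct pairs; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
My = Mz = 4
L = My * Mz

def ura(u, v):
    """URA response of the micro-surface (Kronecker of y- and z-axis factors)."""
    return np.kron(np.exp(1j * u * np.arange(My)), np.exp(1j * v * np.arange(Mz)))

(u1, v1), (u2, v2) = (0.9, -0.5), (-1.3, 1.7)    # true (u, v) pairs (toy values)
T = 400
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = np.outer(ura(u1, v1), S[0]) + np.outer(ura(u2, v2), S[1])
X += 0.05 * (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T)))

R = X @ X.conj().T / T
vals, vecs = np.linalg.eigh(R)
Un = vecs[:, :-2]                                 # noise subspace (L-2 columns)

def music_cost(u, v):
    """f(u, v): squared projection of b(u, v) onto the noise subspace."""
    return float(np.linalg.norm(ura(u, v).conj() @ Un) ** 2)

costs = {(u, v): music_cost(u, v) for u in (u1, u2) for v in (v1, v2)}
best = sorted(costs, key=costs.get)[:2]           # the two smallest costs
```

Correctly paired responses lie (almost) in the signal subspace, so their projections onto the noise subspace are near zero, while mismatched combinations yield large costs.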
\subsubsection{Determine $\left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A} \right)$}
Since there are two pairs of effective AoAs, we need to determine which pair corresponds to $\left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A} \right)$.
Denote the locations of the BS and the $i$-th sub-IRS by ${\bf q}_\text{BS}=(x_\text{BS},y_\text{BS},z_\text{BS})$ and ${\bf q}_i=(x_i,y_i,z_i)$, respectively.
Since the locations of the BS and all the sub-IRSs are fixed, we assume that these locations are perfectly known.
By invoking the available location information, the effective AoAs from the first sub-IRS to the $i$-th sub-IRS can be calculated as
\begin{align}
& u_{\text{I2I},i}^\text{A}= \frac{y_i-y_1}{\| {\bf q}_i-{\bf q}_1\|},\\
& v_{\text{I2I},i}^\text{A}= \frac{z_i-z_1}{\| {\bf q}_i-{\bf q}_1\|}.
\end{align}
Finally, excluding the effective AoA pair corresponding to $\left(u_{\text{I2I},i}^\text{A},v_{\text{I2I},i}^\text{A} \right) $ from $\left\{ \left( \hat{u}_{il}^{(n)},\hat{v}_{il}^{(n)}\right)| l=1,2\right\}$, the remaining pair is the estimate of $\left(u_{\text{U2I},i}^{\text{A}},v_{\text{U2I},i}^{\text{A}} \right) $ in the $n$-th time block, i.e., $\left(\hat{u}_{\text{U2I},i}^{\text{A},(n)},\hat{v}_{\text{U2I},i}^{\text{A},(n)} \right)$.
\subsection{Estimate User Location} \label{III.B}
In the following, we estimate the user location according to the estimated effective AoAs
$\left(\hat{u}_{\text{U2I},i}^{\text{A},(n)},\hat{v}_{\text{U2I},i}^{\text{A},(n)} \right),i=2,3$. The locations of the sub-IRSs and the user have the following relationship with these effective AoAs:
\begin{align}
& \hat{u}_{\text{U2I},i}^{\text{A},(n)}=\frac{ y_i-\hat{y}_\text{U}^{(n)} }{\hat{d}_{\text{U2I},i}^{(n)} }, i=2,3,\\
& \hat{v}_{\text{U2I},i}^{\text{A},(n)}=\frac{ z_i-\hat{z}_\text{U}^{(n)} }{\hat{d}_{\text{U2I},i}^{(n)} }, i=2,3,
\end{align}
where $\hat{\bf q}_\text{U}^{(n)}=[\hat{x}_\text{U}^{(n)}, \hat{y}_\text{U}^{(n)},\hat{z}_\text{U}^{(n)}]^T$ denotes the user location estimated in the $n$-th time block, and $\hat{d}_{\text{U2I},i}^{(n)} \triangleq \|\hat{\bf q}_\text{U}^{(n)}-{\bf q}_i\|$ is the estimated distance between the user and the $i$-th sub-IRS.
The above equations can be expressed in a more compact form
\begin{align}
{\bf A}^{(n)}\hat{\bf z}^{(n)}={\bf p},
\end{align}
where
\begin{align}
& {\bf A}^{(n)} \triangleq
\begin{pmatrix}
1 & 0 & \hat{u}_{\text{U2I},2}^{\text{A},(n)} & 0\\
0 & 1 & \hat{v}_{\text{U2I},2}^{\text{A},(n)} & 0\\
1 & 0 & 0 & \hat{u}_{\text{U2I},3}^{\text{A},(n)}\\
0 & 1 & 0 & \hat{v}_{\text{U2I},3}^{\text{A},(n)}
\end{pmatrix},\\
& \hat{\bf z}^{(n)} \triangleq
\left[ \hat{y}_\text{U}^{(n)}, \hat{z}_\text{U}^{(n)}, \hat{d}_{\text{U2I},2}^{(n)},\hat{d}_{\text{U2I},3}^{(n)}\right]^T, \\
& {\bf p} \triangleq \left[y_2,z_2,y_3,z_3 \right]^T.
\end{align}
By solving the above matrix equation, we obtain
\begin{align}
&\hat{d}_{\text{U2I},2}^{(n)}=
\frac{\hat{u}_{\text{U2I},3}^{\text{A},(n)} (z_2-z_3) -\hat{v}_{\text{U2I},3}^{\text{A},(n)} (y_2-y_3) }
{\hat{u}_{\text{U2I},3}^{\text{A},(n)} \hat{v}_{\text{U2I},2}^{\text{A},(n)} - \hat{u}_{\text{U2I},2}^{\text{A},(n)} \hat{v}_{\text{U2I},3}^{\text{A},(n)} }, \\
& \hat{d}_{\text{U2I},3}^{(n)}=
\frac{\hat{u}_{\text{U2I},2}^{\text{A},(n)} (z_3-z_2) -\hat{v}_{\text{U2I},2}^{\text{A},(n)} (y_3-y_2) }
{\hat{u}_{\text{U2I},2}^{\text{A},(n)} \hat{v}_{\text{U2I},3}^{\text{A},(n)} - \hat{u}_{\text{U2I},3}^{\text{A},(n)} \hat{v}_{\text{U2I},2}^{\text{A},(n)} },\\
& \hat{y}_\text{U}^{(n)}=y_2-\hat{u}_{\text{U2I},2}^{\text{A},(n)} \hat{d}_{\text{U2I},2}^{(n)}, \label{E4}\\
& \hat{z}_\text{U}^{(n)}=z_2-\hat{v}_{\text{U2I},2}^{\text{A},(n)} \hat{d}_{\text{U2I},2}^{(n)}\label{E5}.
\end{align}
Then, we calculate $\hat{x}_\text{U}^{(n)}$. Noting that $\hat{x}_\text{U}^{(n)}$ satisfies
\begin{align}
\left(\hat{x}_\text{U}^{(n)}-x_i \right)^2 =\left(\hat{d}_{\text{U2I},i}^{(n)}\right)^2-\left(\hat{y}_\text{U}^{(n)}-y_i \right)^2-\left(\hat{z}_\text{U}^{(n)}-z_i \right)^2,i=2,3,
\end{align}
we have
\begin{align}
\hat{x}_\text{U}^{(n)}= \underset {\omega_2 }{\text{arg}} \ \ \underset{\omega_2 \in \{x_2 \pm d_{x,2}\}, \ \omega_3 \in \{x_3 \pm d_{x,3}\} }{ \text{min} } \left|\omega_2-\omega_3 \right|, \label{E6}
\end{align}
where
\begin{align}
d_{x,i} \triangleq \sqrt{ \left(\hat{d}_{\text{U2I},i}^{(n)}\right)^2-\left(\hat{y}_\text{U}^{(n)}-y_i \right)^2-\left(\hat{z}_\text{U}^{(n)}-z_i \right)^2}.
\end{align}
Combining (\ref{E4}), (\ref{E5}) and (\ref{E6}), we obtain the user location $\hat{\bf q}_\text{U}^{(n)}=[\hat{x}_\text{U}^{(n)}, \hat{y}_\text{U}^{(n)},\hat{z}_\text{U}^{(n)}]^T$.
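The closed-form triangulation above is easy to verify numerically. The sketch below generates the direction cosines seen at two toy sub-IRS positions from a known user position (the $2\pi d_\text{IRS}/\lambda$ scaling is dropped, matching the direction-cosine relations used in this subsection), then recovers the distances and coordinates, resolving the $x$-ambiguity by the minimum-gap rule in (\ref{E6}). All positions are illustrative.

```python
import numpy as np

q2, q3 = np.array([2.0, 0.0, 3.0]), np.array([6.0, 0.5, 3.5])  # sub-IRS positions (toy)
qU = np.array([4.0, -5.0, 1.5])                                 # true user position (toy)

def dir_cosines(qi):
    """(u, v) direction cosines from the user to sub-IRS i, as in the text."""
    d = np.linalg.norm(qU - qi)
    return (qi[1] - qU[1]) / d, (qi[2] - qU[2]) / d

(u2, v2), (u3, v3) = dir_cosines(q2), dir_cosines(q3)
(y2, z2), (y3, z3) = q2[1:], q3[1:]

# Closed-form distances and (y, z) coordinates
d2 = (u3 * (z2 - z3) - v3 * (y2 - y3)) / (u3 * v2 - u2 * v3)
d3 = (u2 * (z3 - z2) - v2 * (y3 - y2)) / (u2 * v3 - u3 * v2)
yU = y2 - u2 * d2
zU = z2 - v2 * d2

def dx(qi, di):
    """Residual |x-offset| of the user from sub-IRS i."""
    return np.sqrt(di**2 - (yU - qi[1])**2 - (zU - qi[2])**2)

# Resolve the sign ambiguity: pick the candidate pair with the smallest gap
cands2 = [q2[0] + dx(q2, d2), q2[0] - dx(q2, d2)]
cands3 = [q3[0] + dx(q3, d3), q3[0] - dx(q3, d3)]
xU = min(((abs(a - b), a) for a in cands2 for b in cands3))[1]
```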
{\color{black}
\begin{remark}
To estimate the user location, some IRS elements are equipped with receiving radio frequency (RF) chains, which incur additional power consumption and hardware cost. However, since the proportion of IRS elements with receiving RF chains is very small, the resulting increase in power consumption and hardware cost is modest.
\end{remark}
}
\section{Beamforming Design}\label{s3}
In this section, the estimated user location is used for beamforming design in both the ISAC period and the PC period.
\subsection{ISAC period}
In the ISAC period, only the passive sub-IRS (i.e., the first sub-IRS) operates in the reflecting mode to assist uplink data transmission.
By substituting (\ref{E8}) and (\ref{E2}) into (\ref{E7}), the received signal at the BS can be rewritten as
\begin{align}
y(t)=& \sqrt{\rho} \alpha_{\text{I2B},1} \alpha_{\text{U2I},1} [ {\bf w}^{(n)}]^H {\bf a}\left(u_{\text{I2B},1}^\text{A} \right)
{\bf b}_1^H\left(u_{\text{I2B},1}^\text{D},v_{\text{I2B},1}^\text{D} \right){\bm \Theta}_{1}^{(n)} {\bf b}_1 \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) s(t)\\
&+[ {\bf w}^{(n)}]^H{\bf n}_\text{BS}(t), t\in \mathcal{N}_n,n=1,2. \nonumber
\end{align}
We aim to maximize the received signal power at the BS, and the optimization problem is formulated as
\begin{subequations}
\begin{align}
&\underset{{\bf w}^{(n)},{\bm \xi}_1^{(n)}}{\text{max}} && \left| [ {\bf w}^{(n)}]^H {\bf a}\left(u_{\text{I2B},1}^\text{A} \right)
{\bf b}_1^H\left(u_{\text{I2B},1}^\text{D},v_{\text{I2B},1}^\text{D} \right){\bm \Theta}_{1}^{(n)} {\bf b}_1 \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) \right|^2,\\
&\text{s.t.} &&\left\| {\bf w}^{(n)} \right\|=1,\\
& && |[{\bm \xi}_1^{(n)}]_i|=1,i=1,\cdots,M_1,
\end{align}
\end{subequations}
which can be equivalently decomposed into two sub-problems, i.e., the sub-problem corresponding to the BS combining vector
\begin{subequations}
\begin{align}
&\underset{{\bf w}^{(n)}}{\text{max}} \ \ \left| [ {\bf w}^{(n)}]^H {\bf a}\left(u_{\text{I2B},1}^\text{A} \right) \right|^2,\\
&\text{s.t.} \ \ \ \ \left\| {\bf w}^{(n)} \right\| =1,
\end{align}
\end{subequations}
and the sub-problem corresponding to the phase shift beam of the first sub-IRS
\begin{subequations}
\begin{align}
&\underset{{\bm \xi}_1^{(n)}}{\text{max}} \ \ \left |
{\bf b}_1^H\left(u_{\text{I2B},1}^\text{D},v_{\text{I2B},1}^\text{D} \right){\bm \Theta}_{1}^{(n)} {\bf b}_1 \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) \right |^2,\\
&\text{s.t.} \ \ \ \ |[{\bm \xi}_1^{(n)}]_i|=1,i=1,\cdots,M_1.
\end{align}
\end{subequations}
It can be easily verified that the optimal solutions for the above sub-problems are
\begin{align}
& {\bf w}^{(n)}= \frac{1}{\sqrt{N}} {\bf a}\left(u_{\text{I2B},1}^\text{A} \right), \label{E9}\\
& {\bm \xi}_1^{(n)}=\text{diag}\left( {\bf b}_1^* \left(u_{\text{U2I},1}^\text{A},v_{\text{U2I},1}^\text{A}\right) \right)
{\bf b}_1\left(u_{\text{I2B},1}^\text{D},v_{\text{I2B},1}^\text{D} \right).\label{E10}
\end{align}
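As a quick numerical sanity check (not part of the derivation), the optimality of (\ref{E9}) and (\ref{E10}) can be verified with a short script. The sketch below makes illustrative assumptions: half-wavelength ULA responses for both the BS and the (flattened) sub-IRS, and arbitrary direction cosines; the received power with the closed-form combiner and aligned phase shifts attains $N M_1^2$ and dominates any unit-modulus phase configuration.

```python
# Sanity check of the closed-form solutions: matched-filter combining at the
# BS and element-wise phase alignment at the sub-IRS.
# Assumptions (illustrative only): half-wavelength ULA steering vectors and
# arbitrary direction cosines; the paper models the sub-IRS as a UPA.
import numpy as np

def steer(u, n):
    """Steering vector of an n-element half-wavelength ULA, direction cosine u."""
    return np.exp(1j * np.pi * u * np.arange(n))

N, M1 = 8, 16
a = steer(0.3, N)        # BS array response a(u_I2B^A)
b_dep = steer(0.5, M1)   # sub-IRS departure response, flattened
b_arr = steer(-0.2, M1)  # sub-IRS arrival response, flattened

w = a / np.sqrt(N)                   # matched-filter combiner, ||w|| = 1
xi = np.conj(b_arr) * b_dep          # diag(b_arr^*) b_dep, unit-modulus entries

def rx_power(xi_):
    # |w^H a|^2 * |b_dep^H diag(xi) b_arr|^2
    return abs(np.vdot(w, a)) ** 2 * abs(np.sum(np.conj(b_dep) * xi_ * b_arr)) ** 2

p_opt = rx_power(xi)                 # equals N * M1**2 at the optimum
```

Any other unit-modulus phase vector yields a smaller value, by the triangle inequality.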
According to the locations of the BS and the first sub-IRS, $u_{\text{I2B},1}^\text{A}$ in (\ref{E9}) is calculated as
\begin{align}
u_{\text{I2B},1}^\text{A}=\frac{ y_\text{BS}-y_1 }{ \left\|{\bf q}_\text{BS}-{\bf q}_1 \right\|}.
\end{align}
Since the user's location is unavailable during the first time block, the phase shift beam of the first sub-IRS in this block, i.e., ${\bm \xi}_1^{(1)}$, is randomly selected.
Hence, in the following, we only focus on the phase shift design of the first sub-IRS in the second time block (i.e., ${\bm \xi}_1^{(2)}$) by invoking the user location estimated in the first time block. As such, we have
\begin{align}
{\bm \xi}_1^{(2)}=\text{diag}\left( {\bf b}_1^* \left(\hat{u}_{\text{U2I},1}^{\text{A},(1)},\hat{v}_{\text{U2I},1}^{\text{A},(1)}\right) \right)
{\bf b}_1\left({u}_{\text{I2B},1}^{\text{D}},{v}_{\text{I2B},1}^{\text{D}} \right),
\end{align}
where the effective AoAs are computed from the location of the first sub-IRS and the user location estimated in the first time block, and the effective AoDs follow from the known locations of the BS and the first sub-IRS:
\begin{align}
& \hat{u}_{\text{U2I},1}^{\text{A},(1)}=\frac{ \hat{y}_{\text{U}}^{(1)}-y_1 }{ \left\|\hat{\bf q}_{\text{U}}^{(1)}-{\bf q}_1 \right\|}, \\
& \hat{v}_{\text{U2I},1}^{\text{A},(1)}=\frac{ \hat{z}_{\text{U}}^{(1)}-z_1 }{ \left\|\hat{\bf q}_{\text{U}}^{(1)}-{\bf q}_1 \right\|}, \\
& {u}_{\text{I2B},1}^{\text{D}}=-\frac{ {y}_{\text{BS}}-y_1 }{ \left\|{\bf q}_{\text{BS}}-{\bf q}_1 \right\|}, \\
& {v}_{\text{I2B},1}^{\text{D}}=-\frac{ {z}_{\text{BS}}-z_1 }{ \left\|{\bf q}_{\text{BS}}-{\bf q}_1 \right\|}.
\end{align}
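For concreteness, the coordinate-to-direction-cosine mapping above can be sketched numerically. All coordinates below are illustrative assumptions, not the simulation values.

```python
# Direction cosines from 3-D node coordinates, mirroring the formulas above.
# Coordinates are illustrative assumptions.
import numpy as np

q_bs = np.array([0.0, 0.0, 20.0])      # BS position (meters)
q_1 = np.array([40.0, 30.0, 5.0])      # first sub-IRS position
q_u_hat = np.array([45.0, 33.0, 0.0])  # user position estimated in block 1

d_u = np.linalg.norm(q_u_hat - q_1)
u_u2i = (q_u_hat[1] - q_1[1]) / d_u    # effective AoA, y-axis direction cosine
v_u2i = (q_u_hat[2] - q_1[2]) / d_u    # effective AoA, z-axis direction cosine

d_b = np.linalg.norm(q_bs - q_1)
u_i2b = -(q_bs[1] - q_1[1]) / d_b      # effective AoD toward the BS
v_i2b = -(q_bs[2] - q_1[2]) / d_b
```

Each pair of direction cosines lies inside the unit disk, as required for a physical propagation direction.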
\subsection{PC Period}
In the PC period, all three sub-IRSs operate in the reflecting mode to assist the uplink data transmission between the user and the BS. Their phase shifts are designed according to the user location estimated in the second time block of the ISAC period.
By substituting (\ref{E8}) and (\ref{E2}) into (\ref{E7}), the received signal at the BS via the whole distributed IRS is
\begin{align} \label{E11}
y(t)=&\sum\limits_{i=1}^{3} \sqrt{\rho} \alpha_{\text{I2B},i} \alpha_{\text{U2I},i} {\bf w}^H {\bf a}\left(u_{\text{I2B},i}^\text{A} \right)
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right){\bm \Theta}_{i}(t) {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) s(t)\\
&+\widetilde{n}_\text{BS}(t), t\in \mathcal{T}_2, \nonumber
\end{align}
where we define $\widetilde{n}_\text{BS}(t) \triangleq {\bf w}^H(t) {\bf n}_\text{BS}(t) $.
Since the distances among the three sub-IRSs are much smaller than the distances between them and the BS, we have the following approximation:
$u_{\text{I2B},i}^\text{A} \approx u_{\text{I2B},1}^\text{A}, i=2,3$. In addition, the number of reflecting elements of the first sub-IRS is much larger than that of the $i$-th sub-IRS ($i=2,3$). Hence, the BS combining vector is designed to combine signals from the direction of the first sub-IRS, i.e., $
{\bf w}=\frac{1}{\sqrt{N}}{\bf a}\left(u_{\text{I2B},1}^\text{A} \right)
$.
As such, (\ref{E11}) becomes
\begin{align}
y(t)=&\sum\limits_{i=1}^{3} \zeta_i
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right){\bm \Theta}_{i} (t) {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) s(t)+\widetilde{n}_\text{BS}(t), t\in \mathcal{T}_2,
\end{align}
where $\zeta_i \triangleq \sqrt{\rho} \alpha_{\text{I2B},i} \alpha_{\text{U2I},i} {\bf a}^H\left(u_{\text{I2B},1}^\text{A} \right) {\bf a}\left(u_{\text{I2B},i}^\text{A} \right)$.
{\color{black}To maximize the received signal power at the BS, we formulate the following optimization problem:
\begin{subequations} \label{E12}
\begin{align}
&\underset{{\bm \xi}_i(t)}{\text{max}} \ \ \left| \sum\limits_{i=1}^{3} \zeta_i
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right){\bm \Theta}_{i}(t) {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) \right|^2,\\
&\text{s.t.} \ \ \ \
| [{\bm \xi}_i(t)]_s|=1,s=1,\cdots,M_i, i=1,2,3. \label{E14}
\end{align}
\end{subequations}
Although the above problem is non-convex, it admits a closed-form solution obtained by exploiting the special structure of its objective function. Specifically, the objective function satisfies the following inequality
\begin{align} \label{inequality}
\left| \sum\limits_{i\!=\!1}^{3} \zeta_i
{\bf b}_i^H (u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} ){\bm \Theta}_{i} (t) {\bf b}_i (u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}) \right|
\! \le \! \sum\limits_{i\!=\!1}^{3} \left| \zeta_i
{\bf b}_i^H(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} ){\bm \Theta}_{i}(t) {\bf b}_i (u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}) \right|,
\end{align}
where the equality holds if and only if
\begin{align} \label{E13}
&\text{angle}\left\{ \zeta_i
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right){\bm \Theta}_{i}(t) {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right)
\right\}
\\
&=
\text{angle}\left\{ \zeta_i
{\bf b}_j^H\left(u_{\text{I2B},j}^\text{D},v_{\text{I2B},j}^\text{D} \right){\bm \Theta}_{j} (t) {\bf b}_j \left(u_{\text{U2I},j}^\text{A},v_{\text{U2I},j}^\text{A}\right)
\right\}, i\ne j, i,j\in\{1,2,3\}. \nonumber
\end{align}
Next, we show that there always exists a solution ${\bm \xi}(t)\triangleq [ {\bm \xi}_1^T(t),{\bm \xi}_2^T(t),{\bm \xi}_3^T(t)]^T$ that satisfies (\ref{inequality}) with equality as well as the phase shift constraint (\ref{E14}).
With (\ref{inequality}), the problem (\ref{E12}) can be transformed into
\begin{subequations}
\begin{align}
&\underset{{\bm \xi}_i(t)}{\text{max}} \ \ \sum\limits_{i=1}^{3} \left| \zeta_i
{\bf b}_i^H\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right){\bm \Theta}_{i} (t) {\bf b}_i \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) \right|^2,\\
&\text{s.t.} \ \ \ \ (\ref{E13}),
|[{\bm \xi}_i(t)]_s|=1,s=1,\cdots,M_i,i=1,2,3.
\end{align}
\end{subequations}
It is easily verified that the optimal ${\bm \xi}_i(t)$ is
\begin{align}
{\bm \xi}_i(t)= \text{diag}\left\{ {\bf b}_i^H \left(u_{\text{U2I},i}^\text{A},v_{\text{U2I},i}^\text{A}\right) \right\} {\bf b}_i\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right) e^{j \phi_i(t)
}, i\in\{1,2,3\},
\end{align}
where $\phi_1(t)=0$, $\phi_2(t)=\text{angle}(\zeta_1)-\text{angle}(\zeta_2)$ and $\phi_3(t)=\text{angle}(\zeta_1)-\text{angle}(\zeta_3)$, such that all three terms are rotated to the phase of the first sub-IRS term.
Using the location information estimated in the second time block of the ISAC period, we design ${\bm \xi}_i(t)$ ($t \in \mathcal{T}_2$) as
\begin{align} \label{E15}
{\bm \xi}_i(t)= \text{diag}\left\{ {\bf b}_i^H \left(\hat{u}_{\text{U2I},i}^{\text{A},(2)},\hat{v}_{\text{U2I},i}^{\text{A},(2)}\right) \right\} {\bf b}_i\left(u_{\text{I2B},i}^\text{D},v_{\text{I2B},i}^\text{D} \right) e^{j \phi_i(t)
}.
\end{align}
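The role of the common phase offsets $\phi_i(t)$ can be illustrated numerically: once every sub-IRS term is rotated to a common phase, the triangle-inequality upper bound in (\ref{inequality}) is met with equality. The complex gains and element counts below are illustrative assumptions.

```python
# Check that rotating each sub-IRS term to a common phase attains the
# triangle-inequality upper bound sum_i |zeta_i| * M_i.
# zeta values and element counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
zeta = rng.normal(size=3) + 1j * rng.normal(size=3)  # cascade gains zeta_i
M = np.array([256.0, 16.0, 16.0])                    # elements per sub-IRS

# With the designed phase shifts, the i-th term equals zeta_i * M_i * exp(j*phi_i).
# One valid choice rotates every term to the phase of the first one:
phi = np.angle(zeta[0]) - np.angle(zeta)             # phi[0] = 0 by construction
aligned = abs(np.sum(zeta * M * np.exp(1j * phi)))
bound = np.sum(np.abs(zeta) * M)                     # triangle-inequality bound
```

Without the offsets, the terms generally add incoherently and the bound is not attained.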
However, since $\zeta_{i}$ is unknown, $\left( \phi_2(t), \phi_3(t)\right)$ cannot be determined directly.
Hence, in the following, we propose a low-complexity bisection-based phase shift beam training algorithm to find the optimal phase tuple $\left( \phi_2(t), \phi_3(t)\right)$ that maximizes the received power at the BS.
As shown in Fig.~\ref{phase_update}, during the first $5$ time slots of the PC period, the IRS generates phase shift beams according to (\ref{E15}) with the phase tuples $\left( \phi_2(t), \phi_3(t)\right), t\in \{T_1+n|n=1,\cdots,5\}$ set to be $\left( \pi, \pi\right)$, $\left( \frac{1}{2}\pi, \frac{1}{2}\pi\right)$, $\left( \frac{1}{2}\pi, \frac{3}{2}\pi\right)$, $\left( \frac{3}{2}\pi, \frac{1}{2}\pi\right)$ and $\left( \frac{3}{2}\pi, \frac{3}{2}\pi\right)$, respectively.
The BS feeds back the phase tuple yielding the largest received power to the IRS every five time slots, based on which the IRS updates the phase tuples $\left( \phi_2(t), \phi_3(t)\right)$ for the next five time slots and generates the corresponding phase shift beams according to (\ref{E15}). Specifically, if the phase tuple $\left( \phi_2(t^\star_k), \phi_3(t^\star_k)\right)$ achieves the largest received power at the BS among the five phase tuples $ \left( \phi_2( t), \phi_3( t)\right),{t} \in \{T_1+5(k-1)+n|n=1,\cdots,5\}$, the phase tuples in the next five time slots
$\left( \phi_2(t), \phi_3(t)\right), t\in \{T_1+5k+n| n=1,\cdots,5\}$ are updated as
\begin{align}
&\left( \phi_2(t^\star_{k}), \phi_3(t^\star_{k}) \right),\\
&\left( \phi_2(t^\star_{k})\pm \frac{\pi}{2^{k+1}}, \phi_3(t^\star_{k}) \pm \frac{\pi}{2^{k+1}} \right),
\end{align}
where the two $\pm$ signs are taken independently, yielding four additional tuples.
For example, as shown in Fig. \ref{phase_update}, we assume the phase tuple $\left( \frac{3}{2}\pi, \frac{1}{2}\pi\right)$ achieves the largest BS receiving power. Then, the phase tuples in the next $5$ time slots are updated as $\left( \frac{3}{2}\pi, \frac{1}{2}\pi\right)$, $\left( \frac{5}{4}\pi, \frac{1}{4}\pi\right)$, $\left( \frac{5}{4}\pi, \frac{3}{4}\pi\right)$, $\left( \frac{7}{4}\pi, \frac{1}{4}\pi\right)$ and $\left( \frac{7}{4}\pi, \frac{3}{4}\pi\right)$.
The detailed process is given by Algorithm \ref{alg1}.
}
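A compact simulation can illustrate the convergence of this search. The sketch below makes an illustrative simplification: the received power is modeled as $|z_1+z_2e^{j\phi_2}+z_3e^{j\phi_3}|^2$ with a dominant first term, mimicking $M_1 \gg M_2, M_3$; the gains $z_i$ are random stand-ins, not values from the paper.

```python
# Bisection-based phase tuple search (simplified model of Algorithm 1).
# z models the three per-sub-IRS cascade terms; the first is dominant since
# the passive sub-IRS has far more elements. Values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
z = (rng.normal(size=3) + 1j * rng.normal(size=3)) * np.array([16.0, 1.0, 1.0])

def power(phi2, phi3):
    return abs(z[0] + z[1] * np.exp(1j * phi2) + z[2] * np.exp(1j * phi3)) ** 2

# First 5 candidate tuples (slots T1+1..T1+5), then refine around the best.
cands = [(np.pi, np.pi), (np.pi / 2, np.pi / 2), (np.pi / 2, 3 * np.pi / 2),
         (3 * np.pi / 2, np.pi / 2), (3 * np.pi / 2, 3 * np.pi / 2)]
step = np.pi / 4
for _ in range(20):                      # feedback rounds (5 slots each)
    best = max(cands, key=lambda p: power(*p))
    cands = [best] + [(best[0] + s2 * step, best[1] + s3 * step)
                      for s2 in (-1.0, 1.0) for s3 in (-1.0, 1.0)]
    step /= 2
best = max(cands, key=lambda p: power(*p))

p_found = power(*best)
p_max = np.sum(np.abs(z)) ** 2           # global optimum: all terms aligned
```

With a dominant first term the landscape is nearly separable in $(\phi_2,\phi_3)$, and the halving step sizes drive the found power toward the aligned optimum.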
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{phase_update.pdf}
\caption{Illustration of $\left( \phi_2(t), \phi_3(t)\right)$ update.}
\label{phase_update}
\end{figure}
\begin{algorithm}
\caption{\color{black} Bisection-Based Phase Shift Beam Training Algorithm}
\label{alg1}
{\color{black}
\begin{algorithmic}
\State {\bf Initialization:} Counting index $k=1$, tolerance $\epsilon>0$, and the phase tuples during the first 5 time slots of the PC period $\left( \phi_2(t), \phi_3(t)\right), t\in \{T_1+n | n=1,\cdots,5\}$ are set as $\left( \pi, \pi\right)$, $\left( \frac{1}{2}\pi, \frac{1}{2}\pi\right)$, $\left( \frac{1}{2}\pi, \frac{3}{2}\pi\right)$, $\left( \frac{3}{2}\pi, \frac{1}{2}\pi\right)$ and $\left( \frac{3}{2}\pi, \frac{3}{2}\pi\right)$.
\Do
\State The IRS generates phase shift beams according to (\ref{E15}) with the phase tuples $\left( \phi_2(t), \phi_3(t)\right), t\in \{T_1+5(k-1)+n | n=1,\cdots,5\}$.
\State Obtain the received signal power at the BS as $\| {\bf y}\left( \phi_2( t), \phi_3( t)\right) \|^2,{t} \in \{T_1+5(k-1)+n|n=1,\cdots,5\}$.
\State Find the phase tuple with the largest received power
\begin{align}
\left( \phi_2(t^\star_k), \phi_3(t^\star_k)\right) =\text{arg} \underset{ \left( \phi_2(t), \phi_3(t)\right),{t} \in \{T_1+5(k-1)+n|n=1,\cdots,5\} }{ \text{max} } \| { \bf y}\left( \phi_2( t), \phi_3( t)\right) \|^2.
\end{align}
\State Let $k\longleftarrow k+1$.
\State Update the phase tuples in the next 5 time slots $\left( \phi_2(t), \phi_3(t)\right), t\in \{T_1+5(k-1)+n| n=1,\cdots,5\}$ as
\begin{align}
&\left( \phi_2(t^\star_{k-1}), \phi_3(t^\star_{k-1}) \right),\\
&\left( \phi_2(t^\star_{k-1})\pm \frac{\pi}{2^{k}}, \phi_3(t^\star_{k-1}) \pm \frac{\pi}{2^{k}} \right).
\end{align}
\doWhile{ $\| {\bf y}\left( \phi_2(t^\star_{k-1}), \phi_3(t^\star_{k-1}) \right)\|^2-\|{\bf y}\left( \phi_2(t^\star_{k-2}), \phi_3(t^\star_{k-2}) \right) \|^2 > \epsilon$ }
\State In the remaining time slots $t\in \{T_1+5(k-1), T_1+5(k-1)+1,...,T_1+T_2\}$, design the phase shift beams as (\ref{E15}) with $\left( \phi_2(t), \phi_3(t) \right)=\left( \phi_2(t^\star_{k-1}), \phi_3(t^\star_{k-1}) \right)$ .
\end{algorithmic}
}
\end{algorithm}
{\color{black}
\begin{remark}
The computational complexity of the proposed beamforming algorithm is mainly determined by the calculation of (\ref{E15}). As such, the complexity is $\mathcal{O}\left(\sum\limits_{i=1}^{3}M_i^2 \right) $.
\end{remark}
}
{\color{black}
\section{Extension of the proposed IRS-based ISAC framework to more general cases}
\subsection{General Channel Model}
The proposed location sensing scheme, developed under the LOS channel model, can be easily extended to the general channel model with both LOS and NLOS paths. By applying the proposed AOA estimation scheme (see Section \ref{III.A}), the AOA pairs corresponding to both the LOS and NLOS paths between the user and each semi-passive sub-IRS can be estimated. Then, according to these estimated AOA pairs, their corresponding path losses can also be obtained. Since the LOS path has the smallest path loss, we can identify the AOA pair corresponding to the LOS path, based on which the user location can be determined by invoking the results provided in Section \ref{III.B}. In addition, by ignoring the weak NLOS components, our proposed beamforming algorithms based on the LOS channel model can be directly applied to the general mmWave channel model, albeit at the cost of some performance loss.
\subsection{Multi-User Case}
Although the proposed IRS-based ISAC framework considers only the single-user case, it can be extended to the multi-user case by scheduling multiple users over different time-frequency blocks. For each scheduled user, communication and location sensing are then conducted simultaneously over the same time-frequency resources.
As such, the proposed location sensing scheme and beamforming algorithms for the single-user case can be directly applied to the multi-user case.
}
\section{Simulation Results} \label{s4}
In this section, we provide simulation results to demonstrate the effectiveness of the proposed ISAC transmission protocol. The simulation setup is shown in Fig.~\ref{simulation_model}, where the user is on the horizontal floor,
the BS is $20$ meters (m) above the horizontal floor, and the three sub-IRSs are $5$~m, $7$~m and $8$~m above the horizontal floor, respectively.
The distances from the BS to the second sub-IRS and from the second sub-IRS to the user are set to be $d_{\text{B2I},2}=50$ m and $d_{\text{I2U},2}=6$ m, respectively.
The path loss exponents from the IRS to the BS, from the user to the IRS, and from one sub-IRS to the others are set as $2.3$, $2.2$ and $2.1$, respectively. The path loss at the reference distance of $1$~m is set as $30$~dB.
Unless otherwise specified, the following setup is used: $N=8$, $M_1=16 \times 16$, $M_2=M_3=M_\text{semi}=4 \times 4$, $T=1200$, $T_1=120$, $\tau_1=20$, $\rho=20$~dBm and noise power $\sigma_0^2=-80$~dBm.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{simulation_model.pdf}
\caption{Simulation setup (top view).}
\label{simulation_model}
\end{figure}
\subsection{Performance of Location Sensing}
Without loss of generality, we focus on the performance of location sensing during the first time block of the ISAC period. { \color{black} To this end, we adopt the root mean square error (RMSE) of the estimated user's location as the performance metric, given by
$\varepsilon=\sqrt{ \mathbb{E}\left\{ \left\| \hat{\bf q}_{\text{U}}^{(1)}-{\bf q}_{\text{U}}\right\|^2\right\} }$.}
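As a toy illustration of this metric (assuming, purely for this sketch, an unbiased estimator with i.i.d. Gaussian error of $1$ cm per axis, which is not the paper's estimator), the RMSE can be approximated by Monte Carlo averaging:

```python
# Monte Carlo evaluation of the localization RMSE metric.
# Estimator model (truth + 1 cm i.i.d. Gaussian error per axis) is an
# illustrative assumption only.
import numpy as np

rng = np.random.default_rng(3)
q_true = np.array([55.0, 8.0, 0.0])                  # user location (meters)
sigma = 0.01                                         # 1 cm error std per axis
q_hat = q_true + sigma * rng.normal(size=(100000, 3))

rmse = np.sqrt(np.mean(np.sum((q_hat - q_true) ** 2, axis=1)))
# Analytically, rmse -> sigma * sqrt(3) ~= 0.0173 m for this error model.
```
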
Fig.~\ref{main1} shows the RMSE of the estimated user location under different transmit powers. As can be readily seen, the RMSE decreases as the transmit power increases. Moreover, increasing the number of semi-passive elements can significantly reduce the required transmit power. For instance, to achieve a positioning accuracy of $10^{-2}$ m, the required transmit power drops from about $22$ dBm to $10$ dBm when the number of semi-passive elements (per semi-passive sub-IRS) increases from 16 to 36.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main1.pdf}
\caption{\color{black}RMSE of the estimated user location.}
\label{main1}
\end{figure}
Fig.~\ref{main2} shows the impact of the sensing time length (i.e., $\tau_1$) on the RMSE. Since collecting more data helps suppress the adverse effect of noise, the location sensing accuracy improves with the sensing time. In addition, although increasing the number of passive reflecting elements causes more interference to the semi-passive sub-IRSs, which perform location sensing during the ISAC period, the location sensing accuracy is not affected. This is because the proposed location sensing method can effectively eliminate the interference from the passive sub-IRS.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main2.pdf}
\caption{\color{black}Impact of sensing time on the location sensing accuracy.}
\label{main2}
\end{figure}
Fig.~\ref{main3} shows the impact of the number of semi-passive elements (per semi-passive sub-IRS) on the location sensing accuracy. As can be seen, the positioning accuracy improves as the number of semi-passive elements increases. In addition, a high positioning accuracy can be achieved even with a small number of semi-passive elements. For instance, for a given sensing time $\tau_1=30$, millimeter-level positioning accuracy is achieved with only $25$ semi-passive elements (per semi-passive sub-IRS).
Moreover, increasing the number of semi-passive elements can shorten the required sensing time. For example, for a given positioning accuracy of $5$ mm, the required sensing time (i.e., $\tau_1$) drops from $30$ to $10$ when the number of semi-passive elements (per semi-passive sub-IRS) increases from $16$ to $25$.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main3.pdf}
\caption{\color{black}Impact of the number of semi-passive elements on the location sensing accuracy.}
\label{main3}
\end{figure}
Fig.~\ref{main4} presents the RMSEs under different distances between the IRS and the user. The location sensing performance degrades as the user-IRS distance increases, since the signals received by the two semi-passive sub-IRSs become weaker. This degradation can be compensated by adding more semi-passive elements: by increasing the number of semi-passive elements (per semi-passive sub-IRS) from $25$ to $36$, an RMSE of $10^{-2}$ m is maintained even as the user-IRS distance increases from $12$ m to $16$ m.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main4.pdf}
\caption{\color{black}Impact of the IRS-user distance on the location sensing accuracy.}
\label{main4}
\end{figure}
\subsection{Performance of the Proposed Beamforming Algorithms}
In this subsection, we provide numerical results to verify the effectiveness of the proposed beamforming algorithms in the ISAC and PC periods, respectively.
Fig.~\ref{main5} presents the performance of the proposed beamforming scheme in the ISAC period, where the average achievable rate is defined as
{\color{black}
\begin{align}
\bar{R}_\text{ISAC}=\frac{1}{T_1} \sum\limits_{t=1}^{T_1} R (t).
\end{align}
}
The optimal scheme with perfect CSI and the random scheme with randomly generated phase shifts are presented as two benchmarks. The proposed beamforming scheme performs much better than the random scheme, and achieves similar performance to the optimal scheme. {\color{black}This is because the optimal beamforming is determined by location information, which can be accurately estimated by using the proposed location sensing scheme.} Moreover, as the transmit power increases, the gap between the proposed scheme and the optimal scheme gradually vanishes. Also, increasing the number of semi-passive elements improves the performance of the proposed scheme, due to more accurate location sensing.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main5.pdf}
\caption{Performance of the proposed beamforming scheme in the ISAC period.}
\label{main5}
\end{figure}
Fig.~\ref{main6} presents the performance of the proposed beamforming scheme in the PC period, where the average achievable rate is defined as
{\color{black}
\begin{align}
\bar{R}_\text{PC}=\frac{1}{T_2} \sum\limits_{t=T_1+1}^{T_1+T_2} R (t).
\end{align}
}
For comparison, the performance of the AO scheme with perfect CSI \cite{qingqing1} and the random scheme with randomly generated phase shifts are presented.
The proposed beamforming scheme is superior to the random scheme, and achieves almost the same performance as the AO beamforming scheme with perfect CSI. The performance of all three beamforming schemes improves with the number of passive reflecting elements, due to the increased beamforming gain. Also, adding more semi-passive elements improves the performance of these beamforming schemes, especially when the number of passive reflecting elements is small.
It is worth noting that the performance improvement of the proposed beamforming scheme is due to both the increased beamforming gain and location sensing accuracy.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main6.pdf}
\caption{Performance of the proposed beamforming scheme in the PC period.}
\label{main6}
\end{figure}
\subsection{Performance of the Proposed ISAC Transmission Protocol}
Finally, we investigate the performance of the proposed ISAC transmission protocol, and define the average achievable rate as
{\color{black}
\begin{align}
\bar{R}=\frac{1}{T_1+T_2} \sum\limits_{t=1}^{T_1+T_2} R (t).
\end{align}
}
Fig.~\ref{main7} shows the average achievable rate of the proposed ISAC transmission protocol under different ratios of the sensing time to the whole transmission time (i.e., $T_1/T$), where we set $\tau_1:T_1=\frac{1}{10}$ and $T=2000$. The optimal ratio of the sensing time to the whole transmission time decreases with the transmit power. For a low transmit power of $0$ dBm, the optimal ratio is about 1, which indicates that the two semi-passive sub-IRSs should always operate in the sensing mode. As the transmit power increases to $10$ dBm, the optimal ratio drops to 0.2. For the highest transmit power of $20$ dBm, the optimal ratio approaches 0. This is because the sensing accuracy achieved with a short sensing time is already high enough in this case, and the two semi-passive sub-IRSs should operate in the reflecting mode to assist data transmission.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main7.pdf}
\caption{Performance of the proposed ISAC transmission protocol with different $T_1$/$T$.}
\label{main7}
\end{figure}
Fig.~\ref{main8} shows the average achievable rate of the proposed ISAC transmission protocol with different ratios of $\tau_1$ to $T_1$ (i.e., $\tau_1/T_1$), where we assume $T_1:T=\frac{1}{10}$ and $T=2000$. For all three transmit power configurations, it is desirable to allocate only a small portion of time slots to the first time block. This is because during the first time block, the BS does not have any CSI to design the IRS phase shifts, and the corresponding achievable rate is very low. More time slots should be allocated to the second time block, where a much higher achievable rate can be achieved by properly designing the IRS phase shifts according to the user location estimated during the first time block.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main8.pdf}
\caption{Performance of the proposed ISAC transmission protocol with different $\tau_1$/$T_1$.}
\label{main8}
\end{figure}
{\color{black}
Fig.~\ref{main9} compares the proposed ISAC protocol with a benchmark protocol, in which the location sensing and communication tasks are conducted over separate time resources. One coherence block consists of two periods: the first period is dedicated to pure location sensing with the assistance of the two semi-passive sub-IRSs, while the second period is dedicated to pure communication with the BS with the assistance of all three sub-IRSs.
As can be readily seen, the proposed ISAC protocol achieves its best performance when the time allocation ratio $T_1/T$ is $0.6$, while the benchmark protocol achieves its best performance when the time allocation ratio is $0.1$. Moreover, the optimal rate achieved by the proposed ISAC protocol is much higher than that achieved by the benchmark protocol. This is because our proposed ISAC protocol allows the location sensing task to be carried out without occupying any time resources of communication.
\begin{figure}[!ht]
\centering
\includegraphics[width=4.5in]{main9.pdf}
\caption{\color{black}Comparison between the proposed ISAC protocol and the benchmark protocol.}
\label{main9}
\end{figure}
}
\section{Conclusion} \label{s5}
In this paper, we construct an ISAC system by introducing a new IRS architecture (i.e., the distributed semi-passive IRS architecture) into the communication system. In the proposed IRS-based ISAC system, location sensing and data transmission are conducted at the same time, over the same spectrum and time resources. The framework of the IRS-based ISAC system is designed, which includes the transmission protocol, location sensing and beamforming optimization. Specifically, we consider a coherence block in which the user sends communication signals to the BS. The coherence block is composed of the ISAC period, with two time blocks, and the PC period. During the ISAC period, uplink data transmission assisted by the passive sub-IRS and location sensing at the two semi-passive sub-IRSs are conducted simultaneously. The location estimated in the first time block is used for beamforming design in the second time block. During the PC period, the whole distributed IRS operates in the reflecting mode to enhance data transmission, and the location obtained in the second time block of the ISAC period is used for the phase shift design.
Simulation results show that millimeter-level positioning accuracy can be achieved with the proposed location sensing scheme, and the positioning accuracy can be further improved by increasing the number of semi-passive elements or the sensing time. Although only imperfect location information is available, the proposed beamforming scheme for the ISAC period achieves almost the same performance as the optimal beamforming scheme with perfect CSI, and the proposed beamforming scheme for the PC period performs similarly to the AO beamforming scheme \cite{qingqing1} assuming perfect CSI. By investigating the trade-off between the sensing and communication performance, we find that increasing the sensing time always improves the sensing performance, but does not always degrade the communication performance.
\bibliographystyle{IEEEtran}